WorldWideScience

Sample records for time segmentation approach

  1. A deep learning approach for real time prostate segmentation in freehand ultrasound guided biopsy.

    Science.gov (United States)

    Anas, Emran Mohammad Abu; Mousavi, Parvin; Abolmaesumi, Purang

    2018-06-01

    Targeted prostate biopsy, incorporating multi-parametric magnetic resonance imaging (mp-MRI) and its registration with ultrasound, is currently the state-of-the-art in prostate cancer diagnosis. The registration process in most targeted biopsy systems today relies heavily on accurate segmentation of ultrasound images. Automatic or semi-automatic segmentation is typically performed offline prior to the start of the biopsy procedure. In this paper, we present a deep neural network based real-time prostate segmentation technique during the biopsy procedure, hence paving the way for dynamic registration of mp-MRI and ultrasound data. In addition to using convolutional networks for extracting spatial features, the proposed approach employs recurrent networks to exploit the temporal information among a series of ultrasound images. One of the key contributions in the architecture is to use residual convolution in the recurrent networks to improve optimization. We also exploit recurrent connections within and across different layers of the deep networks to maximize the utilization of the temporal information. Furthermore, we perform dense and sparse sampling of the input ultrasound sequence to make the network robust to ultrasound artifacts. Our architecture is trained on 2,238 labeled transrectal ultrasound images, with an additional 637 and 1,017 unseen images used for validation and testing, respectively. We obtain a mean Dice similarity coefficient of 93%, a mean surface distance error of 1.10 mm and a mean Hausdorff distance error of 3.0 mm. A comparison of the reported results with those of a state-of-the-art technique indicates statistically significant improvement achieved by the proposed approach. Copyright © 2018 Elsevier B.V. All rights reserved.
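The Dice similarity coefficient reported above can be computed directly from a pair of binary masks. A minimal NumPy sketch (the example masks are hypothetical, not from the paper's data):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two overlapping 4x4 masks (hypothetical ground truth vs. prediction).
gt = np.zeros((4, 4), int); gt[1:3, 1:3] = 1      # 4 pixels
pred = np.zeros((4, 4), int); pred[1:3, 1:4] = 1  # 6 pixels, 4 shared
print(dice_coefficient(gt, pred))  # 2*4/(4+6) = 0.8
```

Dice ranges from 0 (no overlap) to 1 (identical masks), so the 93% mean reported above indicates near-complete overlap with the labeled contours.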

  2. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    Science.gov (United States)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is connected with the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model is applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on its initial conditions, a Genetic Algorithm was developed, characterized by mutation, elitism, and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this last issue is complex and influences the entire analysis, a Multi Response Permutation Procedure (MRPP; Mielke et al., 1981) was added: it tests the model with K+1 states (where K is the state number of the best model) whenever its likelihood is close to that of the K-state model. Finally, an evaluation of the performance of GAMM, applied as a break-detection method in the homogenization of climate time series, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.
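The best-state-sequence step of the method (Viterbi decoding in a left-to-right HMM) can be sketched as follows; the transition, emission, and initial probabilities here are illustrative toy values, not parameters estimated by GAMM:

```python
import numpy as np

def viterbi(obs, log_A, log_B, log_pi):
    """Most likely state path for a discrete-emission HMM (log domain)."""
    n_states, T = log_A.shape[0], len(obs)
    delta = np.full((T, n_states), -np.inf)
    psi = np.zeros((T, n_states), int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A  # scores[i, j]: from state i to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Left-to-right transition matrix: a state can only stay or advance.
A = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
B = np.array([[0.95, 0.05],   # state 0 mostly emits symbol 0
              [0.05, 0.95],   # state 1 mostly emits symbol 1
              [0.95, 0.05]])  # state 2 mostly emits symbol 0
pi = np.array([1.0, 0.0, 0.0])
obs = [0, 0, 1, 1, 0, 0]
with np.errstate(divide="ignore"):  # log(0) -> -inf for forbidden transitions
    path = viterbi(obs, np.log(A), np.log(B), np.log(pi))
print(path)  # [0, 0, 1, 1, 2, 2]: the series is cut into three regimes
```

The decoded state path directly induces the segmentation: each maximal run of a single state is one homogeneous segment.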

  3. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach

    Science.gov (United States)

    Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas

    2010-03-01

Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, thus a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining an active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various terms based on gradient information, intensity distributions, and regional properties are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.

  4. A NEW APPROACH TO SEGMENT HANDWRITTEN DIGITS

    NARCIS (Netherlands)

    Oliveira, L.S.; Lethelier, E.; Bortolozzi, F.; Sabourin, R.

    2004-01-01

    This article presents a new segmentation approach applied to unconstrained handwritten digits. The novelty of the proposed algorithm is based on the combination of two types of structural features in order to provide the best segmentation path between connected entities. In this article, we first

  5. Innovative visualization and segmentation approaches for telemedicine

    Science.gov (United States)

    Nguyen, D.; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet

    2014-09-01

In health care applications, high-quality, high-volume image data are obtained, managed, stored, and communicated through integrated devices. In this paper we propose several promising methods that can assist physicians in image data processing and communication. We design a new semi-automated segmentation approach for radiological images, such as CT and MRI, to clearly identify the areas of interest. This approach combines the advantages of both region-based and boundary-based methods. It consists of three key steps: coarse segmentation using a fuzzy affinity and homogeneity operator, image division and reclassification using the Voronoi diagram, and refinement of boundary lines using the level set model.

  6. Segmentation of Nonstationary Time Series with Geometric Clustering

    DEFF Research Database (Denmark)

    Bocharov, Alexei; Thiesson, Bo

    2013-01-01

We introduce a non-parametric method for segmentation in regime-switching time-series models. The approach is based on spectral clustering of target-regressor tuples and derives a switching regression tree, where regime switches are modeled by oblique splits. Such models can be learned efficiently...... from data, where clustering is used to propose one single split candidate at each split level. We use the class of ART time series models to serve as illustration, but because of the non-parametric nature of our segmentation approach, it readily generalizes to a wide range of time-series models that go...

  7. AUTOMOTIVE MARKET- FROM A GENERAL TO A MARKET SEGMENTATION APPROACH

    Directory of Open Access Journals (Sweden)

    Liviana Andreea Niminet

    2013-12-01

The automotive market and its corresponding industry are undoubtedly of utmost importance, and therefore proper market segmentation is crucial for market players, potential competitors, and customers alike. Time has proved that economic analyses of this market often show flaws in determining the relevant market, relying solely or mainly on the geographic aspect and disregarding the importance of segments on the automotive market. For these reasons we propose a new approach to the automotive market, proving the importance of proper market segmentation and defining the strategic groups within the automotive market.

  8. Leisure market segmentation : an integrated preferences/constraints-based approach

    NARCIS (Netherlands)

    Stemerding, M.P.; Oppewal, H.; Beckers, T.A.M.; Timmermans, H.J.P.

    1996-01-01

    Traditional segmentation schemes are often based on a grouping of consumers with similar preference functions. The research steps, ultimately leading to such segmentation schemes, are typically independent. In the present article, a new integrated approach to segmentation is introduced, which

  9. Pyramidal approach to license plate segmentation

    Science.gov (United States)

    Postolache, Alexandru; Trecat, Jacques C.

    1996-07-01

Car identification is a goal in traffic control, transport planning, travel time measurement, managing parking lot traffic, and so on. Most car identification algorithms contain a standalone plate segmentation process followed by a plate contents reading. A pyramidal algorithm for license plate segmentation, looking for textured regions, has been developed on a PC-based system running Unix. It can be used directly in applications not requiring real time. When input images are relatively small, real-time performance is in fact achieved by the algorithm. When using large images, porting the algorithm to special digital signal processors can easily preserve real-time performance. Experimental results, for stationary and moving cars in outdoor scenes, showed high accuracy and high scores in detecting the plate. The algorithm also deals with cases where many character strings are present in the image, not only the one corresponding to the plate. This is done by means of a constrained classification of textured regions.
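The pyramidal idea, analyzing progressively coarser versions of the image so textured plate regions can be localized cheaply before refining at full resolution, can be sketched with 2x2 mean pooling; the block-reduction operator is an assumption, not the paper's exact construction:

```python
import numpy as np

def build_pyramid(img, levels):
    """Image pyramid by 2x2 mean pooling; coarse levels let a texture
    detector scan far fewer pixels before refining at finer levels."""
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyramid[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2  # crop to even size
        a = a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(a)
    return pyramid

img = np.arange(64, dtype=float).reshape(8, 8)
pyr = build_pyramid(img, 3)
print([p.shape for p in pyr])  # [(8, 8), (4, 4), (2, 2)]
```

A candidate region flagged at the coarsest level maps back to a 4x-larger window one level down, so only those windows need re-examination at full resolution.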

  10. Stability of latent class segments over time

    DEFF Research Database (Denmark)

    Mueller, Simone

    2011-01-01

    Dynamic stability, as the degree to which identified segments at a given time remain unchanged over time in terms of number, size and profile, is a desirable segment property which has received limited attention so far. This study addresses the question to what degree latent classes identified from...... logit model suggests significant changes in the price sensitivity and the utility from environmental claims between both experimental waves. A pooled scale adjusted latent class model is estimated jointly over both waves and the relative size of latent classes is compared across waves, resulting...... in significant differences in the size of two out of seven classes. These differences can largely be accounted for by the changes on the aggregated level. The relative size of latent classes is correlated at 0.52, suggesting a fair robustness. An ex-post characterisation of latent classes by behavioural...

  11. Efficient Algorithms for Segmentation of Item-Set Time Series

    Science.gov (United States)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
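The dynamic-programming scheme outlined above can be sketched on a tiny item-set series. Here the measure function is taken to be set intersection and the segment difference is the total symmetric-difference size, one concrete choice among those the paper allows; the example series is invented:

```python
def segment_cost(points, i, j):
    """Segment difference for points[i:j]: the representative item set is
    the intersection of the points' item sets (one possible measure
    function); the cost is the total symmetric difference to each point."""
    rep = set.intersection(*points[i:j])
    return sum(len(rep ^ p) for p in points[i:j])

def optimal_segmentation(points, k):
    """DP over boundaries: best[j][m] = min cost of first j points in m segments."""
    n, INF = len(points), float("inf")
    best = [[INF] * (k + 1) for _ in range(n + 1)]
    back = [[0] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for j in range(1, n + 1):
        for m in range(1, min(k, j) + 1):
            for i in range(m - 1, j):
                c = best[i][m - 1] + segment_cost(points, i, j)
                if c < best[j][m]:
                    best[j][m], back[j][m] = c, i
    cuts, j = [], n                       # recover segment end boundaries
    for m in range(k, 0, -1):
        cuts.append(j)
        j = back[j][m]
    return sorted(cuts)

pts = [{'a'}, {'a'}, {'a', 'b'}, {'c'}, {'c', 'd'}, {'c'}]
print(optimal_segmentation(pts, 2))  # [3, 6]: split between the 'a' and 'c' eras
```

The cut after point 3 yields segments whose representative sets ({'a'} and {'c'}) deviate from their member sets by only one item each, the minimum over all two-segment partitions.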

  12. A Rough Set Approach for Customer Segmentation

    Directory of Open Access Journals (Sweden)

    Prabha Dhandayudam

    2014-04-01

Customer segmentation is a process that divides a business's total customers into groups according to the diversity of their purchasing behavior and characteristics. The data mining clustering technique can be used to accomplish this customer segmentation. This technique clusters the customers in such a way that the customers in one group behave similarly when compared to the customers in other groups. Customer-related data are categorical in nature. However, clustering algorithms for categorical data are few and are unable to handle uncertainty. Rough set theory (RST) is a mathematical approach that handles uncertainty and is capable of discovering knowledge from a database. This paper proposes a new clustering technique called MADO (Minimum Average Dissimilarity between Objects) for categorical data based on elements of RST. The proposed algorithm is compared with other RST-based clustering algorithms, such as MMR (Min-Min Roughness), MMeR (Min Mean Roughness), SDR (Standard Deviation Roughness), SSDR (Standard deviation of Standard Deviation Roughness), and MADE (Maximal Attributes DEpendency). The results show that, for the real customer data considered, the MADO algorithm achieves clusters with higher cohesion, lower coupling, and less computational complexity than the above-mentioned algorithms. The proposed algorithm has also been tested on a synthetic data set to prove that it is also suitable for high-dimensional data.
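The notion of average dissimilarity between objects that gives MADO its name can be illustrated with the standard simple-matching measure for categorical records. This is only the underlying dissimilarity, not the full MADO algorithm, and the customer records below are invented:

```python
def simple_matching_dissimilarity(x, y):
    """Fraction of categorical attributes on which two records disagree."""
    return sum(a != b for a, b in zip(x, y)) / len(x)

def average_dissimilarity(record, cluster):
    """Mean dissimilarity of one record to every record in a cluster."""
    return sum(simple_matching_dissimilarity(record, r) for r in cluster) / len(cluster)

# Hypothetical customer records: (spend level, visit frequency, payment mode).
cluster = [("high", "weekly", "card"), ("high", "weekly", "cash")]
print(average_dissimilarity(("high", "monthly", "card"), cluster))  # (1/3 + 2/3) / 2 = 0.5
```

A clustering built on this measure would assign each record to the cluster minimizing its average dissimilarity, which is the general idea MADO refines with rough-set machinery.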

  13. An Efficient Integer Coding and Computing Method for Multiscale Time Segment

    Directory of Open Access Journals (Sweden)

    TONG Xiaochong

    2016-12-01

This article reviews the problems and current status of time segment coding and proposes a new coding scheme: multi-scale time segment integer coding (MTSIC). The approach exploits the tree structure and size ordering inherent in the integers to capture the relationships among multi-scale time segments (order, containment, intersection, etc.), yielding a unified integer coding for multi-scale time. Building on this foundation, the research also develops methods for computing the temporal relationships between MTSIC codes, to support efficient segment-based calculation and querying, and gives a preliminary discussion of the applications and prospects of MTSIC. Tests indicate that MTSIC is convenient and reliable to implement, converts easily to and from the traditional representation, and achieves very high efficiency in querying and computation.
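One way such an integer coding can work is heap-style numbering of a binary multi-scale hierarchy, under which ordering and containment reduce to integer arithmetic. This is a sketch of the general idea only; MTSIC's actual scheme is defined in the paper:

```python
def encode(level, index):
    """One integer per time segment: heap-style numbering of a binary
    multi-scale hierarchy (code 1 = whole span; children of c are 2c, 2c+1).
    Requires 0 <= index < 2**level."""
    return (1 << level) + index

def contains(a, b):
    """Does segment a contain segment b? True iff a is an ancestor of b
    (or equal), found by repeatedly halving the larger code."""
    while b > a:
        b >>= 1
    return a == b

root = encode(0, 0)   # the whole time span
q1   = encode(2, 0)   # first quarter, two levels down
h2   = encode(1, 1)   # second half
print(contains(root, q1), contains(h2, q1))  # True False
```

Because sibling order and ancestry are both recoverable from the code alone, relations like order and containment never require decoding back to explicit time intervals.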

  14. A combined segmenting and non-segmenting approach to signal quality estimation for ambulatory photoplethysmography

    International Nuclear Information System (INIS)

    Wander, J D; Morris, D

    2014-01-01

    Continuous cardiac monitoring of healthy and unhealthy patients can help us understand the progression of heart disease and enable early treatment. Optical pulse sensing is an excellent candidate for continuous mobile monitoring of cardiovascular health indicators, but optical pulse signals are susceptible to corruption from a number of noise sources, including motion artifact. Therefore, before higher-level health indicators can be reliably computed, corrupted data must be separated from valid data. This is an especially difficult task in the presence of artifact caused by ambulation (e.g. walking or jogging), which shares significant spectral energy with the true pulsatile signal. In this manuscript, we present a machine-learning-based system for automated estimation of signal quality of optical pulse signals that performs well in the presence of periodic artifact. We hypothesized that signal processing methods that identified individual heart beats (segmenting approaches) would be more error-prone than methods that did not (non-segmenting approaches) when applied to data contaminated by periodic artifact. We further hypothesized that a fusion of segmenting and non-segmenting approaches would outperform either approach alone. Therefore, we developed a novel non-segmenting approach to signal quality estimation that we then utilized in combination with a traditional segmenting approach. Using this system we were able to robustly detect differences in signal quality as labeled by expert human raters (Pearson’s r = 0.9263). We then validated our original hypotheses by demonstrating that our non-segmenting approach outperformed the segmenting approach in the presence of contaminated signal, and that the combined system outperformed either individually. Lastly, as an example, we demonstrated the utility of our signal quality estimation system in evaluating the trustworthiness of heart rate measurements derived from optical pulse signals. (paper)
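A non-segmenting quality feature of the kind hypothesized above can be computed without detecting individual beats, for example from the autocorrelation strength in the plausible pulse-rate band. This is an illustrative feature under assumed band limits (0.5-3 Hz), not the paper's actual feature set:

```python
import numpy as np

def periodicity_quality(x, fs, fmin=0.5, fmax=3.0):
    """Non-segmenting quality index: strength of the largest normalized
    autocorrelation value at lags corresponding to plausible pulse rates."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
    ac /= ac[0]
    lo, hi = int(fs / fmax), int(fs / fmin)            # lag range for the band
    return float(ac[lo:hi].max())

fs = 50
t = np.arange(0, 10, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)                    # strong 1.2 Hz pulse
noisy = np.random.default_rng(1).normal(size=t.size)   # motion-corrupted stand-in
print(periodicity_quality(clean, fs) > periodicity_quality(noisy, fs))  # True
```

Because the feature looks only at periodicity in a band, it degrades gracefully when individual beats are unfindable, which is exactly the regime where segmenting (beat-detection) approaches fail.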

  15. Social discourses of healthy eating. A market segmentation approach.

    Science.gov (United States)

    Chrysochou, Polymeros; Askegaard, Søren; Grunert, Klaus G; Kristensen, Dorthe Brogård

    2010-10-01

This paper proposes a framework of discourses regarding consumers' healthy eating as a useful conceptual scheme for market segmentation purposes. The objectives are: (a) to identify the appropriate number of health-related segments based on the underlying discursive subject positions of the framework, (b) to validate and further describe the segments based on their socio-demographic characteristics and attitudes towards healthy eating, and (c) to explore differences across segments in types of associations with food and health, as well as perceptions of food healthfulness. 316 Danish consumers participated in a survey that included measures of the underlying subject positions of the proposed framework, followed by a word association task that aimed to explore types of associations with food and health, and perceptions of food healthfulness. A latent class clustering approach revealed three consumer segments: the Common, the Idealists and the Pragmatists. Based on the addressed objectives, differences across the segments are described and implications of findings are discussed.

  16. Automated Urban Travel Interpretation: A Bottom-up Approach for Trajectory Segmentation

    Directory of Open Access Journals (Sweden)

    Rahul Deb Das

    2016-11-01

Understanding travel behavior is critical for effective urban planning as well as for enabling various context-aware service provisions to support mobility as a service (MaaS). Both applications rely on the sensor traces generated by travellers' smartphones. These traces can be used to interpret travel modes, both for generating automated travel diaries and for real-time travel mode detection. Current approaches segment a trajectory by certain criteria, e.g., a drop in speed. However, these criteria are heuristic, and, thus, existing approaches are subjective and involve significant vagueness and uncertainty in activity transitions in space and time. Also, segmentation approaches are not suited for real-time interpretation of open-ended segments, and cannot cope with the frequent gaps in the location traces. In order to address all these challenges, a novel state-based bottom-up approach is proposed. This approach assumes a fixed atomic segment of a homogeneous state, instead of an event-based segment, and iterates progressively until a new state is found. The research investigates how an atomic state-based approach can be developed so as to work in real-time, near-real-time and offline modes and in different environmental conditions with varying quality of sensor traces. The results show the proposed bottom-up model outperforms the existing event-based segmentation models in terms of adaptivity, flexibility, accuracy and richness of information delivery pertinent to automated travel behavior interpretation.
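The atomic-segment idea can be sketched as fixed windows classified into a state, with a segment growing for as long as the state persists. The speed threshold and the two-state labels below are simplifying assumptions, far cruder than the paper's state inference:

```python
def atomic_states(speeds, window, walk_thresh=2.0):
    """Classify fixed-length atomic segments by mean speed (m/s); a
    stand-in for the paper's richer per-window state inference."""
    states = []
    for i in range(0, len(speeds) - window + 1, window):
        mean = sum(speeds[i:i + window]) / window
        states.append("walk" if mean < walk_thresh else "vehicle")
    return states

def merge_segments(states):
    """Bottom-up growth: extend the current segment while the state is
    unchanged; emit (state, run length in atomic segments) pairs."""
    runs = []
    for s in states:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return [(s, n) for s, n in runs]

# Hypothetical GPS-derived speeds: walking, a vehicle ride, walking again.
speeds = [1.2, 1.0, 1.4, 1.1, 8.0, 9.5, 10.1, 9.0, 1.3, 1.2]
print(merge_segments(atomic_states(speeds, window=2)))
# [('walk', 2), ('vehicle', 2), ('walk', 1)]
```

Because segments are grown window by window, the last segment is always open-ended and can be reported in real time, unlike event-based methods that must wait to observe a closing event.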

  17. Automated Urban Travel Interpretation: A Bottom-up Approach for Trajectory Segmentation.

    Science.gov (United States)

    Das, Rahul Deb; Winter, Stephan

    2016-11-23

Understanding travel behavior is critical for effective urban planning as well as for enabling various context-aware service provisions to support mobility as a service (MaaS). Both applications rely on the sensor traces generated by travellers' smartphones. These traces can be used to interpret travel modes, both for generating automated travel diaries and for real-time travel mode detection. Current approaches segment a trajectory by certain criteria, e.g., a drop in speed. However, these criteria are heuristic, and, thus, existing approaches are subjective and involve significant vagueness and uncertainty in activity transitions in space and time. Also, segmentation approaches are not suited for real-time interpretation of open-ended segments, and cannot cope with the frequent gaps in the location traces. In order to address all these challenges, a novel state-based bottom-up approach is proposed. This approach assumes a fixed atomic segment of a homogeneous state, instead of an event-based segment, and iterates progressively until a new state is found. The research investigates how an atomic state-based approach can be developed so as to work in real-time, near-real-time and offline modes and in different environmental conditions with varying quality of sensor traces. The results show the proposed bottom-up model outperforms the existing event-based segmentation models in terms of adaptivity, flexibility, accuracy and richness of information delivery pertinent to automated travel behavior interpretation.

  18. Automatic segmentation of time-lapse microscopy images depicting a live Dharma embryo.

    Science.gov (United States)

    Zacharia, Eleni; Bondesson, Maria; Riu, Anne; Ducharme, Nicole A; Gustafsson, Jan-Åke; Kakadiaris, Ioannis A

    2011-01-01

    Biological inferences about the toxicity of chemicals reached during experiments on the zebrafish Dharma embryo can be greatly affected by the analysis of the time-lapse microscopy images depicting the embryo. Among the stages of image analysis, automatic and accurate segmentation of the Dharma embryo is the most crucial and challenging. In this paper, an accurate and automatic segmentation approach for the segmentation of the Dharma embryo data obtained by fluorescent time-lapse microscopy is proposed. Experiments performed in four stacks of 3D images over time have shown promising results.

  19. A Quantitative Comparison of Semantic Web Page Segmentation Approaches

    NARCIS (Netherlands)

    Kreuzer, Robert; Hage, J.; Feelders, A.J.

    2015-01-01

    We compare three known semantic web page segmentation algorithms, each serving as an example of a particular approach to the problem, and one self-developed algorithm, WebTerrain, that combines two of the approaches. We compare the performance of the four algorithms for a large benchmark of modern

  20. Segmentation of time series with long-range fractal correlations

    Science.gov (United States)

    Bernaola-Galván, P.; Oliver, J.L.; Hackenberg, M.; Coronado, A.V.; Ivanov, P.Ch.; Carpena, P.

    2012-01-01

    Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome. PMID:23645997
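The core scan of this family of segmentation algorithms slides a split point along the series and maximizes a t-statistic between the left and right means. The sketch below shows only that scan on synthetic data; it omits the paper's key contribution, the fractional-noise reference used to decide whether the maximal t is significant:

```python
import numpy as np

def max_t_statistic(x):
    """Scan all split points; return (best position, max |t|) comparing
    the means of the left and right parts with a pooled variance."""
    best_pos, best_t = None, 0.0
    for i in range(2, len(x) - 2):
        left, right = x[:i], x[i:]
        sp = np.sqrt(((len(left) - 1) * left.var(ddof=1)
                      + (len(right) - 1) * right.var(ddof=1)) / (len(x) - 2))
        t = abs(left.mean() - right.mean()) / (sp * np.sqrt(1 / len(left) + 1 / len(right)))
        if t > best_t:
            best_pos, best_t = i, t
    return best_pos, best_t

# Synthetic series with one real change point at index 100.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
pos, t = max_t_statistic(x)
print(pos, round(t, 1))
```

Recursing on the two halves whenever the maximal t exceeds the significance threshold yields the full segmentation; using a long-range-correlated reference for that threshold is what prevents the oversegmentation described above.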

  1. Segmentation of time series with long-range fractal correlations.

    Science.gov (United States)

    Bernaola-Galván, P; Oliver, J L; Hackenberg, M; Coronado, A V; Ivanov, P Ch; Carpena, P

    2012-06-01

    Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome.

  2. Travel Time Estimation on Urban Street Segment

    Directory of Open Access Journals (Sweden)

    Jelena Kajalić

    2018-02-01

Level of service (LOS) is used as the main indicator of transport quality on urban roads and is estimated on the basis of travel speed. The main objective of this study is to determine which of the existing models for travel speed calculation is most suitable for local conditions. The study uses actual data gathered in a travel time survey on urban streets, recorded by applying second-by-second GPS data. The survey is limited to traffic flow in saturated conditions. The RMSE (Root Mean Square Error) method is used to compare the research results with relevant models: Akcelik, HCM (Highway Capacity Manual), the Singapore model, and the modified BPR (Bureau of Public Roads) function (Dowling-Skabardonis). The lowest deviation in local conditions for urban streets with standardized intersection distance (400-500 m) is demonstrated by the Akcelik model. However, for streets with lower signal density (<1 signal/km) the correlation between speed and degree of saturation is best captured by the HCM and Singapore models. According to the test results, the Akcelik model was adopted for travel speed estimation, which can be the basis for determining the level of service on urban streets with standardized intersection distance and coordinated signal timing under local conditions.
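The BPR function referenced above (before the Dowling-Skabardonis modification) has a simple closed form relating segment travel time to the volume-to-capacity ratio; the classic parameter values alpha=0.15 and beta=4 are assumed here:

```python
def bpr_travel_time(free_flow_time, volume, capacity, alpha=0.15, beta=4.0):
    """Classic BPR volume-delay function: t = t0 * (1 + alpha * (v/c)**beta)."""
    return free_flow_time * (1.0 + alpha * (volume / capacity) ** beta)

# A 60 s free-flow traversal of a street segment, loaded at 90% of capacity:
print(round(bpr_travel_time(60.0, 900, 1000), 2))  # 65.9
```

The steep beta exponent is what makes the curve nearly flat below capacity and sharply increasing near saturation, which is why calibrated alternatives such as Akcelik's model can fit saturated urban conditions better.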

  3. Clinical implications of ST segment time-course recovery patterns ...

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

Journal home page: http://www.akspublication.com/ijmu. KEY WORDS: Exercise stress test; ST segment time-course patterns.

  4. A segmentation approach for a delineation of terrestrial ecoregions

    Science.gov (United States)

    Nowosad, J.; Stepinski, T.

    2017-12-01

Terrestrial ecoregions are the result of regionalization of land into homogeneous units of similar ecological and physiographic features. Terrestrial Ecoregions of the World (TEW) is a commonly used global ecoregionalization based on expert knowledge and in situ observations. Ecological Land Units (ELUs) is a global classification of 250-meter cells into 4000 types on the basis of the categorical values of four environmental variables. ELUs are automatically calculated and reproducible, but they are not a regionalization, which makes them impractical for GIS-based spatial analysis and for comparison with TEW. We have regionalized terrestrial ecosystems on the basis of patterns of the same variables (land cover, soils, landform, and bioclimate) previously used in ELUs. Considering patterns of categorical variables makes segmentation, and thus regionalization, possible. The original raster datasets of the four variables are first transformed into regular grids of square-sized blocks of their cells, called eco-sites. Eco-sites are elementary land units containing local patterns of physiographic characteristics and thus assumed to contain a single ecosystem. Next, eco-sites are locally aggregated using a procedure analogous to image segmentation. The procedure optimizes pattern homogeneity of all four environmental variables within each segment. The result is a regionalization of the landmass into land units characterized by a uniform pattern of land cover, soils, landforms, climate, and, by inference, a uniform ecosystem. Because several disjoint segments may have very similar characteristics, we cluster the segments to obtain a smaller set of segment types, which we identify with ecoregions. Our approach is automatic, reproducible, updatable, and customizable. It yields the first automatic delineation of ecoregions on the global scale. In the resulting vector database each ecoregion/segment is described by numerous attributes which make it a valuable GIS resource for

  5. Hyperspectral image segmentation using a cooperative nonparametric approach

    Science.gov (United States)

    Taher, Akar; Chehdi, Kacem; Cariou, Claude

    2013-10-01

In this paper a new unsupervised, nonparametric, cooperative, and adaptive hyperspectral image segmentation approach is presented. The hyperspectral images are partitioned band by band in parallel, and intermediate classification results are evaluated and fused to obtain the final segmentation result. Two unsupervised nonparametric segmentation methods are used in parallel cooperation, namely the Fuzzy C-means (FCM) method and the Linde-Buzo-Gray (LBG) algorithm, to segment each band of the image. The originality of the approach relies firstly on its local adaptation to the type of regions in an image (textured, non-textured), and secondly on the introduction of several levels of evaluation and validation of intermediate segmentation results before obtaining the final partitioning of the image. To manage similar or conflicting results issued from the two classification methods, we gradually introduced various assessment steps that exploit the information of each spectral band and its adjacent bands, and finally the information of all the spectral bands. In our approach, the detected textured and non-textured regions are treated separately from the feature extraction step up to the final classification results. This approach was first evaluated on a large number of monocomponent images constructed from the Brodatz album. Then it was evaluated on two real applications, using respectively a multispectral image for cedar tree detection in the region of Baabdat (Lebanon) and a hyperspectral image for identification of invasive and non-invasive vegetation in the region of Cieza (Spain). The correct classification rate (CCR) for the first application is over 97%, and for the second application the average correct classification rate (ACCR) is over 99%.
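One of the two cooperating classifiers, Fuzzy C-means, alternates a membership update and a centroid update until convergence. A minimal 1-D sketch on toy data (no cooperation with LBG, and the data are invented):

```python
import numpy as np

def fcm(X, c, m=2.0, iters=50, seed=0):
    """Fuzzy C-means: alternate fuzzy-membership and centroid updates.
    U[i, k] is the degree to which sample i belongs to cluster k."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # fuzzy-weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        # u_ik = 1 / sum_j (d_ik / d_jk)^p
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    return U, centers

X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
U, centers = fcm(X, c=2)
labels = U.argmax(axis=1)
print(labels)  # the first three points share one cluster, the last three the other
```

Unlike hard k-means, the membership matrix U retains graded cluster assignments, which is what the evaluation and fusion stages described above can exploit when FCM and LBG disagree.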

  6. TOURISM SEGMENTATION BASED ON TOURISTS' PREFERENCES: A MULTIVARIATE APPROACH

    Directory of Open Access Journals (Sweden)

    Sérgio Dominique Ferreira

    2010-11-01

    Over the last decades, tourism has become one of the most important sectors of the international economy. Specifically in Portugal and Brazil, its contribution to Gross Domestic Product (GDP) and job creation is quite relevant. In this sense, following a strong marketing approach in the management of a country's tourism resources becomes paramount. Such an approach should be based on innovations that help unveil the preferences of tourists with accuracy, turning them into a competitive advantage. In this context, the main objective of the present study is to illustrate the importance and benefits associated with the use of multivariate methodologies for market segmentation. Another objective of this work is to illustrate the importance of post hoc segmentation. The authors applied a cluster analysis, with a hierarchical method followed by an optimization method. The main results of this study allow the identification of five clusters that are distinguished by assigning special importance to certain tourism attributes at the moment of choosing a specific destination. Thus, the authors present the advantages of post hoc segmentation based on tourists' preferences, as opposed to a priori segmentation based on socio-demographic characteristics.
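
    The "hierarchical method followed by an optimization method" is the classic two-stage cluster analysis: Ward's linkage chooses the initial groups, and a k-means-style refinement then optimizes them. A minimal sketch on synthetic preference ratings, assuming SciPy is available; function names and the data are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def two_stage_segments(X, k):
    """Ward hierarchical clustering for initial groups, k-means refinement."""
    labels = fcluster(linkage(X, method="ward"), t=k, criterion="maxclust") - 1
    centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    for _ in range(10):  # k-means refinement (the optimization stage)
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Synthetic ratings for two latent tourist segments (two attributes each)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([1, 5], 0.3, (30, 2)), rng.normal([5, 1], 0.3, (30, 2))])
labels, centers = two_stage_segments(X, k=2)
```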

  7. Robust nuclei segmentation in cyto-histopathological images using statistical level set approach with topology preserving constraint

    Science.gov (United States)

    Taheri, Shaghayegh; Fevens, Thomas; Bui, Tien D.

    2017-02-01

    Computerized assessment for diagnosis or malignancy grading of cyto-histopathological specimens has drawn increased attention in the field of digital pathology. Automatic segmentation of cell nuclei is a fundamental step in such automated systems. Despite considerable research, nuclei segmentation remains challenging due to noise, nonuniform illumination, and, most importantly in 2D projection images, overlapping and touching nuclei. In most published approaches, nuclei refinement is a post-processing step after segmentation, which usually refers to the task of detaching aggregated nuclei or merging over-segmented nuclei. In this work, we present a novel segmentation technique which effectively addresses the problem of individually segmenting touching or overlapping cell nuclei during the segmentation process itself. The proposed framework is a region-based segmentation method consisting of three major modules: i) the image is passed through a color deconvolution step to extract the desired stains; ii) the generalized fast radial symmetry (GFRS) transform is applied to the image, followed by non-maxima suppression, to specify the initial seed points for nuclei and their corresponding GFRS ellipses, which are interpreted as the initial nuclei borders; iii) finally, these initial border curves are evolved using a statistical level-set approach with topology preserving criteria, achieving segmentation and separation of nuclei at the same time. The proposed method is evaluated using Hematoxylin and Eosin and fluorescently stained images; qualitative and quantitative analysis shows that the method outperforms thresholding and watershed segmentation approaches.

  8. A comprehensive segmentation analysis of crude oil market based on time irreversibility

    Science.gov (United States)

    Xia, Jianan; Shang, Pengjian; Lu, Dan; Yin, Yi

    2016-05-01

    In this paper, we perform a comprehensive entropic segmentation analysis of crude oil futures prices from 1983 to 2014, using the Jensen-Shannon divergence as the statistical distance between segments, and analyze the results from the original series S and the series beginning in 1986 (marked S∗) to find common segments which have the same boundaries. We then apply time irreversibility analysis to each segment to divide all segments into two groups according to their degree of asymmetry. Based on the temporal distribution of the common segments and the high-asymmetry segments, we find that in the daily group these two types of segments appear alternately and basically do not overlap, while in the weekly group the common portions are also high-asymmetry segments. In addition, the temporal distribution of the common segments is fairly close to the times of crises, wars, and other events, because the impact of severe events on the oil price makes these common segments quite different from their adjacent segments. The common segments can be confirmed in the daily or weekly group series due to the large divergence between common segments and their neighbors, while the identification of high-asymmetry segments helps to identify the segments which are not affected badly by such events and can recover to steady states automatically. Finally, we rearrange the segments by merging connected common segments or high-asymmetry segments into a single segment, and conjoin the connected segments which are neither common nor highly asymmetric.
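
    The Jensen-Shannon divergence used as the statistical distance between segments can be computed directly from segment histograms. This is a minimal sketch of the distance itself; the entropic segmentation procedure built around it is not reproduced here.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (in nats) between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)                      # mixture distribution
    def kl(a, b):
        mask = a > 0                       # 0 * log(0) = 0 by convention
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

    The divergence is symmetric, zero for identical distributions, and bounded above by ln 2, which makes it convenient as a boundary-placement criterion.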

  9. Segment Fixed Priority Scheduling for Self-Suspending Real-Time Tasks

    Science.gov (United States)

    2016-08-11

    …a compute-intensive system such as a self-driving car that we have recently developed [28]. Such systems run computation-demanding algorithms… Tasks leveraging a GPU can be modeled using a multi-segment self-suspending real-time task model; for example, a planning algorithm for autonomous driving can…

  10. A Semi-automated Approach to Improve the Efficiency of Medical Imaging Segmentation for Haptic Rendering.

    Science.gov (United States)

    Banerjee, Pat; Hu, Mengqi; Kannan, Rahul; Krishnaswamy, Srinivasan

    2017-08-01

    The Sensimmer platform represents our ongoing research on simultaneous haptics and graphics rendering of 3D models. For simulation of medical and surgical procedures using Sensimmer, 3D models must be obtained from medical imaging data, such as magnetic resonance imaging (MRI) or computed tomography (CT). Image segmentation techniques are used to determine the anatomies of interest from the images, and the resulting 3D models require triangle reduction for graphics and haptics rendering. This paper focuses on creating 3D models by automating the segmentation of CT images based on pixel contrast, to integrate the interface between Sensimmer and medical imaging devices, using a volumetric approach, the Hough transform method, and a manual centering method. Automating the process reduced segmentation time by 56.35% while maintaining the same output accuracy of ±2 voxels.

  11. Interactive-cut: Real-time feedback segmentation for translational research.

    Science.gov (United States)

    Egger, Jan; Lüddemann, Tobias; Schwarzenberg, Robert; Freisleben, Bernd; Nimsky, Christopher

    2014-06-01

    In this contribution, a scale-invariant image segmentation algorithm is introduced that "wraps" the algorithm's parameters for the user through its interactive behavior, avoiding the definition of "arbitrary" numbers that the user cannot really understand. To that end, we designed a specific graph-based segmentation method that requires only a single seed point inside the target structure from the user and is thus particularly suitable for immediate processing and interactive, real-time adjustments. In addition, the color or gray value information needed for the approach can be extracted automatically around the user-defined seed point. Furthermore, the graph is constructed in such a way that a polynomial-time mincut computation can provide the segmentation result within a second on an up-to-date computer. The algorithm presented here has been evaluated with fixed seed points on 2D and 3D medical image data, such as brain tumors, cerebral aneurysms and vertebral bodies. Direct comparison of the obtained automatic segmentation results with costlier, manual slice-by-slice segmentations performed by trained physicians suggests a strong medical relevance of this interactive approach. Copyright © 2014 Elsevier Ltd. All rights reserved.
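
    The single-seed mincut idea can be sketched with a small 4-neighbour grid graph: the seed is tied to a source, the image border to a sink, and neighbour edges are weighted by intensity similarity, so the minimum cut follows the object boundary. This is a toy illustration assuming NetworkX is available; it is not the paper's specific graph construction, and the hard border-to-sink constraint is an assumption of the sketch.

```python
import numpy as np
import networkx as nx

def seed_mincut_segment(img, seed, sigma=0.1):
    """Segment the region around `seed` via an s-t minimum cut on a grid graph."""
    h, w = img.shape
    G = nx.Graph()
    big = 1e6
    G.add_edge("s", seed, capacity=big)                  # hard seed constraint
    for y in range(h):
        for x in range(w):
            if y in (0, h - 1) or x in (0, w - 1):
                G.add_edge((y, x), "t", capacity=big)    # border assumed background
            for dy, dx in ((0, 1), (1, 0)):              # 4-neighbour edges
                ny_, nx_ = y + dy, x + dx
                if ny_ < h and nx_ < w:
                    wgt = np.exp(-((img[y, x] - img[ny_, nx_]) ** 2) / (2 * sigma ** 2))
                    G.add_edge((y, x), (ny_, nx_), capacity=float(wgt))
    _, (fg, _) = nx.minimum_cut(G, "s", "t")
    return {p for p in fg if p != "s"}

# Toy image: a bright 3x3 block on a dark background, seed inside the block
img = np.zeros((9, 9))
img[3:6, 3:6] = 1.0
segment = seed_mincut_segment(img, seed=(4, 4))
```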

  12. Markerless tracking in nuclear power plants. A line segment-based approach

    International Nuclear Information System (INIS)

    Ishii, Hirotake; Kimura, Taro; Tokumaru, Hiroki; Shimoda, Hiroshi; Koda, Yuya

    2017-01-01

    To develop augmented reality-based support systems, a tracking method that measures the camera's position and orientation in real time is indispensable. Relocalization is the step used to (re)start tracking. A line-segment-based relocalization method that uses an RGB-D camera and a coarse-to-fine approach was developed and evaluated in this study. In the preparation stage, the target environment is scanned with an RGB-D camera and line segments are recognized. Three-dimensional positions of the line segments are then calculated, and statistics of the line segments are computed and stored in a database. In the relocalization stage, a few images that closely resemble the current RGB-D camera image are chosen from the database by comparing the statistics of the line segments; the most similar image is then chosen using normalized cross-correlation. This coarse-to-fine approach reduces the computational load of finding the most similar image. The method was evaluated in the water purification room of the Fugen nuclear power plant. Results showed that the success rate of the relocalization is 93.6% and the processing time is 45.7 ms per frame on average, which is promising for practical use. (author)

  13. Real-time object detection and semantic segmentation for autonomous driving

    Science.gov (United States)

    Li, Baojun; Liu, Shun; Xu, Weichao; Qiu, Wei

    2018-02-01

    In this paper, we propose a Highly Coupled Network (HCNet) for joint object detection and semantic segmentation. Our method is faster and performs better than previous approaches, whose decoder networks for the different tasks are independent. Besides, we present a multi-scale loss architecture to learn better representations for objects of different scales, without extra time in the inference phase. Experimental results show that our method achieves state-of-the-art results on the KITTI datasets. Moreover, it can run at 35 FPS on a GPU and is thus a practical solution to object detection and semantic segmentation for autonomous driving.

  14. A Variational Approach to Simultaneous Image Segmentation and Bias Correction.

    Science.gov (United States)

    Zhang, Kaihua; Liu, Qingshan; Song, Huihui; Li, Xuelong

    2015-08-01

    This paper presents a novel variational approach for simultaneous estimation of bias field and segmentation of images with intensity inhomogeneity. We model intensity of inhomogeneous objects to be Gaussian distributed with different means and variances, and then introduce a sliding window to map the original image intensity onto another domain, where the intensity distribution of each object is still Gaussian but can be better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise constant signal within the sliding window. A maximum likelihood energy functional is then defined on each local region, which combines the bias field, the membership function of the object region, and the constant approximating the true signal from its corresponding object. The energy functional is then extended to the whole image domain by the Bayesian learning approach. An efficient iterative algorithm is proposed for energy minimization, via which the image segmentation and bias field correction are simultaneously achieved. Furthermore, the smoothness of the obtained optimal bias field is ensured by the normalized convolutions without extra cost. Experiments on real images demonstrated the superiority of the proposed algorithm to other state-of-the-art representative methods.
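
    Local Gaussian-fit energies of the kind described above typically take the following general form (a hedged reconstruction of the standard structure, not necessarily the paper's exact notation: K_ρ is the sliding-window kernel, u_i the membership of object region O_i, c_i the constant approximating its true signal, b the bias field, and σ_i the object variance):

```latex
\mathcal{E}_x\bigl(b,\{c_i\},\{u_i\}\bigr)
  = \sum_{i=1}^{N} \int_{\Omega} K_{\rho}(x-y)\, u_i(y)
    \left[ \log \sigma_i
         + \frac{\bigl(I(y) - b(x)\,c_i\bigr)^{2}}{2\sigma_i^{2}} \right] dy,
\qquad
\mathcal{E}\bigl(b,\{c_i\},\{u_i\}\bigr) = \int_{\Omega} \mathcal{E}_x \, dx .
```

    Minimizing the integrated energy alternately over b, the constants c_i, and the memberships u_i yields the simultaneous bias correction and segmentation.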

  15. Comparison of different deep learning approaches for parotid gland segmentation from CT images

    Science.gov (United States)

    Hänsch, Annika; Schwier, Michael; Gass, Tobias; Morgas, Tomasz; Haas, Benjamin; Klein, Jan; Hahn, Horst K.

    2018-02-01

    The segmentation of target structures and organs at risk is a crucial and very time-consuming step in radiotherapy planning. Good automatic methods can significantly reduce the time clinicians have to spend on this task. Due to its variability in shape and often low contrast to surrounding structures, segmentation of the parotid gland is especially challenging. Motivated by the recent success of deep learning, we study different deep learning approaches for parotid gland segmentation. Particularly, we compare 2D, 2D ensemble and 3D U-Net approaches and find that the 2D U-Net ensemble yields the best results with a mean Dice score of 0.817 on our test data. The ensemble approach reduces false positives without the need for an automatic region of interest detection. We also apply our trained 2D U-Net ensemble to segment the test data of the 2015 MICCAI head and neck auto-segmentation challenge. With a mean Dice score of 0.861, our classifier exceeds the highest mean score in the challenge. This shows that the method generalizes well onto data from independent sites. Since appropriate reference annotations are essential for training but often difficult and expensive to obtain, it is important to know how many samples are needed to properly train a neural network. We evaluate the classifier performance after training with differently sized training sets (50-450) and find that 250 cases (without using extensive data augmentation) are sufficient to obtain good results with the 2D ensemble. Adding more samples does not significantly improve the Dice score of the segmentations.
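
    The mean Dice score used throughout this comparison is straightforward to compute per case (a minimal sketch):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```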

  16. Real-Time Adaptive Foreground/Background Segmentation

    Directory of Open Access Journals (Sweden)

    Sridha Sridharan

    2005-08-01

    The automatic analysis of digital video scenes often requires the segmentation of moving objects from a static background. Historically, algorithms developed for this purpose have been restricted to small frame sizes, low frame rates, or offline processing. The simplest approach involves subtracting the current frame from the known background; however, as the background is rarely known beforehand, the key question is how to learn and model it. This paper proposes a new algorithm that represents each pixel in the frame by a group of clusters. The clusters are sorted in order of the likelihood that they model the background and are adapted to deal with background and lighting variations. Incoming pixels are matched against the corresponding cluster group and are classified according to whether the matching cluster is considered part of the background. The algorithm has been qualitatively and quantitatively evaluated against three other well-known techniques. It demonstrated equal or better segmentation and proved capable of processing 320×240 PAL video at full frame rate using only 35%–40% of a 1.8 GHz Pentium 4 computer.
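
    The per-pixel cluster group can be sketched as follows. This is an illustrative simplification with made-up thresholds and update rules, not the paper's exact adaptation or ordering scheme.

```python
import numpy as np

class PixelClusterModel:
    """Cluster group for one pixel in an adaptive background model (a sketch)."""
    def __init__(self, k=3, thresh=10.0, alpha=0.05):
        self.k, self.thresh, self.alpha = k, thresh, alpha
        self.centroids, self.weights = [], []

    def update(self, value):
        """Match the incoming value to a cluster and adapt it.
        Returns True if the matched cluster is considered background."""
        for i, c in enumerate(self.centroids):
            if abs(value - c) < self.thresh:
                self.centroids[i] += self.alpha * (value - c)  # adapt centroid
                self.weights[i] += 1
                # background = a cluster with substantial support
                return self.weights[i] >= max(self.weights) * 0.5
        if len(self.centroids) < self.k:
            self.centroids.append(float(value)); self.weights.append(1)
        else:  # replace the least-supported cluster
            j = int(np.argmin(self.weights))
            self.centroids[j], self.weights[j] = float(value), 1
        return False
```

    A stable intensity quickly accumulates weight and is classified as background, while a novel intensity spawns a low-weight cluster and is flagged as foreground.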

  17. Marketing ambulatory care to women: a segmentation approach.

    Science.gov (United States)

    Harrell, G D; Fors, M F

    1985-01-01

    Although significant changes are occurring in health care delivery, in many instances the new offerings are not based on a clear understanding of market segments being served. This exploratory study suggests that important differences may exist among women with regard to health care selection. Five major women's segments are identified for consideration by health care executives in developing marketing strategies. Additional research is suggested to confirm this segmentation hypothesis, validate segmental differences and quantify the findings.

  18. Spectral-spatial classification of hyperspectral data with mutual information based segmented stacked autoencoder approach

    Science.gov (United States)

    Paul, Subir; Nagesh Kumar, D.

    2018-04-01

    Hyperspectral (HS) data comprise continuous spectral responses of hundreds of narrow spectral bands with very fine spectral resolution or bandwidth, which enables feature identification and classification with high accuracy. In the present study, a Mutual Information (MI) based Segmented Stacked Autoencoder (S-SAE) approach for spectral-spatial classification of HS data is proposed to reduce the complexity and computational time compared to Stacked Autoencoder (SAE) based feature extraction. A nonparametric dependency measure (MI) based spectral segmentation is proposed instead of linear and parametric dependency measures, to account for both linear and nonlinear inter-band dependency in the spectral segmentation of the HS bands. Morphological profiles are then created from the segmented spectral features to assimilate spatial information into the spectral-spatial classification approach. Two nonparametric classifiers, Support Vector Machine (SVM) with a Gaussian kernel and Random Forest (RF), are used for classification of the three most popularly used HS datasets. Results of the numerical experiments show that SVM with a Gaussian kernel provides better results for the Pavia University and Botswana datasets, whereas RF performs better for the Indian Pines dataset. The experiments performed with the proposed methodology provide encouraging results compared to numerous existing approaches.
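
    The MI-based spectral grouping can be sketched with a histogram estimate of mutual information between adjacent bands, starting a new segment wherever the dependency drops. The bin count and threshold here are illustrative, not the paper's settings.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram-based mutual information (nats) between two band images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(1), pxy.sum(0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def segment_bands(bands, tau):
    """Start a new spectral segment whenever adjacent-band MI drops below tau."""
    groups, start = [], 0
    for i in range(1, len(bands)):
        if mutual_information(bands[i - 1], bands[i]) < tau:
            groups.append(list(range(start, i))); start = i
    groups.append(list(range(start, len(bands))))
    return groups
```

    Each resulting band group would then feed its own autoencoder in the S-SAE pipeline.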

  19. Real-Time Facial Segmentation and Performance Capture from RGB Input

    OpenAIRE

    Saito, Shunsuke; Li, Tianye; Li, Hao

    2016-01-01

    We introduce the concept of unconstrained real-time 3D facial performance capture through explicit semantic segmentation in the RGB input. To ensure robustness, cutting-edge supervised learning approaches rely on large training datasets of face images captured in the wild. While impressive tracking quality has been demonstrated for faces that are largely visible, any occlusion due to hair, accessories, or hand-to-face gestures would result in significant visual artifacts and loss of tracking ...

  20. MULTISPECTRAL PANSHARPENING APPROACH USING PULSE-COUPLED NEURAL NETWORK SEGMENTATION

    Directory of Open Access Journals (Sweden)

    X. J. Li

    2018-04-01

    The paper proposes a novel pansharpening method based on pulse-coupled neural network (PCNN) segmentation. In the new method, uniform injection gains for each region are estimated through PCNN segmentation rather than through a simple square window. Since PCNN segmentation agrees with the human visual system, the proposed method shows better spectral consistency. Our experiments, carried out on both suburban and urban datasets, demonstrate that the proposed method outperforms other methods in multispectral pansharpening.

  1. New approach for validating the segmentation of 3D data applied to individual fibre extraction

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2017-01-01

    We present two approaches for validating the segmentation of 3D data. The first approach consists in comparing the amount of estimated material to a value provided by the manufacturer. The second approach consists in comparing the segmented results to those obtained from imaging modalities...

  2. Comparison of vessel enhancement algorithms applied to time-of-flight MRA images for cerebrovascular segmentation.

    Science.gov (United States)

    Phellan, Renzo; Forkert, Nils D

    2017-11-01

    Vessel enhancement algorithms are often used as a preprocessing step for vessel segmentation in medical images to improve the overall segmentation accuracy. Each algorithm uses different characteristics to enhance vessels, such that the most suitable algorithm may vary for different applications. This paper presents a comparative analysis of the accuracy gains in vessel segmentation generated by the use of nine vessel enhancement algorithms: multiscale vesselness using the formulas described by Erdt (MSE), Frangi (MSF), and Sato (MSS), optimally oriented flux (OOF), ranking orientation responses of path operators (RORPO), the regularized Perona-Malik approach (RPM), vessel enhancing diffusion (VED), hybrid diffusion with continuous switch (HDCS), and the white top-hat algorithm (WTH). The filters were evaluated and compared on time-of-flight MRA datasets and corresponding manual segmentations from 5 healthy subjects and 10 patients with an arteriovenous malformation. Additionally, five synthetic angiographic datasets with corresponding ground-truth segmentations were generated with three different noise levels (low, medium, and high) and also used for comparison. The parameters for each algorithm and the subsequent segmentation were optimized using leave-one-out cross-validation. The Dice coefficient, Matthews correlation coefficient, area under the ROC curve, number of connected components, and true positives were used for comparison. The results of this study suggest that vessel enhancement algorithms do not always lead to more accurate segmentation results compared to segmenting non-enhanced images directly. Multiscale vesselness algorithms such as MSE, MSF, and MSS proved to be robust to noise, while diffusion-based filters such as RPM, VED, and HDCS ranked at the top of the list in scenarios with medium or no noise. Filters that assume tubular shapes, such as MSE, MSF, MSS, OOF, RORPO, and VED, show a decrease in accuracy when considering patients with an AVM
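
    The multiscale vesselness filters (MSF being the Frangi formulation) share a Hessian-eigenvalue core. A single-scale 2-D sketch of the Frangi response for bright tubular structures, assuming SciPy; the β and c values are common illustrative defaults, and the multiscale maximum over σ is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frangi2d(img, sigma=2.0, beta=0.5, c=0.08):
    """Single-scale 2-D Frangi vesselness for bright tubular structures."""
    s2 = sigma ** 2                                   # scale normalisation
    Hxx = s2 * gaussian_filter(img, sigma, order=(0, 2))
    Hyy = s2 * gaussian_filter(img, sigma, order=(2, 0))
    Hxy = s2 * gaussian_filter(img, sigma, order=(1, 1))
    tmp = np.sqrt(((Hxx - Hyy) / 2) ** 2 + Hxy ** 2)
    mu = (Hxx + Hyy) / 2
    l1, l2 = mu + tmp, mu - tmp                       # Hessian eigenvalues
    lam1 = np.where(np.abs(l1) <= np.abs(l2), l1, l2) # smaller magnitude
    lam2 = np.where(np.abs(l1) > np.abs(l2), l1, l2)  # larger magnitude
    Rb2 = (lam1 / (np.abs(lam2) + 1e-12)) ** 2        # blobness ratio, squared
    S2 = lam1 ** 2 + lam2 ** 2                        # second-order structureness
    v = np.exp(-Rb2 / (2 * beta ** 2)) * (1 - np.exp(-S2 / (2 * c ** 2)))
    return np.where(lam2 < 0, v, 0.0)                 # bright vessels: lam2 < 0

# Synthetic bright line on a dark background
img = np.zeros((40, 40))
img[20, :] = 1.0
v = frangi2d(img)
```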

  3. Improving automated multiple sclerosis lesion segmentation with a cascaded 3D convolutional neural network approach.

    Science.gov (United States)

    Valverde, Sergi; Cabezas, Mariano; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Oliver, Arnau; Lladó, Xavier

    2017-07-15

    In this paper, we present a novel automated method for White Matter (WM) lesion segmentation in Multiple Sclerosis (MS) patient images. Our approach is based on a cascade of two 3D patch-wise convolutional neural networks (CNN). The first network is trained to be more sensitive, revealing possible candidate lesion voxels, while the second network is trained to reduce the number of misclassified voxels coming from the first network. This cascaded CNN architecture tends to learn well from a small (n ≤ 35) set of labeled data of the same MRI contrast, which can be very interesting in practice given the difficulty of obtaining manual label annotations and the large amount of available unlabeled Magnetic Resonance Imaging (MRI) data. We evaluate the accuracy of the proposed method on the public MS lesion segmentation challenge MICCAI2008 dataset, comparing it with other state-of-the-art MS lesion segmentation tools. Furthermore, the proposed method is also evaluated on two private MS clinical datasets, where its performance is compared with several recent publicly available state-of-the-art MS lesion segmentation methods. At the time of writing, our method is the best ranked approach on the MICCAI2008 challenge, outperforming the other 60 participant methods when using all the available input modalities (T1-w, T2-w and FLAIR), while still in the top rank (3rd position) when using only the T1-w and FLAIR modalities. On clinical MS data, our approach exhibits a significant increase in accuracy when segmenting WM lesions compared with the rest of the evaluated methods, also correlating highly (r ≥ 0.97) with the expected lesion volume. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Identifying target groups for environmentally sustainable transport: assessment of different segmentation approaches

    DEFF Research Database (Denmark)

    Haustein, Sonja; Hunecke, Marcel

    2013-01-01

    Recently, the use of attitude-based market segmentation to promote environmentally sustainable transport has significantly increased. The segmentation of the population into meaningful groups sharing similar attitudes and preferences provides valuable information about how green measures should… and behavioural segmentations are compared regarding marketing criteria. Although none of the different approaches can claim absolute superiority, attitudinal approaches show advantages in providing starting points for interventions to reduce car use....

  5. Segmenting articular cartilage automatically using a voxel classification approach

    DEFF Research Database (Denmark)

    Folkesson, Jenny; Dam, Erik B; Olsen, Ole F

    2007-01-01

    We present a fully automatic method for articular cartilage segmentation from magnetic resonance imaging (MRI), which we use as the foundation of a quantitative cartilage assessment. We evaluate our method by comparison to manual segmentations by a radiologist and by examining the interscan reproducibility of the volume and area estimates. Training and evaluation of the method are performed on a data set consisting of 139 scans of knees with a status ranging from healthy to severely osteoarthritic. This is, to our knowledge, the only fully automatic cartilage segmentation method that has good agreement with manual segmentations, an interscan reproducibility as good as that of a human expert, and enables the separation between healthy and osteoarthritic populations. While high-field scanners offer high-quality imaging from which the articular cartilage has been evaluated extensively using manual...

  6. Measuring tourist satisfaction: a factor-cluster segmentation approach

    OpenAIRE

    Andriotis, Konstantinos; Agiomirgianakis, George; Mihiotis, Athanasios

    2008-01-01

    Tourist satisfaction has been considered as a tool for increasing destination competitiveness. In an attempt to gain a better understanding of tourists’ satisfaction in an island mass destination this study has taken Crete as a case with the aim to identify the underlying dimensions of tourists’ satisfaction, to investigate whether tourists could be grouped into distinct segments and to examine the significant difference between the segments and sociodemographic and travel arrangement charact...

  7. A decision-theoretic approach for segmental classification

    OpenAIRE

    Yau, Christopher; Holmes, Christopher C.

    2013-01-01

    This paper is concerned with statistical methods for the segmental classification of linear sequence data, where the task is to segment and classify the data according to an underlying hidden discrete state sequence. Such analysis is commonplace in the empirical sciences, including genomics, finance and speech processing. In particular, we are interested in answering the following question: given data $y$ and a statistical model $\pi(x,y)$ of the hidden states $x$, what should we report as the ...

  8. Timing Embryo Segmentation: Dynamics and Regulatory Mechanisms of the Vertebrate Segmentation Clock

    Science.gov (United States)

    Resende, Tatiana P.; Andrade, Raquel P.; Palmeirim, Isabel

    2014-01-01

    All vertebrate species present a segmented body, easily observed in the vertebrate column and its associated components, which provides a high degree of motility to the adult body and efficient protection of the internal organs. The sequential formation of the segmented precursors of the vertebral column during embryonic development, the somites, is governed by an oscillating genetic network, the somitogenesis molecular clock. Herein, we provide an overview of the molecular clock operating during somite formation and its underlying molecular regulatory mechanisms. Human congenital vertebral malformations have been associated with perturbations in these oscillatory mechanisms. Thus, a better comprehension of the molecular mechanisms regulating somite formation is required in order to fully understand the origin of human skeletal malformations. PMID:24895605

  9. Minimizing manual image segmentation turn-around time for neuronal reconstruction by embracing uncertainty.

    Directory of Open Access Journals (Sweden)

    Stephen M Plaza

    The ability to automatically segment an image into distinct regions is a critical aspect of many visual processing applications. Because inaccuracies often exist in automatic segmentation, manual segmentation is necessary in some application domains to correct mistakes, such as in the reconstruction of neuronal processes from microscopic images. The goal of an automated segmentation tool is traditionally to produce the highest-quality segmentation, where quality is measured by similarity to actual ground truth, so as to minimize the volume of manual correction necessary. Manual correction is generally orders of magnitude more time-consuming than automated segmentation, often making the handling of large images intractable. Therefore, we propose a more relevant goal: minimizing the turn-around time of automated/manual segmentation while attaining a given level of similarity with ground truth. It is not always necessary to inspect every aspect of an image to generate a useful segmentation; as such, we propose a strategy to guide manual segmentation toward the most uncertain parts of the segmentation. Our contributions include (1) a probabilistic measure that evaluates segmentation without ground truth and (2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality.
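
    The idea of steering proofreaders to the least certain parts of a segmentation can be illustrated with a simple entropy measure over per-pixel label probabilities. The paper's probabilistic measure is more involved; the functions and names here are illustrative only.

```python
import numpy as np

def uncertainty_map(probs):
    """Per-pixel entropy of predicted label probabilities (last axis = labels).
    High values mark regions worth sending to a human proofreader first."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def rank_regions(probs, top=5):
    """Flat indices of the `top` most uncertain pixels, most uncertain first."""
    h = uncertainty_map(probs).ravel()
    return np.argsort(h)[::-1][:top]
```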

  10. Quantitative segmentation of fluorescence microscopy images of heterogeneous tissue: Approach for tuning algorithm parameters

    Science.gov (United States)

    Mueller, Jenna L.; Harmany, Zachary T.; Mito, Jeffrey K.; Kennedy, Stephanie A.; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G.; Willett, Rebecca M.; Brown, J. Quincy; Ramanujam, Nimmi

    2013-02-01

    The combination of fluorescent contrast agents with microscopy is a powerful technique to obtain real time images of tissue histology without the need for fixing, sectioning, and staining. The potential of this technology lies in the identification of robust methods for image segmentation and quantitation, particularly in heterogeneous tissues. Our solution is to apply sparse decomposition (SD) to monochrome images of fluorescently-stained microanatomy to segment and quantify distinct tissue types. The clinical utility of our approach is demonstrated by imaging excised margins in a cohort of mice after surgical resection of a sarcoma. Representative images of excised margins were used to optimize the formulation of SD and tune parameters associated with the algorithm. Our results demonstrate that SD is a robust solution that can advance vital fluorescence microscopy as a clinically significant technology.

  11. A combined approach for the enhancement and segmentation of mammograms using modified fuzzy C-means method in wavelet domain.

    Science.gov (United States)

    Srivastava, Subodh; Sharma, Neeraj; Singh, S K; Srivastava, R

    2014-07-01

    In this paper, a combined approach for the enhancement and segmentation of mammograms is proposed. In the preprocessing stage, a contrast limited adaptive histogram equalization (CLAHE) method is applied to obtain better-contrast mammograms. After this, the proposed combined methods are applied. In the first step of the proposed approach, a two-dimensional (2D) discrete wavelet transform (DWT) is applied to all the input images. In the second step, a proposed nonlinear complex diffusion based unsharp masking and crispening method is applied to the approximation coefficients of the wavelet-transformed images to further highlight abnormalities such as micro-calcifications and tumours and to reduce false positives (FPs). Thirdly, a modified fuzzy c-means (FCM) segmentation method is applied to the output of the second step. In the modified FCM method, mutual information is proposed as a similarity measure in place of the conventional Euclidean distance based dissimilarity measure for FCM segmentation. Finally, the inverse 2D-DWT is applied. The efficacy of the proposed unsharp masking and crispening method for image enhancement is evaluated in terms of signal-to-noise ratio (SNR), and that of the proposed segmentation method is evaluated in terms of random index (RI), global consistency error (GCE), and variation of information (VoI). The performance of the proposed segmentation approach is compared with other commonly used segmentation approaches such as Otsu's thresholding, texture-based, k-means and FCM clustering, as well as thresholding. From the obtained results, it is observed that the proposed segmentation approach performs better and takes less processing time than the standard FCM and the other segmentation methods considered.
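
    As a baseline for the crispening step, classical linear unsharp masking adds the high-pass residual back to the image. The paper's variant is nonlinear complex-diffusion based and operates on wavelet approximation coefficients; this sketch, assuming SciPy, only illustrates the basic masking idea.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=2.0, amount=1.0):
    """Classical unsharp masking: add back the high-pass residual."""
    blurred = gaussian_filter(img.astype(float), sigma)
    return img + amount * (img - blurred)

# Step edge: enhancement should overshoot on both sides of the edge
img = np.zeros((16, 16))
img[:, 8:] = 1.0
sharp = unsharp_mask(img)
```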

  12. Population segmentation: an approach to reducing childhood obesity inequalities.

    Science.gov (United States)

    Mahmood, Hashum; Lowe, Susan

    2017-05-01

    The aims of this study are threefold: (1) to investigate the relationship between socio-economic status (inequality) and childhood obesity prevalence within Birmingham local authority, (2) to identify any change in childhood obesity prevalence between deprivation quintiles and (3) to analyse individualised Birmingham National Child Measurement Programme (NCMP) data using a population segmentation tool to better inform obesity prevention strategies. Data from the NCMP for Birmingham (2010/2011 and 2014/2015) were analysed using the deprivation scores from the Income Domain Affecting Children Index (IDACI 2010). The percentage of children with excess weight was calculated for each local deprivation quintile. Population segmentation was carried out using the Experian's Mosaic Public Sector 6 (MPS6) segmentation tool. Childhood obesity levels have remained static at the national and Birmingham level. For Year 6 pupils, obesity levels have increased in the most deprived deprivation quintiles for boys and girls. The most affluent quintile shows a decreasing trend of obesity prevalence for boys and girls in both year groups. For the middle quintiles, the results show fluctuating trends. This research highlighted the link in Birmingham between obesity and socio-economic factors with the gap increasing between deprivation quintiles. Obesity is a complex problem that cannot simply be addressed through targeting most deprived populations, rather through a range of effective interventions tailored for the various population segments that reside within communities. Using population segmentation enables a more nuanced understanding of the potential barriers and levers within populations on their readiness for change. The segmentation of childhood obesity data will allow utilisation of social marketing methodology that will facilitate identification of suitable methods for interventions and motivate individuals to sustain behavioural change. Sequentially, it will also inform

  13. Unifying Terrain Awareness for the Visually Impaired through Real-Time Semantic Segmentation

    Directory of Open Access Journals (Sweden)

    Kailun Yang

    2018-05-01

Full Text Available Navigational assistance aims to help visually-impaired people move through the environment safely and independently. This topic becomes challenging as it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we propose using pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture, aimed at attaining efficient semantic understanding. We have integrated the approach in a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy competitive with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework.

  14. A Nash-game approach to joint image restoration and segmentation

    OpenAIRE

    Kallel , Moez; Aboulaich , Rajae; Habbal , Abderrahmane; Moakher , Maher

    2014-01-01

    International audience; We propose a game theory approach to simultaneously restore and segment noisy images. We define two players: one is restoration, with the image intensity as strategy, and the other is segmentation with contours as strategy. Cost functions are the classical relevant ones for restoration and segmentation, respectively. The two players play a static game with complete information, and we consider as solution to the game the so-called Nash Equilibrium. For the computation ...

  15. Segmented arch or continuous arch technique? A rational approach

    Directory of Open Access Journals (Sweden)

    Sergei Godeiro Fernandes Rabelo Caldas

    2014-04-01

Full Text Available This study aims at reviewing the biomechanical principles of the segmented archwire technique as well as describing the clinical conditions in which the rational use of scientific biomechanics is essential to optimize orthodontic treatment and reduce the side effects produced by the straight-wire technique.

  16. An EM based approach for motion segmentation of video sequence

    NARCIS (Netherlands)

    Zhao, Wei; Roos, Nico; Pan, Zhigeng; Skala, Vaclav

    2016-01-01

    Motions are important features for robot vision as we live in a dynamic world. Detecting moving objects is crucial for mobile robots and computer vision systems. This paper investigates an architecture for the segmentation of moving objects from image sequences. Objects are represented as groups of

  17. A spectral k-means approach to bright-field cell image segmentation.

    Science.gov (United States)

    Bradbury, Laura; Wan, Justin W L

    2010-01-01

Automatic segmentation of bright-field cell images is important to cell biologists, but difficult to achieve due to the complex nature of cells in bright-field images (poor contrast, broken halo, missing boundaries). Standard approaches such as level set segmentation and active contours work well for fluorescent images, where cells appear round, but become less effective when optical artifacts such as halos exist in bright-field images. In this paper, we present a robust segmentation method which combines spectral and k-means clustering techniques to locate cells in bright-field images. This approach models an image as a matrix graph and segments different regions of the image by computing the appropriate eigenvectors of the matrix graph and applying the k-means algorithm. We illustrate the effectiveness of the method with segmentation results on C2C12 (muscle) cells in bright-field images.
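The spectral step described in this abstract can be sketched in a few lines. The toy below builds a dense Gaussian affinity graph over 1-D intensities, forms the symmetric normalized Laplacian, and splits by the sign of the Fiedler vector, a simplified stand-in for running k-means on the spectral embedding; `sigma` and the sample data are illustrative assumptions:

```python
import numpy as np

def spectral_bipartition(x, sigma=0.3):
    """Two-way spectral split of 1-D intensities: Gaussian affinity,
    symmetric normalized Laplacian, then threshold the Fiedler vector
    at zero (a stand-in for k-means on the spectral embedding)."""
    W = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
    L = np.eye(x.size) - d_inv_sqrt @ W @ d_inv_sqrt
    _, vecs = np.linalg.eigh(L)
    return (vecs[:, 1] > 0).astype(int)   # sign of the Fiedler vector

x = np.array([0.10, 0.12, 0.09, 0.90, 0.88, 0.91])
labels = spectral_bipartition(x)
```

The eigenvector sign is arbitrary, so only the grouping (not which cluster is labeled 0 or 1) is meaningful.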

  18. PRESEE: an MDL/MML algorithm to time-series stream segmenting.

    Science.gov (United States)

    Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

Time-series streams are one of the most common data types in the data mining field, prevalent in areas such as the stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision while paying little attention to efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmentation. PRESEE is based on the MDL (minimum description length) and MML (minimum message length) principles, which allow it to segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with a state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmentation speed by nearly ten times. The novelty of this algorithm is further demonstrated by applying PRESEE to segment real-time stream datasets from ChinaFLUX sensor network data streams.
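An MDL-style segmentation criterion of the kind PRESEE automates can be illustrated with a toy piecewise-constant model: a split is accepted only when the combined description length (residual error plus a fixed per-segment model cost; both are assumptions here, not PRESEE's actual coding scheme) beats the unsplit cost:

```python
import numpy as np

def mdl_cost(seg):
    """Toy description length of one segment: squared error around the
    segment mean plus a fixed per-segment model cost (an assumption,
    not PRESEE's actual coding scheme)."""
    return ((seg - seg.mean()) ** 2).sum() + 2.0

def best_split(x):
    """Return the breakpoint minimizing total cost, or None if leaving
    the series unsplit is cheaper."""
    costs = [mdl_cost(x[:i]) + mdl_cost(x[i:]) for i in range(1, x.size)]
    i = int(np.argmin(costs)) + 1
    return i if costs[i - 1] < mdl_cost(x) else None

# A step change at index 4 should be the chosen breakpoint
x = np.array([0.0, 0.1, -0.1, 0.0, 5.0, 5.1, 4.9, 5.0])
split = best_split(x)   # -> 4
```

Applied recursively to the two halves, the same test yields a full parameter-light segmentation, which is the spirit of the MDL/MML approach.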

  19. Assessing segment- and corridor-based travel-time reliability on urban freeways : final report.

    Science.gov (United States)

    2016-09-01

    Travel time and its reliability are intuitive performance measures for freeway traffic operations. The objective of this project was to quantify segment-based and corridor-based travel time reliability measures on urban freeways. To achieve this obje...

20. Evaluation of a segment-based LANDSAT full-frame approach to crop area estimation

    Science.gov (United States)

    Bauer, M. E. (Principal Investigator); Hixson, M. M.; Davis, S. M.

    1981-01-01

As the registration of LANDSAT full frames enters the realm of current technology, sampling methods should be examined which utilize other than the segment data used for LACIE. The effect of separating the functions of sampling for training and sampling for area estimation was examined. The frame selected for analysis was acquired over north central Iowa on August 9, 1978. A stratification of the full frame was defined. Training data came from segments within the frame. Two classification and estimation procedures were compared: statistics developed on one segment were used to classify that segment, and pooled statistics from the segments were used to classify a systematic sample of pixels. Comparisons to USDA/ESCS estimates illustrate that the full-frame sampling approach can provide accurate and precise area estimates.

  1. A Hybrid Approach for Improving Image Segmentation: Application to Phenotyping of Wheat Leaves.

    Directory of Open Access Journals (Sweden)

    Joshua Chopin

Full Text Available In this article we propose a novel tool that takes an initial segmented image and returns a more accurate segmentation that captures sharp features such as leaf tips, twists and axils. Our algorithm utilizes basic a priori information about the shape of plant leaves and local image orientations to fit active contour models to important plant features that were missed during the initial segmentation. We compare the performance of our approach with three state-of-the-art segmentation techniques, using three error metrics. The results show that leaf tips are detected with roughly one half of the original error, segmentation accuracy is almost always improved, and more than half of the leaf breakages are corrected.

  2. Highly segmented, high resolution time-of-flight system

    Energy Technology Data Exchange (ETDEWEB)

    Nayak, T.K.; Nagamiya, S.; Vossnack, O.; Wu, Y.D.; Zajc, W.A. [Columbia Univ., New York, NY (United States); Miake, Y.; Ueno, S.; Kitayama, H.; Nagasaka, Y.; Tomizawa, K.; Arai, I.; Yagi, K [Univ. of Tsukuba, (Japan)

    1991-12-31

The light attenuation and timing characteristics of time-of-flight counters constructed of 3 m long scintillating fiber bundles of different shapes and sizes are presented. Fiber bundles made of 5 mm diameter fibers showed good timing characteristics and less light attenuation. The results for a 1.5 m long scintillator rod are also presented.

  3. Optimal timing of coronary invasive strategy in non-ST-segment elevation acute coronary syndromes

    DEFF Research Database (Denmark)

    Navarese, Eliano P; Gurbel, Paul A; Andreotti, Felicita

    2013-01-01

The optimal timing of coronary intervention in patients with non-ST-segment elevation acute coronary syndromes (NSTE-ACSs) is a matter of debate. Conflicting results among published studies partly relate to different risk profiles of the studied populations.

  4. Physical activity patterns across time-segmented youth sport flag football practice.

    Science.gov (United States)

    Schlechter, Chelsey R; Guagliano, Justin M; Rosenkranz, Richard R; Milliken, George A; Dzewaltowski, David A

    2018-02-08

Youth sport (YS) reaches a large number of children world-wide and contributes substantially to children's daily physical activity (PA), yet less than half of YS time has been shown to be spent in moderate-to-vigorous physical activity (MVPA). Physical activity during practice is likely to vary depending on practice structure that changes across YS time, therefore the purpose of this study was 1) to describe the type and frequency of segments of time, defined by contextual characteristics of practice structure, during YS practices and 2) determine the influence of these segments on PA. Research assistants video-recorded the full duration of 28 practices from 14 boys' flag football teams (2 practices/team) while children concurrently (N = 111, aged 5-11 years, mean 7.9 ± 1.2 years) wore ActiGraph GT1M accelerometers to measure PA. Observers divided videos of each practice into continuous context time segments (N = 204; mean-segments-per-practice = 7.3, SD = 2.5) using start/stop points defined by change in context characteristics, and assigned a value for task (e.g., management, gameplay, etc.), member arrangement (e.g., small group, whole group, etc.), and setting demand (i.e., fosters participation, fosters exclusion). Segments were then paired with accelerometer data. Data were analyzed using a multilevel model with segment as unit of analysis. Whole practices averaged 34 ± 2.4% of time spent in MVPA. Free-play (51.5 ± 5.5%), gameplay (53.6 ± 3.7%), and warm-up (53.9 ± 3.6%) segments had greater percentage of time (%time) in MVPA compared to fitness (36.8 ± 4.4%) segments (p ≤ .01). Greater %time was spent in MVPA during free-play segments compared to scrimmage (30.2 ± 4.6%), strategy (30.6 ± 3.2%), and sport-skill (31.6 ± 3.1%) segments (p ≤ .01), and in segments that fostered participation (36.1 ± 2.7%) than segments that fostered exclusion (29.1 ± 3.0%; p ≤ .01).
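The per-segment outcome measure used above (percent of segment time in MVPA) is straightforward to compute once each accelerometer epoch is classified against a cut-point; the epoch labels and segment boundaries below are hypothetical:

```python
import numpy as np

# Hypothetical epoch-level intensity labels for one practice
# (1 = epoch classified as MVPA via an accelerometer cut-point, 0 = lighter)
epochs = np.array([0, 1, 1, 0, 1, 1, 0, 0, 0, 0])

# Hypothetical segment boundaries as epoch index ranges
segments = {"warm-up": (0, 5), "fitness": (5, 10)}

# Percent of each segment's epochs spent in MVPA
pct_mvpa = {name: 100 * epochs[a:b].mean()
            for name, (a, b) in segments.items()}
# e.g. pct_mvpa["warm-up"] -> 60.0, pct_mvpa["fitness"] -> 20.0
```

In the study itself these segment-level values feed a multilevel model with segment as the unit of analysis.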

  5. Understanding heterogeneity among elderly consumers: an evaluation of segmentation approaches in the functional food market.

    Science.gov (United States)

    van der Zanden, Lotte D T; van Kleef, Ellen; de Wijk, René A; van Trijp, Hans C M

    2014-06-01

    It is beneficial for both the public health community and the food industry to meet nutritional needs of elderly consumers through product formats that they want. The heterogeneity of the elderly market poses a challenge, however, and calls for market segmentation. Although many researchers have proposed ways to segment the elderly consumer population, the elderly food market has received surprisingly little attention in this respect. Therefore, the present paper reviewed eight potential segmentation bases on their appropriateness in the context of functional foods aimed at the elderly: cognitive age, life course, time perspective, demographics, general food beliefs, food choice motives, product attributes and benefits sought, and past purchase. Each of the segmentation bases had strengths as well as weaknesses regarding seven evaluation criteria. Given that both product design and communication are useful tools to increase the appeal of functional foods, we argue that elderly consumers in this market may best be segmented using a preference-based segmentation base that is predictive of behaviour (for example, attributes and benefits sought), combined with a characteristics-based segmentation base that describes consumer characteristics (for example, demographics). In the end, the effectiveness of (combinations of) segmentation bases for elderly consumers in the functional food market remains an empirical matter. We hope that the present review stimulates further empirical research that substantiates the ideas presented in this paper.

  6. Commuters’ attitudes and norms related to travel time and punctuality: A psychographic segmentation to reduce congestion

    DEFF Research Database (Denmark)

    Haustein, Sonja; Thorhauge, Mikkel; Cherchi, Elisabetta

    2018-01-01

The analysis identified three distinct commuter segments: (1) Unhurried timely commuters, who find it very important to arrive on time but less important to have a short travel time; (2) Self-determined commuters, who find it less important to arrive on time and depend less on others for their transport choices; and (3) Busy commuters, who find it both important to arrive on time and to have a short travel time. Comparing the segments based on background variables shows that Self-determined commuters are younger and work more often on flextime, while Unhurried timely commuters have longer distances to work and commute more often by public transport. Results of a discrete departure time choice model, estimated based on data from a stated preference experiment, confirm the criterion validity of the segmentation. A scenario simulating a toll ring illustrates that mainly Self-determined commuters would change their departure times.

  7. NEW APPROACHES TO CUSTOMER BASE SEGMENTATION FOR SMALL AND MEDIUM-SIZED ENTERPRISES

    Directory of Open Access Journals (Sweden)

    Meleancă Raluca-Cristina

    2012-12-01

Full Text Available The primary purpose of this paper is to explore current praxis and theory related to customer segmentation and to offer an approach which is best suited for small and medium-sized enterprises. The proposed solution is the result of an exploratory research aiming to recognize the main variables which influence the practice of segmenting the customer base and to study the most applied alternatives available for all types of enterprises. The research has been performed by studying a large set of secondary data, scientific literature and case studies regarding smaller companies from the European Union. The result of the research consists in an original approach to customer base segmentation, which combines aspects belonging to different well-spread practices and applies them to the specific needs of a small or medium company, which typically has limited marketing resources in general and targeted marketing resources in particular. The significance of the proposed customer base segmentation approach lies in the fact that, even though smaller enterprises are in most economies the greatest in number compared to large companies, most of the literature on targeting practices has focused primarily on big companies dealing with a very large clientele, while the case of smaller companies has been to some extent unfairly neglected. Targeted marketing is becoming more and more important for all types of companies nowadays, as a result of technology advances which make targeted communication easier and less expensive than in the past and also due to the fact that broad-based media have decreased their impact over the years. For a very large proportion of smaller companies, directing their marketing budgets towards targeted campaigns is a clever initiative, as broad-based approaches are in many cases less effective and much more expensive. Targeted marketing stratagems are generally related to high-tech domains such as artificial intelligence, data mining

  8. Strategy-aligned fuzzy approach for market segment evaluation and selection: a modular decision support system by dynamic network process (DNP)

    Science.gov (United States)

    Mohammadi Nasrabadi, Ali; Hosseinpour, Mohammad Hossein; Ebrahimnejad, Sadoullah

    2013-05-01

    In competitive markets, market segmentation is a critical point of business, and it can be used as a generic strategy. In each segment, strategies lead companies to their targets; thus, segment selection and the application of the appropriate strategies over time are very important to achieve successful business. This paper aims to model a strategy-aligned fuzzy approach to market segment evaluation and selection. A modular decision support system (DSS) is developed to select an optimum segment with its appropriate strategies. The suggested DSS has two main modules. The first one is SPACE matrix which indicates the risk of each segment. Also, it determines the long-term strategies. The second module finds the most preferred segment-strategies over time. Dynamic network process is applied to prioritize segment-strategies according to five competitive force factors. There is vagueness in pairwise comparisons, and this vagueness has been modeled using fuzzy concepts. To clarify, an example is illustrated by a case study in Iran's coffee market. The results show that success possibility of segments could be different, and choosing the best ones could help companies to be sure in developing their business. Moreover, changing the priority of strategies over time indicates the importance of long-term planning. This fact has been supported by a case study on strategic priority difference in short- and long-term consideration.

  9. Risks in surgery-first orthognathic approach: complications of segmental osteotomies of the jaws. A systematic review.

    Science.gov (United States)

    Pelo, S; Saponaro, G; Patini, R; Staderini, E; Giordano, A; Gasparini, G; Garagiola, U; Azzuni, C; Cordaro, M; Foresta, E; Moro, A

    2017-01-01

To date, no systematic review has been undertaken to identify the complications of segmental osteotomies. The aim of the present systematic review was to analyze the type and incidence of complications of segmental osteotomies, as well as the time of subjective and/or clinical onset of intra- and post-operative problems. A search was conducted in two electronic databases (MEDLINE via the PubMed database, and Scopus) for articles published in English between 1 January 2000 and 30 August 2015; only human studies were selected. Case report studies were excluded. Two independent researchers selected the studies and extracted the data. Two studies were selected; four additional publications were recovered from the bibliography search of the selected articles, and one additional article was added through a manual search. The results of this systematic review demonstrate a relatively low rate of complications in segmental osteotomies, suggesting this surgical approach is safe and reliable in routine orthognathic surgery. Owing to the small number of studies included in this systematic review, the rate of complications related to the surgery-first approach may be slightly higher than that associated with traditional orthognathic surgery, since the complication rate of segmental osteotomies must be added to that of basal osteotomies. A surgery-first approach could therefore be considered riskier than a traditional one, but further studies including a greater number of subjects should be conducted to confirm these findings.

  10. A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Yaozhong Luo

    2017-01-01

Full Text Available Ultrasound imaging has become one of the most popular medical imaging modalities, with numerous diagnostic applications. However, ultrasound (US) image segmentation, which is the essential process for further analysis, is a challenging task due to poor image quality. In this paper, we propose a new segmentation scheme that combines both region- and edge-based information in the robust graph-based (RGB) segmentation method. The only interaction required is to select two diagonal points to determine a region of interest (ROI) on the original image. The ROI image is smoothed by a bilateral filter and then contrast-enhanced by histogram equalization. Then, the enhanced image is filtered by pyramid mean shift to improve homogeneity. With optimization by the particle swarm optimization (PSO) algorithm, the RGB segmentation method is performed to segment the filtered image. The segmentation results of our method have been compared with the corresponding results obtained by three existing approaches, and four metrics have been used to measure the segmentation performance. The experimental results show that the method achieves the best overall performance, with the lowest ARE (10.77%), the second highest TPVF (85.34%), and the second lowest FPVF (4.48%).

  11. Psychographic segmentation: A new approach to reaching the Canadian public

    International Nuclear Information System (INIS)

    Guenette, F.

    1992-01-01

    The purpose of this paper is to review the Canadian nuclear industry's public information campaign, which began in 1987, and to describe a new approach to public opinion research that is guiding revised strategy. The authors have begun to implement research-based communications strategy and plan to track its effectiveness through additional, regular public opinion research. The tracking exercise is to fine-tune the campaign, support communications products, and evaluate the overall effectiveness of the strategy

  12. Characterization of a sequential pipeline approach to automatic tissue segmentation from brain MR Images

    International Nuclear Information System (INIS)

    Hou, Zujun; Huang, Su

    2008-01-01

Quantitative analysis of gray matter and white matter in brain magnetic resonance imaging (MRI) is valuable for neuroradiology and clinical practice. Submission of large collections of MRI scans to pipeline processing is increasingly important. We characterized this process and suggest several improvements. To investigate tissue segmentation from brain MR images through a sequential approach, a pipeline was developed that consecutively executes denoising, skull/scalp removal, intensity inhomogeneity correction and intensity-based classification. The denoising phase employs a 3D extension of the BayesShrink method. The inhomogeneity is corrected by an improvement of Dawant et al.'s method with automatic generation of reference points; the N3 method has also been evaluated. Subsequently the brain tissue is segmented into cerebrospinal fluid, gray matter and white matter by a generalized Otsu thresholding technique. Intensive comparisons with other sequential or iterative methods have been carried out using simulated and real images. The sequential approach, with judicious algorithm selection in each stage, is not only advantageous in speed but can also attain segmentation at least as accurate as iterative methods under a variety of noise and inhomogeneity levels. In summary, a sequential approach to tissue segmentation, which consecutively executes wavelet shrinkage denoising, scalp/skull removal, inhomogeneity correction and intensity-based classification, was developed to automatically segment the brain tissue into CSF, GM and WM from brain MR images. This approach is advantageous in several common applications, compared with other pipeline methods. (orig.)
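The final classification stage names a generalized Otsu thresholding technique. A generic two-threshold Otsu (not necessarily the authors' exact generalization) can be brute-forced over the histogram, splitting intensities into three classes such as CSF/GM/WM; the synthetic intensities below are illustrative:

```python
import numpy as np

def otsu_two_thresholds(img, bins=64):
    """Brute-force two-threshold Otsu: maximize the between-class
    variance to split intensities into three classes (e.g. CSF/GM/WM).
    Maximizing sum_k w_k * mu_k**2 is equivalent, since the global
    mean is fixed."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    mids = (edges[:-1] + edges[1:]) / 2
    best, t_best = -1.0, (0.0, 0.0)
    for t1 in range(1, bins - 1):
        for t2 in range(t1 + 1, bins):
            var = 0.0
            for lo, hi in ((0, t1), (t1, t2), (t2, bins)):
                w = p[lo:hi].sum()
                if w > 0:
                    mu = (p[lo:hi] * mids[lo:hi]).sum() / w
                    var += w * mu ** 2
            if var > best:
                best, t_best = var, (mids[t1], mids[t2])
    return t_best

# Synthetic intensities with three well-separated tissue-like modes
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.1, 0.02, 300),
                      rng.normal(0.5, 0.02, 300),
                      rng.normal(0.9, 0.02, 300)])
t1, t2 = otsu_two_thresholds(img)
```

For these three modes the recovered thresholds land in the valleys between them, near 0.3 and 0.7.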

  13. A Novel Approach of Cardiac Segmentation In CT Image Based On Spline Interpolation

    International Nuclear Information System (INIS)

    Gao Yuan; Ma Pengcheng

    2011-01-01

Organ segmentation in CT images is the basis of organ model reconstruction; thus, precisely detecting and extracting the organ boundary is key for reconstruction. In CT images the heart is often adjacent to the surrounding tissues, and the grey-level gradient between them is slight, which makes classical segmentation methods difficult to apply. In this paper we propose a novel algorithm for cardiac segmentation in CT images that combines grey-gradient methods with B-spline interpolation. The algorithm accurately detects the boundaries of the heart while preserving timeliness through fully automatic processing.
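The B-spline half of such an algorithm reduces to evaluating a closed uniform cubic B-spline through detected boundary points. A minimal sketch, with hypothetical control points standing in for gradient-detected boundary samples:

```python
import numpy as np

def cubic_bspline_closed(ctrl, samples_per_span=10):
    """Evaluate a closed uniform cubic B-spline through periodic control
    points. Control points here stand in for gradient-detected boundary
    samples."""
    M = np.array([[-1, 3, -3, 1],
                  [3, -6, 3, 0],
                  [-3, 0, 3, 0],
                  [1, 4, 1, 0]]) / 6.0      # uniform cubic B-spline basis
    n = len(ctrl)
    t = np.linspace(0.0, 1.0, samples_per_span, endpoint=False)
    T = np.stack([t ** 3, t ** 2, t, np.ones_like(t)], axis=1)
    spans = []
    for i in range(n):                       # one span per control point
        P = np.array([ctrl[(i + k) % n] for k in range(4)], float)
        spans.append(T @ M @ P)
    return np.vstack(spans)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
curve = cubic_bspline_closed(square)         # smooth closed curve, 40 points
```

Because the basis weights are non-negative and sum to one, the curve stays inside the convex hull of the control points, which keeps the interpolated boundary well-behaved.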

  14. Dependence-Based Segmentation Approach for Detecting Morpheme Boundaries

    Directory of Open Access Journals (Sweden)

    Ahmed Khorsi

    2017-04-01

Full Text Available The unsupervised morphology processing of emerging, mutating languages has the advantage over human/supervised processing of being more agile. The main drawback is, however, its accuracy. This article describes an unsupervised morpheme identification approach based on an intuitive and formal definition of event dependence. The input is no more than a plain text of the targeted language. Although the original objective of this work was classical Arabic, the test was conducted on an English set as well. Tests on these two languages show very acceptable precision and recall. A deeper refinement of the output achieved 89% precision and 78% recall on Arabic.
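The notion of event dependence can be approximated, very roughly, by bigram transition probabilities: a morpheme boundary is proposed where the probability of the next character given the current one drops. This toy (threshold and corpus are illustrative, and it is far simpler than the paper's measure) recovers the "walk|ed" split:

```python
from collections import Counter

def boundaries(words, text, thresh=0.6):
    """Toy dependence-based segmentation: cut `text` wherever the bigram
    probability P(next char | current char), estimated from a tiny
    corpus, drops below `thresh`. Far simpler than the paper's measure."""
    pairs, singles = Counter(), Counter()
    for w in words:
        for a, b in zip(w, w[1:]):
            pairs[(a, b)] += 1
            singles[a] += 1
    cuts = []
    for i, (a, b) in enumerate(zip(text, text[1:]), start=1):
        p = pairs[(a, b)] / singles[a] if singles[a] else 0.0
        if p < thresh:
            cuts.append(i)
    return cuts

corpus = ["walked", "talked", "walking", "talking"]
cuts = boundaries(corpus, "walked")   # -> [4], i.e. "walk|ed"
```

The transition k→e is uncertain in this corpus (k also precedes i), so its probability dips below the threshold and a boundary is placed there, while the fully predictable transitions inside "walk" and "ed" are kept intact.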

  15. Muscle gap approach under a minimally invasive channel technique for treating long segmental lumbar spinal stenosis: A retrospective study.

    Science.gov (United States)

    Bin, Yang; De Cheng, Wang; Wei, Wang Zong; Hui, Li

    2017-08-01

This study aimed to compare the efficacy of a muscle gap approach under a minimally invasive channel surgical technique with the traditional median approach. In the Orthopedics Department of the Traditional Chinese and Western Medicine Hospital, Tongzhou District, Beijing, 68 cases of lumbar spinal canal stenosis underwent surgery using the muscle gap approach under a minimally invasive channel technique or a median approach between September 2013 and February 2016. Both approaches adopted lumbar spinal canal decompression, intervertebral disk removal, cage implantation, and pedicle screw fixation. The operation time, bleeding volume, postoperative drainage volume, and preoperative and postoperative visual analog scale (VAS) and Japanese Orthopaedic Association (JOA) scores were compared between the 2 groups. All patients were followed up for more than 1 year. No significant difference between the 2 groups was found with respect to age, gender, or surgical segments. No difference was noted in the operation time, intraoperative bleeding volume, preoperative and 1-month postoperative VAS scores, preoperative and 1-month postoperative JOA scores, or 6-month postoperative JOA scores between the 2 groups (P > .05). The amount of postoperative wound drainage (260.90 ± 160 mL vs 447.80 ± 183.60 mL, P < .05) was significantly lower in the muscle gap approach group than in the median approach group. In the muscle gap approach under a minimally invasive channel group, the average drainage volume was reduced by 187 mL, and the average VAS score 6 months after the operation was reduced by an average of 0.48. The muscle gap approach under a minimally invasive channel technique is a feasible method to treat long segmental lumbar spinal canal stenosis. It retains the integrity of the posterior spine complex to the greatest extent, so as to reduce adjacent spinal segmental degeneration and soft tissue trauma. Satisfactory short-term and long-term clinical results were obtained.

  16. A Delaunay Triangulation Approach For Segmenting Clumps Of Nuclei

    International Nuclear Information System (INIS)

    Wen, Quan; Chang, Hang; Parvin, Bahram

    2009-01-01

Cell-based fluorescence imaging assays have the potential to generate massive amounts of data, which require detailed quantitative analysis. Often, as a result of fixation, labeled nuclei overlap and create a clump of cells. However, it is important to quantify the phenotypic readout on a cell-by-cell basis. In this paper, we propose a novel method for decomposing clumps of nuclei using high-level geometric constraints that are derived from low-level features of maximum curvature computed along the contour of each clump. Points of maximum curvature are used as vertices for Delaunay triangulation (DT), which provides a set of edge hypotheses for decomposing a clump of nuclei. Each hypothesis is subsequently tested against a constraint satisfaction network for a near-optimum decomposition. The proposed method is compared with traditional techniques such as the watershed method with/without markers. The experimental results show that our approach can overcome the deficiencies of the traditional methods and is very effective in separating severely touching nuclei.
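The low-level feature feeding the Delaunay triangulation, points of maximum curvature along a clump contour, can be sketched with a discrete turning-angle test; the contour and threshold below are illustrative:

```python
import numpy as np

def high_curvature_points(contour, angle_thresh=1.8):
    """Indices of contour vertices whose discrete turning angle
    (radians) exceeds a threshold; such points would seed the
    Delaunay triangulation."""
    pts = np.asarray(contour, float)
    prev = np.roll(pts, 1, axis=0) - pts     # vector to previous vertex
    nxt = np.roll(pts, -1, axis=0) - pts     # vector to next vertex
    cosang = (prev * nxt).sum(axis=1) / (
        np.linalg.norm(prev, axis=1) * np.linalg.norm(nxt, axis=1))
    turning = np.pi - np.arccos(np.clip(cosang, -1.0, 1.0))
    return np.nonzero(turning > angle_thresh)[0]

# Closed polygon: gentle corners plus one sharp notch at index 3
contour = [(0, 0), (1, 0), (2, 0), (3, 1.8), (4, 0), (5, 0),
           (5, 2), (0, 2)]
idx = high_curvature_points(contour)   # -> [3]
```

On a real clump boundary such notches mark where two nuclei meet, and connecting them (via DT edge hypotheses) yields candidate cut lines.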

  17. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    Science.gov (United States)

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means in this objective function carry a multiplicative factor that estimates the bias field in the transformed domain. Then, the bias field prior is fully used, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved jointly. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.
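    The local-entropy weight can be sketched as follows (a minimal illustration, not the authors' implementation, assuming grey levels normalized to [0, 1]): for each pixel, the Shannon entropy of the grey-level histogram in a small window is computed, so flat regions receive low weight and inhomogeneous regions receive high weight.

    ```python
    import numpy as np

    def local_entropy(image, radius=2, bins=8):
        """Shannon entropy of the grey-level histogram in a (2r+1)^2 window,
        computed per pixel; window size and bin count are illustrative."""
        h, w = image.shape
        out = np.zeros((h, w))
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                patch = image[y0:y1, x0:x1]
                hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
                p = hist / hist.sum()
                p = p[p > 0]
                out[y, x] = -np.sum(p * np.log2(p))
        return out

    # A flat region has zero entropy; a textured/inhomogeneous one does not.
    flat = np.zeros((9, 9))
    noisy = np.random.RandomState(0).rand(9, 9)
    print(local_entropy(flat).max(), local_entropy(noisy).mean())
    ```

    In the model above, this per-pixel entropy serves as the weight in the LGDF energy integral.
    
    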

  18. An Approach to a Comprehensive Test Framework for Analysis and Evaluation of Text Line Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Zoran N. Milivojevic

    2011-09-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates. Because of the mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-line text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for evaluating algorithm efficiency based on the obtained error type classification are proposed. The first is based on the segmentation line error description, while the second one incorporates the well-known signal detection theory. Each of them has different capabilities and conveniences, but they can be used as supplements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe the measurement procedures.

  19. Simulation and real-time analysis of pulse shapes from segmented HPGe-detectors

    Energy Technology Data Exchange (ETDEWEB)

    Schlarb, Michael Christian

    2009-11-17

    is accomplished by searching the simulated signal basis for the best agreement with the experimental signal. The particular challenge lies in the binomial growth of the search space, making an intelligent search algorithm compulsory. In order to reduce the search space, the starting time t₀ for the pulse shapes can be determined independently by a neural network algorithm developed in the scope of this work. The precision of 2-5 ns (FWHM), which is far beyond the sampling time of the digitizers, directly influences the attainable position resolution. For the search of the positions, the so-called 'Fully Informed Particle Swarm' (FIPS) was developed and implemented, and has proved to be very efficient. Depending on the number of interactions, an accurate reconstruction of the positions is accomplished within several µs to a few ms. Data from a simulated (d, p) reaction in inverse kinematics, using a ⁴⁸Ti beam at an energy of 100 MeV impinging on a deuterated titanium target, were used to test the capabilities of the developed PSA algorithms in a realistic setting. In the ideal case of an extensive PSA, an energy resolution of 2.8 keV (FWHM) for the 1382 keV line of ⁴⁹Ti results, but this approach works only on the limited amount of data in which only a single segment has been hit. Selecting the same events, the FIPS-PSA algorithm achieves 3.3 keV with an average computation time of ∼0.9 ms. The extensive grid search, by comparison, takes 27 ms. Including events with multiple hit segments roughly doubles the statistics, and the resolution of FIPS-PSA does not deteriorate significantly at an average computing time of 2.2 ms. (orig.)


  1. A variational approach to liver segmentation using statistics from multiple sources

    Science.gov (United States)

    Zheng, Shenhai; Fang, Bin; Li, Laquan; Gao, Mingqi; Wang, Yi

    2018-01-01

    Medical image segmentation plays an important role in digital medical research, and therapy planning and delivery. However, the presence of noise and low contrast renders automatic liver segmentation an extremely challenging task. In this study, we focus on a variational approach to liver segmentation in computed tomography scan volumes in a semiautomatic and slice-by-slice manner. In this method, one slice is selected and its connected component liver region is determined manually to initialize the subsequent automatic segmentation process. From this guiding slice, we execute the proposed method downward to the last slice and upward to the first one, respectively. A segmentation energy function is proposed by combining the statistical shape prior, global Gaussian intensity analysis, and an enforced local statistical feature under the level set framework. During segmentation, the liver shape is estimated by minimization of this function. The improved Chan-Vese model is used to refine the shape to capture the long and narrow regions of the liver. The proposed method was verified on two independent public databases, the 3D-IRCADb and the SLIVER07. Among all the tested methods, our method yielded the best volumetric overlap error (VOE) of 6.5 ± 2.8%, the best root mean square symmetric surface distance (RMSD) of 2.1 ± 0.8 mm, and the best maximum symmetric surface distance (MSD) of 18.9 ± 8.3 mm on the 3D-IRCADb dataset, and the best average symmetric surface distance (ASD) of 0.8 ± 0.5 mm and the best RMSD of 1.5 ± 1.1 mm on the SLIVER07 dataset, respectively. The results of the quantitative comparison show that the proposed liver segmentation method achieves competitive segmentation performance with state-of-the-art techniques.
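    The overlap metrics reported above can be computed directly from binary masks. The following is a generic sketch of the Dice coefficient and the volumetric overlap error (VOE) on toy 2D masks, not the paper's evaluation code.

    ```python
    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two binary masks."""
        inter = np.logical_and(a, b).sum()
        return 2.0 * inter / (a.sum() + b.sum())

    def voe(a, b):
        """Volumetric overlap error (%): 100 * (1 - |A ∩ B| / |A ∪ B|)."""
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return 100.0 * (1.0 - inter / union)

    # Two 6x6 squares offset by one pixel: 36 voxels each, 25 shared.
    a = np.zeros((10, 10), bool); a[2:8, 2:8] = True
    b = np.zeros((10, 10), bool); b[3:9, 3:9] = True
    print(round(dice(a, b), 3), round(voe(a, b), 2))  # → 0.694 46.81
    ```

    The surface-distance metrics (RMSD, MSD, ASD) additionally require extracting mask boundaries and computing point-to-surface distances, which is omitted here.
    
    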

  2. Allocating time to future tasks: the effect of task segmentation on planning fallacy bias.

    Science.gov (United States)

    Forsyth, Darryl K; Burt, Christopher D B

    2008-06-01

    The scheduling component of the time management process was used as a "paradigm" to investigate the allocation of time to future tasks. In three experiments, we compared task time allocation for a single task with the summed time allocations given for each subtask that made up the single task. In all three, we found that allocated time for a single task was significantly smaller than the summed time allocated to the individual subtasks. We refer to this as the segmentation effect. In Experiment 3, we asked participants to give estimates by placing a mark on a time line, and found that giving time allocations in the form of rounded close approximations probably does not account for the segmentation effect. We discuss the results in relation to the basic processes used to allocate time to future tasks and the means by which planning fallacy bias might be reduced.

  3. Segmented Assimilation Theory and the Life Model: An Integrated Approach to Understanding Immigrants and Their Children

    Science.gov (United States)

    Piedra, Lissette M.; Engstrom, David W.

    2009-01-01

    The life model offers social workers a promising framework to use in assisting immigrant families. However, the complexities of adaptation to a new country may make it difficult for social workers to operate from a purely ecological approach. The authors use segmented assimilation theory to better account for the specificities of the immigrant…

  4. Performance evaluation of 2D and 3D deep learning approaches for automatic segmentation of multiple organs on CT images

    Science.gov (United States)

    Zhou, Xiangrong; Yamada, Kazuma; Kojima, Takuya; Takayama, Ryosuke; Wang, Song; Zhou, Xinxin; Hara, Takeshi; Fujita, Hiroshi

    2018-02-01

    The purpose of this study is to evaluate and compare the performance of modern deep learning techniques for automatically recognizing and segmenting multiple organ regions in 3D CT images. CT image segmentation is one of the important tasks in medical image analysis and is still very challenging. Deep learning approaches have demonstrated the capability of scene recognition and semantic segmentation on natural images and have been used to address segmentation problems in medical images. Although several works have shown promising results for CT image segmentation using deep learning approaches, there is no comprehensive evaluation of deep learning segmentation performance across multiple organs on different portions of CT scans. In this paper, we evaluated and compared the segmentation performance of two deep learning approaches that used 2D and 3D deep convolutional neural networks (CNN), without and with a pre-processing step. A conventional approach representing the state-of-the-art performance of CT image segmentation without deep learning was also used for comparison. A dataset of 240 CT images scanned over different portions of human bodies was used for performance evaluation. Up to 17 types of organ regions in each CT scan were segmented automatically and compared to human annotations using the ratio of intersection over union (IU) as the criterion. The experimental results demonstrated that the IUs of the segmentation results had mean values of 79% and 67%, averaged over the 17 organ types, for the 3D and 2D deep CNNs, respectively. All results of the deep learning approaches showed better accuracy and robustness than the conventional segmentation method based on probabilistic atlas and graph-cut methods. The effectiveness and usefulness of deep learning approaches were demonstrated for solving the multiple-organ segmentation problem on 3D CT images.
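    The intersection-over-union (IU) criterion used above can be sketched per organ label as follows. The label maps and the corruption rate are invented for illustration; this is not the study's evaluation pipeline.

    ```python
    import numpy as np

    def iou(pred, gt):
        """Intersection over union between binary masks of one organ."""
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        return inter / union if union else 1.0

    # Hypothetical label maps: 0 = background, 1..3 = organ labels.
    rng = np.random.RandomState(1)
    gt = rng.randint(0, 4, size=(16, 16))
    pred = gt.copy()
    pred[rng.rand(16, 16) < 0.1] = 0          # corrupt ~10% of pixels

    # Per-organ IoU, then the mean over organs (as in the paper's summary).
    per_organ = [iou(pred == k, gt == k) for k in range(1, 4)]
    print([round(v, 2) for v in per_organ])
    ```

    Averaging such per-organ scores over all organs and scans yields the mean IU figures reported in the abstract.
    
    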

  5. Typology of consumer behavior in times of economic crisis: A segmentation study from Bulgaria

    Directory of Open Access Journals (Sweden)

    Katrandjiev Hristo

    2011-01-01

    This paper presents the second part of results from a survey-based market research study of Bulgarian households. In the first part of the paper, the author analyzes the changes in consumer behavior in times of economic crisis in Bulgaria. Here, the author presents market segmentation from the point of view of consumer behavior changes in times of economic crisis. Four segments (clusters) were discovered and profiled. The similarities/dissimilarities between clusters are presented through the technique of multidimensional scaling (MDS). The research project was planned, organized, and realized within the Scientific Research Program of the University of National and World Economy, Sofia, Bulgaria.

  6. GPU-Accelerated Foreground Segmentation and Labeling for Real-Time Video Surveillance

    Directory of Open Access Journals (Sweden)

    Wei Song

    2016-09-01

    Real-time and accurate background modeling is an important research topic in the fields of remote monitoring and video surveillance. Meanwhile, effective foreground detection is a preliminary requirement and decision-making basis for sustainable energy management, especially in smart meters. The environment monitoring results provide a decision-making basis for energy-saving strategies. For real-time moving object detection in video, this paper applies parallel computing technology to develop a feedback foreground-background segmentation method and a parallel connected component labeling (PCCL) algorithm. In the background modeling method, pixel-wise color histograms in graphics processing unit (GPU) memory are generated from sequential images. If a pixel color in the current image does not lie near the peaks of its histogram, it is segmented as a foreground pixel. From the foreground segmentation results, the PCCL algorithm clusters the foreground pixels into several groups in order to distinguish separate blobs. Because the noisy spots and sparkles in the foreground segmentation results always contain a small quantity of pixels, the small blobs are removed as noise in order to refine the segmentation results. The proposed GPU-based image processing algorithms are implemented using the compute unified device architecture (CUDA) toolkit. The testing results show a significant enhancement in both speed and accuracy.
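    The per-pixel histogram test described above can be sketched on the CPU (the paper's version runs per pixel on the GPU via CUDA): each pixel accumulates a grey-level histogram over past frames, and a pixel whose current bin is rarely seen is marked as foreground. Class name, bin count, and threshold are illustrative.

    ```python
    import numpy as np

    class HistogramBackground:
        """Per-pixel grey-level histogram background model (CPU sketch)."""
        def __init__(self, shape, bins=16):
            self.bins = bins
            self.hist = np.zeros(shape + (bins,), dtype=np.int32)

        def update(self, frame):
            # Map each pixel value in [0, 1) to its histogram bin and count it.
            idx = np.clip((frame * self.bins).astype(int), 0, self.bins - 1)
            h, w = frame.shape
            self.hist[np.arange(h)[:, None], np.arange(w)[None, :], idx] += 1
            return idx

        def segment(self, frame, min_fraction=0.2):
            # Foreground = current bin holds a small fraction of past observations.
            idx = self.update(frame)
            h, w = frame.shape
            counts = self.hist[np.arange(h)[:, None], np.arange(w)[None, :], idx]
            total = self.hist.sum(axis=2)
            return counts / total < min_fraction

    bg = HistogramBackground((8, 8))
    for _ in range(20):                            # learn a static background
        bg.segment(np.full((8, 8), 0.5))
    frame = np.full((8, 8), 0.5)
    frame[2:4, 2:4] = 0.9                          # a moving object enters
    mask = bg.segment(frame)
    print(mask.sum())                              # → 4 foreground pixels
    ```

    The subsequent PCCL step would then group these foreground pixels into connected blobs and discard the small ones as noise.
    
    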

  7. How many segments are necessary to characterize delayed colonic transit time?

    Science.gov (United States)

    Bouchoucha, Michel; Devroede, Ghislain; Bon, Cyriaque; Raynaud, Jean-Jacques; Bejou, Bakhtiar; Benamouzig, Robert

    2015-10-01

    Measuring colonic transit time with radiopaque markers is simple, inexpensive, and very useful in constipated patients. Yet, the algorithm used to identify colonic segments is subjective, rather than founded on prior experimentation. The aim of the present study is to describe a rational way to determine the colonic partition in the measurement of colonic transit time. Colonic transit time was measured in seven segments: ascending colon, hepatic flexure, right and left transverse colon, splenic flexure, descending colon, and rectosigmoid in 852 patients with functional bowel and anorectal disorders. An unsupervised algorithm for modeling Gaussian mixtures served to estimate the number of subgroups from this oversegmented colonic transit time. After that, we performed a k-means clustering that separated the observations into homogenous groups of patients according to their oversegmented colonic transit time. The Gaussian mixture followed by the k-means clustering defined 4 populations of patients: "normal and fast transit" (n = 548) and three groups of patients with delayed colonic transit time "right delay" (n = 82) in which transit is delayed in the right part of the colon, "left delay" (n = 87) with transit delayed in the left part of colon and "outlet constipation" (n = 135) for patients with transit delayed in the terminal intestine. Only 3.7 % of patients were "erroneously" classified in the 4 groups recognized by clustering. This unsupervised analysis of segmental colonic transit time shows that the classical division of the colon and the rectum into three segments is sufficient to characterize delayed segmental colonic transit time.
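    The clustering step can be illustrated with a plain k-means on simulated segmental transit times; the paper additionally fits a Gaussian mixture to choose the number of subgroups, which is not reproduced here. The group means, deviations, and the three-segment split below are invented for illustration.

    ```python
    import numpy as np

    def kmeans(X, init_idx, iters=20):
        """Plain Lloyd k-means; a simplified stand-in for the paper's
        Gaussian-mixture + k-means pipeline."""
        centers = X[list(init_idx)].copy()
        labels = np.zeros(len(X), dtype=int)
        for _ in range(iters):
            # Assign each patient to the nearest cluster center.
            d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            # Recompute each center as the mean of its assigned patients.
            for j in range(len(centers)):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return labels, centers

    # Hypothetical (right colon, left colon, rectosigmoid) transit times
    # in hours for three simulated patient groups.
    rng = np.random.RandomState(42)
    normal = rng.normal([10, 10, 10], 2, (50, 3))
    right_delay = rng.normal([40, 10, 10], 2, (20, 3))
    outlet = rng.normal([10, 10, 45], 2, (20, 3))
    X = np.vstack([normal, right_delay, outlet])

    # Seed one center in each simulated group for a deterministic sketch.
    labels, centers = kmeans(X, init_idx=[0, 50, 70])
    print(len(set(labels.tolist())))               # → 3 clusters
    ```

    With well-separated groups like these, the recovered clusters coincide with the simulated "normal", "right delay", and "outlet" populations.
    
    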

  8. A Multidimensional Environmental Value Orientation Approach to Forest Recreation Area Tourism Market Segmentation

    Directory of Open Access Journals (Sweden)

    Cheng-Ping Wang

    2016-04-01

    This paper uses multidimensional environmental value orientations as the segmentation bases for analyzing a natural destination tourism market of the National Forest Recreation Areas in Taiwan. Cluster analyses identify two segments, Acceptance and Conditionality, within 1870 usable observations. Independent-sample t tests and crosstab analyses are applied to examine these segments' forest value orientations, sociodemographic features, and service demands. The Acceptance group tends to be potential ecotourists, while still recognizing the commercial value of the natural resources. The Conditionality group may not possess a strong sense of ecotourism, given that its favored services can affect the environment. Overall, this article confirms that the use of multidimensional environmental value orientation approaches can generate a comprehensive natural tourist segment comparison that benefits practical management decision making.

  9. Wireless Positioning Based on a Segment-Wise Linear Approach for Modeling the Target Trajectory

    DEFF Research Database (Denmark)

    Figueiras, Joao; Pedersen, Troels; Schwefel, Hans-Peter

    2008-01-01

    Positioning solutions in infrastructure-based wireless networks generally operate by exploiting the channel information of the links between the Wireless Devices and fixed networking Access Points. The major challenge of such solutions is the modeling of both the noise properties of the channel...... measurements and the user mobility patterns. One class of typical human movement patterns is the segment-wise linear approach, which is studied in this paper. Current tracking solutions, such as the Constant Velocity model, hardly handle such segment-wise linear patterns. In this paper we propose...... a segment-wise linear model, called the Drifting Points model. The model results in increased performance when compared with traditional solutions....

  10. Evaluation of a practical expert defined approach to patient population segmentation: a case study in Singapore

    Directory of Open Access Journals (Sweden)

    Lian Leng Low

    2017-11-01

    Abstract Background Segmenting the population into groups that are relatively homogeneous in healthcare characteristics or needs is crucial to facilitate integrated care and resource planning. We aimed to evaluate the feasibility of segmenting the population into discrete, non-overlapping groups using a practical expert- and literature-driven approach. We hypothesized that this approach is feasible utilizing the electronic health record (EHR) in SingHealth. Methods In addition to the well-defined segments of "Mostly healthy", "Serious acute illness but curable", and "End of life" that are also present in the Ministry of Health Singapore framework, patients with chronic diseases were segmented into "Stable chronic disease", "Complex chronic diseases without frequent hospital admissions", and "Complex chronic diseases with frequent hospital admissions". Using the EHR, we applied this framework to all adult patients who had a healthcare encounter in the Singapore Health Services Regional Health System in 2012. ICD-9, ICD-10, and polyclinic codes were used to define chronic diseases, with a comprehensive look-back period of 5 years. Outcomes (hospital admissions, emergency attendances, specialist outpatient clinic attendances, and mortality) were analyzed for the years 2012 to 2015. Results 825,874 patients were included in this study, with the majority being healthy without chronic diseases. The most common chronic disease was hypertension. The "complex chronic diseases with frequent hospital admissions" segment represented 0.6% of the eligible population, but accounted for the highest hospital admissions (4.33 ± 2.12 admissions; p < 0.001) and emergency department (ED) attendances (3.21 ± 3.16 ED visits; p < 0.001) per patient, and a high mortality rate (16%). Patients with metastatic disease accounted for the highest specialist outpatient

  11. AN ADAPTIVE APPROACH FOR SEGMENTATION OF 3D LASER POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Lari

    2012-09-01

    Automatic processing and object extraction from 3D laser point cloud is one of the major research topics in the field of photogrammetry. Segmentation is an essential step in the processing of laser point cloud, and the quality of extracted objects from laser data is highly dependent on the validity of the segmentation results. This paper presents a new approach for reliable and efficient segmentation of planar patches from a 3D laser point cloud. In this method, the neighbourhood of each point is firstly established using an adaptive cylinder while considering the local point density and surface trend. This neighbourhood definition has a major effect on the computational accuracy of the segmentation attributes. In order to efficiently cluster planar surfaces and prevent introducing ambiguities, the coordinates of the origin's projection on each point's best fitted plane are used as the clustering attributes. Then, an octree space partitioning method is utilized to detect and extract peaks from the attribute space. Each detected peak represents a specific cluster of points which are located on a distinct planar surface in the object space. Experimental results show the potential and feasibility of applying this method for segmentation of both airborne and terrestrial laser data.

  12. Automatic segmentation of 4D cardiac MR images for extraction of ventricular chambers using a spatio-temporal approach

    Science.gov (United States)

    Atehortúa, Angélica; Zuluaga, Maria A.; Ourselin, Sébastien; Giraldo, Diana; Romero, Eduardo

    2016-03-01

    An accurate ventricular function quantification is important to support evaluation, diagnosis and prognosis of several cardiac pathologies. However, expert heart delineation, specifically for the right ventricle, is a time-consuming task with high inter- and intra-observer variability. A fully automatic 3D+time heart segmentation framework is herein proposed for short-axis cardiac MRI sequences. This approach estimates the heart using exclusively information from the sequence itself, without tuning any parameters. The proposed framework uses a coarse-to-fine approach, which starts by localizing the heart via spatio-temporal analysis, followed by a segmentation of the basal heart that is then propagated to the apex by using a non-rigid registration strategy. The obtained volume is then refined by estimating the ventricular muscle by locally searching a prior endocardium-pericardium intensity pattern. The proposed framework was applied to 48 patient datasets supplied by the organizers of the MICCAI 2012 Right Ventricle segmentation challenge. Results show the robustness, efficiency and competitiveness of the proposed method both in terms of accuracy and computational load.

  13. A rectangle bin packing optimization approach to the signal scheduling problem in the FlexRay static segment

    Institute of Scientific and Technical Information of China (English)

    Rui ZHAO; Gui-he QIN; Jia-qiao LIU

    2016-01-01

    As the FlexRay communication protocol is extensively used in distributed real-time applications on vehicles, signal scheduling in the FlexRay network becomes a critical issue to ensure the safe and efficient operation of time-critical applications. In this study, we propose a rectangle bin packing optimization approach to schedule communication signals with timing constraints into the FlexRay static segment at minimum bandwidth cost. The proposed approach, which is based on integer linear programming (ILP), supports both slot assignment mechanisms provided by the latest version of the FlexRay specification, namely, the single sender slot multiplexing and multiple sender slot multiplexing mechanisms. Extensive experiments on a synthetic and an automotive X-by-wire system case study demonstrate that the proposed approach achieves well-optimized performance.
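    The bin-packing view of slot assignment can be illustrated with a greedy first-fit-decreasing sketch: signals (payload sizes in bytes, invented here) are packed into fixed-payload static slots. Note that the paper itself solves an ILP and additionally handles timing constraints and slot multiplexing, which this sketch omits.

    ```python
    def first_fit_decreasing(signal_sizes, slot_payload):
        """First-fit-decreasing packing of signals into fixed-size static
        slots; a greedy heuristic, not the paper's ILP formulation."""
        slots = []          # remaining free bytes per allocated slot
        assignment = {}     # signal name -> slot index
        for name, size in sorted(signal_sizes.items(), key=lambda kv: -kv[1]):
            for i, free in enumerate(slots):
                if size <= free:            # fits in an existing slot
                    slots[i] -= size
                    assignment[name] = i
                    break
            else:                           # open a new static slot
                slots.append(slot_payload - size)
                assignment[name] = len(slots) - 1
        return assignment, len(slots)

    # Hypothetical signal payloads (bytes) and an 8-byte slot payload.
    signals = {"s1": 6, "s2": 5, "s3": 4, "s4": 3, "s5": 2}
    assignment, n_slots = first_fit_decreasing(signals, slot_payload=8)
    print(n_slots)  # → 3 slots (20 bytes total, so 3 is also optimal here)
    ```

    Minimizing the number of occupied static slots corresponds to minimizing the bandwidth cost of the static segment.
    
    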

  14. Real-time recursive motion segmentation of video data on a programmable device

    NARCIS (Netherlands)

    Wittebrood, R.B; Haan, de G.

    2001-01-01

    We previously reported on a recursive algorithm enabling real-time object-based motion estimation (OME) of standard definition video on a digital signal processor (DSP). The algorithm approximates the motion of the objects in the image with parametric motion models and creates a segmentation mask by

  15. Sub-nanosecond time-of-flight for segmented silicon detectors

    International Nuclear Information System (INIS)

    Souza, R.T. de; Alexander, A.; Brown, K.; Floyd, B.; Gosser, Z.Q.; Hudan, S.; Poehlman, J.; Rudolph, M.J.

    2011-01-01

    Development of a multichannel time-of-flight system for readout of a segmented, ion-passivated, ion-implanted silicon detector is described. This system provides sub-nanosecond resolution (δt ∼ 370 ps) even for low-energy α particles which deposit E ≤ 7.687 MeV in the detector.

  16. Spatiotemporal Segmentation and Modeling of the Mitral Valve in Real-Time 3D Echocardiographic Images.

    Science.gov (United States)

    Pouch, Alison M; Aly, Ahmed H; Lai, Eric K; Yushkevich, Natalie; Stoffers, Rutger H; Gorman, Joseph H; Cheung, Albert T; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2017-09-01

    Transesophageal echocardiography is the primary imaging modality for preoperative assessment of mitral valves with ischemic mitral regurgitation (IMR). While there are well known echocardiographic insights into the 3D morphology of mitral valves with IMR, such as annular dilation and leaflet tethering, less is understood about how quantification of valve dynamics can inform surgical treatment of IMR or predict short-term recurrence of the disease. As a step towards filling this knowledge gap, we present a novel framework for 4D segmentation and geometric modeling of the mitral valve in real-time 3D echocardiography (rt-3DE). The framework integrates multi-atlas label fusion and template-based medial modeling to generate quantitatively descriptive models of valve dynamics. The novelty of this work is that temporal consistency in the rt-3DE segmentations is enforced during both the segmentation and modeling stages with the use of groupwise label fusion and Kalman filtering. The algorithm is evaluated on rt-3DE data series from 10 patients: five with normal mitral valve morphology and five with severe IMR. In these 10 data series that total 207 individual 3DE images, each 3DE segmentation is validated against manual tracing and temporal consistency between segmentations is demonstrated. The ultimate goal is to generate accurate and consistent representations of valve dynamics that can both visually and quantitatively provide insight into normal and pathological valve function.
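    The temporal-consistency idea — Kalman filtering of valve geometry across frames — can be sketched for a single landmark coordinate with a scalar random-walk Kalman filter. The noise values and the trajectory below are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def kalman_smooth_1d(z, q=1e-2, r=0.05):
        """Scalar Kalman filter with a random-walk state model, applied to
        one noisy landmark coordinate over the cardiac cycle."""
        x, p = z[0], 1.0          # initial state estimate and variance
        out = [x]
        for zk in z[1:]:
            p = p + q             # predict: add process (motion) noise
            k = p / (p + r)       # Kalman gain
            x = x + k * (zk - x)  # update with the new measurement
            p = (1 - k) * p
            out.append(x)
        return np.array(out)

    rng = np.random.RandomState(0)
    truth = np.sin(np.linspace(0, np.pi, 200))      # smooth motion
    noisy = truth + rng.normal(0, 0.1, 200)         # per-frame jitter
    smooth = kalman_smooth_1d(noisy)
    print(round(float(np.abs(smooth - noisy).mean()), 3))
    ```

    In the paper's framework, filtering of this kind is applied jointly with groupwise label fusion so that the per-frame segmentations vary smoothly over the rt-3DE sequence.
    
    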

  17. Different approaches to synovial membrane volume determination by magnetic resonance imaging: manual versus automated segmentation

    DEFF Research Database (Denmark)

    Østergaard, Mikkel

    1997-01-01

    Automated fast (5-20 min) synovial membrane volume determination by MRI, based on pre-set post-gadolinium-DTPA enhancement thresholds, was evaluated as a substitute for a time-consuming (45-120 min), previously validated, manual segmentation method. Twenty-nine knees [rheumatoid arthritis (RA) 13...

  18. A Real-Time Solution to the Image Segmentation Problem: CNN-Movels

    OpenAIRE

    Iannizzotto, Giancarlo; Lanzafame, Pietro; Rosa, Francesco La

    2007-01-01

    In this work we have described a re-formulation of a 2D still-image segmentation algorithm, implemented on a single-layer CNN, previously proposed (Iannizzotto, 2003). This algorithm is able to step over limitations inherent to the class of active contours: sensitivity to insignificant false edges or "edge fragmentation". The approach features an iterative process of uniform shrinking and deformation of the active contour. Guided by statistical properties of edgeness of the image pixels, the c...

  19. Real-time biscuit tile image segmentation method based on edge detection.

    Science.gov (United States)

    Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter

    2018-05-01

    In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from a ceramic tile production line. The BTS method is based on signal change detection and contour tracing, with the main goal of separating tile pixels from the background in images captured on the production line. Usually, human operators visually inspect and classify produced ceramic tiles. Computer vision and image processing techniques can automate the visual inspection process if they fulfill real-time requirements. An important step in this process is real-time tile pixel segmentation. The BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of the tile production line. The BTS method outperforms 2D threshold-based methods, 1D edge detection methods and contour-based methods. The proposed BTS method is in use in the biscuit tile production line. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Segmenting healthcare terminology users: a strategic approach to large scale evolutionary development.

    Science.gov (United States)

    Price, C; Briggs, K; Brown, P J

    1999-01-01

    Healthcare terminologies have become larger and more complex, aiming to support a diverse range of functions across the whole spectrum of healthcare activity. Prioritization of development, implementation and evaluation can be achieved by regarding the "terminology" as an integrated system of content-based and functional components. Matching these components to target segments within the healthcare community, supports a strategic approach to evolutionary development and provides essential product differentiation to enable terminology providers and systems suppliers to focus on end-user requirements.

  1. A benefit segmentation approach for innovation-oriented university-business collaboration

    DEFF Research Database (Denmark)

    Kesting, Tobias; Gerstlberger, Wolfgang; Baaken, Thomas

    2018-01-01

    Increasing competition in the light of globalisation imposes challenges on both academia and businesses. Universities have to compete for additional financial means, while companies, particularly in high-technology business environments, are facing stronger pressure to innovate. Universities seek...... to deal with this situation by academic engagement, hereby providing external research support for businesses. Relying on the market segmentation approach, promoting beneficial exchange relations between academia and businesses enables the integration of both perspectives and may contribute to solving......

  2. Volume measurements of individual muscles in human quadriceps femoris using atlas-based segmentation approaches.

    Science.gov (United States)

    Le Troter, Arnaud; Fouré, Alexandre; Guye, Maxime; Confort-Gouny, Sylviane; Mattei, Jean-Pierre; Gondin, Julien; Salort-Campana, Emmanuelle; Bendahan, David

    2016-04-01

    Atlas-based segmentation is a powerful method for automatic structural segmentation of several sub-structures in many organs. However, such an approach has been very scarcely used in the context of muscle segmentation, and so far no study has assessed such a method for the automatic delineation of individual muscles of the quadriceps femoris (QF). In the present study, we have evaluated a fully automated multi-atlas method and a semi-automated single-atlas method for the segmentation and volume quantification of the four muscles of the QF and for the QF as a whole. The study was conducted in 32 young healthy males, using high-resolution magnetic resonance images (MRI) of the thigh. The multi-atlas-based segmentation method was conducted in 25 subjects. Different non-linear registration approaches based on free-form deformation (FFD) and symmetric diffeomorphic normalization (SyN) algorithms were assessed. Optimal parameters of two fusion methods, i.e., STAPLE and STEPS, were determined on the basis of the highest Dice similarity index (DSI), considering manual segmentation (MSeg) as the ground truth. Validation and reproducibility of this pipeline were determined using another MRI dataset recorded in seven healthy male subjects on the basis of additional metrics such as the muscle volume similarity values, intraclass correlation coefficient, and coefficient of variation. Both non-linear registration methods (FFD and SyN) were also evaluated as part of a single-atlas strategy in order to assess longitudinal muscle volume measurements. The multi- and the single-atlas approaches were compared for the segmentation and the volume quantification of the four muscles of the QF and for the QF as a whole. Considering each muscle of the QF, the DSI of the multi-atlas-based approach was high (0.87 ± 0.11) and the best results were obtained with the combination of two deformation fields resulting from the SyN registration method and the STEPS fusion algorithm. The optimal variables for FFD
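    The Dice similarity index used throughout this evaluation is simple to compute from two binary masks; a minimal sketch (the toy masks are invented for illustration):

```python
import numpy as np

def dice(a, b):
    """Dice similarity index between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((8, 8), bool); auto[2:6, 2:6] = True      # automated mask, 16 px
manual = np.zeros((8, 8), bool); manual[3:7, 2:6] = True  # "MSeg" mask shifted 1 row
print(dice(auto, manual))  # → 0.75
```

    A DSI of 1 means perfect overlap with the manual ground truth; the reported 0.87 ± 0.11 indicates strong but imperfect agreement per muscle.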

  3. Time management: a realistic approach.

    Science.gov (United States)

    Jackson, Valerie P

    2009-06-01

    Realistic time management and organization plans can improve productivity and the quality of life. However, these skills can be difficult to develop and maintain. The key elements of time management are goals, organization, delegation, and relaxation. The author addresses each of these components and provides suggestions for successful time management.

  4. Enhancement of nerve structure segmentation by a correntropy-based pre-image approach

    Directory of Open Access Journals (Sweden)

    J. Gil-González

    2017-05-01

    Full Text Available Peripheral Nerve Blocking (PNB) is a commonly used technique for performing regional anesthesia and managing pain. PNB comprises the administration of anesthetics in the proximity of a nerve. In this sense, the success of PNB procedures depends on an accurate location of the target nerve. Recently, ultrasound images (UI) have been widely used to locate nerve structures for PNB, since they enable a noninvasive visualization of the target nerve and the anatomical structures around it. However, UI are affected by speckle noise, which makes it difficult to accurately locate a given nerve. Thus, it is necessary to perform a filtering step to attenuate the speckle noise without eliminating relevant anatomical details that are required for high-level tasks, such as segmentation of nerve structures. In this paper, we propose a UI improvement strategy based on a pre-image filter. In particular, we map the input images by a nonlinear function (kernel). Specifically, we employ a correntropy-based mapping as the kernel functional to code higher-order statistics of the input data under both nonlinear and non-Gaussian conditions. We validate our approach on a UI dataset focused on nerve segmentation for PNB. Likewise, our Correntropy-based Pre-Image Filtering (CPIF) is applied as a pre-processing stage to segment nerve structures in UI. The segmentation performance is measured in terms of the Dice coefficient. According to the results, we observe that CPIF finds a suitable approximation for UI by highlighting discriminative nerve patterns.
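    The correntropy functional the filter builds on can be sketched with a minimal empirical estimate under a Gaussian kernel (the kernel width `sigma` and the test signals are assumed for illustration; the paper's pre-image step itself is not reproduced here):

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Empirical correntropy of two signals under a Gaussian kernel:
    V(x, y) = mean(exp(-(x_i - y_i)^2 / (2 sigma^2))).
    Higher-order statistics of the error enter through the kernel expansion."""
    return np.mean(np.exp(-((x - y) ** 2) / (2.0 * sigma ** 2)))

x = np.array([0.0, 1.0, 2.0])
print(correntropy(x, x))                 # identical signals → 1.0
print(correntropy(x, x + 10.0) < 0.01)   # large discrepancy → near 0
```

    Unlike mean squared error, large outliers (e.g., speckle spikes) saturate the kernel instead of dominating the measure, which is why correntropy is attractive under non-Gaussian noise.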

  5. A Novel Approach for Bi-Level Segmentation of Tuberculosis Bacilli Based on Meta-Heuristic Algorithms

    Directory of Open Access Journals (Sweden)

    AYAS, S.

    2018-02-01

    Full Text Available Image thresholding is the most crucial step in microscopic image analysis to distinguish the bacilli objects that cause tuberculosis. Therefore, several bi-level thresholding algorithms are widely used to increase bacilli segmentation accuracy. However, the bi-level microscopic image thresholding problem has not previously been solved using optimization algorithms. This paper introduces a novel approach to the segmentation problem using heuristic algorithms and presents visual and quantitative comparisons of heuristic and state-of-the-art thresholding algorithms. In this study, well-known heuristic algorithms such as the Firefly Algorithm, Particle Swarm Optimization, Cuckoo Search, and Flower Pollination are used to solve the bi-level microscopic image thresholding problem, and the results are compared with state-of-the-art thresholding algorithms such as K-Means, Fuzzy C-Means, and Fast Marching. Kapur's entropy is chosen as the entropy measure to be maximized. Experiments are performed to make comparisons in terms of evaluation metrics and execution time. The quantitative results are calculated based on ground-truth segmentation. According to the visual results, the heuristic algorithms have better performance, and the quantitative results are in accord with the visual results. Furthermore, execution-time comparisons show the superiority and effectiveness of the heuristic algorithms over the traditional thresholding algorithms.
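    Kapur's entropy criterion can be illustrated with a minimal sketch; exhaustive search over the 256 gray levels stands in for the heuristic optimizers compared in the paper (which matter when the search space is larger, e.g., multi-level thresholding):

```python
import numpy as np

def kapur_threshold(image, levels=256):
    """Bi-level threshold maximizing Kapur's entropy H(t) = H_0(t) + H_1(t),
    the sum of the Shannon entropies of the two normalized class histograms.
    Exhaustive search replaces the heuristic optimizers for transparency."""
    hist = np.bincount(image.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, levels - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0 = p[:t][p[:t] > 0] / p0          # class-conditional probabilities
        q1 = p[t:][p[t:] > 0] / p1
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# toy bimodal "image": background at 40, bacilli at 200
img = np.concatenate([np.full(500, 40), np.full(500, 200)]).astype(np.uint8)
print(kapur_threshold(img))
```

    A heuristic such as PSO would evaluate the same entropy objective, but only at a small population of candidate thresholds per iteration.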

  6. A Kinect-Based Segmentation of Touching-Pigs for Real-Time Monitoring

    Directory of Open Access Journals (Sweden)

    Miso Ju

    2018-05-01

    Full Text Available Segmenting touching-pigs in real time is an important issue for surveillance cameras intended for the 24-h tracking of individual pigs. However, methods to do so have not yet been reported. We particularly focus on the segmentation of touching-pigs in a crowded pig room with low-contrast images obtained using a Kinect depth sensor. We reduce the execution time by combining object detection techniques based on a convolutional neural network (CNN) with image processing techniques, instead of applying time-consuming operations such as optimization-based segmentation. We first apply the fastest CNN-based object detection technique (i.e., You Only Look Once, YOLO) to solve the separation problem for touching-pigs. If the quality of the YOLO output is not satisfactory, we then try to find the possible boundary line between the touching-pigs by analyzing their shape. Our experimental results show that this method is effective at separating touching-pigs in terms of both accuracy (i.e., 91.96%) and execution time (i.e., real-time execution), even with low-contrast images obtained using a Kinect depth sensor.

  7. Approaches to groundwater travel time

    International Nuclear Information System (INIS)

    Kaplan, P.; Klavetter, E.; Peters, R.

    1989-01-01

    One of the objectives of performance assessment for the Yucca Mountain Project is to estimate the groundwater travel time at Yucca Mountain, Nevada, to determine whether the site complies with the criteria specified in the Code of Federal Regulations, Title 10 CFR 60.113 (a). The numerical standard for performance in these criteria is based on the groundwater travel time along the fastest path of likely radionuclide transport from the disturbed zone to the accessible environment. The concept of groundwater travel time, as proposed in the regulations, does not have a unique mathematical statement. The purpose of this paper is to discuss the ambiguities associated with the regulatory specification of groundwater travel time, two different interpretations of groundwater travel time, and the effect of the two interpretations on estimates of the groundwater travel time.

  9. Segmentation of Brain Lesions in MRI and CT Scan Images: A Hybrid Approach Using k-Means Clustering and Image Morphology

    Science.gov (United States)

    Agrawal, Ritu; Sharma, Manisha; Singh, Bikesh Kumar

    2018-04-01

    Manual segmentation and analysis of lesions in medical images is time-consuming and subject to human error. Automated segmentation has thus gained significant attention in recent years. This article presents a hybrid approach for brain lesion segmentation in different imaging modalities by combining median filtering, k-means clustering, Sobel edge detection and morphological operations. Median filtering is an essential pre-processing step, used to remove impulsive noise from the acquired brain images, followed by k-means segmentation, Sobel edge detection and morphological processing. The performance of the proposed automated system is tested on standard datasets using performance measures such as segmentation accuracy and execution time. The proposed method achieves a high accuracy of 94% when compared with manual delineation performed by an expert radiologist. Furthermore, statistical tests comparing lesions segmented by the automated approach with expert delineations, using ANOVA and the correlation coefficient, yielded high values of 0.986 and 1, respectively. The experimental results obtained are discussed in light of some recently reported studies.
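    A minimal sketch of such a hybrid pipeline, assuming a bright-lesion image and a two-cluster intensity model; the Sobel edge step (used in the paper for boundary extraction) is omitted, and the toy image and parameters are invented:

```python
import numpy as np
from scipy import ndimage as ndi

def segment_lesion(image, k=2, iters=10):
    """Hybrid pipeline sketch: median pre-filter, 1-D k-means on intensities,
    then morphological opening to clean up the brightest cluster."""
    smooth = ndi.median_filter(image.astype(float), size=3)  # impulse-noise removal
    centers = np.linspace(smooth.min(), smooth.max(), k)     # k-means initialization
    for _ in range(iters):
        labels = np.argmin(np.abs(smooth[..., None] - centers), axis=-1)
        centers = np.array([smooth[labels == c].mean() if (labels == c).any()
                            else centers[c] for c in range(k)])
    lesion = labels == np.argmax(centers)                    # brightest cluster
    return ndi.binary_opening(lesion, structure=np.ones((3, 3)))

img = np.zeros((20, 20)); img[5:12, 5:12] = 1.0  # synthetic "lesion"
img[0, 0] = 1.0                                  # isolated impulse-noise pixel
mask = segment_lesion(img)
print(mask[8, 8], mask[0, 0])  # lesion kept, noise pixel rejected
```

    The median filter and the morphological opening each suppress a different kind of error: impulsive pixel noise before clustering, and small spurious cluster fragments after it.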

  10. A fully automatic approach for multimodal PET and MR image segmentation in gamma knife treatment planning.

    Science.gov (United States)

    Rundo, Leonardo; Stefano, Alessandro; Militello, Carmelo; Russo, Giorgio; Sabini, Maria Gabriella; D'Arrigo, Corrado; Marletta, Francesco; Ippolito, Massimo; Mauri, Giancarlo; Vitabile, Salvatore; Gilardi, Maria Carla

    2017-06-01

    Nowadays, clinical practice in Gamma Knife treatments is generally based on MRI anatomical information alone. However, the joint use of MRI and PET images can be useful for considering both anatomical and metabolic information about the lesion to be treated. In this paper we present a co-segmentation method to integrate the segmented Biological Target Volume (BTV), using [11C]-Methionine-PET (MET-PET) images, and the segmented Gross Target Volume (GTV), on the respective co-registered MR images. The resulting volume gives enhanced brain tumor information to be used in stereotactic neuro-radiosurgery treatment planning. GTV often does not match entirely with BTV, which provides metabolic information about brain lesions. For this reason, PET imaging is valuable, and it could be used to provide complementary information useful for treatment planning. In this way, BTV can be used to modify GTV, enhancing Clinical Target Volume (CTV) delineation. A novel fully automatic multimodal PET/MRI segmentation method for Leksell Gamma Knife® treatments is proposed. This approach improves and combines two computer-assisted and operator-independent single-modality methods, previously developed and validated, to segment BTV and GTV from PET and MR images, respectively. In addition, the GTV is utilized to combine the superior contrast of PET images with the higher spatial resolution of MRI, obtaining a new BTV, called BTV_MRI. A total of 19 brain metastatic tumors treated with stereotactic neuro-radiosurgery were retrospectively analyzed. A framework for the evaluation of multimodal PET/MRI segmentation is also presented. Overlap-based and spatial distance-based metrics were considered to quantify the similarity of the PET and MRI segmentation approaches. Statistical analysis was also included to measure correlation among the different segmentation processes. Since it is not possible to define a gold-standard CTV according to both MRI and PET images without treatment response assessment

  11. Fast segmentation of stained nuclei in terabyte-scale, time resolved 3D microscopy image stacks.

    Directory of Open Access Journals (Sweden)

    Johannes Stegmaier

    Full Text Available Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results.
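    The global Otsu thresholding that the transformed images are handed to can be sketched as follows (standard formulation over a histogram, not the authors' parallelized implementation; the toy bimodal image is invented):

```python
import numpy as np

def otsu_threshold(image, levels=256):
    """Otsu's global threshold: maximize the between-class variance
    w0 * w1 * (mu0 - mu1)^2 over all candidate thresholds t."""
    hist = np.bincount(image.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    best_t, best_v = 0, -1.0
    for t in range(1, levels):
        w0, w1 = p[:t].sum(), p[t:].sum()        # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0  # class means
        mu1 = (np.arange(t, levels) * p[t:]).sum() / w1
        v = w0 * w1 * (mu0 - mu1) ** 2
        if v > best_v:
            best_t, best_v = t, v
    return best_t

# toy bimodal "image": background at 50, stained nuclei at 180
img = np.concatenate([np.full(600, 50), np.full(400, 180)]).astype(np.uint8)
print(otsu_threshold(img))
```

    The contribution of the paper is precisely the seed-guided image transform that makes real microscopy data bimodal enough for this simple global step to work.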

  12. Semiautomated segmentation of head and neck cancers in 18F-FDG PET scans: A just-enough-interaction approach.

    Science.gov (United States)

    Beichel, Reinhard R; Van Tol, Markus; Ulrich, Ethan J; Bauer, Christian; Chang, Tangel; Plichta, Kristin A; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M

    2016-06-01

    The purpose of this work was to develop, validate, and compare a highly computer-aided method for the segmentation of hot lesions in head and neck 18F-FDG PET scans. A semiautomated segmentation method was developed, which transforms the segmentation problem into a graph-based optimization problem. For this purpose, a graph structure around a user-provided approximate lesion centerpoint is constructed and a suitable cost function is derived based on local image statistics. To handle frequently occurring situations that are ambiguous (e.g., lesions adjacent to each other versus lesion with inhomogeneous uptake), several segmentation modes are introduced that adapt the behavior of the base algorithm accordingly. In addition, the authors present approaches for the efficient interactive local and global refinement of initial segmentations that are based on the "just-enough-interaction" principle. For method validation, 60 PET/CT scans from 59 different subjects with 230 head and neck lesions were utilized. All patients had squamous cell carcinoma of the head and neck. A detailed comparison with the current clinically relevant standard manual segmentation approach was performed based on 2760 segmentations produced by three experts. Segmentation accuracy measured by the Dice coefficient of the proposed semiautomated and standard manual segmentation approach was 0.766 and 0.764, respectively. This difference was not statistically significant (p = 0.2145). However, the intra- and interoperator standard deviations were significantly lower for the semiautomated method. In addition, the proposed method was found to be significantly faster and resulted in significantly higher intra- and interoperator segmentation agreement when compared to the manual segmentation approach. Lack of consistency in tumor definition is a critical barrier for radiation treatment targeting as well as for response assessment in clinical trials and in clinical oncology decision-making. 
The properties

  14. Examining the Approaches of Customer Segmentation in a Cosmetic Company: A Case Study on L'oreal Malaysia SDN BHD

    OpenAIRE

    Ong, Poh Choo

    2010-01-01

    Purpose – The purpose of this study is to examine the market segmentation approaches available and identify which segmentation approach best suits L’Oreal Malaysia. Design/methodology/approach – Questionnaires were distributed to 80 L’Oreal cosmetic users in Malaysia and 55 completed questionnaires were analyzed. In addition, two interviews were conducted at the L’Oreal Malaysia office and the results were analyzed. Findings – The results were as follows. First, analysis of L’Oreal cos...

  15. Improved radiological/nuclear source localization in variable NORM background: An MLEM approach with segmentation data

    Energy Technology Data Exchange (ETDEWEB)

    Penny, Robert D., E-mail: robert.d.penny@leidos.com [Leidos Inc., 10260 Campus Point Road, San Diego, CA (United States); Crowley, Tanya M.; Gardner, Barbara M.; Mandell, Myron J.; Guo, Yanlin; Haas, Eric B.; Knize, Duane J.; Kuharski, Robert A.; Ranta, Dale; Shyffer, Ryan [Leidos Inc., 10260 Campus Point Road, San Diego, CA (United States); Labov, Simon; Nelson, Karl; Seilhan, Brandon [Lawrence Livermore National Laboratory, Livermore, CA (United States); Valentine, John D. [Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2015-06-01

    A novel approach and algorithm have been developed to rapidly detect and localize both moving and static radiological/nuclear (R/N) sources from an airborne platform. Current aerial systems with radiological sensors are limited in their ability to compensate for variable naturally occurring radioactive material (NORM) background. The proposed approach suppresses the effects of NORM background by incorporating additional information to segment the survey area into regions over which the background is likely to be uniform. The method produces pixelated Source Activity Maps (SAMs) of both target and background radionuclide activity over the survey area. The task of producing the SAMs requires (1) the development of a forward model which describes the transformation of radionuclide activity to detector measurements and (2) the solution of the associated inverse problem. The inverse problem is ill-posed as there are typically fewer measurements than unknowns. In addition, the measurements are subject to Poisson statistical noise. The Maximum-Likelihood Expectation-Maximization (MLEM) algorithm is used to solve the inverse problem as it is well suited for under-determined problems corrupted by Poisson noise. A priori terrain information is incorporated to segment the reconstruction space into regions within which we constrain NORM background activity to be uniform. Descriptions of the algorithm and examples of performance with and without segmentation on simulated data are presented.
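    The MLEM update for a Poisson inverse problem y ~ Poisson(Ax) has a compact multiplicative form; a minimal sketch on a toy noiseless system (the 2×2 response matrix `A` and activities are invented for illustration):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """MLEM iterations for y ~ Poisson(A x):
        x <- x * A^T (y / (A x)) / (A^T 1).
    The multiplicative update preserves non-negativity automatically."""
    x = np.ones(A.shape[1])                # flat non-negative initialization
    sens = A.T @ np.ones(A.shape[0])       # sensitivity image (column sums of A)
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x))) / sens
    return x

# tiny two-pixel "survey": two detector positions with a known forward model
A = np.array([[0.8, 0.2],
              [0.3, 0.7]])
x_true = np.array([5.0, 1.0])   # source activity and background activity
y = A @ x_true                  # noiseless measurements
print(np.round(mlem(A, y), 2))  # → [5. 1.]
```

    The segmentation constraint described above would additionally tie groups of background pixels to a shared activity value, shrinking the number of unknowns in the under-determined case.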

  16. Segmentation of turbo generator and reactor coolant pump vibratory patterns: a syntactic pattern recognition approach

    International Nuclear Information System (INIS)

    Tira, Z.

    1993-02-01

    This study was undertaken in the context of turbogenerator and reactor coolant pump vibration surveillance. Vibration meters are used to monitor equipment condition, and an anomaly will modify the signal mean. At present, the expert system DIVA, developed to automate diagnosis, asks the operator to identify the nature of the pattern change thus indicated. To minimize operator intervention, we need to automate classification on the one hand, and detection and segmentation of the patterns on the other. The purpose of this study is to develop a new automatic system for the segmentation and classification of signals. Segmentation is based on syntactic pattern recognition; for classification, a decision tree is used. The signals to be processed are the rms values of vibrations measured on rotating machines; these signals are randomly sampled. All processing is automatic and no a priori statistical knowledge of the signals is required. Segmentation performance is assessed by tests on vibratory signals. (author). 31 figs

  17. Research on trust calculation of wireless sensor networks based on time segmentation

    Science.gov (United States)

    Su, Yaoxin; Gao, Xiufeng; Qiao, Wenxin

    2017-05-01

    Because wireless sensor networks differ from traditional networks in their characteristics, they readily fall victim to intrusion from compromised nodes. Trust mechanisms are the most effective way to defend against such internal attacks. Aiming at the shortcomings of existing trust mechanisms, a method of calculating trust in wireless sensor networks based on time segmentation is proposed. It improves the security of the network and extends the life of the network.

  18. Filling Landsat ETM+ SLC-off gaps using a segmentation model approach

    Science.gov (United States)

    Maxwell, Susan

    2004-01-01

    The purpose of this article is to present a methodology for filling Landsat Scan Line Corrector (SLC)-off gaps with same-scene spectral data guided by a segmentation model. Failure of the SLC on the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) instrument resulted in a loss of approximately 25 percent of the spectral data. The missing data span across most of the image with scan gaps varying in size from two pixels near the center of the image to 14 pixels along the east and west edges. Even with the scan gaps, the radiometric and geometric qualities of the remaining portions of the image still meet design specifications and therefore contain useful information (see http:// landsat7.usgs.gov for additional information). The U.S. Geological Survey EROS Data Center (EDC) is evaluating several techniques to fill the gaps in SLC-off data to enhance the usability of the imagery (Howard and Lacasse 2004) (PE&RS, August 2004). The method presented here uses a segmentation model approach that allows for same-scene spectral data to be used to fill the gaps. The segment model is generated from a complete satellite image with no missing spectral data (e.g., Landsat 5, Landsat 7 SLCon, SPOT). The model is overlaid on the Landsat SLC-off image, and the missing data within the gaps are then estimated using SLC-off spectral data that intersect the segment boundary. A major advantage of this approach is that the gaps are filled using spectral data derived from the same SLC-off satellite image.
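    The segment-guided fill can be sketched as follows (a simplified stand-in: each gap pixel takes the mean of the valid same-segment pixels, rather than the article's estimate from SLC-off data intersecting the segment boundary; the toy scene is invented):

```python
import numpy as np

def fill_gaps(image, segments, gap_mask):
    """Fill SLC-off-style scan gaps using same-scene spectral data:
    every missing pixel is assigned the mean of the valid pixels in its
    segment, where segment labels come from an intact reference image."""
    filled = image.astype(float).copy()
    for s in np.unique(segments):
        in_seg = segments == s
        valid = in_seg & ~gap_mask          # same-segment pixels that survived
        if valid.any():
            filled[in_seg & gap_mask] = image[valid].mean()
    return filled

segments = np.array([[0, 0, 1, 1]] * 4)          # two segments from a reference scene
image = np.where(segments == 0, 10.0, 50.0)       # true spectral values
gaps = np.zeros_like(segments, bool); gaps[:, 1:3] = True  # simulated scan gap
image[gaps] = 0                                   # missing data
print(fill_gaps(image, segments, gaps))
```

    Because the estimate comes from the same SLC-off acquisition, it avoids the radiometric mismatch that plagues fills taken from a different date.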

  19. Optimization Approach for Multi-scale Segmentation of Remotely Sensed Imagery under k-means Clustering Guidance

    Directory of Open Access Journals (Sweden)

    WANG Huixian

    2015-05-01

    Full Text Available In order to adapt land-cover segmentation to different scales, an optimized approach for multi-scale segmentation under the guidance of k-means clustering is proposed. First, small-scale segmentation and k-means clustering are used to process the original images; then the result of k-means clustering is used to guide the object-merging procedure, in which the Otsu threshold method is used to automatically select the impact factor of k-means clustering; finally, we obtain segmentation results that are applicable to objects at different scales. The FNEA method is taken as an example and segmentation experiments are carried out on a simulated image and a real remote sensing image from the GeoEye-1 satellite; qualitative and quantitative evaluation demonstrates that the proposed method can obtain high-quality segmentation results.

  20. Study of the vocal signal in the amplitude-time representation. Speech segmentation and recognition algorithms

    International Nuclear Information System (INIS)

    Baudry, Marc

    1978-01-01

    This dissertation presents an acoustical and phonetic study of the vocal signal. The complex pattern of the signal is segmented into simple sub-patterns, and each of these sub-patterns may in turn be segmented into still simpler, lower-level patterns. Application of pattern recognition techniques facilitates both this segmentation and the definition of the structural relations between the sub-patterns. In particular, we have developed syntactic techniques in which the context-sensitive rewriting rules are controlled by predicates using parameters evaluated on the sub-patterns themselves. This allows a purely syntactic analysis to be generalized by adding semantic information. The system we describe performs pre-classification and partial identification of the phonemes, as well as accurate detection of each pitch period. The voice signal is analysed directly in the amplitude-time representation. This system has been implemented on a mini-computer and works in real time. (author) [fr

  1. Automatic data-driven real-time segmentation and recognition of surgical workflow.

    Science.gov (United States)

    Dergachyova, Olga; Bouget, David; Huaulmé, Arnaud; Morandi, Xavier; Jannin, Pierre

    2016-06-01

    With the intention of extending the perception and action of surgical staff inside the operating room, the medical community has expressed a growing interest towards context-aware systems. Requiring an accurate identification of the surgical workflow, such systems make use of data from a diverse set of available sensors. In this paper, we propose a fully data-driven and real-time method for segmentation and recognition of surgical phases using a combination of video data and instrument usage signals, exploiting no prior knowledge. We also introduce new validation metrics for assessment of workflow detection. The segmentation and recognition are based on a four-stage process. First, during training, a Surgical Process Model is automatically constructed from data annotations to guide the subsequent process. Second, data samples are described using a combination of low-level visual cues and instrument information. Then, in the third stage, these descriptions are employed to train a set of AdaBoost classifiers capable of distinguishing one surgical phase from others. Finally, AdaBoost responses are used as input to a Hidden semi-Markov Model in order to obtain a final decision. On the MICCAI EndoVis challenge laparoscopic dataset we achieved a precision and a recall of 91% in classification of 7 phases. Compared to the analysis based on one data type only, a combination of visual features and instrument signals allows better segmentation, reduction of the detection delay and discovery of the correct phase order.
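    The final temporal-decoding stage can be illustrated with a plain-HMM Viterbi sketch (a stand-in for the hidden semi-Markov model; the two-phase transition matrix and the per-frame classifier probabilities are invented to show how a sticky temporal model suppresses a one-frame classifier glitch):

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most-likely phase sequence given per-frame classifier log-scores,
    decoded under a Markov model over phases."""
    T, S = log_emit.shape
    delta = log_init + log_emit[0]
    back = np.zeros((T, S), int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans   # scores[i, j]: best path ending in i, then i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):             # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# two phases; self-transitions are favored, so isolated glitches are smoothed out
log_trans = np.log(np.array([[0.8, 0.2], [0.2, 0.8]]))
log_init = np.log(np.array([0.5, 0.5]))
frame_probs = np.array([[0.9, 0.1], [0.9, 0.1], [0.4, 0.6],  # frame 2 is a glitch
                        [0.9, 0.1], [0.1, 0.9], [0.1, 0.9]])
print(viterbi(np.log(frame_probs), log_trans, log_init))  # → [0, 0, 0, 0, 1, 1]
```

    A semi-Markov model additionally places explicit distributions on phase durations, which a plain HMM only approximates through self-transition probabilities.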

  2. Classifying and profiling Social Networking Site users: a latent segmentation approach.

    Science.gov (United States)

    Alarcón-del-Amo, María-del-Carmen; Lorenzo-Romero, Carlota; Gómez-Borja, Miguel-Ángel

    2011-09-01

    Social Networking Sites (SNSs) have shown exponential growth in the last years. The first step for an efficient use of SNSs stems from an understanding of individuals' behaviors within these sites. In this research, we have obtained a typology of SNS users through a latent segmentation approach, based on the frequency with which users perform different activities within the SNSs, sociodemographic variables, experience in SNSs, and dimensions related to their interaction patterns. Four different segments have been obtained. The "introvert" and "novel" users are the more occasional; they utilize SNSs mainly to communicate with friends, although "introverts" are more passive users. The "versatile" user performs different activities, although occasionally. Finally, the "expert-communicator" performs a greater variety of activities with a higher frequency, and tends to perform marketing-related activities such as commenting on ads or gathering information about products and brands. Companies can take advantage of these segmentation schemes in different ways: first, by tracking and monitoring information interchange between users regarding their products and brands. Second, they should match the SNS users' profiles with their market targets to use SNSs as marketing tools. Finally, for most businesses, the expert users could be interesting opinion leaders and potential brand influencers.

  3. A Segmental Approach with SWT Technique for Denoising the EOG Signal

    Directory of Open Access Journals (Sweden)

    Naga Rajesh

    2015-01-01

    Full Text Available The Electrooculogram (EOG) signal is often contaminated with artifacts and power-line interference during recording. It is essential to denoise the EOG signal for quality diagnosis. The present study deals with denoising noisy EOG signals using the Stationary Wavelet Transformation (SWT) technique by two different approaches, namely, increasing segments of the EOG signal and different equal segments of the EOG signal. For the segmental denoising analysis, an EOG signal was simulated and mixed with controlled noise powers of 5 dB, 10 dB, 15 dB, 20 dB, and 25 dB so as to obtain five different noisy EOG signals. The results obtained after denoising them are extremely encouraging. Root Mean Square Error (RMSE) values between the reference EOG signal and the EOG signals with noise powers of 5 dB, 10 dB, and 15 dB are much lower than those for the 20 dB and 25 dB noise powers. The findings suggest that the SWT technique can be used to denoise noisy EOG signals with noise powers ranging from 5 dB to 15 dB. This technique might be useful in the quality diagnosis of various neurological or eye disorders.
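The core SWT denoising step can be sketched as a one-level stationary (undecimated) Haar wavelet transform with soft thresholding of the detail coefficients. The record does not specify the mother wavelet, decomposition depth, or thresholding rule, so the Haar filters, single level, and soft-threshold choice here are all assumptions.

```python
import math

def swt_haar(x):
    """One-level undecimated (stationary) Haar transform with circular extension."""
    n = len(x)
    s = math.sqrt(2.0)
    approx = [(x[i] + x[(i + 1) % n]) / s for i in range(n)]
    detail = [(x[i] - x[(i + 1) % n]) / s for i in range(n)]
    return approx, detail

def iswt_haar(approx, detail):
    """Invert the undecimated transform by averaging the two shifted reconstructions."""
    n = len(approx)
    s = math.sqrt(2.0)
    rec = []
    for i in range(n):
        a = (approx[i] + detail[i]) / s                        # pair starting at i
        b = (approx[(i - 1) % n] - detail[(i - 1) % n]) / s    # pair ending at i
        rec.append((a + b) / 2.0)
    return rec

def soft(v, t):
    """Soft-threshold a coefficient: shrink toward zero by t."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def denoise(x, thresh):
    """Denoise by thresholding the detail band and reconstructing."""
    a, d = swt_haar(x)
    d = [soft(v, thresh) for v in d]
    return iswt_haar(a, d)
```

With `thresh=0` the round trip is an exact reconstruction, which is a convenient sanity check before tuning the threshold to the noise level.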

  4. Spatial context learning approach to automatic segmentation of pleural effusion in chest computed tomography images

    Science.gov (United States)

    Mansoor, Awais; Casas, Rafael; Linguraru, Marius G.

    2016-03-01

    Pleural effusion is an abnormal collection of fluid within the pleural cavity. Excessive accumulation of pleural fluid is an important biomarker for various illnesses, including congestive heart failure, pneumonia, metastatic cancer, and pulmonary embolism. Quantification of pleural effusion can be indicative of the progression of disease as well as the effectiveness of any treatment being administered. Quantification, however, is challenging due to unpredictable amounts and density of fluid, the complex topology of the pleural cavity, and the similarity in texture and intensity of pleural fluid to the surrounding tissues in computed tomography (CT) scans. Herein, we present an automated method for the segmentation of pleural effusion in CT scans based on spatial context information. The method consists of two stages: first, a probabilistic pleural effusion map is created using multi-atlas segmentation. The probabilistic map assigns a priori probabilities to the presence of pleural fluid at every location in the CT scan. Second, a statistical pattern classification approach is designed to annotate pleural regions using local descriptors based on a priori probabilities and geometrical and spatial features. Thirty-seven CT scans from a diverse patient population containing confirmed cases of minimal to severe amounts of pleural effusion were used to validate the proposed segmentation method. An average Dice coefficient of 0.82685 and a Hausdorff distance of 16.2155 mm were obtained.

  5. Deep convolutional neural network and 3D deformable approach for tissue segmentation in musculoskeletal magnetic resonance imaging.

    Science.gov (United States)

    Liu, Fang; Zhou, Zhaoye; Jang, Hyungseok; Samsonov, Alexey; Zhao, Gengyan; Kijowski, Richard

    2018-04-01

    To describe and evaluate a new fully automated musculoskeletal tissue segmentation method using a deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint. A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN technique called SegNet was applied as the core of the segmentation method to perform high-resolution pixel-wise multi-class tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirably smooth surface for musculoskeletal structures. The fully automated segmentation method was tested using a publicly available knee image data set for comparison with currently used state-of-the-art segmentation methods. The fully automated method was also evaluated on two different data sets, which include morphological and quantitative MR images with different tissue contrasts. The proposed fully automated segmentation method provided good segmentation performance, with segmentation accuracy superior to most state-of-the-art methods on the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphological and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions. The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. The CNN has promising potential applications in musculoskeletal imaging. Magn Reson Med 79:2379-2391, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  6. TLM-Tracker: software for cell segmentation, tracking and lineage analysis in time-lapse microscopy movies.

    Science.gov (United States)

    Klein, Johannes; Leupold, Stefan; Biegler, Ilona; Biedendieck, Rebekka; Münch, Richard; Jahn, Dieter

    2012-09-01

    Time-lapse imaging in combination with fluorescence microscopy techniques enables the investigation of gene regulatory circuits and has uncovered phenomena such as culture heterogeneity. In this context, computational image processing for the analysis of single-cell behaviour plays an increasing role in systems biology and mathematical modelling approaches. Consequently, we developed a software package with a graphical user interface for the analysis of single bacterial cell behaviour. The new software, TLM-Tracker, allows for flexible and user-friendly segmentation, tracking and lineage analysis of microbial cells in time-lapse movies. The software package, including manual, tutorial video and examples, is available as Matlab code or executable binaries at http://www.tlmtracker.tu-bs.de.

  7. CONSIDERING TRAVEL TIME RELIABILITY AND SAFETY FOR EVALUATION OF CONGESTION RELIEF SCHEMES ON EXPRESSWAY SEGMENTS

    Directory of Open Access Journals (Sweden)

    Babak MEHRAN

    2009-01-01

    Full Text Available Evaluation of the efficiency of congestion relief schemes on expressways has generally been based on average travel time analysis. However, road authorities are much more interested in knowing the possible impacts of improvement schemes on safety and travel time reliability prior to implementing them in real conditions. A methodology is presented to estimate travel time reliability based on modeling travel time variations as a function of demand, capacity, and weather conditions. For a subject expressway segment, patterns of demand and capacity were generated for each 5-minute interval over a year by using the Monte-Carlo simulation technique, and accidents were generated randomly according to traffic conditions. A whole-year analysis was performed by comparing demand and available capacity for each scenario, and shockwave analysis was used to estimate the queue length at each time interval. Travel times were estimated from refined speed-flow relationships, and the buffer time index was estimated as a measure of travel time reliability. It was shown that the estimated reliability measures and predicted number of accidents are very close to values observed in empirical data. After validation, the methodology was applied to assess the impact of two alternative congestion relief schemes on a subject expressway segment. One alternative was to open the hard shoulder to traffic during the peak period, while the other was to reduce the peak period demand by 15%. The extent of improvements in travel conditions and safety, as well as the reduction in road users' costs after implementing each improvement scheme, was estimated. It was shown that both strategies can result in up to a 23% reduction in the number of accidents and significant improvements in travel time reliability. Finally, the advantages and challenging issues of selecting each improvement scheme were discussed.
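The buffer time index used here as the reliability measure can be sketched from Monte-Carlo draws of demand and capacity. The toy distributions, the linear delay model, and all numeric parameters below are illustrative assumptions, not the paper's calibrated speed-flow relationships.

```python
import random
import statistics

def buffer_time_index(travel_times):
    """Buffer time index: (95th-percentile travel time - mean) / mean."""
    ordered = sorted(travel_times)
    p95 = ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))]  # nearest-rank
    mean = statistics.mean(travel_times)
    return (p95 - mean) / mean

def simulate_travel_times(n, free_flow=10.0, seed=1):
    """Toy Monte-Carlo demand/capacity draws; delay accrues once demand exceeds capacity."""
    rng = random.Random(seed)
    times = []
    for _ in range(n):
        demand = rng.gauss(90, 15)     # veh per 5-min interval, illustrative
        capacity = rng.gauss(100, 5)   # capacity draw, illustrative
        delay = max(0.0, demand - capacity) * 0.2  # minutes of delay per excess vehicle
        times.append(free_flow + delay)
    return times
```

A congestion relief scheme that raises capacity or trims peak demand shrinks the right tail of the travel time distribution and hence the index.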

  8. Segmentation Method of Time-Lapse Microscopy Images with the Focus on Biocompatibility Assessment

    Czech Academy of Sciences Publication Activity Database

    Soukup, Jindřich; Císař, P.; Šroubek, Filip

    2016-01-01

    Roč. 22, č. 3 (2016), s. 497-506 ISSN 1431-9276 R&D Projects: GA ČR GA13-29225S Grant - others:GA MŠk(CZ) LO1205; GA UK(CZ) 914813/2013; GA UK(CZ) SVV-2016-260332; CENAKVA(CZ) CZ.1.05/2.1.00/01.0024 Institutional support: RVO:67985556 Keywords : phase contrast microscopy * segmentation * biocompatibility assessment * time-lapse * cytotoxicity testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.891, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/soukupj-0460642.pdf

  9. [Determination of total and segmental colonic transit time in constipated children].

    Science.gov (United States)

    Zhang, Shu-cheng; Wang, Wei-lin; Bai, Yu-zuo; Yuan, Zheng-wei; Wang, Wei

    2003-03-01

    To determine the total and segmental colonic transit time of normal Chinese children and to explore its value in childhood constipation. The subjects involved in this study were divided into 2 groups. One group was the control, which had 33 healthy children (21 males and 12 females) aged 2 - 13 years (mean 5 years). The other was the constipation group, which had 25 patients (15 males and 10 females) aged 3 - 14 years (mean 7 years) with constipation according to Benninga's criteria. Written informed consent was obtained from the parents of each subject. In this study the simplified method of radio-opaque markers was used to determine the total gastrointestinal transit time and segmental colonic transit time of the normal and constipated children, and in some of these patients X-ray defecography was also used. The total gastrointestinal transit time (TGITT), right colonic transit time (RCTT), left colonic transit time (LCTT) and rectosigmoid colonic transit time (RSTT) of the normal children were 28.7 +/- 7.7 h, 7.5 +/- 3.2 h, 6.5 +/- 3.8 h and 13.4 +/- 5.6 h, respectively. In the constipated children, the TGITT, LCTT and RSTT were significantly longer than those in controls (92.2 +/- 55.5 h vs 28.7 +/- 7.7 h, P < 0.001; 16.9 +/- 12.6 h vs 6.5 +/- 3.8 h, P < 0.01; 61.5 +/- 29.0 h vs 13.4 +/- 5.6 h, P < 0.001), while the RCTT showed no significant difference. X-ray defecography demonstrated one case each of rectocele, perineal descent syndrome and puborectal muscle syndrome. With the segmental colonic transit time, constipation can be divided into four types: slow-transit constipation, outlet obstruction, mixed type and normal-transit constipation. X-ray defecography can demonstrate the anatomical or dynamic abnormalities within the anorectal area, with which constipation can be further divided into different subtypes, and

  10. Delineating Individual Trees from Lidar Data: A Comparison of Vector- and Raster-based Segmentation Approaches

    Directory of Open Access Journals (Sweden)

    Maggi Kelly

    2013-08-01

    Full Text Available Light detection and ranging (lidar) data is increasingly being used for ecosystem monitoring across geographic scales. This work concentrates on delineating individual trees in topographically complex, mixed conifer forest across California's Sierra Nevada. We delineated individual trees using vector data and a 3D lidar point cloud segmentation algorithm, and using raster data with an object-based image analysis (OBIA) of a canopy height model (CHM). The two approaches are compared to each other and to ground reference data. We used high-density (9 pulses/m2) discrete lidar data and WorldView-2 imagery to delineate individual trees and to classify them by species or species types. We also identified a new method to correct artifacts in a high-resolution CHM. Our main focus was to determine the difference between the two types of approaches and to identify the one that produces more realistic results. We compared the delineations via tree detection, tree heights, and the shape of the generated polygons. The tree height agreement was high between the two approaches and the ground data (r2: 0.93-0.96). Tree detection rates increased for more dominant trees (8-100 percent). The two approaches delineated tree boundaries that differed in shape: the point-cloud approach produced fewer, more complex, and larger polygons that more closely resembled real forest structure.

  11. The impact of policy guidelines on hospital antibiotic use over a decade: a segmented time series analysis.

    Directory of Open Access Journals (Sweden)

    Sujith J Chandy

    Full Text Available Antibiotic pressure contributes to rising antibiotic resistance. Policy guidelines encourage rational prescribing behavior, but their effectiveness in containing antibiotic use needs further assessment. This study therefore assessed the patterns of antibiotic use over a decade and analyzed the impact of different modes of guideline development and dissemination on inpatient antibiotic use. Antibiotic use was calculated monthly as defined daily doses (DDD) per 100 bed days for nine antibiotic groups and overall. This time series compared trends in antibiotic use in five adjacent time periods identified as 'Segments,' divided based on differing modes of guideline development and implementation: Segment 1--Baseline prior to antibiotic guidelines development; Segment 2--During preparation of guidelines and booklet dissemination; Segment 3--Dormant period with no guidelines dissemination; Segment 4--Booklet dissemination of revised guidelines; Segment 5--Booklet dissemination of revised guidelines with intranet access. Regression analysis adapted for segmented time series and adjusted for seasonality assessed changes in the antibiotic use trend. Overall antibiotic use increased at a monthly rate of 0.95 (SE = 0.18), 0.21 (SE = 0.08) and 0.31 (SE = 0.06) for Segments 1, 2 and 3, stabilized in Segment 4 (0.05; SE = 0.10) and declined in Segment 5 (-0.37; SE = 0.11). Segments 1, 2 and 4 exhibited seasonal fluctuations. Pairwise segmented regression adjusted for seasonality revealed a significant drop in monthly antibiotic use of 0.401 (SE = 0.089; p < 0.001) for Segment 5 compared to Segment 4. Most antibiotic groups showed similar trends to overall use. Use of overall and specific antibiotic groups showed varied patterns and seasonal fluctuations. Containment of rising overall antibiotic use was possible during periods of active guideline dissemination. Wider access through the intranet facilitated a significant decline in use. Stakeholders and policy
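The segment-wise trend estimates can be sketched as separate ordinary-least-squares slope fits over each policy period. This is a minimal sketch: the seasonal adjustment and standard errors reported in the study are omitted, and the breakpoint positions are supplied by the caller.

```python
def ols_slope(ts, ys):
    """Closed-form least-squares slope of ys against ts."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    num = sum((t - mt) * (y - my) for t, y in zip(ts, ys))
    den = sum((t - mt) ** 2 for t in ts)
    return num / den

def segment_trends(values, breakpoints):
    """Fit a separate monthly trend (slope) to each segment of a time series.

    values: monthly DDD-per-100-bed-days figures; breakpoints: indices where
    a new policy segment begins.
    """
    bounds = [0] + list(breakpoints) + [len(values)]
    slopes = []
    for lo, hi in zip(bounds, bounds[1:]):
        ts = list(range(lo, hi))
        slopes.append(ols_slope(ts, values[lo:hi]))
    return slopes
```

Comparing adjacent slopes (e.g. a positive trend flattening to near zero, then turning negative) mirrors how the study reads the effect of each dissemination mode.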

  12. STEM employment in the new economy: A labor market segmentation approach

    Science.gov (United States)

    Torres-Olave, Blanca M.

    The present study examined the extent to which the U.S. STEM labor market is stratified in terms of quality of employment. Through a series of cluster analyses and Chi-square tests on data drawn from the 2008 Survey of Income Program Participation (SIPP), the study found evidence of segmentation in the highly-skilled STEM and non-STEM samples, which included workers with a subbaccalaureate diploma or above. The cluster analyses show a pattern consistent with Labor Market Segmentation theory: Higher wages are associated with other primary employment characteristics, including health insurance and pension benefits, as well as full-time employment. In turn, lower wages showed a tendency to cluster with secondary employment characteristics, such as part-time employment, multiple employment, and restricted access to health insurance and pension benefits. The findings also suggest that women have a higher likelihood of being employed in STEM jobs with secondary characteristics. The findings reveal a far more variegated employment landscape than is usually presented in national reports of the STEM workforce. There is evidence that, while STEM employment may be more resilient than non-STEM employment to labor restructuring trends in the new economy, the former is far from immune to secondary labor characteristics. There is a need for ongoing dialogue between STEM education (at all levels), employers, policymakers, and other stakeholders to truly understand not only the barriers to equity in employment relations, but also the mechanisms that create and maintain segmentation and how they may impact women, underrepresented minorities, and the foreign-born.
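The clustering step behind such a segmentation can be illustrated with a plain k-means over worker feature vectors (e.g. wage, hours, benefit indicators). This is a stand-in for the latent-class analysis actually used in the study; the feature encoding and k are illustrative assumptions.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on feature tuples; returns (centers, groups).

    A simplified stand-in for latent segmentation: it illustrates how
    employment profiles separate into clusters, not the study's model.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared Euclidean).
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        # Recompute centers as group means; keep old center for empty groups.
        centers = [tuple(sum(d) / len(g) for d in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups
```

On well-separated profiles (e.g. a high-wage/full-benefits cluster and a low-wage/part-time cluster) the grouping stabilizes within a few iterations.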

  13. Global Kalman filter approaches to estimate absolute angles of lower limb segments.

    Science.gov (United States)

    Nogueira, Samuel L; Lambrecht, Stefan; Inoue, Roberto S; Bortole, Magdo; Montagnoli, Arlindo N; Moreno, Juan C; Rocon, Eduardo; Terra, Marco H; Siqueira, Adriano A G; Pons, Jose L

    2017-05-16

    In this paper we propose the use of global Kalman filters (KFs) to estimate absolute angles of lower limb segments. Standard approaches adopt KFs to improve the performance of inertial sensors based on individual link configurations. In consequence, for a multi-body system like a lower limb exoskeleton, the inertial measurements of one link (e.g., the shank) are not taken into account in the angle estimates of other links (e.g., the foot). Global KF approaches, on the other hand, correlate the collective contribution of all signals from the lower limb segments observed in the state-space model through the filtering process. We present a novel global KF (matricial global KF) relying only on inertial sensor data, and validate both this KF and a previously presented global KF (Markov Jump Linear Systems, MJLS-based KF), which fuses data from inertial sensors and encoders from an exoskeleton. We furthermore compare both methods to the commonly used local KF. The results indicate that the global KFs performed significantly better than the local KF, with average root mean square errors (RMSE) of 0.942° for the MJLS-based KF, 1.167° for the matricial global KF, and 1.202° for the local KFs. Including the data from the exoskeleton encoders also resulted in a significant increase in performance. The results indicate that the current practice of using KFs based on local models is suboptimal. Both the presented KF based on inertial sensor data, as well as our previously presented global approach fusing inertial sensor data with data from exoskeleton encoders, were superior to local KFs. We therefore recommend using global KFs for gait analysis and exoskeleton control.
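The "local" baseline the paper argues against can be sketched as a scalar Kalman filter per segment angle. This is a minimal single-link sketch, assuming a random-walk angle model with illustrative noise variances; the paper's global filters instead stack all segment angles into one state vector so their covariances are estimated jointly.

```python
def kalman_angle(measurements, q=0.01, r=0.25):
    """Scalar Kalman filter: random-walk angle state, noisy absolute-angle measurements.

    q: process noise variance, r: measurement noise variance (both illustrative).
    Returns the filtered angle estimate at every step.
    """
    x, p = measurements[0], 1.0     # state estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += q                      # predict: angle modelled as a random walk
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the new measurement
        p *= (1.0 - k)
        estimates.append(x)
    return estimates
```

In a global filter, `x` and `p` become a vector and a covariance matrix covering every segment, so a shank measurement tightens the foot estimate as well.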

  14. A local contrast based approach to threshold segmentation for PET target volume delineation

    International Nuclear Information System (INIS)

    Drever, Laura; Robinson, Don M.; McEwan, Alexander; Roa, Wilson

    2006-01-01

    Current radiation therapy techniques, such as intensity modulated radiation therapy and three-dimensional conformal radiotherapy, rely on the precise delivery of high doses of radiation to well-defined volumes. CT, the imaging modality most commonly used to determine treatment volumes, cannot, however, easily distinguish between cancerous and normal tissue. The ability of positron emission tomography (PET) to more readily differentiate between malignant and healthy tissues has generated great interest in using PET images to delineate target volumes for radiation treatment planning. At present the accurate geometric delineation of tumor volumes is a subject open to considerable interpretation. The possibility of using a local contrast based approach to threshold segmentation to accurately delineate PET target cross sections is investigated using well-defined cylindrical and spherical volumes. Contrast levels that yield correct volumetric quantification are found to be a function of the activity concentration ratio between target and background, target size, and slice location. Possibilities for clinical implementation are explored along with the limits posed by this form of segmentation.
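Contrast-based thresholding of this kind can be sketched by placing the threshold a fixed fraction of the way between a background estimate and the peak activity. The contrast fraction and the crude background estimate (minimum intensity) below are illustrative; the record's point is precisely that the correct contrast level varies with target/background ratio, target size, and slice location.

```python
def contrast_threshold(image, contrast=0.4):
    """Binarize a 2-D activity map at background + contrast * (peak - background).

    image: list of rows of activity values. The minimum intensity is used as a
    crude background estimate; a real pipeline would sample a background region.
    """
    pixels = [v for row in image for v in row]
    peak, background = max(pixels), min(pixels)
    t = background + contrast * (peak - background)
    return [[1 if v >= t else 0 for v in row] for row in image]
```

Sweeping `contrast` and comparing the segmented cross-section area against a known phantom volume is one way to calibrate the level, in the spirit of the cylindrical and spherical test volumes used here.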

  15. Color Segmentation Approach of Infrared Thermography Camera Image for Automatic Fault Diagnosis

    International Nuclear Information System (INIS)

    Djoko Hari Nugroho; Ari Satmoko; Budhi Cynthia Dewi

    2007-01-01

    Predictive maintenance based on fault diagnosis has become very important nowadays to assure the availability and reliability of a system. The main purpose of this research was to develop computer software for automatic fault diagnosis based on image models acquired from an infrared thermography camera, using a color segmentation approach. This technique detects hot spots in plant equipment. The image acquired from the camera is first converted to the RGB (Red, Green, Blue) image model and then to the CMYK (Cyan, Magenta, Yellow, Key for Black) image model. Assuming that the yellow color in the image represents a hot spot in the equipment, the CMYK image model is then analyzed using a color segmentation model to estimate the fault. The software was implemented in the Borland Delphi 7.0 programming language. Its performance was then tested on 10 input infrared thermography images. The experimental results show that the software is capable of detecting faults automatically, with a success rate of 80% on the 10 input images. (author)
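The RGB-to-CMYK conversion and yellow-based hot-spot masking can be sketched as follows. The conversion is the standard formula; the yellow threshold `y_min` is an illustrative assumption, not a value from the record.

```python
def rgb_to_cmyk(r, g, b):
    """Convert an 8-bit RGB pixel to CMYK fractions in [0, 1] (standard formula)."""
    if r == g == b == 0:
        return 0.0, 0.0, 0.0, 1.0      # pure black: avoid division by zero
    k = 1.0 - max(r, g, b) / 255.0
    c = (1.0 - r / 255.0 - k) / (1.0 - k)
    m = (1.0 - g / 255.0 - k) / (1.0 - k)
    y = (1.0 - b / 255.0 - k) / (1.0 - k)
    return c, m, y, k

def hot_spot_mask(image, y_min=0.8):
    """Flag pixels whose yellow component exceeds y_min (hot spots under the record's assumption)."""
    return [[1 if rgb_to_cmyk(*px)[2] >= y_min else 0 for px in row] for row in image]
```

A pure-yellow pixel (255, 255, 0) maps to Y = 1.0 with K = 0, so it is flagged, while blue or grey pixels are not.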

  16. Classification of semiurban landscapes from very high-resolution satellite images using a regionalized multiscale segmentation approach

    Science.gov (United States)

    Kavzoglu, Taskin; Erdemir, Merve Yildiz; Tonbul, Hasan

    2017-07-01

    In object-based image analysis, obtaining representative image objects is an important prerequisite for a successful image classification. The major threat is the issue of scale selection due to the complex spatial structure of landscapes portrayed as an image. This study proposes a two-stage approach to conduct regionalized multiscale segmentation. In the first stage, an initial high-level segmentation is applied through a "broadscale," and a set of image objects characterizing natural borders of the landscape features are extracted. Contiguous objects are then merged to create regions by considering their normalized difference vegetation index resemblance. In the second stage, optimal scale values are estimated for the extracted regions, and multiresolution segmentation is applied with these settings. Two satellite images with different spatial and spectral resolutions were utilized to test the effectiveness of the proposed approach and its transferability to different geographical sites. Results were compared to those of image-based single-scale segmentation and it was found that the proposed approach outperformed the single-scale segmentations. Using the proposed methodology, significant improvement in terms of segmentation quality and classification accuracy (up to 5%) was achieved. In addition, the highest classification accuracies were produced using fine-scale values.
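The first-stage region-forming step (merging contiguous image objects by NDVI resemblance) can be sketched with a small union-find pass. The object statistics, adjacency pairs, and tolerance below are illustrative assumptions, not values from the study.

```python
def ndvi(nir, red):
    """Normalised difference vegetation index from mean NIR and red band values."""
    return (nir - red) / (nir + red)

def merge_by_ndvi(objects, adjacency, tol=0.1):
    """Union adjacent image objects whose mean-NDVI difference is below tol.

    objects: {id: (mean_nir, mean_red)}; adjacency: iterable of (id_a, id_b)
    pairs of touching objects. Returns {id: region_label}.
    """
    parent = {i: i for i in objects}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for a, b in adjacency:
        if abs(ndvi(*objects[a]) - ndvi(*objects[b])) < tol:
            parent[find(a)] = find(b)
    return {i: find(i) for i in objects}
```

Each resulting region would then get its own optimal scale parameter before the second, finer multiresolution segmentation pass.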

  17. ASSESSING INTERNATIONAL MARKET SEGMENTATION APPROACHES: RELATED LITERATURE AT A GLANCE AND SUGGESTIONS FOR GLOBAL COMPANIES

    OpenAIRE

    Nacar, Ramazan; Uray, Nimet

    2015-01-01

    With the increasing role of globalization, international market segmentation has become a critical success factor for global companies that aim for international market expansion. Despite the numerous methods and bases proposed for international market segmentation, it is still a complex and under-researched area. By considering all these issues, underdeveloped and under-researched international market segmentation bases such as social, cultural, psychol...

  18. Comparison of Lower Limb Segments Kinematics in a Taekwondo Kick. An Approach to the Proximal to Distal Motion

    Directory of Open Access Journals (Sweden)

    Estevan Isaac

    2015-09-01

    Full Text Available In taekwondo, there is a lack of consensus about how the kick sequence occurs. The aim of this study was to analyse the peak velocity (resultant and in each plane) of the kicking lower limb segments (thigh, shank and foot), and the time to reach this peak velocity, during the execution of the roundhouse kick technique. Ten experienced taekwondo athletes (five males and five females; mean age of 25.3 ±5.1 years; mean experience of 12.9 ±5.3 years) participated voluntarily in this study, performing consecutive kicking trials to a target located at their sternum height. Measurements for the kinematic analysis were performed using two 3D force plates and an eight-camera motion capture system. The results showed that the proximal segment reached a lower peak velocity (resultant and in each plane) than the distal segments (except in the frontal plane, where the thigh and shank presented similar values), with the distal segment taking the longest to reach this peak velocity (p < 0.01). Also, at the instant each segment reached its peak velocity, the velocity of the distal segment was higher than that of the proximal one (p < 0.01). This provides evidence about the sequential movement of the kicking lower limb segments. In conclusion, during the roundhouse kick in taekwondo, inter-segment motion seems to be based on a proximo-distal pattern.
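The per-segment quantities compared here (peak resultant velocity and the time at which it occurs) can be computed from sampled 3-D velocities as follows. The 250 Hz sampling period in the example is an assumption, not a value from the study.

```python
def peak_velocity(samples, dt=0.004):
    """Return (peak resultant speed, time of that peak) from 3-D velocity samples.

    samples: list of (vx, vy, vz) tuples for one segment; dt: sampling period
    in seconds (0.004 s, i.e. 250 Hz, is an illustrative assumption).
    """
    speeds = [(vx * vx + vy * vy + vz * vz) ** 0.5 for vx, vy, vz in samples]
    i = max(range(len(speeds)), key=lambda k: speeds[k])
    return speeds[i], i * dt
```

Running this for the thigh, shank, and foot series and comparing both the peak magnitudes and their times is what exposes a proximo-distal pattern: later, larger peaks toward the distal end.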

  19. The Market Concept of the 21st Century: a New Approach to Consumer Segmentation

    Directory of Open Access Journals (Sweden)

    Maria Igorevna Sokolova

    2016-01-01

    Full Text Available World economic development in the 21st century retains the tendencies and contradictions of the previous century. Economic growth in a number of countries and, as a result, growth of consumption coexist with an aggravation of the global problems of the present day. These include not only ecological and climatic changes, which undoubtedly deserve the attention of the world community, but also worsening social problems. Among the latter, the question of poverty takes the central place. Poverty is a universal problem, in whose solution local authorities, international organizations, and commercial and non-commercial structures all take part. The catastrophic state of the fight against this problem cannot be ignored, and it is necessary to look for ways of resolving it not only with existing methods but also by developing new approaches. One of the most significant tendencies in the fight against poverty is the development of commercial enterprises working in the low-income population segment, which through their activity help millions of people worldwide to get out of poverty. In other words, attracting commercial capital by economically justifying the profitability and prospects of investments in companies working in the low-income population segment can be one of the methods for solving the poverty problem effectively. This approach includes this population in economic activity and makes them full-fledged participants in the market, which creates potential for economic growth and is a key step toward getting out of poverty.

  20. Sectional anatomy aid for improvement of decompression surgery approach to vertical segment of facial nerve.

    Science.gov (United States)

    Feng, Yan; Zhang, Yi Qun; Liu, Min; Jin, Limin; Huangfu, Mingmei; Liu, Zhenyu; Hua, Peiyan; Liu, Yulong; Hou, Ruida; Sun, Yu; Li, You Qiong; Wang, Yu Fa; Feng, Jia Chun

    2012-05-01

    The aim of this study was to find a surgical approach to the vertical segment of the facial nerve (VFN) with a relatively wide visual field and a small lesion by studying the location and structure of the VFN with cross-sectional anatomy. High-resolution spiral computed tomographic multiplane reformation was used to reform images parallel to the Frankfort horizontal plane. To locate the VFN, we measured the following distances: from the VFN to the paries posterior of the bony external acoustic meatus on 5 typical multiplane reformation images, and to the promontorium tympani and the root of the tympanic ring on 2 typical images. The mean distances from the VFN to the paries posterior of the bony external acoustic meatus are as follows: 4.47 mm on images showing the top of the external acoustic meatus, 4.20 mm on images with the best view of the window niche, 3.35 mm on images that show the widest external acoustic meatus, 4.22 mm on images with the inferior margin of the sulcus tympanicus, and 5.49 mm on images that show the bottom of the external acoustic meatus. The VFN is approximately 4.20 mm lateral to the promontorium tympani on images with the best view of the window niche and 4.12 mm lateral to the root of the tympanic ring on images with the inferior margin of the sulcus tympanicus. The other results indicate that the area and depth of the surgical wound from the improved approach would be much smaller than those from the typical approach. The surgical approach to the horizontal segment of the facial nerve through the external acoustic meatus and the tympanic cavity could be improved by grinding off the external acoustic meatus to expose the VFN. The VFN can be found by taking the promontorium tympani and the tympanic ring as references. This improvement has high potential to expand the visual field of the facial nerve without significant injury to the patient compared with the typical approach through the mastoid process.

  1. Simplified assessment of segmental gastrointestinal transit time with orally small amount of barium

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Weitang; Zhang, Zhiyong; Liu, Jinbo; Li, Zhen; Song, Junmin; Wu, Changcai [Department of Colorectal Surgery, The First Affiliated Hospital and Institute of Clinical Medicine, Zhengzhou University, 450052 Zhengzhou (China); Wang, Guixian, E-mail: guixianwang@hotmail.com [Department of Colorectal Surgery, The First Affiliated Hospital and Institute of Clinical Medicine, Zhengzhou University, 450052 Zhengzhou (China)

    2012-09-15

    Objective: To determine the effectiveness and advantages of a small amount of barium in the measurement of gastrointestinal transit function in comparison with radio-opaque pellets. Methods: Protocol 1: 8 healthy volunteers (male 6, female 2) with average age 40 ± 6.1 were subjected to the radio-opaque pellet and small-amount barium examinations with an interval of 1 week. Protocol 2: 30 healthy volunteers in group 1 (male 8, female 22) with average age 42.5 ± 8.1 and 50 patients with chronic functional constipation in group 2 (male 11, female 39) with average age 45.7 ± 7.8 were subjected to the small-amount barium examination. The small amount of barium was made of 30 g barium dissolved in a 200 ml breakfast. After taking the breakfast containing barium, subjects were followed with abdominal X-rays at 4, 8, 12, 24, 48, 72, and 96 h until the barium was totally evacuated. Results: The small amount of barium tracked actual chyme or stool transit. The transit time of radio-opaque pellets through the whole gastrointestinal tract was significantly shorter than that of barium (37 ± 8 h vs. 47 ± 10 h, P < 0.05) in healthy people. The transit times of barium in constipation patients were markedly prolonged in the colon (61.1 ± 22 vs. 37.3 ± 11 h, P < 0.01) and rectum (10.8 ± 3.7 vs. 2.3 ± 0.8 h, P < 0.01) compared with non-constipated volunteers. Transit times in individual gastrointestinal segments were also recorded using the small amount of barium, which allowed identifying the subtypes of constipation. Conclusion: The small-amount barium examination is a convenient and low-cost method that provides the most useful and reliable information on the transit function of different gastrointestinal segments and is able to classify the subtypes of slow transit constipation.

  2. Simplified assessment of segmental gastrointestinal transit time with orally small amount of barium

    International Nuclear Information System (INIS)

    Yuan, Weitang; Zhang, Zhiyong; Liu, Jinbo; Li, Zhen; Song, Junmin; Wu, Changcai; Wang, Guixian

    2012-01-01

Objective: To determine the effectiveness and advantages of a small amount of barium in the measurement of gastrointestinal transit function, in comparison with radio-opaque pellets. Methods: Protocol 1: 8 healthy volunteers (6 male, 2 female) with average age 40 ± 6.1 years underwent both the radio-opaque pellet examination and the small-amount barium examination, one week apart. Protocol 2: 30 healthy volunteers in group 1 (8 male, 22 female) with average age 42.5 ± 8.1 years and 50 patients with chronic functional constipation in group 2 (11 male, 39 female) with average age 45.7 ± 7.8 years underwent the small-amount barium examination. The small amount of barium was prepared by dissolving 30 g of barium in a 200 ml breakfast. After taking the barium-containing breakfast, subjects were followed with abdominal X-rays at 4, 8, 12, 24, 48, 72 and 96 h until the barium was totally evacuated. Results: The small amount of barium tracked actual chyme or stool transit. The transit time of radio-opaque pellets through the whole gastrointestinal tract was significantly shorter than that of barium (37 ± 8 h vs. 47 ± 10 h, P < 0.05) in healthy people. The transit times of barium in constipated patients were markedly prolonged in the colon (61.1 ± 22 h vs. 37.3 ± 11 h, P < 0.01) and rectum (10.8 ± 3.7 h vs. 2.3 ± 0.8 h, P < 0.01) compared with non-constipated volunteers. Transit times in individual gastrointestinal segments were also recorded using the small amount of barium, which allowed the subtypes of constipation to be identified. Conclusion: The small-amount barium examination is a convenient and low-cost method that provides useful and reliable information on the transit function of the different gastrointestinal segments and is able to classify the subtypes of slow-transit constipation.

  3. Active Segmentation.

    Science.gov (United States)

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can be either an object or just a part of one. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour (a connected set of boundary edge fragments in the edge map of the scene) around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach differs from current approaches: while existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.

  4. Segmented Spiral Waves and Anti-phase Synchronization in a Model System with Two Identical Time-Delayed Coupled Layers

    International Nuclear Information System (INIS)

    Yuan Guoyong; Yang Shiping; Wang Guangrui; Chen Shigang

    2008-01-01

In this paper, we consider a model system with two identical time-delay-coupled layers. Synchronization and anti-phase synchronization are exhibited in the reactive system without a diffusion term. New segmented spiral waves, which are constituted by many thin strips, are found in each layer of the two identical time-delay-coupled layers; they differ from the segmented spiral waves in a water-in-oil aerosol sodium bis(2-ethylhexyl) sulfosuccinate (AOT) micro-emulsion (ME) (BZ-AOT system), which consist of many small segments. 'Anti-phase spiral wave synchronization' can be realized between the first layer and the second one. For different excitability parameters, we also give the minimum values of the coupling strength needed to generate segmented spiral waves, and the tip orbits of spiral waves in the whole bilayer.

  5. Integrating social marketing into sustainable resource management at Padre Island National Seashore: an attitude-based segmentation approach.

    Science.gov (United States)

    Lai, Po-Hsin; Sorice, Michael G; Nepal, Sanjay K; Cheng, Chia-Kuen

    2009-06-01

High demand for outdoor recreation and increasing diversity among outdoor recreation participants have imposed a great challenge on the National Park Service (NPS), which is tasked with providing open access to quality outdoor recreation while maintaining the ecological integrity of the park system. In addition to the management practices of education and restriction, building a sense of natural resource stewardship among visitors may also strengthen the NPS's ability to respond to this challenge. The purpose of our study is to suggest a segmentation approach that is built on the social marketing framework and aimed at influencing visitor behaviors to support conservation. Attitude toward natural resource management, an indicator of natural resource stewardship, is used as the basis for segmenting park visitors. This segmentation approach is examined based on a survey of 987 visitors to Padre Island National Seashore (PAIS) in Texas in 2003. Results of a K-means cluster analysis identify three visitor segments: Conservation-Oriented, Development-Oriented, and Status Quo visitors. This segmentation solution is verified using respondents' socio-demographic backgrounds, use patterns, experience preferences, and attitudes toward a proposed regulation. Suggestions are provided to better target the three visitor segments and to foster a sense of natural resource stewardship among them.
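The K-means cluster analysis used above can be sketched in plain NumPy; the two-item attitude scores and three latent groups below are synthetic illustrations, not the PAIS survey data:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain K-means: assign each respondent to the nearest centroid,
    then recompute centroids, until assignments stabilize."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # distance of every point to every centroid, shape (n, k)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = centroids.copy()
        for j in range(k):
            members = X[labels == j]
            if len(members):              # keep old centroid if a cluster empties
                new[j] = members.mean(axis=0)
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Synthetic 1-5 attitude scores on two management items, three latent groups
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(4.5, 0.3, (30, 2)),   # conservation-oriented
               rng.normal(1.5, 0.3, (30, 2)),   # development-oriented
               rng.normal(3.0, 0.3, (30, 2))])  # status quo
labels, centroids = kmeans(X, k=3)
print(np.bincount(labels, minlength=3))          # cluster sizes
```

In practice one would also verify the solution against external variables (demographics, use patterns), as the study does.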

  6. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets.

    Science.gov (United States)

    Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing

    2017-03-01

Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for the subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors, and a disjoint-region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, the average Dice overlap ratios for the liver, spleen and kidneys are 96.0, 94.2 and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods, with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrate its potential for clinical usage with high effectiveness, robustness and efficiency.
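The Dice overlap ratio used for the evaluation above is straightforward to compute from two binary masks; a minimal NumPy sketch with toy masks (not the study data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True   # 16 px predicted organ
gt   = np.zeros((8, 8), bool); gt[3:7, 3:7] = True     # 16 px ground truth, 9 px overlap
print(round(dice(pred, gt), 4))                        # → 0.5625  (2*9 / 32)
```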

  7. A clustering approach to segmenting users of internet-based risk calculators.

    Science.gov (United States)

    Harle, C A; Downs, J S; Padman, R

    2011-01-01

Risk calculators are widely available Internet applications that deliver quantitative health risk estimates to consumers. Although these tools are known to have varying effects on risk perceptions, little is known about who will be more likely to accept objective risk estimates. The aim was to identify clusters of online health consumers that help explain variation in individual improvement in risk perceptions from web-based quantitative disease risk information. A secondary analysis was performed on data collected in a field experiment that measured people's pre-diabetes risk perceptions before and after visiting a realistic health promotion website that provided quantitative risk information. K-means clustering was performed on numerous candidate variable sets, and the different segmentations were evaluated based on between-cluster variation in risk perception improvement. Variation in responses to risk information was best explained by clustering on pre-intervention absolute pre-diabetes risk perceptions and an objective estimate of personal risk. Members of a high-risk overestimator cluster showed large improvements in their risk perceptions, but clusters of both moderate-risk and high-risk underestimators were much more muted in improving their optimistically biased perceptions. Cluster analysis provided a unique approach for segmenting health consumers and predicting their acceptance of quantitative disease risk information. These clusters suggest that health consumers were very responsive to good news, but tended not to incorporate bad news into their self-perceptions. These findings help to quantify variation among online health consumers and may inform the targeted marketing of, and improvements to, risk communication tools on the Internet.

  8. An Algorithm for Real-Time Pulse Waveform Segmentation and Artifact Detection in Photoplethysmograms.

    Science.gov (United States)

    Fischer, Christoph; Domer, Benno; Wibmer, Thomas; Penzel, Thomas

    2017-03-01

Photoplethysmography has been used in a wide range of medical devices for measuring oxygen saturation and cardiac output, assessing autonomic function, and detecting peripheral vascular disease. Artifacts can render the photoplethysmogram (PPG) useless; thus, algorithms capable of identifying artifacts are critically important. However, published PPG algorithms are limited in both algorithm and study design. Therefore, the authors developed a novel embedded algorithm for real-time pulse waveform (PWF) segmentation and artifact detection based on contour analysis in the time domain. This paper provides an overview of PWF and artifact classifications, presents the developed PWF analysis, and demonstrates the implementation on a 32-bit ARM core microcontroller. The PWF analysis was validated with data records from 63 subjects acquired in a sleep laboratory, an ergometry laboratory, and an intensive care unit in equal parts. The output of the algorithm was compared with harmonized experts' annotations of the PPG with a total duration of 31.5 h. The algorithm achieved a beat-to-beat comparison sensitivity of 99.6%, specificity of 90.5%, precision of 98.5%, and accuracy of 98.3%. The interrater agreement expressed as Cohen's kappa coefficient was 0.927 and as F-measure was 0.990. In conclusion, the PWF analysis appears to be a suitable method for PPG signal quality determination, real-time annotation, data compression, and calculation of additional pulse wave metrics such as amplitude, duration, and rise time.
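The published method rests on time-domain contour analysis of each pulse waveform. As a much simpler stand-in (not the authors' algorithm), a PPG can be split into candidate beats at its local minima, which approximate pulse onsets; a NumPy sketch on a synthetic pulse train:

```python
import numpy as np

def segment_pulses(ppg, fs):
    """Split a PPG into pulse waveforms at local minima (pulse onsets).
    A sample counts as an onset if it is the strict minimum of a window
    roughly half a heartbeat wide; a crude stand-in for contour analysis."""
    half = int(0.3 * fs)                    # ~minimum inter-beat spacing, assumed
    onsets = []
    for i in range(half, len(ppg) - half):
        window = ppg[i - half:i + half + 1]
        if ppg[i] == window.min() and ppg[i] < ppg[i - 1]:
            onsets.append(i)
    # consecutive onsets delimit one pulse waveform each
    return [(onsets[k], onsets[k + 1]) for k in range(len(onsets) - 1)]

fs = 100
t = np.arange(0, 5, 1 / fs)
ppg = -np.cos(2 * np.pi * 1.2 * t)          # clean synthetic 72 bpm pulse train
segments = segment_pulses(ppg, fs)
print(len(segments))                        # → 4
```

Real PPGs need the artifact handling the paper describes; this sketch only illustrates the waveform-segmentation idea.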

  9. A fourth order PDE based fuzzy c- means approach for segmentation of microscopic biopsy images in presence of Poisson noise for cancer detection.

    Science.gov (United States)

    Kumar, Rajesh; Srivastava, Subodh; Srivastava, Rajeev

    2017-07-01

For cancer detection from microscopic biopsy images, the image segmentation step used to segment cells and nuclei plays an important role, and the accuracy of the segmentation approach dominates the final results. Microscopic biopsy images also carry intrinsic Poisson noise, and if it is present the segmentation results may not be accurate. The objective is to propose an efficient fuzzy c-means based segmentation approach that also handles the noise present in the image during the segmentation process itself, i.e. noise removal and segmentation are combined in one step. To address these issues, this paper proposes a fourth-order partial differential equation (FPDE) based nonlinear filter adapted to Poisson noise, combined with a fuzzy c-means segmentation method. This approach effectively handles the segmentation problem of blocky artifacts while achieving a good tradeoff between Poisson noise removal and edge preservation in the microscopic biopsy images during the segmentation process for cancer detection from cells. The proposed approach is tested on a breast cancer microscopic biopsy data set with region-of-interest (ROI) segmented ground-truth images. The data set contains 31 benign and 27 malignant images of size 896 × 768; ROI-segmented ground truth is available for all 58 images. Finally, the results of the proposed approach are compared with those of popular segmentation algorithms: fuzzy c-means, color k-means, texture-based segmentation, and total variation fuzzy c-means. The experimental results show that the proposed approach provides better results in terms of various performance measures such as Jaccard coefficient, Dice index, Tanimoto coefficient, area under curve, accuracy, true positive rate, true negative rate, false positive rate, false negative rate, Rand index, global consistency error, and variation of information as compared to the other approaches.
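The fuzzy c-means core of such an approach follows the standard alternating updates of memberships and centroids; a minimal NumPy sketch on synthetic 1-D intensities (the fourth-order PDE filtering step is omitted, and the fuzzifier m = 2 is an assumption):

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50, seed=0):
    """Standard fuzzy c-means on 1-D intensities (Bezdek's alternating
    membership/centroid updates); the paper's PDE denoising is not included."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)               # memberships sum to 1 per pixel
    for _ in range(iters):
        v = (u ** m).T @ x / (u ** m).sum(axis=0)   # weighted cluster centroids
        d = np.abs(x[:, None] - v[None, :]) + 1e-12 # distances, guarded against /0
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)    # u_ik ∝ d_ik^(-2/(m-1))
    return u, v

# two intensity populations: dark background vs. brighter nuclei (toy data)
x = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])
u, v = fcm(x)
print(np.round(np.sort(v), 2))
```

On this well-separated toy data the centroids converge to roughly 0.2 and 0.8; pixels are then assigned by their largest membership.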

  10. Novel Burst Suppression Segmentation in the Joint Time-Frequency Domain for EEG in Treatment of Status Epilepticus

    Directory of Open Access Journals (Sweden)

    Jaeyun Lee

    2016-01-01

Full Text Available We developed a method to distinguish bursts and suppressions in EEG burst suppression recorded during treatment of status epilepticus, employing the joint time-frequency domain. The feature used in the proposed method is obtained from the joint use of the time and frequency domains, and the decision as to whether a measured EEG segment is a burst or a suppression is made by maximum likelihood estimation. We evaluated the performance of the proposed method in terms of its agreement with visual scores and its estimation of the burst suppression ratio. The accuracy was higher than with the sole use of the time or frequency domain, as well as with conventional methods operating in the time domain. In addition, the probabilistic modeling provided a simpler optimization than conventional methods. Burst suppression quantification necessitates precise burst suppression segmentation with easy optimization; the excellent discrimination and easy optimization offered by the proposed method therefore appear beneficial.
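Once each EEG sample has been labeled burst or suppression, the burst suppression ratio estimated above is just the fraction of suppressed samples in a sliding window; a small NumPy sketch (the 10 Hz label stream and 60 s window are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def burst_suppression_ratio(is_suppressed, fs, window_s=60):
    """BSR per sliding window: fraction of samples labeled suppression.
    `is_suppressed` is a boolean array with one flag per sample."""
    w = int(window_s * fs)
    kernel = np.ones(w) / w
    # 'valid' convolution = running mean of the suppression flags
    return np.convolve(is_suppressed.astype(float), kernel, mode="valid")

fs = 10                                   # 10 Hz label stream (illustrative)
labels = np.zeros(1200, bool)             # 2 min of labels
labels[0:300] = True                      # first 30 s marked as suppression
bsr = burst_suppression_ratio(labels, fs)
print(round(float(bsr[0]), 2))            # → 0.5  (30 s of the first 60 s window)
```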

  11. A Regions of Confidence Based Approach to Enhance Segmentation with Shape Priors.

    Science.gov (United States)

    Appia, Vikram V; Ganapathy, Balaji; Abufadel, Amer; Yezzi, Anthony; Faber, Tracy

    2010-01-18

We propose an improved region-based segmentation model with shape priors that uses labels of confidence/interest to exclude the influence of certain regions in the image that may not provide useful information for segmentation. These could be regions in the image that are expected to have weak, missing or corrupt edges, or regions that the user is not interested in segmenting but that are part of the object being segmented. In the training datasets, along with the manual segmentations we also generate an auxiliary map indicating these regions of low confidence/interest. Since all the training images are acquired under similar conditions, we can train our algorithm to estimate these regions as well. Based on this training we generate a map that indicates the regions in the image likely to contain no useful information for segmentation. We then use a parametric model to represent the segmenting curve as a combination of shape priors, obtained by representing the training data as a collection of signed distance functions. We minimize an objective energy functional to evolve the global parameters that represent the curve, and we vary the influence each pixel has on the evolution of these parameters based on the confidence/interest label. When we use these labels to indicate regions of low confidence, the regions containing accurate edges play a dominant role in the evolution of the curve, and the segmentation in the low-confidence regions is approximated from the training data. Since our model evolves global parameters, it improves the segmentation even in the regions with accurate edges, because we eliminate the influence of the low-confidence regions that may mislead the final segmentation. Similarly, when we use the labels to indicate regions that are not of importance, we obtain a better segmentation of the object in the regions we are interested in.

  12. Visibility graphlet approach to chaotic time series

    Energy Technology Data Exchange (ETDEWEB)

    Mutua, Stephen [Business School, University of Shanghai for Science and Technology, Shanghai 200093 (China); Computer Science Department, Masinde Muliro University of Science and Technology, P.O. Box 190-50100, Kakamega (Kenya); Gu, Changgui, E-mail: gu-changgui@163.com, E-mail: hjyang@ustc.edu.cn; Yang, Huijie, E-mail: gu-changgui@163.com, E-mail: hjyang@ustc.edu.cn [Business School, University of Shanghai for Science and Technology, Shanghai 200093 (China)

    2016-05-15

    Many novel methods have been proposed for mapping time series into complex networks. Although some dynamical behaviors can be effectively captured by existing approaches, the preservation and tracking of the temporal behaviors of a chaotic system remains an open problem. In this work, we extended the visibility graphlet approach to investigate both discrete and continuous chaotic time series. We applied visibility graphlets to capture the reconstructed local states, so that each is treated as a node and tracked downstream to create a temporal chain link. Our empirical findings show that the approach accurately captures the dynamical properties of chaotic systems. Networks constructed from periodic dynamic phases all converge to regular networks and to unique network structures for each model in the chaotic zones. Furthermore, our results show that the characterization of chaotic and non-chaotic zones in the Lorenz system corresponds to the maximal Lyapunov exponent, thus providing a simple and straightforward way to analyze chaotic systems.
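The paper builds on visibility graphs, which map a time series to a network. Under the standard natural-visibility criterion (two samples are linked if the straight line between them clears every intermediate sample), a minimal sketch is:

```python
def visibility_edges(y):
    """Natural visibility graph: connect (a, b) if every intermediate
    sample c lies strictly below the straight line joining y[a] and y[b]."""
    n = len(y)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            if all(y[c] < y[a] + (y[b] - y[a]) * (c - a) / (b - a)
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges

series = [3.0, 1.0, 2.0, 0.5, 4.0]
edges = visibility_edges(series)
print(sorted(edges))
# e.g. the tall last sample "sees" the first one over all intermediates,
# so (0, 4) is an edge, while (1, 3) is blocked by the peak at index 2.
```

The graphlet approach of the paper goes further, chaining such local structures over reconstructed states; this sketch shows only the basic construction.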

  13. Time-optimized high-resolution readout-segmented diffusion tensor imaging.

    Directory of Open Access Journals (Sweden)

    Gernot Reishofer

Full Text Available Readout-segmented echo planar imaging with 2D navigator-based reacquisition is an emerging technique enabling the sampling of high-resolution diffusion images with reduced susceptibility artifacts. However, low signal from the small voxels and long scan times hamper its clinical applicability. Therefore, we introduce a regularization algorithm based on total variation that is applied directly to the entire diffusion tensor. The spatially varying regularization parameter is determined automatically from spatial variations in the signal-to-noise ratio, thus avoiding over- or under-regularization. Information about the noise distribution in the diffusion tensor is extracted from the diffusion-weighted images by means of complex independent component analysis. Moreover, the combination of these features makes the processing of the diffusion data entirely user-independent. Tractography from in vivo data and from a software phantom demonstrates the advantage of the spatially varying regularization over un-regularized data with respect to parameters relevant for fiber tracking, such as mean fiber length, track count, volume and voxel count. Specifically, for in vivo data the findings suggest that tractography on the regularized diffusion tensor based on one measurement (16 min) generates results comparable to the un-regularized data with three averages (48 min). This significant reduction in scan time renders high-resolution (1 × 1 × 2.5 mm³) diffusion tensor imaging of the entire brain applicable in a clinical context.

  14. A transfer-learning approach to image segmentation across scanners by maximizing distribution similarity

    DEFF Research Database (Denmark)

    van Opbroek, Annegreet; Ikram, M. Arfan; Vernooij, Meike W.

    2013-01-01

    Many successful methods for biomedical image segmentation are based on supervised learning, where a segmentation algorithm is trained based on manually labeled training data. For supervised-learning algorithms to perform well, this training data has to be representative for the target data. In pr...

  15. An approach to melodic segmentation and classification based on filtering with the Haar-wavelet

    DEFF Research Database (Denmark)

    Velarde, Gissel; Weyde, Tillman; Meredith, David

    2013-01-01

    -based segmentation when used to recognize the parent works of segments from Bach’s Two-Part Inventions (BWV 772–786). When used to classify 360 Dutch folk tunes into 26 tune families, the performance of the method is comparable to the use of pitch signals, but not as good as that of string-matching methods based...

  16. A Market Segmentation Approach for Higher Education Based on Rational and Emotional Factors

    Science.gov (United States)

    Angulo, Fernando; Pergelova, Albena; Rialp, Josep

    2010-01-01

    Market segmentation is an important topic for higher education administrators and researchers. For segmenting the higher education market, we have to understand what factors are important for high school students in selecting a university. Extant literature has probed the importance of rational factors such as teaching staff, campus facilities,…

  17. Real-time segmentation of multiple implanted cylindrical liver markers in kilovoltage and megavoltage x-ray images

    International Nuclear Information System (INIS)

    Fledelius, W; Worm, E; Høyer, M; Grau, C; Poulsen, P R

    2014-01-01

Gold markers implanted in or near a tumor can be used as x-ray-visible landmarks for image-based tumor localization. The aim of this study was to develop and demonstrate fast and reliable real-time segmentation of multiple liver tumor markers in intra-treatment kV and MV images and in cone-beam CT (CBCT) projections, for real-time motion management. Thirteen patients treated with conformal stereotactic body radiation therapy in three fractions had 2–3 cylindrical gold markers implanted in the liver prior to treatment. At each fraction, the projection images of a pre-treatment CBCT scan were used for automatic generation of a 3D marker model that consisted of the size, orientation, and estimated 3D trajectory of each marker during the CBCT scan. The 3D marker model was used for real-time template-based segmentation in subsequent x-ray images by projecting each marker's 3D shape and likely 3D motion range onto the imager plane. The segmentation was performed in intra-treatment kV images (526 marker traces, 92 097 marker projections) and MV images (88 marker traces, 22 382 marker projections), and in post-treatment CBCT projections (42 CBCT scans, 71 381 marker projections). 227 kV marker traces with low mean contrast-to-noise ratio were excluded, as the markers were not visible due to MV scatter. Online segmentation times measured for a limited dataset were used to estimate real-time segmentation times for all images. The percentage of detected markers was 94.8% (kV), 96.1% (MV), and 98.6% (CBCT). For the detected markers, the real-time segmentation was erroneous in 0.2–0.31% of cases. The mean segmentation time per marker was 5.6 ms [2.1–12 ms] (kV), 5.5 ms [1.6–13 ms] (MV), and 6.5 ms [1.8–15 ms] (CBCT). Fast and reliable real-time segmentation of multiple liver tumor markers in intra-treatment kV and MV images and in CBCT projections was demonstrated for a large dataset. (paper)

  18. Soft computing approach to 3D lung nodule segmentation in CT.

    Science.gov (United States)

    Badura, P; Pietka, E

    2014-10-01

This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm, mask generation. Its main goal is to handle specific types of nodules connected to the pleura or vessels; it consists of basic image processing operations as well as dedicated routines for these specific cases. The evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC step, the remaining vessels are removed during the postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release, the LIDC-IDRI (Image Database Resource Initiative) database. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Inverse statistical approach in heartbeat time series

    International Nuclear Information System (INIS)

    Ebadi, H; Shirazi, A H; Mani, Ali R; Jafari, G R

    2011-01-01

We present an investigation of heart cycle time series using inverse statistical analysis, a concept borrowed from the study of turbulence. Using this approach, we studied the distribution of the exit times needed to achieve a predefined level of heart rate alteration. Such analysis uncovers the most likely waiting time needed to reach a certain change in the heart rate. This analysis showed a significant difference between the raw data and shuffled data when the heart rate accelerates or decelerates to a rare event. We also report that inverse statistical analysis can distinguish between electrocardiograms taken from healthy volunteers and from patients with heart failure.
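The exit time in inverse statistics is the waiting time, from each starting beat, until the signal has changed by a preset level; the distribution of these times is then examined. A minimal NumPy sketch on a toy random-walk "heart rate" (the threshold and data are illustrative, not the study's):

```python
import numpy as np

def exit_times(rate, rho):
    """For each start index i, the number of steps until the signal has
    moved by at least `rho` from rate[i] (the inverse-statistics waiting time)."""
    times = []
    for i in range(len(rate)):
        moved = np.nonzero(np.abs(rate[i:] - rate[i]) >= rho)[0]
        if moved.size:                       # skip starts that never exit
            times.append(moved[0])
    return np.array(times)

rng = np.random.default_rng(0)
rate = 60 + np.cumsum(rng.normal(0, 1, 2000))  # random-walk heart rate (toy)
tau = exit_times(rate, rho=5.0)
print(int(np.median(tau)))                     # typical scale of the waiting time
```

The histogram of `tau` is what inverse statistics studies; the study compares this distribution between raw and shuffled recordings.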

  20. A combined approach for the enhancement and segmentation of mammograms using modified fuzzy C-means method in wavelet domain

    OpenAIRE

    Srivastava, Subodh; Sharma, Neeraj; Singh, S. K.; Srivastava, R.

    2014-01-01

    In this paper, a combined approach for enhancement and segmentation of mammograms is proposed. In preprocessing stage, a contrast limited adaptive histogram equalization (CLAHE) method is applied to obtain the better contrast mammograms. After this, the proposed combined methods are applied. In the first step of the proposed approach, a two dimensional (2D) discrete wavelet transform (DWT) is applied to all the input images. In the second step, a proposed nonlinear complex diffusion based uns...

  1. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-03-27

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
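The least-Mahalanobis-distance criterion used to update target points in the conventional ASM can be sketched as follows; the learned profile statistics and the candidate gray-level profiles here are made-up numbers for illustration:

```python
import numpy as np

def best_candidate(candidates, mean, cov):
    """Pick the candidate gray-level profile with minimal Mahalanobis
    distance to the training mean (the conventional ASM update rule)."""
    cov_inv = np.linalg.inv(cov)
    d = [(c - mean) @ cov_inv @ (c - mean) for c in candidates]
    return int(np.argmin(d))

mean = np.array([0.0, 1.0, 0.0])              # learned edge profile (hypothetical)
cov = np.eye(3) * 0.1                         # profile covariance (hypothetical)
candidates = np.array([[0.9, 0.1, 0.2],       # off-boundary sample
                       [0.1, 0.9, 0.1],       # close to the learned profile
                       [0.5, 0.5, 0.5]])      # flat, uninformative sample
print(best_candidate(candidates, mean, cov))  # → 1
```

The ASM-MP method of the record replaces this normal-direction search with a fan-shaped, minimal-path-driven search; the criterion above is only the baseline it improves upon.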

  2. A coarse-to-fine approach for pericardial effusion localization and segmentation in chest CT scans

    Science.gov (United States)

    Liu, Jiamin; Chellamuthu, Karthik; Lu, Le; Bagheri, Mohammadhadi; Summers, Ronald M.

    2018-02-01

Pericardial effusion on CT scans demonstrates very high shape and volume variability and very low contrast to adjacent structures, which inhibits traditional automated segmentation methods from achieving high accuracy. Deep neural networks have been widely used for image segmentation in CT scans. In this work, we present a two-stage method for pericardial effusion localization and segmentation. In the first stage, we localize the pericardial area within the entire CT volume, providing a reliable bounding box for the more refined segmentation stage: a coarse-scaled holistically-nested convolutional network (HNN) model is trained on the entire CT volume, and the resulting HNN per-pixel probability maps are thresholded to produce a bounding box covering the pericardial area. In the second stage, a fine-scaled HNN model is trained only on the bounding-box region for effusion segmentation, to reduce background distraction. Quantitative evaluation was performed on a dataset of 25 CT scans (1,206 images) of patients with pericardial effusion. The segmentation accuracy of our two-stage method, measured by the Dice Similarity Coefficient (DSC), is 75.59 ± 12.04%, which is significantly better than the segmentation accuracy of using only the coarse-scaled HNN model (62.74 ± 15.20%).
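The coarse stage's output, a thresholded probability map turned into a padded bounding box for the fine stage, can be sketched as follows; the threshold, margin and mock map are assumptions for illustration:

```python
import numpy as np

def prob_map_to_bbox(prob, thresh=0.5, margin=2):
    """Threshold a per-pixel probability map and return a padded bounding
    box (r0, r1, c0, c1) around all supra-threshold pixels, or None."""
    rows, cols = np.nonzero(prob >= thresh)
    if rows.size == 0:
        return None
    r0 = max(rows.min() - margin, 0)
    r1 = min(rows.max() + margin + 1, prob.shape[0])
    c0 = max(cols.min() - margin, 0)
    c1 = min(cols.max() + margin + 1, prob.shape[1])
    return r0, r1, c0, c1

prob = np.zeros((32, 32))
prob[10:15, 12:20] = 0.9           # mock coarse-HNN pericardial response
print(prob_map_to_bbox(prob))      # → (8, 17, 10, 22)
```

The fine-scaled model then sees only `image[r0:r1, c0:c1]`, which is how the second stage suppresses background distraction.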

  3. A minimal path searching approach for active shape model (ASM)-based segmentation of the lung

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-02-01

We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  4. Implementing a real-time chain of segmentation of images on a multi-FPGA architecture

    Science.gov (United States)

    Akil, Mohamed; Zahirazami, Shahram

    1998-03-01

    In this paper we present the study and implementation of an optimized chain of segmentation operators. We implemented this chain in real time on a multi-FPGA architecture; it consists of Deriche contour detection, double thresholding, contour closing, and finally region labeling. The architecture has four processing FPGAs and four memory modules. The Deriche operator, contour closing, and labeling each occupy one FPGA; double thresholding and extremity detection partially fill the fourth FPGA. The slowest component of the chain is the Deriche operator, which can run at up to 11.4 MHz, ensuring that an image is processed every 40 ms. The Deriche operator extracts contours by modeling a contour as a step edge superimposed with white Gaussian noise. Our implementation consists of a smoothing part built from four second-order filters and a Sobel operator as the derivative part. The second-order filters are causal and non-causal horizontal and vertical operators. The gradient image passes through a double-threshold filter to separate real contours and crests from background pixels. Contour closing eliminates false crests, and finally the labeling assigns a unique label to each closed region. The latency of the chain is on the order of three images. This implementation shows the efficiency of the chain and demonstrates the capabilities of our architecture as a prototyping system.
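
    The double-threshold stage of the chain is easy to illustrate in software. A minimal NumPy sketch with illustrative threshold values — not the FPGA implementation:

```python
import numpy as np

def double_threshold(grad, low, high):
    """Classify gradient-magnitude pixels: 2 = contour (>= high),
    1 = crest candidate (>= low but < high), 0 = background."""
    out = np.zeros(grad.shape, dtype=np.uint8)
    out[grad >= low] = 1
    out[grad >= high] = 2
    return out

g = np.array([[0.1, 0.4, 0.9],
              [0.5, 0.05, 0.8]])
print(double_threshold(g, low=0.3, high=0.7))  # rows: [0 1 2] and [1 0 2]
```

    The contour-closing stage would then try to promote crest candidates (label 1) that connect sure contours (label 2) and discard the rest.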

  5. Whole vertebral bone segmentation method with a statistical intensity-shape model based approach

    Science.gov (United States)

    Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer

    2011-03-01

    An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focus on constructing and utilizing four different statistical intensity-shape combined models for the cervical, upper thoracic, lower thoracic, and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as pre-processing to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After PCA analysis of the shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image via maximum a posteriori estimation, combined with the geodesic active contour method. In experiments with 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 (2 in the cervical area and 2 in the lumbo-sacral area). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, and 0.939 mm for the cervical, upper thoracic, lower thoracic, and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed fair performance for cervical, thoracic and lumbar vertebrae.
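
    The parametric model described above — each vertebra expressed as a mean shape-intensity vector plus a linear combination of principal component vectors — can be sketched with PCA via SVD. A toy illustration on random data, not the authors' vertebra models:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 6))   # toy training set: 20 shape-intensity vectors
mean = X.mean(axis=0)

# PCA: principal component vectors from the SVD of the centered data
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
P = Vt.T                       # columns are principal component vectors

def instance(b):
    """Parametric model: a candidate instance is mean + P @ b."""
    return mean + P @ b

# projecting a training sample onto all components reconstructs it exactly;
# a truncated P would give the low-dimensional model fitted during MAP search
b = P.T @ (X[0] - mean)
print(np.allclose(instance(b), X[0]))  # → True
```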

  6. A Hybrid Hierarchical Approach for Brain Tissue Segmentation by Combining Brain Atlas and Least Square Support Vector Machine

    Science.gov (United States)

    Kasiri, Keyvan; Kazemi, Kamran; Dehghani, Mohammad Javad; Helfroush, Mohammad Sadegh

    2013-01-01

    In this paper, we present a new semi-automatic brain tissue segmentation method based on a hybrid hierarchical approach that combines a brain atlas as a priori information with a least-squares support vector machine (LS-SVM). The method consists of three steps. In the first two steps, the skull is removed and the cerebrospinal fluid (CSF) is extracted. These two steps are performed using FMRIB's Automated Segmentation Tool integrated in the FSL software (FSL-FAST), developed at the Oxford Centre for Functional MRI of the Brain (FMRIB). In the third step, the LS-SVM is used to segment grey matter (GM) and white matter (WM). The training samples for the LS-SVM are selected from the registered brain atlas, with voxel intensities and spatial positions as the two feature groups for training and testing. As a powerful discriminator, the SVM is able to handle nonlinear classification problems; however, it cannot provide posterior probabilities, so we use a sigmoid function to map the SVM output into probabilities. The proposed method is used to segment CSF, GM and WM from simulated magnetic resonance imaging (MRI) generated with the BrainWeb MRI simulator and from real data provided by the Internet Brain Segmentation Repository. The semi-automatically segmented brain tissues were evaluated by comparison with the corresponding ground truth. The Dice and Jaccard similarity coefficients, sensitivity and specificity were calculated for quantitative validation of the results, and they show that the proposed method segments brain tissues accurately with respect to the ground truth. PMID:24696800
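
    The sigmoid mapping from SVM outputs to probabilities mentioned above is typically done in the style of Platt scaling. A minimal sketch; the parameters a and b are illustrative defaults, not fitted values:

```python
import numpy as np

def svm_to_probability(f, a=-1.0, b=0.0):
    """Map SVM decision values f to posterior probabilities with a sigmoid:
    p = 1 / (1 + exp(a*f + b)). In Platt scaling, a and b are fitted on
    held-out decision values; the defaults here are for illustration."""
    return 1.0 / (1.0 + np.exp(a * f + b))

f = np.array([-2.0, 0.0, 2.0])
print(svm_to_probability(f).round(3))  # → [0.119 0.5   0.881]
```

    A decision value of zero maps to probability 0.5, and larger decision values map monotonically to higher posterior probability.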

  7. Statistics-based segmentation using a continuous-scale naive Bayes approach

    DEFF Research Database (Denmark)

    Laursen, Morten Stigaard; Midtiby, Henrik Skov; Kruger, Norbert

    2014-01-01

    Segmentation is a popular preprocessing stage in the field of machine vision. In agricultural applications it can be used to distinguish between living plant material and soil in images. The normalized difference vegetation index (NDVI) and excess green (ExG) color features are often used...... segmentation over the normalized difference vegetation index and excess green. The inputs to this color feature are the R, G, B, and near-infrared color wells, their chromaticities, and NDVI, ExG, and excess red. We apply the developed technique to a dataset consisting of 20 manually segmented images captured
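
    The color features named in this abstract can be computed per pixel as below. A hedged sketch: the epsilon guard and the toy pixel values are assumptions, not part of the paper.

```python
import numpy as np

def vegetation_indices(R, G, B, NIR):
    """Per-pixel color features: NDVI, excess green (ExG) and excess red
    (ExR), the latter two computed from the r, g, b chromaticities."""
    total = R + G + B + 1e-9          # guard against division by zero
    r, g, b = R / total, G / total, B / total
    ndvi = (NIR - R) / (NIR + R + 1e-9)
    exg = 2 * g - r - b
    exr = 1.4 * r - g
    return ndvi, exg, exr

# a bright-green "plant" pixel vs. a grey "soil" pixel
R = np.array([40.0, 120.0]); G = np.array([160.0, 120.0])
B = np.array([40.0, 120.0]); NIR = np.array([200.0, 130.0])
ndvi, exg, exr = vegetation_indices(R, G, B, NIR)
print(ndvi.round(2), exg.round(2))  # plant pixel scores higher on both indices
```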

  8. Investigation on the Weighted RANSAC Approaches for Building Roof Plane Segmentation from LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    Bo Xu

    2015-12-01

    Full Text Available RANdom SAmple Consensus (RANSAC) is a widely adopted method for LiDAR point cloud segmentation because of its robustness to noise and outliers. However, RANSAC has a tendency to generate false segments consisting of points from several nearly coplanar surfaces. To address this problem, we formulate a weighted RANSAC approach for point cloud segmentation. In our proposed solution, the hard-threshold voting function, which considers both the point-plane distance and the normal vector consistency, is transformed into a soft-threshold voting function based on two weight functions. To improve weighted RANSAC's ability to distinguish planes, we designed the weight functions according to the difference in the error distribution between proper and improper plane hypotheses, from which an outlier suppression ratio was also defined. Using this ratio, a thorough comparison was conducted between the different weight functions to determine the best-performing one. The selected weight function was then compared to the existing weighted RANSAC methods, the original RANSAC, and a representative region growing (RG) method. Experiments with two airborne LiDAR datasets of varying densities show that the various weighted methods improve segmentation quality to different degrees, but the purpose-designed weight functions significantly improve both segmentation accuracy and topological correctness. Moreover, their robustness is much better than that of the RG method.
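
    The soft-threshold voting idea can be sketched as a product of two weight functions, one decaying with the point-plane distance and one with the angle between the point normal and the plane normal. The Gaussian shape and the sigma values below are illustrative assumptions, not the functions designed in the paper.

```python
import numpy as np

def soft_vote(dist, normal_angle, sigma_d=0.05, sigma_a=np.radians(10)):
    """Soft voting weight for one point: instead of a hard inlier test,
    the contribution decays smoothly with the point-plane distance and
    with the normal-vector inconsistency."""
    w_d = np.exp(-(dist / sigma_d) ** 2)
    w_a = np.exp(-(normal_angle / sigma_a) ** 2)
    return w_d * w_a

# an on-plane point with a consistent normal gets full weight;
# an off-plane point with a tilted normal gets almost none
print(soft_vote(0.0, 0.0), soft_vote(0.10, np.radians(20)))
```

    A plane hypothesis then scores the sum of these weights over all points, so nearly coplanar but misaligned points contribute little instead of voting fully.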

  9. A Combined Random Forests and Active Contour Model Approach for Fully Automatic Segmentation of the Left Atrium in Volumetric MRI

    Directory of Open Access Journals (Sweden)

    Chao Ma

    2017-01-01

    Full Text Available Segmentation of the left atrium (LA) from cardiac magnetic resonance imaging (MRI) datasets is of great importance for image-guided atrial fibrillation ablation, LA fibrosis quantification, and cardiac biophysical modelling. However, automated LA segmentation from cardiac MRI is challenging due to limited image resolution, considerable variability in anatomical structures across subjects, and dynamic motion of the heart. In this work, we propose a combined random forests (RFs) and active contour model (ACM) approach for fully automatic segmentation of the LA from cardiac volumetric MRI. Specifically, we employ the RFs within an autocontext scheme to effectively integrate contextual and appearance information from multisource images for inferring the LA shape. The inferred shape is then incorporated into a volume-scalable ACM to further improve the segmentation accuracy. We validated the proposed method on the cardiac volumetric MRI datasets from the STACOM 2013 and HVSMR 2016 databases and showed that it outperforms other recent automated LA segmentation methods. Validation metrics, average Dice coefficient (DC) and average surface-to-surface distance (S2S), were computed as 0.9227±0.0598 and 1.14±1.205 mm, versus 0.6222–0.878 and 1.34–8.72 mm obtained by other methods, respectively.

  10. A hybrid segmentation approach for geographic atrophy in fundus auto-fluorescence images for diagnosis of age-related macular degeneration.

    Science.gov (United States)

    Lee, Noah; Laine, Andrew F; Smith, R Theodore

    2007-01-01

    Fundus auto-fluorescence (FAF) images with hypo-fluorescence indicate geographic atrophy (GA) of the retinal pigment epithelium (RPE) in age-related macular degeneration (AMD). Manual quantification of GA is time consuming and prone to inter- and intra-observer variability. Automatic quantification is important for determining disease progression and facilitating clinical diagnosis of AMD. In this paper we describe a hybrid segmentation method that quantifies GA by distinguishing hypo-fluorescent GA regions from interfering retinal vessel structures. First, we correct background illumination using a non-linear adaptive smoothing operator. Then, we segment hypo-fluorescent areas within the level set framework. Finally, we present an energy function combining morphological scale-space analysis with a geometric model-based approach to refine the segmentation by removing false positive hypo-fluorescent areas due to interfering retinal structures. The clinically apparent areas of hypo-fluorescence were drawn by an expert grader and compared on a pixel-by-pixel basis to our segmentation results. The mean sensitivity and specificity of the ROC analysis were 0.89 and 0.98, respectively.
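
    The pixel-by-pixel comparison against the expert grading reduces to counting true and false positives and negatives. A minimal sketch on toy binary masks, not the paper's data:

```python
import numpy as np

def sensitivity_specificity(pred, truth):
    """Pixel-by-pixel sensitivity and specificity of a binary
    segmentation against a ground-truth mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fn = np.sum(~pred & truth)
    fp = np.sum(pred & ~truth)
    return float(tp / (tp + fn)), float(tn / (tn + fp))

truth = np.array([[1, 1, 0, 0]])
pred  = np.array([[1, 0, 0, 0]])
print(sensitivity_specificity(pred, truth))  # → (0.5, 1.0)
```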

  11. A rapid Kano-based approach to identify optimal user segments

    DEFF Research Database (Denmark)

    Atlason, Reynir Smari; Stefansson, Arnaldur Smari; Wietz, Miriam

    2018-01-01

    The Kano model of customer satisfaction provides product developers valuable information about if, and then how much, a given functional requirement (FR) will impact customer satisfaction if implemented within a product, system or a service. A limitation of the Kano model is that it does not allow...... developers to visualise which combined sets of FRs would provide the highest satisfaction between different customer segments. In this paper, a stepwise method to address this shortcoming is presented. First, a traditional Kano analysis is conducted for the different segments of interest. Second, for each FR...... to the biggest target group. The proposed extension should assist product developers to more effectively evaluate which FRs should be implemented when considering more than one combined customer segment. It shows which segments provide the highest possibility for high satisfaction of combined FRs. We

  12. A cross-metathesis approach to the stereocontrolled synthesis of the AB ring segment of ciguatoxin

    OpenAIRE

    Kadota, Isao; Abe, Takashi; Uni, Miyuki; Takamura, Hiroyoshi; Yamamoto, Yoshinori

    2008-01-01

    Synthesis of the AB ring segments of ciguatoxin is described. The present synthesis includes a Lewis acid mediated cyclization of allylstannane with aldehyde, cross-metathesis reaction introducing the side chain, and Grieco-Nishizawa dehydration on the A ring.

  13. ADVANCED CLUSTER BASED IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    D. Kesavaraja

    2011-11-01

    Full Text Available This paper presents efficient and portable implementations of a useful image segmentation technique based on a faster variant of the conventional connected-components algorithm, which we call parallel components. Many medical applications rely on image segmentation as a service and require it to run quickly and securely, yet conventional segmentation algorithms often cannot meet these speed demands despite ongoing research. We therefore propose a cluster computing environment for parallel image segmentation to deliver faster results. This paper describes a real-time implementation of distributed image segmentation across a cluster of nodes. We demonstrate the effectiveness and feasibility of our method on a set of medical CT scan images. Our general framework is a single-address-space, distributed-memory programming model. We use efficient techniques for distributing and coalescing data as well as efficient combinations of task and data parallelism. The segmentation algorithm makes use of an efficient cluster process with a novel approach to parallel merging. Our experimental results are consistent with the theoretical analysis and show faster execution times for segmentation compared with the conventional method. Our test data consists of different CT scan images from a medical database. More efficient implementations of image segmentation will likely yield even faster execution times.

  14. Region of interest-based versus whole-lung segmentation-based approach for MR lung perfusion quantification in 2-year-old children after congenital diaphragmatic hernia repair

    Energy Technology Data Exchange (ETDEWEB)

    Weis, M.; Sommer, V.; Hagelstein, C.; Schoenberg, S.O.; Neff, K.W. [Heidelberg University, Institute of Clinical Radiology and Nuclear Medicine, University Medical Center Mannheim, Medical Faculty Mannheim, Mannheim (Germany); Zoellner, F.G. [Heidelberg University, Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Mannheim (Germany); Zahn, K. [University of Heidelberg, Department of Paediatric Surgery, University Medical Center Mannheim, Medical Faculty Mannheim, Mannheim (Germany); Schaible, T. [Heidelberg University, Department of Paediatrics, University Medical Center Mannheim, Medical Faculty Mannheim, Mannheim (Germany)

    2016-12-15

    With a region of interest (ROI)-based approach, 2-year-old children after congenital diaphragmatic hernia (CDH) show reduced MR lung perfusion values on the ipsilateral side compared to the contralateral side. This study evaluates whether these results can be reproduced by whole-lung segmentation and whether there are differences between the ROI-based and whole-lung measurements. Using dynamic contrast-enhanced (DCE) MRI, pulmonary blood flow (PBF), pulmonary blood volume (PBV) and mean transit time (MTT) were quantified in 30 children after CDH repair. Quantification results of an ROI-based approach (six cylindrical ROIs generated from five adjacent slices per lung side) and a whole-lung segmentation approach were compared. In both approaches PBF and PBV were significantly reduced on the ipsilateral side (p<0.0001 in all cases). In ipsilateral lungs, PBF of the ROI-based and the whole-lung segmentation-based approach was equal (p=0.50). In contralateral lungs, the ROI-based approach significantly overestimated PBF in comparison to the whole-lung segmentation approach by approximately 9.5 % (p=0.0013). MR lung perfusion in 2-year-old children after CDH is significantly reduced ipsilaterally. In the contralateral lung, the ROI-based approach significantly overestimates perfusion, which can be explained by exclusion of the most ventral parts of the lung. Therefore whole-lung segmentation should be preferred. (orig.)

  15. Region of interest-based versus whole-lung segmentation-based approach for MR lung perfusion quantification in 2-year-old children after congenital diaphragmatic hernia repair

    International Nuclear Information System (INIS)

    Weis, M.; Sommer, V.; Hagelstein, C.; Schoenberg, S.O.; Neff, K.W.; Zoellner, F.G.; Zahn, K.; Schaible, T.

    2016-01-01

    With a region of interest (ROI)-based approach, 2-year-old children after congenital diaphragmatic hernia (CDH) show reduced MR lung perfusion values on the ipsilateral side compared to the contralateral side. This study evaluates whether these results can be reproduced by whole-lung segmentation and whether there are differences between the ROI-based and whole-lung measurements. Using dynamic contrast-enhanced (DCE) MRI, pulmonary blood flow (PBF), pulmonary blood volume (PBV) and mean transit time (MTT) were quantified in 30 children after CDH repair. Quantification results of an ROI-based approach (six cylindrical ROIs generated from five adjacent slices per lung side) and a whole-lung segmentation approach were compared. In both approaches PBF and PBV were significantly reduced on the ipsilateral side (p<0.0001 in all cases). In ipsilateral lungs, PBF of the ROI-based and the whole-lung segmentation-based approach was equal (p=0.50). In contralateral lungs, the ROI-based approach significantly overestimated PBF in comparison to the whole-lung segmentation approach by approximately 9.5 % (p=0.0013). MR lung perfusion in 2-year-old children after CDH is significantly reduced ipsilaterally. In the contralateral lung, the ROI-based approach significantly overestimates perfusion, which can be explained by exclusion of the most ventral parts of the lung. Therefore whole-lung segmentation should be preferred. (orig.)

  16. Gamifying Video Object Segmentation.

    Science.gov (United States)

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; these limitations appear more evident when we compare the performance of automated methods with that of humans. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method which exploits, on the one hand, the capability of humans to correctly identify objects in visual scenes, and on the other hand, collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation time and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  17. Alcopops, taxation and harm: a segmented time series analysis of emergency department presentations.

    Science.gov (United States)

    Gale, Marianne; Muscatello, David J; Dinh, Michael; Byrnes, Joshua; Shakeshaft, Anthony; Hayen, Andrew; MacIntyre, Chandini Raina; Haber, Paul; Cretikos, Michelle; Morton, Patricia

    2015-05-06

    In Australia, a Goods and Services Tax (GST) introduced in 2000 led to a decline in the price of ready-to-drink (RTD) beverages relative to other alcohol products. The 2008 RTD ("alcopops") tax increased RTD prices. The objective of this study was to estimate the change in incidence of Emergency Department (ED) presentations for acute alcohol problems associated with each tax. Segmented regression analyses were performed on age- and sex-specific time series of monthly presentation rates for acute alcohol problems to 39 hospital emergency departments across New South Wales, Australia over 15 years, 1997 to 2011. Indicator variables represented the introduction of each tax. Retail liquor turnover controlled for large-scale economic factors, such as the global financial crisis, that may have influenced demand. Under-age (15-17 years) and legal-age (18 years and over) drinkers were included. The GST was associated with a statistically significant increase in ED presentations for acute alcohol problems among 18-24 year old females (0.14/100,000/month, 95% CI 0.05-0.22). The subsequent alcopops tax was associated with a statistically significant decrease in males 15-50 years and females 15-65 years, particularly in 18-24 year old females (-0.37/100,000/month, 95% CI -0.45 to -0.29). An increase in retail turnover of liquor was positively and statistically significantly associated with ED presentations for acute alcohol problems across all age and sex strata. Reduced tax on RTDs was associated with increasing ED presentations for acute alcohol problems among young women. The alcopops tax was associated with declining presentations in young to middle-aged persons of both sexes, including under-age drinkers.
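
    Segmented (interrupted) time-series regression with an indicator variable for a tax change, as used in this study, can be sketched on synthetic data. Everything below is simulated for illustration; the numbers have no relation to the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(120)                      # 120 months
tax = (t >= 60).astype(float)           # indicator: post-tax period
# simulated presentation rate: trend plus a level drop after the tax
rate = 5.0 + 0.01 * t - 0.8 * tax + rng.normal(0, 0.1, size=t.size)

# design matrix: intercept, linear time trend, tax indicator
X = np.column_stack([np.ones_like(t, dtype=float), t.astype(float), tax])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
print(beta.round(2))                    # approximately [5.0, 0.01, -0.8]
```

    The coefficient on the indicator estimates the step change in the monthly rate associated with the tax; the study additionally controlled for retail liquor turnover.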

  18. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds

    Science.gov (United States)

    Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian

    2018-03-01

    Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall and from inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization, which is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm generates an initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performance of the proposed method is evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performance both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET) and in low-quality point clouds collected by RGB-D sensors.

  19. Reducing consumption of confectionery foods: A post-hoc segmentation analysis using a social cognition approach.

    Science.gov (United States)

    Naughton, Paul; McCarthy, Mary; McCarthy, Sinéad

    2017-10-01

    Considering confectionery consumption behaviour, this cross-sectional study used social cognition variables to identify distinct segments in terms of their motivation and efforts to decrease their consumption of such foods, with the aim of informing targeted social marketing campaigns. Using latent class analysis on a sample of 500 adults, four segments were identified: unmotivated, triers, successful actors, and thrivers. The unmotivated and triers segments reported low levels of perceived need and perceived behavioural control (PBC) in addition to high levels of habit and hedonic hunger with regard to their consumption of confectionery foods. Being a younger adult was associated with higher odds of being in the unmotivated and triers segments, and being female was associated with higher odds of being in the triers and successful actors segments. The findings indicate that in the absence of strong commitment to eating low amounts of confectionery foods (i.e. perceived need), people will continue to overconsume free sugars regardless of motivation to change. It is therefore necessary to identify relevant messages or 'triggers' related to sugar consumption that resonate with young adults in particular. For those motivated to change, counteracting unhealthy eating habits and the effects of hedonic hunger may necessitate changes to food environments in order to make the healthy choice more appealing and accessible. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Automatic segmentation of the right ventricle from cardiac MRI using a learning-based approach.

    Science.gov (United States)

    Avendi, Michael R; Kheradvar, Arash; Jafarkhani, Hamid

    2017-12-01

    This study aims to accurately segment the right ventricle (RV) from cardiac MRI using a fully automatic learning-based method. The proposed method uses deep learning algorithms, i.e., convolutional neural networks and stacked autoencoders, for automatic detection and initial segmentation of the RV chamber. The initial segmentation is then combined with deformable models to improve the accuracy and robustness of the process. We trained our algorithm using 16 cardiac MRI datasets of the MICCAI 2012 RV Segmentation Challenge database and validated our technique using the rest of the dataset (32 subjects). An average Dice metric of 82.5% along with an average Hausdorff distance of 7.85 mm were achieved for all the studied subjects. Furthermore, a high correlation and level of agreement with the ground-truth contours was observed for end-diastolic volume (0.98), end-systolic volume (0.99), and ejection fraction (0.93). Our results show that deep learning algorithms can be effectively used for automatic segmentation of the RV. Computed quantitative metrics of our method outperformed those of the existing techniques that participated in the MICCAI 2012 challenge, as reported by the challenge organizers. Magn Reson Med 78:2439-2448, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
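
    The two evaluation metrics reported above, the Dice metric and the Hausdorff distance, can be computed as follows. A minimal sketch on toy masks using SciPy's `directed_hausdorff`:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice metric between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.sum(a & b) / (np.sum(a) + np.sum(b))

# two overlapping 4x4 square masks, shifted by one row
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 2:6] = True

# symmetric Hausdorff distance between the two masks' pixel coordinates
pa, pb = np.argwhere(a), np.argwhere(b)
hd = max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
print(float(dice(a, b)), hd)  # → 0.75 1.0
```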

  1. DEMO maintenance scenarios: scheme for time estimations and preliminary estimates for blankets arranged in multi-module-segments

    International Nuclear Information System (INIS)

    Nagy, D.

    2007-01-01

    Previous conceptual studies made clear that the ITER blanket concept and segmentation are not suitable for the environment of a potential fusion power plant (DEMO). One promising alternative concept is the so-called Multi-Module-Segment (MMS) concept. Each MMS consists of a number of blankets arranged on a strong back plate, thus forming "banana"-shaped in-board (IB) and out-board (OB) segments. With respect to port size, weight, and other limiting aspects, the IB and OB MMS are segmented in the toroidal direction; the number of segments to be replaced would be below 100. For this segmentation concept a new maintenance scenario had to be worked out. The aim of this paper is to present a promising MMS maintenance scenario, a flexible scheme for time estimations under varying boundary conditions, and preliminary time estimates. According to the proposed scenario, two upper, vertically arranged maintenance ports on opposite sides of the tokamak have to be opened for blanket maintenance. Both ports are central to a 180 degree sector, and the MMS are removed and inserted through both ports. In-vessel machines transport the elements in the toroidal direction and also insert and attach the MMS to the shield. Outside the vessel, the elements have to be transported between the tokamak and the hot cell to be refurbished. Calculating the maintenance time for such a scenario is rather challenging due to the numerous parallel processes involved. For this reason a flexible, multi-level calculation scheme has been developed in which the operations are organized into three levels: at the lowest level, the basic maintenance steps are determined. These are organized into maintenance sequences that take into account parallelisms in the system. Several maintenance sequences constitute the maintenance phases, which correspond to a certain logistics scenario. By adding the required times of the maintenance phases, the total maintenance time is obtained. The paper presents

  2. Discontinuity Preserving Image Registration through Motion Segmentation: A Primal-Dual Approach

    Directory of Open Access Journals (Sweden)

    Silja Kiriyanthan

    2016-01-01

    Full Text Available Image registration is a powerful tool in medical image analysis and facilitates the clinical routine in several respects. There are many well-established elastic registration methods, but none of them can so far preserve discontinuities in the displacement field. These discontinuities appear in particular at organ boundaries during breathing-induced organ motion. In this paper, we exploit the fact that motion segmentation can play a guiding role during discontinuity-preserving registration. The motion segmentation is embedded in a continuous cut framework that guarantees convexity of the motion segmentation problem. Furthermore, we show that a primal-dual method can be used to estimate a solution to this challenging variational problem. Experimental results are presented for MR images with apparent breathing-induced sliding motion of the liver along the abdominal wall.

  3. Automatic segmentation of coronary vessels from digital subtracted angiograms: a knowledge-based approach

    International Nuclear Information System (INIS)

    Stansfield, S.A.

    1986-01-01

    This paper presents a rule-based expert system for identifying and isolating coronary vessels in digital angiograms. The system is written in OPS5 and LISP and uses low level processors written in C. The system embodies both stages of the vision hierarchy: The low level image processing stage works concurrently with edges (or lines) and regions to segment the input image. Its knowledge is that of segmentation, grouping, and shape analysis. The high level stage then uses its knowledge of cardiac anatomy and physiology to interpret the result and to eliminate those structures not desired in the output. (Auth.)

  4. A multi-scale tensor voting approach for small retinal vessel segmentation in high resolution fundus images.

    Science.gov (United States)

    Christodoulidis, Argyrios; Hurtut, Thomas; Tahar, Houssem Ben; Cheriet, Farida

    2016-09-01

    Segmenting the retinal vessels from fundus images is a prerequisite for many CAD systems for the automatic detection of diabetic retinopathy lesions. So far, research efforts have concentrated mainly on the accurate localization of the large to medium diameter vessels. However, failure to detect the smallest vessels at the segmentation step can lead to false positive lesion detection counts in a subsequent lesion analysis stage. In this study, a new hybrid method for the segmentation of the smallest vessels is proposed. Line detection and perceptual organization techniques are combined in a multi-scale scheme. Small vessels are reconstructed from the perceptual-based approach via tracking and pixel painting. The segmentation was validated in a high resolution fundus image database including healthy and diabetic subjects using pixel-based as well as perceptual-based measures. The proposed method achieves 85.06% sensitivity rate, while the original multi-scale line detection method achieves 81.06% sensitivity rate for the corresponding images (p<0.05). The improvement in the sensitivity rate for the database is 6.47% when only the smallest vessels are considered (p<0.05). For the perceptual-based measure, the proposed method improves the detection of the vasculature by 7.8% against the original multi-scale line detection method (p<0.05). Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Strategic market segmentation

    Directory of Open Access Journals (Sweden)

    Maričić Branko R.

    2015-01-01

    Strategic planning of marketing activities is the basis of business success in the modern business environment. Customers are not homogeneous in their preferences and expectations. Formulating an adequate marketing strategy, focused on the realization of the company's strategic objectives, requires a segmented approach to the market that appreciates differences in the expectations and preferences of customers. One of the significant activities in strategic planning of marketing activities is market segmentation. Strategic planning imposes a need to plan marketing activities according to strategically important segments on a long-term basis. At the same time, there is a need to revise and adapt marketing activities on a short-term basis. There are a number of criteria on which market segmentation can be based. The paper considers the effectiveness and efficiency of different market segmentation criteria based on empirical research of customer expectations and preferences. The analysis includes traditional criteria and criteria based on a behavioral model. The research implications are analyzed from the perspective of selecting the most adequate market segmentation criteria in strategic planning of marketing activities.

  6. Remote Sensing Image Fusion at the Segment Level Using a Spatially-Weighted Approach: Applications for Land Cover Spectral Analysis and Mapping

    Directory of Open Access Journals (Sweden)

    Brian Johnson

    2015-01-01

    Segment-level image fusion involves segmenting a higher spatial resolution (HSR) image to derive boundaries of land cover objects, and then extracting additional descriptors of image segments (polygons) from a lower spatial resolution (LSR) image. In past research, an unweighted segment-level fusion (USF) approach, which extracts information from a resampled LSR image, resulted in more accurate land cover classification than the use of HSR imagery alone. However, simply fusing the LSR image with segment polygons may lead to significant errors due to the high level of noise in pixels along the segment boundaries (i.e., pixels containing multiple land cover types). To mitigate this, a spatially-weighted segment-level fusion (SWSF) method was proposed for extracting descriptors (mean spectral values) of segments from LSR images. SWSF reduces the weights of LSR pixels located on or near segment boundaries to reduce errors in the fusion process. Compared to the USF approach, SWSF extracted more accurate spectral properties of land cover objects when the ratio of the LSR image resolution to the HSR image resolution was greater than 2:1, and SWSF was also shown to increase classification accuracy. SWSF can be used to fuse any type of imagery at the segment level since it is insensitive to spectral differences between the LSR and HSR images (e.g., different spectral ranges of the images or different image acquisition dates).
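
    The core of the SWSF idea — down-weighting LSR pixels on or near segment boundaries when computing a segment's mean spectral value — can be sketched as follows. This is a toy illustration: the linear distance ramp and the `full_weight_dist` cutoff are assumptions, not the paper's exact weighting function.

```python
def boundary_weights(dist_to_boundary, full_weight_dist=2.0):
    """Down-weight pixels near the segment boundary: weight ramps linearly
    from 0 at the boundary to 1 at `full_weight_dist` pixels inside."""
    return [min(d / full_weight_dist, 1.0) for d in dist_to_boundary]

def spatially_weighted_mean(values, weights):
    """Weighted mean spectral value of the LSR pixels in one segment."""
    total_w = sum(weights)
    if total_w == 0:
        raise ValueError("all weights are zero")
    return sum(v * w for v, w in zip(values, weights)) / total_w

# Toy segment: 5 LSR pixels; the last two sit on the boundary (distance 0)
# and contain mixed land cover, so their values are unrepresentative.
values = [10.0, 12.0, 11.0, 40.0, 45.0]
dists = [3.0, 2.5, 2.0, 0.0, 0.0]
w = boundary_weights(dists)
unweighted = sum(values) / len(values)        # pulled up by boundary noise
weighted = spatially_weighted_mean(values, w)  # interior pixels dominate
```

    The unweighted mean (23.6) is badly biased by the two mixed boundary pixels, while the spatially-weighted mean (11.0) reflects the segment's interior spectral value.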

  7. Total and segmental colon transit time in constipated children assessed by scintigraphy with 111In-DTPA given orally.

    Science.gov (United States)

    Vattimo, A; Burroni, L; Bertelli, P; Messina, M; Meucci, D; Tota, G

    1993-12-01

    Serial colon scintigraphy using 111In-DTPA (2 MBq) given orally was performed in 39 children referred for constipation, and the total and segmental colon transit times were measured. The bowel movements during the study were recorded and the intervals between defecations (ID) were calculated. This method proved able to identify children with normal colon morphology (no. = 32) and those with dolichocolon (no. = 7). Normal children were not included for ethical reasons, so we used the normal range determined by others using x-ray methods (29 +/- 4 hours). Total and segmental colon transit times were found to be prolonged in all children with dolichocolon (TC: 113.55 +/- 41.20 hours; RC: 39.85 +/- 26.39 hours; LC: 43.05 +/- 18.30 hours; RS: 30.66 +/- 26.89 hours). In the group of children with a normal colon shape, 13 presented total and segmental colon transit times within the referred normal values (TC: 27.79 +/- 4.10 hours; RC: 9.11 +/- 2.53 hours; LC: 9.80 +/- 3.50 hours; RS: 8.88 +/- 4.09 hours) and normal bowel function (ID: 23.37 +/- 5.93 hours). Of the remaining children, 5 presented prolonged retention in the rectum (RS: 53.36 +/- 29.66 hours), and 14 a prolonged transit time in all segments. A good correlation was found between the transit time and bowel function. From the point of view of radiation dosimetry, the most heavily irradiated organs were the lower large intestine and the ovaries, and the level of radiation burden depended on the colon transit time. We conclude that the described method is safe, accurate and fully diagnostic.

  8. Optimisation of the formulation of a bubble bath by a chemometric approach: market segmentation and optimisation.

    Science.gov (United States)

    Marengo, Emilio; Robotti, Elisa; Gennaro, Maria Carla; Bertetto, Mariella

    2003-03-01

    The optimisation of the formulation of a commercial bubble bath was performed by chemometric analysis of Panel Test results. A first Panel Test was performed to choose the best essence among four proposed to the consumers; the best essence chosen was used in the revised commercial bubble bath. Afterwards, the effect of changing the amounts of four components of the bubble bath (the primary surfactant, the essence, the hydratant and the colouring agent) was studied by a fractional factorial design. The segmentation of the bubble bath market was performed by a second Panel Test, in which the consumers were requested to evaluate the samples coming from the experimental design. The results were then treated by Principal Component Analysis. The market had two segments: people preferring a product with a rich formulation and people preferring a poor product. The final target, i.e. the optimisation of the formulation for each segment, was obtained by the calculation of regression models relating the subjective evaluations given by the Panel to the compositions of the samples. The regression models allowed us to identify the best formulations for the two segments of the market.

  9. Segmenting Multiple Sclerosis Lesions using a Spatially Constrained K-Nearest Neighbour approach

    DEFF Research Database (Denmark)

    Lyksborg, Mark; Larsen, Rasmus; Sørensen, Per Soelberg

    2012-01-01

    We propose a method for the segmentation of Multiple Sclerosis lesions. The method is based on probability maps derived from a K-Nearest Neighbours classification. These are used as a non-parametric likelihood in a Bayesian formulation with a prior that assumes connectivity of neighbouring voxels. ...
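
    The K-Nearest Neighbours probability-map step can be illustrated with a toy sketch. The features, labels and k below are invented for illustration, and the Bayesian spatial-connectivity prior from the abstract is omitted:

```python
from collections import Counter

def knn_lesion_probability(query, train_feats, train_labels, k=3):
    """Non-parametric lesion likelihood for one voxel: the fraction of the
    k nearest training voxels (by squared feature distance) labelled lesion."""
    dists = sorted(
        (sum((q - t) ** 2 for q, t in zip(query, feat)), label)
        for feat, label in zip(train_feats, train_labels)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes[1] / k

# Toy two-channel intensity features; label 1 = lesion, 0 = background.
feats = [(0.9, 0.8), (0.85, 0.9), (0.1, 0.2), (0.15, 0.1), (0.2, 0.15)]
labels = [1, 1, 0, 0, 0]
p_lesion = knn_lesion_probability((0.88, 0.85), feats, labels, k=3)
p_background_voxel = knn_lesion_probability((0.12, 0.12), feats, labels, k=3)
```

    In the full method these per-voxel likelihoods would be combined with the connectivity prior rather than thresholded directly.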

  10. A new approach to asymmetric feedback in a segmented broad area diode laser

    DEFF Research Database (Denmark)

    Jensen, Ole Bjarlin; Thestrup Nielsen, Birgitte; Petersen, Paul Michael

    2009-01-01

    We present the demonstration of a non-critical setup for asymmetric feedback in a segmented broad area diode laser. We compare the dependence of the beam quality on the position of the dispersive element for standard spectral beam combining and our new non-critical setup. We find that our new...

  11. The Spiral Arm Segments of the Galaxy within 3 kpc from the Sun: A Statistical Approach

    Energy Technology Data Exchange (ETDEWEB)

    Griv, Evgeny [Department of Physics, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Jiang, Ing-Guey [Department of Physics, National Tsing-Hua University, Kuang-Fu Road 101, Hsin-Chu 30013, Taiwan (China); Hou, Li-Gang, E-mail: griv@bgu.ac.il [National Astronomical Observatories, Chinese Academy of Sciences, Jia-20, Beijing 100012 (China)

    2017-08-01

    As can be reasonably expected, upcoming large-scale APOGEE, GAIA, GALAH, LAMOST, and WEAVE stellar spectroscopic surveys will yield rather noisy Galactic distributions of stars. In view of the possibility of employing these surveys, our aim is to present a statistical method to extract information about the spiral structure of the Galaxy from currently available data, and to demonstrate the effectiveness of this method. The model differs from previous works studying how objects are distributed in space in its calculation of the statistical significance of the hypothesis that some of the objects are actually concentrated in a spiral. A statistical analysis of the distribution of cold dust clumps within molecular clouds, H ii regions, Cepheid stars, and open clusters in the nearby Galactic disk within 3 kpc from the Sun is carried out. As an application of the method, we obtain distances between the Sun and the centers of the neighboring Sagittarius arm segment, the Orion arm segment in which the Sun is located, and the Perseus arm segment. Pitch angles of the logarithmic spiral segments and their widths are also estimated. The hypothesis that the collected objects accidentally form spirals is refuted with almost 100% statistical confidence. We show that these four independent distributions of young objects lead to essentially the same results. We also demonstrate that our newly deduced values of the mean distances and pitch angles for the segments are not too far from those found recently by Reid et al. using VLBI-based trigonometric parallaxes of massive star-forming regions.

  12. The Spiral Arm Segments of the Galaxy within 3 kpc from the Sun: A Statistical Approach

    International Nuclear Information System (INIS)

    Griv, Evgeny; Jiang, Ing-Guey; Hou, Li-Gang

    2017-01-01

    As can be reasonably expected, upcoming large-scale APOGEE, GAIA, GALAH, LAMOST, and WEAVE stellar spectroscopic surveys will yield rather noisy Galactic distributions of stars. In view of the possibility of employing these surveys, our aim is to present a statistical method to extract information about the spiral structure of the Galaxy from currently available data, and to demonstrate the effectiveness of this method. The model differs from previous works studying how objects are distributed in space in its calculation of the statistical significance of the hypothesis that some of the objects are actually concentrated in a spiral. A statistical analysis of the distribution of cold dust clumps within molecular clouds, H ii regions, Cepheid stars, and open clusters in the nearby Galactic disk within 3 kpc from the Sun is carried out. As an application of the method, we obtain distances between the Sun and the centers of the neighboring Sagittarius arm segment, the Orion arm segment in which the Sun is located, and the Perseus arm segment. Pitch angles of the logarithmic spiral segments and their widths are also estimated. The hypothesis that the collected objects accidentally form spirals is refuted with almost 100% statistical confidence. We show that these four independent distributions of young objects lead to essentially the same results. We also demonstrate that our newly deduced values of the mean distances and pitch angles for the segments are not too far from those found recently by Reid et al. using VLBI-based trigonometric parallaxes of massive star-forming regions.

  13. A two-stage rule-constrained seedless region growing approach for mandibular body segmentation in MRI.

    Science.gov (United States)

    Ji, Dong Xu; Foong, Kelvin Weng Chiong; Ong, Sim Heng

    2013-09-01

    Extraction of the mandible from 3D volumetric images is frequently required for surgical planning and evaluation. Image segmentation from MRI is more complex than CT due to a lower bony signal-to-noise ratio. An automated method to extract the human mandible body shape from magnetic resonance (MR) images of the head was developed and tested. Anonymous MR image data sets of the head from 12 subjects were subjected to a two-stage rule-constrained region growing approach to derive the shape of the body of the human mandible. An initial thresholding technique was applied followed by a 3D seedless region growing algorithm to detect a large portion of the trabecular bone (TB) regions of the mandible. This stage is followed with a rule-constrained 2D segmentation of each MR axial slice to merge the remaining portions of the TB regions with lower intensity levels. The two-stage approach was replicated to detect the cortical bone (CB) regions of the mandibular body. The TB and CB regions detected from the preceding steps were merged and subjected to a series of morphological processes for completion of the mandibular body region definition. Comparisons of the accuracy of segmentation between the two-stage approach, conventional region growing method, 3D level set method, and manual segmentation were made with Jaccard index, Dice index, and mean surface distance (MSD). The mean accuracy of the proposed method is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The mean accuracy of CRG is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The mean accuracy of the 3D level set method is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The proposed method shows improvement in accuracy over CRG and 3D level set. Accurate segmentation of the body of the human mandible from MR images is achieved with the
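
    A minimal 2D analogue of the threshold-then-seedless-region-growing stage can be sketched as follows. The actual method works on 3D MR volumes with rule constraints; this sketch shows only thresholding plus 4-connected flood fill, where every unvisited foreground pixel starts its own region (hence "seedless"):

```python
from collections import deque

def threshold(img, lo):
    """Binary mask of pixels at or above the intensity threshold."""
    return [[1 if v >= lo else 0 for v in row] for row in img]

def connected_regions(mask):
    """Seedless region growing: each unvisited foreground pixel seeds a
    region, grown by 4-connected breadth-first flood fill."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                q, region = deque([(y, x)]), []
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    region.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                regions.append(region)
    return regions

# Toy slice: two bright "bone" blobs separated by dark background.
img = [
    [9, 9, 0, 0, 0],
    [9, 9, 0, 0, 8],
    [0, 0, 0, 8, 8],
]
regions = connected_regions(threshold(img, 5))
```

    The sketch recovers the two blobs as separate regions; the paper's rule-constrained second stage would then merge lower-intensity trabecular-bone pixels into them.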

  14. Semantic Segmentation of Real-time Sensor Data Stream for Complex Activity Recognition

    OpenAIRE

    Triboan, Darpan; Chen, Liming; Chen, Feng; Wang, Zumin

    2016-01-01

    Data segmentation plays a critical role in performing human activity recognition (HAR) in ambient assisted living (AAL) systems. It is particularly important for complex activity recognition when the events occur in short bursts with attributes of multiple sub-tasks. Althou...

  15. Automatic quantification of mammary glands on non-contrast x-ray CT by using a novel segmentation approach

    Science.gov (United States)

    Zhou, Xiangrong; Kano, Takuya; Cai, Yunliang; Li, Shuo; Zhou, Xinxin; Hara, Takeshi; Yokoyama, Ryujiro; Fujita, Hiroshi

    2016-03-01

    This paper describes a new automatic segmentation method for quantifying the volume and density of mammary gland regions on non-contrast CT images. The proposed method uses two processing steps, (1) breast region localization and (2) breast region decomposition, to accomplish a robust mammary gland segmentation task on CT images. The first step detects two minimum bounding boxes of the left and right breast regions, respectively, based on a machine-learning approach that adapts to the large variance of breast appearances across age levels. The second step divides the whole breast region on each side into mammary gland, fat tissue, and other regions by using a spectral clustering technique that focuses on intra-region similarities of each patient and aims to overcome the image variance caused by different scan parameters. The whole approach is designed as a simple structure with a minimal number of parameters to gain superior robustness and computational efficiency in a real clinical setting. We applied this approach to a dataset of 300 CT scans, sampled in equal numbers from women aged 30 to 50 years. Compared to human annotations, the proposed approach successfully measures the volume and quantifies the distribution of CT numbers of mammary gland regions. The experimental results demonstrate that the proposed approach achieves results consistent with manual annotations. Through our proposed framework, an efficient and effective low-cost clinical screening scheme may be easily implemented to predict breast cancer risk, especially from already acquired scans.

  16. Real-time segmentation of multiple implanted cylindrical liver markers in kilovoltage and megavoltage x-ray images

    DEFF Research Database (Denmark)

    Fledelius, Walther; Worm, Esben Schjødt; Høyer, Morten

    2014-01-01

    (CBCT) projections, for real-time motion management. Thirteen patients treated with conformal stereotactic body radiation therapy in three fractions had 2-3 cylindrical gold markers implanted in the liver prior to treatment. At each fraction, the projection images of a pre-treatment CBCT scan were used...... for automatic generation of a 3D marker model that consisted of the size, orientation, and estimated 3D trajectory of each marker during the CBCT scan. The 3D marker model was used for real-time template based segmentation in subsequent x-ray images by projecting each marker's 3D shape and likely 3D motion...... range onto the imager plane. The segmentation was performed in intra-treatment kV images (526 marker traces, 92 097 marker projections) and MV images (88 marker traces, 22 382 marker projections), and in post-treatment CBCT projections (42 CBCT scans, 71 381 marker projections). 227 kV marker traces...

  17. Comprehensive Cost Minimization in Distribution Networks Using Segmented-time Feeder Reconfiguration and Reactive Power Control of Distributed Generators

    DEFF Research Database (Denmark)

    Chen, Shuheng; Hu, Weihao; Chen, Zhe

    2016-01-01

    In this paper, an efficient methodology is proposed to deal with segmented-time reconfiguration problem of distribution networks coupled with segmented-time reactive power control of distributed generators. The target is to find the optimal dispatching schedule of all controllable switches...... and distributed generators’ reactive powers in order to minimize comprehensive cost. Corresponding constraints, including voltage profile, maximum allowable daily switching operation numbers (MADSON), reactive power limits, and so on, are considered. The strategy of grouping branches is used to simplify...... (FAHPSO) is implemented in VC++ 6.0 program language. A modified version of the typical 70-node distribution network and several real distribution networks are used to test the performance of the proposed method. Numerical results show that the proposed methodology is an efficient method for comprehensive...
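
    The truncated text refers to a fuzzy adaptive hybrid particle swarm optimization (FAHPSO). A plain PSO sketch conveys the underlying search idea; the sphere cost function and all parameters below are illustrative stand-ins for the paper's comprehensive-cost objective and constraints:

```python
import random

def pso_minimize(cost, dim, bounds, n_particles=20, iters=60,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Plain particle swarm optimisation over a box (a much simplified
    stand-in for FAHPSO): track per-particle and global bests, update
    velocities toward both, and clamp positions to the bounds."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Stand-in objective: sphere function, optimum (cost 0) at the origin.
best, best_cost = pso_minimize(lambda x: sum(v * v for v in x),
                               dim=3, bounds=(-5, 5))
```

    In the paper's setting the decision variables would encode switch states and reactive-power set-points, with constraint penalties folded into the cost.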

  18. Automatic Approach for Lung Segmentation with Juxta-Pleural Nodules from Thoracic CT Based on Contour Tracing and Correction

    Directory of Open Access Journals (Sweden)

    Jinke Wang

    2016-01-01

    This paper presents a fully automatic framework for lung segmentation, in which the juxta-pleural nodule problem is brought into strong focus. The proposed scheme consists of three phases: skin boundary detection, rough segmentation of the lung contour, and pulmonary parenchyma refinement. Firstly, the chest skin boundary is extracted through image aligning, morphology operations, and connective region analysis. Secondly, diagonal-based border tracing is implemented for lung contour segmentation, with a maximum cost path algorithm used for separating the left and right lungs. Finally, by arc-based border smoothing and concave-based border correction, the refined pulmonary parenchyma is obtained. The proposed scheme is evaluated on 45 volumes of chest scans, with volume difference (VD) 11.15±69.63 cm3, volume overlap error (VOE) 3.5057±1.3719%, average surface distance (ASD) 0.7917±0.2741 mm, root mean square distance (RMSD) 1.6957±0.6568 mm, maximum symmetric absolute surface distance (MSD) 21.3430±8.1743 mm, and an average time cost of 2 seconds per image. The preliminary results on accuracy and complexity prove that our scheme is a promising tool for lung segmentation with juxta-pleural nodules.
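
    The overlap metrics reported above can be computed from voxel sets as follows (standard definitions of VD, Dice, and VOE on toy data; the surface-distance metrics ASD/RMSD/MSD require mesh distances and are omitted):

```python
def overlap_metrics(seg, ref):
    """Volume difference (voxels), Dice coefficient, and volume overlap
    error (VOE, percent) between a segmentation and a reference."""
    seg, ref = set(seg), set(ref)
    inter = len(seg & ref)
    union = len(seg | ref)
    vd = len(seg) - len(ref)                    # signed volume difference
    dice = 2 * inter / (len(seg) + len(ref))    # overlap, 1.0 is perfect
    voe = 100.0 * (1 - inter / union)           # 0% is perfect
    return vd, dice, voe

seg = {(0, 0), (0, 1), (1, 0), (1, 1)}            # 4 voxels segmented
ref = {(0, 0), (0, 1), (1, 0), (1, 1), (2, 1)}    # 5 voxels ground truth
vd, dice, voe = overlap_metrics(seg, ref)
```

    Here the segmentation misses one reference voxel, giving VD = -1, Dice ≈ 0.889, and VOE = 20%.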

  19. Regional Approach to Luxury Market Segmentation: The Case Of Western Balkans

    OpenAIRE

    Melika Husic-Mehmedovic; Emir Agic

    2015-01-01

    The nature of the luxury brand requires a limited market in order to maintain exclusivity. Individual countries in the Western Balkans are not lucrative per se; therefore, regional segmentation is needed in the case of luxury brands. The countries of the Western Balkans, i.e. Bosnia and Herzegovina, Croatia, Serbia and Slovenia, are all post-socialist, post-war countries currently going through major transitions. Rather small markets are yet to be established in their final form politically, economically, so...

  20. STEM Employment in the New Economy: A Labor Market Segmentation Approach

    Science.gov (United States)

    Torres-Olave, Blanca M.

    2013-01-01

    The present study examined the extent to which the U.S. STEM labor market is stratified in terms of quality of employment. Through a series of cluster analyses and Chi-square tests on data drawn from the 2008 Survey of Income Program Participation (SIPP), the study found evidence of segmentation in the highly-skilled STEM and non-STEM samples,…

  1. Does the Location of Bruch's Membrane Opening Change Over Time? Longitudinal Analysis Using San Diego Automated Layer Segmentation Algorithm (SALSA).

    Science.gov (United States)

    Belghith, Akram; Bowd, Christopher; Medeiros, Felipe A; Hammel, Naama; Yang, Zhiyong; Weinreb, Robert N; Zangwill, Linda M

    2016-02-01

    We determined if the Bruch's membrane opening (BMO) location changes over time in healthy eyes and eyes with progressing glaucoma, and validated an automated segmentation algorithm for identifying the BMO in Cirrus high-definition optical coherence tomography (HD-OCT) images. We followed 95 eyes (35 progressing glaucoma and 60 healthy) for an average of 3.7 ± 1.1 years. A stable group of 50 eyes had repeated tests over a short period. In each B-scan of the stable group, the BMO points were delineated manually and automatically to assess the reproducibility of both segmentation methods. Moreover, the BMO location variation over time was assessed longitudinally on the aligned images in 3D space point by point in x, y, and z directions. Mean visual field mean deviation at baseline of the progressing glaucoma group was -7.7 dB. Mixed-effects models revealed small nonsignificant changes in BMO location over time for all directions in healthy eyes (the smallest P value was 0.39) and in the progressing glaucoma eyes (the smallest P value was 0.30). In the stable group, the overall intervisit intraclass correlation coefficient (ICC) and coefficient of variation (CV) were 98.4% and 2.1%, respectively, for the manual segmentation and 98.1% and 1.9%, respectively, for the automated algorithm. Bruch's membrane opening location was stable in normal and progressing glaucoma eyes with follow-up between 3 and 4 years, indicating that it can be used as a reference point in monitoring glaucoma progression. The BMO location estimation with Cirrus HD-OCT using manual and automated segmentation showed excellent reproducibility.

  2. Automatic moment segmentation and peak detection analysis of heart sound pattern via short-time modified Hilbert transform.

    Science.gov (United States)

    Sun, Shuping; Jiang, Zhongwei; Wang, Haibin; Fang, Yu

    2014-05-01

    This paper proposes a novel automatic method for the moment segmentation and peak detection analysis of heart sound (HS) patterns, with special attention to the characteristics of the envelopes of HS and considering the properties of the Hilbert transform (HT). The moment segmentation and peak location are accomplished in two steps. First, by applying the Viola integral waveform method in the time domain, the envelope (E(T)) of the HS signal is obtained with an emphasis on the first heart sound (S1) and the second heart sound (S2). Then, based on the characteristics of E(T) and the properties of the HT of convex and concave functions, a novel method, the short-time modified Hilbert transform (STMHT), is proposed to automatically locate the moment segmentation and peak points for the HS via the zero crossing points of the STMHT. A fast algorithm for calculating the STMHT of E(T) can be expressed by multiplying E(T) by an equivalent window (W(E)). According to the range of heart beats, and based on the numerical experiments and the important parameters of the STMHT, a moving window width of N=1 s is validated for locating the moment segmentation and peak points for HS. The proposed moment segmentation and peak location method is validated by sounds from the Michigan HS database and sounds from clinical heart diseases, such as ventricular septal defect (VSD), atrial septal defect (ASD), Tetralogy of Fallot (TOF), rheumatic heart disease (RHD), and so on. As a result, for the sounds where S2 can be separated from S1, the average accuracies achieved for the peak of S1 (AP₁), the peak of S2 (AP₂), the moment segmentation points from S1 to S2 (AT₁₂) and the cardiac cycle (ACC) are 98.53%, 98.31%, 98.36% and 97.37%, respectively. For the sounds where S1 cannot be separated from S2, the average accuracies achieved for the peak of S1 and S2 (AP₁₂) and the cardiac cycle (ACC) are 100% and 96.69%. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
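
    The zero-crossing idea behind the STMHT can be illustrated on a toy signal. In this sketch a rectified moving-average envelope stands in for the Viola integral envelope E(T), and sign changes of the envelope's first difference stand in for STMHT zero crossings; neither is the paper's exact operator:

```python
def envelope(signal, win=3):
    """Rectified moving-average envelope (a crude stand-in for E(T))."""
    half = win // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(abs(signal[j]) for j in range(lo, hi)) / (hi - lo))
    return out

def zero_crossing_peaks(env):
    """Indices where the first difference of the envelope crosses from
    positive to non-positive, i.e. local envelope maxima (S1/S2 peaks)."""
    d = [b - a for a, b in zip(env, env[1:])]
    return [i for i in range(1, len(d)) if d[i - 1] > 0 and d[i] <= 0]

# Toy "heart sound": two bursts (S1 and S2) separated by silence.
sig = [0, 0, 1, 4, 1, 0, 0, 0, 0, 0, 2, 5, 2, 0, 0]
peaks = zero_crossing_peaks(envelope(sig, win=3))
```

    The two detected indices land on the centres of the S1- and S2-like bursts.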

  3. Automatic brain matter segmentation of computed tomography images using a statistical model: A tool to gain working time!

    Science.gov (United States)

    Bertè, Francesco; Lamponi, Giuseppe; Bramanti, Placido; Calabrò, Rocco S

    2015-10-01

    Brain computed tomography (CT) is a useful diagnostic tool for the evaluation of several neurological disorders due to its accuracy, reliability, safety and wide availability. In this field, a potentially interesting research topic is the automatic segmentation and recognition of medical regions of interest (ROIs). Herein, we propose a novel automated method, based on the active appearance model (AAM), for the segmentation of brain matter in CT images to assist radiologists in the evaluation of the images. The method, which was applied to 54 CT images from a sample of outpatients affected by cognitive impairment, enabled us to generate a model overlapping the original image with quite good precision. Since CT neuroimaging is in widespread use for detecting neurological disease, including neurodegenerative conditions, the development of automated tools enabling technicians and physicians to reduce working time and reach a more accurate diagnosis is needed. © The Author(s) 2015.

  4. Agreement of Anterior Segment Parameters Obtained From Swept-Source Fourier-Domain and Time-Domain Anterior Segment Optical Coherence Tomography.

    Science.gov (United States)

    Chansangpetch, Sunee; Nguyen, Anwell; Mora, Marta; Badr, Mai; He, Mingguang; Porco, Travis C; Lin, Shan C

    2018-03-01

    To assess the interdevice agreement between swept-source Fourier-domain and time-domain anterior segment optical coherence tomography (AS-OCT). Fifty-three eyes from 41 subjects underwent CASIA2 and Visante OCT imaging. One hundred eighty-degree axis images were measured with the built-in two-dimensional analysis software for the swept-source Fourier-domain AS-OCT (CASIA2) and a customized program for the time-domain AS-OCT (Visante OCT). In both devices, we examined the angle opening distance (AOD), trabecular iris space area (TISA), angle recess area (ARA), anterior chamber depth (ACD), anterior chamber width (ACW), and lens vault (LV). Bland-Altman plots and intraclass correlation (ICC) were performed. Orthogonal linear regression assessed any proportional bias. ICC showed strong correlation for LV (0.925) and ACD (0.992) and moderate agreement for ACW (0.801). ICC suggested good agreement for all angle parameters (0.771-0.878) except temporal AOD500 (0.743) and ARA750 (nasal 0.481; temporal 0.481). There was a proportional bias in nasal ARA750 (slope 2.44, 95% confidence interval [CI]: 1.95-3.18), temporal ARA750 (slope 2.57, 95% CI: 2.04-3.40), and nasal TISA500 (slope 1.30, 95% CI: 1.12-1.54). Bland-Altman plots demonstrated in all measured parameters a minimal mean difference between the two devices (-0.089 to 0.063); however, evidence of constant bias was found in nasal AOD250, nasal AOD500, nasal AOD750, nasal ARA750, temporal AOD500, temporal AOD750, temporal ARA750, and ACD. Among the parameters with constant biases, CASIA2 tends to give the larger numbers. Both devices had generally good agreement. However, there were proportional and constant biases in most angle parameters. Thus, it is not recommended that values be used interchangeably.
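
    The Bland-Altman agreement analysis used above can be sketched as follows. The paired readings are invented; the bias and 1.96·SD limits of agreement follow the standard definition:

```python
import math

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement between paired
    measurements from two devices."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Toy paired ACD readings (mm) from two hypothetical AS-OCT devices.
dev1 = [3.10, 2.95, 3.40, 3.05, 3.20]
dev2 = [3.05, 2.90, 3.35, 3.00, 3.18]
bias, lo, hi = bland_altman(dev1, dev2)
```

    A small bias with narrow limits of agreement (as here) corresponds to the "minimal mean difference" reported in the abstract; a constant bias shows up as a bias bounded away from zero.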

  5. Quantization of musical time: A connectionist approach

    NARCIS (Netherlands)

    Desain, P.; Honing, H.

    1989-01-01

    Musical time can be considered to be the product of two time scales: the discrete time intervals of a metrical structure and the continuous time scales of tempo changes and expressive timing (Clarke 1987a). In musical notation both kinds are present, although the notation of continuous time is less
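
    A snap-to-grid baseline makes the quantization problem concrete. The connectionist model in the paper instead lets neighbouring inter-onset intervals relax interactively toward integer ratios; this naive version is what such models improve on:

```python
def quantize(onsets, grid=0.25):
    """Naive grid quantisation of expressively timed onsets (in beats):
    snap each onset to the nearest multiple of the grid."""
    return [round(t / grid) * grid for t in onsets]

# Expressively timed performance of four sixteenth notes (slight rubato).
performed = [0.02, 0.27, 0.49, 0.76]
q = quantize(performed)
```

    The baseline recovers the metrical positions here, but it fails under tempo drift, which is exactly the continuous time scale the connectionist approach is designed to absorb.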

  6. Fully-automated approach to hippocampus segmentation using a graph-cuts algorithm combined with atlas-based segmentation and morphological opening.

    Science.gov (United States)

    Kwak, Kichang; Yoon, Uicheul; Lee, Dong-Kyun; Kim, Geon Ha; Seo, Sang Won; Na, Duk L; Shim, Hack-Joon; Lee, Jong-Min

    2013-09-01

    The hippocampus has been known to be an important structure as a biomarker for Alzheimer's disease (AD) and other neurological and psychiatric diseases. However, its use requires accurate, robust and reproducible delineation of hippocampal structures. In this study, an automated hippocampal segmentation method based on a graph-cuts algorithm combined with atlas-based segmentation and morphological opening was proposed. First, atlas-based segmentation was applied to define an initial hippocampal region as a priori information for graph-cuts. The definition of initial seeds was further elaborated by incorporating estimation of partial volume probabilities at each voxel. Finally, morphological opening was applied to reduce false positives in the result processed by graph-cuts. In experiments with twenty-seven healthy normal subjects, the proposed method showed more reliable results (similarity index=0.81±0.03) than the conventional atlas-based segmentation method (0.72±0.04). Also, in terms of segmentation accuracy, measured by the ratios of false positives and false negatives, the proposed method (precision=0.76±0.04, recall=0.86±0.05) outperformed the conventional method (0.73±0.05, 0.72±0.06), demonstrating its suitability for accurate, robust and reliable segmentation of the hippocampus. Copyright © 2013 Elsevier Inc. All rights reserved.
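
    The final morphological-opening step, used above to remove false positives after graph-cuts, can be sketched on a binary grid (a 3×3 cross structuring element is assumed here; the paper's element may differ):

```python
def erode(mask):
    """Binary erosion with a 3x3 cross: keep a pixel only if it and its
    four neighbours are all foreground."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(mask[ny][nx] for ny, nx in
                   ((y, x), (y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))):
                out[y][x] = 1
    return out

def dilate(mask):
    """Binary dilation with the same 3x3 cross."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                for ny, nx in ((y, x), (y - 1, x), (y + 1, x),
                               (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        out[ny][nx] = 1
    return out

def opening(mask):
    """Opening = erosion then dilation: removes structures thinner than
    the element while roughly preserving larger ones."""
    return dilate(erode(mask))

# Toy segmentation: a 3x3 blob plus an isolated false-positive pixel.
mask = [
    [0, 0, 0, 0, 1],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
opened = opening(mask)  # the isolated pixel at (0, 4) is removed
```

    Note that opening also erodes fine detail of the kept structure, which is why it is applied only as a final cleanup rather than as the main segmentation step.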

  7. Various design approaches to achieve electric field-driven segmented folding actuation of electroactive polymer (EAP) sheets

    Science.gov (United States)

    Ahmed, Saad; Hong, Jonathan; Zhang, Wei; Kopatz, Jessica; Ounaies, Zoubeida; Frecker, Mary

    2018-03-01

    Electroactive polymer (EAP) based technologies have shown promise in areas such as artificial muscles, aerospace, medical devices and soft robotics. In this work, we demonstrate ways to harness on-demand segmented folding actuation from pure bending of relaxor-ferroelectric P(VDF-TrFE-CTFE) based films, using various design approaches such as 'stiffener' and 'notch' based approaches. The in-plane actuation of the P(VDF-TrFE-CTFE) is converted into bending actuation using unimorph configurations, where one passive substrate layer is attached to the active polymer. First, we experimentally show that placement of thin metal strips as stiffeners in between the active EAP and the passive substrate leads to segmented actuation as opposed to pure bending actuation; stiffeners made of different materials, such as nickel, copper and aluminum, are studied, which reveals that a higher Young's modulus favors more pronounced segmented actuation. Second, notched samples are prepared by mounting passive substrate patches of various materials on top of the passive layers of the unimorph EAP actuators. The effects of notch material, notch size, and notch position on the folding actuation are studied. The motion of the human finger inspires a finger-like biomimetic actuator, which is realized by assigning multiple notches on the structure; finite element analysis (FEA) is also performed using COMSOL Multiphysics software for the notched finger actuator. Finally, a versatile soft gripper is developed using the notched approach to demonstrate the capability of a properly designed EAP actuator to hold objects of various sizes and shapes.

  8. Brain tumor segmentation based on a hybrid clustering technique

    Directory of Open Access Journals (Sweden)

    Eman Abdel-Maksoud

    2015-03-01

    This paper presents an efficient image segmentation approach that integrates the K-means clustering technique with the Fuzzy C-means algorithm, followed by thresholding and level-set segmentation stages to provide accurate brain tumor detection. The proposed technique benefits from the minimal computation time of K-means clustering and from the accuracy of Fuzzy C-means. The performance of the proposed approach was evaluated against several state-of-the-art segmentation algorithms in terms of accuracy, processing time, and overall performance. Accuracy was evaluated by comparing the results with the ground truth of each processed image. The experimental results demonstrate the effectiveness of the proposed approach across a wide range of segmentation problems, improving segmentation quality and accuracy with minimal execution time.
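The two-stage clustering idea can be sketched on 1-D pixel intensities: a fast K-means pass supplies initial cluster centers, which Fuzzy C-means then refines with soft memberships. This is a minimal illustration of the general technique under our own assumptions, not the paper's implementation; all function names and parameters are invented for the sketch.

```python
import numpy as np

def kmeans_1d(x, k, iters=20, seed=0):
    """Plain K-means on 1-D intensities; fast initial clustering."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return centers

def fuzzy_cmeans_1d(x, centers, m=2.0, iters=30):
    """Refine the centers with fuzzy memberships (standard FCM updates)."""
    c = centers.astype(float).copy()
    for _ in range(iters):
        d = np.abs(x[:, None] - c[None, :]) + 1e-12   # pixel-to-center distances
        u = 1.0 / (d ** (2.0 / (m - 1.0)))            # inverse-distance weights
        u /= u.sum(axis=1, keepdims=True)             # memberships sum to 1
        um = u ** m
        c = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return c, u

# Toy "image": two intensity populations (tumor vs. background)
pixels = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
centers = kmeans_1d(pixels, k=2)                      # fast initialization
centers, memberships = fuzzy_cmeans_1d(pixels, centers)
labels = memberships.argmax(axis=1)                   # hard labels from soft memberships
```

In the paper's pipeline these labels would then feed the thresholding and level-set stages.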

  9. Recapturing time: a practical approach to time management for physicians.

    Science.gov (United States)

    Gordon, Craig E; Borkan, Steven C

    2014-05-01

    Increasing pressures on physicians demand effective time management and jeopardise professional satisfaction. Effective time management potentially increases productivity, promotes advancement, limits burnout and improves both professional and personal satisfaction. However, strategies for improving time management are lacking in the current medical literature. Adapting time management techniques from the medical and non-medical literature may improve physician time management habits. These techniques can be divided into four categories: (1) setting short and long-term goals; (2) setting priorities among competing responsibilities; (3) planning and organising activities; and (4) minimising 'time wasters'. Efforts to improve time management can increase physician productivity and enhance career satisfaction.

  10. Towards a real time computation of the dose in a phantom segmented into homogeneous meshes

    International Nuclear Information System (INIS)

    Blanpain, B.

    2009-10-01

    Automatic radiation therapy treatment planning necessitates very fast computation of the dose delivered to the patient. We propose to compute the dose by segmenting the patient's phantom into homogeneous meshes and associating with each mesh a projection onto dose distributions pre-computed in homogeneous phantoms, together with weights that manage heterogeneities. The dose computation is divided into two steps. The first step operates on the meshes: projections and weights are set according to physical and geometrical criteria. The second step operates on the voxels: the dose is computed by evaluating the functions previously associated with their mesh. This method is very fast, particularly when there are few points of interest (several hundred); in that case, results are obtained in less than one second. With such performance, automatic treatment planning becomes practically feasible. (author)
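The two-step scheme amounts to a per-mesh table lookup at the voxel level: each mesh carries a weight and a projection into a pre-computed homogeneous dose grid. The toy sketch below uses invented data structures, a made-up depth-dose curve and arbitrary weights purely to show the shape of such a computation, not the author's actual model.

```python
import numpy as np

# Hypothetical pre-computed dose distribution in a homogeneous phantom:
# a toy depth-dose curve indexed by depth (arbitrary units).
precomputed = {"water": np.linspace(1.0, 0.1, 100)}

# Each homogeneous mesh stores its voxels, a reference medium, a weight
# managing heterogeneity, and a projection offset into the dose table.
meshes = [
    {"voxels": [(0, d) for d in range(50)], "medium": "water",
     "weight": 1.0, "offset": 0},
    {"voxels": [(0, d) for d in range(50, 100)], "medium": "water",
     "weight": 0.8, "offset": 0},   # weight < 1 models a heterogeneity
]

def dose_at(voxel, mesh):
    """Step 2: evaluate the function associated with the voxel's mesh."""
    depth = voxel[1] + mesh["offset"]           # project voxel into the table
    return mesh["weight"] * precomputed[mesh["medium"]][depth]

dose = {}
for mesh in meshes:                              # step 1 already fixed weights
    for v in mesh["voxels"]:
        dose[v] = dose_at(v, mesh)
```

Because each voxel costs one lookup and one multiply, evaluating a few hundred points of interest is effectively instantaneous, which matches the sub-second figure quoted above.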

  11. Segmental-dependent membrane permeability along the intestine following oral drug administration: Evaluation of a triple single-pass intestinal perfusion (TSPIP) approach in the rat.

    Science.gov (United States)

    Dahan, Arik; West, Brady T; Amidon, Gordon L

    2009-02-15

    In this paper we evaluate a modified approach to the traditional single-pass intestinal perfusion (SPIP) rat model for investigating segmental-dependent permeability along the intestine following oral drug administration. Whereas in the traditional model a single segment of the intestine is perfused, we simultaneously perfused three individual segments of each rat intestine: proximal jejunum, mid-small intestine and distal ileum, enabling us to obtain three times as much data per rat as the traditional model. Three drugs with different permeabilities were utilized to evaluate the model: metoprolol, propranolol and cimetidine. Data were evaluated in comparison with the traditional method. Metoprolol and propranolol showed similar P(eff) values in the modified model in all segments. Segmental-dependent permeability was obtained for cimetidine, with lower P(eff) in the distal parts. Similar P(eff) values for all drugs were obtained with the traditional method, illustrating that the modified model is as accurate as the traditional one throughout a wide range of permeability characteristics, whether the permeability is constant or segment-dependent along the intestine. Three-fold higher statistical power to detect segmental dependency was obtained with the modified approach, as each subject serves as its own control. In conclusion, the triple SPIP model can reduce the number of animals utilized in segmental-dependent permeability research without compromising the quality of the data obtained.
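In single-pass perfusion studies, P(eff) is conventionally computed from inlet and outlet concentrations with the parallel-tube expression Peff = Q·ln(Cin/Cout)/(2πRL). The sketch below applies that standard formula; the flow rate, segment radius and length are illustrative placeholders, not values from this study.

```python
import math

def peff(q_ml_min, c_in, c_out, radius_cm, length_cm):
    """Effective permeability (cm/s) from steady-state SPIP concentrations,
    via the parallel-tube model: Peff = Q * ln(Cin/Cout) / (2*pi*R*L)."""
    q_ml_s = q_ml_min / 60.0                     # flow in mL/s (= cm^3/s)
    return q_ml_s * math.log(c_in / c_out) / (2 * math.pi * radius_cm * length_cm)

# One hypothetical segment: 0.2 mL/min flow, 10 cm length, 0.18 cm radius,
# 20% of the drug absorbed across the segment.
p = peff(0.2, c_in=1.0, c_out=0.8, radius_cm=0.18, length_cm=10.0)
```

In the triple-perfusion setup, the same formula would simply be evaluated once per cannulated segment, giving three P(eff) values per animal.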

  12. Water distribution network segmentation based on group multi-criteria decision approach

    Directory of Open Access Journals (Sweden)

    Marcele Elisa Fontana

    Full Text Available Abstract A correct network segmentation (NS) is necessary to perform proper maintenance activities in water distribution networks (WDN). To this end, isolation valves are usually allocated near the ends of pipes to block the flow of water. However, allocating valves substantially increases costs for water supply companies, and other criteria should be taken into account when analyzing the benefits of valve allocation. The problem is therefore to define an NS alternative that offers a good compromise across these different criteria. Moreover, this type of decision usually involves more than one decision-maker, each of whom may hold a different viewpoint. Therefore, this paper presents a model to support group decision-making, based on a multi-criteria method, for the NS problem. As a result, the model is able to find a solution that represents the best compromise among benefits, costs, and the decision-makers' preferences.

  13. A Novel Segment-Based Approach for Improving Classification Performance of Transport Mode Detection.

    Science.gov (United States)

    Guvensan, M Amac; Dusun, Burak; Can, Baris; Turkmen, H Irem

    2017-12-30

    Transportation planning and solutions have an enormous impact on city life. To minimize transport duration, urban planners need to understand and model a city's mobility. Thus, researchers have looked toward monitoring people's daily activities, including transportation type and duration, by taking advantage of individuals' smartphones. This paper introduces a novel segment-based transport mode detection architecture in order to improve the results of traditional classification algorithms in the literature. The proposed post-processing algorithm, namely the Healing algorithm, aims to correct the misclassification results of machine-learning-based solutions. Our real-life test results show that the Healing algorithm achieves up to a 40% improvement in the classification results. As a result, the implemented mobile application can predict eight classes, including stationary, walking, car, bus, tram, train, metro and ferry, with a success rate of 95% thanks to the proposed multi-tier architecture and Healing algorithm.
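The abstract does not spell out the Healing algorithm, but a common segment-based correction of this kind is a sliding majority vote over consecutive window predictions, which removes isolated misclassifications. The sketch below is a hypothetical stand-in for that idea, not the authors' actual algorithm.

```python
from collections import Counter

def heal(labels, window=5):
    """Replace each window-level prediction with the majority label in a
    sliding neighbourhood, smoothing out isolated misclassifications
    (a hypothetical stand-in for the paper's Healing post-processing)."""
    half = window // 2
    healed = []
    for i in range(len(labels)):
        lo, hi = max(0, i - half), min(len(labels), i + half + 1)
        healed.append(Counter(labels[lo:hi]).most_common(1)[0][0])
    return healed

# A trip classified per time window: one spurious "car" inside a bus ride.
raw = ["bus", "bus", "car", "bus", "bus", "walk", "walk", "walk"]
healed = heal(raw)
```

A real transition from bus to walking survives the vote because it is supported by several consecutive windows, while the lone "car" window is corrected.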

  14. A deformable-model approach to semi-automatic segmentation of CT images demonstrated by application to the spinal canal

    International Nuclear Information System (INIS)

    Burnett, Stuart S.C.; Starkschall, George; Stevens, Craig W.; Liao Zhongxing

    2004-01-01

    Because of the importance of accurately defining the target in radiation treatment planning, we have developed a deformable-template algorithm for the semi-automatic delineation of normal tissue structures on computed tomography (CT) images. We illustrate the method by applying it to the spinal canal. Segmentation is performed in three steps: (a) partial delineation of the anatomic structure is obtained by wavelet-based edge detection; (b) a deformable-model template is fitted to the edge set by chamfer matching; and (c) the template is relaxed away from its original shape into its final position. Appropriately chosen ranges for the model parameters limit the deformations of the template, accounting for interpatient variability. Our approach differs from those used in other deformable models in that it does not inherently require the modeling of forces. Instead, the spinal canal was modeled using Fourier descriptors derived from four sets of manually drawn contours. Segmentation was carried out, without manual intervention, on five CT data sets and the algorithm's performance was judged subjectively by two radiation oncologists. Two assessments were considered: in the first, segmentation on a random selection of 100 axial CT images was compared with the corresponding contours drawn manually by one of six dosimetrists, also chosen randomly; in the second assessment, the segmentation of each image in the five evaluable CT sets (a total of 557 axial images) was rated as either successful, unsuccessful, or requiring further editing. Contours generated by the algorithm were more likely than manually drawn contours to be considered acceptable by the oncologists. The mean proportions of acceptable contours were 93% (automatic) and 69% (manual). Automatic delineation of the spinal canal was deemed to be successful on 91% of the images, unsuccessful on 2% of the images, and requiring further editing on 7% of the images. 
Our deformable-template algorithm thus provides a robust approach to semi-automatic segmentation of the spinal canal on CT images.

  15. An improved strategy for skin lesion detection and classification using uniform segmentation and feature selection based approach.

    Science.gov (United States)

    Nasir, Muhammad; Attique Khan, Muhammad; Sharif, Muhammad; Lali, Ikram Ullah; Saba, Tanzila; Iqbal, Tassawar

    2018-02-21

    Melanoma is the deadliest type of skin cancer, with the highest mortality rate; however, eradication at an early stage implies a high survival rate, so early diagnosis is essential. Conventional diagnosis methods are costly and cumbersome because they require experienced experts and highly equipped environments. Recent advances in computerized solutions for these diagnoses are highly promising, with improved accuracy and efficiency. In this article, we propose a method for the classification of melanoma and benign skin lesions. Our approach integrates preprocessing, lesion segmentation, feature extraction, feature selection, and classification. Preprocessing performs hair removal with DullRazor, while lesion texture and color information are utilized to enhance the lesion contrast. In lesion segmentation, a hybrid technique is implemented and the results are fused using the additive law of probability. A serial-based method is then applied to extract and fuse traits such as color, texture, and HOG (shape). The fused features are subsequently selected by a novel Boltzmann entropy method. Finally, the selected features are classified by a support vector machine. The proposed method is evaluated on the publicly available PH2 data set. Our approach achieved promising results of 97.7% sensitivity, 96.7% specificity, 97.5% accuracy, and a 97.5% F-score, which are significantly better than the results of existing methods on the same data set. © 2018 Wiley Periodicals, Inc.

  16. A Morphological Hessian Based Approach for Retinal Blood Vessels Segmentation and Denoising Using Region Based Otsu Thresholding.

    Directory of Open Access Journals (Sweden)

    Khan BahadarKhan

    Full Text Available Diabetic retinopathy (DR) harms the retinal blood vessels in the eye, causing visual deficiency. The appearance and structure of blood vessels in retinal images play an essential part in the diagnosis of eye diseases. We propose a computationally inexpensive, unsupervised, automated technique, with promising results, for detection of the retinal vasculature using a morphological Hessian-based approach and region-based Otsu thresholding. Contrast-limited adaptive histogram equalization (CLAHE) and morphological filters are used for enhancement and to remove low-frequency noise or geometrical objects, respectively. The Hessian matrix and eigenvalue approach is used in a modified form at two different scales to extract wide- and thin-vessel enhanced images separately. Otsu thresholding is then applied in a novel way to classify vessel and non-vessel pixels in both enhanced images. Finally, postprocessing steps are applied to eliminate unwanted regions/segments, non-vessel pixels, disease abnormalities and noise, to obtain the final segmented image. The proposed technique has been evaluated on the openly accessible DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the REtina) databases, along with ground truth data precisely marked by experts.
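Otsu's method picks the threshold that maximizes the between-class variance of the intensity histogram; the paper applies it region by region rather than globally. Below is a minimal NumPy version of the global step (our own implementation, not the authors' code), applied to a synthetic bimodal "enhanced image".

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's threshold: maximize between-class variance over the histogram."""
    hist, edges = np.histogram(np.ravel(img), bins=nbins)
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                        # class-0 probability
    mu = np.cumsum(p * np.arange(nbins))        # cumulative mean (bin indices)
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    idx = int(np.nanargmax(sigma_b))            # ends are NaN (omega = 0 or 1)
    return edges[idx + 1]                       # upper edge of the best bin

# Synthetic bimodal intensities: background around 50, vessels around 180.
img = np.concatenate([np.random.default_rng(0).normal(50, 5, 500),
                      np.random.default_rng(1).normal(180, 5, 500)])
t = otsu_threshold(img)
mask = img > t                                  # vessel-pixel candidates
```

For the region-based variant described above, the same function would simply be called once per image sub-region, letting the threshold adapt to local contrast.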

  17. Using a service sector segmented approach to identify community stakeholders who can improve access to suicide prevention services for veterans.

    Science.gov (United States)

    Matthieu, Monica M; Gardiner, Giovanina; Ziegemeier, Ellen; Buxton, Miranda

    2014-04-01

    Veterans in need of social services may access many different community agencies within the public and private sectors. Each of these settings has the potential to be a pipeline for attaining needed health, mental health, and benefits services; however, many service providers lack information on how to conceptualize where Veterans go for services within their local community. This article describes a conceptual framework for outreach that uses a service sector segmented approach. This framework was developed to aid recruitment of a provider-based sample of stakeholders (N = 70) for a study on improving access to the Department of Veterans Affairs and community-based suicide prevention services. Results indicate that although there are statistically significant differences in the percent of Veterans served by the different service sectors (F(9, 55) = 2.71, p = 0.04), exposure to suicidal Veterans and providers' referral behavior is consistent across the sectors. Challenges to using this framework include isolating the appropriate sectors for targeted outreach efforts. The service sector segmented approach holds promise for identifying and referring at-risk Veterans in need of services. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.

  18. An Effective Approach of Teeth Segmentation within the 3D Cone Beam Computed Tomography Image Based on Deformable Surface Model

    Directory of Open Access Journals (Sweden)

    Xutang Zhang

    2016-01-01

    Full Text Available In order to extract the pixels of teeth from 3D Cone Beam Computed Tomography (CBCT) images, a novel 3D segmentation approach based on a deformable surface model is developed in this paper for 3D tooth model reconstruction. Different forces are formulated to handle the segmentation problem using different strategies. First, the proposed method estimates the deformation force of the vertex model by simulating the deformation process of a bubble under the action of internal pressure and an external force field. To handle blurry boundaries, a “braking force” is proposed, derived from the 3D gradient information calculated by extending the Sobel operator to a three-dimensional representation. In addition, a “border reinforcement” strategy is developed for handling cases with complicated structures. Moreover, the proposed method incorporates the affine cell image decomposition (ACID) grid reparameterization technique to handle unstable changes of topological structure and deformability during the deformation process. The proposed method was evaluated on 510 CBCT images. To validate the performance, the results were compared with those of two other well-studied methods. Experimental results show that the proposed approach performs well on cases with complicated structures and blurry boundaries, converges effectively, and can successfully reconstruct the various types of teeth in the oral cavity.

  19. Stability of multilead ST-segment "fingerprints" over time after percutaneous transluminal coronary angioplasty and its usefulness in detecting reocclusion.

    Science.gov (United States)

    Krucoff, M W; Parente, A R; Bottner, R K; Renzi, R H; Stark, K S; Shugoll, R A; Ahmed, S W; DeMichele, J; Stroming, S L; Green, C E

    1988-06-01

    Multilead ST-segment recordings taken during percutaneous transluminal coronary angioplasty (PTCA) could function as an individualized noninvasive template or "fingerprint," useful in evaluating transient ischemic episodes after leaving the catheterization laboratory. To evaluate the reproducibility of such ST-segment patterns over time, these changes were analyzed in patients grouped according to the time between occlusion and reocclusion. For the patients in group 1, the study compared "fingerprints" across repeat balloon inflations during PTCA (reocclusion in less than 1 hour); for those in group 2, ST "fingerprints" during PTCA were compared with ST changes during spontaneous early myocardial infarction (reocclusion within 24 hours); and in group 3, ST "fingerprints" were compared with ST changes during repeat PTCA for restenosis greater than 1 month after the initial PTCA. The ST "fingerprints" among the 20 patients in group 1 were identical in 14 cases (70%) and clearly related in another 4 (20%). Of the 23 patients in group 2, 12 (52%) had the same and 8 (35%) had related patterns. Of 19 patients in group 3, 8 (42%) had the same pattern and 8 (42%) had related patterns. Thus, ST fingerprints were the same or clearly related on reocclusion in the same patient from less than 1 hour to greater than 1 month after initial occlusion in 87% of patients overall: in 90% at less than 1 hour, in 87% at less than 24 hours and in 84% at greater than 1 month. Multilead ST-segment "fingerprints" may serve as a noninvasive marker for detecting site-specific reocclusion.

  20. Fold distributions at clover, crystal and segment levels for segmented clover detectors

    International Nuclear Information System (INIS)

    Kshetri, R; Bhattacharya, P

    2014-01-01

    Fold distributions at the clover, crystal and segment levels have been extracted for an array of segmented clover detectors at various gamma energies. A simple analysis of the results based on a model-independent approach is presented. For the first time, the clover fold distribution of an array and the associated array addback factor have been extracted. We have calculated the percentages of crystals and segments that fire for a full-energy-peak event

  1. Three-dimensional segmented poincare plot analysis - A new approach of cardiovascular and cardiorespiratory regulation analysis.

    Science.gov (United States)

    Fischer, Claudia; Voss, Andreas

    2014-01-01

    Hypertensive pregnancy disorders affect 6 to 8 percent of all pregnancies and can cause severe complications for the mother and the fetus. The aim of this study was to develop a new method suitable for three-dimensional coupling analysis. To this end, the three-dimensional segmented Poincaré plot analysis (SPPA3) is introduced, which performs Poincaré analysis on a cubic box model representation. The box representing the three-dimensional phase space is (following the SPPA method) subdivided into 12×12×12 equal cubelets according to the predefined ranges of the signals, and the probability of points falling in each cubelet, relative to the total number of points, is calculated. From 10 healthy non-pregnant women, 66 healthy pregnant women and 56 hypertensive pregnant women suffering from chronic hypertension, gestational hypertension or preeclampsia, 30 minutes of beat-to-beat intervals (BBI), noninvasive blood pressure and respiration (RESP) were continuously recorded and analyzed, and couplings between the different signals were examined. The suitability of SPPA3 for screening was confirmed by multivariate discriminant analysis differentiating between all pregnant women and those with preeclampsia (the index BBI3_SBP9_RESP6/BBI8_SBP11_RESP4 leads to an area under the ROC curve of AUC=91.2%). In conclusion, SPPA3 could be a useful method for enhanced risk stratification in pregnant women.
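The cubelet probabilities at the heart of SPPA3 are essentially a normalized 3-D histogram over the three signals. The sketch below follows that reading; the signal ranges, units and synthetic data are invented for illustration and are not the study's recordings.

```python
import numpy as np

def sppa3(bbi, sbp, resp, n=12):
    """Three-dimensional segmented Poincaré plot analysis (sketch):
    subdivide the 3-D phase space into n x n x n cubelets and return
    the probability of points falling into each cubelet."""
    pts = np.column_stack([bbi, sbp, resp])
    counts, _ = np.histogramdd(pts, bins=(n, n, n))   # ranges from data extent
    return counts / len(bbi)

rng = np.random.default_rng(0)
probs = sppa3(rng.normal(800, 50, 1000),   # beat-to-beat intervals (ms), toy
              rng.normal(120, 10, 1000),   # systolic blood pressure (mmHg), toy
              rng.normal(15, 3, 1000))     # respiration rate, toy
```

Individual cubelet probabilities (e.g. the BBI3_SBP9_RESP6 index above) are then read off as `probs[i, j, k]` for the cubelet coordinates of interest.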

  2. Investigation of biomechanical behavior of lumbar vertebral segments with dynamic stabilization device using finite element approach

    Science.gov (United States)

    Deoghare, Ashish B.; Kashyap, Siddharth; Padole, Pramod M.

    2013-03-01

    Degenerative disc disease is a major source of lower back pain and significantly alters the biomechanics of the lumbar spine. Dynamic stabilization is a remedial technique that uses flexible materials to stabilize the affected lumbar region while preserving the natural anatomy of the spine. The main objective of this research work is to investigate the stiffness variation of a dynamic stabilization device under various loading conditions: compression, axial rotation and flexion. A three-dimensional model of a two-segment lumbar spine is developed using computed tomography (CT) scan images, and the resulting lumbar structure is analyzed in ANSYS Workbench. Two types of dynamic stabilization are considered: one with the stabilizing device as pedicle instrumentation, and a second with the stabilization device inserted around the intervertebral disc. The analysis suggests that proper positioning of the dynamic stabilization device is of paramount significance prior to surgery. Inserting the device in the posterior region shows adverse effects, as it increases the deformation of the intervertebral disc. Positioning the stabilizing device around the intervertebral disc yields better results for various stiffness values under compression and the other loadings.

  3. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis.

    Science.gov (United States)

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

    Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although on the one hand, digital pathology and new bioimaging technologies find their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as, genetic algorithms or coordinate descents, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as the parameter spaces can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often result in a local maximum.

  4. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis

    Directory of Open Access Journals (Sweden)

    Christian Held

    2013-01-01

    Full Text Available Introduction: Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although on the one hand, digital pathology and new bioimaging technologies find their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. Methods: In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as, genetic algorithms or coordinate descents, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. Results: This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. Conclusion: The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as the parameter spaces can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often result in a local maximum.

  5. Saccular aneurysm of segmental branch of the main renal artery: approach to diagnosis and treatment

    International Nuclear Information System (INIS)

    Karaman, B.; Hamcan, S.; Bozkurt, Y.; Kara, K.; Aslan, A.

    2012-01-01

    Full text: Introduction: Renal artery aneurysms are a rarely detected clinical condition. They are mostly revealed by symptoms such as hematuria, hypertension and flank pain, and are generally detected during investigation of those symptoms or incidentally. Objectives and tasks: We aim to present the CTA and DSA findings of a 58-year-old male patient with complaints of flank pain, hematuria and hypertension. Materials and methods: We performed CTA and selective renal angiography on a 58-year-old male patient with complaints of hypertension, flank pain and hematuria. Results: An approximately 11.5 × 13.5 mm saccular aneurysm at the upper segmental branch of the left renal artery, together with a focal cortical infarct, was detected on abdominal CT before treatment. The aneurysm was confirmed by selective renal angiography and treated with a Cardiatis stent in the same procedure. Conclusion: The primary goal of treatment of renal artery aneurysms is to prevent complications such as rupture and thrombosis. Renal artery aneurysms were previously treated with open surgery; parenchyma-preserving, minimally invasive treatments such as Cardiatis stent placement are now used successfully

  6. Association of Attorney Advertising and FDA Action with Prescription Claims: A Time Series Segmented Regression Analysis.

    Science.gov (United States)

    Tippett, Elizabeth C; Chen, Brian K

    2015-12-01

    Attorneys sponsor television advertisements that include repeated warnings about adverse drug events to solicit consumers for lawsuits against drug manufacturers. The relationship between such advertising, safety actions by the US Food and Drug Administration (FDA), and healthcare use is unknown. To investigate the relationship between attorney advertising, FDA actions, and prescription drug claims. The study examined total users per month and prescription rates for seven drugs with substantial attorney advertising volume and FDA or other safety interventions during 2009. Segmented regression analysis was used to detect pre-intervention trends, post-intervention level changes, and changes in post-intervention trends relative to the pre-intervention trends in the use of these seven drugs, using advertising volume, media hits, and the number of Medicare enrollees as covariates. Data for these variables were obtained from the Center for Medicare and Medicaid Services, Kantar Media, and LexisNexis. Several types of safety actions were associated with reductions in drug users and/or prescription rates, particularly for fentanyl, varenicline, and paroxetine. In most cases, attorney advertising volume rose in conjunction with major safety actions. Attorney advertising volume was positively correlated with prescription rates in five of seven drugs, likely because advertising volume began rising before safety actions, when prescription rates were still increasing. On the other hand, attorney advertising had mixed associations with the number of users per month. Regulatory and safety actions likely reduced the number of users and/or prescription rates for some drugs. Attorneys may have strategically chosen to begin advertising adverse drug events prior to major safety actions, but we found little evidence that attorney advertising reduced drug use. Further research is needed to better understand how consumers and physicians respond to attorney advertising.
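Segmented (interrupted time-series) regression of the kind used here fits a pre-intervention level and trend plus a post-intervention level change and trend change. A minimal NumPy sketch on a toy series (the data and the intervention month are invented, not the study's claims data):

```python
import numpy as np

def its_fit(y, t0):
    """Interrupted time-series OLS.
    Design: [intercept, pre-trend, post-intervention level change,
             post-intervention trend change]."""
    t = np.arange(len(y), dtype=float)
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t),   # baseline level
                         t,                 # pre-intervention trend
                         post,              # level change at intervention
                         post * (t - t0)])  # trend change after intervention
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Toy monthly users: rising by 2/month, then a safety action at month 24
# drops the level by 30 while leaving the trend unchanged.
t = np.arange(48)
y = 100.0 + 2.0 * t - 30.0 * (t >= 24)
beta = its_fit(y, t0=24)
```

In the study, covariates such as advertising volume and media hits would enter as additional columns of the design matrix.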

  7. Rolling Element Bearing Performance Degradation Assessment Using Variational Mode Decomposition and Gath-Geva Clustering Time Series Segmentation

    Directory of Open Access Journals (Sweden)

    Yaolong Li

    2017-01-01

    Full Text Available Focusing on the issue of rolling element bearing (REB) performance degradation assessment (PDA), a solution based on variational mode decomposition (VMD) and Gath-Geva clustering time series segmentation (GGCTSS) is proposed. VMD is a new decomposition method; unlike recursive decomposition methods such as empirical mode decomposition (EMD), local mean decomposition (LMD) and local characteristic-scale decomposition (LCD), VMD requires a priori parameters. In this paper, we propose a method, based on a genetic algorithm, to optimize the parameters of VMD, namely the number of decomposition modes and the moderate bandwidth constraint. Executing VMD with the acquired parameters yields the BLIMFs, and the sensitive BLIMFs are selected by taking their envelopes. We then take the amplitude of the defect frequency (ADF) as a degradation feature. To obtain the performance degradation assessment, we apply Gath-Geva clustering time series segmentation. The method is then validated on two run-to-failure datasets. The results indicate that the extracted feature depicts the degradation process precisely.
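After decomposition, the ADF feature is simply the spectral amplitude at the bearing's defect frequency. The sketch below is a simplified stand-in: it reads the amplitude from a plain FFT of a synthetic signal rather than from the envelope spectrum of selected BLIMFs, and the frequencies and amplitudes are invented for illustration.

```python
import numpy as np

def adf(signal, fs, f_defect, tol=1.0):
    """Amplitude of the defect frequency: peak of the amplitude spectrum
    within +/- tol Hz of f_defect (simplified stand-in for the ADF feature)."""
    spec = np.abs(np.fft.rfft(signal)) * 2.0 / len(signal)  # one-sided amplitudes
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    band = (freqs >= f_defect - tol) & (freqs <= f_defect + tol)
    return spec[band].max()

# Synthetic vibration: a 0.5-amplitude defect tone at 37 Hz plus a weaker tone.
fs = 1000
t = np.arange(0, 1, 1.0 / fs)
x = 0.5 * np.sin(2 * np.pi * 37 * t) + 0.1 * np.sin(2 * np.pi * 80 * t)
a = adf(x, fs, f_defect=37)
```

Tracked over a run-to-failure record, this amplitude would grow as the defect develops, which is the behaviour the GGCTSS stage segments into degradation states.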

  8. Complex network approach to fractional time series

    Energy Technology Data Exchange (ETDEWEB)

    Manshour, Pouya [Physics Department, Persian Gulf University, Bushehr 75169 (Iran, Islamic Republic of)

    2015-10-15

    In order to extract correlation information inherited in stochastic time series, the visibility graph algorithm has been recently proposed, by which a time series can be mapped onto a complex network. We demonstrate that the visibility algorithm is not an appropriate one to study the correlation aspects of a time series. We then employ the horizontal visibility algorithm, as a much simpler one, to map fractional processes onto complex networks. The degree distributions are shown to have parabolic exponential forms with Hurst dependent fitting parameter. Further, we take into account other topological properties such as maximum eigenvalue of the adjacency matrix and the degree assortativity, and show that such topological quantities can also be used to predict the Hurst exponent, with an exception for anti-persistent fractional Gaussian noises. To solve this problem, we take into account the Spearman correlation coefficient between nodes' degrees and their corresponding data values in the original time series.
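The horizontal visibility algorithm links two points of the series whenever every intermediate value lies strictly below both of them. A compact reference implementation of that mapping (our own, for illustration):

```python
def horizontal_visibility_graph(series):
    """Map a time series onto a graph: nodes i < j are linked iff every
    value strictly between them is lower than min(series[i], series[j])."""
    n = len(series)
    edges = set()
    for i in range(n - 1):
        edges.add((i, i + 1))                     # neighbours always see each other
        top = series[i + 1]                       # running max of intermediates
        for j in range(i + 2, n):
            if min(series[i], series[j]) > top:   # all intermediate values lower
                edges.add((i, j))
            top = max(top, series[j])
    return edges

edges = horizontal_visibility_graph([3, 1, 2, 4])
```

Topological quantities such as the degree distribution or assortativity discussed above are then computed on the resulting edge set.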

  9. New Approach for Segmentation and Quantification of Two-Dimensional Gel Electrophoresis Images

    DEFF Research Database (Denmark)

    Anjo, Antonio dos; Laurell Blom Møller, Anders; Ersbøll, Bjarne Kjær

    2011-01-01

    Motivation: Detection of protein spots in two-dimensional gel electrophoresis images (2-DE) is a very complex task and current approaches addressing this problem still suffer from significant shortcomings. When quantifying a spot, most of the current software applications include a lot of backgro...

  10. An automated approach for segmentation of intravascular ultrasound images based on parametric active contour models

    International Nuclear Information System (INIS)

    Vard, Alireza; Jamshidi, Kamal; Movahhedinia, Naser

    2012-01-01

    This paper presents a fully automated approach to detect the intima and media-adventitia borders in intravascular ultrasound images based on parametric active contour models. To detect the intima border, we compute a new image feature applying a combination of short-term autocorrelations calculated for the contour pixels. These feature values are employed to define an energy function of the active contour called normalized cumulative short-term autocorrelation. Exploiting this energy function, the intima border is separated accurately from the blood region contaminated by high speckle noise. To extract media-adventitia boundary, we define a new form of energy function based on edge, texture and spring forces for the active contour. Utilizing this active contour, the media-adventitia border is identified correctly even in presence of branch openings and calcifications. Experimental results indicate accuracy of the proposed methods. In addition, statistical analysis demonstrates high conformity between manual tracing and the results obtained by the proposed approaches.

  11. A novel textile characterisation approach using an embedded sensor system and segmented textile manipulation

    Science.gov (United States)

    Fial, Julian; Carosella, Stefan; Langheinz, Mario; Wiest, Patrick; Middendorf, Peter

    2018-05-01

    This paper investigates the application of sensors on carbon fibre textiles for the purpose of textile characterisation and draping process optimisation. The objective is to analyse a textile's condition during the draping operation and to actively manipulate boundary conditions in order to achieve better preform quality. Various realisations of textile-integrated sensors are presented, focusing on the measurement of textile strain. Furthermore, a complex textile characterisation approach is presented in which these sensors are to be integrated.

  12. The Effect of Time and Fusion Length on Motion of the Unfused Lumbar Segments in Adolescent Idiopathic Scoliosis.

    Science.gov (United States)

    Marks, Michelle C; Bastrom, Tracey P; Petcharaporn, Maty; Shah, Suken A; Betz, Randal R; Samdani, Amer; Lonner, Baron; Miyanji, Firoz; Newton, Peter O

    2015-11-01

    The purpose of this study was to assess L4-S1 inter-vertebral coronal motion of the unfused distal segments of the spine in patients with adolescent idiopathic scoliosis (AIS) after instrumented fusion with regard to postoperative time and fusion length, independently. Coronal motion was assessed by standardized radiographs acquired in maximum right and left bending positions. The intervertebral angles were measured via digital radiographic measuring software and the motion from the levels of L4-S1 was summed. The entire cohort was included to evaluate the effect of follow-up time on residual motion. Patients were grouped into early, midterm, and long-term (>10 years) follow-up groups. A subset of patients (n = 35) with a primary thoracic curve and a nonstructural modifier type "C" lumbar curve were grouped as either selective fusion (lowest instrumented vertebra [LIV] of L1 and above) or longer fusion (LIV of L2 and below) and the effect on motion was evaluated. The data for 259 patients are included. The distal residual unfused motion (from L4 to S1) remained unchanged across early, midterm, and long-term follow-up. In the selective fusion subset of patients, a significant increase in motion from L4 to S1 was seen in the patients who were fused long versus the selectively fused patients, irrespective of length of follow-up time. Motion in the unfused distal lumbar segments did not vary over the >10-year follow-up period. However, in patients with a primary thoracic curve and a nonstructural lumbar curve, the choice to fuse longer versus shorter may have significant consequences. The summed motion from L4 to S1 is 50% greater in patients fused longer compared with those patients with a selective fusion, in which postoperative motion is shared by more unfused segments. The implications of this focal increased motion are unknown and can only be surmised; further research is warranted. Copyright © 2015 Scoliosis Research Society. Published by Elsevier Inc. All rights reserved.

  13. Posterior Segment Intraocular Foreign Body: Extraction Surgical Techniques, Timing, and Indications for Vitrectomy

    Directory of Open Access Journals (Sweden)

    Dante A. Guevara-Villarreal

    2016-01-01

    Ocular penetrating injury with Intraocular Foreign Body (IOFB) is a common form of ocular injury. Several techniques to remove IOFBs have been reported by different authors. The aim of this publication is to review the different timings and surgical techniques related to the extraction of IOFBs. Material and Methods. A PubMed search on "Extraction of Intraocular Foreign Body," "Timing for Surgery Intraocular Foreign Body," and "Surgical Technique Intraocular Foreign Body" was performed. Results. Potential advantages of immediate and delayed IOFB removal have been reported, with different results. Several techniques to remove IOFBs have been reported by different authors, with good results. Conclusion. The most important factor at the time of IOFB extraction is the experience of the surgeon.

  14. Classifier Directed Data Hybridization for Geographic Sample Supervised Segment Generation

    Directory of Open Access Journals (Sweden)

    Christoff Fourie

    2014-11-01

    Quality segment generation is a well-known challenge and research objective within Geographic Object-based Image Analysis (GEOBIA). Although methodological avenues within GEOBIA are diverse, segmentation commonly plays a central role in most approaches, influencing and being influenced by surrounding processes. A general approach using supervised quality measures, specifically user-provided reference segments, suggests casting the parameters of a given segmentation algorithm as a multidimensional search problem. In such a sample supervised segment generation approach, spatial metrics observing the user-provided reference segments may drive the search process. The search is commonly performed by metaheuristics. A novel sample supervised segment generation approach is presented in this work, where the spectral content of provided reference segments is queried. A one-class classification process using spectral information from inside the provided reference segments is used to generate a probability image, which in turn is employed to direct a hybridization of the original input imagery. Segmentation is performed on such a hybrid image. These processes are adjustable, interdependent and form a part of the search problem. Results are presented detailing the performances of four method variants compared to the generic sample supervised segment generation approach, under various conditions in terms of resultant segment quality, required computing time and search process characteristics. Multiple metrics, metaheuristics and segmentation algorithms are tested with this approach. Using the spectral data contained within user-provided reference segments to tailor the output generally improves the results in the investigated problem contexts, but at the expense of additional required computing time.

  15. A multiple kernel classification approach based on a Quadratic Successive Geometric Segmentation methodology with a fault diagnosis case.

    Science.gov (United States)

    Honório, Leonardo M; Barbosa, Daniele A; Oliveira, Edimar J; Garcia, Paulo A Nepomuceno; Santos, Murillo F

    2018-03-01

    This work presents a new approach for solving classification and learning problems. The Successive Geometric Segmentation technique is applied to encapsulate large datasets by using a series of Oriented Bounding Hyper Box (OBHBs). Each OBHB is obtained through linear separation analysis and each one represents a specific region in a pattern's solution space. Also, each OBHB can be seen as a data abstraction layer and be considered as an individual Kernel. Thus, it is possible by applying a quadratic discriminant function, to assemble a set of nonlinear surfaces separating each desirable pattern. This approach allows working with large datasets using high speed linear analysis tools and yet providing a very accurate non-linear classifier as final result. The methodology was tested using the UCI Machine Learning repository and a Power Transformer Fault Diagnosis real scenario problem. The results were compared with different approaches provided by literature and, finally, the potential and further applications of the methodology were also discussed. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
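
The box-encapsulation idea above can be illustrated with a much-reduced sketch: one *axis-aligned* bounding box per class (the paper uses oriented hyperboxes obtained via linear separation analysis, which this toy does not attempt), with classification by box membership. All names are our own.

```python
def fit_boxes(points_by_class):
    """Fit one axis-aligned bounding box per class label.
    A simplification of the paper's Oriented Bounding Hyper Boxes (OBHBs)."""
    boxes = {}
    for label, pts in points_by_class.items():
        dims = list(zip(*pts))                      # transpose to per-dimension
        boxes[label] = [(min(d), max(d)) for d in dims]
    return boxes

def classify(boxes, point):
    """Return the labels of all boxes containing the point.
    An empty result means the point falls outside every encapsulated region;
    multiple results would be where the paper's quadratic discriminant
    functions arbitrate between overlapping boxes."""
    hits = []
    for label, box in boxes.items():
        if all(lo <= x <= hi for x, (lo, hi) in zip(point, box)):
            hits.append(label)
    return hits
```

The appeal of the approach is that box fitting and membership tests are cheap linear-time operations over large datasets, with the nonlinear decision surface assembled only at the boundaries.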

  16. Probabilistic Segmentation of Folk Music Recordings

    Directory of Open Access Journals (Sweden)

    Ciril Bohak

    2016-01-01

    The paper presents a novel method for automatic segmentation of folk music field recordings. The method is based on a distance measure that uses dynamic time warping to cope with tempo variations and a dynamic programming approach to handle pitch drifting when finding similarities and estimating the length of the repeating segment. A probabilistic framework based on HMMs is used to find segment boundaries, searching for the optimal match among the expected segment length, between-segment similarities, and likely locations of segment beginnings. An evaluation of several current state-of-the-art approaches for segmentation of commercial music is presented and their weaknesses when dealing with folk music are exposed, such as intolerance to pitch drift and variable tempo. The proposed method is evaluated and its performance analyzed on a collection of 206 folk songs of different ensemble types: solo, two- and three-voiced, choir, instrumental, and instrumental with singing. It outperforms current commercial music segmentation methods for noninstrumental music and is on a par with the best for instrumental recordings. The method is also comparable to a more specialized method for segmentation of solo singing folk music recordings.
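
The dynamic-time-warping distance at the heart of the method above has a standard dynamic-programming form. A minimal sketch with absolute-difference cost (the paper's actual feature representation and cost are not specified here):

```python
def dtw(a, b):
    """Classic dynamic-time-warping distance between two sequences.
    D[i][j] = cost(i, j) + min(insert, delete, match); the warping path
    absorbs tempo variations by stretching one sequence against the other."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # deletion
                                 D[i][j - 1],      # insertion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Two identical melodies yield distance 0 regardless of content, and a tempo-stretched repeat of a segment scores far lower than an unrelated segment, which is what makes DTW a usable similarity measure for repetition-based segmentation.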

  17. A Combinatorial Approach to Time Asymmetry

    Directory of Open Access Journals (Sweden)

    Martin Tamm

    2016-03-01

    In this paper, simple models for the multiverse are analyzed. Each universe is viewed as a path in a graph, and by considering very general statistical assumptions, essentially originating from Boltzmann, we can make the set of all such paths into a finite probability space. We can then also attempt to compute the probabilities for different kinds of behavior and in particular under certain conditions argue that an asymmetric behavior of the entropy should be much more probable than a symmetric one. This offers an explanation for the asymmetry of time as a broken symmetry in the multiverse. The focus here is on simple models which can be analyzed using methods from combinatorics. Although the computational difficulties rapidly become enormous when the size of the model grows, this still gives hints about how a full-scale model should behave.

  18. The detection of local irreversibility in time series based on segmentation

    Science.gov (United States)

    Teng, Yue; Shang, Pengjian

    2018-06-01

    We propose a strategy for the detection of local irreversibility in stationary time series based on multiple scales. The detection is beneficial for evaluating the displacement of irreversibility toward local skewness. By means of this method, we can effectively discuss the local irreversible fluctuations of time series as the scale changes. The method was applied to simulated nonlinear signals generated by the ARFIMA process and the logistic map to show how the irreversibility functions react as the scale increases. The method was also applied to financial market series, i.e., the American, Chinese and European markets. The local irreversibility of the different markets demonstrates distinct characteristics. Simulations and real data support the need to explore local irreversibility.
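
One common scale-dependent irreversibility statistic, in the spirit of the analysis above (the authors' exact functional is not reproduced here), is the imbalance between rising and falling increments at a given lag. A minimal sketch:

```python
def irreversibility(series, scale=1):
    """Porta-style asymmetry index at a given scale: the imbalance between
    positive and negative increments x[t+scale] - x[t]. Returns 0 for a
    balanced (reversible-looking) series and approaches +1 or -1 for a
    strongly directional one."""
    diffs = [series[t + scale] - series[t] for t in range(len(series) - scale)]
    up = sum(1 for d in diffs if d > 0)
    down = sum(1 for d in diffs if d < 0)
    return 0.0 if up + down == 0 else (up - down) / (up + down)
```

Sweeping `scale` over a range of values yields the kind of multi-scale irreversibility profile the record describes, with a locally skewed stretch of data showing up as a local excursion of the index.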

  19. Segmentation and Time-of-Day Patterns in Foreign Exchange Markets

    OpenAIRE

    Angelo Ranaldo

    2007-01-01

    This paper sheds light on a puzzling pattern in foreign exchange markets: Domestic currencies appreciate (depreciate) systematically during foreign (domestic) working hours. These time-of-day patterns are statistically and economically highly significant. They pervasively persist across many years, even after accounting for calendar effects. This phenomenon is difficult to reconcile with the random walk and market efficiency hypothesis. Microstructural and behavioural explanations suggest tha...

  20. An Unsupervised Approach to Activity Recognition and Segmentation based on Object-Use Fingerprints

    DEFF Research Database (Denmark)

    Gu, Tao; Chen, Shaxun; Tao, Xianping

    2010-01-01

    Human activity recognition is an important task which has many potential applications. In recent years, researchers from pervasive computing are interested in deploying on-body sensors to collect observations and applying machine learning techniques to model and recognize activities. Supervised machine learning techniques typically require an appropriate training process in which training data need to be labeled manually. In this paper, we propose an unsupervised approach based on object-use fingerprints to recognize activities without human labeling. We show how to build our activity models... a trace and detect the boundary of any two adjacent activities. We develop a wearable RFID system and conduct a real-world trace collection done by seven volunteers in a smart home over a period of 2 weeks. We conduct comprehensive experimental evaluations and comparison study. The results show that our...

  1. Time course of cortisol loss in hair segments under immersion in hot water.

    Science.gov (United States)

    Li, Jifeng; Xie, Qiaozhen; Gao, Wei; Xu, Youyun; Wang, Shuang; Deng, Huihua; Lu, Zuhong

    2012-02-18

    Hair cortisol is considered a good biomarker of chronic stress. Major loss of cortisol from hair during long-term exposure to environmental factors strongly affects its proper assessment of chronic stress in humans. However, there has been no research on the time course of hair cortisol loss during such long-term exposure. Hair samples longer than 1 cm in the posterior vertex region were cut as close as possible to the scalp. The 1-cm hair samples were treated by ultraviolet irradiation, immersion in shampoo solution, or water immersion at 40, 65 and 80°C. Hair cortisol content was determined with high performance liquid chromatography tandem mass spectrometry. Ultraviolet irradiation and immersion in shampoo solution and hot water gave rise to significant cortisol loss in hair. Hair cortisol content decreased sharply with water immersion duration during the initial stage and decreased slowly in the following stage. This 2-stage loss process as a function of water immersion duration modeled, to some extent, the time course of hair cortisol loss during long-term exposure to external environments. Cortisol from hair samples closest to the scalp in the posterior vertex could represent central hypothalamo-pituitary-adrenal activity more accurately. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Time variations in the mechanical characteristics of local crustal segments according to seismic observations

    Science.gov (United States)

    Kocharyan, G. G.; Gamburtseva, N. G.; Sanina, I. A.; Danilova, T. V.; Nesterkina, M. A.; Gorbunova, E. M.; Ivanchenko, G. N.

    2011-04-01

    The results of the seismic observations made with two different experimental setups are presented. In the first case, the signals produced by underground nuclear explosions at the Semipalatinsk Test Site were measured on a linear profile, which made it possible to clearly outline the areas where the mechanical properties of rocks experienced considerable time variations. In the second case, the waves excited by open-pit mine blasts, recorded at a small-aperture seismic array at the Mikhnevo Geophysical Station (Institute of Geosphere Dynamics, Russian Academy of Sciences) on the East European Platform, enabled estimation of variations in the integral characteristics of the seismic path. Measurements in aseismic regions characterized by diverse geological structure and different tectonic conditions revealed similar effects of the strong dependence of seismic parameters on the time of the explosions. Here, the variations in the maximum amplitudes of oscillations, unrelated to seasonal changes or local conditions, reached a factor of two. The characteristic periods of these variations, including a distinct annual rhythm, are probably fragments of a lower-frequency process. The obtained results suggest that these variations are due to changes in the stress-strain state of active fault zones, which, in turn, can be associated with the macroscale motion of large blocks triggered by tidal strains, tectonic forces and, possibly, variations in the rate of the Earth's rotation.

  3. Segmentation of the Infant Food Market

    OpenAIRE

    Hrůzová, Daniela

    2015-01-01

    The theoretical part covers general market segmentation, namely the marketing importance of differences among consumers, the essence of market segmentation, its main conditions and the process of segmentation, which consists of four consecutive phases - defining the market, determining important criteria, uncovering segments and developing segment profiles. The segmentation criteria, segmentation approaches, methods and techniques for the process of market segmentation are also described in t...

  4. Approaching time is important for assessment of endoscopic surgical skills.

    Science.gov (United States)

    Tokunaga, Masakazu; Egi, Hiroyuki; Hattori, Minoru; Yoshimitsu, Masanori; Sumitani, Daisuke; Kawahara, Tomohiro; Okajima, Masazumi; Ohdan, Hideki

    2012-05-01

    This study aimed to verify whether the approaching time (the time taken to reach the target point from another point a short distance apart, during point-to-point movement in endoscopic surgery), assessed using the Hiroshima University Endoscopic Surgical Assessment Device (HUESAD), could distinguish the skill level of surgeons. Expert surgeons (who had performed more than 50 endoscopic surgeries) and novice surgeons (who had no experience in performing endoscopic surgery) were tested using the HUESAD. The approaching time, total time, and intermediate time (total time minus approaching time) were measured and analyzed using the trajectory of the tip of the instrument. The approaching time and total time were significantly shorter in the expert group than in the novice group (p < 0.05). The intermediate time did not significantly differ between the groups (p > 0.05). The approaching time, which is a component of the total time, is very important in the measurement of the total time to assess endoscopic surgical skills. Further, the approaching time was useful in the HUESAD-based assessment of the skill of surgeons performing endoscopic surgery.
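
One plausible way to extract an approaching time from an instrument-tip trajectory, consistent with the definition above (the HUESAD's actual algorithm and thresholds are not published in this abstract, so the radii below are illustrative assumptions):

```python
def approaching_time(times, distances, near=5.0, reached=0.5):
    """Hypothetical reading of the approaching-time metric: elapsed time from
    first coming within `near` (e.g., mm) of the target until first coming
    within `reached`. Returns None if the target is never reached."""
    t_near = t_hit = None
    for t, d in zip(times, distances):
        if t_near is None and d <= near:
            t_near = t                 # entered the approach zone
        if d <= reached:
            t_hit = t                  # reached the target
            break
    if t_near is None or t_hit is None:
        return None
    return t_hit - t_near
```

The intermediate time would then be the total task time minus this quantity, matching the decomposition used in the study.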

  5. Multineuronal vectorization is more efficient than time-segmental vectorization for information extraction from neuronal activities in the inferior temporal cortex.

    Science.gov (United States)

    Kaneko, Hidekazu; Tamura, Hiroshi; Tate, Shunta; Kawashima, Takahiro; Suzuki, Shinya S; Fujita, Ichiro

    2010-08-01

    In order for patients with disabilities to control assistive devices with their own neural activity, multineuronal spike trains must be efficiently decoded because only limited computational resources can be used to generate prosthetic control signals in portable real-time applications. In this study, we compare the abilities of two vectorizing procedures (multineuronal and time-segmental) to extract information from spike trains during the same total neuron-seconds. In the multineuronal vectorizing procedure, we defined a response vector whose components represented the spike counts of one to five neurons. In the time-segmental vectorizing procedure, a response vector consisted of components representing a neuron's spike counts for one to five time-segment(s) of a response period of 1 s. Spike trains were recorded from neurons in the inferior temporal cortex of monkeys presented with visual stimuli. We examined whether the amount of information of the visual stimuli carried by these neurons differed between the two vectorizing procedures. The amount of information calculated with the multineuronal vectorizing procedure, but not the time-segmental vectorizing procedure, significantly increased with the dimensions of the response vector. We conclude that the multineuronal vectorizing procedure is superior to the time-segmental vectorizing procedure in efficiently extracting information from neuronal signals. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
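
The two vectorizing procedures compared above are easy to state concretely. A minimal sketch with hypothetical spike-time data (times in seconds; function names are our own):

```python
def multineuronal_vector(spike_trains, neuron_ids, t0=0.0, t1=1.0):
    """Multineuronal procedure: one component per neuron, each holding that
    neuron's spike count over the whole response period [t0, t1)."""
    return [sum(t0 <= s < t1 for s in spike_trains[i]) for i in neuron_ids]

def time_segmental_vector(spike_train, n_bins, t0=0.0, t1=1.0):
    """Time-segmental procedure: one component per time bin, holding a single
    neuron's spike counts in n_bins equal segments of [t0, t1)."""
    width = (t1 - t0) / n_bins
    vec = [0] * n_bins
    for s in spike_train:
        if t0 <= s < t1:
            vec[min(n_bins - 1, int((s - t0) / width))] += 1
    return vec
```

Both procedures spend the same total neuron-seconds for equal vector dimension (k neurons for 1 s versus 1 neuron in k bins), which is exactly the comparison the study makes.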

  6. Scleral Fixation of Posteriorly Dislocated Intraocular Lenses by 23-Gauge Vitrectomy without Anterior Segment Approach

    Directory of Open Access Journals (Sweden)

    Jeroni Nadal

    2015-01-01

    Background. To evaluate visual outcomes, corneal changes, intraocular lens (IOL) stability, and complications after repositioning posteriorly dislocated IOLs and sulcus fixation with polyester sutures. Design. Prospective consecutive case series. Setting. Institut Universitari Barraquer. Participants. 25 eyes of 25 patients with posteriorly dislocated IOL. Methods. The patients underwent 23-gauge vitrectomy via the sulcus to rescue dislocated IOLs and fix them to the scleral wall with a previously looped nonabsorbable polyester suture. Main Outcome Measures. Best corrected visual acuity (BCVA) LogMAR, corneal astigmatism, endothelial cell count, IOL stability, and postoperative complications. Results. Mean follow-up time was 18.8 ± 10.9 months. Mean surgery time was 33 ± 2 minutes. Mean BCVA improved from 0.30 ± 0.48 before surgery to 0.18 ± 0.60 (p = 0.015) at 1 month, which persisted to 12 months (0.18 ± 0.60). Neither corneal astigmatism nor endothelial cell count showed alterations 1 year after surgery. Complications included IOL subluxation in 1 eye (4%), vitreous hemorrhage in 2 eyes (8%), transient hypotony in 2 eyes (8%), and cystic macular edema in 1 eye (4%). No patients presented retinal detachment. Conclusion. This surgical technique proved successful in the management of dislocated IOL. Functional results were good and the complications were easily resolved.

  7. Scleral Fixation of Posteriorly Dislocated Intraocular Lenses by 23-Gauge Vitrectomy without Anterior Segment Approach.

    Science.gov (United States)

    Nadal, Jeroni; Kudsieh, Bachar; Casaroli-Marano, Ricardo P

    2015-01-01

    Background. To evaluate visual outcomes, corneal changes, intraocular lens (IOL) stability, and complications after repositioning posteriorly dislocated IOLs and sulcus fixation with polyester sutures. Design. Prospective consecutive case series. Setting. Institut Universitari Barraquer. Participants. 25 eyes of 25 patients with posteriorly dislocated IOL. Methods. The patients underwent 23-gauge vitrectomy via the sulcus to rescue dislocated IOLs and fix them to the scleral wall with a previously looped nonabsorbable polyester suture. Main Outcome Measures. Best corrected visual acuity (BCVA) LogMAR, corneal astigmatism, endothelial cell count, IOL stability, and postoperative complications. Results. Mean follow-up time was 18.8 ± 10.9 months. Mean surgery time was 33 ± 2 minutes. Mean BCVA improved from 0.30 ± 0.48 before surgery to 0.18 ± 0.60 (p = 0.015) at 1 month, which persisted to 12 months (0.18 ± 0.60). Neither corneal astigmatism nor endothelial cell count showed alterations 1 year after surgery. Complications included IOL subluxation in 1 eye (4%), vitreous hemorrhage in 2 eyes (8%), transient hypotony in 2 eyes (8%), and cystic macular edema in 1 eye (4%). No patients presented retinal detachment. Conclusion. This surgical technique proved successful in the management of dislocated IOL. Functional results were good and the complications were easily resolved.

  8. Simulation Approach for Timing Analysis of Genetic Logic Circuits

    DEFF Research Database (Denmark)

    Baig, Hasan; Madsen, Jan

    2017-01-01

    in a manner similar to electronic logic circuits, but they are much more stochastic and hence much harder to characterize. In this article, we introduce an approach to analyze the threshold value and timing of genetic logic circuits. We show how this approach can be used to analyze the timing behavior of single and cascaded genetic logic circuits. We further analyze the timing sensitivity of circuits by varying the degradation rates and concentrations. Our approach can be used not only to characterize the timing behavior but also to analyze the timing constraints of cascaded genetic logic circuits...

  9. Classification and evaluation strategies of auto-segmentation approaches for PET: Report of AAPM task group No. 211

    Science.gov (United States)

    Hatt, Mathieu; Lee, John A.; Schmidtlein, Charles R.; Naqa, Issam El; Caldwell, Curtis; De Bernardi, Elisabetta; Lu, Wei; Das, Shiva; Geets, Xavier; Gregoire, Vincent; Jeraj, Robert; MacManus, Michael P.; Mawlawi, Osama R.; Nestle, Ursula; Pugachev, Andrei B.; Schöder, Heiko; Shepherd, Tony; Spezi, Emiliano; Visvikis, Dimitris; Zaidi, Habib; Kirov, Assen S.

    2017-01-01

    Purpose The purpose of this educational report is to provide an overview of the present state-of-the-art PET auto-segmentation (PET-AS) algorithms and their respective validation, with an emphasis on providing the user with help in understanding the challenges and pitfalls associated with selecting and implementing a PET-AS algorithm for a particular application. Approach A brief description of the different types of PET-AS algorithms is provided using a classification based on method complexity and type. The advantages and the limitations of the current PET-AS algorithms are highlighted based on current publications and existing comparison studies. A review of the available image datasets and contour evaluation metrics in terms of their applicability for establishing a standardized evaluation of PET-AS algorithms is provided. The performance requirements for the algorithms and their dependence on the application, the radiotracer used, and the evaluation criteria are described and discussed. Finally, a procedure for algorithm acceptance and implementation, as well as the complementary role of manual and auto-segmentation, are addressed. Findings A large number of PET-AS algorithms have been developed within the last 20 years. Many of the proposed algorithms are based on either fixed or adaptively selected thresholds. More recently, numerous papers have proposed the use of more advanced image analysis paradigms to perform semi-automated delineation of the PET images. However, the level of algorithm validation is variable and for most published algorithms is either insufficient or inconsistent, which prevents recommending a single algorithm. This is compounded by the fact that realistic image configurations with low signal-to-noise ratios (SNR) and heterogeneous tracer distributions have rarely been used. Large variations in the evaluation methods used in the literature point to the need for a standardized evaluation protocol. Conclusions Available comparison studies...
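
The simplest PET-AS family named above, fixed thresholding, keeps every voxel whose intensity is at least some fraction of the image maximum. A minimal dependency-free sketch (the 40% default is a commonly quoted illustrative value, not a recommendation of the report):

```python
def fixed_threshold_segment(image, fraction=0.40):
    """Fixed-threshold PET auto-segmentation sketch: keep voxels whose
    intensity is >= `fraction` of the global maximum. `image` is any nested
    list of intensities; returns a same-shaped binary mask."""
    flat = []
    def walk(x):                       # collect all voxel values
        if isinstance(x, list):
            for y in x:
                walk(y)
        else:
            flat.append(x)
    walk(image)
    cutoff = fraction * max(flat)
    def mask(x):                       # rebuild the nested shape as 0/1
        if isinstance(x, list):
            return [mask(y) for y in x]
        return 1 if x >= cutoff else 0
    return mask(image)
```

Adaptive-threshold methods, the report's second family, differ only in how `cutoff` is chosen (e.g., as a function of source-to-background ratio rather than a fixed fraction).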

  10. A Bayesian network based framework for real-time crash prediction on the basic freeway segments of urban expressways.

    Science.gov (United States)

    Hossain, Moinul; Muromachi, Yasunori

    2012-03-01

    The concept of measuring the crash risk for a very short time window in the near future is gaining practicality due to recent advancements in the fields of information systems and traffic sensor technology. Although some real-time crash prediction models have already been proposed, they are still primitive in nature and require substantial improvements to be implemented in real life. This manuscript investigates the major shortcomings of the existing models and offers solutions to overcome them with an improved framework and modeling method. It employs a random multinomial logit model to identify the most important predictors as well as the most suitable detector locations from which to acquire data to build such a model. Afterwards, it applies a Bayesian belief net (BBN) to build the real-time crash prediction model. The model has been constructed using high resolution detector data collected from the Shibuya 3 and Shinjuku 4 expressways under the jurisdiction of Tokyo Metropolitan Expressway Company Limited, Japan. It has been specifically built for basic freeway segments and it predicts the chance of formation of a hazardous traffic condition within the next 4-9 min for a particular 250-meter-long road section. The performance evaluation results reflect that at an average threshold value the model is able to successfully classify 66% of the future crashes with a false alarm rate less than 20%. Copyright © 2011 Elsevier Ltd. All rights reserved.
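
The evaluation quoted above (66% of crashes detected at under 20% false alarms) is a point on a threshold-sweep curve. A minimal sketch of how such rates are computed from model risk scores (variable names are our own; the BBN itself is out of scope here):

```python
def classification_rates(scores, labels, threshold):
    """Detection rate (share of true crash cases flagged) and false alarm
    rate (share of non-crash cases flagged) when every risk score at or
    above `threshold` triggers an alarm. Labels: 1 = crash, 0 = no crash."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    pos = sum(labels)
    neg = len(labels) - pos
    return tp / pos, fp / neg
```

Sweeping `threshold` over the score range traces out the detection/false-alarm trade-off from which an operating point like the one reported is chosen.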

  11. Atlas-Based Segmentation Improves Consistency and Decreases Time Required for Contouring Postoperative Endometrial Cancer Nodal Volumes

    International Nuclear Information System (INIS)

    Young, Amy V.; Wortham, Angela; Wernick, Iddo; Evans, Andrew; Ennis, Ronald D.

    2011-01-01

    Purpose: Accurate target delineation of the nodal volumes is essential for three-dimensional conformal and intensity-modulated radiotherapy planning for endometrial cancer adjuvant therapy. We hypothesized that atlas-based segmentation ('autocontouring') would lead to time savings and more consistent contours among physicians. Methods and Materials: A reference anatomy atlas was constructed using the data from 15 postoperative endometrial cancer patients by contouring the pelvic nodal clinical target volume on the simulation computed tomography scan according to the Radiation Therapy Oncology Group 0418 trial using commercially available software. On the simulation computed tomography scans from 10 additional endometrial cancer patients, the nodal clinical target volume autocontours were generated. Three radiation oncologists corrected the autocontours and delineated the manual nodal contours under timed conditions while unaware of the other contours. The time difference was determined, and the overlap of the contours was calculated using Dice's coefficient. Results: For all physicians, manual contouring of the pelvic nodal target volumes and editing the autocontours required a mean ± standard deviation of 32 ± 9 vs. 23 ± 7 minutes, respectively (p = .000001), a 26% time savings. For each physician, the time required to delineate the manual contours vs. correcting the autocontours was 30 ± 3 vs. 21 ± 5 min (p = .003), 39 ± 12 vs. 30 ± 5 min (p = .055), and 29 ± 5 vs. 20 ± 5 min (p = .0002). The mean overlap increased from manual contouring (0.77) to correcting the autocontours (0.79; p = .038). Conclusion: The results of our study have shown that autocontouring leads to increased consistency and time savings when contouring the nodal target volumes for adjuvant treatment of endometrial cancer, although the autocontours still required careful editing to ensure that the lymph nodes at risk of recurrence are properly included in the target volume.
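
Dice's coefficient, the overlap measure used in the study above, is 2|A∩B| / (|A| + |B|) for two contours treated as voxel sets: 1.0 for identical contours, 0.0 for disjoint ones. A minimal sketch:

```python
def dice(mask_a, mask_b):
    """Dice's coefficient between two binary masks given as sets (or any
    iterables) of voxel coordinates: 2|A∩B| / (|A| + |B|)."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0          # convention: two empty contours agree perfectly
    return 2.0 * len(a & b) / (len(a) + len(b))
```

The reported improvement from 0.77 (manual) to 0.79 (corrected autocontours) is a change in exactly this quantity averaged over physician pairs.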

  12. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    International Nuclear Information System (INIS)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-01-01

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction.

  13. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    Energy Technology Data Exchange (ETDEWEB)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich [Departments of Electrical and Computer Engineering and Internal Medicine, Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242 (United States); Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, A-8010 Graz (Austria); Department of Electrical and Computer Engineering, Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242 (United States); Department of Radiology, Medical University Graz, Auenbruggerplatz 34, A-8010 Graz (Austria)

    2012-03-15

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction.

  14. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods.

    Science.gov (United States)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-03-01

    Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction.
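    The relative volumetric overlap error quoted in this record is commonly defined from the Jaccard index as 100 * (1 - |A ∩ B| / |A ∪ B|). A minimal sketch on hypothetical masks (the geometry below is invented for illustration):

```python
import numpy as np

def volumetric_overlap_error(seg, ref):
    """Relative volumetric overlap error in percent:
    100 * (1 - |A intersect B| / |A union B|)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    return 100.0 * (1.0 - inter / union)

# Hypothetical reference liver mask and a segmentation missing one slab.
ref = np.zeros((10, 10, 10), dtype=bool)
ref[1:9, 1:9, 1:9] = True         # 512 voxels
seg = np.zeros((10, 10, 10), dtype=bool)
seg[1:9, 1:9, 2:9] = True         # 448 voxels, all inside the reference
err = volumetric_overlap_error(seg, ref)   # 100 * (1 - 448/512) = 12.5
```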

  15. Real-time visual communication to aid disaster recovery in a multi-segment hybrid wireless networking system

    Science.gov (United States)

    Al Hadhrami, Tawfik; Wang, Qi; Grecos, Christos

    2012-06-01

    When natural disasters or other large-scale incidents occur, obtaining accurate and timely information on the developing situation is vital to effective disaster recovery operations. High-quality video streams and high-resolution images, if available in real time, would provide an invaluable source of current situation reports to the incident management team. Meanwhile, a disaster often causes significant damage to the communications infrastructure. Therefore, another essential requirement for disaster management is the ability to rapidly deploy a flexible incident area communication network. Such a network would facilitate the transmission of real-time video streams and still images from the disrupted area to remote command and control locations. In this paper, a comprehensive end-to-end video/image transmission system between an incident area and a remote control centre is proposed and implemented, and its performance is experimentally investigated. In this study, a hybrid multi-segment communication network is designed that seamlessly integrates terrestrial wireless mesh networks (WMNs), distributed wireless visual sensor networks, an airborne platform with video camera balloons, and a Digital Video Broadcasting-Satellite (DVB-S) system. By carefully integrating all of these rapidly deployable, interworking and collaborative networking technologies, we can fully exploit the joint benefits provided by WMNs, WSNs, balloon camera networks and DVB-S for real-time video streaming and image delivery in emergency situations among the disaster-hit area, the remote control centre and the rescue teams in the field. The whole proposed system is implemented in a proven simulator. Through extensive simulations, the real-time visual communication performance of this integrated system has been numerically evaluated, towards a more in-depth understanding of how to support high-quality visual communications in such a demanding context.

  16. Classification and evaluation strategies of auto-segmentation approaches for PET: Report of AAPM task group No. 211.

    Science.gov (United States)

    Hatt, Mathieu; Lee, John A; Schmidtlein, Charles R; Naqa, Issam El; Caldwell, Curtis; De Bernardi, Elisabetta; Lu, Wei; Das, Shiva; Geets, Xavier; Gregoire, Vincent; Jeraj, Robert; MacManus, Michael P; Mawlawi, Osama R; Nestle, Ursula; Pugachev, Andrei B; Schöder, Heiko; Shepherd, Tony; Spezi, Emiliano; Visvikis, Dimitris; Zaidi, Habib; Kirov, Assen S

    2017-06-01

    on advanced image analysis paradigms provide generally more accurate segmentation than approaches based on PET activity thresholds, particularly for realistic configurations. However, this may not be the case for simple shape lesions in situations with a narrower range of parameters, where simpler methods may also perform well. Recent algorithms which employ some type of consensus or automatic selection between several PET-AS methods have potential to overcome the limitations of the individual methods when appropriately trained. In either case, accuracy evaluation is required for each different PET scanner and scanning and image reconstruction protocol. For the simpler, less robust approaches, adaptation to scanning conditions, tumor type, and tumor location by optimization of parameters is necessary. The results from the method evaluation stage can be used to estimate the contouring uncertainty. All PET-AS contours should be critically verified by a physician. A standard test, i.e., a benchmark dedicated to evaluating both existing and future PET-AS algorithms, needs to be designed to aid clinicians in evaluating and selecting PET-AS algorithms and to establish performance limits for their acceptance for clinical use. The initial steps toward designing and building such a standard are undertaken by the task group members. © 2017 American Association of Physicists in Medicine.
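    One simple form of the consensus idea mentioned in this record is majority voting over the binary contours produced by several PET auto-segmentation methods. A minimal sketch with invented masks (the method names in the comments are illustrative, not taken from the report):

```python
import numpy as np

# Hypothetical binary contours of the same 1-D activity profile produced by
# three different PET auto-segmentation methods.
masks = np.array([
    [0, 1, 1, 1, 1, 0, 0],   # e.g. a fixed-threshold method
    [0, 1, 1, 1, 1, 1, 0],   # e.g. an adaptive-threshold method
    [0, 0, 1, 1, 1, 1, 0],   # e.g. a gradient-based method
], dtype=bool)

# Majority-vote consensus: a voxel joins the contour when at least two of
# the three methods include it.
consensus = masks.sum(axis=0) >= 2
```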

  17. Detecting trends in forest disturbance and recovery using yearly Landsat time series: 1. LandTrendr — Temporal segmentation algorithms

    Science.gov (United States)

    Robert E. Kennedy; Zhiqiang Yang; Warren B. Cohen

    2010-01-01

    We introduce and test LandTrendr (Landsat-based detection of Trends in Disturbance and Recovery), a new approach to extract spectral trajectories of land surface change from yearly Landsat time-series stacks (LTS). The method brings together two themes in time-series analysis of LTS: capture of short-duration events and smoothing of long-term trends. Our strategy is...
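    The temporal segmentation idea behind this kind of trajectory analysis can be sketched as fitting piecewise linear segments to a yearly index and placing a vertex where the fit error is minimized. The sketch below is a single-breakpoint toy version under invented index values, not the full LandTrendr algorithm:

```python
import numpy as np

def best_breakpoint(years, index):
    """Fit two linear segments to a yearly spectral trajectory and return
    the breakpoint index minimizing the total squared fitting error (a
    single-vertex version of temporal segmentation)."""
    def sse(x, y):
        coef = np.polyfit(x, y, 1)
        return float(np.sum((np.polyval(coef, x) - y) ** 2))
    errors = {k: sse(years[:k], index[:k]) + sse(years[k:], index[k:])
              for k in range(2, len(years) - 1)}
    return min(errors, key=errors.get)

years = np.arange(1990, 2000)
# Invented trajectory: stable forest, abrupt disturbance in 1996, then a
# steady recovery trend.
index = np.array([0.8] * 6 + [0.2, 0.3, 0.4, 0.5])
k = best_breakpoint(years, index)
disturbance_year = int(years[k])   # 1996
```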

  18. a segmentation approach

    African Journals Online (AJOL)

    kirstam

    a visitor survey was conducted at the Cape Town International Jazz ... after controlling for customer education and income, and for service quality (Lynn ... US perception that black diners tip less, are confirmed or contradicted in the context ... and tipping behaviour as well as the findings from cross-cultural tipping and market.

  19. Recognition Using Classification and Segmentation Scoring

    National Research Council Canada - National Science Library

    Kimball, Owen; Ostendorf, Mari; Rohlicek, Robin

    1992-01-01

    .... We describe an approach to connected word recognition that allows the use of segmental information through an explicit decomposition of the recognition criterion into classification and segmentation scoring...

  20. SEGMENTATION OF ENVIRONMENTAL TIME LAPSE IMAGE SEQUENCES FOR THE DETERMINATION OF SHORE LINES CAPTURED BY HAND-HELD SMARTPHONE CAMERAS

    Directory of Open Access Journals (Sweden)

    M. Kröhnert

    2017-09-01

    Full Text Available Global environmental issues have gained relevance in recent years, with trends still rising. Disastrous floods in particular may cause serious damage within very short times. Although conventional gauging stations provide reliable information about prevailing water levels, they are highly cost-intensive and thus just sparsely installed. Smartphones with inbuilt cameras, powerful processing units and low-cost positioning systems seem to be very suitable, widespread measurement devices that could be used for geo-crowdsourcing purposes. Thus, we aim for the development of a versatile mobile water level measurement system to establish a densified hydrological network of water levels with high spatial and temporal resolution. This paper addresses a key issue of the entire system: the detection of running water shore lines in smartphone images. Flowing water never appears equally in close-range images even if the extrinsics remain unchanged. Its non-rigid behavior impedes the use of good practices for image segmentation as a prerequisite for water line detection. Consequently, we use a hand-held time lapse image sequence instead of a single image that provides the time component to determine a spatio-temporal texture image. Using a region growing concept, the texture is analyzed for immutable shore and dynamic water areas. Finally, the prevalent shore line is derived from the resultant shapes. For method validation, various study areas are observed from several distances covering urban and rural flowing waters with different characteristics. Future work provides a transformation of the water line into object space by image-to-geometry intersection.

  1. Segmentation of Environmental Time Lapse Image Sequences for the Determination of Shore Lines Captured by Hand-Held Smartphone Cameras

    Science.gov (United States)

    Kröhnert, M.; Meichsner, R.

    2017-09-01

    Global environmental issues have gained relevance in recent years, with trends still rising. Disastrous floods in particular may cause serious damage within very short times. Although conventional gauging stations provide reliable information about prevailing water levels, they are highly cost-intensive and thus just sparsely installed. Smartphones with inbuilt cameras, powerful processing units and low-cost positioning systems seem to be very suitable, widespread measurement devices that could be used for geo-crowdsourcing purposes. Thus, we aim for the development of a versatile mobile water level measurement system to establish a densified hydrological network of water levels with high spatial and temporal resolution. This paper addresses a key issue of the entire system: the detection of running water shore lines in smartphone images. Flowing water never appears equally in close-range images even if the extrinsics remain unchanged. Its non-rigid behavior impedes the use of good practices for image segmentation as a prerequisite for water line detection. Consequently, we use a hand-held time lapse image sequence instead of a single image that provides the time component to determine a spatio-temporal texture image. Using a region growing concept, the texture is analyzed for immutable shore and dynamic water areas. Finally, the prevalent shore line is derived from the resultant shapes. For method validation, various study areas are observed from several distances covering urban and rural flowing waters with different characteristics. Future work provides a transformation of the water line into object space by image-to-geometry intersection.
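    The spatio-temporal texture idea can be sketched as a per-pixel measure of temporal variability over a registered time lapse: shore pixels stay nearly constant while water pixels fluctuate. The data and the simple threshold below are invented stand-ins for the paper's region-growing analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical registered gray-value time lapse: 20 frames of a 6x6 scene.
# Columns 0-2 are static shore; columns 3-5 are rippling water whose
# brightness fluctuates from frame to frame.
frames = np.full((20, 6, 6), 120.0)
frames[:, :, 3:] += rng.normal(0.0, 25.0, size=(20, 6, 3))

# Spatio-temporal texture image: per-pixel variability over time.
texture = frames.std(axis=0)

# Immutable shore pixels have (near) zero temporal texture, dynamic water
# pixels do not; a simple threshold stands in for the region-growing step.
water = texture > 5.0
```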

  2. A combined segmented anode gas ionization chamber and time-of-flight detector for heavy ion elastic recoil detection analysis

    Science.gov (United States)

    Ström, Petter; Petersson, Per; Rubel, Marek; Possnert, Göran

    2016-10-01

    A dedicated detector system for heavy ion elastic recoil detection analysis at the Tandem Laboratory of Uppsala University is presented. Benefits of combining a time-of-flight measurement with a segmented anode gas ionization chamber are demonstrated. The capability of ion species identification is improved with the present system, compared to that obtained when using a single solid state silicon detector for the full ion energy signal. The system enables separation of light elements, up to neon, based on atomic number while signals from heavy elements such as molybdenum and tungsten are separated based on mass, to a sample depth on the order of 1 μm. The performance of the system is discussed and a selection of material analysis applications is given. Plasma-facing materials from fusion experiments, in particular metal mirrors, are used as a main example for the discussion. Marker experiments using nitrogen-15 or oxygen-18 are specific cases for which the described improved species separation and sensitivity are required. Resilience to radiation damage and significantly improved energy resolution for heavy elements at low energies are additional benefits of the gas ionization chamber over a solid state detector based system.
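    The mass separation in a combined ToF-E telescope follows from kinematics alone: the flight time over a known path gives the velocity, and together with the measured energy this yields the mass. The flight path and timing numbers below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Ion mass from a combined time-of-flight and energy measurement: the
# flight time t over a known path L gives v = L/t, and E = m v^2 / 2
# yields m = 2 E (t/L)^2. All numbers below are hypothetical.
AMU_KG = 1.66053906660e-27   # kg per atomic mass unit
EV_J = 1.602176634e-19       # J per electronvolt

def mass_amu(energy_mev: float, tof_ns: float, path_m: float) -> float:
    energy_j = energy_mev * 1e6 * EV_J
    velocity = path_m / (tof_ns * 1e-9)
    return 2.0 * energy_j / velocity**2 / AMU_KG

# A 4 MeV recoil taking 69.7 ns over a 0.5 m flight path comes out near
# mass 15, i.e. consistent with a nitrogen-15 marker recoil.
m = mass_amu(4.0, 69.7, 0.5)
```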

  3. Sagittal Plane Correction Using the Lateral Transpsoas Approach: A Biomechanical Study on the Effect of Cage Angle and Surgical Technique on Segmental Lordosis.

    Science.gov (United States)

    Melikian, Rojeh; Yoon, Sangwook Tim; Kim, Jin Young; Park, Kun Young; Yoon, Caroline; Hutton, William

    2016-09-01

    Cadaveric biomechanical study. To determine the degree of segmental correction that can be achieved through lateral transpsoas approach by varying cage angle and adding anterior longitudinal ligament (ALL) release and posterior element resection. Lordotic cage insertion through the lateral transpsoas approach is being used increasingly for restoration of sagittal alignment. However, the degree of correction achieved by varying cage angle and ALL release and posterior element resection is not well defined. Thirteen lumbar motion segments between L1 and L5 were dissected into single motion segments. Segmental angles and disk heights were measured under both 50 N and 500 N compressive loads under the following conditions: intact specimen, discectomy (collapsed disk simulation), insertion of parallel cage, 10° cage, 30° cage with ALL release, 30° cage with ALL release and spinous process (SP) resection, 30° cage with ALL release, SP resection, facetectomy, and compression with pedicle screws. Segmental lordosis was not increased by either parallel or 10° cages as compared with intact disks, and contributed small amounts of lordosis when compared with the collapsed disk condition. Placement of 30° cages with ALL release increased segmental lordosis by 10.5°. Adding SP resection increased lordosis to 12.4°. Facetectomy and compression with pedicle screws further increased lordosis to approximately 26°. No interventions resulted in a decrease in either anterior or posterior disk height. Insertion of a parallel or 10° cage has little effect on lordosis. A 30° cage insertion with ALL release resulted in a modest increase in lordosis (10.5°). The addition of SP resection and facetectomy was needed to obtain a larger amount of correction (26°). None of the cages, including the 30° lordotic cage, caused a decrease in posterior disk height suggesting hyperlordotic cages do not cause foraminal stenosis. N/A.

  4. Skinner-Rusk approach to time-dependent mechanics

    NARCIS (Netherlands)

    Cortés, Jorge; Martínez, Sonia; Cantrijn, Frans

    2002-01-01

    The geometric approach to autonomous classical mechanical systems in terms of a canonical first-order system on the Whitney sum of the tangent and cotangent bundle, developed by Skinner and Rusk, is extended to the time-dependent framework.

  5. High-resolution, time-resolved MRA provides superior definition of lower-extremity arterial segments compared to 2D time-of-flight imaging.

    Science.gov (United States)

    Thornton, F J; Du, J; Suleiman, S A; Dieter, R; Tefera, G; Pillai, K R; Korosec, F R; Mistretta, C A; Grist, T M

    2006-08-01

    To evaluate a novel time-resolved contrast-enhanced (CE) projection reconstruction (PR) magnetic resonance angiography (MRA) method for identifying potential bypass graft target vessels in patients with Class II-IV peripheral vascular disease. Twenty patients (M:F = 15:5, mean age = 58 years, range = 48-83 years) were recruited from routine MRA referrals. All imaging was performed on a 1.5 T MRI system with fast gradients (Signa LX; GE Healthcare, Waukesha, WI). Images were acquired with a novel technique that combined undersampled PR with a time-resolved acquisition to yield an MRA method with high temporal and spatial resolution. The method is called PR hyper time-resolved imaging of contrast kinetics (PR-hyperTRICKS). Quantitative and qualitative analyses were used to compare two-dimensional (2D) time-of-flight (TOF) and PR-hyperTRICKS in 13 arterial segments per lower extremity. Statistical analysis was performed with the Wilcoxon signed-rank test. Fifteen percent (77/517) of the vessels were scored as missing or nondiagnostic with 2D TOF, but were scored as diagnostic with PR-hyperTRICKS. Image quality was superior with PR-hyperTRICKS vs. 2D TOF (on a four-point scale, mean rank = 3.3 +/- 1.2 vs. 2.9 +/- 1.2, P < 0.0001). PR-hyperTRICKS produced images with high contrast-to-noise ratios (CNR) and high spatial and temporal resolution. 2D TOF images were of inferior quality due to moderate spatial resolution, inferior CNR, greater flow-related artifacts, and absence of temporal resolution. PR-hyperTRICKS provides superior preoperative assessment of lower limb ischemia compared to 2D TOF.

  6. Pancreas and cyst segmentation

    Science.gov (United States)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low-contrast boundaries, its variability in shape and location, and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
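    The region-growing half of such a combination can be sketched as a breadth-first flood fill from a seed, accepting neighbors whose intensity stays within a tolerance of the seed. The image, seed, and tolerance below are invented for illustration and do not reproduce the paper's algorithm:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """Grow a region from `seed`, accepting 4-connected neighbors whose
    intensity lies within `tol` of the seed intensity."""
    h, w = img.shape
    ref = img[seed]
    mask = np.zeros(img.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(img[ny, nx] - ref) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# Hypothetical slice: a bright "cyst" (value 200) inside darker tissue (50).
img = np.full((7, 7), 50.0)
img[2:5, 2:5] = 200.0
cyst = region_grow(img, (3, 3), tol=30.0)
n_pixels = int(cyst.sum())   # the 3x3 bright patch only
```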

  7. Cell segmentation in time-lapse fluorescence microscopy with temporally varying sub-cellular fusion protein patterns.

    Science.gov (United States)

    Bunyak, Filiz; Palaniappan, Kannappan; Chagin, Vadim; Cardoso, M

    2009-01-01

    Fluorescently tagged proteins such as GFP-PCNA produce rich dynamically varying textural patterns of foci distributed in the nucleus. This enables the behavioral study of sub-cellular structures during different phases of the cell cycle. The varying punctate patterns of fluorescence, drastic changes in SNR, changes in shape and position during mitosis, and the abundance of touching cells, however, require more sophisticated algorithms for reliable automatic cell segmentation and lineage analysis. Since the cell nuclei are non-uniform in appearance, a distribution-based modeling of foreground classes is essential. The recently proposed graph partitioning active contours (GPAC) algorithm supports region descriptors and flexible distance metrics. We extend GPAC for fluorescence-based cell segmentation using regional density functions and dramatically improve its efficiency for segmentation from O(N^4) to O(N^2), for an image with N^2 pixels, making it practical and scalable for high throughput microscopy imaging studies.

  8. Segmentation-less Digital Rock Physics

    Science.gov (United States)

    Tisato, N.; Ikeda, K.; Goldfarb, E. J.; Spikes, K. T.

    2017-12-01

    In the last decade, Digital Rock Physics (DRP) has become an avenue to investigate physical and mechanical properties of geomaterials. DRP offers the advantage of simulating laboratory experiments on numerical samples that are obtained from analytical methods. Potentially, DRP could spare part of the time and resources that are allocated to performing complicated laboratory tests. Like classic laboratory tests, the goal of DRP is to accurately estimate physical properties of rocks like hydraulic permeability or elastic moduli. Nevertheless, the physical properties of samples imaged using micro-computed tomography (μCT) are estimated through segmentation of the μCT dataset. Segmentation proves to be a challenging and arbitrary procedure that typically leads to inaccurate estimates of physical properties. Here we present a novel technique to extract physical properties from a μCT dataset without the use of segmentation. We show examples in which we use the segmentation-less method to simulate elastic wave propagation and pressure wave diffusion to estimate elastic properties and permeability, respectively. The proposed method takes advantage of effective medium theories and uses the density and the porosity that are measured in the laboratory to constrain the results. We discuss the results and highlight that segmentation-less DRP is more accurate than segmentation-based DRP approaches and theoretical modeling for the studied rock. In conclusion, the segmentation-less approach presented here seems to be a promising method to improve accuracy and to ease the overall workflow of DRP.
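    One way the laboratory constraint can enter a segmentation-less workflow is to map gray values to a continuous per-voxel porosity, rescaled so the volume-averaged density honors the measured bulk density instead of thresholding voxels into phases. The mapping and all numbers below are an invented sketch under an assumed quartz/water mineralogy, not the authors' exact formulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 8-bit micro-CT volume: raw gray values, no phase labels.
ct = rng.integers(40, 220, size=(16, 16, 16)).astype(float)

# Laboratory measurement and assumed mineralogy used as constraints.
rho_bulk = 2100.0    # kg/m^3, measured bulk density of the plug
rho_grain = 2650.0   # kg/m^3, assumed grain density (e.g. quartz)
rho_fluid = 1000.0   # kg/m^3, pore fluid (water)

# Assign each voxel a continuous porosity from its gray value, rescaled so
# that the volume-averaged density reproduces the laboratory bulk density.
phi_target = (rho_grain - rho_bulk) / (rho_grain - rho_fluid)
gray = (ct - ct.min()) / (ct.max() - ct.min())
phi = 1.0 - gray                      # darker voxels = more pore space
phi *= phi_target / phi.mean()        # enforce the lab-derived porosity
rho = rho_grain * (1.0 - phi) + rho_fluid * phi   # per-voxel density
```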

  9. A Kalman-filter based approach to identification of time-varying gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Jie Xiong

    Full Text Available MOTIVATION: Conventional identification methods for gene regulatory networks (GRNs) have overwhelmingly adopted static topology models, which remain unchanged over time to represent the underlying molecular interactions of a biological system. However, GRNs are dynamic in response to physiological and environmental changes. Although there is a rich literature in modeling static or temporally invariant networks, how to systematically recover these temporally changing networks remains a major pressing challenge. The purpose of this study is to suggest a two-step strategy that recovers time-varying GRNs. RESULTS: It is suggested in this paper to utilize a switching auto-regressive model to describe the dynamics of time-varying GRNs, and a two-step strategy is proposed to recover the structure of time-varying GRNs. In the first step, the change points are detected by a Kalman-filter based method. The observed time series are divided into several segments using these detection results; and each time series segment lying between two successive change points is associated with an individual static regulatory network. In the second step, conditional network structure identification methods are used to reconstruct the topology for each time interval. This two-step strategy efficiently decouples the change point detection problem and the topology inference problem. Simulation results show that the proposed strategy can detect the change points precisely and recover each individual topology structure effectively. Moreover, computation results with the developmental data of Drosophila melanogaster show that the proposed change point detection procedure is also able to work effectively in real world applications and the change point estimation accuracy exceeds that of other existing approaches, which means the suggested strategy may also be helpful in solving actual GRN reconstruction problems.
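    The change-point-detection step can be sketched with a scalar Kalman filter that tracks a slowly varying level and flags time points whose normalized innovation is improbably large. This toy version (random-walk state model, invented noise levels and synthetic data) illustrates the idea, not the paper's exact algorithm:

```python
import numpy as np

def kalman_change_points(y, q=1e-4, r=0.01, gate=4.0):
    """Track a slowly varying level with a scalar Kalman filter (random-walk
    state model) and flag time points whose normalized innovation exceeds
    `gate` standard deviations as change points."""
    x, p = y[0], 1.0
    changes = []
    for t in range(1, len(y)):
        p += q                        # predict: state variance grows by q
        s = p + r                     # innovation variance
        nu = y[t] - x                 # innovation (one-step residual)
        if abs(nu) / np.sqrt(s) > gate:
            changes.append(t)         # regime switch detected
            x, p = y[t], 1.0          # re-initialize after the change
            continue
        k = p / s                     # Kalman gain
        x += k * nu                   # update state estimate
        p *= 1.0 - k                  # update state variance
    return changes

rng = np.random.default_rng(1)
# Synthetic "expression level": piecewise constant with a jump at t = 60.
y = np.concatenate([rng.normal(0.0, 0.1, 60), rng.normal(2.0, 0.1, 40)])
cps = kalman_change_points(y)
```

Each detected change point then demarcates a segment on which a static network topology would be inferred separately.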

  10. Improving Segmentation of 3D Retina Layers Based on Graph Theory Approach for Low Quality OCT Images

    Directory of Open Access Journals (Sweden)

    Stankiewicz Agnieszka

    2016-06-01

    Full Text Available This paper presents signal processing aspects for automatic segmentation of retinal layers of the human eye. The paper draws attention to the problems that occur during the computer image processing of images obtained with the use of Spectral Domain Optical Coherence Tomography (SD OCT). Accuracy of the retinal layer segmentation is shown for a set of typical 3D scans of rather low quality. Some possible ways to improve quality of the final results are pointed out. The experimental studies were performed using the so-called B-scans obtained with the OCT Copernicus HR device.

  11. Fuzzy segmentation of cerebral tissues in a 3-D MR image: a possibilistic approach versus other methods

    International Nuclear Information System (INIS)

    Barra, V.; Boire, J.Y.

    1999-01-01

    An algorithm for the segmentation of a single sequence of 3-D magnetic resonance images into cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM) classes is proposed. The method is a possibilistic clustering algorithm on the wavelet coefficients of the voxels. Possibilistic logic allows for modeling the uncertainty and the impreciseness inherent in MR images of the brain, while the wavelet representation allows both spatial and textural information to be taken into account. The procedure is fast, unsupervised and totally independent of statistical assumptions. The method is validated on a phantom, and then compared with other widely used brain tissue segmentation algorithms. (authors)

  12. IceTrendr: a linear time-series approach to monitoring glacier environments using Landsat

    Science.gov (United States)

    Nelson, P.; Kennedy, R. E.; Nolin, A. W.; Hughes, J. M.; Braaten, J.

    2017-12-01

    Arctic glaciers in Alaska and Canada have experienced some of the greatest ice mass loss of any region in recent decades. A challenge to understanding these changing ecosystems, however, is developing globally-consistent, multi-decadal monitoring of glacier ice. We present a toolset and approach that captures, labels, and maps glacier change for use in climate science, hydrology, and Earth science education using Landsat Time Series (LTS). The core step is "temporal segmentation," wherein a yearly LTS is cleaned using pre-processing steps, converted to a snow/ice index, and then simplified into the salient shape of the change trajectory ("temporal signature") using linear segmentation. Such signatures can be characterized as simple 'stable' or 'transition of glacier ice to rock' to more complex multi-year changes like 'transition of glacier ice to debris-covered glacier ice to open water to bare rock to vegetation'. This pilot study demonstrates the potential for interactively mapping, visualizing, and labeling glacier changes. What is truly innovative is that IceTrendr not only maps the changes but also uses expert knowledge to label the changes, and such labels can be applied to other glaciers exhibiting statistically similar temporal signatures. Our key findings are that the IceTrendr concept and software can provide important functionality for glaciologists and educators interested in studying glacier changes during the Landsat TM timeframe (1984-present). Issues of concern with using dense Landsat time-series approaches for glacier monitoring include the many missing images during the period 1984-1995 and the fact that automated cloud masks are challenged, requiring the user to manually identify cloud-free images. IceTrendr is much more than just a simple "then and now" approach to glacier mapping. This process is a means of integrating the power of computing, remote sensing, and expert knowledge to "tell the story" of glacier changes.

  13. Compounding approach for univariate time series with nonstationary variances

    Science.gov (United States)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, average over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the local variances thus obtained.
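
The windowing step described above can be sketched in a few lines. This is a toy illustration of extracting local variances whose empirical distribution the compounding ansatz would then integrate over; the synthetic signal, window length, and function name are assumptions for the example:

```python
import random
import statistics

def local_variances(series, window):
    # Split the signal into non-overlapping windows and estimate the variance
    # inside each one; the spread of these values reflects the nonstationarity.
    return [statistics.pvariance(series[i:i + window])
            for i in range(0, len(series) - window + 1, window)]

# Toy nonstationary signal: locally Gaussian, but the scale doubles halfway.
random.seed(0)
series = [random.gauss(0, 1) for _ in range(500)] + \
         [random.gauss(0, 2) for _ in range(500)]

variances = local_variances(series, window=100)
# The empirical distribution of these local variances is the parameter
# distribution the compounding approach needs.
print(min(variances), max(variances))
```

In the paper's setting the window length must match the relevant time scale of the system; here it is chosen arbitrarily.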

  14. Market segmentation in behavioral perspective.

    OpenAIRE

    Wells, V.K.; Chang, S.W.; Oliveira-Castro, J.M.; Pallister, J.

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847 consumers and from a total of 76,682 individual purchases, brand choice and price and reinforcement responsiveness were assessed for each segment a...

  15. A segmented cell approach for studying the effects of serpentine flow field parameters on PEMFC current distribution

    International Nuclear Information System (INIS)

    Reshetenko, Tatyana V.; Bender, Guido; Bethune, Keith; Rocheleau, Richard

    2013-01-01

    Highlights: ► Effects of flow field design on PEMFCs were investigated. ► A segmented cell was used to study 6- and 10-channel serpentine flow fields. ► The 10-channel flow field improved the fuel cell's performance at high current. ► The performance distribution was more uniform for the 10-channel than for the 6-channel flow field. ► The performance improvement was due to an increased pressure drop. -- Abstract: A serpentine flow field is a commonly used design in proton exchange membrane fuel cells (PEMFCs). Consequently, optimization of the flow field parameters is critically needed. A segmented cell system was used to study the impact of the flow field's parameters on the current distribution in a PEMFC, and the data obtained were analyzed in terms of voltage overpotentials. 6-channel and 10-channel serpentine flow field designs were investigated. At low current, the segments' performance was found to decrease slightly for the 10-channel serpentine flow field. However, increasing the number of channels increased the fuel cell performance when operating at high current, and the cell performance became more uniform downstream. The observed improvement in fuel cell performance was attributed to a decrease in mass transfer voltage losses (permeability and diffusion), due to an increased pressure drop. Spatially distributed electrochemical impedance spectroscopy (EIS) data showed differences in the local segment impedance response and confirmed the performance distribution and the impact of the flow field design.

  16. Building stewardship with recreation users: an approach of market segmentation to meet the goal of public-lands management

    Science.gov (United States)

    Po-Hsin Lai; Chia-Kuen Cheng; David Scott

    2007-01-01

    Participation in outdoor recreation has been increasing at a rate far exceeding the population growth since the 1980s. The growing demand for outdoor recreation amenities has imposed a great challenge on resource management agencies of public lands. This study proposed a segmentation framework to identify different outdoor recreation groups based on their attitudes...

  17. Segmentation Based Classification of 3D Urban Point Clouds: A Super-Voxel Based Approach with Evaluation

    Directory of Open Access Journals (Sweden)

    Laurent Trassoudaine

    2013-03-01

    Segmentation and classification of urban range data into different object classes present several challenges due to certain properties of the data, such as density variation, inconsistencies due to missing data, and the large data size, which requires heavy computation and large memory. A method to classify urban scenes based on a super-voxel segmentation of sparse 3D data obtained from LiDAR sensors is presented. The 3D point cloud is first segmented into voxels, which are then characterized by several attributes, transforming them into super-voxels. These are joined together using a link-chain method rather than the usual region-growing algorithm to create objects. These objects are then classified using geometrical models and local descriptors. In order to evaluate the results, a new metric that combines both segmentation and classification results simultaneously is presented. The effects of voxel size and of incorporating RGB color and laser reflectance intensity on the classification results are also discussed. The method is evaluated on standard data sets using different metrics to demonstrate its efficacy.
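
The first stage above, binning a point cloud into voxels and attaching per-voxel attributes, can be sketched directly. A minimal toy version (the real super-voxels carry richer attributes such as color, intensity, and surface normals; the function names here are illustrative):

```python
from collections import defaultdict

def voxelize(points, voxel_size):
    # Group 3-D points into axis-aligned cubic voxels keyed by integer indices.
    voxels = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        voxels[key].append((x, y, z))
    return voxels

def voxel_attributes(voxels):
    # Summarize each voxel (centroid, point count) -- a minimal stand-in for
    # the richer super-voxel attributes used in the paper.
    attrs = {}
    for key, pts in voxels.items():
        n = len(pts)
        attrs[key] = {
            "count": n,
            "centroid": tuple(sum(c) / n for c in zip(*pts)),
        }
    return attrs

points = [(0.1, 0.2, 0.0), (0.4, 0.3, 0.2), (2.5, 0.1, 0.0)]
v = voxelize(points, voxel_size=1.0)
print(len(v))  # → 2: two points share voxel (0, 0, 0), one falls in (2, 0, 0)
```

The link-chain grouping and classification stages would then operate on these per-voxel attribute dictionaries rather than on raw points.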

  18. A minimalist approach to conceptualization of time in quantum theory

    International Nuclear Information System (INIS)

    Kitada, Hitoshi; Jeknić-Dugić, Jasmina; Arsenijević, Momir; Dugić, Miroljub

    2016-01-01

    Ever since Schrödinger, time in quantum theory has been postulated to be Newtonian for every reference frame. With the help of certain known mathematical results, we show that the concept of the so-called Local Time allows avoiding this postulate. In effect, time appears as neither fundamental nor universal on the quantum-mechanical level, while being consistently attributable to every, at least approximately, closed quantum system as well as to each of its (conservative or not) subsystems. - Highlights: • The concept of universal time is an implicit assumption in the quantum foundations. • A minimalist approach to quantum foundations does not favor universal time. • Rather, the so-called concept of local time is emphasized as an alternative. • Hence a new mathematically consistent conceptualization of time in quantum physics.

  19. Segmented regression analysis of interrupted time series data to assess outcomes of a South American road traffic alcohol policy change.

    Science.gov (United States)

    Nistal-Nuño, Beatriz

    2017-09-01

    In Chile, a new law introduced in March 2012 decreased the legal blood alcohol concentration (BAC) limit for driving while impaired from 1 to 0.8 g/l and the legal BAC limit for driving under the influence of alcohol from 0.5 to 0.3 g/l. The goal is to assess the impact of this new law on mortality and morbidity outcomes in Chile. A review of national databases in Chile was conducted from January 2003 to December 2014. Segmented regression analysis of interrupted time series was used to analyze the data. In a series of multivariable linear regression models, the change in intercept and slope of the monthly incidence rates of traffic deaths and injuries associated with alcohol per 100,000 inhabitants was estimated from pre-intervention to post-intervention, while controlling for secular changes. In nested regression models, potential confounding seasonal effects were accounted for. All analyses were performed at a two-sided significance level of 0.05. Immediate level drops in all the monthly rates were observed after the law from the end of the pre-law period in the majority of models and in all the de-seasonalized models, although statistical significance was reached only in the model for injuries related to alcohol. After the law, the estimated monthly rate dropped abruptly by -0.869 for injuries related to alcohol and by -0.859 adjusting for seasonality (P < 0.001). Regarding the post-law long-term trends, a steeper decreasing trend was evident after the law in the models for deaths related to alcohol, although these differences were not statistically significant. Strong evidence of a reduction in traffic injuries related to alcohol was found following the law in Chile. Although the beneficial effects seen on deaths and overall injuries did not reach statistical significance, potentially clinically important effects cannot be ruled out. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd
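
Segmented regression of an interrupted time series estimates two intervention terms: an immediate level change and a trend (slope) change. A sketch of the design matrix for such a model, assuming the study window January 2003 to December 2014 (144 months) with the law taking effect in March 2012, month 111 of the series; the exact coding of the time-since-intervention column varies across implementations:

```python
def its_design_matrix(n_months, law_month):
    # Columns: intercept, elapsed time (secular trend), post-law indicator
    # (immediate level change), and time since the law (slope change).
    rows = []
    for t in range(1, n_months + 1):
        post = 1 if t >= law_month else 0
        rows.append([1, t, post, post * (t - law_month + 1)])
    return rows

X = its_design_matrix(n_months=144, law_month=111)
print(X[109])  # last pre-law month:   [1, 110, 0, 0]
print(X[110])  # first post-law month: [1, 111, 1, 1]
```

Regressing the monthly rate on these columns gives the pre-law trend (column 2), the abrupt drop at the law (column 3, the -0.869 figure above), and the post-law trend change (column 4); seasonal dummies would be appended for the de-seasonalized models.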

  20. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    Science.gov (United States)

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

    Automatic segmentation of the primary object in a video clip is a challenging problem as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach to foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation and fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization and can easily be trapped in local optima. In addition, they are usually time-consuming for analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and appearance model parameters simultaneously in one graph cut. The extensive experimental evaluations validate the superiority of the proposed approach over the state-of-the-art methods, in both efficiency and effectiveness.

  1. DTW-APPROACH FOR UNCORRELATED MULTIVARIATE TIME SERIES IMPUTATION

    OpenAIRE

    Phan , Thi-Thu-Hong; Poisson Caillault , Emilie; Bigand , André; Lefebvre , Alain

    2017-01-01

    International audience; Missing data are inevitable in almost all domains of applied science. Data analysis with missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Some well-known methods for multivariate time series imputation require high correlations between series or their features. In this paper, we propose an approach based on the shape-behaviour relation in low/un-correlated multivariate time series under an assumption of...
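
The DTW at the heart of such shape-based methods can be stated compactly. A minimal textbook dynamic-programming implementation with absolute-difference cost (an illustration of DTW itself, not the authors' full imputation pipeline):

```python
def dtw_distance(a, b):
    # Classic O(len(a) * len(b)) DTW: d[i][j] is the minimal cumulative cost
    # of aligning a[:i] with b[:j] under monotonic warping.
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# Identical shapes shifted in time align perfectly under DTW,
# whereas a pointwise Euclidean distance would not be zero.
print(dtw_distance([0, 0, 1, 2, 1, 0], [0, 1, 2, 1, 0, 0]))  # → 0.0
```

This invariance to local time shifts is precisely why DTW is attractive for matching the "shape" of a sub-sequence when cross-series correlations are weak.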

  2. Intelligent assembly time analysis, using a digital knowledge based approach

    NARCIS (Netherlands)

    Jin, Y.; Curran, R.; Butterfield, J.; Burke, R.; Welch, B.

    2009-01-01

    The implementation of effective time analysis methods fast and accurately in the era of digital manufacturing has become a significant challenge for aerospace manufacturers hoping to build and maintain a competitive advantage. This paper proposes a structure-oriented, knowledge-based approach for

  3. Segmentation algorithm for non-stationary compound Poisson processes. With an application to inventory time series of market members in a financial market

    Science.gov (United States)

    Tóth, B.; Lillo, F.; Farmer, J. D.

    2010-11-01

    We introduce an algorithm for the segmentation of a class of regime-switching processes. The segmentation algorithm is a non-parametric statistical method able to identify the regimes (patches) of a time series. The process is composed of consecutive patches of variable length; in each patch the process is described by a stationary compound Poisson process, i.e. a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch, and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)]. We show that the new algorithm outperforms the original one for regime-switching models of compound Poisson processes. As an application, we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange and observe that our method finds almost three times more patches than the original one.
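
The original Bernaola-Galván procedure locates a candidate boundary by scanning a Student's t statistic between the left and right sub-series means; significant splits are then applied recursively. A sketch of the single-split scan only (the full algorithm adds a significance threshold and recursion, and the compound-Poisson generalization of this paper replaces the plain mean comparison):

```python
import math

def best_split(series, min_size=2):
    # Slide a pointer over the series and compute the t statistic between the
    # left and right means; the maximizer is the candidate segment boundary.
    best_t, best_i = -1.0, None
    n = len(series)
    for i in range(min_size, n - min_size + 1):
        left, right = series[:i], series[i:]
        m1 = sum(left) / len(left)
        m2 = sum(right) / len(right)
        v1 = sum((x - m1) ** 2 for x in left) / (len(left) - 1)
        v2 = sum((x - m2) ** 2 for x in right) / (len(right) - 1)
        # Pooled standard error of the difference of means.
        sd = math.sqrt(((len(left) - 1) * v1 + (len(right) - 1) * v2) / (n - 2))
        se = sd * math.sqrt(1 / len(left) + 1 / len(right))
        if se > 0:
            t = abs(m1 - m2) / se
            if t > best_t:
                best_t, best_i = t, i
    return best_i

series = [1.0, 1.1, 0.9, 1.0, 1.1, 5.0, 5.1, 4.9, 5.2, 5.0]
print(best_split(series))  # → 5: the boundary between the two regimes
```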

  4. Smokers with ST-segment elevation myocardial infarction and short time to treatment have equal effects of PCI and fibrinolysis

    DEFF Research Database (Denmark)

    Rasmussen, Thomas; Kelbæk, Henning Skov; Madsen, Jan Kyst

    2012-01-01

    The purpose of this study was to examine the effect of primary percutaneous coronary intervention (PCI) compared to fibrinolysis in smokers and non-smokers with ST-segment elevation myocardial infarction (STEMI). Smokers seem to have less atherosclerosis but are more prone to thrombotic disease. … Compared to non-smokers, they have higher rates of early, complete reperfusion when treated with fibrinolysis for MI. …

  5. Modelling road accidents: An approach using structural time series

    Science.gov (United States)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia from 2001 to 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.

  6. Bayesian automated cortical segmentation for neonatal MRI

    Science.gov (United States)

    Chou, Zane; Paquette, Natacha; Ganesh, Bhavana; Wang, Yalin; Ceschin, Rafael; Nelson, Marvin D.; Macyszyn, Luke; Gaonkar, Bilwaj; Panigrahy, Ashok; Lepore, Natasha

    2017-11-01

    Several attempts have been made in the past few years to develop and implement automated segmentation of neonatal brain structural MRI. However, accurate automated MRI segmentation remains challenging in this population because of the low signal-to-noise ratio, large partial volume effects and inter-individual anatomical variability of the neonatal brain. In this paper, we propose a learning method for segmenting the whole-brain cortical grey matter on neonatal T2-weighted images. We trained our algorithm using a neonatal dataset composed of 3 full-term and 4 preterm infants scanned at term-equivalent age. Our segmentation pipeline combines the FAST algorithm from the FSL software library and a Bayesian segmentation approach to create a threshold matrix that minimizes the error of mislabeling brain tissue types. Our method shows promising results with our pilot training set. In both preterm and full-term neonates, automated Bayesian segmentation generates a smoother and more consistent parcellation compared to FAST, while successfully removing the subcortical structures and cleaning the edges of the cortical grey matter. This method shows promising refinement of the FAST segmentation, considerably reducing the manual input and editing required from the user, and further improving the reliability and processing time of neonatal MR image analysis. Further improvements will include a larger dataset of training images acquired from different manufacturers.

  7. A fully automated and reproducible level-set segmentation approach for generation of MR-based attenuation correction map of PET images in the brain employing single STE-MR imaging modality

    Energy Technology Data Exchange (ETDEWEB)

    Kazerooni, Anahita Fathi; Aarabi, Mohammad Hadi [Quantitative MR Imaging and Spectroscopy Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Ay, Mohammadreza [Quantitative MR Imaging and Spectroscopy Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Medical Imaging Systems Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Rad, Hamidreza Saligheh [Quantitative MR Imaging and Spectroscopy Group, Research Center for Cellular and Molecular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of)

    2014-07-29

    Generating an MR-based attenuation correction map (μ-map) for quantitative reconstruction of PET images still remains a challenge in hybrid PET/MRI systems, mainly because cortical bone structures are indistinguishable from proximal air cavities in conventional MR images. Recently, the development of short echo-time (STE) MR imaging sequences has shown promise in differentiating cortical bone from air. However, on STE-MR images, the bone appears with discontinuous boundaries. Therefore, segmentation techniques based on intensity classification, such as thresholding or fuzzy C-means, fail to homogeneously delineate bone boundaries, especially in the presence of intrinsic noise and intensity inhomogeneity. Consequently, they cannot be fully automated, must be fine-tuned on a case-by-case basis, and require additional morphological operations for segmentation refinement. To overcome these problems, in this study we introduce a new fully automatic and reproducible STE-MR segmentation approach exploiting level sets in a clustering-based intensity inhomogeneity correction framework to reliably delineate bone from soft tissue and air.

  8. A fully automated and reproducible level-set segmentation approach for generation of MR-based attenuation correction map of PET images in the brain employing single STE-MR imaging modality

    International Nuclear Information System (INIS)

    Kazerooni, Anahita Fathi; Aarabi, Mohammad Hadi; Ay, Mohammadreza; Rad, Hamidreza Saligheh

    2014-01-01

    Generating an MR-based attenuation correction map (μ-map) for quantitative reconstruction of PET images still remains a challenge in hybrid PET/MRI systems, mainly because cortical bone structures are indistinguishable from proximal air cavities in conventional MR images. Recently, the development of short echo-time (STE) MR imaging sequences has shown promise in differentiating cortical bone from air. However, on STE-MR images, the bone appears with discontinuous boundaries. Therefore, segmentation techniques based on intensity classification, such as thresholding or fuzzy C-means, fail to homogeneously delineate bone boundaries, especially in the presence of intrinsic noise and intensity inhomogeneity. Consequently, they cannot be fully automated, must be fine-tuned on a case-by-case basis, and require additional morphological operations for segmentation refinement. To overcome these problems, in this study we introduce a new fully automatic and reproducible STE-MR segmentation approach exploiting level sets in a clustering-based intensity inhomogeneity correction framework to reliably delineate bone from soft tissue and air.

  9. Ultra Innovative Approach to Integrate Cellphone Customer Market Segmentation Model Using Self Organizing Maps and K-Means Methodology

    Directory of Open Access Journals (Sweden)

    mohammad reza karimi alavijeh

    2016-07-01

    The utilization of 3G and 4G is rapidly increasing, and cellphone users are quickly changing their consumption behavior, usage preferences, and shopping habits. Accordingly, cellphone manufacturers should form an accurate insight into their target market and provide a "special offer" to their target consumers. In order to reach a correct understanding of the target market, the consumption behavior, and the lifestyle of the submarkets, we found the appropriate number of clusters after reviewing the shortcomings of traditional methods and introducing market segmentation techniques based on neural networks (self-organizing maps). Using the fuzzy Delphi technique, the variables for target market segmentation were identified. Finally, the obtained clusters and market segments were refined using the K-means and agglomerative clustering techniques. The population of this research comprised cellphone consumers in Tehran, with a sample of 130 respondents; after collecting data through questionnaires, the results demonstrated that the Tehran cellphone market comprises 5 clusters, each of which can be addressed with a separate marketing strategy and marketing mix, taking into account the competitive advantages of ICT companies to maximize demand and margin.
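
The K-means refinement stage above can be sketched with Lloyd's algorithm on a toy two-segment feature space. This is a minimal deterministic illustration, not the paper's SOM-plus-K-means pipeline; the fixed initial centroids are an assumption to keep the example reproducible (real use would seed randomly or use k-means++):

```python
def kmeans(points, k, init, n_iter=20):
    # Lloyd's algorithm: alternate nearest-centroid assignment and
    # centroid recomputation.
    centroids = list(init)
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:
                centroids[j] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return centroids, clusters

# Two obvious consumer segments in a 2-D feature space
# (e.g. price sensitivity vs. data usage, both hypothetical axes).
points = [(1, 1), (1.2, 0.8), (0.8, 1.1), (8, 8), (8.2, 7.9), (7.9, 8.2)]
centroids, clusters = kmeans(points, k=2, init=[(0, 0), (10, 10)])
print(sorted(len(c) for c in clusters))  # → [3, 3]
```

In practice the number of clusters (5 in the study) is chosen beforehand, here via the SOM stage, and each resulting centroid summarizes one consumer segment.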

  10. A novel time series link prediction method: Learning automata approach

    Science.gov (United States)

    Moradabadi, Behnaz; Meybodi, Mohammad Reza

    2017-09-01

    Link prediction is a main social network challenge that uses the network structure to predict future links. Common link prediction approaches use a static graph representation, in which a snapshot of the network is analyzed to find hidden or future links. For example, similarity-metric-based link prediction is a common traditional approach that calculates a similarity metric for each non-connected pair, sorts the pairs by their similarity scores, and labels those with higher scores as future links. Because people's activities in social networks are dynamic and uncertain, and the structure of the networks changes over time, deterministic graphs may not be appropriate for modeling and analysis of the social network. In the time-series link prediction problem, time series of link occurrences are used to predict future links. In this paper, we propose a new time-series link prediction method based on learning automata. In the proposed algorithm, for each link that must be predicted there is one learning automaton, and each learning automaton tries to predict the existence or non-existence of the corresponding link. To predict the link occurrence at time T, there is a chain consisting of stages 1 through T - 1, and the learning automaton passes through these stages to learn the existence or non-existence of the corresponding link. Our preliminary link prediction experiments with co-authorship and email networks have provided satisfactory results when time series of link occurrences are considered.
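
The building block of such a method is a learning automaton whose action probabilities are reinforced over the stage chain. A simplified sketch for one link, using a linear reward-inaction (L_RI) update and a supervised shortcut in which the observed outcome at each stage is reinforced directly (the paper's scheme maintains one automaton per link and a full choice/reward loop; the update rate `a = 0.1` and the toy history are assumptions):

```python
def lri_update(p, chosen, rewarded, a=0.1):
    # Linear reward-inaction (L_RI): on reward, shift probability mass toward
    # the chosen action; on penalty, leave the probability vector unchanged.
    if not rewarded:
        return p
    return [pi + a * (1 - pi) if i == chosen else pi * (1 - a)
            for i, pi in enumerate(p)]

# Two actions per link: 0 = "link absent", 1 = "link present".
history = [1, 1, 0, 1, 1, 1]   # observed occurrences of one link, stages 1..T-1
p = [0.5, 0.5]
for observed in history:
    p = lri_update(p, chosen=observed, rewarded=True)

# After traversing the chain, the automaton predicts "present" at time T.
print(p)  # p[1] ≈ 0.66 > p[0]
```

The probability vector stays normalized by construction, and a mostly-present history tilts it toward predicting the link's existence.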

  11. Segmentation: Identification of consumer segments

    DEFF Research Database (Denmark)

    Høg, Esben

    2005-01-01

    It is very common to categorise people, especially in the advertising business. Traditional marketing theory has also taken up consumer segments as a favourite topic. Segmentation is closely related to the broader concept of classification. From a historical point of view, classification has its origin in other sciences, for example biology, anthropology, etc. From an economic point of view, it is called segmentation when specific scientific techniques are used to classify consumers into different characteristic groupings. What is the purpose of segmentation? For example, to be able to obtain a basic understanding of grouping people. Advertising agencies may use segmentation to target advertisements, while food companies may use segmentation to develop products for various groups of consumers. MAPP has for example investigated the positioning of fish in relation to other food products...

  12. A robust approach towards unknown transformation, regional adjacency graphs, multigraph matching, segmentation of video frames from unmanned aerial vehicles (UAV)

    Science.gov (United States)

    Gohatre, Umakant Bhaskar; Patil, Venkat P.

    2018-04-01

    In computer vision applications, the detection and tracking of multiple objects in real time is an important research field that has gained much attention in recent years, concerned with finding non-stationary entities in image sequences. Object detection is the step that precedes following a moving object through a video, and object representation is the basis for tracking. Recognizing multiple objects in a video sequence is a challenging task. Image registration has long been used as a basis for detecting moving objects: registration finds correspondences between consecutive frame pairs based on image appearance under rigid and affine transformations. However, image registration is not well suited to handling events that can result in missed objects. To address such problems, this paper proposes a novel approach. Video frames are segmented using a region adjacency graph of visual appearance and geometric properties; matching between the resulting graph sequences is performed by multigraph matching; and matched-region labeling is obtained by a proposed graph coloring algorithm that assigns a foreground label to each respective region. The design is robust to unknown transformations, with significant improvement over existing work on real-time detection of multiple moving objects.

  13. Segmental Vitiligo.

    Science.gov (United States)

    van Geel, Nanja; Speeckaert, Reinhart

    2017-04-01

    Segmental vitiligo is characterized by its early onset, rapid stabilization, and unilateral distribution. Recent evidence suggests that segmental and nonsegmental vitiligo could represent variants of the same disease spectrum. Observational studies with respect to its distribution pattern point to a possible role of cutaneous mosaicism, whereas the original stated dermatomal distribution seems to be a misnomer. Although the exact pathogenic mechanism behind the melanocyte destruction is still unknown, increasing evidence has been published on the autoimmune/inflammatory theory of segmental vitiligo. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Diurnal Alterations of Refraction, Anterior Segment Biometrics, and Intraocular Pressure in Long-Time Dehydration due to Religious Fasting.

    Science.gov (United States)

    Baser, Gonen; Cengiz, Hakan; Uyar, Murat; Seker Un, Emine

    2016-01-01

    To investigate the effects of dehydration due to fasting on diurnal changes of intraocular pressure, anterior segment biometrics, and refraction. The intraocular pressures, anterior segment biometrics (axial length: AL; central corneal thickness: CCT; lens thickness: LT; anterior chamber depth: ACD), and refractive measurements of 30 eyes of 15 fasting healthy male volunteers were recorded at 8:00 in the morning and 17:00 in the evening in the Ramadan of 2013 and two months later. The results were compared and the statistical analyses were performed using the RStudio software, version 0.98.501. The variables were investigated using visual (histograms, probability plots) and analytical methods (Kolmogorov-Smirnov/Shapiro-Wilk test) to determine whether or not they were normally distributed. The refractive values remained stable in the fasting as well as in the control period (p = 0.384). The axial length measured slightly shorter in the fasting period (p = 0.001). The corneal thickness presented a diurnal variation, in which the cornea measured thinner in the evening. The difference between the fasting and control periods was not statistically significant (p = 0.359). The major differences were observed in the anterior chamber depth and IOP. The ACD was shallower in the evening during the fasting period, whereas it was deeper in the control period. The diurnal IOP difference was greater in the fasting period than in the control period. Both were statistically significant (p = 0.001). The LT remained unchanged in both periods. The major differences were the anterior chamber shallowing in the evening hours and the IOP changes. Our study supports the hypothesis that the posterior segment of the eye is more responsible for the axial length alterations and that normovolemia has a more dominant influence on diurnal IOP changes.

  15. Finite-Time Approach to Microeconomic and Information Exchange Processes

    Directory of Open Access Journals (Sweden)

    Serghey A. Amelkin

    2009-07-01

    The finite-time approach allows one to optimize regimes of processes in macrosystems when the duration of the processes is restricted. The driving force of the processes is a difference of intensive variables: temperatures in thermodynamics, values in economics, etc. In microeconomic systems, two counterflow fluxes appear due to a single driving force: the fluxes of goods and money. Another possible case is two fluxes with the same direction. The processes of information exchange can be described by this formalism.

  16. Time delay correlations in chaotic scattering and random matrix approach

    International Nuclear Information System (INIS)

    Lehmann, N.; Savin, D.V.; Sokolov, V.V.; Sommers, H.J.

    1994-01-01

    We study the correlations in the time delay in a model of chaotic resonance scattering based on the random matrix approach. Analytical formulae, valid for an arbitrary number of open channels and arbitrary coupling strength between resonances and channels, are obtained by the supersymmetry method. The time delay correlation function, though not a Lorentzian, is characterized, similarly to that of the scattering matrix, by the gap between the cloud of complex poles of the S-matrix and the real energy axis. 28 refs.; 4 figs

  17. Numerical approaches to time evolution of complex quantum systems

    International Nuclear Information System (INIS)

    Fehske, Holger; Schleede, Jens; Schubert, Gerald; Wellein, Gerhard; Filinov, Vladimir S.; Bishop, Alan R.

    2009-01-01

    We examine several numerical techniques for the calculation of the dynamics of quantum systems. In particular, we single out an iterative method which is based on expanding the time evolution operator into a finite series of Chebyshev polynomials. The Chebyshev approach benefits from two advantages over the standard time-integration Crank-Nicolson scheme: speedup and efficiency. Potential competitors are semiclassical methods such as the Wigner-Moyal or quantum tomographic approaches. We outline the basic concepts of these techniques and benchmark their performance against the Chebyshev approach by monitoring the time evolution of a Gaussian wave packet in restricted one-dimensional (1D) geometries. Thereby the focus is on tunnelling processes and the motion in anharmonic potentials. Finally we apply the prominent Chebyshev technique to two highly non-trivial problems of current interest: (i) the injection of a particle in a disordered 2D graphene nanoribbon and (ii) the spatiotemporal evolution of polaron states in finite quantum systems. Here, depending on the disorder/electron-phonon coupling strength and the device dimensions, we observe transmission or localisation of the matter wave.

  18. Integrative image segmentation optimization and machine learning approach for high quality land-use and land-cover mapping using multisource remote sensing data

    Science.gov (United States)

    Gibril, Mohamed Barakat A.; Idrees, Mohammed Oludare; Yao, Kouame; Shafri, Helmi Zulhaidi Mohd

    2018-01-01

    The growing use of optimization for geographic object-based image analysis and the possibility to derive a wide range of information about the image in textual form make machine learning (data mining) a versatile tool for information extraction from multiple data sources. This paper presents an application of data mining for land-cover classification by fusing SPOT-6, RADARSAT-2, and derived datasets. First, the images and other derived indices (normalized difference vegetation index, normalized difference water index, and soil adjusted vegetation index) were combined and subjected to a segmentation process with optimal segmentation parameters obtained using a combination of spatial and Taguchi statistical optimization. The image objects, which carry all the attributes of the input datasets, were extracted and related to the target land-cover classes through data mining algorithms (decision tree) for classification. To evaluate the performance, the result was compared with two nonparametric classifiers: support vector machine (SVM) and random forest (RF). Furthermore, the decision tree classification result was evaluated against six unoptimized trials segmented using arbitrary parameter combinations. The results show that the optimized process produces better land-use land-cover classification, with overall classification accuracies of 91.79% for the decision tree and 87.25% and 88.69% for SVM and RF, respectively, while the six unoptimized classifications yield overall accuracies between 84.44% and 88.08%. The higher accuracy of the optimized data mining classification approach compared to the unoptimized results indicates that the optimization process has a significant impact on the classification quality.

  19. Interaction features for prediction of perceptual segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2017-01-01

    As music unfolds in time, structure is recognised and understood by listeners, regardless of their level of musical expertise. A number of studies have found spectral and tonal changes to quite successfully model boundaries between structural sections. However, the effects of musical expertise...... and experimental task on computational modelling of structure are not yet well understood. These issues need to be addressed to better understand how listeners perceive the structure of music and to improve automatic segmentation algorithms. In this study, computational prediction of segmentation by listeners...... was investigated for six musical stimuli via a real-time task and an annotation (non real-time) task. The proposed approach involved computation of novelty curve interaction features and a prediction model of perceptual segmentation boundary density. We found that, compared to non-musicians’, musicians...

  20. A Bayesian Approach to Real-Time Earthquake Phase Association

    Science.gov (United States)

    Benz, H.; Johnson, C. E.; Earle, P. S.; Patton, J. M.

    2014-12-01

    Real-time location of seismic events requires a robust and extremely efficient means of associating and identifying seismic phases with hypothetical sources. An association algorithm converts a series of phase arrival times into a catalog of earthquake hypocenters. The classical approach, based on time-space stacking of the locus of possible hypocenters for each phase arrival using the principle of acoustic reciprocity, has been in use now for many years. One of the most significant problems that has emerged over time with this approach is related to the extreme variations in seismic station density throughout the global seismic network. To address this problem we have developed a novel, Bayesian association algorithm, which looks at the association problem as a dynamically evolving complex system of "many to many relationships". While the end result must be an array of one to many relations (one earthquake, many phases), during the association process the situation is quite different. Both the evolving possible hypocenters and the relationships between phases and all nascent hypocenters are many to many (many earthquakes, many phases). The computational framework we are using to address this is a responsive, NoSQL graph database where the earthquake-phase associations are represented as intersecting Bayesian Learning Networks. The approach directly addresses the network inhomogeneity issue while at the same time allowing the inclusion of other kinds of data (e.g., seismic beams, station noise characteristics, priors on estimated location of the seismic source) by representing the locus of intersecting hypothetical loci for a given datum as joint probability density functions.
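    The graph database and Bayesian Learning Networks described above are beyond a short sketch, but the core "many to many" idea, posterior probabilities that each phase arrival belongs to each candidate event, can be illustrated with a Gaussian residual likelihood. All function names, the residual model, and the numbers are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def associate(arrivals, origin_times, travel_time, sigma=1.0, prior=None):
    """Toy Bayesian phase association: posterior probability that each
    arrival (rows) belongs to each hypothetical event (columns), from a
    Gaussian likelihood on the travel-time residual."""
    arrivals = np.asarray(arrivals, dtype=float)
    origin_times = np.asarray(origin_times, dtype=float)
    predicted = origin_times[None, :] + travel_time        # predicted arrival per event
    resid = arrivals[:, None] - predicted                   # observed minus predicted
    like = np.exp(-0.5 * (resid / sigma) ** 2)
    if prior is None:
        prior = np.ones(len(origin_times))                  # flat prior over events
    post = like * prior
    return post / post.sum(axis=1, keepdims=True)

# two hypothetical events 30 s apart, a 10 s travel time to one station
post = associate(arrivals=[10.2, 40.1], origin_times=[0.0, 30.0], travel_time=10.0)
```

    Each row of `post` sums to one, so an arrival can be shared softly among several nascent hypocenters until the evidence concentrates on one of them.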

  1. Chromosome condensation and segmentation

    International Nuclear Information System (INIS)

    Viegas-Pequignot, E.M.

    1981-01-01

    Some aspects of chromosome condensation in mammals, especially humans, were studied by means of cytogenetic chromosome banding techniques. Two complementary approaches were adopted: a study of normal condensation as early as prophase, and an analysis of chromosome segmentation induced by physical (temperature and γ-rays) or chemical agents (base analogues, antibiotics, ...) in order to identify the factors liable to affect condensation. Here 'segmentation' means an abnormal chromosome condensation appearing systematically and reproducibly. The study of normal condensation was made possible by the development of a technique based on cell synchronization by thymidine, yielding prophasic and prometaphasic cells. Moreover, the possibility of inducing R-banding segmentation in these cells with BrdU (5-bromodeoxyuridine) allowed a much finer analysis of karyotypes. Another technique, using 5-ACR (5-azacytidine), induced a segmentation similar to that obtained with BrdU and identified heterochromatic areas rich in G-C base pairs [fr

  2. An effective approach of lesion segmentation within the breast ultrasound image based on the cellular automata principle.

    Science.gov (United States)

    Liu, Yan; Cheng, H D; Huang, Jianhua; Zhang, Yingtao; Tang, Xianglong

    2012-10-01

    In this paper, a novel method for lesion segmentation in breast ultrasound (BUS) images, based on the cellular automata principle, is proposed. Its energy transition function is formulated from global and local image information differences using different energy transfer strategies. First, an energy decrease strategy is used for modeling the spatial relation information of pixels. For modeling the global image information difference, a seed information comparison function is developed using an energy preserve strategy. Then, a texture information comparison function is proposed to account for local image differences between regions, which is helpful for handling blurry boundaries. Moreover, two neighborhood systems (the von Neumann and Moore neighborhood systems) are integrated as the evolution environment, and a similarity-based criterion is used for suppressing noise and reducing computational complexity. The proposed method was applied to 205 clinical BUS images to study its characteristics and functionality, and several overlap-area error metrics and statistical evaluation methods were utilized for evaluating its performance. The experimental results demonstrate that the proposed method handles BUS images with blurry boundaries and low contrast well and can segment breast lesions accurately and effectively.
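    The paper's energy transition function is more elaborate, but the family of cellular-automata segmenters it belongs to can be sketched with a generic grow-cut-style automaton: each cell holds a label and a strength, and a neighbour conquers a cell when its strength, attenuated by intensity similarity, exceeds the cell's current strength. The synthetic image, seeds, and iteration count below are illustrative.

```python
import numpy as np

def grow_cut(image, seeds, n_iter=60):
    """Grow-cut style cellular automaton segmentation sketch.

    seeds: 0 = unlabeled, 1 = lesion seed, 2 = background seed."""
    labels = seeds.copy()
    strength = (seeds > 0).astype(float)
    max_diff = np.ptp(image) or 1.0  # avoid division by zero on flat images
    for _ in range(n_iter):
        new_labels, new_strength = labels.copy(), strength.copy()
        for shift in [(-1, 0), (1, 0), (0, -1), (0, 1)]:  # von Neumann neighbourhood
            nb_lab = np.roll(labels, shift, axis=(0, 1))
            nb_str = np.roll(strength, shift, axis=(0, 1))
            nb_img = np.roll(image, shift, axis=(0, 1))
            # energy transfer: similarity-weighted attack strength
            attack = (1.0 - np.abs(image - nb_img) / max_diff) * nb_str
            win = attack > new_strength
            new_labels[win] = nb_lab[win]
            new_strength[win] = attack[win]
        labels, strength = new_labels, new_strength
    return labels

# synthetic "lesion": dark disc on a brighter background, one seed per class
yy, xx = np.mgrid[0:40, 0:40]
img = np.where((yy - 20) ** 2 + (xx - 20) ** 2 < 80, 0.2, 0.8)
seeds = np.zeros_like(img, dtype=int)
seeds[20, 20] = 1   # lesion seed
seeds[2, 2] = 2     # background seed
seg = grow_cut(img, seeds)
```

    Because the attack strength vanishes across the full-contrast boundary, each label floods only its own intensity region, which is the intuition behind using energy transfer to respect blurry or sharp edges.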

  3. Can masses of non-experts train highly accurate image classifiers? A crowdsourcing approach to instrument segmentation in laparoscopic images.

    Science.gov (United States)

    Maier-Hein, Lena; Mersmann, Sven; Kondermann, Daniel; Bodenstedt, Sebastian; Sanchez, Alexandro; Stock, Christian; Kenngott, Hannes Gotz; Eisenmann, Mathias; Speidel, Stefanie

    2014-01-01

    Machine learning algorithms are gaining increasing interest in the context of computer-assisted interventions. One of the bottlenecks so far, however, has been the availability of training data, typically generated by medical experts with very limited resources. Crowdsourcing is a new trend that is based on outsourcing cognitive tasks to many anonymous untrained individuals from an online community. In this work, we investigate the potential of crowdsourcing for segmenting medical instruments in endoscopic image data. Our study suggests that (1) segmentations computed from annotations of multiple anonymous non-experts are comparable to those made by medical experts and (2) training data generated by the crowd is of the same quality as that annotated by medical experts. Given the speed of annotation, scalability and low costs, this implies that the scientific community might no longer need to rely on experts to generate reference or training data for certain applications. To trigger further research in endoscopic image processing, the data used in this study will be made publicly available.

  4. Optimized Audio Classification and Segmentation Algorithm by Using Ensemble Methods

    Directory of Open Access Journals (Sweden)

    Saadia Zahid

    2015-01-01

    Audio segmentation is a basis for multimedia content analysis, one of the most important and widely used applications today. An optimized audio classification and segmentation algorithm is presented in this paper that segments a superimposed audio stream on the basis of its content into four main audio types: pure speech, music, environment sound, and silence. The proposed algorithm preserves important audio content, reduces the misclassification rate without requiring a large amount of training data, handles noise, and is suitable for real-time applications. Noise in an audio stream is segmented out as environment sound. A hybrid classification approach is used: bagged support vector machines (SVMs) combined with artificial neural networks (ANNs). The audio stream is first classified into speech and non-speech segments by bagged SVMs; the non-speech segment is further classified into music and environment sound by ANNs; and lastly, the speech segment is split into silence and pure-speech segments by a rule-based classifier. Minimal data are used for training the classifiers, ensemble methods minimize the misclassification rate, and approximately 98% accurate segments are obtained. The resulting fast and efficient algorithm can be used in real-time multimedia applications.
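    The final rule-based stage of such a hierarchy (splitting the speech branch into silence and pure speech) can be sketched with a short-time-energy rule. The SVM/ANN stages are omitted, and the frame length and energy threshold are toy values, not the paper's.

```python
import numpy as np

def segment_silence(signal, sr, frame_ms=20, energy_thresh=1e-3):
    """Label each frame 'silence' or 'speech' by thresholding its
    short-time energy (a toy stand-in for the rule-based classifier)."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.mean(frames ** 2, axis=1)   # short-time energy per frame
    return np.where(energy < energy_thresh, "silence", "speech")

sr = 8000
t = np.linspace(0, 1, sr, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)   # 1 s of "speech-like" sound
quiet = np.zeros(sr)                       # 1 s of silence
labels = segment_silence(np.concatenate([tone, quiet]), sr)
```

    Consecutive frames with the same label would then be merged into the final segments of the output stream.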

  5. Correction tool for Active Shape Model based lumbar muscle segmentation.

    Science.gov (United States)

    Valenzuela, Waldo; Ferguson, Stephen J; Ignasiak, Dominika; Diserens, Gaelle; Vermathen, Peter; Boesch, Chris; Reyes, Mauricio

    2015-08-01

    In the clinical environment, the accuracy and speed of the image segmentation process play a key role in the analysis of pathological regions. Despite advances in anatomic image segmentation, time-effective correction tools are commonly needed to improve segmentation results. Such tools must provide fast corrections with a low number of interactions, and a user-independent solution. In this work we present a new interactive method for correcting image segmentations. Given an initial segmentation and the original image, our tool provides a 2D/3D environment that enables 3D shape correction through simple 2D interactions. Our scheme is based on direct manipulation of free form deformation adapted to a 2D environment. This approach enables an intuitive and natural correction of 3D segmentation results. The developed method has been implemented into a software tool and has been evaluated for the task of lumbar muscle segmentation from Magnetic Resonance Images. Experimental results show that full segmentation correction could be performed within an average correction time of 6±4 minutes and an average of 68±37 interactions, while maintaining the quality of the final segmentation result within an average Dice coefficient of 0.92±0.03.

  6. Fully automatic, multiorgan segmentation in normal whole body magnetic resonance imaging (MRI), using classification forests (CFs), convolutional neural networks (CNNs), and a multi-atlas (MA) approach.

    Science.gov (United States)

    Lavdas, Ioannis; Glocker, Ben; Kamnitsas, Konstantinos; Rueckert, Daniel; Mair, Henrietta; Sandhu, Amandeep; Taylor, Stuart A; Aboagye, Eric O; Rockall, Andrea G

    2017-10-01

    As part of a program to implement automatic lesion detection methods for whole body magnetic resonance imaging (MRI) in oncology, we have developed, evaluated, and compared three algorithms for fully automatic, multiorgan segmentation in healthy volunteers. The first algorithm is based on classification forests (CFs), the second on 3D convolutional neural networks (CNNs), and the third on a multi-atlas (MA) approach. We examined data from 51 healthy volunteers, scanned prospectively with a standardized, multiparametric whole body MRI protocol at 1.5 T. The study was approved by the local ethics committee and written consent was obtained from the participants. MRI data were used as input data to the algorithms, while training was based on manual annotation of the anatomies of interest by clinical MRI experts. Fivefold cross-validation experiments were run on 34 artifact-free subjects. We report three overlap and three surface distance metrics to evaluate the agreement between the automatic and manual segmentations, namely the Dice similarity coefficient (DSC), recall (RE), precision (PR), average surface distance (ASD), root-mean-square surface distance (RMSSD), and Hausdorff distance (HD). Analysis of variance was used to compare pooled label metrics between the three algorithms and the DSC on a 'per-organ' basis. A Mann-Whitney U test was used to compare the pooled metrics between CFs and CNNs and the DSC on a 'per-organ' basis, when using different imaging combinations as input for training. All three algorithms resulted in robust segmenters that were effectively trained using a relatively small number of datasets, an important consideration in the clinical setting. Mean overlap metrics for all the segmented structures were: CFs: DSC = 0.70 ± 0.18, RE = 0.73 ± 0.18, PR = 0.71 ± 0.14, CNNs: DSC = 0.81 ± 0.13, RE = 0.83 ± 0.14, PR = 0.82 ± 0.10, MA: DSC = 0.71 ± 0.22, RE = 0.70 ± 0.34, PR = 0.77 ± 0.15. Mean surface distance
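    The three overlap metrics reported above (DSC, recall, precision) are standard set-overlap quantities between a predicted and a reference mask, and can be computed in a few lines. The toy masks below are illustrative.

```python
import numpy as np

def overlap_metrics(pred, ref):
    """Dice similarity coefficient, recall and precision between two
    binary segmentation masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()        # voxels labeled by both
    dsc = 2 * tp / (pred.sum() + ref.sum())
    re = tp / ref.sum()     # fraction of the reference that is covered
    pr = tp / pred.sum()    # fraction of the prediction that is correct
    return dsc, re, pr

# two equally sized squares, shifted by one voxel in each direction
ref = np.zeros((10, 10), dtype=int); ref[2:8, 2:8] = 1
pred = np.zeros((10, 10), dtype=int); pred[3:9, 3:9] = 1
dsc, re, pr = overlap_metrics(pred, ref)
print(round(dsc, 3), round(re, 3), round(pr, 3))  # 0.694 0.694 0.694
```

    The surface-distance metrics (ASD, RMSSD, HD) additionally require extracting boundary voxels and computing distance transforms, which is why they are usually delegated to a library.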

  7. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing

    2011-01-01

    We present an approach to segmenting shapes in a heterogeneous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques. © 2011 ACM.

  8. The Time Diagram Control Approach for the Dynamic Representation of Time-Oriented Data

    Directory of Open Access Journals (Sweden)

    Rolf Dornberger

    2016-04-01

    The dynamic representation of time-oriented data on small-screen devices is of increasing importance. Most existing solutions apply issue-specific requirements based on established desktop technologies. Applied to mobile devices with small multi-touch displays, such approaches often lead to limited usability. In particular, time-dependent data can only be fragmentarily visualized due to limited screen sizes: instead of visualization reducing complexity, interpretation of the data becomes more complex. This paper proposes the Time Diagram Control (TDC) approach, a new way of representing time-based diagrams on small-screen devices. The TDC uses a principle of cybernetics to integrate the user into the visualization process and thus reduce complexity. TDC focuses on simplicity of design by providing only 2D temporal line diagrams with a dynamic zooming function that works via standard multi-touch controls. By involving the user in a continuous loop of refining the visualization, TDC allows data of different temporal granularities to be compared without losing the overall context of the presented data. The TDC approach ensures constant information reliability on small-screen devices.

  9. Scorpion image segmentation system

    Science.gov (United States)

    Joseph, E.; Aibinu, A. M.; Sadiq, B. A.; Bello Salau, H.; Salami, M. J. E.

    2013-12-01

    Death as a result of scorpion sting has been a major public health problem in developing countries. Despite the high death rate from scorpion stings, few reports exist in the literature on intelligent devices and systems for automatic detection of scorpions. This paper proposes a digital image processing approach, based on the fluorescence of scorpions under ultraviolet (UV) light, for automatic detection and identification of scorpions. The acquired UV-based images undergo pre-processing to equalize uneven illumination, followed by colour space channel separation. The extracted channels are then segmented into two non-overlapping classes. It has been observed that simple thresholding of the green channel of the acquired RGB UV-based image is sufficient for segmenting the scorpion from other background components in the image. Two approaches to image segmentation are proposed in this work, namely the simple average segmentation technique and K-means image segmentation. The proposed algorithm has been tested on over 40 UV scorpion images obtained from different parts of the world, and the results show an average accuracy of 97.7% in correctly classifying pixels into two non-overlapping clusters. The proposed system will eliminate the problems associated with some of the existing manual approaches presently in use for scorpion detection.
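    The two segmentation routes named above, mean-thresholding of the green channel and 2-cluster k-means, can be sketched as follows. The synthetic "fluorescing" image and the k-means initialization are illustrative assumptions, not the paper's data or exact procedure.

```python
import numpy as np

def threshold_green(rgb):
    """Simple average segmentation: threshold the green channel at its
    mean intensity to obtain two non-overlapping classes."""
    g = rgb[..., 1].astype(float)
    return (g > g.mean()).astype(int)

def kmeans_1d(values, n_iter=20):
    """Minimal 2-cluster k-means on a 1D array of pixel intensities,
    initialized at the intensity extremes."""
    c = np.array([values.min(), values.max()], dtype=float)
    for _ in range(n_iter):
        assign = np.abs(values[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if (assign == k).any():
                c[k] = values[assign == k].mean()
    return assign

# synthetic UV image: fluorescing blob with a strong green-channel response
rgb = np.zeros((32, 32, 3))
rgb[..., 1] = 0.1            # dim background glow
rgb[10:20, 10:20, 1] = 0.9   # bright fluorescing region
mask = threshold_green(rgb)
clusters = kmeans_1d(rgb[..., 1].ravel()).reshape(32, 32)
```

    On this well-separated toy image the two methods agree; on real UV images the k-means route is the more robust of the two because it adapts to the intensity distribution rather than assuming the mean splits the classes.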

  10. Evaluation of right ventricular function by coronary computed tomography angiography using a novel automated 3D right ventricle volume segmentation approach: a validation study.

    Science.gov (United States)

    Burghard, Philipp; Plank, Fabian; Beyer, Christoph; Müller, Silvana; Dörler, Jakob; Zaruba, Marc-Michael; Pölzl, Leo; Pölzl, Gerhard; Klauser, Andrea; Rauch, Stefan; Barbieri, Fabian; Langer, Christian-Ekkehardt; Schgoer, Wilfried; Williamson, Eric E; Feuchtner, Gudrun

    2018-06-04

    To evaluate right ventricle (RV) function by coronary computed tomography angiography (CTA) using a novel automated three-dimensional (3D) RV volume segmentation tool in comparison with clinical reference modalities. Twenty-six patients with severe end-stage heart failure [left ventricle (LV) ejection fraction (EF) right heart invasive catheterisation (IC). Automated 3D RV volume segmentation was successful in 26 (100%) patients. Read-out time was 3 min 33 s (range, 1 min 50 s-4 min 33 s). RV EF by CTA correlated more strongly with right atrial pressure (RAP) by IC (r = -0.595; p = 0.006) than with TAPSE (r = 0.366, p = 0.94). When comparing TAPSE with RAP by IC (r = -0.317, p = 0.231), a weak-to-moderate, non-significant inverse correlation was found. Interobserver correlation was high with r = 0.96 (p right atrium (RA) and right ventricle (RV) was 196.9 ± 75.3 and 217.5 ± 76.1 HU, respectively. Measurement of RV function by CTA using a novel 3D volumetric segmentation tool is fast and reliable by applying a dedicated biphasic injection protocol. The RV EF from CTA is a closer surrogate of RAP than TAPSE by TTE. • Evaluation of RV function by cardiac CTA by using a novel 3D volume segmentation tool is fast and reliable. • A biphasic contrast agent injection protocol ensures homogenous RV contrast attenuation. • Cardiac CT is a valuable alternative modality to CMR for the evaluation of RV function.

  11. A time warping approach to multiple sequence alignment.

    Science.gov (United States)

    Arribas-Gil, Ana; Matias, Catherine

    2017-04-25

    We propose an approach for multiple sequence alignment (MSA) derived from the dynamic time warping viewpoint and recent techniques of curve synchronization developed in the context of functional data analysis. Starting from pairwise alignments of all the sequences (viewed as paths in a certain space), we construct a median path that represents the MSA we are looking for. We establish a proof of concept that our method could be an interesting ingredient to include in refined MSA techniques. We present a simple synthetic experiment as well as the study of a benchmark dataset, together with comparisons with two widely used MSA software packages.
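    The pairwise building block of this approach, classic dynamic time warping, computes an optimal monotone alignment between two sequences by dynamic programming and backtracks the warping path (the "path in a certain space" from which a median is then built). A minimal sketch on 1D sequences; the median-path construction itself is omitted.

```python
import numpy as np

def dtw(x, y):
    """Dynamic time warping: returns the alignment cost and the optimal
    warping path as a list of (i, j) index pairs."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack the optimal warping path from (n, m) to (0, 0)
    path, (i, j) = [], (n, m)
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        moves = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min(moves, key=lambda ij: D[ij])
    return D[n, m], path[::-1]

cost, path = dtw([0, 1, 2, 3], [0, 0, 1, 2, 3])
print(cost)  # 0.0: the sequences align perfectly once the repeat is absorbed
```

    For biological sequences the absolute-difference cost would be replaced by a substitution matrix with gap penalties, but the dynamic-programming skeleton is the same.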

  12. Time-dependent Kohn-Sham approach to quantum electrodynamics

    International Nuclear Information System (INIS)

    Ruggenthaler, M.; Mackenroth, F.; Bauer, D.

    2011-01-01

    We prove a generalization of the van Leeuwen theorem toward quantum electrodynamics, providing the formal foundations of a time-dependent Kohn-Sham construction for coupled quantized matter and electromagnetic fields. We circumvent the symmetry-causality problems associated with the action-functional approach to Kohn-Sham systems. We show that the effective external four-potential and four-current of the Kohn-Sham system are uniquely defined and that the effective four-current takes a very simple form. Further we rederive the Runge-Gross theorem for quantum electrodynamics.

  13. Call-to-balloon time dashboard in patients with ST-segment elevation myocardial infarction results in significant improvement in the logistic chain.

    Science.gov (United States)

    Hermans, Maaike P J; Velders, Matthijs A; Smeekes, Martin; Drexhage, Olivier S; Hautvast, Raymond W M; Ytsma, Timon; Schalij, Martin J; Umans, Victor A W M

    2017-08-04

    Timely reperfusion with primary percutaneous coronary intervention (pPCI) in ST-segment elevation myocardial infarction (STEMI) patients is associated with superior clinical outcomes. Aiming to reduce ischaemic time, an innovative system for home-to-hospital (H2H) time monitoring was implemented, which enabled real-time evaluation of ischaemic time intervals, regular feedback, and improvements in the logistic chain. The objective of this study was to assess the results after implementation of the H2H dashboard for monitoring and evaluation of ischaemic time in STEMI patients. Ischaemic time was compared in STEMI patients transported by emergency medical services (EMS) and treated with pPCI in the Noordwest Ziekenhuis, Alkmaar before (2008-2009; n=495) and after (2011-2014; n=441) implementation of the H2H dashboard. Median time intervals were significantly shorter in the H2H group (door-to-balloon time 32 [IQR 25-43] vs. 40 [IQR 28-55] minutes), and use of the H2H dashboard was independently associated with shorter time delays. Real-time monitoring and feedback on time delay with the H2H dashboard improves the logistic chain in STEMI patients, resulting in shorter ischaemic time intervals.

  14. Multi-Robot Motion Planning: A Timed Automata Approach

    DEFF Research Database (Denmark)

    Quottrup, Michael Melholt; Bak, Thomas; Izadi-Zamanabadi, Roozbeh

    2004-01-01

    This paper describes how a network of interacting timed automata can be used to model, analyze, and verify motion planning problems in a scenario with multiple robotic vehicles. The method presupposes an infra-structure of robots with feed-back controllers obeying simple restriction on a planar...... grid. The automata formalism merely presents a high-level model of environment, robots and control, but allows composition and formal symbolic reasoning about coordinated solutions. Composition is achieved through synchronization, and the verification software UPPAAL is used for a symbolic verification...... then subsequently be used as a high-level motion plan for the robots. This paper reports on the timed automata framework, results of two verification experiments, promise of the approach, and gives a perspective for future research....

  16. An SPM8-based approach for attenuation correction combining segmentation and nonrigid template formation: application to simultaneous PET/MR brain imaging.

    Science.gov (United States)

    Izquierdo-Garcia, David; Hansen, Adam E; Förster, Stefan; Benoit, Didier; Schachoff, Sylvia; Fürst, Sebastian; Chen, Kevin T; Chonde, Daniel B; Catana, Ciprian

    2014-11-01

    We present an approach for head MR-based attenuation correction (AC) based on the Statistical Parametric Mapping 8 (SPM8) software, which combines segmentation- and atlas-based features to provide a robust technique to generate attenuation maps (μ maps) from MR data in integrated PET/MR scanners. Coregistered anatomic MR and CT images of 15 glioblastoma subjects were used to generate the templates. The MR images from these subjects were first segmented into 6 tissue classes (gray matter, white matter, cerebrospinal fluid, bone, soft tissue, and air), which were then nonrigidly coregistered using a diffeomorphic approach. A similar procedure was used to coregister the anatomic MR data for a new subject to the template. Finally, the CT-like images obtained by applying the inverse transformations were converted to linear attenuation coefficients to be used for AC of PET data. The method was validated on 16 new subjects with brain tumors (n = 12) or mild cognitive impairment (n = 4) who underwent CT and PET/MR scans. The μ maps and corresponding reconstructed PET images were compared with those obtained using the gold standard CT-based approach and the Dixon-based method available on the Biograph mMR scanner. Relative change (RC) images were generated in each case, and voxel- and region-of-interest-based analyses were performed. The leave-one-out cross-validation analysis of the data from the 15 atlas-generation subjects showed small errors in brain linear attenuation coefficients (RC, 1.38% ± 4.52%) compared with the gold standard. Similar results (RC, 1.86% ± 4.06%) were obtained from the analysis of the atlas-validation datasets. The voxel- and region-of-interest-based analysis of the corresponding reconstructed PET images revealed quantification errors of 3.87% ± 5.0% and 2.74% ± 2.28%, respectively. The Dixon-based method performed substantially worse (the mean RC values were 13.0% ± 10.25% and 9.38% ± 4.97%, respectively). Areas closer to the skull showed
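    The last step described above, converting the CT-like template images to linear attenuation coefficients for PET attenuation correction, is commonly done with a bilinear HU-to-mu mapping: a water-scaled segment below 0 HU and a steeper bone segment above it. The sketch below is a generic version of that mapping, not the paper's exact implementation, and the coefficient values are illustrative.

```python
import numpy as np

def hu_to_mu511(hu, mu_water=0.096, mu_bone=0.172, hu_bone=1000.0):
    """Bilinear CT-number (HU) to 511 keV linear attenuation coefficient
    (1/cm) conversion. Coefficients are illustrative placeholders."""
    hu = np.asarray(hu, dtype=float)
    mu = np.where(
        hu <= 0,
        mu_water * (hu + 1000.0) / 1000.0,               # air-to-water segment
        mu_water + hu * (mu_bone - mu_water) / hu_bone,  # water-to-bone segment
    )
    return np.clip(mu, 0.0, None)  # no negative attenuation

mu = hu_to_mu511([-1000.0, 0.0, 1000.0])  # air, water, dense bone
print([round(v, 3) for v in mu.tolist()])  # [0.0, 0.096, 0.172]
```

    Applying such a mapping voxel-wise to the pseudo-CT turns it into the mu-map that the PET reconstruction consumes; the slope of the bone segment is the part that published bilinear calibrations actually fit to data.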

  17. An SPM8-based Approach for Attenuation Correction Combining Segmentation and Non-rigid Template Formation: Application to Simultaneous PET/MR Brain Imaging

    Science.gov (United States)

    Izquierdo-Garcia, David; Hansen, Adam E.; Förster, Stefan; Benoit, Didier; Schachoff, Sylvia; Fürst, Sebastian; Chen, Kevin T.; Chonde, Daniel B.; Catana, Ciprian

    2014-01-01

    We present an approach for head MR-based attenuation correction (MR-AC) based on the Statistical Parametric Mapping (SPM8) software that combines segmentation- and atlas-based features to provide a robust technique to generate attenuation maps (µ-maps) from MR data in integrated PET/MR scanners. Methods Coregistered anatomical MR and CT images acquired in 15 glioblastoma subjects were used to generate the templates. The MR images from these subjects were first segmented into 6 tissue classes (gray and white matter, cerebro-spinal fluid, bone and soft tissue, and air), which were then non-rigidly coregistered using a diffeomorphic approach. A similar procedure was used to coregister the anatomical MR data for a new subject to the template. Finally, the CT-like images obtained by applying the inverse transformations were converted to linear attenuation coefficients (LACs) to be used for AC of PET data. The method was validated on sixteen new subjects with brain tumors (N=12) or mild cognitive impairment (N=4) who underwent CT and PET/MR scans. The µ-maps and corresponding reconstructed PET images were compared to those obtained using the gold standard CT-based approach and the Dixon-based method available on the Siemens Biograph mMR scanner. Relative change (RC) images were generated in each case and voxel- and region of interest (ROI)-based analyses were performed. Results The leave-one-out cross-validation analysis of the data from the 15 atlas-generation subjects showed small errors in brain LACs (RC=1.38%±4.52%) compared to the gold standard. Similar results (RC=1.86±4.06%) were obtained from the analysis of the atlas-validation datasets. The voxel- and ROI-based analysis of the corresponding reconstructed PET images revealed quantification errors of 3.87±5.0% and 2.74±2.28%, respectively. The Dixon-based method performed substantially worse (the mean RC values were 13.0±10.25% and 9.38±4.97%, respectively). Areas closer to skull showed the largest

  18. International EUREKA: Initialization Segment

    International Nuclear Information System (INIS)

    1982-02-01

    The Initialization Segment creates the starting description of the uranium market. This description includes the international boundaries of trade, the geologic provinces, resources, reserves, production, uranium demand forecasts, and existing market transactions. The Initialization Segment is designed to accept information of various degrees of detail, depending on what is known about each region. It must transform this information into the specific data structure required by the Market Segment of the model, filling gaps in the information through a predetermined sequence of defaults and built-in assumptions. A principal function of the Initialization Segment is to create diagnostic messages indicating any inconsistencies in the data and explaining which assumptions were used to organize the data base. This permits the user to manipulate the data base until satisfied that all the assumptions used are reasonable and that any inconsistencies are resolved in a satisfactory manner

  19. Impact of Indocyanine Green Concentration, Exposure Time, and Degree of Dissolution in Creating Toxic Anterior Segment Syndrome: Evaluation in a Rabbit Model

    Directory of Open Access Journals (Sweden)

    Tamer Tandogan

    2016-01-01

    Purpose. To investigate the role of indocyanine green (ICG) dye as a causative material of toxic anterior segment syndrome (TASS) in an experimental rabbit model. Method. Eight eyes of four rabbits were allocated to this study. Capsular staining was performed using ICG dye, after which the anterior chamber was irrigated with a balanced salt solution. The effects of different concentrations (control, 0.25, 0.5, and 1.0%), exposure times (10 and 60 seconds), and degrees of dissolution (differently vortexed) were investigated. The analysis involved anterior segment photography, ultrasound pachymetry, a prostaglandin assay (PGE2 Parameter Assay, R&D Systems, Inc.), and scanning electron microscopy of each iris. Result. There was no reaction in the control eye. A higher aqueous level of PGE2 and a more severe inflammatory reaction were observed in eyes with higher concentration, longer exposure time, and poorly dissolved dye. Additionally, scanning electron microscopy revealed larger and coarser ICG particles. Conclusion. TASS occurrence may be associated with the concentration, exposure time, and degree of dissolution of ICG dye during cataract surgery.

  20. An approach to a real-time distribution system

    Science.gov (United States)

    Kittle, Frank P., Jr.; Paddock, Eddie J.; Pocklington, Tony; Wang, Lui

    1990-01-01

    The requirements of a real-time data distribution system are to provide fast, reliable delivery of data from source to destination with little or no impact to the data source. In this particular case, the data sources are inside an operational environment, the Mission Control Center (MCC), and any workstation receiving data directly from the operational computer must conform to the software standards of the MCC. In order to supply data to development workstations outside of the MCC, it is necessary to use gateway computers that prevent unauthorized data transfer back to the operational computers. Many software programs produced on the development workstations are targeted for real-time operation. Therefore, these programs must migrate from the development workstation to the operational workstation. It is yet another requirement for the Data Distribution System to ensure smooth transition of the data interfaces for the application developers. A standard data interface model has already been set up for the operational environment, so the interface between the distribution system and the application software was developed to match that model as closely as possible. The system as a whole therefore allows the rapid development of real-time applications without impacting the data sources. In summary, this approach to a real-time data distribution system provides development users outside of the MCC with an interface to MCC real-time data sources. In addition, the data interface was developed with a flexible and portable software design. This design allows for the smooth transition of new real-time applications to the MCC operational environment.

  1. What time is it? Deep learning approaches for circadian rhythms.

    Science.gov (United States)

    Agostinelli, Forest; Ceglia, Nicholas; Shahbaba, Babak; Sassone-Corsi, Paolo; Baldi, Pierre

    2016-06-15

    Circadian rhythms date back to the origins of life, are found in virtually every species and every cell, and play fundamental roles in functions ranging from metabolism to cognition. Modern high-throughput technologies allow the measurement of concentrations of transcripts, metabolites and other species along the circadian cycle, creating novel computational challenges and opportunities, including the problems of inferring whether a given species oscillates in a circadian fashion or not, and inferring the time at which a set of measurements was taken. We first curate several large synthetic and biological time series datasets containing labels for both periodic and aperiodic signals. We then use deep learning methods to develop and train BIO_CYCLE, a system to robustly estimate which signals are periodic in high-throughput circadian experiments, producing estimates of amplitudes, periods, phases, as well as several statistical significance measures. Using the curated data, BIO_CYCLE is compared to other approaches and shown to achieve state-of-the-art performance across multiple metrics. We then use deep learning methods to develop and train BIO_CLOCK to robustly estimate the time at which a particular single-time-point transcriptomic experiment was carried out. In most cases, BIO_CLOCK can reliably predict time, within approximately 1 h, using the expression levels of only a small number of core clock genes. BIO_CLOCK is shown to work reasonably well across tissue types, and often with only small degradation across conditions. BIO_CLOCK is used to annotate most mouse experiments found in the GEO database with an inferred time stamp. All data and software are publicly available on the CircadiOmics web portal: circadiomics.igb.uci.edu/. Contact: fagostin@uci.edu or pfbaldi@uci.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  2. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a ‘Best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen, its performance is still uncertain, because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified and a framework is proposed that permits both a visual and numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  3. Nutrition targeting by food timing: time-related dietary approaches to combat obesity and metabolic syndrome.

    Science.gov (United States)

    Sofer, Sigal; Stark, Aliza H; Madar, Zecharia

    2015-03-01

    Effective nutritional guidelines for reducing abdominal obesity and metabolic syndrome are urgently needed. Over the years, many different dietary regimens have been studied as possible treatment alternatives. The efficacy of low-calorie diets, diets with different proportions of fat, protein, and carbohydrates, traditional healthy eating patterns, and evidence-based dietary approaches were evaluated. Reviewing literature published in the last 5 y reveals that these diets may improve risk factors associated with obesity and metabolic syndrome. However, each diet has limitations ranging from high dropout rates to maintenance difficulties. In addition, most of these dietary regimens have the ability to attenuate some, but not all, of the components involved in this complicated multifactorial condition. Recently, interest has arisen in the time of day foods are consumed (food timing). Studies have examined the implications of eating at the right or wrong time, restricting eating hours, time allocation for meals, and timing of macronutrient consumption during the day. In this paper we review new insights into well-known dietary therapies as well as innovative time-associated dietary approaches for treating obesity and metabolic syndrome. We discuss results from systematic meta-analyses, clinical interventions, and animal models. © 2015 American Society for Nutrition.

  4. Mixed segmentation

    DEFF Research Database (Denmark)

    Hansen, Allan Grutt; Bonde, Anders; Aagaard, Morten

    content analysis and audience segmentation in a single-source perspective. The aim is to explain and understand target groups in relation to, on the one hand, emotional response to commercials or other forms of audio-visual communication and, on the other hand, living preferences and personality traits...

  5. SAAS-CNV: A Joint Segmentation Approach on Aggregated and Allele Specific Signals for the Identification of Somatic Copy Number Alterations with Next-Generation Sequencing Data.

    Science.gov (United States)

    Zhang, Zhongyang; Hao, Ke

    2015-11-01

    Cancer genomes exhibit profound somatic copy number alterations (SCNAs). Studying tumor SCNAs using massively parallel sequencing provides unprecedented resolution and at the same time gives rise to new challenges in data analysis, complicated by tumor aneuploidy and heterogeneity as well as normal cell contamination. While the majority of read-depth based methods utilize total sequencing depth alone for SCNA inference, the allele specific signals are undervalued. We proposed a joint segmentation and inference approach using both signals to meet some of the challenges. Our method consists of four major steps: 1) extracting read depth supporting reference and alternative alleles at each SNP/Indel locus and comparing the total read depth and alternative allele proportion between tumor and matched normal sample; 2) performing joint segmentation on the two signal dimensions; 3) correcting the copy number baseline from which the SCNA state is determined; 4) calling the SCNA state for each segment based on both signal dimensions. The method is applicable to whole exome/genome sequencing (WES/WGS) as well as SNP array data in a tumor-control study. We applied the method to a dataset containing no SCNAs to test the specificity, created by pairing sequencing replicates of a single HapMap sample as normal/tumor pairs, as well as to a large-scale WGS dataset consisting of 88 liver tumors along with adjacent normal tissues. Compared with representative methods, our method demonstrated improved accuracy, scalability to large cancer studies, capability in handling both sequencing and SNP array data, and the potential to improve the estimation of tumor ploidy and purity.
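
    The two signal dimensions in step 2 are the log ratio of tumor to normal read depth and the alternative-allele proportion difference. As a toy illustration only (this is not the SAAS-CNV algorithm: it places a single changepoint where the real method segments the whole genome), both dimensions can be combined in one squared-error split criterion; all names are illustrative.

```python
import math

def log_ratio(tumor_depth, normal_depth):
    """Dimension 1: log2 ratio of tumor vs. normal read depth per locus
    (depths must be positive)."""
    return [math.log2(t / n) for t, n in zip(tumor_depth, normal_depth)]

def single_changepoint(signal_a, signal_b):
    """Best single split minimizing within-segment squared error, summed
    over both signal dimensions (a toy stand-in for joint segmentation)."""
    def sse(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)
    n = len(signal_a)
    best_i, best_cost = None, float("inf")
    for i in range(1, n):                    # split into [0, i) and [i, n)
        cost = (sse(signal_a[:i]) + sse(signal_a[i:]) +
                sse(signal_b[:i]) + sse(signal_b[i:]))
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i
```

    Using both dimensions is the point: a balanced copy-neutral loss of heterozygosity shifts the allele signal while leaving the depth ratio flat, so depth-only segmentation would miss it.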

  6. Comparison of Immediate With Delayed Stenting Using the Minimalist Immediate Mechanical Intervention Approach in Acute ST-Segment-Elevation Myocardial Infarction: The MIMI Study.

    Science.gov (United States)

    Belle, Loic; Motreff, Pascal; Mangin, Lionel; Rangé, Grégoire; Marcaggi, Xavier; Marie, Antoine; Ferrier, Nadine; Dubreuil, Olivier; Zemour, Gilles; Souteyrand, Géraud; Caussin, Christophe; Amabile, Nicolas; Isaaz, Karl; Dauphin, Raphael; Koning, René; Robin, Christophe; Faurie, Benjamin; Bonello, Laurent; Champin, Stanislas; Delhaye, Cédric; Cuilleret, François; Mewton, Nathan; Genty, Céline; Viallon, Magalie; Bosson, Jean Luc; Croisille, Pierre

    2016-03-01

    Delayed stent implantation after restoration of normal epicardial flow by a minimalist immediate mechanical intervention aims to decrease the rate of distal embolization and impaired myocardial reperfusion after percutaneous coronary intervention. We sought to confirm whether a delayed stenting (DS) approach (24-48 hours) improves myocardial reperfusion, versus immediate stenting, in patients with acute ST-segment-elevation myocardial infarction undergoing primary percutaneous coronary intervention. In the prospective, randomized, open-label minimalist immediate mechanical intervention (MIMI) trial, patients (n=140) with ST-segment-elevation myocardial infarction ≤12 hours were randomized to immediate stenting (n=73) or DS (n=67) after Thrombolysis In Myocardial Infarction 3 flow restoration by thrombus aspiration. Patients in the DS group underwent a second coronary arteriography for stent implantation a median of 36 hours (interquartile range 29-46) after randomization. The primary end point was microvascular obstruction (% left ventricular mass) on cardiac magnetic resonance imaging performed 5 days (interquartile range 4-6) after the first procedure. There was a nonsignificant trend toward lower microvascular obstruction in the immediate stenting group compared with the DS group (1.88% versus 3.96%; P=0.051), which became significant after adjustment for the area at risk (P=0.049). Median infarct weight, left ventricular ejection fraction, and infarct size did not differ between groups. No difference in 6-month outcomes was apparent for the rate of major cardiovascular and cerebral events. The present findings do not support a strategy of DS versus immediate stenting in patients with ST-segment-elevation infarction undergoing primary percutaneous coronary intervention and even suggest a deleterious effect of DS on microvascular obstruction size. URL: http://www.clinicaltrials.gov. Unique identifier: NCT01360242. © 2016 American Heart Association, Inc.

  7. Breast ultrasound image segmentation: an optimization approach based on super-pixels and high-level descriptors

    Science.gov (United States)

    Massich, Joan; Lemaître, Guillaume; Martí, Joan; Mériaudeau, Fabrice

    2015-04-01

    Breast cancer is the second most common cancer and the leading cause of cancer death among women. Medical imaging has become an indispensable tool for its diagnosis and follow up. During the last decade, the medical community has promoted incorporating Ultra-Sound (US) screening as part of the standard routine. The main reason for using US imaging is its capability to differentiate benign from malignant masses, when compared to other imaging techniques. The increasing usage of US imaging encourages the development of Computer Aided Diagnosis (CAD) systems applied to Breast Ultra-Sound (BUS) images. However, accurate delineations of the lesions and structures of the breast are essential for CAD systems in order to extract the information needed to perform diagnosis. This article proposes a highly modular and flexible framework for segmenting lesions and tissues present in BUS images. The proposal takes advantage of optimization strategies using super-pixels and high-level descriptors, which are analogous to the visual cues used by radiologists. Qualitative and quantitative results are provided, demonstrating performance within the range of the state-of-the-art.

  8. Automated Segmentation of in Vivo and Ex Vivo Mouse Brain Magnetic Resonance Images

    Directory of Open Access Journals (Sweden)

    Alize E.H. Scheenstra

    2009-01-01

    Full Text Available Segmentation of magnetic resonance imaging (MRI data is required for many applications, such as the comparison of different structures or time points, and for annotation purposes. Currently, the gold standard for automated image segmentation is nonlinear atlas-based segmentation. However, these methods are either not sufficient or highly time consuming for mouse brains, owing to the low signal to noise ratio and low contrast between structures compared with other applications. We present a novel generic approach to reduce processing time for segmentation of various structures of mouse brains, in vivo and ex vivo. The segmentation consists of a rough affine registration to a template followed by a clustering approach to refine the rough segmentation near the edges. Compared with manual segmentations, the presented segmentation method has an average kappa index of 0.7 for 7 of 12 structures in in vivo MRI and 11 of 12 structures in ex vivo MRI. Furthermore, we found that these results were equal to the performance of a nonlinear segmentation method, but with the advantage of being 8 times faster. The presented automatic segmentation method is quick and intuitive and can be used for image registration, volume quantification of structures, and annotation.

  9. Seismicity of Romania: fractal properties of earthquake space, time and energy distributions and their correlation with segmentation of subducted lithosphere and Vrancea seismic source

    International Nuclear Information System (INIS)

    Popescu, E.; Ardeleanu, L.; Bazacliu, O.; Popa, M.; Radulian, M.; Rizescu, M.

    2002-01-01

    For any strategy of seismic hazard assessment, it is important to set a realistic seismic input, such as: delimitation of seismogenic zones, geometry of seismic sources, seismicity regime, focal mechanism and stress field. The aim of the present project is a systematic investigation focused on the problem of the Vrancea seismic regime at different time, space and energy scales, which can offer crucial information on the seismogenic process of this peculiar seismic area. The departures from linearity of the time, space and energy distributions are associated with inhomogeneities in the subducting slab, rheology, tectonic stress distribution and focal mechanism. The significant variations are correlated with the existence of active and inactive segments along the seismogenic zone; the deviation from linearity of the frequency-magnitude distribution is associated with the existence of different earthquake generation models; and the nonlinearities shown in the time series are related to the occurrence of the major earthquakes. Another important purpose of the project is to analyze the main crustal seismic sequences generated on the Romanian territory in the following regions: Ramnicu Sarat, Fagaras-Campulung, Banat. Time, space and energy distributions, together with the source parameters and scaling relations, are investigated. The analysis of the seismicity and clustering properties of the earthquakes generated in both the Vrancea intermediate-depth region and the Romanian crustal seismogenic zones, achieved within this project, constitutes the starting point for the study of seismic zoning, seismic hazard and earthquake prediction. The data set consists of the Vrancea subcrustal earthquake catalogue (since 1974 and continuously updated) and catalogues with events located in the other crustal seismogenic zones of Romania. To build up these data sets, high-quality information made available through multiple international cooperation programs is considered.
The results obtained up to

  10. Optimal trading strategies—a time series approach

    Science.gov (United States)

    Bebbington, Peter A.; Kühn, Reimer

    2016-05-01

    Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.
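
    In this time-domain reformulation the trade schedule plays the role that portfolio weights play in the classical Markowitz setting, and the auto-covariance matrix plays the role of the asset covariance matrix. A minimal sketch of the resulting minimum-variance solution, w = C^{-1}1 / (1'C^{-1}1), is shown below under a budget-only constraint; the paper's return constraint adds a second linear condition that this sketch omits, and the solver is a generic one, not the paper's method.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def min_variance_schedule(C):
    """w = C^{-1} 1 / (1' C^{-1} 1): the schedule over the time window
    least exposed to price fluctuations, weights summing to one."""
    y = solve(C, [1.0] * len(C))
    s = sum(y)
    return [yi / s for yi in y]
```

    With a diagonal auto-covariance the weights are simply inversely proportional to the per-period variances, which matches the intuition that trading concentrates in the quiet periods.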

  11. Change classification in SAR time series: a functional approach

    Science.gov (United States)

    Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan

    2017-10-01

    Change detection represents a broad field of research in SAR remote sensing, consisting of many different approaches. Besides the simple recognition of change areas, the analysis of the type, category or class of the change areas is at least as important for creating a comprehensive result. Conventional strategies for change classification are based on supervised or unsupervised land-use/land-cover classifications. The main drawback of such approaches is that the quality of the classification result depends directly on the selection of training and reference data. Additionally, supervised processing methods require an experienced operator who capably selects the training samples. This training step is not necessary when using unsupervised strategies, but meaningful reference data must nevertheless be available for identifying the resulting classes. Consequently, an experienced operator is indispensable. In this study, an innovative concept for the classification of changes in SAR time series data is proposed. Regarding the drawbacks of traditional strategies given above, it copes without using any training data. Moreover, the method can be applied by an operator who does not yet have detailed knowledge of the available scenery; this knowledge is provided by the algorithm. The final step of the procedure, whose main aspect is the iterative optimization of an initial class scheme with respect to the categorized change objects, is the classification of these objects into the final classes. This assignment step is the subject of this paper.

  12. Ischemic Segment Detection using the Support Vector Domain Description

    DEFF Research Database (Denmark)

    Hansen, Michael Sass; Ólafsdóttir, Hildur; Sjöstrand, Karl

    2007-01-01

    Myocardial perfusion Magnetic Resonance (MR) imaging has proven to be a powerful method to assess coronary artery diseases. The current work presents a novel approach to the analysis of registered sequences of myocardial perfusion MR images. A previously reported AAM-based segmentation and regist...... segments found by assessment of the three common perfusion parameters: maximum upslope, peak and time-to-peak, obtained pixel-wise....
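
    The three perfusion parameters named above are simple per-pixel statistics of the intensity-time curve. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
def perfusion_parameters(curve, dt=1.0):
    """Compute maximum upslope, peak value and time-to-peak for one
    pixel's intensity-time curve sampled every dt seconds."""
    upslopes = [(b - a) / dt for a, b in zip(curve, curve[1:])]
    max_upslope = max(upslopes)                # steepest contrast inflow
    peak = max(curve)                          # peak enhancement
    time_to_peak = curve.index(peak) * dt      # time of first peak sample
    return max_upslope, peak, time_to_peak
```

    Computed pixel-wise over the myocardium, these three maps are the feature space in which the ischemic segments are then sought.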

  13. Approaching near real-time biosensing: microfluidic microsphere based biosensor for real-time analyte detection.

    Science.gov (United States)

    Cohen, Noa; Sabhachandani, Pooja; Golberg, Alexander; Konry, Tania

    2015-04-15

    In this study we describe a simple lab-on-a-chip (LOC) biosensor approach utilizing a well-mixed microfluidic device and a microsphere-based assay capable of performing near real-time diagnostics of clinically relevant analytes such as cytokines and antibodies. We were able to overcome the adsorption-kinetics reaction rate-limiting mechanism, which is diffusion-controlled in standard immunoassays, by introducing the microsphere-based assay into a well-mixed yet simple microfluidic device with turbulent flow profiles in the reaction regions. The integrated microsphere-based LOC device performs dynamic detection of the analyte in a minimal amount of biological specimen by continuously sampling micro-liter volumes of sample per minute to detect dynamic changes in target analyte concentration. Furthermore, we developed a mathematical model for the well-mixed reaction to describe the near real-time detection mechanism observed in the developed LOC method. To demonstrate the specificity and sensitivity of the developed real-time monitoring LOC approach, we applied the device to clinically relevant analytes: the Tumor Necrosis Factor (TNF)-α cytokine and its clinically used inhibitor, anti-TNF-α antibody. Based on the results reported herein, the developed LOC device provides a continuous, sensitive and specific near real-time monitoring method for analytes such as cytokines and antibodies, reduces reagent volumes by nearly three orders of magnitude and eliminates the washing steps required by standard immunoassays. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. The cultural differences in time and time management: A socio-demographic approach

    Directory of Open Access Journals (Sweden)

    F. Venter

    2006-12-01

    Full Text Available Purpose/Objectives: The aim of this article is to investigate perceived cultural differences in the perceptions of time and time management, and the implications regarding productivity amongst socio-demographic groups in Gauteng. This study indicates that socio-demographic variables such as home language, gender, education, age and income are related to various factors of time perception. Design/Methodology/Approach: The questionnaire consisted of 35 questions to be rated on a five-point Likert scale. Six dimensions of time were measured, namely, the sense of purpose, effective organisation, structured routine, present orientation, persistence and a global time perception. A multi-cultural non-probability convenience sample (n=804) was drawn from residents in the Gauteng region. Respondents were selected from upper-, middle- and lower-income groups residing in various suburban areas and townships in the region. Students of the North-West University carried out the fieldwork. Findings/Implications: The research study found that the dimensions sense of purpose and persistence of time obtained the highest mean factor scores: 4.05 and 3.95 respectively on the 1 (negative) to 5 (positive) scale, with 87.4% and 83.8% of the respondents obtaining high scores (above 3.40) respectively. This implies that most respondents felt that they spent their time usefully and meaningfully, while at the same time, would not give up until the task was completed. The dimension present orientation of time produced the lowest mean factor score of 3.09, with 29.4% of respondents obtaining scores below 2.60, indicating a lack of focus on completing a task at a designated point in time. The study also found that organisations have to increase productivity and reduce costs. The consequences of this for many employees included increased workloads, longer working hours and greater time pressure. Originality/Value: The findings of this study are original and innovative. The

  15. Incorporating Edge Information into Best Merge Region-Growing Segmentation

    Science.gov (United States)

    Tilton, James C.; Pasolli, Edoardo

    2014-01-01

    We have previously developed a best merge region-growing approach that integrates nonadjacent region object aggregation with the neighboring region merge process usually employed in region-growing segmentation approaches. This approach has been named HSeg, because it provides a hierarchical set of image segmentation results. Up to this point, HSeg considered only global region feature information in the region-growing decision process. We present here three new versions of HSeg that include local edge information in the region-growing decision process at different levels of rigor. We then compare the effectiveness and processing times of these new versions of HSeg with each other and with the original version of HSeg.
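
    HSeg itself is NASA software and is not reproduced here; the neighboring-region best-merge idea alone, stripped of the nonadjacent aggregation and of the new edge term, can be illustrated on a 1-D signal (names are illustrative):

```python
def best_merge_1d(pixels, n_regions):
    """Iteratively merge the pair of *adjacent* regions whose mean
    intensities are closest, until n_regions remain. A toy 1-D version
    of best merge region growing; no edge information is used."""
    regions = [[p] for p in pixels]            # start: one region per pixel
    mean = lambda r: sum(r) / len(r)
    while len(regions) > n_regions:
        i = min(range(len(regions) - 1),
                key=lambda j: abs(mean(regions[j]) - mean(regions[j + 1])))
        regions[i:i + 2] = [regions[i] + regions[i + 1]]
    return regions
```

    Stopping at successive values of n_regions is what yields a hierarchy of segmentations; the paper's contribution is to weight the merge criterion with local edge strength so that merges do not cross strong boundaries.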

  16. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    Science.gov (United States)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE1394) and which can compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised to play back, display, and process video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and an IP network. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology, under the control of a TCP-based command network (e.g. for bandwidth occupation control). We report here some results and show the potential use of such a flexible system in a third-generation video-surveillance system. We illustrate the interest of the system in a real case study: indoor surveillance.

  17. Distant Cities, Travelling Tales and Segmented Young Lives: Making and Remaking Youth Exclusion across Time and Place

    Science.gov (United States)

    Dillabough, Jo-Anne; McLeod, Julie; Oliver, Caroline

    2015-01-01

    A substantial body of research suggests that incipient moral anxiety is growing in relation to excluded youth, and is manifestly cross-national in nature. While these anxieties are often assumed to be most evident in recent times, historians of childhood and youth persistently remind us of the long history of anxiety recorded in the public record…

  18. Improving operating room turnover time: a systems based approach.

    Science.gov (United States)

    Bhatt, Ankeet S; Carlson, Grant W; Deckers, Peter J

    2014-12-01

    Operating room (OR) turnover time (TT) has a broad and significant impact on hospital administrators, providers, staff and patients. Our objective was to identify current problems in TT management and implement a consistent, reproducible process to reduce average TT and process variability. Initial observations of TT were made to document the existing process at a 511-bed, 24-OR academic medical center. Three control groups, including one consisting of Orthopedic and Vascular Surgery, were used to limit potential confounders such as case acuity/duration and equipment needs. A redesigned process based on observed issues, focusing on a horizontally structured, systems-based approach, has three major interventions: developing consistent criteria for OR readiness, utilizing parallel processing for patient and room readiness, and enhancing perioperative communication. Process redesign was implemented in Orthopedics and Vascular Surgery. Comparisons of mean and standard deviation of TT were made using an independent 2-tailed t-test. Using all surgical specialties as controls (n = 237), mean TT (hh:mm:ss) was reduced by 0:20:48 min (95 % CI, 0:10:46-0:30:50), from 0:44:23 to 0:23:25, a 46.9 % reduction. Standard deviation of TT was reduced by 0:10:32 min, from 0:16:24 to 0:05:52, and the frequency of TT ≥ 30 min was reduced from 72.5 to 11.7 %. A systems-based focus should drive OR TT design.

  19. Inverse Ising problem in continuous time: A latent variable approach

    Science.gov (United States)

    Donner, Christian; Opper, Manfred

    2017-12-01

    We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
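
    The augmentation and EM machinery are beyond the scope of an abstract, but the data-generating model, continuous-time Glauber dynamics, is straightforward to simulate with a Gillespie-style loop. The sketch below shows the forward model only (a common rate convention; the inference side with Poisson and Pólya-Gamma variables is not shown, and all names are illustrative):

```python
import math, random

def glauber_trajectory(J, s0, t_max, seed=0):
    """Simulate continuous-time Glauber dynamics for an Ising network.

    J:  coupling matrix (list of lists); s0: initial spins in {-1, +1}.
    Spin i flips with rate 0.5 * (1 - s_i * tanh(h_i)), h_i = sum_j J_ij s_j.
    Returns the list of (time, flipped_spin_index) events up to t_max.
    """
    rng = random.Random(seed)
    s, t, events = list(s0), 0.0, []
    while True:
        rates = [0.5 * (1 - s[i] * math.tanh(sum(J[i][j] * s[j]
                 for j in range(len(s))))) for i in range(len(s))]
        total = sum(rates)
        t += rng.expovariate(total)            # exponential waiting time
        if t > t_max:
            return events
        u, acc = rng.random() * total, 0.0     # choose which spin flips
        for i, r in enumerate(rates):
            acc += r
            if u <= acc:
                s[i] = -s[i]
                events.append((t, i))
                break
```

    Trajectories of exactly this form (flip times and identities) are the observed data from which the inverse problem recovers the couplings J.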

  20. Image segmentation algorithm based on T-junctions cues

    Science.gov (United States)

    Qian, Yanyu; Cao, Fengyun; Wang, Lu; Yang, Xuejie

    2016-03-01

    To improve on the over-segmentation and over-merging exhibited by single image segmentation algorithms, a novel approach combining a graph-based algorithm with T-junction cues is proposed in this paper. First, L0 gradient minimization is applied to smooth the target image and eliminate artifacts caused by noise and texture detail. Then, an initial over-segmentation of the smoothed image is produced using the graph-based algorithm. Finally, the final result is obtained via a region fusion strategy driven by T-junction cues. Experimental results on a variety of images verify the new approach's effectiveness in eliminating artifacts caused by noise; segmentation accuracy and time complexity are significantly improved.

  1. A Dynamic Time Warping Approach to Real-Time Activity Recognition for Food Preparation

    Science.gov (United States)

    Pham, Cuong; Plötz, Thomas; Olivier, Patrick

    We present a dynamic time warping based activity recognition system for the analysis of low-level food preparation activities. Accelerometers embedded into kitchen utensils provide continuous sensor data streams while people are using them for cooking. The recognition framework analyzes frames of contiguous sensor readings in real-time with low latency. It thereby adapts to the idiosyncrasies of utensil use by automatically maintaining a template database. We demonstrate the effectiveness of the classification approach in a number of real-world practical experiments on a publicly available dataset. The adaptive system shows superior performance compared to a static recognizer. Furthermore, we demonstrate the generalization capabilities of the system by gradually reducing the amount of training samples. The system achieves excellent classification results even if only a small number of training samples is available, which is especially relevant for real-world scenarios.
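
    The distance at the core of such a recognizer, dynamic time warping between a frame of sensor readings and a stored template, fits in a few lines. The sketch below is illustrative (it is not the authors' implementation, and real frames are multi-axis accelerometer vectors rather than scalars):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences:
    minimal cumulative |a_i - b_j| cost over monotone alignments."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch template
                                 D[i][j - 1],      # stretch frame
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def classify(frame, templates):
    """Nearest-template classification over a {label: sequence} database."""
    return min(templates,
               key=lambda label: dtw_distance(frame, templates[label]))
```

    Because the alignment may stretch either sequence, the same gesture performed faster or slower still matches its template closely; the paper's adaptive element is that the template database itself is updated online.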

  2. Approaches for the accurate definition of geological time boundaries

    Science.gov (United States)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa, and hence boundaries between these geological units correspond to dramatic faunal and/or floral turnovers; they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we applied a multi-proxy approach to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion), with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age.

  3. Comprehensive electrocardiogram-to-device time for primary percutaneous coronary intervention in ST-segment elevation myocardial infarction: A report from the American Heart Association mission: Lifeline program.

    Science.gov (United States)

    Shavadia, Jay S; French, William; Hellkamp, Anne S; Thomas, Laine; Bates, Eric R; Manoukian, Steven V; Kontos, Michael C; Suter, Robert; Henry, Timothy D; Dauerman, Harold L; Roe, Matthew T

    2018-03-01

    Assessing hospital-related network-level primary percutaneous coronary intervention (PCI) performance for ST-segment elevation myocardial infarction (STEMI) is challenging due to differential time-to-treatment metrics based on location of diagnostic electrocardiogram (ECG) for STEMI. STEMI patients undergoing primary PCI at 588 PCI-capable hospitals in AHA Mission: Lifeline (2008-2013) were categorized by initial STEMI identification location: PCI-capable hospitals (Group 1); pre-hospital setting (Group 2); and non-PCI-capable hospitals (Group 3). Patient-specific time-to-treatment categories were converted to minutes ahead of or behind their group-specific mean; the average time-to-treatment difference for all patients at a given hospital was termed the comprehensive ECG-to-device time. Hospitals were then stratified into tertiles based on their comprehensive ECG-to-device times, with negative values below the mean representing shorter (faster) time intervals. Of 117,857 patients, the proportions in Groups 1, 2, and 3 were 42%, 33%, and 25%, respectively. Lower rates of heart failure and cardiac arrest at presentation were noted among patients presenting to high-performing hospitals. Median comprehensive ECG-to-device time was shortest at -9 minutes (25th, 75th percentiles: -13, -6) for the high-performing hospital tertile, 1 minute (-1, 3) for middle-performing, and 11 minutes (7, 16) for low-performing. Unadjusted rates of in-hospital mortality were 2.3%, 2.6%, and 2.7%, respectively, but the adjusted risk of in-hospital mortality was similar across tertiles. Comprehensive ECG-to-device time provides an integrated hospital-related network-level assessment of reperfusion timing metrics for primary PCI, regardless of the location of STEMI identification; further validation will delineate how this metric can be used to facilitate STEMI care improvements. Copyright © 2017 Elsevier Inc. All rights reserved.
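
    The metric's construction — each patient's time expressed relative to the mean of their STEMI-identification group, then averaged per hospital — can be sketched as follows. The data layout and field names are illustrative, not those of the registry.

```python
# Sketch of the comprehensive ECG-to-device metric: negative hospital
# scores mean patients were treated faster than their group average.

def comprehensive_ecg_to_device(patients):
    # patients: list of (hospital, group, ecg_to_device_minutes)
    group_totals = {}
    for _, grp, t in patients:
        s, n = group_totals.get(grp, (0.0, 0))
        group_totals[grp] = (s + t, n + 1)
    group_mean = {g: s / n for g, (s, n) in group_totals.items()}

    hosp_totals = {}
    for hosp, grp, t in patients:
        dev = t - group_mean[grp]          # negative = faster than group mean
        s, n = hosp_totals.get(hosp, (0.0, 0))
        hosp_totals[hosp] = (s + dev, n + 1)
    return {h: s / n for h, (s, n) in hosp_totals.items()}

data = [("A", 1, 60), ("A", 2, 80), ("B", 1, 90), ("B", 2, 100)]
print(comprehensive_ecg_to_device(data))  # → {'A': -12.5, 'B': 12.5}
```

    Hospital A is 12.5 minutes faster than average across its case mix even though its Group 1 and Group 2 patients have different absolute times; that is exactly the normalization the metric is designed to provide.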

  4. Brookhaven segment interconnect

    International Nuclear Information System (INIS)

    Morse, W.M.; Benenson, G.; Leipuner, L.B.

    1983-01-01

    We have performed a high energy physics experiment using a multisegment Brookhaven FASTBUS system. The system was composed of three crate segments and two cable segments. We discuss the segment interconnect module which permits communication between the various segments.

  5. Phasing multi-segment undulators

    International Nuclear Information System (INIS)

    Chavanne, J.; Elleaume, P.; Vaerenbergh, P. Van

    1996-01-01

    An important issue in the manufacture of multi-segment undulators as a source of synchrotron radiation or as a free-electron laser (FEL) is the phasing between successive segments. The state of the art is briefly reviewed, after which a novel pure permanent magnet phasing section that is passive and does not require any current is presented. The phasing section allows the introduction of a 6 mm longitudinal gap between each segment, resulting in complete mechanical independence and reduced magnetic interaction between segments. The tolerance of the longitudinal positioning of one segment with respect to the next is found to be 2.8 times lower than that of conventional phasing. The spectrum at all gaps and useful harmonics is almost unchanged when compared with a single-segment undulator of the same total length. (au) 3 refs

  6. Fast prostate segmentation for brachytherapy based on joint fusion of images and labels

    Science.gov (United States)

    Nouranian, Saman; Ramezani, Mahdi; Mahdavi, S. Sara; Spadinger, Ingrid; Morris, William J.; Salcudean, Septimiu E.; Abolmaesumi, Purang

    2014-03-01

    Brachytherapy, one of the treatment methods for prostate cancer, involves the implantation of radioactive seeds inside the gland. The standard of care for this treatment procedure is to acquire transrectal ultrasound images of the prostate, which are segmented in order to plan the appropriate seed placement. The segmentation process is usually performed either manually or semi-automatically and is associated with subjective errors because prostate visibility is limited in ultrasound images. The current segmentation process also limits the possibility of intra-operative delineation of the prostate to perform real-time dosimetry. In this paper, we propose a computationally inexpensive and fully automatic segmentation approach that takes advantage of previously segmented images to form a joint space of images and their segmentations. We utilize the joint Independent Component Analysis method to generate a model which is further employed to produce a probability map of the target segmentation. We evaluate this approach on the transrectal ultrasound volume images of 60 patients using a leave-one-out cross-validation approach. The results are compared with the manually segmented prostate contours that were used by clinicians to plan brachytherapy procedures. We show that the proposed approach is fast, with accuracy and precision comparable to those found in previous studies on TRUS segmentation.

  7. Reduplication Facilitates Early Word Segmentation

    Science.gov (United States)

    Ota, Mitsuhiko; Skarabela, Barbora

    2018-01-01

    This study explores the possibility that early word segmentation is aided by infants' tendency to segment words with repeated syllables ("reduplication"). Twenty-four nine-month-olds were familiarized with passages containing one novel reduplicated word and one novel non-reduplicated word. Their central fixation times in response to…

  8. Market Segmentation from a Behavioral Perspective

    Science.gov (United States)

    Wells, Victoria K.; Chang, Shing Wan; Oliveira-Castro, Jorge; Pallister, John

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847…

  9. A toolbox for multiple sclerosis lesion segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Roura, Eloy; Oliver, Arnau; Valverde, Sergi; Llado, Xavier [University of Girona, Computer Vision and Robotics Group, Girona (Spain); Cabezas, Mariano; Pareto, Deborah; Rovira, Alex [Vall d' Hebron University Hospital, Magnetic Resonance Unit, Dept. of Radiology, Barcelona (Spain); Vilanova, Joan C. [Girona Magnetic Resonance Center, Girona (Spain); Ramio-Torrenta, Lluis [Dr. Josep Trueta University Hospital, Institut d' Investigacio Biomedica de Girona, Multiple Sclerosis and Neuroimmunology Unit, Girona (Spain)

    2015-10-15

    Lesion segmentation plays an important role in the diagnosis and follow-up of multiple sclerosis (MS). This task is very time-consuming and subject to intra- and inter-rater variability. In this paper, we present a new tool for automated MS lesion segmentation using T1w and fluid-attenuated inversion recovery (FLAIR) images. Our approach is based on two main steps, initial brain tissue segmentation according to the gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) performed in T1w images, followed by a second step where the lesions are segmented as outliers to the normal apparent GM brain tissue on the FLAIR image. The tool has been validated using data from more than 100 MS patients acquired with different scanners and at different magnetic field strengths. Quantitative evaluation provided a better performance in terms of precision while maintaining similar results on sensitivity and Dice similarity measures compared with those of other approaches. Our tool is implemented as a publicly available SPM8/12 extension that can be used by both the medical and research communities. (orig.)

  10. A toolbox for multiple sclerosis lesion segmentation

    International Nuclear Information System (INIS)

    Roura, Eloy; Oliver, Arnau; Valverde, Sergi; Llado, Xavier; Cabezas, Mariano; Pareto, Deborah; Rovira, Alex; Vilanova, Joan C.; Ramio-Torrenta, Lluis

    2015-01-01

    Lesion segmentation plays an important role in the diagnosis and follow-up of multiple sclerosis (MS). This task is very time-consuming and subject to intra- and inter-rater variability. In this paper, we present a new tool for automated MS lesion segmentation using T1w and fluid-attenuated inversion recovery (FLAIR) images. Our approach is based on two main steps, initial brain tissue segmentation according to the gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) performed in T1w images, followed by a second step where the lesions are segmented as outliers to the normal apparent GM brain tissue on the FLAIR image. The tool has been validated using data from more than 100 MS patients acquired with different scanners and at different magnetic field strengths. Quantitative evaluation provided a better performance in terms of precision while maintaining similar results on sensitivity and Dice similarity measures compared with those of other approaches. Our tool is implemented as a publicly available SPM8/12 extension that can be used by both the medical and research communities. (orig.)

  11. Metrics for image segmentation

    Science.gov (United States)

    Rees, Gareth; Greenway, Phil; Morray, Denise

    1998-07-01

    An important challenge in mapping image-processing techniques onto applications is the lack of quantitative performance measures. From a systems engineering perspective these are essential if system level requirements are to be decomposed into sub-system requirements which can be understood in terms of algorithm selection and performance optimization. Nowhere in computer vision is this more evident than in the area of image segmentation. This is a vigorous and innovative research activity, but even after nearly two decades of progress, it remains almost impossible to answer the question 'what would the performance of this segmentation algorithm be under these new conditions?' To begin to address this shortcoming, we have devised a well-principled metric for assessing the relative performance of two segmentation algorithms. This allows meaningful objective comparisons to be made between their outputs. It also estimates the absolute performance of an algorithm given ground truth. Our approach is an information theoretic one. In this paper, we describe the theory and motivation of our method, and present practical results obtained from a range of state of the art segmentation methods. We demonstrate that it is possible to measure the objective performance of these algorithms, and to use the information so gained to provide clues about how their performance might be improved.
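
    The abstract does not spell out the metric itself; as an example of an information-theoretic comparison between two segmentations, the standard variation of information VI = H(A) + H(B) - 2 I(A;B) can be computed as below. VI is zero exactly when the two partitions agree up to relabelling.

```python
import math

# Variation of information between two label maps of equal length:
# VI = H(A) + H(B) - 2 I(A;B). Lower is better; 0 means identical
# partitions (label values themselves do not matter).

def variation_of_information(seg_a, seg_b):
    n = len(seg_a)
    pa, pb, pab = {}, {}, {}
    for a, b in zip(seg_a, seg_b):
        pa[a] = pa.get(a, 0) + 1
        pb[b] = pb.get(b, 0) + 1
        pab[(a, b)] = pab.get((a, b), 0) + 1
    h_a = -sum(c / n * math.log2(c / n) for c in pa.values())
    h_b = -sum(c / n * math.log2(c / n) for c in pb.values())
    mi = sum(c / n * math.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
             for (a, b), c in pab.items())
    return h_a + h_b - 2 * mi

print(variation_of_information([0, 0, 1, 1], [5, 5, 7, 7]))  # → 0.0
```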

  12. Approaching stationarity: competition between long jumps and long waiting times

    International Nuclear Information System (INIS)

    Dybiec, Bartłomiej

    2010-01-01

    Within the continuous-time random walk (CTRW) scenarios, properties of the overall motion are determined by the waiting time and the jump length distributions. In the decoupled case, with power-law distributed waiting times and jump lengths, the CTRW scenario is asymptotically described by the double (space and time) fractional Fokker–Planck equation. Properties of a system described by such an equation are determined by the subdiffusion parameter and the jump length exponent. Nevertheless, the stationary state is determined solely by the jump length distribution and the potential. The waiting time distribution determines only the rate of convergence to the stationary state. Here, we inspect the competition between long waiting times and long jumps and how this competition is reflected in the way in which a stationary state is reached. In particular, we show that the distance between a time-dependent and a stationary solution changes in time as a double power law.
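
    A decoupled CTRW of this kind is easy to simulate directly: draw a power-law waiting time, advance the clock, then draw a power-law jump with a random sign. The sketch below uses Pareto-distributed waits and jumps via inverse-transform sampling; the exponents and horizon are illustrative, not taken from the paper.

```python
import random

# Toy decoupled CTRW: waiting times and jump lengths are
# Pareto(alpha)-distributed on [1, inf), jump signs are symmetric.

def ctrw_position(t_max, wait_alpha, jump_alpha, rng):
    """Walker position at time t_max."""
    t, x = 0.0, 0.0
    while True:
        # inverse-transform sample of a Pareto(wait_alpha) waiting time
        t += (1.0 - rng.random()) ** (-1.0 / wait_alpha)
        if t > t_max:
            return x          # no further jumps before the horizon
        step = (1.0 - rng.random()) ** (-1.0 / jump_alpha)
        x += step if rng.random() < 0.5 else -step

rng = random.Random(1)
samples = [ctrw_position(1000.0, wait_alpha=0.7, jump_alpha=1.5, rng=rng)
           for _ in range(200)]
print(min(samples), max(samples))
```

    With wait_alpha < 1 the mean waiting time diverges (subdiffusive regime), while jump_alpha < 2 gives heavy-tailed, Lévy-like displacements; both tails are visible in the spread of the sampled positions.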

  13. Energy efficient approach for transient fault recovery in real time ...

    African Journals Online (AJOL)

    Keywords: DVS, Fault tolerance, Real Time System, Transient Fault. ... in which missing the deadline may cause a failure and soft real time system, ... Pillai, P., Shin, K., Real-time dynamic voltage scaling for low-power embedded operating ...

  14. A Reparametrization Approach for Dynamic Space-Time Models

    OpenAIRE

    Lee, Hyeyoung; Ghosh, Sujit K.

    2008-01-01

    Researchers in diverse areas such as environmental and health sciences are increasingly working with data collected across space and time. The space-time processes that are generally used in practice are often complicated in the sense that the auto-dependence structure across space and time is non-trivial, often non-separable and non-stationary in space and time. Moreover, the dimension of such data sets across both space and time can be very large leading to computational difficulties due to...

  15. Time interval approach to the pulsed neutron logging method

    International Nuclear Information System (INIS)

    Zhao Jingwu; Su Weining

    1994-01-01

    The time interval between neighbouring neutrons emitted from a steady-state neutron source can be treated as that from a time-dependent neutron source. In the rock space, the neutron flux is given by the neutron diffusion equation and is composed of an infinite number of terms, each composed of two die-away curves. The delay action is discussed and used to measure the time interval with only one detector in the experiment. Nuclear reactions with the time distributions of the different types of radiation observed in neutron well-logging methods are presented, with a view to obtaining the rock nuclear parameters from the time interval technique.

  16. Time-series prediction and applications a machine intelligence approach

    CERN Document Server

    Konar, Amit

    2017-01-01

    This book presents machine learning and type-2 fuzzy sets for the prediction of time-series with a particular focus on business forecasting applications. It also proposes new uncertainty management techniques in an economic time-series using type-2 fuzzy sets for prediction of the time-series at a given time point from its preceding value in fluctuating business environments. It employs machine learning to determine repetitively occurring similar structural patterns in the time-series and uses a stochastic automaton to predict the most probabilistic structure at a given partition of the time-series. Such predictions help in determining probabilistic moves in a stock index time-series. Primarily written for graduate students and researchers in computer science, the book is equally useful for researchers/professionals in business intelligence and stock index prediction. A background of undergraduate level mathematics is presumed, although not mandatory, for most of the sections. Exercises with tips are provided at...

  17. Wound healing: time to look for intelligent, 'natural' immunological approaches?

    Science.gov (United States)

    Garraud, Olivier; Hozzein, Wael N; Badr, Gamal

    2017-06-21

    There is now good evidence that cytokines and growth factors are key factors in tissue repair and often exert anti-infective activities. However, engineering such factors for global use, even in the most remote places, is not realistic. Instead, we propose to examine how such factors work and to evaluate the reparative tools generously provided by 'nature.' We used two approaches to address these objectives. The first approach was to reappraise the internal capacity of the factors contributing the most to healing in the body, i.e., blood platelets. The second was to revisit natural agents such as whey proteins, (honey) bee venom and propolis. The platelet approach elucidates the inflammation spectrum from physiology to pathology, whereas milk and honey derivatives accelerate diabetic wound healing. Thus, this review aims at offering a fresh view of how wound healing can be addressed by natural means.

  18. Cross-visit tumor sub-segmentation and registration with outlier rejection for dynamic contrast-enhanced MRI time series data.

    Science.gov (United States)

    Buonaccorsi, G A; Rose, C J; O'Connor, J P B; Roberts, C; Watson, Y; Jackson, A; Jayson, G C; Parker, G J M

    2010-01-01

    Clinical trials of anti-angiogenic and vascular-disrupting agents often use biomarkers derived from DCE-MRI, typically reporting whole-tumor summary statistics and so overlooking spatial parameter variations caused by tissue heterogeneity. We present a data-driven segmentation method comprising tracer-kinetic model-driven registration for motion correction, conversion from MR signal intensity to contrast agent concentration for cross-visit normalization, iterative principal components analysis for imputation of missing data and dimensionality reduction, and statistical outlier detection using the minimum covariance determinant to obtain a robust Mahalanobis distance. After applying these techniques we cluster in the principal components space using k-means. We present results from a clinical trial of a VEGF inhibitor, using time-series data selected because of problems due to motion and outlier time series. We obtained spatially-contiguous clusters that map to regions with distinct microvascular characteristics. This methodology has the potential to uncover localized effects in trials using DCE-MRI-based biomarkers.
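
    The pipeline ends with k-means clustering in the principal-components space. Below is a minimal generic k-means on 2-D feature vectors, with toy points standing in for per-voxel PCA scores; it is not the paper's full registration-and-imputation pipeline.

```python
import math

# Minimal k-means with deterministic initialization (first k points).
# Two well-separated toy clusters stand in for distinct microvascular
# characteristics in the reduced feature space.

def kmeans(points, k, iters=100):
    centers = list(points[:k])                 # simple deterministic init
    for _ in range(iters):
        # assign each point to its nearest centre
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[j].append(p)
        # recompute centres as cluster means (keep old centre if empty)
        new = [tuple(sum(d) / len(c) for d in zip(*c)) if c else centers[i]
               for i, c in enumerate(clusters)]
        if new == centers:                     # converged
            break
        centers = new
    labels = [min(range(k), key=lambda c: math.dist(p, centers[c]))
              for p in points]
    return centers, labels

pts = [(0.1, 0.0), (0.0, 0.2), (0.2, 0.1),     # one cluster
       (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]     # a distinct cluster
centers, labels = kmeans(pts, k=2)
print(labels)  # → [0, 0, 0, 1, 1, 1]
```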

  19. Housing Cycles in Switzerland - A Time-Varying Approach

    OpenAIRE

    Drechsel, Dirk

    2015-01-01

    In light of the strong increase of house prices in Switzerland, we analyze the effects of mortgage rate shocks, changes in the interplay between housing demand and supply and GDP growth on house prices for the time period 1981- 2014. We employ Bayesian time-varying coefficients vector autoregressions to allow different monetary and immigration regimes over time. A number of structural changes, such as regulatory changes in the aftermath of the 1990s real estate crisis, the introduction of fre...

  20. Importance-Performance Analysis of Service Attributes based on Customers Segmentation with a Data Mining Approach: a Study in the Mobile Telecommunication Market in Yazd Province

    Directory of Open Access Journals (Sweden)

    Seyed Yaghoub Hosseini

    2012-12-01

    Full Text Available In customer relationship management (CRM) systems, the importance and performance of the attributes that define a service are critical. Importance-Performance Analysis (IPA) is an effective tool for prioritizing service attributes based on customer needs and expectations, and for identifying the strengths and weaknesses of an organization in the market. In this study, with the purpose of increasing the reliability and accuracy of the results, customers are segmented based on their demographic characteristics and their perception of service attribute performance, and individual IPA matrices are then developed for each segment. Self-Organizing Maps (SOM were used for segmentation, and a feed-forward neural network was used to estimate the importance of the attributes. Research findings show that mobile subscribers in Yazd province can be categorized into three segments. Individual IPA matrices have been provided for each of these segments. Based on these results, recommendations are offered to companies providing mobile phone services.

  1. Organizational Commitment in Times of Change: An Alternative Research Approach.

    Science.gov (United States)

    Larkey, Linda Kathryn

    A study illustrated an interpretive approach to investigating personal commitment during radical organizational transition by examining how people talk metaphorically about commitment and identification as a process. A questionnaire was constructed to be used in phone interviews with six employee assistance program (EAP) counselors who contract…

  2. The strategic marketing planning – General Framework for Customer Segmentation

    Directory of Open Access Journals (Sweden)

    Alina Elena OPRESCU

    2014-03-01

    Full Text Available Any approach that involves the use of an organisation's strategic resources requires a responsible approach, a behaviour that enables it to integrate itself properly into the dynamics of the business environment. This article addresses, in a synthetic manner, the integration of customer segmentation into strategic marketing planning. An essential activity for any organisation wishing to optimise its response to the market, customer segmentation benefits fully from the framework provided by strategic marketing planning. Being a sequential process, it not only allows time optimisation of the entire marketing activity but also improves the accuracy of strategic planning and its stages.

  3. A time-spectral approach to numerical weather prediction

    Science.gov (United States)

    Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai

    2018-05-01

    Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency and because time step limitations associated with causal CFL-like criteria, typical for explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals typically are two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage of producing analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling.

  4. An effective approach to the problem of time

    NARCIS (Netherlands)

    Bojowald, M.; Höhn, P.A.|info:eu-repo/dai/nl/330827952; Tsobanjan, A.

    2010-01-01

    A practical way to deal with the problem of time in quantum cosmology and quantum gravity is proposed. The main tool is effective equations, which mainly restrict explicit considerations to semiclassical regimes but have the crucial advantage of allowing the consistent use of local internal times in

  5. A dynamical approach to time dilation and length contraction

    NARCIS (Netherlands)

    Vries, de D.K.; Muynck, de W.M.

    1996-01-01

    Simple models of length and time measuring instruments are studied in order to see under what conditions a relativistic description of the dynamics of accelerated motion can be consistent with the kinematic prescriptions of Lorentz contraction and time dilation. The outcomes obtained for the

  6. Prognostic Value of Cardiac Time Intervals by Tissue Doppler Imaging M-Mode in Patients With Acute ST-Segment-Elevation Myocardial Infarction Treated With Primary Percutaneous Coronary Intervention

    DEFF Research Database (Denmark)

    Biering-Sørensen, Tor; Mogelvang, Rasmus; Søgaard, Peter

    2013-01-01

    Background- Color tissue Doppler imaging M-mode through the mitral leaflet is an easy and precise method to estimate all cardiac time intervals from 1 cardiac cycle and thereby obtain the myocardial performance index (MPI). However, the prognostic value of the cardiac time intervals and the MPI assessed by color tissue Doppler imaging M-mode through the mitral leaflet in patients with ST-segment-elevation myocardial infarction (MI) is unknown. Methods and Results- In total, 391 patients were admitted with an ST-segment-elevation MI, treated with primary percutaneous coronary intervention...

  7. A Finite Segment Method for Skewed Box Girder Analysis

    Directory of Open Access Journals (Sweden)

    Xingwei Xue

    2018-01-01

    Full Text Available A finite segment method is presented to analyze the mechanical behavior of skewed box girders. By modeling the top and bottom plates of the segments with a skew plate beam element under an inclined coordinate system, and the webs with a normal plate beam element, a spatial elastic displacement model for the skewed box girder is constructed which satisfies the compatibility condition at the corners of the cross section. The formulation of the finite segment is developed based on the variational principle. The major advantage of the proposed approach, in comparison with the finite element method, is that it simplifies a three-dimensional structure into a one-dimensional structure for structural analysis, resulting in significant savings in computational time. Finally, the accuracy and efficiency of the proposed finite segment method are verified by a model test.

  8. A design approach for ultrareliable real-time systems

    Science.gov (United States)

    Lala, Jaynarayan H.; Harper, Richard E.; Alger, Linda S.

    1991-01-01

    A design approach developed over the past few years to formalize redundancy management and validation is described. Redundant elements are partitioned into individual fault-containment regions (FCRs). An FCR is a collection of components that operates correctly regardless of any arbitrary logical or electrical fault outside the region. Conversely, a fault in an FCR cannot cause hardware outside the region to fail. The outputs of all channels are required to agree bit-for-bit under no-fault conditions (exact bitwise consensus). Synchronization, input agreement, and input validity conditions are discussed. The Advanced Information Processing System (AIPS), which is a fault-tolerant distributed architecture based on this approach, is described. A brief overview of recent applications of these systems and current research is presented.

  9. A Radial Basis Function Approach to Financial Time Series Analysis

    Science.gov (United States)

    1993-12-01

    consequently this approach is at the core of a large fraction of the portfolio management systems today. The Capital Asset Pricing Model (CAPM). due... representation used by each method. but of course a critical concern is how to actually estimate the parameters of the models. To some extent these... model fitting unseen data nicely depends critically on maintaining a balance between the number of data points used for estimation and the number of

  10. Approaching space-time through velocity in doubly special relativity

    International Nuclear Information System (INIS)

    Aloisio, R.; Galante, A.; Grillo, A.F.; Luzio, E.; Mendez, F.

    2004-01-01

    We discuss the definition of velocity as dE/d|p|, where E and p are the energy and momentum of a particle, in doubly special relativity (DSR). If this definition matches dx/dt appropriate for the space-time sector, then space-time can in principle be built consistently with the existence of an invariant length scale. We show that, within different possible velocity definitions, a space-time compatible with momentum-space DSR principles cannot be derived.

  11. Improving Scotland's health: time for a fresh approach?

    Science.gov (United States)

    Stone, D H

    2012-05-01

    Scotland's health remains the worst in the UK. There are several probable reasons for this. Of those that are amenable to change, health improvement policy has been excessively preoccupied with targeting individuals perceived to be 'at risk' rather than adopting a whole population perspective. Environmental as opposed to behavioural approaches to health improvement have been relatively neglected. To meet the challenge of Scotland's poor health more effectively in the future, new strategic thinking is necessary. Three initial steps are required: recognize that current approaches are inadequate and that fresh ideas are needed; identify the principles that should underlie future strategy development; translate these principles into achievable operational objectives. Five principles of a revitalized strategy to improve the health of Scotland in the future are proposed. These are start early and sustain effort; create a healthy and safe environment; reduce geographical as well as social inequalities in health; adopt an evidence-based approach to public health interventions; use epidemiology to assess need, plan interventions and monitor progress. These principles may then be translated into achievable operational policy and practice objectives.

  12. CLG for Automatic Image Segmentation

    OpenAIRE

    Christo Ananth; S.Santhana Priya; S.Manisha; T.Ezhil Jothi; M.S.Ramasubhaeswari

    2017-01-01

    This paper proposes an automatic segmentation method which effectively combines the Active Contour Model, the Live Wire method and the Graph Cut approach (CLG). The aim of the Live Wire method is to give the user control over the segmentation process during execution. The Active Contour Model fits a statistical model of object shape and appearance, built during a training phase, to a new image. In the graph cut technique, each pixel is represented as a node and the distance between those nodes is rep...

  13. Automated Glioblastoma Segmentation Based on a Multiparametric Structured Unsupervised Classification

    Science.gov (United States)

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V.; Robles, Montserrat; Aparici, F.; Martí-Bonmatí, L.; García-Gómez, Juan M.

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of the supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Considering the non-structured algorithms, we evaluated K-means, Fuzzy K-means and Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453
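
    The non-structured branch of the pipeline above clusters voxel intensities directly; K-means is the simplest of the evaluated algorithms. A minimal sketch of that idea on synthetic one-dimensional MR-like intensities (the class means, noise level and quantile initialization are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def kmeans_1d(x, k, iters=50):
    """Minimal Lloyd's K-means on a 1-D intensity vector."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))  # deterministic init
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return np.sort(centers), labels

rng = np.random.default_rng(1)
# Synthetic "MR intensities": three well-separated tissue-like modes.
x = np.concatenate([rng.normal(m, 0.03, 400) for m in (0.2, 0.5, 0.8)])
centers, labels = kmeans_1d(x, 3)
print(np.round(centers, 1))  # recovers the three assumed class means
```

    A real pipeline would run this per voxel over multi-channel MR intensities and, as in the abstract, identify the tumour class afterwards using tissue probability maps.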

  14. Resting multilayer 2D speckle-tracking TTE for detection of ischemic segments confirmed by invasive FFR part-2, using post-systolic-strain-index and time from aortic-valve-closure to regional peak longitudinal-strain.

    Science.gov (United States)

    Ozawa, Koya; Funabashi, Nobusada; Nishi, Takeshi; Takahara, Masayuki; Fujimoto, Yoshihide; Kamata, Tomoko; Kobayashi, Yoshio

    2016-08-15

    This study evaluated the post-systolic strain index (PSI), and the time interval between aortic valve closure (AVC) and regional peak longitudinal strain (PLS), measured by transthoracic echocardiography (TTE), for detection of left ventricular (LV) myocardial ischemic segments confirmed by invasive fractional flow reserve (FFR). 39 stable patients (32 males; 65.8±11.9 years) with 46 coronary arteries at ≥50% stenosis on invasive coronary angiography underwent 2D speckle tracking TTE (Vivid E9, GE Healthcare) and invasive FFR measurements. PSI, AVC and regional PLS in each LV segment were calculated. FFR ≤0.80 was detected in 27 LV segments. There were no significant differences between segments supplied by FFR ≤0.80 and FFR >0.80 vessels in either PSI or the time interval between AVC and regional PLS. To identify LV segments with FFR ≤0.80, the receiver operator characteristic (ROC) curves for PSI, and the time interval between AVC and regional PLS had areas under the curve (AUC) values of 0.58 and 0.57, respectively, with best cut-off points of 12% (sensitivity 70.4%, specificity 57.9%) and 88 ms (sensitivity 70.4%, specificity 52.6%), respectively, but the AUCs were not statistically significant. In stable coronary artery disease patients with ≥50% coronary artery stenosis, measurement of PSI, and the time interval between AVC and regional PLS, on resting TTE, enabled the identification of LV segments with FFR ≤0.80 using each appropriate threshold for PSI, and the time interval between AVC and regional PLS, with reasonable diagnostic accuracy. However, the AUC values were not statistically significant. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
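
    The AUC values reported above come from standard ROC analysis. A self-contained sketch of how an AUC can be computed from two groups of marker values via the Mann-Whitney statistic (the PSI numbers below are invented for illustration, not the study's data):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability P(positive score > negative score)."""
    sp = np.asarray(scores_pos, dtype=float)[:, None]
    sn = np.asarray(scores_neg, dtype=float)[None, :]
    return (sp > sn).mean() + 0.5 * (sp == sn).mean()

# Hypothetical PSI values (%) for ischemic (FFR <= 0.80) vs non-ischemic segments.
psi_isch = [14, 18, 11, 20, 13, 16]
psi_norm = [8, 12, 10, 7, 9, 15]
a = auc(psi_isch, psi_norm)
print(round(a, 2))  # 32 of 36 pairs correctly ordered -> 0.89
```

    An AUC near 0.5, as in the study (0.57–0.58), means the marker barely separates the two groups, which is why the confidence intervals crossed significance thresholds.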

  15. Behavioural analysis of a time series– A chaotic approach

    Indian Academy of Sciences (India)

    Abstract. Out of the various methods available to study the chaotic behaviour, cor- ... that CDM is an efficient method for behavioural study of a time series. ...... Stochastic Environmental Research and Risk Assessment, 27(6): 1371–1381.

  16. A Remote Sensing Approach for Regional-Scale Mapping of Agricultural Land-Use Systems Based on NDVI Time Series

    Directory of Open Access Journals (Sweden)

    Beatriz Bellón

    2017-06-01

    Full Text Available In response to the need for generic remote sensing tools to support large-scale agricultural monitoring, we present a new approach for regional-scale mapping of agricultural land-use systems (ALUS based on object-based Normalized Difference Vegetation Index (NDVI time series analysis. The approach consists of two main steps. First, to obtain relatively homogeneous land units in terms of phenological patterns, a principal component analysis (PCA is applied to an annual MODIS NDVI time series, and an automatic segmentation is performed on the resulting high-order principal component images. Second, the resulting land units are classified into the crop agriculture domain or the livestock domain based on their land-cover characteristics. The crop agriculture domain land units are further classified into different cropping systems based on the correspondence of their NDVI temporal profiles with the phenological patterns associated with the cropping systems of the study area. A map of the main ALUS of the Brazilian state of Tocantins was produced for the 2013–2014 growing season with the new approach, and a significant coherence was observed between the spatial distribution of the cropping systems in the final ALUS map and in a reference map extracted from the official agricultural statistics of the Brazilian Institute of Geography and Statistics (IBGE. This study shows the potential of remote sensing techniques to provide valuable baseline spatial information for supporting agricultural monitoring and for large-scale land-use systems analysis.
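
    The first step of the approach, a PCA over an annual NDVI time series whose leading components feed the segmentation, can be sketched on synthetic pixel profiles (the profile shapes, noise level and MODIS-like 23-date year are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 23)  # 23 composites, one synthetic year
profiles = np.stack([
    0.5 + 0.3 * np.sin(t),       # single cropping cycle
    0.5 + 0.3 * np.sin(2 * t),   # double cropping cycle
    0.55 + 0.05 * np.sin(t),     # stable pasture-like cover
])
# 200 noisy pixels per land-use profile -> a 600 x 23 pixel-by-date matrix.
pixels = np.repeat(profiles, 200, axis=0) + rng.normal(0, 0.02, (600, 23))

# PCA via SVD of the mean-centred matrix; high-order PCs summarize phenology.
X = pixels - pixels.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S**2 / (S**2).sum()
print(explained[:2].sum() > 0.95)  # the two phenological modes dominate
```

    Segmenting the first principal component images (rather than all 23 dates) is what yields the relatively homogeneous land units described in the abstract.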

  17. Evaluation of Parallel Level Sets and Bowsher's Method as Segmentation-Free Anatomical Priors for Time-of-Flight PET Reconstruction.

    Science.gov (United States)

    Schramm, Georg; Holler, Martin; Rezaei, Ahmadreza; Vunckx, Kathleen; Knoll, Florian; Bredies, Kristian; Boada, Fernando; Nuyts, Johan

    2018-02-01

    In this article, we evaluate Parallel Level Sets (PLS) and Bowsher's method as segmentation-free anatomical priors for regularized brain positron emission tomography (PET) reconstruction. We derive the proximity operators for two PLS priors and use the EM-TV algorithm in combination with the first order primal-dual algorithm by Chambolle and Pock to solve the non-smooth optimization problem for PET reconstruction with PLS regularization. In addition, we compare the performance of two PLS versions against the symmetric and asymmetric Bowsher priors with quadratic and relative difference penalty function. For this aim, we first evaluate reconstructions of 30 noise realizations of simulated PET data derived from a real hybrid positron emission tomography/magnetic resonance imaging (PET/MR) acquisition in terms of regional bias and noise. Second, we evaluate reconstructions of a real brain PET/MR data set acquired on a GE Signa time-of-flight PET/MR in a similar way. The reconstructions of simulated and real 3D PET/MR data show that all priors were superior to post-smoothed maximum likelihood expectation maximization with ordered subsets (OSEM) in terms of bias-noise characteristics in different regions of interest where the PET uptake follows anatomical boundaries. Our implementation of the asymmetric Bowsher prior showed slightly superior performance compared with the two versions of PLS and the symmetric Bowsher prior. At very high regularization weights, all investigated anatomical priors suffer from the transfer of non-shared gradients.

  18. Temperature of thermal plasma jets: A time resolved approach

    Energy Technology Data Exchange (ETDEWEB)

    Sahasrabudhe, S N; Joshi, N K; Barve, D N; Ghorui, S; Tiwari, N; Das, A K, E-mail: sns@barc.gov.i [Laser and Plasma Technology Division, Bhabha Atomic Research Centre, Mumbai - 400 094 (India)

    2010-02-01

    Boltzmann Plot method is routinely used for temperature measurement of thermal plasma jets emanating from plasma torches. Here, it is implicitly assumed that the plasma jet is 'steady' in time. However, most of the experimenters do not take into account the variations due to ripple in the high current DC power supplies used to run plasma torches. If a 3-phase transductor type of power supply is used, then the ripple frequency is 150 Hz and if a 3-phase SCR based power supply is used, then the ripple frequency is 300 Hz. The electrical power fed to the plasma torch varies at the ripple frequency. In time scale, it is about 3.3 to 6.7 ms for one cycle of ripple and it is much larger than the arc root movement times which are within 0.2 ms. Fast photography of plasma jets shows that the luminosity of the plasma jet also varies exactly like the ripple in the power supply voltage and thus with the power. Intensity of line radiations varies nonlinearly with the instantaneous power fed to the torch and the simple time average of line intensities taken for calculation of temperature is not appropriate. In this paper, these variations and their effect on temperature determination are discussed and a method to get appropriate data is suggested. With a small adaptation discussed here, this method can be used to get the temperature profile of the plasma jet within a short time.
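
    The Boltzmann plot underlying the abstract is a straight-line fit of ln(Iλ/gA) against upper-level energy E_k, with slope −1/(k_B·T). A minimal sketch on synthetic line data (the line parameters and the 11000 K temperature are illustrative assumptions, not measured values):

```python
import numpy as np

KB_EV = 8.617e-5  # Boltzmann constant, eV/K

# Hypothetical emission lines: upper-level energy E_k (eV), statistical
# weight g, transition probability A (1/s), wavelength lam (nm).
E_k = np.array([13.0, 13.3, 13.7, 14.0])
g   = np.array([5, 3, 5, 7])
A   = np.array([2.0e7, 5.0e6, 9.0e6, 3.0e7])
lam = np.array([696.5, 706.7, 738.4, 750.4])

T_true = 11000.0  # K, assumed plasma temperature used to generate intensities
I = g * A / lam * np.exp(-E_k / (KB_EV * T_true))

# Boltzmann plot: ln(I*lam/(g*A)) vs E_k is linear with slope -1/(kB*T).
y = np.log(I * lam / (g * A))
slope, intercept = np.polyfit(E_k, y, 1)
T_est = -1.0 / (KB_EV * slope)
print(round(T_est))  # recovers the assumed 11000 K
```

    The paper's point is that on a rippling power supply the intensities I must be sampled phase-resolved (within one ripple cycle) before this fit is applied, rather than time-averaged.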

  19. A Harmony Search Algorithm approach for optimizing traffic signal timings

    Directory of Open Access Journals (Sweden)

    Mauro Dell'Orco

    2013-07-01

    Full Text Available In this study, a bi-level formulation is presented for solving the Equilibrium Network Design Problem (ENDP). The optimisation of the signal timing has been carried out at the upper level using the Harmony Search Algorithm (HSA), whilst the traffic assignment has been carried out through the Path Flow Estimator (PFE) at the lower level. The results of HSA have been first compared with those obtained using the Genetic Algorithm and Hill Climbing on a two-junction network for a fixed set of link flows. Secondly, the HSA with PFE has been applied to a medium-sized network to show the applicability of the proposed algorithm in solving the ENDP. Additionally, in order to test the sensitivity to perceived travel time error, we have used the HSA with PFE with various levels of perceived travel time error. The results showed that the proposed method is quite simple and efficient in solving the ENDP.
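
    The core HSA loop (harmony memory, memory consideration, pitch adjustment, random improvisation) can be sketched in a few lines. The quadratic objective below is only a stand-in for the study's signal-setting delay function, and the parameter values (hmcr, par, memory size) are typical textbook choices, not those of the paper:

```python
import random

def harmony_search(obj, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=0):
    """Minimal Harmony Search minimizing obj over box-constrained variables."""
    rng = random.Random(seed)
    dim = len(bounds)
    mem = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            lo, hi = bounds[d]
            if rng.random() < hmcr:                    # memory consideration
                v = rng.choice(mem)[d]
                if rng.random() < par:                 # pitch adjustment
                    v += rng.uniform(-1, 1) * 0.01 * (hi - lo)
            else:                                      # random improvisation
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        worst = max(mem, key=obj)
        if obj(new) < obj(worst):                      # replace worst harmony
            mem[mem.index(worst)] = new
    return min(mem, key=obj)

# Stand-in objective (not the ENDP delay function): minimum at (2, 3).
f = lambda v: (v[0] - 2) ** 2 + (v[1] - 3) ** 2
best = harmony_search(f, [(0, 10), (0, 10)])
print([round(b) for b in best])  # typically converges near [2, 3]
```

    In the bi-level setting of the paper, evaluating `obj` would itself involve running the PFE traffic assignment at the lower level.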

  20. Interevent time distribution in seismicity: A theoretical approach

    International Nuclear Information System (INIS)

    Molchan, G.

    2004-09-01

    This paper presents an analysis of the distribution of the time τ between two consecutive events in a stationary point process. The study is motivated by the discovery of unified scaling laws for τ for the case of seismic events. We demonstrate that these laws cannot exist simultaneously in a seismogenic area. Under very natural assumptions we show that if, after rescaling to ensure Eτ = 1, the interevent time has a universal distribution F, then F must be exponential. In other words, Corral's unified scaling law cannot exist in the whole range of time. In the framework of a general cluster model we discuss the parameterization of an empirical unified law and the physical meaning of the parameters involved
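
    The exponential law singled out above is exactly what a stationary Poisson process produces: after rescaling to Eτ = 1, its interevent times have unit coefficient of variation and survival function exp(−x). A quick numerical check of this (the rate and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
# Interevent times of a stationary Poisson process (mean 5 time units).
tau = rng.exponential(scale=5.0, size=100_000)

# Rescale so that E[tau] = 1, as in the unified-scaling-law analysis.
u = tau / tau.mean()

# For an exponential F: coefficient of variation = 1 and P(u > x) = exp(-x).
print(round(float(u.std()), 2), round(float(np.mean(u > 1.0)), 2))
```

    Clustered (aftershock-rich) seismicity deviates from both checks, which is the empirical room the paper's cluster-model parameterization occupies.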

  1. Improving the Timed Automata Approach to Biological Pathway Dynamics

    NARCIS (Netherlands)

    Langerak, R.; Pol, Jaco van de; Post, Janine N.; Schivo, Stefano; Aceto, Luca; Bacci, Giorgio; Bacci, Giovanni; Ingólfsdóttir, Anna; Legay, Axel; Mardare, Radu

    2017-01-01

    Biological systems such as regulatory or gene networks can be seen as a particular type of distributed system, and for this reason they can be modeled within the Timed Automata paradigm, which was developed in the computer science context. However, tools designed to model distributed systems often

  2. Time pressure undermines performance more under avoidance than approach motivation

    NARCIS (Netherlands)

    Roskes, M.; Elliot, A.J.; Nijstad, B.A.; de Dreu, C.K.W.

    2013-01-01

    Four experiments were designed to test the hypothesis that performance is particularly undermined by time pressure when people are avoidance motivated. The results supported this hypothesis across three different types of tasks, including those well suited and those ill suited to the type of

  3. Algebraic time-dependent variational approach to dynamical calculations

    International Nuclear Information System (INIS)

    Shi, S.; Rabitz, H.

    1988-01-01

    A set of time-dependent basis states is obtained with a group of unitary transformations generated by a Lie algebra. Applying the time-dependent variational principle to the trial function subspace constructed from the linear combination of the time-dependent basis states gives rise to a set of ''classical'' equations of motion for the group parameters and the expansion coefficients from which the time evolution of the system state can be determined. The formulation is developed for a general Lie algebra as well as for the commonly encountered algebra containing homogeneous polynomial products of the coordinate Q and momentum P operators (or equivalently the boson creation a† and annihilation a operators) of order 0, 1, and 2. Explicit expressions for the transition amplitudes are derived by virtue of the canonical transformation properties of the unitary transformation. The applicability of the present formalism in a variety of problems is implied by two illustrative examples: (a) a parametric amplifier; (b) the collinear collision of an atom with a Morse oscillator

  4. Construction of time-dependent dynamical invariants: A new approach

    International Nuclear Information System (INIS)

    Bertin, M. C.; Pimentel, B. M.; Ramirez, J. A.

    2012-01-01

    We propose a new way to obtain polynomial dynamical invariants of the classical and quantum time-dependent harmonic oscillator from the equations of motion. We also establish relations between linear and quadratic invariants, and discuss how the quadratic invariant can be related to the Ermakov invariant.

  5. Time Pressure Undermines Performance More Under Avoidance Than Approach Motivation

    NARCIS (Netherlands)

    Roskes, Marieke; Elliot, Andrew J.; Nijstad, Bernard A.; De Dreu, Carsten K. W.

    Four experiments were designed to test the hypothesis that performance is particularly undermined by time pressure when people are avoidance motivated. The results supported this hypothesis across three different types of tasks, including those well suited and those ill suited to the type of

  6. Hardware Approach for Real Time Machine Stereo Vision

    Directory of Open Access Journals (Sweden)

    Michael Tornow

    2006-02-01

    Full Text Available Image processing is an effective tool for the analysis of optical sensor information for driver assistance systems and controlling of autonomous robots. Algorithms for image processing are often very complex and costly in terms of computation. In robotics and driver assistance systems, real-time processing is necessary. Signal processing algorithms must often be drastically modified so they can be implemented in the hardware. This task is especially difficult for continuous real-time processing at high speeds. This article describes a hardware-software co-design for a multi-object position sensor based on a stereophotogrammetric measuring method. In order to cover a large measuring area, an optimized algorithm based on an image pyramid is implemented in an FPGA as a parallel hardware solution for depth map calculation. Object recognition and tracking are then executed in real-time in a processor with help of software. For this task a statistical cluster method is used. Stabilization of the tracking is realized through use of a Kalman filter. Keywords: stereophotogrammetry, hardware-software co-design, FPGA, 3-d image analysis, real-time, clustering and tracking.
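
    The tracking-stabilization step mentioned above uses a Kalman filter to smooth noisy object positions under a motion model. A minimal 1-D constant-velocity filter in software (the noise levels and scalar setup are illustrative; the article's system runs the equivalent on FPGA/processor hardware):

```python
import numpy as np

def kalman_cv(zs, dt=1.0, q=1e-3, r=0.25):
    """1-D constant-velocity Kalman filter over position measurements zs."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[zs[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in zs:
        x, P = F @ x, F @ P @ F.T + Q                   # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)           # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)

rng = np.random.default_rng(0)
true_pos = 0.1 * np.arange(100)                 # object moving at 0.1 units/frame
zs = true_pos + rng.normal(0, 0.5, 100)         # noisy per-frame detections
est = kalman_cv(zs)
# The filtered track should be much closer to the truth than raw measurements.
print(np.abs(est - true_pos)[-50:].mean() < np.abs(zs - true_pos)[-50:].mean())
```

    In the stereo system, the state would hold 3-D position (from the depth map) and velocity, but the predict/update cycle is identical.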

  7. Theoretical information measurement in nonrelativistic time-dependent approach

    Science.gov (United States)

    Najafizade, S. A.; Hassanabadi, H.; Zarrinkamar, S.

    2018-02-01

    The information-theoretic measures of the time-dependent Schrödinger equation are investigated via the Shannon information entropy, variance and local Fisher quantities. In our calculations, we consider the first two states n = 0,1 and obtain the position S_x(t) and momentum S_p(t) Shannon entropies as well as the Fisher information I_x(t) and I_p(t) in position and momentum spaces. Using the Fourier-transformed wave function, we obtain the results in momentum space. Some interesting features of the information entropy densities ρ_s(x,t) and γ_s(p,t), as well as the probability densities ρ(x,t) and γ(p,t) for time-dependent states are demonstrated. We establish a general relation between variance and Fisher information. The Bialynicki-Birula-Mycielski inequality is tested and verified for the states n = 0,1.
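
    For a Gaussian ground state the Bialynicki-Birula-Mycielski bound S_x + S_p ≥ 1 + ln π is saturated, which makes it a convenient numerical check of the entropy definitions above (units with ħ = m = ω = 1 are assumed):

```python
import numpy as np

# Harmonic-oscillator ground state density (hbar = m = omega = 1):
# rho(x) = |psi_0(x)|^2 = pi^{-1/2} exp(-x^2).
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
rho = np.exp(-x**2) / np.sqrt(np.pi)

# Shannon entropy S_x = -integral of rho*ln(rho), via a Riemann sum.
S_x = -np.sum(rho * np.log(rho)) * dx
S_p = S_x  # the momentum density of this Gaussian state has the same form

# BBM bound: S_x + S_p >= 1 + ln(pi), with equality for the Gaussian state.
print(round(S_x + S_p, 4), round(1 + np.log(np.pi), 4))
```

    For excited or time-dependent states the sum exceeds the bound, which is the behaviour the paper verifies for n = 0,1.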

  8. Multi-Robot Motion Planning: A Timed Automata Approach

    OpenAIRE

    Quottrup, Michael Melholt; Bak, Thomas; Izadi-Zamanabadi, Roozbeh

    2004-01-01

    This paper describes how a network of interacting timed automata can be used to model, analyze, and verify motion planning problems in a scenario with multiple robotic vehicles. The method presupposes an infrastructure of robots with feedback controllers obeying simple restrictions on a planar grid. The automata formalism merely presents a high-level model of environment, robots and control, but allows composition and formal symbolic reasoning about coordinated solutions. Composition is achi...

  9. Arbitrage, market definition and monitoring a time series approach

    OpenAIRE

    Burke, S; Hunter, J

    2012-01-01

    This article considers the application to regional price data of time series methods to test stationarity, multivariate cointegration and exogeneity. The discovery of stationary price differentials in a bivariate setting implies that the series are rendered stationary by capturing a common trend and we observe through this mechanism long-run arbitrage. This is indicative of a broader market definition and efficiency. The problem is considered in relation to more than 700 weekly data points on...

  10. Analytical Approach to Space- and Time-Fractional Burgers Equations

    International Nuclear Information System (INIS)

    Yıldırım, Ahmet; Mohyud-Din, Syed Tauseef

    2010-01-01

    A scheme is developed to study numerical solution of the space- and time-fractional Burgers equations under initial conditions by the homotopy analysis method. The fractional derivatives are considered in the Caputo sense. The solutions are given in the form of series with easily computable terms. Numerical solutions are calculated for the fractional Burgers equation to show the nature of solution as the fractional derivative parameter is changed

  11. The algebraic approach to space-time geometry

    International Nuclear Information System (INIS)

    Heller, M.; Multarzynski, P.; Sasin, W.

    1989-01-01

    A differential manifold can be defined in terms of smooth real functions carried by it. By rejecting the postulate, in such a definition, demanding the local diffeomorphism of a manifold to the Euclidean space, one obtains the so-called differential space concept. Every subset of R^n turns out to be a differential space. Extensive parts of differential geometry on differential spaces, developed by Sikorski, are reviewed and adapted to relativistic purposes. Differential space as a new model of space-time is proposed. The Lorentz structure and Einstein's field equations on differential spaces are discussed. 20 refs. (author)

  12. Algorithmic Approach to Abstracting Linear Systems by Timed Automata

    DEFF Research Database (Denmark)

    Sloth, Christoffer; Wisniewski, Rafael

    2011-01-01

    This paper proposes an LMI-based algorithm for abstracting dynamical systems by timed automata, which enables automatic formal verification of linear systems. The proposed abstraction is based on partitioning the state space of the system using positive invariant sets, generated by Lyapunov functions. This partitioning ensures that the vector field of the dynamical system is transversal to all facets of the cells, which induces some desirable properties of the abstraction. The algorithm is based on identifying intersections of level sets of quadratic Lyapunov functions, and determining

  13. Competing approaches to analysis of failure times with competing risks.

    Science.gov (United States)

    Farley, T M; Ali, M M; Slaymaker, E

    2001-12-15

    For the analysis of time to event data in contraceptive studies when individuals are subject to competing causes for discontinuation, some authors have recently advocated the use of the cumulative incidence rate as a more appropriate measure to summarize data than the complement of the Kaplan-Meier estimate of discontinuation. The former method estimates the rate of discontinuation in the presence of competing causes, while the latter is a hypothetical rate that would be observed if discontinuations for the other reasons could not occur. The difference between the two methods of analysis is the continuous time equivalent of a debate that took place in the contraceptive literature in the 1960s, when several authors advocated the use of net (adjusted or single decrement life table rates) rates in preference to crude rates (multiple decrement life table rates). A small simulation study illustrates the interpretation of the two types of estimate - the complement of the Kaplan-Meier estimate corresponds to a hypothetical rate where discontinuations for other reasons did not occur, while the cumulative incidence gives systematically lower estimates. The Kaplan-Meier estimates are more appropriate when estimating the effectiveness of a contraceptive method, but the cumulative incidence estimates are more appropriate when making programmatic decisions regarding contraceptive methods. Other areas of application, such as cancer studies, may prefer to use the cumulative incidence estimates, but their use should be determined according to the application. Copyright 2001 John Wiley & Sons, Ltd.
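
    The two estimators contrasted above can be computed side by side on a toy discontinuation dataset. A sketch (the data are invented; the cumulative incidence uses the standard Aalen-Johansen form, and the "net" rate is the complement of the cause-specific Kaplan-Meier):

```python
import numpy as np

# Toy data: discontinuation time (months) and cause
# (1 = cause of interest, 2 = competing cause, 0 = censored).
times  = np.array([1, 2, 2, 3, 3, 4, 5, 5, 6, 6])
causes = np.array([1, 2, 1, 2, 1, 0, 1, 2, 0, 1])

def km_complement(times, causes, cause, horizon):
    """1 - Kaplan-Meier for one cause, treating other causes as censoring."""
    s = 1.0
    for t in sorted(set(times[times <= horizon])):
        at_risk = np.sum(times >= t)
        d = np.sum((times == t) & (causes == cause))
        s *= 1 - d / at_risk
    return 1 - s

def cumulative_incidence(times, causes, cause, horizon):
    """Aalen-Johansen estimate: overall-survival-weighted cause-specific hazards."""
    s, ci = 1.0, 0.0
    for t in sorted(set(times[times <= horizon])):
        at_risk = np.sum(times >= t)
        d_cause = np.sum((times == t) & (causes == cause))
        d_any = np.sum((times == t) & (causes != 0))
        ci += s * d_cause / at_risk   # weight by survival just before t
        s *= 1 - d_any / at_risk
    return ci

net = km_complement(times, causes, 1, 6)
crude = cumulative_incidence(times, causes, 1, 6)
# The cumulative incidence is systematically lower than 1 - KM.
print(round(net, 3), round(crude, 3), crude < net)
```

    The gap between the two numbers is exactly the debate described in the abstract: the hypothetical single-decrement rate versus the rate actually observed in the presence of competing causes.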

  14. Evaluation of a deep learning approach for the segmentation of brain tissues and white matter hyperintensities of presumed vascular origin in MRI.

    Science.gov (United States)

    Moeskops, Pim; de Bresser, Jeroen; Kuijf, Hugo J; Mendrik, Adriënne M; Biessels, Geert Jan; Pluim, Josien P W; Išgum, Ivana

    2018-01-01

    Automatic segmentation of brain tissues and white matter hyperintensities of presumed vascular origin (WMH) in MRI of older patients is widely described in the literature. Although brain abnormalities and motion artefacts are common in this age group, most segmentation methods are not evaluated in a setting that includes these items. In the present study, our tissue segmentation method for brain MRI was extended and evaluated for additional WMH segmentation. Furthermore, our method was evaluated in two large cohorts with a realistic variation in brain abnormalities and motion artefacts. The method uses a multi-scale convolutional neural network with a T1-weighted image, a T2-weighted fluid attenuated inversion recovery (FLAIR) image and a T1-weighted inversion recovery (IR) image as input. The method automatically segments white matter (WM), cortical grey matter (cGM), basal ganglia and thalami (BGT), cerebellum (CB), brain stem (BS), lateral ventricular cerebrospinal fluid (lvCSF), peripheral cerebrospinal fluid (pCSF), and WMH. Our method was evaluated quantitatively with images publicly available from the MRBrainS13 challenge (n = 20), quantitatively and qualitatively in relatively healthy older subjects (n = 96), and qualitatively in patients from a memory clinic (n = 110). The method can accurately segment WMH (Overall Dice coefficient in the MRBrainS13 data of 0.67) without compromising performance for tissue segmentations (Overall Dice coefficients in the MRBrainS13 data of 0.87 for WM, 0.85 for cGM, 0.82 for BGT, 0.93 for CB, 0.92 for BS, 0.93 for lvCSF, 0.76 for pCSF). Furthermore, the automatic WMH volumes showed a high correlation with manual WMH volumes (Spearman's ρ = 0.83 for relatively healthy older subjects). In both cohorts, our method produced reliable segmentations (as determined by a human observer) in most images (relatively healthy/memory clinic: tissues 88%/77% reliable, WMH 85%/84% reliable) despite various degrees of
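
    The Dice coefficients quoted above measure voxel overlap between an automatic and a reference segmentation. A minimal sketch on two toy binary masks (the 8×8 masks are invented for illustration):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical 2-D masks: reference vs automatic segmentation.
ref = np.zeros((8, 8), dtype=int)
ref[2:6, 2:6] = 1                 # 16 reference voxels
auto = np.zeros((8, 8), dtype=int)
auto[3:7, 2:6] = 1                # shifted one row: 12 voxels overlap
print(dice(ref, auto))            # 2*12 / (16+16) = 0.75
```

    In the study this is computed per class (WM, cGM, WMH, ...) over 3-D volumes, but the formula is unchanged.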

  15. Equivalence between the real-time Feynman histories and the quantum-shutter approaches for the 'passage time' in tunneling

    International Nuclear Information System (INIS)

    Garcia-Calderon, Gaston; Villavicencio, Jorge; Yamada, Norifumi

    2003-01-01

    We show the equivalence of the functions G_p(t) and |Ψ(d,t)|² for the 'passage time' in tunneling. The former, obtained within the framework of the real-time Feynman histories approach to the tunneling time problem, uses Gell-Mann and Hartle's decoherence functional, and the latter involves an exact analytical solution to the time-dependent Schrödinger equation for cutoff initial waves

  16. Video segmentation using keywords

    Science.gov (United States)

    Ton-That, Vinh; Vong, Chi-Tai; Nguyen-Dao, Xuan-Truong; Tran, Minh-Triet

    2018-04-01

    At the DAVIS-2016 Challenge, many state-of-the-art video segmentation methods achieve promising results, but they still depend heavily on annotated frames to distinguish between background and foreground. Creating these frames accurately takes considerable time and effort. In this paper, we introduce a method to segment objects from video based on keywords given by the user. First, we use a real-time object detection system, YOLOv2, to identify regions containing objects whose labels match the given keywords in the first frame. Then, for each region identified in the previous step, we use Pyramid Scene Parsing Network to assign each pixel as foreground or background. These frames can be used as input frames for the Object Flow algorithm to perform segmentation on the entire video. We conduct experiments on a subset of the DAVIS-2016 dataset at half its original size, which shows that our method can handle many popular classes in the PASCAL VOC 2012 dataset with acceptable accuracy, about 75.03%. We suggest widely testing by combining other methods to improve this result in the future.
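
    The first step, keeping only detector outputs whose labels match the user's keywords, can be sketched as follows (the detection tuples and the confidence threshold are invented for illustration and are not YOLOv2's actual output format):

```python
# Hypothetical detections from an object detector: (label, confidence, box),
# where box = (x_min, y_min, x_max, y_max).
detections = [
    ("dog", 0.91, (34, 40, 120, 160)),
    ("person", 0.88, (200, 10, 260, 180)),
    ("car", 0.45, (5, 5, 60, 50)),
]

def select_regions(detections, keywords, min_conf=0.5):
    """Keep boxes whose label matches a user keyword, as seed regions."""
    kw = {k.lower() for k in keywords}
    return [box for label, conf, box in detections
            if label.lower() in kw and conf >= min_conf]

print(select_regions(detections, ["Dog", "car"]))  # "car" dropped: low confidence
```

    Each surviving box would then be passed to the per-pixel foreground/background classifier, and the resulting mask seeds the Object Flow propagation over the rest of the video.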

  17. Modern linear control design a time-domain approach

    CERN Document Server

    Caravani, Paolo

    2013-01-01

    This book offers a compact introduction to modern linear control design.  The simplified overview presented of linear time-domain methodology paves the road for the study of more advanced non-linear techniques. Only rudimentary knowledge of linear systems theory is assumed - no use of Laplace transforms or frequency design tools is required. Emphasis is placed on assumptions and logical implications, rather than abstract completeness; on interpretation and physical meaning, rather than theoretical formalism; on results and solutions, rather than derivation or solvability.  The topics covered include transient performance and stabilization via state or output feedback; disturbance attenuation and robust control; regional eigenvalue assignment and constraints on input or output variables; asymptotic regulation and disturbance rejection. Lyapunov theory and Linear Matrix Inequalities (LMI) are discussed as key design methods. All methods are demonstrated with MATLAB to promote practical use and comprehension. ...

  18. Space and Time as Relations: The Theoretical Approach of Leibniz

    Directory of Open Access Journals (Sweden)

    Basil Evangelidis

    2018-04-01

    Full Text Available The epistemological rupture of Copernicus, the laws of planetary motions of Kepler, the comprehensive physical observations of Galileo and Huygens, the conception of relativity, and the physical theory of Newton were components of an extremely fertile and influential cognitive environment that prompted the restless Leibniz to shape an innovative theory of space and time. This theory expressed some of the concerns and intuitions of the scientific community of the seventeenth century, in particular the scientific group of the Academy of Sciences of Paris, but remained relatively unknown until the twentieth century. After Einstein, however, the relational theory of Leibniz gained wider respect and fame. The aim of this article is to explain how Leibniz foresaw relativity, through his critique of contemporary mechanistic philosophy.

  19. Advanced Time Approach of FW-H Equations for Predicting Noise

    DEFF Research Database (Denmark)

    Haiqing, Si; Yan, Shi; Shen, Wen Zhong

    2013-01-01

    An advanced time approach of the Ffowcs Williams-Hawkings (FW-H) acoustic analogy is developed, and the integral equations and integral solution of the FW-H acoustic analogy are derived. Compared with the retarded time approach, the transcendental equation need not be solved in the advanced time

  20. Time-resolved biophysical approaches to nucleocytoplasmic transport

    Directory of Open Access Journals (Sweden)

    Francesco Cardarelli

    Full Text Available Molecules are continuously shuttling across the nuclear envelope barrier that separates the nucleus from the cytoplasm. Instead of being just a barrier to diffusion, the nuclear envelope is rather a complex filter that provides eukaryotes with an elaborate spatiotemporal regulation of fundamental molecular processes, such as gene expression and protein translation. Given the highly dynamic nature of nucleocytoplasmic transport, during the past few decades large efforts were devoted to the development and application of time resolved, fluorescence-based, biophysical methods to capture the details of molecular motion across the nuclear envelope. These methods are here divided into three major classes, according to the differences in the way they report on the molecular process of nucleocytoplasmic transport. In detail, the first class encompasses those methods based on the perturbation of the fluorescence signal, also known as ensemble-averaging methods, which average the behavior of many molecules (across many pores. The second class comprises those methods based on the localization of single fluorescently-labelled molecules and tracking of their position in space and time, potentially across single pores. Finally, the third class encompasses methods based on the statistical analysis of spontaneous fluorescence fluctuations out of the equilibrium or stationary state of the system. In this case, the behavior of single molecules is probed in presence of many similarly-labelled molecules, without dwelling on any of them. Here these three classes, with their respective pros and cons as well as their main applications to nucleocytoplasmic shuttling will be briefly reviewed and discussed. Keywords: Fluorescence recovery after photobleaching, Single particle tracking, Fluorescence correlation spectroscopy, Diffusion, Transport, GFP, Nuclear pore complex, Live cell, Confocal microscopy

  1. Algebraic approach to time-delay data analysis for LISA

    International Nuclear Information System (INIS)

    Dhurandhar, S.V.; Nayak, K. Rajesh; Vinet, J.-Y.

    2002-01-01

    Cancellation of laser frequency noise in interferometers is crucial for attaining the requisite sensitivity of the triangular three-spacecraft LISA configuration. Raw laser noise is several orders of magnitude above the other noises and thus it is essential to bring it down to the level of other noises such as shot, acceleration, etc. Since it is impossible to maintain equal distances between spacecraft, laser noise cancellation must be achieved by appropriately combining the six beams with appropriate time delays. It has been shown in several recent papers that such combinations are possible. In this paper, we present a rigorous and systematic formalism based on algebraic geometrical methods involving computational commutative algebra, which generates in principle all the data combinations canceling the laser frequency noise. The relevant data combinations form the first module of syzygies, as it is called in the literature of algebraic geometry. The module is over a polynomial ring in three variables, the three variables corresponding to the three time delays around the LISA triangle. Specifically, we list several sets of generators for the module whose linear combinations with polynomial coefficients generate the entire module. We find that this formalism can also be extended in a straightforward way to cancel Doppler shifts due to optical bench motions. The two modules are in fact isomorphic. We use our formalism to obtain the transfer functions for the six beams and for the generators. We specifically investigate monochromatic gravitational wave sources in the LISA band and carry out the maximization over linear combinations of the generators of the signal-to-noise ratios with the frequency and source direction angles as parameters.

  2. An automated image analysis framework for segmentation and division plane detection of single live Staphylococcus aureus cells which can operate at millisecond sampling time scales using bespoke Slimfield microscopy

    Science.gov (United States)

    Wollman, Adam J. M.; Miller, Helen; Foster, Simon; Leake, Mark C.

    2016-10-01

    Staphylococcus aureus is an important pathogen, giving rise to antimicrobial resistance in cell strains such as Methicillin Resistant S. aureus (MRSA). Here we report an image analysis framework for automated detection and image segmentation of cells in S. aureus cell clusters, and explicit identification of their cell division planes. We use a new combination of several existing analytical tools of image analysis to detect cellular and subcellular morphological features relevant to cell division from millisecond time scale sampled images of live pathogens at a detection precision of single molecules. We demonstrate this approach using a fluorescent reporter GFP fused to the protein EzrA that localises to a mid-cell plane during division and is involved in regulation of cell size and division. This image analysis framework presents a valuable platform from which to study candidate new antimicrobials which target the cell division machinery, but may also have more general application in detecting morphologically complex structures of fluorescently labelled proteins present in clusters of other types of cells.

  3. Differential impact of two risk communications on antipsychotic prescribing to people with dementia in Scotland: segmented regression time series analysis 2001-2011.

    Science.gov (United States)

    Guthrie, Bruce; Clark, Stella A; Reynish, Emma L; McCowan, Colin; Morales, Daniel R

    2013-01-01

    Regulatory risk communications are an important method for disseminating drug safety information, but their impact varies. Two significant UK risk communications about antipsychotic use in older people with dementia were issued in 2004 and 2009. These varied considerably in their content and dissemination, allowing examination of their differential impact. Segmented regression time-series analysis 2001-2011 for people aged ≥65 years with dementia in 87 Scottish general practices, examining the impact of two pre-specified risk communications in 2004 and 2009 on antipsychotic and other psychotropic prescribing. The percentage of people with dementia prescribed an antipsychotic was 15.9% in quarter 1 2001 and was rising by an estimated 0.6%/quarter before the 2004 risk communication. The 2004 risk communication was sent directly to all prescribers, and specifically recommended review of all patients prescribed relevant drugs. It was associated with an immediate absolute reduction in antipsychotic prescribing of 5.9% (95% CI -6.6 to -5.2) and a change to a stable level of prescribing subsequently. The 2009 risk communication was disseminated in a limited circulation bulletin, and only specifically recommended avoiding initiation if possible. There was no immediate associated impact, but it was associated with a significant decline in prescribing subsequently which appeared driven by a decline in initiation, with the percentage prescribed an antipsychotic falling from 18.4% in Q1 2009 to 13.5% in Q1 2011. There was no widespread substitution of antipsychotics with other psychotropic drugs. The two risk communications were associated with reductions in antipsychotic use, in ways which were compatible with marked differences in their content and dissemination. Further research is needed to ensure that the content and dissemination of regulatory risk communications is optimal, and to track their impact on intended and unintended outcomes.
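    The core of a segmented regression analysis is estimating the immediate level change at an intervention point. A minimal sketch, using two independent OLS fits rather than the authors' exact single-model specification, on synthetic data shaped like the 2004 findings (rising 0.6%/quarter, then an abrupt -5.9% step):

```python
# Illustrative segmented-regression sketch: fit separate OLS lines before
# and after an intervention and report the level change at the breakpoint.
def ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx            # slope, intercept

def level_change(t, y, t_break):
    pre = [(ti, yi) for ti, yi in zip(t, y) if ti < t_break]
    post = [(ti, yi) for ti, yi in zip(t, y) if ti >= t_break]
    s1, b1 = ols(*zip(*pre))
    s2, b2 = ols(*zip(*post))
    # Immediate step: post-segment fit minus pre-segment fit at the break.
    return (s2 * t_break + b2) - (s1 * t_break + b1)

t = list(range(24))
y = [15.9 + 0.6 * ti if ti < 12 else 15.9 + 0.6 * ti - 5.9 for ti in t]
drop = level_change(t, y, 12)                # recovers the -5.9 step
```

A full interrupted time-series model would also test the post-intervention slope change and adjust for autocorrelation, which this sketch omits.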

  6. Diagnostic value of ST-segment deviations during cardiac exercise stress testing: Systematic comparison of different ECG leads and time-points.

    Science.gov (United States)

    Puelacher, Christian; Wagener, Max; Abächerli, Roger; Honegger, Ursina; Lhasam, Nundsin; Schaerli, Nicolas; Prêtre, Gil; Strebel, Ivo; Twerenbold, Raphael; Boeddinghaus, Jasper; Nestelberger, Thomas; Rubini Giménez, Maria; Hillinger, Petra; Wildi, Karin; Sabti, Zaid; Badertscher, Patrick; Cupa, Janosch; Kozhuharov, Nikola; du Fay de Lavallaz, Jeanne; Freese, Michael; Roux, Isabelle; Lohrmann, Jens; Leber, Remo; Osswald, Stefan; Wild, Damian; Zellweger, Michael J; Mueller, Christian; Reichlin, Tobias

    2017-07-01

    Exercise ECG stress testing is the most widely available method for evaluation of patients with suspected myocardial ischemia. Its major limitation is the relatively poor accuracy of ST-segment changes regarding ischemia detection. Little is known about the optimal method to assess ST-deviations. A total of 1558 consecutive patients undergoing bicycle exercise stress myocardial perfusion imaging (MPI) were enrolled. Presence of inducible myocardial ischemia was adjudicated using MPI results. The diagnostic value of ST-deviations for detection of exercise-induced myocardial ischemia was systematically analyzed 1) for each individual lead, 2) at three different intervals after the J-point (J+40ms, J+60ms, J+80ms), and 3) at different time points during the test (baseline, maximal workload, 2min into recovery). Exercise-induced ischemia was detected in 481 (31%) patients. The diagnostic accuracy of ST-deviations was highest at +80ms after the J-point, and at 2min into recovery. At this point, ST-amplitude showed an AUC of 0.63 (95% CI 0.59-0.66) for the best-performing lead I. The combination of ST-amplitude and ST-slope in lead I did not increase the AUC. Lead I reached a sensitivity of 37% and a specificity of 83%, with similar sensitivity to manual ECG analysis (34%, p=0.31) but lower specificity (90%). In conclusion, the diagnostic accuracy of ST-deviations is highest when evaluated at +80ms after the J-point, and at 2min into recovery. Copyright © 2017 Elsevier B.V. All rights reserved.
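    Reading an ST amplitude "at J+80ms" reduces to converting a millisecond offset into a sample offset at the recorder's sampling rate. A hypothetical helper (the function name and the assumption that amplitudes are already baseline-corrected are mine, not the paper's):

```python
# Hypothetical sketch: sample the ST amplitude of one ECG lead at a fixed
# offset after the J-point, given the sampling rate in Hz. Assumes the
# signal is already referenced to the isoelectric baseline (0 mV).
def st_amplitude(samples_mv, j_index, offset_ms, fs_hz):
    idx = j_index + round(offset_ms * fs_hz / 1000)   # ms -> sample offset
    if idx >= len(samples_mv):
        raise ValueError("offset falls outside the recording")
    return samples_mv[idx]

# At 500 Hz, J+80 ms is 40 samples after the J-point.
ecg = [0.0] * 100 + [0.1] * 100   # toy trace: baseline, then elevated plateau (mV)
amp = st_amplitude(ecg, j_index=90, offset_ms=80, fs_hz=500)
```

Real ST measurement also requires locating the J-point and the PQ baseline per beat, which this sketch takes as given.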

  7. Pre-hospital ticagrelor in patients with ST-segment elevation myocardial infarction with long transport time to primary PCI facility.

    Science.gov (United States)

    Lupi, Alessandro; Schaffer, Alon; Lazzero, Maurizio; Tessitori, Massimo; De Martino, Leonardo; Rognoni, Andrea; Bongo, Angelo S; Porto, Italo

    2016-12-01

    Pre-hospital ticagrelor, given less than 1h before percutaneous coronary intervention (PCI), failed to improve coronary reperfusion in ST-segment elevation myocardial infarction (STEMI) patients undergoing primary PCI. It is unknown whether a longer interval from ticagrelor administration to primary PCI might reveal an improvement in coronary reperfusion. We retrospectively compared 143 patients, pre-treated in spoke centers or in the ambulance with ticagrelor at least 1.5h before PCI (Pre-treatment Group), with 143 propensity score-matched controls treated with ticagrelor in the hub center before primary PCI (Control Group), extracted from RENOVAMI, a large observational Italian registry of more than 1400 STEMI patients enrolled from Jan. 2012 to Oct. 2015 (ClinicalTrials.gov id: NCT01347580). The median time from ticagrelor administration to PCI was 2.08h (95% CI 1.66-2.84) in the Pre-treatment Group and 0.56h (95% CI 0.33-0.76) in the Control Group. TIMI flow grade before primary PCI in the infarct related artery was the primary endpoint. The primary endpoint, baseline TIMI flow grade, was significantly higher in the Pre-treatment Group (0.88±1.14 vs 0.53±0.86, P=0.02). However, in-hospital mortality, in-hospital stent thrombosis, bleeding rates and other clinical and angiographic outcomes were similar in the two groups. In a real-world STEMI network, pre-treatment with ticagrelor in spoke hospitals or in the ambulance at least 1.5h before primary PCI is safe and might improve pre-PCI coronary reperfusion, in comparison with ticagrelor administration immediately before PCI. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Prognostic impact of alkaline phosphatase measured at time of presentation in patients undergoing primary percutaneous coronary intervention for ST-segment elevation myocardial infarction.

    Directory of Open Access Journals (Sweden)

    Pyung Chun Oh

    Full Text Available Serum alkaline phosphatase (ALP) has been shown to be a prognostic factor in several subgroups of patients due to its promotion of vascular calcification. However, the prognostic impact of serum ALP level in ST-segment elevation myocardial infarction (STEMI) patients with a relatively low calcification burden has not been determined. We aimed to investigate the association of ALP level measured at time of presentation with clinical outcomes in patients with STEMI requiring primary percutaneous coronary intervention (PCI). A total of 1178 patients with STEMI undergoing primary PCI between 2007 and 2014 were retrospectively enrolled from the INTERSTELLAR registry and classified into tertiles by ALP level (upper tertile >83 IU/L). The primary study outcome was a major adverse cardiac or cerebrovascular event (MACCE), defined as the composite of all-cause death, non-fatal myocardial infarction, non-fatal stroke, and ischemia-driven revascularization. Median follow-up duration was 25 months (interquartile range, 10-39 months). The incidence of MACCE increased significantly with ALP level: across the three ascending ALP tertiles, incidences were 8.7%, 11.7%, and 15.7%, respectively (p for trend = 0.003). After adjustment for potential confounders, the adjusted hazard ratios for MACCE in the middle and highest tertiles were 1.69 (95% CI 1.01-2.81) and 2.46 (95% CI 1.48-4.09), respectively, as compared with the lowest ALP tertile. Elevated ALP level at presentation, even within the upper limit of normal, was found to be independently associated with higher risk of MACCE after primary PCI in patients with STEMI.

  9. A Simple Approach for Monitoring Business Service Time Variation

    Directory of Open Access Journals (Sweden)

    Su-Fen Yang

    2014-01-01

    Full Text Available Control charts are effective tools for signal detection in both manufacturing processes and service processes. Much of the data in service industries comes from processes having nonnormal or unknown distributions. The commonly used Shewhart variable control charts, which depend heavily on the normality assumption, are not appropriate here. In this paper, we propose a new asymmetric EWMA variance chart (EWMA-AV chart) and an asymmetric EWMA mean chart (EWMA-AM chart) based on two simple statistics to monitor process variance and mean shifts simultaneously. Further, we explore the sampling properties of the new monitoring statistics and calculate the average run lengths when using both the EWMA-AV chart and the EWMA-AM chart. The performance of the EWMA-AV and EWMA-AM charts and that of some existing variance and mean charts are compared. A numerical example involving nonnormal service times from the service system of a bank branch in Taiwan is used to illustrate the applications of the EWMA-AV and EWMA-AM charts and to compare them with the existing variance (or standard deviation) and mean charts. The proposed EWMA-AV and EWMA-AM charts show superior detection performance compared to the existing variance and mean charts. The EWMA-AV chart and EWMA-AM chart are thus recommended.
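    All EWMA-type charts share the same exponentially weighted recursion; the paper's contribution lies in the asymmetric statistics fed into it. A sketch of the generic recursion only (not the authors' EWMA-AV/EWMA-AM statistics):

```python
# Generic EWMA monitoring recursion, the textbook form underlying
# EWMA-type control charts: z_t = lam*x_t + (1-lam)*z_{t-1}.
def ewma(xs, lam=0.2, z0=0.0):
    z, out = z0, []
    for x in xs:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

obs = [0.0, 0.0, 0.0, 5.0, 5.0, 5.0]   # in-control, then a sustained shift
zs = ewma(obs, lam=0.5)
# zs climbs smoothly toward the shift: 0, 0, 0, 2.5, 3.75, 4.375
```

A chart signals when z_t crosses control limits derived from the in-control distribution of the monitored statistic; the limit calculation is chart-specific and omitted here.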

  11. Wavelet transform approach for fitting financial time series data

    Science.gov (United States)

    Ahmed, Amel Abdoullah; Ismail, Mohd Tahir

    2015-10-01

    This study investigates a newly developed technique, combined wavelet filtering and a VEC model, to study the dynamic relationship among financial time series. A wavelet filter is used to remove noise from the daily data of the NASDAQ stock market of the US and of three stock markets of the Middle East and North Africa (MENA) region, namely Egypt, Jordan, and Istanbul. The data cover 6/29/2001 to 5/5/2009. The returns of the wavelet-filtered series and of the original series are then analyzed with a cointegration test and a VEC model. The cointegration test affirms the existence of cointegration between the studied series: there is a long-term relationship between the US stock market and the MENA stock markets. A comparison between the proposed and traditional models demonstrates that the proposed model (DWT with VEC model) outperforms the traditional model (VEC model alone) in fitting the financial stock market series, and reveals more about the relationships among the stock markets.
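    The study's filtering step is a discrete wavelet transform with thresholding of detail coefficients. A minimal one-level Haar sketch of that idea (the authors' wavelet family, decomposition depth, and threshold rule may differ):

```python
# One-level Haar DWT denoising sketch: split into pairwise averages
# (approximation) and differences (detail), hard-threshold the details,
# then invert. Assumes an even-length series.
def haar_level1(x):
    approx = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def inverse_haar(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def denoise(x, threshold):
    approx, detail = haar_level1(x)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]  # hard threshold
    return inverse_haar(approx, detail)

# Small wiggles are flattened while the large spike survives.
smooth = denoise([10.0, 10.2, 10.1, 9.9, 50.0, 10.0, 10.1, 9.9], threshold=1.0)
```

In practice one would use a dedicated library (e.g. PyWavelets) with a smoother wavelet such as a Daubechies family member; Haar is used here only for transparency.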

  12. Rigorous time slicing approach to Feynman path integrals

    CERN Document Server

    Fujiwara, Daisuke

    2017-01-01

    This book proves that Feynman's original definition of the path integral actually converges to the fundamental solution of the Schrödinger equation at least in the short term if the potential is differentiable sufficiently many times and its derivatives of order equal to or higher than two are bounded. The semi-classical asymptotic formula up to the second term of the fundamental solution is also proved by a method different from that of Birkhoff. A bound of the remainder term is also proved. The Feynman path integral is a method of quantization using the Lagrangian function, whereas Schrödinger's quantization uses the Hamiltonian function. These two methods are believed to be equivalent. But equivalence is not fully proved mathematically, because, compared with Schrödinger's method, there is still much to be done concerning rigorous mathematical treatment of Feynman's method. Feynman himself defined a path integral as the limit of a sequence of integrals over finite-dimensional spaces which is obtained by...

  13. Optimally segmented permanent magnet structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bjørk, Rasmus; Smith, Anders

    2016-01-01

    We present an optimization approach which can be employed to calculate the globally optimal segmentation of a two-dimensional magnetic system into uniformly magnetized pieces. For each segment the algorithm calculates the optimal shape and the optimal direction of the remanent flux density vector, with respect to a linear objective functional. We illustrate the approach with results for magnet design problems from different areas, such as a permanent magnet electric motor, a beam focusing quadrupole magnet for particle accelerators and a rotary device for magnetic refrigeration.

  14. Performance Analysis of Segmentation of Hyperspectral Images Based on Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Praveen Agarwal

    2017-06-01

    Full Text Available Image segmentation is a fundamental task in the field of image processing, and the appropriate method depends on the user's application. This paper proposes an original and simple segmentation strategy, based on the EM approach, that resolves many of the practical problems posed by hyperspectral images observed by airborne sensors. In a first step, the input color textured image is simplified into a color image without texture. The final segmentation is then achieved by a spatial color segmentation, using a feature vector built from the set of color values contained around the pixel to be classified. The spatial constraint allows taking into account the inherent spatial relationships of any image and its color. This approach provides an effective PSNR for the segmented image. The segmented images compare favorably with those of the Watershed and Region Growing algorithms, and the method provides effective segmentation for spectral and medical images.
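    PSNR, the quality figure used above, is a standard function of the mean squared error against a reference image. A minimal sketch for 8-bit pixel values:

```python
import math

# Peak signal-to-noise ratio between a reference image and a processed
# one, for 8-bit pixels (peak value 255). Images given as flat lists.
def psnr(ref, test, peak=255.0):
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")                    # identical images
    return 10 * math.log10(peak**2 / mse)      # in decibels

ref  = [10, 20, 30, 40]
test = [11, 19, 31, 39]   # every pixel off by 1 -> MSE = 1 -> 20*log10(255) dB
```

Higher PSNR means the segmented/processed image is closer to the reference; values above roughly 30 dB are commonly treated as good for 8-bit imagery.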

  15. A correlative approach to segmenting phases and ferrite morphologies in transformation-induced plasticity steel using electron back-scattering diffraction and energy dispersive X-ray spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Gazder, Azdiar A., E-mail: azdiar@uow.edu.au [Electron Microscopy Centre, University of Wollongong, New South Wales 2500 (Australia); Al-Harbi, Fayez; Spanke, Hendrik Th. [School of Mechanical, Materials and Mechatronic Engineering, University of Wollongong, New South Wales 2522 (Australia); Mitchell, David R.G. [Electron Microscopy Centre, University of Wollongong, New South Wales 2500 (Australia); Pereloma, Elena V. [Electron Microscopy Centre, University of Wollongong, New South Wales 2500 (Australia); School of Mechanical, Materials and Mechatronic Engineering, University of Wollongong, New South Wales 2522 (Australia)

    2014-12-15

    Using a combination of electron back-scattering diffraction and energy dispersive X-ray spectroscopy data, a segmentation procedure was developed to comprehensively distinguish austenite, martensite, polygonal ferrite, ferrite in granular bainite and bainitic ferrite laths in a thermo-mechanically processed low-Si, high-Al transformation-induced plasticity steel. The efficacy of the ferrite morphologies segmentation procedure was verified by transmission electron microscopy. The variation in carbon content between the ferrite in granular bainite and bainitic ferrite laths was explained on the basis of carbon partitioning during their growth. - Highlights: • Multi-condition segmentation of austenite, martensite, polygonal ferrite and ferrite in bainite. • Ferrites in granular bainite and bainitic ferrite segmented by variation in relative carbon counts. • Carbon partitioning during growth explains variation in carbon content of ferrites in bainites. • Developed EBSD image processing tools can be applied to the microstructures of a variety of alloys. • EBSD-based segmentation procedure verified by correlative TEM results.

  16. A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac MRI.

    Science.gov (United States)

    Avendi, M R; Kheradvar, Arash; Jafarkhani, Hamid

    2016-05-01

    Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground truth data. Convolutional networks are employed to automatically detect the LV chamber in the MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the-art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
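    The Dice metric reported by this record (and by the prostate segmentation paper above) is the standard overlap score 2|A∩B|/(|A|+|B|) between two binary masks. A minimal sketch:

```python
# Dice similarity coefficient between two binary segmentation masks,
# given as flat 0/1 lists of equal length.
def dice(mask_a, mask_b):
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    return 2 * inter / size if size else 1.0   # empty vs empty: perfect match

auto   = [1, 1, 1, 0, 0]   # e.g. automatic contour, flattened
manual = [0, 1, 1, 1, 0]   # e.g. expert contour
score = dice(auto, manual)  # 2*2 / (3+3) = 0.666...
```

Dice of 1.0 means perfect overlap and 0.0 means disjoint masks; the 0.93-0.94 values in these abstracts indicate near-complete agreement with the expert contours.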

  17. Market segmentation, targeting and positioning

    OpenAIRE

    Camilleri, Mark Anthony

    2017-01-01

    Businesses may not be in a position to satisfy all of their customers, every time. It may prove difficult to meet the exact requirements of each individual customer. People do not have identical preferences, so rarely does one product completely satisfy everyone. Many companies therefore adopt a strategy that is known as target marketing. This strategy involves dividing the market into segments and developing products or services for these segments. A target marketing strategy is focused on ...

  18. Segmenting hospitals for improved management strategy.

    Science.gov (United States)

    Malhotra, N K

    1989-09-01

    The author presents a conceptual framework for the a priori and clustering-based approaches to segmentation and evaluates them in the context of segmenting institutional health care markets. An empirical study is reported in which the hospital market is segmented on three state-of-being variables. The segmentation approach also takes into account important organizational decision-making variables. The sophisticated Thurstone Case V procedure is employed. Several marketing implications for hospitals, other health care organizations, hospital suppliers, and donor publics are identified.

  19. Multiscale Analysis of Time Irreversibility Based on Phase-Space Reconstruction and Horizontal Visibility Graph Approach

    Science.gov (United States)

    Zhang, Yongping; Shang, Pengjian; Xiong, Hui; Xia, Jianan

    Time irreversibility is an important property of nonequilibrium dynamic systems. A visibility graph approach was recently proposed, and this approach is generally effective for measuring the time irreversibility of time series. However, its result may be unreliable when dealing with high-dimensional systems. In this work, we consider the joint concept of time irreversibility and adopt the phase-space reconstruction technique to improve this visibility graph approach. Compared with the previous approach, the improved approach gives a more accurate estimate of the irreversibility of time series, and is more effective in distinguishing irreversible from reversible stochastic processes. We also use this approach to extract multiscale irreversibility to account for the multiple inherent dynamics of time series. Finally, we apply the approach to detect the multiscale irreversibility of financial time series, and succeed in distinguishing periods of financial crisis from plateau periods. In addition, at higher time scales the Asian stock indexes separate clearly from the other indexes. Simulations and real data support the effectiveness of the improved approach in detecting time irreversibility.
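    The building block of this method is the horizontal visibility graph (HVG): samples i < j are linked when every sample strictly between them lies below both endpoints. In the directed version used for irreversibility testing, one compares links to the future (out-degrees) against links to the past (in-degrees). A minimal O(n^2) sketch of the graph construction only, without the paper's phase-space reconstruction step:

```python
# Directed horizontal visibility graph degrees. out_deg[i] counts links
# from i to later points, in_deg[j] counts links from earlier points to j;
# a large divergence between the two degree distributions flags
# time irreversibility.
def hvg_degrees(x):
    n = len(x)
    out_deg, in_deg = [0] * n, [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            # i sees j when everything strictly between lies below both.
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                out_deg[i] += 1
                in_deg[j] += 1
    return out_deg, in_deg

out_deg, in_deg = hvg_degrees([3.0, 1.0, 2.0, 1.5, 4.0])
```

The irreversibility statistic itself is then a divergence (e.g. Kullback-Leibler) between the empirical out- and in-degree distributions, which is omitted here.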

  20. Efficient graph-cut tattoo segmentation

    Science.gov (United States)

    Kim, Joonsoo; Parra, Albert; Li, He; Delp, Edward J.

    2015-03-01

    Law enforcement is interested in exploiting tattoos as an information source to identify, track and prevent gang-related crimes. Many tattoo image retrieval systems have been described. In a retrieval system tattoo segmentation is an important step for retrieval accuracy since segmentation removes background information in a tattoo image. Existing segmentation methods do not extract the tattoo very well when the background includes textures and color similar to skin tones. In this paper we describe a tattoo segmentation approach by determining skin pixels in regions near the tattoo. In these regions graph-cut segmentation using a skin color model and a visual saliency map is used to find skin pixels. After segmentation we determine which set of skin pixels are connected with each other that form a closed contour including a tattoo. The regions surrounded by the closed contours are considered tattoo regions. Our method segments tattoos well when the background includes textures and color similar to skin.

  1. Metric Learning for Hyperspectral Image Segmentation

    Science.gov (United States)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
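    The discriminant idea can be shown in miniature with the two-class, two-feature Fisher criterion: the learned direction w = Sw^-1 (m_a - m_b) defines a task-specific 1-D distance |w . (p - q)|. This is a toy stand-in for the paper's multiclass LDA, not its implementation:

```python
# Two-class Fisher discriminant in 2-D: w = Sw^{-1} (mean_a - mean_b),
# where Sw is the pooled within-class scatter matrix. Points are (x, y) tuples.
def fisher_direction(class_a, class_b):
    def mean(pts):
        return [sum(c) / len(pts) for c in zip(*pts)]
    def scatter(pts, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s
    ma, mb = mean(class_a), mean(class_b)
    sw = [[u + v for u, v in zip(ra, rb)]
          for ra, rb in zip(scatter(class_a, ma), scatter(class_b, mb))]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]       # 2x2 inverse by hand
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

a = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]   # classes separated
b = [(4.0, 0.0), (5.0, 0.0), (4.0, 1.0), (5.0, 1.0)]   # along the x axis
w = fisher_direction(a, b)
# The learned metric is then |w[0]*(p[0]-q[0]) + w[1]*(p[1]-q[1])|.
```

For the classes above, w points purely along the x axis, so the learned metric ignores the uninformative y feature, which is exactly the behavior the paper exploits for spectral bands.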

  2. A timed-automata approach for critical path detection in a soft real-time application

    NARCIS (Netherlands)

    Yildiz, Bugra Mehmet; Bockisch, Christoph; Rensink, Arend; Aksit, Mehmet

    In this paper, we report preliminary ideas from our project called “Time Performance Improvement With Parallel Processing Systems” (TIPS). In the TIPS project, we plan to take advantage of multi-core platforms for performance improvement by parallelizing a complex soft real-time application. In

  3. Hospital process intervals, not EMS time intervals, are the most important predictors of rapid reperfusion in EMS Patients with ST-segment elevation myocardial infarction.

    Science.gov (United States)

    Clark, Carol Lynn; Berman, Aaron D; McHugh, Ann; Roe, Edward Jedd; Boura, Judith; Swor, Robert A

    2012-01-01

    To assess the relationship of emergency medical services (EMS) intervals and internal hospital intervals to the rapid reperfusion of patients with ST-segment elevation myocardial infarction (STEMI). We performed a secondary analysis of a prospectively collected database of STEMI patients transported to a large academic community hospital between January 1, 2004, and December 31, 2009. EMS and hospital data intervals included EMS scene time, transport time, hospital arrival to myocardial infarction (MI) team activation (D2Page), page to catheterization laboratory arrival (P2Lab), and catheterization laboratory arrival to reperfusion (L2B). We used two outcomes: EMS scene arrival to reperfusion (S2B) ≤90 minutes and hospital arrival to reperfusion (D2B) ≤90 minutes. Means and proportions are reported. Pearson chi-square and multivariate regression were used for analysis. During the study period, we included 313 EMS-transported STEMI patients with 298 (95.2%) MI team activations. Of these STEMI patients, 295 (94.2%) were taken to the cardiac catheterization laboratory and 244 (78.0%) underwent percutaneous coronary intervention (PCI). For the patients who underwent PCI, 127 (52.5%) had prehospital EMS activation, 202 (82.8%) had D2B ≤90 minutes, and 72 (39%) had S2B ≤90 minutes. In a multivariate analysis, hospital processes (EMS activation [OR 7.1, 95% CI 2.7, 18.4], Page to Lab [OR 6.7, 95% CI 2.3, 19.2], and Lab arrival to Reperfusion [OR 18.5, 95% CI 6.1, 55.6]) were the most important predictors of Scene to Balloon ≤90 minutes. EMS scene and transport intervals also had a modest association with rapid reperfusion (OR 0.85, 95% CI 0.78, 0.93 and OR 0.89, 95% CI 0.83, 0.95, respectively). In a secondary analysis, hospital processes (Door to Page [OR 44.8, 95% CI 8.6, 234.4], Page to Lab [OR 5.4, 95% CI 1.9, 15.3], and Lab arrival to Reperfusion [OR 14.6, 95% CI 2.5, 84.3]), but not EMS scene and transport intervals, were the most important predictors of D2B ≤90
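    Odds ratios with 95% confidence intervals like those reported above can be computed for any 2x2 table with the standard Woolf (log) method. A minimal sketch; the counts below are hypothetical, not from the study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and (by default) 95% CI for a 2x2 table
    [[a, b], [c, d]] using the Woolf/log method."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)    # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: exposed/unexposed vs. outcome yes/no
or_, lo, hi = odds_ratio_ci(50, 10, 20, 40)
```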

  4. A Formal Approach to Run-Time Evaluation of Real-Time Behaviour in Distributed Process Control Systems

    DEFF Research Database (Denmark)

    Kristensen, C.H.

    This thesis advocates a formal approach to run-time evaluation of real-time behaviour in distributed process control systems, motivated by a growing interest in applying the increasingly popular formal methods in the application area of distributed process control systems. We propose to evaluate...... because the real-time aspects of distributed process control systems are considered to be among the hardest and most interesting to handle....

  5. A correlative approach to segmenting phases and ferrite morphologies in transformation-induced plasticity steel using electron back-scattering diffraction and energy dispersive X-ray spectroscopy.

    Science.gov (United States)

    Gazder, Azdiar A; Al-Harbi, Fayez; Spanke, Hendrik Th; Mitchell, David R G; Pereloma, Elena V

    2014-12-01

    Using a combination of electron back-scattering diffraction and energy dispersive X-ray spectroscopy data, a segmentation procedure was developed to comprehensively distinguish austenite, martensite, polygonal ferrite, ferrite in granular bainite and bainitic ferrite laths in a thermo-mechanically processed low-Si, high-Al transformation-induced plasticity steel. The efficacy of the segmentation procedure for the ferrite morphologies was verified by transmission electron microscopy. The variation in carbon content between the ferrite in granular bainite and the bainitic ferrite laths was explained on the basis of carbon partitioning during their growth. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Door to needle time of streptokinase and ST segment resolution assessing the efficacy of reperfusion therapy at Karachi Institute of Heart Diseases

    International Nuclear Information System (INIS)

    Sultana, R.; Sultana, N.; Rasheed, A.; Rasheed, Z.; Ahmed, M.; Ishaq, M.; Samad, A.

    2010-01-01

    Background: Early start of treatment, including coronary revascularisation, has been recognised as a crucial variable in the outcome of acute ST-segment Elevation Myocardial Infarction (STEMI). The objective of the study was to determine the extent to which ST-segment resolution after thrombolytic therapy predicts short- and long-term outcomes in patients with an Acute Myocardial Infarction (AMI). Methods: This quasi-experimental study was conducted over 3 years, from July 2006 to June 2009, at the Karachi Institute of Heart Diseases. A total of 1,023 STEMI patients treated with streptokinase (SK) were enrolled in the study. Results: Of the total 1,023 patients, 689 (67.3%) were males and 334 (32.6%) were females. In 629 (61.5%) patients the ST segment resolved successfully after thrombolytic therapy, while in 395 (38.5%) patients it could not be resolved into the 3 conventional ST-segment resolution categories at 60 and 90 minutes after thrombolysis. At 60 and 90 minutes respectively, 312 (30%) and 444 (43.4%) patients had complete resolution, 344 (33.62%) and 325 (31.76%) had partial resolution, and 367 (35.8%) and 491 (19.29%) had no resolution. Conclusion: Shock, congestive heart failure, and recurrent angina and ischemia occurred more often in patients with partial or no ST resolution than in patients with complete resolution. (author)
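    Classifying patients into the three conventional ST-segment resolution categories can be sketched as follows. The 70%/30% cut-offs are the commonly used Schröder-style thresholds, assumed here because the abstract does not state the study's exact criteria:

```python
def st_resolution_category(st_before_mm, st_after_mm):
    """Classify ST-segment resolution by percentage reduction from baseline.
    Thresholds (>=70% complete, 30-70% partial, <30% none) follow commonly
    used Schroeder-style criteria; the study's exact cut-offs are not stated
    in the abstract."""
    if st_before_mm <= 0:
        raise ValueError("baseline ST elevation must be positive")
    reduction = 100.0 * (st_before_mm - st_after_mm) / st_before_mm
    if reduction >= 70:
        return "complete"
    if reduction >= 30:
        return "partial"
    return "none"
```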

  7. Modeling Time-Dependent Association in Longitudinal Data: A Lag as Moderator Approach

    Science.gov (United States)

    Selig, James P.; Preacher, Kristopher J.; Little, Todd D.

    2012-01-01

    We describe a straightforward, yet novel, approach to examine time-dependent association between variables. The approach relies on a measurement-lag research design in conjunction with statistical interaction models. We base arguments in favor of this approach on the potential for better understanding the associations between variables by…

  8. Field-theoretic approach to gravity in the flat space-time

    Energy Technology Data Exchange (ETDEWEB)

    Cavalleri, G [Centro Informazioni Studi Esperienze, Milan (Italy); Milan Univ. (Italy). Ist. di Fisica); Spinelli, G [Istituto di Matematica del Politecnico di Milano, Milano (Italy)

    1980-01-01

    This paper discusses how the field-theoretical approach to gravity, starting from flat space-time, is wider than the Einstein approach. The flat approach is able to predict the structure of the observable space as a consequence of the behaviour of the particle proper masses. The field equations are formally equal to Einstein's equations without the cosmological term.

  9. Segmentation of multiple sclerosis lesions in MR images: a review

    Energy Technology Data Exchange (ETDEWEB)

    Mortazavi, Daryoush; Kouzani, Abbas Z. [Deakin University, School of Engineering, Geelong, Victoria (Australia); Soltanian-Zadeh, Hamid [Henry Ford Health System, Image Analysis Laboratory, Radiology Department, Detroit, MI (United States); University of Tehran, Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, Tehran (Iran, Islamic Republic of); School of Cognitive Sciences, Institute for Studies in Theoretical Physics and Mathematics (IPM), Tehran (Iran, Islamic Republic of)

    2012-04-15

    Multiple sclerosis (MS) is an inflammatory demyelinating disease that damages parts of the nervous system through lesions generated in the white matter of the brain. It brings about disabilities in different organs of the body such as eyes and muscles. Early detection of MS and estimation of its progression are critical for optimal treatment of the disease. For diagnosis and treatment evaluation of MS lesions, they may be detected and segmented in Magnetic Resonance Imaging (MRI) scans of the brain. However, due to the large amount of MRI data to be analyzed, manual segmentation of the lesions by clinical experts translates into a very cumbersome and time consuming task. In addition, manual segmentation is subjective and prone to human errors. Several groups have developed computerized methods to detect and segment MS lesions. These methods have not previously been categorized and compared. This paper reviews and compares various MS lesion segmentation methods proposed in recent years. It covers conventional methods like multilevel thresholding and region growing, as well as more recent Bayesian methods that require parameter estimation algorithms. It also covers parameter estimation methods like expectation maximization and adaptive mixture model which are among unsupervised techniques as well as kNN and Parzen window methods that are among supervised techniques. Integration of knowledge-based methods such as atlas-based approaches with Bayesian methods increases segmentation accuracy. In addition, employing intelligent classifiers like Fuzzy C-Means, Fuzzy Inference Systems, and Artificial Neural Networks reduces misclassified voxels. (orig.)
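    The expectation-maximization parameter estimation mentioned above can be sketched for a simple two-class intensity model. This is a generic 1-D Gaussian-mixture EM on synthetic "voxel intensities", not a reimplementation of any specific reviewed method:

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Fit a two-component 1-D Gaussian mixture with EM -- a minimal sketch
    of the unsupervised parameter-estimation step used by many
    intensity-based lesion segmentation methods."""
    mu = np.percentile(x, [10, 90]).astype(float)   # crude initialisation
    var = np.full(2, x.var())
    pi = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each voxel
        p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
        pi = n / len(x)
    return mu, var, pi

# synthetic "voxel intensities": normal tissue around 30, lesions around 80
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(30, 5, 5000), rng.normal(80, 8, 1000)])
mu, var, pi = em_gmm_1d(x)
```

Voxels would then be labelled by their posterior responsibilities, optionally regularised with atlas priors as the review describes.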

  10. Segmentation of multiple sclerosis lesions in MR images: a review

    International Nuclear Information System (INIS)

    Mortazavi, Daryoush; Kouzani, Abbas Z.; Soltanian-Zadeh, Hamid

    2012-01-01

    Multiple sclerosis (MS) is an inflammatory demyelinating disease that damages parts of the nervous system through lesions generated in the white matter of the brain. It brings about disabilities in different organs of the body such as eyes and muscles. Early detection of MS and estimation of its progression are critical for optimal treatment of the disease. For diagnosis and treatment evaluation of MS lesions, they may be detected and segmented in Magnetic Resonance Imaging (MRI) scans of the brain. However, due to the large amount of MRI data to be analyzed, manual segmentation of the lesions by clinical experts translates into a very cumbersome and time consuming task. In addition, manual segmentation is subjective and prone to human errors. Several groups have developed computerized methods to detect and segment MS lesions. These methods have not previously been categorized and compared. This paper reviews and compares various MS lesion segmentation methods proposed in recent years. It covers conventional methods like multilevel thresholding and region growing, as well as more recent Bayesian methods that require parameter estimation algorithms. It also covers parameter estimation methods like expectation maximization and adaptive mixture model which are among unsupervised techniques as well as kNN and Parzen window methods that are among supervised techniques. Integration of knowledge-based methods such as atlas-based approaches with Bayesian methods increases segmentation accuracy. In addition, employing intelligent classifiers like Fuzzy C-Means, Fuzzy Inference Systems, and Artificial Neural Networks reduces misclassified voxels. (orig.)

  11. MOVING WINDOW SEGMENTATION FRAMEWORK FOR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2012-07-01

    Full Text Available As lidar point clouds become larger, streamed processing becomes more attractive. This paper presents a framework for the streamed segmentation of point clouds with the intention of segmenting unstructured point clouds in real-time. The framework is composed of two main components. The first component segments points within a window shifting over the point cloud. The second component stitches the segments within the windows together. In this fashion a point cloud can be streamed through these two components in sequence, thus producing a segmentation. The algorithm has been tested on an airborne lidar point cloud, and some results of the performance of the framework are presented.
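    The two-component idea (a per-window segmenter plus a stitcher) can be illustrated on a toy 1-D point stream, where segments are separated by gaps and segments that touch across a window boundary are stitched together. The gap criterion is an illustrative stand-in for the paper's actual segmentation rule:

```python
def stream_segments(points, window=4, eps=1.0):
    """Segment a sorted 1-D point stream window by window, stitching
    segments across window boundaries -- a toy analogue of the paper's
    two-component (segment + stitch) streaming framework."""
    segments = []        # completed and in-progress segments
    prev_last = None     # last point seen in the stream so far
    for i in range(0, len(points), window):
        for p in points[i:i + window]:
            if prev_last is not None and p - prev_last <= eps:
                segments[-1].append(p)   # continue (or stitch) a segment
            else:
                segments.append([p])     # gap found: start a new segment
            prev_last = p
    return segments

# 5.0 and 5.2 fall in different windows but are stitched into one segment
segs = stream_segments([0, 0.5, 1.0, 5.0, 5.2, 9.0])
```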

  12. A Model-free Approach to Fault Detection of Continuous-time Systems Based on Time Domain Data

    Institute of Scientific and Technical Information of China (English)

    Ping Zhang; Steven X. Ding

    2007-01-01

    In this paper, a model-free approach is presented to design an observer-based fault detection system of linear continuous-time systems based on input and output data in the time domain. The core of the approach is to directly identify parameters of the observer-based residual generator based on a numerically reliable data equation obtained by filtering and sampling the input and output signals.

  13. Using simulated fluorescence cell micrographs for the evaluation of cell image segmentation algorithms.

    Science.gov (United States)

    Wiesmann, Veit; Bergler, Matthias; Palmisano, Ralf; Prinzen, Martin; Franz, Daniela; Wittenberg, Thomas

    2017-03-18

    Manual assessment and evaluation of fluorescent micrograph cell experiments is time-consuming and tedious. Automated segmentation pipelines can ensure efficient and reproducible evaluation and analysis with constant high quality for all images of an experiment. Such cell segmentation approaches are usually validated and rated in comparison to manually annotated micrographs. Nevertheless, manual annotations are prone to errors and display inter- and intra-observer variability which influence the validation results of automated cell segmentation pipelines. We present a new approach to simulate fluorescent cell micrographs that provides an objective ground truth for the validation of cell segmentation methods. The cell simulation was evaluated twofold: (1) An expert observer study shows that the proposed approach generates realistic fluorescent cell micrograph simulations. (2) An automated segmentation pipeline on the simulated fluorescent cell micrographs reproduces segmentation performances of that pipeline on real fluorescent cell micrographs. The proposed simulation approach produces realistic fluorescent cell micrographs with corresponding ground truth. The simulated data is suited to evaluate image segmentation pipelines more efficiently and reproducibly than is possible with manually annotated real micrographs.
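    Validating a segmentation pipeline against simulated ground truth typically reduces to overlap metrics such as the Dice coefficient, which is used throughout the records above. A minimal sketch on hypothetical masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

# hypothetical simulated ground truth vs. a slightly shifted prediction
gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True
pred = np.zeros((8, 8), bool); pred[3:7, 2:6] = True
```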

  14. Combining Statistical Methodologies in Water Quality Monitoring in a Hydrological Basin - Space and Time Approaches

    OpenAIRE

    Costa, Marco; A. Manuela Gonçalves

    2012-01-01

    This work discusses statistical approaches that combine multivariate statistical techniques and time series analysis in order to describe and model spatial patterns and temporal evolution, by observing hydrological series of water quality variables recorded in time and space. These approaches are illustrated with a data set collected in the River Ave hydrological basin, located in the Northwest region of Portugal.

  15. Communication with market segments - travel agencies' perspective

    OpenAIRE

    Lorena Bašan; Jasmina Dlačić; Željko Trezner

    2013-01-01

    Purpose – The purpose of this paper is to research the travel agencies’ communication with market segments. Communication with market segments takes into account marketing communication means as well as the implementation of different business orientations. Design – Special emphasis is placed on the use of different marketing communication means and their efficiency. Research also explores business orientation adaptation when approaching different market segments. Methodology – In explo...

  16. An approach to handle Real Time and Probabilistic behaviors in e-commerce

    DEFF Research Database (Denmark)

    Diaz, G.; Larsen, Kim Guldstrand; Pardo, J.

    2005-01-01

    In this work we describe an approach to deal with systems having both probabilistic and real-time behaviors. The main goal of the paper is to show the automatic translation from a real-time model based on the UPPAAL tool, which performs automatic verification of Real Time Systems, to the R...

  17. Millisecond single-molecule localization microscopy combined with convolution analysis and automated image segmentation to determine protein concentrations in complexly structured, functional cells, one cell at a time.

    Science.gov (United States)

    Wollman, Adam J M; Leake, Mark C

    2015-01-01

    We present a single-molecule tool called the CoPro (concentration of proteins) method that uses millisecond imaging with convolution analysis, automated image segmentation and super-resolution localization microscopy to generate robust estimates for protein concentration in different compartments of single living cells, validated using realistic simulations of complex multiple compartment cell types. We demonstrate its utility experimentally on model Escherichia coli bacteria and Saccharomyces cerevisiae budding yeast cells, and use it to address the biological question of how signals are transduced in cells. Cells in all domains of life dynamically sense their environment through signal transduction mechanisms, many involving gene regulation. The glucose sensing mechanism of S. cerevisiae is a model system for studying gene regulatory signal transduction. It uses the multi-copy expression inhibitor of the GAL gene family, Mig1, to repress unwanted genes in the presence of elevated extracellular glucose concentrations. We fluorescently labelled Mig1 molecules with green fluorescent protein (GFP) via chromosomal integration at physiological expression levels in living S. cerevisiae cells, in addition to the RNA polymerase protein Nrd1 with the fluorescent protein reporter mCherry. Using CoPro we make quantitative estimates of Mig1 and Nrd1 protein concentrations in the cytoplasm and nucleus compartments on a cell-by-cell basis under physiological conditions. These estimates indicate a ∼4-fold shift towards higher values in the concentration of diffusive Mig1 in the nucleus if the external glucose concentration is raised, whereas equivalent levels in the cytoplasm shift to smaller values with a relative change an order of magnitude smaller. This compares with Nrd1 which is not involved directly in glucose sensing, and which is almost exclusively localized in the nucleus under high and low external glucose levels. CoPro facilitates time-resolved quantification of

  18. Robust Stabilization of Discrete-Time Systems with Time-Varying Delay: An LMI Approach

    Directory of Open Access Journals (Sweden)

    Valter J. S. Leite

    2008-01-01

    Full Text Available Sufficient linear matrix inequality (LMI) conditions to verify the robust stability and to design robust state feedback gains for the class of linear discrete-time systems with time-varying delay and polytopic uncertainties are presented. The conditions are obtained through parameter-dependent Lyapunov-Krasovskii functionals and use some extra variables, which yield less conservative LMI conditions. Both problems, robust stability analysis and robust synthesis, are formulated as convex problems where all system matrices can be affected by uncertainty. Some numerical examples are presented to illustrate the advantages of the proposed LMI conditions.
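    Solving the paper's LMI conditions requires an SDP solver, but the underlying discrete-time Lyapunov idea can be sketched for a fixed, delay-free system: A is Schur stable iff its spectral radius is below one, equivalently the Lyapunov equation A P Aᵀ − P + Q = 0 has a positive-definite solution for Q ≻ 0. The matrices below are illustrative:

```python
import numpy as np

def is_schur_stable(A, tol=1e-9):
    """Discrete-time stability: all eigenvalues strictly inside the unit
    circle (the paper's LMI conditions generalise this Lyapunov test to
    uncertain systems with time-varying delay)."""
    return bool(np.max(np.abs(np.linalg.eigvals(A))) < 1 - tol)

def discrete_lyapunov(A, Q, n_terms=500):
    """P = sum_k A^k Q (A^T)^k solves A P A^T - P + Q = 0 for stable A
    (truncated series; fine for a sketch with small spectral radius)."""
    P = np.zeros_like(Q, dtype=float)
    Ak = np.eye(len(Q))
    for _ in range(n_terms):
        P += Ak @ Q @ Ak.T
        Ak = A @ Ak
    return P

A = np.array([[0.5, 0.1], [0.0, 0.3]])   # illustrative stable system
Q = np.eye(2)
P = discrete_lyapunov(A, Q)
```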

  19. Comparison of statistical approaches dealing with time-dependent confounding in drug effectiveness studies.

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W; Tremlett, Helen

    2018-06-01

    In longitudinal studies, if the time-dependent covariates are affected by the past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models are frequently used to deal with such confounding. To avoid some of the problems of fitting marginal structural Cox model, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as marginal structural Cox model in addressing the time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995-2008).

  20. Novel approach for identification of influenza virus host range and zoonotic transmissible sequences by determination of host-related associative positions in viral genome segments.

    Science.gov (United States)

    Kargarfard, Fatemeh; Sami, Ashkan; Mohammadi-Dehcheshmeh, Manijeh; Ebrahimie, Esmaeil

    2016-11-16

    Recent (2013 and 2009) zoonotic transmission of avian or porcine influenza to humans highlights an increase in host range by evading species barriers. Gene reassortment or antigenic shift between viruses from two or more hosts can generate a new life-threatening virus when the new shuffled virus is no longer recognized by antibodies existing within human populations. There is no large-scale study to help understand the underlying mechanisms of host transmission. Furthermore, there is no clear understanding of how different segments of the influenza genome contribute to the final determination of host range. To obtain insight into the rules underpinning host range determination, various supervised machine learning algorithms were employed to mine reassortment changes in different viral segments in a range of hosts. Our multi-host dataset contained whole segments of 674 influenza strains organized into three host categories: avian, human, and swine. Some of the sequences were assigned to multiple hosts; the dataset is therefore multi-labeled, and we utilized a multi-label learning method to identify discriminative sequence sites. Algorithms such as CBA, Ripper, and decision trees were then applied to extract informative and descriptive association rules for each viral protein segment. We found informative rules in all segments that are common within the same host class but varied between different hosts. For example, for infection of an avian host, HA14V and NS1230S were the most important discriminative and combinatorial positions. Host range identification is facilitated by high-support combined rules in this study. Our major goal was to detect discriminative genomic positions that were able to identify multi-host viruses, because such viruses are likely to cause pandemics or disastrous epidemics.
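    The extraction of host-discriminative positions can be illustrated with a toy version of the rule-mining step: report residues that occur in exactly one host class at a given position. The three-residue sequences are hypothetical; the study applied CBA, Ripper, and decision trees to full genome segments:

```python
from collections import defaultdict

def discriminative_positions(seqs_by_host):
    """For each alignment position, report residues seen in exactly one
    host class -- a crude stand-in for the paper's rule-mining step."""
    length = len(next(iter(seqs_by_host.values()))[0])
    rules = []
    for pos in range(length):
        residue_hosts = defaultdict(set)
        for host, seqs in seqs_by_host.items():
            for s in seqs:
                residue_hosts[s[pos]].add(host)
        for res, hosts in residue_hosts.items():
            if len(hosts) == 1:                    # residue is host-specific
                rules.append((pos, res, next(iter(hosts))))
    return rules

# hypothetical toy alignment: position 1 separates avian from human strains
toy = {"avian": ["MKV", "MKV"], "human": ["MRV", "MRI"]}
rules = discriminative_positions(toy)
```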

  1. Atlas-based segmentation technique incorporating inter-observer delineation uncertainty for whole breast

    International Nuclear Information System (INIS)

    Bell, L R; Pogson, E M; Metcalfe, P; Holloway, L; Dowling, J A

    2017-01-01

    Accurate, efficient auto-segmentation methods are essential for the clinical efficacy of adaptive radiotherapy delivered with highly conformal techniques. Current atlas-based auto-segmentation techniques are adequate in this respect; however, they fail to account for inter-observer variation. An atlas-based segmentation method that incorporates inter-observer variation is proposed. This method is validated for a whole breast radiotherapy cohort containing 28 CT datasets with CTVs delineated by eight observers. To optimise atlas accuracy, the cohort was divided into categories by mean body mass index and laterality, with atlases generated for each in a leave-one-out approach. Observer CTVs were merged and thresholded to generate an auto-segmentation model representing both inter-observer and inter-patient differences. For each category, the atlas was registered to the left-out dataset to enable propagation of the auto-segmentation from atlas space. Auto-segmentation time was recorded. The segmentation was compared to the gold-standard contour using the dice similarity coefficient (DSC) and mean absolute surface distance (MASD). Comparison with the smallest and largest CTV was also made. This atlas-based auto-segmentation method incorporating inter-observer variation was shown to be efficient (<4 min) and accurate for whole breast radiotherapy, with good agreement (DSC >0.7, MASD <9.3 mm) between the auto-segmented contours and CTV volumes. (paper)
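    Merging and thresholding the observers' CTVs can be sketched as a per-voxel vote. The 50% threshold below is an assumption; the abstract does not state the level actually used:

```python
import numpy as np

def consensus_contour(observer_masks, threshold=0.5):
    """Merge several observers' binary CTV masks into one model: keep
    voxels delineated by at least `threshold` of the observers (the 50%
    level is an assumption, not stated in the abstract)."""
    stack = np.stack([np.asarray(m, bool) for m in observer_masks])
    return stack.mean(axis=0) >= threshold

# three hypothetical observer delineations of the same 2x3 slice
m1 = np.array([[1, 1, 0], [0, 0, 0]])
m2 = np.array([[1, 0, 0], [1, 0, 0]])
m3 = np.array([[1, 1, 0], [0, 0, 0]])
cons = consensus_contour([m1, m2, m3])
```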

  2. Accounting for segment correlations in segmented gamma-ray scans

    International Nuclear Information System (INIS)

    Sheppard, G.A.; Prettyman, T.H.; Piquette, E.C.

    1994-01-01

    In a typical segmented gamma-ray scanner (SGS), the detector's field of view is collimated so that a complete horizontal slice or segment of the desired thickness is visible. Ordinarily, the collimator is not deep enough to exclude gamma rays emitted from sample volumes above and below the segment aligned with the collimator. This can lead to assay biases, particularly for certain radioactive-material distributions. Another consequence of the collimator's low aspect ratio is that segment assays at the top and bottom of the sample are biased low because the detector's field of view is not filled. This effect is ordinarily countered by placing the sample on a low-Z pedestal and scanning one or more segment thicknesses below and above the sample. This takes extra time, however. We have investigated a number of techniques that both account for correlated segments and correct for end effects in SGS assays. Also, we have developed an algorithm that facilitates estimates of assay precision. Six calculation methods have been compared by evaluating the results of thousands of simulated assays for three types of gamma-ray source distribution and ten masses. We will report on these computational studies and their experimental verification
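    Accounting for correlated segments amounts to unfolding a linear response model: each measured segment response mixes activity from neighbouring segments that leak past the collimator. With an assumed, purely illustrative leakage matrix, the true segment activities can be recovered by least squares:

```python
import numpy as np

# illustrative response model: each segment's detector reading picks up
# 20% of each neighbouring segment's activity (not real SGS leakage data)
n = 5
M = np.eye(n) + 0.2 * np.eye(n, k=1) + 0.2 * np.eye(n, k=-1)

true_activity = np.array([0.0, 4.0, 1.0, 0.0, 3.0])   # hypothetical source
measured = M @ true_activity                           # correlated readings

# unfold the correlations: solve M @ s = measured for the segment activities
recovered, *_ = np.linalg.lstsq(M, measured, rcond=None)
```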

  3. Superpixel-based segmentation of glottal area from videolaryngoscopy images

    Science.gov (United States)

    Turkmen, H. Irem; Albayrak, Abdulkadir; Karsligil, M. Elif; Kocak, Ismail

    2017-11-01

    Segmentation of the glottal area with high accuracy is one of the major challenges for the development of systems for computer-aided diagnosis of vocal-fold disorders. We propose a hybrid model combining conventional methods with a superpixel-based segmentation approach. We first employed a superpixel algorithm to reveal the glottal area by eliminating the local variances of pixels caused by bleedings, blood vessels, and light reflections from mucosa. Then, the glottal area was detected by exploiting a seeded region-growing algorithm in a fully automatic manner. The experiments were conducted on videolaryngoscopy images obtained from both patients having pathologic vocal folds as well as healthy subjects. Finally, the proposed hybrid approach was compared with conventional region-growing and active-contour model-based glottal area segmentation algorithms. The performance of the proposed method was evaluated in terms of segmentation accuracy and elapsed time. The F-measure, true negative rate, and dice coefficients of the hybrid method were calculated as 82%, 93%, and 82%, respectively, which are superior to the state-of-the-art glottal-area segmentation methods. The proposed hybrid model achieved high success rates and robustness, making it suitable for developing a computer-aided diagnosis system that can be used in clinical routines.
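    The seeded region-growing step can be sketched as a flood fill over 4-neighbours whose intensity stays close to the seed. The tolerance and the tiny test image are illustrative assumptions; the paper first cleans the image with superpixels before growing:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Seeded region growing: breadth-first flood fill of 4-neighbours
    whose intensity is within `tol` of the seed intensity."""
    img = np.asarray(img, float)
    mask = np.zeros(img.shape, bool)
    ref = img[seed]
    q = deque([seed])
    mask[seed] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx] and abs(img[ny, nx] - ref) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

# toy image: dark glottal area (top left) against bright tissue
img = np.array([[10, 12, 90],
                [11, 13, 95],
                [92, 94, 96]])
mask = region_grow(img, (0, 0), tol=10)
```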

  4. A Quantitative Approach to the Formal Verification of Real-Time Systems.

    Science.gov (United States)

    1996-09-01

    Computer Science, A Quantitative Approach to the Formal Verification of Real-Time Systems, Sergio Vale Aguiar Campos, September 1996, CMU-CS-96-199...implied, of NSF, the Semiconductor Research Corporation, ARPA or the U.S. government. Keywords: real-time systems, formal verification, symbolic

  5. Inferior vena cava segmentation with parameter propagation and graph cut.

    Science.gov (United States)

    Yan, Zixu; Chen, Feng; Wu, Fa; Kong, Dexing

    2017-09-01

    The inferior vena cava (IVC) is one of the vital veins inside the human body. Accurate segmentation of the IVC from contrast-enhanced CT images is of great importance. This extraction not only helps the physician understand its quantitative features such as blood flow and volume, but also it is helpful during the hepatic preoperative planning. However, manual delineation of the IVC is time-consuming and poorly reproducible. In this paper, we propose a novel method to segment the IVC with minimal user interaction. The proposed method performs the segmentation block by block between user-specified beginning and end masks. At each stage, the proposed method builds the segmentation model based on information from image regional appearances, image boundaries, and a prior shape. The intensity range and the prior shape for this segmentation model are estimated based on the segmentation result from the last block, or from the user-specified beginning mask if at the first stage. Then, the proposed method minimizes the energy function and generates the segmentation result for the current block using graph cut. Finally, a backward tracking step from the end of the IVC is performed if necessary. We have tested our method on 20 clinical datasets and compared our method to three other vessel extraction approaches. The evaluation was performed using three quantitative metrics: the Dice coefficient (Dice), the mean symmetric distance (MSD), and the Hausdorff distance (MaxD). The proposed method has achieved a Dice of [Formula: see text], an MSD of [Formula: see text] mm, and a MaxD of [Formula: see text] mm, respectively, in our experiments. The proposed approach can achieve a sound performance with a relatively low computational cost and a minimal user interaction. The proposed algorithm has high potential to be applied for the clinical applications in the future.

  6. Dynamics in international market segmentation of new product growth

    NARCIS (Netherlands)

    Lemmens, A.; Croux, C.; Stremersch, S.

    2012-01-01

    Prior international segmentation studies have been static in that they have identified segments that remain stable over time. This paper shows that country segments in new product growth are intrinsically dynamic. We propose a semiparametric hidden Markov model to dynamically segment countries based

  7. Measuring multi-joint stiffness during single movements: numerical validation of a novel time-frequency approach.

    Science.gov (United States)

    Piovesan, Davide; Pierobon, Alberto; DiZio, Paul; Lackner, James R

    2012-01-01

    This study presents and validates a Time-Frequency technique for measuring 2-dimensional multijoint arm stiffness throughout a single planar movement as well as during static posture. It is proposed as an alternative to current regressive methods which require numerous repetitions to obtain average stiffness on a small segment of the hand trajectory. The method is based on the analysis of the reassigned spectrogram of the arm's response to impulsive perturbations and can estimate arm stiffness on a trial-by-trial basis. Analytic and empirical methods are first derived and tested through modal analysis on synthetic data. The technique's accuracy and robustness are assessed by modeling the estimation of stiffness time profiles changing at different rates and affected by different noise levels. Our method obtains results comparable with two well-known regressive techniques. We also test how the technique can identify the viscoelastic component of non-linear and higher than second order systems with a non-parametrical approach. The technique proposed here is very impervious to noise and can be used easily for both postural and movement tasks. Estimations of stiffness profiles are possible with only one perturbation, making our method a useful tool for estimating limb stiffness during motor learning and adaptation tasks, and for understanding the modulation of stiffness in individuals with neurodegenerative diseases.
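    The core frequency-domain idea, reading stiffness off the spectral peak of the response to an impulsive perturbation, can be sketched for a one-degree-of-freedom arm model. This uses a plain FFT instead of the paper's reassigned spectrogram, and the mass, stiffness, and damping values are illustrative:

```python
import numpy as np

# simulated impulse response of a 1-DOF mass-spring-damper "arm"
# (parameter values are illustrative, not from the paper)
m, k, zeta = 1.0, 100.0, 0.05
wn = np.sqrt(k / m)                  # natural frequency, rad/s
wd = wn * np.sqrt(1 - zeta ** 2)     # damped natural frequency
fs, T = 200.0, 20.0
t = np.arange(0, T, 1 / fs)
x = np.exp(-zeta * wn * t) * np.sin(wd * t)

# frequency-domain estimate: the spectral peak sits near the damped
# natural frequency, so k ~= m * (2*pi*f_peak)^2 for light damping
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
f_peak = freqs[np.argmax(spec)]
k_est = m * (2 * np.pi * f_peak) ** 2
```

A single perturbation suffices because the whole estimate comes from one response record, which is the property the paper exploits on a trial-by-trial basis.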

  8. Measuring multi-joint stiffness during single movements: numerical validation of a novel time-frequency approach.

    Directory of Open Access Journals (Sweden)

    Davide Piovesan

    Full Text Available This study presents and validates a time-frequency technique for measuring 2-dimensional multijoint arm stiffness throughout a single planar movement, as well as during static posture. It is proposed as an alternative to current regressive methods, which require numerous repetitions to obtain average stiffness on a small segment of the hand trajectory. The method is based on the analysis of the reassigned spectrogram of the arm's response to impulsive perturbations and can estimate arm stiffness on a trial-by-trial basis. Analytic and empirical methods are first derived and tested through modal analysis on synthetic data. The technique's accuracy and robustness are assessed by modeling the estimation of stiffness time profiles changing at different rates and affected by different noise levels. Our method obtains results comparable with two well-known regressive techniques. We also test how the technique can identify the viscoelastic component of non-linear and higher-than-second-order systems with a non-parametric approach. The technique proposed here is highly robust to noise and can be used easily for both postural and movement tasks. Estimations of stiffness profiles are possible with only one perturbation, making our method a useful tool for estimating limb stiffness during motor learning and adaptation tasks, and for understanding the modulation of stiffness in individuals with neurodegenerative diseases.

  9. Fast and robust multi-atlas segmentation of brain magnetic resonance images

    DEFF Research Database (Denmark)

    Lötjönen, Jyrki Mp; Wolz, Robin; Koikkalainen, Juha R

    2010-01-01

    We introduce an optimised pipeline for multi-atlas brain MRI segmentation. Both accuracy and speed of segmentation are considered. We study different similarity measures used in non-rigid registration. We show that intensity differences for intensity normalised images can be used instead of standard normalised mutual information in registration without compromising the accuracy but leading to a threefold decrease in the computation time. We also study and validate different methods for atlas selection. Finally, we propose two new approaches for combining multi-atlas segmentation and intensity…
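
    Two of the abstract's ingredients can be sketched compactly: sum-of-squared intensity differences as a cheap registration similarity for intensity-normalised images (the substitute for normalised mutual information), and voxel-wise fusion of propagated atlas labels. The Python sketch below is illustrative only; `majority_vote` is a hypothetical stand-in for the paper's fusion step, and the tiny arrays are not brain data.

```python
import numpy as np

def normalise(img):
    """Zero-mean, unit-variance intensity normalisation."""
    return (img - img.mean()) / img.std()

def ssd(a, b):
    """Sum of squared differences between two normalised images."""
    return float(np.sum((a - b) ** 2))

def majority_vote(label_maps):
    """Fuse several propagated atlas label maps voxel-wise."""
    stacked = np.stack(label_maps)                    # (n_atlases, H, W)
    return np.apply_along_axis(
        lambda v: np.bincount(v).argmax(), 0, stacked)

rng = np.random.default_rng(0)
target = normalise(rng.normal(size=(8, 8)))
atlas = normalise(target + 0.1 * rng.normal(size=(8, 8)))
# A well-aligned atlas scores a much lower SSD than an unrelated image
print(ssd(target, atlas) < ssd(target, normalise(rng.normal(size=(8, 8)))))

labels = [np.array([[0, 1], [1, 1]]),
          np.array([[0, 0], [1, 1]]),
          np.array([[0, 1], [0, 1]])]
print(majority_vote(labels))
```

    SSD only needs a subtraction and a sum per voxel, which is why it can be several times faster than mutual-information-based measures once intensities are normalised.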

  10. Discriminative Localization in CNNs for Weakly-Supervised Segmentation of Pulmonary Nodules.

    Science.gov (United States)

    Feng, Xinyang; Yang, Jie; Laine, Andrew F; Angelini, Elsa D

    2017-09-01

    Automated detection and segmentation of pulmonary nodules on lung computed tomography (CT) scans can facilitate early lung cancer diagnosis. Existing supervised approaches for automated nodule segmentation on CT scans require voxel-based annotations for training, which are labor- and time-consuming to obtain. In this work, we propose a weakly-supervised method that generates accurate voxel-level nodule segmentations trained with image-level labels only. By adapting a convolutional neural network (CNN) trained for image classification, our proposed method learns discriminative regions from the activation maps of convolution units at different scales, and identifies the true nodule location with a novel candidate-screening framework. Experimental results on the public LIDC-IDRI dataset demonstrate that our weakly-supervised nodule segmentation framework achieves competitive performance compared to a fully-supervised CNN-based segmentation method.
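
    The discriminative-localization mechanism the abstract adapts (class activation mapping) reduces to a classifier-weighted sum of the last convolutional layer's feature maps. A minimal Python sketch with toy shapes and values, none taken from the paper:

```python
import numpy as np

# Class activation map (CAM) sketch: weight each conv feature map by the
# classifier weight of the target class and sum. Toy stand-ins throughout.
rng = np.random.default_rng(1)

n_maps, H, W = 4, 6, 6
features = rng.random((n_maps, H, W))       # last-conv activations
features[2, 4, 4] += 5.0                    # pretend map 2 fires on a nodule
w_nodule = np.array([0.1, 0.1, 1.0, 0.1])   # classifier weights, "nodule" class

cam = np.tensordot(w_nodule, features, axes=1)     # (H, W) activation map
cam = (cam - cam.min()) / (cam.max() - cam.min())  # normalise to [0, 1]

# Candidate nodule location = peak of the CAM; a threshold (0.8 here,
# arbitrary) would give a coarse mask for candidate screening.
peak = np.unravel_index(np.argmax(cam), cam.shape)
mask = cam > 0.8
print(peak)
```

    In the weakly-supervised setting this map is all the spatial supervision available, which is why the paper adds a candidate-screening step on top of it.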

  11. Dynamically Determining the Toll Plaza Capacity by Monitoring Approaching Traffic Conditions in Real-Time

    Directory of Open Access Journals (Sweden)

    Cheolsun Kim

    2016-03-01

    Full Text Available This study presents an analytical method for dynamically adjusting toll plaza capacity to cope with a sudden shift in demand. The proposed method uses a proxy measure developed from the discharge rate observed at toll plazas and segment travel times measured by probe vehicles. The effectiveness of the method has been evaluated by analyzing empirical data obtained from toll plazas in the San Francisco Bay Area before and after toll plaza capacity changed. Findings indicate that the estimated number of vehicles stored upstream of a toll plaza, based on the discharge rate and travel times, can be used as a proxy measure for predicting the effect of changes in toll plaza capacity. The proposed model can help government agencies dynamically adjust toll plaza capacity in response to a sudden shift in demand due to various failure situations.
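
    The proxy measure can be illustrated with Little's law: vehicles accumulated upstream ≈ discharge rate × probe travel time. A small Python sketch with hypothetical numbers (not the Bay Area data):

```python
# Little's law proxy: a sustained rise in the estimated number of vehicles
# stored upstream signals that plaza capacity should be increased.

def vehicles_stored(discharge_veh_per_h, probe_travel_time_min):
    """Estimated number of vehicles in the upstream segment."""
    return discharge_veh_per_h * probe_travel_time_min / 60.0

# Before and after a demand surge (hypothetical values):
normal = vehicles_stored(discharge_veh_per_h=1800, probe_travel_time_min=3)
surge = vehicles_stored(discharge_veh_per_h=1800, probe_travel_time_min=12)
print(normal, surge)   # 90.0 360.0
```

    Because both inputs are observable in real time (loop detectors and probe vehicles), the estimate can drive the capacity decision without any demand forecasting.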

  12. Optimally segmented magnetic structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bahl, Christian; Bjørk, Rasmus

    We present a semi-analytical algorithm for magnet design problems, which calculates the optimal way to subdivide a given design region into uniformly magnetized segments. The availability of powerful rare-earth magnetic materials such as Nd-Fe-B has broadened the range of applications of permanent magnets[1][2]. However, the powerful rare-earth magnets are generally expensive, so both the scientific and industrial communities have devoted a lot of effort into developing suitable design methods. Even so, many magnet optimization algorithms either are based on heuristic approaches[3]… is not available. We will illustrate the results for magnet design problems from different areas, such as electric motors/generators (as the example in the picture), beam focusing for particle accelerators and magnetic refrigeration devices.

  13. A novel segmentation approach for implementation of MRAC in head PET/MRI employing Short-TE MRI and 2-point Dixon method in a fuzzy C-means framework

    Energy Technology Data Exchange (ETDEWEB)

    Khateri, Parisa; Rad, Hamidreza Saligheh [Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Jafari, Amir Homayoun [Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Ay, Mohammad Reza, E-mail: mohammadreza_ay@tums.ac.ir [Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of)

    2014-01-11

    Quantitative PET image reconstruction requires an accurate map of the attenuation coefficients of the tissue under investigation at 511 keV (μ-map) in order to correct the emission data for attenuation. MRI-based attenuation correction (MRAC) has recently received considerable attention in the scientific literature. One of the major difficulties facing MRAC arises in areas where bone and air collide, e.g. the ethmoidal sinuses in the head. Bone is intrinsically not detectable by conventional MRI, making it difficult to distinguish air from bone. Therefore, the development of more versatile MR sequences to label the bone structure, e.g. ultra-short echo-time (UTE) sequences, plays a significant role in novel methodological developments. However, the long acquisition time and complexity of UTE sequences limit their clinical application. To overcome this problem, we developed a novel combination of a Short-TE (ShTE) pulse sequence to detect the bone signal with a 2-point Dixon technique for water–fat discrimination, along with a robust image segmentation method based on fuzzy C-means (FCM) clustering to segment the head area into four classes: air, bone, soft tissue and adipose tissue. The imaging protocol was set on a clinical 3 T Tim Trio and a 1.5 T Avanto (Siemens Medical Solution, Erlangen, Germany) employing a triple echo-time pulse sequence in the head area. The acquisition parameters were as follows: TE1/TE2/TE3=0.98/4.925/6.155 ms, TR=8 ms, FA=25 on the 3 T system, and TE1/TE2/TE3=1.1/2.38/4.76 ms, TR=16 ms, FA=18 for the 1.5 T system. The second and third echo-times belonged to the Dixon decomposition to distinguish soft and adipose tissues. To quantify the accuracy, sensitivity and specificity of the bone segmentation algorithm, the resulting classes of MR-based segmented bone were compared with the manual segmentation performed by our expert neuro-radiologist. Results for both the 3 T and 1.5 T systems show that bone segmentation applied in several…
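
    The clustering step, fuzzy C-means, can be sketched in a few lines. Below is a minimal 1-D Python illustration with synthetic intensities and invented initial centers, not the study's MR data; real MRAC pipelines cluster multi-channel voxel features rather than a single intensity.

```python
import numpy as np

# Minimal fuzzy C-means (FCM): alternate membership and center updates.
# Four synthetic intensity clusters stand in for air / bone / soft / fat.
rng = np.random.default_rng(0)

x = np.concatenate([rng.normal(mu, 0.05, 50) for mu in (0.0, 0.3, 0.6, 0.9)])
c, m = 4, 2.0                               # clusters, fuzzifier
centers = np.array([0.1, 0.4, 0.5, 0.8])    # rough initial guesses

for _ in range(100):
    d = np.abs(x[None, :] - centers[:, None]) + 1e-12     # (c, n) distances
    # Membership u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
    u = d ** (-2 / (m - 1)) / np.sum(d ** (-2 / (m - 1)), axis=0)
    # Center update: fuzzy-weighted mean of the samples
    centers = (u ** m @ x) / np.sum(u ** m, axis=1)

print(np.round(np.sort(centers), 2))
```

    Unlike hard k-means, every sample keeps a graded membership in all four classes, which is what makes FCM forgiving at ambiguous bone/air boundaries.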

  14. A novel segmentation approach for implementation of MRAC in head PET/MRI employing Short-TE MRI and 2-point Dixon method in a fuzzy C-means framework

    Science.gov (United States)

    Khateri, Parisa; Rad, Hamidreza Saligheh; Jafari, Amir Homayoun; Ay, Mohammad Reza

    2014-01-01

    Quantitative PET image reconstruction requires an accurate map of the attenuation coefficients of the tissue under investigation at 511 keV (μ-map) in order to correct the emission data for attenuation. MRI-based attenuation correction (MRAC) has recently received considerable attention in the scientific literature. One of the major difficulties facing MRAC arises in areas where bone and air collide, e.g. the ethmoidal sinuses in the head. Bone is intrinsically not detectable by conventional MRI, making it difficult to distinguish air from bone. Therefore, the development of more versatile MR sequences to label the bone structure, e.g. ultra-short echo-time (UTE) sequences, plays a significant role in novel methodological developments. However, the long acquisition time and complexity of UTE sequences limit their clinical application. To overcome this problem, we developed a novel combination of a Short-TE (ShTE) pulse sequence to detect the bone signal with a 2-point Dixon technique for water–fat discrimination, along with a robust image segmentation method based on fuzzy C-means (FCM) clustering to segment the head area into four classes: air, bone, soft tissue and adipose tissue. The imaging protocol was set on a clinical 3 T Tim Trio and a 1.5 T Avanto (Siemens Medical Solution, Erlangen, Germany) employing a triple echo-time pulse sequence in the head area. The acquisition parameters were as follows: TE1/TE2/TE3=0.98/4.925/6.155 ms, TR=8 ms, FA=25 on the 3 T system, and TE1/TE2/TE3=1.1/2.38/4.76 ms, TR=16 ms, FA=18 for the 1.5 T system. The second and third echo-times belonged to the Dixon decomposition to distinguish soft and adipose tissues. To quantify the accuracy, sensitivity and specificity of the bone segmentation algorithm, the resulting classes of MR-based segmented bone were compared with the manual segmentation performed by our expert neuro-radiologist. Results for both the 3 T and 1.5 T systems show that bone segmentation applied in several…

  15. A Theoretical Methodology and Prototype Implementation for Detection Segmentation Classification of Digital Mammogram Tumor by Machine Learning and Problem Solving Approach

    OpenAIRE

    Raman Valliappan; Sumari Putra; Rajeswari Mandava

    2010-01-01

    Breast cancer continues to be a significant public health problem in the world. Early detection is the key to improving breast cancer prognosis. CAD systems can provide such help, and they are important and necessary for breast cancer control. Microcalcifications and masses are the two most important indicators of malignancy, and their automated detection is very valuable for early breast cancer diagnosis. The main objective of this paper is to detect, segment and classify the tumor from ...

  16. Image Segmentation Parameter Optimization Considering Within- and Between-Segment Heterogeneity at Multiple Scale Levels: Test Case for Mapping Residential Areas Using Landsat Imagery

    Directory of Open Access Journals (Sweden)

    Brian A. Johnson

    2015-10-01

    Full Text Available Multi-scale/multi-level geographic object-based image analysis (MS-GEOBIA) methods are becoming widely used in remote sensing because single-scale/single-level (SS-GEOBIA) methods are often unable to obtain an accurate segmentation and classification of all land use/land cover (LULC) types in an image. However, there have been few comparisons between SS-GEOBIA and MS-GEOBIA approaches for the purpose of mapping a specific LULC type, so it is not well understood which is more appropriate for this task. In addition, there are few methods for automating the selection of segmentation parameters for MS-GEOBIA, while manual selection (i.e., a trial-and-error approach) of parameters can be quite challenging and time-consuming. In this study, we examined SS-GEOBIA and MS-GEOBIA approaches for extracting residential areas in Landsat 8 imagery, and compared naïve and parameter-optimized segmentation approaches to assess whether unsupervised segmentation parameter optimization (USPO) could improve the extraction of residential areas. Our main findings were: (i) the MS-GEOBIA approaches achieved higher classification accuracies than the SS-GEOBIA approach, and (ii) USPO resulted in more accurate MS-GEOBIA classification results while considerably reducing the number of segmentation levels and classification variables.
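
    The parameter-optimization idea can be sketched on a toy 1-D signal: score each candidate scale parameter by area-weighted within-segment variance (low is good) and between-segment heterogeneity (neighbouring segments should differ), then keep the best. The scoring combination below is a simplified stand-in for the USPO objective, not the paper's exact formulation:

```python
import numpy as np

# Toy unsupervised segmentation parameter optimization on a 1-D "image".
signal = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.9, 9.0, 9.1, 8.8])

def segment(signal, scale):
    """Start a new segment wherever the jump exceeds `scale`."""
    labels = np.zeros(len(signal), dtype=int)
    for i in range(1, len(signal)):
        labels[i] = labels[i - 1] + (abs(signal[i] - signal[i - 1]) > scale)
    return labels

def score(signal, labels):
    segs = [signal[labels == s] for s in np.unique(labels)]
    # Area-weighted within-segment variance: lower means purer segments
    wv = sum(len(s) * s.var() for s in segs) / len(signal)
    means = [s.mean() for s in segs]
    # Between-segment heterogeneity: adjacent segments should differ
    bh = np.mean(np.abs(np.diff(means))) if len(means) > 1 else 0.0
    return wv - bh        # low variance + dissimilar neighbours -> low score

candidates = [0.1, 0.5, 2.0, 10.0]
best = min(candidates, key=lambda c: score(signal, segment(signal, c)))
print(best)
```

    Too small a scale fragments homogeneous regions (high heterogeneity penalty wasted on noise); too large a scale merges distinct regions (high within-segment variance); the optimum sits between.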

  17. A game-theoretic approach to real-time system testing

    DEFF Research Database (Denmark)

    David, Alexandre; Larsen, Kim Guldstrand; Li, Shuhao

    2008-01-01

    This paper presents a game-theoretic approach to the testing of uncontrollable real-time systems. By modelling the systems with Timed I/O Game Automata and specifying the test purposes as Timed CTL formulas, we employ a recently developed timed game solver, UPPAAL-TIGA, to synthesize winning strategies, and then use these strategies to conduct black-box conformance testing of the systems. The testing process is proved to be sound and complete with respect to the given test purposes. A case study and preliminary experimental results indicate that this is a viable approach to uncontrollable timed system testing.

  18. Segmentation of liver tumors on CT images

    International Nuclear Information System (INIS)

    Pescia, D.

    2011-01-01

    This thesis is dedicated to the 3D segmentation of liver tumors in CT images. This is a task of great clinical interest, since it allows physicians to benefit from reproducible and reliable methods for segmenting such lesions. Accurate segmentation would indeed help them during the evaluation of the lesions, the choice of treatment and treatment planning. Such a complex segmentation task must cope with three main scientific challenges: (i) the highly variable shape of the structures being sought, (ii) their similarity of appearance compared with their surrounding medium and finally (iii) the low signal-to-noise ratio observed in these images. This problem is addressed in a clinical context through a two-step approach, consisting of the segmentation of the entire liver envelope before segmenting the tumors present within the envelope. We begin by proposing an atlas-based approach for computing pathological liver envelopes. Initially, images are pre-processed to compute the envelopes that wrap around binary masks, in an attempt to obtain liver envelopes from an estimated segmentation of healthy liver parenchyma. A new statistical atlas is then introduced and used for segmentation through its diffeomorphic registration to the new image. This segmentation is achieved through the combination of image-matching costs as well as spatial and appearance priors, using a multi-scale approach with MRFs. The second step of our approach is dedicated to the segmentation of lesions contained within the envelopes, using a combination of machine learning techniques and graph-based methods. First, an appropriate feature space is considered that involves texture descriptors determined through filtering at various scales and orientations. Then, state-of-the-art machine learning techniques are used to determine the most relevant features, as well as the hyperplane that separates the feature space of tumoral voxels from that of healthy tissues. Segmentation is then…

  19. A new approach for measuring power spectra and reconstructing time series in active galactic nuclei

    Science.gov (United States)

    Li, Yan-Rong; Wang, Jian-Min

    2018-05-01

    We provide a new approach to measure power spectra and reconstruct time series in active galactic nuclei (AGNs) based on the fact that the Fourier transform of AGN stochastic variations is a series of complex Gaussian random variables. The approach parametrizes a stochastic series in frequency domain and transforms it back to time domain to fit the observed data. The parameters and their uncertainties are derived in a Bayesian framework, which also allows us to compare the relative merits of different power spectral density models. The well-developed fast Fourier transform algorithm together with parallel computation enables an acceptable time complexity for the approach.
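
    The premise, that the Fourier coefficients of a stochastic series are complex Gaussian random variables whose variance follows the power spectral density, is the same one behind the classic Timmer & König simulation recipe. A Python sketch with a toy power-law PSD (the PSD model and all numbers are illustrative, not fitted to any AGN data):

```python
import numpy as np

# Parametrize a light curve in the frequency domain as complex Gaussian
# coefficients with PSD-controlled variance, then transform to time domain.
rng = np.random.default_rng(42)

n, dt = 1024, 1.0
freqs = np.fft.rfftfreq(n, d=dt)[1:]          # skip the zero frequency
psd = freqs ** -2.0                            # toy power-law (red-noise) PSD

# Complex Gaussian coefficients, variance proportional to the PSD
coeffs = np.sqrt(psd / 2.0) * (rng.normal(size=freqs.size)
                               + 1j * rng.normal(size=freqs.size))
spectrum = np.concatenate(([0.0], coeffs))     # zero-mean light curve
lightcurve = np.fft.irfft(spectrum, n=n)       # back to the time domain

print(lightcurve.shape, np.isrealobj(lightcurve))
```

    In the paper's Bayesian setting these coefficients become free parameters fitted to the observed (irregularly sampled) data, rather than random draws; the sketch above only shows the forward direction of that parametrization.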

  20. A New Approach to Rational Discrete-Time Approximations to Continuous-Time Fractional-Order Systems

    OpenAIRE

    Matos , Carlos; Ortigueira , Manuel ,

    2012-01-01

    Part 10: Signal Processing; International audience; In this paper a new approach to rational discrete-time approximations to continuous fractional-order systems of the form 1/(sα+p) is proposed. We will show that such fractional-order LTI system can be decomposed into sub-systems. One has the classic behavior and the other is similar to a Finite Impulse Response (FIR) system. The conversion from continuous-time to discrete-time systems will be done using the Laplace transform inversion integr...

  1. Segmented trapped vortex cavity

    Science.gov (United States)

    Grammel, Jr., Leonard Paul (Inventor); Pennekamp, David Lance (Inventor); Winslow, Jr., Ralph Henry (Inventor)

    2010-01-01

    An annular trapped vortex cavity assembly segment includes a cavity forward wall, a cavity aft wall, and a cavity radially outer wall therebetween, defining a cavity segment therein. A cavity opening extends between the forward and aft walls at a radially inner end of the assembly segment. Radially spaced apart pluralities of air injection first and second holes extend through the forward and aft walls respectively. The segment may include first and second expansion joint features at distal first and second ends respectively of the segment. The segment may include a forward subcomponent including the cavity forward wall attached to an aft subcomponent including the cavity aft wall. The forward and aft subcomponents include forward and aft portions of the cavity radially outer wall respectively. A ring of the segments may be circumferentially disposed about an axis to form an annular segmented vortex cavity assembly.

  2. Pavement management segment consolidation

    Science.gov (United States)

    1998-01-01

    Dividing roads into "homogeneous" segments has been a major problem for all areas of highway engineering. SDDOT uses Deighton Associates Limited software, dTIMS, to analyze life-cycle costs for various rehabilitation strategies on each segment of roa...

  3. A Region-Based GeneSIS Segmentation Algorithm for the Classification of Remotely Sensed Images

    Directory of Open Access Journals (Sweden)

    Stelios K. Mylonas

    2015-03-01

    Full Text Available This paper proposes an object-based segmentation/classification scheme for remotely sensed images, based on a novel variant of the recently proposed Genetic Sequential Image Segmentation (GeneSIS) algorithm. GeneSIS segments the image in an iterative manner, whereby at each iteration a single object is extracted via a genetic-based object extraction algorithm. Contrary to the previous pixel-based GeneSIS, where the candidate objects to be extracted were evaluated through the fuzzy content of their included pixels, in the newly developed region-based GeneSIS algorithm, a watershed-driven fine segmentation map is initially obtained from the original image, which serves as the basis for the forthcoming GeneSIS segmentation. Furthermore, in order to enhance the spatial search capabilities, we introduce a more descriptive encoding scheme in the object extraction algorithm, where the structural search modules are represented by polygonal shapes. Our objectives in the new framework are posed as follows: enhance the flexibility of the algorithm in extracting more flexible object shapes, assure high classification accuracies, and reduce the execution time of the segmentation, while at the same time preserving all the inherent attributes of the GeneSIS approach. Finally, exploiting the inherent attribute of GeneSIS to produce multiple segmentations, we also propose two segmentation fusion schemes that operate on the ensemble of segmentations generated by GeneSIS. Our approaches are tested on an urban and two agricultural images. The results show that region-based GeneSIS has considerably lower computational demands compared to the pixel-based one. Furthermore, the suggested methods achieve higher classification accuracies and good segmentation maps compared to a series of existing algorithms.

  4. A new framework for interactive images segmentation

    International Nuclear Information System (INIS)

    Ashraf, M.; Sarim, M.; Shaikh, A.B.

    2017-01-01

    Image segmentation has become a widely studied research problem in image processing. Different graph-based solutions exist for interactive image segmentation, but the domain still needs persistent improvement. The segmentation quality of existing techniques generally depends on the manual input provided at the beginning; therefore, these algorithms may not produce quality segmentations from the initial seed labels provided by a novice user. In this work we investigated the use of cellular automata in image segmentation and proposed a new algorithm that follows a cellular automaton in label propagation. It incorporates both the pixel's local and global information in the segmentation process. We introduced novel global constraints in the automata evolution rules; hence the proposed scheme of automata evolution is more effective than earlier automata-based evolution schemes. Global constraints are also effective in decreasing the sensitivity towards small changes made in the manual input; therefore the proposed approach is less dependent on label seed marks. It can produce quality segmentations with modest user effort. Segmentation results indicate that the proposed algorithm performs better than earlier segmentation techniques. (author)
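
    A cellular automaton for seeded label propagation (GrowCut-style, without the paper's novel global constraints) can be sketched as follows; the tiny image, seeds and attenuation function are all illustrative:

```python
import numpy as np

# GrowCut-style cellular automaton: each cell holds (label, strength); a
# neighbour "attacks" a cell and wins if its strength, attenuated by
# intensity similarity, exceeds the cell's current strength.
image = np.array([[0.0, 0.1, 0.9, 1.0],
                  [0.1, 0.0, 0.8, 0.9],
                  [0.0, 0.2, 0.9, 1.0]])
labels = np.zeros_like(image, dtype=int)       # 0 = unlabelled
strength = np.zeros_like(image)
labels[0, 0], strength[0, 0] = 1, 1.0          # seed: object
labels[0, 3], strength[0, 3] = 2, 1.0          # seed: background

def g(d):
    """Attack attenuation: close intensities propagate strongly."""
    return 1.0 - d

for _ in range(10):                            # iterate to convergence
    new_labels, new_strength = labels.copy(), strength.copy()
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]):
                    continue
                attack = g(abs(image[ny, nx] - image[y, x])) * strength[ny, nx]
                if attack > new_strength[y, x]:
                    new_labels[y, x] = labels[ny, nx]
                    new_strength[y, x] = attack
    labels, strength = new_labels, new_strength

print(labels)
```

    Because strength decays across large intensity jumps, each seed floods its homogeneous region and stops at the contrast boundary; the paper's contribution adds global constraints to these purely local rules.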

  5. Hierarchical image segmentation for learning object priors

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, Lakshman [Los Alamos National Laboratory; Yang, Xingwei [TEMPLE UNIV.; Latecki, Longin J [TEMPLE UNIV.; Li, Nan [TEMPLE UNIV.

    2010-11-10

    The proposed segmentation approach naturally combines experience-based and image-based information. The experience-based information is obtained by training a classifier for each object class. For a given test image, the result of each classifier is represented as a probability map. The final segmentation is obtained with a hierarchical image segmentation algorithm that considers both the probability maps and image features such as color and edge strength. We also utilize the image region hierarchy to obtain not only local but also semi-global features as input to the classifiers. Moreover, to get robust probability maps, we take into account region context information by averaging the probability maps over different levels of the hierarchical segmentation algorithm. The obtained segmentation results are superior to the state-of-the-art supervised image segmentation algorithms.

  6. Speaker segmentation and clustering

    OpenAIRE

    Kotti, M; Moschou, V; Kotropoulos, C

    2008-01-01

    This survey focuses on two challenging speech processing topics, namely: speaker segmentation and speaker clustering. Speaker segmentation aims at finding speaker change points in an audio stream, whereas speaker clustering aims at grouping speech segments based on speaker characteristics. Model-based, metric-based, and hybrid speaker segmentation algorithms are reviewed. Concerning speaker…

  7. Spinal segmental dysgenesis

    Directory of Open Access Journals (Sweden)

    N Mahomed

    2009-06-01

    Full Text Available Spinal segmental dysgenesis is a rare congenital spinal abnormality, seen in neonates and infants, in which a segment of the spine and spinal cord fails to develop normally. The condition is segmental, with normal vertebrae above and below the malformation. This condition is commonly associated with various abnormalities that affect the heart, genitourinary system, gastrointestinal tract and skeletal system. We report two cases of spinal segmental dysgenesis and the associated abnormalities.

  8. Automatic Melody Segmentation

    NARCIS (Netherlands)

    Rodríguez López, Marcelo

    2016-01-01

    The work presented in this dissertation investigates music segmentation. In the field of Musicology, segmentation refers to a score analysis technique, whereby notated pieces or passages of these pieces are divided into “units” referred to as sections, periods, phrases, and so on. Segmentation

  9. Probabilistic retinal vessel segmentation

    Science.gov (United States)

    Wu, Chang-Hua; Agam, Gady

    2007-03-01

    Optic fundus assessment is widely used for diagnosing vascular and non-vascular pathology. Inspection of the retinal vasculature may reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. Due to various imaging conditions, retinal images may be degraded. Consequently, the enhancement of such images and of the vessels in them is an important task with direct clinical applications. We propose a novel technique for vessel enhancement in retinal images that is capable of enhancing vessel junctions in addition to linear vessel segments. This is an extension of vessel filters we have previously developed for vessel enhancement in thoracic CT scans. The proposed approach is based on probabilistic models which can discern vessels and junctions. Evaluation shows the proposed filter is better than several known techniques and is comparable to the state of the art when evaluated on a standard dataset. A ridge-based vessel tracking process is applied on the enhanced image to demonstrate the effectiveness of the enhancement filter.

  10. Fast globally optimal segmentation of cells in fluorescence microscopy images.

    Science.gov (United States)

    Bergeest, Jan-Philip; Rohr, Karl

    2011-01-01

    Accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression in high-throughput screening applications. We propose a new approach for segmenting cell nuclei which is based on active contours and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We also present a numerical approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images of different cell types. We have also performed a quantitative comparison with previous segmentation approaches.

  11. Time-lapse three-dimensional inversion of complex conductivity data using an active time constrained (ATC) approach

    Science.gov (United States)

    Karaoulis, M.; Revil, A.; Werkema, D.D.; Minsley, B.J.; Woodruff, W.F.; Kemna, A.

    2011-01-01

    Induced polarization (more precisely the magnitude and phase of impedance of the subsurface) is measured using a network of electrodes located at the ground surface or in boreholes. This method yields important information related to the distribution of permeability and contaminants in the shallow subsurface. We propose a new time-lapse 3-D modelling and inversion algorithm to image the evolution of complex conductivity over time. We discretize the subsurface using hexahedron cells. Each cell is assigned a complex resistivity or conductivity value. Using the finite-element approach, we model the in-phase and out-of-phase (quadrature) electrical potentials on the 3-D grid, which are then transformed into apparent complex resistivity. Inhomogeneous Dirichlet boundary conditions are used at the boundary of the domain. The calculation of the Jacobian matrix is based on the principles of reciprocity. The goal of time-lapse inversion is to determine the change in the complex resistivity of each cell of the spatial grid as a function of time. Each model along the time axis is called a 'reference space model'. This approach can be simplified into an inverse problem looking for the optimum of several reference space models using the approximation that the material properties vary linearly in time between two subsequent reference models. Regularizations in both space domain and time domain reduce inversion artefacts and improve the stability of the inversion problem. In addition, the use of the time-lapse equations allows the simultaneous inversion of data obtained at different times in just one inversion step (4-D inversion). The advantages of this new inversion algorithm are demonstrated on synthetic time-lapse data resulting from the simulation of a salt tracer test in a heterogeneous random material described by an anisotropic semi-variogram. © 2011 The Authors, Geophysical Journal International © 2011 RAS.
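
    The heart of the approach, jointly inverting all time steps with a temporal regularization term tying successive models together, can be written as one stacked linear least-squares problem. The Python sketch below is a drastic simplification of the paper's 3-D complex-resistivity inversion: a tiny linear forward operator, two time steps, and invented weights:

```python
import numpy as np

# Joint two-step inversion: data fit for m1 and m2, spatial smoothness on
# each, and a temporal term alpha*(m2 - m1) ~ 0 tying the models together.
rng = np.random.default_rng(3)

n = 5                                   # model cells
G = rng.normal(size=(8, n))             # toy forward operator (both times)
m1_true = np.ones(n)
m2_true = m1_true.copy(); m2_true[2] = 2.0   # property changes in one cell
d1 = G @ m1_true + 0.01 * rng.normal(size=8)
d2 = G @ m2_true + 0.01 * rng.normal(size=8)

L = np.diff(np.eye(n), axis=0)          # spatial first-difference operator
lam, alpha = 0.1, 1.0                   # spatial / temporal weights
Z = np.zeros
A = np.block([[G,                 Z((8, n))],
              [Z((8, n)),         G],
              [lam * L,           Z((n - 1, n))],
              [Z((n - 1, n)),     lam * L],
              [alpha * np.eye(n), -alpha * np.eye(n)]])
b = np.concatenate([d1, d2, Z(n - 1), Z(n - 1), Z(n)])
m = np.linalg.lstsq(A, b, rcond=None)[0]
m1_est, m2_est = m[:n], m[n:]
print(np.round(m2_est - m1_est, 2))
```

    The temporal term suppresses spurious cell-to-cell flicker between time steps while still letting a genuine change (here in cell 2) come through; the paper extends this to many reference models with linear interpolation between them.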

  12. Real-time risk monitoring in business processes : a sensor-based approach

    NARCIS (Netherlands)

    Conforti, R.; La Rosa, M.; Fortino, G.; Hofstede, ter A.H.M.; Recker, J.; Adams, M.

    2013-01-01

    This article proposes an approach for real-time monitoring of risks in executable business process models. The approach considers risks in all phases of the business process management lifecycle, from process design, where risks are defined on top of process models, through to process diagnosis,

  13. Using the mean approach in pooling cross-section and time series data for regression modelling

    International Nuclear Information System (INIS)

    Nuamah, N.N.N.N.

    1989-12-01

    The mean approach is one of the methods for pooling cross-section and time-series data for mathematical-statistical modelling. Though a simple approach, its results are sometimes paradoxical in nature. However, researchers continue to use it for its simplicity. Here, the paper investigates the nature and source of such unwanted phenomena. (author). 7 refs
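
    The mean approach itself is simple to state: collapse each cross-section unit's time series to its mean, then fit the regression on the unit means. A Python sketch with synthetic panel data (all numbers invented):

```python
import numpy as np

# "Mean approach" to pooling: N units x T periods collapse to N unit means.
rng = np.random.default_rng(7)

N, T = 6, 10
x = rng.normal(size=(N, T))
y = 2.0 * x + 0.5 + 0.1 * rng.normal(size=(N, T))   # true slope 2, intercept 0.5

x_bar, y_bar = x.mean(axis=1), y.mean(axis=1)       # collapse time dimension
slope, intercept = np.polyfit(x_bar, y_bar, 1)
print(round(slope, 2), round(intercept, 2))
```

    Note that collapsing NT observations to N discards all within-unit variation over time, which is one route by which the paradoxical results the paper investigates can arise.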

  14. A combined rheology and time domain NMR approach for determining water distributions in protein blends

    NARCIS (Netherlands)

    Dekkers, Birgit L.; Kort, de Daan W.; Grabowska, Katarzyna J.; Tian, Bei; As, Van Henk; Goot, van der Atze Jan

    2016-01-01

    We present a combined time domain NMR and rheology approach to quantify the water distribution in a phase separated protein blend. The approach forms the basis for a new tool to assess the microstructural properties of phase separated biopolymer blends, making it highly relevant for many food and

  15. The LOFT Ground Segment

    DEFF Research Database (Denmark)

    Bozzo, E.; Antonelli, A.; Argan, A.

    2014-01-01

    targets per orbit (~90 minutes), providing roughly ~80 GB of proprietary data per day (the proprietary period will be 12 months). The WFM continuously monitors about 1/3 of the sky at a time and provides data for about ~100 sources a day, resulting in a total of ~20 GB of additional telemetry. The LOFT...... Burst alert System additionally identifies on-board bright impulsive events (e.g., Gamma-ray Bursts, GRBs) and broadcasts the corresponding position and trigger time to the ground using a dedicated system of ~15 VHF receivers. All WFM data are planned to be made public immediately. In this contribution...... we summarize the planned organization of the LOFT ground segment (GS), as established in the mission Yellow Book 1 . We describe the expected GS contributions from ESA and the LOFT consortium. A review is provided of the planned LOFT data products and the details of the data flow, archiving...

  16. Nutrition Targeting by Food Timing: Time-Related Dietary Approaches to Combat Obesity and Metabolic Syndrome1234

    Science.gov (United States)

    Sofer, Sigal; Stark, Aliza H; Madar, Zecharia

    2015-01-01

    Effective nutritional guidelines for reducing abdominal obesity and metabolic syndrome are urgently needed. Over the years, many different dietary regimens have been studied as possible treatment alternatives. The efficacy of low-calorie diets, diets with different proportions of fat, protein, and carbohydrates, traditional healthy eating patterns, and evidence-based dietary approaches were evaluated. Reviewing literature published in the last 5 y reveals that these diets may improve risk factors associated with obesity and metabolic syndrome. However, each diet has limitations ranging from high dropout rates to maintenance difficulties. In addition, most of these dietary regimens have the ability to attenuate some, but not all, of the components involved in this complicated multifactorial condition. Recently, interest has arisen in the time of day foods are consumed (food timing). Studies have examined the implications of eating at the right or wrong time, restricting eating hours, time allocation for meals, and timing of macronutrient consumption during the day. In this paper we review new insights into well-known dietary therapies as well as innovative time-associated dietary approaches for treating obesity and metabolic syndrome. We discuss results from systematic meta-analyses, clinical interventions, and animal models. PMID:25770260

  17. Functional approach to a time-dependent self-consistent field theory

    International Nuclear Information System (INIS)

    Reinhardt, H.

    1979-01-01

    The time-dependent Hartree-Fock approximation is formulated within the path integral approach. It is shown that by a suitable choice of the collective field the classical equation of motion of the collective field coincides with the time-dependent Hartree (TDH) equation. The consideration is restricted to the TDH equation, since the exchange terms do not appear in the functional approach on the same footing as the direct terms

  18. A new approach for reliability analysis with time-variant performance characteristics

    International Nuclear Information System (INIS)

    Wang, Zequn; Wang, Pingfeng

    2013-01-01

    Reliability represents the safety level in industry practice and may vary due to time-variant operating conditions and component deterioration throughout a product life-cycle. Thus, the capability to perform time-variant reliability analysis is of vital importance in practical engineering applications. This paper presents a new approach, referred to as nested extreme response surface (NERS), that can efficiently tackle the time dependency issue in time-variant reliability analysis and solve such problems by integrating easily with advanced time-independent tools. The key of the NERS approach is to build a nested response surface of time corresponding to the extreme value of the limit state function by employing a Kriging model. To obtain the data for the Kriging model, the efficient global optimization technique is integrated with NERS to extract the extreme time responses of the limit state function for any given system input. An adaptive response prediction and model maturation mechanism is developed based on mean square error (MSE) to concurrently improve the accuracy and computational efficiency of the proposed approach. With the nested response surface of time, time-variant reliability analysis can be converted into time-independent reliability analysis, and existing advanced reliability analysis methods can be used. Three case studies are used to demonstrate the efficiency and accuracy of the NERS approach.
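The nesting idea can be sketched in a few lines: for each system input, the extreme (most critical) time response of the limit state function is extracted, and reliability analysis then proceeds time-independently on that extreme response. In this minimal Python illustration the Kriging surrogate and the efficient global optimization are replaced by a direct grid search, and the limit state function is hypothetical:

```python
import math, random

# Simplified sketch of the NERS idea: nest the extreme-over-time response
# inside a time-independent Monte Carlo reliability analysis.
# The limit state g(x, t) below is hypothetical.

def g(x, t):
    return x - (1.0 + 0.5 * math.sin(t))   # capacity minus time-variant demand

def extreme_response(x, T=6.283, steps=200):
    """Worst-case (minimum) g over [0, T] by grid search, and its time.
    (The paper uses Kriging + efficient global optimization instead.)"""
    best_t, best_g = 0.0, g(x, 0.0)
    for i in range(1, steps + 1):
        t = T * i / steps
        gt = g(x, t)
        if gt < best_g:
            best_t, best_g = t, gt
    return best_t, best_g

def failure_probability(samples):
    """Time-independent Monte Carlo on the nested extreme response."""
    fails = sum(1 for x in samples if extreme_response(x)[1] < 0.0)
    return fails / len(samples)

random.seed(0)
samples = [random.gauss(2.0, 0.5) for _ in range(2000)]
pf = failure_probability(samples)
```

Here the minimum of g over the period is x - 1.5, so failure corresponds to x < 1.5 and pf approximates the corresponding Gaussian tail probability.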

  19. A Metrics-Based Approach to Intrusion Detection System Evaluation for Distributed Real-Time Systems

    Science.gov (United States)

    2002-04-01

    Based Approach to Intrusion Detection System Evaluation for Distributed Real-Time Systems. Authors: G. A. Fink, B. L. Chappell, T. G. Turner, and...Distributed, Security. 1 Introduction. Processing and cost requirements are driving future naval combat platforms to use distributed, real-time systems of...distributed, real-time systems. As these systems grow more complex, the timing requirements do not diminish; indeed, they may become more constrained

  20. Hybrid Clustering And Boundary Value Refinement for Tumor Segmentation using Brain MRI

    Science.gov (United States)

    Gupta, Anjali; Pahuja, Gunjan

    2017-08-01

    Brain tumor segmentation is the separation of the tumor area from brain magnetic resonance (MR) images. A number of methods already exist for efficient segmentation of brain tumors; nevertheless, identifying the tumor in MR images remains a tedious task. The segmentation process extracts the different tumor tissues, such as active tumor, necrosis, and edema, from the normal brain tissues: gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). Survey studies indicate that brain tumors are usually detected easily from brain MR images using region-based approaches, but the required level of accuracy and the classification of abnormalities are not predictable. Brain tumor segmentation consists of many stages, and manually segmenting the tumor from brain MR images is very time consuming, so manual segmentation poses many challenges. In this research paper, our main goal is to present a hybrid clustering approach that combines fuzzy C-means clustering (for accurate tumor detection) and the level set method (for handling complex shapes) to detect the exact shape of the tumor in minimal computational time. Using this approach, we observe that for a certain set of images tumor detection takes 0.9412 s, which is much less than a recent existing algorithm, i.e., hybrid clustering with fuzzy C-means and K-means clustering.
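The fuzzy C-means stage of such a hybrid pipeline can be sketched as below (a minimal Python illustration on hypothetical 1-D pixel intensities; the level set refinement described above is omitted):

```python
# Minimal fuzzy C-means sketch: soft clustering of pixel intensities into
# c clusters. Intensities, cluster count, and threshold are hypothetical.

def fcm(values, c=2, m=2.0, iters=50):
    """Return cluster centers and membership matrix u[i][k]."""
    centers = [min(values), max(values)][:c]
    u = [[0.0] * c for _ in values]
    for _ in range(iters):
        # update memberships: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        for i, v in enumerate(values):
            for k in range(c):
                d_k = abs(v - centers[k]) or 1e-12
                s = sum((d_k / (abs(v - centers[j]) or 1e-12))
                        ** (2.0 / (m - 1.0)) for j in range(c))
                u[i][k] = 1.0 / s
        # update centers as membership-weighted means
        for k in range(c):
            num = sum((u[i][k] ** m) * v for i, v in enumerate(values))
            den = sum(u[i][k] ** m for i in range(len(values)))
            centers[k] = num / den
    return centers, u

# bright "tumor" pixels vs dark background (hypothetical intensities)
pixels = [0.1, 0.15, 0.12, 0.9, 0.95, 0.88, 0.11, 0.92]
centers, u = fcm(pixels)
tumor = [i for i, row in enumerate(u) if row[1] > 0.5]
```

In the hybrid method, the fuzzy region found here would seed a level set that evolves toward the exact tumor boundary.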

  1. Texture analysis of cardiac cine magnetic resonance imaging to detect nonviable segments in patients with chronic myocardial infarction.

    Science.gov (United States)

    Larroza, Andrés; López-Lereu, María P; Monmeneu, José V; Gavara, Jose; Chorro, Francisco J; Bodí, Vicente; Moratal, David

    2018-04-01

    To investigate the ability of texture analysis to differentiate between infarcted nonviable, viable, and remote segments on cardiac cine magnetic resonance imaging (MRI). This retrospective study included 50 patients suffering from chronic myocardial infarction. The data were randomly split into training (30 patients) and testing (20 patients) sets. The left ventricular myocardium was segmented according to the 17-segment model in both cine and late gadolinium enhancement (LGE) MRI. Infarcted myocardium regions were identified on LGE in short-axis views. Nonviable segments were identified as those showing LGE ≥ 50%, and viable segments those showing 0 cine images. A support vector machine (SVM) classifier was trained with different combinations of texture features to obtain a model that provided optimal classification performance. The best classification on the testing set was achieved with local binary pattern features using a 2D + t approach, in which the features are computed by including information from the time dimension available in cine sequences. The best overall area under the receiver operating characteristic curve (AUC) was 0.849, with a sensitivity of 92% to detect nonviable segments, 72% to detect viable segments, and 85% to detect remote segments. Nonviable segments can be detected on cine MRI using texture analysis, and this may be used as a hypothesis for future research aiming to detect the infarcted myocardium by means of a gadolinium-free approach. © 2018 American Association of Physicists in Medicine.

  2. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing; Koltun, Vladlen; Guibas, Leonidas

    2011-01-01

    program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape

  3. So many genes, so little time: A practical approach to divergence-time estimation in the genomic era.

    Science.gov (United States)

    Smith, Stephen A; Brown, Joseph W; Walker, Joseph F

    2018-01-01

    Phylogenomic datasets have been successfully used to address questions involving evolutionary relationships, patterns of genome structure, signatures of selection, and gene and genome duplications. However, despite the recent explosion in genomic and transcriptomic data, the utility of these data sources for efficient divergence-time inference remains unexamined. Phylogenomic datasets pose two distinct problems for divergence-time estimation: (i) the volume of data makes inference of the entire dataset intractable, and (ii) the extent of underlying topological and rate heterogeneity across genes makes model mis-specification a real concern. "Gene shopping", wherein a phylogenomic dataset is winnowed to a set of genes with desirable properties, represents an alternative approach that holds promise in alleviating these issues. We implemented an approach for phylogenomic datasets (available in SortaDate) that filters genes by three criteria: (i) clock-likeness, (ii) reasonable tree length (i.e., discernible information content), and (iii) least topological conflict with a focal species tree (presumed to have already been inferred). Such a winnowing procedure ensures that errors associated with model (both clock and topology) mis-specification are minimized, therefore reducing error in divergence-time estimation. We demonstrated the efficacy of this approach through simulation and applied it to published animal (Aves, Diplopoda, and Hymenoptera) and plant (carnivorous Caryophyllales, broad Caryophyllales, and Vitales) phylogenomic datasets. By quantifying rate heterogeneity across both genes and lineages we found that every empirical dataset examined included genes with clock-like, or nearly clock-like, behavior. Moreover, many datasets had genes that were clock-like, exhibited reasonable evolutionary rates, and were mostly compatible with the species tree. We identified overlap in age estimates when analyzing these filtered genes under strict clock and uncorrelated
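The clock-likeness criterion can be illustrated with a common proxy, the variance of root-to-tip path lengths per gene tree (a hedged sketch with hypothetical distances; SortaDate's actual internals and its other two filters are not reproduced here):

```python
# Hedged sketch of the "gene shopping" idea: score each gene tree by the
# variance of its root-to-tip path lengths (lower = more clock-like) and
# keep the most clock-like genes. Distances below are hypothetical.

def clocklikeness(root_to_tip):
    """Variance of root-to-tip lengths; lower means more clock-like."""
    n = len(root_to_tip)
    mean = sum(root_to_tip) / n
    return sum((d - mean) ** 2 for d in root_to_tip) / n

def shop_genes(genes, top=2):
    """Keep the `top` most clock-like genes."""
    return sorted(genes, key=lambda g: clocklikeness(genes[g]))[:top]

genes = {
    "g1": [0.10, 0.11, 0.10, 0.12],   # nearly clock-like
    "g2": [0.05, 0.30, 0.10, 0.45],   # strongly rate-heterogeneous
    "g3": [0.20, 0.21, 0.19, 0.20],   # clock-like
}
kept = shop_genes(genes)
```

In the full procedure this score would be combined with tree length and topological conflict against the focal species tree before divergence-time estimation.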

  4. Anterior corpectomy via the mini-open, extreme lateral, transpsoas approach combined with short-segment posterior fixation for single-level traumatic lumbar burst fractures: analysis of health-related quality of life outcomes and patient satisfaction.

    Science.gov (United States)

    Theologis, Alexander A; Tabaraee, Ehsan; Toogood, Paul; Kennedy, Abbey; Birk, Harjus; McClellan, R Trigg; Pekmezci, Murat

    2016-01-01

    The authors present clinical outcome data and satisfaction of patients who underwent minimally invasive vertebral body corpectomy and cage placement via a mini-open, extreme lateral, transpsoas approach and posterior short-segment instrumentation for lumbar burst fractures. Patients with unstable lumbar burst fractures who underwent corpectomy and anterior column reconstruction via a mini-open, extreme lateral, transpsoas approach with short-segment posterior fixation were reviewed retrospectively. Demographic information, operative parameters, perioperative radiographic measurements, and complications were analyzed. Patient-reported outcome instruments (Oswestry Disability Index [ODI], 12-Item Short Form Health Survey [SF-12]) and an anterior scar-specific patient satisfaction questionnaire were recorded at the latest follow-up. Twelve patients (7 men, 5 women, average age 42 years, range 22-68 years) met the inclusion criteria. Lumbar corpectomies with anterior column support were performed (L-1, n = 8; L-2, n = 2; L-3, n = 2) and supplemented with short-segment posterior instrumentation (4 open, 8 percutaneous). Four patients had preoperative neurological deficits, all of which improved after surgery. No new neurological complications were noted. The anterior incision on average was 6.4 cm (range 5-8 cm) in length, caused mild pain and disability, and was aesthetically acceptable to the large majority of patients. Three patients required chest tube placement for pleural violation, and 1 patient required reoperation for cage subsidence/hardware failure. Average clinical follow-up was 38 months (range 16-68 months), and average radiographic follow-up was 37 months (range 6-68 months). Preoperative lumbar lordosis and focal lordosis were significantly improved/maintained after surgery. Patients were satisfied with their outcomes, had minimal/moderate disability (average ODI score 20, range 0-52), and had good physical (SF-12 physical component score 41.7% ± 10

  5. Benchmarking the stochastic time-dependent variational approach for excitation dynamics in molecular aggregates

    Energy Technology Data Exchange (ETDEWEB)

    Chorošajev, Vladimir [Department of Theoretical Physics, Faculty of Physics, Vilnius University, Sauletekio 9-III, 10222 Vilnius (Lithuania); Gelzinis, Andrius; Valkunas, Leonas [Department of Theoretical Physics, Faculty of Physics, Vilnius University, Sauletekio 9-III, 10222 Vilnius (Lithuania); Department of Molecular Compound Physics, Center for Physical Sciences and Technology, Sauletekio 3, 10222 Vilnius (Lithuania); Abramavicius, Darius, E-mail: darius.abramavicius@ff.vu.lt [Department of Theoretical Physics, Faculty of Physics, Vilnius University, Sauletekio 9-III, 10222 Vilnius (Lithuania)

    2016-12-20

    Highlights: • The Davydov ansätze can be used for finite temperature simulations with an extension. • The accuracy is high if the system is strongly coupled to the environmental phonons. • The approach can simulate time-resolved fluorescence spectra. - Abstract: The time-dependent variational approach is a convenient method to characterize the excitation dynamics in molecular aggregates for different strengths of the system-bath interaction, and it does not require any additional perturbative schemes. Until recently, however, this method was only applicable in the zero-temperature case. It has become possible to extend this method to finite temperatures with the introduction of the stochastic time-dependent variational approach. Here we present a comparison between this approach and the exact hierarchical equations of motion approach for describing excitation dynamics over a broad range of temperatures. We calculate electronic population evolution, absorption and auxiliary time-resolved fluorescence spectra in different regimes and find that the stochastic approach shows excellent agreement with the exact approach when the system-bath coupling is sufficiently large and temperatures are high. The differences between the two methods are larger when temperatures are lower or the system-bath coupling is small.

  6. Fast and robust segmentation of white blood cell images by self-supervised learning.

    Science.gov (United States)

    Zheng, Xin; Wang, Yong; Wang, Guoyou; Liu, Jianguo

    2018-04-01

    A fast and accurate white blood cell (WBC) segmentation remains a challenging task, as different WBCs vary significantly in color and shape due to cell type differences, staining technique variations and the adhesion between the WBC and red blood cells. In this paper, a self-supervised learning approach, consisting of unsupervised initial segmentation and supervised segmentation refinement, is presented. The first module extracts the overall foreground region from the cell image by K-means clustering, and then generates a coarse WBC region by touching-cell splitting based on concavity analysis. The second module further uses the coarse segmentation result of the first module as automatic labels to actively train a support vector machine (SVM) classifier. Then, the trained SVM classifier is further used to classify each pixel of the image and achieve a more accurate segmentation result. To improve its segmentation accuracy, median color features representing the topological structure and a new weak edge enhancement operator (WEEO) handling fuzzy boundary are introduced. To further reduce its time cost, an efficient cluster sampling strategy is also proposed. We tested the proposed approach with two blood cell image datasets obtained under various imaging and staining conditions. The experiment results show that our approach has a superior performance of accuracy and time cost on both datasets. Copyright © 2018 Elsevier Ltd. All rights reserved.
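The unsupervised first module can be sketched as below (a minimal Python K-means on hypothetical gray levels; the concavity-based touching-cell splitting and the subsequent SVM refinement are omitted):

```python
# Sketch of the unsupervised first stage only: K-means clustering of pixel
# values to separate the foreground (WBC) region from the background.
# Pixel values and k are hypothetical.

def kmeans(values, k=2, iters=25):
    centers = [min(values), max(values)][:k]
    labels = [0] * len(values)
    for _ in range(iters):
        # assign each value to its nearest center
        labels = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        # move each center to the mean of its members
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, labels

# dark stained nuclei vs bright background (hypothetical gray levels)
pixels = [30, 35, 28, 200, 210, 190, 33, 205]
centers, labels = kmeans(pixels)
foreground = [i for i, l in enumerate(labels) if l == 0]  # darker cluster
```

In the full method, this coarse result would label training pixels for an SVM that then re-classifies every pixel for a finer segmentation.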

  7. Human body segmentation via data-driven graph cut.

    Science.gov (United States)

    Li, Shifeng; Lu, Huchuan; Shao, Xingqing

    2014-11-01

    Human body segmentation is a challenging and important problem in computer vision. Existing methods usually entail a time-consuming training phase for prior knowledge learning with complex shape matching for body segmentation. In this paper, we propose a data-driven method that integrates top-down body pose information and bottom-up low-level visual cues for segmenting humans in static images within the graph cut framework. The key idea of our approach is first to exploit human kinematics to search for body part candidates via dynamic programming for high-level evidence. Then, by using the body parts classifiers, obtaining bottom-up cues of human body distribution for low-level evidence. All the evidence collected from top-down and bottom-up procedures are integrated in a graph cut framework for human body segmentation. Qualitative and quantitative experiment results demonstrate the merits of the proposed method in segmenting human bodies with arbitrary poses from cluttered backgrounds.

  8. Calculation of the tunneling time using the extended probability of the quantum histories approach

    International Nuclear Information System (INIS)

    Rewrujirek, Jiravatt; Hutem, Artit; Boonchui, Sutee

    2014-01-01

    The dwell time of quantum tunneling has been derived by Steinberg (1995) [7] as a function of the relation between the transmission and reflection times τ_t and τ_r, weighted by the transmissivity and the reflectivity. In this paper, we reexamine the dwell time using the extended probability approach. The dwell time is calculated as the weighted average of three mutually exclusive events. We also consider the scattering process due to a resonance potential in the long-time limit. The results show that the dwell time can be expressed as the weighted sum of transmission, reflection and internal probabilities.
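The weighted relation described above can be written as follows (notation assumed here, not taken from the record: |T|² denotes the transmissivity and |R|² the reflectivity):

```latex
% Dwell time as the probability-weighted average of the transmission
% and reflection times (symbols assumed):
\tau_{d} = |T|^{2}\,\tau_{t} + |R|^{2}\,\tau_{r}
```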

  9. Brain Tumor Image Segmentation in MRI Image

    Science.gov (United States)

    Peni Agustin Tjahyaningtijas, Hapsari

    2018-04-01

    Brain tumor segmentation plays an important role in medical image processing. Treatment of patients with brain tumors is highly dependent on early detection of these tumors, and early detection improves a patient's life chances. Diagnosis of brain tumors by experts usually relies on manual segmentation, which is difficult and time consuming; hence automatic segmentation is necessary. Nowadays automatic segmentation is very popular and can be a solution to the problem of brain tumor segmentation with better performance. The purpose of this paper is to provide a review of MRI-based brain tumor segmentation methods. There are a number of existing review papers focusing on traditional methods for MRI-based brain tumor image segmentation; in this paper, we focus on the recent trend of automatic segmentation in this field. First, an introduction to brain tumors and methods for brain tumor segmentation is given. Then, the state-of-the-art algorithms, with a focus on the recent trend of fully automatic segmentation, are discussed. Finally, an assessment of the current state is presented and future developments to standardize MRI-based brain tumor segmentation methods into daily clinical routine are addressed.

  10. HARDWARE REALIZATION OF CANNY EDGE DETECTION ALGORITHM FOR UNDERWATER IMAGE SEGMENTATION USING FIELD PROGRAMMABLE GATE ARRAYS

    Directory of Open Access Journals (Sweden)

    ALEX RAJ S. M.

    2017-09-01

    Full Text Available Underwater images have raised new challenges in the field of digital image processing technology in recent years because of their widespread applications. There are many tangled matters to be considered in processing images collected from a water medium, due to the adverse effects imposed by the environment itself. Image segmentation is preferred as the basal stage of many digital image processing techniques; it distinguishes multiple segments in an image and reveals the hidden crucial information required for a particular application. Many general-purpose algorithms and techniques have been developed for image segmentation. Discontinuity-based segmentation is the most promising approach, among which Canny edge detection based segmentation is preferred for its high level of noise immunity and its ability to tackle the underwater environment. Since a real-time underwater image segmentation algorithm is computationally complex, an efficient hardware implementation has to be considered. The FPGA-based realization of the referred segmentation algorithm is presented in this paper.
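The gradient stage of a Canny-style detector can be sketched in software as follows (plain Python for clarity; an FPGA pipeline would implement the same convolutions in fixed-point logic, and the non-maximum suppression and hysteresis stages are omitted):

```python
# Sobel gradient-magnitude stage of a Canny-style edge detector,
# applied to a synthetic image with one vertical step edge.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# synthetic image: dark left half, bright right half -> vertical edge
img = [[0] * 4 + [255] * 4 for _ in range(6)]
mag = gradient_magnitude(img)
edge_cols = {x for row in mag for x in range(8) if row[x] > 0}
```

The per-pixel 3×3 convolutions and the magnitude computation are exactly the operations that map naturally onto parallel FPGA logic.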

  11. Segmentation of 3D ultrasound computer tomography reflection images using edge detection and surface fitting

    Science.gov (United States)

    Hopp, T.; Zapf, M.; Ruiter, N. V.

    2014-03-01

    An essential processing step for comparison of Ultrasound Computer Tomography images to other modalities, as well as for the use in further image processing, is to segment the breast from the background. In this work we present a (semi-)automated 3D segmentation method which is based on the detection of the breast boundary in coronal slice images and a subsequent surface fitting. The method was evaluated using a software phantom and in-vivo data. The fully automatically processed phantom results showed that a segmentation of approx. 10% of the slices of a dataset is sufficient to recover the overall breast shape. Application to 16 in-vivo datasets was performed successfully using semi-automated processing, i.e. using a graphical user interface for manual corrections of the automated breast boundary detection. The processing time for the segmentation of an in-vivo dataset could be significantly reduced by a factor of four compared to a fully manual segmentation. Comparison to manually segmented images identified a smoother surface for the semi-automated segmentation with an average of 11% of differing voxels and an average surface deviation of 2 mm. Limitations of the edge detection may be overcome by future updates of the KIT USCT system, allowing a fully-automated usage of our segmentation approach.

  12. Interactive lung segmentation in abnormal human and animal chest CT scans

    International Nuclear Information System (INIS)

    Kockelkorn, Thessa T. J. P.; Viergever, Max A.; Schaefer-Prokop, Cornelia M.; Bozovic, Gracijela; Muñoz-Barrutia, Arrate; Rikxoort, Eva M. van; Brown, Matthew S.; Jong, Pim A. de; Ginneken, Bram van

    2014-01-01

    Purpose: Many medical image analysis systems require segmentation of the structures of interest as a first step. For scans with gross pathology, automatic segmentation methods may fail. The authors’ aim is to develop a versatile, fast, and reliable interactive system to segment anatomical structures. In this study, this system was used for segmenting lungs in challenging thoracic computed tomography (CT) scans. Methods: In volumetric thoracic CT scans, the chest is segmented and divided into 3D volumes of interest (VOIs), containing voxels with similar densities. These VOIs are automatically labeled as either lung tissue or nonlung tissue. The automatic labeling results can be corrected using an interactive or a supervised interactive approach. When using the supervised interactive system, the user is shown the classification results per slice, whereupon he/she can adjust incorrect labels. The system is retrained continuously, taking the corrections and approvals of the user into account. In this way, the system learns to make a better distinction between lung tissue and nonlung tissue. When using the interactive framework without supervised learning, the user corrects all incorrectly labeled VOIs manually. Both interactive segmentation tools were tested on 32 volumetric CT scans of pigs, mice and humans, containing pulmonary abnormalities. Results: On average, supervised interactive lung segmentation took under 9 min of user interaction. Algorithm computing time was 2 min on average, but can easily be reduced. On average, 2.0% of all VOIs in a scan had to be relabeled. Lung segmentation using the interactive segmentation method took on average 13 min and involved relabeling 3.0% of all VOIs on average. The resulting segmentations correspond well to manual delineations of eight axial slices per scan, with an average Dice similarity coefficient of 0.933. Conclusions: The authors have developed two fast and reliable methods for interactive lung segmentation in

  13. Reconstructing in space and time the closure of the middle and western segments of the Bangong-Nujiang Tethyan Ocean in the Tibetan Plateau

    Science.gov (United States)

    Fan, Jian-Jun; Li, Cai; Wang, Ming; Xie, Chao-Ming

    2018-01-01

    When and how the Bangong-Nujiang Tethyan Ocean closed is a highly controversial subject. In this paper, we present a detailed study and review of the Cretaceous ophiolites, ocean islands, and flysch deposits in the middle and western segments of the Bangong-Nujiang suture zone (BNSZ), and the Cretaceous volcanic rocks, late Mesozoic sediments, and unconformities within the BNSZ and surrounding areas. Our aim was to reconstruct the spatial-temporal patterns of the closing of the middle and western segments of the Bangong-Nujiang Tethyan Ocean. Our conclusion is that the closure of the ocean started during the Late Jurassic and was mainly complete by the end of the Early Cretaceous. The closure of the ocean involved both "longitudinal diachronous closure" from north to south and "transverse diachronous closure" from east to west. The spatial-temporal patterns of the closure process can be summarized as follows: the development of the Bangong-Nujiang Tethyan oceanic lithosphere and its subduction started before the Late Jurassic; after the Late Jurassic, the ocean began to close because of the compressional regime surrounding the BNSZ; along the northern margin of the Bangong-Nujiang Tethyan Ocean, collisions involving the arcs, back-arc basins, and marginal basins of a multi-arc basin system first took place during the Late Jurassic-early Early Cretaceous, resulting in regional uplift and the regional unconformity along the northern margin of the ocean and in the Southern Qiangtang Terrane on the northern side of the ocean. However, the closure of the Bangong-Nujiang Tethyan Ocean cannot be attributed to these arc-arc and arc-continent collisions, because subduction and the development of the Bangong-Nujiang Tethyan oceanic lithosphere continued until the late Early Cretaceous. 
The gradual closure of the middle and western segments of Bangong-Nujiang Tethyan Ocean was diachronous from east to west, starting in the east in the middle Early Cretaceous, and being mainly

  14. A non-critical string approach to black holes, time and quantum dynamics

    CERN Document Server

    Ellis, John R.; Nanopoulos, Dimitri V.

    1994-01-01

    We review our approach to time and quantum dynamics based on non-critical string theory, developing its relationship to previous work on non-equilibrium quantum statistical mechanics and the microscopic arrow of time. We exhibit specific non-factorizing contributions to the

  15. A time series modeling approach in risk appraisal of violent and sexual recidivism.

    Science.gov (United States)

    Bani-Yaghoub, Majid; Fedoroff, J Paul; Curry, Susan; Amundsen, David E

    2010-10-01

    For over half a century, various clinical and actuarial methods have been employed to assess the likelihood of violent recidivism. Yet there is a need for new methods that can improve the accuracy of recidivism predictions. This study proposes a new time series modeling approach that generates high levels of predictive accuracy over short and long periods of time. The proposed approach outperformed two widely used actuarial instruments (i.e., the Violence Risk Appraisal Guide and the Sex Offender Risk Appraisal Guide). Furthermore, analysis of temporal risk variations based on specific time series models can add valuable information into risk assessment and management of violent offenders.
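The record does not reproduce the paper's specific time-series models; as a hedged illustration, a first-order autoregressive model AR(1) can be fitted by least squares to a series of (hypothetical) periodic risk scores and used for one-step-ahead prediction:

```python
# Illustrative AR(1) fit (not the authors' model): estimate a, b in
# x[t] = a * x[t-1] + b by least squares, then predict the next score.
# The risk scores below are hypothetical.

def fit_ar1(series):
    """Least-squares estimates of (a, b) for x[t] = a * x[t-1] + b."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

risk = [0.50, 0.55, 0.60, 0.64, 0.67, 0.70]   # hypothetical scores
a, b = fit_ar1(risk)
next_risk = a * risk[-1] + b   # one-step-ahead prediction
```

A temporal model of this kind is what allows risk estimates to be updated as new observations arrive, rather than scored once by a static actuarial instrument.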

  16. A Fuzzy Logic-Based Approach for Estimation of Dwelling Times of Panama Metro Stations

    Directory of Open Access Journals (Sweden)

    Aranzazu Berbey Alvarez

    2015-04-01

    Full Text Available Passenger flow modeling and station dwelling time estimation are significant elements of railway mass transit planning, but system operators usually have limited information with which to model the passenger flow. In this paper, an artificial-intelligence technique known as fuzzy logic is applied to estimate the elements of the origin-destination matrix and the dwelling times of stations in a railway transport system. The fuzzy inference engine used in the algorithm is based on the principle of maximum entropy. The approach considers passengers' preferences to assign a level of congestion to each car of the train as a function of the properties of the station platforms. This approach is implemented to estimate the passenger flow and dwelling times of the recently opened Line 1 of the Panama Metro. The dwelling times obtained from the simulation are compared to real measurements to validate the approach.
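The fuzzy-inference idea can be sketched with triangular membership functions and centroid defuzzification (a hedged illustration: the rule base, congestion scale, and dwell-time values below are hypothetical, and the maximum-entropy engine of the paper is not reproduced):

```python
# Hedged fuzzy-logic sketch: map a platform congestion level in [0, 1]
# to a dwell-time estimate via three triangular membership functions
# and centroid defuzzification. All numbers are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def dwell_time(congestion):
    """Defuzzified dwell time (seconds) from low/medium/high congestion."""
    rules = [  # (membership degree, dwell-time consequent in seconds)
        (tri(congestion, -0.5, 0.0, 0.5), 20.0),   # low  -> short stop
        (tri(congestion, 0.0, 0.5, 1.0), 30.0),    # medium
        (tri(congestion, 0.5, 1.0, 1.5), 45.0),    # high -> long stop
    ]
    num = sum(mu * t for mu, t in rules)
    den = sum(mu for mu, _ in rules)
    return num / den if den else 30.0

t = dwell_time(0.75)   # congestion between 'medium' and 'high'
```

A congestion of 0.75 activates the medium and high rules equally, so the estimate lands halfway between their consequents.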

  17. Unsupervised Retinal Vessel Segmentation Using Combined Filters.

    Directory of Open Access Journals (Sweden)

    Wendeson S Oliveira

    Full Text Available Image segmentation of retinal blood vessels is a process that can help to predict and diagnose cardiovascular-related diseases, such as hypertension and diabetes, which are known to affect the appearance of the retinal blood vessels. This work proposes an unsupervised method for the segmentation of retinal vessel images using a combined matched filter, Frangi's filter and a Gabor wavelet filter to enhance the images. Combining these three filters to improve the segmentation is the main motivation of this work. We investigate two approaches to performing the filter combination: weighted mean and median ranking. Segmentation methods are tested after the vessel enhancement. Enhanced images produced with median ranking are segmented using a simple threshold criterion. Two segmentation procedures are applied to enhanced retinal images produced with the weighted mean approach: the first is based on deformable models and the second uses fuzzy C-means for the image segmentation. The procedure is evaluated using two public image databases, DRIVE and STARE. The experimental results demonstrate that the proposed methods perform well for vessel segmentation in comparison with state-of-the-art methods.
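The two combination schemes can be sketched per pixel as follows (a minimal Python illustration on hypothetical filter responses; the actual matched, Frangi, and Gabor filters are not implemented here):

```python
# Sketch of the two combination schemes: a weighted mean of the three
# filter responses, and a median ranking that keeps the per-pixel median.
# The per-pixel responses below are hypothetical.

def weighted_mean(responses, weights):
    """responses: equal-length pixel lists, one per filter."""
    total = sum(weights)
    return [sum(w * r[i] for w, r in zip(weights, responses)) / total
            for i in range(len(responses[0]))]

def median_rank(responses):
    return [sorted(r[i] for r in responses)[len(responses) // 2]
            for i in range(len(responses[0]))]

matched = [0.9, 0.2, 0.1]   # matched-filter response (hypothetical)
frangi  = [0.8, 0.1, 0.3]   # Frangi vesselness (hypothetical)
gabor   = [0.7, 0.4, 0.2]   # Gabor wavelet response (hypothetical)

mean_img   = weighted_mean([matched, frangi, gabor], [1.0, 1.0, 1.0])
median_img = median_rank([matched, frangi, gabor])
vessels = [i for i, v in enumerate(median_img) if v > 0.5]  # thresholding
```

As in the paper, the median-ranked image is simple enough to segment by thresholding, while the weighted-mean image feeds the more elaborate deformable-model or fuzzy C-means procedures.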

  18. Production Time Loss Reduction in Sauce Production Line by Lean Six Sigma Approach

    Science.gov (United States)

    Ritprasertsri, Thitima; Chutima, Parames

    2017-06-01

    In all industries, time losses incurred during processing are very important, since they reduce productivity and increase cost. This research aimed to reduce the lost time that occurs in a sauce production line using the lean Six Sigma approach. The main objective was to reduce the sauce-heating time, which causes much of the time lost in the production line and thereby affects productivity. The methodology comprised the five-phase improvement model of Six Sigma: the define, measure, analyse, improve and control phases. A cause-and-effect matrix and failure mode and effects analysis (FMEA) were adopted to screen the factors that affect production time loss. The results showed that the percentage of time lost to sauce heating was reduced by 47.76%, increasing productivity to meet the plan.
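The FMEA screening step reduces to ranking failure modes by their risk priority number (RPN), the product of severity, occurrence and detection scores (each typically rated 1-10). The failure modes and ratings below are hypothetical placeholders, not figures from the study:

```python
# Hypothetical failure modes for the sauce-heating step; RPN
# (risk priority number) = severity x occurrence x detection.
failure_modes = [
    {"mode": "steam valve leakage",       "sev": 8, "occ": 6, "det": 4},
    {"mode": "undersized heat exchanger", "sev": 7, "occ": 3, "det": 2},
    {"mode": "late batch changeover",     "sev": 5, "occ": 8, "det": 5},
]

for fm in failure_modes:
    fm["rpn"] = fm["sev"] * fm["occ"] * fm["det"]

# Rank so that improvement effort targets the highest RPN first
prioritized = sorted(failure_modes, key=lambda fm: fm["rpn"], reverse=True)
```

Factors whose RPN exceeds a chosen cutoff are carried into the improve phase; the rest are screened out.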

  19. A Dynamical System Approach Explaining the Process of Development by Introducing Different Time-scales.

    Science.gov (United States)

    Hashemi Kamangar, Somayeh Sadat; Moradimanesh, Zahra; Mokhtari, Setareh; Bakouie, Fatemeh

    2018-06-11

    A developmental process can be described as change through time within a complex dynamic system, and the self-organized changes and emergent behaviour during development can be described and modeled as a dynamical system. We propose a dynamical system approach to address the main question in human cognitive development, i.e., whether developmental change happens continuously or in discontinuous stages. Within this approach, the size of the time-scales is a concept that can be used to address this question. We introduce a framework, built on the concept of time-scale, in which "fast" and "slow" are defined by the size of the time-scales. According to our suggested model, the overall pattern of development can be seen as one continuous function with different time-scales in different time intervals.
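The fast/slow distinction can be made concrete with a toy two-variable system in which one variable evolves on a time-scale eps much shorter than the other's. This is a generic singular-perturbation sketch, not the authors' developmental model; the equations and parameter values are assumptions:

```python
def simulate_fast_slow(eps=0.05, dt=0.001, t_end=10.0):
    """Euler integration of a toy fast-slow system:
        dx/dt = (y - x) / eps   (fast variable, time-scale eps)
        dy/dt = -y + 1          (slow variable, time-scale 1)
    The fast variable x relaxes onto the slow variable y almost
    instantly, so on the slow time-scale the trajectory looks like
    one continuous function, while zooming in on the fast time-scale
    reveals a rapid initial transient (a "stage-like" jump)."""
    x, y, t = 1.0, 0.0, 0.0
    traj = []
    while t < t_end:
        x += dt * (y - x) / eps   # fast dynamics
        y += dt * (-y + 1.0)      # slow dynamics
        t += dt
        traj.append((t, x, y))
    return traj

traj = simulate_fast_slow()
```

Whether the process appears continuous or stage-like thus depends on the time-scale at which it is observed, which is the framework's central point.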

  20. Lung segmentation refinement based on optimal surface finding utilizing a hybrid desktop/virtual reality user interface.

    Science.gov (United States)

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation