WorldWideScience

Sample records for semi-automatic computer system

  1. SEMI-AUTOMATIC SPEAKER VERIFICATION SYSTEM

    Directory of Open Access Journals (Sweden)

    E. V. Bulgakova

    2016-03-01

    Full Text Available Subject of Research. The paper presents a semi-automatic speaker verification system based on comparing formant values, statistics of phone lengths, and melodic characteristics. Owing to the development of speech technology, there is now increased interest in expert speaker verification systems that offer high reliability and low labour intensiveness through automation of the data processing required for expert analysis. System Description. We present a novel system that analyzes the similarity or distinction of speaker voices by comparing statistics of phone lengths, formant features and melodic characteristics. A characteristic feature of the proposed system, which is based on a fusion of methods, is the weak correlation between the analyzed features, which lowers the speaker recognition error rate. An advantage of the system is its ability to carry out rapid analysis of recordings, since data preprocessing and decision making are automated. We describe the individual methods as well as the fusion scheme used to combine their decisions. Main Results. We tested the system on a speech database of 1190 target trials and 10450 non-target trials comprising Russian speech of male and female speakers. The recognition accuracy of the system is 98.59% on the records of male speech and 96.17% on the records of female speech. Experiments also established that the formant method is the most reliable of the methods used. Practical Significance. Experimental results show that the proposed system is applicable to the speaker recognition task in the course of phonoscopic examination.
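
    As a sketch of the score-level fusion this abstract describes, the hypothetical Python fragment below combines three per-method similarity scores into one decision; the weights and threshold are illustrative assumptions, not the authors' published values.

    ```python
    # Hypothetical score-level fusion of weakly correlated verification methods.
    # Weights and threshold are illustrative; the paper does not publish them.

    def fuse_scores(formant: float, phone_length: float, melodic: float) -> bool:
        """Combine per-method similarity scores (each in [0, 1]) into a decision."""
        weights = {"formant": 0.5, "phone_length": 0.25, "melodic": 0.25}
        fused = (weights["formant"] * formant
                 + weights["phone_length"] * phone_length
                 + weights["melodic"] * melodic)
        return fused >= 0.5  # accept as same speaker above the threshold

    print(fuse_scores(0.9, 0.6, 0.7))  # True
    ```

    Weighting the formant score highest mirrors the abstract's finding that the formant method was the most reliable of the fused methods.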

  2. Semi-automatic microdrive system for positioning electrodes during electrophysiological recordings from rat brain

    Science.gov (United States)

    Dabrowski, Piotr; Kublik, Ewa; Mozaryn, Jakub

    2015-09-01

    Electrophysiological recording of neuronal action potentials from behaving animals requires portable, precise and reliable devices for positioning multiple microelectrodes in the brain. We propose a semi-automatic microdrive system for independent positioning of up to 8 electrodes (or tetrodes) in a rat (or larger animals). The device is intended for chronic, long-term recording applications in freely moving animals. Our design is based on independent stepper motors with lead screws, offering single steps of ~ μm controlled semi-automatically from a computer. A microdrive prototype for one electrode was developed and tested. Because systematic test procedures dedicated to such applications are lacking, we propose evaluating the prototype in a manner similar to the ISO norm for industrial robots. To this end we designed and implemented magnetic linear and rotary encoders that provide information about electrode displacement and motor shaft movement. On the basis of these measurements we estimated the repeatability, accuracy and backlash of the drive. According to the design assumptions and preliminary tests, the device should provide greater accuracy than the hand-controlled manipulators available on the market. Automatic positioning will also shorten experiments and improve the acquisition of signals from multiple neuronal populations.

  3. Feasibility Study of Semi-Automatic Pipe Handling System and Fabrication Facility

    Science.gov (United States)

    1978-08-01

    industries. Japan has two major manufacturers. The Ishikawajima-Harima Heavy Industries Company, Limited, or IHI system is very ... Company; Howaldtswerke-Deutsche Werft (HDW); Ishikawajima-Harima Heavy Industries; Italcantieri-Cantiere di Genova-Sestri; Kockums AB; Larikka Company (T ... Equipment Suppliers: 1. 2. 3. 4. 5. Racking Systems, Semi-Automatic; Ishikawajima-Harima Heavy Industries Company, Ltd. (IHI); Mitsui

  4. A semi-automatic system for labelling seafood products and ...

    African Journals Online (AJOL)

    STORAGESEVER

    2010-05-10

    May 10, 2010 ... The labelling system is based on years of direct and careful observation of ... components and software development have been carefully designed with solutions ... database and web server (DWS), which is an optional system for ...

  5. A semi-automatic system for labelling seafood products and ...

    African Journals Online (AJOL)

    STORAGESEVER

    2010-05-10

    May 10, 2010 ... architecture of a system designed for the trawl fishing industry ... of precursors for lignin biosynthesis and other phenolic defensive compounds in ... Industrial Polysaccharides: Genetic Engineering, Structure/Property Relations.

  6. A semi-automatic system for labelling seafood products and ...

    African Journals Online (AJOL)

    ... label by user-friendly automated software that excludes any possible manipulation by the crew. Based on results obtained from the installation of the LS on bottom commercial trawlers, the system certified the origin of the seafood products and simultaneously provided, indirectly, geospatial fisheries yield and fishing effort ...

  7. A Semi-Automatic, Remote-Controlled Video Observation System for Transient Luminous Events

    Science.gov (United States)

    Allin, T.; Neubert, T.; Laursen, S.; Rasmussen, I. L.; Soula, S.

    2003-12-01

    In support of global ELF/VLF observations, HF measurements in France, and conjugate photometry/VLF observations in South Africa, we developed and operated a semi-automatic, remotely controlled video system for the observation of middle-atmospheric transient luminous events (TLEs). Installed at the Pic du Midi Observatory in Southern France, the system was operational from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over serial links from a local computer, and the video outputs were fed to a pair of PCI frame grabbers in the computer. This setup allowed remote users to log in and operate the system over the internet. Event-detection software recorded and time-stamped single TLE video fields, eliminating the need for continuous human monitoring of TLE activity. The computer recorded and analyzed two parallel video streams at the full 50 Hz field rate while uploading status images, TLE images, and system logs to a remote web server. The system detected more than 130 TLEs - mostly sprites - distributed over 9 active evenings. We have thus demonstrated the feasibility of remote agents for TLE observations, which are likely to find use in future ground-based TLE observation campaigns or to be installed at remote sites in support of space-borne or other global TLE observation efforts.
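
    The abstract does not specify the detection algorithm. The sketch below illustrates one plausible approach under stated assumptions: a running background brightness estimate with a fixed trigger factor, both hypothetical.

    ```python
    import numpy as np

    # Hypothetical transient detector: flag a video field when its summed
    # brightness exceeds a running background estimate by a fixed factor.
    def detect_transients(fields, alpha=0.05, factor=1.5):
        """fields: iterable of 2-D uint8 arrays (50 Hz video fields)."""
        background = None
        events = []
        for i, f in enumerate(fields):
            level = float(np.asarray(f).sum())
            if background is None:
                background = level           # initialize from the first field
            elif level > factor * background:
                events.append(i)             # time-stamp this field as a candidate TLE
            background = (1 - alpha) * background + alpha * level
        return events
    ```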

  8. Semi-automatic segmentation of subcutaneous tumours from micro-computed tomography images

    Science.gov (United States)

    Ali, Rehan; Gunduz-Demir, Cigdem; Szilágyi, Tünde; Durkee, Ben; Graves, Edward E.

    2013-11-01

    This paper outlines the first attempt to segment the boundary of preclinical subcutaneous tumours, which are frequently used in cancer research, from micro-computed tomography (microCT) image data. MicroCT images provide low tissue contrast, and the tumour-to-muscle interface is hard to determine; however, faint features exist that enable the boundary to be located. These are used as the basis of our semi-automatic segmentation algorithm. Local phase feature detection is used to highlight the faint boundary features, and a level set-based active contour is used to generate smooth contours that fit the sparse boundary features. The algorithm is validated against manually drawn contours and micro-positron emission tomography (microPET) images. When compared against manual expert segmentations, it was consistently able to segment at least 70% of the tumour region (n = 39) in both easy and difficult cases, and over a broad range of tumour volumes. When compared against tumour microPET data, it was able to capture over 80% of the functional microPET volume. Based on these results, we demonstrate the feasibility of subcutaneous tumour segmentation from microCT image data without the assistance of exogenous contrast agents. Our approach is a proof-of-concept that can be used as the foundation for further research, and to facilitate this, the code is open-source and available from www.setuvo.com.
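
    A minimal sketch of the level-set step, assuming scikit-image's morphological Chan-Vese contour as a stand-in for the authors' phase-based level-set formulation; the local phase feature detection itself is not reproduced here.

    ```python
    import numpy as np
    from skimage.filters import gaussian
    from skimage.segmentation import morphological_chan_vese

    # Stand-in sketch: the paper couples local-phase feature detection with a
    # level-set active contour; here a morphological Chan-Vese level set from
    # scikit-image is run on a smoothed CT slice instead.
    def segment_tumour(ct_slice: np.ndarray) -> np.ndarray:
        smoothed = gaussian(ct_slice.astype(float), sigma=2)
        return morphological_chan_vese(smoothed, 200, init_level_set="circle")
    ```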

  9. Integrating different tracking systems in football: multiple camera semi-automatic system, local position measurement and GPS technologies.

    Science.gov (United States)

    Buchheit, Martin; Allen, Adam; Poon, Tsz Kit; Modonutti, Mattia; Gregson, Warren; Di Salvo, Valter

    2014-12-01

    Abstract: During the past decade, substantial development of computer-aided tracking technology has occurred. We therefore aimed to provide calibration equations to allow interchangeability between the different tracking technologies used in soccer. Eighty-two highly trained soccer players (U14-U17) were monitored during training and one match. Player activity was collected simultaneously with a semi-automatic multiple-camera system (Prozone), local position measurement (LPM) technology (Inmotio) and two global positioning systems (GPSports and VX). Data were analysed with respect to three different field dimensions. Distance covered at high speed (>14.4 km · h-1) was slightly-to-moderately greater when tracked with Prozone, and accelerations were small-to-very-largely greater with LPM. For most of the equations, the typical error of the estimate was of a moderate magnitude. Interchangeability of the different tracking systems is possible with the provided equations, but care is required given their moderate typical error of the estimate.

  10. Semi-automatic system for UV images analysis of historical musical instruments

    Science.gov (United States)

    Dondi, Piercarlo; Invernizzi, Claudia; Licchelli, Maurizio; Lombardi, Luca; Malagodi, Marco; Rovetta, Tommaso

    2015-06-01

    The selection of representative areas for analysis is a common problem in the study of Cultural Heritage items. UV fluorescence photography is an extensively used technique to highlight specific surface features that cannot be observed in visible light (e.g. parts that were restored or treated with different materials), and it proves very effective in the study of historical musical instruments. In this work we propose a new semi-automatic solution for selecting areas with the same perceived color (a simple clue of similar materials) on UV photos, using a specifically designed interactive tool. The proposed method works in two steps: (i) the user selects a small rectangular area of the image; (ii) the program automatically highlights all areas that have the same color as the selected input. The identification is performed by analyzing the image in the HSV color model, the model closest to human perception. The achievable result is more accurate than a manual selection, because the program can also detect points that users fail to recognize as similar owing to perceptual illusions. The application was developed following usability guidelines, and its human-computer interface was improved after a series of tests performed by expert and non-expert users. All the experiments were performed on UV imagery of the Stradivari violin collection held by the "Museo del Violino" in Cremona.
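
    A minimal sketch of the two-step interaction, assuming OpenCV and illustrative HSV tolerances; the tool's actual tolerances and interface are not published in this abstract.

    ```python
    import cv2
    import numpy as np

    # Sketch of the interaction described above: the user picks a small
    # rectangle, and all pixels with a similar perceived colour (HSV values
    # within a tolerance of the rectangle's mean) are highlighted.
    # Tolerances are illustrative; hue wrap-around at 0/180 is ignored here.
    def select_similar(img_bgr, rect, tol=(10, 40, 40)):
        x, y, w, h = rect
        hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
        mean = cv2.mean(hsv[y:y + h, x:x + w])[:3]
        lower = np.clip(np.array(mean) - tol, 0, 255).astype(np.uint8)
        upper = np.clip(np.array(mean) + tol, 0, 255).astype(np.uint8)
        return cv2.inRange(hsv, lower, upper)  # binary mask of matching areas
    ```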

  11. Semi-automatic classification of skeletal morphology in genetically altered mice using flat-panel volume computed tomography.

    Directory of Open Access Journals (Sweden)

    Christian Dullin

    2007-07-01

    Full Text Available Rapid progress in exploring the human and mouse genome has resulted in the generation of a multitude of mouse models to study gene functions in their biological context. However, effective screening methods that allow rapid noninvasive phenotyping of transgenic and knockout mice are still lacking. To identify murine models with bone alterations in vivo, we used flat-panel volume computed tomography (fpVCT) for high-resolution 3-D imaging and developed an algorithm with a computational intelligence system. First, we tested the accuracy and reliability of this approach by imaging discoidin domain receptor 2 (DDR2)-deficient mice, which display distinct skull abnormalities as shown by comparative landmark-based analysis. High-contrast fpVCT data of the skull with 200 μm isotropic resolution and 8-s scan time allowed segmentation and computation of significant shape features as well as visualization of morphological differences. The application of a trained artificial neuronal network to these datasets permitted a semi-automatic and highly accurate phenotype classification of DDR2-deficient versus C57BL/6 wild-type mice. Even heterozygous DDR2 mice with only subtle phenotypic alterations were correctly determined by fpVCT imaging and identified as a new class. In addition, we successfully applied the algorithm to classify knockout mice lacking the DDR1 gene with no apparent skull deformities. Thus, this new method seems to be a potential tool for identifying novel mouse phenotypes with skull changes from transgenic and knockout mice generated by random mutagenesis as well as from genetic models. For this purpose, however, new neuronal networks have to be created and trained. In summary, the combination of fpVCT images with artificial neuronal networks provides a reliable, novel method for rapid, cost-effective, and noninvasive primary screening to detect skeletal phenotypes in mice.

  12. Semi-automatic ground truth generation for license plate recognition system

    Science.gov (United States)

    Wang, Shen-Zheng; Zhao, San-Lung; Chen, Yi-Yuan; Lan, Kung-Ming

    2011-09-01

    A license plate recognition (LPR) system helps alert relevant personnel to any vehicle passing through the surveillance area. In order to test license plate recognition algorithms, it is necessary to have input frames for which the ground truth has been determined. The purpose of ground truth data here is to provide an absolute reference for performance evaluation or training. However, annotating ground truth data for real-life inputs is a laborious task because of the time-consuming manual work involved. In this paper, we propose a method of semi-automatic ground truth generation for license plate recognition in video sequences. The method starts with region-of-interest detection to rapidly extract character lines, followed by a license plate recognition system that verifies the license plate regions and recognizes the numbers. On top of the LPR system, we incorporate a tracking-validation mechanism to detect the time interval of passing vehicles in the input sequences. The tracking mechanism is initialized by a single license plate region in one frame. Moreover, in order to tolerate variation of the license plate appearance in the input sequences, the validator is updated by capturing positive and negative samples during tracking. Experimental results show that the proposed method achieves promising results.

  13. A semi-automatic indexing system based on embedded information in HTML documents

    OpenAIRE

    Vàllez Letrado, Mari; Pedraza, Rafael; Codina, Lluís; Blanco, Saúl; Rovira, Cristòfol

    2015-01-01

    Purpose: The purpose of this paper is to describe and evaluate DigiDoc MetaEdit, a tool that allows the semi-automatic indexing of HTML documents. The tool works by identifying and suggesting keywords from a thesaurus according to the information embedded in HTML documents. This enables keyword assignment to be parameterized based on how frequently the terms appear in the document, the relevance of their position, and the combination of both. Design/methodology/approach: In order to ...

  14. Building a semi-automatic ontology learning and construction system for geosciences

    Science.gov (United States)

    Babaie, H. A.; Sunderraman, R.; Zhu, Y.

    2013-12-01

    We are developing an ontology learning and construction framework that allows continuous, semi-automatic knowledge extraction, verification, validation, and maintenance by a potentially very large group of collaborating domain experts in any geosciences field. The system brings geoscientists from the sidelines to the center stage of ontology building, allowing them to collaboratively construct and enrich new ontologies, and to merge, align, and integrate existing ontologies and tools. These constantly evolving ontologies can more effectively address the community's interests, purposes, tools, and change. The goal is to minimize the cost and time of building ontologies, and to maximize the quality, usability, and adoption of ontologies by the community. Our system will be a domain-independent ontology learning framework that applies natural language processing, allowing users to enter their ontology in a semi-structured form, and a combined Semantic Web and Social Web approach that allows direct participation by geoscientists who have no skills in the design and development of their domain ontologies. A controlled natural language (CNL) interface and an integrated authoring and editing tool automatically convert syntactically correct CNL text into formal OWL constructs. The WebProtege-based system will allow a potentially large group of geoscientists, from multiple domains, to crowdsource and participate in the structuring of their knowledge model by sharing their knowledge through critiquing, testing, verifying, adopting, and updating of the concept models (ontologies). We will use cloud storage for all data and knowledge base components of the system, such as users, domain ontologies, discussion forums, and semantic wikis that can be accessed and queried by geoscientists in each domain. We will use NoSQL databases such as MongoDB as a service in the cloud environment. MongoDB uses the lightweight JSON format, which makes it convenient and easy to build Web applications using ...

  15. Comparison of (semi-)automatic and manually adjusted measurements of left ventricular function in dual source computed tomography using three different software tools

    NARCIS (Netherlands)

    de Jonge, G. J.; van Ooijen, P. M. A.; Overbosch, J.; Litcheva Gueorguieva, A.; Janssen-van der Weide, M. C.; Oudkerk, M.

    To assess the accuracy of (semi-)automatic measurements of left ventricular (LV) functional parameters in cardiac dual-source computed tomography (DSCT) compared to manually adjusted measurements on three different workstations. Forty patients who underwent cardiac DSCT were included (31 men, mean ...

  16. The Semi-Automatic Parallelisation of Scientific Application Codes Using a Computer Aided Parallelisation Toolkit

    Directory of Open Access Journals (Sweden)

    C.S. Ierotheou

    2001-01-01

    Full Text Available The shared-memory programming model can be an effective way to achieve parallelism on shared-memory parallel computers. Historically, however, the lack of a programming standard using directives and the limited scalability have affected its take-up. Recent advances in hardware and software technologies have improved both the performance of parallel programs using compiler directives and their portability, with the introduction of OpenMP. In this study, the Computer Aided Parallelisation Toolkit has been extended to automatically generate OpenMP-based parallel programs with nominal user assistance. We categorize the different loop types and show how efficient directives can be placed using the toolkit's in-depth interprocedural analysis. Examples are taken from the NAS parallel benchmarks and a number of real-world application codes. This demonstrates the great potential of using the toolkit to quickly parallelise serial programs, as well as the good performance achievable on up to 300 processors for hybrid message-passing/directive parallelisations.

  17. Semi-automatic integrated segmentation approaches and contour extraction applied to computed tomography scan images.

    Science.gov (United States)

    Khoodoruth, B Dhalila S Y; Rughooputh, Harry C S; Lefer, Wilfrid

    2008-01-01

    We propose to segment traumatic brain injuries in two-dimensional CT scans with various methods. These methods (hybrid, feature extraction, level sets, region growing, and watershed) are analysed with respect to their parametric and nonparametric arguments. The pixel intensities, gradient magnitudes, affinity maps, and catchment basins of these methods are validated against various constraint evaluations. In this article, we also develop a new methodology for a computational pipeline that uses bilateral filtering, diffusion properties, watershed, and filtering with mathematical morphology operators to extract the contour of the lesion, based mainly on the gradient function. The evaluation of the classification of these lesions is only briefly outlined in this context and is undertaken by pattern recognition in a separate paper.

  18. Application of a semi-automatic ROI setting system for brain PET images to animal PET studies

    Energy Technology Data Exchange (ETDEWEB)

    Kuge, Yuji; Akai, Nobuo; Tamura, Koji [Inst. for Biofunctional Research, Ltd., Suita, Osaka (Japan)] [and others]

    1998-10-01

    ProASSIST, a semi-automatic ROI (region of interest) setting system for human brain PET images, was modified for use with the canine brain, and the performance of the resulting system was evaluated by comparing the operational simplicity of ROI setting and the consistency of the obtained ROI values with those of a conventional manual procedure. Namely, we created segment maps for the canine brain with reference to the coronal section atlas of the canine brain by Lim et al., and incorporated them into the ProASSIST system. For the performance test, CBF (cerebral blood flow) and CMRglc (cerebral metabolic rate of glucose) images of dogs with or without focal cerebral ischemia were used. In ProASSIST, brain contours were defined semi-automatically. In the ROI analysis of the test images, manual modification of the contour was necessary in half of the cases examined (8/16). However, the operation was simple enough that the operation time per brain section was significantly shorter than with the manual operation. The ROI values determined by the system were comparable to those of the manual procedure, confirming the applicability of the system to these animal studies. The use of a system like the present one would also afford more objective data acquisition for quantitative ROI analysis, because no manual procedure except for the specification of some anatomical features is required for ROI setting. (author)

  19. Semi-automatic region-of-interest segmentation based computer-aided diagnosis of mass lesions from dynamic contrast-enhanced magnetic resonance imaging based breast cancer screening.

    Science.gov (United States)

    Levman, Jacob; Warner, Ellen; Causer, Petrina; Martel, Anne

    2014-10-01

    Cancer screening with magnetic resonance imaging (MRI) is currently recommended for very high risk women. The high variability in the diagnostic accuracy of radiologists analyzing screening MRI examinations of the breast is due, at least in part, to the large amounts of data acquired. This has motivated substantial research towards the development of computer-aided diagnosis (CAD) systems for breast MRI which can assist in the diagnostic process by acting as a second reader of the examinations. This retrospective study was performed on 184 benign and 49 malignant lesions detected in a prospective MRI screening study of high risk women at Sunnybrook Health Sciences Centre. A method for performing semi-automatic lesion segmentation based on a supervised learning formulation was compared with the enhancement-threshold-based segmentation method in the context of a computer-aided diagnostic system. The results demonstrate that the proposed method can assist in providing increased separation between malignant and radiologically suspicious benign lesions. Separation between malignant and benign lesions based on margin measures improved from a receiver operating characteristic (ROC) curve area of 0.63 to 0.73 when the proposed segmentation method was compared with the enhancement threshold, representing a statistically significant improvement. Separation between malignant and benign lesions based on dynamic measures improved from a ROC curve area of 0.75 to 0.79 when the proposed segmentation method was compared to the enhancement threshold, also representing a statistically significant improvement. The proposed method has potential as a component of a computer-aided diagnostic system.
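
    The reported comparison boils down to computing ROC curve areas for a lesion feature under each segmentation method. A sketch with synthetic scores (not the study data) might look like this:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Sketch of the evaluation reported above: compare ROC areas of a lesion
    # feature computed under two segmentation methods. Data here are synthetic.
    rng = np.random.default_rng(0)
    labels = np.r_[np.ones(49), np.zeros(184)]        # malignant vs benign
    margin_threshold = rng.normal(labels * 0.5, 1.0)  # enhancement-threshold margins
    margin_proposed = rng.normal(labels * 0.9, 1.0)   # proposed-segmentation margins
    print(roc_auc_score(labels, margin_threshold))    # lower separation
    print(roc_auc_score(labels, margin_proposed))     # higher separation
    ```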

  20. MOLGENIS/connect: a system for semi-automatic integration of heterogeneous phenotype data with applications in biobanks

    Science.gov (United States)

    Pang, Chao; van Enckevort, David; de Haan, Mark; Kelpin, Fleur; Jetten, Jonathan; Hendriksen, Dennis; de Boer, Tommy; Charbon, Bart; Winder, Erwin; van der Velde, K. Joeri; Doiron, Dany; Fortier, Isabel; Hillege, Hans

    2016-01-01

    Motivation: While the size and number of biobanks, patient registries and other data collections are increasing, biomedical researchers still often need to pool data for statistical power, a task that requires time-intensive retrospective integration. Results: To address this challenge, we developed MOLGENIS/connect, a semi-automatic system to find, match and pool data from different sources. The system shortlists relevant source attributes from thousands of candidates using ontology-based query expansion to overcome variations in terminology. Then it generates algorithms that transform source attributes to a common target DataSchema. These include unit conversion, categorical value matching and complex conversion patterns (e.g. calculation of BMI). In comparison to human experts, MOLGENIS/connect was able to auto-generate 27% of the algorithms perfectly, with an additional 46% needing only minor editing, representing a reduction in the human effort and expertise needed to pool data. Availability and Implementation: Source code, binaries and documentation are available as open-source under LGPLv3 from http://github.com/molgenis/molgenis and www.molgenis.org/connect. Contact: m.a.swertz@rug.nl Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153686
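
    A toy sketch of the three kinds of generated transformations the abstract mentions (unit conversion, categorical value matching, and derived values such as BMI); the attribute names below are hypothetical and do not reflect MOLGENIS/connect's actual algorithm syntax.

    ```python
    # Hypothetical source-to-target mapping illustrating the transformation
    # kinds named above. Attribute names are invented for this sketch.

    def map_record(src: dict) -> dict:
        weight_kg = src["weight_lb"] * 0.45359237                 # unit conversion
        return {
            "weight_kg": weight_kg,
            "smoker": {"y": "yes", "n": "no"}[src["smoking"]],    # category matching
            "bmi": weight_kg / src["height_m"] ** 2,              # derived attribute
        }

    print(map_record({"weight_lb": 154, "height_m": 1.75, "smoking": "n"}))
    ```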

  1. A Semi-Automatic, Remote-Controlled Video Observation System for Transient Luminous Events

    DEFF Research Database (Denmark)

    Allin, Thomas Højgaard; Neubert, Torsten; Laursen, Steen

    2003-01-01

    ... at the Pic du Midi Observatory in Southern France, the system was operational during the period from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over ...

  2. Semi-Automatic Contacting System 'GR-1' in Marine Radios

    Science.gov (United States)

    1982-10-26

    Foreign Technology Division, Wright-Patterson AFB, Ohio, FTD-ID(RS)T-1043-82, 26 Oct 1982. ... "Regulamin Radiokomunikacyjny z zalecenia CCIR Nr 476" (Radio Communication Regulations recommended by CCIR Number 476) and many other publications ... communication links. The basic block diagram of the system is presented in Figures 1 and 2. The operating procedures of the system are defined by the ...

  3. Graphical user interface (GUIDE) and semi-automatic system for the acquisition of anaglyphs

    Science.gov (United States)

    Canchola, Marco A.; Arízaga, Juan A.; Cortés, Obed; Tecpanecatl, Eduardo; Cantero, Jose M.

    2013-09-01

    Diverse educational experiences have shown that children accept ideas related to science more readily than adults do. That fact, together with their great curiosity, makes scientific outreach efforts aimed at children likely to succeed. Moreover, 3D digital images have become a topic of growing importance in various areas: mainly entertainment, film and video games, but also areas such as medical practice, where they are crucial for disease detection. This article presents a system model for 3D images for educational purposes that allows students of various grade levels, school and college, to gain an introduction to image processing, explaining the use of filters for stereoscopic images that give the brain an impression of depth. The system is based on two elements: hardware centered on an Arduino board, and software based on Matlab. The paper presents the design and construction of each of the elements, information on the images obtained and, finally, how users can interact with the device.

  4. Semi-automatic surface sediment sampling system - A prototype to be implemented in bivalve fishing surveys

    Science.gov (United States)

    Rufino, Marta M.; Baptista, Paulo; Pereira, Fábio; Gaspar, Miguel B.

    2018-01-01

    In the current work we propose a new method for sampling surface sediment during bivalve fishing surveys. Fishing institutes all around the world carry out regular surveys with the aim of monitoring the stocks of commercial species. These surveys often comprise more than one hundred sampling stations and cover large geographical areas. Although superficial sediment grain sizes are among the main drivers of benthic communities and provide crucial information for studies of coastal dynamics, there is overall a strong lack of this type of data, possibly because traditional surface sediment sampling methods use grabs, which require considerable time and effort to deploy on a regular basis or over large areas. In light of these aspects, we developed an easy and inexpensive method for sampling superficial sediments during bivalve fisheries monitoring surveys without increasing survey time or human resources. The method was successfully evaluated and validated during a typical bivalve survey carried out on the northwest coast of Portugal, confirming that it did not interfere with the survey objectives. Furthermore, the method was validated by collecting samples with a traditional Van Veen grab (the traditional method), which showed a grain size composition similar to that of the samples collected by the new method at the same localities. We recommend that the procedure be implemented in regular bivalve fishing surveys, together with an image analysis system to analyse the collected samples. The new method will provide a substantial quantity of data on surface sediment in coastal areas in an inexpensive and efficient manner, with high potential for application in different fields of research.

  5. Semi-Automatic Identification of Humpback Whales

    NARCIS (Netherlands)

    E.B. Ranguelova (Elena); M.J. Huiskes (Mark); E.J. Pauwels (Eric); K. Dawson-Howe; A.C. Kokaram; F. Shevlin

    2004-01-01

    This paper describes current work on a photo-id system for humpback whales. Individuals of this species can be uniquely identified by the light and dark pigmentation patches on their tails. We propose a semi-automatic algorithm based on marker-controlled watershed transformation for ...

  6. High-Resolution, Semi-Automatic Fault Mapping Using Unmanned Aerial Vehicles and Computer Vision: Mapping from an Armchair

    Science.gov (United States)

    Micklethwaite, S.; Vasuki, Y.; Turner, D.; Kovesi, P.; Holden, E.; Lucieer, A.

    2012-12-01

    Our ability to characterise fractures depends upon the accuracy and precision of field techniques, as well as the quantity of data that can be collected. Unmanned Aerial Vehicles (UAVs; otherwise known as "drones") and photogrammetry provide exciting new opportunities for the accurate mapping of fracture networks over large surface areas. We use a highly stable, 8-rotor UAV platform (Oktokopter) with a digital SLR camera and the Structure-from-Motion computer vision technique to generate point clouds, wireframes, digital elevation models and orthorectified photo mosaics. Furthermore, new image analysis methods such as phase congruency are applied to the data to semi-automatically map fault networks. A case study is provided of intersecting fault networks and associated damage from Piccaninny Point in Tasmania, Australia. Outcrops >1 km in length can be surveyed in a single 5-10 minute flight, with pixel resolution ~1 cm. Centimetre-scale precision can be achieved when selected ground control points are measured using a total station. These techniques have the potential to provide rapid, ultra-high resolution mapping of fracture networks from many different lithologies, enabling us to more accurately assess the "fit" of observed data relative to model predictions over a wide range of boundary conditions. [Figure: High-resolution DEM of faulted outcrop (Piccaninny Point, Tasmania) generated using the Oktokopter UAV (inset) and photogrammetric techniques.]

  7. Investigating helmet promotion for cyclists: results from a randomised study with observation of behaviour, using a semi-automatic video system.

    Directory of Open Access Journals (Sweden)

    Aymery Constant

    Full Text Available INTRODUCTION: Half of fatal injuries among bicyclists are head injuries. While helmet use is likely to provide protection, it often remains rare. We assessed the influence of strategies for the promotion of helmet use, with direct observation of behaviour by a semi-automatic video system. METHODS: We performed a single-centre randomised controlled study with 4 balanced randomisation groups. Participants were non-helmet users, aged 18-75 years, recruited at a bicycle loan facility in the city of Bordeaux, France. After completing a questionnaire investigating their attitudes towards road safety and helmet use, participants were randomly assigned to three groups with the provision of "helmet only", "helmet and information" or "information only", and to a fourth control group. Bikes were labelled with a colour code designed to enable observation of helmet use by participants while cycling, using a 7-spot semi-automatic video system located in the city. A total of 1557 participants were included in the study. RESULTS: Between October 15th 2009 and September 28th 2010, 2621 cyclist movements, made by 587 participants, were captured by the video system. Participants seen at least once with a helmet amounted to 6.6% of all observed participants, with higher rates in the two groups that received a helmet at baseline. The likelihood of observed helmet use was significantly increased among participants of the "helmet only" group (OR = 7.73 [2.09-28.5]), and this impact faded within six months of the intervention. No effect of information delivery was found. CONCLUSION: Providing a helmet may be of value, but it will not be sufficient to achieve high rates of helmet wearing among adult cyclists. Integrated and repeated prevention programmes will be needed, including free provision of helmets, but also information on the protective effect of helmets and strategies to increase peer and parental pressure.

  8. Investigating helmet promotion for cyclists: results from a randomised study with observation of behaviour, using a semi-automatic video system.

    Science.gov (United States)

    Constant, Aymery; Messiah, Antoine; Felonneau, Marie-Line; Lagarde, Emmanuel

    2012-01-01

    Half of fatal injuries among bicyclists are head injuries. While helmet use is likely to provide protection, it often remains rare. We assessed the influence of strategies for the promotion of helmet use, with direct observation of behaviour by a semi-automatic video system. We performed a single-centre randomised controlled study with 4 balanced randomisation groups. Participants were non-helmet users, aged 18-75 years, recruited at a bicycle loan facility in the city of Bordeaux, France. After completing a questionnaire investigating their attitudes towards road safety and helmet use, participants were randomly assigned to three groups with the provision of "helmet only", "helmet and information" or "information only", and to a fourth control group. Bikes were labelled with a colour code designed to enable observation of helmet use by participants while cycling, using a 7-spot semi-automatic video system located in the city. A total of 1557 participants were included in the study. Between October 15th 2009 and September 28th 2010, 2621 cyclist movements, made by 587 participants, were captured by the video system. Participants seen at least once with a helmet amounted to 6.6% of all observed participants, with higher rates in the two groups that received a helmet at baseline. The likelihood of observed helmet use was significantly increased among participants of the "helmet only" group (OR = 7.73 [2.09-28.5]), and this impact faded within six months of the intervention. No effect of information delivery was found. Providing a helmet may be of value, but it will not be sufficient to achieve high rates of helmet wearing among adult cyclists. Integrated and repeated prevention programmes will be needed, including free provision of helmets, but also information on the protective effect of helmets and strategies to increase peer and parental pressure.
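
    For reference, the reported effect estimate is an odds ratio with a 95% confidence interval, which can be computed from a 2x2 table as sketched below; the counts are illustrative, chosen only to land near the published OR, and are not the study's data.

    ```python
    import math

    # Odds ratio with 95% CI from a 2x2 table (illustrative counts).
    a, b = 20, 130   # helmet / no helmet observed in the "helmet only" group
    c, d = 3, 140    # helmet / no helmet observed in the control group
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
    lo = odds_ratio * math.exp(-1.96 * se)
    hi = odds_ratio * math.exp(1.96 * se)
    print(f"OR = {odds_ratio:.2f} [{lo:.2f}-{hi:.2f}]")
    ```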

  9. A method for semi-automatic segmentation and evaluation of intracranial aneurysms in bone-subtraction computed tomography angiography (BSCTA) images

    Science.gov (United States)

    Krämer, Susanne; Ditt, Hendrik; Biermann, Christina; Lell, Michael; Keller, Jörg

    2009-02-01

    The rupture of an intracranial aneurysm has dramatic consequences for the patient. Hence early detection of unruptured aneurysms is of paramount importance. Bone-subtraction computed tomography angiography (BSCTA) has proven to be a powerful tool for the detection of aneurysms, in particular those located close to the skull base. Most aneurysms, though, are chance findings in BSCTA scans performed for other reasons. It is therefore highly desirable to have techniques operating on standard BSCTA scans that assist radiologists and surgeons in the evaluation of intracranial aneurysms. In this paper we present a semi-automatic method for the segmentation and assessment of intracranial aneurysms. The only user interaction required is the placement of a marker in the vascular malformation. Termination ensues automatically as soon as the segmentation reaches the vessels that feed the aneurysm. The algorithm is derived from an adaptive region-growing which employs a growth gradient as its termination criterion. Based on this segmentation, values of high clinical and prognostic significance, such as the volume, minimum and maximum diameter, and surface of the aneurysm, are calculated automatically. The segmentation itself as well as the calculated diameters are visualised. Further segmentation of the adjoining vessels provides the means for visualising the topographical situation of the vascular structures associated with the aneurysm. A stereolithographic mesh (STL) can be derived from the surface of the segmented volume. The STL, together with parameters like the resiliency of vascular wall tissue, provides for an accurate wall model of the aneurysm and its associated vascular structures. Consequently, the haemodynamic situation in and close to the aneurysm can be assessed by flow modelling. Significant haemodynamic values such as the pressure on the vascular wall, wall shear stress, and pathlines of the blood flow can be computed. Additionally, a dynamic flow model can be ...

  10. Semi-automatic approach for music classification

    Science.gov (United States)

    Zhang, Tong

    2003-11-01

    Audio categorization is essential when managing a music database, whether a professional library or a personal collection. However, complete automation of categorizing music into proper classes for browsing and searching is not yet supported by today's technology. Also, the issue of music classification is subjective to some extent, as each user may have his own criteria for categorizing music. In this paper, we propose the idea of semi-automatic music classification. With this approach, a music browsing system is set up which contains a set of tools for separating music into a number of broad types (e.g. male solo, female solo, string instrument performance, etc.) using existing music analysis methods. With the results of the automatic process, the user may further cluster the music pieces in the database into finer classes and/or adjust misclassifications manually according to his own preferences and definitions. Such a system may greatly improve the efficiency of music browsing and retrieval, while at the same time guaranteeing accuracy and user satisfaction with the results. Since this semi-automatic system has two parts, i.e. the automatic part and the manual part, they are described separately in the paper, with detailed descriptions and examples of each step of the two parts included.

  11. Natural Fiber Cut Machine Semi-Automatic Linear Motion System for Empty Fiber Bunches: Re-designing for Local Use

    Science.gov (United States)

    Asfarizal; Kasim, Anwar; Gunawarman; Santosa

    2017-12-01

    Empty palm bunch fiber is a local raw material in Indonesia that is easy to obtain; it can be sourced from the palm oil industry, for example in West Pasaman. Its strong and pliable character gives it high potential for particle board. Transforming large quantities of fiber into particles of size 0-10 mm requires a specially designed cutting machine. The machine was therefore designed as a two-part system: the mechanical system and structure, and the cutting knife. The components were made, assembled and then tested to determine the cutting ability of the machine. The results showed that the straight back-and-forth motion cutting machine is able to cut empty oil palm bunch fiber to lengths of 0-1 cm, 2 cm and 8 cm, and the cut surface is not stringy. The cutting capacity was 24.4 kg/h at a length of 2 cm and up to 84 kg/h at 8 cm.

  12. Adaptive neuro-fuzzy inference systems for semi-automatic discrimination between seismic events: a study in Tehran region

    Science.gov (United States)

    Vasheghani Farahani, Jamileh; Zare, Mehdi; Lucas, Caro

    2012-04-01

    This article presents an adaptive neuro-fuzzy inference system (ANFIS) for the classification of low-magnitude seismic events reported in Iran by the network of the Tehran Disaster Mitigation and Management Organization (TDMMO). ANFIS classifiers were used to detect seismic events using six inputs that defined the events. Neuro-fuzzy coding was applied using the six extracted features as ANFIS inputs. Two types of events were defined: weak earthquakes and mining blasts. The data comprised 748 events (6289 signals) ranging from magnitude 1.1 to 4.6, recorded at 13 seismic stations between 2004 and 2009; approximately 223 earthquakes with M ≤ 2.2 are included in this database. Data sets from the south, east, and southeast of the city of Tehran were used to evaluate the best short-period seismic discriminants, and features such as origin time of the event, distance (source to station), latitude of the epicenter, longitude of the epicenter, magnitude, and spectral analysis (fc of the Pg wave) were used as inputs, increasing the rate of correct classification and decreasing the confusion rate between weak earthquakes and quarry blasts. The performance of the ANFIS model was evaluated for training and classification accuracy. The results confirmed that the proposed ANFIS model has good potential for discriminating seismic events.
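
    No ANFIS implementation ships with the common Python machine-learning libraries, so the sketch below substitutes a decision tree purely to illustrate the same six-feature event discrimination; the feature values are synthetic and the classifier is a stand-in, not the paper's method.

    ```python
    from sklearn.tree import DecisionTreeClassifier

    # Stand-in for the ANFIS classifier: feature order follows the abstract
    # (origin time, source-station distance, latitude, longitude, magnitude,
    # corner frequency fc of the Pg wave). Values are synthetic.
    X = [[10.5, 25.0, 35.7, 51.4, 2.1, 8.0],   # synthetic weak earthquake
         [12.0, 12.0, 35.6, 51.5, 1.8, 3.5]]   # synthetic quarry blast
    y = ["earthquake", "blast"]

    clf = DecisionTreeClassifier().fit(X, y)
    print(clf.predict([[11.0, 20.0, 35.7, 51.4, 2.0, 7.0]]))
    ```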

  13. Body composition estimation from selected slices: equations computed from a new semi-automatic thresholding method developed on whole-body CT scans

    Science.gov (United States)

    Villa, Chiara; Brůžek, Jaroslav

    2017-01-01

    Background: Estimating the volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy expenditure and exercise physiology. While several equations have been offered for estimating total body components from MRI slices, no reliable and tested method exists for CT scans. For the first time, body composition data were derived from 41 high-resolution whole-body CT scans. From these data, we defined equations for estimating the volumes and masses of total body adipose tissue (AT) and lean tissue (LT) from the corresponding tissue areas measured in selected CT scan slices. Methods: We present a new semi-automatic approach to defining the density cutoff between AT and LT in such material. An intra-class correlation coefficient (ICC) was used to validate the method. The equations for estimating whole-body composition volume and mass from areas measured in selected slices were modeled with ordinary least squares (OLS) linear regressions and support vector machine regression (SVMR). Results and Discussion: The best predictive equation for total body AT volume was based on the AT area of a single slice located between the 4th and 5th lumbar vertebrae (L4-L5) and produced lower prediction errors (|PE| = 1.86 liters, %PE = 8.77) than previous equations also based on CT scans. The LT area of the mid-thigh provided the lowest prediction errors (|PE| = 2.52 liters, %PE = 7.08) for estimating whole-body LT volume. We also present equations to predict total body AT and LT masses from a slice located at L4-L5 that resulted in reduced error compared with previously published equations based on CT scans. The multislice SVMR predictor gave the theoretical upper limit for the prediction precision of volumes and cross-validated the results. PMID:28533960
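
    A sketch of the two steps under stated assumptions: a typical adipose HU window of about -190 to -30 (a common CT convention, not necessarily the cutoff derived in the paper) and an illustrative OLS fit of whole-body AT volume on the L4-L5 slice area. All numbers are invented for the example.

    ```python
    import numpy as np

    # Step 1: separate AT from LT on a CT slice by a HU window (typical
    # adipose range is roughly -190 to -30 HU; the paper derives its own cutoff).
    def at_area(slice_hu, lower=-190, upper=-30, pixel_mm2=1.0):
        mask = (slice_hu > lower) & (slice_hu < upper)
        return mask.sum() * pixel_mm2 / 100.0          # area in cm^2

    # Step 2: OLS regression of whole-body AT volume on the L4-L5 AT area.
    areas = np.array([180.0, 220.0, 260.0, 310.0])     # L4-L5 AT areas (cm^2), invented
    volumes = np.array([18.5, 23.0, 27.2, 33.1])       # whole-body AT (litres), invented
    slope, intercept = np.polyfit(areas, volumes, 1)   # OLS fit
    print(slope * 240.0 + intercept)                   # predict volume for a new scan
    ```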

  14. Semi-automatic determination of dips and depths of geologic contacts from magnetic data with application to the Turi Fault System, Taranaki Basin, New Zealand

    Science.gov (United States)

    Caratori Tontini, Fabio; Blakely, Richard J.; Stagpoole, Vaughan; Seebeck, Hannu

    2018-03-01

    We present a simple and fast method for calculating the geometric parameters of magnetic contacts from the spatial gradients of magnetic field data. The method is based on well-established properties of the tangent of the tilt angle of reduced-to-the-pole magnetic data, and extends the performance of existing methods by allowing direct estimation of the depths, locations and dips of magnetic contacts. It uses a semi-automatic approach in which the user interactively specifies the points on magnetic maps where the calculation is to be performed. Some prior geologic knowledge and visual interpretation of the magnetic anomalies is required to choose proper calculation points. We successfully tested the method on synthetic models of contacts at different depths and with different dip angles. We offer an example of the method applied to airborne magnetic data from the Taranaki Basin, located offshore of the North Island of New Zealand.
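
    For context, the reduced-to-the-pole tilt angle on which such methods are built is conventionally defined as below (a standard definition, not an equation reproduced from this paper). In the classic tilt-depth variant, a contact lies where the tilt passes through zero, and its depth is roughly half the horizontal distance between the +45 and -45 degree contours.

    ```latex
    % Tilt angle of the reduced-to-the-pole anomaly T (standard definition)
    \theta = \arctan\!\left(
      \frac{\partial T/\partial z}
           {\sqrt{\left(\partial T/\partial x\right)^{2}
                + \left(\partial T/\partial y\right)^{2}}}
    \right)
    ```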

  15. A Semi-Automatic Variability Search

    Science.gov (United States)

    Maciejewski, G.; Niedzielski, A.

    Technical features of the Semi-Automatic Variability Search (SAVS) operating at the Astronomical Observatory of the Nicolaus Copernicus University and the results of the first year of observations are presented. The user-friendly software developed for reduction of acquired CCD images and detection of new variable stars is also described.

  16. Semi-automatic recognition of marine debris on beaches

    OpenAIRE

    Ge, Zhenpeng; Shi, Huahong; Mei, Xuefei; Dai, Zhijun; Li, Daoji

    2016-01-01

    An increasing amount of anthropogenic marine debris is pervading the earth's environmental systems, resulting in an enormous threat to living organisms. Additionally, the large amount of marine debris around the world has been investigated mostly through tedious manual methods. Therefore, we propose the use of a new technique, light detection and ranging (LIDAR), for the semi-automatic recognition of marine debris on a beach because of its substantially more efficient role in comparison with ...

  17. Semi-automatic object geometry estimation for image personalization

    Science.gov (United States)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Eschbach, Reiner; Bouman, Charles A.; Allebach, Jan P.

    2010-01-01

    Digital printing brings about a host of benefits, one of which is the ability to create short runs of variable, customized content. One form of customization that is receiving much attention lately is in photofinishing applications, whereby personalized calendars, greeting cards, and photo books are created by inserting text strings into images. It is particularly interesting to estimate the underlying geometry of the surface and incorporate the text into the image content in an intelligent and natural way. Current solutions either allow fixed text insertion schemes into preprocessed images, or provide manual text insertion tools that are time consuming and aimed only at the high-end graphic designer. It would thus be desirable to provide some level of automation in the image personalization process. We propose a semi-automatic image personalization workflow which includes two scenarios: text insertion and text replacement. In both scenarios, the underlying surfaces are assumed to be planar. A 3-D pinhole camera model is used for rendering text, whose parameters are estimated by analyzing existing structures in the image. Techniques in image processing and computer vision such as the Hough transform, the bilateral filter, and connected component analysis are combined, along with necessary user inputs. In particular, the semi-automatic workflow is implemented as an image personalization tool, which is presented in our companion paper.1 Experimental results including personalized images for both scenarios are shown, which demonstrate the effectiveness of our algorithms.

  18. Accessories for Enhancement of the Semi-Automatic Welding Processes

    National Research Council Canada - National Science Library

    Wheeler, Douglas M; Sawhill, James M

    2000-01-01

    ... The development of these accessories for work normally performed by the semi-automatic welding operator should significantly reduce operator hand-to-eye coordination requirements, thereby enhancing ...

  19. A web-based semi-automatic framework for astrobiological researches

    Directory of Open Access Journals (Sweden)

    P.V. Arun

    2013-12-01

    Full Text Available Astrobiology addresses the possibility of extraterrestrial life and explores measures towards its recognition. Research in this context is founded upon the premise that indicators of life encountered in space will be recognizable. However, effective recognition can be accomplished through a universal adaptation of life signatures, without restricting them solely to those attributes that represent local solutions to the challenges of survival. The life indicators should be modelled with reference to temporal and environmental variations specific to each planet and time. In this paper, we investigate a semi-automatic open-source framework for the accurate detection and interpretation of life signatures that facilitates public participation, in a similar way to the SETI@home project. The involvement of the public in identifying patterns can give the mission a thrust, and is implemented using a semi-automatic framework. Different advanced intelligent methodologies may augment the integration of this human-machine analysis. Automatic and manual evaluations, along with a dynamic learning strategy, have been adopted to provide accurate results. The system also helps to provide a deep public understanding of space agencies' work and facilitates mass involvement in astrobiological studies. It will surely help to motivate young eager minds to pursue a career in this field.

  20. Neuromantic - from semi-manual to semi-automatic reconstruction of neuron morphology

    Directory of Open Access Journals (Sweden)

    Darren Myatt

    2012-03-01

    Full Text Available The ability to create accurate geometric models of neuronal morphology is important for understanding the role of shape in information processing. Despite a significant amount of research on automating neuron reconstructions from image stacks obtained via microscopy, in practice most data are still collected manually. This paper describes Neuromantic, an open source system for three-dimensional digital tracing of neurites. Neuromantic reconstructions are comparable in quality to those of existing commercial and freeware systems while balancing the speed and accuracy of manual reconstruction. The combination of semi-automatic tracing, intuitive editing, and the ability to visualise large image stacks on standard computing platforms provides a versatile tool that can help address the reconstruction availability bottleneck. Practical considerations for reducing the computational time and space requirements of the extended algorithm are also discussed.

  1. Semi-automatic bowel wall thickness measurements on MR enterography in patients with Crohn's disease.

    Science.gov (United States)

    Naziroglu, Robiel E; Puylaert, Carl A J; Tielbeek, Jeroen A W; Makanyanga, Jesica; Menys, Alex; Ponsioen, Cyriel Y; Hatzakis, Haralambos; Taylor, Stuart A; Stoker, Jaap; van Vliet, Lucas J; Vos, Frans M

    2017-06-01

    ... facilitates reproducible delineation of regions with active Crohn's disease. The semi-automatic thickness measurement sustains significantly improved interobserver agreement. Advances in knowledge: Automation of bowel wall thickness measurements strongly increases the reproducibility of these measurements, which are commonly used in MRI scoring systems of Crohn's disease activity.

  2. United States Coast Guard (USCG) SSAMPS (Standard Semi-Automatic Message Processing System) Upgrade Network Studies Functional Analysis and Cost Report

    Science.gov (United States)

    1988-07-01

    80286 to an 80386-based CPU. Genicom Dot Matrix Printer: manufactured by Genicom, the dot matrix printer is a receive-only printer which handles all ... FAST System II: The USCG will receive the FAST System II in FY88 and FY89 to support MDZ operations. The FAST System II employs the CPT S 9000T CPU, and with the ... 9825 processor used in the SSAMPS uses a single-task CPU. Rather than delay SSAMPS message processing through the addition of the SRT communications ...

  3. Semi-automatic ROI placement system for analysis of brain PET images based on elastic model. Application to diagnosis of Alzheimer's disease

    Energy Technology Data Exchange (ETDEWEB)

    Ohyama, Masashi; Mishina, Masahiro; Kitamura, Shin; Katayama, Yasuo [Nippon Medical School, Tokyo (Japan); Senda, Michio; Tanizaki, Naoki; Ishii, Kenji

    2000-02-01

    PET with 18F-fluorodeoxyglucose (FDG) is a useful technique for imaging cerebral glucose metabolism and detecting patients with early-stage Alzheimer's disease, in which characteristic temporoparietal hypometabolism is visualized. We have developed a new system in which a standard brain ROI atlas made of networks of segments is elastically transformed to match the subject's brain images, so that standard ROIs defined on the segments are placed on the individual brain images and used to measure radioactivity over each brain region. We applied this method to Alzheimer's disease, using the images of 10 normal subjects (ages 55 ± 12) and 21 patients clinically diagnosed with Alzheimer's disease (ages 61 ± 10). FDG uptake, reflecting glucose metabolism, was evaluated with SUV, i.e. decay-corrected radioactivity divided by injected dose per body weight, in (Bq/ml)/(Bq/g). The system worked correctly in every subject, including those with extensive hypometabolism. Alzheimer patients showed markedly lower FDG uptake in the parietal cortex (4.0-4.1). When the threshold value of FDG uptake in the parietal lobe was set at 5 (Bq/ml)/(Bq/g), we could discriminate the patients with Alzheimer's disease from the normal subjects, with a sensitivity of 86% and a specificity of 90%. This system can assist in the diagnosis of FDG images and may be useful for handling data from a large number of subjects, e.g. when PET is applied to health screening. (author)
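
    The reported decision rule is a single SUV threshold. A sketch of how sensitivity and specificity follow from such a rule is shown below; the SUV values are illustrative and do not reproduce the study's 86%/90% figures.

    ```python
    # Threshold rule from the abstract: flag a subject as Alzheimer's when
    # parietal FDG uptake (SUV) falls below 5 (Bq/ml)/(Bq/g).
    def classify(parietal_suv, threshold=5.0):
        return parietal_suv < threshold          # True -> flagged as Alzheimer's

    patients = [4.0, 4.1, 4.5, 5.2]              # hypothetical AD subjects
    controls = [6.1, 5.8, 4.9, 6.4]              # hypothetical normal subjects
    sens = sum(classify(v) for v in patients) / len(patients)
    spec = sum(not classify(v) for v in controls) / len(controls)
    print(f"sensitivity={sens:.0%} specificity={spec:.0%}")
    ```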

  4. Research on Semi-automatic Bomb Fetching for an EOD Robot

    Directory of Open Access Journals (Sweden)

    Qian Jun

    2008-11-01

    Full Text Available An EOD robot system, SUPER-PLUS, which has a novel semi-automatic bomb fetching function, is presented in this paper. With limited human support, SUPER-PLUS scans the cluttered environment with a wrist-mounted laser distance sensor and plans a collision-free path for the manipulator to fetch the bomb. The model construction of the manipulator, bomb and environment, the C-space map, path planning and the operation procedure are introduced in detail. The semi-automatic bomb fetching function has greatly improved the operation performance of the EOD robot.
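
    As an illustration of the C-space planning step mentioned above, the following is a minimal sketch of collision-free path planning on a 2D configuration-space grid; it is not the SUPER-PLUS planner, and the grid, start and goal are assumptions for demonstration.

    ```python
    # Breadth-first search over free C-space cells (0 = free, 1 = obstacle).
    from collections import deque

    def plan_path(cspace, start, goal):
        rows, cols = len(cspace), len(cspace[0])
        parent = {start: None}          # doubles as the visited set
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:            # reconstruct the path back to start
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parent[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < rows and 0 <= nc < cols
                        and cspace[nr][nc] == 0 and (nr, nc) not in parent):
                    parent[(nr, nc)] = cell
                    queue.append((nr, nc))
        return None                     # no collision-free path exists

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(plan_path(grid, (0, 0), (2, 0)))
    ```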

  5. Research on Semi-Automatic Bomb Fetching for an EOD Robot

    Directory of Open Access Journals (Sweden)

    Zeng Jian-Jun

    2007-06-01

    Full Text Available An EOD robot system, SUPER-PLUS, which has a novel semi-automatic bomb fetching function, is presented in this paper. With limited human support, SUPER-PLUS scans the cluttered environment with a wrist-mounted laser distance sensor and plans a collision-free path for the manipulator to fetch the bomb. The model construction of the manipulator, bomb and environment, the C-space map, path planning and the operation procedure are introduced in detail. The semi-automatic bomb fetching function has greatly improved the operation performance of the EOD robot.

  6. Application of a semi-automatic cartilage segmentation method for biomechanical modeling of the knee joint.

    Science.gov (United States)

    Liukkonen, Mimmi K; Mononen, Mika E; Tanska, Petri; Saarakkala, Simo; Nieminen, Miika T; Korhonen, Rami K

    2017-10-01

    Manual segmentation of articular cartilage from knee joint 3D magnetic resonance images (MRI) is a time-consuming and laborious task. Thus, automatic methods are needed for faster and reproducible segmentations. In the present study, we developed a semi-automatic segmentation method based on radial intensity profiles to generate 3D geometries of knee joint cartilage, which were then used in computational biomechanical models of the knee joint. Six healthy volunteers were imaged with a 3T MRI device and their knee cartilages were segmented both manually and semi-automatically. The values of cartilage thicknesses and volumes produced by these two methods were compared. Furthermore, the influences of possible geometrical differences on cartilage stresses and strains in the knee were evaluated with finite element modeling. The semi-automatic segmentation and 3D geometry construction of one knee joint (menisci, femoral and tibial cartilages) were approximately two times faster than manual segmentation. Differences in cartilage thicknesses, volumes, contact pressures, stresses, and strains between the segmentation methods in femoral and tibial cartilage were mostly insignificant (p > 0.05) and random, i.e. there were no systematic differences between the methods. In conclusion, the devised semi-automatic segmentation method is a quick and accurate way to determine cartilage geometries; it may become a valuable tool for biomechanical modeling applications with large patient groups.
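
    The radial-intensity-profile idea can be sketched as follows: rays are cast from a seed point and the first intensity crossing along each ray is taken as a boundary point. The seed, threshold and contrast polarity below are illustrative assumptions, not parameters from the paper.

    ```python
    # Toy sketch: sample the image along radial rays from a seed and record
    # the first threshold crossing on each ray as a cartilage boundary point.
    import numpy as np

    def radial_boundary_points(image, seed, n_rays=360, threshold=0.5,
                               step=1.0, max_r=100.0):
        h, w = image.shape
        points = []
        for angle in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
            for r in np.arange(step, max_r, step):
                y = int(round(seed[0] + r * np.sin(angle)))
                x = int(round(seed[1] + r * np.cos(angle)))
                if not (0 <= y < h and 0 <= x < w):
                    break                      # ray left the image
                if image[y, x] > threshold:    # assumed bright cartilage
                    points.append((y, x))
                    break
        return points

    boundary = radial_boundary_points(np.random.rand(256, 256), seed=(128, 128))
    ```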

  7. Semi-Automatic Construction of Skeleton Concept Maps from Case Judgments

    NARCIS (Netherlands)

    Boer, A.; Sijtsma, B.; Winkels, R.; Lettieri, N.

    2014-01-01

    This paper proposes an approach to generating Skeleton Conceptual Maps (SCM) semi automatically from legal case documents provided by the United Kingdom’s Supreme Court. SCM are incomplete knowledge representations for the purpose of scaffolding learning. The proposed system intends to provide

  8. Semi-automatic recognition of marine debris on beaches.

    Science.gov (United States)

    Ge, Zhenpeng; Shi, Huahong; Mei, Xuefei; Dai, Zhijun; Li, Daoji

    2016-05-09

    An increasing amount of anthropogenic marine debris is pervading the earth's environmental systems, resulting in an enormous threat to living organisms. Additionally, the large amount of marine debris around the world has been investigated mostly through tedious manual methods. Therefore, we propose the use of a new technique, light detection and ranging (LIDAR), for the semi-automatic recognition of marine debris on a beach because of its substantially more efficient role in comparison with other more laborious methods. Our results revealed that LIDAR should be used for the classification of marine debris into plastic, paper, cloth and metal. Additionally, we reconstructed a 3-dimensional model of different types of debris on a beach with a high validity of debris revivification using LIDAR-based individual separation. These findings demonstrate that the availability of this new technique enables detailed observations to be made of debris on a large beach that was previously not possible. It is strongly suggested that LIDAR could be implemented as an appropriate monitoring tool for marine debris by global researchers and governments.

  9. Semi-automatic Term Extraction for the African Languages, with ...

    African Journals Online (AJOL)

    rbr

    in either English or Afrikaans, with textbooks on literature and grammar of the African languages a possible exception. ... (i.e. the analysis of raw corpora with WST, with the aim of semi-automatically extracting terminology), the ... Conversely, since manual term excerption is of necessity subject to human error, the results of the ...

  10. Semi-automatic Term Extraction for the African Languages, with ...

    African Journals Online (AJOL)

    Worldwide, semi-automatically extracting terms from corpora is becoming the norm for the compilation of terminology lists, term banks or dictionaries for special purposes. If African-language terminologists are willing to take their rightful place in the new millennium, they must not only take cognisance of this trend but also be ...

  11. Semi-automatic segmentation of femur based on harmonic barrier.

    Science.gov (United States)

    Zou, Zheng; Liao, Sheng-Hui; Luo, San-Ding; Liu, Qing; Liu, Shi-Jian

    2017-05-01

    Segmentation of the femur from the hip joint in computed tomography (CT) is an important preliminary step in hip surgery planning and simulation. However, this is a time-consuming and challenging task due to the weak boundary, the varying topology of the hip joint, and the extremely narrow or blurred space between the femoral head and the acetabulum. To address these problems, this study proposed a semi-automatic segmentation framework based on harmonic fields for accurate segmentation. The proposed method comprises three steps. First, with high-level information provided by the user, shape information provided by neighboring slices and the statistical information in the mask, a region selection method is proposed to effectively locate the joint space for the harmonic field. Second, incorporating an improved gradient, the harmonic field is used to adaptively extract a curve as the barrier that accurately separates the femoral head from the acetabulum. Third, a divide-and-conquer segmentation strategy based on the harmonic barrier combines the femoral head part and body part into the final segmentation result. We tested 40 hips with considerably narrowed or vanished joint spaces. The experimental results were evaluated based on Jaccard, Dice, directional cut discrepancy (DCD) and receiver operating characteristic (ROC) analysis; we achieved a Jaccard of 84.02%, a Dice of 85.96%, an area under the curve (AUC) of 89.3%, and a low error with a DCD of 0.52 mm. The effective ratio of our method is 79.1% even for cases with severe malformation. The results show that our method performs best in terms of effectiveness and accuracy on the whole data set. The proposed method is efficient for segmenting femurs with narrow joint spaces. The accurate segmentation results can assist physicians in osteoarthritis diagnosis in the future. Copyright © 2017 Elsevier B.V. All rights reserved.
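
    A harmonic field of the kind the barrier is extracted from can be obtained by solving Laplace's equation with fixed boundary values. The sketch below uses plain Jacobi iteration; the boundary labelling (femoral side = 1, acetabular side = 0) and the grid setup are assumptions for illustration, not the paper's implementation.

    ```python
    # Minimal 2D harmonic field: Laplace's equation relaxed by Jacobi
    # iteration under Dirichlet constraints.
    import numpy as np

    def harmonic_field(mask, fixed, n_iter=2000):
        """mask: bool array, True where the field is solved.
        fixed: dict {(row, col): value} of Dirichlet constraints."""
        field = np.zeros(mask.shape)
        for (r, c), v in fixed.items():
            field[r, c] = v
        for _ in range(n_iter):
            new = field.copy()
            # four-neighbour average = discrete Laplace relaxation
            new[1:-1, 1:-1] = 0.25 * (field[:-2, 1:-1] + field[2:, 1:-1] +
                                      field[1:-1, :-2] + field[1:-1, 2:])
            new[~mask] = field[~mask]        # freeze cells outside the region
            for (r, c), v in fixed.items():  # re-impose the constraints
                new[r, c] = v
            field = new
        return field  # a mid-value level set can act as the separating barrier

    region = np.ones((64, 64), dtype=bool)
    barrier_field = harmonic_field(region, {(0, 32): 1.0, (63, 32): 0.0})
    ```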

  12. Semi-automatic person-annotation in context-aware personal photo collections

    OpenAIRE

    O'Hare, Neil

    2007-01-01

    Recent years have seen a revolution in photography with a move away from analogue film capture towards digital capture technologies, resulting m the accumulation of large numbers of personal digital photos. This means that people now have very large collections of their own personal photos, which they must manage and organise. In this thesis we present a prototype context-aware photo management system called MediAssist, which facilitates browsing, searching and semi-automatic annotati...

  13. Semi-Automatic Story Generation for a Geographic Server

    Directory of Open Access Journals (Sweden)

    Rizwan Mehmood

    2017-06-01

    Full Text Available Most existing servers providing geographic data tend to offer various numeric data. We started to work on a new type of geographic server, motivated by four major issues: (i) how to handle figures when different databases present different values; (ii) how to build up sizeable collections of pictures with detailed descriptions; (iii) how to update rapidly changing information, such as personnel holding important functions; and (iv) how to describe countries not just by using trivial facts, but stories typical of the country involved. We have discussed and partially resolved issues (i) and (ii) in previous papers; we have decided to deal with (iii), regional updates, by tying in an international consortium whose members would either help themselves or find individuals to do so. It is issue (iv), how to generate non-trivial stories typical of a country, that we decided to tackle both manually (the consortium has by now generated around 200 stories) and by developing techniques for semi-automatic story generation, which is the topic of this paper. The basic idea was first to define sets of reasonably reliable servers that may differ from region to region, to extract “interesting facts” from the servers, and to combine them in a raw version of a report that would require some manual cleaning-up (hence: semi-automatic). It may sound difficult to extract “interesting facts” from Web pages, but it is quite possible to define heuristics to do so, never exceeding the few lines allowed for quotation purposes. One very simple rule we adopted was this: ‘Look for sentences with superlatives!’ If a sentence contains words like “biggest”, “highest”, “most impressive” etc., it is likely to contain an interesting fact. With a little imagination, we have been able to establish a set of such rules. We will show that the stories can be completely different. For some countries, historical facts may dominate; for others, the beauty of landscapes; for
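
    The superlative heuristic quoted above is straightforward to prototype. A minimal sketch, with an assumed word list and example text:

    ```python
    # Keep only short sentences (within quotation-length limits) that
    # contain a superlative; word list and sample text are assumptions.
    import re

    SUPERLATIVES = re.compile(
        r"\b(biggest|largest|highest|longest|oldest|deepest|most\s+\w+)\b",
        re.IGNORECASE)

    def interesting_sentences(text, max_len=300):
        sentences = re.split(r"(?<=[.!?])\s+", text)
        return [s for s in sentences
                if len(s) <= max_len and SUPERLATIVES.search(s)]

    sample = ("Lake Baikal is the deepest lake on Earth. It was formed as a "
              "rift. It holds the largest volume of fresh water of any lake.")
    print(interesting_sentences(sample))
    ```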

  14. User Interaction in Semi-Automatic Segmentation of Organs at Risk: a Case Study in Radiotherapy.

    Science.gov (United States)

    Ramkumar, Anjana; Dolz, Jose; Kirisli, Hortense A; Adebahr, Sonja; Schimek-Jasch, Tanja; Nestle, Ursula; Massoptier, Laurent; Varga, Edit; Stappers, Pieter Jan; Niessen, Wiro J; Song, Yu

    2016-04-01

    Accurate segmentation of organs at risk is an important step in radiotherapy planning. Manual segmentation being a tedious procedure and prone to inter- and intra-observer variability, there is growing interest in automated segmentation methods. However, automatic methods frequently fail to provide satisfactory results, and post-processing corrections are often needed. Semi-automatic segmentation methods are designed to overcome these problems by combining physicians' expertise and computers' potential. This study evaluates two semi-automatic segmentation methods with different types of user interactions, named "strokes" and "contour", to provide insights into the role and impact of human-computer interaction. Two physicians participated in the experiment. In total, 42 case studies were carried out on five different types of organs at risk. For each case study, both the human-computer interaction process and the quality of the segmentation results were measured subjectively and objectively. Furthermore, different measures of the process and the results were correlated. A total of 36 quantifiable and ten non-quantifiable correlations were identified for each type of interaction. Among those pairs of measures, 20 for the contour method and 22 for the strokes method were strongly or moderately correlated, either directly or inversely. Based on those correlated measures, it is concluded that: (1) in the design of semi-automatic segmentation methods, user interactions need to be less cognitively challenging; (2) based on the observed workflows and preferences of physicians, there is a need for flexibility in the interface design; and (3) the correlated measures provide insights that can be used in improving user interaction design.

  15. Implementation of a microcontroller-based semi-automatic coagulator.

    Science.gov (United States)

    Chan, K; Kirumira, A; Elkateeb, A

    2001-01-01

    The coagulator is an instrument used in hospitals to detect clot formation as a function of time. Generally, these coagulators are very expensive and therefore not affordable for doctors' offices and small clinics. The objective of this project was to design and implement a low-cost semi-automatic coagulator (SAC) prototype. The SAC is capable of assaying up to 12 samples and can perform the following tests: prothrombin time (PT), activated partial thromboplastin time (APTT), and PT/APTT combination. The prototype has been tested successfully.

  16. Semi Automatic Ontology Instantiation in the domain of Risk Management

    Science.gov (United States)

    Makki, Jawad; Alquier, Anne-Marie; Prince, Violaine

    One of the challenging tasks in the context of Ontological Engineering is to automatically or semi-automatically support the process of Ontology Learning and Ontology Population from semi-structured documents (texts). In this paper we describe a Semi-Automatic Ontology Instantiation method from natural language text, in the domain of Risk Management. This method is composed of three steps: 1) annotation with part-of-speech tags, 2) semantic relation instance extraction, and 3) the ontology instantiation process. It is based on combined NLP techniques, with human intervention between steps 2 and 3 for control and validation. Since it relies heavily on linguistic knowledge, it is not domain-dependent, which is a good feature for portability between the different fields of risk management application. The proposed methodology uses the ontology of the PRIMA project (supported by the European Community) as a Generic Domain Ontology and populates it via an available corpus. A first validation of the approach is done through an experiment with Chemical Fact Sheets from the Environmental Protection Agency.
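
    As a toy illustration of steps 2 and 3, the sketch below extracts simple "X is a Y" relation instances from text and stages them for the human validation the method requires before instantiation; the pattern and example sentences are assumptions, not the PRIMA ontology or the authors' extraction rules.

    ```python
    # Pattern-based relation-instance extraction staged for human review.
    import re

    PATTERN = re.compile(r"\b([A-Z][\w-]*)\s+is\s+an?\s+(\w+(?:\s\w+)?)")

    def extract_relation_instances(text):
        return [{"instance": m.group(1), "class": m.group(2)}
                for m in PATTERN.finditer(text)]

    candidates = extract_relation_instances(
        "Benzene is a chemical hazard. Flooding is an operational risk.")
    for c in candidates:
        # in the described workflow, a human validates each candidate
        # before the ontology is actually populated
        print(f"validate before instantiation: {c['instance']} -> {c['class']}")
    ```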

  17. Does semi-automatic bone-fragment segmentation improve the reproducibility of the Letournel acetabular fracture classification?

    Science.gov (United States)

    Boudissa, M; Orfeuvre, B; Chabanas, M; Tonetti, J

    2017-09-01

    The Letournel classification of acetabular fracture shows poor reproducibility in inexperienced observers, despite the introduction of 3D imaging. We therefore developed a method of semi-automatic segmentation based on CT data. The present prospective study aimed to assess: (1) whether semi-automatic bone-fragment segmentation increased the rate of correct classification; (2) if so, in which fracture types; and (3) feasibility using the open-source itksnap 3.0 software package without incurring extra cost for users. Semi-automatic segmentation of acetabular fractures significantly increases the rate of correct classification by orthopedic surgery residents. Twelve orthopedic surgery residents classified 23 acetabular fractures. Six used conventional 3D reconstructions provided by the center's radiology department (conventional group) and 6 others used reconstructions obtained by semi-automatic segmentation using the open-source itksnap 3.0 software package (segmentation group). Bone fragments were identified by specific colors. Correct classification rates were compared between groups on the Chi2 test. Assessment was repeated 2 weeks later to determine intra-observer reproducibility. Correct classification rates were significantly higher in the "segmentation" group: 114/138 (83%) versus 71/138 (52%); P < 0.05. Mean segmentation time per fracture was 27 ± 3 min [range, 21-35 min]. The segmentation group showed excellent intra-observer correlation coefficients, overall (ICC = 0.88), and for simple (ICC = 0.92) and complex fractures (ICC = 0.84). Semi-automatic segmentation, identifying the various bone fragments, was effective in increasing the rate of correct acetabular fracture classification on the Letournel system by orthopedic surgery residents. It may be considered for routine use in education and training. Level of evidence: III, prospective case-control study of a diagnostic procedure. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  18. Reproducibility of semi-automatic coronary plaque quantification in coronary CT angiography with sub-mSv radiation dose

    DEFF Research Database (Denmark)

    Øvrehus, Kristian Altern; Schuhbaeck, Annika; Marwan, Mohamed

    2015-01-01

    or response to medical therapies. The reproducibility from repeated assessment of such quantitative measurements from low-radiation-dose coronary CTA has not been previously assessed. Purpose: To evaluate the interscan, interobserver and intraobserver reproducibility for coronary plaque volume assessment ... using a semi-automatic plaque analysis algorithm in low-radiation-dose coronary CTA. Methods: In 50 consecutive patients undergoing two 128-slice dual-source CT scans within 12 days with a mean radiation dose of 0.7 mSv per coronary CTA, the interscan, interobserver and intraobserver reproducibility ... 6% and +/- 32.1%, respectively. Conclusion: A semi-automatic plaque assessment algorithm in repeated low-radiation-dose coronary CTA allows for high reproducibility of coronary plaque characterization and quantification measures. (C) 2016 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc.

  19. Semi-automatic 10/20 Identification Method for MRI-Free Probe Placement in Transcranial Brain Mapping Techniques.

    Science.gov (United States)

    Xiao, Xiang; Zhu, Hao; Liu, Wei-Jie; Yu, Xiao-Ting; Duan, Lian; Li, Zheng; Zhu, Chao-Zhe

    2017-01-01

    The International 10/20 system is an important head-surface-based positioning system for transcranial brain mapping techniques, e.g., fNIRS and TMS. As guidance for probe placement, the 10/20 system permits both proper ROI coverage and spatial consistency among multiple subjects and experiments in an MRI-free context. However, the traditional manual approach to the identification of 10/20 landmarks faces problems in reliability and time cost. In this study, we propose a semi-automatic method to address these problems. First, a novel head-surface reconstruction algorithm reconstructs the head geometry from a set of points sampled uniformly and sparsely on the subject's head. Second, virtual 10/20 landmarks are determined on the reconstructed head surface in computational space. Finally, a visually-guided real-time navigation system guides the experimenter to each of the identified 10/20 landmarks on the physical head of the subject. Compared with the traditional manual approach, our proposed method provides a significant improvement both in reliability and time cost and thus could contribute to improving both the effectiveness and efficiency of 10/20-guided MRI-free probe placement.

  20. Semi-automatic tool to ease the creation and optimization of GPU programs

    DEFF Research Database (Denmark)

    Jepsen, Jacob

    2014-01-01

    We present a tool that reduces the development time of GPU-executable code. We implement a catalogue of common optimizations specific to the GPU architecture. Through the tool, the programmer can semi-automatically transform a computationally-intensive code section into GPU-executable form ... and apply optimizations thereto. Based on experiments, the code generated by the tool can be 3-256X faster than code generated by an OpenACC compiler, 4-37X faster than optimized CPU code, and attain up to 25% of peak performance of the GPU. We found that by using pattern-matching rules, many ... of the transformations can be performed automatically, which makes the tool usable for both novices and experts in GPU programming.

  1. Sherlock: A Semi-automatic Framework for Quiz Generation Using a Hybrid Semantic Similarity Measure.

    Science.gov (United States)

    Lin, Chenghua; Liu, Dong; Pang, Wei; Wang, Zhe

    In this paper, we present a semi-automatic system (Sherlock) for quiz generation using linked data and textual descriptions of RDF resources. Sherlock is distinguished from existing quiz generation systems in its generic framework for domain-independent quiz generation as well as in the ability of controlling the difficulty level of the generated quizzes. Difficulty scaling is non-trivial, and it is fundamentally related to cognitive science. We approach the problem with a new angle by perceiving the level of knowledge difficulty as a similarity measure problem and propose a novel hybrid semantic similarity measure using linked data. Extensive experiments show that the proposed semantic similarity measure outperforms four strong baselines with more than 47 % gain in clustering accuracy. In addition, we discovered in the human quiz test that the model accuracy indeed shows a strong correlation with the pairwise quiz similarity.
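
    A hybrid measure in the spirit of Sherlock can be sketched as a weighted blend of a structural (linked-data) similarity and a textual one. The Jaccard/cosine combination and the weight alpha below are illustrative assumptions; the paper's actual measure is more elaborate.

    ```python
    # Weighted blend of a set-based similarity over linked-data properties
    # and a bag-of-words cosine similarity over textual descriptions.
    from collections import Counter
    import math

    def jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0

    def cosine(text_a: str, text_b: str) -> float:
        va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
        dot = sum(va[t] * vb[t] for t in va)
        na = math.sqrt(sum(v * v for v in va.values()))
        nb = math.sqrt(sum(v * v for v in vb.values()))
        return dot / (na * nb) if na and nb else 0.0

    def hybrid_similarity(props_a, props_b, desc_a, desc_b, alpha=0.6):
        return alpha * jaccard(props_a, props_b) + (1 - alpha) * cosine(desc_a, desc_b)

    print(hybrid_similarity({"dbo:country", "dbo:capital"}, {"dbo:country"},
                            "capital city in Europe", "largest city in Europe"))
    ```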

  2. A NEW APPROACH FOR THE SEMI-AUTOMATIC TEXTURE GENERATION OF THE BUILDINGS FACADES, FROM TERRESTRIAL LASER SCANNER DATA

    Directory of Open Access Journals (Sweden)

    E. Oniga

    2012-07-01

    Full Text Available The result of terrestrial laser scanning is an impressive number of spatial points, each of them characterized by its position (the X, Y and Z co-ordinates), by the value of the laser reflectance, and by its real color expressed as RGB (Red, Green, Blue) values. The color code for each LIDAR point is taken from the georeferenced digital images, taken with a high-resolution panoramic camera incorporated in the scanner system. In this article I propose a new algorithm for semi-automatic texture generation, using the color information (the RGB values) of every point acquired by terrestrial laser scanning technology and the 3D surfaces defining the building facades, generated with the Leica Cyclone software. In the first step, the operator defines the limiting value, i.e. the minimum distance between a point and the closest surface. The second step consists in calculating the distances, i.e. the perpendiculars drawn from each point to the closest surface. In the third step we associate the points, whose 3D coordinates are known, to the closest surface, depending on the limiting value. The fourth step consists in computing the Voronoi diagram for the points that belong to a surface. The final step brings automatic association between the RGB value of the color code and the corresponding polygon of the Voronoi diagram. The advantage of using this algorithm is that we can obtain, in a semi-automatic manner, a photorealistic 3D model of the building.

  3. a New Approach for the Semi-Automatic Texture Generation of the Buildings Facades, from Terrestrial Laser Scanner Data

    Science.gov (United States)

    Oniga, E.

    2012-07-01

    The result of terrestrial laser scanning is an impressive number of spatial points, each of them characterized by its position (the X, Y and Z co-ordinates), by the value of the laser reflectance, and by its real color expressed as RGB (Red, Green, Blue) values. The color code for each LIDAR point is taken from the georeferenced digital images, taken with a high-resolution panoramic camera incorporated in the scanner system. In this article I propose a new algorithm for semi-automatic texture generation, using the color information (the RGB values) of every point acquired by terrestrial laser scanning technology and the 3D surfaces defining the building facades, generated with the Leica Cyclone software. In the first step, the operator defines the limiting value, i.e. the minimum distance between a point and the closest surface. The second step consists in calculating the distances, i.e. the perpendiculars drawn from each point to the closest surface. In the third step we associate the points, whose 3D coordinates are known, to the closest surface, depending on the limiting value. The fourth step consists in computing the Voronoi diagram for the points that belong to a surface. The final step brings automatic association between the RGB value of the color code and the corresponding polygon of the Voronoi diagram. The advantage of using this algorithm is that we can obtain, in a semi-automatic manner, a photorealistic 3D model of the building.
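
    The last two steps of the algorithm amount to a discrete Voronoi colouring: each texture pixel takes the RGB value of the nearest projected LIDAR point. A minimal sketch, assuming the points have already been projected into the facade plane's (u, v) texture space:

    ```python
    # Nearest-seed lookup per pixel = discrete Voronoi-diagram colouring.
    import numpy as np
    from scipy.spatial import cKDTree

    def voronoi_texture(uv_points, rgb, width, height):
        """uv_points: (N, 2) pixel positions; rgb: (N, 3) colours per point."""
        tree = cKDTree(uv_points)
        u, v = np.meshgrid(np.arange(width), np.arange(height))
        pixels = np.column_stack([u.ravel(), v.ravel()])
        _, nearest = tree.query(pixels)      # Voronoi cell = nearest seed
        return rgb[nearest].reshape(height, width, 3)

    pts = np.array([[2, 2], [10, 8]])
    cols = np.array([[255, 0, 0], [0, 0, 255]])
    texture = voronoi_texture(pts, cols, width=16, height=12)
    ```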

  4. Semi-automatic creation and exploitation of competence ontologies for trend aware profiling, matching and planning

    Directory of Open Access Journals (Sweden)

    H. Ulrich Hoppe

    2013-03-01

    Full Text Available Human resource managers are confronted with the problem that they have to fulfil the enterprise's competence needs either by developing their current staff or by recruiting new employees. In both cases, decisions must be made about whom to select for the new position and, more importantly, about which competences are crucial for future success. This is especially true for highly dynamic industries like the IT industry. This article presents our work from the KoPIWA project in the Digital Economy. Our approach is based on a conceptual model that encompasses the market level, the social context and the relations between competences. This model is the foundation for the ontology-based decision support system for human resource managers presented in this article. To semi-automatically create and update the competence ontology, methods from the areas of data mining, social network analysis and information retrieval are employed. The results of these methods with regard to recruiting and learning processes are presented.

  5. Semi-automatic Road Extraction from SAR images using EKF and PF

    Science.gov (United States)

    Zhao, J. Q.; Yang, J.; Li, P. X.; Lu, J. M.

    2015-06-01

    Recently, the use of linear features for processing remote sensing images has shown its importance in applications. As a typical linear target, roads are a hot spot of remote sensing image interpretation. Since extracting roads by manual processing is too expensive and time-consuming, research on automatic and semi-automatic approaches has become more and more popular. Such interest is motivated by the requirements of civilian and military applications, such as road maps, traffic monitoring, navigation applications, and topographic mapping. How to extract roads accurately and efficiently from SAR images is a key problem. In this paper, after analyzing the characteristics of roads, semi-automatic road extraction based on Extended Kalman Filtering (EKF) and Particle Filtering (PF) is presented. The two methods share the same algorithm flow, an iterative approach based on prediction and update. The procedure is as follows: at the prediction stage, we obtain the prior probability density function from the previous stage and the prediction model; at the update stage, using the prior probability density function and the new measurement, we obtain the posterior probability density function, which is the optimal estimate of the road system state. Both EKF and PF repeat these steps until the extraction task is finished. We used the two methods to extract roads respectively, and the effectiveness of the proposed method is demonstrated through experiments on L-band UAVSAR data from Howland. Through contrast experiments, we discovered that matching the method to the complexity of the road can improve accuracy and efficiency: the results show that EKF performs better on roads with moderate noise and PF performs better on roads with high noise.
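
    The shared predict/update loop can be sketched with a linear Kalman filter; the paper's EKF linearises the motion and measurement models around the current estimate, and the PF replaces the Gaussian posterior with weighted samples. All matrices below are illustrative assumptions.

    ```python
    # One predict/update cycle of a (linear) Kalman filter over a road state.
    import numpy as np

    def kf_step(x, P, z, F, H, Q, R):
        # Prediction: propagate the prior road state through the motion model.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update: correct the prediction with the new road measurement z.
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    x, P = np.zeros(2), np.eye(2)                 # initial road-point estimate
    F = H = np.eye(2)
    Q, R = 0.01 * np.eye(2), 0.1 * np.eye(2)
    x, P = kf_step(x, P, z=np.array([0.4, 0.2]), F=F, H=H, Q=Q, R=R)
    ```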

  6. Semi-automatic quantitative measurements of intracranial internal carotid artery stenosis and calcification using CT angiography

    NARCIS (Netherlands)

    Bleeker, Leslie; Marquering, Henk A.; van den Berg, René; Nederkoorn, Paul J.; Majoie, Charles B.

    2012-01-01

    Intracranial carotid artery atherosclerotic disease is an independent predictor for recurrent stroke. However, its quantitative assessment is not routinely performed in clinical practice. In this diagnostic study, we present and evaluate a novel semi-automatic application to quantitatively measure

  7. Complications in CT-guided, semi-automatic coaxial core biopsy of potentially malignant pulmonary lesions; Komplikationen bei CT-gesteuerter, koaxialer Stanzbiopsie malignomverdaechtiger Lungenherde in halbautomatischer Technik

    Energy Technology Data Exchange (ETDEWEB)

    Schulze, R. [Klinik Loewenstein (Germany). Dept. of Radiology; Seebacher, G.; Enderes, B.; Kugler, G.; Graeter, T.P. [Klinik Loewenstein (Germany). Dept. of Thoracic and Vascular Surgery; Fischer, J.R. [Klinik Loewenstein (Germany). Dept. of Oncology

    2015-08-15

    Histological verification of pulmonary lesions is important to ensure correct treatment. Computed tomographic (CT) transthoracic core biopsy is a well-established procedure for this. Comparison of available studies is difficult though, as technical and patient characteristics vary. Using a standardized biopsy technique, we evaluated our results for CT-guided coaxial core biopsy in a semi-automatic technique. Within 2 years, 664 consecutive transpulmonary biopsies were analyzed retrospectively. All interventions were performed using a 17/18G semi-automatic core biopsy system (4 to 8 specimens). The incidence of complications and technical and patient-dependent risk factors were evaluated. Comparing the histology with the final diagnosis, the sensitivity was 96.3 %, and the specificity was 100 %. 24 procedures were not diagnostic. In all others immunohistological staining was possible. The main complication was pneumothorax (PT, 21.7 %), with chest tube insertion in 6 % of the procedures (n = 40). Bleeding without therapeutic consequences was seen in 43 patients. There was no patient mortality. The rate of PT with chest tube insertion was 9.6 % in emphysema patients and 2.8 % without emphysema (p = 0.001). Smokers with emphysema had a 5 times higher risk of developing PT (p = 0.001). Correlation of tumor size or biopsy angle and the risk of PT was not significant. The risk of developing a PT was associated with an increasing intrapulmonary depth of the lesion (p = 0.001). CT-guided, semiautomatic coaxial core biopsy of the lung is a safe diagnostic procedure. The rate of major complications is low, and the sensitivity and specificity of the procedure are high. Smokers with emphysema are at a significantly higher risk of developing pneumothorax and should be monitored accordingly.

  8. A semi-automatic system for labelling seafood products and ...

    African Journals Online (AJOL)

    STORAGESEVER

    2010-05-10

    May 10, 2010 ... commission for the Mediterranean; FAO, food and agriculture organization; NMEA, national marine ... fishery including the packaging and stacking of fish products until the vessel reaches the harbor. The planning of the ... Chitosan - a natural, cationic biopolymer: commercial applications. In: Yapalma M ...

  9. A semi-automatic system for labelling seafood products and ...

    African Journals Online (AJOL)

    STORAGESEVER

    2010-05-10

    May 10, 2010 ... most significant step in the evolution of market economies. The wide use of information ... data daily via a satellite modem connected to the internet to access a file transfer protocol (FTP) server. All settings ... provider, username and password for internet and FTP access) are pre-set on the DTMS software ...

  10. Pulmonary lobar volumetry using novel volumetric computer-aided diagnosis and computed tomography.

    Science.gov (United States)

    Iwano, Shingo; Kitano, Mariko; Matsuo, Keiji; Kawakami, Kenichi; Koike, Wataru; Kishimoto, Mariko; Inoue, Tsutomu; Li, Yuanzhong; Naganawa, Shinji

    2013-07-01

    To compare the accuracy of pulmonary lobar volumetry using the conventional number-of-segments method and novel volumetric computer-aided diagnosis using 3D computed tomography images. We acquired 50 consecutive preoperative 3D computed tomography examinations for lung tumours, reconstructed at 1-mm slice thickness. We calculated the lobar volume and the emphysematous lobar volume using computer-aided diagnosis. We determined Pearson correlation coefficients between the reference standard and the three other methods for lobar volumes and emphysematous lobar volumes. We also compared the relative errors among the three measurement methods. Both semi-automatic and automatic computer-aided diagnosis results were more strongly correlated with the reference standard than the number-of-segments method. The correlation coefficients for automatic computer-aided diagnosis were slightly lower than those for semi-automatic computer-aided diagnosis because there was one outlier among 50 cases (2%) in the right upper lobe and two outliers among 50 cases (4%) in the other lobes. The relative error of the number-of-segments method was significantly greater than those of semi-automatic and automatic computer-aided diagnosis (P < 0.05); the relative error of automatic computer-aided diagnosis was 1/2 to 2/3 that of semi-automatic computer-aided diagnosis. A novel lobar volumetry computer-aided diagnosis system could measure lobar volumes more precisely than the conventional number-of-segments method. Because semi-automatic computer-aided diagnosis and automatic computer-aided diagnosis were complementary, in clinical use it would be more practical to first measure volumes by automatic computer-aided diagnosis, and then use semi-automatic measurements if automatic computer-aided diagnosis failed.
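
    The evaluation described above reduces to a Pearson correlation against the reference standard plus per-lobe relative errors; a minimal sketch with invented volume values for illustration:

    ```python
    # Correlate one method's lobar volumes with the reference standard and
    # report the mean relative error; the arrays are illustrative only.
    import numpy as np

    reference = np.array([1.21, 0.98, 1.45, 1.10, 0.87])   # litres
    candidate = np.array([1.18, 1.02, 1.41, 1.15, 0.90])

    r = np.corrcoef(reference, candidate)[0, 1]            # Pearson r
    rel_err = np.abs(candidate - reference) / reference
    print(f"r = {r:.3f}, mean relative error = {rel_err.mean():.1%}")
    ```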

  11. Semi-automatic delineation using weighted CT-MRI registered images for radiotherapy of nasopharyngeal cancer

    Energy Technology Data Exchange (ETDEWEB)

    Fitton, I. [European Georges Pompidou Hospital, Department of Radiology, 20 rue Leblanc, 75015, Paris (France); Cornelissen, S. A. P. [Image Sciences Institute, UMC, Department of Radiology, P.O. Box 85500, 3508 GA Utrecht (Netherlands); Duppen, J. C.; Rasch, C. R. N.; Herk, M. van [The Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Department of Radiotherapy, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands); Steenbakkers, R. J. H. M. [University Medical Center Groningen, Department of Radiation Oncology, Hanzeplein 1, 9713 GZ Groningen (Netherlands); Peeters, S. T. H. [UZ Gasthuisberg, Herestraat 49, 3000 Leuven, Belgique (Belgium); Hoebers, F. J. P. [Maastricht University Medical Center, Department of Radiation Oncology (MAASTRO clinic), GROW School for Oncology and Development Biology Maastricht, 6229 ET Maastricht (Netherlands); Kaanders, J. H. A. M. [UMC St-Radboud, Department of Radiotherapy, Geert Grooteplein 32, 6525 GA Nijmegen (Netherlands); Nowak, P. J. C. M. [ERASMUS University Medical Center, Department of Radiation Oncology,Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands)

    2011-08-15

    Purpose: To develop a delineation tool that refines physician-drawn contours of the gross tumor volume (GTV) in nasopharynx cancer, using combined pixel value information from x-ray computed tomography (CT) and magnetic resonance imaging (MRI) during delineation. Methods: Operator-guided delineation assisted by a so-called "snake" algorithm was applied on weighted CT-MRI registered images. The physician delineates a rough tumor contour that is continuously adjusted by the snake algorithm using the underlying image characteristics. The algorithm was evaluated on five nasopharyngeal cancer patients. Different linear weightings of CT and MRI were tested as input for the snake algorithm and compared according to contrast and tumor-to-noise ratio (TNR). The semi-automatic delineation was compared with manual contouring by seven experienced radiation oncologists. Results: A good compromise for TNR and contrast was obtained by weighting CT twice as strongly as MRI. The new algorithm did not notably reduce interobserver variability; it did, however, reduce the average delineation time by 6 min per case. Conclusions: The authors developed a user-driven tool for delineation and correction based on a snake algorithm and registered, weighted CT and MR images. The algorithm adds morphological information from CT during the delineation on MRI and accelerates the delineation task.
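
    The weighting the authors found optimal (CT twice as strong as MRI) can be sketched as a simple per-voxel blend of registered volumes; the min-max intensity normalisation below is an added assumption to make the weighting meaningful across modalities.

    ```python
    # Per-voxel 2:1 CT:MRI blend of registered, normalised volumes.
    import numpy as np

    def weighted_fusion(ct, mri, w_ct=2.0, w_mri=1.0):
        """ct, mri: registered arrays of identical shape."""
        norm = lambda img: (img - img.min()) / (np.ptp(img) or 1.0)
        return (w_ct * norm(ct) + w_mri * norm(mri)) / (w_ct + w_mri)

    fused = weighted_fusion(np.random.rand(64, 64), np.random.rand(64, 64))
    ```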

  12. Automatic ultrasound image enhancement for 2D semi-automatic breast-lesion segmentation

    Science.gov (United States)

    Lu, Kongkuo; Hall, Christopher S.

    2014-03-01

    Breast cancer is the fastest growing cancer, accounting for 29% of new cases in 2012, and the second leading cause of cancer death among women in the United States and worldwide. Ultrasound (US) has been used as an indispensable tool for breast cancer detection/diagnosis and treatment. In computer-aided assistance, lesion segmentation is a preliminary but vital step, but the task is quite challenging in US images, due to imaging artifacts that complicate detection and measurement of the suspect lesions. The lesions usually present with poor boundary features and vary significantly in size, shape, and intensity distribution between cases. Automatic methods are highly application dependent, while manual tracing methods are extremely time consuming and have a great deal of intra- and inter-observer variability. Semi-automatic approaches are designed to counterbalance the advantages and drawbacks of the automatic and manual methods. However, considerable user interaction might be necessary to ensure reasonable segmentation for a wide range of lesions. This work proposes an automatic enhancement approach to improve the boundary-searching ability of the live-wire method and to reduce the necessary user interaction while keeping the segmentation performance. Based on the results of segmentation of 50 2D breast lesions in US images, less user interaction is required to achieve the desired accuracy, i.e. > 80%, when auto-enhancement is applied for live-wire segmentation.

  13. Comparison of manual and semi-automatic measuring techniques in MSCT scans of patients with lymphoma: a multicentre study.

    Science.gov (United States)

    Höink, A J; Weßling, J; Koch, R; Schülke, C; Kohlhase, N; Wassenaar, L; Mesters, R M; D'Anastasi, M; Fabel, M; Wulff, A; Pinto dos Santos, D; Kießling, A; Graser, A; Dicken, V; Karpitschka, M; Bornemann, L; Heindel, W; Buerke, B

    2014-11-01

    Multicentre evaluation of the precision of semi-automatic 2D/3D measurements in comparison to manual, linear measurements of lymph nodes regarding their inter-observer variability in multi-slice CT (MSCT) of patients with lymphoma. MSCT data of 63 patients were interpreted before and after chemotherapy by one/two radiologists in five university hospitals. In 307 lymph nodes, short (SAD)/long (LAD) axis diameter and WHO area were determined manually and semi-automatically. Volume was calculated solely semi-automatically. To determine the precision of the individual parameters, a mean was calculated for every lymph node/parameter. Deviation of the measured parameters from this mean was evaluated separately. Statistical analysis entailed intraclass correlation coefficients (ICC) and Kruskal-Wallis tests. Median relative deviations of semi-automatic parameters were smaller than deviations of manually assessed parameters, e.g. semi-automatic SAD 5.3 vs. manual 6.5 %. Median variations among different study sites were smaller if the measurement was conducted semi-automatically, e.g. manual LAD 5.7/4.2 % vs. semi-automatic 3.4/3.4 %. Semi-automatic volumetry was superior to the other parameters (2.8 %). Semi-automatic determination of different lymph node parameters is (compared to manually assessed parameters) associated with a slightly greater precision and a marginally lower inter-observer variability. These results are of importance with regard to the increasing mobility of patients among different medical centres and to the quality management of multicentre trials. • In a multicentre setting, semi-automatic measurements are more accurate than manual assessments. • Lymph node volumetry outperforms all other semi-automatically and manually performed measurements. • Use of semi-automatic lymph node analyses can reduce the inter-observer variability.

  14. Automatic vs semi-automatic global cardiac function assessment using 64-row CT

    Science.gov (United States)

    Greupner, J; Zimmermann, E; Hamm, B; Dewey, M

    2012-01-01

    Objective: Global cardiac function assessment using multidetector CT (MDCT) is time-consuming. Therefore we sought to compare an automatic software tool with an established semi-automatic method. Methods: A total of 36 patients underwent CT with 64×0.5 mm detector collimation, and global left ventricular function was subsequently assessed by two independent blinded readers using both an automatic region-growing-based software tool (with and without manual adjustment) and an established semi-automatic software tool. We also analysed automatic motion mapping to identify end-systole. Results: The time needed for assessment using the semi-automatic approach (12:12±6:19 min) was reduced by 75-85% with the automatic software tool (unadjusted, 01:34±0:29 min; adjusted, 02:53±1:19 min; both p < 0.05). Ejection fraction (EF) was comparable between the automatic (58.6±14.9%) and the semi-automatic (58.0±15.3%) approaches. Also, the manually adjusted automatic approach led to significantly smaller limits of agreement than the unadjusted automatic approach for end-diastolic volume (±36.4 ml vs ±58.5 ml, p < 0.05). Using motion mapping to automatically identify end-systole reduced analysis time by 95% compared with the semi-automatic approach, but showed inferior precision for EF and end-systolic volume. Conclusion: Automatic function assessment using MDCT with manual adjustment shows good agreement with an established semi-automatic approach, while reducing the analysis time by 75% to less than 3 min. This suggests that automatic CT function assessment with manual correction may be used for fast, comfortable and reliable evaluation of global left ventricular function. PMID:22045953

  15. Semi-Automatic Operational Service for Drought Monitoring and Forecasting in the Tuscany Region

    Directory of Open Access Journals (Sweden)

    Ramona Magno

    2018-02-01

    Full Text Available A drought-monitoring and forecasting system developed for the Tuscany region was improved in order to provide a semi-automatic, more detailed, timely and comprehensive operational service for decision making, water authorities, researchers and general stakeholders. Ground-based and satellite data from different sources (the regional meteorological station network, the MODIS Terra satellite and the CHIRPS/CRU precipitation datasets) are integrated through an open-source, interoperable SDI (spatial data infrastructure) based on PostgreSQL/PostGIS to produce vegetation and precipitation indices that allow the occurrence and evolution of a drought event to be followed. The SDI allows the dissemination of comprehensive, up-to-date and customizable information suitable for different end-users through different channels, from a web page and monthly bulletins to interoperable web services and a comprehensive climate service. The web services allow geospatial elaborations on the fly, and the geo-database can be extended with new input/output data to respond to specific requests or to increase the spatial resolution.

  16. Towards semi-automatic rock mass discontinuity orientation and set analysis from 3D point clouds

    Science.gov (United States)

    Guo, Jiateng; Liu, Shanjun; Zhang, Peina; Wu, Lixin; Zhou, Wenhui; Yu, Yinan

    2017-06-01

    Obtaining accurate information on rock mass discontinuities for deformation analysis and the evaluation of rock mass stability is important. Obtaining measurements for high and steep zones with the traditional compass method is difficult. Photogrammetry, three-dimensional (3D) laser scanning and other remote sensing methods have gradually become mainstream. In this study, a method based on a 3D point cloud is proposed to semi-automatically extract rock mass structural plane information. The original data are pre-treated prior to segmentation by removing outlier points. The next step is to segment the point cloud into different point subsets. Various parameters, such as the normal, dip direction and dip, can be calculated for each point subset after obtaining the equation of the best-fit plane for the relevant point subset. A cluster analysis (a point subset that satisfies some conditions and thus forms a cluster) is performed based on the normal vectors by introducing the firefly algorithm (FA) and the fuzzy c-means (FCM) algorithm. Finally, clusters that belong to the same discontinuity sets are merged and coloured for visualization purposes. A prototype system is developed based on this method to extract the points of the rock discontinuity from a 3D point cloud. A comparison with existing software shows that this method is feasible. This method can provide a reference for rock mechanics, 3D geological modelling and other related fields.
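
    The per-subset orientation step can be sketched as a least-squares plane fit via SVD followed by conversion of the plane normal to dip and dip direction (with z assumed vertical); this is a generic formulation, not the prototype system's code.

    ```python
    # Fit a plane to a point subset and derive dip / dip direction.
    import numpy as np

    def fit_plane_orientation(points):
        """points: (N, 3) subset from the segmented cloud."""
        centred = points - points.mean(axis=0)
        # The right singular vector with the smallest singular value is the
        # direction of least variance, i.e. the plane normal.
        normal = np.linalg.svd(centred)[2][-1]
        if normal[2] < 0:                        # orient the normal upwards
            normal = -normal
        dip = np.degrees(np.arccos(normal[2]))   # angle from horizontal
        dip_direction = np.degrees(np.arctan2(normal[0], normal[1])) % 360
        return dip, dip_direction

    # Synthetic planar subset (z = x) dipping 45 degrees, for illustration
    u, v = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
    pts = np.column_stack([u.ravel(), v.ravel(), u.ravel()])
    print(fit_plane_orientation(pts))
    ```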

  17. Pulmonary subsolid nodules: value of semi-automatic measurement in diagnostic accuracy, diagnostic reproducibility and nodule classification agreement.

    Science.gov (United States)

    Kim, Hyungjin; Park, Chang Min; Hwang, Eui Jin; Ahn, Su Yeon; Goo, Jin Mo

    2017-12-01

    We hypothesized that semi-automatic diameter measurements would improve the accuracy and reproducibility in discriminating preinvasive lesions and minimally invasive adenocarcinomas from invasive pulmonary adenocarcinomas appearing as subsolid nodules (SSNs) and increase the reproducibility in classifying SSNs. Two readers independently performed semi-automatic and manual measurements of the diameters of 102 SSNs and their solid portions. Diagnostic performance in predicting invasive adenocarcinoma based on diameters was tested using logistic regression analysis with subsequent receiver operating characteristic curves. Inter- and intrareader reproducibilities of diagnosis and SSN classification according to Fleischner's guidelines were investigated for each measurement method using Cohen's κ statistics. Semi-automatic effective diameter measurements were superior to manual average diameters for the diagnosis of invasive adenocarcinoma (AUC, 0.905-0.923 for semi-automatic measurement and 0.833-0.864 for manual measurement; p < 0.05). Inter-reader diagnostic reproducibility was significantly higher with semi-automatic measurement (κ=0.924 for semi-automatic measurement and 0.690 for manual measurement, p=0.012). Inter-reader SSN classification reproducibility was significantly higher with semi-automatic measurement (κ=0.861 for semi-automatic measurement and 0.683 for manual measurement, p=0.022). Semi-automatic effective diameter measurement offers an opportunity to improve diagnostic accuracy and reproducibility as well as the classification reproducibility of SSNs. • Semi-automatic effective diameter measurement improves the diagnostic accuracy for pulmonary subsolid nodules. • Semi-automatic measurement increases the inter-reader agreement on the diagnosis for subsolid nodules. • Semi-automatic measurement augments the inter-reader reproducibility for the classification of subsolid nodules.

  18. Semi-automatic deformable registration of prostate MR images to pathological slices

    NARCIS (Netherlands)

    Mazaheri, Yousef; Bokacheva, Louisa; Kroon, Dirk-Jan; Akin, Oguz; Hricak, Hedvig; Chamudot, Daniel; Fine, Samson; Koutcher, Jason A.

    2010-01-01

    Purpose: To present a semi-automatic deformable registration algorithm for co-registering T2-weighted (T2w) images of the prostate with whole-mount pathological sections of prostatectomy specimens. Materials and Methods: Twenty-four patients underwent 1.5 Tesla (T) endorectal MR imaging before

  19. Semi-automatic breast ultrasound image segmentation based on mean shift and graph cuts.

    Science.gov (United States)

    Zhou, Zhuhuang; Wu, Weiwei; Wu, Shuicai; Tsui, Po-Hsiang; Lin, Chung-Chih; Zhang, Ling; Wang, Tianfu

    2014-10-01

    Computerized tumor segmentation on breast ultrasound (BUS) images remains a challenging task. In this paper, we proposed a new method for semi-automatic tumor segmentation on BUS images using Gaussian filtering, histogram equalization, mean shift, and graph cuts. The only interaction required was to select two diagonal points to determine a region of interest (ROI) on an input image. The ROI image was shrunken by a factor of 2 using bicubic interpolation to reduce computation time. The shrunken image was smoothed by a Gaussian filter and then contrast-enhanced by histogram equalization. Next, the enhanced image was filtered by pyramid mean shift to improve homogeneity. The object and background seeds for graph cuts were automatically generated on the filtered image. Using these seeds, the filtered image was then segmented by graph cuts into a binary image containing the object and background. Finally, the binary image was expanded by a factor of 2 using bicubic interpolation, and the expanded image was processed by morphological opening and closing to refine the tumor contour. The method was implemented with OpenCV 2.4.3 and Visual Studio 2010 and tested on 38 BUS images with benign tumors and 31 BUS images with malignant tumors from different ultrasound scanners. Experimental results showed that our method had a true positive (TP) rate of 91.7%, a false positive (FP) rate of 11.9%, and a similarity (SI) rate of 85.6%. The mean run time on an Intel Core 2.66 GHz CPU with 4 GB RAM was 0.49 ± 0.36 s. The experimental results indicate that the proposed method may be useful in BUS image segmentation. © The Author(s) 2014.
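
    Since the authors implemented the pipeline with OpenCV, the chain of steps maps naturally onto standard OpenCV calls. In the sketch below, cv2.grabCut stands in for the graph-cuts step (it is graph-cut based, but not the authors' automatic seed-generation scheme), and the ROI and parameter values are assumptions.

    ```python
    # Preprocessing and segmentation chain loosely mirroring the abstract.
    import cv2
    import numpy as np

    img = cv2.imread("bus_image.png", cv2.IMREAD_GRAYSCALE)   # input B-mode image
    roi = img[50:250, 80:300]                                  # two user-chosen corners
    small = cv2.resize(roi, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_CUBIC)
    smooth = cv2.GaussianBlur(small, (5, 5), 0)                # speckle suppression
    enhanced = cv2.equalizeHist(smooth)                        # contrast enhancement
    bgr = cv2.cvtColor(enhanced, cv2.COLOR_GRAY2BGR)           # mean shift needs 3 channels
    homog = cv2.pyrMeanShiftFiltering(bgr, sp=10, sr=20)       # improve homogeneity

    mask = np.zeros(homog.shape[:2], np.uint8)
    rect = (10, 10, homog.shape[1] - 20, homog.shape[0] - 20)  # rough tumour box
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(homog, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    binary = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                      255, 0).astype(np.uint8)

    big = cv2.resize(binary, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
    _, big = cv2.threshold(big, 127, 255, cv2.THRESH_BINARY)   # re-binarise
    kernel = np.ones((5, 5), np.uint8)
    contour = cv2.morphologyEx(cv2.morphologyEx(big, cv2.MORPH_OPEN, kernel),
                               cv2.MORPH_CLOSE, kernel)        # refine contour
    ```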

  20. Semi-automatic Citation Correction with Lemon8-XML

    Directory of Open Access Journals (Sweden)

    MJ Suhonos

    2009-03-01

    Full Text Available The Lemon8-XML software application, developed by the Public Knowledge Project (PKP, provides an open-source, computer-assisted interface for reliable citation structuring and validation. Lemon8-XML combines citation parsing algorithms with freely-available online indexes such as PubMed, WorldCat, and OAIster. Fully-automated markup of entire bibliographies may be a genuine possibility using this approach. Automated markup of citations would increase bibliographic accuracy while reducing copyediting demands.

  1. Breast Contrast Enhanced MR Imaging: Semi-Automatic Detection of Vascular Map and Predominant Feeding Vessel.

    Science.gov (United States)

    Petrillo, Antonella; Fusco, Roberta; Filice, Salvatore; Granata, Vincenza; Catalano, Orlando; Vallone, Paolo; Di Bonito, Maurizio; D'Aiuto, Massimiliano; Rinaldo, Massimo; Capasso, Immacolata; Sansone, Mario

    2016-01-01

    To obtain a breast vascular map and to assess the correlation between the predominant feeding vessel and tumor location with a semi-automatic method compared to conventional radiologic reading. 148 malignant and 75 benign breast lesions were included. All patients underwent bilateral MR imaging. Written informed consent was obtained from the patients before MRI. The local ethics committee granted approval for this study. Semi-automatic breast vascular map and predominant vessel detection was performed on MRI for each patient. Semi-automatic detection (depending on a grey-level threshold manually chosen by the radiologist) was compared with the results of two expert radiologists; inter-observer variability and reliability of the semi-automatic approach were assessed. Anatomic analysis of breast lesions revealed that 20% of patients had masses in the internal half, 50% in the external half and 30% in the subareolar/central area. As regards the 44 tumors in the internal half, based on radiologic consensus, 40 demonstrated a predominant feeding vessel (61% were supplied by internal thoracic vessels, 14% by lateral thoracic vessels, 16% by both thoracic vessels and 9% had no predominant feeding vessel); with the semi-automatic method, 66% were supplied by internal thoracic vessels, 11% by lateral thoracic vessels, 9% by both thoracic vessels and 14% had no predominant feeding vessel. For tumors in the external half, radiologic consensus showed that 25% were supplied by internal thoracic vessels, 39% by lateral thoracic vessels, 18% by both thoracic vessels and 18% had no predominant feeding vessel; with the semi-automatic method, 27% were supplied by internal thoracic vessels, 45% by lateral thoracic vessels, 4% by both thoracic vessels and 24% had no predominant feeding vessel. An excellent reliability for semi-automatic assessment (Cronbach's alpha = 0.96) was reported. Predominant feeding vessel location was correlated with breast lesion location: the internal thoracic artery supplied the highest proportion of breasts with tumor in the internal half and the lateral thoracic

  2. Accuracy and reproducibility of a novel semi-automatic segmentation technique for MR volumetry of the pituitary gland

    Energy Technology Data Exchange (ETDEWEB)

    Renz, Diane M. [Charite University Medicine Berlin, Campus Virchow Clinic, Department of Radiology, Berlin (Germany); Hahn, Horst K.; Rexilius, Jan [Institute for Medical Image Computing, Fraunhofer MEVIS, Bremen (Germany); Schmidt, Peter [Friedrich-Schiller-University, Jena University Hospital, Institute of Diagnostic and Interventional Radiology, Department of Neuroradiology, Jena (Germany); Lentschig, Markus [MR- and PET/CT Centre Bremen, Bremen (Germany); Pfeil, Alexander [Friedrich-Schiller-University, Jena University Hospital, Department of Internal Medicine III, Jena (Germany); Sauner, Dieter [St. Georg Clinic Leipzig, Hospital Hubertusburg, Department of Radiology, Wermsdorf (Germany); Fitzek, Clemens [Asklepios Clinic Brandenburg, Department of Radiology and Neuroradiology, Brandenburg an der Havel (Germany); Mentzel, Hans-Joachim [Friedrich-Schiller-University, Jena University Hospital, Institute of Diagnostic and Interventional Radiology, Department of Pediatric Radiology, Jena (Germany); Kaiser, Werner A. [Friedrich-Schiller-University, Jena University Hospital, Institute of Diagnostic and Interventional Radiology, Jena (Germany); Reichenbach, Juergen R. [Friedrich-Schiller-University, Jena University Hospital, Medical Physics Group, Institute of Diagnostic and Interventional Radiology, Jena (Germany); Boettcher, Joachim [SRH Clinic Gera, Institute of Diagnostic and Interventional Radiology, Gera (Germany)

    2011-04-15

    Although several reports about volumetric determination of the pituitary gland exist, volumetries have been performed solely by indirect measurements or by manual tracing of the gland's boundaries. The purpose of this study was to evaluate the accuracy and reproducibility of a novel semi-automatic MR-based segmentation technique. In an initial technical investigation, T1-weighted 3D native magnetisation-prepared rapid gradient echo sequences (1.5 T) with 1 mm isotropic voxel size achieved high reliability and were utilised in different in vitro and in vivo studies. The computer-assisted segmentation technique was based on an interactive watershed transform after resampling and gradient computation. Volumetry was performed by three observers with different levels of software and neuroradiologic experience, evaluating phantoms of known volume (0.3, 0.9 and 1.62 ml) and healthy subjects (26 to 38 years; overall 135 volumetries). High accuracy of the volumetry was shown by phantom analysis; measurement errors were <4% with a mean error of 2.2%. In vitro, reproducibility was also promising, with intra-observer variability of 0.7% for observer 1 and 0.3% for observers 2 and 3; mean inter-observer variability was 1.2% in vitro. In vivo, scan-rescan, intra-observer and inter-observer variability showed mean values of 3.2%, 1.8% and 3.3%, respectively. Unifactorial analysis of variance demonstrated no significant differences between pituitary volumes for the various MR scans or software calculations in the healthy study groups (p > 0.05). The analysed semi-automatic MR volumetry of the pituitary gland is a valid, reliable and fast technique. Possible clinical applications are hyperplasia or atrophy of the gland in pathological circumstances, assessed either once or by monitoring in follow-up studies. (orig.)

  3. Concept-based semi-automatic classification of drugs.

    Science.gov (United States)

    Gurulingappa, Harsha; Kolárik, Corinna; Hofmann-Apitius, Martin; Fluck, Juliane

    2009-08-01

    The anatomical therapeutic chemical (ATC) classification system maintained by the World Health Organization provides a global standard for the classification of medical substances and serves as a source for drug repurposing research. Nevertheless, it lacks several drugs that are major players in the global drug market. In order to establish classifications for as yet unclassified drugs, this paper presents a newly developed approach based on a combination of information extraction (IE) and machine learning (ML) techniques. Most of the information about drugs is published in scientific articles. Therefore, an IE-based framework is employed to extract terms from free text that express a drug's chemical, pharmacological, therapeutic, and systemic effects. The extracted terms are used as features within a ML framework to predict putative ATC class labels for unclassified drugs. The system was tested on a portion of the ATC containing drugs with an indication on the cardiovascular system. The class prediction turned out to be successful, with a best predictive accuracy of 89.47% validated by 100-fold bootstrapping of the training set and an accuracy of 77.12% on an independent test set. The presented concept-based classification system outperformed state-of-the-art classification methods based on chemical structure properties.
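
    As a rough illustration of the described pipeline, the sketch below feeds extracted effect terms into a standard text classifier to predict ATC labels. It is not the authors' implementation; the terms, labels and the choice of TF-IDF plus logistic regression are illustrative assumptions.

        # Hypothetical sketch: predict ATC classes from IE-extracted terms.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # One space-joined string of extracted effect terms per drug (made-up data).
        train_docs = [
            "vasodilation antihypertensive ace inhibition",
            "beta blockade antiarrhythmic heart rate reduction",
        ]
        train_labels = ["C09", "C07"]  # illustrative ATC codes

        model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        model.fit(train_docs, train_labels)
        print(model.predict(["angiotensin receptor antagonism vasodilation"]))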

  4. Comparison of manual and semi-automatic measuring techniques in MSCT scans of patients with lymphoma: a multicentre study

    Energy Technology Data Exchange (ETDEWEB)

    Hoeink, A.J.; Wessling, J.; Schuelke, C.; Kohlhase, N.; Wassenaar, L.; Heindel, W.; Buerke, B. [University Hospital Muenster, Department of Clinical Radiology, Muenster (Germany); Koch, R. [University of Muenster, Institute of Biostatistics and Clinical Research (IBKF), Muenster (Germany); Mesters, R.M. [University Hospital Muenster, Department of Haematology and Oncology, Muenster (Germany); D' Anastasi, M.; Graser, A.; Karpitschka, M. [University Hospital Muenchen (LMU), Institute of Clinical Radiology, Muenchen (Germany); Fabel, M.; Wulff, A. [University Hospital Kiel, Department of Clinical Radiology, Kiel (Germany); Pinto dos Santos, D. [University Hospital Mainz, Department of Diagnostic and Interventional Radiology, Mainz (Germany); Kiessling, A. [University Hospital Marburg, Department of Diagnostic and Interventional Radiology, Marburg (Germany); Dicken, V.; Bornemann, L. [Institute of Medical Imaging Computing, Fraunhofer MeVis, Bremen (Germany)

    2014-11-15

    Multicentre evaluation of the precision of semi-automatic 2D/3D measurements in comparison to manual, linear measurements of lymph nodes regarding their inter-observer variability in multi-slice CT (MSCT) of patients with lymphoma. MSCT data of 63 patients were interpreted before and after chemotherapy by one/two radiologists in five university hospitals. In 307 lymph nodes, short (SAD)/long (LAD) axis diameter and WHO area were determined manually and semi-automatically. Volume was calculated solely semi-automatically. To determine the precision of the individual parameters, a mean was calculated for every lymph node/parameter. Deviation of the measured parameters from this mean was evaluated separately. Statistical analysis entailed intraclass correlation coefficients (ICC) and Kruskal-Wallis tests. Median relative deviations of semi-automatic parameters were smaller than deviations of manually assessed parameters, e.g. semi-automatic SAD 5.3% vs. manual 6.5%. Median variations among different study sites were smaller if the measurement was conducted semi-automatically, e.g. manual LAD 5.7/4.2% vs. semi-automatic 3.4/3.4%. Semi-automatic volumetry was superior to the other parameters (2.8%). Semi-automatic determination of different lymph node parameters is associated with slightly greater precision and marginally lower inter-observer variability than manual assessment. These results are of importance with regard to the increasing mobility of patients among different medical centres and to the quality management of multicentre trials. (orig.)

  5. Semi-automatic Segmentation of Multiple Sclerosis Lesion Based Active Contours Model and Variational Dirichlet Process

    OpenAIRE

    Derraz, Foued; Peyrodie, Laurent; Pinti, Antonio; Taleb, Abdelmalik; Chikh, Azzeddine; Hautecoeur, Patrick

    2010-01-01

    We propose a new semi-automatic segmentation method based on an Active Contour Model and statistical prior knowledge of Multiple Sclerosis (MS) lesions in Regions Of Interest (ROI) within brain Magnetic Resonance Images (MRI). Reliable segmentation of MS lesions is important for at least three types of practical applications: pharmaceutical trials, decision making for drug treatment, and patient follow-up. Manual segmentation of the MS lesions in brain MRI by well qualified experts is usua...

  6. User Interaction in Semi-Automatic Segmentation of Organs at Risk: a Case Study in Radiotherapy

    OpenAIRE

    Ramkumar, A.; Dolz, J.; Kirisli, H.A.; Adebahr, S; Schimek-Jasch, T.; NESTLE, U; Massoptier, L.; Varga, E.; Stappers, P.J.; W. J. Niessen; Song, Y.

    2016-01-01

    Accurate segmentation of organs at risk is an important step in radiotherapy planning. Manual segmentation being a tedious procedure and prone to inter- and intra-observer variability, there is growing interest in automated segmentation methods. However, automatic methods frequently fail to provide satisfactory results, and post-processing corrections are often needed. Semi-automatic segmentation methods are designed to overcome these problems by combining physicians’ expertise a...

  7. How to tag it right? Semi-automatic support for email management

    OpenAIRE

    Dolata, Mateusz; Jeners, Nils; Prinz, Wolfgang

    2013-01-01

    Making email processing smarter is a challenging task. Users file or retrieve multiple messages every day, while receiving little support from most popular email clients. Incorporating semi-automatic sorting into existing applications can help users with their daily work through more efficient organization and more effective search. Successful and seamless integration of tagging into existing email solutions requires exact analysis of user practices, needs and considerations, which are addressed...

  8. CPR courses and semi-automatic defibrillators--life saving in cardiac arrest?

    Science.gov (United States)

    Schneider, Liane; Sterz, Fritz; Haugk, Moritz; Eisenburger, Philip; Scheinecker, Wolfdieter; Kliegel, Andreas; Laggner, Anton N

    2004-12-01

    The aim was to assess the knowledge of life-supporting first-aid in both cardiac arrest survivors and relatives, and their willingness to have a semi-automatic external defibrillator in their homes and use it in an emergency. Cardiac arrest survivors, their families, friends, neighbours and co-workers were interviewed by medical students using prepared questionnaires. Their knowledge and self-assessment of life-supporting first-aid, their willingness to have a semi-automatic defibrillator in their homes and their willingness to use it in an emergency before and after a course in cardiopulmonary resuscitation (CPR) with a semi-automatic external defibrillator were evaluated. Courses were taught by medical students who had received special training in basic and advanced life support. Both patients and relatives, after a course of 2-3 h, were no longer afraid of making mistakes when providing life-supporting first-aid. The automated external defibrillator (AED) was generally accepted and considered easy to handle. We consider equipping high-risk patients and their families with AEDs a viable method of increasing their survival in case of a recurring cardiac arrest. This, of course, should be corroborated by further studies.

  9. Semi-automatic road extraction from very high resolution remote sensing imagery by RoadModeler

    Science.gov (United States)

    Lu, Yao

    Accurate and up-to-date road information is essential for both effective urban planning and disaster management. Today, very high resolution (VHR) imagery acquired by airborne and spaceborne imaging sensors is the primary source for the acquisition of spatial information on increasingly growing road networks. Given the increased availability of aerial and satellite images, it is necessary to develop computer-aided techniques to improve the efficiency and reduce the cost of road extraction tasks. Therefore, automation of image-based road extraction is a very active research topic. This thesis deals with the development and implementation aspects of a semi-automatic road extraction strategy, which includes two key approaches: multidirectional and single-direction road extraction. It requires a human operator to initialize a seed circle on a road and specify an extraction approach before the road is extracted by automatic algorithms using multiple vision cues. The multidirectional approach is used to detect roads with different materials, widths, intersection shapes, and degrees of noise, but it sometimes also interprets parking lots as road areas. Unlike the multidirectional approach, the single-direction approach can detect roads with few mistakes, but each seed circle can only be used to detect one road. In accordance with this strategy, a RoadModeler prototype was developed. Both aerial and GeoEye-1 satellite images of seven different types of scenes with various road shapes in rural, downtown, and residential areas were used to evaluate the performance of the RoadModeler. The experimental results demonstrated that the RoadModeler is reliable and easy to use by a non-expert operator, and that it performs much better than object-oriented classification. Its average road completeness, correctness, and quality achieved 94%, 97%, and 94%, respectively. These results are higher than those of Hu et al. (2007), which are 91%, 90%, and 85%.

  10. Evaluation of Semi-Automatic Metadata Generation Tools: A Survey of the Current State of the Art

    National Research Council Canada - National Science Library

    Jung-ran Park; Andrew Brenza

    2015-01-01

      Assessment of the current landscape of semi-automatic metadata generation tools is particularly important considering the rapid development of digital repositories and the recent explosion of big data...

  11. Creating an interface for automatic and semi-automatic segmentation of brain structures in newborns

    OpenAIRE

    Luzárraga Aznar, Eduardo; Pacheco, Manuela

    2013-01-01

    Project carried out within a mobility programme at TÉLÉCOM PARIS-TECH. [ENGLISH] At the request of Kremlin Bicetre hospital radiologists in Paris, the work done at Télécom ParisTech was to create an interface capable of semi-automatically and automatically segmenting brain regions in MRI of children between 0 and 3 years. In particular, we perform brain extraction, segmentation of the cerebral grey nuclei and ventricles, as well as of the three tissues present in the brain: white matter, gray matter and ...

  12. Semi-automatic quantitative measurements of intracranial internal carotid artery stenosis and calcification using CT angiography

    Energy Technology Data Exchange (ETDEWEB)

    Bleeker, Leslie; Berg, Rene van den; Majoie, Charles B. [Academic Medical Center, Department of Radiology, Amsterdam (Netherlands); Marquering, Henk A. [Academic Medical Center, Department of Radiology, Amsterdam (Netherlands); Academic Medical Center, Department of Biomedical Engineering and Physics, Amsterdam (Netherlands); Nederkoorn, Paul J. [Academic Medical Center, Department of Neurology, Amsterdam (Netherlands)

    2012-09-15

    Intracranial carotid artery atherosclerotic disease is an independent predictor of recurrent stroke. However, its quantitative assessment is not routinely performed in clinical practice. In this diagnostic study, we present and evaluate a novel semi-automatic application to quantitatively measure intracranial internal carotid artery (ICA) degree of stenosis and calcium volume in CT angiography (CTA) images. In this retrospective study involving CTA images of 88 consecutive patients, intracranial ICA stenosis was quantitatively measured by two independent observers. Stenoses were categorized with cutoff values of 30% and 50%. The calcification in the intracranial ICA was qualitatively categorized as absent, mild, moderate, or severe and quantitatively measured using the semi-automatic application. Linearly weighted kappa values were calculated to assess the interobserver agreement of the stenosis and calcium categorization. The average and the standard deviation of the quantitative calcium volume were calculated for the calcium categories. For the stenosis measurements, the CTA images of 162 arteries yielded an interobserver correlation of 0.78 (P < 0.001). Kappa values of the categorized stenosis measurements were moderate: 0.45 and 0.58 for cutoff values of 30% and 50%, respectively. The kappa value for the calcium categorization was 0.62, with good agreement between the qualitative and quantitative calcium assessment. Quantitative degree of stenosis measurement of the intracranial ICA on CTA is feasible with good interobserver agreement. Qualitative calcium categorization agrees well with quantitative measurements. (orig.)
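
    The linearly weighted kappa used here for ordinal stenosis categories can be computed directly with scikit-learn, as in the following minimal sketch (the ratings are made-up, not study data):

        from sklearn.metrics import cohen_kappa_score

        # Ordinal stenosis categories (0: <30%, 1: 30-50%, 2: >50%) from two
        # observers; values are illustrative only.
        observer1 = [0, 1, 2, 1, 0, 2, 1, 0]
        observer2 = [0, 1, 1, 1, 0, 2, 2, 0]
        print(cohen_kappa_score(observer1, observer2, weights="linear"))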

  13. A tool for semi-automatic linear feature detection based on DTM

    Science.gov (United States)

    Bonetto, Sabrina; Facello, Anna; Ferrero, Anna Maria; Umili, Gessica

    2015-02-01

    The tectonic movement along faults is often reflected by geomorphological features such as linear valleys, ridgelines and slope-breaks, steep slopes of uniform aspect, regional anisotropy and tilt of terrain. In the last years, remote sensing data have been used as a source of information for the detection of tectonic structures. In this paper, a new fully 3D approach for semi-automatic extraction and characterization of geological lineaments is presented: linear features are detected on a DTM by means of algorithms based on principal curvature values, and then they are grouped according to data collected from literature review regarding expected orientation of lineaments in the studied area. The overall positive aspects of this semi-automatic process were found to be the informativeness on geological structure for preliminary geological assessment and set identification, the possibility to identify the most interesting portions to be investigated and to analyze zones that are not directly accessible. This method has been applied to a geologically well-known area (the Monferrato geological domain) in order to validate the results of the software processing with remotely sensed data collected from literature review. As regard to orientation, spatial distribution and length of the lineaments, the study demonstrates a correspondence of the obtained results with both remote sensed linear features and field geostructural data.

  14. Semi-automatic 3D lung nodule segmentation in CT using dynamic programming

    Science.gov (United States)

    Sargent, Dustin; Park, Sun Young

    2017-02-01

    We present a method for semi-automatic segmentation of lung nodules in chest CT that can be extended to general lesion segmentation in multiple modalities. Most semi-automatic algorithms for lesion segmentation or similar tasks use region-growing or edge-based contour finding methods such as level-set. However, lung nodules and other lesions are often connected to surrounding tissues, which makes these algorithms prone to growing the nodule boundary into the surrounding tissue. To solve this problem, we apply a 3D extension of the 2D edge linking method with dynamic programming to find a closed surface in a spherical representation of the nodule ROI. The algorithm requires a user to draw a maximal diameter across the nodule in the slice in which the nodule cross section is the largest. We report the lesion volume estimation accuracy of our algorithm on the FDA lung phantom dataset, and the RECIST diameter estimation accuracy on the lung nodule dataset from the SPIE 2016 lung nodule classification challenge. The phantom results in particular demonstrate that our algorithm has the potential to mitigate the disparity in measurements performed by different radiologists on the same lesions, which could improve the accuracy of disease progression tracking.
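
    The core step described above (finding a minimal-cost boundary with dynamic programming) can be illustrated in 2D by unrolling the nodule ROI into polar coordinates and choosing one radius per angle under a smoothness penalty. The sketch below is a simplified 2D stand-in for the paper's 3D method, with synthetic data and an arbitrary smoothness weight; it omits the closure constraint (first and last radius equal) for brevity.

        import numpy as np

        def dp_polar_contour(cost, smooth=1.0):
            """Pick one radius per angle in a (n_angles, n_radii) cost map,
            penalizing radius jumps between neighboring angles."""
            n_ang, n_rad = cost.shape
            radii = np.arange(n_rad)
            acc = np.empty_like(cost)
            back = np.zeros((n_ang, n_rad), dtype=int)
            acc[0] = cost[0]
            for a in range(1, n_ang):
                # trans[r, r_prev] = cost of arriving at radius r from r_prev
                trans = acc[a - 1][None, :] + smooth * np.abs(radii[:, None] - radii[None, :])
                back[a] = np.argmin(trans, axis=1)
                acc[a] = cost[a] + trans[radii, back[a]]
            r = int(np.argmin(acc[-1]))          # best final radius
            path = [r]
            for a in range(n_ang - 1, 0, -1):    # backtrack
                r = back[a, r]
                path.append(r)
            return np.array(path[::-1])

        rng = np.random.default_rng(0)
        cost = rng.random((360, 50))
        cost[:, 20] *= 0.1                       # plant a cheap ring at radius 20
        print(dp_polar_contour(cost)[:10])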

  15. Evaluation of Semi-Automatic Metadata Generation Tools: A Survey of the Current State of the Art

    Directory of Open Access Journals (Sweden)

    Jung-ran Park

    2015-09-01

    Full Text Available Assessment of the current landscape of semi-automatic metadata generation tools is particularly important considering the rapid development of digital repositories and the recent explosion of big data. Utilization of (semi)automatic metadata generation is critical in addressing these environmental changes and may be unavoidable in the future considering the costly and complex operation of manual metadata creation. To address such needs, this study examines the range of semi-automatic metadata generation tools (n=39) while providing an analysis of their techniques, features, and functions. The study focuses on open-source tools that can be readily utilized in libraries and other memory institutions. The challenges and current barriers to implementation of these tools were identified. The greatest area of difficulty lies in the fact that the piecemeal development of most semi-automatic generation tools only addresses part of the issue of semi-automatic metadata generation, providing solutions to one or a few metadata elements but not the full range of elements. This indicates that significant local efforts will be required to integrate the various tools into a coherent working whole. Suggestions toward such efforts are presented for future developments that may assist information professionals with incorporation of semi-automatic tools within their daily workflows.

  16. SLA-Oriented Semi-Automatic Management of Data Storage and Applications in Distributed Environments

    Directory of Open Access Journals (Sweden)

    Dariusz Król

    2010-01-01

    Full Text Available In this paper we describe a semi-automatic programming framework for supporting users with managing the deployment of distributed applications along with storing large amounts of data in order to maintain Quality of Service in highly dynamic and distributed environments, e.g., Grid. The Polish national PL-GRID project aims to provide Polish science with both hardware and software infrastructures which will allow scientists to perform complex simulations and in-silico experiments on a scale greater than ever before. We highlight the issues and challenges related to data storage strategies that arise at the analysis stage of user requirements coming from different areas of science. Next we present a solution to the discussed issues along with a description of sample usage scenarios. At the end we provide remarks on the current status of the implementation work and some results from the tests performed.

  17. Semi-automatic Fisher-Tippett guided active contour for lumbar multifidus muscle segmentation.

    Science.gov (United States)

    Lui, Dorothy; Scharfenberger, Christian; De Carvalho, Diana E; Callaghan, Jack P; Wong, Alexander

    2014-01-01

    Rehabilitative Ultrasound Imaging, or diagnostic ultrasound, is used to measure geometric properties of the lumbar multifidus muscle to infer muscle strength or degeneration for back pain therapy. For this purpose, a novel semi-automatic approach (FTS: Fisher-Tippett Segmentation) based upon the Decoupled Active Contour is proposed to reliably and quickly segment the lumbar multifidus muscle in diagnostic ultrasound. To overcome speckle and weakly visible region boundaries in ultrasound images, we first propose a novel external energy functional that explicitly considers the underlying Fisher-Tippett distribution of ultrasound data. We then introduce a user-guided Hidden Markov Model trellis formation for improved segmentation of weakly-defined regions. Extensive experiments have shown that our approach not only improves segmentation performance compared to existing methods, but also does not rely on sub-specialized knowledge for segmentation.

  18. The Semi-automatic Synthesis of 18F-fluoroethyl-choline by Domestic FDG Synthesizer

    Directory of Open Access Journals (Sweden)

    ZHOU Ming

    2016-02-01

    Full Text Available As an important complementary imaging agent to 18F-FDG, 18F-fluoroethyl-choline (18F-FECH) has been demonstrated to be promising in brain and prostate cancer imaging. Using the domestic PET-FDG-TI-I CPCU synthesizer, 18F-FECH was synthesized with different reagents and consumable supplies. A C18 column was added before the product collection bottle to remove K2.2.2. 18F-FECH was synthesized on the synthesizer efficiently in about 30 minutes, with a radiochemical yield of 42.0% (no decay correction, n=5), and the radiochemical purity was still more than 99.0% after 6 hours. The results showed that the domestic synthesizer could semi-automatically synthesize injectable 18F-FECH with high efficiency and radiochemical purity.

  19. Contour propagation in MRI-guided radiotherapy treatment of cervical cancer: the accuracy of rigid, non-rigid and semi-automatic registrations

    Science.gov (United States)

    van der Put, R. W.; Kerkhof, E. M.; Raaymakers, B. W.; Jürgenliemk-Schulz, I. M.; Lagendijk, J. J. W.

    2009-12-01

    External beam radiation treatment for patients with cervical cancer is hindered by the relatively large motion of the target volume. A hybrid MRI-accelerator system makes it possible to acquire online MR images during treatment in order to correct for motion and deformation. To fully benefit from such a system, online delineation of the target volumes is necessary. The aim of this study is to investigate the accuracy of rigid, non-rigid and semi-automatic registrations of MR images for interfractional contour propagation in patients with cervical cancer. Registration using mutual information was performed on both bony anatomy and soft tissue. A B-spline transform was used for the non-rigid method. Semi-automatic registration was implemented with a point set registration algorithm on a small set of manual landmarks. Online registration was simulated by application of each method to four weekly MRI scans for each of 33 cervical cancer patients. Evaluation was performed by distance analysis with respect to manual delineations. The results show that soft-tissue registration was significantly more accurate than registration based on bony anatomy. For patients with cervical cancer, online MR imaging will allow target localization based on soft tissue visualization, which provides a significantly higher accuracy than localization based on bony anatomy. The use of limited user input to guide the registration increases overall accuracy. Additional non-rigid registration further reduces the propagation error and negates errors caused by small observer variations.

  20. Semi-automatic normalization of multitemporal remote images based on vegetative pseudo-invariant features.

    Directory of Open Access Journals (Sweden)

    Luis Garcia-Torres

    Full Text Available A procedure to achieve the semi-automatic relative image normalization of multitemporal remote images of an agricultural scene, called ARIN, was developed using the following procedures: (1) defining the same parcel of selected vegetative pseudo-invariant features (VPIFs) in each multitemporal image; (2) extracting data concerning the VPIF spectral bands from each image; (3) calculating the correction factors (CFs) for each image band to fit each image band to the average value of the image series; and (4) obtaining the normalized images by linear transformation of each original image band through the corresponding CF. ARIN software was developed to semi-automatically perform the ARIN procedure. We have validated ARIN using seven GeoEye-1 satellite images taken over the same location in Southern Spain from early April to October 2010 at an interval of approximately 3 to 4 weeks. The following three VPIFs were chosen: citrus orchards (CIT), olive orchards (OLI) and poplar groves (POP). In the ARIN-normalized images, the range, standard deviation (s.d.) and root mean square error (RMSE) of the spectral bands and vegetation indices were considerably reduced compared to the original images, regardless of the VPIF or the combination of VPIFs selected for normalization, which demonstrates the method's efficacy. The correlation coefficients between the CFs among VPIFs for any spectral band (and all bands overall) were calculated to be at least 0.85 and were significant at P = 0.95, indicating that the normalization procedure was comparably performed regardless of the VPIF chosen. The ARIN method was designed only for agricultural and forestry landscapes where VPIFs can be identified.
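
    Steps (3) and (4) reduce to a per-band multiplicative correction. Below is a minimal sketch under the stated definition of the CF (fitting each image to the series average via the VPIF mean); the multiplicative form and all data are assumptions for illustration.

        import numpy as np

        def arin_normalize(images, vpif_mask):
            """Normalize a series of co-registered single-band images to their
            series average using a vegetative pseudo-invariant feature mask.
            images: (n_images, rows, cols); vpif_mask: boolean (rows, cols)."""
            # Mean VPIF value per image, and the average over the whole series
            means = np.array([img[vpif_mask].mean() for img in images])
            target = means.mean()
            cfs = target / means              # one correction factor per image
            return images * cfs[:, None, None], cfs

        # Illustrative use with synthetic data
        rng = np.random.default_rng(1)
        series = rng.uniform(50, 200, size=(7, 64, 64))
        mask = np.zeros((64, 64), dtype=bool)
        mask[10:20, 10:20] = True             # stand-in VPIF parcel
        normalized, cfs = arin_normalize(series, mask)
        print(cfs.round(2))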

  1. Definition extraction for glossary creation : a study on extracting definitions for semi-automatic glossary creation in Dutch

    NARCIS (Netherlands)

    Westerhout, E.N.

    2010-01-01

    The central topic of this thesis is the automatic extraction of definitions from text. Definition extraction can play a role in various applications including the semi-automatic development of glossaries in an eLearning context, which constitutes the main focus of this dissertation. A glossary

  2. Evaluation of ventricular dysfunction using semi-automatic longitudinal strain analysis of four-chamber cine MR imaging.

    Science.gov (United States)

    Kawakubo, Masateru; Nagao, Michinobu; Kumazawa, Seiji; Yamasaki, Yuzo; Chishaki, Akiko S; Nakamura, Yasuhiko; Honda, Hiroshi; Morishita, Junji

    2016-02-01

    The aim of this study was to evaluate ventricular dysfunction using longitudinal strain analysis in 4-chamber (4CH) cine MR imaging, and to investigate the agreement between semi-automatic and manual measurements in the analysis. Fifty-two consecutive patients with ischemic or non-ischemic cardiomyopathy and repaired tetralogy of Fallot who underwent cardiac MR examination incorporating cine MR imaging were retrospectively enrolled. The LV and RV longitudinal strain values were obtained both semi-automatically and manually. Receiver operating characteristic (ROC) analysis was performed to determine the optimal cutoff of the minimum longitudinal strain value for the detection of patients with cardiac dysfunction. The correlations between manual and semi-automatic measurements for the LV and RV walls were analyzed by Pearson coefficient analysis. ROC analysis demonstrated optimal cutoffs of the minimum longitudinal strain values (εL_min) for diagnosing LV and RV dysfunction with high accuracy (LV εL_min = -7.8%: area under the curve, 0.89; sensitivity, 83%; specificity, 91%; RV εL_min = -15.7%: area under the curve, 0.82; sensitivity, 92%; specificity, 68%). Excellent correlations between manual and semi-automatic measurements for the LV and RV free wall were observed (LV, r = 0.97). Longitudinal strain analysis of 4CH cine MR imaging can evaluate LV and RV dysfunction with simple and easy measurements. The strain analysis could have extensive application in cardiac imaging for various clinical cases.
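
    ROC-derived cutoffs of this kind are commonly found by maximizing Youden's J statistic; the paper does not state its criterion, so the sketch below is an assumption, with made-up strain values where less negative strain indicates dysfunction.

        import numpy as np
        from sklearn.metrics import roc_curve

        # Illustrative data: 1 = dysfunction, 0 = normal.
        y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
        strain = np.array([-5.0, -6.5, -8.0, -12.0, -14.0, -9.5, -7.0, -13.0])

        fpr, tpr, thr = roc_curve(y_true, strain)  # higher (less negative) = positive
        j = tpr - fpr                              # Youden's J statistic
        best = thr[np.argmax(j)]
        print(f"optimal cutoff: {best:.1f} %")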

  3. Semi-automatic detection of Gd-DTPA-saline filled capsules for colonic transit time assessment in MRI

    Science.gov (United States)

    Harrer, Christian; Kirchhoff, Sonja; Keil, Andreas; Kirchhoff, Chlodwig; Mussack, Thomas; Lienemann, Andreas; Reiser, Maximilian; Navab, Nassir

    2008-03-01

    Functional gastrointestinal disorders result in a significant number of consultations in primary care facilities. Chronic constipation and diarrhea are regarded as two of the most common diseases affecting between 2% and 27% of the population in western countries [1-3]. Defecatory disorders are most commonly due to dysfunction of the pelvic floor or the anal sphincter. Although an exact differentiation of these pathologies is essential for adequate therapy, diagnosis is still only based on a clinical evaluation [1]. Regarding quantification of constipation, only the ingestion of radio-opaque markers or radioactive isotopes and the consecutive assessment of colonic transit time using X-ray or scintigraphy, respectively, has been feasible in clinical settings [4-8]. However, these approaches have several drawbacks such as involving rather inconvenient, time consuming examinations and exposing the patient to ionizing radiation. Therefore, conventional assessment of colonic transit time has not been widely used. Most recently, a new technique for the assessment of colonic transit time using MRI and MR-contrast media filled capsules has been introduced [9]. However, due to numerous examination dates per patient and corresponding datasets with many images, the evaluation of the image data is relatively time-consuming. The aim of our study was to develop a computer tool to facilitate the detection of the capsules in MRI datasets and thus to shorten the evaluation time. We present a semi-automatic tool which provides an intensity-, size- [10] and shape-based [11, 12] detection of ingested Gd-DTPA-saline filled capsules. After an automatic pre-classification, radiologists may easily correct the results using the application-specific user interface, therefore decreasing the evaluation time significantly.

  4. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routines, and efficient use of memory are also elaborated. This publication is inten

  5. Semi-automatic procedure for the characterization of the shape of volcanic particles

    Science.gov (United States)

    Lo Castro, M.; Andronico, D.; Beckmann, G.; Dueffels, K.; Prestifilippo, M.; Westermann, J.

    2010-12-01

    Volcanic ash is composed of different components, namely juvenile particles, lithics and crystals. Quantifying the relative percentage of the component typologies forming an ash sample is a very important tool to better investigate the physical and geochemical processes related to the dynamics of an explosive event. Such a goal is further enhanced when associated with the characterization of the morphology of volcanic ash particles. However, the measurement and quantification of particle shape are hard challenges, especially when the number of particles to analyse is high and their size small (i.e. sub-millimetric), as in the case of volcanic ash. The methods for quantitative measurement of particle shape currently used in volcanology are based on image processing, mainly achieved by manual outputs (e.g. microscopy investigations), techniques which are usually time consuming and meticulous, permitting analysis of only a limited number of particles. Here we present preliminary results of a new procedure aimed at the development of a fast, reliable, semi-automatic technique for the characterization of morphological and dimensional parameters of a given volcanic ash sample. The proposed procedure builds on the results from the CAMSIZER, a compact laboratory instrument developed by Retsch Technology (see http://retsch-technology.com) for the simultaneous measurement of particle size distribution and particle shape of incoherent materials in the range of 30 µm to 30 mm. The CAMSIZER is based on digital image processing and permits measurement of shape parameters for a large number of particles. Our procedure first measures the ash sample with the CAMSIZER. The results are then used as input data in a cluster analysis model based on the fuzzy c-means algorithm, allowing us to semi-automatically define the main classes grouping all those particles characterized by similar morphological parameters. The prospective on the potential of
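
    A plain fuzzy c-means implementation shows how particles with similar shape parameters could be grouped; the feature choice and data below are hypothetical, not the authors' configuration.

        import numpy as np

        def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
            """Plain fuzzy c-means: X is (n_samples, n_features), c clusters,
            m > 1 is the fuzzifier. Returns cluster centers and memberships."""
            rng = np.random.default_rng(seed)
            u = rng.random((X.shape[0], c))
            u /= u.sum(axis=1, keepdims=True)       # random initial memberships
            for _ in range(iters):
                um = u ** m
                centers = (um.T @ X) / um.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                u = 1.0 / d ** (2 / (m - 1))
                u /= u.sum(axis=1, keepdims=True)   # normalize per sample
            return centers, u

        # Illustrative: cluster particles by two shape parameters
        # (e.g. circularity, aspect ratio); values are synthetic.
        X = np.array([[0.9, 1.1], [0.85, 1.2], [0.4, 2.8], [0.35, 3.0], [0.6, 1.9]])
        centers, u = fuzzy_cmeans(X, c=2)
        print(u.round(2))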

  6. A reproducible semi-automatic method to quantify the muscle-lipid distribution in clinical 3D CT images of the thigh.

    Science.gov (United States)

    Mühlberg, Alexander; Museyko, Oleg; Laredo, Jean-Denis; Engelke, Klaus

    2017-01-01

    Many studies use threshold-based techniques to assess in vivo the muscle, bone and adipose tissue distribution of the legs using computed tomography (CT) imaging. More advanced techniques divide the legs into subcutaneous adipose tissue (SAT), anatomical muscle (muscle tissue and adipocytes within the muscle border) and intra- and perimuscular adipose tissue. In addition, a so-called muscle density directly derived from the CT values is often measured. We introduce a new integrated approach to quantify the muscle-lipid system (MLS) using quantitative CT in patients with sarcopenia or osteoporosis. The analysis targets the thigh, as many CT studies of the hip do not include entire legs. The framework consists of an anatomic coordinate system allowing delineation of reproducible volumes of interest, a robust semi-automatic 3D segmentation of the fascia, and a comprehensive method to quantify the muscle and lipid distribution within the fascia. CT density-dependent features are calibrated using subject-specific internal CT values of the SAT and external CT values of an in-scan calibration phantom. Robustness of the framework with respect to operator interaction, image noise and calibration was evaluated. Specifically, we analyzed inter- and intra-operator reanalysis precision and the impact of Gaussian noise, added to simulate lower radiation exposure, on muscle and AT volumes, muscle density and 3D texture features quantifying the MLS within the fascia. Existing data of 25 subjects (age: 75.6 ± 8.7) with porous and low-contrast muscle structures were included in the analysis. Intra- and inter-operator reanalysis precision errors were below 1% and mostly comparable to 1% of the cohort variation of the corresponding features. Doubling the noise changed most 3D texture features by up to 15% of the cohort variation but did not affect density and volume measurements. The application of the novel technique is easy with acceptable processing time. It can thus be employed

  7. Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm

    Science.gov (United States)

    Foroutan, M.; Zimbelman, J. R.

    2017-09-01

    Increased application of high resolution spatial data, such as high resolution satellite or Unmanned Aerial Vehicle (UAV) images from Earth as well as High Resolution Imaging Science Experiment (HiRISE) images from Mars, increases the need for automation techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation by repeated images in environmental management studies, such as studies of climate-related changes, as well as increasing access to high-resolution satellite images underline the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self Organizing Maps (SOM), to achieve the semi-automatic extraction of linear features with small footprints on satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high resolution satellite images of Earth and Mars (Quickbird, Worldview and HiRISE) in order to facilitate and speed up image analysis and to improve the accuracy of the results. An overall accuracy of about 98% and a quantization error of 0.001 in the recognition of small linear-trending bedforms demonstrate a promising framework.
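
    A minimal SOM sketch in the spirit of the described workflow, using the open-source MiniSom package (whether the authors used it is not stated); the pixel features and grid size are arbitrary.

        import numpy as np
        from minisom import MiniSom

        # Illustrative: cluster pixel feature vectors (e.g. band values) with a SOM.
        rng = np.random.default_rng(2)
        pixels = rng.random((1000, 3))            # 1000 pixels, 3 spectral bands

        som = MiniSom(8, 8, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=2)
        som.random_weights_init(pixels)
        som.train_random(pixels, num_iteration=5000)

        # Map each pixel to its best-matching unit; units can then be labeled
        # (e.g. "bedform" vs "background") to produce a classified image.
        bmus = np.array([som.winner(p) for p in pixels])
        print(bmus[:5])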

  8. Towards Semi-Automatic Artifact Rejection for the Improvement of Alzheimer's Disease Screening from EEG Signals.

    Science.gov (United States)

    Solé-Casals, Jordi; Vialatte, François-Benoît

    2015-07-23

    A large number of studies have analyzed measurable changes that Alzheimer's disease causes on electroencephalography (EEG). Despite being easily reproducible, those markers have limited sensitivity, which reduces the interest of EEG as a screening tool for this pathology. This is in large part due to the poor signal-to-noise ratio of EEG signals: EEG recordings are indeed usually corrupted by spurious extra-cerebral artifacts. These artifacts are responsible for a substantial degradation of signal quality. We investigate the possibility of automatically cleaning a database of EEG recordings taken from patients suffering from Alzheimer's disease and healthy age-matched controls. We present here an investigation of commonly used markers of EEG artifacts: kurtosis, sample entropy, zero-crossing rate and fractal dimension. We investigate the reliability of the markers by comparison with human labeling of sources. Our results show significant differences with the sample entropy marker. We present a strategy for semi-automatic cleaning based on blind source separation, which may improve the specificity of Alzheimer screening using EEG signals.
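
    Two of the four markers investigated (kurtosis and zero-crossing rate) are easy to compute; the sketch below scores a clean signal against one with a simulated blink artifact. The data and amplitudes are illustrative, and sample entropy and fractal dimension are omitted for brevity.

        import numpy as np
        from scipy.stats import kurtosis

        def zero_crossing_rate(x):
            """Fraction of consecutive samples whose signs differ."""
            return np.mean(np.signbit(x[:-1]) != np.signbit(x[1:]))

        # Extreme kurtosis or unusual zero-crossing rates often flag
        # ocular or muscular artifacts in EEG channels/sources.
        rng = np.random.default_rng(3)
        clean = rng.standard_normal(2000)
        blink = clean.copy()
        blink[500:520] += 40.0                 # simulated eye-blink spike

        for name, sig in [("clean", clean), ("blink", blink)]:
            print(name, round(kurtosis(sig), 1), round(zero_crossing_rate(sig), 3))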

  9. Semi-automatic image personalization tool for variable text insertion and replacement

    Science.gov (United States)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Eschbach, Reiner; Bouman, Charles A.; Allebach, Jan P.

    2010-02-01

    Image personalization is a widely used technique in personalized marketing [1], in which a vendor attempts to promote new products or retain customers by sending marketing collateral that is tailored to the customers' demographics, needs, and interests. With current solutions of which we are aware, such as XMPie [2], DirectSmile [3], and AlphaPicture [4], in order to produce this tailored marketing collateral, image templates need to be created manually by graphic designers, involving complex grid manipulation and detailed geometric adjustments. As a matter of fact, the image template design is highly manual, skill-demanding and costly, and essentially the bottleneck for image personalization. We present a semi-automatic image personalization tool for designing image templates. Two scenarios are considered: text insertion and text replacement, with the text replacement option not offered in current solutions. The graphical user interface (GUI) of the tool is described in detail. Unlike current solutions, the tool renders the text in 3-D, which allows easy adjustment of the text. In particular, the tool has been implemented in Java, which introduces flexible deployment and eliminates the need for any special software or know-how on the part of the end user.

  10. A simple semi-automatic approach for land cover classification from multispectral remote sensing imagery.

    Science.gov (United States)

    Jiang, Dong; Huang, Yaohuan; Zhuang, Dafang; Zhu, Yunqiang; Xu, Xinliang; Ren, Hongyan

    2012-01-01

    Land cover data represent a fundamental data source for various types of scientific research. The classification of land cover based on satellite data is a challenging task, and an efficient classification method is needed. In this study, an automatic scheme is proposed for the classification of land use using multispectral remote sensing images based on change detection and a semi-supervised classifier. The satellite image can be automatically classified using only the prior land cover map and existing images; therefore human involvement is reduced to a minimum, ensuring the operability of the method. The method was tested in the Qingpu District of Shanghai, China. Using Environment Satellite 1 (HJ-1) images of 2009 with 30 m spatial resolution, the areas were classified into five main types of land cover based on previous land cover data and spectral features. The results agreed well with validation land cover maps, with a Kappa value of 0.79 and statistical area biases in proportion of less than 6%. This study proposed a simple semi-automatic approach for land cover classification using prior maps with satisfactory accuracy, which integrates the accuracy of visual interpretation and the performance of automatic classification methods. The method can be used for land cover mapping in areas lacking ground reference information or for identifying rapid variation of land cover regions (such as rapid urbanization) with convenience.

  11. Conceptual design of semi-automatic wheelbarrow to overcome ergonomics problems among palm oil plantation workers

    Science.gov (United States)

    Nawik, N. S. M.; Deros, B. M.; Rahman, M. N. A.; Sukadarin, E. H.; Nordin, N.; Tamrin, S. B. M.; Bakar, S. A.; Norzan, M. L.

    2015-12-01

    An ergonomics problem is one of the main issues faced by palm oil plantation workers, especially during harvesting and collecting of fresh fruit bunches (FFB). The intensive manual handling and labor activities involved have been associated with a high prevalence of musculoskeletal disorders (MSDs) among palm oil plantation workers. New and safe technology in machines and equipment for palm oil plantations is therefore very important to help workers reduce risks and injuries while working. The aim of this research is to improve the design of a wheelbarrow suitable for workers and small oil palm plantations. The wheelbarrow design was drawn using CATIA ergonomic features. The ergonomics assessment was performed by comparison with the existing wheelbarrow design. The conceptual design was developed based on the problems reported by workers. The analysis of these problems finally resulted in the concept design of an ergonomic semi-automatic wheelbarrow that is safe and suitable for use by palm oil plantation workers.

  12. Computerised volumetric analysis of lesions in multiple sclerosis using new semi-automatic segmentation software.

    Science.gov (United States)

    Dastidar, P; Heinonen, T; Vahvelainen, T; Elovaara, I; Eskola, H

    1999-01-01

    The paper describes the application of new semi-automatic segmentation software to the detection of anatomical structures and lesions and their three-dimensional (3D) visualisation in 23 patients with secondary progressive multiple sclerosis (MS). The purpose is to study the correlation between magnetic resonance imaging (MRI) parameters (volumes of plaques and cerebrospinal fluid spaces) and clinical deficits (neurological deficits in the form of EDSS and RFSS scores, and neuropsychological deficits). The software operates in PC/Windows and PC/NeXTstep environments and utilises graphical user interfaces. Quantitative accuracy is measured by performing segmentation of fluid-filled syringes (relative error of 1.5%), and reproducibility is measured by intra- and inter-observer studies (3% and 7% variability, respectively). The mean volumes of MS plaques show significant correlations with the total RFSS scores (p = 0.04). Relative intracranial cerebrospinal fluid (CSF) space volumes show a statistically significant correlation with EDSS scores (p = 0.01). The mean volume of MS plaques shows a significant correlation with the overall neuropsychological deficits (p = 0.03). 3D visualisation helps to understand the relationship of lesions to the surrounding brain structures. The use of semi-automatic segmentation techniques is recommended in the clinical diagnosis of MS patients.

  13. Semi-Automatic Detection of Swimming Pools from Aerial High-Resolution Images and LIDAR Data

    Directory of Open Access Journals (Sweden)

    Borja Rodríguez-Cuenca

    2014-03-01

    Full Text Available Bodies of water, particularly swimming pools, are land covers of high interest. Their maintenance involves energy costs that authorities must take into consideration. In addition, swimming pools are important water sources for firefighting. However, they also provide a habitat for mosquitoes to breed, potentially posing a serious health threat of mosquito-borne disease. This paper presents a novel semi-automatic method of detecting swimming pools in urban environments from aerial images and LIDAR data. A new index for detecting swimming pools is presented (the Normalized Difference Swimming Pools Index) that is combined with three other decision indices using the Dempster–Shafer theory to determine the locations of swimming pools. The proposed method was tested in an urban area of the city of Alcalá de Henares in Madrid, Spain. The method detected all existing swimming pools in the studied area with an overall accuracy of 99.86%, similar to the results obtained by support vector machine (SVM) supervised classification.
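
    The record does not give the band formulation of the proposed index, but indices of this family follow the generic normalized-difference pattern sketched below; the blue/NIR choice and threshold are assumptions for illustration only.

        import numpy as np

        def normalized_difference(band_a, band_b):
            """Generic normalized-difference index: (a - b) / (a + b)."""
            return (band_a - band_b) / (band_a + band_b + 1e-12)

        # Hypothetical example: pool water is bright in blue and dark in NIR,
        # so a blue/NIR formulation would highlight pools (band choice assumed,
        # not taken from the paper).
        rng = np.random.default_rng(4)
        blue, nir = rng.random((2, 100, 100))
        index = normalized_difference(blue, nir)
        candidate_pools = index > 0.5   # threshold would be tuned, then fused
                                        # with other indices (Dempster-Shafer)
        print(candidate_pools.sum())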

  14. Semi-automatic mapping for identifying complex geobodies in seismic images

    Science.gov (United States)

    Domínguez-C, Raymundo; Romero-Salcedo, Manuel; Velasquillo-Martínez, Luis G.; Shemeretov, Leonid

    2017-03-01

    Seismic images are composed of positive and negative seismic wave traces with different amplitudes (Robein 2010 Seismic Imaging: A Review of the Techniques, their Principles, Merits and Limitations (Houten: EAGE)). The association of these amplitudes together with a color palette forms complex visual patterns. The color intensity of such patterns is directly related to impedance contrasts: the higher the contrast, the higher the color intensity. Generally speaking, low impedance contrasts are depicted with low tone colors, creating zones with different patterns whose features are not evident to the 3D automated mapping options available in commercial software. In this work, a workflow for semi-automatic mapping of seismic images, focused on those areas with low-intensity colored zones that may be associated with geobodies of petroleum interest, is proposed. The CIE L*A*B* color space was used to perform the seismic image processing, which helped find small but significant differences between pixel tones. This process generated binary masks that bound color regions to low-intensity colors. The three-dimensional mask projection allowed the construction of 3D structures for such zones (geobodies). The proposed method was applied to a set of digital images from a seismic cube and tested on four representative study cases. The obtained results are encouraging because interesting geobodies are obtained with a minimum of information.
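
    A first step of the described workflow (moving to CIE L*a*b* and masking low-intensity zones) might look like the following sketch; the threshold and the synthetic image are assumptions.

        import numpy as np
        from skimage import color

        # Illustrative: convert a seismic display image to CIE L*a*b* and build
        # a binary mask of low-intensity (low L*) zones, the first step toward
        # stacking masks from consecutive slices into a 3D geobody.
        rng = np.random.default_rng(5)
        rgb = rng.random((128, 128, 3))          # stand-in for one seismic slice
        lab = color.rgb2lab(rgb)                 # L* channel lies in [0, 100]
        low_intensity_mask = lab[..., 0] < 40.0  # threshold is an assumption
        print(low_intensity_mask.mean())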

  15. A simple semi-automatic approach for land cover classification from multispectral remote sensing imagery.

    Directory of Open Access Journals (Sweden)

    Dong Jiang

    Full Text Available Land cover data represent a fundamental data source for various types of scientific research. The classification of land cover based on satellite data is a challenging task, and an efficient classification method is needed. In this study, an automatic scheme is proposed for the classification of land use using multispectral remote sensing images based on change detection and a semi-supervised classifier. The satellite image can be automatically classified using only the prior land cover map and existing images; therefore human involvement is reduced to a minimum, ensuring the operability of the method. The method was tested in the Qingpu District of Shanghai, China. Using Environment Satellite 1 (HJ-1) images of 2009 with 30 m spatial resolution, the areas were classified into five main types of land cover based on previous land cover data and spectral features. The results agreed well with validation land cover maps, with a Kappa value of 0.79 and statistical area biases in proportion of less than 6%. This study proposed a simple semi-automatic approach for land cover classification using prior maps with satisfactory accuracy, which integrates the accuracy of visual interpretation and the performance of automatic classification methods. The method can be used for land cover mapping in areas lacking ground reference information or for identifying rapid variation of land cover regions (such as rapid urbanization) with convenience.

  16. A semi-automatic method for extracting thin line structures in images as rooted tree network

    Energy Technology Data Exchange (ETDEWEB)

    Brazzini, Jacopo [Los Alamos National Laboratory; Dillard, Scott [Los Alamos National Laboratory; Soille, Pierre [EC - JRC]

    2010-01-01

    This paper addresses the problem of semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images - in a robust manner. For that purpose, we improve a generic method derived from morphological and hydrological concepts, consisting of minimum cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Then, geodesic propagation from a given seed with this metric is combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.
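
    The minimum cost path component has an off-the-shelf analogue in scikit-image; below is a minimal sketch with an isotropic cost image (the paper's anisotropic, structure-tensor-based metric is not reproduced here), with assumed seed and end points.

        import numpy as np
        from skimage.graph import route_through_array

        # Illustrative: extract a line structure as a minimum cost path through
        # a cost image that is cheap along the target network.
        cost = np.ones((100, 100))
        cost[50, :] = 0.01                       # plant a cheap horizontal "river"
        path, total_cost = route_through_array(cost, start=(50, 0), end=(50, 99),
                                               fully_connected=True)
        print(len(path), round(total_cost, 2))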

  17. Colon wall motility: comparison of novel quantitative semi-automatic measurements using cine MRI.

    Science.gov (United States)

    Hoad, C L; Menys, A; Garsed, K; Marciani, L; Hamy, V; Murray, K; Costigan, C; Atkinson, D; Major, G; Spiller, R C; Taylor, S A; Gowland, P A

    2016-03-01

    Recently, cine magnetic resonance imaging (MRI) has shown promise for visualizing movement of the colonic wall, although assessment of the data has been subjective and observer dependent. This study aimed to develop an objective and semi-automatic imaging metric of ascending colonic wall movement, using image registration techniques. Cine balanced turbo field echo MRI images of ascending colonic motility were acquired over 2 min from 23 healthy volunteers (HVs) at baseline and following two different macrogol stimulus drinks (11 HVs drank 1 L and 12 HVs drank 2 L). Motility metrics derived from large scale geometric and small scale pixel movement parameters following image registration were developed using the post-ingestion data and compared to observer grading of wall motion. Inter- and intra-observer variability in the highest correlating metric was assessed using Bland-Altman analysis calculated from two separate observations on a subset of data. All the metrics tested showed significant correlation with the observer rating scores. Line analysis (LA) produced the highest correlation coefficient of 0.74 (95% CI: 0.55-0.86). Registered cine MRI data thus provide a quick, accurate and non-invasive method to detect wall motion within the ascending colon following a colonic stimulus in the form of a macrogol drink. © 2015 John Wiley & Sons Ltd.

  18. Semi-Automatic Detection of Indigenous Settlement Features on Hispaniola through Remote Sensing Data

    Directory of Open Access Journals (Sweden)

    Till F. Sonnemann

    2017-12-01

    Full Text Available Satellite imagery has had limited application in the analysis of pre-colonial settlement archaeology in the Caribbean; visible evidence of wooden structures perishes quickly in tropical climates. Only slight topographic modifications remain, typically associated with middens. Nonetheless, surface scatters, as well as the soil characteristics they produce, can serve as quantifiable indicators of an archaeological site, detectable by analyzing remote sensing imagery. A variety of pre-processed, very diverse data sets went through a process of image registration, with the intention of combining multispectral bands to feed two different semi-automatic direct detection algorithms: a posterior probability approach and a frequentist approach. Two 5 × 5 km2 areas in the northwestern Dominican Republic with diverse environments, sufficient imagery coverage and a representative number of known indigenous site locations each served for one approach. Buffers around the locations of known sites, as well as areas with no likely archaeological evidence, were used as samples. The resulting maps offer quantifiable statistical outcomes of locations with similar pixel value combinations as the identified sites, indicating a higher probability of archaeological evidence. These trials are still very experimental and remain unvalidated, as they have not been subsequently ground-truthed, but they show the variable potential of this method in diverse environments.

  19. Usefulness of semi-automatic volumetry compared to established linear measurements in predicting lymph node metastases in MSCT

    Energy Technology Data Exchange (ETDEWEB)

    Buerke, Boris; Puesken, Michael; Heindel, Walter; Wessling, Johannes (Dept. of Clinical Radiology, Univ. of Muenster (Germany)), email: buerkeb@uni-muenster.de; Gerss, Joachim (Dept. of Medical Informatics and Biomathematics, Univ. of Muenster (Germany)); Weckesser, Matthias (Dept. of Nuclear Medicine, Univ. of Muenster (Germany))

    2011-06-15

    Background: Volumetry of lymph nodes potentially reflects asymmetric size alterations better than metric parameters (e.g. long-axis diameter), independently of lymph node orientation. Purpose: To distinguish between benign and malignant lymph nodes by comparing 2D and semi-automatic 3D measurements in MSCT. Material and Methods: FDG-18 PET-CT was performed in 33 patients prior to therapy for malignant melanoma at stage III/IV. One hundred and eighty-six cervico-axillary, abdominal and inguinal lymph nodes were evaluated independently by two radiologists, both manually and with the use of semi-automatic segmentation software. Long axis (LAD), short axis (SAD), maximal 3D diameter, volume and elongation were obtained. PET-CT, PET-CT follow-up and/or histology served as a combined reference standard. Statistics encompassed intra-class correlation coefficients and ROC curves. Results: Compared to manual assessment, semi-automatic inter-observer variability was found to be lower, e.g. at 2.4% (95% CI 0.05-4.8) for LAD. The standard of reference revealed metastases in 90 (48%) of 186 lymph nodes. Semi-automatic prediction of lymph node metastases revealed the highest areas under the ROC curves for volume (reader 1: 0.77, 95% CI 0.64-0.90; reader 2: 0.76, 95% CI 0.59-0.86) and SAD (reader 1: 0.76, 95% CI 0.64-0.88; reader 2: 0.75, 95% CI 0.62-0.89). The findings for LAD (reader 1: 0.73, 95% CI 0.60-0.86; reader 2: 0.71, 95% CI 0.57-0.85) and maximal 3D diameter (reader 1: 0.70, 95% CI 0.53-0.86; reader 2: 0.76, 95% CI 0.50-0.80) were substantially lower, and those for elongation (reader 1: 0.65, 95% CI 0.50-0.79; reader 2: 0.66, 95% CI 0.52-0.81) significantly lower (p < 0.05). Conclusion: Semi-automatic analysis of lymph nodes in malignant melanoma is supported by high segmentation quality and reproducibility. Compared to the established SAD, semi-automatic lymph node volumetry does not have an additive role for categorizing lymph nodes as normal or metastatic in malignant melanoma.

  20. Semi-Automatic Selection of Ground Control Points for High Resolution Remote Sensing Data in Urban Areas

    Directory of Open Access Journals (Sweden)

    Gulbe Linda

    2016-12-01

    Full Text Available Geometrical accuracy of remote sensing data is often ensured by geometrical transforms based on Ground Control Points (GCPs). Manual selection of GCPs is a time-consuming process that requires some sort of automation. Therefore, the aim of this study is to present and evaluate a methodology for easier, semi-automatic selection of ground control points for urban areas. A custom line scanning algorithm was implemented and applied to the data in order to extract potential GCPs for an image analyst. The proposed method was tested for classical orthorectification and a special object polygon transform. The results are convincing and show that in the test case the semi-automatic methodology is able to correct the locations of 70% (thermal data) to 80% (orthophoto images) of buildings. Geometrical transformation of subimages of approximately 3 hectares with approximately 12 automatically found GCPs resulted in an RMSE of approximately 1 meter with a standard deviation of 1.2 meters.

  1. Feature correspondence and semi-automatic ground truthing for airborne data collection

    Science.gov (United States)

    Tiwari, Spandan; Agarwal, Sanjeev; Phan, Chung; Acinelli, Todd M.

    2005-06-01

    A significant amount of airborne data has been collected in the past and more is expected to be collected in the future to support airborne landmine detection research and evaluation under various programs. In order to evaluate mine and minefield detection performance for sensors and detection algorithms, it is essential to generate reliable and accurate ground truth for the locations of the mine targets and fiducials present in raw imagery. The current ground truthing operation is primarily manual, which makes ground truthing a time-consuming and expensive exercise in the overall data collection effort. In this paper, a semi-automatic ground-truthing technique is presented which reduces the role of the operator to a few high-level input and validation actions. A correspondence is established between the high-contrast targets in the airborne imagery, called image features, and the known GPS locations of the targets on the ground, called map features, by imposing various position and geometric constraints. These image and map features may include individual fiducial targets, rows of fiducial targets and triplets of non-collinear fiducials. The targets in the imagery are detected using the RX anomaly detector. An affine or linear conformal transformation from map features to image features is calculated based on the feature correspondence. This map-to-image transformation is used to generate ground truth for mine targets. Since accurate and reliable flight-log data is currently not available, a one-time specification of a few parameters such as flight speed, flight direction, camera resolution and the location of the initial frame on the map is required from the operator. These parameters are updated and corrected for subsequent frames based on the processing of previous frames. Image registration is used to ground-truth images which do not have enough high-contrast fiducials for reliable correspondence. A GUI called SemiAutoGT developed in MATLAB
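
    The map-to-image step can be sketched as a least-squares affine fit. The correspondences below are invented for illustration, and the paper's own tool is implemented in MATLAB, so this is only a schematic numpy version.

        import numpy as np

        def fit_affine(map_pts, img_pts):
            """Least-squares 2x3 affine transform taking map to image coordinates."""
            n = len(map_pts)
            A = np.hstack([map_pts, np.ones((n, 1))])   # rows [x, y, 1]
            M, *_ = np.linalg.lstsq(A, img_pts, rcond=None)
            return M.T                                  # 2x3 matrix

        # Invented fiducial correspondences: GPS (map) vs. pixel (image) positions
        map_pts = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0], [0.0, 5.0]])
        img_pts = np.array([[102.0, 51.0], [402.0, 48.0], [399.0, 201.0], [99.0, 204.0]])
        M = fit_affine(map_pts, img_pts)
        # Ground-truth a mine target from its known map location
        print("predicted pixel:", M @ np.array([5.0, 2.5, 1.0]))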

  2. Parallelization of the AliRoot event reconstruction by performing a semi-automatic source-code transformation

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    side bus or processor interconnections. Parallelism can only result in a performance gain if memory usage is optimized, memory locality is improved and the communication between threads is minimized. But the domain of concurrent programming has become a field for highly skilled experts, as the implementation of multithreading is difficult, error-prone and labor-intensive. A full re-implementation for parallel execution of existing offline frameworks, like AliRoot in ALICE, is thus unaffordable. An alternative method is to use a semi-automatic source-to-source transformation to obtain a simple parallel design with almost no interference between threads. This reduces the need of rewriting the develop...

  3. Speaker diarization and speech recognition in the semi-automatization of audio description: An exploratory study on future possibilities?

    Directory of Open Access Journals (Sweden)

    Héctor Delgado

    2015-06-01

    This article presents an overview of the technological components used in the process of audio description, and suggests a new scenario in which speech recognition, machine translation, and text-to-speech, with the corresponding human revision, could be used to increase audio description provision. The article focuses on a process in which both speaker diarization and speech recognition are used in order to obtain a semi-automatic transcription of the audio description track. The technical process is presented and experimental results are summarized.

  4. Comparison of 2D radiography and a semi-automatic CT-based 3D method for measuring change in dorsal angulation over time in distal radius fractures

    Energy Technology Data Exchange (ETDEWEB)

    Christersson, Albert; Larsson, Sune [Uppsala University, Department of Orthopaedics, Uppsala (Sweden); Nysjoe, Johan; Malmberg, Filip; Sintorn, Ida-Maria; Nystroem, Ingela [Uppsala University, Centre for Image Analysis, Uppsala (Sweden); Berglund, Lars [Uppsala University, Uppsala Clinical Research Centre, UCR Statistics, Uppsala (Sweden)

    2016-06-15

    The aim of the present study was to compare the reliability and agreement between a computed tomography-based method (CT) and digitised 2D radiographs (XR) when measuring change in dorsal angulation over time in distal radius fractures. Radiographs from 33 distal radius fractures treated with external fixation were retrospectively analysed. All fractures had been examined using both XR and CT at six time points over 6 months postoperatively. The changes in dorsal angulation between the first reference images and the following examinations in every patient were calculated from 133 follow-up measurements by two assessors and repeated at two different time points. The measurements were analysed using Bland-Altman plots, comparing intra- and inter-observer agreement within and between XR and CT. The mean differences in intra- and inter-observer measurements for XR, CT, and between XR and CT were close to zero, implying equal validity. The average intra- and inter-observer limits of agreement for XR, CT, and between XR and CT were ± 4.4°, ± 1.9° and ± 6.8°, respectively. For scientific purposes, the reliability of XR seems unacceptably low when measuring changes in dorsal angulation in distal radius fractures, whereas the reliability of the semi-automatic CT-based method was higher and is therefore preferable when a more precise method is requested. (orig.)
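
    The Bland-Altman statistics used here reduce to a mean difference (bias) and 1.96 standard deviations of the paired differences. A minimal sketch with made-up paired angle changes, not the study measurements:

        import numpy as np

        def bland_altman(a, b):
            """Bias and 95% limits of agreement between two methods."""
            diff = np.asarray(a) - np.asarray(b)
            bias = diff.mean()
            half_width = 1.96 * diff.std(ddof=1)
            return bias, (bias - half_width, bias + half_width)

        # Invented paired changes in dorsal angulation (degrees), XR vs. CT
        xr = np.array([2.1, 4.8, -1.3, 6.0, 0.5, 3.2])
        ct = np.array([1.5, 5.1, -0.8, 5.4, 0.9, 2.8])
        bias, (lo, hi) = bland_altman(xr, ct)
        print(f"bias {bias:.2f} deg, limits of agreement [{lo:.2f}, {hi:.2f}] deg")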

  5. NLP techniques associated with the OpenGALEN ontology for semi-automatic textual extraction of medical knowledge: abstracting and mapping equivalent linguistic and logical constructs.

    Science.gov (United States)

    do Amaral, M B; Roberts, A; Rector, A L

    2000-01-01

    This research project presents methodological and theoretical issues related to the inter-relationship between linguistic and conceptual semantics, analysing the results obtained by the application of an NLP parser to a set of radiology reports. Our objective is to define a technique for associating linguistic methods with domain-specific ontologies for the semi-automatic extraction of intermediate representation (IR) information formats and medical ontological knowledge from clinical texts. We have applied the Edinburgh LTG natural language parser to 2810 clinical narratives describing radiology procedures. In a second step, we have used medical expertise and ontology formalism for the identification of semantic structures and the abstraction of IR schemas related to the processed texts. These IR schemas are an association of linguistic and conceptual knowledge, based on their semantic contents. This methodology aims to contribute to the elaboration of models relating linguistic and logical constructs based on empirical data analysis. Advances in this field might lead to the development of computational techniques for the automatic enrichment of medical ontologies from real clinical environments, using descriptive knowledge implicit in large text corpora.

  6. AN/FSQ-7: the computer that shaped the Cold War

    CERN Document Server

    Ulmann, Bernd

    2014-01-01

    One of the most impressive computer systems ever built was the vacuum-tube-based behemoth AN/FSQ-7, which was the heart of the "Semi-Automatic Ground Environment". Machines of this type were children of the Cold War: they not only had a tremendous effect on that episode in politics but also generated a vast number of spin-offs which still shape our world.

  7. Preliminary Investigation on the Effects of Shockwaves on Water Samples Using a Portable Semi-Automatic Shocktube

    Science.gov (United States)

    Wessley, G. Jims John

    2017-10-01

    The propagation of shock waves through any medium results in an instantaneous increase in pressure and temperature behind the shock wave. The scope for utilizing this sudden rise in pressure and temperature in new industrial, biological and commercial areas has been explored, and the opportunities are tremendous. This paper presents the design and testing of a portable semi-automatic shock tube on water samples mixed with salt. The preliminary analysis shows encouraging results, as the salinity of the water samples was reduced by up to 5% when bombarded with 250 shocks generated using a pressure ratio of 2.5. Ordinary printing paper was used as the diaphragm to generate the shocks. Shocks of much higher intensity, obtained using different diaphragms, should lead to a further reduction in the salinity of the sea water, thus leading to the production of potable water from saline water, which is the need of the hour.
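
    For context, the instantaneous pressure jump behind the shock mentioned above follows the standard normal-shock relation from textbook gas dynamics (not derived in the paper): for a shock of Mach number M_s in a gas with heat-capacity ratio gamma (about 1.4 for air),

        \[
        \frac{p_2}{p_1} \;=\; 1 + \frac{2\gamma}{\gamma + 1}\left(M_s^2 - 1\right)
        \]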

  8. Semi-automatic analysis of standard uptake values in serial PET/CT studies in patients with lung cancer and lymphoma

    Directory of Open Access Journals (Sweden)

    Ly John

    2012-04-01

    Full Text Available Abstract Background Changes in maximum standardised uptake values (SUVmax) between serial PET/CT studies are used to determine disease progression or regression in oncologic patients. Measuring these changes manually can be time-consuming in clinical routine. A semi-automatic method for the calculation of SUVmax in serial PET/CT studies was developed and compared to a conventional manual method. The semi-automatic method first aligns the serial PET/CT studies based on the CT images. Thereafter, the reader selects an abnormal lesion in one of the PET studies. After this manual step, the program automatically detects the corresponding lesion in the other PET study, segments the two lesions and calculates the SUVmax in both studies as well as the difference between the SUVmax values. The results of the semi-automatic analysis were compared to those of a manual SUVmax analysis using a Philips PET/CT workstation. Three readers performed the SUVmax readings with both methods. Sixteen patients with lung cancer or lymphoma who had undergone two PET/CT studies were included. There were a total of 26 lesions. Results Linear regression analysis of changes in SUVmax shows that intercepts and slopes are close to the line of identity for all readers (reader 1: intercept = 1.02, R2 = 0.96; reader 2: intercept = 0.97, R2 = 0.98; reader 3: intercept = 0.99, R2 = 0.98). The manual and semi-automatic methods agreed in all cases on whether SUVmax had increased or decreased between the serial studies. The average time to measure SUVmax changes in two serial PET/CT examinations was four to five times longer for the manual method compared to the semi-automatic method for all readers (reader 1: 53.7 vs. 10.5 s; reader 2: 27.3 vs. 6.9 s; reader 3: 47.5 vs. 9.5 s). Conclusions Good agreement was shown in the assessment of SUVmax changes between the manual and semi-automatic methods. The semi-automatic analysis was four to five times faster to perform than the manual analysis. These findings show the

  9. Semi-automatic reduced order models from expert-defined transients

    Science.gov (United States)

    Class, Andreas; Prill, Dennis

    2013-11-01

    Boiling water reactors (BWRs) not only show growing power oscillations at high-power low-flow conditions but also amplitude-limited oscillations with temporal flow reversal. Methodologies applicable in the non-linear regime allow insight into the physical mechanisms behind BWR dynamics. The proposed methodology exploits relevant simulation data computed for an expert-chosen transient. Proper orthogonal modes are extracted and serve as Ansatz functions within a spectral approach, yielding a reduced-order model (ROM). The steps required to achieve reliable and numerically stable ROMs are discussed, i.e. mean value handling, inner product choice, and the variational formulation of derivatives and boundary conditions. Two strongly non-linear systems are analyzed: the tubular reactor, including Arrhenius reaction kinetics and heat losses, responds sensitively to transient boundary conditions; a simple natural convection loop is considered due to its dynamical similarities to BWRs and exhibits bifurcations resulting in limit cycles. The presented POD-ROM methodology reproduces the dynamics with a small number of spectral modes and reaches appreciable accuracy. Funded by AREVA GmbH.
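
    Extracting proper orthogonal modes from transient data is, in its simplest form, an SVD of the mean-subtracted snapshot matrix. A minimal sketch with random placeholder snapshots (the paper's snapshots come from expert-defined transients):

        import numpy as np

        # Snapshot matrix: each column is the system state at one instant
        # of the transient (random placeholder data here)
        rng = np.random.default_rng(1)
        snapshots = rng.normal(size=(500, 200))          # 500 DOFs, 200 time steps

        fluctuations = snapshots - snapshots.mean(axis=1, keepdims=True)  # mean value handling

        # Proper orthogonal modes are the left singular vectors
        U, s, _ = np.linalg.svd(fluctuations, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.99)) + 1       # modes capturing 99% of the energy
        pod_modes = U[:, :r]
        print(f"{r} POD modes retain 99% of the snapshot energy")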

  10. Quality Metrics of Semi Automatic DTM from Large Format Digital Camera

    Science.gov (United States)

    Narendran, J.; Srinivas, P.; Udayalakshmi, M.; Muralikrishnan, S.

    2014-11-01

    High-resolution digital images from the Ultracam-D Large Format Digital Camera (LFDC) were used for near-automatic DTM generation. In the past, manual methods for DTM generation were used, which are time-consuming and labour-intensive. In this study, the LFDC was used in synergy with an accurate position and orientation system and processes such as image matching algorithms, distributed processing and filtering techniques for near-automatic DTM generation. Traditionally, DTM accuracy is reported using check points collected in the field, which are limited in number and costly and time-consuming to acquire. This paper discusses the reliability of the near-automatic DTM generated from Ultracam-D for an operational project covering an area of nearly 600 sq. km, using 21,000 check points captured stereoscopically by experienced operators. The reliability of the DTM for the three study areas with different morphology is presented using a large number of stereo check points and parameters related to the statistical distribution of residuals, such as skewness, kurtosis, standard deviation and linear error at 90% confidence (LE90). The residuals obtained for the three areas follow a normal distribution, in agreement with the majority of standards on positional accuracy. The quality metrics in terms of reliability were computed for the generated DTMs, and the tables and graphs show the potential of Ultracam-D for a semi-automatic DTM generation process for different terrain types.
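
    The residual statistics named above (skewness, kurtosis, standard deviation, LE90) can be reproduced with a few lines of numpy/scipy; the residuals below are synthetic, not the project data.

        import numpy as np
        from scipy import stats

        # Synthetic DTM height residuals at stereo check points (metres)
        rng = np.random.default_rng(2)
        residuals = rng.normal(0.0, 0.5, size=21000)

        print("mean     :", residuals.mean())
        print("std. dev.:", residuals.std(ddof=1))
        print("skewness :", stats.skew(residuals))
        print("kurtosis :", stats.kurtosis(residuals))
        # Linear error at 90% confidence: 90th percentile of absolute residuals
        print("LE90     :", np.percentile(np.abs(residuals), 90))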

  11. Improving the reproducibility of MR-derived left ventricular volume and function measurements with a semi-automatic threshold-based segmentation algorithm

    NARCIS (Netherlands)

    Jaspers, Karolien; Freling, Hendrik G.; van Wijk, Kees; Romijn, Elisabeth I.; Greuter, Marcel J. W.; Willems, Tineke P.

    To validate a novel semi-automatic segmentation algorithm for MR-derived volume and function measurements by comparing it with the standard method of manual contour tracing. The new algorithm excludes papillary muscles and trabeculae from the blood pool, while the manual approach includes these structures

  12. Development of computational algorithms for quantification of pulmonary structures; Desenvolvimento de algoritmos computacionais para quantificacao de estruturas pulmonares

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Marcela de; Alvarez, Matheus; Alves, Allan F.F.; Miranda, Jose R.A., E-mail: marceladeoliveira@ig.com.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Pina, Diana R. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Hospital das Clinicas. Departamento de Doencas Tropicais e Diagnostico por Imagem

    2012-12-15

    High-resolution computed tomography (HRCT) has become the imaging diagnostic exam most commonly used for the evaluation of the sequelae of paracoccidioidomycosis. Subjective evaluation of the radiological abnormalities found on HRCT images does not provide an accurate quantification. Computer-aided diagnosis systems produce a more objective assessment of the abnormal patterns found in HRCT images. Thus, this research proposes the development of algorithms in the MATLAB® computing environment that can semi-automatically quantify pathologies such as pulmonary fibrosis and emphysema. The algorithm consists of selecting a region of interest (ROI) and, through the use of masks, density filters and morphological operators, obtaining a quantification of the injured area relative to the area of healthy lung. The proposed method was tested on ten HRCT scans of patients with confirmed PCM. The results of the semi-automatic measurements were compared with subjective evaluations performed by a specialist in radiology, reaching an agreement of 80% for emphysema and 58% for fibrosis. (author)
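
    The density-based quantification described can be sketched as the fraction of ROI pixels below a density threshold. The paper's tool is in MATLAB; the Python sketch below, the -950 HU cut-off and the toy slice are illustrative assumptions, not values from the paper.

        import numpy as np

        def low_density_ratio(hu_slice, roi_mask, threshold=-950.0):
            """Fraction of ROI pixels below a CT density threshold (HU)."""
            roi = hu_slice[roi_mask]
            return np.count_nonzero(roi < threshold) / roi.size

        # Toy slice: normal lung near -800 HU with one emphysema-like patch
        hu = np.full((256, 256), -800.0)
        hu[100:140, 100:140] = -970.0
        roi = np.ones_like(hu, dtype=bool)               # whole slice as the ROI
        print(f"injured-to-ROI area ratio: {low_density_ratio(hu, roi):.3f}")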

  13. Semi-automatic 3D morphological reconstruction of neurons with densely branching morphology: Application to retinal AII amacrine cells imaged with multi-photon excitation microscopy.

    Science.gov (United States)

    Zandt, Bas-Jan; Losnegård, Are; Hodneland, Erlend; Veruki, Margaret Lin; Lundervold, Arvid; Hartveit, Espen

    2017-03-01

    Accurate reconstruction of the morphology of single neurons is important for morphometric studies and for developing compartmental models. However, manual morphological reconstruction can be extremely time-consuming and error-prone, and algorithms for automatic reconstruction can be challenged when applied to neurons with a high density of extensively branching processes. We present a procedure for semi-automatic reconstruction specifically adapted for densely branching neurons such as the AII amacrine cell found in mammalian retinas. We used whole-cell recording to fill AII amacrine cells in rat retinal slices with fluorescent dyes and acquired digital image stacks with multi-photon excitation microscopy. Our reconstruction algorithm combines elements of existing procedures, with segmentation based on adaptive thresholding and reconstruction based on a minimal spanning tree. We improved this workflow with an algorithm that reconnects neuron segments that are disconnected after adaptive thresholding, using paths extracted from the image stacks with the Fast Marching method. By reducing the likelihood that disconnected segments were incorrectly connected to neighboring segments, our procedure generated excellent morphological reconstructions of AII amacrine cells. Reconstructing an AII amacrine cell required about 2 h of computing time, compared to 2-4 days for manual reconstruction. To evaluate the performance of our method relative to manual reconstruction, we performed detailed analysis using a measure of tree structure similarity (DIADEM score), the degree of projection area overlap (Dice coefficient), and branch statistics. We expect our procedure to be generally useful for the morphological reconstruction of neurons filled with fluorescent dyes.
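
    The minimal-spanning-tree step of such a workflow can be sketched with scipy on a set of candidate points; the points below are random stand-ins for voxels extracted from a segmented image stack, not the authors' data.

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree
        from scipy.spatial.distance import pdist, squareform

        # Random stand-ins for candidate points from a segmented image stack
        rng = np.random.default_rng(3)
        points = rng.uniform(0, 100, size=(50, 3))

        # Pairwise Euclidean distances, then the minimal spanning tree
        mst = minimum_spanning_tree(squareform(pdist(points)))
        edges = np.transpose(mst.nonzero())
        print(f"{len(edges)} MST edges connect {len(points)} points")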

  14. ROADS CENTRE-AXIS EXTRACTION IN AIRBORNE SAR IMAGES: AN APPROACH BASED ON ACTIVE CONTOUR MODEL WITH THE USE OF SEMI-AUTOMATIC SEEDING

    Directory of Open Access Journals (Sweden)

    R. G. Lotte

    2013-05-01

    Full Text Available Research on computational methods for road extraction has increased considerably in the last two decades. This procedure is usually performed on optical or microwave sensor (radar) imagery. Radar images offer advantages when compared to optical ones, for they allow the acquisition of scenes regardless of atmospheric and illumination conditions, besides the possibility of surveying regions where the terrain is hidden by the vegetation canopy, among others. Cartographic mapping based on these images is often accomplished manually, requiring considerable time and effort from the human interpreter. Maps for detecting new roads or updating the existing road network are among the most important cartographic products to date. There are currently many studies involving the extraction of roads by means of automatic or semi-automatic approaches. Each of them presents different solutions to different problems, making this task a still-open scientific issue. One of the preliminary steps for road extraction can be the seeding of points belonging to roads, which can be done using different methods with diverse levels of automation. The identified seed points are interpolated to form the initial road network, and are then used as input for the extraction method proper. The present work introduces an innovative hybrid method for the extraction of road centre-axes in a synthetic aperture radar (SAR) airborne image. Initially, candidate points are fully automatically seeded using Self-Organizing Maps (SOM), followed by a pruning process based on specific metrics. The centre-axes are then detected by an open-curve active contour model (snakes). The obtained results were evaluated for quality with respect to completeness, correctness and redundancy.
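
    The open-curve snake step can be sketched with scikit-image's active_contour; the synthetic ridge, the seed curve and all parameter values below are illustrative assumptions, not the paper's configuration.

        import numpy as np
        from skimage.filters import gaussian
        from skimage.segmentation import active_contour

        # Synthetic image: a bright horizontal ridge standing in for a road
        img = np.zeros((100, 200))
        img[48:52, :] = 1.0
        img = gaussian(img, sigma=3)

        # Initial open curve from seed points (e.g. SOM-derived), near the road
        init = np.stack([np.full(50, 40.0), np.linspace(5, 195, 50)], axis=1)

        # 'fixed' end points keep the curve open; w_line pulls it to bright pixels
        snake = active_contour(img, init, boundary_condition='fixed',
                               alpha=0.01, beta=1.0, w_line=1.0, w_edge=0.0)
        print("mean row of the fitted centre-axis:", snake[:, 0].mean())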

  15. COMPARISON OF SEMI AUTOMATIC DTM FROM IMAGE MATCHING WITH DTM FROM LIDAR

    OpenAIRE

    Rahmayudi, Aji; Rizaldy, Aldino

    2016-01-01

    Nowadays, LIDAR DTMs are used extensively for generating contour lines in topographic maps. This method is far superior to traditional stereomodel compilation from aerial images, which consumes a large amount of human operator resources and is very time-consuming. Since the improvement of computer vision and digital image processing, it is possible to generate a point cloud DSM from aerial images using image matching algorithms. It is also possible to classify the point cloud DSM to a DTM using the same techn...

  16. Semi-automatic registration of 3D orthodontics models from photographs

    Science.gov (United States)

    Destrez, Raphaël.; Treuillet, Sylvie; Lucas, Yves; Albouy-Kissi, Benjamin

    2013-03-01

    In orthodontics, a common practice used to diagnose and plan treatment is the dental cast. After digitization by a CT scan or a laser scanner, the obtained 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of the dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method to automatically compute the registration from photos of the patient's mouth. From a set of matched singular points between two photos and the dental 3D models, the rigid transformation to apply to the mandible to bring it into contact with the maxilla may be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in the 2D images. Concurrently, curvature information is used to detect the corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix with the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion realized by a specialist.
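
    Estimating a rigid pose by minimizing 2D/3D reprojection error is what OpenCV's solvePnP does; the correspondences and camera intrinsics below are invented placeholders, so this is only a schematic of that optimization, not the authors' pipeline.

        import numpy as np
        import cv2

        # Invented 3-D model points (mm) and matched 2-D image points (pixels)
        obj_pts = np.array([[0, 0, 0], [30, 0, 0], [30, 20, 0],
                            [0, 20, 0], [15, 10, 5], [5, 15, 8]], dtype=np.float64)
        img_pts = np.array([[320, 240], [420, 238], [422, 310],
                            [318, 312], [370, 275], [338, 295]], dtype=np.float64)
        K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

        # Rigid pose that minimizes reprojection error, and the residual it leaves
        ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
        proj, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, None)
        err = np.linalg.norm(proj.reshape(-1, 2) - img_pts, axis=1).mean()
        print("mean reprojection error (px):", round(float(err), 2))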

  17. Attacks on computer systems

    Directory of Open Access Journals (Sweden)

    Dejan V. Vuletić

    2012-01-01

    Full Text Available Computer systems are a critical component of human society in the 21st century. The economic sector, defense, security, energy, telecommunications, industrial production, finance and other vital infrastructure depend on computer systems that operate at local, national or global scales. A particular problem is that, due to the rapid development of ICT and the unstoppable growth of its application in all spheres of human society, the vulnerability of these systems and their exposure to very serious potential dangers are increasing. This paper analyzes some typical attacks on computer systems.

  18. 3D dento-maxillary osteolytic lesion and active contour segmentation pilot study in CBCT: semi-automatic vs manual methods.

    Science.gov (United States)

    Vallaeys, K; Kacem, A; Legoux, H; Le Tenier, M; Hamitouche, C; Arbab-Chirani, R

    2015-01-01

    This study was designed to evaluate the reliability of a semi-automatic segmentation tool for dento-maxillary osteolytic image analysis compared with manually defined segmentation in CBCT scans. Five CBCT scans were selected from patients for whom periapical radiolucency images were available. All images were obtained using a ProMax® 3D Mid Planmeca (Planmeca Oy, Helsinki, Finland) and were acquired with a 200-μm voxel size. Two clinicians performed the manual segmentations. Four operators applied three different semi-automatic procedures. The volumes of the lesions were measured. An analysis of dispersion was made for each procedure and each case. An ANOVA was used to evaluate the operator effect. Non-paired t-tests were used to compare the semi-automatic procedures with the manual procedure. Statistical significance was set at α = 0.01. The coefficients of variation for the manual procedure were 2.5-3.5% on average. There was no statistical difference between the two operators, so the results of the manual procedures can be used as a reference. For the semi-automatic procedures, the dispersion around the mean can be elevated, depending on the operator and the case. ANOVA revealed significant differences between the operators for the three techniques, depending on the case. Region-based segmentation was comparable with the manual procedure only for delineating a circumscribed osteolytic dento-maxillary lesion. The semi-automatic segmentations tested are promising but remain limited for complex surface structures. A methodology that combines the strengths of both methods could be of interest and should be tested. The improvement in image analysis made possible by the segmentation procedure and CBCT image quality could be of value.

  19. Semi-automatic tool for segmentation and volumetric analysis of medical images.

    Science.gov (United States)

    Heinonen, T; Dastidar, P; Kauppinen, P; Malmivuo, J; Eskola, H

    1998-05-01

    Segmentation software developed for medical image processing and running on Windows is described. The software applies basic image processing techniques through a graphical user interface. For particular applications, such as brain lesion segmentation, the software enables the combination of different segmentation techniques to improve its efficiency. The program has been applied to magnetic resonance imaging, computed tomography and optical images of cryosections. The software can be utilised in numerous applications, including pre-processing for three-dimensional presentations, volumetric analysis and the construction of volume conductor models.

  20. Resilient computer system design

    CERN Document Server

    Castano, Victor

    2015-01-01

    This book presents a paradigm for designing new generation resilient and evolving computer systems, including their key concepts, elements of supportive theory, methods of analysis and synthesis of ICT with new properties of evolving functioning, as well as implementation schemes and their prototyping. The book explains why new ICT applications require a complete redesign of computer systems to address challenges of extreme reliability, high performance, and power efficiency. The authors present a comprehensive treatment for designing the next generation of computers, especially addressing safety-critical, autonomous, real time, military, banking, and wearable health care systems.
    - Describes design solutions for a new computer system - evolving reconfigurable architecture (ERA) that is free from drawbacks inherent in current ICT and related engineering models
    - Pursues simplicity, reliability, scalability principles of design implemented through redundancy and re-configurability; targeted for energy-,...

  1. Semi-automatic people counting in aerial images of large crowds

    Science.gov (United States)

    Herrmann, Christian; Metzler, Juergen; Willersinn, Dieter

    2012-09-01

    Counting people in crowds is a common problem in visual surveillance. Many solutions are designed to count fewer than one hundred people. Only a few systems have been tested on large crowds of several hundred people, and no known counting system has been tested on crowds of several thousand people. Furthermore, none of these large-scale systems delivers people's positions; they just estimate the number. But having the positions of people would be a large benefit, since this would enable a human observer to carry out a plausibility check. In addition, most approaches require video data as input or a scene model. In order to solve the problem in general, these assumptions must not be made. We propose a system that can count people in single aerial images, including mosaic images generated from video data. No assumptions about crowd density are made, i.e. the system has to work from low to very high density. The main challenge is the large variety of possible input data. Typical scenarios would be public events such as demonstrations or open-air concerts. Our system uses a model-based detection of individual humans. This includes the determination of their positions and of the total number. In order to cope with the given challenges, we divide our system into three steps: foreground segmentation, person size determination and person detection. We evaluate our proposed system on a variety of aerial images showing large crowds with up to several thousand people.

  2. COMPARISON OF SEMI AUTOMATIC DTM FROM IMAGE MATCHING WITH DTM FROM LIDAR

    Directory of Open Access Journals (Sweden)

    A. Rahmayudi

    2016-06-01

    Full Text Available Nowadays, LIDAR DTMs are used extensively for generating contour lines in topographic maps. This method is far superior to traditional stereomodel compilation from aerial images, which consumes a large amount of human operator resources and is very time-consuming. Since the improvement of computer vision and digital image processing, it is possible to generate a point cloud DSM from aerial images using image matching algorithms. It is also possible to classify the point cloud DSM to a DTM using the same technique as for LIDAR classification, producing a DTM which is comparable to a LIDAR DTM. This research studies the accuracy difference between both DTMs and the resulting DTMs under several different conditions, including urban and forest areas and flat and mountainous terrain, as well as the computation time for the mass production of topographic maps. The statistical data show that both methods are able to produce maps at the 1:5,000 topographic map scale.

  3. Comparison of Semi Automatic DTM from Image Matching with DTM from LIDAR

    Science.gov (United States)

    Rahmayudi, Aji; Rizaldy, Aldino

    2016-06-01

    Nowadays, LIDAR DTMs are used extensively for generating contour lines in topographic maps. This method is far superior to traditional stereomodel compilation from aerial images, which consumes a large amount of human operator resources and is very time-consuming. Since the improvement of computer vision and digital image processing, it is possible to generate a point cloud DSM from aerial images using image matching algorithms. It is also possible to classify the point cloud DSM to a DTM using the same technique as for LIDAR classification, producing a DTM which is comparable to a LIDAR DTM. This research studies the accuracy difference between both DTMs and the resulting DTMs under several different conditions, including urban and forest areas and flat and mountainous terrain, as well as the computation time for the mass production of topographic maps. The statistical data show that both methods are able to produce maps at the 1:5,000 topographic map scale.

  4. Computer Vision Assisted Virtual Reality Calibration

    Science.gov (United States)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  5. Test-retest reliability of echo intensity parameters in healthy Achilles tendons using a semi-automatic tracing procedure.

    Science.gov (United States)

    Schneebeli, Alessandro; Del Grande, Filippo; Vincenzo, Gabriele; Cescon, Corrado; Barbero, Marco

    2017-11-01

    To evaluate the test-retest reliability of ultrasound echo intensity parameters in healthy Achilles tendons using a semi-automatic tracing procedure. Eighteen healthy volunteers participated. B-mode images were acquired in the transverse plane (mid-tendon; insertion) and used to analyze tendon echogenicity. The grayscale distribution of the pixels within the selected ROIs was represented as a histogram. Descriptive statistics of the grayscale distribution (mean, variance, skewness, kurtosis, and entropy) and parameters from the co-occurrence matrix (contrast, energy, and homogeneity) were calculated. The reliability of echo intensity parameters for the mid-Achilles tendon ranged from high to very high, with an ICC(2,k) of 0.94 for echogenicity, 0.87 for variance, 0.80 for skewness, 0.72 for kurtosis, 0.89 for entropy, 0.90 for contrast, 0.91 for energy, and 0.93 for homogeneity, while for the tendon insertion it ranged from moderate to high, with an ICC(2,k) of 0.74 for echogenicity, 0.88 for variance, 0.75 for skewness, 0.55 for kurtosis, 0.87 for entropy, 0.70 for contrast, 0.77 for energy, and 0.56 for homogeneity. Ultrasound echo intensity is a reliable technique for characterizing the internal structure of the Achilles tendon.
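
    The first- and second-order texture descriptors listed above can be computed with scipy and scikit-image's grey-level co-occurrence matrix utilities; the ROI below is random placeholder data, not a tendon image.

        import numpy as np
        from scipy import stats
        from skimage.feature import graycomatrix, graycoprops

        # Random placeholder for an 8-bit ROI from a transverse tendon image
        rng = np.random.default_rng(4)
        roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

        # First-order (histogram) descriptors
        pixels = roi.ravel().astype(float)
        hist = np.bincount(roi.ravel(), minlength=256) / pixels.size
        p = hist[hist > 0]
        print("mean/variance:", pixels.mean(), pixels.var(ddof=1))
        print("skew/kurtosis:", stats.skew(pixels), stats.kurtosis(pixels))
        print("entropy      :", -(p * np.log2(p)).sum())

        # Second-order descriptors from the grey-level co-occurrence matrix
        glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256, normed=True)
        for prop in ("contrast", "energy", "homogeneity"):
            print(prop, ":", graycoprops(glcm, prop)[0, 0])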

  6. a Semi-Automatic Rule Set Building Method for Urban Land Cover Classification Based on Machine Learning and Human Knowledge

    Science.gov (United States)

    Gu, H. Y.; Li, H. T.; Liu, Z. Y.; Shao, C. Y.

    2017-09-01

    A classification rule set, which comprises features and decision rules, is important for land cover classification. The selection of features and decision rules is usually based on an iterative trial-and-error approach, as often utilized in GEOBIA; however, this is time-consuming and has poor versatility. This study puts forward a rule set building method for land cover classification based on human knowledge and machine learning. Machine learning is used to build rule sets efficiently, overcoming the iterative trial-and-error approach. Human knowledge is used to address the insufficient usage of prior knowledge in existing machine learning methods and to improve the versatility of the rule sets. A two-step workflow has been introduced: firstly, an initial rule set is built based on Random Forest and a CART decision tree; secondly, the initial rule set is analyzed and validated based on human knowledge, where we use a statistical confidence interval to determine its thresholds. The test site is located in Potsdam City. We utilised the TOP, DSM and ground truth data. The results show that the method can determine a rule set for land cover classification semi-automatically, and that there are static features for different land cover classes.
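
    A minimal sketch of the two-step idea, on toy object features: a Random Forest ranks the features, a shallow CART tree yields a readable initial rule, and a confidence interval on a class's feature values stands in for the human-knowledge threshold check. All data and names are invented.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        # Toy object features standing in for, e.g., height and a spectral index
        rng = np.random.default_rng(5)
        X = rng.normal(size=(300, 2))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # two land cover classes

        # Step 1: machine learning proposes the initial rule set
        rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        print("feature importances:", rf.feature_importances_)
        cart = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
        print(export_text(cart, feature_names=["height", "index"]))

        # Step 2: a confidence-interval check on a class's feature values stands
        # in for the human-knowledge validation of a threshold
        print("95% interval of 'height' in class 1:",
              np.percentile(X[y == 1, 0], [2.5, 97.5]))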

  7. Semi-automatic identification of punching areas for tissue microarray building: the tubular breast cancer pilot study

    Directory of Open Access Journals (Sweden)

    Beltrame Francesco

    2010-11-01

    Full Text Available Abstract Background Tissue MicroArray technology aims to perform immunohistochemical staining on hundreds of different tissue samples simultaneously. It allows faster analysis, considerably reducing the costs incurred in staining. A time-consuming phase of the methodology is the selection of tissue areas within the paraffin blocks: no utilities have been developed for the identification of areas to be punched from the donor block and assembled in the recipient block. Results The presented work supports, in the specific case of a primary subtype of breast cancer (tubular breast cancer), the semi-automatic discrimination and localization between normal and pathological regions within the tissues. The diagnosis is performed by analysing specific morphological features of the sample, such as the absence of a double layer of cells around the lumen and the decay of a regular glands-and-lobules structure. These features are analysed using an algorithm which performs the extraction of morphological parameters from images and compares them to experimentally validated threshold values. Results are satisfactory, since in most of the cases the automatic diagnosis matches the response of the pathologists. In particular, on a total of 1296 sub-images showing normal and pathological areas of breast specimens, the algorithm's accuracy, sensitivity and specificity are respectively 89%, 84% and 94%. Conclusions The proposed work is a first attempt to demonstrate that automation in the Tissue MicroArray field is feasible and that it can represent an important tool for scientists to cope with this high-throughput technique.

  8. Semi-automatic extraction of sectional view from point clouds - The case of Ottmarsheim's abbey-church

    Science.gov (United States)

    Landes, T.; Bidino, S.; Guild, R.

    2014-06-01

    Today, elevations or sectional views of buildings are often produced from terrestrial laser scanning. However, due to the amount of data to process, and because customers usually require 2D maps, the 3D point cloud is often degraded into 2D slices. A sectional view represents not only the portions of the object intersected by the cutting plane but also the edges and contours of other parts of the object that are visible behind the cutting plane. To avoid tedious manual drawing, the aim of this work is to propose a semi-automatic approach for creating sectional views by point cloud processing. The extraction of sectional views requires, as a first step, the segmentation of the point cloud into planar and non-planar entities. Since arches, vaults and columns are common in cultural heritage buildings, the position and the direction of the sectional view must be taken into account before contour extraction. Indeed, the edges of surfaces of revolution depend on the chosen view. The developed extraction approach is detailed based on point clouds acquired inside and outside churches. The resulting sectional view has been evaluated in a qualitative and quantitative way by comparing it with a reference sectional view made by hand. A mean deviation of 3 cm between both sections shows that the proposed approach is promising. Regarding processing time, despite a few manual corrections, the approach saved 40% of the time required for manual drawing.
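
    The core slicing operation, selecting the points within a thin slab around the cutting plane, can be sketched in a few lines of numpy; the cloud, plane and slab thickness below are illustrative assumptions.

        import numpy as np

        def slice_points(cloud, origin, normal, thickness=0.02):
            """Points within +/- thickness/2 of the cutting plane."""
            normal = np.asarray(normal, float)
            normal /= np.linalg.norm(normal)
            dist = (cloud - origin) @ normal             # signed distance to plane
            return cloud[np.abs(dist) <= thickness / 2]

        # Illustrative cloud and a vertical cutting plane through its middle
        rng = np.random.default_rng(6)
        cloud = rng.uniform(0, 10, size=(100000, 3))
        section = slice_points(cloud, origin=[5.0, 0.0, 0.0], normal=[1.0, 0.0, 0.0])
        print(f"{len(section)} of {len(cloud)} points fall within the section slab")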

  9. From Point Clouds to Building Information Models: 3D Semi-Automatic Reconstruction of Indoors of Existing Buildings

    Directory of Open Access Journals (Sweden)

    Hélène Macher

    2017-10-01

    Full Text Available The creation of as-built Building Information Models requires the acquisition of the as-is state of existing buildings. Laser scanners are widely used to achieve this goal, since they permit the collection of information about object geometry in the form of point clouds and provide a large amount of accurate data very quickly and with a high level of detail. Unfortunately, the scan-to-BIM (Building Information Model) process currently remains largely manual, which makes it time-consuming and error-prone. In this paper, a semi-automatic approach is presented for the 3D reconstruction of the indoors of existing buildings from point clouds. Several segmentations are performed so that point clouds corresponding to grounds, ceilings and walls are extracted. Based on these point clouds, the walls and slabs of the buildings are reconstructed and described in the IFC format in order to be integrated into BIM software. The approach is assessed using two datasets. The evaluation items are the degree of automation, the transferability of the approach, and the geometric quality of the results of the 3D reconstruction. Additionally, quality indexes are introduced for inspecting the results and detecting potential reconstruction errors.

  10. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure

    Science.gov (United States)

    Castellazzi, Giovanni; D’Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-01-01

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected with the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation. PMID:26225978
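
    The voxelization at the heart of such a procedure can be sketched as an occupancy grid over the point cloud; the synthetic "wall" cloud and the 0.1 m voxel size below are illustrative assumptions, not the paper's data.

        import numpy as np

        def voxelize(points, voxel_size):
            """Boolean occupancy grid; each occupied voxel is a candidate element."""
            mins = points.min(axis=0)
            idx = np.floor((points - mins) / voxel_size).astype(int)
            grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
            grid[tuple(idx.T)] = True
            return grid

        # Synthetic 'wall' cloud, 10 m x 0.3 m x 3 m, voxelized at 0.1 m
        rng = np.random.default_rng(7)
        wall = rng.uniform([0, 0, 0], [10.0, 0.3, 3.0], size=(20000, 3))
        grid = voxelize(wall, voxel_size=0.1)
        print(f"{grid.sum()} occupied voxels -> candidate finite elements")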

  11. A SEMI-AUTOMATIC RULE SET BUILDING METHOD FOR URBAN LAND COVER CLASSIFICATION BASED ON MACHINE LEARNING AND HUMAN KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    H. Y. Gu

    2017-09-01

    Full Text Available A classification rule set, which comprises features and decision rules, is important for land cover classification. The selection of features and decision rules is usually based on an iterative trial-and-error approach, as often utilized in GEOBIA; however, this is time-consuming and has poor versatility. This study puts forward a rule set building method for land cover classification based on human knowledge and machine learning. Machine learning is used to build rule sets efficiently, overcoming the iterative trial-and-error approach. Human knowledge is used to address the insufficient usage of prior knowledge in existing machine learning methods and to improve the versatility of the rule sets. A two-step workflow has been introduced: firstly, an initial rule set is built based on Random Forest and a CART decision tree; secondly, the initial rule set is analyzed and validated based on human knowledge, where we use a statistical confidence interval to determine its thresholds. The test site is located in Potsdam City. We utilised the TOP, DSM and ground truth data. The results show that the method can determine a rule set for land cover classification semi-automatically, and that there are static features for different land cover classes.

  12. Semi-Automatic Registration of Airborne and Terrestrial Laser Scanning Data Using Building Corner Matching with Boundaries as Reliability Check

    Directory of Open Access Journals (Sweden)

    Liang Cheng

    2013-11-01

    Full Text Available Data registration is a prerequisite for the integration of multi-platform laser scanning in various applications. A new approach is proposed for the semi-automatic registration of airborne and terrestrial laser scanning data with buildings without eaves. Firstly, an automatic calculation procedure for the thresholds in the density of projected points (DoPP) method is introduced to extract boundary segments from terrestrial laser scanning data. A new algorithm, using a self-extending procedure, is developed to recover the extracted boundary segments, which then intersect to form the corners of buildings. The building corners extracted from airborne and terrestrial laser scanning are reliably matched through an automatic iterative process in which the boundaries from the two datasets are compared as a reliability check. The experimental results illustrate that the proposed approach provides both high reliability and high geometric accuracy (average error of 0.44 m/0.15 m in the horizontal/vertical direction) for corresponding building corners for the final registration of airborne laser scanning (ALS) and tripod-mounted terrestrial laser scanning (TLS) data.

  13. Semi-Automatic Modelling of Building Façades with Shape Grammars Using Historic Building Information Modelling

    Science.gov (United States)

    Dore, C.; Murphy, M.

    2013-02-01

    This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using Geometric Description Language (GDL), an embedded programming language within the ArchiCAD BIM software. Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can subsequently be refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed, conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.

  14. Towards Semi-Automatic Artifact Rejection for the Improvement of Alzheimer’s Disease Screening from EEG Signals

    Directory of Open Access Journals (Sweden)

    Jordi Solé-Casals

    2015-07-01

    Full Text Available A large number of studies have analyzed the measurable changes that Alzheimer's disease causes in electroencephalography (EEG). Despite being easily reproducible, those markers have limited sensitivity, which reduces the interest of EEG as a screening tool for this pathology. This is in large part due to the poor signal-to-noise ratio of EEG signals: EEG recordings are indeed usually corrupted by spurious extra-cerebral artifacts. These artifacts are responsible for a substantial degradation of signal quality. We investigate the possibility of automatically cleaning a database of EEG recordings taken from patients suffering from Alzheimer's disease and healthy age-matched controls. We present here an investigation of commonly used markers of EEG artifacts: kurtosis, sample entropy, zero-crossing rate and fractal dimension. We investigate the reliability of the markers by comparison with human labeling of sources. Our results show significant differences for the sample entropy marker. We present a strategy for semi-automatic cleaning based on blind source separation, which may improve the specificity of Alzheimer's screening using EEG signals.
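
    Two of the four markers, kurtosis and zero-crossing rate, are simple enough to sketch directly (sample entropy and fractal dimension are omitted for brevity); the two "sources" below are synthetic stand-ins for separated EEG components.

        import numpy as np
        from scipy.stats import kurtosis

        def zero_crossing_rate(x):
            """Fraction of consecutive samples whose sign changes."""
            return np.count_nonzero(np.diff(np.sign(x))) / (len(x) - 1)

        # Synthetic 'sources': a clean oscillation vs. a blink-like transient
        t = np.linspace(0, 2, 512)
        clean = np.sin(2 * np.pi * 10 * t)
        artifact = np.exp(-((t - 1.0) / 0.05) ** 2)

        for name, sig in [("clean", clean), ("artifact", artifact)]:
            print(f"{name}: kurtosis={kurtosis(sig):.2f}, "
                  f"ZCR={zero_crossing_rate(sig):.3f}")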

  15. A Semi-Automatic Approach to Construct Vietnamese Ontology from Online Text

    Science.gov (United States)

    Nguyen, Bao-An; Yang, Don-Lin

    2012-01-01

    An ontology is an effective formal representation of knowledge used commonly in artificial intelligence, semantic web, software engineering, and information retrieval. In open and distance learning, ontologies are used as knowledge bases for e-learning supplements, educational recommenders, and question answering systems that support students with…

  16. Semi-automatic border detection software for the quantification of arterial lumen, intima-media and adventitia layer thickness with very-high resolution ultrasound.

    Science.gov (United States)

    Sundholm, Johnny; Gustavsson, Tomas; Sarkola, Taisto

    2014-06-01

    The aim was to evaluate the accuracy, precision and feasibility of semi-automatic border detection software (AMS) in comparison to manual electronic calipers (EC) in the analysis of arterial images obtained with transcutaneous very-high resolution vascular ultrasound (VHRU, 25-55 MHz). 100 images from central elastic and peripheral muscular arteries were obtained on two separate imaging occasions from 10 healthy subjects, and independently measured with AMS and EC. No bias between AMS and EC was found. The intraobserver coefficient of variation (CV) for carotid lumen dimension (mean dimension 5.60 mm) was lower with AMS compared with EC (0.4 vs. 1.9%, p = 0.033; N = 20). No consistently significant differences in intra-observer, inter-observer or test-retest CVs were observed overall for muscular artery dimensions between AMS and EC. The intra-observer CV for adventitial thickness (AT, mean 0.111 mm; 15.6 vs. 24.8%, p = 0.011; N = 41) and the inter-observer CV for intima-media thickness (IMT, mean 0.219 mm; 14.3 vs. 21.2%, p = 0.001; N = 58) obtained with AMS in higher-quality thin muscular artery images were lower compared with EC. The mean reading time was significantly lower with AMS compared with EC (71.5 s vs. 156.6 s, p < 0.001). AMS is accurate, precise, and feasible in the analysis of arterial images obtained with VHRU. Minor, although statistically significant, differences in the precision of the AMS and EC systems were found. The precision of AMS was superior for AT and IMT in higher-quality images, likely related to a decrease in the technical variability imposed by the observer.

  17. Reflective random indexing for semi-automatic indexing of the biomedical literature.

    Science.gov (United States)

    Vasuki, Vidya; Cohen, Trevor

    2010-10-01

    The rapid growth of biomedical literature is evident in the increasing size of the MEDLINE research database. Medical Subject Headings (MeSH), a controlled set of keywords, are used to index all the citations contained in the database to facilitate search and retrieval. This volume of citations calls for efficient tools to assist indexers at the US National Library of Medicine (NLM). Currently, the Medical Text Indexer (MTI) system provides assistance by recommending MeSH terms based on the title and abstract of an article using a combination of distributional and vocabulary-based methods. In this paper, we evaluate a novel approach toward indexer assistance by using nearest neighbor classification in combination with Reflective Random Indexing (RRI), a scalable alternative to the established methods of distributional semantics. On a test set provided by the NLM, our approach significantly outperforms the MTI system, suggesting that the RRI approach would make a useful addition to the current methodologies.

  18. YODA++: A proposal for a semi-automatic space mission control

    Science.gov (United States)

    Casolino, M.; de Pascale, M. P.; Nagni, M.; Picozza, P.

    YODA++ is a proposal for a semi-automated data handling and analysis system for the PAMELA space experiment. The core routines have been developed to process a stream of raw data downlinked from the Resurs DK1 satellite (housing PAMELA) to the ground station in Moscow. Raw data consist of scientific data complemented by housekeeping information. Housekeeping information will be analyzed within a short time from download (1 h) in order to monitor the status of the experiment and to support mission acquisition planning. A prototype for data visualization will run on an Apache Tomcat web application server, providing an off-line analysis tool accessible through a browser, together with part of the code for system maintenance. Data retrieval development is in the production phase, while a GUI for human-friendly monitoring is in a preliminary phase, as is a JavaServer Pages/JavaServer Faces (JSP/JSF) web application facility. On a longer timescale (1-3 h from download), scientific data are analyzed. The data storage core will be a mix of CERN's ROOT file structure and MySQL as a relational database. YODA++ is currently being used in the on-ground integration and testing of PAMELA data.

  19. Semi-Automatic Mark-Up and UMLS Annotation of Clinical Guidelines.

    Science.gov (United States)

    Becker, Matthias; Böckmann, Britta

    2017-01-01

    Clinical guidelines and clinical pathways are accepted and proven instruments for quality assurance and process optimization in the healthcare domain. To derive clinical pathways from clinical guidelines, the imprecise, non-formalized abstract guidelines must be formalized. The transfer of evidence-based knowledge (clinical guidelines) to care processes (clinical pathways) is not straightforward due to different information contents and semantic constructs. A complex step within this formalization process is the mark-up and annotation of text passages with terminology concepts. The Unified Medical Language System (UMLS) provides a common reference terminology as well as the semantic link for combining the clinical pathways with patient-specific information. This paper proposes a semi-automatic mark-up and UMLS annotation of clinical guidelines using natural language processing techniques. The algorithm has been tested and evaluated using a German breast cancer guideline.

  20. Rapid, semi-automatic fracture and contact mapping for point clouds, images and geophysical data

    Science.gov (United States)

    Thiele, Samuel T.; Grose, Lachlan; Samsu, Anindita; Micklethwaite, Steven; Vollgger, Stefan A.; Cruden, Alexander R.

    2017-12-01

    The advent of large digital datasets from unmanned aerial vehicle (UAV) and satellite platforms now challenges our ability to extract information across multiple scales in a timely manner, often meaning that the full value of the data is not realised. Here we adapt a least-cost-path solver and specially tailored cost functions to rapidly interpolate structural features between manually defined control points in point cloud and raster datasets. We implement the method in the geographic information system QGIS and the point cloud and mesh processing software CloudCompare. Using these implementations, the method can be applied to a variety of three-dimensional (3-D) and two-dimensional (2-D) datasets, including high-resolution aerial imagery, digital outcrop models, digital elevation models (DEMs) and geophysical grids. We demonstrate the algorithm with four diverse applications in which we extract (1) joint and contact patterns in high-resolution orthophotographs, (2) fracture patterns in a dense 3-D point cloud, (3) earthquake surface ruptures of the Greendale Fault associated with the Mw7.1 Darfield earthquake (New Zealand) from high-resolution light detection and ranging (lidar) data, and (4) oceanic fracture zones from bathymetric data of the North Atlantic. The approach improves the consistency of the interpretation process while retaining expert guidance and achieves significant improvements (35-65 %) in digitisation time compared to traditional methods. Furthermore, it opens up new possibilities for data synthesis and can quantify the agreement between datasets and an interpretation.
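
    A least-cost-path interpolation between two control points over a cost raster can be sketched with scikit-image's route_through_array; the cost image below is a synthetic stand-in for the paper's specially tailored cost functions.

        import numpy as np
        from skimage.graph import route_through_array

        # Synthetic cost raster: a low-cost line stands in for a lineament;
        # real cost functions would be tailored to the dataset, as in the paper
        rng = np.random.default_rng(8)
        cost = rng.uniform(0.5, 1.0, size=(100, 100))
        cost[50, :] = 0.05

        # Interpolate the structure between two manually picked control points
        path, total = route_through_array(cost, (50, 5), (50, 95),
                                          fully_connected=True)
        print(f"{len(path)} pixels traced, accumulated cost {total:.2f}")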

  1. Computer network defense system

    Science.gov (United States)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    2017-08-22

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, enabling protecting the group of the virtual machines from actions performed by the adversary.

  2. SEMI-AUTOMATIC CO-REGISTRATION OF PHOTOGRAMMETRIC AND LIDAR DATA USING BUILDINGS

    Directory of Open Access Journals (Sweden)

    C. Armenakis

    2012-07-01

    In this work, the co-registration steps between LiDAR and photogrammetric DSM 3D data are analyzed and a solution based on automated plane matching is proposed and implemented. For a robust 3D geometric transformation both planes and points are used. Initially, planes are chosen as the co-registration primitives. To confine the search space for the plane matching, a sequential automatic building matching is performed first. For matching buildings from the LiDAR and the photogrammetric data, a similarity objective function is formed based on the roof height difference (RHD), the 3D histogram of the building attributes, and the building boundary area. A region growing algorithm based on a Triangulated Irregular Network (TIN) is implemented to extract planes from both datasets. Next, an automatic successive process for identifying and matching corresponding planes from the two datasets has been developed and implemented. It is based on the building boundary region and determines plane pairs through a robust matching process, thus eliminating outlier pairs. The selected correct plane pairs are the input data for the geometric transformation process. The 3D conformal transformation method, in conjunction with the attitude quaternion, is applied to obtain the transformation parameters using the normal vectors of the corresponding plane pairs. Following the mapping of one dataset onto the coordinate system of the other, the Iterative Closest Point (ICP) algorithm is applied, using the corresponding building point clouds to further refine the transformation solution. The results indicate that the combination of planes and points improves the co-registration outcomes.
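
    The record recovers the rotation from the normal vectors of matched plane pairs via an attitude quaternion; the SVD-based Kabsch solution below is an equivalent least-squares route, sketched on synthetic normals rather than real LiDAR/photogrammetric planes.

    import numpy as np

    def rotation_from_normals(A, B):
        """Least-squares rotation R with R @ A[i] ~= B[i] (Kabsch/SVD method)."""
        H = A.T @ B
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        D = np.diag([1.0, 1.0, d])
        return Vt.T @ D @ U.T

    # Synthetic test: rotate some unit normals by a known rotation, recover it.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(6, 3))
    A /= np.linalg.norm(A, axis=1, keepdims=True)        # normals, dataset 1
    angle = np.deg2rad(12.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    B = A @ R_true.T                                     # matched normals, dataset 2

    R_est = rotation_from_normals(A, B)
    print("max error:", np.abs(R_est - R_true).max())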

  3. Computer system operation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jae; Lee, Hae Cho; Lee, Ho Yeun; Kim, Young Taek; Lee, Sung Kyu; Park, Jeong Suk; Nam, Ji Wha; Kim, Soon Kon; Yang, Sung Un; Sohn, Jae Min; Moon, Soon Sung; Park, Bong Sik; Lee, Byung Heon; Park, Sun Hee; Kim, Jin Hee; Hwang, Hyeoi Sun; Lee, Hee Ja; Hwang, In A. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1993-12-01

    The report describes the operation and troubleshooting of the main computers and KAERINet. The results of the project are as follows: 1. The operation and troubleshooting of the main computer system (Cyber 170-875, Cyber 960-31, VAX 6320, VAX 11/780). 2. The operation and troubleshooting of KAERINet (PC to host connection, host to host connection, file transfer, electronic mail, X.25, CATV, etc.). 3. The development of applications - Electronic Document Approval and Delivery System, installation of the ORACLE Utility Program. 22 tabs., 12 figs. (Author)

  4. Geological lineament mapping in arid area by semi-automatic extraction from satellite images: example at the El Kseïbat region (Algerian Sahara)

    Energy Technology Data Exchange (ETDEWEB)

    Hammad, N.; Djidel, M.; Maabedi, N.

    2016-07-01

    Geologists in charge of detailed lineament mapping in arid and desert areas face the extent of the land and the abundance of eolian deposits. This study presents a semi-automatic approach for lineament extraction that differs from other methods, such as automatic and manual extraction, by being both fast and objective. It consists of a series of digital processing steps (textural and spatial filtering, binarization by thresholding, mathematical morphology, etc.) applied to a Landsat 7 ETM+ scene. This semi-automatic approach has produced a detailed map of lineaments, while taking account of the tectonic directions recognized in the region. It helps mitigate the effect of dune deposits and meets the specifications of an arid environment. The visual validation of these linear structures, by geoscientists and field data, allowed the identification of the majority of structural lineaments, or at least those confirmed as geological. (Author)
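
    A rough Python analogue of the processing chain (filtering, thresholding, mathematical morphology), run on a synthetic grey-level scene; reading a real Landsat 7 ETM+ band (e.g. with rasterio) and the study's specific textural filters are assumed away.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage import filters, morphology

    # Synthetic grey-level scene with a linear structure, standing in for a
    # Landsat band.
    rng = np.random.default_rng(1)
    scene = rng.normal(120, 10, (200, 200))
    rr = np.arange(200)
    scene[rr, np.clip(rr // 2 + 40, 0, 199)] -= 60     # dark lineament

    # 1) spatial filtering: smooth, then edge enhancement with a Sobel operator
    edges = filters.sobel(ndi.gaussian_filter(scene, 1.5))

    # 2) binarization by thresholding (Otsu)
    binary = edges > filters.threshold_otsu(edges)

    # 3) mathematical morphology: clean up and thin to one-pixel lineaments
    cleaned = morphology.remove_small_objects(binary, min_size=30)
    closed = morphology.binary_closing(cleaned, morphology.disk(2))
    lineaments = morphology.skeletonize(closed)
    print("lineament pixels:", int(lineaments.sum()))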

  5. Semi-automatic algorithm for construction of the left ventricular area variation curve over a complete cardiac cycle.

    Science.gov (United States)

    Melo, Salvador A; Macchiavello, Bruno; Andrade, Marcelino M; Carvalho, João L A; Carvalho, Hervaldo S; Vasconcelos, Daniel F; Berger, Pedro A; da Rocha, Adson F; Nascimento, Francisco A O

    2010-01-15

    Two-dimensional echocardiography (2D-echo) allows the evaluation of cardiac structures and their movements. A wide range of clinical diagnoses are based on the performance of the left ventricle. The evaluation of myocardial function is typically performed by manual segmentation of the ventricular cavity in a series of dynamic images. This process is laborious and operator dependent. The automatic segmentation of the left ventricle in 4-chamber long-axis images during diastole is troublesome, because of the opening of the mitral valve. This work presents a method for segmentation of the left ventricle in dynamic 2D-echo 4-chamber long-axis images over the complete cardiac cycle. The proposed algorithm is based on classic image processing techniques, including time-averaging and wavelet-based denoising, edge enhancement filtering, morphological operations, homotopy modification, and watershed segmentation. The proposed method is semi-automatic, requiring a single user intervention for identification of the position of the mitral valve in the first temporal frame of the video sequence. Image segmentation is performed on a set of dynamic 2D-echo images collected from an examination covering two consecutive cardiac cycles. The proposed method is demonstrated and evaluated on twelve healthy volunteers. The results are quantitatively evaluated using four different metrics, in a comparison with contours manually segmented by a specialist, and with four alternative methods from the literature. The method's intra- and inter-operator variabilities are also evaluated. The proposed method allows the automatic construction of the area variation curve of the left ventricle corresponding to a complete cardiac cycle. This may potentially be used for the identification of several clinical parameters, including the area variation fraction. This parameter could potentially be used for evaluating the global systolic function of the left ventricle.
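
    A toy marker-controlled watershed in the spirit of the pipeline above, using scikit-image on a synthetic frame; the in-cavity marker stands in for the user's single mitral-valve intervention, and the full denoising/homotopy steps of the paper are omitted.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage import filters, segmentation

    # Synthetic frame standing in for a denoised 2D-echo image: a dark
    # ventricular cavity inside brighter myocardium.
    yy, xx = np.mgrid[0:128, 0:128]
    cavity = ((yy - 64) ** 2 / 900 + (xx - 64) ** 2 / 400) < 1
    frame = np.where(cavity, 40.0, 160.0)
    frame += np.random.default_rng(2).normal(0, 5, (128, 128))

    # Edge-enhancement filtering, then marker-controlled watershed: one marker
    # inside the cavity (seeded by the user's click), one in the background.
    gradient = filters.sobel(ndi.gaussian_filter(frame, 2))
    markers = np.zeros_like(frame, dtype=int)
    markers[64, 64] = 1          # inside the cavity
    markers[5, 5] = 2            # background/myocardium
    labels = segmentation.watershed(gradient, markers)
    area = int((labels == 1).sum())
    print("cavity area (pixels):", area)   # one point on the area-variation curve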

  6. SplitRacer - a semi-automatic tool for the analysis and interpretation of teleseismic shear-wave splitting

    Science.gov (United States)

    Reiss, Miriam Christina; Rümpker, Georg

    2017-04-01

    We present a semi-automatic, graphical user interface tool for the analysis and interpretation of teleseismic shear-wave splitting in MATLAB. Shear-wave splitting analysis is a standard tool to infer seismic anisotropy, which is often interpreted as due to lattice-preferred orientation of e.g. mantle minerals or shape-preferred orientation caused by cracks or alternating layers in the lithosphere, and hence provides a direct link to the earth's kinematic processes. The increasing number of permanent stations and temporary experiments results in comprehensive studies of seismic anisotropy world-wide. Their successive comparison with a growing number of global models of mantle flow further advances our understanding of the earth's interior. However, increasingly large data sets pose the inevitable question as to how to process them. Well-established routines and programs are accurate but often slow and impractical for analyzing large amounts of data. Additionally, shear-wave splitting results are seldom evaluated using the same quality criteria, which complicates a straightforward comparison. SplitRacer consists of several processing steps: i) download of data via FDSNWS; ii) direct reading of miniSEED files and an initial screening and categorizing of XKS waveforms using a pre-set SNR threshold; iii) an analysis of the particle motion of selected phases and successive correction of the sensor misalignment based on the long axis of the particle motion; iv) splitting analysis of selected events: seismograms are first rotated into radial and transverse components, then the energy-minimization method is applied, which provides the polarization and delay time of the phase. To estimate errors, the analysis is done for different randomly chosen time windows. v) joint splitting analysis for all events at one station, where the energy content of all phases is inverted simultaneously. This decreases the influence of noise and increases the robustness of the measurement.
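
    The energy-minimization step (iv) can be sketched as a plain grid search: a synthetic radially polarised pulse is split with a known fast axis and delay, then recovered by minimizing transverse energy. All parameter values are illustrative; SplitRacer itself is a MATLAB tool with many more safeguards.

    import numpy as np

    def rot(n, e, angle):
        """Rotate horizontal components by angle (radians)."""
        c, s = np.cos(angle), np.sin(angle)
        return c * n + s * e, -s * n + c * e

    # --- Build a synthetic split XKS phase ---------------------------------
    dt_samp = 0.05                                  # sampling interval (s)
    t = np.arange(0, 30, dt_samp)
    wavelet = np.exp(-((t - 15) ** 2) / 2.0)        # radially polarised pulse
    north, east = wavelet, np.zeros_like(wavelet)   # polarisation = 0 deg

    phi_true, delay_true = np.deg2rad(40), 1.0      # fast axis, delay time
    f, s = rot(north, east, phi_true)
    s = np.roll(s, int(delay_true / dt_samp))       # delay the slow component
    north, east = rot(f, s, -phi_true)

    # --- Energy-minimization grid search ------------------------------------
    best = (None, None, np.inf)
    for phi in np.deg2rad(np.arange(-90, 90, 1)):
        for lag in np.arange(0, 4, dt_samp):
            f, s = rot(north, east, phi)
            s_corr = np.roll(s, -int(lag / dt_samp))    # undo the trial delay
            n_c, e_c = rot(f, s_corr, -phi)
            energy = np.sum(e_c ** 2)                   # transverse energy
            if energy < best[2]:
                best = (np.rad2deg(phi), lag, energy)

    print(f"fast axis ~{best[0]:.0f} deg, delay ~{best[1]:.2f} s")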

  7. Semi-automatic algorithm for construction of the left ventricular area variation curve over a complete cardiac cycle

    Directory of Open Access Journals (Sweden)

    Vasconcelos Daniel F

    2010-01-01

    Abstract. Background: Two-dimensional echocardiography (2D-echo) allows the evaluation of cardiac structures and their movements. A wide range of clinical diagnoses are based on the performance of the left ventricle. The evaluation of myocardial function is typically performed by manual segmentation of the ventricular cavity in a series of dynamic images. This process is laborious and operator dependent. The automatic segmentation of the left ventricle in 4-chamber long-axis images during diastole is troublesome, because of the opening of the mitral valve. Methods: This work presents a method for segmentation of the left ventricle in dynamic 2D-echo 4-chamber long-axis images over the complete cardiac cycle. The proposed algorithm is based on classic image processing techniques, including time-averaging and wavelet-based denoising, edge enhancement filtering, morphological operations, homotopy modification, and watershed segmentation. The proposed method is semi-automatic, requiring a single user intervention for identification of the position of the mitral valve in the first temporal frame of the video sequence. Image segmentation is performed on a set of dynamic 2D-echo images collected from an examination covering two consecutive cardiac cycles. Results: The proposed method is demonstrated and evaluated on twelve healthy volunteers. The results are quantitatively evaluated using four different metrics, in a comparison with contours manually segmented by a specialist, and with four alternative methods from the literature. The method's intra- and inter-operator variabilities are also evaluated. Conclusions: The proposed method allows the automatic construction of the area variation curve of the left ventricle corresponding to a complete cardiac cycle. This may potentially be used for the identification of several clinical parameters, including the area variation fraction. This parameter could potentially be used for evaluating the global systolic function of the left ventricle.

  8. Validation of a semi-automatic protocol for the assessment of the tear meniscus central area based on open-source software

    Science.gov (United States)

    Pena-Verdeal, Hugo; Garcia-Resua, Carlos; Yebra-Pimentel, Eva; Giraldez, Maria J.

    2017-08-01

    Purpose: Different lower tear meniscus parameters can be clinically assessed in dry eye diagnosis. The aim of this study was to propose and analyse the variability of a semi-automatic method for measuring the lower tear meniscus central area (TMCA) using open source software. Material and methods: In a group of 105 subjects, one video of the lower tear meniscus after fluorescein instillation was generated by a digital camera attached to a slit-lamp. A short light beam (3x5 mm) with moderate illumination in the central portion of the meniscus (6 o'clock) was used. Images were extracted from each video by a masked observer. Using an open source software package based on Java (NIH ImageJ), a further observer measured, in a masked and randomized order, the TMCA in the short-light-beam illuminated area by two methods: (1) a manual method, where the TMCA was measured by hand; (2) a semi-automatic method, where TMCA images were transformed into 8-bit binary images, holes inside the resulting shape were filled, and the area of the isolated shape was obtained. Finally, both measurements, manual and semi-automatic, were compared. Results: A paired t-test showed no statistical difference between the results of the two techniques (p = 0.102). Pearson correlation between techniques showed a significant, near-perfect positive correlation (r = 0.99). Conclusions: This study presented a useful tool to objectively measure the frontal central area of the meniscus in photographs with free open source software.
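
    A compact Python rendering of the semi-automatic method's three steps (binarise, fill holes, measure the isolated shape), on a synthetic frame standing in for a slit-lamp video still; the original work performs these steps in NIH ImageJ.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage import filters

    # Synthetic fluorescein frame: a bright meniscus band on a dark background
    # (a real frame would be extracted from the slit-lamp video).
    rng = np.random.default_rng(3)
    frame = rng.normal(20, 3, (120, 160))
    frame[55:70, 40:120] += 80                       # illuminated meniscus area

    # Binarise, fill holes inside the shape, then measure the isolated shape.
    binary = frame > filters.threshold_otsu(frame)
    filled = ndi.binary_fill_holes(binary)
    label, n = ndi.label(filled)
    sizes = ndi.sum(filled, label, range(1, n + 1))
    print("TMCA estimate (pixels):", int(sizes.max()))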

  9. Computational systems chemical biology.

    Science.gov (United States)

    Oprea, Tudor I; May, Elebeoba E; Leitão, Andrei; Tropsha, Alexander

    2011-01-01

    There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole body physiologically based pharmacokinetics (PBPK) continue to evolve. We called this emerging area at the interface between chemical biology and systems biology "systems chemical biology" (SCB) (Nat Chem Biol 3: 447-450, 2007). The overarching goal of computational SCB is to develop tools for integrated chemical-biological data acquisition, filtering and processing, by taking into account relevant information related to interactions between proteins and small molecules, possible metabolic transformations of small molecules, as well as associated information related to genes, networks, small molecules, and, where applicable, mutants and variants of those proteins. There is as yet an unmet need to develop an integrated in silico pharmacology/systems biology continuum that embeds drug-target-clinical outcome (DTCO) triplets, a capability that is vital to the future of chemical biology, pharmacology, and systems biology. Through the development of the SCB approach, scientists will be able to start addressing, in an integrated simulation environment, questions that make the best use of our ever-growing chemical and biological data repositories at the system-wide level. This chapter reviews some of the major research concepts and describes key components that constitute the emerging area of computational systems chemical biology.

  10. Computer security for the computer systems manager

    OpenAIRE

    Helling, William D.

    1982-01-01

    Approved for public release; distribution is unlimited This thesis is a primer on the subject of computer security. It is written for the use of computer systems managers and addresses basic concepts of computer security and risk analysis. An example of the techniques employed by a typical military data processing center is included in the form of the written results of an actual on-site survey. Computer security is defined in the context of its scope and an analysis is made of those ...

  11. Computer memory management system

    Science.gov (United States)

    Kirk, III, Whitson John

    2002-01-01

    A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior, using a coding protocol which describes when relationships should be maintained and when they should be broken. In one aspect, the present invention allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the present invention includes simple-to-use infinite undo/redo functionality, in that it has the capability, through a simple function call, to undo all of the changes made to a data model since the previous `valid state` was noted.
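
    Python's weakref module gives a convenient, if loose, analogue of the strong-versus-breakable links described above; the Node class below is purely illustrative and is not the patented pointer mechanism.

    import weakref

    class Node:
        def __init__(self, name):
            self.name = name
            self.children = []        # strong links: keep children alive
            self.parent = None        # weak link: does not keep the parent alive

        def add(self, child):
            child.parent = weakref.ref(self)   # breakable back-reference
            self.children.append(child)

    root = Node("model")
    leaf = Node("object")
    root.add(leaf)
    print(leaf.parent().name)         # -> "model" while the parent lives

    del root                          # dropping the strong owner...
    print(leaf.parent())              # -> None (in CPython): the weak link
                                      # broke, so the parent was collectible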

  12. Production optimization of {sup 99}Mo/{sup 99m}Tc zirconium molybate gel generators at semi-automatic device: DISIGEG

    Energy Technology Data Exchange (ETDEWEB)

    Monroy-Guzman, F., E-mail: fabiola.monroy@inin.gob.mx [Instituto Nacional de Investigaciones Nucleares, Carretera Mexico-Toluca S/N, La Marquesa, Ocoyoacac, 52750, Estado de Mexico (Mexico); Rivero Gutierrez, T., E-mail: tonatiuh.rivero@inin.gob.mx [Instituto Nacional de Investigaciones Nucleares, Carretera Mexico-Toluca S/N, La Marquesa, Ocoyoacac, 52750, Estado de Mexico (Mexico); Lopez Malpica, I.Z.; Hernandez Cortes, S.; Rojas Nava, P.; Vazquez Maldonado, J.C. [Instituto Nacional de Investigaciones Nucleares, Carretera Mexico-Toluca S/N, La Marquesa, Ocoyoacac, 52750, Estado de Mexico (Mexico); Vazquez, A. [Instituto Mexicano del Petroleo, Eje Central Norte Lazaro Cardenas 152, Col. San Bartolo Atepehuacan, 07730, Mexico D.F. (Mexico)

    2012-01-15

    DISIGEG is a synthesis installation of zirconium {sup 99}Mo-molybdate gels for {sup 99}Mo/{sup 99m}Tc generator production, which has been designed, built and installed at the ININ. The device consists of a synthesis reactor and five systems controlled via keyboard: (1) raw material access, (2) chemical air stirring, (3) gel drying by air and infrared heating, (4) moisture removal and (5) gel extraction. DISIGEG operation is described, and the effects of the drying conditions of zirconium {sup 99}Mo-molybdate gels on {sup 99}Mo/{sup 99m}Tc generator performance were evaluated, as well as some physical-chemical properties of these gels. The results reveal that the temperature, time and air flow applied during the drying process directly affect zirconium {sup 99}Mo-molybdate gel generator performance. All gels prepared have a similar chemical structure, probably constituted by a three-dimensional network based on zirconium pentagonal bipyramids and molybdenum octahedra. Basic structural variations cause a change in gel porosity and permeability, favouring or inhibiting {sup 99m}TcO{sub 4}{sup -} diffusion into the matrix. The {sup 99m}TcO{sub 4}{sup -} eluates produced by {sup 99}Mo/{sup 99m}Tc zirconium {sup 99}Mo-molybdate gel generators prepared in DISIGEG, air dried at 80 °C for 5 h using an air flow of 90 mm, satisfied all the Pharmacopoeia regulations: {sup 99m}Tc yield between 70-75%, {sup 99}Mo breakthrough less than 3 × 10{sup -3}%, radiochemical purities about 97%, and sterile, pyrogen-free eluates with a pH of 6. - Highlights: • {sup 99}Mo/{sup 99m}Tc generators based on {sup 99}Mo-molybdate gels were synthesized in a semi-automatic device. • Generator performance depends on the synthesis conditions of the zirconium {sup 99}Mo-molybdate gel. • {sup 99m}TcO{sub 4}{sup -} diffusion and yield in the generator depend on gel porosity and permeability.

  13. Semi-automatic mapping of fault rocks on a Digital Outcrop Model, Gole Larghe Fault Zone (Southern Alps, Italy)

    Science.gov (United States)

    Mittempergher, Silvia; Vho, Alice; Bistacchi, Andrea

    2016-04-01

    A quantitative analysis of fault-rock distribution in outcrops of exhumed fault zones is of fundamental importance for studies of fault zone architecture, fault and earthquake mechanics, and fluid circulation. We present a semi-automatic workflow for fault-rock mapping on a Digital Outcrop Model (DOM), developed on the Gole Larghe Fault Zone (GLFZ), a well exposed strike-slip fault in the Adamello batholith (Italian Southern Alps). The GLFZ has been exhumed from ca. 8-10 km depth, and consists of hundreds of individual seismogenic slip surfaces lined by green cataclasites (crushed wall rocks cemented by hydrothermal epidote and K-feldspar) and black pseudotachylytes (solidified frictional melts, considered a marker for seismic slip). A digital model of selected outcrop exposures was reconstructed with photogrammetric techniques, using a large number of high-resolution digital photographs processed with the VisualSFM software. The resulting DOM has a resolution of up to 0.2 mm/pixel. Most of the outcrop was imaged with photographs each covering a 1 x 1 m2 area, while selected structural features, such as sidewall ripouts or stepovers, were covered with higher-resolution images of 30 x 40 cm2 areas. Image processing algorithms were preliminarily tested using the ImageJ-Fiji package, then a workflow in Matlab was developed to process a large collection of images sequentially. Particularly in the detailed 30 x 40 cm images, cataclasites and hydrothermal veins were successfully identified using spectral analysis in RGB and HSV color spaces. This allows mapping the network of cataclasites and veins which provided the pathway for hydrothermal fluid circulation, and also the volume of mineralization, since we are able to measure the thickness of cataclasites and veins on the outcrop surface. The spectral signature of pseudotachylyte veins is indistinguishable from that of biotite grains in the wall rock (tonalite), so we tested morphological analysis tools to discriminate between the two.
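
    A minimal version of the HSV-space spectral classification, assuming a synthetic RGB tile and an illustrative hue/saturation window for the green cataclasites; the 0.2 mm/pixel conversion mirrors the DOM resolution quoted above, but the thresholds are invented for the demo.

    import numpy as np
    from skimage import color

    # Synthetic RGB tile standing in for a 30 x 40 cm DOM image patch
    # (real patches would be loaded with skimage.io.imread).
    rng = np.random.default_rng(4)
    tile = np.full((100, 100, 3), (0.55, 0.55, 0.55))     # grey tonalite
    tile[40:60, :] = (0.35, 0.60, 0.30)                   # greenish cataclasite band
    tile += rng.normal(0, 0.02, tile.shape)

    hsv = color.rgb2hsv(np.clip(tile, 0, 1))
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]

    # Spectral classification in HSV space: green hues with enough saturation.
    cataclasite = (h > 0.20) & (h < 0.45) & (s > 0.15)
    print(f"cataclasite covers {cataclasite.mean():.1%} of the tile")

    # Thickness proxy: classified pixels per column, converted via the
    # 0.2 mm/pixel DOM resolution quoted in the record.
    thickness_mm = cataclasite.sum(axis=0) * 0.2
    print("mean band thickness: %.1f mm" % thickness_mm.mean())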

  14. The Computational Sensorimotor Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Computational Sensorimotor Systems Lab focuses on the exploration, analysis, modeling and implementation of biological sensorimotor systems for both scientific...

  15. Secure computing on reconfigurable systems

    NARCIS (Netherlands)

    Fernandes Chaves, R.J.

    2007-01-01

    This thesis proposes a Secure Computing Module (SCM) for reconfigurable computing systems. The SCM provides a protected and reliable computational environment, where data security and protection against malicious attacks on the system are assured. The SCM is strongly based on encryption algorithms and on the

  16. Multi-parametric (ADC/PWI/T2-w) image fusion approach for accurate semi-automatic segmentation of tumorous regions in glioblastoma multiforme.

    Science.gov (United States)

    Fathi Kazerooni, Anahita; Mohseni, Meysam; Rezaei, Sahar; Bakhshandehpour, Gholamreza; Saligheh Rad, Hamidreza

    2015-02-01

    Glioblastoma multiforme (GBM) brain tumor is heterogeneous in nature, so its quantification depends on how to accurately segment different parts of the tumor, i.e. viable tumor, edema and necrosis. This procedure becomes more effective when metabolic and functional information, provided by physiological magnetic resonance (MR) imaging modalities, like diffusion-weighted-imaging (DWI) and perfusion-weighted-imaging (PWI), is incorporated with the anatomical magnetic resonance imaging (MRI). In this preliminary tumor quantification work, the idea is to characterize different regions of GBM tumors in an MRI-based semi-automatic multi-parametric approach to achieve more accurate characterization of pathogenic regions. For this purpose, three MR sequences, namely T2-weighted imaging (anatomical MR imaging), PWI and DWI of thirteen GBM patients, were acquired. To enhance the delineation of the boundaries of each pathogenic region (peri-tumoral edema, viable tumor and necrosis), the spatial fuzzy C-means algorithm is combined with the region growing method. The results show that exploiting the multi-parametric approach along with the proposed semi-automatic segmentation method can differentiate various tumorous regions with over 80 % sensitivity, specificity and dice score. The proposed MRI-based multi-parametric segmentation approach has the potential to accurately segment tumorous regions, leading to an efficient design of the pre-surgical treatment planning.
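
    A plain fuzzy C-means clustering sketch on toy multi-parametric "voxels" (T2/ADC/rCBV-like features with invented values); the paper's method additionally imposes spatial regularization and combines the result with region growing, neither of which is shown here.

    import numpy as np

    def fuzzy_cmeans(x, c=3, m=2.0, iters=100, seed=0):
        """Plain fuzzy C-means on feature vectors x (n_samples, n_features)."""
        rng = np.random.default_rng(seed)
        u = rng.random((len(x), c))
        u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships
        for _ in range(iters):
            w = u ** m
            centers = (w.T @ x) / w.sum(axis=0)[:, None]
            d = np.linalg.norm(x[:, None, :] - centers[None], axis=2) + 1e-9
            u = 1.0 / (d ** (2 / (m - 1)))
            u /= u.sum(axis=1, keepdims=True)
        return u, centers

    # Toy multi-parametric voxels: rows are (T2 signal, ADC, rCBV) — values
    # are illustrative only, not calibrated to real GBM data.
    rng = np.random.default_rng(5)
    edema    = rng.normal([0.8, 0.9, 0.3], 0.05, (100, 3))
    tumor    = rng.normal([0.6, 0.5, 0.8], 0.05, (100, 3))
    necrosis = rng.normal([0.4, 1.0, 0.1], 0.05, (100, 3))
    voxels = np.vstack([edema, tumor, necrosis])

    u, centers = fuzzy_cmeans(voxels, c=3)
    labels = u.argmax(axis=1)
    print("cluster sizes:", np.bincount(labels))   # ~100 voxels per tissue class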

  17. Computer systems a programmer's perspective

    CERN Document Server

    Bryant, Randal E

    2016-01-01

    Computer systems: A Programmer’s Perspective explains the underlying elements common among all computer systems and how they affect general application performance. Written from the programmer’s perspective, this book strives to teach readers how understanding basic elements of computer systems and executing real practice can lead them to create better programs. Spanning across computer science themes such as hardware architecture, the operating system, and systems software, the Third Edition serves as a comprehensive introduction to programming. This book strives to create programmers who understand all elements of computer systems and will be able to engage in any application of the field--from fixing faulty software, to writing more capable programs, to avoiding common flaws. It lays the groundwork for readers to delve into more intensive topics such as computer architecture, embedded systems, and cybersecurity. This book focuses on systems that execute an x86-64 machine code, and recommends th...

  18. Threats to Computer Systems

    Science.gov (United States)

    1973-03-01

    subjects and objects of attacks contribute to the uniqueness of computer-related crime. For example, as the cashless, checkless society approaches... advancing computer technology and security methods, and proliferation of computers in bringing about the paperless society. The universal use of... organizations do to society. Jerry Schneider, one of the known perpetrators, said that he was motivated to perform his acts to make money, for the

  19. Resource Management in Computing Systems

    OpenAIRE

    Amani, Payam

    2017-01-01

    Resource management is an essential building block of any modern computer and communication network. In this thesis, the results of our research in the following two tracks are summarized in four papers. The first track includes three papers and covers modeling, prediction and control for multi-tier computing systems. In the first paper, a NARX-based multi-step-ahead response time predictor for single server queuing systems is presented which can be applied to CPU-constrained computing system...

  20. Fusion of dynamic contrast-enhanced magnetic resonance mammography at 3.0T with X-ray mammograms: pilot study evaluation using dedicated semi-automatic registration software.

    Science.gov (United States)

    Dietzel, Matthias; Hopp, Torsten; Ruiter, Nicole; Zoubi, Ramy; Runnebaum, Ingo B; Kaiser, Werner A; Baltzer, Pascal A T

    2011-08-01

    To evaluate the semi-automatic image registration accuracy of X-ray mammography (XR-M) with high-resolution high-field (3.0 T) MR mammography (MR-M) in an initial pilot study. MR-M was acquired on a high-field clinical scanner at 3.0 T (T1-weighted 3D VIBE ± Gd). XR-M was obtained with state-of-the-art full-field digital systems. Seven patients with clearly delineable mass lesions >10 mm both in XR-M and MR-M were enrolled (exclusion criteria: previous breast surgery; surgical intervention between XR-M and MR-M). XR-M and MR-M were matched using a dedicated image-registration algorithm allowing semi-automatic non-linear deformation of MR-M based on finite-element modeling. To identify registration errors (RE), a virtual craniocaudal 2D mammogram was calculated by the software from MR-M (with and without Gadodiamide/Gd) and matched with the corresponding XR-M. To quantify REs, the geometric centers of the lesions in the virtual vs. conventional mammograms were subtracted. The robustness of registration was quantified by registering XR-Ms to both MR-Ms, with and without Gadodiamide. Image registration was performed successfully for all patients. The overall RE was 8.2 mm (1 min after Gd; confidence interval/CI: 2.0-14.4 mm, standard deviation/SD: 6.7 mm) vs. 8.9 mm (no Gd; CI: 4.0-13.9 mm, SD: 5.4 mm). The mean difference between pre- vs. post-contrast was 0.7 mm (SD: 1.9 mm). Image registration of high-field 3.0 T MR mammography with X-ray mammography is feasible. For this study, applying a high-resolution protocol at 3.0 T, the registration was robust and the overall registration error was sufficient for clinical application. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  1. Digi-Clima Grid: image processing and distributed computing for recovering historical climate data

    Directory of Open Access Journals (Sweden)

    Sergio Nesmachnow

    2015-12-01

    This article describes the Digi-Clima Grid project, whose main goals are to design and implement semi-automatic techniques for digitalizing and recovering historical climate records, applying parallel computing techniques over distributed computing infrastructures. The specific tool developed for image processing is described, and the implementation over grid and cloud infrastructures is reported. An experimental analysis over institutional and volunteer-based grid/cloud distributed systems demonstrates that the proposed approach is an efficient tool for recovering historical climate data. The parallel implementations allow the processing load to be distributed, achieving good speedup values.
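
    On a single machine the same task-parallel pattern can be sketched with a process pool; the process_sheet stub and file names below are hypothetical, and the project itself targets grid/cloud middleware rather than multiprocessing.

    from multiprocessing import Pool

    def process_sheet(task):
        """Stand-in for the per-image digitalisation routine (segmentation,
        curve extraction, ...); real sheets would be image files, not lists."""
        name, pixels = task
        return name, sum(pixels) / len(pixels)     # e.g. a mean-intensity stub

    if __name__ == "__main__":
        # Each scanned climate record is an independent task, so the load
        # distributes naturally over local workers (or grid/cloud nodes).
        tasks = [(f"sheet_{i:04d}.png", list(range(i, i + 256)))
                 for i in range(64)]
        with Pool(processes=4) as pool:
            results = pool.map(process_sheet, tasks)
        print(f"processed {len(results)} records in parallel")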

  2. Capability-based computer systems

    CERN Document Server

    Levy, Henry M

    2014-01-01

    Capability-Based Computer Systems focuses on computer programs and their capabilities. The text first elaborates capability- and object-based system concepts, including capability-based systems, object-based approach, and summary. The book then describes early descriptor architectures and explains the Burroughs B5000, Rice University Computer, and Basic Language Machine. The text also focuses on early capability architectures. Dennis and Van Horn's Supervisor; CAL-TSS System; MIT PDP-1 Timesharing System; and Chicago Magic Number Machine are discussed. The book then describes Plessey System 25

  3. Risks in Networked Computer Systems

    OpenAIRE

    Klingsheim, André N.

    2008-01-01

    Networked computer systems yield great value to businesses and governments, but also create risks. The eight papers in this thesis highlight vulnerabilities in computer systems that lead to security and privacy risks. A broad range of systems is discussed in this thesis: Norwegian online banking systems, the Norwegian Automated Teller Machine (ATM) system during the 90's, mobile phones, web applications, and wireless networks. One paper also comments on legal risks to bank cust...

  4. Computer Security Systems Enable Access.

    Science.gov (United States)

    Riggen, Gary

    1989-01-01

    A good security system enables access and protects information from damage or tampering, but the most important aspects of a security system aren't technical. A security procedures manual addresses the human element of computer security. (MLW)

  5. Digital curation: a proposal of a semi-automatic digital object selection-based model for digital curation in Big Data environments

    Directory of Open Access Journals (Sweden)

    Moisés Lima Dutra

    2016-08-01

    Introduction: This work presents a new approach to digital curation from a Big Data perspective. Objective: The objective is to propose digital curation techniques for selecting and evaluating digital objects that take into account the volume, velocity, variety, veracity, and value of the data collected from multiple knowledge domains. Methodology: This is exploratory research of an applied nature, which addresses the research problem in a qualitative way. Heuristics allow this semi-automatic process to be carried out either by human curators or by software agents. Results: As a result, a model was proposed for searching, processing, evaluating and selecting digital objects to be processed by digital curation. Conclusions: It is possible to use Big Data environments as a source of information resources for digital curation; moreover, Big Data techniques and tools can support the search and selection of information resources by digital curation.

  6. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005.  From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems.  These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems.  This book brings together a group of outsta

  7. Choosing the right computer system.

    Science.gov (United States)

    Freydberg, B K; Seltzer, S M; Walker, B

    1999-08-01

    We are living in a world where virtually any information you desire can be acquired in a matter of moments with the click of a mouse. The computer is a ubiquitous fixture in elementary schools, universities, small companies, large companies, and homes. Many dental offices have incorporated computers as an integral part of their management systems. However, the role of the computer is expanding in the dental office as new hardware and software advancements emerge. The growing popularity of digital radiography and photography is making the possibility of a completely digital patient record more desirable. The trend for expanding the role of dental office computer systems is reflected in the increased number of companies that offer computer packages. The purchase of one of these new systems represents a significant commitment on the part of the dentist and staff. Not only do the systems have a substantial price tag, but they require a great deal of time and effort to become fully integrated into the daily office routine. To help the reader gain some clarity on the blur of new hardware and software available, I have enlisted the help of three recognized authorities on the subject of office organization and computer systems. This article is not intended to provide a ranking of features and shortcomings of specific products that are available, but rather to present a process by which the reader might be able to make better choices when selecting or upgrading a computer system.

  8. Students "Hacking" School Computer Systems

    Science.gov (United States)

    Stover, Del

    2005-01-01

    This article deals with students hacking school computer systems. School districts are getting tough with students "hacking" into school computers to change grades, poke through files, or just pit their high-tech skills against district security. Dozens of students have been prosecuted recently under state laws on identity theft and unauthorized…

  9. User computer system pilot project

    Energy Technology Data Exchange (ETDEWEB)

    Eimutis, E.C.

    1989-09-06

    The User Computer System (UCS) is a general purpose unclassified, nonproduction system for Mound users. The UCS pilot project was successfully completed, and the system currently has more than 250 users. Over 100 tables were installed on the UCS for use by subscribers, including tables containing data on employees, budgets, and purchasing. In addition, a UCS training course was developed and implemented.

  10. Operating systems. [of computers

    Science.gov (United States)

    Denning, P. J.; Brown, R. L.

    1984-01-01

    A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. The software semaphore is the mechanism controlling primitive processes that must be synchronized. At higher levels lie, in rising order, the access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.
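
    The semaphore mechanism mentioned above is easy to demonstrate with Python's threading module; the "device" being guarded is simulated, and at most two of the five processes may hold it at once.

    import threading
    import time

    # A software semaphore synchronising 'primitive processes': at most two
    # worker threads may hold the simulated device at any time.
    device = threading.Semaphore(2)

    def worker(i):
        with device:                 # P (acquire): blocks if two holders already
            print(f"process {i} is using the device")
            time.sleep(0.1)          # pretend to perform I/O
                                     # V (release) happens on leaving the block

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()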

  11. Computer System Design System-on-Chip

    CERN Document Server

    Flynn, Michael J

    2011-01-01

    The next generation of computer system designers will be less concerned about details of processors and memories, and more concerned about the elements of a system tailored to particular applications. These designers will have a fundamental knowledge of processors and other elements in the system, but the success of their design will depend on the skills in making system-level tradeoffs that optimize the cost, performance and other attributes to meet application requirements. This book provides a new treatment of computer system design, particularly for System-on-Chip (SOC), which addresses th

  12. Survivable Avionics Computer System.

    Science.gov (United States)

    1980-11-01

    T. Hall, AFWAL/AAA-1. Contract F33-615-80-C-1014, SRI Project 1314. Approved by: Charles J. Shoens, Director, Systems Techniques Laboratory; David A...

  13. Management Information System & Computer Applications

    OpenAIRE

    Sreeramana Aithal

    2017-01-01

    The book contains the following chapters: Chapter 1: Introduction to Management Information Systems; Chapter 2: Structure of MIS; Chapter 3: Planning for MIS; Chapter 4: Introduction to Computers; Chapter 5: Decision Making Process in MIS; Chapter 6: Approaches for System Development; Chapter 7: Form Design; Chapter 8: Charting Techniques; Chapter 9: System Analysis & Design; Chapter 10: Applications of MIS in Functional Areas; Chapter 11: System Implement...

  14. IMAGE COMPLETION BY SPATIAL-CONTEXTUAL CORRELATION FRAMEWORK USING AUTOMATIC AND SEMI-AUTOMATIC SELECTION OF HOLE REGION

    Directory of Open Access Journals (Sweden)

    D. Beulah David

    2015-08-01

    An image inpainting scheme has been proposed that utilizes a spatial-contextual information approach for image completion. For texture images, the domain to be inpainted is smooth; it can be inpainted using exemplar and variational methods. In the proposed method, the regions to be removed are segmented and pulled out of the image, where they are termed the hole. As filling the hole is an unsupervised task, we compute the pixels in the hole using spatial-contextual correlations. The method's efficacy is demonstrated on real images.
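
    For flavour, OpenCV's diffusion-based (Telea) inpainting can fill a hand-defined hole; this is a generic stand-in, not the spatial-contextual correlation framework proposed in the paper, where the hole itself is obtained by segmentation.

    import numpy as np
    import cv2

    # Synthetic image with a hole (the mask is defined by hand here; in the
    # paper the hole region comes from segmentation).
    img = np.full((100, 100, 3), 180, np.uint8)
    cv2.circle(img, (30, 30), 12, (40, 90, 200), -1)   # some "texture"
    mask = np.zeros((100, 100), np.uint8)
    cv2.rectangle(mask, (45, 45), (70, 70), 255, -1)   # the hole to fill
    img[mask == 255] = 0

    restored = cv2.inpaint(img, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
    print("hole filled:", bool((restored[mask == 255] > 0).any()))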

  15. The ALICE Magnetic System Computation.

    CERN Document Server

    Klempt, W; CERN. Geneva; Swoboda, Detlef

    1995-01-01

    In this note we present the first results from the ALICE magnetic system computation, performed in three dimensions with the Vector Fields TOSCA code (version 6.5) [1]. For the calculations we used the IBM RISC System/6000-370 and 6000-550 machines combined in the CERN PaRC UNIX cluster.

  16. Computational Intelligence for Engineering Systems

    CERN Document Server

    Madureira, A; Vale, Zita

    2011-01-01

    "Computational Intelligence for Engineering Systems" provides an overview and original analysis of new developments and advances in several areas of computational intelligence. Computational Intelligence have become the road-map for engineers to develop and analyze novel techniques to solve problems in basic sciences (such as physics, chemistry and biology) and engineering, environmental, life and social sciences. The contributions are written by international experts, who provide up-to-date aspects of the topics discussed and present recent, original insights into their own experien

  17. Opportunity for Realizing Ideal Computing System using Cloud Computing Model

    OpenAIRE

    Sreeramana Aithal; Vaikunth Pai T

    2017-01-01

    An ideal computing system is a computing system with ideal characteristics. The major components and performance characteristics of such a hypothetical system can be studied as a model with predicted input, output, system and environmental characteristics, using the identified objectives of computing, which can be used on any platform and any type of computing system, and for application automation, without making modifications in the form of structure, hardware, and software coding by an exte...

  18. A new generic method for the semi-automatic extraction of river and road networks in low and mid-resolution satellite images

    Energy Technology Data Exchange (ETDEWEB)

    Grazzini, Jacopo [Los Alamos National Laboratory]; Dillard, Scott [PNNL]; Soille, Pierre [EC JRC]

    2010-10-21

    This paper addresses the problem of semi-automatic extraction of road or hydrographic networks in satellite images. For that purpose, we propose an approach combining concepts arising from mathematical morphology and hydrology. The method exploits both geometrical and topological characteristics of rivers/roads and their tributaries in order to reconstruct the complete networks. It assumes that the images satisfy the following two general assumptions, which are the minimum conditions for a road/river network to be identifiable and are usually verified in low- to mid-resolution satellite images: (i) visual constraint: most pixels composing the network have a similar spectral signature that is distinguishable from most of the surrounding areas; (ii) geometric constraint: a line is a region that is relatively long and narrow, compared with other objects in the image. While this approach fully exploits local (roads/rivers are modeled as elongated regions with a smooth spectral signature in the image and a maximum width) and global (they are structured like a tree) characteristics of the networks, further directional information about the image structures is incorporated. Namely, an appropriate anisotropic metric is designed using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Then, geodesic propagation from a given network seed with this metric is combined with hydrological operators for overland flow simulation to extract the paths which contain most line evidence and to identify them with the target network.
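
    The gradient structure tensor at the heart of the anisotropic metric is straightforward to compute; the sketch below derives a dominant orientation and a coherence measure on a synthetic band, under the assumption that these would then feed a geodesic (least-cost) propagation stage.

    import numpy as np
    from scipy import ndimage as ndi

    def structure_tensor_orientation(img, sigma=2.0):
        """Dominant local orientation from the gradient structure tensor."""
        gy, gx = np.gradient(img.astype(float))
        # Smooth the tensor components over a local neighbourhood.
        Jxx = ndi.gaussian_filter(gx * gx, sigma)
        Jxy = ndi.gaussian_filter(gx * gy, sigma)
        Jyy = ndi.gaussian_filter(gy * gy, sigma)
        # Closed-form eigen-analysis of the 2x2 tensor [[Jxx, Jxy], [Jxy, Jyy]]:
        theta = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)   # direction of max change
        coherence = np.sqrt((Jxx - Jyy) ** 2 + 4 * Jxy ** 2) / (Jxx + Jyy + 1e-9)
        return theta, coherence      # these would feed an anisotropic metric

    # Synthetic band standing in for a narrow, elongated river/road region.
    img = np.zeros((64, 64))
    img[30:34, :] = 1.0
    theta, coh = structure_tensor_orientation(img)
    print("coherence on the line:", float(coh[32, 32].round(2)))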

  19. Morphological criteria of feminine upper eyelashes, quantified by a new semi-automatized image analysis: Application to the objective assessment of mascaras.

    Science.gov (United States)

    Shaiek, A; Flament, F; François, G; Vicic, M; Cointereau-Chardron, S; Curval, E; Canevet-Zaida, S; Coubard, O; Idelcaid, Y

    2018-02-01

    The wide diversity of feminine eyelashes in shape, length, and curvature makes them a complex domain that remains to be quantified in vivo, together with the changes brought by the application of mascaras that are visually assessed by women themselves or by make-up experts. Dedicated software was developed to semi-automatically extract and quantify, from digital images (frontal and lateral pictures), the major parameters of the feminine eyelashes of Mexican and Caucasian women, and to record the changes brought by the application of various mascaras and their brushes, whether self-applied or professionally applied. The diversity of feminine eyelashes appears to be a major influencing factor in the application of mascaras and their results. Eight marketed mascaras and their respective brushes were tested, and their quantitative profiles, in terms of coverage, morphology, and curvature, were assessed. Standard applications by trained aestheticians led to higher and more homogeneous deposits of mascara, as compared to those resulting from self-applications. The developed software appears to be a valuable tool both for quantifying the major characteristics of eyelashes and for assessing the make-up results brought by mascaras and their associated brushes. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. Image-based guidance of percutaneous abdomen intervention based on markers for semi-automatic rigid registration.

    Science.gov (United States)

    Spinczyk, Dominik

    2014-12-01

    For percutaneous abdominal interventions (e.g. liver radiofrequency (RF) tumor ablation, liver biopsy), surgeons lack real-time visual feedback about the location of the needle on planning images, typically computed tomography (CT). One difficulty lies in tracking and synchronizing both the tool movement and the patient's breathing motion. The aim was to verify the correspondence between the rigid-registration fiducial registration error signal and the breathing phase. Markers designed to be clearly visible both in the planning CT and on the patient during the intervention are proposed. Registration and breathing synchronization are then performed by a point-based approach. The method was tested in a clinical environment on 10 patients with liver cancer using 3D abdominal CT in the exhale position. The median rigid fiducial registration error (FRE) over the breathing cycle was used as a criterion to distinguish the inhale and exhale phases. A correlation between breathing phase and FRE value was observed for every patient. We obtained a mean median FRE of 9.37 mm in exhale positions and 15.56 mm over the whole breathing cycle. The presented real-time approach, based on FRE calculation, was integrated into the clinical pipeline and can help select the best respiratory phase for needle insertion in percutaneous abdominal interventions where only 3D CT is performed. Moreover, the method allows semi-automatic rigid registration to establish the correspondence between the preoperative patient anatomical model and the patient position.
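
    The median-FRE criterion can be illustrated on a simulated FRE signal over two breathing cycles (the sinusoid and its parameters are invented, loosely matching the magnitudes reported above):

    import numpy as np

    rng = np.random.default_rng(6)

    # Simulated fiducial registration error (FRE) over two breathing cycles:
    # markers were registered to an exhale CT, so FRE grows toward inhale.
    t = np.linspace(0, 10, 200)                       # seconds
    fre = 12.0 + 3.5 * np.sin(2 * np.pi * t / 5.0) + rng.normal(0, 0.4, t.size)

    # Criterion from the paper: the median FRE over the cycle separates the
    # two phases; below-median samples are taken as near-exhale.
    threshold = np.median(fre)
    exhale = fre < threshold
    print(f"median FRE: {threshold:.2f} mm")
    print(f"mean FRE near exhale: {fre[exhale].mean():.2f} mm "
          f"vs whole cycle: {fre.mean():.2f} mm")     # gate needle insertion here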

  1. A semi-automatic method to extract canal pathways in 3D micro-CT images of Octocorals.

    Directory of Open Access Journals (Sweden)

    Alfredo Morales Pinzón

    The long-term goal of our study is to understand the internal organization of the octocoral stem canals, as well as their physiological and functional role in the growth of the colonies, and finally to assess the influence of climatic changes on this species. Here we focus on imaging tools, namely acquisition and processing of three-dimensional high-resolution images, with emphasis on automated extraction of canal pathways. Our aim was to evaluate the feasibility of the whole process, to point out and solve - if possible - technical problems related to the specimen conditioning, to determine the best acquisition parameters and to develop the necessary image-processing algorithms. The pathways extracted are expected to facilitate the structural analysis of the colonies, namely to help observe the distribution, formation and number of canals along the colony. Five volumetric images of Muricea muricata specimens were successfully acquired by X-ray computed tomography with spatial resolution ranging from 4.5 to 25 micrometers. The success mainly depended on specimen immobilization. More than [Formula: see text] of the canals were successfully detected and tracked by the image-processing method developed. The three-dimensional representation of the canal network thus obtained was generated for the first time, without the need of histological or other destructive methods. Several canal patterns were observed. Although most of them were simple, i.e. only followed the main branch or "turned" into a secondary branch, many others bifurcated or fused. A majority of bifurcations were observed at branching points. However, some canals appeared and/or ended anywhere along a branch. At the tip of a branch, all canals fused into a unique chamber. Three-dimensional high-resolution tomographic imaging gives a non-destructive insight into the coral ultrastructure and helps in understanding the organization of the canal network. Advanced image-processing techniques greatly

  2. COMPUTER ASSISTED INVENTORY CONTROL SYSTEM ...

    African Journals Online (AJOL)

    COMPUTER ASSISTED INVENTORY CONTROL SYSTEM. Alebachew Dessalegn and R. N. Roy. Department of Mechanical Engineering, Addis Ababa University. ABSTRACT. The basic purpose of holding inventories is to provide an essential decoupling between demand and the unequal flow rate of materials in a supply ...

  3. Semi-automatic digital image impact assessments of Maize Lethal Necrosis (MLN) at the leaf, whole plant and plot levels

    Science.gov (United States)

    Kefauver, S. C.; Vergara-Diaz, O.; El-Haddad, G.; Das, B.; Suresh, L. M.; Cairns, J.; Araus, J. L.

    2016-12-01

    Maize is the top staple crop for low-income populations in Sub-Saharan Africa and is currently suffering from the appearance of new diseases, which, together with increased abiotic stresses from climate change, are challenging the very sustainability of African societies. Current constraints in field phenotyping remain a major bottleneck for future breeding advances, but RGB-based High-Throughput Phenotyping Platforms (HTPPs) have demonstrated promise for rapidly developing both disease-resistant and weather-resilient crops. RGB HTPPs have proven cost-effective in studies assessing the effect of abiotic stresses, but have yet to be fully exploited to phenotype disease resistance. RGB image quantification using different alternate color space transforms, including BreedPix indices, was implemented as a FIJI plug-in (http://fiji.sc/Fiji; http://github.com/george-haddad/CIMMYT). For validation, Maize Lethal Necrosis (MLN) visual impact assessments were scored on a scale from 1 to 5 by the resident CIMMYT plant pathologist, with 1 being MLN resistant (healthy plants with no visual symptoms) and 5 being totally susceptible (entirely necrotic with no green tissue). Individual RGB vegetation indexes outperformed NDVI (Normalized Difference Vegetation Index), with correlation values up to 0.72, compared to 0.56 for NDVI. Specifically, Hue, Green Area (GA), and the Normalized Green Red Difference Index (NGRDI) consistently outperformed NDVI in estimating MLN disease severity. In multivariate linear and various decision tree models, Necrosis Area (NA) and Chlorosis Area (CA), calculated similarly to GA and GGA from BreedPix, also contributed significantly to estimating MLN impact scores. Analyses using UAS (Unmanned Aerial Systems) imagery, proximal field photography of plants and plots, and flatbed scans of individual leaves have produced similar results, demonstrating the robustness of these cost-effective RGB indexes. Furthermore, the application of the indices using
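
    NGRDI, one of the indices named above, is a one-liner on an RGB array; the toy "plot" below uses invented colour values for healthy versus necrotic tissue simply to show the sign convention.

    import numpy as np

    def ngrdi(rgb):
        """Normalized Green Red Difference Index, (G - R) / (G + R)."""
        r = rgb[..., 0].astype(float)
        g = rgb[..., 1].astype(float)
        return (g - r) / (g + r + 1e-9)

    # Toy plot image: healthy green canopy on the left, necrotic (reddish-
    # brown) tissue on the right — values illustrative only.
    plot = np.zeros((10, 20, 3), np.uint8)
    plot[:, :10] = (60, 140, 50)       # green, MLN-resistant look
    plot[:, 10:] = (150, 90, 40)       # necrotic look

    idx = ngrdi(plot)
    print(f"mean NGRDI healthy side: {idx[:, :10].mean():+.2f}")   # positive
    print(f"mean NGRDI necrotic side: {idx[:, 10:].mean():+.2f}")  # negative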

  4. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  5. Conceptual Design and Simulation of a Semi-Automatic Cell for the Washing and Preparation of a Corpse Prior to an Islamic Burial

    Directory of Open Access Journals (Sweden)

    A. Meghdari

    2012-07-01

    Washing the corpse and dressing the body prior to burial is an act of love and necessity in many religions. Applying robotics and automation technologies to the washing and preparation of a deceased Muslim in accordance with the Islamic Shari'at laws has been the challenging foundation of this research. With an increasing annual population growth resulting in an increase in the number of deaths (historically and/or immediately after a national disaster), automating part of this procedure to increase the speed of operation, reduce the health risks to the personnel of the washing rooms ("Ghassalkhaneh") at the cemeteries, and enhance their quality of life have been the primary objectives of this project. We have named and patented this semi-automated corpse preparation machine the "PaakShooy" ("پاک شوی" in Persian/Farsi), which means purifying the deceased. The whole process is composed of three operational units lined up in series: the automatic washing chamber, the drying cell and the semi-automatic shrouding table. This paper covers an introductory treatment of the subject in Islam, the conceptual design of various machines and mechanisms to automate the important tasks in accordance with Islamic laws, and the final detailed design, graphic simulation and animation of the PaakShooy machine. In doing so, consultation with Islamic scholars has been a priority from the beginning of the project to the end, and several Fatwas have been issued by high-ranking Ayatollahs in support of the project. With a few modifications, the semi-automated PaakShooy machine may be updated to conform to other religions/customs.

  6. Sensitivity-to-change and validity of semi-automatic joint space width measurements in hand osteoarthritis: a follow-up study.

    Science.gov (United States)

    Damman, W; Kortekaas, M C; Stoel, B C; van 't Klooster, R; Wolterbeek, R; Rosendaal, F R; Kloppenburg, M

    2016-07-01

    To assess sensitivity-to-change and validity of longitudinal quantitative semi-automatic joint space width (JSW) measurements and to compare this method with semi-quantitative joint space narrowing (JSN) scoring in hand osteoarthritis (OA) patients. Baseline and 2-year follow-up radiographs of 56 hand OA patients (mean age 62 years, 86% women) were used. JSN was scored 0-3 using the Osteoarthritis Research Society International atlas and JSW was quantified in millimetres (mm) in the second to fifth distal, proximal interphalangeal and metacarpal joints (DIPJs, PIPJs, MCPJs). Sensitivity-to-change was evaluated by calculating Standardized Response Means (SRMs). Change in JSW or JSN above the Smallest Detectable Difference (SDD) defined progression on joint level. To assess construct validity, progressed joints were compared by cross-tabulation and by associating baseline ultrasound variables with progression (using generalized estimating equations, adjusting for age and sex). The JSW method detected statistically significant mean changes over 2.6 years (-0.027 mm (95%CI -0.01; -0.04), -0.024 mm (-0.01; -0.03), -0.021 mm (-0.01; -0.03) for DIPJs, PIPJs, MCPJs, respectively). Sensitivity-to-change was low (SRMs: 0.174, 0.168, 0.211, respectively). 9.1% (121/1336) of joints progressed in JSW, but 3.6% (48/1336) widened. 83 (6.2%) joints progressed in JSW only, 36 (2.7%) in JSN only and 37 (2.8%) in both methods. Progression in JSW showed weaker associations with baseline inflammatory ultrasound features than progression in JSN. Assessment of progression in hand OA defined by JSW measurements is possible, but performs less well than progression defined by JSN scoring. Therefore, the value of JSW measurements in hand OA clinical trials remains questionable. Copyright © 2016 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
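
    The sensitivity-to-change statistic used here, the Standardized Response Mean, is simply the mean change divided by its standard deviation; the sketch below applies it to synthetic JSW readings whose drift and noise are chosen only to echo the order of magnitude reported above.

    import numpy as np

    def standardized_response_mean(baseline, followup):
        """SRM = mean(change) / SD(change); a sensitivity-to-change metric."""
        change = followup - baseline
        return change.mean() / change.std(ddof=1)

    # Illustrative JSW readings (mm) for one joint group at baseline and at
    # 2.6-year follow-up — synthetic values, not the study data.
    rng = np.random.default_rng(7)
    baseline = rng.normal(1.0, 0.25, 56)
    followup = baseline - 0.027 + rng.normal(0, 0.15, 56)

    print(f"SRM: {standardized_response_mean(baseline, followup):.3f}")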

  7. In vivo semi-automatic segmentation of multicontrast cardiovascular magnetic resonance for prospective cohort studies on plaque tissue composition: initial experience.

    Science.gov (United States)

    Yoneyama, Taku; Sun, Jie; Hippe, Daniel S; Balu, Niranjan; Xu, Dongxiang; Kerwin, William S; Hatsukami, Thomas S; Yuan, Chun

    2016-01-01

    Automatic in vivo segmentation of multicontrast (multisequence) carotid magnetic resonance images for plaque composition has been proposed as a substitute for manual review, to save time and reduce inter-reader variability in large-scale or multicenter studies. Using serial images from a prospective longitudinal study, we sought to compare a semi-automatic approach against expert human reading in analyzing carotid atherosclerosis progression. Baseline and 6-month follow-up multicontrast carotid images from 59 asymptomatic subjects with 16-79 % carotid stenosis were reviewed both by trained radiologists with 2-4 years of specialized experience in carotid plaque characterization with MRI and by a previously reported automatic atherosclerotic plaque segmentation algorithm, referred to as morphology-enhanced probabilistic plaque segmentation (MEPPS). Agreement on measurements from individual time points, as well as on compositional changes, was assessed using the intraclass correlation coefficient (ICC). There was good agreement between manual and MEPPS reviews at individual time points for calcification (CA) (area: ICC 0.85-0.91; volume: ICC 0.92-0.95) and lipid-rich necrotic core (LRNC) (area: ICC 0.78-0.82; volume: ICC 0.84-0.86). For compositional changes, agreement was good for CA volume change (ICC 0.78) and moderate for LRNC volume change (ICC 0.49). Factors associated with LRNC progression as detected by MEPPS review included intraplaque hemorrhage (positive association) and reduction in low-density lipoprotein cholesterol (negative association), which were consistent with previous findings from manual review. The automatic classifier for plaque composition produced results similar to expert manual review in a prospective serial MRI study of carotid atherosclerosis progression. Such automatic classification tools may be beneficial in large-scale multicenter studies by reducing image analysis time and avoiding bias between human reviewers.

  8. Construction, implementation and testing of an image identification system using computer vision methods for fruit flies with economic importance (Diptera: Tephritidae).

    Science.gov (United States)

    Wang, Jiang-Ning; Chen, Xiao-Lin; Hou, Xin-Wen; Zhou, Li-Bing; Zhu, Chao-Dong; Ji, Li-Qiang

    2017-07-01

    Many species of Tephritidae are damaging to fruit, which can negatively impact international fruit trade. Automatic or semi-automatic identification of fruit flies is greatly needed for diagnosing causes of damage and for quarantine protocols for economically relevant insects. A fruit fly image identification system named AFIS1.0 has been developed using 74 species belonging to six genera, which include the majority of pests in the Tephritidae. The system combines automated image identification and manual verification, balancing operability and accuracy. AFIS1.0 integrates image analysis and an expert system into a content-based image retrieval framework. In the automatic identification module, AFIS1.0 gives candidate identification results. Afterwards, users can make a manual selection by comparing unidentified images with a subset of images corresponding to the automatic identification result. The system uses Gabor surface features in automated identification and yielded an overall classification success rate of 87% to the species level in the Independent Multi-part Image Automatic Identification Test. The system is useful for users with or without specific expertise on Tephritidae in the task of rapid and effective identification of fruit flies, and it brings the application of computer vision technology to fruit fly recognition much closer to production level. © 2016 Society of Chemical Industry.
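
    The abstract names Gabor surface features as the basis of the automated module. As an illustration only (not the AFIS1.0 implementation), a simple Gabor descriptor for retrieval could be pooled from a small filter bank; all parameter values below are assumptions:

        import cv2
        import numpy as np

        def gabor_descriptor(gray, n_orientations=8):
            """Pool mean and variance of Gabor filter responses into a feature vector."""
            feats = []
            for i in range(n_orientations):
                theta = i * np.pi / n_orientations
                kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                            lambd=10.0, gamma=0.5)
                response = cv2.filter2D(gray, cv2.CV_32F, kernel)
                feats.extend([response.mean(), response.var()])
            return np.array(feats)

        # Content-based retrieval then ranks database images by descriptor
        # distance, e.g. np.linalg.norm(query - candidate), and the user
        # verifies the top-ranked species manually.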

  9. Computer Networks A Systems Approach

    CERN Document Server

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big picture" understanding.

  10. Automated validation of a computer operating system

    Science.gov (United States)

    Dervage, M. M.; Milberg, B. A.

    1970-01-01

    Programs apply selected input/output loads to complex computer operating system and measure performance of that system under such loads. Technique lends itself to checkout of computer software designed to monitor automated complex industrial systems.

  11. Computer aided training system development

    Energy Technology Data Exchange (ETDEWEB)

    Midkiff, G.N. (Advanced Technology Engineering Systems, Inc., Savannah, GA (US))

    1987-01-01

    The first three phases of Training System Development (TSD) -- job and task analysis, curriculum design, and training material development -- are time consuming and labor intensive. The use of personal computers with a combination of commercial and custom-designed software resulted in a significant reduction in the man-hours required to complete these phases for a Health Physics Technician Training Program at a nuclear power station. This paper reports that each step in the training program project involved the use of personal computers: job survey data were compiled with a statistical package; task analysis was performed with custom software designed to interface with a commercial database management program; Job Performance Measures (tests) were generated by a custom program from data in the task analysis database; and training materials were drafted, edited, and produced using commercial word processing software.

  12. CAESY - COMPUTER AIDED ENGINEERING SYSTEM

    Science.gov (United States)

    Wette, M. R.

    1994-01-01

    Many developers of software and algorithms for control system design have recognized that current tools have limits in both flexibility and efficiency. Many forces drive the development of new tools including the desire to make complex system modeling design and analysis easier and the need for quicker turnaround time in analysis and design. Other considerations include the desire to make use of advanced computer architectures to help in control system design, adopt new methodologies in control, and integrate design processes (e.g., structure, control, optics). CAESY was developed to provide a means to evaluate methods for dealing with user needs in computer-aided control system design. It is an interpreter for performing engineering calculations and incorporates features of both Ada and MATLAB. It is designed to be reasonably flexible and powerful. CAESY includes internally defined functions and procedures, as well as user defined ones. Support for matrix calculations is provided in the same manner as MATLAB. However, the development of CAESY is a research project, and while it provides some features which are not found in commercially sold tools, it does not exhibit the robustness that many commercially developed tools provide. CAESY is written in C-language for use on Sun4 series computers running SunOS 4.1.1 and later. The program is designed to optionally use the LAPACK math library. The LAPACK math routines are available through anonymous ftp from research.att.com. CAESY requires 4Mb of RAM for execution. The standard distribution medium is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. CAESY was developed in 1993 and is a copyrighted work with all copyright vested in NASA.

  13. Automated Computer Access Request System

    Science.gov (United States)

    Snook, Bryan E.

    2010-01-01

    The Automated Computer Access Request (AutoCAR) system is a Web-based account provisioning application that replaces the time-consuming paper-based computer-access request process at Johnson Space Center (JSC). AutoCAR combines rules-based and role-based functionality in one application to provide a centralized system that is easily and widely accessible. The system features a work-flow engine that facilitates request routing, a user registration directory containing contact information and user metadata, an access request submission and tracking process, and a system administrator account management component. This provides full, end-to-end disposition approval chain accountability from the moment a request is submitted. By blending both rules-based and role-based functionality, AutoCAR has the flexibility to route requests based on a user's nationality, JSC affiliation status, and other export-control requirements, while ensuring a user's request is addressed by either a primary or backup approver. All user accounts that are tracked in AutoCAR are recorded and mapped to the native operating system schema on the target platform where user accounts reside. This allows for future extensibility for supporting creation, deletion, and account management directly on the target platforms by way of AutoCAR. The system's directory-based lookup and day-to-day change analysis of directory information determines personnel moves, deletions, and additions, and automatically notifies a user via e-mail to revalidate his/her account access as a result of such changes. AutoCAR is a Microsoft classic active server page (ASP) application hosted on a Microsoft Internet Information Server (IIS).

  14. Computers as components principles of embedded computing system design

    CERN Document Server

    Wolf, Marilyn

    2012-01-01

    Computers as Components: Principles of Embedded Computing System Design, 3e, presents essential knowledge on embedded systems technology and techniques. Updated for today's embedded systems design methods, this edition features new examples including digital signal processing, multimedia, and cyber-physical systems. Author Marilyn Wolf covers the latest processors from Texas Instruments, ARM, and Microchip Technology plus software, operating systems, networks, consumer devices, and more. Like the previous editions, this textbook uses real processors to demonstrate both technology and techniques.

  15. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction: a new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance; the performance impact of optimization was studied in the context of our methodology for CPU performance characterization, based on the abstract machine model. Benchmark programs are analyzed in another paper: a machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are summarized more specifically in this report, as are smaller efforts supported by this grant.
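
    At its simplest, the merged machine/program characterization described above reduces to a weighted sum: per-operation counts from the program profile multiplied by per-operation costs from the machine characterizer. A toy sketch, with entirely hypothetical numbers and operation names:

        # Abstract-machine operation counts from a program characterization.
        op_counts = {"fadd": 2.0e9, "fmul": 1.5e9, "load": 4.0e9, "branch": 0.8e9}
        # Per-operation times (seconds) measured by a machine characterizer.
        op_times = {"fadd": 2.0e-9, "fmul": 2.5e-9, "load": 4.0e-9, "branch": 1.0e-9}

        # Estimated execution time for this machine/program combination.
        t_est = sum(op_counts[op] * op_times[op] for op in op_counts)
        print(f"estimated time: {t_est:.2f} s")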

  16. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include:
    · Morphological Image Analysis for Computer Vision Applications.
    · Methods for Detecting of Structural Changes in Computer Vision Systems.
    · Hierarchical Adaptive KL-based Transform: Algorithms and Applications.
    · Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores.
    · A Way of Energy Analysis for Image and Video Sequence Processing.
    · Optimal Measurement of Visual Motion Across Spatial and Temporal Scales.
    · Scene Analysis Using Morphological Mathematics and Fuzzy Logic.
    · Digital Video Stabilization in Static and Dynamic Scenes.
    · Implementation of Hadamard Matrices for Image Processing.
    · A Generalized Criterion ...

  17. A simple and unsupervised semi-automatic workflow to detect shallow landslides in Alpine areas based on VHR remote sensing data

    Science.gov (United States)

    Amato, Gabriele; Eisank, Clemens; Albrecht, Florian

    2017-04-01

    Landslide detection from Earth observation imagery is an important preliminary step for landslide mapping, landslide inventories and landslide hazard assessment. In this context, the object-based image analysis (OBIA) concept has been increasingly used over the last decade. Within the framework of the Land@Slide project (Earth observation based landslide mapping: from methodological developments to automated web-based information delivery), a simple, unsupervised, semi-automatic and object-based approach for the detection of shallow landslides has been developed and implemented in the InterIMAGE open-source software. The method was applied to an Alpine case study in western Austria, exploiting spectral information from pansharpened 4-band WorldView-2 satellite imagery (0.5 m spatial resolution) in combination with digital elevation models. First, we divided the image into sub-images, i.e. tiles, and then applied the workflow to each of them without changing the parameters. The workflow was implemented as a top-down approach: at the image tile level, an over-classification of the potential landslide area is produced; the over-estimated area is then re-segmented and re-classified through several processing cycles until most false-positive objects have been eliminated. At every step, a segmentation based on the Baatz algorithm generates polygons that are candidates to be landslides. At the same time, the average values of the normalized difference vegetation index (NDVI) and brightness are calculated for these polygons; these values are then used as thresholds to select objects and thereby improve the quality of the classification results. In combination, empirically determined values of slope and roughness are also used in the selection process. Results for each tile were merged to obtain the landslide map for the test area. For final validation, the landslide map was compared to a geological map and a supervised landslide classification in order to estimate its accuracy.
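
    A minimal sketch of the object-selection step described above, assuming segments are available as a label mask over a 4-band image; the band order, the use of image-level means as thresholds, and the direction of the comparisons are illustrative assumptions, not the Land@Slide implementation:

        import numpy as np

        def select_candidates(bands, labels):
            """Keep segments whose mean NDVI is low and mean brightness is high,
            relative to image-level averages (bare landslide scars are
            typically unvegetated and bright)."""
            red = bands[2].astype(float)   # assumed band order: B, G, R, NIR
            nir = bands[3].astype(float)
            ndvi = (nir - red) / (nir + red + 1e-9)
            brightness = bands.astype(float).mean(axis=0)
            ndvi_thr, bright_thr = ndvi.mean(), brightness.mean()
            keep = []
            for seg_id in np.unique(labels):
                m = labels == seg_id
                if ndvi[m].mean() < ndvi_thr and brightness[m].mean() > bright_thr:
                    keep.append(seg_id)
            return keep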

  18. Semi-Automatic Mapping of Tidal Cracks in the Fast Ice Region near Zhongshan Station in East Antarctica Using Landsat-8 OLI Imagery

    Directory of Open Access Journals (Sweden)

    Fengming Hui

    2016-03-01

    Full Text Available Tidal cracks are linear features that appear parallel to coastlines in fast ice regions due to the action of periodic and non-periodic sea level oscillations. They can influence energy and heat exchange between the ocean, ice, and atmosphere, as well as human activities. In this paper, the LINE module of the Geomatics 2015 software was used to automatically extract tidal cracks in fast ice regions near the Chinese Zhongshan Station in East Antarctica from Landsat-8 Operational Land Imager (OLI) data with resolutions of 15 m (panchromatic band 8) and 30 m (multispectral bands 1–7). The detected tidal cracks were determined by matching the output of the LINE module against manually-interpreted tidal cracks in the OLI images. The ratio of the length of detected tidal cracks to the total length of interpreted cracks was used to evaluate the automated detection method. Results show that the vertical-direction gradient is a better input to the LINE module than top-of-atmosphere (TOA) reflectance for estimating the presence of cracks, regardless of the examined resolution. Data with a resolution of 15 m also give better results in crack detection than data with a resolution of 30 m. The statistics show that, in the results from the 15-m-resolution data, Band 8 performed best, with ratios of 50.92 and 31.38 percent using the vertical gradient and the TOA reflectance methods, respectively. In the results from the 30-m-resolution data, Band 5 performed best, with ratios of 47.43 and 17.8 percent using the same methods, respectively. This implies that Band 8 was better for tidal crack detection than the multispectral fusion data (Bands 1–7), and that Band 5 with a resolution of 30 m was best among the multispectral data. The semi-automatic mapping of tidal cracks will improve the safety of vehicle travel in fast ice regimes.
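
    The vertical-direction gradient input can be illustrated with a standard derivative filter; this is a sketch of the preprocessing idea only (the LINE module itself is a closed implementation in the Geomatics software):

        import numpy as np
        from scipy import ndimage

        def vertical_gradient(band):
            """Sobel derivative along image rows: emphasizes roughly
            horizontal linear features such as coast-parallel cracks."""
            return ndimage.sobel(band.astype(float), axis=0)

        # Hypothetical usage on a Landsat-8 OLI Band 8 array `b8`:
        # grad = vertical_gradient(b8), then feed `grad` (rather than the
        # TOA reflectance) to the line-extraction step.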

  19. Trusted computing for embedded systems

    CERN Document Server

    Soudris, Dimitrios; Anagnostopoulos, Iraklis

    2015-01-01

    This book describes the state-of-the-art in trusted computing for embedded systems. It shows how a variety of security and trusted computing problems are addressed currently and what solutions are expected to emerge in the coming years. The discussion focuses on attacks aimed at hardware and software for embedded systems, and the authors describe specific solutions to create security features. Case studies are used to present new techniques designed as industrial security solutions. Coverage includes development of tamper resistant hardware and firmware mechanisms for lightweight embedded devices, as well as those serving as security anchors for embedded platforms required by applications such as smart power grids, smart networked and home appliances, environmental and infrastructure sensor networks, etc. ·         Enables readers to address a variety of security threats to embedded hardware and software; ·         Describes design of secure wireless sensor networks, to address secure authen...

  20. Computer systems and software engineering

    Science.gov (United States)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  1. Semi-automatized segmentation method using image-based flow cytometry to study sperm physiology: the case of capacitation-induced tyrosine phosphorylation.

    Science.gov (United States)

    Matamoros-Volante, Arturo; Moreno-Irusta, Ayelen; Torres-Rodriguez, Paulina; Giojalas, Laura; Gervasi, María G; Visconti, Pablo E; Treviño, Claudia L

    2017-11-25

    Is image-based flow cytometry a useful tool to study intracellular events in human sperm, such as protein tyrosine phosphorylation or signaling processes? Image-based flow cytometry is a powerful tool to study intracellular events in a relevant number of sperm cells, enabling robust statistical analysis while providing spatial resolution in terms of the specific subcellular localization of the labeling. Sperm capacitation is required for fertilization. During this process, spermatozoa undergo numerous physiological changes, via activation of different signaling pathways, which are not completely understood. Classical approaches for studying sperm physiology include conventional microscopy, flow cytometry and Western blotting. These techniques have disadvantages for obtaining detailed subcellular information on signaling pathways in a relevant number of cells. This work describes a new semi-automatized analysis using image-based flow cytometry which enables the study, at the subcellular and population levels, of different sperm parameters associated with signaling. The increase in protein tyrosine phosphorylation during capacitation is presented as an example. Sperm cells were isolated from seminal plasma by the swim-up technique. We evaluated the intensity and distribution of protein tyrosine phosphorylation in sperm incubated in non-capacitation and capacitation supporting media for 1 and 18 hours under different experimental conditions. We used an antibody against FER kinase and pharmacological inhibitors in an attempt to identify the kinases involved in protein tyrosine phosphorylation during human sperm capacitation. Semen samples from normospermic donors were obtained by masturbation after 2-3 days of sexual abstinence. We used the innovative technique of image-based flow cytometry and image analysis tools to segment individual images of spermatozoa. We evaluated and quantified the regions of sperm where protein tyrosine phosphorylation takes place at

  2. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    Science.gov (United States)

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same situation arises in utilizing a cluster of Intel's many-integrated-core (MIC) processors, or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be the better solution for energy efficiency when its computational performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  3. Computer system reliability safety and usability

    CERN Document Server

    Dhillon, BS

    2013-01-01

    Computer systems have become an important element of the world economy, with billions of dollars spent each year on development, manufacture, operation, and maintenance. Combining coverage of computer system reliability, safety, usability, and other related topics into a single volume, Computer System Reliability: Safety and Usability eliminates the need to consult many different and diverse sources in the hunt for the information required to design better computer systems.After presenting introductory aspects of computer system reliability such as safety, usability-related facts and figures,

  4. Integrated Computer System of Management in Logistics

    Science.gov (United States)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  5. On the Universal Computing Power of Amorphous Computing Systems

    Czech Academy of Sciences Publication Activity Database

    Wiedermann, Jiří; Petrů, L.

    2009-01-01

    Roč. 45, č. 4 (2009), s. 995-1010 ISSN 1432-4350 R&D Projects: GA AV ČR 1ET100300517; GA ČR GD201/05/H014 Institutional research plan: CEZ:AV0Z10300504 Keywords : amorphous computing systems * universal computing * random access machine * simulation Subject RIV: IN - Informatics, Computer Science Impact factor: 0.726, year: 2009

  6. Conflict Resolution in Computer Systems

    Directory of Open Access Journals (Sweden)

    G. P. Mojarov

    2015-01-01

    Full Text Available A conflict situation in computer systems (CS) is the phenomenon arising when processes have multi-access to shared resources and none of the involved processes can proceed, because each is waiting for resources locked by other processes which, in turn, are in a similar position. Such a conflict situation is also called a deadlock, and it has a clear impact on the CS state. Finding practical algorithms to resolve deadlocks is of significant applied importance for ensuring the information security of the computing process, and the present article is aimed at solving this relevant problem. The gravity of the situation depends on the types of processes in a deadlock, the types of resources used, the number of processes, and many other factors. A disadvantage of the deadlock-prevention method used in many modern operating systems, based on preliminary planning of the resources required by a process, is obvious: waiting time can be overlong. The prevention method based on interrupting a process and deallocating its resources is very specific and not very effective when there is a set of polytypic resources requested dynamically. The drawback of another method, preventing deadlock by ordering resources, consists in the restriction of the possible sequences of resource requests. A different way of combating deadlocks is deadlock avoidance, which relies on predicting impasses before they appear. Methods are known [1,4,5] to define and prevent conditions under which deadlocks may occur; these use preliminary information on what resources a running process can request. Before allocating a free resource to a process, a test of a state "safety" condition is performed. The state is "safe" if no deadlock can occur in the future as a result of allocating the resource to the process. Otherwise the state is considered "hazardous", and resource allocation is postponed. The obvious
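
    The "safe state" test sketched above corresponds to the classic banker's-algorithm safety check: a state is safe if some ordering of the processes allows each to finish with the currently available resources. A compact generic sketch, not taken from the article:

        def is_safe(available, max_need, allocated):
            """Return True if some completion order exists for all processes.
            available: free units per resource type; max_need/allocated:
            one row per process, one column per resource type."""
            need = [[m - a for m, a in zip(mrow, arow)]
                    for mrow, arow in zip(max_need, allocated)]
            work = list(available)
            finished = [False] * len(max_need)
            progressed = True
            while progressed:
                progressed = False
                for i, done in enumerate(finished):
                    if not done and all(n <= w for n, w in zip(need[i], work)):
                        # Process i can run to completion, then releases
                        # everything it currently holds.
                        work = [w + a for w, a in zip(work, allocated[i])]
                        finished[i] = True
                        progressed = True
            return all(finished)

        # A request is granted only if the state resulting from the allocation
        # still satisfies is_safe(...); otherwise the allocation is postponed.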

  7. Know Your Personal Computer The Personal Computer System ...

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 1, Issue 4, April 1996, pp. 31-36. Know Your Personal Computer: The Personal Computer System Software. Siddhartha Kumar Ghoshal. Series Article.

  8. Method of semi-automatic high precision potentiometric titration for characterization of uranium compounds; Metodo de titulacao potenciometrica de alta precisao semi-automatizado para a caracterizacao de compostos de uranio

    Energy Technology Data Exchange (ETDEWEB)

    Cristiano, Barbara Fernandes G.; Dias, Fabio C.; Barros, Pedro D. de; Araujo, Radier Mario S. de; Delgado, Jose Ubiratan; Silva, Jose Wanderley S. da, E-mail: barbara@ird.gov.b, E-mail: fabio@ird.gov.b, E-mail: pedrodio@ird.gov.b, E-mail: radier@ird.gov.b, E-mail: delgado@ird.gov.b, E-mail: wanderley@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Lopes, Ricardo T., E-mail: ricardo@lin.ufrj.b [Universidade Federal do Rio de Janeiro (LIN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Lab. de Instrumentacao Nuclear

    2011-10-26

    The method of high precision potentiometric titration is widely used in the certification and characterization of uranium compounds. In order to reduce the analysis time and diminish the influence of the analyst, a semi-automatic version of the method was developed at the safeguards laboratory of CNEN-RJ, Brazil. The method was applied with traceability guaranteed by the use of a primary standard of potassium dichromate. The combined standard uncertainty in the determination of the total uranium concentration was of the order of 0.01%, which compares favourably with the methods traditionally used by nuclear installations, whose uncertainty is of the order of 0.1%.

  9. Performance tuning for high performance computing systems

    OpenAIRE

    Pahuja, Himanshu

    2017-01-01

    A distributed system is composed of loosely coupled software components integrated with the underlying hardware resources, which can be distributed over the standard internet framework. High Performance Computing used to involve the utilization of supercomputers which could churn a lot of computing power to process massively complex computational tasks, but it is now evolving across distributed systems, thereby gaining the ability to utilize geographically distributed computing resources. We...

  10. Contributions of Cloud Computing in CRM Systems

    OpenAIRE

    Bobek, Pavel

    2013-01-01

    This work deals with the contributions of cloud computing to CRM. Its main objective is to evaluate cloud computing and its contributions to CRM systems, and to determine the demands on a cloud-based CRM solution for a trading company. The first chapter deals with the characteristics of CRM systems. The second chapter sums up the qualities and opportunities of utilizing cloud computing. The third chapter describes the demands on a CRM system utilizing cloud computing for a trading company that deal...

  11. Applied computation and security systems

    CERN Document Server

    Saeed, Khalid; Choudhury, Sankhayan; Chaki, Nabendu

    2015-01-01

    This book contains the extended versions of the works presented and discussed at the First International Doctoral Symposium on Applied Computation and Security Systems (ACSS 2014), held during April 18-20, 2014 in Kolkata, India. The symposium was jointly organized by the AGH University of Science & Technology, Cracow, Poland and the University of Calcutta, India. Volume I of this double-volume book contains fourteen high-quality chapters in three parts. Part 1, on Pattern Recognition, presents four chapters. Part 2, on Imaging and Healthcare Applications, contains four more chapters. Part 3 of this volume, on Wireless Sensor Networking, includes six chapters. Volume II of the book has three parts presenting a total of eleven chapters. Part 4 consists of five excellent chapters on Software Engineering, ranging from cloud service design to transactional memory. Part 5 in Volume II is on Cryptography with two book...

  12. Universal blind quantum computation for hybrid system

    Science.gov (United States)

    Huang, He-Liang; Bao, Wan-Su; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Zhang, Hai-Long; Wang, Xiang

    2017-08-01

    As progress on building quantum computers continues to advance, first-generation practical quantum computers will become available to ordinary users in the cloud, much like IBM's Quantum Experience today. Clients can remotely access the quantum servers using simple devices. In such a situation, it is of prime importance to protect the security of the client's information. Blind quantum computation protocols enable a client with limited quantum technology to delegate her quantum computation to a quantum server without leaking any privacy. To date, blind quantum computation has been considered only for an individual quantum system. However, a practical universal quantum computer is likely to be a hybrid system. Here, we take the first step to construct a framework of blind quantum computation for the hybrid system, which provides a more feasible way toward scalable blind quantum computation.

  13. Computational Models for Nonlinear Aeroelastic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate new and efficient computational methods of modeling nonlinear aeroelastic systems. The...

  14. Distributed computer systems theory and practice

    CERN Document Server

    Zedan, H S M

    2014-01-01

    Distributed Computer Systems: Theory and Practice is a collection of papers dealing with the design and implementation of operating systems, including distributed systems, such as the amoeba system, argus, Andrew, and grapevine. One paper discusses the concepts and notations for concurrent programming, particularly the language notation used in computer programming and synchronization methods, and compares three classes of languages. Another paper explains load balancing or load redistribution to improve system performance, namely, static balancing and adaptive load balancing. For program efficiency

  15. Intelligent computing systems emerging application areas

    CERN Document Server

    Virvou, Maria; Jain, Lakhmi

    2016-01-01

    The book at hand explores emerging scientific and technological areas in which Intelligent Computing Systems provide efficient solutions and, thus, may play a role in the years to come. It demonstrates how Intelligent Computing Systems make use of computational methodologies that mimic nature-inspired processes to address real world problems of high complexity for which exact mathematical solutions, based on physical and statistical modelling, are intractable. Common intelligent computational methodologies are presented, including artificial neural networks, evolutionary computation, genetic algorithms, artificial immune systems, fuzzy logic, swarm intelligence, artificial life, virtual worlds and hybrid methodologies based on combinations of the previous. The book will be useful to researchers, practitioners and graduate students dealing with mathematically-intractable problems. It is intended for both the expert/researcher in the field of Intelligent Computing Systems, as well as for the general reader in t

  16. FPGA-accelerated simulation of computer systems

    CERN Document Server

    Angepat, Hari; Chung, Eric S; Hoe, James C; Chung, Eric S

    2014-01-01

    To date, the most common form of computer-system simulator is software-based, running on standard computers. One promising approach to improving simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches to using FPGAs to accelerate software-implemented simulation of computer systems, along with selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f

  17. Preventive maintenance for computer systems - concepts & issues ...

    African Journals Online (AJOL)

    Performing preventive maintenance activities for the computer is not optional. The computer is a sensitive and delicate device that needs adequate time and attention to work properly. This paper presents the concept of and issues in prolonging the life span of the system, that is, the way to make the system last long and ...

  18. Computer Literacy in a Distance Education System

    Science.gov (United States)

    Farajollahi, Mehran; Zandi, Bahman; Sarmadi, Mohamadreza; Keshavarz, Mohsen

    2015-01-01

    In a Distance Education (DE) system, students must be equipped with the seven computer-usage skills of the ICDL. This paper investigates the effect of a DE system on the computer literacy of Master of Arts students at Tehran University. The design of this study is quasi-experimental. Pre-test and post-test were used in both control and…

  19. Interactive graphical computer-aided design system

    Science.gov (United States)

    Edge, T. M.

    1975-01-01

    System is used for design, layout, and modification of large-scale-integrated (LSI) metal-oxide semiconductor (MOS) arrays. System is structured around small computer which provides real-time support for graphics storage display unit with keyboard, slave display unit, hard copy unit, and graphics tablet for designer/computer interface.

  20. MTA Computer Based Evaluation System.

    Science.gov (United States)

    Brenner, Lisa P.; And Others

    The MTA PLATO-based evaluation system, which has been implemented by a consortium of schools of medical technology, is designed to be general-purpose, modular, data-driven, and interactive, and to accommodate other national and local item banks. The system provides a comprehensive interactive item-banking system in conjunction with online student…

  1. Assessing and Mitigating Risks in Computer Systems

    OpenAIRE

    Netland, Lars-Helge

    2008-01-01

    When it comes to non-trivial networked computer systems, bulletproof security is very hard to achieve. Over a system's lifetime, new security risks are likely to emerge from, e.g., newly discovered classes of vulnerabilities or the arrival of new threat agents. Given the dynamic environment in which computer systems are deployed, continuous evaluations and adjustments are wiser than one-shot efforts for perfection. Security risk management focuses on assessing and treating security...

  2. A cost modelling system for cloud computing

    OpenAIRE

    Ajeh, Daniel; Ellman, Jeremy; Keogh, Shelagh

    2014-01-01

    An advance in technology unlocks new opportunities for organizations to increase their productivity, efficiency and process automation while reducing the cost of doing business as well. The emergence of cloud computing addresses these prospects through the provision of agile systems that are scalable, flexible and reliable as well as cost effective. Cloud computing has made hosting and deployment of computing resources cheaper and easier with no up-front charges but pay per-use flexible payme...

  3. Computational Intelligence in Information Systems Conference

    CERN Document Server

    Au, Thien-Wan; Omar, Saiful

    2017-01-01

    This book constitutes the Proceedings of the Computational Intelligence in Information Systems conference (CIIS 2016), held in Brunei, November 18–20, 2016. The CIIS conference provides a platform for researchers to exchange the latest ideas and to present new research advances in general areas related to computational intelligence and its applications. The 26 revised full papers presented in this book have been carefully selected from 62 submissions. They cover a wide range of topics and application areas in computational intelligence and informatics.

  4. Attacker Modelling in Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Papini, Davide

    ... localisation services and many others. These technologies can be classified under the name of ubiquitous systems. The term Ubiquitous System dates back to 1991, when Mark Weiser at Xerox PARC Lab first referred to it in writing. He envisioned a future where computing technologies would have melted in with our everyday life. This future is visible to everyone nowadays: terms like smartphone, cloud, sensor, network etc. are widely known and used in our everyday life. But what about the security of such systems? Ubiquitous computing devices can be limited in terms of energy, computing power and memory...

  5. Resilience assessment and evaluation of computing systems

    CERN Document Server

    Wolter, Katinka; Vieira, Marco

    2012-01-01

    The resilience of computing systems includes their dependability as well as their fault tolerance and security. It defines the ability of a computing system to perform properly in the presence of various kinds of disturbances and to recover from any service degradation. These properties are immensely important in a world where many aspects of our daily life depend on the correct, reliable and secure operation of often large-scale distributed computing systems. Wolter and her co-editors grouped the 20 chapters from leading researchers into seven parts: an introduction and motivating examples,

  6. Understanding the computing system domain of advanced computing with microcomputers

    Energy Technology Data Exchange (ETDEWEB)

    Hake, K.A.

    1990-01-01

    Accepting the challenge by the Executive Office of the President, Office of Science and Technology Policy for research to keep pace with technology, the author surveys the knowledge domain of advanced microcomputers. The paper provides a general background for social scientists in technology traditionally relegated to computer science and engineering. The concept of systems integration serves as a framework of understanding for the various elements of the knowledge domain of advanced microcomputing. The systems integration framework is viewed as a series of interrelated building blocks composed of the domain elements. These elements are: the processor platform, operating system, display technology, mass storage, application software, and human-computer interface. References come from recent articles in popular magazines and journals to help emphasize the easy access of this information, its appropriate technical level for the social scientist, and its transient currency. 78 refs., 3 figs.

  7. System Upgrade of the KEK Central Computing System

    Science.gov (United States)

    Murakami, Koichi; Iwai, Go; Sasaki, Takashi; Nakamura, Tomoaki; Takase, Wataru

    2017-10-01

    The KEK central computer system (KEKCC) supports various activities in KEK, such as the Belle/Belle II and J-PARC experiments. The system was totally replaced and launched in September 2016. The computing resources in the new system are much enhanced, in line with the recent increase in computing demand: we have 10,000 CPU cores, 13 PB of disk storage, and 70 PB maximum capacity in the tape system. In this paper, we focus on the design and performance of the new storage system. Our knowledge, experience and challenges can be usefully shared among HEP data centers as a data-intensive computing facility for the next generation of HEP experiments.

  8. Computer-aided dispatching system design specification

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, M.G.

    1997-12-16

    This document defines the performance requirements for a graphic display dispatching system to support the Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. This system provided a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operation at the Hanford Patrol Operations Center, Building 2721E. This system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP).

  9. Project IDEALS. Educational Applications of Computer Systems.

    Science.gov (United States)

    Hansen, Duncan; And Others

    This is a booklet in the Project IDEALS series which deals with the use of Educational Data Processing (EDP) systems. A section is devoted to the use of the computer in such varied school operations as the processing of student records, schedules, computer simulation, grade reports, business, student applications, cafeterias, and transportation.…

  10. Computer-aided power systems analysis

    CERN Document Server

    Kusic, George

    2008-01-01

    Computer applications yield more insight into system behavior than is possible by using hand calculations on system elements. Computer-Aided Power Systems Analysis: Second Edition is a state-of-the-art presentation of basic principles and software for power systems in steady-state operation. Originally published in 1985, this revised edition explores power systems from the point of view of the central control facility. It covers the elements of transmission networks, bus reference frame, network fault and contingency calculations, power flow on transmission networks, and generator base power settings

  11. The structural robustness of multiprocessor computing system

    Directory of Open Access Journals (Sweden)

    N. Andronaty

    1996-03-01

    Full Text Available A model of a multiprocessor computing system based on transputers is described; it permits resolving the question of evaluating structural robustness (viability, survivability).

  12. Console Networks for Major Computer Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ophir, D; Shepherd, B; Spinrad, R J; Stonehill, D

    1966-07-22

    A concept for interactive time-sharing of a major computer system is developed in which satellite computers mediate between the central computing complex and the various individual user terminals. These techniques allow the development of a satellite system substantially independent of the details of the central computer and its operating system. Although the user terminals' roles may be rich and varied, the demands on the central facility are merely those of a tape drive or similar batched information transfer device. The particular system under development provides service for eleven visual display and communication consoles, sixteen general purpose, low rate data sources, and up to thirty-one typewriters. Each visual display provides a flicker-free image of up to 4000 alphanumeric characters or tens of thousands of points by employing a swept raster picture generating technique directly compatible with that of commercial television. Users communicate either by typewriter or a manually positioned light pointer.

  13. Sandia Laboratories technical capabilities: computation systems

    Energy Technology Data Exchange (ETDEWEB)

    1977-12-01

    This report characterizes the computation systems capabilities at Sandia Laboratories. Selected applications of these capabilities are presented to illustrate the extent to which they can be applied in research and development programs. 9 figures.

  14. Computational Models for Nonlinear Aeroelastic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate a new and efficient computational method of modeling nonlinear aeroelastic systems. The...

  15. Satellite system considerations for computer data transfer

    Science.gov (United States)

    Cook, W. L.; Kaul, A. K.

    1975-01-01

    Communications satellites will play a key role in the transmission of computer generated data through nationwide networks. This paper examines critical aspects of satellite system design as they relate to the computer data transfer task. In addition, it discusses the factors influencing the choice of error control technique, modulation scheme, multiple-access mode, and satellite beam configuration based on an evaluation of system requirements for a broad range of application areas including telemetry, terminal dialog, and bulk data transmission.

  16. Potential of Cognitive Computing and Cognitive Systems

    OpenAIRE

    Noor Ahmed K.

    2014-01-01

    Cognitive computing and cognitive technologies are game changers for future engineering systems, as well as for engineering practice and training. They are major drivers for knowledge automation work, and the creation of cognitive products with higher levels of intelligence than current smart products. This paper gives a brief review of cognitive computing and some of the cognitive engineering systems activities. The potential of cognitive technologies is outlined, alo...

  17. Computer-Aided dispatching system design specification

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, M.G. [Kaiser Engineers Hanford Co., Richland, WA (United States)

    1996-05-03

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol emergency response. This system is defined as a commercial off-the-shelf computer dispatching system providing both text and graphical display information while interfacing with the diverse reporting systems within the Hanford Facility. This system also provided expansion capabilities to integrate Hanford Fire and the Occurrence Notification Center, and provided back-up capabilities for the Plutonium Processing Facility.

  18. Information systems and computing technology

    CERN Document Server

    Zhang, Lei

    2013-01-01

    Invited papers:
    Incorporating the multi-cross-sectional temporal effect in Geographically Weighted Logit Regression (K. Wu, B. Liu, B. Huang & Z. Lei)
    One shot learning human actions recognition using key poses (W.H. Zou, S.G. Li, Z. Lei & N. Dai)
    Band grouping pansharpening for WorldView-2 satellite images (X. Li)
    Research on GIS based haze trajectory data analysis system (Y. Wang, J. Chen, J. Shu & X. Wang)
    Regular papers:
    A warning model of systemic financial risks (W. Xu & Q. Wang)
    Research on smart mobile phone user experience with grounded theory (J.P. Wan & Y.H. Zhu)
    The software reliability analysis based on

  19. Artificial immune system applications in computer security

    CERN Document Server

    Tan, Ying

    2016-01-01

    This book provides state-of-the-art information on the use, design, and development of the Artificial Immune System (AIS) and AIS-based solutions to computer security issues. Artificial Immune System: Applications in Computer Security focuses on the technologies and applications of AIS in malware detection proposed in recent years by the Computational Intelligence Laboratory of Peking University (CIL@PKU). It offers a theoretical perspective as well as practical solutions for readers interested in AIS, machine learning, pattern recognition and computer security. The book begins by introducing the basic concepts, typical algorithms, important features, and some applications of AIS. The second chapter introduces malware and its detection methods, especially for immune-based malware detection approaches. Successive chapters present a variety of advanced detection approaches for malware, including Virus Detection System, K-Nearest Neighbour (KNN), RBF networks, and Support Vector Machines (SVM), Danger theory, ...

  20. Quantum Computing in Solid State Systems

    CERN Document Server

    Ruggiero, B; Granata, C

    2006-01-01

    The aim of Quantum Computation in Solid State Systems is to report on recent theoretical and experimental results on the macroscopic quantum coherence of mesoscopic systems, as well as on solid state realization of qubits and quantum gates. Particular attention has been given to coherence effects in Josephson devices. Other solid state systems, including quantum dots, optical, ion, and spin devices which exhibit macroscopic quantum coherence are also discussed. Quantum Computation in Solid State Systems discusses experimental implementation of quantum computing and information processing devices, and in particular observations of quantum behavior in several solid state systems. On the theoretical side, the complementary expertise of the contributors provides models of the various structures in connection with the problem of minimizing decoherence.

  1. Computer vision system R&D for EAST Articulated Maintenance Arm robot

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Linglong, E-mail: linglonglin@ipp.ac.cn; Song, Yuntao, E-mail: songyt@ipp.ac.cn; Yang, Yang, E-mail: yangy@ipp.ac.cn; Feng, Hansheng, E-mail: hsfeng@ipp.ac.cn; Cheng, Yong, E-mail: chengyong@ipp.ac.cn; Pan, Hongtao, E-mail: panht@ipp.ac.cn

    2015-11-15

    Highlights: • We discuss the image preprocessing, object detection and pose estimation algorithms under the poor light conditions of the inner vessel of the EAST tokamak. • The main pipeline, including contour detection, contour filtering, minimum enclosing rectangle (MER) extraction, object location and pose estimation, is described in detail. • The technical issues encountered during the research are discussed. - Abstract: The Experimental Advanced Superconducting Tokamak (EAST) is the first fully superconducting tokamak device, constructed at the Institute of Plasma Physics, Chinese Academy of Sciences (ASIPP). The EAST Articulated Maintenance Arm (EAMA) robot provides the means of in-vessel maintenance such as inspection and picking up fragments of the first wall. This paper presents a method to identify and locate the fragments semi-automatically using computer vision. The use of computer vision for identification and location faces difficult challenges such as shadows, poor contrast, low illumination levels, little texture and so on. The method developed in this paper enables credible identification of objects with shadows through an invariant image and edge detection. The proposed algorithms are validated on our ASIPP robotics and computer vision platform (ARVP). The results show that the method can provide a 3D pose with reference to the robot base so that objects of different shapes and sizes can be picked up successfully.
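
    A hedged sketch of the pipeline named in the highlights (contour detection, contour filtering, MER extraction, object location), using standard OpenCV calls rather than the EAMA code; the edge thresholds and minimum area are assumptions:

        import cv2

        def locate_fragments(gray, min_area=200.0):
            """Detect contours, drop small ones, and return the minimum
            enclosing rectangle (MER) of each remaining candidate."""
            edges = cv2.Canny(gray, 50, 150)             # edge detection
            contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 API
            rects = []
            for c in contours:
                if cv2.contourArea(c) < min_area:        # contour filter
                    continue
                rects.append(cv2.minAreaRect(c))         # MER: ((cx, cy), (w, h), angle)
            return rects

        # Each rectangle gives the object's 2D location; combined with a
        # calibrated camera model, this supports the pose estimation step.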

  2. Computer Reconstruction of Plant Growth and Chlorophyll Fluorescence Emission in Three Spatial Dimensions

    Directory of Open Access Journals (Sweden)

    Ladislav Nedbal

    2012-01-01

    Full Text Available Plant leaves grow and change their orientation, as well as their emission of chlorophyll fluorescence, over time. All these dynamic plant properties can be semi-automatically monitored by a 3D imaging system that generates plant models by the method of coded light illumination, fluorescence imaging and computer 3D reconstruction. Here, we describe the essentials of the method, as well as the system hardware. We show that the technique can reconstruct, with high fidelity, the leaf size, the leaf angle and the plant height. The method fails with wilted plants, when leaves overlap and obscure their true area. This effect, naturally, also interferes when the method is applied to measure plant growth under water stress. The method is, however, very potent in capturing plant dynamics under mild stress and without stress. The 3D reconstruction is also highly effective in correcting the geometrical factors that distort measurements of chlorophyll fluorescence emission from naturally positioned plant leaves.

  3. Computation and design of autonomous intelligent systems

    Science.gov (United States)

    Fry, Robert L.

    2008-04-01

    This paper describes a theory of intelligent systems and its reduction to engineering practice. The theory is based on a broader theory of computation wherein information and control are defined within the subjective frame of a system. At its most primitive level, the theory describes what it computationally means to both ask and answer questions which, like traditional logic, are also Boolean. The logic of questions describes the subjective rules of computation that are objective in the sense that all the described systems operate according to its principles. Therefore, all systems are autonomous by construct. These systems include thermodynamic, communication, and intelligent systems. Although interesting, the important practical consequence is that the engineering framework for intelligent systems can borrow efficient constructs and methodologies from both thermodynamics and information theory. Thermodynamics provides the Carnot cycle which describes intelligence dynamics when operating in the refrigeration mode. It also provides the principle of maximum entropy. Information theory has recently provided the important concept of dual-matching useful for the design of efficient intelligent systems. The reverse engineered model of computation by pyramidal neurons agrees well with biology and offers a simple and powerful exemplar of basic engineering concepts.

  4. A COMPUTER BASED MAINTENANCE MANAGEMENT SYSTEM ...

    African Journals Online (AJOL)

    The development of a computer based maintenance management system is presented for industries using optimization models. The system which is capable of using optimization data and programs to schedule for maintenance or replacement of machines has been designed such that it enables the maintenance ...

  5. Terrace Layout Using a Computer Assisted System

    Science.gov (United States)

    Development of a web-based terrace design tool based on the MOTERR program is presented, along with representative layouts for conventional and parallel terrace systems. Using digital elevation maps and geographic information systems (GIS), this tool utilizes personal computers to rapidly construct ...

  6. A Survey of Civilian Dental Computer Systems.

    Science.gov (United States)

    1988-01-01

    marketplace, the orthodontic community continued to pioneer clinical automation through diagnosis, treatment... (1) patient registration, identification... Compugnath Dental Diagnostic Systems; DDS Articulate Publications; Dental Management Plus; Dentalis System VI; Dental Office Computer; Artificial...; Kamp Mixed Dentition Analysis; Office Management Software; Key Management - Dental Office; Rocky Mountain Orthodontics; Receivables Insurance; CADIAS/RDE

  7. Infrastructure Support for Collaborative Pervasive Computing Systems

    DEFF Research Database (Denmark)

    Vestergaard Mogensen, Martin

    Collaborative Pervasive Computing Systems (CPCS) are currently being deployed to support areas such as clinical work, emergency situations, education, ad-hoc meetings, and other areas involving information sharing and collaboration. These systems allow the users to work together synchronously, but from different places, by sharing information and coordinating activities. Several researchers have shown the value of such distributed collaborative systems. However, building these systems is by no means a trivial task and introduces a lot of yet unanswered questions. The aforementioned areas... We contribute by building real world Collaborative Pervasive Computing Systems, including the Activity-Based Collaboration system and the iHospital system, which has been deployed and evaluated. Secondly, we contribute with novel hybrid and fusion Software Architectures. Moreover, we propose separating...

  8. Unified Computational Intelligence for Complex Systems

    CERN Document Server

    Seiffertt, John

    2010-01-01

    Computational intelligence encompasses a wide variety of techniques that allow computation to learn, to adapt, and to seek. That is, they may be designed to learn information without explicit programming regarding the nature of the content to be retained, they may be imbued with the functionality to adapt to maintain their course within a complex and unpredictably changing environment, and they may help us seek out truths about our own dynamics and lives through their inclusion in complex system modeling. These capabilities place our ability to compute in a category apart from our ability to e

  9. Computer networks ISE a systems approach

    CERN Document Server

    Peterson, Larry L

    2007-01-01

    Computer Networks, 4E is the only introductory computer networking book written by authors who have had first-hand experience with many of the protocols discussed in the book, who have actually designed some of them as well, and who are still actively designing the computer networks today. This newly revised edition continues to provide an enduring, practical understanding of networks and their building blocks through rich, example-based instruction. The authors' focus is on the why of network design, not just the specifications comprising today's systems but how key technologies and p

  10. Computer surety: computer system inspection guidance. [Contains glossary

    Energy Technology Data Exchange (ETDEWEB)

    1981-07-01

    This document discusses computer surety in NRC-licensed nuclear facilities from the perspective of physical protection inspectors. It gives background information and a glossary of computer terms, along with threats and computer vulnerabilities, methods used to harden computer elements, and computer audit controls.

  11. Computer-Aided dispatching system design specification

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, M.G.

    1996-09-27

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol emergency response. This document outlines the negotiated requirements as agreed to by GTE Northwest during technical contract discussions. This system defines a commercial off-the-shelf computer dispatching system providing both text and graphic display information while interfacing with the diverse alarm reporting systems within the Hanford Site. This system provided expansion capability to integrate Hanford Fire and the Occurrence Notification Center. The system also provided back-up capability for the Plutonium Finishing Plant (PFP).

  12. Development of a Semi-Automatic Technique for Flow Estimation using Optical Flow Registration and k-means Clustering on Two Dimensional Cardiovascular Magnetic Resonance Flow Images

    DEFF Research Database (Denmark)

    Brix, Lau; Christoffersen, Christian P. V.; Kristiansen, Martin Søndergaard

    ... was then categorized into groups by the k-means clustering method. Finally, the cluster containing the vessel under investigation was selected manually by a single mouse click. All calculations were performed on an Nvidia 8800 GTX graphics card using the Compute Unified Device Architecture (CUDA) extension to the C...
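
    The clustering-and-click step described above can be illustrated with a minimal NumPy sketch: velocity pixels are grouped by k-means and the vessel cluster is then picked via a simulated mouse click. The toy data and all names are illustrative; the paper's optical flow registration and GPU/CUDA implementation are not reproduced here.

```python
# Minimal sketch (assumption: a 2D velocity image as a NumPy array; plain
# NumPy k-means stands in for the paper's CUDA implementation).
import numpy as np

def kmeans(features, k=3, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest center, then re-estimate centers.
        labels = np.argmin(
            ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Toy "velocity" image: a fast circular vessel on a slow background.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
velocity = np.where((yy - 32) ** 2 + (xx - 20) ** 2 < 36, 80.0, 5.0)
velocity += np.random.default_rng(1).normal(0, 1, (h, w))

features = velocity.reshape(-1, 1)            # cluster on velocity magnitude
labels = kmeans(features, k=2).reshape(h, w)

# "Single mouse click": the user picks the cluster under a chosen pixel.
click_y, click_x = 32, 20
vessel_mask = labels == labels[click_y, click_x]
print("vessel pixels:", int(vessel_mask.sum()))
```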

  13. Reliable computer systems design and evaluation

    CERN Document Server

    Siewiorek, Daniel

    2014-01-01

    Enhance your hardware/software reliability. Enhancement of system reliability has been a major concern of computer users and designers, and this major revision of the 1982 classic meets users' continuing need for practical information on this pressing topic. Included are case studies of reliable systems from manufacturers such as Tandem, Stratus, IBM, and Digital, as well as coverage of special systems such as the Galileo Orbiter fault protection system and AT&T telephone switching processors.

  14. Metasynthetic computing and engineering of complex systems

    CERN Document Server

    Cao, Longbing

    2015-01-01

    Provides a comprehensive overview and introduction to the concepts, methodologies, analysis, design and applications of metasynthetic computing and engineering. The author: Presents an overview of complex systems, especially open complex giant systems such as the Internet, complex behavioural and social problems, and actionable knowledge discovery and delivery in the big data era. Discusses ubiquitous intelligence in complex systems, including human intelligence, domain intelligence, social intelligence, network intelligence, data intelligence and machine intelligence, and their synergy thro

  15. Architecture, systems research and computational sciences

    CERN Document Server

    2012-01-01

    The Winter 2012 (vol. 14 no. 1) issue of the Nexus Network Journal is dedicated to the theme “Architecture, Systems Research and Computational Sciences”. This is an outgrowth of the session by the same name which took place during the eighth international, interdisciplinary conference “Nexus 2010: Relationships between Architecture and Mathematics, held in Porto, Portugal, in June 2010. Today computer science is an integral part of even strictly historical investigations, such as those concerning the construction of vaults, where the computer is used to survey the existing building, analyse the data and draw the ideal solution. What the papers in this issue make especially evident is that information technology has had an impact at a much deeper level as well: architecture itself can now be considered as a manifestation of information and as a complex system. The issue is completed with other research papers, conference reports and book reviews.

  16. NIF Integrated Computer Controls System Description

    Energy Technology Data Exchange (ETDEWEB)

    VanArsdall, P.

    1998-01-26

    This System Description introduces the NIF Integrated Computer Control System (ICCS). The architecture is sufficiently abstract to allow the construction of many similar applications from a common framework. As discussed below, over twenty software applications derived from the framework comprise the NIF control system. This document lays the essential foundation for understanding the ICCS architecture. The NIF design effort is motivated by the magnitude of the task. Figure 1 shows a cut-away rendition of the coliseum-sized facility. The NIF requires integration of about 40,000 atypical control points, must be highly automated and robust, and will operate continuously around the clock. The control system coordinates several experimental cycles concurrently, each at different stages of completion. Furthermore, facilities such as the NIF represent major capital investments that will be operated, maintained, and upgraded for decades. The computers, control subsystems, and functionality must be relatively easy to extend or replace periodically with newer technology.

  17. Cloud Computing for Standard ERP Systems

    DEFF Research Database (Denmark)

    Schubert, Petra; Adisa, Femi

    ... for the operation of ERP systems. We argue that the phenomenon of cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels...

  18. Logical Access Control Mechanisms in Computer Systems.

    Science.gov (United States)

    Hsiao, David K.

    The subject of access control mechanisms in computer systems is concerned with effective means to protect the anonymity of private information on the one hand, and to regulate the access to shareable information on the other hand. Effective means for access control may be considered on three levels: memory, process and logical. This report is a…

  19. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  20. Prestandardisation Activities for Computer Based Safety Systems

    DEFF Research Database (Denmark)

    Taylor, J. R.; Bologna, S.; Ehrenberger, W.

    1981-01-01

    Questions of technical safety are becoming more and more important. Due to the higher complexity of their functions, computer based safety systems have special problems. Researchers, producers, licensing personnel and customers have met on a European basis to exchange knowledge and formulate positions...

  1. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  2. Computational system for activity calculation of radiopharmaceuticals

    African Journals Online (AJOL)

    ... this is especially relevant in big countries like Brazil, where the distance from one state to another can be greater than the distance between countries in continents like Europe. The purpose of this paper is to describe a computational system developed to evaluate the dose of radiopharmaceuticals from production until the ...

  3. Performance Aspects of Synthesizable Computing Systems

    DEFF Research Database (Denmark)

    Schleuniger, Pascal

    ... of interfaces can be integrated on a single device. This thesis consists of five parts that address performance aspects of synthesizable computing systems on FPGAs. First, it is evaluated how synthesizable processor cores can exploit current state-of-the-art FPGA architectures. This evaluation results...

  4. Lumber Grading With A Computer Vision System

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Over the past few years significant progress has been made in developing a computer vision system for locating and identifying defects on surfaced hardwood lumber. Unfortunately, until September of 1988 little research had gone into developing methods for analyzing rough lumber. This task is arguably more complex than the analysis of surfaced lumber. The prime...

  5. Personal healthcare system using cloud computing.

    Science.gov (United States)

    Takeuchi, Hiroshi; Mayuzumi, Yuuki; Kodama, Naoki; Sato, Keiichi

    2013-01-01

    A personal healthcare system used with cloud computing has been developed. It enables a daily time-series of personal health and lifestyle data to be stored in the cloud through mobile devices. The cloud automatically extracts personally useful information, such as rules and patterns concerning lifestyle and health conditions embedded in the personal big data, by using a data mining technology. The system provides three editions (Diet, Lite, and Pro) corresponding to users' needs.

  6. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in.Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti

  7. Space systems computer-aided design technology

    Science.gov (United States)

    Garrett, L. B.

    1984-01-01

    The interactive Design and Evaluation of Advanced Spacecraft (IDEAS) system is described, together with planned capability increases in the IDEAS system. The system's disciplines consist of interactive graphics and interactive computing. A single user at an interactive terminal can create, design, analyze, and conduct parametric studies of earth-orbiting satellites, which represents a timely and cost-effective method during the conceptual design phase where various missions and spacecraft options require evaluation. Spacecraft concepts evaluated include microwave radiometer satellites, communication satellite systems, solar-powered lasers, power platforms, and orbiting space stations.

  8. International Conference on Soft Computing Systems

    CERN Document Server

    Panigrahi, Bijaya

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented in International Conference on Soft Computing Systems (ICSCS 2015) held at Noorul Islam Centre for Higher Education, Chennai, India. These research papers provide the latest developments in the emerging areas of Soft Computing in Engineering and Technology. The book is organized in two volumes and discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. It presents invited papers from the inventors/originators of new applications and advanced technologies.

  9. Landauer Bound for Analog Computing Systems

    CERN Document Server

    Diamantini, M. Cristina; Trugenberger, Carlo A.

    2016-01-01

    By establishing a relation between information erasure and continuous phase transitions we generalise the Landauer bound to analog computing systems. The entropy production per degree of freedom during erasure of an analog variable (reset to standard value) is given by the logarithm of the configurational volume measured in units of its minimal quantum. As a consequence every computation has to be carried on with a finite number of bits and infinite precision is forbidden by the fundamental laws of physics, since it would require an infinite amount of energy.
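
    The bound stated in the abstract can be written compactly. The following LaTeX restatement is an interpretation of the abstract's wording, not a formula copied from the paper:

```latex
% Erasure (reset) of one analog degree of freedom that occupies a
% configurational volume \Gamma, measured in units of its minimal
% quantum \gamma, produces entropy and costs work at least
\Delta S \;\ge\; k_B \ln\frac{\Gamma}{\gamma},
\qquad
W \;\ge\; k_B T \ln\frac{\Gamma}{\gamma}.
% The two-state case \Gamma/\gamma = 2 recovers Landauer's k_B T \ln 2;
% infinite precision (\gamma \to 0) would require infinite work.
```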

  10. Embedded systems for supporting computer accessibility.

    Science.gov (United States)

    Mulfari, Davide; Celesti, Antonio; Fazio, Maria; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Nowadays, customized AT software solutions allow their users to interact with various kinds of computer systems. Such tools are generally available on personal devices (e.g., smartphones, laptops and so on) commonly used by a person with a disability. In this paper, we investigate a way of using the aforementioned AT equipment in order to access many different devices without assistive preferences. The solution takes advantage of open source hardware and its core component consists of an affordable Linux embedded system: it grabs data coming from the assistive software, which runs on the user's personal device, then, after processing, it generates native keyboard and mouse HID commands for the target computing device controlled by the end user. This process supports any operating system available on the target machine and it requires no specialized software installation; therefore the user with a disability can rely on a single assistive tool to control a wide range of computing platforms, including conventional computers and many kinds of mobile devices, which receive input commands through the USB HID protocol.
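
    The last step described above, emitting native keyboard input over USB HID, can be illustrated with the standard 8-byte boot-keyboard report layout. This sketch is generic HID knowledge, not code from the paper's embedded board:

```python
# Packing a standard 8-byte USB HID keyboard report, the kind of native
# input event the embedded system would emit to the target machine
# (usage IDs from the HID Usage Tables; 0x04 = 'a').
def hid_keyboard_report(keycodes, shift=False):
    modifier = 0x02 if shift else 0x00      # bit 1 = Left Shift
    keys = (list(keycodes) + [0] * 6)[:6]   # up to 6 simultaneous keys
    return bytes([modifier, 0x00] + keys)   # [modifier, reserved, 6 keys]

report = hid_keyboard_report([0x04], shift=True)   # 'A'
print(report.hex())                                # -> 0200040000000000
```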

  11. Applicability of Computational Systems Biology in Toxicology

    DEFF Research Database (Denmark)

    Kongsbak, Kristine Grønning; Hadrup, Niels; Audouze, Karine Marie Laure

    2014-01-01

    and databases are used to model and predict effects of chemicals on, for instance, human health. In toxicology, computational systems biology enables identification of important pathways and molecules from large data sets; tasks that can be extremely laborious when performed by a classical literature search....... However, computational systems biology offers more advantages than providing a high-throughput literature search; it may form the basis for establishment of hypotheses on potential links between environmental chemicals and human diseases, which would be very difficult to establish experimentally...... be used to establish hypotheses on links between the chemical and human diseases. Such information can also be applied for designing more intelligent animal/cell experiments that can test the established hypotheses. Here, we describe how and why to apply an integrative systems biology method...

  12. Thermodynamics of Computational Copying in Biochemical Systems

    Science.gov (United States)

    Ouldridge, Thomas E.; Govern, Christopher C.; ten Wolde, Pieter Rein

    2017-04-01

    Living cells use readout molecules to record the state of receptor proteins, similar to measurements or copies in typical computational devices. But is this analogy rigorous? Can cells be optimally efficient, and if not, why? We show that, as in computation, a canonical biochemical readout network generates correlations; extracting no work from these correlations sets a lower bound on dissipation. For general input, the biochemical network cannot reach this bound, even with arbitrarily slow reactions or weak thermodynamic driving. It faces an accuracy-dissipation trade-off that is qualitatively distinct from and worse than implied by the bound, and more complex steady-state copy processes cannot perform better. Nonetheless, the cost remains close to the thermodynamic bound unless accuracy is extremely high. Additionally, we show that biomolecular reactions could be used in thermodynamically optimal devices under exogenous manipulation of chemical fuels, suggesting an experimental system for testing computational thermodynamics.
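
    A plausible way to write the dissipation bound mentioned above uses the mutual information between the input and its copy. This is an assumption based on standard results in measurement thermodynamics, not a formula quoted from the paper:

```latex
% If the readout y holds mutual information I(x;y) about the input x and
% no work is extracted from these correlations, the dissipated work per
% copy operation is bounded below (k_B: Boltzmann constant, T: temperature):
W_{\mathrm{diss}} \;\ge\; k_B T \, I(x;y),
\qquad
I(x;y) = \sum_{x,y} p(x,y)\,\ln\frac{p(x,y)}{p(x)\,p(y)}.
```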

  13. Risk analysis of computer system designs

    Science.gov (United States)

    Vallone, A.

    1981-01-01

    Adverse events during implementation can affect the final capabilities, schedule and cost of a computer system even though the system was accurately designed and evaluated. Risk analysis enables the manager to forecast the impact of those events and to request design revisions or contingency plans in time, before making any decision. This paper presents a structured procedure for an effective risk analysis. The procedure identifies the required activities, separates subjective assessments from objective evaluations, and defines a risk measure to determine the analysis results. The procedure is consistent with the system design evaluation and enables a meaningful comparison among alternative designs.

  14. Decomposability queueing and computer system applications

    CERN Document Server

    Courtois, P J

    1977-01-01

    Decomposability: Queueing and Computer System Applications presents a set of powerful methods for systems analysis. This 10-chapter text covers the theory of nearly completely decomposable systems upon which specific analytic methods are based. The first chapters deal with some of the basic elements of a theory of nearly completely decomposable stochastic matrices, including the Simon-Ando theorems and the perturbation theory. The succeeding chapters are devoted to the analysis of stochastic queuing networks that appear as a type of key model. These chapters also discuss congestion problems in

  15. Focus stacking: Comparing commercial top-end set-ups with a semi-automatic low budget approach. A possible solution for mass digitization of type specimens.

    Science.gov (United States)

    Brecko, Jonathan; Mathys, Aurore; Dekoninck, Wouter; Leponce, Maurice; VandenSpiegel, Didier; Semal, Patrick

    2014-01-01

    In this manuscript we present a focus stacking system composed of commercial photographic equipment. The system is inexpensive compared to high-end commercial focus stacking solutions. We tested this system and compared the results with several different software packages (CombineZP, Auto-Montage, Helicon Focus and Zerene Stacker). We tested our final stacked picture against pictures obtained from two high-end focus stacking solutions: a Leica MZ16A with DFC500 and a Leica Z6APO with DFC290. Zerene Stacker and Helicon Focus both provided satisfactory results. However, Zerene Stacker gives the user more possibilities in terms of control of the software, batch processing and retouching. The outcome of the test on high-end solutions demonstrates that our approach performs better in several ways. The resolution of the tested extended-focus pictures is much higher than that of those from the Leica systems. The flash lighting inside the Ikea closet creates an evenly illuminated picture, without struggling with filters, diffusers, etc. The largest benefit is the price of the set-up, approximately € 3,000, which is 8 and 10 times less than the Leica Z6APO and Leica MZ16A set-ups, respectively. Overall, this enables institutions to purchase multiple solutions or to start digitising the type collection on a large scale even with a small budget.

  16. Computer aided system engineering for space construction

    Science.gov (United States)

    Racheli, Ugo

    1989-01-01

    This viewgraph presentation covers the following topics. Construction activities envisioned for the assembly of large platforms in space (as well as interplanetary spacecraft and bases on extraterrestrial surfaces) require computational tools that exceed the capability of conventional construction management programs. The Center for Space Construction is investigating the requirements for new computational tools and, at the same time, suggesting the expansion of graduate and undergraduate curricula to include proficiency in Computer Aided Engineering (CAE) through design courses and individual or team projects in advanced space systems design. In the center's research, special emphasis is placed on problems of constructability and of the interruptability of planned activity sequences to be carried out by crews operating under hostile environmental conditions. The departure point for the planned work is the acquisition of the MCAE I-DEAS software, developed by the Structural Dynamics Research Corporation (SDRC), and its expansion to the level of capability denoted by the acronym IDEAS**2 currently used for configuration maintenance on Space Station Freedom. In addition to improving proficiency in the use of I-DEAS and IDEAS**2, it is contemplated that new software modules will be developed to expand the architecture of IDEAS**2. Such modules will deal with those analyses that require the integration of a space platform's configuration with a breakdown of planned construction activities and with a failure modes analysis to support computer aided system engineering (CASE) applied to space construction.

  17. Interactive computer-enhanced remote viewing system

    Energy Technology Data Exchange (ETDEWEB)

    Tourtellott, J.A.; Wagner, J.F. [Mechanical Technology Incorporated, Latham, NY (United States)

    1995-10-01

    Remediation activities such as decontamination and decommissioning (D&D) typically involve materials and activities hazardous to humans. Robots are an attractive way to conduct such remediation, but for efficiency they need a good three-dimensional (3-D) computer model of the task space where they are to function. This model can be created from engineering plans and architectural drawings and from empirical data gathered by various sensors at the site. The model is used to plan robotic tasks and verify that selected paths are clear of obstacles. This report describes the development of an Interactive Computer-Enhanced Remote Viewing System (ICERVS), a software system to provide a reliable geometric description of a robotic task space, and enable robotic remediation to be conducted more effectively and more economically.

  18. Checkpoint triggering in a computer system

    Energy Technology Data Exchange (ETDEWEB)

    Cher, Chen-Yong

    2016-09-06

    According to an aspect, a method for triggering creation of a checkpoint in a computer system includes executing a task in a processing node of the computer system and determining whether it is time to read a monitor associated with a metric of the task. The monitor is read to determine a value of the metric based on determining that it is time to read the monitor. A threshold for triggering creation of the checkpoint is determined based on the value of the metric. Based on determining that the value of the metric has crossed the threshold, the checkpoint including state data of the task is created to enable restarting execution of the task upon a restart operation.
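
    A minimal sketch of the triggering loop described above; all names (task, monitor, threshold_for) are hypothetical, since the text does not fix an API:

```python
# Sketch of metric-driven checkpoint triggering (hypothetical API).
import pickle
import time

READ_INTERVAL = 5.0  # seconds between monitor reads (illustrative choice)

def run_with_checkpoints(task, monitor, threshold_for, save_path):
    last_read = time.monotonic()
    while not task.done():
        task.step()                              # execute the task
        if time.monotonic() - last_read < READ_INTERVAL:
            continue                             # not yet time to read monitor
        last_read = time.monotonic()
        value = monitor.read()                   # value of the task's metric
        threshold = threshold_for(value)         # threshold depends on metric
        if value >= threshold:                   # metric crossed the threshold
            with open(save_path, "wb") as f:
                pickle.dump(task.state(), f)     # checkpoint = task state data
```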

  19. From three-dimensional long-term tectonic numerical models to synthetic structural data: semi-automatic extraction of instantaneous & finite strain quantities

    Science.gov (United States)

    Duclaux, Guillaume; May, Dave

    2017-04-01

    ... compute individual ellipsoid parameters (orientation, shape, etc.) and represent the finite deformation for any region of interest in a Flinn diagram. In addition, we can use the finite strain ellipsoids to estimate the prevailing foliation and/or lineation directions anywhere in the model. These two methods are applied to measure the instantaneous and finite deformation patterns within an oblique rift zone undergoing constant extension in the absence of surface processes.
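
    A worked sketch of the finite-strain computation the abstract alludes to, assuming the deformation gradient F for a region is already known (extracting F from the numerical models is not reproduced here):

```python
# Finite-strain ellipsoid from a deformation gradient F (illustrative F).
import numpy as np

F = np.array([[1.8, 0.3, 0.0],     # e.g. stretching plus simple shear
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.5]])

# Left Cauchy-Green tensor B = F F^T; its eigenvectors give the ellipsoid
# axes and the square roots of its eigenvalues the principal stretches.
B = F @ F.T
eigval, eigvec = np.linalg.eigh(B)  # eigh sorts eigenvalues ascending
s3, s2, s1 = np.sqrt(eigval)        # so s1 >= s2 >= s3

lineation = eigvec[:, 2]            # axis of maximum stretch
foliation_normal = eigvec[:, 0]     # pole to the flattening plane

# Flinn diagram coordinates: ordinate s1/s2 against abscissa s2/s3.
print("stretches:", s1, s2, s3)
print("Flinn point:", s2 / s3, s1 / s2)
```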

  20. Byzantine-resilient distributed computing systems

    OpenAIRE

    Patnaik, L. M.; Balaji, S.

    1987-01-01

    This paper is aimed at reviewing the notion of Byzantine-resilient distributed computing systems, the relevant protocols and their possible applications as reported in the literature. The three agreement problems, namely, the consensus problem, the interactive consistency problem, and the generals problem have been discussed. Various agreement protocols for the Byzantine generals problem have been summarized in terms of their performance and level of fault-tolerance. The three classes of Byza...

  1. Music Genre Classification Systems - A Computational Approach

    OpenAIRE

    Ahrendt, Peter; Hansen, Lars Kai

    2006-01-01

    Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular...

  2. Visual Turing test for computer vision systems.

    Science.gov (United States)

    Geman, Donald; Geman, Stuart; Hallonquist, Neil; Younes, Laurent

    2015-03-24

    Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a "visual Turing test": an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question ("just-in-time truthing"). The test is then administered to the computer-vision system, one question at a time. After the system's answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers: the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects.
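
    The administration protocol described above can be sketched in a few lines. This toy version is illustrative, not the authors' API: a real query engine would use learned statistical constraints to keep answers unpredictable, whereas this one just walks a fixed question bank in order.

```python
# Minimal runnable sketch of the test protocol (toy question bank).
import random

class QueryEngine:
    """Proposes binary questions; here simply the next unasked one."""
    def __init__(self, question_bank):
        self.bank = list(question_bank)        # (question, correct_answer)
    def propose(self, history):
        asked = {q for q, _ in history}
        remaining = [qa for qa in self.bank if qa[0] not in asked]
        return remaining[0] if remaining else None

def administer(engine, vision_system):
    history, score = [], 0
    while (qa := engine.propose(history)) is not None:
        question, truth = qa
        guess = vision_system(question, history)   # one question at a time
        score += (guess == truth)
        history.append((question, truth))          # truth revealed afterwards
    return score, len(history)

bank = [("is there a person?", True), ("is the person left of the car?", False)]
random_system = lambda q, h: random.choice([True, False])
print(administer(QueryEngine(bank), random_system))
```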

  3. Performance evaluation of a computed radiography system

    Energy Technology Data Exchange (ETDEWEB)

    Roussilhe, J.; Fallet, E. [Carestream Health France, 71 - Chalon/Saone (France); Mango, St.A. [Carestream Health, Inc. Rochester, New York (United States)

    2007-07-01

    Computed radiography (CR) standards have been formalized and published in Europe and in the US. The CR system classification is defined in those standards by: minimum normalized signal-to-noise ratio (SNRN), and maximum basic spatial resolution (SRb). Both the signal-to-noise ratio (SNR) and the contrast sensitivity of a CR system depend on the dose (exposure time and conditions) at the detector. Because of their wide dynamic range, the same storage phosphor imaging plate can qualify for all six CR system classes. The exposure characteristics from 30 to 450 kV, the contrast sensitivity, and the spatial resolution of the KODAK INDUSTREX CR Digital System have been thoroughly evaluated. This paper will present some of the factors that determine the system's spatial resolution performance. (authors)

  4. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    This DoD HBC/MI Equipment/Instrumentation grant was awarded in October 2014 for the purchase ... High Performance Computing (HPC) course taught in the department of computer science so as to attract more graduate students from many disciplines where their research ... (Report title: A Heterogeneous High-Performance System for Computational and Computer Science; contract number W911NF-15-1-0023.)

  5. System/360 Computer Assisted Network Scheduling (CANS) System

    Science.gov (United States)

    Brewer, A. C.

    1972-01-01

    Computer assisted scheduling techniques that produce conflict-free and efficient schedules have been developed and implemented to meet the needs of the Manned Space Flight Network. The CANS system provides effective management of resources in a complex scheduling environment. The system is an automated tool for resource scheduling, controlling, planning, and information storage and retrieval.

  6. Physical Optics Based Computational Imaging Systems

    Science.gov (United States)

    Olivas, Stephen Joseph

    There is an ongoing demand on behalf of the consumer, medical and military industries to make lighter weight, higher resolution, wider field-of-view and extended depth-of-focus cameras. This leads to design trade-offs between performance and cost, be it size, weight, power, or expense. This has brought attention to finding new ways to extend the design space while adhering to cost constraints. Extending the functionality of an imager in order to achieve extraordinary performance is a common theme of computational imaging, a field of study which uses additional hardware along with tailored algorithms to formulate and solve inverse problems in imaging. This dissertation details four specific systems within this emerging field: a Fiber Bundle Relayed Imaging System, an Extended Depth-of-Focus Imaging System, a Platform Motion Blur Image Restoration System, and a Compressive Imaging System. The Fiber Bundle Relayed Imaging System is part of a larger project, where the work presented in this thesis was to use image processing techniques to mitigate problems inherent to fiber bundle image relay and then, form high-resolution wide field-of-view panoramas captured from multiple sensors within a custom state-of-the-art imager. The Extended Depth-of-Focus System goals were to characterize the angular and depth dependence of the PSF of a focal swept imager in order to increase the acceptably focused imaged scene depth. The goal of the Platform Motion Blur Image Restoration System was to build a system that can capture a high signal-to-noise ratio (SNR), long-exposure image which is inherently blurred while at the same time capturing motion data using additional optical sensors in order to deblur the degraded images. Lastly, the objective of the Compressive Imager was to design and build a system functionally similar to the Single Pixel Camera and use it to test new sampling methods for image generation and to characterize it against a traditional camera. These computational

  7. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  8. Radiation management computer system for Monju

    Energy Technology Data Exchange (ETDEWEB)

    Aoyama, Kei; Yasutomo, Katsumi [Fuji Electric Co. Ltd., Tokyo (Japan); Sudou, Takayuki [FFC, Ltd., Tokyo (Japan); Yamashita, Masahiro [Japan Nuclear Cycle Development Inst., Monju Construction Office, Tsuruga, Fukui (Japan); Hayata, Kenichi; Ueda, Hajime [Kosokuro Gijyutsu Service K.K., Tsuruga, Fukui (Japan); Hosokawa, Hideo [Nuclear Energy System Inc., Tsuruga, Fukui (Japan)

    2002-11-01

    Radiation management of nuclear power research institutes, nuclear power stations and other such facilities is strictly regulated under Japanese laws and management policies. Recently, the momentous issues of more accurate radiation dose management and increased work efficiency have been discussed. Up to now, Fuji Electric Company has supplied a large number of Radiation Management Systems to nuclear power stations and related nuclear facilities. We introduce a new radiation management computer system adopting WWW techniques for the Japan Nuclear Cycle Development Institute's MONJU Fast Breeder Reactor (MONJU). (author)

  9. Computational modeling of shallow geothermal systems

    CERN Document Server

    Al-Khoury, Rafid

    2011-01-01

    A Step-by-step Guide to Developing Innovative Computational Tools for Shallow Geothermal Systems Geothermal heat is a viable source of energy and its environmental impact in terms of CO2 emissions is significantly lower than conventional fossil fuels. Shallow geothermal systems are increasingly utilized for heating and cooling of buildings and greenhouses. However, their utilization is inconsistent with the enormous amount of energy available underneath the surface of the earth. Projects of this nature are not getting the public support they deserve because of the uncertainties associated with

  10. Some queuing network models of computer systems

    Science.gov (United States)

    Herndon, E. S.

    1980-01-01

    Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
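
    The G matrix mentioned above is suggestive of the normalization-constant table in Buzen's convolution algorithm for closed product-form queueing networks. The following sketch is that classical algorithm, offered as an illustration rather than the SR-52 program itself:

```python
# Buzen's convolution algorithm for a closed product-form queueing network.
def buzen_G(rho, N):
    """rho: relative service demands at each of K fixed-rate devices;
    N: job population. Returns the normalization constants G(0..N)."""
    K = len(rho)
    G = [1.0] + [0.0] * N            # G(0) = 1, G(n) = 0 before any device
    for k in range(K):               # fold in one device at a time
        for n in range(1, N + 1):    # update the row in place, n ascending
            G[n] = G[n] + rho[k] * G[n - 1]
    return G

G = buzen_G([0.4, 0.3, 0.2], N=5)
# Relative system throughput with N jobs: X(N) = G(N-1) / G(N).
print("X(5) =", G[4] / G[5])
```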

  11. Production Management System for AMS Computing Centres

    Science.gov (United States)

    Choutko, V.; Demakov, O.; Egorov, A.; Eline, A.; Shan, B. S.; Shi, R.

    2017-10-01

    The Alpha Magnetic Spectrometer [1] (AMS) has collected over 95 billion cosmic ray events since it was installed on the International Space Station (ISS) on May 19, 2011. To cope with the enormous flux of events, AMS uses 12 computing centers in Europe, Asia and North America, which have different hardware and software configurations. The centers participate in data reconstruction and Monte-Carlo (MC) simulation [2] (data and MC production), as well as in physics analysis. A data production management system has been developed to facilitate data and MC production tasks in the AMS computing centers, including job acquiring, submitting, monitoring, transferring, and accounting. It was designed to be modular, lightweight, and easy to deploy. The system is based on a Deterministic Finite Automaton [3] model, and is implemented in the script languages Python and Perl with the built-in sqlite3 database on Linux operating systems. Different batch management systems, file system storage, and transfer protocols are supported. The details of the integration with the Open Science Grid are presented as well.
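
    A minimal sketch of the Deterministic Finite Automaton idea behind such a production manager; the states and events are illustrative, since the abstract does not enumerate them:

```python
# DFA for production-job states: a transition table plus a step function.
TRANSITIONS = {
    ("acquired",   "submit"): "submitted",
    ("submitted",  "start"):  "running",
    ("running",    "finish"): "validating",
    ("running",    "fail"):   "submitted",   # retry after failure
    ("validating", "ok"):     "transferred",
}

def advance(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal event {event!r} in state {state!r}")

state = "acquired"
for event in ["submit", "start", "fail", "start", "finish", "ok"]:
    state = advance(state, event)
print(state)   # -> transferred
```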

  12. Semi-automatic 2D-to-3D conversion of human-centered videos enhanced by age and gender estimation

    Science.gov (United States)

    Fard, Mani B.; Bayazit, Ulug

    2014-01-01

    In this work, we propose a feasible 3D video generation method to enable high quality visual perception using a monocular uncalibrated camera. Anthropometric distances between standard facial landmarks are approximated based on the person's age and gender. These measurements are used in a 2-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in the initial stage of video shooting. It is followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position the real-time capturing is started and the foreground (viewed person) region is extracted for each frame. After an accurate parallax estimation, the extracted foreground is placed in front of the background image that was captured at the initial position. The constructed full view of the initial position, combined with the view of the secondary (current) position, forms the complete binocular pair during real-time video shooting. The subjective evaluation results demonstrate a competent depth perception quality through the proposed system.
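
    The final compositing step described above can be sketched as follows, assuming the foreground mask and the parallax shift have already been estimated (both are products of earlier stages not reproduced here; NumPy stands in for the real-time pipeline):

```python
# Composing one view of the stereo pair: shifted foreground over background.
import numpy as np

def compose_left_view(background, frame, mask, parallax_px):
    """Place the extracted foreground, shifted by the estimated parallax,
    over the background registered at the initial camera position."""
    left = background.copy()
    # np.roll is a stand-in for a proper horizontal shift with border handling.
    shifted_mask = np.roll(mask, parallax_px, axis=1)
    shifted_frame = np.roll(frame, parallax_px, axis=1)
    left[shifted_mask] = shifted_frame[shifted_mask]
    return left

h, w = 120, 160
background = np.zeros((h, w, 3), dtype=np.uint8)
frame = np.full((h, w, 3), 200, dtype=np.uint8)
mask = np.zeros((h, w), dtype=bool)
mask[40:80, 60:100] = True                       # toy foreground region
left = compose_left_view(background, frame, mask, parallax_px=6)
print(left[60, 70], left[60, 110])               # foreground vs background
```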

  13. TU-A-9A-06: Semi-Automatic Segmentation of Skin Cancer in High-Frequency Ultrasound Images: Initial Comparison with Histology

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Y [Univ. Alabama at Birmingham, Birmingham, AL (United States); Li, X [Medical College of Wisconsin, Milwaukee, WI (United States); Fishman, K [Sensus Healthcare, Boca Raton, FL (United States); Yang, X [Department of Radiation Oncology and Winship Cancer Institute, Emory Univ., Atlanta, GA (United States); Liu, T [Emory Univ, Atlanta, GA (United States)

    2014-06-15

    Purpose: In skin-cancer radiotherapy, the assessment of skin lesions is challenging; important features such as depth and width are hard to determine. The aim of this study is to develop an interactive segmentation method to delineate the tumor boundary using high-frequency ultrasound images and to correlate the segmentation results with the histopathological tumor dimensions. Methods: We analyzed 6 patients with a total of 10 skin lesions involving the face, scalp, and hand. The patients' skin lesions were scanned using a high-frequency ultrasound system (Episcan, LONGPORT, INC., PA, U.S.A.) with a 30-MHz single-element transducer. The lateral resolution was 14.6 micron and the axial resolution was 3.85 micron for the ultrasound image. Semiautomatic image segmentation was performed to extract the cancer region, using a robust statistics driven active contour algorithm. The corresponding histology images were also obtained after tumor resection and served as the reference standard in this study. Results: Eight out of the 10 lesions were successfully segmented. The ultrasound tumor delineation correlates well with the histology assessment in all measurements, such as depth, size, and shape. The depths measured by ultrasound differ from those in the histology images by 9.3% on average. The remaining 2 cases suffered from mismatches between the pathology and ultrasound images. Conclusion: High-frequency ultrasound is a noninvasive, accurate and easily accessible modality for imaging skin cancer. Our segmentation method, combined with high-frequency ultrasound technology, provides a promising tool to estimate the extent of the tumor, to guide the radiotherapy procedure and to monitor treatment response.

  14. Computation in Dynamically Bounded Asymmetric Systems

    Science.gov (United States)

    Rutishauser, Ueli; Slotine, Jean-Jacques; Douglas, Rodney

    2015-01-01

    Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable ‘expansion’ dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems. PMID:25617645

  15. System administration of ATLAS TDAQ computing environment

    Energy Technology Data Exchange (ETDEWEB)

    Adeel-Ur-Rehman, A [National Centre for Physics, Islamabad (Pakistan); Bujor, F; Dumitrescu, A; Dumitru, I; Leahu, M; Valsan, L [Politehnica University of Bucharest (Romania); Benes, J [Zapadoceska Univerzita v Plzni (Czech Republic); Caramarcu, C [National Institute of Physics and Nuclear Engineering (Romania); Dobson, M; Unel, G [University of California at Irvine (United States); Oreshkin, A [St. Petersburg Nuclear Physics Institute (Russian Federation); Popov, D [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany); Zaytsev, A, E-mail: Alexandr.Zaytsev@cern.c [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation)

    2010-04-01

    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which deals with the administration of the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating at the LHC collider at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, about 40 multi-screen user interface machines installed in the control rooms, and various hardware and service monitoring machines as well. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The software distribution requirements are met by a two-level NFS based solution. The hardware and network monitoring systems of ATLAS TDAQ are based on NAGIOS, with a MySQL cluster behind it for accounting and storing the collected monitoring data, IPMI tools, CERN LANDB and dedicated tools developed by the group, e.g. ConfdbUI. The user management schema deployed in the TDAQ environment is founded on an authentication and role management system based on LDAP. External access to the ATLAS online computing facilities is provided by means of gateways supplied with an accounting system as well. Current activities of the group include deployment of the centralized storage system, testing and validating hardware solutions for future use within the ATLAS TDAQ environment including new multi-core blade servers, developing GUI tools for user authentication and role management, testing and validating 64-bit OS, and upgrading the existing TDAQ hardware components, authentication servers and gateways.

  16. [Renewal of NIHS computer network system].

    Science.gov (United States)

    Segawa, Katsunori; Nakano, Tatsuya; Saito, Yoshiro

    2012-01-01

    An updated version of the National Institute of Health Sciences Computer Network System (NIHS-NET) is described. In order to reduce its electric power consumption, the main server system was newly built using virtual machine technology. The services that each machine provided in the previous network system had to be maintained as much as possible. Thus, an individual server was constructed for each service, because a virtual server often shows decreased performance compared with a physical server. As a result, although the number of virtual servers increased and the network communication among the servers became complicated, the conventional services were maintained and the security level was improved, while saving electric power. The updated NIHS-NET bears multiple security countermeasures. To make maximal use of these measures, network security awareness by all users is expected.

  17. Visual computing scientific visualization and imaging systems

    CERN Document Server

    2014-01-01

    This volume aims to stimulate discussions on research involving the use of data and digital images as an understanding approach for analysis and visualization of phenomena and experiments. The emphasis is put not only on graphically representing data as a way of increasing its visual analysis, but also on the imaging systems which contribute greatly to the comprehension of real cases. Scientific Visualization and Imaging Systems encompass multidisciplinary areas, with applications in many knowledge fields such as Engineering, Medicine, Material Science, Physics, Geology, Geographic Information Systems, among others. This book is a selection of 13 revised and extended research papers presented in the International Conference on Advanced Computational Engineering and Experimenting -ACE-X conferences 2010 (Paris), 2011 (Algarve), 2012 (Istanbul) and 2013 (Madrid). The examples were particularly chosen from materials research, medical applications, general concepts applied in simulations and image analysis and ot...

  18. Epilepsy analytic system with cloud computing.

    Science.gov (United States)

    Shen, Chia-Ping; Zhou, Weizhi; Lin, Feng-Seng; Sung, Hsiao-Ya; Lam, Yan-Yu; Chen, Wei; Lin, Jeng-Wei; Pan, Ming-Kai; Chiu, Ming-Jang; Lai, Feipei

    2013-01-01

    Biomedical data analytic systems have played an important role in clinical diagnosis for several decades. Analyzing such big data to provide decision support for physicians is an emerging research area. This paper presents a parallelized web-based tool with a cloud computing service architecture to analyze epilepsy. Several modern analytic functions, wavelet transform, genetic algorithm (GA), and support vector machine (SVM), are cascaded in the system. To demonstrate the effectiveness of the system, it has been verified on two kinds of electroencephalography (EEG) data: short-term EEG and long-term EEG. The results reveal that our approach achieves a total classification accuracy higher than 90%. In addition, the entire training is accelerated about 4.66 times, and the prediction time also meets real-time requirements.
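
    The cascade described above can be sketched on synthetic signals. This assumes the PyWavelets and scikit-learn packages; the GA feature-selection stage is omitted, and the toy signals are not real EEG:

```python
# Sketch of a wavelet -> SVM cascade on synthetic "EEG"-like signals.
import numpy as np
import pywt
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def wavelet_features(signal):
    # Multi-level wavelet decomposition; sub-band energies as features.
    coeffs = pywt.wavedec(signal, "db4", level=4)
    return np.array([float(np.sum(c ** 2)) for c in coeffs])

# Toy data: class 1 adds a 3 Hz rhythmic component to white noise.
t = np.linspace(0.0, 4.0, 1024)
X, y = [], []
for label in (0, 1):
    for _ in range(40):
        s = rng.normal(0.0, 1.0, t.size) + label * 2.0 * np.sin(2 * np.pi * 3 * t)
        X.append(wavelet_features(s))
        y.append(label)

Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(y),
                                      test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```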

  19. Tutoring system for nondestructive testing using computer

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Koo; Koh, Sung Nam [Joong Ang Inspection Co.,Ltd., Seoul (Korea, Republic of); Shim, Yun Ju; Kim, Min Koo [Dept. of Computer Engineering, Aju University, Suwon (Korea, Republic of)

    1997-10-15

    This paper introduces a multimedia tutoring system for nondestructive testing using a personal computer. Nondestructive testing, one of the chief methods for inspecting welds and many other components, has a technical basis that is very difficult for NDT inspectors to understand without wide experience, and considerable repeated education and training are necessary to maintain their knowledge. A tutoring system that can simulate NDT work is proposed to address this problem. The tutoring system presents the basic theories of nondestructive testing in a book style with video images and hyperlinks, and it offers practice sessions in which users can simulate the testing equipment. The book-style presentation and simulation practice provide an effective and individual environment for learning nondestructive testing.

  20. RASCAL: A Rudimentary Adaptive System for Computer-Aided Learning.

    Science.gov (United States)

    Stewart, John Christopher

    Both the background of computer-assisted instruction (CAI) systems in general and the requirements of a computer-aided learning system which would be a reasonable assistant to a teacher are discussed. RASCAL (Rudimentary Adaptive System for Computer-Aided Learning) is a first attempt at defining a CAI system which would individualize the learning…

  1. Interactive computer enhanced remote viewing system

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D.A.; Tourtellott, J.A.

    1994-12-31

    The Interactive, Computer Enhanced, Remote Viewing System (ICERVS) is a volumetric data system designed to help the Department of Energy (DOE) improve remote operations in hazardous sites by providing reliable and accurate maps of task spaces where robots will clean up nuclear wastes. The ICERVS mission is to acquire, store, integrate and manage all the sensor data for a site and to provide the necessary tools to facilitate its visualization and interpretation. Empirical sensor data enters through the Common Interface for Sensors and, after initial processing, is stored in the Volumetric Database. The data can be analyzed and displayed via a Graphic User Interface with a variety of visualization tools. Other tools permit the construction of geometric objects, such as wire frame models, to represent objects which the operator may recognize in the live TV image. A computer image can be generated that matches the viewpoint of the live TV camera at the remote site, facilitating access to site data. Lastly, the data can be gathered, processed, and transmitted in acceptable form to a robotic controller. Descriptions are given of all these components. The final phase of the ICERVS project, which has just begun, will produce a full scale system and demonstrate it at a DOE site to be selected. A task added to this phase will adapt the ICERVS to meet the needs of the Dismantlement and Decommissioning (D&D) work at the Oak Ridge National Laboratory (ORNL).

  2. An Applet-based Anonymous Distributed Computing System.

    Science.gov (United States)

    Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael

    2001-01-01

    Defines anonymous distributed computing systems and focuses on the specifics of a Java, applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)

  3. Semi-Automatic Methods of Knowledge Enhancement

    Science.gov (United States)

    1988-12-05

    Z? (2.1/2.2); Z? = C & E (2.3); Z? ~= A (2.4). For example: city = pop-1m & palace & parks & cathedral; city = castle & pop-1m & parks; Z? = palace v cathedral; Z? = castle; what shall I call Z? sights. 3) Absorption. Given a set of rules, the body of which is completely contained within the body of the ... using reverse polish notation. Reverse polish allows us to ignore the requirement for bracketing and separators. Let sym(S) represent the combined set of

  4. pinktoe: Semi-automatic Traversal of Trees

    OpenAIRE

    Guy Nason

    2005-01-01

    Tree based methods in S or R are extremely useful and popular. For simple trees and memorable variables it is easy to predict the outcome for a new case using only a standard decision tree diagram. However, for large trees or trees where the variable description is complex the decision tree diagram is often not enough. This article describes pinktoe: an R package containing two tools to assist with the semiautomatic traversal of trees. The PT tool creates a widget for each node to be visited ...

  5. pinktoe: Semi-automatic Traversal of Trees

    Directory of Open Access Journals (Sweden)

    Guy P. Nason

    2005-04-01

    Full Text Available Tree based methods in S or R are extremely useful and popular. For simple trees and memorable variables it is easy to predict the outcome for a new case using only a standard decision tree diagram. However, for large trees or trees where the variable description is complex the decision tree diagram is often not enough. This article describes pinktoe: an R package containing two tools to assist with the semiautomatic traversal of trees. The PT tool creates a widget for each node to be visited in the tree that is needed to make a decision and permits the user to make decisions using radiobuttons. The pinktoe function generates a suite of HTML and Perl files that permit a CGI-enabled website to issue step-by-step questions to a user wishing to make a prediction using a tree.
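
    The question-per-node traversal that pinktoe automates can be sketched in a few lines. This is plain Python illustrating the control flow only, not the package's R/Tk widgets or its generated Perl/CGI pages:

```python
# Semi-automatic tree traversal: one question per visited decision node.
tree = {
    "question": "petal length < 2.5?",
    "yes": {"leaf": "setosa"},
    "no": {
        "question": "petal width < 1.8?",
        "yes": {"leaf": "versicolor"},
        "no": {"leaf": "virginica"},
    },
}

def traverse(node, ask):
    while "leaf" not in node:           # ask only at the nodes actually visited
        node = node["yes" if ask(node["question"]) else "no"]
    return node["leaf"]

# Scripted answers stand in for the user's radiobutton clicks.
answers = iter([False, True])
print(traverse(tree, lambda q: next(answers)))   # -> versicolor
```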

  6. QUASI-RANDOM TESTING OF COMPUTER SYSTEMS

    Directory of Open Access Journals (Sweden)

    S. V. Yarmolik

    2013-01-01

    Full Text Available Various modified random testing approaches have been proposed for computer system testing in the black-box environment. Their effectiveness has been evaluated on typical failure patterns by employing three measures: P-measure, E-measure and F-measure. Quasi-random testing, a modified version of random testing, has been proposed and analyzed. The quasi-random Sobol sequences and modified Sobol sequences are used as the test patterns. Some new methods for Sobol sequence generation have been proposed and analyzed.
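
    Quasi-random test generation can be illustrated with a base-2 Van der Corput sequence, the one-dimensional core of a Sobol sequence; this sketch stands in for the full generators analysed in the paper:

```python
# Low-discrepancy test inputs via the Van der Corput sequence (base 2).
def van_der_corput(n, base=2):
    """n-th low-discrepancy point in [0, 1): reverse the digits of n."""
    x, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        x += digit / denom
    return x

def quasi_random_tests(count, domain=(0, 256)):
    lo, hi = domain
    return [lo + int(van_der_corput(i + 1) * (hi - lo)) for i in range(count)]

# Inputs spread far more evenly than pure random testing would give.
print(quasi_random_tests(8))   # [128, 64, 192, 32, 160, 96, 224, 16]
```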

  7. Large-scale neuromorphic computing systems

    Science.gov (United States)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  8. Railroad Classification Yard Technology Manual: Volume II : Yard Computer Systems

    Science.gov (United States)

    1981-08-01

    This volume (Volume II) of the Railroad Classification Yard Technology Manual documents the railroad classification yard computer systems methodology. The subjects covered are: functional description of process control and inventory computer systems,...

  9. Grid Computing BOINC Redesign Mindmap with incentive system (gamification)

    OpenAIRE

    Kitchen, Kris

    2016-01-01

    Grid Computing BOINC Redesign Mindmap with incentive system (gamification). This is a PDF version of https://figshare.com/articles/Grid_Computing_BOINC_Redesign_Mindmap_with_incentive_system_gamification_/1265350

  10. Potential of Cognitive Computing and Cognitive Systems

    Science.gov (United States)

    Noor, Ahmed K.

    2014-11-01

    Cognitive computing and cognitive technologies are game changers for future engineering systems, as well as for engineering practice and training. They are major drivers for knowledge automation work and the creation of cognitive products with higher levels of intelligence than current smart products. This paper gives a brief review of cognitive computing and some of the cognitive engineering systems activities. The potential of cognitive technologies is outlined, along with a brief description of future cognitive environments incorporating cognitive assistants - specialized proactive intelligent software agents designed to follow and interact with humans and other cognitive assistants across the environments. The cognitive assistants engage, individually or collectively, with humans through a combination of adaptive multimodal interfaces and advanced visualization and navigation techniques. The realization of future cognitive environments requires the development of a cognitive innovation ecosystem for the engineering workforce. The continuously expanding major components of the ecosystem include integrated knowledge discovery and exploitation facilities (incorporating predictive and prescriptive big data analytics); novel cognitive modeling and visual simulation facilities; cognitive multimodal interfaces; and cognitive mobile and wearable devices. The ecosystem will provide timely, engaging, personalized/collaborative learning and effective decision making. It will stimulate creativity and innovation, and prepare the participants to work in future cognitive enterprises and develop new cognitive products of increasing complexity. http://www.aee.odu.edu/cognitivecomp

  11. COMPUTER-BASED REASONING SYSTEMS: AN OVERVIEW

    Directory of Open Access Journals (Sweden)

    CIPRIAN CUCU

    2012-12-01

    Full Text Available Argumentation is nowadays seen both as a skill that people use in various aspects of their lives and as an educational technique that can support the transfer or creation of knowledge, thus aiding the development of other skills (e.g. communication, critical thinking or attitudes). However, teaching argumentation and teaching with argumentation is still a rare practice, mostly due to the lack of available resources such as time or expert human tutors specialized in argumentation. Intelligent computer systems (i.e. systems that implement an inner representation of particular knowledge and try to emulate the behavior of humans) could allow more people to understand the purpose, techniques and benefits of argumentation. The proposed paper investigates the state-of-the-art concepts of computer-based argumentation used in education and tries to develop a conceptual map, showing benefits, limitations and relations between various concepts, focusing on the duality "learning to argue - arguing to learn".

  12. 14 CFR 415.123 - Computing systems and software.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 (2010-01-01) Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...

  13. Kwf-Grid workflow management system for Earth science applications

    Science.gov (United States)

    Tran, V.; Hluchy, L.

    2009-04-01

    In this paper, we present a workflow management tool for Earth science applications in EGEE. The workflow management tool was originally developed within the K-wf Grid project for GT4 middleware and has many advanced features, such as semi-automatic workflow composition, a user-friendly GUI for managing workflows, and knowledge management. In EGEE, we are porting the workflow management tool to gLite middleware for Earth science applications. The K-wf Grid workflow management system was developed within the "Knowledge-based Workflow System for Grid Applications" project under the 6th Framework Programme. The workflow management system was intended to: semi-automatically compose a workflow of Grid services; execute the composed workflow application in a Grid computing environment; monitor the performance of the Grid infrastructure and the Grid applications; analyze the resulting monitoring information; capture the knowledge contained in the information by means of intelligent agents; and finally reuse the joined knowledge gathered from all participating users in a collaborative way in order to efficiently construct workflows for new Grid applications. K-wf Grid workflow engines can support different types of jobs (e.g. GRAM jobs, web services) in a workflow. A new class of gLite job has been added to the system, which allows the system to manage and execute gLite jobs in the EGEE infrastructure. The GUI has been adapted to the requirements of EGEE users, and a new credential management servlet has been added to the portal. Porting the K-wf Grid workflow management system to gLite will allow EGEE users to use the system and benefit from its advanced features. The system has been initially tested and evaluated with applications from ES clusters.
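
    The dependency-ordered execution at the heart of any such engine can be sketched in a few lines. The fragment below is illustrative only, not K-wf Grid code: the job names are invented, and a print statement stands in for a GRAM or gLite submission.

    ```python
    from graphlib import TopologicalSorter   # standard library, Python 3.9+

    def run_job(name):
        print(f"submitting job: {name}")     # stand-in for a GRAM or gLite submission

    # Each job is mapped to the set of jobs it depends on.
    workflow = {
        "extract": set(),
        "preprocess": {"extract"},
        "simulate": {"preprocess"},
        "visualize": {"simulate"},
    }

    # Execute the composed workflow in topological (dependency) order.
    for job in TopologicalSorter(workflow).static_order():
        run_job(job)
    ```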

  14. Spectrum optimization for computed radiography mammography systems.

    Science.gov (United States)

    Figl, Michael; Homolka, Peter; Semturs, Friedrich; Kaar, Marcus; Hummel, Johann

    2016-08-01

    Technical quality assurance is a key issue in breast screening protocols. While full-field digital mammography systems produce excellent image quality at low dose, it appears difficult with computed radiography (CR) systems to fulfill the requirements for image quality and to keep the dose below the limits. However, powder plate CR systems are still widely used; for example, they represent ∼30% of the devices in the Austrian breast cancer screening program. For these systems the selection of an optimal spectrum is a key issue. We investigated different anode/filter (A/F) combinations over the clinical range of tube voltages. The figure of merit (FOM) to be optimized was the squared signal-difference-to-noise ratio divided by the glandular dose. Measurements were performed on a Siemens Mammomat 3000 with a Fuji Profect reader (SiFu) and on a GE Senograph DMR with a Carestream reader (GECa). For 50 mm PMMA the maximum FOM was found with a Mo/Rh spectrum between 27 kVp and 29 kVp, while for 60 mm, Mo/Rh at 28 kVp (GECa) and W/Rh at 25 kVp (SiFu) were superior. For 70 mm PMMA the Rh/Rh spectrum had a peak at about 31 kVp (GECa). FOM increases from 10% to >100% are demonstrated. Optimization as proposed in this paper can lead either to dose reduction with comparable image quality or to image quality improvement where necessary. For systems with limited A/F combinations the choice of tube voltage is of considerable importance. In this work, optimization of AEC parameters such as the anode/filter combination and tube potential was demonstrated for mammographic CR systems. Copyright © 2016. Published by Elsevier Ltd.
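
    A hedged numerical sketch of the paper's figure of merit follows: FOM = SDNR² / glandular dose, evaluated over a set of candidate spectra. The measurement values below are invented for illustration, not the paper's data.

    ```python
    measurements = [
        # (anode/filter, kVp, SDNR, glandular dose in mGy) -- invented values
        ("Mo/Rh", 27, 7.1, 1.30),
        ("Mo/Rh", 29, 7.4, 1.45),
        ("W/Rh",  25, 6.8, 1.10),
    ]

    def fom(sdnr, dose_mgy):
        """Squared signal-difference-to-noise ratio per unit glandular dose."""
        return sdnr ** 2 / dose_mgy

    for anode_filter, kvp, sdnr, dose in measurements:
        print(f"{anode_filter} @ {kvp} kVp: FOM = {fom(sdnr, dose):.1f}")

    best = max(measurements, key=lambda m: fom(m[2], m[3]))
    print("optimal spectrum:", best[0], "at", best[1], "kVp")
    ```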

  15. System Matrix Analysis for Computed Tomography Imaging.

    Directory of Open Access Journals (Sweden)

    Liubov Flores

    Full Text Available In practical applications of computed tomography (CT) imaging, it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesirable. These issues demand that high-quality CT images be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate the elements of the matrix, and we present results based on real projection data.
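
    The core of the Siddon method is computing, for each ray, its intersection length with every pixel it crosses; those lengths are the nonzero entries of the system matrix. The following 2-D sketch is a simplified, assumption-laden rendition of that idea, not the authors' code; a full CT implementation adds the scanner geometry.

    ```python
    import numpy as np

    def ray_pixel_lengths(p0, p1, nx, ny):
        """Nonzero system-matrix entries for one ray through an nx-by-ny unit grid."""
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        d = p1 - p0
        alphas = [0.0, 1.0]
        for axis, n in ((0, nx), (1, ny)):
            if d[axis] != 0.0:
                # parametric values where the ray crosses the grid planes
                alphas.extend((np.arange(n + 1) - p0[axis]) / d[axis])
        alphas = np.unique(np.clip(alphas, 0.0, 1.0))
        ray_len = np.linalg.norm(d)
        weights = {}
        for a0, a1 in zip(alphas[:-1], alphas[1:]):
            mid = p0 + 0.5 * (a0 + a1) * d     # segment midpoint locates the pixel
            i, j = int(np.floor(mid[0])), int(np.floor(mid[1]))
            if 0 <= i < nx and 0 <= j < ny:
                weights[(i, j)] = (a1 - a0) * ray_len
        return weights

    # A ray crossing a 2x2 grid of unit pixels:
    print(ray_pixel_lengths((-1.0, 0.5), (3.0, 2.5), nx=2, ny=2))
    ```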

  16. The fundamentals of computational intelligence system approach

    CERN Document Server

    Zgurovsky, Mikhail Z

    2017-01-01

    This monograph is dedicated to the systematic presentation of the main trends, technologies and methods of computational intelligence (CI). The book pays particular attention to an important novel CI technology: fuzzy logic (FL) systems and fuzzy neural networks (FNN). Different FNNs, including a new class of FNN, cascade neo-fuzzy neural networks, are considered, and their training algorithms are described and analyzed. The applications of FNNs to forecasting in macroeconomics and at stock markets are examined. The book presents the problem of portfolio optimization under uncertainty, a novel theory of fuzzy portfolio optimization free of the drawbacks of the classical Markowitz model, as well as an application to portfolio optimization at Ukrainian, Russian and American stock exchanges. The book also presents the problem of forecasting corporate bankruptcy risk under incomplete and fuzzy information, as well as new methods based on fuzzy set theory and fuzzy neural networks and results of their application for bankruptcy ris...

  17. Computational Modeling of Biological Systems From Molecules to Pathways

    CERN Document Server

    2012-01-01

    Computational modeling is emerging as a powerful new approach for studying and manipulating biological systems. Many diverse methods have been developed to model, visualize, and rationally alter these systems at various length scales, from atomic resolution to the level of cellular pathways. Processes taking place at larger time and length scales, such as molecular evolution, have also greatly benefited from new breeds of computational approaches. Computational Modeling of Biological Systems: From Molecules to Pathways provides an overview of established computational methods for the modeling of biologically and medically relevant systems. It is suitable for researchers and professionals working in the fields of biophysics, computational biology, systems biology, and molecular medicine.

  18. Self-Configurable FPGA-Based Computer Systems

    Directory of Open Access Journals (Sweden)

    MELNYK, A.

    2013-05-01

    Full Text Available A method of information processing in reconfigurable computer systems is formulated, and improvements that increase information processing efficiency are proposed. A new type of high-performance computer system, named the self-configurable FPGA-based computer system, which performs information processing according to this improved method, is proposed. The structure of self-configurable FPGA-based computer systems and the rules for applying the software and hardware needed to implement these systems are described, and their execution time characteristics are estimated. Directions for further work are discussed.

  19. Advances in Future Computer and Control Systems v.2

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating its focus on Future Computer and Control Systems. "Advances in Future Computer and Control Systems" presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on Future Computer and Control Systems from researchers all around the world.

  20. Advances in Future Computer and Control Systems v.1

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating its focus on Future Computer and Control Systems. "Advances in Future Computer and Control Systems" presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on Future Computer and Control Systems from researchers all around the world.

  1. Evolution: The Computer Systems Engineer Designing Minds

    Directory of Open Access Journals (Sweden)

    Aaron Sloman

    2011-10-01

    Full Text Available What we have learnt in the last six or seven decades about virtual machinery, as a result of a great deal of science and technology, enables us to offer Darwin a new defence against critics who argued that only physical form, not mental capabilities and consciousness, could be products of evolution by natural selection. The defence compares the mental phenomena mentioned by Darwin's opponents with contents of virtual machinery in computing systems. Objects, states, events, and processes in virtual machinery, which we have only recently learnt how to design and build, and which could not even have been thought about in Darwin's time, can interact with the physical machinery in which they are implemented, without being identical with their physical implementation, nor mere aggregates of physical structures and processes. The existence of various kinds of virtual machinery (including both "platform" virtual machines that can host other virtual machines, e.g. operating systems, and "application" virtual machines, e.g. spelling checkers and computer games) depends on complex webs of causal connections involving hardware and software structures, events and processes, where the specification of such causal webs requires concepts that cannot be defined in terms of concepts of the physical sciences. That indefinability, plus the possibility of various kinds of self-monitoring within virtual machinery, seems to explain some of the allegedly mysterious and irreducible features of consciousness that motivated Darwin's critics and also more recent philosophers criticising AI. There are consequences for philosophy, psychology, neuroscience and robotics.

  2. Reliable timing systems for computer controlled accelerators

    Science.gov (United States)

    Knott, Jürgen; Nettleton, Robert

    1986-06-01

    Over the past decade the use of computers has set new standards for control systems of accelerators with ever increasing complexity coupled with stringent reliability criteria. In fact, with very slow cycling machines or storage rings any erratic operation or timing pulse will cause the loss of precious particles and waste hours of time and effort of preparation. Thus, for the CERN linac and LEAR (Low Energy Antiproton Ring) timing system reliability becomes a crucial factor in the sense that all components must operate practically without fault for very long periods compared to the effective machine cycle. This has been achieved by careful selection of components and design well below thermal and electrical limits, using error detection and correction where possible, as well as developing "safe" decoding techniques for serial data trains. Further, consistent structuring had to be applied in order to obtain simple and flexible modular configurations with very few components on critical paths and to minimize the exchange of information to synchronize accelerators. In addition, this structuring allows the development of efficient strategies for on-line and off-line fault diagnostics. As a result, the timing system for Linac 2 has, so far, been operating without fault for three years, the one for LEAR more than one year since its final debugging.

  3. A computing system for LBB considerations

    Energy Technology Data Exchange (ETDEWEB)

    Ikonen, K.; Miettinen, J.; Raiko, H.; Keskinen, R.

    1997-04-01

    A computing system has been developed at VTT Energy for making efficient leak-before-break (LBB) evaluations of piping components. The system consists of fracture mechanics and leak rate analysis modules which are linked via an interactive user interface LBBCAL. The system enables quick tentative analysis of standard geometric and loading situations by means of fracture mechanics estimation schemes such as the R6, FAD, EPRI J, Battelle, plastic limit load and moments methods. Complex situations are handled with a separate in-house made finite-element code EPFM3D which uses 20-noded isoparametric solid elements, automatic mesh generators and advanced color graphics. Analytical formulas and numerical procedures are available for leak area evaluation. A novel contribution for leak rate analysis is the CRAFLO code which is based on a nonequilibrium two-phase flow model with phase slip. Its predictions are essentially comparable with those of the well known SQUIRT2 code; additionally it provides outputs for temperature, pressure and velocity distributions in the crack depth direction. An illustrative application to a circumferentially cracked elbow indicates expectedly that a small margin relative to the saturation temperature of the coolant reduces the leak rate and is likely to influence the LBB implementation to intermediate diameter (300 mm) primary circuit piping of BWR plants.

  4. A computed tomographic imaging system for experimentation

    Science.gov (United States)

    Lu, Yanping; Wang, Jue; Liu, Fenglin; Yu, Honglin

    2008-03-01

    Computed tomography (CT) is a non-invasive imaging technique widely applied in medicine for diagnosis and surgical planning, and in industry for non-destructive testing (NDT) and non-destructive evaluation (NDE). It is therefore valuable for college students to understand the fundamentals of CT. In this work, a CT imaging system named CD-50BG with a 50 mm field of view has been developed for experimental teaching at colleges. With its translate-rotate scanning mode, the system makes use of a 7.4×10^8 Bq (20 mCi) 137Cs radioactive source, held in a tungsten alloy housing to shield the radiation and guarantee no harm to the human body, and a single plastic scintillator + photomultiplier detector, which is convenient for counting because of its short pulse duration and good single-pulse response. At the same time, image processing software with functions for reconstruction, image processing and 3D visualization has also been developed to process the 16-bit acquired data. The reconstruction time for a 128×128 image is less than 0.1 second. High-quality images with 0.8 mm spatial resolution and 2% contrast sensitivity can be obtained. So far in China, more than ten institutions of higher education, including Tsinghua University and Peking University, have already applied the system in introductory teaching.

  5. 07271 Summary -- Computational Social Systems and the Internet

    OpenAIRE

    Cramton, Peter; Müller, Rudolf; Tardos, Eva; Tennenholtz, Moshe

    2007-01-01

    The seminar "Computational Social Systems and the Internet" facilitated a very fruitful interaction between economists and computer scientists, which intensified the understanding of the other disciplines' tool sets. The seminar helped to pave the way to a unified theory of social systems on the Internet that takes into account both the economic and the computational issues---and their deep interaction.

  6. A computer fault inquiry system of quick navigation

    Science.gov (United States)

    Guo-cheng, Yin

    Computer maintenance depends on the experience and knowledge of experts. This paper proposes a computer fault inquiry system with quick navigation to achieve reuse and sharing of computer maintenance knowledge. The paper presents the requirements analysis of the computer fault inquiry system, gives the partition of the system functions, and then designs the system, including the logical design of the database, the design of the main form menu, and the design of the directory query module. Finally, the code implementation of the query module is given, and the implementation of the keyword-based quick fault navigation method is emphasized.

  7. New computing systems, future computing environment, and their implications on structural analysis and design

    Science.gov (United States)

    Noor, Ahmed K.; Housner, Jerrold M.

    1993-01-01

    Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.

  8. Applications of membrane computing in systems and synthetic biology

    CERN Document Server

    Gheorghe, Marian; Pérez-Jiménez, Mario

    2014-01-01

    Membrane Computing was introduced as a computational paradigm in Natural Computing. The models introduced, called Membrane (or P) Systems, provide a coherent platform to describe and study living cells as computational systems. Membrane Systems have been investigated for their computational aspects and employed to model problems in other fields, such as computer science, linguistics, biology, economics, computer graphics, robotics, etc. Their inherent parallelism, heterogeneity and intrinsic versatility allow them to model a broad range of processes and phenomena, and they are also an efficient means to solve and analyze problems in a novel way. Membrane Computing has been used to model biological systems, becoming over time a thorough modeling paradigm comparable, in its modeling and predicting capabilities, to more established models in this area. This book is the result of the need to collect, in an organic way, different facets of this paradigm. The chapters of this book, together with the web pages accompanying th...

  9. Optical Computing-Optical Components and Storage Systems

    Indian Academy of Sciences (India)

    Optical Computing - Optical Components and Storage Systems ... Keywords: advanced materials; optical switching; pulse shaping; optical storage device; high-performance computing; imaging; nanotechnology; photonics; telecommunications ...

  10. System for Computer Automated Typesetting ((SCAT) of Computer Authored Texts.

    Science.gov (United States)

    1980-07-01

    line, the isolated word is referred to as an "orphan." Both widows and orphans are anathema to compositors and typographers. Approximately 25... It is possible that further material savings could accrue through the use of a more sophisticated compositor system. Such a system would make more effective use ... on the last line of a paragraph. Orphans are anathema to compositors and typographers. photoprocessor: an electro-chemical-mechanical device for...

  11. Dynamical Systems Theory for Transparent Symbolic Computation in Neuronal Networks

    OpenAIRE

    Carmantini, Giovanni Sirio

    2017-01-01

    In this thesis, we explore the interface between symbolic and dynamical system computation, with particular regard to dynamical system models of neuronal networks. In doing so, we adhere to a definition of computation as the physical realization of a formal system, where we say that a dynamical system performs a computation if a correspondence can be found between its dynamics on a vectorial space and the formal system’s dynamics on a symbolic space. Guided by this definition, we characterize...

  12. An operating system for future aerospace vehicle computer systems

    Science.gov (United States)

    Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.

    1984-01-01

    The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed, with a local area network connecting the nodes. Each node is assumed to provide a specific functionality. The network provides for communication so that the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node-unique objects with node-common objects, in order to implement both the autonomy of and the cooperation between nodes, are developed. The requirements for time-critical performance, reliability and recovery are discussed. Time-critical performance impacts all parts of the distributed operating system, e.g., its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs - concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc. - are considered to arrive at a feasible, maximum-performance design. Reliability of the network system is considered. A parallel multipath bus structure is proposed to control delivery time for time-critical messages. The architecture also supports immediate recovery of the time-critical message system after a communication failure.

  13. Biocellion: accelerating computer simulation of multicellular biological system models.

    Science.gov (United States)

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
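
    The fill-in-the-routines style of modeling described here can be sketched conceptually. The fragment below is not the Biocellion API: the class, the method names and the random-walk model are invented to illustrate how a framework can own the simulation loop while the modeler supplies only the model-specific routines.

    ```python
    import random

    class ModelRoutines:
        """The framework defines the hooks; the modeler fills in the bodies."""
        def init_cells(self):
            raise NotImplementedError
        def update_cell(self, cell):
            raise NotImplementedError

    class RandomWalkModel(ModelRoutines):
        """A trivial user model: 100 cells random-walking on a 32x32 grid."""
        def init_cells(self):
            return [[random.randrange(32), random.randrange(32)] for _ in range(100)]
        def update_cell(self, cell):
            cell[0] = (cell[0] + random.choice((-1, 0, 1))) % 32
            cell[1] = (cell[1] + random.choice((-1, 0, 1))) % 32

    def simulate(model, steps):
        """The framework owns the loop; a parallel version would partition it."""
        cells = model.init_cells()
        for _ in range(steps):
            for cell in cells:
                model.update_cell(cell)
        return cells

    random.seed(0)
    print(simulate(RandomWalkModel(), steps=10)[:3])
    ```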

  14. Biocellion: accelerating computer simulation of multicellular biological system models

    Science.gov (United States)

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572

  15. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Science.gov (United States)

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  16. Context-aware computing and self-managing systems

    CERN Document Server

    Dargie, Waltenegus

    2009-01-01

    Bringing together an extensively researched area with an emerging research issue, Context-Aware Computing and Self-Managing Systems presents the core contributions of context-aware computing in the development of self-managing systems, including devices, applications, middleware, and networks. The expert contributors reveal the usefulness of context-aware computing in developing autonomous systems that have practical application in the real world.The first chapter of the book identifies features that are common to both context-aware computing and autonomous computing. It offers a basic definit

  17. Sensitometer/densitometer system with computer communication

    Energy Technology Data Exchange (ETDEWEB)

    Elbern, Martin Kruel; Souto, Eduardo de Brito [Pro-Rad, Consultores em Radioprotecao S/S Ltda., Porto Alegre, RS (Brazil)]. E-mails: martin@prorad.com.br; ebsouto@prorad.com.br; Van der Laan, Flavio Tadeu [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Dept. de Engenharia Nuclear]. E-mail: ftvdl@ufrgs.br

    2007-07-01

    Health institutions that work with X-rays use sensitometer/densitometer systems in routine checks of the image quality of the films used. These are very important tests: when done properly, they help to reduce chemical replenishment, the film rejection ratio and the patient dose, and in this way they also reduce the cost of every exam. In Brazil, the quality control tests should be daily for mammography and monthly for other radiographic equipment. They are not done that often because of the high cost of the equipment and the knowledge required to use it properly. A sensitometer and a densitometer were developed. The sensitometer is the instrument used to sensitize the films, and the densitometer is used to measure the optical density of the sensitizations produced by the sensitometer. The developed densitometer shows on a display not only the optical densities read, but also the important parameters for quality control of the development process of a radiographic film. The densitometer also has computer communication for more careful analysis and generation of reports through dedicated software. As these instruments are designed for the Brazilian market, they will help popularize the tests, since they have a low cost and calculate the parameters of interest; ultimately they will help to reduce the collective dose. (author)

  18. Lightness computation by the human visual system

    Science.gov (United States)

    Rudd, Michael E.

    2017-05-01

    A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance-matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and attentional windowing for spatial integration. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2) while spatial integration windowing occurs in cortical area V4 or beyond.
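
    A one-dimensional hedged sketch of such an edge-integration computation is shown below: lightness differences are accumulated from thresholded log-luminance steps, with asymmetric weights for increments and decrements. The threshold and the weights are invented for illustration, not the paper's fitted values.

    ```python
    import numpy as np

    luminance = np.array([20.0, 20.0, 40.0, 40.0, 10.0, 10.0])   # a row of patches
    steps = np.diff(np.log(luminance))

    threshold = 0.1                        # discard tiny gradients as illumination
    steps[np.abs(steps) < threshold] = 0.0

    w_inc, w_dec = 1.0, 1.3                # asymmetric gains for increments/decrements
    weighted = np.where(steps > 0, w_inc * steps, w_dec * steps)

    lightness = np.concatenate(([0.0], np.cumsum(weighted)))     # relative lightness
    print(np.round(lightness, 2))
    ```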

  19. Modelling, abstraction, and computation in systems biology: A view from computer science.

    Science.gov (United States)

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Computer Graphics for System Effectiveness Analysis.

    Science.gov (United States)

    1986-05-01

    using the round operation when computing the number of shots: a real number must be converted into an integer number [Chapra and Canale, 1985]. ... Chapra, Steven C., and Raymond P. Canale (1985), Numerical Methods for Engineers with Personal Computer Applications, New York...

  1. Computable Analysis with Applications to Dynamic Systems

    NARCIS (Netherlands)

    P.J. Collins (Pieter)

    2010-01-01

    In this article we develop a theory of computation for continuous mathematics. The theory is based on earlier developments of computable analysis, especially that of the school of Weihrauch, and is presented as a model of intuitionistic type theory. Every effort has been made to keep the

  2. The hack attack - Increasing computer system awareness of vulnerability threats

    Science.gov (United States)

    Quann, John; Belford, Peter

    1987-01-01

    The paper discusses the electronic vulnerability of computer-based systems supporting NASA Goddard Space Flight Center (GSFC) to unauthorized users. To test the security of the systems and increase security awareness, NYMA, Inc. employed computer 'hackers' to attempt to infiltrate the system(s) under controlled conditions. Penetration procedures, methods, and descriptions are detailed in the paper. The exercise increased the security consciousness of GSFC management regarding the electronic vulnerability of the system(s).

  3. Computer system in Prolog for legal consultation relating to radiations

    Energy Technology Data Exchange (ETDEWEB)

    Kaminishi, Tokishi; Matsuda, Hideharu; Koshino, Masao

    1988-05-01

    A computer consulting system on legal questions relating to radiations was developed; the system was written with Prolog in BASIC on a personal computer. A remarkable feature of this system is the ease and simplicity of its operation. Furthermore, the programming of answers is simple and flexible owing to Prolog. This consulting system is closer to CAI than to an expert system. An outline of the system is described and several examples are shown with their execution results in this report.

  4. Overview of ASC Capability Computing System Governance Model

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott W. [Los Alamos National Laboratory

    2012-07-11

    This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.

  5. Application of computational intelligence in emerging power systems

    African Journals Online (AJOL)

    ... in the electrical engineering applications. This paper highlights the application of computational intelligence methods to power system problems. The various types of CI methods that are widely used in power systems are also discussed briefly. Keywords: Power systems, computational intelligence, artificial intelligence.

  6. Configurable computing for high-security/high-performance ambient systems

    OpenAIRE

    Gogniat, Guy; Bossuet, Lilian; Burleson, Wayne

    2005-01-01

    This paper stresses why configurable computing is a promising target to guarantee the hardware security of ambient systems. Many works have focused on configurable computing to demonstrate its efficiency, but as far as we know none has addressed the security issue from the system level down to the circuit level. This paper recalls the main hardware attacks before focusing on the issues involved in building secure systems on configurable computing. Two complementary views are presented to provide a guide for security and main issues ...

  7. Granular computing analysis and design of intelligent systems

    CERN Document Server

    Pedrycz, Witold

    2013-01-01

    Information granules, as encountered in natural language, are implicit in nature. To make them fully operational so they can be effectively used to analyze and design intelligent systems, information granules need to be made explicit. An emerging discipline, granular computing focuses on formalizing information granules and unifying them to create a coherent methodological and developmental environment for intelligent system design and analysis. Granular Computing: Analysis and Design of Intelligent Systems presents the unified principles of granular computing along with its comprehensive algo

  8. Computational Modeling of Flow Control Systems for Aerospace Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. proposes to develop computational methods for designing active flow control systems on aerospace vehicles with the primary objective of...

  9. Simulation model of load balancing in distributed computing systems

    Science.gov (United States)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over the network and the widespread use of software for design and pre-production in mechanical engineering have led to a situation where large industrial enterprises and small engineering companies alike implement complex computer systems for efficient solution of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key research models, but the system-wide problems of efficient distribution (balancing) of the computational load and of the placement of input, intermediate and output databases are no less important. The main tasks of such a balancing system are load and condition monitoring of the compute nodes and the selection of a node to which a user's request is forwarded in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system that dynamically changes its infrastructure is an important task.
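
    The node-selection policy described here can be sketched with a greedy least-loaded rule. The fragment below is a minimal illustration, assuming invented job costs and node counts, not the authors' simulation model.

    ```python
    import heapq
    import random

    random.seed(1)
    nodes = [(0.0, node_id) for node_id in range(4)]   # (accumulated load, node id)
    heapq.heapify(nodes)

    assignments = []
    for job in range(12):
        cost = random.uniform(0.5, 2.0)                # monitored estimate of job cost
        load, node_id = heapq.heappop(nodes)           # current least-loaded node
        heapq.heappush(nodes, (load + cost, node_id))
        assignments.append((job, node_id))

    print("job -> node:", assignments)
    print("final loads:", sorted(nodes, key=lambda n: n[1]))
    ```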

  10. Evolutionary Computing for Intelligent Power System Optimization and Control

    DEFF Research Database (Denmark)

    This new book focuses on how evolutionary computing techniques benefit engineering research and development tasks by converting practical problems of growing complexities into simple formulations, thus largely reducing development efforts. The book begins with an overview of optimization theory and modern evolutionary computing techniques, and goes on to cover specific applications of evolutionary computing to power system optimization and control problems.
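
    As a toy rendering of the book's theme, the sketch below applies a simple evolutionary algorithm to a power-system style problem: economic dispatch of three generators meeting a fixed demand. All cost coefficients, limits and algorithm settings are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    demand = 300.0                              # MW to be met by three generators
    a = np.array([0.010, 0.012, 0.008])         # quadratic cost coefficients
    b = np.array([8.0, 7.5, 9.0])               # linear cost coefficients

    def cost(p):
        penalty = 1e3 * abs(p.sum() - demand)   # soft constraint on the power balance
        return float(np.sum(a * p**2 + b * p) + penalty)

    pop = rng.uniform(0.0, 200.0, size=(40, 3))
    for _ in range(200):
        fitness = np.array([cost(p) for p in pop])
        parents = pop[np.argsort(fitness)[:10]]              # truncation selection
        children = parents[rng.integers(0, 10, 30)] + rng.normal(0.0, 2.0, (30, 3))
        pop = np.vstack([parents, np.clip(children, 0.0, 200.0)])

    best = pop[np.argmin([cost(p) for p in pop])]
    print("dispatch (MW):", np.round(best, 1), "| total:", round(float(best.sum()), 1))
    ```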

  11. Top 10 Threats to Computer Systems Include Professors and Students

    Science.gov (United States)

    Young, Jeffrey R.

    2008-01-01

    User awareness is growing in importance when it comes to computer security. Not long ago, keeping college networks safe from cyberattackers mainly involved making sure computers around campus had the latest software patches. New computer worms or viruses would pop up, taking advantage of some digital hole in the Windows operating system or in…

  12. Computer-aided Instructional System for Transmission Line Simulation.

    Science.gov (United States)

    Reinhard, Erwin A.; Roth, Charles H., Jr.

    A computer-aided instructional system has been developed which utilizes dynamic computer-controlled graphic displays and which requires student interaction with a computer simulation in an instructional mode. A numerical scheme has been developed for digital simulation of a uniform, distortionless transmission line with resistive terminations and…
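
    The physics such a simulation exercises can be sketched with the classic bounce-diagram computation for a lossless line with resistive terminations; the component values below are invented, and this is not the instructional system's numerical scheme.

    ```python
    Z0, Rs, Rl = 50.0, 25.0, 100.0             # line and termination resistances (ohms)
    Vs = 1.0                                   # unit step source

    rho_s = (Rs - Z0) / (Rs + Z0)              # source reflection coefficient
    rho_l = (Rl - Z0) / (Rl + Z0)              # load reflection coefficient

    wave = Vs * Z0 / (Rs + Z0)                 # initially launched wave
    v_load = 0.0
    for bounce in range(6):                    # arrivals at the load at t = (2k+1)*T
        v_load += wave * (1 + rho_l)           # incident plus reflected voltage
        wave *= rho_l * rho_s                  # one full round trip on the line
        print(f"after arrival {bounce}: V_load = {v_load:.4f} V")

    print("steady state:", Vs * Rl / (Rs + Rl), "V")
    ```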

  13. Cloud Computing and Online Operating System

    OpenAIRE

    Mohit Jain; Mohd. Danish; Hemant Yadav

    2011-01-01

    How do you feel when you use software without installing it on your computer? Isn't it a miracle? Yes, it is, and cloud computing makes it possible in today's world. It saves both your primary and secondary memory, because your data resides in a highly secure centralized data center located outside your house. Since the data is not held in your computer's memory, you can access it from anywhere. It also saves money, since you don't need to buy any expensive hardware to access the particular softw...

  14. Computational methods in power system analysis

    CERN Document Server

    Idema, Reijer

    2014-01-01

    This book treats state-of-the-art computational methods for power flow studies and contingency analysis. In the first part the authors present the relevant computational methods and mathematical concepts. In the second part, power flow and contingency analysis are treated. Furthermore, traditional methods to solve such problems are compared to modern solvers, developed using the knowledge of the first part of the book. Finally, these solvers are analyzed both theoretically and experimentally, clearly showing the benefits of the modern approach.

  15. BRAHMS: Novel middleware for integrated systems computation

    OpenAIRE

    Mitchinson, B.; Chan, T. S.; Chambers, J.; Pearson, M; Humphries, M; Fox, C; Gurney, K.; Prescott, T. J.

    2010-01-01

    Biological computational modellers are becoming increasingly interested in building large, eclectic models, including components on many different computational substrates, both biological and non-biological. At the same time, the rise of the philosophy of embodied modelling is generating a need to deploy biological models as controllers for robots in real-world environments. Finally, robotics engineers are beginning to find value in seconding biomimetic control strategies for use on practica...

  16. Actor Model of Computation for Scalable Robust Information Systems : One computer is no computer in IoT

    OpenAIRE

    Hewitt, Carl

    2015-01-01

    The Actor Model is a mathematical theory that treats "Actors" as the universal conceptual primitives of digital computation. Hypothesis: All physically possible computation can be directly implemented using Actors. The model has been used both as a framework for a theoretical understanding of concurrency, and as the theoretical basis for several practical implementations of concurrent systems. The advent of massive concurrency through client-cloud computing and many-cor...
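
    A toy rendering of the Actor abstraction follows: each actor owns a mailbox and processes one message at a time, so state is never shared. This is a hedged sketch, not a full Actor-model runtime; the class names are invented.

    ```python
    import queue
    import threading

    class Actor:
        """Toy actor: a private mailbox drained by a dedicated thread."""
        def __init__(self):
            self.mailbox = queue.Queue()
            self._thread = threading.Thread(target=self._run, daemon=True)
            self._thread.start()

        def send(self, message):
            self.mailbox.put(message)         # the only way to interact with an actor

        def stop(self):
            self.mailbox.put(None)            # poison pill: stop after pending messages
            self._thread.join()

        def _run(self):
            while True:
                message = self.mailbox.get()
                if message is None:
                    break
                self.receive(message)

    class Counter(Actor):
        def __init__(self):
            self.count = 0                    # private state, touched by one thread only
            super().__init__()

        def receive(self, message):
            self.count += message

    counter = Counter()
    for _ in range(1000):
        counter.send(1)
    counter.stop()
    print(counter.count)                      # 1000, with no locks or data races
    ```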

  17. Safety Metrics for Human-Computer Controlled Systems

    Science.gov (United States)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  18. Computational system identification of continuous-time nonlinear systems using approximate Bayesian computation

    Science.gov (United States)

    Krishnanathan, Kirubhakaran; Anderson, Sean R.; Billings, Stephen A.; Kadirkamanathan, Visakan

    2016-11-01

    In this paper, we derive a system identification framework for continuous-time nonlinear systems, for the first time using a simulation-focused computational Bayesian approach. Simulation approaches to nonlinear system identification have been shown to outperform regression methods under certain conditions, such as non-persistently exciting inputs and fast-sampling. We use the approximate Bayesian computation (ABC) algorithm to perform simulation-based inference of model parameters. The framework has the following main advantages: (1) parameter distributions are intrinsically generated, giving the user a clear description of uncertainty, (2) the simulation approach avoids the difficult problem of estimating signal derivatives as is common with other continuous-time methods, and (3) as noted above, the simulation approach improves identification under conditions of non-persistently exciting inputs and fast-sampling. Term selection is performed by judging parameter significance using parameter distributions that are intrinsically generated as part of the ABC procedure. The results from a numerical example demonstrate that the method performs well in noisy scenarios, especially in comparison to competing techniques that rely on signal derivative estimation.
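
    A hedged sketch of ABC rejection applied to a simple continuous-time model dx/dt = -a*x is shown below: candidate parameters drawn from a prior are kept when the simulated output lies within a tolerance of the observed data, intrinsically yielding a parameter distribution. The true parameter, the noise level and the tolerance are invented for illustration, not the paper's example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 5.0, 50)

    def simulate(a):
        """Simulated output of the model dx/dt = -a*x with x(0) = 1."""
        return np.exp(-a * t)

    a_true = 0.8
    observed = simulate(a_true) + rng.normal(0.0, 0.02, t.size)

    epsilon = 0.05                            # acceptance tolerance on the RMS distance
    prior_draws = rng.uniform(0.0, 2.0, 20000)
    distances = np.array([np.sqrt(np.mean((simulate(a) - observed) ** 2))
                          for a in prior_draws])
    posterior = prior_draws[distances < epsilon]

    print(f"accepted {posterior.size} draws; "
          f"posterior mean a = {posterior.mean():.3f} (true value {a_true})")
    ```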

  19. Computer Generated Hologram System for Wavefront Measurement System Calibration

    Science.gov (United States)

    Olczak, Gene

    2011-01-01

    Computer Generated Holograms (CGHs) have been used for some time to calibrate interferometers that require nulling optics. A typical scenario is the testing of aspheric surfaces with an interferometer placed near the paraxial center of curvature. Existing CGH technology suffers from a reduced capacity to calibrate middle and high spatial frequencies. The root cause of this shortcoming is as follows: the CGH is not placed at an image conjugate of the asphere due to limitations imposed by the geometry of the test and the allowable size of the CGH. This innovation provides a calibration system where the imaging properties in calibration can be made comparable to the test configuration. Thus, if the test is designed to have good imaging properties, then middle and high spatial frequency errors in the test system can be well calibrated. The improved imaging properties are provided by a rudimentary auxiliary optic as part of the calibration system. The auxiliary optic is simple to characterize and align to the CGH. Use of the auxiliary optic also reduces the size of the CGH required for calibration and the density of the lines required for the CGH. The resulting CGH is less expensive than the existing technology and has reduced write error and alignment error sensitivities. This CGH system is suitable for any kind of calibration using an interferometer when high spatial resolution is required. It is especially well suited for tests that include segmented optical components or large apertures.

  20. Design technologies for green and sustainable computing systems

    CERN Document Server

    Ganguly, Amlan; Chakrabarty, Krishnendu

    2013-01-01

    This book provides a comprehensive guide to the design of sustainable and green computing systems (GSC). Coverage includes important breakthroughs in various aspects of GSC, including multi-core architectures, interconnection technology, data centers, high-performance computing (HPC), and sensor networks. The authors address the challenges of power efficiency and sustainability in various contexts, including system design, computer architecture, programming languages, compilers and networking. The book offers readers a single-source reference for addressing the challenges of power efficiency and sustainability in embedded computing systems; provides in-depth coverage of the key underlying design technologies for green and sustainable computing; and covers a wide range of topics, from chip-level design to architectures, computing systems, and networks.

  1. A comparison of queueing, cluster and distributed computing systems

    Science.gov (United States)

    Kaplan, Joseph A.; Nelson, Michael L.

    1993-01-01

    Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster; cluster management software is necessary to harness the collective computing power. A variety of cluster management and queueing systems are compared: Distributed Queueing Systems (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.

  2. National electronic medical records integration on cloud computing system.

    Science.gov (United States)

    Mirza, Hebah; El-Masri, Samir

    2013-01-01

    Few healthcare providers have an advanced level of Electronic Medical Record (EMR) adoption; others have a low level, and most have no EMR at all. Cloud computing is an emerging technology that has been used in other industries with great success. Despite its great features, cloud computing has not yet been fairly utilized in the healthcare industry. This study presents an innovative healthcare cloud computing system for integrating the Electronic Health Record (EHR). The proposed cloud system applies cloud computing technology to the EHR system to present a comprehensive, integrated EHR environment.

  3. The Cc1 Project – System For Private Cloud Computing

    Directory of Open Access Journals (Sweden)

    J Chwastowski

    2012-01-01

    Full Text Available The main features of the Cloud Computing system developed at IFJ PAN are described. The project is financed from the structural resources provided by the European Commission and the Polish Ministry of Science and Higher Education (Innovative Economy, National Cohesion Strategy). The system delivers a solution for carrying out computer calculations on a Private Cloud computing infrastructure. It consists of an intuitive Web-based user interface, a module for user and resource administration, and an implementation of the standard EC2 interface. Thanks to the distributed character of the system, it allows the integration of a geographically distant federation of computer clusters within a uniform user environment.

  4. Turning text into research networks: information retrieval and computational ontologies in the creation of scientific databases.

    Science.gov (United States)

    Ceci, Flávio; Pietrobon, Ricardo; Gonçalves, Alexandre Leopoldo

    2012-01-01

    Web-based, free-text documents on science and technology have been growing rapidly on the web. However, most of these documents are not immediately processable by computers, which slows down the acquisition of useful information. Computational ontologies might represent a possible solution by enabling semantically machine-readable data sets. But the process of ontology creation, instantiation and maintenance is still based on manual methodologies and is thus time- and cost-intensive. We focused on a large corpus containing information on researchers, research fields, and institutions. We based our strategy on traditional entity recognition, social computing and correlation. We devised a semi-automatic approach for the recognition, correlation and extraction of named entities and relations from textual documents, which are then used to create, instantiate, and maintain an ontology. We present a prototype demonstrating the applicability of the proposed strategy, along with a case study describing how direct and indirect relations can be extracted from academic and professional activities registered in a database of curriculum vitae in free-text format. We present evidence that this system can identify entities to assist in the process of knowledge extraction and representation to support ontology maintenance. We also demonstrate the extraction of relationships among ontology classes and their instances. We have demonstrated that our system can be used for the conversion of research information in free-text format into a database with a semantic structure. Future studies should test this system using the growing amount of free-text information available at the institutional and national levels.
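
    The correlation step can be pictured with a small sketch: recognize known entities in free text and link researchers who co-occur in the same CV entry as an indirect relation. The dictionary, the corpus and the names below are invented for illustration; the actual system uses a full entity-recognition pipeline.

    ```python
    import itertools
    import re
    from collections import Counter

    researchers = {"Maria Souza", "Joao Lima", "Ana Castro"}   # known-entity dictionary
    corpus = [
        "Maria Souza and Joao Lima co-authored a paper on ontologies.",
        "Joao Lima supervised Ana Castro at the same institution.",
    ]

    edges = Counter()
    for entry in corpus:
        found = sorted(r for r in researchers if re.search(re.escape(r), entry))
        for a, b in itertools.combinations(found, 2):
            edges[(a, b)] += 1             # indirect relation: co-occurrence in one entry

    print(edges.most_common())
    ```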

  5. Research on computer virus database management system

    Science.gov (United States)

    Qi, Guoquan

    2011-12-01

    The growing proliferation of computer viruses has become a lethal threat and a research focus in network information security. As new viruses emerge, the number of viruses grows and virus classification becomes increasingly complex. Virus naming cannot be unified because agencies capture samples at different times. Although each agency has its own virus database, communication between agencies is lacking, virus information is incomplete, or only a small number of samples are described. This paper introduces the current state of virus database construction at home and abroad, analyzes how to standardize and completely describe virus characteristics, and then presents a design scheme for a computer virus database covering information integrity, storage security and manageability.

  6. The emergent computational potential of evolving artificial living systems

    NARCIS (Netherlands)

    Wiedermann, J.; Leeuwen, J. van

    2002-01-01

    The computational potential of artificial living systems can be studied without knowing the algorithms that govern their behavior. Modeling single organisms by means of so-called cognitive transducers, we will estimate the computational power of AL systems by viewing them as conglomerates of such transducers.

  7. 3-D Signal Processing in a Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

    This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to three dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  8. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.
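
    The record describes a kernel-offload pattern for hybrid quantum-classical computing without naming an API. The Python sketch below shows only that pattern; the class, methods and workload fields are hypothetical, and the quantum back end is stubbed with fake measurement counts:

      class StubQPU:
          """Stand-in for a quantum accelerator back end (hypothetical)."""
          def run(self, kernel, shots=1024):
              # A real back end would compile `kernel` and execute it on
              # quantum hardware; here we fake a measurement histogram.
              return {"0": shots // 2, "1": shots - shots // 2}

      def host_program(qpu, workloads):
          """Classical host loop: offload marked workloads, keep the rest."""
          results = []
          for work in workloads:
              if work["quantum"]:              # offload decision
                  results.append(qpu.run(work["kernel"]))
              else:                            # stay on the CPU
                  results.append(sum(work["data"]))
          return results

      print(host_program(StubQPU(), [
          {"quantum": True, "kernel": "bell_pair"},
          {"quantum": False, "data": [1, 2, 3]},
      ]))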

  9. On the Computation of Lyapunov Functions for Interconnected Systems

    DEFF Research Database (Denmark)

    Sloth, Christoffer

    2016-01-01

    This paper addresses the computation of additively separable Lyapunov functions for interconnected systems. The presented results can be applied to reduce the complexity of the computations associated with stability analysis of large-scale systems. We provide a necessary and sufficient condition ...

  10. 10 CFR 35.457 - Therapy-related computer systems.

    Science.gov (United States)

    2010-01-01

    10 CFR 35.457 (Energy, Nuclear Regulatory Commission, Medical Use of Byproduct Material, Manual Brachytherapy): Therapy-related computer systems. The licensee shall perform acceptance testing on the treatment planning...

  11. Entrepreneurial Health Informatics for Computer Science and Information Systems Students

    Science.gov (United States)

    Lawler, James; Joseph, Anthony; Narula, Stuti

    2014-01-01

    Corporate entrepreneurship is a critical curricular area for computer science and information systems students. Few institutions of computer science and information systems, however, include entrepreneurship in their curricula. This paper presents entrepreneurial health informatics as a course in a concentration of Technology Entrepreneurship at a…

  12. A computer controlled pulse generator for an ST Radar System ...

    African Journals Online (AJOL)

    A computer controlled pulse generator for an ST radar system is described. It uses a highly flexible software and a hardware with a small IC count, making the system compact and highly programmable. The parameters of the signals of the pulse generator are initially entered from the keyboard. The computer then generates ...

  13. Music Genre Classification Systems - A Computational Approach

    DEFF Research Database (Denmark)

    Ahrendt, Peter

    2006-01-01

    to systems which use e.g. a symbolic representation or textual information about the music. The approach to music genre classification systems has here been system-oriented. In other words, all the different aspects of the systems have been considered and it is emphasized that the systems should...

  14. Cloud Computing Based E-Learning System

    Science.gov (United States)

    Al-Zoube, Mohammed; El-Seoud, Samir Abou; Wyne, Mudasser F.

    2010-01-01

    Cloud computing technologies, although in their early stages, have managed to change the way applications are going to be developed and accessed. These technologies are aimed at running applications as services over the internet on a flexible infrastructure. Microsoft Office applications, such as Word processing, Excel spreadsheets, Access databases…

  15. Data systems and computer science programs: Overview

    Science.gov (United States)

    Smith, Paul H.; Hunter, Paul

    1991-01-01

    An external review of the Integrated Technology Plan for the Civil Space Program is presented. The topics are presented in viewgraph form and include the following: onboard memory and storage technology; advanced flight computers; special purpose flight processors; onboard networking and testbeds; information archive, access, and retrieval; visualization; neural networks; software engineering; and flight control and operations.

  16. Interactive Computer Graphics for System Analysis.

    Science.gov (United States)

    1983-12-01

    confusing distortion to the picture. George Washington University has documented this problem [38:37] and provided a solution for FORTRAN users but makes...

  17. Computers and Information Systems in Education.

    Science.gov (United States)

    Goodlad, John I.; And Others

    In an effort to increase the awareness of educators about the potential of electronic data processing (EDP) in education and acquaint the EDP specialists with current educational problems, this book discusses the routine uses of EDP for business and student accounting, as well as its innovative uses in instruction. A description of computers and…

  18. Impact of new computing systems on computational mechanics and flight-vehicle structures technology

    Science.gov (United States)

    Noor, A. K.; Storaasli, O. O.; Fulton, R. E.

    1984-01-01

    Advances in computer technology which may have an impact on computational mechanics and flight vehicle structures technology were reviewed. The characteristics of supersystems, highly parallel systems, and small systems are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario for future hardware/software environment and engineering analysis systems is presented. Research areas with potential for improving the effectiveness of analysis methods in the new environment are identified.

  19. Evaluation of computer-based ultrasonic inservice inspection systems

    Energy Technology Data Exchange (ETDEWEB)

    Harris, R.V. Jr.; Angel, L.J.; Doctor, S.R.; Park, W.R.; Schuster, G.J.; Taylor, T.T. [Pacific Northwest Lab., Richland, WA (United States)

    1994-03-01

    This report presents the principles, practices, terminology, and technology of computer-based ultrasonic testing for inservice inspection (UT/ISI) of nuclear power plants, with extensive use of drawings, diagrams, and UT images. The presentation is technical but assumes limited specific knowledge of ultrasonics or computers. The report is divided into 9 sections covering conventional UT, computer-based UT, and evaluation methodology. Conventional UT topics include coordinate axes, scanning, instrument operation, RF and video signals, and A-, B-, and C-scans. Computer-based topics include sampling, digitization, signal analysis, image presentation, SAFT, ultrasonic holography, transducer arrays, and data interpretation. An evaluation methodology for computer-based UT/ISI systems is presented, including questions, detailed procedures, and test block designs. Brief evaluations of several computer-based UT/ISI systems are given; supplementary volumes will provide detailed evaluations of selected systems.

  20. Computer graphics application in the engineering design integration system

    Science.gov (United States)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.

    1975-01-01

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems were discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of aerospace vehicle preliminary designs: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by direct-coupled, low-cost storage tube terminals with limited interactive capabilities, and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 baud), poor hard copy, and early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer-aided design.

  1. Security for small computer systems a practical guide for users

    CERN Document Server

    Saddington, Tricia

    1988-01-01

    Security for Small Computer Systems: A Practical Guide for Users is a guidebook on security concerns for small computers. The book provides security advice for end-users of small computers on different aspects of computing security. Chapter 1 discusses security and threats, and Chapter 2 covers the physical aspects of computer security. The text also talks about the protection of data, and then deals with defenses against fraud. Survival planning and risk assessment are also encompassed. The last chapter tackles security management from an organizational perspective.

  2. Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system.

    Science.gov (United States)

    Takada, Naoki; Shimobaba, Tomoyoshi; Nakayama, Hirotaka; Shiraki, Atsushi; Okada, Naohisa; Oikawa, Minoru; Masuda, Nobuyuki; Ito, Tomoyoshi

    2012-10-20

    To overcome the computational complexity of a computer-generated hologram (CGH), we implement an optimized CGH computation in our multi-graphics processing unit cluster system. Our system can calculate a CGH of 6,400×3,072 pixels from a three-dimensional (3D) object composed of 2,048 points in 55 ms. Furthermore, in the case of a 3D object composed of 4,096 points, our system is 553 times faster than a conventional central processing unit (using eight threads).
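
    The record reports performance but not the kernel itself. A common point-source (Fresnel) formulation of CGH accumulates, for each hologram pixel, a cosine contribution from every 3D object point; the NumPy sketch below implements that formulation at toy resolution, with assumed wavelength, pixel pitch, and object points:

      # Point-source (Fresnel) CGH sketch: each hologram pixel sums a
      # cosine contribution from every object point. Sizes are toy values;
      # the paper's system computes 6,400x3,072 pixels on a GPU cluster.
      import numpy as np

      WAVELENGTH = 532e-9   # green laser, metres (assumed)
      PITCH = 8e-6          # pixel pitch, metres (assumed)

      def cgh(points, ny=256, nx=256):
          ys, xs = np.meshgrid(np.arange(ny) * PITCH,
                               np.arange(nx) * PITCH, indexing="ij")
          field = np.zeros((ny, nx))
          for (px, py, pz) in points:        # one pass per object point
              r2 = (xs - px) ** 2 + (ys - py) ** 2
              field += np.cos(np.pi * r2 / (WAVELENGTH * pz))
          return field

      pts = [(1e-3, 1e-3, 0.1), (0.5e-3, 1.2e-3, 0.12)]  # x, y, z in metres
      print(cgh(pts).shape)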

  3. Computer simulation of electrokinetics in colloidal systems

    Science.gov (United States)

    Schmitz, R.; Starchenko, V.; Dünweg, B.

    2013-11-01

    The contribution gives a brief overview outlining how our theoretical understanding of the phenomenon of colloidal electrophoresis has improved over the decades. Particular emphasis is put on numerical calculations and computer simulation models, which have become more and more important as the level of description became more detailed and refined. Due to computational limitations, it has so far not been possible to study "perfect" models. Different complementary models have hence been developed, and their various strengths and deficiencies are briefly discussed. This is contrasted with the experimental situation, where there are still observations waiting for theoretical explanation. The contribution then outlines our recent development of a numerical method to solve the electrokinetic equations for a finite volume in three dimensions, and describes some new results that could be obtained by the approach.

  4. A Reliable Distributed Computing System Architecture for Planetary Rover

    Science.gov (United States)

    Jingping, C.; Yunde, J.

    The computing system is one of the most important parts of a planetary rover; it is crucial to the rover's functional capability and survival probability. When the planetary rover executes tasks, it needs to react to events in time and to tolerate faults caused by the environment or by itself. To meet these requirements, the planetary rover computing system architecture should be reactive, highly reliable, adaptable, consistent and extendible. This paper introduces a reliable distributed computing system architecture for a planetary rover. This architecture integrates new ideas and technologies in hardware architecture, software architecture, network architecture, fault-tolerance technology and intelligent control system architecture. The architecture defines three dimensions of fault containment regions: the channel dimension, the lane dimension and the integrity dimension. The whole computing system has three channels, which provide the main fault containment regions for system hardware; a channel is the ultimate line of defense against a single physical fault. The lanes are secondary fault containment regions for physical faults; they can be used to improve fault-diagnosis capability within a channel, to improve coverage of design faults through hardware and software diversity, to serve as backups for one another to improve availability, and to increase computing capability. The integrity dimension provides a fault containment region for software design.
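
    The record does not spell out how the three channels mask a fault; majority voting across redundant channels is the textbook mechanism behind such channel-style fault containment, sketched here with placeholder channel outputs:

      # Majority voting across three redundant channels: two healthy
      # channels out-vote one faulty channel. Values are placeholders.
      from collections import Counter

      def vote(*channel_outputs):
          """Return the majority value; raise if all channels disagree."""
          value, count = Counter(channel_outputs).most_common(1)[0]
          if count < 2:
              raise RuntimeError("no majority: channels disagree")
          return value

      print(vote(42, 42, 41))   # -> 42, masking the faulty channel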

  5. An Annotated and Cross-Referenced Bibliography on Computer Security and Access Control in Computer Systems.

    Science.gov (United States)

    Bergart, Jeffrey G.; And Others

    This paper represents a careful study of published works on computer security and access control in computer systems. The study includes a selective annotated bibliography of some eighty-five important published results in the field and, based on these papers, analyzes the state of the art. In annotating these works, the authors try to be…

  6. Computer Sciences and Data Systems, volume 1

    Science.gov (United States)

    1987-01-01

    Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.

  7. Safe Neighborhood Computation for Hybrid System Verification

    Directory of Open Access Journals (Sweden)

    Yi Deng

    2015-01-01

    Full Text Available For the design and implementation of engineering systems, performing model-based analysis can disclose potential safety issues at an early stage. The analysis of hybrid system models is in general difficult due to the intrinsic complexity of hybrid dynamics. In this paper, a simulation-based approach to formal verification of hybrid systems is presented.

  8. Multiple-User, Multitasking, Virtual-Memory Computer System

    Science.gov (United States)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1993-01-01

    Computer system designed and programmed to serve multiple users in research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (from megabytes to gigabytes) memories. Underlying concept of system also applicable to monitoring and control of industrial processes.

  9. DNA-Enabled Integrated Molecular Systems for Computation and Sensing

    Science.gov (United States)

    2014-05-21

    a PE incapacitated, semifunctional, or even detrimental to other nodes. At the architectural level, internode connections may be omitted or broken. In... architectures and systems. Over a decade of work at the intersection of DNA nanotechnology and computer system design has shown several key elements and... introduces a framework for optical computing at the molecular level. This Account also highlights several architectural system studies that

  10. Intelligent decision support systems for sustainable computing paradigms and applications

    CERN Document Server

    Abraham, Ajith; Siarry, Patrick; Sheng, Michael

    2017-01-01

    This unique book discusses the latest research, innovative ideas, challenges and computational intelligence (CI) solutions in sustainable computing. It presents novel, in-depth fundamental research on achieving a sustainable lifestyle for society, either from a methodological or from an application perspective. Sustainable computing has expanded to become a significant research area covering the fields of computer science and engineering, electrical engineering and other engineering disciplines, and there has been an increase in the amount of literature on aspects of sustainable computing, such as energy efficiency and natural resource conservation, that emphasizes the role of ICT (information and communications technology) in achieving system design and operation objectives. The energy impact/design of more efficient IT infrastructures is a key challenge in realizing new computing paradigms. The book explores the uses of computational intelligence (CI) techniques for intelligent decision support that can be explo...

  11. 14 CFR 417.123 - Computing systems and software.

    Science.gov (United States)

    2010-01-01

    14 CFR 417.123 (Aeronautics and Space): Computing systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  12. Open Source Live Distributions for Computer Forensics

    Science.gov (United States)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment) that contains a collection of tools wrapped up into a user friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that drives digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  13. THE USE OF COMPUTER ALGEBRA SYSTEMS IN THE TEACHING PROCESS

    Directory of Open Access Journals (Sweden)

    Mychaylo Paszeczko

    2014-11-01

    Full Text Available This work discusses the computational capabilities of programs belonging to the CAS (Computer Algebra Systems) family. A review of commercial and non-commercial software is given as well. In addition, one program belonging to this group, Mathcad, has been selected and its application to a chosen example is presented. Computational capabilities and ease of handling were the decisive factors for this selection.
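
    Mathcad is proprietary, so as a stand-in illustration of the kind of symbolic computation CAS packages offer, here is a short example using the open-source SymPy library (not one of the systems reviewed in the record):

      # Typical CAS operations: symbolic differentiation, integration,
      # and exact equation solving, using SymPy as a free stand-in for
      # the commercial packages discussed in the record.
      import sympy as sp

      x = sp.symbols("x")
      f = sp.sin(x) * sp.exp(-x)

      print(sp.diff(f, x))                    # symbolic derivative
      print(sp.integrate(f, (x, 0, sp.oo)))   # definite integral -> 1/2
      print(sp.solve(x**2 - 2, x))            # exact roots: [-sqrt(2), sqrt(2)]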

  14. SOME PARADIGMS OF ARTIFICIAL INTELLIGENCE IN FINANCIAL COMPUTER SYSTEMS

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2015-12-01

    Full Text Available The article discusses some paradigms of artificial intelligence in the context of their applications in computer financial systems. The proposed approach has a significant potential to increase the competitiveness of enterprises, including financial institutions. However, it requires the effective use of supercomputers, grids and cloud computing. A reference is made to the computing environment for Bitcoin. In addition, we characterize genetic programming and artificial neural networks used to prepare investment strategies on the stock exchange market.
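
    The record names genetic programming and artificial neural networks without implementation details. To convey the evolutionary flavour, the sketch below evolves the window lengths of a moving-average crossover strategy on synthetic prices; the data, fitness function, and parameters are all illustrative, not the authors' setup:

      # Toy evolutionary search over moving-average crossover windows.
      # Prices are a synthetic random walk; fitness is final equity.
      import random

      random.seed(1)
      PRICES = [100.0]
      for _ in range(500):
          PRICES.append(PRICES[-1] * (1 + random.gauss(0.0005, 0.01)))

      def sma(n, i):                          # simple moving average at bar i
          return sum(PRICES[i - n:i]) / n

      def fitness(short, long_):
          """Final equity of a crossover strategy starting from 1.0 cash."""
          cash, pos = 1.0, 0.0
          for i in range(long_, len(PRICES)):
              if sma(short, i) > sma(long_, i) and cash:
                  pos, cash = cash / PRICES[i], 0.0
              elif sma(short, i) < sma(long_, i) and pos:
                  cash, pos = pos * PRICES[i], 0.0
          return cash + pos * PRICES[-1]

      pop = [(random.randint(2, 20), random.randint(21, 100)) for _ in range(20)]
      for _ in range(30):                     # selection + mutation only
          pop.sort(key=lambda g: -fitness(*g))
          parents = pop[:10]
          children = [(p[0] + random.randint(-2, 2), p[1] + random.randint(-5, 5))
                      for p in (random.choice(parents) for _ in range(10))]
          pop = parents + [(max(2, min(s, 20)), max(21, min(l, 100)))
                           for s, l in children]   # keep short < long
      print("best (short, long):", pop[0], "equity:", round(fitness(*pop[0]), 3))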

  15. Artificial intelligence, expert systems, computer vision, and natural language processing

    Science.gov (United States)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  16. Computer Aided Facial Prosthetics Manufacturing System

    Directory of Open Access Journals (Sweden)

    Peng H.K.

    2016-01-01

    Full Text Available Facial deformities can impose a burden on the patient. There are many solutions for facial deformities, such as plastic surgery and facial prosthetics. However, the current fabrication method for facial prosthetics is costly and time-consuming. This study aimed to identify a new method of constructing a customized facial prosthesis. A 3D scanner, computer software and a 3D printer were used in this study. Results showed that the newly developed method can be used to produce customized facial prosthetics. The advantages of the developed method over the conventional process are low cost and reduced material waste and pollution, in line with the green manufacturing concept.

  17. Automatic behaviour analysis system for honeybees using computer vision

    DEFF Research Database (Denmark)

    Tu, Gang Jun; Hansen, Mikkel Kragh; Kryger, Per

    2016-01-01

    …low-cost embedded computer with very limited computational resources as compared to an ordinary PC. The system succeeds in counting honeybees, identifying their position and measuring their in-and-out activity. Our algorithm uses a background subtraction method to segment the images. After the segmentation stage… demonstrate that this system can be used as a tool to detect the behaviour of honeybees and assess their state at the beehive entrance. Besides, the computation-time results show that the Raspberry Pi is a viable solution for such a real-time video processing system.
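
    As a rough illustration of the background-subtraction stage described in the record, the sketch below segments moving objects and counts blobs per frame with OpenCV 4. The video file name, blur kernel, and area threshold are placeholders, not the authors' parameters:

      # Segment moving honeybees from a fixed camera view and count
      # blobs per frame using MOG2 background subtraction.
      import cv2

      cap = cv2.VideoCapture("hive_entrance.mp4")      # hypothetical clip
      subtractor = cv2.createBackgroundSubtractorMOG2(history=200,
                                                      varThreshold=32)
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          mask = subtractor.apply(frame)               # foreground mask
          mask = cv2.medianBlur(mask, 5)               # suppress speckle noise
          # OpenCV 4 returns (contours, hierarchy)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          bees = [c for c in contours if cv2.contourArea(c) > 50]
          print("bees in frame:", len(bees))
      cap.release()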

  18. Emerging Trends in Computing, Informatics, Systems Sciences, and Engineering

    CERN Document Server

    Elleithy, Khaled

    2013-01-01

    Emerging Trends in Computing, Informatics, Systems Sciences, and Engineering includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of  Industrial Electronics, Technology & Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning. This book includes the proceedings of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2010). The proceedings are a set of rigorously reviewed world-class manuscripts presenting the state of international practice in Innovative Algorithms and Techniques in Automation, Industrial Electronics and Telecommunications.

  19. Computer system organization the B5700/B6700 series

    CERN Document Server

    Organick, Elliott I

    1973-01-01

    Computer System Organization: The B5700/B6700 Series focuses on the organization of the B5700/B6700 Series developed by Burroughs Corp. More specifically, it examines how computer systems can (or should) be organized to support, and hence make more efficient, the running of computer programs that evolve with characteristically similar information structures. Comprised of nine chapters, this book begins with a background on the development of the B5700/B6700 operating systems, paying particular attention to their hardware/software architecture. The discussion then turns to the block-structured…

  20. Computing Operating Characteristics Of Bearing/Shaft Systems

    Science.gov (United States)

    Moore, James D.

    1996-01-01

    SHABERTH computer program predicts operating characteristics of bearings in multibearing load-support system. Lubricated and nonlubricated bearings modeled. Calculates loads, torques, temperatures, and fatigue lives of ball and/or roller bearings on single shaft. Provides for analysis of reaction of system to termination of supply of lubricant to bearings and other lubricated mechanical elements. Valuable in design and analysis of shaft/bearing systems. Two versions of SHABERTH available. Cray version (LEW-14860), "Computing Thermal Performances Of Shafts and Bearings". IBM PC version (MFS-28818), written for IBM PC-series and compatible computers running MS-DOS.

  1. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  2. Innovations and Advances in Computer, Information, Systems Sciences, and Engineering

    CERN Document Server

    Sobh, Tarek

    2013-01-01

    Innovations and Advances in Computer, Information, Systems Sciences, and Engineering includes the proceedings of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2011). The contents of this book are a set of rigorously reviewed, world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of  Industrial Electronics, Technology and Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning.

  3. Computational intelligence for decision support in cyber-physical systems

    CERN Document Server

    Ali, A; Riaz, Zahid

    2014-01-01

    This book is dedicated to applied computational intelligence and soft computing techniques with special reference to decision support in Cyber Physical Systems (CPS), where the physical as well as the communication segment of the networked entities interact with each other. The joint dynamics of such systems result in a complex combination of computers, software, networks and physical processes all combined to establish a process flow at system level. This volume provides the audience with an in-depth vision about how to ensure dependability, safety, security and efficiency in real time by making use of computational intelligence in various CPS applications ranging from the nano-world to large scale wide area systems of systems. Key application areas include healthcare, transportation, energy, process control and robotics where intelligent decision support has key significance in establishing dynamic, ever-changing and high confidence future technologies. A recommended text for graduate students and researche...

  4. AUTOMATED COMPUTER SYSTEM OF VEHICLE VOICE CONTROL

    Directory of Open Access Journals (Sweden)

    A. Kravchenko

    2009-01-01

    Full Text Available Domestic cars and their foreign analogues are considered. Shortcomings related to the absence of an auxiliary electronic system that would increase the safety and comfort of vehicle operation are identified. The innovative development of a comprehensive voice control system providing reliability, comfort and simplicity of vehicle operation is proposed.

  5. Modeling Workflow Management in a Distributed Computing System ...

    African Journals Online (AJOL)

    Distributed computing is becoming increasingly important in our daily life. This is because it enables the people who use it to share information more rapidly and increases their productivity. A major characteristic feature of distributed computing is the explicit representation of process logic within a communication system, ...

  6. modeling workflow management in a distributed computing system ...

    African Journals Online (AJOL)

    Dr Obe

    It is a fact of life that various organisations and individuals are becoming increasingly dependent on distributed computing systems. According to V. Glushkov, a well-known Soviet scientist, "the development of computer networks and terminals results in a situation where the ever greater part of information, first and foremost ...

  7. Python for Scientific Computing Education: Modeling of Queueing Systems

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2014-01-01

    Full Text Available In this paper, we present a methodology for introducing scientific computing based on model-centered learning. We propose multiphase queueing systems as a basis for learning objects. We use Python and parallel programming for implementing the models, and present the computer code and results of stochastic simulations.
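
    As one example of the queueing models the record proposes as learning objects, the following self-contained Python sketch simulates a single-phase M/M/1 queue and checks the simulated mean waiting time against the analytic value; the rates are arbitrary choices:

      # Minimal M/M/1 queue simulation: Poisson arrivals, exponential
      # service, single FIFO server. Rates are illustrative.
      import random

      random.seed(42)
      LAM, MU, N = 0.8, 1.0, 100_000   # arrival rate, service rate, jobs

      t_arrival = 0.0
      server_free_at = 0.0
      total_wait = 0.0
      for _ in range(N):
          t_arrival += random.expovariate(LAM)      # next Poisson arrival
          start = max(t_arrival, server_free_at)    # FIFO discipline
          total_wait += start - t_arrival
          server_free_at = start + random.expovariate(MU)

      rho = LAM / MU
      print("simulated mean wait:  ", total_wait / N)
      print("theoretical M/M/1 wait:", rho / (MU - LAM))  # Wq = rho/(mu-lambda) = 4.0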

  8. On the programmability of heterogeneous massively-parallel computing systems

    OpenAIRE

    Gelado Fernández, Isaac

    2010-01-01

    Heterogeneous parallel computing combines general purpose processors with accelerators to efficiently execute both sequential control-intensive and data-parallel phases of applications. Existing programming models for heterogeneous parallel computing impose added coding complexity when compared to traditional sequential shared-memory programming models for homogeneous systems. This extra code complexity is assumable in supercomputing environments, where programmability is sacrificed in pursui...

  9. The Handbook for the Computer Security Certification of Trusted Systems

    Science.gov (United States)

    1992-10-12

    The Navy has designated the Naval Research Laboratory (NRL) as its Center for Computer Security Research and Evaluation. NRL is actively developing a...certification criteria through the production of the Handbook for the Computer Security Certification of Trusted Systems. Through this effort, NRL hopes to

  10. Software design for resilient computer systems

    CERN Document Server

    Schagaev, Igor

    2016-01-01

    This book addresses the question of how system software should be designed to account for faults, and which fault tolerance features it should provide for highest reliability. The authors first show how the system software interacts with the hardware to tolerate faults. They analyze and further develop the theory of fault tolerance to understand the different ways to increase the reliability of a system, with special attention on the role of system software in this process. They further develop the general algorithm of fault tolerance (GAFT) with its three main processes: hardware checking, preparation for recovery, and the recovery procedure. For each of the three processes, they analyze the requirements and properties theoretically and give possible implementation scenarios and system software support required. Based on the theoretical results, the authors derive an Oberon-based programming language with direct support of the three processes of GAFT. In the last part of this book, they introduce a simulator...

  11. Computer systems for annotation of single molecule fragments

    Science.gov (United States)

    Schwartz, David Charles; Severin, Jessica

    2016-07-19

    There are provided computer systems for visualizing and annotating single molecule images. Annotation systems in accordance with this disclosure allow a user to mark and annotate single molecules of interest and their restriction enzyme cut sites thereby determining the restriction fragments of single nucleic acid molecules. The markings and annotations may be automatically generated by the system in certain embodiments and they may be overlaid translucently onto the single molecule images. An image caching system may be implemented in the computer annotation systems to reduce image processing time. The annotation systems include one or more connectors connecting to one or more databases capable of storing single molecule data as well as other biomedical data. Such diverse array of data can be retrieved and used to validate the markings and annotations. The annotation systems may be implemented and deployed over a computer network. They may be ergonomically optimized to facilitate user interactions.

  12. Computing handbook information systems and information technology

    CERN Document Server

    Topi, Heikki

    2014-01-01

    Disciplinary Foundations and Global Impact: Evolving Discipline of Information Systems (Heikki Topi); Discipline of Information Technology (Barry M. Lunt and Han Reichgelt); Information Systems as a Practical Discipline (Juhani Iivari); Information Technology (Han Reichgelt, Joseph J. Ekstrom, Art Gowan, and Barry M. Lunt); Sociotechnical Approaches to the Study of Information Systems (Steve Sawyer and Mohammad Hossein Jarrahi); IT and Global Development (Erkki Sutinen); Using ICT for Development, Societal Transformation, and Beyond (Sherif Kamel). Technical Foundations of Data and Database Management: Data Models (Avi Silber

  13. A computer-controlled adaptive antenna system

    Science.gov (United States)

    Fetterolf, P. C.; Price, K. M.

    The problem of active pattern control in multibeam or phased array antenna systems is one that is well suited to technologies based upon microprocessor feedback control systems. Adaptive arrays can be realized by incorporating microprocessors as control elements in closed-loop feedback paths. As intelligent controllers, microprocessors can detect variations in arrays and implement suitable configuration changes. The subject of this paper is the application of the Howells-Applebaum power inversion algorithm in a C-band multibeam antenna system. A proof-of-concept, microprocessor-controlled, adaptive beamforming network (BFN) was designed and assembled, and subsequent tests demonstrated the algorithm's capacity for nulling narrowband jammers.
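
    The record identifies the Howells-Applebaum power inversion algorithm but not its implementation. Its steady-state weights are commonly approximated by sample matrix inversion, w ~ R^-1 s, where R is the input covariance matrix and s the quiescent steering vector; the NumPy sketch below uses that approximation with an invented 8-element array and jammer geometry:

      # Sample-matrix-inversion approximation to the steady state of the
      # Howells-Applebaum power-inversion loop: nulls form on strong
      # narrowband interferers. Array geometry is illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      N, SNAPSHOTS = 8, 2000                    # elements, samples
      d = 0.5                                   # spacing in wavelengths

      def steering(theta_deg):
          k = 2j * np.pi * d * np.sin(np.radians(theta_deg))
          return np.exp(k * np.arange(N))

      jammer = steering(25.0)                   # strong interferer at 25 deg
      X = (10.0 * jammer[:, None] * rng.standard_normal(SNAPSHOTS)
           + (rng.standard_normal((N, SNAPSHOTS))
              + 1j * rng.standard_normal((N, SNAPSHOTS))) / np.sqrt(2))
      R = X @ X.conj().T / SNAPSHOTS            # sample covariance
      w = np.linalg.solve(R, steering(0.0))     # quiescent beam at broadside

      def gain(theta):
          return 20 * np.log10(abs(w.conj() @ steering(theta)))

      print("gain at 0 deg: %.1f dB, at jammer: %.1f dB" % (gain(0), gain(25)))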

  14. Fundamentals of computational intelligence neural networks, fuzzy systems, and evolutionary computation

    CERN Document Server

    Keller, James M; Fogel, David B

    2016-01-01

    This book covers the three fundamental topics that form the basis of computational intelligence: neural networks, fuzzy systems, and evolutionary computation. The text focuses on inspiration, design, theory, and practical aspects of implementing procedures to solve real-world problems. While other books in the three fields that comprise computational intelligence are written by specialists in one discipline, this book is co-written by current or former Editors-in-Chief of IEEE Transactions on Neural Networks and Learning Systems and IEEE Transactions on Fuzzy Systems, and the founding Editor-in-Chief of IEEE Transactions on Evolutionary Computation. The coverage across the three topics is both uniform and consistent in style and notation. Discusses single-layer and multilayer neural networks, radial-basis function networks, and recurrent neural networks. Covers fuzzy set theory, fuzzy relations, fuzzy logic inference, fuzzy clustering and classification, fuzzy measures and fuzz...

  15. On The Computational Capabilities of Physical Systems. Part 2; Relationship With Conventional Computer Science

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task

  16. Research on the Teaching System of the University Computer Foundation

    Directory of Open Access Journals (Sweden)

    Ji Xiaoyun

    2016-01-01

    Full Text Available For university computer foundation courses, teaching contents should be classified and delivered through hierarchical teaching methods combined with professional-level training for students of different majors, with comprehensive after-class training methods to develop top-notch students. Establishing online Q&A and test platforms strengthens the integration of professional education and computer education. Studying and exploring such a training system for basic university computer courses, together with popularizing the basic programming course, promotes the cultivation of students' computing foundations, thinking methods and innovative practical abilities, and achieves the goal of individualized education tailored to the specific circumstances and needs of students and their professions.

  17. Mechatronic sensory system for computer integrated manufacturing

    CSIR Research Space (South Africa)

    Kumile, CM

    2007-05-01

    Full Text Available The changing manufacturing environment, characterised by aggressive competition on a global scale and rapid changes in process technology, requires manufacturing systems that are able to quickly adapt to new products and processes as well...

  18. Architecture of a Computer Based Instructional System

    Directory of Open Access Journals (Sweden)

    Emilia PECHEANU

    2000-12-01

    Full Text Available The paper describes the architecture of a tutorial system that can be used in various engineering graduate and postgraduate courses. The tutorial uses Internet-style WWW services to provide access to the teaching information and the evaluation exercises maintained with an RDBMS. The tutorial consists of server-side applications that process and present teaching material and assessment exercises to the student through the well-known Web interface. All information in the system is stored in a relational database. By closely sticking to the ANSI SQL specifications, the system can take advantage of a free database management system running on Linux, mini-SQL. The tutorial can be used to deliver any course online, creating new continuing-education opportunities. Taking advantage of modern deployment techniques, the instructional/assessment tutorial offers a high degree of accessibility.

  19. Distributed Computation in a Quadrupedal Robotic System

    Directory of Open Access Journals (Sweden)

    Daniel Kuehn

    2014-07-01

    Full Text Available Today's and future space missions will have to deal with increasing requirements regarding autonomy and flexibility in the locomotor system. To cope with these requirements, a higher bandwidth for sensor information is needed. In this paper, a robotic system is presented that is equipped with artificial feet and a spine incorporating increased sensing capabilities for walking robots. In the proposed quadrupedal robotic system, the front and rear parts are connected via an actuated spinal structure with six degrees of freedom. In order to increase the robustness of the system's locomotion in terms of traction and stability, a foot-like structure equipped with various sensors has been developed. In terms of distributed local control, both structures are as self-contained as possible with regard to sensing, sensor preprocessing, control and communication. This allows the robot to respond rapidly to occurring events with only minor latency.

  20. [Analog gamma camera digitalization computer system].

    Science.gov (United States)

    Rojas, G M; Quintana, J C; Jer, J; Astudillo, S; Arenas, L; Araya, H

    2004-01-01

    Digitalization of analogue gamma camera systems, using special acquisition boards in microcomputers and appropriate software for acquisition and processing of nuclear medicine images, is described in detail. Microcomputer integrated systems interconnected by means of a Local Area Network (LAN) and connected to several gamma cameras have been implemented using specialized acquisition boards. The PIP software (Portable Image Processing) was installed on each microcomputer to acquire and preprocess the nuclear medicine images. A specialized image processing software has been designed and developed for these purposes. This software allows processing of each nuclear medicine exam, in a semiautomatic procedure, and recording of the results on radiological films. A stable, flexible and inexpensive system which makes it possible to digitize, visualize, process, and print nuclear medicine images obtained from analogue gamma cameras was implemented in the Nuclear Medicine Division. Such a system yields higher quality images than those obtained with analogue cameras while keeping operating costs considerably lower (filming: 24.6%, fixing: 48.2%, and developing: 26%). Analogue gamma camera systems can be digitalized economically. This system makes it possible to obtain optimal clinical quality nuclear medicine images, to increase the acquisition and processing efficiency, and to reduce the steps involved in each exam.

  1. Morphable Computer Architectures for Highly Energy Aware Systems

    National Research Council Canada - National Science Library

    Kogge, Peter

    2004-01-01

    To achieve a revolutionary reduction in overall power consumption, computing systems must be constructed out of both inherently low-power structures and power-aware or energy-aware hardware and software subsystems...

  2. Computing Differential Invariants of Hybrid Systems as Fixedpoints

    National Research Council Canada - National Science Library

    Platzer, Andre; Clarke, Edmund M

    2008-01-01

    .... In order to verify non-trivial systems without solving their differential equations and without numerical errors, we use a continuous generalization of induction, for which our algorithm computes...

  3. Securing Cloud Computing from Different Attacks Using Intrusion Detection Systems

    Directory of Open Access Journals (Sweden)

    Omar Achbarou

    2017-03-01

    Full Text Available Cloud computing is a new way of integrating a set of old technologies to implement a new paradigm that creates an avenue for users to have access to shared and configurable resources through the Internet on demand. This system has many characteristics in common with distributed systems; hence, cloud computing also relies on networking features. Security is thus the biggest issue for this system, because cloud computing services are based on sharing. A cloud computing environment therefore requires intrusion detection systems (IDSs) to protect each machine against attacks. The aim of this work is to present a classification of attacks threatening the availability, confidentiality and integrity of cloud resources and services. Furthermore, we provide a literature review of attacks related to the identified categories. Additionally, this paper also introduces related intrusion detection models to identify and prevent these types of attacks.
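
    As a toy illustration of the signature-based flavour of intrusion detection surveyed in this literature, the sketch below matches request strings against known attack patterns; the signatures and requests are invented examples, not a production rule set:

      # Minimal signature-based detector: classify requests by matching
      # them against regular-expression attack signatures.
      import re

      SIGNATURES = {
          "sql_injection": re.compile(r"(union\s+select|or\s+1=1)", re.I),
          "path_traversal": re.compile(r"\.\./"),
      }

      def classify(request):
          """Return the names of all signatures matching the request."""
          return [name for name, pat in SIGNATURES.items()
                  if pat.search(request)]

      print(classify("GET /index.php?id=1 OR 1=1"))        # ['sql_injection']
      print(classify("GET /files?path=../../etc/passwd"))  # ['path_traversal']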

  4. Proceedings: Computer Science and Data Systems Technical Symposium, volume 1

    Science.gov (United States)

    Larsen, Ronald L.; Wallgren, Kenneth

    1985-01-01

    Progress reports and technical updates of programs being performed by NASA centers are covered. Presentations in viewgraph form are included for topics in three categories: computer science, data systems and space station applications.

  5. Proceedings: Computer Science and Data Systems Technical Symposium, volume 2

    Science.gov (United States)

    Larsen, Ronald L.; Wallgren, Kenneth

    1985-01-01

    Progress reports and technical updates of programs being performed by NASA centers are covered. Presentations in viewgraph form, along with abstracts, are included for topics in three categories: computer science, data systems, and space station applications.

  6. Towards a global monitoring system for CMS computing operations

    CERN Multimedia

    CERN. Geneva; Bauerdick, Lothar A.T.

    2012-01-01

    The operation of the CMS computing system requires a complex monitoring system to cover all its aspects: central services, databases, the distributed computing infrastructure, production and analysis workflows, the global overview of the CMS computing activities and the related historical information. Several tools are available to provide this information, developed both inside and outside of the collaboration and often used in common with other experiments. Despite the fact that the current monitoring allowed CMS to successfully perform its computing operations, an evolution of the system is clearly required, to adapt to the recent changes in the data and workload management tools and models and to address some shortcomings that make its usage less than optimal. Therefore, a recent and ongoing coordinated effort was started in CMS, aiming at improving the entire monitoring system by identifying its weaknesses and the new requirements from the stakeholders, rationalise and streamline existing components and ...

  7. Computational Fluid and Particle Dynamics in the Human Respiratory System

    CERN Document Server

    Tu, Jiyuan; Ahmadi, Goodarz

    2013-01-01

    Traditional research methodologies in the human respiratory system have always been challenging due to their invasive nature. Recent advances in medical imaging and computational fluid dynamics (CFD) have accelerated this research. This book compiles and details recent advances in the modelling of the respiratory system for researchers, engineers, scientists, and health practitioners. It breaks down the complexities of this field and provides both students and scientists with an introduction and starting point to the physiology of the respiratory system, fluid dynamics and advanced CFD modeling tools. In addition to a brief introduction to the physics of the respiratory system and an overview of computational methods, the book contains best-practice guidelines for establishing high-quality computational models and simulations. Inspiration for new simulations can be gained through innovative case studies as well as hands-on practice using pre-made computational code. Last but not least, students and researcher...

  8. User-Oriented Computer-Aided Hydraulic System Design.

    Science.gov (United States)

    1983-06-01

    Report No. FPRC 83-A-Fl, USER-ORIENTED COMPUTER-AIDED HYDRAULIC SYSTEM... Keywords: computer-aided design; user-oriented system simulation; power flow modeling; problem-oriented language; transient state; steady state; valves; FORTRAN; PL/I; pumps. ...a problem-oriented language for use with the developed program, and the models of commonly used hydraulic valves, pumps, motors, and cylinders are

  9. Effective Methodology for Security Risk Assessment of Computer Systems

    OpenAIRE

    Daniel F. García; Adrián Fernández

    2013-01-01

    Today, computer systems are more and more complex and are exposed to growing security risks. Security managers need effective security risk assessment methodologies that can model the increasing complexity of current computer systems while keeping the complexity of the assessment procedure low. This paper provides a brief analysis of common security risk assessment methodologies, leading to the selection of a proper methodology to fulfill these requirements. Then, a detai...

  10. Cluster-based localization and tracking in ubiquitous computing systems

    CERN Document Server

    Martínez-de Dios, José Ramiro; Torres-González, Arturo; Ollero, Anibal

    2017-01-01

    Localization and tracking are key functionalities in ubiquitous computing systems and techniques. In recent years a very high variety of approaches, sensors and techniques for indoor and GPS-denied environments have been developed. This book briefly summarizes the current state of the art in localization and tracking in ubiquitous computing systems focusing on cluster-based schemes. Additionally, existing techniques for measurement integration, node inclusion/exclusion and cluster head selection are also described in this book.

  11. CLOUD COMPUTING: A NEW VISION OF THE DISTRIBUTED SYSTEM

    OpenAIRE

    Taha Chaabouni; Maher Khemakhem

    2012-01-01

    Cloud computing is a new emerging system which offers information technology services via the Internet. Clients use the services they need, when and where they want, and pay only for what they have consumed. Cloud computing thus offers many advantages, especially for business. A deep study and understanding of this emerging system and its inherent components helps a lot in identifying what should be done to improve its performance. In this work, we first present cloud compu...

  12. Towards accurate quantum simulations of large systems with small computers.

    Science.gov (United States)

    Yang, Yonggang

    2017-01-24

    Numerical simulations are important for many systems. In particular, various standard computer programs have been developed for solving the quantum Schrödinger equations. However, the accuracy of these calculations is limited by computer capabilities. In this work, an iterative method is introduced to enhance the accuracy of these numerical calculations to a level that is otherwise prohibitive for conventional methods. The method is easily implementable and general for many systems.
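
    The record leaves the iteration unspecified; classical iterative refinement for linear systems is one well-known scheme of this kind (solve cheaply, compute the residual, solve for a correction, repeat). The NumPy sketch below shows that generic idea, not the paper's method for the Schrödinger equation:

      # Classical iterative refinement for A x = b: re-solving for the
      # residual recovers accuracy lost to low-precision arithmetic.
      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.standard_normal((50, 50)) + 50 * np.eye(50)  # well conditioned
      b = rng.standard_normal(50)

      # Low-precision first guess, then refine in double precision.
      x = np.linalg.solve(A.astype(np.float32), b.astype(np.float32))
      x = x.astype(np.float64)
      for _ in range(5):
          r = b - A @ x                    # residual in higher precision
          x += np.linalg.solve(A, r)       # correction step
          print("residual norm:", np.linalg.norm(r))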

  13. Redberry: a computer algebra system designed for tensor manipulation

    Science.gov (United States)

    Poslavsky, Stanislav; Bolotin, Dmitry

    2015-05-01

    In this paper we focus on the main aspects of computer-aided calculations with tensors and present a new computer algebra system Redberry which was specifically designed for algebraic tensor manipulation. We touch upon distinctive features of tensor software in comparison with pure scalar systems, discuss the main approaches used to handle tensorial expressions and present the comparison of Redberry performance with other relevant tools.

  14. Exploring Computation-Communication Tradeoffs in Camera Systems

    OpenAIRE

    Mazumdar, Amrita; Moreau, Thierry; Kim, Sung; Cowan, Meghan; Alaghi, Armin; Ceze, Luis; Oskin, Mark; Sathe, Visvesh

    2017-01-01

    Cameras are the de facto sensor. The growing demand for real-time and low-power computer vision, coupled with trends towards high-efficiency heterogeneous systems, has given rise to a wide range of image processing acceleration techniques at the camera node and in the cloud. In this paper, we characterize two novel camera systems that use acceleration techniques to push the extremes of energy and performance scaling, and explore the computation-communication tradeoffs in their design. The firs...

  15. Understanding and Improving the Performance Consistency of Distributed Computing Systems

    NARCIS (Netherlands)

    Yigitbasi, M.N.

    2012-01-01

    With the increasing adoption of distributed systems in both academia and industry, and with the increasing computational and storage requirements of distributed applications, users inevitably demand more from these systems. Moreover, users also depend on these systems for latency and throughput

  16. Secure system design and trustable computing

    CERN Document Server

    Potkonjak, Miodrag

    2016-01-01

    This book provides the foundations for understanding hardware security and trust, which have become major concerns for national security over the past decade.  Coverage includes issues related to security and trust in a variety of electronic devices and systems related to the security of hardware, firmware and software, spanning system applications, online transactions, and networking services.  This serves as an invaluable reference to the state-of-the-art research that is of critical significance to the security of, and trust in, modern society’s microelectronic-supported infrastructures.

  17. Decentralized Resource Management in Distributed Computer Systems.

    Science.gov (United States)

    1982-02-01

    MicroNet [47] was designed to support multiple... tolerate the loss of nodes, allow for a wide variety of interconnect topologies, and adapt to dynamic variations in loading. The designers of MicroNet

  18. 1st International Conference on Signal, Networks, Computing, and Systems

    CERN Document Server

    Mohapatra, Durga; Nagar, Atulya; Sahoo, Manmath

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented at the first International Conference on Signal, Networks, Computing, and Systems (ICSNCS 2016) held at Jawaharlal Nehru University, New Delhi, India during February 25–27, 2016. The book is organized into two volumes and primarily focuses on theory and applications in the broad areas of communication technology, computer science and information security. The book aims to bring together the latest scientific research works of academic scientists, professors, research scholars and students in the areas of signal, networks, computing and systems, detailing the practical challenges encountered and the solutions adopted.

  19. A neuromuscular monitoring system based on a personal computer.

    Science.gov (United States)

    White, D A; Hull, M

    1992-07-01

    We have developed a computerized neuromuscular monitoring system (NMMS) using commercially available subsystems, i.e., computer equipment, clinical nerve stimulator, force transducer, and strip-chart recorder. This NMMS was developed for acquisition and analysis of data for research and teaching purposes. Computer analysis of the muscle response to stimulation allows graphic and numeric presentation of the twitch response and calculated ratios. Since the system can store and recall data, research data can be accessed for analysis and graphic presentation. An IBM PC/AT computer is used as the central controller and data processor. The computer controls timing of the nerve stimulator output, initiates data acquisition, and adjusts the paper speed of the strip chart recorder. The data processing functions include establishing control response values (when no neuromuscular blockade is present), displaying force versus time and calculated data graphically and numerically, and storing these data for further analysis. The general purpose nature of the computer and strip chart recording equipment allow modification of the system primarily by changes in software. For example, new patterns of nerve stimulation, such as the posttetanic count, can be programmed into the computer system along with appropriate data display and analysis routines. The NMMS has functioned well in the operating room environment. We have had no episodes of electrocautery interference with the computer functions. The automated features have enhanced the utility of the NMMS.(ABSTRACT TRUNCATED AT 250 WORDS)
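
    The abstract does not list the specific calculated ratios; a standard one in neuromuscular monitoring, and a plausible candidate here, is the train-of-four (TOF) ratio, the peak force of the fourth evoked twitch divided by that of the first. A minimal sketch with hypothetical force values:

        def tof_ratio(twitch_forces):
            """Peak forces of the four train-of-four twitches, in stimulation order."""
            if len(twitch_forces) != 4:
                raise ValueError("train-of-four needs exactly four twitches")
            t1, _, _, t4 = twitch_forces
            return t4 / t1

        print(tof_ratio([10.0, 9.8, 9.7, 9.5]))   # ~0.95: little or no block
        print(tof_ratio([6.0, 4.1, 2.4, 1.2]))    # 0.2: deep neuromuscular block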

  20. Data Acquisition, Control, Communication and Computation System ...

    Indian Academy of Sciences (India)

    SOXS aims to study solar flares, which are the most violent and energetic phenomena in the solar system, in the energy range of 4–56 keV with high spectral and temporal resolution. SOXS employs state-of-the-art semiconductor devices, viz., Si-Pin and CZT detectors to achieve sub-keV energy resolution requirements.

  1. STAR Network Distributed Computer Systems Evaluation Results.

    Science.gov (United States)

    1981-02-12

    image processing systems. Further, because of the small data requirements, a segment of TOTT is a good candidate for VLSI. It can attain the...broadcast capabilities of the distributed architecture to isolate the overhead of accounting and enhancing fault isolation (see Figure B-1).

  2. Cloud computing principles, systems and applications

    CERN Document Server

    Antonopoulos, Nick

    2017-01-01

    This essential reference is a thorough and timely examination of the services, interfaces and types of applications that can be executed on cloud-based systems. Among other things, it identifies and highlights state-of-the-art techniques and methodologies.

  3. Programming Languages for Distributed Computing Systems

    NARCIS (Netherlands)

    Bal, H.E.; Steiner, J.G.; Tanenbaum, A.S.

    1989-01-01

    When distributed systems first appeared, they were programmed in traditional sequential languages, usually with the addition of a few library procedures for sending and receiving messages. As distributed applications became more commonplace and more sophisticated, this ad hoc approach became less

  4. Soft computing in green and renewable energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Gopalakrishnan, Kasthurirangan [Iowa State Univ., Ames, IA (United States). Iowa Bioeconomy Inst.; US Department of Energy, Ames, IA (United States). Ames Lab; Kalogirou, Soteris [Cyprus Univ. of Technology, Limassol (Cyprus). Dept. of Mechanical Engineering and Materials Sciences and Engineering; Khaitan, Siddhartha Kumar (eds.) [Iowa State Univ. of Science and Technology, Ames, IA (United States). Dept. of Electrical Engineering and Computer Engineering

    2011-07-01

    Soft Computing in Green and Renewable Energy Systems provides a practical introduction to the application of soft computing techniques and hybrid intelligent systems for designing, modeling, characterizing, optimizing, forecasting, and performance prediction of green and renewable energy systems. Research is proceeding at jet speed on renewable energy (energy derived from natural resources such as sunlight, wind, tides, rain, geothermal heat, biomass, hydrogen, etc.) as policy makers, researchers, economists, and world agencies have joined forces in finding alternative sustainable energy solutions to current critical environmental, economic, and social issues. The innovative models, environmentally benign processes, data analytics, etc. employed in renewable energy systems are computationally-intensive, non-linear and complex as well as involve a high degree of uncertainty. Soft computing technologies, such as fuzzy sets and systems, neural science and systems, evolutionary algorithms and genetic programming, and machine learning, are ideal in handling the noise, imprecision, and uncertainty in the data, and yet achieve robust, low-cost solutions. As a result, intelligent and soft computing paradigms are finding increasing applications in the study of renewable energy systems. Researchers, practitioners, undergraduate and graduate students engaged in the study of renewable energy systems will find this book very useful. (orig.)

  5. The Rabi Oscillation in Subdynamic System for Quantum Computing

    Directory of Open Access Journals (Sweden)

    Bi Qiao

    2015-01-01

    Full Text Available A quantum computation for the Rabi oscillation based on quantum dots in the subdynamic system is presented. The working states of the original Rabi oscillation are transformed to the eigenvectors of the subdynamic system. The dissipation and decoherence of the system then appear only as changes of the eigenvalues, i.e., as phase errors, since the eigenvectors are fixed. This makes both dissipation and decoherence easier to control, because only the relevant phase errors need to be corrected. This method can be extended to general quantum computation systems.
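
    For reference, the underlying Rabi oscillation itself is easy to reproduce numerically. The sketch below simulates a resonant two-level system with Hamiltonian H = (Omega/2)*sigma_x, for which the excited-state population follows sin^2(Omega*t/2); it illustrates only the textbook dynamics, not the paper's subdynamic-system transformation.

        import numpy as np

        omega = 2 * np.pi                                  # Rabi frequency Omega
        sx = np.array([[0, 1], [1, 0]], dtype=complex)     # Pauli sigma_x
        psi0 = np.array([1, 0], dtype=complex)             # start in |0>

        for t in np.linspace(0.0, 1.0, 5):
            # U(t) = exp(-i H t); for H = (Omega/2) sigma_x this equals
            # cos(Omega t / 2) I - i sin(Omega t / 2) sigma_x
            u = (np.cos(omega * t / 2) * np.eye(2)
                 - 1j * np.sin(omega * t / 2) * sx)
            p1 = abs((u @ psi0)[1]) ** 2                   # population of |1>
            print(f"t={t:.2f}  P(|1>)={p1:.3f}")           # follows sin^2(Omega t / 2)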

  6. Human computer interaction issues in Clinical Trials Management Systems.

    Science.gov (United States)

    Starren, Justin B; Payne, Philip R O; Kaufman, David R

    2006-01-01

    Clinical trials increasingly rely upon web-based Clinical Trials Management Systems (CTMS). As with clinical care systems, Human Computer Interaction (HCI) issues can greatly affect the usefulness of such systems. Evaluation of the user interface of one web-based CTMS revealed a number of potential human-computer interaction problems, in particular, increased workflow complexity associated with a web application delivery model and potential usability problems resulting from the use of ambiguous icons. Because these design features are shared by a large fraction of current CTMS, the implications extend beyond this individual system.

  7. High performance computing system for flight simulation at NASA Langley

    Science.gov (United States)

    Cleveland, Jeff I., II; Sudik, Steven J.; Grove, Randall D.

    1991-01-01

    The computer architecture and components used in the NASA Langley Advanced Real-Time Simulation System (ARTSS) are briefly described and illustrated with diagrams and graphs. Particular attention is given to the advanced Convex C220 processing units, the UNIX-based operating system, the software interface to the fiber-optic-linked Computer Automated Measurement and Control system, configuration-management and real-time supervisor software, ARTSS hardware modifications, and the current implementation status. Simulation applications considered include the Transport Systems Research Vehicle, the Differential Maneuvering Simulator, the General Aviation Simulator, and the Visual Motion Simulator.

  8. Method to Compute CT System MTF

    Energy Technology Data Exchange (ETDEWEB)

    Kallman, Jeffrey S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-05-03

    The modulation transfer function (MTF) is the normalized spatial frequency representation of the point spread function (PSF) of the system. Point objects are hard to come by, so typically the PSF is determined by taking the numerical derivative of the system's response to an edge. This is the method we use, and we typically use it with cylindrical objects. Given a cylindrical object, we first put an active contour around it, as shown in Figure 1(a). The active contour lets us know where the boundary of the test object is. We next set a threshold (Figure 1(b)) and determine the center of mass of the above threshold voxels. For the purposes of determining the center of mass, each voxel is weighted identically (not by voxel value).
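
    A one-dimensional, noise-free sketch of the edge method just described: differentiate the edge spread function (ESF) to obtain the line spread function (LSF), then take the normalized magnitude of its Fourier transform as the MTF. The Gaussian-blurred step edge is synthetic test data, not the cylindrical-object measurements described above.

        import numpy as np

        def mtf_from_edge(esf, dx=1.0):
            lsf = np.gradient(esf, dx)            # LSF = derivative of the edge response
            lsf = lsf / lsf.sum()                 # normalize so that MTF(0) = 1
            freqs = np.fft.rfftfreq(len(lsf), d=dx)
            return freqs, np.abs(np.fft.rfft(lsf))

        # Synthetic Gaussian-blurred step edge as a stand-in for measured data.
        x = np.linspace(-10.0, 10.0, 512)
        step = (x >= 0).astype(float)
        kernel = np.exp(-x**2 / 2.0)
        esf = np.convolve(step, kernel / kernel.sum(), mode="same")

        freqs, mtf = mtf_from_edge(esf, dx=x[1] - x[0])
        print(mtf[0])                              # -> 1.0 at zero spatial frequency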

  9. Computing enclosures for uncertain biochemical systems.

    Science.gov (United States)

    August, E; Koeppl, H

    2012-12-01

    In this study, the authors present a novel method that provides enclosures for state trajectories of a non-linear dynamical system with uncertainties in initial conditions and parameter values. It is based on solving positivity conditions by means of semi-definite programmes and sum of squares decompositions. The method accounts for the indeterminacy of kinetic parameters, measurement uncertainties and fluctuations in the reaction rates because of extrinsic noise. This is particularly useful in the field of systems biology when one seeks to determine model behaviour quantitatively or, if this is not possible, semiquantitatively. The authors also demonstrate the significance of the proposed method to model selection in biology. The authors illustrate the applicability of their method on the mitogen-activated protein kinase signalling pathway, which is an important and recurring network motif that apparently also plays a crucial role in the development of cancer.
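
    The authors' method rests on semi-definite programming and sum-of-squares certificates, which are beyond a short snippet. The sketch below conveys only the goal, an enclosure of all trajectories under parameter uncertainty, using a much cruder substitute: naive interval propagation for the linear decay model dx/dt = -k*x with the rate k known only to lie in an interval (discretization error is ignored).

        def interval_euler(x0, k_lo, k_hi, dt=0.01, steps=500):
            """Enclose x(t) for dx/dt = -k*x, x0 > 0, k in [k_lo, k_hi]."""
            lo = hi = x0
            for _ in range(steps):
                lo += dt * (-k_hi * lo)    # fastest possible decay bounds from below
                hi += dt * (-k_lo * hi)    # slowest possible decay bounds from above
            return lo, hi

        lo, hi = interval_euler(x0=1.0, k_lo=0.9, k_hi=1.1)
        print(lo, hi)   # every trajectory with k in [0.9, 1.1] ends between these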

  10. Computational Modeling, Formal Analysis, and Tools for Systems Biology.

    Directory of Open Access Journals (Sweden)

    Ezio Bartocci

    2016-01-01

    Full Text Available As the amount of biological data in the public domain grows, so does the range of modeling and analysis techniques employed in systems biology. In recent years, a number of theoretical computer science developments have enabled modeling methodology to keep pace. The growing interest within systems biology in executable models and their analysis has necessitated the borrowing of terms and methods from computer science, such as formal analysis, model checking, static analysis, and runtime verification. Here, we discuss the most important and exciting computational methods and tools currently available to systems biologists. We believe that a deeper understanding of the concepts and theory highlighted in this review will produce better software practice, improved investigation of complex biological processes, and even new ideas and better feedback into computer science.

  11. Development of a Computer Writing System Based on EOG.

    Science.gov (United States)

    López, Alberto; Ferrero, Francisco; Yangüela, David; Álvarez, Constantina; Postolache, Octavian

    2017-06-26

    The development of a novel computer writing system based on eye movements is introduced herein. A system of these characteristics requires the consideration of three subsystems: (1) A hardware device for the acquisition and transmission of the signals generated by eye movement to the computer; (2) A software application that allows, among other functions, data processing in order to minimize noise and classify signals; and (3) A graphical interface that allows the user to write text easily on the computer screen using eye movements only. This work analyzes these three subsystems and proposes innovative and low cost solutions for each one of them. This computer writing system was tested with 20 users and its efficiency was compared to a traditional virtual keyboard. The results have shown an important reduction in the time spent on writing, which can be very useful, especially for people with severe motor disorders.
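
    The paper's classification pipeline is not detailed in this abstract; the sketch below shows one simple scheme such a system could start from, a threshold rule applied to a baseline-corrected EOG window to detect left/right gaze deflections. All names, scales and thresholds here are hypothetical.

        import numpy as np

        def classify_eog(window, threshold=150.0):
            """window: 1-D array of EOG samples (hypothetical microvolt scale)."""
            centered = window - np.median(window)        # crude baseline removal
            peak = centered[np.argmax(np.abs(centered))]
            if peak > threshold:
                return "right"    # positive deflection (assumed electrode layout)
            if peak < -threshold:
                return "left"     # negative deflection
            return "none"

        saccade = np.concatenate([np.zeros(50), 300 * np.ones(20), np.zeros(50)])
        print(classify_eog(saccade))                     # -> "right"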

  12. Development of a Computer Writing System Based on EOG

    Directory of Open Access Journals (Sweden)

    Alberto López

    2017-06-01

    Full Text Available The development of a novel computer writing system based on eye movements is introduced herein. A system of these characteristics requires the consideration of three subsystems: (1) A hardware device for the acquisition and transmission of the signals generated by eye movement to the computer; (2) A software application that allows, among other functions, data processing in order to minimize noise and classify signals; and (3) A graphical interface that allows the user to write text easily on the computer screen using eye movements only. This work analyzes these three subsystems and proposes innovative and low cost solutions for each one of them. This computer writing system was tested with 20 users and its efficiency was compared to a traditional virtual keyboard. The results have shown an important reduction in the time spent on writing, which can be very useful, especially for people with severe motor disorders.

  13. COMPUTERS IN SYSTEMS OF HIGHER EDUCATION

    Science.gov (United States)

    governing or regulatory boards. Steps will have to be taken to provide training and orientation for all levels of management in higher education, especially...in the training of novice administrators who will manage tomorrow’s systems of higher education. Using the existing technology (not all of it as...instructional program of the institution. The main problem at the moment is not the technology, which has outpaced its users in higher education, but dissemination, development, and the training of appropriate personnel.

  14. A Management System for Computer Performance Evaluation.

    Science.gov (United States)

    1981-12-01

    background nearly always exposes an individual to fundamentals of mathematics and statistics. These traits of systematic thinking and a knowledge of math and...It may be derived rigorously through the use of measurement, simulation, or mathematics, or it may be literally estimated based on observation and...systematic identification of a computer performance management system. 5. Administration of Group and Project Management. Depending on the size and

  15. Cluster computer based education delivery system

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, D.M.; Bitzer, D.L.; Rader, R.K.; Sherwood, B.A.; Tucker, P.T.

    1987-01-13

    This patent describes an interactive instructional multi-processor system for providing instructional programs for execution at one or more processor stations while relieving memory requirements at the processor stations without allowing a perceivable delay to users at the processor stations as a result of paging of instructional program segments. The system comprises: a cluster subsystem and a plurality of processor stations interconnected by a high speed multi-access communication subsystem, in which the cluster subsystem comprises: at least one mass storage device for storing a library of instructional programs averaging at least about 50 kilobytes in length, high speed buffer means coupled to the mass storage device for simultaneously storing a plurality of instructional programs, an interface for the high speed communication subsystem, and processor means including a digital processor for managing the mass storage device, the high speed buffer means and the interface. The processor means further includes a bus interconnecting the mass storage device, the high speed buffer means, the interface and the digital processor. The digital processor includes controller means for transferring a requested instructional program from the mass storage device to the high speed buffer means and for retaining the instructional program in the high speed buffer means for at least a target time related to the processor stations coupled to the cluster subsystem.
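
    A toy sketch of the buffering policy described in the claim: programs are fetched from mass storage into a bounded buffer and retained for at least a target time, with eviction preferring entries past that target. This is an illustration of the idea only, not the patented implementation; all names are hypothetical.

        import time

        class ProgramBuffer:
            """Caches instructional programs; entries younger than the target
            retention time are kept when possible."""

            def __init__(self, capacity=8, target_secs=60.0):
                self.capacity, self.target = capacity, target_secs
                self.cache = {}                  # id -> [program_bytes, last_used]

            def fetch(self, program_id, mass_storage):
                if program_id not in self.cache:
                    if len(self.cache) >= self.capacity:
                        self._evict()
                    self.cache[program_id] = [mass_storage(program_id), time.time()]
                self.cache[program_id][1] = time.time()
                return self.cache[program_id][0]

            def _evict(self):
                now = time.time()
                stale = [p for p, (_, t) in self.cache.items()
                         if now - t > self.target]
                # Prefer entries past the retention target; fall back to plain
                # LRU if none are stale (a properly sized buffer avoids this).
                victims = stale or list(self.cache)
                oldest = min(victims, key=lambda p: self.cache[p][1])
                del self.cache[oldest]

        buf = ProgramBuffer(capacity=2, target_secs=0.1)
        load = lambda pid: ("program " + pid).encode()   # stand-in for a disk read
        buf.fetch("math-101", load)
        buf.fetch("physics-201", load)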

  16. Computational Virtual Reality (VR) as a human-computer interface in the operation of telerobotic systems

    Science.gov (United States)

    Bejczy, Antal K.

    1995-01-01

    This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  18. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  19. Evolution and development of complex computational systems using the paradigm of metabolic computing in Epigenetic Tracking

    OpenAIRE

    Alessandro Fontana; Borys Wróbel

    2013-01-01

    Epigenetic Tracking (ET) is an Artificial Embryology system which allows for the evolution and development of large complex structures built from artificial cells. In terms of the number of cells, the complexity of the bodies generated with ET is comparable with the complexity of biological organisms. We have previously used ET to simulate the growth of multicellular bodies with arbitrary 3-dimensional shapes which perform computation using the paradigm of "metabolic computing". In this pap...

  20. The Design Methodology of Distributed Computer Systems.

    Science.gov (United States)

    1980-12-01

    (Abstract not available; the indexed text consists of distribution boilerplate plus table-of-contents and bibliography fragments. Recoverable content: the report evaluates asynchronous concurrent systems, including a review of the basic properties of Petri nets, and cites, among others, Brinch Hansen, "Structured Multiprogramming," Comm. ACM, Vol. 15.)

  1. Computer-aided measurement system analysis

    Directory of Open Access Journals (Sweden)

    J. Feliks

    2010-07-01

    Full Text Available Product analysis with the alternative method is commonly used, especially where the direct or indirect measurement taken as a numerical value of the interesting feature of the product is infeasible, difficult or too expensive. Such an analysis results in deciding whether a given product meets the specified requirements or not. The product may also be analysed in several categories. Neither the measurement itself, nor its result, provides information on the extent to which the requirements are met with respect to the analysed feature. The measurement only supports the decision whether to accept the part inspected as ‘good’ or reject and deem it ‘bad’ (made improperly). Several analysis methods for systems of this type have been described in the literature: the Analytic Method, the Signal Detection Method, and Cohen’s Kappa (Cross Tab) Method. The paper discusses selected methods of measurement system analysis for alternative parameters in the scope of requirements related to the application of statistical process control and quality control. The feasibility of using the MS Excel® package for procedure implementation and for the analysis of results from measurement experiments is presented.
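
    Of the methods listed, Cohen's Kappa is the easiest to reproduce compactly: it measures how much two appraisers' accept/reject decisions agree beyond what chance alone would produce. A minimal sketch with hypothetical ratings:

        def cohens_kappa(rater_a, rater_b):
            """Cohen's kappa for two equal-length lists of categorical ratings."""
            labels = sorted(set(rater_a) | set(rater_b))
            n = len(rater_a)
            p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            p_exp = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                        for l in labels)
            return (p_obs - p_exp) / (1 - p_exp)

        a = ["good", "good", "bad", "good", "bad", "good", "bad", "bad"]
        b = ["good", "bad",  "bad", "good", "bad", "good", "good", "bad"]
        print(round(cohens_kappa(a, b), 3))    # 0.5: moderate agreement beyond chance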

  2. Industrial Personal Computer based Display for Nuclear Safety System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji Hyeon; Kim, Aram; Jo, Jung Hee; Kim, Ki Beom; Cheon, Sung Hyun; Cho, Joo Hyun; Sohn, Se Do; Baek, Seung Min [KEPCO, Youngin (Korea, Republic of)

    2014-08-15

    The safety display of a nuclear system has been classified as important to safety (SIL: Safety Integrity Level 3). Regulatory agencies are now imposing stricter safety requirements for digital safety display systems. To satisfy these requirements, it is necessary to develop a safety-critical (SIL 4) grade safety display system. This paper proposes an industrial personal computer based safety display system with a safety grade operating system and safety grade display methods. The description consists of three parts: the background, the safety requirements, and the proposed safety display system design. The hardware platform is designed using a commercially available off-the-shelf processor board with a backplane bus. The operating system is customized for the nuclear safety display application. The display unit is designed with two improvement features: one is to provide two separate processors for the main computer and the display device, connected by serial communication, and the other is to use a Digital Visual Interface between the main computer and the display device. In this case the main computer uses minimized graphic functions for the safety display. The display design is at the conceptual phase, and there are several open areas that must be settled to obtain a solid system. The main purpose of this paper is to describe and suggest a methodology for developing a safety-critical display system, with the descriptions focused on the safety requirements point of view.

  3. Computer model of cardiovascular control system responses to exercise

    Science.gov (United States)

    Croston, R. C.; Rummel, J. A.; Kay, F. J.

    1973-01-01

    Approaches of systems analysis and mathematical modeling together with computer simulation techniques are applied to the cardiovascular system in order to simulate dynamic responses of the system to a range of exercise work loads. A block diagram of the circulatory model is presented, taking into account arterial segments, venous segments, arterio-venous circulation branches, and the heart. A cardiovascular control system model is also discussed together with model test results.

  4. Towards a Global Monitoring System for CMS Computing Operations

    Energy Technology Data Exchange (ETDEWEB)

    Bauerdick, L. A.T. [Fermilab; Sciaba, Andrea [CERN

    2012-01-01

    The operation of the CMS computing system requires a complex monitoring system to cover all its aspects: central services, databases, the distributed computing infrastructure, production and analysis workflows, the global overview of the CMS computing activities and the related historical information. Several tools are available to provide this information, developed both inside and outside of the collaboration and often used in common with other experiments. Despite the fact that the current monitoring allowed CMS to successfully perform its computing operations, an evolution of the system is clearly required, to adapt to the recent changes in the data and workload management tools and models and to address some shortcomings that make its usage less than optimal. Therefore, a recent and ongoing coordinated effort was started in CMS, aiming at improving the entire monitoring system by identifying its weaknesses and the new requirements from the stakeholders, rationalise and streamline existing components and drive future software development. This contribution gives a complete overview of the CMS monitoring system and a description of all the recent activities that have been started with the goal of providing a more integrated, modern and functional global monitoring system for computing operations.

  5. Complex system modelling and control through intelligent soft computations

    CERN Document Server

    Azar, Ahmad

    2015-01-01

    The book offers a snapshot of the theories and applications of soft computing in the area of complex systems modeling and control. It presents the most important findings discussed during the 5th International Conference on Modelling, Identification and Control, held in Cairo, from August 31-September 2, 2013. The book consists of twenty-nine selected contributions, which have been thoroughly reviewed and extended before their inclusion in the volume. The different chapters, written by active researchers in the field, report on both current theories and important applications of soft-computing. Besides providing the readers with soft-computing fundamentals, and soft-computing based inductive methodologies/algorithms, the book also discusses key industrial soft-computing applications, as well as multidisciplinary solutions developed for a variety of purposes, like windup control, waste management, security issues, biomedical applications and many others. It is a perfect reference guide for graduate students, r...

  6. Computer aided system for parametric design of combination die

    Science.gov (United States)

    Naranje, Vishal G.; Hussein, H. M. A.; Kumar, S.

    2017-09-01

    In this paper, a computer aided system for the parametric design of combination dies is presented. The system is developed using the knowledge based system technique of artificial intelligence. It is capable of designing combination dies for the production of sheet metal parts requiring punching and cupping operations. The system is coded in Visual Basic and interfaced with AutoCAD software. Its low cost will help die designers in small and medium scale sheet metal industries design combination dies for similar types of products, and it reduces the design time and effort required of die designers.

  7. Neuromorphic Computing – From Materials Research to Systems Architecture Roundtable

    Energy Technology Data Exchange (ETDEWEB)

    Schuller, Ivan K. [Univ. of California, San Diego, CA (United States); Stevens, Rick [Argonne National Lab. (ANL), Argonne, IL (United States); Univ. of Chicago, IL (United States); Pino, Robinson [Dept. of Energy (DOE) Office of Science, Washington, DC (United States); Pechan, Michael [Dept. of Energy (DOE) Office of Science, Washington, DC (United States)

    2015-10-29

    Computation in its many forms is the engine that fuels our modern civilization. Modern computation—based on the von Neumann architecture—has allowed, until now, the development of continuous improvements, as predicted by Moore’s law. However, computation using current architectures and materials will inevitably—within the next 10 years—reach a limit because of fundamental scientific reasons. DOE convened a roundtable of experts in neuromorphic computing systems, materials science, and computer science in Washington on October 29-30, 2015 to address the following basic questions: Can brain-like (“neuromorphic”) computing devices based on new material concepts and systems be developed to dramatically outperform conventional CMOS based technology? If so, what are the basic research challenges for materials science and computing? The overarching answer that emerged was: The development of novel functional materials and devices incorporated into unique architectures will allow a revolutionary technological leap toward the implementation of a fully “neuromorphic” computer. To address this challenge, the following issues were considered: the main differences between neuromorphic and conventional computing as related to signaling models, timing/clock, non-volatile memory, architecture, fault tolerance, integrated memory and compute, noise tolerance, analog vs. digital, and in situ learning; new neuromorphic architectures needed to produce lower energy consumption, potential novel nanostructured materials, and enhanced computation; device and materials properties needed to implement functions such as hysteresis, stability, and fault tolerance; and comparisons of different implementations—spin torque, memristors, resistive switching, phase change, and optical schemes—for enhanced breakthroughs in performance, cost, fault tolerance, and/or manufacturability.

  8. Advanced computer architecture specification for automated weld systems

    Science.gov (United States)

    Katsinis, Constantine

    1994-01-01

    This report describes the requirements for an advanced automated weld system and the associated computer architecture, and defines the overall system specification from a broad perspective. According to the requirements of welding procedures as they relate to an integrated multiaxis motion control and sensor architecture, the computer system requirements are developed based on a proven multiple-processor architecture with an expandable, distributed-memory, single global bus architecture, containing individual processors which are assigned to specific tasks that support sensor or control processes. The specified architecture is sufficiently flexible to integrate previously developed equipment, be upgradable and allow on-site modifications.

  9. Fundamentals of power integrity for computer platforms and systems

    CERN Document Server

    DiBene, Joseph T

    2014-01-01

    An all-encompassing text that focuses on the fundamentals of power integrity Power integrity is the study of power distribution from the source to the load and the system level issues that can occur across it. For computer systems, these issues can range from inside the silicon to across the board and may egress into other parts of the platform, including thermal, EMI, and mechanical. With a focus on computer systems and silicon level power delivery, this book sheds light on the fundamentals of power integrity, utilizing the author's extensive background in the power integrity industry and un

  10. Modern Embedded Computing Designing Connected, Pervasive, Media-Rich Systems

    CERN Document Server

    Barry, Peter

    2012-01-01

    Modern embedded systems are used for connected, media-rich, and highly integrated handheld devices such as mobile phones, digital cameras, and MP3 players. All of these embedded systems require networking, graphic user interfaces, and integration with PCs, as opposed to traditional embedded processors that can perform only limited functions for industrial applications. While most books focus on these controllers, Modern Embedded Computing provides a thorough understanding of the platform architecture of modern embedded computing systems that drive mobile devices. The book offers a comprehen

  11. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
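
    A simplified sketch of the template comparison described above (not the patented protocol): checkpoint state is split into fixed-size blocks, each block is checksummed, and only blocks whose checksums differ from the stored template need to be transmitted and saved.

        import hashlib

        BLOCK = 4096

        def blocks(data):
            return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

        def checksums(data):
            return [hashlib.sha1(b).hexdigest() for b in blocks(data)]

        def delta_checkpoint(state, template_sums):
            """Blocks of `state` whose checksum differs from the template's."""
            return {i: b for i, b in enumerate(blocks(state))
                    if i >= len(template_sums)
                    or hashlib.sha1(b).hexdigest() != template_sums[i]}

        template_sums = checksums(b"A" * (3 * BLOCK))       # saved once, earlier
        state = b"A" * BLOCK + b"B" * BLOCK + b"A" * BLOCK  # block 1 has changed
        print(sorted(delta_checkpoint(state, template_sums)))  # -> [1]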

  12. The engineering design integration (EDIN) system. [digital computer program complex

    Science.gov (United States)

    Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.

    1974-01-01

    A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.

  13. An E-learning System based on Affective Computing

    Science.gov (United States)

    Duo, Sun; Song, Lu Xue

    In recent years, e-learning has become very popular as a learning system, but current e-learning systems cannot instruct students effectively because they do not consider the student's emotional state in the context of instruction. The emerging theory of "Affective computing" can address this problem: it extends the computer's intelligence beyond the purely cognitive. In this paper, we construct an emotionally intelligent e-learning system based on "Affective computing". A dimensional model is put forward to recognize and analyze the student's emotional state, and a virtual teacher's avatar is offered to regulate the student's learning psychology, with the teaching style chosen according to the student's personality traits. A "man-to-man" learning environment is built into the system to simulate traditional classroom pedagogy.

  14. A review of residential computer oriented energy control systems

    Energy Technology Data Exchange (ETDEWEB)

    North, Greg

    2000-07-01

    The purpose of this report is to bring together as much information on Residential Computer Oriented Energy Control Systems as possible within a single document. This report identifies the main elements of the system and is intended to provide many technical options for the design and implementation of various energy related services.

  15. Demonstrating Operating System Principles via Computer Forensics Exercises

    Science.gov (United States)

    Duffy, Kevin P.; Davis, Martin H., Jr.; Sethi, Vikram

    2010-01-01

    We explore the feasibility of sparking student curiosity and interest in the core required MIS operating systems course through inclusion of computer forensics exercises into the course. Students were presented with two in-class exercises. Each exercise demonstrated an aspect of the operating system, and each exercise was written as a computer…

  16. [Filing and processing systems of ultrasonic images in personal computers].

    Science.gov (United States)

    Filatov, I A; Bakhtin, D A; Orlov, A V

    1994-01-01

    The paper covers the software pattern for the ultrasonic image filing and processing system. The system records images on a computer display in real time or still, processes them by local filtration techniques, makes different measurements and stores the findings in the graphic database. It is stressed that the database should be implemented as a network version.

  17. Development of an Intelligent Instruction System for Mathematical Computation

    Science.gov (United States)

    Kim, Du Gyu; Lee, Jaemu

    2013-01-01

    In this paper, we propose the development of a web-based, intelligent instruction system to help elementary school students for mathematical computation. We concentrate on the intelligence facilities which support diagnosis and advice. The existing web-based instruction systems merely give information on whether the learners' replies are…

  18. Snore related signals processing in a private cloud computing system.

    Science.gov (United States)

    Qian, Kun; Guo, Jian; Xu, Huijie; Zhu, Zhaomeng; Zhang, Gongxuan

    2014-09-01

    Snore related signals (SRS) have been demonstrated in recent years to carry important information about the site and degree of obstruction in the upper airway of Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) patients. To make this acoustic signal analysis method more accurate and robust, processing of large volumes of SRS data is inevitable. As an emerging concept and technology, cloud computing has motivated numerous researchers and engineers to exploit applications in both academia and industry, and it holds great promise for biomedical engineering. Considering the security and transfer requirements of biomedical data, we designed a system based on private cloud computing to process SRS. We then conducted comparative experiments, processing a 5-hour audio recording of an OSAHS patient on a personal computer, a server, and a private cloud computing system, to demonstrate the efficiency of the proposed infrastructure.

  19. Optical character recognition systems for different languages with soft computing

    CERN Document Server

    Chaudhuri, Arindam; Badelia, Pratixa; K Ghosh, Soumya

    2017-01-01

    The book offers a comprehensive survey of soft-computing models for optical character recognition systems. The various techniques, including fuzzy and rough sets, artificial neural networks and genetic algorithms, are tested using real texts written in different languages, such as English, French, German, Latin, Hindi and Gujarati, extracted from publicly available datasets. The simulation studies, which are reported in detail here, show that soft-computing based modeling of OCR systems performs consistently better than traditional models. Mainly intended as a state-of-the-art survey for postgraduates and researchers in pattern recognition, optical character recognition and soft computing, this book will also be useful for professionals in computer vision and image processing dealing with different issues related to optical character recognition.

  20. Experimental quantum computing to solve systems of linear equations.

    Science.gov (United States)

    Cai, X-D; Weedbrook, C; Su, Z-E; Chen, M-C; Gu, Mile; Zhu, M-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei

    2013-06-07

    Solving linear systems of equations is ubiquitous in all areas of science and engineering. With rapidly growing data sets, such a task can be intractable for classical computers, as the best known classical algorithms require a time proportional to the number of variables N. A recently proposed quantum algorithm shows that quantum computers could solve linear systems in a time scale of order log(N), giving an exponential speedup over classical computers. Here we realize the simplest instance of this algorithm, solving 2×2 linear equations for various input vectors on a quantum computer. We use four quantum bits and four controlled logic gates to implement every subroutine required, demonstrating the working principle of this algorithm.
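
    For orientation, the sketch below solves a 2x2 instance classically and normalizes the result, since what the quantum algorithm actually prepares is a state proportional to the solution of Ax = b. The Hermitian matrix and input vector are hypothetical, not the ones used in the experiment.

        import numpy as np

        A = np.array([[1.5, 0.5],
                      [0.5, 1.5]])          # Hermitian, as the algorithm requires
        b = np.array([1.0, 0.0])
        b = b / np.linalg.norm(b)           # |b> must be a normalized quantum state

        x = np.linalg.solve(A, b)
        x_state = x / np.linalg.norm(x)     # what the quantum register would encode
        print(x_state)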