WorldWideScience

Sample records for image analysis software

  1. Image Processing Software

    Science.gov (United States)

    Bosio, M. A.

    1990-11-01

ABSTRACT: A brief description of astronomical image-processing software is presented. This software was developed on a Digital MicroVAX II computer system. DATA ANALYSIS - IMAGE PROCESSING

  2. SIMA: Python software for analysis of dynamic fluorescence imaging data

    Directory of Open Access Journals (Sweden)

    Patrick eKaifosh

    2014-09-01

Full Text Available Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/.
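
    As an illustration of the signal-extraction step described above, the following is a minimal NumPy sketch that averages pixel intensities inside a binary ROI mask for every frame of a motion-corrected stack. It is a generic illustration, not SIMA's actual API; the array names and shapes are assumptions.

```python
import numpy as np

def extract_roi_signal(frames, roi_mask):
    """Average fluorescence inside a binary ROI for every frame.

    frames:   3D array (time, height, width) of motion-corrected images
    roi_mask: 2D boolean array (height, width) marking one segmented ROI
    """
    # Boolean indexing keeps only the pixels inside the ROI for each frame.
    return frames[:, roi_mask].mean(axis=1)

# Example with synthetic data (shapes are arbitrary assumptions).
frames = np.random.rand(100, 64, 64)           # 100 frames of 64x64 pixels
roi_mask = np.zeros((64, 64), dtype=bool)
roi_mask[20:30, 20:30] = True                  # a square ROI
signal = extract_roi_signal(frames, roi_mask)  # one value per frame
print(signal.shape)                            # (100,)
```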

  3. Stromatoporoid biometrics using image analysis software: A first order approach

    Science.gov (United States)

    Wolniewicz, Pawel

    2010-04-01

Strommetric is a new image analysis computer program that performs morphometric measurements of stromatoporoid sponges. The program measures 15 features of skeletal elements (pillars and laminae) visible in both longitudinal and transverse thin sections. The software is implemented in C++, using the Open Computer Vision (OpenCV) library. The image analysis system distinguishes skeletal elements from sparry calcite using Otsu's method for image thresholding. More than 150 photos of thin sections were used as a test set, from which 36,159 measurements were obtained. The software provided about one hundred times more data than the method applied until now. The data obtained are reproducible, even if the work is repeated by different workers. Thus the method makes biometric studies of stromatoporoids objective.
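
    The thresholding step mentioned in the abstract can be reproduced with OpenCV's implementation of Otsu's method. The sketch below is a generic Python illustration rather than Strommetric's own C++ code, and the file name is a placeholder.

```python
import cv2

# Load a thin-section photograph as a greyscale image (file name is a placeholder).
gray = cv2.imread("thin_section.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method picks the threshold that best separates the two grey-level
# populations (here: skeletal elements vs. sparry calcite).
threshold, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu threshold:", threshold)

# 'binary' can then be passed to contour/measurement routines,
# e.g. cv2.findContours, to measure pillars and laminae.
```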

  4. Pattern recognition software and techniques for biological image analysis.

    Directory of Open Access Journals (Sweden)

    Lior Shamir

    Full Text Available The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays.

  5. Pattern recognition software and techniques for biological image analysis.

    Science.gov (United States)

    Shamir, Lior; Delaney, John D; Orlov, Nikita; Eckley, D Mark; Goldberg, Ilya G

    2010-11-24

    The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays.
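
    To make the pattern-recognition workflow concrete, here is a hedged sketch of the generic approach the abstract describes: compute numeric features from each image, then train a classifier on labelled examples. It uses scikit-learn and a deliberately simplistic feature set; it is not the authors' feature extraction or their software.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def simple_features(image):
    """Toy feature vector: intensity statistics plus a coarse histogram.
    Real biological pattern-recognition tools use far richer feature sets."""
    hist, _ = np.histogram(image, bins=16, range=(0.0, 1.0), density=True)
    return np.concatenate(([image.mean(), image.std()], hist))

# Synthetic example: two "classes" of images differing only in brightness.
rng = np.random.default_rng(0)
images = [rng.random((64, 64)) * (0.5 if label == 0 else 1.0)
          for label in (0, 1) for _ in range(50)]
labels = np.array([0] * 50 + [1] * 50)

X = np.array([simple_features(im) for im in images])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())  # classification accuracy
```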

  6. Biological Imaging Software Tools

    Science.gov (United States)

    Eliceiri, Kevin W.; Berthold, Michael R.; Goldberg, Ilya G.; Ibáñez, Luis; Manjunath, B.S.; Martone, Maryann E.; Murphy, Robert F.; Peng, Hanchuan; Plant, Anne L.; Roysam, Badrinath; Stuurman, Nico; Swedlow, Jason R.; Tomancak, Pavel; Carpenter, Anne E.

    2013-01-01

    Few technologies are more widespread in modern biological laboratories than imaging. Recent advances in optical technologies and instrumentation are providing hitherto unimagined capabilities. Almost all these advances have required the development of software to enable the acquisition, management, analysis, and visualization of the imaging data. We review each computational step that biologists encounter when dealing with digital images, the challenges in that domain, and the overall status of available software for bioimage informatics, focusing on open source options. PMID:22743775

  7. Image analysis software versus direct anthropometry for breast measurements.

    Science.gov (United States)

    Quieregatto, Paulo Rogério; Hochman, Bernardo; Furtado, Fabianne; Machado, Aline Fernanda Perez; Sabino Neto, Miguel; Ferreira, Lydia Masako

    2014-10-01

To compare breast measurements performed using the software packages ImageTool®, AutoCAD® and Adobe Photoshop® with direct anthropometric measurements. Points were marked on the breasts and arms of 40 volunteer women aged between 18 and 60 years. When connecting the points, seven linear segments and one angular measurement on each half of the body, and one medial segment common to both body halves were defined. The volunteers were photographed in a standardized manner. Photogrammetric measurements were performed by three independent observers using the three software packages and compared to direct anthropometric measurements made with calipers and a protractor. Measurements obtained with AutoCAD® were the most reproducible and those made with ImageTool® were the most similar to direct anthropometry, while measurements with Adobe Photoshop® showed the largest differences. Except for angular measurements, significant differences were found between measurements of line segments made using the three software packages and those obtained by direct anthropometry. AutoCAD® provided the highest precision and intermediate accuracy; ImageTool® had the highest accuracy and lowest precision; and Adobe Photoshop® showed intermediate precision and the worst accuracy among the three software packages.

  8. AMIDE: A Free Software Tool for Multimodality Medical Image Analysis

    Directory of Open Access Journals (Sweden)

    Andreas Markus Loening

    2003-07-01

Full Text Available Amide's a Medical Image Data Examiner (AMIDE) has been developed as a user-friendly, open-source software tool for displaying and analyzing multimodality volumetric medical images. Central to the package's abilities to simultaneously display multiple data sets (e.g., PET, CT, MRI) and regions of interest is the on-demand data reslicing implemented within the program. Data sets can be freely shifted, rotated, viewed, and analyzed with the program automatically handling interpolation as needed from the original data. Validation has been performed by comparing the output of AMIDE with that of several existing software packages. AMIDE runs on UNIX, Macintosh OS X, and Microsoft Windows platforms, and it is freely available with source code under the terms of the GNU General Public License.

  9. A simple method of image analysis to estimate CAM vascularization by APERIO ImageScope software.

    Science.gov (United States)

    Marinaccio, Christian; Ribatti, Domenico

    2015-01-01

    The chick chorioallantoic membrane (CAM) assay is a well-established method to test the angiogenic stimulation or inhibition induced by molecules and cells administered onto the CAM. The quantification of blood vessels in the CAM assay relies on a semi-manual image analysis approach which can be time consuming when considering large experimental groups. Therefore we present here a simple and fast volumetric method to inspect differences in vascularization between experimental conditions related to the stimulation and inhibition of CAM angiogenesis based on the Positive Pixel Count algorithm embedded in the APERIO ImageScope software.
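
    A positive-pixel-count style measurement can be approximated in a few lines of NumPy: classify pixels as "positive" by a colour criterion and report their fraction. The hue and saturation thresholds below are arbitrary assumptions for illustration, not the parameters of Aperio's proprietary algorithm.

```python
import numpy as np
from skimage.color import rgb2hsv

def positive_pixel_fraction(rgb_image, hue_range=(0.9, 1.0), min_saturation=0.2):
    """Fraction of pixels whose hue falls in hue_range with sufficient saturation.

    rgb_image: float RGB array in [0, 1], shape (H, W, 3).
    The thresholds are illustrative only.
    """
    hsv = rgb2hsv(rgb_image)
    hue, sat = hsv[..., 0], hsv[..., 1]
    positive = (hue >= hue_range[0]) & (hue <= hue_range[1]) & (sat >= min_saturation)
    return positive.sum() / positive.size

# Usage idea: compare the fraction between control and treated CAM images
# to quantify relative stimulation or inhibition of vascularization.
```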

  10. Public-domain software for root image analysis

    Directory of Open Access Journals (Sweden)

    Mirian Cristina Gomes Costa

    2014-10-01

Full Text Available In the search for high efficiency in root studies, computational systems have been developed to analyze digital images. ImageJ and Safira are public-domain systems that may be used for image analysis of washed roots. However, differences between root properties measured using ImageJ and Safira are expected. This study compared values of root length and surface area obtained with the public-domain systems with values obtained by a reference method. Root samples were collected in a banana plantation in an area of a shallower Typic Carbonatic Haplic Cambisol (CXk) and an area of a deeper Typic Haplic Ta Eutrophic Cambisol (CXve), at six depths in five replications. Root images were digitized and the systems ImageJ and Safira used to determine root length and surface area. The line-intersect method modified by Tennant was used as reference; values of root length and surface area measured with the different systems were analyzed by Pearson's correlation coefficient and compared by the confidence interval and t-test. Both systems ImageJ and Safira had positive correlation coefficients with the reference method for root length and surface area data in CXk and CXve. The correlation coefficient ranged from 0.54 to 0.80, with the lowest value observed for ImageJ in the measurement of surface area of roots sampled in CXve. The 95 % confidence interval revealed that root length measurements with Safira did not differ from those with the reference method in CXk (-77.3 to 244.0 mm). Regarding surface area measurements, Safira did not differ from the reference method for samples collected in CXk (-530.6 to 565.8 mm²) as well as in CXve (-4231 to 612.1 mm²). However, measurements with ImageJ were different from those obtained by the reference method, underestimating length and surface area in samples collected in CXk and CXve. Both ImageJ and Safira allow an identification of increases or decreases in root length and surface area. However, Safira results for root length and surface area are
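
    For reference, the modified line-intersect method used here as the benchmark estimates root length from the number of root-gridline intersections. The sketch below states the commonly cited form of Tennant's formula; treat the constant and its use as a summary of the standard method under that assumption, not as this paper's exact protocol.

```python
def tennant_root_length(intersections, grid_unit_mm):
    """Tennant's modified line-intersect estimate of root length.

    intersections: number of times roots cross the grid lines
    grid_unit_mm:  spacing of the counting grid in mm
    """
    return 11.0 / 14.0 * intersections * grid_unit_mm

# Example: 240 intersections counted on a 10 mm grid
print(tennant_root_length(240, 10.0), "mm")  # about 1885.7 mm
```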

  11. Software phantom with realistic speckle modeling for validation of image analysis methods in echocardiography

    Science.gov (United States)

    Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten

    2014-03-01

Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, make the majority of standard methods from image analysis non-optimal. Furthermore, validation of adapted computer vision methods proves to be difficult due to missing ground truth information. There is no widely accepted software phantom in the community and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy regarding the resulting textures.
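
    As a hedged illustration of what such a phantom must generate, the snippet below superimposes simple multiplicative speckle on a noise-free image. The gamma-distributed multiplier is one common generic choice and is an assumption here, not the tissue-specific model proposed in the paper.

```python
import numpy as np

def add_multiplicative_speckle(image, looks=4.0, seed=0):
    """Corrupt an image with multiplicative speckle.

    The speckle field is drawn from a gamma distribution with unit mean;
    'looks' controls its variance (smaller = noisier). This is a generic
    model, not the phantom's tissue-dependent simulation.
    """
    rng = np.random.default_rng(seed)
    speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=image.shape)
    return image * speckle

phantom = np.ones((128, 128))            # noise-free "tissue" region
noisy = add_multiplicative_speckle(phantom)
print(noisy.mean(), noisy.std())         # mean ~1, std ~1/sqrt(looks)
```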

  12. Fluorescence Image Analyzer - FLIMA: software for quantitative analysis of fluorescence in situ hybridization.

    Science.gov (United States)

    Silva, H C M; Martins-Júnior, M M C; Ribeiro, L B; Matoso, D A

    2017-03-30

The Fluorescence Image Analyzer (FLIMA) software was developed for the quantitative analysis of images generated by fluorescence in situ hybridization (FISH). Currently, FISH images are examined without a coefficient that enables a comparison between them. Through the GD Graphics Library, the FLIMA software calculates the number of pixels in the image and recognizes each color present. The coefficient generated by the algorithm shows the percentage of marks (probes) hybridized on the chromosomes. This software can be used for any type of image generated by a fluorescence microscope and is able to quantify digoxigenin probes exhibiting a red color, biotin probes exhibiting a green color, and double-FISH probes (digoxigenin and biotin used together), where the white color is displayed.
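
    The per-colour pixel counting that FLIMA performs can be sketched as follows with NumPy. The channel-dominance rules used to decide what counts as "red", "green", or "white" are assumptions for illustration, not FLIMA's actual GD-based implementation.

```python
import numpy as np

def fish_color_percentages(rgb, intensity_min=60):
    """Percentage of bright pixels classified as red, green or white.

    rgb: uint8 array (H, W, 3). Thresholds are illustrative only.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    bright = (r > intensity_min) | (g > intensity_min) | (b > intensity_min)
    red = bright & (r > g + 40) & (r > b + 40)             # digoxigenin probes
    green = bright & (g > r + 40) & (g > b + 40)           # biotin probes
    white = bright & (abs(r - g) < 40) & (abs(g - b) < 40) # double-FISH overlap
    total = bright.sum()
    return {name: 100.0 * mask.sum() / total
            for name, mask in (("red", red), ("green", green), ("white", white))}
```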

  13. Software Analysis of Mining Images for Objects Detection

    Directory of Open Access Journals (Sweden)

    Jan Tomecek

    2013-11-01

Full Text Available The contribution deals with the development of a new module of the robust FOTOMNG system for editing images from a video or mining images from measurements for subsequent improvement of detection of required objects in the 2D image. The generated module allows creating a final high-quality picture by combining multiple images containing the searched objects. We can combine input data according to the parameters or based on reference frames. Correction of detected 2D objects is also part of this module. The solution is implemented into the FOTOMNG system and the finished work has been tested in appropriate frames, which validated core functionality and usability. Tests confirmed the function of each part of the module, its accuracy and the implications of integration.

  14. A software framework for the analysis of complex microscopy image data.

    Science.gov (United States)

    Chao, Jerry; Ward, E Sally; Ober, Raimund J

    2010-07-01

    Technological advances in both hardware and software have made possible the realization of sophisticated biological imaging experiments using the optical microscope. As a result, modern microscopy experiments are capable of producing complex image datasets. For a given data analysis task, the images in a set are arranged, based on the requirements of the task, by attributes such as the time and focus levels at which they were acquired. Importantly, different tasks performed over the course of an analysis are often facilitated by the use of different arrangements of the images. We present a software framework that supports the use of different logical image arrangements to analyze a physical set of images. This framework, called the Microscopy Image Analysis Tool (MIATool), realizes the logical arrangements using arrays of pointers to the images, thereby removing the need to replicate and manipulate the actual images in their storage medium. In order that they may be tailored to the specific requirements of disparate analysis tasks, these logical arrangements may differ in size and dimensionality, with no restrictions placed on the number of dimensions and the meaning of each dimension. MIATool additionally supports processing flexibility, extensible image processing capabilities, and data storage management.
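
    The core idea of logical arrangements realized as arrays of pointers can be illustrated with plain Python: the images are stored once, and each analysis task builds its own array of indices ("pointers") into that store, with whatever shape and dimensionality suits the task. This is a conceptual sketch, not MIATool's implementation.

```python
import numpy as np

# Physical store: one collection of images, kept only once.
image_store = [np.random.rand(32, 32) for _ in range(12)]

# Logical arrangement 1: a (time, focus-level) grid of pointers for one task.
time_by_focus = np.arange(12).reshape(4, 3)   # 4 time points x 3 focus levels

# Logical arrangement 2: only the in-focus images, for another task.
in_focus = time_by_focus[:, 1]                # indices, not copies of the images

def mean_intensity(pointer_array):
    """Analyze images through a pointer array without duplicating them."""
    values = [image_store[i].mean() for i in np.ravel(pointer_array)]
    return np.array(values).reshape(np.shape(pointer_array))

print(mean_intensity(time_by_focus).shape)    # (4, 3)
print(mean_intensity(in_focus).shape)         # (4,)
```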

  15. PyElph - a software tool for gel images analysis and phylogenetics

    Directory of Open Access Journals (Sweden)

    Pavel Ana Brânduşa

    2012-01-01

Full Text Available Abstract Background This paper presents PyElph, a software tool which automatically extracts data from gel images, computes the molecular weights of the analyzed molecules or fragments, compares DNA patterns which result from experiments with molecular genetic markers and, also, generates phylogenetic trees computed by five clustering methods, using the information extracted from the analyzed gel image. The software can be successfully used for population genetics, phylogenetics, taxonomic studies and other applications which require gel image analysis. Researchers and students working in molecular biology and genetics would benefit greatly from the proposed software because it is free, open source, easy to use, has a friendly Graphical User Interface and does not depend on specific image acquisition devices like other commercial programs with similar functionalities do. Results PyElph software tool is entirely implemented in Python which is a very popular programming language among the bioinformatics community. It provides a very friendly Graphical User Interface which was designed in six steps that gradually lead to the results. The user is guided through the following steps: image loading and preparation, lane detection, band detection, molecular weights computation based on a molecular weight marker, band matching and finally, the computation and visualization of phylogenetic trees. A strong point of the software is the visualization component for the processed data. The Graphical User Interface provides operations for image manipulation and highlights lanes, bands and band matching in the analyzed gel image. All the data and images generated in each step can be saved. The software has been tested on several DNA patterns obtained from experiments with different genetic markers. Examples of genetic markers which can be analyzed using PyElph are RFLP (Restriction Fragment Length Polymorphism), AFLP (Amplified Fragment Length Polymorphism), RAPD

  16. CALIPSO: an interactive image analysis software package for desktop PACS workstations

    Science.gov (United States)

    Ratib, Osman M.; Huang, H. K.

    1990-07-01

The purpose of this project is to develop a low-cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand-alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort, however, on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include: geometric and densitometric volumes and ejection fraction calculation from radionuclide and cine-angiograms; Fourier analysis of cardiac wall motion; vascular stenosis measurement; color-coded parametric display of regional flow distribution from dynamic coronary angiograms; and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color-coded and parametric display methods to communicate quantitative data extracted from the images. 1. Rationale and objectives of the project: Developments of Picture Archiving and Communication Systems (PACS) in the clinical environment allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available, due in part to their high cost. There is also an increasing need for quantitative analysis of the images. During the past decade
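
    One of the listed modules, ejection fraction calculation, reduces to a simple formula once end-diastolic and end-systolic volumes have been measured. A minimal worked example follows; the volumes are invented for illustration.

```python
def ejection_fraction(edv_ml, esv_ml):
    """Ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Example with illustrative volumes: EDV = 120 ml, ESV = 50 ml
print(ejection_fraction(120.0, 50.0))  # about 58.3 %
```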

  17. Plume Ascent Tracker: Interactive Matlab software for analysis of ascending plumes in image data

    Science.gov (United States)

    Valade, S. A.; Harris, A. J. L.; Cerminara, M.

    2014-05-01

    This paper presents Matlab-based software designed to track and analyze an ascending plume as it rises above its source, in image data. It reads data recorded in various formats (video files, image files, or web-camera image streams), and at various wavelengths (infrared, visible, or ultra-violet). Using a set of filters which can be set interactively, the plume is first isolated from its background. A user-friendly interface then allows tracking of plume ascent and various parameters that characterize plume evolution during emission and ascent. These include records of plume height, velocity, acceleration, shape, volume, ash (fine-particle) loading, spreading rate, entrainment coefficient and inclination angle, as well as axial and radial profiles for radius and temperature (if data are radiometric). Image transformations (dilatation, rotation, resampling) can be performed to create new images with a vent-centered metric coordinate system. Applications may interest both plume observers (monitoring agencies) and modelers. For the first group, the software is capable of providing quantitative assessments of plume characteristics from image data, for post-event analysis or in near real-time analysis. For the second group, extracted data can serve as benchmarks for plume ascent models, and as inputs for cloud dispersal models. We here describe the software's tracking methodology and main graphical interfaces, using thermal infrared image data of an ascending volcanic ash plume at Santiaguito volcano.

  18. Software requirements and support for image-algebraic analysis, detection, and recognition of small targets

    Science.gov (United States)

    Schmalz, Mark S.; Ritter, Gerhard X.; Forsman, Robert H.; Yang, Chyuan-Huei T.; Hu, Wen-Chen; Porter, Ryan A.; McTaggart, Gary; Hranicky, James F.; Davis, James F.

    1995-06-01

The detection of hazardous targets frequently requires a multispectral approach to image acquisition and analysis, which we have implemented in a software system called MATRE (multispectral automated target recognition and enhancement). MATRE provides capabilities of image enhancement, image database management, spectral signature extraction and visualization, statistical analysis of greyscale imagery, as well as 2D and 3D image processing operations. Our system is based upon a client-server architecture that is amenable to distributed implementation. In this paper, we discuss salient issues and requirements for multispectral recognition of hazardous targets, and show that our software fulfills or exceeds such requirements. MATRE's capabilities, as well as statistical and morphological analysis results, are exemplified with emphasis upon computational cost, ease of installation, and maintenance on various Unix platforms. Additionally, MATRE's image processing functions can be coded in vector-parallel form, for ease of implementation on SIMD-parallel processors. Our algorithms are expressed in terms of image algebra, a concise, rigorous notation that unifies linear and nonlinear mathematics in the image domain. An image algebra class library for the C++ language has been incorporated into our system, which facilitates fast algorithm prototyping without the numerous drawbacks of discrete coding.

  19. Image Processing Software

    Science.gov (United States)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  20. WHIPPET: a collaborative software environment for medical image processing and analysis

    Science.gov (United States)

    Hu, Yangqiu; Haynor, David R.; Maravilla, Kenneth R.

    2007-03-01

    While there are many publicly available software packages for medical image processing, making them available to end users in clinical and research labs remains non-trivial. An even more challenging task is to mix these packages to form pipelines that meet specific needs seamlessly, because each piece of software usually has its own input/output formats, parameter sets, and so on. To address these issues, we are building WHIPPET (Washington Heterogeneous Image Processing Pipeline EnvironmenT), a collaborative platform for integrating image analysis tools from different sources. The central idea is to develop a set of Python scripts which glue the different packages together and make it possible to connect them in processing pipelines. To achieve this, an analysis is carried out for each candidate package for WHIPPET, describing input/output formats, parameters, ROI description methods, scripting and extensibility and classifying its compatibility with other WHIPPET components as image file level, scripting level, function extension level, or source code level. We then identify components that can be connected in a pipeline directly via image format conversion. We set up a TWiki server for web-based collaboration so that component analysis and task request can be performed online, as well as project tracking, knowledge base management, and technical support. Currently WHIPPET includes the FSL, MIPAV, FreeSurfer, BrainSuite, Measure, DTIQuery, and 3D Slicer software packages, and is expanding. Users have identified several needed task modules and we report on their implementation.

  1. Using MATLAB software with Tomcat server and Java platform for remote image analysis in pathology.

    Science.gov (United States)

    Markiewicz, Tomasz

    2011-03-30

The Matlab software is one of the most advanced development tools for applications in engineering practice. From our point of view the most important part is the image processing toolbox, offering many built-in functions, including mathematical morphology, and implementations of many artificial neural networks as AI. It is a very popular platform for the creation of specialized programs for image analysis, also in pathology. Based on the latest version of the Matlab Builder Java toolbox, it is possible to create software serving as a remote system for image analysis in pathology via internet communication. The internet platform can be realized based on Java Servlet Pages with a Tomcat server as the servlet container. In the presented software implementation we propose remote image analysis realized by Matlab algorithms. These algorithms can be compiled to an executable jar file with the help of the Matlab Builder Java toolbox. The Matlab function must be declared with the set of input data, an output structure with numerical results and a Matlab web figure. Any function prepared in that manner can be used as a Java function in Java Servlet Pages (JSP). The graphical user interface providing the input data and displaying the results (also in graphical form) must be implemented in JSP. Additionally, data storage to a database can be implemented within the algorithm written in Matlab with the help of the Matlab Database Toolbox, directly with the image processing. The complete JSP page can be run by the Tomcat server. The proposed tool for remote image analysis was tested on the Computerized Analysis of Medical Images (CAMI) software developed by the author. The user provides the image and case information (diagnosis, staining, image parameters etc.). When analysis is initialized, the input data and image are sent to the servlet on Tomcat. When analysis is done, the client obtains the graphical results as an image with marked recognized cells and also the quantitative output. Additionally, the results are stored in a server

  2. An advanced software suite for the processing and analysis of silicon luminescence images

    Science.gov (United States)

    Payne, D. N. R.; Vargas, C.; Hameiri, Z.; Wenham, S. R.; Bagnall, D. M.

    2017-06-01

    Luminescence imaging is a versatile characterisation technique used for a broad range of research and industrial applications, particularly for the field of photovoltaics where photoluminescence and electroluminescence imaging is routinely carried out for materials analysis and quality control. Luminescence imaging can reveal a wealth of material information, as detailed in extensive literature, yet these techniques are often only used qualitatively instead of being utilised to their full potential. Part of the reason for this is the time and effort required for image processing and analysis in order to convert image data to more meaningful results. In this work, a custom built, Matlab based software suite is presented which aims to dramatically simplify luminescence image processing and analysis. The suite includes four individual programs which can be used in isolation or in conjunction to achieve a broad array of functionality, including but not limited to, point spread function determination and deconvolution, automated sample extraction, image alignment and comparison, minority carrier lifetime calibration and iron impurity concentration mapping.

  3. CAVASS: a computer-assisted visualization and analysis software system - image processing aspects

    Science.gov (United States)

    Udupa, Jayaram K.; Grevera, George J.; Odhner, Dewey; Zhuge, Ying; Souza, Andre; Mishra, Shipra; Iwanaga, Tad

    2007-03-01

    The development of the concepts within 3DVIEWNIX and of the software system 3DVIEWNIX itself dates back to the 1970s. Since then, a series of software packages for Computer Assisted Visualization and Analysis (CAVA) of images came out from our group, 3DVIEWNIX released in 1993, being the most recent, and all were distributed with source code. CAVASS, an open source system, is the latest in this series, and represents the next major incarnation of 3DVIEWNIX. It incorporates four groups of operations: IMAGE PROCESSING (including ROI, interpolation, filtering, segmentation, registration, morphological, and algebraic operations), VISUALIZATION (including slice display, reslicing, MIP, surface rendering, and volume rendering), MANIPULATION (for modifying structures and surgery simulation), ANALYSIS (various ways of extracting quantitative information). CAVASS is designed to work on all platforms. Its key features are: (1) most major CAVA operations incorporated; (2) very efficient algorithms and their highly efficient implementations; (3) parallelized algorithms for computationally intensive operations; (4) parallel implementation via distributed computing on a cluster of PCs; (5) interface to other systems such as CAD/CAM software, ITK, and statistical packages; (6) easy to use GUI. In this paper, we focus on the image processing operations and compare the performance of CAVASS with that of ITK. Our conclusions based on assessing performance by utilizing a regular (6 MB), large (241 MB), and a super (873 MB) 3D image data set are as follows: CAVASS is considerably more efficient than ITK, especially in those operations which are computationally intensive. It can handle considerably larger data sets than ITK. It is easy and ready to use in applications since it provides an easy to use GUI. The users can easily build a cluster from ordinary inexpensive PCs and reap the full power of CAVASS inexpensively compared to expensive multiprocessing systems which are less

  4. Software-Assisted Depth Analysis of Optic Nerve Stereoscopic Images in Telemedicine.

    Science.gov (United States)

    Xia, Tian; Patel, Shriji N; Szirth, Ben C; Kolomeyer, Anton M; Khouri, Albert S

    2016-01-01

Background. Software guided optic nerve assessment can assist in process automation and reduce interobserver disagreement. We tested depth analysis software (DAS) in assessing optic nerve cup-to-disc ratio (VCD) from stereoscopic optic nerve images (SONI) of normal eyes. Methods. In a prospective study, simultaneous SONI from normal subjects were collected during telemedicine screenings using a Kowa 3Wx nonmydriatic simultaneous stereoscopic retinal camera (Tokyo, Japan). VCD was determined from SONI pairs and proprietary pixel DAS (Kowa Inc., Tokyo, Japan) after disc and cup contour line placement. A nonstereoscopic VCD was determined using the right channel of a stereo pair. Mean, standard deviation, t-test, and the intraclass correlation coefficient (ICCC) were calculated. Results. 32 patients had mean age of 40 ± 14 years. Mean VCD on SONI was 0.36 ± 0.09, with DAS 0.38 ± 0.08, and with nonstereoscopic 0.29 ± 0.12. The difference between stereoscopic and DAS assisted was not significant (p = 0.45). ICCC showed agreement between stereoscopic and software VCD assessment. Mean VCD difference was significant between nonstereoscopic and stereoscopic (p < 0.05) and nonstereoscopic and DAS (p < 0.005) recordings. Conclusions. DAS successfully assessed SONI and showed a high degree of correlation to physician-determined stereoscopic VCD.

  5. Software-Assisted Depth Analysis of Optic Nerve Stereoscopic Images in Telemedicine

    Directory of Open Access Journals (Sweden)

    Tian Xia

    2016-01-01

Full Text Available Background. Software guided optic nerve assessment can assist in process automation and reduce interobserver disagreement. We tested depth analysis software (DAS) in assessing optic nerve cup-to-disc ratio (VCD) from stereoscopic optic nerve images (SONI) of normal eyes. Methods. In a prospective study, simultaneous SONI from normal subjects were collected during telemedicine screenings using a Kowa 3Wx nonmydriatic simultaneous stereoscopic retinal camera (Tokyo, Japan). VCD was determined from SONI pairs and proprietary pixel DAS (Kowa Inc., Tokyo, Japan) after disc and cup contour line placement. A nonstereoscopic VCD was determined using the right channel of a stereo pair. Mean, standard deviation, t-test, and the intraclass correlation coefficient (ICCC) were calculated. Results. 32 patients had mean age of 40±14 years. Mean VCD on SONI was 0.36±0.09, with DAS 0.38±0.08, and with nonstereoscopic 0.29±0.12. The difference between stereoscopic and DAS assisted was not significant (p=0.45). ICCC showed agreement between stereoscopic and software VCD assessment. Mean VCD difference was significant between nonstereoscopic and stereoscopic (p<0.05) and nonstereoscopic and DAS (p<0.005) recordings. Conclusions. DAS successfully assessed SONI and showed a high degree of correlation to physician-determined stereoscopic VCD.
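
    After disc and cup contour placement, the cup-to-disc ratio itself is a simple quantity. The sketch below computes a vertical ratio from two contours; the contour representation (arrays of x, y points in pixels) and the synthetic example are assumptions for illustration, not the Kowa software's internal format.

```python
import numpy as np

def vertical_cup_to_disc_ratio(cup_contour, disc_contour):
    """Vertical cup-to-disc ratio from contour point sets.

    Each contour is an (N, 2) array of (x, y) pixel coordinates.
    """
    cup_height = cup_contour[:, 1].max() - cup_contour[:, 1].min()
    disc_height = disc_contour[:, 1].max() - disc_contour[:, 1].min()
    return cup_height / disc_height

# Example with synthetic circular contours (radii 40 and 110 pixels).
theta = np.linspace(0, 2 * np.pi, 200)
cup = np.stack([40 * np.cos(theta), 40 * np.sin(theta)], axis=1)
disc = np.stack([110 * np.cos(theta), 110 * np.sin(theta)], axis=1)
print(round(vertical_cup_to_disc_ratio(cup, disc), 2))  # ~0.36
```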

  6. Analysis of nuclear organization with TANGO, software for high-throughput quantitative analysis of 3D fluorescence microscopy images.

    Science.gov (United States)

    Ollion, Jean; Cochennec, Julien; Loll, François; Escudé, Christophe; Boudier, Thomas

    2015-01-01

    The cell nucleus is a highly organized cellular organelle that contains the genome. An important step to understand the relationships between genome positioning and genome functions is to extract quantitative data from three-dimensional (3D) fluorescence imaging. However, such approaches are limited by the requirement for processing and analyzing large sets of images. Here we present a practical approach using TANGO (Tools for Analysis of Nuclear Genome Organization), an image analysis tool dedicated to the study of nuclear architecture. TANGO is a generic tool able to process large sets of images, allowing quantitative study of nuclear organization. In this chapter a practical description of the software is drawn in order to give an overview of its different concepts and functionalities. This description is illustrated with a precise example that can be performed step-by-step on experimental data provided on the website http://biophysique.mnhn.fr/tango/HomePage.

  7. Free digital image analysis software helps to resolve equivocal scores in HER2 immunohistochemistry.

    Science.gov (United States)

    Helin, Henrik O; Tuominen, Vilppu J; Ylinen, Onni; Helin, Heikki J; Isola, Jorma

    2016-02-01

    Evaluation of human epidermal growth factor receptor 2 (HER2) immunohistochemistry (IHC) is subject to interobserver variation and lack of reproducibility. Digital image analysis (DIA) has been shown to improve the consistency and accuracy of the evaluation and its use is encouraged in current testing guidelines. We studied whether digital image analysis using a free software application (ImmunoMembrane) can assist in interpreting HER2 IHC in equivocal 2+ cases. We also compared digital photomicrographs with whole-slide images (WSI) as material for ImmunoMembrane DIA. We stained 750 surgical resection specimens of invasive breast cancers immunohistochemically for HER2 and analysed staining with ImmunoMembrane. The ImmunoMembrane DIA scores were compared with the originally responsible pathologists' visual scores, a researcher's visual scores and in situ hybridisation (ISH) results. The originally responsible pathologists reported 9.1 % positive 3+ IHC scores, for the researcher this was 8.4 % and for ImmunoMembrane 9.5 %. Equivocal 2+ scores were 34 % for the pathologists, 43.7 % for the researcher and 10.1 % for ImmunoMembrane. Negative 0/1+ scores were 57.6 % for the pathologists, 46.8 % for the researcher and 80.8 % for ImmunoMembrane. There were six false positive cases, which were classified as 3+ by ImmunoMembrane and negative by ISH. Six cases were false negative defined as 0/1+ by IHC and positive by ISH. ImmunoMembrane DIA using digital photomicrographs and WSI showed almost perfect agreement. In conclusion, digital image analysis by ImmunoMembrane can help to resolve a majority of equivocal 2+ cases in HER2 IHC, which reduces the need for ISH testing.

  8. Identifying Image Manipulation Software from Image Features

    Science.gov (United States)

    2015-03-26

...an overview of the DCT based encoding process [5]. When an image is processed by lossless compression, a file's size is reduced while still... IDENTIFYING IMAGE MANIPULATION SOFTWARE FROM IMAGE FEATURES. Thesis, Devlin T. Boyter, CPT, USA, AFIT-ENG-MS-15-M-051, Department of the Air Force.

  9. Pluri-IQ: Quantification of Embryonic Stem Cell Pluripotency through an Image-Based Analysis Software

    Directory of Open Access Journals (Sweden)

    Tânia Perestrelo

    2017-08-01

Full Text Available Image-based assays, such as alkaline phosphatase staining or immunocytochemistry for pluripotent markers, are common methods used in the stem cell field to assess pluripotency. Although an increased number of image-analysis approaches have been described, there is still a lack of software availability to automatically quantify pluripotency in large images after pluripotency staining. To address this need, we developed a robust and rapid image processing software, Pluri-IQ, which allows the automatic evaluation of pluripotency in large low-magnification images. Using mouse embryonic stem cells (mESC) as a model, we combined an automated segmentation algorithm with a supervised machine-learning platform to classify colonies as pluripotent, mixed, or differentiated. In addition, Pluri-IQ allows the automatic comparison between different culture conditions. This efficient user-friendly open-source software can be easily implemented in images derived from pluripotent cells or cells that express pluripotent markers (e.g., OCT4-GFP) and can be routinely used, decreasing image assessment bias.

  10. NeuroGam Software Analysis in Epilepsy Diagnosis Using 99mTc-ECD Brain Perfusion SPECT Imaging

    OpenAIRE

    Fu, Peng; Zhang, Fang; Gao, Jianqing; Jing, Jianmin; Pan, Liping; Li, Dongxue; Wei, Lingge

    2015-01-01

    Background The aim of this study was to explore the value of NeuroGam software in diagnosis of epilepsy by 99Tcm-ethyl cysteinate dimer (ECD) brain imaging. Material/Methods NeuroGam was used to analyze 52 cases of clinically proven epilepsy by 99Tcm-ECD brain imaging. The results were compared with EEG and MRI, and the positive rates and localization to epileptic foci were analyzed. Results NeuroGam analysis showed that 42 of 52 epilepsy cases were abnormal. 99Tcm-ECD brain imaging revealed ...

  11. The 'Densitometric Image Analysis Software' and its application to determine stepwise equilibrium constants from electrophoretic mobility shift assays.

    Science.gov (United States)

    van Oeffelen, Liesbeth; Peeters, Eveline; Nguyen Le Minh, Phu; Charlier, Daniël

    2014-01-01

Current software applications for densitometric analysis, such as ImageJ, QuantityOne (BioRad) and the Intelligent or Advanced Quantifier (Bio Image), do not allow the non-linearity of autoradiographic films to be taken into account during calibration. As a consequence, quantification of autoradiographs is often regarded as problematic, and phosphorimaging is the preferred alternative. However, the non-linear behaviour of autoradiographs can be described mathematically, so it can be accounted for. Therefore, the 'Densitometric Image Analysis Software' has been developed, which allows the user to quantify electrophoretic bands in autoradiographs, as well as in gels and phosphorimages, while providing optimized band selection support to the user. Moreover, the program can determine protein-DNA binding constants from Electrophoretic Mobility Shift Assays (EMSAs). For this purpose, the software calculates a chosen stepwise equilibrium constant for each migration lane within the EMSA, and estimates the errors due to non-uniformity of the background noise, smear caused by complex dissociation or denaturation of double-stranded DNA, and technical errors such as pipetting inaccuracies. Thereby, the program helps the user to optimize experimental parameters and to choose the best lanes for estimating an average equilibrium constant. This process can reduce the inaccuracy of equilibrium constants from the usual factor of 2 to about 20%, which is particularly useful when determining position weight matrices and cooperative binding constants to predict genomic binding sites. The MATLAB source code, platform-dependent software and installation instructions are available via the website http://micr.vub.ac.be.
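
    The claim that the non-linear film response "can be described mathematically, so it can be accounted for" can be illustrated with a hedged calibration sketch: fit a saturating response curve to calibration points and invert it before quantifying bands. The saturating-exponential form and the data are assumptions for illustration, not the model used in the software.

```python
import numpy as np
from scipy.optimize import curve_fit

def film_response(amount, a, k):
    """Saturating response: optical density vs. amount of radioactivity."""
    return a * (1.0 - np.exp(-k * amount))

# Calibration points: known loaded amounts vs. measured optical densities (invented).
amounts = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
densities = np.array([0.18, 0.34, 0.60, 0.98, 1.40, 1.72])

(a_fit, k_fit), _ = curve_fit(film_response, amounts, densities, p0=(2.0, 0.05))

def density_to_amount(od):
    """Invert the fitted curve to linearize film measurements."""
    return -np.log(1.0 - od / a_fit) / k_fit

print(density_to_amount(0.98))  # roughly recovers the loaded amount (7-8 units)
```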

  12. Two-Dimensional Gel Electrophoresis Image Analysis via Dedicated Software Packages.

    Science.gov (United States)

    Maurer, Martin H

    2016-01-01

Analysis of two-dimensional gel electrophoretic images is supported by a number of freely and commercially available software packages. Although each program has its own specific features, all programs follow certain standardized algorithms. The general steps are: (1) detecting and separating individual spots, (2) subtracting background, (3) creating a reference gel and (4) matching the spots to the reference gel, (5) modifying the reference gel, (6) normalizing the gel measurements for comparison, (7) calibrating for isoelectric point and molecular weight markers, and moreover, (8) constructing a database containing the measurement results and (9) comparing data by statistical and bioinformatic methods.

  13. Review of free software tools for image analysis of fluorescence cell micrographs.

    Science.gov (United States)

    Wiesmann, V; Franz, D; Held, C; Münzenmayer, C; Palmisano, R; Wittenberg, T

    2015-01-01

    An increasing number of free software tools have been made available for the evaluation of fluorescence cell micrographs. The main users are biologists and related life scientists with no or little knowledge of image processing. In this review, we give an overview of available tools and guidelines about which tools the users should use to segment fluorescence micrographs. We selected 15 free tools and divided them into stand-alone, Matlab-based, ImageJ-based, free demo versions of commercial tools and data sharing tools. The review consists of two parts: First, we developed a criteria catalogue and rated the tools regarding structural requirements, functionality (flexibility, segmentation and image processing filters) and usability (documentation, data management, usability and visualization). Second, we performed an image processing case study with four representative fluorescence micrograph segmentation tasks with figure-ground and cell separation. The tools display a wide range of functionality and usability. In the image processing case study, we were able to perform figure-ground separation in all micrographs using mainly thresholding. Cell separation was not possible with most of the tools, because cell separation methods are provided only by a subset of the tools and are difficult to parametrize and to use. Most important is that the usability matches the functionality of a tool. To be usable, specialized tools with less functionality need to fulfill less usability criteria, whereas multipurpose tools need a well-structured menu and intuitive graphical user interface.

  14. Web-based spatial analysis with the ILWIS open source GIS software and satellite images from GEONETCast

    Science.gov (United States)

    Lemmens, R.; Maathuis, B.; Mannaerts, C.; Foerster, T.; Schaeffer, B.; Wytzisk, A.

    2009-12-01

This paper presents easily accessible, integrated web-based analysis of satellite images with plug-in based open source software. The paper is targeted at both users and developers of geospatial software. Guided by a use case scenario, we describe the ILWIS software and its toolbox to access satellite images through the GEONETCast broadcasting system. The last two decades have shown a major shift from stand-alone software systems to networked ones, often client/server applications using distributed geo-(web-)services. This allows organisations to combine their own data with remotely available data and processing functionality without much effort. Key to this integrated spatial data analysis is low-cost access to data from within user-friendly and flexible software. Web-based open source software solutions are often a powerful option for developing countries. The Integrated Land and Water Information System (ILWIS) is a PC-based GIS & Remote Sensing software package, comprising a complete suite of image processing, spatial analysis and digital mapping, and was developed as commercial software from the early nineties onwards. Recent project efforts have migrated ILWIS into a modular, plug-in-based open source software, and provide web-service support for OGC-based web mapping and processing. The core objective of the ILWIS Open source project is to provide a maintainable framework for researchers and software developers to implement training components, scientific toolboxes and (web-) services. The latest plug-ins have been developed for multi-criteria decision making, water resources analysis and spatial statistics analysis. The development of this framework has been carried out since 2007 in the context of 52°North, an open initiative that advances the development of cutting-edge open source geospatial software, using the GPL license. GEONETCast, as part of the emerging Global Earth Observation System of Systems (GEOSS), puts essential environmental data at the

  15. Software safety hazard analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence, J.D. [Lawrence Livermore National Lab., CA (United States)

    1996-02-01

    Techniques for analyzing the safety and reliability of analog-based electronic protection systems that serve to mitigate hazards in process control systems have been developed over many years, and are reasonably well understood. An example is the protection system in a nuclear power plant. The extension of these techniques to systems which include digital computers is not well developed, and there is little consensus among software engineering experts and safety experts on how to analyze such systems. One possible technique is to extend hazard analysis to include digital computer-based systems. Software is frequently overlooked during system hazard analyses, but this is unacceptable when the software is in control of a potentially hazardous operation. In such cases, hazard analysis should be extended to fully cover the software. A method for performing software hazard analysis is proposed in this paper.

  16. plusTipTracker: Quantitative image analysis software for the measurement of microtubule dynamics.

    Science.gov (United States)

    Applegate, Kathryn T; Besson, Sebastien; Matov, Alexandre; Bagonis, Maria H; Jaqaman, Khuloud; Danuser, Gaudenz

    2011-11-01

    Here we introduce plusTipTracker, a Matlab-based open source software package that combines automated tracking, data analysis, and visualization tools for movies of fluorescently-labeled microtubule (MT) plus end binding proteins (+TIPs). Although +TIPs mark only phases of MT growth, the plusTipTracker software allows inference of additional MT dynamics, including phases of pause and shrinkage, by linking collinear, sequential growth tracks. The algorithm underlying the reconstruction of full MT trajectories relies on the spatially and temporally global tracking framework described in Jaqaman et al. (2008). Post-processing of track populations yields a wealth of quantitative phenotypic information about MT network architecture that can be explored using several visualization modalities and bioinformatics tools included in plusTipTracker. Graphical user interfaces enable novice Matlab users to track thousands of MTs in minutes. In this paper, we describe the algorithms used by plusTipTracker and show how the package can be used to study regional differences in the relative proportion of MT subpopulations within a single cell. The strategy of grouping +TIP growth tracks for the analysis of MT dynamics has been introduced before (Matov et al., 2010). The numerical methods and analytical functionality incorporated in plusTipTracker substantially advance this previous work in terms of flexibility and robustness. To illustrate the enhanced performance of the new software we thus compare computer-assembled +TIP-marked trajectories to manually-traced MT trajectories from the same movie used in Matov et al. (2010).

  17. Initial Work on the Characterization of Additive Manufacturing (3D Printing) Using Software Image Analysis

    Directory of Open Access Journals (Sweden)

    Jeremy Straub

    2015-04-01

Full Text Available A current challenge in additive manufacturing (commonly known as 3D printing) is the detection of defects. Detection of defects (or the lack thereof) in bespoke industrial manufacturing may be safety critical and reduce or eliminate the need for testing of printed objects. In consumer and prototype printing, early defect detection may facilitate the printer taking corrective measures (or pausing printing and alerting a user), preventing the need to re-print objects after the compounding of a small error occurs. This paper considers one approach to defect detection. It characterizes the efficacy of using a multi-camera system and image processing software to assess printing progress (thus detecting completion failure defects) and quality. The potential applications and extrapolations of this type of a system are also discussed.
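
    A hedged sketch of the kind of camera-based progress check described above: compare a captured image of the build area against a rendering of the expected partial print, and flag a defect when the mismatch exceeds a tolerance. The thresholds, file names, and the simple pixel-difference metric are assumptions, not the authors' actual pipeline.

```python
import cv2

def print_defect_score(captured_path, expected_path, diff_threshold=40):
    """Fraction of pixels that differ noticeably between the captured image
    of the part and a rendering of what the part should look like so far."""
    captured = cv2.imread(captured_path, cv2.IMREAD_GRAYSCALE)
    expected = cv2.imread(expected_path, cv2.IMREAD_GRAYSCALE)
    expected = cv2.resize(expected, (captured.shape[1], captured.shape[0]))
    diff = cv2.absdiff(captured, expected)
    return float((diff > diff_threshold).mean())

# Hypothetical usage: pause the printer if the mismatch grows too large.
# if print_defect_score("camera_frame.png", "expected_layer.png") > 0.05:
#     pause_print_and_alert_user()   # placeholder for printer control code
```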

  18. Toolkits and Software for Developing Biomedical Image Processing and Analysis Applications

    Science.gov (United States)

    Wolf, Ivo

Solutions in biomedical image processing and analysis usually consist of much more than a single method. Typically, a whole pipeline of algorithms is necessary, combined with visualization components to display and verify the results as well as possibilities to interact with the data. Therefore, successful research in biomedical image processing and analysis requires a solid base to start from. This is the case regardless of whether the goal is the development of a new method (e.g., for segmentation) or to solve a specific task (e.g., computer-assisted planning of surgery).

  19. NeuroGam Software Analysis in Epilepsy Diagnosis Using 99mTc-ECD Brain Perfusion SPECT Imaging.

    Science.gov (United States)

    Fu, Peng; Zhang, Fang; Gao, Jianqing; Jing, Jianmin; Pan, Liping; Li, Dongxue; Wei, Lingge

    2015-09-20

    BACKGROUND The aim of this study was to explore the value of NeuroGam software in diagnosis of epilepsy by 99Tcm-ethyl cysteinate dimer (ECD) brain imaging. MATERIAL AND METHODS NeuroGam was used to analyze 52 cases of clinically proven epilepsy by 99Tcm-ECD brain imaging. The results were compared with EEG and MRI, and the positive rates and localization to epileptic foci were analyzed. RESULTS NeuroGam analysis showed that 42 of 52 epilepsy cases were abnormal. 99Tcm-ECD brain imaging revealed a positive rate of 80.8% (42/52), with 36 out of 42 patients (85.7%) clearly showing an abnormal area. Both were higher than that of brain perfusion SPECT, with a consistency of 64.5% (34/52) using these 2 methods. Decreased regional cerebral blood flow (rCBF) was observed in frontal (18), temporal (20), and parietal lobes (2). Decreased rCBF was seen in frontal and temporal lobes in 4 out of 36 patients, and in temporal and parietal lobes of 2 out of 36 patients. NeuroGam further showed that the abnormal area was located in a different functional area of the brain. EEG abnormalities were detected in 29 out of 52 patients (55.8%) with 16 cases (55.2%) clearly showing an abnormal area. MRI abnormalities were detected in 17 out of 43 cases (39.5%), including 9 cases (52.9%) clearly showing an abnormal area. The consistency of NeuroGam software analysis, and EEG and MRI were 48.1% (25/52) and 34.9% (15/43), respectively. CONCLUSIONS NeuroGam software analysis offers a higher sensitivity in detecting epilepsy than EEG or MRI. It is a powerful tool in 99Tcm-ECD brain imaging.

  20. The 'Densitometric Image Analysis Software' and its application to determine stepwise equilibrium constants from electrophoretic mobility shift assays.

    Directory of Open Access Journals (Sweden)

    Liesbeth van Oeffelen

Full Text Available Current software applications for densitometric analysis, such as ImageJ, QuantityOne (BioRad) and the Intelligent or Advanced Quantifier (Bio Image), do not allow the non-linearity of autoradiographic films to be taken into account during calibration. As a consequence, quantification of autoradiographs is often regarded as problematic, and phosphorimaging is the preferred alternative. However, the non-linear behaviour of autoradiographs can be described mathematically, so it can be accounted for. Therefore, the 'Densitometric Image Analysis Software' has been developed, which allows the user to quantify electrophoretic bands in autoradiographs, as well as in gels and phosphorimages, while providing optimized band selection support to the user. Moreover, the program can determine protein-DNA binding constants from Electrophoretic Mobility Shift Assays (EMSAs). For this purpose, the software calculates a chosen stepwise equilibrium constant for each migration lane within the EMSA, and estimates the errors due to non-uniformity of the background noise, smear caused by complex dissociation or denaturation of double-stranded DNA, and technical errors such as pipetting inaccuracies. Thereby, the program helps the user to optimize experimental parameters and to choose the best lanes for estimating an average equilibrium constant. This process can reduce the inaccuracy of equilibrium constants from the usual factor of 2 to about 20%, which is particularly useful when determining position weight matrices and cooperative binding constants to predict genomic binding sites. The MATLAB source code, platform-dependent software and installation instructions are available via the website http://micr.vub.ac.be.

  1. Evaluation of two software tools dedicated to an automatic analysis of the CT scanner image spatial resolution.

    Science.gov (United States)

    Torfeh, Tarraf; Beaumont, Stéphane; Guédon, Jean Pierre; Denis, Eloïse

    2007-01-01

    An evaluation of two software tools dedicated to an automatic analysis of the CT scanner image spatial resolution is presented in this paper. Both methods evaluated consist of calculating the Modulation Transfer Function (MTF) of the CT scanner: the first uses an image of an impulse source, while the second, proposed by Droege and Morin, uses an image of cyclic bar patterns. Two Digital Test Objects (DTOs) are created for this purpose. These DTOs are then blurred by convolution with a two-dimensional Gaussian Point Spread Function (PSF(Ref)) with a well-known Full Width at Half Maximum (FWHM). The evaluation then consists of comparing the Fourier transform of the reference PSF with the MTFs obtained by the two methods.
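
    The comparison described above can be reproduced in miniature: generate a Gaussian PSF of known FWHM, take its Fourier transform (the impulse-source route to the MTF) and compare it with the analytic MTF of the same Gaussian. This is only a 1-D illustration of the principle, not the evaluated software and not the Droege-Morin bar-pattern method.

```python
import numpy as np

# Reference PSF: a 1-D Gaussian with a known FWHM (mm), sampled on the image grid
fwhm_mm = 1.2
pixel_mm = 0.1
sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
x = (np.arange(256) - 128) * pixel_mm
psf = np.exp(-x ** 2 / (2.0 * sigma ** 2))
psf /= psf.sum()

# "Impulse source" method: the MTF is the modulus of the Fourier transform of the PSF
mtf_measured = np.abs(np.fft.rfft(np.fft.ifftshift(psf)))
mtf_measured /= mtf_measured[0]
freq = np.fft.rfftfreq(x.size, d=pixel_mm)            # spatial frequency in cycles/mm

# Analytic MTF of the Gaussian reference, used as the ground truth of the comparison
mtf_reference = np.exp(-2.0 * (np.pi * sigma * freq) ** 2)

print("max |measured - reference| deviation: %.5f" % np.max(np.abs(mtf_measured - mtf_reference)))
```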

  2. Analysis of a marine phototrophic biofilm by confocal laser scanning microscopy using the new image quantification software PHLIP

    Directory of Open Access Journals (Sweden)

    Almeida Jonas S

    2006-01-01

    Full Text Available Abstract Background Confocal laser scanning microscopy (CLSM) is the method of choice to study interfacial biofilms and acquires time-resolved three-dimensional data of the biofilm structure. CLSM can be used in a multi-channel mode where the different channels map individual biofilm components. This communication presents a novel image quantification tool, PHLIP, for the quantitative analysis of large amounts of multichannel CLSM data in an automated way. PHLIP can be freely downloaded from http://phlip.sourceforge.net. Results PHLIP is an open source public license Matlab toolbox that includes functions for CLSM imaging data handling and ten image analysis operations describing various aspects of biofilm morphology. The use of PHLIP is here demonstrated by a study of the development of a natural marine phototrophic biofilm. It is shown how the examination of the individual biofilm components using the multi-channel capability of PHLIP allowed the description of the dynamic spatial and temporal separation of diatoms, bacteria and organic and inorganic matter during the shift from a bacteria-dominated to a diatom-dominated phototrophic biofilm. Reflection images and weight measurements complementing the PHLIP analyses suggest that a large part of the biofilm mass consisted of inorganic mineral material. Conclusion The presented case study reveals new insight into the temporal development of a phototrophic biofilm where multi-channel imaging allowed the dynamics of the individual biofilm components to be monitored in parallel over time. This application of PHLIP illustrates the power of multi-channel CLSM biofilm image analysis and demonstrates the value of PHLIP to the scientific community as a flexible and extendable image analysis platform for automated image processing.

  3. Software Design for Smile Analysis

    Directory of Open Access Journals (Sweden)

    A. Sarkhosh

    2010-12-01

    Full Text Available Introduction: Esthetics and attractiveness of the smile is one of the major demands in contemporary orthodontic treatment. In order to improve a smile design, it is necessary to record the “posed smile”, an intentional, unstrained, static, natural and reproducible smile. The record should then be analyzed to determine its characteristics. In this study, we intended to design and introduce software to analyze the smile rapidly and precisely in order to produce an attractive smile for the patients. Materials and Methods: For this purpose, a practical study was performed to design the multimedia software “Smile Analysis”, which can receive patients’ photographs and video recordings. After loading the records into the software, the operator marks the points and lines indicated by the system’s guide and defines the correct scale for each image. Thirty-three variables are measured by the software and displayed on the report page. Reliability of measurements in both images and videos was significantly high (0.7-1). Results: In order to evaluate intra-operator and inter-operator reliability, five cases were selected randomly. Statistical analysis showed that calculations performed in the smile analysis software were both valid and highly reliable (for both video and photo). Conclusion: The results obtained from smile analysis could be used in diagnosis, treatment planning and evaluation of the treatment progress.

  4. Software development for dynamic position emission tomography: Dynamic image analysis (DIA) tool

    Energy Technology Data Exchange (ETDEWEB)

    Pyeon, Do Yeong; Jung, Young Jin [Dongseo University, Busan (Korea, Republic of); Kim, Jung Su [Dept. of Radiological Science, Dongnam Health University, Suwon (Korea, Republic of)

    2016-09-15

    Positron Emission Tomography (PET) is a nuclear medicine examination in which a compound labelled with a radioactive isotope is injected into the body to quantitatively measure metabolic rates. In particular, the increased glucose metabolism of cancer tissue, visualized with 18F-FDG (fluorodeoxyglucose), is widely exploited in cancer diagnosis. Numerous studies have also reported its potential for the modern diagnosis of brain diseases such as dementia and Parkinson's disease. Dynamic PET images, which add time information to the static information normally used for diagnosis, can increase diagnostic accuracy. For this reason dynamic PET has attracted great attention from clinical researchers, but tools to conduct such research are lacking, and the complex mathematical algorithms and programming skills required have hindered its wider adoption. In this study, in order to make dynamic PET (dPET) research easier and more accessible, we developed software based on a graphical user interface (GUI). In the future, the use of the DIA-Tool by many clinical researchers is expected to be of great help to dPET research.

  5. SpotMetrics: An Open-Source Image-Analysis Software Plugin for Automatic Chromatophore Detection and Measurement

    Science.gov (United States)

    Hadjisolomou, Stavros P.; El-Haddad, George

    2017-01-01

    Coleoid cephalopods (squid, octopus, and sepia) are renowned for their elaborate body patterning capabilities, which are employed for camouflage or communication. The specific chromatic appearance of a cephalopod, at any given moment, is a direct result of the combined action of their intradermal pigmented chromatophore organs and reflecting cells. Therefore, a lot can be learned about the cephalopod coloration system by video recording and analyzing the activation of individual chromatophores in time. The fact that adult cephalopods have small chromatophores, up to several hundred thousand in number, makes measurement and analysis over several seconds a difficult task. However, current advancements in videography enable high-resolution and high framerate recording, which can be used to record chromatophore activity in greater detail and accuracy in both the space and time domains. In turn, the additional pixel information and extra frames per video from such recordings result in large video files of several gigabytes, even when the recording spans only a few minutes. We created a software plugin, “SpotMetrics,” that can automatically analyze high resolution, high framerate video of chromatophore organ activation in time. This image analysis software can track hundreds of individual chromatophores over several hundred frames to provide measurements of size and color. This software may also be used to measure differences in chromatophore activation during different behaviors, which will contribute to our understanding of the cephalopod sensorimotor integration system. In addition, this software can potentially be utilized to detect numbers of round objects and size changes in time, such as eye pupil size or number of bacteria in a sample. Thus, we are making this software plugin freely available as open-source because we believe it will be of benefit to other colleagues both in the cephalopod biology field and also within other disciplines. PMID:28298896

  6. SpotMetrics: An Open-Source Image-Analysis Software Plugin for Automatic Chromatophore Detection and Measurement.

    Science.gov (United States)

    Hadjisolomou, Stavros P; El-Haddad, George

    2017-01-01

    Coleoid cephalopods (squid, octopus, and sepia) are renowned for their elaborate body patterning capabilities, which are employed for camouflage or communication. The specific chromatic appearance of a cephalopod, at any given moment, is a direct result of the combined action of their intradermal pigmented chromatophore organs and reflecting cells. Therefore, a lot can be learned about the cephalopod coloration system by video recording and analyzing the activation of individual chromatophores in time. The fact that adult cephalopods have small chromatophores, up to several hundred thousand in number, makes measurement and analysis over several seconds a difficult task. However, current advancements in videography enable high-resolution and high framerate recording, which can be used to record chromatophore activity in greater detail and accuracy in both the space and time domains. In turn, the additional pixel information and extra frames per video from such recordings result in large video files of several gigabytes, even when the recording spans only a few minutes. We created a software plugin, "SpotMetrics," that can automatically analyze high resolution, high framerate video of chromatophore organ activation in time. This image analysis software can track hundreds of individual chromatophores over several hundred frames to provide measurements of size and color. This software may also be used to measure differences in chromatophore activation during different behaviors, which will contribute to our understanding of the cephalopod sensorimotor integration system. In addition, this software can potentially be utilized to detect numbers of round objects and size changes in time, such as eye pupil size or number of bacteria in a sample. Thus, we are making this software plugin freely available as open-source because we believe it will be of benefit to other colleagues both in the cephalopod biology field and also within other disciplines.
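
    The detection-and-measurement step can be pictured with a much simpler sketch than the plugin itself: threshold a frame, label the connected dark regions, keep the roughly circular ones and record their centroids and areas. This uses scikit-image rather than the published plugin; the frame, thresholds and size cut-offs are all hypothetical, and identity tracking across frames is not shown.

```python
import numpy as np
from skimage import filters, measure

def measure_spots(frame_gray):
    """Detect dark, roughly circular blobs in one video frame and return their
    centroids and areas (pixels). Per-frame measurement only; the published
    plugin additionally tracks each chromatophore across frames."""
    thresh = filters.threshold_otsu(frame_gray)
    mask = frame_gray < thresh                       # chromatophores assumed darker than the skin
    labels = measure.label(mask)
    spots = []
    for region in measure.regionprops(labels):
        if region.area < 20:                         # discard noise specks
            continue
        circularity = 4.0 * np.pi * region.area / max(region.perimeter, 1e-6) ** 2
        if circularity > 0.6:                        # keep roughly round objects
            spots.append((region.centroid, region.area))
    return spots

# Synthetic frame: one dark disc on a bright background stands in for a chromatophore
frame = np.full((128, 128), 200.0)
yy, xx = np.mgrid[:128, :128]
frame[(yy - 64) ** 2 + (xx - 64) ** 2 < 15 ** 2] = 40.0
print(measure_spots(frame))
```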

  7. Acoustic image-processing software

    Science.gov (United States)

    Several algorithms that display, enhance and analyze side-scan sonar images of the seafloor have been developed by the University of Washington, Seattle, as part of an Office of Naval Research funded program in acoustic image analysis. One of these programs, PORTAL, is a small (less than 100K) image display and enhancement program that can run on MS-DOS computers with VGA boards. This program is now available in the public domain for general use in acoustic image processing. PORTAL is designed to display side-scan sonar data that is stored in most standard formats, including SeaMARC I, II, 150 and GLORIA data. In addition to the “standard” formats, PORTAL has a module “front end” that allows the user to modify the program to accept other image formats. In addition to side-scan sonar data, the program can also display digital optical images from scanners and “framegrabbers,” gridded bathymetry data from Sea Beam and other sources, and potential field (magnetics/gravity) data. While limited in image analysis capability, the program allows image enhancement by histogram manipulation, and basic filtering operations, including multistage filtering. PORTAL can print reasonably high-quality images on Postscript laser printers and lower-quality images on non-Postscript printers with HP Laserjet emulation. Images suitable only for index sheets are also possible on dot matrix printers.
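
    One example of the histogram manipulation mentioned above is plain global histogram equalization, sketched below in NumPy for an 8-bit image; PORTAL's actual enhancement routines are not reproduced here, and the synthetic low-contrast strip is only a stand-in for sonar data.

```python
import numpy as np

def equalize_histogram(img_u8):
    """Global histogram equalization of an 8-bit image: map grey levels through
    the normalized cumulative histogram so the output spans the full 0-255 range."""
    hist = np.bincount(img_u8.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalise to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)          # lookup table: old value -> new value
    return lut[img_u8]

# Low-contrast synthetic strip occupying only grey levels 100-140
rng = np.random.default_rng(0)
img = rng.integers(100, 141, size=(64, 256), dtype=np.uint8)
out = equalize_histogram(img)
print("before: %d-%d  after: %d-%d" % (img.min(), img.max(), out.min(), out.max()))
```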

  8. Preliminary studies for a CBCT imaging protocol for offline organ motion analysis: registration software validation and CTDI measurements.

    Science.gov (United States)

    Falco, Maria Daniela; Fontanarosa, Davide; Miceli, Roberto; Carosi, Alessandra; Santoni, Riccardo; D'Andrea, Marco

    2011-01-01

    Cone-beam X-ray volumetric imaging in the treatment room allows online correction of set-up errors and offline assessment of residual set-up errors and organ motion. In this study, the registration algorithm of the X-ray volume imaging software (XVI, Elekta, Crawley, United Kingdom), which manages a commercial cone-beam computed tomography (CBCT)-based positioning system, has been tested using a homemade and an anthropomorphic phantom to: (1) assess its performance in detecting known translational and rotational set-up errors and (2) transfer the transformation matrix of its registrations into a commercial treatment planning system (TPS) for offline organ motion analysis. Furthermore, the CBCT dose index has been measured for a particular site (prostate: 120 kV, 1028.8 mAs, approximately 640 frames) using a standard Perspex cylindrical body phantom (diameter 32 cm, length 15 cm) and a 10-cm-long pencil ionization chamber. We have found that known displacements were correctly calculated by the registration software to within 1.3 mm and 0.4°. For the anthropomorphic phantom, only translational displacements have been considered. Both studies have shown errors within the intrinsic uncertainty of our system for translational displacements (estimated as 0.87 mm) and rotational displacements (estimated as 0.22°). The resulting table translations proposed by the system to correct the displacements were also checked with portal images and found to place the isocenter of the plan on the linac isocenter within an error of 1 mm, which is the dimension of the spherical lead marker inserted at the center of the homemade phantom. The registration matrix translated into the TPS image fusion module correctly reproduced the alignment between planning CT scans and CBCT scans. Finally, measurements on the CBCT dose index indicate that CBCT acquisition delivers less dose than conventional CT scans and electronic portal imaging device portals. The registration software was found to be

  9. AutoRoot: open-source software employing a novel image analysis approach to support fully-automated plant phenotyping.

    Science.gov (United States)

    Pound, Michael P; Fozard, Susan; Torres Torres, Mercedes; Forde, Brian G; French, Andrew P

    2017-01-01

    Computer-based phenotyping of plants has risen in importance in recent years. Whilst much software has been written to aid phenotyping using image analysis, to date the vast majority has been only semi-automatic. However, such interaction is not desirable in high throughput approaches. Here, we present a system designed to analyse plant images in a completely automated manner, allowing genuine high throughput measurement of root traits. To do this we introduce a new set of proxy traits. We test the system on a new, automated image capture system, the Microphenotron, which is able to image many 1000s of roots/h. A simple experiment is presented, treating the plants with differing chemical conditions to produce different phenotypes. The automated imaging setup and the new software tool was used to measure proxy traits in each well. A correlation matrix was calculated across automated and manual measures, as a validation. Some particular proxy measures are very highly correlated with the manual measures (e.g. proxy length to manual length, r(2) > 0.9). This suggests that while the automated measures are not directly equivalent to classic manual measures, they can be used to indicate phenotypic differences (hence the term, proxy). In addition, the raw discriminative power of the new proxy traits was examined. Principal component analysis was calculated across all proxy measures over two phenotypically-different groups of plants. Many of the proxy traits can be used to separate the data in the two conditions. The new proxy traits proposed tend to correlate well with equivalent manual measures, where these exist. Additionally, the new measures display strong discriminative power. It is suggested that for particular phenotypic differences, different traits will be relevant, and not all will have meaningful manual equivalent measures. However, approaches such as PCA can be used to interrogate the resulting data to identify differences between datasets. Select images can

  10. ORBS, ORCS, OACS, a Software Suite for Data Reduction and Analysis of the Hyperspectral Imagers SITELLE and SpIOMM

    Science.gov (United States)

    Martin, T.; Drissen, L.; Joncas, G.

    2015-09-01

    SITELLE (installed in 2015 at the Canada-France-Hawaii Telescope) and SpIOMM (a prototype attached to the Observatoire du Mont-Mégantic) are the first Imaging Fourier Transform Spectrometers (IFTS) capable of obtaining a hyperspectral data cube which samples a 12 arcminute field of view into four million visible spectra. The result of each observation is made up of two interferometric data cubes which need to be merged, corrected, transformed and calibrated in order to get a spectral cube of the observed region ready to be analysed. ORBS is a fully automatic data reduction software that has been entirely designed for this purpose. The data size (up to 68 Gb for larger science cases) and the computational needs have been challenging, and the highly parallelized object-oriented architecture of ORBS reflects the solutions adopted, which made it possible to process 68 Gb of raw data in less than 11 hours using 8 cores and 22.6 Gb of RAM. It is based on a core framework (ORB) that has been designed to support the whole software suite for data analysis (ORCS and OACS), data simulation (ORUS) and data acquisition (IRIS). They all aim to provide a strong basis for the creation and development of specialized analysis modules that could benefit the scientific community working with SITELLE and SpIOMM.

  11. FITSH -- a software package for image processing

    CERN Document Server

    Pál, András

    2011-01-01

    In this paper we describe the main features of the software package named FITSH, intended to provide a standalone environment for analysis of data acquired by imaging astronomical detectors. The package provides utilities both for the full pipeline of subsequent related data processing steps (incl. image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple image combinations, spatial transformations and interpolations, etc.) and for aiding the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The set of utilities found in the package is built on top of the commonly used UNIX/POSIX shells (hence the name of the package), therefore both frequently us...

  12. WASI-2D: A software tool for regionally optimized analysis of imaging spectrometer data from deep and shallow waters

    Science.gov (United States)

    Gege, Peter

    2014-01-01

    Image processing software has been developed which allows quantitative analysis of multi- and hyperspectral data from oceanic, coastal and inland waters. It has been implemented in the Water Colour Simulator WASI, which is a tool for the simulation and analysis of optical properties and light field parameters of deep and shallow waters. The new module WASI-2D can import atmospherically corrected images from airborne sensors and satellite instruments in various data formats and units like remote sensing reflectance or radiance. It can be easily adapted by the user to different sensors and to optical properties of the studied area. Data analysis is done by inverse modelling using established analytical models. The bio-optical model of the water column accounts for gelbstoff (coloured dissolved organic matter, CDOM), detritus, and mixtures of up to 6 phytoplankton classes and 2 spectrally different types of suspended matter. The reflectance of the sea floor is treated as a sum of up to 6 substrate types. An analytic model of downwelling irradiance allows wavelength-dependent modelling of sun glint and sky glint at the water surface. The provided database covers the spectral range from 350 to 1000 nm in 1 nm intervals. It can be exchanged easily to represent the optical properties of water constituents, bottom types and the atmosphere of the studied area.
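
    The inverse-modelling step can be illustrated with a deliberately toy forward model fitted by least squares. The spectra and coefficients below are invented and the model is not WASI's bio-optical model; the sketch only shows the fitting pattern (simulate or measure a reflectance spectrum, then recover constituent parameters by minimising the residual).

```python
import numpy as np
from scipy.optimize import least_squares

wl = np.arange(400, 701, 5, dtype=float)               # wavelength grid in nm
basis_phyto = np.exp(-((wl - 550.0) / 80.0) ** 2)      # hypothetical phytoplankton signature
basis_sediment = np.linspace(0.5, 1.0, wl.size)        # hypothetical suspended-matter signature

def forward(params):
    """Toy reflectance model: linear mix of two constituents minus a CDOM-like
    exponential absorption term. Not the WASI model; illustration only."""
    c_phyto, c_sed, a_cdom = params
    cdom = np.exp(-0.014 * (wl - 440.0))               # generic CDOM spectral shape
    return c_phyto * basis_phyto + c_sed * basis_sediment - a_cdom * cdom

# Simulate a "measured" spectrum with noise, then invert it
true = (0.8, 0.3, 0.05)
measured = forward(true) + np.random.default_rng(1).normal(0.0, 0.002, wl.size)
fit = least_squares(lambda p: forward(p) - measured, x0=(0.1, 0.1, 0.01))
print("recovered:", np.round(fit.x, 3), " true:", true)
```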

  13. Counting radon tracks in Makrofol detectors with the 'image reduction and analysis facility' (IRAF) software package

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez, F. [Laboratorio de Fisica Medica y Radioactividad Ambiental, Departamento de Medicina Fisica y Farmacologia, Universidad de La Laguna, 38320 La Laguna, Tenerife (Spain)]. E-mail: fimerall@ull.es; Gonzalez-Manrique, S. [Laboratorio de Fisica Medica y Radioactividad Ambiental, Departamento de Medicina Fisica y Farmacologia, Universidad de La Laguna, 38320 La Laguna, Tenerife (Spain); Karlsson, L. [Laboratorio de Fisica Medica y Radioactividad Ambiental, Departamento de Medicina Fisica y Farmacologia, Universidad de La Laguna, 38320 La Laguna, Tenerife (Spain); Hernandez-Armas, J. [Laboratorio de Fisica Medica y Radioactividad Ambiental, Departamento de Medicina Fisica y Farmacologia, Universidad de La Laguna, 38320 La Laguna, Tenerife (Spain); Aparicio, A. [Instituto de Astrofisica de Canarias, 38200 La Laguna, Tenerife (Spain); Departamento de Astrofisica, Universidad de La Laguna. Avenida. Astrofisico Francisco Sanchez s/n, 38071 La Laguna, Tenerife (Spain)

    2007-03-15

    Makrofol detectors are commonly used for long-term radon (²²²Rn) measurements in houses, schools and workplaces. The use of this type of passive detector for the determination of radon concentrations requires the counting of the nuclear tracks produced by alpha particles on the detecting material. The 'image reduction and analysis facility' (IRAF) software package is a piece of software commonly used in astronomical applications. It allows detailed counting and mapping of sky sections where stars are grouped very closely, even forming clusters. In order to count the nuclear tracks in our Makrofol radon detectors, we have developed an inter-disciplinary application that takes advantage of the similarity that exists between counting stars in a dark sky and counting tracks in a track-etch detector. Thus, a low-cost semi-automatic system has been set up in our laboratory which utilises a commercially available desktop scanner and the IRAF software package. A detailed description of the proposed semi-automatic method and its performance, in comparison to ocular counting, is described in detail here. In addition, the calibration factor for this procedure, 2.97 ± 0.07 kBq m⁻³ h track⁻¹ cm², has been calculated based on the results obtained from exposing 46 detectors to certified radon concentrations. Furthermore, the results of a preliminary radon survey carried out in 62 schools in Tenerife island (Spain), using Makrofol detectors counted with the mentioned procedure, are briefly presented. The results reported here indicate that the developed procedure permits a fast, accurate and unbiased determination of the radon tracks in a large number of detectors. The measurements carried out in the schools showed that the radon concentrations in at least 12 schools were above 200 Bq m⁻³ and, in two of them, above 400 Bq m⁻³. Further studies should be performed at those schools following the European Union recommendations about radon concentrations in
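
    With the calibration factor quoted above, a counted track density converts directly into a radon concentration. The track count, analysed area and exposure time in the sketch below are hypothetical.

```python
# Radon concentration from a Makrofol track count, using the calibration factor
# reported in the abstract (2.97 +/- 0.07 kBq m^-3 h per track cm^-2).
CAL_FACTOR = 2.97            # kBq m^-3 h per (track/cm^2)

tracks_counted = 850         # hypothetical number of tracks counted with IRAF
detector_area_cm2 = 1.5      # hypothetical analysed detector area
exposure_hours = 90 * 24     # hypothetical 90-day exposure

track_density = tracks_counted / detector_area_cm2              # tracks per cm^2
radon_kBq_m3 = CAL_FACTOR * track_density / exposure_hours
print("radon concentration: %.0f Bq/m^3" % (radon_kBq_m3 * 1000.0))
```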

  14. Repeatability and Reproducibility of Quantitative Corneal Shape Analysis after Orthokeratology Treatment Using Image-Pro Plus Software

    Science.gov (United States)

    Mei, Ying; Tang, Zhiping

    2016-01-01

    Purpose. To evaluate the repeatability and reproducibility of quantitative analysis of the morphological corneal changes after orthokeratology treatment using “Image-Pro Plus 6.0” software (IPP). Methods. Three sets of measurements were obtained: two sets by examiner 1 with 5 days apart and one set by examiner 2 on the same day. Parameters of the eccentric distance, eccentric angle, area, and roundness of the corneal treatment zone were measured using IPP. The intraclass correlation coefficient (ICC) and repetitive coefficient (COR) were used to calculate the repeatability and reproducibility of these three sets of measurements. Results. ICC analysis suggested “excellent” reliability of more than 0.885 for all variables, and COR values were less than 10% for all variables within the same examiner. ICC analysis suggested “excellent” reliability for all variables of more than 0.90, and COR values were less than 10% for all variables between different examiners. All extreme values of the eccentric distance and area of the treatment zone pointed to the same material number in three sets of measurements. Conclusions. IPP could be used to acquire the exact data of the characteristic morphological corneal changes after orthokeratology treatment with good repeatability and reproducibility. This trial is registered with trial registration number: ChiCTR-IPR-14005505.

  15. Repeatability and Reproducibility of Quantitative Corneal Shape Analysis after Orthokeratology Treatment Using Image-Pro Plus Software

    Directory of Open Access Journals (Sweden)

    Ying Mei

    2016-01-01

    Full Text Available Purpose. To evaluate the repeatability and reproducibility of quantitative analysis of the morphological corneal changes after orthokeratology treatment using “Image-Pro Plus 6.0” software (IPP). Methods. Three sets of measurements were obtained: two sets by examiner 1 with 5 days apart and one set by examiner 2 on the same day. Parameters of the eccentric distance, eccentric angle, area, and roundness of the corneal treatment zone were measured using IPP. The intraclass correlation coefficient (ICC) and repetitive coefficient (COR) were used to calculate the repeatability and reproducibility of these three sets of measurements. Results. ICC analysis suggested “excellent” reliability of more than 0.885 for all variables, and COR values were less than 10% for all variables within the same examiner. ICC analysis suggested “excellent” reliability for all variables of more than 0.90, and COR values were less than 10% for all variables between different examiners. All extreme values of the eccentric distance and area of the treatment zone pointed to the same material number in three sets of measurements. Conclusions. IPP could be used to acquire the exact data of the characteristic morphological corneal changes after orthokeratology treatment with good repeatability and reproducibility. This trial is registered with trial registration number: ChiCTR-IPR-14005505.
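
    The agreement statistics named above can be sketched as follows. This uses a one-way ICC and a simple repeatability coefficient expressed as a percentage of the mean; the study may well have used a different ICC form, and the measurements below are hypothetical.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n_subjects x k_sessions) matrix,
    computed from the between- and within-subject mean squares."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_between = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical eccentric-distance measurements (mm): 8 eyes x 2 sessions of one examiner
sessions = np.array([[0.62, 0.65], [0.48, 0.50], [0.71, 0.69], [0.55, 0.54],
                     [0.60, 0.63], [0.44, 0.46], [0.58, 0.57], [0.66, 0.68]])

diff = sessions[:, 0] - sessions[:, 1]
cor_percent = 100.0 * 1.96 * diff.std(ddof=1) / sessions.mean()   # repeatability as % of the mean
print("ICC(1,1) = %.3f, COR = %.1f%%" % (icc_oneway(sessions), cor_percent))
```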

  16. Quantification of Abdominal Fat in Obese and Healthy Adolescents Using 3 Tesla Magnetic Resonance Imaging and Free Software for Image Analysis

    Science.gov (United States)

    Eloi, Juliana Cristina; Epifanio, Matias; de Gonçalves, Marília Maia; Pellicioli, Augusto; Vieira, Patricia Froelich Giora; Dias, Henrique Bregolin; Bruscato, Neide; Soder, Ricardo Bernardi; Santana, João Carlos Batista; Mouzaki, Marialena; Baldisserotto, Matteo

    2017-01-01

    Background and Aims Computed tomography, which uses ionizing radiation and expensive software packages for analysis of scans, can be used to quantify abdominal fat. The objective of this study is to measure abdominal fat with 3T MRI using free software for image analysis and to correlate these findings with anthropometric and laboratory parameters in adolescents. Methods This prospective observational study included 24 overweight/obese and 33 healthy adolescents (mean age 16.55 years). All participants underwent abdominal MRI exams. Visceral and subcutaneous fat area and percentage were correlated with anthropometric parameters, lipid profile, glucose metabolism, and insulin resistance. Student’s t test and the Mann-Whitney test were applied. Pearson’s chi-square test was used to compare proportions. To determine associations, Pearson’s linear correlation or Spearman’s correlation was used. Results In both groups, waist circumference (WC) was associated with visceral fat area (P = 0.001 and P = 0.01 respectively), and triglycerides were associated with fat percentage (P = 0.046 and P = 0.071 respectively). In obese individuals, total cholesterol/HDL ratio was associated with visceral fat area (P = 0.03) and percentage (P = 0.09), and insulin and HOMA-IR were associated with visceral fat area (P = 0.001) and percentage (P = 0.005). Conclusions 3T MRI can provide reliable and good quality images for quantification of visceral and subcutaneous fat by using a free software package. The results demonstrate that WC is a good predictor of visceral fat in obese adolescents and visceral fat area is associated with total cholesterol/HDL ratio, insulin and HOMA-IR. PMID:28129354

  17. Infrared Imaging Data Reduction Software and Techniques

    CERN Document Server

    Sabbey, Chris N.; McMahon, Richard G.; Lewis, James R.; Irwin, Mike J.

    2001-01-01

    We describe the InfraRed Data Reduction (IRDR) software package, a small ANSI C library of fast image processing routines for automated pipeline reduction of infrared (dithered) observations. We developed the software to satisfy certain design requirements not met in existing packages (e.g., full weight map handling) and to optimize the software for large data sets (non-interactive tasks that are CPU and disk efficient). The software includes stand-alone C programs for tasks such as running sky frame subtraction with object masking, image registration and coaddition with weight maps, dither offset measurement using cross-correlation, and object mask dilation. Although we currently use the software to process data taken with CIRSI (a near-IR mosaic imager), the software is modular and concise and should be easy to adapt/reuse for other work. IRDR is available from anonymous ftp to ftp.ast.cam.ac.uk in pub/sabbey.

  18. Software for multistate analysis

    NARCIS (Netherlands)

    Willekens, Frans; Putter, H.

    2014-01-01

    Background: The growing interest in pathways, the increased availability of life-history data, innovations in statistical and demographic techniques, and advances in software technology have stimulated the development of software packages for multistate modeling of life histories. Objective: In the

  19. Software for multistate analysis

    NARCIS (Netherlands)

    Willekens, Frans; Putter, H.

    2014-01-01

    Background: The growing interest in pathways, the increased availability of life-history data, innovations in statistical and demographic techniques, and advances in software technology have stimulated the development of software packages for multistate modeling of life histories. Objective: In the

  20. Application of Photoshop Software in Analysis of Immunohistochemical Images

    Institute of Scientific and Technical Information of China (English)

    周洲; 张军锋; 陈建; 赖娅娜

    2012-01-01

    [Objective] To discuss, in combination with practical teaching and scientific research work, the standard use of the image processing software Photoshop for the analysis of immunohistochemical images. [Method] Suitable methods were chosen for the segmentation and grayscale conversion of immunohistochemical images; the images were then measured and analyzed with Photoshop software, and the results were compared with those obtained with the specialized packages Image-Pro Plus and ImageJ. [Result] The method is convenient to use and the data are accurate and reliable. [Conclusion] The method is worth adopting in experimental teaching and scientific research.

  1. SamuROI, a Python-Based Software Tool for Visualization and Analysis of Dynamic Time Series Imaging at Multiple Spatial Scales

    Directory of Open Access Journals (Sweden)

    Martin Rueckl

    2017-06-01

    Full Text Available The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales.

  2. SamuROI, a Python-Based Software Tool for Visualization and Analysis of Dynamic Time Series Imaging at Multiple Spatial Scales.

    Science.gov (United States)

    Rueckl, Martin; Lenzi, Stephen C; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W

    2017-01-01

    The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca(2+)-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale, (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca(2+) imaging datasets, particularly when these have been acquired at different spatial scales.

  3. Dental application of novel finite element analysis software for three-dimensional finite element modeling of a dentulous mandible from its computed tomography images.

    Science.gov (United States)

    Nakamura, Keiko; Tajima, Kiyoshi; Chen, Ker-Kong; Nagamatsu, Yuki; Kakigawa, Hiroshi; Masumi, Shin-ich

    2013-12-01

    This study focused on the application of novel finite-element analysis software for constructing a finite-element model from the computed tomography data of a human dentulous mandible. The finite-element model is necessary for evaluating the mechanical response of the alveolar part of the mandible, resulting from occlusal force applied to the teeth during biting. Commercially available patient-specific general computed tomography-based finite-element analysis software was solely applied to the finite-element analysis for the extraction of computed tomography data. The mandibular bone with teeth was extracted from the original images. Both the enamel and the dentin were extracted after image processing, and the periodontal ligament was created from the segmented dentin. The constructed finite-element model was reasonably accurate using a total of 234,644 nodes and 1,268,784 tetrahedral and 40,665 shell elements. The elastic moduli of the heterogeneous mandibular bone were determined from the bone density data of the computed tomography images. The results suggested that the software applied in this study is both useful and powerful for creating a more accurate three-dimensional finite-element model of a dentulous mandible from the computed tomography data without the need for any other software.
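
    The abstract does not give the density-to-modulus relation used by the commercial software, so the sketch below uses placeholder constants in a generic two-step mapping (a linear HU-to-density calibration followed by a power law) purely to illustrate how element-wise moduli can be assigned in a heterogeneous bone model.

```python
import numpy as np

def ct_to_modulus_gpa(hu, rho_slope=0.001, rho_offset=1.0, c=3.0, p=2.0):
    """Map mean CT numbers (HU) to elastic moduli (GPa) via apparent density.
    All constants are placeholders, not the values used by the software in the
    abstract; real models calibrate both steps against phantom and literature data."""
    rho = rho_slope * hu + rho_offset                     # apparent density, g/cm^3 (placeholder calibration)
    return c * np.power(np.clip(rho, 0.05, None), p)      # power-law modulus, GPa

# One modulus per element, from the mean HU of the voxels each element covers
element_mean_hu = np.array([1200.0, 800.0, 300.0, 1500.0])
print("element moduli (GPa):", np.round(ct_to_modulus_gpa(element_mean_hu), 1))
```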

  4. Software for multistate analysis

    Directory of Open Access Journals (Sweden)

    Frans J. Willekens

    2014-08-01

    Full Text Available Background: The growing interest in pathways, the increased availability of life-history data, innovations in statistical and demographic techniques, and advances in software technology have stimulated the development of software packages for multistate modeling of life histories. Objective: In the paper we list and briefly discuss several software packages for multistate analysis of life-history data. The packages cover the estimation of multistate models (transition rates and transition probabilities), multistate life tables, multistate population projections, and microsimulation. Methods: Brief description of software packages in a historical and comparative perspective. Results: During the past 10 years the advances in multistate modeling software have been impressive. New computational tools accompany the development of new methods in statistics and demography. The statistical theory of counting processes is the preferred method for the estimation of multistate models and R is the preferred programming platform. Conclusions: Innovations in method, data, and computer technology have removed the traditional barriers to multistate modeling of life histories and the computation of informative life-course indicators. The challenge ahead of us is to model and predict individual life histories.

  5. Software Image J to study soil pore distribution

    Directory of Open Access Journals (Sweden)

    Sabrina Passoni

    2014-04-01

    Full Text Available In soil science, a direct method that allows the study of soil pore distribution is bi-dimensional (2D) digital image analysis. This technique provides quantitative results on soil pore shape, number and size. The use of specific software for the treatment and processing of images allows a fast and efficient method to quantify the soil porous system. However, due to the high cost of commercial software, public-domain programs can be an interesting alternative for soil structure analysis. The objective of this work was to evaluate the quality of data provided by the ImageJ software (public domain) used to characterize the voids of two soils, characterized as Geric Ferralsol and Rhodic Ferralsol, from the southeast region of Brazil. The pore distribution analysis technique from impregnated soil blocks was utilized for this purpose. The 2D image acquisition was carried out by using a CCD camera coupled to a conventional optical microscope. After acquisition and treatment of the images, they were processed and analyzed by the software Noesis Visilog 5.4® (chosen as the reference program) and ImageJ. The parameters chosen to characterize the soil voids were: shape, number and pore size distribution. For both soils, the results obtained for the image total porosity (%), the total number of pores and the pore size distribution showed that ImageJ is suitable software for the characterization of the voids of soil samples impregnated with resin.
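
    The workflow described above (threshold, label, measure) can be sketched with scikit-image; this is not the authors' routine in either Visilog or ImageJ, and the synthetic image and pixel size are made up.

```python
import numpy as np
from skimage import filters, measure

def pore_statistics(gray_image, pixel_mm=0.05):
    """Threshold a 2D thin-section image, label the pore space and return the
    total porosity (%), the pore count and the individual pore areas (mm^2)."""
    thresh = filters.threshold_otsu(gray_image)
    pores = gray_image > thresh                           # resin-filled pores assumed bright
    porosity_percent = 100.0 * pores.mean()
    labels = measure.label(pores)
    areas_mm2 = np.array([r.area for r in measure.regionprops(labels)]) * pixel_mm ** 2
    return porosity_percent, int(labels.max()), areas_mm2

# Synthetic image: two bright "pores" in a dark soil matrix
img = np.zeros((200, 200))
img[40:80, 40:100] = 1.0
img[130:150, 120:180] = 1.0
porosity, n_pores, areas = pore_statistics(img)
print("porosity = %.1f%%, pores = %d, areas (mm^2) = %s" % (porosity, n_pores, np.round(areas, 2)))
```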

  6. Multiparametric Cell Cycle Analysis Using the Operetta High-Content Imager and Harmony Software with PhenoLOGIC.

    Directory of Open Access Journals (Sweden)

    Andrew J Massey

    Full Text Available High-content imaging is a powerful tool for determining cell phenotypes at the single cell level. Characterising the effect of small molecules on cell cycle distribution is important for understanding their mechanism of action, especially in oncology drug discovery, but also for understanding potential toxicology liabilities. Here, a high-throughput phenotypic assay utilising the PerkinElmer Operetta high-content imager and Harmony software to determine cell cycle distribution is described. PhenoLOGIC, a machine learning algorithm within Harmony software, was employed to robustly separate single cells from cell clumps. DNA content, EdU incorporation and pHH3 (S10) expression levels were subsequently utilised to separate cells into the various phases of the cell cycle. The assay is amenable to multiplexing with an additional pharmacodynamic marker to assess cell cycle changes within a specific cellular sub-population. Using this approach, the cell cycle distribution of γH2AX positive nuclei was determined following treatment with DNA damaging agents. Likewise, the assay can be multiplexed with Ki67 to determine the fraction of quiescent cells and with BrdU dual labelling to determine S-phase duration. This methodology therefore provides a relatively cheap, quick and high-throughput phenotypic method for determining accurate cell cycle distribution for small molecule mechanism of action and drug toxicity studies.
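
    The gating logic behind such an assay can be written down compactly once per-nucleus intensities have been extracted. The sketch below is not the PhenoLOGIC-trained pipeline; the thresholds and the G1 peak position are assumed to come from control wells, and the intensities are simulated.

```python
import numpy as np

def classify_cell_cycle(dna, edu, phh3, edu_cutoff, phh3_cutoff, g1_peak):
    """Assign each nucleus to G1, S, G2 or M from integrated DNA intensity,
    EdU incorporation and pHH3(S10) staining."""
    phase = np.full(dna.shape, "G1", dtype=object)
    phase[edu > edu_cutoff] = "S"                          # actively replicating cells
    is_4n = dna > 1.5 * g1_peak
    phase[is_4n & (edu <= edu_cutoff)] = "G2"
    phase[is_4n & (phh3 > phh3_cutoff)] = "M"              # mitotic marker overrides G2
    return phase

rng = np.random.default_rng(2)
dna = np.concatenate([rng.normal(100, 8, 600), rng.normal(200, 12, 300)])   # 2N and 4N populations
edu = rng.uniform(0, 1, dna.size)
phh3 = rng.uniform(0, 1, dna.size)
phases = classify_cell_cycle(dna, edu, phh3, edu_cutoff=0.8, phh3_cutoff=0.95, g1_peak=100.0)
print({p: int((phases == p).sum()) for p in ("G1", "S", "G2", "M")})
```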

  7. Software for producing trichromatic images in astronomy

    CERN Document Server

    Morel, S; Morel, Sebastien; Davoust, Emmanuel

    1995-01-01

    We present a software package for combining three monochromatic images of an astronomical object into a trichromatic color image. We first discuss the meaning of "true" colors in astronomical images. We then describe the different steps of our method, choosing the relevant dynamic intensity range in each filter, inventorying the different colors, optimizing the color map, modifying the balance of colors, and enhancing contrasts at low intensity levels. While the first steps are automatic, the last two are interactive.
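
    The first of those steps, choosing the dynamic intensity range per filter, amounts to clipping each channel between two percentiles before stacking the three exposures into RGB. The sketch below shows only that step with assumed percentile limits; the colour-map optimisation and contrast enhancement of the described package are not reproduced.

```python
import numpy as np

def combine_trichromatic(red_img, green_img, blue_img, lo=1.0, hi=99.5):
    """Combine three monochromatic exposures into an 8-bit RGB image by clipping
    each channel between two percentiles and rescaling to [0, 255]."""
    def stretch(channel):
        p_lo, p_hi = np.percentile(channel, (lo, hi))
        return np.clip((channel - p_lo) / (p_hi - p_lo), 0.0, 1.0)
    rgb = np.dstack([stretch(red_img), stretch(green_img), stretch(blue_img)])
    return (rgb * 255).astype(np.uint8)

rng = np.random.default_rng(3)
r, g, b = [rng.normal(100, 20, (64, 64)) for _ in range(3)]   # stand-ins for three filter images
rgb = combine_trichromatic(r, g, b)
print(rgb.shape, rgb.dtype)
```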

  8. Using Rscript for Software Analysis

    NARCIS (Netherlands)

    Klint, P.

    2008-01-01

    RSCRIPT is a concept language that explores the design space of relation-based languages for software analysis. We briefly sketch the RSCRIPT language by way of a standard example, summarize our experience, and point at future developments.

  9. Analysis of image sharpness reproducibility on a novel engineered micro-CT scanner with variable geometry and embedded recalibration software.

    Science.gov (United States)

    Panetta, D; Belcari, N; Del Guerra, A; Bartolomei, A; Salvadori, P A

    2012-04-01

    This study investigates the reproducibility of the reconstructed image sharpness, after modifications of the geometry setup, for a variable magnification micro-CT (μCT) scanner. All the measurements were performed on a novel engineered μCT scanner for in vivo imaging of small animals (Xalt), which has been recently built at the Institute of Clinical Physiology of the National Research Council (IFC-CNR, Pisa, Italy), in partnership with the University of Pisa. The Xalt scanner is equipped with integrated software for on-line geometric recalibration, which was used throughout the experiments. In order to evaluate the losses of image quality due to modifications of the geometry setup, we made 22 consecutive acquisitions by alternately changing the system geometry between two different setups (Large FoV - LF, and High Resolution - HR). For each acquisition, the tomographic images were reconstructed before and after the on-line geometric recalibration. For each reconstruction, the image sharpness was evaluated using two different figures of merit: (i) the percentage contrast on a small bar pattern of fixed frequency (f = 5.5 lp/mm for the LF setup and f = 10 lp/mm for the HR setup) and (ii) the image entropy. We have found that, due to the small-scale mechanical uncertainty (in the order of the voxel size), a recalibration is necessary for each geometric setup after repositioning of the system's components; the resolution losses due to the lack of recalibration are worse for the HR setup (voxel size = 18.4 μm). The integrated on-line recalibration algorithm of the Xalt scanner allowed the recalibration to be performed quickly, restoring the spatial resolution of the system to the reference resolution obtained after the initial (off-line) calibration. Copyright © 2011 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
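
    The two figures of merit named above are easy to state explicitly: the percentage contrast of a profile across the bar pattern, and the Shannon entropy of the grey-level histogram. The sketch below shows both calculations on made-up data; ROI selection over the actual bar pattern is not shown.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, used as a sharpness figure of merit."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0].astype(np.float64)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def bar_pattern_contrast(profile):
    """Percentage contrast (max-min)/(max+min) of a line profile across a cyclic bar pattern."""
    return 100.0 * (profile.max() - profile.min()) / (profile.max() + profile.min())

rng = np.random.default_rng(4)
image = rng.normal(100.0, 10.0, (128, 128))
profile = np.array([10.0, 90.0, 10.0, 90.0, 10.0, 90.0])      # idealised profile over three bar pairs
print("entropy = %.2f bits, bar contrast = %.1f%%" % (image_entropy(image), bar_pattern_contrast(profile)))
```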

  10. Selection of Software Development Environment for Image Processing and Analysis

    Institute of Scientific and Technical Information of China (English)

    张昕; 童恒建; 陈晓文; 王海

    2012-01-01

    With the rapid development of aerospace technologies, remote sensing sensor technologies, communication technologies and computer technologies, high spatial resolution and hyperspectral remote sensing images are widely applied in industry and agriculture. Quick display and browsing of large remote sensing images is an important feature of remote sensing image processing and analysis software. Selecting software development environments and tools is the first important task in scientific research or commercial software development. This paper discusses the advantages and disadvantages of MFC, DirectX, OpenGL and Qt for image display. The conclusions can be consulted when selecting software development environments and tools.

  11. Image quality dependence on image processing software in computed radiography

    Directory of Open Access Journals (Sweden)

    Lourens Jochemus Strauss

    2012-06-01

    Full Text Available Background. Image post-processing gives computed radiography (CR) a considerable advantage over film-screen systems. After digitisation of information from CR plates, data are routinely processed using manufacturer-specific software. Agfa CR readers use MUSICA software, and an upgrade with significantly different image appearance was recently released: MUSICA2. Aim. This study quantitatively compares the image quality of images acquired without post-processing (flatfield) with images processed using these two software packages. Methods. Four aspects of image quality were evaluated. An aluminium step-wedge was imaged using constant mA at tube voltages varying from 40 to 117 kV. Signal-to-noise ratios (SNRs) and contrast-to-noise ratios (CNRs) were calculated from all steps. Contrast variation with object size was evaluated with visual assessment of images of a Perspex contrast-detail phantom, and an image quality figure (IQF) was calculated. Resolution was assessed using modulation transfer functions (MTFs). Results. SNRs for MUSICA2 were generally higher than those for the other two methods. The CNRs were comparable between the two software versions, although MUSICA2 had slightly higher values at lower kV. The flatfield CNR values were better than those for the processed images. All images showed a decrease in CNRs with tube voltage. The contrast-detail measurements showed that both MUSICA programmes improved the contrast of smaller objects. MUSICA2 was found to give the lowest (best) IQF; MTF measurements confirmed this, with values at 3.5 lp/mm of 10% for MUSICA2, 8% for MUSICA and 5% for flatfield. Conclusion. Both MUSICA software packages produced images with better contrast resolution than unprocessed images. MUSICA2 has slightly better image quality than MUSICA.
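
    The SNR and CNR figures reported above can be computed from two uniform regions of interest, one on a wedge step and one on the adjacent background. The sketch below uses simulated pixel values and a pooled-noise CNR definition; the study's exact ROI placement and definitions may differ.

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a uniform region: mean pixel value over its standard deviation."""
    return roi.mean() / roi.std(ddof=1)

def cnr(roi_step, roi_background):
    """Contrast-to-noise ratio between a wedge step and the background, with pooled noise."""
    noise = np.sqrt(0.5 * (roi_step.var(ddof=1) + roi_background.var(ddof=1)))
    return abs(roi_step.mean() - roi_background.mean()) / noise

rng = np.random.default_rng(5)
step = rng.normal(1200.0, 25.0, (40, 40))        # hypothetical pixel values on one wedge step
background = rng.normal(900.0, 25.0, (40, 40))   # hypothetical flat region next to the wedge
print("SNR = %.1f, CNR = %.1f" % (snr(step), cnr(step, background)))
```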

  12. Development of automated conjunctival hyperemia analysis software.

    Science.gov (United States)

    Sumi, Tamaki; Yoneda, Tsuyoshi; Fukuda, Ken; Hoshikawa, Yasuhiro; Kobayashi, Masahiko; Yanagi, Masahide; Kiuchi, Yoshiaki; Yasumitsu-Lovell, Kahoko; Fukushima, Atsuki

    2013-11-01

    Conjunctival hyperemia is observed in a variety of ocular inflammatory conditions. The evaluation of hyperemia is indispensable for the treatment of patients with ocular inflammation. However, the major methods currently available for evaluation are based on nonquantitative and subjective methods. Therefore, we developed novel software to evaluate bulbar hyperemia quantitatively and objectively. First, we investigated whether the histamine-induced hyperemia of guinea pigs could be quantified by image analysis. Bulbar conjunctival images were taken by means of a digital camera, followed by the binarization of the images and the selection of regions of interest (ROIs) for evaluation. The ROIs were evaluated by counting the number of absolute pixel values. Pixel values peaked significantly 1 minute after histamine challenge was performed and were still increased after 5 minutes. Second, we applied the same method to antigen (ovalbumin)-induced hyperemia of sensitized guinea pigs, acquiring similar results except for the substantial upregulation in the first 5 minutes after challenge. Finally, we analyzed human bulbar hyperemia using the new software we developed especially for human usage. The new software allows the automatic calculation of pixel values once the ROIs have been selected. In our clinical trials, the percentage of blood vessel coverage of ROIs was significantly higher in the images of hyperemia caused by allergic conjunctival diseases and hyperemia induced by Bimatoprost, compared with those of healthy volunteers. We propose that this newly developed automated hyperemia analysis software will be an objective clinical tool for the evaluation of ocular hyperemia.
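
    The pixel-counting idea behind the percentage of blood-vessel coverage can be sketched as a threshold over a selected ROI followed by a simple ratio; this is not the authors' software, and the synthetic ROI below only stands in for a conjunctival photograph.

```python
import numpy as np
from skimage import filters

def vessel_coverage_percent(roi):
    """Percentage of an ROI covered by vessels, estimated by Otsu binarisation
    of the channel in which vessels appear bright, then counting vessel pixels."""
    vessel_mask = roi > filters.threshold_otsu(roi)
    return 100.0 * vessel_mask.mean()

# Synthetic ROI: a few bright "vessel" stripes on a paler background
roi = np.full((100, 100), 0.3)
roi[::10, :] = 0.9
print("vessel coverage: %.1f%%" % vessel_coverage_percent(roi))
```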

  13. Method for the calculation of volumetric fraction of retained austenite through the software for analysis of digital images; Metodo para o calculo da fracao volumetrica de austenita retida atraves do software de analise digital de imagens

    Energy Technology Data Exchange (ETDEWEB)

    Lombardo, S.; Costa, F.H.; Hashimoto, T.M.; Pereira, M.S., E-mail: sandro_Lombardo@hotmail.co [UNESP, Guaratingueta, SP (Brazil). Fac. de Engenharia; Abdalla, A.J. [Centro Tecnico Aeroespacial (CTA-IEAv), Sao Jose dos Campos, SP (Brazil). Inst. de Estudos Avancados

    2010-07-01

    In order to calculate the volume fraction of retained austenite in aeronautic multiphase steels, digital image analysis software was used. The materials studied were AISI 43XX steels with carbon contents of 0.30, 0.40 and 0.50%, heat treated by conventional quenching and by isothermal cooling in the bainitic and intercritical regions, and characterized by optical microscopy after etching with 10% sodium metabisulfite for 30 seconds followed by forced drying. The results obtained from the photomicrographs were compared with those of X-ray diffraction and magnetic saturation, showing that with this technique it is possible to quantify the percentage of retained austenite in the martensitic matrix for the different types of steels. (author)

  14. Osteolytica: An automated image analysis software package that rapidly measures cancer-induced osteolytic lesions in in vivo models with greater reproducibility compared to other commonly used methods.

    Science.gov (United States)

    Evans, H R; Karmakharm, T; Lawson, M A; Walker, R E; Harris, W; Fellows, C; Huggins, I D; Richmond, P; Chantry, A D

    2016-02-01

    Methods currently used to analyse osteolytic lesions caused by malignancies such as multiple myeloma and metastatic breast cancer vary from basic 2-D X-ray analysis to 2-D images of micro-CT datasets analysed with non-specialised image software such as ImageJ. However, these methods have significant limitations. They do not capture 3-D data, they are time-consuming and they often suffer from inter-user variability. We therefore sought to develop a rapid and reproducible method to analyse 3-D osteolytic lesions in mice with cancer-induced bone disease. To this end, we have developed Osteolytica, an image analysis software method featuring an easy to use, step-by-step interface to measure lytic bone lesions. Osteolytica utilises novel graphics card acceleration (parallel computing) and 3-D rendering to provide rapid reconstruction and analysis of osteolytic lesions. To evaluate the use of Osteolytica we analysed tibial micro-CT datasets from murine models of cancer-induced bone disease and compared the results to those obtained using a standard ImageJ analysis method. Firstly, to assess inter-user variability we deployed four independent researchers to analyse tibial datasets from the U266-NSG murine model of myeloma. Using ImageJ, inter-user variability between the bones was substantial (±19.6%), in contrast to using Osteolytica, which demonstrated minimal variability (±0.5%). Secondly, tibial datasets from U266-bearing NSG mice or BALB/c mice injected with the metastatic breast cancer cell line 4T1 were compared to tibial datasets from aged and sex-matched non-tumour control mice. Analyses by both Osteolytica and ImageJ showed significant increases in bone lesion area in tumour-bearing mice compared to control mice. These results confirm that Osteolytica performs as well as the current 2-D ImageJ osteolytic lesion analysis method. However, Osteolytica is advantageous in that it analyses over the entirety of the bone volume (as opposed to selected 2-D images), it

  15. SUPRIM: easily modified image processing software.

    Science.gov (United States)

    Schroeter, J P; Bretaudiere, J P

    1996-01-01

    A flexible, modular software package intended for the processing of electron microscopy images is presented. The system consists of a set of image processing tools or filters, written in the C programming language, and a command line style user interface based on the UNIX shell. The pipe and filter structure of UNIX and the availability of command files in the form of shell scripts eases the construction of complex image processing procedures from the simpler tools. Implementation of a new image processing algorithm in SUPRIM may often be performed by construction of a new shell script, using already existing tools. Currently, the package has been used for two- and three-dimensional image processing and reconstruction of macromolecules and other structures of biological interest.

  16. Microscopic image analysis techniques for the morphological characterization of pharmaceutical particles: influence of the software, and the factor algorithms used in the shape factor estimation.

    Science.gov (United States)

    Almeida-Prieto, Sergio; Blanco-Méndez, José; Otero-Espinar, Francisco J

    2007-11-01

    The present report highlights the difficulties of particle shape characterizations of multiparticulate systems obtained using different image analysis techniques. The report describes and discusses a number of shape factors that are widely used in pharmaceutical research. Using photographs of 16 pellets of different shapes, obtained by extrusion-spheronization, we investigated how shape factor estimates vary depending on method of calculation, and among different software packages. The results obtained indicate that the algorithms used (both for estimation of basic dimensions such as perimeter and maximum diameter, and for estimation of shape factors on the basis of these basic dimensions) have marked influences on the shape factor values obtained. These findings suggest that care is required when comparing results obtained using different image analysis programs.
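
    A few of the shape factors discussed above can be written down directly; the report's point is that the basic dimensions feeding these formulas (the perimeter above all) are estimated differently by different packages, so the same pellet can yield different values. The formulas below are the commonly used textbook forms, not those of any particular package.

```python
import numpy as np

def shape_factors(area, perimeter, d_max, d_min):
    """Common particle shape factors from basic 2D dimensions."""
    return {
        "circularity": 4.0 * np.pi * area / perimeter ** 2,   # 1.0 for a perfect circle
        "aspect_ratio": d_max / d_min,
        "elongation": 1.0 - d_min / d_max,
    }

# A nearly circular pellet with radius 50 px measured from its image
r = 50.0
print(shape_factors(area=np.pi * r ** 2, perimeter=2 * np.pi * r, d_max=2 * r, d_min=2 * r))
```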

  17. Engineering analysis with ANSYS software

    CERN Document Server

    Nakasone, Y; Stolarski, T A

    2006-01-01

    For all engineers and students coming to finite element analysis or to ANSYS software for the first time, this powerful hands-on guide develops a detailed and confident understanding of using ANSYS's powerful engineering analysis tools. The best way to learn complex systems is by means of hands-on experience. With an innovative and clear tutorial-based approach, this powerful book provides readers with a comprehensive introduction to all of the fundamental areas of engineering analysis they are likely to require either as part of their studies or in getting up to speed fast with the use of ANSYS software in working life. Opening with an introduction to the principles of the finite element method, the book then presents an overview of ANSYS technologies before moving on to cover key application areas in detail.

  18. Terahertz/mm wave imaging simulation software

    Science.gov (United States)

    Fetterman, M. R.; Dougherty, J.; Kiser, W. L., Jr.

    2006-10-01

    We have developed a mm wave/terahertz imaging simulation package from COTS graphic software and custom MATLAB code. In this scheme, a commercial ray-tracing package was used to simulate the emission and reflections of radiation from scenes incorporating highly realistic imagery. Accurate material properties were assigned to objects in the scenes, with values obtained from the literature, and from our own terahertz spectroscopy measurements. The images were then post-processed with custom Matlab code to include the blur introduced by the imaging system and noise levels arising from system electronics and detector noise. The Matlab code was also used to simulate the effect of fog, an important aspect for mm wave imaging systems. Several types of image scenes were evaluated, including bar targets, contrast detail targets, a person in a portal screening situation, and a sailboat on the open ocean. The images produced by this simulation are currently being used as guidance for a 94 GHz passive mm wave imaging system, but have broad applicability for frequencies extending into the terahertz region.
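
    As a rough illustration of the post-processing step described (system blur plus electronics/detector noise added to an ideal rendered scene), the sketch below uses Python rather than the authors' MATLAB code; the blur width and noise level are placeholder values.

```python
# Illustrative Python stand-in for the MATLAB post-processing step described:
# blur an ideal rendered scene with the imaging-system response and add
# detector/electronics noise. Blur and noise parameters are placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(scene, blur_sigma_px=3.0, noise_sigma=0.02, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    blurred = gaussian_filter(scene, sigma=blur_sigma_px)          # system blur
    noisy = blurred + rng.normal(0.0, noise_sigma, scene.shape)    # detector noise
    return np.clip(noisy, 0.0, 1.0)

scene = np.zeros((128, 128))
scene[40:90, 30:60] = 1.0            # toy bar target
simulated = degrade(scene)
```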

  19. Application of Software Safety Analysis Methods

    Energy Technology Data Exchange (ETDEWEB)

    Park, G. Y.; Hur, S.; Cheon, S. W.; Kim, D. H.; Lee, D. Y.; Kwon, K. C. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Lee, S. J.; Koo, Y. H. [Doosan Heavy Industries and Construction Co., Daejeon (Korea, Republic of)

    2009-05-15

    A fully digitalized reactor protection system, which is called the IDiPS-RPS, was developed through the KNICS project. The IDiPS-RPS has four redundant and separated channels. Each channel is mainly composed of a group of bistable processors which redundantly compare process variables with their corresponding setpoints and a group of coincidence processors that generate a final trip signal when a trip condition is satisfied. Each channel also contains a test processor called the ATIP and a display and command processor called the COM. All the functions were implemented in software. During the development of the safety software, various software safety analysis methods were applied, in parallel to the verification and validation (V and V) activities, along the software development life cycle. The software safety analysis methods employed were the software hazard and operability (Software HAZOP) study, the software fault tree analysis (Software FTA), and the software failure modes and effects analysis (Software FMEA)

  20. Analyzing huge pathology images with open source software.

    Science.gov (United States)

    Deroulers, Christophe; Ameisen, David; Badoual, Mathilde; Gerin, Chloé; Granier, Alexandre; Lartaud, Marc

    2013-06-06

    Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Our open source software enables dealing with huge images with standard software on average computers. They are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster. The virtual slide(s) for this article can be found here
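
    The mosaicking operation described (dividing a huge image into small tiles, with or without overlap, so that the full image never has to be loaded at once) amounts to simple window arithmetic. The sketch below is a hypothetical helper in Python, not part of the NDPITools or LargeTIFFTools APIs.

```python
# Hypothetical helper (not the NDPITools API): compute tile windows so a huge
# image can be processed piecewise, with optional overlap between tiles.
import numpy as np

def tile_grid(height, width, tile=2048, overlap=128):
    """Yield (row_slice, col_slice) windows covering a height x width image."""
    step = tile - overlap
    for top in range(0, height, step):
        for left in range(0, width, step):
            yield (slice(top, min(top + tile, height)),
                   slice(left, min(left + tile, width)))

# Each window would normally be read on demand from the slide file so the full
# image never sits in RAM; a small array stands in for it here.
image = np.zeros((5000, 7000), dtype=np.uint8)
for rows, cols in tile_grid(*image.shape):
    tile_data = image[rows, cols]    # process or save each tile separately
```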

  1. OARDAS stray radiation analysis software

    Science.gov (United States)

    Rock, David F.

    1999-09-01

    OARDAS (Off-Axis Rejection Design Analysis Software) is a Raytheon in-house code designed to aid in stray light analysis. The code development started in 1982, and by 1986 the program was fully operational. Since that time, the work has continued--not with the goal of creating a marketable product, but with a focus on creating a powerful, user- friendly, highly graphical tool that makes stray light analysis as easy and efficient as possible. The goal has been to optimize the analysis process, with a clear emphasis on designing an interface between computer and user that allows each to do what he does best. The code evolution has resulted in a number of analysis features that are unique to the industry. This paper looks at a variety of stray light analysis problems that the analyst is typically faced with and shows how they are approached using OARDAS.

  2. The National Alliance for Medical Image Computing, a roadmap initiative to build a free and open source software infrastructure for translational research in medical image analysis.

    Science.gov (United States)

    Kapur, Tina; Pieper, Steve; Whitaker, Ross; Aylward, Stephen; Jakab, Marianna; Schroeder, Will; Kikinis, Ron

    2012-01-01

    The National Alliance for Medical Image Computing (NA-MIC) is a multi-institutional, interdisciplinary community of researchers who share the recognition that modern health care demands improved technologies to ease suffering and prolong productive life. Organized under the National Centers for Biomedical Computing 7 years ago, the mission of NA-MIC is to implement a robust and flexible open-source infrastructure for developing and applying advanced imaging technologies across a range of important biomedical research disciplines. A measure of its success, NA-MIC is now applying this technology to diseases that have immense impact on the duration and quality of life: cancer, heart disease, trauma, and degenerative genetic diseases. The targets of this technology range from group comparisons to subject-specific analysis.

  3. Software Performs Complex Design Analysis

    Science.gov (United States)

    2008-01-01

    Designers use computational fluid dynamics (CFD) to gain greater understanding of the fluid flow phenomena involved in components being designed. They also use finite element analysis (FEA) as a tool to help gain greater understanding of the structural response of components to loads, stresses and strains, and the prediction of failure modes. Automated CFD and FEA engineering design has centered on shape optimization, which has been hindered by two major problems: 1) inadequate shape parameterization algorithms, and 2) inadequate algorithms for CFD and FEA grid modification. Working with software engineers at Stennis Space Center, a NASA commercial partner, Optimal Solutions Software LLC, was able to utilize its revolutionary, one-of-a-kind arbitrary shape deformation (ASD) capability (a major advancement in solving these two aforementioned problems) to optimize the shapes of complex pipe components that transport highly sensitive fluids. The ASD technology solves the problem of inadequate shape parameterization algorithms by allowing the CFD designers to freely create their own shape parameters, therefore eliminating the restriction of only being able to use the computer-aided design (CAD) parameters. The problem of inadequate algorithms for CFD grid modification is solved by the fact that the new software performs a smooth volumetric deformation. This eliminates the extremely costly process of having to remesh the grid for every shape change desired. The program can perform a design change in a markedly reduced amount of time, a process that would traditionally involve the designer returning to the CAD model to reshape and then remesh the shapes, something that has been known to take hours, days, even weeks or months, depending upon the size of the model.

  4. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    Science.gov (United States)

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that an inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with a built-in graphics capture board, provides versatile, easy-to-use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.

  5. Current and future trends in marine image annotation software

    Science.gov (United States)

    Gomes-Pereira, Jose Nuno; Auger, Vincent; Beisiegel, Kolja; Benjamin, Robert; Bergmann, Melanie; Bowden, David; Buhl-Mortensen, Pal; De Leo, Fabio C.; Dionísio, Gisela; Durden, Jennifer M.; Edwards, Luke; Friedman, Ariell; Greinert, Jens; Jacobsen-Stout, Nancy; Lerner, Steve; Leslie, Murray; Nattkemper, Tim W.; Sameoto, Jessica A.; Schoening, Timm; Schouten, Ronald; Seager, James; Singh, Hanumant; Soubigou, Olivier; Tojeira, Inês; van den Beld, Inge; Dias, Frederico; Tempera, Fernando; Santos, Ricardo S.

    2016-12-01

    Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation - the process of transposing objects or events represented in a video or still image to the semantic level - may involve human interactions and computer-assisted solutions. Marine image annotation software (MIAS) have enabled over 500 publications to date. We review the functioning, application trends and developments by comparing general and advanced features of 23 different tools utilized in underwater image analysis. MIAS requiring human input are basically a graphical user interface, with a video player or image browser that recognizes a specific time code or image code, allowing events to be logged in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software by the capability of integrating data associated with video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main characteristics: annotating events in real time, annotating after acquisition, and interacting with a database. These range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow users to input and display data from multiple sensors or multiple annotators via intranet or internet. Posterior human-mediated annotation often includes tools for data display and image analysis, e.g. length, area, image segmentation, point count, and in a few cases the possibility of browsing and editing previous dive logs or analyzing the annotations. The interaction with a database allows the automatic integration of annotations from different surveys, repeated annotation and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post processing, for stable platforms or still images

  6. Human Factors Analysis in Software Engineering

    Institute of Scientific and Technical Information of China (English)

    Xu Ren-zuo; Ma Ruo-feng; Liu Li-na; Xiong Zhong-wei

    2004-01-01

    General human factors analysis examines human functions, effects and influence in a system. In a narrower sense, it analyzes human influence upon the reliability of a system; it includes traditional human reliability analysis, human error analysis, man-machine interface analysis, human character analysis, and others. Whether a software development project in software engineering succeeds is determined to a large extent by human factors. In this paper, we discuss what human factors analysis involves, demonstrate the importance of human factors analysis for software engineering by listing some instances, and finally offer a preliminary examination of the mentality that a practitioner in software engineering should possess.

  7. Integrating NASA's Land Analysis System (LAS) image processing software with an appropriate Geographic Information System (GIS): A review of candidates in the public domain

    Science.gov (United States)

    Rochon, Gilbert L.

    1989-01-01

    A user requirements analysis (URA) was undertaken to determine an appropriate public domain Geographic Information System (GIS) software package for potential integration with NASA's LAS (Land Analysis System) 5.0 image processing system. The necessity for a public domain system was underscored by the perceived need for source code access and flexibility in tailoring the GIS system to the needs of a heterogeneous group of end-users, and by specific constraints imposed by LAS and its user interface, the Transportable Applications Executive (TAE). Subsequently, a review was conducted of a variety of public domain GIS candidates, including GRASS 3.0, MOSS, IEMIS, and two university-based packages, IDRISI and KBGIS. The review method was a modified version of the GIS evaluation process developed by the Federal Interagency Coordinating Committee on Digital Cartography. One IEMIS-derivative product, the ALBE (AirLand Battlefield Environment) GIS, emerged as the most promising candidate for integration with LAS. IEMIS (Integrated Emergency Management Information System) was developed by the Federal Emergency Management Agency (FEMA). ALBE GIS is currently under development at the Pacific Northwest Laboratory under contract with the U.S. Army Corps of Engineers' Engineering Topographic Laboratory (ETL). Accordingly, recommendations are offered with respect to a potential LAS/ALBE GIS linkage and with respect to further system enhancements, including coordination with the development of the Spatial Analysis and Modeling System (SAMS) GIS under Goddard's Intelligent Data Management (IDM) developments at the National Space Science Data Center.

  8. Fault tree analysis of KNICS RPS software

    Energy Technology Data Exchange (ETDEWEB)

    Park, Gee Yong; Kwon, Kee Choon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Koh, Kwang Yong; Jee, Eun Kyoung; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Lee, Dae Hyung [Doosan Heavy Industries and Construction, Yongin (Korea, Republic of)

    2008-08-15

    This paper describes the application of a software Fault Tree Analysis (FTA) as one of the analysis techniques for a Software Safety Analysis (SSA) at the design phase and its analysis results for the safety-critical software of a digital reactor protection system, which is called the KNICS RPS, being developed in the KNICS (Korea Nuclear Instrumentation and Control Systems) project. The software modules in the design description were represented by Function Blocks (FBs), and the software FTA was performed based on the well-defined fault tree templates for the FBs. The SSA, which is part of the verification and validation (V and V) activities, was activated at each phase of the software lifecycle for the KNICS RPS. At the design phase, the software HAZOP (Hazard and Operability) and the software FTA were employed in the SSA in such a way that the software HAZOP was performed first and then the software FTA was applied. The software FTA was applied to some critical modules selected from the software HAZOP analysis.

  9. Integrating security analysis and safeguards software engineering

    Energy Technology Data Exchange (ETDEWEB)

    Spencer, D.D.; Axline, R.M.

    1989-01-01

    These initiatives will work together to provide more secure safeguards software, as well as other critical systems software. The resulting design tools and methodologies, the evolving guidelines for software security, and the adversary-resistant software components will be applied to the software design at each stage to increase the design's inherent security and to make the design easier to analyze. The resident hardware monitor or other architectural innovations will provide complementary additions to the design to remove some of the burden of security from the software. The security analysis process, supported by new analysis methodologies and tools, will be applied to the software design as it evolves in an attempt to identify and remove vulnerabilities at the earliest possible point in the safeguards system life cycle. The result should be better and more verifiably secure software systems.

  10. Software and Algorithms for Biomedical Image Data Processing and Visualization

    Science.gov (United States)

    Talukder, Ashit; Lambert, James; Lam, Raymond

    2004-01-01

    A new software equipped with novel image processing algorithms and graphical-user-interface (GUI) tools has been designed for automated analysis and processing of large amounts of biomedical image data. The software, called PlaqTrak, has been specifically used for analysis of plaque on teeth of patients. New algorithms have been developed and implemented to segment teeth of interest from surrounding gum, and a real-time image-based morphing procedure is used to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The PlaqTrak system integrates these components into a single software suite with an easy-to-use GUI (see Figure 1) that allows users to do an end-to-end run of a patient's record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image. The automated and accurate processing of the captured images to segment each tooth [see Figure 2(a)] and then detect plaque on a tooth-by-tooth basis is a critical component of the PlaqTrak system to do clinical trials and analysis with minimal human intervention. These features offer distinct advantages over other competing systems that analyze groups of teeth or synthetic teeth. PlaqTrak divides each segmented tooth into eight regions using an advanced graphics morphing procedure [see results on a chipped tooth in Figure 2(b)], and a pattern recognition classifier is then used to locate plaque [red regions in Figure 2(d)] and enamel regions. The morphing allows analysis within regions of teeth, thereby facilitating detailed statistical analysis such as the amount of plaque present on the biting surfaces on teeth. This software system is applicable to a host of biomedical applications, such as cell analysis and life detection, or robotic applications, such

  11. Integrated Methodology for Software Reliability Analysis

    Directory of Open Access Journals (Sweden)

    Marian Pompiliu CRISTESCU

    2012-01-01

    Full Text Available The most widely used techniques for ensuring the safety and reliability of systems are applied together as a whole, and in most cases the software components are overlooked or analyzed too little. The present paper describes the applicability of fault tree analysis to a software system, an analysis defined as Software Fault Tree Analysis (SFTA); the fault trees are evaluated using binary decision diagrams, all of these being integrated and used with help from a Java reliability library.

  12. Image processing and enhancement provided by commercial dental software programs

    National Research Council Canada - National Science Library

    Lehmann, T M; Troeltsch, E; Spitzer, K

    2002-01-01

    To identify and analyse methods/algorithms for image processing provided by various commercial software programs used in direct digital dental imaging and to map them onto a standardized nomenclature...

  13. Software architecture analysis of usability

    NARCIS (Netherlands)

    Folmer, Eelke

    2005-01-01

    One of the qualities that has received increased attention in recent decades is usability. A software product with poor usability is likely to fail in a highly competitive market; therefore software developing organizations are paying more and more attention to ensuring the usability of their software.

  14. Image Analysis

    DEFF Research Database (Denmark)

    The 19th Scandinavian Conference on Image Analysis was held at the IT University of Copenhagen in Denmark during June 15-17, 2015. The SCIA conference series has been an ongoing biennial event for more than 30 years and over the years it has nurtured world-class regional research and development. The topics of the accepted papers range from novel applications of vision systems, pattern recognition, machine learning, feature extraction, segmentation, 3D vision, to medical and biomedical image analysis. The papers originate from all the Scandinavian countries and several other European countries...

  15. IFDOTMETER: A New Software Application for Automated Immunofluorescence Analysis.

    Science.gov (United States)

    Rodríguez-Arribas, Mario; Pizarro-Estrella, Elisa; Gómez-Sánchez, Rubén; Yakhine-Diop, S M S; Gragera-Hidalgo, Antonio; Cristo, Alejandro; Bravo-San Pedro, Jose M; González-Polo, Rosa A; Fuentes, José M

    2016-04-01

    Most laboratories interested in autophagy use different imaging software for managing and analyzing heterogeneous parameters in immunofluorescence experiments (e.g., LC3-puncta quantification and determination of the number and size of lysosomes). One solution would be software that works on a user's laptop or workstation that can access all image settings and provide quick and easy-to-use analysis of data. Thus, we have designed and implemented an application called IFDOTMETER, which can run on all major operating systems because it has been programmed using JAVA (Sun Microsystems). Briefly, IFDOTMETER software has been created to quantify a variety of biological hallmarks, including mitochondrial morphology and nuclear condensation. The program interface is intuitive and user-friendly, making it useful for users not familiar with computer handling. By setting previously defined parameters, the software can automatically analyze a large number of images without the supervision of the researcher. Once analysis is complete, the results are stored in a spreadsheet. Using software for high-throughput cell image analysis offers researchers the possibility of performing comprehensive and precise analysis of a high number of images in an automated manner, making this routine task easier.
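
    The unattended batch workflow described (apply previously defined parameters to many images, then store the results in a spreadsheet) can be approximated generically as below. This Python sketch is only a stand-in for the IFDOTMETER Java implementation; the file pattern, threshold rule and measured quantities are placeholders.

```python
# Generic sketch of unattended batch analysis with results written to a
# spreadsheet-style CSV; a stand-in for the workflow described, not the
# IFDOTMETER (Java) code. The file pattern and measurements are placeholders.
import csv
import glob
from skimage import io, filters, measure

rows = [("file", "n_objects", "mean_area_px")]
for path in sorted(glob.glob("images/*.tif")):
    img = io.imread(path, as_gray=True)
    mask = img > filters.threshold_otsu(img)          # fixed, pre-defined rule
    props = measure.regionprops(measure.label(mask))
    areas = [p.area for p in props]
    rows.append((path, len(areas), sum(areas) / len(areas) if areas else 0.0))

with open("results.csv", "w", newline="") as fh:
    csv.writer(fh).writerows(rows)
```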

  16. Next-Generation Bioacoustic Analysis Software

    Science.gov (United States)

    2015-09-30

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. ...estimates are in one dimension (bearing), two (X-Y position), or three (X-Y-Z position), analysis software is necessary. Marine mammal acoustic data is

  17. Development of image-processing software for automatic segmentation of brain tumors in MR images

    Directory of Open Access Journals (Sweden)

    C Vijayakumar

    2011-01-01

    Full Text Available Most of the commercially available software for brain tumor segmentation has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have developed an image-analysis software package called 'Prometheus,' which performs neural system-based segmentation operations on MR images using pre-trained information. The software also has the capability to improve its segmentation performance by using the training module of the neural system. The aim of this article is to present the design and modules of this software. The segmentation module of Prometheus can be used primarily for image analysis in MR images. Prometheus was validated against manual segmentation by a radiologist and its mean sensitivity and specificity were found to be 85.71±4.89% and 93.2±2.87%, respectively. Similarly, the mean segmentation accuracy and mean correspondence ratio were found to be 92.35±3.37% and 0.78±0.046, respectively.
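
    Validation figures such as the sensitivity and specificity reported for Prometheus can be computed by comparing the automatic segmentation mask with the radiologist's manual mask. A minimal sketch, assuming two binary masks of identical shape:

```python
# Minimal sketch: sensitivity and specificity of an automatic segmentation
# mask against a manual (radiologist) mask of the same image.
import numpy as np

def sensitivity_specificity(auto_mask, manual_mask):
    auto, manual = auto_mask.astype(bool), manual_mask.astype(bool)
    tp = np.sum(auto & manual)       # tumour voxels found by both
    tn = np.sum(~auto & ~manual)     # background agreed by both
    fp = np.sum(auto & ~manual)
    fn = np.sum(~auto & manual)
    return tp / (tp + fn), tn / (tn + fp)
```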

  18. An assessment of the diagnostic criteria for sessile serrated adenoma/polyps: SSA/Ps using image processing software analysis for Ki67 immunohistochemistry

    Directory of Open Access Journals (Sweden)

    Fujimori Yukari

    2012-05-01

    Full Text Available Abstract. Background: Serrated polyps belong to a heterogeneous group of lesions that are generally characterized morphologically. This type of lesion is thought to be the precursor of sporadic carcinomas with microsatellite instability, and probably also the precursor for CpG island-methylated microsatellite-stable carcinomas. For practical purposes, according to the 2010 WHO classification, the diagnostic criteria for sessile serrated adenomas/polyps (SSA/Ps) were established by the research project “Potential of Cancerization of Colorectal Serrated Lesions” led by the Japanese Society for Cancer of the Colon and Rectum. The aim of this study was to evaluate the validity of the morphologic characteristics established in Japan by using immunohistochemical staining for Ki-67. Methods: To calculate the target cells, 2 contiguous crypts which could be detected from the bottom of the crypt to the surface of the colorectal epithelium were selected. To validate the proliferative activity, we evaluated the percentage and the asymmetrical staining pattern of Ki67-positive cells in each individual crypt. To examine the immunoreactivity of Ki67, computer-assisted cytometrical analysis was performed. Results: SSA/Ps had a higher proliferative activity as compared to hyperplastic polyps (HPs) based on the difference in incidence of Ki67-positive cells, and the former also exhibited a significantly higher asymmetric distribution of these cells as compared to HPs, even in lesions with a diameter Conclusion: We conclude that assessment of the pathological findings of SSA/Ps, including crypt dilation, irregularly branching crypts, and horizontally arranged basal crypts (inverted T- and/or L-shaped crypts) is appropriate to show a significantly higher proliferative activity as compared to HPs. Further, the use of two-dimensional image analysis software is an objective and reproducible method for this type of histological examination. Virtual slides: The virtual

  19. GRACAT, Software for grounding and collision analysis

    DEFF Research Database (Denmark)

    Friis-Hansen, Peter; Simonsen, Bo Cerup

    2002-01-01

    From 1998 to 2001 an integrated software package for grounding and collision analysis was developed at the Technical University of Denmark within the ISESO project at the cost of six man years (0.75M US$). The software provides a toolbox for a multitude of analyses related to collision and grounding ... route where the result is the probability density functions for the cost of oil outflow in a given area per year for the two vessels. In this paper we describe the basic modelling principles and the capabilities of the software package. The software package can be downloaded for research purposes from...

  20. Software Package for Bio-Signal Analysis

    Science.gov (United States)

    2002-10-15

    We have developed a Matlab-based software package for bio-signal analysis. The software is based on a modular design and can thus be easily adapted to fit the analysis of various kinds of time-variant or event-related bio-signals. Currently, analysis programs for event-related potentials (ERP), heart rate variability (HRV), galvanic skin responses (GSR) and quantitative EEG (qEEG) are implemented. A tool for time-varying spectral analysis of bio

  1. Software architecture analysis of usability

    NARCIS (Netherlands)

    Folmer, E; van Gurp, J; Bosch, J; Bastide, R; Palanque, P; Roth, J

    2005-01-01

    Studies of software engineering projects show that a large number of usability-related change requests are made after deployment. Fixing usability problems during the later stages of development often proves to be costly, since many of the necessary changes require changes to the system that can

  2. Software Development for Ring Imaging Detector

    Science.gov (United States)

    Torisky, Benjamin

    2016-03-01

    Jefferson Lab (JLab) is performing a large-scale upgrade of its Continuous Electron Beam Accelerator Facility (CEBAF) to a 12 GeV beam. The Large Acceptance Spectrometer (CLAS12) in Hall B is being upgraded and a new Ring Imaging Cherenkov (RICH) detector is being developed to provide better kaon-pion separation throughout the 3 to 12 GeV range. With this addition, when the electron beam hits the target, the resulting pions, kaons, and other particles will pass through a wall of translucent aerogel tiles and create Cherenkov radiation. This light can then be accurately detected by a large array of Multi-Anode PhotoMultiplier Tubes (MA-PMTs). I am presenting an update on my work on the implementation of Java-based reconstruction programs for the RICH in the CLAS12 main analysis package.

  3. STAR: Software Toolkit for Analysis Research

    Energy Technology Data Exchange (ETDEWEB)

    Doak, J.E.; Prommel, J.M.; Whiteson, R.; Hoffbauer, B.L.; Thomas, T.R. [Los Alamos National Lab., NM (United States); Helman, P. [New Mexico Univ., Albuquerque, NM (United States). Dept. of Computer Science

    1993-08-01

    Analyzing vast quantities of data from diverse information sources is an increasingly important element of nonproliferation and arms control analysis. Much of the work in this area has used human analysts to assimilate, integrate, and interpret complex information gathered from various sources. With the advent of fast computers, we now have the capability to automate this process, thereby shifting this burden away from humans. In addition, there now exist huge data storage capabilities which have made it possible to formulate large integrated databases comprising many terabytes of information spanning a variety of subjects. We are currently designing a Software Toolkit for Analysis Research (STAR) to address these issues. The goal of STAR is to produce a research tool that facilitates the development and interchange of algorithms for locating phenomena of interest to nonproliferation and arms control experts. One major component deals with the preparation of information. The ability to manage and effectively transform raw data into a meaningful form is a prerequisite for analysis by any methodology. The relevant information to be analyzed can be unstructured text, structured data, signals, or images. Text can be numerical and/or character, stored in raw data files, databases, streams of bytes, or compressed into bits, in formats ranging from fixed, to character-delimited, to a count followed by content. The data can be analyzed in real-time or batch mode. Once the data are preprocessed, different analysis techniques can be applied. Some are built using expert knowledge. Others are trained using data collected over a period of time. Currently, we are considering three classes of analyzers for use in our software toolkit: (1) traditional machine learning techniques, (2) purely statistical systems, and (3) expert systems.

  4. Numerical methods in software and analysis

    CERN Document Server

    Rice, John R

    1992-01-01

    Numerical Methods, Software, and Analysis, Second Edition introduces science and engineering students to the methods, tools, and ideas of numerical computation. Introductory courses in numerical methods face a fundamental problem: there is too little time to learn too much. This text solves that problem by using high-quality mathematical software. In fact, the objective of the text is to present scientific problem solving using standard mathematical software. This book discusses numerous programs and software packages focusing on the IMSL library (including the PROTRAN system) and ACM Algorithms.

  5. Automatic Image Registration Using Free and Open Source Software

    Science.gov (United States)

    Giri Babu, D.; Raja Shekhar, S. S.; Chandrasekar, K.; Sesha Sai, M. V. R.; Diwakar, P. G.; Dadhwal, V. K.

    2014-11-01

    Image registration is the most critical operation in remote sensing applications to enable location-based referencing and analysis of earth features. This is the first step for any process involving identification, time series analysis or change detection using a large set of imagery over a region. Most of the reliable procedures involve time-consuming and laborious manual methods of finding the corresponding matching features of the input image with respect to the reference. Also the process, as it involves human interaction, does not converge with multiple operations at different times. Automated procedures rely on accurately determining the matching locations or points from both the images under comparison, and such procedures are robust and consistent over time. Different algorithms are available to achieve this, based on pattern recognition, feature-based detection, similarity techniques etc. In the present study and implementation, correlation-based methods have been used, with an improvement through a newly developed technique for identifying and pruning false match points. Free and Open Source Software (FOSS) have been used to develop the methodology to reach a wider audience, without any dependency on COTS (commercial off-the-shelf) software. The standard deviation from the foci of the ellipse of correlated points is a statistical means of ensuring the best match of the points of interest, based on both intensity values and location correspondence. The methodology is developed and standardised by enhancements to meet the registration requirements of remote sensing imagery. Results have shown a performance improvement, nearly matching the visual techniques, and have been implemented in remote sensing operational projects. The main advantage of the proposed methodology is its viability in a production-mode environment. This paper also shows that the visualization capabilities of MapWinGIS, GDAL's image handling abilities and OSSIM's correlation facility can be efficiently
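
    A minimal sketch of the correlation-based matching step is given below; skimage's match_template provides the normalized cross-correlation, while the pruning shown here is a simple median-residual outlier test standing in for the ellipse-foci/standard-deviation criterion described in the record.

```python
# Sketch of correlation-based point matching followed by pruning of false
# matches. match_template is scikit-image's normalized cross-correlation; the
# median-residual pruning below is only a stand-in for the ellipse-foci
# standard-deviation criterion described in the record.
import numpy as np
from skimage.feature import match_template

def match_point(reference, template):
    """Return (row, col) of the best correlation peak for one template chip."""
    response = match_template(reference, template)
    return np.unravel_index(np.argmax(response), response.shape)

def prune_matches(src_pts, dst_pts, max_dev=3.0):
    """Reject point pairs whose displacement deviates strongly from the median."""
    src, dst = np.asarray(src_pts, float), np.asarray(dst_pts, float)
    disp = dst - src
    residual = np.linalg.norm(disp - np.median(disp, axis=0), axis=1)
    keep = residual < max_dev * (np.median(residual) + 1e-9)
    return src[keep], dst[keep]
```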

  6. Development of Software for Analyzing Breakage Cutting Tools Based on Image Processing

    Institute of Scientific and Technical Information of China (English)

    赵彦玲; 刘献礼; 王鹏; 王波; 王红运

    2004-01-01

    As present-day digital microsystems do not provide specialized microscopes that can detect cutting-tool breakage, analysis software has been developed using VC++. A module for edge testing and image segmentation is designed specifically for cutting tools. Known calibration relations and given postulates are used in scale measurements. Practical operation shows that the software can perform accurate detection.

  7. An Automated Solar Synoptic Analysis Software System

    Science.gov (United States)

    Hong, S.; Lee, S.; Oh, S.; Kim, J.; Lee, J.; Kim, Y.; Lee, J.; Moon, Y.; Lee, D.

    2012-12-01

    We have developed an automated software system for identifying solar active regions, filament channels, and coronal holes, which are three major solar sources of space weather. Space weather forecasters at the NOAA Space Weather Prediction Center produce solar synoptic drawings on a daily basis to predict solar activities, i.e., solar flares, filament eruptions, high speed solar wind streams, and co-rotating interaction regions, as well as their possible effects on the Earth. In an attempt to emulate this process in a fully automated and consistent way, we developed a software system named ASSA (Automated Solar Synoptic Analysis). When identifying solar active regions, ASSA uses high-resolution SDO HMI intensitygram and magnetogram data as inputs and provides the McIntosh classification and Mt. Wilson magnetic classification of each active region by applying appropriate image processing techniques such as thresholding, morphology extraction, and region growing. At the same time, it also extracts morphological and physical properties of active regions in a quantitative way for the short-term prediction of flares and CMEs. When identifying filament channels and coronal holes, images of the global H-alpha network and SDO AIA 193 are used for morphological identification, together with SDO HMI magnetograms for quantitative verification. The output results of ASSA are routinely checked and validated against NOAA's daily SRS (Solar Region Summary) and UCOHO (URSIgram code for coronal hole information). A couple of preliminary scientific results will be presented using available output results. ASSA will be deployed at the Korean Space Weather Center and serve its customers in an operational status by the end of 2012.
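
    The active-region identification chain described (thresholding, morphology extraction, region growing/labelling) can be sketched as follows; the Otsu threshold, structuring element and minimum area are illustrative assumptions, not ASSA's operational settings.

```python
# Hedged sketch of the thresholding + morphology + labelling chain described
# for active-region detection; threshold choice, structuring element and
# minimum area are illustrative, not ASSA's operational values.
import numpy as np
from skimage import filters, measure, morphology

def detect_active_regions(magnetogram, min_area_px=200):
    field = np.abs(magnetogram)                        # strong |B| marks active regions
    mask = field > filters.threshold_otsu(field)       # global threshold
    mask = morphology.binary_closing(mask, morphology.disk(3))
    mask = morphology.remove_small_objects(mask, min_size=min_area_px)
    return measure.regionprops(measure.label(mask))    # area, centroid, bbox, ...
```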

  8. Objective detection of apoptosis in rat renal tissue sections using light microscopy and free image analysis software with subsequent machine learning: Detection of apoptosis in renal tissue.

    Science.gov (United States)

    Macedo, Nayana Damiani; Buzin, Aline Rodrigues; de Araujo, Isabela Bastos Binotti Abreu; Nogueira, Breno Valentim; de Andrade, Tadeu Uggere; Endringer, Denise Coutinho; Lenz, Dominik

    2017-02-01

    The current study proposes an automated machine learning approach for the quantification of cells in cell death pathways according to DNA fragmentation. A total of 17 images of kidney histological slide samples from male Wistar rats were used. The slides were photographed using an Axio Zeiss Vert.A1 microscope with a 40x objective lens coupled with an Axio Cam MRC Zeiss camera and Zen 2012 software. The images were analyzed using CellProfiler (version 2.1.1) and CellProfiler Analyst open-source software. Out of the 10,378 objects, 4970 (47.9%) were identified as TUNEL positive, and 5408 (52.1%) were identified as TUNEL negative. On average, the sensitivity and specificity values of the machine learning approach were 0.80 and 0.77, respectively. Image cytometry provides a quantitative analytical alternative to the more traditional qualitative methods more commonly used in studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
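
    The machine-learning step described (training a classifier on per-object measurements so that nuclei can be scored as TUNEL positive or negative) can be illustrated generically as below; the feature matrix and labels are placeholders, and CellProfiler Analyst uses its own feature set and classifiers.

```python
# Generic sketch of the machine-learning step: per-object measurements are
# used to train a classifier separating TUNEL-positive from TUNEL-negative
# nuclei. Features and labels below are random placeholders; CellProfiler
# Analyst uses its own feature set and classifiers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
features = rng.random((500, 3))                  # e.g. intensity, area, texture
labels = (features[:, 0] > 0.5).astype(int)      # placeholder annotations

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, features, labels, cv=5).mean())
```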

  9. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    Science.gov (United States)

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

    Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite its benefits, such software is not without limitations, and transcription errors have been widely reported. We evaluated the frequency and nature of non-clinical transcription errors using VR dictation software, through a retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time were collected. 67 (17.72%) reports contained ≥1 error, with 7 (1.85%) containing 'significant' and 9 (2.38%) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22%) classified as 'insignificant', 7 (7.78%) as 'significant' and 9 (10%) as 'very significant'. 68 (75.56%) errors were 'spelling and grammar', 20 (22.22%) 'missense' and 2 (2.22%) 'nonsense'. 'Punctuation' error was the most common sub-type, accounting for 27 errors (30%). Complex imaging modalities had higher error rates per report and per sentence: computed tomography contained 0.040 errors per sentence compared to plain film with 0.030. Longer reports had a higher error rate, with reports of >25 sentences containing an average of 1.23 errors per report compared to 0.09 for reports of 0-5 sentences. These findings highlight the limitations of VR dictation software. While most errors were deemed insignificant, there were occurrences of errors with the potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates, and this should be taken into account by the reporting radiologist.

  10. Software safety analysis practice in installation phase

    Energy Technology Data Exchange (ETDEWEB)

    Huang, H. W.; Chen, M. H.; Shyu, S. S., E-mail: hwhwang@iner.gov.t [Institute of Nuclear Energy Research, No. 1000 Wenhua Road, Chiaan Village, Longtan Township, 32546 Taoyuan County, Taiwan (China)

    2010-10-15

    This work performed a software safety analysis in the installation phase of the Lungmen nuclear power plant in Taiwan, in cooperation between the Institute of Nuclear Energy Research and TPC. The US Nuclear Regulatory Commission requires licensees to perform software safety analysis and software verification and validation in each phase of the software development life cycle, per Branch Technical Position 7-14. In this work, 37 safety grade digital instrumentation and control systems were analyzed by failure mode and effects analysis, which is suggested by IEEE standard 7-4.3.2-2003. During the installation phase, skew tests for the safety grade network and point-to-point tests were performed. The failure mode and effects analysis showed that all the single failure modes can be resolved by redundant means. Most of the common mode failures can be resolved by operator manual actions. (Author)

  11. CADDIS Volume 4. Data Analysis: Download Software

    Science.gov (United States)

    Overview of the data analysis tools available for download on CADDIS. Provides instructions for downloading and installing CADStat, access to a Microsoft Excel macro for computing SSDs, and a brief overview of command-line use of R, a statistical software package.

  12. Verification and Validation of a Fingerprint Image Registration Software

    Directory of Open Access Journals (Sweden)

    Liu Yan

    2006-01-01

    Full Text Available The need for reliable identification and authentication is driving the increased use of biometric devices and systems. Verification and validation techniques applicable to these systems are rather immature and ad hoc, yet the consequences of the wide deployment of biometric systems could be significant. In this paper we discuss an approach towards validation and reliability estimation of a fingerprint registration software. Our validation approach includes the following three steps: (a) the validation of the source code with respect to the system requirements specification; (b) the validation of the optimization algorithm, which is at the core of the registration system; and (c) the automation of testing. Since the optimization algorithm is heuristic in nature, mathematical analysis and test results are used to estimate the reliability and perform failure analysis of the image registration module.

  13. Perspective of regulation on software safety analysis: experience of software safety analysis activity of Lungmen project

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Chuan Chung [Taiwan Power Company, Taipei TW (China)

    2005-11-15

    Software Safety Analysis is one of the essential tasks that must be performed in the design of digital computer software used in the safety systems of a nuclear power station. While there is more experience with Software Verification and Validation and Configuration Management in the software industry, Software Safety Analysis (SSA) is a newer task. What is the scope of SSA? What should be done in SSA? Various SSA-related codes and standards were reviewed, and from their evolution it was concluded that Abnormal Conditions and Events should be treated as part of SSA activities and that SSA could be one of the activities in Software V and V. An SSA case study on NUMAC, as a previously developed system, was presented, and a new SSA method - 'Hazard Analysis and Defense in Depth for Software Safety Analysis' - was introduced to enhance confidence in the SSA activities of the Lungmen project.

  14. Software for Acquiring Image Data for PIV

    Science.gov (United States)

    Wernet, Mark P.; Cheung, H. M.; Kressler, Brian

    2003-01-01

    PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter/timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, where a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter/timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface, through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program, which processes the images of particles into the velocity field in the illuminated plane.
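
    Downstream of acquisition, the frame-straddled image pairs are processed (by PIVPROC, not PIVACQ) into a velocity field by cross-correlating small interrogation windows from the two exposures. A generic FFT-based sketch of that displacement estimate, not NASA's implementation:

```python
# Generic FFT-based sketch (not the PIVPROC implementation) of estimating the
# particle displacement between the two frame-straddled exposures for one
# interrogation window: the shift is read from the cross-correlation peak.
import numpy as np

def window_displacement(win_a, win_b):
    """Return the integer (dy, dx) shift of win_b relative to win_a."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    shape = np.array(corr.shape)
    shift = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    shift[shift > shape // 2] -= shape[shift > shape // 2]   # unwrap circular shift
    return int(shift[0]), int(shift[1])

# Toy check with a known shift of a random particle pattern.
rng = np.random.default_rng(1)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))
print(window_displacement(frame_a, frame_b))     # expected (3, -2)
```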

  15. Bosque: integrated phylogenetic analysis software.

    Science.gov (United States)

    Ramírez-Flandes, Salvador; Ulloa, Osvaldo

    2008-11-01

    Phylogenetic analyses today involve dealing with computer files in different formats and often several computer programs. Although some widely used applications have integrated important functionalities for such analyses, they still work with local resources only: input/output files (users have to manage them) and local computing (users sometimes have to leave their programs running on their desktop computers for extended periods of time). To address these problems we have developed 'Bosque', a multi-platform client-server software package that performs standard phylogenetic tasks either locally or remotely on servers and integrates the results in a local relational database. Bosque performs sequence alignments and graphical visualization and editing of trees, thus providing a powerful environment that integrates all the steps of phylogenetic analyses. http://bosque.udec.cl

  16. Color Image Quality in Presentation Software

    Directory of Open Access Journals (Sweden)

    María S. Millán

    2008-11-01

    Full Text Available The color image quality of presentation programs is evaluated and measured using S-CIELAB and CIEDE2000 color difference formulae. A color digital image in its original format is compared with the same image already imported by the program and introduced as a part of a slide. Two widely used presentation programs—Microsoft PowerPoint 2004 for Mac and Apple's Keynote 3.0.2—are evaluated in this work.
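
    The per-pixel comparison described (original image versus the same image after import into the presentation program) can be reproduced with scikit-image's CIEDE2000 implementation; the sketch below assumes two RGB images of identical size, and the file names are placeholders.

```python
# Sketch of the per-pixel colour-difference measurement: convert the original
# image and its imported counterpart to CIELAB and evaluate CIEDE2000.
# File names are placeholders; both images must have the same dimensions.
from skimage import io
from skimage.color import deltaE_ciede2000, rgb2lab

original = io.imread("original.png")[..., :3] / 255.0
imported = io.imread("imported_slide.png")[..., :3] / 255.0

delta_e = deltaE_ciede2000(rgb2lab(original), rgb2lab(imported))
print("mean dE00:", delta_e.mean(), "max dE00:", delta_e.max())
```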

  17. Basic image analysis and manipulation in ImageJ.

    Science.gov (United States)

    Hartig, Sean M

    2013-01-01

    Image analysis methods have been developed to provide quantitative assessment of microscopy data. In this unit, basic aspects of image analysis are outlined, including software installation, data import, image processing functions, and analytical tools that can be used to extract information from microscopy data using ImageJ. Step-by-step protocols for analyzing objects in a fluorescence image and extracting information from two-color tissue images collected by bright-field microscopy are included.

  18. Control and analysis software for a laser scanning microdensitometer

    Indian Academy of Sciences (India)

    H R Bundel; C P Navathe; P A Naik; P D Gupta

    2006-02-01

    A PC-based control software and data acquisition system has been developed for an existing commercial microdensitometer (Biomed make, model No. SL-2D/1D UV/VIS) to facilitate scanning and analysis of X-ray films. The software is developed in LabVIEW and covers operation of the microdensitometer in 1D and 2D scans and analysis of spatial or spectral data on X-ray films, such as optical density, intensity and wavelength. It provides a user-friendly Graphical User Interface (GUI) to analyse the scanned data and to store the analysed data/image in popular formats, such as data in Excel and images in JPEG. It also has an on-line calibration facility using standard optical density tablets. The control software and data acquisition system is simple, inexpensive and versatile.
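
    The core measurement performed by such a system is the conversion of transmitted intensity to optical density, OD = log10(I0/I), followed by calibration against a certified step tablet. A small Python sketch of that arithmetic (the actual tool described is written in LabVIEW, and the tablet values below are made up):

```python
# Sketch of the two basic operations: converting a transmission scan to
# optical density, OD = log10(I0/I), and calibrating raw instrument readings
# against a certified step tablet. The tablet values below are made up and
# the described tool itself is implemented in LabVIEW, not Python.
import numpy as np

def optical_density(scan, incident):
    """OD = log10(I0 / I), with the scan clipped to avoid division by zero."""
    return np.log10(incident / np.clip(scan, 1e-6, None))

def calibrate(raw_readings, tablet_od):
    """Least-squares linear map from instrument readings to certified OD."""
    slope, offset = np.polyfit(raw_readings, tablet_od, 1)
    return lambda r: slope * np.asarray(r) + offset

to_od = calibrate([0.10, 0.35, 0.62, 0.88], [0.0, 1.0, 2.0, 3.0])
print(to_od(0.5))
```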

  19. Differences in granular materials for analogue modelling: Insights from repeated compression tests analyzed with X-ray Computed Tomography and image analysis software

    Science.gov (United States)

    Klinkmueller, M.; Schreurs, G.

    2009-12-01

    Six different granular materials for analogue modelling have been investigated using a sandbox with a compressional set-up and X-ray computed tomography (XRCT). The evolving structures were evaluated with image analysis software. The sandbox has one movable sidewall that is driven by a computer-controlled servomotor at 20 cm/h. A 12 cm wide and 20 cm long sheet of hard cardboard was placed on the base of the sandbox and attached to the moving sidewall, creating a velocity discontinuity. The whole sandbox was covered on the inside with Alkor foil to reduce sidewall friction. Computed tomography was used to scan the whole volume in 3 mm increments of shortening until 15 mm of maximum deformation was reached. The second approach was a scanning procedure to a maximum deformation of 80 mm, in 2 mm increments of shortening for the first 10 mm and in 5 mm increments for the last 70 mm. The short deformation scans were repeated three times to investigate reproducibility. The long deformation scans were performed twice. The physical properties of the materials (table 1) have been described in a previous material benchmark. Four natural quartz sands and two artificial granular materials, corundum brown sand and glass beads, have been used. The two artificial materials were used for this experimental series as examples of very angular and very rounded sands, in contrast to the sub-rounded to angular natural quartz sands. The short deformation experiments show in part large differences in the thrust angles of both the front thrust and back-thrust, in the timing of thrust initiation, and in the degree of undulation of the thrusts. The coarse-grained sands show smooth, weakly undulating thrusts that are only affected by sidewall friction, whereas the thrusts in fine-grained sands undulate significantly and partly divide and merge in an anastomosing fashion. The coarse-grained sand thrusts are more clearly visualized by XRCT, which indicates a wider shear zone where the material dilates. Furthermore, the

  20. The RUMBA software: tools for neuroimaging data analysis.

    Science.gov (United States)

    Bly, Benjamin Martin; Rebbechi, Donovan; Hanson, Stephen Jose; Grasso, Giorgio

    2004-01-01

    The enormous scale and complexity of data sets in functional neuroimaging makes it crucial to have well-designed and flexible software for image processing, modeling, and statistical analysis. At present, researchers must choose between general purpose scientific computing environments (e.g., Splus and Matlab), and specialized human brain mapping packages that implement particular analysis strategies (e.g., AFNI, SPM, VoxBo, FSL or FIASCO). For the vast majority of users in Human Brain Mapping and Cognitive Neuroscience, general purpose computing environments provide an insufficient framework for a complex data-analysis regime. On the other hand, the operational particulars of more specialized neuroimaging analysis packages are difficult or impossible to modify and provide little transparency or flexibility to the user for approaches other than massively multiple comparisons based on inferential statistics derived from linear models. In order to address these problems, we have developed open-source software that allows a wide array of data analysis procedures. The RUMBA software includes programming tools that simplify the development of novel methods, and accommodates data in several standard image formats. A scripting interface, along with programming libraries, defines a number of useful analytic procedures, and provides an interface to data analysis procedures. The software also supports a graphical functional programming environment for implementing data analysis streams based on modular functional components. With these features, the RUMBA software provides researchers programmability, reusability, modular analysis tools, novel data analysis streams, and an analysis environment in which multiple approaches can be contrasted and compared. The RUMBA software retains the flexibility of general scientific computing environments while adding a framework in which both experts and novices can develop and adapt neuroimaging-specific analyses.

  1. gr-MRI: A software package for magnetic resonance imaging using software defined radios

    Science.gov (United States)

    Hasselwander, Christopher J.; Cao, Zhipeng; Grissom, William A.

    2016-09-01

    The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events were also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.
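
    The frequency-swept pulse fidelity test mentioned above can be illustrated by generating a linear chirp such as an SDR baseband would transmit; only the 500 kHz sweep width comes from the record, while the sample rate and pulse duration below are assumptions.

```python
# Illustration of a linear frequency-swept (chirp) waveform of the kind whose
# fidelity was compared between the SDR and the commercial spectrometer; only
# the 500 kHz sweep width comes from the record, the rest is assumed.
import numpy as np
from scipy.signal import chirp

fs = 2_000_000                        # sample rate in Hz (assumed)
t = np.arange(0, 0.005, 1 / fs)       # 5 ms pulse duration (assumed)
waveform = chirp(t, f0=0.0, t1=t[-1], f1=500e3, method="linear")
```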

  2. An Analysis of Software Design Methodologies

    Science.gov (United States)

    1979-08-01

    Technical Report: "An Analysis of Software Design Methodologies" by H. Rudy Ramsey, Michael E. Atwood, and Gary D. Campbell, Science Applications, Incorporated; submitted by Edgar M. Johnson, Chief, Human Factors. The report reflects views expressed by members of the Integrated Software Research and Development Working Group (ISRAD).

  3. Colonoscopy tutorial software made with a cadaver's sectioned images.

    Science.gov (United States)

    Chung, Beom Sun; Chung, Min Suk; Park, Hyung Seon; Shin, Byeong-Seok; Kwon, Koojoo

    2016-11-01

    Novice doctors may watch tutorial videos in training for actual or computed tomographic (CT) colonoscopy. The conventional learning videos can be complemented by virtual colonoscopy software made with a cadaver's sectioned images (SIs). The objective of this study was to assist colonoscopy trainees with the new interactive software. Submucosal segmentation on the SIs was carried out through the whole length of the large intestine. With the SIs and segmented images, a three-dimensional model was reconstructed. Six-hundred seventy-one proximal colonoscopic views (conventional views) and corresponding distal colonoscopic views (simulating the retroflexion of a colonoscope) were produced. In addition to navigation views showing the current location of the colonoscope tip and its course, supplementary description views were elaborated. The four corresponding views were put into convenient browsing software that can be downloaded free from the homepage (anatomy.co.kr). The SI colonoscopy software, with its realistic images and supportive tools, was made available to anybody. Users could readily notice the position and direction of the virtual colonoscope tip and recognize meaningful structures in colonoscopic views. The software is expected to be an auxiliary learning tool to improve technique and related knowledge in actual and CT colonoscopies. Hopefully, the software will be updated using raw images from the Visible Korean project. Copyright © 2016 Elsevier GmbH. All rights reserved.

  4. The Analysis of the Patterns of Radiation-Induced DNA Damage Foci by a Stochastic Monte Carlo Model of DNA Double Strand Breaks Induction by Heavy Ions and Image Segmentation Software

    Science.gov (United States)

    Ponomarev, Artem; Cucinotta, F.

    2011-01-01

    To create a generalized mechanistic model of DNA damage in human cells that will generate analytical and image data corresponding to experimentally observed DNA damage foci and will help to improve the experimental foci yields by simulating spatial foci patterns and resolving problems with quantitative image analysis. Material and Methods: The analysis of patterns of RIFs (radiation-induced foci) produced by low- and high-LET (linear energy transfer) radiation was conducted by using a Monte Carlo model that combines the heavy ion track structure with characteristics of the human genome on the level of chromosomes. The foci patterns were also simulated in the maximum projection plane for flat nuclei. Some data analysis was done with the help of image segmentation software that identifies individual classes of RIFs and colocalized RIFs, which is of importance to some experimental assays that assign DNA damage a dual phosphorescent signal. Results: The model predicts the spatial and genomic distributions of DNA DSBs (double strand breaks) and associated RIFs in a human cell nucleus for a particular dose of either low- or high-LET radiation. We used the model to do analyses for different irradiation scenarios. In the beam-parallel-to-the-disk-of-a-flattened-nucleus scenario we found that the foci appeared to be merged due to their high density, while, in the perpendicular-beam scenario, the foci appeared as one bright spot per hit. The statistics and spatial distribution of regions of densely arranged foci, termed DNA foci chains, were predicted numerically using this model. Another analysis was done to evaluate the number of ion hits per nucleus, which were visible from streaks of closely located foci. In another analysis, our image segmentation software determined foci yields directly from images with single-class or colocalized foci. Conclusions: We showed that DSB clustering needs to be taken into account to determine the true DNA damage foci yield, which helps to

  5. Image-Processing Software For A Hypercube Computer

    Science.gov (United States)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    The Concurrent Image Processing Executive (CIPE) is a software system intended for developing and running image-processing application programs in a concurrent computing environment. Designed to shield the programmer from the complexities of concurrent-system architecture, it provides an interactive image-processing environment for the end user. CIPE utilizes the architectural characteristics of a particular concurrent system to maximize efficiency while preserving architectural independence from the user and programmer. CIPE runs on a Mark-IIIfp 8-node hypercube computer and an associated SUN-4 host computer.

  6. NIH Image to ImageJ: 25 years of image analysis.

    Science.gov (United States)

    Schneider, Caroline A; Rasband, Wayne S; Eliceiri, Kevin W

    2012-07-01

    For the past 25 years NIH Image and ImageJ software have been pioneers as open tools for the analysis of scientific images. We discuss the origins, challenges and solutions of these two programs, and how their history can serve to advise and inform other software projects.

  7. Finding Fidelity: Advancing Audiovisual Analysis Using Software

    Directory of Open Access Journals (Sweden)

    Christina Silver

    2011-01-01

    Full Text Available Specialised software for the analysis of qualitative data has been in development for the last thirty years. However, its adoption is far from widespread. Additionally, qualitative research itself is evolving, from projects that utilised small, text-based data sets to those which involve the collection, management, and analysis of enormous quantities of multimedia data or data of multiple types. Software has struggled to keep up with these changes for several reasons: 1. meeting the needs of researchers is complicated by the lack of documentation and critique by those who are implementing software use and 2. audiovisual data is particularly challenging due to the multidimensionality of data and substantial variety in research project aims and output requirements. This article discusses the history of Computer Assisted Qualitative Data AnalysiS (CAQDAS as it relates to audiovisual data, and introduces the term "fidelity" as a conceptual mechanism to match software tools and researcher needs. Currently available software tools are examined and areas found lacking are highlighted. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs1101372

  8. Software abstractions logic, language, and analysis

    CERN Document Server

    Jackson, Daniel

    2011-01-01

    In Software Abstractions Daniel Jackson introduces an approach to software design that draws on traditional formal methods but exploits automated tools to find flaws as early as possible. This approach--which Jackson calls "lightweight formal methods" or "agile modeling"--takes from formal specification the idea of a precise and expressive notation based on a tiny core of simple and robust concepts but replaces conventional analysis based on theorem proving with a fully automated analysis that gives designers immediate feedback. Jackson has developed Alloy, a language that captures the essence of software abstractions simply and succinctly, using a minimal toolkit of mathematical notions. This revised edition updates the text, examples, and appendixes to be fully compatible with the latest version of Alloy (Alloy 4). The designer can use automated analysis not only to correct errors but also to make models that are more precise and elegant. This approach, Jackson says, can rescue designers from "the tarpit of...

  9. Power and performance software analysis and optimization

    CERN Document Server

    Kukunas, Jim

    2015-01-01

    Power and Performance: Software Analysis and Optimization is a guide to solving performance problems in modern Linux systems. Power-efficient chips are no help if the software those chips run on is inefficient. Starting with the necessary architectural background as a foundation, the book demonstrates the proper usage of performance analysis tools in order to pinpoint the cause of performance problems, and includes best practices for handling common performance issues those tools identify. Provides expert perspective from a key member of Intel's optimization team on how processors and memory

  10. Computer Software Configuration Item-Specific Flight Software Image Transfer Script Generator

    Science.gov (United States)

    Bolen, Kenny; Greenlaw, Ronald

    2010-01-01

    A K-shell UNIX script enables the International Space Station (ISS) Flight Control Team (FCT) operators in NASA's Mission Control Center (MCC) in Houston to transfer an entire or partial computer software configuration item (CSCI) from a flight software compact disk (CD) to the onboard Portable Computer System (PCS). The tool is designed to read the content stored on a flight software CD and generate individual CSCI transfer scripts that are capable of transferring the flight software content in a given subdirectory on the CD to the scratch directory on the PCS. The flight control team can then transfer the flight software from the PCS scratch directory to the Electronically Erasable Programmable Read Only Memory (EEPROM) of an ISS Multiplexer/Demultiplexer (MDM) via the Indirect File Transfer capability. The individual CSCI scripts and the CSCI Specific Flight Software Image Transfer Script Generator (CFITSG), when executed a second time, will remove all components from their original execution. The tool will identify errors in the transfer process and create logs of the transferred software for the purposes of configuration management.

  11. Software for computerised analysis of cardiotocographic traces.

    Science.gov (United States)

    Romano, M; Bifulco, P; Ruffo, M; Improta, G; Clemente, F; Cesarelli, M

    2016-02-01

    Despite the widespread use of cardiotocography in foetal monitoring, the evaluation of foetal status suffers from considerable inter- and intra-observer variability. In order to overcome the main limitations of visual cardiotocographic assessment, computerised methods to analyse cardiotocographic recordings have recently been developed. In this study, new software for the automated analysis of foetal heart rate is presented. It provides an automatic procedure for measuring the most relevant parameters derivable from cardiotocographic traces. Simulated and real cardiotocographic traces were analysed to test software reliability. In the artificial traces, we simulated a set number of events (accelerations, decelerations and contractions) to be recognised. In the case of real signals, results of the computerised analysis were compared with the visual assessment performed by 18 expert clinicians, and three performance indexes were computed to gain information about the performance of the proposed software. In preliminary tests on artificial signals the software performed satisfactorily, detecting all simulated events and thus fully matching the requirements. The performance indexes computed against the obstetricians' evaluations were, on the contrary, less satisfactory: sensitivity was 93%, positive predictive value 82% and accuracy 77%. Very probably this arises from the high variability of trace annotation among clinicians.
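
    The three performance indexes quoted above are standard ratios of correct and incorrect detections; the sketch below shows how they are computed, using made-up event counts rather than the study's data:

        # Worked example of the three performance indexes; the event counts are
        # illustrative only and are not taken from the study.
        tp, fp, fn, tn = 93, 20, 7, 30   # assumed true/false detections and misses

        sensitivity = tp / (tp + fn)                 # fraction of true events detected
        ppv = tp / (tp + fp)                         # fraction of detections that are true
        accuracy = (tp + tn) / (tp + fp + fn + tn)   # overall agreement

        print(f"sensitivity={sensitivity:.2f}, PPV={ppv:.2f}, accuracy={accuracy:.2f}")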

  12. Intraprocedural Dataflow Analysis for Software Product Lines

    DEFF Research Database (Denmark)

    Brabrand, Claus; Ribeiro, Márcio; Tolêdo, Társis;

    2013-01-01

    Software product lines (SPLs) developed using annotative approaches such as conditional compilation come with an inherent risk of constructing erroneous products. For this reason, it is essential to be able to analyze such SPLs. However, as dataflow analysis techniques are not able to deal with SPLs...

  13. JPL multipolarization workstation - Hardware, software and examples of data analysis

    Science.gov (United States)

    Burnette, Fred; Norikane, Lynne

    1987-01-01

    A low-cost stand-alone interactive image processing workstation has been developed for operations on multipolarization JPL aircraft SAR data, as well as data from future spaceborne imaging radars. A recently developed data compression technique is used to reduce the data volume to 10 Mbytes, for a typical data set, so that interactive analysis may be accomplished in a timely and efficient manner on a supermicrocomputer. In addition to presenting a hardware description of the work station, attention is given to the software that has been developed. Three illustrative examples of data analysis are presented.

  14. Digital radiography: optimization of image quality and dose using multi-frequency software

    Energy Technology Data Exchange (ETDEWEB)

    Precht, H. [University College Lillebelt, Conrad Research Center, Odense (Denmark); Gerke, O. [Odense University Hospital, Department of Nuclear Medicine, Odense (Denmark); University of Southern Denmark, Research Unit of Health Economics, Odense (Denmark); Rosendahl, K. [Haukeland University Hospital, Section of Pediatric Radiology, Bergen (Norway); University of Bergen, Institute of Surgical Sciences, Bergen (Norway); Tingberg, A. [Skaane University Hospital, Lund University (Sweden); Medical Radiation Physics, Department of Clinical Sciences, Malmoe (Sweden); Waaler, D. [Gjoevik University College, Gjoevik (Norway)

    2012-09-15

    New developments in processing of digital radiographs (DR), including multi-frequency processing (MFP), allow optimization of image quality and radiation dose. This is particularly promising in children as they are believed to be more sensitive to ionizing radiation than adults. To examine whether the use of MFP software reduces the radiation dose without compromising quality at DR of the femur in 5-year-old-equivalent anthropomorphic and technical phantoms. A total of 110 images of an anthropomorphic phantom were imaged on a DR system (Canon DR with CXDI-50 C detector and MLT[S] software) and analyzed by three pediatric radiologists using Visual Grading Analysis. In addition, 3,500 images taken of a technical contrast-detail phantom (CDRAD 2.0) provide an objective image-quality assessment. Optimal image-quality was maintained at a dose reduction of 61% with MLT(S) optimized images. Even for images of diagnostic quality, MLT(S) provided a dose reduction of 88% as compared to the reference image. Software impact on image quality was found significant for dose (mAs), dynamic range dark region and frequency band. By optimizing image processing parameters, a significant dose reduction is possible without significant loss of image quality. (orig.)

  15. Combining speech recognition software with Digital Imaging and Communications in Medicine (DICOM) workstation software on a Microsoft Windows platform.

    Science.gov (United States)

    Ernst, R; Carpenter, W; Torres, W; Wheeler, S

    2001-06-01

    This presentation describes our experience in combining speech recognition software, clinical review software, and other software products on a single computer. Different processor speeds, random access memory (RAM), and computer costs were evaluated. We found that combining continuous speech recognition software with Digital Imaging and Communications in Medicine (DICOM) workstation software on the same platform is feasible and can lead to substantial savings of hardware cost. This combination optimizes use of limited workspace and can improve radiology workflow.

  16. Combining speech recognition software with digital imaging and communications in medicine (DICOM) workstation software on a microsoft windows platform

    OpenAIRE

    Ernst, Randy; Carpenter, Walter; Torres, William; Wheeler, Scott

    2001-01-01

    This presentation describes our experience in combining speech recognition software, clinical review software, and other software products on a single computer. Different processor speeds, random access memory (RAM), and computer costs were evaluated. We found that combining continuous speech recognition software with Digital Imaging and Communications in Medicine (DICOM) workstation software on the same platform is feasible and can lead to substantial savings of hardware cost. This combinati...

  17. A bio-inspired software for segmenting digital images.

    OpenAIRE

    Díaz Pernil, Daniel; Molina Abril, Helena; Real Jurado, Pedro; Gutiérrez Naranjo, Miguel Ángel

    2010-01-01

    Segmentation in computer vision refers to the process of partitioning a digital image into multiple segments (sets of pixels). It has several features which make it suitable for techniques inspired by nature. It can be parallelized, locally solved and the input data can be easily encoded by bio-inspired representations. In this paper, we present a new software for performing a segmentation of 2D digital images based on Membrane Computing techniques.

  18. Software for analysis of visual meteor data

    Science.gov (United States)

    Veljković, Kristina; Ivanović, Ilija

    2014-02-01

    In this paper, we will present new software for analysis of IMO data collected from visual observations. The software consists of a package of functions written in the statistical programming language R, as well as a Java application which uses these functions in a user friendly environment. R code contains various filters for selection of data, methods for calculation of Zenithal Hourly Rate (ZHR), solar longitude, population index and graphical representation of ZHR and distribution of observed magnitudes. The Java application allows everyone to use these functions without any knowledge of R. Both R code and the Java application are open source and free with user manuals and examples provided.
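
    The central quantity such software computes is the Zenithal Hourly Rate (ZHR); the sketch below applies the commonly used correction for effective observing time, field obstruction, limiting magnitude and radiant elevation, which is assumed here and need not be the exact form implemented in this package:

        # Sketch of the standard ZHR correction; whether this package applies exactly
        # this form is an assumption.
        import math

        def zhr(n_meteors, t_eff_hours, lim_mag, radiant_alt_deg, r=2.5, field_obstruction=0.0):
            """Zenithal Hourly Rate from one observing interval.

            r                  population index of the shower
            field_obstruction  fraction of the field of view that was blocked (0..1)
            """
            f = 1.0 / (1.0 - field_obstruction)             # obstruction correction
            c_lm = r ** (6.5 - lim_mag)                     # limiting-magnitude correction
            c_alt = 1.0 / math.sin(math.radians(radiant_alt_deg))
            return n_meteors * f * c_lm * c_alt / t_eff_hours

        # e.g. 24 meteors in 1.5 h, limiting magnitude 5.9, radiant 40 degrees high
        print(round(zhr(24, 1.5, 5.9, 40.0), 1))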

  19. Advanced Software Methods for Physics Analysis

    Science.gov (United States)

    Lista, L.

    2006-01-01

    Unprecedented data analysis complexity is experienced in modern High Energy Physics experiments. The complexity arises from the growing size of recorded data samples, the large number of data analyses performed by different users in each single experiment, and the level of complexity of each single analysis. For this reason, the requirements on software for data analysis impose a very high level of reliability. We present two concrete examples: the former from BaBar experience with the migration to a new Analysis Model with the definition of a new model for the Event Data Store, the latter about a toolkit for multivariate statistical and parametric Monte Carlo analysis developed using generic programming.

  20. Techniques and software architectures for medical visualisation and image processing

    NARCIS (Netherlands)

    Botha, C.P.

    2005-01-01

    This thesis presents a flexible software platform for medical visualisation and image processing, a technique for the segmentation of the shoulder skeleton from CT data and three techniques that make contributions to the field of direct volume rendering. Our primary goal was to investigate the use

  1. Software windows for the display of CT-images

    Energy Technology Data Exchange (ETDEWEB)

    Gell, G.; Sager, W.D.; Toelly, E.

    1983-03-01

    Software windows are a flexible and general method for defining arbitrary functions for the mapping of Hounsfield numbers of CT scans onto the grey levels of the display image. The method, which is illustrated with the aid of a few examples, has been implemented on an EMI viewing console.

  2. Improving Software Systems By Flow Control Analysis

    Directory of Open Access Journals (Sweden)

    Piotr Poznanski

    2012-01-01

    Full Text Available Using agile methods to implement a system that must meet mission-critical requirements can be a real challenge. Changing a system built from dozens or even hundreds of specialized devices with embedded software requires the cooperation of a large group of engineers. This article presents a solution that supports parallel work by groups of system analysts and software developers. Applying formal rules to requirements written in natural language enables formal analysis of artifacts that bridge software and system requirements. The formalism and textual form of the requirements allowed the automatic generation of a message flow graph for the (sub)system, called the "big-picture model". Flow diagram analysis helped to avoid a large number of defects whose repair cost could, in extreme cases, undermine the legitimacy of agile methods in projects of this scale. Retrospectively, a reduction of technical debt was observed. Continuous analysis of the "big-picture model" improves control of the quality parameters of the software architecture. The article also tries to explain why a commercial platform based on the UML modeling language may not be sufficient in projects of this complexity.

  3. Predictive images of postoperative levator resection outcome using image processing software

    Directory of Open Access Journals (Sweden)

    Mawatari Y

    2016-09-01

    Full Text Available Purpose: This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Methods: Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of the levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results: Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Conclusion: Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. Keywords: levator resection, blepharoptosis, image processing, Adobe Photoshop®

  4. Automating Risk Analysis of Software Design Models

    Directory of Open Access Journals (Sweden)

    Maxime Frydman

    2014-01-01

    Full Text Available The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.

  5. 76 FR 60939 - Metal Fatigue Analysis Performed by Computer Software

    Science.gov (United States)

    2011-09-30

    ...COMMISSION: Metal Fatigue Analysis Performed by Computer Software. AGENCY: Nuclear Regulatory Commission. ... applicants' analyses and methodologies using the computer software package WESTEMS™ to demonstrate... by Computer Software. Addressees: All holders of, and applicants for, a power reactor operating...

  6. Spinal imaging and image analysis

    CERN Document Server

    Yao, Jianhua

    2015-01-01

    This book is instrumental to building a bridge between scientists and clinicians in the field of spine imaging by introducing state-of-the-art computational methods in the context of clinical applications.  Spine imaging via computed tomography, magnetic resonance imaging, and other radiologic imaging modalities, is essential for noninvasively visualizing and assessing spinal pathology. Computational methods support and enhance the physician’s ability to utilize these imaging techniques for diagnosis, non-invasive treatment, and intervention in clinical practice. Chapters cover a broad range of topics encompassing radiological imaging modalities, clinical imaging applications for common spine diseases, image processing, computer-aided diagnosis, quantitative analysis, data reconstruction and visualization, statistical modeling, image-guided spine intervention, and robotic surgery. This volume serves a broad audience as  contributions were written by both clinicians and researchers, which reflects the inte...

  7. Complexity of software trustworthiness and its dynamical statistical analysis methods

    Institute of Scientific and Technical Information of China (English)

    ZHENG ZhiMing; MA ShiLong; LI Wei; JIANG Xin; WEI Wei; MA LiLi; TANG ShaoTing

    2009-01-01

    Developing trusted software has become an important trend and a natural choice in the development of software technology and applications. At present, methods for the measurement and assessment of software trustworthiness cannot guarantee safe and reliable operation of software systems completely and effectively. Based on the study of dynamical systems, this paper interprets the characteristics of the behaviors of software systems and the basic scientific problems of software trustworthiness complexity, analyzes the characteristics of the complexity of software trustworthiness, and proposes to study software trustworthiness measurement in terms of the complexity of software trustworthiness. Using dynamical statistical analysis methods, the paper advances an invariant-measure based assessment method of software trustworthiness by statistical indices, and thereby provides a dynamical criterion for the untrustworthiness of software systems. By an example, the feasibility of the proposed dynamical statistical analysis method in software trustworthiness measurement is demonstrated using numerical simulations and theoretical analysis.

  8. [The Development of a Normal Database of Elderly People for Use with the Statistical Analysis Software Easy Z-score Imaging System with 99mTc-ECD SPECT].

    Science.gov (United States)

    Nemoto, Hirobumi; Iwasaka, Akemi; Hashimoto, Shingo; Hara, Tadashi; Nemoto, Kiyotaka; Asada, Takashi

    2015-11-01

    We created a new normal database of elderly individuals (Tsukuba-NDB) for the easy Z-score Imaging System (eZIS), a statistical imaging analysis software package, comprising 44 healthy individuals aged 75 to 89 years. The Tsukuba-NDB was compared with a conventional NDB (Musashi-NDB) using Statistical Parametric Mapping (SPM8), eZIS analysis, mean images, standard deviation (SD) images, SD values, and specific volume of interest analysis (SVA). Furthermore, the association of the mean cerebral blood flow (mCBF) with various clinical indicators was statistically analyzed. A group comparison using SPM8 indicated that the t-value of the Tsukuba-NDB was lower in the frontoparietal region but tended to be higher in the bilateral temporal lobes and the base of the brain than that of the Musashi-NDB. The results of eZIS analysis with the Musashi-NDB in 48 subjects indicated mild decreases in cerebral blood flow in the bilateral frontoparietal lobes of 9 subjects, the precuneus and posterior cingulate gyrus of 5 subjects, the lingual gyrus of 4 subjects, and near the left frontal gyrus, temporal lobe, superior temporal gyrus, and lenticular nucleus of 12 subjects. The mean images showed no visual differences between the two NDBs. The SD image intensities and SD values were lower in the Tsukuba-NDB. Clinical case comparison and visual evaluation demonstrated that the sites of decreased blood flow were more clearly indicated by the Tsukuba-NDB. Furthermore, mCBF was 40.87 ± 0.52 ml/100 g/min (mean ± SE), and tended to decrease with age. The tendency was stronger in male subjects than in female subjects. Among the various clinical indicators, the platelet count was statistically significantly correlated with CBF. In conclusion, our results suggest that the Tsukuba-NDB, which is incorporated into the statistical imaging analysis software eZIS, is sensitive to changes in cerebral blood flow caused by cranial nerve disease, dementia and cerebrovascular accidents, and can provide precise
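
    The eZIS approach compares a patient's scan with the normal database voxel by voxel through a Z-score; the sketch below shows that comparison in NumPy using the generic Z-score definition, with the anatomical standardisation and smoothing steps of the real pipeline omitted and all array sizes invented:

        # Voxel-wise Z-score map against a normal database (NDB); spatial
        # normalisation and smoothing of the real pipeline are omitted.
        import numpy as np

        def z_score_map(patient, ndb_mean, ndb_sd, eps=1e-6):
            # positive Z = blood flow lower in the patient than in the NDB
            return (ndb_mean - patient) / np.maximum(ndb_sd, eps)

        # toy volumes: 44 normal subjects, one patient
        ndb = np.random.normal(40.0, 4.0, size=(44, 16, 16, 16))
        patient = np.random.normal(38.0, 4.0, size=(16, 16, 16))
        z = z_score_map(patient, ndb.mean(axis=0), ndb.std(axis=0, ddof=1))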

  9. Software defined multi-spectral imaging for Arctic sensor networks

    Science.gov (United States)

    Siewert, Sam; Angoth, Vivek; Krishnamurthy, Ramnarayan; Mani, Karthikeyan; Mock, Kenrick; Singh, Surjith B.; Srivistava, Saurav; Wagner, Chris; Claus, Ryan; Vis, Matthew Demi

    2016-05-01

    Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry Riddle Aeronautical University working with University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop

  10. Quantification and Size of Muscle Fibers based on different Image Analysis Software

    Institute of Scientific and Technical Information of China (English)

    王兵; 周越

    2012-01-01

    Objective: To compare the advantages and disadvantages of different software packages in quantitatively counting different fiber types, so as to find a convenient and accurate counting method. Method: After unloading-induced muscle atrophy was produced in rats with a tail-suspension model, metachromatic dye-ATPase staining was applied to frozen sections of the gastrocnemius to distinguish fiber types. The software packages Image J and Image-Pro Plus were used to count the numbers of the different fiber types, and Image-Pro Plus and Photoshop were used to calculate the areas of the different fiber types. Result: Image J precisely counted the numbers of the different fiber types, while Image-Pro Plus additionally counted some noise points, so its results were not correct. Between Image-Pro Plus and Photoshop there were no significant differences in the calculated areas of type Ⅰ fibers, but there were significant differences (p<0.05) for type Ⅱa and Ⅱb fibers. Conclusion: Image J can conveniently and accurately count the numbers of different fiber types. For good-quality pictures, Image-Pro Plus can quickly select and calculate the areas; but for poor ones (especially where type Ⅱa and Ⅱb are difficult to distinguish), Photoshop can select and calculate precisely.
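
    The count-and-measure step performed in Image J above can also be reproduced with thresholding and connected-component labelling; the sketch below uses scikit-image on a synthetic image, not the study's stained sections:

        # Counting stained fibres and measuring their areas by thresholding and
        # connected-component labelling (scikit-image); the image here is synthetic.
        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.measure import label, regionprops

        image = np.zeros((200, 200))
        image[20:60, 20:60] = 1.0       # fake "fibre" 1
        image[100:150, 120:180] = 0.8   # fake "fibre" 2
        image += 0.05 * np.random.rand(*image.shape)

        mask = image > threshold_otsu(image)   # separate fibres from background
        regions = regionprops(label(mask))     # connected components

        count = len(regions)
        areas = [r.area for r in regions]      # areas in pixels
        print(count, areas)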

  11. Analysis and design for architecture-based software

    Institute of Scientific and Technical Information of China (English)

    Jia Xiaolin; He Jian; Qin Zheng; Wang Xianghua

    2005-01-01

    The technologies of software architecture are introduced, and the software analysis-and-design process is divided into requirement analysis, software architecture design and system design. Using these technologies, a model of architecture-centric software analysis and design process(ACSADP) is proposed. Meanwhile, with regard to the completeness, consistency and correctness between the software requirements and design results, the theories of function and process control are applied to ACSADP. Finally, a model of integrated development environment (IDE) for ACSADP is proposed. It can be demonstrated by the practice that the model of ACSADP can aid developer to manage software process effectively and improve the quality of software analysis and design.

  12. A Comparative Analysis of Institutional Repository Software

    OpenAIRE

    2010-01-01

    This proposal outlines the design of a comparative analysis of the four institutional repository software packages that were represented at the 4th International Conference on Open Repositories held in 2009 in Atlanta, Georgia: EPrints, DSpace, Fedora and Zentity (The 4th International Conference on Open Repositories website, https://or09.library.gatech.edu). The study includes 23 qualitative and quantitative measures taken from default installations of the four repositories on a benchmark ma...

  13. LTP data analysis software and infrastructure

    Science.gov (United States)

    Nofrarias Serra, Miquel

    The LTP (LISA Technology Package) is the core part of the LISA Pathfinder mission. The main goal of the mission is to study the sources of any disturbances that perturb the motion of the freely-falling test masses from their geodesic trajectories as well as to test various technologies needed for LISA. The LTP experiment is designed as a sequence of experimental runs in which the performance of the instrument is studied and characterised under different operating conditions. In order to best optimise subsequent experimental runs, each run must be promptly analysed to ensure that the following ones make best use of the available knowledge of the instrument. In order to do this, a robust and flexible data analysis software package is required. The software developed for the LTP Data Analysis is a comprehensive data analysis tool based on MATLAB. The environment provides an object-oriented approach to data analysis which allows the user to design and run data analysis pipelines, either graphically or via scripts. The output objects of the analyses contain a full history of the processing that took place; this history tree can be inspected and used to rebuild the objects. This poster introduces the analysis environment and the concepts that have gone in to its design.

  14. Retinal imaging and image analysis

    NARCIS (Netherlands)

    Abramoff, M.D.; Garvin, Mona K.; Sonka, Milan

    2010-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindne

  15. JEM-X science analysis software

    DEFF Research Database (Denmark)

    Westergaard, Niels Jørgen Stenfeldt; Kretschmar, P.; Oxborrow, Carol Anne

    2003-01-01

    The science analysis of the data from JEM-X on INTEGRAL is performed through a number of levels including corrections, good time selection, imaging and source finding, spectrum and light-curve extraction. These levels consist of individual executables and the running of the complete analysis...

  16. Imaging of jaw with dental CT software program: Normal Anatomy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myong Gon; Seo, Kwang Hee; Jung, Hak Young; Sung, Nak Kwan; Chung, Duk Soo; Kim, Ok Dong [School of Medicine, Taegu Catholic University, Taegu (Korea, Republic of); Lee, Young Hwan [Taegu Armed Forces General Hospital, Taegu (Korea, Republic of)

    1994-07-15

    Dental CT software program can provide reformatted cross-sectional and panoramic images that cannot be obtained with conventional axial and direct coronal CT scan. The purpose of this study is to describe the method of the technique and to identify the precise anatomy of jaw. We evaluated 13 mandibles and 7 maxillae of 15 subjects without bony disease who were being considered for endosseous dental implants. Reformatted images obtained by the use of bone algorithm performed on GE HiSpeed Advantage CT scanner were retrospectively reviewed for detailed anatomy of jaw. Anatomy related to neurovascular bundle(mandibular foramen, inferior alveolar canal, mental foramen, canal for incisive artery, nutrient canal, lingual foramen and mylohyoid groove), muscular insertion(mylohyoid line, superior and inferior genial tubercle and digastric fossa) and other anatomy(submandibular fossa, sublingual fossa, contour of alveolar process, oblique line, retromolar fossa, temporal crest and retromolar triangle) were well delineated in mandible. In maxilla, anatomy related to neurovascular bundle(greater palatine foramen and groove, nasopalatine canal and incisive foramen) and other anatomy(alveolar process, maxillary sinus and nasal fossa) were also well delineated. Reformatted images using dental CT software program provided excellent delineation of the jaw anatomy. Therefore, dental CT software program can play an important role in the preoperative assessment of mandible and maxilla for dental implants and other surgical conditions.

  17. Parallel-Processing Software for Creating Mosaic Images

    Science.gov (United States)

    Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric

    2008-01-01

    A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
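
    The slice-per-CPU strategy described above can be sketched with Python's multiprocessing module; warp_pixel below is a trivial stand-in for the program's camera model and pixel correlation, which are not specified here:

        # Sketch of dividing a mosaic into horizontal slices and filling each slice in
        # a separate worker process; warp_pixel is a stand-in for the real camera model.
        import numpy as np
        from multiprocessing import Pool

        HEIGHT, WIDTH, N_SLICES = 512, 2048, 8

        def warp_pixel(row, col):
            # placeholder for "find the corresponding point in one of the source images"
            return (row + col) % 256

        def render_slice(slice_index):
            rows = range(slice_index * HEIGHT // N_SLICES,
                         (slice_index + 1) * HEIGHT // N_SLICES)
            block = np.empty((len(rows), WIDTH), dtype=np.uint8)
            for i, r in enumerate(rows):
                for c in range(WIDTH):
                    block[i, c] = warp_pixel(r, c)
            return slice_index, block

        if __name__ == "__main__":
            mosaic = np.empty((HEIGHT, WIDTH), dtype=np.uint8)
            with Pool(N_SLICES) as pool:
                for idx, block in pool.map(render_slice, range(N_SLICES)):
                    start = idx * HEIGHT // N_SLICES
                    mosaic[start:start + block.shape[0], :] = block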

  18. Uncertainty in the use of MAMA software to measure particle morphological parameters from SEM images

    Energy Technology Data Exchange (ETDEWEB)

    Schwartz, Daniel S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tandon, Lav [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-06-05

    The MAMA software package developed at LANL is designed to make morphological measurements on a wide variety of digital images of objects. At LANL, we have focused on using MAMA to measure scanning electron microscope (SEM) images of particles, as this is a critical part of our forensic analysis of interdicted radiologic materials. In order to successfully use MAMA to make such measurements, we must understand the level of uncertainty involved in the process, so that we can rigorously support our quantitative conclusions.

  19. Specdata: Automated Analysis Software for Broadband Spectra

    Science.gov (United States)

    Oliveira, Jasmine N.; Martin-Drumel, Marie-Aline; McCarthy, Michael C.

    2017-06-01

    With the advancement of chirped-pulse techniques, broadband rotational spectra with a few tens to several hundred GHz of spectral coverage are now routinely recorded. When studying multi-component mixtures that might result, for example, with the use of an electrical discharge, lines of new chemical species are often obscured by those of known compounds, and analysis can be laborious. To address this issue, we have developed SPECdata, an open source, interactive tool which is designed to simplify and greatly accelerate the spectral analysis and discovery. Our software tool combines both automated and manual components that free the user from computation, while giving him/her considerable flexibility to assign, manipulate, interpret and export their analysis. The automated - and key - component of the new software is a database query system that rapidly assigns transitions of known species in an experimental spectrum. For each experiment, the software identifies spectral features, and subsequently assigns them to known molecules within an in-house database (Pickett .cat files, list of frequencies...), or those catalogued in Splatalogue (using automatic on-line queries). With suggested assignments, the control is then handed over to the user who can choose to accept, decline or add additional species. Data visualization, statistical information, and interactive widgets assist the user in making decisions about their data. SPECdata has several other useful features intended to improve the user experience. Exporting a full report of the analysis, or a peak file in which assigned lines are removed are among several options. A user may also save their progress to continue at another time. Additional features of SPECdata help the user to maintain and expand their database for future use. A user-friendly interface allows one to search, upload, edit or update catalog or experiment entries.
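
    The core of the automated assignment step, matching measured line frequencies to catalogued transitions within a tolerance, can be sketched as follows; the data layout and the 0.1 MHz tolerance are assumptions rather than SPECdata internals:

        # Assigning experimental peaks to catalogued transitions by nearest frequency
        # within a tolerance; the catalogue entries and tolerance are assumed.
        def assign_peaks(peak_freqs_mhz, catalog, tol_mhz=0.1):
            """catalog: list of (frequency_MHz, species) tuples, e.g. parsed .cat files."""
            assignments, unassigned = [], []
            for peak in peak_freqs_mhz:
                best = min(catalog, key=lambda entry: abs(entry[0] - peak))
                if abs(best[0] - peak) <= tol_mhz:
                    assignments.append((peak, best[1], best[0]))
                else:
                    unassigned.append(peak)   # candidate line of a new species
            return assignments, unassigned

        catalog = [(115271.2018, "CO"), (88631.602, "HCN"), (97980.953, "CS")]
        peaks = [88631.58, 97981.02, 101234.5]
        print(assign_peaks(peaks, catalog))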

  20. Intraprocedural Dataflow Analysis for Software Product Lines

    DEFF Research Database (Denmark)

    Brabrand, Claus; Ribeiro, Márcio; Tolêdo, Társis

    2013-01-01

    Software product lines (SPLs) developed using annotative approaches such as conditional compilation come with an inherent risk of constructing erroneous products. For this reason, it is essential to be able to analyze such SPLs. However, as dataflow analysis techniques are not able to deal with SPLs, developers must generate and analyze all valid products individually, which is expensive for non-trivial SPLs. In this paper, we demonstrate how to take any standard intraprocedural dataflow analysis and automatically turn it into a feature-sensitive dataflow analysis in five different ways, where the last is a combination of the other four. All analyses are capable of analyzing all valid products of an SPL without having to generate all of them explicitly. We have implemented all analyses using SOOT’s intraprocedural dataflow analysis framework and experimentally evaluated four of them according to their performance...

  1. Software components for medical image visualization and surgical planning

    Science.gov (United States)

    Starreveld, Yves P.; Gobbi, David G.; Finnis, Kirk; Peters, Terence M.

    2001-05-01

    Purpose: The development of new applications in medical image visualization and surgical planning requires the completion of many common tasks such as image reading and re-sampling, segmentation, volume rendering, and surface display. Intra-operative use requires an interface to a tracking system and image registration, and the application requires basic, easy to understand user interface components. Rapid changes in computer and end-application hardware, as well as in operating systems and network environments make it desirable to have a hardware and operating system as an independent collection of reusable software components that can be assembled rapidly to prototype new applications. Methods: Using the OpenGL based Visualization Toolkit as a base, we have developed a set of components that implement the above mentioned tasks. The components are written in both C++ and Python, but all are accessible from Python, a byte compiled scripting language. The components have been used on the Red Hat Linux, Silicon Graphics Iris, Microsoft Windows, and Apple OS X platforms. Rigorous object-oriented software design methods have been applied to ensure hardware independence and a standard application programming interface (API). There are components to acquire, display, and register images from MRI, MRA, CT, Computed Rotational Angiography (CRA), Digital Subtraction Angiography (DSA), 2D and 3D ultrasound, video and physiological recordings. Interfaces to various tracking systems for intra-operative use have also been implemented. Results: The described components have been implemented and tested. To date they have been used to create image manipulation and viewing tools, a deep brain functional atlas, a 3D ultrasound acquisition and display platform, a prototype minimally invasive robotic coronary artery bypass graft planning system, a tracked neuro-endoscope guidance system and a frame-based stereotaxy neurosurgery planning tool. The frame-based stereotaxy module has been

  2. TOM software toolbox: acquisition and analysis for electron tomography.

    Science.gov (United States)

    Nickell, Stephan; Förster, Friedrich; Linaroudis, Alexandros; Net, William Del; Beck, Florian; Hegerl, Reiner; Baumeister, Wolfgang; Plitzko, Jürgen M

    2005-03-01

    Automated data acquisition procedures have changed the perspectives of electron tomography (ET) in a profound manner. Elaborate data acquisition schemes with autotuning functions minimize exposure of the specimen to the electron beam and sophisticated image analysis routines retrieve a maximum of information from noisy data sets. "TOM software toolbox" integrates established algorithms and new concepts tailored to the special needs of low dose ET. It provides a user-friendly unified platform for all processing steps: acquisition, alignment, reconstruction, and analysis. Designed as a collection of computational procedures it is a complete software solution within a highly flexible framework. TOM represents a new way of working with the electron microscope and can serve as the basis for future high-throughput applications.

  3. JMorph: Software for performing rapid morphometric measurements on digital images of fossil assemblages

    Science.gov (United States)

    Lelièvre, Peter G.; Grey, Melissa

    2017-08-01

    Quantitative morphometric analyses of form are widely used in palaeontology, especially for taxonomic and evolutionary research. These analyses can involve several measurements performed on hundreds or even thousands of samples. Performing measurements of size and shape on large assemblages of macro- or microfossil samples is generally infeasible or impossible with traditional instruments such as vernier calipers. Instead, digital image processing software is required to perform measurements via suitable digital images of samples. Many software packages exist for morphometric analyses but there is not much available for the integral stage of data collection, particularly for the measurement of the outlines of samples. Some software exists to automatically detect the outline of a fossil sample from a digital image. However, automatic outline detection methods may perform inadequately when samples have incomplete outlines or images contain poor contrast between the sample and staging background. Hence, a manual digitization approach may be the only option. We are not aware of any software packages that are designed specifically for efficient digital measurement of fossil assemblages with numerous samples, especially for the purposes of manual outline analysis. Throughout several previous studies, we have developed a new software tool, JMorph, that is custom-built for that task. JMorph provides the means to perform many different types of measurements, which we describe in this manuscript. We focus on JMorph's ability to rapidly and accurately digitize the outlines of fossils. JMorph is freely available from the authors.
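
    Once an outline has been manually digitised as a closed polygon, standard size measures follow directly from the vertex coordinates; the sketch below computes area (shoelace formula), perimeter and centroid, as a generic illustration rather than JMorph's own code:

        # Size measures from a manually digitised closed outline (x, y vertex lists);
        # this reproduces the textbook formulas, not JMorph's implementation.
        import numpy as np

        def outline_measures(x, y):
            x, y = np.asarray(x, float), np.asarray(y, float)
            xs, ys = np.roll(x, -1), np.roll(y, -1)     # next vertex (wraps around)
            cross = x * ys - xs * y
            signed_area = 0.5 * cross.sum()
            area = abs(signed_area)                     # shoelace formula
            perimeter = np.hypot(xs - x, ys - y).sum()
            cx = ((x + xs) * cross).sum() / (6.0 * signed_area)
            cy = ((y + ys) * cross).sum() / (6.0 * signed_area)
            return area, perimeter, (cx, cy)

        # unit square digitised counter-clockwise: area 1, perimeter 4, centroid (0.5, 0.5)
        print(outline_measures([0, 1, 1, 0], [0, 0, 1, 1]))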

  4. Flightspeed Integral Image Analysis Toolkit

    Science.gov (United States)

    Thompson, David R.

    2009-01-01

    The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses: ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image. This integral frame facilitates a wide range of fast image-processing functions. This toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational restraints. The software provides an order of magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in intelligent surveillance. It is also useful for object recognition by robots or other autonomous vehicles
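
    The integral image mentioned above caches cumulative sums so that the sum over any rectangle needs only four lookups; the NumPy sketch below shows the idea (the FIIAT implementation itself is in C and is not reproduced here):

        # Integral image: after one pass over the image, the sum of any rectangle
        # takes four array lookups. This mirrors the caching idea, not the FIIAT code.
        import numpy as np

        def integral_image(img):
            return img.cumsum(axis=0).cumsum(axis=1)

        def rect_sum(ii, r0, c0, r1, c1):
            """Sum of img[r0:r1+1, c0:c1+1] using the integral image ii."""
            total = ii[r1, c1]
            if r0 > 0:
                total -= ii[r0 - 1, c1]
            if c0 > 0:
                total -= ii[r1, c0 - 1]
            if r0 > 0 and c0 > 0:
                total += ii[r0 - 1, c0 - 1]
            return total

        img = np.arange(16, dtype=np.int64).reshape(4, 4)
        ii = integral_image(img)
        assert rect_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()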

  5. Applications of the ImageJ software in analysis of solid grains in a debris flow gully

    Institute of Scientific and Technical Information of China (English)

    赵岩; 郑娇玉; 郭鹏; 熊木齐; 崔志杰; 孟兴民

    2015-01-01

    The ImageJ image processing software was applied here for the first time to the analysis of debris flows: by recognizing the edges of solid grains, the particle size of blocks in the coarsened channel layer can be extracted rapidly and automatically. Taking the Gou Lin-ping debris flow gully in the Bai Long River basin as an example, the software was used to compute the particle size and circularity of solid grains in the channel, and the extraction results were verified. The results show that the extraction of particle size and circularity of solid grains works well, and the method can provide a new technique and support for the survey of basic debris flow data.
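
    The circularity reported by ImageJ for each grain is conventionally 4πA/P²; the sketch below evaluates that formula for two idealised grain shapes, as an illustration of the measure rather than of the study's procedure:

        # Circularity from area and perimeter, 4*pi*A/P^2; the two example grains
        # below are idealised shapes, not measurements from the gully.
        import math

        def circularity(area, perimeter):
            return 4.0 * math.pi * area / perimeter ** 2

        r = 10.0
        print(circularity(math.pi * r**2, 2 * math.pi * r))   # perfect circle -> 1.0
        print(circularity(10.0 * 40.0, 2 * (10.0 + 40.0)))    # elongated block -> ~0.50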

  6. Demineralization Depth Using QLF and a Novel Image Processing Software

    Directory of Open Access Journals (Sweden)

    Jun Wu

    2010-01-01

    Full Text Available Quantitative Light-Induced Fluorescence (QLF) has been widely used to detect tooth demineralization indicated by fluorescence loss with respect to surrounding sound enamel. The correlation between fluorescence loss and demineralization depth is not fully understood. The purpose of this project was to study this correlation to estimate demineralization depth. Extracted teeth were collected. Artificial caries-like lesions were created and imaged with QLF. Novel image processing software was developed to measure the largest percent of fluorescence loss in the region of interest. All teeth were then sectioned and imaged by polarized light microscopy. The largest depth of demineralization was measured by NIH ImageJ software. The statistical linear regression method was applied to analyze these data. The linear regression model was Y = 0.32X + 0.17, where X was the percent loss of fluorescence and Y was the depth of demineralization. The correlation coefficient was 0.9696. The two-tailed t-test statistic for the coefficient was 7.93 (P = .0014), and the F statistic for the entire model was 62.86 (P = .0013). The results indicated a statistically significant linear correlation between the percent loss of fluorescence and the depth of enamel demineralization.
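
    The reported relationship (Y = 0.32X + 0.17, correlation 0.9696) is an ordinary least-squares line; the sketch below shows how such a fit and correlation coefficient are obtained from paired measurements, using invented data rather than the study's:

        # Least-squares fit of demineralization depth (Y) against percent fluorescence
        # loss (X); the paired values below are made up for illustration.
        import numpy as np

        x = np.array([5.0, 10.0, 15.0, 20.0, 30.0, 40.0])   # % fluorescence loss
        y = np.array([1.8, 3.5, 4.9, 6.6, 9.7, 13.0])       # demineralization depth

        slope, intercept = np.polyfit(x, y, 1)
        r = np.corrcoef(x, y)[0, 1]
        print(f"Y = {slope:.2f}X + {intercept:.2f}, r = {r:.4f}")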

  7. Static analysis of software the abstract interpretation

    CERN Document Server

    Boulanger, Jean-Louis

    2013-01-01

    The existing literature currently available to students and researchers is very general, covering only the formal techniques of static analysis. This book presents real examples of the formal techniques called ""abstract interpretation"" currently being used in various industrial fields: railway, aeronautics, space, automotive, etc. The purpose of this book is to present students and researchers, in a single book, with the wealth of experience of people who are intrinsically involved in the realization and evaluation of software-based safety critical systems. As the authors are people curr

  8. Intraprocedural dataflow analysis for software product lines

    DEFF Research Database (Denmark)

    Brabrand, Claus; Ribeiro, Márcio; Tolêdo, Társis

    2013-01-01

    Software product lines (SPLs) developed using annotative approaches such as conditional compilation come with an inherent risk of constructing erroneous products. For this reason, it is essential to be able to analyze such SPLs. However, as dataflow analysis techniques are not able to deal with SPLs, developers must generate and analyze all valid products individually, which is expensive for non-trivial SPLs. In this paper, we demonstrate how to take any standard intraprocedural dataflow analysis and automatically turn it into a feature-sensitive dataflow analysis in five different ways, where the last is a combination of the other four. All analyses are capable of analyzing all valid products of an SPL without having to generate all of them explicitly. We have implemented all analyses using SOOT’s intraprocedural dataflow analysis framework and experimentally evaluated four of them according to their performance and memory characteristics on five qualitatively different SPLs. On our benchmarks, the combined analysis strategy is up to almost eight times faster than the brute-force approach.

  9. Construction of Auxiliary Diagnosis Software with an Embedded Bone Disease Image Analysis Module Based on MITK

    Institute of Scientific and Technical Information of China (English)

    徐超; 胡珊; 刘燕

    2011-01-01

    Taking bone disease imaging information as an example, the paper develops clinical multi-source information analysis and management software based on the Medical Imaging Toolkit (MITK), with an embedded medical image analysis module that helps physicians manage patient information, medical images, and physiological and biochemical data in an integrated way. The functional architecture, key algorithms, and experimental results of the system design and implementation are described.

  10. Automated digital image analysis of islet cell mass using Nikon's inverted eclipse Ti microscope and software to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.

    Science.gov (United States)

    Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie

    2015-01-01

    Reliable assessment of islet viability, mass, and purity must be met prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is by direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intraobserver variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (p < 0.001). Comparisons of the individual methods showed good correlations between mean values of IEQ number (r² = 0.91) and total islet number (r² = 0.88), which increased to r² = 0.93 when islet surface area was estimated comparatively with IEQ number. The ADIA method showed very high intraobserver reproducibility compared to the standard manual method (p < 0.001). However, islet purity was routinely estimated as significantly higher with the manual method versus the ADIA method (p < 0.001). The ADIA method also detected small islets between 10 and 50 µm in size. Automated digital image analysis utilizing the Nikon Instruments software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation. Implementation of this

  11. A PARAMETRIC ANALYSIS SOFTWARE FOR PERFUSION AND DIFFUSION MR IMAGING IN CANCER RESEARCH

    Institute of Scientific and Technical Information of China (English)

    杨保联; 黄天助

    2002-01-01

    A software package has been developed to process perfusion, relaxation time, and diffusion MRI data acquired in cancer research. The package was written on the MATLAB platform (Version 6.0). Parameter maps, such as permeability, apparent diffusion coefficient, and T1 relaxation time, are generated from the original MRI data, and analysis of local regions of interest is supported. The parametric analysis features include ROI analysis, contrast adjustment, statistical information generation, false-color display, and zoom-in display. Since it is written in MATLAB functions, the package can be used on almost all operating systems (Microsoft Windows, Unix, Mac OS, and Linux) and is easy to extend.
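
    As an illustration of one of the parameter maps mentioned above, the following sketch computes an apparent diffusion coefficient (ADC) map from two diffusion-weighted images using the generic mono-exponential formula; it is written in Python rather than the cited MATLAB package, and the arrays and b-value are synthetic placeholders.

        import numpy as np

        b = 1000.0                                  # s/mm^2, assumed diffusion weighting
        S0 = np.random.rand(64, 64) * 1000 + 100    # stand-in for the b = 0 image
        Sb = S0 * np.exp(-b * 0.001)                # stand-in for the diffusion-weighted image

        eps = 1e-6                                  # guard against division by zero
        adc_map = np.log((S0 + eps) / (Sb + eps)) / b   # ADC = ln(S0/Sb) / b, in mm^2/s
        print(adc_map.mean())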

  12. Vertical bone measurements from cone beam computed tomography images using different software packages

    Energy Technology Data Exchange (ETDEWEB)

    Vasconcelos, Taruska Ventorini; Neves, Frederico Sampaio; Moraes, Livia Almeida Bueno; Freitas, Deborah Queiroz, E-mail: tataventorini@hotmail.com [Universidade Estadual de Campinas (UNICAMP), Piracicaba, SP (Brazil). Faculdade de Odontologia

    2015-03-01

    This article aimed at comparing the accuracy of linear measurement tools of different commercial software packages. Eight fully edentulous dry mandibles were selected for this study. Incisor, canine, premolar, first molar and second molar regions were selected. Cone beam computed tomography (CBCT) images were obtained with i-CAT Next Generation. Linear bone measurements were performed by one observer on the cross-sectional images using three different software packages: XoranCat®, OnDemand3D® and KDIS3D®, all able to assess DICOM images. In addition, 25% of the sample was reevaluated for the purpose of reproducibility. The mandibles were sectioned to obtain the gold standard for each region. Intraclass coefficients (ICC) were calculated to examine the agreement between the two periods of evaluation; the one-way analysis of variance performed with the post-hoc Dunnett test was used to compare each of the software-derived measurements with the gold standard. The ICC values were excellent for all software packages. The least difference between the software-derived measurements and the gold standard was obtained with the OnDemand3D and KDIS3D (‑0.11 and ‑0.14 mm, respectively), and the greatest, with the XoranCAT (+0.25 mm). However, there was no statistical significant difference between the measurements obtained with the different software packages and the gold standard (p > 0.05). In conclusion, linear bone measurements were not influenced by the software package used to reconstruct the image from CBCT DICOM data. (author)
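
    A minimal sketch of the kind of comparison reported above (mean differences of software-derived measurements against a gold standard plus a one-way analysis of variance) is shown below in Python; the numbers are illustrative rather than the study's data, and the Dunnett post-hoc step is omitted.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        gold = np.array([10.2, 11.5, 9.8, 12.1, 10.9])             # physical (gold standard) measurements, mm
        xorancat = gold + rng.normal(0.25, 0.2, gold.size)         # hypothetical software readings
        ondemand3d = gold + rng.normal(-0.11, 0.2, gold.size)
        kdis3d = gold + rng.normal(-0.14, 0.2, gold.size)

        for name, m in [("XoranCat", xorancat), ("OnDemand3D", ondemand3d), ("KDIS3D", kdis3d)]:
            print(name, "mean difference vs gold standard:", round(np.mean(m - gold), 2), "mm")

        f, p = stats.f_oneway(gold, xorancat, ondemand3d, kdis3d)  # overall group comparison
        print("one-way ANOVA:", round(f, 3), round(p, 3))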

  13. SWOT Analysis of Software Development Process Models

    Directory of Open Access Journals (Sweden)

    Ashish B. Sasankar

    2011-09-01

    Full Text Available Software worth billions and trillions of dollars has gone to waste in the past due to the lack of proper techniques for developing software, resulting in a software crisis. Historically, the process of software development has played an important role in software engineering. A number of life cycle models have been developed in the last three decades. This paper is an attempt to analyze software process models using the SWOT method. The objective is to identify the Strengths, Weaknesses, Opportunities, and Threats of the Waterfall, Spiral, Prototype, and other models.

  14. The ESA's Space Trajectory Analysis software suite

    Science.gov (United States)

    Ortega, Guillermo

    The European Space Agency (ESA) initiated in 2005 an internal activity to develop an open source software suite involving university science departments and research institutions all over the world. This project is called the "Space Trajectory Analysis" or STA. This article describes the birth of STA and its present configuration. One of the STA aims is to promote the exchange of technical ideas, and raise knowledge and competence in the areas of applied mathematics, space engineering, and informatics at University level. Conceived as a research and education tool to support the analysis phase of a space mission, STA is able to visualize a wide range of space trajectories. These include among others ascent, re-entry, descent and landing trajectories, orbits around planets and moons, interplanetary trajectories, rendezvous trajectories, etc. The article explains that STA project is an original idea of the Technical Directorate of ESA. It was born in August 2005 to provide a framework in astrodynamics research at University level. As research and education software applicable to Academia, a number of Universities support this development by joining ESA in leading the development. ESA and Universities partnership are expressed in the STA Steering Board. Together with ESA, each University has a chair in the board whose tasks are develop, control, promote, maintain, and expand the software suite. The article describes that STA provides calculations in the fields of spacecraft tracking, attitude analysis, coverage and visibility analysis, orbit determination, position and velocity of solar system bodies, etc. STA implements the concept of "space scenario" composed of Solar system bodies, spacecraft, ground stations, pads, etc. It is able to propagate the orbit of a spacecraft where orbital propagators are included. STA is able to compute communication links between objects of a scenario (coverage, line of sight), and to represent the trajectory computations and

  15. Mapping Pedagogical Opportunities Provided by Mathematics Analysis Software

    Science.gov (United States)

    Pierce, Robyn; Stacey, Kaye

    2010-01-01

    This paper proposes a taxonomy of the pedagogical opportunities that are offered by mathematics analysis software such as computer algebra systems, graphics calculators, dynamic geometry or statistical packages. Mathematics analysis software is software for purposes such as calculating, drawing graphs and making accurate diagrams. However, its…

  16. Analysis of Performance of Stereoscopic-Vision Software

    Science.gov (United States)

    Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert

    2007-01-01

    A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
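
    The role of the 0.32-pixel disparity error can be seen from the standard stereo range equation; the sketch below propagates such an error to down-range distance, with camera parameters that are illustrative assumptions rather than the JPL system's values.

        # Range from disparity: Z = f * B / d, so dZ = (Z^2 / (f * B)) * dd to first order
        focal_length_px = 1200.0      # assumed focal length in pixels
        baseline_m = 0.30             # assumed stereo baseline in metres
        disparity_px = 24.0           # measured disparity for a terrain point
        disparity_error_px = 0.32     # standard deviation of disparity error (from the abstract)

        range_m = focal_length_px * baseline_m / disparity_px
        range_error_m = (range_m ** 2 / (focal_length_px * baseline_m)) * disparity_error_px
        print(f"range = {range_m:.2f} m, down-range error ~ {range_error_m:.3f} m")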

  17. Comparative Analysis of Software Effort Estimation Techniques

    National Research Council Canada - National Science Library

    P K Suri; Pallavi Ranjan

    2012-01-01

    .... Imprecision of the estimation is the reason for this problem. As software grew in size and importance it also grew in complexity, making it very difficult to accurately predict the cost of software development...

  18. Software Architecture Reliability Analysis using Failure Scenarios

    NARCIS (Netherlands)

    Tekinerdogan, B.; Sözer, Hasan; Aksit, Mehmet

    With the increasing size and complexity of software in embedded systems, software has now become a primary threat for the reliability. Several mature conventional reliability engineering techniques exist in literature but traditionally these have primarily addressed failures in hardware components

  19. Document image analysis: A primer

    Indian Academy of Sciences (India)

    Rangachar Kasturi; Lawrence O’Gorman; Venu Govindaraju

    2002-02-01

    Document image analysis refers to algorithms and techniques that are applied to images of documents to obtain a computer-readable description from pixel data. A well-known document image analysis product is the Optical Character Recognition (OCR) software that recognizes characters in a scanned document. OCR makes it possible for the user to edit or search the document’s contents. In this paper we briefly describe various components of a document analysis system. Many of these basic building blocks are found in most document analysis systems, irrespective of the particular domain or language to which they are applied. We hope that this paper will help the reader by providing the background necessary to understand the detailed descriptions of specific techniques presented in other papers in this issue.

  20. Software Speeds Up Analysis of Breast Cancer Risk

    Science.gov (United States)

    Software Speeds Up Analysis of Breast Cancer Risk: Study. THURSDAY, Sept. 22, 2016 (HealthDay News) -- Software that quickly analyzes mammograms and patient history to ... Full story: https://medlineplus.gov/news/fullstory_161117.html

  1. Visual querying and analysis of large software repositories

    NARCIS (Netherlands)

    Voinea, Lucian; Telea, Alexandru

    2009-01-01

    We present a software framework for mining software repositories. Our extensible framework enables the integration of data extraction from repositories with data analysis and interactive visualization. We demonstrate the applicability of the framework by presenting several case studies performed on

  2. Scientific Data Analysis and Software Support: Geodynamics

    Science.gov (United States)

    Klosko, Steven; Sanchez, B. (Technical Monitor)

    2000-01-01

    The support on this contract centers on development of data analysis strategies, geodynamic models, and software codes to study four-dimensional geodynamic and oceanographic processes, as well as studies and mission support for near-Earth and interplanetary satellite missions. SRE had a subcontract to maintain the optical laboratory for the LTP, where instruments such as MOLA and GLAS are developed. NVI performed work on a Raytheon laser altimetry task through a subcontract, providing data analysis and final data production for distribution to users. HBG had a subcontract for specialized digital topography analysis and map generation. Over the course of this contract, Raytheon ITSS staff have supported over 60 individual tasks. Some tasks have remained in place during this entire interval whereas others have been completed and were of shorter duration. Over the course of events, task numbers were changed to reflect changes in the character of the work or new funding sources. The description presented below will detail the technical accomplishments that have been achieved according to their science and technology areas. What will be shown is a brief overview of the progress that has been made in each of these investigative and software development areas. Raytheon ITSS staff members have received many awards for their work on this contract, including GSFC Group Achievement Awards for TOPEX Precision Orbit Determination and the Joint Gravity Model One Team. NASA JPL gave the TOPEX/POSEIDON team a medal commemorating the completion of the primary mission and a Certificate of Appreciation. Raytheon ITSS has also received a Certificate of Appreciation from GSFC for its extensive support of the Shuttle Laser Altimeter Experiment.

  3. Special Software for Planetary Image Processing and Research

    Science.gov (United States)

    Zubarev, A. E.; Nadezhdina, I. E.; Kozlova, N. A.; Brusnikin, E. S.; Karachevtseva, I. P.

    2016-06-01

    The special modules of photogrammetric processing of remote sensing data that provide the opportunity to effectively organize and optimize the planetary studies were developed. As basic application the commercial software package PHOTOMOD™ is used. Special modules were created to perform various types of data processing: calculation of preliminary navigation parameters, calculation of shape parameters of celestial body, global view image orthorectification, estimation of Sun illumination and Earth visibilities from planetary surface. For photogrammetric processing the different types of data have been used, including images of the Moon, Mars, Mercury, Phobos, Galilean satellites and Enceladus obtained by frame or push-broom cameras. We used modern planetary data and images that were taken over the years, shooting from orbit flight path with various illumination and resolution as well as obtained by planetary rovers from surface. Planetary data image processing is a complex task, and as usual it can take from few months to years. We present our efficient pipeline procedure that provides the possibilities to obtain different data products and supports a long way from planetary images to celestial body maps. The obtained data - new three-dimensional control point networks, elevation models, orthomosaics - provided accurate maps production: a new Phobos atlas (Karachevtseva et al., 2015) and various thematic maps that derived from studies of planetary surface (Karachevtseva et al., 2016a).

  4. Specific developed phantoms and software to assess radiological equipment image quality

    Energy Technology Data Exchange (ETDEWEB)

    Verdu, G., E-mail: gverdu@iqn.upv.es [Universidad Politecnica de Valencia (Spain). Dept. de Ingenieria Quimica y Nuclear; Mayo, P., E-mail: p.mayo@titaniast.com [TITANIA Servicios Teconologicos, Valencia (Spain); Rodenas, F., E-mail: frodenas@mat.upv.es [Universidad Politecnica de Valencia (Spain). Dept. de Matematica Aplicada; Campayo, J.M., E-mail: j.campayo@lainsa.com [Logistica y Acondicionamientos Industriales S.A.U (LAINSA), Valencia (Spain)

    2011-07-01

    The use of radiographic phantoms specifically designed to evaluate the operation of radiographic equipment allows the image quality obtained with this equipment to be studied in an objective way. In digital radiographic equipment, the analysis of image quality can be automated because the image can be acquired with different technologies, namely computed radiography (phosphor plate) or direct radiography (digital detector). In this work we present an application to automatically assess the constancy of image quality in the imaging chain of the radiographic equipment. The application comprises radiographic phantoms designed and adapted for conventional and dental equipment, together with software developed specifically for the automatic evaluation of the phantom image quality. The software is based on digital image processing techniques that allow the automatic detection of the different phantom tests by means of edge detectors, morphological operators, histogram thresholding techniques, etc. The developed utility is sufficiently sensitive to the operating conditions of the radiographic equipment, namely voltage (kV) and charge (mAs). It is a user-friendly program connected to the database of the hospital or clinic where it is used. After the phantom image processing, the user can obtain a report summarizing the state of the imaging system with acceptance and constancy results. (author)
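
    The image-processing steps named above (thresholding, morphological cleaning, edge detection, per-object statistics) can be sketched with scikit-image as follows; this is an assumed workflow for illustration, not the authors' code, and the file name is hypothetical.

        import numpy as np
        from skimage import io, filters, feature, morphology, measure

        img = io.imread("phantom_radiograph.png", as_gray=True)       # illustrative file name
        mask = img > filters.threshold_otsu(img)                      # histogram thresholding
        mask = morphology.binary_opening(mask, morphology.disk(3))    # remove small artefacts
        edges = feature.canny(img, sigma=2.0)                         # outline phantom inserts

        # Report simple statistics for each detected phantom test object
        labels = measure.label(mask)
        for region in measure.regionprops(labels, intensity_image=img):
            print(region.label, region.area, round(region.mean_intensity, 3))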

  5. Method for detecting software anomalies based on recurrence plot analysis

    OpenAIRE

    Michał Mosdorf

    2012-01-01

    The presented paper evaluates a method for detecting software anomalies based on recurrence plot analysis of the trace log generated by software execution. The described method for detecting software anomalies is based on windowed recurrence quantification analysis for selected measures (e.g., Recurrence rate - RR or Determinism - DET). Initial results show that the proposed method is useful in detecting silent software anomalies that do not result in typical crashes (e.g., exceptions).

  6. Method for detecting software anomalies based on recurrence plot analysis

    Directory of Open Access Journals (Sweden)

    Michał Mosdorf

    2012-03-01

    Full Text Available The presented paper evaluates a method for detecting software anomalies based on recurrence plot analysis of the trace log generated by software execution. The described method for detecting software anomalies is based on windowed recurrence quantification analysis for selected measures (e.g., Recurrence rate - RR or Determinism - DET). Initial results show that the proposed method is useful in detecting silent software anomalies that do not result in typical crashes (e.g., exceptions).

  7. Automatic Image Registration Using Free and Open Source Software

    OpenAIRE

    Giri Babu, D.; Raja Shekhar, S. S.; Chandrasekar, K.; M. V. R. Sesha Sai; P.G. Diwakar; Dadhwal, V. K.

    2014-01-01

    Image registration is the most critical operation in remote sensing applications to enable location based referencing and analysis of earth features. This is the first step for any process involving identification, time series analysis or change detection using a large set of imagery over a region. Most of the reliable procedures involve time consuming and laborious manual methods of finding the corresponding matching features of the input image with respect to reference. Also the ...

  8. 'Face value': new medical imaging software in commercial view.

    Science.gov (United States)

    Coopmans, Catelijne

    2011-04-01

    Based on three ethnographic vignettes describing the engagements of a small start-up company with prospective competitors, partners and customers, this paper shows how commercial considerations are folded into the ways visual images become 'seeable'. When company members mount demonstrations of prototype mammography software, they seek to generate interest but also to protect their intellectual property. Pivotal to these efforts to manage revelation and concealment is the visual interface, which is variously performed as obstacle and ally in the development of a profitable product. Using the concept of 'face value', the paper seeks to develop further insight into contemporary dynamics of seeing and showing by tracing the way techno-visual presentations and commercial considerations become entangled in practice. It also draws attention to the salience and significance of enactments of surface and depth in image-based practices.

  9. Communication software for physicians' workstations supporting medical imaging services

    Science.gov (United States)

    Orphanos, George; Kanellopoulos, Dimitris; Koubias, Stavros

    1993-09-01

    This paper describes a software communication architecture for medical imaging services. This work aims to provide to the physician the communication facilities to access and track a patient's record or to retrieve medical images from a remote database. The proposed architecture is comprised of a communication protocol and an application programming interface (API). The implemented protocol, namely the Telemedicine Network Services (TNS) protocol, has been designed in agreement with Open System Interconnection (OSI) upper layer protocols already standardized. Based on this concept an OSI-like interface has been developed capable of providing application services to the application developer, and thus facilitating the writing of medical application. TNS protocol has been implemented on top of TCP/IP communication protocols, by implementing OSI presentation and application services on top of the Transport Service Access Point (TSAP) which is provided by the socket abstraction on top of the TCP.

  10. Color Medical Image Analysis

    CERN Document Server

    Schaefer, Gerald

    2013-01-01

    Since the early 20th century, medical imaging has been dominated by monochrome imaging modalities such as x-ray, computed tomography, ultrasound, and magnetic resonance imaging. As a result, color information has been overlooked in medical image analysis applications. Recently, various medical imaging modalities that involve color information have been introduced. These include cervicography, dermoscopy, fundus photography, gastrointestinal endoscopy, microscopy, and wound photography. However, in comparison to monochrome images, the analysis of color images is a relatively unexplored area. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for monochrome images are often not directly applicable to multichannel images. The goal of this volume is to summarize the state-of-the-art in the utilization of color information in medical image analysis.

  11. A New Effort for Atmospherical Forecast: Meteorological Image Processing Software (MIPS) for Astronomical Observations

    Science.gov (United States)

    Shameoni Niaei, M.; Kilic, Y.; Yildiran, B. E.; Yüzlükoglu, F.; Yesilyaprak, C.

    2016-12-01

    We describe new software (MIPS) for the analysis and image processing of meteorological satellite (Meteosat) data at an astronomical observatory. This software helps to make atmospheric forecasts (cloud, humidity, rain) from Meteosat data for robotic telescopes. MIPS uses a Python library for EUMETSAT data that aims to be completely open source and is licensed under the GNU General Public Licence (GPL). MIPS is platform independent and uses h5py, numpy, and PIL with the general-purpose, high-level programming language Python and the Qt framework.
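
    A heavily simplified sketch of the kind of processing such a tool performs is shown below: reading a satellite channel with h5py and deriving a crude cloud fraction. The file name, dataset path and brightness threshold are assumptions for illustration and do not reflect the actual Meteosat product format or the MIPS implementation.

        import h5py
        import numpy as np

        with h5py.File("meteosat_segment.h5", "r") as f:
            ir_channel = f["/channels/IR_108"][...]      # hypothetical dataset path

        brightness_threshold = 240.0                      # assumed cloud-top brightness temperature (K)
        cloud_fraction = np.mean(ir_channel < brightness_threshold)
        print(f"estimated cloud fraction: {cloud_fraction:.0%}")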

  12. Analysis of Empirical Software Effort Estimation Models

    CERN Document Server

    Basha, Saleem

    2010-01-01

    Reliable effort estimation remains an ongoing challenge to software engineers. Accurate effort estimation is the state of art of software engineering, effort estimation of software is the preliminary phase between the client and the business enterprise. The relationship between the client and the business enterprise begins with the estimation of the software. The credibility of the client to the business enterprise increases with the accurate estimation. Effort estimation often requires generalizing from a small number of historical projects. Generalization from such limited experience is an inherently under constrained problem. Accurate estimation is a complex process because it can be visualized as software effort prediction, as the term indicates prediction never becomes an actual. This work follows the basics of the empirical software effort estimation models. The goal of this paper is to study the empirical software effort estimation. The primary conclusion is that no single technique is best for all sit...

  13. Software applications for flux balance analysis.

    Science.gov (United States)

    Lakshmanan, Meiyappan; Koh, Geoffrey; Chung, Bevan K S; Lee, Dong-Yup

    2014-01-01

    Flux balance analysis (FBA) is a widely used computational method for characterizing and engineering intrinsic cellular metabolism. The increasing number of its successful applications and growing popularity are possibly attributable to the availability of specific software tools for FBA. Each tool has its unique features and limitations with respect to operational environment, user-interface and supported analysis algorithms. Presented herein is an in-depth evaluation of currently available FBA applications, focusing mainly on usability, functionality, graphical representation and inter-operability. Overall, most of the applications are able to perform basic features of model creation and FBA simulation. COBRA toolbox, OptFlux and FASIMU are versatile to support advanced in silico algorithms to identify environmental and genetic targets for strain design. SurreyFBA, WEbcoli, Acorn, FAME, GEMSiRV and MetaFluxNet are the distinct tools which provide the user friendly interfaces in model handling. In terms of software architecture, FBA-SimVis and OptFlux have the flexible environments as they enable the plug-in/add-on feature to aid prospective functional extensions. Notably, an increasing trend towards the implementation of more tailored e-services such as central model repository and assistance to collaborative efforts was observed among the web-based applications with the help of advanced web-technologies. Furthermore, most recent applications such as the Model SEED, FAME, MetaFlux and MicrobesFlux have even included several routines to facilitate the reconstruction of genome-scale metabolic models. Finally, a brief discussion on the future directions of FBA applications was made for the benefit of potential tool developers.
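
    At its core, FBA is a linear program that maximizes an objective flux subject to steady-state mass balance and flux bounds; the sketch below solves a toy three-reaction network with SciPy, which is only a didactic stand-in for the genome-scale tools reviewed above.

        import numpy as np
        from scipy.optimize import linprog

        # Stoichiometric matrix S (metabolites x reactions): uptake -> conversion -> biomass
        S = np.array([
            [1, -1,  0],   # metabolite A: produced by uptake, consumed by conversion
            [0,  1, -1],   # metabolite B: produced by conversion, consumed by biomass
        ])
        bounds = [(0, 10), (0, 1000), (0, 1000)]   # flux bounds; uptake capped at 10
        c = np.array([0, 0, -1])                   # maximize biomass flux v3 (minimize -v3)

        res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
        print("optimal biomass flux:", res.x[2])   # equals the uptake cap of 10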

  14. GiA Roots: software for the high throughput analysis of plant root system architecture

    OpenAIRE

    Galkovskyi Taras; Mileyko Yuriy; Bucksch Alexander; Moore Brad; Symonova Olga; Price Charles A; Topp Christopher N; Iyer-Pascuzzi Anjali S; Zurek Paul R; Fang Suqin; Harer John; Benfey Philip N; Weitz Joshua S

    2012-01-01

    Abstract Background Characterizing root system architecture (RSA) is essential to understanding the development and function of vascular plants. Identifying RSA-associated genes also represents an underexplored opportunity for crop improvement. Software tools are needed to accelerate the pace at which quantitative traits of RSA are estimated from images of root networks. Results We have developed GiA Roots (General Image Analysis of Roots), a semi-automated software tool designed specifically...

  15. LANDSAFE: LANDING SITE RISK ANALYSIS SOFTWARE FRAMEWORK

    Directory of Open Access Journals (Sweden)

    R. Schmidt

    2012-08-01

    Full Text Available The European Space Agency (ESA) is planning a Lunar Lander mission in the 2018 timeframe that will demonstrate precise soft landing at the polar regions of the Moon. To ensure a safe and successful landing, a careful risk analysis has to be carried out. This comprises identifying favorable target areas and evaluating the surface conditions in these areas. Features like craters, boulders, steep slopes, rough surfaces and shadow areas have to be identified in order to assess the risk associated with a landing site in terms of a successful touchdown and subsequent surface operation of the lander. In addition, global illumination conditions at the landing site have to be simulated and analyzed. The Landing Site Risk Analysis software framework (LandSAfe) is a system for the analysis, selection and certification of safe landing sites on the lunar surface. LandSAfe generates several data products including high resolution digital terrain models (DTMs), hazard maps, illumination maps, temperature maps and surface reflectance maps which assist the user in evaluating potential landing site candidates. This paper presents the LandSAfe system and describes the methods and products of the different modules. For one candidate landing site on the rim of Shackleton crater at the south pole of the Moon a high resolution DTM is showcased.

  16. Software reliability experiments data analysis and investigation

    Science.gov (United States)

    Walker, J. Leslie; Caglayan, Alper K.

    1991-01-01

    The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.

  17. Morphological image analysis

    NARCIS (Netherlands)

    Michielsen, K.; Raedt, H. De; Kawakatsu, T.

    2000-01-01

    We describe a morphological image analysis method to characterize images in terms of geometry and topology. We present a method to compute the morphological properties of the objects building up the image and apply the method to triply periodic minimal surfaces and to images taken from polymer chemi

  18. Morphological image analysis

    NARCIS (Netherlands)

    Michielsen, K; De Raedt, H; Kawakatsu, T; Landau, DP; Lewis, SP; Schuttler, HB

    2001-01-01

    We describe a morphological image analysis method to characterize images in terms of geometry and topology. We present a method to compute the morphological properties of the objects building up the image and apply the method to triply periodic minimal surfaces and to images taken from polymer chemi

  19. Software patterns, knowledge maps, and domain analysis

    CERN Document Server

    Fayad, Mohamed E; Hegde, Srikanth GK; Basia, Anshu; Vakil, Ashka

    2014-01-01

    Preface; Acknowledgments; Authors; Introduction: An Overview of Knowledge Maps; Key Concepts - Software Stable Models, Knowledge Maps, Pattern Language, Goals, Capabilities (Enduring Business Themes + Business Objects); The Motivation; The Problem; The Objectives; Overview of Software Stability Concepts; Overview of Knowledge Maps; Pattern Languages versus Knowledge Maps: A Brief Comparison; The Solution; Knowledge Maps Methodology or Concurrent Software Development Model; Why Knowledge Maps?; Research Methodology Undertaken; Research Verification and Validation; The Stratification of This Book; Summary

  20. Analysis on Some of Software Reliability Models

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The software reliability & maintainability evaluation tool (SRMET 3.0) is introduced in detail in this paper; it was developed by the Software Evaluation and Test Center of China Aerospace Mechanical Corporation. SRMET 3.0 is supported by seven software reliability models and four software maintainability models. The numerical characteristics of all those models are studied in depth in this paper, and corresponding numerical algorithms for each model are also given.

  1. IFDOTMETER : A New Software Application for Automated Immunofluorescence Analysis

    NARCIS (Netherlands)

    Rodriguez-Arribas, Mario; Pizarro-Estrella, Elisa; Gomez-Sanchez, Ruben; Yakhine-Diop, S. M. S.; Gragera-Hidalgo, Antonio; Cristo, Alejandro; Bravo-San Pedro, Jose M.; Gonzalez-Polo, Rosa A.; Fuentes, Jose M.

    2016-01-01

    Most laboratories interested in autophagy use different imaging software for managing and analyzing heterogeneous parameters in immunofluorescence experiments (e.g., LC3-puncta quantification and determination of the number and size of lysosomes). One solution would be software that works on a user'

  2. Software architecture reliability analysis using failure scenarios

    NARCIS (Netherlands)

    Tekinerdogan, Bedir; Sozer, Hasan; Aksit, Mehmet

    2008-01-01

    With the increasing size and complexity of software in embedded systems, software has now become a primary threat for the reliability. Several mature conventional reliability engineering techniques exist in literature but traditionally these have primarily addressed failures in hardware components a

  3. Efficacy of a Newly Designed Cephalometric Analysis Software for McNamara Analysis in Comparison with Dolphin Software

    Science.gov (United States)

    Nouri, Mahtab; Hamidiaval, Shadi; Akbarzadeh Baghban, Alireza; Basafa, Mohammad; Fahim, Mohammad

    2015-01-01

    Objectives: Cephalometric norms of McNamara analysis have been studied in various populations due to their optimal efficiency. Dolphin cephalometric software greatly enhances the conduction of this analysis for orthodontic measurements. However, Dolphin is very expensive and cannot be afforded by many clinicians in developing countries. A suitable alternative software program in Farsi/English will greatly help Farsi speaking clinicians. The present study aimed to develop an affordable Iranian cephalometric analysis software program and compare it with Dolphin, the standard software available on the market for cephalometric analysis. Materials and Methods: In this diagnostic, descriptive study, 150 lateral cephalograms of normal occlusion individuals were selected in Mashhad and Qazvin, two major cities of Iran mainly populated with Fars ethnicity, the main Iranian ethnic group. After tracing the cephalograms, the McNamara analysis standards were measured both with Dolphin and the new software. The cephalometric software was designed using Microsoft Visual C++ program in Windows XP. Measurements made with the new software were compared with those of Dolphin software on both series of cephalograms. The validity and reliability were tested using intra-class correlation coefficient. Results: Calculations showed a very high correlation between the results of the Iranian cephalometric analysis software and Dolphin. This confirms the validity and optimal efficacy of the newly designed software (ICC 0.570–1.0). Conclusion: According to our results, the newly designed software has acceptable validity and reliability and can be used for orthodontic diagnosis, treatment planning and assessment of treatment outcome. PMID:26005455

  4. Efficacy of a Newly Designed Cephalometric Analysis Software for McNamara Analysis in Comparison with Dolphin Software.

    Directory of Open Access Journals (Sweden)

    Mahtab Nouri

    2015-02-01

    Full Text Available Cephalometric norms of McNamara analysis have been studied in various populations due to their optimal efficiency. Dolphin cephalometric software greatly enhances the conduction of this analysis for orthodontic measurements. However, Dolphin is very expensive and cannot be afforded by many clinicians in developing countries. A suitable alternative software program in Farsi/English will greatly help Farsi speaking clinicians. The present study aimed to develop an affordable Iranian cephalometric analysis software program and compare it with Dolphin, the standard software available on the market for cephalometric analysis. In this diagnostic, descriptive study, 150 lateral cephalograms of normal occlusion individuals were selected in Mashhad and Qazvin, two major cities of Iran mainly populated with Fars ethnicity, the main Iranian ethnic group. After tracing the cephalograms, the McNamara analysis standards were measured both with Dolphin and the new software. The cephalometric software was designed using Microsoft Visual C++ program in Windows XP. Measurements made with the new software were compared with those of Dolphin software on both series of cephalograms. The validity and reliability were tested using intra-class correlation coefficient. Calculations showed a very high correlation between the results of the Iranian cephalometric analysis software and Dolphin. This confirms the validity and optimal efficacy of the newly designed software (ICC 0.570-1.0). According to our results, the newly designed software has acceptable validity and reliability and can be used for orthodontic diagnosis, treatment planning and assessment of treatment outcome.

  5. Analysis of Test Efficiency during Software Development Process

    CERN Document Server

    Nair, T R Gopalakrishnan; Tiwari, Pranesh Kumar

    2012-01-01

    One of the prerequisites of any organization is unwavering sustainability in the dynamic and competitive industrial environment. Development of high quality software is therefore an inevitable constraint of any software industry. Defect management being one of the most influential factors in the production of high quality software, it is obligatory for software organizations to orient themselves towards effective defect management. Since the time of software evolution, testing has been deemed a promising technique of defect management in all IT industries. This paper provides an empirical investigation of several projects through a case study comprising four software companies having various production capabilities. The aim of this investigation is to analyze the efficiency of the test team during the software development process. The study indicates very low test efficiency at the requirements analysis phase and even lower test efficiency at the design phase of software development. Subsequently, the study calls for a str...

  6. User manual for freight transportation analysis software

    Energy Technology Data Exchange (ETDEWEB)

    Terziev, M.N.; Wilson, L.B.

    1976-12-01

    Under sponsorship of the Federal Energy Administration, The Center for Transportation Studies at M.I.T. developed and tested a methodology for analysis of the impacts of various government and carrier policies on the demand for freight transportation. The purpose of this document is to familiarize the reader with the computer programs included in this methodology. The purpose of the computer software developed for this project is threefold. First, programs are used to calculate the cost of each of the transport alternatives available for the purchase of a given commodity by a receiver in a given industrial sector. Furthermore, these programs identify the least-cost alternative, and thus provide a forecasting capability at the disaggregate level. Given a description of the population of receivers in the destination city, a second group of programs applies the costing and forecasting programs to each receiver in a sample drawn from the population. The disaggregate forecasts are summed to produce an aggregate forecast of modal tonnages for the given origin/destination city-pair. Finally, a third group of programs computes fuel consumed in transportation from the aggregate modal tonnages. These three groups of programs were placed under the control of a master routine which coordinates the input and output of data.

  7. Performance analysis of software for identification of intestinal parasites

    Directory of Open Access Journals (Sweden)

    Andressa P. Gomes

    2015-08-01

    Full Text Available ABSTRACT Introduction: Intestinal parasites are among the most frequent diagnoses worldwide. An accurate clinical diagnosis of human parasitic infections depends on laboratory confirmation for specific differentiation of the infectious agent. Objectives: To create technological solutions to help parasitological diagnosis, through the construction and use of specific software. Material and method: From the images obtained from the sediment, the software compares the morphometry (area, perimeter and circularity) and uses information on the specific morphological and staining characteristics of parasites, allowing their potential identification. Results: Our results demonstrate satisfactory performance; from a total of 204 images analyzed, 81.86% had the parasite correctly identified by the computer system, and 18.13% could not be identified due to the large amount of fecal debris in the sample evaluated. Discussion: Currently the techniques used in the parasitology area are predominantly manual, probably being affected by variables such as the attention and experience of the professional. Therefore, the use of computerization in this sector can improve the performance of parasitological analysis. Conclusions: This work contributes to the computerization of the healthcare area and benefits both health professionals and their patients, in addition to providing a more efficient, accurate and secure diagnosis.
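
    The morphometric screening described above can be sketched with scikit-image as follows; the file name, size window and circularity cutoff are hypothetical values for illustration and do not come from the published system.

        import numpy as np
        from skimage import io, filters, measure

        img = io.imread("sediment_field.png", as_gray=True)        # illustrative file name
        mask = img < filters.threshold_otsu(img)                   # dark objects on a lighter field

        for region in measure.regionprops(measure.label(mask)):
            if region.perimeter == 0:
                continue
            circularity = 4.0 * np.pi * region.area / region.perimeter ** 2
            # Hypothetical acceptance window for one target parasite species
            if 500 < region.area < 5000 and circularity > 0.7:
                print(f"candidate object: area = {region.area}, circularity = {circularity:.2f}")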

  8. Evaluation of a software package for automated quality assessment of contrast detail images--comparison with subjective visual assessment.

    Science.gov (United States)

    Pascoal, A; Lawinski, C P; Honey, I; Blake, P

    2005-12-07

    Contrast detail analysis is commonly used to assess image quality (IQ) associated with diagnostic imaging systems. Applications include routine assessment of equipment performance and optimization studies. Most frequently, the evaluation of contrast detail images involves human observers visually detecting the threshold contrast detail combinations in the image. However, the subjective nature of human perception and the variations in the decision threshold pose limits to the minimum image quality variations detectable with reliability. Objective methods of assessment of image quality such as automated scoring have the potential to overcome the above limitations. A software package (CDRAD analyser) developed for automated scoring of images produced with the CDRAD test object was evaluated. Its performance to assess absolute and relative IQ was compared with that of an average observer. Results show that the software does not mimic the absolute performance of the average observer. The software proved more sensitive and was able to detect smaller low-contrast variations. The observer's performance was superior to the software's in the detection of smaller details. Both scoring methods showed frequent agreement in the detection of image quality variations resulting from changes in kVp and KERMA(detector), which indicates the potential to use the software CDRAD analyser for assessment of relative IQ.

  9. Evaluation of a software package for automated quality assessment of contrast detail images-comparison with subjective visual assessment

    Energy Technology Data Exchange (ETDEWEB)

    Pascoal, A [Medical Engineering and Physics, King's College London, Faraday Building Denmark Hill, London SE5 8RX (United Kingdom); Lawinski, C P [KCARE - King's Centre for Assessment of Radiological Equipment, King's College Hospital, Faraday Building Denmark Hill, London SE5 8RX (United Kingdom); Honey, I [KCARE - King's Centre for Assessment of Radiological Equipment, King's College Hospital, Faraday Building Denmark Hill, London SE5 8RX (United Kingdom); Blake, P [KCARE - King's Centre for Assessment of Radiological Equipment, King's College Hospital, Faraday Building Denmark Hill, London SE5 8RX (United Kingdom)

    2005-12-07

    Contrast detail analysis is commonly used to assess image quality (IQ) associated with diagnostic imaging systems. Applications include routine assessment of equipment performance and optimization studies. Most frequently, the evaluation of contrast detail images involves human observers visually detecting the threshold contrast detail combinations in the image. However, the subjective nature of human perception and the variations in the decision threshold pose limits to the minimum image quality variations detectable with reliability. Objective methods of assessment of image quality such as automated scoring have the potential to overcome the above limitations. A software package (CDRAD analyser) developed for automated scoring of images produced with the CDRAD test object was evaluated. Its performance to assess absolute and relative IQ was compared with that of an average observer. Results show that the software does not mimic the absolute performance of the average observer. The software proved more sensitive and was able to detect smaller low-contrast variations. The observer's performance was superior to the software's in the detection of smaller details. Both scoring methods showed frequent agreement in the detection of image quality variations resulting from changes in kVp and KERMA(detector), which indicates the potential to use the software CDRAD analyser for assessment of relative IQ.

  10. Development of a New VLBI Data Analysis Software

    Science.gov (United States)

    Bolotin, Sergei; Gipson, John M.; MacMillan, Daniel S.

    2010-01-01

    We present an overview of a new VLBI analysis software under development at NASA GSFC. The new software will replace CALC/SOLVE and many related utility programs. It will have the capabilities of the current system as well as incorporate new models and data analysis techniques. In this paper we give a conceptual overview of the new software. We formulate the main goals of the software. The software should be flexible and modular to implement models and estimation techniques that currently exist or will appear in future. On the other hand it should be reliable and possess production quality for processing standard VLBI sessions. Also, it needs to be capable of processing observations from a fully deployed network of VLBI2010 stations in a reasonable time. We describe the software development process and outline the software architecture.

  11. STEM_CELL: a software tool for electron microscopy: part 2--analysis of crystalline materials.

    Science.gov (United States)

    Grillo, Vincenzo; Rossi, Francesca

    2013-02-01

    A new graphical software package (STEM_CELL) for the analysis of HRTEM and STEM-HAADF images is here introduced in detail. The advantage of the software, beyond its graphical interface, is to put together different analysis algorithms and simulation (described in an associated article) to produce novel analysis methodologies. Different implementations of and improvements to state-of-the-art approaches are reported for image analysis, filtering, normalization and background subtraction. In particular, two important methodological results are here highlighted: (i) the definition of a procedure for atomic-scale quantitative analysis of HAADF images, (ii) the extension of geometric phase analysis to large regions, up to potentially 1 μm, through the use of undersampled images with aliasing effects.

  12. Technical Note: DIRART- A software suite for deformable image registration and adaptive radiotherapy research

    Energy Technology Data Exchange (ETDEWEB)

    Yang Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu Yu; Murty Goddu, S.; Mutic, Sasa; Deasy, Joseph O.; Low, Daniel A. [Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis, Missouri 63110 (United States)

    2011-01-15

    Purpose: Recent years have witnessed tremendous progress in image guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). Methods: DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. Results: DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a good amount of options for DIR results visualization, evaluation, and validation. Conclusions: By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research.
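
    One of the checks associated with inverse-consistent registration is the inverse-consistency error, obtained by composing the forward and backward displacement vector fields; the sketch below computes it for synthetic constant fields in Python (DIRART itself is MATLAB), purely as an illustration of the idea.

        import numpy as np
        from scipy.ndimage import map_coordinates

        ny, nx = 64, 64
        yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
        fwd = np.stack([2.0 * np.ones((ny, nx)), np.zeros((ny, nx))])   # +2 voxels along y
        bwd = -fwd                                                      # ideal inverse field

        # Sample the backward field at the forward-mapped positions, then add the forward field
        bwd_at_fwd = np.stack([
            map_coordinates(bwd[i], [yy + fwd[0], xx + fwd[1]], order=1, mode="nearest")
            for i in range(2)
        ])
        ice = np.sqrt(((fwd + bwd_at_fwd) ** 2).sum(axis=0))   # inverse-consistency error map
        print("mean inverse-consistency error (voxels):", ice.mean())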

  13. SOFI Simulation Tool: A Software Package for Simulating and Testing Super-Resolution Optical Fluctuation Imaging.

    Science.gov (United States)

    Girsault, Arik; Lukes, Tomas; Sharipov, Azat; Geissbuehler, Stefan; Leutenegger, Marcel; Vandenberg, Wim; Dedecker, Peter; Hofkens, Johan; Lasser, Theo

    2016-01-01

    Super-resolution optical fluctuation imaging (SOFI) allows one to perform sub-diffraction fluorescence microscopy of living cells. By analyzing the acquired image sequence with an advanced correlation method, i.e. a high-order cross-cumulant analysis, super-resolution in all three spatial dimensions can be achieved. Here we introduce a software tool for a simple qualitative comparison of SOFI images under simulated conditions considering parameters of the microscope setup and essential properties of the biological sample. This tool incorporates SOFI and STORM algorithms, displays and describes the SOFI image processing steps in a tutorial-like fashion. Fast testing of various parameters simplifies the parameter optimization prior to experimental work. The performance of the simulation tool is demonstrated by comparing simulated results with experimentally acquired data.
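
    The core of second-order SOFI is a pixel-wise temporal cumulant of the fluctuation stack; a minimal sketch is shown below with a random stack standing in for an acquired image sequence, omitting the cross-correlations and higher orders used in the actual tool.

        import numpy as np

        stack = np.random.poisson(50.0, size=(500, 128, 128)).astype(float)   # frames, y, x
        fluctuations = stack - stack.mean(axis=0)      # remove the temporal mean per pixel
        sofi2 = (fluctuations ** 2).mean(axis=0)       # second-order auto-cumulant (variance) image
        print(sofi2.shape)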

  14. Astronomical Image and Data Analysis

    CERN Document Server

    Starck, J.-L

    2006-01-01

    With information and scale as central themes, this comprehensive survey explains how to handle real problems in astronomical data analysis using a modern arsenal of powerful techniques. It treats those innovative methods of image, signal, and data processing that are proving to be both effective and widely relevant. The authors are leaders in this rapidly developing field and draw upon decades of experience. They have been playing leading roles in international projects such as the Virtual Observatory and the Grid. The book addresses not only students and professional astronomers and astrophysicists, but also serious amateur astronomers and specialists in earth observation, medical imaging, and data mining. The coverage includes chapters or appendices on: detection and filtering; image compression; multichannel, multiscale, and catalog data analytical methods; wavelets transforms, Picard iteration, and software tools. This second edition of Starck and Murtagh's highly appreciated reference again deals with to...

  15. Computer software for process hazards analysis.

    Science.gov (United States)

    Hyatt, N

    2000-10-01

    Computerized software tools are assuming major significance in conducting HAZOPs. This is because they have the potential to offer better online presentations and performance to HAZOP teams, as well as better documentation and downstream tracking. The chances of something being "missed" are greatly reduced. We know, only too well, that HAZOP sessions can be like the industrial equivalent of a trip to the dentist. Sessions can (and usually do) become arduous and painstaking. To make the process easier for all those involved, we need all the help computerized software can provide. In this paper I have outlined the challenges addressed in the production of Windows software for performing HAZOP and other forms of PHA. The object is to produce more "intelligent", more user-friendly software for performing HAZOP where technical interaction between team members is of key significance. HAZOP techniques, having already proven themselves, are extending into the field of computer control and human error. This makes further demands on HAZOP software and emphasizes its importance.

  16. Software Piracy in Research: A Moral Analysis.

    Science.gov (United States)

    Santillanes, Gary; Felder, Ryan Marshall

    2015-08-01

    Researchers in virtually every discipline rely on sophisticated proprietary software for their work. However, some researchers are unable to afford the licenses and instead procure the software illegally. We discuss the prohibition of software piracy by intellectual property laws, and argue that the moral basis for the copyright law offers the possibility of cases where software piracy may be morally justified. The ethics codes that scientific institutions abide by are informed by a rule-consequentialist logic: by preserving personal rights to authored works, people able to do so will be incentivized to create. By showing that the law has this rule-consequentialist grounding, we suggest that scientists who blindly adopt their institutional ethics codes will commit themselves to accepting that software piracy could be morally justified, in some cases. We hope that this conclusion will spark debate over important tensions between ethics codes, copyright law, and the underlying moral basis for these regulations. We conclude by offering practical solutions (other than piracy) for researchers.

  17. Runtime analysis of search heuristics on software engineering problems

    Institute of Scientific and Technical Information of China (English)

    Per Kristian LEHRE; Xin YAO

    2009-01-01

    Many software engineering tasks can potentially be automated using search heuristics. However, much work is needed in designing and evaluating search heuristics before this approach can be routinely applied to a software engineering problem. Experimental methodology should be complemented with theoretical analysis to achieve this goal. Recently, there have been significant theoretical advances in the runtime analysis of evolutionary algorithms (EAs) and other search heuristics in other problem domains. We suggest that these methods could be transferred and adapted to gain insight into the behaviour of search heuristics on software engineering problems while automating software engineering.

  18. Software development for ACR-approved phantom-based nuclear medicine tomographic image quality control with cross-platform compatibility

    Science.gov (United States)

    Oh, Jungsu S.; Choi, Jae Min; Nam, Ki Pyo; Chae, Sun Young; Ryu, Jin-Sook; Moon, Dae Hyuk; Kim, Jae Seung

    2015-07-01

    Quality control and quality assurance (QC/QA) have been two of the most important issues in modern nuclear medicine (NM) imaging for both clinical practice and academic research. Whereas quantitative QC analysis software is common to modern positron emission tomography (PET) scanners, the QC of gamma cameras and/or single-photon-emission computed tomography (SPECT) scanners has not been sufficiently addressed. Although a thorough standard operating process (SOP) for mechanical and software maintenance may help the QC/QA of a gamma camera and SPECT-computed tomography (CT), no previous study has addressed a unified platform or process to decipher or analyze SPECT phantom images acquired from various scanners. In addition, only a few approaches have established cross-platform software to enable technologists and physicists to assess the variety of SPECT scanners from different manufacturers. To resolve these issues, we have developed Interactive Data Language (IDL)-based in-house software for cross-platform (in terms of not only operating systems (OS) but also manufacturers) analyses of the QC data on an ACR SPECT phantom, which is essential for assessing and assuring the tomographic image quality of SPECT. We applied our devised software to our routine quarterly QC of ACR SPECT phantom images acquired from a number of platforms (OS/manufacturers). Based on our experience, we suggest that our devised software can offer a unified platform that allows images acquired from various types of scanners to be analyzed with great precision and accuracy.

  19. Failure-Modes-And-Effects Analysis Of Software Logic

    Science.gov (United States)

    Garcia, Danny; Hartline, Thomas; Minor, Terry; Statum, David; Vice, David

    1996-01-01

    Rigorous analysis applied early in design effort. Method of identifying potential inadequacies and modes and effects of failures caused by inadequacies (failure-modes-and-effects analysis or "FMEA" for short) devised for application to software logic.

  20. WorkstationJ: workstation emulation software for medical image perception and technology evaluation research

    Science.gov (United States)

    Schartz, Kevin M.; Berbaum, Kevin S.; Caldwell, Robert T.; Madsen, Mark T.

    2007-03-01

    We developed image presentation software that mimics the functionality available in the clinic, but also records time-stamped observer-display interactions and is readily deployable on diverse workstations, making it possible to collect comparable observer data at multiple sites. Commercial image presentation software for clinical use has limited application for research on image perception, ergonomics, computer-aids and informatics because it does not collect observer responses, or other information on observer-display interactions, in real time. It is also very difficult to collect observer data from multiple institutions unless the same commercial software is available at different sites. Our software not only records observer reports of abnormalities and their locations, but also inspection time until report, inspection time for each computed radiograph and for each slice of tomographic studies, window/level, and magnification settings used by the observer. The software is a modified version of the open source ImageJ software available from the National Institutes of Health. Our software involves changes to the base code and extensive new plugin code. Our free software is currently capable of displaying computed tomography and computed radiography images. The software is packaged as Java class files and can be used on Windows, Linux, or Mac systems. By deploying our software together with experiment-specific script files that administer experimental procedures and image file handling, multi-institutional studies can be conducted that increase reader and/or case sample sizes or add experimental conditions.

  1. Data and Analysis Center for Software

    Science.gov (United States)

    1993-08-01

    refinement and technology transition program for the CAMP software and techniques. All CAMP products were developed in accordance with DOD-STD-2167A ... provided a vehicle for software developers/reusers to make intelligent choices regarding the selection of one component over another, or selecting

  2. ESTERR-PRO: A Setup Verification Software System Using Electronic Portal Imaging

    Directory of Open Access Journals (Sweden)

    Pantelis A. Asvestas

    2007-01-01

    Full Text Available The purpose of the paper is to present and evaluate the performance of a new software-based registration system for patient setup verification, during radiotherapy, using electronic portal images. The estimation of setup errors, using the proposed system, can be accomplished by means of two alternate registration methods. (a) The portal image of the current fraction of the treatment is registered directly with the reference image (digitally reconstructed radiograph (DRR) or simulator image) using a modified manual technique. (b) The portal image of the current fraction of the treatment is registered with the portal image of the first fraction of the treatment (reference portal image) by applying a nearly automated technique based on self-organizing maps, whereas the reference portal has already been registered with a DRR or a simulator image. The proposed system was tested on phantom data and on data from six patients. The root mean square error (RMSE) of the setup estimates was 0.8±0.3 (mean value ± standard deviation) for the phantom data and 0.3±0.3 for the patient data, respectively, by applying the two methodologies. Furthermore, statistical analysis by means of the Wilcoxon nonparametric signed test showed that the results obtained by the two methods did not differ significantly (P value >0.05).
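
    As an illustration of the statistical comparison described above, the sketch below applies a Wilcoxon signed-rank test to paired setup-error estimates obtained with two registration methods; the numeric values, units and variable names are invented placeholders, not data from the study.

        # Hypothetical paired setup-error estimates (mm) from two registration methods.
        import numpy as np
        from scipy.stats import wilcoxon

        errors_method_a = np.array([0.4, 0.7, 0.2, 0.9, 0.5, 0.3])  # e.g. manual DRR registration
        errors_method_b = np.array([0.5, 0.6, 0.4, 0.8, 0.6, 0.2])  # e.g. SOM-based portal-to-portal

        stat, p_value = wilcoxon(errors_method_a, errors_method_b)
        rmse_a = np.sqrt(np.mean(errors_method_a ** 2))
        rmse_b = np.sqrt(np.mean(errors_method_b ** 2))
        print(f"RMSE A = {rmse_a:.2f} mm, RMSE B = {rmse_b:.2f} mm, Wilcoxon p = {p_value:.3f}")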

  3. Digital-image processing and image analysis of glacier ice

    Science.gov (United States)

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.
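
    Since the general methodology is software-agnostic, the grain statistics described above can also be approximated with open tools; the sketch below uses scikit-image on a hypothetical thin-section image and an Otsu threshold, which is a simplifying assumption rather than the handbook's exact acquisition and reduction steps.

        # Segment grains in a grayscale thin-section image and report simple size statistics.
        import numpy as np
        from skimage import io, filters, measure

        image = io.imread("thin_section.png", as_gray=True)        # hypothetical input file
        binary = image > filters.threshold_otsu(image)             # grains vs. background
        labels = measure.label(binary)                             # label connected grains
        areas = np.array([p.area for p in measure.regionprops(labels)])

        print(f"{labels.max()} grains, mean area {areas.mean():.1f} px, median {np.median(areas):.1f} px")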

  4. Hybrid Expert Systems In Image Analysis

    Science.gov (United States)

    Dixon, Mark J.; Gregory, Paul J.

    1987-04-01

    Vision systems capable of inspecting industrial components and assemblies have a large potential market if they can be easily programmed and produced quickly. Currently, vision application software written in conventional high-level languages such as C or Pascal is produced by experts in program design, image analysis, and process control. Applications written this way are difficult to maintain and modify. Unless other similar inspection problems can be found, the final program is essentially one-off redundant code. A general-purpose vision system, targeted at the Visual Machines Ltd. C-VAS 3000 image processing workstation, is described which will make writing image analysis software accessible to users who are expert in neither computer programming nor image analysis. A significant reduction in the effort required to produce vision systems will be gained through a graphically driven interactive application generator. Finally, an Expert System will be layered on top to guide the naive user through the process of generating an application.

  5. Theoretical and software considerations for nonlinear dynamic analysis

    Science.gov (United States)

    Schmidt, R. J.; Dodds, R. H., Jr.

    1983-01-01

    In the finite element method for structural analysis, it is generally necessary to discretize the structural model into a very large number of elements to accurately evaluate displacements, strains, and stresses. As the complexity of the model increases, the number of degrees of freedom can easily exceed the capacity of present-day software systems. Improvements to structural analysis software, including more efficient use of existing hardware and improved structural modeling techniques, are discussed. One modeling technique that is used successfully in static linear and nonlinear analysis is multilevel substructuring. This research extends the use of multilevel substructure modeling to include dynamic analysis and defines the requirements for a general purpose software system capable of efficient nonlinear dynamic analysis. The multilevel substructuring technique is presented, the analytical formulations and computational procedures for dynamic analysis and nonlinear mechanics are reviewed, and an approach to the design and implementation of a general purpose structural software system is presented.
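
    The substructuring idea can be illustrated with a small static (Guyan) condensation, in which the interior degrees of freedom of a substructure are eliminated in favour of its boundary ones; the partitioning convention and the matrix below are illustrative assumptions, not code from the system described.

        # Condense interior DOFs: K_reduced = K_bb - K_bi * inv(K_ii) * K_ib.
        import numpy as np

        def condense(K, n_boundary):
            b = n_boundary
            K_bb, K_bi = K[:b, :b], K[:b, b:]
            K_ib, K_ii = K[b:, :b], K[b:, b:]
            return K_bb - K_bi @ np.linalg.solve(K_ii, K_ib)

        K = np.array([[ 4.0, -1.0, -1.0],       # DOFs ordered [boundary, boundary, interior]
                      [-1.0,  3.0, -1.0],
                      [-1.0, -1.0,  2.0]])
        print(condense(K, n_boundary=2))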

  6. An open-source software tool for the generation of relaxation time maps in magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Kühne Titus

    2010-07-01

    Full Text Available Abstract Background In magnetic resonance (MR) imaging, T1, T2 and T2* relaxation times represent characteristic tissue properties that can be quantified with the help of specific imaging strategies. While there are basic software tools for specific pulse sequences, until now there is no universal software program available to automate pixel-wise mapping of relaxation times from various types of images or MR systems. Such a software program would allow researchers to test and compare new imaging strategies and thus would significantly facilitate research in the area of quantitative tissue characterization. Results After defining requirements for a universal MR mapping tool, a software program named MRmap was created using a high-level graphics language. Additional features include a manual registration tool for source images with motion artifacts and a tabular DICOM viewer to examine pulse sequence parameters. MRmap was successfully tested on three different computer platforms with image data from three different MR system manufacturers and five different sorts of pulse sequences: multi-image inversion recovery T1; Look-Locker/TOMROP T1; modified Look-Locker (MOLLI) T1; single-echo T2/T2*; and multi-echo T2/T2*. Computing times varied between 2 and 113 seconds. Estimates of relaxation times compared favorably to those obtained from non-automated curve fitting. Completed maps were exported in DICOM format and could be read in standard software packages used for analysis of clinical and research MR data. Conclusions MRmap is a flexible cross-platform research tool that enables accurate mapping of relaxation times from various pulse sequences. The software allows researchers to optimize quantitative MR strategies in a manufacturer-independent fashion. The program and its source code were made available as open-source software on the internet.
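
    A minimal sketch of the pixel-wise relaxometry that such a tool automates is shown below: a mono-exponential S(TE) = S0*exp(-TE/T2) is fitted at every pixel of a synthetic multi-echo series. The echo times, image size and noise level are hypothetical, and the snippet is an illustration rather than MRmap itself.

        # Fit S(TE) = S0 * exp(-TE / T2) pixel by pixel on a small synthetic multi-echo series.
        import numpy as np
        from scipy.optimize import curve_fit

        def decay(te, s0, t2):
            return s0 * np.exp(-te / t2)

        echo_times = np.array([10.0, 20.0, 40.0, 80.0])                       # ms
        series = 100.0 * np.exp(-echo_times[:, None, None] / 60.0)            # ideal signal, T2 = 60 ms
        series = series + np.random.normal(0.0, 1.0, (4, 8, 8))               # add noise, 8x8 image

        t2_map = np.zeros(series.shape[1:])
        for y in range(series.shape[1]):
            for x in range(series.shape[2]):
                signal = series[:, y, x]
                (s0, t2), _ = curve_fit(decay, echo_times, signal, p0=(signal[0], 50.0))
                t2_map[y, x] = t2
        print(f"mean fitted T2 = {t2_map.mean():.1f} ms")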

  7. Radar-Interferometric Asteroid Imaging Using a Flexible Software Correlator

    Science.gov (United States)

    Black, G.; Campbell, D. B.; Treacy, R.; Nolan, M. C.

    2005-12-01

    We've developed a technique to use a radio interferometer to image near-Earth objects (NEOs) during their close Earth approach when they can be illuminated by a ground-based radar system. There is great potential for this technique to yield detailed information that is complementary to other observational methods. We are using the NAIC's Arecibo Observatory's 1 MW 13 cm radar transmitter with the NRAO's Very Long Baseline Array (VLBA) as the receiving instrument. The VLBA, with antenna spacings of several thousands of kilometers, has a potential resolution on the order of milli-arcseconds; a couple of orders of magnitude smaller than typical ground-based telescopic observations, and sufficient to determine the gross shapes and orientations of spin vectors. Milli-arcsecond astrometry of these quickly moving objects can greatly improve their orbits and extend the span over which future Earth encounters can be predicted. The VLBA hardware correlator limits the frequency resolution and complicates incorporating a model of the near-field geometry. Typical target bandwidths are ˜1 Hz while the correlator's narrowest resolution is 120 Hz. To avoid these difficulties a specialized computer interface was designed to transfer the raw data to commercial PCs. We can now use this system to obtain the individual antenna data streams and subsequently correlate them in software, bypassing the hardware correlator entirely. Software processing permits synthesis of narrower frequency bins, plus easier access for iterations to improve the near-field model or correct a poor ephemeris a posteriori. This system could also be used to achieve high time resolution on strong sources. We have recently used this system to observe near-Earth asteroid (25143) Itokawa, a sub-kilometer sized object that passed within 0.013 AU of the Earth and is the target of the Japanese Hayabusa mission. The National Radio Astronomy Observatory is a facility of the NSF operated under cooperative agreement by

  8. Cross-instrument Analysis Correlation Software

    Energy Technology Data Exchange (ETDEWEB)

    2017-06-28

    This program has been designed to assist with the tracking of a sample from one analytical instrument to another, such as SEM, microscopes, micro X-ray diffraction and other instruments where particular positions/locations on the sample are examined, photographed, etc. The software is designed for easy entry of the positions of fiducials and of locations of interest, such that, in a future session on the same or a different instrument, the positions of interest can be re-found by using the known fiducial locations in the current and reference sessions to transform each point into the current session's coordinate system. The software is dialog-box driven, guiding the user through the necessary data entry and program choices. Information is stored in a series of text-based extensible markup language (XML) files.
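
    The fiducial-based re-location described above can be sketched as a least-squares transform between sessions; the affine model, coordinates and variable names below are assumptions chosen for illustration and are not the program's actual XML-driven workflow.

        # Estimate an affine mapping from reference-session to current-session coordinates
        # using measured fiducial positions, then transform a stored point of interest.
        import numpy as np

        ref_fiducials = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # reference session (mm)
        cur_fiducials = np.array([[2.1, 1.0], [11.9, 1.4], [1.7, 11.2]])   # same fiducials, current session

        design = np.hstack([ref_fiducials, np.ones((3, 1))])                # rows are [x, y, 1]
        A, *_ = np.linalg.lstsq(design, cur_fiducials, rcond=None)          # 3x2 affine matrix

        point_of_interest = np.array([[5.0, 5.0, 1.0]])                     # stored in reference coordinates
        print(point_of_interest @ A)                                        # location in the current session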

  9. Integrated analysis software for bulk power system stability

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, T.; Nagao, T.; Takahashi, K. [Central Research Inst. of Electric Power Industry, Tokyo (Japan)

    1994-12-31

    This paper presents three software packages developed by the Central Research Institute of Electric Power Industry (CRIEPI) for bulk power network analysis, together with the user support system that manages, easily and reliably, the large volumes of data these packages require. (author) 3 refs., 7 figs., 2 tabs.

  10. Software metrics a guide to planning, analysis, and application

    CERN Document Server

    Pandian, C Ravindranath

    2003-01-01

    Software Metrics: A Guide to Planning, Analysis, and Application simplifies software measurement and explains its value as a pragmatic tool for management. Ideas and techniques presented in this book are derived from best practices. The ideas are field-proven, down to earth, and straightforward, making this volume an invaluable resource for those striving for process improvement.

  11. Software analysis methods for resource-sensitive systems

    NARCIS (Netherlands)

    Kersten, R.W.J.

    2015-01-01

    Practically every modern electronic device is controlled by software. It is important to establish certain quality characteristics of this software. In his dissertation, Rody Kersten presents innovative analysis methods towards this end. These semi-automatic methods go beyond the validation of

  12. Continuous Software Quality analysis for the ATLAS experiment

    CERN Document Server

    Washbrook, Andrew; The ATLAS collaboration

    2017-01-01

    The regular application of software quality tools in large collaborative projects is required to reduce code defects to an acceptable level. If left unchecked, the accumulation of defects invariably results in performance degradation at scale and problems with the long-term maintainability of the code. Although software quality tools are effective for identification, there remains a non-trivial sociological challenge to resolve defects in a timely manner. This is an ongoing concern for the ATLAS software, which has evolved over many years to meet the demands of Monte Carlo simulation, detector reconstruction and data analysis. At present over 3.8 million lines of C++ code (and close to 6 million total lines of code) are maintained by a community of hundreds of developers worldwide. It is therefore preferable to address code defects before they are introduced into a widely used software release. Recent wholesale changes to the ATLAS software infrastructure have provided an ideal opportunity to apply software quali...

  13. Team Software Development for Aerothermodynamic and Aerodynamic Analysis and Design

    Science.gov (United States)

    Alexandrov, N.; Atkins, H. L.; Bibb, K. L.; Biedron, R. T.; Carpenter, M. H.; Gnoffo, P. A.; Hammond, D. P.; Jones, W. T.; Kleb, W. L.; Lee-Rausch, E. M.

    2003-01-01

    A collaborative approach to software development is described. The approach employs the agile development techniques: project retrospectives, Scrum status meetings, and elements of Extreme Programming to efficiently develop a cohesive and extensible software suite. The software product under development is a fluid dynamics simulator for performing aerodynamic and aerothermodynamic analysis and design. The functionality of the software product is achieved both through the merging, with substantial rewrite, of separate legacy codes and the authorship of new routines. Examples of rapid implementation of new functionality demonstrate the benefits obtained with this agile software development process. The appendix contains a discussion of coding issues encountered while porting legacy Fortran 77 code to Fortran 95, software design principles, and a Fortran 95 coding standard.

  14. Software Users Manual (SUM): Extended Testability Analysis (ETA) Tool

    Science.gov (United States)

    Maul, William A.; Fulton, Christopher E.

    2011-01-01

    This software user manual describes the implementation and use of the Extended Testability Analysis (ETA) Tool. The ETA Tool is a software program that augments the analysis and reporting capabilities of a commercial-off-the-shelf (COTS) testability analysis software package called the Testability Engineering And Maintenance System (TEAMS) Designer. An initial diagnostic assessment is performed by the TEAMS Designer software using a qualitative, directed-graph model of the system being analyzed. The ETA Tool utilizes system design information captured within the diagnostic model and testability analysis output from the TEAMS Designer software to create a series of six reports for various system engineering needs. The ETA Tool allows the user to perform additional studies on the testability analysis results by determining the detection sensitivity to the loss of certain sensors or tests. The ETA Tool was developed to support design and development of the NASA Ares I Crew Launch Vehicle. The diagnostic analysis provided by the ETA Tool was proven to be valuable system engineering output that provided consistency in the verification of system engineering requirements. This software user manual provides a description of each output report generated by the ETA Tool. The manual also describes the example diagnostic model and supporting documentation - also provided with the ETA Tool software release package - that were used to generate the reports presented in the manual.

  15. PREDICTION OF SMARTPHONES’ PERCEIVED IMAGE QUALITY USING SOFTWARE EVALUATION TOOL VIQET

    Directory of Open Access Journals (Sweden)

    Pinchas ZOREA

    2016-12-01

    Full Text Available Considerable resources and effort have been devoted in recent years to assessing how smartphone users perceive image quality. Unfortunately, only limited success has been achieved, and image quality assessment is still based largely on human visual tests. The paper describes a new model of perceived quality based on human visual tests compared with image analysis by the software application tool. The values of the perceived image quality parameters (brightness, contrast, color saturation and sharpness) were calibrated based on the results of human visual experiments.
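
    For illustration, the four parameters named above can be given simple operational definitions and computed directly from an image; the definitions below (mean luminance, luminance standard deviation, mean HSV saturation, mean gradient magnitude) and the file name are assumptions, not the VIQET algorithms.

        # Crude image-quality descriptors computed with Pillow/NumPy.
        import numpy as np
        from PIL import Image

        rgb = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=float) / 255.0
        hsv = np.asarray(Image.open("photo.jpg").convert("HSV"), dtype=float) / 255.0
        gray = rgb.mean(axis=2)

        brightness = gray.mean()
        contrast = gray.std()
        saturation = hsv[..., 1].mean()
        gy, gx = np.gradient(gray)                         # sharpness proxy: mean gradient magnitude
        sharpness = np.hypot(gx, gy).mean()
        print(brightness, contrast, saturation, sharpness)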

  16. Swallowing quantitative analysis software

    Directory of Open Access Journals (Sweden)

    André Augusto Spadotto

    2008-02-01

    Full Text Available OBJECTIVE: The present paper is aimed at introducing software that allows a detailed analysis of swallowing dynamics. MATERIALS AND METHODS: The sample included ten (six male and four female) stroke patients, with mean age of 57.6 years. Swallowing videofluoroscopy was performed and images were digitized for posterior analysis of the pharyngeal transit time with the aid of a chronometer and the software. RESULTS: Differences were observed in the average pharyngeal swallowing transit time as a result of measurements with chronometer and software. CONCLUSION: This software is a useful tool for the analysis of parameters such as swallowing time and speed, allowing a better understanding of the swallowing dynamics, both in the clinical approach of patients with oropharyngeal dysphagia and for scientific research purposes.

  17. Gabor Analysis for Imaging

    DEFF Research Database (Denmark)

    Christensen, Ole; Feichtinger, Hans G.; Paukner, Stephan

    2015-01-01

    , it characterizes a function by its transform over phase space, which is the time–frequency plane (TF-plane) in a musical context or the location–wave-number domain in the context of image processing. Since the transition from the signal domain to the phase space domain introduces an enormous amount of data...... of the generalities relevant for an understanding of Gabor analysis of functions on Rd. We pay special attention to the case d = 2, which is the most important case for image processing and image analysis applications. The chapter is organized as follows. Section 2 presents central tools from functional analysis......, the application of Gabor expansions to image representation is considered in Sect. 6....
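
    For reference, a standard textbook form of the continuous Gabor (short-time Fourier) transform underlying this phase-space description is given below; the window g and the normalization are the usual conventions, not a formula quoted from the chapter, and for image analysis d = 2.

        \[
          V_g f(x,\omega) \;=\; \int_{\mathbb{R}^d} f(t)\,\overline{g(t-x)}\,
            e^{-2\pi i \langle t,\omega\rangle}\,\mathrm{d}t,
          \qquad (x,\omega) \in \mathbb{R}^d \times \mathbb{R}^d .
        \]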

  18. Flexible Software Architecture for Visualization and Seismic Data Analysis

    Science.gov (United States)

    Petunin, S.; Pavlov, I.; Mogilenskikh, D.; Podzyuban, D.; Arkhipov, A.; Baturuin, N.; Lisin, A.; Smith, A.; Rivers, W.; Harben, P.

    2007-12-01

    Research in the field of seismology requires software and signal processing utilities for seismogram manipulation and analysis. Seismologists and data analysts often encounter a major problem in the use of any particular software application specific to seismic data analysis: the tuning of commands and windows to the specific waveforms and hot key combinations so as to fit their familiar informational environment. The ability to modify the user's interface independently from the developer requires an adaptive code structure. An adaptive code structure also allows for expansion of software capabilities such as new signal processing modules and implementation of more efficient algorithms. Our approach is to use a flexible "open" architecture for development of geophysical software. This report presents an integrated solution for organizing a logical software architecture based on the Unix version of the Geotool software implemented on the Microsoft .NET 2.0 platform. Selection of this platform greatly expands the variety and number of computers that can implement the software, including laptops that can be utilized in field conditions. It also facilitates implementation of communication functions for seismic data requests from remote databases through the Internet. The main principle of the new architecture for Geotool is that scientists should be able to add new routines for digital waveform analysis via software plug-ins that utilize the basic Geotool display for GUI interaction. The use of plug-ins allows the efficient integration of diverse signal-processing software, including software still in preliminary development, into an organized platform without changing the fundamental structure of that platform itself. An analyst's use of Geotool is tracked via a metadata file so that future studies can reconstruct, and alter, the original signal processing operations. The work has been completed in the framework of a joint Russian-American project.

  19. A comparison of conventional and computer-assisted semen analysis (CRISMAS software) using samples from 166 young Danish men

    DEFF Research Database (Denmark)

    Vested, Anne; Ramlau-Hansen, Cecilia; Bonde, Jens P;

    2011-01-01

    The aim of the present study was to compare assessments of sperm concentration and sperm motility analysed by conventional semen analysis with those obtained by computer-assisted semen analysis (CASA) (Copenhagen Rigshospitalet Image House Sperm Motility Analysis System (CRISMAS) 4.6 software) using semen samples from 166 young Danish men. The CRISMAS software identifies sperm concentration and classifies spermatozoa into three motility categories. To enable comparison of the two methods, the four motility stages obtained by conventional semen analysis were, based on their velocity... ...and motility analysis. This needs to be accounted for in clinics using this software and in studies of determinants of these semen characteristics.

  20. Quantitative histogram analysis of images

    Science.gov (United States)

    Holub, Oliver; Ferreira, Sérgio T.

    2006-11-01

    loading of an image. No. of bits in a word: 32. No. of processors used: 1. Has the code been vectorized or parallelized?: no. No. of lines in distributed program, including test data, etc.: 138 946. No. of bytes in distributed program, including test data, etc.: 15 166 675. Distribution format: tar.gz. Nature of physical problem: quantification of image data (e.g., for discrimination of molecular species in gels or fluorescent molecular probes in cell cultures) requires proprietary or complex software packages, which might not include the relevant statistical parameters or make the analysis of multiple images a tedious procedure for the general user. Method of solution: tool for conversion of an RGB bitmap image into a luminance-linear image and extraction of the luminance histogram, probability distribution, and statistical parameters (average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram, and median of the probability distribution), with possible selection of a region of interest (ROI) and lower and upper threshold levels. Restrictions on the complexity of the problem: does not incorporate application-specific functions (e.g., morphometric analysis). Typical running time: seconds (depending on image size and processor speed). Unusual features of the program: none.
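
    A compact way to reproduce the listed statistics with open tools is sketched below using NumPy/SciPy on a hypothetical RGB image; the Rec. 709 luminance weights and the 256-bin histogram are assumptions standing in for the program's own luminance-linear conversion.

        # Convert an RGB image to luminance and report the histogram statistics named above.
        import numpy as np
        from scipy import stats
        from PIL import Image

        rgb = np.asarray(Image.open("gel_scan.png").convert("RGB"), dtype=float)
        values = (0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]).ravel()

        hist, edges = np.histogram(values, bins=256, range=(0.0, 255.0))
        print("average", values.mean(), "std", values.std(), "variance", values.var())
        print("min", values.min(), "max", values.max(), "median", np.median(values))
        print("mode (bin centre)", edges[hist.argmax()] + 0.5 * (edges[1] - edges[0]))
        print("skewness", stats.skew(values), "kurtosis", stats.kurtosis(values))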

  1. MSiReader v1.0: Evolving Open-Source Mass Spectrometry Imaging Software for Targeted and Untargeted Analyses.

    Science.gov (United States)

    Bokhart, Mark T; Nazari, Milad; Garrard, Kenneth P; Muddiman, David C

    2017-09-20

    A major update to the mass spectrometry imaging (MSI) software MSiReader is presented, offering a multitude of newly added features critical to MSI analyses. MSiReader is a free, open-source, and vendor-neutral software written in the MATLAB platform and is capable of analyzing most common MSI data formats. A standalone version of the software, which does not require a MATLAB license, is also distributed. The newly incorporated data analysis features expand the utility of MSiReader beyond simple visualization of molecular distributions. The MSiQuantification tool allows researchers to calculate absolute concentrations from quantification MSI experiments exclusively through MSiReader software, significantly reducing data analysis time. An image overlay feature allows the incorporation of complementary imaging modalities to be displayed with the MSI data. A polarity filter has also been incorporated into the data loading step, allowing the facile analysis of polarity switching experiments without the need for data parsing prior to loading the data file into MSiReader. A quality assurance feature to generate a mass measurement accuracy (MMA) heatmap for an analyte of interest has also been added to allow for the investigation of MMA across the imaging experiment. Most importantly, as new features have been added performance has not degraded, in fact it has been dramatically improved. These new tools and the improvements to the performance in MSiReader v1.0 enable the MSI community to evaluate their data in greater depth and in less time.

  2. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    Science.gov (United States)

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs, mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  3. Digital image analysis

    DEFF Research Database (Denmark)

    Riber-Hansen, Rikke; Vainer, Ben; Steiniche, Torben

    2012-01-01

    Digital image analysis (DIA) is increasingly implemented in histopathological research to facilitate truly quantitative measurements, decrease inter-observer variation and reduce hands-on time. Originally, efforts were made to enable DIA to reproduce manually obtained results on histological slides...... reproducibility, application of stereology-based quantitative measurements, time consumption, optimization of histological slides, regions of interest selection and recent developments in staining and imaging techniques....

  4. Change impact analysis for software product lines

    Directory of Open Access Journals (Sweden)

    Jihen Maâzoun

    2016-10-01

    Full Text Available A software product line (SPL) represents a family of products in a given application domain. Each SPL is constructed to provide for the derivation of new products by covering a wide range of features in its domain. Nevertheless, over time, some domain features may become obsolete with the appearance of new features, while others may be refined. Accordingly, the SPL must be maintained to account for the domain evolution. Such evolution requires a means for managing the impact of changes on the SPL models, including the feature model and design. This paper presents an automated method that analyzes feature model evolution, traces their impact on the SPL design, and offers a set of recommendations to ensure the consistency of both models. The proposed method defines a set of new metrics adapted to SPL evolution to identify the effort needed to maintain the SPL models consistently and with a quality as good as the original models. The method and its tool are illustrated through an example of an SPL in the Text Editing domain. In addition, they are experimentally evaluated in terms of both the quality of the maintained SPL models and the precision of the impact change management.

  5. Research and Development on Food Nutrition Statistical Analysis Software System

    Directory of Open Access Journals (Sweden)

    Du Li

    2013-12-01

    Full Text Available Designing and developing food nutrition statistical analysis software automates nutrition calculations, improves the working efficiency of nutrition professionals, and supports computerized nutrition education and outreach. In the software development process, software engineering methods and database technology are used to calculate daily nutritional intake, and an intelligent system is used to evaluate the user's health condition. Experiments show that the system correctly evaluates the user's health condition and offers reasonable suggestions, demonstrating a new way to solve complex nutrition computation problems with information engineering.

  6. Power Analysis Software for Educational Researchers

    Science.gov (United States)

    Peng, Chao-Ying Joanne; Long, Haiying; Abaci, Serdar

    2012-01-01

    Given the importance of statistical power analysis in quantitative research and the repeated emphasis on it by American Educational Research Association/American Psychological Association journals, the authors examined the reporting practice of power analysis by the quantitative studies published in 12 education/psychology journals between 2005…

  7. Software safety analysis techniques for developing safety critical software in the digital protection system of the LMR

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jang Soo; Cheon, Se Woo; Kim, Chang Hoi; Sim, Yun Sub

    2001-02-01

    This report describes software safety analysis techniques and engineering guidelines for developing safety-critical software, in order to identify the state of the art in this field and to give the software safety engineer a trail map between the codes-and-standards layer and the design methodology and documents layer. We have surveyed the management aspects of software safety activities during the software lifecycle in order to improve safety. After identifying the conventional safety analysis techniques for systems, we surveyed in detail the software safety analysis techniques: software FMEA (Failure Mode and Effects Analysis), software HAZOP (Hazard and Operability Analysis), and software FTA (Fault Tree Analysis). We also surveyed the state of the art in software reliability assessment techniques. The most important results from the reliability techniques are not the specific probability numbers generated, but the insights into the risk importance of software features. To defend against potential common-mode failures (CMFs), high quality, defense-in-depth, and diversity are considered key elements in digital I&C system design. To minimize the possibility of CMFs and thus increase plant reliability, we have provided defense-in-depth and diversity (D-in-D&D) analysis guidelines.

  8. APERO, AN OPEN SOURCE BUNDLE ADJUSTMENT SOFTWARE FOR AUTOMATIC CALIBRATION AND ORIENTATION OF SETS OF IMAGES

    OpenAIRE

    M. Pierrot Deseilligny; I. Clery

    2012-01-01

    IGN has developed a set of photogrammetric tools, APERO and MICMAC, for computing 3D models from sets of images. This software, developed initially for IGN's internal needs, is now delivered as open-source code. This paper focuses on the presentation of APERO, the orientation software. Compared to some other free software initiatives, it is probably more complex but also more complete; its targeted users are professionals (architects, archaeologists, geomorphologists) rather than the general public. APERO uses bo...

  9. Digital Image Analysis for Detechip Code Determination

    Directory of Open Access Journals (Sweden)

    Marcus Lyon

    2012-08-01

    Full Text Available DETECHIP® is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP® used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP®. Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods.
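
    The colour read-out behind these codes can be sketched as follows; the file name and well coordinates are placeholders, and the snippet merely stands in for the GIMP/Photoshop/ImageJ measurements rather than reproducing the published ImageJ macro.

        # Mean red-green-blue values over one rectangular well of a scanned array image.
        import numpy as np
        from PIL import Image

        image = np.asarray(Image.open("detechip_scan.jpg").convert("RGB"), dtype=float)
        y0, y1, x0, x1 = 100, 150, 200, 250              # hypothetical well boundaries (pixels)
        roi = image[y0:y1, x0:x1]

        mean_rgb = roi.reshape(-1, 3).mean(axis=0)
        print("mean R, G, B:", mean_rgb.round(1))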

  10. Strategic Analysis of a Video Compression Software Project

    OpenAIRE

    Bai, Chun Jung Rosalind

    2008-01-01

    The objective of this project is to develop a strategic recommendation for market entry of the Client's new software product based on a breakthrough predictive-decoding technology. The analysis examines videoconferencing market and reveals that there is a strong demand for the software products that can reduce delays in interactive video communications while maintaining reasonable video quality. The evaluation of the key external competitive forces suggests that the market has low intensity o...

  11. Adapted wavelet analysis from theory to software

    CERN Document Server

    Wickerhauser, Mladen Victor

    1994-01-01

    This detail-oriented text is intended for engineers and applied mathematicians who must write computer programs to perform wavelet and related analysis on real data. It contains an overview of mathematical prerequisites and proceeds to describe hands-on programming techniques to implement special programs for signal analysis and other applications. From the table of contents: - Mathematical Preliminaries - Programming Techniques - The Discrete Fourier Transform - Local Trigonometric Transforms - Quadrature Filters - The Discrete Wavelet Transform - Wavelet Packets - The Best Basis Algorithm - Multidimensional Library Trees - Time-Frequency Analysis - Some Applications - Solutions to Some of the Exercises - List of Symbols - Quadrature Filter Coefficients

  12. Developing a new software package for PSF estimation and fitting of adaptive optics images

    Science.gov (United States)

    Schreiber, Laura; Diolaiti, Emiliano; Sollima, Antonio; Arcidiacono, Carmelo; Bellazzini, Michele; Ciliegi, Paolo; Falomo, Renato; Foppiani, Italo; Greggio, Laura; Lanzoni, Barbara; Lombini, Matteo; Montegriffo, Paolo; Dalessandro, Emanuele; Massari, Davide

    2012-07-01

    Adaptive Optics (AO) images are characterized by a structured Point Spread Function (PSF), with a sharp core and extended halo, and by significant variations across the field of view. In order to enable the extraction of high-precision quantitative information and improve the scientific exploitation of AO data, efforts in PSF modeling and in the integration of suitable models in a code for image analysis are needed. We present the current status of a study on the modeling of AO PSFs based on observational data taken with present telescopes (VLT and LBT). The methods under development include parametric models and hybrid (i.e. analytical / numerical) models adapted to the various types of PSFs that can show up in AO images. The specific features of AO data, such as the mainly radial variation of the PSF with respect to the guide star position in single-reference AO, are taken into account as much as possible. The final objective of this project is the development of a flexible software package, based on the StarFinder code (Diolaiti et al. 2000), specifically dedicated to PSF estimation and to the astrometric and photometric analysis of AO images with complex and spatially variable PSF.

  13. New software for 3D fracture network analysis and visualization

    Science.gov (United States)

    Song, J.; Noh, Y.; Choi, Y.; Um, J.; Hwang, S.

    2013-12-01

    This study presents new software to perform analysis and visualization of fracture network systems in 3D. The developed software modules for analysis and visualization, such as BOUNDARY, DISK3D, FNTWK3D, CSECT and BDM, have been developed using Microsoft Visual Basic.NET and the Visualization Toolkit (VTK) open-source library. Two case studies revealed that each module plays a role in construction of the analysis domain, visualization of fracture geometry in 3D, calculation of equivalent pipes, production of cross-section maps and management of borehole data, respectively. The developed software for analysis and visualization of 3D fractured rock masses can be used to tackle geomechanical problems related to the strength, deformability and hydraulic behavior of fractured rock masses.

  14. Development of output user interface software to support analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wahanani, Nursinta Adi, E-mail: sintaadi@batan.go.id; Natsir, Khairina, E-mail: sintaadi@batan.go.id; Hartini, Entin, E-mail: sintaadi@batan.go.id [Center for Development of Nuclear Informatics - National Nuclear Energy Agency, PUSPIPTEK, Serpong, Tangerang, Banten (Indonesia)

    2014-09-30

    Data processing software packages such as VSOP and MCNPX are scientifically proven and complete, but their results are huge and complex text files. In the analysis process, users need additional processing, such as Microsoft Excel, to present the results informatively. This research develops user interface software for the output of VSOP and MCNPX: VSOP output is used to support neutronic analysis and MCNPX output is used to support burn-up analysis. Software development used iterative development methods, which allow for revision and addition of features according to user needs. Processing time with this software is 500 times faster than with conventional methods using Microsoft Excel. Python is used as the programming language because it is available for all major operating systems: Windows, Linux/Unix, OS/2, Mac, Amiga, among others. The values supporting neutronic analysis are k-eff, burn-up and the masses of Pu-239 and Pu-241. Burn-up analysis used the mass inventory values of the actinides (thorium, plutonium, neptunium and uranium). Values are visualized graphically to support the analysis.

  15. Development of output user interface software to support analysis

    Science.gov (United States)

    Wahanani, Nursinta Adi; Natsir, Khairina; Hartini, Entin

    2014-09-01

    Data processing software packages such as VSOP and MCNPX are scientifically proven and complete, but their results are huge and complex text files. In the analysis process, users need additional processing, such as Microsoft Excel, to present the results informatively. This research develops user interface software for the output of VSOP and MCNPX: VSOP output is used to support neutronic analysis and MCNPX output is used to support burn-up analysis. Software development used iterative development methods, which allow for revision and addition of features according to user needs. Processing time with this software is 500 times faster than with conventional methods using Microsoft Excel. Python is used as the programming language because it is available for all major operating systems: Windows, Linux/Unix, OS/2, Mac, Amiga, among others. The values supporting neutronic analysis are k-eff, burn-up and the masses of Pu-239 and Pu-241. Burn-up analysis used the mass inventory values of the actinides (thorium, plutonium, neptunium and uranium). Values are visualized graphically to support the analysis.

  16. QSoas: A Versatile Software for Data Analysis.

    Science.gov (United States)

    Fourmond, Vincent

    2016-05-17

    Undoubtedly, the most natural way to confirm a model is to quantitatively verify its predictions. However, this is not done systematically, and one of the reasons for that is the lack of appropriate tools for analyzing data, because the existing tools do not implement the required models or they lack the flexibility required to perform data analysis in a reasonable time. We present QSoas, an open-source, cross-platform data analysis program written to overcome these problems. In addition to standard data analysis procedures and full automation using scripts, QSoas features a very powerful data fitting interface with support for arbitrary functions, differential equation and kinetic system integration, and flexible global fits. QSoas is available from http://www.qsoas.org .

  17. Development and validation of a video analysis software for marine benthic applications

    Science.gov (United States)

    Romero-Ramirez, A.; Grémare, A.; Bernard, G.; Pascal, L.; Maire, O.; Duchêne, J. C.

    2016-10-01

    Our aim in the EU-funded JERICO project was to develop a flexible and scalable imaging platform that could be used in the widest possible set of ecological situations. Depending on research objectives, both image acquisition and analysis procedures may indeed differ. Until now, attempts to automate image analysis procedures have consisted of developing pieces of software specifically designed for a given objective. This led to the conception of a new software package, AVIExplore. Its general architecture and its three constitutive modules, AVIExplore - Mobile, AVIExplore - Fixed and AVIExplore - ScriptEdit, are presented. AVIExplore provides a unique environment for video analysis. Its main features include: (1) image selection tools allowing for the division of videos into homogeneous sections, (2) automatic extraction of targeted information, (3) solutions for long-term time series as well as large-spatial-scale image acquisition, (4) real-time acquisition and, in some cases, real-time analysis, and (5) a large range of customized image-analysis possibilities through a script editor. The flexibility of AVIExplore is illustrated and validated by three case studies: (1) coral identification and mapping, (2) identification and quantification of different types of behaviors in a mud shrimp, and (3) quantification of filtering activity in a passive suspension-feeder. The accuracy of the software, measured against visual assessment, is 90.2%, 82.7%, and 98.3% for the three case studies, respectively. Some of the advantages and current limitations of the software, as well as some of its foreseen advancements, are then briefly discussed.

  18. Software Quality Attribute Analysis by Architecture Reconstruction (SQUA3RE)

    NARCIS (Netherlands)

    Stormer, C.

    2007-01-01

    Software Quality Attribute Analysis by Architecture Reconstruction (SQUA3RE) is a method that fosters a goal-driven process to evaluate the impact of what-if scenarios on existing systems. The method is partitioned into SQA2 and ARE. The SQA2 part provides the analysis models that can be used for q

  20. Splitting a Large Software Archive for Easing Future Software Evolution: An Industrial Experience Report using Formal Concept Analysis

    NARCIS (Netherlands)

    Glorie, M.; Zaidman, A.E.; Hofland, L.; Van Deursen, A.

    2008-01-01

    Preprint of paper published in: CSMR 2008 - 12th European Conference on Software Maintenance and Reengineering, 1-4 April 2008; doi:10.1109/CSMR.2008.4493310 Philips medical systems produces medical diagnostic imaging products, such as MR, X-ray and CT scanners. The software of these devices is com

  1. Diffusion tensor imaging of the median nerve: intra-, inter-reader agreement, and agreement between two software packages.

    Science.gov (United States)

    Guggenberger, Roman; Nanz, Daniel; Puippe, Gilbert; Rufibach, Kaspar; White, Lawrence M; Sussman, Marshall S; Andreisek, Gustav

    2012-08-01

    To assess intra-, inter-reader agreement, and the agreement between two software packages for magnetic resonance diffusion tensor imaging (DTI) measurements of the median nerve. Fifteen healthy volunteers (seven men, eight women; mean age, 31.2 years) underwent DTI of both wrists at 1.5 T. Fractional anisotropy (FA) and apparent diffusion coefficient (ADC) of the median nerve were measured by three readers using two commonly used software packages. Measurements were repeated by two readers after 6 weeks. Intraclass correlation coefficients (ICC) and Bland-Altman analysis were used for statistical analysis. ICCs for intra-reader agreement ranged from 0.87 to 0.99, for inter-reader agreement from 0.62 to 0.83, and between the two software packages from 0.63 to 0.82. Bland-Altman analysis showed no differences for intra- and inter-reader agreement and agreement between software packages. The intra-, inter-reader, and agreement between software packages for DTI measurements of the median nerve were moderate to substantial suggesting that user- and software-dependent factors contribute little to variance in DTI measurements.
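
    A Bland-Altman comparison of the kind used above can be sketched in a few lines; the paired FA values below are invented, and the 1.96·SD limits of agreement are the conventional choice rather than the exact settings of the study.

        # Bland-Altman bias and limits of agreement for paired measurements.
        import numpy as np

        fa_package_1 = np.array([0.55, 0.60, 0.58, 0.62, 0.57])
        fa_package_2 = np.array([0.57, 0.59, 0.60, 0.61, 0.58])

        diff = fa_package_1 - fa_package_2
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)
        print(f"bias {bias:+.3f}, limits of agreement [{bias - loa:.3f}, {bias + loa:.3f}]")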

  2. Analysis on testing and operational reliability of software

    Institute of Scientific and Technical Information of China (English)

    ZHAO Jing; LIU Hong-wei; CUI Gang; WANG Hui-qiang

    2008-01-01

    Software reliability was estimated based on NHPP (non-homogeneous Poisson process) software reliability growth models. Testing reliability and operational reliability may be essentially different. On the basis of an analysis of the similarities and differences between the testing phase and the operational phase, and using the concepts of operational reliability and testing reliability, different forms of comparison between the operational failure ratio and the predicted testing failure ratio were constructed, and the mathematical discussion and analysis were carried out in detail. Finally, optimal software release was studied using software failure data. The results show that two kinds of conclusions can be derived from this method: one is to continue testing until the reliability level required by users is reached; the other is to stop testing once the required operational reliability is met, so that the testing cost can be reduced.
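
    One widely used NHPP growth model, the Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)), illustrates the kind of fit such an analysis rests on; the weekly failure counts below are invented, and this particular model is an assumption rather than the specific models used in the paper.

        # Fit the Goel-Okumoto mean value function to cumulative failure counts.
        import numpy as np
        from scipy.optimize import curve_fit

        def mean_value(t, a, b):
            return a * (1.0 - np.exp(-b * t))

        weeks = np.arange(1, 11, dtype=float)
        cumulative_failures = np.array([5, 9, 13, 15, 18, 19, 21, 21, 22, 23], dtype=float)

        (a, b), _ = curve_fit(mean_value, weeks, cumulative_failures, p0=(25.0, 0.2))
        intensity_now = a * b * np.exp(-b * weeks[-1])       # current predicted failure intensity
        print(f"expected total faults a = {a:.1f}, current intensity = {intensity_now:.2f} per week")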

  3. Software Construction and Analysis Tools for Future Space Missions

    Science.gov (United States)

    Lowry, Michael R.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    NASA and its international partners will increasingly depend on software-based systems to implement advanced functions for future space missions, such as Martian rovers that autonomously navigate long distances exploring geographic features formed by surface water early in the planet's history. The software-based functions for these missions will need to be robust and highly reliable, raising significant challenges in the context of recent Mars mission failures attributed to software faults. After reviewing these challenges, this paper describes tools that have been developed at NASA Ames that could contribute to meeting these challenges: 1) program synthesis tools based on automated inference that generate documentation for manual review and annotations for automated certification; 2) model-checking tools for concurrent object-oriented software that achieve scalability through synergy with program abstraction and static analysis tools.

  4. Power Analysis Tutorial for Experimental Design Software

    Science.gov (United States)

    2014-11-01

    [Only fragments of the report were indexed (table-of-contents, text, and reference residue): Appendix E – JMP Monte Carlo Simulation Script; "... freedom for error"; "In Design Expert, when constructing a design, you are asked for delta and sigma. The default model for power analysis is ..."; references include Designed Experiments, Third Edition, New York: John Wiley and Sons, 2009, and Muthen, Linda, and Bengt Muthen, "How to Use a Monte Carlo Study to ..."]

  5. Software Agent with Reinforcement Learning Approach for Medical Image Segmentation

    Institute of Scientific and Technical Information of China (English)

    Mahsa Chitsaz; Chaw Seng Woo

    2011-01-01

    Many image segmentation solutions are problem-specific. In medical images, the objects of interest have very similar grey levels and texture. Therefore, medical image segmentation still requires improvement, although research has been conducted over the past few decades. We design a self-learning framework to extract several objects of interest simultaneously from Computed Tomography (CT) images. Our segmentation method has a learning phase that is based on a reinforcement learning (RL) system. Each RL agent works on a particular sub-image of an input image to find a suitable value for each object in it. The RL system is defined by states, actions and rewards. We defined a set of actions for each state in the sub-image, and a reward function computes the reward for each action of the RL agent. Finally, the valuable information gained from discovering all states of the objects of interest is stored in a Q-matrix, and the result can be applied to the segmentation of similar images. The experimental results for cranial CT images demonstrated segmentation accuracy above 95%.
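
    The tabular Q-learning update at the heart of such a framework is sketched below; the state and action spaces, the reward signal and the stand-in environment are generic placeholders, not the paper's segmentation-specific definitions.

        # Epsilon-greedy tabular Q-learning loop with a dummy environment.
        import numpy as np

        n_states, n_actions = 16, 4
        Q = np.zeros((n_states, n_actions))
        alpha, gamma, epsilon = 0.1, 0.9, 0.2
        rng = np.random.default_rng(0)

        state = 0
        for step in range(1000):
            if rng.random() < epsilon:
                action = int(rng.integers(n_actions))          # explore
            else:
                action = int(Q[state].argmax())                # exploit
            next_state = int(rng.integers(n_states))           # stand-in transition
            reward = rng.normal()                              # stand-in reward
            Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
            state = next_state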

  6. Optimising MR perfusion imaging: comparison of different software-based approaches in acute ischaemic stroke

    Energy Technology Data Exchange (ETDEWEB)

    Schaafs, Lars-Arne [Charite-Universitaetsmedizin, Department of Radiology, Berlin (Germany); Charite-Universitaetsmedizin, Academic Neuroradiology, Department of Neurology and Center for Stroke Research, Berlin (Germany); Porter, David [Fraunhofer Institute for Medical Image Computing MEVIS, Bremen (Germany); Audebert, Heinrich J. [Charite-Universitaetsmedizin, Department of Neurology with Experimental Neurology, Berlin (Germany); Fiebach, Jochen B.; Villringer, Kersten [Charite-Universitaetsmedizin, Academic Neuroradiology, Department of Neurology and Center for Stroke Research, Berlin (Germany)

    2016-11-15

    Perfusion imaging (PI) is susceptible to confounding factors such as motion artefacts as well as delay and dispersion (D/D). We evaluate the influence of different post-processing algorithms on hypoperfusion assessment in PI analysis software packages to improve the clinical accuracy of stroke PI. Fifty patients with acute ischaemic stroke underwent MRI imaging in the first 24 h after onset. Diverging approaches to motion and D/D correction were applied. The calculated MTT and CBF perfusion maps were assessed by volumetry of lesions and tested for agreement with a standard approach and with the final lesion volume (FLV) on day 6 in patients with persisting vessel occlusion. MTT map lesion volumes were significantly smaller throughout the software packages with correction of motion and D/D when compared to the commonly used approach with no correction (p = 0.001-0.022). Volumes on CBF maps did not differ significantly (p = 0.207-0.925). All packages with advanced post-processing algorithms showed a high level of agreement with FLV (ICC = 0.704-0.879). Correction of D/D had a significant influence on estimated lesion volumes and leads to significantly smaller lesion volumes on MTT maps. This may improve patient selection. (orig.)

  7. Application of ImageJ Analysis Software in Measuring Kernel Size of Maize Seeds

    Institute of Scientific and Technical Information of China (English)

    白光红; 张义荣; 刘弋菊; 邢鸿雁; 严建兵; 彭惠茹; 章建新; 李建生

    2009-01-01

    A new method for measuring kernel size with the aid of digital image processing based on the ImageJ software is introduced. The kernel size of 70 maize accessions was measured both by image processing and with vernier calipers; the relative error of the image-processing measurements was below 2% in every case. Paired t-tests on kernel length, width and thickness showed no significant difference between the two methods, and correlation analysis showed a highly significant linear correlation between them. Compared with vernier calipers, image processing with ImageJ is faster and more convenient, and it can be used for practical measurement of seed size in maize and other crops.
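    The measurement itself can be approximated with standard open-source tools; the sketch below, which assumes a hypothetical RGB photograph and an assumed pixel-to-millimetre calibration, uses scikit-image rather than ImageJ to threshold the kernels and read length and width off fitted ellipse axes.

        import numpy as np
        from skimage import io, color, filters, measure

        # Hypothetical file name and calibration; both are assumptions.
        img = io.imread("maize_kernels.png")
        gray = color.rgb2gray(img) if img.ndim == 3 else img
        mask = gray < filters.threshold_otsu(gray)   # kernels assumed darker than background
        labels = measure.label(mask)

        pixels_per_mm = 20.0                         # assumed scale from a ruler in the image
        for region in measure.regionprops(labels):
            if region.area < 100:                    # ignore small debris
                continue
            length_mm = region.major_axis_length / pixels_per_mm
            width_mm = region.minor_axis_length / pixels_per_mm
            print(f"kernel {region.label}: length={length_mm:.2f} mm, width={width_mm:.2f} mm")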

  8. Software package for the design and analysis of DNA origami structures

    DEFF Research Database (Denmark)

    Andersen, Ebbe Sloth; Nielsen, Morten Muhlig; Dong, Mingdong

    A software package was developed for the semi-automated design of DNA origamis and further data analysis of Atomic Force Microscopy (AFM) images. As an example, we design the shape of a bottlenose dolphin and analyze it by means of high resolution AFM imaging. A high yield of DNA dolphins was observed on the mica surface, with a fraction of the dolphin nanostructures showing extensive tail flexibility of approximately 90 degrees. The Java editor and tools are free software distributed under the GNU license. The open architecture of the editor makes it easy for the scientific community to contribute new tools and functionalities. Documentation, tutorials and software will be made available online.

  9. The software analysis project for the Office of Human Resources

    Science.gov (United States)

    Tureman, Robert L., Jr.

    1994-01-01

    There were two major sections of the project for the Office of Human Resources (OHR). The first section was a planning study to analyze software use, with the goal of recommending software purchases and determining whether the need exists for a file server. The second section was analysis and distribution planning for a retirement planning computer program entitled VISION provided by NASA Headquarters. The software planning study was developed to help OHR analyze the current administrative desktop computing environment and make decisions regarding software acquisition and implementation. There were three major areas addressed by the study: the current environment, new software requirements, and strategies regarding the implementation of a server in the Office. To gather data on the current environment, employees were surveyed and an inventory of computers was produced. The surveys were compiled and analyzed by the ASEE fellow with interpretation help by OHR staff. New software requirements represented a compilation and analysis of the surveyed requests of OHR personnel. Finally, the information on the use of a server represents research done by the ASEE fellow and analysis of survey data to determine software requirements for a server. This included selection of a methodology to estimate the number of copies of each software program required given current use and estimated growth. The report presents the results of the computing survey, a description of the current computing environment, recommendations for changes in the computing environment, current software needs, management advantages of using a server, and management considerations in the implementation of a server. In addition, detailed specifications were presented for the hardware and software recommendations to offer a complete picture to OHR management. The retirement planning computer program available to NASA employees will aid in long-range retirement planning. The intended audience is the NASA civil

  10. Software for Data Analysis Programming with R

    CERN Document Server

    Chambers, John

    2008-01-01

    Although statistical design is one of the oldest branches of statistics, its importance is ever increasing, especially in the face of the data flood that often faces statisticians. It is important to recognize the appropriate design, and to understand how to effectively implement it, being aware that the default settings from a computer package can easily provide an incorrect analysis. The goal of this book is to describe the principles that drive good design, paying attention to both the theoretical background and the problems arising from real experimental situations. Designs are motivated t

  11. Method, apparatus and software for analyzing perfusion images

    NARCIS (Netherlands)

    Spreeuwers, Lieuwe Jan; Breeuwer, Marcel

    2007-01-01

    The invention relates to a method for analyzing perfusion images, in particular MR perfusion images, of a human or animal organ including the steps of: (a) defining at least one contour of the organ, and (b) establishing at least one perfusion parameter of a region of interest of said organ within a

  12. Method, apparatus and software for analyzing perfusion images

    NARCIS (Netherlands)

    Spreeuwers, Lieuwe Jan; Breeuwer, Marcel

    2004-01-01

    The invention relates to a method for analyzing perfusion images, in particular MR perfusion images, of a human or animal organ including the steps of: (a) defining at least one contour of the organ, and (b) establishing at least one perfusion parameter of a region of interest of said organ within a

  13. Evaluation of Co-rich manganese deposits by image analysis and photogrammetric techniques

    Digital Repository Service at National Institute of Oceanography (India)

    Yamazaki, T.; Sharma, R.; Tsurusaki, K.

    Stereo-seabed photographs of Co-rich manganese deposits on a mid-Pacific seamount, were analysed using an image analysis software for coverage estimation and size classification of nodules, and a photogrammetric software for calculation of height...

  14. Automated image analysis in the study of collagenous colitis

    DEFF Research Database (Denmark)

    Kanstrup, Anne-Marie Fiehn; Kristensson, Martin; Engel, Ulla

    2016-01-01

    PURPOSE: The aim of this study was to develop an automated image analysis software to measure the thickness of the subepithelial collagenous band in colon biopsies with collagenous colitis (CC) and incomplete CC (CCi). The software measures the thickness of the collagenous band on microscopic...

  15. Eclipse: ESO C Library for an Image Processing Software Environment

    Science.gov (United States)

    Devillard, Nicolas

    2011-12-01

    Written in ANSI C, eclipse is a library offering numerous services related to astronomical image processing: FITS data access, various image and cube loading methods, binary image handling and filtering (including convolution and morphological filters), 2-D cross-correlation, connected components, cube and image arithmetic, dead pixel detection and correction, object detection, data extraction, flat-fielding with robust fit, image generation, statistics, photometry, image-space resampling, image combination, and cube stacking. It also contains support for mathematical tools like random number generation, FFT, curve fitting, matrices, fast median computation, and point-pattern matching. The main feature of this library is its ability to handle large amounts of input data (up to 2GB in the current version) regardless of the amount of memory and swap available on the local machine. Another feature is the very high speed allowed by optimized C, making it an ideal base tool for programming efficient number-crunching applications, e.g., on parallel (Beowulf) systems.
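    As an illustration of one of the services listed (dead pixel detection and correction), the following Python/NumPy sketch flags and replaces outlier pixels using a local median filter; this is a generic approach, not the algorithm implemented in eclipse, and the test frame is synthetic.

        import numpy as np
        from scipy.ndimage import median_filter

        def correct_dead_pixels(image, nsigma=5.0):
            """Flag pixels that deviate strongly from their local median and
            replace them with that median (simple sketch, not eclipse's algorithm)."""
            med = median_filter(image, size=3)
            residual = image - med
            # robust scale estimate (median absolute deviation)
            sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))
            bad = np.abs(residual) > nsigma * max(sigma, 1e-12)
            corrected = np.where(bad, med, image)
            return corrected, bad

        # toy example: a noisy flat frame with two defective pixels
        rng = np.random.default_rng(1)
        frame = 100.0 + rng.normal(0.0, 1.0, size=(64, 64))
        frame[10, 12] = 0.0
        frame[40, 7] = 10000.0
        fixed, mask = correct_dead_pixels(frame)
        print(int(mask.sum()), "pixels corrected")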

  16. Applications of the BEam Cross section Analysis Software (BECAS)

    DEFF Research Database (Denmark)

    Blasques, José Pedro Albergaria Amaral; Bitsche, Robert; Fedorov, Vladimir;

    2013-01-01

    A newly developed framework is presented for structural design and analysis of long slender beam-like structures, e.g., wind turbine blades. The framework is based on the BEam Cross section Analysis Software – BECAS – a finite element based cross section analysis tool. BECAS is used for the generation of beam finite element models which correctly account for effects stemming from material anisotropy and inhomogeneity in cross sections of arbitrary geometry. This type of modelling approach allows for an accurate yet computationally inexpensive representation of a general class of three...

  17. The BEPCⅡ Data Production and BESⅢ offline Analysis Software System

    Institute of Scientific and Technical Information of China (English)

    ZepuMAO

    2001-01-01

    The BES detector has operated for about 12 years, and the BES offline data analysis environment has been developed and upgraded along with developments of the BES hardware and software. The BESIII software system will operate for many years, so it should keep pace with new software technology and be highly flexible, powerful, stable and easy to maintain. The following points should be taken into account: 1) To benefit the collaboration and facilitate exchanges with international HEP experiments, the system should be set up by adopting, or referring to, the newest software technology from advanced experiments in the world. 2) It should support the hundreds of existing BES software packages and serve both experienced users who are familiar with the BESII software and computing environment and new members who will benefit from the new system. 3) Most existing BESII packages will be modified or re-designed according to the hardware changes.

  18. Availability Analysis of Application Servers Using Software Rejuvenation and Virtualization

    Institute of Scientific and Technical Information of China (English)

    Thandar Thein; Jong Sou Park

    2009-01-01

    Demands on software reliability and availability have increased tremendously due to the nature of present-day applications. We focus on the software aspect of high availability for application servers, since the unavailability of servers more often originates from software faults rather than hardware faults. The software rejuvenation technique has been widely used to avoid the occurrence of unplanned failures, mainly due to the phenomena of software aging or caused by transient failures. In this paper, first we present a new way of using virtual machine based software rejuvenation, named VMSR, to offer high availability for application server systems. Second we model a single physical server which is used to host multiple virtual machines (VMs) with the VMSR framework using stochastic modeling and evaluate it through both numerical analysis and SHARPE (Symbolic Hierarchical Automated Reliability and Performance Evaluator) tool simulation. This VMSR model is very general and can capture application server characteristics, failure behavior, and performability measures. Our results demonstrate that the VMSR approach is a practical way to ensure uninterrupted availability and to optimize performance for aging applications.

  19. The ImageJ ecosystem: An open platform for biomedical image analysis.

    Science.gov (United States)

    Schindelin, Johannes; Rueden, Curtis T; Hiner, Mark C; Eliceiri, Kevin W

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available, from commercial to academic, special-purpose to Swiss army knife, small to large, but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on the life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts the life sciences, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem.

  20. Design and Implementation of Convex Analysis of Mixtures Software Suite

    OpenAIRE

    Meng, Fan

    2012-01-01

    Various convex analysis of mixtures (CAM) based algorithms have been developed to address real world blind source separation (BSS) problems and proven to have good performances in previous papers. This thesis reported the implementation of a comprehensive software CAM-Java, which contains three different CAM based algorithms, CAM compartment modeling (CAM-CM), CAM non-negative independent component analysis (CAM-nICA), and CAM non-negative well-grounded component analysis (CAM-nWCA). The imp...

  1. RAVEN, a New Software for Dynamic Risk Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cristian Rabiti; Andrea Alfonsi; Joshua Cogliati; Diego Mandelli; Robert Kinoshita

    2014-06-01

    RAVEN is a generic software driver to perform parametric and probabilistic analysis of codes simulating complex systems. Initially developed to provide dynamic risk analysis capabilities to the RELAP-7 code [1], it is currently being generalized through the addition of Application Programming Interfaces (APIs). These interfaces are used to extend RAVEN's capabilities to any software, as long as all the parameters that need to be perturbed are accessible via input files or directly via Python interfaces. RAVEN can investigate the system response by probing the input space using Monte Carlo, grid, or Latin Hypercube sampling schemes, but its strength is its focus on system feature discovery, such as limit surfaces separating regions of the input space that lead to system failure, using dynamic supervised learning techniques. The paper presents an overview of the software capabilities and their implementation schemes, followed by some application examples.

  2. Analysis of Gumbel Model for Software Reliability Using Bayesian Paradigm

    Directory of Open Access Journals (Sweden)

    Raj Kumar

    2012-12-01

    Full Text Available In this paper, we have illustrated the suitability of the Gumbel model for software reliability data. The model parameters are estimated using likelihood-based inferential procedures: classical as well as Bayesian. The quasi-Newton-Raphson algorithm is applied to obtain the maximum likelihood estimates and associated probability intervals. The Bayesian estimates of the parameters of the Gumbel model are obtained using the Markov Chain Monte Carlo (MCMC) simulation method in OpenBUGS (established software for Bayesian analysis using Markov Chain Monte Carlo methods). R functions are developed to study the statistical properties, model validation and comparison tools of the model, and to analyze the output of the MCMC samples generated from OpenBUGS. Details of applying MCMC to parameter estimation for the Gumbel model are elaborated and a real software reliability data set is considered to illustrate the methods of inference discussed in this paper.
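    For the classical (maximum likelihood) side of such an analysis, a minimal Python sketch using SciPy is shown below; the failure-time data are synthetic and merely stand in for a real software reliability data set.

        import numpy as np
        from scipy import stats

        # Synthetic "time between failures" data, for illustration only
        failure_times = stats.gumbel_r.rvs(loc=30.0, scale=10.0, size=50, random_state=42)

        # Maximum likelihood estimates of the Gumbel location and scale parameters
        loc_hat, scale_hat = stats.gumbel_r.fit(failure_times)
        print(f"MLE: location={loc_hat:.2f}, scale={scale_hat:.2f}")

        # Approximate 95% probability interval for a future failure time
        lower, upper = stats.gumbel_r.ppf([0.025, 0.975], loc=loc_hat, scale=scale_hat)
        print(f"95% interval for the next failure time: [{lower:.1f}, {upper:.1f}]")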

  3. Initial Investigation of Software-Based Bone-Suppressed Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Park, Eunpyeong; Youn, Hanbean; Kim, Ho Kyung [Pusan National University, Busan (Korea, Republic of)

    2015-05-15

    Chest radiography is the most widely used imaging modality in medicine. However, the diagnostic performance of chest radiography is degraded by the anatomical background of the patient. Dual-energy imaging (DEI) has therefore emerged and demonstrated improved performance. However, typical DEI requires more than two projections, causing additional patient dose, and motion artifacts are another concern. In this study, we investigate DEI-like bone-suppressed imaging based on post-processing of a single radiograph. To obtain bone-only images, we use an artificial neural network (ANN) trained with the error-backpropagation machine learning approach. The computational load of the ANN learning process is too heavy for a practical implementation because we use the gradient descent method for the error backpropagation; we will therefore use a more advanced error propagation method for the learning process.
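    The gradient-descent error backpropagation referred to here can be sketched in a few lines of NumPy; the network size, the synthetic patch data and the target values below are assumptions for illustration only, not the authors' configuration.

        import numpy as np

        # Toy regression: map small flattened image patches to a "bone" intensity value.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 25))                  # 500 synthetic 5x5 patches (assumption)
        true_w = rng.normal(size=25)
        y = X @ true_w + 0.1 * rng.normal(size=500)     # synthetic target (stand-in for bone signal)

        # One hidden layer, trained with plain gradient descent / backpropagation
        W1 = rng.normal(scale=0.1, size=(25, 16)); b1 = np.zeros(16)
        W2 = rng.normal(scale=0.1, size=(16, 1));  b2 = np.zeros(1)
        lr = 0.01
        for epoch in range(200):
            h = np.tanh(X @ W1 + b1)                    # forward pass
            pred = (h @ W2 + b2).ravel()
            err = pred - y
            loss = np.mean(err ** 2)
            # backward pass (error backpropagation)
            dpred = (2.0 / len(y)) * err[:, None]
            dW2 = h.T @ dpred;  db2 = dpred.sum(axis=0)
            dh = (dpred @ W2.T) * (1.0 - h ** 2)        # tanh derivative
            dW1 = X.T @ dh;     db1 = dh.sum(axis=0)
            W1 -= lr * dW1; b1 -= lr * db1
            W2 -= lr * dW2; b2 -= lr * db2
        print(f"final training MSE: {loss:.4f}")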

  4. Clinical software VIII for magnetic resonance imaging systems

    Energy Technology Data Exchange (ETDEWEB)

    Kohno, Satoru; Takeo, Kazuhiro [Medical Applications Department, Medical Systems Division, Shimadzu Corporation, Kyoto (Japan)

    2001-02-01

    This report describes the latest techniques of MRA (magnetic resonance angiography) and the brain attack diagnosis protocol which are now effectively utilized in the Shimadzu-Marconi MAGNEX ECLIPSE MRI (magnetic resonance imaging) system (1.5 tesla type) and the MAGNEX POLARIS MRI system (1.0 tesla type). As for the latest techniques for MRA, this report refers to the SLINKY (sliding interleaved ky) technique, which provides high-resolution images over a wide range in the direction of slice, without using contrast agent, and to the iPass technique which enables highly reliable CE-MRA (contrast-enhanced magnetic resonance angiography), through easy and simple operation. Also reported are the techniques of diffusion imaging and perfusion imaging, utilized for stroke assessment. (author)

  5. Free software for performing physical analysis of systems for digital radiography and mammography

    Energy Technology Data Exchange (ETDEWEB)

    Donini, Bruno; Lanconelli, Nico, E-mail: nico.lanconelli@unibo.it [Alma Mater Studiorum, Department of Physics and Astronomy, University of Bologna, Bologna 40127 (Italy); Rivetti, Stefano [Fisica Medica, Ospedale di Sassuolo S.p.A., Sassuolo 41049 (Italy); Bertolini, Marco [Medical Physics Unit, Azienda Ospedaliera ASMN, Istituto di Ricovero e Cura a Carattere Scientifico, Reggio Emilia 42123 (Italy)

    2014-05-15

    Purpose: In this paper, the authors present a free software for assisting users in achieving the physical characterization of x-ray digital systems and image quality checks. Methods: The program was developed as a plugin of a well-known public-domain suite ImageJ. The software can assist users in calculating various physical parameters such as the response curve (also termed signal transfer property), modulation transfer function (MTF), noise power spectra (NPS), and detective quantum efficiency (DQE). It also includes the computation of some image quality checks: defective pixel analysis, uniformity, dark analysis, and lag. Results: The software was made available in 2009 and has been used during the last couple of years by many users who gave us valuable feedback for improving its usability. It was tested for achieving the physical characterization of several clinical systems for digital radiography and mammography. Various published papers made use of the outcomes of the plugin. Conclusions: This software is potentially beneficial to a variety of users: physicists working in hospitals, staff working in radiological departments, such as medical physicists, physicians, engineers. The plugin, together with a brief user manual, are freely available and can be found online ( http://www.medphys.it/downloads.htm ). With our plugin users can estimate all three most important parameters used for physical characterization (MTF, NPS, and also DQE). The plugin can run on any operating system equipped with ImageJ suite. The authors validated the software by comparing MTF and NPS curves on a common set of images with those obtained with other dedicated programs, achieving a very good agreement.
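    As an example of one of the quantities the plugin computes, the following NumPy sketch estimates a 2-D noise power spectrum from non-overlapping ROIs of a flat-field image using the usual FFT-based definition; the flat-field image and pixel pitch are synthetic assumptions, and the plugin's actual implementation may differ in detrending and averaging details.

        import numpy as np

        def noise_power_spectrum(flat, roi=128, pixel_pitch=0.1):
            """2-D NPS estimate from non-overlapping ROIs of a flat-field image.
            NPS(u,v) = (dx*dy / (Nx*Ny)) * <|FFT2(ROI - mean)|^2>  (standard definition)."""
            spectra = []
            for i in range(0, flat.shape[0] - roi + 1, roi):
                for j in range(0, flat.shape[1] - roi + 1, roi):
                    block = flat[i:i + roi, j:j + roi]
                    block = block - block.mean()            # simple local detrending
                    spectra.append(np.abs(np.fft.fft2(block)) ** 2)
            nps = np.mean(spectra, axis=0) * (pixel_pitch ** 2) / (roi * roi)
            return np.fft.fftshift(nps)

        # synthetic flat-field image with white noise (illustration only)
        rng = np.random.default_rng(0)
        flat = 1000.0 + rng.normal(0.0, 10.0, size=(1024, 1024))
        nps = noise_power_spectrum(flat)
        print(nps.shape, nps.mean())   # for white noise, NPS is roughly flat at sigma^2 * dx * dy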

  6. New tools for digital medical image processing implemented in DIP software

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Erica A.C.; Santana, Ivan E. [Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco, Recife, PE (Brazil); Lima, Fernando R.A., E-mail: falima@cnen.gov.b [Centro Regional de Ciencias Nucleares, (CRCN/NE-CNEN-PE), Recife, PE (Brazil); Viera, Jose W. [Escola Politecnica de Pernambuco, Recife, PE (Brazil)

    2011-07-01

    The anthropomorphic models used in computational dosimetry, also called phantoms, are mostly built from stacks of CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) images obtained from scans of patients or volunteers. The construction of voxel phantoms requires computational processing for transforming image formats, stacking two-dimensional (2D) images to form three-dimensional (3D) arrays, quantization, resampling, enhancement, restoration and image segmentation, among others. The computational dosimetry researcher rarely finds all of these capabilities in a single software package, which often slows research or leads to inadequate use of alternative tools. The need to integrate the various tasks of the original digital image processing to obtain an image that can be used in a computational exposure model led to the development of the DIP (Digital Image Processing) software. This software reads, writes and edits binary files containing the 3D matrix corresponding to a stack of cross-sectional images of a given geometry, which can be a human body or another volume of interest. It can also read any type of computer image and perform conversions. When a task involves only one output image, it is saved in the standard Windows JPEG format. When it involves a stack of images, the binary output file is called SGI (Interactive Graphic Simulations, a symbol already used in other publications of the Research Group in Numerical Dosimetry). This paper presents the third version of the DIP software and emphasizes the new tools implemented in it. Currently it has the menus Basics, Views, Spatial Domain, Frequency Domain, Segmentations and Study. Each menu contains items and subitems with features that generally require an image as input and produce an image or an attribute as output. (author)
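    A minimal sketch of the kind of task DIP automates, reading a binary voxel stack and exporting one slice as a JPEG, is shown below in Python; the file name, matrix dimensions and data type are assumptions for illustration, not the SGI format specification.

        import numpy as np
        from PIL import Image

        # Assumed geometry and data type of the binary stack; adjust to the real file.
        nz, ny, nx = 128, 256, 256
        volume = np.fromfile("phantom.sgi", dtype=np.uint8).reshape(nz, ny, nx)

        # Extract the middle transverse slice and rescale to 0-255 for display
        mid = volume[nz // 2].astype(np.float32)
        mid = 255.0 * (mid - mid.min()) / max(float(np.ptp(mid)), 1e-12)
        Image.fromarray(mid.astype(np.uint8)).save("middle_slice.jpg")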

  7. Algorithms and software for total variation image reconstruction via first-order methods

    DEFF Research Database (Denmark)

    Dahl, Joahim; Hansen, Per Christian; Jensen, Søren Holdt

    2010-01-01

    This paper describes new algorithms and related software for total variation (TV) image reconstruction, more specifically: denoising, inpainting, and deblurring. The algorithms are based on one of Nesterov's first-order methods, tailored to the image processing applications in such a way that...

  8. Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing

    Science.gov (United States)

    Logan, Thomas L.; Bryant, Nevin A.

    1987-01-01

    The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Imaged Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of the VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between the VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.

  9. Orbiter subsystem hardware/software interaction analysis. Volume 8: AFT reaction control system, part 2

    Science.gov (United States)

    Becker, D. D.

    1980-01-01

    The orbiter subsystems and interfacing program elements which interact with the orbiter computer flight software are analyzed. The failure modes identified in the subsystem/element failure mode and effects analysis are examined. Potential interaction with the software is examined through an evaluation of the software requirements. The analysis is restricted to flight software requirements and excludes utility/checkout software. The results of the hardware/software interaction analysis for the forward reaction control system are presented.

  10. Nuquantus: Machine learning software for the characterization and quantification of cell nuclei in complex immunofluorescent tissue images

    Science.gov (United States)

    Gross, Polina; Honnorat, Nicolas; Varol, Erdem; Wallner, Markus; Trappanese, Danielle M.; Sharp, Thomas E.; Starosta, Timothy; Duran, Jason M.; Koller, Sarah; Davatzikos, Christos; Houser, Steven R.

    2016-03-01

    Determination of fundamental mechanisms of disease often hinges on histopathology visualization and quantitative image analysis. Currently, the analysis of multi-channel fluorescence tissue images is primarily achieved by manual measurements of tissue cellular content and sub-cellular compartments. Since the current manual methodology for image analysis is a tedious and subjective approach, there is clearly a need for an automated analytical technique to process large-scale image datasets. Here, we introduce Nuquantus (Nuclei quantification utility software) - a novel machine learning-based analytical method, which identifies, quantifies and classifies nuclei based on cells of interest in composite fluorescent tissue images, in which cell borders are not visible. Nuquantus is an adaptive framework that learns the morphological attributes of intact tissue in the presence of anatomical variability and pathological processes. Nuquantus allowed us to robustly perform quantitative image analysis on remodeling cardiac tissue after myocardial infarction. Nuquantus reliably classifies cardiomyocyte versus non-cardiomyocyte nuclei and detects cell proliferation, as well as cell death in different cell classes. Broadly, Nuquantus provides innovative computerized methodology to analyze complex tissue images that significantly facilitates image analysis and minimizes human bias.

  11. Environment for Test and Analysis of Distributed Software (ETADS)

    Science.gov (United States)

    1994-09-27

    [Abstract not available; the record contains only fragments of the final report's cover letter and of a table of supported platforms (e.g., OSF >= 3.0, BAL Sequent Balance, BFLY BBN Butterfly TC2000, BSD386 80[34]86 systems running BSDI/386BSD/NetBSD/FreeBSD, CM2 Thinking Machines CM-2 with a Sun front end).]

  12. PROTEINCHALLENGE: Crowd sourcing in proteomics analysis and software development

    DEFF Research Database (Denmark)

    Martin, Sarah F.; Falkenberg, Heiner; Dyrlund, Thomas Franck;

    2013-01-01

    , including arguments for community-wide open source software development and “big data” compatible solutions for the future. In the meantime, we have laid out ten top tips for data processing. With these at hand, a first large-scale proteomics analysis hopefully becomes less daunting to navigate...

  13. Comparative Analysis and Evaluation of Existing Risk Management Software

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available The focus of this article lies on the specific features of existing software packages for risk management, differentiating three categories. As representative of these categories we consider the Crystal Ball, Haufe Risikomanager and MIS - Risk Management solutions, outlining their strengths and weaknesses in a comparative analysis.

  14. Software for analysis of equine ground reaction force data

    NARCIS (Netherlands)

    Schamhardt, H.C.; Merkens, H.W.; Lammertink, J.L.M.A.

    1986-01-01

    Software for analysis of force plate recordings of the horse at normal walk is described. The data of a number of stance phases are averaged to obtain a representative tracing of that horse. The amplitudes of a number of characteristic peaks in the force-time curves are used to compare left and righ

  15. Using Business Analysis Software in a Business Intelligence Course

    Science.gov (United States)

    Elizondo, Juan; Parzinger, Monica J.; Welch, Orion J.

    2011-01-01

    This paper presents an example of a project used in an undergraduate business intelligence class which integrates concepts from statistics, marketing, and information systems disciplines. SAS Enterprise Miner software is used as the foundation for predictive analysis and data mining. The course culminates with a competition and the project is used…

  16. Medical Image Analysis Facility

    Science.gov (United States)

    1978-01-01

    To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.

  17. Software-based high-level synthesis design of FPGA beamformers for synthetic aperture imaging.

    Science.gov (United States)

    Amaro, Joao; Yiu, Billy Y S; Falcao, Gabriel; Gomes, Marco A C; Yu, Alfred C H

    2015-05-01

    Field-programmable gate arrays (FPGAs) can potentially be configured as beamforming platforms for ultrasound imaging, but a long design time and skilled expertise in hardware programming are typically required. In this article, we present a novel approach to the efficient design of FPGA beamformers for synthetic aperture (SA) imaging via the use of software-based high-level synthesis techniques. Software kernels (coded in OpenCL) were first developed to stage-wise handle SA beamforming operations, and their corresponding FPGA logic circuitry was emulated through a high-level synthesis framework. After design space analysis, the fine-tuned OpenCL kernels were compiled into register transfer level descriptions to configure an FPGA as a beamformer module. The processing performance of this beamformer was assessed through a series of offline emulation experiments that sought to derive beamformed images from SA channel-domain raw data (40-MHz sampling rate, 12 bit resolution). With 128 channels, our FPGA-based SA beamformer can achieve 41 frames per second (fps) processing throughput (3.44 × 10^8 pixels per second for frame size of 256 × 256 pixels) at 31.5 W power consumption (1.30 fps/W power efficiency). It utilized 86.9% of the FPGA fabric and operated at a 196.5 MHz clock frequency (after optimization). Based on these findings, we anticipate that FPGA and high-level synthesis can together foster rapid prototyping of real-time ultrasound processor modules at low power consumption budgets.
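    For reference, the delay-and-sum operation that such beamformers implement stage-wise can be written compactly on the CPU; the NumPy sketch below assumes a simple synthetic-aperture geometry (single virtual source at the array centre) and toy channel data, and is not the OpenCL kernel described in the article.

        import numpy as np

        def das_beamform(rf, fs, c, pitch, xs, zs, tx_x=0.0, tx_z=0.0):
            """Delay-and-sum beamforming for one synthetic-aperture transmission.
            rf: (n_samples, n_channels) raw channel data; xs, zs: image grid in metres."""
            n_samples, n_channels = rf.shape
            elem_x = (np.arange(n_channels) - (n_channels - 1) / 2) * pitch
            image = np.zeros((zs.size, xs.size))
            for iz, z in enumerate(zs):
                for ix, x in enumerate(xs):
                    # two-way path: virtual source to pixel, then pixel back to each element
                    d_tx = np.hypot(x - tx_x, z - tx_z)
                    d_rx = np.hypot(x - elem_x, z)
                    idx = np.round((d_tx + d_rx) / c * fs).astype(int)
                    valid = idx < n_samples
                    image[iz, ix] = rf[idx[valid], np.nonzero(valid)[0]].sum()
            return image

        # toy data: 128 channels and 40 MHz sampling as in the article, random RF (illustration)
        rng = np.random.default_rng(0)
        rf = rng.normal(size=(2048, 128))
        xs = np.linspace(-5e-3, 5e-3, 64)
        zs = np.linspace(5e-3, 25e-3, 64)
        img = das_beamform(rf, fs=40e6, c=1540.0, pitch=0.3e-3, xs=xs, zs=zs)
        print(img.shape)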

  18. The Technical Analysis on the Conversion of Stereoscopic Image in the Software of NUKE

    Institute of Scientific and Technical Information of China (English)

    何小凡

    2014-01-01

    Post-production conversion is currently a quick way to satisfy the market's large demand for stereoscopic film content. Using the professional compositing software NUKE, ordinary 2D footage is reworked frame by frame to assign depth along the viewing axis and is thereby converted into stereoscopic images. The technique consists of four main production stages: ROTO (splitting the image into layers), DEPTH (assigning depth), CLEAN PLATE (filling in areas revealed behind objects), and CONVERT (the final conversion).

  19. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

    Section I, Introduction to Digital Image Processing and Analysis: Digital Image Processing and Analysis (Overview; Image Analysis and Computer Vision; Image Processing and Human Vision; Key Points; Exercises; References; Further Reading); Computer Imaging Systems (Imaging Systems Overview; Image Formation and Sensing; CVIPtools Software; Image Representation; Key Points; Exercises; Supplementary Exercises; References; Further Reading). Section II, Digital Image Analysis and Computer Vision: Introduction to Digital Image Analysis (Introduction; Preprocessing; Binary Image Analysis; Key Points; Exercises; Supplementary Exercises; References; Further Read...)

  20. IDP: Image and data processing (software) in C++

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    IDP++ (Image and Data Processing in C++) is a compiled, multidimensional, multi-data-type signal processing environment written in C++. It is being developed within the Radar Ocean Imaging group and is intended as a partial replacement for View. IDP++ takes advantage of the latest object-oriented compiler technology to provide 'information hiding.' Users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions, in an intuitive manner. IDP++ is being designed for real-time environments where interpreted signal processing packages are less efficient.

  1. A Knowledge-based Environment for Software Process Performance Analysis

    Directory of Open Access Journals (Sweden)

    Natália Chaves Lessa Schots

    2015-08-01

    Full Text Available Background: Process performance analysis is a key step for implementing continuous improvement in software organizations. However, the knowledge needed to execute such analysis is not trivial, and the person responsible for executing it must be provided with appropriate support. Aim: This paper presents a knowledge-based environment, named SPEAKER, proposed for supporting software organizations during the execution of process performance analysis. SPEAKER comprises a body of knowledge and a set of activities and tasks for software process performance analysis, along with supporting tools for executing these activities and tasks. Method: We conducted an informal literature review and a systematic mapping study, which provided basic requirements for the proposed environment. We implemented the SPEAKER environment by integrating supporting tools for the execution of the activities and tasks of performance analysis with the knowledge necessary to execute them, in order to meet the variability presented by the characteristics of these activities. Results: In this paper, we describe each SPEAKER module and the individual evaluations of these modules, and also present an example of use showing how the environment can guide the user through a specific performance analysis activity. Conclusion: Although we only conducted individual evaluations of SPEAKER's modules, the example of use indicates the feasibility of the proposed environment. Therefore, the environment as a whole will be further evaluated to verify whether it attains its goal of assisting in the execution of process performance analysis by non-specialist people.

  2. Software for browsing sectioned images of a dog body and generating a 3D model.

    Science.gov (United States)

    Park, Jin Seo; Jung, Yong Wook

    2016-01-01

    The goals of this study were (1) to provide accessible and instructive browsing software for sectioned images and a portable document format (PDF) file that includes three-dimensional (3D) models of an entire dog body and (2) to develop techniques for segmentation and 3D modeling that would enable an investigator to perform these tasks without the aid of a computer engineer. To achieve these goals, relatively important or large structures in the sectioned images were outlined to generate segmented images. The sectioned and segmented images were then packaged into browsing software. In this software, structures in the sectioned images are shown in detail and in real color. After 3D models were made from the segmented images, the 3D models were exported into a PDF file. In this format, the 3D models could be manipulated freely. The browsing software and PDF file are available for study by students, for lecture for teachers, and for training for clinicians. These files will be helpful for anatomical study by and clinical training of veterinary students and clinicians. Furthermore, these techniques will be useful for researchers who study two-dimensional images and 3D models.

  3. Software and codes for analysis of concentrating solar power technologies.

    Energy Technology Data Exchange (ETDEWEB)

    Ho, Clifford Kuofei

    2008-12-01

    This report presents a review and evaluation of software and codes that have been used to support Sandia National Laboratories concentrating solar power (CSP) program. Additional software packages developed by other institutions and companies that can potentially improve Sandia's analysis capabilities in the CSP program are also evaluated. The software and codes are grouped according to specific CSP technologies: power tower systems, linear concentrator systems, and dish/engine systems. A description of each code is presented with regard to each specific CSP technology, along with details regarding availability, maintenance, and references. A summary of all the codes is then presented with recommendations regarding the use and retention of the codes. A description of probabilistic methods for uncertainty and sensitivity analyses of concentrating solar power technologies is also provided.

  4. One-Click Data Analysis Software for Science Operations

    Science.gov (United States)

    Navarro, Vicente

    2015-12-01

    One of the important activities of the ESA Science Operations Centre is to provide Data Analysis Software (DAS) to enable users and scientists to process data further to higher levels. During operations and post-operations, Data Analysis Software (DAS) is fully maintained and updated for new OS and library releases. Nonetheless, once a Mission goes into the "legacy" phase, there are very limited funds and long-term preservation becomes more and more difficult. Building on Virtual Machine (VM), Cloud computing and Software as a Service (SaaS) technologies, this project has aimed at providing long-term preservation of Data Analysis Software for the following missions: - PIA for ISO (1995) - SAS for XMM-Newton (1999) - Hipe for Herschel (2009) - EXIA for EXOSAT (1983) The following goals have guided the architecture: - Support for all operations, post-operations and archive/legacy phases. - Support for local (user's computer) and cloud environments (ESAC-Cloud, Amazon - AWS). - Support for expert users, requiring full capabilities. - Provision of a simple web-based interface. This talk describes the architecture, challenges, results and lessons learnt in this project.

  5. GammaLib and ctools. A software framework for the analysis of astronomical gamma-ray data

    Science.gov (United States)

    Knödlseder, J.; Mayer, M.; Deil, C.; Cayrou, J.-B.; Owen, E.; Kelley-Hoskins, N.; Lu, C.-C.; Buehler, R.; Forest, F.; Louge, T.; Siejkowski, H.; Kosack, K.; Gerard, L.; Schulz, A.; Martin, P.; Sanchez, D.; Ohm, S.; Hassan, T.; Brau-Nogué, S.

    2016-08-01

    The field of gamma-ray astronomy has seen important progress during the last decade, yet to date no common software framework has been developed for the scientific analysis of gamma-ray telescope data. We propose to fill this gap by means of the GammaLib software, a generic library that we have developed to support the analysis of gamma-ray event data. GammaLib was written in C++ and all functionality is available in Python through an extension module. Based on this framework we have developed the ctools software package, a suite of software tools that enables flexible workflows to be built for the analysis of Imaging Air Cherenkov Telescope event data. The ctools are inspired by science analysis software available for existing high-energy astronomy instruments, and they follow the modular ftools model developed by the High Energy Astrophysics Science Archive Research Center. The ctools were written in Python and C++, and can be either used from the command line via shell scripts or directly from Python. In this paper we present the GammaLib and ctools software versions 1.0 that were released at the end of 2015. GammaLib and ctools are ready for the science analysis of Imaging Air Cherenkov Telescope event data, and also support the analysis of Fermi-LAT data and the exploitation of the COMPTEL legacy data archive. We propose using ctools as the science tools software for the Cherenkov Telescope Array Observatory.

  6. Reliability evaluation of I-123 ADAM SPECT imaging using SPM software and AAL ROI methods

    Science.gov (United States)

    Yang, Bang-Hung; Tsai, Sung-Yi; Wang, Shyh-Jen; Su, Tung-Ping; Chou, Yuan-Hwa; Chen, Chia-Chieh; Chen, Jyh-Cheng

    2011-08-01

    The level of serotonin is regulated by the serotonin transporter (SERT), a decisive protein in the serotonin neurotransmission system. Many psychiatric disorders and therapies are also related to the concentration of cerebral serotonin. I-123 ADAM is a novel radiopharmaceutical for imaging SERT in the brain. The aim of this study was to measure the reliability of SERT densities in healthy volunteers by the automated anatomical labeling (AAL) method. Furthermore, we also used statistical parametric mapping (SPM) in a voxel-by-voxel analysis to find cortical differences between test and retest I-123 ADAM single photon emission computed tomography (SPECT) images. Twenty-one healthy volunteers were scanned twice with SPECT at 4 h after intravenous administration of 185 MBq of 123I-ADAM. The image matrix size was 128×128 and the pixel size was 3.9 mm. All images were obtained with a filtered back-projection (FBP) reconstruction algorithm. Region of interest (ROI) definition was performed based on the AAL brain template in the PMOD version 2.95 software package. ROI demarcations were placed on the midbrain, pons, striatum, and cerebellum. All images were spatially normalized to the SPECT MNI (Montreal Neurological Institute) templates supplied with SPM2, and each image was transformed into standard stereotactic space, matched to the Talairach and Tournoux atlas. Differences across scans were then statistically estimated in a voxel-by-voxel analysis using a paired t-test (population main effect: 2 cond's, 1 scan/cond.), which was applied to compare the concentration of SERT between the test and retest cerebral scans. The average specific uptake ratio (SUR: target/cerebellum-1) of 123I-ADAM binding to SERT was 1.78±0.27 in the midbrain, 1.21±0.53 in the pons, and 0.79±0.13 in the striatum. Cronbach's α for the intra-class correlation coefficient (ICC) was 0.92. In addition, there was no statistically significant finding in any cerebral area using the SPM2 analysis. This finding might help us
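    The specific uptake ratio used here is simple arithmetic; the sketch below computes it from made-up mean ROI counts chosen only so that they reproduce the regional SURs reported in the abstract.

        import numpy as np

        def specific_uptake_ratio(target_counts, cerebellum_counts):
            """SUR = target / cerebellum - 1, as used for 123I-ADAM SERT binding."""
            return np.mean(target_counts) / np.mean(cerebellum_counts) - 1.0

        # made-up mean ROI counts for illustration only
        midbrain, pons, striatum, cerebellum = 278.0, 221.0, 179.0, 100.0
        for name, roi in [("midbrain", midbrain), ("pons", pons), ("striatum", striatum)]:
            print(name, round(specific_uptake_ratio(roi, cerebellum), 2))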

  7. Reliability evaluation of I-123 ADAM SPECT imaging using SPM software and AAL ROI methods

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Bang-Hung [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan (China); Department of Nuclear Medicine, Taipei Veterans General Hospital, Taiwan (China); Tsai, Sung-Yi [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan (China); Department of Imaging Medical, St.Martin De Porres Hospital, Chia-Yi, Taiwan (China); Wang, Shyh-Jen [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan (China); Department of Nuclear Medicine, Taipei Veterans General Hospital, Taiwan (China); Su, Tung-Ping; Chou, Yuan-Hwa [Department of Psychiatry, Taipei Veterans General Hospital, Taipei, Taiwan (China); Chen, Chia-Chieh [Institute of Nuclear Energy Research, Longtan, Taiwan (China); Chen, Jyh-Cheng, E-mail: jcchen@ym.edu.tw [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan (China)

    2011-08-21

    The level of serotonin is regulated by the serotonin transporter (SERT), a decisive protein in the serotonin neurotransmission system. Many psychiatric disorders and therapies are also related to the concentration of cerebral serotonin. I-123 ADAM is a novel radiopharmaceutical for imaging SERT in the brain. The aim of this study was to measure the reliability of SERT densities in healthy volunteers by the automated anatomical labeling (AAL) method. Furthermore, we also used statistical parametric mapping (SPM) in a voxel-by-voxel analysis to find cortical differences between test and retest I-123 ADAM single photon emission computed tomography (SPECT) images. Twenty-one healthy volunteers were scanned twice with SPECT at 4 h after intravenous administration of 185 MBq of 123I-ADAM. The image matrix size was 128x128 and the pixel size was 3.9 mm. All images were obtained with a filtered back-projection (FBP) reconstruction algorithm. Region of interest (ROI) definition was performed based on the AAL brain template in the PMOD version 2.95 software package. ROI demarcations were placed on the midbrain, pons, striatum, and cerebellum. All images were spatially normalized to the SPECT MNI (Montreal Neurological Institute) templates supplied with SPM2, and each image was transformed into standard stereotactic space, matched to the Talairach and Tournoux atlas. Differences across scans were then statistically estimated in a voxel-by-voxel analysis using a paired t-test (population main effect: 2 cond's, 1 scan/cond.), which was applied to compare the concentration of SERT between the test and retest cerebral scans. The average specific uptake ratio (SUR: target/cerebellum-1) of 123I-ADAM binding to SERT was 1.78±0.27 in the midbrain, 1.21±0.53 in the pons, and 0.79±0.13 in the striatum. Cronbach's α for the intra-class correlation coefficient (ICC) was 0.92. In addition, there was no statistically significant finding in any cerebral area using SPM2

  8. Image sequence analysis

    CERN Document Server

    1981-01-01

    The processing of image sequences has a broad spectrum of important applications including target tracking, robot navigation, bandwidth compression of TV conferencing video signals, studying the motion of biological cells using microcinematography, cloud tracking, and highway traffic monitoring. Image sequence processing involves a large amount of data. However, because of the progress in computer, LSI, and VLSI technologies, we have now reached a stage when many useful processing tasks can be done in a reasonable amount of time. As a result, research and development activities in image sequence analysis have recently been growing at a rapid pace. An IEEE Computer Society Workshop on Computer Analysis of Time-Varying Imagery was held in Philadelphia, April 5-6, 1979. A related special issue of the IEEE Transactions on Pattern Analysis and Machine Intelligence was published in November 1980. The IEEE Computer magazine has also published a special issue on the subject in 1981. The purpose of this book ...

  9. Decision Engines for Software Analysis Using Satisfiability Modulo Theories Solvers

    Science.gov (United States)

    Bjorner, Nikolaj

    2010-01-01

    The area of software analysis, testing and verification is now undergoing a revolution thanks to the use of automated and scalable support for logical methods. A well-recognized premise is that at the core of software analysis engines is invariably a component using logical formulas for describing states and transformations between system states. The process of using this information for discovering and checking program properties (including such important properties as safety and security) amounts to automatic theorem proving. In particular, theorem provers that directly support common software constructs offer a compelling basis. Such provers are commonly called satisfiability modulo theories (SMT) solvers. Z3 is a state-of-the-art SMT solver. It is developed at Microsoft Research. It can be used to check the satisfiability of logical formulas over one or more theories such as arithmetic, bit-vectors, lists, records and arrays. The talk describes some of the technology behind modern SMT solvers, including the solver Z3. Z3 is currently mainly targeted at solving problems that arise in software analysis and verification. It has been applied to various contexts, such as systems for dynamic symbolic simulation (Pex, SAGE, Vigilante), for program verification and extended static checking (Spec#/Boogie, VCC, HAVOC), for software model checking (Yogi, SLAM), model-based design (FORMULA), security protocol code (F7), program run-time analysis and invariant generation (VS3). We will describe how it integrates support for a variety of theories that arise naturally in the context of the applications. There are several new promising avenues and the talk will touch on some of these and the challenges related to SMT solvers.
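    As a small taste of the solver interface discussed in the talk, the following snippet uses Z3's Python bindings (z3py) to find a model for a set of constraints and to discharge a simple verification condition; the constraints themselves are invented examples, not from the talk.

        from z3 import Ints, Solver, Implies, And, Not, sat

        # Does an integer solution exist with x + y == 10, x > y, and both positive?
        x, y = Ints("x y")
        s = Solver()
        s.add(x + y == 10, x > y, x > 0, y > 0)
        print(s.check())           # sat
        print(s.model())           # e.g. [x = 6, y = 4]

        # Check a verification condition: if 0 <= i < n and n <= 100 then i < 100.
        i, n = Ints("i n")
        vc = Implies(And(0 <= i, i < n, n <= 100), i < 100)
        s2 = Solver()
        s2.add(Not(vc))            # search for a counterexample
        print("counterexample found" if s2.check() == sat else "property holds")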

  10. ACQ4: an open-source software platform for data acquisition and analysis in neurophysiology research

    Directory of Open Access Journals (Sweden)

    Luke eCampagnola

    2014-01-01

    Full Text Available The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org.

  11. ACQ4: an open-source software platform for data acquisition and analysis in neurophysiology research.

    Science.gov (United States)

    Campagnola, Luke; Kratz, Megan B; Manis, Paul B

    2014-01-01

    The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org.

  12. Development and Evaluation of an Open-Source Software Package “CGITA” for Quantifying Tumor Heterogeneity with Molecular Images

    Directory of Open Access Journals (Sweden)

    Yu-Hua Dean Fang

    2014-01-01

    Full Text Available Background. The quantification of tumor heterogeneity with molecular images, by analyzing the local or global variation in the spatial arrangements of pixel intensity with texture analysis, possesses a great clinical potential for treatment planning and prognosis. To address the lack of available software for computing the tumor heterogeneity on the public domain, we develop a software package, namely, Chang-Gung Image Texture Analysis (CGITA) toolbox, and provide it to the research community as a free, open-source project. Methods. With a user-friendly graphical interface, CGITA provides users with an easy way to compute more than seventy heterogeneity indices. To test and demonstrate the usefulness of CGITA, we used a small cohort of eighteen locally advanced oral cavity (ORC) cancer patients treated with definitive radiotherapies. Results. In our case study of ORC data, we found that more than ten of the current implemented heterogeneity indices outperformed SUVmean for outcome prediction in the ROC analysis with a higher area under curve (AUC). Heterogeneity indices provide a better area under the curve up to 0.9 than the SUVmean and TLG (0.6 and 0.52, resp.). Conclusions. CGITA is a free and open-source software package to quantify tumor heterogeneity from molecular images. CGITA is available for free for academic use at http://code.google.com/p/cgita.

  13. Development and evaluation of an open-source software package "CGITA" for quantifying tumor heterogeneity with molecular images.

    Science.gov (United States)

    Fang, Yu-Hua Dean; Lin, Chien-Yu; Shih, Meng-Jung; Wang, Hung-Ming; Ho, Tsung-Ying; Liao, Chun-Ta; Yen, Tzu-Chen

    2014-01-01

    The quantification of tumor heterogeneity with molecular images, by analyzing the local or global variation in the spatial arrangements of pixel intensity with texture analysis, possesses a great clinical potential for treatment planning and prognosis. To address the lack of available software for computing the tumor heterogeneity on the public domain, we develop a software package, namely, Chang-Gung Image Texture Analysis (CGITA) toolbox, and provide it to the research community as a free, open-source project. With a user-friendly graphical interface, CGITA provides users with an easy way to compute more than seventy heterogeneity indices. To test and demonstrate the usefulness of CGITA, we used a small cohort of eighteen locally advanced oral cavity (ORC) cancer patients treated with definitive radiotherapies. In our case study of ORC data, we found that more than ten of the currently implemented heterogeneity indices outperformed SUVmean for outcome prediction in the ROC analysis with a higher area under the curve (AUC). Heterogeneity indices provide a better area under the curve, up to 0.9, than the SUVmean and TLG (0.6 and 0.52, resp.). CGITA is a free and open-source software package to quantify tumor heterogeneity from molecular images. CGITA is available for free for academic use at http://code.google.com/p/cgita.
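
    As a rough illustration of the kind of comparison reported above, the sketch below computes one simple heterogeneity index (intensity-histogram entropy inside a tumour ROI) for a set of synthetic patients and scores it with an ROC AUC. It is not CGITA code, the index is a generic one rather than any specific CGITA index, and all data and names are invented.

```python
# Illustrative only: a generic histogram-entropy heterogeneity index and its
# ROC AUC against a binary outcome, mirroring the type of comparison made
# against SUVmean in the abstract. Patients and intensities are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

def intensity_entropy(roi_values, bins=64):
    """Histogram entropy of voxel intensities inside a tumour ROI."""
    hist, _ = np.histogram(roi_values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
outcomes = np.array([0] * 9 + [1] * 9)            # 18 hypothetical patients
entropies = [intensity_entropy(rng.normal(5.0, 2.0 if y else 1.0, size=500))
             for y in outcomes]                    # "positive" tumours more heterogeneous
print(f"entropy-index AUC = {roc_auc_score(outcomes, entropies):.2f}")
```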

  14. TScratch: a novel and simple software tool for automated analysis of monolayer wound healing assays.

    Science.gov (United States)

    Gebäck, Tobias; Schulz, Martin Michael Peter; Koumoutsakos, Petros; Detmar, Michael

    2009-04-01

    Cell migration plays a major role in development, physiology, and disease, and is frequently evaluated in vitro by the monolayer wound healing assay. The assay analysis, however, is a time-consuming task that is often performed manually. In order to accelerate this analysis, we have developed TScratch, a new, freely available image analysis technique and associated software tool that uses the fast discrete curvelet transform to automate the measurement of the area occupied by cells in the images. This tool helps to significantly reduce the time needed for analysis and enables objective and reproducible quantification of assays. The software also offers a graphical user interface which allows easy inspection of analysis results and, if desired, manual modification of analysis parameters. The automated analysis was validated by comparing its results with manual-analysis results for a range of different cell lines. The comparisons demonstrate a close agreement for the vast majority of images that were examined and indicate that the present computational tool can reproduce statistically significant results in experiments with well-known cell migration inhibitors and enhancers.
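
    The quantity TScratch reports, the fraction of the image area occupied by cells, can be approximated with a much simpler texture-threshold approach than the curvelet transform the tool actually uses. The sketch below, based on scikit-image, is therefore only a conceptual stand-in: cell regions are assumed to be textured while the scratch is smooth, and cell_covered_fraction is a hypothetical helper.

```python
# Simplified illustration (not the TScratch algorithm, which uses the fast
# discrete curvelet transform): estimate the cell-covered fraction of a
# wound-healing image from local texture energy.
import numpy as np
from skimage import filters

def cell_covered_fraction(image):
    texture = filters.sobel(image)                  # edge magnitude as a texture proxy
    smooth = filters.gaussian(texture, sigma=5)     # local texture energy
    mask = smooth > filters.threshold_otsu(smooth)  # textured regions = cells
    return float(mask.mean())

# Synthetic example: noisy "cell" regions around a smooth central scratch.
rng = np.random.default_rng(1)
img = np.clip(rng.normal(0.5, 0.15, size=(200, 200)), 0, 1)
img[:, 80:120] = 0.5                                # smooth scratch area
print(f"covered fraction ≈ {cell_covered_fraction(img):.2f}")
```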

  15. Image fusion using MIM software via picture archiving and communication system

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The preliminary studies of multimodality image registration and fusion were performed using image fusion software and a picture archiving and communication system (PACS) to explore the methodology. Original image volume data were acquired with a CT scanner, MR and dual-head coincidence SPECT, respectively. The data sets from all imaging devices were queried, retrieved, transferred and accessed via DICOM PACS. The image fusion was performed at the SPECT ICON workstation, where the MIM (Medical Image Merge) fusion software was installed. The images were created by reslicing the original volume on the fly. The image volumes were aligned by translation and rotation of these view ports with respect to the original volume orientation. The transparency factor and contrast were adjusted so that both volumes could be visualized in the merged images. The image volume data of CT, MR and nuclear medicine were transferred, accessed and loaded via PACS successfully. Perfectly fused images of chest CT/18F-FDG and brain MR/SPECT were obtained. These results showed that the image fusion technique using PACS was feasible and practical. Further experimentation and larger validation studies are needed to explore the full potential of its clinical use.
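
    The display step described above, showing two registered volumes in one merged image by adjusting a transparency factor, amounts to alpha blending of corresponding slices. A minimal sketch, assuming the slices are already registered and resliced into the same orientation, could look like the following (names are illustrative):

```python
# Alpha blending of two registered slices so both modalities stay visible.
import numpy as np

def fuse_slices(anatomical, functional, alpha=0.6):
    """Blend a grey-scale anatomical slice with a functional (e.g. SPECT) slice."""
    a = (anatomical - anatomical.min()) / (np.ptp(anatomical) + 1e-9)
    f = (functional - functional.min()) / (np.ptp(functional) + 1e-9)
    return alpha * a + (1.0 - alpha) * f   # alpha plays the role of the transparency factor

ct_slice = np.random.rand(128, 128)       # placeholder anatomical slice
spect_slice = np.random.rand(128, 128)    # placeholder functional slice
fused = fuse_slices(ct_slice, spect_slice, alpha=0.6)
```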

  16. A Software Tool for Integrated Optical Design Analysis

    Science.gov (United States)

    Moore, Jim; Troy, Ed; DePlachett, Charles; Montgomery, Edward (Technical Monitor)

    2001-01-01

    Design of large precision optical systems requires multi-disciplinary analysis, modeling, and design. Thermal, structural and optical characteristics of the hardware must be accurately understood in order to design a system capable of accomplishing the performance requirements. The interactions between each of the disciplines become stronger as systems are designed to be lighter in weight for space applications. This coupling dictates a concurrent engineering design approach. In the past, integrated modeling tools have been developed that attempt to integrate all of the complex analysis within the framework of a single model. This often results in modeling simplifications and requires engineering specialists to learn new applications. The software described in this presentation addresses the concurrent engineering task using a different approach. The software tool, Integrated Optical Design Analysis (IODA), uses data fusion technology to enable a cross-discipline team of engineering experts to concurrently design an optical system using their standard validated engineering design tools.

  17. Simulated spectra for QA/QC of spectral analysis software

    Energy Technology Data Exchange (ETDEWEB)

    Jackman, K. R. (Kevin R.); Biegalski, S. R.

    2004-01-01

    Monte Carlo simulated spectra have been developed to test the peak analysis algorithms of several spectral analysis software packages. Using MCNP 5, generic sample spectra were generated in order to perform ANSI N42.14 standard spectral tests on Canberra Genie-2000, Ortec GammaVision, and UniSampo. The reference spectra were generated in MCNP 5 using an F8 (pulse height) tally with a detector model of an actual germanium detector used in counting. The detector model matches the detector resolution, energy calibration, and efficiency. The simulated spectra and the detector model were found to be useful in testing the reliability and performance of modern spectral analysis software tools. The software packages were analyzed and found to be in compliance with the ANSI N42.14 tests of the peak-search and peak-fitting algorithms. This method of using simulated spectra can be used to perform the ANSI N42.14 tests on the reliability and performance of spectral analysis programs in the absence of standard radioactive materials.
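
    The underlying idea, generating a spectrum with known reference peaks and checking that a peak-search algorithm recovers them, can be sketched without MCNP at all. The toy example below builds a synthetic gamma-ray spectrum with Poisson counting statistics and uses scipy.signal.find_peaks as a stand-in peak search; it only illustrates the test principle, not the ANSI N42.14 procedure, and the peak channels are arbitrary.

```python
# Synthetic spectrum with known peaks, used to exercise a peak-search routine.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(42)
channels = np.arange(4096)
truth = [662, 1173, 1332]                                   # reference peak channels (illustrative)
spectrum = 200.0 * np.exp(-channels / 1500.0)               # smooth continuum
for c in truth:
    spectrum += 500.0 * np.exp(-0.5 * ((channels - c) / 3.0) ** 2)
spectrum = rng.poisson(spectrum).astype(float)              # counting statistics

found, _ = find_peaks(spectrum, prominence=100, distance=10)
recovered = [c for c in truth if np.any(np.abs(found - c) <= 3)]
print(f"recovered {len(recovered)} of {len(truth)} reference peaks")
```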

  18. Knickpoint finder: A software tool that improves neotectonic analysis

    Science.gov (United States)

    Queiroz, G. L.; Salamuni, E.; Nascimento, E. R.

    2015-03-01

    This work presents a new software tool for morphometric analysis of drainage networks based on the methods of Hack (1973) and Etchebehere et al. (2004). This tool is applicable to studies of morphotectonics and neotectonics. The software uses a digital elevation model (DEM) to identify relief breakpoints (knickpoints) along drainage profiles. The program was coded in Python for use on the ArcGIS platform and is called Knickpoint Finder. A study area was selected to test and evaluate the software's ability to analyze and identify neotectonic morphostructures based on the morphology of the terrain. For an assessment of its validity, we chose an area of the James River basin, which covers most of the Piedmont area of Virginia (USA), a region of constant intraplate seismicity and non-orogenic active tectonics that exhibits a relatively homogeneous geodesic surface currently being altered by the seismogenic features of the region. After using the tool in the chosen area, we found that the knickpoint locations are associated with the geologic structures, epicenters of recent earthquakes, and drainages with rectilinear anomalies. The regional analysis demanded the use of a spatial representation of the data after processing with Knickpoint Finder. The results were satisfactory in terms of the correlation of dense areas of knickpoints with active lineaments and the rapidity of the identification of deformed areas. Therefore, this software tool may be considered useful in neotectonic analyses of large areas and may be applied to any area where there is DEM coverage.
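
    Conceptually, a knickpoint is a break in the slope of a stream's longitudinal profile. The sketch below flags points on a single elevation-versus-distance profile where the local gradient departs strongly from the reach-scale trend; it is a deliberately simplified stand-in for the DEM-based Knickpoint Finder workflow, with find_knickpoints and its thresholds chosen arbitrarily.

```python
# Simplified knickpoint detection on one longitudinal stream profile.
import numpy as np

def find_knickpoints(distance_m, elevation_m, window=5, factor=2.0):
    slope = -np.gradient(elevation_m, distance_m)            # positive downhill slope
    trend = np.convolve(slope, np.ones(window) / window, mode="same")
    return np.where(slope > factor * np.maximum(trend, 1e-6))[0]

# Synthetic concave profile with an artificial slope break near 6 km.
d = np.linspace(0, 10_000, 200)
z = 1000.0 * np.exp(-d / 5000.0)
z[d > 6000] -= 40.0                                          # relief breakpoint
idx = find_knickpoints(d, z)
print("knickpoint candidates at distances (m):", d[idx].round())
```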

  19. MIAWARE Software

    DEFF Research Database (Denmark)

    Wilkowski, Bartlomiej; Pereira, Oscar N. M.; Dias, Paulo

    2008-01-01

    This article presents MIAWARE, a software for Medical Image Analysis With Automated Reporting Engine, which was designed and developed for doctor/radiologist assistance. It allows an image stack from a computed axial tomography scan of the lungs (thorax) to be analyzed and, at the same time, all pathologies to be marked on the images and their characteristics reported. The reporting process is normalized - radiologists cannot describe pathological changes with their own words, but can only use terms from a specific vocabulary set provided by the software. Consequently, a normalized radiological report is automatically generated. Furthermore, the MIAWARE software is accompanied by an intelligent search engine for medical reports, based on the relations between parts of the lungs. A logical structure of the lungs is introduced to the search algorithm through a specially developed ontology. As a result...

  20. SMV model-based safety analysis of software requirements

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Kwang Yong [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, 373-1, Guseong-dong, Yuseong-gu, Daejeon 305-701 (Korea, Republic of); Seong, Poong Hyun [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, 373-1, Guseong-dong, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)], E-mail: phseong@kaist.ac.kr

    2009-02-15

    Fault tree analysis (FTA) is one of the most frequently applied safety analysis techniques when developing safety-critical industrial systems such as software-based emergency shutdown systems of nuclear power plants and has been used for safety analysis of software requirements in the nuclear industry. However, the conventional method for safety analysis of software requirements has several problems in terms of correctness and efficiency; the fault tree generated from natural language specifications may contain flaws or errors while the manual work of safety verification is very labor-intensive and time-consuming. In this paper, we propose a new approach to resolve problems of the conventional method; we generate a fault tree from a symbolic model verifier (SMV) model, not from natural language specifications, and verify safety properties automatically, not manually, by a model checker SMV. To demonstrate the feasibility of this approach, we applied it to shutdown system 2 (SDS2) of Wolsong nuclear power plant (NPP). In spite of subtle ambiguities present in the approach, the results of this case study demonstrate its overall feasibility and effectiveness.

  1. Mvox: Interactive 2-4D medical image and graphics visualization software

    DEFF Research Database (Denmark)

    Bro-Nielsen, Morten

    1996-01-01

    Mvox is a new tool for visualization, segmentation and manipulation of a wide range of 2-4D grey level and colour images, and 3D surface graphics, which has been developed at the Department of Mathematical Modelling, Technical University of Denmark. The principal idea behind the software has been to provide a flexible tool that is able to handle all the kinds of data that are typically used in a research environment for medical imaging and visualization. At the same time the software should be easy to use and have a consistent interface providing locally only the functions relevant to the context...

  2. Effectiveness of an Automatic Tracking Software in Underwater Motion Analysis

    Directory of Open Access Journals (Sweden)

    Fabrício A. Magalhaes

    2013-12-01

    Full Text Available Tracking of markers placed on anatomical landmarks is a common practice in sports science to perform the kinematic analysis that interests both athletes and coaches. Although different software programs have been developed to automatically track markers and/or features, none of them was specifically designed to analyze underwater motion. Hence, this study aimed to evaluate the effectiveness of software developed for automatic tracking of underwater movements (DVP), based on the Kanade-Lucas-Tomasi feature tracker. Twenty-one video recordings of different aquatic exercises (n = 2940 markers’ positions) were manually tracked to determine the markers’ center coordinates. Then, the videos were automatically tracked using DVP and a commercially available software package (COM). Since tracking techniques may produce false targets, an operator was instructed to stop the automatic procedure and to correct the position of the cursor when the distance between the calculated marker coordinate and the reference one was higher than 4 pixels. The proportion of manual interventions required by the software was used as a measure of the degree of automation. Overall, manual interventions were 10.4% lower for DVP (7.4%) than for COM (17.8%). Moreover, when examining the different exercise modes separately, the percentage of manual interventions was 5.6% to 29.3% lower for DVP than for COM. Similar results were observed when analyzing the type of marker rather than the type of exercise, with 9.9% fewer manual interventions for DVP than for COM. In conclusion, based on these results, the automatic tracking software presented here can be used as a valid and useful tool for underwater motion analysis.
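
    The tracking core described above is the Kanade-Lucas-Tomasi method, for which OpenCV provides a pyramidal implementation. The sketch below shows how a single marker position might be propagated between two frames with cv2.calcOpticalFlowPyrLK and where a manual-correction fallback would hook in; it is an independent illustration, not DVP code, and track_marker is a hypothetical helper.

```python
# Minimal KLT-style marker tracking between two grey-scale frames with OpenCV.
import numpy as np
import cv2

def track_marker(prev_gray, next_gray, marker_xy):
    p0 = np.array([[marker_xy]], dtype=np.float32)            # shape (1, 1, 2)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, p0, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    if status[0][0] == 1:
        return tuple(p1[0, 0])                                 # new (x, y) position
    return None                                                # lost: fall back to manual correction

# Synthetic test: a bright dot shifted by 3 pixels between frames.
prev = np.zeros((100, 100), dtype=np.uint8)
nxt = np.zeros_like(prev)
cv2.circle(prev, (40, 50), 5, 255, -1)
cv2.circle(nxt, (43, 50), 5, 255, -1)
print(track_marker(prev, nxt, (40.0, 50.0)))
```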

  3. Model Based Analysis and Test Generation for Flight Software

    Science.gov (United States)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  4. Monte Carlo PENRADIO software for dose calculation in medical imaging

    Science.gov (United States)

    Adrien, Camille; Lòpez Noriega, Mercedes; Bonniaud, Guillaume; Bordy, Jean-Marc; Le Loirec, Cindy; Poumarede, Bénédicte

    2014-06-01

    The increase on the collective radiation dose due to the large number of medical imaging exams has led the medical physics community to deeply consider the amount of dose delivered and its associated risks in these exams. For this purpose we have developed a Monte Carlo tool, PENRADIO, based on a modified version of PENELOPE code 2006 release, to obtain an accurate individualized radiation dose in conventional and interventional radiography and in computed tomography (CT). This tool has been validated showing excellent agreement between the measured and simulated organ doses in the case of a hip conventional radiography and a coronography. We expect the same accuracy in further results for other localizations and CT examinations.

  5. Spectrum analysis on quality requirements consideration in software design documents.

    Science.gov (United States)

    Kaiya, Haruhiko; Umemura, Masahiro; Ogata, Shinpei; Kaijiri, Kenji

    2013-12-01

    Software quality requirements defined in the requirements analysis stage should be implemented in the final products, such as source codes and system deployment. To guarantee this meta-requirement, quality requirements should be considered in the intermediate stages, such as the design stage or the architectural definition stage. We propose a novel method for checking whether quality requirements are considered in the design stage. In this method, a technique called "spectrum analysis for quality requirements" is applied not only to requirements specifications but also to design documents. The technique enables us to derive the spectrum of a document, and quality requirements considerations in the document are numerically represented in the spectrum. We can thus objectively identify whether the considerations of quality requirements in a requirements document are adapted to its design document. To validate the method, we applied it to commercial software systems with the help of a supporting tool, and we confirmed that the method worked well.

  6. eXtended CASA Line Analysis Software Suite (XCLASS)

    CERN Document Server

    Möller, T; Schilke, P

    2015-01-01

    The eXtended CASA Line Analysis Software Suite (XCLASS) is a toolbox for the Common Astronomy Software Applications package (CASA) containing new functions for modeling interferometric and single dish data. Among the tools is the myXCLASS program, which calculates synthetic spectra by solving the radiative transfer equation for an isothermal object in one dimension, while the finite source size and dust attenuation are also taken into account. Molecular data required by the myXCLASS program are taken from an embedded SQLite3 database containing entries from the Cologne Database for Molecular Spectroscopy (CDMS) and JPL using the Virtual Atomic and Molecular Data Center (VAMDC) portal. Additionally, the toolbox provides an interface for the model optimizer package Modeling and Analysis Generic Interface for eXternal numerical codes (MAGIX), which helps to find the best description of observational data using myXCLASS (or another external model program), i.e., finding the parameter set that most closely reproduces t...

  7. Calibration Analysis Software for the ATLAS Pixel Detector

    CERN Document Server

    Stramaglia, Maria Elena; The ATLAS collaboration

    2015-01-01

    The calibration of the ATLAS Pixel detector at LHC fulfils two main purposes: to tune the front-end configuration parameters for establishing the best operational settings and to measure the tuning performance through a subset of scans. An analysis framework has been set up in order to take actions on the detector given the outcome of a calibration scan (e.g. to create a mask for disabling noisy pixels). The software framework to control all aspects of the Pixel detector scans and analyses is called Calibration Console. The introduction of a new layer, equipped with new Front End-I4 chips, required an update of the Console architecture. It now handles scans and scan analyses applied together to chips with different characteristics. An overview of the newly developed Calibration Analysis Software will be presented, together with some preliminary results.

  8. Calibration Analysis Software for the ATLAS Pixel Detector

    CERN Document Server

    Stramaglia, Maria Elena; The ATLAS collaboration

    2015-01-01

    The calibration of the Pixel detector fulfills two main purposes: to tune front-end registers for establishing the best operational settings and to measure the tuning performance through a subset of scans. An analysis framework has been set up in order to take actions on the detector given the outcome of a calibration scan (e.g. to create a mask for disabling noisy pixels). The software framework to control all aspects of the Pixel detector scans and analyses is called Calibration Console. The introduction of a new layer, equipped with new Front End-I4 chips, required an update of the Console architecture. It now handles scans and scan analyses applied together to chips with different characteristics. An overview of the newly developed Calibration Analysis Software will be presented, together with some preliminary results.

  9. Using Statistical Analysis Software to Advance Nitro Plasticizer Wettability

    Energy Technology Data Exchange (ETDEWEB)

    Shear, Trevor Allan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-29

    Statistical analysis in science is an extremely powerful tool that is often underutilized. Additionally, it is frequently the case that data is misinterpreted or not used to its fullest extent. Utilizing the advanced software JMP®, many aspects of experimental design and data analysis can be evaluated and improved. This overview will detail the features of JMP® and how they were used to advance a project, resulting in time and cost savings, as well as the collection of scientifically sound data. The project analyzed in this report addresses the inability of a nitro plasticizer to coat a gold coated quartz crystal sensor used in a quartz crystal microbalance. Through the use of the JMP® software, the wettability of the nitro plasticizer was increased by over 200% using an atmospheric plasma pen, ensuring good sample preparation and reliable results.

  10. Calibration analysis software for the ATLAS Pixel Detector

    Science.gov (United States)

    Stramaglia, Maria Elena

    2016-07-01

    The calibration of the ATLAS Pixel Detector at LHC fulfils two main purposes: to tune the front-end configuration parameters for establishing the best operational settings and to measure the tuning performance through a subset of scans. An analysis framework has been set up in order to take actions on the detector given the outcome of a calibration scan (e.g. to create a mask for disabling noisy pixels). The software framework to control all aspects of the Pixel Detector scans and analyses is called calibration console. The introduction of a new layer, equipped with new FE-I4 chips, required an update of the console architecture. It now handles scans and scan analyses applied together to chips with different characteristics. An overview of the newly developed calibration analysis software will be presented, together with some preliminary results.
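
    One concrete example of "taking action on the detector given the outcome of a calibration scan" is deriving a noisy-pixel mask from an occupancy map. The sketch below is a generic illustration of that step, not Calibration Console code; the matrix size and threshold are placeholders.

```python
# Generic noisy-pixel masking from a noise-scan occupancy map.
import numpy as np

def noisy_pixel_mask(occupancy, n_sigma=5.0):
    """occupancy: 2-D array of hit counts per pixel from a noise scan."""
    mean, std = occupancy.mean(), occupancy.std()
    return occupancy > mean + n_sigma * std          # True = pixel to be disabled

rng = np.random.default_rng(11)
occ = rng.poisson(2.0, size=(336, 80))               # FE-I4-like pixel matrix (illustrative)
occ[100, 40] = 500                                   # one obviously noisy pixel
mask = noisy_pixel_mask(occ)
print("pixels flagged:", int(mask.sum()))
```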

  11. Evaluating the Quantitative Capabilities of Metagenomic Analysis Software.

    Science.gov (United States)

    Kerepesi, Csaba; Grolmusz, Vince

    2016-05-01

    DNA sequencing technologies are applied widely and frequently today to describe metagenomes, i.e., microbial communities in environmental or clinical samples, without the need for culturing them. These technologies usually return short (100-300 base-pairs long) DNA reads, and these reads are processed by metagenomic analysis software that assigns phylogenetic composition information to the dataset. Here we evaluate three metagenomic analysis software packages (AmphoraNet, a webserver implementation of AMPHORA2; MG-RAST; and MEGAN5) for their capabilities of assigning quantitative phylogenetic information to the data, describing the frequency of appearance of microorganisms of the same taxa in the sample. The difficulties of the task arise from the fact that longer genomes produce more reads from the same organism than shorter genomes, and some software assigns higher frequencies to species with longer genomes than to those with shorter ones. This phenomenon is called the "genome length bias." Dozens of complex artificial metagenome benchmarks can be found in the literature. Because of the complexity of those benchmarks, it is usually difficult to judge the resistance of a metagenomic software package to this "genome length bias." Therefore, we have made a simple benchmark for the evaluation of "taxon counting" in a metagenomic sample: we took the same number of copies of three full bacterial genomes of different lengths, broke them up randomly into short reads with an average length of 150 bp, and mixed the reads, creating our simple benchmark. Because of its simplicity, the benchmark is not supposed to serve as a mock metagenome, but if a software package fails on that simple task, it will surely fail on most real metagenomes. We applied the three software packages to the benchmark. The ideal quantitative solution would assign the same proportion to the three bacterial taxa. We have found that AMPHORA2/AmphoraNet gave the most accurate results and the other two software were under
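
    The benchmark construction described above is easy to emulate: take equal copy numbers of genomes of different lengths, shear them into ~150 bp reads, and compare the raw read fractions with the true 1:1:1 taxon abundances. The sketch below does this with invented genome lengths purely to make the "genome length bias" visible; it does not reproduce the authors' actual benchmark data.

```python
# Equal copy numbers of three genomes of different lengths, fragmented into
# ~150 bp reads: raw read counts drift away from the true 1:1:1 abundances.
import numpy as np

rng = np.random.default_rng(7)
genome_lengths = {"taxonA": 2_000_000, "taxonB": 4_000_000, "taxonC": 6_000_000}
copies, read_len = 10, 150

read_counts = {}
for taxon, length in genome_lengths.items():
    # Expected number of reads scales with total sequence length per taxon.
    read_counts[taxon] = rng.poisson(copies * length / read_len)

total = sum(read_counts.values())
for taxon, n in read_counts.items():
    print(f"{taxon}: {n / total:.2%} of reads (true abundance 33.3%)")
```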

  12. PLAGIARISM DETECTION PROBLEMS AND ANALYSIS SOFTWARE TOOLS FOR ITS SOLVE

    Directory of Open Access Journals (Sweden)

    V. I. Shynkarenko

    2017-02-01

    Full Text Available Purpose. This study is aimed at: 1) defining plagiarism in texts in formal and natural languages and building a taxonomy of plagiarism; 2) identifying the major problems of plagiarism detection when using automated tools to solve them; 3) analyzing and systematizing the information obtained during the review, testing and analysis of existing detection systems. Methodology. To identify the requirements for plagiarism detection software, methods of analysis of normative documentation (the legislative base) and of competitive tools were applied. The requirements were checked using testing methods and a review of GUI interfaces. Findings. The paper considers the concept of plagiarism, the issues of its proliferation, and its classification. A review of existing systems for identifying plagiarism is given: desktop applications and online resources. Their functional characteristics are highlighted, the formats of the input and output data and the constraints on them are determined, along with customization features and access options, and the system requirements are detailed. Originality. The schemes proposed by the authors complement the existing hierarchical taxonomy of plagiarism. The analysis of existing systems is done in terms of functionality and the possibilities for using large amounts of data. Practical value. The practical significance is determined by the breadth of the problem of plagiarism in various fields. In Ukraine, the legal framework for the fight against plagiarism is developing, which requires active work on the development, improvement and delivery of relevant software. This work contributes to the solution of these problems. The review of existing anti-plagiarism programs, together with the study of research experience in the field and an updated concept of plagiarism, allows the functional and performance requirements and the input and output of the developed software to be articulated more fully, as well as the features of such software to be identified. The article focuses on the features of solving the

  13. NEuronMOrphological analysis tool: open-source software for quantitative morphometrics

    Directory of Open Access Journals (Sweden)

    Lucia eBilleci

    2013-02-01

    Full Text Available Morphometric analysis of neurons and brain tissue is relevant to the study of neuron circuitry development during the first phases of brain growth or for probing the link between microstructural morphology and degenerative diseases. As neural imaging techniques become ever more sophisticated, so does the amount and complexity of data generated. The NEuronMOrphological analysis tool NEMO was purposely developed to handle and process large numbers of optical microscopy image files of neurons in culture or slices in order to automatically run batch routines, store data and apply multivariate classification and feature extraction using 3-way principal component analysis. Here we describe the software's main features, underlining the differences between NEMO and other commercial and non-commercial image processing tools, and show an example of how NEMO can be used to classify neurons from wild-type mice and from animal models of autism.

  14. Image based performance analysis of thermal imagers

    Science.gov (United States)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras. This happens in order to enhance the display presentation of the captured scene or specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time consuming/expensive task - e.g. requiring the execution of a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability stands for such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around the IR-scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR-camera under test. The same set of thermal test sequences might be presented to every unit under test. For turbulence mitigation tests, this could be e.g. the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test scene selection and on how to assemble an imaging suite (a set of image sequences) for the analysis of thermal imaging systems containing such black boxes in the image forming path are discussed.

  15. Designing Tracking Software for Image-Guided Surgery Applications: IGSTK Experience.

    Science.gov (United States)

    Enquobahrie, Andinet; Gobbi, David; Turek, Matt; Cheng, Patrick; Yaniv, Ziv; Lindseth, Frank; Cleary, Kevin

    2008-11-01

    OBJECTIVE: Many image-guided surgery applications require tracking devices as part of their core functionality. The Image-Guided Surgery Toolkit (IGSTK) was designed and developed to interface tracking devices with software applications incorporating medical images. METHODS: IGSTK was designed as an open source C++ library that provides the basic components needed for fast prototyping and development of image-guided surgery applications. This library follows a component-based architecture with several components designed for specific sets of image-guided surgery functions. At the core of the toolkit is the tracker component that handles communication between a control computer and navigation device to gather pose measurements of surgical instruments present in the surgical scene. The representations of the tracked instruments are superimposed on anatomical images to provide visual feedback to the clinician during surgical procedures. RESULTS: The initial version of the IGSTK toolkit has been released in the public domain and several trackers are supported. The toolkit and related information are available at www.igstk.org. CONCLUSION: With the increased popularity of minimally invasive procedures in health care, several tracking devices have been developed for medical applications. Designing and implementing high-quality and safe software to handle these different types of trackers in a common framework is a challenging task. It requires establishing key software design principles that emphasize abstraction, extensibility, reusability, fault-tolerance, and portability. IGSTK is an open source library that satisfies these needs for the image-guided surgery community.

  16. Effect of software manipulation (Photoshop) of digitised retinal images on the grading of diabetic retinopathy.

    Science.gov (United States)

    George, L D; Lusty, J; Owens, D R; Ollerton, R L

    1999-08-01

    To determine whether software processing of digitised retinal images using a "sharpen" filter improves the ability to grade diabetic retinopathy. 150 macula centred retinal images were taken as 35 mm colour transparencies representing a spectrum of diabetic retinopathy, digitised, and graded in random order before and after the application of a sharpen filter (Adobe Photoshop). Digital enhancement of contrast and brightness was performed and a 2× digital zoom was utilised. The grades from the unenhanced and enhanced digitised images were compared with the same retinal fields viewed as slides. Overall agreement in retinopathy grade from the digitised images improved from 83.3% (125/150) to 94.0% (141/150) with sight threatening diabetic retinopathy (STDR) correctly identified in 95.5% (84/88) and 98.9% (87/88) of cases when using unenhanced and enhanced images respectively. In total, five images were overgraded and four undergraded from the enhanced images compared with 17 and eight images respectively when using unenhanced images. This study demonstrates that the already good agreement in grading performance can be further improved by software manipulation or processing of digitised retinal images.
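
    A sharpen filter of the kind applied here is typically a small convolution kernel, optionally combined with linear contrast and brightness adjustment. The sketch below shows one common 3x3 sharpening kernel applied with SciPy; it is a generic illustration, not the specific Photoshop filter or settings used in the study, and the input image is a placeholder.

```python
# Generic sharpen + brightness/contrast enhancement of an 8-bit image.
import numpy as np
from scipy.ndimage import convolve

SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

def enhance(image, contrast=1.2, brightness=10.0):
    img = image.astype(float)
    sharpened = convolve(img, SHARPEN, mode="nearest")       # edge-boosting kernel
    adjusted = contrast * (sharpened - 128.0) + 128.0 + brightness
    return np.clip(adjusted, 0, 255).astype(np.uint8)

retina = (np.random.rand(256, 256) * 255).astype(np.uint8)   # placeholder image
enhanced = enhance(retina)
```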

  17. Models for composing software : an analysis of software composition and objects

    NARCIS (Netherlands)

    Bergmans, Lodewijk

    1999-01-01

    In this report, we investigate component-based software construction with a focus on composition. In particular we try to analyze the requirements and issues for components and software composition. As a means to understand this research area, we introduce a canonical model for representing software

  18. MATHEMATICAL FORMALISM AND SOFTWARE FOR PROCESSING OF IMAGES OF IRON-CARBON ALLOYS MICROSTRUCTURE

    Directory of Open Access Journals (Sweden)

    A. N. Chichko

    2010-01-01

    Full Text Available The description of mathematical apparatus, algorithms and software АОМ-1 and АОМ-2, used for computer processing and quantitative analysis of microstructures of pearlitic steels and grey irons, is given.

  19. Modulation of retinal image vasculature analysis to extend utility and provide secondary value from optical coherence tomography imaging.

    Science.gov (United States)

    Cameron, James R; Ballerini, Lucia; Langan, Clare; Warren, Claire; Denholm, Nicholas; Smart, Katie; MacGillivray, Thomas J

    2016-04-01

    Retinal image analysis is emerging as a key source of biomarkers of chronic systemic conditions affecting the cardiovascular system and brain. The rapid development and increasing diversity of commercial retinal imaging systems present a challenge to image analysis software providers. In addition, clinicians are looking to extract maximum value from the clinical imaging taking place. We describe how existing and well-established retinal vasculature segmentation and measurement software for fundus camera images has been modulated to analyze scanning laser ophthalmoscope retinal images generated by the dual-modality Heidelberg SPECTRALIS(®) instrument, which also features optical coherence tomography.

  20. Development of HydroImage, A User Friendly Hydrogeophysical Characterization Software

    Energy Technology Data Exchange (ETDEWEB)

    Mok, Chin Man [GSI Environmental; Hubbard, Susan [Lawrence Berkeley National Laboratory; Chen, Jinsong [Lawrence Berkeley National Laboratory; Suribhatla, Raghu [AMEC E& I; Kaback, Dawn Samara [AMEC E& I

    2014-01-29

    HydroImage, user-friendly software that utilizes high-resolution geophysical data for estimating hydrogeological parameters in subsurface strata, was developed under this grant. HydroImage runs on a personal computer platform to promote broad use by hydrogeologists to further understanding of subsurface processes that govern contaminant fate, transport, and remediation. The unique software provides estimates of hydrogeological properties over continuous volumes of the subsurface, whereas previous approaches only allow estimation at point locations. Thus, this unique tool can be used to significantly enhance site conceptual models and improve design and operation of remediation systems. The HydroImage technical approach uses statistical models to integrate geophysical data with borehole geological data and hydrological measurements to produce hydrogeological parameter estimates as 2-D or 3-D images.

  1. Xmipp 3.0: an improved software suite for image processing in electron microscopy.

    Science.gov (United States)

    de la Rosa-Trevín, J M; Otón, J; Marabini, R; Zaldívar, A; Vargas, J; Carazo, J M; Sorzano, C O S

    2013-11-01

    Xmipp is a specialized software package for image processing in electron microscopy, mainly focused on 3D reconstruction of macromolecules through single-particle analysis. In this article we present Xmipp 3.0, a major release which introduces several improvements and new developments over the previous version. A central improvement is the concept of a project that stores the entire processing workflow from data import to final results. It is now possible to monitor, reproduce and restart all computing tasks as well as graphically explore the complete set of interrelated tasks associated to a given project. Other graphical tools have also been improved such as data visualization, particle picking and parameter "wizards" that allow the visual selection of some key parameters. Many standard image formats are transparently supported for input/output from all programs. Additionally, results have been standardized, facilitating the interoperation between different Xmipp programs. Finally, as a result of a large code refactoring, the underlying C++ libraries are better suited for future developments and all code has been optimized. Xmipp is an open-source package that is freely available for download from: http://xmipp.cnb.csic.es.

  2. IMMAN: free software for information theory-based chemometric analysis.

    Science.gov (United States)

    Urias, Ricardo W Pino; Barigye, Stephen J; Marrero-Ponce, Yovani; García-Jacas, César R; Valdes-Martiní, José R; Perez-Gimenez, Facundo

    2015-05-01

    The features and theoretical background of a new and free computational program for chemometric analysis denominated IMMAN (acronym for Information theory-based CheMoMetrics ANalysis) are presented. This is multi-platform software developed in the Java programming language, designed with a remarkably user-friendly graphical interface for the computation of a collection of information-theoretic functions adapted for rank-based unsupervised and supervised feature selection tasks. A total of 20 feature selection parameters are presented, with the unsupervised and supervised frameworks represented by 10 approaches in each case. Several information-theoretic parameters traditionally used as molecular descriptors (MDs) are adapted for use as unsupervised rank-based feature selection methods. On the other hand, a generalization scheme for the previously defined differential Shannon's entropy is discussed, as well as the introduction of Jeffreys information measure for supervised feature selection. Moreover, well-known information-theoretic feature selection parameters, such as information gain, gain ratio, and symmetrical uncertainty are incorporated to the IMMAN software ( http://mobiosd-hub.com/imman-soft/ ), following an equal-interval discretization approach. IMMAN offers data pre-processing functionalities, such as missing values processing, dataset partitioning, and browsing. Moreover, single parameter or ensemble (multi-criteria) ranking options are provided. Consequently, this software is suitable for tasks like dimensionality reduction, feature ranking, as well as comparative diversity analysis of data matrices. Simple examples of applications performed with this program are presented. A comparative study between IMMAN and WEKA feature selection tools using the Arcene dataset was performed, demonstrating similar behavior. In addition, it is revealed that the use of IMMAN unsupervised feature selection methods improves the performance of both IMMAN and WEKA
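
    One of the supervised measures mentioned above, information gain computed after equal-interval discretization, can be written directly in NumPy. The snippet below is an independent illustration of that measure, not IMMAN code, and the example descriptors and class labels are synthetic.

```python
# Information gain of a continuous descriptor after equal-interval discretization.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(feature, labels, n_bins=5):
    # Equal-interval discretization of the continuous descriptor values.
    edges = np.linspace(feature.min(), feature.max(), n_bins + 1)
    disc = np.digitize(feature, edges[1:-1])          # bin index 0 .. n_bins-1
    h_y = entropy(labels)
    h_y_given_x = 0.0
    for b in range(n_bins):
        mask = disc == b
        if mask.any():
            h_y_given_x += mask.mean() * entropy(labels[mask])
    return h_y - h_y_given_x

rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=200)
informative = y + rng.normal(0, 0.3, size=200)        # correlates with the class
noise = rng.normal(0, 1, size=200)                    # uninformative descriptor
print(information_gain(informative, y), information_gain(noise, y))
```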

  3. Analysis of signal acquisition in GPS receiver software

    Directory of Open Access Journals (Sweden)

    Vlada S. Sokolović

    2011-01-01

    Full Text Available This paper presents a critical analysis of the signal processing flow carried out in GPS receiver software, which served as a basis for a critical comparison of different signal processing architectures within the GPS receiver. Increased flexibility and a reduction of the commercial costs of GPS devices, including mobile devices, can be achieved by using software defined radio (SDR) technology. The SDR application can be realized when certain hardware components in a GPS receiver are replaced. Signal processing in the SDR is implemented using a programmable DSP (Digital Signal Processing) or FPGA (Field Programmable Gate Array) circuit, which allows a simple change of digital signal processing algorithms and a simple change of the receiver parameters. The starting point of the research is the signal generated on the satellite, the structure of which is shown in the paper. Based on the GPS signal structure, a receiver is realized with the task of extracting the appropriate signal from the spectrum and detecting it. Based on collected navigation data, the receiver calculates the position of the end user. The signal coming from the satellite may be at the carrier frequencies L1 and L2. Since the SPS is used in the civil service, all the tests shown in the work were performed on the L1 signal. The signal coming to the receiver is generated with spread spectrum technology and is situated below the noise level. Such signals often interfere with signals from the environment, which makes it difficult for a receiver to perform proper detection and signal processing. Therefore, signal processing technology is continually being improved, aiming at more accurate and faster signal processing. All tests were carried out on a signal acquired from the satellite using the SE4110 input circuit used for filtering, amplification and signal selection. The samples of the received signal were forwarded to a computer for data post-processing, i.e.
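
    The acquisition stage discussed above boils down to correlating the received samples with a local code replica and looking for a correlation peak, which is usually done as a circular correlation in the frequency domain. The toy example below uses a random ±1 sequence as a stand-in for a real C/A PRN code, so it only illustrates the processing flow, not an actual GPS acquisition.

```python
# FFT-based circular correlation to recover the code phase of a buried signal.
import numpy as np

rng = np.random.default_rng(5)
code = rng.choice([-1.0, 1.0], size=1023)            # stand-in for a C/A code
delay = 357                                          # unknown code phase to recover
received = np.roll(code, delay) + rng.normal(0, 2.0, size=1023)   # signal below the noise

# Circular correlation: IFFT( FFT(received) * conj(FFT(replica)) ).
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code))).real
print("estimated code phase:", int(np.argmax(corr)))  # should be close to 357
```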

  4. Efficacy of an Intra-Operative Imaging Software System for Anatomic Anterior Cruciate Ligament Reconstruction Surgery

    Directory of Open Access Journals (Sweden)

    Xudong Zhang

    2012-01-01

    Full Text Available An imaging software system was studied for improving the performance of anatomic anterior cruciate ligament (ACL) reconstruction, which requires identifying ACL insertion sites for bone tunnel placement. This software predicts and displays the insertion sites based on the literature data and patient-specific bony landmarks. Twenty orthopaedic surgeons performed simulated arthroscopic ACL surgeries on 20 knee specimens, first without and then with the visual guidance by fluoroscopic imaging, and their tunnel entry positions were recorded. The native ACL insertion morphologies of individual specimens were quantified in relation to CT-based bone models and then used to evaluate the software-generated insertion locations. Results suggested that the system was effective in leading surgeons to predetermined locations while the application of averaged insertion morphological information in individual surgeries can be susceptible to inaccuracy and uncertainty. Implications on challenges associated with developing engineering solutions to aid in re-creating or recognizing anatomy in surgical care delivery are discussed.

  5. Sub-basal Corneal Nerve Plexus Analysis Using a New Software Technology.

    Science.gov (United States)

    Batawi, Hatim; Shalabi, Nabeel; Joag, Madhura; Koru-Sengul, Tulay; Rodriguez, Jorge; Green, Parke T; Campigotto, Mauro; Karp, Carol L; Galor, Anat

    2017-03-24

    To study sub-basal corneal nerve plexus (SCNP) parameters by in vivo corneal confocal microscopy using a new software technology and examine the effect of demographics and diabetes mellitus (DM) on corneal nerve morphology. A Confoscan 4 (Nidek Technologies) was used in this cross-sectional study to image the SCNP in 84 right eyes at the Miami Veterans Affairs eye clinic. Images were analyzed using a new semiautomated nerve analysis software program (the Corneal Nerve Analysis tool) which evaluated 9 parameters including nerve fiber length (NFL) and nerve fiber length density (NFLD). The main outcome measure was the examination of SCNP morphology by demographics, comorbidities, and HbA1c level. Interoperator and intraoperator reproducibility were good for the 9 parameters studied (intraclass correlations [ICCs] 0.73-0.97). Image variability between two images within the same scan was good for all parameters (ICC 0.66-0.80). Older individuals had lower SCNP parameters, with NFL and NFLD negatively correlating with age (r = -0.471 and -0.461, respectively). The Corneal Nerve Analysis tool provides a reproducible software technique for the analysis of the SCNP with confocal microscopy. Older age, DM, and a higher level of HbA1c were associated with a significant reduction in SCNP parameters.

  6. MMX-I: data-processing software for multimodal X-ray imaging and tomography

    Energy Technology Data Exchange (ETDEWEB)

    Bergamaschi, Antoine, E-mail: antoine.bergamaschi@synchrotron-soleil.fr; Medjoubi, Kadda [Synchrotron SOLEIL, BP 48, Saint-Aubin, 91192 Gif sur Yvette (France); Messaoudi, Cédric; Marco, Sergio [Université Paris-Saclay, CNRS, Université Paris-Saclay, F-91405 Orsay (France); Institut Curie, INSERM, PSL Reseach University, F-91405 Orsay (France); Somogyi, Andrea [Synchrotron SOLEIL, BP 48, Saint-Aubin, 91192 Gif sur Yvette (France)

    2016-04-12

    The MMX-I open-source software has been developed for processing and reconstruction of large multimodal X-ray imaging and tomography datasets. The recent version of MMX-I is optimized for scanning X-ray fluorescence, phase-, absorption- and dark-field contrast techniques. This, together with its implementation in Java, makes MMX-I a versatile and user-friendly tool for X-ray imaging. A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field, and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline, even on a standard PC. To the authors’ knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments.

  7. 3-dimensional root phenotyping with a novel imaging and software platform

    Science.gov (United States)

    A novel imaging and software platform was developed for the high-throughput phenotyping of 3-dimensional root traits during seedling development. To demonstrate the platform’s capacity, plants of two rice (Oryza sativa) genotypes, Azucena and IR64, were grown in a transparent gellan gum system and ...

  8. Digital image measurement of specimen deformation based on CCD cameras and Image J software: an application to human pelvic biomechanics

    Science.gov (United States)

    Jia, Yongwei; Cheng, Liming; Yu, Guangrong; Lou, Yongjian; Yu, Yan; Chen, Bo; Ding, Zuquan

    2008-03-01

    A method of digital image measurement of specimen deformation based on CCD cameras and Image J software was developed. This method was used to measure the biomechanical behavior of the human pelvis. Six cadaveric specimens from the third lumbar vertebra to the proximal 1/3 of the femur were tested. The specimens, without any structural abnormalities, were dissected of all soft tissue, sparing the hip joint capsules and the ligaments of the pelvic ring and floor. Markers with a black dot on a white background were affixed to the key regions of the pelvis. Axial loading from the proximal lumbar spine was applied by MTS in the gradient of 0 N to 500 N, which simulated the double-feet standing stance. The anterior and lateral images of the specimen were obtained through two CCD cameras. The digital 8-bit images were processed with Image J, digital image processing software which can be freely downloaded from the National Institutes of Health. The procedure includes the recognition of the digital markers, image inversion, sub-pixel reconstruction, image segmentation, and a center-of-mass algorithm based on the weighted average of pixel gray values. Vertical displacements of S1 (the first sacral vertebra) in the front view and the micro-angular rotation of the sacroiliac joint in the lateral view were calculated according to the marker movement. The results of the digital image measurement showed the following: marker image correlation before and after deformation was excellent, with an average correlation coefficient of about 0.983. For the 768 × 576 pixel images (pixel size 0.68 mm × 0.68 mm), the precision of the displacement detected in our experiment was about 0.018 pixels and the relative error could reach 1.11‰. The average vertical displacement of S1 of the pelvis was 0.8356 ± 0.2830 mm under a vertical load of 500 Newtons and the average micro-angular rotation of the sacroiliac joint in the lateral view was 0.584 ± 0.221°. The load-displacement curves obtained from our optical measurement system
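
    The centre-of-mass step described above, a grey-value-weighted average of pixel coordinates inside the segmented marker, is what yields the sub-pixel marker position. The sketch below illustrates the computation on a synthetic bright dot; it is not the authors' ImageJ procedure, and marker_centroid is a hypothetical helper (the study used dark markers on a white background, so the image would be inverted first).

```python
# Sub-pixel marker localisation as a grey-value-weighted centre of mass.
import numpy as np

def marker_centroid(patch, threshold=0.5):
    """Return (row, col) centre of mass of the bright marker in `patch`."""
    weights = np.where(patch >= threshold * patch.max(), patch, 0.0)
    rows, cols = np.indices(patch.shape)
    total = weights.sum()
    return (np.sum(rows * weights) / total, np.sum(cols * weights) / total)

# A synthetic dot whose true centre lies between pixels at (10.3, 12.7).
yy, xx = np.indices((25, 25))
dot = np.exp(-((yy - 10.3) ** 2 + (xx - 12.7) ** 2) / 4.0)
print(marker_centroid(dot))   # sub-pixel estimate close to (10.3, 12.7)
```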

  9. OVERVIEW OF THE SAPHIRE PROBABILISTIC RISK ANALYSIS SOFTWARE

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Curtis L.; Wood, Ted; Knudsen, James; Ma, Zhegang

    2016-10-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer (PC) running the Microsoft Windows operating system. SAPHIRE Version 8 is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). INL's primary role in this project is that of software developer and tester. However, INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users, who constitute a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. In this paper, we provide an overview of the current technical capabilities found in SAPHIRE Version 8, including the user interface and enhanced solving algorithms.

  10. BIM Software Capability and Interoperability Analysis : An analytical approach toward structural usage of BIM software (S-BIM)

    OpenAIRE

    A. Taher, Ali

    2016-01-01

    This study focused on the structural analysis of BIM models. Different commercial software packages (Autodesk products and Rhinoceros) are presented through modelling and analysis of different structures with varying complexity, section properties, geometry, and material. Besides the commercial software, different architectural tools and different tools for structural analysis are evaluated (Dynamo, Grasshopper, add-on tools, direct link, indirect link via IFC). BIM and Structural BIM (S-BIM)

  11. Software applications toward quantitative metabolic flux analysis and modeling.

    Science.gov (United States)

    Dandekar, Thomas; Fieselmann, Astrid; Majeed, Saman; Ahmed, Zeeshan

    2014-01-01

    Metabolites and their pathways are central for adaptation and survival. Metabolic modeling elucidates in silico all the possible flux pathways (flux balance analysis, FBA) and predicts the actual fluxes under a given situation; further refinement of these models is possible by including experimental isotopologue data. In this review, we initially introduce the key theoretical concepts and different analysis steps in the modeling process before comparing flux calculation and metabolite analysis programs such as C13, BioOpt, COBRA toolbox, Metatool, efmtool, FiatFlux, ReMatch, VANTED, iMAT and YANA. Their respective strengths and limitations are discussed and compared to alternative software. While data analysis of metabolites, calculation of metabolic fluxes, pathways and their condition-specific changes are all possible, we highlight the considerations that need to be taken into account before deciding on a specific software package. Current challenges in the field include the computation of large-scale networks (in elementary mode analysis), regulatory interactions and detailed kinetics, and these are discussed in the light of powerful new approaches.
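
    Flux balance analysis itself reduces to a linear program: maximise an objective flux subject to steady-state mass balance S·v = 0 and flux bounds. The toy example below solves such a program with scipy.optimize.linprog on an invented three-reaction network; it is meant only to make the FBA formulation concrete, not to reproduce any of the packages listed above.

```python
# Toy FBA: maximise the "biomass" flux v3 subject to S·v = 0 and flux bounds.
import numpy as np
from scipy.optimize import linprog

# Reactions: R1 uptake (-> A), R2 conversion (A -> B), R3 biomass (B ->).
S = np.array([[ 1, -1,  0],    # mass balance of metabolite A
              [ 0,  1, -1]])   # mass balance of metabolite B
bounds = [(0, 10), (0, 8), (0, None)]        # upper bounds on uptake and conversion
c = np.array([0, 0, -1.0])                   # maximise v3 == minimise -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)              # expected: [8, 8, 8]
```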

  12. MR image analysis: Longitudinal cardiac motion influences left ventricular measurements

    Energy Technology Data Exchange (ETDEWEB)

    Berkovic, Patrick [University Hospital Antwerp, Department of Cardiology (Belgium)], E-mail: pberko17@hotmail.com; Hemmink, Maarten [University Hospital Antwerp, Department of Cardiology (Belgium)], E-mail: maartenhemmink@gmail.com; Parizel, Paul M. [University Hospital Antwerp, Department of Radiology (Belgium)], E-mail: paul.parizel@uza.be; Vrints, Christiaan J. [University Hospital Antwerp, Department of Cardiology (Belgium)], E-mail: chris.vrints@uza.be; Paelinck, Bernard P. [University Hospital Antwerp, Department of Cardiology (Belgium)], E-mail: Bernard.paelinck@uza.be

    2010-02-15

    Background: Software for the analysis of left ventricular (LV) volumes and mass using border detection in short-axis images only, is hampered by through-plane cardiac motion. Therefore we aimed to evaluate software that involves longitudinal cardiac motion. Methods: Twenty-three consecutive patients underwent 1.5-Tesla cine magnetic resonance (MR) imaging of the entire heart in the long-axis and short-axis orientation with breath-hold steady-state free precession imaging. Offline analysis was performed using software that uses short-axis images (Medis MASS) and software that includes two-chamber and four-chamber images to involve longitudinal LV expansion and shortening (CAAS-MRV). Intraobserver and interobserver reproducibility was assessed by using Bland-Altman analysis. Results: Compared with MASS software, CAAS-MRV resulted in significantly smaller end-diastolic (156 ± 48 ml versus 167 ± 52 ml, p = 0.001) and end-systolic LV volumes (79 ± 48 ml versus 94 ± 52 ml, p < 0.001). In addition, CAAS-MRV resulted in higher LV ejection fraction (52 ± 14% versus 46 ± 13%, p < 0.001) and calculated LV mass (154 ± 52 g versus 142 ± 52 g, p = 0.004). Intraobserver and interobserver limits of agreement were similar for both methods. Conclusion: MR analysis of LV volumes and mass involving long-axis LV motion is a highly reproducible method, resulting in smaller LV volumes, higher ejection fraction and calculated LV mass.

  13. Reflections on ultrasound image analysis.

    Science.gov (United States)

    Alison Noble, J

    2016-10-01

    Ultrasound (US) image analysis has advanced considerably in twenty years. Progress in ultrasound image analysis has always been fundamental to the advancement of image-guided interventions research due to the real-time acquisition capability of ultrasound and this has remained true over the two decades. But in quantitative ultrasound image analysis - which takes US images and turns them into more meaningful clinical information - thinking has perhaps more fundamentally changed. From roots as a poor cousin to Computed Tomography (CT) and Magnetic Resonance (MR) image analysis, both of which have richer anatomical definition and thus were better suited to the earlier eras of medical image analysis which were dominated by model-based methods, ultrasound image analysis has now entered an exciting new era, assisted by advances in machine learning and the growing clinical and commercial interest in employing low-cost portable ultrasound devices outside traditional hospital-based clinical settings. This short article provides a perspective on this change, and highlights some challenges ahead and potential opportunities in ultrasound image analysis which may both have high impact on healthcare delivery worldwide in the future but may also, perhaps, take the subject further away from CT and MR image analysis research with time.

  14. Design Criteria For Networked Image Analysis System

    Science.gov (United States)

    Reader, Cliff; Nitteberg, Alan

    1982-01-01

    Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special purpose designs. This change is motivated by several factors, notable among which is the increased opportunity for high performance with low cost offered by advances in semiconductor technology. Another key issue is a maturing in the understanding of problems and the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above stated issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with orientation toward the hospital environment. The three main areas are image data base management, viewing of image data and image data processing. This is followed by a survey of the current state of the art, covering image display systems, data base techniques, communications networks and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.

  15. Navigating freely-available software tools for metabolomics analysis.

    Science.gov (United States)

    Spicer, Rachel; Salek, Reza M; Moreno, Pablo; Cañueto, Daniel; Steinbeck, Christoph

    2017-01-01

    The field of metabolomics has expanded greatly over the past two decades, both as an experimental science with applications in many areas and with regard to data standards and bioinformatics software tools. The diversity of experimental designs and instrumental technologies used for metabolomics has led to the need for distinct data analysis methods and the development of many software tools. The aim of this review is to compile a comprehensive list of the most widely used freely available software and tools that are used primarily in metabolomics. The most widely used tools were selected for inclusion in the review by either ≥ 50 citations on Web of Science (as of 08/09/16) or the use of the tool being reported in the recent Metabolomics Society survey. Tools were then categorised by the type of instrumental data (i.e. LC-MS, GC-MS or NMR) and the functionality (i.e. pre- and post-processing, statistical analysis, workflow and other functions) they are designed for. A comprehensive list of the most used tools was compiled. Each tool is discussed within the context of its application domain and in relation to comparable tools of the same domain. An extended list including additional tools is available at https://github.com/RASpicer/MetabolomicsTools, which is classified and searchable via a simple controlled vocabulary. This review presents the most widely used tools for metabolomics analysis, categorised based on their main functionality. As future work, we suggest a direct comparison of the tools' abilities to perform specific data analysis tasks, e.g. peak picking.

  16. UPVapor: Cofrentes nuclear power plant production results analysis software

    Energy Technology Data Exchange (ETDEWEB)

    Curiel, M. [Logistica y Acondicionamientos Industriales SAU, Sorolla Center, local 10, Av. de las Cortes Valencianas No. 58, 46015 Valencia (Spain); Palomo, M. J. [ISIRYM, Universidad Politecnica de Valencia, Camino de Vera s/n, Valencia (Spain); Baraza, A. [Iberdrola Generacion S. A., Central Nuclear Cofrentes, Carretera Almansa Requena s/n, 04662 Cofrentes, Valencia (Spain); Vaquer, J., E-mail: m.curiel@lainsa.co [TITANIA Servicios Tecnologicos SL, Sorolla Center, local 10, Av. de las Cortes Valencianas No. 58, 46015 Valencia (Spain)

    2010-10-15

    UPVapor software version 02 has been developed for the Data Analysis Department of the Cofrentes nuclear power plant (Spain). It is a graphical analysis environment in which users have access to all the plant variables registered in the process computer system (SIEC). To this end, UPVapor offers many advanced graphic tools that simplify the work, as well as a user-friendly environment that is easy to use and highly configurable. Plant variables are classified in the same way as in the SIEC computer, and their values are retrieved from it through the Iberdrola network. UPVapor can generate two different types of graphics: evolution graphs and X-Y graphs. The former analyse the evolution of up to twenty plant variables over a user-defined time period, based on historic plant files. Many tools are available: cursors, graphic configuration, moving averages, visualization of invalid data, and more. Moreover, a particular analysis configuration can be saved as a pre-selection, making it possible to load a pre-selection directly and to quickly monitor a group of preselected plant variables. In X-Y graphs, the value of one variable can be analysed against another variable over a defined time. Optionally, users can filter previous data according to a given range of a variable, with the possibility of programming up to five filters. Like the evolution graph, the X-Y graph has many configuration, saving and printing options. With UPVapor, data analysts can save valuable time during daily work and, as it is easy to use, other users can perform their own analyses without asking the analysts to develop them. Besides, it can be used from any work centre with access to the network framework. (Author)

  17. Analysis of mice tumor models using dynamic MRI data and a dedicated software platform

    Energy Technology Data Exchange (ETDEWEB)

    Alfke, H.; Maurer, E.; Klose, K.J. [Philipps Univ. Marburg (Germany). Dept. of Radiology; Kohle, S.; Rascher-Friesenhausen, R.; Behrens, S.; Peitgen, H.O. [MeVis - Center for Medical Diagnostic Systems and Visualization, Bremen (Germany); Celik, I. [Philipps Univ. Marburg (Germany). Inst. for Theoretical Surgery; Heverhagen, J.T. [Philipps Univ. Marburg (Germany). Dept. of Radiology; Ohio State Univ., Columbus (United States). Dept. of Radiology

    2004-09-01

    Purpose: To implement a software platform (DynaVision) dedicated to the analysis of data from functional imaging of tumors with different mathematical approaches, and to test the software platform in pancreatic carcinoma xenografts in mice with severe combined immunodeficiency disease (SCID). Materials and Methods: A software program was developed for the extraction and visualization of tissue perfusion parameters from dynamic contrast-enhanced images. This includes regional parameter calculation from enhancement curves, parametric images (e.g., blood flow), animation, 3D visualization, two-compartment modeling, a mode for comparing different datasets (e.g., therapy monitoring), and motion correction. We analyzed xenograft tumors from two pancreatic carcinoma cell lines (BxPC3 and ASPC1) implanted in 14 SCID mice after injection of Gd-DTPA into the tail vein. These data were correlated with histopathological findings. Results: Image analysis was completed in approximately 15 minutes per data set. The possibility of drawing and editing ROIs within the whole data set makes it easy to obtain quantitative data from the intensity-time curves. In one animal, motion artifacts markedly reduced the image quality, but data analysis was still possible after motion correction. Dynamic MRI of mouse tumor models revealed a highly heterogeneous distribution of the contrast-enhancement curves and derived parameters, which correlated with differences in histopathology. ASPC1 tumors showed a more hypervascular type of curve with a faster and higher signal enhancement rate (wash-in) and a faster signal decrease (wash-out). BxPC3 tumors showed a more hypovascular type with slower wash-in and wash-out. This correlated with the biological properties of the tumors. (orig.)

  18. Software for MR image overlay guided needle insertions: the clinical translation process

    Science.gov (United States)

    Ungi, Tamas; U-Thainual, Paweena; Fritz, Jan; Iordachita, Iulian I.; Flammang, Aaron J.; Carrino, John A.; Fichtinger, Gabor

    2013-03-01

    PURPOSE: Needle guidance software using augmented reality image overlay was translated from the experimental phase to support preclinical and clinical studies. Major functional and structural changes were needed to meet clinical requirements. We present the process applied to fulfill these requirements, and selected features that may be applied in the translational phase of other image-guided surgical navigation systems. METHODS: We used an agile software development process for rapid adaptation to unforeseen clinical requests. The process is based on iterations of operating room test sessions, feedback discussions, and software development sprints. The open-source application framework of 3D Slicer and the NA-MIC kit provided sufficient flexibility and stable software foundations for this work. RESULTS: All requirements were addressed in a process with 19 operating room test iterations. Most features developed in this phase were related to workflow simplification and operator feedback. CONCLUSION: Efficient and affordable modifications were facilitated by an open source application framework and frequent clinical feedback sessions. Results of cadaver experiments show that software requirements were successfully solved after a limited number of operating room tests.

  19. IHE cross-enterprise document sharing for imaging: interoperability testing software

    Directory of Open Access Journals (Sweden)

    Renaud Bérubé

    2010-09-01

    Abstract Background With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. Results In this paper we describe software used to test systems involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the elected design solutions. Conclusions EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for an easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities, or to resolve implementation difficulties.

  20. Integrating digital image management software for improved patient care and optimal practice management.

    Science.gov (United States)

    Starr, Jon C

    2006-06-01

    Photographic images provide vital documentation of preoperative, intraoperative, and postoperative results in the clinical dermatologic surgery practice and can document histologic findings from skin biopsies, thereby enhancing patient care. Images may be printed as part of text documents, transmitted via electronic mail, or included in electronic medical records. To describe existing computer software that integrates digital photography and the medical record to improve patient care and practice management. A variety of computer applications are available to optimize the use of digital images in the dermatologic practice.

  1. International Atomic Energy Agency intercomparison of ion beam analysis software

    Science.gov (United States)

    Barradas, N. P.; Arstila, K.; Battistig, G.; Bianconi, M.; Dytlewski, N.; Jeynes, C.; Kótai, E.; Lulli, G.; Mayer, M.; Rauhala, E.; Szilágyi, E.; Thompson, M.

    2007-09-01

    Ion beam analysis (IBA) includes a group of techniques for the determination of elemental concentration depth profiles of thin film materials. Often the final results rely on simulations, fits and calculations, made by dedicated codes written for specific techniques. Here we evaluate numerical codes dedicated to the analysis of Rutherford backscattering spectrometry, non-Rutherford elastic backscattering spectrometry, elastic recoil detection analysis and non-resonant nuclear reaction analysis data. Several software packages have been presented and made available to the community. New codes regularly appear, and old codes continue to be used and occasionally updated and expanded. However, those codes have to date not been validated, or even compared to each other. Consequently, IBA practitioners use codes whose validity, correctness and accuracy have never been validated beyond the authors' efforts. In this work, we present the results of an IBA software intercomparison exercise, where seven different packages participated. These were DEPTH, GISA, DataFurnace (NDF), RBX, RUMP, SIMNRA (all analytical codes) and MCERD (a Monte Carlo code). In a first step, a series of simulations were defined, testing different capabilities of the codes, for fixed conditions. In a second step, a set of real experimental data were analysed. The main conclusion is that the codes perform well within the limits of their design, and that the largest differences in the results obtained are due to differences in the fundamental databases used (stopping power and scattering cross section). In particular, spectra can be calculated including Rutherford cross sections with screening, energy resolution convolutions including energy straggling, and pileup effects, with agreement between the codes available at the 0.1% level. This same agreement is also available for the non-RBS techniques. This agreement is not limited to calculation of spectra from particular structures with predetermined
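
    For orientation, the unscreened Rutherford cross section that all of these codes start from is the textbook centre-of-mass expression. The sketch below is a generic illustration only and is not taken from any of the packages named above:

```python
import numpy as np

# e^2 / (4*pi*eps0) expressed in MeV*fm, a commonly used value
E2 = 1.44  # MeV·fm

def rutherford_cm(z1, z2, energy_cm_mev, theta_cm_deg):
    """Unscreened Rutherford differential cross section in the centre-of-mass frame.

    Returns d(sigma)/d(Omega) in fm^2/sr for projectile charge z1, target charge z2,
    centre-of-mass energy in MeV and scattering angle in degrees.
    """
    theta = np.radians(theta_cm_deg)
    return (z1 * z2 * E2 / (4.0 * energy_cm_mev)) ** 2 / np.sin(theta / 2.0) ** 4

# Example: 2 MeV alpha particles (Z=2) on silicon (Z=14) at 160 degrees
print(rutherford_cm(2, 14, 2.0, 160.0), "fm^2/sr")
```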

  2. Feature-Oriented Nonfunctional Requirement Analysis for Software Product Line

    Institute of Scientific and Technical Information of China (English)

    Xin Peng; Seok-Won Lee; Wen-Yun Zhao

    2009-01-01

    Domain analysis in software product line (SPL) development provides a basis for core assets design and implementation by a systematic and comprehensive commonality/variability analysis. In feature-oriented SPL methods, products of the domain analysis are domain feature models and corresponding feature decision models to facilitate application-oriented customization. As in requirement analysis for a single system, the domain analysis in the SPL development should consider both functional and nonfunctional domain requirements. However, the nonfunctional requirements (NFRs) are often neglected in the existing domain analysis methods. In this paper, we propose a context-based method of the NFR analysis for the SPL development. In the method, NFRs are materialized by connecting nonfunctional goals with real-world context, thus NFR elicitation and variability analysis can be performed by context analysis for the whole domain with the assistance of NFR templates and NFR graphs. After the variability analysis, our method integrates both functional and nonfunctional perspectives by incorporating the nonfunctional goals and operationalizations into an initial functional feature model. NFR-related constraints are also elicited and integrated. Finally, a decision model with both functional and nonfunctional perspectives is constructed to facilitate application-oriented feature model customization. A computer-aided grading system (CAGS) product line is employed to demonstrate the method throughout the paper.

  3. Meta-Analyst: software for meta-analysis of binary, continuous and diagnostic data

    Directory of Open Access Journals (Sweden)

    Schmid Christopher H

    2009-12-01

    Abstract Background Meta-analysis is increasingly used as a key source of evidence synthesis to inform clinical practice. The theory and statistical foundations of meta-analysis continually evolve, providing solutions to many new and challenging problems. In practice, most meta-analyses are performed in general statistical packages or dedicated meta-analysis programs. Results Herein, we introduce Meta-Analyst, a novel, powerful, intuitive, and free meta-analysis program for the meta-analysis of a variety of problems. Meta-Analyst is implemented in C# atop the Microsoft .NET framework, and features a graphical user interface. The software performs several meta-analysis and meta-regression models for binary and continuous outcomes, as well as analyses for diagnostic and prognostic test studies in the frequentist and Bayesian frameworks. Moreover, Meta-Analyst includes a flexible tool to edit and customize generated meta-analysis graphs (e.g., forest plots) and provides output in many formats (images, Adobe PDF, Microsoft Word-ready RTF). The software architecture employed allows for rapid changes to be made to either the Graphical User Interface (GUI) or to the analytic modules. We verified the numerical precision of Meta-Analyst by comparing its output with that from standard meta-analysis routines in Stata over a large database of 11,803 meta-analyses of binary outcome data, and 6,881 meta-analyses of continuous outcome data from the Cochrane Library of Systematic Reviews. Results from analyses of diagnostic and prognostic test studies have been verified in a limited number of meta-analyses versus MetaDisc and MetaTest. Bayesian statistical analyses use the OpenBUGS calculation engine (and are thus as accurate as the standalone OpenBUGS software). Conclusion We have developed and validated a new program for conducting meta-analyses that combines the advantages of existing software for this task.
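
    The core calculation behind fixed-effect pooling is short enough to sketch directly. The following is only an illustration of the standard inverse-variance method with invented study values; it is not code from Meta-Analyst:

```python
import numpy as np

def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted pooled effect and its standard error."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # weights = 1 / variance
    pooled = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, se

# Invented per-study log odds ratios and their variances
log_or = [0.25, 0.10, 0.40]
var = [0.04, 0.09, 0.02]
pooled, se = fixed_effect_pool(log_or, var)
print(pooled, pooled - 1.96 * se, pooled + 1.96 * se)  # estimate and 95% CI
```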

  4. Digital Images Analysis

    OpenAIRE

    2012-01-01

    A specific field of image processing focuses on the evaluation of image quality and assessment of their authenticity. A loss of image quality may be due to the various processes by which it passes. In assessing the authenticity of the image we detect forgeries, detection of hidden messages, etc. In this work, we present an overview of these areas; these areas have in common the need to develop theories and techniques to detect changes in the image that it is not detect...

  5. Image Analysis in CT Angiography

    NARCIS (Netherlands)

    Manniesing, R.

    2006-01-01

    In this thesis we develop and validate novel image processing techniques for the analysis of vascular structures in medical images. First a new type of filter is proposed which is capable of enhancing vascular structures while suppressing noise in the remainder of the image. This filter is based on

  6. A software to digital image processing to be used in the voxel phantom development.

    Science.gov (United States)

    Vieira, J W; Lima, F R A

    2009-11-15

    Anthropomorphic models used in computational dosimetry, also called phantoms, are based on digital images recorded from scans of real people by Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). Voxel phantom construction requires computational processing for transformations of image formats, compaction of two-dimensional (2-D) images into three-dimensional (3-D) matrices, image sampling and quantization, image enhancement, restoration and segmentation, among others. A researcher in computational dosimetry will rarely find all these capabilities in a single software package, and this difficulty almost always slows down the research or leads to the use, sometimes inadequate, of alternative tools. The need to integrate the several tasks mentioned above to obtain an image that can be used in a computational exposure model motivated the development of the Digital Image Processing (DIP) software, mainly to solve particular problems in dissertations and theses developed by members of the Grupo de Pesquisa em Dosimetria Numérica (GDN/CNPq). Because of this particular objective, the software uses Portuguese in its implementation and interfaces. This paper presents the second version of the DIP, whose main changes are a more formal organization of menus and menu items, and a new menu for digital image segmentation. Currently, the DIP contains the menus Fundamentos, Visualizações, Domínio Espacial, Domínio de Frequências, Segmentações and Estudos. Each menu contains items and sub-items with functionalities that usually take an image as input and produce an image or an attribute as output. The DIP reads, edits and writes binary files containing the 3-D matrix corresponding to a stack of axial images of a given geometry, which can be a human body or another volume of interest. It can also read any type of computational image and perform conversions. When the task involves only an output image
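
    The core operation such software automates, stacking 2-D slices into a 3-D matrix and segmenting it, can be sketched with generic tools. This is only an illustration using NumPy and Pillow under an assumed file layout, not the DIP implementation:

```python
import glob
import numpy as np
from PIL import Image   # Pillow, used here only as a generic image reader

# Assumed layout: one PNG per axial slice, ordered by filename
slice_files = sorted(glob.glob("slices/*.png"))
slices = [np.array(Image.open(f).convert("L")) for f in slice_files]
volume = np.stack(slices, axis=0)            # shape: (n_slices, rows, cols)

# Simple global-threshold segmentation into a binary 3-D mask
threshold = 100                              # arbitrary grey-level cut
mask = (volume >= threshold).astype(np.uint8)

# Store the 3-D matrix for later use in a voxel phantom pipeline
np.save("volume.npy", volume)
np.save("mask.npy", mask)
```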

  7. OsiriX: an open-source software for navigating in multidimensional DICOM images.

    Science.gov (United States)

    Rosset, Antoine; Spadola, Luca; Ratib, Osman

    2004-09-01

    A multidimensional image navigation and display software was designed for display and interpretation of large sets of multidimensional and multimodality images such as combined PET-CT studies. The software is developed in Objective-C on a Macintosh platform under the MacOS X operating system using the GNUstep development environment. It also benefits from the extremely fast and optimized 3D graphic capabilities of the OpenGL graphic standard widely used for computer games optimized for taking advantage of any hardware graphic accelerator boards available. In the design of the software special attention was given to adapt the user interface to the specific and complex tasks of navigating through large sets of image data. An interactive jog-wheel device widely used in the video and movie industry was implemented to allow users to navigate in the different dimensions of an image set much faster than with a traditional mouse or on-screen cursors and sliders. The program can easily be adapted for very specific tasks that require a limited number of functions, by adding and removing tools from the program's toolbar and avoiding an overwhelming number of unnecessary tools and functions. The processing and image rendering tools of the software are based on the open-source libraries ITK and VTK. This ensures that all new developments in image processing that could emerge from other academic institutions using these libraries can be directly ported to the OsiriX program. OsiriX is provided free of charge under the GNU open-source licensing agreement at http://homepage.mac.com/rossetantoine/osirix.

  8. Software tools of the Computis European project to process mass spectrometry images.

    Science.gov (United States)

    Robbe, Marie-France; Both, Jean-Pierre; Prideaux, Brendan; Klinkert, Ivo; Picaud, Vincent; Schramm, Thorsten; Hester, Atfons; Guevara, Victor; Stoeckli, Markus; Roempp, Andreas; Heeren, Ron M A; Spengler, Bernhard; Gala, Olivier; Haan, Serge

    2014-01-01

    Among the needs usually expressed by teams using mass spectrometry imaging, one that often arises is that for user-friendly software able to manage huge data volumes quickly and to provide efficient assistance for the interpretation of data. To answer this need, the Computis European project developed several complementary software tools to process mass spectrometry imaging data. Data Cube Explorer provides a simple spatial and spectral exploration for matrix-assisted laser desorption/ionisation-time of flight (MALDI-ToF) and time of flight-secondary-ion mass spectrometry (ToF-SIMS) data. SpectViewer offers visualisation functions, assistance to the interpretation of data, classification functionalities, peak list extraction to interrogate biological database and image overlay, and it can process data issued from MALDI-ToF, ToF-SIMS and desorption electrospray ionisation (DESI) equipment. EasyReg2D is able to register two images, in American Standard Code for Information Interchange (ASCII) format, issued from different technologies. The collaboration between the teams was hampered by the multiplicity of equipment and data formats, so the project also developed a common data format (imzML) to facilitate the exchange of experimental data and their interpretation by the different software tools. The BioMap platform for visualisation and exploration of MALDI-ToF and DESI images was adapted to parse imzML files, enabling its access to all project partners and, more globally, to a larger community of users. Considering the huge advantages brought by the imzML standard format, a specific editor (vBrowser) for imzML files and converters from proprietary formats to imzML were developed to enable the use of the imzML format by a broad scientific community. This initiative paves the way toward the development of a large panel of software tools able to process mass spectrometry imaging datasets in the future.
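
    For readers who want to inspect imzML files programmatically, a minimal sketch using the third-party pyimzML reader is given below. pyimzML is not part of the Computis tool set, and the file name and call signatures shown are assumptions to be checked against the library's documentation:

```python
# pip install pyimzml   (third-party reader, not a Computis deliverable)
from pyimzml.ImzMLParser import ImzMLParser

parser = ImzMLParser("example.imzML")             # hypothetical file name
for idx, (x, y, z) in enumerate(parser.coordinates):
    mzs, intensities = parser.getspectrum(idx)    # one mass spectrum per pixel
    # ... accumulate an ion image, pick peaks, etc.
```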

  9. SOFTWARE FOR REGIONS OF INTEREST RETRIEVAL ON MEDICAL 3D IMAGES

    Directory of Open Access Journals (Sweden)

    G. G. Stromov

    2014-01-01

    Background. This article describes the implementation of software for the retrieval of regions of interest (ROIs) in 3D medical images. It has been tested against a large volume of model MRIs. Material and methods. We tested the software against normal and pathological (severe multiple sclerosis) model MRIs from the BrainWeb resource. The technology stack is based on open-source cross-platform solutions. The storage system is implemented on MariaDB (an open-source fork of MySQL) with P/SQL extensions. Python 2.7 scripting was used to automate the extract-transform-load operations. The computational core is written in Java 7 with the Spring framework 3. MongoDB is used as a cache in the cluster of workstations. Maven 3 was chosen as the dependency manager and build system, and the project is hosted on GitHub. Results. As testing on SSMU's LAN has shown, the software efficiently retrieves ROIs that match the morphological substratum on pathological MRIs. Conclusion. Automating the diagnostic process in medical imaging reduces the subjective component in decision making and increases the availability of high-tech medicine. The software presented in this article is a complete solution for ROI retrieval and segmentation on model medical images in fully automated mode. We would like to thank Robert Vincent for his great help with the use of the BrainWeb resource.

  10. Utilizing a Photo-Analysis Software for Content Identifying Method (CIM)

    Directory of Open Access Journals (Sweden)

    Nejad Nasim Sahraei

    2015-01-01

    Content Identifying Methodology (CIM) was developed to measure public preferences in order to reveal the common characteristics of landscapes and aspects of the underlying perceptions, including the individual's reactions to content and spatial configuration; it can therefore assist with the identification of factors that influence preference. Regarding the analysis of landscape photographs through CIM, several studies have utilized image analysis software, such as Adobe Photoshop, in order to identify the physical contents in the scenes. This study evaluates public preferences for the aesthetic qualities of pedestrian bridges in urban areas through a photo-questionnaire survey, in which respondents evaluated images of pedestrian bridges in urban areas. Two groups of images were evaluated as the most and least preferred scenes, corresponding to the highest and lowest mean scores respectively. These two groups were analyzed by CIM and also evaluated based on the respondents' descriptions of each group to reveal the pattern of preferences and the factors that may affect them. Digimizer software was employed to triangulate the two approaches and to determine the role of these factors in people's preferences. This study introduces useful software for image analysis that can measure the physical contents and also their spatial organization in the scenes. The findings reveal that Digimizer could be a useful tool in CIM approaches to preference studies that utilize photographs in place of the actual landscape in order to determine the most important factors in public preferences for pedestrian bridges in urban areas.

  11. Reference image selection for difference imaging analysis

    CERN Document Server

    Huckvale, Leo; Sale, Stuart E

    2014-01-01

    Difference image analysis (DIA) is an effective technique for obtaining photometry in crowded fields, relative to a chosen reference image. As yet, however, optimal reference image selection is an unsolved problem. We examine how this selection depends on the combination of seeing, background and detector pixel size. Our tests use a combination of simulated data and quality indicators from DIA of well-sampled optical data and under-sampled near-infrared data from the OGLE and VVV surveys, respectively. We search for a figure-of-merit (FoM) which could be used to select reference images for each survey. While we do not find a universally applicable FoM, survey-specific measures indicate that the effect of spatial under-sampling may require a change in strategy from the standard DIA approach, even though seeing remains the primary criterion. We find that background is not an important criterion for reference selection, at least for the dynamic range in the images we test. For our analysis of VVV data in particu...

  12. Open Source software and social networks: disruptive alternatives for medical imaging.

    Science.gov (United States)

    Ratib, Osman; Rosset, Antoine; Heuberger, Joris

    2011-05-01

    In recent decades several major changes in computer and communication technology have pushed the limits of imaging informatics and PACS beyond the traditional system architecture providing new perspectives and innovative approach to a traditionally conservative medical community. Disruptive technologies such as the world-wide-web, wireless networking, Open Source software and recent emergence of cyber communities and social networks have imposed an accelerated pace and major quantum leaps in the progress of computer and technology infrastructure applicable to medical imaging applications. This paper reviews the impact and potential benefits of two major trends in consumer market software development and how they will influence the future of medical imaging informatics. Open Source software is emerging as an attractive and cost effective alternative to traditional commercial software developments and collaborative social networks provide a new model of communication that is better suited to the needs of the medical community. Evidence shows that successful Open Source software tools have penetrated the medical market and have proven to be more robust and cost effective than their commercial counterparts. Developed by developers that are themselves part of the user community, these tools are usually better adapted to the user's need and are more robust than traditional software programs being developed and tested by a large number of contributing users. This context allows a much faster and more appropriate development and evolution of the software platforms. Similarly, communication technology has opened up to the general public in a way that has changed the social behavior and habits adding a new dimension to the way people communicate and interact with each other. The new paradigms have also slowly penetrated the professional market and ultimately the medical community. Secure social networks allowing groups of people to easily communicate and exchange information

  13. Hazard Analysis of Software Requirements Specification for Process Module of FPGA-based Controllers in NPP

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Sejin; Kim, Eui-Sub; Yoo, Junbeom [Konkuk University, Seoul (Korea, Republic of); Keum, Jong Yong; Lee, Jang-Soo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    Software in the PLCs and FPGAs used to develop I and C systems should also be analyzed for hazards and risks before use. NUREG/CR-6430 proposes a method for performing software hazard analysis. It suggests an analysis technique for software-affected hazards and points out that software hazard analysis should be performed across the phases of the software life cycle, such as requirements analysis, design, detailed design and implementation. It also provides guide phrases for applying software hazard analysis. HAZOP (hazard and operability analysis) is one of the analysis techniques introduced in NUREG/CR-6430 and is a useful technique for applying guide phrases. HAZOP is sometimes used to analyze the safety of software. The analysis method of NUREG/CR-6430 has previously been used for the PLC software of Korean nuclear power plants. In those studies, appropriate guide phrases and analysis processes were selected for efficient application, and NUREG/CR-6430 was identified as providing applicable methods for software hazard analysis. We perform software hazard analysis of an FPGA software requirements specification with two approaches, NUREG/CR-6430 and HAZOP using general guide words, and we also perform a comparative analysis of them. The NUREG/CR-6430 approach has several pros and cons compared with the HAZOP approach using general guide words. It is sufficiently applicable to analyze the software requirements specification of an FPGA.

  14. Engine structures analysis software: Component Specific Modeling (COSMO)

    Science.gov (United States)

    McKnight, R. L.; Maffeo, R. J.; Schwartz, S.

    1994-08-01

    A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program which provides the necessary base parameters and loadings. This report contains the users manual for combustors, turbine blades, vanes, and disks.

  15. Development of software for the thermohydraulic analysis of air coolers

    Directory of Open Access Journals (Sweden)

    Šerbanović Slobodan P.

    2003-01-01

    Air coolers consume much more energy than other heat exchangers due to the large fan power required. This is an additional reason to establish reliable methods for the rational design and thermohydraulic analysis of these devices. The optimal values of the outlet temperature and air flow rate are of particular importance. The paper presents a methodology for the thermohydraulic calculation of air cooler performance, which is incorporated in the "Air Cooler" software module. The module covers two options: cooling and/or condensation of process fluids by ambient air. The calculated results can be presented in various ways, i.e. in tabular and graphical form.

  16. Development of RCM analysis software for Korean nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Ho; Choi, Kwang Hee; Jeong, Hyeong Jong [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    A software package called the KEPCO RCM workstation (KRCM) has been developed to optimize the maintenance strategies of Korean nuclear power plants. The program modules of the KRCM were designed in a manner that combines EPRI methodologies and KEPRI analysis techniques. The KRCM is being applied to three pilot systems of Yonggwang Units 1 and 2: the chemical and volume control system, the main steam system, and the compressed air system. In addition, the KRCM can be utilized as a tool to meet part of the requirements of the maintenance rule (MR) imposed by the U.S. NRC. 3 refs., 4 figs. (Author)

  17. Development of a software for INAA analysis automation

    Energy Technology Data Exchange (ETDEWEB)

    Zahn, Guilherme S.; Genezini, Frederico A.; Figueiredo, Ana Maria G.; Ticianelli, Regina B., E-mail: gzahn@ipen [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this work, software has been developed to automate the post-counting tasks in comparative INAA; it aims to be more flexible than the available options, integrating itself with some of the routines currently in use in the IPEN Activation Analysis Laboratory and allowing the user to choose between a fully automatic analysis and an Excel-oriented one. The software makes use of the Genie 2000 data importing and analysis routines and stores each 'energy-counts-uncertainty' table as a separate ASCII file that can be used later on if required by the analyst. Moreover, it generates an Excel-compatible CSV (comma separated values) file with only the relevant results from the analyses for each sample or comparator, as well as the results of the concentration calculations and the results obtained with four different statistical tools (unweighted average, weighted average, normalized residuals and Rajeval technique), allowing the analyst to double-check the results. Finally, a 'summary' CSV file is also produced, with the final concentration results obtained for each element in each sample. (author)
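
    Two of the statistical checks named above, the weighted average and a simple form of normalized residuals, are standard; the sketch below shows one common way they might be computed, with invented concentration values, and is not the authors' code:

```python
import numpy as np

def weighted_average(values, uncertainties):
    """Uncertainty-weighted mean and its internal uncertainty."""
    v = np.asarray(values, dtype=float)
    u = np.asarray(uncertainties, dtype=float)
    w = 1.0 / u**2
    mean = np.sum(w * v) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))
    return mean, sigma

def normalized_residuals(values, uncertainties):
    """Residual of each value from the weighted mean, in units of its own uncertainty (one simple form)."""
    mean, _ = weighted_average(values, uncertainties)
    return (np.asarray(values, dtype=float) - mean) / np.asarray(uncertainties, dtype=float)

# Invented replicate concentrations (mg/kg) and their uncertainties
conc = [12.1, 11.8, 12.6]
unc = [0.3, 0.4, 0.5]
print(weighted_average(conc, unc))
print(normalized_residuals(conc, unc))
```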

  18. Integrating software architectures for distributed simulations and simulation analysis communities.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Fellig, Daniel; Linebarger, John Michael; Moore, Patrick Curtis; Sa, Timothy J.; Hawley, Marilyn F.

    2005-10-01

    The one-year Software Architecture LDRD (No.79819) was a cross-site effort between Sandia California and Sandia New Mexico. The purpose of this research was to further develop and demonstrate integrating software architecture frameworks for distributed simulation and distributed collaboration in the homeland security domain. The integrated frameworks were initially developed through the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC), sited at SNL/CA, and the National Infrastructure Simulation & Analysis Center (NISAC), sited at SNL/NM. The primary deliverable was a demonstration of both a federation of distributed simulations and a federation of distributed collaborative simulation analysis communities in the context of the same integrated scenario, which was the release of smallpox in San Diego, California. To our knowledge this was the first time such a combination of federations under a single scenario has ever been demonstrated. A secondary deliverable was the creation of the standalone GroupMeld{trademark} collaboration client, which uses the GroupMeld{trademark} synchronous collaboration framework. In addition, a small pilot experiment that used both integrating frameworks allowed a greater range of crisis management options to be performed and evaluated than would have been possible without the use of the frameworks.

  19. Comparison between ASI, CNES and JAXA CCD analysis software for optical space debris monitoring

    Science.gov (United States)

    Paolillo, Fabrizio; Laas-Bourez, Myrtille; Yanagisawa, Toshifumi; Cappelletti, Chantal; Graziani, Filippo; Vidal, Bruno

    Since the nineties, the Italian Space Agency (ASI), the Centre National d'Etudes Spatiales (CNES) and the Japan Aerospace Exploration Agency (JAXA) have played an important role in Inter-Agency Space Debris Coordination Committee (IADC) activities. Respectively, the Group of Astrodynamics of the University Sapienza of Rome (GAUSS), the TAROT team (Télescope à Action Rapide pour les Objets Transitoires) and the Institute of Aerospace Technology (IAT) participate in optical space debris monitoring activities (WG1 at IADC) with the following facilities: 1. SpaDE observatory of ASI/GAUSS in Collepardo (Fr.), Italy. 2. TAROT observatories of CNES: one in Chile (ESO La Silla) and one in France (Observatoire de la Côte d'Azur, at Calern). 3. Nyukasayama Observatory of IAT/JAXA, Japan. Due to the large amount of data collected during the IADC coordinated observation campaigns and the autonomous campaigns, these research groups developed three different software packages for image processing automation and for the correlation of the detected objects with the catalogue. Using this software, the three observatories are improving the knowledge of the space debris population, in particular in the so-called geostationary belt (AI23.4 IADC International 2007 optical observation campaigns in higher Earth orbits and AI23.2 Investigation of high A/m ratio debris in higher Earth orbits), but they use different space debris monitoring techniques. With the aim of improving the CCD analysis capabilities of each research group, during the 27th IADC meeting ASI, CNES and JAXA started a cooperation in this field on the comparison of the image processing software. The objectives of this activity are: 1. Test of the ASI, CNES and JAXA CCD analysis software on real images taken with the 3 different observation strategies (each observatory uses a particular object extraction procedure). 2. Results comparison: number of bad detections, number of good detections, processing

  20. PROTEINCHALLENGE: Crowd sourcing in proteomics analysis and software development

    DEFF Research Database (Denmark)

    Martin, Sarah F.; Falkenberg, Heiner; Dyrlund, Thomas Franck

    2013-01-01

    In large-scale proteomics studies there is a temptation, after months of experimental work, to plug resulting data into a convenient, if poorly implemented, set of tools, which may neither do the data justice nor help answer the scientific question. In this paper we have captured key concerns, including arguments for community-wide open source software development and "big data" compatible solutions for the future. For the meantime, we have laid out ten top tips for data processing. With these at hand, a first large-scale proteomics analysis hopefully becomes less daunting to navigate. However there is clearly a real need for robust tools, standard operating procedures and general acceptance of best practises. Thus we submit to the proteomics community a call for a community-wide open set of proteomics analysis challenges, PROTEINCHALLENGE, that directly target and compare data analysis workflows.

  1. Graph based communication analysis for hardware/software codesign

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1999-01-01

    In this paper we present a coarse grain CDFG (Control/Data Flow Graph) model suitable for hardware/software partitioning of single processes and demonstrate how it is necessary to perform various transformations on the graph structure before partitioning in order to achieve a structure that allows for accurate estimation of communication overhead between nodes mapped to different processors. In particular, we demonstrate how various transformations of control structures can lead to a more accurate communication analysis and more efficient implementations. The purpose of the transformations is to obtain a CDFG structure that is sufficiently fine grained as to support a correct communication analysis but not more fine grained than necessary as this will increase partitioning and analysis time.
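
    To make the communication analysis concrete, here is a small hypothetical sketch, not the authors' model: nodes of a coarse-grain graph are mapped to hardware or software, and the data transferred on every edge crossing the partition boundary is summed:

```python
# Hypothetical coarse-grain graph: each edge carries the number of bytes transferred
edges = [
    ("read_input", "filter", 4096),
    ("filter", "fft", 4096),
    ("fft", "postprocess", 8192),
]

# Hypothetical partitioning decision for each node
mapping = {
    "read_input": "sw",
    "filter": "hw",
    "fft": "hw",
    "postprocess": "sw",
}

def communication_cost(edges, mapping):
    """Sum the bytes moved across the hardware/software boundary."""
    return sum(n_bytes for src, dst, n_bytes in edges
               if mapping[src] != mapping[dst])

print(communication_cost(edges, mapping))   # 4096 + 8192 bytes cross the boundary
```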

  2. Finite Element Analysis of Wheel Rim Using Abaqus Software

    Directory of Open Access Journals (Sweden)

    Bimal Bastin

    2017-02-01

    The rim is the "outer edge of a wheel, holding the tire". It makes up the outer circular design of the wheel on which the inside edge of the tire is mounted on vehicles such as automobiles. A standard automotive steel wheel rim is made from rectangular sheet metal. Design is an important industrial activity which influences the quality of the product being produced. The wheel rim is modeled using the modeling software SOLIDWORKS. This model is then imported into ABAQUS for analysis. A static load analysis has been performed by applying a pressure of 5 N/mm2. The materials taken for analysis are steel alloy, aluminium, magnesium and forged steel. The displacement of the rim under the static load is noted for the different materials, and the maximum principal stresses are also noted.

  3. Phenotiki: an open software and hardware platform for affordable and easy image-based phenotyping of rosette-shaped plants.

    Science.gov (United States)

    Minervini, Massimo; Giuffrida, Mario V; Perata, Pierdomenico; Tsaftaris, Sotirios A

    2017-04-01

    Phenotyping is important to understand plant biology, but current solutions are costly, not versatile or are difficult to deploy. To solve this problem, we present Phenotiki, an affordable system for plant phenotyping that, relying on off-the-shelf parts, provides an easy to install and maintain platform, offering an out-of-box experience for a well-established phenotyping need: imaging rosette-shaped plants. The accompanying software (with available source code) processes data originating from our device seamlessly and automatically. Our software relies on machine learning to devise robust algorithms, and includes an automated leaf count obtained from 2D images without the need of depth (3D). Our affordable device (~€200) can be deployed in growth chambers or greenhouse to acquire optical 2D images of approximately up to 60 adult Arabidopsis rosettes concurrently. Data from the device are processed remotely on a workstation or via a cloud application (based on CyVerse). In this paper, we present a proof-of-concept validation experiment on top-view images of 24 Arabidopsis plants in a combination of genotypes that has not been compared previously. Phenotypic analysis with respect to morphology, growth, color and leaf count has not been performed comprehensively before now. We confirm the findings of others on some of the extracted traits, showing that we can phenotype at reduced cost. We also perform extensive validations with external measurements and with higher fidelity equipment, and find no loss in statistical accuracy when we use the affordable setting that we propose. Device set-up instructions and analysis software are publicly available ( http://phenotiki.com). © 2017 The Authors The Plant Journal © 2017 John Wiley & Sons Ltd.

  4. ANALYSIS OF FUNDUS IMAGES

    DEFF Research Database (Denmark)

    2000-01-01

    A method of classifying objects in an image as respective arterial or venous vessels comprising: identifying pixels of the said modified image which are located on a line object, determining which of the said image points is associated with a crossing point or a bifurcation of the respective line object, wherein a crossing point is represented by an image point which is the intersection of four line segments, performing a matching operation on pairs of said line segments for each said crossing point, to determine the path of blood vessels in the image, thereby classifying the line objects in the original image into two arbitrary sets, and thereafter designating one of the sets as representing venous structure, the other of the sets as representing arterial structure, depending on one or more of the following criteria: (a) complexity of structure; (b) average density; (c) average width; (d) tortuosity

  5. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    2011-01-01

    of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code...

  6. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code...

  7. Oncological image analysis: medical and molecular image analysis

    Science.gov (United States)

    Brady, Michael

    2007-03-01

    This paper summarises the work we have been doing on joint projects with GE Healthcare on colorectal and liver cancer, and with Siemens Molecular Imaging on dynamic PET. First, we recall the salient facts about cancer and oncological image analysis. Then we introduce some of the work that we have done on analysing clinical MRI images of colorectal and liver cancer, specifically the detection of lymph nodes and segmentation of the circumferential resection margin. In the second part of the paper, we shift attention to the complementary aspect of molecular image analysis, illustrating our approach with some recent work on: tumour acidosis, tumour hypoxia, and multiply drug resistant tumours.

  8. AROSICS: An Automated and Robust Open-Source Image Co-Registration Software for Multi-Sensor Satellite Data

    Directory of Open Access Journals (Sweden)

    Daniel Scheffler

    2017-07-01

    Geospatial co-registration is a mandatory prerequisite when dealing with remote sensing data. Inter- or intra-sensoral misregistration will negatively affect any subsequent image analysis, specifically when processing multi-sensoral or multi-temporal data. In recent decades, many algorithms have been developed to enable manual, semi- or fully automatic displacement correction. Especially in the context of big data processing and the development of automated processing chains that aim to be applicable to different remote sensing systems, there is a strong need for efficient, accurate and generally usable co-registration. Here, we present AROSICS (Automated and Robust Open-Source Image Co-Registration Software), a Python-based open-source software package including an easy-to-use user interface for automatic detection and correction of sub-pixel misalignments between various remote sensing datasets. It is independent of spatial or spectral characteristics and robust against high degrees of cloud coverage and spectral and temporal land cover dynamics. The co-registration is based on phase correlation for sub-pixel shift estimation in the frequency domain utilizing the Fourier shift theorem in a moving-window manner. A dense grid of spatial shift vectors can be created and automatically filtered by combining various validation and quality estimation metrics. Additionally, the software supports the masking of, e.g., clouds and cloud shadows to exclude such areas from spatial shift detection. The software has been tested on more than 9000 satellite images acquired by different sensors. The results are evaluated exemplarily for two inter-sensoral and two intra-sensoral use cases and show registration results in the sub-pixel range with root mean square error fits around 0.3 pixels and better.
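
    Phase correlation via the Fourier shift theorem, which AROSICS builds on, can be sketched in a few lines. The simplified version below recovers only integer pixel shifts (AROSICS itself refines to sub-pixel accuracy and adds validation); the array names and toy data are invented:

```python
import numpy as np

def phase_correlation_shift(reference, target):
    """Estimate the (row, col) shift by which `target` is displaced relative to `reference`.

    Both inputs are 2-D arrays of the same shape (e.g. matching image windows).
    """
    F_ref = np.fft.fft2(reference)
    F_tgt = np.fft.fft2(target)
    cross_power = F_tgt * np.conj(F_ref)
    cross_power /= np.abs(cross_power) + 1e-12       # normalise -> keep phase only
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap shifts larger than half the window size into negative values
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, correlation.shape)]
    return tuple(shifts)

# Toy example: shift a random window by (3, -5) pixels and recover the shift
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
tgt = np.roll(ref, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(ref, tgt))             # expected: (3, -5)
```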

  9. Don't Blame the Software: Using Qualitative Data Analysis Software Successfully in Doctoral Research

    Directory of Open Access Journals (Sweden)

    Michelle Salmona

    2016-07-01

    In this article, we explore the learning experiences of doctoral candidates as they use qualitative data analysis software (QDAS). Of particular interest is the process of adopting technology during the development of research methodology. Using an action research approach, data was gathered over five years from advanced doctoral research candidates and supervisors. The technology acceptance model (TAM) was then applied as a theoretical analytic lens for better understanding how students interact with new technology. Findings relate to two significant barriers which doctoral students confront: 1. aligning perceptions of ease of use and usefulness is essential in overcoming resistance to technological change; 2. transparency into the research process through technology promotes insights into methodological challenges. Transitioning through both barriers requires a competent foundation in qualitative research. The study acknowledges the importance of higher degree research, curriculum reform and doctoral supervision in post-graduate research training together with their interconnected relationships in support of high-quality inquiry. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs1603117

  10. Prospective comparison of (18)F-NaF PET/CT versus (18)F-FDG PET/CT imaging in mandibular extension of head and neck squamous cell carcinoma with dedicated analysis software and validation with surgical specimen. A preliminary study.

    Science.gov (United States)

    Lopez, Raphael; Gantet, Pierre; Salabert, Anne Sophie; Julian, Anne; Hitzel, Anne; Herbault-Barres, Beatrice; Fontan, Charlotte; Alshehri, Sarah; Payoux, Pierre

    2017-09-01

    The aim of this study is to propose a new method to quantify radioactivity with PET/CT imaging in mandibular extension in head and neck squamous cell carcinoma (HNSCC), using innovative software, and to compare results with microscopic surgical specimens. This prospective study enrolled 15 patients who underwent (18)F-NaF and (18)F-FDG PET/CT. We compared the delineations of bone invasions obtained with (18)F-NaF PET/CT and (18)F-FDG PET/CT with the results of histopathological analysis of mandibular resections (from right and left bone borders). A method for visualization and quantification of PET images was developed. For all patients, a significant difference (p = 0.032 for right limits and p = 0.011 for left limits) was observed between (18)F-FDG PET/CT imaging and histopathology results, and no significant difference (p = 0.88 for right limits and p = 0.55 for left limits) was observed between (18)F-NaF PET/CT imaging and histopathology results. The right limits were less than 10 mm in 93% of patients, and the left limits were less than 10 mm in 86% of patients. The dedicated software enabled the objective delineation of radioactivity within the bone. We can confirm that (18)F-NaF is a precise and specific bone marker for the assessment of intraosseous mandibular extensions of head and neck cancers. Therapeutic, III. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  11. APERO, AN OPEN SOURCE BUNDLE ADJUSMENT SOFTWARE FOR AUTOMATIC CALIBRATION AND ORIENTATION OF SET OF IMAGES

    Directory of Open Access Journals (Sweden)

    M. Pierrot Deseilligny

    2012-09-01

    IGN has developed a set of photogrammetric tools, APERO and MICMAC, for computing 3D models from sets of images. This software, developed initially for IGN's internal needs, is now delivered as open source code. This paper focuses on the presentation of APERO, the orientation software. Compared to some other free software initiatives, it is probably more complex but also more complete; its targeted users are professionals (architects, archaeologists, geomorphologists) rather than the general public. APERO uses both a computer vision approach for the estimation of the initial solution and photogrammetry for a rigorous compensation of the total error; it has a large library of parametric distortion models allowing a precise modelization of every kind of pinhole camera we know, including several fish-eye models; there are also several tools for geo-referencing the results. The results are illustrated on various applications, including the data-set of the 3D-Arch workshop.
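
    As an illustration of what a parametric distortion model means in this context, here is a minimal pinhole projection with a single radial distortion coefficient. This is not APERO's own parameterisation, only a simplified Brown-style sketch with invented numbers:

```python
import numpy as np

def project(point_cam, focal, cx, cy, k1):
    """Project a 3-D point given in camera coordinates to pixel coordinates.

    Uses an ideal pinhole camera plus a single radial distortion coefficient k1
    (a simplified Brown-style model, purely illustrative).
    """
    X, Y, Z = point_cam
    x, y = X / Z, Y / Z                    # normalised image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                      # radial distortion factor
    u = focal * d * x + cx
    v = focal * d * y + cy
    return u, v

# Invented numbers: a point 10 m in front of a camera with a 4000 x 3000 pixel sensor
print(project((1.0, -0.5, 10.0), focal=3200.0, cx=2000.0, cy=1500.0, k1=-0.05))
```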

  12. Apero, AN Open Source Bundle Adjusment Software for Automatic Calibration and Orientation of Set of Images

    Science.gov (United States)

    Pierrot Deseilligny, M.; Clery, I.

    2011-09-01

    IGN has developed a set of photogrammetric tools, APERO and MICMAC, for computing 3D models from sets of images. This software, initially developed for IGN's internal needs, is now delivered as open source code. This paper focuses on the presentation of APERO, the orientation software. Compared to some other free software initiatives, it is probably more complex but also more complete; its targeted users are professionals (architects, archaeologists, geomorphologists) rather than the general public. APERO uses both a computer vision approach for the estimation of the initial solution and photogrammetry for a rigorous compensation of the total error; it has a large library of parametric distortion models allowing precise modeling of every kind of pinhole camera we know of, including several fish-eye models; there are also several tools for geo-referencing the results. The results are illustrated on various applications, including the data-set of the 3D-Arch workshop.

  13. 3D reconstruction of SEM images by use of optical photogrammetry software.

    Science.gov (United States)

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

    Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science, and many biological questions require information about the true 3D structure of the specimen. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a stereoscopic 3D reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close range photogrammetry. In optical close range photogrammetry a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special SEM requirements. Instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using the software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaptation. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction and suitable for various applications in research and teaching.

  14. Evaluation of Peak-Fitting Software for Gamma Spectrum Analysis

    CERN Document Server

    Zahn, Guilherme S; Moralles, Maurício

    2015-01-01

    In all applications of gamma-ray spectroscopy, one of the most important and delicate parts of the data analysis is the fitting of the gamma-ray spectra, where information such as the number of counts, the position of the centroid and the width, for instance, is associated with each peak of each spectrum. There is a huge choice of computer programs that perform this type of analysis, and the ones most commonly used in routine work automatically locate and fit the peaks; this fit can be made in several different ways -- the most common are to fit a Gaussian function to each peak or simply to integrate the area under the peak, but some packages go far beyond this and include several small corrections to the simple Gaussian peak function in order to compensate for secondary effects. In this work several gamma-ray spectroscopy programs are compared in the task of finding and fitting the gamma-ray peaks in spectra taken with standard sources of $^{137}$Cs, $^{60}$Co, $^{133}$Ba and $^{152}$Eu. The results...
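
    As a concrete illustration of the simplest fitting approach compared in such studies, the sketch below fits a Gaussian peak on a linear background to a small synthetic region of a spectrum with scipy.optimize.curve_fit. The channel range, peak parameters and Poisson weighting are illustrative assumptions; production codes add tails, steps and other corrections on top of this.

      import numpy as np
      from scipy.optimize import curve_fit

      def peak(x, area, centroid, sigma, a, b):
          """Gaussian peak (parameterized by its area) sitting on a linear background."""
          gauss = area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - centroid) / sigma) ** 2)
          return gauss + a * x + b

      # Synthetic region of interest around a single peak (e.g. near the 661.7 keV line of 137Cs).
      channels = np.arange(640, 685, dtype=float)
      truth = (5000.0, 661.7, 1.2, -0.5, 400.0)
      counts = np.random.poisson(peak(channels, *truth))

      # Weight each channel by its Poisson uncertainty (sqrt(N), avoiding zero).
      popt, pcov = curve_fit(peak, channels, counts, p0=(4000, 662, 1.5, 0, 350),
                             sigma=np.sqrt(np.maximum(counts, 1)))
      area, centroid, width = popt[0], popt[1], abs(popt[2])
      print(f"area = {area:.0f} counts, centroid = {centroid:.2f}, FWHM = {2.355 * width:.2f}")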

  15. General Meta-Models to Analysis of Software Architecture Definitions

    Directory of Open Access Journals (Sweden)

    GholamAli Nejad HajAli Irani

    2011-12-01

    Full Text Available An important step toward understanding software architecture is to provide a clear definition of it. More than 150 valid definitions have been presented for identifying software architecture, so a comparison among them is needed to give us a better understanding of the existing definitions. In this paper an analysis of different aspects of the current definitions is provided based on their incorporated elements. To this end, the definitions are first collected and, after an analysis, are broken into their constituent elements, which are shown in one table. Some selected parameters in the table are then classified into groups for comparison purposes, and all parameters of each individual group are specified and compared with each other. This procedure is repeated for every group. Finally, a meta-model is developed for each group. The aim is not to accept or reject a specific definition, but rather to contrast the definitions and their respective constituent elements in order to construct a background for gaining better perceptions of software architecture, which in turn can benefit the introduction of an appropriate definition.

  16. eXtended CASA Line Analysis Software Suite (XCLASS)

    Science.gov (United States)

    Möller, T.; Endres, C.; Schilke, P.

    2017-01-01

    The eXtended CASA Line Analysis Software Suite (XCLASS) is a toolbox for the Common Astronomy Software Applications package (CASA) containing new functions for modeling interferometric and single dish data. Among the tools is the myXCLASS program which calculates synthetic spectra by solving the radiative transfer equation for an isothermal object in one dimension, whereas the finite source size and dust attenuation are considered as well. Molecular data required by the myXCLASS program are taken from an embedded SQLite3 database containing entries from the Cologne Database for Molecular Spectroscopy (CDMS) and JPL using the Virtual Atomic and Molecular Data Center (VAMDC) portal. Additionally, the toolbox provides an interface for the model optimizer package Modeling and Analysis Generic Interface for eXternal numerical codes (MAGIX), which helps to find the best description of observational data using myXCLASS (or another external model program), that is, finding the parameter set that most closely reproduces the data. http://www.astro.uni-koeln.de/projects/schilke/myXCLASSInterface A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/598/A7
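
    The one-dimensional isothermal solution of the radiative transfer equation that such codes are built around can be written down in a few lines. The sketch below is a hedged illustration of that textbook relation only, not XCLASS itself, and ignores line profiles, finite source size and dust: the background-subtracted brightness temperature is (J_nu(T_ex) - J_nu(T_bg)) (1 - exp(-tau)). The excitation temperature, optical depth and frequency in the example are arbitrary.

      import numpy as np

      H = 6.62607015e-34   # Planck constant [J s]
      KB = 1.380649e-23    # Boltzmann constant [J/K]

      def j_nu(temperature, freq_hz):
          """Planck radiation temperature J_nu(T) = (h nu / k) / (exp(h nu / k T) - 1)."""
          x = H * freq_hz / (KB * temperature)
          return (H * freq_hz / KB) / np.expm1(x)

      def brightness_temperature(t_ex, tau, freq_hz, t_bg=2.73):
          """Isothermal one-dimensional radiative transfer solution (background-subtracted)."""
          return (j_nu(t_ex, freq_hz) - j_nu(t_bg, freq_hz)) * (1.0 - np.exp(-tau))

      # Hypothetical line: excitation temperature 150 K, optical depth 0.3, at 345 GHz.
      print(brightness_temperature(150.0, 0.3, 345e9))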

  17. rosettR: protocol and software for seedling area and growth analysis

    DEFF Research Database (Denmark)

    Tomé, Filipa; Jansseune, Karel; Saey, Bernadette

    2017-01-01

    Growth is an important parameter to consider when studying the impact of treatments or mutations on plant physiology. Leaf area and growth rates can be estimated efficiently from images of plants, but the experiment setup, image analysis, and statistical evaluation can be laborious, often requiring substantial manual effort and programming skills. Here we present rosettR, a non-destructive and high-throughput phenotyping protocol for the measurement of total rosette area of seedlings grown in plates in sterile conditions. We demonstrate that our protocol can be used to accurately detect growth differences among different genotypes and in response to light regimes and osmotic stress. rosettR is implemented as a package for the statistical computing software R and provides easy to use functions to design an experiment, analyze the images, and generate reports on quality control as well as a final...
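
    rosettR itself is an R package; purely as an illustration of the kind of measurement it automates, the sketch below estimates per-plant rosette areas from a plate photograph in Python with scikit-image. The greenness index, threshold, minimum object size and file name are assumptions for the sketch, not the package's actual algorithm.

      import numpy as np
      from skimage import io, measure
      from skimage.filters import threshold_otsu

      def rosette_areas(image_path, min_pixels=50):
          """Segment green rosettes from a plate photograph and return one area per detected plant."""
          rgb = io.imread(image_path).astype(float)
          # Greenness index: plants are brighter in the green channel than in red and blue.
          greenness = rgb[..., 1] - 0.5 * rgb[..., 0] - 0.5 * rgb[..., 2]
          mask = greenness > threshold_otsu(greenness)
          labels = measure.label(mask)
          return [region.area for region in measure.regionprops(labels) if region.area >= min_pixels]

      # Hypothetical usage: areas in pixels, convertible to mm^2 with a known plate scale.
      # print(rosette_areas("plate_day7.png"))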

  18. The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors

    Science.gov (United States)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

    Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in photogrammetry and computer vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they compose an attractive 3D digitization approach; consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can easily be used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open source software and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Due to the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensors plays an effective role in evaluating and finding the optimized method to generate three-dimensional models. Much research has been carried out to identify a suitable software and algorithm to achieve an accurate and complete model, but little attention is paid to the type of sensor used and its effects on the quality of the final model. The purpose of this paper is the deliberation and introduction of an appropriate combination of a sensor and software to provide a complete model with the highest accuracy. To do this, different software, used in previous studies, were compared and

  19. Hyperspectral image analysis. A tutorial

    DEFF Research Database (Denmark)

    Amigo Rubio, Jose Manuel; Babamoradi, Hamid; Elcoroaristizabal Martin, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing will be exposed, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using near infrared hyperspectral imaging and Partial Least Squares - Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case.

  20. Hyperspectral image analysis. A tutorial

    Energy Technology Data Exchange (ETDEWEB)

    Amigo, José Manuel, E-mail: jmar@food.ku.dk [Spectroscopy and Chemometrics Group, Department of Food Sciences, Faculty of Science, University of Copenhagen, Rolighedsvej 30, Frederiksberg C DK–1958 (Denmark); Babamoradi, Hamid [Spectroscopy and Chemometrics Group, Department of Food Sciences, Faculty of Science, University of Copenhagen, Rolighedsvej 30, Frederiksberg C DK–1958 (Denmark); Elcoroaristizabal, Saioa [Spectroscopy and Chemometrics Group, Department of Food Sciences, Faculty of Science, University of Copenhagen, Rolighedsvej 30, Frederiksberg C DK–1958 (Denmark); Chemical and Environmental Engineering Department, School of Engineering, University of the Basque Country, Alameda de Urquijo s/n, E-48013 Bilbao (Spain)

    2015-10-08

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing will be exposed, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares – Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case. - Highlights: • Comprehensive tutorial of Hyperspectral Image analysis. • Hierarchical discrimination of six classes of plastics containing flame retardant. • Step by step guidelines to perform class-modeling on hyperspectral images. • Fusion of multivariate data analysis and digital image processing methods. • Promising methodology for real-time detection of plastics containing flame retardant.
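
    As a minimal sketch of the classification step described in the tutorial, the code below runs a PLS-DA-style analysis on unfolded hyperspectral pixels with scikit-learn's PLSRegression (one-hot class matrix, prediction by largest response). The cube dimensions, number of latent variables and random labels are placeholders; the tutorial's own workflow, preprocessing and validation are considerably more careful.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def plsda_fit(spectra, labels, n_components=5):
          """Fit PLS-DA: regress a one-hot class matrix on the pixel spectra."""
          classes = np.unique(labels)
          Y = (labels[:, None] == classes[None, :]).astype(float)   # one-hot encoding
          model = PLSRegression(n_components=n_components).fit(spectra, Y)
          return model, classes

      def plsda_predict(model, classes, spectra):
          """Assign each pixel to the class with the largest predicted response."""
          return classes[np.argmax(model.predict(spectra), axis=1)]

      # Hypothetical hypercube of shape (rows, cols, wavelengths), unfolded to (pixels, wavelengths).
      cube = np.random.rand(60, 60, 200)
      X = cube.reshape(-1, cube.shape[-1])
      y = np.random.randint(0, 3, size=X.shape[0])                  # placeholder class labels
      model, classes = plsda_fit(X, y)
      label_image = plsda_predict(model, classes, X).reshape(cube.shape[:2])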

  1. HistFitter software framework for statistical data analysis

    CERN Document Server

    Baak, M.; Côte, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-01-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fitted to data and interpreted with statistical tests. A key innovation of HistFitter is its design, which is rooted in core analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its very fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with mu...

  2. The COMPTEL Processing and Analysis Software system (COMPASS)

    Science.gov (United States)

    de Vries, C. P.; COMPTEL Collaboration

    The data analysis system of the gamma-ray Compton Telescope (COMPTEL) onboard the Compton-GRO spacecraft is described. A continuous stream of data of the order of 1 kbyte per second is generated by the instrument. The data processing and analysis software is built around a relational database management system (RDBMS) in order to be able to trace the heritage and processing status of all data in the processing pipeline. Four institutes cooperate in this effort, requiring procedures to keep local RDBMS contents identical between the sites and swift exchange of data using network facilities. Lately, there has been a gradual move of the system from central processing facilities towards clusters of workstations.

  3. Optical granulometric analysis of sedimentary deposits by color segmentation-based software: OPTGRAN-CS

    Science.gov (United States)

    Chávez, G. Moreno; Sarocchi, D.; Santana, E. Arce; Borselli, L.

    2015-12-01

    The study of grain size distribution is fundamental for understanding sedimentological environments. Through these analyses, clast erosion, transport and deposition processes can be interpreted and modeled. However, grain size distribution analysis can be difficult in some outcrops due to the number and complexity of the arrangement of clasts and matrix and their physical size. Despite various technological advances, it is almost impossible to get the full grain size distribution (blocks to sand grain size) with a single method or instrument of analysis. For this reason development in this area continues to be fundamental. In recent years, various methods of particle size analysis by automatic image processing have been developed, due to their potential advantages with respect to classical ones: speed and the final detailed content of information (virtually for each analyzed particle). In this framework, we have developed a novel algorithm and software for grain size distribution analysis, based on color image segmentation using an entropy-controlled quadratic Markov measure field algorithm and the Rosiwal method for counting intersections between clasts and linear transects in the images. We test the novel algorithm in different sedimentary deposit types from 14 varieties of sedimentological environments. The results of the new algorithm were compared with grain counts performed manually with the same Rosiwal method by experts. The new algorithm has the same accuracy as a classical manual count process, but the application of this innovative methodology is much easier and dramatically less time-consuming. The final productivity of the new software for the analysis of clast deposits after recording field outcrop images can be increased significantly.
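
    As a simplified illustration of the Rosiwal linear-transect idea used in the counting step (not the entropy-controlled Markov-field segmentation itself), the sketch below measures clast intercept lengths along equally spaced horizontal transects of a binary clast mask; the toy mask stands in for the output of a prior color segmentation.

      import numpy as np

      def transect_intercepts(clast_mask, n_transects=20):
          """Measure intercept lengths of clasts along equally spaced horizontal transects."""
          rows = np.linspace(0, clast_mask.shape[0] - 1, n_transects).astype(int)
          lengths = []
          for r in rows:
              line = clast_mask[r].astype(int)
              edges = np.flatnonzero(np.diff(np.concatenate(([0], line, [0]))))
              starts, ends = edges[0::2], edges[1::2]     # entry/exit points of each clast crossed
              lengths.extend(ends - starts)               # intercept length in pixels
          return np.array(lengths)

      # Hypothetical binary mask (True = clast, False = matrix), e.g. from a prior color segmentation.
      mask = np.zeros((200, 300), dtype=bool)
      mask[50:90, 40:120] = True
      mask[120:170, 180:260] = True
      print(transect_intercepts(mask))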

  4. Software for the analysis and simulations of measurements; Software para analise e simulacao de medicoes

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Augusto Cesar Assis; Sarmento, Christiana Lauar; Mota, Geraldo Cesar; Domingos, Marileide Mourao; Belo, Noema Sant'Anna; Alves, Tulio Marcus Machado [Companhia Energetica de Minas Gerais (CEMIG), Belo Horizonte, MG (Brazil)

    1992-12-31

    This paper describes the development of graphic software that analyzes the behaviour of electric power measurements and permits the calculation of 'percent errors' derived from measurement inaccuracy. For each situation, the software displays the correct connection diagram, the measurement diagram, the 'percent error' and the behaviour of this error as a function of the load power factor. 14 figs., 4 refs.

  5. The Image-Guided Surgery ToolKit IGSTK: an open source C++ software toolkit

    Science.gov (United States)

    Cheng, Peng; Ibanez, Luis; Gobbi, David; Gary, Kevin; Aylward, Stephen; Jomier, Julien; Enquobahrie, Andinet; Zhang, Hui; Kim, Hee-su; Blake, M. Brian; Cleary, Kevin

    2007-03-01

    The Image-Guided Surgery Toolkit (IGSTK) is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. The focus of the toolkit is on robustness using a state machine architecture. This paper presents an overview of the project based on a recent book which can be downloaded from igstk.org. The paper includes an introduction to open source projects, a discussion of our software development process and the best practices that were developed, and an overview of requirements. The paper also presents the architecture framework and main components. This presentation is followed by a discussion of the state machine model that was incorporated and the associated rationale. The paper concludes with an example application.
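
    IGSTK is a C++ library with its own state machine classes; the snippet below is only a generic, minimal Python illustration of the design idea emphasized here, namely that every request is validated against an explicit transition table so that a call made in the wrong state is rejected safely instead of leaving the component undefined. The states and events are hypothetical.

      # Minimal state-machine sketch (not IGSTK's actual API).
      TRANSITIONS = {
          ("Idle", "AttachTracker"): "TrackerAttached",
          ("TrackerAttached", "StartTracking"): "Tracking",
          ("Tracking", "StopTracking"): "TrackerAttached",
          ("TrackerAttached", "DetachTracker"): "Idle",
      }

      class TrackerComponent:
          def __init__(self):
              self.state = "Idle"

          def request(self, event):
              nxt = TRANSITIONS.get((self.state, event))
              if nxt is None:
                  print(f"request '{event}' ignored in state '{self.state}'")
              else:
                  self.state = nxt

      tracker = TrackerComponent()
      tracker.request("StartTracking")   # rejected: not allowed from Idle
      tracker.request("AttachTracker")
      tracker.request("StartTracking")
      print(tracker.state)               # Tracking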

  6. Montage: a grid portal and software toolkit for science-grade astronomical image mosaicking

    CERN Document Server

    Jacob, Joseph C; Berriman, G Bruce; Good, John; Laity, Anastasia C; Deelman, Ewa; Kesselman, Carl; Singh, Gurmeet; Su, Mei-Hui; Prince, Thomas A; Williams, Roy

    2010-01-01

    Montage is a portable software toolkit for constructing custom, science-grade mosaics by composing multiple astronomical images. The mosaics constructed by Montage preserve the astrometry (position) and photometry (intensity) of the sources in the input images. The mosaic to be constructed is specified by the user in terms of a set of parameters, including dataset and wavelength to be used, location and size on the sky, coordinate system and projection, and spatial sampling rate. Many astronomical datasets are massive, and are stored in distributed archives that are, in most cases, remote with respect to the available computational resources. Montage can be run on both single- and multi-processor computers, including clusters and grids. Standard grid tools are used to run Montage in the case where the data or computers used to construct a mosaic are located remotely on the Internet. This paper describes the architecture, algorithms, and usage of Montage as both a software toolkit and as a grid portal. Timing ...

  7. AxonSeg: Open Source Software for Axon and Myelin Segmentation and Morphometric Analysis.

    Science.gov (United States)

    Zaimi, Aldo; Duval, Tanguy; Gasecka, Alicja; Côté, Daniel; Stikov, Nikola; Cohen-Adad, Julien

    2016-01-01

    Segmenting axon and myelin from microscopic images is relevant for studying the peripheral and central nervous system and for validating new MRI techniques that aim at quantifying tissue microstructure. While several software packages have been proposed, their interface is sometimes limited and/or they are designed to work with a specific modality (e.g., scanning electron microscopy (SEM) only). Here we introduce AxonSeg, which performs automatic axon and myelin segmentation on histology images and extracts relevant morphometric information, such as the axon diameter distribution, axon density and the myelin g-ratio. AxonSeg includes a simple and intuitive MATLAB-based graphical user interface (GUI) and can easily be adapted to a variety of imaging modalities. The main steps of AxonSeg consist of: (i) image pre-processing; (ii) pre-segmentation of axons over a cropped image and discriminant analysis (DA) to select the best parameters based on axon shape and intensity information; (iii) automatic axon and myelin segmentation over the full image; and (iv) atlas-based statistics to extract morphometric information. Segmentation results from standard optical microscopy (OM), SEM and coherent anti-Stokes Raman scattering (CARS) microscopy are presented, along with validation against manual segmentations. Being fully-automatic after a quick manual intervention on a cropped image, we believe AxonSeg will be useful to researchers interested in large throughput histology. AxonSeg is open source and freely available at: https://github.com/neuropoly/axonseg.

  8. AxonSeg: open source software for axon and myelin segmentation and morphometric analysis

    Directory of Open Access Journals (Sweden)

    Aldo Zaimi

    2016-08-01

    Full Text Available Segmenting axon and myelin from microscopic images is relevant for studying the peripheral and central nervous system and for validating new MRI techniques that aim at quantifying tissue microstructure. While several software packages have been proposed, their interface is sometimes limited and/or they are designed to work with a specific modality (e.g., scanning electron microscopy only). Here we introduce AxonSeg, which performs automatic axon and myelin segmentation on histology images and extracts relevant morphometric information, such as the axon diameter distribution, axon density and the myelin g-ratio. AxonSeg includes a simple and intuitive MATLAB-based graphical user interface and can easily be adapted to a variety of imaging modalities. The main steps of AxonSeg consist of: (i) image pre-processing, (ii) pre-segmentation of axons over a cropped image and discriminant analysis to select the best parameters based on axon shape and intensity information, (iii) automatic axon and myelin segmentation over the full image and (iv) atlas-based statistics to extract morphometric information. Segmentation results from standard optical microscopy (OM), scanning electron microscopy (SEM) and coherent anti-Stokes Raman scattering (CARS) microscopy are presented, along with validation against manual segmentations. Being fully-automatic after a quick manual intervention on a cropped image, we believe AxonSeg will be useful to researchers interested in large throughput histology. AxonSeg is open source and freely available at: https://github.com/neuropoly/axonseg.
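
    As a minimal illustration of one of the morphometric outputs mentioned above, the myelin g-ratio, the sketch below derives an equivalent-diameter g-ratio for each fiber from paired axon and myelin binary masks using numpy and scipy. AxonSeg's own MATLAB pipeline does considerably more, and the toy masks here are purely illustrative.

      import numpy as np
      from scipy import ndimage

      def gratio_per_fiber(axon_mask, myelin_mask):
          """Equivalent-diameter g-ratio (axon diameter / fiber diameter) for each labeled fiber."""
          fiber_mask = axon_mask | myelin_mask                 # axon plus its myelin sheath
          labels, n = ndimage.label(fiber_mask)
          ratios = []
          for i in range(1, n + 1):
              fiber_area = np.count_nonzero(labels == i)
              axon_area = np.count_nonzero(axon_mask & (labels == i))
              d_axon = 2 * np.sqrt(axon_area / np.pi)          # equivalent circular diameters
              d_fiber = 2 * np.sqrt(fiber_area / np.pi)
              ratios.append(d_axon / d_fiber)
          return np.array(ratios)

      # Hypothetical toy masks: one circular axon of radius 8 px inside a myelin ring of radius 12 px.
      yy, xx = np.mgrid[:64, :64]
      r = np.hypot(yy - 32, xx - 32)
      axon, myelin = r < 8, (r >= 8) & (r < 12)
      print(gratio_per_fiber(axon, myelin))   # ~0.67 for this toy fiber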

  9. A Software Tool for Quantitative Seismicity Analysis - ZMAP

    Science.gov (United States)

    Wiemer, S.; Gerstenberger, M.

    2001-12-01

    Earthquake catalogs are probably the most basic product of seismology, and remain arguably the most useful for tectonic studies. Modern seismograph networks can locate up to 100,000 earthquakes annually, providing a continuous and sometimes overwhelming stream of data. ZMAP is a set of tools driven by a graphical user interface (GUI), designed to help seismologists analyze catalog data. ZMAP is primarily a research tool suited to the evaluation of catalog quality and to addressing specific hypotheses; however, it can also be useful in routine network operations. Examples of ZMAP features include catalog quality assessment (artifacts, completeness, explosion contamination), interactive data exploration, mapping transients in seismicity (rate changes, b-values, p-values), fractal dimension analysis and stress tensor inversions. Roughly 100 scientists worldwide have used the software at least occasionally. About 30 peer-reviewed publications have made use of ZMAP. ZMAP code is open source, written in Matlab (The MathWorks), a commercial language widely used in the natural sciences. ZMAP was first published in 1994, and has continued to grow over the past 7 years. Recently, we released ZMAP v.6. The poster will introduce the features of ZMAP. We will specifically focus on ZMAP features related to time-dependent probabilistic hazard assessment. We are currently implementing a ZMAP-based system that computes probabilistic hazard maps, which combine the stationary background hazard as well as aftershock and foreshock hazard into a comprehensive time-dependent probabilistic hazard map. These maps will be displayed in near real time on the Internet. This poster is also intended as a forum for ZMAP users to provide feedback and discuss the future of ZMAP.
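
    One of the quantities mapped by such tools, the Gutenberg-Richter b-value, has a standard closed-form maximum-likelihood estimator (Aki/Utsu). The sketch below applies it to a synthetic catalog; the completeness magnitude, bin-width correction and error formula are the usual textbook assumptions rather than ZMAP's exact implementation.

      import numpy as np

      def b_value_mle(magnitudes, completeness_mag, bin_width=0.1):
          """Aki/Utsu maximum-likelihood b-value for events at or above the magnitude of completeness."""
          m = np.asarray(magnitudes)
          m = m[m >= completeness_mag]
          # The bin-width term corrects for magnitudes being reported in discrete bins.
          b = np.log10(np.e) / (m.mean() - (completeness_mag - bin_width / 2.0))
          b_err = b / np.sqrt(len(m))          # simple standard-error estimate
          return b, b_err

      # Synthetic Gutenberg-Richter catalog with a true b-value of 1.0 above Mc = 2.0.
      rng = np.random.default_rng(0)
      mags = 2.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=5000)
      print(b_value_mle(mags, completeness_mag=2.0, bin_width=0.0))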

  10. Stochastic geometry for image analysis

    CERN Document Server

    Descombes, Xavier

    2013-01-01

    This book develops the stochastic geometry framework for image analysis purpose. Two main frameworks are  described: marked point process and random closed sets models. We derive the main issues for defining an appropriate model. The algorithms for sampling and optimizing the models as well as for estimating parameters are reviewed.  Numerous applications, covering remote sensing images, biological and medical imaging, are detailed.  This book provides all the necessary tools for developing an image analysis application based on modern stochastic modeling.

  11. Paraxial ghost image analysis

    Science.gov (United States)

    Abd El-Maksoud, Rania H.; Sasian, José M.

    2009-08-01

    This paper develops a methodology to model ghost images that are formed by two reflections between the surfaces of a multi-element lens system in the paraxial regime. An algorithm is presented to generate the ghost layouts from the nominal layout. For each possible ghost layout, paraxial ray tracing is performed to determine the ghost Gaussian cardinal points, the size of the ghost image at the nominal image plane, the location and diameter of the ghost entrance and exit pupils, and the location and diameter for the ghost entrance and exit windows. The paraxial ghost irradiance point spread function is obtained by adding up the irradiance contributions for all ghosts. Ghost simulation results for a simple lens system are provided. This approach provides a quick way to analyze ghost images in the paraxial regime.
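
    The machinery behind this kind of analysis is ordinary paraxial (ABCD) ray transfer matrices, with each two-reflection ghost path expressed as an extra sequence of reflection and propagation matrices. The sketch below is a hypothetical two-lens example of that bookkeeping (all focal lengths, spacings and radii of curvature are made up, and sign conventions are glossed over); it simply shows a marginal ray arriving focused at the nominal image plane but spread out for one ghost path.

      import numpy as np

      def propagate(d):
          """Free-space transfer matrix over an axial distance d (millimetres here)."""
          return np.array([[1.0, d], [0.0, 1.0]])

      def thin_lens(f):
          """Thin-lens refraction matrix for focal length f."""
          return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

      def reflect(R):
          """Paraxial reflection from a surface of curvature radius R (a ghost-forming bounce)."""
          return np.array([[1.0, 0.0], [-2.0 / R, 1.0]])

      # Hypothetical layout: lens A (f = 50 mm), 20 mm air gap, lens B (f = 80 mm); the nominal
      # paraxial focus of this combination sits about 21.8 mm behind lens B.
      to_image = propagate(21.82) @ thin_lens(80.0) @ propagate(20.0) @ thin_lens(50.0)

      # One ghost path: reflect off lens B's front surface, go back 20 mm, reflect off lens A's
      # rear surface, go forward 20 mm to lens B again, then continue to the nominal image plane.
      ghost_bounce = propagate(20.0) @ reflect(90.0) @ propagate(20.0) @ reflect(-120.0)
      to_ghost_image = propagate(21.82) @ thin_lens(80.0) @ ghost_bounce @ propagate(20.0) @ thin_lens(50.0)

      ray = np.array([5.0, 0.0])                                       # height 5 mm, parallel to axis
      print("nominal image-plane height:", (to_image @ ray)[0])        # ~0: the ray comes to focus
      print("ghost image-plane height:  ", (to_ghost_image @ ray)[0])  # nonzero: the ghost is blurred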

  12. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    OpenAIRE

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingl...

  13. Towards a software approach to mitigate correlation power analysis

    CSIR Research Space (South Africa)

    Frieslaar, I

    2016-07-01

    Full Text Available software countermeasures against SCA. These techniques are random precharging, masking, hiding and shuffling. Random precharging in a software environment requires the datapath to be filled with random operand instructions before and after an important...

  14. Prospects for Evidence -Based Software Assurance: Models and Analysis

    Science.gov (United States)

    2015-09-01

    would not only facilitate technology transition, but also the management of complex supply chains. A third challenge for R&D managers is tracing the... The project addresses the challenge of software assurance in the presence of rich supply chains. As a consequence of the focus on supply chains, the... Keywords: software assurance, evidence-based software, software supply chain

  15. Visual data mining and analysis of software repositories

    NARCIS (Netherlands)

    Voinea, Lucian; Telea, Alexandru

    2007-01-01

    In this article we describe an ongoing effort to integrate information visualization techniques into the process of configuration management for software systems. Our focus is to help software engineers manage the evolution of large and complex software systems by offering them effective and efficient...

  17. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition.

    Science.gov (United States)

    Sun, Ryan; Bouchard, Matthew B; Hillman, Elizabeth M C

    2010-08-02

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software's framework and provide details to guide users with development of this and similar software.

  18. Enhanced simulator software for image validation and interpretation for multimodal localization super-resolution fluorescence microscopy

    Science.gov (United States)

    Erdélyi, Miklós; Sinkó, József; Gajdos, Tamás; Novák, Tibor

    2017-02-01

    Optical super-resolution techniques such as single molecule localization have become one of the most dynamically developed areas in optical microscopy. These techniques routinely provide images of fixed cells or tissues with sub-diffraction spatial resolution, and can even be applied for live cell imaging under appropriate circumstances. Localization techniques are based on the precise fitting of the point spread functions (PSF) to the measured images of stochastically excited, identical fluorescent molecules. These techniques require controlling the rate between the on, off and the bleached states, keeping the number of active fluorescent molecules at an optimum value, so their diffraction limited images can be detected separately both spatially and temporally. Because of the numerous (and sometimes unknown) parameters, the imaging system can only be handled stochastically. For example, the rotation of the dye molecules obscures the polarization dependent PSF shape, and only an averaged distribution - typically estimated by a Gaussian function - is observed. TestSTORM software was developed to generate image stacks for traditional localization microscopes, where localization meant the precise determination of the spatial position of the molecules. However, additional optical properties (polarization, spectra, etc.) of the emitted photons can be used for further monitoring the chemical and physical properties (viscosity, pH, etc.) of the local environment. The image stack generating program was upgraded by several new features, such as: multicolour, polarization dependent PSF, built-in 3D visualization, structured background. These features make the program an ideal tool for optimizing the imaging and sample preparation conditions.

  19. [CASTOR-Radiology: management software for a medical imaging unit: use at the CHU of Tours].

    Science.gov (United States)

    Bertrand, P; Rouleau, P; Alison, D; Bristeau, M; Minard, P; Saad, B

    1993-01-01

    Despite the large volume of information circulating in radiology departments, very few of them are currently computerised, although computer processing is developing rapidly in hospitals, encouraged by the installation of PMSI. This article illustrates the example of an imaging department management software package: CASTOR-Radiologie. Computerisation of part of the Hospital Information System (HIS) must allow an improvement in the efficacy of the service rendered, must reliably reflect the department's activity and must be able to monitor the running costs. CASTOR-Radiologie was developed in conformity with standard national specifications defined by the Public Hospitals Department of the French Ministry of Health. The functions of this software are: unique patient identification, HIS base, management of examination requests allowing a rapid reply to clinicians' requests, "real-time" follow-up of patients in the department saving time for secretaries and technicians, medical files and file analysis allowing analysis of diagnostic strategies and quality control, generation of analytical tables of the department's activity compatible with the PMSI procedures catalogue allowing optimisation of the use of limited resources, and aid to the management of human, equipment and consumable resources. Links with other hospital computers raise organisational rather than technical problems, but have been planned for in the CASTOR-Radiologie software. This new tool was very well accepted by the personnel.

  20. Development of the free-space optical communications analysis software

    Science.gov (United States)

    Jeganathan, Muthu; Mecherle, G. Stephen; Lesh, James R.

    1998-05-01

    The Free-space Optical Communication Analysis Software (FOCAS) was developed at the Jet Propulsion Laboratory (JPL) to provide mission planners, systems engineers and communications engineers with an easy-to-use tool to analyze direct-detection optical communication links. Implementation in Microsoft Excel gives the FOCAS program all the power and flexibility built into the spreadsheet. An easy-to-use interface to the spreadsheet, developed using Visual Basic for Applications (VBA), allows easy input of data and parameters. A host of pre-defined components allows an analyst to configure a link without having to know the details of the components. FOCAS replaces the over-a-decade-old FORTRAN program called OPTI, previously widely used at JPL. This paper describes the features and capabilities of the Excel-spreadsheet-based FOCAS program.
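
    As a hedged, back-of-the-envelope illustration of the kind of direct-detection link-budget arithmetic such a tool automates, the sketch below computes a link margin from transmit power, beam divergence, receiver aperture and loss terms. The geometric-loss formula is a common simplification and every numerical value is a placeholder, not a FOCAS default.

      import numpy as np

      def fso_link_margin(p_tx_dbm, range_m, divergence_rad, rx_aperture_m,
                          optics_loss_db, pointing_loss_db, atm_loss_db_per_km, required_dbm):
          """Very simplified free-space optical link budget for a direct-detection link."""
          beam_diameter = divergence_rad * range_m                  # beam footprint at the receiver
          geometric_loss_db = -10 * np.log10(min(1.0, (rx_aperture_m / beam_diameter) ** 2))
          atmospheric_loss_db = atm_loss_db_per_km * range_m / 1000.0
          p_rx_dbm = (p_tx_dbm - geometric_loss_db - atmospheric_loss_db
                      - optics_loss_db - pointing_loss_db)
          return p_rx_dbm - required_dbm                            # positive margin = link closes

      # Hypothetical 10 km terrestrial link: 100 mW laser, 1 mrad divergence, 10 cm receive aperture.
      print(fso_link_margin(p_tx_dbm=20.0, range_m=10_000, divergence_rad=1e-3,
                            rx_aperture_m=0.10, optics_loss_db=3.0, pointing_loss_db=2.0,
                            atm_loss_db_per_km=1.0, required_dbm=-40.0))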

  1. Cost Analysis of Poor Quality Using a Software Simulation

    Directory of Open Access Journals (Sweden)

    Jana Fabianová

    2017-02-01

    Full Text Available The issues of quality, the cost of poor quality and the factors affecting quality are crucial to maintaining competitiveness in business activities. The use of software applications and computer simulation enables more effective quality management. Simulation tools make it possible to incorporate the variability of several variables in experiments and to evaluate their combined impact on the final output. The article presents a case study focused on the possibility of using Monte Carlo computer simulation in the field of quality management. Two approaches for determining the cost of poor quality are introduced here. The first takes a retrospective view, in which the cost of poor quality of the production process is calculated from historical data. The second approach uses the probabilistic characteristics of the input variables by means of simulation and provides a prospective view of the cost of poor quality. Simulation output in the form of tornado and sensitivity charts complements the risk analysis.
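
    The prospective approach rests on propagating input distributions through a cost model by Monte Carlo simulation. The sketch below reproduces that idea in a few lines of numpy with entirely hypothetical distributions and cost figures; a correlation-based ranking stands in for the tornado chart.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000                                            # number of Monte Carlo trials

      # Hypothetical probabilistic inputs for one production run.
      units_produced = rng.normal(10_000, 500, n)
      defect_rate = rng.triangular(0.01, 0.03, 0.08, n)      # min, most likely, max
      rework_cost = rng.normal(12.0, 2.0, n)                 # cost per defective unit reworked
      scrap_fraction = rng.uniform(0.1, 0.3, n)              # share of defects that cannot be reworked
      scrap_cost = 35.0                                      # cost per scrapped unit

      defects = units_produced * defect_rate
      cost_of_poor_quality = (defects * (1 - scrap_fraction) * rework_cost
                              + defects * scrap_fraction * scrap_cost)

      print(f"mean: {cost_of_poor_quality.mean():,.0f}")
      print("5th-95th percentile:", np.percentile(cost_of_poor_quality, [5, 95]))

      # Crude sensitivity ranking (the idea behind a tornado chart): correlation of inputs with the output.
      for name, x in [("defect_rate", defect_rate), ("units", units_produced),
                      ("rework_cost", rework_cost), ("scrap_fraction", scrap_fraction)]:
          print(name, round(np.corrcoef(x, cost_of_poor_quality)[0, 1], 2))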

  2. Integrated Software Environment for Pressurized Thermal Shock Analysis

    Directory of Open Access Journals (Sweden)

    Dino Araneo

    2011-01-01

    Full Text Available The present paper describes the main features, and an application to a real Nuclear Power Plant (NPP), of an Integrated Software Environment (in the following referred to as the “platform”) developed at the University of Pisa (UNIPI) to perform Pressurized Thermal Shock (PTS) analysis. The platform is written in Java for portability and implements all the steps foreseen in the methodology developed at UNIPI for the deterministic analysis of PTS scenarios. The methodology starts with the thermal-hydraulic analysis of the NPP with a system code (such as Relap5-3D or Cathare2) during a selected transient scenario. The results so obtained are then processed to provide boundary conditions for the next step, that is, a CFD calculation. Once the system pressure and the RPV wall temperature are known, the stresses inside the RPV wall can be calculated by means of a Finite Element (FE) code. The last step of the methodology is the Fracture Mechanics (FM) analysis, using weight functions, aimed at evaluating the stress intensity factor (KI) at the crack tip, to be compared with the critical stress intensity factor KIc. The platform automates all the steps foreseen in the methodology once the user specifies a number of boundary conditions at the beginning of the simulation.
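
    The final comparison in the methodology is between the stress intensity factor KI at the crack tip and the critical value KIc. The sketch below illustrates only that comparison, using the textbook surface-crack expression KI ≈ Y·σ·√(πa) with made-up stress, crack depths and toughness; the platform itself obtains KI from weight functions applied to the FE stress field.

      import numpy as np

      def stress_intensity_factor(stress_mpa, crack_depth_m, geometry_factor=1.12):
          """Textbook mode-I stress intensity factor K_I = Y * sigma * sqrt(pi * a), in MPa*sqrt(m)."""
          return geometry_factor * stress_mpa * np.sqrt(np.pi * crack_depth_m)

      # Hypothetical PTS-like loading: wall stress and a few candidate crack depths.
      wall_stress_mpa = 250.0                 # hoop stress from pressure plus thermal stress
      crack_depths_mm = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
      k_ic = 80.0                             # illustrative fracture toughness in MPa*sqrt(m)

      k_i = stress_intensity_factor(wall_stress_mpa, crack_depths_mm / 1000.0)
      for a, k in zip(crack_depths_mm, k_i):
          status = "OK" if k < k_ic else "exceeds K_Ic"
          print(f"a = {a:4.1f} mm  ->  K_I = {k:5.1f} MPa*sqrt(m)  ({status})")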

  3. Analysis of Software Development Methodologies to Build Safety Software Applications for the SATEX-II: A Mexican Experimental Satellite

    Science.gov (United States)

    Aguilar Cisneros, Jorge; Vargas Martinez, Hector; Pedroza Melendez, Alejandro; Alonso Arevalo, Miguel

    2013-09-01

    Mexico is a country where experience in building software for satellite applications is just beginning. This is a delicate situation because in the near future we will need to develop software for the SATEX-II (Mexican Experimental Satellite). SATEX-II is a project of SOMECyTA (the Mexican Society of Aerospace Science and Technology). We have experience applying software development methodologies, like TSP (Team Software Process) and SCRUM, in other areas. We analyzed these methodologies and concluded that they can be applied to develop software for the SATEX-II, supported by the ESA PSS-05-0 standard, in particular ESA PSS-05-11. Our analysis focused on the main characteristics of each methodology and how these methodologies could be used together with the ESA PSS-05-0 standards. Our outcomes, in general, may be used by teams who need to build small satellites but, in particular, they are going to be used when we build the on-board software applications for the SATEX-II.

  4. Images of innovation in discourses of free and open source software

    NARCIS (Netherlands)

    Dafermos, G.; Van Eeten, M.J.G.

    2014-01-01

    In this study, we examine the relationship between innovation and free/open source software (FOSS) based on the views of contributors to FOSS projects, using Q methodology as a method of discourse analysis to make visible the positions held by FOSS contributors and identify the discourses encountered...

  5. Image Analysis for Tongue Characterization

    Institute of Scientific and Technical Information of China (English)

    SHEN Lansun; WEI Baoguo; CAI Yiheng; ZHANG Xinfeng; WANG Yanqing; CHEN Jing; KONG Lingbiao

    2003-01-01

    Tongue diagnosis is one of the essential methods in traditional Chinese medical diagnosis. The accuracy of tongue diagnosis can be improved by tongue characterization. This paper investigates the use of image analysis techniques for tongue characterization by evaluating visual features obtained from images. A tongue imaging and analysis instrument (TIAI) was developed to acquire digital color tongue images. Several novel approaches are presented for color calibration, tongue area segmentation, quantitative analysis and qualitative description of the colors of the tongue and its coating, the thickness and moisture of the coating, and quantification of the cracks of the tongue. The overall accuracy of the automatic analysis of the colors of the tongue and the thickness of the tongue coating exceeds 85%. This work shows the promising future of tongue characterization.

  6. Software workflow for the automatic tagging of medieval manuscript images (SWATI)

    Science.gov (United States)

    Chandna, Swati; Tonne, Danah; Jejkal, Thomas; Stotzka, Rainer; Krause, Celia; Vanscheidt, Philipp; Busch, Hannah; Prabhune, Ajinkya

    2015-01-01

    Digital methods, tools and algorithms are gaining in importance for the analysis of digitized manuscript collections in the arts and humanities. One example is the BMBF-funded research project "eCodicology", which aims to design, evaluate and optimize algorithms for the automatic identification of macro- and micro-structural layout features of medieval manuscripts. The main goal of this research project is to provide better insights into high-dimensional datasets of medieval manuscripts for humanities scholars. The heterogeneous nature and size of the humanities data and the need to create a database of automatically extracted reproducible features for better statistical and visual analysis are the main challenges in designing a workflow for the arts and humanities. This paper presents a concept of a workflow for the automatic tagging of medieval manuscripts. As a starting point, the workflow uses medieval manuscripts digitized within the scope of the project "Virtual Scriptorium St. Matthias". First, these digitized manuscripts are ingested into a data repository. Second, specific algorithms are adapted or designed for the identification of macro- and micro-structural layout elements like page size, writing space, number of lines etc. Lastly, a statistical analysis and scientific evaluation of the manuscript groups are performed. The workflow is designed generically to process large amounts of data automatically with any desired algorithm for feature extraction. As a result, a database of objectified and reproducible features is created which helps to analyze and visualize hidden relationships of around 170,000 pages. The workflow shows the potential of automatic image analysis by enabling the processing of a single page in less than a minute. Furthermore, the accuracy tests of the workflow on a small set of manuscripts with respect to features like page size and text areas show that automatic and manual analysis are comparable. The usage of a computer

  7. Comparison of an Imaging Software and Manual Prediction of Soft Tissue Changes after Orthognathic Surgery

    Directory of Open Access Journals (Sweden)

    M. S. Ahmad Akhoundi

    2012-01-01

    Full Text Available Objective: Accurate prediction of the surgical outcome is important in treating dentofacial deformities. Visualized treatment objectives usually involve manual surgical simulation based on tracing of cephalometric radiographs. Recent technical advancements have led to the use of computer-assisted imaging systems in treatment planning for orthognathic surgical cases. The purpose of this study was to examine and compare the ability and reliability of digitization using Dolphin Imaging Software with traditional manual techniques and to compare the orthognathic prediction with actual outcomes. Materials and Methods: Forty patients consisting of 35 women and 5 men (32 class III and 8 class II) with no previous surgery were evaluated by manual tracing and indirect digitization using Dolphin Imaging Software. The reliability of each method was assessed, then the two techniques were compared using a paired t test. Results: The nasal tip presented the lowest prediction error and the highest reliability. The least accurate regions in the vertical plane were subnasal and the upper lip, and subnasal and pogonion in the horizontal plane. There were no statistically significant differences between the predictions of groups with and without genioplasty. Conclusion: Computer-generated image prediction was suitable for patient education and communication. However, efforts are still needed to improve the accuracy and reliability of the prediction program and to include changes in soft tissue tension and muscle strain.

  8. A PC-based 3D imaging system: algorithms, software, and hardware considerations.

    Science.gov (United States)

    Raya, S P; Udupa, J K; Barrett, W A

    1990-01-01

    Three-dimensional (3D) imaging in medicine is known to produce easily and quickly derivable medically relevant information, especially in complex situations. We intend to demonstrate in this paper, that with an appropriate choice of approaches and a proper design of algorithms and software, it is possible to develop a low-cost 3D imaging system that can provide a level of performance sufficient to meet the daily case load in an individual or even group-practice situation. We describe hardware considerations of a generic system and give an example of a specific system we used for our implementation. Given a 3D image as a stack of slices, we generate a packed binary cubic voxel array, by combining segmentation (density thresholding), interpolation, and packing in an efficient way. Since threshold-based segmentation is very often not perfect, object-like structures and noise clutter the binary scene. We utilize an effective mechanism to isolate the object from this clutter by tracking a specified, connected surface of the object. The surface description thus obtained is rendered to create a depiction of the surface on a 2D display screen. Efficient implementation of hidden-part removal and image-space shading and a simple and fast antialiasing technique provide a level of performance which otherwise would not have been possible in a PC environment. We outline our software emphasizing some design aspects and present some clinical examples.
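
    Two of the early steps described here, density thresholding and packing of the binary voxel array, are easy to illustrate. The sketch below is a generic numpy rendition of those steps only (the surface tracking and rendering stages are far more involved); the volume, threshold and dimensions are placeholders.

      import numpy as np

      def segment_and_pack(volume, threshold):
          """Density-threshold a gray-level volume and pack the binary voxel array 8 voxels per byte."""
          binary = (volume >= threshold)
          packed = np.packbits(binary, axis=-1)          # 1 bit per voxel along the fastest axis
          return binary, packed

      def unpack(packed, shape):
          """Recover the binary voxel array from its packed form."""
          return np.unpackbits(packed, axis=-1, count=shape[-1]).astype(bool).reshape(shape)

      # Hypothetical stack of slices: 64 slices of 128 x 128 voxels, thresholded at an arbitrary density.
      volume = np.random.normal(0, 200, size=(64, 128, 128))
      binary, packed = segment_and_pack(volume, threshold=300)
      print("raw bytes:", volume.nbytes, "packed bytes:", packed.nbytes)
      assert np.array_equal(unpack(packed, binary.shape), binary)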

  9. Study on image processing of panoramic X-ray using deviation improvement software.

    Science.gov (United States)

    Kim, Tae-Gon; Lee, Yang-Sun; Kim, Young-Pyo; Park, Yong-Pil; Cheon, Min-Woo

    2014-01-01

    Panoramic X-ray devices are being used ever more widely. Panoramic X-ray has lower resolution than general X-ray devices, and distortion occurs because of deviations introduced by image synthesis. Due to these structural problems, its use has been restricted to identifying tooth structure rather than imaging the whole head. We therefore designed and produced a panoramic X-ray device whose diagnostic coverage can be extended, with adjustable control of the interval between the X-ray generator and the image processing unit, for diagnosis of the whole maxillofacial region. The produced panoramic X-ray device is basically composed of short-image synthesis. In addition, the results were confirmed using a device to which corrections for the brightness deviation of the image, a filter to improve the positional deviation, and an interpolation method were applied. In this study, 13 images including the frontal view were used. Brightness deviation, position deviation, and geometric distortion occur during image synthesis, but these were resolved by the deviation improvement software and by changing the scan line of the CCD camera used for image acquisition. This confirms that the range of use of the commonly used panoramic X-ray device can be expanded.

  10. A Survey of DICOM Viewer Software to Integrate Clinical Research and Medical Imaging.

    Science.gov (United States)

    Haak, Daniel; Page, Charles-E; Deserno, Thomas M

    2016-04-01

    The digital imaging and communications in medicine (DICOM) protocol is the leading standard for image data management in healthcare. Imaging biomarkers and image-based surrogate endpoints in clinical trials and medical registries require DICOM viewer software with advanced functionality for visualization and interfaces for integration. In this paper, a comprehensive evaluation of 28 DICOM viewers is performed. The evaluation criteria are obtained from application scenarios in clinical research rather than patient care. They include (i) platform, (ii) interface, (iii) support, (iv) two-dimensional (2D), and (v) three-dimensional (3D) viewing. On average, the viewers satisfy 4.48 of the 8 2D and 1.43 of the 5 3D image viewing criteria. Suitable DICOM interfaces for central viewing in hospitals are provided by GingkoCADx, MIPAV, and OsiriX Lite. The viewers ImageJ, MicroView, MIPAV, and OsiriX Lite offer all included 3D-rendering features for advanced viewing. Interfaces needed for decentral viewing in web-based systems are offered by Oviyam, Weasis, and Xero. Focusing on open source components, MIPAV is the best candidate for 3D imaging as well as DICOM communication. Weasis is superior for workflow optimization in clinical trials. Our evaluation shows that advanced visualization and suitable interfaces can also be found in the open source field and not only in commercial products.

  11. STATISTICAL ANALYSIS ON SOFTWARE METRICS AFFECTING MODULARITY IN OPEN SOURCE SOFTWARE

    Directory of Open Access Journals (Sweden)

    Andi Wahju Rahardjo Emanuel

    2011-06-01

    Full Text Available Modularity has been identified by many researchers as one of the success factors of Open Source Software (OSS) projects. This modularity trait is influenced by several aspects of software metrics such as size, complexity, cohesion, and coupling. In this research, we analyze software metrics such as size metrics (NCLOC, Lines, and Statements), complexity metrics (McCabe's Cyclomatic Complexity), cohesion metrics (LCOM4), and coupling metrics (RFC, afferent coupling and efferent coupling) of 59 Java-based OSS projects from Sourceforge.net. By assuming that the number of downloads can be used as an indication of the success of these projects, the OSS projects selected are those which have been downloaded more than 100,000 times. The software metrics reflecting the modularity of these projects are collected using the SONAR tool and then statistically analyzed using scatter graphs, the Pearson product-moment correlation coefficient r, and a least-square-fit linear approximation. It can be shown that there are only three independent metrics reflecting modularity, namely NCLOC, LCOM4, and afferent coupling, while there is also one inconclusive result regarding efferent coupling.
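
    The statistical step described, a scatter inspection backed by a Pearson product-moment correlation and a least-squares line, is straightforward to reproduce. The sketch below does so with scipy and numpy on hypothetical per-project metric values; the numbers are not the study's data.

      import numpy as np
      from scipy import stats

      # Hypothetical per-project values: non-comment lines of code (NCLOC) and a lack-of-cohesion metric (LCOM4).
      ncloc = np.array([12_000, 48_000, 151_000, 23_000, 310_000, 75_000, 9_500, 180_000])
      lcom4 = np.array([1.4, 2.1, 3.8, 1.9, 4.6, 2.7, 1.2, 3.9])

      r, p_value = stats.pearsonr(ncloc, lcom4)            # Pearson product-moment correlation
      slope, intercept = np.polyfit(ncloc, lcom4, deg=1)   # least-squares linear approximation

      print(f"r = {r:.2f} (p = {p_value:.3f})")
      print(f"LCOM4 ~= {slope:.2e} * NCLOC + {intercept:.2f}")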

  12. Introduction of Aesthetic Analyzer Software: Computer-aided Linear and Angular Analysis of Facial Profile Photographs

    Directory of Open Access Journals (Sweden)

    Moshkelgosha V.

    2012-06-01

    Full Text Available Statement of Problem: Evaluation of diagnostic records as a supplement to direct examination has an important role in treatment planning of orthodontic patients with aesthetic needs. Photogrammetry as a quantitative tool has recently attracted the attention of researchers again. Purpose: The purpose of this study was to design computer software to analyze orthodontic patients' facial profile photographic images and to estimate the reliability and validity of its measurements. Materials and Method: Profile photographic images of 20 volunteer students were taken in the natural head position with a standard technique. Manual linear and angular measurements were used as a gold standard and compared with the results obtained from Aesthetic Analyzer software (designed for that purpose). Dahlberg's method error and the Intraclass Correlation Coefficient (ICC) were used to estimate validity, reliability and inter-examiner errors. Results: Almost all the measurements showed a high correlation between the manual and computerized methods (ICC > 0.75). The maximum method errors computed from Dahlberg's formula were 1.345 mm in linear and 3.294 degrees in angular measurements. At the highest levels, inter-examiner errors were 1.684 mm and 3.741 degrees in linear and angular measurements, respectively. Conclusion: Although a low budget has been allocated for the design of the Aesthetic Analyzer software, its features are comparable with commercially available products. The software's capabilities can be increased. The results of the current study indicated that the software is accurate and repeatable in the photographic analysis of orthodontic patients.

  13. Development of an Open Source Image-Based Flow Modeling Software - SimVascular

    Science.gov (United States)

    Updegrove, Adam; Merkow, Jameson; Schiavazzi, Daniele; Wilson, Nathan; Marsden, Alison; Shadden, Shawn

    2014-11-01

    SimVascular (www.simvascular.org) is currently the only comprehensive software package that provides a complete pipeline from medical image data segmentation to patient specific blood flow simulation. This software and its derivatives have been used in hundreds of conference abstracts and peer-reviewed journal articles, as well as the foundation of medical startups. SimVascular was initially released in August 2007, yet major challenges and deterrents for new adopters were the requirement of licensing three expensive commercial libraries utilized by the software, a complicated build process, and a lack of documentation, support and organized maintenance. In the past year, the SimVascular team has made significant progress to integrate open source alternatives for the linear solver, solid modeling, and mesh generation commercial libraries required by the original public release. In addition, the build system, available distributions, and graphical user interface have been significantly enhanced. Finally, the software has been updated to enable users to directly run simulations using models and boundary condition values, included in the Vascular Model Repository (vascularmodel.org). In this presentation we will briefly overview the capabilities of the new SimVascular 2.0 release. National Science Foundation.

  14. Software use cases to elicit the software requirements analysis within the ASTRI project

    Science.gov (United States)

    Conforti, Vito; Antolini, Elisa; Bonnoli, Giacomo; Bruno, Pietro; Bulgarelli, Andrea; Capalbi, Milvia; Fioretti, Valentina; Fugazza, Dino; Gardiol, Daniele; Grillo, Alessandro; Leto, Giuseppe; Lombardi, Saverio; Lucarelli, Fabrizio; Maccarone, Maria Concetta; Malaguti, Giuseppe; Pareschi, Giovanni; Russo, Federico; Sangiorgi, Pierluca; Schwarz, Joseph; Scuderi, Salvatore; Tanci, Claudio; Tosti, Gino; Trifoglio, Massimo; Vercellone, Stefano; Zanmar Sanchez, Ricardo

    2016-07-01

    The Italian National Institute for Astrophysics (INAF) is leading the Astrofisica con Specchi a Tecnologia Replicante Italiana (ASTRI) project whose main purpose is the realization of small size telescopes (SST) for the Cherenkov Telescope Array (CTA). The first goal of the ASTRI project has been the development and operation of an innovative end-to-end telescope prototype using a dual-mirror optical configuration (SST-2M) equipped with a camera based on silicon photo-multipliers and very fast read-out electronics. The ASTRI SST-2M prototype has been installed in Italy at the INAF "M.G. Fracastoro" Astronomical Station located at Serra La Nave, on Mount Etna, Sicily. This prototype will be used to test several mechanical, optical, control hardware and software solutions which will be used in the ASTRI mini-array, comprising nine telescopes proposed to be placed at the CTA southern site. The ASTRI mini-array is a collaborative and international effort led by INAF and carried out by Italy, Brazil and South-Africa. We present here the use cases, through UML (Unified Modeling Language) diagrams and text details, that describe the functional requirements of the software that will manage the ASTRI SST-2M prototype, and the lessons learned thanks to these activities. We intend to adopt the same approach for the Mini Array Software System that will manage the ASTRI miniarray operations. Use cases are of importance for the whole software life cycle; in particular they provide valuable support to the validation and verification activities. Following the iterative development approach, which breaks down the software development into smaller chunks, we have analysed the requirements, developed, and then tested the code in repeated cycles. The use case technique allowed us to formalize the problem through user stories that describe how the user procedurally interacts with the software system. Through the use cases we improved the communication among team members, fostered

  15. PROTEINCHALLENGE: crowd sourcing in proteomics analysis and software development.

    Science.gov (United States)

    Martin, Sarah F; Falkenberg, Heiner; Dyrlund, Thomas F; Khoudoli, Guennadi A; Mageean, Craig J; Linding, Rune

    2013-08-02

    In large-scale proteomics studies there is a temptation, after months of experimental work, to plug resulting data into a convenient, if poorly implemented, set of tools, which may neither do the data justice nor help answer the scientific question. In this paper we have captured key concerns, including arguments for community-wide open source software development and "big data" compatible solutions for the future. In the meantime, we have laid out ten top tips for data processing. With these at hand, a first large-scale proteomics analysis hopefully becomes less daunting to navigate. However, there is clearly a real need for robust tools, standard operating procedures and general acceptance of best practices. Thus we submit to the proteomics community a call for a community-wide open set of proteomics analysis challenges (PROTEINCHALLENGE) that directly target and compare data analysis workflows, with the aim of setting a community-driven gold standard for data handling, reporting and sharing. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. The Application and Extension of Backward Software Analysis

    CERN Document Server

    Perisic, Aleksandar

    2010-01-01

    Backward software analysis is a method based on executing a program backwards: instead of taking input data and following the execution path, we start from the output data and, by executing the program backwards command by command, analyze the data that could lead to the current output. The changed perspective forces a developer to think about the program in a new way. It can be applied as a thorough procedure or as a casual method. With this method, we gain many advantages in testing, algorithm and system analysis. For example, in testing the advantage is obvious if the set of possible outputs is smaller than the set of possible inputs. For some programs or algorithms, we know the output data more precisely, so this backward analysis can help in reducing the number of test cases or even in strict verification of an algorithm. The difficulty lies in the fact that we need types of data that no programming language currently supports, so we need additional effort to understand how this method works, or what effort we need to ...
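
    To make the idea concrete, the following minimal Python sketch (not taken from the paper; the tiny program and its inversion are illustrative assumptions) runs a straight-line program backwards, command by command, to recover the input that could have produced a given output.

```python
# Illustrative sketch of backward analysis on a tiny straight-line program.

def forward(x):
    y = x + 3        # command 1
    z = 2 * y        # command 2
    return z

def backward(z):
    # Invert the commands in reverse order. Non-invertible commands would
    # instead yield a *set* of candidate values at each step.
    y = z / 2        # inverse of command 2
    x = y - 3        # inverse of command 1
    return x

output = forward(5)            # -> 16
candidate_input = backward(output)
assert candidate_input == 5    # the backward pass recovers the original input
```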

  17. Economic Consequence Analysis of Disasters: The ECAT Software Tool

    Energy Technology Data Exchange (ETDEWEB)

    Rose, Adam; Prager, Fynn; Chen, Zhenhua; Chatterjee, Samrat; Wei, Dan; Heatwole, Nathaniel; Warren, Eric

    2017-04-15

    This study develops a methodology for rapidly obtaining approximate estimates of the economic consequences from numerous natural, man-made and technological threats. This software tool is intended for use by various decision makers and analysts to obtain estimates rapidly. It is programmed in Excel and Visual Basic for Applications (VBA) to facilitate its use. This tool is called E-CAT (Economic Consequence Analysis Tool) and accounts for the cumulative direct and indirect impacts (including resilience and behavioral factors that significantly affect base estimates) on the U.S. economy. E-CAT is intended to be a major step toward advancing the current state of economic consequence analysis (ECA) and also contributing to and developing interest in further research into complex but rapid turnaround approaches. The essence of the methodology involves running numerous simulations in a computable general equilibrium (CGE) model for each threat, yielding synthetic data for the estimation of a single regression equation based on the identification of key explanatory variables (threat characteristics and background conditions). This transforms the results of a complex model, which is beyond the reach of most users, into a "reduced form" model that is readily comprehensible. Functionality has been built into E-CAT so that its users can switch various consequence categories on and off in order to create customized profiles of economic consequences of numerous risk events. E-CAT incorporates uncertainty on both the input and output side in the course of the analysis.
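
    The "reduced form" idea described above can be sketched as follows; this is a hedged illustration only, with a stand-in function in place of the CGE model and hypothetical variable names, not the actual E-CAT implementation.

```python
# Sketch: fit a single regression equation to synthetic data generated by many
# runs of a complex model, then use the cheap regression for rapid estimates.
import numpy as np

rng = np.random.default_rng(0)

def complex_model(duration, magnitude, resilience):
    # Stand-in for a computable general equilibrium (CGE) run (hypothetical).
    return 1.5 * duration + 4.0 * magnitude - 2.0 * resilience + rng.normal(0, 0.1)

# 1) Generate synthetic data by sampling threat characteristics and running the model.
X = rng.uniform(0, 1, size=(500, 3))
y = np.array([complex_model(*row) for row in X])

# 2) Estimate a single regression equation (the reduced-form model).
A = np.column_stack([X, np.ones(len(X))])          # add an intercept column
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# 3) Rapid approximate estimate for a new event, without re-running the complex model.
new_event = np.array([0.4, 0.7, 0.2, 1.0])         # duration, magnitude, resilience, intercept
print(new_event @ coeffs)
```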

  18. Software designs of image processing tasks with incremental refinement of computation.

    Science.gov (United States)

    Anastasia, Davide; Andreopoulos, Yiannis

    2010-08-01

    Software realizations of computationally-demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities, since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
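
    A minimal sketch of the bitplane-based incremental idea, assuming NumPy and SciPy (this is not the authors' released designs): the convolution is refined plane by plane, most-significant bit first, so the partial result improves monotonically and can be used whenever the cycle budget runs out.

```python
# Incremental convolution by bitplanes: each processed plane refines the result.
import numpy as np
from scipy.ndimage import convolve

def incremental_convolution(image_u8, kernel, n_planes=8):
    partial = np.zeros(image_u8.shape, dtype=np.float64)
    for b in range(7, 7 - n_planes, -1):                   # MSB -> LSB
        plane = (image_u8 >> b) & 1                        # extract one bitplane
        partial += convolve(plane.astype(np.float64), kernel) * (1 << b)
        yield partial.copy()                               # result up to current precision

image = np.random.default_rng(1).integers(0, 256, (32, 32)).astype(np.uint8)
kernel = np.ones((3, 3)) / 9.0
results = list(incremental_convolution(image, kernel))
exact = convolve(image.astype(np.float64), kernel)

# The error decreases monotonically as more bitplanes are processed.
errors = [np.abs(r - exact).max() for r in results]
assert all(e1 >= e2 for e1, e2 in zip(errors, errors[1:]))
```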

  19. CONRAD—A software framework for cone-beam imaging in radiology

    Science.gov (United States)

    Maier, Andreas; Hofmann, Hannes G.; Berger, Martin; Fischer, Peter; Schwemmer, Chris; Wu, Haibo; Müller, Kerstin; Hornegger, Joachim; Choi, Jang-Hwan; Riess, Christian; Keil, Andreas; Fahrig, Rebecca

    2013-01-01

    Purpose: In the community of x-ray imaging, there is a multitude of tools and applications that are used in scientific practice. Many of these tools are proprietary and can only be used within a certain lab. Often the same algorithm is implemented multiple times by different groups in order to enable comparison. In an effort to tackle this problem, the authors created CONRAD, a software framework that provides many of the tools that are required to simulate basic processes in x-ray imaging and perform image reconstruction with consideration of nonlinear physical effects. Methods: CONRAD is a Java-based state-of-the-art software platform with extensive documentation. It is based on platform-independent technologies. Special libraries offer access to hardware acceleration such as OpenCL. There is an easy-to-use interface for parallel processing. The software package includes different simulation tools that are able to generate up to 4D projection and volume data and respective vector motion fields. Well known reconstruction algorithms such as FBP, DBP, and ART are included. All algorithms in the package are referenced to a scientific source. Results: A total of 13 different phantoms and 30 processing steps have already been integrated into the platform at the time of writing. The platform comprises 74.000 nonblank lines of code out of which 19% are used for documentation. The software package is available for download at http://conrad.stanford.edu. To demonstrate the use of the package, the authors reconstructed images from two different scanners, a table top system and a clinical C-arm system. Runtimes were evaluated using the RabbitCT platform and demonstrate state-of-the-art runtimes with 2.5 s for the 256 problem size and 12.4 s for the 512 problem size. Conclusions: As a common software framework, CONRAD enables the medical physics community to share algorithms and develop new ideas. In particular this offers new opportunities for scientific collaboration and

  20. CONRAD--a software framework for cone-beam imaging in radiology.

    Science.gov (United States)

    Maier, Andreas; Hofmann, Hannes G; Berger, Martin; Fischer, Peter; Schwemmer, Chris; Wu, Haibo; Müller, Kerstin; Hornegger, Joachim; Choi, Jang-Hwan; Riess, Christian; Keil, Andreas; Fahrig, Rebecca

    2013-11-01

    In the community of x-ray imaging, there is a multitude of tools and applications that are used in scientific practice. Many of these tools are proprietary and can only be used within a certain lab. Often the same algorithm is implemented multiple times by different groups in order to enable comparison. In an effort to tackle this problem, the authors created CONRAD, a software framework that provides many of the tools that are required to simulate basic processes in x-ray imaging and perform image reconstruction with consideration of nonlinear physical effects. CONRAD is a Java-based state-of-the-art software platform with extensive documentation. It is based on platform-independent technologies. Special libraries offer access to hardware acceleration such as OpenCL. There is an easy-to-use interface for parallel processing. The software package includes different simulation tools that are able to generate up to 4D projection and volume data and respective vector motion fields. Well known reconstruction algorithms such as FBP, DBP, and ART are included. All algorithms in the package are referenced to a scientific source. A total of 13 different phantoms and 30 processing steps have already been integrated into the platform at the time of writing. The platform comprises 74.000 nonblank lines of code out of which 19% are used for documentation. The software package is available for download at http://conrad.stanford.edu. To demonstrate the use of the package, the authors reconstructed images from two different scanners, a table top system and a clinical C-arm system. Runtimes were evaluated using the RabbitCT platform and demonstrate state-of-the-art runtimes with 2.5 s for the 256 problem size and 12.4 s for the 512 problem size. As a common software framework, CONRAD enables the medical physics community to share algorithms and develop new ideas. In particular this offers new opportunities for scientific collaboration and quantitative performance comparison

  1. CONRAD—A software framework for cone-beam imaging in radiology

    Energy Technology Data Exchange (ETDEWEB)

    Maier, Andreas; Choi, Jang-Hwan; Riess, Christian; Keil, Andreas; Fahrig, Rebecca [Department of Radiology, Stanford University, Stanford, California 94305 (United States); Hofmann, Hannes G.; Berger, Martin [Pattern Recognition Laboratory, Department of Computer Science, Friedrich-Alexander University of Erlangen-Nuremberg, Erlangen 91058 (Germany); Fischer, Peter; Schwemmer, Chris; Wu, Haibo; Müller, Kerstin; Hornegger, Joachim [Erlangen Graduate School in Advanced Optical Technologies (SAOT), Universität Erlangen-Nürnberg Pattern Recognition Laboratory, Department of Computer Science, Friedrich-Alexander University of Erlangen-Nuremberg, Erlangen 91058 (Germany)

    2013-11-15

    Purpose: In the community of x-ray imaging, there is a multitude of tools and applications that are used in scientific practice. Many of these tools are proprietary and can only be used within a certain lab. Often the same algorithm is implemented multiple times by different groups in order to enable comparison. In an effort to tackle this problem, the authors created CONRAD, a software framework that provides many of the tools that are required to simulate basic processes in x-ray imaging and perform image reconstruction with consideration of nonlinear physical effects.Methods: CONRAD is a Java-based state-of-the-art software platform with extensive documentation. It is based on platform-independent technologies. Special libraries offer access to hardware acceleration such as OpenCL. There is an easy-to-use interface for parallel processing. The software package includes different simulation tools that are able to generate up to 4D projection and volume data and respective vector motion fields. Well known reconstruction algorithms such as FBP, DBP, and ART are included. All algorithms in the package are referenced to a scientific source.Results: A total of 13 different phantoms and 30 processing steps have already been integrated into the platform at the time of writing. The platform comprises 74.000 nonblank lines of code out of which 19% are used for documentation. The software package is available for download at http://conrad.stanford.edu. To demonstrate the use of the package, the authors reconstructed images from two different scanners, a table top system and a clinical C-arm system. Runtimes were evaluated using the RabbitCT platform and demonstrate state-of-the-art runtimes with 2.5 s for the 256 problem size and 12.4 s for the 512 problem size.Conclusions: As a common software framework, CONRAD enables the medical physics community to share algorithms and develop new ideas. In particular this offers new opportunities for scientific collaboration and

  2. Adiposoft: automated software for the analysis of white adipose tissue cellularity in histological sections.

    Science.gov (United States)

    Galarraga, Miguel; Campión, Javier; Muñoz-Barrutia, Arrate; Boqué, Noemí; Moreno, Haritz; Martínez, José Alfredo; Milagro, Fermín; Ortiz-de-Solórzano, Carlos

    2012-12-01

    The accurate estimation of the number and size of cells provides relevant information on the kinetics of growth and the physiological status of a given tissue or organ. Here, we present Adiposoft, a fully automated open-source software for the analysis of white adipose tissue cellularity in histological sections. First, we describe the sequence of image analysis routines implemented by the program. Then, we evaluate our software by comparing it with other adipose tissue quantification methods, namely, with the manual analysis of cells in histological sections (used as gold standard) and with the automated analysis of cells in suspension, the most commonly used method. Our results show significant concordance between Adiposoft and the other two methods. We also demonstrate the ability of the proposed method to distinguish the cellular composition of three different rat fat depots. Moreover, we found high correlation and low disagreement between Adiposoft and the manual delineation of cells. We conclude that Adiposoft provides accurate results while considerably reducing the amount of time and effort required for the analysis.

  3. Shape analysis in medical image analysis

    CERN Document Server

    Tavares, João

    2014-01-01

    This book contains thirteen contributions from invited experts of international recognition addressing important issues in shape analysis in medical image analysis, including techniques for image segmentation, registration, modelling and classification, and applications in biology, as well as in cardiac, brain, spine, chest, lung and clinical practice. This volume treats topics such as, anatomic and functional shape representation and matching; shape-based medical image segmentation; shape registration; statistical shape analysis; shape deformation; shape-based abnormity detection; shape tracking and longitudinal shape analysis; machine learning for shape modeling and analysis; shape-based computer-aided-diagnosis; shape-based medical navigation; benchmark and validation of shape representation, analysis and modeling algorithms. This work will be of interest to researchers, students, and manufacturers in the fields of artificial intelligence, bioengineering, biomechanics, computational mechanics, computationa...

  4. Development of Software for Analyzing Cutting-Tool Breakage Based on Image Processing

    Institute of Scientific and Technical Information of China (English)

    赵彦玲; 刘献礼; 王鹏; 王波; 王红运

    2004-01-01

    As present-day digital microscope systems do not provide specialized tools for detecting cutting-tool breakage, analysis software has been developed using VC++. A module for edge detection and image segmentation is designed specifically for cutting tools. Known calibration relations and given postulates are used for scale measurements. Practical operation shows that the software can perform accurate detection.

  5. Introducing PLIA: Planetary Laboratory for Image Analysis

    Science.gov (United States)

    Peralta, J.; Hueso, R.; Barrado, N.; Sánchez-Lavega, A.

    2005-08-01

    We present a graphical software tool developed in IDL to navigate, process and analyze planetary images. The software has a complete Graphical User Interface and is cross-platform. It can also run under the IDL Virtual Machine without the need to own an IDL license. The set of tools included allows image navigation (orientation, centring and automatic limb determination), dynamical and photometric atmospheric measurements (winds and cloud albedos), cylindrical and polar projections, as well as image processing through several procedures. Being written in IDL, it is modular and easy to modify and extend with new capabilities. We show several examples of the software's capabilities with Galileo Venus observations: image navigation, photometric corrections, wind profiles obtained by cloud tracking, cylindrical projections and cloud photometric measurements. Acknowledgements: This work has been funded by Spanish MCYT PNAYA2003-03216, FEDER funds and Grupos UPV 15946/2004. R. Hueso acknowledges a post-doc fellowship from Gobierno Vasco.

  6. Control software analysis, Part I Open-loop properties

    CERN Document Server

    Feron, Eric

    2008-01-01

    As the digital world enters further into everyday life, questions are raised about the increasing challenges brought by the interaction of real-time software with physical devices. Many accidents and incidents encountered in areas as diverse as medical systems, transportation systems or weapon systems are ultimately attributed to "software failures". Since real-time software that interacts with physical systems might as well be called control software, the long litany of accidents due to real-time software failures might be taken as an equally long list of opportunities for control systems engineering. In this paper, we are interested only in run-time errors in those pieces of software that are a direct implementation of control system specifications: For well-defined and well-understood control architectures such as those present in standard textbooks on digital control systems, the current state of theoretical computer science is well-equipped enough to address and analyze control algorithms. It appears tha...

  7. Common tasks in microscopic and ultrastructural image analysis using ImageJ.

    Science.gov (United States)

    Papadopulos, Francesca; Spinelli, Matthew; Valente, Sabrina; Foroni, Laura; Orrico, Catia; Alviano, Francesco; Pasquinelli, Gianandrea

    2007-01-01

    Cooperation between research communities and software-development teams has led to the creation of novel software. The purpose of this paper is to show an alternative work method based on the usage of ImageJ (http://rsb.info.nih.gov/ij/), which can be effectively employed in solving common microscopic and ultrastructural image analysis tasks. As open-source software, ImageJ provides the possibility to work in a free-development/sharing world. Its very "friendly" graphical user interface helps users to manage and edit biomedical images. The on-line material such as handbooks, wikis, and plugins leads users through various functions, giving clues about potential new applications. ImageJ is not only morphometric analysis software; it is sufficiently flexible to be adapted to the numerous requirements of both routine laboratory work and research. Examples include area measurements on selectively stained tissue components, cell count and area measurements at the single cell level, immunohistochemical antigen quantification, and immunoelectron microscopy gold particle counts.
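
    As a hedged illustration of the kind of task listed above (cell counts and area measurements on thresholded images), the following Python sketch uses scikit-image rather than ImageJ; the function and parameter names are assumptions for demonstration only.

```python
# Sketch: count objects and measure their areas after Otsu thresholding,
# analogous to the ImageJ cell-count / area-measurement workflow described above.
import numpy as np
from skimage import filters, measure, morphology

def count_and_measure(image, min_area=20):
    """Threshold a grayscale image, label connected regions and report areas (pixels)."""
    threshold = filters.threshold_otsu(image)            # global Otsu threshold
    mask = image > threshold                              # foreground = stained objects
    mask = morphology.remove_small_objects(mask, min_area)
    labels = measure.label(mask)                          # connected-component labelling
    regions = measure.regionprops(labels)
    return len(regions), [r.area for r in regions]

# Example with a synthetic image containing two bright blobs.
img = np.zeros((64, 64))
img[10:20, 10:20] = 1.0
img[40:55, 40:55] = 1.0
n_cells, areas = count_and_measure(img, min_area=5)
print(n_cells, areas)   # -> 2 [100, 225]
```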

  8. The role of camera-bundled image management software in the consumer digital imaging value chain

    Science.gov (United States)

    Mueller, Milton; Mundkur, Anuradha; Balasubramanian, Ashok; Chirania, Virat

    2005-02-01

    This research was undertaken by the Convergence Center at the Syracuse University School of Information Studies (www.digital-convergence.info). Project ICONICA, the name for the research, focuses on the strategic implications of digital Images and the CONvergence of Image management and image CApture. Consumer imaging - the activity that we once called "photography" - is now recognized as in the throes of a digital transformation. At the end of 2003, market researchers estimated that about 30% of the households in the U.S. and 40% of the households in Japan owned digital cameras. In 2004, of the 86 million new cameras sold (excluding one-time use cameras), a majority (56%) were estimated to be digital cameras. Sales of photographic film, while still profitable, are declining precipitously.

  9. Software selection based on analysis and forecasting methods, practised in 1C

    Science.gov (United States)

    Vazhdaev, A. N.; Chernysheva, T. Y.; Lisacheva, E. I.

    2015-09-01

    The research focuses on the built-in mechanisms of the “1C: Enterprise 8” platform for data analysis and forecasting. It is important to evaluate and select proper software in order to develop effective strategies for customer relationship management in terms of sales, as well as for the implementation and further maintenance of the software. The research data allow new forecast models to be created for scheduling further software distribution.

  10. Software maintenance: an analysis of industrial needs and constraints

    OpenAIRE

    Haziza, Marc; Voidrot, Jean-François; Queille, Jean-Pierre; Pofelski, Lech; Blazy, Sandrine

    1992-01-01

    The results are given of a series of case studies conducted at different industrial sites in the framework of the ESF/EPSOM (Eureka Software Factory/European Platform for Software Maintenance) project. The approach taken in the case studies was to directly contact software maintainers and obtain their own view of their activity, mainly through the use of interactive methods based on group work. This approach is intended to complement statistical studies which can be found in the literature, b...

  11. Analysis of Software Design Artifacts for Socio-Technical Aspects

    OpenAIRE

    Damaševičius, Robertas; Kaunas University of Technology

    2007-01-01

    Software systems are not purely technical objects. They are designed, constructed and used by people. Therefore, software design process is not purely a technical task, but a socio-technical process embedded within organizational and social structures. These social structures influence and govern their work behavior and final work products such as program source code and documentation. This paper discusses the organizational, social and psychological aspects of software design; and formulates...

  12. Assessment of global und regional left ventricular function with a 16-slice spiral-CT using two different software tools for quantitative functional analysis and qualitative evaluation of wall motion changes in comparison with magnetic resonance imaging; Moeglichkeiten der 16-Schicht-CT bei der linksventrikulaeren Funktionsbestimmung: Beurteilung zweier unterschiedlicher Software-Tools zur quantitativen Funktionsanalyse sowie qualitative Bewertung von Wandbewegungsstoerungen im Vergleich zur Magnetresonanztomographie

    Energy Technology Data Exchange (ETDEWEB)

    Koch, K.; Oellig, F.; Kunz, P.; Bender, P.; Oberholzer, K.; Mildenberger, P.; Kreitner, K.F.; Thelen, M. [Klinik und Poliklinik fuer Radiologie, Johannes Gutenberg-Univ. Mainz (Germany); Hake, U. [Klinik fuer Herz-, Thorax- und Gefaesschirurgie, Johannes Gutenberg-Univ. Mainz (Germany)

    2004-12-01

    Purpose: To determine global and regional left ventricular (LV) function from retrospectively gated multidetector row computed tomography (CT) by using two different semiautomated analysis tools and to correlate the results with those of magnetic resonance imaging (MRI). Materials and Methods: Nineteen patients (5 females, 14 males, mean age 69 years) underwent 16-slice spiral-CT (MS-CT) with standard technique without administration of β-blockers for a decrease in the cardiac rate. Ten series of images were reconstructed at every 10% of the RR-interval. With commercially available software capable of semiautomated contour detection, end-diastolic and end-systolic LV volumes (EDV and ESV) were determined from short-axis multiplanar CT reformations (MPR). Axial images of the end-systolic and end-diastolic cardiac phase were transformed to 3D volumes (3D) to determine EDV and ESV by using a threshold-supported reconstruction algorithm dependent on the contrast enhancement of the left ventricle. Steady-state free-precession cine MR images were acquired in short-axis orientation on the same day in all but one patient. Regional wall motion was assessed qualitatively in 17 left ventricular segments and classified as normo-, hypo-, a- or dyskinetic. Bland-Altman analysis was performed to calculate limits of agreement and systematic errors between CT and MRI. Results: For MPR/3D, mean end-diastolic (144.4/142.8 mL ± 67.5/67.1) and end-systolic (66.4/68.7 mL ± 52.1/49.9) LV volumes as determined with MS-CT correlated well with MRI measurements (147.6 mL ± 67.6 [r = 0.98/0.96] and 73.3 mL ± 55.5 [r = 0.98/0.98], respectively [p < .001]). LV stroke volume (77.6/74.1 ± 19.2/23.4 mL for CT vs. 74.4 mL ± 13.4 for MRI, r = 0.92/0.74) and LV ejection fraction (58.6/55.9% ± 13.5/13.7 for CT vs. 55.6% ± 13.5 for MRI, r = 0.95/0.91) also showed good correlation (p < .001). Regional wall motion analysis revealed agreement between CT and MRI in 316/323 (97

  13. Mississippi Company Using NASA Software Program to Provide Unique Imaging Service: DATASTAR Success Story

    Science.gov (United States)

    2001-01-01

    DATASTAR, Inc., of Picayune, Miss., has taken NASA's award-winning Earth Resources Laboratory Applications (ELAS) software program and evolved it to the point that the company is now providing a unique, spatial imagery service over the Internet. ELAS was developed in the early 80's to process satellite and airborne sensor imagery data of the Earth's surface into readable and useable information. While there are several software packages on the market that allow the manipulation of spatial data into useable products, this is usually a laborious task. The new program, called the DATASTAR Image Processing Exploitation, or DIPX, Delivery Service, is a subscription service available over the Internet that takes the work out of the equation and provides normalized geo-spatial data in the form of decision products.

  14. Strategy and software for the statistical spatial analysis of 3D intracellular distributions.

    Science.gov (United States)

    Biot, Eric; Crowell, Elizabeth; Burguet, Jasmine; Höfte, Herman; Vernhettes, Samantha; Andrey, Philippe

    2016-07-01

    The localization of proteins in specific domains or compartments in the 3D cellular space is essential for many fundamental processes in eukaryotic cells. Deciphering spatial organization principles within cells is a challenging task, in particular because of the large morphological variations between individual cells. We present here an approach for normalizing variations in cell morphology and for statistically analyzing spatial distributions of intracellular compartments from collections of 3D images. The method relies on the processing and analysis of 3D geometrical models that are generated from image stacks and that are used to build representations at progressively increasing levels of integration, ultimately revealing statistical significant traits of spatial distributions. To make this methodology widely available to end-users, we implemented our algorithmic pipeline into a user-friendly, multi-platform, and freely available software. To validate our approach, we generated 3D statistical maps of endomembrane compartments at subcellular resolution within an average epidermal root cell from collections of image stacks. This revealed unsuspected polar distribution patterns of organelles that were not detectable in individual images. By reversing the classical 'measure-then-average' paradigm, one major benefit of the proposed strategy is the production and display of statistical 3D representations of spatial organizations, thus fully preserving the spatial dimension of image data and at the same time allowing their integration over individual observations. The approach and software are generic and should be of general interest for experimental and modeling studies of spatial organizations at multiple scales (subcellular, cellular, tissular) in biological systems.

  15. Features of the Upgraded Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) Software

    Science.gov (United States)

    Mason, Michelle L.; Rufer, Shann J.

    2016-01-01

    The Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) software is used at the NASA Langley Research Center to analyze global aeroheating data on wind tunnel models tested in the Langley Aerothermodynamics Laboratory. One-dimensional, semi-infinite heating data derived from IHEAT are used in the design of thermal protection systems for hypersonic vehicles that are exposed to severe aeroheating loads, such as reentry vehicles during descent and landing procedures. This software program originally was written in the PV-WAVE® programming language to analyze phosphor thermography data from the two-color, relative-intensity system developed at Langley. To increase the efficiency, functionality, and reliability of IHEAT, the program was migrated to MATLAB® syntax and compiled as a stand-alone executable file labeled version 4.0. New features of IHEAT 4.0 include the options to perform diagnostic checks of the accuracy of the acquired data during a wind tunnel test, to extract data along a specified multi-segment line following a feature such as a leading edge or a streamline, and to batch process all of the temporal frame data from a wind tunnel run. Results from IHEAT 4.0 were compared on a pixel level to the output images from the legacy software to validate the program. The absolute differences between the heat transfer data output from the two programs were on the order of 10^-5 to 10^-7. IHEAT 4.0 replaces the PV-WAVE® version as the production software for aeroheating experiments conducted in the hypersonic facilities at NASA Langley.

  16. Mutation Analysis Approach to Develop Reliable Object-Oriented Software

    Directory of Open Access Journals (Sweden)

    Monalisa Sarma

    2014-01-01

    Full Text Available In general, modern programs are large and complex, and it is essential that they be highly reliable in applications. To support the development of highly reliable software, the Java programming language provides a rich set of exceptions and exception handling mechanisms. Exception handling mechanisms are intended to help developers build robust programs. Given a program with exception handling constructs, effective testing must detect whether all possible exceptions are raised and caught or not. However, complex exception handling constructs make it tedious to trace which exceptions are handled and where, and which exceptions are passed on. In this paper, we address this problem and propose a mutation analysis approach to developing reliable object-oriented programs. We apply a number of mutation operators to create a large set of mutant programs with different types of faults. We then generate test cases and test data to uncover exception-related faults. The test suite so obtained is applied to the mutant programs, measuring the mutation score and hence assessing how effective the test suite is at killing the mutants. We have tested our approach on a number of case studies to substantiate the efficacy of the proposed mutation analysis technique.
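
    The mutation-score computation can be sketched generically as follows; this is a hedged Python illustration (the paper targets Java exception-handling constructs), using a simple relational-operator mutation and a toy test suite, with all names hypothetical.

```python
# Illustrative sketch of mutation analysis: mutate a comparison operator, run the
# test suite against each mutant, and report the mutation score (killed / total).
import operator

def make_classifier(cmp):
    """Return an is_adult() whose age check uses the supplied comparison operator."""
    def is_adult(age):
        return "adult" if cmp(age, 18) else "minor"
    return is_adult

original = make_classifier(operator.ge)            # original code: age >= 18
mutants = [make_classifier(op)                     # relational-operator mutants
           for op in (operator.gt, operator.le, operator.lt, operator.eq)]

def test_suite(is_adult):
    """A small test suite; returns True when every assertion passes."""
    return (is_adult(18) == "adult" and
            is_adult(17) == "minor" and
            is_adult(30) == "adult")

assert test_suite(original)                        # the original passes all tests
killed = sum(1 for m in mutants if not test_suite(m))
print(f"mutation score: {killed}/{len(mutants)}")  # -> 4/4: the suite kills every mutant
```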

  17. Reduction EMI of BLDC Motor Drive Based on Software Analysis

    Directory of Open Access Journals (Sweden)

    Navid Mousavi

    2016-01-01

    Full Text Available In a BLDC motor-drive system, the leakage current from the motor to the ground network and the presence of high-frequency components in the DC link current are the most important causes of conducted interference. The leakage currents of the motors, flowing through the common ground, interfere with other equipment because of the high density of electrical and electronic systems in spacecraft and aircraft. Moreover, these systems generally share common DC buses, which aggravates the problem. The operation of the electric motor gives rise to high-frequency components in the DC link current, which can interfere with other subsystems. In this paper, the electromagnetic noise is analyzed and a method is proposed based on the frequency spectra of the DC link current and of the leakage current from the motor to the ground network. The proposed method introduces a new filtering-based process to suppress EMI. The Maxwell software is used to support the required analysis.

  18. HistFitter software framework for statistical data analysis

    Science.gov (United States)

    Baak, M.; Besjes, G. J.; Côté, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-04-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface.

  19. How qualitative data analysis software may support the qualitative analysis process

    NARCIS (Netherlands)

    Peters, V.A.M.; Wester, F.P.J.

    2007-01-01

    The last decades have shown large progress in the elaboration of procedures for qualitative data analysis and in the development of computer programs to support this kind of analysis. We believe, however, that the link between methodology and computer software tools is too loose, especially for a no

  20. The Financial Analysis System: An Integrated Software System for Financial Analysis and Modeling.

    Science.gov (United States)

    Groomer, S. Michael

    This paper discusses the Financial Analysis System (FAS), a software system for financial analysis, display, and modeling of the data found in the COMPUSTAT Annual Industrial, Over-the-Counter and Canadian Company files. The educational utility of FAS is also discussed briefly. (Author)

  1. Research and Development of Statistical Analysis Software System of Maize Seedling Experiment

    Directory of Open Access Journals (Sweden)

    Hui Cao

    2014-03-01

    Full Text Available In this study, software engineering methods were used to develop a software system for the statistics and analysis of maize seedling experiments. During development, a B/S (browser/server) software architecture was adopted and a set of statistical indicators for maize seedling evaluation was established. The experimental results indicated that the software system performs the statistical analysis of maize seedling quality well. The development of this software system explored a new method for the screening of maize seedlings.

  2. a New Digital Image Correlation Software for Displacements Field Measurement in Structural Applications

    Science.gov (United States)

    Ravanelli, R.; Nascetti, A.; Di Rita, M.; Belloni, V.; Mattei, D.; Nisticó, N.; Crespi, M.

    2017-07-01

    Recently, there has been a growing interest in studying non-contact techniques for strain and displacement measurement. Within photogrammetry, Digital Image Correlation (DIC) has received particular attention thanks to the recent advances in the field of low-cost, high resolution digital cameras, computer power and memory storage. DIC is indeed an optical technique able to measure full-field displacements and strain by comparing digital images of the surface of a material sample at different stages of deformation, and it can thus play a major role in structural monitoring applications. For all these reasons, a free and open source 2D DIC software, named py2DIC, was developed at the Geodesy and Geomatics Division of DICEA, University of Rome La Sapienza. Completely written in Python, the software is based on the template matching method and computes the displacement and strain fields. The potentialities of py2DIC were evaluated by processing the images captured during a tensile test performed in the Lab of Structural Engineering, where three different Glass Fiber Reinforced Polymer samples were subjected to a controlled tension by means of a universal testing machine. The results, compared with the values independently measured by several strain gauges fixed on the samples, demonstrate that the deformation mechanism of the investigated material can be successfully characterized. py2DIC is indeed able to resolve displacements at the level of a few microns, in reasonable agreement with the reference, both in terms of displacements (again within a few microns on average) and of Poisson's ratio.
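
    The core template-matching operation behind 2D DIC can be sketched as follows; this is an illustrative example under the assumption of scikit-image's normalized cross-correlation, not the py2DIC implementation.

```python
# Sketch: estimate the local displacement between a reference and a deformed image
# by template matching (normalized cross-correlation), the basic operation of DIC.
import numpy as np
from skimage.feature import match_template

rng = np.random.default_rng(0)
reference = rng.random((100, 100))
# Synthetic "deformed" image: the reference shifted by (3, 5) pixels.
deformed = np.roll(np.roll(reference, 3, axis=0), 5, axis=1)

# Take a subset (template) around a point of interest in the reference image...
template = reference[40:60, 40:60]
# ...and search for its best match in the deformed image.
score = match_template(deformed, template)
peak = np.unravel_index(np.argmax(score), score.shape)

dy, dx = peak[0] - 40, peak[1] - 40      # displacement of the subset
print(dy, dx)                             # -> 3 5
```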

  3. Scalable, high-performance 3D imaging software platform: system architecture and application to virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of a layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system.
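
    The block-volume idea can be illustrated with the following hedged Python sketch (hypothetical helper names, not the platform's API): a volume is split into blocks that are processed in parallel and then reassembled; a real neighborhood operation would additionally need overlapping halos between blocks.

```python
# Sketch: split a 3D volume into blocks, process the blocks on multiple cores,
# and stitch the results back together.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def process_block(block):
    # Stand-in for a 3D image-processing step (here: simple intensity scaling).
    return block * 0.5

def split_into_blocks(volume, block_shape):
    blocks, coords = [], []
    for z in range(0, volume.shape[0], block_shape[0]):
        for y in range(0, volume.shape[1], block_shape[1]):
            for x in range(0, volume.shape[2], block_shape[2]):
                blocks.append(volume[z:z + block_shape[0],
                                     y:y + block_shape[1],
                                     x:x + block_shape[2]])
                coords.append((z, y, x))
    return blocks, coords

if __name__ == "__main__":
    volume = np.random.rand(64, 64, 64)
    blocks, coords = split_into_blocks(volume, (32, 32, 32))
    result = np.empty_like(volume)
    with ProcessPoolExecutor() as pool:               # distribute blocks over cores
        for (z, y, x), out in zip(coords, pool.map(process_block, blocks)):
            result[z:z + out.shape[0], y:y + out.shape[1], x:x + out.shape[2]] = out
```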

  4. New Instruments for Survey: on Line Softwares for 3d Recontruction from Images

    Science.gov (United States)

    Fratus de Balestrini, E.; Guerra, F.

    2011-09-01

    The research's result is a critical analysis of the software's potential, with an indication of the areas in which it can be used effectively as an alternative to other methods of survey.

  5. NEW INSTRUMENTS FOR SURVEY: ON LINE SOFTWARES FOR 3D RECONTRUCTION FROM IMAGES

    Directory of Open Access Journals (Sweden)

    E. Fratus de Balestrini

    2012-09-01

    The research's result is a critical analysis of the software's potential, with an indication of the areas in which it can be used effectively as an alternative to other methods of survey.

  6. ANATI QUANTI: quantitative analysis software for plant anatomy studies

    Directory of Open Access Journals (Sweden)

    T.V. Aguiar

    2007-12-01

    Full Text Available Complementary quantitative analyses are necessary in several interdisciplinary studies in which Plant Anatomy is used. Generally, micromorphometric evaluation is performed manually and/or using non-specific image analysis software. This work aimed to develop quantitative analysis software specific to Plant Anatomy and to test its efficiency and acceptance by users. The solution was developed in the Java language for greater portability across operating systems. The software, named ANATI QUANTI, was tested by students, researchers and professors of the Plant Anatomy Laboratory of the Federal University of Viçosa (UFV). All interviewees received photos on which to carry out measurements in ANATI QUANTI and to compare them with the results obtained using the available software. Through previously formulated questionnaires, the volunteers highlighted the main advantages and disadvantages of the developed program in relation to the available software. Besides being more specific, simpler and faster than the available software, ANATI QUANTI is reliable and met the interviewees' expectations. However, additional features, such as the insertion of new scales, are still needed and would widen its range of users. ANATI QUANTI is already in use in research carried out by users at UFV. As free and open source software, it will be made available on the internet at no cost.

  7. Errors from Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wood, William Monford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    A systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities is presented, together with suggestions for improving the methodology for obtaining quantitative information from radiographed objects.

  8. Normative data of outer photoreceptor layer thickness obtained by software image enhancing based on Stratus optical coherence tomography images

    DEFF Research Database (Denmark)

    Christensen, U.C.; Kroyer, K.; Thomadsen, J.

    2008-01-01

    Aim: To present normative data of outer photoreceptor layer thickness obtained by a new semiautomatic image analysis algorithm operating on contrast-enhanced optical coherence tomography (OCT) images. Methods: Eight Stratus OCT3 scans from identical retinal locations from 25 normal eyes were regi...

  9. Prototype Software for Automated Structural Analysis of Systems

    DEFF Research Database (Denmark)

    Jørgensen, A.; Izadi-Zamanabadi, Roozbeh; Kristensen, M.

    2004-01-01

    In this paper we present a prototype software tool that is developed to analyse the structural model of automated systems in order to identify redundant information that is hence utilized for Fault detection and Isolation (FDI) purposes. The dedicated algorithms in this software tool use a tri...

  10. Multi-criteria decision analysis methods and software

    CERN Document Server

    Ishizaka, Alessio

    2013-01-01

    This book presents an introduction to MCDA followed by more detailed chapters about each of the leading methods used in this field. Comparison of methods and software is also featured to enable readers to choose the most appropriate method needed in their research. Worked examples as well as the software featured in the book are available on an accompanying website.

  11. An Analysis of Open Source Security Software Products Downloads

    Science.gov (United States)

    Barta, Brian J.

    2014-01-01

    Despite the continued demand for open source security software, a gap in the identification of success factors related to the success of open source security software persists. There are no studies that accurately assess the extent of this persistent gap, particularly with respect to the strength of the relationships of open source software…

  13. A pattern framework for software quality assessment and tradeoff analysis

    NARCIS (Netherlands)

    Folmer, Eelke; Boscht, Jan

    2007-01-01

    The earliest design decisions often have a significant impact on software quality and are the most costly to revoke. One of the challenges in architecture design is to reduce the frequency of retrofit problems in software designs; not being able to improve the quality of a system cost effectively, a

  15. TweezPal - Optical tweezers analysis and calibration software

    Science.gov (United States)

    Osterman, Natan

    2010-11-01

    Optical tweezers, a powerful tool for optical trapping, micromanipulation and force transduction, have in recent years become a standard technique commonly used in many research laboratories and university courses. Knowledge about the optical force acting on a trapped object can be gained only after a calibration procedure, which has to be performed (by an expert) for each type of trapped object. In this paper we present TweezPal, a user-friendly, standalone Windows software tool for optical tweezers analysis and calibration. Using TweezPal, the procedure can be performed in a matter of minutes even by non-expert users. The calibration is based on the Brownian motion of a particle trapped in a stationary optical trap, which is monitored using video or photodiode detection. The particle trajectory is imported into the software, which instantly calculates the position histogram, trapping potential, stiffness and anisotropy. Program summary: Program title: TweezPal; Catalogue identifier: AEGR_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGR_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 44 891; No. of bytes in distributed program, including test data, etc.: 792 653; Distribution format: tar.gz; Programming language: Borland Delphi; Computer: Any PC running Microsoft Windows; Operating system: Windows 95, 98, 2000, XP, Vista, 7; RAM: 12 Mbytes; Classification: 3, 4.14, 18, 23. Nature of problem: Quick, robust and user-friendly calibration and analysis of optical tweezers. The optical trap is calibrated from the trajectory of a trapped particle undergoing Brownian motion in a stationary optical trap (input data) using two methods. Solution method: Elimination of the experimental drift in position data. Direct calculation of the trap stiffness from the positional
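
    A minimal Python sketch of the equipartition-style stiffness estimate that such calibration tools automate is shown below; this is an assumed, simplified approach for illustration only (TweezPal itself also computes the potential and anisotropy and offers a second calibration method).

```python
# Sketch: estimate trap stiffness from the position variance of a trapped bead.
import numpy as np

kB = 1.380649e-23          # Boltzmann constant [J/K]
T = 295.0                  # temperature [K]

# Simulated trajectory of a trapped particle (metres); in practice this comes
# from video or photodiode tracking.
rng = np.random.default_rng(0)
true_stiffness = 1e-6      # N/m
x = rng.normal(0.0, np.sqrt(kB * T / true_stiffness), size=100_000)

x = x - x.mean()                    # remove drift / offset in the position data
stiffness = kB * T / np.var(x)      # equipartition: (1/2) k <x^2> = (1/2) kB T
print(f"estimated stiffness: {stiffness:.2e} N/m")
```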

  16. A complete software application for automatic registration of x-ray mammography and magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Solves-Llorens, J. A.; Rupérez, M. J., E-mail: mjruperez@labhuman.i3bh.es; Monserrat, C. [LabHuman, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia (Spain); Feliu, E.; García, M. [Hospital Clínica Benidorm, Avda. Alfonso Puchades, 8, 03501 Benidorm (Alicante) (Spain); Lloret, M. [Hospital Universitari y Politècnic La Fe, Bulevar Sur, 46026 Valencia (Spain)

    2014-08-15

    Purpose: This work presents a complete and automatic software application to aid radiologists in breast cancer diagnosis. The application is a fully automated method that performs a complete registration of magnetic resonance (MR) images and x-ray (XR) images in both directions (from MR to XR and from XR to MR) and for both x-ray mammograms, craniocaudal (CC), and mediolateral oblique (MLO). This new approximation allows radiologists to mark points in the MR images and, without any manual intervention, it provides their corresponding points in both types of XR mammograms and vice versa. Methods: The application automatically segments magnetic resonance images and x-ray images using the C-Means method and the Otsu method, respectively. It compresses the magnetic resonance images in both directions, CC and MLO, using a biomechanical model of the breast that distinguishes the specific biomechanical behavior of each one of its three tissues (skin, fat, and glandular tissue) separately. It makes a projection of both compressions and registers them with the original XR images using affine transformations and nonrigid registration methods. Results: The application has been validated by two expert radiologists. This was carried out through a quantitative validation on 14 data sets in which the Euclidean distance between points marked by the radiologists and the corresponding points obtained by the application were measured. The results showed a mean error of 4.2 ± 1.9 mm for the MRI to CC registration, 4.8 ± 1.3 mm for the MRI to MLO registration, and 4.1 ± 1.3 mm for the CC and MLO to MRI registration. Conclusions: A complete software application that automatically registers XR and MR images of the breast has been implemented. The application permits radiologists to estimate the position of a lesion that is suspected of being a tumor in an imaging modality based on its position in another different modality with a clinically acceptable error. The results show that the

  17. Electron Microscopy and Image Analysis for Selected Materials

    Science.gov (United States)

    Williams, George

    1999-01-01

    This particular project was completed in collaboration with the metallurgical diagnostics facility. The objective of this research had four major components. First, we required training in the operation of the environmental scanning electron microscope (ESEM) for imaging of selected materials including biological specimens. The types of materials range from cyanobacteria and diatoms to cloth, metals, sand, composites and other materials. Second, to obtain training in surface elemental analysis technology using energy dispersive x-ray (EDX) analysis, and in the preparation of x-ray maps of these same materials. Third, to provide training for the staff of the metallurgical diagnostics and failure analysis team in the area of image processing and image analysis technology using NIH Image software. Finally, we were to assist in the sample preparation, observing, imaging, and elemental analysis for Mr. Richard Hoover, one of NASA MSFC's solar physicists and Marshall's principal scientist for the agency-wide virtual Astrobiology Institute. These materials have been collected from various places around the world including the Fox Tunnel in Alaska, Siberia, Antarctica, ice core samples from near Lake Vostoc, thermal vents in the ocean floor, hot springs and many others. We were successful in our efforts to obtain high quality, high resolution images of various materials including selected biological ones. Surface analyses (EDX) and x-ray maps were easily prepared with this technology. We also discovered and used some applications for NIH Image software in the metallurgical diagnostics facility.

  18. Assessment of global longitudinal strain using standardized myocardial deformation imaging: a modality independent software approach.

    Science.gov (United States)

    Riffel, Johannes H; Keller, Marius G P; Aurich, Matthias; Sander, Yannick; Andre, Florian; Giusca, Sorin; Aus dem Siepen, Fabian; Seitz, Sebastian; Galuschky, Christian; Korosoglou, Grigorios; Mereles, Derliz; Katus, Hugo A; Buss, Sebastian J

    2015-07-01

    Myocardial deformation measurement is superior to left ventricular ejection fraction in identifying early changes in myocardial contractility and prediction of cardiovascular outcome. The lack of standardization hinders its clinical implementation. The aim of the study is to investigate a novel standardized deformation imaging approach based on the feature tracking algorithm for the assessment of global longitudinal (GLS) and global circumferential strain (GCS) in echocardiography and cardiac magnetic resonance imaging (CMR). 70 subjects undergoing CMR were consecutively investigated with echocardiography within a median time of 30 min. GLS and GCS were analyzed with a post-processing software incorporating the same standardized algorithm for both modalities. Global strain was defined as the relative shortening of the whole endocardial contour length and calculated according to the strain formula. Mean GLS values were -16.2 ± 5.3 and -17.3 ± 5.3 % for echocardiography and CMR, respectively. GLS did not differ significantly between the two imaging modalities, which showed strong correlation (r = 0.86), a small bias (-1.1 %) and narrow 95 % limits of agreement (LOA ± 5.4 %). Mean GCS values were -17.9 ± 6.3 and -24.4 ± 7.8 % for echocardiography and CMR, respectively. GCS was significantly underestimated by echocardiography (p windows in echocardiography. GCS assessment revealed only a strong correlation (r = 0.87) when echocardiographic image quality was good. No significant differences for GLS between two different echocardiographic vendors could be detected. Quantitative assessment of GLS using a standardized software algorithm allows the direct comparison of values acquired irrespective of the imaging modality. GLS may, therefore, serve as a reliable parameter for the assessment of global left ventricular function in clinical routine besides standard evaluation of the ejection fraction.
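
    The strain definition quoted above (relative shortening of the whole endocardial contour length) amounts to GLS = 100 × (L_es − L_ed) / L_ed. The following Python sketch is an illustrative assumption of how that formula might be evaluated on contour points; it is not the vendors' or the study's software.

```python
# Sketch: global strain as the relative change of endocardial contour length
# between end-diastole (ED) and end-systole (ES).
import numpy as np

def contour_length(points):
    """Total length of a polyline given as an (N, 2) array of contour points."""
    return np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))

def global_strain(ed_contour, es_contour):
    l_ed = contour_length(ed_contour)
    l_es = contour_length(es_contour)
    return 100.0 * (l_es - l_ed) / l_ed     # negative values indicate shortening

# Toy example: a contour that shortens by 17% yields a strain of about -17%.
theta = np.linspace(0, np.pi, 50)
ed = np.column_stack([np.cos(theta), np.sin(theta)])
es = 0.83 * ed
print(global_strain(ed, es))                # -> ~ -17.0
```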

  19. Pocket pumped image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kotov, I.V., E-mail: kotov@bnl.gov [Brookhaven National Laboratory, Upton, NY 11973 (United States); O' Connor, P. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Murray, N. [Centre for Electronic Imaging, Open University, Milton Keynes, MK7 6AA (United Kingdom)

    2015-07-01

    The pocket pumping technique is used to detect small electron trap sites. These traps, if present, degrade CCD charge transfer efficiency. To reveal traps in the active area, a CCD is illuminated with a flat field and, before the image is read out, the accumulated charges are moved back and forth a number of times in the parallel direction. As charges are moved over a trap, an electron is removed from the original pocket and re-emitted in the following pocket. As the process repeats, one pocket becomes depleted and the neighboring pocket accumulates excess charge. As a result, a “dipole” signal appears on the otherwise flat background level. The amplitude of the dipole signal depends on the trap pumping efficiency. This paper focuses on the trap identification technique and particularly on new methods developed for this purpose. A sensor with bad segments was deliberately chosen for algorithm development and to demonstrate the sensitivity and power of the new methods in uncovering sensor defects.
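
    As a hedged illustration (not the authors' algorithm), the following Python sketch locates the dipole signature described above: a depleted pixel immediately followed by an over-filled pixel along the parallel transfer direction of a flat-field image. The helper name and threshold are assumptions.

```python
# Sketch: find deficit/excess pixel pairs ("dipoles") left by pumped traps.
import numpy as np

def find_dipoles(image, threshold):
    """Return (row, col) positions where a deficit pixel is followed by an excess pixel."""
    background = np.median(image)
    deviation = image - background
    hits = []
    for col in range(image.shape[1]):                 # parallel direction = along columns
        column = deviation[:, col]
        for row in range(len(column) - 1):
            if column[row] < -threshold and column[row + 1] > threshold:
                hits.append((row, col))
    return hits

# Flat field with one synthetic dipole at rows 10/11 of column 5.
flat = np.full((50, 50), 1000.0)
flat[10, 5] -= 200.0     # depleted pocket
flat[11, 5] += 200.0     # neighbouring pocket with excess charge
print(find_dipoles(flat, threshold=100.0))   # -> [(10, 5)]
```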

  20. Performance Analysis of Software Effort Estimation Models Using Neural Networks

    Directory of Open Access Journals (Sweden)

    P.Latha

    2013-08-01

    Full Text Available Software effort estimation involves estimating the effort required to develop software. Cost and schedule overruns occur in software development because of inaccurate estimates made during its initial stages. Proper estimation is essential for the successful completion of software development. Many estimation techniques are available, among which neural network based techniques play a prominent role. The back propagation network is the most widely used architecture, and the Elman neural network, a recurrent network, can be used on par with it. For a good predictor, the difference between estimated and actual effort should be as low as possible. Data from historical NASA projects are used for training and testing. The experimental results confirm that the back propagation algorithm is more efficient than the Elman neural network.
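
    As an illustration of the kind of back-propagation estimator evaluated here (a minimal sketch, not the paper's model; the feature set, network size and synthetic project data are assumptions), a feed-forward network can be trained to map project attributes to effort and scored by the gap between estimated and actual effort:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical project records: [KLOC, team experience (yrs), complexity rating]
# and effort in person-months.  Real studies use historical NASA project data.
rng = np.random.default_rng(42)
X = rng.uniform([1, 1, 1], [100, 10, 5], size=(120, 3))
effort = 2.5 * X[:, 0] ** 0.9 / (1 + 0.05 * X[:, 1]) * (1 + 0.1 * X[:, 2])
y = effort + rng.normal(0, 2, size=len(effort))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Back propagation: a feed-forward network trained by gradient descent.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)

# A good predictor keeps |estimated - actual| effort small.
print("MAE (person-months):", mean_absolute_error(y_test, model.predict(X_test)))
```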

  1. Gemini Planet Imager integration to the Gemini South telescope software environment

    CERN Document Server

    Rantakyrö, Fredrik T; Chilcote, Jeffrey; Dunn, Jennifer; Goodsell, Stephen; Hibon, Pascale; Macintosh, Bruce; Quiroz, Carlos; Perrin, Marshall D; Sadakuni, Naru; Saddlemyer, Leslie; Savransky, Dmitry; Serio, Andrew; Winge, Claudia; Galvez, Ramon; Gausachs, Gaston; Hardie, Kayla; Hartung, Markus; Luhrs, Javier; Poyneer, Lisa; Thomas, Sandrine

    2014-01-01

    The Gemini Planet Imager is an extreme AO instrument with an integral field spectrograph (IFS) operating in Y, J, H, and K bands. Both the Gemini telescope and the GPI instrument are very complex systems. Our goal is that the combined telescope and instrument system may be run by one observer operating the instrument, and one operator controlling the telescope and the acquisition of light to the instrument. This requires a smooth integration between the two systems and easily operated control interfaces. We discuss the definition of the software and hardware interfaces, their implementation and testing, and the integration of the instrument with the telescope environment.

  2. Software Developed for the Reduction, Analysis and Presentation of MILOCSURVNORLANT Environmental Data,

    Science.gov (United States)

    seasonal and spatial dependence upon environmental factors. The major software, developed on an Elliott 503 computer for the reduction, analysis and presentation of MILOCSURVNORLANT 70 data, is described.

  3. Research on Application of Enhanced Neural Networks in Software Risk Analysis

    Institute of Scientific and Technical Information of China (English)

    Zhenbang Rong; Juhua Chen; Mei Liu; Yong Hu

    2006-01-01

    This paper puts forward a risk analysis model for software projects using enhanced neural networks. The data for analysis are acquired through questionnaires from real software projects. To address the multicollinearity among software risks, principal component analysis is adopted in the model to enhance network stability. To address the uncertainty of the neural network structure and of the initial weights, a genetic algorithm is employed. The experimental results reveal that the precision of software risk analysis can be improved by using the enhanced neural networks model.
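
    A minimal sketch of the preprocessing idea, using principal component analysis to decorrelate questionnaire-derived risk factors before they are fed to a neural network, is given below; the feature counts, synthetic labels and network size are assumptions, and the genetic-algorithm initialization described in the paper is omitted:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical questionnaire data: 80 projects x 12 correlated risk factors,
# labelled 1 if the project ran into serious trouble.
rng = np.random.default_rng(1)
latent = rng.normal(size=(80, 4))                  # a few underlying risk drivers
X = latent @ rng.normal(size=(4, 12)) + 0.3 * rng.normal(size=(80, 12))
y = (latent[:, 0] + latent[:, 1] > 0).astype(int)

# PCA decorrelates the inputs (addressing multicollinearity) before the network.
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=4),
                    MLPClassifier(hidden_layer_sizes=(6,), max_iter=3000,
                                  random_state=0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```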

  4. ScreenMill: A freely available software suite for growth measurement, analysis and visualization of high-throughput screen data

    Directory of Open Access Journals (Sweden)

    Rothstein Rodney

    2010-06-01

    Full Text Available Abstract Background Many high-throughput genomic experiments, such as Synthetic Genetic Array and yeast two-hybrid, use colony growth on solid media as a screen metric. These experiments routinely generate over 100,000 data points, making data analysis a time consuming and painstaking process. Here we describe ScreenMill, a new software suite that automates image analysis and simplifies data review and analysis for high-throughput biological experiments. Results The ScreenMill software suite includes three software tools or "engines": an open source Colony Measurement Engine (CM Engine) to quantitate colony growth data from plate images, a web-based Data Review Engine (DR Engine) to validate and analyze quantitative screen data, and a web-based Statistics Visualization Engine (SV Engine) to visualize screen data with statistical information overlaid. The methods and software described here can be applied to any screen in which growth is measured by colony size. In addition, the DR Engine and SV Engine can be used to visualize and analyze other types of quantitative high-throughput data. Conclusions ScreenMill automates quantification, analysis and visualization of high-throughput screen data. The algorithms implemented in ScreenMill are transparent, allowing users to be confident about the results ScreenMill produces. Taken together, the tools of ScreenMill offer biologists a simple and flexible way of analyzing their data, without requiring programming skills.
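
    The colony-measurement step can be illustrated with a short scikit-image sketch (this is not ScreenMill's Colony Measurement Engine; the thresholding choice and the synthetic plate are assumptions): threshold the plate image, label connected components, and report an area for each colony.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def measure_colonies(plate, min_area=20):
    """Return (label, area_in_pixels, centroid) for each colony on a plate image."""
    mask = plate > threshold_otsu(plate)          # colonies brighter than the agar
    labels = label(mask)
    return [(r.label, r.area, r.centroid)
            for r in regionprops(labels) if r.area >= min_area]

# Synthetic plate: dark agar with two bright, roughly circular colonies.
yy, xx = np.mgrid[0:200, 0:200]
plate = 0.1 * np.ones((200, 200))
plate[(yy - 60) ** 2 + (xx - 60) ** 2 < 15 ** 2] = 0.9
plate[(yy - 140) ** 2 + (xx - 150) ** 2 < 10 ** 2] = 0.9
for lbl, area, centroid in measure_colonies(plate):
    print(f"colony {lbl}: area={area} px, centroid={centroid}")
```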

  5. Software safety analysis on the model specified by NuSCR and SMV input language at requirements phase of software development life cycle using SMV

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Kwang Yong; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    2005-07-01

    A safety-critical software process is composed of a development process, a verification and validation (V and V) process and a safety analysis process. The safety analysis process has often been treated as an additional process and is not found in a conventional software process. However, software safety analysis (SSA) is required if software is applied to a safety system, and the SSA shall be performed independently for the safety software throughout the software development life cycle (SDLC). Of all the phases in software development, requirements engineering is generally considered to play the most critical role in determining the overall software quality. NASA data demonstrate that nearly 75% of failures found in operational software were caused by errors in the requirements. The verification process in the requirements phase checks the correctness of the software requirements specification, and the safety analysis process analyzes the safety-related properties in detail. In this paper, a method for safety analysis at the requirements phase of the software development life cycle using the symbolic model verifier (SMV) is proposed. Hazards are discovered by hazard analysis and, in order to use SMV for the safety analysis, the safety-related properties are expressed in computation tree logic (CTL).
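
    For illustration only (these properties and variable names are hypothetical, not taken from the paper), a typical safety requirement such as "whenever the monitored variable exceeds its setpoint, a trip signal is eventually generated" can be expressed in CTL as:

```latex
% Illustrative CTL safety properties with hypothetical signal names.
% On every path, an over-setpoint condition always leads to a trip signal:
\[
  \mathbf{AG}\,\bigl(\mathit{overSetpoint} \rightarrow \mathbf{AF}\,\mathit{tripSignal}\bigr)
\]
% An invariant: a forbidden combination of states never occurs:
\[
  \mathbf{AG}\,\neg\bigl(\mathit{tripSignal} \wedge \mathit{bypassActive}\bigr)
\]
```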

  6. Imaging spectroscopy for scene analysis

    CERN Document Server

    Robles-Kelly, Antonio

    2012-01-01

    This book presents a detailed analysis of spectral imaging, describing how it can be used for the purposes of material identification, object recognition and scene understanding. The opportunities and challenges of combining spatial and spectral information are explored in depth, as are a wide range of applications. Features: discusses spectral image acquisition by hyperspectral cameras, and the process of spectral image formation; examines models of surface reflectance, the recovery of photometric invariants, and the estimation of the illuminant power spectrum from spectral imagery; describes

  7. GammaLib and ctools: A software framework for the analysis of astronomical gamma-ray data

    CERN Document Server

    Knödlseder, J; Deil, C; Cayrou, J -B; Owen, E; Kelley-Hoskins, N; Lu, C -C; Buehler, R; Forest, F; Louge, T; Siejkowski, H; Kosack, K; Gerard, L; Schulz, A; Martin, P; Sanchez, D; Ohm, S; Hassan, T; Brau-Nogué, S

    2016-01-01

    The field of gamma-ray astronomy has seen important progress during the last decade, yet there exists so far no common software framework for the scientific analysis of gamma-ray telescope data. We propose to fill this gap by means of the GammaLib software, a generic library that we have developed to support the analysis of gamma-ray event data. GammaLib has been written in C++ and all functionality is available in Python through an extension module. On top of this framework we have developed the ctools software package, a suite of software tools that enables building of flexible workflows for the analysis of Imaging Air Cherenkov Telescope event data. The ctools are inspired by science analysis software available for existing high-energy astronomy instruments, and they follow the modular ftools model developed by the High Energy Astrophysics Science Archive Research Center. The ctools have been written in Python and C++, and can be either used from the command line, via shell scripts, or directly from Python...

  8. The 3D scanner prototype utilize object profile imaging using line laser and octave software

    Science.gov (United States)

    Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus

    2016-11-01

    A three-dimensional scanner (3D scanner) is a device that reconstructs a real object into digital form on a computer. 3D scanning is a technology that is still being developed, especially in developed countries, and current devices are advanced but very expensive. This study presents a simple 3D scanner prototype with a very low investment cost. The prototype consists of a webcam, a rotating desk driven by a stepper motor controlled by an Arduino UNO, and a line laser. The study is limited to objects with the same radius about their center (pivot) point. Scanning is performed by imaging the object profile illuminated by the line laser, which is captured by the camera and processed on a computer (image processing) using Octave software. For each image acquisition, the object on the rotating desk is rotated by a fixed angle, so that one full turn finally yields multiple images covering all sides of the object. The profiles of all images are then extracted in order to obtain the digital object dimensions. The digital dimensions are calibrated against a length standard, called a gauge block. The overall dimensions are then digitally reconstructed into a three-dimensional object. The reconstruction of the scanned object is validated against the original object dimensions and expressed as a percentage error. Based on the validation results, the horizontal dimension error is about 5% to 23% and the vertical dimension error is about +/- 3%.
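
    Profile extraction from a single captured frame amounts to finding, for each image row, the column where the laser line is brightest and converting that displacement into a radius. A minimal sketch is shown below (this is not the authors' Octave code; the calibration constants and synthetic frame are assumptions):

```python
import numpy as np

def extract_profile(frame, mm_per_pixel=0.2, laser_ref_col=320):
    """Extract the object profile from one line-laser image.

    frame        : 2D grayscale image, laser line roughly vertical.
    mm_per_pixel : calibration factor obtained from a gauge block (assumed here).
    laser_ref_col: column hit by the laser when no object is present (assumed).
    Returns the radius (mm) per image row.
    """
    # For each row, take the column with the strongest laser response.
    laser_cols = frame.argmax(axis=1)
    displacement_px = laser_cols - laser_ref_col
    return displacement_px * mm_per_pixel

# Synthetic frame: 480x640 background noise plus a displaced bright laser line.
rng = np.random.default_rng(0)
frame = rng.uniform(0.0, 0.1, size=(480, 640))
rows = np.arange(480)
line_cols = (320 + 50 * np.sin(rows / 480 * np.pi)).astype(int)   # bulging object
frame[rows, line_cols] = 1.0
radii = extract_profile(frame)
print("max radius (mm):", round(float(radii.max()), 1))  # about 50 px * 0.2 mm/px
```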

  9. BROCCOLI: Software for Fast fMRI Analysis on Many-Core CPUs and GPUs

    Directory of Open Access Journals (Sweden)

    Anders eEklund

    2014-03-01

    Full Text Available Analysis of functional magnetic resonance imaging (fMRI) data is becoming ever more computationally demanding as temporal and spatial resolutions improve, and large, publicly available data sets proliferate. Moreover, methodological improvements in the neuroimaging pipeline, such as non-linear spatial normalization, non-parametric permutation tests and Bayesian Markov Chain Monte Carlo approaches, can dramatically increase the computational burden. Despite these challenges, there do not yet exist any fMRI software packages which leverage inexpensive and powerful graphics processing units (GPUs) to perform these analyses. Here, we therefore present BROCCOLI, a free software package written in OpenCL (Open Computing Language) that can be used for parallel analysis of fMRI data on a large variety of hardware configurations. BROCCOLI has, for example, been tested with an Intel CPU, an Nvidia GPU and an AMD GPU. These tests show that parallel processing of fMRI data can lead to significantly faster analysis pipelines. This speedup can be achieved on relatively standard hardware, but further, dramatic speed improvements require only a modest investment in GPU hardware. BROCCOLI (running on a GPU) can perform non-linear spatial normalization to a 1 mm3 brain template in 4-6 seconds, and run a second level permutation test with 10,000 permutations in about a minute. These non-parametric tests are generally more robust than their parametric counterparts, and can also enable more sophisticated analyses by estimating complicated null distributions. Additionally, BROCCOLI includes support for Bayesian first-level fMRI analysis using a Gibbs sampler. The new software is freely available under GNU GPL3 and can be downloaded from github (https://github.com/wanderine/BROCCOLI/).

  10. BROCCOLI: Software for fast fMRI analysis on many-core CPUs and GPUs.

    Science.gov (United States)

    Eklund, Anders; Dufort, Paul; Villani, Mattias; Laconte, Stephen

    2014-01-01

    Analysis of functional magnetic resonance imaging (fMRI) data is becoming ever more computationally demanding as temporal and spatial resolutions improve, and large, publicly available data sets proliferate. Moreover, methodological improvements in the neuroimaging pipeline, such as non-linear spatial normalization, non-parametric permutation tests and Bayesian Markov Chain Monte Carlo approaches, can dramatically increase the computational burden. Despite these challenges, there do not yet exist any fMRI software packages which leverage inexpensive and powerful graphics processing units (GPUs) to perform these analyses. Here, we therefore present BROCCOLI, a free software package written in OpenCL (Open Computing Language) that can be used for parallel analysis of fMRI data on a large variety of hardware configurations. BROCCOLI has, for example, been tested with an Intel CPU, an Nvidia GPU, and an AMD GPU. These tests show that parallel processing of fMRI data can lead to significantly faster analysis pipelines. This speedup can be achieved on relatively standard hardware, but further, dramatic speed improvements require only a modest investment in GPU hardware. BROCCOLI (running on a GPU) can perform non-linear spatial normalization to a 1 mm(3) brain template in 4-6 s, and run a second level permutation test with 10,000 permutations in about a minute. These non-parametric tests are generally more robust than their parametric counterparts, and can also enable more sophisticated analyses by estimating complicated null distributions. Additionally, BROCCOLI includes support for Bayesian first-level fMRI analysis using a Gibbs sampler. The new software is freely available under GNU GPL3 and can be downloaded from github (https://github.com/wanderine/BROCCOLI/).

  11. BROCCOLI: Software for fast fMRI analysis on many-core CPUs and GPUs

    Science.gov (United States)

    Eklund, Anders; Dufort, Paul; Villani, Mattias; LaConte, Stephen

    2014-01-01

    Analysis of functional magnetic resonance imaging (fMRI) data is becoming ever more computationally demanding as temporal and spatial resolutions improve, and large, publicly available data sets proliferate. Moreover, methodological improvements in the neuroimaging pipeline, such as non-linear spatial normalization, non-parametric permutation tests and Bayesian Markov Chain Monte Carlo approaches, can dramatically increase the computational burden. Despite these challenges, there do not yet exist any fMRI software packages which leverage inexpensive and powerful graphics processing units (GPUs) to perform these analyses. Here, we therefore present BROCCOLI, a free software package written in OpenCL (Open Computing Language) that can be used for parallel analysis of fMRI data on a large variety of hardware configurations. BROCCOLI has, for example, been tested with an Intel CPU, an Nvidia GPU, and an AMD GPU. These tests show that parallel processing of fMRI data can lead to significantly faster analysis pipelines. This speedup can be achieved on relatively standard hardware, but further, dramatic speed improvements require only a modest investment in GPU hardware. BROCCOLI (running on a GPU) can perform non-linear spatial normalization to a 1 mm3 brain template in 4–6 s, and run a second level permutation test with 10,000 permutations in about a minute. These non-parametric tests are generally more robust than their parametric counterparts, and can also enable more sophisticated analyses by estimating complicated null distributions. Additionally, BROCCOLI includes support for Bayesian first-level fMRI analysis using a Gibbs sampler. The new software is freely available under GNU GPL3 and can be downloaded from github (https://github.com/wanderine/BROCCOLI/). PMID:24672471
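
    The second-level permutation test mentioned in these records can be sketched in a few lines as a one-sample sign-flipping test on per-subject contrast maps (a simplified CPU illustration, not BROCCOLI's OpenCL implementation; the subject count, voxel count and data are synthetic):

```python
import numpy as np

def sign_flip_permutation_test(contrasts, n_perm=10000, seed=0):
    """One-sample group test via sign flipping.

    contrasts : (n_subjects, n_voxels) array of first-level contrast estimates.
    Returns voxelwise t-statistics and p-values corrected for multiple
    comparisons using the max-statistic null distribution.
    """
    rng = np.random.default_rng(seed)
    n_sub, n_vox = contrasts.shape
    observed = contrasts.mean(axis=0) / (contrasts.std(axis=0, ddof=1) / np.sqrt(n_sub))

    max_null = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))
        flipped = contrasts * signs
        t = flipped.mean(axis=0) / (flipped.std(axis=0, ddof=1) / np.sqrt(n_sub))
        max_null[i] = t.max()          # record the maximum statistic over voxels

    # Corrected p-value: fraction of permutations whose max statistic beats each voxel.
    p_corr = (1 + (max_null[:, None] >= observed[None, :]).sum(axis=0)) / (n_perm + 1)
    return observed, p_corr

# Synthetic example: 16 subjects, 500 voxels, a real effect in the first 20 voxels.
rng = np.random.default_rng(1)
data = rng.normal(0, 1, size=(16, 500))
data[:, :20] += 1.0
t_obs, p = sign_flip_permutation_test(data, n_perm=2000)
print("significant voxels (p < 0.05):", int((p < 0.05).sum()))
```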

  12. RNAstructure: software for RNA secondary structure prediction and analysis

    Directory of Open Access Journals (Sweden)

    Mathews David H

    2010-03-01

    Full Text Available Abstract Background To understand an RNA sequence's mechanism of action, the structure must be known. Furthermore, target RNA structure is an important consideration in the design of small interfering RNAs and antisense DNA oligonucleotides. RNA secondary structure prediction, using thermodynamics, can be used to develop hypotheses about the structure of an RNA sequence. Results RNAstructure is a software package for RNA secondary structure prediction and analysis. It uses thermodynamics and utilizes the most recent set of nearest neighbor parameters from the Turner group. It includes methods for secondary structure prediction (using several algorithms), prediction of base pair probabilities, bimolecular structure prediction, and prediction of a structure common to two sequences. This contribution describes new extensions to the package, including a library of C++ classes for incorporation into other programs, a user-friendly graphical user interface written in Java, and new Unix-style text interfaces. The original graphical user interface for Microsoft Windows is still maintained. Conclusion The extensions to RNAstructure serve to make RNA secondary structure prediction user-friendly. The package is available for download from the Mathews lab homepage at http://rna.urmc.rochester.edu/RNAstructure.html.
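
    To illustrate the dynamic-programming flavor of secondary structure prediction (a deliberately simplified base-pair maximization in the style of the Nussinov recursion, rather than RNAstructure's nearest-neighbor thermodynamic model), consider:

```python
def nussinov_max_pairs(seq, min_loop=3):
    """Maximum number of nested Watson-Crick/GU pairs (simplified toy model).

    This base-pair counting recursion only illustrates the dynamic-programming
    idea; real predictors such as RNAstructure minimize nearest-neighbor free
    energies instead of maximizing pair counts.
    """
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                        # base j left unpaired
            for k in range(i, j - min_loop):
                if (seq[k], seq[j]) in pairs:          # base j pairs with base k
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_max_pairs("GGGAAAUCC"))  # a small hairpin-forming toy sequence
```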

  13. Software Aging Analysis of Web Server Using Neural Networks

    Directory of Open Access Journals (Sweden)

    G.Sumathi

    2012-05-01

    Full Text Available Software aging is a phenomenon that refers to progressive performance degradation, transient failures or even crashes in long-running software systems such as web servers. It mainly occurs due to the deterioration of operating system resources, fragmentation and numerical error accumulation. A basic method to fight software aging is software rejuvenation. Software rejuvenation is a proactive fault management technique aimed at cleaning up the system's internal state to prevent the occurrence of more severe crash failures in the future. It involves occasionally stopping the running software, cleaning its internal state and restarting it. An optimized schedule for performing the rejuvenation has to be derived in advance, because a long-running application cannot be taken down arbitrarily without incurring unnecessary cost. This paper proposes a method to derive an accurate and optimized rejuvenation schedule for a web server (Apache) by using a Radial Basis Function (RBF) based feed-forward neural network, a variant of artificial neural networks (ANN). Aging indicators, obtained through an experimental setup involving an Apache web server and clients, act as inputs to the neural network model. This method is better than existing ones because the use of RBF units leads to better accuracy and faster convergence.
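
    A minimal sketch of an RBF predictor for an aging indicator (for example, free memory over uptime) is given below; the Gaussian centers, width and synthetic data are assumptions, and the original work models Apache-specific resource metrics rather than this toy signal:

```python
import numpy as np

def fit_rbf(x, y, centers, width):
    """Fit an RBF network: Gaussian hidden units followed by a linear output layer."""
    phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    phi = np.hstack([phi, np.ones((len(x), 1))])          # bias term
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return w

def predict_rbf(x, w, centers, width):
    phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    phi = np.hstack([phi, np.ones((len(x), 1))])
    return phi @ w

# Synthetic aging indicator: free memory (MB) slowly decaying with a daily oscillation.
t = np.linspace(0, 30, 300)                                # days of uptime
rng = np.random.default_rng(0)
free_mem = 4000 - 60 * t + 150 * np.sin(2 * np.pi * t) + rng.normal(0, 30, t.size)

centers = np.linspace(0, 30, 15)
w = fit_rbf(t, free_mem, centers, width=1.5)
fitted = predict_rbf(t, w, centers, width=1.5)
print("RMS fit error (MB):", round(float(np.sqrt(np.mean((fitted - free_mem) ** 2))), 1))
# A rejuvenation schedule would then be chosen from where the fitted indicator
# is expected to cross a danger threshold.
```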

  14. Image processing in biodosimetry: A proposal of a generic free software platform.

    Science.gov (United States)

    Dumpelmann, Matthias; Cadena da Matta, Mariel; Pereira de Lemos Pinto, Marcela Maria; de Salazar E Fernandes, Thiago; Borges da Silva, Edvane; Amaral, Ademir

    2015-08-01

    The scoring of chromosome aberrations is the most reliable biological method for evaluating individual exposure to ionizing radiation. However, the microscopic analysis of human chromosome metaphases, generally employed to identify aberrations, mainly dicentrics (chromosomes with two centromeres), is a laborious task. This method is time consuming and its application in biological dosimetry would be almost impossible in the case of large-scale radiation incidents. In this project, generic software for automatic chromosome image processing was enhanced from a framework originally developed for the European Union Framework V project Simbio for applications in the area of source localization from electroencephalographic signals. The platform's capability is demonstrated by a study comparing automatic strategies for segmenting chromosomes from microscopic images.
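
    A minimal automatic-segmentation step of the kind compared in such studies might look like the following (this is not the Simbio-derived platform; the threshold, size filter and synthetic image are assumptions), using SciPy to label dark chromosome-like objects against a bright background:

```python
import numpy as np
from scipy import ndimage

def segment_chromosomes(gray, min_size=30):
    """Segment dark objects (candidate chromosomes) from a bright background.

    gray : 2D float array in [0, 1], e.g. a Giemsa-stained metaphase image.
    Returns a label image and the number of retained objects.
    """
    # Simple global threshold: chromosomes are darker than the background.
    mask = gray < gray.mean() - 2 * gray.std()
    mask = ndimage.binary_opening(mask, iterations=1)     # remove specks
    labels, n = ndimage.label(mask)
    # Drop tiny objects (debris, noise).
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_size) + 1
    cleaned = np.where(np.isin(labels, keep), labels, 0)
    return cleaned, len(keep)

# Synthetic metaphase-like image: bright background with two dark elongated blobs.
rng = np.random.default_rng(3)
img = 0.8 + 0.02 * rng.standard_normal((256, 256))
img[40:90, 60:70] = 0.2      # "chromosome" 1
img[150:200, 180:188] = 0.2  # "chromosome" 2
labels, n_objects = segment_chromosomes(img)
print("objects found:", n_objects)
```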

  15. Multivariate image analysis in biomedicine.

    Science.gov (United States)

    Nattkemper, Tim W

    2004-10-01

    In recent years, multivariate imaging techniques have been developed and applied in biomedical research to an increasing degree. In research projects as well as in clinical studies, m-dimensional multivariate images (MVI) are recorded and stored in databases for subsequent analysis. The complexity of the m-dimensional data and the growing number of high-throughput applications call for new strategies for applying image processing and data mining to support direct interactive analysis by human experts. This article provides an overview of proposed approaches for MVI analysis in biomedicine. After summarizing the biomedical MVI techniques, the two-level framework for MVI analysis is illustrated. Following this framework, the state-of-the-art solutions from the fields of image processing and data mining are reviewed and discussed. Motivations for MVI data mining in biology and medicine are characterized, followed by an overview of graphical and auditory approaches for interactive data exploration. The paper concludes by summarizing open problems in MVI analysis and remarking on the future development of biomedical MVI analysis.

  16. Object-oriented data handler for sequence analysis software development.

    Science.gov (United States)

    Ptitsyn, A A; Grigorovich, D A

    1995-12-01

    We report an object-oriented data handler and supplementary tools for the development of molecular genetics application software for various sequence analyses. Our data handler has a flexible and expandable format that supports the most common data types for molecular genetics software. New data types can be constructed in an object-oriented manner from the basic units. The data handler includes an object library, a format-converting program and a viewer that can simultaneously visualize the data contained in several files to construct a general picture from separate data. This software has been implemented on an IBM PC-compatible personal computer.
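
    A minimal sketch of the object-oriented idea, a basic record class from which new sequence data types are composed plus a simple format converter, is shown below; the class and field names are invented for illustration and are not from the reported library:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SequenceRecord:
    """Basic unit: an identified biological sequence with free-form annotations."""
    identifier: str
    sequence: str
    annotations: Dict[str, str] = field(default_factory=dict)

    def length(self) -> int:
        return len(self.sequence)

@dataclass
class AlignmentRecord:
    """A new data type composed from basic units, as in an object-oriented handler."""
    members: List[SequenceRecord]

    def to_fasta(self) -> str:
        """Format conversion: dump all member sequences as FASTA text."""
        return "\n".join(f">{r.identifier}\n{r.sequence}" for r in self.members)

aln = AlignmentRecord(members=[
    SequenceRecord("seq1", "ATGGCGT", {"organism": "E. coli"}),
    SequenceRecord("seq2", "ATGGCAT"),
])
print(aln.to_fasta())
```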

  17. User's Guide for the MapImage Reprojection Software Package, Version 1.01

    Science.gov (United States)

    Finn, Michael P.; Trent, Jason R.

    2004-01-01

    Scientists routinely accomplish small-scale geospatial modeling in the raster domain, using high-resolution datasets (such as 30-m data) for large parts of continents and low-resolution to high-resolution datasets for the entire globe. Recently, Usery and others (2003a) expanded on the previously limited empirical work with real geographic data by compiling and tabulating the accuracy of categorical areas in projected raster datasets of global extent. Geographers and applications programmers at the U.S. Geological Survey's (USGS) Mid-Continent Mapping Center (MCMC) undertook an effort to expand and evolve an internal USGS software package, MapImage, or mapimg, for raster map projection transformation (Usery and others, 2003a). Daniel R. Steinwand of Science Applications International Corporation, Earth Resources Observation Systems Data Center in Sioux Falls, S. Dak., originally developed mapimg for the USGS, basing it on the USGS's General Cartographic Transformation Package (GCTP). It operated as a command line program on the Unix operating system. Through efforts at MCMC, and in coordination with Mr. Steinwand, this program has been transformed from a command-line application into a software package with a graphical user interface for Windows, Linux, and Unix machines. Usery and others (2003b) pointed out that many commercial software packages do not use exact projection equations and that even when exact projection equations are used, the software often results in error and sometimes does not complete the transformation for specific projections, at specific resampling resolutions, and for specific singularities. Direct implementation of point-to-point transformation with appropriate functions yields the variety of projections available in these software packages, but implementation with data other than points requires specific adaptation of the equations or prior preparation of the data to allow the transformation to succeed. Additional
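
    Point-to-point transformation, as opposed to transforming whole raster cells, can be illustrated with the forward sinusoidal projection applied to the centers of a coarse grid (an illustrative textbook formula, not mapimg's GCTP-based code; the sphere radius and grid spacing are assumptions):

```python
import numpy as np

R = 6371007.0  # authalic sphere radius in meters (an illustrative choice)

def sinusoidal_forward(lon_deg, lat_deg, lon0_deg=0.0):
    """Forward sinusoidal (equal-area) projection of geographic coordinates."""
    lon = np.radians(np.asarray(lon_deg, dtype=float) - lon0_deg)
    lat = np.radians(np.asarray(lat_deg, dtype=float))
    x = R * lon * np.cos(lat)
    y = R * lat
    return x, y

# Project the centers of a coarse global raster grid point by point.
lons, lats = np.meshgrid(np.arange(-175.0, 180.0, 10.0),
                         np.arange(-85.0, 90.0, 10.0))
x, y = sinusoidal_forward(lons, lats)
print("projected grid shape:", x.shape, "x-range (m):", float(x.min()), float(x.max()))
```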

  18. Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software

    Science.gov (United States)

    Tilton, James C.

    2003-01-01

    A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail, in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. describe an approach for producing hierarchical segmentations (called HSEG) and give a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region-growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region-growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software. Results with Landsat TM data are included comparing RHSEG with classic
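
    The core region-growing step, repeatedly merging the most similar pair of spatially adjacent regions to build a hierarchy of segmentations, can be sketched as follows (a simplified greedy merge on a small label image, not the HSEG/RHSEG implementation; similarity here is just the difference of region mean values):

```python
import numpy as np

def stepwise_merge(image, labels, n_levels=3):
    """Greedy best-merge region growing: repeatedly merge the most similar
    spatially adjacent pair of regions, recording a label image at each level."""
    labels = labels.copy()
    hierarchy = []
    for _ in range(n_levels):
        ids = np.unique(labels)
        if len(ids) < 2:
            break
        means = {i: image[labels == i].mean() for i in ids}
        # Collect spatially adjacent label pairs (4-connectivity).
        pairs = set()
        for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
            diff = a != b
            pairs |= set(map(tuple, np.sort(np.column_stack([a[diff], b[diff]]), axis=1)))
        # Merge the pair with the smallest difference of region means.
        i, j = min(pairs, key=lambda p: abs(means[p[0]] - means[p[1]]))
        labels[labels == j] = i
        hierarchy.append(labels.copy())
    return hierarchy

# Toy image: four blocks, two dark and two bright, pre-labelled 0..3.
img = np.block([[np.full((4, 4), 0.1), np.full((4, 4), 0.15)],
                [np.full((4, 4), 0.8), np.full((4, 4), 0.85)]])
init = np.block([[np.full((4, 4), 0), np.full((4, 4), 1)],
                 [np.full((4, 4), 2), np.full((4, 4), 3)]])
levels = stepwise_merge(img, init, n_levels=3)
print("regions per level:", [len(np.unique(lab)) for lab in levels])  # typically [3, 2, 1]
```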

  19. Ten years of software sustainability at the Infrared Processing and Analysis Center.

    Science.gov (United States)

    Berriman, G Bruce; Good, John; Deelman, Ewa; Alexov, Anastasia

    2011-08-28

    This paper presents a case study of an approach to sustainable software architecture that has been successfully applied over a period of 10 years to astronomy software services at the NASA Infrared Processing and Analysis Center (IPAC), Caltech (http://www.ipac.caltech.edu). The approach was developed in response to the need to build and maintain the NASA Infrared Science Archive (http://irsa.ipac.caltech.edu), NASA's archive node for infrared astronomy datasets. When the archive opened for business in 1999 serving only two datasets, it was understood that the holdings would grow rapidly in size and diversity, and consequently in the number of queries and volume of data download. It was also understood that platforms and browsers would be modernized, that user interfaces would need to be replaced and that new functionality outside of the scope of the original specifications would be needed. The changes in scientific functionality over time are largely driven by the archive user community, whose interests are represented by a formal user panel. The approach has been extended to support four more major astronomy archives, which today host data from more than 40 missions and projects, to support a complete modernization of a powerful and unique legacy astronomy application for co-adding survey data, and to support deployment of Montage, a powerful image mosaic engine for astronomy. The approach involves using a component-based architecture, designed from the outset to support sustainability, extensibility and portability. Although successful, the approach demands careful assessment of new and emer