WorldWideScience

Sample records for visual reconstruction validation

  1. Raw-data display and visual reconstruction validation in ALICE

    International Nuclear Information System (INIS)

    Tadel, M

    2008-01-01

ALICE Event Visualization Environment (AliEVE) is based on ROOT and its GUI, 2D and 3D graphics classes. A small application kernel provides for registration and management of visualization objects. CINT scripts are used as an extensible mechanism for data extraction, selection and processing as well as for steering of frequent event-related tasks. AliEVE is used for event visualization in the offline and high-level trigger frameworks. Mechanisms and base classes provided for visual representation of raw data for different detector types are described. Common infrastructure for thresholding and color-coding of signal/time information, for placement of detector modules in various 2D/3D layouts and for user interaction with displayed data is presented. Methods for visualization of raw data at different levels of detail are discussed, as they are expected to play an important role during early detector operation, when detector calibration, occupancy and noise levels are poorly understood. Since September 2006, ALICE has applied a regular visual-scanning procedure to simulated proton-proton data to detect any shortcomings in cluster finding, tracking, and primary and secondary vertex reconstruction. A high level of interactivity is required to allow in-depth exploration of event structure. Navigation back to simulation records is supported for debugging purposes. Standard 2D projections and transformations are available for clusters, tracks and simplified detector geometry.
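The thresholding and color-coding infrastructure mentioned above can be illustrated with a minimal sketch: suppress signals below a cut and map the rest onto a palette. All names and values here are hypothetical, not AliEVE's actual API.

```python
def color_code(signals, threshold, palette):
    """Map each signal above `threshold` to a palette bin; below -> None."""
    lo, hi = threshold, max(signals)
    out = []
    for s in signals:
        if s < lo:
            out.append(None)  # suppressed: below the noise threshold
        else:
            # normalize into [0, 1] and pick a palette index
            t = (s - lo) / (hi - lo) if hi > lo else 0.0
            out.append(palette[min(int(t * len(palette)), len(palette) - 1)])
    return out

palette = ["blue", "green", "yellow", "red"]
print(color_code([2, 7, 11, 20], threshold=5, palette=palette))
# -> [None, 'blue', 'green', 'red']
```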

  2. Reflective Reconstruction of Visual Products

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin

    2017-01-01

    objects. The teacher used a range of visual materials including video clips, diagrams, student drawings, and student produced 3D models, each endowed with unique material and visual dimensions. The teacher activated those through talking, writing, drawing and working with artifacts and purposefully...

  3. Three-dimensional reconstruction and visualization system for medical images

    International Nuclear Information System (INIS)

    Preston, D.F.; Batnitzky, S.; Kyo Rak Lee; Cook, P.N.; Cook, L.T.; Dwyer, S.J.

    1982-01-01

A three-dimensional reconstruction and visualization system could be of significant advantage in medical applications such as neurosurgery and radiation treatment planning. The anatomic structures reconstructed from CT head scans could be used in a head stereotactic system to help plan the surgical procedure and the radiation treatment for a brain lesion. The three-dimensional reconstruction algorithm also provides quantitative measures such as volume and surface area estimates of the anatomic features. This aspect of the system may be used to monitor the progress or staging of a disease and the effects of patient treatment. Two cases are presented to illustrate the three-dimensional surface reconstruction and visualization system.
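The volume and surface area estimation mentioned above reduces, in its simplest voxel-based form, to counting labeled voxels and their exposed faces. This is an illustrative sketch, not the paper's algorithm:

```python
def volume_and_surface(voxels, spacing=1.0):
    """voxels: set of (x, y, z) integer coordinates marked as 'inside'."""
    vol = len(voxels) * spacing ** 3
    faces = 0
    neighbors = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    for (x, y, z) in voxels:
        for dx, dy, dz in neighbors:
            # a face is 'exposed' if the neighbor on that side is outside
            if (x + dx, y + dy, z + dz) not in voxels:
                faces += 1
    return vol, faces * spacing ** 2

# 2x2x2 voxel cube: volume 8, surface area 24
cube = {(x, y, z) for x in range(2) for y in range(2) for z in range(2)}
print(volume_and_surface(cube))  # -> (8.0, 24.0)
```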

  4. Visualization and Analysis-Oriented Reconstruction of Material Interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Childs, Henry R.

    2010-03-05

    Reconstructing boundaries along material interfaces from volume fractions is a difficult problem, especially because the under-resolved nature of the input data allows for many correct interpretations. Worse, algorithms widely accepted as appropriate for simulation are inappropriate for visualization. In this paper, we describe a new algorithm that is specifically intended for reconstructing material interfaces for visualization and analysis requirements. The algorithm performs well with respect to memory footprint and execution time, has desirable properties in various accuracy metrics, and also produces smooth surfaces with few artifacts, even when faced with more than two materials per cell.
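The core subproblem behind such reconstructions can be sketched in miniature: inside a single unit cell with a known interface normal, place a line n·x = d so that the area on the material side matches the given volume fraction (a PLIC-style placement, solved here by brute-force sampling rather than a closed form; this is a generic illustration, not the paper's algorithm):

```python
def place_interface(normal, fraction, samples=100):
    """Find d so that the area with n.x <= d in the unit cell ~= fraction."""
    nx, ny = normal
    lo = min(0.0, nx) + min(0.0, ny)   # min of n.x over the unit cell
    hi = max(0.0, nx) + max(0.0, ny)   # max of n.x over the unit cell
    pts = [((i + 0.5) / samples, (j + 0.5) / samples)
           for i in range(samples) for j in range(samples)]
    best_d, best_err = lo, float("inf")
    for k in range(samples + 1):
        d = lo + (hi - lo) * k / samples
        area = sum(1 for (x, y) in pts if nx * x + ny * y <= d) / len(pts)
        if abs(area - fraction) < best_err:
            best_d, best_err = d, abs(area - fraction)
    return best_d

# normal (1, 0), fraction 0.25 -> vertical interface near x = 0.25
print(round(place_interface((1.0, 0.0), 0.25), 2))
```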

  5. Validation of Magnetic Reconstruction Codes for Real-Time Applications

    International Nuclear Information System (INIS)

    Mazon, D.; Murari, A.; Boulbe, C.; Faugeras, B.; Blum, J.; Svensson, J.; Quilichini, T.; Gelfusa, M.

    2010-01-01

The real-time reconstruction of the plasma magnetic equilibrium in a tokamak is a key point to access high-performance regimes. Indeed, the shape of the plasma current density profile is a direct output of the reconstruction and has a leading effect on reaching a steady-state high-performance regime of operation. The challenge is thus to develop real-time methods and algorithms that reconstruct the magnetic equilibrium from the perspective of using these outputs for feedback control purposes. In this paper the validation of the JET real-time equilibrium reconstruction codes, using both a Bayesian approach and a full equilibrium solver named Equinox, is detailed, the comparison being performed with the off-line equilibrium code EFIT (equilibrium fitting) and the real-time boundary reconstruction code XLOC (X-point local expansion). In this way a significant database, a methodology, and a strategy for the validation are presented. The validation of the results has been performed using a validated database of 130 JET discharges with a large variety of magnetic configurations. Internal measurements like polarimetry and motional Stark effect have also been used for the Equinox validation, including some magnetohydrodynamic signatures for the assessment of the reconstructed safety factor and current density profiles. (authors)

  6. Reconstruction and visualization of nanoparticle composites by transmission electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Wang, X.Y. [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Canada T6H 2M9 (Canada); Department of Physics, University of Alberta, Edmonton, Canada T6G 2G7 (Canada); Lockwood, R. [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Canada T6H 2M9 (Canada); Malac, M., E-mail: marek.malac@nrc-cnrc.gc.ca [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Canada T6H 2M9 (Canada); Department of Physics, University of Alberta, Edmonton, Canada T6G 2G7 (Canada); Furukawa, H. [SYSTEM IN FRONTIER INC., 2-8-3, Shinsuzuharu bldg. 4F, Akebono-cho, Tachikawa-shi, Tokyo 190-0012 (Japan); Li, P.; Meldrum, A. [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Canada T6H 2M9 (Canada)

    2012-02-15

This paper examines the limits of transmission electron tomography reconstruction methods for a nanocomposite object composed of many closely packed nanoparticles. Two commonly used reconstruction methods in TEM tomography were examined and compared, and the sources of various artefacts were explored. Common visualization methods were investigated, and the resulting 'interpretation artefacts' (i.e., deviations from 'actual' particle sizes and shapes arising from the visualization) were determined. Setting a known or estimated nanoparticle volume fraction as a criterion for thresholding does not in fact give a good visualization. Unexpected effects associated with common built-in image filtering methods were also found. Ultimately, this work set out to establish the common problems and pitfalls associated with electron beam tomographic reconstruction and visualization of samples consisting of closely spaced nanoparticles. -- Highlights: ► Electron tomography limits were explored by both experiment and simulation. ► Reliable quantitative volumetry using electron tomography is not presently feasible. ► Volume rendering appears to be a better choice for visualization of composite samples.
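The volume-fraction thresholding criterion that the paper evaluates (and finds wanting) is simple to state: choose the gray-level threshold so that the fraction of reconstructed voxels above it matches the known or estimated particle volume fraction. A quantile-based sketch, with toy data:

```python
def threshold_for_fraction(values, fraction):
    """Return a threshold t such that about `fraction` of values exceed t."""
    ranked = sorted(values)                       # ascending gray levels
    k = int(round(len(ranked) * (1.0 - fraction)))
    k = min(max(k, 0), len(ranked) - 1)
    return ranked[k]

vals = list(range(100))           # toy reconstruction gray levels 0..99
t = threshold_for_fraction(vals, 0.25)
print(t, sum(v > t for v in vals) / len(vals))   # -> 75 0.24
```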

  7. Visual reconstruction of Hampi Temple - Construed Graphically, Pictorially and Digitally

    Directory of Open Access Journals (Sweden)

    Meera Natampally

    2014-05-01

Full Text Available The existing temple complex in Hampi, Karnataka, India was extensively studied, analyzed and documented. The complex was measured, drawn and digitized by plotting its edges and vertices in AutoCAD to generate 2D drawings. The 2D graphic elements were then extended into three-dimensional objects using Google SketchUp. These tools facilitated a visual reconstruction of the temple's architecture in its original form. 3D virtual modelling / visual reconstruction helps us visualize the structure in its original form, giving a holistic picture of the Vijayanagara Empire in all its former glory. The project is interpreted graphically using AutoCAD drawings, and pictorially and digitally using the SketchUp model and Kinect.

  8. Reconstruction and visualization of planetary nebulae.

    Science.gov (United States)

    Magnor, Marcus; Kindlmann, Gordon; Hansen, Charles; Duric, Neb

    2005-01-01

    From our terrestrially confined viewpoint, the actual three-dimensional shape of distant astronomical objects is, in general, very challenging to determine. For one class of astronomical objects, however, spatial structure can be recovered from conventional 2D images alone. So-called planetary nebulae (PNe) exhibit pronounced symmetry characteristics that come about due to fundamental physical processes. Making use of this symmetry constraint, we present a technique to automatically recover the axisymmetric structure of many planetary nebulae from photographs. With GPU-based volume rendering driving a nonlinear optimization, we estimate the nebula's local emission density as a function of its radial and axial coordinates and we recover the orientation of the nebula relative to Earth. The optimization refines the nebula model and its orientation by minimizing the differences between the rendered image and the original astronomical image. The resulting model allows creating realistic 3D visualizations of these nebulae, for example, for planetarium shows and other educational purposes. In addition, the recovered spatial distribution of the emissive gas can help astrophysicists gain deeper insight into the formation processes of planetary nebulae.
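The forward model at the heart of this approach is easy to state: because the nebula is (assumed) axisymmetric and optically thin, a rendered pixel is a line-of-sight integral of an emission density given in radial and axial coordinates. A toy numeric version (the emission function and geometry are invented for illustration):

```python
import math

def project_pixel(emission, x, z, depth=5.0, steps=400):
    """Integrate emission(r, z) along the viewing direction y at image (x, z)."""
    total, dy = 0.0, 2 * depth / steps
    for k in range(steps):
        y = -depth + (k + 0.5) * dy
        r = math.sqrt(x * x + y * y)   # distance from the symmetry axis
        total += emission(r, z) * dy
    return total

# hypothetical emission: thin cylindrical shell at r = 1
shell = lambda r, z: 1.0 if abs(r - 1.0) < 0.05 else 0.0
print(round(project_pixel(shell, x=0.0, z=0.0), 3))  # sight line crosses the shell twice
print(round(project_pixel(shell, x=2.0, z=0.0), 3))  # misses the shell entirely
```

The paper's optimization wraps such a renderer in a loop that adjusts the emission model and orientation until rendered and observed images agree.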

  9. Nebula: reconstruction and visualization of scattering data in reciprocal space.

    Science.gov (United States)

    Reiten, Andreas; Chernyshov, Dmitry; Mathiesen, Ragnvald H

    2015-04-01

Two-dimensional solid-state X-ray detectors can now operate at data throughput rates that allow full three-dimensional sampling of scattering data from extended volumes of reciprocal space within second-to-minute timescales. For such experiments, simultaneous analysis and visualization allows for remeasurement and a more dynamic measurement strategy. A new software package, Nebula, is presented. It efficiently reconstructs X-ray scattering data, generates three-dimensional reciprocal space data sets that can be visualized interactively, and aims to enable real-time processing in high-throughput measurements by employing parallel computing on commodity hardware.
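The core reconstruction step such software performs is mapping each detector pixel to a scattering vector q = k_out − k_in on the Ewald sphere. A minimal sketch (the flat-detector geometry and names are illustrative, not Nebula's API):

```python
import math

def pixel_to_q(px, py, det_dist, wavelength):
    """Pixel offsets px, py and detector distance share one length unit."""
    k = 2.0 * math.pi / wavelength                 # |k| of the incident beam
    norm = math.sqrt(px**2 + py**2 + det_dist**2)
    # unit vector toward the pixel scaled by k = k_out
    kout = (k * px / norm, k * py / norm, k * det_dist / norm)
    kin = (0.0, 0.0, k)                            # beam along +z
    return tuple(o - i for o, i in zip(kout, kin))

q = pixel_to_q(px=10.0, py=0.0, det_dist=100.0, wavelength=1.0)
print(tuple(round(c, 3) for c in q))   # -> (0.625, 0.0, -0.031)
```

Accumulating these q-vectors (with intensities) over all frames of a rotation series is what produces the 3D reciprocal-space data set.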

  10. Predictive Validity And Usefulness Of Visual Scanning Task In Hiv ...

    African Journals Online (AJOL)

    The visual scanning task is a useful screening tool for brain damage in HIV/AIDS by inference from impairment of visual information processing and disturbances in perceptual mental strategies. There is progressive neuro-cognitive decline as the disease worsens. Keywords: brain, cognition, HIV/AIDS, predictive validity, ...

  11. Validation of Visual Caries Activity Assessment

    DEFF Research Database (Denmark)

    Guedes, R S; Piovesan, C; Ardenghi, T M

    2014-01-01

We evaluated the predictive and construct validity of a caries activity assessment system associated with the International Caries Detection and Assessment System (ICDAS) in primary teeth. A total of 469 children were reexamined: participants of a caries survey performed 2 yr before (follow-up rate of 73.4%). At baseline, children (12-59 mo old) were examined with the ICDAS and a caries activity assessment system. The predictive validity was assessed by evaluating the risk of active caries lesion progression to more severe conditions in the follow-up, compared with inactive lesions. We also assessed if children with a higher number of active caries lesions were more likely to develop new lesions (construct validity). Noncavitated active caries lesions at occlusal surfaces presented higher risk of progression than inactive ones. Children with a higher number of active lesions and with higher...

  12. Optimization and validation of accelerated golden-angle radial sparse MRI reconstruction with self-calibrating GRAPPA operator gridding.

    Science.gov (United States)

    Benkert, Thomas; Tian, Ye; Huang, Chenchan; DiBella, Edward V R; Chandarana, Hersh; Feng, Li

    2018-07-01

Golden-angle radial sparse parallel (GRASP) MRI reconstruction requires gridding and regridding to transform data between radial and Cartesian k-space. These operations are repeatedly performed in each iteration, which makes the reconstruction computationally demanding. This work aimed to accelerate GRASP reconstruction using self-calibrating GRAPPA operator gridding (GROG) and to validate its performance in clinical imaging. GROG is an alternative gridding approach based on parallel imaging, in which k-space data acquired on a non-Cartesian grid are shifted onto a Cartesian k-space grid using information from multicoil arrays. For iterative non-Cartesian image reconstruction, GROG is performed only once as a preprocessing step. Therefore, the subsequent iterative reconstruction can be performed directly in Cartesian space, which significantly reduces computational burden. Here, a framework combining GROG with GRASP (GROG-GRASP) is first optimized and then compared with standard GRASP reconstruction in 22 prostate patients. GROG-GRASP achieved approximately 4.2-fold reduction in reconstruction time compared with GRASP (∼333 min versus ∼78 min) while maintaining image quality (structural similarity index ≈ 0.97 and root mean square error ≈ 0.007). Visual image quality assessment by two experienced radiologists did not show significant differences between the two reconstruction schemes. With a graphics processing unit implementation, image reconstruction time can be further reduced to approximately 14 min. The GRASP reconstruction can be substantially accelerated using GROG. This framework is promising toward broader clinical application of GRASP and other iterative non-Cartesian reconstruction methods. Magn Reson Med 80:286-293, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  13. A uniform geometry description for simulation, reconstruction and visualization in the BESIII experiment

    Energy Technology Data Exchange (ETDEWEB)

    Liang Yutie [School of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871 (China)], E-mail: liangyt@hep.pku.edu.cn; Zhu Bo; You Zhengyun; Liu Kun; Ye Hongxue; Xu Guangming; Wang Siguang [School of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871 (China); Li Weidong; Liu Huaimin; Mao Zepu [Institute of High Energy Physics, CAS, Beijing 100049 (China); Mao Yajun [School of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871 (China)

    2009-05-21

    In the BESIII experiment, the simulation, reconstruction and visualization were designed to use the same geometry description in order to ensure the consistency of the geometry for different applications. Geometry Description Markup Language (GDML), an application-independent persistent format for describing the geometries of detectors, was chosen and met our requirement. The detector of BESIII was described with GDML and then used in Geant4-based simulation and ROOT-based reconstruction and visualization.
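The benefit of a single declarative geometry source is that every application parses the same file. A toy illustration: the GDML fragment below (hand-written for this example, not BESIII's real geometry) can be read with nothing more than a standard XML parser.

```python
import xml.etree.ElementTree as ET

# minimal hand-written GDML-style fragment describing one box solid
gdml = """
<gdml>
  <solids>
    <box name="chamber" x="100" y="50" z="200" lunit="mm"/>
  </solids>
</gdml>
"""

root = ET.fromstring(gdml)
box = root.find("./solids/box")
dims = {axis: float(box.get(axis)) for axis in ("x", "y", "z")}
print(box.get("name"), dims)   # -> chamber {'x': 100.0, 'y': 50.0, 'z': 200.0}
```

Geant4 and ROOT both ship native GDML readers, which is what lets simulation, reconstruction and visualization share one description.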

  16. Visualizing and Validating Metadata Traceability within the CDISC Standards.

    Science.gov (United States)

    Hume, Sam; Sarnikar, Surendra; Becnel, Lauren; Bennett, Dorine

    2017-01-01

The Food & Drug Administration has begun requiring that electronic submissions of regulated clinical studies utilize the Clinical Data Interchange Standards Consortium (CDISC) data standards. Within regulated clinical research, traceability is a requirement and indicates that the analysis results can be traced back to the original source data. Current solutions for clinical research data traceability are limited in terms of querying, validation and visualization capabilities. This paper describes (1) the development of metadata models to support computable traceability and traceability visualizations that are compatible with industry data standards for the regulated clinical research domain, (2) adaptation of graph traversal algorithms to make them capable of identifying traceability gaps and validating traceability across the clinical research data lifecycle, and (3) development of a traceability query capability for retrieval and visualization of traceability information.
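The graph-traversal idea can be sketched simply: model derivations as edges pointing back toward source data, then flag any element from which no source is reachable (a traceability gap). The element names below borrow CDISC vocabulary (ADaM, SDTM, CRF) but the structure is invented for illustration, not the paper's actual metadata model:

```python
from collections import deque

def traceability_gaps(edges, sources):
    """edges: dict mapping element -> list of upstream elements."""
    gaps = []
    for node in edges:
        seen, queue, reached = {node}, deque([node]), False
        while queue:
            cur = queue.popleft()
            if cur in sources:          # a source datum is reachable
                reached = True
                break
            for up in edges.get(cur, []):
                if up not in seen:
                    seen.add(up)
                    queue.append(up)
        if not reached:
            gaps.append(node)
    return gaps

edges = {
    "analysis_result": ["adam_var"],
    "adam_var": ["sdtm_var"],
    "sdtm_var": ["crf_field"],
    "orphan_var": [],                   # no upstream link: a gap
}
print(traceability_gaps(edges, sources={"crf_field"}))  # -> ['orphan_var']
```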

  17. Visual Impairment Screening Assessment (VISA) tool: pilot validation.

    Science.gov (United States)

    Rowe, Fiona J; Hepworth, Lauren R; Hanna, Kerry L; Howard, Claire

    2018-03-06

To report and evaluate a new Vision Impairment Screening Assessment (VISA) tool intended for use by the stroke team to improve identification of visual impairment in stroke survivors. Prospective case cohort comparative study. Stroke units at two secondary care hospitals and one tertiary centre. 116 stroke survivors were screened, 62 by naïve and 54 by non-naïve screeners. Both the VISA screening tool and the comprehensive specialist vision assessment measured case history, visual acuity, eye alignment, eye movements, visual field and visual inattention. Full completion of the VISA tool and specialist vision assessment was achieved for 89 stroke survivors. Missing data for one or more sections typically related to the patient's inability to complete the assessment. Sensitivity and specificity of the VISA screening tool were 90.24% and 85.29%, respectively; the positive and negative predictive values were 93.67% and 78.36%, respectively. Overall agreement was significant (κ = 0.736). Lowest agreement was found for screening of eye movement and visual inattention deficits. This early validation of the VISA screening tool shows promise in improving detection accuracy for clinicians involved in stroke care who are not specialists in vision problems and lack formal eye training, with potential to lead to more prompt referral with fewer false positives and negatives. Pilot validation indicates acceptability of the VISA tool for screening of visual impairment in stroke survivors. Sensitivity and specificity were high, indicating the potential accuracy of the VISA tool for screening purposes. Results of this study have guided the revision of the VISA screening tool ahead of full clinical validation. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
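The four screening metrics quoted above all come from a 2×2 confusion matrix. As a reference sketch (the counts here are made up for illustration, not the study's data):

```python
def screening_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # diseased cases correctly flagged
        "specificity": tn / (tn + fp),   # healthy cases correctly cleared
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

m = screening_metrics(tp=90, fp=15, fn=10, tn=85)
print({k: round(v, 4) for k, v in m.items()})
# -> {'sensitivity': 0.9, 'specificity': 0.85, 'ppv': 0.8571, 'npv': 0.8947}
```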

  18. A Visual Framework for Digital Reconstruction of Topographic Maps

    KAUST Repository

    Thabet, Ali Kassem

    2014-09-30

We present a framework for reconstructing Digital Elevation Maps (DEM) from scanned topographic maps. We first rectify the images to ensure that maps fit together without distortion. To segment iso-contours, we have developed a novel semi-automated method based on mean-shifts that requires only minimal user interaction. Contour labels are automatically read using an OCR module. To reconstruct the output DEM from scattered data, we generalize natural neighbor interpolation to handle the transfinite case (contours and points). To this end, we use parallel vector propagation to compute a discrete Voronoi diagram of the constraints, and a modified floodfill to compute virtual Voronoi tiles. Our framework is able to handle tens of thousands of contours and points and can generate DEMs comprising more than 100 million samples. We provide quantitative comparison to commercial software and show the benefits of our approach. We furthermore show the robustness of our method on a massive set of old maps predating satellite acquisition. Compared to other methods, our framework is able to accurately and efficiently generate a final DEM despite inconsistencies, sparse or missing contours even for highly complex and cluttered maps. Therefore, this method has broad applicability for digitization and reconstruction of the world's old topographic maps that are often the only record of past landscapes and cultural heritage before their destruction under modern development.
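The discrete Voronoi step described above can be sketched as breadth-first propagation from each constraint cell outward over the grid, so every cell ends up labeled by a nearby constraint. This 4-connected, sequential toy version only illustrates the idea; the paper's parallel vector propagation is more elaborate and more accurate near tile boundaries:

```python
from collections import deque

def discrete_voronoi(width, height, seeds):
    """seeds: dict mapping (x, y) -> label. Returns a label per grid cell."""
    labels = dict(seeds)
    queue = deque(seeds)
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in labels:
                labels[(nx, ny)] = labels[(x, y)]   # inherit nearest seed's label
                queue.append((nx, ny))
    return labels

labels = discrete_voronoi(5, 1, {(0, 0): "A", (4, 0): "B"})
print([labels[(x, 0)] for x in range(5)])   # -> ['A', 'A', 'A', 'B', 'B']
```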

  19. Implicit vessel surface reconstruction for visualization and CFD simulation

    International Nuclear Information System (INIS)

    Schumann, Christian; Peitgen, Heinz-Otto; Neugebauer, Mathias; Bade, Ragnar; Preim, Bernhard

    2008-01-01

Accurate and high-quality reconstructions of vascular structures are essential for vascular disease diagnosis and blood flow simulations. These applications necessitate a trade-off between accuracy and smoothness. An additional requirement for the volume grid generation for Computational Fluid Dynamics (CFD) simulations is a high triangle quality. We propose a method that produces an accurate reconstruction of the vessel surface with satisfactory surface quality. A point cloud representing the vascular boundary is generated based on a segmentation result. Thin vessels are subsampled to enable an accurate reconstruction. A signed distance field is generated using Multi-level Partition of Unity Implicits and subsequently polygonized using a surface tracking approach. To guarantee a high triangle quality, the surface is remeshed. Compared to other methods, our approach represents a good trade-off between accuracy and smoothness. For the tested data, the average surface deviation from the segmentation results is 0.19 voxel diagonals and the maximum equi-angle skewness values are below 0.75. The generated surfaces are considerably more accurate than those obtained using model-based approaches. Compared to other model-free approaches, the proposed method produces smoother results and thus better supports the perception and interpretation of the vascular topology. Moreover, the triangle quality of the generated surfaces is suitable for CFD simulations. (orig.)
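One ingredient above, the signed distance field, can be illustrated with a drastically simplified evaluator: take the nearest oriented boundary point and sign the distance by that point's outward normal. MPU implicits blend many local quadratic fits; this single-nearest-neighbor toy only shows what "signed distance from oriented points" means:

```python
import math

def signed_distance(x, points, normals):
    """points/normals: parallel lists of 3D tuples; normals point outward."""
    best, best_d2 = 0, float("inf")
    for i, p in enumerate(points):
        d2 = sum((a - b) ** 2 for a, b in zip(x, p))
        if d2 < best_d2:
            best, best_d2 = i, d2
    p, n = points[best], normals[best]
    # sign: positive outside (along the normal), negative inside
    s = sum((a - b) * c for a, b, c in zip(x, p, n))
    return math.copysign(math.sqrt(best_d2), s)

# one boundary sample on the +x axis with outward normal +x
pts = [(1.0, 0.0, 0.0)]
nrm = [(1.0, 0.0, 0.0)]
print(signed_distance((2.0, 0.0, 0.0), pts, nrm))   # -> 1.0  (outside)
print(signed_distance((0.5, 0.0, 0.0), pts, nrm))   # -> -0.5 (inside)
```

Polygonizing the zero level set of such a field (e.g. by surface tracking or marching cubes) then yields the vessel mesh.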

  20. A parallel implementation of a maximum entropy reconstruction algorithm for PET images in a visual language

    International Nuclear Information System (INIS)

    Bastiens, K.; Lemahieu, I.

    1994-01-01

The application of a maximum entropy reconstruction algorithm to PET images requires a lot of computing resources. A parallel implementation could seriously reduce the execution time. However, programming a parallel application is still a non-trivial task requiring specialized expertise. In this paper a programming environment based on a visual programming language is used for a parallel implementation of the reconstruction algorithm. This programming environment allows less experienced programmers to exploit the performance of multiprocessor systems. (authors)
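The per-iteration cost pattern that motivates parallelization is easy to see in a toy iterative reconstruction. The multiplicative MLEM update below is a stand-in (the paper uses a maximum entropy criterion, not MLEM), but the repeated forward projection and back projection per iteration, each a matrix-sized loop, is exactly the structure worth distributing across processors:

```python
def mlem(A, y, iters=50):
    """A: system matrix (list of rows), y: measured counts per detector."""
    m, n = len(A), len(A[0])
    x = [1.0] * n                                  # uniform start image
    col_sum = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(iters):
        # forward projection: expected counts for the current image
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(m)]
        # back projection of the measured/expected ratios
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        x = [x[j] * back[j] / col_sum[j] if col_sum[j] > 0 else 0.0
             for j in range(n)]
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy 3-detector, 2-pixel system
y = [2.0, 4.0, 6.0]                        # consistent data for image (2, 4)
rec = mlem(A, y)
print([round(v, 3) for v in rec])          # -> [2.0, 4.0]
```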

  2. Algorithm Validation of the Current Profile Reconstruction of EAST Based on Polarimeter/Interferometer

    International Nuclear Information System (INIS)

    Qian Jinping; Ren Qilong; Wan Baonian; Liu Haiqin; Zeng Long; Luo Zhengping; Chen Dalong; Shi Tonghui; Sun Youwen; Shen Biao; Xiao Bingjia; Lao, L. L.; Hanada, K.

    2015-01-01

The method of plasma current profile reconstruction using the polarimeter/interferometer (POINT) data from a simulated equilibrium is explored and validated. It is shown that the safety factor (q) profile can be generally reconstructed from the external magnetic and POINT data. The reconstructed q profile is found to agree reasonably with the initial equilibria. Comparisons of reconstructed q and density profiles using the magnetic data and the POINT data with 3%, 5% and 10% random errors are investigated. The result shows that the POINT data allow a reasonably accurate determination of the q profile. (fusion engineering)
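The noise study above can be mimicked in miniature: perturb synthetic measurements by 3%, 5% and 10% random errors and observe the error of a simple least-squares fit. This is purely illustrative (a straight-line fit, nothing like the POINT equilibrium inversion):

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

random.seed(0)
xs = [i / 10 for i in range(11)]
true_ys = [2.0 * x + 1.0 for x in xs]          # true slope is 2.0
errors = {}
for level in (0.03, 0.05, 0.10):
    noisy = [y * (1 + random.uniform(-level, level)) for y in true_ys]
    slope, _ = fit_line(xs, noisy)
    errors[level] = abs(slope - 2.0)
print(errors)
```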

  3. Validating a visual version of the metronome response task.

    Science.gov (United States)

    Laflamme, Patrick; Seli, Paul; Smilek, Daniel

    2018-02-12

    The metronome response task (MRT)-a sustained-attention task that requires participants to produce a response in synchrony with an audible metronome-was recently developed to index response variability in the context of studies on mind wandering. In the present studies, we report on the development and validation of a visual version of the MRT (the visual metronome response task; vMRT), which uses the rhythmic presentation of visual, rather than auditory, stimuli. Participants completed the vMRT (Studies 1 and 2) and the original (auditory-based) MRT (Study 2) while also responding to intermittent thought probes asking them to report the depth of their mind wandering. The results showed that (1) individual differences in response variability during the vMRT are highly reliable; (2) prior to thought probes, response variability increases with increasing depth of mind wandering; (3) response variability is highly consistent between the vMRT and the original MRT; and (4) both response variability and depth of mind wandering increase with increasing time on task. Our results indicate that the original MRT findings are consistent across the visual and auditory modalities, and that the response variability measured in both tasks indexes a non-modality-specific tendency toward behavioral variability. The vMRT will be useful in the place of the MRT in experimental contexts in which researchers' designs require a visual-based primary task.
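The MRT's central measure, response variability, is simply the variability of response-to-stimulus asynchronies around the metronome onsets, regardless of whether the rhythm is heard or seen. A minimal sketch with invented response times:

```python
import math

def response_variability(onsets, responses):
    """Standard deviation of response asynchronies around the rhythm onsets."""
    asyncs = [r - o for o, r in zip(onsets, responses)]
    mean = sum(asyncs) / len(asyncs)
    return math.sqrt(sum((a - mean) ** 2 for a in asyncs) / len(asyncs))

onsets = [1.3 * i for i in range(1, 6)]                  # a 1300 ms metronome
steady = [o + 0.05 for o in onsets]                      # consistent 50 ms lag
variable = [o + d for o, d in zip(onsets, (0.02, 0.09, -0.04, 0.11, 0.00))]
print(round(response_variability(onsets, steady), 4))    # near zero
print(round(response_variability(onsets, variable), 4))  # clearly larger
```

Note that a constant lag contributes nothing: only trial-to-trial inconsistency, the behavioral signature tied to mind wandering, raises the score.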

  4. Validation of an instrument to assess visual ability in children with visual impairment in China.

    Science.gov (United States)

    Huang, Jinhai; Khadka, Jyoti; Gao, Rongrong; Zhang, Sifang; Dong, Wenpeng; Bao, Fangjun; Chen, Haisi; Wang, Qinmei; Chen, Hao; Pesudovs, Konrad

    2017-04-01

To validate a visual ability instrument for school-aged children with visual impairment in China by translating, culturally adopting and Rasch scaling the Cardiff Visual Ability Questionnaire for Children (CVAQC). The 25-item CVAQC was translated into Mandarin using a standard protocol. The translated version (CVAQC-CN) was subjected to cognitive testing to ensure a proper cultural adaptation of its content. Then, the CVAQC-CN was interviewer-administered to 114 school-aged children and young people with visual impairment. Rasch analysis was carried out to assess its psychometric properties. The correlation between the CVAQC-CN visual ability scores and clinical measures of vision (visual acuity, VA, and contrast sensitivity, CS) was assessed using Spearman's r. Based on the cultural adaptation exercise, cognitive testing, missing data and Rasch metrics-based iterative item removal, three items were removed from the original 25. The 22-item CVAQC-CN demonstrated excellent measurement precision (person separation index, 3.08), content validity (item separation, 10.09) and item reliability (0.99). Moreover, the CVAQC-CN was unidimensional and had no item bias. The person-item map indicated good targeting of item difficulty to person ability. The CVAQC-CN scores showed moderate correlations with the clinical measures (CS, -0.53). The CVAQC-CN is thus a valid instrument to assess visual ability in children with visual impairment in China. The instrument can be used as a clinical and research outcome measure to assess the change in visual ability after low vision rehabilitation intervention. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  5. Visual image reconstruction from human brain activity: A modular decoding approach

    International Nuclear Information System (INIS)

    Miyawaki, Yoichi; Uchida, Hajime; Yamashita, Okito; Sato, Masa-aki; Kamitani, Yukiyasu; Morito, Yusuke; Tanabe, Hiroki C; Sadato, Norihiro

    2009-01-01

Brain activity represents our perceptual experience. But the potential for reading out perceptual contents from human brain activity has not been fully explored. In this study, we demonstrate constraint-free reconstruction of visual images perceived by a subject, from the brain activity pattern. We reconstructed visual images by combining local image bases with multiple scales, whose contrasts were independently decoded from fMRI activity by automatically selecting relevant voxels and exploiting their correlated patterns. Binary-contrast, 10 × 10-patch images (2^100 possible states) were accurately reconstructed without any image prior by measuring brain activity only for several hundred random images. The results suggest that our approach provides an effective means to read out complex perceptual states from brain activity while discovering information representation in multi-voxel patterns.
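The reconstruction rule described above is a weighted sum of local image bases at multiple scales, with one decoded contrast per basis. A toy version for a 4×4 image (the basis shapes and contrast values are invented for illustration, not the study's decoded outputs):

```python
def render(bases_with_contrasts, size=4):
    """Each entry: (x, y, w, h, contrast). Sum the overlapping patches."""
    img = [[0.0] * size for _ in range(size)]
    for x, y, w, h, c in bases_with_contrasts:
        for j in range(y, y + h):
            for i in range(x, x + w):
                img[j][i] += c
    return img

# one 1x1, one 2x1 and one 2x2 basis with hypothetical decoded contrasts
decoded = [(0, 0, 1, 1, 1.0), (2, 0, 2, 1, 0.5), (0, 2, 2, 2, 0.25)]
for row in render(decoded):
    print(row)
```

In the actual study, each basis contrast was predicted by its own decoder from the multi-voxel fMRI pattern; the rendering step is this simple superposition.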

  6. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    Science.gov (United States)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output, conversion of ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel values correspond to the altitude measured by LiDAR, visualization of 2D/3D images at various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of our GUI visualization system for LiDAR data are demonstrated.
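The key conversion described above, from (x, y, z) LiDAR returns to a raster whose pixel value is altitude, can be sketched as a simple binning step that keeps the highest return per cell (grid parameters and the tie-breaking rule are illustrative):

```python
def points_to_altitude_image(points, x0, y0, cell, width, height):
    """Rasterize (x, y, z) points; keep the highest z per cell, None if empty."""
    img = [[None] * width for _ in range(height)]
    for x, y, z in points:
        col = int((x - x0) / cell)
        row = int((y - y0) / cell)
        if 0 <= col < width and 0 <= row < height:
            if img[row][col] is None or z > img[row][col]:
                img[row][col] = z
    return img

pts = [(0.2, 0.2, 5.0), (0.8, 0.2, 7.5), (0.8, 0.2, 6.0), (1.5, 1.5, 9.0)]
img = points_to_altitude_image(pts, x0=0.0, y0=0.0, cell=1.0, width=2, height=2)
print(img)   # -> [[7.5, None], [None, 9.0]]
```

Building footprints and roof shapes for the 3D city model are then extracted from such altitude rasters.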

  7. DEVELOPING VISUAL PRESENTATION ATTITUDE RUBRIC: VALIDITY AND RELIABILITY STUDY

    OpenAIRE

    ATEŞ, Hatice KADIOĞLU; ADA, Sefer; BAYSAL, Z. Nurdan

    2015-01-01

    Abstract The aim of this study is to develop a visual presentation attitude rubric that is valid and reliable for 4th grade students. 218 students from Engin Can Güre, located in Esenler, Istanbul, took part in this study. While preparing this assessment tool with 34 criteria, the views of 6 university lecturers who are experts in their field were taken. The answer key sheet has 4 Likert-type options. The rubric has been first tested by Kaiser-Meyer-Olkin and Bartlett's tests an...

  8. Standard anatomical and visual space for the mouse retina: computational reconstruction and transformation of flattened retinae with the Retistruct package.

    Directory of Open Access Journals (Sweden)

    David C Sterratt

    Full Text Available The concept of topographic mapping is central to the understanding of the visual system at many levels, from the developmental to the computational. It is important to be able to relate different coordinate systems, e.g. maps of the visual field and maps of the retina. Retinal maps are frequently based on flat-mount preparations. These use dissection and relaxing cuts to render the quasi-spherical retina into a 2D preparation. The variable nature of relaxing cuts and associated tears limits quantitative cross-animal comparisons. We present an algorithm, "Retistruct," that reconstructs retinal flat-mounts by mapping them into a standard, spherical retinal space. This is achieved by: stitching the marked-up cuts of the flat-mount outline; dividing the stitched outline into a mesh whose vertices then are mapped onto a curtailed sphere; and finally moving the vertices so as to minimise a physically-inspired deformation energy function. Our validation studies indicate that the algorithm can estimate the position of a point on the intact adult retina to within 8° of arc (3.6% of the nasotemporal axis). The coordinates in reconstructed retinae can be transformed to visuotopic coordinates. Retistruct is used to investigate the organisation of the adult mouse visual system. We orient the retina relative to the nictitating membrane and compare this to eye muscle insertions. To align the retinotopic and visuotopic coordinate systems in the mouse, we utilised the geometry of binocular vision. In standard retinal space, the composite decussation line for the uncrossed retinal projection is located 64° away from the retinal pole. Projecting anatomically defined uncrossed retinal projections into visual space gives binocular congruence if the optical axis of the mouse eye is oriented at 64° azimuth and 22° elevation, in concordance with previous results. Moreover, using these coordinates, the dorsoventral boundary for S-opsin expressing cones closely matches

  9. The Convolutional Visual Network for Identification and Reconstruction of NOvA Events

    Energy Technology Data Exchange (ETDEWEB)

    Psihas, Fernanda [Indiana U.

    2017-11-22

    In 2016 the NOvA experiment released results for the observation of oscillations in the νμ and νe channels as well as νe cross section measurements using neutrinos from Fermilab’s NuMI beam. These and other measurements in progress rely on the accurate identification and reconstruction of the neutrino flavor and energy recorded by our detectors. This presentation describes the first application of convolutional neural network technology for event identification and reconstruction in particle detectors like NOvA. The Convolutional Visual Network (CVN) algorithm was developed for identification, categorization, and reconstruction of NOvA events. It increased the selection efficiency of the νe appearance signal by 40% and studies show potential impact to the νμ disappearance analysis.
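    CVN itself is a deep network trained on large simulated samples; as a purely illustrative sketch of the basic pipeline (convolve a detector hit map, pool, classify with a softmax over event classes), the following uses untrained random weights, and the kernel sizes and three-class split are assumptions, not NOvA's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(img, kernels):
    """Valid 2D convolution of a one-channel image with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    H, W = img.shape
    out = np.empty((len(kernels), H - kh + 1, W - kw + 1))
    for k, K in enumerate(kernels):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * K)
    return out

def classify(hit_map, kernels, W_out):
    feat = np.maximum(conv2d(hit_map, kernels), 0)  # conv + ReLU
    pooled = feat.mean(axis=(1, 2))                 # global average pool
    logits = W_out @ pooled
    p = np.exp(logits - logits.max())
    return p / p.sum()                              # softmax over event classes

# Toy detector "pixel map" of hits, untrained random weights
hit_map = rng.random((16, 16))
kernels = rng.normal(size=(4, 3, 3))
W_out = rng.normal(size=(3, 4))  # e.g. three hypothetical classes
probs = classify(hit_map, kernels, W_out)
print(probs.shape, round(probs.sum(), 6))  # (3,) 1.0
```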

  10. Images from the Mind: BCI image reconstruction based on Rapid Serial Visual Presentations of polygon primitives

    Directory of Open Access Journals (Sweden)

    Luís F Seoane

    2015-04-01

    Full Text Available We provide a proof of concept for an EEG-based reconstruction of a visual image which is on a user's mind. Our approach is based on the Rapid Serial Visual Presentation (RSVP) of polygon primitives and Brain-Computer Interface (BCI) technology. In an experimental setup, subjects were presented bursts of polygons: some of them contributed to building a target image (because they matched the shape and/or color of the target) while some of them did not. The presentation of the contributing polygons triggered attention-related EEG patterns. These Event Related Potentials (ERPs) could be determined using BCI classification and could be matched to the stimuli that elicited them. These stimuli (i.e. the ERP-correlated polygons) were accumulated in the display until a satisfactory reconstruction of the target image was reached. As more polygons were accumulated, finer visual details were attained, resulting in more challenging classification tasks. In our experiments, we observe an average classification accuracy of around 75%. An in-depth investigation suggests that many of the misclassifications were not misinterpretations of the BCI concerning the users' intent, but rather caused by ambiguous polygons that could contribute to reconstructing several different images. When we put our BCI image reconstruction in perspective with other RSVP BCI paradigms, there is large room for improvement both in speed and accuracy. These results invite us to be optimistic. They open a plethora of possibilities to explore non-invasive BCIs for image reconstruction both in healthy and impaired subjects and, accordingly, suggest interesting recreational and clinical applications.
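    The accumulation loop can be simulated in a few lines: each polygon's ERP classification is modeled as a noisy binary decision at roughly the reported 75% single-trial accuracy, and positively classified polygons are added to the canvas. All names and numbers here are illustrative, not the study's protocol:

```python
import random

random.seed(42)
ACCURACY = 0.75  # average single-trial classification accuracy reported

def simulate_rsvp(polygons, accuracy=ACCURACY):
    """Accumulate polygons whose (noisy) ERP classification marks them as attended.

    polygons: list of (polygon_id, contributes_to_target) pairs.
    """
    canvas = []
    for poly, is_target in polygons:
        classified_as_target = is_target if random.random() < accuracy else not is_target
        if classified_as_target:
            canvas.append(poly)  # add the ERP-correlated polygon to the display
    return canvas

burst = [(f"poly{i}", i % 4 == 0) for i in range(40)]  # 10 of 40 match the target
canvas = simulate_rsvp(burst)
hits = sum(1 for p in canvas if int(p[4:]) % 4 == 0)
print(len(canvas), hits)
```

    Misclassified non-target polygons end up on the canvas too, which is the ambiguity the record's error analysis discusses.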

  11. Experimental validation of incomplete data CT image reconstruction techniques

    International Nuclear Information System (INIS)

    Eberhard, J.W.; Hsiao, M.L.; Tam, K.C.

    1989-01-01

    X-ray CT inspection of large metal parts is often limited by x-ray penetration problems along many of the ray paths required for a complete CT data set. In addition, because of the complex geometry of many industrial parts, manipulation difficulties often prevent scanning over some range of angles. CT images reconstructed from these incomplete data sets contain a variety of artifacts which limit their usefulness in part quality determination. Over the past several years, the authors' company has developed 2 new methods of incorporating a priori information about the parts under inspection to significantly improve incomplete data CT image quality. This work reviews the methods which were developed and presents experimental results which confirm the effectiveness of the techniques. The new methods for dealing with incomplete CT data sets rely on a priori information from part blueprints (in electronic form), outer boundary information from touch sensors, estimates of part outer boundaries from available x-ray data, and linear x-ray attenuation coefficients of the part. The two methods make use of this information in different fashions. The relative performance of the two methods in detecting various flaw types is compared. Methods for accurately registering a priori information with x-ray data are also described. These results are critical to a new industrial x-ray inspection cell built for inspection of large aircraft engine parts
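    The two proprietary methods are described only at a high level; as a generic illustration of the principle (a priori support information regularizing an incomplete-data reconstruction), the following SIRT-style sketch reconstructs an object from only two projection views while clamping the solution to a known outer boundary:

```python
import numpy as np

N = 8
truth = np.zeros((N, N))
truth[2:6, 2:6] = 1.0
truth[3, 3] = 0.2          # a low-density "flaw" inside the part
support = truth > 0        # a priori outer boundary (blueprint / touch sensor)

# Incomplete data: only two orthogonal projection views are measurable
rows, cols = truth.sum(axis=1), truth.sum(axis=0)

x = np.zeros((N, N))
for _ in range(200):
    x += (rows - x.sum(axis=1))[:, None] / N   # project onto row-sum constraint
    x += (cols - x.sum(axis=0))[None, :] / N   # project onto column-sum constraint
    x[~support] = 0.0                          # enforce known support
    x = np.clip(x, 0, None)                    # attenuation cannot be negative

print(round(float(np.abs(x.sum() - truth.sum())), 3))
```

    Without the support clamp, two views are hopelessly underdetermined; with it, the alternating projections converge toward a solution consistent with all constraints.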

  12. Reconstruction of Eroded and Visually Complicated Archaeological Geometric Patterns: Minaret Choli, Iraq

    Directory of Open Access Journals (Sweden)

    Rima Al Ajlouni

    2011-12-01

    Full Text Available Visually complicated patterns can be found in many cultural heritages of the world. Islamic geometric patterns present us with one example of such visually complicated archaeological ornaments. As long-lived artifacts, these patterns have gone through many phases of construction, damage, and repair and are constantly subject to erosion and vandalism. The task of reconstructing these visually complicated ornaments faces many practical challenges. The main challenge is posed by the fact that archaeological reality often deals with ornaments that are broken, incomplete or hidden. Recognizing faint traces of eroded or missing parts proved to be an extremely difficult task. This is also combined with the need for specialized knowledge about the mathematical rules of the patterns’ structure in order to regenerate the missing data. This paper presents a methodology for reconstructing deteriorated Islamic geometric patterns: to predict the features that are not observed and output a complete, accurate, measurable two-dimensional reconstructed model. The simulation process depends primarily on finding the parameters necessary to predict information at other locations, based on the relationships embedded in the existing data and on prior knowledge of these relations. The aim is to build up, from the fragmented data and from historic and general knowledge, a model of the reconstructed object. The proposed methodology was proven to be successful in capturing the accurate structural geometry of many of the deteriorated ornaments on the Minaret Choli, Iraq. However, in the case of extremely deteriorated samples, the proposed methodology failed to recognize the correct geometry. The conceptual framework proposed by this paper can serve as a platform for developing professional tools for fast and efficient results.

  13. Automated retinofugal visual pathway reconstruction with multi-shell HARDI and FOD-based analysis.

    Science.gov (United States)

    Kammen, Alexandra; Law, Meng; Tjan, Bosco S; Toga, Arthur W; Shi, Yonggang

    2016-01-15

    Diffusion MRI tractography provides a non-invasive modality to examine the human retinofugal projection, which consists of the optic nerves, optic chiasm, optic tracts, the lateral geniculate nuclei (LGN) and the optic radiations. However, the pathway has several anatomic features that make it particularly challenging to study with tractography, including its location near blood vessels and bone-air interface at the base of the cerebrum, crossing fibers at the chiasm, somewhat-tortuous course around the temporal horn via Meyer's Loop, and multiple closely neighboring fiber bundles. To date, these unique complexities of the visual pathway have impeded the development of a robust and automated reconstruction method using tractography. To overcome these challenges, we develop a novel, fully automated system to reconstruct the retinofugal visual pathway from high-resolution diffusion imaging data. Using multi-shell, high angular resolution diffusion imaging (HARDI) data, we reconstruct precise fiber orientation distributions (FODs) with high order spherical harmonics (SPHARM) to resolve fiber crossings, which allows the tractography algorithm to successfully navigate the complicated anatomy surrounding the retinofugal pathway. We also develop automated algorithms for the identification of ROIs used for fiber bundle reconstruction. In particular, we develop a novel approach to extract the LGN region of interest (ROI) based on intrinsic shape analysis of a fiber bundle computed from a seed region at the optic chiasm to a target at the primary visual cortex. By combining automatically identified ROIs and FOD-based tractography, we obtain a fully automated system to compute the main components of the retinofugal pathway, including the optic tract and the optic radiation. We apply our method to the multi-shell HARDI data of 215 subjects from the Human Connectome Project (HCP). Through comparisons with post-mortem dissection measurements, we demonstrate the retinotopic

  14. Experimental results and validation of a method to reconstruct forces on the ITER test blanket modules

    International Nuclear Information System (INIS)

    Zeile, Christian; Maione, Ivan A.

    2015-01-01

    Highlights: • An in operation force measurement system for the ITER EU HCPB TBM has been developed. • The force reconstruction methods are based on strain measurements on the attachment system. • An experimental setup and a corresponding mock-up have been built. • A set of test cases representing ITER relevant excitations has been used for validation. • The influence of modeling errors on the force reconstruction has been investigated. - Abstract: In order to reconstruct forces on the test blanket modules in ITER, two force reconstruction methods, the augmented Kalman filter and a model predictive controller, have been selected and developed to estimate the forces based on strain measurements on the attachment system. A dedicated experimental setup with a corresponding mock-up has been designed and built to validate these methods. A set of test cases has been defined to represent possible excitation of the system. It has been shown that the errors in the estimated forces mainly depend on the accuracy of the identified model used by the algorithms. Furthermore, it has been found that a minimum of 10 strain gauges is necessary to allow for a low error in the reconstructed forces.
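    An augmented Kalman filter for force reconstruction treats the unknown force as part of the state (here under a random-walk model) and updates it from strain-gauge readings. A minimal one-force, two-gauge sketch follows; the gauge sensitivities and noise levels are invented for illustration and are not the TBM mock-up values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Measurement model: strain gauges respond linearly to the applied force.
# State is the force itself, augmented with a random-walk model (F_k+1 = F_k + w).
C = np.array([[1.2e-6], [0.8e-6]])   # strain per newton for 2 gauges (assumed)
Q = 1.0                              # process noise variance (N^2 per step)
R = (5e-8) ** 2 * np.eye(2)          # gauge noise covariance (strain^2)

F_true, n_steps = 250.0, 400
x, P = 0.0, 1e6                      # initial force estimate and variance
for _ in range(n_steps):
    strain = (C * F_true).ravel() + rng.normal(0, 5e-8, size=2)  # noisy gauges
    P = P + Q                                    # predict (random-walk force)
    S = C @ np.array([[P]]) @ C.T + R            # innovation covariance
    K = (P * C.T) @ np.linalg.inv(S)             # Kalman gain (1 x 2)
    x = x + (K @ (strain - (C * x).ravel()))[0]  # correct with the innovation
    P = ((1.0 - K @ C) * P)[0, 0]
print(round(x))  # estimate should settle near the true 250 N force
```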

  15. Reconstruction and validation of RefRec: a global model for the yeast molecular interaction network.

    Directory of Open Access Journals (Sweden)

    Tommi Aho

    2010-05-01

    Full Text Available Molecular interaction networks establish all cell biological processes. The networks are under intensive research that is facilitated by new high-throughput measurement techniques for the detection, quantification, and characterization of molecules and their physical interactions. For the common model organism yeast Saccharomyces cerevisiae, public databases store a significant part of the accumulated information and, on the way to better understanding of the cellular processes, there is a need to integrate this information into a consistent reconstruction of the molecular interaction network. This work presents and validates RefRec, the most comprehensive molecular interaction network reconstruction currently available for yeast. The reconstruction integrates protein synthesis pathways, a metabolic network, and a protein-protein interaction network from major biological databases. The core of the reconstruction is based on a reference object approach in which genes, transcripts, and proteins are identified using their primary sequences. This enables their unambiguous identification and non-redundant integration. The obtained total number of different molecular species and their connecting interactions is approximately 67,000. In order to demonstrate the capacity of RefRec for functional predictions, it was used for simulating the gene knockout damage propagation in the molecular interaction network in approximately 590,000 experimentally validated mutant strains. Based on the simulation results, a statistical classifier was subsequently able to correctly predict the viability of most of the strains. The results also showed that the usage of different types of molecular species in the reconstruction is important for accurate phenotype prediction. In general, the findings demonstrate the benefits of global reconstructions of molecular interaction networks. With all the molecular species and their physical interactions explicitly modeled, our
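    Knockout damage propagation of the kind simulated here can be sketched on a toy network. This uses deliberately simplified "a node survives while any producer survives" semantics and invented node names, not RefRec's actual reference-object model:

```python
# Toy molecular interaction network: gene -> transcript -> protein -> function
edges = {
    "geneA": ["mrnaA"], "mrnaA": ["protA"], "protA": ["complexAB"],
    "geneB": ["mrnaB"], "mrnaB": ["protB"], "protB": ["complexAB"],
    "complexAB": ["growth"],
    "geneC": ["mrnaC"], "mrnaC": ["protC"], "protC": ["growth"],
}

# Reverse map: which species can produce each node
producers = {}
for src, targets in edges.items():
    for t in targets:
        producers.setdefault(t, []).append(src)

def knockout_damage(removed_gene):
    """A node is lost only once every producer able to make it is lost."""
    damaged = {removed_gene}
    changed = True
    while changed:
        changed = False
        for node, prods in producers.items():
            if node not in damaged and all(p in damaged for p in prods):
                damaged.add(node)
                changed = True
    return damaged

# geneA knockout takes out its transcript and protein; downstream nodes
# survive here because alternative producers remain
lost = knockout_damage("geneA")
print(sorted(lost))  # ['geneA', 'mrnaA', 'protA']
```

    A viability classifier of the kind the record describes would then use features of the damaged set (size, whether essential functions are lost) to predict strain phenotype.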

  16. Development of a system for acquiring, reconstructing, and visualizing three-dimensional ultrasonic angiograms

    Science.gov (United States)

    Edwards, Warren S.; Ritchie, Cameron J.; Kim, Yongmin; Mack, Laurence A.

    1995-04-01

    We have developed a three-dimensional (3D) imaging system using power Doppler (PD) ultrasound (US). This system can be used for visualizing and analyzing the vascular anatomy of parenchymal organs. To create the 3D PD images, we acquired a series of two-dimensional PD images from a commercial US scanner and recorded the position and orientation of each image using a 3D magnetic position sensor. Three-dimensional volumes were reconstructed using specially designed software and then volume rendered for display. We assessed the feasibility and geometric accuracy of our system with various flow phantoms. The system was then tested on a volunteer by scanning a transplanted kidney. The reconstructed volumes of the flow phantom contained less than 1 mm of geometric distortion and the 3D images of the transplanted kidney depicted the segmental, arcuate, and interlobar vessels.
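    Placing each tracked 2D frame into the 3D volume amounts to a rigid transform of in-plane pixel coordinates by the pose reported by the position sensor. A minimal sketch, with an invented probe pose:

```python
import numpy as np

def pixels_to_world(pixels_mm, R, t):
    """Map 2D image-plane points (x, y in mm) into 3D using the tracked probe pose.

    R (3x3 rotation) and t (3-vector) come from the magnetic position sensor.
    """
    pts = np.column_stack([pixels_mm, np.zeros(len(pixels_mm))])  # z = 0 in plane
    return pts @ R.T + t

# Assumed pose: probe rotated 90 deg about x, translated 10 mm along z
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
t = np.array([0.0, 0.0, 10.0])
world = pixels_to_world(np.array([[2.0, 3.0]]), R, t)
print(np.round(world, 6))  # [[ 2.  0. 13.]]
```

    Repeating this for every frame and resampling the scattered points onto a regular grid is the reconstruction step the record describes.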

  17. Visualized Evaluation of Blood Flow to the Gastric Conduit and Complications in Esophageal Reconstruction.

    Science.gov (United States)

    Noma, Kazuhiro; Shirakawa, Yasuhiro; Kanaya, Nobuhiko; Okada, Tsuyoshi; Maeda, Naoaki; Ninomiya, Takayuki; Tanabe, Shunsuke; Sakurama, Kazufumi; Fujiwara, Toshiyoshi

    2018-03-01

    Evaluation of the blood supply to gastric conduits is critically important to avoid complications after esophagectomy. We began visual evaluation of blood flow using indocyanine green (ICG) fluorescent imaging in July 2015, to reduce reconstructive complications. In this study, we aimed to statistically verify the efficacy of blood flow evaluation using our simplified ICG method. A total of 285 consecutive patients who underwent esophagectomy and gastric conduit reconstruction were reviewed and divided into 2 groups: before and after introduction of ICG evaluation. The entire cohort and 68 patient pairs after propensity score matching (PS-M) were evaluated for clinical outcomes and the effect of visualized evaluation on reducing the risk of complication. The leakage rate in the ICG group was significantly lower than in the non-ICG group for each severity grade, both in the entire cohort (285 subjects) and after PS-M; the rates of other major complications, including recurrent laryngeal nerve palsy and pneumonia, were not different. The duration of postoperative ICU stay was approximately 1 day shorter in the ICG group than in the non-ICG group in the entire cohort, and approximately 2 days shorter after PS-M. Visualized evaluation of blood flow with ICG methods significantly reduced the rate of anastomotic complications of all Clavien-Dindo (CD) grades. Odds ratios for ICG evaluation decreased with CD grade (0.3419 for CD ≥ 1; 0.241 for CD ≥ 2; and 0.2153 for CD ≥ 3). Objective evaluation of blood supply to the reconstructed conduit using ICG fluorescent imaging reduces the risk and degree of anastomotic complication. Copyright © 2017 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  18. Optimizing the reconstruction filter in cone-beam CT to improve periodontal ligament space visualization: An in vitro study

    Energy Technology Data Exchange (ETDEWEB)

    Houno, Yuuki; Kodera, Yoshie [Graduate School of Medicine, Nagoya University, Nagoya (Japan); Hishikawa, Toshimitsu; Naitoh, Munetaka; Mitani, Akio; Noguchi, Toshihide; Ariji, Eiichiro [Aichi Gakuin University, Nisshin (Japan); Gotoh, Kenichi [Div. of Radiology, Dental Hospital, Aichi Gakuin University, Nisshin (Japan)

    2017-09-15

    Evaluation of alveolar bone is important in the diagnosis of dental diseases. The periodontal ligament space is difficult to clearly depict in cone-beam computed tomography images because the reconstruction filter conditions during image processing cause image blurring, resulting in decreased spatial resolution. We examined different reconstruction filters to assess their ability to improve spatial resolution and allow for a clearer visualization of the periodontal ligament space. Cone-beam computed tomography projections of 2 skull phantoms were reconstructed using 6 reconstruction conditions and then compared using the Thurstone paired comparison method. Physical evaluations, including the modulation transfer function and the Wiener spectrum, as well as an assessment of space visibility, were undertaken using experimental phantoms. Image reconstruction using a modified Shepp-Logan filter resulted in better sensory, physical, and quantitative evaluations. The reconstruction conditions substantially improved the spatial resolution and visualization of the periodontal ligament space. The difference in sensitivity was obtained by altering the reconstruction filter. Modifying the characteristics of a reconstruction filter can generate significant improvement in assessments of the periodontal ligament space. A high-frequency enhancement filter improves the visualization of thin structures and will be useful when accurate assessment of the periodontal ligament space is necessary.
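    The trade-off the study exploits (reconstruction-filter frequency response vs. spatial resolution and noise) can be illustrated by comparing an ideal ramp filter, a Shepp-Logan-windowed ramp, and a toy high-frequency-enhanced ramp. The boost factor is arbitrary; this is not the authors' modified filter:

```python
import numpy as np

f = np.linspace(-0.5, 0.5, 257)          # spatial frequency (cycles/sample)
ram_lak = np.abs(f)                      # ideal ramp filter
shepp = np.abs(f) * np.sinc(f)           # ramp x sinc window: smooths high freq
boosted = np.abs(f) * (1 + 0.5 * (np.abs(f) / 0.5) ** 2)  # HF enhancement (toy)

# The Shepp-Logan window attenuates the highest frequencies relative to the
# ramp (less noise, more blur), while a high-frequency-enhancing filter
# amplifies them (sharper periodontal ligament space, more noise).
i = np.argmax(np.abs(f))                 # index of a Nyquist-frequency sample
print(ram_lak[i] > shepp[i], boosted[i] > ram_lak[i])  # True True
```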

  19. Optimizing the reconstruction filter in cone-beam CT to improve periodontal ligament space visualization: An in vitro study

    International Nuclear Information System (INIS)

    Houno, Yuuki; Kodera, Yoshie; Hishikawa, Toshimitsu; Naitoh, Munetaka; Mitani, Akio; Noguchi, Toshihide; Ariji, Eiichiro; Gotoh, Kenichi

    2017-01-01

    Evaluation of alveolar bone is important in the diagnosis of dental diseases. The periodontal ligament space is difficult to clearly depict in cone-beam computed tomography images because the reconstruction filter conditions during image processing cause image blurring, resulting in decreased spatial resolution. We examined different reconstruction filters to assess their ability to improve spatial resolution and allow for a clearer visualization of the periodontal ligament space. Cone-beam computed tomography projections of 2 skull phantoms were reconstructed using 6 reconstruction conditions and then compared using the Thurstone paired comparison method. Physical evaluations, including the modulation transfer function and the Wiener spectrum, as well as an assessment of space visibility, were undertaken using experimental phantoms. Image reconstruction using a modified Shepp-Logan filter resulted in better sensory, physical, and quantitative evaluations. The reconstruction conditions substantially improved the spatial resolution and visualization of the periodontal ligament space. The difference in sensitivity was obtained by altering the reconstruction filter. Modifying the characteristics of a reconstruction filter can generate significant improvement in assessments of the periodontal ligament space. A high-frequency enhancement filter improves the visualization of thin structures and will be useful when accurate assessment of the periodontal ligament space is necessary

  20. Validity of proxy data obtained by different psychological autopsy information reconstruction techniques.

    Science.gov (United States)

    Fang, L; Zhang, J

    2010-01-01

    Two informants were interviewed for each of 416 living controls (individuals sampled from the normal population) interviewed in a Chinese case-control psychological autopsy study. The validity of proxy data, obtained using seven psychological autopsy information reconstruction techniques (types 1, 2, and A–E), was evaluated, with living controls' self reports used as the gold standard. Proxy data for reconstruction technique types 1, 2, and D on the Impulsivity Inventory Scale (total impulsivity score) were no different from the living controls' self-report gold standard, whereas data for types A and E were smaller than data from living controls. On the 'acceptance or resignation' sub-scale of the avoidance coping dimension of the Moos Coping Response Inventory, information obtained by reconstruction technique types 1 and D was not significantly different from the living controls' self reports, whereas proxy data from types 2, A, and E were smaller than those from the living controls. No statistically significant differences were identified for other proxy data obtained by reconstruction technique types 1, 2, A, D, and E. These results indicate that using a second informant does not significantly enhance information reconstruction for the target.

  1. Visually Induced Dizziness in Children and Validation of the Pediatric Visually Induced Dizziness Questionnaire

    Directory of Open Access Journals (Sweden)

    Marousa Pavlou

    2017-12-01

    Full Text Available Aims: To develop and validate the Pediatric Visually Induced Dizziness Questionnaire (PVID) and quantify the presence and severity of visually induced dizziness (ViD), i.e., symptoms induced by visual motion stimuli including crowds and scrolling computer screens, in children. Methods: 169 healthy children (female n = 89; recruited from mainstream schools, London, UK) and 114 children with a primary migraine, concussion, or vestibular disorder diagnosis (female n = 62), aged 6–17 years, were included. Children with primary migraine were recruited from mainstream schools, while children with concussion or vestibular disorder were recruited from tertiary balance centers in London, UK, and Pittsburgh, PA, USA. Children completed the PVID, which assesses the frequency of dizziness and unsteadiness experienced in specific environmental situations, and the Strengths and Difficulties Questionnaire (SDQ), a brief behavioral screening instrument. Results: The PVID showed high internal consistency (11 items; α = 0.90). A significant between-group difference was noted, with higher (i.e., worse) PVID scores for patients vs. healthy participants (U = 2,436.5, z = −10.719, p < 0.001); a significant difference was noted between individual patient groups [χ2(2) = 11.014, p = 0.004], but post hoc analysis showed no significant pairwise comparisons. The optimal cut-off score for discriminating between individuals with and without abnormal ViD levels was 0.45 out of 3 (sensitivity 83%, specificity 75%). Self-rated emotional (U = 2,730.0, z = −6.169) and hyperactivity (U = 3,445.0, z = −4.506) SDQ subscale scores as well as informant (U = 188.5, z = −3.916) and self-rated (U = 3,178.5, z = −5.083) total scores were significantly worse for patients compared to healthy participants (p < 0.001). Conclusion: ViD is common in children with a primary concussion, migraine, or vestibular diagnosis. The PVID is a valid measure for
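    The reported cut-off performance corresponds to a simple decision rule. With hypothetical score lists (the study's individual scores are not published here), sensitivity and specificity at the 0.45 cutoff are computed like this:

```python
def sens_spec(scores_patients, scores_controls, cutoff):
    """Sensitivity/specificity of a 'score >= cutoff is abnormal' decision rule."""
    tp = sum(s >= cutoff for s in scores_patients)
    tn = sum(s < cutoff for s in scores_controls)
    return tp / len(scores_patients), tn / len(scores_controls)

# Hypothetical PVID scores (scale 0-3); the record's optimal cutoff was 0.45
patients = [0.2, 0.5, 0.9, 1.4, 2.1, 0.6]
controls = [0.0, 0.1, 0.3, 0.5, 0.2, 0.4]
se, sp = sens_spec(patients, controls, cutoff=0.45)
print(round(se, 2), round(sp, 2))  # 0.83 0.83
```

    Sweeping the cutoff over all observed scores and picking the best sensitivity/specificity pair is the standard way such an optimal threshold is chosen.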

  2. Reconstructions of information in visual spatial working memory degrade with memory load.

    Science.gov (United States)

    Sprague, Thomas C; Ester, Edward F; Serences, John T

    2014-09-22

    Working memory (WM) enables the maintenance and manipulation of information relevant to behavioral goals. Variability in WM ability is strongly correlated with IQ [1], and WM function is impaired in many neurological and psychiatric disorders [2, 3], suggesting that this system is a core component of higher cognition. WM storage is thought to be mediated by patterns of activity in neural populations selective for specific properties (e.g., color, orientation, location, and motion direction) of memoranda [4-13]. Accordingly, many models propose that differences in the amplitude of these population responses should be related to differences in memory performance [14, 15]. Here, we used functional magnetic resonance imaging and an image reconstruction technique based on a spatial encoding model [16] to visualize and quantify population-level memory representations supported by multivoxel patterns of activation within regions of occipital, parietal and frontal cortex while participants precisely remembered the location(s) of zero, one, or two small stimuli. We successfully reconstructed images containing representations of the remembered-but not forgotten-locations within regions of occipital, parietal, and frontal cortex using delay-period activation patterns. Critically, the amplitude of representations of remembered locations and behavioral performance both decreased with increasing memory load. These results suggest that differences in visual WM performance between memory load conditions are mediated by changes in the fidelity of large-scale population response profiles distributed across multiple areas of human cortex. Copyright © 2014 Elsevier Ltd. All rights reserved.
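    The spatial encoding-model reconstruction has a compact linear-algebra core: fit channel-to-voxel weights on training trials, then invert them to recover a population response profile from a delay-period activation pattern. Everything below (basis shape, noise levels, dimensions) is simulated for illustration, not HCP or fMRI data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_chan, n_train = 60, 8, 200
centers = np.linspace(0, 2 * np.pi, n_chan, endpoint=False)

def channels(theta):
    """Raised-cosine spatial channel responses to a location theta (assumed basis)."""
    return np.cos((theta - centers) / 2) ** 6

# Simulated training data: B = C @ W + noise (trials x voxels)
theta_train = rng.uniform(0, 2 * np.pi, n_train)
C = np.vstack([channels(t) for t in theta_train])
W = rng.normal(size=(n_chan, n_vox))
B = C @ W + rng.normal(0, 0.2, (n_train, n_vox))

# 1) Fit channel -> voxel weights by least squares
W_hat = np.linalg.lstsq(C, B, rcond=None)[0]

# 2) Invert the model: recover channel responses for a held-out delay pattern
theta_test = 1.0
b = channels(theta_test) @ W + rng.normal(0, 0.2, n_vox)
c_hat = np.linalg.lstsq(W_hat.T, b, rcond=None)[0]
print(np.argmax(c_hat))  # peak channel should lie near theta_test
```

    In the record's analysis, the amplitude of this recovered profile (not just its peak) is what decreases with memory load.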

  3. GPU-accelerated brain connectivity reconstruction and visualization in large-scale electron micrographs

    KAUST Repository

    Jeong, Wonki

    2011-01-01

    This chapter introduces a GPU-accelerated interactive, semiautomatic axon segmentation and visualization system. Two challenging problems have been addressed: the interactive 3D axon segmentation and the interactive 3D image filtering and rendering of implicit surfaces. The reconstruction of neural connections to understand the function of the brain is an emerging and active research area in neuroscience. With the advent of high-resolution scanning technologies, such as 3D light microscopy and electron microscopy (EM), reconstruction of complex 3D neural circuits from large volumes of neural tissues has become feasible. Among them, only EM data can provide sufficient resolution to identify synapses and to resolve extremely narrow neural processes. These high-resolution, large-scale datasets pose challenging problems, for example, how to process and manipulate large datasets to extract scientifically meaningful information using a compact representation in a reasonable processing time. The running time of the multiphase level set segmentation method has been measured on the CPU and GPU. The CPU version is implemented using the ITK image class and the ITK distance transform filter. The numerical part of the CPU implementation is similar to the GPU implementation for fair comparison. The main focus of this chapter is introducing the GPU algorithms and their implementation details, which are the core components of the interactive segmentation and visualization system. © 2011 NVIDIA Corporation and Wen-mei W. Hwu. Published by Elsevier Inc. All rights reserved.

  4. 3D Reconstruction in the Presence of Glass and Mirrors by Acoustic and Visual Fusion.

    Science.gov (United States)

    Zhang, Yu; Ye, Mao; Manocha, Dinesh; Yang, Ruigang

    2017-07-06

    We present a practical and inexpensive method to reconstruct 3D scenes that include transparent and mirror objects. Our work is motivated by the need for automatically generating 3D models of interior scenes, which commonly include glass. These large structures are often invisible to cameras. Existing 3D reconstruction methods for transparent objects are usually not applicable in such a room-sized reconstruction setting. Our simple hardware setup augments a regular depth camera with a single ultrasonic sensor, which is able to measure the distance to any object, including transparent surfaces. The key technical challenge is the sparse sampling rate from the acoustic sensor, which only takes one point measurement per frame. To address this challenge, we take advantage of the fact that the large scale glass structures in indoor environments are usually either piece-wise planar or simple parametric surfaces. Based on these assumptions, we have developed a novel sensor fusion algorithm that first segments the (hybrid) depth map into different categories such as opaque/transparent/infinity (e.g., too far to measure) and then updates the depth map based on the segmentation outcome. We validated our algorithms with a number of challenging cases, including multiple panes of glass, mirrors, and even a curved glass cabinet.
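    The fusion idea (the depth camera labels pixels, and the single sonar return per frame reclassifies invalid regions as glass and fills them under a planar assumption) can be sketched as follows. The region filling here is deliberately simplified to "all invalid pixels" rather than a true connected-component segmentation and plane fit:

```python
import numpy as np

INF_DEPTH = 0.0  # depth cameras often report 0/invalid where IR passes through glass

def fuse(depth_map, sonar_point, sonar_range):
    """Label each pixel and fill 'invalid' regions with a planar glass surface.

    sonar_point: (row, col) the single ultrasonic measurement falls on this frame.
    sonar_range: distance it returned (glass reflects sound, not IR).
    """
    labels = np.full(depth_map.shape, "infinity", dtype=object)
    labels[depth_map > INF_DEPTH] = "opaque"
    fused = depth_map.astype(float).copy()
    if labels[sonar_point] == "infinity":
        # Piece-wise planar assumption: propagate the sonar depth to the
        # invalid region (here simplified to all invalid pixels)
        labels[depth_map == INF_DEPTH] = "transparent"
        fused[depth_map == INF_DEPTH] = sonar_range
    return labels, fused

depth = np.array([[1.2, 0.0, 0.0],
                  [1.3, 0.0, 0.0],
                  [1.1, 1.2, 1.4]])
labels, fused = fuse(depth, (0, 2), 2.5)
print(labels[0, 2], fused[0, 1])  # transparent 2.5
```

    Accumulating one sonar sample per frame while the camera moves is what lets the real system overcome the sensor's sparse sampling rate.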

  5. GPS tomography: validation of reconstructed 3-D humidity fields with radiosonde profiles

    Directory of Open Access Journals (Sweden)

    M. Shangguan

    2013-09-01

    Full Text Available Water vapor plays an important role in meteorological applications; GeoForschungsZentrum (GFZ therefore developed a tomographic system to derive 3-D distributions of the tropospheric water vapor above Germany using GPS data from about 300 ground stations. Input data for the tomographic reconstructions are generated by the Earth Parameter and Orbit determination System (EPOS software of the GFZ, which provides zenith total delay (ZTD, integrated water vapor (IWV and slant total delay (STD data operationally with a temporal resolution of 2.5 min (STD and 15 min (ZTD, IWV. The water vapor distribution in the atmosphere is derived by tomographic reconstruction techniques. The quality of the solution is dependent on many factors such as the spatial coverage of the atmosphere with slant paths, the spatial distribution of their intersections and the accuracy of the input observations. Independent observations are required to validate the tomographic reconstructions and to get precise information on the accuracy of the derived 3-D water vapor fields. To determine the quality of the GPS tomography, more than 8000 vertical water vapor profiles at 13 German radiosonde stations were used for the comparison. The radiosondes were launched twice a day (at 00:00 UTC and 12:00 UTC in 2007. In this paper, parameters of the entire profiles such as the wet refractivity, and the zenith wet delay have been compared. Before the validation the temporal and spatial distribution of the slant paths, serving as a basis for tomographic reconstruction, as well as their angular distribution were studied. The mean wet refractivity differences between tomography and radiosonde data for all points vary from −1.3 to 0.3, and the root mean square is within the range of 6.5–9. About 32% of 6803 profiles match well, 23% match badly and 45% are difficult to classify as they match only in parts.
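    At its core, the validation compares voxel refractivities from a regularized tomographic inversion against reference profiles via the mean difference and root mean square. The sketch below uses a random geometry matrix as a stand-in for real slant paths and radiosonde data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_slants = 12, 60

# Reference wet refractivity per voxel (playing the role of radiosonde truth)
N_true = rng.uniform(10, 60, n_vox)

# Geometry matrix: path length of slant i through voxel j (stand-in values)
A = rng.uniform(0, 1, (n_slants, n_vox))
d = A @ N_true + rng.normal(0, 0.5, n_slants)   # noisy slant wet delays

# Regularized least-squares tomographic inversion
lam = 1e-2
N_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_vox), A.T @ d)

# Validation statistics, as in the record: mean difference and RMS
diff = N_hat - N_true
print(round(float(diff.mean()), 2), round(float(np.sqrt((diff ** 2).mean())), 2))
```

    A poor slant-path geometry shows up directly as an ill-conditioned A.T @ A, which is why the record studies the spatial and angular distribution of the slants before validating.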

  6. GPS tomography. Validation of reconstructed 3-D humidity fields with radiosonde profiles

    Energy Technology Data Exchange (ETDEWEB)

    Shangguan, M.; Bender, M.; Ramatschi, M.; Dick, G.; Wickert, J. [Helmholtz Centre Potsdam, German Research Centre for Geosciences (GFZ), Potsdam (Germany); Raabe, A. [Leipzig Institute for Meteorology (LIM), Leipzig (Germany); Galas, R. [Technische Univ. Berlin (Germany). Dept. for Geodesy and Geoinformation Sciences

    2013-11-01

    Water vapor plays an important role in meteorological applications; GeoForschungsZentrum (GFZ) therefore developed a tomographic system to derive 3-D distributions of the tropospheric water vapor above Germany using GPS data from about 300 ground stations. Input data for the tomographic reconstructions are generated by the Earth Parameter and Orbit determination System (EPOS) software of the GFZ, which provides zenith total delay (ZTD), integrated water vapor (IWV) and slant total delay (STD) data operationally with a temporal resolution of 2.5 min (STD) and 15 min (ZTD, IWV). The water vapor distribution in the atmosphere is derived by tomographic reconstruction techniques. The quality of the solution is dependent on many factors such as the spatial coverage of the atmosphere with slant paths, the spatial distribution of their intersections and the accuracy of the input observations. Independent observations are required to validate the tomographic reconstructions and to get precise information on the accuracy of the derived 3-D water vapor fields. To determine the quality of the GPS tomography, more than 8000 vertical water vapor profiles at 13 German radiosonde stations were used for the comparison. The radiosondes were launched twice a day (at 00:00 UTC and 12:00 UTC) in 2007. In this paper, parameters of the entire profiles such as the wet refractivity, and the zenith wet delay have been compared. Before the validation the temporal and spatial distribution of the slant paths, serving as a basis for tomographic reconstruction, as well as their angular distribution were studied. The mean wet refractivity differences between tomography and radiosonde data for all points vary from -1.3 to 0.3, and the root mean square is within the range of 6.5-9. About 32% of 6803 profiles match well, 23% match badly and 45% are difficult to classify as they match only in parts.

  7. Comprehensive Reconstruction and Visualization of Non-Coding Regulatory Networks in Human

    Science.gov (United States)

    Bonnici, Vincenzo; Russo, Francesco; Bombieri, Nicola; Pulvirenti, Alfredo; Giugno, Rosalba

    2014-01-01

Research attention has increasingly turned to understanding the functional roles of non-coding RNAs (ncRNAs). Many studies have demonstrated their deregulation in cancer and other human disorders. ncRNAs are also present in extracellular human body fluids such as serum and plasma, giving them great potential as non-invasive biomarkers. However, non-coding RNAs have been discovered relatively recently and a comprehensive database covering all of them is still missing. Reconstructing and visualizing the network of ncRNA interactions are important steps towards understanding their regulatory mechanisms in complex systems. This work presents ncRNA-DB, a NoSQL database that integrates ncRNA interaction data from a large number of well-established on-line repositories. The interactions involve RNA, DNA, proteins, and diseases. ncRNA-DB is available at http://ncrnadb.scienze.univr.it/ncrnadb/. It is equipped with three interfaces: web based, command-line, and a Cytoscape app called ncINetView. By accessing only one resource, users can search for ncRNAs and their interactions, build a network annotated with all known ncRNAs and associated diseases, and use all visual and mining features available in Cytoscape. PMID:25540777

  8. MEG/EEG source reconstruction, statistical evaluation, and visualization with NUTMEG.

    Science.gov (United States)

    Dalal, Sarang S; Zumer, Johanna M; Guggisberg, Adrian G; Trumpis, Michael; Wong, Daniel D E; Sekihara, Kensuke; Nagarajan, Srikantan S

    2011-01-01

NUTMEG is a source analysis toolbox geared towards cognitive neuroscience researchers using MEG and EEG, including intracranial recordings. Evoked and unaveraged data can be imported into the toolbox for source analysis in either the time or time-frequency domain. NUTMEG offers several variants of adaptive beamformers, probabilistic reconstruction algorithms, as well as minimum-norm techniques to generate functional maps of spatiotemporal neural source activity. Lead fields can be calculated from single and overlapping sphere head models or imported from other software. Group averages and statistics can be calculated as well. In addition to data analysis tools, NUTMEG provides a unique and intuitive graphical interface for visualization of results. Source analyses can be superimposed onto a structural MRI or headshape to provide a convenient visual correspondence to anatomy. These results can also be navigated interactively, with the spatial maps and source time series or spectrogram linked accordingly. Animations can be generated to view the evolution of neural activity over time. NUTMEG can also display brain renderings and perform spatial normalization of functional maps using SPM's engine. As a MATLAB package, the end user may easily link with other toolboxes or add customized functions.
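As a rough illustration of the adaptive beamformers mentioned above, the following is the textbook unit-gain LCMV weight computation for a single source location. This is not NUTMEG's actual code; the sensor covariance and lead field here are random stand-ins:

```python
import numpy as np

def lcmv_weights(C, L, reg=1e-6):
    """Unit-gain LCMV beamformer weights for one source with lead field L.

    C : (n_chan, n_chan) sensor covariance; L : (n_chan,) lead-field vector.
    Standard formula w = C^-1 L / (L^T C^-1 L), with diagonal regularization.
    """
    n = len(C)
    Ci = np.linalg.inv(C + reg * np.trace(C) / n * np.eye(n))
    return Ci @ L / (L @ Ci @ L)

rng = np.random.default_rng(0)
n_chan, n_times = 8, 500
data = rng.standard_normal((n_chan, n_times))   # toy sensor recordings
C = data @ data.T / n_times                     # sample covariance
L = rng.standard_normal(n_chan)                 # toy lead field
w = lcmv_weights(C, L)
# the unit-gain constraint forces w @ L == 1
```

Applying `w` to the sensor time series then yields the source time series that such toolboxes map and animate.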

  9. Comprehensive reconstruction and visualization of non-coding regulatory networks in human.

    Science.gov (United States)

    Bonnici, Vincenzo; Russo, Francesco; Bombieri, Nicola; Pulvirenti, Alfredo; Giugno, Rosalba

    2014-01-01

Research attention has increasingly turned to understanding the functional roles of non-coding RNAs (ncRNAs). Many studies have demonstrated their deregulation in cancer and other human disorders. ncRNAs are also present in extracellular human body fluids such as serum and plasma, giving them great potential as non-invasive biomarkers. However, non-coding RNAs have been discovered relatively recently and a comprehensive database covering all of them is still missing. Reconstructing and visualizing the network of ncRNA interactions are important steps towards understanding their regulatory mechanisms in complex systems. This work presents ncRNA-DB, a NoSQL database that integrates ncRNA interaction data from a large number of well-established on-line repositories. The interactions involve RNA, DNA, proteins, and diseases. ncRNA-DB is available at http://ncrnadb.scienze.univr.it/ncrnadb/. It is equipped with three interfaces: web based, command-line, and a Cytoscape app called ncINetView. By accessing only one resource, users can search for ncRNAs and their interactions, build a network annotated with all known ncRNAs and associated diseases, and use all visual and mining features available in Cytoscape.

  10. Vivaldi: Visualization and validation of biomacromolecular NMR structures from the PDB

    Science.gov (United States)

    Hendrickx, Pieter M S; Gutmanas, Aleksandras; Kleywegt, Gerard J

    2013-01-01

We describe Vivaldi (VIsualization and VALidation DIsplay; http://pdbe.org/vivaldi), a web-based service for the analysis, visualization, and validation of NMR structures in the Protein Data Bank (PDB). Vivaldi provides access to model coordinates and several types of experimental NMR data using interactive visualization tools, augmented with structural annotations and model-validation information. The service presents information about the modeled NMR ensemble, validation of experimental chemical shifts, residual dipolar couplings, distance and dihedral angle constraints, as well as validation scores based on empirical knowledge and databases. Vivaldi was designed for both expert NMR spectroscopists and casual non-expert users who wish to obtain a better grasp of the information content and quality of NMR structures in the public archive. PMID:23180575

  11. Validation of the Preverbal Visual Assessment (PreViAs) questionnaire.

    Science.gov (United States)

    García-Ormaechea, Inés; González, Inmaculada; Duplá, María; Andres, Eva; Pueyo, Victoria

    2014-10-01

Visual cognitive integrative functions need to be evaluated by a behavioral assessment, which requires an experienced evaluator. The Preverbal Visual Assessment (PreViAs) questionnaire was designed to evaluate these functions, both in the general pediatric population and in children at high risk of visual cognitive problems, through primary caregivers' answers. We aimed to validate the PreViAs questionnaire by comparing caregiver reports with results from a comprehensive clinical protocol. A total of 220 infants underwent assessment of visual development, as determined by the clinical protocol. Their primary caregivers completed the PreViAs questionnaire, which consists of 30 questions related to one or more visual domains: visual attention, visual communication, visual-motor coordination, and visual processing. Questionnaire answers were compared with results of behavioral assessments performed by three pediatric ophthalmologists. Results of the clinical protocol classified 128 infants as having normal visual maturation, and 92 as having abnormal visual maturation. The specificity of the PreViAs questionnaire was >80%, and sensitivity was 64%-79%. More than 80% of the infants were correctly classified, and test-retest reliability exceeded 0.9 for all domains. The PreViAs questionnaire is useful to detect abnormal visual maturation in infants from birth to 24 months of age. It improves the anamnesis process in infants at risk of visual dysfunctions.
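Sensitivity and specificity figures like those reported above follow from a standard 2×2 screening table. In the sketch below the cell counts are hypothetical, chosen only so the totals match the paper's 92 abnormal and 128 normal infants and the rates fall in the reported ranges:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 screening table.

    tp/fn: abnormal infants flagged / missed by the questionnaire;
    tn/fp: normal infants passed / wrongly flagged.
    """
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical split: 66+26 = 92 abnormal, 107+21 = 128 normal
sens, spec = sens_spec(tp=66, fn=26, tn=107, fp=21)
```

With these illustrative counts the sensitivity lands in the 64%–79% band and the specificity above 80%, mirroring the published summary.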

  12. Validation of the stream function method used for reconstruction of experimental ionospheric convection patterns

    Directory of Open Access Journals (Sweden)

    P.L. Israelevich

Full Text Available In this study we test a stream function method suggested by Israelevich and Ershkovich for instantaneous reconstruction of global, high-latitude ionospheric convection patterns from a limited set of experimental observations, namely, from the electric field or ion drift velocity vector measurements taken along two polar satellite orbits only. These two satellite passes subdivide the polar cap into several adjacent areas. Measured electric fields or ion drifts can be considered as boundary conditions (together with the zero electric potential condition at the low-latitude boundary) for those areas, and the entire ionospheric convection pattern can be reconstructed as a solution of the boundary value problem for the stream function without any preliminary information on ionospheric conductivities. In order to validate the stream function method, we utilized the IZMIRAN electrodynamic model (IZMEM) recently calibrated by the DMSP ionospheric electrostatic potential observations. For the sake of simplicity, we took the modeled electric fields along the noon-midnight and dawn-dusk meridians as the boundary conditions. Then, the solution(s) of the boundary value problem (i.e., a reconstructed potential distribution over the entire polar region) is compared with the original IZMEM/DMSP electric potential distribution(s), as well as with the various cross cuts of the polar cap. It is found that reconstructed convection patterns are in good agreement with the original modelled patterns in both the northern and southern polar caps. The analysis is carried out for the winter and summer conditions, as well as for a number of configurations of the interplanetary magnetic field.

Key words: Ionosphere (electric fields and currents; plasma convection; modelling and forecasting)
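The boundary-value idea above can be illustrated with a toy Dirichlet problem: fix the potential on the outer (low-latitude) boundary and along one "satellite pass", then relax Laplace's equation everywhere else. This is a schematic stand-in for the stream-function reconstruction, not the IZMEM code:

```python
import numpy as np

def solve_laplace(phi, mask, n_iter=2000):
    """Jacobi relaxation for Laplace's equation on a square grid.

    phi  : initial grid with the fixed (measured) values filled in
    mask : True where values are held fixed (boundary and measurement lines)
    """
    phi = phi.copy()
    for _ in range(n_iter):
        interior = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                           + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = np.where(mask, phi, interior)   # keep fixed cells untouched
    return phi

n = 21
phi = np.zeros((n, n))
mask = np.zeros((n, n), dtype=bool)
mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = True  # zero-potential rim
mask[n // 2, :] = True                                      # one "satellite pass"
phi[n // 2, :] = np.sin(np.linspace(0, np.pi, n))           # measured potential
sol = solve_laplace(phi, mask)
```

The interior values are recovered purely from the boundary data, which is the essence of reconstructing a convection pattern from two satellite passes without knowing the conductivities.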

  13. Validation of a laboratory method for evaluating dynamic properties of reconstructed equine racetrack surfaces.

    Directory of Open Access Journals (Sweden)

    Jacob J Setterbo

Full Text Available Racetrack surface is a risk factor for racehorse injuries and fatalities. Current research indicates that race surface mechanical properties may be influenced by material composition, moisture content, temperature, and maintenance. Race surface mechanical testing in a controlled laboratory setting would allow for objective evaluation of dynamic surface properties and the factors that affect surface behavior. The aim was to develop a method for reconstruction of race surfaces in the laboratory and validate the method by comparison with racetrack measurements of dynamic surface properties. Track-testing device (TTD) impact tests were conducted to simulate equine hoof impact on dirt and synthetic race surfaces; tests were performed both in situ (racetrack) and using laboratory reconstructions of harvested surface materials. Clegg Hammer in situ measurements were used to guide surface reconstruction in the laboratory. Dynamic surface properties were compared between in situ and laboratory settings. Relationships between racetrack TTD and Clegg Hammer measurements were analyzed using stepwise multiple linear regression. Most dynamic surface property setting differences (racetrack-laboratory) were small relative to surface material type differences (dirt-synthetic). Clegg Hammer measurements were more strongly correlated with TTD measurements on the synthetic surface than the dirt surface. On the dirt surface, Clegg Hammer decelerations were negatively correlated with TTD forces. Laboratory reconstruction of racetrack surfaces guided by Clegg Hammer measurements yielded TTD impact measurements similar to in situ values. The negative correlation between TTD and Clegg Hammer measurements confirms the importance of instrument mass when drawing conclusions from testing results. Lighter impact devices may be less appropriate for assessing dynamic surface properties compared to testing equipment designed to simulate hoof impact (TTD).

  14. Experimental validation of a Bayesian model of visual acuity.

    LENUS (Irish Health Repository)

    Dalimier, Eugénie

    2009-01-01

    Based on standard procedures used in optometry clinics, we compare measurements of visual acuity for 10 subjects (11 eyes tested) in the presence of natural ocular aberrations and different degrees of induced defocus, with the predictions given by a Bayesian model customized with aberrometric data of the eye. The absolute predictions of the model, without any adjustment, show good agreement with the experimental data, in terms of correlation and absolute error. The efficiency of the model is discussed in comparison with image quality metrics and other customized visual process models. An analysis of the importance and customization of each stage of the model is also given; it stresses the potential high predictive power from precise modeling of ocular and neural transfer functions.

  15. Reliability and validity of the visual analogue scale for disability in patients with chronic musculoskeletal pain

    NARCIS (Netherlands)

    Boonstra, Anne M.; Schiphorst Preuper, Henrica R.; Reneman, Michiel F.; Posthumus, Jitze B.; Stewart, Roy E.

    To determine the reliability and concurrent validity of a visual analogue scale (VAS) for disability as a single-item instrument measuring disability in chronic pain patients was the objective of the study. For the reliability study a test-retest design and for the validity study a cross-sectional

  16. Validating visual disturbance types and classes used for forest soil monitoring protocols

    Science.gov (United States)

    D. S. Page-Dumroese; A. M. Abbott; M. P. Curran; M. F. Jurgensen

    2012-01-01

    We describe several methods for validating visual soil disturbance classes used during forest soil monitoring after specific management operations. Site-specific vegetative, soil, and hydrologic responses to soil disturbance are needed to identify sensitive and resilient soil properties and processes; therefore, validation of ecosystem responses can provide information...

  17. 4D Reconstruction and Visualization of Cultural Heritage: Analyzing Our Legacy Through Time

    Science.gov (United States)

    Rodríguez-Gonzálvez, P.; Muñoz-Nieto, A. L.; del Pozo, S.; Sanchez-Aparicio, L. J.; Gonzalez-Aguilera, D.; Micoli, L.; Gonizzi Barsanti, S.; Guidi, G.; Mills, J.; Fieber, K.; Haynes, I.; Hejmanowska, B.

    2017-02-01

Temporal analyses and multi-temporal 3D reconstruction are fundamental for the preservation and maintenance of all forms of Cultural Heritage (CH) and are the basis for decisions related to interventions and promotion. Introducing the fourth dimension of time into three-dimensional geometric modelling of real data allows the creation of a multi-temporal representation of a site. In this way, scholars from various disciplines (surveyors, geologists, archaeologists, architects, philologists, etc.) are provided with a new set of tools and working methods to support the study of the evolution of heritage sites, both to develop hypotheses about the past and to model likely future developments. The capacity to "see" the dynamic evolution of CH assets across different spatial scales (e.g. building, site, city or territory), compressed in a diachronic model, affords the possibility to better understand the present status of CH according to its history. However, there are numerous challenges in order to carry out 4D modelling and the requisite multi-data source integration. It is necessary to identify the specifications, needs and requirements of the CH community to understand the required levels of 4D model information. In this way, it is possible to determine the optimal materials and technologies to be utilised at different CH scales, as well as the data management and visualization requirements. This manuscript aims to provide a comprehensive approach for CH time-varying representations, analysis and visualization across different working scales and environments: rural landscape, urban landscape and architectural scales. Within this aim, the different available metric data sources are systemized and evaluated in terms of their suitability.

  18. Desktop publishing and validation of custom near visual acuity charts.

    Science.gov (United States)

    Marran, Lynn; Liu, Lei; Lau, George

    2008-11-01

    Customized visual acuity (VA) assessment is an important part of basic and clinical vision research. Desktop computer based distance VA measurements have been utilized, and shown to be accurate and reliable, but computer based near VA measurements have not been attempted, mainly due to the limited spatial resolution of computer monitors. In this paper, we demonstrate how to use desktop publishing to create printed custom near VA charts. We created a set of six near VA charts in a logarithmic progression, 20/20 through 20/63, with multiple lines of the same acuity level, different letter arrangements in each line and a random noise background. This design allowed repeated measures of subjective accommodative amplitude without the potential artifact of familiarity of the optotypes. The background maintained a constant and spatial frequency rich peripheral stimulus for accommodation across the six different acuity levels. The paper describes in detail how pixel-wise accurate black and white bitmaps of Sloan optotypes were used to create the printed custom VA charts. At all acuity levels, the physical sizes of the printed custom optotypes deviated no more than 0.034 log units from that of the standard, satisfying the 0.05 log unit ISO criterion we used to demonstrate physical equivalence. Also, at all acuity levels, log unit differences in the mean target distance for which reliable recognition of letters first occurred for the printed custom optotypes compared to the standard were found to be below 0.05, satisfying the 0.05 log unit ISO criterion we used to demonstrate functional equivalence. It is possible to use desktop publishing to create custom near VA charts that are physically and functionally equivalent to standard VA charts produced by a commercial printing process.
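The optotype sizes behind such charts follow from simple geometry: a 20/20 letter subtends 5 arcmin at the test distance, and a 20/N letter scales linearly with N. The sketch below computes physical letter heights for the chart's logarithmic progression; the 400 mm near distance is an assumed example, not the paper's specification:

```python
import math

def optotype_height_mm(snellen_denominator, distance_mm=400.0):
    """Physical height of an optotype for a given Snellen acuity level.

    A 20/N letter subtends 5 * (N / 20) arcmin at the test distance;
    height = 2 * d * tan(angle / 2).
    """
    arcmin = 5.0 * snellen_denominator / 20.0
    angle = math.radians(arcmin / 60.0)
    return 2.0 * distance_mm * math.tan(angle / 2.0)

# The six chart levels used in the study, 20/20 through 20/63
sizes = {n: optotype_height_mm(n) for n in (20, 25, 32, 40, 50, 63)}
```

Because the progression 20, 25, 32, 40, 50, 63 is logarithmic, successive heights differ by about 0.1 log units, which is exactly the scale on which the 0.034 and 0.05 log-unit tolerances above are stated.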

  19. DQS advisor: a visual interface and knowledge-based system to balance dose, quality, and reconstruction speed in iterative CT reconstruction with application to NLM-regularization

    International Nuclear Information System (INIS)

    Zheng, Z; Papenhausen, E; Mueller, K

    2013-01-01

Motivated by growing concerns with regards to the x-ray dose delivered to the patient, low-dose computed tomography (CT) has gained substantial interest in recent years. However, achieving high-quality CT reconstructions from the limited projection data collected at reduced x-ray radiation is challenging, and iterative algorithms have been shown to perform much better than conventional analytical schemes in these cases. A problem with iterative methods in general is that they require users to set many parameters, and if set incorrectly, high reconstruction time and/or low image quality are likely consequences. Since the interactions among parameters can be complex and thus effective settings can be difficult to identify for a given scanning scenario, these choices are often left to a highly-experienced human expert. To help alleviate this problem, we devise a computer-based assistant for this purpose, called the dose, quality and speed (DQS) advisor. It allows users to balance the three most important CT metrics (DQS) by way of an intuitive visual interface. Using a known gold-standard, the system uses the ant-colony optimization algorithm to generate the most effective parameter settings for a comprehensive set of DQS configurations. A visual interface then presents the numerical outcome of this optimization, while a matrix display allows users to compare the corresponding images. The interface allows users to intuitively trade off GPU-enabled reconstruction speed against quality and dose, while the system picks the associated parameter settings automatically. Further, once the knowledge has been generated, it can be used to correctly set the parameters for any new CT scan taken at similar scenarios. (paper)
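As a minimal stand-in for the kind of iterative reconstruction whose parameters such an advisor tunes, here is a plain Landweber iteration on a toy linear system; the relaxation step and iteration count play the role of the speed/quality knobs. This is an illustrative sketch, not the paper's NLM-regularized algorithm:

```python
import numpy as np

def landweber(A, b, step, n_iter):
    """Landweber iteration x <- x + step * A^T (b - A x), starting from zero.

    A : toy projection matrix, b : measured projections.
    Converges for step < 2 / sigma_max(A)^2.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (b - A @ x)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 10))        # toy overdetermined system
x_true = rng.standard_normal(10)
b = A @ x_true                           # noiseless "projections"
step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe relaxation parameter
x = landweber(A, b, step, 500)
```

Halving `n_iter` or misjudging `step` degrades the recovered image or wastes compute, which is precisely the time-versus-quality trade-off the abstract describes.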

  20. Extended Reconstructed Sea Surface Temperature Version 5 (ERSSTv5): Upgrades, Validations, and Intercomparisons

    Science.gov (United States)

    Huang, B.; Thorne, P.; Banzon, P. V. F.; Chepurin, G. A.; Lawrimore, J. H.; Menne, M. J.; Vose, R. S.; Smith, T. M.; Zhang, H. M.

    2017-12-01

    The monthly global 2°×2° Extended Reconstructed Sea Surface Temperature (ERSST) has been revised and updated from version 4 to version 5. This update incorporates a new release of ICOADS R3.0, a decade of near-surface data from Argo floats, and a new estimate of centennial sea-ice from HadISST2. A number of choices in aspects of quality control, bias adjustment and interpolation have been substantively revised. The resulting ERSST estimates have more realistic spatio-temporal variations, better representation of high latitude SSTs, and ship SST biases are now calculated relative to more accurate buoy measurements, while the global long-term trend remains about the same. Progressive experiments have been undertaken to highlight the effects of each change in data source and analysis technique upon the final product. The reconstructed SST is systematically decreased by 0.077°C, as the reference data source is switched from ship SST in v4 to modern buoy SST in v5. Furthermore, high latitude SSTs are decreased by 0.1°-0.2°C by using sea-ice concentration from HadISST2 over HadISST1. Changes arising from remaining innovations are mostly important at small space and time scales, primarily having an impact where and when input observations are sparse. Cross-validations and verifications with independent modern observations show that the updates incorporated in ERSSTv5 have improved the representation of spatial variability over the global oceans, the magnitude of El Niño and La Niña events, and the decadal nature of SST changes over 1930s-40s when observation instruments changed rapidly. Both long (1900-2015) and short (2000-2015) term SST trends in ERSSTv5 remain significant as in ERSSTv4.

  1. Validity of the modified Berg Balance Scale in adults with intellectual and visual disabilities

    NARCIS (Netherlands)

    Dijkhuizen, Annemarie; Krijnen, Wim P; van der Schans, Cees; Waninge, Aly

    BACKGROUND: A modified version of the Berg Balance Scale (mBBS) was developed for individuals with intellectual and visual disabilities (IVD). However, the concurrent and predictive validity has not yet been determined. AIM: The purpose of the current study was to evaluate the concurrent and

  2. Validity of the modified Berg Balance Scale in adults with intellectual and visual disabilities

    NARCIS (Netherlands)

    Dijkhuizen, Annemarie; Krijnen, Wim P.; van der Schans, Cees P.; Waninge, Aly

    Background: A modified version of the Berg Balance Scale (mBBS) was developed for individuals with intellectual and visual disabilities (IVD). However, the concurrent and predictive validity has not yet been determined. Aim: The purpose of the current study was to evaluate the concurrent and

  3. Reliability and validity of the rey visual design learning test in primary school children

    NARCIS (Netherlands)

    Wilhelm, P.

    2004-01-01

    The Rey Visual Design Learning Test (Rey, 1964, in Spreen & Strauss, 1991) assesses immediate memory span, new learning and recognition for non-verbal material. Three studies are presented that focused on the reliability and validity of the RVDLT in primary school children. Test-retest reliability

  4. Validity of the rey visual design test in primary and secondary school children

    NARCIS (Netherlands)

    Wilhelm, P.; van Klink, M.; van Klink, M.

    2007-01-01

    The Rey Visual Design Learning Test (Rey, 1964, cited in Spreen & Strauss, 1991, Wilhelm, 2004) assesses immediate memory span, new learning, delayed recall and recognition for nonverbal material. Two studies are presented that focused on the construct validity of the RVDLT in primary and secondary

  5. Validity and reliability of self-assessed physical fitness using visual analogue scales

    DEFF Research Database (Denmark)

    Strøyer, Jesper; Essendrop, Morten; Jensen, Lone Donbaek

    2007-01-01

To test the validity and reliability of self-assessed physical fitness, samples included healthcare assistants working at a hospital (women=170, men=17), persons working with physically and mentally handicapped patients (women=530, men=123), and two separate groups of healthcare students: (a) women=91 and men=5, and (b) women=159 and men=10. Five components of physical fitness were self-assessed by Visual Analogue Scales with illustrations and verbal anchors for the extremes: aerobic fitness, muscle strength, endurance, flexibility, and balance. Convergent and divergent validity were evaluated ... except for flexibility among men. The reliability was moderate to good (ICC = .62-.80). Self-assessed aerobic fitness, muscle strength, and flexibility showed moderate construct validity and moderate to good reliability using visual analogue scales.
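Test-retest reliability figures like the ICC range above are conventionally computed with the two-way random, single-measure formula ICC(2,1). A sketch with simulated test-retest data (the data are invented; only the formula is standard):

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random, single-measure ICC(2,1); rows=subjects, cols=sessions.

    Computed from the mean squares of a two-way ANOVA decomposition.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_m = ratings.mean(axis=1)
    col_m = ratings.mean(axis=0)
    msr = k * ((row_m - grand) ** 2).sum() / (n - 1)          # subjects
    msc = n * ((col_m - grand) ** 2).sum() / (k - 1)          # sessions
    sse = ((ratings - row_m[:, None] - col_m[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                           # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(1)
truth = rng.uniform(0, 100, 50)         # 50 subjects' "true" VAS scores
test = truth + rng.normal(0, 5, 50)     # session 1 with measurement noise
retest = truth + rng.normal(0, 5, 50)   # session 2 with measurement noise
icc = icc_2_1(np.column_stack([test, retest]))
```

With a noise standard deviation of 5 on scores spread over 0–100, the simulated ICC comes out high; larger session-to-session noise would pull it down toward the .62–.80 band reported above.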

  6. Assistive technology for visually impaired women for use of the female condom: a validation study

    Directory of Open Access Journals (Sweden)

    Luana Duarte Wanderley Cavalcante

    2015-02-01

Full Text Available OBJECTIVE To validate assistive technology for visually impaired women to learn how to use the female condom. METHOD A methodological development study conducted on a web page, with data collection between May and October 2012. Participants were 14 judges: seven judges in sexual and reproductive health (1st stage) and seven in special education (2nd stage). RESULTS All items reached the adopted parameter of 70% agreement. In the 1st stage new materials were added to represent the cervix, and instructions that must be heard twice were included in the 2nd stage. CONCLUSION The technology has been validated and is appropriate for its objectives, structure/presentation and relevance. It is an innovative, low-cost and valid instrument for promoting health and one which may help women with visual disabilities to use the female condom.

  7. Can DMCO Detect Visual Field Loss in Neurological Patients? A Secondary Validation Study

    DEFF Research Database (Denmark)

    Olsen, Ane Sophie; Steensberg, Alvilda Thougaard; la Cour, Morten

    2017-01-01

Unrecognized visual field loss is caused by a range of blinding eye conditions as well as serious brain diseases. The commonest cause of asymptomatic visual field loss is glaucoma. No screening tools have been proven cost-effective. Damato Multifixation Campimetry Online (DMCO), an inexpensive online test, has been evaluated as a future cost-beneficial tool to detect glaucoma. To further validate DMCO, this study aimed to test DMCO in a preselected population with neurological visual field loss. Methods: The study design was an evaluation of a diagnostic test. Patients were included if they had undergone surgery for epilepsy during 2011-2014, resulting in visual field loss. They were examined with DMCO and results were compared with those obtained with the Humphrey Field Analyzer (30:2 SITA-Fast). DMCO sensitivity and specificity were estimated with 95% confidence intervals. Results ...

  8. Validation of a questionnaire assessing patient's aesthetic and functional outcome after nasal reconstruction: the patient NAFEQ-score.

    Science.gov (United States)

    Moolenburgh, S E; Mureau, M A M; Duivenvoorden, H J; Hofer, S O P

    2009-05-01

    In determining patient satisfaction with functional and aesthetic outcome after reconstructive surgery, including nasal reconstruction, standardised assessment instruments are very important. These standardised tools are needed to adequately evaluate and compare outcome results. Since no such instrument existed for nasal reconstruction, a standardised evaluation questionnaire was developed to assess aesthetic and functional outcome after nasal reconstruction. Items of the Nasal Appearance and Function Evaluation Questionnaire (NAFEQ) were derived from both the literature and experiences with patients. The NAFEQ was validated on 30 nasal reconstruction patients and a reference group of 175 people. A factor analysis confirmed the arrangement of the questionnaire in two subscales: functional and aesthetic outcome. High Cronbach's alpha values (>0.70) for both subscales showed that the NAFEQ was an internally consistent instrument. This study demonstrated that the NAFEQ can be used as a standardised questionnaire for detailed evaluation of aesthetic and functional outcome after nasal reconstruction. Its widespread use would enable comparison of results achieved by different techniques, surgeons and centres in a standardised fashion.

  9. Real-time SPARSE-SENSE cardiac cine MR imaging: optimization of image reconstruction and sequence validation.

    Science.gov (United States)

    Goebel, Juliane; Nensa, Felix; Bomas, Bettina; Schemuth, Haemi P; Maderwald, Stefan; Gratz, Marcel; Quick, Harald H; Schlosser, Thomas; Nassenstein, Kai

    2016-12-01

Improved real-time cardiac magnetic resonance (CMR) sequences have recently been introduced, but so far only limited practical experience exists. This study aimed at image reconstruction optimization and clinical validation of a new highly accelerated real-time cine SPARSE-SENSE sequence. Left ventricular (LV) short-axis stacks of a real-time free-breathing SPARSE-SENSE sequence with high spatiotemporal resolution and of a standard segmented cine SSFP sequence were acquired at 1.5 T in 11 volunteers and 15 patients. To determine the optimal iterations, all volunteers' SPARSE-SENSE images were reconstructed using 10-200 iterations, and contrast ratios, image entropies, and reconstruction times were assessed. Subsequently, the patients' SPARSE-SENSE images were reconstructed with the clinically optimal iterations. LV volumetric values were evaluated and compared between both sequences. Sufficient image quality and acceptable reconstruction times were achieved when using 80 iterations. Bland-Altman plots and Passing-Bablok regression showed good agreement for all volumetric parameters. 80 iterations are recommended for iterative SPARSE-SENSE image reconstruction in clinical routine. Real-time cine SPARSE-SENSE yielded volumetric results comparable to those of the current standard SSFP sequence. Due to its intrinsically low image acquisition times, real-time cine SPARSE-SENSE imaging with iterative image reconstruction seems to be an attractive alternative for LV function analysis. • A highly accelerated real-time CMR sequence using SPARSE-SENSE was evaluated. • SPARSE-SENSE allows free breathing in real-time cardiac cine imaging. • For clinically optimal SPARSE-SENSE image reconstruction, 80 iterations are recommended. • Real-time SPARSE-SENSE imaging yielded volumetric results comparable to those of the reference SSFP sequence. • The fast SPARSE-SENSE sequence is an attractive alternative to standard SSFP sequences.

  10. Initial Validation Study for a Scale Used to Determine Service Intensity for Itinerant Teachers of Students with Visual Impairments

    Science.gov (United States)

    Pogrund, Rona L.; Darst, Shannon; Munro, Michael P.

    2015-01-01

    Introduction: The purpose of this study was to begin validation of a scale that will be used by teachers of students with visual impairments to determine appropriate recommended type and frequency of services for their students based on identified student need. Methods: Validity and reliability of the Visual Impairment Scale of Service Intensity…

  11. The validity of visual acuity assessment using mobile technology devices in the primary care setting.

    Science.gov (United States)

    O'Neill, Samuel; McAndrew, Darryl J

    2016-04-01

    The assessment of visual acuity is indicated in a number of clinical circumstances. It is commonly conducted through the use of a Snellen wall chart. Mobile technology developments and adoption rates by clinicians may potentially provide more convenient methods of assessing visual acuity. Limited data exist on the validity of these devices and applications. The objective of this study was to evaluate the assessment of distance visual acuity using mobile technology devices against the commonly used 3-metre Snellen chart in a primary care setting. A prospective quantitative comparative study was conducted at a regional medical practice. The visual acuity of 60 participants was assessed on a Snellen wall chart and two mobile technology devices (iPhone, iPad). Visual acuity intervals were converted to logarithm of minimum angle of resolution (logMAR) scores and subjected to intraclass correlation coefficient (ICC) assessment. The results show a high level of general agreement between testing modalities (ICC 0.917 with a 95% confidence interval of 0.887-0.940). The high level of agreement of visual acuity results between the Snellen wall chart and both mobile technology devices suggests that clinicians can use this technology with confidence in the primary care setting.
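    The Snellen-to-logMAR conversion used in the analysis is a standard logarithmic transform of the Snellen fraction; a minimal sketch (the 6 m and 20 ft values below are illustrative):

```python
import math

def snellen_to_logmar(distance, denominator):
    """Convert a Snellen fraction (e.g. 6/12) to a logMAR score.

    logMAR = log10(denominator / distance); 6/6 (or 20/20) maps to 0.0,
    and larger (worse) denominators give larger logMAR values."""
    return math.log10(denominator / distance)

print(snellen_to_logmar(6, 6))    # 0.0 (normal acuity)
print(snellen_to_logmar(6, 12))   # ~0.30
print(snellen_to_logmar(20, 40))  # ~0.30 (same acuity in 20-ft notation)
```

Converting to logMAR puts acuity on an interval scale, which is what makes averaging and ICC computation across testing modalities meaningful.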

  12. The development and validation of the Visual Analogue Self-Esteem Scale (VASES).

    Science.gov (United States)

    Brumfitt, S M; Sheeran, P

    1999-11-01

    To develop a visual analogue measure of self-esteem and test its psychometric properties. Two correlational studies involving samples of university students and aphasic speakers. Two hundred and forty-three university students completed multiple measures of self-esteem, depression and anxiety, as well as measures of transitory mood and social desirability (Study 1). Two samples of aphasic speakers (N = 14 and N = 20) completed the Visual Analogue Self-Esteem Scale (VASES), the Rosenberg (1965) self-esteem scale, and measures of depression and anxiety (Study 2). Study 1 found evidence of good internal and test-retest reliability, construct validity, and convergent and discriminant validity for a 10-item VASES. Study 2 demonstrated good internal reliability among aphasic speakers. The VASES is a short, easy-to-administer measure of self-esteem that possesses good psychometric properties.

  13. Reconstruction methods for sound visualization based on acousto-optic tomography

    DEFF Research Database (Denmark)

    Torras Rosell, Antoni; Lylloff, Oliver; Barrera Figueroa, Salvador

    2013-01-01

    The visualization of acoustic fields using acousto-optic tomography has recently proved to yield satisfactory results in the audible frequency range. The current implementation of this visualization technique uses a laser Doppler vibrometer (LDV) to measure the acousto-optic effect, that is, the ...

  14. Quantitative regional validation of the visual rating scale for posterior cortical atrophy

    International Nuclear Information System (INIS)

    Moeller, Christiane; Benedictus, Marije R.; Koedam, Esther L.G.M.; Scheltens, Philip; Flier, Wiesje M. van der; Versteeg, Adriaan; Wattjes, Mike P.; Barkhof, Frederik; Vrenken, Hugo

    2014-01-01

    Validate the four-point visual rating scale for posterior cortical atrophy (PCA) on magnetic resonance images (MRI) through quantitative grey matter (GM) volumetry and voxel-based morphometry (VBM) to justify its use in clinical practice. Two hundred twenty-nine patients with probable Alzheimer's disease and 128 with subjective memory complaints underwent 3T MRI. PCA was rated according to the visual rating scale. GM volumes of six posterior structures and the total posterior region were extracted using IBASPM and compared among PCA groups. To determine which anatomical regions contributed most to the visual scores, we used binary logistic regression. VBM compared local GM density among groups. Patients were categorised according to their PCA scores: PCA-0 (n = 122), PCA-1 (n = 143), PCA-2 (n = 79), and PCA-3 (n = 13). All structures except the posterior cingulate differed significantly among groups. The inferior parietal gyrus volume discriminated the most between rating scale levels. VBM showed that PCA-1 had a lower GM volume than PCA-0 in the parietal region and other brain regions, whereas between PCA-1 and PCA-2/3 GM atrophy was mostly restricted to posterior regions. The visual PCA rating scale is quantitatively validated and reliably reflects GM atrophy in parietal regions, making it a valuable tool for the daily radiological assessment of dementia. (orig.)

  15. Quantitative regional validation of the visual rating scale for posterior cortical atrophy

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, Christiane; Benedictus, Marije R.; Koedam, Esther L.G.M.; Scheltens, Philip [VU University Medical Center, Alzheimer Center and Department of Neurology, Neuroscience Campus Amsterdam, P.O. Box 7057, Amsterdam (Netherlands); Flier, Wiesje M. van der [VU University Medical Center, Alzheimer Center and Department of Neurology, Neuroscience Campus Amsterdam, P.O. Box 7057, Amsterdam (Netherlands); VU University Medical Center, Department of Epidemiology and Biostatistics, Neuroscience Campus Amsterdam, P.O. Box 7057, Amsterdam (Netherlands); Versteeg, Adriaan; Wattjes, Mike P.; Barkhof, Frederik [VU University Medical Center, Department of Radiology and Nuclear Medicine, Neuroscience Campus Amsterdam, P.O. Box 7057, Amsterdam (Netherlands); Vrenken, Hugo [VU University Medical Center, Department of Radiology and Nuclear Medicine, Neuroscience Campus Amsterdam, P.O. Box 7057, Amsterdam (Netherlands); VU University Medical Center, Department of Physics and Medical Technology, Neuroscience Campus Amsterdam, P.O. Box 7057, Amsterdam (Netherlands)

    2014-02-15

    Validate the four-point visual rating scale for posterior cortical atrophy (PCA) on magnetic resonance images (MRI) through quantitative grey matter (GM) volumetry and voxel-based morphometry (VBM) to justify its use in clinical practice. Two hundred twenty-nine patients with probable Alzheimer's disease and 128 with subjective memory complaints underwent 3T MRI. PCA was rated according to the visual rating scale. GM volumes of six posterior structures and the total posterior region were extracted using IBASPM and compared among PCA groups. To determine which anatomical regions contributed most to the visual scores, we used binary logistic regression. VBM compared local GM density among groups. Patients were categorised according to their PCA scores: PCA-0 (n = 122), PCA-1 (n = 143), PCA-2 (n = 79), and PCA-3 (n = 13). All structures except the posterior cingulate differed significantly among groups. The inferior parietal gyrus volume discriminated the most between rating scale levels. VBM showed that PCA-1 had a lower GM volume than PCA-0 in the parietal region and other brain regions, whereas between PCA-1 and PCA-2/3 GM atrophy was mostly restricted to posterior regions. The visual PCA rating scale is quantitatively validated and reliably reflects GM atrophy in parietal regions, making it a valuable tool for the daily radiological assessment of dementia. (orig.)

  16. Visual event-related potential studies supporting the validity of VARK learning styles' visual and read/write learners.

    Science.gov (United States)

    Thepsatitporn, Sarawin; Pichitpornchai, Chailerd

    2016-06-01

    The validity of learning styles needs the support of additional objective evidence. The identification of learning styles using subjective evidence from VARK questionnaires (where V is visual, A is auditory, R is read/write, and K is kinesthetic) combined with objective evidence from visual event-related potential (vERP) studies has never been investigated. It is questionable whether picture superiority effects exist in V learners and R learners. Thus, the present study aimed to investigate whether vERP could show the relationship between vERP components and VARK learning styles and to identify the existence of picture superiority effects in V learners and R learners. Thirty medical students (15 V learners and 15 R learners) performed recognition tasks with vERP and an intermediate-term memory (ITM) test. The results of within-group comparisons showed that pictures elicited larger P200 amplitudes than words at the occipital 2 site (P < 0.05) in V learners and at the occipital 1 and 2 sites (P < 0.05) in R learners. The between-groups comparison showed that P200 amplitudes elicited by pictures in V learners were larger than those of R learners at the parietal 4 site (P < 0.05). The ITM test showed distinctly more correct responses for a picture set than for a word set in both V learners (P < 0.001) and R learners (P < 0.01). In conclusion, the results indicated that the P200 amplitude at the parietal 4 site could be used to objectively distinguish V learners from R learners. A lateralization to the right brain (occipital 2 site) existed in V learners. The ITM test demonstrated the existence of picture superiority effects in both groups of learners. The results revealed the first objective electrophysiological evidence partially supporting the validity of the subjective psychological VARK questionnaire. Copyright © 2016 The American Physiological Society.

  17. Reliability and Validity of the Japanese Version of the Kinesthetic and Visual Imagery Questionnaire (KVIQ).

    Science.gov (United States)

    Nakano, Hideki; Kodama, Takayuki; Ukai, Kazumasa; Kawahara, Satoru; Horikawa, Shiori; Murata, Shin

    2018-05-02

    In this study, we aimed to (1) translate the English version of the Kinesthetic and Visual Imagery Questionnaire (KVIQ), which assesses motor imagery ability, into Japanese, and (2) investigate the reliability and validity of the Japanese KVIQ. We enrolled 28 healthy adults in this study. We used Cronbach’s alpha coefficients to assess reliability as reflected by internal consistency. Additionally, we assessed criterion-related validity between the Japanese KVIQ and the Japanese version of the Movement Imagery Questionnaire-Revised (MIQ-R) using Spearman’s rank correlation coefficients. The Cronbach’s alpha coefficients for the KVIQ-20 were 0.88 (Visual) and 0.91 (Kinesthetic), which indicates high reliability. There was a significant positive correlation between the Japanese KVIQ-20 (Total) and the Japanese MIQ-R (Total) (r = 0.86, p < 0.01). Our results suggest that the Japanese KVIQ is a reliable and valid index of motor imagery ability.
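    Cronbach's alpha, the internal-consistency statistic reported here, can be computed directly from item scores; a minimal sketch with hypothetical ratings (3 items, 6 respondents):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item score lists.

    items[i][j] is the score of respondent j on item i.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 5-point imagery ratings: 3 items x 6 respondents
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 5],
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Values near 0.9, as reported for the KVIQ-20 subscales, indicate that the items vary together rather than independently.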

  18. 'tomo_display' and 'vol_tools': IDL VM Packages for Tomography Data Reconstruction, Processing, and Visualization

    Science.gov (United States)

    Rivers, M. L.; Gualda, G. A.

    2009-05-01

    One of the challenges in tomography is the availability of suitable software for image processing and analysis in 3D. We present here 'tomo_display' and 'vol_tools', two packages created in IDL that enable reconstruction, processing, and visualization of tomographic data. They complement in many ways the capabilities offered by Blob3D (Ketcham 2005 - Geosphere, 1: 32-41, DOI: 10.1130/GES00001.1) and, in combination, allow users without programming knowledge to perform all steps necessary to obtain qualitative and quantitative information from tomographic data. The package 'tomo_display' was created and is maintained by Mark Rivers. It allows the user to: (1) preprocess and reconstruct parallel-beam tomographic data, including removal of anomalous pixels, ring artifact reduction, and automated determination of the rotation center, and (2) visualize both raw and reconstructed data, either as individual frames or as a series of sequential frames. The package 'vol_tools' consists of a series of small programs created and maintained by Guilherme Gualda to perform specific tasks not included in other packages. Existing modules include simple tools for cropping volumes, generating histograms of intensity, measuring sample volume (useful for porous samples like pumice), and computing volume differences (for differential absorption tomography). The module 'vol_animate' can be used to generate 3D animations using rendered isosurfaces around objects. Both packages use the same NetCDF format '.volume' files created using code written by Mark Rivers. Currently, only 16-bit integer volumes are created and read by the packages, but floating point and 8-bit data can easily be stored in the NetCDF format as well. A simple GUI to convert sequences of tiffs into '.volume' files is available within 'vol_tools'. Both 'tomo_display' and 'vol_tools' include options to (1) generate onscreen output that allows for dynamic visualization in 3D, and (2) save sequences of tiffs to disk.

  19. GPU-accelerated brain connectivity reconstruction and visualization in large-scale electron micrographs

    KAUST Repository

    Jeong, Wonki; Pfister, Hanspeter; Beyer, Johanna; Hadwiger, Markus

    2011-01-01

    for fair comparison. The main focus of this chapter is introducing the GPU algorithms and their implementation details, which are the core components of the interactive segmentation and visualization system. © 2011 NVIDIA Corporation.

  20. Influence of Ultra-Low-Dose and Iterative Reconstructions on the Visualization of Orbital Soft Tissues on Maxillofacial CT.

    Science.gov (United States)

    Widmann, G; Juranek, D; Waldenberger, F; Schullian, P; Dennhardt, A; Hoermann, R; Steurer, M; Gassner, E-M; Puelacher, W

    2017-08-01

    Dose reduction on CT scans for surgical planning and postoperative evaluation of midface and orbital fractures is an important concern. The purpose of this study was to evaluate the variability of various low-dose and iterative reconstruction techniques on the visualization of orbital soft tissues. Contrast-to-noise ratios of the optic nerve and inferior rectus muscle and subjective scores of a human cadaver were calculated from CT with a reference dose protocol (CT dose index volume = 36.69 mGy) and a subsequent series of low-dose protocols (LDPs I-IV: CT dose index volume = 4.18, 2.64, 0.99, and 0.53 mGy) with filtered back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR)-50, ASIR-100, and model-based iterative reconstruction. The Dunn Multiple Comparison Test was used to compare each combination of protocols (α = .05). Compared with the reference dose protocol with FBP, statistically significant differences in contrast-to-noise ratios were shown (all, P ≤ .012) for the following: 1) optic nerve: LDP-I with FBP; LDP-II with FBP and ASIR-50; LDP-III with FBP, ASIR-50, and ASIR-100; and LDP-IV with FBP, ASIR-50, and ASIR-100; and 2) inferior rectus muscle: LDP-II with FBP, LDP-III with FBP and ASIR-50, and LDP-IV with FBP, ASIR-50, and ASIR-100. Model-based iterative reconstruction showed the best contrast-to-noise ratio in all images and provided similar subjective scores for LDP-II. ASIR-50 had no remarkable effect, and ASIR-100 a small effect, on subjective scores. Compared with a reference dose protocol with FBP, model-based iterative reconstruction may show similar diagnostic visibility of orbital soft tissues at a CT dose index volume of 2.64 mGy. Low-dose technology and iterative reconstruction technology may redefine current reference dose levels in maxillofacial CT. © 2017 by American Journal of Neuroradiology.
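    A common contrast-to-noise ratio definition can be sketched as follows; note that CNR definitions vary between studies (some pool the noise of both regions), and the Hounsfield-unit samples below are hypothetical:

```python
from statistics import mean, stdev

def contrast_to_noise(signal_roi, background_roi):
    """Contrast-to-noise ratio between two regions of interest.

    CNR = |mean(signal) - mean(background)| / SD(background).
    Check the specific study for the exact noise term used."""
    return abs(mean(signal_roi) - mean(background_roi)) / stdev(background_roi)

# Hypothetical HU samples: optic nerve vs. surrounding orbital fat
nerve = [38.0, 41.0, 40.0, 39.0, 42.0]
fat   = [-85.0, -82.0, -88.0, -84.0, -86.0]
print(f"CNR = {contrast_to_noise(nerve, fat):.1f}")
```

Lower-dose protocols raise the noise term, so the CNR drops unless iterative reconstruction suppresses the noise, which is the effect the study quantifies.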

  1. Cue-induced craving among inhalant users: Development and preliminary validation of a visual cue paradigm.

    Science.gov (United States)

    Jain, Shobhit; Dhawan, Anju; Kumaran, S Senthil; Pattanayak, Raman Deep; Jain, Raka

    2017-12-01

    Cue-induced craving is known to be associated with a higher risk of relapse: drug-specific cues become conditioned stimuli that elicit conditioned responses. Cue-reactivity paradigms are important tools for studying psychological responses and functional neuroimaging changes. However, to date, there has been no specific study or validated paradigm for inhalant cue-induced craving research. The study aimed to develop and validate a visual cue stimulus for inhalant cue-associated craving. The first step (picture selection) involved screening and careful selection of 30 cue and 30 neutral pictures based on their relevance for naturalistic settings. In the second step (time optimization), a random selection of ten cue pictures each was presented for 4 s, 6 s, and 8 s to seven adolescent male inhalant users, and pre-post craving response was compared using a Visual Analogue Scale (VAS) for each picture and presentation time. In the third step (validation), craving responses to each of the 30 cue and 30 neutral pictures were analysed among 20 adolescent inhalant users. Findings revealed a significant difference between before and after craving responses for the cue pictures, but not the neutral pictures. Using ROC curves, pictures were ranked in order of craving intensity. Finally, the 20 best cue and 20 neutral pictures were used to develop a 480-s visual cue paradigm. This is the first study to systematically develop an inhalant cue picture paradigm that can be used as a tool to examine cue-induced craving in neurobiological studies. Further research, including validation in larger and more diverse samples, is required. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    Energy Technology Data Exchange (ETDEWEB)

    Wong, S.T.C. [Univ. of California, San Francisco, CA (United States)

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques applied to tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, and electron and confocal microscopy. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now making their way into surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and that all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies, such as holography and virtual reality, that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  3. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    International Nuclear Information System (INIS)

    Wong, S.T.C.

    1997-01-01

    The purpose of this presentation is to discuss the computer processing techniques applied to tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, and electron and confocal microscopy. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now making their way into surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and that all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies, such as holography and virtual reality, that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  4. Virtual Reconstruction of Lost Architectures: from the TLS Survey to AR Visualization

    Science.gov (United States)

    Quattrini, R.; Pierdicca, R.; Frontoni, E.; Barcaglioni, R.

    2016-06-01

    The exploitation of high-quality 3D models for the dissemination of archaeological heritage is currently a topic of active investigation, although Mobile Augmented Reality platforms for historical architecture that would allow low-cost pipelines for effective content are not yet available. The paper presents a virtual anastylosis, starting from historical sources and from a 3D model based on a TLS survey. Several efforts and outputs in augmented or immersive environments exploiting this reconstruction are discussed. The work demonstrates the feasibility of a 3D reconstruction approach for complex architectural shapes starting from point clouds, and its AR/VR exploitation, allowing superimposition with the archaeological evidence. The major contributions are the presentation and discussion of a pipeline from the virtual model to its simplification, showing several outcomes and comparing the supported data qualities as well as the advantages and disadvantages arising from MAR and VR limitations.

  5. VIRTUAL RECONSTRUCTION OF LOST ARCHITECTURES: FROM THE TLS SURVEY TO AR VISUALIZATION

    Directory of Open Access Journals (Sweden)

    R. Quattrini

    2016-06-01

    Full Text Available The exploitation of high-quality 3D models for the dissemination of archaeological heritage is currently a topic of active investigation, although Mobile Augmented Reality platforms for historical architecture that would allow low-cost pipelines for effective content are not yet available. The paper presents a virtual anastylosis, starting from historical sources and from a 3D model based on a TLS survey. Several efforts and outputs in augmented or immersive environments exploiting this reconstruction are discussed. The work demonstrates the feasibility of a 3D reconstruction approach for complex architectural shapes starting from point clouds, and its AR/VR exploitation, allowing superimposition with the archaeological evidence. The major contributions are the presentation and discussion of a pipeline from the virtual model to its simplification, showing several outcomes and comparing the supported data qualities as well as the advantages and disadvantages arising from MAR and VR limitations.

  6. The footprints of visual attention during search with 100% valid and 100% invalid cues.

    Science.gov (United States)

    Eckstein, Miguel P; Pham, Binh T; Shimozaki, Steven S

    2004-06-01

    Human performance during visual search typically improves when spatial cues indicate the possible target locations. In many instances, the performance improvement is quantitatively predicted by a Bayesian or quasi-Bayesian observer in which visual attention simply selects the information at the cued locations without changing the quality of processing or sensitivity and ignores the information at the uncued locations. Aside from the general good agreement between the effect of the cue on model and human performance, there has been little independent confirmation that humans are effectively selecting the relevant information. In this study, we used the classification image technique to assess the effectiveness of spatial cues in the attentional selection of relevant locations and suppression of irrelevant locations indicated by spatial cues. Observers searched for a bright target among dimmer distractors that might appear (with 50% probability) in one of eight locations in visual white noise. The possible target location was indicated using a 100% valid box cue or seven 100% invalid box cues, in which the only potential target location was the uncued one. For both conditions, we found statistically significant perceptual templates shaped as differences of Gaussians at the relevant locations, with no perceptual templates at the irrelevant locations. We did not find statistically significant differences between the shapes of the inferred perceptual templates for the 100% valid and 100% invalid cue conditions. The results confirm the idea that during search visual attention allows the observer to effectively select relevant information and ignore irrelevant information. The results for the 100% invalid cue condition suggest that the selection process is not drawn automatically to the cue but can be under the observer's voluntary control.
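    The classification image technique estimates an observer's perceptual template by averaging the noise fields conditioned on the response. A toy simulation, simplified to one sample per display location and a hypothetical observer who weights only location 0:

```python
import random

def classification_image(noise_fields, responses):
    """Noise-average classification image: mean noise on 'target present'
    responses minus mean noise on 'target absent' responses."""
    n = len(noise_fields[0])
    yes = [f for f, r in zip(noise_fields, responses) if r]
    no = [f for f, r in zip(noise_fields, responses) if not r]
    avg = lambda fields, i: sum(f[i] for f in fields) / len(fields)
    return [avg(yes, i) - avg(no, i) for i in range(n)]

random.seed(1)
template = [1.0, 0.0, 0.0, 0.0]  # hypothetical observer attends location 0 only
fields, resp = [], []
for _ in range(5000):
    noise = [random.gauss(0.0, 1.0) for _ in range(4)]  # white noise per trial
    fields.append(noise)
    # observer responds "present" when the template-weighted sum exceeds 0
    resp.append(sum(w * x for w, x in zip(template, noise)) > 0)
ci = classification_image(fields, resp)
print([round(v, 2) for v in ci])  # large positive at index 0, near 0 elsewhere
```

The recovered image is large only at the attended location, which is the logic the study uses to show that cued locations are selected and uncued ones suppressed.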

  7. Palmyra as it Once Was: 3D Virtual Reconstruction and Visualization of an Irreplaceable Lost Treasure

    Science.gov (United States)

    Denker, A.

    2017-05-01

    Palmyra was a mosaic composed through its colourful past by Assyrians, Parthians, Greeks, and Romans. For centuries, the spectacular ruins and impressive panorama of the antique city captivated and inspired visitors as witnesses of its illustrious history. As a grim consequence of the horrific conflict that engulfed Syria, since May 2015 they are no more to be seen: Palmyra has been deliberately targeted and obliterated, and the ruins have been reduced to rubble. The immense beauty and rich heritage of Palmyra, now lost forever, are reconstructed here as they once were, at the height of the city's glory, in the hope of preserving its memory.

  8. VISUAL TOOLS FOR CROWDSOURCING DATA VALIDATION WITHIN THE GLOBELAND30 GEOPORTAL

    Directory of Open Access Journals (Sweden)

    E. Chuprikova

    2016-06-01

    Full Text Available This research aims to investigate the role of visualization of the user generated data that can empower the geoportal of GlobeLand30 produced by NGCC (National Geomatics Center of China). The focus is set on the development of a concept of tools that can extend the Geo-tagging functionality and make use of it for different target groups. The anticipated tools should improve the continuous data validation, updating and efficient use of the remotely-sensed data distributed within GlobeLand30.

  9. Visual Tools for Crowdsourcing Data Validation Within the GLOBELAND30 Geoportal

    Science.gov (United States)

    Chuprikova, E.; Wu, H.; Murphy, C. E.; Meng, L.

    2016-06-01

    This research aims to investigate the role of visualization of the user generated data that can empower the geoportal of GlobeLand30 produced by NGCC (National Geomatics Center of China). The focus is set on the development of a concept of tools that can extend the Geo-tagging functionality and make use of it for different target groups. The anticipated tools should improve the continuous data validation, updating and efficient use of the remotely-sensed data distributed within GlobeLand30.

  10. Reconstruction and Visualization of Fiber and Laminar Structure in the Normal Human Heart from Ex Vivo DTMRI Data

    International Nuclear Information System (INIS)

    Rohmer, Damien; Sitek, Arkadiusz; Gullberg, Grant T.

    2006-01-01

    Background--The human heart is composed of a helical network of muscle fibers. These fibers are organized to form sheets that are separated by cleavage surfaces. This complex structure of fibers and sheets is responsible for the orthotropic mechanical properties of cardiac muscle. Understanding the configuration of the 3D fiber and sheet structure is important for modeling the mechanical and electrical properties of the heart, and changes in this configuration may be of significant importance for understanding remodeling after myocardial infarction. Methods--Anisotropic least-squares filtering followed by fiber and sheet tracking techniques were applied to Diffusion Tensor Magnetic Resonance Imaging (DTMRI) data of the excised human heart. The fiber configuration was visualized using thin tubes to increase 3-dimensional visual perception of the complex structure. The sheet structures were reconstructed from the DTMRI data, obtaining surfaces that span the wall from the endo- to the epicardium. All visualizations were performed using the high-quality ray-tracing software POV-Ray. Results--The fibers are shown to lie in sheets with concave or convex transmural structure, corresponding to histological studies published in the literature. The fiber angles varied depending on the position between the epi- and endocardium. The sheets had a complex structure that depended on the location within the myocardium; in the apex region the sheets had more curvature. Conclusions--A high-quality visualization algorithm applied to high-quality DTMRI data elicits comprehension of the complex 3-dimensional structure of the fibers and sheets in the heart.
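    Fiber tracking of this kind follows the principal eigenvector of the diffusion tensor from voxel to voxel; a minimal sketch of extracting that direction by power iteration (the tensor values are hypothetical, and the start vector is assumed not to be orthogonal to the dominant eigenvector):

```python
def principal_direction(tensor, iters=200):
    """Principal eigenvector of a symmetric 3x3 diffusion tensor,
    computed by power iteration. Fiber streamlines are traced by
    repeatedly stepping along this direction."""
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        # matrix-vector product, then renormalize
        w = [sum(tensor[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Hypothetical tensor with dominant diffusion along the z axis
D = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 3.0]]
print([round(abs(x), 3) for x in principal_direction(D)])  # -> [0.0, 0.0, 1.0]
```

In real DTMRI pipelines the tensor is estimated per voxel from the diffusion-weighted images, and dedicated eigensolvers replace this sketch.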

  11. Data Visualization and Analysis Tools for the Global Precipitation Measurement (GPM) Validation Network

    Science.gov (United States)

    Morris, Kenneth R.; Schwaller, Mathew

    2010-01-01

    The Validation Network (VN) prototype for the Global Precipitation Measurement (GPM) Mission compares data from the Tropical Rainfall Measuring Mission (TRMM) satellite Precipitation Radar (PR) to similar measurements from U.S. and international operational weather radars. This prototype is a major component of the GPM Ground Validation System (GVS). The VN provides a means for the precipitation measurement community to identify and resolve significant discrepancies between the ground radar (GR) observations and similar satellite observations. The VN prototype is based on research results and computer code described by Anagnostou et al. (2001), Bolen and Chandrasekar (2000), and Liao et al. (2001), and has previously been described by Morris et al. (2007). Morris and Schwaller (2009) describe the PR-GR volume-matching algorithm used to create the VN match-up data set used for the comparisons. This paper describes software tools that have been developed for visualization and statistical analysis of the original and volume-matched PR and GR data.

  12. Visual Servoing Tracking Control of a Ball and Plate System: Design, Implementation and Experimental Validation

    Directory of Open Access Journals (Sweden)

    Ming-Tzu Ho

    2013-07-01

    Full Text Available This paper presents the design, implementation and validation of real-time visual servoing tracking control for a ball and plate system. The position of the ball is measured with a machine vision system. The image processing algorithms of the machine vision system are pipelined and implemented on a field programmable gate array (FPGA) device to meet real-time constraints. A detailed dynamic model of the system is derived for the simulation study. By neglecting the high-order coupling terms, the ball and plate system model is simplified into two decoupled ball and beam systems, and an approximate input-output feedback linearization approach is then used to design the controller for trajectory tracking. The designed control law is implemented on a digital signal processor (DSP). The performance of the developed control system is validated through simulation and experimental studies. Experimental results show that the designed system functions well, with reasonable agreement between experiment and simulation.
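The control design can be illustrated on one of the decoupled ball-and-beam axes. A minimal sketch (the rolling-ball model x_dd = (5/7) g sin(theta), the PD gains, setpoint, and time step are hypothetical choices for illustration, not the paper's parameters): feedback linearization inverts the sine nonlinearity so the outer loop sees a plain double integrator.

```python
import math

g = 9.81
kp, kd = 4.0, 4.0          # PD gains: closed-loop poles at s = -2 (critically damped)

def control(x, x_dot, x_ref):
    """Outer-loop PD law, then inversion of the sin nonlinearity."""
    v = kp * (x_ref - x) - kd * x_dot             # desired acceleration
    s = max(-1.0, min(1.0, 7.0 * v / (5.0 * g)))  # invert x_dd = (5/7) g sin(theta)
    return math.asin(s)                           # beam angle command

# Simulate 5 s of setpoint tracking with forward-Euler integration.
x, x_dot, dt = 0.0, 0.0, 0.001
for _ in range(5000):
    theta = control(x, x_dot, x_ref=0.2)
    x_dd = (5.0 / 7.0) * g * math.sin(theta)
    x_dot += x_dd * dt
    x += x_dot * dt
print(round(x, 3))   # ball settles near the 0.2 m setpoint
```

Once linearized, the same PD tuning applies independently to both plate axes, which is the point of neglecting the high-order coupling terms.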

  13. RECONSTRUCTION, QUANTIFICATION, AND VISUALIZATION OF FOREST CANOPY BASED ON 3D TRIANGULATIONS OF AIRBORNE LASER SCANNING POINT DATA

    Directory of Open Access Journals (Sweden)

    J. Vauhkonen

    2015-03-01

    Full Text Available Reconstruction of three-dimensional (3D) forest canopy is described and quantified using airborne laser scanning (ALS) data with densities of 0.6–0.8 points m^-2 and field measurements aggregated at resolutions of 400–900 m^2. The reconstruction was based on computational geometry, topological connectivity, and numerical optimization. More precisely, triangulations and their filtrations, i.e. ordered sets of simplices belonging to the triangulations, based on the point data were analyzed. Triangulating the ALS point data corresponds to subdividing the underlying space of the points into weighted simplicial complexes with weights quantifying the (empty) space delimited by the points. Reconstructing the canopy volume populated by biomass will thus likely require filtering to exclude that volume from canopy voids. The approaches applied for this purpose were (i) to optimize the degree of filtration with respect to the field measurements, and (ii) to predict this degree by means of analyzing the persistent homology of the obtained triangulations, which is applied for the first time for vegetation point clouds. When derived from optimized filtrations, the total tetrahedral volume had a high degree of determination (R^2) with the stem volume considered, both alone (R^2 = 0.65) and together with other predictors (R^2 = 0.78). When derived by analyzing the topological persistence of the point data and without any field input, the R^2 values were lower, but the predictions still showed a correlation with the field-measured stem volumes. Finally, producing realistic visualizations of a forested landscape using the persistent homology approach is demonstrated.
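The filtration idea can be sketched in miniature: triangulate the point cloud, order the tetrahedra by a size weight, and sum the volume of those that survive a cutoff. A minimal sketch (synthetic points; the longest-edge criterion is a simplified stand-in for the paper's filtration weights, not its actual method):

```python
import itertools, math

def tet_volume(a, b, c, d):
    """Unsigned volume of a tetrahedron via the scalar triple product."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6.0

def longest_edge(*verts):
    return max(math.dist(p, q) for p, q in itertools.combinations(verts, 2))

def filtered_volume(points, tets, alpha):
    """Sum volumes of the tetrahedra whose longest edge is at most alpha."""
    total = 0.0
    for tet in tets:
        verts = [points[i] for i in tet]
        if longest_edge(*verts) <= alpha:
            total += tet_volume(*verts)
    return total

# Unit cube split into the classic 5-tetrahedron decomposition.
pts = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]
tets = [(0,1,3,4),(2,1,3,6),(5,1,4,6),(7,3,4,6),(1,3,4,6)]
print(filtered_volume(pts, tets, alpha=1.5))  # ~1.0: all simplices kept
print(filtered_volume(pts, tets, alpha=0.5))  # -> 0.0: all simplices filtered out
```

In the paper the degree of filtration is either optimized against field plots or predicted from the persistent homology of the triangulation; here it is just the free parameter `alpha`.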

  14. (Re)Constructing the Wicked Problem Through the Visual and the Verbal

    DEFF Research Database (Denmark)

    Holm Jacobsen, Peter; Harty, Chris; Tryggestad, Kjell

    2016-01-01

    Wicked problems are open-ended and complex societal problems. There is a lack of empirical research into the dynamics and mechanisms that (re)construct problems to become wicked. This paper builds on an ethnographic study of a dialogue-based architect competition to do just that. The competition...... processes create new knowledge and insights, but at the same time present new problems related to the ongoing verbal feedback. The design problem being (re)constructed appears as Heracles' fight with Hydra: every time Heracles cut off a head, two new heads grew back. The paper contributes to understanding...... the relationship between the visual and the verbal (dialogue) in complex design processes in the early phases of large construction projects, and how the dynamic interplay between the design visualization and verbal dialogue develops before the competition produces, or negotiates, "a winning design".

  15. Reliability and validity of the visual analogue scale for disability in patients with chronic musculoskeletal pain.

    Science.gov (United States)

    Boonstra, Anne M; Schiphorst Preuper, Henrica R; Reneman, Michiel F; Posthumus, Jitze B; Stewart, Roy E

    2008-06-01

    The objective of the study was to determine the reliability and concurrent validity of a visual analogue scale (VAS) for disability as a single-item instrument measuring disability in chronic pain patients. A test-retest design was used for the reliability study and a cross-sectional design for the validity study. The setting was a general rehabilitation centre and a university rehabilitation centre. The study population consisted of patients over 18 years of age, suffering from chronic musculoskeletal pain; 52 patients in the reliability study, 344 patients in the validity study. Main outcome measures were as follows. Reliability study: Spearman's correlation coefficients (rho values) of the test and retest data of the VAS for disability; validity study: rho values of the VAS disability scores with the scores on four domains of the Short-Form Health Survey (SF-36) and VAS pain scores, and with Roland-Morris Disability Questionnaire scores in chronic low back pain patients. Results were as follows: in the reliability study rho values varied from 0.60 to 0.77; in the validity study rho values of VAS disability scores with SF-36 domain scores varied from 0.16 to 0.51, with Roland-Morris Disability Questionnaire scores from 0.38 to 0.43, and with VAS pain scores from 0.76 to 0.84. The conclusion of the study was that the reliability of the VAS for disability is moderate to good. Because of a weak correlation with other disability instruments and a strong correlation with the VAS for pain, however, its validity is questionable.
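The test-retest statistic used above, Spearman's rho, is the Pearson correlation of the rank-transformed scores. A minimal sketch with made-up VAS disability scores (the data are illustrative; ties are not handled in this toy version):

```python
def ranks(xs):
    """Rank transform (1 = smallest); assumes no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

session1 = [62, 35, 80, 14, 55, 71, 40, 25]   # VAS disability (0-100), first visit
session2 = [58, 41, 77, 20, 49, 69, 36, 30]   # same patients, retest
print(round(spearman(session1, session2), 2))  # -> 0.98, strong rank agreement
```

With real clinical data (ties are common on a VAS), a tie-aware implementation such as `scipy.stats.spearmanr` would be the practical choice.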

  16. Validation of the MASK-rhinitis visual analogue scale on smartphone screens to assess allergic rhinitis control

    NARCIS (Netherlands)

    Caimmi, D.; Baiz, N.; Tanno, L. K.; Demoly, P.; Arnavielhe, S.; Murray, R.; Bedbrook, A.; Bergmann, K. C.; de Vries, G.; Fokkens, W. J.; Fonseca, J.; Haahtela, T.; Keil, T.; Kuna, P.; Mullol, J.; Papadopoulos, N.; Passalacqua, G.; Samolinski, B.; Tomazic, P. V.; Valiulis, A.; van Eerd, M.; Wickman, M.; Annesi-Maesano, I.; Bousquet, J.; Agache, I.; Angles, R.; Anto, J. M.; Asayag, E.; Bacci, E.; Bachert, C.; Baroni, I.; Barreto, B. A.; Bedolla-Barajas, M.; Bertorello, L.; Bewick, M.; Bieber, T.; Birov, S.; Bindslev-Jensen, C.; Blua, A.; Bochenska Marciniak, M.; Bogus-Buczynska, I.; Bosnic-Ancevich, S.; Bosse, I.; Bourret, R.; Bucca, C.; Buonaiuto, R.; Caiazza, D.; Caillot, D.; Caimmi, D. P.; Camargos, P.; Canfora, G.; Cardona, V.; Carriazo, A. M.; Cartier, C.; Castellano, G.; Chavannes, N. H.; Ciaravolo, M. M.; Cingi, C.; Ciceran, A.; Colas, L.; Colgan, E.; Coll, J.; Conforti, D.; Correira de Sousa, J.; Cortés-Grimaldo, R. M.; Corti, F.; Costa, E.; Courbis, A. L.; Cruz, A.; Custovic, A.; Dario, C.; da Silva, M.; Dauvilliers, Y.; de Blay, F.; Dedeu, T.; de Feo, G.; de Martino, B.; Di Capua, S.; Di Carluccio, N.; Dray, G.; Dubakiene, R.; Eller, E.; Emuzyte, R.; Espinoza-Contreras, J. M.; Estrada-Cardona, A.; Farrell, J.; Ferrero, J.; Fontaine, J. F.; Forti, S.; Gálvez-Romero, J. L.; Garcia Cruz, M. H.; García-Cobas, C. I.; Gemicioğlu, B.; Gerth van Wijck, R.; Guidacci, M.; Gómez-Vera, J.; Guldemond, N. A.; Gutter, Z.; Hajjam, J.; Hellings, P.; Hernández-Velázquez, L.; Illario, M.; Ivancevich, J. C.; Jares, E.; Joos, G.; Just, J.; Kalayci, O.; Kalyoncu, A. F.; Karjalainen, J.; Khaltaev, N.; Klimek, L.; Kull, I.; Kuna, T. P.; Kvedariene, V.; Kolek, V.; Krzych-Fałta, E.; Kupczyk, M.; Lacwik, P.; Larenas-Linnemann, D.; Laune, D.; Lauri, D.; Lavrut, J.; Lessa, M.; Levato, G.; Lewis, L.; Lieten, I.; Lipiec, A.; Louis, R.; Luna-Pech, J. A.; Magnan, A.; Malva, J.; Maspero, J. F.; Mayora, O.; Medina-Ávalos, M. 
A.; Melen, E.; Menditto, E.; Millot-Keurinck, J.; Moda, G.; Morais-Almeida, M.; Mösges, R.; Mota-Pinto, A.; Muraro, A.; Noguès, M.; Nalin, M.; Napoli, L.; Neffen, H.; O'Hehir, R.; Olivé Elias, M.; Onorato, G.; Palkonen, S.; Pépin, J. L.; Pereira, A. M.; Persico, M.; Pfaar, O.; Pozzi, A. C.; Prokopakis, E. P.; Raciborski, F.; Rizzo, J. A.; Robalo-Cordeiro, C.; Rodríguez-González, M.; Rolla, G.; Roller-Wirnsberger, R. E.; Romano, A.; Romano, M.; Salimäki, J.; Serpa, F. S.; Shamai, S.; Sierra, M.; Sova, M.; Sorlini, M.; Stellato, C.; Stelmach, R.; Strandberg, T.; Stroetman, V.; Stukas, R.; Szylling, A.; Tibaldi, V.; Todo-Bom, A.; Toppila-Salmi, S.; Tomazic, P.; Trama, U.; Triggiani, M.; Valero, A.; Valovirta, E.; Vasankari, T.; Vatrella, A.; Ventura, M. T.; Verissimo, M. T.; Viart, F.; Williams, S.; Wagenmann, M.; Wanscher, C.; Westman, M.; Young, I.; Yorgancioglu, A.; Zernotti, E.; Zurbierber, T.; Zurkuhlen, A.; de Oliviera, B.; Senn, A.

    2017-01-01

    Background: Visual Analogue Scale (VAS) is a validated tool to assess control in allergic rhinitis patients. Objective: The aim of this study was to validate the use of VAS in the MASK-rhinitis (MACVIA-ARIA Sentinel NetworK for allergic rhinitis) app (Allergy Diary) on smartphone screens to

  17. Reconstruction and Visualization of Fiber and Laminar Structure inthe Normal Human Heart from Ex Vivo DTMRI Data

    Energy Technology Data Exchange (ETDEWEB)

    Rohmer, Damien; Sitek, Arkadiusz; Gullberg, Grant T.

    2006-12-18

    Background - The human heart is composed of a helical network of muscle fibers. These fibers are organized to form sheets that are separated by cleavage surfaces. This complex structure of fibers and sheets is responsible for the orthotropic mechanical properties of cardiac muscle. The understanding of the configuration of the 3D fiber and sheet structure is important for modeling the mechanical and electrical properties of the heart, and changes in this configuration may be of significant importance to understand the remodeling after myocardial infarction. Methods - Anisotropic least square filtering followed by fiber and sheet tracking techniques were applied to Diffusion Tensor Magnetic Resonance Imaging (DTMRI) data of the excised human heart. The fiber configuration was visualized by using thin tubes to increase 3-dimensional visual perception of the complex structure. The sheet structures were reconstructed from the DTMRI data, obtaining surfaces that span the wall from the endo- to the epicardium. All visualizations were performed using the high-quality ray-tracing software POV-Ray. Results - The fibers are shown to lie in sheets that have concave or convex transmural structure which correspond to histological studies published in the literature. The fiber angles varied depending on the position between the epi- and endocardium. The sheets had a complex structure that depended on the location within the myocardium. In the apex region the sheets had more curvature. Conclusions - A high-quality visualization algorithm applied to high-quality DTMRI data is demonstrated to elicit comprehension of the complex 3-dimensional structure of the fibers and sheets in the heart.

  18. Stereovision-based integrated system for point cloud reconstruction and simulated brain shift validation.

    Science.gov (United States)

    Yang, Xiaochen; Clements, Logan W; Luo, Ma; Narasimhan, Saramati; Thompson, Reid C; Dawant, Benoit M; Miga, Michael I

    2017-07-01

    Intraoperative soft tissue deformation, referred to as brain shift, compromises the application of current image-guided surgery navigation systems in neurosurgery. A computational model driven by sparse data has been proposed as a cost-effective method to compensate for cortical surface and volumetric displacements. We present a mock environment developed to acquire stereoimages from a tracked operating microscope and to reconstruct three-dimensional point clouds from these images. A reconstruction error of 1 mm is estimated by using a phantom with a known geometry and independently measured deformation extent. The microscope is tracked via an attached tracking rigid body that facilitates the recording of the position of the microscope via a commercial optical tracking system as it moves during the procedure. Point clouds, reconstructed under different microscope positions, are registered into the same space to compute the feature displacements. Using our mock craniotomy device, realistic cortical deformations are generated. When comparing our tracked microscope stereo-pair measure of mock vessel displacements to that of the measurement determined by the independent optically tracked stylus marking, the displacement error was [Formula: see text] on average. These results demonstrate the practicality of using a tracked stereoscopic microscope as an alternative to laser range scanners to collect sufficient intraoperative information for brain shift correction.
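The core reconstruction step, recovering a 3D point from a calibrated stereo pair, can be sketched with linear (DLT) triangulation. A minimal sketch (the intrinsics, baseline, and point are synthetic values, not the paper's microscope calibration):

```python
import numpy as np

# Hypothetical pinhole intrinsics and a 60 mm stereo baseline along x.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                 # left camera
P2 = K @ np.hstack([np.eye(3), np.array([[-60.0], [0.0], [0.0]])])  # right camera

def project(P, X):
    """Project a 3D point through a 3x4 camera matrix to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, u1, u2):
    """Linear DLT triangulation: solve the homogeneous system A X = 0 via SVD."""
    A = np.vstack([u1[0] * P1[2] - P1[0], u1[1] * P1[2] - P1[1],
                   u2[0] * P2[2] - P2[0], u2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]      # right singular vector of smallest sigma
    return X[:3] / X[3]

X_true = np.array([10.0, -5.0, 200.0])   # mm, in the left-camera frame
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_hat, 3))                # recovers the simulated point exactly
```

In the described system, the optically tracked rigid body supplies the pose that maps each such camera-frame point into a common space, so that clouds from different microscope positions can be registered and compared.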

  19. Visualization of postoperative anterior cruciate ligament reconstruction bone tunnels: Reliability of standard radiographs, CT scans, and 3D virtual reality images

    NARCIS (Netherlands)

    D.E. Meuffels (Duncan); J.W. Potters (Jan Willem); A.H.J. Koning (Anton); C.H. Brown Jr Jr. (Charles); J.A.N. Verhaar (Jan); M. Reijman (Max)

    2011-01-01

    textabstractBackground and purpose: Non-anatomic bone tunnel placement is the most common cause of a failed ACL reconstruction. Accurate and reproducible methods to visualize and document bone tunnel placement are therefore important. We evaluated the reliability of standard radiographs, CT scans,

  20. Effect of cognitive challenge on the postural control of patients with ACL reconstruction under visual and surface perturbations.

    Science.gov (United States)

    Lion, Alexis; Gette, Paul; Meyer, Christophe; Seil, Romain; Theisen, Daniel

    2018-02-01

    Our study aimed to evaluate the effect of cognitive challenge on double-leg postural control under visual and surface perturbations in patients with anterior cruciate ligament reconstruction (ACLR) cleared to return to sport. Double-leg stance postural control of 19 rehabilitated patients with ACLR (age: 24.8 ± 6.7 years, time since surgery: 9.2 ± 1.6 months) and 21 controls (age: 24.9 ± 3.7 years) was evaluated in eight randomized situations combining two cognitive (with and without silent backward counting in steps of seven), two visual (eyes open, eyes closed) and two surface (stable support, foam support) conditions. Sway area and sway path of the centre of foot pressure were measured during three 20-s recordings for each situation. Higher values indicated poorer postural control. Generally, postural control of patients with ACLR and controls was similar for sway area and sway path (p > 0.05). The lack of visual anchorage and the disturbance of the plantar input by the foam support increased sway area and sway path (p postural control during double-leg stance tests. The use of a dual task paradigm under increased task complexity modified postural control, but in a similar way in patients with ACLR as in healthy controls. Double-leg stance tests, even under challenging conditions, are not sensitive enough to reveal postural control differences between rehabilitated patients with ACLR and controls. Copyright © 2017 Elsevier B.V. All rights reserved.
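The two outcome measures above can be computed directly from a centre-of-pressure (COP) trace: sway path is the total COP excursion, and sway area is commonly operationalized as the 95% prediction ellipse. A minimal sketch (synthetic COP data; the ellipse definition is a common choice, not necessarily the one used in the study):

```python
import math

def sway_path(xs, ys):
    """Total length of the COP trajectory (sum of inter-sample distances)."""
    return sum(math.dist((xs[i], ys[i]), (xs[i + 1], ys[i + 1]))
               for i in range(len(xs) - 1))

def sway_area(xs, ys):
    """Area of the 95% ellipse: pi * chi2_{2,0.95} * sqrt(det(covariance))."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return math.pi * 5.991 * math.sqrt(max(sxx * syy - sxy ** 2, 0.0))

# 20 s of fake 100 Hz COP data: a slow circular drift of radius 5 mm.
t = [i / 100.0 for i in range(2000)]
xs = [5.0 * math.cos(2 * math.pi * 0.25 * s) for s in t]
ys = [5.0 * math.sin(2 * math.pi * 0.25 * s) for s in t]
print(round(sway_path(xs, ys), 1), round(sway_area(xs, ys), 1))
```

Larger values of either measure indicate poorer postural control, which is how the visual and surface perturbations in the study register in the data.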

  1. Prevention of sexually transmitted diseases among visually impaired people: educational text validation 1

    Science.gov (United States)

    Oliveira, Giselly Oseni Barbosa; Cavalcante, Luana Duarte Wanderley; Pagliuca, Lorita Marlena Freitag; de Almeida, Paulo César; Rebouças, Cristiana Brasil de Almeida

    2016-01-01

    ABSTRACT Objective: to validate an educational text in the context of Sexually Transmitted Diseases (STD) for visually impaired persons, making it accessible to this population. Method: a validation study, in a virtual environment. Data collection occurred from May to September 2012 by emailing the subjects; the expert panel comprised seven content experts on STDs. Analysis was based on the considerations of the experts about Objectives, Structure and Presentation, and Relevance. Results: in the Objectives and Structure and Presentation blocks, 77 (84.6%) and 48 (85.7%) of the items were rated fully adequate or appropriate, respectively. In the Relevance block, items 3.2 - Allows transfer and generalization of learning, and 3.5 - Portrays aspects needed to clarify the family, showed poor agreement indices of 0.42 and 0.57, respectively. The text was then reformulated according to the relevant suggestions. Conclusion: the text was validated regarding the content of sexually transmitted diseases. A total of 35 stanzas were removed and nine others included, following the recommendations of the experts. PMID:27556880

  2. The validation of the visual analogue scale for patient satisfaction after total hip arthroplasty.

    Science.gov (United States)

    Brokelman, Roy B G; Haverkamp, Daniel; van Loon, Corné; Hol, Annemiek; van Kampen, Albert; Veth, Rene

    2012-06-01

    INTRODUCTION: Patient satisfaction is becoming more important in our modern health care system. The assessment of satisfaction is difficult because it is a multifactorial item for which no gold standard exists. One of the potential methods of measuring satisfaction is by using the well-known visual analogue scale (VAS). In this study, we validated the VAS for satisfaction. PATIENT AND METHODS: In this prospective study, we studied 147 patients (153 hips). The construct validity was measured using the Spearman correlation test that compares the satisfaction VAS with the Harris hip score, pain VAS at rest and during activity, Oxford hip score, Short Form 36 and Western Ontario McMaster Universities Osteoarthritis Index. The reliability was tested using the intra-class coefficient. RESULTS: The correlation test showed correlations in the range of 0.40-0.80. The satisfaction VAS had a high correlation with the pain VAS and Oxford hip score, which could mean that pain is one of the most important factors in patient satisfaction. The intra-class coefficient was 0.95. CONCLUSIONS: There is a moderate to marked degree of correlation between the satisfaction VAS and the currently available subjective and objective scoring systems. The intra-class coefficient of 0.95 indicates an excellent test-retest reliability. The satisfaction VAS is a simple instrument to quantify the satisfaction of a patient after total hip arthroplasty. In this study, we showed that the satisfaction VAS has a good validity and reliability.
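The intra-class coefficient reported above can be computed from the ANOVA mean squares of a subjects-by-sessions table. A minimal sketch with made-up satisfaction scores (the abstract does not state which ICC form was used; the two-way random, single-measures ICC(2,1) is shown as one common choice):

```python
def icc_2_1(rows):
    """ICC(2,1) from ANOVA mean squares; rows: one list per subject,
    one entry per session/rater."""
    n, k = len(rows), len(rows[0])
    grand = sum(sum(r) for r in rows) / (n * k)
    row_means = [sum(r) / k for r in rows]
    col_means = [sum(r[j] for r in rows) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)   # sessions
    mse = sum((rows[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k)) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Six subjects, test and retest satisfaction VAS (0-10), illustrative data.
scores = [[7.2, 7.0], [8.5, 8.7], [6.1, 6.4], [9.0, 8.8], [5.5, 5.2], [7.8, 8.0]]
print(round(icc_2_1(scores), 2))   # -> 0.99: excellent test-retest reliability
```

Values above roughly 0.9 are conventionally read as excellent reliability, consistent with the 0.95 reported in the study.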

  3. Validity and Reliability of Visual Analog Scale Foot and Ankle: The Turkish Version.

    Science.gov (United States)

    Gur, Gozde; Turgut, Elif; Dilek, Burcu; Baltaci, Gul; Bek, Nilgun; Yakut, Yavuz

    The present study tested the reliability and validity of the Turkish version of the visual analog scale foot and ankle (VAS-FA) among healthy subjects and patients with foot problems. A total of 128 participants, 65 healthy subjects and 63 patients with foot problems, were evaluated. The VAS-FA was translated into Turkish and administered to the 128 subjects on 2 separate occasions with a 5-day interval. The test-retest reliability and internal consistency were assessed with the intraclass correlation coefficient and Cronbach's α. The validity was assessed using the correlations with Turkish versions of the Foot Function Index, the Foot and Ankle Outcome Score, and the Short-Form 36-item Health Survey. A statistically significant difference was found between the healthy group and the patient group in the overall score and subscale scores of the VAS-FA (p Foot Function Index, Foot and Ankle Outcome Score, and Short-Form 36-item Health Survey scores in both the healthy and patient groups. The Turkish version of the VAS-FA is sensitive enough to distinguish foot- and ankle-specific pathologic conditions from asymptomatic conditions. The Turkish version of the VAS-FA is a reliable and valid method and can be used for foot-related problems. Copyright © 2017 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
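The internal-consistency statistic used above, Cronbach's α, compares the sum of per-item variances with the variance of the total score. A minimal sketch with made-up item scores (illustrative data, not the study's):

```python
def cronbach_alpha(items):
    """Cronbach's alpha; items: one list per item, one score per respondent."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

items = [[4, 5, 3, 5, 2, 4],    # item 1, six respondents
         [4, 4, 3, 5, 1, 4],    # item 2
         [5, 5, 2, 4, 2, 3]]    # item 3
print(round(cronbach_alpha(items), 2))   # -> 0.92: high internal consistency
```

Values of α above about 0.9 indicate the items measure a common construct, the property being tested for the translated VAS-FA subscales.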

  4. MRI segmentation by active contours model, 3D reconstruction, and visualization

    Science.gov (United States)

    Lopez-Hernandez, Juan M.; Velasquez-Aguilar, J. Guadalupe

    2005-02-01

    The advances in 3D data modelling methods are becoming increasingly popular in the areas of biology, chemistry and medical applications. The Nuclear Magnetic Resonance Imaging (NMRI) technique has progressed at a spectacular rate over the past few years, and it is now used in many applications throughout the body, in both anatomical and functional investigations. In this paper we present the application of Zernike polynomials to a 3D mesh model of the head, using contours extracted from cross-sectional slices by an active contour model, and we propose visualization with OpenGL 3D graphics of the 2D-3D (slice-surface) information as a diagnostic aid in medical applications.

  5. Segmentation-free statistical image reconstruction for polyenergetic x-ray computed tomography with experimental validation

    International Nuclear Information System (INIS)

    Elbakri, Idris A; Fessler, Jeffrey A

    2003-01-01

    This paper describes a statistical image reconstruction method for x-ray CT that is based on a physical model that accounts for the polyenergetic x-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. Unlike our earlier work, the proposed algorithm does not require pre-segmentation of the object into the various tissue classes (e.g., bone and soft tissue) and allows mixed pixels. The attenuation coefficient of each voxel is modelled as the product of its unknown density and a weighted sum of energy-dependent mass attenuation coefficients. We formulate a penalized-likelihood function for this polyenergetic model and develop an iterative algorithm for estimating the unknown density of each voxel. Applying this method to simulated x-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artefacts relative to conventional beam hardening correction methods. We also apply the method to real data acquired from a phantom containing various concentrations of potassium phosphate solution. The algorithm reconstructs an image with accurate density values for the different concentrations, demonstrating its potential for quantitative CT applications
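The polyenergetic forward model underlying the reconstruction can be illustrated directly: the detector signal is a spectrum-weighted sum of monoenergetic Beer's-law terms, so the measured log-attenuation is a nonlinear function of thickness. A minimal sketch (the two-bin spectrum and mass attenuation values are hypothetical illustrative numbers):

```python
import math

spectrum = {40: 0.6, 80: 0.4}            # keV -> relative fluence (sums to 1)
mass_atten_water = {40: 0.27, 80: 0.18}  # cm^2/g, illustrative values
density = 1.0                            # g/cm^3 (water)

def measurement(thickness_cm):
    """Expected detector signal through `thickness_cm` of water:
    a fluence-weighted sum of monoenergetic Beer's-law attenuations."""
    return sum(w * math.exp(-density * mass_atten_water[e] * thickness_cm)
               for e, w in spectrum.items())

def effective_mu(thickness_cm):
    """Apparent linear attenuation if the beam is (wrongly) assumed
    monoenergetic: -log(I/I0) / thickness."""
    return -math.log(measurement(thickness_cm)) / thickness_cm

print(round(effective_mu(1.0), 4), round(effective_mu(20.0), 4))
# The apparent mu falls with thickness: low-energy photons are absorbed
# preferentially (beam hardening), which a monoenergetic model mistakes
# for lower density - the artefact the statistical model avoids.
```

The paper's attenuation model generalizes this: each voxel's coefficient is its unknown density times a weighted sum of energy-dependent mass attenuation coefficients, and the penalized-likelihood iteration estimates the densities under this nonlinear forward model.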

  6. Segmentation-free statistical image reconstruction for polyenergetic x-ray computed tomography with experimental validation.

    Science.gov (United States)

    Idris A, Elbakri; Fessler, Jeffrey A

    2003-08-07

    This paper describes a statistical image reconstruction method for x-ray CT that is based on a physical model that accounts for the polyenergetic x-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. Unlike our earlier work, the proposed algorithm does not require pre-segmentation of the object into the various tissue classes (e.g., bone and soft tissue) and allows mixed pixels. The attenuation coefficient of each voxel is modelled as the product of its unknown density and a weighted sum of energy-dependent mass attenuation coefficients. We formulate a penalized-likelihood function for this polyenergetic model and develop an iterative algorithm for estimating the unknown density of each voxel. Applying this method to simulated x-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artefacts relative to conventional beam hardening correction methods. We also apply the method to real data acquired from a phantom containing various concentrations of potassium phosphate solution. The algorithm reconstructs an image with accurate density values for the different concentrations, demonstrating its potential for quantitative CT applications.

  7. 3D reconstruction of a patient-specific surface model of the proximal femur from calibrated x-ray radiographs: A validation study

    International Nuclear Information System (INIS)

    Zheng Guoyan; Schumann, Steffen

    2009-01-01

    Twenty-three femurs (one plastic bone and twenty-two cadaver bones) with both nonpathologic and pathologic cases were considered to validate a statistical shape model based technique for three-dimensional (3D) reconstruction of a patient-specific surface model from calibrated x-ray radiographs. The 3D reconstruction technique is based on an iterative nonrigid registration of the features extracted from a statistically instantiated 3D surface model to those interactively identified from the radiographs. The surface models reconstructed from the radiographs were compared to the associated ground truths derived either from a 3D CT-scan reconstruction method or from a 3D laser-scan reconstruction method, and an average error distance of 0.95 mm was found. Compared to the existing works, our approach has the advantage of seamlessly handling both nonpathologic and pathologic cases even when the statistical shape model that we used was constructed from surface models of nonpathologic bones.
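The validation metric above, an average error distance between reconstructed and ground-truth surfaces, can be sketched with a nearest-neighbour distance over point samples. A minimal sketch (toy point sets; the brute-force nearest-point distance is one common operationalization, not necessarily the paper's exact metric):

```python
import math

def avg_nearest_distance(recon, truth):
    """Mean distance from each reconstructed point to its nearest
    ground-truth point (a symmetric average of both directions is also common)."""
    return sum(min(math.dist(p, q) for q in truth) for p in recon) / len(recon)

# Ground truth: a flat 10x10 patch of surface samples, 0.5 mm spacing.
truth = [(i * 0.5, j * 0.5, 0.0) for i in range(10) for j in range(10)]
# "Reconstruction": the same patch offset 0.3 mm along the surface normal.
recon = [(x, y, 0.3) for (x, y, _) in truth]
print(round(avg_nearest_distance(recon, truth), 3))   # -> 0.3
```

For dense meshes, a k-d tree (e.g. `scipy.spatial.cKDTree`) replaces the brute-force inner loop; the reported 0.95 mm corresponds to this kind of surface-to-surface average.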

  8. Reconstructive endovascular treatment of vertebral artery dissecting aneurysms with the Low-profile Visualized Intraluminal Support (LVIS) device.

    Directory of Open Access Journals (Sweden)

    Chuan-Chuan Wang

    Full Text Available The Low-profile Visualized Intraluminal Support (LVIS) device is a new generation of self-expanding braided stent recently introduced in China for stent-assisted coiling of intracranial aneurysms. The aim of our study is to evaluate the feasibility, safety, and efficacy of the LVIS device in reconstructive treatment of vertebral artery dissecting aneurysms (VADAs). We retrospectively reviewed the neurointerventional database of our institution from June 2014 to May 2016. Patients who underwent endovascular treatment of VADAs with LVIS stents were included in this study. Clinical presentation, aneurysmal characteristics, technical feasibility, procedural complications, and angiographic and clinical follow-up results were evaluated. 38 patients with VADAs who underwent treatment with the LVIS stent were identified, including 3 ruptured VADAs. All VADAs were successfully treated with reconstructive techniques, including stent-assisted coiling (n = 34) and stenting only (n = 4). Post-procedural complications developed in 3 patients (7.9%), including two small brainstem infarctions and one delayed thromboembolic event. Complications resulted in one case of minor permanent morbidity (2.6%). There was no procedure-related mortality. A follow-up angiogram was available in 30 patients at an average of 8.3 months (range, 2 to 30 months), which revealed complete occlusion in 23 patients (76.7%), residual neck in five patients (16.7%), and residual sac in two patients (6.7%). The follow-up of 25 aneurysms with incomplete immediate occlusion revealed 22 aneurysms (88%) with improvement in the Raymond class. One aneurysm (3.3%) showed recanalization and required retreatment. Clinical follow-up at 5-28 months (mean 14.1 months) was achieved in 36 patients because two patients died of pancreatic cancer and basal ganglia hemorrhage, respectively. No new neurologic deterioration or aneurysm (re)bleeding was observed. Our preliminary experience with reconstruction of VADAs with

  9. Reconstructing the Curve-Skeletons of 3D Shapes Using the Visual Hull.

    Science.gov (United States)

    Livesu, Marco; Guggeri, Fabio; Scateni, Riccardo

    2012-11-01

    Curve-skeletons are the most important descriptors for shapes, capable of capturing in a synthetic manner the most relevant features. They are useful for many different applications: from shape matching and retrieval, to medical imaging, to animation. This has led, over the years, to the development of several different techniques for extraction, each trying to comply with specific goals. We propose a novel technique which stems from the intuition of reproducing what a human being does to deduce the shape of an object: holding it in his or her hand and rotating it. To accomplish this, we use the formal definitions of epipolar geometry and visual hull. We show how it is possible to infer the curve-skeleton of a broad class of 3D shapes, along with an estimation of the radii of the maximal inscribed balls, by gathering information about the medial axes of their projections on the image planes of the stereographic vision. It is worth pointing out that our method works indifferently on (even unoriented) polygonal meshes, voxel models, and point clouds. Moreover, it is insensitive to noise, pose-invariant, resolution-invariant, and robust when applied to incomplete data sets.

  10. Genotet: An Interactive Web-based Visual Exploration Framework to Support Validation of Gene Regulatory Networks.

    Science.gov (United States)

    Yu, Bowen; Doraiswamy, Harish; Chen, Xi; Miraldi, Emily; Arrieta-Ortiz, Mario Luis; Hafemeister, Christoph; Madar, Aviv; Bonneau, Richard; Silva, Cláudio T

    2014-12-01

    Elucidation of transcriptional regulatory networks (TRNs) is a fundamental goal in biology, and one of the most important components of TRNs are transcription factors (TFs), proteins that specifically bind to gene promoter and enhancer regions to alter target gene expression patterns. Advances in genomic technologies as well as advances in computational biology have led to multiple large regulatory network models (directed networks) each with a large corpus of supporting data and gene-annotation. There are multiple possible biological motivations for exploring large regulatory network models, including: validating TF-target gene relationships, figuring out co-regulation patterns, and exploring the coordination of cell processes in response to changes in cell state or environment. Here we focus on queries aimed at validating regulatory network models, and on coordinating visualization of primary data and directed weighted gene regulatory networks. The large size of both the network models and the primary data can make such coordinated queries cumbersome with existing tools and, in particular, inhibits the sharing of results between collaborators. In this work, we develop and demonstrate a web-based framework for coordinating visualization and exploration of expression data (RNA-seq, microarray), network models and gene-binding data (ChIP-seq). Using specialized data structures and multiple coordinated views, we design an efficient querying model to support interactive analysis of the data. Finally, we show the effectiveness of our framework through case studies for the mouse immune system (a dataset focused on a subset of key cellular functions) and a model bacteria (a small genome with high data-completeness).

  11. Validity and sensitivity of a model for assessment of impacts of river floodplain reconstruction on protected and endangered species

    International Nuclear Information System (INIS)

    Nooij, R.J.W. de; Lotterman, K.M.; Sande, P.H.J. van de; Pelsma, T.; Leuven, R.S.E.W.; Lenders, H.J.R.

    2006-01-01

    Environmental Impact Assessment (EIA) must account for legally protected and endangered species. Uncertainties relating to the validity and sensitivity of EIA arise from predictions and valuation of effects on these species. This paper presents a validity and sensitivity analysis of a model (BIO-SAFE) for assessment of impacts of land use changes and physical reconstruction measures on legally protected and endangered river species. The assessment is based on links between species (higher plants, birds, mammals, reptiles and amphibians, butterflies and dragon- and damselflies) and ecotopes (landscape ecological units, e.g., river dune, soft wood alluvial forests), and on value assignment to protected and endangered species using different valuation criteria (i.e., EU Habitats and Birds directive, Conventions of Bern and Bonn and Red Lists). The validity of BIO-SAFE has been tested by comparing predicted effects of landscape changes on the diversity of protected and endangered species with observed changes in biodiversity in five reconstructed floodplains. The sensitivity of BIO-SAFE to value assignment has been analysed using data of a Strategic Environmental Assessment concerning the Spatial Planning Key Decision for reconstruction of the Dutch floodplains of the river Rhine, aimed at flood defence and ecological rehabilitation. The weights given to the valuation criteria for protected and endangered species were varied and the effects on ranking of alternatives were quantified. A statistically significant correlation (p < 0.01) between predicted and observed values for protected and endangered species was found. The sensitivity of the model to value assignment proved to be low. Comparison of five realistic valuation options showed that different rankings of scenarios predominantly occur when valuation criteria are left out of the assessment. Based on these results we conclude that linking species to ecotopes can be used for adequate impact assessments

  12. Validity of the growth model of the 'computerized visual perception assessment tool for Chinese characters structures'.

    Science.gov (United States)

    Wu, Huey-Min; Li, Cheng-Hsaun; Kuo, Bor-Chen; Yang, Yu-Mao; Lin, Chin-Kai; Wan, Wei-Hsiang

    2017-08-01

    Morphological awareness is the foundation for the important developmental skills involved with vocabulary, as well as understanding the meaning of words, orthographic knowledge, reading, and writing. Visual perception of space and radicals in the two-dimensional positions of Chinese characters' morphology is very important in identifying Chinese characters. This research investigated the important predictive variables of spatial and visual perception in Chinese character identification using a growth model. The assessment tool is the "Computerized Visual Perception Assessment Tool for Chinese Characters Structures" developed by this study. There are two constructs, basic stroke and character structure. In the basic stroke, there are three subtests of one, two, and more than three strokes. In the character structure, there are three subtests of single-component character, horizontal-compound character, and vertical-compound character. This study used purposive sampling. In the first year, 551 children 4-6 years old participated in the study and were monitored for one year. In the second year, 388 children remained in the study, a successful follow-up rate of 70.4%. This study used a two-wave cross-lagged panel design to validate the growth model of the basic stroke and the character structure. There was significant correlation between the basic stroke and the character structure at different time points. The abilities in the basic stroke and in the character structure steadily developed over time for preschool children. Children's knowledge of the basic stroke effectively predicted their subsequent knowledge of the basic stroke and the character structure. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Non-Destructive Approaches for the Validation of Visually Observed Spatial Patterns of Decay

    Science.gov (United States)

    Johnston, Brian; McKinley, Jennifer; Warke, Patricia; Ruffell, Alastair

    2017-04-01

    Historical structures are regarded as a built legacy that is passed down through the generations, and as such the conservation and restoration of these buildings is of great importance to governmental, religious and charitable organisations. As these groups play the role of custodians of this built heritage, they are keen that the approaches employed in studies of stone condition are non-destructive in nature. Determining sections of facades requiring repair work is often achieved through a visual condition inspection of the stonework by a specialist. However, these reports focus upon identifying blocks requiring restorative action rather than determining the spatial trends that lead to the identification of causes. This fixation on decay occurring at the block scale results in the spatial distribution of weathering present at the larger 'wall' scale appearing to have developed chaotically. Recent work has shown the importance of adopting a geomorphological focus when undertaking visual inspection of the facades of historical buildings to overcome this issue. Once trends have been ascertained, they can be used to bolster remedial strategies that target the sources of decay rather than just treating symptoms aesthetically. Visual inspection of the study site, Fitzroy Presbyterian Church in Belfast, using the geomorphologically driven approach revealed three features suggestive of decay extending beyond the block scale: firstly, the influence of architectural features on the susceptibility of blocks to decay; secondly, the impact of seasonal fluctuation in groundwater rise and the influence of aspect upon this process; and finally, the interconnectivity of blocks, due to deteriorating mortar and poor repointing, providing conduits for the passage of moisture. Once these patterns were identified, it proved necessary to validate the outcome of the visual inspection using other techniques. In this study

  14. Development and face validity of a cerebral visual impairment motor questionnaire for children with cerebral palsy.

    Science.gov (United States)

    Salavati, M; Waninge, A; Rameckers, E A A; van der Steen, J; Krijnen, W P; van der Schans, C P; Steenbergen, B

    2017-01-01

    The objectives of this study were (i) to develop two cerebral visual impairment motor questionnaires (CVI-MQ's) for children with cerebral palsy (CP): one for children with Gross Motor Function Classification System (GMFCS) levels I, II and III and one for children with GMFCS levels IV and V; (ii) to describe their face validity and usability; and (iii) to determine their sensitivity and specificity. The initial versions of the two CVI-MQ's were developed based on the literature. Subsequently, the Delphi method was used in two groups of experts, one familiar with CVI and one not, in order to gain consensus about face validity and usability. The sensitivity and specificity of the CVI-MQ's were subsequently assessed in 82 children with CP, with CVI (n = 39) and without CVI (n = 43). With the receiver operating curve the cut-off scores were determined to detect possible presence or absence of CVI in children with CP. Both questionnaires showed very good face validity (percentage agreement above 96%) and good usability (percentage agreement 95%) for practical use. The CVI-MQ version for GMFCS levels I, II and III had a sensitivity of 1.00 and specificity of 0.96, with a cut-off score of 12 points or higher, and the version for GMFCS levels IV and V had a sensitivity of 0.97 and a specificity of 0.98, with a cut-off score of eight points or higher. The CVI-MQ is thus able to identify children with CP at risk of having CVI. © 2016 John Wiley & Sons Ltd.
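
    The cut-off logic described above is a standard sensitivity/specificity calculation. As a minimal sketch, assuming hypothetical questionnaire totals and reference CVI diagnoses (none of these numbers come from the study), a "score at or above cut-off" rule can be evaluated like this:

```python
def sens_spec(scores, has_cvi, cutoff):
    """Sensitivity and specificity of a 'score >= cutoff' rule against a reference label."""
    tp = sum(1 for s, y in zip(scores, has_cvi) if y and s >= cutoff)       # true positives
    fn = sum(1 for s, y in zip(scores, has_cvi) if y and s < cutoff)        # false negatives
    tn = sum(1 for s, y in zip(scores, has_cvi) if not y and s < cutoff)    # true negatives
    fp = sum(1 for s, y in zip(scores, has_cvi) if not y and s >= cutoff)   # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical questionnaire totals and reference CVI diagnoses (illustration only)
scores  = [15, 13, 12, 14, 5, 7, 11, 3]
has_cvi = [True, True, True, True, False, False, False, False]
sens, spec = sens_spec(scores, has_cvi, cutoff=12)
```

    In the study itself the cut-off was chosen from the receiver operating curve; here it is fixed at 12 purely for illustration.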

  15. In Silico Genome-Scale Reconstruction and Validation of the Corynebacterium glutamicum Metabolic Network

    DEFF Research Database (Denmark)

    Kjeldsen, Kjeld Raunkjær; Nielsen, J.

    2009-01-01

    A genome-scale metabolic model of the Gram-positive bacterium Corynebacterium glutamicum ATCC 13032 was constructed, comprising 446 reactions and 411 metabolites, based on the annotated genome and available biochemical information. The network was analyzed using constraint-based methods. The model was extensively validated against published flux data, and flux distribution values were found to correlate well between simulations and experiments. The split lysine synthesis pathway of C. glutamicum was investigated, and it was found that the direct dehydrogenase variant gave a higher lysine yield than the alternative succinyl pathway at high lysine production rates. The NADPH demand of the network was not found to be critical for lysine production until lysine yields exceeded 55% (mmol lysine (mmol glucose)(-1)). The model was validated during growth on the organic acids acetate...

  16. Uncertainty analysis of an interfacial area reconstruction algorithm and its application to two group interfacial area transport equation validation

    International Nuclear Information System (INIS)

    Dave, A.J.; Manera, A.; Beyer, M.; Lucas, D.; Prasser, H.-M.

    2016-01-01

    Wire mesh sensors (WMS) are state-of-the-art devices that allow high resolution (in space and time) measurement of 2D void fraction distribution over a wide range of two-phase flow regimes, from bubbly to annular. Data using WMS have been recorded at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) (Lucas et al., 2010; Beyer et al., 2008; Prasser et al., 2003) for a wide combination of superficial gas and liquid velocities, providing an excellent database for advances in two-phase flow modeling. In two-phase flow, the interfacial area plays an integral role in coupling the mass, momentum and energy transport equations of the liquid and gas phase. While current models used in best-estimate thermal-hydraulic codes (e.g. RELAP5, TRACE, TRACG, etc.) are still based on algebraic correlations for the estimation of the interfacial area in different flow regimes, interfacial area transport equations (IATE) have been proposed to predict the dynamic propagation in space and time of interfacial area (Ishii and Hibiki, 2010). IATE models are still under development and the HZDR WMS experimental data provide an excellent basis for the validation and further advance of these models. The current paper is focused on the uncertainty analysis of algorithms used to reconstruct interfacial area densities from the void-fraction voxel data measured using WMS and their application towards validation efforts of two-group IATE models. In previous research efforts, a surface triangularization algorithm has been developed in order to estimate the surface area of individual bubbles recorded with the WMS, and estimate the interfacial area in the given flow condition. In the present paper, synthetically generated bubbles are used to assess the algorithm's accuracy. As the interfacial area of the synthetic bubbles is defined by user inputs, the error introduced by the algorithm can be quantitatively obtained. The accuracy of interfacial area measurements is characterized for different bubbles
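
    The validation strategy described above — measuring a synthetic shape whose interfacial area is known analytically and quantifying the estimator's error — can be illustrated with a deliberately naive sketch. The face-counting estimator below is not the paper's surface triangularization algorithm; it is a simple stand-in (known to overestimate a sphere's area by roughly 50%) that shows how an analytically defined bubble yields a quantitative error figure:

```python
import math

def voxel_sphere(r, n):
    """Boolean n*n*n grid containing a sphere of radius r (in voxel units) at the centre."""
    c = n / 2.0
    return [[[(x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2 <= r * r
              for z in range(n)] for y in range(n)] for x in range(n)]

def exposed_face_area(grid):
    """Naive interfacial-area estimate: count exposed voxel faces (unit area each)."""
    n = len(grid)
    def inside(x, y, z):
        return 0 <= x < n and 0 <= y < n and 0 <= z < n and grid[x][y][z]
    faces = 0
    for x in range(n):
        for y in range(n):
            for z in range(n):
                if grid[x][y][z]:
                    for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                        if not inside(x + dx, y + dy, z + dz):
                            faces += 1
    return float(faces)

r = 10.0
true_area = 4.0 * math.pi * r * r                 # analytic area of the synthetic bubble
est = exposed_face_area(voxel_sphere(r, 26))
rel_err = (est - true_area) / true_area           # face counting overestimates (~50% for a sphere)
```

    Replacing `exposed_face_area` with a better estimator (e.g. triangulating the interface) would reduce `rel_err`; comparing such errors on bubbles of known area is exactly what the synthetic-bubble benchmark enables.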

  18. Screening for hearing, visual and dual sensory impairment in older adults using behavioural cues : A validation study

    NARCIS (Netherlands)

    Roets-Merken, Lieve M.; Zuidema, Sytse U.; Vernooij-Dassen, Myrra J. F. J.; Kempen, Gertrudis I. J. M.

    2014-01-01

    Objective: This study investigated the psychometric properties of the Severe Dual Sensory Loss screening tool, a tool designed to help nurses and care assistants to identify hearing, visual and dual sensory impairment in older adults. Design: Construct validity of the Severe Dual Sensory Loss

  19. Validity and Reliability of Dynamic Visual Acuity (DVA) Measurement During Walking

    Science.gov (United States)

    Deshpande, Nandini; Peters, Brian T.; Bloomberg, Jacob J.

    2014-01-01

    DVA is primarily subserved by the vestibulo-ocular reflex mechanism. Individuals with vestibular hypofunction commonly experience highly debilitating illusory movement or blurring of visual images during daily activities, possibly due to impaired DVA. Even without pathologies, gradual age-related morphological deterioration is evident in all components of the vestibular system. We examined the construct validity (the ability to detect age-related differences) and test-retest reliability of DVA measurements performed during walking. METHODS: Healthy adults were recruited into 3 groups: 1. young (20-39 years, n=18), 2. middle-aged (40-59 years, n=14), and 3. older adults (60-80 years, n=15). Seven randomly selected participants from each group (n=21) participated in retesting. Participants were excluded if they had a history of vestibular or neuromuscular pathologies, dizziness/vertigo or more than one fall in the past year; older persons with MMSE scores below the cut-off were also excluded. RESULTS: The three age groups were not different in height, weight and normal walking speed (p>0.05). The post hoc analyses for DVA measurements demonstrated that each group was significantly different from the other two groups for Near as well as Far DVA. Far DVA at 0.8 m/s and 1.0 m/s demonstrated good test-retest reliability (ICCs 0.71 and 0.77, respectively).

  20. Examining ecological validity in social interaction: problems of visual fidelity, gaze, and social potential.

    Science.gov (United States)

    Reader, Arran T; Holmes, Nicholas P

    2016-01-01

    Social interaction is an essential part of the human experience, and much work has been done to study it. However, several common approaches to examining social interactions in psychological research may inadvertently either unnaturally constrain the observed behaviour by causing it to deviate from naturalistic performance, or introduce unwanted sources of variance. In particular, these sources are the differences between naturalistic and experimental behaviour that occur from changes in visual fidelity (quality of the observed stimuli), gaze (whether it is controlled for in the stimuli), and social potential (potential for the stimuli to provide actual interaction). We expand on these possible sources of extraneous variance and why they may be important. We review the ways in which experimenters have developed novel designs to remove these sources of extraneous variance. New experimental designs using a 'two-person' approach are argued to be one of the most effective ways to develop more ecologically valid measures of social interaction, and we suggest that future work on social interaction should use these designs wherever possible.

  1. Automated Quantitative Computed Tomography Versus Visual Computed Tomography Scoring in Idiopathic Pulmonary Fibrosis: Validation Against Pulmonary Function.

    Science.gov (United States)

    Jacob, Joseph; Bartholmai, Brian J; Rajagopalan, Srinivasan; Kokosi, Maria; Nair, Arjun; Karwoski, Ronald; Raghunath, Sushravya M; Walsh, Simon L F; Wells, Athol U; Hansell, David M

    2016-09-01

    The aim of the study was to determine whether a novel computed tomography (CT) postprocessing software technique (CALIPER) is superior to visual CT scoring as judged by functional correlations in idiopathic pulmonary fibrosis (IPF). A total of 283 consecutive patients with IPF had CT parenchymal patterns evaluated quantitatively with CALIPER and by visual scoring. These 2 techniques were evaluated against: forced expiratory volume in 1 second (FEV1), forced vital capacity (FVC), diffusing capacity for carbon monoxide (DLco), carbon monoxide transfer coefficient (Kco), and a composite physiological index (CPI), with regard to extent of interstitial lung disease (ILD), extent of emphysema, and pulmonary vascular abnormalities. CALIPER-derived estimates of ILD extent demonstrated stronger univariate correlations than visual scores for most pulmonary function tests (PFTs): (FEV1: CALIPER R=0.29, visual R=0.18; FVC: CALIPER R=0.41, visual R=0.27; DLco: CALIPER R=0.31, visual R=0.35; CPI: CALIPER R=0.48, visual R=0.44). Correlations between CT measures of emphysema extent and PFTs were weak and did not differ significantly between CALIPER and visual scoring. Intriguingly, the pulmonary vessel volume provided similar correlations to total ILD extent scored by CALIPER for FVC, DLco, and CPI (FVC: R=0.45; DLco: R=0.34; CPI: R=0.53). CALIPER was superior to visual scoring as validated by functional correlations with PFTs. The pulmonary vessel volume, a novel CALIPER CT parameter with no visual scoring equivalent, has the potential to be a CT feature in the assessment of patients with IPF and requires further exploration.
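
    The univariate correlations quoted above are plain Pearson coefficients between a CT-derived extent score and a pulmonary function test. A minimal sketch, using invented ILD-extent and FVC values rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ILD-extent scores (%) and FVC (% predicted) for six patients
ild = [10, 25, 30, 45, 55, 70]
fvc = [95, 88, 80, 72, 65, 50]
r = pearson_r(ild, fvc)   # strongly negative: more fibrosis, lower FVC
```

    In the study the comparison of interest is whether CALIPER's extent score yields higher |R| against each PFT than the visual score does, computed exactly as above for each pairing.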

  2. MEVA--An Interactive Visualization Application for Validation of Multifaceted Meteorological Data with Multiple 3D Devices.

    Directory of Open Access Journals (Sweden)

    Carolin Helbig

    differences between simulation runs. In an iterative development process, our easy-to-use application was developed in close cooperation with meteorologists and visualization experts. The usability of the application has been validated with user tests. We report on how this application supports the users to prove and disprove existing hypotheses and discover new insights. In addition, the application has been used at public events to communicate research results.

  3. The PedsQL™ Present Functioning Visual Analogue Scales: preliminary reliability and validity

    Directory of Open Access Journals (Sweden)

    Varni James W

    2006-10-01

    Abstract Background The PedsQL™ Present Functioning Visual Analogue Scales (PedsQL™ VAS) were designed as an ecological momentary assessment (EMA) instrument to rapidly measure present or at-the-moment functioning in children and adolescents. The PedsQL™ VAS assess child self-report and parent-proxy report of anxiety, sadness, anger, worry, fatigue, and pain utilizing six developmentally appropriate visual analogue scales based on the well-established Varni/Thompson Pediatric Pain Questionnaire (PPQ) Pain Intensity VAS format. Methods The six-item PedsQL™ VAS was administered to 70 pediatric patients ages 5–17 and their parents upon admittance to the hospital environment (Time 1: T1) and again two hours later (Time 2: T2). It was hypothesized that the PedsQL™ VAS Emotional Distress Summary Score (anxiety, sadness, anger, worry) and the fatigue VAS would demonstrate moderate to large effect size correlations with the PPQ Pain Intensity VAS, and that patient-parent concordance would increase over time. Results Test-retest reliability was demonstrated from T1 to T2 in the large effect size range. Internal consistency reliability was demonstrated for the PedsQL™ VAS Total Symptom Score (patient self-report: T1 alpha = .72, T2 alpha = .80; parent proxy-report: T1 alpha = .80, T2 alpha = .84) and Emotional Distress Summary Score (patient self-report: T1 alpha = .74, T2 alpha = .73; parent proxy-report: T1 alpha = .76, T2 alpha = .81). As hypothesized, the Emotional Distress Summary Score and Fatigue VAS were significantly correlated with the PPQ Pain VAS in the medium to large effect size range, and patient-parent concordance increased from T1 to T2. Conclusion The results demonstrate preliminary test-retest and internal consistency reliability and construct validity of the PedsQL™ Present Functioning VAS instrument for both pediatric patient self-report and parent proxy-report. Further field testing is required to extend these initial
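
    The internal consistency figures quoted above are Cronbach's alpha values. A minimal self-contained sketch, using invented item scores rather than the PedsQL™ data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns (one list per item)."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)   # sample variance
    totals = [sum(col[i] for col in items) for i in range(n)]  # per-respondent totals
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Hypothetical 4-item VAS responses from 5 respondents (0-10 scale, illustration only)
items = [[2, 5, 7, 3, 9],
         [3, 5, 8, 2, 9],
         [2, 6, 7, 3, 8],
         [1, 5, 7, 4, 9]]
alpha = cronbach_alpha(items)
```

    The formula is alpha = k/(k-1) * (1 - sum of item variances / variance of totals); values near 1 indicate that the items vary together, as in the T1/T2 alphas reported above.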

  4. Validation of a raw data-based synchronization signal (kymogram) for phase-correlated cardiac image reconstruction

    International Nuclear Information System (INIS)

    Ertel, Dirk; Kachelriess, Marc; Kalender, Willi A.; Pflederer, Tobias; Achenbach, Stephan; Steffen, Peter

    2008-01-01

    Phase-correlated reconstruction is commonly used in computed tomography (CT)-based cardiac imaging. As an alternative to the commonly used ECG, the raw data-based kymogram function can be used as a synchronization signal. We used raw data from 100 consecutive patient exams to compare the performance of the kymogram function with that of the ECG signal. For objective validation the correlation of the ECG and the kymogram was assessed. Additionally, we performed a double-blinded comparison of ECG-based and kymogram-based phase-correlated images. The two synchronization signals showed good correlation, indicated by a negligible mean difference of 0.2 bpm in the detected heart rate. The mean image quality score was 2.0 points for kymogram-correlated images and 2.3 points for ECG-correlated images (3: best; 0: worst). The kymogram and the ECG provided images adequate for diagnosis for 93 and 97 patients, respectively. For 50% of the datasets the kymogram provided equivalent or even higher image quality compared with the ECG signal. We conclude that acceptable image quality can be assured in most cases by the kymogram. Improvements in image quality by the kymogram function were observed in a noticeable number of cases. The kymogram can serve as a backup solution when an ECG is not available or is lacking in quality. (orig.)

  5. Prevention of sexually transmitted diseases among visually impaired people: educational text validation.

    Science.gov (United States)

    Oliveira, Giselly Oseni Barbosa; Cavalcante, Luana Duarte Wanderley; Pagliuca, Lorita Marlena Freitag; Almeida, Paulo César de; Rebouças, Cristiana Brasil de Almeida

    2016-08-18

    To validate an educational text in the context of sexually transmitted diseases (STD) for visually impaired persons, making it accessible to this population. A validation study, in a virtual environment. Data collection occurred from May to September 2012 by emailing the subjects, comprising seven content experts on STDs. Analysis was based on the experts' considerations about Objectives, Structure and Presentation, and Relevance. In the Objectives and Structure and Presentation blocks, 77 (84.6%) and 48 (85.7%) items were fully adequate or adequate, respectively. In the Relevance block, items 3.2 - Allows transfer and generalization of learning, and 3.5 - Portrays aspects needed to clarify the family, showed poor agreement indices of 0.42 and 0.57, respectively. The analysis was followed by reformulating the text according to the relevant suggestions. The text was validated regarding the content of sexually transmitted diseases. A total of 35 stanzas were removed and nine others included, following the recommendations of the experts.

  6. A nursing home staff tool for the indoor visual environment : the content validity

    NARCIS (Netherlands)

    Sinoo, M.M.; Kort, H.S.M.; Loomans, M.G.L.C.; Schols, J.M.G.A.

    2016-01-01

    In the Netherlands, over 40% of nursing home residents are estimated to have visual impairments. This results in the loss of basic visual abilities. The nursing home environment fits more or less to residents’ activities and social participation. This is referred to as environmental fit. To raise

  8. UK-based prospective cohort study to anglicise and validate the FACE-Q Skin Cancer Module in patients with facial skin cancer undergoing surgical reconstruction: the PROMISCR (Patient-Reported Outcome Measure in Skin Cancer Reconstruction) study.

    Science.gov (United States)

    Dobbs, Thomas; Hutchings, Hayley A; Whitaker, Iain S

    2017-09-24

    Skin cancer is the most common malignancy worldwide, often occurring on the face, where the cosmetic outcome of treatment is paramount. A number of skin cancer-specific patient-reported outcome measures (PROMs) exist, however none adequately consider the difference in type of reconstruction from a patient's point of view. It is the aim of this study to 'anglicise' (to UK English) a recently developed US PROM for facial skin cancer (the FACE-Q Skin Cancer Module) and to validate this UK version of the PROM. The validation will also involve an assessment of the items for relevance to facial reconstruction patients. This will either validate this new measure for the use in clinical care and research of various facial reconstructive options, or provide evidence that a more specific PROM is required. This is a prospective validation study of the FACE-Q Skin Cancer Module in a UK facial skin cancer population with a specific focus on the difference between types of reconstruction. The face and content validity of the FACE-Q questionnaire will initially be assessed by a review process involving patients, skin cancer specialists and methodologists. An assessment of whether questions are relevant and any missing questions will be made. Initial validation will then be carried out by recruiting a cohort of 100 study participants with skin cancer of the face pre-operatively. All eligible patients will be invited to complete the questionnaire preoperatively and postoperatively. Psychometric analysis will be performed to test validity, reliability and responsiveness to change. Subgroup analysis will be performed on patients undergoing different forms of reconstruction postexcision of their skin cancer. This study has been approved by the West Midlands, Edgbaston Research Ethics Committee (Ref 16/WM/0445). All personal data collected will be anonymised and patient-specific data will only be reported in terms of group demographics. Identifiable data collected will include the

  9. The reliability and validity study of the Kinesthetic and Visual Imagery Questionnaire in individuals with Multiple Sclerosis

    Directory of Open Access Journals (Sweden)

    Yousef Moghadas Tabrizi

    2013-12-01

    OBJECTIVE: Motor imagery (MI) has been recently considered as an adjunct to physical rehabilitation in patients with multiple sclerosis (MS). It is necessary to assess MI abilities and benefits in patients with MS by using a reliable tool. The Kinesthetic and Visual Imagery Questionnaire (KVIQ) was recently developed to assess MI ability in patients with stroke and other disabilities. Considering the different underlying pathologies, the present study aimed to examine the validity and reliability of the KVIQ in MS patients. METHOD: Fifteen MS patients were assessed using the KVIQ in 2 sessions (5-14 days apart) by the same examiner. In the second session, the participants also completed a revised MI questionnaire (MIQ-R) as the gold standard. Intra-class correlation coefficients (ICCs) were measured to determine test-retest reliability. Spearman's correlation analysis was performed to assess concurrent validity with the MIQ-R. Furthermore, the internal consistency (Cronbach's alpha) and factorial structure of the KVIQ were studied. RESULTS: The test-retest reliability for the KVIQ was good (ICCs: total KVIQ=0.89, visual KVIQ=0.85, and kinesthetic KVIQ=0.93), and the concurrent validity between the KVIQ and MIQ-R was good (r=0.79). The KVIQ had good internal consistency, with high Cronbach's alpha (alpha=0.84). Factorial analysis showed the bi-factorial structure of the KVIQ, which was explained by visual=57.6% and kinesthetic=32.4%. CONCLUSIONS: The results of the present study revealed that the KVIQ is a valid and reliable tool for assessing MI in MS patients.
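
    The concurrent-validity figure above (r=0.79) comes from Spearman's correlation analysis. A minimal sketch with invented KVIQ and MIQ-R totals (not the study's data), computing rho as the Pearson correlation of rank-transformed scores:

```python
def ranks(xs):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                              # extend over a run of tied values
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical KVIQ and MIQ-R totals for eight patients (illustration only)
kviq = [38, 45, 52, 41, 60, 47, 55, 35]
miqr = [40, 44, 55, 43, 62, 50, 54, 33]
rho = spearman_rho(kviq, miqr)
```

    Because rho depends only on rank order, it is robust to the non-interval nature of questionnaire totals, which is why it is the usual choice for concurrent validity against a gold standard.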

  11. Improved visualization of collateral ligaments of the ankle: multiplanar reconstructions based on standard 2D turbo spin-echo MR images

    International Nuclear Information System (INIS)

    Duc, Sylvain R.; Mengiardi, Bernard; Pfirrmann, Christian W.A.; Hodler, Juerg; Zanetti, Marco

    2007-01-01

    The purpose of the study was to evaluate the visualization of the collateral ankle ligaments on multiplanar reconstructions (MPR) based on standard 2D turbo spin-echo images. Coronal and axial T2-weighted turbo spin-echo images and MPR angled parallel to the course of the ligaments of 15 asymptomatic and 15 symptomatic ankles were separately analyzed by two musculoskeletal radiologists. Image quality was assessed qualitatively in the asymptomatic ankles. In the symptomatic ankles interobserver agreement and reader confidence were determined for each ligament. On MPR the tibionavicular and calcaneofibular ligaments were more commonly demonstrated on a single image than on standard MR images (reader 1: 13 versus 0, P=0.002; reader 2: 14 versus 1, P=0.001 and reader 1: 13 versus 2, P=0.001; reader 2: 14 versus 0, P<0.001). The tibionavicular ligament was considered to be better delineated on MPR by reader 1 (12 versus 3, P=0.031). In the symptomatic ankles, reader confidence was greater with MPR for all ligaments except the tibiocalcaneal ligament (both readers) and the anterior and posterior talofibular ligaments (reader 2). Interobserver agreement was increased with MPR for the tibionavicular ligament. Multiplanar reconstructions of 2D turbo spin-echo images improve the visualization of the tibionavicular and calcaneofibular ligaments and strengthen diagnostic confidence for these ligaments. (orig.)

  12. Reliable categorisation of visual scoring of coronary artery calcification on low-dose CT for lung cancer screening: validation with the standard Agatston score

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Yi-Luan; Wu, Fu-Zong; Wang, Yen-Chi [Kaohsiung Veterans General Hospital, Department of Radiology, Kaohsiung 813 (China); National Yang Ming University, Faculty of Medicine, School of Medicine, Taipei (China); Ju, Yu-Jeng [National Taiwan University, Department of Psychology, Taipei (China); Mar, Guang-Yuan [Kaohsiung Veterans General Hospital, Division of Cardiology, Department of Medicine, Kaohsiung 813 (China); Chuo, Chiung-Chen [Kaohsiung Veterans General Hospital, Department of Radiology, Kaohsiung 813 (China); Lin, Huey-Shyan [Fooyin University, School of Nursing, Kaohsiung (China); Wu, Ming-Ting [Kaohsiung Veterans General Hospital, Department of Radiology, Kaohsiung 813 (China); National Yang Ming University, Faculty of Medicine, School of Medicine, Taipei (China); National Yang Ming University, Institute of Clinical Medicine, Taipei (China)

    2013-05-15

    To validate the reliability of the visual coronary artery calcification score (VCACS) on low-dose CT (LDCT) for concurrent screening of CAC and lung cancer. We enrolled 401 subjects receiving LDCT for lung cancer screening and ECG-gated CT for the Agatston score (AS). LDCT was reconstructed with 3- and 5-mm slice thickness (LDCT-3mm and LDCT-5mm, respectively) for VCACS to obtain VCACS-3mm and VCACS-5mm. After a training session comprising 32 cases, two observers independently performed four-scale VCACS (absent, mild, moderate, severe) of 369 data sets, and the results were compared with the four-scale AS (0, 1-100, 101-400, >400). CAC was present in 39.6 % (146/369) of subjects. The sensitivity of VCACS-3mm was higher than that of VCACS-5mm (83.6 % versus 74.0 %). The median AS of the 24 false-negative cases in VCACS-3mm was 2.3 (range 1.1-21.1). The false-negative rate for detecting AS ≥ 10 on LDCT-3mm was 1.9 %. VCACS-3mm had higher concordance with AS than VCACS-5mm (k = 0.813 versus k = 0.685). An extended test of VCACS-3mm with four junior observers showed high inter-observer reliability (intra-class correlation = 0.90) and good concordance with AS (k = 0.662-0.747). This study validated the reliability of VCACS on LDCT for lung cancer screening and showed that LDCT-3mm was more feasible than LDCT-5mm for CAD risk stratification. (orig.)
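    Concordance between the four-level visual score and the four-level Agatston categories is summarized above with a kappa coefficient. A minimal sketch of unweighted Cohen's kappa follows; the toy ratings are invented, not the study's 369 data sets.

```python
# Unweighted Cohen's kappa for two four-level categorical ratings (toy data).
import numpy as np

def cohen_kappa(a, b, n_cat):
    conf = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        conf[i, j] += 1                                    # confusion matrix of pairs
    n = conf.sum()
    p_obs = np.trace(conf) / n                             # observed agreement
    p_exp = conf.sum(axis=1) @ conf.sum(axis=0) / n**2     # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

visual   = [0, 0, 1, 1, 2, 2, 3, 3, 0, 1, 2, 3]  # absent/mild/moderate/severe
agatston = [0, 0, 1, 2, 2, 2, 3, 3, 0, 1, 1, 3]  # 0 / 1-100 / 101-400 / >400
kappa = cohen_kappa(visual, agatston, 4)
print(round(kappa, 3))
```

    Kappa corrects raw agreement for the agreement expected by chance from the marginal category frequencies, which is why it is preferred over simple percent agreement in this setting.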

  13. P2-7: Encoding of Graded Changes in Validity of Spatial Priors in Human Visual Cortex

    Directory of Open Access Journals (Sweden)

    Yuko Hara

    2012-10-01

    If the spatial validity of prior information is varied systematically, does human behavioral performance improve in a graded fashion, and if so, does visual cortex represent the probability directly? Cortical activity was measured with fMRI while subjects performed a contrast-discrimination task in which the spatial validity of a prior cue for target location was systematically varied. Subjects viewed four sinusoidal gratings (randomized contrasts of 12.5, 25, and 50%) shown in discrete visual quadrants and presented twice. The contrast in one location (target) was incremented in one of the two presentations. Subjects reported with a button press which presentation contained the greater contrast. The target grating was signaled in advance by a cue which varied in spatial validity; at trial onset, small lines pointed to four, two, or one of the possible target locations, thus indicating the target with 25, 50, or 100% probability. Behavioral performance was 2.1 and 3.3 times better in the 100% probability condition than in the 50% and 25% conditions, respectively (p < .001, ANOVA). Unlike behavioral performance, cortical activity in early visual areas showed the same increase in response amplitude for cued versus uncued stimuli for both 100% and 50% probability (V1-V4, V3A, all p < .18, Student's t-test; the 25% condition had no uncued stimuli). How could behavioral performance improve in a graded fashion if cortical activity showed the same effect for different probabilities? A model of efficient selection in which V1 responses were pooled according to their magnitude rather than as a simple average explained the observations (AIC difference = −15).

  14. Reliability and Validity of the TGMD-2 in Primary-School-Age Children With Visual Impairments

    NARCIS (Netherlands)

    Houwen, Suzanne; Hartman, Esther; Jonker, Laura; Visscher, Chris

    This study examines the psychometric properties of the Test of Gross Motor Development-2 (TGMD-2) in children with visual impairments (VI). Seventy-five children aged between 6 and 12 years with VI completed the TGMD-2 and the Movement Assessment Battery for Children (Movement ABC). The internal consistency of the TGMD-2 was found to be high…

  15. Validation of an efficient visual method for estimating leaf area index ...

    African Journals Online (AJOL)

    This study aimed to evaluate the accuracy and applicability of a visual method for estimating LAI in clonal Eucalyptus grandis × E. urophylla plantations and to compare it with hemispherical photography, ceptometer and LAI-2000® estimates. Destructive sampling for direct determination of the actual LAI was performed in ...

  16. The DiaNAH test battery for visual perceptual disorders : Validity and efficacy in rehabilitation practice

    NARCIS (Netherlands)

    Heutink, Jochem; de Vries, Stefanie; Melis, Bart; Vrijling, Anne; Tucha, Oliver

    2018-01-01

    We developed the DiaNAH test battery for the screening of mid-level and higher-order visual perceptual disorders in clinical practice. The DiaNAH battery comprises 11 different tests and can be administered in 30-60 minutes. An important feature of the DiaNAH battery is that it is administered on a 24”…

  17. Development and face validity of a cerebral visual impairment motor questionnaire for children with cerebral palsy

    NARCIS (Netherlands)

    Salavati, Masoud; Waninge, Aly; Rameckers, E.A.A.; van der Steen, J; Krijnen, W.P.; van der Schans, C.P.; Steenbergen, B.

    2016-01-01

    AIM: The objectives of this study were (i) to develop two cerebral visual impairment motor questionnaires (CVI-MQ's) for children with cerebral palsy (CP): one for children with Gross Motor Function Classification System (GMFCS) levels I, II and III and one for children with GMFCS levels IV and V;

  18. Spatially valid proprioceptive cues improve the detection of a visual stimulus

    DEFF Research Database (Denmark)

    Jackson, Carl P T; Miall, R Chris; Balslev, Daniela

    2010-01-01

    …, which has been demonstrated for other modality pairings. The aim of this study was to test whether proprioceptive signals can spatially cue a visual target to improve its detection. Participants were instructed to use a planar manipulandum in a forward reaching action and determine during this movement…

  19. Testing the validity of wireless EEG for cognitive research with auditory and visual paradigms

    DEFF Research Database (Denmark)

    Weed, Ethan; Kratschmer, Alexandra Regina; Pedersen, Michael Nygaard

    …and smaller cognitive components. To test the feasibility of these headsets for cognitive research, we compared performance of the Emotiv Epoc wireless headset (EM) with Brain Products ActiCAP (BP) active electrodes on two well-studied components: the auditory mismatch negativity (MMN) and the visual face…

  20. The validation of the visual analogue scale for patient satisfaction after total hip arthroplasty.

    NARCIS (Netherlands)

    Brokelman, R.B.G.; Haverkamp, D.; Loon, C. van; Hol, A.; Kampen, A. van; Veth, R.P.H.

    2012-01-01

    INTRODUCTION: Patient satisfaction is becoming more important in our modern health care system. The assessment of satisfaction is difficult because it is a multifactorial item for which no gold standard exists. One of the potential methods of measuring satisfaction is the well-known visual analogue scale…

  1. Reliability and Validity of the TGMD-2 in Primary-School-Age Children with Visual Impairments

    Science.gov (United States)

    Houwen, Suzanne; Hartman, Esther; Jonker, Laura; Visscher, Chris

    2010-01-01

    This study examines the psychometric properties of the Test of Gross Motor Development-2 (TGMD-2) in children with visual impairments (VI). Seventy-five children aged between 6 and 12 years with VI completed the TGMD-2 and the Movement Assessment Battery for Children (Movement ABC). The internal consistency of the TGMD-2 was found to be high…

  2. Development and face validity of a cerebral visual impairment motor questionnaire for children with cerebral palsy

    NARCIS (Netherlands)

    Salavati, M.; Waninge, A.; Rameckers, E. A. A.; van der Steen, J.; Krijnen, W. P.; van der Schans, C. P.; Steenbergen, B.

    Aim The objectives of this study were (i) to develop two cerebral visual impairment motor questionnaires (CVI-MQ's) for children with cerebral palsy (CP): one for children with Gross Motor Function Classification System (GMFCS) levels I, II and III and one for children with GMFCS levels IV and V;

  3. Optimisation and validation of a 3D reconstruction algorithm for single photon emission computed tomography by means of GATE simulation platform

    International Nuclear Information System (INIS)

    El Bitar, Ziad

    2006-12-01

    Although time consuming, Monte Carlo simulations remain an efficient tool for assessing correction methods for degrading physical effects in medical imaging. We have optimized and validated a reconstruction method named F3DMC (Fully 3D Monte Carlo), in which the physical effects degrading the image formation process are modelled using Monte Carlo methods and integrated within the system matrix. We used the Monte Carlo simulation toolbox GATE. We validated GATE in SPECT by modelling the gamma camera (Philips AXIS) used in clinical routine. Thresholding, filtering by principal component analysis, and targeted reconstruction (functional regions, hybrid regions) were used to improve the precision of the system matrix and to reduce both the number of simulated photons and the computation time required. The EGEE grid infrastructure was used to deploy the GATE simulations in order to reduce their computation time. Results obtained with F3DMC were compared with other reconstruction methods (FBP, ML-EM, MLEMC) for a simulated phantom and with the OSEM-C method for a real phantom. Results have shown that the F3DMC method and its variants improve the restoration of activity ratios and the signal-to-noise ratio. By use of the EGEE grid, a significant speed-up factor of about 300 was obtained. These results should be confirmed by studies on complex phantoms and patients, and open the door to a unified reconstruction method that could be used in SPECT and also in PET. (author)
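    The ML-EM algorithm at the core of F3DMC has a compact multiplicative update. The sketch below shows the generic ML-EM iteration on a toy problem; in F3DMC the system-matrix entries would come from Monte Carlo simulation, whereas the 4x3 matrix here is invented for illustration.

```python
# Generic ML-EM iteration on a toy emission-tomography problem.
import numpy as np

A = np.array([[0.8, 0.1, 0.0],       # rows: detector bins, cols: image voxels
              [0.2, 0.7, 0.1],
              [0.0, 0.2, 0.7],
              [0.0, 0.0, 0.2]])
x_true = np.array([10.0, 5.0, 8.0])  # ground-truth activity
y = A @ x_true                       # noiseless projections, for illustration

x = np.ones(3)                       # uniform non-negative initial estimate
sens = A.sum(axis=0)                 # sensitivity image, A^T 1
for _ in range(500):
    x *= (A.T @ (y / (A @ x))) / sens    # multiplicative ML-EM update

print(np.round(x, 2))
```

    The update preserves non-negativity by construction, which is one reason ML-EM is the standard iterative algorithm that system matrices like the F3DMC one plug into.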

  4. Analysis of internal and external validity criteria for a computerized visual search task: A pilot study.

    Science.gov (United States)

    Richard's, María M; Introzzi, Isabel; Zamora, Eliana; Vernucci, Santiago

    2017-01-01

    Inhibition is one of the main executive functions, because of its fundamental role in cognitive and social development. Given the importance of reliable, computerized measures of inhibitory performance, this research analyzes the internal and external validity criteria of a computerized conjunction search task designed to evaluate the role of perceptual inhibition. A sample of 41 children (21 females and 20 males) aged between 6 and 11 years (M = 8.49, SD = 1.47), intentionally selected from a private school of middle socio-economic level in Mar del Plata (Argentina), was assessed. The Conjunction Search Task from the TAC Battery and the Coding and Symbol Search tasks from the Wechsler Intelligence Scale for Children were used. Overall, the results confirm that the perceptual inhibition task from the TAC presents solid indices of internal and external validity, making it a valid instrument for measuring this process.

  5. Validation of the Gate simulation platform in single photon emission computed tomography and application to the development of a complete 3-dimensional reconstruction algorithm

    International Nuclear Information System (INIS)

    Lazaro, D.

    2003-10-01

    Monte Carlo simulations are currently considered in nuclear medical imaging as a powerful tool to design and optimize detection systems, and also to assess reconstruction algorithms and correction methods for degrading physical effects. Among the many simulators available, none is considered a standard in nuclear medical imaging: this fact has motivated the development of a new generic Monte Carlo simulation platform (GATE), based on GEANT4 and dedicated to SPECT/PET (single photon emission computed tomography / positron emission tomography) applications. During this thesis we participated in the development of the GATE platform within an international collaboration. GATE was validated in SPECT by modeling two gamma cameras with different geometries, one dedicated to small-animal imaging and the other used in a clinical context (Philips AXIS), and by comparing the results obtained with GATE simulations against experimental data. The simulation results accurately reproduce the measured performances of both gamma cameras. The GATE platform was then used to develop a new 3-dimensional reconstruction method, F3DMC (fully 3-dimensional Monte Carlo), which consists in computing with Monte Carlo simulation the transition matrix used in an iterative reconstruction algorithm (in this case, ML-EM), including within the transition matrix the main physical effects degrading the image formation process. The results obtained with the F3DMC method were compared with the results obtained with three other more conventional methods (FBP, MLEM, MLEMC) for different phantoms. The results of this study show that F3DMC improves reconstruction efficiency, spatial resolution and signal-to-noise ratio, with a satisfactory quantification of the images. These results should be confirmed by clinical experiments, and open the door to a unified reconstruction method that could be applied in SPECT and also in PET. (author)

  6. Creation and validation of a visual macroscopic hematuria scale for optimal communication and an objective hematuria index.

    Science.gov (United States)

    Wong, Lih-Ming; Chum, Jia-Min; Maddy, Peter; Chan, Steven T F; Travis, Douglas; Lawrentschuk, Nathan

    2010-07-01

    Macroscopic hematuria is a common symptom and sign that is challenging to quantify and describe. The degree of hematuria communicated is variable due to health worker experience combined with the lack of a reliable grading tool. We produced a reliable, standardized visual scale to describe hematuria severity. Our secondary aim was to validate a new laboratory test to quantify hemoglobin in hematuria specimens. Nurses were surveyed to ascertain current hematuria descriptions. Blood and urine were titrated at varying concentrations and digitally photographed in catheter bag tubing. Photos were processed and printed on transparency paper to create a prototype swatch or card showing light, medium, heavy and old hematuria. Using the swatch, 60 samples were rated by nurses and laymen. Interobserver variability was reported using the generalized kappa coefficient of agreement. Specimens were analyzed for hemolysis by measuring optical density at oxyhemoglobin absorption peaks. Interobserver agreement between nurses and laymen was good (kappa = 0.51). A visual scale to grade and communicate hematuria with adequate interobserver agreement is feasible. The test for optical density at oxyhemoglobin absorption peaks is a new method, validated in our study, to quantify hemoglobin in a hematuria specimen. Copyright (c) 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  7. LATUX: An Iterative Workflow for Designing, Validating, and Deploying Learning Analytics Visualizations

    Science.gov (United States)

    Martinez-Maldonado, Roberto; Pardo, Abelardo; Mirriahi, Negin; Yacef, Kalina; Kay, Judy; Clayphan, Andrew

    2015-01-01

    Designing, validating, and deploying learning analytics tools for instructors or students is a challenge that requires techniques and methods from different disciplines, such as software engineering, human-computer interaction, computer graphics, educational design, and psychology. Whilst each has established its own design methodologies, we now…

  8. Validity and Interrater Reliability of the Visual Quarter-Waste Method for Assessing Food Waste in Middle School and High School Cafeteria Settings.

    Science.gov (United States)

    Getts, Katherine M; Quinn, Emilee L; Johnson, Donna B; Otten, Jennifer J

    2017-11-01

    Measuring food waste (ie, plate waste) in school cafeterias is an important tool to evaluate the effectiveness of school nutrition policies and interventions aimed at increasing consumption of healthier meals. Visual assessment methods are frequently applied in plate waste studies because they are more convenient than weighing. The visual quarter-waste method has become a common tool in studies of school meal waste and consumption, but previous studies of its validity and reliability have used correlation coefficients, which measure association but not necessarily agreement. The aims of this study were to determine, using a statistic measuring interrater agreement, whether the visual quarter-waste method is valid and reliable for assessing food waste in a school cafeteria setting when compared with the gold standard of weighed plate waste. To evaluate validity, researchers used the visual quarter-waste method and weighed food waste from 748 trays at four middle schools and five high schools in one school district in Washington State during May 2014. To assess interrater reliability, researcher pairs independently assessed 59 of the same trays using the visual quarter-waste method. Both validity and reliability were assessed using a weighted κ coefficient. For validity, as compared with the measured weight, 45% of foods assessed using the visual quarter-waste method were in almost perfect agreement, 42% of foods were in substantial agreement, 10% were in moderate agreement, and 3% were in slight agreement. For interrater reliability between pairs of visual assessors, 46% of foods were in perfect agreement, 31% were in almost perfect agreement, 15% were in substantial agreement, and 8% were in moderate agreement. These results suggest that the visual quarter-waste method is a valid and reliable tool for measuring plate waste in school cafeteria settings. Copyright © 2017 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
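    The weighted kappa used above rewards near-misses between ordered waste categories instead of treating all disagreements equally. A hedged sketch with linear weights follows; the five ordered categories (none, 1/4, 1/2, 3/4, all wasted) match the quarter-waste idea, but the ratings are invented, not the study's 748 trays.

```python
# Linearly weighted kappa for ordered rating categories (toy data).
import numpy as np

def weighted_kappa(a, b, n_cat):
    idx = np.arange(n_cat)
    w = 1 - np.abs(np.subtract.outer(idx, idx)) / (n_cat - 1)  # linear agreement weights
    conf = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        conf[i, j] += 1
    n = conf.sum()
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0)) / n
    p_obs = (w * conf).sum() / n          # weighted observed agreement
    p_exp = (w * expected).sum() / n      # weighted chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

visual  = [0, 1, 1, 2, 3, 4, 4, 0, 2, 3]   # visual quarter-waste ratings
weighed = [0, 1, 2, 2, 3, 4, 3, 0, 2, 4]   # categories derived from weighed waste
kw = weighted_kappa(visual, weighed, 5)
print(round(kw, 3))
```

    An off-by-one disagreement still earns 0.75 of the agreement weight here, which is why weighted kappa is the appropriate agreement statistic (rather than a correlation coefficient) for this kind of ordinal validation.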

  9. Flow temporal reconstruction from non time-resolved data part II: practical implementation, methodology validation, and applications

    Energy Technology Data Exchange (ETDEWEB)

    Legrand, Mathieu; Nogueira, Jose; Lecuona, Antonio [Universidad Carlos III, Department of Thermal and Fluids Engineering, Madrid (Spain); Tachibana, Shigeru [Japan Aerospace Exploration Agency, Aerospace Research and Development, Tokyo (Japan); Nauri, Sara [QinetiQ, Design Systems and Services, Farnborough (United Kingdom)

    2011-10-15

    This paper proposes a method to sort experimental snapshots of a periodic flow using information from the first three POD coefficients. Even in the presence of turbulence, phase-average flow fields are reconstructed with this novel technique. The main objective is to identify and track traveling coherent structures in these pseudo-periodic flows. This provides a tool for shedding light on flow dynamics and allows for comparison of dynamical contents, instead of using mean statistics or traditional point-based correlation techniques. To evaluate the performance of the technique, apart from a laminar test on the relative strength of the POD modes, four additional tests have been performed. In the first of these tests, time-resolved PIV measurements of a turbulent flow with an externally forced main frequency allow comparison of real phase-locked average data with reconstructed phases obtained using the technique proposed in the paper. The reconstruction technique is then applied to a set of non-forced, non time-resolved Stereo PIV measurements in an atmospheric burner, under combustion conditions. Beyond checking that the reconstructions on different planes match, this gives no indication of the magnitude of the error for the proposed technique. In order to obtain some data regarding this aspect, two additional tests are performed on simulated non-externally forced laminar flows with the addition of a digital filter resembling turbulence (Klein et al. in J Comput Phys 186:652-665, 2003). With this information, the limitation of the technique's applicability to periodic flows including turbulence or secondary frequency features is further discussed on the basis of the relative strength of the Proper Orthogonal Decomposition (POD) modes. The discussion offered indicates coherence between the reconstructed results and those obtained in the simulations. In addition, it allows defining a threshold parameter that indicates when the proposed technique is suitable or not. For those…
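    The core idea, sorting non time-resolved snapshots by the phase angle of their leading POD coefficients, can be sketched in a few lines. The synthetic traveling-wave data below are purely illustrative and unrelated to the paper's PIV measurements; POD is computed here via the SVD of the mean-subtracted snapshot matrix.

```python
# Phase-sorting sketch: snapshots of a periodic signal acquired at random
# times are ordered by the angle of their first two POD coefficients.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 64)
true_phase = rng.uniform(0, 2 * np.pi, 100)         # random acquisition phases
snaps = np.sin(x[None, :] - true_phase[:, None])    # traveling-wave snapshots
snaps += 0.05 * rng.normal(size=snaps.shape)        # measurement noise

U, s, Vt = np.linalg.svd(snaps - snaps.mean(axis=0), full_matrices=False)
a = U[:, :2] * s[:2]                                # first two POD coefficients
phase = np.arctan2(a[:, 1], a[:, 0])                # estimated phase per snapshot
order = np.argsort(phase)                           # pseudo time-resolved ordering

# Consecutive snapshots in the sorted sequence should be nearly identical,
# unlike consecutive snapshots in the original (random) acquisition order.
d_sorted = np.linalg.norm(np.diff(snaps[order], axis=0), axis=1).mean()
d_random = np.linalg.norm(np.diff(snaps, axis=0), axis=1).mean()
print(f"sorted gap {d_sorted:.2f} vs random gap {d_random:.2f}")
```

    For a pure traveling wave the first two POD modes span a sine/cosine pair, so the two leading coefficients trace a circle and their angle recovers the phase up to an offset and sign.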

  10. Validation of a computer modelled forensic facial reconstruction technique using CT data from live subjects: a pilot study.

    Science.gov (United States)

    Short, Laura J; Khambay, Balvinder; Ayoub, Ashraf; Erolin, Caroline; Rynn, Chris; Wilkinson, Caroline

    2014-04-01

    Human forensic facial soft tissue reconstructions are used when post-mortem deterioration makes identification difficult by the usual means. The aim is to trigger recognition of the in vivo countenance of the individual by a friend or family member. A further use is in the field of archaeology. A number of different methods can be applied to complete a facial reconstruction, ranging from two-dimensional drawings and three-dimensional clay models to, with the advances of three-dimensional technology, three-dimensional computerised modelling. Studies carried out to assess the accuracy of facial reconstructions have produced variable results over the years. Advances in three-dimensional imaging techniques in the field of oral and maxillofacial surgery, particularly cone beam computed tomography (CBCT), now provide an opportunity to utilise the data of live subjects and assess the accuracy of the three-dimensional computerised facial reconstruction technique. The aim of this study was to assess the accuracy of a computer-modelled facial reconstruction technique using CBCT data from live subjects. This retrospective pilot study was carried out at the Glasgow Dental Hospital Orthodontic Department and the Centre of Anatomy and Human Identification, Dundee University School of Life Sciences. Ten patients (5 male and 5 female; mean age 23 years) with mild skeletal discrepancies and pre-surgical CBCT data were included in this study. The actual and reconstructed soft tissues were analysed using 3D software to examine differences between landmarks, linear and angular measurements, and surface meshes. There were no statistical differences for 18 of the 23 linear and 7 of the 8 angular measurements between the reconstruction and the target (p<0.05). The use of Procrustes superimposition has highlighted potential problems with soft tissue depth and anatomical landmark positions. Surface mesh analysis showed that this virtual…

  11. 3D-Reconstructions and Virtual 4D-Visualization to Study Metamorphic Brain Development in the Sphinx Moth Manduca Sexta.

    Science.gov (United States)

    Huetteroth, Wolf; El Jundi, Basil; El Jundi, Sirri; Schachtner, Joachim

    2010-01-01

    During metamorphosis, the transition from the larva to the adult, the insect brain undergoes considerable remodeling: new neurons are integrated while larval neurons are remodeled or eliminated. One well acknowledged model to study metamorphic brain development is the sphinx moth Manduca sexta. To further understand mechanisms involved in the metamorphic transition of the brain we generated a 3D standard brain based on selected brain areas of adult females and 3D reconstructed the same areas during defined stages of pupal development. Selected brain areas include for example mushroom bodies, central complex, antennal- and optic lobes. With this approach we eventually want to quantify developmental changes in neuropilar architecture, but also quantify changes in the neuronal complement and monitor the development of selected neuronal populations. Furthermore, we used a modeling software (Cinema 4D) to create a virtual 4D brain, morphing through its developmental stages. Thus the didactical advantages of 3D visualization are expanded to better comprehend complex processes of neuropil formation and remodeling during development. To obtain datasets of the M. sexta brain areas, we stained whole brains with an antiserum against the synaptic vesicle protein synapsin. Such labeled brains were then scanned with a confocal laser scanning microscope and selected neuropils were reconstructed with the 3D software AMIRA 4.1.

  12. 3D-reconstructions and virtual 4D-visualization to study metamorphic brain development in the sphinx moth Manduca sexta

    Directory of Open Access Journals (Sweden)

    Wolf Huetteroth

    2010-03-01

    During metamorphosis, the transition from the larva to the adult, the insect brain undergoes considerable remodeling: new neurons are integrated while larval neurons are remodeled or eliminated. One well acknowledged model to study metamorphic brain development is the sphinx moth Manduca sexta. To further understand mechanisms involved in the metamorphic transition of the brain we generated a 3D standard brain based on selected brain areas of adult females and 3D reconstructed the same areas during defined stages of pupal development. Selected brain areas include for example mushroom bodies, central complex, antennal- and optic lobes. With this approach we eventually want to quantify developmental changes in neuropilar architecture, but also quantify changes in the neuronal complement and monitor the development of selected neuronal populations. Furthermore, we used a modeling software (Cinema 4D) to create a virtual 4D brain, morphing through its developmental stages. Thus the didactical advantages of 3D visualization are expanded to better comprehend complex processes of neuropil formation and remodeling during development. To obtain datasets of the M. sexta brain areas, we stained whole brains with an antiserum against the synaptic vesicle protein synapsin. Such labeled brains were then scanned with a confocal laser scanning microscope and selected neuropils were reconstructed with the 3D software AMIRA 4.1.

  13. Task-Difficulty Homeostasis in Car Following Models: Experimental Validation Using Self-Paced Visual Occlusion.

    Directory of Open Access Journals (Sweden)

    Jami Pekkanen

    Car following (CF) models used in traffic engineering are often criticized for not incorporating "human factors" well known to affect driving. Some recent work has addressed this by augmenting CF models with the Task-Capability Interface (TCI) model, dynamically changing driving parameters as a function of driver capability. We examined the assumptions of these models experimentally using a self-paced visual occlusion paradigm in a simulated car following task. The results show a strong, approximately one-to-one, correspondence between occlusion duration and increase in time headway. The correspondence was found between subjects and within subjects, at both the aggregate and individual-sample levels. The long-time-scale aggregate results support TCI-CF models that assume a linear increase in time headway in response to increased distraction. The short-time-scale individual-sample results suggest that drivers also adapt their visual sampling in response to transient changes in time headway, a mechanism which isn't incorporated in the current models.

  14. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields.

    Science.gov (United States)

    Henan Zhao; Bryant, Garnett W; Griffin, Wesley; Terrill, Judith E; Jian Chen

    2017-06-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches commonly used in scientific visualizations - direct linear representation, logarithmic, and text display. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improves accuracy by about 10 times compared to linear mapping and by 4 times compared to logarithmic mapping in discrimination tasks; (2) SplitVectors shows no significant differences from the textual display approach, but reduces clutter in the scene; (3) SplitVectors and textual display are less sensitive to data scale than the linear and logarithmic approaches; (4) using the logarithmic mapping can be problematic, as participants' confidence was as high as when directly reading from the textual display, but their accuracy was poor; and (5) stereoscopy improved performance, especially in the more challenging discrimination tasks.
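    The scientific-notation encoding at the heart of SplitVectors can be sketched as a simple mantissa/exponent split. The function name below is hypothetical, and this sketch covers only the encoding, not the paper's 3D glyph rendering.

```python
# Split a magnitude v into (mantissa, exponent) with v = mantissa * 10**exponent
# and 1 <= |mantissa| < 10, so values spanning many orders of magnitude can be
# shown as two small, legible numbers.
import math

def split_magnitude(v):
    if v == 0:
        return 0.0, 0
    exponent = math.floor(math.log10(abs(v)))
    return v / 10**exponent, exponent

print(split_magnitude(93000.0))   # large magnitude
print(split_magnitude(0.00042))   # small magnitude
```

    Mapping the mantissa and exponent to two separate visual channels (e.g., two glyph lengths) is what lets discrimination tasks stay accurate across a large magnitude range.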

  15. Visual Attention Allocation Between Robotic Arm and Environmental Process Control: Validating the STOM Task Switching Model

    Science.gov (United States)

    Wickens, Christopher; Vieanne, Alex; Clegg, Benjamin; Sebok, Angelia; Janes, Jessica

    2015-01-01

    Fifty-six participants time-shared a spacecraft environmental control system task with a realistic space robotic arm control task in either a manual or a highly automated version. The former could suffer minor failures, whose diagnosis and repair were supported by a decision aid. At the end of the experiment this decision aid unexpectedly failed. We measured visual attention allocation and switching between the two tasks in each of the eight conditions formed by manual/automated arm × expected/unexpected failure × monitoring/failure management. We also used our multi-attribute task switching model, based on the task attributes of priority, interest, difficulty and salience as self-rated by participants, to predict allocation. An unweighted model based on the attributes of difficulty, interest and salience accounted for 96 percent of the task allocation variance across the 8 conditions. Task difficulty served as an attractor, with more difficult tasks increasing the tendency to stay on task.

  16. Validation of assistive technology for the visually impaired in the prevention of Sexually Transmitted Diseases.

    OpenAIRE

    Giselly Oseni Laurentino Barbosa

    2013-01-01

    à indiscutÃvel a relevÃncia da temÃtica de orientaÃÃo à Pessoa com DeficiÃncia (PcD) visual quanto à prevenÃÃo das DoenÃas Sexualmente TransmissÃveis (DST). Se as DST representam um risco Ãs pessoas sem deficiÃncia, para as PcD, os riscos podem se tornar ampliados. Para essa populaÃÃo, dispÃe-se da Tecnologia Assistiva (TA), a qual se constitui de materiais, mÃtodos e processos adaptados Ãs suas necessidades. O crescente nÃmero de ferramentas computacionais direcionadas para a PcD permite a i...

  17. Catch-up validation study of an in vitro skin irritation test method based on an open source reconstructed epidermis (phase II).

    Science.gov (United States)

    Groeber, F; Schober, L; Schmid, F F; Traube, A; Kolbus-Hernandez, S; Daton, K; Hoffmann, S; Petersohn, D; Schäfer-Korting, M; Walles, H; Mewes, K R

    2016-10-01

    To replace the Draize skin irritation assay (OECD guideline 404), several test methods based on reconstructed human epidermis (RHE) have been developed and adopted in OECD test guideline 439. However, all validated test methods in the guideline are linked to RHE provided by only three companies, so the availability of these test models depends on the commercial interest of the producers. To overcome this limitation, and thus to increase the accessibility of in vitro skin irritation testing, an open source reconstructed epidermis (OS-REp) was introduced. To demonstrate the capacity of the OS-REp in regulatory risk assessment, a catch-up validation study was performed. The participating laboratories used in-house generated OS-REp to assess the set of 20 reference substances according to the performance standards amending OECD test guideline 439. Testing was performed under blinded conditions. The within-laboratory reproducibility of 87% and the inter-laboratory reproducibility of 85% prove the high reliability of irritancy testing using the OS-REp protocol. In addition, the prediction capacity was, with an accuracy of 80%, comparable to previously published RHE-based test protocols. Taken together, the results indicate that the OS-REp test method can be used as a standalone alternative skin irritation test replacing OECD test guideline 404.
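    The reproducibility and accuracy figures quoted above are simple proportions over the panel of reference substances. A minimal sketch of how such figures are computed (the labels "I"/"NI" for irritant/non-irritant and the toy data are our own, not from the study):

```python
def within_lab_reproducibility(runs):
    """runs: list of classification lists, one per repeat run, with
    substances in the same order. Reproducibility = fraction of
    substances classified identically in every run."""
    n = len(runs[0])
    same = sum(1 for labels in zip(*runs) if len(set(labels)) == 1)
    return same / n

def accuracy(predicted, reference):
    """Fraction of substances whose predicted class matches the
    reference (in vivo) classification."""
    hits = sum(p == r for p, r in zip(predicted, reference))
    return hits / len(reference)
```

    Inter-laboratory reproducibility is the same idea applied across laboratories' majority calls instead of within-laboratory repeat runs.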

  18. Development and validation of an achievement test in introductory quantum mechanics: The Quantum Mechanics Visualization Instrument (QMVI)

    Science.gov (United States)

    Cataloglu, Erdat

    The purpose of this study was to construct a valid and reliable multiple-choice achievement test to assess students' understanding of core concepts of introductory quantum mechanics. Development of the Quantum Mechanics Visualization Instrument (QMVI) occurred across four successive semesters in 1999-2001. During this time 213 undergraduate and graduate students attending the Pennsylvania State University (PSU) at University Park and Arizona State University (ASU) participated in this development and validation study. Participating students were enrolled in four distinct groups of courses: Modern Physics, Undergraduate Quantum Mechanics, Graduate Quantum Mechanics, and Chemistry Quantum Mechanics. Expert panels of physics professors experienced in teaching quantum mechanics courses and graduate students in physics and science education established the core content and assisted in the validation of successive versions of the 24-question QMVI. Instrument development was guided by procedures outlined in the Standards for Educational and Psychological Testing (AERA-APA-NCME, 1999). Data gathered in this study informed the development of successive versions of the QMVI. Data gathered in the final phase of administration of the QMVI also provided evidence that the intended score interpretation of the QMVI achievement test is valid and reliable. A moderate positive correlation coefficient of 0.49 was observed between students' QMVI scores and their confidence levels. Analyses of variance indicated that the scores of students in the Graduate Quantum Mechanics and Undergraduate Quantum Mechanics courses were significantly higher than the mean scores of students in the Modern Physics and Chemistry Quantum Mechanics courses (p important factor for students in acquiring a successful understanding of quantum mechanics.

  19. Remote Sensing and GIS Applied to the Landscape for the Environmental Restoration of Urbanizations by Means of 3D Virtual Reconstruction and Visualization (Salamanca, Spain)

    Directory of Open Access Journals (Sweden)

    Antonio Miguel Martínez-Graña

    2016-01-01

    The key focus of this paper is to establish a procedure that combines the use of Geographical Information Systems (GIS) and remote sensing in order to simulate and model the landscape impact caused by construction. The procedure should be easy and inexpensive to develop. With the aid of 3D virtual reconstruction and visualization, this paper proposes that the technologies of remote sensing and GIS can be applied to the landscape for post-urbanization environmental restoration. The goal is to create a rural zone in an urban development sector that integrates the residential areas and local infrastructure into the surrounding natural environment, in order to measure the changes to the preliminary urban design. The units of the landscape are determined by means of two cartographic methods: (1) an indirect method, using the components of the landscape; and (2) a direct method, using the landscape's elements. Visual basins are calculated for the points most transited by the population, establishing the zones whose landscape is most impacted by the urbanization. Based on this, the different construction types are distributed (one-family houses, blocks of houses, etc.), selecting the types of plant masses (ornamental or for integration, depending on the zone); integrating water channels, creating a recirculating water channel, green spaces and leisure facilities. The techniques of remote sensing and GIS allow for the visualization and modeling of the urbanization in 3D, simulating the virtual reality of the infrastructure as well as the actions that need to be taken for restoration, thereby providing at low cost an understanding of landscape integration before it takes place.
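    The visual-basin (viewshed) computation mentioned above reduces, at its core, to repeated line-of-sight tests against the terrain. A toy one-dimensional version of that test, on a terrain profile sampled at unit spacing (real GIS viewshed tools work on a 2D DEM, so this is only a sketch of the principle):

```python
def visible(dem, a, b):
    """Line-of-sight test on a 1-D terrain profile `dem` (heights at
    unit spacing): can an observer at index a see a target at index b?
    The target is visible unless some intermediate sample rises above
    the straight sight line between the two endpoints."""
    a, b = sorted((a, b))
    ha, hb = dem[a], dem[b]
    for i in range(a + 1, b):
        # height of the sight line above sample i (linear interpolation)
        line = ha + (hb - ha) * (i - a) / (b - a)
        if dem[i] > line:
            return False
    return True
```

    A visual basin for a point is then simply the set of samples for which `visible` returns True; intersecting the basins of the most-transited points identifies the zones where a development is most exposed.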

  20. Validation of Catquest-9SF-A Visual Disability Instrument to Evaluate Patient Function After Corneal Transplantation.

    Science.gov (United States)

    Claesson, Margareta; Armitage, W John; Byström, Berit; Montan, Per; Samolov, Branka; Stenvi, Ulf; Lundström, Mats

    2017-09-01

    Catquest-9SF is a 9-item visual disability questionnaire developed for evaluating patient-reported outcome measures after cataract surgery. The aim of this study was to use Rasch analysis to determine the responsiveness of Catquest-9SF for corneal transplant patients. Patients who underwent corneal transplantation primarily to improve vision were included. One group (n = 199) completed the Catquest-9SF questionnaire before corneal transplantation and a second independent group (n = 199) completed the questionnaire 2 years after surgery. All patients were recorded in the Swedish Cornea Registry, which provided clinical and demographic data for the study. Winsteps software v.3.91.0 (Winsteps.com, Beaverton, OR) was used to assess the fit of the Catquest-9SF data to the Rasch model. Rasch analysis showed that Catquest-9SF applied to corneal transplant patients was unidimensional (infit range, 0.73-1.32; outfit range, 0.81-1.35), and therefore, measured a single underlying construct (visual disability). The Rasch model explained 68.5% of raw variance. The response categories of the 9-item questionnaire were ordered, and the category thresholds were well defined. Item difficulty matched the level of patients' ability (0.36 logit difference between the means). Precision in terms of person separation (3.09) and person reliability (0.91) was good. Differential item functioning was notable for only 1 item (satisfaction with vision), which had a differential item functioning contrast of 1.08 logit. Rasch analysis showed that Catquest-9SF is a valid instrument for measuring visual disability in patients who have undergone corneal transplantation primarily to improve vision.

  1. Validation of Spherically Symmetric Inversion by Use of a Tomographically Reconstructed Three-Dimensional Electron Density of the Solar Corona

    Science.gov (United States)

    Wang, Tongjiang; Davila, Joseph M.

    2014-01-01

    Determining the coronal electron density by the inversion of white-light polarized brightness (pB) measurements made by coronagraphs is a classic problem in solar physics. An inversion technique based on spherically symmetric geometry (spherically symmetric inversion, SSI) was developed in the 1950s and has been widely applied to interpret various observations. However, to date there has been no study of the uncertainty estimation of this method. We here present a detailed assessment of this method using as a model a three-dimensional (3D) electron density in the corona from 1.5 to 4 solar radii, reconstructed by a tomography method from STEREO/COR1 observations during the solar minimum in February 2008 (Carrington Rotation, CR 2066). We first show in theory and observation that the spherically symmetric polynomial approximation (SSPA) method and the Van de Hulst inversion technique are equivalent. Then we assess the SSPA method using synthesized pB images from the 3D density model, and find that the SSPA density values are close to the model inputs for the streamer core near the plane of the sky (POS), with differences generally smaller than about a factor of two; the former has a lower peak but extends further in both the longitudinal and latitudinal directions than the latter. We estimate that the SSPA method may resolve coronal density structures near the POS with an angular resolution in longitude of about 50 degrees. Our results confirm the suggestion that the SSI method is applicable to the solar minimum streamer (belt), as stated in some previous studies. In addition, we demonstrate that the SSPA method can be used to reconstruct the 3D coronal density, in rough agreement with the reconstruction by tomography for a period of low solar activity (CR 2066). We suggest that the SSI method is complementary to the 3D tomographic technique in some cases, given that the development of the latter is still an ongoing research effort.

  2. Validation of exposure visualization and audible distance emission for navigated temporal bone drilling in phantoms.

    Directory of Open Access Journals (Sweden)

    Eduard H J Voormolen

    BACKGROUND: A neuronavigation interface with extended function as compared with current systems was developed to aid during temporal bone surgery. The interface, named EVADE, updates the prior anatomical image and visualizes the bone drilling process virtually in real-time without need for intra-operative imaging. Furthermore, EVADE continuously calculates the distance from the drill tip to segmented temporal bone critical structures (e.g. the sigmoid sinus and facial nerve) and produces audiovisual warnings if the surgeon drills too close to them. The aim of this study was to evaluate the accuracy and surgical utility of EVADE in physical phantoms. METHODOLOGY/PRINCIPAL FINDINGS: We performed 228 measurements assessing the position accuracy of tracking a navigated drill in the operating theatre. A mean target registration error of 1.33±0.61 mm with a maximum error of 3.04 mm was found. Five neurosurgeons each drilled two temporal bone phantoms, once using EVADE, and once using a standard neuronavigation interface. While using standard neuronavigation the surgeons damaged three modeled temporal bone critical structures; no structure was hit by surgeons utilizing EVADE. Surgeons felt better orientated and thought they had improved tumor exposure with EVADE. Furthermore, we compared the distances between surface meshes of the virtual drill cavities created by EVADE and the actual drill cavities: average maximum errors of 2.54±0.49 mm and -2.70±0.48 mm were found. CONCLUSIONS/SIGNIFICANCE: These results demonstrate that EVADE gives accurate feedback which reduces the risk of harming modeled critical structures compared to a standard neuronavigation interface during temporal bone phantom drilling.
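    The distance-warning logic described above can be sketched in a few lines: track the drill tip, compute its clearance to each segmented critical structure, and warn when the clearance drops below a threshold. The structure positions, radii, and the 3 mm threshold below are invented for illustration (EVADE's real structures are image-derived surface meshes, not spheres):

```python
import math

# Hypothetical critical structures: name -> (centre in mm, bounding radius in mm)
CRITICAL = {
    "sigmoid sinus": ((40.0, 12.0, 8.0), 4.0),
    "facial nerve": ((35.0, 20.0, 5.0), 1.5),
}
WARN_AT = 3.0  # assumed clearance threshold in mm

def clearance(tip, centre, radius):
    """Distance from the drill tip to the structure's surface."""
    return math.dist(tip, centre) - radius

def warnings(tip):
    """Names of all structures closer to the tip than the threshold."""
    return [name for name, (c, r) in CRITICAL.items()
            if clearance(tip, c, r) < WARN_AT]
```

    In the real system the warning would drive the audiovisual feedback loop each time a new tracked tip position arrives.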

  3. A Visual Analog Scale to assess anxiety in children during anesthesia induction (VAS-I): Results supporting its validity in a sample of day care surgery patients.

    Science.gov (United States)

    Berghmans, Johan M; Poley, Marten J; van der Ende, Jan; Weber, Frank; Van de Velde, Marc; Adriaenssens, Peter; Himpe, Dirk; Verhulst, Frank C; Utens, Elisabeth

    2017-09-01

    The modified Yale Preoperative Anxiety Scale is widely used to assess children's anxiety during induction of anesthesia, but requires training and its administration is time-consuming. A Visual Analog Scale, in contrast, requires no training, is easy to use and quickly completed. The aim of this study was to evaluate a Visual Analog Scale as a tool to assess anxiety during induction of anesthesia and to determine cut-offs to distinguish between anxious and nonanxious children. Four hundred and one children (1.5-16 years) scheduled for daytime surgery were included. Children's anxiety during induction was rated by parents and anesthesiologists on a Visual Analog Scale and by a trained observer on the modified Yale Preoperative Anxiety Scale. Psychometric properties assessed were: (i) concurrent validity (correlations between parents' and anesthesiologists' Visual Analog Scale and modified Yale Preoperative Anxiety Scale scores); (ii) construct validity (differences between subgroups according to the children's age and the parents' anxiety as assessed by the State-Trait Anxiety Inventory); (iii) cross-informant agreement using Bland-Altman analysis; (iv) cut-offs to distinguish between anxious and nonanxious children (reference: modified Yale Preoperative Anxiety Scale ≥30). Correlations between parents' and anesthesiologists' Visual Analog Scale and modified Yale Preoperative Anxiety Scale scores were strong (0.68 and 0.73, respectively). Visual Analog Scale scores were higher for children ≤5 years compared to children aged ≥6. Visual Analog Scale scores of children of high-anxious parents were higher than those of low-anxious parents. The mean difference between parents' and anesthesiologists' Visual Analog Scale scores was 3.6, with 95% limits of agreement (-56.1 to 63.3). To classify anxious children, cut-offs for parents (≥37 mm) and anesthesiologists (≥30 mm) were established. The present data offer preliminary support for the validity of the Visual
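    The cross-informant agreement analysis above follows the standard Bland-Altman recipe: compute the per-child difference between the two raters, then report the mean difference and the mean ± 1.96 standard deviations as the 95% limits of agreement. A minimal sketch with invented scores:

```python
import statistics

def bland_altman(rater_a, rater_b):
    """Return (mean difference, (lower, upper) 95% limits of agreement)
    between two raters' paired scores."""
    diffs = [a - b for a, b in zip(rater_a, rater_b)]
    mean = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)
```

    Wide limits of agreement, like the (-56.1, 63.3) reported, indicate that the two informants' ratings cannot be used interchangeably for an individual child even when the mean difference is small.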

  4. Validation of PSF-based 3D reconstruction for myocardial blood flow measurements with Rb-82 PET

    DEFF Research Database (Denmark)

    Tolbod, Lars Poulsen; Christensen, Nana Louise; Møller, Lone W.

    Aim: The use of PSF-based 3D reconstruction algorithms (PSF) is desirable in most clinical PET exams due to their superior image quality. Rb-82 cardiac PET is inherently noisy due to the short half-life and prompt gammas and would presumably benefit from PSF. However, the quantitative behavior of PSF ... images, filtered backprojection (FBP). Furthermore, since myocardial segmentation might be affected by image quality, two different approaches to segmentation implemented in standard software (Carimas (Turku PET Centre) and QPET (Cedars-Sinai)) are utilized. Method: 14 dynamic rest-stress Rb-82 patient scans performed on a GE Discovery 690 PET/CT were included. Images were reconstructed in an isotropic matrix (3.27x3.27x3.27 mm) using PSF (SharpIR: 3 iterations and 21 subsets) and FBP (FORE-FBP) with the same edge-preserving filter (3D Butterworth: cut-off 10 mm, power 10). Analysis: The dynamic PET ...

  5. Validation of an iPad visual analogue rating system for assessing appetite and satiety.

    Science.gov (United States)

    Brunger, Louise; Smith, Adam; Re, Roberta; Wickham, Martin; Philippides, Andrew; Watten, Phil; Yeomans, Martin R

    2015-01-01

    The study aimed to validate appetite ratings made on a new electronic device, the Apple iPad Mini, against an existing but now obsolete electronic device (the Hewlett Packard iPAQ). Healthy volunteers (9 men and 9 women) rated their appetite before and 0, 30, 60, 90 and 120 minutes after consuming both a low energy (LE: 77 kcal) and a high energy (HE: 274 kcal) beverage at breakfast on 2 non-consecutive days in counter-balanced order. Rated hunger, desire to eat and how much participants could consume were significantly lower after HE than LE on both devices, although there was better overall differentiation between HE and LE for ratings on the iPad. Rated satiation and fullness, and a composite measure combining all five ratings, were significantly higher after HE than LE on both devices. There was also evidence that differences between conditions were more significant when analysed at each time point than with an overall area under the curve (AUC) measure. Overall, these data confirm that appetite ratings made using the iPad are at least as sensitive as those on the iPAQ, and offer a new platform for researchers to collect appetite data.
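    The AUC summary measure mentioned above is conventionally computed with the trapezoidal rule over the rating time points. A minimal sketch (the times and ratings below are invented; appetite studies often also use AUC relative to baseline, which this does not show):

```python
def auc(times, ratings):
    """Area under the rating curve by the trapezoidal rule.
    times: minutes since the beverage; ratings: VAS scores at those times."""
    return sum((t2 - t1) * (r1 + r2) / 2
               for (t1, r1), (t2, r2) in zip(zip(times, ratings),
                                             zip(times[1:], ratings[1:])))

# Example: hunger sampled at 0, 30 and 60 minutes
area = auc([0, 30, 60], [50, 40, 30])
```

    Collapsing a whole curve into one number is exactly why the per-time-point analysis in the study could detect condition differences that the AUC measure diluted.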

  6. GLAM: Glycogen-derived Lactate Absorption Map for visual analysis of dense and sparse surface reconstructions of rodent brain structures on desktop systems and virtual environments

    KAUST Repository

    Agus, Marco; Boges, Daniya; Gagnon, Nicolas; Magistretti, Pierre J.; Hadwiger, Markus; Cali, Corrado

    2018-01-01

    The human brain contains about one hundred billion neurons, but these cannot work properly without ultrastructural and metabolic support. For this reason, mammalian brains host another type of cell, “glial cells”, whose role is to maintain proper conditions for efficient neuronal function. One type of glial cell, the astrocyte, is involved in particular in the metabolic support of neurons: astrocytes feed neurons with lactate, a byproduct of the metabolism of glucose, which they take up from blood vessels and store in another form, glycogen granules. These energy-storage molecules, whose morphology resembles spheres with diameters of roughly 10–80 nanometers, can be easily recognized using electron microscopy, the only technique whose resolution is high enough to resolve them. Understanding and quantifying their distribution is of particular relevance for neuroscientists who want to understand where and when neurons use energy in this form. To answer this question, we developed a visualization technique, dubbed GLAM (Glycogen-derived Lactate Absorption Map), customized for the analysis of the interaction of astrocytic glycogen with surrounding neurites in order to formulate hypotheses on the energy absorption mechanisms. The method integrates high-resolution surface reconstructions of neurites, astrocytes, and the energy sources in the form of glycogen granules from different automated serial electron microscopy methods, like focused ion beam scanning electron microscopy (FIB-SEM) or serial block-face electron microscopy (SBEM), together with an absorption map computed as a radiance transfer mechanism. The resulting visual representation provides an immediate and comprehensible illustration of the areas in which the probability of lactate shuttling is higher. The computed dataset can then be explored and quantified in 3D space, either using 3D modeling software or virtual reality environments. Domain scientists have evaluated the technique by
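    The absorption map described above assigns each neurite surface point a score reflecting how much glycogen lies nearby. A toy stand-in for that idea, using a crude distance-falloff kernel rather than the paper's actual radiance-transfer computation (all positions, radii and the cutoff below are invented; units assumed micrometres):

```python
import math

def absorption_scores(surface_points, granules, cutoff=0.08):
    """Toy GLAM-like map: each surface point accumulates a score from
    granules within `cutoff`, weighted by granule radius over distance.
    granules: list of (x, y, z, radius). This is a hypothetical kernel,
    not the radiance-transfer method used by GLAM."""
    scores = []
    for p in surface_points:
        s = 0.0
        for (x, y, z, r) in granules:
            d = math.dist(p, (x, y, z))
            if d < cutoff:
                s += r / max(d, 1e-9)  # closer and bigger granules weigh more
        scores.append(s)
    return scores
```

    Mapping such scores onto the reconstructed surface as a colour scale is what gives the at-a-glance view of likely lactate-shuttling hot spots.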

  8. Mining environmental high-throughput sequence data sets to identify divergent amplicon clusters for phylogenetic reconstruction and morphotype visualization.

    Science.gov (United States)

    Gimmler, Anna; Stoeck, Thorsten

    2015-08-01

    Environmental high-throughput sequencing (envHTS) is a very powerful tool, which in protistan ecology is predominantly used to explore diversity and its geographic and local patterns. We here used a pyrosequenced V4-SSU rDNA data set from a solar saltern pond as a test case to exploit such massive protistan amplicon data sets beyond this descriptive purpose. We therefore combined a Swarm-based blastn network including 11 579 ciliate V4 amplicons, used to identify divergent amplicon clusters, with targeted polymerase chain reaction (PCR) primer design for retrieval of the full-length small subunit of the ribosomal DNA and probe design for fluorescence in situ hybridization (FISH). This powerful strategy allows researchers to exploit envHTS data sets to (i) reveal the phylogenetic position of the taxon behind divergent amplicons; (ii) improve the phylogenetic resolution and evolutionary history of specific taxon groups; (iii) solidly assess an amplicon's (species') degree of similarity to its closest described relative; (iv) visualize the morphotype behind a divergent amplicon cluster; (v) rapidly FISH-screen many environmental samples for the geographic/habitat distribution and abundance of the respective organism; and (vi) monitor the success of enrichment strategies in live samples for cultivation and isolation of the respective organisms.

  9. Three-dimensional visualization and characterization of bone structure using reconstructed in-vitro μCT images: A pilot study for bone microarchitecture analysis

    Energy Technology Data Exchange (ETDEWEB)

    Latief, Fourier Dzar Eljabbar, E-mail: fourier@fi.itb.ac.id [Physics of Earth and Complex Systems, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia); Dewi, Dyah Ekashanti Octorina [Biomedical Engineering Research Division, School of Electrical Engineering and Informatics, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia); Shari, Mohd Aliff Bin Mohd [Faculty of Electrical Engineering, Universiti Teknologi MARA Malaysia, 40000 Shah Alam, Selangor (Malaysia)

    2014-03-24

    Micro Computed Tomography (μCT) has been largely used to perform micrometer-scale imaging of specimens, bone biopsies and small animals for the study of porous or cavity-containing objects. One of its favored applications is assessing the structural properties of bone. In this research, we perform a pilot study to visualize and characterize the bone structure of a chicken thigh bone, as well as to delineate its cortical and trabecular regions. We utilized an in-vitro μCT scanner (Skyscan 1173) to acquire three-dimensional image data of the bone. The thigh was scanned using an X-ray voltage of 45 kV and a current of 150 μA. The reconstructed images have a spatial resolution of 142.50 μm/pixel. Using image processing and analysis, i.e., segmentation by thresholding the gray values (which represent the pseudo-density) and binarizing the images, we were able to visualize each part of the bone, i.e., the cortical and trabecular regions. The total volume of the bone is 4663.63 mm³, and the surface area of the bone is 7913.42 mm². The volume of the cortical region is approximately 1988.62 mm³, which is nearly 42.64% of the total bone volume. This pilot study has confirmed that μCT is capable of quantifying 3D bone structural properties and defining its regions separately. For further development, these results can be used for understanding the pathophysiology of bone abnormality, testing the efficacy of pharmaceutical intervention, or estimating bone biomechanical properties.
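    The volume figures above follow directly from the thresholded voxel data: count the bone voxels and multiply by the voxel volume. A minimal sketch over a nested-list volume (real pipelines use array libraries and the scanner's calibrated voxel size; the toy data here are our own):

```python
def volume_stats(binary_voxels, voxel_size_mm):
    """binary_voxels: nested lists (z, y, x) of 0/1 after thresholding
    and binarization. Returns (bone volume in mm^3, bone volume fraction)."""
    flat = [v for plane in binary_voxels for row in plane for v in row]
    bone = sum(flat)
    return bone * voxel_size_mm ** 3, bone / len(flat)
```

    The cortical/trabecular split reported in the abstract is obtained the same way after labelling each bone voxel by region, e.g. 1988.62 / 4663.63 ≈ 42.64%.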

  10. Optimisation and validation of a 3D reconstruction algorithm for single photon emission computed tomography by means of the GATE simulation platform

    Energy Technology Data Exchange (ETDEWEB)

    El Bitar, Ziad [Ecole Doctorale des Sciences Fondamentales, Universite Blaise Pascal, U.F.R de Recherches Scientifiques et Techniques, 34, avenue Carnot - BP 185, 63006 Clermont-Ferrand Cedex (France); Laboratoire de Physique Corpusculaire, CNRS/IN2P3, 63177 Aubiere (France)

    2006-12-15

    Although time-consuming, Monte Carlo simulations remain an efficient tool for assessing correction methods for degrading physical effects in medical imaging. We have optimized and validated a reconstruction method named F3DMC (Fully 3D Monte Carlo), in which the physical effects degrading the image formation process are modelled using Monte Carlo methods and integrated within the system matrix. We used the Monte Carlo simulation toolbox GATE. We validated GATE in SPECT by modelling the gamma camera (Philips AXIS) used in clinical routine. Techniques of thresholding, filtering by principal component analysis and targeted reconstruction (functional regions, hybrid regions) were used to improve the precision of the system matrix and to reduce the number of simulated photons as well as the computation time required. The EGEE grid infrastructure was used to deploy the GATE simulations in order to reduce their computation time. Results obtained with F3DMC were compared with the reconstruction methods FBP, ML-EM and MLEMC for a simulated phantom, and with the OSEM-C method for a real phantom. The results show that the F3DMC method and its variants improve the restoration of activity ratios and the signal-to-noise ratio. Using the EGEE grid, a significant speed-up factor of about 300 was obtained. These results should be confirmed by studies on complex phantoms and patients, and open the door to a unified reconstruction method that could be used in SPECT and also in PET. (author)
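    The distinctive step in F3DMC is filling the system matrix by Monte Carlo simulation; the reconstruction itself is the standard ML-EM update, in which each voxel estimate is multiplied by the back-projected ratio of measured to forward-projected counts. A dense, pure-Python sketch of that iteration (illustration only; real matrices are huge and sparse):

```python
def mlem(A, y, n_iter=50):
    """ML-EM reconstruction with a precomputed system matrix A
    (rows: detector bins, cols: voxels) and measured counts y.
    In F3DMC, A would be estimated by Monte Carlo simulation so that
    it embeds the degrading physical effects."""
    nb, nv = len(A), len(A[0])
    x = [1.0] * nv  # uniform non-negative initial estimate
    # per-voxel sensitivity: total probability of detection
    sens = [sum(A[i][j] for i in range(nb)) for j in range(nv)]
    for _ in range(n_iter):
        # forward projection of the current estimate
        proj = [sum(A[i][j] * x[j] for j in range(nv)) for i in range(nb)]
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(nb)]
        # multiplicative update with back-projected ratios
        x = [x[j] * sum(A[i][j] * ratio[i] for i in range(nb)) / sens[j]
             if sens[j] > 0 else 0.0 for j in range(nv)]
    return x
```

    Because the update is multiplicative, the estimate stays non-negative, and a more accurate (Monte Carlo) A directly improves quantification, which is the premise of the F3DMC approach.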

  11. Validation of the GATE simulation platform in single photon emission computed tomography and application to the development of a complete 3-dimensional reconstruction algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lazaro, D

    2003-10-01

    Monte Carlo simulations are currently considered in nuclear medical imaging as a powerful tool to design and optimize detection systems, and also to assess reconstruction algorithms and correction methods for degrading physical effects. Among the many simulators available, none is considered a standard in nuclear medical imaging: this fact motivated the development of a new generic Monte Carlo simulation platform (GATE), based on GEANT4 and dedicated to SPECT/PET (single photon emission computed tomography / positron emission tomography) applications. During this thesis we participated in the development of the GATE platform within an international collaboration. GATE was validated in SPECT by modelling two gamma cameras with different geometries, one dedicated to small-animal imaging and the other used in a clinical context (Philips AXIS), and by comparing the results of GATE simulations with experimental data. The simulation results accurately reproduce the measured performances of both gamma cameras. The GATE platform was then used to develop a new 3-dimensional reconstruction method, F3DMC (fully 3-dimensional Monte Carlo), which consists in computing with Monte Carlo simulation the transition matrix used in an iterative reconstruction algorithm (in this case, ML-EM), including within the transition matrix the main physical effects degrading the image formation process. The results obtained with the F3DMC method were compared to those obtained with three other more conventional methods (FBP, MLEM, MLEMC) for different phantoms. The results of this study show that F3DMC improves the reconstruction efficiency, the spatial resolution and the signal-to-noise ratio with satisfactory quantification of the images. These results should be confirmed by clinical experiments and open the door to a unified reconstruction method, which could be applied in SPECT but also in PET. (author)

  12. Test of Gross Motor Development-3 (TGMD-3) with the Use of Visual Supports for Children with Autism Spectrum Disorder: Validity and Reliability

    Science.gov (United States)

    Allen, K. A.; Bredero, B.; Van Damme, T.; Ulrich, D. A.; Simons, J.

    2017-01-01

    The validity and reliability of the Test of Gross Motor Development-3 (TGMD-3) were measured, taking into consideration the preference for visual learning of children with autism spectrum disorder (ASD). The TGMD-3 was administered to 14 children with ASD (4-10 years) and 21 age-matched typically developing children under two conditions: TGMD-3…

  13. 'The surface management system' (SuMS) database: a surface-based database to aid cortical surface reconstruction, visualization and analysis

    Science.gov (United States)

    Dickson, J.; Drury, H.; Van Essen, D. C.

    2001-01-01

    Surface reconstructions of the cerebral cortex are increasingly widely used in the analysis and visualization of cortical structure, function and connectivity. From a neuroinformatics perspective, dealing with surface-related data poses a number of challenges. These include the multiplicity of configurations in which surfaces are routinely viewed (e.g. inflated maps, spheres and flat maps), plus the diversity of experimental data that can be represented on any given surface. To address these challenges, we have developed a surface management system (SuMS) that allows automated storage and retrieval of complex surface-related datasets. SuMS provides a systematic framework for the classification, storage and retrieval of many types of surface-related data and associated volume data. Within this classification framework, it serves as a version-control system capable of handling large numbers of surface and volume datasets. With built-in database management system support, SuMS provides rapid search and retrieval capabilities across all the datasets, while also incorporating multiple security levels to regulate access. SuMS is implemented in Java and can be accessed via a Web interface (WebSuMS) or using downloaded client software. Thus, SuMS is well positioned to act as a multiplatform, multi-user 'surface request broker' for the neuroscience community.

  14. Reconstruction of MODIS total suspended matter time series maps by DINEOF and validation with autonomous platform data

    Science.gov (United States)

    Nechad, Bouchra; Alvera-Azcaràte, Aida; Ruddick, Kevin; Greenwood, Naomi

    2011-08-01

    In situ measurements of total suspended matter (TSM) over the period 2003-2006, collected with two autonomous platforms measuring optical backscatter (OBS) in the southern North Sea, operated by the Centre for Environment, Fisheries and Aquaculture Science (Cefas), are used to assess the accuracy of TSM time series extracted from satellite data. Since there are gaps in the remote sensing (RS) data, due mainly to cloud cover, the Data Interpolating Empirical Orthogonal Functions (DINEOF) method is used to fill in the TSM time series and build a continuous daily "recoloured" dataset. The RS datasets consist of TSM maps derived from MODIS imagery using the bio-optical model of Nechad et al. (Rem Sens Environ 114: 854-866, 2010). In this study, the DINEOF time series are compared to the in situ OBS measured in moderately to very turbid waters at West Gabbard and Warp Anchorage, respectively, in the southern North Sea. The discrepancies between instantaneous RS, DINEOF-filled RS data and Cefas data are analysed in terms of TSM algorithm uncertainties, space-time variability and DINEOF reconstruction uncertainty.
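
    The DINEOF procedure referenced above fills cloud gaps by iteratively reconstructing missing pixels from a truncated EOF (SVD) decomposition of the space-time data matrix. The following is a minimal sketch of that idea, not the implementation used in the study; the function name, mode count and iteration count are illustrative:

```python
import numpy as np

def dineof_fill(data, n_modes=2, n_iter=50):
    """Fill NaN gaps in a (time x space) matrix by iterative truncated SVD.

    Gaps start at the mean of the observed values and are repeatedly
    replaced by the leading-mode reconstruction until it stabilizes.
    """
    data = np.asarray(data, dtype=float)
    mask = np.isnan(data)
    filled = np.where(mask, np.nanmean(data), data)
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        recon = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes]
        filled[mask] = recon[mask]  # only the gap entries are updated
    return filled
```

    In the full DINEOF algorithm, the optimal number of retained modes is chosen by cross-validation against deliberately withheld data points.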

  15. Validity and reliability of the Rosenberg Self-Esteem Scale-Thai version as compared to the Self-Esteem Visual Analog Scale.

    Science.gov (United States)

    Piyavhatkul, Nawanant; Aroonpongpaisal, Suwanna; Patjanasoontorn, Niramol; Rongbutsri, Somchit; Maneeganondh, Somchit; Pimpanit, Wijitra

    2011-07-01

    To compare the validity and reliability of the Thai version of the Rosenberg Self-Esteem Scale with the Self-Esteem Visual Analog Scale. The Rosenberg Self-Esteem Scale was translated into Thai and its content validity checked by back-translation. The reliability of the Rosenberg Self-Esteem Scale compared with the Self-Esteem Visual Analog Scale was then tested between February and March 2008 on 270 volunteers, including 135 patients with psychiatric illness and 135 normal volunteers. The authors analyzed the internal consistency and factor structure of the Rosenberg Self-Esteem Scale-Thai version and the correlation between it and the Visual Analog Scale. The Cronbach's alpha for the Rosenberg Self-Esteem Scale-Thai version was 0.849 and the Pearson's correlation between it and the Self-Esteem Visual Analog Scale 0.618 (p = 0.01). Two factors, viz. the positively and negatively framed items, from the Rosenberg Self-Esteem Scale-Thai version accounted for 44.04% and 12.10% of the variance, respectively. The Rosenberg Self-Esteem Scale-Thai version has acceptable reliability. The Self-Esteem Visual Analog Scale provides an effective measure of self-esteem.
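
    The Cronbach's alpha reported above (0.849) is a function of the individual item variances and the variance of the total score. A minimal sketch of the computation, assuming a respondents-by-items score matrix (illustrative code, not the study's analysis software):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)
```

    Perfectly correlated items give alpha = 1; a value around 0.85, as reported here, is conventionally read as good internal consistency.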

  16. Three-dimensional inversion recovery manganese-enhanced MRI of mouse brain using super-resolution reconstruction to visualize nuclei involved in higher brain function.

    Science.gov (United States)

    Poole, Dana S; Plenge, Esben; Poot, Dirk H J; Lakke, Egbert A J F; Niessen, Wiro J; Meijering, Erik; van der Weerd, Louise

    2014-07-01

    The visualization of activity in mouse brain using inversion recovery spin echo (IR-SE) manganese-enhanced MRI (MEMRI) provides unique contrast, but suffers from poor resolution in the slice-encoding direction. Super-resolution reconstruction (SRR) is a resolution-enhancing post-processing technique in which multiple low-resolution slice stacks are combined into a single volume of high isotropic resolution using computational methods. In this study, we investigated, first, whether SRR can improve the three-dimensional resolution of IR-SE MEMRI in the slice selection direction, whilst maintaining or improving the contrast-to-noise ratio of the two-dimensional slice stacks. Second, the contrast-to-noise ratio of SRR IR-SE MEMRI was compared with a conventional three-dimensional gradient echo (GE) acquisition. Quantitative experiments were performed on a phantom containing compartments of various manganese concentrations. The results showed that, with comparable scan times, the signal-to-noise ratio of three-dimensional GE acquisition is higher than that of SRR IR-SE MEMRI. However, the contrast-to-noise ratio between different compartments can be superior with SRR IR-SE MEMRI, depending on the chosen inversion time. In vivo experiments were performed in mice receiving manganese using an implanted osmotic pump. The results showed that SRR works well as a resolution-enhancing technique in IR-SE MEMRI experiments. In addition, the SRR image also shows a number of brain structures that are more clearly discernible from the surrounding tissues than in three-dimensional GE acquisition, including a number of nuclei with specific higher brain functions, such as memory, stress, anxiety and reward behavior. Copyright © 2014 John Wiley & Sons, Ltd.

  17. Development and validation of a numerical method for computing two-phase flows without interface reconstruction. Application to Taylor bubbles dynamics

    International Nuclear Information System (INIS)

    Benkenida, Adlene

    1999-01-01

    This work is devoted to the development and the use of a numerical code aimed at computing complex two-phase flows in which the topology of the interfaces evolves in time. The solution strategy makes use of a fixed grid on which interfaces evolve freely. The governing equations of the model (one-fluid model) are obtained by adding the local, instantaneous Navier-Stokes equations of each phase after a spatial filtering. The use of an Eulerian approach yields difficulties in estimating several of the two-phase quantities, especially the viscous stress tensor. This problem is overcome by deriving and validating an expression of the stress tensor valid for any Eulerian treatment and whatever the orientation of the interfaces with respect to the grid. To simplify the governing equations of the model, it is assumed that no phase change occurs, that no local slip exists between both phases, and that no small-scale turbulence is present. The possibility of removing some of these hypotheses is discussed, especially with the future aim of developing a large-eddy simulation approach to two-phase flows in which the motion and the effects of small-scale two-phase structures could be taken into account. Interface transport is performed by using an FCT front-capturing method without any interface reconstruction procedure. It is shown through several tests that the version of Zalesak's (1979) algorithm in which each direction is treated independently yields the best results, even though a tendency for interfacial regions to thicken artificially is observed in regions with high stretching rates. The code is validated by performing simulations of some simple two-phase flows and by comparing numerical results with available analytical solutions, experiments, or previous computations. Among the results of these tests, those concerning the bouncing of a bubble on a rigid wall are the most original and shed new light on this phenomenon, especially by revealing the time evolution of the

  18. Differences in the validity of a visual estimation method for determining patients' meal intake between various meal types and supplied food items.

    Science.gov (United States)

    Kawasaki, Yui; Akamatsu, Rie; Tamaura, Yuki; Sakai, Masashi; Fujiwara, Keiko; Tsutsuura, Satomi

    2018-02-12

    The aim of this study was to examine differences in the validity of a visual estimation method for determining patients' meal intake between various meal types and supplied food items in hospitals, and to identify factors influencing the validity of the method. Information on the dietary intake of the patients in these hospitals was obtained by two procedures: visual assessment of the meal trays at the time of their clearing by the attending nursing staff, and weighing conducted by researchers. The following criteria were set for the target trays: (A) standard or therapeutic meals, monitored by a doctor, for energy and/or protein and/or sodium; (B) regular, bite-sized, minced and pureed meal textures; and (C) half-portion meals. Visual assessment results were tested for validity by comparison with the corresponding weighing results. Differences between the two methods indicated the estimated and absolute values of nutrient intake. A total of 255 (76.1%) of the 335 possible trays were included in the analysis, and the results indicated that the energy consumption estimates by the visual and weighing procedures were not significantly different (412 ± 173 kcal, p = 0.15). However, the mean protein consumption was significantly different (16.3 ± 6.7 g/tray). Trays with added supplied food items were significantly misestimated for energy intake (66 ± 58 kcal/tray) compared to trays with no additions (32 ± 39 kcal/tray), and added supplied food items were significantly associated with increased odds of a difference between the two methods (OR: 3.84; 95% confidence interval [CI]: 1.07-13.85). There were high correlations between the visual estimation method and the weighing method for measuring patients' dietary intake across various meal types and textures, except for meals with added supplied food items. Nursing staff need to be attentive to supplied food items. Copyright © 2018 Elsevier Ltd and European Society for Clinical

  19. Calibration and Validation of a Detailed Architectural Canopy Model Reconstruction for the Simulation of Synthetic Hemispherical Images and Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Magnus Bremer

    2017-02-01

    Canopy density measures such as the Leaf Area Index (LAI) have become standardized mapping products derived from airborne and terrestrial Light Detection And Ranging (aLiDAR and tLiDAR, respectively) data. A specific application of LiDAR point clouds is their integration into radiative transfer models (RTM) of varying complexity. Using, e.g., ray tracing, this allows flexible simulations of sub-canopy light conditions and the simulation of various sensors, such as virtual hemispherical images or waveform LiDAR, on a virtual forest plot. However, the direct use of LiDAR data in RTMs shows some limitations in the handling of noise, the derivation of surface areas per LiDAR point, and the discrimination of solid and porous canopy elements. In order to address these issues, a strategy upgrading tLiDAR data and Digital Hemispherical Photographs (DHP) into plausible 3D architectural canopy models is suggested. The presented reconstruction workflow creates an almost unbiased virtual 3D representation of branch and leaf surface distributions, minimizing systematic errors due to the object–sensor relationship. The models are calibrated and validated using DHPs. Using the 3D models for simulations, their capabilities for the description of leaf density distributions and the simulation of aLiDAR and DHP signatures are shown. At an experimental test site, the suitability of the models for systematically simulating and evaluating aLiDAR-based LAI predictions under various scan settings is demonstrated. This strategy makes it possible to show the importance of laser point sampling density, as well as the diversity of scan angles, and their quantitative effect on error margins.
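
    Hemispherical photographs such as the DHPs simulated above are commonly converted to LAI by inverting Beer's law from the measured canopy gap fraction. A minimal sketch of that standard inversion (not the paper's workflow; the function name is illustrative, and the extinction coefficient k depends on the leaf angle distribution and view zenith angle):

```python
import math

def lai_from_gap_fraction(gap_fraction, extinction_k=0.5):
    """Invert Beer's law, LAI = -ln(P0) / k, for canopy gap fraction P0."""
    if not 0.0 < gap_fraction <= 1.0:
        raise ValueError("gap fraction must be in (0, 1]")
    return -math.log(gap_fraction) / extinction_k
```

    For example, a gap fraction of exp(-1) with k = 0.5 yields an effective LAI of 2; real DHP processing averages this inversion over several zenith rings.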

  20. Three-dimensional ICT reconstruction

    International Nuclear Information System (INIS)

    Zhang Aidong; Li Ju; Chen Fa; Sun Lingxia

    2005-01-01

    The three-dimensional ICT reconstruction method is a hot topic of recent ICT technology research. In this context, qualified visual three-dimensional ICT pictures are achieved through the accumulation of multiple two-dimensional images, combined with a thresholding method and linear interpolation. Images of the reconstructed pictures in different directions and at different positions are obtained by rotation and interception, respectively. This convenient and quick method is significantly instructive for more complicated three-dimensional reconstruction of ICT images. (authors)

  1. Three-dimensional ICT reconstruction

    International Nuclear Information System (INIS)

    Zhang Aidong; Li Ju; Chen Fa; Sun Lingxia

    2004-01-01

    The three-dimensional ICT reconstruction method is a hot topic of recent ICT technology research. In this context, qualified visual three-dimensional ICT pictures are achieved through the ordered accumulation of multiple two-dimensional images, combined with a thresholding method and linear interpolation. Images of the reconstructed pictures in different directions and at different positions are obtained by rotation and interception, respectively. This convenient and quick method is significantly instructive for more complicated three-dimensional reconstruction of ICT images. (authors)

  2. Cross-validation of two commercial methods for volumetric high-resolution dose reconstruction on a phantom for non-coplanar VMAT beams

    International Nuclear Information System (INIS)

    Feygelman, Vladimir; Stambaugh, Cassandra; Opp, Daniel; Zhang, Geoffrey; Moros, Eduardo G.; Nelms, Benjamin E.

    2014-01-01

    Background and purpose: Delta 4 (ScandiDos AB, Uppsala, Sweden) and ArcCHECK with 3DVH software (Sun Nuclear Corp., Melbourne, FL, USA) are commercial quasi-three-dimensional diode dosimetry arrays capable of volumetric measurement-guided dose reconstruction. A method to reconstruct dose for non-coplanar VMAT beams with 3DVH is described. The Delta 4 3D dose reconstruction on its own phantom for VMAT delivery has not been thoroughly evaluated previously, and we do so by comparison with 3DVH. Materials and methods: Reconstructed volumetric doses for VMAT plans delivered with different table angles were compared between the Delta 4 and 3DVH using gamma analysis. Results: The average γ (2% local dose-error normalization/2 mm) passing rate comparing the directly measured Delta 4 diode dose with 3DVH was 98.2 ± 1.6% (1 SD). The average passing rate for the full volumetric comparison of the reconstructed doses on a homogeneous cylindrical phantom was 95.6 ± 1.5%. No dependence on the table angle was observed. Conclusions: The modified 3DVH algorithm is capable of 3D VMAT dose reconstruction on an arbitrary volume for the full range of table angles. Our comparison results between different dosimeters make a compelling case for the use of electronic arrays with high-resolution 3D dose reconstruction as the primary means of evaluating spatial dose distributions during IMRT/VMAT verification.
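
    The gamma analysis cited above scores each reference dose point by its closest match in the evaluated distribution under combined dose-difference and distance-to-agreement criteria (here 2% local / 2 mm). A simplified one-dimensional sketch of the metric, assuming both distributions share a grid (real tools work in 3D and interpolate between points; the function name is illustrative):

```python
import numpy as np

def gamma_pass_rate(ref, ev, dx, dose_tol=0.02, dist_tol=2.0, local=True):
    """Fraction of reference points with gamma <= 1 (simplified 1D form).

    ref, ev : dose arrays on a common grid with spacing dx (mm).
    dose_tol: fractional dose criterion (0.02 = 2%).
    dist_tol: distance-to-agreement criterion in mm.
    """
    ref = np.asarray(ref, dtype=float)
    ev = np.asarray(ev, dtype=float)
    x = np.arange(len(ref)) * dx
    passed = 0
    for i, d_ref in enumerate(ref):
        norm = d_ref if local else ref.max()   # local vs global normalization
        dd = (ev - d_ref) / (dose_tol * norm)  # dose-difference term
        dist = (x - x[i]) / dist_tol           # spatial-distance term
        gamma = np.sqrt(dd ** 2 + dist ** 2).min()
        passed += gamma <= 1.0
    return passed / len(ref)
```

    Identical distributions pass at 100%; the 98.2% and 95.6% rates above mean a small fraction of points exceeded the combined tolerance ellipsoid.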

  3. [Conception and Content Validation of a Questionnaire Relating to the Potential Need for Information of Visually Impaired Persons with Regard to Services and Contact Persons].

    Science.gov (United States)

    Hahn, U; Hechler, T; Witt, U; Krummenauer, F

    2015-12-01

    A questionnaire was drafted to identify the needs of visually impaired persons and to optimize their access to non-medical support and services. Subjects had to rate a list of 15 everyday activities that are typically affected by visual impairment (for example, being able to orient themselves in the home environment), by indicating the degree to which they perceive each activity to be affected, using a four-stage scale. They had to evaluate these aspects by means of a relevance assessment. The needs profile derived from this is then correlated with individualized information for assistance and support. The questionnaire shall be made available for use by subjects through advisers in some ophthalmic practices and via the internet. The validity of the content of the proposed tool was evaluated on the basis of a survey of 59 experts in the fields of medical, optical and psychological care and of persons involved in training initiatives. The experts were asked to rate the activities by relevance and clarity of the wording and to propose methods to further develop and optimize the content. The validity of the content was quantified according to a process adopted in the literature, based on the parameters Interrater Agreement (IRA) and Content Validity Index (CVI). The results of all responses (n = 19) and the sub-group analysis suggest that the questionnaire adequately reflects the potential needs profile of visually impaired persons. Overall, there was at least 80% agreement among the 19 experts for 93% of the proposed parameterisation of the activities relating to the relevance and clarity of the wording. Individual proposals for optimization of the design of the questionnaire were adopted. Georg Thieme Verlag KG Stuttgart · New York.
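
    The Content Validity Index (CVI) used above is, at item level, simply the proportion of experts rating an item as relevant (3 or 4 on the four-stage scale). A minimal sketch with illustrative function names (the paper's exact parameterisation of IRA and CVI may differ):

```python
def item_cvi(ratings):
    """Item-level CVI: share of experts rating the item 3 or 4 on a 1-4 scale."""
    return sum(r >= 3 for r in ratings) / len(ratings)

def scale_cvi(rating_matrix):
    """Scale-level CVI (averaging approach): mean of the item-level CVIs."""
    cvis = [item_cvi(item_ratings) for item_ratings in rating_matrix]
    return sum(cvis) / len(cvis)
```

    An I-CVI of roughly 0.78-0.80 or higher is a commonly cited acceptance threshold, consistent with the 80% agreement criterion reported above.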

  4. Convergent validity of the Integrated Visual and Auditory Continuous Performance Test (IVA+Plus): associations with working memory, processing speed, and behavioral ratings.

    Science.gov (United States)

    Arble, Eamonn; Kuentzel, Jeffrey; Barnett, Douglas

    2014-05-01

    Though the Integrated Visual and Auditory Continuous Performance Test (IVA + Plus) is commonly used by researchers and clinicians, few investigations have assessed its convergent and discriminant validity, especially with regard to its use with children. The present study details correlates of the IVA + Plus using measures of cognitive ability and ratings of child behavior (parent and teacher), drawing upon a sample of 90 psychoeducational evaluations. Scores from the IVA + Plus correlated significantly with the Working Memory and Processing Speed Indexes from the Fourth Edition of the Wechsler Intelligence Scales for Children (WISC-IV), though fewer and weaker significant correlations were seen with behavior ratings scales, and significant associations also occurred with WISC-IV Verbal Comprehension and Perceptual Reasoning. The overall pattern of relations is supportive of the validity of the IVA + Plus; however, general cognitive ability was associated with better performance on most of the primary scores of the IVA + Plus, suggesting that interpretation should take intelligence into account.

  5. A Multi-Data Source and Multi-Sensor Approach for the 3D Reconstruction and Web Visualization of a Complex Archaelogical Site: The Case Study of “Tolmo De Minateda”

    Directory of Open Access Journals (Sweden)

    Jose Alberto Torres-Martínez

    2016-06-01

    The complexity of archaeological sites hinders the creation of an integral model using current Geomatic techniques (i.e., aerial photogrammetry, close-range photogrammetry and terrestrial laser scanning) individually. A multi-sensor approach is therefore proposed as the optimal solution to provide a 3D reconstruction and visualization of these complex sites. Sensor registration represents a key milestone when automation is required and when aerial and terrestrial datasets must be integrated. To this end, several problems must be solved: coordinate system definition, geo-referencing, co-registration of point clouds, geometric and radiometric homogeneity, etc. The proposed multi-data source and multi-sensor approach is applied to the case study of the "Tolmo de Minateda" archaeological site. A total extension of 9 ha is reconstructed, with an adapted level of detail, using an ultralight aerial platform (paratrike), an unmanned aerial vehicle, a terrestrial laser scanner and terrestrial photogrammetry. Finally, a mobile device (e.g., tablet or smartphone) has been used to integrate, optimize and visualize all this information, providing added value to archaeologists and heritage managers who want an efficient tool for their work at the site, and even to non-expert users who just want to know more about the archaeological settlement.

  6. Separate visualization of endolymphatic space, perilymphatic space and bone by a single pulse sequence; 3D-inversion recovery imaging utilizing real reconstruction after intratympanic Gd-DTPA administration at 3 tesla

    International Nuclear Information System (INIS)

    Naganawa, Shinji; Satake, Hiroko; Kawamura, Minako; Fukatsu, Hiroshi; Sone, Michihiko; Nakashima, Tsutomu

    2008-01-01

    Twenty-four hours after intratympanic administration of gadolinium contrast material (Gd), the Gd was distributed mainly in the perilymphatic space. Three-dimensional FLAIR can differentiate endolymphatic space from perilymphatic space, but not from surrounding bone. The purpose of this study was to evaluate whether 3D inversion-recovery turbo spin echo (3D-IR TSE) with real reconstruction could separate the signals of perilymphatic space (positive value), endolymphatic space (negative value) and bone (near zero) by setting the inversion time between the null point of Gd-containing perilymph fluid and that of the endolymph fluid without Gd. Thirteen patients with clinically suspected endolymphatic hydrops underwent intratympanic Gd injection and were scanned at 3 T. A 3D FLAIR and 3D-IR TSE with real reconstruction were obtained. In all patients, low signal of endolymphatic space in the labyrinth on 3D FLAIR was observed in the anatomically appropriate position, and it showed negative signal on 3D-IR TSE. The low signal area of surrounding bone on 3D FLAIR showed near zero signal on 3D-IR TSE. Gd-containing perilymphatic space showed high signal on 3D-IR TSE. In conclusion, by optimizing the inversion time, endolymphatic space, perilymphatic space and surrounding bone can be separately visualized on a single image using a 3D-IR TSE with real reconstruction. (orig.)

  7. 3D visibility analysis as a tool to validate ancient theatre reconstructions: the case of the large Roman theatre of Gortyn

    Directory of Open Access Journals (Sweden)

    Maria Cristina Manzetti

    2016-11-01

    With the diffusion of Virtual Archaeology, many projects in the field of Cultural Heritage attempt to virtually reconstruct historical buildings of different types. Unfortunately, some of these 3D reconstructions still have as their principal aim to impress external users, while the correct interpretation of the buildings modeled is much more important in the domain of archaeological research. The situation is still more critical when we encounter a reconstruction of a monument which is no longer visible, or of which only a few architectural remains survive. The main purpose of this paper is to introduce an innovative methodology to verify hypothetical scenarios of 3D architectural reconstructions, specifically for ancient theatres. Very recently, 3D visibility analysis has been applied to archaeological contexts using ArcGIS, in particular in social-urban studies. In this paper, visibility analysis in 3D contexts is used as an additional instrument for correctly reconstructing the architectural elements of the large Roman theatre of Gortyn, in Crete. The results indicate that the level of visibility of the stage, and consequently of the presumed actors, from some of the more representative sectors of the cavea is of crucial importance for arriving at a correct reconstruction model of the theatre.

  8. Sexing sirenians: validation of visual and molecular sex determination in both wild dugongs (Dugong dugon) and Florida manatees (Trichechus manatus latirostris). Aquatic Mammals 35(2):187-192.

    Science.gov (United States)

    Bonde, Robert K.; Lanyon, J.; Sneath, H.; Ovenden, J.; Broderick, D.

    2009-01-01

    Sexing wild marine mammals that show little to no sexual dimorphism is challenging. For sirenians that are difficult to catch or approach closely, molecular sexing from tissue biopsies offers an alternative method to visual discrimination. This paper reports the results of a field study to validate the use of two sexing methods: (1) visual discrimination of sex vs (2) molecular sexing based on a multiplex PCR assay which amplifies the male-specific SRY gene and differentiates ZFX and ZFY gametologues. Skin samples from 628 dugongs (Dugong dugon) and 100 Florida manatees (Trichechus manatus latirostris) were analysed and assigned as male or female based on molecular sex. These individuals were also assigned a sex based on either direct observation of the genitalia and/or the association of the individual with a calf. Individuals of both species showed 93 to 96% congruence between visual and molecular sexing. For the remaining 4 to 7%, the discrepancies could be explained by human error. To mitigate this error rate, we recommend using both of these robust techniques, with routine inclusion of sex primers into microsatellite panels employed for identity, along with trained field observers and stringent sample handling.

  9. Sexing sirenians: Validation of visual and molecular sex determination in both wild dugongs (Dugong dugon) and Florida manatees (Trichechus manatus latirostris)

    Science.gov (United States)

    Lanyon, J.M.; Sneath, H.L.; Ovenden, J.R.; Broderick, D.; Bonde, R.K.

    2009-01-01

    Sexing wild marine mammals that show little to no sexual dimorphism is challenging. For sirenians that are difficult to catch or approach closely, molecular sexing from tissue biopsies offers an alternative method to visual discrimination. This paper reports the results of a field study to validate the use of two sexing methods: (1) visual discrimination of sex vs (2) molecular sexing based on a multiplex PCR assay which amplifies the male-specific SRY gene and differentiates ZFX and ZFY gametologues. Skin samples from 628 dugongs (Dugong dugon) and 100 Florida manatees (Trichechus manatus latirostris) were analysed and assigned as male or female based on molecular sex. These individuals were also assigned a sex based on either direct observation of the genitalia and/or the association of the individual with a calf. Individuals of both species showed 93 to 96% congruence between visual and molecular sexing. For the remaining 4 to 7%, the discrepancies could be explained by human error. To mitigate this error rate, we recommend using both of these robust techniques, with routine inclusion of sex primers into microsatellite panels employed for identity, along with trained field observers and stringent sample handling.

  10. Validation Test Report For The CRWMS Analysis and Logistics Visually Interactive Model Version 3.0, 10074-Vtr-3.0-00

    International Nuclear Information System (INIS)

    Gillespie, S.

    2000-01-01

    This report describes the tests performed to validate the CRWMS "Analysis and Logistics Visually Interactive" Model (CALVIN) Version 3.0 (V3.0) computer code (STN: 10074-3.0-00). To validate the code, a series of test cases was developed in the CALVIN V3.0 Validation Test Plan (CRWMS M and O 1999a) that exercises the principal calculation models and options of CALVIN V3.0. Twenty-five test cases were developed: 18 logistics test cases and 7 cost test cases. These cases test the features of CALVIN in a sequential manner, so that the validation of each test case is used to demonstrate the accuracy of the input to subsequent calculations. Where necessary, the test cases utilize reduced-size data tables to make the hand calculations used to verify the results more tractable, while still adequately testing the code's capabilities. Acceptance criteria were established for the logistics and cost test cases in the Validation Test Plan (CRWMS M and O 1999a). The logistics test cases were developed to test the following CALVIN calculation models: spent nuclear fuel (SNF) and reactivity calculations; options for altering reactor life; adjustment of commercial SNF (CSNF) acceptance rates for fiscal-year calculations and mid-year acceptance start; fuel selection, transportation cask loading, and shipping to the Monitored Geologic Repository (MGR); transportation cask shipping to and storage at an Interim Storage Facility (ISF); reactor pool allocation options; and disposal options at the MGR. Two types of cost test cases were developed: cases to validate the detailed transportation costs, and cases to validate the costs associated with the Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) and Regional Servicing Contractors (RSCs). For each test case, values calculated using Microsoft Excel 97 worksheets were compared to CALVIN V3.0 scenarios with the same input data and assumptions. All of the test case results compare with

  11. Validation of Fourier decomposition MRI with dynamic contrast-enhanced MRI using visual and automated scoring of pulmonary perfusion in young cystic fibrosis patients

    International Nuclear Information System (INIS)

    Bauman, Grzegorz; Puderbach, Michael; Heimann, Tobias; Kopp-Schneider, Annette; Fritzsching, Eva; Mall, Marcus A.; Eichinger, Monika

    2013-01-01

    Purpose: To validate Fourier decomposition (FD) magnetic resonance (MR) imaging in cystic fibrosis (CF) patients with dynamic contrast-enhanced (DCE) MR imaging. Materials and methods: Thirty-four CF patients (median age 4.08 years; range 0.16–30) were examined on a 1.5-T MR imager. For FD MR imaging, sets of lung images were acquired using an untriggered two-dimensional balanced steady-state free precession sequence. Perfusion-weighted images were obtained after correction of the breathing displacement and Fourier analysis of the cardiac frequency from the time-resolved data sets. DCE data sets were acquired with a three-dimensional gradient echo sequence. The FD and DCE images were visually assessed for perfusion defects by two readers independently (R1, R2) using a field-based scoring system (0–12). Software was used for perfusion impairment evaluation (R3) of segmented lung images using an automated threshold. Both imaging and evaluation methods were compared for agreement and tested for concordance between FD and DCE imaging. Results: Good or acceptable intra-reader agreement was found between FD and DCE for visual and automated scoring: R1 upper and lower limits of agreement (ULA, LLA): 2.72, −2.5; R2: ULA, LLA: ±2.5; R3: ULA: 1.5, LLA: −2. A high concordance was found between visual and automated scoring (FD: 70–80%, DCE: 73–84%). Conclusions: FD MR imaging provides equivalent diagnostic information to DCE MR imaging in CF patients. Automated assessment of regional perfusion defects using FD and DCE MR imaging is comparable to visual scoring but allows for percentage-based analysis.
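
    The upper and lower limits of agreement (ULA, LLA) reported above are Bland-Altman 95% limits: the mean of the paired score differences plus or minus 1.96 standard deviations. A minimal sketch of the computation (illustrative function name, not the study's analysis code):

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement for paired scores a and b."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    m, s = diff.mean(), diff.std(ddof=1)
    return m - 1.96 * s, m + 1.96 * s   # (LLA, ULA)
```

    Under approximate normality of the differences, about 95% of paired FD-DCE score differences are expected to fall between the two limits.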

  12. Validation of Fourier decomposition MRI with dynamic contrast-enhanced MRI using visual and automated scoring of pulmonary perfusion in young cystic fibrosis patients

    Energy Technology Data Exchange (ETDEWEB)

    Bauman, Grzegorz, E-mail: g.bauman@dkfz.de [German Cancer Research Center, Division of Medical Physics in Radiology, Im Neuenheimer Feld 223, 69120 Heidelberg (Germany); Puderbach, Michael, E-mail: m.puderbach@dkfz.de [Chest Clinics at the University of Heidelberg, Clinics for Interventional and Diagnostic Radiology, Amalienstr. 5, 69126 Heidelberg (Germany); Translational Lung Research Center Heidelberg (TLRC), Member of the German Center for Lung Research (Germany); Heimann, Tobias, E-mail: t.heimann@dkfz.de [German Cancer Research Center, Division of Medical and Biological Informatics, Im Neuenheimer Feld 223, 69120 Heidelberg (Germany); Kopp-Schneider, Annette, E-mail: kopp@dkfz.de [German Cancer Research Center, Division of Biostatistics, Im Neuenheimer Feld 223, 69120 Heidelberg (Germany); Fritzsching, Eva, E-mail: eva.fritzsching@med.uni-heidelberg.de [University Hospital Heidelberg, Department of Translational Pulmonology and Division of Pediatric Pulmonology and Allergy and Cystic Fibrosis Center, Im Neuenheimer Feld 430, Heidelberg (Germany); Mall, Marcus A., E-mail: marcus.mall@med.uni-heidelberg.de [Translational Lung Research Center Heidelberg (TLRC), Member of the German Center for Lung Research (Germany); University Hospital Heidelberg, Department of Translational Pulmonology and Division of Pediatric Pulmonology and Allergy and Cystic Fibrosis Center, Im Neuenheimer Feld 430, Heidelberg (Germany); Eichinger, Monika, E-mail: m.eichinger@dkfz.de [Translational Lung Research Center Heidelberg (TLRC), Member of the German Center for Lung Research (Germany); German Cancer Research Center, Division of Radiology, Im Neuenheimer Feld 223, 69120 Heidelberg (Germany)

    2013-12-01

    Purpose: To validate Fourier decomposition (FD) magnetic resonance (MR) imaging in cystic fibrosis (CF) patients with dynamic contrast-enhanced (DCE) MR imaging. Materials and methods: Thirty-four CF patients (median age 4.08 years; range 0.16–30) were examined on a 1.5-T MR imager. For FD MR imaging, sets of lung images were acquired using an untriggered two-dimensional balanced steady-state free precession sequence. Perfusion-weighted images were obtained after correction of the breathing displacement and Fourier analysis of the cardiac frequency from the time-resolved data sets. DCE data sets were acquired with a three-dimensional gradient echo sequence. The FD and DCE images were visually assessed for perfusion defects by two readers independently (R1, R2) using a field-based scoring system (0–12). Software was used for perfusion impairment evaluation (R3) of segmented lung images using an automated threshold. Both imaging and evaluation methods were compared for agreement and tested for concordance between FD and DCE imaging. Results: Good or acceptable intra-reader agreement was found between FD and DCE for visual and automated scoring: R1 upper and lower limits of agreement (ULA, LLA): 2.72, −2.5; R2: ULA, LLA: ±2.5; R3: ULA: 1.5, LLA: −2. A high concordance was found between visual and automated scoring (FD: 70–80%, DCE: 73–84%). Conclusions: FD MR imaging provides equivalent diagnostic information to DCE MR imaging in CF patients. Automated assessment of regional perfusion defects using FD and DCE MR imaging is comparable to visual scoring but allows for percentage-based analysis.
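
    The reader-agreement figures quoted above are Bland-Altman limits of agreement (ULA/LLA). A minimal sketch of that computation follows; the paired FD/DCE perfusion scores below are hypothetical, not the study's data.

```python
import numpy as np

def bland_altman_limits(scores_a, scores_b):
    """Mean difference (bias) and 95% Bland-Altman limits of agreement
    between two paired sets of scores."""
    diff = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd  # bias, LLA, ULA

# hypothetical paired perfusion-defect scores (0-12 scale) from FD and DCE readings
fd  = [2, 4, 5, 7, 3, 6, 8, 5]
dce = [3, 4, 4, 8, 3, 5, 9, 6]
bias, lla, ula = bland_altman_limits(fd, dce)
```

    If roughly 95% of the paired differences fall between LLA and ULA, the two methods are considered interchangeable to within that tolerance.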

  13. The pedicled omentoplasty and split skin graft (POSSG) for reconstruction of large chest wall defects. A validity study of 34 patients

    NARCIS (Netherlands)

    C.M.E. Contant; A.N. van Geel (Albert); B. van der Holt (Bronno); T. Wiggers (Theo)

    1996-01-01

    The aim of this study was to evaluate retrospectively the results of pedicled omentoplasty and split skin graft (POSSG) in reconstructing (full thickness) chest wall defects, and to define its role as a palliative procedure for local symptom control. Thirty-four patients with recurrent

  14. Designing and Evaluation of Reliability and Validity of Visual Cue-Induced Craving Assessment Task for Methamphetamine Smokers

    Directory of Open Access Journals (Sweden)

    Hamed Ekhtiari

    2010-08-01

    Full Text Available Introduction: Craving for methamphetamine is a significant health concern, and exposure to methamphetamine cues in the laboratory can induce craving. In this study, a task-designing procedure for evaluating methamphetamine cue-induced craving under laboratory conditions is examined. Methods: First, a series of visual cues that could induce craving was identified in 5 discussion sessions between expert clinicians and 10 methamphetamine smokers. Cues were categorized in 4 main clusters and photos were taken for each cue in a studio; the 60 most evocative photos were then selected and 10 neutral photos were added. In this phase, 50 subjects with methamphetamine dependence were exposed to the cues and rated the craving intensity induced by the 72 cues (60 active evocative photos + 10 neutral photos) on a self-report Visual Analogue Scale (ranging from 0–100). In this way, 50 photos with high levels of evocative potency (CICT 50) and 10 photos with the most evocative potency (CICT 10) were obtained, and subsequently the task was designed. Results: The task reliability (internal consistency) was measured by Cronbach's alpha, which was 91% for CICT 50 and 71% for CICT 10. The highest induced craving was reported for the category Drug use procedure (66.27±30.32) and the lowest for the category Cues associated with drug use (31.38±32.96). Differences in cue-induced craving in CICT 50 and CICT 10 were not associated with age, education, income, marital status, employment or sexual activity in the 30 days prior to study entry. Family living condition was marginally correlated with higher scores in CICT 50. Age of onset for opioids, cocaine and methamphetamine was negatively correlated with CICT 50 and CICT 10, and age of first opiate use was negatively correlated with CICT 50. Discussion: Cue-induced craving for methamphetamine may be reliably measured by tasks designed in the laboratory, and designed assessment tasks can be used in cue reactivity paradigm, and
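
    The internal-consistency figure reported above is Cronbach's alpha. A short sketch of the computation on a subjects-by-items matrix of ratings; the VAS numbers below are illustrative, not the study's data.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (n_subjects x n_items) rating matrix."""
    x = np.asarray(ratings, dtype=float)
    k = x.shape[1]                          # number of items
    item_vars = x.var(axis=0, ddof=1).sum() # sum of per-item sample variances
    total_var = x.sum(axis=1).var(ddof=1)   # variance of per-subject total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# hypothetical 0-100 VAS craving ratings: 4 subjects x 3 cue photos
ratings = [[80, 75, 90],
           [40, 35, 50],
           [60, 55, 70],
           [20, 25, 30]]
alpha = cronbach_alpha(ratings)
```

    Perfectly correlated items give alpha = 1; values around 0.7 or above are conventionally taken as acceptable reliability.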

  15. Improved coronary in-stent visualization using a combined high-resolution kernel and a hybrid iterative reconstruction technique at 256-slice cardiac CT—Pilot study

    International Nuclear Information System (INIS)

    Oda, Seitaro; Utsunomiya, Daisuke; Funama, Yoshinori; Takaoka, Hiroko; Katahira, Kazuhiro; Honda, Keiichi; Noda, Katsuo; Oshima, Shuichi; Yamashita, Yasuyuki

    2013-01-01

    Objectives: To investigate the diagnostic performance of 256-slice cardiac CT for the evaluation of the in-stent lumen by using a hybrid iterative reconstruction (HIR) algorithm combined with a high-resolution kernel. Methods: This study included 28 patients with 28 stents who underwent cardiac CT. Three different reconstruction images were obtained with: (1) a standard filtered back projection (FBP) algorithm with a standard cardiac kernel (CB), (2) an FBP algorithm with a high-resolution cardiac kernel (CD), and (3) an HIR algorithm with the CD kernel. We measured image noise and kurtosis and used receiver operating characteristic analysis to evaluate observer performance in the detection of in-stent stenosis. Results: Image noise with FBP plus the CD kernel (80.2 ± 15.5 HU) was significantly higher than with FBP plus the CB kernel (28.8 ± 4.6 HU) and HIR plus the CD kernel (36.1 ± 6.4 HU). There was no significant difference in image noise between FBP plus the CB kernel and HIR plus the CD kernel. Kurtosis was significantly better with the CD kernel than with the CB kernel. The kurtosis values obtained with the CD kernel were not significantly different between the FBP and HIR reconstruction algorithms. The areas under the receiver operating characteristic curves with HIR plus the CD kernel were significantly higher than with FBP plus the CB or the CD kernel. The difference between FBP plus the CB kernel and FBP plus the CD kernel was not significant. The average sensitivity, specificity, and positive and negative predictive values for the detection of in-stent stenosis were 83.3, 50.0, 33.3, and 91.6% for FBP plus the CB kernel, 100, 29.6, 40.0, and 100% for FBP plus the CD kernel, and 100, 54.5, 40.0, and 100% for HIR plus the CD kernel. Conclusions: The HIR algorithm combined with the high-resolution kernel significantly improved diagnostic performance in the detection of in-stent stenosis.
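
    The sensitivity/specificity/PPV/NPV quartets reported above follow directly from confusion-matrix counts. A minimal sketch with hypothetical counts (not the study's 28-stent data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    sens = tp / (tp + fn)  # fraction of true stenoses detected
    spec = tn / (tn + fp)  # fraction of patent stents correctly called negative
    ppv  = tp / (tp + fp)  # probability a positive call is a true stenosis
    npv  = tn / (tn + fn)  # probability a negative call is truly patent
    return sens, spec, ppv, npv

# hypothetical counts for illustration only
sens, spec, ppv, npv = diagnostic_metrics(tp=10, fp=5, tn=80, fn=5)
```

    Note how a low-prevalence sample drives PPV down even at good sensitivity, which is why the PPV values quoted in the abstract are much lower than the NPV values.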

  16. Validation of visualized transgenic zebrafish as a high throughput model to assay bradycardia related cardio toxicity risk candidates.

    Science.gov (United States)

    Wen, Dingsheng; Liu, Aiming; Chen, Feng; Yang, Julin; Dai, Renke

    2012-10-01

    Drug-induced QT prolongation usually leads to torsade de pointes (TdP); thus, for drugs in the early phase of development this risk should be evaluated. In the present study, we demonstrated a visualized transgenic zebrafish as an in vivo high-throughput model to assay the risk of drug-induced QT prolongation. Zebrafish larvae 48 h post-fertilization expressing green fluorescent protein in the myocardium were incubated with compounds reported to induce QT prolongation or block the human ether-a-go-go-related gene (hERG) K⁺ current. The compounds sotalol, indapamide, erythromycin, ofloxacin, levofloxacin, sparfloxacin and roxithromycin were additionally administered by microinjection into the larvae yolk sac. The ventricular heart rate was recorded using the automatic monitoring system after incubation or microinjection. As a result, 14 out of 16 compounds inducing dog QT prolongation caused bradycardia in zebrafish. A similar result was observed with 21 out of 26 compounds which block the hERG current. Among the 30 compounds which induce human QT prolongation, 25 caused bradycardia in this model. Thus, the risk of compounds causing bradycardia in this transgenic zebrafish correlated with that of causing QT prolongation and hERG K⁺ current blockage in established models. A tendency for high logP values to lead to a high risk of QT prolongation in this model was indicated, and the insensitivity of this model to antibacterial agents was revealed. These data suggest the application of this transgenic zebrafish as a high-throughput model to screen QT prolongation-related cardiotoxicity of drug candidates. Copyright © 2012 John Wiley & Sons, Ltd.

  17. Industrial dynamic tomographic reconstruction

    International Nuclear Information System (INIS)

    Oliveira, Eric Ferreira de

    2016-01-01

    The state-of-the-art methods applied to industrial processes are currently based on the principles of classical tomographic reconstruction developed for static distributions, or are limited to cases of low variability of the density distribution of the tomographed object. Noise and motion artifacts are the main problems caused by a mismatch in the data from views acquired at different instants. All of this adds to the known fact that using a limited amount of data can result in noise, artifacts and inconsistencies with the distribution under study. One objective of the present work is to discuss the difficulties that arise when reconstruction algorithms originally developed for static distributions are applied in dynamic tomography. Another objective is to propose solutions that reduce the temporal information loss caused by employing regular acquisition systems for dynamic processes. With respect to dynamic image reconstruction, a comparison was conducted between different static reconstruction methods, such as MART and FBP, when used for dynamic scenarios. This comparison was based on an MCNPx simulation as well as an analytical setup of an aluminum cylinder that moves across the section of a riser during acquisition, and also on cross-section images from CFD techniques. As for adapting current tomographic acquisition systems to dynamic processes, this work established a just-in-time sequence of tomographic views for visualization purposes, a form of visually displaying density information as soon as it becomes amenable to image reconstruction. A third contribution was to take advantage of the triple color channel necessary to display colored images in most displays, so that, by appropriately scaling the acquired values of each view in the linear system of the reconstruction, it was possible to imprint a temporal trace into the regularly
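
    The MART algorithm compared above is a multiplicative iterative reconstruction scheme that corrects the image estimate ray by ray. A toy sketch on a 2x2 image with row-sum and column-sum projections follows (illustrative only; an industrial setup would use a divergent-beam system matrix):

```python
import numpy as np

def mart(A, p, n_iter=20):
    """Minimal multiplicative ART (MART): A is the (rays x pixels) binary
    system matrix, p the measured ray sums; returns a non-negative image."""
    x = np.ones(A.shape[1])  # strictly positive starting estimate
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            pred = A[i] @ x  # current estimate of ray sum i
            if pred > 0:
                # multiplicative correction, applied only to pixels hit by ray i
                x = x * (p[i] / pred) ** A[i]
    return x

# toy 2x2 "image" measured by two row sums and two column sums
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
p = np.array([3.0, 7.0, 4.0, 6.0])  # projections of the true image [1, 2, 3, 4]
x = mart(A, p)
```

    With consistent data, MART converges to a positive solution whose projections reproduce the measurements; with few views the solution is underdetermined, which is the core difficulty the abstract raises for dynamic scenes.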

  18. Clinical Validation of a Pixon-Based Reconstruction Method Allowing a Twofold Reduction in Planar Images Time of 111In-Pentetreotide Somatostatin Receptor Scintigraphy

    Directory of Open Access Journals (Sweden)

    Philippe Thuillier

    2017-08-01

    Full Text Available Objective: The objective of this study was to evaluate the diagnostic efficacy of a Pixon-based reconstruction method for planar somatostatin receptor scintigraphy (SRS). Methods: All patients with neuroendocrine tumor (NET) disease who were referred for SRS to our department during a 1-year period from January to December 2015 were consecutively included. Three nuclear physicians independently reviewed all data sets, which included conventional images (CI; 15 min/view) and processed images (PI) obtained by reconstructing the first 450 s of extracted data using the Oncoflash® software package. Image analysis using a 3-point rating scale for abnormal uptake of 111In-DTPA-Phe-octreotide in any lesion or organ was interpreted as positive, uncertain, or negative for evidence of NET disease. The maximum grade of radiotracer uptake in a lesion was assessed by the Krenning scale method. The results of image interpretation by the two methods were considered significantly discordant when the difference in organ involvement assessment was negative vs. positive or the difference in lesion uptake was ≥2 grades. Agreement between the results of the two methods and between scan observers was evaluated using Cohen's κ coefficients. Results: There was no significant (p = 0.403) correlation between data acquisition protocol and image quality. The rates of significant discrepancies for exam interpretation and organ involvement assessment were 2.8 and 2.6%, respectively. Mean κ values revealed good agreement between CI and PI interpretation, with no difference in agreement for the inter/intra-observer analysis. Conclusion: Our results suggest the feasibility of using a Pixon-based reconstruction method for SRS planar images, allowing a twofold reduction of acquisition time without significant alteration of image quality or image interpretation.
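
    Agreement in this study is summarised with Cohen's κ, which discounts the agreement expected by chance. A minimal sketch of the statistic on hypothetical observer readings (labels and values are illustrative, not the study's data):

```python
def cohen_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels on the same cases."""
    n = len(a)
    cats = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    # chance agreement from each rater's marginal label frequencies
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

# hypothetical 3-point scan interpretations by two readers
r1 = ["positive", "negative", "positive", "uncertain", "negative"]
r2 = ["positive", "negative", "uncertain", "uncertain", "negative"]
kappa = cohen_kappa(r1, r2)
```

    κ = 1 means perfect agreement and κ = 0 means agreement no better than chance; values above roughly 0.6 are conventionally read as good agreement.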

  19. Oblique reconstructions in tomosynthesis. II. Super-resolution

    International Nuclear Information System (INIS)

    Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2013-01-01

    Purpose: In tomosynthesis, super-resolution has been demonstrated using reconstruction planes parallel to the detector. Super-resolution allows for subpixel resolution relative to the detector. The purpose of this work is to develop an analytical model that generalizes super-resolution to oblique reconstruction planes.Methods: In a digital tomosynthesis system, a sinusoidal test object is modeled along oblique angles (i.e., “pitches”) relative to the plane of the detector in a 3D divergent-beam acquisition geometry. To investigate the potential for super-resolution, the input frequency is specified to be greater than the alias frequency of the detector. Reconstructions are evaluated in an oblique plane along the extent of the object using simple backprojection (SBP) and filtered backprojection (FBP). By comparing the amplitude of the reconstruction against the attenuation coefficient of the object at various frequencies, the modulation transfer function (MTF) is calculated to determine whether modulation is within detectable limits for super-resolution. For experimental validation of super-resolution, a goniometry stand was used to orient a bar pattern phantom along various pitches relative to the breast support in a commercial digital breast tomosynthesis system.Results: Using theoretical modeling, it is shown that a single projection image cannot resolve a sine input whose frequency exceeds the detector alias frequency. The high frequency input is correctly visualized in SBP or FBP reconstruction using a slice along the pitch of the object. The Fourier transform of this reconstructed slice is maximized at the input frequency as proof that the object is resolved. Consistent with the theoretical results, experimental images of a bar pattern phantom showed super-resolution in oblique reconstructions. At various pitches, the highest frequency with detectable modulation was determined by visual inspection of the bar patterns. The dependency of the highest
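
    The core super-resolution argument above is that a frequency exceeding the detector alias (Nyquist) limit folds to a lower apparent frequency under detector-pitch sampling, but is recovered at finer effective sampling such as the subpixel shifts a tomosynthesis reconstruction provides. This can be sketched numerically (frequencies and pitches here are illustrative):

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Frequency (cycles/unit) of the strongest non-DC DFT component."""
    spec = np.abs(np.fft.rfft(signal))
    spec[0] = 0.0  # ignore the DC term
    return np.fft.rfftfreq(len(signal), d=1.0 / fs)[np.argmax(spec)]

f_in = 0.7  # input frequency, above the 0.5 Nyquist limit of unit-pitch sampling
coarse = np.sin(2 * np.pi * f_in * np.arange(0, 100, 1.0))   # detector-pitch samples
fine   = np.sin(2 * np.pi * f_in * np.arange(0, 100, 0.25))  # 4x subpixel sampling
aliased   = dominant_frequency(coarse, fs=1.0)  # folds to 1.0 - 0.7 = 0.3
recovered = dominant_frequency(fine, fs=4.0)    # the true 0.7 is resolved
```

    A single projection behaves like the coarse sampling, while the backprojected slice, whose effective sampling is finer, keeps a peak at the true input frequency, which is the criterion for super-resolution used in the paper.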

  20. Oblique reconstructions in tomosynthesis. II. Super-resolution

    Science.gov (United States)

    Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2013-01-01

    Purpose: In tomosynthesis, super-resolution has been demonstrated using reconstruction planes parallel to the detector. Super-resolution allows for subpixel resolution relative to the detector. The purpose of this work is to develop an analytical model that generalizes super-resolution to oblique reconstruction planes. Methods: In a digital tomosynthesis system, a sinusoidal test object is modeled along oblique angles (i.e., “pitches”) relative to the plane of the detector in a 3D divergent-beam acquisition geometry. To investigate the potential for super-resolution, the input frequency is specified to be greater than the alias frequency of the detector. Reconstructions are evaluated in an oblique plane along the extent of the object using simple backprojection (SBP) and filtered backprojection (FBP). By comparing the amplitude of the reconstruction against the attenuation coefficient of the object at various frequencies, the modulation transfer function (MTF) is calculated to determine whether modulation is within detectable limits for super-resolution. For experimental validation of super-resolution, a goniometry stand was used to orient a bar pattern phantom along various pitches relative to the breast support in a commercial digital breast tomosynthesis system. Results: Using theoretical modeling, it is shown that a single projection image cannot resolve a sine input whose frequency exceeds the detector alias frequency. The high frequency input is correctly visualized in SBP or FBP reconstruction using a slice along the pitch of the object. The Fourier transform of this reconstructed slice is maximized at the input frequency as proof that the object is resolved. Consistent with the theoretical results, experimental images of a bar pattern phantom showed super-resolution in oblique reconstructions. At various pitches, the highest frequency with detectable modulation was determined by visual inspection of the bar patterns. The dependency of the highest

  1. Hypogeal geological survey in the "Grotta del Re Tiberio" natural cave (Apennines, Italy): a valid tool for reconstructing the structural setting

    Science.gov (United States)

    Ghiselli, Alice; Merazzi, Marzio; Strini, Andrea; Margutti, Roberto; Mercuriali, Michele

    2011-06-01

    As karst systems are natural windows to the underground, speleology, combined with geological surveys, can be a useful tool for helping to understand the geological evolution of karst areas. In order to enhance the reconstruction of the structural setting in a gypsum karst area (Vena del Gesso, Romagna Apennines), a detailed analysis has been carried out on hypogeal data. Structural features (faults, fractures, tectonic foliations, bedding) have been mapped in the "Grotta del Re Tiberio" cave, in the nearby gypsum quarry tunnels and in the open pit benches. Five fracture systems and six fault systems have been identified. The fault systems have been further analyzed through stereographic projections and geometric-kinematic evaluations in order to reconstruct the relative chronology of these structures. This analysis led to the detection of two deformation phases. The results permitted linking of the hypogeal data with the surface data at both the local and the regional scale. At the local scale, fracture data collected underground have been compared with previous authors' surface data from the quarry area. The two data sets show a very good correspondence, as every underground fracture system matches one of the surface fracture systems. Moreover, in the cave, a larger number of fractures belonging to each system could be mapped. At the regional scale, the two deformation phases detected can be integrated into the structural setting of the study area, thereby enhancing the tectonic interpretation of the area (e.g., structures belonging to a new deformation phase, not reported before, have been identified underground). The detailed hypogeal structural survey has thus provided very useful data, both by integrating the existing information and by revealing new data not detected at the surface. In particular, some small structures (e.g., displacement markers and short fractures) are better preserved in the hypogeal environment than on the surface where the outcropping

  2. Use of international data sets to evaluate and validate pathway assessment models applicable to exposure and dose reconstruction at DOE facilities. Monthly progress reports and final report, October--December 1994

    International Nuclear Information System (INIS)

    Hoffman, F.O.

    1995-01-01

    The objective of Task 7.1D was to (1) establish a collaborative US-USSR effort to improve and validate our methods of forecasting doses and dose commitments from the direct contamination of food sources, and (2) perform experiments and validation studies to improve our ability to predict rapidly and accurately the long-term internal dose from the contamination of agricultural soil. At early times following an accident, the direct contamination of pasture and foodstuffs, particularly leafy vegetation and grain, can be of great importance. This situation has been modeled extensively. However, the models employed to predict the deposition, retention and transport of radionuclides in terrestrial environments used concepts and databases that were more than a decade old, and the extent to which these models had been tested with independent data sets was limited. The data gathered in the former USSR (and elsewhere throughout the Northern Hemisphere) offered a unique opportunity to test model predictions of wet and dry deposition, agricultural foodchain bioaccumulation, and short- and long-term retention, redistribution, and resuspension of radionuclides from a variety of natural and artificial surfaces. The current objective of this project is to evaluate and validate pathway-assessment models applicable to exposure and dose reconstruction at DOE facilities through the use of international data sets. This project incorporates the activity of Task 7.1D into a multinational effort to evaluate models and data used for the prediction of radionuclide transfer through agricultural and aquatic systems to humans. It also includes participation in two studies, BIOMOVS (BIOspheric MOdel Validation Study) with the Swedish National Institute for Radiation Protection and VAMP (VAlidation of Model Predictions) with the International Atomic Energy Agency, that address testing the performance of models of radionuclide transport through foodchains
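
    Model-validation exercises of this kind commonly summarise predicted-to-observed ratios by their geometric mean, since transfer-model errors tend to be multiplicative. A sketch of that summary statistic follows; the values are hypothetical, not BIOMOVS/VAMP data.

```python
import math

def geometric_bias(predicted, observed):
    """Geometric-mean predicted-to-observed ratio; 1.0 means no overall bias,
    >1 systematic over-prediction, <1 systematic under-prediction."""
    logs = [math.log(p / o) for p, o in zip(predicted, observed)]
    return math.exp(sum(logs) / len(logs))

# hypothetical model predictions vs. field measurements (arbitrary units)
bias = geometric_bias([1.2, 0.8, 2.0, 1.5], [1.0, 1.0, 1.0, 1.0])
```

    Averaging in log space keeps a 2x over-prediction and a 2x under-prediction symmetric, which a plain arithmetic mean of ratios would not.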

  3. Use of international data sets to evaluate and validate pathway assessment models applicable to exposure and dose reconstruction at DOE facilities. Monthly progress reports and final report, October--December 1994

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, F.O. [Senes Oak Ridge, Inc., TN (United States). Center for Risk Analysis

    1995-04-01

    The objective of Task 7.lD was to (1) establish a collaborative US-USSR effort to improve and validate our methods of forecasting doses and dose commitments from the direct contamination of food sources, and (2) perform experiments and validation studies to improve our ability to predict rapidly and accurately the long-term internal dose from the contamination of agricultural soil. At early times following an accident, the direct contamination of pasture and food stuffs, particularly leafy vegetation and grain, can be of great importance. This situation has been modeled extensively. However, models employed then to predict the deposition, retention and transport of radionuclides in terrestrial environments employed concepts and data bases that were more than a decade old. The extent to which these models have been tested with independent data sets was limited. The data gathered in the former-USSR (and elsewhere throughout the Northern Hemisphere) offered a unique opportunity to test model predictions of wet and dry deposition, agricultural foodchain bioaccumulation, and short- and long-term retention, redistribution, and resuspension of radionuclides from a variety of natural and artificial surfaces. The current objective of this project is to evaluate and validate pathway-assessment models applicable to exposure and dose reconstruction at DOE facilities through use of international data sets. This project incorporates the activity of Task 7.lD into a multinational effort to evaluate models and data used for the prediction of radionuclide transfer through agricultural and aquatic systems to humans. It also includes participation in two studies, BIOMOVS (BIOspheric MOdel Validation Study) with the Swedish National Institute for Radiation Protection and VAMP (VAlidation of Model Predictions) with the International Atomic Energy Agency, that address testing the performance of models of radionuclide transport through foodchains.

  4. Climate Reconstructions

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA Paleoclimatology Program archives reconstructions of past climatic conditions derived from paleoclimate proxies, in addition to the Program's large holdings...

  5. SU-F-T-549: Validation of a Method for in Vivo 3D Dose Reconstruction for SBRT Using a New Transmission Detector

    Energy Technology Data Exchange (ETDEWEB)

    Nakaguchi, Y; Shimohigashi, Y; Onizuka, R; Ohno, T [Kumamoto University Hospital, Kumamoto, Kumamoto (Japan)

    2016-06-15

    Purpose: Recently, there has been increased clinical use of stereotactic body radiation therapy (SBRT). SBRT treatments will strongly benefit from in vivo patient dose verification, as any errors in delivery can be more detrimental to the radiobiology of the patient than in conventional therapy. A commercially available quality assurance platform for in vivo dose measurements, which is able to correlate the delivered dose to the patient's anatomy and take tissue inhomogeneity into account, is the COMPASS system (IBA Dosimetry, Germany) used with a new transmission detector (Dolphin, IBA Dosimetry). In this work, we evaluate a method for in vivo 3D dose reconstruction for SBRT using this new transmission detector, which was developed for in vivo dose verification for intensity-modulated radiation therapy (IMRT). Methods: We evaluated the accuracy of measurement for SBRT using simple small fields (2×2−10×10 cm2), a multileaf collimator (MLC) test pattern, and clinical cases. The dose distributions from the COMPASS were compared with those of EDR2 films (Kodak, USA) and Monte Carlo simulations (MC). For clinical cases, we compared against MC using dose-volume histograms (DVHs) and dose profiles. Results: The dose profiles from the COMPASS for small fields and the complicated MLC test pattern agreed with those of EDR2 films and MC within 3%. This showed that the COMPASS with Dolphin system has good spatial resolution and can measure the small fields required for SBRT. These results also suggest that COMPASS with Dolphin is able to detect MLC leaf position errors for SBRT. In clinical cases, the COMPASS with Dolphin agreed well with MC. The Dolphin detector, which consists of ionization chambers, provided stable measurements. Conclusion: COMPASS with the Dolphin detector provided useful in vivo 3D dose reconstruction for SBRT. The accuracy of the results indicates that this approach is suitable for clinical implementation.
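
    The "within 3%" statement above is a pointwise profile comparison. A minimal sketch of such a check, normalised to the reference maximum, is shown below with hypothetical profile values; this is a simple global-normalisation comparison, not a full gamma analysis and not the COMPASS software's own method.

```python
import numpy as np

def max_percent_diff(measured, reference):
    """Maximum point difference as a percentage of the reference maximum."""
    m = np.asarray(measured, dtype=float)
    r = np.asarray(reference, dtype=float)
    return 100.0 * np.max(np.abs(m - r)) / r.max()

# hypothetical profile samples: COMPASS readings vs. film/MC reference
diff = max_percent_diff([98.0, 100.0, 51.0], [100.0, 100.0, 50.0])
passes_3pct = diff <= 3.0
```

    In clinical QA this dose-difference criterion is usually combined with a distance-to-agreement criterion (gamma analysis) so that steep-gradient regions are not unfairly penalised.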

  6. SU-F-T-549: Validation of a Method for in Vivo 3D Dose Reconstruction for SBRT Using a New Transmission Detector

    International Nuclear Information System (INIS)

    Nakaguchi, Y; Shimohigashi, Y; Onizuka, R; Ohno, T

    2016-01-01

    Purpose: Recently, there has been increased clinical use of stereotactic body radiation therapy (SBRT). SBRT treatments will strongly benefit from in vivo patient dose verification, as any errors in delivery can be more detrimental to the radiobiology of the patient than in conventional therapy. A commercially available quality assurance platform for in vivo dose measurements, which is able to correlate the delivered dose to the patient's anatomy and take tissue inhomogeneity into account, is the COMPASS system (IBA Dosimetry, Germany) used with a new transmission detector (Dolphin, IBA Dosimetry). In this work, we evaluate a method for in vivo 3D dose reconstruction for SBRT using this new transmission detector, which was developed for in vivo dose verification for intensity-modulated radiation therapy (IMRT). Methods: We evaluated the accuracy of measurement for SBRT using simple small fields (2×2−10×10 cm2), a multileaf collimator (MLC) test pattern, and clinical cases. The dose distributions from the COMPASS were compared with those of EDR2 films (Kodak, USA) and Monte Carlo simulations (MC). For clinical cases, we compared against MC using dose-volume histograms (DVHs) and dose profiles. Results: The dose profiles from the COMPASS for small fields and the complicated MLC test pattern agreed with those of EDR2 films and MC within 3%. This showed that the COMPASS with Dolphin system has good spatial resolution and can measure the small fields required for SBRT. These results also suggest that COMPASS with Dolphin is able to detect MLC leaf position errors for SBRT. In clinical cases, the COMPASS with Dolphin agreed well with MC. The Dolphin detector, which consists of ionization chambers, provided stable measurements. Conclusion: COMPASS with the Dolphin detector provided useful in vivo 3D dose reconstruction for SBRT. The accuracy of the results indicates that this approach is suitable for clinical implementation.

  7. Archeointensity study on baked clay samples taken from the reconstructed ancient kiln: implication for validity of the Tsunakawa-Shaw paleointensity method

    Science.gov (United States)

    Yamamoto, Yuhji; Torii, Masayuki; Natsuhara, Nobuyoshi

    2015-05-01

    In 1972, a reconstruction experiment was carried out to reproduce an excavated kiln of the seventh century in Japan. Baked clay samples were taken from the floor surface and the -20 cm level, and they have been stored since the determination of their paleomagnetic directions by partial alternating field demagnetization. We recently applied the Tsunakawa-Shaw method to the samples to assess how reliable the archeointensity results obtained from them are. A suite of rock magnetic experiments and scanning electron microscope observations elucidates that the dominant magnetic carriers of the floor surface samples are Ti-poor titanomagnetite grains of approximately 10 nm size with single-domain and/or superparamagnetic states, whereas the contributions of multi-domain grains seem to be relatively large for the -20-cm level samples. From the floor surface samples, six successful results out of eight were obtained, giving an average of 47.3 μT with a standard deviation of 2.2 μT. This is fairly consistent with the in situ geomagnetic field of 46.4 μT at the time of the reconstruction. The results were obtained with a built-in anisotropy correction using anhysteretic remanent magnetization and without any cooling-rate correction. In contrast, only one out of four was successful for the -20-cm level samples; it yields an archeointensity of 31.6 μT, which is inconsistent with the in situ geomagnetic field. Considering the in situ temperature record during the firing of the kiln and the unblocking temperature spectra of the samples, the floor surface samples acquired full thermoremanent magnetizations (TRMs) as their natural remanent magnetizations whereas the -20-cm level samples acquired only partial TRMs; these differences probably cause the difference in the archeointensity results between the two sample groups. For archeointensity research, baked clay samples from a kiln floor are considered to be ideal materials.

  8. The validity of using ROC software for analysing visual grading characteristics data: an investigation based on the novel software VGC analyzer

    International Nuclear Information System (INIS)

    Hansson, Jonny; Månsson, Lars Gunnar; Båth, Magnus

    2016-01-01

    The purpose of the present work was to investigate the validity of using single-reader-adapted receiver operating characteristics (ROC) software for the analysis of visual grading characteristics (VGC) data. VGC data from four published VGC studies on the optimisation of X-ray examinations, previously analysed using ROCFIT, were reanalysed using recently developed software dedicated to VGC analysis (VGC Analyzer), and the outcomes [the mean and 95% confidence interval (CI) of the area under the VGC curve (AUC_VGC) and the p-value] were compared. The studies included both paired and non-paired data and were reanalysed for both the fixed-reader and the random-reader situations. The results showed good agreement between the software packages for the mean AUC_VGC. For non-paired data, wider CIs were obtained with VGC Analyzer than previously reported, whereas for paired data, the previously reported CIs were similar or even broader. Similar observations were made for the p-values. The results indicate that the use of single-reader-adapted ROC software such as ROCFIT for analysing non-paired VGC data may lead to an increased risk of committing Type I errors, especially in the random-reader situation. On the other hand, the use of ROC software for the analysis of paired VGC data may lead to an increased risk of committing Type II errors, especially in the fixed-reader situation. (authors)
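
    Nonparametrically, the AUC_VGC discussed above is the probability that an image-quality rating under one condition exceeds one under the other, with ties counted as one half (the Mann-Whitney form of the trapezoidal AUC). A minimal sketch with hypothetical ordinal ratings follows; this is not VGC Analyzer's implementation.

```python
def auc_vgc(ratings_a, ratings_b):
    """Nonparametric AUC: probability that a rating under condition B
    exceeds one under condition A, ties counted as one half."""
    wins = 0.0
    for a in ratings_a:
        for b in ratings_b:
            wins += 1.0 if b > a else 0.5 if b == a else 0.0
    return wins / (len(ratings_a) * len(ratings_b))

# hypothetical 4-point quality ratings for two acquisition protocols
auc = auc_vgc([1, 2, 2, 3], [2, 3, 3, 4])
```

    An AUC of 0.5 means the two conditions are rated indistinguishably; values approaching 1.0 mean condition B is consistently rated higher.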

  9. Experimental validation of radial reconstructed pin-power distributions in full-scale BWR fuel assemblies with and without control blade

    Energy Technology Data Exchange (ETDEWEB)

    Giust, Flavio, E-mail: flavio.giust@axpo.c [Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Ecole Polytechnique Federale de Lausanne, CH-1015 Lausanne (Switzerland); Axpo Kernenergie AG, Parkstrasse 23, CH-5401 Baden (Switzerland); Grimm, Peter [Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Chawla, Rakesh [Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Ecole Polytechnique Federale de Lausanne, CH-1015 Lausanne (Switzerland)

    2010-12-15

    Total fission rate measurements have been performed on full-size BWR fuel assemblies of type SVEA-96+ in the zero power reactor PROTEUS at the Paul Scherrer Institute. This paper presents comparisons of reconstructed 2D pin fission rates from nodal diffusion calculations to the experimental results in two configurations: one 'regular' (I-1A) and the other 'controlled' (I-2A). Both configurations consist of an array of 3 x 3 SVEA-96+ fuel assemblies moderated with light water at 20 °C. In configuration I-2A, an L-shaped hafnium control blade (half of a real cruciform blade) is inserted adjacent to the north-west corner of the central fuel assembly. To minimise the impact of the surroundings, all measurements were done in fuel pins belonging to the central assembly. The 3 x 3 experimental configuration (test zone) was modelled using the core monitoring and design tools that are applied at the Leibstadt Nuclear Power Plant (KKL). These are the 2D transport code HELIOS, used for the cross-section generation, and the 3D, 2-group nodal diffusion code PRESTO-2. The exterior is represented, in the axial and radial directions, by 2-group partial current ratios (PCRs) calculated at the test zone boundary using a 3D Monte Carlo (MCNPX) model of the whole PROTEUS reactor. Sensitivity cases are analysed to show the impact of changes in the 2D lattice modelling on the calculated fission rate distribution and reactivity. Further, the effects of variations in the test zone boundary PCRs and their behaviour in energy are investigated. For the test zone configuration without control blade, the pin-power reconstruction methodology delivers the same level of accuracy as the 2D transport calculations. On the other hand, larger deviations that are inherent to the use of reflected geometry in the lattice calculations are observed for the configuration with the control blade inserted. In the basic (reference) simulation cases, the calculated-to-experimental (C

  10. Equilibrium Reconstruction in EAST Tokamak

    International Nuclear Information System (INIS)

    Qian Jinping; Wan Baonian; Shen Biao; Sun Youwen; Liu Dongmei; Xiao Bingjia; Ren Qilong; Gong Xianzu; Li Jiangang; Lao, L. L.; Sabbagh, S. A.

    2009-01-01

    Reconstruction of experimental axisymmetric equilibria is an important part of tokamak data analysis. Fourier expansion is applied to reconstruct the vessel current distribution in EFIT code. Benchmarking and testing calculations are performed to evaluate and validate this algorithm. Two cases for circular and non-circular plasma discharges are presented. Fourier expansion used to fit the eddy current is a robust method and the real time EFIT can be introduced to the plasma control system in the coming campaign. (magnetically confined plasma)
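The Fourier-expansion fit of a current distribution mentioned above amounts to an ordinary least-squares fit of a truncated Fourier series against samples around an angle. The sketch below is a generic illustration of that step, not the EFIT implementation; the sampling angles, harmonic count, and function name are all assumptions:

```python
import numpy as np

def fit_fourier(theta, samples, n_harmonics=3):
    """Least-squares fit of f(theta) ~ a0/2 + sum_k (a_k cos k*theta + b_k sin k*theta).

    Builds a design matrix of Fourier basis functions evaluated at the
    sample angles and solves the linear least-squares problem."""
    theta = np.asarray(theta, float)
    cols = [0.5 * np.ones_like(theta)]            # a0/2 term
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(k * theta))
        cols.append(np.sin(k * theta))
    design = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(design, np.asarray(samples, float), rcond=None)
    return coeffs, design @ coeffs                # coefficients and fitted values
```

Because the basis is linear in the coefficients, the fit is robust and fast, which is consistent with the abstract's point that the method is suitable for real-time use in a control system.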

  11. Region-of-interest volumetric visual hull refinement

    KAUST Repository

    Knoblauch, Daniel; Kuester, Falko

    2010-01-01

    This paper introduces a region-of-interest visual hull refinement technique, based on flexible voxel grids for volumetric visual hull reconstructions. Region-of-interest refinement is based on a multipass process, beginning with a focussed visual

  12. Visual comparison for information visualization

    KAUST Repository

    Gleicher, M.; Albers, D.; Walker, R.; Jusufi, I.; Hansen, C. D.; Roberts, J. C.

    2011-01-01

    Data analysis often involves the comparison of complex objects. With the ever increasing amounts and complexity of data, the demand for systems to help with these comparisons is also growing. Increasingly, information visualization tools support such comparisons explicitly, beyond simply allowing a viewer to examine each object individually. In this paper, we argue that the design of information visualizations of complex objects can, and should, be studied in general, that is independently of what those objects are. As a first step in developing this general understanding of comparison, we propose a general taxonomy of visual designs for comparison that groups designs into three basic categories, which can be combined. To clarify the taxonomy and validate its completeness, we provide a survey of work in information visualization related to comparison. Although we find a great diversity of systems and approaches, we see that all designs are assembled from the building blocks of juxtaposition, superposition and explicit encodings. This initial exploration shows the power of our model, and suggests future challenges in developing a general understanding of comparative visualization and facilitating the development of more comparative visualization tools. © The Author(s) 2011.

  13. Visual comparison for information visualization

    KAUST Repository

    Gleicher, M.

    2011-09-07

    Data analysis often involves the comparison of complex objects. With the ever increasing amounts and complexity of data, the demand for systems to help with these comparisons is also growing. Increasingly, information visualization tools support such comparisons explicitly, beyond simply allowing a viewer to examine each object individually. In this paper, we argue that the design of information visualizations of complex objects can, and should, be studied in general, that is independently of what those objects are. As a first step in developing this general understanding of comparison, we propose a general taxonomy of visual designs for comparison that groups designs into three basic categories, which can be combined. To clarify the taxonomy and validate its completeness, we provide a survey of work in information visualization related to comparison. Although we find a great diversity of systems and approaches, we see that all designs are assembled from the building blocks of juxtaposition, superposition and explicit encodings. This initial exploration shows the power of our model, and suggests future challenges in developing a general understanding of comparative visualization and facilitating the development of more comparative visualization tools. © The Author(s) 2011.

  14. Developing a Validated Long-Term Satellite-Based Albedo Record in the Central Alaska Range to Improve Regional Hydroclimate Reconstructions

    Science.gov (United States)

    Kreutz, K. J.; Godaire, T. P.; Burakowski, E. A.; Winski, D.; Campbell, S. W.; Wang, Z.; Sun, Q.; Hamilton, G. S.; Birkel, S. D.; Wake, C. P.; Osterberg, E. C.; Schaaf, C.

    2015-12-01

    Mountain glaciers around the world, particularly in Alaska, are experiencing significant surface mass loss from rapid climatic shifts and constitute a large proportion of the cryosphere's contribution to sea level rise. Surface albedo acts as a primary control on a glacier's mass balance, yet it is difficult to measure and quantify spatially and temporally in steep, mountainous settings. During our 2013 field campaign in Denali National Park to recover two surface-to-bedrock ice cores, we used an Analytical Spectral Devices (ASD) FieldSpec4 Standard Resolution spectroradiometer to measure incoming solar radiation, outgoing surface reflectance and optical grain size on the Kahiltna Glacier and at the Kahiltna Base Camp. A Campbell Scientific automatic weather station was installed on Mount Hunter (3900 m) in June 2013, complementing a longer-term (2008-present) station installed at Kahiltna Base Camp (2100 m). Our in situ data aid in the validation of surface albedo values derived from Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat satellite imagery. Comparisons are made between ASD FieldSpec4 ground measurements and 500 m MODIS imagery to assess the ability of MODIS to capture the variability of surface albedo across the glacier surface. The MODIS MCD43A3 BRDF/Albedo Product performs well at Kahiltna Base Camp, whereas albedo deviations (10-28% relative to ASD data) appear to occur along the Kahiltna Glacier due to the snow-free valley walls being captured in the 500 m MODIS footprint. Incorporating Landsat imagery will strengthen our interpretations and has the potential to produce a long-term (1982-present) validated satellite albedo record for steep and mountainous terrain. Once validation is complete, we will compare the satellite-derived albedo record to the Denali ice core accumulation rate, aerosol records (i.e. volcanics and biomass burning), and glacier mass balance data. This research will ultimately contribute to an improved understanding of the

  15. Genomic prediction using different estimation methodology, blending and cross-validation techniques for growth traits and visual scores in Hereford and Braford cattle.

    Science.gov (United States)

    Campos, G S; Reimann, F A; Cardoso, L L; Ferreira, C E R; Junqueira, V S; Schmidt, P I; Braccini Neto, J; Yokoo, M J I; Sollero, B P; Boligon, A A; Cardoso, F F

    2018-05-07

    The objective of the present study was to evaluate the accuracy and bias of direct and blended genomic predictions using different methods and cross-validation techniques for growth traits (weight and weight gains) and visual scores (conformation, precocity, muscling and size) obtained at weaning and at yearling in Hereford and Braford breeds. Phenotypic data comprised 126,290 animals belonging to the Delta G Connection genetic improvement program, and a set of 3,545 animals genotyped with the 50K chip and 131 sires with the 777K. After quality control, 41,045 markers remained for all animals. An animal model was used to estimate (co)variance components and to predict breeding values, which were later used to calculate the deregressed estimated breeding values (DEBV). Animals with genotype and phenotype for the traits studied were divided into four or five groups by random and k-means clustering cross-validation strategies. The accuracies of the direct genomic values (DGV) were of moderate to high magnitude for traits at weaning and at yearling, ranging from 0.19 to 0.45 for k-means and 0.23 to 0.78 for random clustering among all traits. The greatest gain in relation to the pedigree BLUP (PBLUP) was 9.5% with the BayesB method, with both the k-means and the random clustering. Blended genomic value accuracies ranged from 0.19 to 0.56 for k-means and from 0.21 to 0.82 for random clustering. The analyses using the historical pedigree and phenotypes contributed additional information to the calculation of the GEBV, and in general the largest gains were for the single-step (ssGBLUP) method in bivariate analyses, with a mean increase of 43.00% among all traits measured at weaning and of 46.27% for those evaluated at yearling. The accuracy values for the marker effects estimation methods were lower for k-means clustering, indicating that the training set relationship to the selection candidates is a major factor affecting accuracy of genomic predictions. The gains in
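The k-means cross-validation strategy described above assigns animals to folds by clustering them, so that close relatives land in the same fold and the training set is genuinely distant from the validation candidates. A minimal, self-contained sketch of that fold assignment (farthest-point initialisation plus plain Lloyd iterations; in the actual study the clustering operates on genomic relationships, for which this toy feature matrix only stands in):

```python
import numpy as np

def kmeans_folds(X, k=4, iters=100, seed=0):
    """Assign each row of X to one of k cross-validation folds via k-means."""
    X = np.asarray(X, float)
    rng = np.random.default_rng(seed)
    # Farthest-point initialisation spreads the starting centers apart.
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):                     # Lloyd iterations
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels
```

Random clustering, by contrast, shuffles animals into folds irrespective of relatedness, which is why it yields optimistic accuracies when relatives are split across training and validation sets.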

  16. Prediction of skull fracture risk for children 0-9 months old through validated parametric finite element model and cadaver test reconstruction.

    Science.gov (United States)

    Li, Zhigang; Liu, Weiguo; Zhang, Jinhuan; Hu, Jingwen

    2015-09-01

    Skull fracture is one of the most common pediatric traumas. However, injury assessment tools for predicting pediatric skull fracture risk are not well established, mainly due to the lack of cadaver tests. Weber conducted 50 pediatric cadaver drop tests for forensic research on child abuse in the mid-1980s (Experimental studies of skull fractures in infants, Z Rechtsmed. 92: 87-94, 1984; Biomechanical fragility of the infant skull, Z Rechtsmed. 94: 93-101, 1985). To our knowledge, these studies contain the largest sample size among pediatric cadaver tests in the literature. However, the lack of injury measurements limited their direct application to investigating pediatric skull fracture risks. In this study, the 50 pediatric cadaver tests from Weber's studies were reconstructed using a parametric pediatric head finite element (FE) model, which was morphed into subjects with the ages, head sizes/shapes, and skull thickness values reported in the tests. Skull fracture risk curves for infants from 0 to 9 months old were developed from the model-predicted head injury measures through logistic regression analysis. It was found that the model-predicted stress responses in the skull (maximal von Mises stress, maximal shear stress, and maximal first principal stress) were better predictors of pediatric skull fracture than global kinematic-based injury measures (peak head acceleration and the head injury criterion (HIC)). This study demonstrated the feasibility of using age- and size/shape-appropriate head FE models to predict pediatric head injuries. Such models can account for the morphological variations among subjects, which cannot be considered by a single FE human model.
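The logistic-regression step that turns a model-predicted stress metric into a fracture risk curve can be sketched as below. This is a gradient-ascent toy under stated assumptions (a single standardized predictor, illustrative function and variable names); a real analysis would use a vetted statistics package:

```python
import numpy as np

def fit_logistic(stress, fractured, lr=0.5, steps=2000):
    """Fit P(fracture | stress) = sigmoid(b0 + b1 * z), z = standardized stress.

    Plain gradient ascent on the mean log-likelihood; returns a callable
    risk curve evaluated on the original (unstandardized) scale."""
    x = np.asarray(stress, float)
    y = np.asarray(fractured, float)
    mu, sd = x.mean(), x.std()
    z = (x - mu) / sd
    b0 = b1 = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * z)))
        b0 += lr * np.mean(y - p)          # gradient w.r.t. intercept
        b1 += lr * np.mean((y - p) * z)    # gradient w.r.t. slope
    def risk(s):
        return 1.0 / (1.0 + np.exp(-(b0 + b1 * (np.asarray(s, float) - mu) / sd)))
    return risk
```

Evaluating the returned curve over a range of stress values yields the kind of injury risk curve the study reports for each candidate predictor.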

  17. Experimental validation of 3D reconstructed pin-power distributions in full-scale BWR fuel assemblies with partial length rods

    Energy Technology Data Exchange (ETDEWEB)

    Giust, F. D. [Axpo Kernenergie, Parkstrasse 23, CH-5401 Baden (Switzerland); Swiss Federal Inst. of Technology EPFL, CH-1015 Lausanne (Switzerland); Grimm, P. [Paul Scherrer Inst., CH-5232 Villigen (Switzerland); Chawla, R. [Paul Scherrer Inst., CH-5232 Villigen (Switzerland); Swiss Federal Inst. of Technology (EPFL), CH-1015 Lausanne (Switzerland)

    2012-07-01

    Total fission rate measurements have been performed on full-size BWR fuel assemblies of type SVEA-96 Optima2 in the framework of Phase III of the LWR-PROTEUS experimental program at the Paul Scherrer Institute. This paper presents comparisons of calculated, nodal reconstructed, pin-wise total-fission rate distributions with experimental results. Radial comparisons have been performed for the three sections of the assembly (96, 92 and 84 fuel pins), while three-dimensional effects have been investigated at pellet-level for the two transition regions, i.e. the tips of the short (1/3) and long (2/3) partial length rods. The test zone has been modeled using two different code systems: HELIOS/PRESTO-2 and CASMO-5/SIMULATE-5. The first is presently used for core monitoring and design at the Leibstadt Nuclear Power Plant (KKL). The second represents the most recent generation of the widely applied CASMO/SIMULATE system. For representing the PROTEUS test-zone boundaries, Partial Current Ratios (PCRs) - derived from a 3D MCNPX model of the entire reactor - have been applied to the PRESTO-2 and SIMULATE-5 models in the form of 2- and 5-group diagonal albedo matrices, respectively. The MCNPX results have also served as a reference, high-order transport solution in the calculation/experiment comparisons. It is shown that the performance of the nodal methodologies in predicting the global distribution of the total-fission rate is very satisfactory. Considering the various radial comparisons, the standard deviations of the calculated/experimental (C/E) distributions do not exceed 1.9% for any of the three methodologies - PRESTO-2, SIMULATE-5 and MCNPX. For the three-dimensional comparisons at pellet-level, the corresponding standard deviations are 2.7%, 2.0% and 2.1%, respectively. (authors)

  18. Photometric Lunar Surface Reconstruction

    Science.gov (United States)

    Nefian, Ara V.; Alexandrov, Oleg; Morattlo, Zachary; Kim, Taemin; Beyer, Ross A.

    2013-01-01

    Accurate photometric reconstruction of the Lunar surface is important in the context of upcoming NASA robotic missions to the Moon and in giving a more accurate understanding of the Lunar soil composition. This paper describes a novel approach for joint estimation of Lunar albedo, camera exposure time, and photometric parameters that utilizes an accurate Lunar-Lambertian reflectance model and previously derived Lunar topography of the area visualized during the Apollo missions. The method introduced here is used in creating the largest Lunar albedo map (16% of the Lunar surface) at the resolution of 10 meters/pixel.
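The Lunar-Lambertian reflectance model referred to above blends a Lommel-Seeliger ("lunar") term with a Lambertian term. A common form of the model is sketched below, with the simplifying assumption of a constant blend weight L rather than the phase-angle-dependent weighting used in practice:

```python
def lunar_lambert(albedo, mu0, mu, L=0.5):
    """Lunar-Lambertian reflectance.

    mu0, mu are the cosines of the incidence and emission angles; L blends
    the Lommel-Seeliger term (2*L*mu0/(mu0+mu)) with the Lambert term
    ((1-L)*mu0). L=0 gives pure Lambert, L=1 pure Lommel-Seeliger."""
    return albedo * (2.0 * L * mu0 / (mu0 + mu) + (1.0 - L) * mu0)
```

Given this forward model, topography (which fixes mu0 and mu per pixel) and observed radiance, albedo and exposure time can be jointly estimated by minimizing the misfit over overlapping images, which is the essence of the approach described in the abstract.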

  19. Vaginal reconstruction

    International Nuclear Information System (INIS)

    Lesavoy, M.A.

    1985-01-01

    Vaginal reconstruction can be an uncomplicated and straightforward procedure when attention to detail is maintained. The Abbe-McIndoe procedure of lining the neovaginal canal with split-thickness skin grafts has become standard. The use of the inflatable Heyer-Schulte vaginal stent provides comfort to the patient and ease to the surgeon in maintaining approximation of the skin graft. For large vaginal and perineal defects, myocutaneous flaps such as the gracilis island have been extremely useful for correction of radiation-damaged tissue of the perineum or for the reconstruction of large ablative defects. Minimal morbidity and scarring ensue because the donor site can be closed primarily. With all vaginal reconstruction, a compliant patient is a necessity. The patient must wear a vaginal obturator for a minimum of 3 to 6 months postoperatively and is encouraged to use intercourse as an excellent obturator. In general, vaginal reconstruction can be an extremely gratifying procedure for both the functional and emotional well-being of patients

  20. ACL Reconstruction

    Science.gov (United States)

    ... in moderate exercise and recreational activities, or play sports that put less stress on the knees. ACL reconstruction is generally recommended if: You're an athlete and want to continue in your sport, especially if the sport involves jumping, cutting or ...

  1. Visualization analysis and design

    CERN Document Server

    Munzner, Tamara

    2015-01-01

    Visualization Analysis and Design provides a systematic, comprehensive framework for thinking about visualization in terms of principles and design choices. The book features a unified approach encompassing information visualization techniques for abstract data, scientific visualization techniques for spatial data, and visual analytics techniques for interweaving data transformation and analysis with interactive visual exploration. It emphasizes the careful validation of effectiveness and the consideration of function before form. The book breaks down visualization design according to three questions: what data users need to see, why users need to carry out their tasks, and how the visual representations proposed can be constructed and manipulated. It walks readers through the use of space and color to visually encode data in a view, the trade-offs between changing a single view and using multiple linked views, and the ways to reduce the amount of data shown in each view. The book concludes with six case stu...

  2. Validation of 2 noninvasive, markerless reconstruction techniques in biplane high-speed fluoroscopy for 3-dimensional research of bovine distal limb kinematics.

    Science.gov (United States)

    Weiss, M; Reich, E; Grund, S; Mülling, C K W; Geiger, S M

    2017-10-01

    Lameness severely impairs cattle's locomotion, and it is among the most important threats to animal welfare, performance, and productivity in the modern dairy industry. However, insight into the pathological alterations of claw biomechanics leading to lameness and an understanding of the biomechanics behind development of claw lesions causing lameness are limited. Biplane high-speed fluoroscopic kinematography is a new approach for the analysis of skeletal motion. Biplane high-speed videos in combination with bone scans can be used for 3-dimensional (3D) animations of bones moving in 3D space. The gold standard, marker-based animation, requires implantation of radio-opaque markers into bones, which impairs the practicability for lameness research in live animals. Therefore, the purpose of this study was to evaluate the comparative accuracy of 2 noninvasive, markerless animation techniques (semi-automatic and manual) in 3D animation of the bovine distal limb. Tantalum markers were implanted into each of the distal, middle, and proximal phalanges of 5 isolated bovine distal forelimbs, and biplane high-speed x-ray videos of each limb were recorded to capture the simulation of one step. The limbs were scanned by computed tomography to create bone models of the 6 digital bones, and 3D animation of the bones' movements were subsequently reconstructed using the marker-based, the semi-automatic, and the manual animation techniques. Manual animation translational bias and precision varied from 0.63 ± 0.26 mm to 0.80 ± 0.49 mm, and rotational bias and precision ranged from 2.41 ± 1.43° to 6.75 ± 4.67°. Semi-automatic translational values for bias and precision ranged from 1.26 ± 1.28 mm to 2.75 ± 2.17 mm, and rotational values varied from 3.81 ± 2.78° to 11.7 ± 8.11°. In our study, we demonstrated the successful application of biplane high-speed fluoroscopic kinematography to gait analysis of bovine distal limb. Using the manual animation technique, kinematics

  3. Maxillary reconstruction

    Directory of Open Access Journals (Sweden)

    Brown James

    2007-12-01

    Full Text Available This article aims to discuss the various defects that occur with maxillectomy, with a full review of the literature and discussion of the advantages and disadvantages of the various techniques described. Reconstruction of the maxilla can be relatively simple for the standard low maxillectomy that does not involve the orbital floor (Class 2). In this situation the structure of the face is less damaged and there are multiple reconstructive options for the restoration of the maxilla and dental alveolus. If the maxillectomy includes the orbit (Class 4), then problems involving the eye (enophthalmos, orbital dystopia, ectropion and diplopia) are avoided, which simplifies the reconstruction. Most controversy is associated with the maxillectomy that involves the orbital floor and dental alveolus (Class 3). A case is made for the use of the iliac crest with internal oblique as an ideal option, but there are other methods which may provide a similar result. A multidisciplinary approach to these patients is emphasised, which should include a prosthodontist with special expertise in these defects.

  4. Feasibility of perflutren microsphere contrast transthoracic echocardiography in the visualization of ventricular endocardium during venovenous extracorporeal membrane oxygenation in a validated ovine model.

    Science.gov (United States)

    Platts, David G; Diab, Sara; Dunster, Kimble R; Shekar, Kiran; Burstow, Darryl J; Sim, Beatrice; Tunbridge, Matthew; McDonald, Charles; Chemonges, Saul; Chan, Jonathan; Fraser, John F

    2015-03-01

    Transthoracic echocardiography (TTE) during extracorporeal membrane oxygenation (ECMO) is important but can be technically challenging. Contrast-specific TTE can improve imaging in suboptimal studies. These contrast microspheres are hydrodynamically labile structures. This study assessed the feasibility of contrast echocardiography (CE) during venovenous (VV) ECMO in a validated ovine model. Twenty-four sheep were commenced on VV ECMO. Parasternal long-axis (Plax) and short-axis (Psax) views were obtained pre- and postcontrast while on VV ECMO. Endocardial definition scores (EDS) per segment were graded: 1 = good, 2 = suboptimal, 3 = not seen. An endocardial border definition score index (EBDSI) was calculated for each view. Endocardial length (EL) in the Plax view was measured for the left ventricle (LV) and right ventricle (RV). Summation EDS data for the LV and RV for unenhanced TTE (UE) versus CE TTE imaging were: EDS 1 = 289 versus 346, EDS 2 = 38 versus 10, EDS 3 = 33 versus 4, respectively. Wilcoxon matched-pairs signed-rank tests showed a significant ranking difference (improvement) pre- and postcontrast for the LV (P < 0.0001), RV (P < 0.0001) and combined ventricular data (P < 0.0001). EBDSI for CE TTE was significantly lower than for UE TTE for the LV (1.05 ± 0.17 vs. 1.22 ± 0.38, P = 0.0004) and RV (1.06 ± 0.22 vs. 1.42 ± 0.47, P = 0.0006), respectively. Visualized EL was significantly longer in CE versus UE for both the LV (58.6 ± 11.0 mm vs. 47.4 ± 11.7 mm, P < 0.0001) and the RV (52.3 ± 8.6 mm vs. 36.0 ± 13.1 mm, P < 0.0001), respectively. Despite exposure to destructive hydrodynamic forces, CE is a feasible technique in an ovine ECMO model. CE results in significantly improved EDS and increased EL. © 2014, Wiley Periodicals, Inc.
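On the usual definition, the endocardial border definition score index (EBDSI) for a view is simply the mean EDS over that view's segments; the abstract does not state the formula explicitly, so the unweighted mean below is an assumption:

```python
def ebdsi(segment_scores):
    """Endocardial border definition score index: mean EDS across the
    segments of one view (1 = good, 2 = suboptimal, 3 = not seen)."""
    if not segment_scores:
        raise ValueError("need at least one segment score")
    if any(s not in (1, 2, 3) for s in segment_scores):
        raise ValueError("each EDS must be 1, 2 or 3")
    return sum(segment_scores) / len(segment_scores)
```

For example, pooling the reported unenhanced counts (289 scores of 1, 38 of 2, 33 of 3) gives an overall index of about 1.29, consistent with the per-view values of 1.22 (LV) and 1.42 (RV) quoted in the abstract.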

  5. Hypothenar hammer syndrome: long-term results of vascular reconstruction.

    Science.gov (United States)

    Endress, Ryan D; Johnson, Craig H; Bishop, Allen T; Shin, Alexander Y

    2015-04-01

    To evaluate long-term patency rates and related outcomes after vascular reconstruction of hypothenar hammer syndrome and identify patient- or treatment-related factors that may contribute to differences in outcome. We used color flow ultrasound to determine the patency of 18 vein graft reconstructions of the ulnar artery at the wrist in 16 patients. Validated questionnaires evaluated patients' functional disability with the Disabilities of the Arm, Shoulder, and Hand score, pain with the visual analog scale, and cold intolerance with the Cold Intolerance Symptom Severity survey. Patient demographics, clinical data, and surgical factors were analyzed for association with graft failure. Patients were asked to grade the result of treatment on a scale of 0 to 10. Of 18 grafts, 14 (78%) were occluded at a mean of 118 months postoperatively. Patients with patent grafts had significantly less disability related to cold intolerance according to the Cold Intolerance Symptom Severity survey in addition to significantly less pain on the visual analog scale. There was no statistical difference in Disabilities of the Arm, Shoulder, and Hand scores between patients with patent or occluded grafts. Patients graded the result significantly higher in patent reconstructions. We noted a higher incidence of graft occlusion than previously reported at a mean follow-up of 9.8 years, which represents a long-duration follow-up study of surgical treatment of hypothenar hammer syndrome. Despite a high percentage of occlusion, overall, patients remained satisfied with low functional disability and all would recommend surgical reconstruction. This study suggests that improved outcomes may result from patent grafts in the long term. Prognostic IV. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  6. Development and validation of an interactive efficient dose rates distribution calculation program ARShield for visualization of radiation field in nuclear power plants

    International Nuclear Information System (INIS)

    He, Shuxiang; Zhang, Han; Wang, Mengqi; Zang, Qiyong; Zhang, Jingyu; Chen, Yixue

    2017-01-01

    The point kernel integration (PKI) method is widely used for visualization of the radiation field in engineering applications because it can quickly handle large-scale problems with complicated geometries. Traditional PKI programs, however, suffer from many restrictions, such as cumbersome modeling, cumbersome source setting, 3D fine-mesh result statistics and limited large-scale computing efficiency. To overcome these restrictions, ARShield was developed. The results show that ARShield can handle complicated plant radiation shielding problems for visualization of the radiation field. Comparison with SuperMC and QAD shows that the program is reliable and efficient. ARShield also meets the demands of fast calculation and of interactive modeling and display of 3D geometries on a graphical user interface, avoiding modeling errors in calculation and visualization. (authors)
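The point kernel idea underlying codes such as ARShield and QAD is to decompose the source into point emitters and sum an attenuated inverse-square kernel at each mesh point. The sketch below is deliberately simplified under stated assumptions: a single energy group, a linear build-up factor, and a flux-to-dose constant `kappa` are all illustrative, not how any particular code models them:

```python
import math

def dose_rate(source_points, detector, mu, kappa=1.0):
    """Point-kernel sum over discretised source points.

    Each point of strength S contributes kappa * S * B * exp(-mu*d) / (4*pi*d^2),
    where d is the source-detector distance, mu a one-group attenuation
    coefficient, and B = 1 + mu*d a simple linear build-up factor (assumption)."""
    total = 0.0
    for (x, y, z, strength) in source_points:
        d = math.dist((x, y, z), detector)
        buildup = 1.0 + mu * d
        total += kappa * strength * buildup * math.exp(-mu * d) / (4.0 * math.pi * d * d)
    return total
```

Because each mesh point is an independent sum, the method parallelises trivially over the 3D fine mesh, which is what makes PKI attractive for interactive radiation-field visualization.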

  7. WARACS: Wrappers to Automate the Reconstruction of Ancestral Character States.

    Science.gov (United States)

    Gruenstaeudl, Michael

    2016-02-01

    Reconstructions of ancestral character states are among the most widely used analyses for evaluating the morphological, cytological, or ecological evolution of an organismic lineage. The software application Mesquite remains the most popular application for such reconstructions among plant scientists, even though its support for automating complex analyses is limited. A software tool is needed that automates the reconstruction and visualization of ancestral character states with Mesquite and similar applications. A set of command line-based Python scripts was developed that (a) communicates standardized input to and output from the software applications Mesquite, BayesTraits, and TreeGraph2; (b) automates the process of ancestral character state reconstruction; and (c) facilitates the visualization of reconstruction results. WARACS provides a simple tool that streamlines the reconstruction and visualization of ancestral character states over a wide array of parameters, including tree distribution, character state, and optimality criterion.
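The wrapper pattern WARACS follows — locating an external application, passing it standardized input, and capturing its output — can be sketched with Python's standard library. This is a generic illustration of the pattern, not WARACS code, and the error handling is minimal:

```python
import shutil
import subprocess

def run_tool(command, args, stdin_text=None):
    """Locate an external program on PATH, run it, and return its stdout.

    Raises if the program is missing or exits with a non-zero status, so a
    pipeline of wrapped tools fails loudly instead of silently."""
    exe = shutil.which(command)
    if exe is None:
        raise FileNotFoundError(f"'{command}' not found on PATH")
    result = subprocess.run([exe, *args], input=stdin_text,
                            capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"{command} failed: {result.stderr.strip()}")
    return result.stdout
```

Chaining several such calls — e.g. one per reconstruction program, with each call's stdout parsed into the next call's input — is essentially how a set of command-line wrappers automates a multi-application workflow.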

  8. Colour reconstruction of underwater images

    OpenAIRE

    Hoth, Julian; Kowalczyk, Wojciech

    2017-01-01

    Objects look very different in the underwater environment compared to their appearance in sunlight. Images with correct colouring simplify the detection of underwater objects and may allow the use of visual SLAM algorithms developed for land-based robots underwater. Hence, image processing is required. Current algorithms focus on the colour reconstruction of scenery at diving depth where different colours can still be distinguished. At greater depth this is not the case. In this study it is i...

  9. A noise-optimized virtual monochromatic reconstruction algorithm improves stent visualization and diagnostic accuracy for detection of in-stent re-stenosis in lower extremity run-off CT angiography

    Energy Technology Data Exchange (ETDEWEB)

    Mangold, Stefanie [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Eberhard-Karls University Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany); De Cecco, Carlo N.; Yamada, Ricardo T.; Varga-Szemes, Akos; Stubenrauch, Andrew C.; Fuller, Stephen R. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Schoepf, U.J. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Medical University of South Carolina, Division of Cardiology, Department of Medicine, Charleston, SC (United States); Caruso, Damiano [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Rome 'Sapienza', Department of Radiological Sciences, Oncology and Pathology, Rome (Italy); Vogl, Thomas J.; Wichmann, Julian L. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); University Hospital Frankfurt, Department of Diagnostic and Interventional Radiology, Frankfurt (Germany); Nikolaou, Konstantin [Eberhard-Karls University Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany); Todoran, Thomas M. [Medical University of South Carolina, Division of Cardiology, Department of Medicine, Charleston, SC (United States)

    2016-12-15

To evaluate the impact of noise-optimized virtual monochromatic imaging (VMI+) on stent visualization and accuracy for in-stent re-stenosis at lower extremity dual-energy CT angiography (DE-CTA). We evaluated third-generation dual-source DE-CTA studies in 31 patients with prior stent placement. Images were reconstructed with linear blending (F0.5) and VMI+ at 40-150 keV. In-stent luminal diameter was measured and contrast-to-noise ratio (CNR) calculated. Diagnostic confidence was determined using a five-point scale. In 21 patients with invasive catheter angiography, accuracy for significant re-stenosis (≥50 %) was assessed at F0.5 and 80 keV-VMI+, chosen as the optimal energy level based on image-quality analysis. At CTA, 45 stents were present. DSA was available for 28 stents, of which 12 showed significant re-stenosis. CNR was significantly higher with ≤80 keV-VMI+ (17.9 ± 6.4-33.7 ± 12.3) compared to F0.5 (16.9 ± 4.8; all p < 0.0463); luminal stent diameters were increased at ≥70 keV (5.41 ± 1.8-5.92 ± 1.7 vs. 5.27 ± 1.8, all p < 0.001) and diagnostic confidence was highest at 70-80 keV-VMI+ (4.90 ± 0.48-4.88 ± 0.63 vs. 4.60 ± 0.66, p = 0.001, 0.0042). Sensitivity, negative predictive value and accuracy for re-stenosis were higher with 80 keV-VMI+ (100, 100, 96.4 %) than F0.5 (90.9, 94.1, 89.3 %). 80 keV-VMI+ improves image quality, diagnostic confidence and accuracy for stent evaluation at lower extremity DE-CTA. (orig.)

  10. 3D reconstruction of cystoscopy videos for comprehensive bladder records.

    Science.gov (United States)

    Lurie, Kristen L; Angst, Roland; Zlatev, Dimitar V; Liao, Joseph C; Ellerbee Bowden, Audrey K

    2017-04-01

    White light endoscopy is widely used for diagnostic imaging of the interior of organs and body cavities, but the inability to correlate individual 2D images with 3D organ morphology limits its utility for quantitative or longitudinal studies of disease physiology or cancer surveillance. As a result, most endoscopy videos, which carry enormous data potential, are used only for real-time guidance and are discarded after collection. We present a computational method to reconstruct and visualize a 3D model of organs from an endoscopic video that captures the shape and surface appearance of the organ. A key aspect of our strategy is the use of advanced computer vision techniques and unmodified, clinical-grade endoscopy hardware with few constraints on the image acquisition protocol, which presents a low barrier to clinical translation. We validate the accuracy and robustness of our reconstruction and co-registration method using cystoscopy videos from tissue-mimicking bladder phantoms and show clinical utility during cystoscopy in the operating room for bladder cancer evaluation. As our method can powerfully augment the visual medical record of the appearance of internal organs, it is broadly applicable to endoscopy and represents a significant advance in cancer surveillance opportunities for big-data cancer research.

  11. Quartet-based methods to reconstruct phylogenetic networks.

    Science.gov (United States)

    Yang, Jialiang; Grünewald, Stefan; Xu, Yifei; Wan, Xiu-Feng

    2014-02-20

Phylogenetic networks are employed to visualize evolutionary relationships among a group of nucleotide sequences, genes or species when reticulate events like hybridization, recombination, reassortment and horizontal gene transfer are believed to be involved. In comparison to traditional distance-based methods, quartet-based methods consider more information in the reconstruction process and thus have the potential to be more accurate. We introduce QuartetSuite, which includes a set of new quartet-based methods, namely QuartetS, QuartetA, and QuartetM, to reconstruct phylogenetic networks from nucleotide sequences. We tested their performance and compared them with other popular methods on two simulated nucleotide sequence data sets: one generated from a tree topology and the other from a complicated evolutionary history containing three reticulate events. We further validated these methods on two real data sets: a bacterial data set consisting of seven concatenated genes of 36 bacterial species and an influenza data set related to the recently emerging H7N9 low pathogenic avian influenza viruses in China. QuartetS, QuartetA, and QuartetM have the potential to accurately reconstruct evolutionary scenarios from simple branching trees to complicated networks containing many reticulate events. These methods could provide insights into the understanding of complicated biological evolutionary processes such as bacterial taxonomy and reassortment of influenza viruses.

  12. PET reconstruction

    International Nuclear Information System (INIS)

    O'Sullivan, F.; Pawitan, Y.; Harrison, R.L.; Lewellen, T.K.

    1990-01-01

In statistical terms, filtered backprojection can be viewed as smoothed Least Squares (LS). In this paper, the authors report on improvement in LS resolution by incorporating locally adaptive smoothers, imposing positivity and using statistical methods for optimal selection of the resolution parameter. The resulting algorithm has high computational efficiency relative to more elaborate Maximum Likelihood (ML) type techniques (i.e. EM with sieves). Practical aspects of the procedure are discussed in the context of PET, and illustrations with computer-simulated and real tomograph data are presented. The relative recovery coefficients for a 9 mm sphere in a computer-simulated hot-spot phantom range from 0.3 to 0.6 as the number of counts ranges from 10,000 to 640,000, respectively. The authors also present results illustrating the relative efficacy of ML and LS reconstruction techniques.
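The smoothed least-squares view of reconstruction described in this abstract can be sketched in a few lines. The system matrix, regularization weight and problem sizes below are invented for the toy example and do not reflect the authors' implementation:

```python
import numpy as np

# Toy illustration: reconstruction as smoothed (ridge-regularized) least
# squares with a positivity constraint. A stands in for a tomographic
# system matrix; lam plays the role of the resolution parameter.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 16))          # hypothetical system matrix
x_true = np.abs(rng.standard_normal(16))   # non-negative "activity" values
y = A @ x_true                             # noise-free projection data

lam = 1e-6                                 # smoothing / resolution parameter
x_ls = np.linalg.solve(A.T @ A + lam * np.eye(16), A.T @ y)
x_ls = np.clip(x_ls, 0.0, None)            # impose positivity

print(float(np.max(np.abs(x_ls - x_true))))
```

In a real PET setting the resolution parameter would be selected by the statistical criteria the abstract mentions rather than fixed by hand.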

  13. Comparison of visual scoring and quantitative planimetry methods for estimation of global infarct size on delayed enhanced cardiac MRI and validation with myocardial enzymes

    Energy Technology Data Exchange (ETDEWEB)

Mewton, Nathan, E-mail: nmewton@gmail.com [Hopital Cardiovasculaire Louis Pradel, 28, Avenue Doyen Lepine, 69677 Bron cedex, Hospices Civils de Lyon (France); CREATIS-LRMN (Centre de Recherche et d'Applications en Traitement de l'Image et du Signal), Universite Claude Bernard Lyon 1, UMR CNRS 5220, U 630 INSERM (France); Revel, Didier [Hopital Cardiovasculaire Louis Pradel, 28, Avenue Doyen Lepine, 69677 Bron cedex, Hospices Civils de Lyon (France); CREATIS-LRMN (Centre de Recherche et d'Applications en Traitement de l'Image et du Signal), Universite Claude Bernard Lyon 1, UMR CNRS 5220, U 630 INSERM (France); Bonnefoy, Eric [Hopital Cardiovasculaire Louis Pradel, 28, Avenue Doyen Lepine, 69677 Bron cedex, Hospices Civils de Lyon (France); Ovize, Michel [Hopital Cardiovasculaire Louis Pradel, 28, Avenue Doyen Lepine, 69677 Bron cedex, Hospices Civils de Lyon (France); INSERM Unite 886 (France); Croisille, Pierre [Hopital Cardiovasculaire Louis Pradel, 28, Avenue Doyen Lepine, 69677 Bron cedex, Hospices Civils de Lyon (France); CREATIS-LRMN (Centre de Recherche et d'Applications en Traitement de l'Image et du Signal), Universite Claude Bernard Lyon 1, UMR CNRS 5220, U 630 INSERM (France)

    2011-04-15

Purpose: Although delayed enhanced CMR has become a reference method for infarct size quantification, there is no ideal method to quantify total infarct size in routine clinical practice. In a prospective study we compared the performance and post-processing time of a global visual scoring method to standard quantitative planimetry, and we compared both methods to the peak values of myocardial biomarkers. Materials and methods: This study had local ethics committee approval; all patients gave written informed consent. One hundred and three patients admitted with reperfused AMI to our intensive care unit had a complete CMR study with gadolinium-contrast injection 4 ± 2 days after admission. A global visual score was defined on a 17-segment model and compared with the quantitative planimetric evaluation of hyperenhancement. The peak values of serum Troponin I (TnI) and creatine kinase (CK) release were measured in each patient. Results: The mean percentage of total left ventricular myocardium with hyperenhancement determined by the quantitative planimetry method was 20.1 ± 14.6, with a range of 1-68%. There was an excellent correlation between quantitative planimetry and visual global scoring for the measurement of hyperenhancement extent (r = 0.94; y = 1.093x + 0.87; SEE = 1.2; P < 0.001). The Bland-Altman plot showed good concordance between the two approaches (mean of the differences = 1.9% with a standard deviation of 4.7). Mean post-processing time for quantitative planimetry was significantly longer than for visual scoring (23.7 ± 5.7 min vs 5.0 ± 1.1 min, respectively, P < 0.001). Correlation between peak CK and quantitative planimetry was r = 0.82 (P < 0.001) and r = 0.83 (P < 0.001) with visual global scoring. Correlation between peak Troponin I and quantitative planimetry was r = 0.86 (P < 0.001) and r = 0.85 (P < 0.001) with visual global scoring. Conclusion: A visual approach based on a 17-segment model allows a rapid

  14. Comparison of visual scoring and quantitative planimetry methods for estimation of global infarct size on delayed enhanced cardiac MRI and validation with myocardial enzymes

    International Nuclear Information System (INIS)

    Mewton, Nathan; Revel, Didier; Bonnefoy, Eric; Ovize, Michel; Croisille, Pierre

    2011-01-01

Purpose: Although delayed enhanced CMR has become a reference method for infarct size quantification, there is no ideal method to quantify total infarct size in routine clinical practice. In a prospective study we compared the performance and post-processing time of a global visual scoring method to standard quantitative planimetry, and we compared both methods to the peak values of myocardial biomarkers. Materials and methods: This study had local ethics committee approval; all patients gave written informed consent. One hundred and three patients admitted with reperfused AMI to our intensive care unit had a complete CMR study with gadolinium-contrast injection 4 ± 2 days after admission. A global visual score was defined on a 17-segment model and compared with the quantitative planimetric evaluation of hyperenhancement. The peak values of serum Troponin I (TnI) and creatine kinase (CK) release were measured in each patient. Results: The mean percentage of total left ventricular myocardium with hyperenhancement determined by the quantitative planimetry method was 20.1 ± 14.6, with a range of 1-68%. There was an excellent correlation between quantitative planimetry and visual global scoring for the measurement of hyperenhancement extent (r = 0.94; y = 1.093x + 0.87; SEE = 1.2; P < 0.001). The Bland-Altman plot showed good concordance between the two approaches (mean of the differences = 1.9% with a standard deviation of 4.7). Mean post-processing time for quantitative planimetry was significantly longer than for visual scoring (23.7 ± 5.7 min vs 5.0 ± 1.1 min, respectively, P < 0.001). Correlation between peak CK and quantitative planimetry was r = 0.82 (P < 0.001) and r = 0.83 (P < 0.001) with visual global scoring. Correlation between peak Troponin I and quantitative planimetry was r = 0.86 (P < 0.001) and r = 0.85 (P < 0.001) with visual global scoring. Conclusion: A visual approach based on a 17-segment model allows a rapid and accurate
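The agreement statistics this study relies on (Bland-Altman bias with limits of agreement, plus Pearson correlation) are straightforward to compute. The infarct-size values below are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical infarct-size measurements (% of LV) from two methods.
planimetry = np.array([12.0, 25.5, 8.2, 40.1, 19.7, 33.0])
visual     = np.array([13.1, 27.0, 9.0, 42.5, 21.0, 35.2])

diff = visual - planimetry
bias = diff.mean()                          # mean of the differences
sd = diff.std(ddof=1)                       # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
r = np.corrcoef(planimetry, visual)[0, 1]   # Pearson correlation

print(round(bias, 2), round(r, 3))
```

A Bland-Altman plot is simply `diff` plotted against the pairwise means, with horizontal lines at `bias` and the two limits of agreement.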

  15. Breast Reconstruction After Mastectomy

    Science.gov (United States)

... Breast Reconstruction After Mastectomy: What is breast reconstruction? How do surgeons use implants to reconstruct a woman’s breast? How do surgeons ...

  16. Breast reconstruction - implants

    Science.gov (United States)

    Breast implants surgery; Mastectomy - breast reconstruction with implants; Breast cancer - breast reconstruction with implants ... harder to find a tumor if your breast cancer comes back. Getting breast implants does not take as long as breast reconstruction ...

  17. Visualization of Tooth for Non-Destructive Evaluation from CT Images

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Hui; Chae, Ok Sam [Kyung Hee University, Seoul (Korea, Republic of)

    2009-06-15

This paper reports an effort to develop a 3D tooth visualization system from CT sequence images as part of non-destructive evaluation suitable for the simulation of endodontics, orthodontics and other dental treatments. We focus on the segmentation and visualization of individual teeth. In dental CT images, teeth touch adjacent teeth or are surrounded by alveolar bone of similar intensity. We propose an improved level set method with a shape prior to separate a tooth from neighbouring teeth as well as from the alveolar bone. The reconstructed 3D model of an individual tooth based on the segmentation results indicates that our technique is a useful tool for tooth visualization, evaluation and diagnosis. Comparative visualization results validate the non-destructive function of our method.

  18. Jini service to reconstruct tomographic data

    Science.gov (United States)

    Knoll, Peter; Mirzaei, S.; Koriska, K.; Koehn, H.

    2002-06-01

A number of imaging systems rely on the reconstruction of a 3-dimensional model from its projections through the process of computed tomography (CT). In medical imaging, for example, magnetic resonance imaging (MRI), positron emission tomography (PET) and single photon emission computed tomography (SPECT) acquire two-dimensional projections of a three-dimensional object. In order to calculate the 3-dimensional representation of the object, i.e. its voxel distribution, several reconstruction algorithms have been developed. Currently, mainly two reconstruction approaches are in use: filtered back projection (FBP) and iterative methods. Although the quality of iteratively reconstructed SPECT slices is better than that of FBP slices, such iterative algorithms are rarely used for clinical routine studies because of their low availability and increased reconstruction time. We used Jini and a self-developed iterative reconstruction algorithm to design and implement a Jini reconstruction service. With this service, the physician selects the patient study from a database and a Jini client automatically discovers the registered Jini reconstruction services in the department's Intranet. After downloading the proxy object of this Jini service, the SPECT acquisition data are reconstructed. The resulting transaxial slices are visualized using a Jini slice viewer, which can be used for various imaging modalities.

  19. 4D-visualization of the orbit based on dynamic MRI with special focus on the extra-ocular muscles and the optic nerves

    International Nuclear Information System (INIS)

    Kober, C.; Boerner, B.I.; Buitrago, C.; Klarhoefer, M.; Scheffler, K.; Kunz, C.; Zeilhofer, H.F.

    2007-01-01

By recording time-dependent patient behaviour, dynamic radiology is dedicated to capturing functional anatomy. Dynamic ''quasi-continuous'' MRI data of lateral eye movements of a healthy volunteer were acquired using an SE imaging sequence (Siemens, 1.5 T). By means of the combined application of several image processing and visualization techniques, namely shaded and transparent surface reconstruction as well as direct volume rendering, 4D-visualization of the dynamics of the extra-ocular muscles was possible. Though the original MRI data were quite coarse, vascular structures could be recognized to some extent. For 4D-visualization of the optic nerve, the orbital cavity was opened by axial clipping of the visualization. Superimposition of the original MRI slices on the visualization, either transparently or opaquely, served as validation and comparison to conventional diagnosis. To facilitate the analysis of the visualization results, stereoscopic rendering was rated as quite significant, especially in the clinical setting. (orig.)

  20. Three-way ROC validation of rs-fMRI visual information propagation transfer functions used to differentiate between RRMS and CIS optic neuritis patients.

    Science.gov (United States)

    Farahani, Ehsan Shahrabi; Choudhury, Samiul H; Cortese, Filomeno; Costello, Fiona; Goodyear, Bradley; Smith, Michael R

    2017-07-01

    Resting-state fMRI (rs-fMRI) measures the temporal synchrony between different brain regions while the subject is at rest. We present an investigation using visual information propagation transfer functions as potential optic neuritis (ON) markers for the pathways between the lateral geniculate nuclei, the primary visual cortex, the lateral occipital cortex and the superior parietal cortex. We investigate marker reliability in differentiating between healthy controls and ON patients with clinically isolated syndrome (CIS), and relapsing-remitting multiple sclerosis (RRMS) using a three-way receiver operating characteristics analysis. We identify useful and reliable three-way ON related metrics in the rs-fMRI low-frequency band 0.0 Hz to 0.1 Hz, with potential markers associated with the higher frequency harmonics of these signals in the 0.1 Hz to 0.2 Hz and 0.2 Hz to 0.3 Hz bands.
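The two-class ROC AUC, the building block that three-way ROC analysis generalizes, reduces to a Mann-Whitney-style pairwise comparison of scores. The marker values below are invented for illustration:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case outscores a negative one,
    with ties counted as one half (Mann-Whitney U / (n_pos * n_neg))."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

# Hypothetical marker values for patients vs. healthy controls.
print(auc([0.9, 0.8, 0.7], [0.2, 0.4, 0.8]))
```

A three-way analysis extends this idea to the probability that three groups are correctly ordered by the marker, rather than two.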

  1. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provide a level of confidence that the HEDR models are valid

  2. Shaping the breast in secondary microsurgical breast reconstruction: single- vs. two-esthetic unit reconstruction.

    Science.gov (United States)

    Gravvanis, Andreas; Smith, Roger W

    2010-10-01

The esthetic outcome is dictated not only by the position, size, and shape of the reconstructed breast, but also by the extra scarring involved. In the present study, we conducted a visual analog scale survey to compare the esthetic outcome in delayed autologous breast reconstruction following two different abdominal flap insets. Twenty-five patients had their reconstruction using the single-esthetic-unit principle and were compared with 25 patients whose breast was reconstructed using the two-esthetic-unit principle. Photographic images were assembled into a PowerPoint presentation, and cosmetic outcomes were assessed by 30 physicians by means of a questionnaire and a visual analog scale. Our data showed that single-esthetic-unit breast reconstruction presents significant advantages over the traditional two-esthetic-unit approach, due to inconspicuous flap reconstruction, better position of the inframammary fold, and a more natural transition between native and reconstructed tissues. Moreover, patient self-evaluation of esthetic outcome and quality of life showed that single-esthetic-unit reconstruction is associated with higher patient satisfaction, and therefore should be considered the method of choice. © 2010 Wiley-Liss, Inc.

  3. MO-DE-207A-07: Filtered Iterative Reconstruction (FIR) Via Proximal Forward-Backward Splitting: A Synergy of Analytical and Iterative Reconstruction Method for CT

    International Nuclear Information System (INIS)

    Gao, H

    2016-01-01

Purpose: This work is to develop a general framework, namely the filtered iterative reconstruction (FIR) method, to incorporate the analytical reconstruction (AR) method into the iterative reconstruction (IR) method, for enhanced CT image quality. Methods: FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and then solved by the proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, which is then reconstructed by a certain AR to a residual image, in turn weighted together with the previous image iterate to form the next image iterate. Since the eigenvalues of the AR-projection operator are close to unity, PFBS-based FIR has fast convergence. Results: The proposed FIR method is validated in the setting of circular cone-beam CT, with FDK as the AR and total-variation sparsity regularization, and has improved image quality over both AR and IR. For example, FIR has improved visual assessment and quantitative measurement in terms of both contrast and resolution, and reduced axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. The author was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
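The two-step forward-backward scheme described above (a data-fidelity gradient step followed by a proximal regularization step) can be sketched on a toy l1-regularized least-squares problem. This is plain ISTA with synthetic data, standing in for the general PFBS template rather than the authors' filtered variant:

```python
import numpy as np

# Proximal forward-backward splitting for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.5, -2.0, 1.0]      # sparse ground truth
y = A @ x_true

lam = 0.1
t = 1.0 / np.linalg.norm(A.T @ A, 2)       # step size <= 1/L (L = Lipschitz const.)
x = np.zeros(20)
for _ in range(500):
    x = x - t * (A.T @ (A @ x - y))        # forward step: data-fidelity gradient
    x = np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)  # backward step: l1 prox

print(np.flatnonzero(np.abs(x) > 0.5))
```

In FIR the gradient step would be replaced by the filtered AR-projection update and the prox by a TV denoising solver, but the alternation is the same.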

  4. Adaptive algebraic reconstruction technique

    International Nuclear Information System (INIS)

    Lu Wenkai; Yin Fangfang

    2004-01-01

Algebraic reconstruction techniques (ART) are iterative procedures for reconstructing objects from their projections. It has been shown that ART can be made computationally efficient by carefully arranging the order in which the collected data are accessed during the reconstruction procedure and by adaptively adjusting the relaxation parameters. In this paper, an adaptive algebraic reconstruction technique (AART), which adopts the same projection access scheme as the multilevel scheme algebraic reconstruction technique (MLS-ART), is proposed. By introducing adaptive adjustment of the relaxation parameters during the reconstruction procedure, one-iteration AART produces reconstructions of better quality than one-iteration MLS-ART. Furthermore, AART outperforms MLS-ART in computational efficiency.
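The basic ART update that AART adapts is the relaxed Kaczmarz step, one projection equation at a time. The system below is a synthetic stand-in for real projection data, with a fixed relaxation parameter rather than the adaptive schedule the abstract proposes:

```python
import numpy as np

def art_sweep(A, y, x, relax=1.0):
    """One ART pass over all projection equations a_i . x = y_i,
    with relaxation parameter `relax` (Kaczmarz update)."""
    for a_i, y_i in zip(A, y):
        x = x + relax * (y_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Consistent toy system: 30 "rays" through a 10-pixel "image".
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
y = A @ x_true

x = np.zeros(10)
for _ in range(200):
    x = art_sweep(A, y, x, relax=1.0)

print(bool(np.allclose(x, x_true, atol=1e-6)))
```

Both the row ordering (the projection access scheme) and the per-step value of `relax` are exactly the knobs that MLS-ART and AART tune.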

  5. HEDR model validation plan

    International Nuclear Information System (INIS)

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational ''tools'' for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to ''validate'' these tools. In the sense of the HEDR Project, ''validation'' is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model

  6. Reconstruction of blood propagation in three-dimensional rotational X-ray angiography (3D-RA).

    Science.gov (United States)

    Schmitt, Holger; Grass, Michael; Suurmond, Rolf; Köhler, Thomas; Rasche, Volker; Hähnel, Stefan; Heiland, Sabine

    2005-10-01

This paper presents a framework of non-interactive algorithms for the mapping of blood flow information to vessels in 3D-RA images. With the presented method, mapping of flow information to 3D-RA images is done automatically, without user interaction. Until now, radiologists had to perform this task by extensive image comparison and did not obtain visualizations of the results. In our approach, flow information is reconstructed by forward projection of vessel pieces in a 3D-RA image onto a two-dimensional projection series capturing the propagation of a short additional contrast agent bolus. For accurate 2D-3D image registration, an efficient patient motion compensation technique is introduced. As an exemplary flow-related quantity, bolus arrival times are reconstructed for the vessel pieces by matching of intensity-time curves. A plausibility check framework was developed which handles projection ambiguities and corrects for noisy flow reconstruction results. It is based on a linear programming approach to model the feeding structure of the vessel. The flow reconstruction method was applied to 12 cases of cerebral stenoses, AVMs and aneurysms, and it proved to be feasible in the clinical environment. The propagation of the injected contrast agent was reconstructed and visualized in three-dimensional images. The flow reconstruction method was able to visualize different types of useful information. In cases of stenosis of the middle cerebral artery (MCA), flow reconstruction can reveal impeded blood flow depending on the severity of the stenosis. In cases of AVMs, flow reconstruction can clarify the feeding structure. The presented methods handle the problems imposed by clinical demands such as non-interactive algorithms, patient motion compensation and short reconstruction times, and technical requirements such as correction of noisy bolus arrival times and handling of overlapping vessel pieces. Problems occurred mainly in the reconstruction and segmentation of 3D

  7. C-plane Reconstructions from Sheaf Acquisition for Ultrasound Electrode Vibration Elastography.

    Science.gov (United States)

    Ingle, Atul; Varghese, Tomy

    2014-09-03

This paper presents a novel algorithm for reconstructing and visualizing ablated volumes using radiofrequency ultrasound echo data acquired with the electrode vibration elastography approach. The ablation needle is vibrated using an actuator to generate shear wave pulses that are tracked in the ultrasound image plane at different locations away from the needle. These data are used for reconstructing shear wave velocity maps for each imaging plane. A C-plane reconstruction algorithm is proposed which estimates shear wave velocity values on a collection of transverse planes that are perpendicular to the imaging planes. The algorithm utilizes shear wave velocity maps from different imaging planes that share a common axis of intersection. These C-planes can be used to generate a 3D visualization of the ablated region. Experimental validation of this approach was carried out using data from a tissue-mimicking phantom. The shear wave velocity estimates were within 20% of those obtained from a clinical scanner, and a contrast of over 4 dB was obtained between the stiff and soft regions of the phantom.

  8. Creativity, visualization abilities, and visual cognitive style.

    Science.gov (United States)

    Kozhevnikov, Maria; Kozhevnikov, Michael; Yu, Chen Jiao; Blazhenkova, Olesya

    2013-06-01

    Despite the recent evidence for a multi-component nature of both visual imagery and creativity, there have been no systematic studies on how the different dimensions of creativity and imagery might interrelate. The main goal of this study was to investigate the relationship between different dimensions of creativity (artistic and scientific) and dimensions of visualization abilities and styles (object and spatial). In addition, we compared the contributions of object and spatial visualization abilities versus corresponding styles to scientific and artistic dimensions of creativity. Twenty-four undergraduate students (12 females) were recruited for the first study, and 75 additional participants (36 females) were recruited for an additional experiment. Participants were administered a number of object and spatial visualization abilities and style assessments as well as a number of artistic and scientific creativity tests. The results show that object visualization relates to artistic creativity and spatial visualization relates to scientific creativity, while both are distinct from verbal creativity. Furthermore, our findings demonstrate that style predicts corresponding dimension of creativity even after removing shared variance between style and visualization ability. The results suggest that styles might be a more ecologically valid construct in predicting real-life creative behaviour, such as performance in different professional domains. © 2013 The British Psychological Society.

  9. Stability indicators in network reconstruction.

    Directory of Open Access Journals (Sweden)

    Michele Filosi

Full Text Available The number of available algorithms to infer a biological network from a dataset of high-throughput measurements is overwhelming and keeps growing. However, evaluating their performance is unfeasible unless a 'gold standard' is available to measure how close the reconstructed network is to the ground truth. One measure of this is the stability of these predictions to data resampling approaches. We introduce NetSI, a family of Network Stability Indicators, to assess quantitatively the stability of a reconstructed network in terms of inference variability due to data subsampling. In order to evaluate network stability, the main NetSI methods use a global/local network metric in combination with a resampling (bootstrap or cross-validation) procedure. In addition, we provide two normalized variability scores over data resampling to measure edge weight stability and node degree stability, and then introduce a stability ranking for edges and nodes. A complete implementation of the NetSI indicators, including the Hamming-Ipsen-Mikhailov (HIM) network distance adopted in this paper, is available with the R package nettools. We demonstrate the use of the NetSI family by measuring network stability on four datasets against alternative network reconstruction methods. First, the effect of sample size on the stability of inferred networks is studied in a gold standard framework on yeast-like data from the Gene Net Weaver simulator. We also consider the impact of varying modularity on a set of structurally different networks (50 nodes, from 2 to 10 modules), and then of complex feature covariance structure, showing the different behaviours of standard reconstruction methods based on Pearson correlation, Maximum Information Coefficient (MIC) and a False Discovery Rate (FDR) strategy. Finally, we demonstrate a strong combined effect of different reconstruction methods and phenotype subgroups on a hepatocellular carcinoma miRNA microarray dataset (240 subjects), and we
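The resampling idea behind these stability indicators can be shown on a toy correlation network: reconstruct the network on bootstrap resamples and measure how much each edge weight varies. The data, the correlation-based reconstruction rule and the scores below are simplified stand-ins, not the NetSI/HIM machinery itself:

```python
import numpy as np

# Bootstrap stability of a correlation-network reconstruction.
rng = np.random.default_rng(3)
n, p = 200, 4
X = rng.standard_normal((n, p))
X[:, 1] += X[:, 0]               # plant one strong dependency: edge (0, 1)

edges = []
for _ in range(100):             # bootstrap resamples
    idx = rng.integers(0, n, n)  # sample rows with replacement
    C = np.corrcoef(X[idx].T)    # "reconstructed network" = correlation matrix
    edges.append(C[np.triu_indices(p, k=1)])
edges = np.array(edges)          # shape: (resamples, n_edges)

strength = edges.mean(axis=0)    # average edge weight across resamples
stability = edges.std(axis=0)    # per-edge variability (lower = more stable)

print(strength.argmax())         # index of edge (0, 1) among upper-tri pairs
```

NetSI replaces the per-edge standard deviation with normalized variability scores and whole-network distances such as HIM, but the resample-reconstruct-compare loop is the same.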

  10. Art care: A multi-modality coronary 3D reconstruction and hemodynamic status assessment software.

    Science.gov (United States)

    Siogkas, Panagiotis K; Stefanou, Kostas A; Athanasiou, Lambros S; Papafaklis, Michail I; Michalis, Lampros K; Fotiadis, Dimitrios I

    2018-01-01

Due to the increasing clinical interest in the development of software that allows 3-dimensional (3D) reconstruction and functional assessment of the coronary vasculature, several software packages have been developed and are available today. Taking this into consideration, we have developed an innovative suite of software modules that perform 3D reconstruction of coronary arterial segments using different coronary imaging modalities, such as IntraVascular UltraSound (IVUS) and invasive coronary angiography (ICA) images, Optical Coherence Tomography (OCT) and ICA images, or plain ICA images, and that can safely and accurately assess the hemodynamic status of the artery of interest. The user can perform automated or manual segmentation of the IVUS or OCT images, visualize the reconstructed vessel in 3D and export it to formats compatible with other Computer Aided Design (CAD) software systems. We employ finite elements to assess the hemodynamic functionality of the reconstructed vessels by calculating the virtual functional assessment index (vFAI), an index that has been shown to correlate well with the actual fractional flow reserve (FFR) value. All the modules of the proposed system have been thoroughly validated. In brief, the 3D-QCA module, compared to an established commercial software package of the same type, presented very good correlation across several validation metrics, with Pearson's correlation coefficients (R) for the calculated volumes, vFAI, length and minimum lumen diameter of 0.99, 0.99, 0.99 and 0.88, respectively. Moreover, the automatic lumen detection modules for IVUS and OCT presented very high accuracy compared to annotations by medical experts, with Pearson's correlation coefficient reaching 0.94 and 0.99, respectively. In this study, we have presented a user-friendly software for the 3D reconstruction of coronary arterial segments and the accurate hemodynamic

  11. Evaluation of analytical reconstruction with a new gap-filling method in comparison to iterative reconstruction in [11C]-raclopride PET studies

    International Nuclear Information System (INIS)

    Tuna, U.; Johansson, J.; Ruotsalainen, U.

    2014-01-01

    The aim of the study was (1) to evaluate reconstruction strategies with dynamic [11C]-raclopride human positron emission tomography (PET) studies acquired from the ECAT high-resolution research tomograph (HRRT) scanner and (2) to justify the selected gap-filling method for analytical reconstruction with simulated phantom data. A new transradial bicubic interpolation method has been implemented to enable faster analytical 3D-reprojection (3DRP) reconstructions for ECAT HRRT PET scanner data. The transradial bicubic interpolation method was compared to the other gap-filling methods visually and quantitatively using the numerical Shepp-Logan phantom. The performance of the analytical 3DRP reconstruction method with this new gap-filling method was evaluated in comparison with the iterative statistical methods: ordinary Poisson ordered subsets expectation maximization (OPOSEM) and resolution-modeled OPOSEM. The image reconstruction strategies were evaluated using human data at different count statistics and consequently at different noise levels. In the assessments, 14 [11C]-raclopride dynamic PET studies (test-retest studies of 7 healthy subjects) acquired from the HRRT PET scanner were used. Besides visual comparisons of the methods, we performed regional quantitative evaluations over the cerebellum, caudate and putamen structures. We compared the regional time-activity curves (TACs), areas under the TACs and binding potential (BP_ND) values. The results showed that the new gap-filling method preserves the linearity of the 3DRP method. Results with the 3DRP-after-gap-filling method exhibited hardly any dependency on the count statistics (noise levels) in the sinograms, while we observed changes in the quantitative results with the EM-based methods for different noise contamination in the data. With this study, we showed that 3DRP with the transradial bicubic gap-filling method is feasible for the reconstruction of high-resolution PET data with
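
The HRRT has physical gaps between detector heads, so analytical reconstruction needs the missing sinogram columns estimated first. As a much-simplified stand-in for the paper's transradial bicubic method, the sketch below fills dead columns in one sinogram row by 1-D linear interpolation between the nearest valid neighbours (the row data and gap positions are hypothetical):

```python
def fill_gaps(row, gap_cols):
    """Fill dead-detector gaps in one sinogram row by linear
    interpolation between the nearest valid neighbours.
    (Simplified linear stand-in for the transradial bicubic method.)"""
    out = list(row)
    gaps = set(gap_cols)
    valid = [i for i in range(len(row)) if i not in gaps]
    for g in gap_cols:
        left = max(i for i in valid if i < g)    # nearest valid column on the left
        right = min(i for i in valid if i > g)   # nearest valid column on the right
        t = (g - left) / (right - left)
        out[g] = (1 - t) * row[left] + t * row[right]
    return out

row = [10.0, 12.0, 0.0, 0.0, 18.0, 20.0]   # columns 2-3 are a detector gap
filled = fill_gaps(row, [2, 3])
```

The actual method interpolates in two dimensions with cubic polynomials across the radial direction, which better preserves resolution at gap edges; the principle of estimating the gap from surrounding valid bins is the same.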

  12. An open-source, self-explanatory touch screen in routine care. Validity of filling in the Bath measures on Ankylosing Spondylitis Disease Activity Index, Function Index, the Health Assessment Questionnaire and Visual Analogue Scales in comparison with paper versions.

    Science.gov (United States)

    Schefte, David B; Hetland, Merete L

    2010-01-01

    The Danish DANBIO registry has developed open-source software for touch screens in the waiting room. The objective was to assess the validity of outcomes from self-explanatory patient questionnaires on touch screen in comparison with the traditional paper form in routine clinical care. Fifty-two AS patients and 59 RA patients completed Visual Analogue Scales (VASs) for pain, fatigue and global health, and Bath measures on Ankylosing Spondylitis Disease Activity Index (BASDAI) and Function Index (BASFI) (AS patients) or HAQs (RA patients) on touch screen and paper form in random order with a 1-h interval. Intra-class correlation coefficients (ICCs), 95% CIs and smallest detectable differences (SDDs) were calculated. ICC ranged from 0.922 to 0.988 (P health when compared with the traditional paper form. Implementation of touch screens in clinical practice is feasible and patients need no instruction.
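
Agreement between the touch-screen and paper versions is summarized by the intra-class correlation coefficient. As an illustrative sketch (not DANBIO's code; the VAS scores below are hypothetical), a two-way random, single-measure ICC(2,1) for an n-subjects x 2-methods table can be computed from the ANOVA mean squares:

```python
def icc_2_1(scores):
    """Two-way random, single-measure ICC(2,1) for an
    n-subjects x k-raters table (here k = 2: touch screen, paper)."""
    n = len(scores)
    k = len(scores[0])
    grand = sum(sum(r) for r in scores) / (n * k)
    row_means = [sum(r) / k for r in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_tot = sum((x - grand) ** 2 for r in scores for x in r)
    ss_err = ss_tot - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)                 # between-subjects mean square
    ms_c = ss_cols / (k - 1)                 # between-methods mean square
    ms_e = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# hypothetical VAS pain scores: [touch screen, paper] per patient
icc = icc_2_1([[42, 44], [70, 69], [13, 15], [55, 55], [88, 90]])
```

ICC values above roughly 0.9, as reported in the study, indicate that the two administration modes can be used interchangeably.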

  13. Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method

    Science.gov (United States)

    Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing

    2017-01-01

    Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach’s feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method. PMID:28464120
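
The kernel method couples voxels whose anatomical appearance is similar, without segmenting the anatomical image: the fluorescence image is written as x = K·α, where K is a kernel matrix built from anatomical feature vectors, and the forward model is solved for α. The sketch below (an illustrative Gaussian kernel on hypothetical single-intensity features, not the authors' implementation) shows the construction of K:

```python
import math

def gaussian_kernel_matrix(features, sigma=1.0):
    """K[i][j] = exp(-||f_i - f_j||^2 / (2 sigma^2)), where f_i is the
    anatomical feature vector (e.g. local CT/MR intensities) at voxel i.
    Similar-looking voxels get strongly coupled, dissimilar ones do not."""
    n = len(features)
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(features[i], features[j]))
            K[i][j] = math.exp(-d2 / (2 * sigma ** 2))
    return K

# toy anatomical intensities for 4 voxels: two pairs of similar tissue
K = gaussian_kernel_matrix([[0.1], [0.12], [0.9], [0.88]], sigma=0.2)
```

Because K is built directly from the anatomical image, no target segmentation is needed; a wrong (false-positive) structure only weakly perturbs the coupling, which is consistent with the robustness reported above.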

  14. Practical use of visual medial temporal lobe atrophy cut-off scores in Alzheimer's disease: Validation in a large memory clinic population

    International Nuclear Information System (INIS)

    Claus, Jules J.; Holl, Dana C.; Roorda, Jelmen J.; Staekenborg, Salka S.; Schuur, Jacqueline; Koster, Pieter; Tielkes, Caroline E.M.; Scheltens, Philip

    2017-01-01

    To provide age-specific medial temporal lobe atrophy (MTA) cut-off scores for routine clinical practice as a marker for Alzheimer's disease (AD). Patients with AD (n = 832, mean age 81.8 years) were compared with patients with subjective cognitive impairment (n = 333, mean age 71.8 years) in a large single-centre memory clinic. The mean of right and left MTA scores was determined with visual rating (Scheltens scale) using CT (0, no atrophy to 4, severe atrophy). Relationships between age and MTA scores were analysed with regression analysis. For various MTA cut-off scores, decade-specific sensitivity and specificity and area under the curve (AUC) values, computed with receiver operating characteristic curves, were determined. MTA strongly increased with age in both groups to a similar degree. Optimal MTA cut-off values for the age ranges <65, 65-74, 75-84 and ≥85 were: ≥1.0, ≥1.5, ≥2.0 and ≥2.0. Corresponding values of sensitivity and specificity were 83.3% and 86.4%; 73.7% and 84.6%; 73.7% and 76.2%; and 84.0% and 62.5%. From this large unique memory clinic cohort we suggest decade-specific MTA cut-off scores for clinical use. After age 85 years, however, the practical usefulness of the MTA cut-off is limited. (orig.)
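
For a given age stratum, the sensitivity and specificity of a cut-off follow directly from how many AD patients score at or above it and how many controls score below it. An illustrative sketch (the MTA scores below are hypothetical, not the study data):

```python
def sens_spec(scores_ad, scores_control, cutoff):
    """Sensitivity/specificity of an MTA cut-off: a mean MTA score
    >= cutoff is read as 'atrophy consistent with AD'."""
    tp = sum(1 for s in scores_ad if s >= cutoff)       # true positives
    tn = sum(1 for s in scores_control if s < cutoff)   # true negatives
    return tp / len(scores_ad), tn / len(scores_control)

# hypothetical mean MTA scores for a 65-74-year stratum, cut-off >= 1.5
sens, spec = sens_spec([1.5, 2.0, 2.5, 1.0, 3.0],
                       [0.0, 0.5, 1.0, 2.0, 0.5], 1.5)
```

Raising the cut-off with age, as the study proposes, keeps specificity acceptable in older strata where some atrophy is normal; after 85 years even this trade-off breaks down.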

  15. Practical use of visual medial temporal lobe atrophy cut-off scores in Alzheimer's disease: Validation in a large memory clinic population

    Energy Technology Data Exchange (ETDEWEB)

    Claus, Jules J.; Holl, Dana C.; Roorda, Jelmen J. [Tergooi Hospital, Department of Neurology, Blaricum (Netherlands); Staekenborg, Salka S. [Tergooi Hospital, Department of Neurology, Blaricum (Netherlands); VU University Medical Center, Department of Neurology, Alzheimer Center, Amsterdam (Netherlands); Schuur, Jacqueline [Tergooi Hospital, Department of Geriatrics, Blaricum (Netherlands); Koster, Pieter [Tergooi Hospital, Department of Radiology, Blaricum (Netherlands); Tielkes, Caroline E.M. [Tergooi Hospital, Department of Medical Psychology, Blaricum (Netherlands); Scheltens, Philip [VU University Medical Center, Department of Neurology, Alzheimer Center, Amsterdam (Netherlands)

    2017-08-15

    To provide age-specific medial temporal lobe atrophy (MTA) cut-off scores for routine clinical practice as a marker for Alzheimer's disease (AD). Patients with AD (n = 832, mean age 81.8 years) were compared with patients with subjective cognitive impairment (n = 333, mean age 71.8 years) in a large single-centre memory clinic. The mean of right and left MTA scores was determined with visual rating (Scheltens scale) using CT (0, no atrophy to 4, severe atrophy). Relationships between age and MTA scores were analysed with regression analysis. For various MTA cut-off scores, decade-specific sensitivity and specificity and area under the curve (AUC) values, computed with receiver operating characteristic curves, were determined. MTA strongly increased with age in both groups to a similar degree. Optimal MTA cut-off values for the age ranges <65, 65-74, 75-84 and ≥85 were: ≥1.0, ≥1.5, ≥2.0 and ≥2.0. Corresponding values of sensitivity and specificity were 83.3% and 86.4%; 73.7% and 84.6%; 73.7% and 76.2%; and 84.0% and 62.5%. From this large unique memory clinic cohort we suggest decade-specific MTA cut-off scores for clinical use. After age 85 years, however, the practical usefulness of the MTA cut-off is limited. (orig.)

  16. Ptolemy's Britain and Ireland: A New Digital Reconstruction

    Science.gov (United States)

    Abshire, Corey; Durham, Anthony; Gusev, Dmitri A.; Stafeyev, Sergey K.

    2018-05-01

    In this paper, we expand the application of our mathematical methods for translating ancient coordinates from the classical Geography by Claudius Ptolemy into modern coordinates, from India and Arabia to Britain and Ireland, historically important islands on the periphery of the ancient Roman Empire. The methods include triangulation and flocking with subsequent Bayesian correction. The results of our work can be conveniently visualized in modern GIS tools, such as ArcGIS, QGIS, and Google Earth. The enhancements we have made include a novel technique for handling tentatively identified points. We compare the precision of reconstruction achieved for Ptolemy's Britain and Ireland with the precisions that we had computed earlier for his India before the Ganges and three provinces of Arabia. We also provide improved validation and comparison amongst the methods applied. We compare our results with the prior work, while utilizing knowledge from such important ancient sources as the Antonine Itinerary, Tabula Peutingeriana, and the Ravenna Cosmography. The new digital reconstruction of Claudius Ptolemy's Britain and Ireland presented in this paper, along with the accompanying linguistic analysis of ancient toponyms, contributes to a better understanding of our cultural cartographic heritage by making it easier to study the ancient world using popular and accessible GIS programs.

  17. Breast reconstruction - natural tissue

    Science.gov (United States)

    ... flap; TRAM; Latissimus muscle flap with a breast implant; DIEP flap; DIEAP flap; Gluteal free flap; Transverse upper gracilis flap; TUG; Mastectomy - breast reconstruction with natural tissue; Breast cancer - breast reconstruction with natural tissue

  18. Breast reconstruction after mastectomy

    Directory of Open Access Journals (Sweden)

    Daniel eSchmauss

    2016-01-01

    Full Text Available Breast cancer is the leading cause of cancer death in women worldwide. Its surgical approach has become less and less mutilating in the last decades. However, the overall number of breast reconstructions has significantly increased lately. Nowadays, breast reconstruction should be highly individualized, first of all taking into consideration oncological aspects of the tumor, neo-/adjuvant treatment and genetic predisposition, but also its timing (immediate versus delayed breast reconstruction), as well as the patient’s condition and wishes. This article gives an overview of the various possibilities of breast reconstruction, including implant- and expander-based reconstruction, flap-based reconstruction (vascularized autologous tissue), the combination of implant and flap, reconstruction using non-vascularized autologous fat, as well as refinement surgery after breast reconstruction.

  19. Math for visualization, visualizing math

    NARCIS (Netherlands)

    Wijk, van J.J.; Hart, G.; Sarhangi, R.

    2013-01-01

    I present an overview of our work in visualization, and reflect on the role of mathematics therein. First, mathematics can be used as a tool to produce visualizations, which is illustrated with examples from information visualization, flow visualization, and cartography. Second, mathematics itself

  20. Visual art and visual perception

    NARCIS (Netherlands)

    Koenderink, Jan J.

    2015-01-01

    ‘Visual art’ has become a minor cul-de-sac orthogonal to THE ART of the museum directors and billionaire collectors. THE ART is conceptual, instead of visual. Among its cherished items are the tins of artist’s shit (Piero Manzoni, 1961, Merda d’Artista) “worth their

  1. Surface reconstruction, figure-ground modulation, and border-ownership

    NARCIS (Netherlands)

    Jeurissen, D.; Self, M.W.; Roelfsema, P.R.

    2013-01-01

    The Differentiation-Integration for Surface Completion (DISC) model aims to explain the reconstruction of visual surfaces. We find the model a valuable contribution to our understanding of figure-ground organization. We point out that, next to border-ownership, neurons in visual cortex code whether

  2. Surface reconstruction, figure-ground modulation, and border-ownership

    NARCIS (Netherlands)

    Jeurissen, Danique; Self, Matthew W.; Roelfsema, Pieter R.

    2013-01-01

    Abstract The Differentiation-Integration for Surface Completion (DISC) model aims to explain the reconstruction of visual surfaces. We find the model a valuable contribution to our understanding of figure-ground organization. We point out that, next to border-ownership, neurons in visual cortex code

  3. Evaluating the effect of a third-party implementation of resolution recovery on the quality of SPECT bone scan imaging using visual grading regression.

    Science.gov (United States)

    Hay, Peter D; Smith, Julie; O'Connor, Richard A

    2016-02-01

    The aim of this study was to evaluate the benefits to SPECT bone scan image quality of applying resolution recovery (RR) during image reconstruction using software provided by a third-party supplier. Bone SPECT data from 90 clinical studies were reconstructed retrospectively using software supplied independently of the gamma camera manufacturer. The current clinical datasets contain 120×10 s projections and are reconstructed using an iterative method with a Butterworth postfilter. Five further reconstructions were created with the following characteristics: 10 s projections with a Butterworth postfilter (to assess intraobserver variation); 10 s projections with a Gaussian postfilter with and without RR; and 5 s projections with a Gaussian postfilter with and without RR. Two expert observers were asked to rate image quality on a five-point scale relative to our current clinical reconstruction. Datasets were anonymized and presented in random order. The benefits of RR on image scores were evaluated using ordinal logistic regression (visual grading regression). The application of RR during reconstruction increased the probability that both observers would score image quality as better than the current clinical reconstruction, even where the dataset contained half the normal counts. Type of reconstruction and observer were both statistically significant variables in the ordinal logistic regression model. Visual grading regression was found to be a useful method for validating the local introduction of technological developments in nuclear medicine imaging. RR, as implemented by the independent software supplier, improved bone SPECT image quality when applied during image reconstruction. In the majority of clinical cases, acquisition times for bone SPECT intended for the purposes of localization can safely be halved (from 10 s projections to 5 s) when RR is applied.

  4. A multicenter study on the validation of the Burnout Battery: a new visual analog scale to screen job burnout in oncology professionals.

    Science.gov (United States)

    Deng, Yao-Tiao; Liu, Jie; Zhang, Jie; Huang, Bo-Yan; Yi, Ting-Wu; Wang, Yu-Qing; Zheng, Bo; Luo, Di; Du, Pei-Xin; Jiang, Yu

    2017-08-01

    The objective of the study is to develop a novel tool, the Burnout Battery, for briefly screening burnout among oncology professionals in China and to assess its validity. A multicenter study was conducted among doctors and nurses of oncology departments in China from November 2014 to May 2015. The Burnout Battery was administered with the Maslach Burnout Inventory-Human Services Survey (MBI-HSS) and the Doctors' Job Burnout Questionnaire. Of 538 oncology doctors and nurses who completed the full survey, using the MBI-HSS as the standard tool for measuring burnout, 52% had emotional exhaustion, 39.4% had depersonalization, and 59.3% had a low sense of personal accomplishment. Receiver operating characteristic curve analyses showed that the best cut-off of the Burnout Battery was the battery with 3 bars, which yielded the best sensitivity and specificity against all 3 subscales of the MBI-HSS. With this cut-off, nearly half of Chinese oncology professionals (46.8%) had burnout. The Burnout Battery correlated significantly with subscales of the MBI-HSS and the Doctors' Job Burnout Questionnaire. In multiple logistic regression analysis, those who worked more than 60 hours per week and who thought clinical work was the most stressful part of their job were more likely to experience burnout. Chinese oncology professionals exhibit high levels of burnout. The Burnout Battery appears to be a simple and useful tool for screening burnout. Working long hours and perceiving clinical work as the most stressful part of the job were the main factors associated with burnout. Copyright © 2016 John Wiley & Sons, Ltd.
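
Choosing the "best" cut-off from an ROC analysis is commonly done by maximising Youden's J = sensitivity + specificity - 1 over candidate thresholds. An illustrative sketch (the bar counts below are hypothetical, not the study data):

```python
def best_cutoff(scores_pos, scores_neg):
    """Pick the cut-off maximising Youden's J = sensitivity + specificity - 1,
    scanning every observed score as a candidate threshold (score >= cut-off
    counts as a positive screen)."""
    candidates = sorted(set(scores_pos) | set(scores_neg))
    best_c, best_j = None, -1.0
    for c in candidates:
        sens = sum(s >= c for s in scores_pos) / len(scores_pos)
        spec = sum(s < c for s in scores_neg) / len(scores_neg)
        j = sens + spec - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j

# hypothetical Burnout Battery bar counts: burned-out vs not (per MBI-HSS)
cutoff, j = best_cutoff([3, 4, 5, 3, 4], [1, 2, 2, 3, 1])
```

In practice one would repeat this against each MBI-HSS subscale, as the study did, and pick the threshold that performs acceptably against all three.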

  5. Flow visualization

    CERN Document Server

    Merzkirch, Wolfgang

    1974-01-01

    Flow Visualization describes the most widely used methods for visualizing flows. Flow visualization evaluates certain properties of a flow field directly accessible to visual perception. Organized into five chapters, this book first presents the methods that create a visible flow pattern that could be investigated by visual inspection, such as simple dye and density-sensitive visualization methods. It then deals with the application of electron beams and streaming birefringence. Optical methods for compressible flows, hydraulic analogy, and high-speed photography are discussed in other chapters

  6. Orbital floor reconstruction with free flaps after maxillectomy.

    Science.gov (United States)

    Sampathirao, Leela Mohan C S R; Thankappan, Krishnakumar; Duraisamy, Sriprakash; Hedne, Naveen; Sharma, Mohit; Mathew, Jimmy; Iyer, Subramania

    2013-06-01

    Background: The purpose of this study is to evaluate the outcome of orbital floor reconstruction with free flaps after maxillectomy. Methods: This was a retrospective analysis of 34 consecutive patients who underwent maxillectomy with orbital floor removal for malignancies, reconstructed with free flaps. A cross-sectional survey to assess the functional and esthetic outcome was done in 28 patients who were alive and disease-free, with a minimum of 6 months of follow-up. Results: Twenty-six patients had bony reconstruction, and eight had soft tissue reconstruction. Free fibula flap was the commonest flap used (n = 14). Visual acuity was normal in 86%. Eye movements were normal in 92%. Abnormal globe position resulted in nine patients. Esthetic satisfaction was good in 19 patients (68%). Though there was no statistically significant difference in outcome of visual acuity, eye movement, and patient esthetic satisfaction between patients with bony and soft tissue reconstruction, more patients without bony reconstruction had abnormal globe position (p = 0.040). Conclusion: Free tissue transfer has improved the results of orbital floor reconstruction after total maxillectomy, preserving the eye. Good functional and esthetic outcome was achieved. Though our study favors a bony orbital reconstruction, a larger study with adequate power and equal distribution of patients among the groups would be needed to determine this. Free fibula flap remains the commonest choice when a bony reconstruction is contemplated.

  7. Anisotropic Diffusion based Brain MRI Segmentation and 3D Reconstruction

    OpenAIRE

    M. Arfan Jaffar; Sultan Zia; Ghaznafar Latif; AnwarM. Mirza; Irfan Mehmood; Naveed Ejaz; Sung Wook Baik

    2012-01-01

    In the medical field, visualization of the organs is imperative for accurate diagnosis and treatment of any disease. Brain tumor diagnosis and surgery also require high-quality 3D visualization of the brain for the radiologist. Detection and 3D reconstruction of brain tumors from MRI is a computationally time-consuming and error-prone task. The proposed system detects and presents a 3D visualization model of the brain and the tumor inside it, which greatly helps the radiologist to effectively diagnose and ...

  8. Visual field

    Science.gov (United States)

    ... your visual field. How the Test is Performed Confrontation visual field exam. This is a quick and ...

  9. Adaptive semantics visualization

    CERN Document Server

    Nazemi, Kawa

    2016-01-01

    This book introduces a novel approach for intelligent visualizations that adapts the different visual variables and data processing to human behavior and given tasks. A number of new algorithms and methods are introduced to satisfy the human need for information and knowledge and to enable a usable and attractive way of information acquisition. Each method and algorithm is illustrated in a replicable way to enable reproduction of the entire “SemaVis” system or parts of it. The evaluation is scientifically well designed and was performed with sufficient participants to validate the benefits of the methods. Besides the new approaches and algorithms, readers may find a sophisticated literature review in Information Visualization and Visual Analytics, semantics and information extraction, and intelligent and adaptive systems. This book is based on an awarded and distinguished doctoral thesis in computer science.

  10. Visual attention capacity

    DEFF Research Database (Denmark)

    Habekost, Thomas; Starrfelt, Randi

    2009-01-01

    Psychophysical studies have identified two distinct limitations of visual attention capacity: processing speed and apprehension span. Using a simple test, these cognitive factors can be analyzed by Bundesen's Theory of Visual Attention (TVA). The method has strong specificity and sensitivity, and measurements are highly reliable. As the method is theoretically founded, it also has high validity. TVA-based assessment has recently been used to investigate a broad range of neuropsychological and neurological conditions. We present the method, including the experimental paradigm and practical guidelines for patient testing, and review existing TVA-based patient studies organized by lesion anatomy. Lesions in three anatomical regions affect visual capacity: the parietal lobes, frontal cortex and basal ganglia, and extrastriate cortex. Visual capacity thus depends on large, bilaterally distributed anatomical

  11. Region-of-interest volumetric visual hull refinement

    KAUST Repository

    Knoblauch, Daniel

    2010-01-01

    This paper introduces a region-of-interest visual hull refinement technique, based on flexible voxel grids for volumetric visual hull reconstructions. Region-of-interest refinement is based on a multipass process, beginning with a focussed visual hull reconstruction, resulting in a first 3D approximation of the target, followed by a region-of-interest estimation, tasked with identifying features of interest, which in turn are used to locally refine the voxel grid and extract a higher-resolution surface representation for those regions. This approach is illustrated for the reconstruction of avatars for use in tele-immersion environments, where head and hand regions are of higher interest. To allow reproducibility and direct comparison, a publicly available data set for human visual hull reconstruction is used. This paper shows that region-of-interest reconstruction of the target is faster and visually comparable to higher-resolution focused visual hull reconstructions. This approach reduces the amount of data generated through the reconstruction, allowing faster post-processing, such as rendering or networking of the surface voxels. Reconstruction speeds support smooth interactions between the avatar and the virtual environment, while the improved resolution of its facial region and hands creates a higher degree of immersion and potentially impacts the perception of body language, facial expressions and eye-to-eye contact. Copyright © 2010 by the Association for Computing Machinery, Inc.
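
A visual hull keeps exactly those voxels whose projections fall inside every camera silhouette; region-of-interest refinement then re-carves selected regions on a finer grid. The 2D toy below (a sketch under simplifying assumptions, not the paper's system: silhouettes are modelled as point-membership predicates, and the ROI is a hand-picked box) illustrates both passes:

```python
def carve(silhouettes, origin, size, n):
    """Occupancy carving on an n x n grid over a square region:
    keep a cell if its centre lies inside every silhouette."""
    step = size / n
    cells = []
    for i in range(n):
        for j in range(n):
            c = (origin[0] + (i + 0.5) * step, origin[1] + (j + 0.5) * step)
            if all(s(c) for s in silhouettes):
                cells.append((c, step))
    return cells

# 2D stand-in: each 'silhouette' is a membership predicate from one view
disk = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0   # view 1
band = lambda p: abs(p[1]) <= 0.8               # view 2

# pass 1: coarse hull of the whole working volume
coarse = carve([disk, band], (-1.0, -1.0), 2.0, 8)
# pass 2: region-of-interest (e.g. 'head' area) re-carved at 4x resolution
roi = carve([disk, band], (-0.5, 0.3), 0.5, 16)
```

Only the ROI pays the cost of the fine grid, which is why the paper's approach is faster than uniformly refining the whole volume while looking comparable where it matters.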

  12. Forensic Facial Reconstruction: The Final Frontier.

    Science.gov (United States)

    Gupta, Sonia; Gupta, Vineeta; Vij, Hitesh; Vij, Ruchieka; Tyagi, Nutan

    2015-09-01

    Forensic facial reconstruction can be used to identify unknown human remains when other techniques fail. Through this article, we attempt to review the different methods of facial reconstruction reported in the literature. There are several techniques of facial reconstruction, which vary from two-dimensional drawings to three-dimensional clay models. With the advancement in 3D technology, a rapid, efficient and cost-effective computerized 3D forensic facial reconstruction method has been developed, which has brought down the degree of error previously encountered. There are several methods of manual facial reconstruction, but the combination Manchester method has been reported to be the best and most accurate method for the positive recognition of an individual. Recognition allows the involved government agencies to make a list of suspected victims. This list can then be narrowed down and a positive identification may be given by the more conventional methods of forensic medicine. Facial reconstruction allows visual identification by the individual's family and associates to become easy and more definite.

  13. Compressed Sensing, Pseudodictionary-Based, Superresolution Reconstruction

    Directory of Open Access Journals (Sweden)

    Chun-mei Li

    2016-01-01

    Full Text Available The spatial resolution of digital images is the critical factor that affects photogrammetry precision. Single-frame superresolution image reconstruction is a typical underdetermined inverse problem. To solve this type of problem, a compressed-sensing, pseudodictionary-based superresolution reconstruction method is proposed in this study. The proposed method achieves pseudodictionary learning with an available low-resolution image and uses the K-SVD algorithm, which is based on the sparse characteristics of the digital image. Then, the sparse representation coefficient of the low-resolution image is obtained by solving an l0-norm minimization problem, and the sparse coefficient and high-resolution pseudodictionary are used to reconstruct image tiles with high resolution. Finally, single-frame-image superresolution reconstruction is achieved. The proposed method is applied to photogrammetric images, and the experimental results indicate that the proposed method effectively increases image resolution and information content, achieving superresolution reconstruction. The reconstructed results are better than those obtained from traditional interpolation methods in terms of both visual quality and quantitative indicators.
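
The sparse-coding step at the heart of this pipeline greedily expresses a signal with few dictionary atoms. The sketch below shows only that step as plain matching pursuit on a toy unit-norm dictionary (the K-SVD dictionary update and the l0 solver used in the paper are omitted; atoms and signal are hypothetical):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, atoms, n_nonzero=1):
    """Greedy sparse coding: repeatedly pick the (unit-norm) atom with the
    largest correlation to the residual and subtract its contribution."""
    residual = list(signal)
    coeffs = {}
    for _ in range(n_nonzero):
        k = max(range(len(atoms)), key=lambda i: abs(dot(residual, atoms[i])))
        c = dot(residual, atoms[k])
        coeffs[k] = coeffs.get(k, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atoms[k])]
    return coeffs, residual

# toy unit-norm dictionary; the signal is exactly 3 x atom 1
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
coeffs, residual = matching_pursuit([0.0, 3.0, 0.0], atoms)
```

In the superresolution setting, the sparse coefficients found against the low-resolution dictionary are reapplied to the paired high-resolution dictionary to synthesize the high-resolution tile.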

  14. Data visualization

    CERN Document Server

    Azzam, Tarek

    2013-01-01

    Do you communicate data and information to stakeholders? In Part 1, we introduce recent developments in the quantitative and qualitative data visualization field and provide a historical perspective on data visualization, its potential role in evaluation practice, and future directions. Part 2 delivers concrete suggestions for optimally using data visualization in evaluation, as well as suggestions for best practices in data visualization design. It focuses on specific quantitative and qualitative data visualization approaches that include data dashboards, graphic recording, and geographic information systems (GIS). Readers will get a step-by-step process for designing an effective data dashboard system for programs and organizations, and various suggestions to improve their utility.

  15. Reconstructed and analyzed X-ray computed tomography data of investment-cast and additive-manufactured aluminum foam for visualizing ligament failure mechanisms and regions of contact during a compression test

    Directory of Open Access Journals (Sweden)

    Kristoffer E. Matheson

    2018-02-01

    Full Text Available Three stochastic open-cell aluminum foam samples were incrementally compressed and imaged using X-ray Computed Tomography (CT). One of the samples was created using conventional investment casting methods and the other two were replicas of the same foam that were made using laser powder bed fusion. The reconstructed CT data were then examined in Paraview to identify and highlight the types of failure of individual ligaments. The accompanying sets of Paraview state files and STL files highlight the different ligament failure modes incrementally during compression for each foam. Ligament failure was classified as either “Fracture” (red) or “Collapse” (blue). Also, regions where neighboring ligaments that were not originally touching came into contact were colored yellow. For further interpretation and discussion of the data, please refer to Matheson et al. (2017) [1].

  16. Visual Literacy and Visual Thinking.

    Science.gov (United States)

    Hortin, John A.

    It is proposed that visual literacy be defined as the ability to understand (read) and use (write) images and to think and learn in terms of images. This definition includes three basic principles: (1) visuals are a language and thus analogous to verbal language; (2) a visually literate person should be able to understand (read) images and use…

  17. Visual Literacy and Visual Culture.

    Science.gov (United States)

    Messaris, Paul

    Familiarity with specific images or sets of images plays a role in a culture's visual heritage. Two questions can be asked about this type of visual literacy: Is this a type of knowledge that is worth building into the formal educational curriculum of our schools? What are the educational implications of visual literacy? There is a three-part…

  18. Correction of head motion artifacts in SPECT with fully 3-D OS-EM reconstruction

    International Nuclear Information System (INIS)

    Fulton, R.R.

    1998-01-01

    Full text: A method which relies on continuous monitoring of head position has been developed to correct for head motion in SPECT studies of the brain. Head position and orientation are monitored during data acquisition by an inexpensive head tracking system (ADL-1, Shooting Star Technology, Rosedale, British Columbia). Motion correction involves changing the projection geometry to compensate for motion (using data from the head tracker), and reconstructing with a fully 3-D OS-EM algorithm. The reconstruction algorithm can accommodate any number of movements and any projection geometry. A single iteration of 3-D OS-EM using all available projections provides a satisfactory 3-D reconstruction, essentially free of motion artifacts. The method has been validated in studies of the 3-D Hoffman brain phantom. Multiple 360-degree acquisitions, each with the phantom in a different position, were performed on a Trionix triple head camera. Movements were simulated by combining projections from the different acquisitions. Accuracy was assessed by comparison with a motion-free reconstruction, visually and by calculating mean squared error (MSE). Motion correction reduced distortion perceptibly and, depending on the motions applied, improved MSE by up to an order of magnitude. Three-dimensional reconstruction of the 128 x 128 x 128 data set took 2 minutes on a SUN Ultra 1 workstation. This motion correction technique can be retro-fitted to existing SPECT systems and could be incorporated in future SPECT camera designs. It appears to be applicable in PET as well as SPECT, to be able to correct for any head movements, and to have the potential to improve the accuracy of tomographic brain studies under clinical imaging conditions
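
The key idea is that each projection's geometry is re-expressed in the (moving) head frame using the tracker pose recorded at acquisition time, so the reconstruction sees a stationary head. The toy below reduces this to a single rotational degree of freedom (the real correction uses the full 6-DOF pose from the tracker; the angles and yaw values are hypothetical):

```python
def compensate_angles(nominal_angles, head_yaw):
    """Shift each projection's gantry angle by the head yaw recorded by
    the tracker at that instant, so all projections refer to one fixed
    head orientation. (1-DOF stand-in for the full 6-DOF correction.)"""
    return [(a - y) % 360.0 for a, y in zip(nominal_angles, head_yaw)]

# the head rotates 5 degrees midway through a 4-view acquisition
angles = compensate_angles([0.0, 90.0, 180.0, 270.0],
                           [0.0, 0.0, 5.0, 5.0])
```

Because OS-EM works with an arbitrary list of per-projection geometries, the corrected angles (and, in general, full poses) can be fed straight into the reconstruction without rebinning the data.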

  19. Methods of X-ray CT image reconstruction from few projections

    International Nuclear Information System (INIS)

    Wang, H.

    2011-01-01

    To improve the safety (low dose) and the productivity (fast acquisition) of an X-ray CT system, we want to reconstruct a high quality image from a small number of projections. The classical reconstruction algorithms generally fail since the reconstruction procedure is unstable and suffers from artifacts. A new approach based on the recently developed 'Compressed Sensing' (CS) theory assumes that the unknown image is in some sense 'sparse' or 'compressible', and the reconstruction is formulated as a nonlinear optimization problem (TV/l1 minimization) that enhances this sparsity. Using the pixel (or voxel in 3D) as a basis, applying the CS framework in CT usually requires a 'sparsifying' transform, combined with the 'X-ray projector' which acts on the pixel image. In this thesis, we have adapted a 'CT-friendly' radial basis of the Gaussian family, called 'blob', to the CS-CT framework. The blob has better space-frequency localization properties than the pixel, and many operations, such as the X-ray transform, can be evaluated analytically and are highly parallelizable (on a GPU platform). Compared to the classical Kaiser-Bessel blob, the new basis has a multi-scale structure: an image is the sum of dilated and translated radial Mexican hat functions. Typical medical objects are compressible under this basis, so the separate sparse representation system used in ordinary CS algorithms is no longer needed. 2D simulations show that the existing TV and l1 algorithms are more efficient and the reconstructions have better visual quality than the equivalent approach based on the pixel or wavelet basis. The new approach has also been validated on 2D experimental data, where we have observed that in general the number of projections can be reduced to about 50%, without compromising the image quality. (author)
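    The TV-minimization formulation mentioned above can be illustrated with a minimal pixel-basis sketch: plain gradient descent on ||Ax - y||² + λ·TV(x). This is a generic stand-in, not the thesis's blob-basis method; the matrix sizes, step size and regularization weight are assumptions.

```python
import numpy as np

def tv_gradient(x, eps=1e-6):
    """Gradient of a smoothed isotropic total-variation term."""
    gx = np.diff(x, axis=0, append=x[-1:, :])
    gy = np.diff(x, axis=1, append=x[:, -1:])
    norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
    px, py = gx / norm, gy / norm
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return -div

def tv_reconstruct(A, y, shape, lam=0.1, step=1e-3, iters=200):
    """Minimise ||Ax - y||^2 + lam * TV(x) by plain gradient descent.
    A is a dense stand-in for the X-ray projector, acting on the
    flattened image."""
    x = np.zeros(shape)
    for _ in range(iters):
        data_grad = (A.T @ (A @ x.ravel() - y)).reshape(shape)
        x -= step * (data_grad + lam * tv_gradient(x))
    return x

# Toy undersampled problem: 40 'projection' measurements, 64 unknowns.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 64))
x_true = np.zeros((8, 8))
x_true[2:6, 2:6] = 1.0
y = A @ x_true.ravel()
x_rec = tv_reconstruct(A, y, (8, 8), lam=0.01, step=1e-3, iters=300)
```

Real CS-CT solvers use accelerated or proximal algorithms rather than this naive descent, but the objective is the same.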

  20. An automatic virtual patient reconstruction from CT-scans for hepatic surgical planning.

    Science.gov (United States)

    Soler, L; Delingette, H; Malandain, G; Ayache, N; Koehl, C; Clément, J M; Dourthe, O; Marescaux, J

    2000-01-01

    PROBLEM/BACKGROUND: In order to help hepatic surgical planning we perfected automatic 3D reconstruction of patients from conventional CT scans, together with interactive visualization and virtual resection tools. From a conventional abdominal CT scan, we have developed several methods allowing the automatic 3D reconstruction of skin, bones, kidneys, lungs, liver, hepatic lesions, and vessels. These methods are based on deformable modeling or thresholding algorithms followed by the application of mathematical morphological operators. From these anatomical and pathological models, we have developed a new framework for translating anatomical knowledge into geometrical and topological constraints. More precisely, our approach allows us to automatically delineate the hepatic and portal veins, to label the portal vein, and finally to build an anatomical segmentation of the liver based on Couinaud's definition, which is currently used by surgeons all over the world. Finally, we have developed a user-friendly interface for the 3D visualization of anatomical and pathological structures, the accurate evaluation of volumes and distances, and virtual hepatic resection along a user-defined cutting plane. A validation study on a database of 30 patients gives 2 mm precision for liver delineation and less than 1 mm for the delineation of all other anatomical and pathological structures. An in vivo validation performed during surgery also showed that the anatomical segmentation is more precise than the delineation performed by a surgeon based on external landmarks. This surgery planning system has been routinely used by our medical partner, and this has resulted in an improvement of the planning and performance of hepatic surgery procedures. We have developed new tools for hepatic surgical planning allowing better surgery through automatic delineation and visualization of anatomical and pathological structures. These tools represent a first step towards the development of an augmented

  1. Repeatability of visual acuity measurement.

    Science.gov (United States)

    Raasch, T W; Bailey, I L; Bullimore, M A

    1998-05-01

    This study investigates features of visual acuity chart design and acuity test scoring methods which affect the validity and repeatability of visual acuity measurements. Visual acuity was measured using the Sloan and British Standard letter series, and Landolt rings. Identifiability of the different letters as a function of size was estimated and expressed in the form of frequency-of-seeing curves. These functions were then used to simulate acuity measurements with a variety of chart designs and scoring criteria. Systematic relationships exist between chart design parameters and both the acuity score and its repeatability. In particular, an important feature of a chart, which largely determines the repeatability of visual acuity measurement, is the amount of size change attributed to each letter. The methods used to score visual acuity performance also affect repeatability. It is possible to evaluate acuity score validity and repeatability using the statistical principles discussed here.
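    The simulation approach described (frequency-of-seeing curves driving letter-by-letter chart scoring) can be sketched like this. The logistic curve shape, slope, guessing rate and chart layout below are assumed for illustration rather than taken from the paper's empirical estimates.

```python
import numpy as np

def p_correct(letter_size, acuity, slope=8.0, guess=0.1):
    """Assumed frequency-of-seeing curve: probability of naming a
    letter of a given size (logMAR) correctly, for an observer with
    the given acuity.  Logistic shape with a guessing floor."""
    p = 1.0 / (1.0 + np.exp(-slope * (letter_size - acuity)))
    return guess + (1.0 - guess) * p

def simulate_chart_score(acuity, line_sizes, letters_per_line=5, rng=None):
    """Letter-by-letter scoring: each correctly read letter earns an
    equal fraction of the between-line size step."""
    if rng is None:
        rng = np.random.default_rng()
    step = line_sizes[0] - line_sizes[1]       # size change per line
    credit = step / letters_per_line           # size change per letter
    score = line_sizes[0] + step               # score before any letters read
    for size in line_sizes:
        n_correct = rng.binomial(letters_per_line, p_correct(size, acuity))
        score -= credit * n_correct
    return score

# Repeated simulated measurements of one observer (acuity 0.2 logMAR)
# on a chart running from 1.0 to -0.3 logMAR in 0.1 steps.
sizes = np.arange(1.0, -0.35, -0.1)
rng = np.random.default_rng(0)
scores = [simulate_chart_score(0.2, sizes, rng=rng) for _ in range(200)]
```

The spread of `scores` is the kind of repeatability statistic the paper derives for different chart designs and scoring rules.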

  2. Visual exploration of images

    Science.gov (United States)

    Suaste-Gomez, Ernesto; Leybon, Jaime I.; Rodriguez, D.

    1998-07-01

    Visual scanpath recording has been an important tool in neuro-ophthalmic and psychological studies, where it serves to validate findings on pathologies of visual perception, such as perception of color or black/white images, color blindness, etc. This tool has also reached a broad field of applications such as marketing. The scanpath over a specific picture shows the observer's interest in color, shapes, letter size, etc.; even when the picture is among a group of images, this tool has proved helpful for capturing people's interest in a specific advertisement.

  3. WARACS: Wrappers to Automate the Reconstruction of Ancestral Character States1

    Science.gov (United States)

    Gruenstaeudl, Michael

    2016-01-01

    Premise of the study: Reconstructions of ancestral character states are among the most widely used analyses for evaluating the morphological, cytological, or ecological evolution of an organismic lineage. The software application Mesquite remains the most popular application for such reconstructions among plant scientists, even though its support for automating complex analyses is limited. A software tool is needed that automates the reconstruction and visualization of ancestral character states with Mesquite and similar applications. Methods and Results: A set of command line–based Python scripts was developed that (a) communicates standardized input to and output from the software applications Mesquite, BayesTraits, and TreeGraph2; (b) automates the process of ancestral character state reconstruction; and (c) facilitates the visualization of reconstruction results. Conclusions: WARACS provides a simple tool that streamlines the reconstruction and visualization of ancestral character states over a wide array of parameters, including tree distribution, character state, and optimality criterion. PMID:26949580
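    A wrapper in the spirit of WARACS might assemble the argument list and command file for a BayesTraits-style run as below. The binary name, file names and command-file contents are hypothetical placeholders, not WARACS's actual code; a real installation would supply its own paths.

```python
import subprocess
from pathlib import Path

def build_bayestraits_command(binary, tree_file, data_file):
    """BayesTraits-style programs take the tree and data files as
    positional arguments and read analysis commands from stdin."""
    return [str(binary), str(tree_file), str(data_file)]

def write_command_file(path, commands):
    """Write one analysis command per line, terminated by 'Run'."""
    text = "\n".join(list(commands) + ["Run"]) + "\n"
    Path(path).write_text(text)
    return text

def run_reconstruction(binary, tree_file, data_file, command_file):
    """Launch the external program with the command file on stdin
    (requires a real installation; the wiring here is illustrative)."""
    argv = build_bayestraits_command(binary, tree_file, data_file)
    with open(command_file) as fh:
        return subprocess.run(argv, stdin=fh, capture_output=True, text=True)
```

WARACS additionally standardizes the output parsing and hands the reconstructed states to TreeGraph2 for visualization; that part is omitted here.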

  4. Assessment of visual disability using visual evoked potentials.

    Science.gov (United States)

    Jeon, Jihoon; Oh, Seiyul; Kyung, Sungeun

    2012-08-06

    The purpose of this study is to validate the use of visual evoked potentials (VEP) to objectively quantify visual acuity in normal and amblyopic patients, and to determine whether it is possible to predict visual acuity in disability assessments for registering visual pathway lesions. A retrospective chart review was conducted of patients diagnosed with normal vision, unilateral amblyopia, optic neuritis, and visual disability who visited the university medical center for registration from March 2007 to October 2009. The study included 20 normal subjects (20 right eyes: 10 females, 10 males, ages 9-42 years), 18 unilateral amblyopic patients (18 amblyopic eyes, ages 19-36 years), 19 optic neuritis patients (19 eyes, ages 9-71 years), and 10 patients with visual disability having visual pathway lesions. Amplitudes and latencies were analyzed, and correlations with visual acuity (logMAR) were derived from the 20 normal and 18 amblyopic subjects. Correlation of VEP amplitude and visual acuity (logMAR) in the 19 optic neuritis patients confirmed the relationship between visual acuity and amplitude. We calculated the objective visual acuity (logMAR) of 16 eyes from 10 patients to diagnose the presence or absence of visual disability using relations derived from the 20 normal and 18 amblyopic eyes. Linear regression analyses between the amplitude of pattern visual evoked potentials and visual acuity (logMAR) of 38 eyes from normal (right eyes) and amblyopic (amblyopic eyes) subjects were significant [y = -0.072x + 1.22, x: VEP amplitude, y: visual acuity (logMAR)]. There were no significant differences from the visual acuity prediction values obtained by substituting the amplitude values of the 19 eyes with optic neuritis into this function. We calculated the objective visual acuity of 16 eyes of 10 patients to diagnose the presence or absence of visual disability using the relation y = -0.072x + 1.22. This resulted in a prediction reference of visual acuity associated with malingering vs. real
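    The reported regression can be applied directly to convert a measured VEP amplitude into an objective acuity estimate. The sample amplitude values below are made up for illustration; only the linear relation itself comes from the abstract.

```python
import numpy as np

def predicted_logmar(vep_amplitude):
    """Objective acuity estimate from pattern-VEP amplitude using the
    linear relation reported in the study: logMAR = -0.072*A + 1.22."""
    return -0.072 * np.asarray(vep_amplitude, dtype=float) + 1.22

# Larger amplitudes should predict better (lower logMAR) acuity.
amps = np.array([5.0, 10.0, 15.0])   # hypothetical amplitudes
estimates = predicted_logmar(amps)
```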

  5. MO-DE-207A-11: Sparse-View CT Reconstruction Via a Novel Non-Local Means Method

    International Nuclear Information System (INIS)

    Chen, Z; Qi, H; Wu, S; Xu, Y; Zhou, L

    2016-01-01

    Purpose: Sparse-view computed tomography (CT) reconstruction is an effective strategy to reduce the radiation dose delivered to patients. Due to the insufficiency of measurements, traditional non-local means (NLM) based reconstruction methods often lead to over-smoothness in image edges. To address this problem, an adaptive NLM reconstruction method based on rotational invariance (RIANLM) is proposed. Methods: The method consists of four steps: 1) initializing parameters; 2) algebraic reconstruction technique (ART) reconstruction using raw projection data; 3) positivity constraint of the image reconstructed by ART; 4) update of the reconstructed image by RIANLM filtering. In RIANLM, a novel similarity metric that is rotationally invariant is proposed and used to calculate the distance between two patches. In this way, any patch with a similar structure but different orientation to the reference patch receives a relatively large weight, avoiding an over-smoothed image. Moreover, the parameter h in RIANLM, which controls the decay of the weights, is adaptive to avoid over-smoothness, whereas in NLM it remains fixed during the whole reconstruction process. The proposed method, named ART-RIANLM, is validated on the Shepp-Logan phantom and on clinical projection data. Results: In our experiments, the search neighborhood size is set to 15 by 15 and the similarity window to 3 by 3. For the simulated case with a 256 by 256 Shepp-Logan phantom, ART-RIANLM produces a reconstructed image with higher SNR (35.38 dB vs. 24.00 dB) and lower MAE (0.0006 vs. 0.0023) than ART-NLM. Visual inspection demonstrated that the proposed method suppresses artifacts and noise more effectively and preserves image edges better. Similar results were found for the clinical data case. Conclusion: A novel ART-RIANLM method for sparse-view CT reconstruction is presented with superior image quality. Compared to the conventional ART-NLM method, the SNR from ART-RIANLM increases by 47% and the MAE decreases by 74%.
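    The NLM filtering step that RIANLM modifies can be sketched as the conventional single-pixel estimate below, using the 3-by-3 similarity window and 15-by-15 search neighborhood quoted in the abstract. The rotation-invariant metric of RIANLM would replace the plain squared-difference `dist`; that metric is not reproduced here.

```python
import numpy as np

def nlm_pixel(image, i, j, patch=1, search=7, h=0.1):
    """Conventional non-local means estimate for pixel (i, j).
    patch=1 -> 3x3 similarity window; search=7 -> 15x15 neighborhood."""
    H, W = image.shape
    ref = image[i - patch:i + patch + 1, j - patch:j + patch + 1]
    num, den = 0.0, 0.0
    for y in range(max(patch, i - search), min(H - patch, i + search + 1)):
        for x in range(max(patch, j - search), min(W - patch, j + search + 1)):
            cand = image[y - patch:y + patch + 1, x - patch:x + patch + 1]
            dist = np.mean((ref - cand) ** 2)   # RIANLM swaps in a
            w = np.exp(-dist / h ** 2)          # rotation-invariant distance
            num += w * image[y, x]
            den += w
    return num / den

rng = np.random.default_rng(0)
noisy = rng.normal(0.5, 0.1, size=(32, 32))
denoised = nlm_pixel(noisy, 16, 16)
```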

  6. Image Reconstruction and Evaluation: Applications on Micro-Surfaces and Lenna Image Representation

    Directory of Open Access Journals (Sweden)

    Mohammad Mayyas

    2016-09-01

    Full Text Available This article develops algorithms for the characterization and visualization of micro-scale features using a small number of sample points, with the goal of mitigating the shortcomings of measurement techniques that are often destructive or time-consuming. Popular techniques for imaging micro-surfaces include 3D stylus or interferometric profilometry and Scanning Electron Microscopy (SEM), which represent the micro-surface characteristics as 3D topology and greyscale images, respectively. Such images can be highly dense; therefore, traditional image processing techniques might be computationally expensive. We implement the algorithms in several case studies to rapidly examine the microscopic features of a Microelectromechanical System (MEMS) micro-surface, and then validate the results using a popular greyscale image, i.e., the “Lenna” image. The contributions of this research include: first, development of local and global algorithms based on a modified Thin Plate Spline (TPS) model to reconstruct high resolution images of the micro-surface's topography, and its derivatives, from low resolution images; second, development of a bending energy algorithm from our modified TPS model for filtering out image defects; finally, development of a computationally efficient technique, referred to as Windowing, which combines TPS and Linear Sequential Estimation (LSE) methods to enhance the visualization of images. The Windowing technique allows rapid image reconstruction based on a reduction of the inverse problem.
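    The standard thin-plate-spline surface reconstruction from scattered samples, which is the starting point the authors modify, can be sketched in plain NumPy. This is the textbook TPS system (kernel U(r) = r² log r plus an affine part), not the article's modified TPS, bending-energy filter, or Windowing scheme.

```python
import numpy as np

def tps_kernel(r):
    """U(r) = r^2 log r, with U(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def tps_fit(points, values):
    """Solve the standard TPS interpolation system (kernel + affine part)."""
    n = len(points)
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    K = tps_kernel(r)
    P = np.hstack([np.ones((n, 1)), points])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.concatenate([values, np.zeros(3)])
    return np.linalg.solve(A, rhs)

def tps_eval(coef, points, query):
    """Evaluate the fitted surface at arbitrary query locations."""
    n = len(points)
    r = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=2)
    affine = np.hstack([np.ones((len(query), 1)), query]) @ coef[n:]
    return tps_kernel(r) @ coef[:n] + affine

# Fit a sparse sample of a (here affine) height field z = 2x + 3y + 1.
rng = np.random.default_rng(0)
pts = rng.random((20, 2))
vals = 2.0 * pts[:, 0] + 3.0 * pts[:, 1] + 1.0
coef = tps_fit(pts, vals)
```

Evaluating `tps_eval` on a dense grid gives the high-resolution reconstruction from the low-resolution sample set.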

  7. Multi-view 3D scene reconstruction using ant colony optimization techniques

    International Nuclear Information System (INIS)

    Chrysostomou, Dimitrios; Gasteratos, Antonios; Nalpantidis, Lazaros; Sirakoulis, Georgios C

    2012-01-01

    This paper presents a new method performing high-quality 3D object reconstruction of complex shapes from multiple, calibrated photographs of the same scene. The novelty of this research lies in two basic elements, namely: (i) a novel voxel dissimilarity measure, which accommodates the elimination of lighting variations between the models, and (ii) the use of an ant colony approach for further refinement of the final 3D models. The proposed reconstruction procedure employs a volumetric method based on a novel projection test for the production of a visual hull. While the presented algorithm shares certain aspects with the space carving algorithm, it is first enhanced with the lightness-compensating image comparison method and then refined using ant colony optimization. The algorithm is fast, computationally simple and results in accurate representations of the input scenes. In addition, compared to previous publications, the particular nature of the proposed algorithm allows accurate 3D volumetric measurements under demanding lighting conditions, because the voxel dissimilarity measure can cope with unevenly lit scenes. Moreover, the intelligent behavior of the ant colony framework allows the refinement to be formulated as a combinatorial optimization problem, which can then be solved by a colony of cooperating artificial ants, with very promising results. The method is validated on several real datasets, along with qualitative comparisons with other state-of-the-art 3D reconstruction techniques, following the Middlebury benchmark. (paper)

  8. Incomplete-data image reconstructions in industrial x-ray computerized tomography

    International Nuclear Information System (INIS)

    Tam, K.C.; Eberhard, J.W.; Mitchell, K.W.

    1989-01-01

    In earlier works it was concluded that image reconstruction from incomplete data can be achieved through an iterative transform algorithm which utilizes a priori information on the object to compensate for the missing data. The image is transformed back and forth between the object space and the projection space, being corrected by the a priori information on the object in the object space, and by the known projections in the projection space. The a priori information in the object space includes a boundary enclosing the object, and an upper bound and a lower bound on the object density. In this paper we report the results of testing the iterative transform algorithm on experimental data. X-ray sinogram data of the cross section of an F404 high-pressure turbine blade made of Ni-based superalloy were supplied to us by the Aircraft Engine Business Group of General Electric Company at Cincinnati, Ohio. From the data set we simulated two kinds of incomplete data situations, incomplete projections and limited-angle scanning, and applied the iterative transform algorithm to reconstruct the images. The results validated the practical value of the iterative transform algorithm in reconstructing images from incomplete x-ray data, both incomplete projections and limited-angle data. In all the cases tested there were significant improvements in the appearance of the images after iterations. The visual improvements are substantiated in a quantitative manner by the plots of errors in wall thickness measurements, which in general decrease in magnitude with iterations.
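    The back-and-forth correction loop described above can be sketched as a Gerchberg–Papoulis-style iteration. Representing the 'known projections' by known 2-D Fourier samples (via the central-slice theorem) is a simplifying assumption, as are the support mask and frequency mask in the example; the paper works directly with sinogram data.

```python
import numpy as np

def iterative_transform(known_mask, known_coeffs, support, lo, hi, iters=50):
    """Alternate between projection-space and object-space corrections.
    Object-space priors: a support mask plus lower/upper density bounds.
    Projection-space data: known 2-D Fourier samples (central-slice
    stand-in for known projections)."""
    x = np.zeros(known_mask.shape)
    for _ in range(iters):
        X = np.fft.fft2(x)
        X[known_mask] = known_coeffs[known_mask]   # restore known data
        x = np.real(np.fft.ifft2(X))
        x = np.clip(x, lo, hi) * support           # enforce priors
    return x

# Toy object: a uniform square of density 0.8 inside a known boundary.
obj = np.zeros((32, 32))
obj[8:24, 8:24] = 0.8
support = np.zeros((32, 32))
support[8:24, 8:24] = 1.0
F_known = np.fft.fft2(obj)
mask = np.zeros((32, 32), dtype=bool)      # only low frequencies 'measured'
mask[:6, :6] = True
mask[:6, -6:] = True
mask[-6:, :6] = True
mask[-6:, -6:] = True
x_rec = iterative_transform(mask, F_known, support, 0.0, 1.0, iters=40)
```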

  9. THE RESEARCH OF SPECTRAL RECONSTRUCTION FOR LARGE APERTURE STATIC IMAGING SPECTROMETER

    Directory of Open Access Journals (Sweden)

    H. Lv

    2018-04-01

    Full Text Available An imaging spectrometer directly or indirectly obtains the spectral information of ground surface features while acquiring the target image, which gives imaging spectroscopy a prominent advantage in the fine characterization of terrain features and makes it of great significance for geoscience and related disciplines. The interference data obtained by an interferometric imaging spectrometer are intermediate data, which must be reconstructed into high-quality spectral data before they can be used. The main difficulty restricting the application of interferometric imaging spectroscopy is reconstructing the spectrum accurately. Taking the original image acquired by the Large Aperture Static Imaging Spectrometer as input, this experiment selected a pixel identified as crop by manual recognition, then extracted and preprocessed its interferogram to recover the corresponding spectrum of this pixel. The result shows that the reconstructed spectrum forms a small crest near a wavelength of 0.55 μm, with obvious troughs on both sides. The relative reflection intensity of the reconstructed spectrum rises abruptly at wavelengths around 0.7 μm, forming a steep slope. These characteristics are similar to the spectral reflection curve of healthy green plants. It can be concluded that the experimental result is consistent with the visual interpretation, thus validating the effectiveness of the scheme for interferometric imaging spectrum reconstruction proposed in this paper.
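    The core of interferometric spectrum reconstruction is a Fourier transform of the preprocessed interferogram. The sketch below shows that step with an assumed mean removal and Hann apodization; the actual LASIS processing chain (detrending, phase correction, calibration) is considerably more involved.

```python
import numpy as np

def spectrum_from_interferogram(interferogram, apodize=True):
    """Recover a spectrum as the Fourier transform of the mean-removed
    (and optionally Hann-apodized) interferogram -- the core step of
    Fourier-transform spectrometry."""
    ifg = np.asarray(interferogram, dtype=float)
    ifg = ifg - ifg.mean()                 # remove the DC pedestal
    if apodize:
        ifg = ifg * np.hanning(len(ifg))   # taper to limit ringing
    return np.abs(np.fft.rfft(ifg))

# A monochromatic source gives a cosine interferogram, so the recovered
# spectrum should peak at that single fringe frequency.
n = 256
opd = np.arange(n)                               # optical path samples
ifg = 1.0 + np.cos(2 * np.pi * 0.125 * opd)      # fringe frequency 0.125
spec = spectrum_from_interferogram(ifg)
```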

  10. Traffic Visualization

    DEFF Research Database (Denmark)

    Picozzi, Matteo; Verdezoto, Nervo; Pouke, Matti

    2013-01-01

    In this paper, we present a space-time visualization that provides a city's decision-makers with the ability to analyse and uncover important "city events" in an understandable manner for city planning activities. An interactive Web mashup visualization is presented that integrates several visualization techniques to give a rapid overview of traffic data. We illustrate our approach as a case study for traffic visualization systems, using datasets from the city of Oulu, that can be extended to other city planning activities. We also report the feedback of real users (traffic management employees, traffic police...

  11. Visualization Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: Evaluates and improves the operational effectiveness of existing and emerging electronic warfare systems. By analyzing and visualizing simulation results...

  12. Distributed Visualization

    Data.gov (United States)

    National Aeronautics and Space Administration — Distributed Visualization allows anyone, anywhere, to see any simulation, at any time. Development focuses on algorithms, software, data formats, data systems and...

  13. Validade da aferição da acuidade visual realizada pelo professor em escolares de 1ª à 4ª série de primeiro grau de uma escola pública do município de São Paulo, Brasil The validity of the visual acuity screening in school children carried out by the teacher - comparative study of the visual acuity measurement by the teacher and the ophthalmologist from the city of S. Paulo, Brazil

    Directory of Open Access Journals (Sweden)

    Edméa Rita Temporini

    1977-06-01

    carried out by the ophthalmologist. Both make use of the Snellen optometric chart. 1352 first- to fourth-grade children from a primary school in the county of S. Paulo were tested in 1975. Results were concordant in 80.86% of the cases. In 122 cases (9.02%) there was a difference of 2 lines between the results obtained by the teacher and the eye doctor; in 54 cases (3.99%) there was a difference of 3 lines. For greater differences, the number of cases progressively decreased. This occurred for both eyes, reflecting a failure of those children to respond to the test, probably due to their difficulty of interpretation. The application of visual acuity screening by a well-trained teacher is valid as one of the ways of detecting school children who need an eye examination.

  14. Image Reconstruction. Chapter 13

    Energy Technology Data Exchange (ETDEWEB)

    Nuyts, J. [Department of Nuclear Medicine and Medical Imaging Research Center, Katholieke Universiteit Leuven, Leuven (Belgium); Matej, S. [Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA (United States)

    2014-12-15

    This chapter discusses how 2‑D or 3‑D images of tracer distribution can be reconstructed from a series of so-called projection images acquired with a gamma camera or a positron emission tomography (PET) system [13.1]. This is often called an ‘inverse problem’. The reconstruction is the inverse of the acquisition. The reconstruction is called an inverse problem because making software to compute the true tracer distribution from the acquired data turns out to be more difficult than the ‘forward’ direction, i.e. making software to simulate the acquisition. There are basically two approaches to image reconstruction: analytical reconstruction and iterative reconstruction. The analytical approach is based on mathematical inversion, yielding efficient, non-iterative reconstruction algorithms. In the iterative approach, the reconstruction problem is reduced to computing a finite number of image values from a finite number of measurements. That simplification enables the use of iterative instead of mathematical inversion. Iterative inversion tends to require more computer power, but it can cope with more complex (and hopefully more accurate) models of the acquisition process.
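    The iterative approach described above can be illustrated with the classic MLEM update for emission tomography, x ← x / (Aᵀ1) · Aᵀ(y / Ax). The tiny dense system matrix below is a toy stand-in for a real scanner model, and the noiseless data are an idealization.

```python
import numpy as np

def mlem(A, y, iters=100, eps=1e-12):
    """Maximum-likelihood EM for emission tomography.
    A: (n_detectors, n_voxels) system matrix; y: measured counts."""
    x = np.ones(A.shape[1])                 # flat positive start image
    sens = A.sum(axis=0)                    # A^T 1, detection sensitivity
    for _ in range(iters):
        proj = A @ x                        # forward project ('simulate')
        ratio = y / np.maximum(proj, eps)   # compare with measurements
        x = x / np.maximum(sens, eps) * (A.T @ ratio)   # multiplicative update
    return x

# Toy validation on a small noiseless system.
rng = np.random.default_rng(0)
A = rng.random((30, 10))
x_true = rng.random(10) + 0.5
y = A @ x_true
x_est = mlem(A, y, iters=2000)
```

The multiplicative form keeps the image nonnegative at every iteration, one of the practical advantages of iterative over analytical inversion mentioned in the chapter.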

  15. Update on orbital reconstruction.

    Science.gov (United States)

    Chen, Chien-Tzung; Chen, Yu-Ray

    2010-08-01

    Orbital trauma is common and frequently complicated by ocular injuries. The recent literature on orbital fracture is analyzed with emphasis on epidemiological data assessment, surgical timing, method of approach and reconstruction materials. Computed tomographic (CT) scan has become a routine evaluation tool for orbital trauma, and mobile CT can be applied intraoperatively if necessary. Concomitant serious ocular injury should be carefully evaluated preoperatively. Patients presenting with nonresolving oculocardiac reflex, 'white-eyed' blowout fracture, or diplopia with a positive forced duction test and CT evidence of orbital tissue entrapment require early surgical repair. Otherwise, enophthalmos can be corrected by late surgery with a similar outcome to early surgery. The use of an endoscope-assisted approach for orbital reconstruction continues to grow, offering an alternative method. Advances in alloplastic materials have improved surgical outcome and shortened operating time. In this review of modern orbital reconstruction, several controversial issues such as surgical indication, surgical timing, method of approach and choice of reconstruction material are discussed. Preoperative fine-cut CT image and thorough ophthalmologic examination are key elements to determine surgical indications. The choice of surgical approach and reconstruction materials much depends on the surgeon's experience and the reconstruction area. Prefabricated alloplastic implants together with image software and stereolithographic models are significant advances that help to more accurately reconstruct the traumatized orbit. The recent evolution of orbit reconstruction improves functional and aesthetic results and minimizes surgical complications.

  16. Low dose reconstruction algorithm for differential phase contrast imaging.

    Science.gov (United States)

    Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Stampanoni, Marco

    2011-01-01

    Differential phase contrast imaging computed tomography (DPCI-CT) is a novel x-ray inspection method that reconstructs the distribution of the refractive index, rather than the attenuation coefficient, in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT that benefits from the recently developed compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of differential phase contrast imaging into a linear partial-derivative matrix. In this way the compressed sensing reconstruction problem of DPCI can be transformed into a solved problem in transmission CT. Our algorithm has the potential to reconstruct the refractive index distribution of the sample from highly undersampled projection data, and thus can significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.
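    The linearization described, a derivative operator composed with an ordinary projection operator, can be sketched as follows. The dense matrices and their sizes are toy assumptions; a real implementation would use sparse operators and a genuine projector.

```python
import numpy as np

def forward_difference_matrix(m):
    """First-order derivative operator (D p)[i] = p[i+1] - p[i],
    used to linearize the differential phase projection."""
    D = np.zeros((m - 1, m))
    idx = np.arange(m - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    return D

# The DPCI system matrix is the ordinary projector R composed with D:
# measurements ~ D @ (R @ x).  R below is a random stand-in projector.
rng = np.random.default_rng(0)
R = rng.random((16, 25))          # toy projection matrix (16 bins, 25 voxels)
D = forward_difference_matrix(16)
A = D @ R                         # differential projection operator
```

With `A` in hand, the problem has the same linear form `A x = y` as transmission CT, so transmission-CT compressed-sensing solvers apply directly, which is the transformation the abstract describes.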

  17. An Intelligent Actuator Fault Reconstruction Scheme for Robotic Manipulators.

    Science.gov (United States)

    Xiao, Bing; Yin, Shen

    2018-02-01

    This paper investigates the difficult problem of reconstructing actuator faults for robotic manipulators. An intelligent approach with a fast reconstruction property is developed, achieved by using an observer technique. The scheme is capable of precisely reconstructing the actual actuator fault. It is shown by Lyapunov stability analysis that the reconstruction error converges to zero in finite time. Precise and fast reconstruction performance is thus provided for the actuator fault. The most important feature of the scheme is that it does not depend on the control law, the dynamic model of the actuator, the faults' type, or their time profile. This reconstruction performance and the capability of the proposed approach are further validated by simulation and experimental results.

  18. Permutationally invariant state reconstruction

    DEFF Research Database (Denmark)

    Moroder, Tobias; Hyllus, Philipp; Tóth, Géza

    2012-01-01

    Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a nonlinear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, maximum likelihood. The reconstruction relies on an optimization formulation that has clear advantages regarding speed, control and accuracy in comparison to commonly employed numerical routines. First prototype implementations easily allow reconstruction of a state of 20 qubits in a few minutes on a standard computer.

  19. MR image reconstruction via guided filter.

    Science.gov (United States)

    Huang, Heyan; Yang, Hang; Wang, Kang

    2018-04-01

    Magnetic resonance imaging (MRI) reconstruction from the smallest possible set of Fourier samples has been a difficult problem in the medical imaging field. In this paper, we present a new approach to an efficient MRI recovery algorithm based on a guided filter. The guided filter is an edge-preserving smoothing operator and behaves better near edges than the bilateral filter. Our reconstruction method consists of two steps. First, we propose two cost functions that can be computed efficiently, yielding two different images. Second, the guided filter is applied to these two images for efficient edge-preserving filtering: one image serves as the guidance image, and the other as the input image to be filtered. By introducing the guided filter, our reconstruction algorithm recovers more detail. We compare our reconstruction algorithm with some competitive MRI reconstruction techniques in terms of PSNR and visual quality. Simulation results are given to show the performance of our new method.
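    The guided filter itself has a compact closed form: a local linear model q = a·I + b is fitted per window (He et al.), and the output averages the coefficients. The sketch below implements that generic filter in NumPy with an integral-image box mean; it is the standard filter, not the paper's two-step reconstruction, and the window radius and regularizer eps are assumed values.

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1)x(2r+1) window with edge normalization,
    computed via an integral image."""
    H, W = a.shape
    S = np.zeros((H + 1, W + 1))
    S[1:, 1:] = np.cumsum(np.cumsum(a, axis=0), axis=1)
    y0 = np.clip(np.arange(H) - r, 0, H)[:, None]
    y1 = np.clip(np.arange(H) + r + 1, 0, H)[:, None]
    x0 = np.clip(np.arange(W) - r, 0, W)[None, :]
    x1 = np.clip(np.arange(W) + r + 1, 0, W)[None, :]
    sums = S[y1, x1] - S[y0, x1] - S[y1, x0] + S[y0, x0]
    counts = (y1 - y0) * (x1 - x0)
    return sums / counts

def guided_filter(I, p, r=4, eps=1e-4):
    """Guided filter: per-window linear model q = a*I + b, with the
    guidance image I and input image p."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mean_I ** 2
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)        # edge regions: a ~ 1 (edges kept)
    b = mean_p - a * mean_I           # flat regions: a ~ 0, b ~ local mean
    return box_mean(a, r) * I + box_mean(b, r)
```

In the paper's setting, one of the two intermediate reconstructions plays the role of `I` and the other of `p`.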

  20. Distance weighting for improved tomographic reconstructions

    International Nuclear Information System (INIS)

    Koeppe, R.A.; Holden, J.E.

    1984-01-01

    An improved method for the reconstruction of emission computed axial tomography images has been developed. The method is a modification of filtered back-projection in which the back-projected values are weighted to reflect the loss of information with distance from the camera that is inherent in gamma camera imaging. This information loss is a result of loss of spatial resolution with distance, attenuation, and scatter. The weighting scheme can best be described by considering the contributions of any two opposing views to the reconstructed image pixels. The weight applied to the projections of one view is set equal to the relative amount of the original activity that was initially received in that projection, assuming a uniform attenuating medium. This yields a weighting value which is a function of distance into the image, with a value of one for pixels 'near the camera', a value of 0.5 at the image center, and a value of zero on the opposite side. Tomographic reconstructions produced with this method show improved spatial resolution when compared to conventional 360° reconstructions. The improvement is in the tangential direction, where simulations have indicated a FWHM improvement of 1 to 1.5 millimeters. The resolution in the radial direction is essentially the same for both methods. Visual inspection of the reconstructed images shows improved resolution and contrast.
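    One natural reading of the described weighting (the relative fraction of activity reaching each of two opposing views under uniform attenuation) is sketched below. The exact functional form and the attenuation coefficient are assumptions, since the abstract does not give them; the two opposing weights sum to one by construction, approaching 1 near the camera and 0 on the far side.

```python
import numpy as np

def opposing_view_weights(depth, total_path, mu=0.15):
    """Weight for one of two opposing views at a given depth into the
    object: the relative fraction of emitted activity reaching that
    view under uniform attenuation with coefficient mu (assumed here,
    e.g. ~0.15/cm for Tc-99m in water)."""
    w_near = np.exp(-mu * depth)
    w_far = np.exp(-mu * (total_path - depth))
    return w_near / (w_near + w_far)

D = 30.0                            # assumed path length through object (cm)
depths = np.linspace(0.0, D, 7)
w = opposing_view_weights(depths, D)
```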

  1. Visual Impairment

    Science.gov (United States)


  2. Visual attention

    NARCIS (Netherlands)

    Evans, K.K.; Horowitz, T.S.; Howe, P.; Pedersini, R.; Reijnen, E.; Pinto, Y.; Wolfe, J.M.

    2011-01-01

    A typical visual scene we encounter in everyday life is complex and filled with a huge amount of perceptual information. The term 'visual attention' describes a set of mechanisms that limit some processing to a subset of incoming stimuli. Attentional mechanisms shape what we see and what we can act

  3. Visual Education

    DEFF Research Database (Denmark)

    Buhl, Mie; Flensborg, Ingelise

    2010-01-01

    The intrinsic breadth of various types of images creates new possibilities and challenges for visual education. The digital media have moved the boundaries between images and other kinds of modalities (e.g. writing, speech and sound) and have augmented the possibilities for integrating … to emerge in the interlocutory space of a global visual repertoire and diverse local interpretations. The two perspectives represent challenges for future visual education, which require visual competences, not only within the arts but also within the subjects of natural sciences, social sciences, languages...

  4. Visual Inertial Navigation and Calibration

    OpenAIRE

    Skoglund, Martin A.

    2011-01-01

    Processing and interpretation of visual content is essential to many systems and applications. This requires knowledge of how the content is sensed and also what is sensed. Such knowledge is captured in models which, depending on the application, can be very advanced or simple. An application example is scene reconstruction using a camera; if a suitable model of the camera is known, then a model of the scene can be estimated from images acquired at different, unknown, locations, yet, the qual...

  5. Prevalence of Body Dysmorphic Disorder Among Patients Seeking Breast Reconstruction.

    Science.gov (United States)

    Metcalfe, Drew B; Duggal, Claire S; Gabriel, Allen; Nahabedian, Maurice Y; Carlson, Grant W; Losken, Albert

    2014-07-01

    Body dysmorphic disorder (BDD) is characterized by a preoccupation with a slight or imagined defect in physical appearance. It has significant implications for patients who desire breast reconstruction, because patient satisfaction with the aesthetic outcome is a substantial contributor to the success of the procedure. The authors estimated the prevalence of BDD among women seeking breast reconstruction by surveying patients with the previously validated Dysmorphic Concerns Questionnaire (DCQ). One hundred eighty-eight women who presented for immediate or delayed breast reconstruction completed the DCQ anonymously, during initial consultation with a plastic surgeon. Two groups of respondents were identified: those who desired immediate reconstruction and those who planned to undergo delayed reconstruction. The prevalence of BDD among breast reconstruction patients was compared between the 2 groups, and the overall prevalence was compared with published rates for the general public. Body dysmorphic disorder was significantly more prevalent in breast reconstruction patients than in the general population (17% vs 2%; P < .001). It also was much more common among patients who planned to undergo delayed (vs immediate) reconstruction (34% vs 13%; P = .004). Relative to the general public, significantly more women who sought breast reconstruction were diagnosed as having BDD. Awareness of the potential for BDD will enable clinicians to better understand their patients' perspectives and discuss realistic expectations at the initial consultation. Future studies are warranted to examine the implications of BDD on patient satisfaction with reconstructive surgery. Level of Evidence: 3. © 2014 The American Society for Aesthetic Plastic Surgery, Inc.

  6. A Visual-Aided Inertial Navigation and Mapping System

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-05-01

    Full Text Available State estimation is a fundamental necessity for any application involving autonomous robots. This paper describes a visual-aided inertial navigation and mapping system for application to autonomous robots. The system, which relies on Kalman filtering, is designed to fuse the measurements obtained from a monocular camera, an inertial measurement unit (IMU) and a position sensor (GPS). The estimated state consists of the full state of the vehicle: the position, orientation, their first derivatives and the parameter errors of the inertial sensors (i.e., the bias of gyroscopes and accelerometers). The system also provides the spatial locations of the visual features observed by the camera. The proposed scheme was designed by considering the limited resources commonly available in small mobile robots, while it is intended to be applied to cluttered environments in order to perform fully vision-based navigation in periods where the position sensor is not available. Moreover, the estimated map of visual features would be suitable for multiple tasks: (i) terrain analysis; (ii) three-dimensional (3D) scene reconstruction; (iii) localization, detection or perception of obstacles and generating trajectories to navigate around these obstacles; and (iv) autonomous exploration. In this work, simulations and experiments with real data are presented in order to validate and demonstrate the performance of the proposal.
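The Kalman-filter fusion at the core of such a system can be illustrated with a deliberately minimal sketch: a linear 1-D constant-velocity filter that smooths noisy position fixes (a stand-in for the GPS channel). The full system described above estimates a much larger nonlinear state (pose, sensor biases, feature map); everything below, including the noise parameters, is an illustrative assumption.

```python
import numpy as np

def kalman_1d(zs, dt=1.0, q=1e-3, r=0.5):
    """Minimal 1-D constant-velocity Kalman filter: fuses noisy position
    measurements into a smoothed [position, velocity] state estimate."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # we observe position only
    Q = q * np.eye(2)                          # process-noise covariance
    R = np.array([[r]])                        # measurement-noise covariance
    x = np.zeros(2)                            # initial [pos, vel]
    P = np.eye(2)                              # initial state covariance
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = z - H @ x                          # innovation
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)

# noisy position fixes of an object moving at unit velocity
rng = np.random.default_rng(0)
true_pos = np.arange(50, dtype=float)
estimates = kalman_1d(true_pos + rng.normal(0.0, 0.5, 50))
```

The same predict/update cycle generalizes to the vehicle case by enlarging the state vector and replacing the linear model with the linearized (extended) form.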

  7. Progress in reconstruction of orbital wall after fracture

    Directory of Open Access Journals (Sweden)

    Lu-Lu Xu

    2018-04-01

    Full Text Available At present, orbital wall fractures are a very common facial trauma. The orbital contents are often incarcerated in the fracture gaps, changing the position of the eye within the orbit; this can cause lifelong diplopia and enophthalmos, which greatly affect visual acuity and facial appearance. The purpose of orbital fracture repair is to reconstruct the orbital wall and repair the defect so as to correct the eye position, avoid enophthalmos and restore visual function. This review provides a comprehensive overview of orbital fracture reconstruction.

  8. Surface reconstruction, figure-ground modulation, and border-ownership.

    Science.gov (United States)

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2013-01-01

    The Differentiation-Integration for Surface Completion (DISC) model aims to explain the reconstruction of visual surfaces. We find the model a valuable contribution to our understanding of figure-ground organization. We point out that, next to border-ownership, neurons in visual cortex code whether surface elements belong to a figure or the background and that this is influenced by attention. We furthermore suggest that there must be strong links between object recognition and figure-ground assignment in order to resolve the status of interior contours. Incorporation of these factors in neurocomputational models will further improve our understanding of surface reconstruction, figure-ground organization, and border-ownership.

  9. Hybrid spectral CT reconstruction.

    Directory of Open Access Journals (Sweden)

    Darin P Clark

    Full Text Available Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with

  10. Hybrid spectral CT reconstruction

    Science.gov (United States)

    Clark, Darin P.

    2017-01-01

    Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with a spectral

  11. Visual cognition

    Science.gov (United States)

    Cavanagh, Patrick

    2011-01-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label “visual cognition” is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated part, of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. PMID:21329719

  12. Visual cognition.

    Science.gov (United States)

    Cavanagh, Patrick

    2011-07-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label "visual cognition" is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated part, of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Auditory and visual capture during focused visual attention

    OpenAIRE

    Koelewijn, T.; Bronkhorst, A.W.; Theeuwes, J.

    2009-01-01

    It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued. In this study, th...

  14. Spiral scan long object reconstruction through PI line reconstruction

    International Nuclear Information System (INIS)

    Tam, K C; Hu, J; Sourbelle, K

    2004-01-01

    The response of a point object in a cone beam (CB) spiral scan is analysed. Based on the result, a reconstruction algorithm for long object imaging in spiral scan cone beam CT is developed. A region-of-interest (ROI) of the long object is scanned with a detector smaller than the ROI, and a portion of it can be reconstructed without contamination from overlaying materials. The top and bottom surfaces of the ROI are defined by two sets of PI lines near the two ends of the spiral path. With this novel definition of the top and bottom ROI surfaces and through the use of projective geometry, it is straightforward to partition the cone beam image into regions corresponding to projections of the ROI, the overlaying objects or both. This also simplifies computation at source positions near the spiral ends, and makes it possible to reduce radiation exposure near the spiral ends substantially through simple hardware collimation. Simulation results to validate the algorithm are presented.

  15. Learning Science Through Visualization

    Science.gov (United States)

    Chaudhury, S. Raj

    2005-01-01

    In the context of an introductory physical science course for non-science majors, I have been trying to understand how scientific visualizations of natural phenomena can constructively impact student learning. I have also necessarily been concerned with the instructional and assessment approaches that need to be considered when focusing on learning science through visually rich information sources. The overall project can be broken down into three distinct segments: (i) comparing students' abilities to demonstrate proportional reasoning competency on visual and verbal tasks; (ii) decoding and deconstructing visualizations of an object falling under gravity; and (iii) the role of directed instruction to elicit alternate, valid scientific visualizations of the structure of the solar system. Evidence of student learning was collected in multiple forms for this project: quantitative analysis of student performance on written, graded assessments (tests and quizzes) and qualitative analysis of videos of student 'think aloud' sessions. The results indicate that there are significant barriers for non-science majors to succeed in mastering the content of science courses, but with informed approaches to instruction and assessment, these barriers can be overcome.

  16. Reliability and validity of a modified isometric dynamometer in the assessment of muscular performance in individuals with anterior cruciate ligament reconstruction

    Directory of Open Access Journals (Sweden)

    Rodrigo Antunes de Vasconcelos

    2009-06-01

    Full Text Available OBJECTIVE: To analyze the reliability and validity of a modified isometric dynamometer (MID) in the assessment of deficits in the muscular performance of the knee extensors and flexors in normal individuals and in individuals with ACL reconstruction. METHODS: Sixty male volunteers were invited to participate in the study, divided into three groups of 20 individuals: a control group (GC), a group with ACL reconstruction using the patellar tendon (GTP) and a group with ACL reconstruction using the flexor tendons (GTF). All individuals performed isometric tests of the knee extensors and flexors on the MID; the muscle strength deficits collected were subsequently compared with tests performed on the Biodex System 3 operating in isometric and isokinetic modes at velocities of 60°/s and 180°/s. Intraclass correlation coefficients (ICC) were calculated to evaluate the reliability of the MID; specificity, sensitivity and the Kappa agreement coefficient were calculated to evaluate the validity of the MID in detecting muscle deficits; and intra-group and inter-group comparisons across the four strength tests were performed using ANOVA. RESULTS: The MID showed excellent test-retest reliability and validity in the assessment of the muscular performance of the knee extensors and flexors. In the inter-group comparison, the GTP group showed significantly greater extensor deficits than the GC and GTF groups. CONCLUSION: Isometric dynamometers attached to mechanotherapy equipment can be an alternative for collecting data on deficits in the muscular performance of the knee extensors and flexors in individuals with ACL reconstruction.
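The intraclass correlation used for the reliability analysis can be computed directly from a subjects × raters (or test × retest) matrix. The sketch below implements the generic two-way consistency form ICC(3,1); the abstract does not state which ICC variant the authors used, so treat the choice of formula here as an assumption.

```python
import numpy as np

def icc_3_1(ratings):
    """Two-way mixed, consistency, single-measures ICC(3,1) from an
    n_subjects x k_raters matrix:
        ICC = (MS_rows - MS_err) / (MS_rows + (k - 1) * MS_err)
    where MS_rows is the between-subjects mean square and MS_err the
    residual mean square of a two-way ANOVA without replication."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    mean_r = ratings.mean(axis=1, keepdims=True)   # per-subject means
    mean_c = ratings.mean(axis=0, keepdims=True)   # per-rater means
    grand = ratings.mean()
    ss_rows = k * ((mean_r - grand) ** 2).sum()
    ss_err = ((ratings - mean_r - mean_c + grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

A second measurement that differs from the first only by a constant offset is perfectly consistent, so ICC(3,1) returns 1 for such data.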

  17. Overview of image reconstruction

    International Nuclear Information System (INIS)

    Marr, R.B.

    1980-04-01

    Image reconstruction (or computerized tomography, etc.) is any process whereby a function, f, on R^n is estimated from empirical data pertaining to its integrals, ∫f(x) dx, for some collection of hyperplanes of dimension k < n. The paper begins with background information on how image reconstruction problems have arisen in practice, and describes some of the application areas of past or current interest; these include radioastronomy, optics, radiology and nuclear medicine, electron microscopy, acoustical imaging, geophysical tomography, nondestructive testing, and NMR zeugmatography. Then the various reconstruction algorithms are discussed in five classes: summation, or simple back-projection; convolution, or filtered back-projection; Fourier and other functional transforms; orthogonal function series expansion; and iterative methods. Certain more technical mathematical aspects of image reconstruction are considered from the standpoint of uniqueness, consistency, and stability of solution. The paper concludes by presenting certain open problems. 73 references.
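The first algorithm class listed above (summation, or simple back-projection) can be demonstrated in a few lines: project an image along rays, then smear each 1-D projection back across the image and sum. The two-orthogonal-view setup below is a deliberate simplification for illustration; real tomography uses many angles and, for the convolution class, a ramp filter before back-projection.

```python
import numpy as np

def project(img, vertical_rays):
    """Parallel-beam projection: sum along rows (vertical rays) or columns."""
    return img.sum(axis=0) if vertical_rays else img.sum(axis=1)

def back_project(proj, n, vertical_rays):
    """Summation back-projection: smear a 1-D projection back across an
    n x n image along its ray direction."""
    if vertical_rays:
        return np.tile(proj, (n, 1))
    return np.tile(proj[:, None], (1, n))

# two-view summation reconstruction of a single point object
n = 9
img = np.zeros((n, n))
img[4, 4] = 1.0
recon = back_project(project(img, True), n, True) + \
        back_project(project(img, False), n, False)
# the point is recovered at the correct location, but with the
# characteristic star/plateau artefact of simple back-projection
```

The residual streaks along the row and column through the point are exactly the blur that the convolution (filtered back-projection) class removes.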

  18. The evolving breast reconstruction

    DEFF Research Database (Denmark)

    Thomsen, Jørn Bo; Gunnarsson, Gudjon Leifur

    2014-01-01

    The aim of this editorial is to give an update on the use of the propeller thoracodorsal artery perforator flap (TAP/TDAP-flap) within the field of breast reconstruction. The TAP-flap can be dissected by a combined use of a monopolar cautery and a scalpel. Microsurgical instruments are generally not needed. The propeller TAP-flap can be designed in different ways, three of which have been published: (I) an oblique upwards design; (II) a horizontal design; (III) an oblique downward design. The latissimus dorsi-flap is a good and reliable option for breast reconstruction, but has been criticized … for oncoplastic and reconstructive breast surgery and will certainly become an invaluable addition to breast reconstructive methods…

  19. Forging Provincial Reconstruction Teams

    National Research Council Canada - National Science Library

    Honore, Russel L; Boslego, David V

    2007-01-01

    The Provincial Reconstruction Team (PRT) training mission completed by First U.S. Army in April 2006 was a joint Service effort to meet a requirement from the combatant commander to support goals in Afghanistan...

  20. Breast Reconstruction with Implants

    Science.gov (United States)

    ... your surgical options and discuss the advantages and disadvantages of implant-based reconstruction, and may show you ...

  1. Visual cognition

    Energy Technology Data Exchange (ETDEWEB)

    Pinker, S.

    1985-01-01

    This book consists of essays covering issues in visual cognition presenting experimental techniques from cognitive psychology, methods of modeling cognitive processes on computers from artificial intelligence, and methods of studying brain organization from neuropsychology. Topics considered include: parts of recognition; visual routines; upward direction; mental rotation, and discrimination of left and right turns in maps; individual differences in mental imagery, computational analysis and the neurological basis of mental imagery: componental analysis.

  2. Wind reconstruction algorithm for Viking Lander 1

    Science.gov (United States)

    Kynkäänniemi, Tuomas; Kemppinen, Osku; Harri, Ari-Matti; Schmidt, Walter

    2017-06-01

    The wind measurement sensors of Viking Lander 1 (VL1) were only fully operational for the first 45 sols of the mission. We have developed an algorithm for reconstructing the wind measurement data after the wind measurement sensor failures. The algorithm for wind reconstruction enables the processing of wind data during the complete VL1 mission. The heater element of the quadrant sensor, which provided auxiliary measurement for wind direction, failed during the 45th sol of the VL1 mission. Additionally, one of the wind sensors of VL1 broke down during sol 378. Regardless of the failures, it was still possible to reconstruct the wind measurement data, because the failed components of the sensors did not prevent the determination of the wind direction and speed, as some of the components of the wind measurement setup remained intact for the complete mission. This article concentrates on presenting the wind reconstruction algorithm and methods for validating the operation of the algorithm. The algorithm enables the reconstruction of wind measurements for the complete VL1 mission. The amount of available sols is extended from 350 to 2245 sols.

  3. Wind reconstruction algorithm for Viking Lander 1

    Directory of Open Access Journals (Sweden)

    T. Kynkäänniemi

    2017-06-01

    Full Text Available The wind measurement sensors of Viking Lander 1 (VL1) were only fully operational for the first 45 sols of the mission. We have developed an algorithm for reconstructing the wind measurement data after the wind measurement sensor failures. The algorithm for wind reconstruction enables the processing of wind data during the complete VL1 mission. The heater element of the quadrant sensor, which provided auxiliary measurement for wind direction, failed during the 45th sol of the VL1 mission. Additionally, one of the wind sensors of VL1 broke down during sol 378. Regardless of the failures, it was still possible to reconstruct the wind measurement data, because the failed components of the sensors did not prevent the determination of the wind direction and speed, as some of the components of the wind measurement setup remained intact for the complete mission. This article concentrates on presenting the wind reconstruction algorithm and methods for validating the operation of the algorithm. The algorithm enables the reconstruction of wind measurements for the complete VL1 mission. The amount of available sols is extended from 350 to 2245 sols.

  4. Baryon Acoustic Oscillations reconstruction with pixels

    Energy Technology Data Exchange (ETDEWEB)

    Obuljen, Andrej [SISSA—International School for Advanced Studies, Via Bonomea 265, 34136 Trieste (Italy); Villaescusa-Navarro, Francisco [Center for Computational Astrophysics, 160 5th Ave, New York, NY, 10010 (United States); Castorina, Emanuele [Berkeley Center for Cosmological Physics, University of California, Berkeley, CA 94720 (United States); Viel, Matteo, E-mail: aobuljen@sissa.it, E-mail: fvillaescusa@simonsfoundation.org, E-mail: ecastorina@berkeley.edu, E-mail: viel@oats.inaf.it [INAF, Osservatorio Astronomico di Trieste, via Tiepolo 11, I-34131 Trieste (Italy)

    2017-09-01

    Gravitational non-linear evolution induces a shift in the position of the baryon acoustic oscillations (BAO) peak, together with a damping and broadening of its shape, which bias and degrade the accuracy with which the position of the peak can be determined. BAO reconstruction is a technique developed to undo part of the effect of non-linearities. We present and analyse a reconstruction method that consists of displacing pixels instead of galaxies and whose implementation is easier than that of the standard reconstruction method. We show that this method is equivalent to the standard reconstruction technique in the limit where the number of pixels becomes very large. This method is particularly useful in surveys where individual galaxies are not resolved, as in 21 cm intensity mapping observations. We validate this method by reconstructing mock pixelated maps, which we build from the distribution of matter and halos in real- and redshift-space from a large set of numerical simulations. We find that this method is able to decrease the uncertainty in the BAO peak position by 30-50% over the typical angular resolution scales of 21 cm intensity mapping experiments.
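The displacement field at the heart of such reconstruction can be illustrated with a minimal Zel'dovich-approximation sketch in one dimension: from the continuity equation, the Fourier-space displacement is Ψ_k = i k δ_k / k². This is a standard textbook form used here for illustration only; the paper's actual pipeline smooths the density field and works in three dimensions.

```python
import numpy as np

def zeldovich_displacement(delta, boxsize):
    """Zel'dovich displacement field Psi from a 1-D overdensity field,
    Psi_k = i k delta_k / k^2, computed with FFTs. The k = 0 (mean)
    mode carries no displacement and is set to zero."""
    n = delta.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)  # wavenumbers
    delta_k = np.fft.fft(delta)
    psi_k = np.zeros_like(delta_k)
    nz = k != 0
    psi_k[nz] = 1j * k[nz] * delta_k[nz] / k[nz] ** 2
    return np.real(np.fft.ifft(psi_k))
```

Moving each pixel (or, in the standard method, each galaxy) by minus this displacement undoes part of the non-linear motion, which is what sharpens the BAO peak. For a single sine-wave overdensity δ(x) = sin(x) on a 2π box, the field returned is cos(x), consistent with δ = -dΨ/dx.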

  5. Visual search, visual streams, and visual architectures.

    Science.gov (United States)

    Green, M

    1991-10-01

    Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.

  6. Subjective versus objective assessment of breast reconstruction.

    Science.gov (United States)

    Henseler, Helga; Smith, Joanna; Bowman, Adrian; Khambay, Balvinder S; Ju, Xiangyang; Ayoub, Ashraf; Ray, Arup K

    2013-05-01

    To date, breast assessment has been conducted mainly subjectively; only recently has a validated objective three-dimensional (3D) imaging method been developed. The study aimed to assess breast reconstruction subjectively and objectively and to compare the two. In forty-four patients after immediate unilateral breast reconstruction with solely the extended latissimus dorsi flap, the breast was captured by a validated 3D imaging method and standardized 2D photography. Breast symmetry was subjectively evaluated by six experts who applied the Harris score, giving a mark of 1-4 for a poor to excellent result. An error study was conducted by examination of the intra- and inter-observer agreement and agreement on controls. By Procrustes analysis an objective asymmetry score was obtained and compared to the subjective assessment. The subjective assessment showed good or substantial inter-observer agreement, and fair (p-values: 0.159, 0.134, 0.099) to substantial (p-value: 0.005) intra-observer agreement. The objective assessment revealed that the reconstructed breast had a significantly smaller volume than the opposite side and that the average asymmetry score was 0.052, ranging from 0.019 to 0.136. When comparing the subjective and objective methods, the relationship between the two scores was highly significant. Subjective breast assessment lacked accuracy and reproducibility. This was the first error study of subjective breast assessment versus an objective validated 3D imaging method based on true 3D parameters. The substantial agreement between established subjective breast assessment and the new validated objective method supports the value of the latter, and we expect its future role to expand. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
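A Procrustes-based asymmetry score of the kind used above can be sketched as follows: mirror one side's landmarks, remove translation and scale, find the optimal rotation by SVD (orthogonal Procrustes), and report the residual RMS. The mirroring axis, the normalization, and the score definition below are illustrative assumptions, not the study's exact protocol.

```python
import numpy as np

def procrustes_asymmetry(X, Y):
    """Asymmetry score between two landmark sets (n_points x 3):
    mirror Y across the sagittal (x = 0) plane, optimally translate,
    scale and rotate it onto X via ordinary Procrustes analysis, and
    return the residual root-mean-square distance per landmark."""
    Y = Y * np.array([-1.0, 1.0, 1.0])          # mirror across sagittal plane
    Xc = X - X.mean(axis=0)                     # remove translation
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)                # remove scale
    Yc = Yc / np.linalg.norm(Yc)
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)         # optimal rotation (SVD)
    R = U @ Vt
    resid = Xc - Yc @ R.T
    return np.sqrt((resid ** 2).sum() / len(X))

# a perfectly symmetric pair of landmark sets scores (near) zero
pts = np.array([[1.0, 0.0, 0.0], [2.0, 1.0, 0.5], [3.0, -1.0, 1.0]])
score = procrustes_asymmetry(pts, pts * np.array([-1.0, 1.0, 1.0]))
```

Because the shapes are normalized before alignment, the score is dimensionless, matching the order of magnitude of the values reported above (0.019 to 0.136).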

  7. A technique system for the measurement, reconstruction and character extraction of rice plant architecture.

    Directory of Open Access Journals (Sweden)

    Xumeng Li

    Full Text Available This study developed a technique system for the measurement, reconstruction, and trait extraction of rice canopy architectures, which have challenged functional-structural plant modeling for decades and have become the foundation of the design of ideo-plant architectures. The system uses the location-separation-measurement method (LSMM) for the collection of data on the canopy architecture and the analytic geometry method for the reconstruction and visualization of the three-dimensional (3D) digital architecture of the rice plant. It also uses the virtual clipping method for extracting the key traits of the canopy architecture, such as the leaf area, inclination, and azimuth distribution in spatial coordinates. To establish the technique system, we developed (i) simple tools to measure the spatial position of the stem axis and azimuth of the leaf midrib and to capture images of tillers and leaves; (ii) computer software programs for extracting data on stem diameter, leaf nodes, and leaf midrib curves from the tiller images and data on leaf length, width, and shape from the leaf images; (iii) a database of digital architectures that stores the measured data and facilitates the reconstruction of the 3D visual architecture and the extraction of architectural traits; and (iv) computation algorithms for virtual clipping to stratify the rice canopy, to extend the stratified surface from the horizontal plane to a general curved surface (including a cylindrical surface), and to implement it in silico. Each component of the technique system was quantitatively validated and visually compared to images, and the sensitivity of the virtual clipping algorithms was analyzed. This technique is inexpensive and accurate and provides high throughput for the measurement, reconstruction, and trait extraction of rice canopy architectures. The technique provides a more practical method of data collection to serve functional-structural plant models of rice and for the
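The stratification step of virtual clipping can be sketched as a simple binning of leaf-element areas into horizontal canopy layers. This is a minimal illustration of clipping with horizontal planes only; the function name, inputs, and layer scheme are assumptions, and the study's method additionally extends the clipping surface to general curved surfaces.

```python
import numpy as np

def stratify_leaf_area(heights, areas, layer_edges):
    """Virtual clipping with horizontal planes: bin leaf-element areas
    into canopy layers by element height, returning the leaf-area
    profile (one total per layer, bottom to top)."""
    idx = np.digitize(heights, layer_edges) - 1   # layer index per element
    profile = np.zeros(len(layer_edges) - 1)
    for i, a in zip(idx, areas):
        if 0 <= i < profile.size:                 # ignore out-of-range elements
            profile[i] += a
    return profile

# leaf elements at heights 0.2, 0.5, 0.8 m with areas 1, 2, 3 cm^2,
# clipped into two 0.5 m layers
profile = stratify_leaf_area(np.array([0.2, 0.5, 0.8]),
                             np.array([1.0, 2.0, 3.0]),
                             np.array([0.0, 0.5, 1.0]))
```

Traits such as the vertical leaf-area distribution then fall out of the profile directly; inclination and azimuth distributions can be binned per layer in the same way.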

  8. An Anatomically Validated Brachial Plexus Contouring Method for Intensity Modulated Radiation Therapy Planning

    International Nuclear Information System (INIS)

    Van de Velde, Joris; Audenaert, Emmanuel; Speleers, Bruno; Vercauteren, Tom; Mulliez, Thomas; Vandemaele, Pieter; Achten, Eric; Kerckaert, Ingrid; D'Herde, Katharina; De Neve, Wilfried; Van Hoof, Tom

    2013-01-01

Purpose: To develop contouring guidelines for the brachial plexus (BP) using anatomically validated cadaver datasets. Magnetic resonance imaging (MRI) and computed tomography (CT) were used to obtain detailed visualizations of the BP region, with the goal of achieving maximal inclusion of the actual BP in a small contoured volume while also accommodating anatomic variations. Methods and Materials: CT and MRI were obtained for 8 cadavers positioned for intensity modulated radiation therapy. 3-dimensional reconstructions of soft tissue (from MRI) and bone (from CT) were combined to create 8 separate enhanced CT project files. Dissection of the corresponding cadavers anatomically validated the reconstructions created. Seven enhanced CT project files were then automatically fitted, separately in different regions, to obtain a single dataset of superimposed BP regions that incorporated anatomic variations. From this dataset, improved BP contouring guidelines were developed. These guidelines were then applied to the 7 original CT project files and also to 1 additional file, left out from the superimposing procedure. The percentage of BP inclusion was compared with the published guidelines. Results: The anatomic validation procedure showed a high level of conformity between the 3-dimensional reconstructions generated and the dissected counterparts for the BP regions examined. Accurate and detailed BP contouring guidelines were developed, which provided corresponding guidance for each level in a clinical dataset. An average margin of 4.7 mm around the anatomically validated BP contour is sufficient to accommodate anatomic variations. Using the new guidelines, 100% inclusion of the BP was achieved, compared with a mean inclusion of 37.75% when published guidelines were applied. Conclusion: Improved guidelines for BP delineation were developed using combined MRI and CT imaging with validation by anatomic dissection.

  9. Explicating Validity

    Science.gov (United States)

    Kane, Michael T.

    2016-01-01

    How we choose to use a term depends on what we want to do with it. If "validity" is to be used to support a score interpretation, validation would require an analysis of the plausibility of that interpretation. If validity is to be used to support score uses, validation would require an analysis of the appropriateness of the proposed…

  10. Visualizing water

    Science.gov (United States)

    Baart, F.; van Gils, A.; Hagenaars, G.; Donchyts, G.; Eisemann, E.; van Velzen, J. W.

    2016-12-01

A compelling visualization is captivating, beautiful and narrative. Here we show how melding the skills of computer graphics, art, statistics, and environmental modeling can be used to generate innovative, attractive and very informative visualizations. We focus on the topic of visualizing forecasts and measurements of water (water level, waves, currents, density, and salinity). For the field of computer graphics and the arts, water is an important topic because it occurs in many natural scenes. For environmental modeling and statistics, water is an important topic because water is essential for transport, a healthy environment, fruitful agriculture, and a safe environment. The different disciplines take different approaches to visualizing water. In computer graphics, one focuses on making water look as realistic as possible. This focus on realistic perception (versus the focus on physical balance pursued by environmental scientists) has resulted in fascinating renderings, as seen in recent games and movies. Visualization techniques for statistical results have benefited from advances in design and journalism, resulting in enthralling infographics. The field of environmental modeling has absorbed advances in contemporary cartography, as seen in the latest interactive data-driven maps. We systematically review the design of emerging types of water visualization. The examples that we analyze range from dynamically animated forecasts, interactive paintings, and infographics to modern cartography and web-based photorealistic rendering. By characterizing the intended audience, the design choices, the scales (e.g. time, space), and the explorability, we provide a set of guidelines and genres. The unique contributions of the different fields show how innovations in the current state of the art of water visualization have benefited from interdisciplinary collaborations.

  11. Apparatus and method for reconstructing data

    International Nuclear Information System (INIS)

    Pavkovich, J.M.

    1977-01-01

    The apparatus and method for reconstructing data are described. A fan beam of radiation is passed through an object, the beam lying in the same quasi-plane as the object slice to be examined. Radiation not absorbed in the object slice is recorded on oppositely situated detectors aligned with the source of radiation. Relative rotation is provided between the source-detector configuration and the object. Reconstruction means are coupled to the detector means, and may comprise a general purpose computer, a special purpose computer, and control logic for interfacing between said computers and controlling the respective functioning thereof for performing a convolution and back projection based upon non-absorbed radiation detected by said detector means, whereby the reconstruction means converts values of the non-absorbed radiation into values of absorbed radiation at each of an arbitrarily large number of points selected within the object slice. Display means are coupled to the reconstruction means for providing a visual or other display or representation of the quantities of radiation absorbed at the points considered in the object. (Auth.)
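The convolution and back projection performed by the reconstruction means is the classic filtered back projection scheme. As a rough illustration of the principle only (a parallel-beam approximation rather than the fan-beam geometry of the apparatus; the square phantom, angle count, and ramp filter are all illustrative choices, not taken from the patent), a sketch might look like:

```python
import numpy as np
from scipy.ndimage import rotate

# Hypothetical square phantom standing in for the "object slice"
N = 64
phantom = np.zeros((N, N))
phantom[24:40, 24:40] = 1.0

angles = np.linspace(0.0, 180.0, 60, endpoint=False)

# Forward projection: rotate the slice and sum along one axis
# (a parallel-beam stand-in for the fan-beam measurement)
sinogram = np.stack([rotate(phantom, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

# "Convolution" step: ramp filter applied in the frequency domain
ramp = np.abs(np.fft.fftfreq(N))
filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

# "Back projection" step: smear each filtered profile back across the image
recon = np.zeros((N, N))
for a, proj in zip(angles, filtered):
    recon += rotate(np.tile(proj, (N, 1)), -a, reshape=False, order=1)
recon *= np.pi / (2 * len(angles))
```

The reconstructed values peak inside the square and fall towards zero outside it, i.e. the filtered projections recover relative absorption values at arbitrarily chosen points of the slice, as the text describes.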

  12. Research on reconstruction of steel tube section from few projections

    International Nuclear Information System (INIS)

    Peng Shuaijun; Wu Haifeng; Wang Kai

    2007-01-01

    Most parameters of steel tube can be acquired from CT image of the section so as to evaluate its quality. But large numbers of projections are needed in order to reconstruct the section image, so the collection and calculation of the projections consume lots of time. In order to solve the problem, reconstruction algorithms of steel tube from few projections are researched and the results are validated with simulation data in the paper. Three iterative algorithms, ART, MAP and OSEM, are attempted to reconstruct the section of steel tube by using the simulation model. Considering the prior information distributing of steel tube, we improve the algorithms and get better reconstruction images. The results of simulation experiment indicate that ART, MAP and OSEM can reconstruct accurate section images of steel tube from less than 20 projections and approximate images from 10 projections. (authors)
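Of the three iterative schemes compared, ART is the classic Kaczmarz row-action method: each measured projection defines a hyperplane, and the image estimate is cyclically projected onto each one. A minimal sketch on a toy 3-unknown system (the matrix and values below are illustrative, not a steel-tube projection geometry):

```python
import numpy as np

def art(A, b, sweeps=200, relax=1.0):
    """Kaczmarz / ART: cycle through the rows of A, projecting the
    current estimate onto the hyperplane of each projection equation."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for ai, bi in zip(A, b):
            x += relax * (bi - ai @ x) / (ai @ ai) * ai
    return x

# Tiny consistent system standing in for (projection matrix, measured rays)
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 0.5])
b = A @ x_true
x = art(A, b)
```

For a consistent system the iterates converge to the solution; with very few projections, as in the paper, prior information (such as the known annular support of the tube wall) can additionally be imposed between sweeps.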

  13. Image interface in Java for tomographic reconstruction in nuclear medicine

    International Nuclear Information System (INIS)

    Andrade, M.A.; Silva, A.M. Marques da

    2004-01-01

The aim of this study is to implement software for tomographic reconstruction of SPECT data in Nuclear Medicine, with a flexible, cross-platform interface design written in Java. Validation tests were performed based on simulated SPECT data. The results showed that the implemented algorithms and filters agree with the theoretical context. We intend to extend the system by implementing additional tomographic reconstruction techniques and Java threads, in order to provide simultaneous image processing. (author)

  14. Stereo perception of reconstructions of digital holograms of real-world objects

    Energy Technology Data Exchange (ETDEWEB)

    Lehtimaeki, Taina M; Saeaeskilahti, Kirsti; Naesaenen, Risto [University of Oulu, Oulu Southern Institute, Ylivieska (Finland); Naughton, Thomas J, E-mail: taina.lehtimaki@oulu.f [Department of Computer Science, National University of Ireland Maynooth (Ireland)

    2010-02-01

    In digital holography a 3D scene is captured optically and often the perspectives are reconstructed numerically. In this study we digitally process the holograms to allow them to be displayed on autostereoscopic displays. This study is conducted by subjective visual perception experiments comparing single reconstructed images from left and right perspective to the resulting stereo image.

  15. Stereo perception of reconstructions of digital holograms of real-world objects

    International Nuclear Information System (INIS)

    Lehtimaeki, Taina M; Saeaeskilahti, Kirsti; Naesaenen, Risto; Naughton, Thomas J

    2010-01-01

    In digital holography a 3D scene is captured optically and often the perspectives are reconstructed numerically. In this study we digitally process the holograms to allow them to be displayed on autostereoscopic displays. This study is conducted by subjective visual perception experiments comparing single reconstructed images from left and right perspective to the resulting stereo image.

  16. Visual cognition

    Energy Technology Data Exchange (ETDEWEB)

    Pinker, S.

    1985-01-01

    This collection of research papers on visual cognition first appeared as a special issue of Cognition: International Journal of Cognitive Science. The study of visual cognition has seen enormous progress in the past decade, bringing important advances in our understanding of shape perception, visual imagery, and mental maps. Many of these discoveries are the result of converging investigations in different areas, such as cognitive and perceptual psychology, artificial intelligence, and neuropsychology. This volume is intended to highlight a sample of work at the cutting edge of this research area for the benefit of students and researchers in a variety of disciplines. The tutorial introduction that begins the volume is designed to help the nonspecialist reader bridge the gap between the contemporary research reported here and earlier textbook introductions or literature reviews.

  17. Visualizing Transformation

    DEFF Research Database (Denmark)

    Pedersen, Pia

    2012-01-01

    Transformation, defined as the step of extracting, arranging and simplifying data into visual form (M. Neurath, 1974), was developed in connection with ISOTYPE (International System Of TYpographic Picture Education) and might well be the most important legacy of Isotype to the field of graphic...... design. Recently transformation has attracted renewed interest because of the book The Transformer written by Robin Kinross and Marie Neurath. My on-going research project, summarized in this paper, identifies and depicts the essential principles of data visualization underlying the process...... of transformation with reference to Marie Neurath’s sketches on the Bilston Project. The material has been collected at the Otto and Marie Neurath Collection housed at the University of Reading, UK. By using data visualization as a research method to look directly into the process of transformation, the project...

  18. Alternative reconstruction after pancreaticoduodenectomy

    Directory of Open Access Journals (Sweden)

    Cooperman Avram M

    2008-01-01

Full Text Available Abstract Background Pancreaticoduodenectomy is the procedure of choice for tumors of the head of the pancreas and periampulla. Despite advances in surgical technique and postoperative care, the procedure continues to carry a high morbidity rate. One of the most common morbidities is delayed gastric emptying, with rates of 15%–40%. Following two prolonged cases of delayed gastric emptying, we altered our reconstruction to avoid this complication altogether. Subsequently, our patients underwent a classic pancreaticoduodenectomy with an undivided Roux-en-Y technique for reconstruction. Methods We reviewed the charts of our last 13 Whipple procedures, evaluating them for complications, specifically delayed gastric emptying. We compared the outcomes of those patients to a control group of 15 patients who underwent the Whipple procedure with standard reconstruction. Results No instances of delayed gastric emptying occurred in patients who underwent an undivided Roux-en-Y technique for reconstruction. There was 1 wound infection (8%), 1 instance of pneumonia (8%), and 1 instance of bleeding from the gastrojejunal staple line (8%). There was no operative mortality. Conclusion Use of the undivided Roux-en-Y technique for reconstruction following the Whipple procedure may decrease the incidence of delayed gastric emptying. In addition, it has the added benefit of eliminating bile reflux gastritis. Future randomized control trials are recommended to further evaluate the efficacy of the procedure.

  19. Three-dimensional reconstructions in neuroanatomy

    International Nuclear Information System (INIS)

    Kretschmann, H.J.; Vogt, H.; Schuetz, T.; Gerke, M.; Riedel, A.; Buhmann, C.; Wesemann, M.; Mueller, D.

    1991-01-01

Computer-aided 3D reconstructions of neurofunctional systems and structures are generated as a reference for neuroimaging (CT, MRI, PET). The clinical application of these 3D reconstructions requires a coordinate system and conditions resembling the intravital neuroanatomy as far as possible. In this paper the neuroanatomical reference system (NeuRef) of the Department of Neuroanatomy of Hannover Medical School is presented. This consists of methods to record brain structures from serial sections with minimal error (less than 1 mm) and to display 3D brain models derived from such a database. In addition, NeuRef is able to generate sections through, for instance, the visual and pyramidal systems and to transfer these data onto a corresponding CT image. Therefore, this method can serve as a diagnostic aid in neuroradiology, in operation planning, and in radiotherapy. It can also be used in PACS. (orig.)

  20. Iterative reconstruction of magnetic induction using Lorentz transmission electron tomography

    International Nuclear Information System (INIS)

    Phatak, C.; Gürsoy, D.

    2015-01-01

    Intense ongoing research on complex nanomagnetic structures requires a fundamental understanding of the 3D magnetization and the stray fields around the nano-objects. 3D visualization of such fields offers the best way to achieve this. Lorentz transmission electron microscopy provides a suitable combination of high resolution and ability to quantitatively visualize the magnetization vectors using phase retrieval methods. In this paper, we present a formalism to represent the magnetic phase shift of electrons as a Radon transform of the magnetic induction of the sample. Using this formalism, we then present the application of common tomographic methods particularly the iterative methods, to reconstruct the 3D components of the vector field. We present an analysis of the effect of missing wedge and the limited angular sampling as well as reconstruction of complex 3D magnetization in a nanowire using simulations. - Highlights: • We present a formalism to represent electron-optical magnetic phase shift as a Radon transform of the 3D magnetic induction of the nano-object. • We have analyzed four different tomographic reconstruction methods for vectorial data reconstruction. • Reconstruction methods were tested for varying experimental limitations such as limited tilt range and limited angular sampling. • The analysis showed that Gridrec and SIRT methods performed better with lower errors than other reconstruction methods

  1. Light field reconstruction robust to signal dependent noise

    Science.gov (United States)

    Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai

    2014-11-01

Capturing four-dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from a low signal-to-noise ratio. Although multiplexing can help raise the acquisition quality, noise is still a big issue, especially for fast acquisition. To address this problem, this paper proposes a noise-robust light field reconstruction method. First, the signal-dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We build a prototype by hacking an off-the-shelf camera for data capturing and prove the concept. The effectiveness of this method is validated with experiments on the real captured data.

  2. Real-time quasi-3D tomographic reconstruction

    Science.gov (United States)

    Buurlage, Jan-Willem; Kohr, Holger; Palenstijn, Willem Jan; Joost Batenburg, K.

    2018-06-01

Developments in acquisition technology and a growing need for time-resolved experiments pose great computational challenges in tomography. In addition, access to reconstructions in real time is a highly demanded feature but has so far been out of reach. We show that by exploiting the mathematical properties of filtered backprojection-type methods, having access to real-time reconstructions of arbitrarily oriented slices becomes feasible. Furthermore, we present software for visualization and on-demand reconstruction of slices. A user can interactively shift and rotate slices in a GUI, while the software updates the slice in real time. For certain use cases, the possibility to study arbitrarily oriented slices in real time directly from the measured data provides sufficient visual and quantitative insight. Two such applications are discussed in this article.

  3. Reconstructing random media

    International Nuclear Information System (INIS)

    Yeong, C.L.; Torquato, S.

    1998-01-01

We formulate a procedure to reconstruct the structure of general random heterogeneous media from limited morphological information by extending the methodology of Rintoul and Torquato [J. Colloid Interface Sci. 186, 467 (1997)] developed for dispersions. The procedure has the advantages that it is simple to implement and generally applicable to multidimensional, multiphase, and anisotropic structures. Furthermore, an extremely useful feature is that it can incorporate any type and number of correlation functions in order to provide as much morphological information as is necessary for accurate reconstruction. We consider a variety of one- and two-dimensional reconstructions, including periodic and random arrays of rods, various distributions of disks, Debye random media, and a Fontainebleau sandstone sample. We also use our algorithm to construct heterogeneous media from specified hypothetical correlation functions, including an exponentially damped, oscillating function as well as physically unrealizable ones. copyright 1998 The American Physical Society
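The procedure extended here is, at its core, stochastic optimization: pixels of a trial medium are swapped (which preserves the volume fraction) and swaps are accepted by a simulated-annealing rule until the trial's correlation functions match the target's. A minimal 1D sketch using only the two-point correlation function (the array size, clustered target, and cooling schedule are all illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

def two_point(img, max_r=20):
    # S2(r): probability that two points a distance r apart are both in phase 1
    return np.array([np.mean(img * np.roll(img, r)) for r in range(max_r)])

# "Measured" medium: clustered two-phase structure whose statistics we target
target_img = np.repeat((rng.random(n // 10) < 0.4).astype(float), 10)
S2_target = two_point(target_img)

# Trial medium: same volume fraction, but spatially uncorrelated
trial = rng.permutation(target_img)

def energy(img):
    # squared mismatch between trial and target correlation functions
    return np.sum((two_point(img) - S2_target) ** 2)

E0 = E = energy(trial)
T = 1e-4
for _ in range(4000):
    i, j = rng.integers(0, n, size=2)
    if trial[i] == trial[j]:
        continue
    trial[i], trial[j] = trial[j], trial[i]      # swap preserves volume fraction
    E_new = energy(trial)
    if E_new < E or rng.random() < np.exp((E - E_new) / T):
        E = E_new                                 # accept the swap
    else:
        trial[i], trial[j] = trial[j], trial[i]   # reject: undo the swap
    T *= 0.999
```

The mismatch energy E drops well below its starting value E0 as short-range clustering is rebuilt; the paper runs the same idea in higher dimensions with several correlation functions matched simultaneously.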

  4. Delayed breast implant reconstruction

    DEFF Research Database (Denmark)

    Hvilsom, Gitte B.; Hölmich, Lisbet R.; Steding-Jessen, Marianne

    2012-01-01

    We evaluated the association between radiation therapy and severe capsular contracture or reoperation after 717 delayed breast implant reconstruction procedures (288 1- and 429 2-stage procedures) identified in the prospective database of the Danish Registry for Plastic Surgery of the Breast during...... of radiation therapy was associated with a non-significantly increased risk of reoperation after both 1-stage (HR = 1.4; 95% CI: 0.7-2.5) and 2-stage (HR = 1.6; 95% CI: 0.9-3.1) procedures. Reconstruction failure was highest (13.2%) in the 2-stage procedures with a history of radiation therapy. Breast...... reconstruction approaches other than implants should be seriously considered among women who have received radiation therapy....

  5. Evaluation of aortocoronary bypass graft patency by reconstructed CT image

    International Nuclear Information System (INIS)

    Kawakita, Seizaburo; Koide, Takashi; Saito, Yoshio; Yamamoto, Tadao; Iwasaki, Tadaaki

    1982-01-01

Ten patients were examined over the three months from January to March 1981. The patients had been operated on from 1 month to 7 years before CT. A bypass to the left anterior descending artery (LAD) was grafted in 10 cases, 2 to the right coronary artery (RCA), 4 to an obtuse marginal artery (OM), and 1 to a diagonal artery. Image reconstruction was performed in 10 cases using the image-analysis computer Evaluskop. Appropriate planes for reconstruction were selected by trial and error while observing the CT images; when the reconstructed course of a graft coincided with the surgical records or angiography, image building was considered complete. On cross sections, grafts to the LAD were visualized in all 10 cases: 9 along the entire course and 1 in a proximal part of the graft. Two grafts to the RCA, 4 to the OM and 1 to a diagonal were also successfully visualized. Reconstruction of graft images succeeded for 9 grafts in 6 cases; the course of a graft could be pursued from the proximal end to the distal end adjacent to the cardiac chamber. The picture of a bypass to the LAD was visualized in 6 of 10 grafts, and 2 bypasses to the RCA and 1 to the OM could also be depicted. However, 3 grafts to the OM and 1 to a diagonal failed to be visualized throughout their courses in reconstructed images. We believe the causes of failure mainly depended upon the course of the graft: when a graft ran arc-like around the heart chamber, it was very difficult to depict its entire length in reconstructed images, even though the graft could be detected in cross sections. These preliminary studies indicated that reconstruction of CT images has some benefit for pursuing graft courses. (J.P.N.)

  6. Visual attention.

    Science.gov (United States)

    Evans, Karla K; Horowitz, Todd S; Howe, Piers; Pedersini, Roccardo; Reijnen, Ester; Pinto, Yair; Kuzmova, Yoana; Wolfe, Jeremy M

    2011-09-01

A typical visual scene we encounter in everyday life is complex and filled with a huge amount of perceptual information. The term 'visual attention' describes a set of mechanisms that limit some processing to a subset of incoming stimuli. Attentional mechanisms shape what we see and what we can act upon. They allow for concurrent selection of some (preferably, relevant) information and inhibition of other information. This selection permits the reduction of complexity and informational overload. Selection can be determined both by the 'bottom-up' saliency of information from the environment and by the 'top-down' state and goals of the perceiver. Attentional effects can take the form of modulating or enhancing the selected information. A central role for selective attention is to enable the 'binding' of selected information into unified and coherent representations of objects in the outside world. In the overview on visual attention presented here we review the mechanisms and consequences of selection and inhibition over space and time. We examine theoretical, behavioral and neurophysiologic work done on visual attention. We also discuss the relations between attention and other cognitive processes such as automaticity and awareness. WIREs Cogn Sci 2011, 2, 503–514. DOI: 10.1002/wcs.127. For further resources related to this article, please visit the WIREs website. Copyright © 2011 John Wiley & Sons, Ltd.

  7. Visualizing Series

    Science.gov (United States)

    Unal, Hasan

    2008-01-01

    The importance of visualisation and multiple representations in mathematics has been stressed, especially in a context of problem solving. Hanna and Sidoli comment that "Diagrams and other visual representations have long been welcomed as heuristic accompaniments to proof, where they not only facilitate the understanding of theorems and their…

  8. HEEL BONE RECONSTRUCTIVE OSTEOSYNTHESIS

    Directory of Open Access Journals (Sweden)

    A. N. Svetashov

    2010-01-01

Full Text Available To identify the variants of reconstructive osteosynthesis most appropriate to the severity of heel bone injury, the treatment results of 56 patients were analyzed. In 15 (26.8%) patients classic methods of surgical treatment were applied; in 41 (73.2%) cases porous implants were used to restore the defect. Osteosynthesis without plastic restoration of the heel bone was ineffective in 60% of patients from the control group. The reconstructive osteosynthesis method ensured a good long-term functional rehabilitation outcome in 96.4% of patients from the basic group.

  9. Vertex reconstruction in CMS

    International Nuclear Information System (INIS)

    Chabanat, E.; D'Hondt, J.; Estre, N.; Fruehwirth, R.; Prokofiev, K.; Speer, T.; Vanlaer, P.; Waltenberger, W.

    2005-01-01

    Due to the high track multiplicity in the final states expected in proton collisions at the LHC experiments, novel vertex reconstruction algorithms are required. The vertex reconstruction problem can be decomposed into a pattern recognition problem ('vertex finding') and an estimation problem ('vertex fitting'). Starting from least-squares methods, robustifications of the classical algorithms are discussed and the statistical properties of the novel methods are shown. A whole set of different approaches for the vertex finding problem is presented and compared in relevant physics channels

  10. Vertex Reconstruction in CMS

    CERN Document Server

    Chabanat, E; D'Hondt, J; Vanlaer, P; Prokofiev, K; Speer, T; Frühwirth, R; Waltenberger, W

    2005-01-01

Because of the high track multiplicity in the final states expected in proton collisions at the LHC experiments, novel vertex reconstruction algorithms are required. The vertex reconstruction problem can be decomposed into a pattern recognition problem ("vertex finding") and an estimation problem ("vertex fitting"). Starting from least-squares methods, ways to render the classical algorithms more robust are discussed and the statistical properties of the novel methods are shown. A whole set of different approaches for the vertex finding problem is presented and compared in relevant physics channels.

  11. Joint-2D-SL0 Algorithm for Joint Sparse Matrix Reconstruction

    Directory of Open Access Journals (Sweden)

    Dong Zhang

    2017-01-01

Full Text Available Sparse matrix reconstruction has wide applications such as DOA estimation and STAP. However, its performance is usually restricted by the grid mismatch problem. In this paper, we revise the sparse matrix reconstruction model and propose a joint sparse matrix reconstruction model based on a first-order Taylor expansion, which can overcome the grid mismatch problem. Then, we put forward the Joint-2D-SL0 algorithm, which can solve the joint sparse matrix reconstruction problem efficiently. Compared with the Kronecker compressive sensing method, our proposed method has a higher computational efficiency and acceptable reconstruction accuracy. Finally, simulation results validate the superiority of the proposed method.
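SL0-type algorithms approximate the ℓ0 norm with a smooth Gaussian surrogate that is gradually sharpened, alternating small gradient steps with a projection back onto the measurement constraint. The paper's Joint-2D-SL0 applies this to a matrix-valued unknown; a minimal sketch of the basic 1D SL0 recovery it builds on (problem sizes, sparsity pattern, and step parameters below are illustrative):

```python
import numpy as np

def sl0(A, y, sigma_min=1e-3, sigma_decay=0.7, mu=2.0, inner=3):
    """Basic smoothed-L0 recovery of a sparse x satisfying A @ x = y."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                       # minimum-norm feasible starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            # gradient step on the Gaussian surrogate of the L0 norm
            x = x - mu * x * np.exp(-x ** 2 / (2.0 * sigma ** 2))
            # project back onto the measurement constraint A @ x = y
            x = x - A_pinv @ (A @ x - y)
        sigma *= sigma_decay             # sharpen the surrogate
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))       # illustrative sensing matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]   # 3-sparse signal
x_hat = sl0(A, A @ x_true)
```

The joint/2D variant applies the same smooth-then-sharpen idea with a pair of dictionaries acting on the rows and columns of a sparse matrix, which is what lets it avoid the cost of the Kronecker-lifted formulation.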

  12. Reconstruction dynamics of recorded holograms in photochromic glass

    Energy Technology Data Exchange (ETDEWEB)

    Mihailescu, Mona; Pavel, Eugen; Nicolae, Vasile B.

    2011-06-20

We have investigated the dynamics of the record-erase process of holograms in photochromic glass using continuous-wave Nd:YVO₄ laser radiation (λ = 532 nm). A two-dimensional microgrid pattern was formed and visualized in photochromic glass, and the decay of its diffraction efficiency versus time (during the reconstruction step) gave us information (D, Δn) about the diffusion process inside the material. The recording and reconstruction processes were carried out in an off-axis setup, and the images of the reconstructed object were recorded by a CCD camera. Measurements on reconstructed object images, using holograms recorded at different incident laser powers, have shown a two-stage process involved in silver atom kinetics.

  13. Consistent reconstruction of 4D fetal heart ultrasound images to cope with fetal motion.

    Science.gov (United States)

    Tanner, Christine; Flach, Barbara; Eggenberger, Céline; Mattausch, Oliver; Bajka, Michael; Goksel, Orcun

    2017-08-01

    4D ultrasound imaging of the fetal heart relies on reconstructions from B-mode images. In the presence of fetal motion, current approaches suffer from artifacts, which are unrecoverable for single sweeps. We propose to use many sweeps and exploit the resulting redundancy to automatically recover from motion by reconstructing a 4D image which is consistent in phase, space, and time. An interactive visualization framework to view animated ultrasound slices from 4D reconstructions on arbitrary planes was developed using a magnetically tracked mock probe. We first quantified the performance of 10 4D reconstruction formulations on simulated data. Reconstructions of 14 in vivo sequences by a baseline, the current state-of-the-art, and the proposed approach were then visually ranked with respect to temporal quality on orthogonal views. Rankings from 5 observers showed that the proposed 4D reconstruction approach significantly improves temporal image quality in comparison with the baseline. The 4D reconstructions of the baseline and the proposed methods were then inspected interactively for accessibility to clinically important views and rated for their clinical usefulness by an ultrasound specialist in obstetrics and gynecology. The reconstructions by the proposed method were rated as 'very useful' in 71% and were statistically significantly more useful than the baseline reconstructions. Multi-sweep fetal heart ultrasound acquisitions in combination with consistent 4D image reconstruction improves quality as well as clinical usefulness of the resulting 4D images in the presence of fetal motion.

  14. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    Science.gov (United States)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
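The one-step-late MAP coupling can be sketched on a toy 1D problem: two noiseless "scans" are reconstructed with ML-EM updates whose sensitivity term is augmented by the gradient of the quadratic inter-image penalty. Everything below (the positive system matrix, sizes, and β) is an illustrative stand-in for a PET forward model, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
# Illustrative positive "projector" standing in for the PET system matrix
A = np.abs(rng.standard_normal((24, n))) + 0.1

x_true_1 = np.ones(n); x_true_1[4:8] = 4.0        # baseline scan: hot "tumour"
x_true_2 = x_true_1.copy(); x_true_2[4:8] = 3.0   # follow-up: activity reduced
y1, y2 = A @ x_true_1, A @ x_true_2               # noiseless measurements

sens = A.sum(axis=0)      # sensitivity image A^T 1
beta = 0.05               # coupling-penalty strength
x1 = np.ones(n)
x2 = np.ones(n)
for _ in range(2000):
    # one-step-late MAP-EM: penalty gradient evaluated at the current estimates
    g1 = 2.0 * (x1 - x2)  # d/dx1 of the voxel-wise penalty (x1 - x2)^2
    g2 = 2.0 * (x2 - x1)
    x1 = x1 / (sens + beta * g1) * (A.T @ (y1 / (A @ x1)))
    x2 = x2 / (sens + beta * g2) * (A.T @ (y2 / (A @ x2)))
```

With β = 0 each update reduces to plain ML-EM; the coupling pulls the two reconstructions together where they agree, while the data terms preserve the genuine longitudinal change in the tumour voxels.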

  15. Reconstructing Neutrino Mass Spectrum

    OpenAIRE

    Smirnov, A. Yu.

    1999-01-01

    Reconstruction of the neutrino mass spectrum and lepton mixing is one of the fundamental problems of particle physics. In this connection we consider two central topics: (i) the origin of large lepton mixing, and (ii) the possible existence of new (sterile) neutrino states. We also discuss a possible relation between large mixing and the existence of sterile neutrinos.

  16. Position reconstruction in LUX

    Science.gov (United States)

    Akerib, D. S.; Alsum, S.; Araújo, H. M.; Bai, X.; Bailey, A. J.; Balajthy, J.; Beltrame, P.; Bernard, E. P.; Bernstein, A.; Biesiadzinski, T. P.; Boulton, E. M.; Brás, P.; Byram, D.; Cahn, S. B.; Carmona-Benitez, M. C.; Chan, C.; Currie, A.; Cutter, J. E.; Davison, T. J. R.; Dobi, A.; Druszkiewicz, E.; Edwards, B. N.; Fallon, S. R.; Fan, A.; Fiorucci, S.; Gaitskell, R. J.; Genovesi, J.; Ghag, C.; Gilchriese, M. G. D.; Hall, C. R.; Hanhardt, M.; Haselschwardt, S. J.; Hertel, S. A.; Hogan, D. P.; Horn, M.; Huang, D. Q.; Ignarra, C. M.; Jacobsen, R. G.; Ji, W.; Kamdin, K.; Kazkaz, K.; Khaitan, D.; Knoche, R.; Larsen, N. A.; Lenardo, B. G.; Lesko, K. T.; Lindote, A.; Lopes, M. I.; Manalaysay, A.; Mannino, R. L.; Marzioni, M. F.; McKinsey, D. N.; Mei, D.-M.; Mock, J.; Moongweluwan, M.; Morad, J. A.; Murphy, A. St. J.; Nehrkorn, C.; Nelson, H. N.; Neves, F.; O'Sullivan, K.; Oliver-Mallory, K. C.; Palladino, K. J.; Pease, E. K.; Rhyne, C.; Shaw, S.; Shutt, T. A.; Silva, C.; Solmaz, M.; Solovov, V. N.; Sorensen, P.; Sumner, T. J.; Szydagis, M.; Taylor, D. J.; Taylor, W. C.; Tennyson, B. P.; Terman, P. A.; Tiedt, D. R.; To, W. H.; Tripathi, M.; Tvrznikova, L.; Uvarov, S.; Velan, V.; Verbus, J. R.; Webb, R. C.; White, J. T.; Whitis, T. J.; Witherell, M. S.; Wolfs, F. L. H.; Xu, J.; Yazdani, K.; Young, S. K.; Zhang, C.

    2018-02-01

    The (x, y) position reconstruction method used in the analysis of the complete exposure of the Large Underground Xenon (LUX) experiment is presented. The algorithm is based on a statistical test that makes use of an iterative method to recover the photomultiplier tube (PMT) light response directly from the calibration data. The light response functions make use of a two-dimensional functional form to account for the photons reflected on the inner walls of the detector. To increase the resolution for small pulses, a photon counting technique was employed to describe the response of the PMTs. The reconstruction was assessed with calibration data including 83mKr (releasing a total energy of 41.5 keV) and 3H (β⁻ decay with Q = 18.6 keV), and a deuterium-deuterium (D-D) neutron beam (2.45 MeV). Within the detector's fiducial volume, the reconstruction achieved an (x, y) position uncertainty of σ = 0.82 cm and σ = 0.17 cm for events of only 200 and 4,000 detected electroluminescence photons, respectively. Such signals are associated with electron recoils of energies ~0.25 keV and ~10 keV, respectively. The reconstructed position of the smallest events with a single electron emitted from the liquid surface (22 detected photons) has a horizontal (x, y) uncertainty of 2.13 cm.
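
The statistical idea behind such position reconstruction can be sketched as a maximum-likelihood fit of (x, y) to the pattern of photons seen by the PMTs. Everything below is an illustrative assumption rather than the LUX algorithm: a toy four-PMT array, a solid-angle-like light-response function (LUX derives its response functions iteratively from calibration data), and a brute-force grid search in place of a proper optimiser.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical square array of 4 PMTs above the emission plane.
pmt_xy = np.array([[-2.0, -2.0], [-2.0, 2.0], [2.0, -2.0], [2.0, 2.0]])
h = 3.0  # PMT height above the plane (arbitrary units)

def response(x, y):
    """Toy light-response: solid-angle-like fall-off, normalised so the
    fractions over the PMTs sum to 1 (stand-in for measured LRFs)."""
    r2 = (pmt_xy[:, 0] - x) ** 2 + (pmt_xy[:, 1] - y) ** 2
    f = 1.0 / (r2 + h ** 2) ** 1.5
    return f / f.sum()

true_xy, n_photons = (0.7, -0.4), 4000
counts = rng.multinomial(n_photons, response(*true_xy))

# Grid-search maximum likelihood (multinomial log-likelihood up to a constant).
grid = np.linspace(-3.0, 3.0, 121)
best, best_ll = None, -np.inf
for gx in grid:
    for gy in grid:
        ll = counts @ np.log(response(gx, gy))
        if ll > best_ll:
            best_ll, best = ll, (gx, gy)
```

With a few thousand detected photons the fit localises the event to a small fraction of the PMT spacing, which is the qualitative behaviour the abstract reports.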

  17. Looking for the Signal: A guide to iterative noise and artefact removal in X-ray tomographic reconstructions of porous geomaterials

    Science.gov (United States)

    Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.

    2017-07-01

    X-ray micro- and nanotomography has evolved into a quantitative analysis tool rather than a mere qualitative visualization technique for the study of porous natural materials. Tomographic reconstructions are subject to noise that has to be handled by image filters prior to quantitative analysis. Typically, denoising filters are designed to handle random noise, such as Gaussian or Poisson noise. In tomographic reconstructions, noise has been projected from Radon space to Euclidean space, i.e., post-reconstruction noise should be expected to be correlated rather than random. Reconstruction artefacts, such as streak or ring artefacts, aggravate the filtering process, so algorithms performing well with random noise are not guaranteed to provide satisfactory results for X-ray tomography reconstructions. With sufficient image resolution, the crystalline origin of most geomaterials results in tomography images of objects that are untextured. We developed a denoising framework for these kinds of samples that combines a noise level estimate with iterative nonlocal means denoising. This allows splitting the denoising task into several weak denoising subtasks, where the later filtering steps provide a controlled level of texture removal. We give a hands-on explanation of the use of this iterative denoising approach, and the validity and quality of the image enhancement filter were evaluated in a benchmarking experiment with noise footprints of varying levels of correlation and residual artefacts, extracted from real tomography reconstructions. We found that our denoising solutions were superior to other denoising algorithms over a broad range of contrast-to-noise ratios on artificial piecewise-constant signals.
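
The idea of splitting the task into several weak denoising passes, each calibrated against a fresh noise estimate, can be sketched in pure NumPy. This is a deliberately minimal illustration, not the published framework: the shift-and-accumulate nonlocal means, the MAD-based noise estimate, and the filtering-strength schedule are all simplifications chosen for brevity.

```python
import numpy as np

def _patch_ssd(d2, r_patch):
    """Box-sum squared differences over (2r+1)^2 patches (separable sums)."""
    k = np.ones(2 * r_patch + 1)
    s = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, d2)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, s)

def nlm_pass(img, h, r_search=4, r_patch=1):
    """One weak nonlocal-means pass via shifted copies of the image
    (wrap-around edge handling, acceptable for a sketch)."""
    acc = np.zeros_like(img)
    wsum = np.zeros_like(img)
    for dx in range(-r_search, r_search + 1):
        for dy in range(-r_search, r_search + 1):
            shifted = np.roll(np.roll(img, dx, axis=0), dy, axis=1)
            w = np.exp(-_patch_ssd((img - shifted) ** 2, r_patch) / (h * h))
            acc += w * shifted
            wsum += w
    return acc / wsum

def estimate_sigma(img):
    """Rough noise-level estimate from the MAD of horizontal differences."""
    d = np.diff(img, axis=1)
    return np.median(np.abs(d - np.median(d))) / (0.6745 * np.sqrt(2))

def iterative_denoise(img, passes=3):
    """Split denoising into weak subtasks: re-estimate the residual noise
    before every pass and filter gently relative to it."""
    out = img.copy()
    for _ in range(passes):
        out = nlm_pass(out, h=max(3.0 * estimate_sigma(out), 1e-6))
    return out
```

Because each pass re-estimates the residual noise, later passes become progressively gentler, which is one simple way to realise the "controlled level of texture removal" in the later filtering steps.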

  18. On an image reconstruction method for ECT

    Science.gov (United States)

    Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro

    2007-04-01

    An image produced by eddy current testing (ECT) is a blurred rendering of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method has been proposed. The method is based on the assumption that measured data and source are related by a simple convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method based on deconvolution. In this method, the point spread function (PSF) and line spread function (LSF) play a key role in the deconvolution processing. This study proposes a simple data processing scheme to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200 x 200 x 10 mm) with an artificial machined hole and a notch flaw were acquired with differential coil type sensors (produced by ZETEC Inc.). These data were analyzed by the proposed method, which restored a sharp image of discrete multiple holes from data in which the responses of the holes interfered. The estimated width of the line flaw was also much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws has been demonstrated by many results in which much finer images than the originals were reconstructed.
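
The deconvolution step described above can be sketched with a regularised (Wiener-style) division in Fourier space. The Gaussian PSF, the two-hole phantom, and the regularisation constant below are invented for illustration and are not taken from the paper:

```python
import numpy as np

def wiener_deconvolve(measured, psf, eps=1e-3):
    """Invert measured = psf (*) flaw by regularised division in Fourier
    space; eps damps frequencies where the PSF carries no signal."""
    H = np.fft.fft2(psf)
    Y = np.fft.fft2(measured)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(X))

n = 64
yy, xx = np.mgrid[0:n, 0:n]

# PSF: centred Gaussian, shifted so its peak sits at index (0, 0).
psf = np.exp(-(((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2 * 3.0 ** 2)))
psf = np.fft.ifftshift(psf / psf.sum())

# Two nearby "holes" whose blurred responses interfere in the measured image.
flaw = np.zeros((n, n))
flaw[30, 28] = 1.0
flaw[30, 36] = 1.0
measured = np.real(np.fft.ifft2(np.fft.fft2(flaw) * np.fft.fft2(psf)))

restored = wiener_deconvolve(measured, psf)
```

In the blurred image the two holes smear into overlapping responses; after deconvolution two sharp peaks reappear at the original positions, which is the qualitative behaviour the authors report for their multiple-hole data.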

  19. An Intelligent Cooperative Visual Sensor Network for Urban Mobility.

    Science.gov (United States)

    Leone, Giuseppe Riccardo; Moroni, Davide; Pieri, Gabriele; Petracca, Matteo; Salvetti, Ovidio; Azzarà, Andrea; Marino, Francesco

    2017-11-10

    Smart cities are demanding solutions for improved traffic efficiency, in order to guarantee optimal access to mobility resources available in urban areas. Intelligent video analytics deployed directly on board embedded sensors offers great opportunities to gather highly informative data about traffic and transport, allowing reconstruction of a real-time neat picture of urban mobility patterns. In this paper, we present a visual sensor network in which each node embeds computer vision logics for analyzing in real time urban traffic. The nodes in the network share their perceptions and build a global and comprehensive interpretation of the analyzed scenes in a cooperative and adaptive fashion. This is possible thanks to an especially designed Internet of Things (IoT) compliant middleware which encompasses in-network event composition as well as full support of Machine-2-Machine (M2M) communication mechanism. The potential of the proposed cooperative visual sensor network is shown with two sample applications in urban mobility connected to the estimation of vehicular flows and parking management. Besides providing detailed results of each key component of the proposed solution, the validity of the approach is demonstrated by extensive field tests that proved the suitability of the system in providing a scalable, adaptable and extensible data collection layer for managing and understanding mobility in smart cities.

  20. An Intelligent Cooperative Visual Sensor Network for Urban Mobility

    Directory of Open Access Journals (Sweden)

    Giuseppe Riccardo Leone

    2017-11-01

    Smart cities are demanding solutions for improved traffic efficiency, in order to guarantee optimal access to mobility resources available in urban areas. Intelligent video analytics deployed directly on board embedded sensors offers great opportunities to gather highly informative data about traffic and transport, allowing reconstruction of a real-time neat picture of urban mobility patterns. In this paper, we present a visual sensor network in which each node embeds computer vision logics for analyzing in real time urban traffic. The nodes in the network share their perceptions and build a global and comprehensive interpretation of the analyzed scenes in a cooperative and adaptive fashion. This is possible thanks to an especially designed Internet of Things (IoT) compliant middleware which encompasses in-network event composition as well as full support of Machine-2-Machine (M2M) communication mechanism. The potential of the proposed cooperative visual sensor network is shown with two sample applications in urban mobility connected to the estimation of vehicular flows and parking management. Besides providing detailed results of each key component of the proposed solution, the validity of the approach is demonstrated by extensive field tests that proved the suitability of the system in providing a scalable, adaptable and extensible data collection layer for managing and understanding mobility in smart cities.

  1. An Intelligent Cooperative Visual Sensor Network for Urban Mobility

    Science.gov (United States)

    Leone, Giuseppe Riccardo; Petracca, Matteo; Salvetti, Ovidio; Azzarà, Andrea

    2017-01-01

    Smart cities are demanding solutions for improved traffic efficiency, in order to guarantee optimal access to mobility resources available in urban areas. Intelligent video analytics deployed directly on board embedded sensors offers great opportunities to gather highly informative data about traffic and transport, allowing reconstruction of a real-time neat picture of urban mobility patterns. In this paper, we present a visual sensor network in which each node embeds computer vision logics for analyzing in real time urban traffic. The nodes in the network share their perceptions and build a global and comprehensive interpretation of the analyzed scenes in a cooperative and adaptive fashion. This is possible thanks to an especially designed Internet of Things (IoT) compliant middleware which encompasses in-network event composition as well as full support of Machine-2-Machine (M2M) communication mechanism. The potential of the proposed cooperative visual sensor network is shown with two sample applications in urban mobility connected to the estimation of vehicular flows and parking management. Besides providing detailed results of each key component of the proposed solution, the validity of the approach is demonstrated by extensive field tests that proved the suitability of the system in providing a scalable, adaptable and extensible data collection layer for managing and understanding mobility in smart cities. PMID:29125535

  2. Visual Storytelling

    OpenAIRE

    Huang, Ting-Hao; Ferraro, Francis; Mostafazadeh, Nasrin; Misra, Ishan; Agrawal, Aishwarya; Devlin, Jacob; Girshick, Ross; He, Xiaodong; Kohli, Pushmeet; Batra, Dhruv; Zitnick, C. Lawrence; Parikh, Devi; Vanderwende, Lucy; Galley, Michel

    2016-01-01

    We introduce the first dataset for sequential vision-to-language, and explore how this data may be used for the task of visual storytelling. The first release of this dataset, SIND v.1, includes 81,743 unique photos in 20,211 sequences, aligned to both descriptive (caption) and story language. We establish several strong baselines for the storytelling task, and motivate an automatic metric to benchmark progress. Modelling concrete description as well as figurative and social language, as prov...

  3. Flow visualization

    International Nuclear Information System (INIS)

    Weinstein, L.M.

    1991-01-01

    Flow visualization techniques are reviewed, with particular attention given to those applicable to liquid helium flows. Three techniques capable of obtaining qualitative and quantitative measurements of complex 3D flow fields are discussed, including focusing schlieren, particle image velocimetry, and holocinematography (HCV). It is concluded that HCV appears to be uniquely capable of obtaining full time-varying 3D velocity field data, but is limited to the low speeds typical of liquid helium facilities. 8 refs

  4. Bio-physically plausible visualization of highly scattering fluorescent neocortical models for in silico experimentation

    KAUST Repository

    Abdellah, Marwan

    2017-02-15

    Background: We present a visualization pipeline capable of accurate rendering of highly scattering fluorescent neocortical neuronal models. The pipeline is mainly developed to serve the computational neurobiology community. It allows the scientists to visualize the results of their virtual experiments that are performed in computer simulations, or in silico. The impact of the presented pipeline opens novel avenues for assisting the neuroscientists to build biologically accurate models of the brain. These models result from computer simulations of physical experiments that use fluorescence imaging to understand the structural and functional aspects of the brain. Due to the limited capabilities of the current visualization workflows to handle fluorescent volumetric datasets, we propose a physically-based optical model that can accurately simulate light interaction with fluorescent-tagged scattering media based on the basic principles of geometric optics and Monte Carlo path tracing. We also develop an automated and efficient framework for generating dense fluorescent tissue blocks from a neocortical column model that is composed of approximately 31000 neurons. Results: Our pipeline is used to visualize a virtual fluorescent tissue block of 50 μm³ that is reconstructed from the somatosensory cortex of juvenile rat. The fluorescence optical model is qualitatively analyzed and validated against experimental emission spectra of different fluorescent dyes from the Alexa Fluor family. Conclusion: We discussed a scientific visualization pipeline for creating images of synthetic neocortical neuronal models that are tagged virtually with fluorescent labels on a physically-plausible basis. The pipeline is applied to analyze and validate simulation data generated from neuroscientific in silico experiments.

  5. Receptive Fields and the Reconstruction of Visual Information.

    Science.gov (United States)

    1985-09-01

    depending on the noise. Thus our model would suggest that the interpolation filters for deblurring are playing a role in hyperacuity. This is novel... of additional precision in the information can be obtained by a process of deblurring, which could be relevant to hyperacuity. It also provides an... impulse of heat diffuses into increasingly larger Gaussian distributions as time proceeds. Mathematically, let f(x) denote the initial temperature

  6. Reconstruction, visualization and explorative analysis of human pluripotency network

    Directory of Open Access Journals (Sweden)

    Priyanka Narad

    2017-09-01

    Identification of genes/proteins involved in pluripotency and their inter-relationships is important for understanding the induction, loss and maintenance of pluripotency. With a large volume of data on the interaction/regulation of pluripotency scattered across a large number of biological databases and hundreds of scientific journals, a systematic integration of the data is required to create a complete view of the pluripotency network. Describing and interpreting such a network of interaction and regulation (i.e., stimulation and inhibition links) are essential tasks of computational biology and an important first step towards a systems-level understanding of the underlying mechanisms of pluripotency. To address this, we have assembled a network of 166 molecular interactions, stimulations and inhibitions, based on a collection of research data from 147 publications, involving 122 human genes/proteins, all in a standard electronic format, enabling analyses by readily available software such as Cytoscape and its Apps (formerly called "Plugins"). The network includes the core circuit of OCT4 (POU5F1), SOX2 and NANOG, its periphery (such as STAT3, KLF4, UTF1, ZIC3, and c-MYC), connections to upstream signaling pathways (such as ACTIVIN, WNT, FGF, and BMP), and epigenetic regulators (such as L1TD1, LSD1 and PRC2). We describe the general properties of the network and compare it with other literature-based networks. Gene Ontology (GO) analysis was performed to find the over-represented GO terms in the network. We use several expression datasets to condense the network to a set of network links that identify the key players (genes/proteins) and the pathways involved in the transition from one state of pluripotency to another (i.e., naive to primed state, primed to non-pluripotent state, and pluripotent to non-pluripotent state).

  7. REVEAL: Reconstruction, Enhancement, Visualization, and Ergonomic Assessment for Laparoscopy

    Science.gov (United States)

    2008-08-01

    (2007) Ergonomic risk of assisting in minimally invasive surgery. Annual conference of SAGES 2008: Park AE, Meenaghan N, Lee TH, Seagull FJ, Lee G... of NOTES techniques: a study of physical and mental workload, body movement and posture. Adrian Park, Gyusung Lee, Carlos Godinez, F. Jacob Seagull

  8. Integration of intraoperative stereovision imaging for brain shift visualization during image-guided cranial procedures

    Science.gov (United States)

    Schaewe, Timothy J.; Fan, Xiaoyao; Ji, Songbai; Roberts, David W.; Paulsen, Keith D.; Simon, David A.

    2014-03-01

    Dartmouth and Medtronic Navigation have established an academic-industrial partnership to develop, validate, and evaluate a multi-modality neurosurgical image-guidance platform for brain tumor resection surgery that is capable of updating the spatial relationships between preoperative images and the current surgical field. A stereovision system has been developed and optimized for intraoperative use through integration with a surgical microscope and an image-guided surgery system. The microscope optics and stereovision CCD sensors are localized relative to the surgical field using optical tracking and can efficiently acquire stereo image pairs from which a localized 3D profile of the exposed surface is reconstructed. This paper reports the first demonstration of intraoperative acquisition, reconstruction and visualization of 3D stereovision surface data in the context of an industry-standard image-guided surgery system. The integrated system is capable of computing and presenting a stereovision-based update of the exposed cortical surface in less than one minute. Alternative methods for visualization of high-resolution, texture-mapped stereovision surface data are also investigated with the objective of determining the technical feasibility of direct incorporation of intraoperative stereo imaging into future iterations of Medtronic's navigation platform.
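
The core geometric step behind such stereovision surface reconstruction, recovering a 3D point from a matched pixel pair in rectified stereo images, is compact. The camera parameters below are arbitrary example values, not those of the tracked-microscope rig described here:

```python
def triangulate(xl, xr, y, f_px, baseline_mm, cx, cy):
    """Rectified-stereo triangulation: the horizontal disparity d = xl - xr
    gives depth Z = f * B / d; X and Y follow from the pinhole model."""
    d = xl - xr
    Z = f_px * baseline_mm / d
    X = (xl - cx) * Z / f_px
    Y = (y - cy) * Z / f_px
    return X, Y, Z

# Round-trip check with a synthetic point at (20, -10, 400) mm:
f, B, cx, cy = 800.0, 50.0, 320.0, 240.0
xl = cx + f * 20.0 / 400.0       # projected column in the left image
xr = xl - f * B / 400.0          # disparity of 100 px
yv = cy + f * (-10.0) / 400.0    # projected row (same in both images)
X, Y, Z = triangulate(xl, xr, yv, f, B, cx, cy)
```

Applying this to every matched pixel pair yields the localized 3D surface profile; the optical tracking of the microscope then places that profile in the image-guidance coordinate frame.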

  9. Reconstructed coronal views of CT and isotopic images of the pancreas

    International Nuclear Information System (INIS)

    Kasuga, Toshio; Kobayashi, Toshio; Nakanishi, Fumiko

    1980-01-01

    To compare functional images of the pancreas obtained by scintigraphy with morphological views obtained by CT, coronal CT views of the pancreas were reconstructed. Because the coronal views were reconstructed from routine scans, longitudinal spatial resolution was a limitation; however, almost satisfactory total images of the pancreas were obtained with adequate image enhancement. In 27 patients with confirmed diseases, the reconstructed coronal CT views made it easy to compare pancreatic scintigrams with pancreatic CT images, and provided information that could not be obtained from the original CT images. In particular, defects on pancreatic images and pancreatic shapes that had not been visualized clearly by scintigraphy alone could be visualized using the reconstructed coronal CT views of the pancreas. (Tsunoda, M.)

  10. Automated comparison of Bayesian reconstructions of experimental profiles with physical models

    International Nuclear Information System (INIS)

    Irishkin, Maxim

    2014-01-01

    In this work we developed an expert system that carries out in an integrated and fully automated way i) a reconstruction of plasma profiles from the measurements, using Bayesian analysis ii) a prediction of the reconstructed quantities, according to some models and iii) an intelligent comparison of the first two steps. This system includes systematic checking of the internal consistency of the reconstructed quantities, enables automated model validation and, if a well-validated model is used, can be applied to help detecting interesting new physics in an experiment. The work shows three applications of this quite general system. The expert system can successfully detect failures in the automated plasma reconstruction and provide (on successful reconstruction cases) statistics of agreement of the models with the experimental data, i.e. information on the model validity. (author) [fr

  11. Algebraic reconstruction techniques for spectral reconstruction in diffuse optical tomography

    International Nuclear Information System (INIS)

    Brendel, Bernhard; Ziegler, Ronny; Nielsen, Tim

    2008-01-01

    Reconstruction in diffuse optical tomography (DOT) necessitates solving the diffusion equation, which is nonlinear with respect to the parameters that have to be reconstructed. Currently applied solving methods are based on linearization of the equation. For spectral three-dimensional reconstruction, the emerging equation system is too large for direct inversion, but the application of iterative methods is feasible. The computational effort and speed of convergence of these iterative methods are crucial, since they determine the computation time of the reconstruction. In this paper, the iterative methods algebraic reconstruction technique (ART) and conjugate gradients (CG), as well as a new modified ART method, are investigated for spectral DOT reconstruction. The aim of the modified ART scheme is to speed up convergence by considering the specific conditions of spectral reconstruction. As a result, it converges much faster to favorable results than the conventional ART and CG methods.
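
For reference, the classical ART (Kaczmarz) sweep at the heart of such schemes takes only a few lines: each measurement row projects the current estimate onto its hyperplane. This generic dense-matrix version is a sketch of the textbook algorithm on a random consistent system, not the authors' modified spectral variant:

```python
import numpy as np

def art(A, b, sweeps=200, relax=1.0):
    """ART/Kaczmarz: cyclic row-wise projections
    x <- x + relax * (b_i - a_i.x) / ||a_i||^2 * a_i."""
    x = np.zeros(A.shape[1])
    row_norm2 = (A * A).sum(axis=1)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(80, 40))   # toy overdetermined, consistent system
x_true = rng.normal(size=40)
b = A @ x_true
x_hat = art(A, b)
```

On a consistent, well-conditioned system the cyclic sweeps converge geometrically; the modified ART in the paper reorders and weights this basic update to exploit the structure of the spectral problem.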

  12. FACTAR validation

    International Nuclear Information System (INIS)

    Middleton, P.B.; Wadsworth, S.L.; Rock, R.C.; Sills, H.E.; Langman, V.J.

    1995-01-01

    A detailed strategy to validate fuel channel thermal mechanical behaviour codes for use in current power reactor safety analysis is presented. The strategy is derived from a validation process that has recently been adopted industry wide. The discussion focuses on the validation plan for one code, FACTAR, for application in assessing fuel channel integrity safety concerns during a large break loss of coolant accident (LOCA). (author)

  13. Success of Meniscal Repair at ACL Reconstruction

    Science.gov (United States)

    Toman, Charles; Spindler, Kurt P.; Dunn, Warren R.; Amendola, Annunziata; Andrish, Jack T.; Bergfeld, John A.; Flanigan, David; Jones, Morgan; Kaeding, Christopher C.; Marx, Robert G.; Matava, Matthew J.; McCarty, Eric C.; Parker, Richard D.; Wolcott, Michelle; Vidal, Armando; Wolf, Brian R.; Huston, Laura J.; Harrell, Frank E.; Wright, Rick W.

    2013-01-01

    Background: Meniscal repair is performed in an attempt to prevent posttraumatic arthritis resulting from meniscal dysfunction after meniscal tears. The socioeconomic implications of premature arthritis are significant in the young patient population. Investigations and techniques focusing on meniscus preservation and healing are now at the forefront of orthopaedic sports medicine. Hypothesis: Concomitant meniscal repair with ACL reconstruction (ACLR) is a durable and successful procedure at two-year follow-up. Study Design: Case series; level of evidence, 4. Methods: All unilateral primary ACL reconstructions entered in 2002 in a prospective cohort who had meniscal repair at the time of ACLR were evaluated. Validated patient-oriented outcome instruments were completed preoperatively and again at the two-year postoperative time point. Reoperation after the index procedure was also documented and confirmed by operative reports. Results: 437 unilateral primary ACL reconstructions were performed with 86 concomitant meniscal repairs (57 medial, 29 lateral) in 84 patients during the study period. Patient follow-up was obtained for 94% (81/86) of the meniscal repairs, allowing confirmation of meniscal repair success (defined as no repeat arthroscopic procedure) or failure. The overall success rate for meniscal repairs was 96% (76/79 patients) at two-year follow-up. Conclusions: Meniscal repair is a successful procedure in conjunction with ACL reconstruction. When confronted with a "repairable" meniscal tear at the time of ACL reconstruction, orthopaedic surgeons can expect an estimated >90% clinical success rate at two-year follow-up using a variety of methods, as shown in our study. PMID:19465734

  14. Arctic Sea Level Reconstruction

    DEFF Research Database (Denmark)

    Svendsen, Peter Limkilde

    Reconstruction of historical Arctic sea level is very difficult due to the limited coverage and quality of tide gauge and altimetry data in the area. This thesis addresses many of these issues, and discusses strategies to help achieve a stable and plausible reconstruction of Arctic sea level from 1950 to today. The primary record of historical sea level, on the order of several decades to a few centuries, is tide gauges. Tide gauge records from around the world are collected in the Permanent Service for Mean Sea Level (PSMSL) database, which includes data along the Arctic coasts. A reasonable amount of data is available along the Norwegian and Russian coasts since 1950, and most published research on Arctic sea level extends cautiously from these areas. Very little tide gauge data is available elsewhere in the Arctic, and records of a length of several decades, as generally recommended for sea...

  15. Reconstructing warm inflation

    Science.gov (United States)

    Herrera, Ramón

    2018-03-01

    The reconstruction of a warm inflationary universe model from the scalar spectral index n_S(N) and the tensor-to-scalar ratio r(N), given as functions of the number of e-folds N, is studied. Under a general formalism we find the effective potential and the dissipative coefficient in terms of the cosmological parameters n_S and r, considering the weak and strong dissipative stages under the slow-roll approximation. As a specific example, we study the attractors for the index n_S given by n_S - 1 ∝ N^{-1} and for the ratio r ∝ N^{-2}, in order to reconstruct the model of warm inflation. Here, expressions for the effective potential V(φ) and the dissipation coefficient Γ(φ) are obtained.

  16. Jet Vertex Charge Reconstruction

    CERN Document Server

    Nektarijevic, Snezana; The ATLAS collaboration

    2015-01-01

    A newly developed algorithm called the jet vertex charge tagger, aimed at identifying the sign of the charge of jets containing b-hadrons, referred to as b-jets, is presented. In addition to the well established track-based jet charge determination, this algorithm introduces the so-called jet vertex charge reconstruction, which exploits the charge information associated with the displaced vertices within the jet. Furthermore, the charge of a soft muon contained in the jet is taken into account when available. All available information is combined into a multivariate discriminator. The algorithm has been developed on jets matched to generator-level b-hadrons provided by tt̄ events simulated at √s = 13 TeV using the full ATLAS detector simulation and reconstruction.

  17. Adaptive multiresolution method for MAP reconstruction in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Acar, Erman, E-mail: erman.acar@tut.fi [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland); Peltonen, Sari; Ruotsalainen, Ulla [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland)

    2016-11-15

    3D image reconstruction with electron tomography poses problems due to the severely limited range of projection angles and the low signal-to-noise ratio of the acquired projection images. Maximum a posteriori (MAP) reconstruction methods have been successful in compensating for the missing information and suppressing noise with their intrinsic regularization techniques. There are two major problems in MAP reconstruction methods: (1) selection of the regularization parameter that controls the balance between the data fidelity and the prior information, and (2) long computation time. One aim of this study is to provide an adaptive solution to the regularization parameter selection problem without requiring additional knowledge about the imaging environment and the sample. The other aim is to realize the reconstruction using sequences of resolution levels to shorten the computation time. The reconstructions were analyzed in terms of accuracy and computational efficiency using a simulated biological phantom and publicly available experimental datasets of electron tomography. The numerical and visual evaluations of the experiments show that the adaptive multiresolution method can provide more accurate results than the weighted back projection (WBP), simultaneous iterative reconstruction technique (SIRT), and sequential MAP expectation maximization (sMAPEM) methods. The method is superior to sMAPEM also in terms of computation time and usability, since it can reconstruct 3D images significantly faster without requiring any parameter to be set by the user. - Highlights: • An adaptive multiresolution reconstruction method is introduced for electron tomography. • The method provides more accurate results than the conventional reconstruction methods. • The missing wedge and noise problems can be compensated by the method efficiently.

  18. Segmentation-Driven Tomographic Reconstruction

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas

    The tomographic reconstruction problem is concerned with creating a model of the interior of an object from some measured data, typically projections of the object. After reconstructing an object it is often desired to segment it, either automatically or manually. For computed tomography (CT)... such that the segmentation subsequently can be carried out by use of a simple segmentation method, for instance just a thresholding method. We tested the advantages of going from a two-stage reconstruction method to a one-stage segmentation-driven reconstruction method for phase contrast tomography reconstruction...

  19. LHCb jet reconstruction

    International Nuclear Information System (INIS)

    Francisco, Oscar; Rangel, Murilo; Barter, William; Bursche, Albert; Potterat, Cedric; Coco, Victor

    2012-01-01

    Full text: The Large Hadron Collider (LHC) is the most powerful particle accelerator in the world. It has been designed to collide proton beams at an energy of up to 14 TeV in the center of mass. In 2011, data taking was done at a center-of-mass energy of 7 TeV; the instantaneous luminosity reached values greater than 4 × 10^32 cm^-2 s^-1 and the integrated luminosity reached 1.02 fb^-1 at LHCb. Jet reconstruction is fundamental for observing events that can be used to test perturbative QCD (pQCD). It also provides a way to observe standard model channels and to search for new physics such as SUSY. The anti-kt algorithm is a jet reconstruction algorithm based on the distance between particles in η × φ space and on the transverse momenta of the particles. To maximize the energy resolution, all information from the trackers and the calorimeters of the LHCb experiment is used to create objects called particle flow objects, which serve as input to the anti-kt algorithm. LHCb is especially interesting for jet studies because its η region is complementary to that of the other main experiments at the LHC. We will present the first results of jet reconstruction using 2011 LHCb data. (author)
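    The anti-kt distance measure described above can be illustrated with a minimal sketch. A simplified pt-weighted recombination stands in for the full four-vector E-scheme, and the particle list is invented; real analyses use FastJet:

```python
import math

# Minimal anti-kt sketch: d_ij = min(pt_i^-2, pt_j^-2) * dR_ij^2 / R^2 and
# beam distance d_iB = pt_i^-2. Particles are (pt, eta, phi) tuples.
def delta_r2(a, b):
    deta = a[1] - b[1]
    dphi = abs(a[2] - b[2])
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi   # wrap phi into [0, pi]
    return deta * deta + dphi * dphi

def anti_kt(particles, R=0.5):
    parts = [list(p) for p in particles]
    jets = []
    while parts:
        n = len(parts)
        d_beam, i_beam = min((parts[i][0] ** -2, i) for i in range(n))
        best = None
        for i in range(n):
            for j in range(i + 1, n):
                d = (min(parts[i][0] ** -2, parts[j][0] ** -2)
                     * delta_r2(parts[i], parts[j]) / R ** 2)
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best is not None and best[0] < d_beam:
            _, i, j = best                      # merge the closest pair
            pi, pj = parts[i], parts[j]
            pt = pi[0] + pj[0]                  # simplified recombination
            merged = [pt,
                      (pi[0] * pi[1] + pj[0] * pj[1]) / pt,
                      (pi[0] * pi[2] + pj[0] * pj[2]) / pt]
            parts = [p for k, p in enumerate(parts) if k not in (i, j)]
            parts.append(merged)
        else:
            jets.append(parts.pop(i_beam))      # promote to a final jet
    return jets

# two nearby particles cluster into one jet; the distant one stays separate
jets = anti_kt([(10.0, 0.0, 0.0), (5.0, 0.1, 0.0), (8.0, 2.0, 0.0)])
print(len(jets))  # → 2
```

Because the distance is weighted by the inverse squared pt, hard particles absorb nearby soft ones first, which is what gives anti-kt its regular, cone-like jets.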

  20. LHCb jet reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Francisco, Oscar; Rangel, Murilo [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil); Barter, William [University of Cambridge, Cambridge (United Kingdom); Bursche, Albert [Universitat Zurich, Zurich (Switzerland); Potterat, Cedric [Universitat de Barcelona, Barcelona (Spain); Coco, Victor [Nikhef National Institute for Subatomic Physics, Amsterdam (Netherlands)

    2012-07-01

    Full text: The Large Hadron Collider (LHC) is the most powerful particle accelerator in the world. It has been designed to collide proton beams at an energy of up to 14 TeV in the center of mass. In 2011, data taking was done at a center-of-mass energy of 7 TeV; the instantaneous luminosity reached values greater than 4 × 10^32 cm^-2 s^-1 and the integrated luminosity reached 1.02 fb^-1 at LHCb. Jet reconstruction is fundamental for observing events that can be used to test perturbative QCD (pQCD). It also provides a way to observe standard model channels and to search for new physics such as SUSY. The anti-kt algorithm is a jet reconstruction algorithm based on the distance between particles in η × φ space and on the transverse momenta of the particles. To maximize the energy resolution, all information from the trackers and the calorimeters of the LHCb experiment is used to create objects called particle flow objects, which serve as input to the anti-kt algorithm. LHCb is especially interesting for jet studies because its η region is complementary to that of the other main experiments at the LHC. We will present the first results of jet reconstruction using 2011 LHCb data. (author)

  1. 3D Membrane Imaging and Porosity Visualization

    KAUST Repository

    Sundaramoorthi, Ganesh

    2016-03-03

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore size in different layers orthogonal and parallel to the membrane surface. The 3D reconstruction additionally enabled the visualization of pore interconnectivity in different parts of the membrane. The method was demonstrated for a block copolymer porous membrane and can be extended to other membranes with applications in ultrafiltration, supports for forward osmosis, etc., offering a complete view of the transport paths in the membrane.

  2. Truth Seeded Reconstruction for Fast Simulation in the ATLAS Experiment

    CERN Document Server

    Jansky, Roland; Salzburger, Andreas

    The huge success of the ATLAS experiment for particle physics during Run 1 of the LHC would not have been possible without the production of vast amounts of simulated Monte Carlo data. However, the very detailed detector simulation is a highly CPU-intensive task, and thus resource shortages occurred. Motivated by this, great effort has been put into speeding up the simulation. As a result, other time-consuming parts became visible, one of which is track reconstruction. This thesis describes one potential solution to the CPU-intensive reconstruction of simulated data: a newly designed truth seeded reconstruction. At its basis is the idea to skip pattern recognition altogether, instead utilizing the available (truth) information from simulation to directly fit particle trajectories without searching for them. At the same time, tracking effects of the standard reconstruction need to be emulated. This approach is validated thoroughly and no critical deviations of the results compared to the standard reconst...

  3. Surreal aroma's (Reconstructing the volatile heritage of Marcel Duchamp)

    Directory of Open Access Journals (Sweden)

    Caro Verbeek

    2016-06-01

    Full Text Available No ‘visual’ artist addressed the sense of smell as often as Marcel Duchamp did. Whereas his solid objects can still be studied visually and textually, the scents he used have by now evaporated, and a vocabulary to describe them is lacking to this day. What we have left are nose-witness reports and the possibility to smell olfactory reconstructions. Rereading canonical texts with a more sensory gaze and inhaling these historical fragrances, such as cedar, erotic perfumes and coffee, will enable us to reconstruct the olfactory dimension of our highly ocularcentric history of art.

  4. [Reconstructive methods after Fournier gangrene].

    Science.gov (United States)

    Wallner, C; Behr, B; Ring, A; Mikhail, B D; Lehnhardt, M; Daigeler, A

    2016-04-01

    Fournier's gangrene is a variant of necrotizing fasciitis restricted to the perineal and genital region. It presents as an acute life-threatening disease and demands rapid surgical debridement, resulting in large soft tissue defects. Various reconstructive methods have to be applied to restore functionality and aesthetics. The objective of this work is to identify different reconstructive methods in the literature and compare them to our current concepts for reconstructing defects caused by Fournier gangrene. Analysis of the current literature and of our reconstructive methods for Fournier gangrene. Fournier gangrene is an emergency requiring rapid, calculated antibiotic treatment and radical surgical debridement. After the acute phase of the disease, appropriate reconstructive methods are indicated. The planning of the reconstruction of the defect depends on many factors, especially functional and aesthetic demands. Scrotal reconstruction requires a higher aesthetic and functional reconstructive degree than perineal cutaneous wounds. In general, thorough wound hygiene, proper pre-operative planning, and careful consideration of the patient's demands are essential for successful reconstruction. In the literature, various methods for reconstruction after Fournier gangrene are described. Reconstruction with a flap is required for a good functional result in complex regions such as the scrotum and penis, while cutaneous wounds can be managed through skin grafting. Patient compliance and tissue demand are crucial factors in the decision-making process.

  5. Visualizing phylogenetic tree landscapes.

    Science.gov (United States)

    Wilgenbusch, James C; Huang, Wen; Gallivan, Kyle A

    2017-02-02

    Genomic-scale sequence alignments are increasingly used to infer phylogenies in order to better understand the processes and patterns of evolution. Different partitions within these new alignments (e.g., genes, codon positions, and structural features) often favor hundreds if not thousands of competing phylogenies. Summarizing and comparing phylogenies obtained from multi-source data sets using current consensus tree methods discards valuable information and can disguise potential methodological problems. Discovery of efficient and accurate dimensionality reduction methods used to display at once in 2 or 3 dimensions the relationship among these competing phylogenies will help practitioners diagnose the limits of current evolutionary models and potential problems with phylogenetic reconstruction methods when analyzing large multi-source data sets. We introduce several dimensionality reduction methods to visualize in 2 and 3 dimensions the relationship among competing phylogenies obtained from gene partitions found in three mid- to large-size mitochondrial genome alignments. We test the performance of these dimensionality reduction methods by applying several goodness-of-fit measures. The intrinsic dimensionality of each data set is also estimated to determine whether projections in 2 and 3 dimensions can be expected to reveal meaningful relationships among trees from different data partitions. Several new approaches to aid in the comparison of different phylogenetic landscapes are presented. Curvilinear Components Analysis (CCA) and a stochastic gradient descent (SGD) optimization method give the best representation of the original tree-to-tree distance matrix for each of the three mitochondrial genome alignments and greatly outperformed the method currently used to visualize tree landscapes. The CCA + SGD method converged at least as fast as previously applied methods for visualizing tree landscapes. We demonstrate for all three mtDNA alignments that 3D
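    A common baseline for projecting a tree-to-tree distance matrix into 2D is classical multidimensional scaling via double-centering; a minimal sketch (an illustration only, not the paper's CCA + SGD method, and the input points are invented):

```python
import numpy as np

# Classical MDS: double-center the squared distance matrix to recover a Gram
# matrix, then embed along its top eigenvectors.
def classical_mds(D, dim=2):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# distances that truly come from 2-D points are recovered exactly
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
emb = classical_mds(D, dim=2)
D2 = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
print(np.allclose(D, D2))  # → True
```

For non-Euclidean tree distances (e.g., Robinson-Foulds), the discarded negative eigenvalues quantify exactly the distortion that the goodness-of-fit measures in the record assess.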

  6. Distributed 3-D iterative reconstruction for quantitative SPECT

    International Nuclear Information System (INIS)

    Ju, Z.W.; Frey, E.C.; Tsui, B.M.W.

    1995-01-01

    The authors describe a distributed three dimensional (3-D) iterative reconstruction library for quantitative single-photon emission computed tomography (SPECT). This library includes 3-D projector-backprojector pairs (PBPs) and distributed 3-D iterative reconstruction algorithms. The 3-D PBPs accurately and efficiently model various combinations of the image degrading factors including attenuation, detector response and scatter response. These PBPs were validated by comparing projection data computed using the projectors with that from direct Monte Carlo (MC) simulations. The distributed 3-D iterative algorithms spread the projection-backprojection operations for all the projection angles over a heterogeneous network of single or multi-processor computers to reduce the reconstruction time. Based on a master/slave paradigm, these distributed algorithms provide dynamic load balancing and fault tolerance. The distributed algorithms were verified by comparing images reconstructed using both the distributed and non-distributed algorithms. Computation times for distributed 3-D reconstructions running on up to 4 identical processors were reduced by a factor of approximately 80-90% of the number of participating processors, compared to those for non-distributed 3-D reconstructions running on a single processor. When combined with faster affordable computers, this library provides an efficient means for implementing accurate reconstruction and compensation methods to improve quality and quantitative accuracy in SPECT images
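    The idea of spreading projection-backprojection operations over workers by projection angle can be sketched with threads standing in for the paper's network of machines; all names, operators, and sizes below are invented stand-ins:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Each worker handles a subset of projection angles; partial backprojections
# are summed by the master. Threads stand in for networked processors.
rng = np.random.default_rng(1)
n_angles, n_pix = 12, 64
A = [rng.random((n_pix, n_pix)) for _ in range(n_angles)]  # per-angle operators
proj = [rng.random(n_pix) for _ in range(n_angles)]        # per-angle data

def backproject(angle):
    # one work unit: backproject a single angle's data
    return A[angle].T @ proj[angle]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(backproject, range(n_angles)))
distributed = sum(partials)

serial = sum(A[a].T @ proj[a] for a in range(n_angles))
print(np.allclose(distributed, serial))  # → True
```

The per-angle decomposition is what makes dynamic load balancing natural: a master can hand out angles one at a time, so a slow or failed worker only delays its current angle.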

  7. High resolution SPECT imaging for visualization of intratumoral heterogeneity using a SPECT/CT scanner dedicated for small animal imaging

    International Nuclear Information System (INIS)

    Umeda, Izumi O.; Tani, Kotaro; Tsuda, Keisuke

    2012-01-01

    Tumor interiors are never homogeneous, and in vivo visualization of intratumoral heterogeneity would be an innovation that contributes to improved cancer therapy. However, conventional nuclear medicine tests have failed to visualize heterogeneity in vivo because of limited spatial resolution. Recently developed single photon emission computed tomographic (SPECT) scanners dedicated to small animal imaging are of interest due to their excellent spatial resolution; imaging conditions were studied here using 111In and simulations of actual small animal imaging. The optimal conditions obtained were validated by in vivo imaging of sarcoma 180-bearing mice. Larger numbers of counts must be obtained within a limited acquisition time to visualize tumor heterogeneity in vivo in animal imaging, compared to cases that simply detect tumors. At an acquisition time of 30 min, better image quality was obtained with pinhole apertures of 1.4 mm diameter than of 1.0 mm. The best spatial resolution obtained was 1.3 mm, which was acceptable for our purpose, though slightly worse than the best possible performance of the scanner (1.0 mm). Additionally, the reconstruction parameters, such as noise suppression, voxel size, and iteration/subset number, needed to be optimized under the limited conditions and differed from those found under the ideal condition. The minimal radioactivity concentration for visualization of heterogeneous tumor interiors was estimated to be as high as 0.2-0.5 MBq/mL. Liposomes containing 111In met this requirement and were administered to tumor-bearing mice. SPECT imaging successfully showed heterogeneous 111In distribution within the tumors in vivo with good spatial resolution. A threshold of 0.2 MBq/g for clear visualization of tumor heterogeneity was validated. Ex vivo autoradiograms of excised tumors confirmed that the in vivo SPECT images accurately depicted the heterogeneous intratumoral accumulation of liposomes. Intratumoral heterogeneity was successfully visualized under the optimized

  8. Visual and Verbal Learning in a Genetic Metabolic Disorder

    Science.gov (United States)

    Spilkin, Amy M.; Ballantyne, Angela O.; Trauner, Doris A.

    2009-01-01

    Visual and verbal learning in a genetic metabolic disorder (cystinosis) were examined in the following three studies. The goal of Study I was to provide a normative database and establish the reliability and validity of a new test of visual learning and memory (Visual Learning and Memory Test; VLMT) that was modeled after a widely used test of…

  9. Homotopy Based Reconstruction from Acoustic Images

    DEFF Research Database (Denmark)

    Sharma, Ojaswa

    …of the inherent arrangement. The problem of reconstruction from arbitrary cross sections is a generic problem and is also shown to be solved here using the mathematical tool of continuous deformations. As part of a complete processing pipeline, segmentation using level set methods is explored for acoustic images, and fast GPU (Graphics Processing Unit) based methods are suggested for streaming computation on large volumes of data. Validation of results for acoustic images is not straightforward due to unavailability of ground truth. Accuracy figures for the suggested methods are provided using phantom objects…

  10. Pore REconstruction and Segmentation (PORES) method for improved porosity quantification of nanoporous materials

    Energy Technology Data Exchange (ETDEWEB)

    Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-01-15

    Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. Especially accurate quantification of pore-space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification. Furthermore, most common approaches require subjective manual user input. In this paper, the PORES algorithm “POre REconstruction and Segmentation” is introduced; it is a tailor-made, integral approach for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It classifies the interior region as pore space during reconstruction, while reconstructing the remaining region by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.

  11. Adaptive wavelet tight frame construction for accelerating MRI reconstruction

    Directory of Open Access Journals (Sweden)

    Genjiao Zhou

    2017-09-01

    Full Text Available The sparsity regularization approach, which assumes that the image of interest is likely to have a sparse representation in some transform domain, has been an active research area in image processing and medical image reconstruction. Although various sparsifying transforms have been used in medical image reconstruction, such as wavelets, contourlets, and total variation (TV), the efficiency of these transforms typically relies on the special structure of the underlying image. A better way to address this issue is to learn an overcomplete dictionary from the input data in order to obtain a better sparsifying transform for the underlying image. However, general overcomplete dictionaries do not satisfy the so-called perfect reconstruction property, which ensures that a given signal can be perfectly represented by its canonical coefficients in a manner similar to orthonormal bases; this makes iterative image reconstruction time-consuming. This work develops an adaptive wavelet tight frame method for magnetic resonance image reconstruction. The proposed scheme incorporates the adaptive wavelet tight frame approach into magnetic resonance image reconstruction by solving an l0-regularized minimization problem. Numerical results show that the proposed approach provides significant time savings as compared to over-complete dictionary based methods, with comparable performance in terms of both peak signal-to-noise ratio and subjective visual quality.
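    The perfect reconstruction property mentioned above, and the hard-thresholding step that an l0-regularized scheme applies to transform coefficients, can be illustrated with a single-level orthonormal Haar transform as a stand-in for the paper's adaptive tight frame (signal and threshold are invented):

```python
import numpy as np

# Single-level orthonormal Haar transform: haar_inv(haar_fwd(x)) == x is the
# perfect reconstruction property; hard thresholding is the l0 proximal step.
def haar_fwd(x):
    e, o = x[0::2], x[1::2]
    return np.concatenate([(e + o), (e - o)]) / np.sqrt(2.0)

def haar_inv(c):
    h = len(c) // 2
    a, d = c[:h], c[h:]
    x = np.empty(2 * h)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def hard_threshold(c, lam):
    out = c.copy()
    out[np.abs(out) < lam] = 0.0           # keep only large coefficients
    return out

x = np.array([4.0, 4.0, 2.0, 2.0, 8.0, 8.1, 1.0, 1.0])
print(np.allclose(haar_inv(haar_fwd(x)), x))           # → True
denoised = haar_inv(hard_threshold(haar_fwd(x), 0.5))  # small detail removed
```

Because the transform satisfies perfect reconstruction, the threshold-and-invert step is cheap and exact, which is precisely the efficiency advantage the record claims over general overcomplete dictionaries.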

  12. Treelink: data integration, clustering and visualization of phylogenetic trees.

    Science.gov (United States)

    Allende, Christian; Sohn, Erik; Little, Cedric

    2015-12-29

    Phylogenetic trees are central to a wide range of biological studies. In many of these studies, tree nodes need to be associated with a variety of attributes. For example, in studies concerned with viral relationships, tree nodes are associated with epidemiological information, such as location, age and subtype. Gene trees used in comparative genomics are usually linked with taxonomic information, such as functional annotations and events. A wide variety of tree visualization and annotation tools have been developed in the past; however, none of them is intended for an integrative and comparative analysis. Treelink is platform-independent software for linking datasets and sequence files to phylogenetic trees. The application allows an automated integration of datasets to trees for operations such as classifying a tree based on a field or showing the distribution of selected data attributes in branches and leaves. Genomic and proteomic sequences can also be linked to the tree and extracted from internal and external nodes. A novel clustering algorithm to simplify trees and display the most divergent clades was also developed, where validation can be achieved using the data integration and classification function. Integrated geographical information allows ancestral character reconstruction for phylogeographic plotting based on parsimony and likelihood algorithms. Our software can successfully integrate phylogenetic trees with different data sources, and perform operations to differentiate and visualize those differences within a tree. File support includes the most popular formats such as newick and csv. Exporting visualizations as images, cluster outputs and genomic sequences is supported. Treelink is available as a web and desktop application at http://www.treelinkapp.com .

  13. Visual Communications for Heterogeneous Networks/Visually Optimized Scalable Image Compression. Final Report for September 1, 1995 - February 28, 2002

    Energy Technology Data Exchange (ETDEWEB)

    Hemami, S. S.

    2003-06-03

    The authors developed image and video compression algorithms that provide scalability, reconstructibility, and network adaptivity, and developed compression and quantization strategies that are visually optimal at all bit rates. The goal of this research is to enable reliable "universal access" to visual communications over the National Information Infrastructure (NII). All users, regardless of their individual network connection bandwidths, qualities-of-service, or terminal capabilities, should have the ability to access still images, video clips, and multimedia information services, and to use interactive visual communications services. To do so requires special capabilities for image and video compression algorithms: scalability, reconstructibility, and network adaptivity. Scalability allows an information service to provide visual information at many rates, without requiring additional compression or storage after the stream has been compressed the first time. Reconstructibility allows reliable visual communications over an imperfect network. Network adaptivity permits real-time modification of compression parameters to adjust to changing network conditions. Furthermore, to optimize the efficiency of the compression algorithms, they should be visually optimal, where each bit expended reduces the visual distortion. Visual optimality is achieved through first extensive experimentation to quantify human sensitivity to supra-threshold compression artifacts and then incorporation of these experimental results into quantization strategies and compression algorithms.

  14. Deconstructing, Reconstructing, Preserving Paul E. Meehl's Legacy of Construct Validity

    Science.gov (United States)

    Maher, Brendan A.; Gottesman, Irving I.

    2005-01-01

    The question of the status of cause-and-effect explanations of human behavior that posit physically existing causative factors and those that, on the other hand, posit hypothetical entities in the form of "useful fictions" has a long history. The influence of the works of Jeremy Bentham and Hans Vaihinger, as well as the later influence of Francis…

  15. Visualization rhetoric: framing effects in narrative visualization.

    Science.gov (United States)

    Hullman, Jessica; Diakopoulos, Nicholas

    2011-12-01

    Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels-the data, visual representation, textual annotations, and interactivity-and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation. © 2011 IEEE

  16. Penile surgery and reconstruction.

    Science.gov (United States)

    Perovic, Sava V; Djordjevic, Miroslav L J; Kekic, Zoran K; Djakovic, Nenad G

    2002-05-01

    This review will highlight recent advances in the field of penile reconstructive surgery in the paediatric and adult population. It is based on the work published during the year 2001. Besides the anatomical and histological studies of the penis, major contributions have been described in congenital and acquired penile anomalies. Also, a few new techniques and modifications of old procedures are described in order to improve the final functional and aesthetic outcome. The techniques for penile enlargement present a trend in the new millennium, but are still at the stage of investigation.

  17. Progressive Reconstruction: A Methodology for Stabilization and Reconstruction Operations

    National Research Council Canada - National Science Library

    Rohr, Karl C

    2006-01-01

    ... these nations in accordance with stated United States' goals. The argument follows closely current and developing United States military doctrine on stabilization, reconstruction, and counterinsurgency operations...

  18. An open-source, self-explanatory touch screen in routine care. Validity of filling in the Bath measures on Ankylosing Spondylitis Disease Activity Index, Function Index, the Health Assessment Questionnaire and Visual Analogue Scales in comparison with paper versions

    DEFF Research Database (Denmark)

    Schefte, David B; Hetland, Merete L

    2010-01-01

    The Danish DANBIO registry has developed open-source software for touch screens in the waiting room. The objective was to assess the validity of outcomes from self-explanatory patient questionnaires on touch screen in comparison with the traditional paper form in routine clinical care....

  19. Synchronized dynamic dose reconstruction

    International Nuclear Information System (INIS)

    Litzenberg, Dale W.; Hadley, Scott W.; Tyagi, Neelam; Balter, James M.; Ten Haken, Randall K.; Chetty, Indrin J.

    2007-01-01

    Variations in target volume position between and during treatment fractions can lead to measurable differences in the dose distribution delivered to each patient. Current methods to estimate the ongoing cumulative delivered dose distribution make idealized assumptions about individual patient motion based on average motions observed in a population of patients. In the delivery of intensity modulated radiation therapy (IMRT) with a multi-leaf collimator (MLC), errors are introduced in both the implementation and delivery processes. In addition, target motion and MLC motion can lead to dosimetric errors from interplay effects. All of these effects may be of clinical importance. Here we present a method to compute delivered dose distributions for each treatment beam and fraction, which explicitly incorporates synchronized real-time patient motion data and real-time fluence and machine configuration data. This synchronized dynamic dose reconstruction method properly accounts for the two primary classes of errors that arise from delivering IMRT with an MLC: (a) Interplay errors between target volume motion and MLC motion, and (b) Implementation errors, such as dropped segments, dose over/under shoot, faulty leaf motors, tongue-and-groove effect, rounded leaf ends, and communications delays. These reconstructed dose fractions can then be combined to produce high-quality determinations of the dose distribution actually received to date, from which individualized adaptive treatment strategies can be determined

  20. LHCb; LHCb Jet Reconstruction

    CERN Multimedia

    Augusto, O

    2012-01-01

    The Large Hadron Collider (LHC) is the most powerful particle accelerator in the world. It has been designed to collide proton beams at an energy of up to 14 TeV in the center of mass. In 2011, data taking was done at a center-of-mass energy of 7 TeV; the instantaneous luminosity reached values greater than $4 \times 10^{32} cm^{-2} s^{-1}$ and the integrated luminosity reached 1.02 $fb^{-1}$ at LHCb. Jet reconstruction is fundamental for observing events that can be used to test perturbative QCD (pQCD). It also provides a way to observe standard model channels and to search for new physics such as SUSY. The anti-kt algorithm is a jet reconstruction algorithm that is based on the distance of the particles in the space $\eta \times \phi$ and on the transverse momentum of particles. To maximize the energy resolution all information about the trackers and the calo...

  1. Virtual 3-D Facial Reconstruction

    Directory of Open Access Journals (Sweden)

    Martin Paul Evison

    2000-06-01

    Full Text Available Facial reconstructions in archaeology allow empathy with people who lived in the past and enjoy considerable popularity with the public. It is a common misconception that facial reconstruction will produce an exact likeness; a resemblance is the best that can be hoped for. Research at Sheffield University is aimed at the development of a computer system for facial reconstruction that will be accurate, rapid, repeatable, accessible and flexible. This research is described and prototypical 3-D facial reconstructions are presented. Interpolation models simulating obesity, ageing and ethnic affiliation are also described. Some strengths and weaknesses in the models, and their potential for application in archaeology are discussed.

  2. Entropy and transverse section reconstruction

    International Nuclear Information System (INIS)

    Gullberg, G.T.

    1976-01-01

    A new approach to the reconstruction of a transverse section using projection data from multiple views incorporates the concept of maximum entropy. The principle of maximizing information entropy embodies the assurance of minimizing bias or prejudice in the reconstruction. Using maximum entropy is a necessary condition for the reconstructed image. This entropy criterion is most appropriate for 3-D reconstruction of objects from projections where the system is underdetermined or the data are limited statistically. This is the case in nuclear medicine, where time limitations in patient studies do not yield sufficient projections.
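    One classical multiplicative scheme associated with entropy-favoring reconstruction from projections is MART (multiplicative ART), whose fixed point for a consistent nonnegative system is linked to maximum-entropy solutions. A toy sketch under invented data (an illustration, not the specific method of this record):

```python
import numpy as np

# MART: multiplicative updates keep the image strictly positive and, starting
# from a flat image, bias the solution toward high entropy. All data invented.
rng = np.random.default_rng(2)
A = rng.random((20, 15))        # toy projection matrix
x_true = rng.random(15) + 0.1   # strictly positive "image"
b = A @ x_true                  # consistent projection data

x = np.ones(15)                 # flat start = maximum-entropy prior
lam = 0.5                       # relaxation, keeps exponents below 1
for _ in range(200):
    for i in range(A.shape[0]):
        ratio = b[i] / (A[i] @ x)
        x *= ratio ** (lam * A[i] / A[i].max())

rel = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
print(rel)  # relative residual; x stays strictly positive throughout
```

The multiplicative form is what makes the entropy connection work: positivity is preserved automatically, with no projection step onto the feasible set.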

  3. Engagement sensitive visual stimulation

    Directory of Open Access Journals (Sweden)

    Deepesh Kumar

    2016-06-01

    Full Text Available Stroke is one of the leading causes of death and disability worldwide. Early detection during the golden hour and treatment of individual neurological dysfunction in stroke, using easy-to-access biomarkers based on a simple-to-use, cost-effective, clinically valid screening tool, can bring a paradigm shift to healthcare, both urban and rural. In our research we have designed a quantitative, automatic, home-based oculomotor assessment tool that can play an important complementary role for the neurologist in the prognosis of neurological disorders such as stroke. Once the patient has been screened for stroke, the next step is to design a proper rehabilitation platform to alleviate the disability. In addition to the screening platform, we are designing a virtual-reality-based rehabilitation exercise platform that has the potential to deliver visual stimulation and in turn contribute to improving the patient's performance.

  4. User Interface for the SMAC Traffic Accident Reconstruction Program

    Directory of Open Access Journals (Sweden)

    Rok Krulec

    2003-11-01

    Full Text Available This paper describes the development of the user interface for the traffic accident reconstruction program SMAC. Three basic modules of software will be presented. Initial parameters input and visualization, using a graphics library for simulation of 3D space, which form a graphical user interface, will be explained in more detail. The modules have been developed using different technologies and programming approaches to increase flexibility in further development and to take maximum advantage of the currently accessible computer hardware, so that module-to-module communication is also mentioned.

  5. 3D reconstruction of cystoscopy videos for comprehensive bladder records

    OpenAIRE

    Lurie, Kristen L.; Angst, Roland; Zlatev, Dimitar V.; Liao, Joseph C.; Ellerbee Bowden, Audrey K.

    2017-01-01

    White light endoscopy is widely used for diagnostic imaging of the interior of organs and body cavities, but the inability to correlate individual 2D images with 3D organ morphology limits its utility for quantitative or longitudinal studies of disease physiology or cancer surveillance. As a result, most endoscopy videos, which carry enormous data potential, are used only for real-time guidance and are discarded after collection. We present a computational method to reconstruct and visualize ...

  6. Digital representations of the real world how to capture, model, and render visual reality

    CERN Document Server

    Magnor, Marcus A; Sorkine-Hornung, Olga; Theobalt, Christian

    2015-01-01

    Create Genuine Visual Realism in Computer Graphics. Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality explains how to portray visual worlds with a high degree of realism using the latest video acquisition technology, computer graphics methods, and computer vision algorithms. It explores the integration of new capture modalities, reconstruction approaches, and visual perception into the computer graphics pipeline. Understand the Entire Pipeline from Acquisition, Reconstruction, and Modeling to Realistic Rendering and Applications: The book covers sensors fo

  7. Integrated Approach to Reconstruction of Microbial Regulatory Networks

    Energy Technology Data Exchange (ETDEWEB)

    Rodionov, Dmitry A [Sanford-Burnham Medical Research Institute; Novichkov, Pavel S [Lawrence Berkeley National Laboratory

    2013-11-04

    This project had the goal of developing an integrated bioinformatics platform for genome-scale inference and visualization of transcriptional regulatory networks (TRNs) in bacterial genomes. The work was done at the Sanford-Burnham Medical Research Institute (SBMRI, P.I. D.A. Rodionov) and Lawrence Berkeley National Laboratory (LBNL, co-P.I. P.S. Novichkov). The developed computational resources include: (1) the RegPredict web platform for TRN inference and regulon reconstruction in microbial genomes, and (2) the RegPrecise database for collection, visualization and comparative analysis of transcriptional regulons reconstructed by comparative genomics. These analytical resources were selected as key components in the DOE Systems Biology KnowledgeBase (SBKB). The high-quality data accumulated in RegPrecise will provide essential datasets of reference regulons in diverse microbes to enable automatic reconstruction of draft TRNs in newly sequenced genomes. We outline our progress toward the three aims of this grant proposal, which were: develop an integrated platform for genome-scale regulon reconstruction; infer regulatory annotations in several groups of bacteria and build reference collections of microbial regulons; and develop a KnowledgeBase on microbial transcriptional regulation.

  8. Visual literacy in HCI

    NARCIS (Netherlands)

    Overton, K.; Sosa-Tzec, O.; Smith, N.; Blevis, E.; Odom, W.; Hauser, S.; Wakkary, R.L.

    2016-01-01

    The goal of this workshop is to develop ideas about and expand a research agenda for visual literacy in HCI. By visual literacy, we mean the competency (i) to understand visual materials, (ii) to create visual materials, and (iii) to think visually [2]. There are three primary motivations for this

  9. A Distributed Compressive Sensing Scheme for Event Capture in Wireless Visual Sensor Networks

    Science.gov (United States)

    Hou, Meng; Xu, Sen; Wu, Weiling; Lin, Fei

    2018-01-01

    Image signals acquired by a wireless visual sensor network can be used for specific event capture. This event capture is realized by image processing at the sink node. A distributed compressive sensing scheme is used to transmit these image signals from the camera nodes to the sink node. A measurement and joint reconstruction algorithm for these image signals is proposed in this paper. Taking advantage of the spatial correlation between images within a sensing area, the cluster head node, which acts as the image decoder, can accurately co-reconstruct these image signals. The subjective visual quality and the reconstruction error rate are used to evaluate reconstructed image quality. Simulation results show that the joint reconstruction algorithm achieves higher image quality at the same image compression rate than the independent reconstruction algorithm.
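The compressive-sensing recovery underlying such a scheme can be illustrated with a single-signal sketch: reconstruct a sparse vector from a few random linear measurements by iterative soft-thresholding (ISTA). This is only the generic CS building block, not the paper's joint multi-camera algorithm; all names and parameters are illustrative.

```python
import numpy as np

def ista(Phi, y, lam=0.05, n_iter=1000):
    """Solve min_x 0.5*||y - Phi x||^2 + lam*||x||_1 by gradient
    steps on the quadratic followed by soft-thresholding."""
    L = np.linalg.norm(Phi, 2) ** 2      # Lipschitz constant of the smooth part
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = x + Phi.T @ (y - Phi @ x) / L              # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrink
    return x
```

A joint scheme would couple several such problems through a shared (correlated) component across cameras; the per-signal solver above is the piece each decoder runs.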

  10. Prospects of linear reconstruction in atomic resolution electron holographic tomography

    International Nuclear Information System (INIS)

    Krehl, Jonas; Lubk, Axel

    2015-01-01

    Tomography commonly requires a linear relation between the measured signal and the underlying specimen property; for Electron Holographic Tomography this is given by the Phase Grating Approximation (PGA). While largely valid at medium resolution, discrepancies arise at high resolution imaging conditions. We set out to investigate the artefacts that are produced if the reconstruction still assumes the PGA even with an atomic resolution tilt series. To forego experimental difficulties the holographic tilt series was simulated. The reconstructed electric potential clearly shows peaks at the positions of the atoms. These peaks have characteristic deformations, which can be traced back to the defocus a particular atom has in the holograms of the tilt series. Exchanging an atom for one of a different atomic number results in a significant change in the reconstructed potential that is well contained within the atom's peak. - Highlights: • We simulate a holographic tilt series of a nanocrystal with atomic resolution. • Using PGA-based Holographic Tomography we reconstruct the atomic structure. • The reconstruction shows characteristic artefacts, chiefly caused by defocus. • Changing one atom's Z produces a well localised change in the reconstruction.

  12. Industrial dynamic tomographic reconstruction; Reconstrucao tomografica dinamica industrial

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Eric Ferreira de

    2016-07-01

    The state-of-the-art methods applied to industrial processes are currently based on the principles of classical tomographic reconstruction developed for tomographic patterns of static distributions, or are limited to cases of low variability of the density distribution function of the tomographed object. Noise and motion artifacts are the main problems caused by a mismatch in the data from views acquired at different instants. All of these add to the known fact that using a limited amount of data can result in the presence of noise, artifacts and some inconsistencies with the distribution under study. One of the objectives of the present work is to discuss the difficulties that arise from applying reconstruction algorithms originally developed for static distributions to dynamic tomography. Another objective is to propose solutions that aim at reducing the temporal information loss caused by employing regular acquisition systems for dynamic processes. With respect to dynamic image reconstruction, a comparison was conducted between different static reconstruction methods, such as MART and FBP, when used in dynamic scenarios. This comparison was based on an MCNPx simulation as well as an analytical setup of an aluminum cylinder that moves along the section of a riser during the acquisition process, and also on cross-section images from CFD techniques. As for the adaptation of current tomographic acquisition systems to dynamic processes, this work established a sequence of tomographic views in a just-in-time fashion for visualization purposes, a way of displaying density information as soon as it becomes amenable to image reconstruction.
A third contribution was to take advantage of the three color channels needed to display colored images on most displays, so that, by appropriately scaling the acquired values of each view in the linear system of the reconstruction, it was possible to imprint a temporal trace into the regularly

  13. Breast reconstruction and post-mastectomy radiation practice

    International Nuclear Information System (INIS)

    Chen, Susie A; Hiley, Crispin; Nickleach, Dana; Petsuksiri, Janjira; Andic, Fundagul; Riesterer, Oliver; Switchenko, Jeffrey M; Torres, Mylin A

    2013-01-01

    cancer patients with reconstruction. Further research on the impact and delivery of radiation to a reconstructed breast may validate some of the observed practices, highlight the variability in treatment practice, and help create a treatment consensus

  14. Arctic sea-level reconstruction analysis using recent satellite altimetry

    DEFF Research Database (Denmark)

    Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg

    2014-01-01

    We present a sea-level reconstruction for the Arctic Ocean using recent satellite altimetry data. The model, forced by historical tide gauge data, is based on empirical orthogonal functions (EOFs) from a calibration period; for this purpose, newly retracked satellite altimetry from ERS-1 and -2...... and Envisat has been used. Despite the limited coverage of these datasets, we have made a reconstruction up to 82 degrees north for the period 1950–2010. We place particular emphasis on determining appropriate preprocessing for the tide gauge data, and on validation of the model, including the ability...

  15. Optimal reconstruction angles

    International Nuclear Information System (INIS)

    Cook, G.O. Jr.; Knight, L.

    1979-07-01

    The question of optimal projection angles has recently become of interest in the field of reconstruction from projections. Here, studies are concentrated on the n x n pixel space, where iterative algorithms such as ART and direct matrix techniques due to Katz are considered. The best angles are determined in a Gauss--Markov statistical sense as well as with respect to a function-theoretic error bound. The possibility of making photon intensity a function of angle is also examined. Finally, the best angles to use in an ART-like algorithm are studied. A certain set of unequally spaced angles was found to be preferred in several contexts. 15 figures, 6 tables
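ART itself is a row-action (Kaczmarz) scheme: the estimate is projected onto the hyperplane of each measurement in turn, so the chosen projection angles fix the geometry of those hyperplanes and hence the convergence behavior studied above. A minimal sketch (illustrative names, not the authors' code):

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=1.0):
    """Kaczmarz-style ART: successively project the estimate onto
    each measurement hyperplane a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    row_norms2 = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(len(b)):
            if row_norms2[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x
```

The convergence factor per sweep depends on the angles between the rows of A, which is exactly why the choice of projection angles matters.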

  16. The Perspective Structure of Visual Space

    Science.gov (United States)

    2015-01-01

    Luneburg’s model has been the reference for experimental studies of visual space for almost seventy years. His claim of a curved visual space has been a source of inspiration for visual scientists as well as philosophers. The conclusion of many experimental studies has been that Luneburg’s model does not describe visual space in various tasks and conditions. Remarkably, no alternative model has been suggested. The current study explores perspective transformations of Euclidean space as a model for visual space. Computations show that the geometry of perspective spaces is considerably different from that of Euclidean space. Collinearity but not parallelism is preserved in perspective space, and angles are not invariant under translation and rotation. Similar relationships have been shown to be properties of visual space. Alley experiments performed early in the twentieth century were instrumental in hypothesizing curved visual spaces. Alleys were computed in perspective space and compared with the reconstructed alleys of Blumenfeld. Parallel alleys were accurately described by perspective geometry. Accurate distance alleys were derived from parallel alleys by adjusting the interstimulus distances according to the size-distance invariance hypothesis. Agreement between computed and experimental alleys, and accommodation of experimental results that rejected Luneburg’s model, show that perspective space is an appropriate model for how we perceive orientations and angles. The model is also appropriate for perceived distance ratios between stimuli but fails to predict perceived distances. PMID:27648222
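The key geometric claim, that collinearity survives a perspective transformation while parallelism does not, can be checked numerically with a projective map. A small NumPy demonstration (the matrix H is an arbitrary example of our own, not a transform from the paper):

```python
import numpy as np

def apply_homography(H, pts):
    """Map (N, 2) points through a projective transform via homogeneous coordinates."""
    ph = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def cross2(u, v):
    """Scalar cross product of two 2-D vectors (zero iff parallel)."""
    return u[0] * v[1] - u[1] * v[0]

# An arbitrary perspective map: the nonzero bottom-row entries make it non-affine.
H = np.array([[1.0, 0.2, 0.0],
              [0.1, 1.0, 0.0],
              [0.3, 0.1, 1.0]])
```

Mapping three collinear points through H leaves them collinear, while the images of two parallel segments acquire non-parallel directions, mirroring the properties the abstract attributes to visual space.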

  17. Physics Validation of the LHC Software

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    The LHC Software will be confronted to unprecedented challenges as soon as the LHC will turn on. We summarize the main Software requirements coming from the LHC detectors, triggers and physics, and we discuss several examples of Software components developed by the experiments and the LCG project (simulation, reconstruction, etc.), their validation, and their adequacy for LHC physics.

  18. Evaluation of digital breast tomosynthesis reconstruction algorithms using synchrotron radiation in standard geometry

    International Nuclear Information System (INIS)

    Bliznakova, K.; Kolitsi, Z.; Speller, R. D.; Horrocks, J. A.; Tromba, G.; Pallikarakis, N.

    2010-01-01

    Purpose: In this article, the image quality of volumes reconstructed by four algorithms for digital tomosynthesis, applied in the case of the breast, is investigated using synchrotron radiation. Methods: An angular data set of 21 images of a complex phantom with heterogeneous tissue-mimicking background was obtained using the SYRMEP beamline at the ELETTRA Synchrotron Light Laboratory, Trieste, Italy. The irradiated part was reconstructed using the multiple projection algorithm (MPA), filtered backprojection with a ramp filter followed by a Hamming window (FBP-RH), and filtered backprojection with a ramp filter alone (FBP-R). Additionally, an algorithm for reducing the noise in reconstructed planes, based on subtracting a noise mask from the planes of the volume originally reconstructed with MPA (MPA-NM), has been further developed. The reconstruction techniques were evaluated by calculating and comparing the contrast-to-noise ratio (CNR) and the artifact spread function. Results: It was found that the MPA-NM resulted in higher CNR, comparable with the CNR of FBP-RH for high contrast details. Low contrast objects are well visualized and characterized by high CNR using the simple MPA and the MPA-NM. In addition, the image quality of the reconstructed features, in terms of CNR and visual appearance, was evaluated as a function of the initial number of projection images and the reconstruction arc. Slices reconstructed from more input projection images exhibit fewer reconstruction artifacts and higher detail CNR, while those reconstructed from projection images acquired over a reduced angular range show pronounced streak artifacts. Conclusions: Of the reconstruction algorithms implemented, the MPA-NM and MPA are a good choice for detecting low contrast objects, while the FBP-RH, FBP-R, and MPA-NM provide high CNR and well outlined edges in the case of microcalcifications.

  19. CRUCIATE LIGAMENT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    A. V. Korolev

    2016-01-01

    Full Text Available Purpose: To evaluate long-term results of meniscal repair during arthroscopic ACL reconstruction. Materials and methods: 45 patients who underwent meniscal repair during arthroscopic ACL reconstruction between 2007 and 2013 by the same surgeon were included in the study. In total, fifty menisci were repaired (26 medial and 24 lateral). Procedures included the use of one to four Fast-Fix implants (Smith & Nephew). In five cases both the medial and lateral menisci were repaired. The Cincinnati, IKDC and Lysholm scales were used for long-term outcome analysis. Results: 19 male and 26 female patients were included in the study, aged from 15 to 59 years (mean age 33.2±1.5). Median time from injury to surgical procedure was zero months (range zero to one). Mean time from surgery to scale analysis was 55.9±3 months (range 20-102). Median Cincinnati score was 97 (range 90-100), with excellent results in 93% of cases (43 patients) and good results in 7% (3 patients). Median IKDC score was 90.8 (range 86.2-95.4), with excellent outcomes in 51% of cases (23 patients), good in 33% (15 patients) and satisfactory in 16% (7 patients). Median Lysholm score was 95 (range 90-100), with excellent outcomes in 76% of cases (34 patients) and good in 24% (11 patients). The authors identified no statistically significant differences when comparing survey results by age, sex or time from trauma to surgery. Conclusions: The results of the present study match data from the orthopedic literature showing that meniscal repair is a safe and efficient procedure with good and excellent outcomes. All-inside meniscal repair can be used irrespective of patient age and is efficient even in the case of delayed procedures.

  20. Development, Validation and Integration of the ATLAS Trigger System Software in Run 2

    Science.gov (United States)

    Keyes, Robert; ATLAS Collaboration

    2017-10-01

    The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware, and software, associated with various sub-detectors, that must seamlessly cooperate in order to select one collision of interest out of every 40,000 delivered by the LHC every millisecond. These proceedings discuss the challenges, organization and work flow of the ongoing trigger software development, validation, and deployment. The goal of this development is to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. The goal of the validation is to ensure the reliability and predictability of the software performance. Integration tests are carried out to ensure that the software deployed to the online trigger farm during data-taking runs as desired. Trigger software is validated by emulating online conditions using a benchmark run and mimicking the reconstruction that occurs during normal data-taking. This exercise is computationally demanding and thus runs on the ATLAS high performance computing grid with high priority. Performance metrics, ranging from low-level memory and CPU requirements to distributions and efficiencies of high-level physics quantities, are visualized and validated by a range of experts. This is a multifaceted critical task that ties together many aspects of the experimental effort and thus directly influences the overall performance of the ATLAS experiment.

  1. Visual Memories Bypass Normalization.

    Science.gov (United States)

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores: neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.

  2. Reconstruction of electric systems (ELE)

    International Nuclear Information System (INIS)

    Kohutovic, P.

    2001-01-01

    The original design of WWER-230 units included a single common EEPS (essential electric power supply system) per unit. Establishing a redundant 2 x 100% EEPS was a global task. The task was started during the 'Small reconstruction' (MR V1), continued during the 'Gradual reconstruction', and finished in 2000. (author)

  3. Learning Visualizations by Analogy: Promoting Visual Literacy through Visualization Morphing.

    Science.gov (United States)

    Ruchikachorn, Puripant; Mueller, Klaus

    2015-09-01

    We propose the concept of teaching (and learning) unfamiliar visualizations by analogy, that is, demonstrating an unfamiliar visualization method by linking it to another more familiar one, where the in-betweens are designed to bridge the gap between these two visualizations and explain the difference in a gradual manner. As opposed to a textual description, our morphing explains an unfamiliar visualization through purely visual means. We demonstrate our idea by way of four visualization pair examples: data table and parallel coordinates, scatterplot matrix and hyperbox, linear chart and spiral chart, and hierarchical pie chart and treemap. The analogy is commutative, i.e., any member of the pair can be the unfamiliar visualization. A series of studies showed that this new paradigm can be an effective teaching tool. The participants could understand the unfamiliar visualization methods in all four pairs either fully or at least significantly better after they observed or interacted with the transitions from the familiar counterpart. The four examples suggest how helpful visualization pairings can be identified, and they will hopefully inspire other visualization morphings and associated transition strategies to be identified.

  4. Unsupervised Neural Network Quantifies the Cost of Visual Information Processing.

    Science.gov (United States)

    Orbán, Levente L; Chartier, Sylvain

    2015-01-01

    Untrained, "flower-naïve" bumblebees display behavioural preferences when presented with visual properties such as colour, symmetry, spatial frequency and others. Two unsupervised neural networks were implemented to understand the extent to which these models capture elements of bumblebees' unlearned visual preferences towards flower-like visual properties. The computational models, which are variants of Independent Component Analysis and Feature-Extracting Bidirectional Associative Memory, use images of test-patterns that are identical to ones used in behavioural studies. Each model works by decomposing images of floral patterns into meaningful underlying factors. We reconstruct the original floral image using the components and compare the quality of the reconstructed image to the original image. Independent Component Analysis matches behavioural results substantially better across several visual properties. These results are interpreted to support a hypothesis that the temporal and energetic costs of information processing by pollinators served as a selective pressure on floral displays: flowers adapted to pollinators' cognitive constraints.

  5. The Power of Popular Education and Visual Arts for Trauma Survivors' Critical Consciousness and Collective Action

    Science.gov (United States)

    Escueta, Mok; Butterwick, Shauna

    2012-01-01

    How can visual arts and popular education pedagogy contribute to collective recovery from and reconstruction after trauma? This question framed the design and delivery of the Trauma Recovery and Reconstruction Group (TRRG), which consisted of 12 group sessions delivered to clients (trauma survivors) of the Centre for Concurrent Disorders (CCD) in…

  6. Validation philosophy

    International Nuclear Information System (INIS)

    Vornehm, D.

    1994-01-01

    To determine when a set of calculations falls within the umbrella of existing validation documentation, it is necessary to generate a quantitative definition of range of applicability (our definition is only qualitative) for two reasons: (1) the current trend in our regulatory environment will soon make it impossible to support the legitimacy of a validation without quantitative guidelines; and (2) in my opinion, the lack of support by DOE for further critical experiment work is directly tied to our inability to draw a quantitative "line in the sand" beyond which we will not use computer-generated values

  7. Photogrammetric 3D reconstruction using mobile imaging

    Science.gov (United States)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for the pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  8. Method of reconstructing a moving pulse

    Energy Technology Data Exchange (ETDEWEB)

    Howard, S J; Horton, R D; Hwang, D Q; Evans, R W; Brockington, S J; Johnson, J [UC Davis Department of Applied Science, Livermore, CA, 94551 (United States)

    2007-11-15

    We present a method of analyzing a set of N time signals f_i(t) that consist of local measurements of the same physical observable taken at N sequential locations Z_i along the length of an experimental device. The result is an algorithm for reconstructing an approximation F(z,t) of the field f(z,t) in the inaccessible regions between the points of measurement. We also explore the conditions needed for this approximation to hold, and test the algorithm under a variety of conditions. We apply this method to analyze the magnetic field measurements taken on the Compact Toroid Injection eXperiment (CTIX) plasma accelerator, providing a direct means of visualizing experimental data, quantifying global properties, and benchmarking simulation.
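If the pulse is assumed to advect at a roughly constant speed v, one simple approximation of the field between probes is to time-shift each probe trace to the target position and blend the shifted copies by inverse distance. This is only an illustrative shift-and-blend sketch under that constant-speed assumption, with names of our own choosing, not the authors' algorithm:

```python
import numpy as np

def reconstruct_field(signals, z_probes, t, v, z_grid):
    """Shift-and-blend estimate of F(z, t): each probe trace is
    advected at speed v to position z, then probes are blended
    with inverse-distance weights."""
    F = np.zeros((len(z_grid), len(t)))
    for k, z in enumerate(z_grid):
        shifted = np.array([np.interp(t - (z - zi) / v, t, s, left=0.0, right=0.0)
                            for zi, s in zip(z_probes, signals)])
        w = 1.0 / (np.abs(np.asarray(z_probes) - z) + 1e-3)
        F[k] = (w[:, None] * shifted).sum(axis=0) / w.sum()
    return F
```

For a pulse that really does move rigidly at speed v, every shifted copy agrees and the blend reproduces the field exactly between the probes.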

  9. Online Plasma Shape Reconstruction for EAST Tokamak

    International Nuclear Information System (INIS)

    Luo Zhengping; Xiao Bingjia; Zhu Yingfei; Yang Fei

    2010-01-01

    An online plasma shape reconstruction, based on the offline version of the EFIT code and the MPI library, can be carried out between two adjacent shots in EAST. It combines online data acquisition, parallel calculation, and data storage. The program on the master node of the cluster promptly detects the termination of the discharge, reads diagnostic data from the EAST mdsplus server on completion of data storage, and writes the results to the EFIT mdsplus server after the calculation is finished. These processes run automatically on a nine-node IBM blade center. The total time elapsed is about 1 second to several minutes, depending on the duration of the shot. With the results stored in the mdsplus server, it is convenient for operators and physicists to analyze the behavior of the plasma using visualization tools.

  10. Three-dimensional reconstruction of CT images

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Toshiaki; Kattoh, Keiichi; Kawakami, Genichiroh; Igami, Isao; Mariya, Yasushi; Nakamura, Yasuhiko; Saitoh, Yohko; Tamura, Koreroku; Shinozaki, Tatsuyo

    1986-09-01

    Computed tomography (CT) has the ability to provide sensitive visualization of organs and lesions. Because CT produces transaxial images, a structure greater than a certain size appears across several serial CT images. Consequently, each observer must mentally reconstruct those images into a three-dimensional (3-D) form. It would be of great use if such a 3-D form could be described as a definite figure. A new computer program has been developed which can produce 3-D figures from the profiles of organs and lesions on CT images using spline curves. The figures obtained through this method are considered to have practical applications.

  11. Breast Reconstruction Following Cancer Treatment.

    Science.gov (United States)

    Gerber, Bernd; Marx, Mario; Untch, Michael; Faridi, Andree

    2015-08-31

    About 8000 breast reconstructions after mastectomy are performed in Germany each year. It has become more difficult to advise patients because of the wide variety of heterologous and autologous techniques that are now available and because of changes in the recommendations about radiotherapy. This article is based on a review of pertinent articles (2005-2014) that were retrieved by a selective search employing the search terms "mastectomy" and "breast reconstruction." The goal of reconstruction is to achieve an oncologically safe and aesthetically satisfactory result for the patient over the long term. Heterologous, i.e., implant-based, breast reconstruction (IBR) and autologous breast reconstruction (ABR) are complementary techniques. Immediate reconstruction preserves the skin of the breast and its natural form and prevents the psychological trauma associated with mastectomy. If post-mastectomy radiotherapy (PMRT) is not indicated, implant-based reconstruction with or without a net/acellular dermal matrix (ADM) is a common option. Complications such as seroma formation, infection, and explantation are significantly more common when an ADM is used (15.3% vs. 5.4%). If PMRT is performed, then the complication rate of implant-based breast reconstruction is 1% to 48%; in particular, Baker grade III/IV capsular fibrosis occurs in 7% to 22% of patients, and the prosthesis must be explanted in 9% to 41%. Primary or, preferably, secondary autologous reconstruction is an alternative. The results of ABR are more stable over the long term, but the operation is markedly more complex. Autologous breast reconstruction after PMRT does not increase the risk of serious complications (20.5% vs. 17.9% without radiotherapy). No randomized controlled trials have yet been conducted to compare the reconstructive techniques with each other. If radiotherapy will not be performed, immediate reconstruction with an implant is recommended.
On the other hand, if post-mastectomy radiotherapy

  12. Fast in vivo volume dose reconstruction via reference dose perturbation

    International Nuclear Information System (INIS)

    Lu, Weiguo; Chen, Mingli; Mo, Xiaohu; Parnell, Donald; Olivera, Gustavo; Galmarini, Daniel

    2014-01-01

    Purpose: Accurate on-line reconstruction of in vivo volume dose that accounts for both machine and patient discrepancy is not clinically available. We present a simple reference-dose-perturbation algorithm that reconstructs in vivo volume dose fast and accurately. Methods: We modelled the volume dose as a function of the fluence map and density image. Machine (output variation, jaw/leaf position errors, etc.) and patient (setup error, weight loss, etc.) discrepancies between the plan and delivery were modelled as perturbations of the fluence map and density image, respectively. Delivered dose is modelled as a perturbation of the reference dose due to changes in the fluence map and density image. We used both simulated and clinical data to validate the algorithm. The planned dose was used as the reference. The reconstruction was perturbed from the reference and accounted for output variations and the registered daily image. The reconstruction was compared with the ground truth via isodose lines and the Gamma Index. Results: For various plans and geometries, the volume doses were reconstructed in a few seconds. The reconstruction generally matched well with the ground truth. For the 3%/3mm criteria, the Gamma pass rates were 98% for simulations and 95% for clinical data. The differences mainly appeared on the surface of the phantom/patient. Conclusions: A novel reference-dose-perturbation dose reconstruction model is presented. The model accounts for machine and patient discrepancy from planning. The algorithm is simple, fast, yet accurate, which makes online in vivo 3D dose reconstruction clinically feasible.
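The reference-dose-perturbation idea — delivered dose approximated as the planned dose perturbed by fluence and density changes — can be sketched as follows. This is a first-order illustrative stand-in, not the authors' actual model; the function name and the simple density-ratio correction are assumptions:

```python
import numpy as np

def perturb_reference_dose(d_ref, fluence_ratio, density_ref, density_daily):
    """Perturb a reference dose grid by fluence and density changes.

    d_ref          : planned (reference) 3D dose grid
    fluence_ratio  : delivered/planned fluence scaling (scalar or per-voxel map)
    density_ref    : planning density image
    density_daily  : daily density image, registered to the plan
    """
    # First-order fluence perturbation: dose scales with delivered fluence
    # (captures output variations, jaw/leaf position errors, etc.).
    d = d_ref * fluence_ratio
    # First-order density perturbation: a crude local density-ratio scaling
    # standing in for the paper's full perturbation model (setup error,
    # weight loss, etc. appear as changes in the daily density image).
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(density_daily > 0, density_ref / density_daily, 1.0)
    return d * ratio
```

With identical planned and daily density images and a unit fluence ratio, the function returns the planned dose unchanged, as expected of a perturbation model.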

  13. Optical image reconstruction using DC data: simulations and experiments

    International Nuclear Information System (INIS)

    Huabei Jiang; Paulsen, K.D.; Oesterberg, U.L.

    1996-01-01

    In this paper, we explore optical image formation using a diffusion approximation of light propagation in tissue which is modelled with a finite-element method for optically heterogeneous media. We demonstrate successful image reconstruction based on absolute experimental DC data obtained with a continuous wave 633 nm He-Ne laser system and a 751 nm diode laser system in laboratory phantoms having two optically distinct regions. The experimental systems used exploit a tomographic type of data collection scheme that provides information from which a spatially variable optical property map is deduced. Reconstruction of scattering coefficient only and simultaneous reconstruction of both scattering and absorption profiles in tissue-like phantoms are obtained from measured and simulated data. Images with different contrast levels between the heterogeneity and the background are also reported and the results show that although it is possible to obtain qualitative visual information on the location and size of a heterogeneity, it may not be possible to quantitatively resolve contrast levels or optical properties using reconstructions from DC data only. Sensitivity of image reconstruction to noise in the measurement data is investigated through simulations. The application of boundary constraints has also been addressed. (author)

  14. Block Compressed Sensing of Images Using Adaptive Granular Reconstruction

    Directory of Open Access Journals (Sweden)

    Ran Li

    2016-01-01

    In the framework of block Compressed Sensing (CS), reconstruction algorithms based on the Smoothed Projected Landweber (SPL) iteration can achieve good rate-distortion performance at low computational complexity, especially when Principal Components Analysis (PCA) is used to perform adaptive hard-thresholding shrinkage. However, neglecting the stationary local structural characteristics of the image while learning the PCA matrix degrades the reconstruction performance of the Landweber iteration. To solve this problem, this paper first uses Granular Computing (GrC) to decompose an image into several granules according to the structural features of its patches. We then perform PCA to learn the sparse representation basis corresponding to each granule. Finally, hard-thresholding shrinkage is employed to remove the noise in the patches. Because the patches within a granule share stationary local structural characteristics, our method effectively improves the performance of the hard-thresholding shrinkage. Experimental results indicate that images reconstructed by the proposed algorithm have better objective quality than those produced by several traditional algorithms. Edge and texture details are better preserved, which guarantees better visual quality. In addition, our method retains a low computational complexity of reconstruction.
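The core step — hard-thresholding shrinkage of a granule's patches in a PCA basis learned from that granule — can be sketched as below. This is a minimal illustration under assumed names and shapes, not the paper's implementation:

```python
import numpy as np

def pca_hard_threshold(patches, tau):
    """Hard-thresholding shrinkage of image patches in a learned PCA basis.

    patches : (n_patches, patch_dim) array of structurally similar patches
              (one 'granule' in the paper's terminology)
    tau     : threshold; PCA coefficients with |c| < tau are zeroed as noise
    """
    mean = patches.mean(axis=0)
    centered = patches - mean
    # Learn the PCA basis from the granule itself: right singular vectors
    # of the centered patch matrix are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    coeffs = centered @ vt.T                  # project onto the PCA basis
    coeffs[np.abs(coeffs) < tau] = 0.0        # hard-thresholding shrinkage
    return coeffs @ vt + mean                 # reconstruct denoised patches
```

With tau = 0 the patches pass through unchanged; with a large tau everything collapses to the granule mean, which shows why grouping patches with stationary local structure matters: the threshold then removes noise rather than signal.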

  15. A workflow to process 3D+time microscopy images of developing organisms and reconstruct their cell lineage

    Science.gov (United States)

    Faure, Emmanuel; Savy, Thierry; Rizzi, Barbara; Melani, Camilo; Stašová, Olga; Fabrèges, Dimitri; Špir, Róbert; Hammons, Mark; Čúnderlík, Róbert; Recher, Gaëlle; Lombardot, Benoît; Duloquin, Louise; Colin, Ingrid; Kollár, Jozef; Desnoulez, Sophie; Affaticati, Pierre; Maury, Benoît; Boyreau, Adeline; Nief, Jean-Yves; Calvat, Pascal; Vernier, Philippe; Frain, Monique; Lutfalla, Georges; Kergosien, Yannick; Suret, Pierre; Remešíková, Mariana; Doursat, René; Sarti, Alessandro; Mikula, Karol; Peyriéras, Nadine; Bourgine, Paul

    2016-01-01

    The quantitative and systematic analysis of embryonic cell dynamics from in vivo 3D+time image data sets is a major challenge at the forefront of developmental biology. Despite recent breakthroughs in the microscopy imaging of living systems, producing an accurate cell lineage tree for any developing organism remains a difficult task. We present here the BioEmergences workflow integrating all reconstruction steps from image acquisition and processing to the interactive visualization of reconstructed data. Original mathematical methods and algorithms underlie image filtering, nucleus centre detection, nucleus and membrane segmentation, and cell tracking. They are demonstrated on zebrafish, ascidian and sea urchin embryos with stained nuclei and membranes. Subsequent validation and annotations are carried out using Mov-IT, a custom-made graphical interface. Compared with eight other software tools, our workflow achieved the best lineage score. Delivered in standalone or web service mode, BioEmergences and Mov-IT offer a unique set of tools for in silico experimental embryology. PMID:26912388

  16. CT angiography after carotid artery stenting: assessment of the utility of adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Kuya, Keita; Shinohara, Yuki; Fujii, Shinya; Ogawa, Toshihide [Tottori University, Division of Radiology, Department of Pathophysiological Therapeutic Science, Faculty of Medicine, Yonago (Japan); Sakamoto, Makoto; Watanabe, Takashi [Tottori University, Division of Neurosurgery, Department of Brain and Neurosciences, Faculty of Medicine, Yonago (Japan); Iwata, Naoki; Kishimoto, Junichi [Tottori University, Division of Clinical Radiology Faculty of Medicine, Yonago (Japan); Kaminou, Toshio [Osaka Minami Medical Center, Department of Radiology, Osaka (Japan)

    2014-11-15

    Follow-up CT angiography (CTA) is routinely performed for post-procedure management after carotid artery stenting (CAS). However, the stent lumen tends to be underestimated because of stent artifacts on CTA reconstructed with the filtered back projection (FBP) technique. We assessed the utility of new iterative reconstruction techniques, such as adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR), for CTA after CAS in comparison with FBP. In a phantom study, we evaluated the differences among the three reconstruction techniques with regard to the relationship between the stent luminal diameter and the degree of underestimation of stent luminal diameter. In a clinical study, 34 patients who underwent follow-up CTA after CAS were included. We compared the stent luminal diameters among FBP, ASIR, and MBIR, and performed visual assessment of low attenuation area (LAA) in the stent lumen using a three-point scale. In the phantom study, stent luminal diameter was increasingly underestimated as luminal diameter became smaller in all CTA images. Stent luminal diameter was larger with MBIR than with the other reconstruction techniques. Similarly, in the clinical study, stent luminal diameter was larger with MBIR than with the other reconstruction techniques. LAA detectability scores of MBIR were greater than or equal to those of FBP and ASIR in all cases. MBIR improved the accuracy of assessment of stent luminal diameter and LAA detectability in the stent lumen when compared with FBP and ASIR. We conclude that MBIR is a useful reconstruction technique for CTA after CAS. (orig.)

  17. CT angiography after carotid artery stenting: assessment of the utility of adaptive statistical iterative reconstruction and model-based iterative reconstruction

    International Nuclear Information System (INIS)

    Kuya, Keita; Shinohara, Yuki; Fujii, Shinya; Ogawa, Toshihide; Sakamoto, Makoto; Watanabe, Takashi; Iwata, Naoki; Kishimoto, Junichi; Kaminou, Toshio

    2014-01-01

    Follow-up CT angiography (CTA) is routinely performed for post-procedure management after carotid artery stenting (CAS). However, the stent lumen tends to be underestimated because of stent artifacts on CTA reconstructed with the filtered back projection (FBP) technique. We assessed the utility of new iterative reconstruction techniques, such as adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR), for CTA after CAS in comparison with FBP. In a phantom study, we evaluated the differences among the three reconstruction techniques with regard to the relationship between the stent luminal diameter and the degree of underestimation of stent luminal diameter. In a clinical study, 34 patients who underwent follow-up CTA after CAS were included. We compared the stent luminal diameters among FBP, ASIR, and MBIR, and performed visual assessment of low attenuation area (LAA) in the stent lumen using a three-point scale. In the phantom study, stent luminal diameter was increasingly underestimated as luminal diameter became smaller in all CTA images. Stent luminal diameter was larger with MBIR than with the other reconstruction techniques. Similarly, in the clinical study, stent luminal diameter was larger with MBIR than with the other reconstruction techniques. LAA detectability scores of MBIR were greater than or equal to those of FBP and ASIR in all cases. MBIR improved the accuracy of assessment of stent luminal diameter and LAA detectability in the stent lumen when compared with FBP and ASIR. We conclude that MBIR is a useful reconstruction technique for CTA after CAS. (orig.)

  18. A comparative study of three-dimensional reconstructive images of temporomandibular joint using computed tomogram

    International Nuclear Information System (INIS)

    Lim, Suk Young; Koh, Kwang Joon

    1993-01-01

    The purpose of this study was to clarify the spatial relationships of the temporomandibular joint (TMJ) and to aid in the diagnosis of temporomandibular disorders. Three-dimensional images of normal temporomandibular joints were reconstructed with a computer image analysis system and with the three-dimensional reconstruction program integrated in the computed tomography unit. The obtained results were as follows: 1. Two-dimensional computed tomograms had better resolution than three-dimensional computed tomograms in the evaluation of bone structure and the disk of the TMJ. 2. Direct sagittal computed tomograms and coronal computed tomograms had better resolution in the evaluation of the disk of the TMJ. 3. The positional relationship of the disk could be visualized, but the configuration of the disk could not be clearly visualized, on three-dimensional reconstructive CT images. 4. Three-dimensional reconstructive CT images had smoother margins than the three-dimensional images reconstructed by the computer image analysis system, but the images of the latter had better perspective. 5. Three-dimensional reconstructive images showed the spatial relationships of the TMJ articulation better, and the joint spaces were more clearly visualized on dissection images.

  19. A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.

    Science.gov (United States)

    Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe

    2018-01-01

    Four-dimensional cone beam computed tomography allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC), combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or a blank image. Reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. 4D iterations were implemented for a cluster of 8 GPUs. All developed methods allowed for an adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be a combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate a clinical use.

  20. Practical implementation of tetrahedral mesh reconstruction in emission tomography

    Science.gov (United States)

    Boutchko, R.; Sitek, A.; Gullberg, G. T.

    2013-05-01

    This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. 
Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio
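The representation described above — intensities defined at mesh nodes and linearly interpolated inside each tetrahedron — amounts to barycentric interpolation. A minimal sketch (function name and argument layout are assumptions, not from the paper):

```python
import numpy as np

def interp_tet(point, verts, node_vals):
    """Linearly interpolate node intensities inside a tetrahedron.

    point     : (3,) query position, assumed inside the tetrahedron
    verts     : (4, 3) vertex coordinates of the tetrahedron
    node_vals : (4,) intensities defined at the mesh nodes
    """
    # Barycentric coordinates w solve: point = sum_i w_i * verts[i],
    # subject to sum_i w_i = 1 (the appended row of ones).
    a = np.vstack([verts.T, np.ones(4)])            # 4x4 linear system
    b = np.append(np.asarray(point, float), 1.0)
    w = np.linalg.solve(a, b)                       # barycentric weights
    return w @ np.asarray(node_vals, float)
```

Because the interpolant is exactly linear, any affine intensity field is reproduced without error, which is what makes the analytical system-matrix formula mentioned in the abstract tractable.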

  1. Practical implementation of tetrahedral mesh reconstruction in emission tomography

    International Nuclear Information System (INIS)

    Boutchko, R; Gullberg, G T; Sitek, A

    2013-01-01

    This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. 
Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio

  2. A Survey on Methods for Reconstructing Surfaces from Unorganized Point Sets

    Directory of Open Access Journals (Sweden)

    Vilius Matiukas

    2011-08-01

    This paper addresses the issue of reconstructing and visualizing surfaces from unorganized point sets. Such sets can be acquired using different techniques, such as 3D laser scanning, computerized tomography, magnetic resonance imaging and multi-camera imaging. The problem of reconstructing surfaces from unorganized point sets is common to many diverse areas, including computer graphics, computer vision, computational geometry and reverse engineering. The paper presents three alternative methods that all use variations in complementary cones to triangulate and reconstruct the tested 3D surfaces, and it evaluates and contrasts the three alternatives.

  3. Few-view image reconstruction with dual dictionaries

    International Nuclear Information System (INIS)

    Lu Yang; Zhao Jun; Wang Ge

    2012-01-01

    In this paper, we formulate the problem of computed tomography (CT) under sparsity and few-view constraints, and propose a novel algorithm for image reconstruction from few-view data utilizing the simultaneous algebraic reconstruction technique (SART) coupled with dictionary learning, sparse representation and total variation (TV) minimization on two interconnected levels. The main feature of our algorithm is the use of two dictionaries: a transitional dictionary for atom matching and a global dictionary for image updating. The atoms in the global and transitional dictionaries represent the image patches from high-quality and low-quality CT images, respectively. Experiments with simulated and real projections were performed to evaluate and validate the proposed algorithm. The results reconstructed using the proposed approach are significantly better than those using either SART or SART–TV. (paper)
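The SART backbone of the dual-dictionary algorithm can be sketched as a single simultaneous algebraic update; the dictionary-learning and TV stages would be interleaved with passes like this one. The function name and dense-matrix formulation are illustrative assumptions:

```python
import numpy as np

def sart_update(x, A, b, relax=1.0):
    """One SART pass: simultaneous algebraic reconstruction update.

    x : current image estimate (n voxels, flattened)
    A : system matrix (m rays x n voxels)
    b : measured projections (m,)
    """
    row_sums = A.sum(axis=1)          # per-ray normalization weights
    col_sums = A.sum(axis=0)          # per-voxel normalization weights
    # Ray-normalized residual of the current estimate.
    resid = np.divide(b - A @ x, row_sums, out=np.zeros_like(b),
                      where=row_sums != 0)
    # Back-project the residual and normalize per voxel.
    corr = np.divide(A.T @ resid, col_sums, out=np.zeros_like(x),
                     where=col_sums != 0)
    return x + relax * corr
```

On a consistent system, repeated updates drive the projection residual to zero; the paper's contribution lies in what happens between such passes (atom matching against the transitional dictionary, image update from the global dictionary, and TV minimization).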

  4. Automatic Texture Optimization for 3D Urban Reconstruction

    Directory of Open Access Journals (Sweden)

    LI Ming

    2017-03-01

    In order to solve the problem of texture optimization in 3D city reconstruction from multi-lens oblique images, this paper presents a method for seamless texture model reconstruction. First, it corrects the radiometric information of the images using camera response functions and the image dark channel. Then, based on the correspondence between the terrain triangular mesh surface model and the images, it performs occlusion detection by a sparse triangulation method and builds the list of textures visible to each triangle. Finally, combining the topological relationships of the triangles in the 3D triangular mesh surface model with the means and variances of the images, it constructs a graph-cuts-based texture optimization algorithm under the MRF (Markov random field) framework to solve the discrete labeling problem of texture selection and clustering. This ensures the consistency of adjacent triangles in texture mapping and achieves seamless texture reconstruction of the city. The experimental results verify the validity and superiority of the proposed method.

  5. Simulation and reconstruction of free-streaming data in CBM

    International Nuclear Information System (INIS)

    Friese, Volker

    2011-01-01

    The CBM experiment will investigate heavy-ion reactions at the FAIR facility at unprecedented interaction rates. This implies a novel read-out and data acquisition concept with self-triggered front-end electronics and free-streaming data. Event association must be performed in software on-line, and may require four-dimensional reconstruction routines. In order to study the problem of event association and to develop proper algorithms, simulations must be performed which go beyond the normal event-by-event processing as available from most experimental simulation frameworks. In this article, we discuss the challenges and concepts for the reconstruction of such free-streaming data and present first steps for a time-based simulation which is necessary for the development and validation of the reconstruction algorithms, and which requires modifications to the current software framework FAIRROOT as well as to the data model.

  6. An alternative to the crystallographic reconstruction of austenite in steels

    International Nuclear Information System (INIS)

    Bernier, Nicolas; Bracke, Lieven; Malet, Loïc; Godet, Stéphane

    2014-01-01

    An alternative crystallographic austenite reconstruction programme written in Matlab is developed by combining the best features of the existing models: the orientation relationship refinement, the local pixel-by-pixel analysis and the nuclei identification and spreading strategy. This programme can be directly applied to experimental electron backscatter diffraction mappings. Its applicability is demonstrated on both quenching and partitioning and as-quenched lath-martensite steels. - Highlights: • An alternative crystallographic austenite reconstruction program is developed. • The method combines a local analysis and a nuclei identification/spreading strategy. • The validity of the calculated orientation relationship is verified on a Q and P steel. • The accuracy of the reconstructed microtexture is investigated on a martensite steel

  7. Evidence-Based ACL Reconstruction

    Directory of Open Access Journals (Sweden)

    E. Carlos RODRIGUEZ-MERCHAN

    2015-01-01

    There is controversy in the literature regarding a number of topics related to anterior cruciate ligament (ACL) reconstruction. The purpose of this article is to answer the following questions: (1) bone-patellar tendon-bone (BPTB) reconstruction or hamstring reconstruction (HR); (2) double bundle or single bundle; (3) allograft or autograft; (4) early or late reconstruction; (5) rate of return to sports after ACL reconstruction; (6) rate of osteoarthritis after ACL reconstruction. A Cochrane Library and PubMed (MEDLINE) search of systematic reviews and meta-analyses related to ACL reconstruction was performed. The key words were: ACL reconstruction, systematic reviews and meta-analysis. The main criterion for selection was that the articles were systematic reviews and meta-analyses focused on the aforementioned questions. Sixty-nine articles were found, but only 26 were selected and reviewed because they had a high grade (I-II) of evidence. BPTB reconstruction was associated with better postoperative knee stability but with a higher rate of morbidity. However, the results of both procedures in terms of long-term functional outcome were similar. The double-bundle ACL reconstruction technique showed better outcomes in rotational laxity, although functional recovery was similar between single-bundle and double-bundle reconstruction. Autograft yielded better results than allograft. There was no difference between early and delayed reconstruction. 82% of patients were able to return to some kind of sport participation. 28% of patients presented radiological signs of osteoarthritis at a follow-up of minimum 10 years.

  8. Reconstructing human evolution

    CERN Multimedia

    AUTHOR|(CDS)2074069

    1999-01-01

    One can reconstruct human evolution using modern genetic data and models based on the mathematical theory of evolution and its four major factors: mutation, natural selection, statistical fluctuations in finite populations (random genetic drift), and migration. Archaeology gives some help on the major dates and events of the process. Chances of studying ancient DNA are very limited but there have been a few successful results. Studying DNA instead of proteins, as was done until a few years ago, and in particular the DNA of mitochondria and of the Y chromosome which are transmitted, respectively, by the maternal line and the paternal line, has greatly simplified the analysis. It is now possible to carry the analysis on individuals, while earlier studies were of necessity based on populations. Also the evolution of "culture" (i.e. what we learn from others), in particular that of languages, gives some help and can be greatly enlightened by genetic studies. Even though it is largely based on mechanisms of mut...

  9. Coronary artery plaques: Cardiac CT with model-based and adaptive-statistical iterative reconstruction technique

    International Nuclear Information System (INIS)

    Scheffel, Hans; Stolzmann, Paul; Schlett, Christopher L.; Engel, Leif-Christopher; Major, Gyöngi Petra; Károlyi, Mihály; Do, Synho; Maurovich-Horvat, Pál; Hoffmann, Udo

    2012-01-01

    Objectives: To compare image quality of coronary artery plaque visualization at CT angiography with images reconstructed with filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model based iterative reconstruction (MBIR) techniques. Methods: The coronary arteries of three ex vivo human hearts were imaged by CT and reconstructed with FBP, ASIR and MBIR. Coronary cross-sectional images were co-registered between the different reconstruction techniques and assessed for qualitative and quantitative image quality parameters. Readers were blinded to the reconstruction algorithm. Results: A total of 375 triplets of coronary cross-sectional images were co-registered. Using MBIR, 26% of the images were rated as having excellent overall image quality, which was significantly better as compared to ASIR and FBP (4% and 13%, respectively, all p < 0.001). Qualitative assessment of image noise demonstrated a noise reduction by using ASIR as compared to FBP (p < 0.01) and further noise reduction by using MBIR (p < 0.001). The contrast-to-noise-ratio (CNR) using MBIR was better as compared to ASIR and FBP (44 ± 19, 29 ± 15, 26 ± 9, respectively; all p < 0.001). Conclusions: Using MBIR improved image quality, reduced image noise and increased CNR as compared to the other available reconstruction techniques. This may further improve the visualization of coronary artery plaque and allow radiation reduction.

  10. Reconstructing Topological Graphs and Continua

    OpenAIRE

    Gartside, Paul; Pitz, Max F.; Suabedissen, Rolf

    2015-01-01

    The deck of a topological space $X$ is the set $\mathcal{D}(X)=\{[X \setminus \{x\}] \colon x \in X\}$, where $[Z]$ denotes the homeomorphism class of $Z$. A space $X$ is topologically reconstructible if whenever $\mathcal{D}(X)=\mathcal{D}(Y)$ then $X$ is homeomorphic to $Y$. It is shown that all metrizable compact connected spaces are reconstructible. It follows that all finite graphs, when viewed as a 1-dimensional cell-complex, are reconstructible in the topological sense, and more genera...

  11. Tomographic reconstruction of binary fields

    International Nuclear Information System (INIS)

    Roux, Stéphane; Leclerc, Hugo; Hild, François

    2012-01-01

    A novel algorithm is proposed for reconstructing binary images from their projection along a set of different orientations. Based on a nonlinear transformation of the projection data, classical back-projection procedures can be used iteratively to converge to the sought image. A multiscale implementation allows for a faster convergence. The algorithm is tested on images up to 1 Mb definition, and an error free reconstruction is achieved with a very limited number of projection data, saving a factor of about 100 on the number of projections required for classical reconstruction algorithms.
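The scheme — a nonlinear transform wrapped around classical iterative back-projection so the iterates converge to a binary image — can be sketched as below. The sigmoid push toward {0, 1} is an illustrative stand-in for the paper's nonlinear transformation of the projection data, and all names and parameters are assumptions:

```python
import numpy as np

def binary_reconstruct(A, b, n_vox, iters=50, relax=0.5, sharp=5.0):
    """Iteratively reconstruct a binary image from its projections.

    Alternates an algebraic (Landweber-like) back-projection correction
    with a nonlinear transform that pushes voxel values toward the binary
    set {0, 1}.

    A     : system matrix (m rays x n_vox voxels)
    b     : projection data (m,)
    n_vox : number of voxels in the image
    """
    x = np.full(n_vox, 0.5)            # start undecided, halfway between 0 and 1
    scale = (A * A).sum()              # crude step-size normalization
    for _ in range(iters):
        # Data-fidelity correction via back-projection of the residual.
        x = x + relax * (A.T @ (b - A @ x)) / scale
        # Nonlinear push toward binarity: sigmoid centered at 0.5.
        x = 1.0 / (1.0 + np.exp(-sharp * (x - 0.5)))
    return x
```

Thresholding the result at 0.5 then yields the binary image; the binarity constraint is what lets such methods get away with far fewer projections than classical reconstruction.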

  12. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation

    International Nuclear Information System (INIS)

    Zhao, Zhanqi; Möller, Knut; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich

    2014-01-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton–Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR_C) and (4) GREIT with individual thorax geometry (GR_T). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in all examined indices among the four reconstruction algorithms (p > 0.2, Kruskal–Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that validated for images with one reconstruction algorithm are also valid for other reconstruction algorithms. (paper)
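One of the indices compared across reconstruction algorithms, the global inhomogeneity (GI) index, is commonly defined as the summed absolute deviation of the tidal impedance variation from its median over lung pixels, normalized by the total tidal variation. A minimal sketch (function and argument names are assumptions):

```python
import numpy as np

def global_inhomogeneity_index(tidal_image, lung_mask):
    """Global inhomogeneity (GI) index of an EIT tidal-variation image.

    tidal_image : 2D array of tidal impedance changes (DI) per pixel
    lung_mask   : boolean 2D array selecting the lung region

    GI = sum(|DI - median(DI)|) / sum(DI), over lung pixels only;
    higher values indicate more heterogeneous ventilation.
    """
    di = tidal_image[lung_mask]
    return np.abs(di - np.median(di)).sum() / di.sum()
```

Perfectly homogeneous ventilation gives GI = 0, and the index grows as ventilation concentrates in a few pixels, which is why it is a useful summary to test for algorithm-dependence.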

  13. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices validated for images reconstructed with one algorithm are also valid for the other reconstruction algorithms.

  14. Perceived visual informativeness (PVI): construct and scale development to assess visual information in printed materials.

    Science.gov (United States)

    King, Andy J; Jensen, Jakob D; Davis, LaShara A; Carcioppolo, Nick

    2014-01-01

    There is a paucity of research on the visual images used in health communication messages and campaign materials. Even though many studies suggest further investigation of these visual messages and their features, few studies provide specific constructs or assessment tools for evaluating the characteristics of visual messages in health communication contexts. The authors conducted 2 studies to validate a measure of perceived visual informativeness (PVI), a message construct assessing visual messages presenting statistical or indexical information. In Study 1, a 7-item scale was created that demonstrated good internal reliability (α = .91), as well as convergent and divergent validity with related message constructs such as perceived message quality, perceived informativeness, and perceived attractiveness. PVI also converged with a preference for visual learning but was unrelated to a person's actual vision ability. In addition, PVI exhibited concurrent validity with a number of important constructs including perceived message effectiveness, decisional satisfaction, and three key public health theory behavior predictors: perceived benefits, perceived barriers, and self-efficacy. Study 2 provided more evidence that PVI is an internally reliable measure and demonstrates that PVI is a modifiable message feature that can be tested in future experimental work. PVI provides an initial step to assist in the evaluation and testing of visual messages in campaign and intervention materials promoting informed decision making and behavior change.
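    The internal-reliability figure reported for the 7-item scale (α = .91) is Cronbach's alpha. A minimal sketch of how it is computed for a k-item scale, on synthetic responses (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# three perfectly consistent items -> alpha of exactly 1
base = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
perfect = np.column_stack([base, base, base])
alpha_perfect = cronbach_alpha(perfect)
```
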

  15. Introduction: Critical Visual Theory

    Directory of Open Access Journals (Sweden)

    Peter Ludes

    2014-03-01

    Full Text Available The studies selected for publication in this special issue on Critical Visual Theory can be divided into three thematic groups: (1) image making as power making, (2) commodification and recanonization, and (3) approaches to critical visual theory. The approaches to critical visual theory adopted by the authors of this issue may be subsumed under the following headings: (3.1) critical visual discourse and visual memes in general and Anonymous visual discourse in particular, (3.2) collective memory and gendered gaze, and (3.3) visual capitalism, global north and south.

  16. Sci-Sat AM(2): Brachy-07: Tomosynthesis-based seed reconstruction in LDR prostate brachytherapy: A clinical study.

    Science.gov (United States)

    Brunet-Benkhoucha, M; Verhaegen, F; Lassalle, S; Béliveau-Nadeau, D; Reniers, B; Donath, D; Taussky, D; Carrier, J-F

    2008-07-01

    To develop a tomosynthesis-based dose assessment procedure that can be performed after an I-125 prostate seed implantation, while the patient is still under anaesthesia on the treatment table. Our seed detection procedure involves the reconstruction of a volume of interest based on the backprojection of 7 seed-only binary images acquired over an angle of 60° with an isocentric imaging system. A binary seed-only volume is generated by a simple thresholding of the volume of interest. Seed positions are extracted from this volume with a 3D connected component analysis and a statistical classifier that determines the number of seeds in each cluster of connected voxels. A graphical user interface (GUI) allows the user to visualize the result and to introduce corrections, if needed. A phantom study and a clinical study (24 patients) were carried out to validate the technique. The phantom study demonstrated a very good localization accuracy of (0.4 ± 0.4) mm when compared to CT-based reconstruction. This leads to dosimetric errors on D90 and V100 of 0.5% and 0.1%, respectively. In a patient study with an average of 56 seeds per implant, the automatic tomosynthesis-based reconstruction yields a detection rate of 96% of the seeds and less than 1.5% false positives. With the help of the GUI, the user can achieve a 100% detection rate in an average of 3 minutes. This technique would allow possible underdosage to be identified and corrected by potentially reimplanting additional seeds. A more uniform dose coverage could then be achieved in LDR prostate brachytherapy. © 2008 American Association of Physicists in Medicine.
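    A sketch of the 3D connected-component step on a thresholded seed-only volume, using a plain 6-connected breadth-first search; the paper's statistical classifier for multi-seed clusters is reduced here to a simple voxel-count division, and all sizes are illustrative:

```python
import numpy as np
from collections import deque

def label_components(vol):
    """6-connected component labelling of a binary 3D volume."""
    labels = np.zeros(vol.shape, dtype=int)
    n = 0
    for seed in zip(*np.nonzero(vol)):
        if labels[seed]:
            continue
        n += 1
        labels[seed] = n
        queue = deque([seed])
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nb = (z + dz, y + dy, x + dx)
                if (all(0 <= nb[i] < vol.shape[i] for i in range(3))
                        and vol[nb] and not labels[nb]):
                    labels[nb] = n
                    queue.append(nb)
    return labels, n

# toy seed-only volume: one isolated seed and one two-seed cluster
vol = np.zeros((16, 16, 16), dtype=bool)
vol[2:4, 2:4, 2:4] = True        # 8 voxels ~ one seed (assumed seed size)
vol[8:10, 8:10, 8:12] = True     # 16 voxels ~ two touching seeds
labels, n_clusters = label_components(vol)
sizes = [int((labels == i).sum()) for i in range(1, n_clusters + 1)]
seeds_per_cluster = [round(s / 8) for s in sizes]
```
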

  17. Theoretical Analysis of Penalized Maximum-Likelihood Patlak Parametric Image Reconstruction in Dynamic PET for Lesion Detection.

    Science.gov (United States)

    Yang, Li; Wang, Guobao; Qi, Jinyi

    2016-04-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
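    The Patlak model underlying both routes linearises the tissue TAC as C(t)/Cp(t) = Ki · (∫₀ᵗ Cp dτ)/Cp(t) + V, so the indirect route reduces to a pixel-wise linear fit. A self-contained sketch on a synthetic, noise-free TAC (all kinetic values illustrative):

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y(t), same length as y."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

# synthetic plasma input and a tissue TAC generated from the Patlak model
t = np.linspace(0.1, 60.0, 120)          # minutes
cp = np.exp(-0.1 * t) + 0.2              # plasma concentration (arbitrary units)
ki_true, v_true = 0.05, 0.3              # illustrative Patlak slope / intercept
ct = ki_true * cumtrapz(cp, t) + v_true * cp

# indirect estimation: linear regression on the Patlak-transformed data
x = cumtrapz(cp, t) / cp                 # "stretched time"
y = ct / cp
ki_est, v_est = np.polyfit(x, y, 1)
```

The direct route instead folds this linear model into the tomographic forward operator and fits Ki and V from the sinograms themselves.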

  18. Cortical visual impairment

    OpenAIRE

    Koželj, Urša

    2013-01-01

    In this thesis we discuss cortical visual impairment, diagnosis that is in the developed world in first place, since 20 percent of children with blindness or low vision are diagnosed with it. The objectives of the thesis are to define cortical visual impairment and the definition of characters suggestive of the cortical visual impairment as well as to search for causes that affect the growing diagnosis of cortical visual impairment. There are a lot of signs of cortical visual impairment. ...

  19. Relativity of Visual Communication

    OpenAIRE

    Arto Mutanen

    2016-01-01

    Communication is sharing and conveying information. In visual communication especially visual messages have to be formulated and interpreted. The interpretation is relative to a method of information presentation method which is human construction. This holds also in the case of visual languages. The notions of syntax and semantics for visual languages are not so well founded as they are for natural languages. Visual languages are both syntactically and semantically dense. The density is conn...

  20. Greedy algorithms for diffuse optical tomography reconstruction

    Science.gov (United States)

    Dileep, B. P. V.; Das, Tapan; Dutta, Pranab K.

    2018-03-01

    Diffuse optical tomography (DOT) is a noninvasive imaging modality that reconstructs the optical parameters of a highly scattering medium. However, the inverse problem of DOT is ill-posed and highly nonlinear due to the zig-zag propagation of photons diffusing through the cross section of tissue. Conventional DOT imaging methods iteratively compute the solution of a forward diffusion equation solver, which makes the problem computationally expensive. These methods also fail when the geometry is complex. Recently, the theory of compressive sensing (CS) has received considerable attention because of its efficient use in biomedical imaging applications. The objective of this paper is to solve a given DOT inverse problem within the compressive sensing framework; various greedy algorithms, namely orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), stagewise orthogonal matching pursuit (StOMP), regularized orthogonal matching pursuit (ROMP) and simultaneous orthogonal matching pursuit (S-OMP), have been studied to reconstruct the change in the absorption parameter, i.e., Δα, from the boundary data. The greedy algorithms have also been validated experimentally on a paraffin wax rectangular phantom through a well-designed experimental setup. We have also studied conventional DOT methods, namely the least squares method and truncated singular value decomposition (TSVD), for comparison. One of the main features of this work is the use of a smaller number of source-detector pairs, which can facilitate the use of DOT in routine screening applications. The performance metrics mean square error (MSE), normalized mean square error (NMSE), structural similarity index (SSIM), and peak signal to noise ratio (PSNR) have been used to evaluate the performance of the algorithms mentioned in this paper. Extensive simulation results confirm that CS-based DOT reconstruction outperforms the conventional DOT imaging methods in terms of
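    A minimal orthogonal matching pursuit (OMP) sketch: at each step the atom most correlated with the residual joins the support, then the coefficients on the support are re-fitted by least squares. The tiny hand-made dictionary below is purely illustrative, not a DOT sensing matrix:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy k-sparse solution of y ~ A @ x.
    Assumes the columns of A are unit-norm."""
    residual = y.astype(float)
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

# four unit-norm atoms in R^3; y is built from atoms 0 and 2
A = np.array([[1.0, 0.0, 0.0, 0.6],
              [0.0, 1.0, 0.0, 0.8],
              [0.0, 0.0, 1.0, 0.0]])
y = 2.0 * A[:, 0] + 1.0 * A[:, 2]
x_rec = omp(A, y, k=2)
```

CoSaMP, StOMP and ROMP differ mainly in how many atoms are admitted per iteration and how the support is pruned.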

  1. A maximum entropy reconstruction technique for tomographic particle image velocimetry

    International Nuclear Information System (INIS)

    Bilsky, A V; Lozhkin, V A; Markovich, D M; Tokarev, M P

    2013-01-01

    This paper studies a novel approach for reducing the computational complexity of tomographic PIV. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents a theoretical comparison of the computational performance of MENT, SMART and MART, followed by validation on synthetic particle images. Both the theoretical assessment and the validation on synthetic images demonstrate a significant reduction in computational time. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART. (paper)
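    The algebraic, entropy-oriented flavour of these methods can be illustrated with a multiplicative MART-style update on a toy problem: a 2×2 "intensity image" constrained by two orthogonal projection sets (its row and column sums). This is a generic sketch, not the paper's MENT implementation:

```python
import numpy as np

# toy 2x2 image constrained by its row and column sums
row_sums = np.array([3.0, 1.0])
col_sums = np.array([2.0, 2.0])

x = np.ones((2, 2))                              # maximum-entropy starting point
for _ in range(50):
    x *= (row_sums / x.sum(axis=1))[:, None]     # enforce row projections
    x *= (col_sums / x.sum(axis=0))[None, :]     # enforce column projections

# for consistent data this converges to the maximum-entropy
# (I-divergence-minimising) solution r_i * c_j / total
expected = np.outer(row_sums, col_sums) / row_sums.sum()
```
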

  2. Contour interpolated radial basis functions with spline boundary correction for fast 3D reconstruction of the human articular cartilage from MR images

    International Nuclear Information System (INIS)

    Javaid, Zarrar; Unsworth, Charles P.; Boocock, Mark G.; McNair, Peter J.

    2016-01-01

    Purpose: The aim of this work is to demonstrate a new image processing technique that can provide a “near real-time” 3D reconstruction of the articular cartilage of the human knee from MR images which is user friendly. This would serve as a point-of-care 3D visualization tool which would benefit a consultant radiologist in the visualization of the human articular cartilage. Methods: The authors introduce a novel fusion of an adaptation of the contour method known as “contour interpolation (CI)” with radial basis functions (RBFs) which they describe as “CI-RBFs.” The authors also present a spline boundary correction which further enhances volume estimation of the method. A subject cohort consisting of 17 right nonpathological knees (ten female and seven male) is assessed to validate the quality of the proposed method. The authors demonstrate how the CI-RBF method dramatically reduces the number of data points required for fitting an implicit surface to the entire cartilage, thus, significantly improving the speed of reconstruction over the comparable RBF reconstruction method of Carr. The authors compare the CI-RBF method volume estimation to a typical commercial package (3D DOCTOR), Carr’s RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages. Results: The authors demonstrate how the CI-RBF method significantly reduces the number of data points (p-value < 0.0001) required for fitting an implicit surface to the cartilage, by 48%, 31%, and 44% for the patellar, tibial, and femoral cartilages, respectively. Thus, significantly improving the speed of reconstruction (p-value < 0.0001) by 39%, 40%, and 44% for the patellar, tibial, and femoral cartilages over the comparable RBF model of Carr providing a near real-time reconstruction of 6.49, 8.88, and 9.43 min for the patellar, tibial, and femoral cartilages, respectively. In addition, it is demonstrated how the CI-RBF method matches the volume

  3. Contour interpolated radial basis functions with spline boundary correction for fast 3D reconstruction of the human articular cartilage from MR images

    Energy Technology Data Exchange (ETDEWEB)

    Javaid, Zarrar; Unsworth, Charles P., E-mail: c.unsworth@auckland.ac.nz [Department of Engineering Science, The University of Auckland, Auckland 1010 (New Zealand); Boocock, Mark G.; McNair, Peter J. [Health and Rehabilitation Research Center, Auckland University of Technology, Auckland 1142 (New Zealand)

    2016-03-15

    Purpose: The aim of this work is to demonstrate a new image processing technique that can provide a “near real-time” 3D reconstruction of the articular cartilage of the human knee from MR images which is user friendly. This would serve as a point-of-care 3D visualization tool which would benefit a consultant radiologist in the visualization of the human articular cartilage. Methods: The authors introduce a novel fusion of an adaptation of the contour method known as “contour interpolation (CI)” with radial basis functions (RBFs) which they describe as “CI-RBFs.” The authors also present a spline boundary correction which further enhances volume estimation of the method. A subject cohort consisting of 17 right nonpathological knees (ten female and seven male) is assessed to validate the quality of the proposed method. The authors demonstrate how the CI-RBF method dramatically reduces the number of data points required for fitting an implicit surface to the entire cartilage, thus, significantly improving the speed of reconstruction over the comparable RBF reconstruction method of Carr. The authors compare the CI-RBF method volume estimation to a typical commercial package (3D DOCTOR), Carr’s RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages. Results: The authors demonstrate how the CI-RBF method significantly reduces the number of data points (p-value < 0.0001) required for fitting an implicit surface to the cartilage, by 48%, 31%, and 44% for the patellar, tibial, and femoral cartilages, respectively. Thus, significantly improving the speed of reconstruction (p-value < 0.0001) by 39%, 40%, and 44% for the patellar, tibial, and femoral cartilages over the comparable RBF model of Carr providing a near real-time reconstruction of 6.49, 8.88, and 9.43 min for the patellar, tibial, and femoral cartilages, respectively. In addition, it is demonstrated how the CI-RBF method matches the volume

  4. Study of DNA reconstruction enzymes

    Energy Technology Data Exchange (ETDEWEB)

    Sekiguchi, M [Kyushu Univ., Fukuoka (Japan). Faculty of Science

    1976-12-01

    The characteristics and mechanisms of three reconstruction enzymes, obtained from M. luteus, E. coli, or phage T4 and characterized as enzymes that reconstruct DNA irradiated with ultraviolet rays, are described. The characteristics covered include the site of strand breaking, the reaction, the molecular weight, the electric charge at neutrality, and a specific adhesion to DNA irradiated with ultraviolet rays. With respect to mutants of ultraviolet-ray sensitivity, the hereditary control mechanism of removal and reconstruction through endonuclease activity is described, with reference to removal and reconstruction in cells of xeroderma pigmentosum, a hereditary disease of humans. The mechanism of the exonuclease activity that selectively removes dimers from irradiated DNA is also described.

  5. Quantum Logic and Quantum Reconstruction

    OpenAIRE

    Stairs, Allen

    2015-01-01

    Quantum logic understood as a reconstruction program had real successes and genuine limitations. This paper offers a synopsis of both and suggests a way of seeing quantum logic in a larger, still thriving context.

  6. Reconstructing see-saw models

    International Nuclear Information System (INIS)

    Ibarra, Alejandro

    2007-01-01

    In this talk we discuss the prospects to reconstruct the high-energy see-saw Lagrangian from low energy experiments in supersymmetric scenarios. We show that the model with three right-handed neutrinos could be reconstructed in theory, but not in practice. Then, we discuss the prospects to reconstruct the model with two right-handed neutrinos, which is the minimal see-saw model able to accommodate neutrino observations. We identify the relevant processes to achieve this goal, and comment on the sensitivity of future experiments to them. We find the prospects much more promising and we emphasize in particular the importance of the observation of rare leptonic decays for the reconstruction of the right-handed neutrino masses

  7. Breast Reconstruction with Flap Surgery

    Science.gov (United States)

    ... augmented with a breast implant to achieve the desired breast size. Surgical methods Autologous tissue breast reconstruction ... as long as a year or two before feeling completely healed and back to normal. Future breast ...

  8. Rational reconstructions of modern physics

    CERN Document Server

    Mittelstaedt, Peter

    2013-01-01

    Newton’s classical physics and its underlying ontology are loaded with several metaphysical hypotheses that cannot be justified by rational reasoning nor by experimental evidence. Furthermore, it is well known that some of these hypotheses are not contained in the great theories of Modern Physics, such as the theory of Special Relativity and Quantum Mechanics. This book shows that, on the basis of Newton’s classical physics and by rational reconstruction, the theory of Special Relativity as well as Quantum Mechanics can be obtained by partly eliminating or attenuating the metaphysical hypotheses. Moreover, it is shown that these reconstructions do not require additional hypotheses or new experimental results. In the second edition the rational reconstructions are completed with respect to General Relativity and Cosmology. In addition, the statistics of quantum objects is elaborated in more detail with respect to the rational reconstruction of quantum mechanics. The new material completes the approach of t...

  9. Reconstruction of piano hammer force from string velocity.

    Science.gov (United States)

    Chaigne, Antoine

    2016-11-01

    A method is presented for reconstructing piano hammer forces through appropriate filtering of the measured string velocity. The filter design is based on the analysis of the pulses generated by the hammer blow and propagating along the string. In the five lowest octaves, the hammer force is reconstructed by considering two waves only: the incoming wave from the hammer and its first reflection at the front end. For the higher notes, four- or eight-wave schemes must be considered. The theory is validated on simulated string velocities by comparing imposed and reconstructed forces. The simulations are based on a nonlinear damped stiff string model previously developed by Chabassier, Chaigne, and Joly [J. Acoust. Soc. Am. 134(1), 648-665 (2013)]. The influence of absorption, dispersion, and amplitude of the string waves on the quality of the reconstruction is discussed. Finally, the method is applied to real piano strings. The measured string velocity is compared to the simulated velocity excited by the reconstructed force, showing a high degree of accuracy. A number of simulations are compared to simulated strings excited by a force derived from measurements of mass and acceleration of the hammer head. One application to an historic piano is also presented.

  10. Comparing 3-dimensional virtual methods for reconstruction in craniomaxillofacial surgery.

    Science.gov (United States)

    Benazzi, Stefano; Senck, Sascha

    2011-04-01

    In the present project, the virtual reconstruction of digitally osteotomized zygomatic bones was simulated using different methods. A total of 15 skulls were scanned using computed tomography, and a virtual osteotomy of the left zygomatic bone was performed. Next, virtual reconstructions of the missing part using mirror imaging (with and without best fit registration) and thin plate spline interpolation functions were compared with the original left zygomatic bone. In general, reconstructions using thin plate spline warping showed better results than the mirroring approaches. Nevertheless, when dealing with skulls characterized by a low degree of asymmetry, mirror imaging and subsequent registration can be considered a valid and easy solution for zygomatic bone reconstruction. The mirroring tool is one of the possible alternatives in reconstruction, but it might not always be the optimal solution (i.e., when the hemifaces are asymmetrical). In the present pilot study, we have verified that best fit registration of the mirrored unaffected hemiface and thin plate spline warping achieved better results in terms of fitting accuracy, overcoming the evident limits of the mirroring approach. Copyright © 2011 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
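    The mirror-with-registration step can be sketched as: reflect the unaffected side across the midsagittal plane, then best-fit the mirrored landmarks onto the target side with an orthogonal Procrustes (Kabsch) alignment. The landmarks below are synthetic and purely illustrative:

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t with Q ~ P @ R.T + t."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    H = Pc.T @ Qc
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(axis=0) - P.mean(axis=0) @ R.T
    return R, t

# landmarks of the "unaffected" side (illustrative, non-coplanar)
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0], [1.0, 2.0, 3.0]])
mirrored = P * np.array([-1.0, 1.0, 1.0])        # reflect across the x = 0 plane

# simulate the defect side: the mirrored landmarks, rotated and shifted
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = mirrored @ R_true.T + np.array([0.5, -0.2, 0.1])

R, t = kabsch(mirrored, Q)
aligned = mirrored @ R.T + t                     # registered mirrored hemiface
```

Thin plate spline warping goes further by allowing a smooth non-rigid deformation on top of this rigid alignment.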

  11. Parametric image reconstruction using spectral analysis of PET projection data

    International Nuclear Information System (INIS)

    Meikle, Steven R.; Matthews, Julian C.; Cunningham, Vincent J.; Bailey, Dale L.; Livieratos, Lefteris; Jones, Terry; Price, Pat

    1998-01-01

    Spectral analysis is a general modelling approach that enables calculation of parametric images from reconstructed tracer kinetic data independent of an assumed compartmental structure. We investigated the validity of applying spectral analysis directly to projection data, motivated by the advantages that: (i) the number of reconstructions is reduced by an order of magnitude and (ii) iterative reconstruction becomes practical, which may improve signal-to-noise ratio (SNR). A dynamic software phantom with typical 2-[11C]thymidine kinetics was used to compare projection-based and image-based methods and to assess bias-variance trade-offs using iterative expectation maximization (EM) reconstruction. We found that the two approaches are not exactly equivalent due to properties of the non-negative least-squares algorithm. However, the differences are small (for K1 and, to a lesser extent, VD). The optimal number of EM iterations was 15-30, with up to a two-fold improvement in SNR over filtered back projection. We conclude that projection-based spectral analysis with EM reconstruction yields accurate parametric images with high SNR and has potential application to a wide range of positron emission tomography ligands. (author)
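    Spectral analysis fits each time-activity curve as a non-negative combination of kinetic basis functions via non-negative least squares; in the sketch below plain decaying exponentials stand in for the convolved basis, and all rates and weights are illustrative:

```python
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0.0, 10.0, 40)              # frame mid-times (min)
betas = np.array([0.05, 0.3, 1.0, 3.0])     # assumed spectrum of decay rates
A = np.exp(-np.outer(t, betas))             # basis functions as columns

x_true = np.array([0.0, 0.7, 0.0, 0.2])     # sparse non-negative spectrum
tac = A @ x_true                            # noise-free tissue TAC

# non-negative least squares recovers the spectral coefficients
x_est, rnorm = nnls(A, tac)
```

The non-negativity constraint is what makes the image-based and projection-based routes differ slightly: NNLS is not a linear operation, so it does not commute with the reconstruction step.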

  12. Petz recovery versus matrix reconstruction

    Science.gov (United States)

    Holzäpfel, Milan; Cramer, Marcus; Datta, Nilanjana; Plenio, Martin B.

    2018-04-01

    The reconstruction of the state of a multipartite quantum mechanical system represents a fundamental task in quantum information science. At its most basic, it concerns a state of a bipartite quantum system whose subsystems are subjected to local operations. We compare two different methods for obtaining the original state from the state resulting from the action of these operations. The first method involves quantum operations called Petz recovery maps, acting locally on the two subsystems. The second method is called matrix (or state) reconstruction and involves local, linear maps that are not necessarily completely positive. Moreover, we compare the quantities on which the maps employed in the two methods depend. We show that any state that admits Petz recovery also admits state reconstruction. However, the latter is successful for a strictly larger set of states. We also compare these methods in the context of a finite spin chain. Here, the state of a finite spin chain is reconstructed from the reduced states of a few neighbouring spins. In this setting, state reconstruction is the same as the matrix product operator reconstruction proposed by Baumgratz et al. [Phys. Rev. Lett. 111, 020401 (2013)]. Finally, we generalize both these methods so that they employ long-range measurements instead of relying solely on short-range correlations embodied in such local reduced states. Long-range measurements enable the reconstruction of states which cannot be reconstructed from measurements of local few-body observables alone, thereby improving existing methods for quantum state tomography of quantum many-body systems.
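    For reference, the Petz recovery map for a channel N and a reference state σ is usually written in the standard form (generic definition, not specific to this paper):

```latex
\mathcal{P}_{\sigma,\mathcal{N}}(X)
  \;=\;
  \sigma^{1/2}\,
  \mathcal{N}^{\dagger}\!\left(
    \mathcal{N}(\sigma)^{-1/2}\, X\, \mathcal{N}(\sigma)^{-1/2}
  \right)
  \sigma^{1/2},
```

where N† is the adjoint of the channel. By construction the map is completely positive and trace-preserving and satisfies P(N(σ)) = σ, i.e. it recovers the reference state exactly.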

  13. Animated Reconstruction of Forensic Animation

    OpenAIRE

    Hala, Albert; Unver, Ertu

    1998-01-01

    An animated accident display in court can be a significant evidentiary tool. Computer graphics animation reconstructions that can be shown in court are cost effective, save valuable time, illustrate complex and technical issues, are realistic, and can prove or disprove arguments or theories with reference to the perplexing Newtonian physics involved in many accidents: this technology may well revolutionise accident reconstruction, thus enabling prosecution and defence to be more effective in...

  14. Value of selective MIP reconstructions of respiratory triggered 3D-TSE-MR cholangiography on a workstation versus standard MIP reconstructions and single-shot MRCP

    International Nuclear Information System (INIS)

    Schaible, R.; Textor, J.; Kreft, B.; Schild, H.; Neubrand, M.

    2001-01-01

    Purpose: Comparison of anatomical visualisation and diagnostic value of selective MIP reconstructions of respiratory triggered 3D-TSE-MRCP versus standard MIP reconstructions and single-shot MRCP. Material and Methods: 50 patients with pancreaticobiliary disease were examined at 1.5 Tesla (ACS NT II, Philips Medical Systems) using a breath-hold single-shot (SS) and a respiratory triggered 3D-TSE-MRCP technique in 12 standard MIP projections. Additional selective MIP reconstructions with different slice thicknesses (2, 4, 10 cm) and projections were performed on a workstation. Visualization of the pancreaticobiliary system and the diagnostic value of the examinations were analysed. Results: Single-shot and 3D-TSE in standard projections showed comparable anatomical visualisation. On selective MIP reconstructions the biliary system ...

  15. Evaluation of Available Software for Reconstruction of a Structure from its Imagery

    Science.gov (United States)

    2017-04-01

    Leonid K Antanovskii, Weapons and Combat Systems. The Computer Vision System toolbox of MATLAB and the Visual Structure from Motion (VisualSFM) software are evaluated on three datasets of

  16. Fast group matching for MR fingerprinting reconstruction.

    Science.gov (United States)

    Cauley, Stephen F; Setsompop, Kawin; Ma, Dan; Jiang, Yun; Ye, Huihui; Adalsteinsson, Elfar; Griswold, Mark A; Wald, Lawrence L

    2015-08-01

    MR fingerprinting (MRF) is a technique for quantitative tissue mapping using pseudorandom measurements. To estimate tissue properties such as T1, T2, proton density, and B0, the rapidly acquired data are compared against a large dictionary of Bloch simulations. This matching process can be a very computationally demanding portion of MRF reconstruction. We introduce a fast group matching algorithm (GRM) that exploits inherent correlation within MRF dictionaries to create highly clustered groupings of the elements. During matching, a group-specific signature is first used to remove poor matching possibilities. Group principal component analysis (PCA) is used to evaluate all remaining tissue types. In vivo 3 Tesla brain data were used to validate the accuracy of our approach. For a trueFISP sequence with over 196,000 dictionary elements, 1000 MRF samples, and an image matrix of 128 × 128, GRM was able to map MR parameters within 2 s using standard vendor computational resources. This is an order of magnitude faster than global PCA and nearly two orders of magnitude faster than direct matching, with comparable accuracy (1-2% relative error). The proposed GRM method is a highly efficient model reduction technique for MRF matching and should enable clinically relevant reconstruction accuracy and time on standard vendor computational resources. © 2014 Wiley Periodicals, Inc.
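    The dictionary-matching core (without the grouping layer) amounts to picking the maximum normalised inner product, optionally in an SVD-compressed space; a toy sketch with synthetic "fingerprints" (the grouping and signature pruning of GRM are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy dictionary: 500 unit-norm "fingerprints" of length 200
D = rng.standard_normal((200, 500))
D /= np.linalg.norm(D, axis=0)

# compress dictionary and signal to rank r via SVD (the global PCA step)
r = 50
U, _, _ = np.linalg.svd(D, full_matrices=False)
Ur = U[:, :r]
D_c = Ur.T @ D                       # compressed dictionary

signal = D[:, 123]                   # a noiseless measured fingerprint
s_c = Ur.T @ signal                  # compressed signal

# match: maximum absolute normalised inner product in compressed space
scores = np.abs(D_c.T @ s_c) / np.linalg.norm(D_c, axis=0)
best = int(np.argmax(scores))
```

GRM accelerates exactly this step by first comparing the signal against one signature per cluster of dictionary entries, so that only a few clusters need the full PCA-space comparison.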

  17. Secondary reconstruction of maxillofacial trauma.

    Science.gov (United States)

    Castro-Núñez, Jaime; Van Sickels, Joseph E

    2017-08-01

    Craniomaxillofacial trauma is one of the most complex clinical conditions in contemporary maxillofacial surgery. Vital structures and possible functional and esthetic sequelae are important considerations following this type of trauma and intervention. Despite the best efforts of the primary surgery, there is a group of patients who will have poor outcomes requiring secondary reconstruction to restore form and function. The purpose of this study is to review current concepts on secondary reconstruction of the maxillofacial complex. The evaluation of a posttraumatic patient for a secondary reconstruction must include an assessment of the different subunits of the upper face, middle face, and lower face. Virtual surgical planning and surgical guides represent the most important innovations in secondary reconstruction over the past few years. Intraoperative navigational surgery/computer-assisted navigation is used in complex cases. Facial asymmetry can be corrected or significantly improved by segmentation of the computerized tomography dataset and mirroring of the unaffected side by means of virtual surgical planning. Navigational surgery/computer-assisted navigation allows for a more precise surgical correction when secondary reconstruction involves the replacement of extensive anatomical areas. The use of technology can result in custom-made replacements and prebent plates, which are more stable and resistant to fracture from metal fatigue. Careful perioperative evaluation is the key to positive outcomes of secondary reconstruction after trauma. The advent of technological tools has played a central role in helping the surgical team perform a given treatment plan in a more precise and predictable manner.

  18. Technical basis for dose reconstruction

    International Nuclear Information System (INIS)

    Anspaugh, L.R.

    1996-01-01

    The purpose of this paper is to consider two general topics: Technical considerations of why dose-reconstruction studies should or should not be performed and methods of dose reconstruction. The first topic is of general and growing interest as the number of dose-reconstruction studies increases, and one asks the question whether it is necessary to perform a dose reconstruction for virtually every site at which, for example, the Department of Energy (DOE) has operated a nuclear-related facility. And there is the broader question of how one might logically draw the line at performing or not performing dose-reconstruction (radiological and chemical) studies for virtually every industrial complex in the entire country. The second question is also of general interest. There is no single correct way to perform a dose-reconstruction study, and it is important not to follow blindly a single method to the point that cheaper, faster, more accurate, and more transparent methods might not be developed and applied. 90 refs., 4 tabs

  20. VISUAL3D - An EIT network on visualization of geomodels

    Science.gov (United States)

    Bauer, Tobias

    2017-04-01

    When it comes to interpreting data and understanding deep geological structures and bodies at different scales, modelling tools and modelling experience are vital for deep exploration. Geomodelling provides a platform for the integration of different types of data, including new kinds of information (e.g., from new and improved measuring methods). EIT Raw Materials, initiated by the EIT (European Institute of Innovation and Technology) and funded by the European Commission, is the largest and strongest consortium in the raw materials sector worldwide. The VISUAL3D network of infrastructure is an initiative by EIT Raw Materials that aims at bringing together partners with 3D-4D-visualisation infrastructure and 3D-4D-modelling experience. The recently formed network collaboration interlinks hardware, software and expert knowledge in modelling, visualization and output. A special focus is the linking of research, education and industry, and the integration of multi-disciplinary data so that they can be visualized in three and four dimensions. By fostering network collaborations, we aim to improve the combination of geomodels with differing file formats and data characteristics. This will create increased competency in model visualization and make it easier to interchange and communicate models. By combining knowledge and experience in geomodelling with expertise in Virtual Reality visualization, partners of EIT Raw Materials, as well as external parties, will be able to visualize, analyze and validate their geomodels in immersive VR environments. The current network combines partners from universities, research institutes, geological surveys and industry with a strong background in geological 3D modelling and 3D visualization and comprises: Luleå University of Technology, Geological Survey of Finland, Geological Survey of Denmark and Greenland, TUBA Freiberg, Uppsala University, Geological Survey of France, RWTH Aachen, DMT, KGHM Cuprum, Boliden, Montan

  1. Understanding visualization: a formal approach using category theory and semiotics.

    Science.gov (United States)

    Vickers, Paul; Faith, Joe; Rossiter, Nick

    2013-06-01

    This paper combines the vocabulary of semiotics and category theory to provide a formal analysis of visualization. It shows how familiar processes of visualization fit the semiotic frameworks of both Saussure and Peirce, and extends these structures using the tools of category theory to provide a general framework for understanding visualization in practice, including: relationships between systems; data collected from those systems; renderings of those data in the form of representations; the reading of those representations to create visualizations; and the use of those visualizations to create knowledge and understanding of the system under inspection. The resulting framework is validated by demonstrating how familiar information visualization concepts (such as literalness, sensitivity, redundancy, ambiguity, generalizability, and chart junk) arise naturally from it and can be defined formally and precisely. This paper generalizes previous work on the formal characterization of visualization by, inter alia, Ziemkiewicz and Kosara and allows us to formally distinguish properties of the visualization process that previous work does not.

  2. Visual communication of engineering and scientific data in the courtroom

    Science.gov (United States)

    Jackson, Gerald W.; Henry, Andrew C.

    1993-01-01

    Presenting engineering and scientific information in the courtroom is challenging. Quite often the data is voluminous and, therefore, difficult to digest by engineering experts, let alone a lay judge, lawyer, or jury. This paper discusses computer visualization techniques designed to provide the court methods of communicating data in visual formats thus allowing a more accurate understanding of complicated concepts and results. Examples are presented that include accident reconstructions, technical concept illustration, and engineering data visualization. Also presented is the design of an electronic courtroom which facilitates the display and communication of information to the courtroom.

  3. Reconstruction from gamma radiography and ultrasonic images

    International Nuclear Information System (INIS)

    Gautier, S.; Lavayssiere, B.; Idier, J.; Mohammad-Djafari, A.

    1998-02-01

    This work deals with three-dimensional reconstruction from gamma-radiographic and ultrasonic images. Such a problem belongs to the field of data fusion, since the two modalities provide complementary information. The two sets of data are independently related to two sets of parameters: gamma-ray attenuation and ultrasonic reflectivity. The fusion problem is addressed in a Bayesian framework; the kingpin of the task is then to define a joint a priori model for both attenuation and reflectivity. The development of this model and the entailed joint estimation constitute the principal contribution of this work. Results on real data demonstrate the validity of this method compared to a sequential treatment of the two sets of data.

  4. 3D reconstruction of coronary arteries from 2D angiographic projections using non-uniform rational basis splines (NURBS) for accurate modelling of coronary stenoses.

    Directory of Open Access Journals (Sweden)

    Francesca Galassi

    Full Text Available Assessment of coronary stenosis severity is crucial in clinical practice. This study proposes a novel method to generate 3D models of stenotic coronary arteries, directly from 2D coronary images, and suitable for immediate assessment of the stenosis severity. From multiple 2D X-ray coronary arteriogram projections, 2D vessels were extracted. A 3D centreline was reconstructed as the intersection of surfaces from corresponding branches. Next, 3D luminal contours were generated in a two-step process: first, a Non-Uniform Rational B-Spline (NURBS) circular contour was designed and, second, its control points were adjusted to interpolate computed 3D boundary points. Finally, a 3D surface was generated as an interpolation across the control points of the contours and used in the analysis of the severity of a lesion. To evaluate the method, we compared 3D reconstructed lesions with Optical Coherence Tomography (OCT), an invasive imaging modality that enables high-resolution endoluminal visualization of lesion anatomy. Validation was performed on routine clinical data. Analysis of paired cross-sectional area discrepancies indicated that the proposed method more closely represented OCT contours than conventional approaches in luminal surface reconstruction, with overall root-mean-square errors ranging from 0.213 mm2 to 1.013 mm2, and a maximum error of 1.837 mm2. Comparison of volume reduction due to a lesion with the corresponding FFR measurement suggests that the method may help in estimating the physiological significance of a lesion. The algorithm accurately reconstructed 3D models of lesioned arteries and enabled quantitative assessment of stenoses. The proposed method has the potential to allow immediate analysis of stenoses in clinical practice, thereby providing incremental diagnostic and prognostic information to guide treatments in real time and without the need for invasive techniques.
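    The circular starting contour described above can be sketched with the basic building block of a NURBS curve, a rational quadratic Bézier segment: with the control points and weights below, the segment traces an exact quarter of the unit circle. This is a minimal illustration of the representation, not the paper's reconstruction code; the control points and weights are the standard textbook values for a circular arc.

```python
import numpy as np

def rational_quadratic_bezier(P, w, t):
    """Evaluate a rational quadratic Bezier curve at parameter t in [0, 1]."""
    b = np.array([(1 - t) ** 2, 2 * t * (1 - t), t ** 2])   # Bernstein basis
    num = (b * w) @ P                                       # weighted control points
    den = (b * w).sum()                                     # rational denominator
    return num / den

# Quarter of the unit circle as one rational quadratic Bezier segment:
# control points (1,0), (1,1), (0,1) with middle weight cos(45 deg).
P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w = np.array([1.0, np.sqrt(2) / 2, 1.0])

for t in np.linspace(0.0, 1.0, 5):
    x, y = rational_quadratic_bezier(P, w, t)
    print(f"t={t:.2f}  ({x:.4f}, {y:.4f})  radius={np.hypot(x, y):.6f}")
```

    Every evaluated point lies exactly on the unit circle (radius 1.0), which is why rational splines, unlike ordinary polynomial splines, can represent circular lumen cross-sections without approximation error; adjusting the control points then deforms the contour toward the computed 3D boundary points.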

  5. Reducing the effects of acoustic heterogeneity with an iterative reconstruction method from experimental data in microwave induced thermoacoustic tomography

    International Nuclear Information System (INIS)

    Wang, Jinguo; Zhao, Zhiqin; Song, Jian; Chen, Guoping; Nie, Zaiping; Liu, Qing-Huo

    2015-01-01

    Purpose: An iterative reconstruction method has been previously reported by the authors of this paper. However, that method was demonstrated solely with numerical simulations. It is essential to apply the iterative reconstruction method under practical conditions. The objective of this work is to validate the capability of the iterative reconstruction method to reduce the effects of acoustic heterogeneity using experimental data in microwave induced thermoacoustic tomography. Methods: Most existing reconstruction methods need to incorporate ultrasonic measurement technology to quantitatively measure the velocity distribution of the heterogeneity, which increases system complexity. In contrast, the iterative reconstruction method combines the time reversal mirror technique, the fast marching method, and the simultaneous algebraic reconstruction technique to iteratively estimate the velocity distribution of heterogeneous tissue from the measured data alone. The estimated velocity distribution is then used to reconstruct a highly accurate image of the microwave absorption distribution. Experiments in which a target is placed in an acoustically heterogeneous environment were performed to validate the iterative reconstruction method. Results: By using the estimated velocity distribution, a target in an acoustically heterogeneous environment can be reconstructed with better shape and higher image contrast than targets reconstructed with a homogeneous velocity distribution. Conclusions: The distortions caused by acoustic heterogeneity can be efficiently corrected by utilizing the velocity distribution estimated by the iterative reconstruction method. The advantage of the iterative reconstruction method over existing correction methods is that it improves the quality of the image of the microwave absorption distribution without increasing system complexity.
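    The simultaneous algebraic reconstruction technique (SART) component named above can be sketched on a toy linear system. This illustrates only the SART iteration, not the time-reversal or fast-marching steps; the matrix sizes and values are assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear system b = A x: rows of A mimic ray path lengths through pixels
# (sizes and values are illustrative, not the paper's experimental setup).
n_rays, n_pix = 40, 8
A = rng.uniform(0.0, 1.0, (n_rays, n_pix))
x_true = rng.uniform(1.0, 2.0, n_pix)        # e.g. slowness (1/velocity) values
b = A @ x_true                               # simulated travel-time data

def sart(A, b, n_iter=2000, lam=1.0):
    """SART: row- and column-normalized additive updates (lam in (0, 2))."""
    row_sum = A.sum(axis=1)                  # per-ray normalization
    col_sum = A.sum(axis=0)                  # per-pixel normalization
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sum     # normalized data mismatch
        x = x + lam * (A.T @ residual) / col_sum
    return x

x_est = sart(A, b)
print(np.max(np.abs(x_est - x_true)))        # small: iterates approach x_true
```

    In the actual method, the rows of A would come from ray paths traced by the fast marching method, and the recovered x would be the velocity (or slowness) map fed back into the image reconstruction.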

  6. Visual Inspection for Caries Detection

    DEFF Research Database (Denmark)

    Gimenez, T; Piovesan, C; Braga, M M

    2015-01-01

    July 2014 to identify published and nonpublished studies in English. Studies of visual inspection were included that 1) assessed accuracy of the method in detecting caries lesions; 2) were performed on occlusal, proximal, or free smooth surfaces in primary or permanent teeth; 3) had a reference...... (from 5,808 articles initially identified) and 1 abstract (from 168) met the inclusion criteria. In general, the analysis demonstrated that the visual method had good accuracy for detecting caries lesions. Although laboratory and clinical studies have presented similar accuracy, clinically obtained...... caries detection method has good overall performance. Furthermore, although the identified studies had high heterogeneity and risk of bias, the use of detailed and validated indices seems to improve the accuracy of the method....

  7. Recent advances in 3D SEM surface reconstruction.

    Science.gov (United States)

    Tafti, Ahmad P; Kirkpatrick, Andrew B; Alavi, Zahrasadat; Owen, Heather A; Yu, Zeyun

    2015-11-01

    The scanning electron microscope (SEM), one of the most commonly used instruments in biology and the material sciences, employs electrons instead of light to determine the surface properties of specimens. However, SEM micrographs remain 2D images. To effectively measure and visualize surface attributes, we need to restore the 3D shape model from the SEM images. 3D surface reconstruction is a longstanding topic in microscopy vision, as it offers quantitative and visual information for a variety of applications including medicine, pharmacology, chemistry, and mechanics. In this paper, we attempt to survey the expanding body of work in this area, including a discussion of recent techniques and algorithms. With the present work, we also enhance the reliability, accuracy, and speed of 3D SEM surface reconstruction by designing and developing an optimized multi-view framework. We then consider several real-world experiments as well as synthetic data to examine the qualitative and quantitative attributes of our proposed framework. Furthermore, we present a taxonomy of 3D SEM surface reconstruction approaches and address several challenging issues as part of our future work. Copyright © 2015 Elsevier Ltd. All rights reserved.
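    One family of 3D SEM surface reconstruction methods (alongside the multi-view stereo approach emphasized above) is photometric stereo, which exploits intensity variation across multiple detectors or illumination directions. Below is a minimal Lambertian sketch, with illustrative directions and albedo, that recovers a single surface normal by least squares; real SEM detector response models are more complicated than this assumption.

```python
import numpy as np

# Assumed detector/illumination directions (rows) and a surface normal to recover.
L = np.array([[0.0, 0.0, 1.0],
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])
n_true = np.array([0.3, -0.2, 0.933])
n_true /= np.linalg.norm(n_true)             # unit surface normal
albedo = 0.7

# Lambertian image formation: intensity = albedo * (direction . normal).
I = albedo * (L @ n_true)

# Least-squares inversion recovers g = albedo * normal; its norm is the albedo.
g, *_ = np.linalg.lstsq(L, I, rcond=None)
print(np.linalg.norm(g))                     # the albedo, 0.7
print(g / np.linalg.norm(g))                 # the unit surface normal
```

    Applied per pixel, the recovered normal field can then be integrated into a height map, which is one route from 2D micrographs to the 3D shape model the survey discusses.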

  8. Pattern visual evoked potentials in dyslexic versus normal children

    Directory of Open Access Journals (Sweden)

    Javad Heravian

    2015-01-01

    Conclusion: PVEP has high sensitivity and validity for detecting visual deficits in children with dyslexia. However, no significant difference was found between dyslexic and normal children using high-contrast stimuli.

  9. Optimized 3D Street Scene Reconstruction from Driving Recorder Images

    Directory of Open Access Journals (Sweden)

    Yongjun Zhang

    2015-07-01

    Full Text Available The paper presents an automatic region-detection-based method to reconstruct street scenes from driving recorder images. The driving recorder in this paper is a dashboard camera that collects images while the motor vehicle is moving. An enormous number of moving vehicles appear in the collected data because typical recorders are mounted at the front of moving vehicles and face forward, which can make matching points on vehicles and guardrails unreliable. Because driving recorders are cheap, widely used, and offer extensive shooting coverage, utilizing these image data can reduce street-scene reconstruction and updating costs; we therefore propose a new method, called the Mask automatic detecting method, to improve structure-from-motion reconstruction results. Note that we define vehicle and guardrail regions as the “mask” in this paper, since the features on them should be masked out to avoid poor matches. After our new method removes these feature points, the camera poses and sparse 3D points are reconstructed from the remaining matches. Our comparison experiments with typical structure-from-motion (SfM) reconstruction pipelines, such as Photosynth and VisualSFM, demonstrated that the Mask decreased the root-mean-square error (RMSE) of the pairwise matching results, which led to more accurate recovery of the relative camera poses. Removing features within the Mask also increased the accuracy of the point clouds by nearly 30%–40% and corrected the typical methods' problem of repeatedly reconstructing several copies of a building when there was only one target building.
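    The masking step can be sketched as a simple filter that discards candidate feature points falling inside detected vehicle/guardrail regions before matching. The binary mask and keypoint coordinates here are hypothetical stand-ins for the output of the paper's automatic region detection.

```python
import numpy as np

# Hypothetical binary mask of unreliable regions (detected vehicles/guardrails):
# True = masked out, for a 100 x 200 pixel image.
mask = np.zeros((100, 200), dtype=bool)
mask[40:80, 50:150] = True                   # an assumed detected-vehicle region

# Candidate feature points as (row, col) image coordinates.
keypoints = np.array([[10, 20], [50, 100], [75, 60], [90, 180]])

# Keep only features that fall outside the masked regions before matching.
inside = mask[keypoints[:, 0], keypoints[:, 1]]
kept = keypoints[~inside]
print(kept)                                  # [[10 20] [90 180]]
```

    Only the surviving keypoints are passed to pairwise matching and pose estimation, which is what removes the moving-vehicle outliers from the SfM pipeline.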

  10. Calibration of reconstruction parameters in atom probe tomography using a single crystallographic orientation

    International Nuclear Information System (INIS)

    Suram, Santosh K.; Rajan, Krishna

    2013-01-01

    The purpose of this work is to develop a methodology to estimate the APT reconstruction parameters when limited crystallographic information is available. Reliable spatial scaling of APT data currently requires identification of multiple crystallographic poles from the field desorption image for estimating the reconstruction parameters. This requirement limits the capacity of accurately reconstructing APT data for certain complex systems, such as highly alloyed systems and nanostructured materials wherein more than one pole is usually not observed within one grain. To overcome this limitation, we develop a quantitative methodology for calibrating the reconstruction parameters in an APT dataset by ensuring accurate inter-planar spacing and optimizing the curvature correction for the atomic planes corresponding to a single crystallographic orientation. We validate our approach on an aluminum dataset and further illustrate its capabilities by computing geometric reconstruction parameters for W and Al–Mg–Sc datasets.
    Highlights:
    ► Quantitative approach is developed to accurately reconstruct APT data.
    ► Curvature of atomic planes in APT data is used to calibrate the reconstruction.
    ► APT reconstruction parameters are determined from a single crystallographic axis.
    ► Quantitative approach is demonstrated on W, Al and Al–Mg–Sc systems.
    ► Accurate APT reconstruction of complex materials is now possible.

  11. Relativity of Visual Communication

    Directory of Open Access Journals (Sweden)

    Arto Mutanen

    2016-03-01

    Full Text Available Communication is sharing and conveying information. In visual communication, it is visual messages in particular that have to be formulated and interpreted. The interpretation is relative to the method of information presentation, which is a human construction. This also holds in the case of visual languages. The notions of syntax and semantics are not as well founded for visual languages as they are for natural languages. Visual languages are both syntactically and semantically dense. This density is connected to the compositionality of (pictorial) languages. I