WorldWideScience

Sample records for volumetric dataset full

  1. CERC Dataset (Full Hadza Data)

    DEFF Research Database (Denmark)

    2016-01-01

    The dataset includes demographic, behavioral, and religiosity data from eight different populations from around the world. The samples were drawn from: (1) Coastal and (2) Inland Tanna, Vanuatu; (3) Hadzaland, Tanzania; (4) Lovu, Fiji; (5) Pointe aux Piment, Mauritius; (6) Pesqueiro, Brazil; (7) Kyzyl, Tyva Republic; and (8) Yasawa, Fiji. Related publication: Purzycki et al. (2016). Moralistic Gods, Supernatural Punishment and the Expansion of Human Sociality. Nature, 530(7590): 327-330.

  2. Full-spectrum volumetric solar thermal conversion via photonic nanofluids.

    Science.gov (United States)

    Liu, Xianglei; Xuan, Yimin

    2017-10-12

    Volumetric solar thermal conversion is an emerging technique for a plethora of applications such as solar thermal power generation, desalination, and solar water splitting. However, achieving broadband solar thermal absorption via dilute nanofluids is still a daunting challenge. In this work, full-spectrum volumetric solar thermal conversion is demonstrated over a thin layer of the proposed 'photonic nanofluids'. The underlying mechanism is found to be the photonic superposition of core resonances, shell plasmons, and core-shell resonances at different wavelengths, whose coexistence is enabled by the broken symmetry of specially designed composite nanoparticles, i.e., Janus nanoparticles. The solar thermal conversion efficiency can be improved by 10.8% compared with core-shell nanofluids. The extinction coefficient of Janus dimers with various configurations is also investigated to unveil the effects of particle couplings. This work demonstrates the feasibility of full-spectrum volumetric solar thermal conversion and may find applications in efficient solar energy harvesting and utilization.
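
    As a rough, self-contained illustration of the figure of merit at stake here, a minimal Python sketch of solar-weighted absorptance under the Beer-Lambert law; the spectra and extinction values below are invented and stand in for the paper's electromagnetic model of Janus particles:

      import numpy as np

      wl = np.linspace(0.3, 2.5, 500)               # wavelength, micrometres
      solar = np.exp(-((wl - 0.55) / 0.45) ** 2)    # toy solar spectral shape
      kappa = 800.0 * np.exp(-wl / 1.2)             # toy extinction coeff, 1/m
      thickness = 0.01                              # 1 cm nanofluid layer

      absorbed = 1.0 - np.exp(-kappa * thickness)   # Beer-Lambert absorptance
      eta = np.sum(absorbed * solar) / np.sum(solar)
      print(f"solar-weighted absorptance: {eta:.3f}")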

  3. Volumetric full-range magnetomotive optical coherence tomography

    Science.gov (United States)

    Ahmad, Adeel; Kim, Jongsik; Shemonski, Nathan D.; Marjanovic, Marina; Boppart, Stephen A.

    2014-01-01

    Magnetomotive optical coherence tomography (MM-OCT) can be utilized to spatially localize the presence of magnetic particles within tissues or organs. These magnetic particle-containing regions are detected by using the capability of OCT to measure small-scale displacements induced by the activation of an external electromagnet coil typically driven by a harmonic excitation signal. The constraints imposed by the scanning schemes employed and tissue viscoelastic properties limit the speed at which conventional MM-OCT data can be acquired. Realizing that electromagnet coils can be designed to exert MM force on relatively large tissue volumes (comparable to or larger than typical OCT imaging fields of view), we show that an order-of-magnitude improvement in three-dimensional (3-D) MM-OCT imaging speed can be achieved by rapid acquisition of a volumetric scan during the activation of the coil. Furthermore, we show volumetric (3-D) MM-OCT imaging over a large imaging depth range by combining this volumetric scan scheme with full-range OCT. Results with tissue-equivalent phantoms and a biological tissue are shown to demonstrate this technique. PMID:25472770

  4. Level-1 muon trigger performance with the full 2017 dataset

    CERN Document Server

    CMS Collaboration

    2018-01-01

    This document describes the performance of the CMS Level-1 Muon Trigger with the full dataset of 2017. Efficiency plots are included for each track finder (TF) individually and for the system as a whole. The efficiency is measured to be greater than 90% for all track finders.

  5. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    International Nuclear Information System (INIS)

    Hosntalab, Mohammad; Aghaeizadeh Zoroofi, Reza; Abbaspour Tehrani-Fard, Ali; Shirani, Gholamreza

    2008-01-01

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implants, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps: first, we extract a mask of the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of teeth in the jaws. Third, the arc of the upper and lower jaws is estimated and the dataset is re-sampled panoramically. Separation of the upper and lower jaws and initial segmentation of the teeth are then performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of the teeth and apply a variational level set to refine the initial tooth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. Because this technique is based on characteristics of the overall region of the tooth image, it can extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique segmented teeth more successfully than previous techniques. (orig.)
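
    A minimal sketch of the first step named above, Otsu thresholding to pull a bright bone/tooth mask out of a CT slice; synthetic data stands in for real CT, and the helper name is mine, not the authors' code:

      import numpy as np

      def otsu_threshold(img, nbins=256):
          hist, edges = np.histogram(img, bins=nbins)
          p = hist / hist.sum()
          centers = 0.5 * (edges[:-1] + edges[1:])
          w0 = np.cumsum(p)                          # class-0 probability
          w1 = 1.0 - w0
          cum_mean = np.cumsum(p * centers)
          m0 = cum_mean / np.where(w0 > 0, w0, 1)
          m1 = (cum_mean[-1] - cum_mean) / np.where(w1 > 0, w1, 1)
          between = w0 * w1 * (m0 - m1) ** 2         # between-class variance
          return centers[np.argmax(between)]

      slice_ct = np.random.normal(100, 20, (256, 256))   # soft-tissue background
      slice_ct[100:150, 100:150] += 400                  # bright tooth/bone blob
      mask = slice_ct > otsu_threshold(slice_ct)
      print("mask fraction:", mask.mean())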

  6. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    Energy Technology Data Exchange (ETDEWEB)

    Hosntalab, Mohammad [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Aghaeizadeh Zoroofi, Reza [University of Tehran, Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, Tehran (Iran); Abbaspour Tehrani-Fard, Ali [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Sharif University of Technology, Department of Electrical Engineering, Tehran (Iran); Shirani, Gholamreza [Faculty of Dentistry Medical Science of Tehran University, Oral and Maxillofacial Surgery Department, Tehran (Iran)

    2008-09-15

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implants, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps: first, we extract a mask of the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of teeth in the jaws. Third, the arc of the upper and lower jaws is estimated and the dataset is re-sampled panoramically. Separation of the upper and lower jaws and initial segmentation of the teeth are then performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of the teeth and apply a variational level set to refine the initial tooth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. Because this technique is based on characteristics of the overall region of the tooth image, it can extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique segmented teeth more successfully than previous techniques. (orig.)

  7. Reconstructing flaw image using dataset of full matrix capture technique

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Tae Hun; Kim, Yong Sik; Lee, Jeong Seok [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2017-02-15

    A conventional phased-array ultrasonic system offers the ability to steer an ultrasonic beam by applying independent time delays to the individual elements in the array and so produce an ultrasonic image. In contrast, full matrix capture (FMC) is a data acquisition process that collects a complete matrix of A-scans from every possible independent transmit-receive combination in a phased-array transducer. With post-processing, it makes it possible to reconstruct images equivalent to conventional phased-array images, as well as various images that a conventional phased array cannot produce. In this paper, a basic algorithm based on the LLL-mode total focusing method (TFM) that can image crack-type flaws is described, and the technique is applied to reconstruct flaw images from FMC datasets obtained from experiments and ultrasonic simulation.
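
    The delay-and-sum core of TFM is compact enough to sketch. Below is a hedged direct-contact, single-mode Python variant; the record's LLL mode traces a longer three-leg path, and the geometry, velocity and random "FMC data" here are placeholders:

      import numpy as np

      c = 5900.0                    # assumed longitudinal velocity, m/s
      fs = 100e6                    # A-scan sampling rate, Hz
      n_el, pitch, n_s = 16, 0.6e-3, 2000
      elems = (np.arange(n_el) - (n_el - 1) / 2) * pitch   # element x-positions

      # fmc[t, r, :] = A-scan for transmitter t and receiver r (toy noise here)
      fmc = np.random.randn(n_el, n_el, n_s) * 0.01

      xs = np.linspace(-5e-3, 5e-3, 64)
      zs = np.linspace(1e-3, 20e-3, 64)
      image = np.zeros((len(zs), len(xs)))
      tx = np.arange(n_el)[:, None]
      rx = np.arange(n_el)[None, :]
      for iz, z in enumerate(zs):
          for ix, x in enumerate(xs):
              d = np.hypot(x - elems, z)               # element-to-point paths
              tof = (d[:, None] + d[None, :]) / c      # transmit + receive legs
              s = np.clip((tof * fs).astype(int), 0, n_s - 1)
              image[iz, ix] = abs(fmc[tx, rx, s].sum())   # delay and sum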

  8. An automatic algorithm for detecting stent endothelialization from volumetric optical coherence tomography datasets

    Energy Technology Data Exchange (ETDEWEB)

    Bonnema, Garret T; Barton, Jennifer K [College of Optical Sciences, University of Arizona, Tucson, AZ (United States); Cardinal, Kristen O'Halloran [Biomedical and General Engineering, California Polytechnic State University (United States); Williams, Stuart K [Cardiovascular Innovation Institute, University of Louisville, Louisville, KY 40292 (United States)], E-mail: barton@u.arizona.edu

    2008-06-21

    Recent research has suggested that endothelialization of vascular stents is crucial to reducing the risk of late stent thrombosis. With a resolution of approximately 10 μm, optical coherence tomography (OCT) may be an appropriate imaging modality for visualizing the vascular response to a stent and measuring the percentage of struts covered with an anti-thrombogenic cellular lining. We developed an image analysis program to locate covered and uncovered stent struts in OCT images of tissue-engineered blood vessels. The struts were found by exploiting the highly reflective and shadowing characteristics of the metallic stent material. Coverage was evaluated by comparing the luminal surface with the depth of the strut reflection. Strut coverage calculations were compared to manual assessment of OCT images and epi-fluorescence analysis of the stented grafts. Based on the manual assessment, the strut identification algorithm operated with a sensitivity of 93% and a specificity of 99%. The strut coverage algorithm was 81% sensitive and 96% specific. The present study indicates that the program can automatically determine percent cellular coverage from volumetric OCT datasets of blood vessel mimics. The program could potentially be extended to assessments of stent endothelialization in native stented arteries.
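
    One way to picture the detection idea (a bright strut reflection with a shadow beneath) is a toy B-scan experiment; the thresholds and data below are invented for illustration and are not the paper's validated algorithm:

      import numpy as np

      rng = np.random.default_rng(4)
      bscan = rng.uniform(0.2, 0.4, (300, 512))   # depth x lateral speckle
      for col in (100, 250, 400):                 # plant three struts
          bscan[80, col-3:col+3] = 1.0            # strong surface reflection
          bscan[81:, col-3:col+3] = 0.05          # shadow underneath

      peak = bscan.max(axis=0)                    # per-column brightest sample
      deep = bscan[150:, :].mean(axis=0)          # mean signal well below lumen
      strut_cols = np.where((peak > 0.8) & (deep < 0.1))[0]
      print(strut_cols)                           # columns around 100, 250, 400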

  9. Inkjet printing-based volumetric display projecting multiple full-colour 2D patterns

    Science.gov (United States)

    Hirayama, Ryuji; Suzuki, Tomotaka; Shimobaba, Tomoyoshi; Shiraki, Atsushi; Naruse, Makoto; Nakayama, Hirotaka; Kakue, Takashi; Ito, Tomoyoshi

    2017-04-01

    In this study, a method to construct a full-colour volumetric display is presented using a commercially available inkjet printer. Photoreactive luminescence materials are minutely and automatically printed as the volume elements, and volumetric displays are constructed with high resolution using easy-to-fabricate means that exploit inkjet printing technologies. The results experimentally demonstrate the first prototype of an inkjet printing-based volumetric display composed of multiple layers of transparent films that yield a full-colour three-dimensional (3D) image. Moreover, we propose a design algorithm for 3D structures that provide multiple different 2D full-colour patterns when viewed from different directions, and experimentally demonstrate prototypes. Such 3D volumetric structures and their fabrication methods, based on widely deployed existing printing technologies, can be utilised as novel information display devices and systems, including digital signage, media art, entertainment and security.
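
    One simple way to realize a structure whose two orthogonal views show two different binary patterns is to intersect the patterns extruded along their viewing axes; this is an illustrative construction of my own, not necessarily the authors' design algorithm:

      import numpy as np

      n = 8
      pat_a = np.random.rand(n, n) > 0.5   # pattern for viewing direction A
      pat_b = np.random.rand(n, n) > 0.5   # pattern for viewing direction B
      pat_a[0, :] = True                   # keep every depth column non-empty
      pat_b[0, :] = True                   # so both projections come out exact

      # volume[i, j, k] is filled only where both extruded patterns overlap
      volume = pat_a[None, :, :] & pat_b[:, None, :]

      # a projection pixel is "on" if any voxel along the viewing axis is on
      print(np.array_equal(volume.any(axis=0), pat_a))   # True
      print(np.array_equal(volume.any(axis=1), pat_b))   # True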

  10. Volumetric breast density estimation from full-field digital mammograms.

    NARCIS (Netherlands)

    Engeland, S. van; Snoeren, P.R.; Huisman, H.J.; Boetes, C.; Karssemeijer, N.

    2006-01-01

    A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM). The thickness of dense tissue mapping to a pixel is determined by using a physical model of image acquisition. This model is based on the assumption that the breast is composed of two types of tissue, fat and parenchyma.

  11. Volumetric breast density estimation from full-field digital mammograms.

    Science.gov (United States)

    van Engeland, Saskia; Snoeren, Peter R; Huisman, Henkjan; Boetes, Carla; Karssemeijer, Nico

    2006-03-01

    A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM). The thickness of dense tissue mapping to a pixel is determined by using a physical model of image acquisition. This model is based on the assumption that the breast is composed of two types of tissue, fat and parenchyma. Effective linear attenuation coefficients of these tissues are derived from empirical data as a function of tube voltage (kVp), anode material, filtration, and compressed breast thickness. By employing these, tissue composition at a given pixel is computed after performing breast thickness compensation, using a reference value for fatty tissue determined by the maximum pixel value in the breast tissue projection. Validation has been performed using 22 FFDM cases acquired with a GE Senographe 2000D by comparing the volume estimates with volumes obtained by semi-automatic segmentation of breast magnetic resonance imaging (MRI) data. The correlation between MRI and mammography volumes was 0.94 on a per image basis and 0.97 on a per patient basis. Using the dense tissue volumes from MRI data as the gold standard, the average relative error of the volume estimates was 13.6%.
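
    A hedged numerical sketch of the two-tissue model described here: with effective attenuation coefficients for fat and dense tissue and a per-pixel fat reference, the dense-tissue thickness follows from a log-ratio. All coefficients and pixel data below are invented for illustration:

      import numpy as np

      mu_fat, mu_dense = 0.046, 0.080   # assumed effective coefficients, 1/mm
      H = 45.0                          # compressed breast thickness, mm
      pixel_pitch = 0.1                 # detector pixel pitch, mm

      signal = np.random.uniform(0.4, 1.0, (100, 100))  # thickness-compensated
      fat_ref = signal.max()            # reference: the "fattest" pixel

      # I = fat_ref * exp(-(mu_dense - mu_fat) * h)  =>  solve for h per pixel
      h_dense = np.log(fat_ref / signal) / (mu_dense - mu_fat)
      h_dense = np.clip(h_dense, 0.0, H)

      volume_cm3 = h_dense.sum() * pixel_pitch ** 2 / 1000.0
      print(f"dense-tissue volume: {volume_cm3:.1f} cm^3")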

  12. Development of an online radiology case review system featuring interactive navigation of volumetric image datasets using advanced visualization techniques

    International Nuclear Information System (INIS)

    Yang, Hyun Kyung; Kim, Boh Kyoung; Jung, Ju Hyun; Kang, Heung Sik; Lee, Kyoung Ho; Woo, Hyun Soo; Jo, Jae Min; Lee, Min Hee

    2015-01-01

    To develop an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques. Our Institutional Review Board approved the use of the patient data and waived the need for informed consent. We determined the following system requirements: volumetric navigation, accessibility, scalability, undemanding case management, trainee encouragement, and simulation of a busy practice. The system comprised a case registry server, a client case review program, and a commercially available cloud-based image viewing system. In the pilot test, we used 30 cases of low-dose abdominal computed tomography for the diagnosis of acute appendicitis. In each case, a trainee was required to navigate through the images and submit answers to the case questions. The trainee was then given the correct answers and key images, as well as the image dataset with annotations on the appendix. After evaluation of all cases, the system displayed the diagnostic accuracy and average review time, and the trainee was asked to reassess the failed cases. The pilot system was deployed successfully in a hands-on workshop course. We developed an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques.

  13. Development of an online radiology case review system featuring interactive navigation of volumetric image datasets using advanced visualization techniques

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Hyun Kyung; Kim, Boh Kyoung; Jung, Ju Hyun; Kang, Heung Sik; Lee, Kyoung Ho [Dept. of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of); Woo, Hyun Soo [Dept. of Radiology, SMG-SNU Boramae Medical Center, Seoul (Korea, Republic of); Jo, Jae Min [Dept. of Computer Science and Engineering, Seoul National University, Seoul (Korea, Republic of); Lee, Min Hee [Dept. of Radiology, Soonchunhyang University Bucheon Hospital, Bucheon (Korea, Republic of)

    2015-11-15

    To develop an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques. Our Institutional Review Board approved the use of the patient data and waived the need for informed consent. We determined the following system requirements: volumetric navigation, accessibility, scalability, undemanding case management, trainee encouragement, and simulation of a busy practice. The system comprised a case registry server, a client case review program, and a commercially available cloud-based image viewing system. In the pilot test, we used 30 cases of low-dose abdominal computed tomography for the diagnosis of acute appendicitis. In each case, a trainee was required to navigate through the images and submit answers to the case questions. The trainee was then given the correct answers and key images, as well as the image dataset with annotations on the appendix. After evaluation of all cases, the system displayed the diagnostic accuracy and average review time, and the trainee was asked to reassess the failed cases. The pilot system was deployed successfully in a hands-on workshop course. We developed an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques.

  14. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai; Sang, Huiyan; Huang, Jianhua Z.

    2014-01-01

    Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity of model fitting and prediction grows cubically with the size of the dataset, so applying such models to large datasets is not feasible. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points.

  15. Area and volumetric density estimation in processed full-field digital mammograms for risk assessment of breast cancer.

    Directory of Open Access Journals (Sweden)

    Abbas Cheddad

    Full Text Available INTRODUCTION: Mammographic density, the white radiodense part of a mammogram, is a marker of breast cancer risk and mammographic sensitivity. There are several means of measuring mammographic density, among which are area-based and volumetric-based approaches. Current volumetric methods use only unprocessed, raw mammograms, which is a problematic restriction since such raw mammograms are normally not stored. We describe fully automated methods for measuring both area and volumetric mammographic density from processed images. METHODS: The data set used in this study comprises raw and processed images of the same view from 1462 women. We developed two algorithms for processed images, an automated area-based approach (CASAM-Area) and a volumetric-based approach (CASAM-Vol). The latter method was based on training a random forest prediction model with image statistical features as predictors, against a volumetric measure, Volpara, for corresponding raw images. We contrast the three methods, CASAM-Area, CASAM-Vol and Volpara, directly and in terms of association with breast cancer risk and a known genetic variant for mammographic density and breast cancer, rs10995190 in the gene ZNF365. Associations with breast cancer risk were evaluated using images from 47 breast cancer cases and 1011 control subjects. The genetic association analysis was based on 1011 control subjects. RESULTS: All three measures of mammographic density were associated with breast cancer risk and rs10995190 (p < …; p > 0.10 for risk, p > 0.03 for rs10995190). CONCLUSIONS: Our results show that it is possible to obtain reliable automated measures of volumetric and area mammographic density from processed digital images. Area and volumetric measures of density on processed digital images performed similarly in terms of risk and genetic association.

  16. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai

    2014-01-01

    Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity of model fitting and prediction grows cubically with the size of the dataset, so applying such models to large datasets is not feasible. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points. Our approach is applicable to nonseparable and nonstationary spatio-temporal covariance models. We illustrate the effectiveness of our method through simulation experiments and application to an ozone measurement dataset.
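
    In the spatial-only setting the FSA decomposition is easy to sketch: a low-rank predictive-process term built on knots plus a tapered residual that restores short-range dependence. A minimal numpy illustration with knots fixed on a grid (the article instead selects knots by RJMCMC and works in space-time):

      import numpy as np

      def expcov(a, b, range_=0.3):
          return np.exp(-np.abs(a[:, None] - b[None, :]) / range_)

      def taper(a, b, gamma=0.15):      # Wendland-type compactly supported taper
          d = np.abs(a[:, None] - b[None, :])
          t = np.clip(1 - d / gamma, 0, None)
          return t ** 4 * (4 * d / gamma + 1)

      s = np.linspace(0, 1, 200)        # observation locations
      k = np.linspace(0, 1, 15)         # knots
      C_sk, C_kk = expcov(s, k), expcov(k, k)
      low_rank = C_sk @ np.linalg.solve(C_kk, C_sk.T)
      residual = (expcov(s, s) - low_rank) * taper(s, s)
      C_fsa = low_rank + residual
      print(f"max approximation error: {np.abs(C_fsa - expcov(s, s)).max():.4f}")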

  17. Full waveform inversion based on scattering angle enrichment with application to real dataset

    KAUST Repository

    Wu, Zedong

    2015-08-19

    Reflected waveform inversion (RWI) provides a method to reduce the nonlinearity of standard full waveform inversion (FWI). However, the drawback of existing RWI methods is their inability to utilize diving waves and their extra sensitivity to the migrated image. We propose a combined FWI and RWI optimization problem by dividing the velocity into background and perturbed components. We optimize both the background and perturbed components as independent parameters. The new objective function is quadratic with respect to the perturbed component, which reduces the nonlinearity of the optimization problem. Solving this optimization provides a true-amplitude image and utilizes the diving waves to update the velocity of the shallow parts. To ensure a proper wavenumber continuation, we use an efficient scattering angle filter to steer the inversion at the early stages, directing energy corresponding to large (smooth velocity) scattering angles to the background velocity update and the small (high wavenumber) scattering angles to the perturbed velocity update. This efficient implementation of the filter is fast and requires less memory than the conventional approach based on extended images. Thus, the new FWI procedure updates the background velocity mainly along the wavepath for both diving and reflected waves in the initial stages. At the same time, it updates the perturbation with mainly reflections (filtering out the diving waves). To demonstrate the capability of this method, we apply it to a real 2D marine dataset.
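
    Under one reading of this abstract (notation mine, not the authors'), the velocity split and the resulting objective can be written in LaTeX as

      v = v_0 + \delta v, \qquad
      E(v_0, \delta v) = \tfrac{1}{2}\,\bigl\| d_{\mathrm{obs}} - \mathcal{F}(v_0) - \mathcal{B}(v_0)\,\delta v \bigr\|_2^2 ,

    where \mathcal{F}(v_0) models the background response and \mathcal{B}(v_0) is a Born-type linearized operator; for fixed v_0 the objective is exactly quadratic in \delta v, which is the nonlinearity reduction the abstract refers to.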

  18. Full waveform inversion based on scattering angle enrichment with application to real dataset

    KAUST Repository

    Wu, Zedong; Alkhalifah, Tariq Ali

    2015-01-01

    Reflected waveform inversion (RWI) provides a method to reduce the nonlinearity of standard full waveform inversion (FWI). However, the drawback of existing RWI methods is their inability to utilize diving waves and their extra sensitivity to the migrated image.

  19. Three-dimensional volumetric gray-scale uterine cervix histogram prediction of days to delivery in full term pregnancy.

    Science.gov (United States)

    Kim, Ji Youn; Kim, Hai-Joong; Hahn, Meong Hi; Jeon, Hye Jin; Cho, Geum Joon; Hong, Sun Chul; Oh, Min Jeong

    2013-09-01

    Our aim was to determine whether the volumetric gray-scale histogram difference between the anterior and posterior cervix can indicate the extent of cervical consistency. We collected data from 95 patients who were appropriate for vaginal delivery at the 36th to 37th weeks of gestational age from September 2010 to October 2011 in the Department of Obstetrics and Gynecology, Korea University Ansan Hospital. Patients were excluded who had any of the following: cesarean section, labor induction, or premature rupture of membranes. Thirty-four patients were finally enrolled. The patients underwent evaluation of the cervix through Bishop score, cervical length, cervical volume, and three-dimensional (3D) cervical volumetric gray-scale histogram. The interval in days from the cervix evaluation to the delivery day was counted. We compared the 3D cervical volumetric gray-scale histogram, Bishop score, cervical length, and cervical volume with the interval in days from the evaluation of the cervix to the delivery. The gray-scale histogram difference between the anterior and posterior cervix was significantly correlated with days to delivery; its correlation coefficient (R) was 0.500 (P = 0.003). The cervical length was also significantly related to the days to delivery, with a correlation coefficient (R) of 0.421 and a P-value of 0.013. However, the anterior lip histogram, posterior lip histogram, total cervical volume, and Bishop score were not associated with days to delivery (P > 0.05). Both the gray-scale histogram difference between the anterior and posterior cervix and the cervical length correlated with the days to delivery; these measures can be utilized to better predict cervical consistency.

  20. Operating scheme for the light-emitting diode array of a volumetric display that exhibits multiple full-color dynamic images

    Science.gov (United States)

    Hirayama, Ryuji; Shiraki, Atsushi; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2017-07-01

    We designed and developed a control circuit for a three-dimensional (3-D) light-emitting diode (LED) array to be used in volumetric displays exhibiting full-color dynamic 3-D images. The circuit was implemented on a field-programmable gate array; therefore, pulse-width modulation, which requires high-speed processing, could be operated in real time. We experimentally evaluated the developed system by measuring the luminance of an LED with varying input and confirmed that the system works appropriately. In addition, we demonstrated that the volumetric display exhibits different full-color dynamic two-dimensional images in two orthogonal directions. Each of the exhibited images could be obtained only from the prescribed viewpoint. Such directional characteristics of the system are beneficial for applications, including digital signage, security systems, art, and amusement.

  1. Full-field mapping of internal strain distribution in red sandstone specimen under compression using digital volumetric speckle photography and X-ray computed tomography

    Directory of Open Access Journals (Sweden)

    Lingtao Mao

    2015-04-01

    Full Text Available It is always desirable to know the interior deformation pattern when a rock is subjected to mechanical load. Few experimental techniques exist that can represent the full-field three-dimensional (3D) strain distribution inside a rock specimen, yet this information is crucial for fully understanding the failure mechanism of rocks or other geomaterials. In this study, by using the newly developed digital volumetric speckle photography (DVSP) technique in conjunction with X-ray computed tomography (CT), and taking advantage of the natural 3D speckles formed inside the rock by material impurities and voids, we can probe the interior of a rock to map its deformation pattern under load and shed light on its failure mechanism. We apply this technique to the analysis of a red sandstone specimen under an increasing uniaxial compressive load applied incrementally. The full-field 3D displacement fields are obtained in the specimen as a function of the load, from which both the volumetric and the deviatoric strain fields are calculated. Strain localization zones which lead to the eventual failure of the rock are identified. The results indicate that both shear and tension are contributing factors to the failure mechanism.
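
    The displacement-tracking core of digital volumetric speckle photography can be sketched as 3D cross-correlation of subvolumes; production codes add subvoxel interpolation and differentiate the displacement field to obtain strains. A toy numpy version with a known rigid shift:

      import numpy as np

      rng = np.random.default_rng(0)
      ref = rng.random((64, 64, 64))        # reference CT volume ("speckle")
      true_shift = (3, -2, 5)               # imposed rigid displacement
      deformed = np.roll(ref, true_shift, axis=(0, 1, 2))

      # cross-correlate via FFTs; the peak location gives the displacement
      xc = np.fft.ifftn(np.fft.fftn(deformed) * np.conj(np.fft.fftn(ref))).real
      peak = np.unravel_index(np.argmax(xc), xc.shape)
      est = [(p + n // 2) % n - n // 2 for p, n in zip(peak, xc.shape)]
      print("estimated displacement:", est)  # -> [3, -2, 5]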

  2. Full 3D internal strain measurement for device packaging materials using synchrotron laminography and volumetric digital image correlation method

    International Nuclear Information System (INIS)

    Asada, Takashi; Kimura, Hidehiko; Yamaguchi, Satoshi; Kano, Taiki; Kajiwara, Kentaro

    2014-01-01

    In order to measure full 3D internal strain field of resin molding compound specimens, synchrotron computed tomography and laminography at SPring-8 were performed. Then the reconstructed images were applied to 3D digital image correlation method to compute internal strain field. The results showed that internal strains in resin molding compound could be visualized in this way. (author)

  3. Performance of single and multi-atlas based automated landmarking methods compared to expert annotations in volumetric microCT datasets of mouse mandibles.

    Science.gov (United States)

    Young, Ryan; Maga, A Murat

    2015-01-01

    Here we present an application of the advanced registration and atlas building framework DRAMMS to the automated annotation of mouse mandibles through a series of tests using single- and multi-atlas segmentation paradigms, and compare the outcomes to the current gold standard, manual annotation. Our results showed that the multi-atlas annotation procedure yields landmark precisions within the human observer error range. The mean shape estimates from the gold standard and the multi-atlas annotation procedure were statistically indistinguishable for both Euclidean Distance Matrix Analysis (mean form matrix) and Generalized Procrustes Analysis (Goodall F-test). Further research needs to be done to validate the consistency of variance-covariance matrix estimates from both methods with larger sample sizes. The multi-atlas annotation procedure shows promise as a framework to facilitate truly high-throughput phenomic analyses by channeling investigators' efforts to annotate only a small portion of their datasets.

  4. TU-AB-BRA-11: Evaluation of Fully Automatic Volumetric GBM Segmentation in the TCGA-GBM Dataset: Prognosis and Correlation with VASARI Features

    International Nuclear Information System (INIS)

    Rios Velazquez, E; Meier, R; Dunn, W; Gutman, D; Alexander, B; Wiest, R; Reyes, M; Bauer, S; Aerts, H

    2015-01-01

    Purpose: Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to manually defined sub-volumes by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. Methods: MRI sets of 67 GBM patients were downloaded from the Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software, including necrosis, edema, contrast-enhancing and non-enhancing tumor. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Results: Auto-segmented sub-volumes showed high agreement with manually delineated volumes (range (r): 0.65 – 0.91). They also showed higher correlation with VASARI features (auto r = 0.35, 0.60 and 0.59; manual r = 0.29, 0.50, 0.43, for contrast-enhancing, necrosis and edema, respectively). The contrast-enhancing volume and post-contrast abnormal volume showed the highest C-index (0.73 and 0.72), comparable to manually defined volumes (p = 0.22 and p = 0.07, respectively). The non-enhancing region defined by BraTumIA showed a significantly higher prognostic value (CI = 0.71) than the edema (CI = 0.60), both of which could not be distinguished by manual delineation. Conclusion: BraTumIA tumor sub-compartments showed higher correlation with VASARI data, and equivalent performance in terms of prognosis, compared to manual sub-volumes. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has large potential in high-throughput medical imaging research.

  5. TU-AB-BRA-11: Evaluation of Fully Automatic Volumetric GBM Segmentation in the TCGA-GBM Dataset: Prognosis and Correlation with VASARI Features

    Energy Technology Data Exchange (ETDEWEB)

    Rios Velazquez, E [Dana-Farber Cancer Institute | Harvard Medical School, Boston, MA (United States); Meier, R [Institute for Surgical Technology and Biomechanics, Bern, NA (Switzerland); Dunn, W; Gutman, D [Emory University School of Medicine, Atlanta, GA (United States); Alexander, B [Dana-Farber Cancer Institute, Brigham and Women's Hospital, Harvard Medical School, Boston, MA (United States); Wiest, R; Reyes, M [Institute for Surgical Technology and Biomechanics, University of Bern, Bern, NA (Switzerland); Bauer, S [Institute for Surgical Technology and Biomechanics, Support Center for Adva, Bern, NA (Switzerland); Aerts, H [Dana-Farber/Brigham and Women's Cancer Center, Boston, MA (United States)

    2015-06-15

    Purpose: Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to manually defined sub-volumes by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. Methods: MRI sets of 67 GBM patients were downloaded from the Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software, including necrosis, edema, contrast-enhancing and non-enhancing tumor. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Results: Auto-segmented sub-volumes showed high agreement with manually delineated volumes (range (r): 0.65 – 0.91). They also showed higher correlation with VASARI features (auto r = 0.35, 0.60 and 0.59; manual r = 0.29, 0.50, 0.43, for contrast-enhancing, necrosis and edema, respectively). The contrast-enhancing volume and post-contrast abnormal volume showed the highest C-index (0.73 and 0.72), comparable to manually defined volumes (p = 0.22 and p = 0.07, respectively). The non-enhancing region defined by BraTumIA showed a significantly higher prognostic value (CI = 0.71) than the edema (CI = 0.60), both of which could not be distinguished by manual delineation. Conclusion: BraTumIA tumor sub-compartments showed higher correlation with VASARI data, and equivalent performance in terms of prognosis, compared to manual sub-volumes. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has large potential in high-throughput medical imaging research.

  6. Deep learning in breast cancer risk assessment: evaluation of convolutional neural networks on a clinical dataset of full-field digital mammograms.

    Science.gov (United States)

    Li, Hui; Giger, Maryellen L; Huynh, Benjamin Q; Antropova, Natalia O

    2017-10-01

    To evaluate deep learning in the assessment of breast cancer risk, in which convolutional neural networks (CNNs) with transfer learning are used to extract parenchymal characteristics directly from full-field digital mammographic (FFDM) images instead of using computerized radiographic texture analysis (RTA), 456 clinical FFDM cases were included: a "high-risk" BRCA1/2 gene-mutation carriers dataset (53 cases), a "high-risk" unilateral cancer patients dataset (75 cases), and a "low-risk" dataset (328 cases). Deep learning was compared to the use of features from RTA, as well as to a combination of both, in the task of distinguishing between high- and low-risk subjects. Similar classification performances, in terms of area under the ROC curve and standard error, were obtained using CNN and RTA features.

  7. Automatic Estimation of Volumetric Breast Density Using Artificial Neural Network-Based Calibration of Full-Field Digital Mammography: Feasibility on Japanese Women With and Without Breast Cancer.

    Science.gov (United States)

    Wang, Jeff; Kato, Fumi; Yamashita, Hiroko; Baba, Motoi; Cui, Yi; Li, Ruijiang; Oyama-Manabe, Noriko; Shirato, Hiroki

    2017-04-01

    Breast cancer is the most common invasive cancer among women and its incidence is increasing. Risk assessment is valuable and recent methods are incorporating novel biomarkers such as mammographic density. Artificial neural networks (ANN) are adaptive algorithms capable of performing pattern-to-pattern learning and are well suited for medical applications. They are potentially useful for calibrating full-field digital mammography (FFDM) for quantitative analysis. This study uses ANN modeling to estimate volumetric breast density (VBD) from FFDM on Japanese women with and without breast cancer. ANN calibration of VBD was performed using phantom data for one FFDM system. Mammograms of 46 Japanese women diagnosed with invasive carcinoma and 53 with negative findings were analyzed using the learned ANN models. ANN-estimated VBD was validated against phantom data, compared intra-patient, with qualitative composition scoring, with MRI VBD, and inter-patient with classical risk factors of breast cancer as well as cancer status. Phantom validations reached an R² of 0.993. Intra-patient validations ranged from an R² of 0.789 with VBD to 0.908 with breast volume. ANN VBD agreed well with BI-RADS scoring and MRI VBD, with R² ranging from 0.665 with VBD to 0.852 with breast volume. VBD was significantly higher in women with cancer. Associations with age, BMI, menopause, and cancer status previously reported were also confirmed. ANN modeling appears to produce reasonable measures of mammographic density validated with phantoms, with existing measures of breast density, and with classical biomarkers of breast cancer. FFDM VBD is significantly higher in Japanese women with cancer.
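
    A conceptual sketch of the calibration step, assuming (as the abstract suggests) a network that maps acquisition parameters and pixel signal to known phantom densities; the "phantom physics" below is invented and the model is generic scikit-learn, not the study's ANN:

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)
      n = 2000
      kvp = rng.uniform(26, 32, n)          # tube voltage
      thick = rng.uniform(20, 70, n)        # compressed thickness, mm
      vbd = rng.uniform(0, 1, n)            # known phantom density fraction
      signal = np.exp(-0.03 * thick * (1 + 0.8 * vbd)) * (kvp / 28)  # toy physics

      X = np.column_stack([kvp, thick, signal])
      model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                           random_state=0).fit(X, vbd)
      print("R^2 on training phantoms:", round(model.score(X, vbd), 3))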

  8. Search for the lepton flavour violating decay μ⁺ → e⁺γ with the full dataset of the MEG experiment

    Energy Technology Data Exchange (ETDEWEB)

    Baldini, A.M.; Cerri, C.; Dussoni, S.; Galli, L.; Grassi, M.; Morsani, F.; Pazzi, R.; Raffaelli, F.; Sergiampietri, F.; Signorelli, G. [Pisa Univ. (Italy); INFN Sezione di Pisa, Pisa (Italy); Bao, Y.; Egger, J.; Hildebrandt, M.; Kettle, P.R.; Mtchedilishvili, A.; Papa, A.; Ritt, S. [Paul Scherrer Institut PSI, Villigen (Switzerland); Baracchini, E. [ICEPP, The University of Tokyo, Tokyo (Japan); Bemporad, C.; Cei, F.; D'Onofrio, A.; Nicolo, D.; Tenchini, F. [Pisa Univ. (Italy). Dipt. di Fisica; INFN Sezione di Pisa, Pisa (Italy); Berg, F.; Hodge, Z.; Rutar, G. [Paul Scherrer Institut PSI, Villigen (Switzerland); Swiss Federal Institute of Technology ETH, Zurich (Switzerland); Biasotti, M.; Gatti, F.; Pizzigoni, G. [INFN Sezione di Genova, Genoa (Italy); Genoa Univ., Dipartimento di Fisica (Italy); Boca, G.; De Bari, A.; Nardo, R.; Simonetta, M. [INFN Sezione di Pavia, Pavia (Italy); Pavia Univ., Dipartimento di Fisica (Italy); Cascella, M. [INFN Sezione di Lecce, Lecce (Italy); Universita del Salento, Dipartimento di Matematica e Fisica, Lecce (Italy); University College London, Department of Physics and Astronomy, London (United Kingdom); Cattaneo, P.W.; Rossella, M. [Pavia Univ. (Italy); INFN Sezione di Pavia, Pavia (Italy); Cavoto, G.; Piredda, G.; Voena, C. [Rome Univ. 'Sapienza' (Italy); INFN Sezione di Roma, Rome (Italy); Chiarello, G.; Chiri, C.; Corvaglia, A.; Panareo, M.; Pepino, A. [INFN Sezione di Lecce, Lecce (Italy); Universita del Salento, Dipartimento di Matematica e Fisica, Lecce (Italy); De Gerone, M. [Genoa Univ. (Italy); INFN Sezione di Genova, Genoa (Italy); Doke, T. [Waseda University, Research Institute for Science and Engineering, Tokyo (Japan); Fujii, Y.; Ieki, K.; Iwamoto, T.; Kaneko, D.; Mori, Toshinori; Nakaura, S.; Nishimura, M.; Ogawa, S.; Ootani, W.; Orito, S.; Sawada, R.; Uchiyama, Y.; Yoshida, K. [ICEPP, The University of Tokyo, Tokyo (Japan); Grancagnolo, F.; Tassielli, G.F. [Universita del Salento (Italy); INFN Sezione di Lecce, Lecce (Italy); Graziosi, A.; Ripiccini, E. [INFN Sezione di Roma, Rome (Italy); Rome Univ. 'Sapienza', Dipartimento di Fisica (Italy); Grigoriev, D.N. [Budker Institute of Nuclear Physics, Russian Academy of Sciences, Novosibirsk (Russian Federation); Novosibirsk State Technical University, Novosibirsk (Russian Federation); Novosibirsk State University, Novosibirsk (Russian Federation); Haruyama, T.; Maki, A.; Mihara, S.; Nishiguchi, H.; Yamamoto, A. [KEK, High Energy Accelerator Research Organization, Tsukuba, Ibaraki (Japan); Ignatov, F.; Khazin, B.I.; Popov, A.; Yudin, Yu.V. [Budker Institute of Nuclear Physics, Russian Academy of Sciences, Novosibirsk (Russian Federation); Novosibirsk State University, Novosibirsk (Russian Federation); Kang, T.I.; Lim, G.M.A.; Molzon, W.; You, Z.; Zanello, D. [University of California, Irvine, CA (United States); Khomutov, N.; Korenchenko, A.; Kravchuk, N.; Mzavia, D. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Renga, F. [Paul Scherrer Institut PSI, Villigen (Switzerland); INFN Sezione di Roma, Rome (Italy); Rome Univ. 'Sapienza', Dipartimento di Fisica, Rome (Italy); Venturini, M. [INFN Sezione di Pisa, Pisa (Italy); Pisa Univ., Scuola Normale Superiore (Italy); Collaboration: MEG Collaboration

    2016-08-15

    The final results of the search for the lepton flavour violating decay μ⁺ → e⁺γ based on the full dataset collected by the MEG experiment at the Paul Scherrer Institut in the period 2009-2013 and totalling 7.5 × 10¹⁴ stopped muons on target are presented. No significant excess of events is observed in the dataset with respect to the expected background and a new upper limit on the branching ratio of this decay of B(μ⁺ → e⁺γ) < 4.2 × 10⁻¹³ (90% confidence level) is established, which represents the most stringent limit on the existence of this decay to date. (orig.)

  9. A feasibility study of digital tomosynthesis for volumetric dental imaging

    International Nuclear Information System (INIS)

    Cho, M K; Kim, H K; Youn, H; Kim, S S

    2012-01-01

    We present a volumetric dental tomography method that compensates for the insufficient projection views obtained from limited-angle scans. The reconstruction algorithm is based on the backprojection filtering method, which employs apodizing filters that reduce out-of-plane blur artifacts and suppress high-frequency noise. To accomplish this volumetric imaging, two volume-reconstructed datasets are synthesized from two limited-angle scans performed at orthogonal angles. The reconstructed images, obtained using less than 15% of the number of projection views needed for a full skull phantom scan, demonstrate the potential use of the proposed method in dental imaging applications. This method enables a much smaller radiation dose for the patient compared to conventional dental tomography.

  10. Dataset of Atmospheric Environment Publication in 2016, Source emission and model evaluation of formaldehyde from composite and solid wood furniture in a full-scale chamber

    Data.gov (United States)

    U.S. Environmental Protection Agency — The data presented in this data file is a product of a journal publication. The dataset contains formaldehyde air concentrations in the emission test chamber and...

  11. Proteomics dataset

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Carlsen, Thomas Gelsing; Ellingsen, Torkell

    2017-01-01

    The datasets presented in this article are related to the research articles entitled “Neutrophil Extracellular Traps in Ulcerative Colitis: A Proteome Analysis of Intestinal Biopsies” (Bennike et al., 2015 [1]), and “Proteome Analysis of Rheumatoid Arthritis Gut Mucosa” (Bennike et al., 2017 [2]). The data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifiers PXD001608 for ulcerative colitis and control samples, and PXD003082 for rheumatoid arthritis samples.

  12. Coaxial volumetric velocimetry

    Science.gov (United States)

    Schneiders, Jan F. G.; Scarano, Fulvio; Jux, Constantin; Sciacchitano, Andrea

    2018-06-01

    This study describes the working principles of the coaxial volumetric velocimeter (CVV) for wind tunnel measurements. The measurement system is derived from the concept of tomographic PIV in combination with recent developments of Lagrangian particle tracking. The main characteristic of the CVV is its small tomographic aperture and the coaxial arrangement between the illumination and imaging directions. The system consists of a multi-camera arrangement subtending only a few degrees of solid angle and a long focal depth. Contrary to established PIV practice, laser illumination is provided along the same direction as that of the camera views, reducing the optical access requirements to a single viewing direction. The laser light is expanded to illuminate the full field of view of the cameras. Such illumination and imaging conditions along a deep measurement volume dictate the use of tracer particles with a large scattering area. In the present work, helium-filled soap bubbles are used. The fundamental principles of the CVV in terms of dynamic velocity and spatial range are discussed. Maximum particle image density is shown to limit tracer particle seeding concentration and instantaneous spatial resolution. Time-averaged flow fields can be obtained at high spatial resolution by ensemble averaging. The use of the CVV for time-averaged measurements is demonstrated in two wind tunnel experiments. After comparing the CVV measurements with the potential flow in front of a sphere, the near-surface flow around a complex wind tunnel model of a cyclist is measured. The measurements yield the volumetric time-averaged velocity and vorticity field. The measurements of the streamlines in proximity to the surface give an indication of the skin-friction line pattern, which is of use in the interpretation of the surface flow topology.

  13. Proteomics dataset

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Carlsen, Thomas Gelsing; Ellingsen, Torkell

    2017-01-01

    patients (Morgan et al., 2012; Abraham and Medzhitov, 2011; Bennike, 2014) [8–10]. Therefore, we characterized the proteome of colon mucosa biopsies from 10 inflammatory bowel disease ulcerative colitis (UC) patients, 11 gastrointestinal healthy rheumatoid arthritis (RA) patients, and 10 controls. ... The data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifiers PXD001608 for ulcerative colitis and control samples, and PXD003082 for rheumatoid arthritis samples.

  14. Editorial: Datasets for Learning Analytics

    NARCIS (Netherlands)

    Dietze, Stefan; Siemens, George; Taibi, Davide; Drachsler, Hendrik

    2018-01-01

    The European LinkedUp and LACE (Learning Analytics Community Exchange) projects have been responsible for setting up a series of data challenges at the LAK conferences 2013 and 2014 around the LAK dataset. The LAK dataset consists of a rich collection of full-text publications in the domain of learning analytics.

  15. Hierarchical anatomical brain networks for MCI prediction: revisiting volumetric measures.

    Directory of Open Access Journals (Sweden)

    Luping Zhou

    Full Text Available Owing to its clinical accessibility, T1-weighted MRI (magnetic resonance imaging) has been extensively studied in the past decades for prediction of Alzheimer's disease (AD) and mild cognitive impairment (MCI). The volumes of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) are the most commonly used measurements, resulting in many successful applications. It has been widely observed that disease-induced structural changes may not occur at isolated spots, but in several inter-related regions. Therefore, for better characterization of brain pathology, we propose in this paper a means to extract inter-regional correlation-based features from local volumetric measurements. Specifically, our approach involves constructing an anatomical brain network for each subject, with each node representing a region of interest (ROI) and each edge representing the Pearson correlation of tissue volumetric measurements between ROI pairs. As second-order volumetric measurements, network features are more descriptive but also more sensitive to noise. To overcome this limitation, a hierarchy of ROIs is used to suppress noise at different scales. Pairwise interactions are considered not only for ROIs with the same scale in the same layer of the hierarchy, but also for ROIs across different scales in different layers. To address the high dimensionality problem resulting from the large number of network features, a supervised dimensionality reduction method is further employed to embed a selected subset of features into a low dimensional feature space, while at the same time preserving discriminative information. We demonstrate with experimental results the efficacy of this embedding strategy in comparison with some other commonly used approaches. In addition, although the proposed method can be easily generalized to incorporate other metrics of regional similarities, the benefits of using Pearson correlation in our application are reinforced by the experimental results.
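
    The basic building block, a network whose edges are Pearson correlations of volumetric measurements between ROI pairs, is easy to sketch for one subject; the measurement vectors below are toy stand-ins for the paper's hierarchical ROI volumes:

      import numpy as np

      rng = np.random.default_rng(2)
      n_rois, n_meas = 10, 6                        # 10 ROIs, 6 measures each
      roi_measures = rng.random((n_rois, n_meas))   # one subject's volumes

      net = np.corrcoef(roi_measures)               # ROI-by-ROI Pearson edges
      features = net[np.triu_indices(n_rois, k=1)]  # vectorised upper triangle
      print(features.shape)                         # (45,) features per subject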

  16. Volumetric composition of nanocomposites

    DEFF Research Database (Denmark)

    Madsen, Bo; Lilholt, Hans; Mannila, Juha

    2015-01-01

    is presented, using cellulose/epoxy and aluminosilicate/polylactate nanocomposites as case materials. The buoyancy method is used for the accurate measurement of material density. The accuracy of the method is determined to be high, allowing the measured nanocomposite densities to be reported with 5 significant figures. The plotting of the measured nanocomposite density as a function of the nanofibre weight content is shown to be a first good approach for assessing the porosity content of the materials. The known gravimetric composition of the nanocomposites is converted into a volumetric composition...

  17. Exploring interaction with 3D volumetric displays

    Science.gov (United States)

    Grossman, Tovi; Wigdor, Daniel; Balakrishnan, Ravin

    2005-03-01

    Volumetric displays generate true volumetric 3D images by actually illuminating points in 3D space. As a result, viewing their contents is similar to viewing physical objects in the real world. These displays provide a 360 degree field of view, and do not require the user to wear hardware such as shutter glasses or head-trackers. These properties make them a promising alternative to traditional display systems for viewing imagery in 3D. Because these displays have only recently been made available commercially (e.g., www.actuality-systems.com), their current use tends to be limited to non-interactive output-only display devices. To take full advantage of the unique features of these displays, however, it would be desirable if the 3D data being displayed could be directly interacted with and manipulated. We investigate interaction techniques for volumetric display interfaces, through the development of an interactive 3D geometric model building application. While this application area itself presents many interesting challenges, our focus is on the interaction techniques that are likely generalizable to interactive applications for other domains. We explore a very direct style of interaction where the user interacts with the virtual data using direct finger manipulations on and around the enclosure surrounding the displayed 3D volumetric image.

  18. Characterizing volumetric deformation behavior of naturally occurring bituminous sand materials

    CSIR Research Space (South Africa)

    Anochie-Boateng, Joseph

    2009-05-01

    Full Text Available ... newly proposed hydrostatic compression test procedure. The test procedure applies field loading conditions of off-road construction and mining equipment to closely simulate the volumetric deformation and stiffness behaviour of oil sand materials. Based...

  19. EPA Nanorelease Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — EPA Nanorelease Dataset. This dataset is associated with the following publication: Wohlleben, W., C. Kingston, J. Carter, E. Sahle-Demessie, S. Vazquez-Campos, B....

  20. Soil volumetric water content measurements using TDR technique

    Directory of Open Access Journals (Sweden)

    S. Vincenzi

    1996-06-01

    Full Text Available A physical model to measure some hydrological and thermal parameters in soils will be set up. The vertical profiles of volumetric water content, matric potential and temperature will be monitored in different soils. The volumetric soil water content is measured by means of the time domain reflectometry (TDR) technique. The result of a test to determine experimentally the reproducibility of the volumetric water content measurements is reported, together with the methodology and the results of the analysis of the TDR waveforms. The analysis is based on the calculation of the travel time of the TDR signal in the waveguide embedded in the soil.
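
    A common post-processing chain for TDR probes, sketched here with the widely cited Topp et al. (1980) calibration (an assumption of mine; the abstract does not say which calibration the authors use): travel time to apparent permittivity to volumetric water content.

      import numpy as np

      C = 299792458.0   # speed of light, m/s

      def vwc_from_travel_time(t, probe_len):
          """Two-way travel time t (s) on rods of length probe_len (m)."""
          ka = (C * t / (2.0 * probe_len)) ** 2      # apparent permittivity
          return (-5.3e-2 + 2.92e-2 * ka             # Topp et al. (1980) fit
                  - 5.5e-4 * ka ** 2 + 4.3e-6 * ka ** 3)

      t, L = 2.4e-9, 0.15                            # 2.4 ns over 0.15 m rods
      print(f"theta = {vwc_from_travel_time(t, L):.3f}")   # approx. 0.10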

  1. Visualization of conserved structures by fusing highly variable datasets.

    Science.gov (United States)

    Silverstein, Jonathan C; Chhadia, Ankur; Dech, Fred

    2002-01-01

    Skill, effort, and time are required to identify and visualize anatomic structures in three dimensions from radiological data. Fundamentally, automating these processes requires a technique that uses symbolic information not in the dynamic range of the voxel data. We were developing such a technique based on mutual information for automatic multi-modality image fusion (MIAMI Fuse, University of Michigan). This system previously demonstrated facility at fusing one voxel dataset with integrated symbolic structure information to a CT dataset (different scale and resolution) from the same person. The next step of development of our technique was aimed at accommodating the variability of anatomy from patient to patient by using warping to fuse our standard dataset to arbitrary patient CT datasets. A standard symbolic information dataset was created from the full color Visible Human Female by segmenting the liver parenchyma, portal veins, and hepatic veins and overwriting each set of voxels with a fixed color. Two arbitrarily selected patient CT scans of the abdomen were used for reference datasets. We used the warping functions in MIAMI Fuse to align the standard structure data to each patient scan. The key to successful fusion was the focused use of multiple warping control points that place themselves around the structure of interest automatically. The user assigns only a few initial control points to align the scans. Fusions 1 and 2 transformed the atlas with 27 points around the liver to CT1 and CT2 respectively. Fusion 3 transformed the atlas with 45 control points around the liver to CT1 and Fusion 4 transformed the atlas with 5 control points around the portal vein. The CT dataset is augmented with the transformed standard structure dataset, such that the warped structure masks are visualized in combination with the original patient dataset. This combined volume visualization is then rendered interactively in stereo on the ImmersaDesk in an immersive Virtual Reality environment.
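
    The quantity driving this kind of intensity-based fusion is mutual information estimated from a joint histogram; registration maximizes it over transform parameters. A small self-contained sketch (toy images, my own helper name):

      import numpy as np

      def mutual_information(a, b, bins=32):
          joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          p = joint / joint.sum()
          px = p.sum(axis=1, keepdims=True)
          py = p.sum(axis=0, keepdims=True)
          nz = p > 0
          return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

      rng = np.random.default_rng(3)
      img = rng.random((128, 128))
      print(mutual_information(img, img))                      # high: aligned
      print(mutual_information(img, rng.random((128, 128))))   # near zero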

  2. Visualization and computer graphics on isotropically emissive volumetric displays.

    Science.gov (United States)

    Mora, Benjamin; Maciejewski, Ross; Chen, Min; Ebert, David S

    2009-01-01

    The availability of commodity volumetric displays provides ordinary users with a new means of visualizing 3D data. Many of these displays are in the class of isotropically emissive light devices, which are designed to directly illuminate voxels in a 3D frame buffer, producing X-ray-like visualizations. While this technology can offer intuitive insight into a 3D object, the visualizations are perceptually different from what a computer graphics or visualization system would render on a 2D screen. This paper formalizes rendering on isotropically emissive displays and introduces a novel technique that emulates traditional rendering effects on isotropically emissive volumetric displays, delivering results that are much closer to what is traditionally rendered on regular 2D screens. Such a technique can significantly broaden the capability and usage of isotropically emissive volumetric displays. Our method takes a 3D dataset or object as the input, creates an intermediate light field, and outputs a special 3D volume dataset called a lumi-volume. This lumi-volume encodes approximated rendering effects in a form suitable for display with accumulative integrals along unobtrusive rays. When a lumi-volume is fed directly into an isotropically emissive volumetric display, it creates a 3D visualization with surface shading effects that are familiar to the users. The key to this technique is an algorithm for creating a 3D lumi-volume from a 4D light field. In this paper, we discuss a number of technical issues, including transparency effects due to the dimension reduction and sampling rates for light fields and lumi-volumes. We show the effectiveness and usability of this technique with a selection of experimental results captured from an isotropically emissive volumetric display, and we demonstrate its potential capability and scalability with computer-simulated high-resolution results.

  3. Soft bilateral filtering volumetric shadows using cube shadow maps.

    Directory of Open Access Journals (Sweden)

    Hatam H Ali

    Volumetric shadows often increase the realism of rendered scenes in computer graphics. Typical volumetric shadow techniques do not provide a smooth transition effect in real time while preserving crisp boundaries. This research presents a new technique for generating high-quality volumetric shadows by sampling and interpolation. Contrary to the conventional ray marching method, which requires extensive time, the proposed technique adopts downsampling in calculating the ray march. Furthermore, light scattering is computed in a high-dynamic-range buffer to generate tone mapping. Bilateral interpolation is used along view rays to smooth the transition of volumetric shadows while preserving edges. In addition, the technique applies a cube shadow map to create multiple shadows. The contribution of this technique is reducing the number of sample points used in evaluating light scattering and then introducing bilateral interpolation to improve the volumetric shadows, thereby significantly removing the inherent deficiencies of shadow maps. The technique yields soft, high-quality volumetric shadows with good performance, showing its potential for interactive applications.
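
    As a rough illustration of the upsampling step described above (not the paper's exact filter), the following 1D sketch interpolates a coarsely ray-marched scattering signal bilaterally, weighting coarse samples by spatial proximity and depth similarity so that shadow edges at depth discontinuities stay crisp. All signal and parameter values are assumptions.

```python
import numpy as np

def joint_bilateral_upsample(low_res, low_depth, high_depth,
                             sigma_s=1.0, sigma_d=0.1):
    """Upsample a coarsely ray-marched scattering signal, weighting
    coarse samples by spatial proximity and depth similarity so that
    edges at depth discontinuities are preserved (1D illustration)."""
    scale = len(high_depth) / len(low_res)
    low_pos = (np.arange(len(low_res)) + 0.5) * scale
    out = np.empty(len(high_depth))
    for i, d in enumerate(high_depth):
        ds = np.abs(low_pos - (i + 0.5))                        # spatial distance
        w = (np.exp(-0.5 * (ds / (sigma_s * scale))**2)         # spatial weight
             * np.exp(-0.5 * ((low_depth - d) / sigma_d)**2))   # depth weight
        out[i] = (w * low_res).sum() / max(w.sum(), 1e-8)
    return out

# Toy usage: 8 coarse scattering samples upsampled to 32 pixels,
# guided by a depth buffer with one discontinuity.
low = np.array([0.1, 0.1, 0.1, 0.9, 0.9, 0.2, 0.2, 0.2])
low_d = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 3.0])
high_d = np.repeat(low_d, 4)
print(joint_bilateral_upsample(low, low_d, high_d).round(2))
```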

  4. Aaron Journal article datasets

    Data.gov (United States)

    U.S. Environmental Protection Agency — All figures used in the journal article are in netCDF format. This dataset is associated with the following publication: Sims, A., K. Alapaty , and S. Raman....

  5. Integrated Surface Dataset (Global)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Integrated Surface (ISD) Dataset (ISD) is composed of worldwide surface weather observations from over 35,000 stations, though the best spatial coverage is...

  6. Control Measure Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — The EPA Control Measure Dataset is a collection of documents describing air pollution control available to regulated facilities for the control and abatement of air...

  7. National Hydrography Dataset (NHD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The National Hydrography Dataset (NHD) is a feature-based database that interconnects and uniquely identifies the stream segments or reaches that comprise the...

  8. Market Squid Ecology Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains ecological information collected on the major adult spawning and juvenile habitats of market squid off California and the US Pacific Northwest....

  9. Tables and figure datasets

    Data.gov (United States)

    U.S. Environmental Protection Agency — Soil and air concentrations of asbestos in Sumas study. This dataset is associated with the following publication: Wroble, J., T. Frederick, A. Frame, and D....

  10. Volumetric image interpretation in radiology: scroll behavior and cognitive processes.

    Science.gov (United States)

    den Boer, Larissa; van der Schaaf, Marieke F; Vincken, Koen L; Mol, Chris P; Stuijfzand, Bobby G; van der Gijp, Anouk

    2018-05-16

    The interpretation of medical images is a primary task for radiologists. Besides two-dimensional (2D) images, current imaging technologies allow for volumetric display of medical images. Whereas current radiology practice increasingly uses volumetric images, the majority of studies on medical image interpretation are conducted on 2D images. The current study aimed to gain deeper insight into the volumetric image interpretation process by examining this process in twenty radiology trainees who all completed four volumetric image cases. Two types of data were obtained, concerning scroll behavior and think-aloud data. Types of scroll behavior were oscillations, half runs, full runs, image manipulations, and interruptions. Think-aloud data were coded by a framework of knowledge and skills in radiology comprising three cognitive processes: perception, analysis, and synthesis. Relating scroll behavior to cognitive processes showed that oscillations and half runs coincided more often with analysis and synthesis than full runs, whereas full runs coincided more often with perception than oscillations and half runs. Interruptions were characterized by synthesis, and image manipulations by perception. In addition, we investigated relations between cognitive processes and found an overall bottom-up way of reasoning with dynamic interactions between cognitive processes, especially between perception and analysis. In sum, our results highlight the dynamic interactions between these processes and the grounding of cognitive processes in scroll behavior. They suggest that the types of scroll behavior are relevant to describing how radiologists interact with and manipulate volumetric images.

  11. A Combined Random Forests and Active Contour Model Approach for Fully Automatic Segmentation of the Left Atrium in Volumetric MRI

    Directory of Open Access Journals (Sweden)

    Chao Ma

    2017-01-01

    Segmentation of the left atrium (LA) from cardiac magnetic resonance imaging (MRI) datasets is of great importance for image-guided atrial fibrillation ablation, LA fibrosis quantification, and cardiac biophysical modelling. However, automated LA segmentation from cardiac MRI is challenging due to limited image resolution, considerable variability in anatomical structures across subjects, and the dynamic motion of the heart. In this work, we propose a combined random forests (RFs) and active contour model (ACM) approach for fully automatic segmentation of the LA from cardiac volumetric MRI. Specifically, we employ the RFs within an autocontext scheme to effectively integrate contextual and appearance information from multisource images for inferring the LA shape. The inferred shape is then incorporated into a volume-scalable ACM to further improve the segmentation accuracy. We validated the proposed method on the cardiac volumetric MRI datasets from the STACOM 2013 and HVSMR 2016 databases and showed that it outperforms other recent automated LA segmentation methods. Validation metrics, the average Dice coefficient (DC) and the average surface-to-surface distance (S2S), were computed as 0.9227±0.0598 and 1.14±1.205 mm, versus 0.6222-0.878 and 1.34-8.72 mm obtained by other methods, respectively.
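
    The sketch below mimics the pipeline's first stage on a toy volume: a voxelwise random forest produces a probability map for the structure of interest, which is then cleaned with simple morphology as a crude stand-in for the active contour refinement. The features, volume, and thresholds are invented for illustration; the actual autocontext RFs and volume-scalable ACM are not reproduced.

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Toy volume: a bright sphere (the "atrium") in noise.
shape = (32, 32, 32)
zz, yy, xx = np.indices(shape)
truth = ((zz - 16)**2 + (yy - 16)**2 + (xx - 16)**2) < 8**2
vol = truth * 1.0 + rng.normal(0, 0.4, shape)

# Voxelwise features: raw intensity plus a smoothed (contextual) intensity.
feats = np.stack([vol.ravel(),
                  ndimage.gaussian_filter(vol, 2).ravel()], axis=1)

rf = RandomForestClassifier(n_estimators=50, random_state=0)
rf.fit(feats, truth.ravel())                  # toy: trained on the same volume
prob = rf.predict_proba(feats)[:, 1].reshape(shape)

# Stand-in for the active-contour refinement: threshold plus morphology.
mask = ndimage.binary_closing(prob > 0.5, iterations=2)
overlap = (mask & truth).sum() / (mask | truth).sum()
print(f"volumetric overlap (Jaccard) = {overlap:.3f}")
```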

  12. COMPARISON OF VOLUMETRIC REGISTRATION ALGORITHMS FOR TENSOR-BASED MORPHOMETRY.

    Science.gov (United States)

    Villalon, Julio; Joshi, Anand A; Toga, Arthur W; Thompson, Paul M

    2011-01-01

    Nonlinear registration of brain MRI scans is often used to quantify morphological differences associated with disease or genetic factors. Recently, surface-guided fully 3D volumetric registrations have been developed that combine intensity-guided volume registrations with cortical surface constraints. In this paper, we compare one such algorithm to two popular high-dimensional volumetric registration methods: large-deformation viscous fluid registration, formulated in a Riemannian framework, and the diffeomorphic "Demons" algorithm. We performed an objective morphometric comparison, by using a large MRI dataset from 340 young adult twin subjects to examine 3D patterns of correlations in anatomical volumes. Surface-constrained volume registration gave greater effect sizes for detecting morphometric associations near the cortex, while the other two approaches gave greater effects sizes subcortically. These findings suggest novel ways to combine the advantages of multiple methods in the future.

  13. Dataset reporting the perceiver identification rates of basic emotions expressed by male, female and ambiguous gendered walkers in full-light, point-light and synthetically modelled point-light walkers.

    Science.gov (United States)

    Halovic, Shaun; Kroos, Christian

    2017-12-01

    This data set describes the experimental data collected and reported in the research article "Walking my way? Walker gender and display format confounds the perception of specific emotions" (Halovic and Kroos, in press) [1]. The data set represents perceiver identification rates for different emotions (happiness, sadness, anger, fear and neutral) as displayed by full-light, point-light and synthetic point-light walkers. The perceiver identification scores have been transformed into Ht rates, which represent proportions/percentages of correct identifications above what would be expected by chance. The data set also provides Ht rates separately for male, female and ambiguously gendered walkers.
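
    The Ht transform is described as the proportion of correct identifications above chance. One standard chance correction with this flavor is sketched below; it is an assumption for illustration, not necessarily the authors' exact formula.

```python
def chance_corrected_rate(p_correct, n_choices=5):
    """One common chance correction: proportion correct above what
    guessing among n alternatives would give, rescaled to [0, 1].
    An illustrative transform, not necessarily the paper's exact Ht."""
    chance = 1.0 / n_choices
    return (p_correct - chance) / (1.0 - chance)

# Five emotion alternatives; 60% raw correct is 0.5 above chance.
print(chance_corrected_rate(0.60, n_choices=5))
```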

  14. Volumetric polymerization shrinkage of contemporary composite resins

    Directory of Open Access Journals (Sweden)

    Halim Nagem Filho

    2007-10-01

    The polymerization shrinkage of composite resins may negatively affect the clinical outcome of the restoration. Extensive research has been carried out to develop new formulations of composite resins that provide good handling characteristics and some dimensional stability during polymerization. The purpose of this study was to analyze, in vitro, the magnitude of the volumetric polymerization shrinkage of 7 contemporary composite resins (Definite, Suprafill, SureFil, Filtek Z250, Fill Magic, Alert, and Solitaire) to determine whether there are differences among these materials. The tests were conducted with a precision of 0.1 mg. The volumetric shrinkage was measured by hydrostatic weighing before and after polymerization and calculated by known mathematical equations. One-way ANOVA (α = 0.05) was used to determine statistically significant differences in volumetric shrinkage among the tested composite resins. Suprafill (1.87±0.01) and Definite (1.89±0.01) shrank significantly less than the other composite resins. SureFil (2.01±0.06), Filtek Z250 (1.99±0.03), and Fill Magic (2.02±0.02) presented intermediate levels of polymerization shrinkage. Alert and Solitaire presented the highest degrees of polymerization shrinkage. Knowing the polymerization shrinkage rates of commercially available composite resins, the dentist can choose between using composite resins with lower polymerization shrinkage rates or adopting technical or operational procedures to minimize the adverse effects of resin contraction during light-activation.
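
    The "known mathematical equations" are not spelled out in the abstract. A plausible reading, sketched below with invented numbers, is Archimedes' principle for density from the two weighings and the relative density change for the volume loss.

```python
def density_by_hydrostatic_weighing(m_air, m_water, rho_water=0.9982):
    """Archimedes: density (g/cm^3) from a sample's mass in air and
    apparent mass in water; the buoyancy equals the displaced water."""
    return m_air * rho_water / (m_air - m_water)

def volumetric_shrinkage(rho_uncured, rho_cured):
    """Percent volume lost on polymerization at fixed mass:
    V = m / rho, so dV/V = 1 - rho_uncured / rho_cured."""
    return (1.0 - rho_uncured / rho_cured) * 100.0

# Illustrative numbers, not the paper's measurements.
rho_u = density_by_hydrostatic_weighing(m_air=1.000, m_water=0.492)
rho_c = density_by_hydrostatic_weighing(m_air=1.000, m_water=0.502)
print(f"shrinkage = {volumetric_shrinkage(rho_u, rho_c):.2f}%")
```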

  15. Isfahan MISP Dataset.

    Science.gov (United States)

    Kashefpur, Masoud; Kafieh, Rahele; Jorjandi, Sahar; Golmohammadi, Hadis; Khodabande, Zahra; Abbasi, Mohammadreza; Teifuri, Nilufar; Fakharzadeh, Ali Akbar; Kashefpoor, Maryam; Rabbani, Hossein

    2017-01-01

    An online repository was introduced to share clinical ground truth with the public and provide open access for researchers to evaluate their computer-aided algorithms. PHP was used for web programming and MySQL for database management. The website was entitled "biosigdata.com." It is a fast, secure, and easy-to-use online database for medical signals and images. Freely registered users can download the datasets and can also share their own supplementary materials while maintaining their privacy (citation and fee). Commenting is also available for all datasets, and automatic sitemap and semi-automatic SEO indexing have been set up for the site. A comprehensive list of available websites for medical datasets is also presented as a Supplementary (http://journalonweb.com/tempaccess/4800.584.JMSS_55_16I3253.pdf).

  16. Interactive visualization and analysis of multimodal datasets for surgical applications.

    Science.gov (United States)

    Kirmizibayrak, Can; Yim, Yeny; Wakid, Mike; Hahn, James

    2012-12-01

    Surgeons use information from multiple sources when making surgical decisions. These include volumetric datasets (such as CT, PET, MRI, and their variants), 2D datasets (such as endoscopic videos), and vector-valued datasets (such as computer simulations). Presenting all the information to the user in an effective manner is a challenging problem. In this paper, we present a visualization approach that displays the information from various sources in a single coherent view. The system allows the user to explore and manipulate volumetric datasets, display analysis of dataset values in local regions, combine 2D and 3D imaging modalities and display results of vector-based computer simulations. Several interaction methods are discussed: in addition to traditional interfaces including mouse and trackers, gesture-based natural interaction methods are shown to control these visualizations with real-time performance. An example of a medical application (medialization laryngoplasty) is presented to demonstrate how the combination of different modalities can be used in a surgical setting with our approach.

  17. Mridangam stroke dataset

    OpenAIRE

    CompMusic

    2014-01-01

    The audio examples were recorded from a professional Carnatic percussionist under semi-anechoic studio conditions by Akshay Anantapadmanabhan, using SM-58 microphones and an H4n ZOOM recorder. The audio was sampled at 44.1 kHz and stored as 16-bit wav files. The dataset can be used for training models for each Mridangam stroke. A detailed description of the Mridangam and its strokes can be found in the paper below. A part of the dataset was used in the following paper. Akshay Anantapadman...

  18. The GTZAN dataset

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    The GTZAN dataset appears in at least 100 published works, and is the most-used public dataset for evaluation in machine listening research for music genre recognition (MGR). Our recent work, however, shows GTZAN has several faults (repetitions, mislabelings, and distortions), which challenge...... of GTZAN, and provide a catalog of its faults. We review how GTZAN has been used in MGR research, and find few indications that its faults have been known and considered. Finally, we rigorously study the effects of its faults on evaluating five different MGR systems. The lesson is not to banish GTZAN...

  19. Aspects of volumetric efficiency measurement for reciprocating engines

    Directory of Open Access Journals (Sweden)

    Pešić Radivoje B.

    2013-01-01

    The volumetric efficiency significantly influences engine output. Both the design and the dimensions of the intake and exhaust systems have a large impact on volumetric efficiency. Experimental equipment for measuring the airflow through the engine, which is placed in the intake system, may affect the results of measurements and distort the real picture of the impact of individual structural factors. This paper deals with the problems of experimental determination of intake airflow using orifice plates and the influence of the orifice plate diameter on the results of the measurements. The problems of airflow measurements through a multi-process Otto/Diesel engine were analyzed. An original method for determining volumetric efficiency was developed based on in-cylinder pressure measurement during motored operation, and appropriate calibration of the experimental procedure was performed. Good correlation was found between the results of the original method for determining volumetric efficiency and the results of the theoretical model used to investigate the influence of intake pipe length on volumetric efficiency. [Acknowledgment: the paper is the result of research within project TR 35041, financed by the Ministry of Science and Technological Development of the Republic of Serbia.]
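
    For reference, a sketch of the two calculations the record revolves around: mass flow from the textbook orifice-plate equation, and the resulting volumetric efficiency for a four-stroke engine. All coefficients and operating points are illustrative, not the paper's.

```python
import math

def orifice_mass_flow(cd, d_orifice, delta_p, rho_air):
    """Textbook incompressible orifice equation:
    m_dot = Cd * A * sqrt(2 * rho * delta_p), SI units."""
    area = math.pi * (d_orifice / 2.0)**2
    return cd * area * math.sqrt(2.0 * rho_air * delta_p)

def volumetric_efficiency(m_dot, v_displaced, rpm, rho_ref):
    """Measured mass flow divided by the mass that would fill the
    displaced volume at reference density; a four-stroke engine has
    one intake event per cylinder every two revolutions."""
    m_dot_ideal = rho_ref * v_displaced * (rpm / 60.0) / 2.0
    return m_dot / m_dot_ideal

# Illustrative values only: 40 mm orifice, 800 Pa drop, 1.6 L engine.
m_dot = orifice_mass_flow(cd=0.62, d_orifice=0.04, delta_p=800.0, rho_air=1.2)
print(f"eta_v = {volumetric_efficiency(m_dot, 1.6e-3, 3000, 1.2):.2f}")
```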

  20. Dataset - Adviesregel PPL 2010

    NARCIS (Netherlands)

    Evert, van F.K.; Schans, van der D.A.; Geel, van W.C.A.; Slabbekoorn, J.J.; Booij, R.; Jukema, J.N.; Meurs, E.J.J.; Uenk, D.

    2011-01-01

    This dataset contains experimental data from a number of field experiments with potato in The Netherlands (Van Evert et al., 2011). The data are presented as an SQL dump of a PostgreSQL database (version 8.4.4). An outline of the entity-relationship diagram of the database is given in an

  1. Volumetric multimodality neural network for brain tumor segmentation

    Science.gov (United States)

    Silvana Castillo, Laura; Alexandra Daza, Laura; Carlos Rivera, Luis; Arbeláez, Pablo

    2017-11-01

    Brain lesion segmentation is one of the hardest tasks in computer vision, particularly in the medical field. We present a convolutional neural network that produces a semantic segmentation of brain tumors and is capable of processing volumetric data along with information from multiple MRI modalities at the same time. This results in the ability to learn from small training datasets and highly imbalanced data. Our method is based on DeepMedic, the state of the art in brain lesion segmentation. We develop a new architecture with more convolutional layers, organized in three parallel pathways with different input resolutions, and additional fully connected layers. We tested our method on the 2015 BraTS Challenge dataset, reaching an average Dice coefficient of 84%, whereas the standard DeepMedic implementation reached 74%.
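
    For reference, the Dice coefficient quoted above is a simple overlap metric; a minimal implementation over binary volumes follows, with toy masks.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|), the overlap metric quoted above."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Toy usage with two overlapping 3D masks.
a = np.zeros((8, 8, 8), bool); a[2:6, 2:6, 2:6] = True
b = np.zeros((8, 8, 8), bool); b[3:7, 3:7, 3:7] = True
print(f"Dice = {dice_coefficient(a, b):.3f}")
```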

  2. SIMADL: Simulated Activities of Daily Living Dataset

    Directory of Open Access Journals (Sweden)

    Talal Alshammari

    2018-04-01

    With the realisation of the Internet of Things (IoT) paradigm, the analysis of Activities of Daily Living (ADLs) in a smart home environment is becoming an active research domain. The existence of representative datasets is a key requirement to advance research in smart home design. Such datasets are an integral part of the visualisation of new smart home concepts as well as the validation and evaluation of emerging machine learning models. Machine learning techniques that can learn ADLs from sensor readings are used to classify, predict and detect anomalous patterns. Such techniques require data that represent relevant smart home scenarios for training, testing and validation. However, the development of such machine learning techniques is limited by the lack of real smart home datasets, due to the excessive cost of building real smart homes. This paper provides two datasets, one for classification and one for anomaly detection. The datasets were generated using OpenSHS (Open Smart Home Simulator), a simulation software package for dataset generation. OpenSHS records the daily activities of a participant within a virtual environment. Seven participants simulated their ADLs for different contexts, e.g., weekdays, weekends, mornings and evenings. Eighty-four files in total were generated, representing approximately 63 days' worth of activities. Forty-two files of ADL classifications make up the classification dataset, and the other forty-two files, into which simulated anomalous patterns were injected, make up the anomaly detection dataset.

  3. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    Science.gov (United States)

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of the volumetric overlap metric, by comparison with the ground-truth segmentation performed by a radiologist.

  4. QSAR ligand dataset for modelling mutagenicity, genotoxicity, and rodent carcinogenicity

    Directory of Open Access Journals (Sweden)

    Davy Guan

    2018-04-01

    Five datasets were constructed from ligand and bioassay result data from the literature. These datasets include bioassay results from the Ames mutagenicity assay, the Greenscreen GADD-45a-GFP assay, the Syrian Hamster Embryo (SHE) assay, and the two-year rat carcinogenicity assay. These datasets provide information about chemical mutagenicity, genotoxicity and carcinogenicity.

  5. Simulation of Smart Home Activity Datasets

    Directory of Open Access Journals (Sweden)

    Jonathan Synnott

    2015-06-01

    A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches, yet access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation.

  6. Hologlyphics: volumetric image synthesis performance system

    Science.gov (United States)

    Funk, Walter

    2008-02-01

    This paper describes a novel volumetric image synthesis system and artistic technique which generate moving volumetric images in real time, integrated with music. The system, called the Hologlyphic Funkalizer, is performance based, wherein the images and sound are controlled by a live performer for the purposes of entertaining a live audience and creating a performance art form unique to volumetric and autostereoscopic images. While currently configured for a specific parallax barrier display, the Hologlyphic Funkalizer's architecture is completely adaptable to various volumetric and autostereoscopic display technologies. Sound is distributed through a multi-channel audio system; currently a quadraphonic speaker setup is implemented. The system controls volumetric image synthesis and the production of music and spatial sound via acoustic analysis and human gestural control, using a dedicated control panel, motion sensors, and multiple musical keyboards. Music can be produced by external acoustic instruments, pre-recorded sounds or custom audio synthesis integrated with the volumetric image synthesis. Aspects of the sound can control the evolution of images and vice versa. Sounds can be associated and interact with images; for example, voice synthesis can be combined with an animated volumetric mouth, where nuances of generated speech modulate the mouth's expressiveness. Different images can be sent to up to 4 separate displays. The system applies many novel volumetric special effects and extends several film and video special effects into the volumetric realm. Extensive and varied content has been developed and shown to live audiences by a live performer. Real-world applications will be explored, with feedback on the human factors.

  7. Volumetric velocimetry for fluid flows

    Science.gov (United States)

    Discetti, Stefano; Coletti, Filippo

    2018-04-01

    In recent years, several techniques have been introduced that are capable of extracting 3D three-component velocity fields in fluid flows. Fast-paced developments in both hardware and processing algorithms have generated a diverse set of methods, with a growing range of applications in flow diagnostics. This has been further enriched by the increasingly marked trend of hybridization, in which the differences between techniques are fading. In this review, we carry out a survey of the prominent methods, including optical techniques and approaches based on medical imaging. An overview of each is given with an example of an application from the literature, while focusing on their respective strengths and challenges. A framework for the evaluation of velocimetry performance in terms of dynamic spatial range is discussed, along with technological trends and emerging strategies to exploit 3D data. While critical challenges still exist, these observations highlight how volumetric techniques are transforming experimental fluid mechanics, and that the possibilities they offer have just begun to be explored.

  8. Volumetric Visualization of Human Skin

    Science.gov (United States)

    Kawai, Toshiyuki; Kurioka, Yoshihiro

    We propose a modeling and rendering technique for human skin, which can provide realistic color, gloss and translucency for various applications in computer graphics. Our method is based on a volumetric representation of the structure inside the skin. Our model consists of the stratum corneum and three layers of pigments. The stratum corneum itself has a layered structure in which the incident light is reflected, refracted and diffused. Each layer of pigment contains carotene, melanin or hemoglobin. The density distributions of the pigments, which define the color of each layer, can be supplied as voxel values. Surface normals of upper-side voxels are perturbed to produce bumps and lines on the skin. We apply a ray tracing approach to this model to obtain the rendered image. Multiple scattering in the stratum corneum and the reflective and absorptive spectra of the pigments are considered. We also include a Fresnel term to calculate the specular component for the glossy surface of the skin. Some examples of rendered images are shown, which successfully visualize human skin.
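
    The Fresnel term mentioned above is commonly evaluated with Schlick's approximation; the sketch below uses that form with an assumed refractive index for the skin surface, which may differ from what the authors used.

```python
def schlick_fresnel(cos_theta, n1=1.0, n2=1.44):
    """Schlick's approximation to Fresnel reflectance:
    F = F0 + (1 - F0) * (1 - cos(theta))^5, with F0 from the
    refractive indices. n2 = 1.44 is an assumed value for skin."""
    f0 = ((n1 - n2) / (n1 + n2))**2
    return f0 + (1.0 - f0) * (1.0 - cos_theta)**5

# Reflectance rises sharply at grazing angles, producing skin gloss.
for cos_t in (1.0, 0.5, 0.1):
    print(f"cos(theta) = {cos_t:.1f}  F = {schlick_fresnel(cos_t):.3f}")
```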

  9. DIFFERENTIAL ANALYSIS OF VOLUMETRIC STRAINS IN POROUS MATERIALS IN TERMS OF WATER FREEZING

    Directory of Open Access Journals (Sweden)

    Rusin Z.

    2013-06-01

    The paper presents the differential analysis of volumetric strain (DAVS). The method allows measurements of the volumetric deformations of capillary-porous materials caused by the water-ice phase change. The VSE (volumetric strain effect) indicator, which under certain conditions can be interpreted as the minimum degree of phase change of the water contained in the material pores, is proposed. The DAVS test results for three materials with diversified microstructure (clinker brick, calcium-silicate brick and Portland cement mortar) were compared with the pore characteristics obtained by mercury intrusion porosimetry.

  10. Extended Kalman filtering for continuous volumetric MR-temperature imaging.

    Science.gov (United States)

    Denis de Senneville, Baudouin; Roujol, Sébastien; Hey, Silke; Moonen, Chrit; Ries, Mario

    2013-04-01

    Real-time magnetic resonance (MR) thermometry has evolved into the method of choice for the guidance of high-intensity focused ultrasound (HIFU) interventions. For this role, MR thermometry should preferably have a high temporal and spatial resolution and allow observing the temperature over the entire targeted area and its vicinity with high accuracy. In addition, the precision of real-time MR thermometry for therapy guidance is generally limited by the available signal-to-noise ratio (SNR) and the influence of physiological noise. MR-guided HIFU would benefit from large-coverage volumetric temperature maps, including characterization of volumetric heating trajectories as well as near- and far-field heating. In this paper, continuous volumetric MR temperature monitoring was obtained as follows. The targeted area was continuously scanned during the heating process by a multi-slice sequence. Measured data and a priori knowledge of the 3-D data, derived from a forecast based on a physical model, were combined using an extended Kalman filter (EKF). The proposed reconstruction improved the temperature measurement resolution and precision while maintaining guaranteed output accuracy. The method was evaluated experimentally ex vivo on a phantom, and in vivo on a porcine kidney, using HIFU heating. In the in vivo experiment, it allowed the reconstruction from a spatio-temporally under-sampled dataset (with an update rate for each voxel of 1.143 s) of a 3-D dataset covering a field of view of 142.5×285×54 mm³ with a voxel size of 3×3×6 mm³ and a temporal resolution of 0.127 s. The method also provided noise reduction, while having a minimal impact on accuracy and latency.
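
    A scalar, linear Kalman filter per voxel conveys the flavor of the approach: forecast from a model, then correct only at time steps where the multi-slice sequence actually visited the voxel. The exponential-decay forecast and all noise constants below are toy assumptions standing in for the paper's physical bioheat model.

```python
import numpy as np

def kalman_temperature(measurements, decay=0.95, q=0.05, r=1.0):
    """Scalar Kalman filter: forecast x <- decay * x (a toy stand-in for
    the physical model), then update with a noisy temperature measurement
    when one is available (None means the voxel was not scanned)."""
    x, p = 0.0, 1.0                 # temperature-rise estimate and variance
    history = []
    for z in measurements:
        x, p = decay * x, decay**2 * p + q   # predict
        if z is not None:                    # update only when measured
            k = p / (p + r)                  # Kalman gain
            x, p = x + k * (z - x), (1 - k) * p
        history.append(x)
    return np.array(history)

# A voxel scanned every 4th time step (multi-slice under-sampling).
rng = np.random.default_rng(0)
true = 10 * np.exp(-0.05 * np.arange(40))
z = [t + rng.normal(0, 1) if i % 4 == 0 else None for i, t in enumerate(true)]
print(kalman_temperature(z).round(1))
```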

  11. National Elevation Dataset

    Science.gov (United States)

    ,

    2002-01-01

    The National Elevation Dataset (NED) is a new raster product assembled by the U.S. Geological Survey. NED is designed to provide national elevation data in a seamless form with a consistent datum, elevation unit, and projection. Data corrections were made in the NED assembly process to minimize artifacts, perform edge matching, and fill sliver areas of missing data. NED has a resolution of one arc-second (approximately 30 meters) for the conterminous United States, Hawaii, Puerto Rico and the island territories, and a resolution of two arc-seconds for Alaska. NED data sources have a variety of elevation units, horizontal datums, and map projections. In the NED assembly process the elevation values are converted to decimal meters as a consistent unit of measure, NAD83 is consistently used as the horizontal datum, and all the data are recast in a geographic projection. Older DEMs produced by methods that are now obsolete have been filtered during the NED assembly process to minimize artifacts that are commonly found in data produced by these methods. Artifact removal greatly improves the quality of the slope, shaded-relief, and synthetic drainage information that can be derived from the elevation data. Figure 2 illustrates the results of this artifact removal filtering. NED processing also includes steps to adjust values where adjacent DEMs do not match well, and to fill sliver areas of missing data between DEMs. These processing steps ensure that NED has no void areas and that artificial discontinuities have been minimized. The artifact removal filtering process does not eliminate all of the artifacts; in areas where the only available DEM was produced by older methods, "striping" may still occur.

  12. Analytic Intermodel Consistent Modeling of Volumetric Human Lung Dynamics.

    Science.gov (United States)

    Ilegbusi, Olusegun; Seyfi, Behnaz; Neylon, John; Santhanam, Anand P

    2015-10-01

    The human lung undergoes breathing-induced deformation in the form of inhalation and exhalation. Modeling the dynamics is numerically complicated by the lack of information on lung elastic behavior and on the fluid-structure interactions between air and the tissue. A mathematical method is developed to integrate deformation results from deformable image registration (DIR) and physics-based modeling approaches in order to represent consistent volumetric lung dynamics. The computational fluid dynamics (CFD) simulation assumes the lung is a poro-elastic medium with a spatially distributed elastic property. Simulation is performed on a 3D lung geometry reconstructed from a four-dimensional computed tomography (4DCT) dataset of a human subject. The heterogeneous Young's modulus (YM) is estimated from a linear elastic deformation model with the same lung geometry and the 4D lung DIR. The deformation obtained from the CFD is then coupled with the displacement obtained from the 4D lung DIR by means of the Tikhonov regularization (TR) algorithm. The numerical results include 4DCT registration, CFD, and optimal displacement data, which collectively provide a consistent estimate of the volumetric lung dynamics. The fusion method is validated by comparing the optimal displacement with the results obtained from the 4DCT registration.
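
    The abstract does not give the exact Tikhonov formulation. A minimal per-voxel reading, sketched below with toy displacement fields, penalizes deviation from the DIR measurement while pulling the solution toward the CFD prediction, which yields a closed-form blend; the weighting and data are assumptions.

```python
import numpy as np

def tikhonov_fusion(u_dir, u_cfd, lam=0.5):
    """Minimize ||u - u_dir||^2 + lam * ||u - u_cfd||^2 per voxel.
    Setting the gradient to zero gives u = (u_dir + lam*u_cfd)/(1 + lam):
    a model-regularized blend of registration-measured and
    physics-predicted displacement (an illustrative reading of TR)."""
    return (u_dir + lam * u_cfd) / (1.0 + lam)

# Toy 1D displacement fields (mm): noisy DIR vs. a smooth CFD prediction.
rng = np.random.default_rng(0)
u_cfd = np.linspace(0, 5, 11)
u_dir = u_cfd + rng.normal(0, 0.5, 11)
print(tikhonov_fusion(u_dir, u_cfd).round(2))
```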

  13. NP-PAH Interaction Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dataset presents concentrations of organic pollutants, such as polyaromatic hydrocarbon compounds, in water samples. Water samples of known volume and concentration...

  14. 3DSEM: A 3D microscopy dataset

    Directory of Open Access Journals (Sweden)

    Ahmad P. Tafti

    2016-03-01

    The Scanning Electron Microscope (SEM), as a 2D imaging instrument, has been widely used in many scientific disciplines, including the biological, mechanical, and materials sciences, to determine the surface attributes of microscopic objects. However, SEM micrographs remain 2D images. To effectively measure and visualize surface properties, we need to truly restore the 3D shape model from 2D SEM images. Having 3D surfaces would provide the anatomic shape of micro-samples and allow for quantitative measurements and informative visualization of the specimens being investigated. 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples. Keywords: 3D microscopy dataset, 3D microscopy vision, 3D SEM surface reconstruction, Scanning Electron Microscope (SEM)

  15. Genomics dataset of unidentified disclosed isolates

    Directory of Open Access Journals (Sweden)

    Bhagwan N. Rekadwad

    2016-09-01

    Analysis of DNA sequences is necessary for the higher hierarchical classification of organisms. It gives clues about the characteristics of organisms and their taxonomic position. This dataset was chosen to find complexities in the unidentified DNA in the disclosed patents. A total of 17 unidentified DNA sequences were thoroughly analyzed. Quick response (QR) codes were generated; the QR codes are helpful for quick identification of isolates. The AT/GC content of the DNA sequences was analyzed; AT/GC content is helpful for studying the sequences' stability at different temperatures. Additionally, a dataset on cleavage codes and enzyme codes, studied under the restriction digestion study and helpful for performing studies using short DNA sequences, is reported. The dataset disclosed here is new revelatory data for the exploration of unique DNA sequences for evaluation, identification, comparison and analysis. Keywords: BioLABs, Blunt ends, Genomics, NEB cutter, Restriction digestion, Short DNA sequences, Sticky ends
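
    The AT/GC analysis reduces to a one-line computation; a minimal sketch follows, using a hypothetical sequence rather than one of the 17 patent isolates.

```python
def gc_content(seq):
    """Fraction of G and C bases; higher GC generally means a more
    thermally stable duplex, the property the dataset examines."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# Hypothetical sequence for illustration only.
print(f"GC = {gc_content('ATGCGCGTATTACGCGGC'):.2%}")
```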

  16. Potential Applications of Flat-Panel Volumetric CT in Morphologic, Functional Small Animal Imaging

    Directory of Open Access Journals (Sweden)

    Susanne Greschus

    2005-08-01

    Noninvasive radiologic imaging has recently gained considerable interest in basic and preclinical research for monitoring disease progression and therapeutic efficacy. In this report, we introduce flat-panel volumetric computed tomography (fpVCT) as a powerful new tool for noninvasive imaging of different organ systems in preclinical research. The three-dimensional visualization that is achieved by isotropic high-resolution datasets is illustrated for the skeleton, chest, abdominal organs, and brain of mice. The high image quality of chest scans enables the visualization of small lung nodules in an orthotopic lung cancer model and the reliable imaging of therapy side effects such as lung fibrosis. Using contrast-enhanced scans, fpVCT displayed the vascular trees of the brain, liver, and kidney down to the subsegmental level. Functional application of fpVCT in dynamic contrast-enhanced scans of the rat brain delivered physiologically reliable data on perfusion and tissue blood volume. Beyond the scanning of small animal models demonstrated here, fpVCT provides the ability to image animals up to the size of primates.

  17. Open University Learning Analytics dataset.

    Science.gov (United States)

    Kuzilek, Jakub; Hlosta, Martin; Zdrahal, Zdenek

    2017-11-28

    Learning Analytics focuses on the collection and analysis of learners' data to improve their learning experience by providing informed guidance and to optimise learning materials. To support the research in this area we have developed a dataset, containing data from courses presented at the Open University (OU). What makes the dataset unique is the fact that it contains demographic data together with aggregated clickstream data of students' interactions in the Virtual Learning Environment (VLE). This enables the analysis of student behaviour, represented by their actions. The dataset contains the information about 22 courses, 32,593 students, their assessment results, and logs of their interactions with the VLE represented by daily summaries of student clicks (10,655,280 entries). The dataset is freely available at https://analyse.kmi.open.ac.uk/open_dataset under a CC-BY 4.0 license.

  18. Pattern Analysis On Banking Dataset

    Directory of Open Access Journals (Sweden)

    Amritpal Singh

    2015-06-01

    Everyday refinement and development of technology has led to increased competition between technology companies and to attempts to crack and break down their systems. This makes data mining a strategically and security-wise important area for many business organizations, including the banking sector. It allows the analysis of important information in the data warehouse and assists banks in looking for obscure patterns and discovering unknown relationships in the data. Banking systems need to process an ample amount of data on a daily basis related to customer information, credit card details, limit and collateral details, transaction details, risk profiles, anti-money-laundering information, and trade finance data. Thousands of decisions based on these data are taken in a bank daily. This paper analyzes a banking dataset in the Weka environment for the detection of interesting patterns, with applications in customer acquisition, customer retention management, marketing, risk management, and fraud detection.

  19. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2013-01-01

    The PPD activities, in the first part of 2013, have been focused mostly on the final physics validation and preparation for the data reprocessing of the full 8 TeV datasets with the latest calibrations. These samples will be the basis for the preliminary results for summer 2013 but most importantly for the final publications on the 8 TeV Run 1 data. The reprocessing involves also the reconstruction of a significant fraction of “parked data” that will allow CMS to perform a whole new set of precision analyses and searches. In this way the CMSSW release 53X is becoming the legacy release for the 8 TeV Run 1 data. The regular operation activities have included taking care of the prolonged proton-proton data taking and the run with proton-lead collisions that ended in February. The DQM and Data Certification team has deployed a continuous effort to promptly certify the quality of the data. The luminosity-weighted certification efficiency (requiring all sub-detectors to be certified as usab...

  20. Volumetric composition in composites and historical data

    DEFF Research Database (Denmark)

    Lilholt, Hans; Madsen, Bo

    2013-01-01

    The obtainable volumetric composition in composites is of importance for the prediction of mechanical and physical properties, and in particular to assess the best possible (normally the highest) values for these properties. The volumetric model for the composition of (fibrous) composites gives guidance to the optimal combination of fibre content, matrix content and porosity content, in order to achieve the best obtainable properties. Several composite materials systems have been shown to be handleable with this model. An extensive series of experimental data for the system of cellulose fibres and polymer (resin) was produced in 1942-1944, and these data have been (re-)analysed by the volumetric composition model, and the property values for density, stiffness and strength have been evaluated. Good agreement has been obtained and some further observations have been extracted from the analysis.
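
    A minimal numerical reading of such a volumetric model, assuming a known porosity fraction and assumed densities typical of cellulose fibre and resin, converts a fibre weight fraction into volume fractions and composite density:

```python
def volumetric_composition(w_f, rho_f=1.50, rho_m=1.20, v_p=0.05):
    """Convert a fibre weight fraction into fibre/matrix/porosity volume
    fractions and composite density, assuming a known porosity volume
    fraction. Densities (g/cm^3) are illustrative assumptions."""
    solid = w_f / rho_f + (1.0 - w_f) / rho_m   # solid volume per unit mass
    v_f = (1.0 - v_p) * (w_f / rho_f) / solid
    v_m = (1.0 - v_p) - v_f
    rho_c = (1.0 - v_p) / solid                 # mass per total composite volume
    return v_f, v_m, rho_c

for w in (0.3, 0.5, 0.7):
    v_f, v_m, rho_c = volumetric_composition(w)
    print(f"w_f={w:.1f}: v_f={v_f:.2f}, v_m={v_m:.2f}, rho={rho_c:.2f} g/cm^3")
```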

  1. Process conditions and volumetric composition in composites

    DEFF Research Database (Denmark)

    Madsen, Bo

    2013-01-01

    The obtainable volumetric composition in composites is linked to the gravimetric composition, and it is influenced by the conditions of the manufacturing process. A model for the volumetric composition is presented, where the volume fractions of fibers, matrix and porosity are calculated as a function of the fiber weight fraction, and where parameters are included for the composite microstructure and the fiber assembly compaction behavior. Based on experimental data for composites manufactured with different process conditions, together with model predictions, different types of process-related effects are analyzed. The applied consolidation pressure is found to have a marked effect on the volumetric composition. A power-law relationship is found to describe well the observed relations between the maximum obtainable fiber volume fraction and the consolidation pressure. The degree of fiber

  2. Turkey Run Landfill Emissions Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — landfill emissions measurements for the Turkey run landfill in Georgia. This dataset is associated with the following publication: De la Cruz, F., R. Green, G....

  3. Dataset of NRDA emission data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Emissions data from open air oil burns. This dataset is associated with the following publication: Gullett, B., J. Aurell, A. Holder, B. Mitchell, D. Greenwell, M....

  4. Chemical product and function dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Merged product weight fraction and chemical function data. This dataset is associated with the following publication: Isaacs , K., M. Goldsmith, P. Egeghy , K....

  5. Real-time volumetric scintillation dosimetry

    International Nuclear Information System (INIS)

    Beddar, S

    2015-01-01

    The goal of this brief review is to survey the current status of real-time 3D scintillation dosimetry and what has been done so far in this area. The basic concept is to use a large volume of a scintillator material (liquid or solid) to measure or image the dose distributions from external radiation therapy (RT) beams in three dimensions. In this configuration, the scintillator material fulfills the dual role of being the detector and the phantom material in which the measurements are performed; dose perturbations caused by the introduction of a detector within a phantom are therefore not at issue. All the detector configurations conceived to date use a charge-coupled device (CCD) camera to measure the light produced within the scintillator. In order to accurately measure the scintillation light, one must correct for various optical artefacts that arise as the light propagates from the scintillating centers through the optical chain to the CCD chip. Quenching, defined in its simplest form as a nonlinear response to high-linear-energy-transfer (LET) charged particles, is one of the disadvantages when such systems are used to measure the absorbed dose from high-LET particles such as protons. However, correction methods that restore the linear dose response through the whole proton range have been proven effective for both liquid and plastic scintillators. Volumetric scintillation dosimetry has the potential to provide fast, high-resolution and accurate 3D imaging of RT dose distributions. Further research is warranted to optimize the necessary image reconstruction methods and optical corrections needed to achieve its full potential.
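
    Quenching of this kind is usually modeled to first order by Birks' law, which the sketch below evaluates; the Birks constant is illustrative, and the review's actual correction methods are not reproduced here.

```python
def birks_light_yield(dE_dx, S=1.0, kB=0.01):
    """Birks' law: dL/dx = S * (dE/dx) / (1 + kB * dE/dx).
    High-LET particles (large dE/dx) yield proportionally less light,
    which is the quenching described above. kB is material specific;
    the value here is illustrative (units of cm/MeV assumed)."""
    return S * dE_dx / (1.0 + kB * dE_dx)

# Light per unit deposited energy drops as LET rises.
for let in (1.0, 10.0, 100.0):  # MeV/cm
    print(f"dE/dx = {let:6.1f}  light per MeV = {birks_light_yield(let)/let:.3f}")
```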

  6. The Influence of Water and Mineral Oil On Volumetric Losses in a Hydraulic Motor

    Directory of Open Access Journals (Sweden)

    Śliwiński Pawel

    2017-04-01

    In this paper, the volumetric losses in a hydraulic motor supplied with water and with mineral oil (two liquids having significantly different viscosity and lubricating properties) are described and compared. The experimental tests were conducted using an innovative hydraulic satellite motor that is designed to work with different liquids, including water. The sources of leaks in this motor are also characterized and described. On this basis, a mathematical model of the volumetric losses and a model of the effective rotational speed have been developed and presented. The volumetric losses calculated according to the model are compared with the results of the experiment; the difference was found to be no more than 20%. Furthermore, it has been demonstrated that the model describes well the volumetric losses in the motor supplied with both water and oil. The experimental studies have shown that the volumetric losses in the motor supplied with water are up to three times greater than the volumetric losses in the motor supplied with oil. It has also been shown that for a small constant stream of water the speed of the motor is reduced by as much as half in comparison with the speed of the motor supplied with the same stream of oil.
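
    The viscosity sensitivity of the losses follows from leakage through internal clearances being roughly laminar gap flow. The sketch below uses the textbook thin-gap formula with invented geometry; it is not the paper's loss model, but it shows why a low-viscosity liquid like water leaks far more at the same pressure.

```python
def gap_leakage(delta_p, mu, h=20e-6, b=0.05, L=0.01):
    """Laminar flow through a thin rectangular clearance:
    Q = b * h^3 * delta_p / (12 * mu * L), SI units.
    Gap height h, width b and length L are illustrative values."""
    return b * h**3 * delta_p / (12.0 * mu * L)

dp = 10e6  # 10 MPa pressure drop across the clearance
for name, mu in (("mineral oil", 40e-3), ("water", 1e-3)):
    q = gap_leakage(dp, mu)
    print(f"{name}: leakage = {q * 6e4:.2f} L/min")  # m^3/s -> L/min
```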

  7. Volumetric, dashboard-mounted augmented display

    Science.gov (United States)

    Kessler, David; Grabowski, Christopher

    2017-11-01

    The optical design of a compact volumetric display for drivers is presented. The system displays a true volume image with realistic physical depth cues, such as focal accommodation, parallax and convergence. A large eyebox is achieved with a pupil expander. The windshield is used as the augmented reality combiner. A freeform windshield corrector is placed at the dashboard.

  8. Synthesis of Volumetric Ring Antenna Array for Terrestrial Coverage Pattern

    Directory of Open Access Journals (Sweden)

    Alberto Reyna

    2014-01-01

    This paper presents a synthesis of a volumetric ring antenna array for a terrestrial coverage pattern. The synthesis considers the spacing among the rings in the X-Y plane, the positions of the rings in the X-Z plane, and uniform and concentric excitations. The optimization is carried out by implementing particle swarm optimization. The synthesis is compared with previous designs, and the resulting geometry performs well, providing accurate coverage for satellite applications with a maximum reduction of the antenna hardware as well as a reduction of the side lobe level.
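
    For illustration, a minimal generic particle swarm optimizer is sketched below, minimizing a placeholder objective; the paper's actual cost function (array factor, coverage mask, side lobe level) is not reproduced.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimization: each particle's velocity is
    pulled toward its own best position and the swarm's best position."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy objective standing in for the side-lobe-level cost: a sphere function.
best_x, best_f = pso(lambda p: float((p**2).sum()), dim=4)
print(best_x.round(3), f"{best_f:.2e}")
```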

  9. The NOAA Dataset Identifier Project

    Science.gov (United States)

    de la Beaujardiere, J.; Mccullough, H.; Casey, K. S.

    2013-12-01

    The US National Oceanic and Atmospheric Administration (NOAA) initiated a project in 2013 to assign persistent identifiers to datasets archived at NOAA and to create informational landing pages about those datasets. The goals of this project are to enable the citation of datasets used in products and results in order to help provide credit to data producers, to support traceability and reproducibility, and to enable tracking of data usage and impact. A secondary goal is to encourage the submission of datasets for long-term preservation, because only archived datasets will be eligible for a NOAA-issued identifier. A team was formed with representatives from the National Geophysical, Oceanographic, and Climatic Data Centers (NGDC, NODC, NCDC) to resolve questions including which identifier scheme to use (answer: Digital Object Identifier - DOI), whether or not to embed semantics in identifiers (no), the level of granularity at which to assign identifiers (as coarsely as reasonable), how to handle ongoing time-series data (do not break into chunks), creation mechanism for the landing page (stylesheet from formal metadata record preferred), and others. Decisions made and implementation experience gained will inform the writing of a Data Citation Procedural Directive to be issued by the Environmental Data Management Committee in 2014. Several identifiers have been issued as of July 2013, with more on the way. NOAA is now reporting the number as a metric to federal Open Government initiatives. This paper will provide further details and status of the project.

  10. The Harvard organic photovoltaic dataset.

    Science.gov (United States)

    Lopez, Steven A; Pyzer-Knapp, Edward O; Simm, Gregor N; Lutzow, Trevor; Li, Kewei; Seress, Laszlo R; Hachmann, Johannes; Aspuru-Guzik, Alán

    2016-09-27

    The Harvard Organic Photovoltaic Dataset (HOPV15) presented in this work is a collation of experimental photovoltaic data from the literature, and corresponding quantum-chemical calculations performed over a range of conformers, each with quantum chemical results using a variety of density functionals and basis sets. It is anticipated that this dataset will be of use in both relating electronic structure calculations to experimental observations through the generation of calibration schemes, as well as for the creation of new semi-empirical methods and the benchmarking of current and future model chemistries for organic electronic applications.

  11. The Harvard organic photovoltaic dataset

    Science.gov (United States)

    Lopez, Steven A.; Pyzer-Knapp, Edward O.; Simm, Gregor N.; Lutzow, Trevor; Li, Kewei; Seress, Laszlo R.; Hachmann, Johannes; Aspuru-Guzik, Alán

    2016-01-01

    The Harvard Organic Photovoltaic Dataset (HOPV15) presented in this work is a collation of experimental photovoltaic data from the literature, and corresponding quantum-chemical calculations performed over a range of conformers, each with quantum chemical results using a variety of density functionals and basis sets. It is anticipated that this dataset will be of use in both relating electronic structure calculations to experimental observations through the generation of calibration schemes, as well as for the creation of new semi-empirical methods and the benchmarking of current and future model chemistries for organic electronic applications. PMID:27676312

  12. Volumetric 3D Display System with Static Screen

    Science.gov (United States)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, a volumetric 3D display possesses a true 3D display volume and physically places each 3D voxel of the displayed 3D images at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to aid the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° view without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the way humans interact with computers, leading to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created by using a recently available machining technique called laser subsurface engraving (LSE). LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light around and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous

  13. Querying Large Biological Network Datasets

    Science.gov (United States)

    Gulsoy, Gunhan

    2013-01-01

    New experimental methods have resulted in an increasing amount of genetic interaction data being generated every day. Biological networks are used to store the gathered genetic interaction data. The increasing amount of available data requires fast, large-scale analysis methods. Therefore, we address the problem of querying large biological network datasets.

  14. Fluxnet Synthesis Dataset Collaboration Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Deborah A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Humphrey, Marty [Univ. of Virginia, Charlottesville, VA (United States); van Ingen, Catharine [Microsoft. San Francisco, CA (United States); Beekwilder, Norm [Univ. of Virginia, Charlottesville, VA (United States); Goode, Monte [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jackson, Keith [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rodriguez, Matt [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Weber, Robin [Univ. of California, Berkeley, CA (United States)

    2008-02-06

    The Fluxnet synthesis dataset originally compiled for the La Thuile workshop contained approximately 600 site-years. Since the workshop, several additional site-years have been added and the dataset now contains over 920 site-years from over 240 sites. A data refresh update is expected to increase those numbers in the next few months. The ancillary data describing the sites continue to evolve as well. There are on the order of 120 site contacts, and 60 proposals, involving around 120 researchers, have been approved to use the data. The size and complexity of the dataset and collaboration have led to a new approach to providing access to the data and collaboration support. The support team attended the workshop and worked closely with the attendees and the Fluxnet project office to define the requirements for the support infrastructure. As a result of this effort, a new website (http://www.fluxdata.org) has been created to provide access to the Fluxnet synthesis dataset. This new website is based on a scientific data server which enables browsing of the data online, data download, and version tracking. We leverage database and data analysis tools such as OLAP data cubes and web reports to enable browser and Excel pivot table access to the data.

  15. Agreement of mammographic measures of volumetric breast density to MRI.

    Directory of Open Access Journals (Sweden)

    Jeff Wang

    Clinical scores of mammographic breast density are highly subjective. Automated technologies for mammography exist to quantify breast density objectively, but which technique most accurately measures the quantity of breast fibroglandular tissue is not known. Our aim was to compare the agreement of three automated mammographic techniques for measuring volumetric breast density with a quantitative volumetric MRI-based technique in a screening population. Women were selected from the UCSF Medical Center screening population who had received both a screening MRI and a digital mammogram within one year of each other, had Breast Imaging Reporting and Data System (BI-RADS) assessments of normal or benign finding, and had no history of breast cancer or surgery. Agreement of three mammographic techniques (Single-energy X-ray Absorptiometry [SXA], Quantra, and Volpara) with MRI was assessed for percent fibroglandular tissue volume, absolute fibroglandular tissue volume, and total breast volume. Among 99 women, the automated mammographic density measures were correlated with MRI measures with R² values ranging from 0.40 (log fibroglandular volume) to 0.91 (total breast volume). Substantial agreement, measured by the kappa statistic, was found between all percent fibroglandular tissue measures (0.72 to 0.63), but only moderate agreement for log fibroglandular volumes. The kappa statistics for all percent density measures were highest in the comparisons of the SXA and MRI results. The largest error source between MRI and the mammography techniques was found to be differences in measures of total breast volume. Automated volumetric fibroglandular tissue measures from screening digital mammograms were in substantial agreement with MRI and, if associated with breast cancer, could be used in clinical practice to enhance risk assessment and prevention.
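
    A sketch of the two agreement statistics used above, computed on synthetic percent-density values (stand-ins for the SXA/Quantra/Volpara vs. MRI comparisons); the quartile bins and noise level are invented.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, r2_score

rng = np.random.default_rng(0)

# Synthetic percent-density measurements: MRI reference vs. a mammographic
# technique with measurement noise (99 women, as in the study population).
mri = rng.uniform(5, 40, 99)
mammo = mri + rng.normal(0, 3, 99)
print(f"R^2 = {r2_score(mri, mammo):.2f}")

# Agreement on BI-RADS-like density categories via Cohen's kappa.
bins = [0, 10, 20, 30, 100]   # invented category cut points
kappa = cohen_kappa_score(np.digitize(mri, bins), np.digitize(mammo, bins))
print(f"kappa = {kappa:.2f}")
```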

  16. Method for Determining Volumetric Efficiency and Its Experimental Validation

    Directory of Open Access Journals (Sweden)

    Ambrozik Andrzej

    2017-12-01

    Full Text Available Modern means of transport are basically powered by piston internal combustion engines. Increasingly rigorous demands are placed on IC engines in order to minimise the detrimental impact they have on the natural environment, which stimulates the development of research on piston internal combustion engines. The research involves experimental and theoretical investigations carried out using computer technologies. While being filled, the cylinder is considered to be an open thermodynamic system in which non-stationary processes occur. To calculate the thermodynamic parameters of the engine operating cycle, based on the comparison of cycles, it is necessary to know the mean constant value of cylinder pressure throughout this process. Because of the character of the in-cylinder pressure pattern and the difficulty of determining the pressure experimentally, a novel method for the determination of this quantity is presented in this paper. The new approach uses an iteration method. In the method developed for determining the volumetric efficiency, the following equations are employed: the law of conservation of the amount of substance, the first law of thermodynamics for an open system, dependences for changes in the cylinder volume vs. the crankshaft rotation angle, and the state equation. The results of calculations performed with this method were validated by means of experimental investigations carried out for a selected engine at the engine test bench. A satisfactory congruence of computational and experimental results in determining the volumetric efficiency was obtained. The method for determining the volumetric efficiency presented in the paper can be used to investigate the processes taking place in the cylinder of an IC engine.
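
    A minimal illustrative sketch of the kind of calculation described above, combining slider-crank cylinder kinematics with the ideal-gas state equation to estimate volumetric efficiency. This is a schematic reading of the approach, not the authors' algorithm; the engine geometry and the cylinder state at bottom dead centre are assumed values.

        # Hedged sketch (assumed parameters): volumetric efficiency from
        # slider-crank cylinder volume and the ideal-gas state equation.
        import math

        R_AIR = 287.05  # specific gas constant of air [J/(kg*K)]

        def cylinder_volume(theta_deg, bore, stroke, conrod, compression_ratio):
            """Instantaneous cylinder volume [m^3] from slider-crank kinematics."""
            a = stroke / 2.0                  # crank radius [m]
            area = math.pi * bore ** 2 / 4.0  # piston area [m^2]
            v_disp = area * stroke            # displacement volume [m^3]
            v_clear = v_disp / (compression_ratio - 1.0)
            th = math.radians(theta_deg)
            # piston travel from top dead centre
            x = a * (1.0 - math.cos(th)) + conrod - math.sqrt(conrod ** 2 - (a * math.sin(th)) ** 2)
            return v_clear + area * x

        def volumetric_efficiency(p_bdc, t_bdc, bore, stroke, conrod, cr,
                                  p_amb=101325.0, t_amb=293.15):
            """Trapped air mass at bottom dead centre divided by the mass that
            would fill the displacement volume at ambient density."""
            v_bdc = cylinder_volume(180.0, bore, stroke, conrod, cr)
            v_disp = math.pi * bore ** 2 / 4.0 * stroke
            m_trapped = p_bdc * v_bdc / (R_AIR * t_bdc)  # state equation
            m_ref = p_amb * v_disp / (R_AIR * t_amb)
            return m_trapped / m_ref

        # Example with made-up engine data:
        print(volumetric_efficiency(p_bdc=95e3, t_bdc=330.0,
                                    bore=0.080, stroke=0.090, conrod=0.145, cr=17.0))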

  17. Plant fibre composites - porosity and volumetric interaction

    DEFF Research Database (Denmark)

    Madsen, Bo; Thygesen, Anders; Lilholt, Hans

    2007-01-01

    Plant fibre composites typically contain a relatively large amount of porosity, which considerably influences the properties and performance of the composites. The large porosity must be integrated in the conversion of weight fractions into volume fractions of the fibre and matrix parts. A model is presented to predict the porosity as a function of the fibre weight fraction, and to calculate the related fibre and matrix volume fractions, as well as the density of the composite. The model predicts two cases of composite volumetric interaction separated by a transition fibre weight fraction, at which the combination of a high fibre volume fraction, a low porosity and a high composite density is optimal. Experimental data from the literature on the volumetric composition and density of four types of plant fibre composites are used to validate the model. It is demonstrated that the model provides a concept...
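
    A minimal sketch of the weight-to-volume bookkeeping referred to above: converting a fibre weight fraction into fibre and matrix volume fractions and a composite density once porosity is included. This is generic rule-of-mixtures accounting under an assumed porosity, not the paper's two-regime model; the densities are illustrative.

        # Hedged sketch: volume fractions and composite density for a porous
        # composite. Inputs are assumed, not taken from the paper.
        def volumetric_composition(w_f, rho_f, rho_m, v_p):
            """w_f: fibre weight fraction; rho_f, rho_m: fibre and matrix
            densities [g/cm^3]; v_p: porosity (volume fraction).
            Returns (v_f, v_m, rho_composite)."""
            vol_f = w_f / rho_f          # fibre volume per unit composite mass
            vol_m = (1.0 - w_f) / rho_m  # matrix volume per unit composite mass
            solid = vol_f + vol_m
            v_f = (1.0 - v_p) * vol_f / solid   # fibre volume fraction
            v_m = (1.0 - v_p) * vol_m / solid   # matrix volume fraction
            rho_c = (1.0 - v_p) / solid         # composite density [g/cm^3]
            return v_f, v_m, rho_c

        # Example with illustrative plant-fibre/thermoplastic values:
        print(volumetric_composition(w_f=0.45, rho_f=1.50, rho_m=1.35, v_p=0.06))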

  18. Combined surface and volumetric occlusion shading

    KAUST Repository

    Schott, Matthias O.; Martin, Tobias; Grosset, A. V Pascal; Brownlee, Carson; Hollt, Thomas; Brown, Benjamin P.; Smith, Sean T.; Hansen, Charles D.

    2012-01-01

    In this paper, a method for interactive direct volume rendering is proposed that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The proposed algorithm extends the recently proposed Directional Occlusion Shading model to allow the rendering of those geometric shapes in combination with a context providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. © 2012 IEEE.

  19. Combined surface and volumetric occlusion shading

    KAUST Repository

    Schott, Matthias O.

    2012-02-01

    In this paper, a method for interactive direct volume rendering is proposed that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The proposed algorithm extends the recently proposed Directional Occlusion Shading model to allow the rendering of those geometric shapes in combination with a context providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. © 2012 IEEE.

  20. Volumetric and surface characterization of activated carbon

    International Nuclear Information System (INIS)

    Carrera G, L.M.; Garcia S, I.; Jimenez B, J.; Solache R, M.; Lopez M, B.; Bulbulian G, S.; Olguin G, M.T.

    2000-01-01

    Activated carbon is the material resulting from the calcination of natural carbonaceous materials such as coconut shells or olive stones. It is an excellent adsorbent of diluted substances, both in colloidal form and in particle form; those substances are attracted and retained by the carbon surface. In this work, the volumetric and surface characterization of activated carbon treated thermally (300 °C) is carried out as a function of the average grain size. (Author)

  1. Volumetric polymerization shrinkage of contemporary composite resins

    OpenAIRE

    Nagem Filho, Halim; Nagem, Haline Drumond; Francisconi, Paulo Afonso Silveira; Franco, Eduardo Batista; Mondelli, Rafael Francisco Lia; Coutinho, Kennedy Queiroz

    2007-01-01

    The polymerization shrinkage of composite resins may affect negatively the clinical outcome of the restoration. Extensive research has been carried out to develop new formulations of composite resins in order to provide good handling characteristics and some dimensional stability during polymerization. The purpose of this study was to analyze, in vitro, the magnitude of the volumetric polymerization shrinkage of 7 contemporary composite resins (Definite, Suprafill, SureFil, Filtek Z250, Fill ...

  2. A volumetric data system for environmental robotics

    International Nuclear Information System (INIS)

    Tourtellott, J.

    1994-01-01

    A three-dimensional, spatially organized or volumetric data system provides an effective means for integrating and presenting environmental sensor data to robotic systems and operators. Because of the unstructured nature of environmental restoration applications, new robotic control strategies are being developed that include environmental sensors and interactive data interpretation. The volumetric data system provides key features to facilitate these new control strategies, including: integrated representation of surface, subsurface and above-surface data; differentiation of mapped and unmapped regions in space; sculpting of regions in space to best exploit data from line-of-sight sensors; integration of diverse sensor data (for example, dimensional, physical/geophysical, chemical, and radiological); incorporation of data provided at different spatial resolutions; efficient access for high-speed visualization and analysis; and geometric modeling tools to update a "world model" of an environment. The applicability to underground storage tank remediation and buried waste site remediation is demonstrated in several examples. By integrating environmental sensor data into robotic control, the volumetric data system will lead to safer, faster, and more cost-effective environmental cleanup

  3. MR volumetric assessment of endolymphatic hydrops

    International Nuclear Information System (INIS)

    Guerkov, R.; Berman, A.; Jerin, C.; Krause, E.; Dietrich, O.; Flatz, W.; Ertl-Wagner, B.; Keeser, D.

    2015-01-01

    We aimed to volumetrically quantify endolymph and perilymph spaces of the inner ear in order to establish a methodological basis for further investigations into the pathophysiology and therapeutic monitoring of Meniere's disease. Sixteen patients (eight females, aged 38-71 years) with definite unilateral Meniere's disease were included in this study. Magnetic resonance (MR) cisternography with a T2-SPACE sequence was combined with a Real reconstruction inversion recovery (Real-IR) sequence for delineation of inner ear fluid spaces. Machine learning and automated local thresholding segmentation algorithms were applied for three-dimensional (3D) reconstruction and volumetric quantification of endolymphatic hydrops. Test-retest reliability was assessed by the intra-class coefficient; correlation of cochlear endolymph volume ratio with hearing function was assessed by the Pearson correlation coefficient. Endolymph volume ratios could be reliably measured in all patients, with a mean (range) value of 15 % (2-25) for the cochlea and 28 % (12-40) for the vestibulum. Test-retest reliability was excellent, with an intra-class coefficient of 0.99. Cochlear endolymphatic hydrops was significantly correlated with hearing loss (r = 0.747, p = 0.001). MR imaging after local contrast application and image processing, including machine learning and automated local thresholding, enable the volumetric quantification of endolymphatic hydrops. This allows for a quantitative assessment of the effect of therapeutic interventions on endolymphatic hydrops. (orig.)

  4. MR volumetric assessment of endolymphatic hydrops

    Energy Technology Data Exchange (ETDEWEB)

    Guerkov, R.; Berman, A.; Jerin, C.; Krause, E. [University of Munich, Department of Otorhinolaryngology Head and Neck Surgery, Grosshadern Medical Centre, Munich (Germany); University of Munich, German Centre for Vertigo and Balance Disorders, Grosshadern Medical Centre, Marchioninistr. 15, 81377, Munich (Germany); Dietrich, O.; Flatz, W.; Ertl-Wagner, B. [University of Munich, Institute of Clinical Radiology, Grosshadern Medical Centre, Munich (Germany); Keeser, D. [University of Munich, Institute of Clinical Radiology, Grosshadern Medical Centre, Munich (Germany); University of Munich, German Centre for Vertigo and Balance Disorders, Grosshadern Medical Centre, Marchioninistr. 15, 81377, Munich (Germany); University of Munich, Department of Psychiatry and Psychotherapy, Innenstadtkliniken Medical Centre, Munich (Germany)

    2014-10-16

    We aimed to volumetrically quantify endolymph and perilymph spaces of the inner ear in order to establish a methodological basis for further investigations into the pathophysiology and therapeutic monitoring of Meniere's disease. Sixteen patients (eight females, aged 38-71 years) with definite unilateral Meniere's disease were included in this study. Magnetic resonance (MR) cisternography with a T2-SPACE sequence was combined with a Real reconstruction inversion recovery (Real-IR) sequence for delineation of inner ear fluid spaces. Machine learning and automated local thresholding segmentation algorithms were applied for three-dimensional (3D) reconstruction and volumetric quantification of endolymphatic hydrops. Test-retest reliability was assessed by the intra-class coefficient; correlation of cochlear endolymph volume ratio with hearing function was assessed by the Pearson correlation coefficient. Endolymph volume ratios could be reliably measured in all patients, with a mean (range) value of 15 % (2-25) for the cochlea and 28 % (12-40) for the vestibulum. Test-retest reliability was excellent, with an intra-class coefficient of 0.99. Cochlear endolymphatic hydrops was significantly correlated with hearing loss (r = 0.747, p = 0.001). MR imaging after local contrast application and image processing, including machine learning and automated local thresholding, enable the volumetric quantification of endolymphatic hydrops. This allows for a quantitative assessment of the effect of therapeutic interventions on endolymphatic hydrops. (orig.)

  5. Viking Seismometer PDS Archive Dataset

    Science.gov (United States)

    Lorenz, R. D.

    2016-12-01

    The Viking Lander 2 seismometer operated successfully for over 500 Sols on the Martian surface, recording at least one likely candidate Marsquake. The Viking mission, in an era when data handling hardware (both on board and on the ground) was limited in capability, predated modern planetary data archiving; the ad-hoc repositories of the data and the very low-level record at NSSDC were neither convenient to process nor well-known. In an effort supported by the NASA Mars Data Analysis Program, we have converted the bulk of the Viking dataset (namely the 49,000 and 270,000 records made in High- and Event-modes at 20 and 1 Hz respectively) into a simple ASCII table format. Additionally, since wind-generated lander motion is a major component of the signal, contemporaneous meteorological data are included in summary records to facilitate correlation. These datasets are being archived at the PDS Geosciences Node. In addition to brief instrument and dataset descriptions, the archive includes code snippets in the freely-available language 'R' to demonstrate plotting and analysis. Further, we present examples of lander-generated noise, associated with the sampler arm, instrument dumps and other mechanical operations.

  6. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2013-01-01

    The first part of the Long Shutdown period has been dedicated to the preparation of the samples for the analysis targeting the summer conferences. In particular, the 8 TeV data acquired in 2012, including most of the “parked datasets”, have been reconstructed profiting from improved alignment and calibration conditions for all the sub-detectors. A careful planning of the resources was essential in order to deliver the datasets well in time to the analysts, and to schedule the update of all the conditions and calibrations needed at the analysis level. The newly reprocessed data have undergone detailed scrutiny by the Dataset Certification team, allowing some of the data to be recovered for analysis usage and further improving the certification efficiency, which is now at 91% of the recorded luminosity. With the aim of delivering a consistent dataset for 2011 and 2012, both in terms of conditions and release (53X), the PPD team is now working to set up a data re-reconstruction and a new MC pro...

  7. Green chemistry volumetric titration kit for pharmaceutical formulations: Econoburette

    Directory of Open Access Journals (Sweden)

    Man Singh

    2009-08-01

    Full Text Available Stopcock (SC) and Spring (Sp) models of the Econoburette (calibrated; RTC (NR), Ministry of Small Scale Industries, Government of India), developed for semimicro volumetric titration of pharmaceutical formulations, are reported. These provide economized and risk-free titration, in which the pipette is replaced by an inbuilt pipette and the conical flask by an inbuilt bulb. The step of pipetting stock solution by mouth, which used to expose the user to the solution, is eliminated. This risk is removed, and even volatile and toxic solutions can be titrated in complete safety. The Econoburette minimizes the use of materials and time by 90% and prevents the discharge of polluting effluent to the environment. A few acid and base samples were titrated, and an analysis of the experimental expenditure is described in the paper.

  8. Breast Density Estimation with Fully Automated Volumetric Method: Comparison to Radiologists' Assessment by BI-RADS Categories.

    Science.gov (United States)

    Singh, Tulika; Sharma, Madhurima; Singla, Veenu; Khandelwal, Niranjan

    2016-01-01

    The objective of our study was to calculate mammographic breast density with a fully automated volumetric breast density measurement method and to compare it to breast imaging reporting and data system (BI-RADS) breast density categories assigned by two radiologists. A total of 476 full-field digital mammography examinations with standard mediolateral oblique and craniocaudal views were evaluated by two blinded radiologists and BI-RADS density categories were assigned. Using fully automated software, mean fibroglandular tissue volume, mean breast volume, and mean volumetric breast density were calculated. Based on percentage volumetric breast density, a volumetric density grade was assigned from 1 to 4. The weighted overall kappa was 0.895 (almost perfect agreement) for the two radiologists' BI-RADS density estimates. A statistically significant difference was seen in mean volumetric breast density among the BI-RADS density categories: with increased BI-RADS density category, an increase in mean volumetric breast density was also seen. A good correlation was seen between BI-RADS categories and volumetric density grading by the fully automated software (ρ = 0.728), while agreement of the volumetric density grade with the BI-RADS density category assigned by the two observers was fair (κ = 0.398 and 0.388, respectively). In our study, a good correlation was seen between density grading using the fully automated volumetric method and density grading using BI-RADS density categories assigned by the two radiologists. Thus, the fully automated volumetric method may be used to quantify breast density on routine mammography. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  9. VOLUMETRIC METHOD FOR EVALUATION OF BEACHES VARIABILITY BASED ON GIS-TOOLS

    Directory of Open Access Journals (Sweden)

    V. V. Dolotov

    2015-01-01

    Full Text Available Within the framework of cadastral beach evaluation, a volumetric method for a natural variability index is proposed. It is based on spatial calculations with the Cut-Fill method and on volume accounting for both the common beach contour and specific areas at each time.

  10. Mapping of coastal landforms and volumetric change analysis in the south west coast of Kanyakumari, South India using remote sensing and GIS techniques

    Directory of Open Access Journals (Sweden)

    S. Kaliraj

    2017-12-01

    Full Text Available The coastal landforms along the south west coast of Kanyakumari have undergone remarkable change in terms of shape and disposition due to both natural and anthropogenic interference. An attempt is made here to map the coastal landforms along the coast using remote sensing and GIS techniques. Spatial data sources, such as the topographical map published by the Survey of India, Landsat ETM+ (30 m) imagery, IKONOS imagery (0.82 m), and SRTM and ASTER DEM datasets, have been comprehensively analyzed for extracting coastal landforms. Change detection methods, namely (i) topographical change detection, (ii) cross-shore profile analysis, and (iii) Geomorphic Change Detection (GCD) using the DEM of Difference (DoD), were adopted for assessment of volumetric changes of coastal landforms for the period between 2000 and 2011. The GCD analysis uses the ASTER and SRTM DEM datasets, resampling them to a common scale (pixel size) using pixel-by-pixel Wavelet Transform and pan-sharpening techniques in ERDAS Imagine software. Volumetric changes of coastal landforms were validated with data derived from a GPS-based field survey. Coastal landform units were mapped based on the process of their evolution: beach landforms, including sandy beach, cusp, berm, scarp, beach terrace, upland, rocky shore, cliffs, wave-cut notches and wave-cut platforms; and fluvial landforms, comprising alluvial plain, flood plains, and other shallow marshes in estuaries. The topographical change analysis reveals that the beach landforms have reduced in elevation by 1 to 3 m, probably due to sediment removal or flattening. Analysis of cross-shore profiles for twelve locations indicates varying degrees of loss or gain of coastal landforms. For example, the K3-K3′ profile across the Kovalam coast has shown significant erosion (−0.26 to −0.76 m) of the sandy beaches, resulting in the formation of beach cusps and beach scarps within a distance of 300 m from the shoreline. The volumetric change
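
    A minimal sketch of the DEM of Difference (DoD) step at the core of the Geomorphic Change Detection described above: subtract two co-registered DEMs, suppress differences below the detectable vertical error, and sum cell volumes. The grid, cell size and threshold are assumptions for the example.

        # Hedged sketch: volumetric change from a DEM of Difference (DoD).
        import numpy as np

        def dod_volumetric_change(dem_new, dem_old, cell_size, min_detectable=0.5):
            """Return (erosion_volume, deposition_volume) in cubic DEM units.
            Differences below min_detectable are treated as no change."""
            dod = dem_new - dem_old                  # DEM of Difference
            dod[np.abs(dod) < min_detectable] = 0.0  # mask sub-threshold noise
            cell_area = cell_size ** 2
            deposition = dod[dod > 0].sum() * cell_area  # volume gained
            erosion = -dod[dod < 0].sum() * cell_area    # volume lost
            return erosion, deposition

        # Example with synthetic 2000 and 2011 surfaces on a 30 m grid:
        rng = np.random.default_rng(0)
        dem_2000 = 10.0 + rng.normal(0.0, 0.2, (100, 100))
        dem_2011 = dem_2000 - 1.0 * (rng.random((100, 100)) > 0.7)  # patchy lowering
        print(dod_volumetric_change(dem_2011, dem_2000, cell_size=30.0))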

  11. RARD: The Related-Article Recommendation Dataset

    OpenAIRE

    Beel, Joeran; Carevic, Zeljko; Schaible, Johann; Neusch, Gabor

    2017-01-01

    Recommender-system datasets are used for recommender-system evaluations, training machine-learning algorithms, and exploring user behavior. While there are many datasets for recommender systems in the domains of movies, books, and music, there are rather few datasets from research-paper recommender systems. In this paper, we introduce RARD, the Related-Article Recommendation Dataset, from the digital library Sowiport and the recommendation-as-a-service provider Mr. DLib. The dataset contains ...

  12. Degree of contribution (DoC) feature selection algorithm for structural brain MRI volumetric features in depression detection.

    Science.gov (United States)

    Kipli, Kuryati; Kouzani, Abbas Z

    2015-07-01

    Accurate detection of depression at an individual level using structural magnetic resonance imaging (sMRI) remains a challenge. Brain volumetric changes at a structural level appear to have importance in depression biomarkers studies. An automated algorithm is developed to select brain sMRI volumetric features for the detection of depression. A feature selection (FS) algorithm called degree of contribution (DoC) is developed for selection of sMRI volumetric features. This algorithm uses an ensemble approach to determine the degree of contribution in detection of major depressive disorder. The DoC is the score of feature importance used for feature ranking. The algorithm involves four stages: feature ranking, subset generation, subset evaluation, and DoC analysis. The performance of DoC is evaluated on the Duke University Multi-site Imaging Research in the Analysis of Depression sMRI dataset. The dataset consists of 115 brain sMRI scans of 88 healthy controls and 27 depressed subjects. Forty-four sMRI volumetric features are used in the evaluation. The DoC score of forty-four features was determined as the accuracy threshold (Acc_Thresh) was varied. The DoC performance was compared with that of four existing FS algorithms. At all defined Acc_Threshs, DoC outperformed the four examined FS algorithms for the average classification score and the maximum classification score. DoC has a good ability to generate reduced-size subsets of important features that could yield high classification accuracy. Based on the DoC score, the most discriminant volumetric features are those from the left-brain region.
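
    The four DoC stages lend themselves to a schematic sketch: build several feature rankings, grow nested subsets, score each subset with cross-validation, and credit the features of subsets that clear the accuracy threshold. This is a loose reading of the description, not the authors' exact algorithm; the synthetic data only mimic the dataset's dimensions (115 scans, 44 features).

        # Hedged sketch of a DoC-style ensemble feature-contribution score.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import f_classif, mutual_info_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=115, n_features=44,
                                   n_informative=8, random_state=0)
        acc_thresh = 0.70                      # assumed Acc_Thresh value
        doc_score = np.zeros(X.shape[1])

        # Stage 1: an ensemble of feature rankings (two rankers here)
        rankings = [np.argsort(-f_classif(X, y)[0]),
                    np.argsort(-mutual_info_classif(X, y, random_state=0))]

        for order in rankings:
            for k in range(1, X.shape[1] + 1):
                subset = order[:k]             # Stage 2: nested subset generation
                acc = cross_val_score(LogisticRegression(max_iter=1000),
                                      X[:, subset], y, cv=5).mean()  # Stage 3
                if acc >= acc_thresh:          # Stage 4: credit contributing features
                    doc_score[subset] += acc

        print("Top features by DoC-style score:", np.argsort(-doc_score)[:5])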

  13. Passive Containment DataSet

    Science.gov (United States)

    This data is for Figures 6 and 7 in the journal article. The data also includes the two EPANET input files used for the analysis described in the paper, one for the looped system and one for the block system. This dataset is associated with the following publication: Grayman, W., R. Murray, and D. Savic. Redesign of Water Distribution Systems for Passive Containment of Contamination. JOURNAL OF THE AMERICAN WATER WORKS ASSOCIATION. American Water Works Association, Denver, CO, USA, 108(7): 381-391, (2016).

  14. Volumetric expiratory high-resolution CT of the lung

    International Nuclear Information System (INIS)

    Nishino, Mizuki; Hatabu, Hiroto

    2004-01-01

    We developed a volumetric expiratory high-resolution CT (HRCT) protocol that provides combined inspiratory and expiratory volumetric imaging of the lung without increasing radiation exposure, and conducted a preliminary feasibility assessment of this protocol to evaluate diffuse lung disease with small airway abnormalities. The volumetric expiratory high-resolution CT increased the detectability of the conducting airway to the areas of air trapping (P<0.0001), and added significant information about extent and distribution of air trapping (P<0.0001)

  15. Vessel suppressed chest Computed Tomography for semi-automated volumetric measurements of solid pulmonary nodules.

    Science.gov (United States)

    Milanese, Gianluca; Eberhard, Matthias; Martini, Katharina; Vittoria De Martini, Ilaria; Frauenfelder, Thomas

    2018-04-01

    To evaluate whether vessel-suppressed computed tomography (VSCT) can be reliably used for semi-automated volumetric measurements of solid pulmonary nodules, as compared to standard CT (SCT). MATERIAL AND METHODS: Ninety-three SCT were processed by dedicated software (ClearRead CT, Riverain Technologies, Miamisburg, OH, USA) that allows subtracting vessels from lung parenchyma. Semi-automated volumetric measurements of 65 solid nodules were compared between SCT and VSCT. The measurements were repeated by two readers. For each solid nodule, the volume measured on SCT by Reader 1 and Reader 2 was averaged, and the average volume between readers acted as the standard of reference value. Concordance between measurements was assessed using Lin's Concordance Correlation Coefficient (CCC). Limits of agreement (LoA) between readers and CT datasets were evaluated. Standard of reference nodule volume ranged from 13 to 366 mm³. The mean overestimation between readers was 3 mm³ and 2.9 mm³ on SCT and VSCT, respectively. Semi-automated volumetric measurements on VSCT showed substantial agreement with the standard of reference (Lin's CCC = 0.990 for Reader 1; 0.985 for Reader 2). The upper and lower LoA between readers' measurements were (16.3, −22.4 mm³) and (15.5, −21.4 mm³) for SCT and VSCT, respectively. VSCT datasets are feasible for the measurement of solid nodules, showing an almost perfect concordance between readers and with measurements on SCT. Copyright © 2018 Elsevier B.V. All rights reserved.
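
    A short sketch of Lin's Concordance Correlation Coefficient, the agreement statistic used above; the paired nodule volumes are invented for illustration.

        # Hedged sketch: Lin's CCC for paired volume measurements.
        import numpy as np

        def lins_ccc(x, y):
            """CCC = 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            cov = np.mean((x - x.mean()) * (y - y.mean()))
            return 2.0 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

        vol_sct = np.array([13.0, 45.2, 88.9, 120.5, 366.0])   # illustrative [mm^3]
        vol_vsct = np.array([14.1, 44.8, 90.3, 118.9, 362.5])  # illustrative [mm^3]
        print("CCC:", lins_ccc(vol_sct, vol_vsct))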

  16. The CMS dataset bookkeeping service

    Science.gov (United States)

    Afaq, A.; Dolgert, A.; Guo, Y.; Jones, C.; Kosyakov, S.; Kuznetsov, V.; Lueking, L.; Riley, D.; Sekhri, V.

    2008-07-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, Command Line, and a Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.
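
    The access pattern described (clients connecting over HTTPS with GRID-certificate authentication) can be sketched generically as below. The endpoint URL, query parameter and JSON response shape are hypothetical stand-ins, not the real DBS API.

        # Hedged sketch: certificate-authenticated HTTPS query to a catalog
        # service. The endpoint and parameters are hypothetical.
        import json
        import ssl
        import urllib.parse
        import urllib.request

        def query_catalog(base_url, dataset_pattern, certfile, keyfile):
            ctx = ssl.create_default_context()
            ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)  # GRID credentials
            url = f"{base_url}/datasets?pattern={urllib.parse.quote(dataset_pattern)}"
            with urllib.request.urlopen(url, context=ctx) as resp:
                return json.loads(resp.read().decode())

        # Example (hypothetical endpoint and dataset pattern):
        # results = query_catalog("https://dbs.example.org/api",
        #                         "/Zmumu/*/RECO", "usercert.pem", "userkey.pem")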

  17. The CMS dataset bookkeeping service

    Energy Technology Data Exchange (ETDEWEB)

    Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V [Fermilab, Batavia, Illinois 60510 (United States); Dolgert, A; Jones, C; Kuznetsov, V; Riley, D [Cornell University, Ithaca, New York 14850 (United States)

    2008-07-15

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, Command Line, and a Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  18. The CMS dataset bookkeeping service

    International Nuclear Information System (INIS)

    Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V; Dolgert, A; Jones, C; Kuznetsov, V; Riley, D

    2008-01-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, Command Line, and a Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems

  19. The CMS dataset bookkeeping service

    International Nuclear Information System (INIS)

    Afaq, Anzar; Dolgert, Andrew; Guo, Yuyi; Jones, Chris; Kosyakov, Sergey; Kuznetsov, Valentin; Lueking, Lee; Riley, Dan; Sekhri, Vijay

    2007-01-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, Command Line, and a Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems

  20. Discovery and Reuse of Open Datasets: An Exploratory Study

    Directory of Open Access Journals (Sweden)

    Sara

    2016-07-01

    Full Text Available Objective: This article analyzes twenty cited or downloaded datasets and the repositories that house them, in order to produce insights that can be used by academic libraries to encourage discovery and reuse of research data in institutional repositories. Methods: Using Thomson Reuters’ Data Citation Index and repository download statistics, we identified twenty cited/downloaded datasets. We documented the characteristics of the cited/downloaded datasets and their corresponding repositories in a self-designed rubric. The rubric includes six major categories: basic information; funding agency and journal information; linking and sharing; factors to encourage reuse; repository characteristics; and data description. Results: Our small-scale study suggests that cited/downloaded datasets generally comply with basic recommendations for facilitating reuse: data are documented well; formatted for use with a variety of software; and shared in established, open access repositories. Three significant factors also appear to contribute to dataset discovery: publishing in discipline-specific repositories; indexing in more than one location on the web; and using persistent identifiers. The cited/downloaded datasets in our analysis came from a few specific disciplines, and tended to be funded by agencies with data publication mandates. Conclusions: The results of this exploratory research provide insights that can inform academic librarians as they work to encourage discovery and reuse of institutional datasets. Our analysis also suggests areas in which academic librarians can target open data advocacy in their communities in order to begin to build open data success stories that will fuel future advocacy efforts.

  1. Adaptive controller for volumetric display of neuroimaging studies

    Science.gov (United States)

    Bleiberg, Ben; Senseney, Justin; Caban, Jesus

    2014-03-01

    Volumetric display of medical images is an increasingly relevant method for examining an imaging acquisition as the prevalence of thin-slice imaging increases in clinical studies. Current mouse and keyboard implementations for volumetric control provide neither the sensitivity nor the specificity required to manipulate a volumetric display for efficient reading in a clinical setting. Solutions for efficient volumetric manipulation provide more sensitivity by removing the binary nature of actions controlled by keyboard clicks, but specificity is lost because a single action may change the display in several directions. When specificity is then further addressed by re-implementing hardware binary functions through the introduction of mode control, the result is a cumbersome interface that fails to achieve the revolutionary benefit required for adoption of a new technology. We address the specificity-versus-sensitivity problem of volumetric interfaces by providing adaptive positional awareness to the volumetric control device, manipulating communication between the hardware driver and existing software methods for volumetric display of medical images. This creates a tethered effect for volumetric display, providing a smooth interface that improves on existing hardware approaches to volumetric scene manipulation.

  2. Stability and Volumetric Properties of Asphalt Mixture Containing Waste Plastic

    Directory of Open Access Journals (Sweden)

    Abd Kader Siti Aminah

    2017-01-01

    Full Text Available The objectives of this study are to determine the optimum bitumen content (OBC) for each percentage of waste plastic added to asphalt mixtures and to investigate the stability properties of the asphalt mixtures containing waste plastic. Marshall stability and flow values, along with density, air voids in total mix, voids in mineral aggregate, and voids filled with bitumen, were determined to obtain the OBC at different percentages of waste plastic, i.e., 4%, 6%, 8%, and 10% by weight of bitumen as additive. Results showed that the OBC for the plastic-modified asphalt mixtures at 4%, 6%, 8%, and 10% is 4.98, 5.44, 5.48, and 5.14, respectively. On the other hand, the control specimens show better volumetric properties compared to the plastic mixes. However, the addition of 4% waste plastic indicated better stability than the control specimen.
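
    The volumetric quantities named above (air voids in total mix, voids in mineral aggregate, voids filled with bitumen) follow from the standard Marshall relations; a short sketch with illustrative specific gravities, not the study's measurements:

        # Hedged sketch: standard Marshall volumetric relations.
        def marshall_volumetrics(g_mb, g_mm, g_sb, p_s):
            """g_mb: bulk specific gravity of the compacted mix; g_mm: theoretical
            maximum specific gravity; g_sb: bulk specific gravity of the aggregate;
            p_s: aggregate content, percent of total mix mass.
            Returns (VTM, VMA, VFB) in percent."""
            vtm = 100.0 * (1.0 - g_mb / g_mm)   # air voids in total mix
            vma = 100.0 - (g_mb * p_s / g_sb)   # voids in mineral aggregate
            vfb = 100.0 * (vma - vtm) / vma     # voids filled with bitumen
            return vtm, vma, vfb

        # Example with illustrative values:
        print(marshall_volumetrics(g_mb=2.35, g_mm=2.45, g_sb=2.65, p_s=94.5))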

  3. Developing a Data-Set for Stereopsis

    Directory of Open Access Journals (Sweden)

    D.W Hunter

    2014-08-01

    Full Text Available Current research on binocular stereopsis in humans and non-human primates has been limited by a lack of available data-sets. Current data-sets fall into two categories: stereo-image sets with vergence but no ranging information (Hibbard, 2008, Vision Research, 48(12), 1427-1439) or combinations of depth information with binocular images and video taken from cameras in fixed fronto-parallel configurations exhibiting neither vergence nor focus effects (Hirschmuller & Scharstein, 2007, IEEE Conf. Computer Vision and Pattern Recognition). The techniques for generating depth information are also imperfect. Depth information is normally inaccurate or simply missing near edges and on partially occluded surfaces. For many areas of vision research these are the most interesting parts of the image (Goutcher, Hunter, Hibbard, 2013, i-Perception, 4(7), 484; Scarfe & Hibbard, 2013, Vision Research). Using state-of-the-art open-source ray-tracing software (PBRT) as a back-end, our intention is to release a set of tools that will allow researchers in this field to generate artificial binocular stereoscopic data-sets. Although not as realistic as photographs, computer-generated images have significant advantages in terms of control over the final output, and ground-truth information about scene depth is easily calculated at all points in the scene, even partially occluded areas. While individual researchers have been developing similar stimuli by hand for many decades, we hope that our software will greatly reduce the time and difficulty of creating naturalistic binocular stimuli. Our intention in making this presentation is to elicit feedback from the vision community about what sort of features would be desirable in such software.

  4. 2008 TIGER/Line Nationwide Dataset

    Data.gov (United States)

    California Natural Resource Agency — This dataset contains a nationwide build of the 2008 TIGER/Line datasets from the US Census Bureau downloaded in April 2009. The TIGER/Line Shapefiles are an extract...

  5. PROVIDING GEOGRAPHIC DATASETS AS LINKED DATA IN SDI

    Directory of Open Access Journals (Sweden)

    E. Hietanen

    2016-06-01

    Full Text Available In this study, a prototype service to provide data from a Web Feature Service (WFS) as linked data is implemented. At first, persistent and unique Uniform Resource Identifiers (URIs) are created for all spatial objects in the dataset. The objects are available from those URIs in the Resource Description Framework (RDF) data format. Next, a Web Ontology Language (OWL) ontology is created to describe the dataset information content using the Open Geospatial Consortium's (OGC) GeoSPARQL vocabulary. The existing data model is modified in order to take into account the linked data principles. The implemented service produces an HTTP response dynamically. The data for the response is first fetched from the existing WFS. Then the Geography Markup Language (GML) output of the WFS is transformed on-the-fly to the RDF format. Content negotiation is used to serve the data in different RDF serialization formats. This solution facilitates the use of a dataset in different applications without replicating the whole dataset. In addition, individual spatial objects in the dataset can be referred to with URIs. Furthermore, the needed information content of the objects can be easily extracted from the RDF serializations available from those URIs. A solution for linking data objects to the dataset URI is also introduced by using the Vocabulary of Interlinked Datasets (VoID). The dataset is divided into subsets and each subset is given its persistent and unique URI. This enables the whole dataset to be explored with a web browser and all individual objects to be indexed by search engines.
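
    A minimal sketch of the URI-minting and RDF-serialisation steps described above, using the rdflib library with the GeoSPARQL vocabulary. The namespace, feature identifier and geometry are illustrative, not taken from the prototype.

        # Hedged sketch: one spatial object published as linked data.
        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://data.example.org/features/")   # assumed namespace
        GEO = Namespace("http://www.opengis.net/ont/geosparql#")

        g = Graph()
        g.bind("geo", GEO)

        feature = EX["lake/1001"]             # persistent, unique URI per object
        geom = EX["lake/1001/geometry"]
        g.add((feature, RDF.type, GEO.Feature))
        g.add((feature, GEO.hasGeometry, geom))
        g.add((geom, RDF.type, GEO.Geometry))
        # GeoSPARQL carries the geometry as a WKT literal:
        g.add((geom, GEO.asWKT, Literal("POINT(24.94 60.17)", datatype=GEO.wktLiteral)))

        # Content negotiation would choose the serialisation; Turtle shown here:
        print(g.serialize(format="turtle"))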

  6. Homogenised Australian climate datasets used for climate change monitoring

    International Nuclear Information System (INIS)

    Trewin, Blair; Jones, David; Collins, Dean; Jovanovic, Branislava; Braganza, Karl

    2007-01-01

    Full text: The Australian Bureau of Meteorology has developed a number of datasets for use in climate change monitoring. These datasets typically cover 50-200 stations distributed as evenly as possible over the Australian continent, and have been subject to detailed quality control and homogenisation. The time period over which data are available for each element is largely determined by the availability of data in digital form. Whilst nearly all Australian monthly and daily precipitation data have been digitised, a significant quantity of pre-1957 data (for temperature and evaporation) or pre-1987 data (for some other elements) remains to be digitised, and is not currently available for use in the climate change monitoring datasets. In the case of temperature and evaporation, the start date of the datasets is also determined by major changes in instruments or observing practices for which no adjustment is feasible at the present time. The datasets currently available cover: Monthly and daily precipitation (most stations commence 1915 or earlier, with many extending back to the late 19th century, and a few to the mid-19th century); Annual temperature (commences 1910); Daily temperature (commences 1910, with limited station coverage pre-1957); Twice-daily dewpoint/relative humidity (commences 1957); Monthly pan evaporation (commences 1970); Cloud amount (commences 1957) (Jovanovic et al. 2007). As well as the station-based datasets listed above, an additional dataset being developed for use in climate change monitoring (and other applications) covers tropical cyclones in the Australian region. This is described in more detail in Trewin (2007). The datasets already developed are used in analyses of observed climate change, which are available through the Australian Bureau of Meteorology website (http://www.bom.gov.au/silo/products/cli_chg/). They are also used as a basis for routine climate monitoring, and in the datasets used for the development of seasonal

  7. A Comparative Analysis of Classification Algorithms on Diverse Datasets

    Directory of Open Access Journals (Sweden)

    M. Alghobiri

    2018-04-01

    Full Text Available Data mining involves the computational process of finding patterns from large data sets. Classification, one of the main domains of data mining, involves generalizing a known structure to apply to a new dataset and predict its class. There are various classification algorithms being used to classify various data sets. They are based on different methods such as probability, decision trees, neural networks, nearest neighbors, Boolean and fuzzy logic, kernel-based methods, etc. In this paper, we apply three diverse classification algorithms to ten datasets. The datasets have been selected based on their size and/or number and nature of attributes. Results have been discussed using some performance evaluation measures like precision, accuracy, F-measure, Kappa statistics, mean absolute error, relative absolute error, ROC Area etc. Comparative analysis has been carried out using the performance evaluation measures of accuracy, precision, and F-measure. We specify features and limitations of the classification algorithms for these datasets of diverse nature.
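
    A compact sketch of such a comparison: three classifiers from different families evaluated with cross-validation on a single dataset, reporting accuracy, precision, and F-measure. The dataset and fold count are stand-ins for the paper's ten datasets.

        # Hedged sketch: comparing diverse classifiers on one dataset.
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import cross_validate
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_breast_cancer(return_X_y=True)
        models = {"NaiveBayes": GaussianNB(),
                  "DecisionTree": DecisionTreeClassifier(random_state=0),
                  "kNN": KNeighborsClassifier(n_neighbors=5)}

        for name, model in models.items():
            scores = cross_validate(model, X, y, cv=10,
                                    scoring=("accuracy", "precision", "f1"))
            print(f"{name:12s} acc={scores['test_accuracy'].mean():.3f} "
                  f"prec={scores['test_precision'].mean():.3f} "
                  f"F1={scores['test_f1'].mean():.3f}")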

  8. Volumetric visualization of anatomy for treatment planning

    International Nuclear Information System (INIS)

    Pelizzari, Charles A.; Grzeszczuk, Robert; Chen, George T. Y.; Heimann, Ruth; Haraf, Daniel J.; Vijayakumar, Srinivasan; Ryan, Martin J.

    1996-01-01

    Purpose: Delineation of volumes of interest for three-dimensional (3D) treatment planning is usually performed by contouring on two-dimensional sections. We explore the use of segmentation-free volumetric rendering of the three-dimensional image data set for tumor and normal tissue visualization. Methods and Materials: Standard treatment planning computed tomography (CT) studies, with typically 5 to 10 mm slice thickness, and spiral CT studies with 3 mm slice thickness were used. The data were visualized using locally developed volume-rendering software. Similar to the method of Drebin et al., CT voxels are automatically assigned an opacity and other visual properties (e.g., color) based on a probabilistic classification into tissue types. Using volumetric compositing, a projection into the opacity-weighted volume is produced. Depth cueing, perspective, and gradient-based shading are incorporated to achieve realistic images. Unlike surface-rendered displays, no hand segmentation is required to produce detailed renditions of skin, muscle, or bony anatomy. By suitable manipulation of the opacity map, tissue classes can be made transparent, revealing muscle, vessels, or bone, for example. Manually supervised tissue masking allows irrelevant tissues overlying tumors or other structures of interest to be removed. Results: Very high-quality renditions are produced in from 5 s to 1 min on midrange computer workstations. In the pelvis, an anteroposterior (AP) volume rendered view from a typical planning CT scan clearly shows the skin and bony anatomy. A muscle opacity map permits clear visualization of the superficial thigh muscles, femoral veins, and arteries. Lymph nodes are seen in the femoral triangle. When overlying muscle and bone are cut away, the prostate, seminal vesicles, bladder, and rectum are seen in 3D perspective. Similar results are obtained for thorax and for head and neck scans. Conclusion: Volumetric visualization of anatomy is useful in treatment
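
    The classification-and-compositing pipeline described above (opacity assignment followed by volumetric compositing along each viewing ray) can be sketched for a single ray as below. The toy transfer-function thresholds are assumptions, not the probabilistic tissue classification the text refers to.

        # Hedged sketch: front-to-back compositing along one ray of CT samples.
        import numpy as np

        def classify(ct_value):
            """Toy transfer function: map a CT sample to (opacity, grey level)."""
            if ct_value < 200:      # air / background: transparent
                return 0.0, 0.0
            elif ct_value < 1100:   # soft tissue: faint
                return 0.02, 0.4
            else:                   # bone: opaque and bright
                return 0.3, 1.0

        def composite_ray(samples):
            """Front-to-back compositing: C += (1-A)*a_i*c_i, A += (1-A)*a_i."""
            colour, alpha = 0.0, 0.0
            for s in samples:
                a_i, c_i = classify(s)
                colour += (1.0 - alpha) * a_i * c_i
                alpha += (1.0 - alpha) * a_i
                if alpha > 0.99:    # early ray termination
                    break
            return colour, alpha

        ray = np.concatenate([np.full(30, 50), np.full(40, 1000), np.full(10, 1500)])
        print(composite_ray(ray))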

  9. Satellite-Based Precipitation Datasets

    Science.gov (United States)

    Munchak, S. J.; Huffman, G. J.

    2017-12-01

    Of the possible sources of precipitation data, those based on satellites provide the greatest spatial coverage. There is a wide selection of datasets, algorithms, and versions from which to choose, which can be confusing to non-specialists wishing to use the data. The International Precipitation Working Group (IPWG) maintains tables of the major publicly available, long-term, quasi-global precipitation data sets (http://www.isac.cnr.it/ipwg/data/datasets.html), and this talk briefly reviews the various categories. As examples, NASA provides two sets of quasi-global precipitation data sets: the older Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) and the current Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (GPM) mission (IMERG). Both provide near-real-time and post-real-time products that are uniformly gridded in space and time. The TMPA products are 3-hourly 0.25°x0.25° on the latitude band 50°N-S for about 16 years, while the IMERG products are half-hourly 0.1°x0.1° on 60°N-S for over 3 years (with plans to go to 16+ years in Spring 2018). In addition to the precipitation estimates, each data set provides fields of other variables, such as the satellite sensor providing estimates and estimated random error. The discussion concludes with advice about determining suitability for use, the necessity of being clear about product names and versions, and the need for continued support for satellite- and surface-based observation.

  10. Determination of Uncertainty for a One Milli Litre Volumetric Pipette

    International Nuclear Information System (INIS)

    Torowati; Asminar; Rahmiati; Arif-Sasongko-Adi

    2007-01-01

    A study was conducted to determine the uncertainty of a one millilitre volumetric pipette. The uncertainty was determined from data obtained by the gravimetric method. The resulting uncertainty of the volumetric pipette is reported at a confidence level of 95% with coverage factor k=2. (author)
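
    A minimal sketch of the gravimetric evaluation with k = 2 expansion described above. The weighing data are invented, and a complete uncertainty budget would also include balance calibration, temperature and water-density contributions.

        # Hedged sketch: Type A uncertainty of a 1 mL pipette from repeated
        # gravimetric deliveries, expanded with k=2 (~95 % confidence).
        import statistics

        masses_g = [0.9982, 0.9978, 0.9985, 0.9980, 0.9976,
                    0.9983, 0.9979, 0.9984, 0.9981, 0.9977]  # deliveries [g]
        rho_water = 0.9982                                   # g/mL near 20 °C

        volumes_ml = [m / rho_water for m in masses_g]
        mean_v = statistics.mean(volumes_ml)
        u_a = statistics.stdev(volumes_ml) / len(volumes_ml) ** 0.5  # standard uncertainty
        U = 2.0 * u_a                                        # expanded, k = 2

        print(f"V = {mean_v:.4f} mL, U(k=2) = {U:.4f} mL (95 % confidence level)")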

  11. Short-term mechanisms influencing volumetric brain dynamics

    Directory of Open Access Journals (Sweden)

    Nikki Dieleman

    2017-01-01

    Full Text Available With the use of magnetic resonance imaging (MRI) and brain analysis tools, it has become possible to measure brain volume changes as small as around 0.5%. Besides long-term brain changes caused by atrophy in aging or neurodegenerative disease, short-term mechanisms that influence brain volume may exist. When we focus on short-term changes of the brain, changes may be either physiological or pathological, so determining the cause of volumetric dynamics of the brain is essential. Additionally, for an accurate interpretation of longitudinal brain volume measures in terms of neurodegeneration, knowledge about the short-term changes is needed. Therefore, in this review, we discuss the possible mechanisms influencing brain volumes on a short-term basis and set out a framework of MRI techniques to be used for volumetric changes, as well as the analysis tools used. 3D T1-weighted images are the images of choice when it comes to MRI of brain volume. These images are excellent to determine brain volume and can be used together with an analysis tool to determine the degree of volume change. Mechanisms that decrease global brain volume are: fluid restriction, evening MRI measurements, corticosteroids, antipsychotics and short-term effects of pathological processes like Alzheimer's disease, hypertension and Diabetes mellitus type II. Mechanisms increasing the brain volume include fluid intake, morning MRI measurements, surgical revascularization and probably medications like anti-inflammatory drugs and anti-hypertensive medication. Exercise was found to have no effect on brain volume on a short-term basis, which may imply that dehydration caused by exercise differs from dehydration by fluid restriction. In the upcoming years, attention should be directed towards studies investigating physiological short-term changes within the light of long-term pathological changes. Ultimately this may lead to a better understanding of the physiological short-term effects of

  12. Active Semisupervised Clustering Algorithm with Label Propagation for Imbalanced and Multidensity Datasets

    Directory of Open Access Journals (Sweden)

    Mingwei Leng

    2013-01-01

    Full Text Available The accuracy of most existing semisupervised clustering algorithms based on a small labeled dataset is low when dealing with multidensity and imbalanced datasets, and labeling data is quite expensive and time consuming in many real-world applications. This paper focuses on active data selection and semisupervised clustering in multidensity and imbalanced datasets, and proposes an active semisupervised clustering algorithm. The proposed algorithm uses an active mechanism for data selection to minimize the amount of labeled data, and it utilizes multiple thresholds to expand labeled datasets on multidensity and imbalanced datasets. Three standard datasets and one synthetic dataset are used to demonstrate the proposed algorithm, and the experimental results show that the proposed semisupervised clustering algorithm has higher accuracy and more stable performance in comparison to other clustering and semisupervised clustering algorithms, especially when the datasets are multidensity and imbalanced.

  13. An Affinity Propagation Clustering Algorithm for Mixed Numeric and Categorical Datasets

    Directory of Open Access Journals (Sweden)

    Kang Zhang

    2014-01-01

    Full Text Available Clustering has been widely used in different fields of science, technology, social science, and so forth. In real world, numeric as well as categorical features are usually used to describe the data objects. Accordingly, many clustering methods can process datasets that are either numeric or categorical. Recently, algorithms that can handle the mixed data clustering problems have been developed. Affinity propagation (AP algorithm is an exemplar-based clustering method which has demonstrated good performance on a wide variety of datasets. However, it has limitations on processing mixed datasets. In this paper, we propose a novel similarity measure for mixed type datasets and an adaptive AP clustering algorithm is proposed to cluster the mixed datasets. Several real world datasets are studied to evaluate the performance of the proposed algorithm. Comparisons with other clustering algorithms demonstrate that the proposed method works well not only on mixed datasets but also on pure numeric and categorical datasets.
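
    A small sketch of the overall idea: build a pairwise similarity matrix that handles numeric and categorical attributes together, then run affinity propagation with a precomputed affinity. The Gower-style measure below is a generic stand-in for the paper's proposed similarity.

        # Hedged sketch: mixed-type similarity fed to affinity propagation.
        import numpy as np
        from sklearn.cluster import AffinityPropagation

        def mixed_similarity(a, b, num_idx, cat_idx, num_range):
            """Negative mean of scaled numeric distances and categorical mismatches."""
            d_num = np.mean(np.abs(a[num_idx] - b[num_idx]) / num_range)
            d_cat = np.mean(a[cat_idx] != b[cat_idx])
            return -(d_num + d_cat) / 2.0

        # Toy data: columns 0-1 numeric, column 2 a categorical code
        X = np.array([[1.0, 10.0, 0], [1.2, 11.0, 0], [8.0, 40.0, 1],
                      [8.3, 42.0, 1], [8.1, 41.0, 2]])
        num_idx, cat_idx = [0, 1], [2]
        num_range = X[:, num_idx].max(axis=0) - X[:, num_idx].min(axis=0)

        n = len(X)
        S = np.array([[mixed_similarity(X[i], X[j], num_idx, cat_idx, num_range)
                       for j in range(n)] for i in range(n)])

        labels = AffinityPropagation(affinity="precomputed",
                                     random_state=0).fit_predict(S)
        print(labels)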

  14. Optical Addressing of Multi-Colour Photochromic Material Mixture for Volumetric Display

    Science.gov (United States)

    Hirayama, Ryuji; Shiraki, Atsushi; Naruse, Makoto; Nakamura, Shinichiro; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2016-08-01

    This is the first study to demonstrate that colour transformations in the volume of a photochromic material (PM) are induced at the intersections of two control light channels, one controlling PM colouration and the other controlling decolouration. Thus, PM colouration is induced by position selectivity, and therefore, a dynamic volumetric display may be realised using these two control lights. Moreover, a mixture of multiple PM types with different absorption properties exhibits different colours depending on the control light spectrum. Particularly, the spectrum management of the control light allows colour-selective colouration besides position selectivity. Therefore, a PM-based, full-colour volumetric display is realised. We experimentally construct a mixture of two PM types and validate the operating principles of such a volumetric display system. Our system is constructed simply by mixing multiple PM types; therefore, the display hardware structure is extremely simple, and the minimum size of a volume element can be as small as the size of a molecule. Volumetric displays can provide natural three-dimensional (3D) perception; therefore, the potential uses of our system include high-definition 3D visualisation for medical applications, architectural design, human-computer interactions, advertising, and entertainment.

  15. 2006 Fynmeet sea clutter measurement trial: Datasets

    CSIR Research Space (South Africa)

    Herselman, PLR

    2007-09-06

    Full Text Available [Figures: RCS [dBm²] vs. time and range for f1 = 9.000 GHz, for datasets CAD14-001 and CAD14-002.]

  16. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2012-01-01

      Introduction The first part of the year presented an important test for the new Physics Performance and Dataset (PPD) group (cf. its mandate: http://cern.ch/go/8f77). The activity was focused on the validation of the new releases meant for the Monte Carlo (MC) production and the data-processing in 2012 (CMSSW 50X and 52X), and on the preparation of the 2012 operations. In view of the Chamonix meeting, the PPD and physics groups worked to understand the impact of the higher pile-up scenario on some of the flagship Higgs analyses to better quantify the impact of the high luminosity on the CMS physics potential. A task force is working on the optimisation of the reconstruction algorithms and on the code to cope with the performance requirements imposed by the higher event occupancy as foreseen for 2012. Concerning the preparation for the analysis of the new data, a new MC production has been prepared. The new samples, simulated at 8 TeV, are already being produced and the digitisation and recons...

  17. A novel image processing technique for 3D volumetric analysis of severely resorbed alveolar sockets with CBCT.

    Science.gov (United States)

    Manavella, Valeria; Romano, Federica; Garrone, Federica; Terzini, Mara; Bignardi, Cristina; Aimetti, Mario

    2017-06-01

    The aim of this study was to present and validate a novel procedure for the quantitative volumetric assessment of extraction sockets that combines cone-beam computed tomography (CBCT) and image processing techniques. The CBCT dataset of 9 severely resorbed extraction sockets was analyzed by means of two image processing software packages, ImageJ and Mimics, using manual and automated segmentation techniques. They were also applied to 5-mm spherical aluminum markers of known volume and to a polyvinyl chloride model of one alveolar socket scanned with Micro-CT to test the accuracy. Statistical differences in alveolar socket volume were found between the different methods of volumetric analysis; the semi-automated segmentation of the sockets showed more accurate results, excellent inter-observer similarity and increased user friendliness. The clinical application of this method enables a three-dimensional evaluation of extraction socket healing after the reconstructive procedures and during the follow-up visits.

  18. Effects of Different Reconstruction Parameters on CT Volumetric Measurement of Pulmonary Nodules

    Directory of Open Access Journals (Sweden)

    Rongrong YANG

    2012-02-01

    Full Text Available Background and objective It has been proven that volumetric measurements can detect subtle changes in small pulmonary nodules in serial CT scans, and thus may play an important role in the follow-up of indeterminate pulmonary nodules and in differentiating malignant from benign nodules. The current study aims to evaluate the effects of different reconstruction parameters on the volumetric measurements of pulmonary nodules in chest CT scans. Methods Thirty subjects who underwent chest CT scans because of indeterminate pulmonary nodules in the General Hospital of Tianjin Medical University from December 2009 to August 2011 were retrospectively analyzed. A total of 52 pulmonary nodules were included, and all CT data were reconstructed using three reconstruction algorithms and three slice thicknesses. The volumetric measurements of the nodules were performed using the advanced lung analysis (ALA) software. The effects of the reconstruction algorithms, slice thicknesses, and nodule diameters on the volumetric measurements were assessed using the multivariate analysis of variance for repeated measures, correlation analysis, and the Bland-Altman method. Results The reconstruction algorithms (F=13.6, P<0.001) and slice thicknesses (F=4.4, P=0.02) had significant effects on the measured volume of pulmonary nodules. In addition, the coefficients of variation of the nine measurements were inversely related to nodule diameter (r=-0.814, P<0.001). The volume measured at the 2.5 mm slice thickness had poor agreement with the volumes measured at 1.25 mm and 0.625 mm, respectively. Moreover, the best agreement was achieved between the slice thicknesses of 1.25 mm and 0.625 mm using the bone algorithm. Conclusion Reconstruction algorithms and slice thicknesses have significant impacts on the volumetric measurements of lung nodules, especially for small nodules. Therefore, the reconstruction settings in serial CT scans should be kept consistent during follow-up.
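
    For illustration, the variability and agreement statistics used above can be computed as follows (a minimal sketch with made-up volumes; not the study's code):

```python
import numpy as np

# Minimal sketch (assumed analysis): comparing repeated nodule volume
# measurements across reconstruction settings via the coefficient of
# variation (CV) and Bland-Altman limits of agreement.
def coefficient_of_variation(volumes):
    v = np.asarray(volumes, dtype=float)
    return v.std(ddof=1) / v.mean()

def bland_altman(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)          # 95% limits of agreement
    return bias, (bias - loa, bias + loa)

# Hypothetical volumes (mm^3) of one nodule from nine reconstructions.
nine = [102, 98, 105, 99, 101, 97, 104, 100, 103]
print(coefficient_of_variation(nine))

# Hypothetical paired volumes at 1.25 mm vs 0.625 mm slice thickness.
print(bland_altman([100, 150, 220], [102, 149, 224]))
```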

  19. The Geometry of Finite Equilibrium Datasets

    DEFF Research Database (Denmark)

    Balasko, Yves; Tvede, Mich

    We investigate the geometry of finite datasets defined by equilibrium prices, income distributions, and total resources. We show that the equilibrium condition imposes no restrictions if total resources are collinear, a property that is robust to small perturbations. We also show that the set of equilibrium datasets is path-connected when the equilibrium condition does impose restrictions on datasets, as for example when total resources are widely non-collinear.

  20. A new bed elevation dataset for Greenland

    Directory of Open Access Journals (Sweden)

    J. L. Bamber

    2013-03-01

    Full Text Available We present a new bed elevation dataset for Greenland derived from a combination of multiple airborne ice thickness surveys undertaken between the 1970s and 2012. Around 420 000 line kilometres of airborne data were used, with roughly 70% of this having been collected since the year 2000, when the last comprehensive compilation was undertaken. The airborne data were combined with satellite-derived elevations for non-glaciated terrain to produce a consistent bed digital elevation model (DEM over the entire island including across the glaciated–ice free boundary. The DEM was extended to the continental margin with the aid of bathymetric data, primarily from a compilation for the Arctic. Ice thickness was determined where an ice shelf exists from a combination of surface elevation and radar soundings. The across-track spacing between flight lines warranted interpolation at 1 km postings for significant sectors of the ice sheet. Grids of ice surface elevation, error estimates for the DEM, ice thickness and data sampling density were also produced alongside a mask of land/ocean/grounded ice/floating ice. Errors in bed elevation range from a minimum of ±10 m to about ±300 m, as a function of distance from an observation and local topographic variability. A comparison with the compilation published in 2001 highlights the improvement in resolution afforded by the new datasets, particularly along the ice sheet margin, where ice velocity is highest and changes in ice dynamics most marked. We estimate that the volume of ice included in our land-ice mask would raise mean sea level by 7.36 m, excluding any solid earth effects that would take place during ice sheet decay.
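
    As a sanity check on the sea-level figure quoted above, the conversion from grounded-ice volume to mean sea-level equivalent is a one-liner. A minimal sketch (the density and ocean-area constants are standard assumed values, and the input volume is back-solved here purely for illustration):

```python
# Minimal sketch (standard back-of-envelope conversion, constants assumed):
# converting a grounded-ice volume to its mean sea-level equivalent.
RHO_ICE = 917.0        # kg m^-3
RHO_SEAWATER = 1027.0  # kg m^-3
OCEAN_AREA = 3.62e14   # m^2, assumed global ocean area

def sea_level_equivalent_m(ice_volume_m3: float) -> float:
    # Ice volume -> water-equivalent volume -> mean depth over the ocean.
    return ice_volume_m3 * (RHO_ICE / RHO_SEAWATER) / OCEAN_AREA

# An ice volume of ~2.98e15 m^3 gives roughly the 7.36 m quoted in the record.
print(round(sea_level_equivalent_m(2.98e15), 2))
```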

  1. IPCC Socio-Economic Baseline Dataset

    Data.gov (United States)

    National Aeronautics and Space Administration — The Intergovernmental Panel on Climate Change (IPCC) Socio-Economic Baseline Dataset consists of population, human development, economic, water resources, land...

  2. Veterans Affairs Suicide Prevention Synthetic Dataset

    Data.gov (United States)

    Department of Veterans Affairs — The VA's Veteran Health Administration, in support of the Open Data Initiative, is providing the Veterans Affairs Suicide Prevention Synthetic Dataset (VASPSD). The...

  3. Nanoparticle-organic pollutant interaction dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dataset presents concentrations of organic pollutants, such as polyaromatic hydrocarbon compounds, in water samples. Water samples of known volume and concentration...

  4. An Annotated Dataset of 14 Meat Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This note describes a dataset consisting of 14 annotated images of meat. Points of correspondence are placed on each image. As such, the dataset can be readily used for building statistical models of shape. Further, format specifications and terms of use are given.
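
    Since the note positions the dataset for statistical shape modelling, a minimal sketch of building a point distribution model from landmark points may be useful (an illustration, not the note's code; it assumes shapes are already aligned, omitting the Procrustes step, and all names are hypothetical):

```python
import numpy as np

# Minimal sketch (assumed pipeline): a point distribution model from annotated
# correspondence points. shapes: (n_images, n_points, 2) landmark arrays.
def shape_model(shapes, n_modes=3):
    X = shapes.reshape(len(shapes), -1)      # flatten each shape to a vector
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    # Rows of Vt are the principal modes of shape variation.
    return mean, Vt[:n_modes], (s ** 2) / (len(shapes) - 1)

rng = np.random.default_rng(0)
shapes = rng.normal(size=(14, 30, 2))        # 14 images, 30 correspondence points
mean, modes, variances = shape_model(shapes)
print(modes.shape)                           # (3, 60)
```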

  5. Soil moisture datasets at five sites in the central Sierra Nevada and northern Coast Ranges, California

    Science.gov (United States)

    Stern, Michelle A.; Anderson, Frank A.; Flint, Lorraine E.; Flint, Alan L.

    2018-05-03

    In situ soil moisture datasets are important inputs used to calibrate and validate watershed, regional, or statewide modeled and satellite-based soil moisture estimates. The soil moisture dataset presented in this report includes hourly time series of the following: soil temperature, volumetric water content, water potential, and total soil water content. Data were collected by the U.S. Geological Survey at five locations in California: three sites in the central Sierra Nevada and two sites in the northern Coast Ranges. This report provides a description of each of the study areas, procedures and equipment used, processing steps, and time series data from each site in the form of comma-separated values (.csv) tables.

  6. Bounding uncertainty in volumetric geometric models for terrestrial lidar observations of ecosystems.

    Science.gov (United States)

    Paynter, Ian; Genest, Daniel; Peri, Francesco; Schaaf, Crystal

    2018-04-06

    Volumetric models with known biases are shown to provide bounds for the uncertainty in estimations of volume for ecologically interesting objects, observed with a terrestrial laser scanner (TLS) instrument. Bounding cuboids, three-dimensional convex hull polygons, voxels, the Outer Hull Model and Square Based Columns (SBCs) are considered for their ability to estimate the volume of temperate and tropical trees, as well as geomorphological features such as bluffs and saltmarsh creeks. For temperate trees, supplementary geometric models are evaluated for their ability to bound the uncertainty in cylinder-based reconstructions, finding that coarser volumetric methods do not currently constrain volume meaningfully, but may be helpful with further refinement, or in hybridized models. Three-dimensional convex hull polygons consistently overestimate object volume, and SBCs consistently underestimate volume. Voxel estimations vary in their bias, due to the point density of the TLS data, and occlusion, particularly in trees. The response of the models to parametrization is analysed, observing unexpected trends in the SBC estimates for the drumlin dataset. Establishing that this result is due to the resolution of the TLS observations being insufficient to support the resolution of the geometric model, it is suggested that geometric models with predictable outcomes can also highlight data quality issues when they produce illogical results.

  7. BanglaLekha-Isolated: A multi-purpose comprehensive dataset of Handwritten Bangla Isolated characters

    Directory of Open Access Journals (Sweden)

    Mithun Biswas

    2017-06-01

    Full Text Available BanglaLekha-Isolated, a Bangla handwritten isolated-character dataset, is presented in this article. This dataset contains 84 different characters, comprising 50 Bangla basic characters, 10 Bangla numerals and 24 selected compound characters. 2000 handwriting samples for each of the 84 characters were collected, digitized and pre-processed. After discarding mistakes and scribbles, 166,105 handwritten character images were included in the final dataset. The dataset also includes labels indicating the age and the gender of the subjects from whom the samples were collected. This dataset can be used not only for optical handwriting recognition research but also to explore the influence of gender and age on handwriting. The dataset is publicly available at https://data.mendeley.com/datasets/hf6sf8zrkc/2.

  8. Region-of-interest volumetric visual hull refinement

    KAUST Repository

    Knoblauch, Daniel; Kuester, Falko

    2010-01-01

    This paper introduces a region-of-interest visual hull refinement technique, based on flexible voxel grids for volumetric visual hull reconstructions. Region-of-interest refinement is based on a multipass process, beginning with a focussed visual

  9. Non-uniform volumetric structures in Richtmyer-Meshkov flows

    NARCIS (Netherlands)

    Staniç, M.; McFarland, J.; Stellingwerf, R.F.; Cassibry, J.T.; Ranjan, D.; Bonazza, R.; Greenough, J.A.; Abarzhi, S.I.

    2013-01-01

    We perform an integrated study of volumetric structures in Richtmyer-Meshkov (RM) flows induced by moderate shocks. Experiments, theoretical analyses, Smoothed Particle Hydrodynamics simulations, and ARES Arbitrary Lagrange Eulerian simulations are employed to analyze RM evolution for fluids with

  10. The OXL format for the exchange of integrated datasets

    Directory of Open Access Journals (Sweden)

    Taubert Jan

    2007-12-01

    Full Text Available A prerequisite for systems biology is the integration and analysis of heterogeneous experimental data stored in hundreds of life-science databases and millions of scientific publications. Several standardised formats for the exchange of specific kinds of biological information exist. Such exchange languages facilitate the integration process; however, they are not designed to transport integrated datasets. A format for exchanging integrated datasets needs to (i) cover data from a broad range of application domains, (ii) be flexible and extensible to combine many different complex data structures, (iii) include metadata and semantic definitions, (iv) include inferred information, (v) identify the original data source for integrated entities and (vi) transport large integrated datasets. Unfortunately, none of the exchange formats from the biological domain (e.g. BioPAX, MAGE-ML, PSI-MI, SBML) or the generic approaches (RDF, OWL) fulfil these requirements in a systematic way.
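
    To make requirements (iv) and (v) concrete, the sketch below shows the general shape of a graph-based exchange document whose entities carry their original data source; the element and attribute names are invented for illustration and do not reproduce the actual OXL schema.

```python
import xml.etree.ElementTree as ET

# Illustrative sketch only (hypothetical schema): a graph export where each
# entity records its source database (requirement (v)) and relations can be
# marked as inferred (requirement (iv)).
root = ET.Element("graph")
for cid, source, acc in [("c1", "UniProt", "P12345"), ("c2", "UniProt", "Q67890")]:
    concept = ET.SubElement(root, "concept", id=cid, elementOf=source)
    ET.SubElement(concept, "attribute", name="accession").text = acc
relation = ET.SubElement(root, "relation", fromRef="c1", toRef="c2",
                         ofType="interacts_with", elementOf="IntAct")
ET.SubElement(relation, "evidence").text = "inferred"
print(ET.tostring(root, encoding="unicode"))
```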

  11. Dataset of transcriptional landscape of B cell early activation

    Directory of Open Access Journals (Sweden)

    Alexander S. Garruss

    2015-09-01

    Full Text Available Signaling via B cell receptors (BCR) and Toll-like receptors (TLRs) results in activation of B cells with distinct physiological outcomes, but the transcriptional regulatory mechanisms that drive activation and distinguish these pathways remain unknown. At early time points after BCR and TLR ligand exposure, 0.5 and 2 h, RNA-seq was performed, allowing observations on rapid transcriptional changes. At 2 h, ChIP-seq was performed to allow observations on important regulatory mechanisms potentially driving transcriptional change. The dataset includes RNA-seq, ChIP-seq of control (Input), RNA Pol II, H3K4me3, H3K27me3, and a separate RNA-seq for miRNA expression, which can be found at Gene Expression Omnibus Dataset GSE61608. Here, we provide details on the experimental and analysis methods used to obtain and analyze this dataset and to examine the transcriptional landscape of B cell early activation.

  12. Comparison of CORA and EN4 in-situ datasets validation methods, toward a better quality merged dataset.

    Science.gov (United States)

    Szekely, Tanguy; Killick, Rachel; Gourrion, Jerome; Reverdin, Gilles

    2017-04-01

    CORA and EN4 are both global, delayed-mode, validated in-situ ocean temperature and salinity datasets distributed by the Met Office (http://www.metoffice.gov.uk/) and Copernicus (www.marine.copernicus.eu). A large part of the profiles distributed by CORA and EN4 in recent years are Argo profiles from the Argo DAC, but profiles are also extracted from the World Ocean Database, along with TESAC profiles from GTSPP. In the case of CORA, data coming from the EUROGOOS Regional Operational Observing Systems (ROOS) operated by European institutes not managed by National Data Centres, and other profile datasets provided by scientific sources, can also be found (sea mammal profiles from MEOP, XBT datasets from cruises, etc.). EN4 also takes data from the ASBO dataset to supplement observations in the Arctic. The first advantage of this new merged product is enhanced space and time coverage at global and European scales for the period from 1950 until the year before the current year. This product is updated once a year, and T&S gridded fields are also generated for the period from 1990 to year n-1. The enhancement compared to the previous CORA product will be presented. Although the profiles distributed by both datasets are mostly the same, the quality control procedures developed by the Met Office and Copernicus teams differ, sometimes leading to different quality control flags for the same profile. A new study started in 2016 that aims to compare both validation procedures and move towards a Copernicus Marine Service dataset with the best features of CORA and EN4 validation. A reference dataset composed of the full set of in-situ temperature and salinity measurements collected by Coriolis during 2015 is used. These measurements were made with a wide range of instruments (XBTs, CTDs, Argo floats, instrumented sea mammals, ...) covering the global ocean. The reference dataset has been validated simultaneously by both teams. An exhaustive comparison of the
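
    The core of such a comparison is a cross-tabulation of the two teams' flags over the shared reference profiles. A minimal sketch (profile IDs and flag values are hypothetical):

```python
from collections import Counter

# Minimal sketch (hypothetical flags): cross-tabulating quality-control
# decisions from two validation procedures for the same profiles.
cora_flags = {"prof_001": "good", "prof_002": "bad", "prof_003": "good"}
en4_flags  = {"prof_001": "good", "prof_002": "good", "prof_003": "good"}

table = Counter((cora_flags[p], en4_flags[p]) for p in cora_flags)
for (c, e), n in sorted(table.items()):
    print(f"CORA={c:4s}  EN4={e:4s}  profiles={n}")
# Disagreements (here prof_002) are the cases a merged product must resolve.
```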

  13. Volumetric optoacoustic monitoring of endovenous laser treatments

    Science.gov (United States)

    Fehm, Thomas F.; Deán-Ben, Xosé L.; Schaur, Peter; Sroka, Ronald; Razansky, Daniel

    2016-03-01

    Chronic venous insufficiency (CVI) is one of the most common medical conditions, with reported prevalence estimates as high as 30% in the adult population. Although conservative management with compression therapy may improve the symptoms associated with CVI, healing often demands invasive procedures. Besides established surgical methods like vein stripping or bypassing, endovenous laser therapy (ELT) has emerged over the last 15 years as a promising novel treatment option offering multiple advantages, such as less pain and faster recovery. Much of the treatment success depends on monitoring of the treatment progression using clinical imaging modalities such as Doppler ultrasound. The latter, however, does not provide the contrast, spatial resolution and three-dimensional imaging capacity necessary for accurate online lesion assessment during treatment. As a consequence, the incidence of recanalization, lack of vessel occlusion and collateral damage remains highly variable among patients. In this study, we examined the capacity of volumetric optoacoustic tomography (VOT) for real-time monitoring of ELT using an ex-vivo ox foot model. ELT was performed on subcutaneous veins while optoacoustic signals were acquired and reconstructed in real time at a spatial resolution on the order of 200 μm. VOT images showed spatio-temporal maps of the lesion progression, characteristics of the vessel wall, and the position of the ablation fiber's tip during the pull-back. It was also possible to correlate the images with the temperature elevation measured in the area adjacent to the ablation spot. We conclude that VOT is a promising tool for providing online feedback during endovenous laser therapy.

  14. Serial volumetric registration of pulmonary CT studies

    Science.gov (United States)

    Silva, José Silvestre; Silva, Augusto; Sousa Santos, Beatriz

    2008-03-01

    Detailed morphological analysis of pulmonary structures and tissue, provided by modern CT scanners, is of utmost importance in oncological applications, for diagnosis, treatment, and follow-up. A patient may go through several tomographic studies over a period of time, originating volumetric sets of image data that must be appropriately registered in order to track suspicious radiological findings. The structures or regions of interest may change their position or shape in CT exams acquired at different moments, due to postural, physiologic or pathologic changes, so the exams should be registered before any follow-up information can be extracted. Postural mismatching over time is practically impossible to avoid and is particularly evident when imaging is performed at the limiting spatial resolution. In this paper, we propose a method for intra-patient registration of pulmonary CT studies to assist in the management of oncological pathology. Our method takes advantage of prior segmentation work. In the first step, pulmonary segmentation is performed, where the trachea and main bronchi are identified. Then, the registration method proceeds with a longitudinal alignment based on morphological features of the lungs, such as the position of the carina, the pulmonary areas, the centers of mass and the pulmonary trans-axial principal axis. The final step corresponds to the trans-axial registration of the corresponding pulmonary masked regions. This is accomplished by a pairwise sectional registration process driven by an iterative search for the affine transformation parameters leading to optimal similarity metrics. Results with several cases of intra-patient, intra-modality registration, up to 7 time points, show that this method provides the accurate registration needed for quantitative tracking of lesions and the development of image fusion strategies that may effectively assist the follow-up process.

  15. Dual-gated volumetric modulated arc therapy

    International Nuclear Information System (INIS)

    Fahimian, Benjamin; Wu, Junqing; Wu, Huanmei; Geneser, Sarah; Xing, Lei

    2014-01-01

    Gated Volumetric Modulated Arc Therapy (VMAT) is an emerging radiation therapy modality for the treatment of tumors affected by respiratory motion. However, gating significantly prolongs the treatment time, as delivery is only activated during a single respiratory phase. To enhance the efficiency of gated VMAT delivery, a novel dual-gated VMAT (DG-VMAT) technique, in which delivery is executed at both exhale and inhale phases in a given arc rotation, is developed and experimentally evaluated. Arc delivery at two phases is realized by sequentially interleaving control points consisting of MUs, MLC sequences, and angles of VMAT plans generated at the exhale and inhale phases. Dual-gated delivery is initiated when the respiration gating signal enters the exhale window; when the exhale delivery concludes, the beam turns off and the gantry rolls back to the starting position for the inhale window. The process is then repeated until both inhale and exhale arcs are fully delivered. DG-VMAT plan delivery accuracy was assessed using a pinpoint chamber and a diode array phantom undergoing programmed motion. DG-VMAT delivery was experimentally implemented through custom XML scripting in Varian's TrueBeam™ STx Developer Mode. Relative to single-gated delivery at exhale, the treatment time was improved by 95.5% for a sinusoidal breathing pattern. The pinpoint chamber dose measurement agreed with the calculated dose within 0.7%. For the DG-VMAT delivery, 97.5% of the diode array measurements passed the 3%/3 mm gamma criterion. The feasibility of the DG-VMAT delivery scheme has been experimentally demonstrated for the first time. By leveraging the stability and natural pauses that occur at end-inspiration and end-exhalation, DG-VMAT provides a practical method for enhancing gated delivery efficiency by up to a factor of two.
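
    The interleaving logic described above can be sketched schematically (an illustration only, not Varian's control software; the control-point contents and the roll-back step are simplified placeholders):

```python
# Minimal sketch (schematic): interleaving exhale and inhale VMAT control
# points for dual-gated delivery.
exhale_cps = [("cp", "exhale", i) for i in range(3)]   # MU/MLC/angle bundles
inhale_cps = [("cp", "inhale", i) for i in range(3)]

def dual_gated_sequence(exhale, inhale):
    """Alternate gated segments: deliver at exhale, roll back, deliver at inhale."""
    for ex, inh in zip(exhale, inhale):
        yield ex                    # beam on while gating signal is in exhale window
        yield ("gantry_rollback",)  # beam off, return gantry to segment start
        yield inh                   # beam on while gating signal is in inhale window

for step in dual_gated_sequence(exhale_cps, inhale_cps):
    print(step)
```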

  16. Wind and wave dataset for Matara, Sri Lanka

    Directory of Open Access Journals (Sweden)

    Y. Luo

    2018-01-01

    Full Text Available We present a continuous in situ hydro-meteorological observational dataset from a set of instruments first deployed in December 2012 in the south of Sri Lanka, facing the north Indian Ocean. In these waters, simultaneous records of wind and wave data are sparse due to difficulties in deploying measurement instruments, although the area hosts one of the busiest shipping lanes in the world. This study describes the survey, deployment, and measurements of wind and waves, with the aim of offering future users of the dataset as comprehensive a description as possible. This dataset advances our understanding of nearshore hydrodynamic processes and the wave climate, including sea waves and swells, in the north Indian Ocean. Moreover, it is a valuable resource for ocean model parameterization and validation. The archived dataset (Table 1) is examined in detail, including wave data at two locations with water depths of 20 and 10 m comprising synchronous time series of wind, ocean astronomical tide, air pressure, etc. In addition, we use these wave observations to evaluate the ERA-Interim reanalysis product. Based on Buoy 2 data, swells are the main component of waves year-round, although monsoons can markedly alter the proportion between swell and wind sea. The dataset (Luo et al., 2017) is publicly available from Science Data Bank (https://doi.org/10.11922/sciencedb.447).
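
    Evaluating a reanalysis product against buoy records, as done here for ERA-Interim, typically starts from collocated series and simple error statistics. A minimal sketch (the numbers are synthetic, not from the dataset):

```python
import numpy as np

# Minimal sketch (synthetic numbers): evaluating a reanalysis product against
# buoy observations with bias and RMSE of significant wave height (m).
buoy = np.array([1.2, 1.5, 1.1, 2.0, 1.8])        # observed Hs
era  = np.array([1.1, 1.6, 1.0, 1.7, 1.9])        # collocated reanalysis Hs

bias = float(np.mean(era - buoy))
rmse = float(np.sqrt(np.mean((era - buoy) ** 2)))
print(f"bias = {bias:+.2f} m, RMSE = {rmse:.2f} m")
```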

  17. ASSISTments Dataset from Multiple Randomized Controlled Experiments

    Science.gov (United States)

    Selent, Douglas; Patikorn, Thanaporn; Heffernan, Neil

    2016-01-01

    In this paper, we present a dataset consisting of data generated from 22 previously and currently running randomized controlled experiments inside the ASSISTments online learning platform. This dataset provides data mining opportunities for researchers to analyze ASSISTments data in a convenient format across multiple experiments at the same time.

  18. Synthetic and Empirical Capsicum Annuum Image Dataset

    NARCIS (Netherlands)

    Barth, R.

    2016-01-01

    This dataset consists of per-pixel annotated synthetic (10500) and empirical images (50) of Capsicum annuum, also known as sweet or bell pepper, situated in a commercial greenhouse. Furthermore, the source models to generate the synthetic images are included. The aim of the datasets are to

  19. Volumetric breast density affects performance of digital screening mammography

    OpenAIRE

    Wanders, JO; Holland, K; Veldhuis, WB; Mann, RM; Pijnappel, RM; Peeters, PH; Van Gils, CH; Karssemeijer, N

    2016-01-01

    PURPOSE: To determine to what extent automatically measured volumetric mammographic density influences screening performance when using digital mammography (DM). METHODS: We collected a consecutive series of 111,898 DM examinations (2003-2011) from one screening unit of the Dutch biennial screening program (age 50-75 years). Volumetric mammographic density was automatically assessed using Volpara. We determined screening performance measures for four density categories comparable to the Ameri...

  20. Increasing the volumetric efficiency of Diesel engines by intake pipes

    Science.gov (United States)

    List, Hans

    1933-01-01

    Development of a method for calculating the volumetric efficiency of piston engines with intake pipes. Application of this method to the scavenging pumps of two-stroke-cycle engines with crankcase scavenging and to four-stroke-cycle engines. The utility of the method is demonstrated by volumetric-efficiency tests of the two-stroke-cycle engines with crankcase scavenging. Its practical application to the calculation of intake pipes is illustrated by example.

  1. Volumetric Forest Change Detection Through Vhr Satellite Imagery

    Science.gov (United States)

    Akca, Devrim; Stylianidis, Efstratios; Smagas, Konstantinos; Hofer, Martin; Poli, Daniela; Gruen, Armin; Sanchez Martin, Victor; Altan, Orhan; Walli, Andreas; Jimeno, Elisa; Garcia, Alejandro

    2016-06-01

    Quick and economical ways of detecting planimetric and volumetric changes of forest areas are in high demand. A research platform, called FORSAT (A satellite processing platform for high resolution forest assessment), was developed for the extraction of 3D geometric information from VHR (very-high resolution) imagery from satellite optical sensors and automatic change detection. This 3D forest information solution was developed during a Eurostars project. FORSAT includes two main units. The first one is dedicated to the geometric and radiometric processing of satellite optical imagery and 2D/3D information extraction. This includes: image radiometric pre-processing, image and ground point measurement, improvement of geometric sensor orientation, quasi-epipolar image generation for stereo measurements, digital surface model (DSM) extraction using a precise and robust image matching approach specially designed for VHR satellite imagery, generation of orthoimages, and 3D measurements in single images using mono-plotting and in stereo images as well as triplets. FORSAT supports most of the VHR optical imagery commonly used for civil applications: IKONOS, OrbView-3, SPOT-5 HRS, SPOT-5 HRG, QuickBird, GeoEye-1, WorldView-1/2, Pléiades 1A/1B, SPOT 6/7, and sensors of similar type to be expected in the future. The second unit of FORSAT is dedicated to 3D surface comparison for change detection. It allows users to import digital elevation models (DEMs), align them using an advanced 3D surface matching approach and calculate the 3D differences and volume changes between epochs. To this end our 3D surface matching method LS3D is being used. FORSAT is a single-source and flexible forest information solution with a very competitive price/quality ratio, allowing expert and non-expert remote sensing users to monitor forests in three and four dimensions from VHR optical imagery for many forest information needs. The capacity and benefits of FORSAT have been tested in
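
    Once two DSM epochs are co-registered, the volumetric change reduces to cell-wise differencing. A minimal sketch (an illustration, not the FORSAT/LS3D implementation; heights and cell size are invented):

```python
import numpy as np

# Minimal sketch (assumed post-alignment step): volumetric change from two
# co-registered DSM epochs by cell-wise differencing.
def volume_change_m3(dsm_t0, dsm_t1, cell_area_m2):
    dz = dsm_t1 - dsm_t0                      # elevation change per cell (m)
    return float(np.nansum(dz) * cell_area_m2)

t0 = np.full((100, 100), 310.0)               # hypothetical surface heights (m)
t1 = t0.copy(); t1[40:60, 40:60] -= 25.0      # a 20x20-cell clear-cut
print(volume_change_m3(t0, t1, cell_area_m2=1.0))   # -10000.0 m^3
```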

  2. Design of an audio advertisement dataset

    Science.gov (United States)

    Fu, Yutao; Liu, Jihong; Zhang, Qi; Geng, Yuting

    2015-12-01

    Since more and more advertisements swarm into radio broadcasts, it is necessary to establish an audio advertising dataset that can be used to analyze and classify advertisements. A method for establishing a complete audio advertising dataset is presented in this paper. The dataset is divided into four different kinds of advertisements. Each advertisement sample is given in *.wav file format and annotated with a txt file containing its file name, sampling frequency, channel number, broadcasting time and class. The rationality of the classification of the advertisements in this dataset is demonstrated by clustering the different advertisements based on Principal Component Analysis (PCA). The experimental results show that this audio advertisement dataset offers a reliable set of samples for related audio advertisement experimental studies.

  3. Visualization and volumetric structures from MR images of the brain

    Energy Technology Data Exchange (ETDEWEB)

    Parvin, B.; Johnston, W.; Robertson, D.

    1994-03-01

    Pinta is a system for segmentation and visualization of anatomical structures obtained from serial sections reconstructed from magnetic resonance imaging. The system approaches the segmentation problem by assigning each volumetric region to an anatomical structure. This is accomplished by satisfying constraints at the pixel level, slice level, and volumetric level. Each slice is represented by an attributed graph, where nodes correspond to regions and links correspond to the relations between regions. These regions are obtained by grouping pixels based on similarity and proximity. The slice level attributed graphs are then coerced to form a volumetric attributed graph, where volumetric consistency can be verified. The main novelty of our approach is in the use of the volumetric graph to ensure consistency from symbolic representations obtained from individual slices. In this fashion, the system allows errors to be made at the slice level, yet removes them when the volumetric consistency cannot be verified. Once the segmentation is complete, the 3D surfaces of the brain can be constructed and visualized.

  4. Volumetric characteristics and compactability of asphalt rubber mixtures with organic warm mix asphalt additives

    Directory of Open Access Journals (Sweden)

    A. M. Rodríguez-Alloza

    2017-04-01

    Full Text Available Warm Mix Asphalt (WMA refers to technologies that reduce manufacturing and compaction temperatures of asphalt mixtures allowing lower energy consumption and reducing greenhouse gas emissions from asphalt plants. These benefits, combined with the effective reuse of a solid waste product, make asphalt rubber (AR mixtures with WMA additives an excellent environmentally-friendly material for road construction. The effect of WMA additives on rubberized mixtures has not yet been established in detail and the lower mixing/compaction temperatures of these mixtures may result in insufficient compaction. In this sense, the present study uses a series of laboratory tests to evaluate the volumetric characteristics and compactability of AR mixtures with organic additives when production/compaction temperatures are decreased. The results of this study indicate that the additives selected can decrease the mixing/compaction temperatures without compromising the volumetric characteristics and compactability.

  5. Very high frame rate volumetric integration of depth images on mobile devices.

    Science.gov (United States)

    Kähler, Olaf; Adrian Prisacariu, Victor; Yuheng Ren, Carl; Sun, Xin; Torr, Philip; Murray, David

    2015-11-01

    Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. Running such methods on mobile devices, providing users with freedom of movement and instantaneous reconstruction feedback, remains challenging however. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. We present (i) optimisations for the basic data structure, and its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system thus achieves frame rates of up to 47 Hz on a Nvidia Shield Tablet and 910 Hz on a Nvidia GTX Titan X GPU, or even beyond 1.1 kHz without visualisation.
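
    The data structure named in (i) can be sketched in a few lines: a hash map from coarse block coordinates to small dense voxel blocks, allocated lazily near observed surfaces. A minimal CPU-side illustration (simplified; the block size and TSDF initialisation are assumptions):

```python
import numpy as np

# Minimal sketch (simplified, no GPU code): voxel block hashing keeps a hash
# map from block coordinates to small dense voxel blocks, so memory is
# allocated only where the scene has been observed.
BLOCK = 8                                  # 8x8x8 voxels per block
blocks = {}                                # (bx, by, bz) -> TSDF block

def voxel(world_ijk):
    key = tuple(c // BLOCK for c in world_ijk)
    if key not in blocks:                  # allocate on first touch
        blocks[key] = np.ones((BLOCK, BLOCK, BLOCK), dtype=np.float32)
    local = tuple(c % BLOCK for c in world_ijk)
    return key, local

key, local = voxel((17, 3, 42))
blocks[key][local] = -0.02                 # integrate a signed-distance sample
print(key, local, len(blocks))             # (2, 0, 5) (1, 3, 2) 1
```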

  6. SU-E-T-540: Volumetric Modulated Total Body Irradiation Using a Rotational Lazy Susan-Like Immobilization System

    International Nuclear Information System (INIS)

    Gu, X; Hrycushko, B; Lee, H; Lamphier, R; Jiang, S; Abdulrahman, R; Timmerman, R

    2014-01-01

    Purpose: Traditional extended-SSD total body irradiation (TBI) techniques can be problematic in terms of patient comfort and/or dose uniformity. This work aims to develop a comfortable TBI technique that achieves a uniform dose distribution to the total body while reducing the dose to organs at risk for complications. Methods: To maximize patient comfort, a lazy Susan-like couch top immobilization system which rotates about a pivot point was developed. During CT simulation, a patient is immobilized by a Vac-Lok bag within the body frame. The patient is scanned head-first and then feet-first following 180° rotation of the frame. The two scans are imported into the Pinnacle treatment planning system and concatenated to give a full-body CT dataset. Treatment planning matches multiple-isocenter volumetric modulated arc (VMAT) fields of the upper body and multiple-isocenter parallel-opposed fields of the lower body. VMAT fields of the torso are optimized to satisfy lung dose constraints while achieving a therapeutic dose to the torso. The multiple-isocenter VMAT fields are delivered with an indexed couch, followed by body frame rotation about the pivot point to treat the lower-body isocenters. The treatment workflow was simulated with a Rando phantom, and the plan was mapped to a solid water slab phantom for point- and film-dose measurements at multiple locations. Results: The treatment plan of 12 Gy over 8 fractions achieved 80.2% coverage of the total body volume within ±10% of the prescription dose. The mean lung dose was 8.1 Gy. All ion chamber measurements were within ±1.7% of the calculated point doses. All relative film dosimetry showed at least a 98.0% gamma passing rate using a 3 mm/3% passing criterion. Conclusion: The proposed patient comfort-oriented TBI technique provides a uniform dose distribution within the total body while reducing the dose to the lungs.

  7. Volumetric associations between uncinate fasciculus, amygdala, and trait anxiety

    Directory of Open Access Journals (Sweden)

    Baur Volker

    2012-01-01

    Full Text Available Abstract Background Recent investigations of white matter (WM connectivity suggest an important role of the uncinate fasciculus (UF, connecting anterior temporal areas including the amygdala with prefrontal-/orbitofrontal cortices, for anxiety-related processes. Volume of the UF, however, has rarely been investigated, but may be an important measure of structural connectivity underlying limbic neuronal circuits associated with anxiety. Since UF volumetric measures are newly applied measures, it is necessary to cross-validate them using further neural and behavioral indicators of anxiety. Results In a group of 32 subjects not reporting any history of psychiatric disorders, we identified a negative correlation between left UF volume and trait anxiety, a finding that is in line with previous results. On the other hand, volume of the left amygdala, which is strongly connected with the UF, was positively correlated with trait anxiety. In addition, volumes of the left UF and left amygdala were inversely associated. Conclusions The present study emphasizes the role of the left UF as candidate WM fiber bundle associated with anxiety-related processes and suggests that fiber bundle volume is a WM measure of particular interest. Moreover, these results substantiate the structural relatedness of UF and amygdala by a non-invasive imaging method. The UF-amygdala complex may be pivotal for the control of trait anxiety.

  8. NEW APPROACH FOR TECHNOLOGY OF VOLUMETRIC – SUPERFICIAL HARDENING OF GEAR DETAILS OF THE BACK AXLE OF MOBILE MACHINES

    Directory of Open Access Journals (Sweden)

    A. I. Mihluk

    2010-01-01

    Full Text Available A new approach to the technology of volumetric-superficial hardening of gear parts of the back axle made of steel with lowered hardenability is offered. This approach consists in the formation of an intensely hardened condition over the entire surface of the part.

  9. Volumetric modulated arc radiotherapy for esophageal cancer

    International Nuclear Information System (INIS)

    Vivekanandan, Nagarajan; Sriram, Padmanaban; Syam Kumar, S.A.; Bhuvaneswari, Narayanan; Saranya, Kamalakannan

    2012-01-01

    A treatment planning study was performed to evaluate the performance of volumetric arc modulation with RapidArc (RA) against 3D conformal radiation therapy (3D-CRT) and conventional intensity-modulated radiation therapy (IMRT) techniques for esophageal cancer. Computed tomography scans of 10 patients were included in the study. 3D-CRT, 4-field IMRT, and single-arc and double-arc RA plans were generated with the aim of sparing organs at risk (OAR) and healthy tissue while enforcing highly conformal target coverage. The planning objective was to deliver 54 Gy to the planning target volume (PTV) in 30 fractions. Plans were evaluated based on target conformity and dose-volume histograms of organs at risk (lung, spinal cord, and heart). The monitor units (MU) and treatment delivery time were also evaluated to measure treatment efficiency. The IMRT plan improves target conformity and spares OAR when compared with 3D-CRT. Target conformity improved with RA plans compared with IMRT. The mean lung dose was similar in all techniques. However, RA plans showed a reduction in the volume of lung irradiated at the V20Gy and V30Gy dose levels (range, 4.62–17.98%) compared with IMRT plans. The mean dose and D35% of the heart for the RA plans were better than for IMRT by 0.5–5.8%. Mean V10Gy and integral dose to healthy tissue were almost similar in all techniques. But RA plans resulted in a reduced low-level dose bath (15–20 Gy) in the range of 14–16% compared with IMRT plans. The average MU needed to deliver the prescribed dose by the RA technique was reduced by 20–25% compared with the IMRT technique. This preliminary study on RA for esophageal cancers showed improvements in sparing OAR and healthy tissue with reduced beam-on time, whereas only double-arc RA offered improved target coverage compared with IMRT and 3D-CRT plans.
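
    Metrics such as V20Gy and mean dose, used above to compare plans, are straightforward to compute from a per-voxel organ dose array. A minimal sketch (the dose values are random placeholders, not plan data):

```python
import numpy as np

# Minimal sketch (illustrative): computing V20Gy and mean dose for an organ at
# risk from a per-voxel dose array, as used in plan comparison.
def v_dose(organ_dose_gy, threshold_gy):
    """Percent of organ volume receiving at least `threshold_gy`."""
    d = np.asarray(organ_dose_gy, float)
    return 100.0 * np.mean(d >= threshold_gy)

lung_dose = np.random.default_rng(1).uniform(0, 40, size=10000)  # toy doses
print(f"V20Gy = {v_dose(lung_dose, 20):.1f}%  mean = {lung_dose.mean():.1f} Gy")
```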

  10. Discrete pre-processing step effects in registration-based pipelines, a preliminary volumetric study on T1-weighted images.

    Science.gov (United States)

    Muncy, Nathan M; Hedges-Muncy, Ariana M; Kirwan, C Brock

    2017-01-01

    Pre-processing MRI scans prior to performing volumetric analyses is common practice in MRI studies. As pre-processing steps adjust the voxel intensities, the space in which the scan exists, and the amount of data in the scan, it is possible that these steps have an effect on the volumetric output. To date, studies have compared between, and not within, pipelines, so the impact of each step is unknown. This study aims to quantify the effects of pre-processing steps on volumetric measures in T1-weighted scans within a single pipeline. Our hypothesis was that pre-processing steps would significantly impact ROI volume estimations. One hundred fifteen participants from the OASIS dataset were used, with each participant contributing three scans. All scans were then pre-processed using a step-wise pipeline. Bilateral hippocampus, putamen, and middle temporal gyrus volume estimations were assessed following each successive step, and all data were processed by the same pipeline 5 times. Repeated-measures analyses tested for main effects of pipeline step, scan-rescan (for MRI scanner consistency) and repeated pipeline runs (for algorithmic consistency). A main effect of pipeline step was detected, and interestingly an interaction between pipeline step and ROI exists. No effect of either scan-rescan or repeated pipeline run was detected. We then supply a correction for the noise in the data resulting from pre-processing.

  11. Statistical segmentation of multidimensional brain datasets

    Science.gov (United States)

    Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro

    2001-07-01

    This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes part of the problems involved in multidimensional clustering techniques like partial volume effects (PVE), processing speed and difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region growing techniques with fully automated seed selection. 2) Expectation Maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of gaussians. These pixels can then be classified into cerebrospinal fluid (CSF), white matter and grey matter. Using this procedure, our method takes advantage of using the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold-standard. Our results were more robust and closer to the gold-standard.
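
    The pixel-classification stage described here (EM-estimated Gaussian mixtures with full covariance) can be sketched compactly. A minimal illustration (assumes scikit-learn is available; the synthetic T1/T2 intensities are invented, and this is not the paper's implementation):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Minimal sketch (assumed setup): modelling non-background voxel intensities
# as a 3-class Gaussian mixture (CSF / grey matter / white matter) with full
# covariance over co-registered T1 and T2 values.
rng = np.random.default_rng(0)
t1 = np.concatenate([rng.normal(m, 8, 500) for m in (40, 90, 140)])
t2 = np.concatenate([rng.normal(m, 8, 500) for m in (130, 80, 50)])
X = np.column_stack([t1, t2])                   # one row per voxel

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)                     # EM estimation + classification
print(np.bincount(labels))                      # voxels per tissue class
```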

  12. ASSESSING SMALL SAMPLE WAR-GAMING DATASETS

    Directory of Open Access Journals (Sweden)

    W. J. HURLEY

    2013-10-01

    Full Text Available One of the fundamental problems faced by military planners is the assessment of changes to force structure. An example is whether to replace an existing capability with an enhanced system. This can be done directly with a comparison of measures such as accuracy, lethality, survivability, etc. However, this approach does not allow an assessment of the force-multiplier effects of the proposed change. To gauge these effects, planners often turn to war-gaming. For many war-gaming experiments, it is expensive, both in terms of time and dollars, to generate a large number of sample observations. This puts a premium on the statistical methodology used to examine these small datasets. In this paper we compare the power of three tests to assess population differences: the Wald-Wolfowitz test, the Mann-Whitney U test, and re-sampling. We employ a series of Monte Carlo simulation experiments. Not unexpectedly, we find that the Mann-Whitney test performs better than the Wald-Wolfowitz test. Re-sampling is judged to perform slightly better than the Mann-Whitney test.
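
    A Monte Carlo power comparison of this kind can be sketched briefly (a simplified illustration with assumed normal populations, a mean-difference permutation statistic, and small trial counts; it requires scipy and is not the paper's simulation design):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Minimal sketch (simplified): Monte Carlo power of the Mann-Whitney U test
# versus a permutation (re-sampling) test for small war-gaming samples.
rng = np.random.default_rng(0)

def perm_test(x, y, n_perm=999):
    obs = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        count += abs(perm[:len(x)].mean() - perm[len(x):].mean()) >= obs
    return (count + 1) / (n_perm + 1)

power_mw = power_pm = 0
trials = 200
for _ in range(trials):
    x = rng.normal(0.5, 1, 8)          # "enhanced" system, small sample
    y = rng.normal(0.0, 1, 8)          # baseline system
    power_mw += mannwhitneyu(x, y, alternative="two-sided").pvalue < 0.05
    power_pm += perm_test(x, y) < 0.05
print(power_mw / trials, power_pm / trials)
```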

  13. The Kinetics Human Action Video Dataset

    OpenAIRE

    Kay, Will; Carreira, Joao; Simonyan, Karen; Zhang, Brian; Hillier, Chloe; Vijayanarasimhan, Sudheendra; Viola, Fabio; Green, Tim; Back, Trevor; Natsev, Paul; Suleyman, Mustafa; Zisserman, Andrew

    2017-01-01

    We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some ...

  14. BASE MAP DATASET, LOS ANGELES COUNTY, CALIFORNIA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  15. BASE MAP DATASET, CHEROKEE COUNTY, SOUTH CAROLINA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  16. SIAM 2007 Text Mining Competition dataset

    Data.gov (United States)

    National Aeronautics and Space Administration — Subject Area: Text Mining Description: This is the dataset used for the SIAM 2007 Text Mining competition. This competition focused on developing text mining...

  17. Harvard Aging Brain Study : Dataset and accessibility

    NARCIS (Netherlands)

    Dagley, Alexander; LaPoint, Molly; Huijbers, Willem; Hedden, Trey; McLaren, Donald G.; Chatwal, Jasmeer P.; Papp, Kathryn V.; Amariglio, Rebecca E.; Blacker, Deborah; Rentz, Dorene M.; Johnson, Keith A.; Sperling, Reisa A.; Schultz, Aaron P.

    2017-01-01

    The Harvard Aging Brain Study is sharing its data with the global research community. The longitudinal dataset consists of a 284-subject cohort with the following modalities acquired: demographics, clinical assessment, comprehensive neuropsychological testing, clinical biomarkers, and neuroimaging.

  18. BASE MAP DATASET, HONOLULU COUNTY, HAWAII, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  19. BASE MAP DATASET, EDGEFIELD COUNTY, SOUTH CAROLINA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  20. Simulation of Smart Home Activity Datasets.

    Science.gov (United States)

    Synnott, Jonathan; Nugent, Chris; Jeffers, Paul

    2015-06-16

    A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches. Access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendation for future work in intelligent environment simulation.
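
    In the spirit of the model-based approaches reviewed here, a labelled sensor event stream can be generated from a scripted activity schedule. A minimal sketch (the sensor layout and activity-to-sensor mapping are invented for illustration):

```python
import random

# Minimal sketch (hypothetical layout): generating a labelled smart-home
# sensor event stream from a scripted activity schedule.
random.seed(7)
ACTIVITY_SENSORS = {
    "prepare_meal": ["kitchen_pir", "fridge_door", "cupboard_door"],
    "sleep":        ["bedroom_pir", "bed_pressure"],
    "watch_tv":     ["lounge_pir", "tv_power"],
}

def simulate(schedule):
    t = 0
    for activity, duration_min in schedule:
        for _ in range(duration_min):
            yield t, random.choice(ACTIVITY_SENSORS[activity]), activity
            t += 1

for event in simulate([("prepare_meal", 3), ("watch_tv", 2)]):
    print(event)   # (minute, sensor fired, ground-truth activity label)
```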

  1. Environmental Dataset Gateway (EDG) REST Interface

    Data.gov (United States)

    U.S. Environmental Protection Agency — Use the Environmental Dataset Gateway (EDG) to find and access EPA's environmental resources. Many options are available for easily reusing EDG content in other...

  2. BASE MAP DATASET, INYO COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  3. BASE MAP DATASET, JACKSON COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  4. BASE MAP DATASET, SANTA CRUZ COUNTY, CALIFORNIA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  5. Climate Prediction Center IR 4km Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — CPC IR 4km dataset was created from all available individual geostationary satellite data which have been merged to form nearly seamless global (60N-60S) IR...

  6. BASE MAP DATASET, MAYES COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications: cadastral, geodetic control,...

  7. BASE MAP DATASET, KINGFISHER COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  8. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Full Text Available The accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths to improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the RTTTR configuration (tilting head B-axis, and rotary table A-axis on the workpiece side) was set up taking into consideration rigid-body kinematics and homogeneous transformation matrices, in which 43 error components are included. These 43 error components can separately reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of a workpiece is governed by the position of the cutting tool center point (TCP) relative to the workpiece; when the cutting tool deviates from its ideal position relative to the workpiece, machining error is experienced. The compensation process comprises detecting the present tool path, analysing the geometric error of the RTTTR five-axis CNC machine tool, translating current component positions into compensated positions using the kinematic error model, converting the newly created positions into new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.
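
    The rigid-body kinematics with homogeneous transformation matrices mentioned above can be illustrated with a toy model (an illustration only; the error magnitudes, tool offset, and a single lumped error transform stand in for the paper's 43 components):

```python
import numpy as np

# Minimal sketch (illustrative): composing homogeneous transformation matrices
# with small geometric error terms to get the deviation of the tool centre
# point (TCP) from its ideal position.
def trans(x, y, z):
    T = np.eye(4); T[:3, 3] = (x, y, z); return T

def small_rot(ex, ey, ez):
    # Small-angle rotation (rad), as used in rigid-body error modelling.
    return np.array([[1, -ez,  ey, 0],
                     [ez,  1, -ex, 0],
                     [-ey, ex,  1, 0],
                     [0,   0,   0, 1]], dtype=float)

ideal = trans(100.0, 50.0, 20.0)                     # commanded axis motion
error = trans(0.004, -0.002, 0.001) @ small_rot(1e-5, 2e-5, -1e-5)
actual = ideal @ error                               # actual = ideal x error
tcp = np.array([0.0, 0.0, 150.0, 1.0])               # tool offset (mm)
print((actual @ tcp - ideal @ tcp)[:3])              # TCP deviation (mm)
```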

  9. The importance of accurate anatomic assessment for the volumetric analysis of the amygdala

    Directory of Open Access Journals (Sweden)

    L. Bonilha

    2005-03-01

    Full Text Available There is a wide range of values reported in volumetric studies of the amygdala. The use of single-plane, thick-slice magnetic resonance imaging (MRI) may prevent the correct visualization of anatomic landmarks and yield imprecise results. To assess whether there is a difference between volumetric analysis of the amygdala performed on single-plane 3-mm MRI slices and multiplanar analysis of 1-mm MRI slices, we studied healthy subjects and patients with temporal lobe epilepsy. We performed manual delineation of the amygdala on T1-weighted inversion recovery, 3-mm coronal slices, and manual delineation of the amygdala on three-dimensional volumetric T1-weighted images with 1-mm slice thickness. The data were compared using a dependent t-test. There was a significant difference between the volumes obtained by the coronal plane-based measurements and the volumes obtained by three-dimensional analysis (P < 0.001). An incorrect estimate of the amygdala volume may preclude a correct analysis of the biological effects of alterations in amygdala volume. Three-dimensional analysis is preferred because it is based on more extensive anatomical assessment and the results are similar to those obtained in post-mortem studies.

  10. Comparison of recent SnIa datasets

    International Nuclear Information System (INIS)

    Sanchez, J.C. Bueno; Perivolaropoulos, L.; Nesseris, S.

    2009-01-01

    We rank the six latest Type Ia supernova (SnIa) datasets (Constitution (C), Union (U), ESSENCE (Davis) (E), Gold06 (G), SNLS 1yr (S) and SDSS-II (D)) in the context of the Chevalier-Polarski-Linder (CPL) parametrization w(a) = w0 + w1(1 − a), according to their Figure of Merit (FoM), their consistency with the cosmological constant (ΛCDM), their consistency with standard rulers (Cosmic Microwave Background (CMB) and Baryon Acoustic Oscillations (BAO)) and their mutual consistency. We find a significant improvement of the FoM (defined as the inverse area of the 95.4% parameter contour) with the number of SnIa of these datasets ((C) highest FoM, (U), (G), (D), (E), (S) lowest FoM). Standard rulers (CMB+BAO) have a better FoM by about a factor of 3, compared to the highest-FoM SnIa dataset (C). We also find that the ranking sequence based on consistency with ΛCDM is identical to the corresponding ranking based on consistency with standard rulers ((S) most consistent, (D), (C), (E), (U), (G) least consistent). The ranking sequence of the datasets however changes when we consider consistency with an expansion history corresponding to evolving dark energy (w0, w1) = (−1.4, 2) crossing the phantom divide line w = −1 (it is practically reversed to (G), (U), (E), (S), (D), (C)). The SALT2 and MLCS2k2 fitters are also compared and some peculiar features of the SDSS-II dataset when standardized with the MLCS2k2 fitter are pointed out. Finally, we construct a statistic to estimate the internal consistency of a collection of SnIa datasets. We find that even though there is good consistency among most samples taken from the above datasets, this consistency decreases significantly when the Gold06 (G) dataset is included in the sample.
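
    The FoM definition quoted above (inverse area of the 95.4% contour) can be evaluated directly from a fit covariance matrix. A minimal sketch (the covariance values are hypothetical; Δχ² = 6.17 is the standard 95.4% value for two parameters):

```python
import numpy as np

# Minimal sketch (definition as quoted in the record): FoM as the inverse area
# of the 95.4% confidence contour in the (w0, w1) plane.
def figure_of_merit(cov_w0w1):
    # Area of the Delta-chi^2 = 6.17 (95.4%, 2 dof) ellipse: pi * 6.17 * sqrt(det C).
    return 1.0 / (np.pi * 6.17 * np.sqrt(np.linalg.det(cov_w0w1)))

cov = np.array([[0.04, -0.10],      # hypothetical (w0, w1) covariance
                [-0.10, 0.60]])
print(figure_of_merit(cov))
```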

  11. Parallel imaging: is GRAPPA a useful acquisition tool for MR imaging intended for volumetric brain analysis?

    Directory of Open Access Journals (Sweden)

    Frank Anders

    2009-08-01

    Full Text Available Abstract Background The work presented here investigates parallel imaging applied to T1-weighted high resolution imaging for use in longitudinal volumetric clinical studies involving Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI) patients. This was in an effort to shorten acquisition times to minimise the risk of motion artefacts caused by patient discomfort and disorientation. The principal question is, "Can parallel imaging be used to acquire images at 1.5 T of sufficient quality to allow volumetric analysis of patient brains?" Methods Optimisation studies were performed on a young healthy volunteer and the selected protocol (including the use of two different parallel imaging acceleration factors) was then tested on a cohort of 15 elderly volunteers including MCI and AD patients. In addition to automatic brain segmentation, hippocampus volumes were manually outlined and measured in all patients. The 15 patients were scanned on a second occasion approximately one week later using the same protocol and evaluated in the same manner to test repeatability of measurement using images acquired with the GRAPPA parallel imaging technique applied to the MPRAGE sequence. Results Intraclass correlation tests show almost perfect agreement between repeated measurements of both the segmented brain parenchyma fraction and regional measurements of the hippocampi. The protocol is suitable for both global and regional volumetric measurement in dementia patients. Conclusion In summary, these results indicate that parallel imaging can be used without detrimental effect on brain tissue segmentation and volumetric measurement and should be considered for both clinical and research studies where longitudinal measurements of brain tissue volumes are of interest.

  12. A new laboratory-scale experimental facility for detailed aerothermal characterizations of volumetric absorbers

    Science.gov (United States)

    Gomez-Garcia, Fabrisio; Santiago, Sergio; Luque, Salvador; Romero, Manuel; Gonzalez-Aguilar, Jose

    2016-05-01

    This paper describes a new modular laboratory-scale experimental facility that was designed to conduct detailed aerothermal characterizations of volumetric absorbers for use in concentrating solar power plants. Absorbers are generally considered to be the element with the highest potential for efficiency gains in solar thermal energy systems. The configuration of volumetric absorbers enables concentrated solar radiation to penetrate deep into their solid structure, where it is progressively absorbed, prior to being transferred by convection to a working fluid flowing through the structure. Current design trends towards higher absorber outlet temperatures have led to the use of complex intricate geometries in novel ceramic and metallic elements to maximize the temperature deep inside the structure (thus reducing thermal emission losses at the front surface and increasing efficiency). Although numerical models simulate the conjugate heat transfer mechanisms along volumetric absorbers, they lack, in many cases, the accuracy that is required for precise aerothermal validations. The present work aims to aid this objective by the design, development, commissioning and operation of a new experimental facility which consists of a 7 kWe (1.2 kWth) high flux solar simulator, a radiation homogenizer, inlet and outlet collector modules and a working section that can accommodate volumetric absorbers up to 80 mm × 80 mm in cross-sectional area. Experimental measurements conducted in the facility include absorber solid temperature distributions along its depth, inlet and outlet air temperatures, air mass flow rate and pressure drop, incident radiative heat flux, and overall thermal efficiency. In addition, two windows allow for the direct visualization of the front and rear absorber surfaces, thus enabling full-coverage surface temperature measurements by thermal imaging cameras. This paper presents the results from the aerothermal characterization of a siliconized silicon
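
    The overall thermal efficiency measurement listed above is, in essence, an energy balance. A minimal sketch (the flow, temperature, and flux values are invented for illustration):

```python
# Minimal sketch (invented numbers): overall thermal efficiency of a volumetric
# absorber as the ratio of enthalpy gained by the air to incident radiative power.
CP_AIR = 1006.0            # J kg^-1 K^-1, assumed constant

def thermal_efficiency(m_dot_kg_s, t_in_c, t_out_c, q_incident_w):
    q_gained = m_dot_kg_s * CP_AIR * (t_out_c - t_in_c)
    return q_gained / q_incident_w

# e.g. 2 g/s of air heated from 25 C to 450 C under 1.2 kWth incident flux.
print(f"{thermal_efficiency(0.002, 25.0, 450.0, 1200.0):.2f}")
```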

  13. THREE-DIMENSIONAL OBJECT MODELING FROM TWO-DIMENSIONAL SYNTHETIC IMAGES WITH A VOLUMETRIC APPROACH

    Directory of Open Access Journals (Sweden)

    Rudy Adipranata

    2005-01-01

    Full Text Available In this paper, we implement 3D object modeling from 2D input images. Modeling is performed using a volumetric reconstruction approach: the 3D space is tessellated into discrete volumes called voxels. We use the voxel coloring method to reconstruct a 3D object from synthetic input images; voxel coloring yields photorealistic results and has the advantage of solving the occlusion problems that occur in many 3D reconstruction cases. Photorealistic 3D object reconstruction is a challenging problem in computer graphics and remains an active research area. Many applications can make use of the reconstruction results, including virtual reality, augmented reality, 3D games, and other 3D applications. Voxel coloring treats the reconstruction problem as a color reconstruction problem rather than a shape reconstruction problem. The method works by discretizing the scene space into voxels, then traversing and coloring those voxels in a special order. The result is a photorealistic 3D object.
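
    The photo-consistency test at the heart of voxel coloring is compact enough to sketch. The following is a minimal illustration, not the paper's implementation: it assumes calibrated 3x4 camera projection matrices and omits the occlusion-aware traversal order (visiting voxels in layers of increasing distance from the cameras) that the full method depends on.

```python
import numpy as np

def photo_consistent(voxel_xyz, images, cameras, threshold=0.1):
    """Test whether a voxel projects to similar colors in all views.

    images  : list of HxWx3 float arrays with values in [0, 1]
    cameras : list of 3x4 projection matrices (calibration assumed known)
    """
    samples = []
    for img, P in zip(images, cameras):
        u, v, w = P @ np.append(voxel_xyz, 1.0)  # project to the image plane
        if w <= 0:
            continue                             # voxel is behind this camera
        x, y = int(round(u / w)), int(round(v / w))
        if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
            samples.append(img[y, x])
    if len(samples) < 2:
        return False, None                       # not enough views to decide
    samples = np.array(samples)
    # The voxel is kept (and colored) only if the views agree on its color.
    if samples.std(axis=0).max() < threshold:
        return True, samples.mean(axis=0)
    return False, None
```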

  14. QUANTITATIVE ESTIMATION OF VOLUMETRIC ICE CONTENT IN FROZEN GROUND BY DIPOLE ELECTROMAGNETIC PROFILING METHOD

    Directory of Open Access Journals (Sweden)

    L. G. Neradovskiy

    2018-01-01

    Full Text Available Volumetric estimation of the ice content in frozen soils is one of the main problems in engineering geocryology and permafrost geophysics. A new way to use the established method of dipole electromagnetic profiling for the quantitative estimation of the volumetric ice content in frozen soils is discussed. Investigations of the foundation of the railroad in Yakutia (i.e. in the permafrost zone) were used as an example of this new approach. Unlike the conventional way, in which the permafrost is investigated through its resistivity and the construction of geo-electrical cross-sections, the new approach studies the dynamics of the attenuation process in the layer of the annual heat cycle in the field of a high-frequency vertical magnetic dipole. This task is simplified if, rather than all the characteristics of the polarization ellipse, only one is measured: the vertical component of the dipole field, which is also the most easily measured. The collected measurement data were used to analyze the computational errors of the average values of the volumetric ice content derived from the amplitude attenuation of the vertical component of the dipole field. Note that the volumetric ice content is very important for construction. It is shown that the relative error of computing this characteristic of a frozen soil usually does not exceed 20% if the work is performed by the above procedure using the key-site methodology. This level of accuracy meets the requirements of design-and-survey work for quick, inexpensive, and environmentally friendly zoning of built-up remote and sparsely populated territories of the Russian permafrost zone according to the degree of ice content in the frozen foundations of engineering constructions.

  15. The Effect of Elevation on Volumetric Measurements of the Lower Extremity

    Directory of Open Access Journals (Sweden)

    Cordial M. Gillette

    2017-07-01

    Full Text Available Background: The empirical evidence for the use of RICE (rest, ice, compression, elevation) has been questioned regarding its clinical effectiveness. The component of RICE with the least literature regarding its effectiveness is elevation. Objective: The objective of this study was to determine if various positions of elevation result in volumetric changes of the lower extremity. Methodology: A randomized crossover design was used to determine the effects of the four following conditions on volumetric changes of the lower extremity: seated at the end of a table (seated), lying supine (flat), lying supine with the foot elevated 12 inches off the table (elevated), and lying prone with the knees bent to 90 degrees (prone). The conditions were randomized using a Latin square. Each subject completed all conditions with at least 24 hours between sessions. Pre and post volumetric measurements were taken using a volumetric tank. The subject was placed in one of the four described testing positions for 30 minutes. The change in weight of the displaced water was the main outcome measure. The data were analyzed using an ANOVA of the pre and post measurements with a Bonferroni post hoc analysis. The level of significance was set at P<.05 for all analyses. Results: The only statistically significant difference was between the gravity dependent position (seated) and all other positions (P<.001). There was no significant difference between lying supine (flat), on a bolster (elevated), or prone with the knees flexed to 90 degrees (prone). Conclusions: From these results, the extent of elevation does not appear to have an effect on changes in lower leg volume. Elevation above the heart did not significantly improve reduction in limb volume, but removing the limb from a gravity dependent position might be beneficial.

  16. Volumetric fat-water separated T2-weighted MRI

    International Nuclear Information System (INIS)

    Vasanawala, Shreyas S.; Sonik, Arvind; Madhuranthakam, Ananth J.; Venkatesan, Ramesh; Lai, Peng; Brau, Anja C.S.

    2011-01-01

    Pediatric body MRI exams often cover multiple body parts, making the development of broadly applicable protocols and obtaining uniform fat suppression a challenge. Volumetric T2 imaging with Dixon-type fat-water separation might address this challenge, but it is a lengthy process. We develop and evaluate a faster two-echo approach to volumetric T2 imaging with fat-water separation. A volumetric spin-echo sequence was modified to include a second shifted echo so that two image sets are acquired. A region-growing reconstruction approach was developed to decompose separate water and fat images. Twenty-six children were recruited with IRB approval and informed consent. Fat-suppression quality was graded by two pediatric radiologists and compared against conventional fat-suppressed fast spin-echo (FSE) T2-weighted images. Additionally, the value of in- and opposed-phase images was evaluated. Fat suppression on volumetric images was of high quality in 96% of cases (95% confidence interval of 80-100%), and the volumetric images were preferred over or considered equivalent to conventional two-dimensional fat-suppressed FSE T2 imaging in 96% of cases (95% confidence interval of 78-100%). In- and opposed-phase images had definite value in 12% of cases. Volumetric fat-water separated T2-weighted MRI is feasible and is likely to yield improved fat suppression over conventional fat-suppressed T2-weighted imaging. (orig.)
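
    The two-echo arithmetic behind the decomposition is simple: with the shifted echo chosen so that water and fat signals are in phase in one image (IP = W + F) and opposed in the other (OP = W - F), the two species separate by a half-sum and half-difference. The sketch below shows only this idealized step; the region-growing reconstruction described above exists precisely because field inhomogeneity breaks the ideal phase assumption in practice.

```python
import numpy as np

def two_point_dixon(in_phase, opposed_phase):
    """Naive two-point Dixon water-fat separation.

    Assumes ideal phasing: IP = W + F and OP = W - F per voxel, i.e.
    perfect B0 homogeneity (the hard part that a practical reconstruction,
    such as the region-growing approach above, must handle).
    """
    water = 0.5 * (in_phase + opposed_phase)
    fat = 0.5 * (in_phase - opposed_phase)
    return np.abs(water), np.abs(fat)
```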

  17. Blockwise conjugate gradient methods for image reconstruction in volumetric CT.

    Science.gov (United States)

    Qiu, W; Titley-Peloquin, D; Soleimani, M

    2012-11-01

    Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. Iterative methods instead discretize the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least squares problems min_x ||b - Ax||_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but in tomography problems (e.g. CBCT reconstruction) they have not been widely used. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction without exceeding the capacity of commodity hardware. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
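
    The blockwise storage scheme maps naturally onto an implicit linear operator whose forward and transpose products stream over the stored row-blocks, which is exactly what LSQR needs. A minimal SciPy sketch follows; the block shapes, densities and damping value are illustrative, not the paper's.

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import LinearOperator, lsqr

def blockwise_operator(row_blocks):
    """Wrap a list of sparse row-blocks A = [A_1; A_2; ...] so that LSQR
    can use A without it ever being assembled in full."""
    m = sum(B.shape[0] for B in row_blocks)
    n = row_blocks[0].shape[1]

    def matvec(x):
        # y = A @ x, computed one row-block at a time
        return np.concatenate([B @ x for B in row_blocks])

    def rmatvec(y):
        # z = A.T @ y, accumulated block by block
        z = np.zeros(n)
        offset = 0
        for B in row_blocks:
            z += B.T @ y[offset:offset + B.shape[0]]
            offset += B.shape[0]
        return z

    return LinearOperator((m, n), matvec=matvec, rmatvec=rmatvec)

# Toy example: 4 row-blocks standing in for projection-matrix partitions.
blocks = [sprandom(500, 200, density=0.01, random_state=i) for i in range(4)]
A = blockwise_operator(blocks)
b = A @ np.ones(200)
x = lsqr(A, b, damp=0.01)[0]   # damp > 0 adds Tikhonov regularization
```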

  18. Towards interoperable and reproducible QSAR analyses: Exchange of datasets

    Directory of Open Access Journals (Sweden)

    Spjuth Ola

    2010-06-01

    Full Text Available Abstract Background QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises the addition of chemical structures as well as the selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constraining collaborations and re-use of data. Results We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies the setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates the addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Conclusions Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusion regarding descriptors by defining them crisply. This makes it easy to join

  19. Full 40 km crustal reflection seismic datasets in several Indonesian basins

    Science.gov (United States)

    Dinkelman, M. G.; Granath, J. W.; Christ, J. M.; Emmet, P. A.; Bird, D. E.

    2010-12-01

    Long offset, deep penetration regional 2D seismic data sets have been acquired since 2002 by GX Technology in a number of regions worldwide (www.iongeo.com/Data_Libraries/Spans/). Typical surveys consist of 10+ lines located to image specific critical aspects of basin structure. Early surveys were processed to 20 km, but more recent ones have extended to 40-45 km from 16 sec records. Pre-stack time migration is followed by pre-stack depth migration using gravity and, in some cases, magnetic modeling to constrain the velocity structure. We illustrate several cases in the SE Asian and Australasian area. In NatunaSPAN™ two generations of inversion can be distinguished, one involving Paleogene faults with Neogene inversion and one involving strike-slip-related uplift in the West Natuna Basin. Crustal structure in the very deep Neogene East Natuna Basin has also been imaged. The JavaSPAN™ program traced Paleogene sediments onto oceanic crust of the Flores Sea, thus equating back-arc spreading there to the widespread Eocene extension. It also imaged basement in the Makassar Strait beneath as much as 6 km of Cenozoic sedimentary rocks that accumulated in Eocene rift basins (the North and South Makassar basins) on the edge of Sundaland, the core of SE Asia. The basement is seismically layered: a noisy upper crust overlies a prominent 10 km thick transparent zone, the base of which marks another change to slightly noisier reflectivity. Eocene normal faults responsible for the opening of the extensional basins root in the top of the transparent layer, which may be the Moho or a brittle-ductile transition within the extended continental crust. Of particular significance is the first image of thick Precambrian basins comprising the bulk of continental crust under the Arafura Sea in the ArafuraSPAN™ program. Four lines some 1200 km long located between Australia and New Guinea on the Arafura platform image a thin Phanerozoic section overlying a striking Precambrian basement composed of sedimentary and burial-metamorphosed sedimentary rock that we divide into two packages on the basis of seismic character. The upper is 8-15 km of undeformed late Precambrian sediments, the top of which ties to Eocambrian rocks in wells in offshore New Guinea. This package appears to correlate to the Wessel Group in northern Australia. The lower package is composed of 10-15 km of strongly bedded, presumably burial-metamorphosed rocks that make up the bulk of the lower crust. These may equate to any of a number of northern Australian Mesoproterozoic basins. This lower package offlaps ‘pods’ of seismically transparent basement (?Paleoproterozoic or Archean) that make up at most the lowermost 15 km of the 40 km PSDM line. Both Precambrian packages appear to be craton-margin sedimentary wedges, the younger overlapping the older. The SE extent of the lowermost package is deformed in a thrust system which may mark the event that detached it from its original underlying oceanic or transitional crust during cratonization. The SPAN programs provide important new data sets that clarify and in some cases solve outstanding problems in basin architecture and tectonic evolution.

  20. PERFORMANCE COMPARISON FOR INTRUSION DETECTION SYSTEM USING NEURAL NETWORK WITH KDD DATASET

    Directory of Open Access Journals (Sweden)

    S. Devaraju

    2014-04-01

    Full Text Available Intrusion detection systems face the challenging task of determining whether a user of an organizational information system or IT infrastructure is a normal user or an attacker. An intrusion detection system is an effective method to deal with this kind of problem in networks. Different classifiers are used to detect the different kinds of attacks in networks. In this paper, the performance of intrusion detection is compared across various neural network classifiers. In the proposed research the four types of classifiers used are the Feed Forward Neural Network (FFNN), Generalized Regression Neural Network (GRNN), Probabilistic Neural Network (PNN) and Radial Basis Neural Network (RBNN). The performance on the full-featured KDD Cup 1999 dataset is compared with that on the reduced-feature KDD Cup 1999 dataset. MATLAB is used to train and test on the dataset, and the efficiency and false alarm rate are measured. It is shown that the reduced dataset performs better than the full-featured dataset.

  1. Developing predictive imaging biomarkers using whole-brain classifiers: Application to the ABIDE I dataset

    Directory of Open Access Journals (Sweden)

    Swati Rane

    2017-03-01

    Full Text Available We designed a modular machine learning program that uses functional magnetic resonance imaging (fMRI) data in order to distinguish individuals with autism spectrum disorders from neurodevelopmentally normal individuals. Data were selected from the Autism Brain Imaging Data Exchange (ABIDE I) Preprocessed Dataset.

  2. A Dataset for Visual Navigation with Neuromorphic Methods

    Directory of Open Access Journals (Sweden)

    Francisco eBarranco

    2016-02-01

    Full Text Available Standardized benchmarks in Computer Vision have greatly contributed to the advance of approaches to many problems in the field. If we want to enhance the visibility of event-driven vision and increase its impact, we will need benchmarks that allow comparison among different neuromorphic methods as well as comparison to conventional Computer Vision approaches. We present datasets to evaluate the accuracy of frame-free and frame-based approaches for tasks of visual navigation. Similar to conventional Computer Vision datasets, we provide synthetic and real scenes, with the synthetic data created with graphics packages and the real data recorded using a mobile robotic platform carrying a dynamic and active-pixel vision sensor (DAVIS) and an RGB+Depth sensor. For both datasets the cameras move with a rigid motion in a static scene, and the data includes the images, events, optic flow, 3D camera motion, and the depth of the scene, along with calibration procedures. Finally, we also provide simulated event data generated synthetically from well-known frame-based optical flow datasets.

  3. Volumetric display using a roof mirror grid array

    Science.gov (United States)

    Miyazaki, Daisuke; Hirano, Noboru; Maeda, Yuuki; Ohno, Keisuke; Maekawa, Satoshi

    2010-02-01

    A volumetric display system using a roof mirror grid array (RMGA) is proposed. The RMGA consists of a two-dimensional array of dihedral corner reflectors and forms a real image at a plane-symmetric position. A two-dimensional image formed with an RMGA is moved at high speed by a mirror scanner. Cross-sectional images of a three-dimensional object are displayed in accordance with the position of the image plane. A volumetric image can be observed as a stack of the cross-sectional images under high-speed scanning. Image formation by an RMGA is free from aberrations. Moreover, a compact optical system can be constructed because an RMGA does not have a focal length. An experimental volumetric display system using a galvanometer mirror and a digital micromirror device was constructed. The formation of a three-dimensional image consisting of 1024 × 768 × 400 voxels was confirmed with the experimental system.

  4. Dataset of mitochondrial genome variants in oncocytic tumors

    Directory of Open Access Journals (Sweden)

    Lihua Lyu

    2018-04-01

    Full Text Available This dataset presents the mitochondrial genome variants associated with oncocytic tumors. These data were obtained by Sanger sequencing of the whole mitochondrial genomes of oncocytic tumors and the adjacent normal tissues from 32 patients. The mtDNA variants were identified after comparison with the revised Cambridge Reference Sequence, excluding those defining the haplogroups of our patients. The pathogenicity prediction for the novel missense variants found in this study was performed with the Mitimpact 2 program.

  5. Gradients estimation from random points with volumetric tensor in turbulence

    Science.gov (United States)

    Watanabe, Tomoaki; Nagata, Koji

    2017-12-01

    We present a method for estimating fully resolved and coarse-grained gradients from randomly distributed points in turbulence. The method is based on a linear approximation of spatial gradients expressed with the volumetric tensor, which is a 3 × 3 matrix determined by the geometric distribution of the points. The coarse-grained gradient can be considered a low-pass-filtered gradient, whose cutoff is estimated with the eigenvalues of the volumetric tensor. The present method, the volumetric tensor approximation, is tested for velocity and passive scalar gradients in an incompressible planar jet and a mixing layer. Comparison with a finite difference approximation on a Cartesian grid shows that the volumetric tensor approximation computes the coarse-grained gradients fairly well at a moderate computational cost under various conditions of spatial distributions of points. We also show that imposing the solenoidal condition improves the accuracy of the present method for solenoidal vectors, such as the velocity vector in incompressible flows, especially when the number of points is not large. The volumetric tensor approximation with 4 points poorly estimates the gradient because of the anisotropic distribution of the points. Increasing the number of points beyond 4 significantly improves the accuracy. Although the coarse-grained gradient changes with the cutoff length, the volumetric tensor approximation yields a coarse-grained gradient whose magnitude is close to the one obtained by the finite difference. We also show that the velocity gradient estimated with the present method captures turbulence characteristics well, such as local flow topology, amplification of enstrophy and strain, and energy transfer across scales.
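
    In the linear approximation, the estimate reduces to a 3 x 3 solve: the volumetric tensor collects the geometry of the sample displacements and the right-hand side collects the field increments. A minimal sketch under that reading, without the solenoidal constraint or the cutoff-length analysis described above:

```python
import numpy as np

def gradient_from_points(x0, f0, xs, fs):
    """Least-squares gradient of a scalar field at x0 from scattered samples.

    xs : (N, 3) array of sample positions around x0
    fs : (N,)  array of field values at those positions
    The 3x3 matrix T plays the role of the volumetric tensor: it is built
    purely from the geometric distribution of the points, and its
    eigenvalues indicate the effective (low-pass) filter length.
    """
    dx = xs - x0                      # displacement vectors
    df = fs - f0                      # field increments
    T = dx.T @ dx                     # volumetric tensor (3x3)
    rhs = dx.T @ df
    return np.linalg.solve(T, rhs)    # gradient estimate

# Check against a known linear field f = a . x, whose gradient is exactly a.
rng = np.random.default_rng(0)
a = np.array([1.0, -2.0, 0.5])
pts = rng.normal(size=(8, 3))
grad = gradient_from_points(np.zeros(3), 0.0, pts, pts @ a)
assert np.allclose(grad, a)
```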

  6. Cost-effectiveness of volumetric alcohol taxation in Australia.

    Science.gov (United States)

    Byrnes, Joshua M; Cobiac, Linda J; Doran, Christopher M; Vos, Theo; Shakeshaft, Anthony P

    2010-04-19

    To estimate the potential health benefits and cost savings of an alcohol tax rate that applies equally to all alcoholic beverages based on their alcohol content (volumetric tax), and to compare the cost savings with the cost of implementation. Mathematical modelling of three scenarios of volumetric alcohol taxation for the population of Australia: (i) no change in deadweight loss, (ii) no change in tax revenue, and (iii) all alcoholic beverages taxed at the same rate as spirits. Estimated changes in alcohol consumption, tax revenue and health benefit. The estimated cost of changing to a volumetric tax rate is $18 million. A volumetric tax that is deadweight loss-neutral would increase the cost of beer and wine and reduce the cost of spirits, resulting in an estimated annual increase in taxation revenue of $492 million and a 2.77% reduction in annual consumption of pure alcohol. The estimated net health gain would be 21 000 disability-adjusted life-years (DALYs), with potential cost offsets of $110 million per annum. A tax revenue-neutral scenario would result in a 0.05% decrease in consumption, and a tax on all alcohol at the spirits rate would reduce consumption by 23.85% and increase revenue by $3094 million [corrected]. All volumetric tax scenarios would provide greater health benefits and cost savings to the health sector than the existing taxation system, based on current understandings of alcohol-related health effects. An equalized volumetric tax that would reduce beer and wine consumption while increasing the consumption of spirits would need to be approached with caution. Further research is required to examine whether alcohol-related health effects vary by type of alcoholic beverage independent of the amount of alcohol consumed, to provide a strong evidence platform for alcohol taxation policies.

  7. Comparison of Shallow Survey 2012 Multibeam Datasets

    Science.gov (United States)

    Ramirez, T. M.

    2012-12-01

    The purpose of the Shallow Survey common dataset is a comparison of the different technologies utilized for data acquisition in the shallow survey marine environment. The common dataset consists of a series of surveys conducted over a common area of seabed using a variety of systems. It provides equipment manufacturers the opportunity to showcase their latest systems while giving hydrographic researchers and scientists a chance to test their latest algorithms on the dataset so that rigorous comparisons can be made. Five companies collected data for the Common Dataset in the Wellington Harbor area in New Zealand between May 2010 and May 2011, including Kongsberg, Reson, R2Sonic, GeoAcoustics, and Applied Acoustics. The Wellington harbor and surrounding coastal area was selected since it has a number of well-defined features, including the HMNZS South Seas and HMNZS Wellington wrecks, an armored seawall constructed of Tetrapods and Akmons, aquifers, wharves and marinas. The seabed inside the harbor basin is largely fine-grained sediment, with gravel and reefs around the coast. The area outside the harbor on the southern coast is an active environment, with moving sand and exposed reefs. A marine reserve is also in this area. For consistency between datasets, the coastal research vessel R/V Ikatere and crew were used for all surveys conducted for the common dataset. Using Triton's Perspective processing software, the multibeam datasets collected for the Shallow Survey were processed for detailed analysis. Datasets from each sonar manufacturer were processed using the CUBE algorithm developed by the Center for Coastal and Ocean Mapping/Joint Hydrographic Center (CCOM/JHC). Each dataset was gridded at 0.5 and 1.0 meter resolutions for cross comparison and compliance with International Hydrographic Organization (IHO) requirements. Detailed comparisons were made of equipment specifications (transmit frequency, number of beams, beam width), data density, total uncertainty, and

  8. Ultrafast treatment plan optimization for volumetric modulated arc therapy (VMAT).

    Science.gov (United States)

    Men, Chunhua; Romeijn, H Edwin; Jia, Xun; Jiang, Steve B

    2010-11-01

    To develop a novel aperture-based algorithm for volumetric modulated arc therapy (VMAT) treatment plan optimization with high quality and high efficiency. The VMAT optimization problem is formulated as a large-scale convex programming problem solved by a column generation approach. The authors consider a cost function consisting of two terms, the first enforcing a desired dose distribution and the second guaranteeing a smooth dose rate variation between successive gantry angles. A gantry rotation is discretized into 180 beam angles, and for each beam angle only one MLC aperture is allowed. The apertures are generated one by one in a sequential way. At each iteration of the column generation method, a deliverable MLC aperture is generated for one of the unoccupied beam angles by solving a subproblem with consideration of MLC mechanical constraints. A subsequent master problem is then solved to determine the dose rate at all currently generated apertures by minimizing the cost function. When all 180 beam angles are occupied, the optimization completes, yielding a set of deliverable apertures and associated dose rates that produce a high quality plan. The algorithm was preliminarily tested on five prostate and five head-and-neck clinical cases, each with one full gantry rotation without any couch/collimator rotations. High quality VMAT plans have been generated for all ten cases with extremely high efficiency. It takes only 5-8 min on CPU (MATLAB code on an Intel Xeon 2.27 GHz CPU) and 18-31 s on GPU (CUDA code on an NVIDIA Tesla C1060 GPU card) to generate such plans. The authors have developed an aperture-based VMAT optimization algorithm which can generate clinically deliverable high quality treatment plans at very high efficiency.
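
    The overall structure of such a column generation loop can be sketched in a few lines. The toy version below prices one aperture per unoccupied angle from the gradient of a least-squares dose objective and re-solves a nonnegative least-squares master problem for the dose rates; it deliberately ignores MLC leaf connectivity, the dose-rate smoothness term, and the other constraints that make the clinical problem hard.

```python
import numpy as np
from scipy.optimize import nnls

def column_generation_vmat(D, d_target, n_angles, beamlets_per_angle):
    """Toy column-generation loop in the spirit of aperture-based VMAT
    optimization: one aperture per gantry angle, priced from the gradient
    of a least-squares dose objective, with aperture weights (dose rates)
    re-optimized in a nonnegative least-squares master problem."""
    n_voxels, n_beamlets = D.shape
    columns, open_angles = [], list(range(n_angles))
    d = np.zeros(n_voxels)
    while open_angles:
        grad = D.T @ (d - d_target)        # pricing: steepest-descent direction
        best = None
        for ang in open_angles:
            sl = slice(ang * beamlets_per_angle, (ang + 1) * beamlets_per_angle)
            mask = np.zeros(n_beamlets)
            mask[sl] = grad[sl] < 0         # open only beamlets that reduce cost
            score = grad @ mask
            if best is None or score < best[0]:
                best = (score, ang, mask)
        _, ang, mask = best
        open_angles.remove(ang)
        columns.append(D @ mask)            # dose of the new unit-weight aperture
        A = np.column_stack(columns)
        w, _ = nnls(A, d_target)            # master problem: nonnegative weights
        d = A @ w
    return np.column_stack(columns), w
```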

  9. Comparison of global 3-D aviation emissions datasets

    Directory of Open Access Journals (Sweden)

    S. C. Olsen

    2013-01-01

    Full Text Available Aviation emissions are unique among transportation emissions, e.g., from road transportation and shipping, in that they occur at higher altitudes as well as at the surface. Aviation emissions of carbon dioxide, soot, and water vapor have direct radiative impacts on the Earth's climate system, while emissions of nitrogen oxides (NOx), sulfur oxides, carbon monoxide (CO), and hydrocarbons (HC) impact air quality and climate through their effects on ozone, methane, and clouds. The most accurate estimates of the impact of aviation on air quality and climate utilize three-dimensional chemistry-climate models and gridded four-dimensional (space and time) aviation emissions datasets. We compare five available aviation emissions datasets currently and historically used to evaluate the impact of aviation on climate and air quality: NASA-Boeing 1992, NASA-Boeing 1999, QUANTIFY 2000, Aero2k 2002, and AEDT 2006, as well as aviation fuel usage estimates from the International Energy Agency. Roughly 90% of all aviation emissions are in the Northern Hemisphere and nearly 60% of all fuelburn and NOx emissions occur at cruise altitudes in the Northern Hemisphere. While these datasets were created by independent methods and are thus not strictly suitable for analyzing trends, they suggest that commercial aviation fuelburn and NOx emissions increased over the last two decades while HC emissions likely decreased and CO emissions did not change significantly. The bottom-up estimates compared here are consistently lower than International Energy Agency fuelburn statistics, although the gap is significantly smaller in the more recent datasets. Overall the emissions distributions are quite similar for fuelburn and NOx, with regional peaks over the populated land masses of North America, Europe, and East Asia. For CO and HC there are relatively larger differences. There are, however, some distinct differences in the altitude distribution

  10. Geoseq: a tool for dissecting deep-sequencing datasets

    Directory of Open Access Journals (Sweden)

    Homann Robert

    2010-10-01

    Full Text Available Abstract Background Datasets generated on deep-sequencing platforms have been deposited in various public repositories such as the Gene Expression Omnibus (GEO), the Sequence Read Archive (SRA) hosted by the NCBI, or the DNA Data Bank of Japan (DDBJ). Despite being rich data sources, they have not been used much due to the difficulty in locating and analyzing datasets of interest. Results Geoseq http://geoseq.mssm.edu provides a new method of analyzing short reads from deep sequencing experiments. Instead of mapping the reads to reference genomes or sequences, Geoseq maps a reference sequence against the sequencing data. It is web-based, and holds pre-computed data from public libraries. The analysis reduces the input sequence to tiles and measures the coverage of each tile in a sequence library through the use of suffix arrays. The user can upload custom target sequences or use gene/miRNA names for the search and get back results as plots and spreadsheet files. Geoseq organizes the public sequencing data using a controlled vocabulary, allowing identification of relevant libraries by organism, tissue and type of experiment. Conclusions Analysis of small sets of sequences against deep-sequencing datasets, as well as identification of public datasets of interest, is simplified by Geoseq. We applied Geoseq to (a) identify differential isoform expression in mRNA-seq datasets, (b) identify miRNAs (microRNAs) in libraries and identify mature and star sequences in miRNAs, and (c) identify potentially mis-annotated miRNAs. The ease of using Geoseq for these analyses suggests its utility and uniqueness as an analysis tool.
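
    The tile-and-coverage idea is easy to state in code. The sketch below substitutes a k-mer hash table for Geoseq's suffix arrays; it yields the same exact-match counts at a higher memory cost, and the sequences in the usage line are invented for illustration.

```python
from collections import Counter

def tile_coverage(reference, reads, k=20):
    """Measure per-tile coverage of a reference sequence in a read library.

    Geoseq proper matches tiles against the library via suffix arrays;
    this sketch uses a k-mer hash table instead, which gives the same
    exact-match counts.
    """
    kmer_counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer_counts[read[i:i + k]] += 1
    tiles = [reference[i:i + k] for i in range(len(reference) - k + 1)]
    return [kmer_counts[t] for t in tiles]   # coverage profile along reference

# Toy usage: a miRNA-sized query against a tiny invented "library".
profile = tile_coverage("ACGTACGTACGTACGTACGTACGT",
                        ["ACGTACGTACGTACGTACGT"] * 3, k=20)
```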

  11. A new dataset validation system for the Planetary Science Archive

    Science.gov (United States)

    Manaud, N.; Zender, J.; Heather, D.; Martinez, S.

    2007-08-01

    The Planetary Science Archive is the official archive for the Mars Express mission. It received its first data by the end of 2004. These data are delivered by the PI teams to the PSA team as datasets, which are formatted in conformance with the Planetary Data System (PDS) standard. The PI teams are responsible for analyzing and calibrating the instrument data as well as for the production of reduced and calibrated data. They are also responsible for the scientific validation of these data. ESA is responsible for the long-term data archiving and distribution to the scientific community and must ensure, in this regard, that all archived products meet quality standards. To do so, an archive peer review is used to control the quality of the Mars Express science data archiving process. However, a full validation of its content is missing. An independent review board recently recommended that the completeness of the archive as well as the consistency of the delivered data should be validated following well-defined procedures. A new validation software tool is being developed to complete the overall data quality control system functionality. This new tool aims to improve the quality of data and services provided to the scientific community through the PSA, and shall allow anomalies in datasets to be tracked and their completeness to be controlled. It shall ensure that the PSA end-users: (1) can rely on the results of their queries, (2) will get data products that are suitable for scientific analysis, (3) can find all science data acquired during a mission. We defined dataset validation as the verification and assessment process to check the dataset content against pre-defined top-level criteria, which represent the general characteristics of good quality datasets. The dataset content that is checked includes the data and all types of information that are essential in the process of deriving scientific results and those interfacing with the PSA database. The validation software tool is a multi-mission tool that

  12. Data Mining for Imbalanced Datasets: An Overview

    Science.gov (United States)

    Chawla, Nitesh V.

    A dataset is imbalanced if the classification categories are not approximately equally represented. Recent years brought increased interest in applying machine learning techniques to difficult "real-world" problems, many of which are characterized by imbalanced data. Additionally, the distribution of the testing data may differ from that of the training data, and the true misclassification costs may be unknown at learning time. Predictive accuracy, a popular choice for evaluating performance of a classifier, might not be appropriate when the data is imbalanced and/or the costs of different errors vary markedly. In this chapter, we discuss some of the sampling techniques used for balancing the datasets, and the performance measures more appropriate for mining imbalanced datasets.
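
    One sampling technique in the family discussed here, synthetic minority oversampling, interpolates new minority samples between existing ones and their nearest minority-class neighbours. The sketch below is an illustrative reduction of that idea (continuous features only, no borderline or cost-sensitive variants):

```python
import numpy as np

def smote(X_minority, n_new, k=5, rng=None):
    """Generate synthetic minority samples by interpolating between each
    sample and one of its k nearest minority-class neighbours."""
    rng = rng or np.random.default_rng()
    n = len(X_minority)
    # Pairwise distances within the minority class; exclude self-matches.
    d = np.linalg.norm(X_minority[:, None, :] - X_minority[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)
        j = neighbours[i, rng.integers(min(k, n - 1))]
        gap = rng.random()   # random point on the segment between the pair
        synthetic.append(X_minority[i] + gap * (X_minority[j] - X_minority[i]))
    return np.array(synthetic)
```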

  13. Harvard Aging Brain Study: Dataset and accessibility.

    Science.gov (United States)

    Dagley, Alexander; LaPoint, Molly; Huijbers, Willem; Hedden, Trey; McLaren, Donald G; Chatwal, Jasmeer P; Papp, Kathryn V; Amariglio, Rebecca E; Blacker, Deborah; Rentz, Dorene M; Johnson, Keith A; Sperling, Reisa A; Schultz, Aaron P

    2017-01-01

    The Harvard Aging Brain Study is sharing its data with the global research community. The longitudinal dataset consists of a 284-subject cohort with the following modalities acquired: demographics, clinical assessment, comprehensive neuropsychological testing, clinical biomarkers, and neuroimaging. To promote more extensive analyses, the imaging data were designed to be compatible with other publicly available datasets. A cloud-based system enables access for interested researchers, with blinded data available contingent upon completion of a data usage agreement and administrative approval. Data collection is ongoing and currently in its fifth year. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. A New Dataset Size Reduction Approach for PCA-Based Classification in OCR Application

    Directory of Open Access Journals (Sweden)

    Mohammad Amin Shayegan

    2014-01-01

    Full Text Available A major problem of pattern recognition systems is the large volume of training datasets, which include duplicate and similar training samples. In order to overcome this problem, some dataset size reduction and dimensionality reduction techniques have been introduced. The algorithms presently used for dataset size reduction usually remove samples near the centers of classes or support vector samples between different classes. However, the samples near a class center include valuable information about the class characteristics, and the support vectors are important for evaluating system efficiency. This paper reports on the use of the Modified Frequency Diagram technique for dataset size reduction. In this newly proposed technique, a training dataset is rearranged and then sieved. The sieved training dataset, along with an automatic feature extraction/selection operation using Principal Component Analysis, is used in an OCR application. The experimental results obtained when using the proposed system on one of the biggest handwritten Farsi/Arabic numeral standard OCR datasets, Hoda, show about 97% accuracy in the recognition rate. The recognition speed increased by 2.28 times, while the accuracy decreased only by 0.7%, when a sieved version of the dataset, only half the size of the initial training dataset, was used.

  15. VOLUMETRIC LEAK DETECTION IN LARGE UNDERGROUND STORAGE TANKS - VOLUME I

    Science.gov (United States)

    A set of experiments was conducted to determine whether volumetric leak detection systems presently used to test underground storage tanks (USTs) up to 38,000 L (10,000 gal) in capacity could meet EPA's regulatory standards for tank tightness and automatic tank gauging systems whe...

  16. Tandem Gravimetric and Volumetric Apparatus for Methane Sorption Measurements

    Science.gov (United States)

    Burress, Jacob; Bethea, Donald

    Concerns about global climate change have driven the search for alternative fuels. Natural gas (NG, methane) is a cleaner fuel than gasoline and abundantly available due to hydraulic fracturing. One hurdle to the adoption of NG vehicles is the bulky cylindrical storage vessels needed to store the NG at high pressures (3600 psi, 250 bar). The adsorption of methane in microporous materials can store large amounts of methane at pressures low enough to allow conformable, "flat" pressure vessels. The measurement of the amount of gas stored in sorbent materials is typically done by measuring pressure differences (volumetric, manometric) or masses (gravimetric). Volumetric instruments of the Sievert type have uncertainties that compound with each additional measurement. Therefore, the highest-pressure measurement has the largest uncertainty. Gravimetric instruments do not have that drawback, but can have issues with buoyancy corrections. An instrument will be presented with which methane adsorption measurements can be performed using both volumetric and gravimetric methods in tandem. The gravimetric method presented has no buoyancy corrections and low uncertainty. Therefore, the gravimetric measurements can be performed throughout an entire isotherm or just at the extrema to verify the results from the volumetric measurements. Results from methane sorption measurements on an activated carbon (MSC-30) and a metal-organic framework (Cu-BTC, HKUST-1, MOF-199) will be shown. New recommendations for calculations of gas uptake and uncertainty measurements will be discussed.

  17. 100KE/KW fuel storage basin surface volumetric factors

    International Nuclear Information System (INIS)

    Conn, K.R.

    1996-01-01

    This Supporting Document presents calculations of surface volumetric factors for the 100KE and 100KW Fuel Storage Basins. These factors relate water level changes to losses or additions of basin water, or to the equivalent water displacement volumes of objects added to or removed from the basin.

  18. Designing remote web-based mechanical-volumetric flow meter ...

    African Journals Online (AJOL)

    Today, in the water and wastewater industry, many mechanical-volumetric flow meters are used to meter produced water, and because these flow meters are deployed over a wide geographical range, their data are read physically, by in-person visits. All this makes reading the data costly and, in some cases, due to ...

  19. Augmented Reality Prototype for Visualizing Large Sensors’ Datasets

    Directory of Open Access Journals (Sweden)

    Folorunso Olufemi A.

    2011-04-01

    Full Text Available This paper addresses the development of an augmented reality (AR) based scientific visualization system prototype that supports the identification, localisation, and 3D visualisation of oil leakage sensor datasets. Sensors generate significant amounts of multivariate data during normal and leak situations, which makes data exploration and visualisation daunting tasks. Therefore a model to manage such data and enhance the computational support needed for effective exploration is developed in this paper. A challenge of this approach is to reduce data inefficiency. This paper presents a model for computing the information gain for each data attribute and determining a lead attribute. The computed lead attribute is then used for the development of an AR-based scientific visualization interface which automatically identifies, localises and visualizes all necessary data relevant to a particular selected region of interest (ROI) on the network. The necessary architectural system support and the interface requirements for such visualizations are also presented.
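
    The information gain computation such a model relies on is standard: the entropy of the situation labels minus the label entropy remaining after conditioning on an attribute. A minimal sketch for discrete attributes follows; the attribute and label arrays are hypothetical stand-ins for the sensor datasets.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def information_gain(attribute_values, labels):
    """Information gain of one discrete attribute with respect to the
    situation label (e.g. 'normal' vs 'leak')."""
    gain = entropy(labels)
    for v in np.unique(attribute_values):
        mask = attribute_values == v
        gain -= mask.mean() * entropy(labels[mask])
    return gain

# The attribute with the highest gain would be taken as the lead attribute:
# lead = max(attributes, key=lambda a: information_gain(data[a], labels))
```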

  20. An integrated dataset for in silico drug discovery

    Directory of Open Access Journals (Sweden)

    Cockell Simon J

    2010-12-01

    Full Text Available Drug development is expensive and prone to failure. It is potentially much less risky and expensive to reuse a drug developed for one condition for treating a second disease than it is to develop an entirely new compound. Systematic approaches to drug repositioning are needed to increase throughput and find candidates more reliably. Here we address this need with an integrated systems biology dataset, developed using the Ondex data integration platform, for the in silico discovery of new drug repositioning candidates. We demonstrate that the information in this dataset allows known repositioning examples to be discovered. We also propose a means of automating the search for new treatment indications of existing compounds.

  1. Comparative Study of the Volumetric Methods Calculation Using GNSS Measurements

    Science.gov (United States)

    Şmuleac, Adrian; Nemeş, Iacob; Alina Creţan, Ioana; Sorina Nemeş, Nicoleta; Şmuleac, Laura

    2017-10-01

    This paper aims to perform volumetric calculations for different mineral aggregates using different methods of analysis, and to compare the results. For these comparative studies, two licensed software packages were chosen, TopoLT 11.2 and Surfer 13. TopoLT is a program dedicated to the preparation of topographic and cadastral plans: 3D terrain models, level curves, calculation of cut and fill volumes, and georeferencing of images. Surfer 13, produced by Golden Software in 1983, is actively used in various fields such as agriculture, construction, geophysics, geotechnical engineering, GIS, water resources and others. It can also build GRID terrain models, produce density maps using the method of isolines, perform volumetric calculations, and generate 3D maps, and it reads different file types including SHP, DXF and XLSX. This paper presents a comparison of volumetric calculations made with TopoLT by two methods: one in which a single 3D model is used for both the top surface and the surface below it, and one in which a 3D terrain model is chosen for the bottom surface and another 3D model for the top surface. The two variants are compared with the results of volumetric calculations performed with Surfer 13 on a generated GRID terrain model. The topographical measurements were performed with Leica GPS 1200 Series equipment. Measurements were made using the Romanian position determination system ROMPOS, which ensures accurate positioning relative to the ETRS reference frame through the National Network of GNSS Permanent Stations. GPS data processing was performed with the program Leica Geo Combined Office. For the volumetric calculations the GPS points are in the Stereographic 1970 projection system, with altitudes referenced to the Black Sea 1975 datum.
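
    The volume computation both packages perform can be illustrated by prism summation over matched grids, with cut and fill taken from the negative and positive parts of the height difference between the top and bottom surfaces. This is a schematic reduction, not TopoLT's or Surfer's actual algorithm.

```python
import numpy as np

def grid_volumes(z_bottom, z_top, cell_area):
    """Cut-and-fill volumes between two gridded surfaces (e.g. the two 3D
    models built from the GNSS survey), by summing signed prism volumes
    over matching grid cells."""
    dz = np.asarray(z_top, float) - np.asarray(z_bottom, float)
    fill = dz[dz > 0].sum() * cell_area     # material above the base surface
    cut = -dz[dz < 0].sum() * cell_area     # material below it
    return cut, fill

# Toy 1 m grid: a 2 m high stockpile over one cell, a 1 m pit over another.
cut, fill = grid_volumes([[0, 0], [0, 0]], [[2, 0], [-1, 0]], cell_area=1.0)
```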

  2. Random Coefficient Logit Model for Large Datasets

    NARCIS (Netherlands)

    C. Hernández-Mireles (Carlos); D. Fok (Dennis)

    2010-01-01

    We present an approach for analyzing market shares and product price elasticities based on large datasets containing aggregate sales data for many products, several markets, and relatively long time periods. We consider the recently proposed Bayesian approach of Jiang et al [Jiang,

  3. Thesaurus Dataset of Educational Technology in Chinese

    Science.gov (United States)

    Wu, Linjing; Liu, Qingtang; Zhao, Gang; Huang, Huan; Huang, Tao

    2015-01-01

    The thesaurus dataset of educational technology is a knowledge description of educational technology in Chinese. The aims of this thesaurus were to collect the subject terms in the domain of educational technology, facilitate the standardization of terminology and promote the communication between Chinese researchers and scholars from various…

  4. Public Availability to ECS Collected Datasets

    Science.gov (United States)

    Henderson, J. F.; Warnken, R.; McLean, S. J.; Lim, E.; Varner, J. D.

    2013-12-01

    Coastal nations have spent considerable resources exploring the limits of their extended continental shelf (ECS) beyond 200 nm. Although these studies are funded to fulfill requirements of the UN Convention on the Law of the Sea, the investments are producing new data sets in frontier areas of Earth's oceans that will be used to understand, explore, and manage the seafloor and sub-seafloor for decades to come. Although many of these datasets are considered proprietary until a nation's potential ECS has become 'final and binding', an increasing amount of data are being released and utilized by the public. Data sets include multibeam, seismic reflection/refraction, bottom sampling, and geophysical data. The U.S. ECS Project, a multi-agency collaboration whose mission is to establish the full extent of the continental shelf of the United States consistent with international law, relies heavily on data and accurate, standard metadata. The United States has made it a priority to make available to the public all data collected with ECS-funding as quickly as possible. The National Oceanic and Atmospheric Administration's (NOAA) National Geophysical Data Center (NGDC) supports this objective by partnering with academia and other federal government mapping agencies to archive, inventory, and deliver marine mapping data in a coordinated, consistent manner. This includes ensuring quality, standard metadata and developing and maintaining data delivery capabilities built on modern digital data archives. Other countries, such as Ireland, have submitted their ECS data for public availability and many others have made pledges to participate in the future. The data services provided by NGDC support the U.S. ECS effort as well as many developing nations' ECS efforts through the U.N. Environmental Program. Modern discovery, visualization, and delivery of scientific data and derived products that span national and international sources of data ensure the greatest re-use of data and

  5. Volumetric modulated arc therapy: IMRT in a single gantry arc

    International Nuclear Information System (INIS)

    Otto, Karl

    2008-01-01

    In this work a novel plan optimization platform is presented where treatment is delivered efficiently and accurately in a single dynamically modulated arc. Improvements in patient care achieved through image-guided positioning and plan adaptation have resulted in an increase in overall treatment times. Intensity-modulated radiation therapy (IMRT) has also increased treatment time by requiring a larger number of beam directions, increased monitor units (MU), and, in the case of tomotherapy, a slice-by-slice delivery. In order to maintain a similar level of patient throughput it will be necessary to increase the efficiency of treatment delivery. The solution proposed here is a novel aperture-based algorithm for treatment plan optimization where dose is delivered during a single gantry arc of up to 360 deg. The technique is similar to tomotherapy in that a full 360 deg. of beam directions are available for optimization but is fundamentally different in that the entire dose volume is delivered in a single source rotation. The new technique is referred to as volumetric modulated arc therapy (VMAT). Multileaf collimator (MLC) leaf motion and the number of MU per degree of gantry rotation are restricted during the optimization so that gantry rotation speed, leaf translation speed, and dose rate maxima do not excessively limit the delivery efficiency. During planning, investigators model continuous gantry motion by a coarse sampling of static gantry positions and fluence maps or MLC aperture shapes. The technique presented here is unique in that gantry and MLC position sampling is progressively increased throughout the optimization. Using the full gantry range will theoretically provide increased flexibility in generating highly conformal treatment plans. In practice, the additional flexibility is somewhat negated by the additional constraints placed on the amount of MLC leaf motion between gantry samples. A series of studies are performed that characterize the relationship

  6. SU-D-18A-02: Towards Real-Time On-Board Volumetric Image Reconstruction for Intrafraction Target Verification in Radiation Therapy

    International Nuclear Information System (INIS)

    Xu, X; Iliopoulos, A; Zhang, Y; Pitsianis, N; Sun, X; Yin, F; Ren, L

    2014-01-01

    Purpose: To expedite on-board volumetric image reconstruction from limited-angle kV-MV projections for intrafraction verification. Methods: A limited-angle intrafraction verification (LIVE) system has recently been developed for real-time volumetric verification of moving targets, using limited-angle kV-MV projections. Currently, it is challenged by the intensive computational load of the prior-knowledge-based reconstruction method. To accelerate LIVE, we restructure the software pipeline to make it adaptable to model and algorithm parameter changes, while enabling efficient utilization of rapidly advancing, modern computer architectures. In particular, an innovative two-level parallelization scheme has been designed: At the macroscopic level, data and operations are adaptively partitioned, taking into account algorithmic parameters and the processing capacity or constraints of underlying hardware. The control and data flows of the pipeline are scheduled in such a way as to maximize operation concurrency and minimize total processing time. At the microscopic level, the partitioned functions act as independent modules, operating on data partitions in parallel. Each module is pre-parallelized and optimized for multi-core processors (CPUs) and graphics processing units (GPUs). Results: We present results from a parallel prototype, where most of the controls and module parallelization are carried out via Matlab and its Parallel Computing Toolbox. The reconstruction is 5 times faster on a dataset of twice the size, compared to recently reported results, without compromising on algorithmic optimization control. Conclusion: The prototype implementation and its results have served to assess the efficacy of our system concept. While a production implementation will yield much higher processing rates by approaching full-capacity utilization of CPUs and GPUs, some mutual constraints between algorithmic flow and architecture specifics remain. Based on a careful analysis

  7. Spatially continuous dataset at local scale of Taita Hills in Kenya and Mount Kilimanjaro in Tanzania

    Directory of Open Access Journals (Sweden)

    Sizah Mwalusepo

    2016-09-01

    Full Text Available Climate change is a global concern, requiring local-scale spatially continuous datasets and modeling of meteorological variables. This dataset article provides interpolated temperature, rainfall and relative humidity datasets at local scale along the Taita Hills and Mount Kilimanjaro altitudinal gradients in Kenya and Tanzania, respectively. The temperature and relative humidity were recorded hourly using automatic Onset THHOBO data loggers and rainfall was recorded daily using GENERALR wireless rain gauges. Thin plate spline (TPS) interpolation was used, with the degree of data smoothing determined by minimizing the generalized cross validation. The datasets provide information on the status of current climatic conditions along the two mountainous altitudinal gradients in Kenya and Tanzania. The datasets will, thus, enhance future research. Keywords: Spatial climate data, Climate change, Modeling, Local scale
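
    SciPy ships a thin plate spline kernel in RBFInterpolator, so the interpolation step can be sketched directly. The station coordinates and temperatures below are invented placeholders, and the smoothing value is illustrative: the authors chose their degree of smoothing by minimizing the generalized cross validation, a search SciPy does not perform automatically.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical logger records: (easting, northing) in metres and mean
# temperature in degrees C along an altitudinal transect.
stations = np.array([[0, 0], [500, 120], [1100, 300],
                     [1800, 430], [2600, 610]], float)
temps = np.array([24.1, 22.8, 20.5, 18.2, 15.9])

# Thin plate spline interpolation with a small, illustrative smoothing term.
tps = RBFInterpolator(stations, temps,
                      kernel='thin_plate_spline', smoothing=1e-3)

grid = np.mgrid[0:2600:50j, 0:610:50j].reshape(2, -1).T   # query points
surface = tps(grid).reshape(50, 50)                       # continuous field
```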

  8. SU-E-J-217: Accuracy Comparison Between Surface and Volumetric Registrations for Patient Setup of Head and Neck Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y [Stanford University School of Medicine, Stanford, CA (United States); Korea Institute of Science and Technology, Seoul (Korea, Republic of); Li, R; Na, Y; Jenkins, C; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Lee, R [Ewha Womans University, Seoul (Korea, Republic of)

    2014-06-01

    Purpose: Optical surface imaging has been applied to radiation therapy patient setup. This study aims to investigate the accuracy of surface registration with optical surface imaging compared with that of the conventional method of volumetric registration for patient setup in head and neck radiation therapy. Methods: Clinical datasets of planning CT and treatment cone beam CT (CBCT) were used to compare the surface and volumetric registrations in radiation therapy patient setup. The Iterative Closest Point algorithm based on the point-to-plane method was implemented for surface registration. We employed 3D Slicer for rigid volumetric registration of planning CT and treatment CBCT. Six registration parameters (3 rotations and 3 translations) were obtained by the two registration methods, and the results were compared. Digital simulation tests in ideal cases were also performed to validate each registration method. Results: Digital simulation tests showed that both registration methods were accurate and robust enough to compare the registration results. In experiments with the actual clinical data, the results showed considerable deviation between the surface and volumetric registrations. The average root mean squared translational error was 2.7 mm and the maximum translational error was 5.2 mm. Conclusion: The deviation between the surface and volumetric registrations was considerable. Special caution should be taken in using optical surface imaging. To ensure the accuracy of optical surface imaging in radiation therapy patient setup, additional measures are required. This research was supported in part by the KIST institutional program (2E24551), the Industrial Strategic technology development program (10035495) funded by the Ministry of Trade, Industry and Energy (MOTIE, KOREA), the Radiation Safety Research Programs (1305033) through the Nuclear Safety and Security Commission, and the NIH (R01EB016777).
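
    With the point-to-plane metric, each ICP iteration reduces to a small linear least-squares problem under a small-angle approximation. A minimal sketch of one such update (the correspondence search between iterations is omitted):

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One linearized point-to-plane ICP update (small-angle assumption).

    src     : (N, 3) source surface points (already matched)
    dst     : (N, 3) corresponding target points
    normals : (N, 3) unit normals at the target points
    Returns the 3 rotation and 3 translation parameters, the same six
    degrees of freedom compared between the registration methods above.
    """
    # Linearized residual: (r x p).n + t.n = (q - p).n, with r.(p x n) = (r x p).n
    A = np.hstack([np.cross(src, normals), normals])   # (N, 6) design matrix
    b = np.einsum('ij,ij->i', dst - src, normals)      # residual along normals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]   # rotation vector, translation

# In full ICP these updates alternate with a closest-point correspondence
# search until the transform stops changing.
```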

  10. ENHANCED DATA DISCOVERABILITY FOR IN SITU HYPERSPECTRAL DATASETS

    Directory of Open Access Journals (Sweden)

    B. Rasaiah

    2016-06-01

    Full Text Available Field spectroscopic metadata is a central component in the quality assurance, reliability, and discoverability of hyperspectral data and the products derived from it. Cataloguing, mining, and interoperability of these datasets rely upon the robustness of metadata protocols for field spectroscopy, and on the software architecture to support the exchange of these datasets. Currently no standard for in situ spectroscopy data or metadata protocols exists. This inhibits the effective sharing of growing volumes of in situ spectroscopy datasets, to exploit the benefits of integrating with the evolving range of data sharing platforms. A core metadataset for field spectroscopy was introduced by Rasaiah et al. (2011-2015), with extended support for specific applications. This paper presents a prototype model for an OGC- and ISO-compliant platform-independent metadata discovery service aligned to the specific requirements of field spectroscopy. In this study, a proof-of-concept metadata catalogue has been described and deployed in a cloud-based architecture as a demonstration of an operationalized field spectroscopy metadata standard and web-based discovery service.

  11. Tissue-Based MRI Intensity Standardization: Application to Multicentric Datasets

    Directory of Open Access Journals (Sweden)

    Nicolas Robitaille

    2012-01-01

    Full Text Available Intensity standardization in MRI aims at correcting scanner-dependent intensity variations. Existing simple and robust techniques aim at matching the input image histogram onto a standard, while we think that standardization should aim at matching spatially corresponding tissue intensities. In this study, we present a novel automatic technique, called STI for STandardization of Intensities, which not only shares the simplicity and robustness of histogram-matching techniques, but also incorporates tissue spatial intensity information. STI uses joint intensity histograms to determine intensity correspondence in each tissue between the input and standard images. We compared STI to an existing histogram-matching technique on two multicentric datasets, Pilot E-ADNI and ADNI, by measuring the intensity error with respect to the standard image after performing nonlinear registration. The Pilot E-ADNI dataset consisted of 3 subjects, each scanned at 7 different sites. The ADNI dataset consisted of 795 subjects scanned at more than 50 different sites. STI was superior to the histogram-matching technique, showing significantly better intensity matching for the brain white matter with respect to the standard image.
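
    A simplified sketch of the joint-histogram idea (Python; not the published STI implementation): for spatially corresponding input and standard intensities within one tissue class, each input-intensity bin is mapped to the standard intensity with which it co-occurs most often.

    ```python
    # Assumes `inp` and `std` are spatially corresponding intensity arrays for
    # one tissue class (e.g. after nonlinear registration and tissue masking).
    import numpy as np

    def joint_histogram_mapping(inp, std, bins=256):
        """Map each input-intensity bin to the standard intensity with the
        maximal joint count, then apply the lookup table to the input."""
        h, inp_edges, std_edges = np.histogram2d(inp.ravel(), std.ravel(), bins=bins)
        std_centers = 0.5 * (std_edges[:-1] + std_edges[1:])
        lut = std_centers[np.argmax(h, axis=1)]           # mode of std per inp bin
        inp_bin = np.clip(np.digitize(inp, inp_edges) - 1, 0, bins - 1)
        return lut[inp_bin]                               # standardized intensities

    rng = np.random.default_rng(0)
    inp = rng.normal(100, 10, (64, 64))                   # synthetic "input" tissue
    std = 1.5 * inp + 20 + rng.normal(0, 2, inp.shape)    # synthetic "standard"
    standardized = joint_histogram_mapping(inp, std)
    ```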

  12. Principal Component Analysis of Process Datasets with Missing Values

    Directory of Open Access Journals (Sweden)

    Kristen A. Severson

    2017-07-01

    Full Text Available Datasets with missing values arising from causes such as sensor failure, inconsistent sampling rates, and merging data from different systems are common in the process industry. Methods for handling missing data typically operate during data pre-processing, but missing data can also be handled during model building. This article considers missing data within the context of principal component analysis (PCA), which is a method originally developed for complete data that has widespread industrial application in multivariate statistical process control. Due to the prevalence of missing data and the success of PCA for handling complete data, several PCA algorithms that can act on incomplete data have been proposed. Here, algorithms for applying PCA to datasets with missing values are reviewed. A case study is presented to demonstrate the performance of the algorithms, and suggestions are made with respect to choosing which algorithm is most appropriate for particular settings. An alternating algorithm based on the singular value decomposition achieved the best results in the majority of test cases involving process datasets.
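
    For illustration, a minimal sketch of an alternating SVD-based imputation of the kind the review refers to (Python/NumPy; a simplified variant, not the paper's exact algorithm): missing entries are initialized with column means and then repeatedly refilled from a rank-k reconstruction until the fill values stabilize.

    ```python
    import numpy as np

    def svd_impute(X, k=2, n_iter=50, tol=1e-6):
        """Alternating rank-k SVD imputation of NaN entries in X."""
        X = X.copy()
        miss = np.isnan(X)
        col_means = np.nanmean(X, axis=0)
        X[miss] = np.take(col_means, np.where(miss)[1])   # initial fill
        for _ in range(n_iter):
            mu = X.mean(axis=0)
            U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
            X_hat = mu + (U[:, :k] * s[:k]) @ Vt[:k]      # rank-k reconstruction
            delta = np.max(np.abs(X[miss] - X_hat[miss])) if miss.any() else 0.0
            X[miss] = X_hat[miss]                         # refill missing entries only
            if delta < tol:
                break
        return X

    X = np.array([[1.0, 2.1],
                  [2.0, np.nan],
                  [3.0, 6.2],
                  [4.0, 8.0]])
    print(svd_impute(X, k=1))
    ```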

  13. A cross-country Exchange Market Pressure (EMP) dataset

    Directory of Open Access Journals (Sweden)

    Mohit Desai

    2017-06-01

    Full Text Available The data presented in this article are related to the research article titled “An exchange market pressure measure for cross country analysis” (Patnaik et al. [1]). In this article, we present the dataset of Exchange Market Pressure (EMP) values for 139 countries, along with their conversion factors, ρ (rho). Exchange Market Pressure, expressed as a percentage change in the exchange rate, measures the change in the exchange rate that would have taken place had the central bank not intervened. The conversion factor ρ can be interpreted as the change in the exchange rate associated with $1 billion of intervention. Estimates of the conversion factor ρ allow us to calculate a monthly time series of EMP for 139 countries. Additionally, the dataset contains the 68% confidence interval (high and low values) for the point estimates of ρ. Using the standard errors of the estimates of ρ, we obtain one-sigma intervals around the mean estimates of the EMP values. These values are also reported in the dataset.
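
    Illustrative sketch only: assuming, as the description suggests, that EMP combines the observed percentage exchange-rate change with the change absorbed by intervention via ρ (percent per $1 billion), a monthly series could be assembled as follows. The sign convention and exact formula are assumptions, not taken from the dataset's documentation, and all numbers are hypothetical.

    ```python
    # Hypothetical illustration of assembling a monthly EMP series.
    pct_change_fx = [0.8, -1.2, 0.3]   # monthly % change in the exchange rate
    intervention  = [2.0, -0.5, 1.0]   # net central bank intervention, $ billion
    rho = 0.6                          # conversion factor, % change per $1 billion

    emp = [de + rho * i for de, i in zip(pct_change_fx, intervention)]
    print(emp)                         # EMP, percent per month
    ```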

  14. Sharing Video Datasets in Design Research

    DEFF Research Database (Denmark)

    Christensen, Bo; Abildgaard, Sille Julie Jøhnk

    2017-01-01

    This paper examines how design researchers, design practitioners and design education can benefit from sharing a dataset. We present the Design Thinking Research Symposium 11 (DTRS11) as an exemplary project that implied sharing video data of design processes and design activity in natural settings...... with a large group of fellow academics from the international community of Design Thinking Research, for the purpose of facilitating research collaboration and communication within the field of Design and Design Thinking. This approach emphasizes the social and collaborative aspects of design research, where...... a multitude of appropriate perspectives and methods may be utilized in analyzing and discussing the singular dataset. The shared data is, from this perspective, understood as a design object in itself, which facilitates new ways of working, collaborating, studying, learning and educating within the expanding...

  15. Automatic processing of multimodal tomography datasets.

    Science.gov (United States)

    Parsons, Aaron D; Price, Stephen W T; Wadeson, Nicola; Basham, Mark; Beale, Andrew M; Ashton, Alun W; Mosselmans, J Frederick W; Quinn, Paul D

    2017-01-01

    With the development of fourth-generation high-brightness synchrotrons on the horizon, the already large volume of data that will be collected on imaging and mapping beamlines is set to increase by orders of magnitude. As such, an easy and accessible way of dealing with such large datasets as quickly as possible is required in order to be able to address the core scientific problems during the experimental data collection. Savu is an accessible and flexible big data processing framework that is able to deal with both the variety and the volume of multimodal and multidimensional scientific dataset outputs, such as those from chemical tomography experiments on the I18 microfocus scanning beamline at Diamond Light Source.

  16. Interpolation of diffusion weighted imaging datasets

    DEFF Research Database (Denmark)

    Dyrby, Tim B; Lundell, Henrik; Burke, Mark W

    2014-01-01

    anatomical details and signal-to-noise-ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal......Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve high image resolution for finer...... interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data. As for validation we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical...

  17. Data assimilation and model evaluation experiment datasets

    Science.gov (United States)

    Lai, Chung-Cheng A.; Qian, Wen; Glenn, Scott M.

    1994-01-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need for data in the four phases of the experiment are briefly stated. The preparation of the DAMEE datasets consisted of a series of processes: (1) collection of observational data; (2) analysis and interpretation; (3) interpolation using the Optimum Thermal Interpolation System package; (4) quality control and re-analysis; and (5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested these data was incorporated into their refinement. Suggestions for DAMEE data usage include (1) ocean modeling and data assimilation studies, (2) diagnosis and theoretical studies, and (3) comparisons with locally detailed observations.

  18. A hybrid organic-inorganic perovskite dataset

    Science.gov (United States)

    Kim, Chiho; Huan, Tran Doan; Krishnan, Sridevi; Ramprasad, Rampi

    2017-05-01

    Hybrid organic-inorganic perovskites (HOIPs) have been attracting a great deal of attention due to their versatility of electronic properties and fabrication methods. We prepare a dataset of 1,346 HOIPs, which features 16 organic cations, 3 group-IV cations and 4 halide anions. Using a combination of an atomic structure search method and density functional theory calculations, the optimized structures, the bandgap, the dielectric constant, and the relative energies of the HOIPs are uniformly prepared and validated by comparing with relevant experimental and/or theoretical data. We make the dataset available at Dryad Digital Repository, NoMaD Repository, and Khazana Repository (http://khazana.uconn.edu/), hoping that it could be useful for future data-mining efforts that can explore possible structure-property relationships and phenomenological models. Progressive extension of the dataset is expected as new organic cations become appropriate within the HOIP framework, and as additional properties are calculated for the new compounds found.

  19. Determination of Geometrical REVs Based on Volumetric Fracture Intensity and Statistical Tests

    Directory of Open Access Journals (Sweden)

    Ying Liu

    2018-05-01

    Full Text Available This paper presents a method to estimate a representative element volume (REV) of a fractured rock mass based on the volumetric fracture intensity P32 and statistical tests. A 150 m × 80 m × 50 m 3D fracture network model was generated based on field data collected at the Maji dam site by using the rectangular window sampling method. The volumetric fracture intensity P32 of each cube was calculated by varying the cube location in the generated 3D fracture network model and varying the cube side length from 1 to 20 m, and the distribution of the P32 values was described. The size effect and spatial effect of the fractured rock mass were studied; the P32 values from the same cube sizes and different locations were significantly different, and the fluctuation in P32 values clearly decreases as the cube side length increases. In this paper, a new method that comprehensively considers the anisotropy of rock masses, simplicity of calculation and differences between different methods was proposed to estimate the geometrical REV size. The geometrical REV size of the fractured rock mass was determined based on the volumetric fracture intensity P32 and two statistical test methods, namely, the likelihood ratio test and the Wald–Wolfowitz runs test. The results of the two statistical tests were substantially different; critical cube sizes of 13 m and 12 m were estimated by the Wald–Wolfowitz runs test and the likelihood ratio test, respectively. Because the different test methods emphasize different considerations and impact factors, and in order to obtain a size that both tests accept, the larger cube size, 13 m, was selected as the geometrical REV size of the fractured rock mass at the Maji dam site in China.
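
    A sketch of the Wald–Wolfowitz runs test as it might be applied to a sequence of P32 values sampled at one cube size (Python/SciPy; the P32 values below are hypothetical): a non-significant z-score suggests the values fluctuate randomly about their median, consistent with REV behaviour.

    ```python
    import numpy as np
    from scipy.stats import norm

    def runs_test(x):
        """Two-sided Wald-Wolfowitz runs test on values dichotomized about the median."""
        s = np.asarray(x) > np.median(x)
        n1, n2 = s.sum(), (~s).sum()
        runs = 1 + np.count_nonzero(s[1:] != s[:-1])
        mean = 2 * n1 * n2 / (n1 + n2) + 1
        var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
        z = (runs - mean) / np.sqrt(var)
        return z, 2 * norm.sf(abs(z))

    p32 = [6.1, 5.8, 6.3, 6.0, 5.9, 6.2, 6.1, 5.7, 6.0, 6.2]   # hypothetical
    z, p = runs_test(p32)
    print(f"z = {z:.2f}, p = {p:.3f}")
    ```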

  20. High volumetric power density, non-enzymatic, glucose fuel cells.

    Science.gov (United States)

    Oncescu, Vlad; Erickson, David

    2013-01-01

    The development of new implantable medical devices has been limited in the past by slow advances in lithium battery technology. Non-enzymatic glucose fuel cells are promising replacement candidates for lithium batteries because of good long-term stability and adequate power density. The devices developed to date, however, use an "oxygen depletion design" whereby the electrodes are stacked on top of each other, leading to low volumetric power density and complicated fabrication protocols. Here we have developed a novel single-layer fuel cell with good performance (2 μW cm⁻²) and stability that can be integrated directly as a coating layer on large implantable devices, or stacked to obtain a high volumetric power density (over 16 μW cm⁻³). This represents the first demonstration of a low-volume non-enzymatic fuel cell stack with high power density, greatly increasing the range of applications for non-enzymatic glucose fuel cells.

  1. Volumetric properties of ammonium nitrate in N,N-dimethylformamide

    International Nuclear Information System (INIS)

    Vranes, Milan; Dozic, Sanja; Djeric, Vesna; Gadzuric, Slobodan

    2012-01-01

    Highlights: ► We observed interactions and changes in the solution using volumetric properties. ► Temperature has the greatest influence on the solvent–solvent interactions. ► Temperature has the smallest influence on the ion–ion interactions. ► Temperature has no influence on concentrated systems and partially solvated melts. - Abstract: The densities of ammonium nitrate in N,N-dimethylformamide (DMF) mixtures were measured at T = (308.15 to 348.15) K for different ammonium nitrate molalities in the range from (0 to 6.8404) mol·kg⁻¹. From the obtained density data, volumetric properties (apparent molar volumes and partial molar volumes) have been evaluated and discussed in terms of the respective ionic and dipole interactions. From the apparent molar volume, determined at various temperatures, the apparent molar expansibility and the coefficients of thermal expansion were also calculated.
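
    For reference, apparent molar volumes are conventionally obtained from such density data via V_phi = M/ρ − 1000(ρ − ρ0)/(m·ρ·ρ0); the sketch below evaluates this standard relation with hypothetical densities (not values from the paper).

    ```python
    # Worked example of the standard apparent molar volume relation, in cm^3/mol,
    # with molality m in mol/kg, densities in g/cm^3 and molar mass M in g/mol.
    M_NH4NO3 = 80.043      # g/mol, molar mass of ammonium nitrate
    rho0 = 0.9445          # g/cm^3, hypothetical density of pure DMF
    rho = 0.9702           # g/cm^3, hypothetical solution density
    m = 1.0                # mol/kg, molality

    v_phi = M_NH4NO3 / rho - 1000.0 * (rho - rho0) / (m * rho * rho0)
    print(f"apparent molar volume: {v_phi:.1f} cm^3/mol")
    ```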

  2. Predicting positional error of MLC using volumetric analysis

    International Nuclear Information System (INIS)

    Hareram, E.S.

    2008-01-01

    IMRT normally uses multiple beamlets (small beam widths) for a particular field, so it is imperative to maintain the positional accuracy of the MLC in order to deliver the integrated computed dose accurately. Different manufacturers have reported high precision for MLC devices, with leaf positional accuracy nearing 0.1 mm, but measuring and rectifying errors at this accuracy is very difficult. Various methods are used to check MLC position, and among these, volumetric analysis is one technique. A volumetric approach was adopted in our method using a Primus machine and a 0.6 cc chamber at 5 cm depth in Perspex. An MLC error of 1 mm introduces a dose error of 20%, making this approach more sensitive than other methods.

  3. Reference volumetric samples of gamma-spectroscopic sources

    International Nuclear Information System (INIS)

    Taskaev, E.; Taskaeva, M.; Grigorov, T.

    1993-01-01

    The purpose of this investigation is to determine the requirements for matrices of reference volumetric radiation sources necessary for detector calibration. The first stage of this determination consists of analysing some available organic and nonorganic materials. Different sorts of food, grass, plastics, minerals and building materials have been considered, taking into account the various procedures of their processing (grinding, screening, homogenizing) and their properties (hygroscopy, storage life, resistance to oxidation during gamma sterilization). The procedures of source processing, sample preparation, matrix irradiation and homogenization have been determined. A rotation homogenizing device has been developed that enables homogenization of the matrix activity irrespective of the vessel geometry. 33 standard volumetric radioactive sources have been prepared: 14 on an organic matrix and 19 on a nonorganic matrix. (author)

  4. Determination of uranium by a gravimetric-volumetric titration method

    International Nuclear Information System (INIS)

    Krtil, J.

    1998-01-01

    A volumetric-gravimetric modification of a method for the determination of uranium based on the reduction of uranium to U(IV) in a phosphoric acid medium and titration with a standard potassium dichromate solution is described. More than 99% of the stoichiometric amount of the titrating solution is weighed and the remainder is added volumetrically by using the Mettler DL 40 RC Memotitrator. A computer interconnected with the analytical balances continually collects the data on the analyzed samples and evaluates the results of the determination. The method allows uranium to be determined in samples of uranium metal, alloys, oxides, and ammonium diuranate by using aliquot portions containing 30 - 100 mg of uranium, with an error of determination, expressed as the relative standard deviation, of 0.02 - 0.05%. (author)

  5. Two-dimensional random arrays for real time volumetric imaging

    DEFF Research Database (Denmark)

    Davidsen, Richard E.; Jensen, Jørgen Arendt; Smith, Stephen W.

    1994-01-01

    real time volumetric imaging system, which employs a wide transmit beam and receive mode parallel processing to increase image frame rate. Depth-of-field comparisons were made from simulated on-axis and off-axis beamplots at ranges from 30 to 160 mm for both coaxial and offset transmit and receive......Two-dimensional arrays are necessary for a variety of ultrasonic imaging techniques, including elevation focusing, 2-D phase aberration correction, and real time volumetric imaging. In order to reduce system cost and complexity, sparse 2-D arrays have been considered with element geometries...... selected ad hoc, by algorithm, or by random process. Two random sparse array geometries and a sparse array with a Mills cross receive pattern were simulated and compared to a fully sampled aperture with the same overall dimensions. The sparse arrays were designed to the constraints of the Duke University...

  6. Volumetric determination of tumor size in abdominal masses. Problems - feasibilities

    International Nuclear Information System (INIS)

    Helmberger, H.; Bautz, W.; Sendler, A.; Fink, U.; Gerhardt, P.

    1995-01-01

    The most important indication for clinically reliable volumetric determination of tumor size in the abdominal region is monitoring liver metastases during chemotherapy. Determination of volume can be effectively realized using 3D reconstruction; for this, the primary data set must be complete and contiguous. The mass should be depicted with strong enhancement and free of artifacts. At present, this prerequisite can only be met using thin-slice spiral CT. Phantom studies have proven that a semiautomatic reconstruction algorithm is recommendable. The basic difficulties involved in volumetric determination of tumor size are the problems in differentiating active malignant mass from changes in the surrounding tissue, as well as the lack of histomorphological correlation. Possible indications for volumetry of gastrointestinal masses in the assessment of neoadjuvant therapeutic concepts are under scientific evaluation. (orig./MG)

  7. CO2 Capacity Sorbent Analysis Using Volumetric Measurement Approach

    Science.gov (United States)

    Huang, Roger; Richardson, Tra-My Justine; Belancik, Grace; Jan, Darrell; Knox, Jim

    2017-01-01

    In support of air revitalization system sorbent selection for future space missions, Ames Research Center (ARC) has performed CO2 capacity tests on various solid sorbents to complement structural strength tests conducted at Marshall Space Flight Center (MSFC). The materials of interest are: Grace Davison Grade 544 13X, Honeywell UOP APG III, LiLSX VSA-10, BASF 13X, and Grace Davison Grade 522 5A. CO2 capacity was measured for all sorbent materials using a Micromeritics ASAP 2020 Physisorption Volumetric Analysis machine to produce 0 °C, 10 °C, 25 °C, 50 °C, and 75 °C isotherms. These data are to be used for modeling and to provide a basis for continued sorbent research. The volumetric analysis method proved to be effective in generating consistent and repeatable data for the 13X sorbents, but the method needs to be refined and tailored to different sorbents.
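
    As an illustration of how such isotherm data feed into modeling (not ARC's actual procedure), a simple Langmuir isotherm can be fitted to hypothetical CO2 loading data with SciPy:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(p, q_max, b):
        """CO2 loading q(p) = q_max * b*p / (1 + b*p)."""
        return q_max * b * p / (1.0 + b * p)

    p = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])   # kPa, hypothetical pressures
    q = np.array([0.4, 1.5, 2.3, 3.1, 3.9, 4.3])    # mol/kg, hypothetical loadings

    (q_max, b), _ = curve_fit(langmuir, p, q, p0=(5.0, 0.5))
    print(f"q_max = {q_max:.2f} mol/kg, b = {b:.3f} 1/kPa")
    ```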

  8. In Vivo Real Time Volumetric Synthetic Aperture Ultrasound Imaging

    DEFF Research Database (Denmark)

    Bouzari, Hamed; Rasmussen, Morten Fischer; Brandt, Andreas Hjelm

    2015-01-01

    Synthetic aperture (SA) imaging can be used to achieve real-time volumetric ultrasound imaging using 2-D array transducers. The sensitivity of SA imaging is improved by maximizing the acoustic output, but one must consider the limitations of an ultrasound system, both technical and biological.... This paper investigates the in vivo applicability and sensitivity of volumetric SA imaging. Utilizing the transmit events to generate a set of virtual point sources, a frame rate of 25 Hz for a 90° x 90° field-of-view was achieved. Data were obtained using a 3.5 MHz 32 x 32 elements 2-D phased array...... transducer connected to the experimental scanner (SARUS). Proper scaling is applied to the excitation signal such that intensity levels are in compliance with the U.S. Food and Drug Administration regulations for in vivo ultrasound imaging. The measured Mechanical Index and spatial-peak-temporal

  9. Volumetric formulation of lattice Boltzmann models with energy conservation

    OpenAIRE

    Sbragaglia, M.; Sugiyama, K.

    2010-01-01

    We analyze a volumetric formulation of lattice Boltzmann for compressible thermal fluid flows. The velocity set is chosen with the desired accuracy, based on the Gauss-Hermite quadrature procedure, and tested against controlled problems in bounded and unbounded fluids. The method allows the simulation of thermohydrodynamical problems without the need to preserve the exact space-filling nature of the velocity set, but still ensuring the exact conservation laws for density, momentum and energy. ...

  10. Volumetric Real-Time Imaging Using a CMUT Ring Array

    OpenAIRE

    Choe, Jung Woo; Oralkan, Ömer; Nikoozadeh, Amin; Gencel, Mustafa; Stephens, Douglas N.; O’Donnell, Matthew; Sahn, David J.; Khuri-Yakub, Butrus T.

    2012-01-01

    A ring array provides a very suitable geometry for forward-looking volumetric intracardiac and intravascular ultrasound imaging. We fabricated an annular 64-element capacitive micromachined ultrasonic transducer (CMUT) array featuring a 10-MHz operating frequency and a 1.27-mm outer radius. A custom software suite was developed to run on a PC-based imaging system for real-time imaging using this device.

  11. 3-dimensional charge collection efficiency measurements using volumetric tomographic reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Dobos, Daniel [CERN, Geneva (Switzerland)

    2016-07-01

    For a better understanding of the electrical field distribution of 3D semiconductor detectors, and to allow efficiency-based design improvements, a method to measure the 3D spatial charge collection efficiency of planar, 3D silicon and diamond sensors using 3D volumetric reconstruction techniques is presented. Simulation results and first measurements demonstrated the feasibility of this method and show that, with the 10-times-faster beam telescopes that will soon be available, even small structures and efficiency differences will become measurable in a few hours.

  12. Thermodynamic and volumetric databases and software for magnesium alloys

    Science.gov (United States)

    Kang, Youn-Bae; Aliravci, Celil; Spencer, Philip J.; Eriksson, Gunnar; Fuerst, Carlton D.; Chartrand, Patrice; Pelton, Arthur D.

    2009-05-01

    Extensive databases for the thermodynamic and volumetric properties of magnesium alloys have been prepared by critical evaluation, modeling, and optimization of available data. Software has been developed to access the databases to calculate equilibrium phase diagrams, heat effects, etc., and to follow the course of equilibrium or Scheil-Gulliver cooling, calculating not only the amounts of the individual phases, but also of the microstructural constituents.

  13. Volumetric 3D display using a DLP projection engine

    Science.gov (United States)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on the high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  14. A Hierarchical Volumetric Shadow Algorithm for Single Scattering

    OpenAIRE

    Baran, Ilya; Chen, Jiawen; Ragan-Kelley, Jonathan Millar; Durand, Fredo; Lehtinen, Jaakko

    2010-01-01

    Volumetric effects such as beams of light through participating media are an important component in the appearance of the natural world. Many such effects can be faithfully modeled by a single scattering medium. In the presence of shadows, rendering these effects can be prohibitively expensive: current algorithms are based on ray marching, i.e., integrating the illumination scattered towards the camera along each view ray, modulated by visibility to the light source at each sample. Visibility...

  15. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data

    OpenAIRE

    Fischer, Felix; Selver, M. Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter

    2015-01-01

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant addi...

  16. In-Situ Spatial Variability Of Thermal Conductivity And Volumetric ...

    African Journals Online (AJOL)

    Studies of spatial variability of thermal conductivity and volumetric water content of silty topsoil were conducted on a 0.6 ha site at Abeokuta, South-Western Nigeria. The thermal conductivity (k) was measured at depths of up to 0.06 m along four parallel profiles of 200 m long and at an average temperature of 25 °C, using ...

  17. Three-dimensional volumetric display by inclined-plane scanning

    Science.gov (United States)

    Miyazaki, Daisuke; Eto, Takuma; Nishimura, Yasuhiro; Matsushita, Kenji

    2003-05-01

    A volumetric display system based on three-dimensional (3-D) scanning that uses an inclined two-dimensional (2-D) image is described. In the volumetric display system a 2-D display unit is placed obliquely in an imaging system into which a rotating mirror is inserted. When the mirror is rotated, the inclined 2-D image is moved laterally. A locus of the moving image can be observed by persistence of vision as a result of the high-speed rotation of the mirror. Inclined cross-sectional images of an object are displayed on the display unit in accordance with the position of the image plane to observe a 3-D image of the object by persistence of vision. Three-dimensional images formed by this display system satisfy all the criteria for stereoscopic vision. We constructed the volumetric display systems using a galvanometer mirror and a vector-scan display unit. In addition, we constructed a real-time 3-D measurement system based on a light section method. Measured 3-D images can be reconstructed in the 3-D display system in real time.

  18. A volumetric three-dimensional digital light photoactivatable dye display

    Science.gov (United States)

    Patel, Shreya K.; Cao, Jian; Lippert, Alexander R.

    2017-07-01

    Volumetric three-dimensional displays offer spatially accurate representations of images with a 360° view, but have been difficult to implement due to complex fabrication requirements. Herein, a chemically enabled volumetric 3D digital light photoactivatable dye display (3D Light PAD) is reported. The operating principle relies on photoactivatable dyes that become reversibly fluorescent upon illumination with ultraviolet light. Proper tuning of kinetics and emission wavelengths enables the generation of a spatial pattern of fluorescent emission at the intersection of two structured light beams. A first-generation 3D Light PAD was fabricated using the photoactivatable dye N-phenyl spirolactam rhodamine B, a commercial picoprojector, an ultraviolet projector and a custom quartz imaging chamber. The system displays a minimum voxel size of 0.68 mm³, 200 μm resolution and good stability over repeated 'on-off' cycles. A range of high-resolution 3D images and animations can be projected, setting the foundation for widely accessible volumetric 3D displays.

  19. Reducing uncertainties in volumetric image based deformable organ registration

    International Nuclear Information System (INIS)

    Liang, J.; Yan, D.

    2003-01-01

    Applying volumetric image feedback in radiotherapy requires image-based deformable organ registration. The foundation of this registration is the ability to track subvolume displacement in organs of interest. Subvolume displacement can be calculated by applying a biomechanical model and the finite element method to human organs manifested on multiple volumetric images. The calculation accuracy, however, is highly dependent on the determination of the corresponding organ boundary points. Lacking sufficient information for such determination, uncertainties are inevitable, diminishing the registration accuracy. In this paper, a method of consumed-energy minimization was developed to reduce these uncertainties. Starting from an initial selection of organ boundary point correspondence on volumetric image sets, the subvolume displacement and stress distribution of the whole organ are calculated, and the energy consumed due to the subvolume displacements is computed accordingly. The corresponding positions of the initially selected boundary points are then iteratively optimized to minimize the consumed energy under geometry and stress constraints. In this study, a rectal wall delineated from a patient CT image was artificially deformed using a computer simulation and utilized to test the optimization. Subvolume displacements calculated based on the optimized boundary point correspondence were compared to the true displacements, and the calculation accuracy was thereby evaluated. Results demonstrate that a significant improvement in the accuracy of the deformable organ registration can be achieved by applying consumed-energy minimization in the organ deformation calculation.

  20. Power analysis dataset for QCA based multiplexer circuits

    Directory of Open Access Journals (Sweden)

    Md. Abdullah-Al-Shafi

    2017-04-01

    Full Text Available Power consumption in irreversible QCA logic circuits is a vital issue; however, in practical cases this focus is mostly omitted. The complete power dissipation datasets of different QCA multiplexers have been worked out in this paper. At a temperature of −271.15 °C, the dissipation is evaluated under three separate tunneling energy levels. All the circuits are designed with QCADesigner, a broadly used simulation engine, and the QCAPro tool has been applied for estimating the power dissipation.

  1. Equalizing imbalanced imprecise datasets for genetic fuzzy classifiers

    Directory of Open Access Journals (Sweden)

    AnaM. Palacios

    2012-04-01

    Full Text Available Determining whether an imprecise dataset is imbalanced is not immediate. The vagueness in the data means that the prior probabilities of the classes are not precisely known, and therefore the degree of imbalance can also be uncertain. In this paper we propose suitable extensions of different resampling algorithms that can be applied to interval-valued, multi-labelled data. By means of these extended preprocessing algorithms, certain classification systems designed for minimizing the fraction of misclassifications are able to produce knowledge bases that are also adequate under common metrics for imbalanced classification.

  2. Dataset concerning the analytical approximation of the Ae3 temperature

    Directory of Open Access Journals (Sweden)

    B.L. Ennis

    2017-02-01

    The dataset includes the terms of the function and the values for the polynomial coefficients for major alloying elements in steel. A short description of the approximation method used to derive and validate the coefficients has also been included. For discussion and application of this model, please refer to the full-length article entitled “The role of aluminium in chemical and phase segregation in a TRIP-assisted dual phase steel”, doi: 10.1016/j.actamat.2016.05.046 (Ennis et al., 2016 [1]).

  3. Dataset of statements on policy integration of selected intergovernmental organizations

    Directory of Open Access Journals (Sweden)

    Jale Tosun

    2018-04-01

    Full Text Available This article describes data for 78 intergovernmental organizations (IGOs) working on topics related to energy governance, environmental protection, and the economy. The number of IGOs covered also includes organizations active in other sectors. The point of departure for data construction was the Correlates of War dataset, from which we selected this sample of IGOs. We updated and expanded the empirical information on the IGOs selected by manual coding. Most importantly, we collected the primary law texts of the individual IGOs in order to code whether they commit themselves to environmental policy integration (EPI), climate policy integration (CPI) and/or energy policy integration (EnPI).

  4. Dataset on records of Hericium erinaceus in Slovakia

    Directory of Open Access Journals (Sweden)

    Vladimír Kunca

    2017-06-01

    Full Text Available The data presented in this article are related to the research article entitled “Habitat preferences of Hericium erinaceus in Slovakia” (Kunca and Čiliak, 2016 [FUNECO607] [2]). The dataset includes all available and unpublished data from Slovakia, excluding repeat records from the same tree or stem. We compiled a database of records of collections by processing data from herbaria, personal records and communication with mycological activists. Data on altitude, tree species, host tree vital status, host tree position and intensity of management of forest stands were evaluated in this study. All surveys were based on basidioma occurrence, and some result from targeted searches.

  5. An initial study on the estimation of time-varying volumetric treatment images and 3D tumor localization from single MV cine EPID images

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Pankaj, E-mail: pankaj.mishra@varian.com; Mak, Raymond H.; Rottmann, Joerg; Bryant, Jonathan H.; Williams, Christopher L.; Berbeco, Ross I.; Lewis, John H. [Brigham and Women's Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States); Li, Ruijiang [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California 94305 (United States)

    2014-08-15

    Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal image device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) eight digital eXtended CArdiac-Torso (XCAT) phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is 0.73 ± 0.63 mm for the XCAT data and 0.90 ± 0.65 mm for the patient data. Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model.
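
    A conceptual sketch of the PCA motion model described above (Python/NumPy; toy dimensions, random data and arbitrary coefficients, not the authors' implementation): DVFs from registering the reference phase to each 4DCT phase are flattened into rows, PCA yields spatial eigenvectors, and a new DVF is synthesized from updated eigen-coefficients.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dvfs = rng.random((10, 3 * 32 * 32 * 16))    # 10 phases x flattened 3-D DVF

    mean_dvf = dvfs.mean(axis=0)
    U, s, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
    eigvecs = Vt[:3]                             # first 3 spatial eigenvectors

    # In treatment, the coefficients would be optimized so that the projected
    # volume matches the cine EPID image; here they are set arbitrarily.
    coeffs = np.array([1.5, -0.4, 0.2])
    new_dvf = mean_dvf + coeffs @ eigvecs        # DVF for the current EPID frame
    ```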

  6. Quantifying uncertainty in observational rainfall datasets

    Science.gov (United States)

    Lennard, Chris; Dosio, Alessandro; Nikulin, Grigory; Pinto, Izidine; Seid, Hussen

    2015-04-01

    The CO-ordinated Regional Downscaling Experiment (CORDEX) has to date seen the publication of at least ten journal papers that examine the African domain during 2012 and 2013. Five of these papers consider Africa generally (Nikulin et al. 2012, Kim et al. 2013, Hernandes-Dias et al. 2013, Laprise et al. 2013, Panitz et al. 2013) and five have regional foci: Tramblay et al. (2013) on Northern Africa, Mariotti et al. (2014) and Gbobaniyi et al. (2013) on West Africa, Endris et al. (2013) on East Africa and Kalagnoumou et al. (2013) on southern Africa. There are also a further three papers that the authors know about under review. These papers all use observed rainfall and/or temperature data to evaluate/validate the regional model output and often proceed to assess projected changes in these variables due to climate change in the context of these observations. The most popular reference rainfall data used are the CRU, GPCP, GPCC, TRMM and UDEL datasets. However, as Kalagnoumou et al. (2013) point out, there are many other rainfall datasets available for consideration, for example, CMORPH, FEWS, TAMSAT & RIANNAA, TAMORA and the WATCH & WATCH-DEI data. They, with others (Nikulin et al. 2012, Sylla et al. 2012), show that the observed datasets can have a very wide spread at a particular space-time coordinate. As more ground-, space- and reanalysis-based rainfall products become available, all of which use different methods to produce precipitation data, the selection of reference data is becoming an important factor in model evaluation. A number of factors can contribute to uncertainty in terms of the reliability and validity of the datasets, such as radiance conversion algorithms, the quantity and quality of available station data, interpolation techniques and blending methods used to combine satellite and gauge-based products. However, to date no comprehensive study has been performed to evaluate the uncertainty in these observational datasets. We assess 18 gridded

  7. 3D Volumetric Modeling and Microvascular Reconstruction of Irradiated Lumbosacral Defects After Oncologic Resection

    Directory of Open Access Journals (Sweden)

    Emilio Garcia-Tutor

    2016-12-01

    Full Text Available Background: Locoregional flaps are sufficient in most sacral reconstructions. However, large sacral defects due to malignancy necessitate a different reconstructive approach, with local flaps compromised by radiation and regional flaps inadequate for broad surface areas or substantial volume obliteration. In this report, we present our experience using free muscle transfer for volumetric reconstruction in such cases, and demonstrate 3D haptic models of the sacral defect to aid preoperative planning. Methods: Five consecutive patients with irradiated sacral defects secondary to oncologic resections were included, with surface areas ranging from 143 to 600 cm². Latissimus dorsi-based free flap sacral reconstruction was performed in each case between 2005 and 2011. Where the superior gluteal artery was compromised, the subcostal artery was used as a recipient vessel. Microvascular technique, complications and outcomes are reported. The use of volumetric analysis and 3D printing is also demonstrated, with imaging data converted to 3D images suitable for 3D printing with Osirix software (Pixmeo, Geneva, Switzerland). An office-based, desktop 3D printer was used to print 3D models of sacral defects, used to demonstrate surface area and contour and to produce a volumetric print of the dead space needed for flap obliteration. Results: The clinical series of latissimus dorsi free flap reconstructions is presented, with successful transfer in all cases, and adequate soft-tissue cover and volume obliteration achieved. The original use of the subcostal artery as a recipient vessel was successfully achieved. All wounds healed uneventfully. 3D printing is also demonstrated as a useful tool for 3D evaluation of volume and dead space. Conclusion: Free flaps offer unique benefits in sacral reconstruction where local tissue is compromised by irradiation and tumor recurrence, and dead space requires accurate volumetric reconstruction. We describe for the first time the use of

  8. Linking Neurons to Network Function and Behavior by Two-Photon Holographic Optogenetics and Volumetric Imaging.

    Science.gov (United States)

    Dal Maschio, Marco; Donovan, Joseph C; Helmbrecht, Thomas O; Baier, Herwig

    2017-05-17

    We introduce a flexible method for high-resolution interrogation of circuit function, which combines simultaneous 3D two-photon stimulation of multiple targeted neurons, volumetric functional imaging, and quantitative behavioral tracking. This integrated approach was applied to dissect how an ensemble of premotor neurons in the larval zebrafish brain drives a basic motor program, the bending of the tail. We developed an iterative photostimulation strategy to identify minimal subsets of channelrhodopsin (ChR2)-expressing neurons that are sufficient to initiate tail movements. At the same time, the induced network activity was recorded by multiplane GCaMP6 imaging across the brain. From this dataset, we computationally identified activity patterns associated with distinct components of the elicited behavior and characterized the contributions of individual neurons. Using photoactivatable GFP (paGFP), we extended our protocol to visualize single functionally identified neurons and reconstruct their morphologies. Together, this toolkit enables linking behavior to circuit activity with unprecedented resolution.

  9. Reliability of Source Mechanisms for a Hydraulic Fracturing Dataset

    Science.gov (United States)

    Eyre, T.; Van der Baan, M.

    2016-12-01

    Non-double-couple components have been inferred for induced seismicity due to fluid injection, yet these components are often poorly constrained due to the acquisition geometry. Likewise, non-double-couple components in microseismic recordings are not uncommon. Microseismic source mechanisms provide an insight into the fracturing behaviour of a hydraulically stimulated reservoir. However, source inversion in a hydraulic fracturing environment is complicated by the likelihood of volumetric contributions to the source due to the presence of high-pressure fluids, which greatly increases the possible solution space and therefore the non-uniqueness of the solutions. Microseismic data are usually recorded on either 2D surface or borehole arrays of sensors. In many cases, surface arrays appear to constrain source mechanisms with high shear components, whereas borehole arrays tend to constrain more variable mechanisms, including those with high tensile components. The abilities of each geometry to constrain the true source mechanisms are therefore called into question. The ability to distinguish between shear and tensile source mechanisms with different acquisition geometries is investigated using synthetic data. For both inversions, both P- and S-wave amplitudes recorded on three-component sensors need to be included to obtain reliable solutions. Surface arrays appear to give more reliable solutions due to a greater sampling of the focal sphere, but in reality tend to record signals with a low signal-to-noise ratio. Borehole arrays can produce acceptable results; however, the reliability is much more affected by relative source-receiver locations and source orientation, with biases produced in many of the solutions. Therefore more care must be taken when interpreting results. These findings are taken into account when interpreting a microseismic dataset of 470 events recorded by two vertical borehole arrays monitoring a horizontal treatment well. Source locations and
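
    For orientation, one common way to quantify the non-double-couple content discussed here is a Vavryčuk-style decomposition of the moment tensor into isotropic (volumetric), CLVD and double-couple fractions; the sketch below (Python/NumPy) uses a hypothetical tensor, not data from this study.

    ```python
    import numpy as np

    M = np.array([[1.2, 0.3, 0.0],
                  [0.3, 0.9, 0.1],
                  [0.0, 0.1, 0.6]])           # hypothetical moment tensor

    eig = np.linalg.eigvalsh(M)
    m_iso = eig.sum() / 3.0                   # isotropic (volumetric) part
    dev = eig - m_iso                         # deviatoric eigenvalues
    dev = dev[np.argsort(np.abs(dev))]        # sort by absolute value, ascending
    eps = -dev[0] / abs(dev[-1])              # CLVD parameter (0 for pure DC)

    p_iso = 100.0 * m_iso / (abs(m_iso) + abs(dev[-1]))
    p_clvd = 2.0 * eps * (100.0 - abs(p_iso))
    p_dc = 100.0 - abs(p_iso) - abs(p_clvd)
    print(f"ISO {p_iso:.1f}%, CLVD {p_clvd:.1f}%, DC {p_dc:.1f}%")
    ```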

  10. A global gridded dataset of daily precipitation going back to 1950, ideal for analysing precipitation extremes

    Science.gov (United States)

    Contractor, S.; Donat, M.; Alexander, L. V.

    2017-12-01

    Reliable observations of precipitation are necessary to determine past changes in precipitation and to validate models, allowing for reliable future projections. Existing gauge-based gridded datasets of daily precipitation and satellite-based observations contain artefacts and have a short length of record, making them unsuitable for analysing precipitation extremes. The largest limiting factor for the gauge-based datasets is a dense and reliable station network. Currently, there are two major data archives of global in situ daily rainfall data: the first is the Global Historical Climatology Network (GHCN-Daily), hosted by the National Oceanic and Atmospheric Administration (NOAA), and the other is hosted by the Global Precipitation Climatology Centre (GPCC), part of the Deutsche Wetterdienst (DWD). We combine the two data archives and use automated quality control techniques to create a reliable long-term network of raw station data, which we then interpolate using block kriging to create a global gridded dataset of daily precipitation going back to 1950. We compare our interpolated dataset with existing global gridded data of daily precipitation: NOAA Climate Prediction Centre (CPC) Global V1.0 and GPCC Full Data Daily Version 1.0, as well as various regional datasets. We find that our raw station density is much higher than that of other datasets. To avoid artefacts due to station network variability, we provide multiple versions of our dataset based on various completeness criteria, as well as the standard deviation, kriging error and number of stations for each grid cell and timestep, to encourage responsible use of our dataset. Despite our efforts to increase the raw data density, the in situ station network remains sparse in India after the 1960s and in Africa throughout the timespan of the dataset. Our dataset will allow for more reliable global analyses of rainfall, including its extremes, and pave the way for better global precipitation observations with lower and more transparent uncertainties.
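
    A toy sketch of the completeness-criteria idea (Python; all station records are hypothetical): each dataset version keeps only the stations whose daily record is sufficiently complete over the period, before any interpolation is attempted.

    ```python
    import numpy as np

    def complete_enough(series, min_fraction):
        """series: 1-D array of daily precipitation with NaN for missing days."""
        return np.mean(~np.isnan(series)) >= min_fraction

    rng = np.random.default_rng(1)
    stations = {f"stn{i}": np.where(rng.random(365) < 0.2, np.nan,
                                    rng.gamma(0.5, 4.0, 365))
                for i in range(5)}

    for min_frac in (0.5, 0.7, 0.9):          # one dataset version per criterion
        kept = [name for name, s in stations.items() if complete_enough(s, min_frac)]
        print(f">= {min_frac:.0%} complete: {kept}")
    ```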

  11. Resampling Methods Improve the Predictive Power of Modeling in Class-Imbalanced Datasets

    Directory of Open Access Journals (Sweden)

    Paul H. Lee

    2014-09-01

    Full Text Available In the medical field, many outcome variables are dichotomized, and the two possible values of a dichotomized variable are referred to as classes. A dichotomized dataset is class-imbalanced if it consists mostly of one class, and the performance of common classification models on this type of dataset tends to be suboptimal. To tackle such a problem, resampling methods, including oversampling and undersampling, can be used. This paper aims at illustrating the effect of resampling methods using the National Health and Nutrition Examination Survey (NHANES) wave 2009–2010 dataset. A total of 4677 participants aged ≥20 without self-reported diabetes and with valid blood test results were analyzed. The Classification and Regression Tree (CART) procedure was used to build a classification model on undiagnosed diabetes. A participant was classified as having diabetes if they demonstrated evidence of diabetes according to WHO diabetes criteria. Exposure variables included demographics and socio-economic status. CART models were fitted using a randomly selected 70% of the data (training dataset), and the area under the receiver operating characteristic curve (AUC) was computed using the remaining 30% of the sample for evaluation (testing dataset). CART models were fitted using the training dataset, the oversampled training dataset, the weighted training dataset, and the undersampled training dataset. In addition, resampling case-to-control ratios of 1:1, 1:2, and 1:4 were examined. The effect of resampling methods on the performance of other extensions of CART (random forests and generalized boosted trees) was also examined. CARTs fitted on the oversampled (AUC = 0.70) and undersampled training data (AUC = 0.74) yielded better classification power than those fitted on the training data (AUC = 0.65). Resampling could also improve the classification power of random forests and generalized boosted trees. To conclude, applying resampling methods in a class-imbalanced dataset improved the classification power of CART, random forests
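
    A compact sketch of the oversampling comparison on synthetic data (Python/scikit-learn; not the paper's code, which used NHANES): the minority class is oversampled to a 1:1 case-to-control ratio in the training split only, a CART-style tree is fitted, and AUC is compared on the untouched test split.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.utils import resample

    X, y = make_classification(n_samples=4000, weights=[0.95], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    def fit_auc(X_fit, y_fit):
        clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_fit, y_fit)
        return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

    # Oversample the minority class (with replacement) in the training set only.
    minority, majority = X_tr[y_tr == 1], X_tr[y_tr == 0]
    X_min_up = resample(minority, n_samples=len(majority), random_state=0)
    X_bal = np.vstack([majority, X_min_up])
    y_bal = np.r_[np.zeros(len(majority)), np.ones(len(X_min_up))]

    print(f"original AUC:    {fit_auc(X_tr, y_tr):.3f}")
    print(f"oversampled AUC: {fit_auc(X_bal, y_bal):.3f}")
    ```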

  12. The Role of Datasets on Scientific Influence within Conflict Research.

    Directory of Open Access Journals (Sweden)

    Tracy Van Holt

    Full Text Available We inductively tested if a coherent field of inquiry in human conflict research emerged in an analysis of published research involving "conflict" in the Web of Science (WoS) over a 66-year period (1945-2011). We created a citation network that linked the 62,504 WoS records and their cited literature. We performed a critical path analysis (CPA), a specialized social network analysis, on this citation network (~1.5 million works) to highlight the main contributions in conflict research and to test if research on conflict has in fact evolved to represent a coherent field of inquiry. Out of this vast dataset, 49 academic works were highlighted by the CPA, suggesting a coherent field of inquiry, which means that researchers in the field acknowledge seminal contributions and share a common knowledge base. Other conflict concepts that were also analyzed, such as interpersonal conflict or conflict among pharmaceuticals, did not form their own CP. A single path formed, meaning that there was a cohesive set of ideas that built upon previous research. This is in contrast to a main path analysis of conflict from 1957-1971, where ideas didn't persist, in that multiple paths existed and died or emerged, reflecting a lack of scientific coherence (Carley, Hummon, and Harty, 1993). The critical path consisted of a number of key features: (1) concepts that built throughout include the notion that resource availability drives conflict, which emerged in the 1960s-1990s and continued on until 2011; more recent intrastate studies that focused on inequalities emerged from interstate studies on the democracy of peace earlier on the path. (2) Recent research on the path focused on forecasting conflict, which depends on well-developed metrics and theories to model. (3) We used keyword analysis to independently show how the CP was topically linked (i.e., through democracy, modeling, resources, and geography). Publicly available conflict datasets developed early on helped

  13. Assessment of Volumetric versus Manual Measurement in Disseminated Testicular Cancer; No Difference in Assessment between Non-Radiologists and Genitourinary Radiologist.

    Directory of Open Access Journals (Sweden)

    Çiğdem Öztürk

    Full Text Available The aim of this study was to assess the feasibility and reproducibility of semi-automatic volumetric measurement of retroperitoneal lymph node metastases in testicular cancer (TC) patients treated with chemotherapy versus standardized manual measurements based on RECIST criteria. Twenty-one TC patients with retroperitoneal lymph node metastases of testicular cancer were studied with a CT scan of the chest and abdomen before and after cisplatin-based chemotherapy. Three readers, a surgical resident, a radiological technician and a radiologist, assessed tumor response independently using computerized volumetric analysis with Vitrea software® and manual measurement according to RECIST criteria (version 1.1). Intra- and inter-rater variability were evaluated with intraclass correlations and Bland-Altman analysis. Assessment of intra-observer and inter-observer variance proved non-significant for both measurement modalities. In particular, all intraclass correlation (ICC) values for the volumetric analysis were > .99 per observer and between observers. There was minimal bias in agreement for manual as well as volumetric analysis. In this study, volumetric measurement using Vitrea software® appears to be a reliable, reproducible method to measure the initial tumor volume of retroperitoneal lymph node metastases of testicular cancer after chemotherapy. Both measurement methods can be performed by experienced non-radiologists as well.
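
    For reference, the Bland-Altman agreement statistics used above reduce to a bias and 95% limits of agreement between paired measurements; a minimal sketch with hypothetical volume measurements (Python/NumPy):

    ```python
    import numpy as np

    reader_a = np.array([12.1, 30.4, 8.7, 55.2, 21.0])   # volumes, cm^3 (invented)
    reader_b = np.array([11.8, 31.0, 9.1, 54.4, 20.5])

    diff = reader_a - reader_b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)            # half-width of the limits of agreement
    print(f"bias = {bias:.2f} cm^3, "
          f"limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}]")
    ```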

  14. Development of a SPARK Training Dataset

    Energy Technology Data Exchange (ETDEWEB)

    Sayre, Amanda M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Olson, Jarrod R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-03-01

    In its first five years, the National Nuclear Security Administration’s (NNSA) Next Generation Safeguards Initiative (NGSI) sponsored more than 400 undergraduate, graduate, and post-doctoral students in internships and research positions (Wyse 2012). In the past seven years, the NGSI program has produced, and continues to produce, a large body of scientific, technical, and policy work in targeted core safeguards capabilities and human capital development activities. Not only does the NGSI program carry out activities across multiple disciplines, but also across all U.S. Department of Energy (DOE)/NNSA locations in the United States. However, products are not readily shared among disciplines and across locations, nor are they archived in a comprehensive library. Rather, knowledge of NGSI-produced literature is localized to the researchers, clients, and internal laboratory/facility publication systems such as the Electronic Records and Information Capture Architecture (ERICA) at the Pacific Northwest National Laboratory (PNNL). There is also no incorporated way of analyzing existing NGSI literature to determine whether the larger NGSI program is achieving its core safeguards capabilities and activities. A complete library of NGSI literature could prove beneficial to a cohesive, sustainable, and more economical NGSI program. The Safeguards Platform for Automated Retrieval of Knowledge (SPARK) has been developed as a knowledge storage, retrieval, and analysis capability to capture safeguards knowledge so that it exists beyond the lifespan of NGSI. During the development process, it was necessary to build a SPARK training dataset (a corpus of documents) for initial entry into the system and for demonstration purposes. We manipulated these data to gain new information about the breadth of NGSI publications, and evaluated the science-policy interface at PNNL as a practical demonstration of SPARK’s intended analysis capability. The analysis demonstration sought to answer the

  15. Development of a SPARK Training Dataset

    International Nuclear Information System (INIS)

    Sayre, Amanda M.; Olson, Jarrod R.

    2015-01-01

    In its first five years, the National Nuclear Security Administration's (NNSA) Next Generation Safeguards Initiative (NGSI) sponsored more than 400 undergraduate, graduate, and post-doctoral students in internships and research positions (Wyse 2012). In the past seven years, the NGSI program has produced, and continues to produce, a large body of scientific, technical, and policy work in targeted core safeguards capabilities and human capital development activities. Not only does the NGSI program carry out activities across multiple disciplines, but also across all U.S. Department of Energy (DOE)/NNSA locations in the United States. However, products are not readily shared among disciplines and across locations, nor are they archived in a comprehensive library. Rather, knowledge of NGSI-produced literature is localized to the researchers, clients, and internal laboratory/facility publication systems such as the Electronic Records and Information Capture Architecture (ERICA) at the Pacific Northwest National Laboratory (PNNL). There is also no incorporated way of analyzing existing NGSI literature to determine whether the larger NGSI program is achieving its core safeguards capabilities and activities. A complete library of NGSI literature could prove beneficial to a cohesive, sustainable, and more economical NGSI program. The Safeguards Platform for Automated Retrieval of Knowledge (SPARK) has been developed as a knowledge storage, retrieval, and analysis capability to capture safeguards knowledge beyond the lifespan of NGSI. During the development process, it was necessary to build a SPARK training dataset (a corpus of documents) for initial entry into the system and for demonstration purposes. We manipulated these data to gain new information about the breadth of NGSI publications, and evaluated the science-policy interface at PNNL as a practical demonstration of SPARK's intended analysis capability. The analysis demonstration sought to answer

  16. Calculation of climatic reference values and its use for automatic outlier detection in meteorological datasets

    Directory of Open Access Journals (Sweden)

    B. Téllez

    2008-04-01

    Full Text Available The climatic reference values for monthly and annual average air temperature and total precipitation in Catalonia – northeast Spain – are calculated using a combination of statistical methods and geostatistical interpolation techniques. In order to estimate the uncertainty of the method, the initial dataset is split into two parts that are used, respectively, for estimation and validation. The resulting maps are then used for automatic outlier detection in meteorological datasets.
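
    The outlier-detection step described here can be sketched as a simple residual test against the interpolated reference values. The 3-sigma threshold and the data below are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    def flag_outliers(observed, reference, z_thresh=3.0):
        """Flag observations whose deviation from the interpolated climatic
        reference exceeds z_thresh standard deviations of the residuals."""
        residuals = np.asarray(observed, float) - np.asarray(reference, float)
        z = (residuals - residuals.mean()) / residuals.std(ddof=1)
        return np.abs(z) > z_thresh

    # hypothetical monthly mean temperatures (deg C) vs mapped reference values
    obs = np.array([14.2, 13.9, 14.5, 22.0, 14.1])
    ref = np.array([14.0, 14.1, 14.3, 14.2, 14.0])
    print(flag_outliers(obs, ref))   # the 22.0 reading is flagged
    ```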

  17. Composite Match Index with Application of Interior Deformation Field Measurement from Magnetic Resonance Volumetric Images of Human Tissues

    Directory of Open Access Journals (Sweden)

    Penglin Zhang

    2012-01-01

    Full Text Available Whereas a variety of feature-point matching approaches have been reported in computer vision, few have been applied to images of nonrigid, nonuniform human tissues. The present work is concerned with interior deformation field measurement of complex human tissues from three-dimensional magnetic resonance (MR) volumetric images. To improve the reliability of matching results, this paper proposes the composite match index (CMI) as the foundation of multimethod fusion, combining several matching methods to increase overall reliability. We discuss the definition, components, and weight determination of the CMI. To test the validity of the proposed approach, it is applied to actual MR volumetric images obtained from a volunteer’s calf. The main result is consistent with the actual condition.
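
    The fusion idea behind the CMI can be illustrated as a normalized weighted sum of individual match indices. This is a minimal Python sketch; the weights and scores are hypothetical, and the paper's actual weight-determination scheme is not reproduced here:

    ```python
    import numpy as np

    def composite_match_index(indices, weights):
        """Weighted fusion of individual match indices into a single CMI."""
        w = np.asarray(weights, float)
        w = w / w.sum()                      # normalize the weights
        return float(np.dot(w, np.asarray(indices, float)))

    # hypothetical similarity scores from three matching methods for one point pair
    print(composite_match_index([0.82, 0.74, 0.91], weights=[0.5, 0.2, 0.3]))
    ```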

  18. Volumetric capnography: In the diagnostic work-up of chronic thromboembolic disease

    Directory of Open Access Journals (Sweden)

    Marcos Mello Moreira

    2010-05-01

    Full Text Available The morbidity and mortality of pulmonary embolism (PE) have been found to be related to early diagnosis and appropriate treatment. The examinations used to diagnose PE are expensive and not always easily accessible. Noninvasive options include clinical pretests, ELISA D-dimer (DD) tests, and volumetric capnography (VCap). We report the case of a patient whose diagnosis of PE was made via pulmonary arteriography. The clinical pretest revealed a moderate probability of PE, and the DD result was negative; however, the VCap result, combined with arterial blood gases, was positive. The patient underwent all noninvasive exams following admission to hospital and again eight months after discharge. Results gained from invasive tests were similar to those produced by imaging exams, highlighting the importance of VCap as a noninvasive tool. Keywords: pulmonary embolism, pulmonary hypertension, volumetric capnography, d-dimers, pretest probability

  19. Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach

    Science.gov (United States)

    Danyali, Habibiollah; Mertins, Alfred

    2011-01-01

    In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve higher compression efficiency, the algorithm encodes only the main object of interest in each 3D dataset, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on several MR datasets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image archiving and transmission applications. PMID:22606653

  20. Effects of Prepolymerized Particle Size and Polymerization Kinetics on Volumetric Shrinkage of Dental Modeling Resins

    Directory of Open Access Journals (Sweden)

    Tae-Yub Kwon

    2014-01-01

    Full Text Available Dental modeling resins have been developed for use in areas where highly precise resin structures are needed. The manufacturers claim that these polymethyl methacrylate/methyl methacrylate (PMMA/MMA) resins show little or no shrinkage after polymerization. This study examined the polymerization shrinkage of five dental modeling resins as well as one temporary PMMA/MMA resin (control). The morphology and the particle size of the prepolymerized PMMA powders were investigated by scanning electron microscopy and laser diffraction particle size analysis, respectively. Linear polymerization shrinkage strains of the resins were monitored for 20 minutes using a custom-made linometer, and the final values (at 20 minutes) were converted into volumetric shrinkages. The final volumetric shrinkage values for the modeling resins were statistically similar (P>0.05) or significantly larger (P<0.05) than that of the control resin and were related to the polymerization kinetics (P<0.05) rather than the PMMA bead size (P=0.335). Therefore, optimal control of the polymerization kinetics seems to be more important for producing high-precision resin structures than the use of dental modeling resins.
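
    The conversion from linear to volumetric shrinkage reported above is commonly done by assuming isotropic contraction, so that a linear strain dL/L maps to 1 − (1 − dL/L)³. The abstract does not spell out its exact formula, so this short sketch is an assumption:

    ```python
    def volumetric_shrinkage(linear_strain):
        """Convert a linear shrinkage strain to volumetric shrinkage,
        assuming isotropic contraction: dV/V = 1 - (1 - dL/L)**3."""
        return 1.0 - (1.0 - linear_strain) ** 3

    for s in (0.005, 0.01, 0.02):   # hypothetical linear strains
        print(f"linear {s:.3%} -> volumetric {volumetric_shrinkage(s):.3%}")
    ```

    For small strains this reduces to the familiar rule of thumb that volumetric shrinkage is roughly three times the linear shrinkage.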

  1. Integral transform solution of natural convection in a square cavity with volumetric heat generation

    Directory of Open Access Journals (Sweden)

    C. An

    2013-12-01

    Full Text Available The generalized integral transform technique (GITT) is employed to obtain a hybrid numerical-analytical solution of natural convection in a cavity with volumetric heat generation. The hybrid nature of this approach allows for the establishment of benchmark results in the solution of non-linear partial differential equation systems, including the coupled set of heat and fluid flow equations that govern the steady natural convection problem under consideration. After applying the GITT, the resulting transformed ODE system is solved numerically with the subroutine DBVPFD from the IMSL Library. Numerical results under user-prescribed accuracy are thus obtained for different Rayleigh numbers, and the convergence behavior of the proposed eigenfunction expansions is illustrated. Critical comparisons against solutions produced by ANSYS CFX 12.0 are then conducted, which demonstrate excellent agreement. Several sets of reference results for natural convection with volumetric heat generation in a two-dimensional square cavity are also provided for future verification of numerical results obtained by other researchers.

  2. Concentrated fed-batch cell culture increases manufacturing capacity without additional volumetric capacity.

    Science.gov (United States)

    Yang, William C; Minkler, Daniel F; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming

    2016-01-10

    Biomanufacturing factories of the future are transitioning from large, single-product facilities toward smaller, multi-product, flexible facilities. Flexible capacity allows companies to adapt to ever-changing pipeline and market demands. Concentrated fed-batch (CFB) cell culture enables flexible manufacturing capacity with limited volumetric capacity; it intensifies cell culture titers such that the output of a smaller facility can rival that of a larger facility. We tested this hypothesis at bench scale by developing a feeding strategy for CFB and applying it to two cell lines. CFB improved cell line A output by 105% and cell line B output by 70% compared to traditional fed-batch (TFB) processes. CFB did not greatly change cell line A product quality, but it improved cell line B charge heterogeneity, suggesting that CFB has both process and product quality benefits. We projected CFB output gains in the context of a 2000-L small-scale facility, but the output was lower than that of a 15,000-L large-scale TFB facility. CFB's high cell mass also complicated operations and eroded volumetric productivity, showing that our current processes require significant improvements in specific productivity to realize their full potential and manufacturing savings. Improving specific productivity can thus resolve CFB's cost, scale-up, and operability challenges. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Impact of Turbocharger Non-Adiabatic Operation on Engine Volumetric Efficiency and Turbo Lag

    Directory of Open Access Journals (Sweden)

    S. Shaaban

    2012-01-01

    Full Text Available Turbocharger performance significantly affects the thermodynamic properties of the working fluid at engine boundaries and hence engine performance. Heat transfer takes place under all circumstances during turbocharger operation. This heat transfer affects the power produced by the turbine, the power consumed by the compressor, and the engine volumetric efficiency. Therefore, non-adiabatic turbocharger performance can restrict the engine charging process and hence engine performance. The present research work investigates the effect of turbocharger non-adiabatic performance on the engine charging process and turbo lag. Two passenger car turbochargers are experimentally and theoretically investigated. The effect of turbine casing insulation is also explored. The present investigation shows that thermal energy is transferred to the compressor under all circumstances. At high rotational speeds, thermal energy is first transferred to the compressor and later from the compressor to the ambient. Therefore, the compressor appears to be “adiabatic” at high rotational speeds despite the complex heat transfer processes inside the compressor. A tangible effect of turbocharger non-adiabatic performance on the charging process is identified at turbocharger part-load operation. The turbine power is the most affected operating parameter, followed by the engine volumetric efficiency. Insulating the turbine is recommended for reducing the turbine size and the turbo lag.

  4. An Improved Random Walker with Bayes Model for Volumetric Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Chunhua Dong

    2017-01-01

    Full Text Available The random walk (RW) method has been widely used to segment organs in volumetric medical images. However, it leads to a very large-scale graph, because the number of nodes equals the number of voxels, and to inaccurate segmentation when appropriate initial seed points are unavailable. In addition, the classical RW algorithm was designed for a user to mark a few pixels with an arbitrary number of labels, regardless of the intensity and shape information of the organ. Hence, we propose a prior knowledge-based Bayes random walk framework to segment the volumetric medical image in a slice-by-slice manner. Our strategy is to employ the previously segmented slice to obtain the shape and intensity knowledge of the target organ for the adjacent slice. According to this prior knowledge, the object/background seed points can be dynamically updated for the adjacent slice by combining the narrow band threshold (NBT) method and an organ model with a Gaussian process. Finally, a high-quality image segmentation result can be automatically achieved using the Bayes RW algorithm. Comparing our method with conventional RW and state-of-the-art interactive segmentation methods, our results show an improvement in the accuracy for liver segmentation (p < 0.001).
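
    The slice-to-slice seed update described here can be sketched roughly as follows: erode the previous mask for confident object seeds, dilate and invert it for confident background seeds, and keep as object seeds only voxels whose intensity fits a Gaussian model of the organ. This is a loose Python/NumPy sketch of the idea, not the authors' implementation; the band width and the 2-sigma cutoff are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.ndimage import binary_dilation, binary_erosion

    def propagate_seeds(prev_mask, next_slice, band=3, n_sigma=2.0):
        """Derive object/background seeds for the next slice from the previous
        segmentation; the narrow band around the previous contour is left
        unlabeled for the random walker to decide."""
        mu = next_slice[prev_mask].mean()            # intensity model from the
        sigma = next_slice[prev_mask].std() + 1e-6   # previously segmented organ
        inner = binary_erosion(prev_mask, iterations=band)    # confident object
        outer = ~binary_dilation(prev_mask, iterations=band)  # confident background
        fits = np.abs(next_slice - mu) < n_sigma * sigma      # intensity threshold
        return inner & fits, outer   # (object seeds, background seeds)
    ```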

  5. Efficient Algorithms for Real-Time GPU Volumetric Cloud Rendering with Enhanced Geometry

    Directory of Open Access Journals (Sweden)

    Carlos Jiménez de Parga

    2018-04-01

    Full Text Available This paper presents several new techniques for volumetric cloud rendering using efficient algorithms and data structures based on ray-tracing methods for cumulus generation, achieving an optimum balance between realism and performance. These techniques target applications such as flight simulations, computer games, and educational software, even with conventional graphics hardware. The contours of clouds are defined by implicit mathematical expressions or triangulated structures inside which volumetric rendering is performed. Novel techniques are used to reproduce the asymmetrical nature of clouds and the effects of light-scattering, with low computing costs. The work includes a new method to create randomized fractal clouds using a recursive grammar. The graphical results are comparable to those produced by state-of-the-art, hyper-realistic algorithms. These methods provide real-time performance, and are superior to particle-based systems. These outcomes suggest that our methods offer a good balance between realism and performance, and are suitable for use in the standard graphics industry.

  6. High-throughput volumetric reconstruction for 3D wheat plant architecture studies

    Directory of Open Access Journals (Sweden)

    Wei Fang

    2016-09-01

    Full Text Available For many tiller crops, the plant architecture (PA), including the plant fresh weight, plant height, number of tillers, tiller angle and stem diameter, significantly affects the grain yield. In this study, we propose a method based on volumetric reconstruction for high-throughput three-dimensional (3D) wheat PA studies. The proposed methodology involves plant volumetric reconstruction from multiple images, plant model processing, and phenotypic parameter estimation and analysis. This study was performed on 80 Triticum aestivum plants, and the results were analyzed. Comparing the automated measurements with manual measurements, the mean absolute percentage error (MAPE) was 2.71% (1.08 cm, with an average plant height of 40.07 cm) for the plant height and 10.06% (1.41 g, with an average plant fresh weight of 14.06 g) for the plant fresh weight. The root mean square error (RMSE) was 1.37 cm and 1.79 g for the plant height and plant fresh weight, respectively. The correlation coefficients were 0.95 and 0.96 for the plant height and plant fresh weight, respectively. Additionally, the proposed methodology, including plant reconstruction, model processing and trait extraction, required only approximately 20 s on average per plant using parallel computing on a graphics processing unit (GPU), demonstrating that the methodology would be valuable for a high-throughput phenotyping platform.
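
    The MAPE and RMSE figures quoted above follow their standard definitions, which the short sketch below reproduces on hypothetical plant-height values:

    ```python
    import numpy as np

    def mape(manual, automated):
        """Mean absolute percentage error relative to the manual measurements."""
        manual, automated = np.asarray(manual, float), np.asarray(automated, float)
        return 100.0 * np.mean(np.abs(automated - manual) / manual)

    def rmse(manual, automated):
        """Root mean square error between manual and automated measurements."""
        manual, automated = np.asarray(manual, float), np.asarray(automated, float)
        return float(np.sqrt(np.mean((automated - manual) ** 2)))

    # hypothetical plant heights (cm): manual vs reconstruction-based estimates
    manual = [38.5, 41.2, 39.8, 42.0]
    auto   = [39.1, 40.6, 40.9, 41.2]
    print(f"MAPE: {mape(manual, auto):.2f}%  RMSE: {rmse(manual, auto):.2f} cm")
    ```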

  7. Methodological proposal for the volumetric study of archaeological ceramics through 3D edition free-software programs: the case of the celtiberians cemeteries of the meseta

    Directory of Open Access Journals (Sweden)

    Álvaro Sánchez Climent

    2014-10-01

    Full Text Available Nowadays free-software programs have become ideal tools for archaeological research, reaching the same level as commercial programs. The 3D modeling tool Blender has gained great popularity in recent years, offering characteristics similar to commercial 3D editing programs such as 3D Studio Max or AutoCAD. Recently, the script needed for volumetric calculations of three-dimensional objects has been developed, offering great possibilities for calculating the volume of archaeological ceramics. In this paper, we present a methodological approach for volumetric studies with Blender and a case study of funerary urns from several Celtiberian cemeteries of the Spanish Meseta. The goal is to demonstrate the great possibilities that 3D editing free-software tools currently offer for volumetric studies.
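
    In recent Blender versions, a volume computation of this kind can be done directly with the bundled Python API; `BMesh.calc_volume` is part of the standard bmesh module. A minimal sketch, which assumes a watertight mesh in scene units and ignores object transforms:

    ```python
    # Run inside Blender's Python console or scripting tab.
    import bpy
    import bmesh

    def mesh_volume(obj, scale=1.0):
        """Closed-mesh volume via bmesh; `scale` converts scene units
        (e.g. to cm) before cubing."""
        bm = bmesh.new()
        bm.from_mesh(obj.data)
        bmesh.ops.triangulate(bm, faces=bm.faces)   # safe for ngon meshes
        vol = bm.calc_volume(signed=False) * scale ** 3
        bm.free()
        return vol

    urn = bpy.context.active_object   # e.g. a digitized funerary urn
    print(f"Volume: {mesh_volume(urn):.1f} cubic units")
    ```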

  8. In-situ volumetric topography of IC chips for defect detection using infrared confocal measurement with active structured light

    International Nuclear Information System (INIS)

    Chen, Liang-Chia; Le, Manh-Trung; Phuc, Dao Cong; Lin, Shyh-Tsong

    2014-01-01

    The article presents the development of in-situ integrated circuit (IC) chip defect detection techniques for automated clipping detection, using infrared imaging and full-field volumetric topography. IC chip inspection, especially during or after IC packaging, has become an extremely critical procedure in IC fabrication to assure manufacturing quality and reduce production costs. To address this, microscopic infrared imaging using an electromagnetic spectrum ranging from 0.9 to 1.7 µm is developed to perform volumetric inspection of IC chips, in order to identify important defects such as silicon clipping, cracking or peeling. The main difficulty of infrared (IR) volumetric imaging lies in its poor image contrast, which makes reliable inspection difficult: infrared imaging is sensitive to temperature differences but insensitive to the geometric variance of materials, so defects are hard to detect and quantify precisely. To overcome this, 3D volumetric topography based on 3D infrared confocal measurement with active structured light, as well as light refractive matching principles, is developed to detect the size, shape and position of defects in ICs. The experimental results show that the algorithm is effective and suitable for in-situ defect detection in IC semiconductor packaging. The quality of defect detection, such as measurement repeatability and accuracy, is addressed. Confirmed by the experimental results, the depth measurement resolution can reach up to 0.3 µm, and the depth measurement uncertainty with one standard deviation was verified to be less than 1.0% of the full-scale depth-measuring range. (paper)

  9. Full closure strategic analysis.

    Science.gov (United States)

    2014-07-01

    The full closure strategic analysis was conducted to create a decision process whereby full roadway closures for construction and maintenance activities can be evaluated and approved or denied by CDOT Traffic personnel. The study reviewed current...

  10. Quality Controlling CMIP datasets at GFDL

    Science.gov (United States)

    Horowitz, L. W.; Radhakrishnan, A.; Balaji, V.; Adcroft, A.; Krasting, J. P.; Nikonov, S.; Mason, E. E.; Schweitzer, R.; Nadeau, D.

    2017-12-01

    As GFDL makes the switch from model development to production in light of the Climate Model Intercomparison Project (CMIP), GFDL's efforts have shifted to testing and, more importantly, establishing guidelines and protocols for quality control and semi-automated data publishing. Every CMIP cycle introduces key challenges and the upcoming CMIP6 is no exception. The new CMIP experimental design comprises multiple MIPs facilitating research in different focus areas. This paradigm has implications not only for the groups that develop the models and conduct the runs, but also for the groups that monitor, analyze and quality control the datasets before publishing, before their knowledge makes its way into reports like the IPCC (Intergovernmental Panel on Climate Change) Assessment Reports. In this talk, we discuss some of the paths taken at GFDL to quality control the CMIP-ready datasets, including: Jupyter notebooks, PrePARE, and a LAMP (Linux, Apache, MySQL, PHP/Python/Perl) technology-driven tracker system to monitor the status of experiments qualitatively and quantitatively and to provide additional metadata and analysis services, along with some built-in controlled-vocabulary validations in the workflow. In addition, we discuss the integration of community-based model evaluation software (ESMValTool, PCMDI Metrics Package, and ILAMB) as part of our CMIP6 workflow.

  11. Integrated remotely sensed datasets for disaster management

    Science.gov (United States)

    McCarthy, Timothy; Farrell, Ronan; Curtis, Andrew; Fotheringham, A. Stewart

    2008-10-01

    Video imagery can be acquired from aerial, terrestrial and marine based platforms and has been exploited for a range of remote sensing applications over the past two decades. Examples include coastal surveys using aerial video, route-corridor infrastructure surveys using vehicle-mounted video cameras, aerial surveys over forestry and agriculture, underwater habitat mapping and disaster management. Many of these video systems are based on interlaced television standards such as North America's NTSC and the European SECAM and PAL systems, recorded in various video formats. This technology has recently been employed as a front-line remote sensing technology for post-disaster damage assessment. This paper traces the development of spatial video as a remote sensing tool from the early 1980s to the present day. The background to a new spatial-video research initiative based at the National University of Ireland, Maynooth (NUIM), is described. New improvements are proposed, including low-cost encoders, easy-to-use software decoders, timing issues and interoperability. These developments will enable specialists and non-specialists to collect, process and integrate these datasets with minimal support. This integrated approach will enable decision makers to access relevant remotely sensed datasets quickly and so carry out rapid damage assessment during and after a disaster.

  12. Knowledge Mining from Clinical Datasets Using Rough Sets and Backpropagation Neural Network

    Directory of Open Access Journals (Sweden)

    Kindie Biredagn Nahato

    2015-01-01

    Full Text Available The availability of clinical datasets and knowledge mining methodologies encourages researchers to pursue research in extracting knowledge from clinical datasets. Different data mining techniques have been used for mining rules, and mathematical models have been developed to assist the clinician in decision making. The objective of this research is to build a classifier that predicts the presence or absence of a disease by learning from a minimal set of attributes extracted from the clinical dataset. In this work a rough set indiscernibility relation method with a backpropagation neural network (RS-BPNN) is used. This work has two stages. The first stage is the handling of missing values to obtain a smooth dataset and the selection of appropriate attributes from the clinical dataset by the indiscernibility relation method. The second stage is classification using a backpropagation neural network on the selected reducts of the dataset. The classifier has been tested with hepatitis, Wisconsin breast cancer, and Statlog heart disease datasets obtained from the University of California at Irvine (UCI) machine learning repository. The accuracy obtained from the proposed method is 97.3%, 98.6%, and 90.4% for hepatitis, breast cancer, and heart disease, respectively. The proposed system provides an effective classification model for clinical datasets.
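
    A rough approximation of this two-stage pipeline can be assembled with scikit-learn: attribute reduction followed by a backpropagation-trained multilayer perceptron. Note that the sketch below substitutes mutual-information feature selection for the rough-set reduct and uses sklearn's bundled Wisconsin breast cancer data, so it illustrates the structure rather than the authors' exact method:

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = make_pipeline(
        SelectKBest(mutual_info_classif, k=10),  # stand-in for the rough-set reduct
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    )
    clf.fit(X_tr, y_tr)
    print("Accuracy:", round(clf.score(X_te, y_te), 3))
    ```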

  13. Volumetric CT-images improve testing of radiological image interpretation skills

    Energy Technology Data Exchange (ETDEWEB)

    Ravesloot, Cécile J., E-mail: C.J.Ravesloot@umcutrecht.nl [Radiology Department at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, Room E01.132 (Netherlands); Schaaf, Marieke F. van der, E-mail: M.F.vanderSchaaf@uu.nl [Department of Pedagogical and Educational Sciences at Utrecht University, Heidelberglaan 1, 3584 CS Utrecht (Netherlands); Schaik, Jan P.J. van, E-mail: J.P.J.vanSchaik@umcutrecht.nl [Radiology Department at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, Room E01.132 (Netherlands); Cate, Olle Th.J. ten, E-mail: T.J.tenCate@umcutrecht.nl [Center for Research and Development of Education at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht (Netherlands); Gijp, Anouk van der, E-mail: A.vanderGijp-2@umcutrecht.nl [Radiology Department at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, Room E01.132 (Netherlands); Mol, Christian P., E-mail: C.Mol@umcutrecht.nl [Image Sciences Institute at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht (Netherlands); Vincken, Koen L., E-mail: K.Vincken@umcutrecht.nl [Image Sciences Institute at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht (Netherlands)

    2015-05-15

    Rationale and objectives: Current radiology practice increasingly involves interpretation of volumetric data sets. In contrast, most radiology tests still contain only 2D images. We introduced a new testing tool that allows for stack viewing of volumetric images in our undergraduate radiology program. We hypothesized that tests with volumetric CT-images enhance test quality, in comparison with traditional completely 2D image-based tests, because they might better reflect the skills required for clinical practice. Materials and methods: Two groups of medical students (n = 139; n = 143), trained with 2D and volumetric CT-images, took a digital radiology test in two versions (A and B), each containing both 2D and volumetric CT-image questions. In a questionnaire, they were asked to comment on the representativeness for clinical practice, difficulty and user-friendliness of the test questions and testing program. Students’ test scores and reliabilities, measured with Cronbach's alpha, of 2D and volumetric CT-image tests were compared. Results: Estimated reliabilities (Cronbach's alphas) were higher for volumetric CT-image scores (version A: .51 and version B: .54) than for 2D CT-image scores (version A: .24 and version B: .37). Participants found volumetric CT-image tests more representative of clinical practice, and considered them to be less difficult than 2D CT-image questions. However, in one version (A), volumetric CT-image scores (M 80.9, SD 14.8) were significantly lower than 2D CT-image scores (M 88.4, SD 10.4) (p < .001). The volumetric CT-image testing program was considered user-friendly. Conclusion: This study shows that volumetric image questions can be successfully integrated in students’ radiology testing. Results suggest that the inclusion of volumetric CT-images might improve the quality of radiology tests by positively impacting perceived representativeness for clinical practice and increasing reliability of the test.
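
    Cronbach's alpha, the reliability measure used throughout this record, is easy to compute from an examinee-by-item score matrix; a short sketch with hypothetical scores:

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n examinees x k items) score matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
        X = np.asarray(items, float)
        k = X.shape[1]
        item_vars = X.var(axis=0, ddof=1).sum()
        total_var = X.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars / total_var)

    # hypothetical scores of 5 students on 4 volumetric CT-image questions
    scores = np.array([[1, 1, 0, 1],
                       [0, 1, 0, 0],
                       [1, 1, 1, 1],
                       [0, 0, 0, 1],
                       [1, 0, 1, 1]])
    print(round(cronbach_alpha(scores), 2))
    ```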

  14. Automated fibroglandular tissue segmentation and volumetric density estimation in breast MRI using an atlas-aided fuzzy C-means method

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Shandong; Weinstein, Susan P.; Conant, Emily F.; Kontos, Despina, E-mail: despina.kontos@uphs.upenn.edu [Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)

    2013-12-15

    Purpose: Breast magnetic resonance imaging (MRI) plays an important role in the clinical management of breast cancer. Studies suggest that the relative amount of fibroglandular (i.e., dense) tissue in the breast as quantified in MR images can be predictive of the risk for developing breast cancer, especially for high-risk women. Automated segmentation of the fibroglandular tissue and volumetric density estimation in breast MRI could therefore be useful for breast cancer risk assessment. Methods: In this work the authors develop and validate a fully automated segmentation algorithm, namely, an atlas-aided fuzzy C-means (FCM-Atlas) method, to estimate the volumetric amount of fibroglandular tissue in breast MRI. The FCM-Atlas is a 2D segmentation method working on a slice-by-slice basis. FCM clustering is first applied to the intensity space of each 2D MR slice to produce an initial voxelwise likelihood map of fibroglandular tissue. Then a prior learned fibroglandular tissue likelihood atlas is incorporated to refine the initial FCM likelihood map to achieve enhanced segmentation, from which the absolute volume of the fibroglandular tissue (|FGT|) and the relative amount (i.e., percentage) of the |FGT| relative to the whole breast volume (FGT%) are computed. The authors' method is evaluated by a representative dataset of 60 3D bilateral breast MRI scans (120 breasts) that span the full breast density range of the American College of Radiology Breast Imaging Reporting and Data System. The automated segmentation is compared to manual segmentation obtained by two experienced breast imaging radiologists. Segmentation performance is assessed by linear regression, Pearson's correlation coefficients, Student's paired t-test, and Dice's similarity coefficients (DSC). Results: The inter-reader correlation is 0.97 for FGT% and 0.95 for |FGT|. When compared to the average of the two readers’ manual segmentation, the proposed FCM-Atlas method achieves a
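
    The two-stage idea described here (per-slice FCM likelihood, then atlas-based refinement) can be loosely sketched with scikit-fuzzy. Everything below is an illustrative simplification: the choice of the brighter cluster as fibroglandular, the multiplicative prior, and the final thresholding are assumptions, not the authors' algorithm:

    ```python
    import numpy as np
    import skfuzzy as fuzz   # scikit-fuzzy

    def segment_slice(mr_slice, atlas_prior, m=2.0):
        """Two-class FCM on one MR slice, refined by an atlas likelihood map.
        `atlas_prior` holds, per voxel, the learned prior probability of
        fibroglandular tissue for this slice position."""
        flat = mr_slice.reshape(1, -1).astype(float)   # (features, samples)
        cntr, u, *_ = fuzz.cluster.cmeans(flat, c=2, m=m, error=1e-4, maxiter=200)
        dense = int(np.argmax(cntr[:, 0]))             # brighter cluster assumed dense
        likelihood = u[dense].reshape(mr_slice.shape)  # initial FCM likelihood map
        posterior = likelihood * atlas_prior           # atlas-aided refinement
        return posterior > 0.5 * posterior.max()       # crude binarization for the sketch
    ```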

  15. Automated fibroglandular tissue segmentation and volumetric density estimation in breast MRI using an atlas-aided fuzzy C-means method

    International Nuclear Information System (INIS)

    Wu, Shandong; Weinstein, Susan P.; Conant, Emily F.; Kontos, Despina

    2013-01-01

    Purpose: Breast magnetic resonance imaging (MRI) plays an important role in the clinical management of breast cancer. Studies suggest that the relative amount of fibroglandular (i.e., dense) tissue in the breast as quantified in MR images can be predictive of the risk for developing breast cancer, especially for high-risk women. Automated segmentation of the fibroglandular tissue and volumetric density estimation in breast MRI could therefore be useful for breast cancer risk assessment. Methods: In this work the authors develop and validate a fully automated segmentation algorithm, namely, an atlas-aided fuzzy C-means (FCM-Atlas) method, to estimate the volumetric amount of fibroglandular tissue in breast MRI. The FCM-Atlas is a 2D segmentation method working on a slice-by-slice basis. FCM clustering is first applied to the intensity space of each 2D MR slice to produce an initial voxelwise likelihood map of fibroglandular tissue. Then a prior learned fibroglandular tissue likelihood atlas is incorporated to refine the initial FCM likelihood map to achieve enhanced segmentation, from which the absolute volume of the fibroglandular tissue (|FGT|) and the relative amount (i.e., percentage) of the |FGT| relative to the whole breast volume (FGT%) are computed. The authors' method is evaluated by a representative dataset of 60 3D bilateral breast MRI scans (120 breasts) that span the full breast density range of the American College of Radiology Breast Imaging Reporting and Data System. The automated segmentation is compared to manual segmentation obtained by two experienced breast imaging radiologists. Segmentation performance is assessed by linear regression, Pearson's correlation coefficients, Student's paired t-test, and Dice's similarity coefficients (DSC). Results: The inter-reader correlation is 0.97 for FGT% and 0.95 for |FGT|. When compared to the average of the two readers’ manual segmentation, the proposed FCM-Atlas method achieves a correlation of r = 0…

  16. Synoptic volumetric variations and flushing of the Tampa Bay estuary

    Science.gov (United States)

    Wilson, M.; Meyers, S. D.; Luther, M. E.

    2014-03-01

    Two types of analyses are used to investigate the synoptic wind-driven flushing of Tampa Bay in response to the El Niño-Southern Oscillation (ENSO) cycle from 1950 to 2007. Hourly sea level elevations from the St. Petersburg tide gauge, and wind speed and direction from three different sites around Tampa Bay, are used for the study. The zonal (u) and meridional (v) wind components are rotated clockwise by 40° to obtain axial and co-axial components according to the layout of the bay. First, we use the subtidal observed water level as a proxy for mean tidal height to estimate the rate of volumetric bay outflow. Second, we use wavelet analysis to bandpass sea level and wind data in the time-frequency domain to isolate the synoptic sea level and surface wind variance. For both analyses the long-term monthly climatology is removed and we focus on the volumetric and wavelet variance anomalies. The overall correlation between the Oceanic Niño Index and the volumetric analysis is small due to the seasonal dependence of the ENSO response. The mean monthly climatologies of the synoptic wavelet variance of elevation and of the axial winds are in close agreement. During the winter, El Niño (La Niña) increases (decreases) the synoptic variability, but decreases (increases) it during the summer. The difference between winter El Niño and La Niña wavelet variances is about 20% of the climatological value, meaning that ENSO can swing the synoptic flushing of the bay by 0.22 bay volumes per month. These changes in circulation associated with synoptic variability have the potential to impact mixing and transport within the bay.

  17. Agreement of mammographic measures of volumetric breast density to MRI.

    Science.gov (United States)

    Wang, Jeff; Azziz, Ania; Fan, Bo; Malkov, Serghei; Klifa, Catherine; Newitt, David; Yitta, Silaja; Hylton, Nola; Kerlikowske, Karla; Shepherd, John A

    2013-01-01

    Clinical scores of mammographic breast density are highly subjective. Automated technologies for mammography exist to quantify breast density objectively, but it is not known which technique most accurately measures the quantity of breast fibroglandular tissue. Our aim was to compare the agreement of three automated mammographic techniques for measuring volumetric breast density with a quantitative volumetric MRI-based technique in a screening population. Women were selected from the UCSF Medical Center screening population who had received both a screening MRI and a digital mammogram within one year of each other, had Breast Imaging Reporting and Data System (BI-RADS) assessments of normal or benign finding, and no history of breast cancer or surgery. The agreement of three mammographic techniques (single-energy X-ray absorptiometry [SXA], Quantra, and Volpara) with MRI was assessed for percent fibroglandular tissue volume, absolute fibroglandular tissue volume, and total breast volume. Among 99 women, the automated mammographic density measures were correlated with MRI measures with R(2) values ranging from 0.40 (log fibroglandular volume) to 0.91 (total breast volume). Substantial agreement, measured by the kappa statistic, was found between all percent fibroglandular tissue measures (0.63 to 0.72), but only moderate agreement for log fibroglandular volumes. The kappa statistics for all percent density measures were highest in the comparisons of the SXA and MRI results. The largest source of error between MRI and the mammography techniques was differences in measures of total breast volume. Automated volumetric fibroglandular tissue measures from screening digital mammograms were in substantial agreement with MRI and, if associated with breast cancer, could be used in clinical practice to enhance risk assessment and prevention.

  18. Designing the colorectal cancer core dataset in Iran

    Directory of Open Access Journals (Sweden)

    Sara Dorri

    2017-01-01

    Full Text Available Background: The importance of collecting, recording, and analyzing disease information in any health organization needs no explanation. In this regard, the systematic design of standard datasets can help record uniform and consistent information and can create interoperability between health care systems. The main purpose of this study was to design a core dataset for recording colorectal cancer information in Iran. Methods: For the design of the colorectal cancer core dataset, a combination of literature review and expert consensus was used. In the first phase, a draft of the dataset was designed based on a review of the colorectal cancer literature and comparative studies. In the second phase, this dataset was evaluated by experts from different disciplines, such as medical informatics, oncology and surgery, and their comments and opinions were collected. In the third phase the refined dataset was evaluated again by experts and the final dataset was proposed. Results: In the first phase, a draft set of 85 data elements was designed based on the literature review. In the second phase this dataset was evaluated by experts and supplementary information was offered by professionals in subgroups, especially for the treatment part, bringing the total to 93 elements. In the third phase, after a final expert evaluation, the dataset was organized into five main parts: demographic information, diagnostic information, treatment information, clinical status assessment information, and clinical trial information. Conclusion: In this study a comprehensive core dataset for colorectal cancer was designed. Such a dataset can facilitate the exchange of health information, and designing similar datasets for other diseases can help providers collect standard data from patients and accelerate retrieval from storage systems.

  19. FTSPlot: fast time series visualization for large datasets.

    Directory of Open Access Journals (Sweden)

    Michael Riss

    Full Text Available The analysis of electrophysiological recordings often involves visual inspection of time series data to locate specific experiment epochs, mask artifacts, and verify the results of signal processing steps, such as filtering or spike detection. Long-term experiments with continuous data acquisition generate large amounts of data. Rapid browsing through these massive datasets poses a challenge to conventional data plotting software because the plotting time increases proportionately with the volume of data. This paper presents FTSPlot, a visualization concept for large-scale time series datasets using techniques from the field of high performance computer graphics, such as hierarchic level of detail and out-of-core data handling. In a preprocessing step, time series data, event, and interval annotations are converted into an optimized data format, which then permits fast, interactive visualization. The preprocessing step has a computational complexity of O(n · log(N)); the visualization itself can be done with a complexity of O(1) and is therefore independent of the amount of data. A demonstration prototype has been implemented and benchmarks show that the technology is capable of displaying large amounts of time series data, event, and interval annotations lag-free within < 20 ms. The current 64-bit implementation theoretically supports datasets of up to 2^64 bytes; on the x86_64 architecture currently up to 2^48 bytes are supported, and benchmarks have been conducted with 2^40 bytes (1 TiB), or 1.3 × 10^11 double precision samples. The presented software is freely available and can be included as a Qt GUI component in future software projects, providing a standard visualization method for long-term electrophysiological experiments.
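
    The hierarchic level-of-detail idea behind FTSPlot can be illustrated by precomputing per-block (min, max) tiles at several zoom levels, so that any view plots a bounded number of points regardless of the dataset size. This NumPy sketch shows the preprocessing only; the out-of-core, O(1) lookup machinery of the real tool is not reproduced:

    ```python
    import numpy as np

    def minmax_levels(signal, base=2, n_levels=8):
        """Per-block (min, max) tiles at successively coarser resolutions.
        Each level is built directly from the raw signal, so total work is
        roughly O(n * n_levels), i.e. O(n log N) for a full pyramid."""
        x = np.asarray(signal, float)
        levels = []
        for k in range(1, n_levels + 1):
            f = base ** k                      # block size at this level
            n = (x.size // f) * f              # drop the ragged tail
            blocks = x[:n].reshape(-1, f)
            levels.append(np.stack([blocks.min(axis=1), blocks.max(axis=1)], axis=1))
        return levels

    sig = np.random.randn(1_000_000)           # stand-in for an electrophysiology trace
    pyramid = minmax_levels(sig)
    print([lvl.shape for lvl in pyramid][:5])  # tile counts shrink level by level
    ```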

  20. Synthesis of Volumetric Ring Antenna Array for Terrestrial Coverage Pattern

    Science.gov (United States)

    Reyna, Alberto; Panduro, Marco A.; Del Rio Bocio, Carlos

    2014-01-01

    This paper presents a synthesis of a volumetric ring antenna array for a terrestrial coverage pattern. This synthesis considers the spacing among the rings on the X-Y planes, the positions of the rings on the X-Z plane, and uniform and concentric excitations. The optimization is carried out by implementing particle swarm optimization. The synthesis is compared with previous designs, showing that this geometry provides accurate coverage for satellite applications with a maximum reduction of the antenna hardware as well as a reduction of the side lobe level. PMID:24701150

  1. Volumetric and calorimetric properties of aqueous ionene solutions.

    Science.gov (United States)

    Lukšič, Miha; Hribar-Lee, Barbara

    2017-02-01

    The volumetric (partial and apparent molar volumes) and calorimetric properties (apparent heat capacities) of aqueous cationic polyelectrolyte solutions – ionenes – were studied using an oscillating tube densitometer and a differential scanning calorimeter. The polyion's charge density and the counterion properties were treated as variables. Special attention was paid to evaluating the contributions of electrostatic and hydrophobic effects to the properties studied. The contribution of the CH2 group of the polyion's backbone to molar volumes and heat capacities was estimated. A synergistic effect between polyion and counterions was found.

  2. CT volumetric measurements of the orbits in Graves' disease

    International Nuclear Information System (INIS)

    Krahe, T.; Schlolaut, K.H.; Poss, T.; Trier, H.G.; Lackner, K.; Bonn Univ.; Bonn Univ.

    1989-01-01

    The volumes of the four recti muscles and of the orbital fat were measured by CT in 40 normal persons and in 60 patients with clinically confirmed Graves' disease. Compared with normal persons, 42 patients (70%) showed an increase in muscle volume and 28 patients (46.7%) an increase in the amount of fat. In nine patients (15%) muscle volume was normal, but the fat was increased. By using volumetric measurements, the amount of fat in the orbits of patients with Graves' disease could be determined. (orig.) [de]

  3. Strontium removal jar test dataset for all figures and tables.

    Data.gov (United States)

    U.S. Environmental Protection Agency — The datasets were used to generate data to demonstrate strontium removal under various water quality and treatment conditions. This dataset is associated with the...

  4. Full page insight

    DEFF Research Database (Denmark)

    Cortsen, Rikke Platz

    2014-01-01

    Alan Moore and his collaborating artists often manipulate time and space by drawing upon the formal elements of comics and making alternative constellations. This article looks at an element that is used frequently in comics of all kinds – the full page – and discusses how it helps shape spatio-temporality. The apocalypse interrupts the ordinary flow of time and space, something that it shares with the full page in comics. Through an analysis of several full pages from Moore titles like Swamp Thing, From Hell, Watchmen and Promethea, it is made clear why the full page provides an apt vehicle for an apocalypse in comics.

  5. An open, multi-vendor, multi-field-strength brain MR dataset and analysis of publicly available skull stripping methods agreement.

    Science.gov (United States)

    Souza, Roberto; Lucena, Oeslle; Garrafa, Julia; Gobbi, David; Saluzzi, Marina; Appenzeller, Simone; Rittner, Letícia; Frayne, Richard; Lotufo, Roberto

    2018-04-15

    This paper presents an open, multi-vendor, multi-field strength magnetic resonance (MR) T1-weighted volumetric brain imaging dataset, named Calgary-Campinas-359 (CC-359). The dataset is composed of images of older healthy adults (29-80 years) acquired on scanners from three vendors (Siemens, Philips and General Electric) at both 1.5 T and 3 T. CC-359 comprises 359 datasets, approximately 60 subjects per vendor and magnetic field strength. The dataset is approximately age and gender balanced, subject to the constraints of the available images. It provides consensus brain extraction masks for all volumes, generated using supervised classification. Manual segmentation results for twelve randomly selected subjects, performed by an expert, are also provided. The CC-359 dataset allows investigation of 1) the influences of both vendor and magnetic field strength on quantitative analysis of brain MR; 2) parameter optimization for automatic segmentation methods; and potentially 3) machine learning classifiers with big data, specifically those based on deep learning methods, as these approaches require a large amount of data. To illustrate the utility of this dataset, we compared the results of eight publicly available skull stripping methods and one publicly available consensus algorithm to the results of a supervised classifier. A linear mixed effects model analysis indicated that vendor (p-value < 0.001) and magnetic field strength (p-value < 0.001) have statistically significant impacts on skull stripping results. Copyright © 2017 Elsevier Inc. All rights reserved.
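
    A linear mixed effects analysis of this kind can be reproduced with statsmodels. In the sketch below the data are synthetic, and the model specification (random intercept per skull-stripping method, fixed effects for vendor and field strength) is an assumption about how such an analysis might be set up, not the paper's exact model:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    rows = [{"method": m, "vendor": v, "field": f,
             "dice": 0.95 + rng.normal(0, 0.01)}     # hypothetical Dice scores
            for m in ["BET", "HWA", "ROBEX", "ANTs"]  # hypothetical method names
            for v in ["GE", "Philips", "Siemens"]
            for f in [1.5, 3.0]
            for _ in range(5)]                        # 5 hypothetical scans per cell
    df = pd.DataFrame(rows)

    # random intercept per skull-stripping method; fixed effects for vendor and field
    model = smf.mixedlm("dice ~ C(vendor) + C(field)", df, groups=df["method"])
    print(model.fit().summary())
    ```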

  6. Vector Nonlinear Time-Series Analysis of Gamma-Ray Burst Datasets on Heterogeneous Clusters

    Directory of Open Access Journals (Sweden)

    Ioana Banicescu

    2005-01-01

    Full Text Available The simultaneous analysis of a number of related datasets using a single statistical model is an important problem in statistical computing. A parameterized statistical model is to be fitted on multiple datasets and tested for goodness of fit within a fixed analytical framework. Definitive conclusions are hopefully achieved by analyzing the datasets together. This paper proposes a strategy for the efficient execution of this type of analysis on heterogeneous clusters. Based on partitioning processors into groups for efficient communications and a dynamic loop scheduling approach for load balancing, the strategy addresses the variability of the computational loads of the datasets, as well as the unpredictable irregularities of the cluster environment. Results from preliminary tests of using this strategy to fit gamma-ray burst time profiles with vector functional coefficient autoregressive models on 64 processors of a general purpose Linux cluster demonstrate the effectiveness of the strategy.

  7. A multimodal dataset for authoring and editing multimedia content: The MAMEM project

    Directory of Open Access Journals (Sweden)

    Spiros Nikolopoulos

    2017-12-01

    Full Text Available We present a dataset that combines multimodal biosignals and eye tracking information gathered under a human-computer interaction framework. The dataset was developed within the MAMEM project, which aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and heart rate) signals collected from 34 individuals (18 able-bodied and 16 motor-impaired). Data were collected during interaction with a specifically designed interface for web browsing and multimedia content manipulation and during imaginary movement tasks. The presented dataset will contribute towards the development and evaluation of modern human-computer interaction systems that would foster the integration of people with severe motor impairments back into society.

  8. Integration of geophysical datasets by a conjoint probability tomography approach: application to Italian active volcanic areas

    Directory of Open Access Journals (Sweden)

    D. Patella

    2008-06-01

    Full Text Available We expand the theory of probability tomography to the integration of different geophysical datasets. The aim of the new method is to improve information quality using a conjoint occurrence probability function designed to highlight the existence of common sources of anomalies. The new method is tested on gravity, magnetic and self-potential datasets collected in the volcanic area of Mt. Vesuvius (Naples), and on gravity and dipole geoelectrical datasets collected in the volcanic area of Mt. Etna (Sicily). The application demonstrates that, from a probabilistic point of view, the integrated analysis can delineate the signature of some important volcanic targets better than the analysis of the tomographic image of each dataset considered separately.

  9. CoVennTree: A new method for the comparative analysis of large datasets

    Directory of Open Access Journals (Sweden)

    Steffen C. Lott

    2015-02-01

    Full Text Available The visualization of massive datasets, such as those resulting from comparative metatranscriptome analyses or the analysis of microbial population structures using ribosomal RNA sequences, is a challenging task. We developed a new method called CoVennTree (Comparative weighted Venn Tree) that simultaneously compares up to three multifarious datasets by aggregating and propagating information from the bottom to the top level and produces a graphical output in Cytoscape. With the introduction of weighted Venn structures, the contents and relationships of various datasets can be correlated and simultaneously aggregated without losing information. We demonstrate the suitability of this approach using a dataset of 16S rDNA sequences obtained from microbial populations at three different depths of the Gulf of Aqaba in the Red Sea. CoVennTree has been integrated into the Galaxy ToolShed and can be directly downloaded and integrated into the user instance.

  10. Statistical exploration of dataset examining key indicators influencing housing and urban infrastructure investments in megacities

    Directory of Open Access Journals (Sweden)

    Adedeji O. Afolabi

    2018-06-01

    Full Text Available Lagos, by UN standards, has attained megacity status, with the attendant challenge of living up to that titanic position; regrettably, its present stock of housing and infrastructural facilities struggles to match its new status. The dataset was gathered with a questionnaire instrument based on the perceptions of construction professionals residing within the state. The statistical exploration contains data on the state of the housing and urban infrastructure deficit, key indicators spurring government investment to reverse the deficit, and improvement mechanisms to tackle the infrastructural dearth. Descriptive and inferential statistics were used to present the dataset. When analyzed, the dataset can be useful for policy makers, local and international governments, world funding bodies, researchers and infrastructure investors. Keywords: Construction, Housing, Megacities, Population, Urban infrastructures

  11. Evaluation of Modified Categorical Data Fuzzy Clustering Algorithm on the Wisconsin Breast Cancer Dataset

    Directory of Open Access Journals (Sweden)

    Amir Ahmad

    2016-01-01

    Full Text Available The early diagnosis of breast cancer is an important step in the fight against the disease. Machine learning techniques have shown promise in improving our understanding of the disease. As medical datasets consist of data points that cannot be precisely assigned to a class, fuzzy methods have been useful for studying these datasets. Breast cancer datasets are sometimes described by categorical features, and many fuzzy clustering algorithms have been developed for categorical datasets. However, most of these methods use the Hamming distance to define the distance between two categorical feature values. In this paper, we use a probabilistic distance measure to compute the distance between a pair of categorical feature values. Experiments demonstrate that this distance measure performs better than the Hamming distance on the Wisconsin breast cancer data.
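
    One way to build such a probabilistic distance is to compare the conditional distributions of the other attributes given each of the two category values. The sketch below uses total-variation distance for this, which is a simplification of co-occurrence-based measures such as Ahmad and Dey's, not necessarily the measure used in the paper:

    ```python
    import numpy as np
    import pandas as pd

    def value_distance(df, attr, a, b):
        """Probabilistic distance between two values of a categorical attribute:
        average total-variation distance between the conditional distributions
        of every other attribute given attr == a versus attr == b."""
        dists = []
        for c in (col for col in df.columns if col != attr):
            pa = df.loc[df[attr] == a, c].value_counts(normalize=True)
            pb = df.loc[df[attr] == b, c].value_counts(normalize=True)
            support = pa.index.union(pb.index)
            dists.append(0.5 * sum(abs(pa.get(v, 0.0) - pb.get(v, 0.0))
                                   for v in support))
        return float(np.mean(dists))

    # toy categorical data standing in for mammographic mass descriptors
    toy = pd.DataFrame({"shape":  ["round", "round", "irregular", "irregular"],
                        "margin": ["smooth", "smooth", "spiculated", "smooth"]})
    print(value_distance(toy, "shape", "round", "irregular"))   # 0.5
    ```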

  12. New Fuzzy Support Vector Machine for the Class Imbalance Problem in Medical Datasets Classification

    Directory of Open Access Journals (Sweden)

    Xiaoqing Gu

    2014-01-01

    Full Text Available In medical dataset classification, the support vector machine (SVM) is considered to be one of the most successful methods. However, most real-world medical datasets contain some outliers/noise and often exhibit class imbalance. In this paper, a fuzzy support vector machine for the class imbalance problem (FSVM-CIP) is presented, which can be seen as a modified class of FSVM extended with manifold regularization and two class-specific misclassification costs. The proposed FSVM-CIP can handle the class imbalance problem in the presence of outliers/noise and enhance the locality maximum margin. Five real-world medical datasets (breast, heart, hepatitis, BUPA liver, and Pima diabetes) from the UCI medical database are employed to illustrate the method presented in this paper. Experimental results on these datasets show that FSVM-CIP outperforms, or is comparable to, existing methods.
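
    The cost-and-membership idea can be approximated with a standard SVM by combining class weights (two misclassification costs) with per-sample fuzzy memberships. The sketch below omits the manifold regularization term entirely and uses synthetic imbalanced data, so it is only a rough analogue of FSVM-CIP:

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import balanced_accuracy_score

    # imbalanced toy data standing in for a medical dataset
    X, y = make_classification(n_samples=600, weights=[0.9, 0.1], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # fuzzy memberships: down-weight points far from their class centroid
    # (a crude outlier/noise damping heuristic)
    centroids = {c: X_tr[y_tr == c].mean(axis=0) for c in (0, 1)}
    d = np.array([np.linalg.norm(x - centroids[c]) for x, c in zip(X_tr, y_tr)])
    membership = 1.0 - d / (d.max() + 1e-6)

    clf = SVC(kernel="rbf", class_weight="balanced")   # two misclassification costs
    clf.fit(X_tr, y_tr, sample_weight=membership)
    print("Balanced accuracy:",
          round(balanced_accuracy_score(y_te, clf.predict(X_te)), 3))
    ```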

  13. Predicting dataset popularity for the CMS experiment

    CERN Document Server

    INSPIRE-00005122; Li, Ting; Giommi, Luca; Bonacorsi, Daniele; Wildish, Tony

    2016-01-01

    The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at the frontier of High Energy Physics, searching for new phenomena and making discoveries. Even though computing plays a significant role in physics analysis, we rarely use its data to predict the behavior of the system itself. Basic information about computing resources, user activities and site utilization can be very useful for improving the throughput of the system and its management. In this paper, we discuss a first CMS analysis of dataset popularity based on CMS meta-data, which can be used as a model for dynamic data placement and provide the foundation of a data-driven approach for the CMS computing infrastructure.

  14. Predicting dataset popularity for the CMS experiment

    International Nuclear Information System (INIS)

    Kuznetsov, V.; Li, T.; Giommi, L.; Bonacorsi, D.; Wildish, T.

    2016-01-01

    The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at the frontier of High Energy Physics, searching for new phenomena and making discoveries. Even though computing plays a significant role in physics analysis, we rarely use its data to predict the behavior of the system itself. Basic information about computing resources, user activities and site utilization can be very useful for improving the throughput of the system and its management. In this paper, we discuss a first CMS analysis of dataset popularity based on CMS meta-data, which can be used as a model for dynamic data placement and provide the foundation of a data-driven approach for the CMS computing infrastructure. (paper)

  15. Internationally coordinated glacier monitoring: strategy and datasets

    Science.gov (United States)

    Hoelzle, Martin; Armstrong, Richard; Fetterer, Florence; Gärtner-Roer, Isabelle; Haeberli, Wilfried; Kääb, Andreas; Kargel, Jeff; Nussbaumer, Samuel; Paul, Frank; Raup, Bruce; Zemp, Michael

    2014-05-01

    (c) the Randolph Glacier Inventory (RGI), a new and globally complete digital dataset of outlines from about 180,000 glaciers with some meta-information, which has been used for many applications relating to the IPCC AR5 report. Concerning glacier changes, a database (Fluctuations of Glaciers) exists containing information about mass balance, front variations including past reconstructed time series, geodetic changes and special events. Annual mass balance reporting contains information for about 125 glaciers, with a subset of 37 glaciers with continuous observational series since 1980 or earlier. Front variation observations of around 1800 glaciers are available from most of the mountain ranges world-wide. This database was recently updated with 26 glaciers having an unprecedented dataset of length changes from reconstructions of well-dated historical evidence going back as far as the 16th century. Geodetic observations of about 430 glaciers are available. The database is completed by a dataset containing information on special events, including glacier surges, glacier lake outbursts, ice avalanches, eruptions of ice-clad volcanoes, etc., related to about 200 glaciers. A special database of glacier photographs contains 13,000 pictures from around 500 glaciers, some of them dating back to the 19th century. A key challenge is to combine and extend the traditional observations with fast evolving datasets from new technologies.

  16. MIPS bacterial genomes functional annotation benchmark dataset.

    Science.gov (United States)

    Tetko, Igor V; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Fobo, Gisela; Ruepp, Andreas; Antonov, Alexey V; Surmeli, Dimitrij; Mewes, Hans-Werner

    2005-05-15

    Any development of new methods for automatic functional annotation of proteins according to their sequences requires high-quality data (as a benchmark) as well as tedious preparatory work to generate the sequence parameters required as input data for the machine learning methods. Different program settings and incompatible protocols make a comparison of the analyzed methods difficult. The MIPS Bacterial Functional Annotation Benchmark dataset (MIPS-BFAB) is a new, high-quality resource comprising four bacterial genomes manually annotated according to the MIPS functional catalogue (FunCat). These resources include precalculated sequence parameters, such as sequence similarity scores, InterPro domain composition and other parameters that could be used to develop and benchmark methods for functional annotation of bacterial protein sequences. These data are provided in XML format and can be used by scientists who are not necessarily experts in genome annotation. BFAB is available at http://mips.gsf.de/proj/bfab

  17. Volumetric 3-component velocimetry measurements of the flow field on the rear window of a generic car model

    Directory of Open Access Journals (Sweden)

    Tounsi Nabil

    2012-01-01

    Full Text Available Volumetric 3-component Velocimetry measurements are carried out in the flow field around the rear window of a generic car model, the so-called Ahmed body. This particular flow field is known to be highly unsteady, three-dimensional, and characterized by strong vortices. The volumetric velocity measurements from the present experiments provide the most comprehensive data for this flow field to date. The present study focuses on the wake flow modifications which result from using a simple flow control device, such as the one recently employed by Fourrié et al. [1]. The mean data clearly show the structure of this complex flow and confirm the drag reduction mechanism suggested by Fourrié et al. The results show that strengthening the separated flow weakens the longitudinal vortices and vice versa. The present paper shows that the Volumetric 3-component Velocimetry technique is a powerful tool for better understanding a three-dimensional unsteady complex flow such as the one developing around a bluff body.

  18. Wind Integration National Dataset Toolkit | Grid Modernization | NREL

    Science.gov (United States)

    The Wind Integration National Dataset (WIND) Toolkit is an update and expansion of the Eastern Wind Integration Data Set and Western Wind Integration Data Set. It supports the next generation of wind integration studies.

  19. Solar Integration National Dataset Toolkit | Grid Modernization | NREL

    Science.gov (United States)

    NREL is working on a Solar Integration National Dataset (SIND) Toolkit to enable researchers to perform U.S. regional solar generation integration studies. It will provide modeled, coherent subhourly solar power data

  20. Technical note: An inorganic water chemistry dataset (1972–2011 ...

    African Journals Online (AJOL)

    A national dataset of inorganic chemical data of surface waters (rivers, lakes, and dams) in South Africa is presented and made freely available. The dataset comprises more than 500 000 complete water analyses from 1972 up to 2011, collected from more than 2 000 sample monitoring stations in South Africa. The dataset ...

  1. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala

    2015-01-01

    Full Text Available Dimensionality reduction refers to a set of mathematical techniques used to reduce the complexity of the original high-dimensional data while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify the key components underlying spectral dimensionality reduction techniques and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate the applicability of our framework, we perform dimensionality reduction of 75,000 images representing morphology evolution during the manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
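
    A minimal serial sketch of the spectral kernel such a framework would parallelize (k-nearest-neighbor graph, graph Laplacian, bottom eigenvectors), assuming a Laplacian-eigenmaps-style method; the authors' distributed implementation is not shown:

    ```python
    import numpy as np
    from scipy.sparse import csgraph
    from scipy.sparse.linalg import eigsh
    from sklearn.neighbors import kneighbors_graph

    def laplacian_eigenmaps(X, n_components=2, n_neighbors=10):
        # kNN graph -> normalized graph Laplacian -> bottom eigenvectors.
        W = kneighbors_graph(X, n_neighbors, mode="connectivity", include_self=False)
        W = 0.5 * (W + W.T)                       # symmetrize the adjacency
        L = csgraph.laplacian(W, normed=True)
        vals, vecs = eigsh(L, k=n_components + 1, which="SM")
        order = np.argsort(vals)                  # skip the trivial eigenvector
        return vecs[:, order[1:n_components + 1]]

    X = np.random.rand(500, 50)                   # toy high-dimensional points
    print(laplacian_eigenmaps(X).shape)           # (500, 2)
    ```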

  2. Robust computational analysis of rRNA hypervariable tag datasets.

    Directory of Open Access Journals (Sweden)

    Maksim Sipos

    Full Text Available Next-generation DNA sequencing is increasingly being utilized to probe microbial communities, such as gastrointestinal microbiomes, where it is important to be able to quantify measures of abundance and diversity. The fragmented nature of the 16S rRNA datasets obtained, coupled with their unprecedented size, has led to the recognition that the results of such analyses are potentially contaminated by a variety of artifacts, both experimental and computational. Here we quantify how multiple alignment and clustering errors contribute to overestimates of abundance and diversity, reflected by incorrect OTU assignment, corrupted phylogenies, inaccurate species diversity estimators, and rank abundance distribution functions. We show that straightforward procedural optimizations, combining preexisting tools, are effective in handling large (10^5-10^6) 16S rRNA datasets, and we describe metrics to measure the effectiveness and quality of the estimators obtained. We introduce two metrics to ascertain the quality of clustering of pyrosequenced rRNA data, and show that complete linkage clustering greatly outperforms other widely used methods.
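
    A toy version of the recommended clustering step, assuming precomputed pairwise distances between reads (real pipelines would derive them from a multiple alignment; the sizes and cutoff below are illustrative):

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    # Condensed pairwise distance vector between n reads (random values as a
    # stand-in for real alignment-based distances).
    n = 50
    rng = np.random.default_rng(0)
    dists = rng.uniform(0.0, 0.2, size=n * (n - 1) // 2)

    # Complete linkage guarantees every pair inside an OTU is within the cutoff.
    Z = linkage(dists, method="complete")
    otus = fcluster(Z, t=0.03, criterion="distance")   # 97% identity ~ 3% distance
    print(len(set(otus)), "OTUs from", n, "reads")
    ```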

  3. BLAST-EXPLORER helps you building datasets for phylogenetic analysis

    Directory of Open Access Journals (Sweden)

    Claverie Jean-Michel

    2010-01-01

    Full Text Available Abstract Background The right sampling of homologous sequences for phylogenetic or molecular evolution analyses is a crucial step, the quality of which can have a significant impact on the final interpretation of the study. There is no single way of constructing datasets suitable for phylogenetic analysis, because this task intimately depends on the scientific question we want to address. Moreover, database-mining software such as BLAST, which is routinely used for searching homologous sequences, is not specifically optimized for this task. Results To fill this gap, we designed BLAST-Explorer, an original and user-friendly web-based application that combines a BLAST search with a suite of tools allowing interactive, phylogenetic-oriented exploration of the BLAST results and flexible selection of homologous sequences among the BLAST hits. Once the selection of the BLAST hits is done using BLAST-Explorer, the corresponding sequences can be imported locally for external analysis or passed to the phylogenetic tree reconstruction pipelines available on the Phylogeny.fr platform. Conclusions BLAST-Explorer provides a simple, intuitive and interactive graphical representation of the BLAST results and allows selection and retrieval of the BLAST hit sequences based on a wide range of criteria. Although BLAST-Explorer primarily aims at helping the construction of sequence datasets for further phylogenetic study, it can also be used as a standard BLAST server with enriched output. BLAST-Explorer is available at http://www.phylogeny.fr

  4. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Directory of Open Access Journals (Sweden)

    Seyhan Yazar

    Full Text Available A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (an E. coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E. coli and 53.5% (95% CI: 34.4-72.6) for the human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for the E. coli and human assemblies, respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  5. Condensing Massive Satellite Datasets For Rapid Interactive Analysis

    Science.gov (United States)

    Grant, G.; Gallaher, D. W.; Lv, Q.; Campbell, G. G.; Fowler, C.; LIU, Q.; Chen, C.; Klucik, R.; McAllister, R. A.

    2015-12-01

    Our goal is to enable users to interactively analyze massive satellite datasets, identifying anomalous data or values that fall outside of thresholds. To achieve this, the project seeks to create a derived database containing only the most relevant information, accelerating the analysis process. The database is designed to be an ancillary tool for the researcher, not an archival database to replace the original data. This approach aims to improve performance by condensing the data to reduce its overall size. The primary challenges of the project include: the nature of the research question(s) may not be known ahead of time; the thresholds for determining anomalies may be uncertain; problems associated with processing cloudy, missing, or noisy satellite imagery; and the contents and method of creation of the condensed dataset must be easily explainable to users. The architecture of the database reorganizes spatially-oriented satellite imagery into temporally-oriented columns of data (a.k.a. "data rods") to facilitate time-series analysis, as sketched below. The database itself is an open-source parallel database, designed to make full use of clustered server technologies. A demonstration of the system capabilities will be shown. Applications for this technology include quick-look views of the data, as well as the potential for on-board satellite processing of essential information, with the goal of reducing data latency.
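
    A minimal sketch of the "data rods" reorganization, with the array shapes and the threshold query invented for illustration:

    ```python
    import numpy as np

    # Stack of daily satellite grids, indexed (time, row, col).
    stack = np.random.rand(365, 256, 256).astype(np.float32)

    # "Data rods": make each pixel's time series contiguous in memory,
    # so temporal scans and threshold checks touch sequential addresses.
    rods = np.ascontiguousarray(stack.transpose(1, 2, 0))    # (row, col, time)

    # Example query: flag pixels whose series ever leaves a threshold band.
    lo, hi = 0.05, 0.95
    anomalous = ((rods < lo) | (rods > hi)).any(axis=-1)
    print(int(anomalous.sum()), "of", rods.shape[0] * rods.shape[1], "pixels flagged")
    ```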

  6. Automated volumetric breast density estimation: A comparison with visual assessment

    International Nuclear Information System (INIS)

    Seo, J.M.; Ko, E.S.; Han, B.-K.; Ko, E.Y.; Shin, J.H.; Hahn, S.Y.

    2013-01-01

    Aim: To compare automated volumetric breast density (VBD) measurement with visual assessment according to the Breast Imaging Reporting and Data System (BI-RADS), and to determine the factors influencing the agreement between them. Materials and methods: One hundred and ninety-three consecutive screening mammograms reported as negative were included in the study. Three radiologists assigned qualitative BI-RADS density categories to the mammograms. An automated volumetric breast-density method was used to measure VBD (% breast density) and density grade (VDG). Each case was classified into an agreement or disagreement group according to the comparison between visual assessment and VDG. The correlation between visual assessment and VDG was obtained. Various physical factors were compared between the two groups. Results: Agreement between visual assessment by the radiologists and VDG was good (ICC value = 0.757). VBD showed a highly significant positive correlation with visual assessment (Spearman's ρ = 0.754, p < 0.001). VBD and the x-ray tube target were significantly different between the agreement and disagreement groups (p = 0.02 and 0.04, respectively). Conclusion: Automated VBD is a reliable objective method to measure breast density. The agreement between VDG and visual assessment by radiologists might be influenced by physical factors

  7. Scanners and drillers: Characterizing expert visual search through volumetric images

    Science.gov (United States)

    Drew, Trafton; Vo, Melissa Le-Hoa; Olwal, Alex; Jacobson, Francine; Seltzer, Steven E.; Wolfe, Jeremy M.

    2013-01-01

    Modern imaging methods like computed tomography (CT) generate 3-D volumes of image data. How do radiologists search through such images? Are certain strategies more efficient? Although there is a large literature devoted to understanding search in 2-D, relatively little is known about search in volumetric space. In recent years, with the ever-increasing popularity of volumetric medical imaging, this question has taken on increased importance as we try to understand, and ultimately reduce, errors in diagnostic radiology. In the current study, we asked 24 radiologists to search chest CTs for lung nodules that could indicate lung cancer. To search, radiologists scrolled up and down through a “stack” of 2-D chest CT “slices.” At each moment, we tracked eye movements in the 2-D image plane and coregistered eye position with the current slice. We used these data to create a 3-D representation of the eye movements through the image volume. Radiologists tended to follow one of two dominant search strategies: “drilling” and “scanning.” Drillers restrict eye movements to a small region of the lung while quickly scrolling through depth. Scanners move more slowly through depth and search an entire level of the lung before moving on to the next level in depth. Driller performance was superior to the scanners on a variety of metrics, including lung nodule detection rate, percentage of the lung covered, and the percentage of search errors where a nodule was never fixated. PMID:23922445

  8. Computational assessment of visual search strategies in volumetric medical images.

    Science.gov (United States)

    Wen, Gezheng; Aizenman, Avigael; Drew, Trafton; Wolfe, Jeremy M; Haygood, Tamara Miner; Markey, Mia K

    2016-01-01

    When searching through volumetric images [e.g., computed tomography (CT)], radiologists appear to use two different search strategies: "drilling" (restrict eye movements to a small region of the image while quickly scrolling through slices), or "scanning" (search over large areas at a given depth before moving on to the next slice). To computationally identify the type of image information that is used in these two strategies, 23 naïve observers were instructed with either "drilling" or "scanning" when searching for target T's in 20 volumes of faux lung CTs. We computed saliency maps using both classical two-dimensional (2-D) saliency, and a three-dimensional (3-D) dynamic saliency that captures the characteristics of scrolling through slices. Comparing observers' gaze distributions with the saliency maps showed that search strategy alters the type of saliency that attracts fixations. Drillers' fixations aligned better with dynamic saliency and scanners with 2-D saliency. The computed saliency was greater for detected targets than for missed targets. Similar results were observed in data from 19 radiologists who searched five stacks of clinical chest CTs for lung nodules. Dynamic saliency may be superior to the 2-D saliency for detecting targets embedded in volumetric images, and thus "drilling" may be more efficient than "scanning."
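
    For intuition, a crude 2-D center-surround saliency map of the kind compared against fixations can be computed with a difference of Gaussians; this is not the authors' exact pipeline, and the parameters below are arbitrary:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def saliency_2d(img, center_sigma=2.0, surround_sigma=8.0):
        # Center-surround contrast: pixels differing from their local
        # neighborhood (difference of Gaussians) score as salient.
        center = gaussian_filter(img, center_sigma)
        surround = gaussian_filter(img, surround_sigma)
        s = np.abs(center - surround)
        return (s - s.min()) / (np.ptp(s) + 1e-9)   # normalize to [0, 1]

    slice_img = np.random.rand(512, 512)            # stand-in for one CT slice
    smap = saliency_2d(slice_img)
    print(smap.shape, float(smap.max()))
    ```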

  9. [Benefits of volumetric to facial rejuvenation. Part 1: Fat grafting].

    Science.gov (United States)

    Bui, P; Lepage, C

    2017-10-01

    For a number of years, a volumetric approach using autologous fat injection has been implemented to improve the cosmetic outcome of face-lift procedures and to achieve lasting rejuvenation. Autologous fat has been used as a filling tissue in plastic surgery since the late 19th century, but has only recently been associated with face-lift procedures. The interest of the association lies on the one hand in the pathophysiology of facial aging, involving skin sag and loss of volume, and on the other hand in the tissue-induction properties of grafted fat, "rejuvenating" the injected area. The strict methodology consisting in harvesting, treating, then injecting an autologous fat graft is known as LipoStructure® or lipofilling. Here we describe the technique overall, then region by region. It is now well known and seems simple, effective and reproducible, but is nevertheless delicate. For each individual, it is necessary to restore a harmonious face with well-distributed volumes. By combining a volumetric approach with the face-lift procedure, the plastic surgeon plays a new role: instead of being a tailor, cutting away excess skin, he or she becomes a sculptor, remodeling the face to restore the harmony of youth. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  10. Volumetric three-dimensional display system with rasterization hardware

    Science.gov (United States)

    Favalora, Gregg E.; Dorval, Rick K.; Hall, Deirdre M.; Giovinco, Michael; Napoli, Joshua

    2001-06-01

    An 8-color multiplanar volumetric display is being developed by Actuality Systems, Inc. It will be capable of utilizing an image volume greater than 90 million voxels, which we believe is the greatest utilizable voxel set of any volumetric display constructed to date. The display is designed to be used for molecular visualization, mechanical CAD, e-commerce, entertainment, and medical imaging. As such, it contains a new graphics processing architecture, novel high-performance line-drawing algorithms, and an API similar to a current standard. Three-dimensional imagery is created by projecting a series of 2-D bitmaps ('image slices') onto a diffuse screen that rotates at 600 rpm. Persistence of vision fuses the slices into a volume-filling 3-D image. A modified three-panel Texas Instruments projector provides slices at approximately 4 kHz, resulting in 8-color 3-D imagery comprised of roughly 200 radially-disposed slices which are updated at 20 Hz. Each slice has a resolution of 768 by 768 pixels, subtending 10 inches. An unusual off-axis projection scheme incorporating tilted rotating optics is used to maintain good focus across the projection screen. The display electronics includes a custom rasterization architecture which converts the user's 3-D geometry data into image slices, as well as 6 Gbits of DDR SDRAM graphics memory.

  11. Three-dimensional volumetric assessment of response to treatment

    International Nuclear Information System (INIS)

    Willett, C.G.; Stracher, M.A.; Linggood, R.M.; Leong, J.C.; Skates, S.J.; Miketic, L.M.; Kushner, D.C.; Jacobson, J.O.

    1988-01-01

    From 1981 to 1986, 12 patients with Stage I and II diffuse large cell lymphoma of the mediastinum were treated with 4 or more cycles of multiagent chemotherapy; for nine patients this was followed by mediastinal irradiation. The response to treatment was assessed by three-dimensional volumetric analysis utilizing thoracic CT scans. The initial mean tumor volume of the five patients who relapsed was 540 ml, in contrast to an initial mean tumor volume of 360 ml for the seven patients remaining in remission. Of the eight patients in whom mediastinal lymphoma volumes could be assessed 1-2 months after chemotherapy, prior to mediastinal irradiation, the three patients who relapsed had volumes of 292, 92 and 50 ml (mean volume 145 ml), in contrast to the five patients who remained in remission with residual volume abnormalities of 4-87 ml (mean volume 32 ml). Four patients in prolonged remission with CT scans taken one year after treatment were noted to have mediastinal tumor volumes of 0-28 ml with a mean value of 10 ml. This volumetric technique for assessing the extent of mediastinal large cell lymphoma from thoracic CT scans appears to be a useful method to quantitate the amount of disease at presentation as well as to objectively monitor response to treatment. 13 refs.; 2 figs.; 1 table

  12. A spiral-based volumetric acquisition for MR temperature imaging.

    Science.gov (United States)

    Fielden, Samuel W; Feng, Xue; Zhao, Li; Miller, G Wilson; Geeslin, Matthew; Dallapiazza, Robert F; Elias, W Jeffrey; Wintermark, Max; Butts Pauly, Kim; Meyer, Craig H

    2018-06-01

    To develop a rapid pulse sequence for volumetric MR thermometry. Simulations were carried out to assess temperature deviation, focal spot distortion/blurring, and focal spot shift across a range of readout durations and maximum temperatures for Cartesian, spiral-out, and retraced spiral-in/out (RIO) trajectories. The RIO trajectory was applied for stack-of-spirals 3D imaging on a real-time imaging platform, and a preliminary evaluation against a standard 2D sequence was carried out in vivo using a swine brain model, comparing the maximum and mean temperatures measured by the two methods, as well as the temporal standard deviation measured by the two methods. In simulations, low-bandwidth Cartesian trajectories showed a substantial shift of the focal spot, whereas both spiral trajectories showed no shift while maintaining focal spot geometry. In vivo, the 3D sequence achieved real-time 4D monitoring of thermometry, with an update time of 2.9-3.3 s. Spiral imaging, and RIO imaging in particular, is an effective way to speed up volumetric MR thermometry. Magn Reson Med 79:3122-3127, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  13. Spatio-volumetric hazard estimation in the Auckland volcanic field

    Science.gov (United States)

    Bebbington, Mark S.

    2015-05-01

    The idea of a volcanic field 'boundary' is prevalent in the literature, but ill-defined at best. We use the elliptically constrained vents in the Auckland Volcanic Field to examine how spatial intensity models can be tested to assess whether they are consistent with such features. A means of modifying the anisotropic Gaussian kernel density estimate to reflect the existence of a 'hard' boundary is then suggested, and the result is shown to reproduce the observed elliptical distribution. A new idea, that of a spatio-volumetric model, is introduced as being more relevant to hazard in a monogenetic volcanic field than the spatiotemporal hazard model due to the low temporal rates in volcanic fields. Significant dependencies between the locations and erupted volumes of the observed centres are deduced, and expressed in the form of a spatially-varying probability density. In the future, larger volumes are to be expected in the 'gaps' between existing centres, with the location of the greatest forecast volume lying in the shipping channel between Rangitoto and Castor Bay. The results argue for tectonic control over location and magmatic control over erupted volume. The spatio-volumetric model is consistent with the hypothesis of a flat elliptical area in the mantle where tensional stresses, related to the local tectonics and geology, allow decompressional melting.
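
    One simple way to impose a 'hard' boundary on a Gaussian kernel density estimate is to truncate and renormalize it, as sketched below with an invented elliptical boundary and synthetic vent locations (Bebbington's anisotropic estimator is more involved):

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # Hypothetical vent locations (km) inside an elliptical field boundary.
    rng = np.random.default_rng(1)
    vents = rng.normal(0.0, [8.0, 4.0], size=(50, 2)).T      # shape (2, n)
    kde = gaussian_kde(vents)

    # Evaluate on a grid, zero the density outside the ellipse, renormalize.
    x, y = np.meshgrid(np.linspace(-20, 20, 200), np.linspace(-10, 10, 100))
    dens = kde(np.vstack([x.ravel(), y.ravel()])).reshape(x.shape)
    a, b = 18.0, 8.0                                         # ellipse semi-axes
    inside = (x / a) ** 2 + (y / b) ** 2 <= 1.0
    dens[~inside] = 0.0
    cell = (x[0, 1] - x[0, 0]) * (y[1, 0] - y[0, 0])
    dens /= dens.sum() * cell                                # re-integrate to 1
    ```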

  14. Performance-scalable volumetric data classification for online industrial inspection

    Science.gov (United States)

    Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.

    2002-03-01

    Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.
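
    The classical (single-stage) Hough ellipse detection that underlies the approach can be reproduced with scikit-image; the paper's faster two-stage variant is not available there, and the parameter values below are not from the source:

    ```python
    import numpy as np
    from skimage.draw import ellipse_perimeter
    from skimage.transform import hough_ellipse

    # Synthetic 80x80 slice containing one elliptical cross-section.
    img = np.zeros((80, 80), dtype=np.uint8)
    rr, cc = ellipse_perimeter(40, 40, 12, 20)
    img[rr, cc] = 1

    # Nonzero pixels are treated as edge points by hough_ellipse.
    result = hough_ellipse(img, accuracy=20, threshold=4, min_size=5, max_size=60)
    result.sort(order="accumulator")            # best candidate last
    best = result[-1]
    print("center=(%.0f, %.0f) half-axes=(%.0f, %.0f)"
          % (best["yc"], best["xc"], best["a"], best["b"]))
    ```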

  15. Volumetric Synthetic Aperture Imaging with a Piezoelectric 2-D Row-Column Probe

    DEFF Research Database (Denmark)

    Bouzari, Hamed; Engholm, Mathias; Christiansen, Thomas Lehrmann

    2016-01-01

    The synthetic aperture (SA) technique can be used for achieving real-time volumetric ultrasound imaging using 2-D row-column addressed transducers. This paper investigates the SA volumetric imaging performance of an in-house prototyped 3 MHz λ/2-pitch 62+62 element piezoelectric 2-D row-column addressed transducer.

  16. Layering of Structure in the North American Upper Mantle: Combining Short Period Constraints and Full Waveform Tomography

    Science.gov (United States)

    Roy, C.; Calo, M.; Bodin, T.; Romanowicz, B. A.

    2016-12-01

    Recent receiver function (RF) studies of the North American craton suggest the presence of layering within the cratonic lithosphere, with significant lateral variations in depth. However, the location and character of these discontinuities depend on assumptions made about a background 3D velocity model. On the other hand, the implementation of the Spectral Element Method (SEM) for the computation of the seismic wavefield in 3D structures is allowing improved resolution of volumetric structure in full waveform tomography. The corresponding computations are however very heavy and limit our ability to attain short enough periods to resolve short-scale features such as the existence and lateral variations of discontinuities. In order to overcome these limitations, we have developed a methodology that combines full waveform inversion tomography and information provided by short-period seismic observables. In a first step, we constructed a 3D discontinuous radially anisotropic starting model combining 1D models calculated using RF and Love and Rayleigh wave dispersion data in a Bayesian framework using trans-dimensional MCMC inversion at a collection of 30 stations across the North American continent (Calò et al., 2016). This model was then interpolated and smoothed using a procedure based on residual homogenization (Capdeville et al., 2013) and serves as the input model for full waveform tomography using a three-component waveform dataset previously collected (Yuan et al., 2014). The homogenization is necessary to avoid meshing problems and heavy SEM computations. In a second step, several iterations of the full waveform inversion are performed until convergence, using a regional SEM code for forward computations (RegSEM, Cupillard et al., 2012). Results of the inversion are volumetric velocity perturbations around the homogenized starting model, which are then added to the discontinuous 3D starting model. The final result is a multiscale discontinuous model containing both short and

  17. Methodological Details and Full Bibliography

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset has several components. The first part fully describes our literature review, providing details not included in the text. The second part provides all...

  18. Measurement and genetics of human subcortical and hippocampal asymmetries in large datasets.

    Science.gov (United States)

    Guadalupe, Tulio; Zwiers, Marcel P; Teumer, Alexander; Wittfeld, Katharina; Vasquez, Alejandro Arias; Hoogman, Martine; Hagoort, Peter; Fernandez, Guillen; Buitelaar, Jan; Hegenscheid, Katrin; Völzke, Henry; Franke, Barbara; Fisher, Simon E; Grabe, Hans J; Francks, Clyde

    2014-07-01

    Functional and anatomical asymmetries are prevalent features of the human brain, linked to gender, handedness, and cognition. However, little is known about the neurodevelopmental processes involved. In zebrafish, asymmetries arise in the diencephalon before extending within the central nervous system. We aimed to identify genes involved in the development of subtle, left-right volumetric asymmetries of human subcortical structures using large datasets. We first tested the feasibility of measuring left-right volume differences in such large-scale samples, as assessed by two automated methods of subcortical segmentation (FSL|FIRST and FreeSurfer), using data from 235 subjects who had undergone MRI twice. We tested the agreement between the first and second scan, and the agreement between the segmentation methods, for measures of bilateral volumes of six subcortical structures and the hippocampus, and their volumetric asymmetries. We also tested whether there were biases introduced by left-right differences in the regional atlases used by the methods, by analyzing left-right flipped images. While many bilateral volumes were measured well (scan-rescan r = 0.6-0.8), most asymmetries, with the exception of the caudate nucleus, showed lower repeatabilities. We meta-analyzed genome-wide association scan results for caudate nucleus asymmetry in a combined sample of 3,028 adult subjects but did not detect associations at genome-wide significance (P < 5 × 10^-8), nor enrichment of association in genes involved in left-right patterning of the viscera. Our results provide important information for researchers who are currently aiming to carry out large-scale genome-wide studies of subcortical and hippocampal volumes, and their asymmetries. Copyright © 2013 Wiley Periodicals, Inc.

  19. Full Service Leasing

    OpenAIRE

    Richter, Ján

    2009-01-01

    The aim of this master's thesis is to describe Full Service Leasing as a modern form of financing and managing assets, primarily automobile fleets. The description of full service leasing is designed as a comprehensive and complete guide to support the reader when deciding to finance and manage a fleet through this service. Whether the reader is an entrepreneur, CFO, fleet manager, a new employee of a leasing company, or anyone else interested in this service, this master's thesis wil...

  20. Minimum pricing of alcohol versus volumetric taxation: which policy will reduce heavy consumption without adversely affecting light and moderate consumers?

    Directory of Open Access Journals (Sweden)

    Anurag Sharma

    Full Text Available We estimate the effect on light, moderate and heavy consumers of alcohol of implementing a minimum unit price for alcohol (MUP) compared with a uniform volumetric tax. We analyse scanner data from a panel survey of demographically representative households (n = 885) collected over a one-year period (24 Jan 2010-22 Jan 2011) in the state of Victoria, Australia, which includes detailed records of each household's off-trade alcohol purchasing. The heaviest consumers (3% of the sample) currently purchase 20% of the total litres of alcohol (LALs), are more likely to purchase cask wine and full strength beer, and pay significantly less on average per standard drink compared to the lightest consumers (A$1.31 [95% CI 1.20-1.41] compared to $2.21 [95% CI 2.10-2.31]). Applying a MUP of A$1 per standard drink has a greater effect on reducing the mean annual volume of alcohol purchased by the heaviest consumers of wine (15.78 LALs [95% CI 14.86-16.69]) and beer (1.85 LALs [95% CI 1.64-2.05]) compared to a uniform volumetric tax (9.56 LALs [95% CI 9.10-10.01] and 0.49 LALs [95% CI 0.46-0.41], respectively). A MUP results in smaller increases in the annual cost for the heaviest consumers of wine ($393.60 [95% CI 374.19-413.00]) and beer ($108.26 [95% CI 94.76-121.75]), compared to a uniform volumetric tax ($552.46 [95% CI 530.55-574.36] and $163.92 [95% CI 152.79-175.03], respectively). Both a MUP and a uniform volumetric tax have little effect on changing the annual cost of wine and beer for light and moderate consumers, and likewise little effect upon their purchasing. While both a MUP and a uniform volumetric tax have the potential to reduce heavy consumption of wine and beer without adversely affecting light and moderate consumers, a MUP offers the potential to achieve greater reductions in heavy consumption at a lower overall annual cost to consumers.

  1. Study of a spherical torus based volumetric neutron source for nuclear technology testing and development

    International Nuclear Information System (INIS)

    Cheng, E.T.; Cerbone, R.J.; Sviatoslavsky, I.N.; Galambos, L.D.; Peng, Y.-K.M.

    2000-01-01

    A plasma-based, deuterium and tritium (DT) fueled, volumetric 14 MeV neutron source (VNS) has been considered as a possible facility to support the development of the demonstration fusion power reactor (DEMO). It can be used to test and develop the necessary fusion blanket and divertor components and provide a sufficient database, particularly on the reliability of nuclear components necessary for DEMO. The VNS device can complement ITER by reducing the cost and risk in the development of DEMO. A low-cost, scientifically attractive, and technologically feasible volumetric neutron source based on the spherical torus (ST) concept has been conceived. The ST-VNS, which has a major radius of 1.07 m, aspect ratio 1.4, and plasma elongation three, can produce a neutron wall loading from 0.5 to 5 MW m^-2 at the outboard test section with a modest fusion power level from 38 to 380 MW. It can be used to test the nuclear technologies necessary for a fusion power reactor and to develop fusion core components, including the divertor, first wall, and power blanket. Using staged operation leading to high neutron wall loading and optimistic availability, a neutron fluence of more than 30 MW year m^-2 is obtainable within 20 years of operation. This will permit assessment of the lifetime and reliability of promising fusion core components in a reactor-relevant environment. A full-scale demonstration of power reactor fusion core components is also made possible by the high neutron wall loading capability. Tritium breeding in such a full-scale demonstration can be very useful to ensure the self-sufficiency of the fuel cycle for a candidate power blanket concept

  2. NGO Presence and Activity in Afghanistan, 2000–2014: A Provincial-Level Dataset

    Directory of Open Access Journals (Sweden)

    David F. Mitchell

    2017-06-01

    Full Text Available This article introduces a new provincial-level dataset on non-governmental organizations (NGOs) in Afghanistan. The data—which are freely available for download—provide information on the locations and sectors of activity of 891 international and local (Afghan) NGOs that operated in the country between 2000 and 2014. A summary and visualization of the data are presented in the article following a brief historical overview of NGOs in Afghanistan. Links to download the full dataset are provided in the conclusion.

  3. An Automatic Matcher and Linker for Transportation Datasets

    Directory of Open Access Journals (Sweden)

    Ali Masri

    2017-01-01

    Full Text Available Multimodality requires the integration of heterogeneous transportation data to construct a broad view of the transportation network. Many new transportation services are emerging while being isolated from previously-existing networks. This leads them to publish their data sources to the web, according to linked data principles, in order to gain visibility. Our interest is to use these data to construct an extended transportation network that links these new services to existing ones. The main problems we tackle in this article fall in the categories of automatic schema matching and data interlinking. We propose an approach that uses web services as mediators to help in automatically detecting geospatial properties and mapping them between two different schemas. On the other hand, we propose a new interlinking approach that enables the user to define rich semantic links between datasets in a flexible and customizable way.

  4. xarray: N-D labeled Arrays and Datasets in Python

    Directory of Open Access Journals (Sweden)

    Stephan Hoyer

    2017-04-01

    Full Text Available xarray is an open source project and Python package that provides a toolkit and data structures for N-dimensional labeled arrays. Our approach combines an application programming interface (API) inspired by pandas with the Common Data Model for self-described scientific data. Key features of the xarray package include label-based indexing and arithmetic, interoperability with the core scientific Python packages (e.g., pandas, NumPy, Matplotlib), out-of-core computation on datasets that don't fit into memory, a wide range of serialization and input/output (I/O) options, and advanced multi-dimensional data manipulation tools such as group-by and resampling. xarray, as a data model and analytics toolkit, has been widely adopted in the geoscience community but is also used more broadly for multi-dimensional data analysis in physics, machine learning and finance.
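
    A short example of the features named above, using the public xarray API (the dataset itself is synthetic):

    ```python
    import numpy as np
    import pandas as pd
    import xarray as xr

    # A labeled 3-D array: daily temperature on a small lat/lon grid.
    times = pd.date_range("2017-01-01", periods=365)
    da = xr.DataArray(
        15 + 8 * np.random.randn(4, 3, 365),
        dims=("lat", "lon", "time"),
        coords={"lat": [10, 20, 30, 40], "lon": [100, 110, 120], "time": times},
        name="temperature",
    )

    # Label-based indexing and arithmetic.
    summer = da.sel(lat=20, time=slice("2017-06-01", "2017-08-31"))
    anomaly = da - da.mean("time")

    # Group-by and resampling (newer pandas/xarray may prefer "ME" over "1M").
    monthly = da.resample(time="1M").mean()
    by_season = da.groupby("time.season").mean("time")
    print(monthly.sizes, by_season.coords["season"].values)
    ```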

  5. The wildland-urban interface raster dataset of Catalonia

    Directory of Open Access Journals (Sweden)

    Fermín J. Alcasena

    2018-04-01

    Full Text Available We provide the wildland-urban interface (WUI) map of the autonomous community of Catalonia (Northeastern Spain). The map encompasses an area of some 3.21 million ha and is presented as a 150-m resolution raster dataset. Individual housing location, structure density and vegetation cover data were used to spatially assess in detail the interface, intermix and dispersed rural WUI communities with a geographical information system. Most WUI areas are concentrated in the coastal belt, where suburban sprawl has occurred nearby or within unmanaged forests. This geospatial dataset provides an approximation of the potential for residential housing loss in a wildfire, and represents a valuable contribution to assist landscape and urban planning in the region. Keywords: Wildland-urban interface, Wildfire risk, Urban planning, Human communities, Catalonia

  6. Survey dataset on occupational hazards on construction sites

    Directory of Open Access Journals (Sweden)

    Patience F. Tunji-Olayeni

    2018-06-01

    Full Text Available The construction site presents unfriendly working conditions, exposing workers to one of the harshest environments at a workplace. In this dataset, a structured questionnaire was designed and administered to thirty-five (35) craftsmen, selected through a purposive sampling technique, on various construction sites in one of the most populous cities in sub-Saharan Africa. The set of descriptive statistics is presented with tables, stacked bar charts and pie charts. Common occupational health conditions affecting the cardiovascular, respiratory and musculoskeletal systems of craftsmen on construction sites were identified. The effects of occupational health hazards on craftsmen and on construction project performance can be determined when the data are analyzed. Moreover, contractors' commitment to occupational health and safety (OHS) can be obtained from the analysis of the survey data. Keywords: Accidents, Construction industry, Craftsmen, Health, Occupational hazards

  7. Orthology detection combining clustering and synteny for very large datasets.

    Directory of Open Access Journals (Sweden)

    Marcus Lechner

    Full Text Available The elucidation of orthology relationships is an important step both in gene function prediction as well as towards understanding patterns of sequence evolution. Orthology assignments are usually derived directly from sequence similarities for large datasets because more exact approaches exhibit too high computational costs. Here we present PoFF, an extension for the standalone tool Proteinortho, which enhances orthology detection by combining clustering, sequence similarity, and synteny. In the course of this work, FFAdj-MCS, a heuristic that assesses pairwise gene order using adjacencies (a similarity measure related to the breakpoint distance), was adapted to support multiple linear chromosomes and extended to detect duplicated regions. PoFF largely reduces the number of false positives and enables more fine-grained predictions than purely similarity-based approaches. The extension maintains the low memory requirements and the efficient concurrency options of its basis Proteinortho, making the software applicable to very large datasets.

  8. Compressive full waveform lidar

    Science.gov (United States)

    Yang, Weiyi; Ke, Jun

    2017-05-01

    To avoid a high-bandwidth detector, a fast A/D converter, and a large memory disk, a compressive full waveform LIDAR system, which uses a temporally modulated laser instead of a pulsed laser, is studied in this paper. Full waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns are used to modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed to make measurements. The SPIRAL algorithm with a canonical basis is employed when Poisson noise is considered under low-illumination conditions.
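
    A minimal sketch of the compressive measurement model and a sparse reconstruction; plain ISTA is used here instead of the SPIRAL algorithm the paper employs, and all sizes are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 512, 128                      # waveform length, number of measurements

    # Sparse "true" full waveform: a few return peaks along the range axis.
    x = np.zeros(n)
    x[[60, 200, 430]] = [1.0, 0.6, 0.3]

    A = rng.integers(0, 2, size=(m, n)).astype(float)   # random binary patterns
    y = A @ x + 0.01 * rng.standard_normal(m)           # compressive measurements

    # ISTA: a simple sparse solver (SPIRAL is better suited to Poisson noise).
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    lam, xk = 0.05, np.zeros(n)
    for _ in range(500):
        g = xk - (A.T @ (A @ xk - y)) / L
        xk = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

    print("recovered peaks:", np.nonzero(xk > 0.1)[0])
    ```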

  9. Gravimetric and volumetric determination of the purity of electrolytically refined silver and the produced silver nitrate

    Directory of Open Access Journals (Sweden)

    Ačanski Marijana M.

    2007-01-01

    Full Text Available Silver is, along with gold and the platinum-group metals, one of the so-called precious metals. Because of its comparative scarcity, brilliant white color, malleability and resistance to atmospheric oxidation, silver has been used in the manufacture of coins and jewelry for a long time. Silver has the highest known electrical and thermal conductivity of all metals and is used in fabricating printed electrical circuits, and also as a coating for electronic conductors. It is also alloyed with other elements such as nickel or palladium for use in electrical contacts. The most useful silver salt is silver nitrate, a caustic chemical reagent, significant as an antiseptic and as a reagent in analytical chemistry. Pure silver nitrate is an intermediate in the industrial preparation of other silver salts, including the colloidal silver compounds used in medicine and the silver halides incorporated into photographic emulsions. Silver halides become increasingly insoluble in the series: AgCl, AgBr, AgI. All silver salts are sensitive to light and are used in photographic coatings on film and paper. The ZORKA-PHARMA company (Sabac, Serbia) specializes in the production of pharmaceutical remedies and lab chemicals. One of its products is the chemical silver nitrate (argentum nitricum). Silver nitrate is generally produced by dissolving pure electrolytically refined silver in hot 48% nitric acid. Since the purity of silver nitrate produced in 2002 was not in compliance with the p.a. level of purity, there was doubt that the electrolytically refined silver was pure. The aim of this research was the gravimetric and volumetric determination of the purity of electrolytically refined silver and silver nitrate, produced industrially and in a laboratory. The purity determination was carried out gravimetrically, by the sedimentation of silver(I) ions in the form of insoluble silver salts: AgCl, AgBr and AgI, and volumetrically, according to Mohr and Volhard. The

  10. CLARA-A1: a cloud, albedo, and radiation dataset from 28 yr of global AVHRR data

    Directory of Open Access Journals (Sweden)

    K.-G. Karlsson

    2013-05-01

    Full Text Available A new satellite-derived climate dataset – denoted CLARA-A1 ("The CM SAF cLoud, Albedo and RAdiation dataset from AVHRR data") – is described. The dataset covers the 28 yr period from 1982 until 2009 and consists of cloud, surface albedo, and radiation budget products derived from the AVHRR (Advanced Very High Resolution Radiometer) sensor carried by polar-orbiting operational meteorological satellites. Its content, anticipated accuracies, limitations, and potential applications are described. The dataset is produced by the EUMETSAT Climate Monitoring Satellite Application Facility (CM SAF) project. The dataset has its strengths in its long duration, its foundation upon a homogenized AVHRR radiance data record, and in some unique features, e.g. the availability of 28 yr of summer surface albedo and cloudiness parameters over the polar regions. Quality characteristics are also well investigated, and particularly useful results can be found over the tropics, mid to high latitudes and over nearly all oceanic areas. Being the first CM SAF dataset of its kind, an intensive evaluation of the quality of the datasets was performed and major findings with regard to merits and shortcomings of the datasets are reported. However, the CM SAF's long-term commitment to perform two additional reprocessing events within the time frame 2013–2018 will allow proper handling of limitations as well as upgrading the dataset with new features (e.g. uncertainty estimates) and extension of the temporal coverage.

  11. Creating a Regional MODIS Satellite-Driven Net Primary Production Dataset for European Forests

    Directory of Open Access Journals (Sweden)

    Mathias Neumann

    2016-06-01

    Full Text Available Net primary production (NPP) is an important ecological metric for studying forest ecosystems and their carbon sequestration, for assessing the potential supply of food or timber and for quantifying the impacts of climate change on ecosystems. The global MODIS NPP dataset using the MOD17 algorithm provides valuable information for monitoring NPP at 1-km resolution. Since coarse-resolution global climate data are used, the global dataset may contain uncertainties for Europe. We used a 1-km daily gridded European climate dataset with the MOD17 algorithm to create the regional NPP dataset MODIS EURO. For evaluation of this new dataset, we compare MODIS EURO with terrestrially driven NPP from analyzing and harmonizing national forest inventory (NFI) data from 196,434 plots in 12 European countries, as well as with the global MODIS NPP dataset, for the years 2000 to 2012. Comparing these three NPP datasets, we found that the global MODIS NPP dataset differs from NFI NPP by 26%, while MODIS EURO only differs by 7%. MODIS EURO also agrees with NFI NPP across scales (from continental and regional to country level) and gradients (elevation, location, tree age, dominant species, etc.). The agreement is particularly good for elevation, dominant species or tree height. This suggests that using improved climate data allows the MOD17 algorithm to provide realistic NPP estimates for Europe. Local discrepancies between MODIS EURO and NFI NPP can be related to differences in stand density due to forest management and the national carbon estimation methods. With this study, we provide a consistent, temporally continuous and spatially explicit productivity dataset for the years 2000 to 2012 at a 1-km resolution, which can be used to assess climate change impacts on ecosystems or the potential biomass supply of the European forests for an increasing bio-based economy. MODIS EURO data are made freely available at ftp://palantir.boku.ac.at/Public/MODIS_EURO.

  12. Full faith in myself

    Indian Academy of Sciences (India)

    Lawrence

    Full faith in myself. Meenakshi Banerjee. I had my schooling at the Irish Convent, Loreto, in Asansol, West Bengal. Perhaps the earliest memories I have are of myself as a very determined child with a deep appreciation of and inquisitiveness regarding nature, although not understanding most of it at that tender age.

  13. Hippocampal sparing radiotherapy for glioblastoma patients: a planning study using volumetric modulated arc therapy

    International Nuclear Information System (INIS)

    Hofmaier, Jan; Kantz, Steffi; Söhn, Matthias; Dohm, Oliver S.; Bächle, Stefan; Alber, Markus; Parodi, Katia; Belka, Claus; Niyazi, Maximilian

    2016-01-01

    The purpose of this study is to investigate the potential to reduce exposure of the contralateral hippocampus in radiotherapy for glioblastoma using volumetric modulated arc therapy (VMAT). Datasets of 27 patients who had received 3D conformal radiotherapy (3D-CRT) for glioblastoma with a prescribed dose of 60 Gy in fractions of 2 Gy were included in this planning study. VMAT plans were optimized with the aim of reducing the dose to the contralateral hippocampus as much as possible without compromising other parameters. Hippocampal dose and treatment parameters were compared to the 3D-CRT plans using the Wilcoxon signed-rank test. The influence of tumour location and PTV size on the hippocampal dose was investigated with the Mann–Whitney U test and Spearman's rank correlation coefficient. The median reduction of the contralateral hippocampus generalized equivalent uniform dose (gEUD) with VMAT was 36% compared to the original 3D-CRT plans (p < 0.05). Other dose parameters were maintained or improved. The median V30Gy of the brain could be reduced by 17.9% (p < 0.05). For VMAT, a parietal and a non-temporal tumour localisation as well as a larger PTV size were predictors of a higher hippocampal dose (p < 0.05). Using VMAT, a substantial reduction of the radiotherapy dose to the contralateral hippocampus for patients with glioblastoma is feasible without compromising other treatment parameters. For larger PTV sizes, less sparing can be achieved. Whether this approach is able to preserve neurocognitive status without compromising the oncological outcome needs to be investigated in the setting of prospective clinical trials
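
    The reported metric can be computed directly from a dose-voxel sample, as sketched below; the volume-effect parameter a is tissue-specific and not stated in the abstract, so the value used here is only illustrative:

    ```python
    import numpy as np

    def geud(dose_voxels, a):
        # Generalized equivalent uniform dose: gEUD = (mean(d_i^a))^(1/a).
        # Large positive a emphasizes hot spots (serial-type structures);
        # a = 1 reduces to the mean dose.
        d = np.asarray(dose_voxels, dtype=float)
        return (np.mean(d ** a)) ** (1.0 / a)

    # Toy hippocampus dose-voxel sample (Gy); a = 8 is an assumption.
    doses = np.random.gamma(shape=4.0, scale=2.0, size=10000)
    print(f"mean dose: {doses.mean():.2f} Gy, gEUD(a=8): {geud(doses, 8):.2f} Gy")
    ```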

  14. Global-scale evaluation of 22 precipitation datasets using gauge observations and hydrological modeling

    Directory of Open Access Journals (Sweden)

    H. E. Beck

    2017-12-01

    Full Text Available We undertook a comprehensive evaluation of 22 gridded (quasi-)global (sub-)daily precipitation (P) datasets for the period 2000–2016. Thirteen non-gauge-corrected P datasets were evaluated using daily P gauge observations from 76 086 gauges worldwide. Another nine gauge-corrected datasets were evaluated using hydrological modeling, by calibrating the HBV conceptual model against streamflow records for each of 9053 small to medium-sized (< 50 000 km2) catchments worldwide, and comparing the resulting performance. Marked differences in spatio-temporal patterns and accuracy were found among the datasets. Among the uncorrected P datasets, the satellite- and reanalysis-based MSWEP-ng V1.2 and V2.0 datasets generally showed the best temporal correlations with the gauge observations, followed by the reanalyses (ERA-Interim, JRA-55, and NCEP-CFSR) and the satellite- and reanalysis-based CHIRP V2.0 dataset, the estimates based primarily on passive microwave remote sensing of rainfall (CMORPH V1.0, GSMaP V5/6, and TMPA 3B42RT V7) or near-surface soil moisture (SM2RAIN-ASCAT), and finally, estimates based primarily on thermal infrared imagery (GridSat V1.0, PERSIANN, and PERSIANN-CCS). Two of the three reanalyses (ERA-Interim and JRA-55) unexpectedly obtained lower trend errors than the satellite datasets. Among the corrected P datasets, the ones directly incorporating daily gauge data (CPC Unified, and MSWEP V1.2 and V2.0) generally provided the best calibration scores, although the good performance of the fully gauge-based CPC Unified is unlikely to translate to sparsely gauged or ungauged regions. Next best results were obtained with P estimates directly incorporating temporally coarser gauge data (CHIRPS V2.0, GPCP-1DD V1.2, TMPA 3B42 V7, and WFDEI-CRU), which in turn outperformed the one indirectly incorporating gauge data through another multi-source dataset (PERSIANN-CDR V1R1). Our results highlight large differences in estimation accuracy

  15. Natural convection in wavy enclosures with volumetric heat sources

    International Nuclear Information System (INIS)

    Oztop, H.F.; Varol, Y.; Abu-Nada, E.; Chamkha, A.

    2011-01-01

    In this paper, the effects of volumetric heat sources on natural convection heat transfer and flow structures in a wavy-walled enclosure are studied numerically. The governing differential equations are solved by an accurate finite-volume method. The vertical walls of enclosure are assumed to be heated differentially whereas the two wavy walls (top and bottom) are kept adiabatic. The effective governing parameters for this problem are the internal and external Rayleigh numbers and the amplitude of wavy walls. It is found that both the function of wavy wall and the ratio of internal Rayleigh number (Ra I ) to external Rayleigh number (Ra E ) affect the heat transfer and fluid flow significantly. The heat transfer is predicted to be a decreasing function of waviness of the top and bottom walls in case of (IRa/ERa)>1 and (IRa/ERa)<1. (authors)

  16. Quantitative volumetric Raman imaging of three dimensional cell cultures

    KAUST Repository

    Kallepitis, Charalambos

    2017-03-22

    The ability to simultaneously image multiple biomolecules in biologically relevant three-dimensional (3D) cell culture environments would contribute greatly to the understanding of complex cellular mechanisms and cell–material interactions. Here, we present a computational framework for label-free quantitative volumetric Raman imaging (qVRI). We apply qVRI to a selection of biological systems: human pluripotent stem cells with their cardiac derivatives, monocytes and monocyte-derived macrophages in conventional cell culture systems and mesenchymal stem cells inside biomimetic hydrogels that supplied a 3D cell culture environment. We demonstrate visualization and quantification of fine details in cell shape, cytoplasm, nucleus, lipid bodies and cytoskeletal structures in 3D with unprecedented biomolecular specificity for vibrational microspectroscopy.

  17. Thermal expansion and volumetric changes during indium phosphide melting

    International Nuclear Information System (INIS)

    Glazov, V.M.; Davletov, K.; Nashel'skij, A.Ya.; Mamedov, M.M.

    1977-01-01

    The results of the thermal expansion measurements at various temperatures were summarized as a diagram in the coordinates (Δl/l) ≈ f(t). It was shown that an appreciable deviation of the relationship (Δl/l) ≈ f(t) from the linear law corresponded to a temperature of 500-550 deg C. It was noted that this deviation is related to an appreciable thermal decomposition of indium phosphide as temperature increases. The strength of the inter-atomic bond of indium phosphide was calculated. The volumetric changes of indium phosphide on melting were also investigated. The resultant data were analyzed with the aid of the Clausius-Clapeyron equation.
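
    The Clausius-Clapeyron equation mentioned above relates the measured volume change on melting to the pressure dependence of the melting point; in LaTeX notation:

        \frac{dP}{dT_m} = \frac{\Delta H_m}{T_m \, \Delta V_m}

    so a measured volume change ΔV_m, together with the latent heat of melting ΔH_m at the melting temperature T_m, fixes the slope dP/dT_m of the melting curve.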

  18. Volumetric dispenser for small particles from plural sources

    International Nuclear Information System (INIS)

    Bradley, R.A.; Miller, W.H.; Sease, J.D.

    1975-01-01

    Apparatus is described for rapidly and accurately dispensing measured volumes of small particles from a supply hopper. The apparatus includes an adjustable, vertically oriented measuring tube and orifice member defining the volume to be dispensed, a ball plug valve for selectively closing the bottom end of the orifice member, and a compression valve for selectively closing the top end of the measuring tube. A supply hopper is disposed above and in gravity flow communication with the measuring tube. Properly sequenced opening and closing of the two valves provides accurate volumetric discharge through the ball plug valve. A dispensing system is described wherein several appropriately sized measuring tubes, orifice members, and associated valves are arranged to operate contemporaneously to facilitate blending of different particles

  19. Optimization approaches to volumetric modulated arc therapy planning

    Energy Technology Data Exchange (ETDEWEB)

    Unkelbach, Jan, E-mail: junkelbach@mgh.harvard.edu; Bortfeld, Thomas; Craft, David [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Alber, Markus [Department of Medical Physics and Department of Radiation Oncology, Aarhus University Hospital, Aarhus C DK-8000 (Denmark); Bangert, Mark [Department of Medical Physics in Radiation Oncology, German Cancer Research Center, Heidelberg D-69120 (Germany); Bokrantz, Rasmus [RaySearch Laboratories, Stockholm SE-111 34 (Sweden); Chen, Danny [Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, Indiana 46556 (United States); Li, Ruijiang; Xing, Lei [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Men, Chunhua [Department of Research, Elekta, Maryland Heights, Missouri 63043 (United States); Nill, Simeon [Joint Department of Physics at The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG (United Kingdom); Papp, Dávid [Department of Mathematics, North Carolina State University, Raleigh, North Carolina 27695 (United States); Romeijn, Edwin [H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Salari, Ehsan [Department of Industrial and Manufacturing Engineering, Wichita State University, Wichita, Kansas 67260 (United States)

    2015-03-15

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential of VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.

  20. Quantitative volumetric Raman imaging of three dimensional cell cultures

    Science.gov (United States)

    Kallepitis, Charalambos; Bergholt, Mads S.; Mazo, Manuel M.; Leonardo, Vincent; Skaalure, Stacey C.; Maynard, Stephanie A.; Stevens, Molly M.

    2017-03-01

    The ability to simultaneously image multiple biomolecules in biologically relevant three-dimensional (3D) cell culture environments would contribute greatly to the understanding of complex cellular mechanisms and cell-material interactions. Here, we present a computational framework for label-free quantitative volumetric Raman imaging (qVRI). We apply qVRI to a selection of biological systems: human pluripotent stem cells with their cardiac derivatives, monocytes and monocyte-derived macrophages in conventional cell culture systems and mesenchymal stem cells inside biomimetic hydrogels that supplied a 3D cell culture environment. We demonstrate visualization and quantification of fine details in cell shape, cytoplasm, nucleus, lipid bodies and cytoskeletal structures in 3D with unprecedented biomolecular specificity for vibrational microspectroscopy.

  1. Minimum pricing of alcohol versus volumetric taxation: which policy will reduce heavy consumption without adversely affecting light and moderate consumers?

    Science.gov (United States)

    Sharma, Anurag; Vandenberg, Brian; Hollingsworth, Bruce

    2014-01-01

    We estimate the effect on light, moderate and heavy consumers of alcohol from implementing a minimum unit price for alcohol (MUP) compared with a uniform volumetric tax. We analyse scanner data from a panel survey of demographically representative households (n = 885) collected over a one-year period (24 Jan 2010-22 Jan 2011) in the state of Victoria, Australia, which includes detailed records of each household's off-trade alcohol purchasing. The heaviest consumers (3% of the sample) currently purchase 20% of the total litres of alcohol (LALs), are more likely to purchase cask wine and full strength beer, and pay significantly less on average per standard drink compared to the lightest consumers (A$1.31 [95% CI 1.20-1.41] compared to $2.21 [95% CI 2.10-2.31]). Applying a MUP of A$1 per standard drink has a greater effect on reducing the mean annual volume of alcohol purchased by the heaviest consumers of wine (15.78 LALs [95% CI 14.86-16.69]) and beer (1.85 LALs [95% CI 1.64-2.05]) compared to a uniform volumetric tax (9.56 LALs [95% CI 9.10-10.01] and 0.49 LALs [95% CI 0.46-0.41], respectively). A MUP results in smaller increases in the annual cost for the heaviest consumers of wine ($393.60 [95% CI 374.19-413.00]) and beer ($108.26 [95% CI 94.76-121.75]), compared to a uniform volumetric tax ($552.46 [95% CI 530.55-574.36] and $163.92 [95% CI 152.79-175.03], respectively). Both a MUP and uniform volumetric tax have little effect on changing the annual cost of wine and beer for light and moderate consumers, and likewise little effect upon their purchasing. While both a MUP and a uniform volumetric tax have potential to reduce heavy consumption of wine and beer without adversely affecting light and moderate consumers, a MUP offers the potential to achieve greater reductions in heavy consumption at a lower overall annual cost to consumers.
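
    The two instruments compared above act on price in different ways: a MUP is a floor on the price per standard drink, while a volumetric tax adds a fixed amount per standard drink to every product. A minimal sketch (Python, with invented prices and an illustrative A$0.30-per-drink tax; not the authors' demand model) makes the contrast concrete:

        def price_with_mup(price, std_drinks, mup=1.00):
            # A minimum unit price acts as a floor: only products priced
            # below A$1 per standard drink become more expensive.
            return max(price, mup * std_drinks)

        def price_with_volumetric_tax(price, std_drinks, tax=0.30):
            # A uniform volumetric tax (hypothetical A$0.30 per standard
            # drink) raises the price of every product.
            return price + tax * std_drinks

        cask_wine = (12.00, 36)   # (price, standard drinks): cheap per drink
        craft_beer = (20.00, 8)   # already above the A$1 floor
        for price, drinks in (cask_wine, craft_beer):
            print(price_with_mup(price, drinks),
                  price_with_volumetric_tax(price, drinks))

    The floor leaves the already-expensive product untouched, which is why a MUP concentrates its effect on the cheap, high-alcohol products favoured by the heaviest consumers.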

  2. Microscopy Image Browser: A Platform for Segmentation and Analysis of Multidimensional Datasets.

    Directory of Open Access Journals (Sweden)

    Ilya Belevich

    2016-01-01

    Full Text Available Understanding the structure-function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data; the package is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program.

  3. Benchmark calculations for evaluation methods of gas volumetric leakage rate

    International Nuclear Information System (INIS)

    Asano, R.; Aritomi, M.; Matsuzaki, M.

    1998-01-01

    The containment function of radioactive materials transport casks is essential for safe transportation, to prevent the radioactive materials from being released into the environment. Regulations such as the IAEA standards determine the limits of radioactivity to be released. Since it is not practical for leakage tests to measure the radioactivity release from a package directly, gas volumetric leakage rates are proposed in the ANSI N14.5 and ISO standards. In our previous works, gas volumetric leakage rates for several kinds of gas from various leaks were measured and two evaluation methods, 'a simple evaluation method' and 'a strict evaluation method', were proposed based on the results. The simple evaluation method considers the friction loss of laminar flow with expansion effect. The strict evaluation method considers an exit loss in addition to the friction loss. In this study, four worked examples were completed for an assumed large spent fuel transport cask (Type B package) with wet or dry capacity and at three transport conditions: normal transport with intact fuels or failed fuels, and an accident in transport. The standard leakage rates and criteria for two kinds of leak test were calculated for each example by each evaluation method. The following observations are made based upon the calculations and evaluations: the choked flow model of the ANSI method greatly overestimates the criteria for tests; the laminar flow models of both the ANSI and ISO methods slightly overestimate the criteria for tests; the above two results are within the design margin for ordinary transport conditions and all methods are useful for the evaluation; for severe conditions such as failed fuel transportation, care should be taken when applying the choked flow model of the ANSI method. (authors)
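
    Laminar-flow leak models of the kind compared in this record are typically built on the isothermal Poiseuille result for compressible flow through a capillary; a textbook form (not necessarily the exact ANSI/ISO expression) in LaTeX notation is:

        L_{ref} = \frac{\pi D^{4} \left( P_u^{2} - P_d^{2} \right)}{256 \, \mu \, a \, P_{ref}}

    where D and a are the capillary leak diameter and length, μ the gas viscosity, P_u and P_d the upstream and downstream pressures, and L_ref the volumetric leakage rate referenced to pressure P_ref; the 'strict' method described above adds an exit-loss term to this friction-only balance.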

  4. 40 CFR 80.157 - Volumetric additive reconciliation (“VAR”), equipment calibration, and recordkeeping requirements.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Volumetric additive reconciliation (“VAR”)... ADDITIVES Detergent Gasoline § 80.157 Volumetric additive reconciliation (“VAR”), equipment calibration, and... other comparable VAR supporting documentation. (ii) For a facility which uses a gauge to measure the...

  5. 40 CFR 80.170 - Volumetric additive reconciliation (VAR), equipment calibration, and recordkeeping requirements.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Volumetric additive reconciliation... ADDITIVES Detergent Gasoline § 80.170 Volumetric additive reconciliation (VAR), equipment calibration, and... For a facility which uses a gauge to measure the inventory of the detergent storage tank, the total...

  6. Volumetric Arterial Wall Shear Stress Calculation Based on Cine Phase Contrast MRI

    NARCIS (Netherlands)

    Potters, Wouter V.; van Ooij, Pim; Marquering, Henk; VanBavel, Ed; Nederveen, Aart J.

    2015-01-01

    Purpose: To assess the accuracy and precision of a volumetric wall shear stress (WSS) calculation method applied to cine phase contrast magnetic resonance imaging (PC-MRI) data. Materials and Methods: Volumetric WSS vectors were calculated in software phantoms. WSS algorithm parameters were optimized...

  7. Plate Full of Color

    Centers for Disease Control (CDC) Podcasts

    The Eagle Books are a series of four books that are brought to life by wise animal characters - Mr. Eagle, Miss Rabbit, and Coyote - who engage Rain That Dances and his young friends in the joy of physical activity, eating healthy foods, and learning from their elders about health and diabetes prevention. Plate Full of Color teaches the value of eating a variety of colorful and healthy foods.

  8. System analysis of formation and perception processes of three-dimensional images in volumetric displays

    Science.gov (United States)

    Bolshakov, Alexander; Sgibnev, Arthur

    2018-03-01

    One of the promising devices currently is the volumetric display. Volumetric displays are capable of visualizing complex three-dimensional information as closely as possible to its natural, volumetric form without the use of special glasses. The invention and implementation of volumetric display technology will expand the opportunities for information visualization in various spheres of human activity. The article attempts to structure and describe the interrelation of the essential characteristics of objects in the area of volumetric visualization. A method is also proposed for estimating the total number of voxels perceived by observers during a 3D demonstration generated by a volumetric display with a rotating screen. In the future, it is planned to extend the described technique and implement a system for estimating the quality of generated images, depending on the types of biplanes and their initial characteristics.
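
    A back-of-envelope version of such a voxel-count estimate (purely illustrative assumptions: a flat rotating screen whose pixel grid is redrawn at a number of angular positions per revolution) could look like:

        def total_voxels(screen_w_px, screen_h_px, angular_positions):
            # Upper bound on addressable voxels for a swept-screen display:
            # every pixel at every angular slice is a potential voxel.
            return screen_w_px * screen_h_px * angular_positions

        # e.g. a 768 x 768 screen redrawn at 360 angular positions per turn
        print(total_voxels(768, 768, 360))   # about 212 million voxels

    The perception-based estimate proposed in the article would further discount voxels that a given observer cannot resolve or see, so this upper bound is only a starting point.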

  9. Design, Implementation and Characterization of a Quantum-Dot-Based Volumetric Display

    Science.gov (United States)

    Hirayama, Ryuji; Naruse, Makoto; Nakayama, Hirotaka; Tate, Naoya; Shiraki, Atsushi; Kakue, Takashi; Shimobaba, Tomoyoshi; Ohtsu, Motoichi; Ito, Tomoyoshi

    2015-02-01

    In this study, we propose and experimentally demonstrate a volumetric display system based on quantum dots (QDs) embedded in a polymer substrate. Unlike conventional volumetric displays, our system does not require electrical wiring; thus, the heretofore unavoidable issue of occlusion is resolved because irradiation by external light supplies the energy to the light-emitting voxels formed by the QDs. By exploiting the intrinsic attributes of the QDs, the system offers ultrahigh definition and a wide range of colours for volumetric displays. In this paper, we discuss the design, implementation and characterization of the proposed volumetric display's first prototype. We developed an 8 × 8 × 8 display comprising two types of QDs. This display presents three types of multicolour two-dimensional patterns when viewed from different angles. The QD-based volumetric display provides a new way to represent images and could be applied in the leisure and advertising industries, among others.

  10. Investigating the effect of clamping force on the fatigue life of bolted plates using volumetric approach

    International Nuclear Information System (INIS)

    Esmaeili, F.; Chakherlou, T. N.; Zehsaz, M.; Hasanifard, S.

    2013-01-01

    In this paper, the effects of bolt clamping force on the fatigue life of bolted plates made from Al7075-T6 have been studied using the values of the notch strength reduction factor obtained by the volumetric approach. To obtain the stress distribution around the notch (hole), which is required for the volumetric approach, nonlinear finite element simulations were carried out. To estimate the fatigue life, the available smooth S-N curve of Al7075-T6 and the notch strength reduction factor obtained from the volumetric method were used. The estimated fatigue life was compared with the available experimental test results. The investigation shows that there is good agreement between the life predicted by the volumetric approach and the experimental results for various specimens with different amounts of clamping force. The volumetric approach and the experimental results showed that the fatigue life of bolted plates improves because of the compressive stresses created around the plate hole due to the clamping force.
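
    In volumetric approaches of this general kind, the notch strength reduction factor is obtained by averaging the finite-element stress over an effective distance ahead of the notch; one common formulation (shown for orientation only, assuming the usual relative-stress-gradient weight function) is, in LaTeX notation:

        \sigma_{ef} = \frac{1}{X_{ef}} \int_{0}^{X_{ef}} \sigma_{yy}(x) \left( 1 - x \, \chi(x) \right) dx,
        \qquad
        K_f = \frac{\sigma_{ef}}{\sigma_{nom}}

    where X_ef is the effective distance, χ(x) the relative stress gradient, and σ_nom the nominal stress; the clamping-induced compressive stresses enter through the computed stress field σ_yy(x).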

  11. 3D Space Shift from CityGML LoD3-Based Multiple Building Elements to a 3D Volumetric Object

    Directory of Open Access Journals (Sweden)

    Shen Ying

    2017-01-01

    Full Text Available In contrast with photorealistic visualizations, urban landscape applications, and building information systems (BIM), 3D volumetric presentations highlight specific calculations and applications of 3D building elements for 3D city planning and 3D cadastres. Knowing the precise volumetric quantities and the 3D boundary locations of 3D building spaces is a vital index which must remain constant during data processing because the values are related to space occupation, tenure, taxes, and valuation. To meet these requirements, this paper presents a five-step algorithm for performing a 3D building space shift. This algorithm is used to convert multiple building elements into a single 3D volumetric building object while maintaining the precise volume of the 3D space and without changing the 3D locations or displacing the building boundaries. As examples, this study used input data and building elements based on City Geography Markup Language (CityGML) LoD3 models. This paper presents a method for 3D urban space and 3D property management with the goal of constructing a 3D volumetric object for an integral building using CityGML objects, by fusing the geometries of various building elements. The resulting objects possess true 3D geometry that can be represented by solid geometry and saved to a CityGML file for effective use in 3D urban planning and 3D cadastres.

  12. Combined use of biochemical and volumetric biomarkers to assess the risk of conversion of mild cognitive impairment to Alzheimer’s disease

    Directory of Open Access Journals (Sweden)

    Marta Nesteruk

    2016-12-01

    Full Text Available Introduction: The aim of our study was to evaluate the usefulness of several biomarkers in predicting the conversion of mild cognitive impairment (MCI) to Alzheimer’s disease (AD): β-amyloid and tau proteins in cerebrospinal fluid and the volumetric evaluation of brain structures including the hippocampus in magnetic resonance imaging (MRI). Material and methods: MRI of the brain with the volumetric assessment of the hippocampus, entorhinal cortex, posterior cingulate gyrus, parahippocampal gyrus, and superior, medial and inferior temporal gyri was performed in 40 patients diagnosed with mild cognitive impairment. Each patient had a lumbar puncture to evaluate β-amyloid and tau protein (total and phosphorylated) levels in the cerebrospinal fluid. The observation period was 2 years. Results: Amongst 40 patients with MCI, 9 (22.5%) converted to AD within 2 years of observation. Discriminant analysis was conducted; the sensitivity for MCI conversion to AD on the basis of volumetric measurements was 88.9% and the specificity 90.3%; on the basis of β-amyloid and total tau, the sensitivity was 77.8% and the specificity 83.9%. The combined use of the results of volumetric measurements with the results of proteins in the cerebrospinal fluid did not increase the sensitivity (88.9%) but increased the specificity to 96.8% and the percentage of correct classification to 95%.
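
    The discriminant analysis reported above is straightforward to reproduce in outline; a minimal sketch (Python/scikit-learn, with a hypothetical feature matrix standing in for the study's volumetric and CSF measures) is:

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_predict

        # X: rows = patients; columns = hippocampal volume, entorhinal
        # volume, ..., beta-amyloid, total tau (all stand-in values).
        rng = np.random.default_rng(1)
        X = rng.normal(size=(40, 6))
        y = rng.integers(0, 2, size=40)    # 1 = converted to AD in 2 years

        pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=5)
        sensitivity = (pred[y == 1] == 1).mean()
        specificity = (pred[y == 0] == 0).mean()
        print(sensitivity, specificity)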

  13. Plate Full of Color

    Centers for Disease Control (CDC) Podcasts

    2008-08-04

    The Eagle Books are a series of four books that are brought to life by wise animal characters - Mr. Eagle, Miss Rabbit, and Coyote - who engage Rain That Dances and his young friends in the joy of physical activity, eating healthy foods, and learning from their elders about health and diabetes prevention. Plate Full of Color teaches the value of eating a variety of colorful and healthy foods.  Created: 8/4/2008 by National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP).   Date Released: 8/5/2008.

  14. The Dataset of Countries at Risk of Electoral Violence

    OpenAIRE

    Birch, Sarah; Muchlinski, David

    2017-01-01

    Electoral violence is increasingly affecting elections around the world, yet researchers have been limited by a paucity of granular data on this phenomenon. This paper introduces and describes a new dataset of electoral violence – the Dataset of Countries at Risk of Electoral Violence (CREV) – that provides measures of 10 different types of electoral violence across 642 elections held around the globe between 1995 and 2013. The paper provides a detailed account of how and why the dataset was ...

  15. Norwegian Hydrological Reference Dataset for Climate Change Studies

    Energy Technology Data Exchange (ETDEWEB)

    Magnussen, Inger Helene; Killingland, Magnus; Spilde, Dag

    2012-07-01

    Based on the Norwegian hydrological measurement network, NVE has selected a Hydrological Reference Dataset for studies of hydrological change. The dataset meets international standards with high data quality. It is suitable for monitoring and studying the effects of climate change on the hydrosphere and cryosphere in Norway. The dataset includes streamflow, groundwater, snow, glacier mass balance and length change, lake ice and water temperature in rivers and lakes.(Author)

  16. BDML Datasets: 3 [SSBD[Archive

    Lifescience Database Archive (English)

    Full Text Available Santella, A., Khairy, K., Bao, Z., Wittbrodt, J., and Stelzer, E.H.K. Philipp J. Keller, European Molecular Biology Laboratory, Cell Biology and Biophysics Unit, Stelzer Laboratory. See details in Keller et al. (2010)

  17. BDML Datasets: 2 [SSBD[Archive

    Lifescience Database Archive (English)

    Full Text Available Ce_AK C. elegans cell simulation Simulation Kimura, A. and Onami, S. Shuichi Onami, RIKEN, Quantitative Biology Center, Laboratory for Developmental Dynamics. See details in Kimura, A. and Onami, S.

  18. The SAIL databank: linking multiple health and social care datasets

    Directory of Open Access Journals (Sweden)

    Ford David V

    2009-01-01

    Full Text Available Abstract Background: Vast amounts of data are collected about patients and service users in the course of health and social care service delivery. Electronic data systems for patient records have the potential to revolutionise service delivery and research. But in order to achieve this, it is essential that the ability to link the data at the individual record level be retained whilst adhering to the principles of information governance. The SAIL (Secure Anonymised Information Linkage) databank has been established using disparate datasets, and over 500 million records from multiple health and social care service providers have been loaded to date, with further growth in progress. Methods: Having established the infrastructure of the databank, the aim of this work was to develop and implement an accurate matching process to enable the assignment of a unique Anonymous Linking Field (ALF) to person-based records to make the databank ready for record-linkage research studies. An SQL-based matching algorithm (MACRAL, Matching Algorithm for Consistent Results in Anonymised Linkage) was developed for this purpose. Firstly, the suitability of using a valid NHS number as the basis of a unique identifier was assessed using MACRAL. Secondly, MACRAL was applied in turn to match primary care, secondary care and social services datasets to the NHS Administrative Register (NHSAR), to assess the efficacy of this process, and the optimum matching technique. Results: The validation of using the NHS number yielded specificity values > 99.8% and sensitivity values > 94.6% using probabilistic record linkage (PRL) at the 50% threshold, and error rates were … Conclusion: With the infrastructure that has been put in place, the reliable matching process that has been developed enables an ALF to be consistently allocated to records in the databank. The SAIL databank represents a research-ready platform for record-linkage studies.
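
    The two-stage logic described for MACRAL (deterministic matching on a valid NHS number first, then probabilistic matching on demographic fields against a threshold) can be outlined as follows, as a schematic Python sketch with invented field names and weights rather than the published algorithm:

        def match_score(rec_a, rec_b,
                        weights={"surname": 0.4, "dob": 0.4, "postcode": 0.2}):
            # Probabilistic score: weighted agreement over demographic fields.
            return sum(w for f, w in weights.items() if rec_a.get(f) == rec_b.get(f))

        def assign_alf(record, register, threshold=0.5):
            # Deterministic match on a valid NHS number first; otherwise link
            # to the best-scoring register entry above the 50% threshold.
            for entry in register:
                if record.get("nhs_number") and record["nhs_number"] == entry["nhs_number"]:
                    return entry["alf"]
            best = max(register, key=lambda e: match_score(record, e))
            return best["alf"] if match_score(record, best) >= threshold else None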

  19. Analysis of Public Datasets for Wearable Fall Detection Systems

    Directory of Open Access Journals (Sweden)

    Eduardo Casilari

    2017-06-01

    Full Text Available Due to the boom of wireless handheld devices such as smartwatches and smartphones, wearable Fall Detection Systems (FDSs have become a major focus of attention among the research community during the last years. The effectiveness of a wearable FDS must be contrasted against a wide variety of measurements obtained from inertial sensors during the occurrence of falls and Activities of Daily Living (ADLs. In this regard, the access to public databases constitutes the basis for an open and systematic assessment of fall detection techniques. This paper reviews and appraises twelve existing available data repositories containing measurements of ADLs and emulated falls envisaged for the evaluation of fall detection algorithms in wearable FDSs. The analysis of the found datasets is performed in a comprehensive way, taking into account the multiple factors involved in the definition of the testbeds deployed for the generation of the mobility samples. The study of the traces brings to light the lack of a common experimental benchmarking procedure and, consequently, the large heterogeneity of the datasets from a number of perspectives (length and number of samples, typology of the emulated falls and ADLs, characteristics of the test subjects, features and positions of the sensors, etc.. Concerning this, the statistical analysis of the samples reveals the impact of the sensor range on the reliability of the traces. In addition, the study evidences the importance of the selection of the ADLs and the need of categorizing the ADLs depending on the intensity of the movements in order to evaluate the capability of a certain detection algorithm to discriminate falls from ADLs.

  20. BIA Indian Lands Dataset (Indian Lands of the United States)

    Data.gov (United States)

    Federal Geographic Data Committee — The American Indian Reservations / Federally Recognized Tribal Entities dataset depicts feature location, selected demographics and other associated data for the 561...

  1. Framework for Interactive Parallel Dataset Analysis on the Grid

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, David A.; Ananthan, Balamurali; /Tech-X Corp.; Johnson, Tony; Serbo, Victor; /SLAC

    2007-01-10

    We present a framework for use at a typical Grid site to facilitate custom interactive parallel dataset analysis targeting terabyte-scale datasets of the type typically produced by large multi-institutional science experiments. We summarize the needs for interactive analysis and show a prototype solution that satisfies those needs. The solution consists of a desktop client tool and a set of Web Services that allow scientists to sign onto a Grid site, compose analysis script code to carry out physics analysis on datasets, distribute the code and datasets to worker nodes, collect the results back to the client, and construct professional-quality visualizations of the results.

  2. Socioeconomic Data and Applications Center (SEDAC) Treaty Status Dataset

    Data.gov (United States)

    National Aeronautics and Space Administration — The Socioeconomic Data and Application Center (SEDAC) Treaty Status Dataset contains comprehensive treaty information for multilateral environmental agreements,...

  3. Emptiness and Fullness

    DEFF Research Database (Denmark)

    Bregnbæk, Susanne; Bunkenborg, Mikkel

    As critical voices question the quality, authenticity, and value of people, goods, and words in post-Mao China, accusations of emptiness render things open to new investments of meaning, substance, and value. Exploring the production of lack and desire through fine-grained ethnography, this volume examines how diagnoses of emptiness operate in a range of very different domains in contemporary China: in the ostensibly meritocratic exam system and the rhetoric of officials, in underground churches, housing bubbles, and nationalist fantasies, in bodies possessed by spirits and evaluations of jade, there is a pervasive concern with states of lack and emptiness, and the contributions suggest that this play of emptiness and fullness is crucial to ongoing constructions of quality, value, and subjectivity in China.

  4. Creating a distortion characterisation dataset for visual band cameras using fiducial markers

    CSIR Research Space (South Africa)

    Jermy, R

    2015-11-01

    Full Text Available This will allow other researchers to perform the same steps and create better algorithms to accurately locate fiducial markers and calibrate cameras. A second dataset that can be used to assess the accuracy of the stereo vision of two calibrated cameras is also...

  5. ORBDA: An openEHR benchmark dataset for performance assessment of electronic health record servers.

    Directory of Open Access Journals (Sweden)

    Douglas Teodoro

    Full Text Available The openEHR specifications are designed to support implementation of flexible and interoperable Electronic Health Record (EHR) systems. Despite the increasing number of solutions based on the openEHR specifications, it is difficult to find publicly available healthcare datasets in the openEHR format that can be used to test, compare and validate different data persistence mechanisms for openEHR. To foster research on openEHR servers, we present the openEHR Benchmark Dataset, ORBDA, a very large healthcare benchmark dataset encoded using the openEHR formalism. To construct ORBDA, we extracted and cleaned a de-identified dataset from the Brazilian National Healthcare System (SUS) containing hospitalisation and high complexity procedures information and formalised it using a set of openEHR archetypes and templates. Then, we implemented a tool to enrich the raw relational data and convert it into the openEHR model using the openEHR Java reference model library. The ORBDA dataset is available in composition, versioned composition and EHR openEHR representations in XML and JSON formats. In total, the dataset contains more than 150 million composition records. We describe the dataset and provide means to access it. Additionally, we demonstrate the usage of ORBDA for evaluating insert throughput and query latency performance of some NoSQL database management systems. We believe that ORBDA is a valuable asset for assessing storage models for openEHR-based information systems during the software engineering process. It may also be a suitable component in future standardised benchmarking of available openEHR storage platforms.

  6. Evolving hard problems: Generating human genetics datasets with a complex etiology

    Directory of Open Access Journals (Sweden)

    Himmelstein Daniel S

    2011-07-01

    Full Text Available Abstract Background: A goal of human genetics is to discover genetic factors that influence individuals' susceptibility to common diseases. Most common diseases are thought to result from the joint failure of two or more interacting components instead of single component failures. This greatly complicates both the task of selecting informative genetic variants and the task of modeling interactions between them. We and others have previously developed algorithms to detect and model the relationships between these genetic factors and disease. Previously these methods have been evaluated with datasets simulated according to pre-defined genetic models. Results: Here we develop and evaluate a model-free evolution strategy to generate datasets which display a complex relationship between individual genotype and disease susceptibility. We show that this model-free approach is capable of generating a diverse array of datasets with distinct gene-disease relationships for an arbitrary interaction order and sample size. We specifically generate eight hundred Pareto fronts; one for each independent run of our algorithm. In each run the predictiveness of single genetic variation and pairs of genetic variants have been minimized, while the predictiveness of third-, fourth-, or fifth-order combinations is maximized. Two hundred runs of the algorithm are further dedicated to creating datasets with predictive fourth- or fifth-order interactions and minimized lower-level effects. Conclusions: This method and the resulting datasets will allow the capabilities of novel methods to be tested without pre-specified genetic models. This allows researchers to evaluate which methods will succeed on human genetics problems where the model is not known in advance. We further make freely available to the community the entire Pareto-optimal front of datasets from each run so that novel methods may be rigorously evaluated. These 76,600 datasets are available from http://discovery.dartmouth.edu/model_free_data/.

  7. Full-motion video analysis for improved gender classification

    Science.gov (United States)

    Flora, Jeffrey B.; Lochtefeld, Darrell F.; Iftekharuddin, Khan M.

    2014-06-01

    The ability of computer systems to perform gender classification using the dynamic motion of the human subject has important applications in medicine, human factors, and human-computer interface systems. Previous works in motion analysis have used data from sensors (including gyroscopes, accelerometers, and force plates), radar signatures, and video. However, full-motion video and motion-capture range data provide a dataset with higher temporal and spatial resolution for the analysis of dynamic motion. Works using motion capture data have been limited by small datasets in a controlled environment. In this paper, we apply machine learning techniques to a new dataset that has a larger number of subjects. Additionally, these subjects move unrestricted through a capture volume, representing a more realistic, less controlled environment. We conclude that existing linear classification methods are insufficient for gender classification on a larger dataset captured in a relatively uncontrolled environment. A method based on a nonlinear support vector machine classifier is proposed to obtain gender classification for the larger dataset. In experimental testing with a dataset consisting of 98 trials (49 subjects, 2 trials per subject), classification rates using leave-one-out cross-validation are improved from 73% using linear discriminant analysis to 88% using the nonlinear support vector machine classifier.
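
    The classifier and validation scheme named above map directly onto standard tooling; a minimal sketch (Python/scikit-learn, with synthetic stand-in features rather than the 98-trial motion dataset) is:

        import numpy as np
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(98, 20))     # stand-in motion features per trial
        y = rng.integers(0, 2, size=98)   # gender labels

        # RBF-kernel SVM: a nonlinear decision boundary, unlike LDA.
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
        accuracy = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
        print(accuracy)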

  8. Datasets of mung bean proteins and metabolites from four different cultivars

    Directory of Open Access Journals (Sweden)

    Akiko Hashiguchi

    2017-08-01

    Full Text Available Plants produce a wide array of nutrients that exert synergistic interactions across whole combinations of nutrients. Therefore, comprehensive nutrient profiling is required to evaluate their nutritional/nutraceutical value and health-promoting effects. In order to obtain such datasets for mung bean, which is known as a medicinal plant with a heat-alleviating effect, proteomic and metabolomic analyses were performed using four cultivars from China, Thailand, and Myanmar. In total, 449 proteins and 210 metabolic compounds were identified in the seed coat, whereas 480 proteins and 217 metabolic compounds were detected in the seed flesh, establishing the first comprehensive dataset of mung bean for nutraceutical evaluation.

  9. Full metal jacket!

    CERN Document Server

    Laëtitia Pedroso

    2011-01-01

    Ten years ago, standard issue clothing only gave CERN firemen partial protection but today our fire-fighters are equipped with state-of-the-art, full personal protective equipment.   CERN's Fire Brigade team. For many years, the members of CERN's Fire Brigade went on call-outs clad in their work trousers and fire-rescue coats, which only afforded them partial protection. Today, textile manufacturing techniques have moved on a long way and CERN's firemen are now kitted out with state-of-the-art personal protective equipment. The coat and trousers are three-layered, comprising fire-resistant aramide, a protective membrane and a thermal lining. The CERN Fire Brigade' new state-of-the-art personal protection equipment. "This equipment is fully compliant with the standards in force and is therefore resistant to cuts, abrasion, electrical arcs with thermal effects and, of course, fire," explains Patrick Berlinghi, the CERN Fire Brigade's Logistics Officer. You might think that su...

  10. Policies for full employment

    DEFF Research Database (Denmark)

    de Koning, Jaap; Layard, Richard; Nickel, Stephen

    European unemployment is too high, and employment is too low. Over 7½ per cent of Europe's workforce is unemployed, and only two thirds of people aged 15-64 are in work. At the Lisbon summit two years ago the heads of government set the target that by 2010 the employment rate should rise from 64 per cent to at least 70 per cent. And for older workers between 55 and 64 the employment rate should rise from 38 per cent to at least one half. These are ambitious targets. They will require two big changes: more people must seek work, and among those seeking work a higher proportion must get a job. So we need higher participation, and (for full employment) we need a much lower unemployment rate. Can it be done? A mere glance at the experience of different European countries shows that it can. As Table 1 shows, four E.U. countries already exceed the overall target for 2010 (Britain, Denmark...

  11. Volumetric Spectroscopic Imaging of Glioblastoma Multiforme Radiation Treatment Volumes

    Energy Technology Data Exchange (ETDEWEB)

    Parra, N. Andres [Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida (United States); Maudsley, Andrew A. [Department of Radiology, University of Miami Miller School of Medicine, Miami, Florida (United States); Gupta, Rakesh K. [Department of Radiology and Imaging, Fortis Memorial Research Institute, Gurgaon, Haryana (India); Ishkanian, Fazilat; Huang, Kris [Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida (United States); Walker, Gail R. [Biostatistics and Bioinformatics Core Resource, Sylvester Cancer Center, University of Miami Miller School of Medicine, Miami, Florida (United States); Padgett, Kyle [Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida (United States); Department of Radiology, University of Miami Miller School of Medicine, Miami, Florida (United States); Roy, Bhaswati [Department of Radiology and Imaging, Fortis Memorial Research Institute, Gurgaon, Haryana (India); Panoff, Joseph; Markoe, Arnold [Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida (United States); Stoyanova, Radka, E-mail: RStoyanova@med.miami.edu [Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida (United States)

    2014-10-01

    Purpose: Magnetic resonance (MR) imaging and computed tomography (CT) are used almost exclusively in radiation therapy planning of glioblastoma multiforme (GBM), despite their well-recognized limitations. MR spectroscopic imaging (MRSI) can identify biochemical patterns associated with normal brain and tumor, predominantly by observation of choline (Cho) and N-acetylaspartate (NAA) distributions. In this study, volumetric 3-dimensional MRSI was used to map these compounds over a wide region of the brain and to evaluate metabolite-defined treatment targets (metabolic tumor volumes [MTV]). Methods and Materials: Volumetric MRSI with effective voxel size of ∼1.0 mL and standard clinical MR images were obtained from 19 GBM patients. Gross tumor volumes and edema were manually outlined, and clinical target volumes (CTVs) receiving 46 and 60 Gy were defined (CTV_46 and CTV_60, respectively). MTV_Cho and MTV_NAA were constructed based on volumes with high Cho and low NAA relative to values estimated from normal-appearing tissue. Results: The MRSI coverage of the brain was between 70% and 76%. The MTV_NAA were almost entirely contained within the edema, and the correlation between the 2 volumes was significant (r=0.68, P=.001). In contrast, a considerable fraction of MTV_Cho was outside of the edema (median, 33%) and for some patients it was also outside of the CTV_46 and CTV_60. These untreated volumes were greater than 10% for 7 patients (37%) in the study, and on average more than one-third (34.3%) of the MTV_Cho for these patients were outside of CTV_60. Conclusions: This study demonstrates the potential usefulness of whole-brain MRSI for radiation therapy planning of GBM and revealed that areas of metabolically active tumor are not covered by standard RT volumes. The described integration of MTV into the RT system will pave the way to future clinical trials investigating outcomes in patients treated based on...
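
    The construction of the metabolite-defined volumes is essentially a thresholding of the Cho and NAA maps against normal-appearing tissue; a schematic sketch (Python, with an illustrative z-score rule rather than the study's exact criterion) is:

        import numpy as np

        def metabolic_tumor_volume(metab_map, normal_mask, z=2.0, high=True):
            # Voxels whose metabolite level deviates from normal-appearing
            # tissue by more than z standard deviations (illustrative rule).
            mu = metab_map[normal_mask].mean()
            sd = metab_map[normal_mask].std()
            return metab_map > mu + z * sd if high else metab_map < mu - z * sd

        # MTV_Cho: abnormally high choline; MTV_NAA: abnormally low NAA.
        # mtv_cho = metabolic_tumor_volume(cho_map, normal_mask, high=True)
        # mtv_naa = metabolic_tumor_volume(naa_map, normal_mask, high=False)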

  12. Volumetric and MGMT parameters in glioblastoma patients: Survival analysis

    International Nuclear Information System (INIS)

    Iliadis, Georgios; Kotoula, Vassiliki; Chatzisotiriou, Athanasios; Televantou, Despina; Eleftheraki, Anastasia G; Lambaki, Sofia; Misailidou, Despina; Selviaridis, Panagiotis; Fountzilas, George

    2012-01-01

    In this study several tumor-related volumes were assessed by means of a computer-based application and a survival analysis was conducted to evaluate the prognostic significance of pre- and postoperative volumetric data in patients harboring glioblastomas. In addition, MGMT (O6-methylguanine methyltransferase) related parameters were compared with those of volumetry in order to observe possible relevance of this molecule in tumor development. We prospectively analyzed 65 patients suffering from glioblastoma (GBM) who underwent radiotherapy with concomitant adjuvant temozolomide. For the purpose of volumetry, T1- and T2-weighted magnetic resonance (MR) sequences were used, acquired both pre- and postoperatively (pre-radiochemotherapy). The volumes measured on preoperative MR images were necrosis, enhancing tumor and edema (including the tumor) and on postoperative ones, net-enhancing tumor. Age, sex, performance status (PS) and type of operation were also included in the multivariate analysis. MGMT was assessed for promoter methylation with Multiplex Ligation-dependent Probe Amplification (MLPA), for RNA expression with real time PCR, and for protein expression with immunohistochemistry in a total of 44 cases with available histologic material. In the multivariate analysis a negative impact was shown for pre-radiochemotherapy net-enhancing tumor on the overall survival (OS) (p = 0.023) and for preoperative necrosis on progression-free survival (PFS) (p = 0.030). Furthermore, the multivariate analysis confirmed the importance of PS in PFS and OS of patients. MGMT promoter methylation was observed in 13/23 (43.5%) evaluable tumors; complete methylation was observed in 3/13 methylated tumors only. A high rate of MGMT protein positivity (> 20% positive neoplastic nuclei) was inversely associated with pre-operative tumor necrosis (p = 0.021). Our findings implicate that volumetric parameters may have a significant role in the prognosis of GBM patients. Furthermore...

  13. Structural dataset for the PPARγ V290M mutant

    Directory of Open Access Journals (Sweden)

    Ana C. Puhl

    2016-06-01

    Full Text Available Loss-of-function mutation V290M in the ligand-binding domain of the peroxisome proliferator activated receptor γ (PPARγ) is associated with a ligand resistance syndrome (PLRS), characterized by partial lipodystrophy and severe insulin resistance. In this data article we discuss an X-ray diffraction dataset that yielded the structure of the PPARγ LBD V290M mutant refined at 2.3 Å resolution, which allowed building of a 3D model of the receptor mutant with high confidence and revealed continuous well-defined electron density for the partial agonist diclofenac bound to the hydrophobic pocket of PPARγ. These structural data provide significant insights into the molecular basis of PLRS caused by the V290M mutation and are correlated with the receptor's impaired rosiglitazone binding and increased affinity for corepressors. Furthermore, our structural evidence helps to explain clinical observations which point to a failure to restore receptor function by treatment with a full agonist of PPARγ, rosiglitazone.

  14. BDML Datasets: 8 [SSBD[Archive

    Lifescience Database Archive (English)

    Full Text Available Kim, Y.M., Stirbl, R.C., Bruck, J., and Sternberg, P.W. Paul W. Sternberg, California Institute of Technology, HHMI and Division of Biology, Sternberg Laboratory. See details in Cronin et al. (2005) BMC Genetics 6, 5. CC

  15. BDML Datasets: 7 [SSBD[Archive

    Lifescience Database Archive (English)

    Full Text Available , T., Kobayashi, T.J. Md. Khayrul Bashar, The University of Tokyo, Institute of Industrial Science, Laboratory for Quantitative Biology. See details in Bashar et al. (2012) PLoS ONE 7, e35550. CC BY-NC-SA 0.385 x 0.38

  16. Common integration sites of published datasets identified using a graph-based framework

    Directory of Open Access Journals (Sweden)

    Alessandro Vasciaveo

    2016-01-01

    Full Text Available With next-generation sequencing, the genomic data available for the characterization of integration sites (IS) has dramatically increased. At present, in a single experiment, several thousand viral integration genome targets can be investigated to define genomic hot spots. In a previous article, we reworked a formal CIS analysis based on a rigid fixed-window demarcation into a more flexible definition grounded on graphs. Here, we present a selection of supporting data related to the graph-based framework (GBF) from our previous article, in which a collection of common integration sites (CIS) was identified on six published datasets. In this work, we will focus on two datasets, ISRTCGD and ISHIV, which have been previously discussed. Moreover, we show in more detail the workflow design that originates the datasets.

  17. Visual Comparison of Multiple Gene Expression Datasets in a Genomic Context

    Directory of Open Access Journals (Sweden)

    Borowski Krzysztof

    2008-06-01

    Full Text Available The need for novel methods of visualizing microarray data is growing. New perspectives are beneficial to finding patterns in expression data. The Bluejay genome browser provides an integrative way of visualizing gene expression datasets in a genomic context. We have now developed the functionality to display multiple microarray datasets simultaneously in Bluejay, in order to provide researchers with a comprehensive view of their datasets linked to a graphical representation of gene function. This will enable biologists to obtain valuable insights on expression patterns, by allowing them to analyze the expression values in relation to the gene locations as well as to compare expression profiles of related genomes or of different experiments for the same genome.

  18. Boundary expansion algorithm of a decision tree induction for an imbalanced dataset

    Directory of Open Access Journals (Sweden)

    Kesinee Boonchuay

    2017-10-01

    Full Text Available A decision tree is one of the famous classifiers based on a recursive partitioning algorithm. This paper introduces the Boundary Expansion Algorithm (BEA) to improve decision tree induction on an imbalanced dataset. BEA utilizes all attributes to define non-splittable ranges. The computed means of all attributes for minority instances are used to find the nearest minority instance, which will be expanded along all attributes to cover a minority region. As a result, BEA can successfully cope with an imbalanced dataset compared with C4.5, Gini, asymmetric entropy, top-down tree, and the Hellinger distance decision tree on 25 imbalanced datasets from the UCI Repository.

  19. Synthetic ALSPAC longitudinal datasets for the Big Data VR project [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Demetris Avraam

    2017-08-01

    Full Text Available Three synthetic datasets - of observation size 15,000, 155,000 and 1,555,000 participants, respectively - were created by simulating eleven cardiac and anthropometric variables from nine collection ages of the ALSPAC birth cohort study. The synthetic datasets retain similar data properties to the ALSPAC study data they are simulated from (covariance matrices, as well as the mean and variance values of the variables) without including the original data itself or disclosing participant information. In this instance, the three synthetic datasets have been utilised in an academia-industry collaboration to build a prototype virtual reality data analysis software, but they could have a broader use in method and software development projects where sensitive data cannot be freely shared.
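
    Simulation that preserves means and covariances, as described above, is essentially multivariate-normal sampling; a minimal sketch (Python, with a stand-in mean vector and covariance matrix rather than the ALSPAC estimates) is:

        import numpy as np

        rng = np.random.default_rng(42)
        mean = np.array([120.0, 70.0, 65.0])     # stand-in variable means
        cov = np.array([[25.0,  5.0, 2.0],       # stand-in covariance matrix
                        [ 5.0, 16.0, 3.0],
                        [ 2.0,  3.0, 9.0]])

        synthetic = rng.multivariate_normal(mean, cov, size=15000)
        print(synthetic.mean(axis=0))            # close to the target means
        print(np.cov(synthetic, rowvar=False))   # close to the target covariance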

  20. Advanced Neuropsychological Diagnostics Infrastructure (ANDI: A Normative Database Created from Control Datasets.

    Directory of Open Access Journals (Sweden)

    Nathalie R. de Vent

    2016-10-01

    Full Text Available In the Advanced Neuropsychological Diagnostics Infrastructure (ANDI), datasets of several research groups are combined into a single database, containing scores on neuropsychological tests from healthy participants. For most popular neuropsychological tests the quantity and range of these data surpass that of traditional normative data, thereby enabling more accurate neuropsychological assessment. Because of the unique structure of the database, it facilitates normative comparison methods that were not feasible before, in particular those in which entire profiles of scores are evaluated. In this article, we describe the steps that were necessary to combine the separate datasets into a single database. These steps involve matching variables from multiple datasets, removing outlying values, determining the influence of demographic variables, and finding appropriate transformations to normality. Also, a brief description of the current contents of the ANDI database is given.

  1. Dataset Preservation for the Long Term: Results of the DareLux Project

    Directory of Open Access Journals (Sweden)

    Eugène Dürr

    2008-08-01

    Full Text Available The purpose of the DareLux (Data Archiving River Environment Luxembourg) Project was the preservation of unique and irreplaceable datasets, for which we chose hydrology data that will be required for future climate models. The results are: an operational archive built with XML containers, the OAI-PMH protocol and an architecture based upon web services. Major conclusions are: quality control on ingest is important; digital rights management demands attention; and cost aspects of ingest and retrieval cannot be underestimated. We propose a new paradigm for information retrieval of this type of dataset. We recommend research into visualisation tools for the search and retrieval of this type of dataset.
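
    The OAI-PMH protocol named above is a plain HTTP interface, so harvesting records from an archive of this kind reduces to requests such as the following (Python sketch with a hypothetical endpoint URL):

        import requests
        import xml.etree.ElementTree as ET

        BASE = "https://archive.example.org/oai"   # hypothetical endpoint
        resp = requests.get(BASE, params={"verb": "ListRecords",
                                          "metadataPrefix": "oai_dc"})
        root = ET.fromstring(resp.content)

        OAI = "{http://www.openarchives.org/OAI/2.0/}"
        for identifier in root.iter(OAI + "identifier"):
            print(identifier.text)                 # one line per harvested record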

  2. A first dataset toward a standardized community-driven global mapping of the human immunopeptidome

    Directory of Open Access Journals (Sweden)

    Pouya Faridi

    2016-06-01

    Full Text Available We present the first standardized HLA peptidomics dataset generated by the immunopeptidomics community. The dataset is composed of native HLA class I peptides as well as synthetic HLA class II peptides that were acquired in data-dependent acquisition mode using multiple types of mass spectrometers. All laboratories used the spiked-in landmark iRT peptides for retention time normalization and data analysis. The mass spectrometric data were deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier http://www.ebi.ac.uk/pride/archive/projects/PXD001872. The generated data were used to build HLA allele-specific peptide spectral and assay libraries, which were stored in the SWATHAtlas database. Data presented here are described in more detail in the original eLife article entitled ‘An open-source computational and data resource to analyze digital maps of immunopeptidomes’.

  3. Valuation of large variable annuity portfolios: Monte Carlo simulation and synthetic datasets

    Directory of Open Access Journals (Sweden)

    Gan Guojun

    2017-12-01

    Full Text Available Metamodeling techniques have recently been proposed to address the computational issues related to the valuation of large portfolios of variable annuity contracts. However, it is extremely difficult, if not impossible, for researchers to obtain real datasets from insurance companies in order to test their metamodeling techniques on such real datasets and publish the results in academic journals. To facilitate the development and dissemination of research related to the efficient valuation of large variable annuity portfolios, this paper creates a large synthetic portfolio of variable annuity contracts based on the properties of real portfolios of variable annuities and implements a simple Monte Carlo simulation engine for valuing the synthetic portfolio. In addition, this paper presents fair market values and Greeks for the synthetic portfolio of variable annuity contracts that are important quantities for managing the financial risks associated with variable annuities. The resulting datasets can be used by researchers to test and compare the performance of various metamodeling techniques.
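
    In its simplest form, a Monte Carlo engine of the kind described reduces to simulating fund paths and discounting a guarantee payoff; a toy sketch (Python, geometric Brownian motion and a plain guaranteed-minimum-maturity-benefit payoff with invented parameters, far simpler than the paper's engine) is:

        import numpy as np

        def gmmb_value(premium=100.0, guarantee=100.0, r=0.03, sigma=0.2,
                       years=10, n_paths=100_000, seed=7):
            # Risk-neutral value of a GMMB rider: the insurer pays
            # max(guarantee - fund_value, 0) at maturity.
            rng = np.random.default_rng(seed)
            z = rng.standard_normal(n_paths)
            fund = premium * np.exp((r - 0.5 * sigma**2) * years
                                    + sigma * np.sqrt(years) * z)
            payoff = np.maximum(guarantee - fund, 0.0)
            return np.exp(-r * years) * payoff.mean()

        print(gmmb_value())   # one contract; a portfolio loops over contracts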

  4. Genome-wide gene expression dataset used to identify potential therapeutic targets in androgenetic alopecia

    Directory of Open Access Journals (Sweden)

    R. Dey-Rao

    2017-08-01

    Full Text Available The microarray dataset attached to this report is related to the research article with the title: “A genomic approach to susceptibility and pathogenesis leads to identifying potential novel therapeutic targets in androgenetic alopecia” (Dey-Rao and Sinha, 2017) [1]. Male-pattern hair loss that is induced by androgens (testosterone) in genetically predisposed individuals is known as androgenetic alopecia (AGA). The raw dataset is being made publicly available to enable critical and/or extended analyses. Our related research paper utilizes the attached raw dataset for genome-wide gene-expression associated investigations. Combined with several in silico bioinformatics-based analyses, we were able to delineate five strategic molecular elements as potential novel targets towards future AGA-therapy.

  5. Anonymising the Sparse Dataset: A New Privacy Preservation Approach while Predicting Diseases

    Directory of Open Access Journals (Sweden)

    V. Shyamala Susan

    2016-09-01

    Full Text Available Data mining techniques analyze the medical dataset with the intention of enhancing patients' health and privacy. Most of the existing techniques are well suited for low-dimensional medical datasets. The proposed methodology designs a model for the representation of sparse, high-dimensional medical datasets with the aim of protecting the patient's privacy from an adversary and additionally predicting the disease's threat degree. In a sparse dataset many non-zero values are randomly spread in the entire data space. Hence, the challenge is to cluster the correlated patient records to predict the risk degree of the disease earlier than it occurs in patients and to keep privacy. The first phase converts the sparse dataset into a band matrix through the Genetic algorithm along with Cuckoo Search (GCS). This groups the correlated patient records together and arranges them close to the diagonal. The next segment dissociates the patient's disease, which is a sensitive value (SA), from the parameters that determine the disease, normally the Quasi Identifier (QI). Finally, a density-based clustering technique is used over the underlying data to create anonymized groups to maintain privacy and to predict the risk level of disease. Empirical assessments on actual health care data corresponding to the V.A. Medical Centre heart disease dataset reveal the efficiency of this model pertaining to information loss, utility and privacy.

  6. A public dataset of overground and treadmill walking kinematics and kinetics in healthy individuals

    Directory of Open Access Journals (Sweden)

    Claudiane A. Fukuchi

    2018-04-01

    Full Text Available In a typical clinical gait analysis, the gait patterns of pathological individuals are commonly compared with the typically faster, comfortable pace of healthy subjects. However, due to potential bias related to gait speed, this comparison may not be valid. Publicly available gait datasets have failed to address this issue. Therefore, the goal of this study was to present a publicly available dataset of 42 healthy volunteers (24 young adults and 18 older adults) who walked both overground and on a treadmill at a range of gait speeds. Their lower-extremity and pelvis kinematics were measured using a three-dimensional (3D) motion-capture system. The external forces during both overground and treadmill walking were collected using force plates and an instrumented treadmill, respectively. The results include both raw and processed kinematic and kinetic data in different file formats: c3d and ASCII files. In addition, a metadata file is provided that contains demographic and anthropometric data and data related to each file in the dataset. All data are available at Figshare (DOI: 10.6084/m9.figshare.5722711). We foresee several applications of this public dataset, including to examine the influences of speed, age, and environment (overground vs. treadmill) on gait biomechanics, to meet educational needs, and, with the inclusion of additional participants, to use as a normative dataset.

  7. An Analysis of the GTZAN Music Genre Dataset

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2012-01-01

    Most research in automatic music genre recognition has used the dataset assembled by Tzanetakis et al. in 2001. The composition and integrity of this dataset, however, has never been formally analyzed. For the first time, we provide an analysis of its composition, and create a machine...

  8. Really big data: Processing and analysis of large datasets

    Science.gov (United States)

    Modern animal breeding datasets are large and getting larger, due in part to the recent availability of DNA data for many animals. Computational methods for efficiently storing and analyzing those data are under development. The amount of storage space required for such datasets is increasing rapidl...

  9. An Annotated Dataset of 14 Cardiac MR Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This note describes a dataset consisting of 14 annotated cardiac MR images. Points of correspondence are placed on each image at the left ventricle (LV). As such, the dataset can be readily used for building statistical models of shape. Further, format specifications and terms of use are given....

  10. A New Outlier Detection Method for Multidimensional Datasets

    KAUST Repository

    Abdel Messih, Mario A.

    2012-07-01

    This study develops a novel hybrid method for outlier detection (HMOD) that combines the ideas of distance-based and density-based methods. The proposed method has two main advantages over most other outlier detection methods. The first advantage is that it works well on both dense and sparse datasets. The second advantage is that, unlike most other outlier detection methods that require careful parameter setting and prior knowledge of the data, HMOD is not very sensitive to small changes in parameter values within certain parameter ranges. The only parameter that must be set is the number of nearest neighbors. In addition, we made a fully parallelized implementation of HMOD that makes it very efficient in applications. Moreover, we proposed a new way of using outlier detection for redundancy reduction in datasets, in which users can specify a confidence level that evaluates how accurately the less redundant dataset represents the original dataset. HMOD is evaluated on synthetic datasets (dense and mixed “dense and sparse”) and on a bioinformatics problem of redundancy reduction of a dataset of position weight matrices (PWMs) of transcription factor binding sites. In addition, in the process of assessing the performance of our redundancy reduction method, we developed a simple tool that can be used to evaluate the confidence level with which a reduced dataset represents the original dataset. The evaluation of the results shows that our method can be used in a wide range of problems.
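    The fusion of the two views can be sketched in a few lines. The following Python fragment is a minimal illustration under stated assumptions, not the authors' HMOD implementation (which the abstract does not give): it combines a distance view (mean k-NN distance) with a density view (a LOF-style ratio of neighbour density to own density), with the number of nearest neighbors k as the only parameter.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        def hybrid_outlier_scores(X, k=10):
            # Distance view: mean distance to the k nearest neighbours.
            nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
            dists, idx = nn.kneighbors(X)          # column 0 is the point itself
            knn_dist = dists[:, 1:].mean(axis=1)
            # Density view: ratio of the neighbours' density to the point's own.
            density = 1.0 / (knn_dist + 1e-12)
            density_ratio = density[idx[:, 1:]].mean(axis=1) / density
            # Points that are both far from and sparser than their neighbours score high.
            return knn_dist / knn_dist.mean() + density_ratio

    Points with the highest scores would be flagged as outlier candidates; a redundancy-reduction wrapper could then discard near-duplicates with the lowest scores.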

  11. ATLAS File and Dataset Metadata Collection and Use

    CERN Document Server

    Albrand, S; The ATLAS collaboration; Lambert, F; Gallas, E J

    2012-01-01

    The ATLAS Metadata Interface (“AMI”) was designed as a generic cataloguing system, and as such it has found many uses in the experiment, including software release management, tracking of reconstructed event sizes and control of dataset nomenclature. The primary use of AMI is to provide a catalogue of datasets (file collections) which is searchable using physics criteria. In this paper we discuss the various mechanisms used for filling the AMI dataset and file catalogues. By correlating information from different sources we can derive aggregate information which is important for physics analysis; for example, the total number of events contained in a dataset, and possible reasons for missing events such as a lost file. Finally we describe some specialized interfaces which were developed for the Data Preparation and reprocessing coordinators. These interfaces manipulate information from both the dataset domain held in AMI, and the run-indexed information held in the ATLAS COMA application (Conditions and ...

  12. A dataset on tail risk of commodities markets.

    Science.gov (United States)

    Powell, Robert J; Vo, Duc H; Pham, Thach N; Singh, Abhay K

    2017-12-01

    This article contains the datasets related to the research article "The long and short of commodity tails and their relationship to Asian equity markets" (Powell et al., 2017) [1]. The datasets contain the daily prices (and price movements) of 24 different commodities decomposed from the S&P GSCI index and the daily prices (and price movements) of three share market indices including World, Asia, and South East Asia for the period 2004-2015. The dataset is then divided into annual periods, showing the worst 5% of price movements for each year. The datasets are convenient for examining the tail risk of different commodities as measured by Conditional Value at Risk (CVaR), as well as its changes over time. The datasets can also be used to investigate the association between commodity markets and share markets.
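    As a rough illustration of the tail measure involved, CVaR at the 5% level is the average of the worst 5% of daily price movements; the sketch below assumes that convention (the article's exact sign and tail definitions may differ).

        import numpy as np

        def cvar(returns, alpha=0.05):
            # Mean of the worst alpha fraction of returns (ascending sort puts losses first).
            r = np.sort(np.asarray(returns))
            tail = r[: max(1, int(np.ceil(alpha * r.size)))]
            return tail.mean()

        # Stand-in for one commodity's daily price movements over the sample period.
        moves = np.random.default_rng(0).normal(0.0, 0.02, 2500)
        print(cvar(moves))   # average of the worst 5% of daily movements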

  13. Toward a Philosophy and Theory of Volumetric Nonthermal Processing.

    Science.gov (United States)

    Sastry, Sudhir K

    2016-06-01

    Nonthermal processes for food preservation have been under intensive investigation for about the past quarter century, with varying degrees of success. We focus this discussion on two volumetrically acting nonthermal processes, high pressure processing (HPP) and pulsed electric fields (PEF), with emphasis on scientific understanding of each, and the research questions that need to be addressed for each to be more successful in the future. We discuss the character or "philosophy" of food preservation, with a question about the nature of the kill step(s), and the sensing challenges that need to be addressed. For HPP, key questions and needs center around whether its nonthermal effectiveness can be increased by increased pressures or pulsing, the theoretical treatment of rates of reaction as influenced by pressure, the assumption of uniform pressure distribution, and the need for (and difficulties involved in) in-situ measurement. For PEF, the questions include the rationale for pulsing, difficulties involved in continuous flow treatment chambers, the difference between electroporation theory and experimental observations, and the difficulties involved in in-situ measurement and monitoring of electric field distribution. © 2016 Institute of Food Technologists®

  14. Determination of delta ferrite volumetric fraction in austenitic stainless steel

    International Nuclear Information System (INIS)

    Almeida Macedo, W.A. de.

    1983-01-01

    Measurements of delta ferrite volumetric fraction in AISI 304 austenitic stainless steels were done by X-ray diffraction, quantitative metallography (point count) and by means of one specific commercial apparatus whose operational principle is magnetic-inductive: the Ferrite Content Meter 1053 / Institut Dr. Foerster. The results obtained were compared with point count, the reference method. The influence of martensite induced by mechanical deformation on these measurements was also investigated. Determinations by X-ray diffraction, from the ratio between the integrated intensities of the ferrite (211) and austenite (311) lines, are in excellent agreement with those taken by point count. A correction curve for the readings of the commercial instrument in question was obtained for the range between zero and 20% of delta ferrite in 18/8 stainless steels. It is demonstrated that, depending on the measurement method employed and the surface finish of the material to be analysed, the presence of martensite produced by mechanical deformation of the austenitic matrix is a problem to be considered. (Author) [pt
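    For the X-ray determination, the volume fraction follows from the direct-comparison method applied to the two measured lines. A minimal sketch, in which the per-reflection scattering factors R are placeholders (the abstract does not report them; real values depend on the wavelength and structure factors):

        def ferrite_fraction(I_alpha_211, I_gamma_311, R_alpha=1.0, R_gamma=1.0):
            # Direct comparison: V_ferrite = (I_a/R_a) / (I_a/R_a + I_g/R_g),
            # using the integrated intensities of ferrite (211) and austenite (311).
            a = I_alpha_211 / R_alpha
            g = I_gamma_311 / R_gamma
            return a / (a + g)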

  15. Volumetric real-time imaging using a CMUT ring array.

    Science.gov (United States)

    Choe, Jung Woo; Oralkan, Ömer; Nikoozadeh, Amin; Gencel, Mustafa; Stephens, Douglas N; O'Donnell, Matthew; Sahn, David J; Khuri-Yakub, Butrus T

    2012-06-01

    A ring array provides a very suitable geometry for forward-looking volumetric intracardiac and intravascular ultrasound imaging. We fabricated an annular 64-element capacitive micromachined ultrasonic transducer (CMUT) array featuring a 10-MHz operating frequency and a 1.27-mm outer radius. A custom software suite was developed to run on a PC-based imaging system for real-time imaging using this device. This paper presents simulated and experimental imaging results for the described CMUT ring array. Three different imaging methods--flash, classic phased array (CPA), and synthetic phased array (SPA)--were used in the study. For SPA imaging, two techniques to improve the image quality--Hadamard coding and aperture weighting--were also applied. The results show that SPA with Hadamard coding and aperture weighting is a good option for ring-array imaging. Compared with CPA, it achieves better image resolution and comparable signal-to-noise ratio at a much faster image acquisition rate. Using this method, a volume rate of up to 463 volumes per second is achievable if limited only by the ultrasound time of flight; with the described system we reconstructed three cross-sectional images in real time at 10 frames per second, limited by the computation time of the synthetic beamforming.

  16. Intuitive Exploration of Volumetric Data Using Dynamic Galleries.

    Science.gov (United States)

    Jönsson, Daniel; Falk, Martin; Ynnerman, Anders

    2016-01-01

    In this work we present a volume exploration method designed to be used by novice users and visitors to science centers and museums. The volumetric digitalization of artifacts in museums is of rapidly increasing interest as enhanced user experience through interactive data visualization can be achieved. This is, however, a challenging task since the vast majority of visitors are not familiar with the concepts commonly used in data exploration, such as mapping of visual properties from values in the data domain using transfer functions. Interacting in the data domain is an effective way to filter away undesired information but it is difficult to predict where the values lie in the spatial domain. In this work we make extensive use of dynamic previews instantly generated as the user explores the data domain. The previews allow the user to predict what effect changes in the data domain will have on the rendered image without being aware that visual parameters are set in the data domain. Each preview represents a subrange of the data domain where overview and details are given on demand through zooming and panning. The method has been designed with touch interfaces as the target platform for interaction. We provide a qualitative evaluation performed with visitors to a science center to show the utility of the approach.

  17. Femoral head osteonecrosis: Volumetric MRI assessment and outcome

    International Nuclear Information System (INIS)

    Bassounas, Athanasios E.; Karantanas, Apostolos H.; Fotiadis, Dimitrios I.; Malizos, Konstantinos N.

    2007-01-01

    Effective treatment of femoral head osteonecrosis (FHON) requires early diagnosis and accurate assessment of the disease severity. The ability to predict the risk of collapse in the early stages is important for selecting a joint salvage procedure. The aim of the present study was to evaluate the outcome in patients treated with vascularized fibular grafts in relation to preoperative MR imaging volumetry. We studied 58 patients (87 hips) with FHON. A semi-automated octant-based lesion measurement method, previously described, was performed on the T1-w MR images. The mean postoperative follow-up was 7.8 years. Sixty-three hips were successful and 24 failed and were converted to total hip arthroplasty within a period of 2-4 years after the initial operation. The failure rate was higher for hips of male patients than for those of female patients. The mean lesion size was 28% of the sphere equivalent of the femoral head, 24 ± 12% for the successful hips and 37 ± 9% for the failed ones (p < 0.001). The most affected octants were antero-supero-medial (58 ± 26%) and postero-supero-medial (54 ± 31%). All octants but the postero-infero-medial and postero-infero-lateral showed statistically significant differences in lesion size between patients with successful and failed hips. In conclusion, the volumetric analysis of preoperative MRI provides useful information with regard to a successful outcome in patients treated with vascularized fibular grafts

  18. A volumetric flow sensor for automotive injection systems

    International Nuclear Information System (INIS)

    Schmid, U; Krötz, G; Schmitt-Landsiedel, D

    2008-01-01

    For further optimization of the automotive power train of diesel engines, advanced combustion processes require a highly flexible injection system, provided e.g. by the common rail (CR) injection technique. In the past, the feasibility of implementing injection nozzle volumetric flow sensors based on the thermo-resistive measurement principle has been demonstrated up to injection pressures of 135 MPa (1350 bar). To evaluate the transient behaviour of the system-integrated flow sensors, as well as an injection amount indicator used as a reference method, hydraulic simulations on the system level were performed for a CR injection system. Experimentally determined injection timings were found to be in good agreement with calculated values, especially for the novel sensing element, which is directly implemented into the hydraulic system. For the first time, pressure oscillations occurring after termination of the injection pulse, predicted theoretically, could be verified directly in the nozzle. In addition, the injected amount of fuel is monitored with the highest resolution ever reported in the literature

  19. Parkinson's disease: diagnostic utility of volumetric imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Wei-Che; Chen, Meng-Hsiang [Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Department of Diagnostic Radiology, Kaohsiung (China); Chou, Kun-Hsien [National Yang-Ming University, Brain Research Center, Taipei (China); Lee, Pei-Lin [National Yang-Ming University, Department of Biomedical Imaging and Radiological Sciences, Taipei (China); Tsai, Nai-Wen; Lu, Cheng-Hsien [Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Department of Neurology, Kaohsiung (China); Chen, Hsiu-Ling [Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Department of Diagnostic Radiology, Kaohsiung (China); National Yang-Ming University, Department of Biomedical Imaging and Radiological Sciences, Taipei (China); Hsu, Ai-Ling [National Taiwan University, Institute of Biomedical Electronics and Bioinformatics, Taipei (China); Huang, Yung-Cheng [Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Department of Nuclear Medicine, Kaohsiung (China); Lin, Ching-Po [National Yang-Ming University, Brain Research Center, Taipei (China); National Yang-Ming University, Department of Biomedical Imaging and Radiological Sciences, Taipei (China)

    2017-04-15

    This paper aims to examine the effectiveness of structural imaging as an aid in the diagnosis of Parkinson's disease (PD). High-resolution T1-weighted magnetic resonance imaging was performed in 72 patients with idiopathic PD (mean age, 61.08 years) and 73 healthy subjects (mean age, 58.96 years). The whole brain was parcellated into 95 regions of interest using composite anatomical atlases, and region volumes were calculated. Three diagnostic classifiers were constructed using binary multiple logistic regression modeling: the (i) basal ganglion prior classifier, (ii) data-driven classifier, and (iii) basal ganglion prior/data-driven hybrid classifier. Leave-one-out cross validation was used to unbiasedly evaluate the predictive accuracy of imaging features. Pearson's correlation analysis was further performed to correlate outcome measurement using the best PD classifier with disease severity. Smaller volume in susceptible regions is diagnostic for Parkinson's disease. Compared with the other two classifiers, the basal ganglion prior/data-driven hybrid classifier had the highest diagnostic reliability with a sensitivity of 74%, specificity of 75%, and accuracy of 74%. Furthermore, outcome measurement using this classifier was associated with disease severity. Brain structural volumetric analysis with multiple logistic regression modeling can be a complementary tool for diagnosing PD. (orig.)
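    The classifier construction described above (multiple logistic regression over regional volumes, validated with leave-one-out cross validation) can be sketched as follows; the arrays are synthetic stand-ins and the basal-ganglion prior/data-driven feature selection is omitted.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.normal(size=(145, 95))        # 145 subjects x 95 region volumes (stand-in)
        y = rng.integers(0, 2, size=145)      # 1 = PD, 0 = healthy control (stand-in)

        clf = LogisticRegression(max_iter=1000)
        pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())
        sens = ((pred == 1) & (y == 1)).sum() / (y == 1).sum()
        spec = ((pred == 0) & (y == 0)).sum() / (y == 0).sum()
        print(f"sensitivity={sens:.2f} specificity={spec:.2f}")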

  20. Volumetric Modulated Arc Therapy (VMAT) Treatment Planning for Superficial Tumors

    International Nuclear Information System (INIS)

    Zacarias, Albert S.; Brown, Mellonie F.; Mills, Michael D.

    2010-01-01

    The physician's planning objective is often a uniform dose distribution throughout the planning target volume (PTV), including superficial PTVs on or near the surface of a patient's body. Varian's Eclipse treatment planning system uses a progressive resolution optimizer (PRO), version 8.2.23, for RapidArc dynamic multileaf collimator volumetric modulated arc therapy planning. Because the PRO is a fast optimizer, optimization convergence errors (OCEs) produce dose nonuniformity in the superficial area of the PTV. We present a postsurgical cranial case demonstrating the recursive method our clinic uses to produce RapidArc treatment plans. The initial RapidArc treatment plan generated using one 360° arc resulted in substantial dose nonuniformity in the superficial section of the PTV. We demonstrate the use of multiple arcs to produce improved dose uniformity in this region. We also compare the results of this superficial dose compensation method to the results of a recursive method of dose correction that we developed in-house to correct optimization convergence errors in static intensity-modulated radiation therapy treatment plans. The results show that up to 4 arcs may be necessary to provide uniform dose to the surface of the PTV with the current version of the PRO.

  1. Normative biometrics for fetal ocular growth using volumetric MRI reconstruction.

    Science.gov (United States)

    Velasco-Annis, Clemente; Gholipour, Ali; Afacan, Onur; Prabhu, Sanjay P; Estroff, Judy A; Warfield, Simon K

    2015-04-01

    To determine normative ranges for fetal ocular biometrics between 19 and 38 weeks gestational age (GA) using volumetric MRI reconstruction. The 3D images of 114 healthy fetuses between 19 and 38 weeks GA were created using super-resolution volume reconstructions from MRI slice acquisitions. These 3D images were semi-automatically segmented to measure fetal orbit volume, binocular distance (BOD), interocular distance (IOD), and ocular diameter (OD). All biometry correlated with GA (Volume, Pearson's correlation coefficient (CC) = 0.9680; BOD, CC = 0.9552; OD, CC = 0.9445; and IOD, CC = 0.8429), and growth curves were plotted against linear and quadratic growth models. Regression analysis showed quadratic models to best fit BOD, IOD, and OD and a linear model to best fit volume. Orbital volume had the greatest correlation with GA, although BOD and OD also showed strong correlation. The normative data found in this study may be helpful for the detection of congenital fetal anomalies with more consistent measurements than are currently available. © 2015 John Wiley & Sons, Ltd.
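    The model comparison in the abstract amounts to fitting first- and second-order polynomials of gestational age and keeping the better fit per measurement. A minimal sketch with synthetic stand-in data (the study's measurements are not included here):

        import numpy as np

        ga = np.linspace(19, 38, 114)                        # gestational age, weeks
        rng = np.random.default_rng(1)
        vol = 0.12 * ga - 1.5 + rng.normal(0, 0.1, ga.size)  # stand-in orbital volumes

        lin = np.polyfit(ga, vol, 1)    # linear model (best for volume in the study)
        quad = np.polyfit(ga, vol, 2)   # quadratic model (best for BOD, IOD, OD)
        print(np.polyval(lin, 30.0))    # predicted volume at 30 weeks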

  2. Determination of delta ferrite volumetric fraction in austenitic stainless steels

    International Nuclear Information System (INIS)

    Almeida Macedo, W.A. de.

    1983-01-01

    Measurements of delta ferrite volumetric fraction in AISI 304 austenitic stainless steels were done by X-ray diffraction, quantitative metallography (point count) and by means of one specific commercial apparatus whose operational principle is magnetic-inductive: the Ferrite Content Meter 1053 / Institut Dr. Foerster. The results obtained were compared with point count, the reference method. The influence of martensite induced by mechanical deformation on these measurements was also investigated. Determinations by X-ray diffraction, from the ratio between the integrated intensities of the ferrite (211) and austenite (311) lines, are in excellent agreement with those taken by point count. A correction curve for the readings of the commercial instrument in question was obtained for the range between zero and 20% of delta ferrite in 18/8 stainless steels. It is demonstrated that, depending on the measurement method employed and the surface finish of the material to be analysed, the presence of martensite produced by mechanical deformation of the austenitic matrix is a problem to be considered. (Author) [pt

  3. A volumetric flow sensor for automotive injection systems

    Science.gov (United States)

    Schmid, U.; Krötz, G.; Schmitt-Landsiedel, D.

    2008-04-01

    For further optimization of the automotive power train of diesel engines, advanced combustion processes require a highly flexible injection system, provided e.g. by the common rail (CR) injection technique. In the past, the feasibility of implementing injection nozzle volumetric flow sensors based on the thermo-resistive measurement principle has been demonstrated up to injection pressures of 135 MPa (1350 bar). To evaluate the transient behaviour of the system-integrated flow sensors, as well as an injection amount indicator used as a reference method, hydraulic simulations on the system level were performed for a CR injection system. Experimentally determined injection timings were found to be in good agreement with calculated values, especially for the novel sensing element, which is directly implemented into the hydraulic system. For the first time, pressure oscillations occurring after termination of the injection pulse, predicted theoretically, could be verified directly in the nozzle. In addition, the injected amount of fuel is monitored with the highest resolution ever reported in the literature.

  4. Region-of-interest volumetric visual hull refinement

    KAUST Repository

    Knoblauch, Daniel

    2010-01-01

    This paper introduces a region-of-interest visual hull refinement technique, based on flexible voxel grids for volumetric visual hull reconstructions. Region-of-interest refinement is based on a multipass process, beginning with a focused visual hull reconstruction, resulting in a first 3D approximation of the target, followed by a region-of-interest estimation, tasked with identifying features of interest, which in turn are used to locally refine the voxel grid and extract a higher-resolution surface representation for those regions. This approach is illustrated for the reconstruction of avatars for use in tele-immersion environments, where the head and hand regions are of higher interest. To allow reproducibility and direct comparison, a publicly available dataset for human visual hull reconstruction is used. This paper shows that region-of-interest reconstruction of the target is faster than, and visually comparable to, higher-resolution focused visual hull reconstructions. This approach reduces the amount of data generated by the reconstruction, allowing faster post-processing, such as rendering or networking of the surface voxels. Reconstruction speeds support smooth interactions between the avatar and the virtual environment, while the improved resolution of its facial region and hands creates a higher degree of immersion and potentially impacts the perception of body language, facial expressions and eye-to-eye contact. Copyright © 2010 by the Association for Computing Machinery, Inc.

  5. Volumetric accuracy of cone-beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Park, Cheol Woo; Kim, Jin Ho; Seo, Yu Kyeong; Lee, Sae Rom; Kang, Ju Hee; Oh, Song Hee; Kim, Gyu Tae; Choi, Yong Suk; Hwang, Eui Hwan [Dept. of Oral and Maxillofacial Radiology, Graduate School, Kyung Hee University, Seoul (Korea, Republic of)

    2017-09-15

    This study was performed to investigate the influence of object shape and distance from the center of the image on the volumetric accuracy of cone-beam computed tomography (CBCT) scans, according to different parameters of tube voltage and current. Four geometric objects (cylinder, cube, pyramid, and hexagon) with predefined dimensions were fabricated. The objects consisted of Teflon-perfluoroalkoxy embedded in a hydrocolloid matrix (Dupli-Coe-Loid TM; GC America Inc., Alsip, IL, USA), encased in an acrylic resin cylinder assembly. An Alphard Vega Dental CT system (Asahi Roentgen Ind. Co., Ltd, Kyoto, Japan) was used to acquire CBCT images. OnDemand 3D (CyberMed Inc., Seoul, Korea) software was used for object segmentation and image analysis. The accuracy was expressed by the volume error (VE). The VE was calculated under 3 different exposure settings. The measured volumes of the objects were compared to the true volumes for statistical analysis. The mean VE ranged from −4.47% to 2.35%. There was no significant relationship between an object's shape and the VE. A significant correlation was found between the distance of the object to the center of the image and the VE. Tube voltage affected the volume measurements and the VE, but tube current did not. The evaluated CBCT device provided satisfactory volume measurements. To assess volume measurements, it might be sufficient to use serial scans with a high resolution, but a low dose. This information may provide useful guidance for assessing volume measurements.
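    The volume error reduces to a signed percentage; the definition below is an assumption consistent with the reported range of −4.47% to 2.35%.

        def volume_error(measured_ml, true_ml):
            # Signed percentage deviation of the measured volume from the true volume.
            return 100.0 * (measured_ml - true_ml) / true_ml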

  6. Fetal brain volumetry through MRI volumetric reconstruction and segmentation

    Science.gov (United States)

    Estroff, Judy A.; Barnewolt, Carol E.; Connolly, Susan A.; Warfield, Simon K.

    2013-01-01

    Purpose Fetal MRI volumetry is a useful technique but it is limited by a dependency upon motion-free scans, tedious manual segmentation, and spatial inaccuracy due to thick-slice scans. An image processing pipeline that addresses these limitations was developed and tested. Materials and methods The principal sequences acquired in fetal MRI clinical practice are multiple orthogonal single-shot fast spin echo scans. State-of-the-art image processing techniques were used for inter-slice motion correction and super-resolution reconstruction of high-resolution volumetric images from these scans. The reconstructed volume images were processed with intensity non-uniformity correction and the fetal brain extracted by using supervised automated segmentation. Results Reconstruction, segmentation and volumetry of the fetal brains for a cohort of twenty-five clinically acquired fetal MRI scans were performed. Performance metrics for volume reconstruction, segmentation and volumetry were determined by comparison to manual tracings in five randomly chosen cases. Finally, analysis of the fetal brain and parenchymal volumes was performed based on the gestational age of the fetuses. Conclusion The image processing pipeline developed in this study enables volume rendering and accurate fetal brain volumetry by addressing the limitations of current volumetry techniques, which include dependency on motion-free scans, manual segmentation, and inaccurate thick-slice interpolation. PMID:20625848

  7. Volumetric accuracy of cone-beam computed tomography

    International Nuclear Information System (INIS)

    Park, Cheol Woo; Kim, Jin Ho; Seo, Yu Kyeong; Lee, Sae Rom; Kang, Ju Hee; Oh, Song Hee; Kim, Gyu Tae; Choi, Yong Suk; Hwang, Eui Hwan

    2017-01-01

    This study was performed to investigate the influence of object shape and distance from the center of the image on the volumetric accuracy of cone-beam computed tomography (CBCT) scans, according to different parameters of tube voltage and current. Four geometric objects (cylinder, cube, pyramid, and hexagon) with predefined dimensions were fabricated. The objects consisted of Teflon-perfluoroalkoxy embedded in a hydrocolloid matrix (Dupli-Coe-Loid TM; GC America Inc., Alsip, IL, USA), encased in an acrylic resin cylinder assembly. An Alphard Vega Dental CT system (Asahi Roentgen Ind. Co., Ltd, Kyoto, Japan) was used to acquire CBCT images. OnDemand 3D (CyberMed Inc., Seoul, Korea) software was used for object segmentation and image analysis. The accuracy was expressed by the volume error (VE). The VE was calculated under 3 different exposure settings. The measured volumes of the objects were compared to the true volumes for statistical analysis. The mean VE ranged from −4.47% to 2.35%. There was no significant relationship between an object's shape and the VE. A significant correlation was found between the distance of the object to the center of the image and the VE. Tube voltage affected the volume measurements and the VE, but tube current did not. The evaluated CBCT device provided satisfactory volume measurements. To assess volume measurements, it might be sufficient to use serial scans with a high resolution, but a low dose. This information may provide useful guidance for assessing volume measurements

  8. An MRI-based semiautomated volumetric quantification of hip osteonecrosis

    International Nuclear Information System (INIS)

    Malizos, K.N.; Siafakas, M.S.; Karachalios, T.S.; Fotiadis, D.I.; Soucacos, P.N.

    2001-01-01

    Objective: To objectively and precisely define the spatial distribution of osteonecrosis and to investigate the influence of various factors including etiology. Design: A volumetric method is presented to describe the size and spatial distribution of necrotic lesions of the femoral head, using MRI scans. The technique is based on the definition of an equivalent sphere model for the femoral head. Patients: The gender, age, number of hips involved, disease duration, pain intensity, limping disability and etiology were correlated with the distribution of the pathologic bone. Seventy-nine patients with 122 hips affected by osteonecrosis were evaluated. Results: The lesion size ranged from 7% to 73% of the sphere equivalent. The lateral octants presented considerable variability, ranging from wide lateral lesions extending beyond the lip of the acetabulum, to narrow medial lesions leaving a lateral supporting pillar of intact bone. Patients with sickle cell disease and steroid administration presented the largest lesions. The extent of posterior superior medial octant involvement correlated with the symptom intensity, a younger age and male gender. Conclusion: The methodology presented here has proven to be a reliable and straightforward imaging tool for the precise assessment of necrotic lesions. It also enables us to accurately target the drilling and grafting procedures. (orig.)

  9. Volumetric PIV behind mangrove-type root models

    Science.gov (United States)

    Kazemi, Amirkhosro; van de Riet, Keith; Curet, Oscar M.

    2017-11-01

    Mangrove trees form dense networks of prop roots in coastal intertidal zones. The interaction of mangroves with the tidal flow is fundamental to estuaries and shorelines, providing water filtration, protection against erosion and habitat for aquatic animals. In this work, we modeled the mangrove prop roots with a cluster of rigid circular cylinders (patch) to investigate its hydrodynamics. We conducted 2-D PIV and V3V measurements in the near- and far-wake in a recirculating water channel. Two models were considered: (1) a rigid patch, and (2) a flexible patch modeled as rigid cylinders with a flexible hinge. We found that the Strouhal number changes with porosity while the patch diameter is held constant. Based on the wake signature, we defined an effective diameter length scale. The volumetric flow measurements revealed a regular shedding forming von Kármán vortices for the rigid patch, while the flexible patch produced a less uniform wake where vortices were substantially distorted. We compare the wake structures obtained from the 2-D PIV and V3V measurements. This analysis of the hydrodynamics of mangrove-root-like models can also be extended to understand other complex flows, including bio-inspired coastal infrastructures, wave-damping systems, and energy harvesting devices.
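    For reference, the Strouhal number referred to above is St = fD/U; with the effective patch diameter defined from the wake signature, it is a one-liner (the values below are placeholders, not measurements from the study).

        def strouhal(f_shed_hz, d_eff_m, u_ms):
            # St = f * D / U with D the effective patch diameter.
            return f_shed_hz * d_eff_m / u_ms

        print(strouhal(0.8, 0.1, 0.5))   # placeholder values -> St = 0.16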

  10. Volumetric neuroimaging in Usher syndrome: evidence of global involvement.

    Science.gov (United States)

    Schaefer, G B; Bodensteiner, J B; Thompson, J N; Kimberling, W J; Craft, J M

    1998-08-27

    Usher syndrome is a group of genetic disorders consisting of congenital sensorineural hearing loss and retinitis pigmentosa of variable onset and severity depending on the genetic type. It was suggested that the psychosis of Usher syndrome might be secondary to a metabolic degeneration involving the brain more diffusely. There have been reports of focal and diffuse atrophic changes in the supratentorial brain as well as atrophy of some of the structures of the posterior fossa. We previously performed quantitative analysis of magnetic resonance imaging studies of 19 Usher syndrome patients (12 with type I and 7 with type II), looking at the cerebellum and various cerebellar components. We found atrophy of the cerebellum in both types and sparing of cerebellar vermis lobules I-V in type II Usher syndrome patients only. We have now studied another group of 19 patients (with some overlap with the patients studied in the previous report) with Usher syndrome (8 with type I, 11 with type II). We performed quantitative volumetric measurements of various brain structures compared to age- and sex-matched controls. We found a significant decrease in intracranial volume and in the size of the brain and cerebellum, with a trend toward an increase in the size of the subarachnoid spaces. These data suggest that the disease process in Usher syndrome involves the entire brain and is not limited to the posterior fossa or the auditory and visual systems.

  11. Comparison of surface contour and volumetric three-dimensional imaging of the musculoskeletal system

    International Nuclear Information System (INIS)

    Guilford, W.B.; Ullrich, C.G.; Moore, T.

    1988-01-01

    Both surface contour and volumetric three-dimensional image processing from CT data can provide accurate demonstration of skeletal anatomy. While realistic, surface contour images may obscure fine detail such as nondisplaced fractures, and thin bone may disappear. Volumetric processing can provide high detail, but the transparency effect is unnatural and may yield a confusing image. Comparison of both three-dimensional modes is presented to demonstrate those findings best shown with each and to illustrate helpful techniques to improve volumetric display, such as disarticulation of unnecessary anatomy, short-angle repeating rotation (dithering), and image combination into overlay displays

  12. A multimodal data-set of a unidirectional glass fibre reinforced polymer composite

    Directory of Open Access Journals (Sweden)

    Monica J. Emerson

    2018-06-01

    Full Text Available A unidirectional (UD glass fibre reinforced polymer (GFRP composite was scanned at varying resolutions in the micro-scale with several imaging modalities. All six scans capture the same region of the sample, containing well-aligned fibres inside a UD load-carrying bundle. Two scans of the cross-sectional surface of the bundle were acquired at a high resolution, by means of scanning electron microscopy (SEM and optical microscopy (OM, and four volumetric scans were acquired through X-ray computed tomography (CT at different resolutions. Individual fibres can be resolved from these scans to investigate the micro-structure of the UD bundle. The data is hosted at https://doi.org/10.5281/zenodo.1195879 and it was used in Emerson et al. (2018 [1] to demonstrate that precise and representative characterisations of fibre geometry are possible with relatively low X-ray CT resolutions if the analysis method is robust to image quality. Keywords: Geometrical characterisation, Polymer-matrix composites (PMCs, Volumetric fibre segmentation, Automated fibre tracking, X-ray imaging, Microscopy, Non-destructive testing

  13. Improved algorithm for surface display from volumetric data

    International Nuclear Information System (INIS)

    Lobregt, S.; Schaars, H.W.G.K.; OpdeBeek, J.C.A.; Zonneveld, F.W.

    1988-01-01

    A high-resolution surface display is produced from three-dimensional datasets (computed tomography or magnetic resonance imaging). Unlike other voxel-based methods, this algorithm does not show a cuberille surface structure, because the surface orientation is calculated from original gray values. The applied surface shading is a function of local orientation and position of the surface and of a virtual light source, giving a realistic impression of the surface of bone and soft tissue. The projection and shading are table driven, combining variable viewpoint and illumination conditions with speed. Other options are cutplane gray-level display and surface transparency. Combined with volume scanning, this algorithm offers powerful application possibilities
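    The gray-value shading idea can be sketched directly: estimate a surface normal per voxel from central differences of the original gray values, then apply Lambertian shading. The paper's table-driven implementation and exact gradient operator are not specified in the abstract, so this Python fragment shows only the general idea.

        import numpy as np

        def lambert_shade(volume, light=(0.0, 0.0, 1.0)):
            # Central-difference gradients; the volume is assumed indexed (z, y, x).
            gz, gy, gx = np.gradient(volume.astype(float))
            g = np.stack([gx, gy, gz], axis=-1)
            norm = np.linalg.norm(g, axis=-1, keepdims=True)
            n = g / np.maximum(norm, 1e-12)             # unit surface normals
            l = np.asarray(light, float)
            l = l / np.linalg.norm(l)
            return np.clip(n @ l, 0.0, 1.0)             # diffuse intensity per voxel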

  14. Full traveltime inversion in source domain

    KAUST Repository

    Liu, Lu

    2017-06-01

    This paper presents a new method of source-domain full traveltime inversion (FTI). The objective of this study is to automatically build a near-surface velocity model using the early arrivals of seismic data. This method can generate an inverted velocity model that kinematically best matches the reconstructed plane-wave source of early arrivals with the true source in the source domain. It does not require picking first arrivals for tomography, which is one of the most challenging aspects of ray-based tomographic inversion. In addition, this method does not need to estimate the source wavelet, which is a necessity for receiver-domain wave-equation velocity inversion. Furthermore, we applied our method to a synthetic dataset; the results show that our method can generate a reasonable background velocity even in the presence of shingled first arrivals, and can provide a good initial velocity model for conventional full waveform inversion (FWI).

  15. Volumetric evaluation of dual-energy perfusion CT by the presence of intrapulmonary clots using a 64-slice dual-source CT

    International Nuclear Information System (INIS)

    Okada, Munemasa; Nakashima, Yoshiteru; Kunihiro, Yoshie; Nakao, Sei; Matsunaga, Naofumi; Morikage, Noriyasu; Sano, Yuichi; Suga, Kazuyoshi

    2013-01-01

    Background: Dual-energy perfusion CT (DEpCT) directly represents the iodine distribution in lung parenchyma, and low-perfusion areas caused by intrapulmonary clots (IPCs) are visualized as low-attenuation areas. Purpose: To evaluate whether volumetric evaluation of DEpCT can be used as a predictor of right heart strain in the presence of IPCs. Material and Methods: One hundred and ninety-six patients suspected of having acute pulmonary embolism (PE) underwent DEpCT using a 64-slice dual-source CT. DEpCT images were three-dimensionally reconstructed with four threshold ranges: 1-120 HU (V120), 1-15 HU (V15), 1-10 HU (V10), and 1-5 HU (V5). Each relative ratio per V120 was expressed as %V15, %V10, and %V5. Volumetric datasets were compared with D-dimer, pulmonary arterial (PA) pressure, right ventricular (RV) diameter, RV/left ventricular (RV/LV) diameter ratio, PA diameter, and PA/aorta (PA/Ao) diameter ratio. The areas under the ROC curves (AUCs) were examined for their relationship to the presence of IPCs. This study was approved by the local ethics committee. Results: PA pressure and D-dimer were significantly higher in the patients who had IPCs. In the patients with IPCs, V15, V10, V5, %V15, %V10, and %V5 were also significantly higher than in those without IPCs (P = 0.001). %V5 had a better correlation with D-dimer (r = 0.30). Volumetric evaluation of DEpCT correlated with D-dimer and RV/LV diameter ratio, and the relative ratio of volumetric CT measurements with a lower attenuation threshold might be recommended for the analysis of acute PE
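    The threshold volumes and their relative ratios reduce to voxel counting over HU ranges; a sketch, in which the iodine-map array and the voxel volume are assumed inputs from the scanner reconstruction.

        import numpy as np

        def iodine_volumes(iodine_map_hu, voxel_volume_ml):
            # Count voxels inside each threshold range and convert to volumes.
            ranges = {"V120": (1, 120), "V15": (1, 15), "V10": (1, 10), "V5": (1, 5)}
            v = {name: ((iodine_map_hu >= lo) & (iodine_map_hu <= hi)).sum() * voxel_volume_ml
                 for name, (lo, hi) in ranges.items()}
            ratios = {f"%{k}": 100.0 * v[k] / v["V120"] for k in ("V15", "V10", "V5")}
            return v, ratios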

  16. Dataset for: An efficient multi-stage algorithm for full calibration of the hemodynamic model from BOLD signal responses

    KAUST Repository

    Djellouli, Rabia

    2017-01-01

    We propose a computational strategy that falls into the category of prediction/correction iterative-type approaches, for calibrating the hemodynamic model introduced by Friston et al. (2000). The proposed method is employed to estimate consecutively the values of the biophysiological system parameters and the external stimulus characteristics of the model. Numerical results corresponding to both synthetic and real functional Magnetic Resonance Imaging (fMRI) measurements for a single stimulus as well as for multiple stimuli are reported to highlight the capability of this computational methodology to fully calibrate the considered hemodynamic model.

  17. Viability of Controlling Prosthetic Hand Utilizing Electroencephalograph (EEG) Dataset Signal

    Science.gov (United States)

    Miskon, Azizi; A/L Thanakodi, Suresh; Raihan Mazlan, Mohd; Mohd Haziq Azhar, Satria; Nooraya Mohd Tawil, Siti

    2016-11-01

    This project presents the development of an artificial hand controlled by electroencephalograph (EEG) signal datasets for prosthetic applications. The EEG signal datasets were used to improve the way the prosthetic hand is controlled compared to electromyograph (EMG) control. EMG has disadvantages for a person who has not used the muscles for a long time, and also for persons with degenerative issues due to age. Thus, the EEG datasets were found to be an alternative to EMG. The datasets used in this work were taken from the Brain Computer Interface (BCI) Project. The datasets were already classified for open, close and combined movement operations. They served as inputs to control the prosthetic hand through an interface system between Microsoft Visual Studio and Arduino. The obtained results reveal the prosthetic hand to be more efficient and faster in response to the EEG datasets when an additional LiPo (Lithium Polymer) battery is attached to the prosthetic. Some limitations were also identified in terms of the hand movements and the weight of the prosthetic, and suggestions for improvement are presented in this paper. Overall, the objective of this paper was achieved, as the prosthetic hand was found to be feasible in operation utilizing the EEG datasets.

  18. Sparse Group Penalized Integrative Analysis of Multiple Cancer Prognosis Datasets

    Science.gov (United States)

    Liu, Jin; Huang, Jian; Xie, Yang; Ma, Shuangge

    2014-01-01

    SUMMARY In cancer research, high-throughput profiling studies have been extensively conducted, searching for markers associated with prognosis. Because of the “large d, small n” characteristic, results generated from the analysis of a single dataset can be unsatisfactory. Recent studies have shown that integrative analysis, which simultaneously analyzes multiple datasets, can be more effective than single-dataset analysis and classic meta-analysis. Most existing integrative analyses assume the homogeneity model, which postulates that different datasets share the same set of markers, and several approaches have been designed to reinforce this assumption. In practice, different datasets may differ in terms of patient selection criteria, profiling techniques, and many other aspects. Such differences may make the homogeneity model too restrictive. In this study, we assume the heterogeneity model, under which different datasets are allowed to have different sets of markers. With multiple cancer prognosis datasets, we adopt the AFT (accelerated failure time) model to describe survival. This model may have the lowest computational cost among popular semiparametric survival models. For marker selection, we adopt a sparse group MCP (minimax concave penalty) approach. This approach has an intuitive formulation and can be computed using an effective group coordinate descent algorithm. A simulation study shows that it outperforms existing approaches under both the homogeneity and heterogeneity models. Data analysis further demonstrates the merits of the heterogeneity model and the proposed approach. PMID:23938111
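    For reference, the scalar MCP has a simple closed form; the sketch below evaluates it elementwise (the paper's sparse group version applies the penalty at both the group and within-group levels, which is omitted here).

        import numpy as np

        def mcp_penalty(beta, lam, gamma=3.0):
            # p(t) = lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam, else gamma*lam^2/2,
            # so large coefficients incur a flat penalty and are not shrunk further.
            t = np.abs(beta)
            inner = lam * t - t ** 2 / (2.0 * gamma)
            return np.where(t <= gamma * lam, inner, gamma * lam ** 2 / 2.0)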

  19. Enhancing the discrimination accuracy between metastases, gliomas and meningiomas on brain MRI by volumetric textural features and ensemble pattern recognition methods.

    Science.gov (United States)

    Georgiadis, Pantelis; Cavouras, Dionisis; Kalatzis, Ioannis; Glotsos, Dimitris; Athanasiadis, Emmanouil; Kostopoulos, Spiros; Sifaki, Koralia; Malamas, Menelaos; Nikiforidis, George; Solomou, Ekaterini

    2009-01-01

    Three-dimensional (3D) texture analysis of volumetric brain magnetic resonance (MR) images has been identified as an important indicator for discriminating among different brain pathologies. The purpose of this study was to evaluate the efficiency of 3D textural features using a pattern recognition system in the task of discriminating benign, malignant and metastatic brain tissues on T1 postcontrast MR imaging (MRI) series. The dataset consisted of 67 brain MRI series obtained from patients with verified and untreated intracranial tumors. The pattern recognition system was designed as an ensemble classification scheme employing a support vector machine classifier, specially modified in order to integrate the least squares features transformation logic in its kernel function. The latter, in conjunction with the use of 3D textural features, boosted the performance of the system in discriminating metastatic, malignant and benign brain tumors, with 77.14%, 89.19% and 93.33% accuracy, respectively. The method was evaluated using an external cross-validation process; thus, results might be considered indicative of the generalization performance of the system to "unseen" cases. The proposed system might be used as an assisting tool for brain tumor characterization on volumetric MRI series.

  20. Sigma-2 receptor ligands QSAR model dataset

    Directory of Open Access Journals (Sweden)

    Antonio Rescifina

    2017-08-01

    Full Text Available The data have been obtained from the Sigma-2 Receptor Selective Ligands Database (S2RSLDB) and refined according to the QSAR requirements. These data provide information about a set of 548 Sigma-2 (σ2) receptor ligands selective over the Sigma-1 (σ1) receptor. The development of the QSAR model has been undertaken with the use of CORAL software using SMILES, molecular graphs and hybrid descriptors (SMILES and graph together). The data reported here include the regressions for the σ2 receptor pKi QSAR models. The QSAR model was also employed to predict the σ2 receptor pKi values of the FDA-approved drugs that are herewith included.

  1. A dataset from bottom trawl survey around Taiwan

    Directory of Open Access Journals (Sweden)

    Kwang-tsao Shao

    2012-05-01

    Full Text Available Bottom trawl fishery is one of the most important coastal fisheries in Taiwan in both production and economic value. However, its annual production started to decline in the 1980s due to overfishing. Its bycatch problem also seriously damages the fishery resource. Thus, the government banned bottom fishery within 3 nautical miles of the shoreline in 1989. To evaluate the effectiveness of this policy, a four-year survey was conducted from 2000-2003 in the waters around Taiwan and Penghu (Pescadores Islands), one region each year, respectively. All fish specimens collected from trawling were brought back to the lab for identification, individual counts and body weight measurement. These raw data have been integrated and established in the Taiwan Fish Database (http://fishdb.sinica.edu.tw). They have also been published through TaiBIF (http://taibif.tw), FishBase and GBIF (websites see below). This dataset contains 631 fish species and 3,529 records, making it the most complete dataset of demersal fish fauna and their temporal and spatial distribution on soft marine habitats in Taiwan.

  2. Privacy-preserving record linkage on large real world datasets.

    Science.gov (United States)

    Randall, Sean M; Ferrante, Anna M; Boyd, James H; Bauer, Jacqueline K; Semmens, James B

    2014-08-01

    Record linkage typically involves the use of dedicated linkage units who are supplied with personally identifying information to determine individuals from within and across datasets. The personally identifying information supplied to linkage units is separated from clinical information prior to release by data custodians. While this substantially reduces the risk of disclosure of sensitive information, some residual risks still exist and remain a concern for some custodians. In this paper we trial a method of record linkage which reduces privacy risk still further on large real-world administrative data. The method uses encrypted personally identifying information (Bloom filters) in a probability-based linkage framework. The privacy-preserving linkage method was tested on ten years of New South Wales (NSW) and Western Australian (WA) hospital admissions data, comprising in total over 26 million records. No difference in linkage quality was found when the results were compared to traditional probabilistic methods using full unencrypted personal identifiers. This presents a possible means of reducing privacy risks related to record linkage in population-level research studies. It is hoped that through adaptations of this method or similar privacy-preserving methods, risks related to information disclosure can be reduced so that the benefits of linked research can be fully realised. Copyright © 2013 Elsevier Inc. All rights reserved.
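    The encrypted identifiers mentioned above are typically built by hashing character bigrams into Bloom filters and comparing filters with a set similarity such as the Dice coefficient; the following sketch uses illustrative parameters, not those of the study.

        import hashlib

        def bloom_encode(name, num_bits=1000, num_hashes=10):
            # Map each character bigram to num_hashes bit positions.
            bits = set()
            padded = f"_{name.lower()}_"
            for i in range(len(padded) - 1):
                bigram = padded[i:i + 2]
                for k in range(num_hashes):
                    h = hashlib.sha256(f"{k}:{bigram}".encode()).hexdigest()
                    bits.add(int(h, 16) % num_bits)
            return bits

        def dice_similarity(a, b):
            # Dice coefficient between two Bloom filters (sets of set bit positions).
            return 2.0 * len(a & b) / (len(a) + len(b)) if (a or b) else 1.0

        print(dice_similarity(bloom_encode("smith"), bloom_encode("smyth")))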

  3. Genomics dataset on unclassified published organism (patent US 7547531

    Directory of Open Access Journals (Sweden)

    Mohammad Mahfuz Ali Khan Shawan

    2016-12-01

    Full Text Available Nucleotide (DNA) sequence analysis provides important clues regarding the characteristics and taxonomic position of an organism, and is thus crucial for learning about the hierarchical classification of that organism. This dataset (patent US 7547531) was chosen to simplify the complex raw data buried in undisclosed DNA sequences, which helps open doors for new collaborations. In this data, a total of 48 unidentified DNA sequences from patent US 7547531 were selected and their complete sequences were retrieved from the NCBI BioSample database. A quick response (QR) code for those DNA sequences was constructed with the DNA BarID tool. The QR code is useful for the identification and comparison of isolates with other organisms. The AT/GC content of the DNA sequences was determined using the ENDMEMO GC Content Calculator, which indicates their stability at different temperatures. The highest GC content was observed in GP445188 (62.5%), which was followed by GP445198 (61.8%) and GP445189 (59.44%), while the lowest was in GP445178 (24.39%). In addition, the New England BioLabs (NEB) database was used to identify the cleavage code indicating the 5', 3' and blunt ends, and the enzyme code indicating the methylation sites of the DNA sequences was also shown. These data will be helpful for the construction of the organism's hierarchical classification, determination of its phylogenetic and taxonomic position and revelation of its molecular characteristics.
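    The AT/GC computation itself is elementary; a sketch consistent with the reported percentages:

        def gc_content(seq):
            # Percentage of G and C bases; higher GC generally implies a more
            # thermally stable duplex.
            seq = seq.upper()
            return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

        print(f"{gc_content('ATGCGCGTTA'):.1f}%")   # 50.0%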

  4. The Effect of Volumetric Porosity on Roughness Element Drag

    Science.gov (United States)

    Gillies, John; Nickling, William; Nikolich, George; Etyemezian, Vicken

    2016-04-01

    Much attention has been given to understanding how the porosity of two-dimensional structures affects the drag force exerted by boundary-layer flow on these flow obstructions. Porous structures such as wind breaks and fences are typically used to control the sedimentation of sand and snow particles or to create micro-habitats in their lee. Vegetation in drylands also exerts control on sediment transport by wind due to aerodynamic effects and interaction with particles in transport. Recent research has also demonstrated that large spatial arrays of solid three-dimensional roughness elements can be used to reduce sand transport to specified targets for control of wind erosion through the effect of drag partitioning and the interaction of the moving sand with the large (>0.3 m high) roughness elements, but porous elements may improve the effectiveness of this approach. A thorough understanding of the role porosity plays in affecting the drag force on three-dimensional forms is lacking. To provide a basic understanding of the relationship between the porosity of roughness elements and the force of drag exerted on them by fluid flow, we undertook a wind tunnel study that systematically altered the porosity of roughness elements of defined geometry (cubes, rectangular cylinders, and round cylinders) and measured the associated change in the drag force on the elements under similar Reynolds number conditions. The elements tested were of four basic forms: 1) same-sized cubes with tubes of known diameter milled through them, creating three volumetric porosity values and increasing connectivity between the tubes, 2) cubes and rectangular cylinders constructed of brass screen that nested within each other, and 3) round cylinders constructed of brass screen that nested within each other. The two-dimensional porosity, defined as the ratio of the total surface area of the empty space to the solid surface area of the side of the element presented to the fluid flow, was conserved at 0.519 for

  5. Tension in the recent Type Ia supernovae datasets

    International Nuclear Information System (INIS)

    Wei, Hao

    2010-01-01

    In the present work, we investigate the tension in the recent Type Ia supernovae (SNIa) datasets Constitution and Union. We show that they are in tension not only with the observations of the cosmic microwave background (CMB) anisotropy and the baryon acoustic oscillations (BAO), but also with other SNIa datasets such as Davis and SNLS. Then, we find the main sources responsible for the tension. Further, we make this more robust by employing the method of random truncation. Based on the results of this work, we suggest two truncated versions of the Union and Constitution datasets, namely the UnionT and ConstitutionT SNIa samples, whose behaviors are more regular.

  6. Automatic interactive optimization for volumetric modulated arc therapy planning

    International Nuclear Information System (INIS)

    Tol, Jim P; Dahele, Max; Peltola, Jarkko; Nord, Janne; Slotman, Ben J; Verbakel, Wilko FAR

    2015-01-01

    Intensity modulated radiotherapy treatment planning for sites with many different organs-at-risk (OAR) is complex and labor-intensive, making it hard to obtain consistent plan quality. With the aim of addressing this, we developed a program (automatic interactive optimizer, AIO) designed to automate the manual interactive process for the Eclipse treatment planning system. We describe AIO and present initial evaluation data. Our current institutional volumetric modulated arc therapy (RapidArc) planning approach for head and neck tumors places 3-4 adjustable OAR optimization objectives along the dose-volume histogram (DVH) curve that is displayed in the optimization window. AIO scans this window and uses color-coding to differentiate between the DVH lines, allowing it to automatically adjust the location of the optimization objectives frequently and in a more consistent fashion. We compared RapidArc AIO plans (using 9 optimization objectives per OAR) with the clinical plans of 10 patients, and evaluated optimal AIO settings. AIO consistency was tested by replanning a single patient 5 times. Average V95&V107 of the boost planning target volume (PTV) and V95 of the elective PTV differed by ≤0.5%, while average elective PTV V107 improved by 1.5%. Averaged over all patients, AIO reduced mean doses to individual salivary structures by 0.9-1.6Gy and provided mean dose reductions of 5.6Gy and 3.9Gy to the composite swallowing structures and oral cavity, respectively. Re-running AIO five times resulted in the aforementioned parameters differing by less than 3%. Using the same planning strategy as manually optimized head and neck plans, AIO can automate the interactive Eclipse treatment planning process and deliver dosimetric improvements over existing clinical plans

  7. An MRI volumetric study for leg muscles in congenital clubfoot.

    Science.gov (United States)

    Ippolito, Ernesto; Dragoni, Massimiliano; Antonicoli, Marco; Farsetti, Pasquale; Simonetti, Giovanni; Masala, Salvatore

    2012-10-01

    To investigate both volume and length of the three muscle compartments of the normal and the affected leg in unilateral congenital clubfoot. Volumetric magnetic resonance imaging (VMRI) of the anterior, lateral and postero-medial muscular compartments of both the normal and the clubfoot leg was obtained in three groups of seven patients each, whose mean ages were, respectively, 4.8 months, 11.1 months and 4.7 years. At diagnosis, all the unilateral congenital clubfeet had a Pirani score ranging from 4.5 to 5.5 points, and all of them had been treated according to a strict Ponseti protocol. All the feet had percutaneous lengthening of the Achilles tendon. A mean difference in both volume and length was found between the three muscular compartments of the leg, with the muscles of the clubfoot side being thinner and shorter than those of the normal side. The distal tendons of the tibialis anterior, peroneus longus and triceps surae (Achilles tendon) were longer than normal on the clubfoot side. Our study shows that the three muscle compartments of the clubfoot leg are thinner and shorter than normal in the patients of all three groups. The difference in musculature volume of the postero-medial compartment between the normal and the affected side increased nine-fold from age group 2 to 3, while the difference in length increased by 20%, thus showing that the muscles of the postero-medial compartment tend to grow in both thickness and length much less than the muscles of the other leg compartments.

  8. Semiautomatic segmentation of liver metastases on volumetric CT images

    International Nuclear Information System (INIS)

    Yan, Jiayong; Schwartz, Lawrence H.; Zhao, Binsheng

    2015-01-01

    Purpose: Accurate segmentation and quantification of liver metastases on CT images are critical to surgery/radiation treatment planning and therapy response assessment. To date, there are no reliable methods to perform such segmentation automatically. In this work, the authors present a method for semiautomatic delineation of liver metastases on contrast-enhanced volumetric CT images. Methods: The first step is to manually place a seed region-of-interest (ROI) in the lesion on an image. This ROI will (1) serve as an internal marker and (2) assist in automatically identifying an external marker. With these two markers, the lesion contour on the image can be accurately delineated using traditional watershed transformation. Density information is then extracted from the segmented 2D lesion to help determine the 3D connected object that is a candidate for the lesion volume. The authors have developed a robust strategy to automatically determine internal and external markers for marker-controlled watershed segmentation. By manually placing a seed region-of-interest in the lesion to be delineated on a reference image, the method can automatically determine dual threshold values to approximately separate the lesion from its surrounding structures and refine the thresholds from the segmented lesion for the accurate segmentation of the lesion volume. This method was applied to 69 liver metastases (1.1-10.3 cm in diameter) from a total of 15 patients. An independent radiologist manually delineated all lesions, and the resultant lesion volumes served as the “gold standard” for validation of the method's accuracy. Results: The algorithm achieved a median overlap, overestimation ratio, and underestimation ratio of 82.3%, 6.0%, and 11.5%, respectively, and a median average boundary distance of 1.2 mm. Conclusions: Preliminary results have shown that volumes of liver metastases on contrast-enhanced CT images can be accurately estimated by a semiautomatic segmentation
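
    The marker-controlled watershed step described above can be sketched with standard image-processing tools. This is a minimal illustration assuming scikit-image and SciPy; the dilation-based external marker is a simplified stand-in for the authors' automatic marker search:

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    def segment_lesion_2d(image, seed_mask, dilate_iter=15):
        """Marker-controlled watershed around a user-placed seed ROI.

        image: 2D CT slice (float array); seed_mask: boolean array of the
        manually placed internal ROI. The external marker is derived by
        dilating the seed and taking everything outside it.
        """
        internal = ndi.binary_erosion(seed_mask)                     # lesion marker
        external = ~ndi.binary_dilation(seed_mask, iterations=dilate_iter)
        markers = np.zeros(image.shape, dtype=np.int32)
        markers[internal] = 1                                        # lesion label
        markers[external] = 2                                        # background label
        gradient = sobel(image)                                      # edges guide the flood
        labels = watershed(gradient, markers)
        return labels == 1                                           # lesion mask

    # Example: circular synthetic "lesion" darker than its background.
    yy, xx = np.mgrid[:128, :128]
    img = 1.0 - 0.6 * (((xx - 64) ** 2 + (yy - 64) ** 2) < 20 ** 2)
    seed = ((xx - 64) ** 2 + (yy - 64) ** 2) < 8 ** 2
    print(segment_lesion_2d(img, seed).sum(), "pixels segmented")
    ```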

  9. Hepatosplenic volumetric assessment at MDCT for staging liver fibrosis

    Energy Technology Data Exchange (ETDEWEB)

    Pickhardt, Perry J.; Malecki, Kyle; Hunt, Oliver F.; Beaumont, Claire; Kloke, John; Ziemlewicz, Timothy J.; Lubner, Meghan G. [University of Wisconsin School of Medicine and Public Health, Department of Radiology, Madison, WI (United States)

    2017-07-15

    To investigate hepatosplenic volumetry at MDCT for non-invasive prediction of hepatic fibrosis. Hepatosplenic volume analysis in 624 patients (mean age, 48.8 years; 311 M/313 F) at MDCT was performed using dedicated software and compared against pathological fibrosis stage (F0 = 374; F1 = 48; F2 = 40; F3 = 65; F4 = 97). The liver segmental volume ratio (LSVR) was defined as the volume of Couinaud segments I-III divided by that of segments IV-VIII. All pre-cirrhotic fibrosis stages (METAVIR F1-F3) were based on liver biopsy within 1 year of MDCT. LSVR and total splenic volumes increased with stage of fibrosis, with mean (±SD) values of: F0: 0.26 ± 0.06 and 215.1 ± 88.5 cm³; F1: 0.25 ± 0.08 and 294.8 ± 153.4 cm³; F2: 0.33 ± 0.12 and 291.6 ± 197.1 cm³; F3: 0.39 ± 0.15 and 509.6 ± 402.6 cm³; F4: 0.56 ± 0.30 and 790.7 ± 450.3 cm³, respectively. Total hepatic volumes showed poor discrimination (F0: 1674 ± 320 cm³; F4: 1631 ± 691 cm³). For discriminating advanced fibrosis (≥F3), the ROC AUC values for LSVR, total liver volume, splenic volume and LSVR/spleen combined were 0.863, 0.506, 0.890 and 0.947, respectively. Relative changes in segmental liver volumes and total splenic volume allow for non-invasive staging of hepatic fibrosis, whereas total liver volume is a poor predictor. Unlike liver biopsy or elastography, these CT volumetric biomarkers can be obtained retrospectively on routine scans obtained for other indications. (orig.)
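
    The LSVR itself is a simple ratio of Couinaud segment volumes. A minimal sketch, assuming segment volumes come from CT volumetry software:

    ```python
    def lsvr(segment_volumes_cm3):
        """Liver segmental volume ratio: Couinaud segments I-III over IV-VIII.

        segment_volumes_cm3: dict mapping Couinaud segment number (1-8)
        to its volume in cm^3.
        """
        lateral = sum(segment_volumes_cm3[s] for s in (1, 2, 3))
        main = sum(segment_volumes_cm3[s] for s in (4, 5, 6, 7, 8))
        return lateral / main

    # Toy example: values loosely in the range reported for F0 livers.
    volumes = {1: 40, 2: 120, 3: 180, 4: 220, 5: 280, 6: 300, 7: 350, 8: 180}
    print(f"LSVR = {lsvr(volumes):.2f}")   # ~0.26 for a normal liver
    ```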

  10. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.

    Science.gov (United States)

    Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images with various tools and techniques in order to generate 3D renderings for applications such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video export, which prevents further optimization and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of parameter-applied images, which would cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study a DICOM-compliant object called 3D Presentation States (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering, and the important parameters of each task are determined. Special focus is given to the compression of segmented data, the parameterization of the rendering process, and the DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which required multiple segmentations and visualizations during the radiologists' workflow. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
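
    As a rough illustration of the parameterization idea, a presentation-state-like object can be assembled with pydicom by serializing the four-stage pipeline parameters into a private tag. This is a sketch only; the private group, creator string, and JSON payload are assumptions, not the 3DPR encoding proposed in the study:

    ```python
    import json
    from pydicom.dataset import Dataset
    from pydicom.uid import generate_uid

    def build_3dpr_like_object(render_params, referenced_series_uid):
        """Sketch of a presentation-state-style object for 3D parameters.

        render_params: dict of pipeline settings (pre-processing,
        segmentation reference, post-processing, rendering/camera).
        """
        ds = Dataset()
        ds.SOPInstanceUID = generate_uid()
        ds.Modality = "PR"  # presentation-state family
        ds.ContentDescription = "Experimental 3D presentation state"
        ref = Dataset()
        ref.SeriesInstanceUID = referenced_series_uid
        ds.ReferencedSeriesSequence = [ref]
        # Serialize the four-stage pipeline parameters into a private tag
        # (group/creator are made up for illustration).
        ds.add_new((0x0071, 0x0010), "LO", "EXAMPLE-3DPR")   # private creator
        ds.add_new((0x0071, 0x1001), "UT", json.dumps(render_params))
        return ds

    params = {
        "preprocessing": {"window": 400, "level": 40},
        "segmentation": {"labelmap_uid": "1.2.3.4"},
        "postprocessing": {"smoothing_iterations": 10},
        "rendering": {"mode": "volume", "camera": {"azimuth": 30, "elevation": 15}},
    }
    print(build_3dpr_like_object(params, referenced_series_uid=generate_uid()))
    ```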

  11. Need and trends of volumetric tests in recurring inspection of pressurized components in pressurized water reactors

    International Nuclear Information System (INIS)

    Bergemann, W.

    1982-01-01

    On the basis of the types of stress occurring in nuclear power plants and of practical results, it has been shown that cracks in primary circuit components arise due to operating stresses both at the material surfaces and in the bulk of the material. For this reason, volumetric materials testing is necessary in addition to surface testing. An outlook is given on the trends of volumetric testing. (author)

  12. Comparing the accuracy of food outlet datasets in an urban environment

    Directory of Open Access Journals (Sweden)

    Michelle S. Wong

    2017-05-01

    Full Text Available Studies that investigate the relationship between the retail food environment and health outcomes often use geospatial datasets. Prior studies have identified challenges of using the most common data sources. Retail food environment datasets created through academic-government partnership present an alternative, but their validity (retail existence, type, location) has not yet been assessed. In our study, we used ground-truth data to compare the validity of two datasets on the retail food environment in two low-income, inner-city neighbourhoods in Baltimore City: a 2015 commercial dataset (InfoUSA) and data collected from 2012 to 2014 through the Maryland Food Systems Mapping Project (MFSMP), an academic-government partnership. We compared the sensitivity and positive predictive value (PPV) of the commercial and academic-government partnership data against ground-truth data for two broad categories of unhealthy food retailers: small food retailers and quick-service restaurants. Ground-truth data were collected in 2015 and analysed in 2016. Compared to the ground-truth data, MFSMP and InfoUSA generally had similar sensitivity, greater than 85%. MFSMP had higher PPV than InfoUSA for both small food retailers (MFSMP: 56.3% vs InfoUSA: 40.7%) and quick-service restaurants (MFSMP: 58.6% vs InfoUSA: 36.4%). We conclude that data from academic-government partnerships like MFSMP might be an attractive alternative and an improvement over relying only on commercial data. Other research institutes or cities might consider efforts to create and maintain such an environmental dataset. Even if these datasets cannot be updated on an annual basis, they are likely more accurate than commercial data.
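
    The two validation metrics used here reduce to simple count ratios. A minimal sketch; the counts below are illustrative, not the study's raw numbers:

    ```python
    def validation_metrics(true_positives, false_positives, false_negatives):
        """Sensitivity and positive predictive value for retail-listing checks.

        A listed outlet confirmed on the ground is a true positive; a listed
        outlet not found is a false positive; a ground-truth outlet missing
        from the list is a false negative.
        """
        sensitivity = true_positives / (true_positives + false_negatives)
        ppv = true_positives / (true_positives + false_positives)
        return sensitivity, ppv

    sens, ppv = validation_metrics(true_positives=87, false_positives=67,
                                   false_negatives=10)
    print(f"sensitivity={sens:.1%}, PPV={ppv:.1%}")
    ```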

  13. Volumetric modulated arc therapy for lung stereotactic radiation therapy can achieve high local control rates.

    Science.gov (United States)

    Yamashita, Hideomi; Haga, Akihiro; Takahashi, Wataru; Takenaka, Ryousuke; Imae, Toshikazu; Takenaka, Shigeharu; Nakagawa, Keiichi

    2014-11-11

    The aim of this study was to report the outcome of primary or metastatic lung cancer patients undergoing volumetric modulated arc therapy for stereotactic body radiation therapy (VMAT-SBRT). From October 2010 to December 2013, 67 consecutive lung cancer patients received single-arc VMAT-SBRT using an Elekta Synergy system. All patients were treated with an abdominal compressor. The gross tumor volumes were contoured on 10 respiratory-phase computed tomography (CT) datasets from 4-dimensional (4D) CT and merged into internal target volumes (ITVs). The planning target volume (PTV) margin was taken isotropically as 5 mm. Treatment was performed with a D95 prescription of 50 Gy (43 cases) or 55 Gy (12 cases) in 4 fractions for peripheral tumors, or 56 Gy in 7 fractions (12 cases) for central tumors. The median age was 73 years (range, 59-95 years); 72% of the patients were male and 28% female. The Karnofsky performance status was 90-100% in 39 cases (58%) and 80-90% in 20 cases (30%). The median follow-up was 267 days (range, 40-1162 days). Tissue diagnosis was performed in 41 patients (61%). There were T1 primary lung tumors in 42 patients (T1a in 28, T1b in 14), T2 in 6 patients, T3 in 3 patients, and metastatic lung tumors in 16 patients. The median mean lung dose was 6.87 Gy (range, 2.5-15 Gy). Six patients (9%) developed radiation pneumonitis requiring steroid administration. Actuarial local control rates were 100% and 100% at 1 year, 92% and 75% at 2 years, and 92% and 75% at 3 years in primary and metastatic lung cancer, respectively (p = 0.59). Overall survival rates were 83% and 84% at 1 year, 76% and 53% at 2 years, and 46% and 20% at 3 years in primary and metastatic lung cancer, respectively (p = 0.12). VMAT-based delivery of SBRT in primary and metastatic lung tumors demonstrates high local control rates and a low risk of normal tissue complications.

  14. Volumetric modulated arc therapy for lung stereotactic radiation therapy can achieve high local control rates

    International Nuclear Information System (INIS)

    Yamashita, Hideomi; Haga, Akihiro; Takahashi, Wataru; Takenaka, Ryousuke; Imae, Toshikazu; Takenaka, Shigeharu; Nakagawa, Keiichi

    2014-01-01

    The aim of this study was to report the outcome of primary or metastatic lung cancer patients undergoing volumetric modulated arc therapy for stereotactic body radiation therapy (VMAT-SBRT). From October 2010 to December 2013, 67 consecutive lung cancer patients received single-arc VMAT-SBRT using an Elekta Synergy system. All patients were treated with an abdominal compressor. The gross tumor volumes were contoured on 10 respiratory-phase computed tomography (CT) datasets from 4-dimensional (4D) CT and merged into internal target volumes (ITVs). The planning target volume (PTV) margin was taken isotropically as 5 mm. Treatment was performed with a D95 prescription of 50 Gy (43 cases) or 55 Gy (12 cases) in 4 fractions for peripheral tumors, or 56 Gy in 7 fractions (12 cases) for central tumors. The median age was 73 years (range, 59-95 years); 72% of the patients were male and 28% female. The Karnofsky performance status was 90-100% in 39 cases (58%) and 80-90% in 20 cases (30%). The median follow-up was 267 days (range, 40-1162 days). Tissue diagnosis was performed in 41 patients (61%). There were T1 primary lung tumors in 42 patients (T1a in 28, T1b in 14), T2 in 6 patients, T3 in 3 patients, and metastatic lung tumors in 16 patients. The median mean lung dose was 6.87 Gy (range, 2.5-15 Gy). Six patients (9%) developed radiation pneumonitis requiring steroid administration. Actuarial local control rates were 100% and 100% at 1 year, 92% and 75% at 2 years, and 92% and 75% at 3 years in primary and metastatic lung cancer, respectively (p = 0.59). Overall survival rates were 83% and 84% at 1 year, 76% and 53% at 2 years, and 46% and 20% at 3 years in primary and metastatic lung cancer, respectively (p = 0.12). VMAT-based delivery of SBRT in primary and metastatic lung tumors demonstrates high local control rates and a low risk of normal tissue complications.

  15. Dataset of cocoa aspartic protease cleavage sites

    Directory of Open Access Journals (Sweden)

    Katharina Janek

    2016-09-01

    Full Text Available The data provide information in support of the research article, “The cleavage specificity of the aspartic protease of cocoa beans involved in the generation of the cocoa-specific aroma precursors” (Janek et al., 2016) [1]. Three different protein substrates were partially digested with the aspartic protease isolated from cocoa beans and with commercial pepsin, respectively. The obtained peptide fragments were analyzed by matrix-assisted laser-desorption/ionization time-of-flight mass spectrometry (MALDI-TOF/TOF-MS/MS) and identified using the MASCOT server. The N- and C-terminal ends of the peptide fragments were used to identify the corresponding in-vitro cleavage sites by comparison with the amino acid sequences of the substrate proteins. The same procedure was applied to identify the cleavage sites used by the cocoa aspartic protease during cocoa fermentation, starting from the published amino acid sequences of oligopeptides isolated from fermented cocoa beans. Keywords: Aspartic protease, Cleavage sites, Cocoa, In-vitro proteolysis, Mass spectrometry, Peptides

  16. Dataset of protein species from human liver

    Directory of Open Access Journals (Sweden)

    Stanislav Naryzhny

    2017-06-01

    Full Text Available This article contains data related to the research article entitled “Zipf's law in proteomics” (Naryzhny et al., 2017) [1]. The protein composition of human liver or hepatocarcinoma (HepG2) cell extracts was estimated using a filter-aided sample preparation (FASP) protocol. The protein species/proteoform composition of the human liver was determined by two-dimensional electrophoresis (2-DE) followed by electrospray ionization liquid chromatography-tandem mass spectrometry (ESI LC-MS/MS). For the 2-DE, the gel was stained with Coomassie Brilliant Blue R350, and image analysis was performed with ImageMaster 2D Platinum software (GE Healthcare). The 96 sections in the 2D gel were selected and cut for subsequent ESI LC-MS/MS and protein identification. If the same protein was detected in different sections, it was considered to exist as different protein species/proteoforms. A list of human liver proteoforms detected in this way is presented.

  17. Background qualitative analysis of the European reference life cycle database (ELCD) energy datasets - part II: electricity datasets.

    Science.gov (United States)

    Garraín, Daniel; Fazio, Simone; de la Rúa, Cristina; Recchioni, Marco; Lechón, Yolanda; Mathieux, Fabrice

    2015-01-01

    The aim of this paper is to identify areas of potential improvement in the European Reference Life Cycle Database (ELCD) electricity datasets. The revision is based on the data quality indicators described by the International Life Cycle Data system (ILCD) Handbook, applied on a sectorial basis. These indicators evaluate the technological, geographical and time-related representativeness of the dataset and its appropriateness in terms of completeness, precision and methodology. Results show that the ELCD electricity datasets have very good quality in general terms; nevertheless, some findings and recommendations for improving the quality of the Life Cycle Inventories have been derived. Moreover, these results confirm the quality of the electricity-related datasets for any LCA practitioner, and provide insights into the limitations and assumptions underlying the modelling of the datasets. Given this information, the LCA practitioner will be able to decide whether the use of the ELCD electricity datasets is appropriate based on the goal and scope of the analysis to be conducted. The methodological approach would also be useful for dataset developers and reviewers, in order to improve the overall data quality requirements of databases.

  18. TU-CD-BRB-04: Automated Radiomic Features Complement the Prognostic Value of VASARI in the TCGA-GBM Dataset

    Energy Technology Data Exchange (ETDEWEB)

    Velazquez, E Rios [Dana-Farber Cancer Institute | Harvard Medical School, Boston, MA (United States); Narayan, V [Dana-Farber Cancer Institute, Brigham and Womens Hospital, Harvard Medic, Boston, MA (United States); Grossmann, P [Dana-Farber Cancer Institute/Harvard Medical School, Boston, MA (United States); Dunn, W; Gutman, D [Emory University School of Medicine, Atlanta, GA (United States); Aerts, H [Dana-Farber/Brigham Womens Cancer Center, Boston, MA (United States)

    2015-06-15

    Purpose: To compare the complementary prognostic value of automated Radiomic features to that of radiologist-annotated VASARI features in the TCGA-GBM MRI dataset. Methods: For 96 GBM patients, pre-operative MRI images were obtained from The Cancer Imaging Archive. The abnormal tumor bulks were manually defined on post-contrast T1w images. The contrast-enhancing and necrotic regions were segmented using FAST. From these sub-volumes and the total abnormal tumor bulk, a set of Radiomic features quantifying phenotypic differences based on tumor intensity, shape and texture were extracted from the post-contrast T1w images. Minimum-redundancy-maximum-relevance (MRMR) was used to identify the most informative Radiomic, VASARI and combined Radiomic-VASARI features in 70% of the dataset (training set). Multivariate Cox proportional hazards models were evaluated in 30% of the dataset (validation set) using the C-index for OS. A bootstrap procedure was used to assess significance while comparing the C-indices of the different models. Results: Overall, the Radiomic features showed a moderate correlation with the radiologist-annotated VASARI features (r = −0.37 to 0.49); however, that correlation was stronger for the Tumor Diameter and Proportion of Necrosis VASARI features (r = −0.71 to 0.69). After MRMR feature selection, the best-performing Radiomic, VASARI, and Radiomic-VASARI Cox-PH models showed a validation C-index of 0.56 (p = NS), 0.58 (p = NS) and 0.65 (p = 0.01), respectively. The combined Radiomic-VASARI model C-index was significantly higher than that obtained from either the Radiomic or VASARI model alone (p < 0.001). Conclusion: Quantitative volumetric and textural Radiomic features complement the qualitative and semi-quantitative annotated VASARI feature set. The prognostic value of informative qualitative VASARI features such as Eloquent Brain and Multifocality is increased with the addition of quantitative volumetric and textural features from the
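
    The validation step - fitting a Cox proportional hazards model on the training split and scoring the held-out split with the C-index - can be sketched with the lifelines library. The toy features and random data below are stand-ins, not the TCGA-GBM features:

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.utils import concordance_index

    rng = np.random.default_rng(0)

    # Toy stand-in for a radiomics table: rows are patients, columns are
    # selected features plus overall-survival time and event indicator.
    n = 96
    df = pd.DataFrame({
        "tumor_volume": rng.normal(30, 10, n),
        "texture_entropy": rng.normal(5, 1, n),
        "OS_months": rng.exponential(15, n),
        "event": rng.integers(0, 2, n),
    })

    train, valid = df.iloc[:67], df.iloc[67:]   # roughly a 70/30 split
    cph = CoxPHFitter().fit(train, duration_col="OS_months", event_col="event")

    # C-index on the held-out set: higher risk should mean shorter survival.
    risk = cph.predict_partial_hazard(valid)
    print(f"validation C-index: "
          f"{concordance_index(valid['OS_months'], -risk, valid['event']):.2f}")
    ```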

  19. HLA diversity in the 1000 genomes dataset.

    Directory of Open Access Journals (Sweden)

    Pierre-Antoine Gourraud

    Full Text Available The 1000 Genomes Project aims to provide a deep characterization of human genome sequence variation by sequencing at a level that should allow the genome-wide detection of most variants with frequencies as low as 1%. However, in the major histocompatibility complex (MHC), only the top 10 most frequent haplotypes are in the 1% frequency range, whereas thousands of haplotypes are present at lower frequencies. Given the limitation of both the coverage and the read length of the sequences generated by the 1000 Genomes Project, the highly variable positions that define HLA alleles may be difficult to identify. We used classical Sanger sequencing techniques to type the HLA-A, HLA-B, HLA-C, HLA-DRB1 and HLA-DQB1 genes in the available 1000 Genomes samples and combined the results with the 103,310 variants in the MHC region genotyped by the 1000 Genomes Project. Using pairwise identity-by-descent distances between individuals and principal component analysis, we established the relationship between ancestry and genetic diversity in the MHC region. As expected, both the MHC variants and the HLA phenotype can identify the major ancestry lineage, informed mainly by the most frequent HLA haplotypes. To some extent, regions of the genome with similar genetic diversity or similar recombination rates have similar properties. An MHC-centric analysis underlines departures between the ancestral background of the MHC and the genome-wide picture. Our analysis of linkage disequilibrium (LD) decay in these samples suggests that overestimation of pairwise LD occurs due to limited sampling of the MHC diversity. This collection of HLA-specific MHC variants, available on the dbMHC portal, is a valuable resource for future analyses of the role of the MHC in population and disease studies.

  20. Dataset definition for CMS operations and physics analyses

    Science.gov (United States)

    Franzoni, Giovanni; Compact Muon Solenoid Collaboration

    2016-04-01

    Data recorded at the CMS experiment are funnelled into streams, integrated in the HLT menu, and further organised in a hierarchical structure of primary datasets and secondary datasets/dedicated skims. Datasets are defined according to the final-state particles reconstructed by the high level trigger, the data format and the use case (physics analysis, alignment and calibration, performance studies). During the first LHC run, new workflows were added to this canonical scheme to exploit at best the flexibility of the CMS trigger and data acquisition systems. The concepts of data parking and data scouting have been introduced to extend the physics reach of CMS, offering the opportunity of defining physics triggers with extremely loose selections (e.g. a dijet resonance trigger collecting data at 1 kHz). In this presentation, we review the evolution of the dataset definition during LHC Run I, and we discuss the plans for Run II.

  1. U.S. Climate Divisional Dataset (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data has been superseded by a newer version of the dataset. Please refer to NOAA's Climate Divisional Database for more information. The U.S. Climate Divisional...

  2. Karna Particle Size Dataset for Tables and Figures

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset contains 1) table of bulk Pb-XAS LCF results, 2) table of bulk As-XAS LCF results, 3) figure data of particle size distribution, and 4) figure data for...

  3. NOAA Global Surface Temperature Dataset, Version 4.0

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA Global Surface Temperature Dataset (NOAAGlobalTemp) is derived from two independent analyses: the Extended Reconstructed Sea Surface Temperature (ERSST)...

  4. National Hydrography Dataset (NHD) - USGS National Map Downloadable Data Collection

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The USGS National Hydrography Dataset (NHD) Downloadable Data Collection from The National Map (TNM) is a comprehensive set of digital spatial data that encodes...

  5. Watershed Boundary Dataset (WBD) - USGS National Map Downloadable Data Collection

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The Watershed Boundary Dataset (WBD) from The National Map (TNM) defines the perimeter of drainage areas formed by the terrain and other landscape characteristics....

  6. BASE MAP DATASET, LE FLORE COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme, orthographic...

  7. USGS National Hydrography Dataset from The National Map

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — USGS The National Map - National Hydrography Dataset (NHD) is a comprehensive set of digital spatial data that encodes information about naturally occurring and...

  8. A robust dataset-agnostic heart disease classifier from Phonocardiogram.

    Science.gov (United States)

    Banerjee, Rohan; Dutta Choudhury, Anirban; Deshpande, Parijat; Bhattacharya, Sakyajit; Pal, Arpan; Mandana, K M

    2017-07-01

    Automatic classification of normal and abnormal heart sounds is a popular area of research. However, building a robust algorithm unaffected by signal quality and patient demography is a challenge. In this paper we analyse a wide range of phonocardiogram (PCG) features in the time and frequency domains, along with morphological and statistical features, to construct a robust and discriminative feature set for dataset-agnostic classification of normal and cardiac patients. The large, open-access database made available in the PhysioNet 2016 challenge was used for feature selection, internal validation and creation of training models. A second dataset of 41 PCG segments, collected at an Indian hospital using our in-house smartphone-based digital stethoscope, was used for performance evaluation. Our proposed methodology yielded sensitivity and specificity scores of 0.76 and 0.75, respectively, on the test dataset in classifying cardiovascular diseases. The methodology also outperformed three popular prior-art approaches when applied to the same dataset.

  9. AFSC/REFM: Seabird Necropsy dataset of North Pacific

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The seabird necropsy dataset contains information on seabird specimens that were collected under salvage and scientific collection permits primarily by...

  10. Dataset definition for CMS operations and physics analyses

    CERN Document Server

    AUTHOR|(CDS)2051291

    2016-01-01

    Data recorded at the CMS experiment are funnelled into streams, integrated in the HLT menu, and further organised in a hierarchical structure of primary datasets, secondary datasets, and dedicated skims. Datasets are defined according to the final-state particles reconstructed by the high level trigger, the data format and the use case (physics analysis, alignment and calibration, performance studies). During the first LHC run, new workflows were added to this canonical scheme to exploit at best the flexibility of the CMS trigger and data acquisition systems. The concepts of data parking and data scouting have been introduced to extend the physics reach of CMS, offering the opportunity of defining physics triggers with extremely loose selections (e.g. a dijet resonance trigger collecting data at 1 kHz). In this presentation, we review the evolution of the dataset definition during the first run, and we discuss the plans for the second LHC run.

  11. USGS National Boundary Dataset (NBD) Downloadable Data Collection

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The USGS Governmental Unit Boundaries dataset from The National Map (TNM) represents major civil areas for the Nation, including States or Territories, counties (or...

  12. Environmental Dataset Gateway (EDG) CS-W Interface

    Data.gov (United States)

    U.S. Environmental Protection Agency — Use the Environmental Dataset Gateway (EDG) to find and access EPA's environmental resources. Many options are available for easily reusing EDG content in other...

  13. Global Man-made Impervious Surface (GMIS) Dataset From Landsat

    Data.gov (United States)

    National Aeronautics and Space Administration — The Global Man-made Impervious Surface (GMIS) Dataset From Landsat consists of global estimates of fractional impervious cover derived from the Global Land Survey...

  14. Newton SSANTA Dr Water using POU filters dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset contains information about all the features extracted from the raw data files, the formulas that were assigned to some of these features, and the...

  15. Development of a Detailed Volumetric Finite Element Model of the Spine to Simulate Surgical Correction of Spinal Deformities

    Directory of Open Access Journals (Sweden)

    Mark Driscoll

    2013-01-01

    Full Text Available A large spectrum of medical devices exists that aim to correct deformities associated with spinal disorders. The development of a detailed volumetric finite element model of the osteoligamentous spine would serve as a valuable tool to assess, compare, and optimize spinal devices. Thus, the purpose of the study was to develop and initiate validation of a detailed osteoligamentous finite element model of the spine with simulated correction from spinal instrumentation. A finite element model of the spine from T1 to L5 was developed using properties and geometry from the published literature and patient data. Spinal instrumentation, consisting of segmental translation of a scoliotic spine, was emulated. Postoperative patient data and relevant published data on intervertebral disc stress, screw/vertebra pullout forces, and spinal profiles were used to evaluate the model's validity. Intervertebral disc and vertebral reaction stresses respected published in vivo, ex vivo, and in silico values. Screw/vertebra reaction forces agreed with accepted pullout threshold values. Cobb angle measurements of spinal deformity following simulated surgical instrumentation corroborated patient data. This computational biomechanical analysis validated a detailed volumetric spine model. Future studies seek to exploit the model to explore the performance of corrective spinal devices.

  16. The Brain of the Black (Diceros bicornis and White (Ceratotherium simum African Rhinoceroses: Morphology and Volumetrics from Magnetic Resonance Imaging

    Directory of Open Access Journals (Sweden)

    Adhil Bhagwandin

    2017-08-01

    Full Text Available The morphology and volumetrics of the understudied brains of two iconic large terrestrial African mammals, the black (Diceros bicornis) and white (Ceratotherium simum) rhinoceroses, are described. The black rhinoceros is typically solitary whereas the white rhinoceros is social, and both are members of the order Perissodactyla. Here, we provide descriptions of the surface of the brain of each rhinoceros. For both species, we use magnetic resonance images (MRI) to develop a description of the internal anatomy of the rhinoceros brain and to calculate the volume of the amygdala, cerebellum, corpus callosum, hippocampus, and ventricular system, as well as to determine the gyrencephalic index. The morphology of the black and white rhinoceros brains is very similar, although certain minor differences, seemingly related to diet, were noted, and both brains evince the general anatomy of the mammalian brain. The rhinoceros brains display no obvious neuroanatomical specializations in comparison to other mammals previously studied. In addition, the volumetric analyses indicate that the sizes of the various regions of the rhinoceros brain measured, as well as the extent of gyrification, are what would be predicted for a mammal of their brain mass when compared allometrically to previously published data. We conclude that the brains of the black and white rhinoceros exhibit a typically mammalian organization at a superficial level, but histological studies may reveal specializations of interest in relation to rhinoceros behavior.

  17. X-ray volumetric imaging in image-guided radiotherapy: The new standard in on-treatment imaging

    International Nuclear Information System (INIS)

    McBain, Catherine A.; Henry, Ann M.; Sykes, Jonathan; Amer, Ali; Marchant, Tom; Moore, Christopher M.; Davies, Julie; Stratford, Julia; McCarthy, Claire; Porritt, Bridget; Williams, Peter; Khoo, Vincent S.; Price, Pat

    2006-01-01

    Purpose: X-ray volumetric imaging (XVI) for the first time allows for the on-treatment acquisition of three-dimensional (3D) kV cone beam computed tomography (CT) images. Clinical imaging using the Synergy System (Elekta, Crawley, UK) commenced in July 2003. This study evaluated image quality and dose delivered and assessed clinical utility for treatment verification at a range of anatomic sites. Methods and Materials: Single XVIs were acquired from 30 patients undergoing radiotherapy for tumors at 10 different anatomic sites. Patients were imaged in their setup position. Radiation doses received were measured using TLDs on the skin surface. The utility of XVI in verifying target volume coverage was qualitatively assessed by experienced clinicians. Results: X-ray volumetric imaging acquisition was completed in the treatment position at all anatomic sites. At sites where a full gantry rotation was not possible, XVIs were reconstructed from projection images acquired from partial rotations. Soft-tissue definition of organ boundaries allowed direct assessment of 3D target volume coverage at all sites. Individual image quality depended on both imaging parameters and patient characteristics. Radiation dose ranged from 0.003 Gy in the head to 0.03 Gy in the pelvis. Conclusions: On-treatment XVI provided 3D verification images with soft-tissue definition at all anatomic sites at acceptably low radiation doses. This technology sets a new standard in treatment verification and will facilitate novel adaptive radiotherapy techniques.

  18. Cross-validation of two commercial methods for volumetric high-resolution dose reconstruction on a phantom for non-coplanar VMAT beams

    International Nuclear Information System (INIS)

    Feygelman, Vladimir; Stambaugh, Cassandra; Opp, Daniel; Zhang, Geoffrey; Moros, Eduardo G.; Nelms, Benjamin E.

    2014-01-01

    Background and purpose: Delta 4 (ScandiDos AB, Uppsala, Sweden) and ArcCHECK with 3DVH software (Sun Nuclear Corp., Melbourne, FL, USA) are commercial quasi-three-dimensional diode dosimetry arrays capable of volumetric measurement-guided dose reconstruction. A method to reconstruct dose for non-coplanar VMAT beams with 3DVH is described. The Delta 4 3D dose reconstruction on its own phantom for VMAT delivery has not been thoroughly evaluated previously, and we do so by comparison with 3DVH. Materials and methods: Reconstructed volumetric doses for VMAT plans delivered with different table angles were compared between the Delta 4 and 3DVH using gamma analysis. Results: The average γ (2% local dose-error normalization/2 mm) passing rate comparing the directly measured Delta 4 diode dose with 3DVH was 98.2 ± 1.6% (1 SD). The average passing rate for the full volumetric comparison of the reconstructed doses on a homogeneous cylindrical phantom was 95.6 ± 1.5%. No dependence on the table angle was observed. Conclusions: The modified 3DVH algorithm is capable of 3D VMAT dose reconstruction on an arbitrary volume for the full range of table angles. Our comparison results between different dosimeters make a compelling case for the use of electronic arrays with high-resolution 3D dose reconstruction as the primary means of evaluating spatial dose distributions during IMRT/VMAT verification.
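
    The γ comparison underlying these passing rates combines a dose-difference and a distance-to-agreement criterion. A deliberately simplified 1D sketch with local dose-error normalization (real tools search in 3D with interpolation):

    ```python
    import numpy as np

    def gamma_index_1d(dose_eval, dose_ref, spacing_mm, dd_pct=2.0, dta_mm=2.0):
        """Simplified 1D gamma analysis (local dose-error normalization).

        dose_eval, dose_ref: dose profiles on the same grid (Gy);
        spacing_mm: grid spacing. A point passes when gamma <= 1.
        """
        x = np.arange(len(dose_ref)) * spacing_mm
        gamma = np.empty(len(dose_eval))
        for i, (xi, de) in enumerate(zip(x, dose_eval)):
            dist2 = ((x - xi) / dta_mm) ** 2
            # Local normalization: dose difference relative to reference dose.
            dd2 = ((dose_ref - de) / (dose_ref * dd_pct / 100.0)) ** 2
            gamma[i] = np.sqrt(np.min(dist2 + dd2))
        return gamma

    ref = np.ones(101) * 2.0        # flat 2 Gy reference profile
    ev = ref * 1.01                 # evaluated profile 1% hot everywhere
    g = gamma_index_1d(ev, ref, spacing_mm=1.0)
    print(f"passing rate: {np.mean(g <= 1.0):.1%}")
    ```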

  19. Estimating parameters for probabilistic linkage of privacy-preserved datasets.

    Science.gov (United States)

    Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H

    2017-07-10

    Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20%. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by the F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm that allows linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher
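
    The Bloom filter field encoding referred to here is commonly built from character bigrams hashed into a bit array, with similarity between two encodings measured by the Dice coefficient. A minimal sketch; the bit-array size and hash count are illustrative, not the paper's settings:

    ```python
    import hashlib

    def bloom_encode(value, num_bits=1000, num_hashes=30):
        """Encode a field value as a Bloom filter of character bigrams."""
        padded = f"_{value.lower()}_"
        bigrams = {padded[i:i + 2] for i in range(len(padded) - 1)}
        bits = set()
        for gram in bigrams:
            for k in range(num_hashes):
                digest = hashlib.sha1(f"{k}:{gram}".encode()).hexdigest()
                bits.add(int(digest, 16) % num_bits)
        return bits

    def dice(a, b):
        """Dice coefficient between two Bloom filters (sets of set bits)."""
        return 2 * len(a & b) / (len(a) + len(b))

    print(dice(bloom_encode("smith"), bloom_encode("smyth")))   # similar names
    print(dice(bloom_encode("smith"), bloom_encode("jones")))   # different names
    ```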

  20. Toward computational cumulative biology by combining models of biological datasets.

    Science.gov (United States)

    Faisal, Ali; Peltonen, Jaakko; Georgii, Elisabeth; Rung, Johan; Kaski, Samuel

    2014-01-01

    A main challenge of data-driven sciences is how to make maximal use of the progressively expanding databases of experimental datasets in order to keep research cumulative. We introduce the idea of a modeling-based dataset retrieval engine designed for relating a researcher's experimental dataset to earlier work in the field. The search is (i) data-driven, to enable new findings by going beyond the state of the art of keyword searches in annotations, (ii) modeling-driven, to include both biological knowledge and insights learned from data, and (iii) scalable, as it is accomplished without building one unified grand model of all data. Assuming each dataset has been modeled beforehand, by the researchers or automatically by database managers, we apply a rapidly computable and optimizable combination model to decompose a new dataset into contributions from earlier relevant models. By using the data-driven decomposition, we identify a network of interrelated datasets from a large annotated human gene expression atlas. While tissue type and disease were major driving forces for determining relevant datasets, the relationships found were richer, and the model-based search was more accurate than the keyword search; moreover, it recovered biologically meaningful relationships that are not straightforwardly visible from annotations; for instance, between cells in different developmental stages such as thymocytes and T-cells. Data-driven links and citations matched to a large extent; the data-driven links even uncovered corrections to the publication data, as two of the most linked datasets were not highly cited and turned out to have wrong publication entries in the database.
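
    The decomposition idea - expressing a new dataset as a weighted mix of previously modeled ones and ranking earlier datasets by their weights - can be illustrated with non-negative least squares. The paper's actual combination model differs, so this is only a sketch of the principle:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)

    # Columns of M: signatures learned from earlier datasets (for example,
    # mean expression profiles); y: profile of the new query dataset.
    n_genes, n_models = 500, 8
    M = rng.random((n_genes, n_models))
    true_w = np.array([0.7, 0.0, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0])
    y = M @ true_w + rng.normal(0.0, 0.01, n_genes)

    weights, _ = nnls(M, y)                    # non-negative decomposition
    ranking = np.argsort(weights)[::-1]        # most related datasets first
    print("top related datasets:", ranking[:3], np.round(weights[ranking[:3]], 2))
    ```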

  1. Testing the Neutral Theory of Biodiversity with Human Microbiome Datasets

    OpenAIRE

    Li, Lianwei; Ma, Zhanshan (Sam)

    2016-01-01

    The human microbiome project (HMP) has made it possible to test important ecological theories for arguably the most important ecosystem to human health: the human microbiome. The limited number of existing studies have reported conflicting evidence in the case of the neutral theory; the present study aims to comprehensively test the neutral theory with extensive HMP datasets covering all five major body sites inhabited by the human microbiome. Utilizing 7437 datasets of bacterial community samples...

  2. General Purpose Multimedia Dataset - GarageBand 2008

    DEFF Research Database (Denmark)

    Meng, Anders

    This document describes a general purpose multimedia dataset to be used in cross-media machine learning problems. In more detail, we describe the genre taxonomy applied at http://www.garageband.com, from where the dataset was collected, and how that taxonomy has been fused into a more human-understandable taxonomy. Finally, a description of various features extracted from both the audio and text is presented.

  3. Artificial intelligence (AI) systems for interpreting complex medical datasets.

    Science.gov (United States)

    Altman, R B

    2017-05-01

    Advances in machine intelligence have created powerful capabilities in algorithms that find hidden patterns in data, classify objects based on their measured characteristics, and associate similar patients/diseases/drugs based on common features. However, artificial intelligence (AI) applications to medical data face several technical challenges: complex and heterogeneous datasets, noisy medical data, and the need to explain their output to users. There are also social challenges related to intellectual property, data provenance, regulatory issues, economics, and liability. © 2017 ASCPT.

  4. Volumetric modulated arc therapy for spine SBRT patients to reduce treatment time and intrafractional motion

    Directory of Open Access Journals (Sweden)

    Ahmad Amoush

    2015-01-01

    Full Text Available Volumetric modulated arc therapy (VMAT) is an efficient technique to reduce the treatment time and intrafractional motion when treating spine patients presenting with severe back pain. Five patients treated with spine stereotactic body radiation therapy (SBRT) using 9-beam intensity-modulated radiation therapy (IMRT) were retrospectively selected for this study. The patients were replanned using a two-arc VMAT technique. The average mean dose was 104% ± 1.2% and 104.1% ± 1.0% for IMRT and VMAT, respectively (p = 0.9). Accordingly, the average conformal index (CI) was 1.3 ± 0.1 and 1.5 ± 0.3, respectively (p = 0.5). The average dose gradient (DG) distance was 1.5 ± 0.1 cm and 1.4 ± 0.1 cm, respectively (p = 0.3). The average spinal cord maximum dose was 11.6 ± 1.0 Gy and 11.8 ± 1.1 Gy (p = 0.8), and V10Gy was 7.4 ± 1.4 cc and 8.6 ± 1.7 cc (p = 0.4) for IMRT and VMAT, respectively. Accordingly, the average number of monitor units (MUs) was 6771.7 ± 1323.3 MU and 3978 ± 576.7 MU, respectively (p = 0.02). The use of VMAT for spine SBRT patients with severe back pain can reduce the treatment time and intrafractional motion.

  5. Superficial Collagen Fibril Modulus and Pericellular Fixed Charge Density Modulate Chondrocyte Volumetric Behaviour in Early Osteoarthritis

    Directory of Open Access Journals (Sweden)

    Petri Tanska

    2013-01-01

    Full Text Available The aim of this study was to investigate if the experimentally detected altered chondrocyte volumetric behavior in early osteoarthritis can be explained by changes in the extracellular and pericellular matrix properties of cartilage. Based on our own experimental tests and the literature, the structural and mechanical parameters for normal and osteoarthritic cartilage were implemented into a multiscale fibril-reinforced poroelastic swelling model. Model simulations were compared with experimentally observed cell volume changes in mechanically loaded cartilage, obtained from anterior cruciate ligament transected rabbit knees. We found that the cell volume increased by 7% in the osteoarthritic cartilage model following mechanical loading of the tissue. In contrast, the cell volume decreased by 4% in normal cartilage model. These findings were consistent with the experimental results. Increased local transversal tissue strain due to the reduced collagen fibril stiffness accompanied with the reduced fixed charge density of the pericellular matrix could increase the cell volume up to 12%. These findings suggest that the increase in the cell volume in mechanically loaded osteoarthritic cartilage is primarily explained by the reduction in the pericellular fixed charge density, while the superficial collagen fibril stiffness is suggested to contribute secondarily to the cell volume behavior.

  6. High Pressure Adsorption Isotherm of CO2 on Activated Carbon using Volumetric Method

    Directory of Open Access Journals (Sweden)

    Awaludin Martin

    2011-05-01

    Full Text Available Adsorption is one of the most effective methods for separating CO2 from the other substances produced by the burning of fossil fuels. Designing such an application requires, besides the characteristics of the porous material (adsorbent), CO2 adsorption data on the adsorbent (kinetic and thermodynamic). The aim of this research was to produce isothermal adsorption data at pressures up to 3.5 MPa by an indirect (volumetric) method at isothermal temperatures of 300, 308, 318 and 338 K. The adsorbent used in this research was activated carbon made from East Kalimantan coal by a physical (CO2) activation method; the surface area of the activated carbon is 668 m2/g and its pore volume is 0.47 mL/g. The carbon dioxide (CO2) used in this research was of high purity (99.9%). Data from the experiments were then correlated using the Langmuir and Toth equation models. The results showed that the maximum adsorption capacity was 0.314 kg/kg at 300 K and 3384.69 kPa. The average deviations of the regressions of the experimental data using the Langmuir and Toth models were 3.4% and 1.7%, respectively.
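
    Fitting the two isotherm models to measured uptake data is a standard least-squares exercise. A sketch with SciPy; the pressure/uptake points below are illustrative, not the measured data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(P, qm, b):
        """Langmuir isotherm: q = qm * b * P / (1 + b * P)."""
        return qm * b * P / (1.0 + b * P)

    def toth(P, qm, b, t):
        """Toth isotherm: q = qm * b * P / (1 + (b * P)**t)**(1/t)."""
        return qm * b * P / (1.0 + (b * P) ** t) ** (1.0 / t)

    # Illustrative pressure/uptake points (kPa, kg/kg).
    P = np.array([100, 300, 600, 1000, 1500, 2200, 3000, 3400], dtype=float)
    q = np.array([0.05, 0.11, 0.17, 0.22, 0.26, 0.29, 0.31, 0.314])

    popt_l, _ = curve_fit(langmuir, P, q, p0=[0.4, 1e-3], maxfev=10000)
    popt_t, _ = curve_fit(toth, P, q, p0=[0.4, 1e-3, 0.8], maxfev=10000)
    for name, model, popt in [("Langmuir", langmuir, popt_l), ("Toth", toth, popt_t)]:
        dev = np.mean(np.abs(model(P, *popt) - q) / q) * 100.0
        print(f"{name}: params={np.round(popt, 4)}, mean deviation={dev:.1f}%")
    ```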

  7. Enhanced gamma ray sensitivity in bismuth triiodide sensors through volumetric defect control

    International Nuclear Information System (INIS)

    Johns, Paul M.; Baciak, James E.; Nino, Juan C.

    2016-01-01

    Some of the more attractive semiconducting compounds for ambient temperature radiation detector applications are impacted by low charge collection efficiency due to the presence of point and volumetric defects. This has been particularly true in the case of BiI₃, which features very attractive properties (density, atomic number, band gap, etc.) for serving as a gamma ray detector, but has yet to demonstrate its full potential. We show that by applying growth techniques tailored to reduce defects, the spectral performance of this promising semiconductor can be realized. Gamma ray spectra from >100 keV source emissions are now obtained from high quality Sb:BiI₃ bulk crystals with limited concentrations of defects (point and extended). The spectra acquired in these high quality crystals feature photopeaks with a resolution of 2.2% at 662 keV. Infrared microscopy is used to compare the local microstructure between radiation sensitive and non-responsive crystals. This work demonstrates that BiI₃ can be prepared as melt-grown detector-grade samples of superior quality, from which spectra of a variety of gamma ray sources can be acquired.

  8. ConnectomeExplorer: Query-guided visual analysis of large volumetric neuroscience data

    KAUST Repository

    Beyer, Johanna

    2013-12-01

    This paper presents ConnectomeExplorer, an application for the interactive exploration and query-guided visual analysis of large volumetric electron microscopy (EM) data sets in connectomics research. Our system incorporates a knowledge-based query algebra that supports the interactive specification of dynamically evaluated queries, which enable neuroscientists to pose and answer domain-specific questions in an intuitive manner. Queries are built step by step in a visual query builder, building more complex queries from combinations of simpler queries. Our application is based on a scalable volume visualization framework that scales to multiple volumes of several teravoxels each, enabling the concurrent visualization and querying of the original EM volume, additional segmentation volumes, neuronal connectivity, and additional meta data comprising a variety of neuronal data attributes. We evaluate our application on a data set of roughly one terabyte of EM data and 750 GB of segmentation data, containing over 4,000 segmented structures and 1,000 synapses. We demonstrate typical use-case scenarios of our collaborators in neuroscience, where our system has enabled them to answer specific scientific questions using interactive querying and analysis on the full-size data for the first time. © 1995-2012 IEEE.

  9. Hybrid Approach of Aortic Diseases: Zone 1 Delivery and Volumetric Analysis on the Descending Aorta

    Directory of Open Access Journals (Sweden)

    José Augusto Duncan

    Full Text Available Introduction: Conventional techniques for the surgical correction of arch and descending aortic diseases remain high-risk procedures. Endovascular treatment of the abdominal and descending thoracic aorta carries lower surgical risk. The evolution of both techniques - open debranching of the arch and endovascular repair of the descending aorta - may extend a less invasive endovascular treatment to more extensive disease that requires a proximal landing zone in the arch. Objective: To evaluate descending thoracic aortic remodeling by means of volumetric analysis after the hybrid approach of aortic arch debranching and stenting of the descending aorta. Methods: Retrospective review of seven consecutive patients treated between September 2014 and August 2016 for diseases of the proximal descending aorta (aneurysms and dissections) by a hybrid approach delivering the endograft at zone 1. Computed tomography angiography studies were analyzed using dedicated software to calculate descending thoracic aorta volumes pre- and postoperatively. Results: Follow-up was completed in 100% of patients, with a median time of 321 days (range, 41-625 days). No deaths or permanent neurological complications were observed. There were no endoleaks or stent migrations. Freedom from reintervention was 100% at 300 days and 66% at 600 days. Median volume reduction was 45.5 cm3, representing a median volume shrinkage of 9.3%. Conclusion: The hybrid approach to arch and descending thoracic aorta diseases is feasible and leads to favorable aortic remodeling with significant volume reduction.

  10. Detection and Severity Scoring of Chronic Obstructive Pulmonary Disease Using Volumetric Analysis of Lung CT Images

    International Nuclear Information System (INIS)

    Hosseini, Mohammad Parsa; Soltanian-Zadeh, Hamid; Akhlaghpoor, Shahram

    2012-01-01

    Chronic obstructive pulmonary disease (COPD) is a devastating disease. While there is no cure for COPD and the lung damage associated with this disease cannot be reversed, it is still very important to diagnose it as early as possible. In this paper, we propose a novel method based on the measurement of air trapping in the lungs from CT images to detect COPD and to evaluate its severity. Twenty-five patients and twelve normal adults were included in this study. The proposed method found volumetric changes of the lungs from inspiration to expiration. To this end, trachea CT images at full inspiration and expiration were compared, and changes in the areas and volumes of the lungs between inspiration and expiration were used to define quantitative measures (features). Using these features, the subjects were classified into two groups of normal and COPD patients using a Bayesian classifier. In addition, t-tests were applied to evaluate the discrimination power of the features for this classification. For the cases studied, the proposed method estimated air trapping in the lungs from CT images without human intervention. Based on the results, a mathematical model was developed to relate variations of lung volumes to the severity of the disease. As a computer-aided diagnosis (CAD) system, the proposed method may assist radiologists in the detection of COPD, and it quantifies air trapping in the lungs and thus may assist them with scoring the severity of the disease.
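
    The classification step reduces to a volume-change feature fed to a Bayesian classifier. A minimal sketch with scikit-learn's Gaussian naive Bayes; all volumes below are invented for illustration:

    ```python
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    def volume_change_feature(v_inspiration_ml, v_expiration_ml):
        """Fractional lung volume change from inspiration to expiration;
        a smaller change suggests more air trapping."""
        return (v_inspiration_ml - v_expiration_ml) / v_inspiration_ml

    # Illustrative volumes (mL); air trapping reduces the expiratory decrease.
    normal = [(6000, 3200), (5800, 3000), (6200, 3400), (5500, 2900)]
    copd = [(6100, 4900), (5900, 4700), (6300, 5200), (5600, 4600)]
    X = np.array([[volume_change_feature(i, e)] for i, e in normal + copd])
    y = np.array([0] * len(normal) + [1] * len(copd))   # 0 = normal, 1 = COPD

    clf = GaussianNB().fit(X, y)
    print(clf.predict([[volume_change_feature(6000, 5000)]]))   # -> [1], trapped air
    ```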

  11. Volumetric Modulated Arc (Radio) Therapy in Pets Treatment: The “La Cittadina Fondazione” Experience

    Directory of Open Access Journals (Sweden)

    Mario Dolera

    2018-01-01

    Full Text Available Volumetric Modulated Arc Therapy (VMAT) is a modern technique, widely used in human radiotherapy, which allows a high dose to be delivered to tumor volumes and low doses to the surrounding organs at risk (OAR). Veterinary clinics take advantage of this feature because of the small target volumes and the short distances between the target and the OAR. Sparing the OAR permits dose escalation, and hypofractionation regimens reduce the number of treatment sessions, which simplifies patient management in the veterinary field. Multimodal volume definition is mandatory for the small volumes involved, and a precisely reproducible positioning device with setup confirmation is needed before each session to avoid missing the target. Additionally, the elaborate treatment plan must satisfy hard constraints and objectives, and its feasibility must be evaluated with per-patient quality control. The aim of this work is to report results with regard to brain meningiomas and gliomas, trigeminal nerve tumors, brachial plexus tumors, adrenal tumors with vascular invasion, and rabbit thymomas, in comparison with the literature, to determine whether VMAT is a safe and viable alternative to surgery or chemotherapy alone, or as an adjuvant therapy in pets.

  12. Heuristics for Relevancy Ranking of Earth Dataset Search Results

    Science.gov (United States)

    Lynnes, Christopher; Quinn, Patrick; Norton, James

    2016-01-01

    As the variety of Earth science datasets increases, science researchers find it more challenging to discover and select the datasets that best fit their needs. The most common way for search providers to address this problem is to rank the datasets returned for a query by their likely relevance to the user. Large web page search engines typically use text matching supplemented with reverse link counts, semantic annotations, and user intent modeling. However, this produces uneven results when applied to dataset metadata records simply externalized as web pages. Fortunately, data and search providers have decades of experience in serving data user communities, allowing them to form heuristics that leverage the structure in the metadata together with knowledge about the user community. Some of these heuristics include specific ways of matching the user input to the essential measurements in the dataset and determining overlaps of time ranges and spatial areas. Heuristics based on the novelty of the datasets can prioritize later, better versions of data over similar predecessors. And knowledge of how different user types and communities use data can be brought to bear in cases where characteristics of the user (discipline, expertise) or their intent (applications, research) can be divined. The Earth Observing System Data and Information System has begun implementing some of these heuristics in the relevancy algorithm of its Common Metadata Repository search engine.
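
    A toy version of such a heuristic ranker - matching the essential measurement, scoring temporal overlap, and preferring newer versions - might look like the following. The dataset names, weights, and fields are assumptions for illustration:

    ```python
    from datetime import date

    def interval_overlap(a_start, a_end, b_start, b_end):
        """Fraction of query interval b covered by dataset interval a."""
        span = (b_end - b_start).days or 1
        overlap = (min(a_end, b_end) - max(a_start, b_start)).days
        return max(overlap, 0) / span

    def score_dataset(ds, query):
        """Combine simple heuristics into one relevancy score (weights assumed)."""
        s = 0.0
        if query["measurement"].lower() in ds["measurements"]:
            s += 3.0                                   # essential-measurement match
        s += 2.0 * interval_overlap(ds["start"], ds["end"],
                                    query["start"], query["end"])
        s += 0.5 * ds["version"]                       # prefer newer versions
        return s

    datasets = [
        {"name": "MOD11_v5", "measurements": {"land surface temperature"},
         "start": date(2000, 3, 1), "end": date(2015, 1, 1), "version": 5},
        {"name": "MOD11_v6", "measurements": {"land surface temperature"},
         "start": date(2000, 3, 1), "end": date(2023, 1, 1), "version": 6},
    ]
    query = {"measurement": "Land Surface Temperature",
             "start": date(2010, 1, 1), "end": date(2020, 1, 1)}
    for ds in sorted(datasets, key=lambda d: score_dataset(d, query), reverse=True):
        print(ds["name"], round(score_dataset(ds, query), 2))
    ```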

  13. Error characterisation of global active and passive microwave soil moisture datasets

    Directory of Open Access Journals (Sweden)

    W. A. Dorigo

    2010-12-01

    Full Text Available Understanding the error structures of remotely sensed soil moisture observations is essential for correctly interpreting observed variations and trends in the data or assimilating them in hydrological or numerical weather prediction models. Nevertheless, a spatially coherent assessment of the quality of the various globally available datasets is often hampered by the limited availability over space and time of reliable in-situ measurements. As an alternative, this study explores the triple collocation error estimation technique for assessing the relative quality of several globally available soil moisture products from active (ASCAT) and passive (AMSR-E and SSM/I) microwave sensors. Triple collocation is a powerful statistical tool to estimate the root mean square error while simultaneously solving for systematic differences in the climatologies of a set of three linearly related data sources with independent error structures. A prerequisite for this technique is the availability of a sufficiently large number of timely corresponding observations. In addition to the active and passive satellite-based datasets, we used the ERA-Interim and GLDAS-NOAH reanalysis soil moisture datasets as a third, independent reference. The prime objective is to reveal trends in uncertainty related to different observation principles (passive versus active), the use of different frequencies (C-, X-, and Ku-band) for passive microwave observations, and the choice of the independent reference dataset (ERA-Interim versus GLDAS-NOAH). The results suggest that the triple collocation method provides realistic error estimates. Observed spatial trends agree well with the existing theory and studies on the performance of different observation principles and frequencies with respect to land cover and vegetation density. In addition, if all theoretical prerequisites are fulfilled (e.g. a sufficiently large number of common observations is available and errors of the different
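
    In its covariance formulation, triple collocation estimates each product's random-error variance from the pairwise covariances of three collocated, linearly related datasets with mutually independent errors. A minimal sketch of that standard estimator on synthetic data (not the processing chain used in the study):

      import numpy as np

      def triple_collocation_rmse(x, y, z):
          """Error standard deviations of three linearly related series."""
          c = np.cov(np.vstack([x, y, z]))  # 3x3 sample covariance matrix
          e_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
          e_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
          e_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
          # Sampling noise can push an estimated variance slightly below zero.
          return tuple(np.sqrt(max(e, 0.0)) for e in (e_x, e_y, e_z))

      rng = np.random.default_rng(0)
      truth = rng.standard_normal(5000)            # synthetic soil moisture
      x = truth + 0.1 * rng.standard_normal(5000)  # e.g. active product
      y = truth + 0.2 * rng.standard_normal(5000)  # e.g. passive product
      z = truth + 0.3 * rng.standard_normal(5000)  # e.g. reanalysis reference
      print(triple_collocation_rmse(x, y, z))      # roughly (0.1, 0.2, 0.3)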

  14. Dosimetric effects of sectional adjustments of collimator angles on volumetric modulated arc therapy for irregularly-shaped targets.

    Directory of Open Access Journals (Sweden)

    Beom Seok Ahn

    Full Text Available To calculate an optimal collimator angle at each of the sectional arcs in a full-arc volumetric modulated arc therapy (VMAT) plan, and to evaluate the dosimetric quality of these VMAT plans in comparison with full-arc VMAT plans using a fixed collimator angle. Seventeen patients who had irregularly-shaped targets in abdominal, head and neck, and chest cases were selected retrospectively. To calculate an optimal collimator angle at each of the sectional arcs in VMAT, integrated MLC apertures which could cover all shapes of the target determined by beam's-eye view (BEV) within angular sections were obtained for each VMAT plan. The angular sections were 40°, 60°, 90° and 120°. When the collimator settings were rotated at intervals of 2°, we obtained the optimal collimator angle that minimized the area difference between the integrated MLC aperture and the collimator settings with 5 mm margins to the integrated MLC aperture. The VMAT plans with the optimal collimator angles (Colli-VMAT) were generated in Eclipse™. For comparison purposes, full-arc VMAT plans with a fixed collimator angle (Std-VMAT) were generated. The dose-volumetric parameters and total MUs were evaluated. The mean dose-volumetric parameters for the target volume of Colli-VMAT were comparable to those of Std-VMAT. Colli-VMAT improved sparing of most normal organs, except for the brain stem, compared to Std-VMAT for all cases. There was a decreasing tendency in mean total MUs with decreasing angular section. The mean total MUs for Colli-VMAT with the angular section of 40° (434 ± 95 MU, 317 ± 81 MU, and 371 ± 43 MU for abdominal, head and neck, and chest cases, respectively) were lower than those for Std-VMAT (654 ± 182 MU, 517 ± 116 MU, and 533 ± 25 MU, respectively). For an irregularly-shaped target, Colli-VMAT with the angular section of 40° reduced total MUs and improved sparing of normal organs, compared to Std-VMAT.
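
    The core search described above rotates the collimator in 2° steps and keeps the angle whose jaw rectangle fits the integrated MLC aperture most tightly. A sketch of that loop on a binary BEV mask, with a hypothetical pixel margin standing in for the 5 mm margin (the planning system's actual aperture geometry is more involved):

      import numpy as np
      from scipy.ndimage import rotate

      def optimal_collimator_angle(aperture, margin_px, step_deg=2):
          """Angle minimizing (jaw rectangle area - aperture area)."""
          best_angle, best_excess = 0, np.inf
          aperture_area = aperture.sum()
          for angle in range(0, 180, step_deg):
              # Rotating the mask is equivalent to rotating the collimator.
              rot = rotate(aperture.astype(float), angle, order=0, reshape=True) > 0.5
              rows = np.flatnonzero(rot.any(axis=1))
              cols = np.flatnonzero(rot.any(axis=0))
              # Axis-aligned jaw rectangle around the rotated aperture, plus margin.
              height = rows[-1] - rows[0] + 1 + 2 * margin_px
              width = cols[-1] - cols[0] + 1 + 2 * margin_px
              excess = height * width - aperture_area
              if excess < best_excess:
                  best_angle, best_excess = angle, excess
          return best_angle

      # Hypothetical elongated, tilted target aperture.
      yy, xx = np.mgrid[0:100, 0:100]
      mask = (np.abs((xx - 50) - (yy - 50)) < 8) & (np.abs((xx - 50) + (yy - 50)) < 40)
      print(optimal_collimator_angle(mask, margin_px=5))  # ~45 or 135: jaws align with the stripe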

  15. Merged SAGE II, Ozone_cci and OMPS ozone profile dataset and evaluation of ozone trends in the stratosphere

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2017-10-01

    Full Text Available In this paper, we present a merged dataset of ozone profiles from several satellite instruments: SAGE II on ERBS, GOMOS, SCIAMACHY and MIPAS on Envisat, OSIRIS on Odin, ACE-FTS on SCISAT, and OMPS on Suomi-NPP. The merged dataset is created in the framework of the European Space Agency Climate Change Initiative (Ozone_cci) with the aim of analyzing stratospheric ozone trends. For the merged dataset, we used the latest versions of the original ozone datasets. The datasets from the individual instruments have been extensively validated and intercompared; only those datasets which are in good agreement, and do not exhibit significant drifts with respect to collocated ground-based observations and with respect to each other, are used for merging. The long-term SAGE–CCI–OMPS dataset is created by computation and merging of deseasonalized anomalies from individual instruments. The merged SAGE–CCI–OMPS dataset consists of deseasonalized anomalies of ozone in 10° latitude bands from 90° S to 90° N and from 10 to 50 km in steps of 1 km, covering the period from October 1984 to July 2016. This newly created dataset is used for evaluating ozone trends in the stratosphere through multiple linear regression. Negative ozone trends in the upper stratosphere are observed before 1997 and positive trends are found after 1997. The upper stratospheric trends are statistically significant at midlatitudes and indicate ozone recovery, as expected from the decrease of stratospheric halogens that started in the middle of the 1990s and stratospheric cooling.
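
    Merging via deseasonalized anomalies removes each instrument's own climatology before averaging, and the trend is then evaluated by regression on the merged anomalies. A minimal sketch on synthetic monthly series (hypothetical data; the actual processing also involves drift screening, latitude/altitude bins, and a full multiple linear regression model):

      import numpy as np

      def deseasonalized_anomalies(series, months):
          """Subtract the mean seasonal cycle (per calendar month)."""
          out = series.astype(float).copy()
          for m in range(12):
              out[months == m] -= np.nanmean(series[months == m])
          return out

      rng = np.random.default_rng(0)
      t = np.arange(240)                  # 20 years of monthly time steps
      months = t % 12
      trend = 0.02                        # hypothetical trend per month
      # Two "instruments" with different climatologies and noise.
      inst1 = 300 + 10 * np.sin(2 * np.pi * t / 12) + trend * t + rng.standard_normal(240)
      inst2 = 295 + 12 * np.sin(2 * np.pi * t / 12) + trend * t + rng.standard_normal(240)

      merged = np.nanmean(np.vstack([deseasonalized_anomalies(inst1, months),
                                     deseasonalized_anomalies(inst2, months)]), axis=0)
      print(np.polyfit(t, merged, 1)[0])  # estimated slope, close to 0.02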

  16. Volumetric evaluation of dual-energy perfusion CT by the presence of intrapulmonary clots using a 64-slice dual-source CT

    Energy Technology Data Exchange (ETDEWEB)

    Okada, Munemasa; Nakashima, Yoshiteru; Kunihiro, Yoshie; Nakao, Sei; Matsunaga, Naofumi [Dept. of Radiology, Yamaguchi Univ. Graduate School of Medicine, Yamaguchi (Japan)], e-mail: radokada@yamaguchi-u.ac.jp; Morikage, Noriyasu [Medical Bioregulation Dept. of Organ Regulatory Surgery, Yamaguchi Univ. Graduate School of Medicine, Yamaguchi (Japan); Sano, Yuichi [Dept. of Radiology, Yamaguchi Univ. Hospital, Yamaguchi (Japan); Suga, Kazuyoshi [Dept. of Radiology, St Hills Hospital, Yamaguchi (Japan)

    2013-07-15

    Background: Dual-energy perfusion CT (DEpCT) directly represents the iodine distribution in the lung parenchyma, and low-perfusion areas caused by intrapulmonary clots (IPCs) are visualized as low-attenuation areas. Purpose: To evaluate whether volumetric evaluation of DEpCT can be used as a predictor of right heart strain in the presence of IPCs. Material and Methods: One hundred and ninety-six patients suspected of having acute pulmonary embolism (PE) underwent DEpCT using a 64-slice dual-source CT. DEpCT images were three-dimensionally reconstructed with four threshold ranges: 1-120 HU (V120), 1-15 HU (V15), 1-10 HU (V10), and 1-5 HU (V5). Each relative ratio per V120 was expressed as %V15, %V10, and %V5. Volumetric datasets were compared with D-dimer, pulmonary arterial (PA) pressure, right ventricular (RV) diameter, RV/left ventricular (RV/LV) diameter ratio, PA diameter, and PA/aorta (PA/Ao) diameter ratio. The areas under the ROC curves (AUCs) were examined for their relationship to the presence of IPCs. This study was approved by the local ethics committee. Results: PA pressure and D-dimer were significantly higher in the patients who had IPCs. In the patients with IPCs, V15, V10, V5, %V15, %V10, and %V5 were also significantly higher than in those without IPCs (P = 0.001). %V5 correlated better with D-dimer (r = 0.30, P < 0.001) and RV/LV diameter ratio (r = 0.27, P < 0.001), and showed a higher AUC (0.73) than the other CT measurements. Conclusion: The volumetric evaluation by DEpCT correlated with D-dimer and RV/LV diameter ratio, and the relative ratio of volumetric CT measurements with a lower attenuation threshold might be recommended for the analysis of acute PE.
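
    The volumetric measures above reduce to counting voxels whose attenuation falls in each threshold range and normalizing by the 1-120 HU volume. A minimal sketch on a synthetic array (hypothetical values; the real analysis runs on the scanner's iodine-distribution images):

      import numpy as np

      def perfusion_volume_ratios(hu, voxel_ml):
          """V120/V15/V10/V5 volumes and their ratios to V120."""
          ranges = {"V120": (1, 120), "V15": (1, 15), "V10": (1, 10), "V5": (1, 5)}
          vols = {k: ((hu >= lo) & (hu <= hi)).sum() * voxel_ml
                  for k, (lo, hi) in ranges.items()}
          return vols, {f"%{k}": 100 * vols[k] / vols["V120"] for k in ("V15", "V10", "V5")}

      rng = np.random.default_rng(1)
      hu = rng.integers(1, 121, size=(64, 64, 64))  # synthetic lung iodine map
      vols, ratios = perfusion_volume_ratios(hu, voxel_ml=0.001)
      print(ratios)  # for uniform noise, %V15 comes out near 12.5%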

  17. A Technique for Generating Volumetric Cine-Magnetic Resonance Imaging

    International Nuclear Information System (INIS)

    Harris, Wendy; Ren, Lei; Cai, Jing; Zhang, You; Chang, Zheng; Yin, Fang-Fang

    2016-01-01

    Purpose: The purpose of this study was to develop a technique to generate on-board volumetric cine-magnetic resonance imaging (VC-MRI) using patient prior images, motion modeling, and on-board 2-dimensional cine MRI. Methods and Materials: One phase of a 4-dimensional MRI acquired during patient simulation is used as patient prior images. Three major respiratory deformation patterns of the patient are extracted from 4-dimensional MRI based on principal-component analysis. The on-board VC-MRI at any instant is considered as a deformation of the prior MRI. The deformation field is represented as a linear combination of the 3 major deformation patterns. The coefficients of the deformation patterns are solved by the data fidelity constraint using the acquired on-board single 2-dimensional cine MRI. The method was evaluated using both digital extended-cardiac torso (XCAT) simulation of lung cancer patients and MRI data from 4 real liver cancer patients. The accuracy of the estimated VC-MRI was quantitatively evaluated using volume-percent-difference (VPD), center-of-mass-shift (COMS), and target tracking errors. Effects of acquisition orientation, region-of-interest (ROI) selection, patient breathing pattern change, and noise on the estimation accuracy were also evaluated. Results: Image subtraction of ground-truth with estimated on-board VC-MRI shows fewer differences than image subtraction of ground-truth with prior image. Agreement between normalized profiles in the estimated and ground-truth VC-MRI was achieved with less than 6% error for both XCAT and patient data. Among all XCAT scenarios, the VPD between ground-truth and estimated lesion volumes was, on average, 8.43 ± 1.52% and the COMS was, on average, 0.93 ± 0.58 mm across all time steps for estimation based on the ROI region in the sagittal cine images. Matching to ROI in the sagittal view achieved better accuracy when there was substantial breathing pattern change. The technique was robust against

  18. A Technique for Generating Volumetric Cine MRI (VC-MRI)

    Science.gov (United States)

    Harris, Wendy; Ren, Lei; Cai, Jing; Zhang, You; Chang, Zheng; Yin, Fang-Fang

    2016-01-01

    Purpose To develop a technique to generate on-board volumetric-cine MRI (VC-MRI) using patient prior images, motion modeling and on-board 2D-cine MRI. Methods One phase of a 4D-MRI acquired during patient simulation is used as patient prior images. Three major respiratory deformation patterns of the patient are extracted from 4D-MRI based on principal component analysis. The on-board VC-MRI at any instant is considered as a deformation of the prior MRI. The deformation field is represented as a linear combination of the 3 major deformation patterns. The coefficients of the deformation patterns are solved by the data fidelity constraint using the acquired on-board single 2D-cine MRI. The method was evaluated using both XCAT simulation of lung cancer patients and MRI data from four real liver cancer patients. The accuracy of the estimated VC-MRI was quantitatively evaluated using Volume-Percent-Difference (VPD), Center-of-Mass-Shift (COMS), and target tracking errors. Effects of acquisition orientation, region-of-interest (ROI) selection, patient breathing pattern change and noise on the estimation accuracy were also evaluated. Results Image subtraction of ground-truth with estimated on-board VC-MRI shows fewer differences than image subtraction of ground-truth with prior image. Agreement between profiles in the estimated and ground-truth VC-MRI was achieved with less than 6% error for both XCAT and patient data. Among all XCAT scenarios, the VPD between ground-truth and estimated lesion volumes was on average 8.43 ± 1.52% and the COMS was on average 0.93 ± 0.58 mm across all time steps for estimation based on the ROI region in the sagittal cine images. Matching to ROI in the sagittal view achieved better accuracy when there was substantial breathing pattern change. The technique was robust against noise levels up to SNR = 20. For patient data, average tracking errors were less than 2 mm in all directions for all patients. Conclusions Preliminary studies demonstrated the
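
    In both records above, the on-board volume is modeled as the prior image deformed by a weighted sum of principal deformation components, with the weights fitted so that the corresponding slice of the estimated volume matches the acquired 2D cine image. A heavily simplified 2D sketch of that data-fidelity fit, with made-up deformation components and a 1D "cine" line standing in for the 2D slice:

      import numpy as np
      from scipy.ndimage import map_coordinates
      from scipy.optimize import minimize

      yy, xx = np.mgrid[0:32, 0:32].astype(float)
      prior = np.sin(xx / 3.0) + np.cos(yy / 4.0)  # smooth stand-in for the prior MRI
      # Stand-ins for the principal deformation components (vertical shifts).
      D = [np.sin(np.pi * yy / 31), (xx / 31) * (yy / 31)]

      def warp(image, w):
          """Deform `image` by the displacement field sum_i w[i] * D[i]."""
          dy = sum(wi * Di for wi, Di in zip(w, D))
          return map_coordinates(image, [yy + dy, xx], order=1, mode="nearest")

      true_w = np.array([1.5, -0.8])
      cine = warp(prior, true_w)[16]               # acquired "cine" line

      def data_fidelity(w):
          return np.sum((warp(prior, w)[16] - cine) ** 2)

      w_hat = minimize(data_fidelity, x0=np.zeros(2), method="Nelder-Mead").x
      print(w_hat)  # approaches [1.5, -0.8] for this smooth phantom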

  19. Long-Term Volumetric Eruption Rates and Magma Budgets

    Energy Technology Data Exchange (ETDEWEB)

    White, Scott M. [Dept. Geological Sciences, University of South Carolina, Columbia, SC 29208]; Crisp, Joy A. [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109]; Spera, Frank J. [Dept. Earth Science, University of California, Santa Barbara, Santa Barbara, CA 93106]

    2005-01-01

    A global compilation of 170 time-averaged volumetric volcanic output rates (Qe) is evaluated in terms of composition and petrotectonic setting to advance the understanding of long-term rates of magma generation and eruption on Earth. Repose periods between successive eruptions at a given site and intrusive:extrusive ratios were compiled for selected volcanic centers where long-term (>10^4 years) data were available. More silicic compositions, rhyolites and andesites, have a more limited range of eruption rates than basalts. Even when high Qe values contributed by flood basalts (9 ± 2 × 10^-1 km3/yr) are removed, there is a trend of decreasing average Qe with lava composition, from basaltic eruptions (2.6 ± 1.0 × 10^-2 km3/yr) to andesites (2.3 ± 0.8 × 10^-3 km3/yr) and rhyolites (4.0 ± 1.4 × 10^-3 km3/yr). This trend is also seen in the difference between oceanic and continental settings, as eruptions on oceanic crust tend to be predominantly basaltic. All of the volcanoes occurring in oceanic settings fail to have statistically different mean Qe and have an overall average of 2.8 ± 0.4 × 10^-2 km3/yr, excluding flood basalts. Likewise, all of the volcanoes on continental crust also fail to have statistically different mean Qe and have an overall average of 4.4 ± 0.8 × 10^-3 km3/yr. Flood basalts also form a distinctive class with an average Qe nearly two orders of magnitude higher than any other class. However, we have found no systematic evidence linking increased intrusive:extrusive ratios with lower volcanic rates. A simple heat balance analysis suggests that the preponderance of volcanic systems must be open magmatic systems with respect to heat and matter transport in order to maintain eruptible magma at shallow depth throughout the observed lifetime of the volcano. The empirical upper limit of ~10^-2 km3/yr for magma eruption rate in systems with relatively high intrusive:extrusive ratios may be a consequence of the fundamental parameters
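
    The time-averaged output rates compiled here are simply erupted volume divided by the observation window, grouped by compositional class. A toy illustration with made-up entries:

      # Toy compilation: (volcano, composition, erupted volume in km3, window in yr).
      records = [("A", "basalt", 120.0, 5.0e3),
                 ("B", "basalt", 300.0, 2.0e4),
                 ("C", "rhyolite", 50.0, 1.0e4)]

      rates = {}
      for name, comp, vol_km3, years in records:
          rates.setdefault(comp, []).append(vol_km3 / years)  # Qe in km3/yr

      for comp, qe in rates.items():
          print(comp, sum(qe) / len(qe))  # mean Qe per compositional class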

  20. 3D Volumetric Analysis of Fluid Inclusions Using Confocal Microscopy

    Science.gov (United States)

    Proussevitch, A.; Mulukutla, G.; Sahagian, D.; Bodnar, B.

    2009-05-01

    Fluid inclusions preserve valuable information regarding hydrothermal, metamorphic, and magmatic processes. The molar quantities of liquid and gaseous components in the inclusions can be estimated from their volumetric measurements at room temperature combined with knowledge of the PVTX properties of the fluid and homogenization temperatures. Thus, accurate measurements of inclusion volumes and their two phase components are critical. One of the greatest advantages of Laser Scanning Confocal Microscopy (LSCM) as applied to fluid inclusion analysis is that it is affordable for large numbers of samples, given the appropriate software analysis tools and methodology. Our present work is directed toward developing those tools and methods. For the last decade LSCM has been considered as a potential method for inclusion volume measurements. Nevertheless, adequate and accurate measurements by LSCM have not yet been achieved for fluid inclusions containing non-fluorescing fluids, due to many technical challenges in image analysis, despite the fact that the cost of collecting raw LSCM imagery has dramatically decreased in recent years. These problems mostly relate to the image analysis methodology and software tools that are needed for pre-processing and image segmentation, which enable solid, liquid, and gaseous components to be delineated. Other challenges involve image quality and contrast, which is controlled by fluorescence of the material (most aqueous fluid inclusions do not fluoresce at the appropriate laser wavelengths), material optical properties, and the application of transmitted and/or reflected confocal illumination. In this work we have identified the key problems of image analysis and propose some potential solutions. For instance, we found that higher-contrast pseudo-confocal transmitted-light images could be overlaid with poor-contrast true-confocal reflected-light images within the same stack of z-ordered slices. This approach allows one to narrow
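
    Once a confocal z-stack has been segmented into host, liquid, and vapor, the phase volumes reduce to voxel counts scaled by the voxel size. A minimal sketch of that final step on a hypothetical pre-segmented stack (the segmentation itself, the hard part for non-fluorescing inclusions, is the subject of the work above):

      import numpy as np

      def phase_volumes(labels, dx_um, dy_um, dz_um):
          """Phase volumes in a labeled confocal z-stack.

          labels: 3D integers, 0 = host mineral, 1 = liquid, 2 = vapor bubble.
          dz_um is the confocal z-step and is usually the coarsest dimension.
          """
          voxel_um3 = dx_um * dy_um * dz_um
          return {p: int((labels == p).sum()) * voxel_um3 for p in (1, 2)}

      # Hypothetical segmented inclusion: liquid ellipsoid with a vapor bubble.
      zz, yy, xx = np.mgrid[0:40, 0:64, 0:64]
      labels = np.zeros((40, 64, 64), dtype=int)
      labels[((zz - 20) / 12) ** 2 + ((yy - 32) / 20) ** 2 + ((xx - 32) / 24) ** 2 <= 1] = 1
      labels[((zz - 20) / 4) ** 2 + ((yy - 32) / 6) ** 2 + ((xx - 32) / 6) ** 2 <= 1] = 2
      vols = phase_volumes(labels, dx_um=0.2, dy_um=0.2, dz_um=0.5)
      print(vols, "vapor fraction:", vols[2] / (vols[1] + vols[2]))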