WorldWideScience

Sample records for volumetric dataset full

  1. CERC Dataset (Full Hadza Data)

    DEFF Research Database (Denmark)

    2016-01-01

    The dataset includes demographic, behavioral, and religiosity data from eight different populations from around the world. The samples were drawn from: (1) Coastal and (2) Inland Tanna, Vanuatu; (3) Hadzaland, Tanzania; (4) Lovu, Fiji; (5) Pointe aux Piment, Mauritius; (6) Pesqueiro, Brazil; (7...

  2. Out-of-core clustering of volumetric datasets

    Institute of Scientific and Technical Information of China (English)

    GRANBERG Carl J.; LI Ling

    2006-01-01

    In this paper we present a novel method for dividing and clustering large out-of-core volumetric scalar datasets. This work is based on the Ordered Cluster Binary Tree (OCBT) structure, created using a top-down (divisive) clustering method. The OCBT structure allows fast and efficient sub-volume queries to be made in combination with level-of-detail (LOD) queries of the tree. The initial partitioning of the large out-of-core dataset is done using non-axis-aligned planes calculated using Principal Component Analysis (PCA). A hybrid OCBT structure is also proposed, where an in-core cluster binary tree is combined with a large out-of-core file.
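
    A minimal sketch of the PCA-based splitting step described above, assuming the volume has already been reduced to an array of voxel coordinates; the function name and interface are illustrative, not taken from the paper:

    ```python
    import numpy as np

    def pca_split(points):
        """Split a point set by the plane through its centroid whose normal
        is the first principal axis (a non-axis-aligned cut, as in the
        abstract's initial partitioning step)."""
        centroid = points.mean(axis=0)
        centered = points - centroid
        eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
        normal = eigvecs[:, np.argmax(eigvals)]   # direction of greatest spread
        side = centered @ normal > 0.0
        return points[side], points[~side], (centroid, normal)
    ```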

  3. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    Energy Technology Data Exchange (ETDEWEB)

    Hosntalab, Mohammad [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Aghaeizadeh Zoroofi, Reza [University of Tehran, Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, Tehran (Iran); Abbaspour Tehrani-Fard, Ali [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Sharif University of Technology, Department of Electrical Engineering, Tehran (Iran); Shirani, Gholamreza [Faculty of Dentistry Medical Science of Tehran University, Oral and Maxillofacial Surgery Department, Tehran (Iran)

    2008-09-15

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implant placement, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps: first, we extract a mask of the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of teeth in the jaws. Third, the arcs of the upper and lower jaws are estimated and the dataset is re-sampled panoramically. Separation of the upper and lower jaws and initial segmentation of the teeth are then performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of the teeth and apply a variational level set to refine the initial teeth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. Since this technique is based on characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)
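
    The first step above is a plain global Otsu threshold. A minimal sketch using scikit-image (the library choice is ours; the abstract only names the method):

    ```python
    from skimage.filters import threshold_otsu

    def initial_mask(ct_slice):
        """Binary mask of bright (bony/dental) pixels in one CT slice."""
        return ct_slice > threshold_otsu(ct_slice)
    ```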

  4. An automatic algorithm for detecting stent endothelialization from volumetric optical coherence tomography datasets

    Energy Technology Data Exchange (ETDEWEB)

    Bonnema, Garret T; Barton, Jennifer K [College of Optical Sciences, University of Arizona, Tucson, AZ (United States); Cardinal, Kristen O'Halloran [Biomedical and General Engineering, California Polytechnic State University (United States); Williams, Stuart K [Cardiovascular Innovation Institute, University of Louisville, Louisville, KY 40292 (United States)], E-mail: barton@u.arizona.edu

    2008-06-21

    Recent research has suggested that endothelialization of vascular stents is crucial to reducing the risk of late stent thrombosis. With a resolution of approximately 10 μm, optical coherence tomography (OCT) may be an appropriate imaging modality for visualizing the vascular response to a stent and measuring the percentage of struts covered with an anti-thrombogenic cellular lining. We developed an image analysis program to locate covered and uncovered stent struts in OCT images of tissue-engineered blood vessels. The struts were found by exploiting the highly reflective and shadowing characteristics of the metallic stent material. Coverage was evaluated by comparing the luminal surface with the depth of the strut reflection. Strut coverage calculations were compared to manual assessment of OCT images and epi-fluorescence analysis of the stented grafts. Based on the manual assessment, the strut identification algorithm operated with a sensitivity of 93% and a specificity of 99%. The strut coverage algorithm was 81% sensitive and 96% specific. The present study indicates that the program can automatically determine percent cellular coverage from volumetric OCT datasets of blood vessel mimics. The program could potentially be extended to assessments of stent endothelialization in native stented arteries.
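
    A sketch of the strut logic the abstract describes: a strut column shows a very bright reflection followed by a shadow, and a strut counts as covered when the luminal surface sits above the strut reflection. All thresholds and the 5-pixel offset are hypothetical tuning parameters, not values from the paper:

    ```python
    import numpy as np

    def find_struts(bscan, refl_thresh, shadow_thresh):
        """Flag A-scan columns whose brightest reflector casts a shadow."""
        struts = []
        for col in range(bscan.shape[1]):
            depth = int(bscan[:, col].argmax())
            shadow = bscan[depth + 5:, col]        # pixels deeper than the peak
            if (bscan[depth, col] > refl_thresh
                    and shadow.size and shadow.mean() < shadow_thresh):
                struts.append((col, depth))
        return struts

    def is_covered(bscan, col, strut_depth, surf_thresh, margin=3):
        """Covered if the first bright (luminal) pixel lies above the strut."""
        surface = int(np.argmax(bscan[:, col] > surf_thresh))
        return surface < strut_depth - margin
    ```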

  6. Inkjet printing-based volumetric display projecting multiple full-colour 2D patterns

    Science.gov (United States)

    Hirayama, Ryuji; Suzuki, Tomotaka; Shimobaba, Tomoyoshi; Shiraki, Atsushi; Naruse, Makoto; Nakayama, Hirotaka; Kakue, Takashi; Ito, Tomoyoshi

    2017-04-01

    In this study, a method to construct a full-colour volumetric display is presented using a commercially available inkjet printer. Photoreactive luminescence materials are minutely and automatically printed as the volume elements, and volumetric displays are constructed with high resolution using easy-to-fabricate means that exploit inkjet printing technologies. The results experimentally demonstrate the first prototype of an inkjet printing-based volumetric display composed of multiple layers of transparent films that yield a full-colour three-dimensional (3D) image. Moreover, we propose a design algorithm with 3D structures that provide multiple different 2D full-colour patterns when viewed from different directions and experimentally demonstrate prototypes. It is considered that these types of 3D volumetric structures and their fabrication methods based on widely deployed existing printing technologies can be utilised as novel information display devices and systems, including digital signage, media art, entertainment and security.

  7. Reconstructing flaw image using dataset of full matrix capture technique

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Tae Hun; Kim, Yong Sik; Lee, Jeong Seok [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2017-02-15

    A conventional phased-array ultrasonic system offers the ability to steer an ultrasonic beam by applying independent time delays to the individual elements in the array and so produce an ultrasonic image. In contrast, full matrix capture (FMC) is a data acquisition process that collects a complete matrix of A-scans from every possible independent transmit-receive combination in a phased-array transducer; with post-processing, it makes it possible to reconstruct images equivalent to a conventional phased-array image as well as various images that a conventional phased array cannot produce. In this paper, a basic algorithm based on the LLL-mode total focusing method (TFM) that can image crack-type flaws is described. This technique was applied to reconstruct flaw images from FMC datasets obtained from experiments and from ultrasonic simulation.
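
    The TFM step is a delay-and-sum over the full matrix. A direct-contact, single-medium sketch (straight rays in one material, so no mode conversion or interface refraction, unlike the LLL-mode imaging used in the paper):

    ```python
    import numpy as np

    def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
        """fmc[tx, rx, t]: A-scan matrix; elem_x: element x-positions (m);
        c: sound speed (m/s); fs: sampling rate (Hz)."""
        img = np.zeros((grid_z.size, grid_x.size))
        n_t = fmc.shape[2]
        for iz, z in enumerate(grid_z):
            for ix, x in enumerate(grid_x):
                d = np.hypot(elem_x - x, z)            # element-to-pixel distances
                tof = (d[:, None] + d[None, :]) / c    # transmit + receive times
                idx = np.rint(tof * fs).astype(int)
                valid = idx < n_t
                tx, rx = np.nonzero(valid)
                img[iz, ix] = abs(fmc[tx, rx, idx[valid]].sum())
        return img
    ```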

  8. Volumetric breast density estimation from full-field digital mammograms.

    NARCIS (Netherlands)

    Engeland, S. van; Snoeren, P.R.; Huisman, H.J.; Boetes, C.; Karssemeijer, N.

    2006-01-01

    A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM). The thickness of dense tissue mapping to a pixel is determined by using a physical model of image acquisition. This model is based on the assumption that the breast

  9. Development of an online radiology case review system featuring interactive navigation of volumetric image datasets using advanced visualization techniques

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Hyun Kyung; Kim, Boh Kyoung; Jung, Ju Hyun; Kang, Heung Sik; Lee, Kyoung Ho [Dept. of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of); Woo, Hyun Soo [Dept. of Radiology, SMG-SNU Boramae Medical Center, Seoul (Korea, Republic of); Jo, Jae Min [Dept. of Computer Science and Engineering, Seoul National University, Seoul (Korea, Republic of); Lee, Min Hee [Dept. of Radiology, Soonchunhyang University Bucheon Hospital, Bucheon (Korea, Republic of)

    2015-11-15

    To develop an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques. Our Institutional Review Board approved the use of the patient data and waived the need for informed consent. We determined the following system requirements: volumetric navigation, accessibility, scalability, undemanding case management, trainee encouragement, and simulation of a busy practice. The system comprised a case registry server, client case review program, and commercially available cloud-based image viewing system. In the pilot test, we used 30 cases of low-dose abdomen computed tomography for the diagnosis of acute appendicitis. In each case, a trainee was required to navigate through the images and submit answers to the case questions. The trainee was then given the correct answers and key images, as well as the image dataset with annotations on the appendix. After evaluation of all cases, the system displayed the diagnostic accuracy and average review time, and the trainee was asked to reassess the failed cases. The pilot system was deployed successfully in a hands-on workshop course. We developed an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques.

  10. Single step full volumetric reconstruction optical coherence tomography utilizing compressive sensing

    Science.gov (United States)

    Chen, Luoyang; Liu, Jiansheng; Cheng, Jiangtao; Liu, Haitao; Zhou, Hongwen

    2017-03-01

    3D optical coherence tomography (OCT) imaging combined with compressive sensing (CS) has proven to be an attractive and effective tool in a variety of fields, such as medicine and biology. The goal of this approach is to achieve high-quality imaging at as low a CS sampling rate as possible. Here we present an innovative single-step, fully 3D CS-OCT volumetric image recovery method, in which the 3D OCT volumetric image of the object is compressively sampled via our proposed CS coding strategies in all three dimensions, while its sparsity is simultaneously taken into consideration in every direction. The object can then be recovered directly as a whole-volume reconstruction via our full 3D CS reconstruction algorithm. Numerical simulations of a human retina OCT volumetric image reconstruction by our method demonstrate a PSNR as high as 38 dB at a sampling rate of less than 10%.
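
    The paper's coding strategies and solver are its own; as a generic illustration of the recovery idea, here is a minimal iterative soft-thresholding (ISTA) sketch that fills in a volume from a random subset of voxel measurements, assuming sparsity in the 3D DCT domain:

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def ista_recover(y, mask, shape, lam=0.05, n_iter=200):
        """y: measured values at the True entries of boolean array `mask`."""
        x = np.zeros(shape)
        for _ in range(n_iter):
            grad = np.zeros(shape)
            grad[mask] = x[mask] - y              # gradient of 0.5*||Ax - y||^2
            x = x - grad                          # unit step on the data term
            coef = dctn(x, norm='ortho')
            coef = np.sign(coef) * np.maximum(np.abs(coef) - lam, 0.0)  # shrink
            x = idctn(coef, norm='ortho')
        return x
    ```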

  11. Area and volumetric density estimation in processed full-field digital mammograms for risk assessment of breast cancer.

    Directory of Open Access Journals (Sweden)

    Abbas Cheddad

    Full Text Available INTRODUCTION: Mammographic density, the white, radiodense part of a mammogram, is a marker of breast cancer risk and mammographic sensitivity. There are several means of measuring mammographic density, among which are area-based and volumetric-based approaches. Current volumetric methods use only unprocessed, raw mammograms, which is a problematic restriction since such raw mammograms are normally not stored. We describe fully automated methods for measuring both area and volumetric mammographic density from processed images. METHODS: The data set used in this study comprises raw and processed images of the same view from 1462 women. We developed two algorithms for processed images, an automated area-based approach (CASAM-Area) and a volumetric-based approach (CASAM-Vol). The latter method was based on training a random forest prediction model with image statistical features as predictors, against a volumetric measure, Volpara, for corresponding raw images. We contrast the three methods, CASAM-Area, CASAM-Vol and Volpara, directly and in terms of association with breast cancer risk and a known genetic variant for mammographic density and breast cancer, rs10995190 in the gene ZNF365. Associations with breast cancer risk were evaluated using images from 47 breast cancer cases and 1011 control subjects. The genetic association analysis was based on 1011 control subjects. RESULTS: All three measures of mammographic density were associated with breast cancer risk and rs10995190, and the measures did not differ significantly from one another in these associations (p > 0.10 for risk, p > 0.03 for rs10995190). CONCLUSIONS: Our results show that it is possible to obtain reliable automated measures of volumetric and area mammographic density from processed digital images. Area and volumetric measures of density on processed digital images performed similarly in terms of risk and genetic association.
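
    The CASAM-Vol idea (a regressor mapping statistics of the processed image to a Volpara-style volumetric density) can be sketched with scikit-learn; the feature matrix here is a random placeholder, since the paper's exact image statistics are not listed in the abstract:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X = rng.random((1462, 20))        # placeholder per-image statistical features
    y_volpara = rng.random(1462)      # Volpara density from the matching raw image

    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X, y_volpara)                      # train against raw-image Volpara
    density_estimates = model.predict(X[:5])     # density from processed images
    ```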

  12. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai

    2014-01-01

    Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity of model fitting and prediction grows cubically with the size of the dataset, so applying such models to large datasets is not feasible. This article extends the full-scale approximation (FSA) approach of Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points. Our approach is applicable to nonseparable and nonstationary spatio-temporal covariance models. We illustrate the effectiveness of our method through simulation experiments and an application to an ozone measurement dataset.
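
    Schematically, the FSA splits the covariance into a smooth reduced-rank term plus a tapered short-range remainder. In the notation below (ours, adapted from Sang and Huang's spatial version; the paper's exact construction may differ), u = (s, t) is a space-time coordinate, S* the knot set selected by RJMCMC, and T a compactly supported taper:

    ```latex
    \[
    C_{\mathrm{FSA}}(u,u') =
      \underbrace{c(u,S^*)\,C(S^*,S^*)^{-1}\,c(S^*,u')}_{\text{reduced-rank, long-range part}}
      + \underbrace{\bigl[C(u,u') - c(u,S^*)\,C(S^*,S^*)^{-1}\,c(S^*,u')\bigr]\,
        T_\gamma(u,u')}_{\text{tapered, short-range residual}}
    \]
    ```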

  13. Full waveform inversion based on scattering angle enrichment with application to real dataset

    KAUST Repository

    Wu, Zedong

    2015-08-19

    Reflected waveform inversion (RWI) provides a way to reduce the nonlinearity of standard full waveform inversion (FWI). However, the drawback of existing RWI methods is their inability to utilize diving waves and their extra sensitivity to the migrated image. We propose a combined FWI and RWI optimization problem by dividing the velocity into background and perturbed components, and we optimize both components as independent parameters. The new objective function is quadratic with respect to the perturbed component, which reduces the nonlinearity of the optimization problem. Solving this optimization provides a true-amplitude image and utilizes the diving waves to update the velocity of the shallow parts. To ensure proper wavenumber continuation, we use an efficient scattering-angle filter that, at the early stages of the inversion, directs energy corresponding to large scattering angles (smooth velocity) to the background velocity update and small scattering angles (high wavenumber) to the perturbed velocity update. This implementation of the filter is fast and requires less memory than the conventional approach based on extended images. Thus, the new FWI procedure updates the background velocity mainly along the wavepath for both diving and reflected waves in the initial stages, while updating the perturbation mainly with reflections (filtering out the diving waves). To demonstrate the capability of this method, we apply it to a real 2D marine dataset.
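
    A schematic form of the combined objective (notation ours, not the paper's): with background velocity v0 and perturbation dv treated as independent unknowns,

    ```latex
    \[
    \min_{v_0,\,\delta v}\;
      \tfrac{1}{2}\bigl\lVert F(v_0) + J(v_0)\,\delta v - d_{\mathrm{obs}} \bigr\rVert_2^2 ,
    \]
    % quadratic in \delta v for fixed v_0; the scattering-angle filter sends
    % large-angle (smooth) energy to the v_0 update and small-angle
    % (high-wavenumber) energy to the \delta v update.
    ```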

  14. Public participation in Full dome digital visualisations of large datasets in a planetarium sky theater : An experiment in progress

    Science.gov (United States)

    Rathnasree, Nandivada

    2015-08-01

    A full-dome digital planetarium system with a user-friendly content creation facility can be used very effectively for communicating points of interest in large astronomical datasets to public and student visitors to a planetarium. Periodic public lectures by astronomers, "Under the Stars", which use full-dome visualisations of datasets, foster a regular interest group associated with the planetarium, ensuring a regular inflow of students (and a smaller number of non-student visitors) willing to contribute to the entries in the full-dome datasets. Regardless of whether or not completion is achieved for any of the datasets, the very process of this project is extremely rewarding in terms of generating a quickening of interest, for the casual visitor to a planetarium, in aspects related to the intricacies of datasets. The casual visitor who gets interested may just make one entry in the dataset, following instructions provided in the planetarium public interaction. For students who show sustained interest in this data entry project, it becomes a really fruitful learning process. Combining this purely data-entry process with interactions and discussions with astronomers on the excitement in areas related to specific datasets allows a more organised enrichment possibility for student participants, nudging them towards exploring related possibilities of some "hands-on astronomy" analysis-oriented projects. Datasets like gamma-ray bursts, variable stars, TGSS, and so on, are being entered into the planetarium production software at the New Delhi planetarium by public and student visitors as weekend activities. The Digital Universe datasets pre-existing in the planetarium system allow preliminary discussions for weekend crowds related to astronomical datasets, introduction of ever-increasing multiwavelength datasets, and onwards to facilitating public participation in data entry within the planetarium software, for some

  15. Full-field mapping of internal strain distribution in red sandstone specimen under compression using digital volumetric speckle photography and X-ray computed tomography

    Directory of Open Access Journals (Sweden)

    Lingtao Mao

    2015-04-01

    Full Text Available It is always desirable to know the interior deformation pattern when a rock is subjected to mechanical load. Few experimental techniques exist that can represent full-field three-dimensional (3D) strain distribution inside a rock specimen, and yet this information is crucial for fully understanding the failure mechanism of rocks or other geomaterials. In this study, by using the newly developed digital volumetric speckle photography (DVSP) technique in conjunction with X-ray computed tomography (CT) and taking advantage of natural 3D speckles formed inside the rock due to material impurities and voids, we can probe the interior of a rock to map its deformation pattern under load and shed light on its failure mechanism. We apply this technique to the analysis of a red sandstone specimen under increasing uniaxial compressive load applied incrementally. The full-field 3D displacement fields are obtained in the specimen as a function of the load, from which both the volumetric and the deviatoric strain fields are calculated. Strain localization zones which lead to the eventual failure of the rock are identified. The results indicate that both shear and tension are contributing factors to the failure mechanism.
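
    Once the 3D displacement field is known, the volumetric and deviatoric strains follow from the symmetric displacement gradient. A small-strain numpy sketch (the DVSP correlation step itself is not reproduced here):

    ```python
    import numpy as np

    def strain_fields(ux, uy, uz, spacing=1.0):
        """eps = 0.5*(grad u + grad u^T); returns the trace (volumetric strain)
        and the deviatoric tensor for a displacement field on a regular grid."""
        grads = [np.gradient(u, spacing) for u in (ux, uy, uz)]  # grads[i][j] = du_i/dx_j
        eps = np.empty((3, 3) + ux.shape)
        for i in range(3):
            for j in range(3):
                eps[i, j] = 0.5 * (grads[i][j] + grads[j][i])
        vol = eps[0, 0] + eps[1, 1] + eps[2, 2]
        dev = eps.copy()
        for k in range(3):
            dev[k, k] -= vol / 3.0
        return vol, dev
    ```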

  16. Three-dimensional full-range complex Fourier domain optical coherence tomography for in-vivo volumetric imaging of human skin

    Science.gov (United States)

    Nan, Nan; Bu, Peng; Guo, Xin; Wang, Xiangzhao

    2012-03-01

    A three-dimensional full-range complex Fourier-domain optical coherence tomography (complex FDOCT) system based on a sinusoidal phase-modulating method is proposed. With this system, the range of imaging depth is doubled and the sensitivity degradation with lateral scan distance is avoided. Fourier analysis of the B-scan data along the lateral scan direction is used to reconstruct the complex spectral interferograms. The B-scan-based Fourier method improves the system's tolerance of sample movement and makes data processing less time-consuming. In-vivo volumetric imaging of human skin with the proposed full-range FDOCT system is demonstrated. The mirror-image rejection ratio is about 30 dB. The stratum corneum, the epidermis and the upper dermis of the skin can be clearly identified in the reconstructed three-dimensional FDOCT images.
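
    A closely related (though not necessarily identical) reconstruction can be sketched in a few lines: the phase modulation shifts the signal off zero frequency along the lateral axis, so an analytic-signal filter along x recovers complex fringes, whose inverse FFT along wavenumber gives a mirror-free depth profile:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def full_range_bscan(fringes):
        """fringes[k, x]: real spectral interferogram (wavenumber x lateral)."""
        complex_fringes = hilbert(fringes, axis=1)   # analytic signal along x
        return np.fft.ifft(complex_fringes, axis=0)  # conjugate-mirror-free depths
    ```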

  17. TU-AB-BRA-11: Evaluation of Fully Automatic Volumetric GBM Segmentation in the TCGA-GBM Dataset: Prognosis and Correlation with VASARI Features

    Energy Technology Data Exchange (ETDEWEB)

    Rios Velazquez, E [Dana-Farber Cancer Institute / Harvard Medical School, Boston, MA (United States); Meier, R [Institute for Surgical Technology and Biomechanics, Bern, NA (Switzerland); Dunn, W; Gutman, D [Emory University School of Medicine, Atlanta, GA (United States); Alexander, B [Dana-Farber Cancer Institute, Brigham and Women's Hospital, Harvard Medic, Boston, MA (United States); Wiest, R; Reyes, M [Institute for Surgical Technology and Biomechanics, University of Bern, Bern, NA (Switzerland); Bauer, S [Institute for Surgical Technology and Biomechanics, Support Center for Adva, Bern, NA (Switzerland); Aerts, H [Dana-Farber/Brigham and Women's Cancer Center, Boston, MA (United States)

    2015-06-15

    Purpose: Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to sub-volumes defined manually by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. Methods: MRI sets of 67 GBM patients were downloaded from The Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software, including necrosis, edema, contrast-enhancing and non-enhancing tumor. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Results: Auto-segmented sub-volumes showed high agreement with manually delineated volumes (r: 0.65 - 0.91). They also showed higher correlation with VASARI features (auto r = 0.35, 0.60 and 0.59; manual r = 0.29, 0.50 and 0.43, for contrast-enhancing, necrosis and edema, respectively). The contrast-enhancing volume and post-contrast abnormal volume showed the highest C-index (0.73 and 0.72), comparable to manually defined volumes (p = 0.22 and p = 0.07, respectively). The non-enhancing region defined by BraTumIA showed a significantly higher prognostic value (C-index = 0.71) than the edema (C-index = 0.60); the two could not be distinguished by manual delineation. Conclusion: BraTumIA tumor sub-compartments showed higher correlation with VASARI data, and equivalent performance in terms of prognosis, compared to manual sub-volumes. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has large potential in high-throughput medical imaging research.

  18. Deep learning in breast cancer risk assessment: evaluation of convolutional neural networks on a clinical dataset of full-field digital mammograms.

    Science.gov (United States)

    Li, Hui; Giger, Maryellen L; Huynh, Benjamin Q; Antropova, Natalia O

    2017-10-01

    To evaluate deep learning in the assessment of breast cancer risk, convolutional neural networks (CNNs) with transfer learning were used to extract parenchymal characteristics directly from full-field digital mammographic (FFDM) images instead of using computerized radiographic texture analysis (RTA). 456 clinical FFDM cases were included: a "high-risk" BRCA1/2 gene-mutation carriers dataset (53 cases), a "high-risk" unilateral cancer patients dataset (75 cases), and a "low-risk" dataset (328 cases). Deep learning was compared to the use of features from RTA, as well as to a combination of both, in the task of distinguishing between high- and low-risk subjects. Similar classification performances were obtained using CNN features (area under the curve [Formula: see text]; standard error [Formula: see text]).

  19. Volumetric breast density from full-field digital mammograms and its association with breast cancer risk factors: a comparison with a threshold method.

    NARCIS (Netherlands)

    Lokate, M.; Kallenberg, M.G.J.; Karssemeijer, N.; Bosch, M.H.J. van den; Peeters, P.H.M.; Gils, C.H. van

    2010-01-01

    INTRODUCTION: Breast density, a strong breast cancer risk factor, is usually measured on the projected breast area from film screen mammograms. This is far from ideal, as breast thickness and technical characteristics are not taken into account. We investigated whether volumetric density measurement

  20. 360° Viewable Volumetric 3D Display System with Full Screen

    Institute of Scientific and Technical Information of China (English)

    刘影; 潘文平; 张赵行; 刘永; 陈兴华

    2011-01-01

    A volumetric display system with a full helical screen is presented. A series of helical slices of a 3D model is projected onto a rapidly rotating helical screen through a digital micro-mirror device (DMD) acting as a fast spatial light modulator (SLM); owing to the persistence of vision, human observers fuse the successive slices into a single 360°-viewable 3D image. We analyse the performance of the imaging space created by the full helical screen and study a voxelization strategy based on the enhanced body-centred cubic (EBCC) sampling algorithm. Experimental results demonstrate that, compared with sampling on a Cartesian lattice, the EBCC algorithm reduces the voxel count by more than 30% while avoiding holes in the voxel model; the full-screen volumetric imaging space is more than four times that of our former half-screen display system, and the generated images have uniform brightness. In the cuboid imaging space of 400 mm × 300 mm × 250 mm, the displayed 3D images can be viewed with the naked eye from any viewpoint over the full 360° range.

  1. Search for the lepton flavour violating decay μ⁺ → e⁺γ with the full dataset of the MEG experiment

    Energy Technology Data Exchange (ETDEWEB)

    Baldini, A.M.; Cerri, C.; Dussoni, S.; Galli, L.; Grassi, M.; Morsani, F.; Pazzi, R.; Raffaelli, F.; Sergiampietri, F.; Signorelli, G. [Pisa Univ. (Italy); INFN Sezione di Pisa, Pisa (Italy); Bao, Y.; Egger, J.; Hildebrandt, M.; Kettle, P.R.; Mtchedilishvili, A.; Papa, A.; Ritt, S. [Paul Scherrer Institut PSI, Villigen (Switzerland); Baracchini, E. [ICEPP, The University of Tokyo, Tokyo (Japan); Bemporad, C.; Cei, F.; D'Onofrio, A.; Nicolo, D.; Tenchini, F. [Pisa Univ. (Italy). Dipt. di Fisica; INFN Sezione di Pisa, Pisa (Italy); Berg, F.; Hodge, Z.; Rutar, G. [Paul Scherrer Institut PSI, Villigen (Switzerland); Swiss Federal Institute of Technology ETH, Zurich (Switzerland); Biasotti, M.; Gatti, F.; Pizzigoni, G. [INFN Sezione di Genova, Genoa (Italy); Genoa Univ., Dipartimento di Fisica (Italy); Boca, G.; De Bari, A.; Nardo, R.; Simonetta, M. [INFN Sezione di Pavia, Pavia (Italy); Pavia Univ., Dipartimento di Fisica (Italy); Cascella, M. [INFN Sezione di Lecce, Lecce (Italy); Universita del Salento, Dipartimento di Matematica e Fisica, Lecce (Italy); University College London, Department of Physics and Astronomy, London (United Kingdom); Cattaneo, P.W.; Rossella, M. [Pavia Univ. (Italy); INFN Sezione di Pavia, Pavia (Italy); Cavoto, G.; Piredda, G.; Voena, C. [Rome Univ. "Sapienza" (Italy); INFN Sezione di Roma, Rome (Italy); Chiarello, G.; Chiri, C.; Corvaglia, A.; Panareo, M.; Pepino, A. [INFN Sezione di Lecce, Lecce (Italy); Universita del Salento, Dipartimento di Matematica e Fisica, Lecce (Italy); De Gerone, M. [Genoa Univ. (Italy); INFN Sezione di Genova, Genoa (Italy); Doke, T. [Waseda University, Research Institute for Science and Engineering, Tokyo (Japan); Fujii, Y.; Ieki, K.; Iwamoto, T.; Kaneko, D.; Mori, Toshinori; Nakaura, S.; Nishimura, M.; Ogawa, S.; Ootani, W.; Orito, S.; Sawada, R.; Uchiyama, Y.; Yoshida, K. [ICEPP, The University of Tokyo, Tokyo (Japan); Grancagnolo, F.; Tassielli, G.F. [Universita del Salento (Italy); INFN Sezione di Lecce, Lecce (Italy); Graziosi, A.; Ripiccini, E. [INFN Sezione di Roma, Rome (Italy); Rome Univ. "Sapienza", Dipartimento di Fisica (Italy); Grigoriev, D.N. [Budker Institute of Nuclear Physics, Russian Academy of Sciences, Novosibirsk (Russian Federation); Novosibirsk State Technical University, Novosibirsk (Russian Federation); Novosibirsk State University, Novosibirsk (Russian Federation); Haruyama, T.; Maki, A.; Mihara, S.; Nishiguchi, H.; Yamamoto, A. [KEK, High Energy Accelerator Research Organization, Tsukuba, Ibaraki (Japan); Ignatov, F.; Khazin, B.I.; Popov, A.; Yudin, Yu.V. [Budker Institute of Nuclear Physics, Russian Academy of Sciences, Novosibirsk (Russian Federation); Novosibirsk State University, Novosibirsk (Russian Federation); Kang, T.I.; Lim, G.M.A.; Molzon, W.; You, Z.; Zanello, D. [University of California, Irvine, CA (United States); Khomutov, N.; Korenchenko, A.; Kravchuk, N.; Mzavia, D. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Renga, F. [Paul Scherrer Institut PSI, Villigen (Switzerland); INFN Sezione di Roma, Rome (Italy); Rome Univ. "Sapienza", Dipartimento di Fisica, Rome (Italy); Venturini, M. [INFN Sezione di Pisa, Pisa (Italy); Pisa Univ., Scuola Normale Superiore (Italy); Collaboration: MEG Collaboration

    2016-08-15

    The final results of the search for the lepton flavour violating decay μ⁺ → e⁺γ, based on the full dataset collected by the MEG experiment at the Paul Scherrer Institut in the period 2009-2013 and totalling 7.5 × 10¹⁴ stopped muons on target, are presented. No significant excess of events is observed in the dataset with respect to the expected background, and a new upper limit on the branching ratio of this decay of B(μ⁺ → e⁺γ) < 4.2 × 10⁻¹³ (90% confidence level) is established, which represents the most stringent limit on the existence of this decay to date. (orig.)

  2. Validation of the DIFFAL, HPAC and HotSpot Dispersion Models Using the Full-Scale Radiological Dispersal Device (FSRDD) Field Trials Witness Plate Deposition Dataset.

    Science.gov (United States)

    Purves, Murray; Parkes, David

    2016-05-01

    Three atmospheric dispersion models of differing complexity (DIFFAL, HPAC, and HotSpot) have been validated against the witness-plate deposition dataset taken during the Full-Scale Radiological Dispersal Device (FSRDD) Field Trials. The small-scale nature of these trials in comparison to many other historical radiological dispersion trials provides a unique opportunity to evaluate the near-field performance of the models considered. This paper validates these models using two graphical methods of comparison: deposition contour plots and hotline profile graphs. All of the models tested are assessed to perform well, especially considering that previous model development and validation focused on larger-scale scenarios. Of the models, HPAC generally produced the most accurate results, especially at locations within ∼100 m of ground zero (GZ). Features present within the observed data, such as hot spots, were not well modeled by any of the codes considered. Additionally, it was found that an increase in the complexity of the meteorological data input to the models did not necessarily lead to an improvement in model accuracy; this is potentially due to the small-scale nature of the trials.

  3. Dataset of Atmospheric Environment Publication in 2016, Source emission and model evaluation of formaldehyde from composite and solid wood furniture in a full-scale chamber

    Data.gov (United States)

    U.S. Environmental Protection Agency — The data presented in this data file is a product of a journal publication. The dataset contains formaldehyde air concentrations in the emission test chamber and...

  4. Volumetric Virtual Environments

    Institute of Scientific and Technical Information of China (English)

    HE Taosong

    2000-01-01

    Driven by the fast development of both virtual reality and volume visualization, we discuss some critical techniques towards building a volumetric VR system, specifically the modeling, rendering, and manipulation of a volumetric scene. Techniques such as voxel-based object simplification, accelerated volume rendering, fast stereo volume rendering, and volumetric "collision detection" are introduced and improved, with the idea of demonstrating the possibilities and potential benefits of incorporating volumetric models into VR systems.

  5. Identification of Habitat-Specific Biomes of Aquatic Fungal Communities Using a Comprehensive Nearly Full-Length 18S rRNA Dataset Enriched with Contextual Data.

    Science.gov (United States)

    Panzer, Katrin; Yilmaz, Pelin; Weiß, Michael; Reich, Lothar; Richter, Michael; Wiese, Jutta; Schmaljohann, Rolf; Labes, Antje; Imhoff, Johannes F; Glöckner, Frank Oliver; Reich, Marlis

    2015-01-01

    Molecular diversity surveys have demonstrated that aquatic fungi are highly diverse, and that they play fundamental ecological roles in aquatic systems. Unfortunately, comparative studies of aquatic fungal communities are few and far between, due to the scarcity of adequate datasets. We combined all publicly available fungal 18S ribosomal RNA (rRNA) gene sequences with new sequence data from a marine fungi culture collection. We further enriched this dataset by adding validated contextual data. Specifically, we included data on the habitat type of the samples, assigning fungal taxa to ten different habitat categories. This dataset has been created with the intention to serve as a valuable reference dataset for aquatic fungi, including a phylogenetic reference tree. The combined data enabled us to infer fungal community patterns in aquatic systems. Pairwise habitat comparisons showed significant phylogenetic differences, indicating that habitat strongly affects fungal community structure. Fungal taxonomic composition differed considerably even at phylum and class level. The freshwater fungal assemblage was most different from all other habitat types and was dominated by basal fungal lineages. For most communities, phylogenetic signals indicated clustering of sequences, suggesting that environmental factors, rather than species competition, were the main drivers of fungal community structure. Thus, the diversification process of aquatic fungi must be highly clade-specific in some cases.

  6. The Occurrence of Potentially Habitable Planets Orbiting M Dwarfs Estimated from the Full Kepler Dataset and an Empirical Measurement of the Detection Sensitivity

    CERN Document Server

    Dressing, Courtney D

    2015-01-01

    We present an improved estimate of the occurrence rate of small planets around small stars by searching the full four-year Kepler data set for transiting planets using our own planet detection pipeline and conducting transit injection and recovery simulations to empirically measure the search completeness of our pipeline. We identified 157 planet candidates, including 2 objects that were not previously identified as Kepler Objects of Interest (KOIs). We inspected all publicly available follow-up images, observing notes, and centroid analyses, and corrected for the likelihood of false positives. We evaluate the sensitivity of our detection pipeline on a star-by-star basis by injecting 2000 transit signals in the light curve of each target star. For periods shorter than 50 days, we found an occurrence rate of 0.57 (+0.06/-0.05) Earth-size planets (1-1.5 Earth radii) and 0.51 (+0.07/-0.06) super-Earths (1.5-2 Earth radii) per M dwarf. Within a conservatively defined habitable zone based on the moist greenhouse i...

  7. Volumetric composition of nanocomposites

    DEFF Research Database (Denmark)

    Madsen, Bo; Lilholt, Hans; Mannila, Juha

    2015-01-01

    Detailed characterisation of the properties of composite materials with nanoscale fibres is central for the further progress in optimization of their manufacturing and properties. In the present study, a methodology for the determination and analysis of the volumetric composition of nanocomposites is presented, using cellulose/epoxy and aluminosilicate/polylactate nanocomposites as case materials. The buoyancy method is used for the accurate measurement of materials density. The accuracy of the method is determined to be high, allowing the measured nanocomposite densities to be reported with 5 significant figures. The plotting of the measured nanocomposite density as a function of the nanofibre weight content is shown to be a good first approach for assessing the porosity content of the materials. The known gravimetric composition of the nanocomposites is converted into a volumetric composition...

  8. Flexible Volumetric Structure

    Science.gov (United States)

    Cagle, Christopher M. (Inventor); Schlecht, Robin W. (Inventor)

    2014-01-01

    A flexible volumetric structure has a first spring that defines a three-dimensional volume and includes a serpentine structure elongatable and compressible along a length thereof. A second spring is coupled to at least one outboard edge region of the first spring. The second spring is a sheet-like structure capable of elongation along an in-plane dimension thereof. The second spring is oriented such that its in-plane dimension is aligned with the length of the first spring's serpentine structure.

  9. Characterizing volumetric deformation behavior of naturally occuring bituminous sand materials

    CSIR Research Space (South Africa)

    Anochie-Boateng, Joseph

    2009-05-01

    Full Text Available newly proposed hydrostatic compression test procedure. The test procedure applies field loading conditions of off-road construction and mining equipment to closely simulate the volumetric deformation and stiffness behaviour of oil sand materials. Based...

  10. Marginal Space Deep Learning: Efficient Architecture for Volumetric Image Parsing.

    Science.gov (United States)

    Ghesu, Florin C; Krubasik, Edward; Georgescu, Bogdan; Singh, Vivek; Yefeng Zheng; Hornegger, Joachim; Comaniciu, Dorin

    2016-05-01

    Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow from diagnosis, patient stratification, therapy planning, and intervention to follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed: efficiency in scanning high-dimensional parametric spaces, and the need for representative image features, which currently require significant manual engineering. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of deep learning (DL) network architectures. In the 3D context, the application of deep learning systems is limited by the very high complexity of the parametrization. More specifically, nine parameters are necessary to describe a restricted affine transformation in 3D, resulting in a prohibitive amount of billions of scanning hypotheses. The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. To further increase computational efficiency and robustness, our system learns sparse adaptive data sampling patterns that automatically capture the structure of the input. Given the object localization, we propose a DL-based active shape model to estimate the non-rigid object boundary. Experimental results are presented on the aortic valve in ultrasound, using an extensive dataset of 2891 volumes from 869 patients, showing significant improvements of up to 45

  11. Soil volumetric water content measurements using TDR technique

    Directory of Open Access Journals (Sweden)

    S. Vincenzi

    1996-06-01

    Full Text Available A physical model to measure some hydrological and thermal parameters in soils will be set up. The vertical profiles of volumetric water content, matric potential and temperature will be monitored in different soils. The volumetric soil water content is measured by means of the Time Domain Reflectometry (TDR) technique. The result of a test to determine experimentally the reproducibility of the volumetric water content measurements is reported, together with the methodology and the results of the analysis of the TDR wave forms. The analysis is based on the calculation of the travel time of the TDR signal in the wave guide embedded in the soil.
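
    The final step the abstract describes, travel time to water content, is commonly done in two stages: apparent permittivity from the two-way travel time along the probe, then an empirical calibration such as Topp et al. (1980). A sketch (the authors' own calibration may differ):

    ```python
    def volumetric_water_content(travel_time_s, probe_length_m):
        """Apparent permittivity Ka from the TDR travel time, then the
        Topp et al. (1980) polynomial for volumetric water content."""
        c = 2.998e8                                        # speed of light, m/s
        ka = (c * travel_time_s / (2.0 * probe_length_m)) ** 2
        return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka**2 + 4.3e-6 * ka**3
    ```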

  12. Photographic dataset: random peppercorns

    CERN Document Server

    Helenius, Teemu

    2016-01-01

    This is a photographic dataset collected for testing image processing algorithms. The idea is to have sets of different but statistically similar images. In this work the images show randomly distributed peppercorns. The dataset is made available at www.fips.fi/photographic_dataset.php.

  13. Quantitative Techniques in Volumetric Analysis

    Science.gov (United States)

    Zimmerman, John; Jacobsen, Jerrold J.

    1996-12-01

    Quantitative Techniques in Volumetric Analysis is a visual library of techniques used in making volumetric measurements. This 40-minute VHS videotape is designed as a resource for introducing students to proper volumetric methods and procedures. The entire tape, or relevant segments of the tape, can also be used to review procedures used in subsequent experiments that rely on the traditional art of quantitative analysis laboratory practice. The techniques included are: quantitative transfer of a solid with a weighing spoon; quantitative transfer of a solid with a finger-held weighing bottle; quantitative transfer of a solid with a paper-strap-held bottle; quantitative transfer of a solid with a spatula; examples of common quantitative weighing errors; quantitative transfer of a solid from dish to beaker to volumetric flask; quantitative transfer of a solid from dish to volumetric flask; volumetric transfer pipet; a complete acid-base titration; and hand technique variations. The conventional view of contemporary quantitative chemical measurement tends to focus on instrumental systems, computers, and robotics. In this view, the analyst is relegated to placing standards and samples on a tray. A robotic arm delivers a sample to the analysis center, while a computer controls the analysis conditions and records the results. In spite of this, it is rare to find an analysis process that does not rely on some aspect of more traditional quantitative analysis techniques, such as careful dilution to the mark of a volumetric flask. Clearly, errors in a classical step will affect the quality of the final analysis. Because of this, it is still important for students to master the key elements of the traditional art of quantitative chemical analysis laboratory practice. Some aspects of chemical analysis, like careful rinsing to ensure quantitative transfer, are often an automated part of an instrumental process that must be understood by the

  14. Dataset Lifecycle Policy

    Science.gov (United States)

    Armstrong, Edward; Tauer, Eric

    2013-01-01

    The presentation focused on describing a new dataset lifecycle policy that the NASA Physical Oceanography DAAC (PO.DAAC) has implemented for its new and current datasets to foster improved stewardship and consistency across its archive. The overarching goal is to implement this dataset lifecycle policy for all new GHRSST GDS2 datasets and bridge the mission statements from the GHRSST Project Office and PO.DAAC to provide the best quality SST data in a cost-effective, efficient manner, preserving its integrity so that it will be available and usable to a wide audience.

  16. Test Facility for Volumetric Absorber

    Energy Technology Data Exchange (ETDEWEB)

    Ebert, M.; Dibowski, G.; Pfander, M.; Sack, J. P.; Schwarzbozl, P.; Ulmer, S.

    2006-07-01

    Long-term testing of volumetric absorber modules is an essential step towards gaining the experience and reliability required for the commercialization of the open volumetric receiver technology. While solar tower test facilities are necessary for performance measurements of complete volumetric receivers, the long-term stability of individual components can be tested in less expensive test setups. To qualify the aging effects of operating cycles on single elements of new absorber materials and designs, a test facility was developed and constructed in the framework of the KOSMOSOL project. In order to provide the required concentrated solar radiation level, the absorber test facility is integrated into a parabolic dish system at the Plataforma Solar de Almeria (PSA) in Spain. Several new designs of ceramic absorbers were developed and tested during the last months. (Author)

  17. Fixing Dataset Search

    Science.gov (United States)

    Lynnes, Chris

    2014-01-01

    Three current search engines are queried for ozone data at the GES DISC. The results range from sub-optimal to counter-intuitive. We propose a method to fix dataset search by implementing a robust relevancy ranking scheme. The relevancy ranking scheme is based on several heuristics culled from more than 20 years of helping users select datasets.

  18. A Combined Random Forests and Active Contour Model Approach for Fully Automatic Segmentation of the Left Atrium in Volumetric MRI

    Directory of Open Access Journals (Sweden)

    Chao Ma

    2017-01-01

    Full Text Available Segmentation of the left atrium (LA) from cardiac magnetic resonance imaging (MRI) datasets is of great importance for image-guided atrial fibrillation ablation, LA fibrosis quantification, and cardiac biophysical modelling. However, automated LA segmentation from cardiac MRI is challenging due to limited image resolution, considerable variability in anatomical structures across subjects, and dynamic motion of the heart. In this work, we propose a combined random forests (RFs) and active contour model (ACM) approach for fully automatic segmentation of the LA from cardiac volumetric MRI. Specifically, we employ the RFs within an autocontext scheme to effectively integrate contextual and appearance information from multisource images together for LA shape inferring. The inferred shape is then incorporated into a volume-scalable ACM for further improving the segmentation accuracy. We validated the proposed method on the cardiac volumetric MRI datasets from the STACOM 2013 and HVSMR 2016 databases and showed that it outperforms other latest automated LA segmentation methods. Validation metrics, average Dice coefficient (DC) and average surface-to-surface distance (S2S), were computed as 0.9227±0.0598 and 1.14±1.205 mm, versus those of 0.6222-0.878 and 1.34-8.72 mm obtained by other methods, respectively.
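
    The Dice coefficient reported above has a simple exact definition; a minimal sketch for boolean masks (the surface-to-surface metric needs a mesh distance computation and is omitted):

    ```python
    import numpy as np

    def dice(a, b):
        """2|A ∩ B| / (|A| + |B|) for two boolean segmentation masks."""
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
    ```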

  19. COMPARISON OF VOLUMETRIC REGISTRATION ALGORITHMS FOR TENSOR-BASED MORPHOMETRY

    Science.gov (United States)

    Villalon, Julio; Joshi, Anand A.; Toga, Arthur W.; Thompson, Paul M.

    2015-01-01

    Nonlinear registration of brain MRI scans is often used to quantify morphological differences associated with disease or genetic factors. Recently, surface-guided, fully 3D volumetric registrations have been developed that combine intensity-guided volume registrations with cortical surface constraints. In this paper, we compare one such algorithm to two popular high-dimensional volumetric registration methods: large-deformation viscous fluid registration, formulated in a Riemannian framework, and the diffeomorphic "Demons" algorithm. We performed an objective morphometric comparison, using a large MRI dataset from 340 young adult twin subjects to examine 3D patterns of correlations in anatomical volumes. Surface-constrained volume registration gave greater effect sizes for detecting morphometric associations near the cortex, while the other two approaches gave greater effect sizes subcortically. These findings suggest novel ways to combine the advantages of multiple methods in the future. PMID:26925198

  20. Market Squid Ecology Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains ecological information collected on the major adult spawning and juvenile habitats of market squid off California and the US Pacific Northwest....

  1. Tables and figure datasets

    Data.gov (United States)

    U.S. Environmental Protection Agency — Soil and air concentrations of asbestos in Sumas study. This dataset is associated with the following publication: Wroble, J., T. Frederick, A. Frame, and D....

  2. 2016 TRI Preliminary Dataset

    Science.gov (United States)

    The TRI preliminary dataset includes the most current TRI data available and reflects toxic chemical releases and pollution prevention activities that occurred at TRI facilities during the 2016 calendar year.

  3. National Hydrography Dataset (NHD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The National Hydrography Dataset (NHD) is a feature-based database that interconnects and uniquely identifies the stream segments or reaches that comprise the...

  4. USEWOD 2016 Research Dataset

    OpenAIRE

    Luczak-Roesch, Markus; Aljaloud, Saud; Berendt, Bettina; Hollink, Laura

    2016-01-01

    The USEWOD 2016 research dataset is a collection of usage data from Web of Data sources, collected in 2015. It covers sources such as DBpedia, the Linked Data Fragments interface to DBpedia, as well as Wikidata page views. This dataset can be requested via http://library.soton.ac.uk/datarequest - please also email a scanned copy of the signed Usage Agreement (to ).

  5. Volumetric polymerization shrinkage of contemporary composite resins

    Directory of Open Access Journals (Sweden)

    Halim Nagem Filho

    2007-10-01

    Full Text Available The polymerization shrinkage of composite resins may negatively affect the clinical outcome of the restoration. Extensive research has been carried out to develop new formulations of composite resins that provide good handling characteristics and some dimensional stability during polymerization. The purpose of this study was to analyze, in vitro, the magnitude of the volumetric polymerization shrinkage of 7 contemporary composite resins (Definite, Suprafill, SureFil, Filtek Z250, Fill Magic, Alert, and Solitaire) to determine whether there are differences among these materials. The tests were conducted with a precision of 0.1 mg. The volumetric shrinkage was measured by hydrostatic weighing before and after polymerization and calculated by known mathematical equations. One-way ANOVA (α = 0.05) was used to determine statistically significant differences in volumetric shrinkage among the tested composite resins. Suprafill (1.87±0.01) and Definite (1.89±0.01) shrank significantly less than the other composite resins. SureFil (2.01±0.06), Filtek Z250 (1.99±0.03), and Fill Magic (2.02±0.02) presented intermediate levels of polymerization shrinkage. Alert and Solitaire presented the highest degrees of polymerization shrinkage. Knowing the polymerization shrinkage rates of commercially available composite resins, the dentist can choose between using composite resins with lower polymerization shrinkage rates or adopting technical or operational procedures to minimize the adverse effects of resin contraction during light-activation.
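
    Hydrostatic weighing gives density via Archimedes' principle, and the density change across curing implies the volume loss. One common form of the "known mathematical equations" (the authors' exact expressions may differ):

    ```python
    def density_hydrostatic(m_air_g, m_water_g, rho_water=0.9982):
        """Density (g/cm^3) from mass in air and apparent mass in water."""
        return m_air_g * rho_water / (m_air_g - m_water_g)

    def volumetric_shrinkage_pct(rho_uncured, rho_cured):
        """Since V = m/rho, shrinkage = 1 - V_cured/V_uncured
        = (rho_cured - rho_uncured) / rho_cured, in percent."""
        return 100.0 * (rho_cured - rho_uncured) / rho_cured
    ```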

  6. IMITATION OF STANDARD VOLUMETRIC ACTIVITY METAL SAMPLES

    Directory of Open Access Journals (Sweden)

    A. I. Zhukouski

    2016-01-01

    Full Text Available Due to the specific character of problems in the field of ionizing radiation spectroscopy, the development and production of standard volumetric activity metal samples (standard samples) for calibration and verification of spectrometric equipment is not only expensive but also requires highly qualified experts and unique, specialized equipment. Theoretical and experimental studies have shown the possibility of using imitators, built as a set of alternating point gamma-ray sources and metal plates, alongside standard volumetric activity metal samples for the calibration of scintillation-based detectors used in radiation control in metallurgy. Response functions, or instrumental spectra, of such a spectrometer to radionuclides like 137Cs, 134Cs, 152Eu, 154Eu, 60Co, 54Mn, 232Th, 226Ra, 65Zn, 125Sb+125mTe, 106Ru+106Rh, 94Nb, 110mAg, 233U, 234U, 235U and 238U are required for calibration in a given measurement geometry. Standard samples in the form of a probe made of molten metal of a certain diameter and height are used in such measurements. However, the production of reference materials is costly and even problematic for radionuclides such as 94Nb, 125Sb+125mTe, 234U, 235U, etc. A recognized solution to this problem is to use the Monte-Carlo simulation method. Instrumental experimental and theoretical spectra obtained using standard samples and their imitators show close agreement between the experimental spectra of real samples and the theoretical spectra of their Monte-Carlo models, between the spectra of real samples and those of their imitators, and between the experimental spectra of real-sample imitators and the theoretical spectra of their Monte-Carlo models. They have thus shown the adequacy and consistency of the approach of using a combination of metal scattering layers and reference point gamma-ray sources instead of standard volumetric activity metal samples. As for using several reference point gamma-ray sources

  7. Megraft: a software package to graft ribosomal small subunit (16S/18S) fragments onto full-length sequences for accurate species richness and sequencing depth analysis in pyrosequencing-length metagenomes and similar environmental datasets.

    Science.gov (United States)

    Bengtsson, Johan; Hartmann, Martin; Unterseher, Martin; Vaishampayan, Parag; Abarenkov, Kessy; Durso, Lisa; Bik, Elisabeth M; Garey, James R; Eriksson, K Martin; Nilsson, R Henrik

    2012-07-01

    Metagenomic libraries represent subsamples of the total DNA found at a study site and offer unprecedented opportunities to study ecological and functional aspects of microbial communities. To examine the depth of a community sequencing effort, rarefaction analysis of the ribosomal small subunit (SSU/16S/18S) gene in the metagenome is usually performed. The fragmentary, non-overlapping nature of SSU sequences in metagenomic libraries poses a problem for this analysis, however. We introduce a software package - Megraft - that grafts SSU fragments onto full-length SSU sequences, accounting for observed and unobserved variability, for accurate assessment of species richness and sequencing depth in metagenomics endeavors.
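
    Rarefaction analysis, mentioned above, estimates how observed species richness grows with sequencing depth. The sketch below is a generic subsampling version in Python, not Megraft itself (which additionally grafts fragments onto full-length sequences and models unobserved variability); the read labels are invented.

    ```python
    # Mean observed richness at several sequencing depths, by repeated subsampling.
    import random

    def rarefaction_curve(reads, depths, trials=100, seed=1):
        """reads: one taxon label per SSU read; depths: subsample sizes."""
        rng = random.Random(seed)
        curve = []
        for n in depths:
            richness = [len(set(rng.sample(reads, n))) for _ in range(trials)]
            curve.append(sum(richness) / trials)
        return curve

    # Hypothetical community: three abundant taxa and five singletons.
    reads = ["a"] * 50 + ["b"] * 30 + ["c"] * 15 + list("defgh")
    print(rarefaction_curve(reads, depths=[1, 10, 50, 100]))
    ```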

  8. Aspects of volumetric efficiency measurement for reciprocating engines

    Directory of Open Access Journals (Sweden)

    Pešić Radivoje B.

    2013-01-01

    Full Text Available The volumetric efficiency significantly influences engine output. Both the design and the dimensions of the intake and exhaust systems have a large impact on volumetric efficiency. Experimental equipment for measuring airflow through the engine, which is placed in the intake system, may affect the measurement results and distort the real picture of the impact of individual structural factors. This paper deals with the problems of experimental determination of intake airflow using orifice plates and the influence of orifice plate diameter on the results of the measurements. The problems of airflow measurements through a multi-process Otto/Diesel engine were analyzed. An original method for determining volumetric efficiency was developed based on in-cylinder pressure measurement during motored operation, and appropriate calibration of the experimental procedure was performed. Good correlation was found between the results of the original method for determining volumetric efficiency and the results of the theoretical model used to study the influence of intake pipe length on volumetric efficiency. [Acknowledgments. The paper is the result of the research within the project TR 35041 financed by the Ministry of Science and Technological Development of the Republic of Serbia.]
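
    For orientation, volumetric efficiency is conventionally the measured intake air mass flow divided by the ideal flow at a reference density, with one intake event per two crankshaft revolutions in a four-stroke engine. A back-of-envelope sketch with hypothetical engine data (not values from the paper):

    ```python
    # Volumetric efficiency of a four-stroke engine from intake air mass flow.
    def volumetric_efficiency(m_dot_air, rho_ref, v_displaced, rpm):
        """m_dot_air [kg/s], rho_ref [kg/m^3], v_displaced [m^3], rpm [1/min]."""
        intake_events_per_s = rpm / 60.0 / 2.0  # one intake per 2 revolutions
        m_dot_ideal = rho_ref * v_displaced * intake_events_per_s
        return m_dot_air / m_dot_ideal

    # Hypothetical 1.6 L engine at 3000 rpm breathing 0.040 kg/s of air:
    print(volumetric_efficiency(0.040, 1.184, 1.6e-3, 3000))  # ~0.84
    ```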

  9. The GTZAN dataset

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    The GTZAN dataset appears in at least 100 published works, and is the most-used public dataset for evaluation in machine listening research for music genre recognition (MGR). Our recent work, however, shows GTZAN has several faults (repetitions, mislabelings, and distortions), which challenge the interpretability of any result derived using it. In this article, we disprove the claims that all MGR systems are affected in the same ways by these faults, and that the performances of MGR systems in GTZAN are still meaningfully comparable since they all face the same faults. We identify and analyze the contents...

  10. Volumetric Three-Dimensional Display Systems

    Science.gov (United States)

    Blundell, Barry G.; Schwarz, Adam J.

    2000-03-01

    A comprehensive study of approaches to three-dimensional visualization by volumetric display systems. This groundbreaking volume provides an unbiased and in-depth discussion on a broad range of volumetric three-dimensional display systems. It examines the history, development, design, and future of these displays, and considers their potential for application to key areas in which visualization plays a major role. Drawing substantially on material that was previously unpublished or available only in patent form, the authors establish the first comprehensive technical and mathematical formalization of the field, and examine a number of different volumetric architectures. System level design strategies are presented, from which proposals for the next generation of high-definition predictable volumetric systems are developed. To ensure that researchers will benefit from work already completed, they provide: * Descriptions of several recent volumetric display systems prepared from material supplied by the teams that created them * An abstract volumetric display system design paradigm * An historical summary of 90 years of development in volumetric display system technology * An assessment of the strengths and weaknesses of many of the systems proposed to date * A unified presentation of the underlying principles of volumetric display systems * A comprehensive bibliography Beautifully supplemented with 17 color plates that illustrate volumetric images and prototype displays, Volumetric Three-Dimensional Display Systems is an indispensable resource for professionals in imaging systems development, scientific visualization, medical imaging, computer graphics, aerospace, military planning, and CAD/CAE.

  11. Dataset - Adviesregel PPL 2010

    NARCIS (Netherlands)

    Evert, van F.K.; Schans, van der D.A.; Geel, van W.C.A.; Slabbekoorn, J.J.; Booij, R.; Jukema, J.N.; Meurs, E.J.J.; Uenk, D.

    2011-01-01

    This dataset contains experimental data from a number of field experiments with potato in The Netherlands (Van Evert et al., 2011). The data are presented as an SQL dump of a PostgreSQL database (version 8.4.4). An outline of the entity-relationship diagram of the database is given in an accompanying document.

  12. SAMHSA Federated Datasets

    Data.gov (United States)

    Substance Abuse and Mental Health Services Administration, Department of Health and Human Services — This link provides a temporary method of accessing SAMHSA datasets that are found on the interactive portion of the Data.gov catalog. This is a temporary solution...

  13. An Analysis Methodology for Stochastic Characteristic of Volumetric Error in Multiaxis CNC Machine Tool

    Directory of Open Access Journals (Sweden)

    Qiang Cheng

    2013-01-01

    Full Text Available Traditional approaches to error modeling and analysis of machine tools rarely consider the probability characteristics of the geometric and volumetric errors systematically. However, the individual geometric errors measured at different points are variable and stochastic, and therefore the resultant volumetric error is also stochastic and uncertain. In order to address the stochastic character of the volumetric error of multiaxis machine tools, a new probabilistic mathematical model of volumetric error is proposed in this paper. According to multibody system theory, a mean value analysis model for the volumetric error is established with consideration of the geometric errors. The probability characteristics of the geometric errors are obtained by statistical analysis of the measured sample data. Based on probability statistics and stochastic process theory, the variance analysis model of the volumetric error is established in matrix form, which avoids the complex mathematical operations of direct differentiation. A four-axis horizontal machining center is selected as an illustrative example. The analysis results reveal the stochastic characteristic of the volumetric error and are also helpful for making full use of the best workspace, reducing the random uncertainty of the volumetric error, and improving the machining accuracy.
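
    The mean and variance produced by the matrix model can be cross-checked numerically. The following conceptual Monte-Carlo sketch propagates stochastic geometric errors through an assumed linearized sensitivity vector J; it is a stand-in for, and not a reproduction of, the paper's multibody formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sensitivity of the volumetric error to four geometric errors
    # at one workspace point (a stand-in for the multibody transform chain).
    J = np.array([1.0, -0.8, 0.5, 0.3])

    mu = np.array([2.0, 1.5, -1.0, 0.5])    # means from measured samples (made up)
    sigma = np.array([0.4, 0.3, 0.6, 0.2])  # standard deviations (made up)

    g = rng.normal(mu, sigma, size=(100_000, 4))  # sampled geometric errors
    e = g @ J                                     # volumetric error per sample
    # Compare with the analytic moments: mean J @ mu, variance (J**2) @ sigma**2.
    print(e.mean(), e.var())
    ```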

  14. Wiki-talk Datasets

    OpenAIRE

    Sun, Jun; Kunegis, Jérôme

    2016-01-01

    User interaction networks of Wikipedia in 28 different languages. Nodes (original Wikipedia user IDs) represent users of Wikipedia, and an edge from user A to user B denotes that user A wrote a message on the talk page of user B at a certain timestamp. More info: http://yfiua.github.io/academic/2016/02/14/wiki-talk-datasets.html

  15. Muon Identification and Isolation efficiency on full 2016 dataset

    CERN Document Server

    CMS Collaboration

    2017-01-01

    The performance of muon reconstruction and identification in CMS has been studied on data collected in pp collisions at $\\sqrt{s}$ = 13 TeV at the LHC. It has been studied on a sample of muons corresponding to an integrated luminosity of up to 36 $fb^{-1}$. We present measurements of muon reconstruction and trigger, identification and isolation efficiencies, computed with the tag-and-probe method, in different periods of the data taking. Results obtained using data are compared with Monte-Carlo predictions.

  16. Nanomaterial datasets to advance tomography in scanning transmission electron microscopy

    CERN Document Server

    Levin, Barnaby D A; Chen, Chien-Chun; Scott, M C; Xu, Rui; Theis, Wolfgang; Jiang, Yi; Yang, Yongsoo; Ophus, Colin; Zhang, Haitao; Ha, Don-Hyung; Wang, Deli; Yu, Yingchao; Abruna, Hector D; Robinson, Richard D; Ercius, Peter; Kourkoutis, Lena F; Miao, Jianwei; Muller, David A; Hovden, Robert

    2016-01-01

    Electron tomography in materials science has flourished with the demand to characterize nanoscale materials in three dimensions (3D). Access to experimental data is vital for developing and validating reconstruction methods that improve resolution and reduce radiation dose requirements. This work presents five high-quality scanning transmission electron microscope (STEM) tomography datasets in order to address the critical need for open access data in this field. The datasets represent the current limits of experimental technique, are of high quality, and contain materials with structural complexity. Included are tomographic series of a hyperbranched Co2P nanocrystal, platinum nanoparticles on a carbon nanofibre imaged over the complete 180° tilt range, a platinum nanoparticle and a tungsten needle both imaged at atomic resolution by equal slope tomography, and a through-focal tilt series of PtCu nanoparticles. A volumetric reconstruction from every dataset is provided for comparison and development of post-processing and visualization techniques.

  17. BDML Datasets - SSBD | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available BDML Datasets - SSBD | LSDB Archive. Data name: BDML Datasets. DOI: 10.18908/lsdba.nbdc01349-001.

  18. Microarray Analysis Dataset

    Science.gov (United States)

    This file contains a link to the Gene Expression Omnibus and the GSE designations for the publicly available gene expression data used in the study and reflected in Figures 6 and 7 of the Das et al. (2016) paper. This dataset is associated with the following publication: Das, K., C. Wood, M. Lin, A.A. Starkov, C. Lau, K.B. Wallace, C. Corton, and B. Abbott. Perfluoroalkyl acids-induced liver steatosis: Effects on genes controlling lipid homeostasis. TOXICOLOGY. Elsevier Science Ltd, New York, NY, USA, 378: 32-52, (2017).

  19. A reduced volumetric expansion factor plot

    Science.gov (United States)

    Hendricks, R. C.

    1979-01-01

    A reduced volumetric expansion factor plot has been constructed for simple fluids which is suitable for engineering computations in heat transfer. Volumetric expansion factors have been found useful in correlating heat transfer data over a wide range of operating conditions including liquids, gases and the near critical region.

  20. Volumetric motion quantification by 3D tissue phase mapped CMR

    Directory of Open Access Journals (Sweden)

    Lutz Anja

    2012-10-01

    Full Text Available Abstract Background The objective of this study was the quantification of myocardial motion from 3D tissue phase mapped (TPM) CMR. Recent work on myocardial motion quantification by TPM has been focussed on multi-slice 2D acquisitions, thus excluding motion information from large regions of the left ventricle. Volumetric motion assessment appears an important next step towards the understanding of volumetric myocardial motion and hence may further improve diagnosis and treatment in patients with myocardial motion abnormalities. Methods Volumetric motion quantification of the complete left ventricle was performed in 12 healthy volunteers and two patients using a black-blood 3D TPM sequence. The resulting motion field was analysed regarding motion pattern differences between apical and basal locations as well as for asynchronous motion patterns between different myocardial segments in one or more slices. Motion quantification included velocity, torsion, rotation angle and strain derived parameters. Results All investigated motion quantification parameters could be calculated from the 3D-TPM data. Parameters quantifying hypokinetic or asynchronous motion demonstrated differences between motion-impaired and healthy myocardium. Conclusions 3D-TPM enables the gapless volumetric quantification of motion abnormalities of the left ventricle, which can be applied in future applications as additional information to provide a more detailed analysis of left ventricular function.

  1. Simulation of Smart Home Activity Datasets

    Directory of Open Access Journals (Sweden)

    Jonathan Synnott

    2015-06-01

    Full Text Available A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches. Access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation.

  2. Pilgrims Face Recognition Dataset -- HUFRD

    OpenAIRE

    Aly, Salah A.

    2012-01-01

    In this work, we define a new pilgrims face recognition dataset, called HUFRD. The newly developed dataset presents various pilgrims' images taken outside the Holy Masjid El-Harram in Makkah during the 2011-2012 Hajj and Umrah seasons. This dataset will be used to test our developed facial recognition and detection algorithms, as well as to assist the missing-and-found recognition system \cite{crowdsensing}.

  3. Sensitivity of quantitative groundwater recharge estimates to volumetric and distribution uncertainty in rainfall forcing products

    Science.gov (United States)

    Werner, Micha; Westerhoff, Rogier; Moore, Catherine

    2017-04-01

    Quantitative estimates of recharge due to precipitation excess are an important input to determining sustainable abstraction of groundwater resources, as well as providing one of the boundary conditions required for numerical groundwater modelling. Simple water balance models are widely applied for calculating recharge. In these models, precipitation is partitioned between different processes and stores, including surface runoff and infiltration, storage in the unsaturated zone, evaporation, capillary processes, and recharge to groundwater. Clearly the estimation of recharge amounts will depend on the estimation of precipitation volumes, which may vary depending on the source of precipitation data used. However, the partitioning between the different processes is in many cases governed by (variable) intensity thresholds. This means that the estimates of recharge will be sensitive not only to input parameters such as soil type, texture, land use and potential evaporation, but mainly to the precipitation volume and intensity distribution. In this paper we explore the sensitivity of recharge estimates to differences in precipitation volumes and intensity distribution in the rainfall forcing over the Canterbury region in New Zealand. We compare recharge rates and volumes using a simple water balance model that is forced using rainfall and evaporation data from the NIWA Virtual Climate Station Network (VCSN) data (which is considered the reference dataset); the ERA-Interim/WATCH dataset at 0.25 degrees and 0.5 degrees resolution; the TRMM-3B42 dataset; the CHIRPS dataset; and the recently released MSWEP dataset. Recharge rates are calculated at a daily time step over the 14-year period from 2000 to 2013 for the full Canterbury region, as well as at eight selected points distributed over the region. Lysimeter data with observed estimates of recharge are available at four of these points, as well as recharge estimates from the NGRM model, an independent model...
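
    To make the intensity-threshold point concrete, a single-bucket daily water balance can be sketched as follows; the structure and every parameter value are assumptions for illustration, not the model used in this study.

    ```python
    # Daily recharge as precipitation excess after runoff, evaporation and storage.
    def daily_recharge(precip, pet, store, capacity, runoff_threshold):
        """All quantities in mm/day; returns (recharge, updated storage)."""
        runoff = max(0.0, precip - runoff_threshold)  # crude intensity threshold
        infiltration = precip - runoff
        store = store + infiltration - min(pet, store + infiltration)
        recharge = max(0.0, store - capacity)         # drainage below the root zone
        return recharge, min(store, capacity)

    store = 40.0
    for p, e in [(0.0, 3.0), (25.0, 2.0), (60.0, 2.0)]:  # three hypothetical days
        r, store = daily_recharge(p, e, store, capacity=50.0, runoff_threshold=40.0)
        print(f"P={p:5.1f}  recharge={r:5.1f}  storage={store:5.1f}")
    # On the 60 mm day, 20 mm runs off; the same volume at lower intensity would
    # all infiltrate and yield more recharge - hence the sensitivity to intensity.
    ```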

  4. DIFFERENTIAL ANALYSIS OF VOLUMETRIC STRAINS IN POROUS MATERIALS IN TERMS OF WATER FREEZING

    Directory of Open Access Journals (Sweden)

    Rusin Z.

    2013-06-01

    Full Text Available The paper presents the differential analysis of volumetric strain (DAVS). The method allows measurement of the volumetric deformations of capillary-porous materials caused by the water-ice phase change. The VSE indicator (volumetric strain effect), which under certain conditions can be interpreted as the minimum degree of phase change of the water contained in the material pores, is proposed. The test results (DAVS) for three materials with diversified microstructure (clinker brick, calcium-silicate brick and Portland cement mortar) were compared with the test results for pore characteristics obtained with mercury intrusion porosimetry.

  5. Volumetric measurement of pulmonary nodules at low-dose chest CT : effect of reconstruction setting on measurement variability

    NARCIS (Netherlands)

    Wang, Y.; de Bock, G.H.; van Klaveren, R.J.; van Ooyen, P.; Tukker, W.; Zhao, Y.; Dorrius, M.D.; Proenca, R.V.; Post, W.J.; Oudkerk, M.

    2010-01-01

    To assess volumetric measurement variability in pulmonary nodules detected at low-dose chest CT with three reconstruction settings. The volume of 200 solid pulmonary nodules was measured three times using commercially available semi-automated software on low-dose chest CT datasets reconstructed with the three reconstruction settings.

  6. Surfactant enhanced volumetric sweep efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Harwell, J.H.; Scamehorn, J.F.

    1989-10-01

    Surfactant-enhanced waterflooding is a novel EOR method aimed at improving volumetric sweep efficiency in reservoirs. The technique depends upon the ability to induce phase changes in surfactant solutions by mixing with surfactants of opposite charge or with salts of appropriate type. One surfactant or salt solution is injected into the reservoir. It is followed later by injection of another surfactant or salt solution. The sequence of injections is arranged so that the two solutions do not mix until they are well into the permeable regions away from the well bore. When they mix at this point, by design they form a precipitate or gel-like coacervate phase, plugging this permeable region and forcing flow through less permeable regions of the reservoir, improving sweep efficiency. The selectivity of the plugging process is demonstrated by achieving permeability reductions in the high-permeability regions of Berea sandstone cores. Strategies were developed to obtain better control over plug placement and plug stability. A numerical simulator has been developed to investigate the potential increases in oil production of model systems. Furthermore, the hardness tolerance of anionic surfactant solutions is shown to be enhanced by the addition of monovalent electrolyte or nonionic surfactants. 34 refs., 32 figs., 8 tabs.

  7. Volumetric measurements of pulmonary nodules: variability in automated analysis tools

    Science.gov (United States)

    Juluru, Krishna; Kim, Woojin; Boonn, William; King, Tara; Siddiqui, Khan; Siegel, Eliot

    2007-03-01

    Over the past decade, several computerized tools have been developed for detection of lung nodules and for providing volumetric analysis. Incidentally detected lung nodules have traditionally been followed over time by measurements of their axial dimensions on CT scans to ensure stability or document progression. A recently published article by the Fleischner Society offers guidelines on the management of incidentally detected nodules based on size criteria. For this reason, differences in measurements obtained by automated tools from various vendors may have significant implications on management, yet the degree of variability in these measurements is not well understood. The goal of this study is to quantify the differences in nodule maximum diameter and volume among different automated analysis software. Using a dataset of lung scans obtained with both "ultra-low" and conventional doses, we identified a subset of nodules in each of five size-based categories. Using automated analysis tools provided by three different vendors, we obtained size and volumetric measurements on these nodules, and compared these data using descriptive as well as ANOVA and t-test analysis. Results showed significant differences in nodule maximum diameter measurements among the various automated lung nodule analysis tools but no significant differences in nodule volume measurements. These data suggest that when using automated commercial software, volume measurements may be a more reliable marker of tumor progression than maximum diameter. The data also suggest that volumetric nodule measurements may be relatively reproducible among various commercial workstations, in contrast to the variability documented when performing human mark-ups, as is seen in the LIDC (lung imaging database consortium) study.
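
    The group comparison described above can be reproduced in outline with SciPy's one-way ANOVA; the measurements below are invented purely to show the mechanics.

    ```python
    # One-way ANOVA across three vendors' volume measurements (hypothetical data).
    from scipy import stats

    vendor_a = [102.1, 98.5, 110.2, 95.0]  # mm^3
    vendor_b = [101.7, 99.0, 109.8, 94.6]
    vendor_c = [103.0, 97.9, 111.1, 95.5]

    f, p = stats.f_oneway(vendor_a, vendor_b, vendor_c)
    print(f"F = {f:.3f}, p = {p:.3f}")  # large p: no significant volume difference
    ```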

  8. Subpixel based defocused points removal in photon-limited volumetric dataset

    Science.gov (United States)

    Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Maraka, Harsha Vardhan R.; Ryle, James P.; Sheridan, John T.

    2017-03-01

    The asymptotic property of the maximum likelihood estimator (MLE) has been utilized to reconstruct three-dimensional (3D) sectional images in the photon counting imaging (PCI) regime. At first, multiple 2D intensity images, known as Elemental images (EI), are captured. Then the geometric ray-tracing method is employed to reconstruct the 3D sectional images at various depth cues. We note that a 3D sectional image consists of both focused and defocused regions, depending on the reconstructed depth position. The defocused portion is redundant and should be removed in order to facilitate image analysis e.g., 3D object tracking, recognition, classification and navigation. In this paper, we present a subpixel level three-step based technique (i.e. involving adaptive thresholding, boundary detection and entropy based segmentation) to discard the defocused sparse-samples from the reconstructed photon-limited 3D sectional images. Simulation results are presented demonstrating the feasibility and efficiency of the proposed method.
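
    Of the three steps, the entropy-based segmentation is the most specialized; a common formulation is Kapur's maximum-entropy threshold, sketched below on a toy photon-limited frame. This is an illustrative stand-in rather than the authors' exact implementation.

    ```python
    # Kapur-style maximum-entropy threshold: pick the split that maximizes the
    # summed entropies of the foreground and background histograms.
    import numpy as np

    def max_entropy_threshold(image, bins=256):
        hist, edges = np.histogram(image, bins=bins)
        p = hist.astype(float) / hist.sum()
        best_t, best_h = 1, -np.inf
        for t in range(1, bins - 1):
            p0, p1 = p[:t].sum(), p[t:].sum()
            if p0 <= 0.0 or p1 <= 0.0:
                continue
            q0, q1 = p[:t] / p0, p[t:] / p1
            h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
                - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
            if h > best_h:
                best_t, best_h = t, h
        return edges[best_t]

    frame = np.random.default_rng(0).poisson(2.0, (64, 64)).astype(float)
    print(max_entropy_threshold(frame))
    ```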

  9. Volumetric optical coherence microscopy enabled by aberrated optics (Conference Presentation)

    Science.gov (United States)

    Mulligan, Jeffrey A.; Liu, Siyang; Adie, Steven G.

    2017-02-01

    Optical coherence microscopy (OCM) is an interferometric imaging technique that enables high resolution, non-invasive imaging of 3D cell cultures and biological tissues. Volumetric imaging with OCM suffers a trade-off between high transverse resolution and poor depth-of-field resulting from defocus, optical aberrations, and reduced signal collection away from the focal plane. While defocus and aberrations can be compensated with computational methods such as interferometric synthetic aperture microscopy (ISAM) or computational adaptive optics (CAO), reduced signal collection must be physically addressed through optical hardware. Axial scanning of the focus is one approach, but comes at the cost of longer acquisition times, larger datasets, and greater image reconstruction times. Given the capabilities of CAO to compensate for general phase aberrations, we present an alternative method to address the signal collection problem without axial scanning by using intentionally aberrated optical hardware. We demonstrate the use of an astigmatic spectral domain (SD-)OCM imaging system to enable single-acquisition volumetric OCM in 3D cell culture over an extended depth range, compared to a non-aberrated SD-OCM system. The transverse resolution of the non-aberrated and astigmatic imaging systems after application of CAO were 2 um and 2.2 um, respectively. The depth-range of effective signal collection about the nominal focal plane was increased from 100 um in the non-aberrated system to over 300 um in the astigmatic system, extending the range over which useful data may be acquired in a single OCM dataset. We anticipate that this method will enable high-throughput cellular-resolution imaging of dynamic biological systems over extended volumes.

  10. Laser Based 3D Volumetric Display System

    Science.gov (United States)

    1993-03-01

    The report describes a system producing laser-generated 3D volumetric images on a rotating double helix, where the 3D displays are computer controlled for group viewing with the naked eye (P. Soltan, J. Trias, W. Robinson, W. Dahlke). Cited literature includes "A Real Time Autostereoscopic Multiplanar 3D Display System", Rodney Don Williams and Felix Garcia, Jr.

  11. Pulse sequence for dynamic volumetric imaging of hyperpolarized metabolic products

    Science.gov (United States)

    Cunningham, Charles H.; Chen, Albert P.; Lustig, Michael; Hargreaves, Brian A.; Lupo, Janine; Xu, Duan; Kurhanewicz, John; Hurd, Ralph E.; Pauly, John M.; Nelson, Sarah J.; Vigneron, Daniel B.

    2008-07-01

    Dynamic nuclear polarization and dissolution of a 13C-labeled substrate enables the dynamic imaging of cellular metabolism. Spectroscopic information is typically acquired, making the acquisition of dynamic volumetric data a challenge. To enable rapid volumetric imaging, a spectral-spatial excitation pulse was designed to excite a single line of the carbon spectrum. With only a single resonance present in the signal, an echo-planar readout trajectory could be used to resolve spatial information, giving full volume coverage of 32 × 32 × 16 voxels every 3.5 s. This high frame rate was used to measure the different lactate dynamics in different tissues in a normal rat model and a mouse model of prostate cancer.

  12. Magnetic Resonance Image Segmentation and its Volumetric Measurement

    Directory of Open Access Journals (Sweden)

    Rahul R. Ambalkar

    2013-02-01

    Full Text Available Image processing techniques make it possible to extract meaningful information from medical images. Magnetic resonance (MR) imaging has been widely applied in biological research and diagnostics because of its excellent soft tissue contrast, non-invasive character, high spatial resolution and easy slice selection at any orientation. MRI-based brain volumetry is concerned with the analysis of the volumes and shapes of the structural components of the human brain. It also provides criteria by which we recognize the presence of degenerative diseases and characterize their rates of progression, making diagnosis and treatment easier. In this paper we propose an automated method for volumetric measurement from magnetic resonance images, using the Self-Organizing Map (SOM) clustering method for segmentation. We used an MRI dataset of 61 slices of 256×256 pixels in DICOM standard format.
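
    Once every voxel carries a SOM cluster label, the volumetric measurement itself reduces to counting the voxels in a tissue class and scaling by the voxel geometry from the DICOM header. A sketch with assumed spacings (the 0.9 mm pixel spacing and 1.0 mm slice thickness are illustrative, not from the study):

    ```python
    import numpy as np

    def tissue_volume_ml(labels, tissue_label, pixel_spacing_mm, slice_thickness_mm):
        """Voxel count of one cluster label, scaled to millilitres."""
        voxel_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
        return np.count_nonzero(labels == tissue_label) * voxel_mm3 / 1000.0

    labels = np.zeros((61, 256, 256), dtype=np.uint8)  # 61 slices of 256x256, as above
    labels[20:40, 80:180, 80:180] = 1                  # hypothetical tissue cluster
    print(tissue_volume_ml(labels, 1, (0.9, 0.9), 1.0), "ml")
    ```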

  13. NP-PAH Interaction Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dataset presents concentrations of organic pollutants, such as polyaromatic hydrocarbon compounds, in water samples. Water samples of known volume and concentration...

  14. Potential Applications of Flat-Panel Volumetric CT in Morphologic and Functional Small Animal Imaging

    Directory of Open Access Journals (Sweden)

    Susanne Greschus

    2005-08-01

    Full Text Available Noninvasive radiologic imaging has recently gained considerable interest in basic, preclinical research for monitoring disease progression and therapeutic efficacy. In this report, we introduce flat-panel volumetric computed tomography (fpVCT) as a powerful new tool for noninvasive imaging of different organ systems in preclinical research. The three-dimensional visualization that is achieved by isotropic high-resolution datasets is illustrated for the skeleton, chest, abdominal organs, and brain of mice. The high image quality of chest scans enables the visualization of small lung nodules in an orthotopic lung cancer model and the reliable imaging of therapy side effects such as lung fibrosis. Using contrast-enhanced scans, fpVCT displayed the vascular trees of the brain, liver, and kidney down to the subsegmental level. Functional application of fpVCT in dynamic contrast-enhanced scans of the rat brain delivered physiologically reliable data on perfusion and tissue blood volume. Beyond scanning of small animal models as demonstrated here, fpVCT provides the ability to image animals up to the size of primates.

  15. Quality Visualization of Microarray Datasets Using Circos

    Directory of Open Access Journals (Sweden)

    Martin Koch

    2012-08-01

    Full Text Available Quality control and normalization is considered the most important step in the analysis of microarray data. At present there are various methods available for quality assessment of microarray datasets. However, there seems to be no standard visualization routine which also depicts individual microarray quality. Here we present a convenient method for visualizing the results of standard quality control tests using Circos plots. In these plots various quality measurements are drawn in a circular fashion, thus allowing for visualization of the quality and all outliers of each distinct array within a microarray dataset. The proposed method is intended for use with the Affymetrix Human Genome platforms (i.e., GPL96, GPL570 and GPL571). Circos quality measurement plots are a convenient way to form an initial quality estimate of Affymetrix datasets that are stored in publicly available databases.

  16. Genomics dataset of unidentified disclosed isolates

    Directory of Open Access Journals (Sweden)

    Bhagwan N. Rekadwad

    2016-09-01

    Full Text Available Analysis of DNA sequences is necessary for the higher hierarchical classification of organisms. It gives clues about the characteristics of organisms and their taxonomic position. This dataset was chosen to find complexities in the unidentified DNA of the disclosed patents. A total of 17 unidentified DNA sequences were thoroughly analyzed. Quick response (QR) codes were generated; they are helpful for quick identification of isolates. Analysis of the AT/GC content of the DNA sequences was carried out; AT/GC content is helpful for studying the stability of sequences at different temperatures. Additionally, a dataset of cleavage codes and enzyme codes from the restriction digestion study was reported, which is helpful for performing studies using short DNA sequences. The dataset disclosed here is new revelatory data for the exploration of unique DNA sequences for evaluation, identification, comparison and analysis.

  17. FLUXNET2015 Dataset: Batteries included

    Science.gov (United States)

    Pastorello, G.; Papale, D.; Agarwal, D.; Trotta, C.; Chu, H.; Canfora, E.; Torn, M. S.; Baldocchi, D. D.

    2016-12-01

    The synthesis datasets have become one of the signature products of the FLUXNET global network. They are composed from contributions of individual site teams to regional networks, being then compiled into uniform data products - now used in a wide variety of research efforts: from plant-scale microbiology to global-scale climate change. The FLUXNET Marconi Dataset in 2000 was the first in the series, followed by the FLUXNET LaThuile Dataset in 2007, with significant additions of data products and coverage, solidifying the adoption of the datasets as a research tool. The FLUXNET2015 Dataset comes with another round of substantial improvements, including extended quality control processes and checks, use of downscaled reanalysis data for filling long gaps in micrometeorological variables, multiple methods for USTAR threshold estimation and flux partitioning, and uncertainty estimates - all of which are accompanied by auxiliary flags. This "batteries included" approach provides a lot of information for someone who wants to explore the data (and the processing methods) in detail. This inevitably leads to a large number of data variables. Although dealing with all these variables might seem overwhelming at first, especially to someone looking at eddy covariance data for the first time, there is method to our madness. In this work we describe the data products and variables that are part of the FLUXNET2015 Dataset, and the rationale behind the organization of the dataset, covering the simplified version (labeled SUBSET), the complete version (labeled FULLSET), and the auxiliary products in the dataset.

  18. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2013-01-01

    The PPD activities, in the first part of 2013, have been focused mostly on the final physics validation and preparation for the data reprocessing of the full 8 TeV datasets with the latest calibrations. These samples will be the basis for the preliminary results for summer 2013 but most importantly for the final publications on the 8 TeV Run 1 data. The reprocessing involves also the reconstruction of a significant fraction of “parked data” that will allow CMS to perform a whole new set of precision analyses and searches. In this way the CMSSW release 53X is becoming the legacy release for the 8 TeV Run 1 data. The regular operation activities have included taking care of the prolonged proton-proton data taking and the run with proton-lead collisions that ended in February. The DQM and Data Certification team has deployed a continuous effort to promptly certify the quality of the data. The luminosity-weighted certification efficiency (requiring all sub-detectors to be certified as usab...

  19. Pattern Analysis On Banking Dataset

    Directory of Open Access Journals (Sweden)

    Amritpal Singh

    2015-06-01

    Full Text Available Abstract Everyday refinement and development of technology has led to an increase in competition between tech companies and to their going out of the way to crack and break down systems. This makes data mining a strategically and security-wise important area for many business organizations, including the banking sector. It allows the analysis of important information in the data warehouse and assists banks in looking for obscure patterns in a group and discovering unknown relationships in the data. Banking systems need to process ample amounts of data on a daily basis, related to customer information, credit card details, limit and collateral details, transaction details, risk profiles, Anti-Money-Laundering information, and trade finance data. Thousands of decisions based on this data are taken in a bank daily. This paper analyzes a banking dataset in the Weka environment for the detection of interesting patterns, based on its applications in customer acquisition, customer retention, management and marketing, risk management, and fraud detection.

  20. Nonequilibrium volumetric response of shocked polymers

    Energy Technology Data Exchange (ETDEWEB)

    Clements, B E [Los Alamos National Laboratory

    2009-01-01

    Polymers are well known for their non-equilibrium deviatoric behavior. However, investigations involving both high rate shock experiments and equilibrium measured thermodynamic quantities remind us that the volumetric behavior also exhibits a non-equilibrium response. Experiments supporting the notion of a non-equilibrium volumetric behavior will be summarized. Following that discussion, a continuum-level theory is proposed that will account for both the equilibrium and non-equilibrium response. Upon finding agreement with experiment, the theory is used to study the relaxation of a shocked polymer back towards its shocked equilibrium state.

  1. Pgu-Face: A dataset of partially covered facial images

    Directory of Open Access Journals (Sweden)

    Seyed Reza Salari

    2016-12-01

    Full Text Available In this article we introduce a human face image dataset. Images were taken in close-to-real-world conditions using several cameras, often mobile phones' cameras. The dataset contains 224 subjects imaged under four different figures (a nearly clean-shaven countenance, a nearly clean-shaven countenance with sunglasses, an unshaven or stubble face countenance, and an unshaven or stubble face countenance with sunglasses) in up to two recording sessions. The existence of partially covered face images in this dataset can reveal the robustness and efficiency of several facial image processing algorithms. In this work we present the dataset and explain the recording method.

  2. Dataset of NRDA emission data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Emissions data from open air oil burns. This dataset is associated with the following publication: Gullett, B., J. Aurell, A. Holder, B. Mitchell, D. Greenwell, M....

  3. Turkey Run Landfill Emissions Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — landfill emissions measurements for the Turkey run landfill in Georgia. This dataset is associated with the following publication: De la Cruz, F., R. Green, G....

  4. Genomic Datasets for Cancer Research

    Science.gov (United States)

    A variety of datasets from genome-wide association studies of cancer and other genotype-phenotype studies, including sequencing and molecular diagnostic assays, are available to approved investigators through the Extramural National Cancer Institute Data Access Committee.

  5. Chemical product and function dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Merged product weight fraction and chemical function data. This dataset is associated with the following publication: Isaacs , K., M. Goldsmith, P. Egeghy , K....

  6. Atlantic Offshore Seabird Dataset Catalog

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Several bureaus within the Department of Interior compiled available information from seabird observation datasets from the Atlantic Outer Continental Shelf into a...

  7. Optimal planning strategy among various arc arrangements for prostate stereotactic body radiotherapy with volumetric modulated arc therapy technique

    Directory of Open Access Journals (Sweden)

    Kang Sang Won

    2017-03-01

    Full Text Available The aim of this study was to determine the optimal strategy among various arc arrangements in prostate plans of stereotactic body radiotherapy with volumetric modulated arc therapy (SBRT-VMAT).

  8. Synthesis of Volumetric Ring Antenna Array for Terrestrial Coverage Pattern

    Directory of Open Access Journals (Sweden)

    Alberto Reyna

    2014-01-01

    Full Text Available This paper presents the synthesis of a volumetric ring antenna array for a terrestrial coverage pattern. The synthesis considers the spacing among the rings on the X-Y planes, the positions of the rings on the X-Z plane, and uniform and concentric excitations. The optimization is carried out by implementing particle swarm optimization. The synthesis is compared with previous designs, showing that this geometry provides accurate coverage for satellite applications with a maximum reduction of the antenna hardware as well as a reduction of the side lobe level.
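
    Particle swarm optimization, used in the synthesis above, moves a population of candidate solutions toward their personal and global bests. A generic sketch follows; the toy quadratic cost stands in for the paper's coverage and side-lobe objective, and all parameters are conventional defaults rather than the authors' settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        x = rng.uniform(-1.0, 1.0, (n_particles, dim))  # e.g. spacings/excitations
        v = np.zeros_like(x)
        pbest = x.copy()
        pcost = np.apply_along_axis(cost, 1, x)
        gbest = pbest[pcost.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            c = np.apply_along_axis(cost, 1, x)
            better = c < pcost
            pbest[better], pcost[better] = x[better], c[better]
            gbest = pbest[pcost.argmin()].copy()
        return gbest, pcost.min()

    best, best_cost = pso(lambda z: np.sum((z - 0.3) ** 2), dim=8)
    print(best_cost)  # should approach 0 for this toy cost
    ```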

  9. Process conditions and volumetric composition in composites

    DEFF Research Database (Denmark)

    Madsen, Bo

    2013-01-01

    The obtainable volumetric composition in composites is linked to the gravimetric composition, and it is influenced by the conditions of the manufacturing process. A model for the volumetric composition is presented, where the volume fractions of fibers, matrix and porosity are calculated as a function of the fiber weight fraction, and where parameters are included for the composite microstructure and the fiber assembly compaction behavior. Based on experimental data of composites manufactured with different process conditions, together with model predictions, different types of process-related effects are analyzed. Altogether, the model is demonstrated to be a valuable tool for a quantitative analysis of the effect of process conditions. Based on the presented findings and considerations, examples of future work are mentioned for the further improvement of the model.

  10. Indexing Volumetric Shapes with Matching and Packing.

    Science.gov (United States)

    Koes, David Ryan; Camacho, Carlos J

    2015-04-01

    We describe a novel algorithm for bulk-loading an index with high-dimensional data and apply it to the problem of volumetric shape matching. Our matching and packing algorithm is a general approach for packing data according to a similarity metric. First an approximate k-nearest neighbor graph is constructed using vantage-point initialization, an improvement to previous work that decreases construction time while improving the quality of approximation. Then graph matching is iteratively performed to pack related items closely together. The end result is a dense index with good performance. We define a new query specification for shape matching that uses minimum and maximum shape constraints to explicitly specify the spatial requirements of the desired shape. This specification provides a natural language for performing volumetric shape matching and is readily supported by the geometry-based similarity search (GSS) tree, an indexing structure that maintains explicit representations of volumetric shape. We describe our implementation of a GSS tree for volumetric shape matching and provide a comprehensive evaluation of parameter sensitivity, performance, and scalability. Compared to previous bulk-loading algorithms, we find that matching and packing can construct a GSS-tree index in the same amount of time that is denser, flatter, and better performing, with an observed average performance improvement of 2X.
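
    The packing idea can be conveyed by a much-simplified greedy variant: repeatedly pair each item with its most similar unmatched neighbour so that similar shapes land on adjacent index pages. The toy sketch below (random feature vectors, brute-force distances) only illustrates the intuition; the actual algorithm builds an approximate k-nearest-neighbor graph with vantage-point initialization and iterates graph matching.

    ```python
    import numpy as np

    def greedy_pack(features):
        """Order items so consecutive entries are mutually similar (toy packing)."""
        n = len(features)
        d = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        unmatched, order = set(range(n)), []
        while unmatched:
            i = min(unmatched)                 # arbitrary seed for the next pair
            unmatched.remove(i)
            order.append(i)
            if unmatched:
                j = min(unmatched, key=lambda k: d[i, k])  # nearest unmatched item
                unmatched.remove(j)
                order.append(j)
        return order

    pts = np.random.default_rng(1).normal(size=(10, 4))
    print(greedy_pack(pts))
    ```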

  11. Selective-plane illumination microscopy for high-content volumetric biological imaging

    Science.gov (United States)

    McGorty, Ryan; Huang, Bo

    2016-03-01

    Light-sheet microscopy, also named selective-plane illumination microscopy, enables optical sectioning with minimal light delivered to the sample. Therefore, it allows one to gather volumetric datasets of developing embryos and other light-sensitive samples over extended times. We have configured a light-sheet microscope that, unlike most previous designs, can image samples in formats compatible with high-content imaging. Our microscope can be used with multi-well plates or with microfluidic devices. In designing our optical system to accommodate these types of sample holders we encounter large optical aberrations. We counter these aberrations with both static optical components in the imaging path and with adaptive optics. Potential applications of this microscope include studying the development of a large number of embryos in parallel and over long times with subcellular resolution and doing high-throughput screens on organisms or cells where volumetric data is necessary.

  12. Scalable Machine Learning for Massive Astronomical Datasets

    Science.gov (United States)

    Ball, Nicholas M.; Gray, A.

    2014-04-01

    We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the applications of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors. This is likely of particular interest to the radio astronomy community given, for example, that survey projects contain groups dedicated to this topic. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex

  13. Detecting bimodality in astronomical datasets

    Science.gov (United States)

    Ashman, Keith A.; Bird, Christina M.; Zepf, Stephen E.

    1994-01-01

    We discuss statistical techniques for detecting and quantifying bimodality in astronomical datasets. We concentrate on the KMM algorithm, which estimates the statistical significance of bimodality in such datasets and objectively partitions data into subpopulations. By simulating bimodal distributions with a range of properties we investigate the sensitivity of KMM to datasets with varying characteristics. Our results facilitate the planning of optimal observing strategies for systems where bimodality is suspected. Mixture-modeling algorithms similar to the KMM algorithm have been used in previous studies to partition the stellar population of the Milky Way into subsystems. We illustrate the broad applicability of KMM by analyzing published data on globular cluster metallicity distributions, velocity distributions of galaxies in clusters, and burst durations of gamma-ray sources. FORTRAN code for the KMM algorithm and directions for its use are available from the authors upon request.
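
    In the same spirit as KMM, bimodality can be screened by fitting one- and two-component Gaussian mixtures and comparing their likelihoods; KMM additionally attaches a significance estimate and an objective partition. The sketch below uses scikit-learn on synthetic data and is an analogue of, not a reproduction of, the KMM code.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Toy "metallicity" sample drawn from two overlapping Gaussians.
    data = np.concatenate([rng.normal(-1.6, 0.4, 150),
                           rng.normal(-0.6, 0.4, 150)]).reshape(-1, 1)

    g1 = GaussianMixture(n_components=1, random_state=0).fit(data)
    g2 = GaussianMixture(n_components=2, random_state=0).fit(data)
    # score() is the mean log-likelihood per sample, hence the factor len(data).
    lrt = 2.0 * (g2.score(data) - g1.score(data)) * len(data)
    print(f"likelihood-ratio statistic = {lrt:.1f}")
    print("partition of first 10 points:", g2.predict(data)[:10])
    ```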

  14. Volumetric 3D Display System with Static Screen

    Science.gov (United States)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume, and physically places each 3D voxel in displayed 3D images at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° view without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots are created by using a recently available machining technique called laser subsurface engraving (LSE). The LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light around and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous...

  15. Nanomaterial datasets to advance tomography in scanning transmission electron microscopy.

    Science.gov (United States)

    Levin, Barnaby D A; Padgett, Elliot; Chen, Chien-Chun; Scott, M C; Xu, Rui; Theis, Wolfgang; Jiang, Yi; Yang, Yongsoo; Ophus, Colin; Zhang, Haitao; Ha, Don-Hyung; Wang, Deli; Yu, Yingchao; Abruña, Hector D; Robinson, Richard D; Ercius, Peter; Kourkoutis, Lena F; Miao, Jianwei; Muller, David A; Hovden, Robert

    2016-06-07

    Electron tomography in materials science has flourished with the demand to characterize nanoscale materials in three dimensions (3D). Access to experimental data is vital for developing and validating reconstruction methods that improve resolution and reduce radiation dose requirements. This work presents five high-quality scanning transmission electron microscope (STEM) tomography datasets in order to address the critical need for open access data in this field. The datasets represent the current limits of experimental technique, are of high quality, and contain materials with structural complexity. Included are tomographic series of a hyperbranched Co2P nanocrystal, platinum nanoparticles on a carbon nanofibre imaged over the complete 180° tilt range, a platinum nanoparticle and a tungsten needle both imaged at atomic resolution by equal slope tomography, and a through-focal tilt series of PtCu nanoparticles. A volumetric reconstruction from every dataset is provided for comparison and development of post-processing and visualization techniques. Researchers interested in creating novel data processing and reconstruction algorithms will now have access to state of the art experimental test data.

  16. The Harvard organic photovoltaic dataset

    Science.gov (United States)

    Lopez, Steven A.; Pyzer-Knapp, Edward O.; Simm, Gregor N.; Lutzow, Trevor; Li, Kewei; Seress, Laszlo R.; Hachmann, Johannes; Aspuru-Guzik, Alán

    2016-09-01

    The Harvard Organic Photovoltaic Dataset (HOPV15) presented in this work is a collation of experimental photovoltaic data from the literature, and corresponding quantum-chemical calculations performed over a range of conformers, each with quantum chemical results using a variety of density functionals and basis sets. It is anticipated that this dataset will be of use in both relating electronic structure calculations to experimental observations through the generation of calibration schemes, as well as for the creation of new semi-empirical methods and the benchmarking of current and future model chemistries for organic electronic applications.

  18. Statewide Datasets for Idaho StreamStats

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This dataset consists of a workspace (folder) containing four gridded datasets and a personal geodatabase. The gridded datasets are a grid of mean annual...

  19. Statewide datasets for Hawaii StreamStats

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This dataset consists of a workspace (folder) containing 41 gridded datasets and a personal geodatabase. The gridded datasets consist of 28 precipitation-frequency...

  20. 3DSEM: A 3D microscopy dataset

    Directory of Open Access Journals (Sweden)

    Ahmad P. Tafti

    2016-03-01

    Full Text Available The Scanning Electron Microscope (SEM as a 2D imaging instrument has been widely used in many scientific disciplines including biological, mechanical, and materials sciences to determine the surface attributes of microscopic objects. However the SEM micrographs still remain 2D images. To effectively measure and visualize the surface properties, we need to truly restore the 3D shape model from 2D SEM images. Having 3D surfaces would provide anatomic shape of micro-samples which allows for quantitative measurements and informative visualization of the specimens being investigated. The 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples.

  1. Combined surface and volumetric occlusion shading

    KAUST Repository

    Schott, Matthias O.

    2012-02-01

    In this paper, a method for interactive direct volume rendering is proposed that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The proposed algorithm extends the recently proposed Directional Occlusion Shading model to allow the rendering of those geometric shapes in combination with a context providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. © 2012 IEEE.

  2. Cellular resolution volumetric in vivo retinal imaging with adaptive optics–optical coherence tomography

    Science.gov (United States)

    Zawadzki, Robert J.; Choi, Stacey S.; Fuller, Alfred R.; Evans, Julia W.; Hamann, Bernd; Werner, John S.

    2009-01-01

    Ultrahigh-resolution adaptive optics–optical coherence tomography (UHR-AO-OCT) instrumentation allowing monochromatic and chromatic aberration correction was used for volumetric in vivo retinal imaging of various retinal structures including the macula and optic nerve head (ONH). Novel visualization methods that simplify AO-OCT data viewing are presented, and include co-registration of AO-OCT volumes with fundus photography and stitching of multiple AO-OCT sub-volumes to create a large field of view (FOV) high-resolution volume. Additionally, we explored the utility of Interactive Science Publishing by linking all presented AO-OCT datasets with the OSA ISP software. PMID:19259248

  3. Cellular resolution volumetric in vivo retinal imaging with adaptive optics-optical coherence tomography.

    Science.gov (United States)

    Zawadzki, Robert J; Choi, Stacey S; Fuller, Alfred R; Evans, Julia W; Hamann, Bernd; Werner, John S

    2009-03-02

    Ultrahigh-resolution adaptive optics-optical coherence tomography (UHR-AO-OCT) instrumentation allowing monochromatic and chromatic aberration correction was used for volumetric in vivo retinal imaging of various retinal structures including the macula and optic nerve head (ONH). Novel visualization methods that simplify AO-OCT data viewing are presented, and include co-registration of AO-OCT volumes with fundus photography and stitching of multiple AO-OCT sub-volumes to create a large field of view (FOV) high-resolution volume. Additionally, we explored the utility of Interactive Science Publishing by linking all presented AO-OCT datasets with the OSA ISP software.

  4. Enhancing Volumetric Bouligand-Minkowski Fractal Descriptors by using Functional Data Analysis

    CERN Document Server

    Florindo, João Batista; Bruno, Odemir Martinez; 10.1142/S0129183111016701

    2012-01-01

    This work proposes and studies the concept of the Functional Data Analysis (FDA) transform, applying it to improve the performance of volumetric Bouligand-Minkowski fractal descriptors. The proposed transform consists essentially of mapping the descriptors, originally defined in the space of the fractal dimension calculus, into the space of coefficients used in the functional data representation of these descriptors. The transformed descriptors are used here in texture classification problems. The enhancement provided by the FDA transform is measured by comparing the transformed to the original descriptors in terms of the correctness rate in the classification of well-known datasets.
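
    A minimal sketch of the general idea, with a plain polynomial fit standing in for the functional basis actually used and synthetic descriptor curves in place of real Bouligand-Minkowski descriptors:

```python
import numpy as np

def fda_transform(descriptors, degree=8):
    """Replace each descriptor curve by the coefficients of a least-squares
    polynomial fit -- a simple stand-in for a functional (e.g. B-spline)
    basis representation."""
    x = np.linspace(0.0, 1.0, descriptors.shape[1])
    return np.array([np.polyfit(x, d, degree) for d in descriptors])

# Hypothetical data: 10 texture samples, 50-point descriptor curves each.
rng = np.random.default_rng(0)
curves = np.cumsum(rng.random((10, 50)), axis=1)
coeffs = fda_transform(curves)
print(coeffs.shape)  # (10, 9): classification now runs in coefficient space
```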

  5. Querying Large Biological Network Datasets

    Science.gov (United States)

    Gulsoy, Gunhan

    2013-01-01

    New experimental methods have resulted in increasing amounts of genetic interaction data being generated every day. Biological networks are used to store the genetic interaction data gathered. The increasing amount of available data requires fast large-scale analysis methods. Therefore, we address the problem of querying large biological network datasets.…

  6. MR volumetric assessment of endolymphatic hydrops

    Energy Technology Data Exchange (ETDEWEB)

    Guerkov, R.; Berman, A.; Jerin, C.; Krause, E. [University of Munich, Department of Otorhinolaryngology Head and Neck Surgery, Grosshadern Medical Centre, Munich (Germany); University of Munich, German Centre for Vertigo and Balance Disorders, Grosshadern Medical Centre, Marchioninistr. 15, 81377, Munich (Germany); Dietrich, O.; Flatz, W.; Ertl-Wagner, B. [University of Munich, Institute of Clinical Radiology, Grosshadern Medical Centre, Munich (Germany); Keeser, D. [University of Munich, Institute of Clinical Radiology, Grosshadern Medical Centre, Munich (Germany); University of Munich, German Centre for Vertigo and Balance Disorders, Grosshadern Medical Centre, Marchioninistr. 15, 81377, Munich (Germany); University of Munich, Department of Psychiatry and Psychotherapy, Innenstadtkliniken Medical Centre, Munich (Germany)

    2014-10-16

    We aimed to volumetrically quantify the endolymph and perilymph spaces of the inner ear in order to establish a methodological basis for further investigations into the pathophysiology and therapeutic monitoring of Meniere's disease. Sixteen patients (eight females, aged 38-71 years) with definite unilateral Meniere's disease were included in this study. Magnetic resonance (MR) cisternography with a T2-SPACE sequence was combined with a Real reconstruction inversion recovery (Real-IR) sequence for delineation of the inner ear fluid spaces. Machine learning and automated local thresholding segmentation algorithms were applied for three-dimensional (3D) reconstruction and volumetric quantification of endolymphatic hydrops. Test-retest reliability was assessed by the intra-class correlation coefficient; correlation of the cochlear endolymph volume ratio with hearing function was assessed by the Pearson correlation coefficient. Endolymph volume ratios could be reliably measured in all patients, with a mean (range) value of 15 % (2-25) for the cochlea and 28 % (12-40) for the vestibulum. Test-retest reliability was excellent, with an intra-class correlation coefficient of 0.99. Cochlear endolymphatic hydrops was significantly correlated with hearing loss (r = 0.747, p = 0.001). MR imaging after local contrast application and image processing, including machine learning and automated local thresholding, enables the volumetric quantification of endolymphatic hydrops. This allows for a quantitative assessment of the effect of therapeutic interventions on endolymphatic hydrops. (orig.)
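
    As a simplified illustration only (the published pipeline uses machine learning and local, not global, thresholding), the final volume ratio reduces to counting voxels inside the fluid mask:

```python
import numpy as np

def endolymph_ratio(real_ir, fluid_mask, threshold):
    """Crude stand-in for the paper's pipeline: voxels inside the inner-ear
    fluid mask whose Real-IR intensity falls below `threshold` are counted
    as endolymph; the ratio is endolymph volume over total fluid volume."""
    fluid = real_ir[fluid_mask]
    endo = np.count_nonzero(fluid < threshold)
    return endo / fluid.size

# Hypothetical volumes: random intensities and a spherical fluid mask.
rng = np.random.default_rng(1)
vol = rng.normal(size=(48, 48, 48))
zz, yy, xx = np.mgrid[:48, :48, :48]
mask = (zz - 24) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2 < 15 ** 2
print(f"endolymph volume ratio: {endolymph_ratio(vol, mask, -1.0):.2%}")
```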

  7. Illustration-inspired depth enhanced volumetric medical visualization.

    Science.gov (United States)

    Svakhine, Nikolai A; Ebert, David S; Andrews, William M

    2009-01-01

    Volume illustration can be used to provide insight into source data from CT/MRI scanners in much the same way as medical illustration depicts the important details of anatomical structures. As such, proven techniques used in medical illustration should be transferable to volume illustration, providing scientists with new tools to visualize their data. In recent years, a number of techniques have been developed to enhance the rendering pipeline and create illustrative effects similar to the ones found in medical textbooks and surgery manuals. Such effects usually highlight important features of the subject while subjugating its context and providing depth cues for correct perception. Inspired by traditional visual and line-drawing techniques found in medical illustration, we have developed a collection of fast algorithms for more effective emphasis/de-emphasis of data as well as conveyance of spatial relationships. Our techniques utilize effective outlining techniques and selective depth enhancement to provide perceptual cues of object importance as well as spatial relationships in volumetric datasets. Moreover, we have used illustration principles to effectively combine and adapt basic techniques so that they work together to provide consistent visual information and a uniform style.

  8. The effects of dimensional mould sizes on volumetric shrinkage strain of lateritic soil

    Directory of Open Access Journals (Sweden)

    John Engbonye SANI

    2016-07-01

    Full Text Available The influence of specimen size on the volumetric shrinkage strain values of a lateritic soil for waste containment systems has not previously been researched. Therefore, this paper presents the results of a laboratory study on the volumetric shrinkage strain (VSS) of a lateritic soil at three different mould sizes (split former mould, proctor mould and California bearing ratio (CBR) mould) at three energy levels: British standard light (BSL), West African standard (WAS) and British standard heavy (BSH), respectively. Compactions were done at molding water contents of -2% to +6% of the optimum moisture content (OMC). At -2% to +2% molding water content for the split former mould, the volumetric shrinkage strain met the requirement of not more than 4%, while at +4% and +6% only the WAS and BSH met the requirement. The proctor mould and the CBR mould, on the other hand, gave lower values of volumetric shrinkage strain at all compactive efforts, and the values are lower than the 4% safe VSS suggested by Tay et al. (2001). Based on the VSS values obtained, if the CBR mould can be used to model site conditions, it is recommended for simulating site conditions for volumetric shrinkage strain at all molding water contents and compactive efforts.
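
    For reference, the quantity being reported can be computed as the relative loss of bulk volume on drying; a minimal sketch with hypothetical specimen volumes:

```python
def volumetric_shrinkage_strain(v_initial, v_dry):
    """VSS (%) as the loss of bulk volume on drying, relative to the
    as-compacted volume -- the usual definition; sign conventions vary."""
    return (v_initial - v_dry) / v_initial * 100.0

# Hypothetical specimen: 1000 cm3 as compacted, 965 cm3 after drying.
print(f"VSS = {volumetric_shrinkage_strain(1000.0, 965.0):.1f} %")  # 3.5 %
```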

  9. Using pressure and volumetric approaches to estimate CO2 storage capacity in deep saline aquifers

    OpenAIRE

    Thibeau, S.; Bachu, S.; Birkholzer, J.; Holloway, S; Neele, F.P.; Zou, Q

    2014-01-01

    Various approaches are used to evaluate the capacity of saline aquifers to store CO2, resulting in a wide range of capacity estimates for a given aquifer. The two approaches most used are the volumetric “open aquifer” and “closed aquifer” approaches. We present four full-scale aquifer cases, where CO2 storage capacity is evaluated both volumetrically (with “open” and/or “closed” approaches) and through flow modeling. These examples show that the “open aquifer” CO2 storage capacity estimation ...
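
    A generic form of the volumetric screening estimate (assumed here; the paper itself compares such estimates against flow modelling) multiplies aquifer geometry, porosity, CO2 density and a storage-efficiency factor:

```python
def co2_capacity_mt(area_m2, thickness_m, porosity, rho_co2, efficiency):
    """Volumetric ("open aquifer") capacity estimate in megatonnes:
    M = A * h * phi * rho_CO2 * E, where E is a storage-efficiency factor.
    This is the generic screening formula, not the paper's exact workflow."""
    return area_m2 * thickness_m * porosity * rho_co2 * efficiency / 1e9

# Hypothetical aquifer: 5000 km2, 100 m thick, 20 % porosity,
# CO2 density 700 kg/m3 at depth, 2 % efficiency factor.
print(f"{co2_capacity_mt(5e9, 100.0, 0.20, 700.0, 0.02):.0f} Mt")  # 1400 Mt
```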

  10. Volumetric Diffuse Optical Tomography for Small Animals Using a CCD-Camera-Based Imaging System

    Directory of Open Access Journals (Sweden)

    Zi-Jing Lin

    2012-01-01

    Full Text Available We report the feasibility of three-dimensional (3D) volumetric diffuse optical tomography for small animal imaging by using a CCD-camera-based imaging system with a newly developed depth compensation algorithm (DCA). Our computer simulations and laboratory phantom studies have demonstrated that the combination of a CCD camera and DCA can significantly improve the accuracy of depth localization and lead to the reconstruction of 3D volumetric images. This approach may be of great interest for noninvasive 3D localization of an anomaly hidden in tissue, such as a tumor or a stroke lesion, in preclinical small animal models.

  11. 4D ultrafast ultrasound flow imaging: in vivo quantification of arterial volumetric flow rate in a single heartbeat

    Science.gov (United States)

    Correia, Mafalda; Provost, Jean; Tanter, Mickael; Pernot, Mathieu

    2016-12-01

    We present herein 4D ultrafast ultrasound flow imaging, a novel ultrasound-based volumetric imaging technique for the quantitative mapping of blood flow. Complete volumetric blood flow distribution imaging was achieved through 2D tilted plane-wave insonification, 2D multi-angle cross-beam beamforming, and 3D vector Doppler velocity component estimation by least-squares fitting. 4D ultrafast ultrasound flow imaging was performed in large volumetric fields of view at very high volume rates (>4000 volumes s-1) using a 1024-channel 4D ultrafast ultrasound scanner and a 2D matrix-array transducer. The precision of the technique was evaluated in vitro by using 3D velocity vector maps to estimate volumetric flow rates in a vessel phantom. Volumetric flow rate errors of less than 5% were found when volumetric flow rates and peak velocities were respectively less than 360 ml min-1 and 100 cm s-1. The average volumetric flow rate error increased to 18.3% when volumetric flow rates and peak velocities were up to 490 ml min-1 and 1.3 m s-1, respectively. The in vivo feasibility of the technique was shown in the carotid arteries of two healthy volunteers. The 3D blood flow velocity distribution was assessed during one cardiac cycle in a full volume and used to quantify volumetric flow rates (375 ± 57 ml min-1 and 275 ± 43 ml min-1). Finally, the formation of 3D vortices at the carotid artery bifurcation was imaged at high volume rates.
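
    Once a 3D velocity vector map is available, the volumetric flow rate through a cross-section is the integral of the normal velocity component over the section area. A minimal numpy sketch with a hypothetical parabolic velocity profile:

```python
import numpy as np

def volumetric_flow_rate(v_normal, dx, dy):
    """Flow rate Q = sum(v_n * dA) over a vessel cross-section, with
    v_normal the velocity component (m/s) normal to the section on a
    regular grid and dx, dy the grid spacing in metres. Returns ml/min."""
    q_m3_s = np.nansum(v_normal) * dx * dy        # m^3/s
    return q_m3_s * 1e6 * 60.0                    # -> ml/min

# Hypothetical parabolic profile in a 6 mm vessel, 0.1 mm grid spacing.
x = np.linspace(-3e-3, 3e-3, 61)
xx, yy = np.meshgrid(x, x)
r2 = xx**2 + yy**2
v = np.where(r2 < 9e-6, 0.4 * (1 - r2 / 9e-6), np.nan)  # peak 0.4 m/s
print(f"Q ~ {volumetric_flow_rate(v, 1e-4, 1e-4):.0f} ml/min")  # ~340
```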

  12. MATHEMATICAL MODEL FOR DETERMINATION OF VOLUMETRIC OUTPUT OF LUMBER FROM LOGS, CONTAINING SEVERAL QUALITY AREAS

    Directory of Open Access Journals (Sweden)

    Mikryukova E. V.

    2014-12-01

    Full Text Available In this article we present a method of cutting logs containing several quality areas. For this method, a mathematical model was developed to determine the volumetric output of lumber, which allows the geometric dimensions of the lumber cut from the different quality areas, separated by concentric circles, to be determined depending on the size and quality characteristics of the logs.

  13. VOLUMETRIC METHOD FOR EVALUATION OF BEACHES VARIABILITY BASED ON GIS-TOOLS

    Directory of Open Access Journals (Sweden)

    V. V. Dolotov

    2015-01-01

    Full Text Available Within the framework of cadastral beach evaluation, a volumetric method for a natural variability index is proposed. It is based on spatial calculations with the Cut-Fill method and on volume accounting of both the common beach contour and specific areas for each survey time.
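
    The core of the Cut-Fill calculation is a per-cell difference of two elevation surfaces, with positive and negative changes accumulated separately. A minimal sketch, assuming two hypothetical beach DEMs on a common grid:

```python
import numpy as np

def cut_fill(dem_before, dem_after, cell_area):
    """Cut-Fill volumes between two beach DEMs on the same grid:
    positive differences are accretion (fill), negative are erosion (cut).
    Returns (cut_volume, fill_volume) in height units times cell_area."""
    dz = dem_after - dem_before
    fill = np.nansum(np.where(dz > 0, dz, 0.0)) * cell_area
    cut = -np.nansum(np.where(dz < 0, dz, 0.0)) * cell_area
    return cut, fill

# Hypothetical 1 m grid: uniform 5 cm loss over a 100 x 100 m beach patch.
before = np.full((100, 100), 2.00)
after = np.full((100, 100), 1.95)
print(cut_fill(before, after, cell_area=1.0))  # ~500 m3 cut, 0 fill
```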

  14. Using pressure and volumetric approaches to estimate CO2 storage capacity in deep saline aquifers

    NARCIS (Netherlands)

    Thibeau, S.; Bachu, S.; Birkholzer, J.; Holloway, S.; Neele, F.P.; Zou, Q.

    2014-01-01

    Various approaches are used to evaluate the capacity of saline aquifers to store CO2, resulting in a wide range of capacity estimates for a given aquifer. The two approaches most used are the volumetric “open aquifer” and “closed aquifer” approaches. We present four full-scale aquifer cases, where C

  16. Matchmaking, datasets and physics analysis

    CERN Document Server

    Donno, Flavia; Eulisse, Giulio; Mazzucato, Mirco; Steenberg, Conrad; CERN. Geneva. IT Department; 10.1109/ICPPW.2005.48

    2005-01-01

    Grid enabled physics analysis requires a workload management system (WMS) that takes care of finding suitable computing resources to execute data intensive jobs. A typical example is the WMS available in the LCG2 (also referred to as EGEE-0) software system, used by several scientific experiments. Like many other current grid systems, LCG2 provides a file level granularity for accessing and analysing data. However, application scientists such as high energy physicists often require a higher abstraction level for accessing data, i.e. they prefer to use datasets rather than files in their physics analysis. We have improved the current WMS (in particular the Matchmaker) to allow physicists to express their analysis job requirements in terms of datasets. This required modifications to the WMS and its interface to potential data catalogues. As a result, we propose a simple data location interface that is based on a Web service approach and allows for interoperability of the WMS with new dataset and file catalogues...

  17. Viking Seismometer PDS Archive Dataset

    Science.gov (United States)

    Lorenz, R. D.

    2016-12-01

    The Viking Lander 2 seismometer operated successfully for over 500 Sols on the Martian surface, recording at least one likely candidate Marsquake. The Viking mission, in an era when data handling hardware (both on board and on the ground) was limited in capability, predated modern planetary data archiving; the ad-hoc repositories of the data, and the very low-level record at NSSDC, were neither convenient to process nor well known. In an effort supported by the NASA Mars Data Analysis Program, we have converted the bulk of the Viking dataset (namely the 49,000 and 270,000 records made in High and Event modes at 20 and 1 Hz, respectively) into a simple ASCII table format. Additionally, since wind-generated lander motion is a major component of the signal, contemporaneous meteorological data are included in summary records to facilitate correlation. These datasets are being archived at the PDS Geosciences Node. In addition to brief instrument and dataset descriptions, the archive includes code snippets in the freely available language 'R' to demonstrate plotting and analysis. Further, we present examples of lander-generated noise associated with the sampler arm, instrument dumps and other mechanical operations.

  18. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2013-01-01

    The first part of the Long Shutdown period has been dedicated to the preparation of the samples for the analyses targeting the summer conferences. In particular, the 8 TeV data acquired in 2012, including most of the “parked datasets”, have been reconstructed profiting from improved alignment and calibration conditions for all the sub-detectors. Careful planning of the resources was essential in order to deliver the datasets to the analysts well in time, and to schedule the update of all the conditions and calibrations needed at the analysis level. The newly reprocessed data have undergone detailed scrutiny by the Dataset Certification team, allowing the recovery of some of the data for analysis usage and further improving the certification efficiency, which is now at 91% of the recorded luminosity. With the aim of delivering a consistent dataset for 2011 and 2012, both in terms of conditions and release (53X), the PPD team is now working to set up a data re-reconstruction and a new MC pro...

  19. A SUBDIVISION SCHEME FOR VOLUMETRIC MODELS

    Institute of Scientific and Technical Information of China (English)

    GhulamMustafa; LiuXuefeng

    2005-01-01

    In this paper, a subdivision scheme which generalizes a surface scheme from previous papers to volume meshes is designed. The scheme exhibits significant control over the shrinkage/size of volumetric models. It also has the ability to conveniently incorporate boundaries and creases into the smooth limit shape of models. The method presented here is much simpler and easier than that of MacCracken and Joy. This method makes no restrictions on the local topology of meshes. In particular, it can be applied without any change to meshes of nonmanifold topology.

  20. Volumetric composition in composites and historical data

    DEFF Research Database (Denmark)

    Lilholt, Hans; Madsen, Bo

    2013-01-01

    … guidance to the optimal combination of fibre content, matrix content and porosity content, in order to achieve the best obtainable properties. Several composite materials systems have been shown to be handleable with this model. An extensive series of experimental data for the system of cellulose fibres … and polymer (resin) was produced in 1942-1944, and these data have been (re-)analysed by the volumetric composition model, and the property values for density, stiffness and strength have been evaluated. Good agreement has been obtained and some further observations have been extracted from the analysis.

  1. Magnetic volumetric hologram memory with magnetic garnet.

    Science.gov (United States)

    Nakamura, Yuichi; Takagi, Hiroyuki; Lim, Pang Boey; Inoue, Mitsuteru

    2014-06-30

    Holographic memory is a promising next-generation optical memory that offers a higher recording density and a higher transfer rate than other types of memory. In holographic memory, magnetic garnet films can serve as rewritable holographic media by use of the magneto-optical effect. We have now demonstrated for the first time that a magnetic hologram can be recorded volumetrically in a ferromagnetic garnet film and that the signal image can be reconstructed from it. In addition, the multiplicity of the magnetic hologram was also confirmed: the image could be reconstructed from a spot overlapped by other spots.

  2. Volumetric Concentration Maximum of Cohesive Sediment in Waters: A Numerical Study

    Directory of Open Access Journals (Sweden)

    Jisun Byun

    2014-12-01

    Full Text Available Cohesive sediment has different characteristics from non-cohesive sediment. The density and size of a cohesive sediment aggregate (a so-called floc) change continuously through the flocculation process. The variation of floc size and density can cause a change of volumetric concentration under the condition of constant mass concentration. This study investigates how the volumetric concentration is affected by different conditions such as flow velocity, water depth, and sediment suspension. A previously verified, one-dimensional vertical numerical model is utilized here. The flocculation process is also considered through a floc-growth-type flocculation model. Idealized conditions are assumed in this study for the numerical experiments. The simulation results show that the volumetric concentration profile of cohesive sediment differs from the Rouse profile. The volumetric concentration decreases near the bed, showing an elevated maximum, in the cases of both current and oscillatory flow. The density and size of flocs show minimum and maximum values, respectively, near the elevation of the volumetric concentration maximum. This study also shows that the flow velocity and the critical shear stress have significant effects on the elevated maximum of volumetric concentration. As mechanisms of the elevated maximum, strong turbulence intensity and increased mass concentration are considered because they enhance the flocculation process. This study uses numerical experiments; to the best of our knowledge, no laboratory or field experiments on the elevated maximum have been carried out to date. It is of great necessity to conduct well-controlled laboratory experiments in the near future.
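
    For comparison, the classical Rouse profile referenced above can be written directly; a minimal sketch with hypothetical flow parameters (the study's point being that the volumetric concentration of flocculating sediment departs from this shape):

```python
import numpy as np

def rouse_profile(z, h, a, c_a, w_s, u_star, kappa=0.41):
    """Classical Rouse profile for suspended mass concentration:
    C(z) = C_a * [((h - z)/z) * (a/(h - a))]**P, P = w_s / (kappa * u_star),
    with reference concentration c_a at reference height a above the bed."""
    p = w_s / (kappa * u_star)
    return c_a * (((h - z) / z) * (a / (h - a))) ** p

z = np.linspace(0.05, 4.95, 50)  # elevations above the bed (m), hypothetical
c = rouse_profile(z, h=5.0, a=0.05, c_a=1.0, w_s=1e-3, u_star=0.05)
print(c[:3])  # monotonically decreasing away from the bed
```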

  3. Original Dataset - dbQSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data name: Original Dataset. DOI: 10.18908/lsdba.nbdc00042-... (dbQSNP, LSDB Archive)

  4. Reliability of brain volume measurements: a test-retest dataset.

    Science.gov (United States)

    Maclaren, Julian; Han, Zhaoying; Vos, Sjoerd B; Fischbein, Nancy; Bammer, Roland

    2014-01-01

    Evaluation of neurodegenerative disease progression may be assisted by quantification of the volume of structures in the human brain using magnetic resonance imaging (MRI). Automated segmentation software has improved the feasibility of this approach, but often the reliability of measurements is uncertain. We have established a unique dataset to assess the repeatability of brain segmentation and analysis methods. We acquired 120 T1-weighted volumes from 3 subjects (40 volumes/subject) in 20 sessions spanning 31 days, using the protocol recommended by the Alzheimer's Disease Neuroimaging Initiative (ADNI). Each subject was scanned twice within each session, with repositioning between the two scans, allowing determination of test-retest reliability both within a single session (intra-session) and from day to day (inter-session). To demonstrate the application of the dataset, all 3D volumes were processed using FreeSurfer v5.1. The coefficient of variation of volumetric measurements was between 1.6% (caudate) and 6.1% (thalamus). Inter-session variability exceeded intra-session variability for lateral ventricle volume (P<0.0001), indicating that ventricle volume in the subjects varied between days.
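
    The reported coefficient of variation is simply the standard deviation of repeated volume measurements relative to their mean. A minimal sketch with hypothetical repeated measurements:

```python
import numpy as np

def coefficient_of_variation(volumes):
    """Test-retest variability of repeated segmentations of one subject,
    expressed as a percentage of the mean structure volume."""
    volumes = np.asarray(volumes, dtype=float)
    return 100.0 * volumes.std(ddof=1) / volumes.mean()

# Hypothetical repeated caudate volumes (mm^3) from one subject.
caudate = [3510, 3460, 3555, 3490, 3530, 3470]
print(f"CV = {coefficient_of_variation(caudate):.1f} %")
```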

  5. Discovery and Reuse of Open Datasets: An Exploratory Study

    Directory of Open Access Journals (Sweden)

    Sara

    2016-07-01

    Full Text Available Objective: This article analyzes twenty cited or downloaded datasets and the repositories that house them, in order to produce insights that can be used by academic libraries to encourage discovery and reuse of research data in institutional repositories. Methods: Using Thomson Reuters’ Data Citation Index and repository download statistics, we identified twenty cited/downloaded datasets. We documented the characteristics of the cited/downloaded datasets and their corresponding repositories in a self-designed rubric. The rubric includes six major categories: basic information; funding agency and journal information; linking and sharing; factors to encourage reuse; repository characteristics; and data description. Results: Our small-scale study suggests that cited/downloaded datasets generally comply with basic recommendations for facilitating reuse: data are documented well; formatted for use with a variety of software; and shared in established, open access repositories. Three significant factors also appear to contribute to dataset discovery: publishing in discipline-specific repositories; indexing in more than one location on the web; and using persistent identifiers. The cited/downloaded datasets in our analysis came from a few specific disciplines, and tended to be funded by agencies with data publication mandates. Conclusions: The results of this exploratory research provide insights that can inform academic librarians as they work to encourage discovery and reuse of institutional datasets. Our analysis also suggests areas in which academic librarians can target open data advocacy in their communities in order to begin to build open data success stories that will fuel future advocacy efforts.

  6. PROVIDING GEOGRAPHIC DATASETS AS LINKED DATA IN SDI

    Directory of Open Access Journals (Sweden)

    E. Hietanen

    2016-06-01

    Full Text Available In this study, a prototype service to provide data from a Web Feature Service (WFS) as linked data is implemented. First, persistent and unique Uniform Resource Identifiers (URIs) are created for all spatial objects in the dataset. The objects are available from those URIs in the Resource Description Framework (RDF) data format. Next, a Web Ontology Language (OWL) ontology is created to describe the dataset information content using the Open Geospatial Consortium’s (OGC) GeoSPARQL vocabulary. The existing data model is modified in order to take the linked data principles into account. The implemented service produces an HTTP response dynamically. The data for the response is first fetched from the existing WFS. Then the Geography Markup Language (GML) output of the WFS is transformed on-the-fly to the RDF format. Content negotiation is used to serve the data in different RDF serialization formats. This solution facilitates the use of a dataset in different applications without replicating the whole dataset. In addition, individual spatial objects in the dataset can be referred to with URIs. Furthermore, the needed information content of the objects can easily be extracted from the RDF serializations available from those URIs. A solution for linking data objects to the dataset URI is also introduced, using the Vocabulary of Interlinked Datasets (VoID). The dataset is divided into subsets and each subset is given its own persistent and unique URI. This enables the whole dataset to be explored with a web browser and all individual objects to be indexed by search engines.
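
    A minimal sketch of the VoID linking step using Python's rdflib (the namespace URIs below are placeholders; a real deployment would mint persistent URIs under its own domain):

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

# Placeholder namespaces; a real service would mint persistent URIs
# under its own domain for each WFS feature and dataset subset.
VOID = Namespace("http://rdfs.org/ns/void#")
EX = Namespace("http://example.org/geo/")

g = Graph()
g.bind("void", VOID)

dataset = EX["dataset/roads"]
subset = EX["dataset/roads/part-1"]
feature = EX["feature/road-42"]

g.add((dataset, RDF.type, VOID.Dataset))
g.add((subset, RDF.type, VOID.Dataset))
g.add((dataset, VOID.subset, subset))      # dataset divided into subsets
g.add((feature, VOID.inDataset, subset))   # each object links to its subset
g.add((feature, RDF.type, EX.Road))

print(g.serialize(format="turtle"))
```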

  7. Developing a Data-Set for Stereopsis

    Directory of Open Access Journals (Sweden)

    D.W Hunter

    2014-08-01

    Full Text Available Current research on binocular stereopsis in humans and non-human primates has been limited by a lack of available data-sets. Current data-sets fall into two categories: stereo-image sets with vergence but no ranging information (Hibbard, 2008, Vision Research, 48(12), 1427-1439) or combinations of depth information with binocular images and video taken from cameras in fixed fronto-parallel configurations exhibiting neither vergence nor focus effects (Hirschmuller & Scharstein, 2007, IEEE Conf. Computer Vision and Pattern Recognition). The techniques for generating depth information are also imperfect. Depth information is normally inaccurate or simply missing near edges and on partially occluded surfaces. For many areas of vision research these are the most interesting parts of the image (Goutcher, Hunter & Hibbard, 2013, i-Perception, 4(7), 484; Scarfe & Hibbard, 2013, Vision Research). Using state-of-the-art open-source ray-tracing software (PBRT) as a back-end, our intention is to release a set of tools that will allow researchers in this field to generate artificial binocular stereoscopic data-sets. Although not as realistic as photographs, computer-generated images have significant advantages in terms of control over the final output, and ground-truth information about scene depth is easily calculated at all points in the scene, even in partially occluded areas. While individual researchers have been developing similar stimuli by hand for many decades, we hope that our software will greatly reduce the time and difficulty of creating naturalistic binocular stimuli. Our intention in making this presentation is to elicit feedback from the vision community about what sort of features would be desirable in such software.

  8. Wi-Fi Crowdsourced Fingerprinting Dataset for Indoor Positioning

    Directory of Open Access Journals (Sweden)

    Elena Simona Lohan

    2017-10-01

    Full Text Available Benchmark open-source Wi-Fi fingerprinting datasets for indoor positioning studies are still hard to find in the current literature and existing public repositories. This is unlike other research fields, such as image processing, where benchmark test images such as the Lenna image or the Face Recognition Technology (FERET) database exist, or machine learning, where huge datasets are available, for example, at the University of California Irvine (UCI) Machine Learning Repository. It is the purpose of this paper to present a new openly available Wi-Fi fingerprint dataset, comprised of 4648 fingerprints collected with 21 devices in a university building in Tampere, Finland, and to present some benchmark indoor positioning results using these data. The datasets and the benchmarking software are distributed under the open-source MIT license and can be found in the EU Zenodo repository.
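
    A common baseline on such fingerprint data (assumed here as an illustration, not the paper's benchmark method) is k-nearest-neighbour positioning on received-signal-strength vectors:

```python
import numpy as np

def knn_position(rss_query, rss_db, positions, k=3):
    """Baseline fingerprinting: average the known positions of the k
    database fingerprints whose RSS vectors are closest (Euclidean) to
    the query. Missing access points are assumed filled with a floor
    value such as -100 dBm in both query and database."""
    d = np.linalg.norm(rss_db - rss_query, axis=1)
    nearest = np.argsort(d)[:k]
    return positions[nearest].mean(axis=0)

# Hypothetical database: 4 fingerprints over 5 access points.
rss_db = np.array([[-40, -70, -90, -100, -60],
                   [-45, -65, -85, -100, -55],
                   [-90, -50, -40, -60, -100],
                   [-85, -55, -45, -65, -100]], dtype=float)
positions = np.array([[0.0, 0.0], [2.0, 1.0], [20.0, 5.0], [22.0, 6.0]])
print(knn_position(np.array([-42, -68, -88, -100, -58.0]), rss_db, positions))
```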

  9. The Flora Mycologica Iberica Project fungi occurrence dataset

    Directory of Open Access Journals (Sweden)

    Francisco Pando

    2016-09-01

    Full Text Available The dataset contains detailed distribution information on several fungal groups. The information has been revised, and in many cases compiled, by expert mycologists working on the monographs for the Flora Mycologica Iberica Project (FMI). Records comprise both collection and observational data, obtained from a variety of sources including field work, herbaria, and the literature. The dataset contains 59,235 records, of which 21,393 are georeferenced. These correspond to 2,445 species, grouped in 18 classes. The geographical scope of the dataset is the Iberian Peninsula (continental Portugal and Spain, and Andorra) and the Balearic Islands. The complete dataset is available in Darwin Core Archive format via the Global Biodiversity Information Facility (GBIF).

  10. 2008 TIGER/Line Nationwide Dataset

    Data.gov (United States)

    California Department of Resources — This dataset contains a nationwide build of the 2008 TIGER/Line datasets from the US Census Bureau downloaded in April 2009. The TIGER/Line Shapefiles are an extract...

  11. VT Hydrography Dataset - High Resolution NHD

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The Vermont Hydrography Dataset (VHD) is compliant with the local resolution (also known as High Resolution) National Hydrography Dataset (NHD)...

  12. Disentangling volumetric and hydrational properties of proteins.

    Science.gov (United States)

    Voloshin, Vladimir P; Medvedev, Nikolai N; Smolin, Nikolai; Geiger, Alfons; Winter, Roland

    2015-02-05

    We used molecular dynamics simulations of a typical monomeric protein, SNase, in combination with Voronoi-Delaunay tessellation to study and analyze the temperature dependence of the apparent volume, Vapp, of the solute. We show that the void volume, VB, created in the boundary region between solute and solvent, determines the temperature dependence of Vapp to a major extent. The less pronounced but still significant temperature dependence of the molecular volume of the solute, VM, is essentially the result of the expansivity of its internal voids, as the van der Waals contribution to VM is practically independent of temperature. Results for polypeptides of different chemical nature feature a similar temperature behavior, suggesting that the boundary/hydration contribution seems to be a universal part of the temperature dependence of Vapp. The results presented here shine new light on the discussion surrounding the physical basis for understanding and decomposing the volumetric properties of proteins and biomolecules in general.

  13. All Photons Imaging Through Volumetric Scattering

    Science.gov (United States)

    Satat, Guy; Heshmat, Barmak; Raviv, Dan; Raskar, Ramesh

    2016-01-01

    Imaging through thick, highly scattering media (sample thickness ≫ mean free path) can enable broad applications in biomedical and industrial imaging as well as remote sensing. Here we propose a computational “All Photons Imaging” (API) framework that utilizes time-resolved measurement for imaging through thick volumetric scattering by using both early-arriving (non-scattered) and diffused photons. As opposed to other methods which aim to lock on to specific photons (coherent, ballistic, acoustically modulated, etc.), this framework aims to use all of the optical signal. Compared to conventional early-photon measurements for imaging through a 15 mm tissue phantom, our method shows a twofold improvement in spatial resolution (4 dB increase in peak SNR). This all-optical, calibration-free framework enables widefield imaging through thick turbid media, and opens new avenues in non-invasive testing, analysis, and diagnosis. PMID:27683065

  14. A Technique for Volumetric CSG Based on Morphology

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Christensen, Niels Jørgen

    2001-01-01

    In this paper, a new technique for volumetric CSG is presented. The technique requires the input volumes to correspond to solids which fulfill a voxelization suitability criterion. Assume the CSG operation is union. The volumetric union of two such volumes is defined in terms of the voxelization...
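
    As a generic illustration of voxel-based CSG (not the paper's morphology-based formulation), solids stored as signed distance fields on a common grid reduce union, intersection and difference to per-voxel min/max operations:

```python
import numpy as np

# Signed-distance representation of two solids on a common voxel grid
# (negative inside, positive outside); CSG then reduces to per-voxel
# min/max. The paper's voxelization suitability criterion is not
# implemented here.
n = 64
zz, yy, xx = np.mgrid[:n, :n, :n].astype(float)
sphere_a = np.sqrt((xx - 24)**2 + (yy - 32)**2 + (zz - 32)**2) - 14
sphere_b = np.sqrt((xx - 40)**2 + (yy - 32)**2 + (zz - 32)**2) - 14

union        = np.minimum(sphere_a, sphere_b)
intersection = np.maximum(sphere_a, sphere_b)
difference   = np.maximum(sphere_a, -sphere_b)   # A minus B
print(np.count_nonzero(union < 0))               # voxels inside the union
```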

  15. Active Semisupervised Clustering Algorithm with Label Propagation for Imbalanced and Multidensity Datasets

    Directory of Open Access Journals (Sweden)

    Mingwei Leng

    2013-01-01

    Full Text Available The accuracy of most existing semisupervised clustering algorithms, which are based on a small amount of labeled data, is low when dealing with multidensity and imbalanced datasets, and labeling data is quite expensive and time-consuming in many real-world applications. This paper focuses on active data selection and semisupervised clustering in multidensity and imbalanced datasets and proposes an active semisupervised clustering algorithm. The proposed algorithm uses an active mechanism for data selection to minimize the amount of labeled data, and it utilizes multiple thresholds to expand the labeled datasets on multidensity and imbalanced datasets. Three standard datasets and one synthetic dataset are used to demonstrate the proposed algorithm, and the experimental results show that the proposed semisupervised clustering algorithm has higher accuracy and more stable performance in comparison to other clustering and semisupervised clustering algorithms, especially when the datasets are multidensity and imbalanced.

  16. An Affinity Propagation Clustering Algorithm for Mixed Numeric and Categorical Datasets

    Directory of Open Access Journals (Sweden)

    Kang Zhang

    2014-01-01

    Full Text Available Clustering has been widely used in different fields of science, technology, social science, and so forth. In the real world, numeric as well as categorical features are usually used to describe data objects. Accordingly, many clustering methods can process datasets that are either numeric or categorical. Recently, algorithms that can handle mixed-data clustering problems have been developed. The affinity propagation (AP) algorithm is an exemplar-based clustering method which has demonstrated good performance on a wide variety of datasets. However, it has limitations in processing mixed datasets. In this paper, we propose a novel similarity measure for mixed-type datasets, and an adaptive AP clustering algorithm is proposed to cluster the mixed datasets. Several real-world datasets are studied to evaluate the performance of the proposed algorithm. Comparisons with other clustering algorithms demonstrate that the proposed method works well not only on mixed datasets but also on pure numeric and categorical datasets.
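
    A minimal sketch of the overall recipe, with a deliberately simple mixed similarity (not the paper's measure) fed to scikit-learn's affinity propagation via a precomputed similarity matrix:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def mixed_similarity(a, b, num_idx, cat_idx, gamma=1.0):
    """Toy mixed-type similarity: negative squared Euclidean distance on
    numeric fields plus a weighted count of matching categorical fields."""
    num = -np.sum((a[num_idx] - b[num_idx]) ** 2)
    cat = gamma * np.sum(a[cat_idx] == b[cat_idx])
    return num + cat

# Hypothetical records: two numeric columns and one integer-coded category.
X = np.array([[1.0, 2.0, 0], [1.1, 1.9, 0], [5.0, 6.0, 1], [5.2, 5.8, 1]])
n = len(X)
S = np.array([[mixed_similarity(X[i], X[j], [0, 1], [2]) for j in range(n)]
              for i in range(n)])
ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(S)
print(ap.labels_)  # two clusters expected
```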

  17. Optical Addressing of Multi-Colour Photochromic Material Mixture for Volumetric Display

    Science.gov (United States)

    Hirayama, Ryuji; Shiraki, Atsushi; Naruse, Makoto; Nakamura, Shinichiro; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2016-08-01

    This is the first study to demonstrate that colour transformations in the volume of a photochromic material (PM) are induced at the intersections of two control light channels, one controlling PM colouration and the other controlling decolouration. Thus, PM colouration is induced by position selectivity, and therefore, a dynamic volumetric display may be realised using these two control lights. Moreover, a mixture of multiple PM types with different absorption properties exhibits different colours depending on the control light spectrum. Particularly, the spectrum management of the control light allows colour-selective colouration besides position selectivity. Therefore, a PM-based, full-colour volumetric display is realised. We experimentally construct a mixture of two PM types and validate the operating principles of such a volumetric display system. Our system is constructed simply by mixing multiple PM types; therefore, the display hardware structure is extremely simple, and the minimum size of a volume element can be as small as the size of a molecule. Volumetric displays can provide natural three-dimensional (3D) perception; therefore, the potential uses of our system include high-definition 3D visualisation for medical applications, architectural design, human–computer interactions, advertising, and entertainment.

  18. Mathematical Model Defining Volumetric Losses of Hydraulic Oil Compression in a Variable Capacity Displacement Pump

    Directory of Open Access Journals (Sweden)

    Paszota Zygmunt

    2015-01-01

    Full Text Available The objective of the work is to develop the capability of evaluating the volumetric losses of hydraulic oil compression in the working chambers of a high-pressure variable-capacity displacement pump. The volumetric losses of oil compression must be determined as functions of the same parameters on which the volumetric losses due to leakage, resulting from the quality of the design solution of the pump, are evaluated, and also as a function of the oil aeration coefficient ε. A mathematical model has been developed describing the hydraulic oil compressibility coefficient k_lc|Δp_Pi;ε;ν as a relation to the ratio Δp_Pi/p_n of the indicated increase Δp_Pi of pressure in the working chambers to the nominal pressure p_n, to the pump capacity coefficient b_P, to the oil aeration coefficient ε, and to the ratio ν/ν_n of the oil viscosity ν to the reference viscosity ν_n. A mathematical model is presented of the volumetric losses q_Pvc|Δp_Pi;b_P;ε;ν of hydraulic oil compression in the pump working chambers in a form allowing its use in the model of power losses and energy efficiency.

  19. A Dosimetric Study of Using Fixed-Jaw Volumetric Modulated Arc Therapy for the Treatment of Nasopharyngeal Carcinoma with Cervical Lymph Node Metastasis.

    Directory of Open Access Journals (Sweden)

    Wu-Zhe Zhang

    Full Text Available To study the dosimetric difference between fixed-jaw volumetric modulated arc therapy (FJ-VMAT) and large-field volumetric modulated arc therapy (LF-VMAT) for nasopharyngeal carcinoma (NPC) with cervical lymph node metastasis. Computed tomography (CT) datasets of 10 NPC patients undergoing chemoradiotherapy were used to generate LF-VMAT and FJ-VMAT plans in the Eclipse version 10.0 treatment planning system. These two kinds of plans were then compared with respect to planning-target-volume (PTV) coverage, conformity index (CI), homogeneity index (HI), organ-at-risk sparing, monitor units (MUs) and treatment time (TT). The FJ-VMAT plans provided lower D2% of PGTVnd (PTV of lymph nodes), PTV1 (high-risk PTV) and PTV2 (low-risk PTV) than did the LF-VMAT plans, whereas no significant differences were observed in PGTVnx (PTV of the primary nasopharyngeal tumor). The FJ-VMAT plans provided lower doses delivered to the planning organ-at-risk volumes (PRVs) of both the brainstem and spinal cord, both parotid glands and normal tissue than did the LF-VMAT plans, whereas no significant differences were observed with respect to the oral cavity and larynx. The MUs of the FJ-VMAT plans (683 ± 87) were increased by 22% ± 12% compared with the LF-VMAT plans (559 ± 62). In terms of the TT, no significant difference was found between the two kinds of plans. FJ-VMAT was similar or slightly superior to LF-VMAT in terms of PTV coverage and was significantly superior in terms of OAR sparing, at the expense of increased MUs.

  20. Controlled Vocabulary Standards for Anthropological Datasets

    Directory of Open Access Journals (Sweden)

    Celia Emmelhainz

    2014-07-01

    Full Text Available This article seeks to outline the use of controlled vocabulary standards for qualitative datasets in cultural anthropology, which are increasingly held in researcher-accessible government repositories and online digital libraries. As a humanistic science that can address almost any aspect of life with meaning to humans, cultural anthropology has proven difficult for librarians and archivists to effectively organize. Yet as anthropology moves onto the web, the challenge of organizing and curating information within the field only grows. In considering the subject classification of digital information in anthropology, I ask how we might best use controlled vocabularies for indexing digital anthropological data. After a brief discussion of likely concerns, I outline thesauri which may potentially be used for vocabulary control in metadata fields for language, location, culture, researcher, and subject. The article concludes with recommendations for those existing thesauri most suitable to provide a controlled vocabulary for describing digital objects in the anthropological world.

  1. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2012-01-01

      Introduction The first part of the year presented an important test for the new Physics Performance and Dataset (PPD) group (cf. its mandate: http://cern.ch/go/8f77). The activity was focused on the validation of the new releases meant for the Monte Carlo (MC) production and the data-processing in 2012 (CMSSW 50X and 52X), and on the preparation of the 2012 operations. In view of the Chamonix meeting, the PPD and physics groups worked to understand the impact of the higher pile-up scenario on some of the flagship Higgs analyses to better quantify the impact of the high luminosity on the CMS physics potential. A task force is working on the optimisation of the reconstruction algorithms and on the code to cope with the performance requirements imposed by the higher event occupancy as foreseen for 2012. Concerning the preparation for the analysis of the new data, a new MC production has been prepared. The new samples, simulated at 8 TeV, are already being produced and the digitisation and recons...

  2. Effects of Different Reconstruction Parameters on CT Volumetric Measurement of Pulmonary Nodules

    Directory of Open Access Journals (Sweden)

    Rongrong YANG

    2012-02-01

    Full Text Available Background and objective: It has been proven that volumetric measurements can detect subtle changes in small pulmonary nodules across serial CT scans, and thus may play an important role in the follow-up of indeterminate pulmonary nodules and in differentiating malignant from benign nodules. The current study aims to evaluate the effects of different reconstruction parameters on the volumetric measurement of pulmonary nodules in chest CT scans. Methods: Thirty subjects who underwent chest CT scans because of indeterminate pulmonary nodules at the General Hospital of Tianjin Medical University from December 2009 to August 2011 were retrospectively analyzed. A total of 52 pulmonary nodules were included, and all CT data were reconstructed using three reconstruction algorithms and three slice thicknesses. The volumetric measurements of the nodules were performed using the advanced lung analysis (ALA) software. The effects of the reconstruction algorithms, slice thicknesses, and nodule diameters on the volumetric measurements were assessed using multivariate analysis of variance for repeated measures, correlation analysis, and the Bland-Altman method. Results: The reconstruction algorithms (F=13.6, P<0.001) and slice thicknesses (F=4.4, P=0.02) had significant effects on the measured volume of pulmonary nodules. In addition, the coefficients of variation of the nine measurements were inversely related to nodule diameter (r=-0.814, P<0.001). The volume measured at the 2.5 mm slice thickness had poor agreement with the volumes measured at 1.25 mm and 0.625 mm, respectively. Moreover, the best agreement was achieved between the slice thicknesses of 1.25 mm and 0.625 mm using the bone algorithm. Conclusion: Reconstruction algorithms and slice thicknesses have significant impacts on the volumetric measurements of lung nodules, especially for small nodules. Therefore, the reconstruction settings in serial CT scans should be consistent in the follow-up.

  3. Web based hybrid volumetric visualisation of urban GIS data. Integration of 4D Temperature and Wind Fields with LoD-2 CityGML models

    Science.gov (United States)

    Congote, J.; Moreno, A.; Kabongo, L.; Pérez, J.-L.; San-José, R.; Ruiz, O.

    2012-10-01

    City model visualisation (buildings, structures and volumetric information) is an important task in Computer Graphics and Urban Planning. The different formats and data sources involved in the visualisation make the development of applications a big challenge. We present a homogeneous web visualisation framework using X3DOM and MEDX3DOM for the visualisation of these urban objects. We present an integration of different declarative data sources, enabling the utilisation of advanced visualisation algorithms to render the models. It has been tested with a city model composed of buildings from the Madrid University Campus, volumetric datasets coming from air quality models, and 2D wind-layer datasets. Results show that the visualisation of all the urban models can be performed in real time on the Web. An HTML5 web interface is presented to the users, enabling real-time modification of visualisation parameters.

  4. Development of Mathematical Models for Detecting Micron Scale Volumetric Defects in Thin Film Coatings

    Directory of Open Access Journals (Sweden)

    Gaigals G.

    2016-04-01

    Full Text Available The focus of the present research is to investigate possibilities of volumetric defect detection in thin film coatings on glass substrates by means of high definition imaging with no complex optical systems, such as lenses, and to determine development and construction feasibility of a defectoscope employing the investigated methods. Numerical simulations were used to test the proposed methods. Three theoretical models providing various degrees of accuracy and feasibility were studied.

  5. Developing an improved soil moisture dataset by blending passive and active microwave satellite-based retrievals

    Directory of Open Access Journals (Sweden)

    Y. Y. Liu

    2011-02-01

    Full Text Available Combining information derived from satellite-based passive and active microwave sensors has the potential to offer improved estimates of surface soil moisture at the global scale. We develop and evaluate a methodology that takes advantage of the retrieval characteristics of passive (AMSR-E) and active (ASCAT) microwave satellite estimates to produce an improved soil moisture product. First, volumetric soil water content (m3 m−3) from AMSR-E and degree of saturation (%) from ASCAT are rescaled against a reference land surface model dataset using a cumulative distribution function matching approach. While this imposes any bias of the reference on the rescaled satellite products, it adjusts them to the same range and preserves the dynamics of the original satellite-based products. Comparison with in situ measurements demonstrates that where the correlation coefficient between rescaled AMSR-E and ASCAT is greater than 0.65 ("transitional regions"), merging the different satellite products increases the number of observations while minimally changing the accuracy of the soil moisture retrievals. These transitional regions also delineate the boundary between sparsely and moderately vegetated regions, where rescaled AMSR-E and ASCAT, respectively, are used for the merged product. The merged product therefore carries the advantages of better overall spatial coverage and an increased number of observations, particularly in the transitional regions. The combination method developed has the potential to be applied to existing microwave satellites as well as to new missions. Accordingly, a long-term global soil moisture dataset can be developed and extended, enhancing basic understanding of the role of soil moisture in the water, energy and carbon cycles.
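
    The rescaling step can be illustrated with a piecewise-linear quantile mapping; a minimal sketch with synthetic data standing in for ASCAT retrievals and the reference model:

```python
import numpy as np

def cdf_match(src, ref, n_quantiles=101):
    """Rescale `src` so its empirical CDF matches that of `ref` via
    piecewise-linear quantile mapping. The operational scheme in the
    paper works per grid cell; this is a single-sample illustration."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    return np.interp(src, np.quantile(src, q), np.quantile(ref, q))

rng = np.random.default_rng(2)
ascat = rng.uniform(0, 100, 1000)         # degree of saturation (%), synthetic
model = rng.beta(2, 5, 1000) * 0.5        # volumetric soil moisture (m3/m3)
ascat_rescaled = cdf_match(ascat, model)  # model-like range and distribution
print(ascat_rescaled.min(), ascat_rescaled.max())
```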

  6. Iterative reconstruction of volumetric particle distribution

    Science.gov (United States)

    Wieneke, Bernhard

    2013-02-01

    For tracking the motion of illuminated particles in space and time several volumetric flow measurement techniques are available like 3D-particle tracking velocimetry (3D-PTV) recording images from typically three to four viewing directions. For higher seeding densities and the same experimental setup, tomographic PIV (Tomo-PIV) reconstructs voxel intensities using an iterative tomographic reconstruction algorithm (e.g. multiplicative algebraic reconstruction technique, MART) followed by cross-correlation of sub-volumes computing instantaneous 3D flow fields on a regular grid. A novel hybrid algorithm is proposed here that similar to MART iteratively reconstructs 3D-particle locations by comparing the recorded images with the projections calculated from the particle distribution in the volume. But like 3D-PTV, particles are represented by 3D-positions instead of voxel-based intensity blobs as in MART. Detailed knowledge of the optical transfer function and the particle image shape is mandatory, which may differ for different positions in the volume and for each camera. Using synthetic data it is shown that this method is capable of reconstructing densely seeded flows up to about 0.05 ppp with similar accuracy as Tomo-PIV. Finally the method is validated with experimental data.

  7. River network routing on the NHDPlus dataset

    OpenAIRE

    David, Cédric; Maidment, David; Niu, Guo-Yue; Yang, Zong-Liang; Habets, Florence; Eijkhout, Victor

    2011-01-01

    International audience; The mapped rivers and streams of the contiguous United States are available in a geographic information system (GIS) dataset called National Hydrography Dataset Plus (NHDPlus). This hydrographic dataset has about 3 million river and water body reaches along with information on how they are connected into networks. The U.S. Geological Survey (USGS) National Water Information System (NWIS) provides streamflow observations at about 20 thousand gauges located on the NHDP...

  9. Volumetric modulated arc therapy vs. c-IMRT for the treatment of upper thoracic esophageal cancer.

    Directory of Open Access Journals (Sweden)

    Wu-Zhe Zhang

    Full Text Available To compare plans using volumetric modulated arc therapy (VMAT) with conventional sliding-window intensity-modulated radiation therapy (c-IMRT) for the treatment of upper thoracic esophageal cancer (EC). CT datasets of 11 patients with upper thoracic EC were identified. Four plans were generated for each patient: c-IMRT with 5 fields (5F) and VMAT with a single arc (1A), two arcs (2A), or three arcs (3A). The prescribed dose was 64 Gy/32 F for the primary tumor (PTV64). The dose-volume histogram data, the number of monitor units (MUs) and the treatment time (TT) for the different plans were compared. All of the plans generated similar dose distributions for the PTVs and organs at risk (OARs), except that the 2A- and 3A-VMAT plans yielded a significantly higher conformity index (CI) than the c-IMRT plan. The CI of the PTV64 was improved by increasing the number of arcs in the VMAT plans. The maximum spinal cord dose and the planning risk volume of the spinal cord dose for the two techniques were similar. The 2A- and 3A-VMAT plans yielded lower mean lung doses and heart V50 values than the c-IMRT. The V20 and V30 for the lungs in all of the VMAT plans were lower than those in the c-IMRT plan, at the expense of increased V5, V10 and V13. The VMAT plans resulted in significant reductions in MUs and TT. The 2A-VMAT plan appeared to spare the lungs from moderate-dose irradiation most effectively of all the plans, at the expense of increasing the low-dose irradiation volume, and also significantly reduced the number of required MUs and the TT. The CI of the PTVs and the OAR sparing were improved by increasing the arc number from 1 to 2; however, no significant improvement was observed using 3A-VMAT, except for an increase in the TT.

  10. A new bed elevation dataset for Greenland

    Directory of Open Access Journals (Sweden)

    J. L. Bamber

    2013-03-01

    Full Text Available We present a new bed elevation dataset for Greenland derived from a combination of multiple airborne ice thickness surveys undertaken between the 1970s and 2012. Around 420 000 line kilometres of airborne data were used, with roughly 70% of this having been collected since the year 2000, when the last comprehensive compilation was undertaken. The airborne data were combined with satellite-derived elevations for non-glaciated terrain to produce a consistent bed digital elevation model (DEM) over the entire island, including across the glaciated/ice-free boundary. The DEM was extended to the continental margin with the aid of bathymetric data, primarily from a compilation for the Arctic. Ice thickness was determined, where an ice shelf exists, from a combination of surface elevation and radar soundings. The across-track spacing between flight lines warranted interpolation at 1 km postings for significant sectors of the ice sheet. Grids of ice surface elevation, error estimates for the DEM, ice thickness and data sampling density were also produced, alongside a mask of land/ocean/grounded ice/floating ice. Errors in bed elevation range from a minimum of ±10 m to about ±300 m, as a function of distance from an observation and local topographic variability. A comparison with the compilation published in 2001 highlights the improvement in resolution afforded by the new datasets, particularly along the ice sheet margin, where ice velocity is highest and changes in ice dynamics most marked. We estimate that the volume of ice included in our land-ice mask would raise mean sea level by 7.36 m, excluding any solid earth effects that would take place during ice sheet decay.
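
    The quoted sea-level equivalent can be sanity-checked with round numbers (assumed values, not the paper's exact masks or densities):

```python
# Assumed round numbers, not the paper's exact masks or densities.
ice_volume_km3 = 2.99e6        # ice volume contributing to sea level
rho_ice, rho_seawater = 917.0, 1027.0
ocean_area_km2 = 3.62e8

# Convert ice volume to water-equivalent volume, spread over the ocean.
sle_m = ice_volume_km3 * (rho_ice / rho_seawater) / ocean_area_km2 * 1000.0
print(f"sea-level equivalent ~ {sle_m:.2f} m")  # close to the quoted 7.36 m
```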

  11. Veterans Affairs Suicide Prevention Synthetic Dataset

    Data.gov (United States)

    Department of Veterans Affairs — The VA's Veteran Health Administration, in support of the Open Data Initiative, is providing the Veterans Affairs Suicide Prevention Synthetic Dataset (VASPSD). The...

  12. A global distributed basin morphometric dataset

    Science.gov (United States)

    Shen, Xinyi; Anagnostou, Emmanouil N.; Mei, Yiwen; Hong, Yang

    2017-01-01

    Basin morphometry is vital information for relating storms to hydrologic hazards such as landslides and floods. In this paper we present the first comprehensive global dataset of distributed basin morphometry at 30 arc-second resolution. The dataset includes nine prime morphometric variables; in addition, we present formulas for generating twenty-one additional morphometric variables based on combinations of the prime variables. The dataset can aid different applications, including studies of land-atmosphere interaction and modelling of floods and droughts for sustainable water management. The validity of the dataset has been consolidated by successfully reproducing Hack's law.

  13. Nanoparticle-organic pollutant interaction dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dataset presents concentrations of organic pollutants, such as polyaromatic hydrocarbon compounds, in water samples. Water samples of known volume and concentration...

  14. Veterans Affairs Suicide Prevention Synthetic Dataset Metadata

    Data.gov (United States)

    Department of Veterans Affairs — The VA's Veteran Health Administration, in support of the Open Data Initiative, is providing the Veterans Affairs Suicide Prevention Synthetic Dataset (VASPSD). The...

  15. BanglaLekha-Isolated: A multi-purpose comprehensive dataset of Handwritten Bangla Isolated characters

    Directory of Open Access Journals (Sweden)

    Mithun Biswas

    2017-06-01

    Full Text Available BanglaLekha-Isolated, a Bangla handwritten isolated character dataset, is presented in this article. This dataset contains 84 different characters comprising 50 Bangla basic characters, 10 Bangla numerals and 24 selected compound characters. 2000 handwriting samples for each of the 84 characters were collected, digitized and pre-processed. After discarding mistakes and scribbles, 166,105 handwritten character images were included in the final dataset. The dataset also includes labels indicating the age and the gender of the subjects from whom the samples were collected. This dataset could be used not only for optical handwriting recognition research but also to explore the influence of gender and age on handwriting. The dataset is publicly available at https://data.mendeley.com/datasets/hf6sf8zrkc/2.
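
    For recognition experiments, a character-image dataset like this one can be loaded in a few lines once it is organized on disk. A sketch assuming a hypothetical layout with one subfolder per character class containing PNG images; the path and layout are assumptions, not the published packaging:

        from pathlib import Path
        import numpy as np
        from PIL import Image

        root = Path("BanglaLekha-Isolated")      # hypothetical local path
        images, labels = [], []
        for class_dir in sorted(p for p in root.iterdir() if p.is_dir()):
            for png in class_dir.glob("*.png"):
                img = Image.open(png).convert("L").resize((32, 32))
                images.append(np.asarray(img, dtype=np.float32) / 255.0)
                labels.append(class_dir.name)

        X = np.stack(images)                     # (N, 32, 32) grayscale stack
        print(X.shape, "images,", len(set(labels)), "classes")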

  16. Hyperspectral image classification based on volumetric texture and dimensionality reduction

    Science.gov (United States)

    Su, Hongjun; Sheng, Yehua; Du, Peijun; Chen, Chen; Liu, Kui

    2015-06-01

    A novel approach using volumetric texture and reduced spectral features is presented for hyperspectral image classification. In this approach, the volumetric textural features are extracted by volumetric gray-level co-occurrence matrices (VGLCM). The spectral features are extracted by minimum estimated abundance covariance (MEAC) and linear prediction (LP)-based band selection, and by a semi-supervised k-means (SKM) clustering method with deletion of the worst cluster (SKMd) for band clustering. Moreover, four feature combination schemes were designed for hyperspectral image classification using spectral and textural features. It has been shown that the proposed method using VGLCM outperforms the gray-level co-occurrence matrices (GLCM) method, and the experimental results indicate that the combination of spectral information with volumetric textural features leads to improved classification performance in hyperspectral imagery.
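
    The per-band (2D) analogue of the VGLCM features can be sketched with scikit-image's gray-level co-occurrence matrix; the volumetric version additionally uses displacement vectors along the band dimension. A sketch for a single quantized band, on synthetic data; the displacements and properties used in the paper are not reproduced here:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        band = (np.random.rand(128, 128) * 32).astype(np.uint8)  # 32 gray levels

        # Co-occurrence at distance 1 for four in-plane directions
        glcm = graycomatrix(band, distances=[1],
                            angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                            levels=32, symmetric=True, normed=True)

        # Haralick-style texture properties, averaged over directions
        features = {p: graycoprops(glcm, p).mean()
                    for p in ("contrast", "homogeneity", "energy", "correlation")}
        print(features)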

  17. Designing remote web-based mechanical-volumetric flow meter ...

    African Journals Online (AJOL)

    ... remote web-based mechanical-volumetric flow meter reading systems based on ... damage and also provides the ability to control and manage consumption. ... existing infrastructure of the telecommunications is used in data transmission.

  18. Increasing the volumetric efficiency of Diesel engines by intake pipes

    Science.gov (United States)

    List, Hans

    1933-01-01

    Development of a method for calculating the volumetric efficiency of piston engines with intake pipes. Application of this method to the scavenging pumps of two-stroke-cycle engines with crankcase scavenging and to four-stroke-cycle engines. The utility of the method is demonstrated by volumetric-efficiency tests of the two-stroke-cycle engines with crankcase scavenging. Its practical application to the calculation of intake pipes is illustrated by example.

  19. Dataset of transcriptional landscape of B cell early activation

    Directory of Open Access Journals (Sweden)

    Alexander S. Garruss

    2015-09-01

    Full Text Available Signaling via B cell receptors (BCR) and Toll-like receptors (TLRs) results in activation of B cells with distinct physiological outcomes, but the transcriptional regulatory mechanisms that drive activation and distinguish these pathways remain unknown. At early time points after BCR and TLR ligand exposure, 0.5 and 2 h, RNA-seq was performed, allowing observations on rapid transcriptional changes. At 2 h, ChIP-seq was performed to allow observations on important regulatory mechanisms potentially driving transcriptional change. The dataset includes RNA-seq, ChIP-seq of control (input), RNA Pol II, H3K4me3 and H3K27me3, and a separate RNA-seq for miRNA expression, which can be found at Gene Expression Omnibus Dataset GSE61608. Here, we provide details on the experimental and analysis methods used to obtain and analyze this dataset and to examine the transcriptional landscape of B cell early activation.

  20. Robust Machine Learning Applied to Terascale Astronomical Datasets

    CERN Document Server

    Ball, Nicholas M; Myers, Adam D

    2008-01-01

    We present recent results from the LCDM (Laboratory for Cosmological Data Mining; http://lcdm.astro.uiuc.edu) collaboration between UIUC Astronomy and NCSA to deploy supercomputing cluster resources and machine learning algorithms for the mining of terascale astronomical datasets. This is a novel application in the field of astronomy, because we are using such resources for data mining, and not just performing simulations. Via a modified implementation of the NCSA cyberenvironment Data-to-Knowledge, we are able to provide improved classifications for over 100 million stars and galaxies in the Sloan Digital Sky Survey, improved distance measures, and a full exploitation of the simple but powerful k-nearest neighbor algorithm. A driving principle of this work is that our methods should be extensible from current terascale datasets to upcoming petascale datasets and beyond. We discuss issues encountered to date, and further issues for the transition to petascale. In particular, disk I/O will become a major limit...
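
    The k-nearest-neighbor classification mentioned above is easy to prototype at small scale; the hard part addressed by the collaboration is scaling the neighbor search to hundreds of millions of objects. A toy scikit-learn sketch on synthetic two-class "color" features, purely illustrative of the algorithm rather than the LCDM pipeline:

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        # Synthetic photometric features for two classes (e.g., star vs. galaxy)
        stars = rng.normal([0.3, 0.1, 0.0], 0.15, size=(5000, 3))
        galaxies = rng.normal([0.8, 0.5, 0.3], 0.20, size=(5000, 3))
        X = np.vstack([stars, galaxies])
        y = np.repeat([0, 1], 5000)

        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
        knn = KNeighborsClassifier(n_neighbors=15).fit(Xtr, ytr)
        print("holdout accuracy: %.3f" % knn.score(Xte, yte))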

  1. Serial volumetric registration of pulmonary CT studies

    Science.gov (United States)

    Silva, José Silvestre; Silva, Augusto; Sousa Santos, Beatriz

    2008-03-01

    Detailed morphological analysis of pulmonary structures and tissue, provided by modern CT scanners, is of utmost importance, as in the case of oncological applications, for diagnosis, treatment, and follow-up. In such cases, a patient may go through several tomographic studies over a period of time, originating volumetric sets of image data that must be appropriately registered in order to track suspicious radiological findings. The structures or regions of interest may change their position or shape in CT exams acquired at different moments, due to postural, physiologic or pathologic changes, so the exams should be registered before any follow-up information can be extracted. Postural mismatching over time is practically impossible to avoid, and is particularly evident when imaging is performed at the limiting spatial resolution. In this paper, we propose a method for intra-patient registration of pulmonary CT studies to assist in the management of oncological pathology. Our method takes advantage of prior segmentation work. In the first step, pulmonary segmentation is performed and the trachea and main bronchi are identified. Then, the registration method proceeds with a longitudinal alignment based on morphological features of the lungs, such as the position of the carina, the pulmonary areas, the centers of mass and the pulmonary trans-axial principal axis. The final step corresponds to the trans-axial registration of the corresponding pulmonary masked regions. This is accomplished by a pairwise sectional registration process driven by an iterative search of the affine transformation parameters leading to optimal similarity metrics. Results with several cases of intra-patient, intra-modality registration, up to 7 time points, show that this method provides the accurate registration needed for quantitative tracking of lesions and the development of image fusion strategies that may effectively assist the follow-up process.
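
    The final trans-axial step, an iterative search of affine parameters against a similarity metric, matches the classic intensity-based registration loop. A sketch of one possible implementation with SimpleITK, not the authors' code; the file names are placeholders:

        import SimpleITK as sitk

        fixed = sitk.ReadImage("ct_followup.nii.gz", sitk.sitkFloat32)   # placeholder
        moving = sitk.ReadImage("ct_baseline.nii.gz", sitk.sitkFloat32)  # placeholder

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                                     minStep=1e-4,
                                                     numberOfIterations=200)
        init = sitk.CenteredTransformInitializer(
            fixed, moving, sitk.AffineTransform(3),
            sitk.CenteredTransformInitializerFilter.GEOMETRY)
        reg.SetInitialTransform(init, inPlace=False)
        reg.SetInterpolator(sitk.sitkLinear)

        transform = reg.Execute(fixed, moving)                # optimized affine
        resampled = sitk.Resample(moving, fixed, transform,   # align to fixed grid
                                  sitk.sitkLinear, 0.0)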

  2. Volumetric optoacoustic monitoring of endovenous laser treatments

    Science.gov (United States)

    Fehm, Thomas F.; Deán-Ben, Xosé L.; Schaur, Peter; Sroka, Ronald; Razansky, Daniel

    2016-03-01

    Chronic venous insufficiency (CVI) is one of the most common medical conditions with reported prevalence estimates as high as 30% in the adult population. Although conservative management with compression therapy may improve the symptoms associated with CVI, healing often demands invasive procedures. Besides established surgical methods like vein stripping or bypassing, endovenous laser therapy (ELT) emerged as a promising novel treatment option during the last 15 years offering multiple advantages such as less pain and faster recovery. Much of the treatment success hereby depends on monitoring of the treatment progression using clinical imaging modalities such as Doppler ultrasound. The latter however do not provide sufficient contrast, spatial resolution and three-dimensional imaging capacity which is necessary for accurate online lesion assessment during treatment. As a consequence, incidence of recanalization, lack of vessel occlusion and collateral damage remains highly variable among patients. In this study, we examined the capacity of volumetric optoacoustic tomography (VOT) for real-time monitoring of ELT using an ex-vivo ox foot model. ELT was performed on subcutaneous veins while optoacoustic signals were acquired and reconstructed in real-time and at a spatial resolution in the order of 200μm. VOT images showed spatio-temporal maps of the lesion progression, characteristics of the vessel wall, and position of the ablation fiber's tip during the pull back. It was also possible to correlate the images with the temperature elevation measured in the area adjacent to the ablation spot. We conclude that VOT is a promising tool for providing online feedback during endovenous laser therapy.

  3. Treatment planning for volumetric modulated arc therapy

    Energy Technology Data Exchange (ETDEWEB)

    Bedford, James L. [Joint Department of Physics, Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, Downs Road, Sutton, Surrey SM2 5PT (United Kingdom)

    2009-11-15

    Purpose: Volumetric modulated arc therapy (VMAT) is a specific type of intensity-modulated radiation therapy (IMRT) in which the gantry speed, multileaf collimator (MLC) leaf position, and dose rate vary continuously during delivery. A treatment planning system for VMAT is presented. Methods: Arc control points are created uniformly throughout one or more arcs. An iterative least-squares algorithm is used to generate a fluence profile at every control point. The control points are then grouped and all of the control points in a given group are used to approximate the fluence profiles. A direct-aperture optimization is then used to improve the solution, taking into account the allowed range of leaf motion of the MLC. Dose is calculated using a fast convolution algorithm and the motion between control points is approximated by 100 interpolated dose calculation points. The method has been applied to five cases, consisting of lung, rectum, prostate and seminal vesicles, prostate and pelvic lymph nodes, and head and neck. The resulting plans have been compared with segmental (step-and-shoot) IMRT and delivered and verified on an Elekta Synergy to ensure practicality. Results: For the lung, prostate and seminal vesicles, and rectum cases, VMAT provides a plan of similar quality to segmental IMRT but with faster delivery by up to a factor of 4. For the prostate and pelvic nodes and head-and-neck cases, the critical structure doses are reduced with VMAT, both of these cases having a longer delivery time than IMRT. The plans in general verify successfully, although the agreement between planned and measured doses is not very close for the more complex cases, particularly the head-and-neck case. Conclusions: Depending upon the emphasis in the treatment planning, VMAT provides treatment plans which are higher in quality and/or faster to deliver than IMRT. The scheme described has been successfully introduced into clinical use.
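
    The iterative least-squares fluence step can be pictured as minimizing ||Dw - d||^2 over nonnegative beamlet weights w, where D maps beamlet weights to voxel doses. A toy projected-gradient sketch with a random influence matrix; the clinical optimizer's objectives and constraints are far richer:

        import numpy as np

        rng = np.random.default_rng(2)
        n_voxels, n_beamlets = 400, 60
        D = rng.random((n_voxels, n_beamlets)) * 0.1  # toy dose-influence matrix
        d = np.full(n_voxels, 2.0)                    # prescribed voxel doses (Gy)

        w = np.zeros(n_beamlets)
        step = 1.0 / np.linalg.norm(D, 2) ** 2        # safe gradient step size
        for _ in range(500):
            grad = D.T @ (D @ w - d)                  # gradient of 0.5*||Dw - d||^2
            w = np.maximum(w - step * grad, 0.0)      # project onto w >= 0

        print("residual RMS: %.4f Gy" % np.sqrt(np.mean((D @ w - d) ** 2)))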

  4. Volumetric Forest Change Detection Through VHR Satellite Imagery

    Science.gov (United States)

    Akca, Devrim; Stylianidis, Efstratios; Smagas, Konstantinos; Hofer, Martin; Poli, Daniela; Gruen, Armin; Sanchez Martin, Victor; Altan, Orhan; Walli, Andreas; Jimeno, Elisa; Garcia, Alejandro

    2016-06-01

    Quick and economical ways of detecting planimetric and volumetric changes of forest areas are in high demand. A research platform, called FORSAT (A satellite processing platform for high resolution forest assessment), was developed for the extraction of 3D geometric information from VHR (very-high resolution) imagery from satellite optical sensors and automatic change detection. This 3D forest information solution was developed during a Eurostars project. FORSAT includes two main units. The first one is dedicated to the geometric and radiometric processing of satellite optical imagery and 2D/3D information extraction. This includes: image radiometric pre-processing, image and ground point measurement, improvement of geometric sensor orientation, quasi-epipolar image generation for stereo measurements, digital surface model (DSM) extraction by using a precise and robust image matching approach specially designed for VHR satellite imagery, generation of orthoimages, and 3D measurements in single images using mono-plotting and in stereo images as well as triplets. FORSAT supports most of the VHR optical imagery commonly used for civil applications: IKONOS, OrbView-3, SPOT-5 HRS, SPOT-5 HRG, QuickBird, GeoEye-1, WorldView-1/2, Pléiades 1A/1B, SPOT 6/7, and sensors of similar type to be expected in the future. The second unit of FORSAT is dedicated to 3D surface comparison for change detection. It allows users to import digital elevation models (DEMs), align them using an advanced 3D surface matching approach and calculate the 3D differences and volume changes between epochs. To this end our 3D surface matching method LS3D is being used. FORSAT is a single source and flexible forest information solution with a very competitive price/quality ratio, allowing expert and non-expert remote sensing users to monitor forests in three and four dimensions from VHR optical imagery for many forest information needs. The capacity and benefits of FORSAT have been tested in

  5. Comparison of CORA and EN4 in-situ datasets validation methods, toward a better quality merged dataset.

    Science.gov (United States)

    Szekely, Tanguy; Killick, Rachel; Gourrion, Jerome; Reverdin, Gilles

    2017-04-01

    CORA and EN4 are both global delayed-time-mode validated in-situ ocean temperature and salinity datasets distributed by the Met Office (http://www.metoffice.gov.uk/) and Copernicus (www.marine.copernicus.eu). A large part of the profiles distributed by CORA and EN4 in recent years are Argo profiles from the Argo DAC, but profiles are also extracted from the World Ocean Database, and TESAC profiles from GTSPP. In the case of CORA, data coming from the EUROGOOS Regional Operational Observing Systems (ROOS) operated by European institutes not managed by National Data Centres, and other profile datasets provided by scientific sources, can also be found (sea mammal profiles from MEOP, XBT datasets from cruises, ...). (EN4 also takes data from the ASBO dataset to supplement observations in the Arctic.) The first advantage of this new merged product is to enhance the space and time coverage at global and European scales for the period from 1950 until a year before the current year. This product is updated once a year, and T&S gridded fields are also generated for the period from 1990 to year n-1. The enhancements compared to the previous CORA product will be presented. Although the profiles distributed by both datasets are mostly the same, the quality control procedures developed by the Met Office and Copernicus teams differ, sometimes leading to different quality control flags for the same profile. In 2016, a new study started that aims to compare both validation procedures in order to move towards a Copernicus Marine Service dataset with the best features of CORA and EN4 validation. A reference dataset composed of the full set of in-situ temperature and salinity measurements collected by Coriolis during 2015 is used. These measurements have been made with a wide range of instruments (XBTs, CTDs, Argo floats, instrumented sea mammals, ...), covering the global ocean. The reference dataset has been validated simultaneously by both teams. An exhaustive comparison of the

  6. The KUSC Classical Music Dataset for Audio Key Finding

    Directory of Open Access Journals (Sweden)

    Ching-Hua Chuan

    2014-08-01

    Full Text Available In this paper, we present a benchmark dataset based on the KUSC classical music collection and provide baseline key-finding comparison results. Audio key finding is a basic music information retrieval task; it forms an essential component of systems for music segmentation, similarity assessment, and mood detection. Due to copyright restrictions and a labor-intensive annotation process, audio key finding algorithms have only been evaluated using small proprietary datasets to date. To create a common base for systematic comparisons, we have constructed a dataset comprising more than 3,000 excerpts of classical music. The excerpts are made publicly accessible via commonly used acoustic features such as pitch-based spectrograms and chromagrams. We introduce a hybrid annotation scheme that combines the use of title keys with expert validation and correction of only the challenging cases. The expert musicians also provide ratings of key recognition difficulty. Other meta-data include instrumentation. As a demonstration of the use of the dataset, and to provide initial benchmark comparisons for evaluating new algorithms, we conduct a series of experiments reporting the key determination accuracy of four state-of-the-art algorithms. We further show the importance of considering factors such as estimated tuning frequency, key strength or confidence value, and key recognition difficulty in key finding. In the future, we plan to expand the dataset to include meta-data for other music information retrieval tasks.
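
    A baseline key finder of the kind benchmarked here correlates the excerpt's pitch-class (chroma) profile against the 24 rotated Krumhansl-Kessler key profiles. A self-contained sketch operating on a precomputed chroma vector; the vector below is illustrative:

        import numpy as np

        # Krumhansl-Kessler major/minor profiles, referenced to C
        MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                          2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
        MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                          2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
        NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

        def estimate_key(chroma):
            """Return (correlation, name) of the best-matching of the 24 keys."""
            best = None
            for tonic in range(12):
                for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
                    r = np.corrcoef(np.roll(profile, tonic), chroma)[0, 1]
                    if best is None or r > best[0]:
                        best = (r, NAMES[tonic] + " " + mode)
            return best

        # Illustrative chroma with energy concentrated on the C-major scale
        chroma = np.array([1.0, .1, .6, .1, .7, .5, .1, .9, .1, .5, .1, .4])
        print(estimate_key(chroma))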

  7. Spatial Accuracy Assessment and Integration of Global Land Cover Datasets

    Directory of Open Access Journals (Sweden)

    Nandin-Erdene Tsendbazar

    2015-11-01

    Full Text Available Along with the creation of new maps, current efforts for improving global land cover (GLC) maps focus on integrating maps by accounting for their relative merits, e.g., agreement amongst maps or map accuracy. Such integration efforts may benefit from the use of multiple GLC reference datasets. Using available reference datasets, this study assesses the spatial accuracy of recent GLC maps and compares methods for creating an improved land cover (LC) map. Spatial correspondence with the reference dataset was modeled for the Globcover-2009, Land Cover-CCI-2010, MODIS-2010 and Globeland30 maps for Africa. Using different scenarios for the input data used, five integration methods for an improved LC map were tested and cross-validated. Comparison of the spatial correspondences showed that the preferences for GLC maps varied spatially. Integration methods using both the GLC maps and reference data at their locations resulted in 4.5%–13% higher correspondence with the reference LC than any of the input GLC maps. An integrated LC map and LC class probability maps were computed using regression kriging, which produced the highest correspondence (76%). Our results demonstrate the added value of using reference datasets and geostatistics for improving GLC maps. This approach is useful as more GLC reference datasets are becoming publicly available and their reuse is being encouraged.

  8. SU-E-T-540: Volumetric Modulated Total Body Irradiation Using a Rotational Lazy Susan-Like Immobilization System

    Energy Technology Data Exchange (ETDEWEB)

    Gu, X; Hrycushko, B; Lee, H; Lamphier, R; Jiang, S; Abdulrahman, R; Timmerman, R [UT Southwestern Medical Center, Dallas, TX (United States)

    2014-06-01

    Purpose: Traditional extended SSD total body irradiation (TBI) techniques can be problematic in terms of patient comfort and/or dose uniformity. This work aims to develop a comfortable TBI technique that achieves a uniform dose distribution to the total body while reducing the dose to organs at risk for complications. Methods: To maximize patient comfort, a lazy Susan-like couch top immobilization system which rotates about a pivot point was developed. During CT simulation, a patient is immobilized by a Vac-Lok bag within the body frame. The patient is scanned head-first and then feet-first following 180° rotation of the frame. The two scans are imported into the Pinnacle treatment planning system and concatenated to give a full-body CT dataset. Treatment planning matches multiple isocenter volumetric modulated arc (VMAT) fields of the upper body and multiple isocenter parallel-opposed fields of the lower body. VMAT fields of the torso are optimized to satisfy lung dose constraints while achieving a therapeutic dose to the torso. The multiple isocenter VMAT fields are delivered with an indexed couch, followed by body frame rotation about the pivot point to treat the lower body isocenters. The treatment workflow was simulated with a Rando phantom, and the plan was mapped to a solid water slab phantom for point- and film-dose measurements at multiple locations. Results: The treatment plan of 12 Gy over 8 fractions achieved 80.2% coverage of the total body volume within ±10% of the prescription dose. The mean lung dose was 8.1 Gy. All ion chamber measurements were within ±1.7% compared to the calculated point doses. All relative film dosimetry showed at least a 98.0% gamma passing rate using 3 mm/3% passing criteria. Conclusion: The proposed patient comfort-oriented TBI technique provides a uniform dose distribution within the total body while reducing the dose to the lungs.

  9. Visualization and volumetric structures from MR images of the brain

    Energy Technology Data Exchange (ETDEWEB)

    Parvin, B.; Johnston, W.; Robertson, D.

    1994-03-01

    Pinta is a system for segmentation and visualization of anatomical structures obtained from serial sections reconstructed from magnetic resonance imaging. The system approaches the segmentation problem by assigning each volumetric region to an anatomical structure. This is accomplished by satisfying constraints at the pixel level, slice level, and volumetric level. Each slice is represented by an attributed graph, where nodes correspond to regions and links correspond to the relations between regions. These regions are obtained by grouping pixels based on similarity and proximity. The slice level attributed graphs are then coerced to form a volumetric attributed graph, where volumetric consistency can be verified. The main novelty of our approach is in the use of the volumetric graph to ensure consistency from symbolic representations obtained from individual slices. In this fashion, the system allows errors to be made at the slice level, yet removes them when the volumetric consistency cannot be verified. Once the segmentation is complete, the 3D surfaces of the brain can be constructed and visualized.

  10. Soft bilateral filtering volumetric shadows using cube shadow maps

    Science.gov (United States)

    Ali, Hatam H.; Sunar, Mohd Shahrizal; Kolivand, Hoshang

    2017-01-01

    Volumetric shadows often increase the realism of rendered scenes in computer graphics. Typical volumetric shadow techniques do not provide a smooth transition effect in real time while preserving crisp boundaries. This research presents a new technique for generating high-quality volumetric shadows by sampling and interpolation. Contrary to the conventional ray marching method, which requires extensive time, the proposed technique adopts downsampling in calculating ray marching. Furthermore, light scattering is computed in a High Dynamic Range buffer to generate tone mapping. Bilateral interpolation is used along view rays to smooth the transition of volumetric shadows while preserving edges. In addition, this technique applies a cube shadow map to create multiple shadows. The contribution of this technique is reducing the number of sample points in evaluating light scattering and then introducing bilateral interpolation to improve volumetric shadows, significantly removing the inherent deficiencies of shadow maps. The technique yields soft volumetric shadows with good performance and high quality, which shows its potential for interactive applications. PMID:28632740

  11. Volumetric characteristics and compactability of asphalt rubber mixtures with organic warm mix asphalt additives

    Directory of Open Access Journals (Sweden)

    A. M. Rodríguez-Alloza

    2017-04-01

    Full Text Available Warm Mix Asphalt (WMA) refers to technologies that reduce manufacturing and compaction temperatures of asphalt mixtures, allowing lower energy consumption and reducing greenhouse gas emissions from asphalt plants. These benefits, combined with the effective reuse of a solid waste product, make asphalt rubber (AR) mixtures with WMA additives an excellent environmentally-friendly material for road construction. The effect of WMA additives on rubberized mixtures has not yet been established in detail and the lower mixing/compaction temperatures of these mixtures may result in insufficient compaction. In this sense, the present study uses a series of laboratory tests to evaluate the volumetric characteristics and compactability of AR mixtures with organic additives when production/compaction temperatures are decreased. The results of this study indicate that the additives selected can decrease the mixing/compaction temperatures without compromising the volumetric characteristics and compactability.

  12. Volumetric capnography for the evaluation of chronic airways diseases

    Directory of Open Access Journals (Sweden)

    Veronez L

    2014-09-01

    Full Text Available Liliani de Fátima Veronez,1 Monica Corso Pereira,2 Silvia Maria Doria da Silva,2 Luisa Affi Barcaui,2 Eduardo Mello De Capitani,2 Marcos Mello Moreira,2 Ilma Aparecida Paschoal2 1Department of Physical Therapy, University of Votuporanga (Educational Foundation of Votuporanga), Votuporanga, 2Department of Internal Medicine, School of Medical Sciences, State University of Campinas (UNICAMP), Campinas, Sao Paulo, Brazil. Background: Obstructive lung diseases of different etiologies present with progressive peripheral airway involvement. The peripheral airways, known as the silent lung zone, are not adequately evaluated with conventional function tests. The principle of gas washout has been used to detect pulmonary ventilation inhomogeneity and to estimate the location of the underlying disease process. Volumetric capnography (VC) analyzes the pattern of CO2 elimination as a function of expired volume. Objective: To measure normalized phase 3 slopes with VC in patients with non-cystic fibrosis bronchiectasis (NCB) and in bronchitic patients with chronic obstructive pulmonary disease (COPD) in order to compare the slopes obtained for the groups. Methods: Patients with NCB and severe COPD were enrolled sequentially from an outpatient clinic (Hospital of the State University of Campinas). A control group was established for the NCB group, paired by sex and age. All subjects performed spirometry, VC, and the 6-Minute Walk Test (6MWT). Two comparisons were made: NCB group versus its control group, and NCB group versus COPD group. The project was approved by the ethics committee of the institution. Statistical tests used were the Wilcoxon or Student's t-test; P<0.05 was considered a statistically significant difference. Results: Concerning the NCB group (N=20) versus the control group (N=20), significant differences were found in body mass index and in several functional variables (spirometric, VC, 6MWT), with worse results observed in the NCB group. In the comparison between

  13. Widespread Volumetric Brain Changes following Tooth Loss in Female Mice

    Science.gov (United States)

    Avivi-Arber, Limor; Seltzer, Ze'ev; Friedel, Miriam; Lerch, Jason P.; Moayedi, Massieh; Davis, Karen D.; Sessle, Barry J.

    2017-01-01

    and 21 BXA24 mice revealed significant volumetric differences between the two strains in several brain regions. These findings highlight the utility of high-resolution sMRI for studying tooth loss-induced structural brain plasticity in mice, and provide a foundation for further phenotyping structural brain changes following tooth loss in the full AXB-BXA panel to facilitate mapping genes that control brain plasticity following orofacial injury. PMID:28119577

  14. Volumetric and MGMT parameters in glioblastoma patients: Survival analysis

    Directory of Open Access Journals (Sweden)

    Iliadis Georgios

    2012-01-01

    Full Text Available Abstract Background In this study several tumor-related volumes were assessed by means of a computer-based application and a survival analysis was conducted to evaluate the prognostic significance of pre- and postoperative volumetric data in patients harboring glioblastomas. In addition, MGMT (O6-methylguanine methyltransferase) related parameters were compared with those of volumetry in order to observe possible relevance of this molecule in tumor development. Methods We prospectively analyzed 65 patients suffering from glioblastoma (GBM) who underwent radiotherapy with concomitant adjuvant temozolomide. For the purpose of volumetry, T1- and T2-weighted magnetic resonance (MR) sequences were used, acquired both pre- and postoperatively (pre-radiochemotherapy). The volumes measured on preoperative MR images were necrosis, enhancing tumor and edema (including the tumor) and, on postoperative ones, net-enhancing tumor. Age, sex, performance status (PS) and type of operation were also included in the multivariate analysis. MGMT was assessed for promoter methylation with Multiplex Ligation-dependent Probe Amplification (MLPA), for RNA expression with real-time PCR, and for protein expression with immunohistochemistry in a total of 44 cases with available histologic material. Results In the multivariate analysis a negative impact was shown for pre-radiochemotherapy net-enhancing tumor on the overall survival (OS) (p = 0.023) and for preoperative necrosis on progression-free survival (PFS) (p = 0.030). Furthermore, the multivariate analysis confirmed the importance of PS in PFS and OS of patients. MGMT promoter methylation was observed in 13/23 (43.5%) evaluable tumors; complete methylation was observed in 3/13 methylated tumors only. High rate of MGMT protein positivity (>20% positive neoplastic nuclei) was inversely associated with pre-operative tumor necrosis (p = 0.021). Conclusions Our findings implicate that volumetric parameters may have a significant role in

  15. NEW APPROACH FOR TECHNOLOGY OF VOLUMETRIC – SUPERFICIAL HARDENING OF GEAR DETAILS OF THE BACK AXLE OF MOBILE MACHINES

    Directory of Open Access Journals (Sweden)

    A. I. Mihluk

    2010-01-01

    Full Text Available A new approach to the technology of volumetric-superficial hardening of gear parts of the back axle, made of steel of lowered hardenability, is offered. This approach consists in forming a hardened state over the entire surface of the part.

  16. Evaluation of dosimetric effect caused by slowing with multi-leaf collimator (MLC) leaves for volumetric modulated arc therapy (VMAT)

    Directory of Open Access Journals (Sweden)

    Xu Zhengzheng

    2016-03-01

    Full Text Available This study reports (1) the sensitivity of the intensity-modulated radiation therapy (IMRT) QA method for clinical volumetric modulated arc therapy (VMAT) plans with multi-leaf collimator (MLC) leaf errors that will not trigger the MLC interlock during beam delivery; and (2) the effect of non-beam-hold MLC leaf errors on the quality of VMAT plan dose delivery.

  17. ASSESSING SMALL SAMPLE WAR-GAMING DATASETS

    Directory of Open Access Journals (Sweden)

    W. J. HURLEY

    2013-10-01

    Full Text Available One of the fundamental problems faced by military planners is the assessment of changes to force structure. An example is whether to replace an existing capability with an enhanced system. This can be done directly with a comparison of measures such as accuracy, lethality, survivability, etc. However this approach does not allow an assessment of the force multiplier effects of the proposed change. To gauge these effects, planners often turn to war-gaming. For many war-gaming experiments, it is expensive, both in terms of time and dollars, to generate a large number of sample observations. This puts a premium on the statistical methodology used to examine these small datasets. In this paper we compare the power of three tests to assess population differences: the Wald-Wolfowitz test, the Mann-Whitney U test, and re-sampling. We employ a series of Monte Carlo simulation experiments. Not unexpectedly, we find that the Mann-Whitney test performs better than the Wald-Wolfowitz test. Resampling is judged to perform slightly better than the Mann-Whitney test.
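
    The power comparison described above can be reproduced in outline: draw many small samples under a shifted alternative and count how often each test rejects. A sketch for the Mann-Whitney U test with SciPy; the distributions, sample size and shift are illustrative, not the paper's design:

        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(3)
        n, shift, trials, alpha = 10, 0.8, 2000, 0.05

        rejections = 0
        for _ in range(trials):
            a = rng.normal(0.0, 1.0, n)      # baseline force performance
            b = rng.normal(shift, 1.0, n)    # enhanced system, shifted mean
            _, p = mannwhitneyu(a, b, alternative="two-sided")
            rejections += p < alpha

        print("estimated power: %.3f" % (rejections / trials))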

  18. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Full Text Available The accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for the creation of new tool paths to improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the configuration RTTTR (tilting head B-axis and rotary table on the workpiece side A′) was set up taking into consideration rigid body kinematics and homogeneous transformation matrices, in which 43 error components are included. The volumetric error comprises 43 error components that can separately reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of the workpiece is governed by the position of the cutting tool center point (TCP) relative to the workpiece; when the cutting tool deviates from its ideal position relative to the workpiece, a machining error results. The compensation process comprises detection of the present tool path and analysis of the geometric error of the RTTTR five-axis CNC machine tool, translation of the current component positions to compensated positions using the kinematic error model, conversion of the newly created positions to new tool paths using the compensation algorithms, and finally editing of the old G-codes using a G-code generator algorithm.
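
    The kinematic error model chains 4x4 homogeneous transforms, each perturbed by small translational and rotational error components. A toy two-transform sketch showing how a small-angle error matrix propagates to a tool center point deviation; the full model of the paper carries 43 such components:

        import numpy as np

        def small_error_transform(dx, dy, dz, ax, ay, az):
            """Homogeneous transform for small translation/rotation errors."""
            return np.array([[1.0, -az,  ay,  dx],
                             [ az, 1.0, -ax,  dy],
                             [-ay,  ax, 1.0,  dz],
                             [0.0, 0.0, 0.0, 1.0]])

        def translation(x, y, z):
            T = np.eye(4)
            T[:3, 3] = (x, y, z)
            return T

        # Ideal two-axis chain vs. the same chain with an error injected
        ideal = translation(100.0, 0.0, 0.0) @ translation(0.0, 50.0, 0.0)
        err = small_error_transform(0.01, -0.005, 0.002, 1e-5, 2e-5, -1e-5)
        actual = translation(100.0, 0.0, 0.0) @ err @ translation(0.0, 50.0, 0.0)

        origin = np.array([0.0, 0.0, 0.0, 1.0])
        deviation = (actual @ origin - ideal @ origin)[:3]
        print("TCP deviation (mm):", deviation)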

  19. Brain stem and cerebellum volumetric analysis of Machado Joseph disease patients

    Directory of Open Access Journals (Sweden)

    S T Camargos

    2011-01-01

    Full Text Available Machado-Joseph disease, or spinocerebellar ataxia type 3 (MJD/SCA3), is the most frequent late-onset spinocerebellar ataxia and results from a CAG repeat expansion in the ataxin-3 gene. Previous studies have found a correlation between atrophy of the cerebellum and brainstem and age and CAG repeats, although no such correlation has been found with disease duration and clinical manifestations. In this study we test the hypothesis that atrophy of the cerebellum and brainstem in MJD/SCA3 is related to clinical severity, disease duration and CAG repeat length, as well as to other variables such as age and ICARS (International Cooperative Ataxia Rating Scale) score. Whole-brain high-resolution MRI and volumetric measurement with cranial volume normalization were obtained from 15 MJD/SCA3 patients and 15 normal, age- and sex-matched controls. We applied the ICARS and compared the score with volumes, CAG number, disease duration and age. We found significant correlation of both brainstem and cerebellar atrophy with CAG repeat length, age, disease duration and degree of disability. The Spearman rank correlation was stronger with volumetric reduction of the cerebellum than of the brainstem. Our data allow us to conclude that volumetric analysis might reveal progressive degeneration after disease onset, which in turn is linked to both age and the number of CAG repeat expansions in SCA3.

  20. The importance of accurate anatomic assessment for the volumetric analysis of the amygdala

    Directory of Open Access Journals (Sweden)

    L. Bonilha

    2005-03-01

    Full Text Available There is a wide range of values reported in volumetric studies of the amygdala. The use of single plane thick magnetic resonance imaging (MRI) may prevent the correct visualization of anatomic landmarks and yield imprecise results. To assess whether there is a difference between volumetric analysis of the amygdala performed with single plane MRI 3-mm slices and with multiplanar analysis of MRI 1-mm slices, we studied healthy subjects and patients with temporal lobe epilepsy. We performed manual delineation of the amygdala on T1-weighted inversion recovery, 3-mm coronal slices and manual delineation of the amygdala on three-dimensional volumetric T1-weighted images with 1-mm slice thickness. The data were compared using a dependent t-test. There was a significant difference between the volumes obtained by the coronal plane-based measurements and the volumes obtained by three-dimensional analysis (P < 0.001). An incorrect estimate of the amygdala volume may preclude a correct analysis of the biological effects of alterations in amygdala volume. Three-dimensional analysis is preferred because it is based on more extensive anatomical assessment and the results are similar to those obtained in post-mortem studies.

  1. Three-dimensional linear and volumetric analysis of maxillary sinus pneumatization

    Directory of Open Access Journals (Sweden)

    Reham M. Hamdy

    2014-05-01

    Full Text Available Considering the anatomical variability related to the maxillary sinus, its intimate relation to the maxillary posterior teeth, and all the implications that pneumatization may have, three-dimensional assessment of maxillary sinus pneumatization is highly useful. The aim of this study is to analyze the maxillary sinus dimensions both linearly and volumetrically using cone beam computed tomography (CBCT) to assess maxillary sinus pneumatization. A retrospective analysis of 30 maxillary sinuses belonging to 15 patients’ CBCT scans was performed. Linear and volumetric measurements were conducted and statistically analyzed. The maximum craniocaudal extension of the maxillary sinus was located around the 2nd molar in 93% of the sinuses, while the maximum mediolateral and anteroposterior extensions of the maxillary sinus were located at the level of the root of the zygomatic complex in 90% of sinuses. There was a high correlation between the linear measurements of the right and left sides, where the anteroposterior extension of the sinus at the level of the nasal floor had the largest correlation (0.89). There was also a high correlation between the Simplant- and geometrically derived maxillary sinus volumes for both right and left sides (0.98 and 0.96, respectively). The relations of the sinus floor can be accurately assessed on the different orthogonal images obtained through a 3D CBCT scan. The geometric method offered a much cheaper, easier, and less sophisticated substitute; therefore, with the availability of software, 3D volumetric measurements are more facilitated.

  2. Dental volumetric tomographical evaluation of location and prevalence of maxillary sinus septa

    Directory of Open Access Journals (Sweden)

    Ibrahim Damlar

    2013-06-01

    Full Text Available Purpose: The aim of this study was to determine the prevalence and location of maxillary sinus septa with the help of dental volumetric tomography. Methods: 760 patients’ 1520 maxillary sinuses were evaluated by dental volumetric tomography to detect maxillary sinus septa. The maxillary sinus was divided into 3 zones (anterior, middle and posterior) when locating the maxillary sinus septa. Results: 47 of the maxillary sinus septa were located in the anterior zone (24.7%), 35 in the middle zone (18.4%) and 108 in the posterior region (56.8%). Conclusion: The formation of maxillary sinus septa was affected by the existence or lack of teeth. Correct detection of the presence of maxillary sinus septa is important prior to sinus lifting and dental implant surgery. Dental volumetric tomographical evaluation of maxillary sinus septa is useful for a correct diagnosis and treatment planning. [Cukurova Med J 2013; 38(3): 467-474]

  3. Fatigue life estimation for different notched specimens based on the volumetric approach

    Directory of Open Access Journals (Sweden)

    Esmaeili F.

    2010-06-01

    Full Text Available In this paper, the effects of notch radius for different notched specimens have been studied on the values of the stress concentration factor, the notch strength reduction factor, and the fatigue life of the specimens. The material selected for this investigation is Al 2024-T3. The volumetric approach has been applied to obtain the values of the notch strength reduction factor, and the results have been compared with those obtained from the Neuber and Peterson methods. Load-controlled fatigue tests of the mentioned specimens have been conducted on a 250 kN servo-hydraulic Zwick/Amsler fatigue testing machine at a frequency of 10 Hz. The fatigue lives of the specimens have also been predicted based on the available smooth S-N curve of Al 2024-T3 and the values of the notch strength reduction factor obtained from the volumetric, Neuber and Peterson methods. The values of stress and strain around the notch roots are required to predict the fatigue life of notched specimens, so the Ansys finite element code has been used and non-linear analyses have been performed to obtain the stress and strain distributions around the notches. The plastic deformation of the material has been simulated using multi-linear kinematic hardening and the cyclic stress-strain relation. The work here shows that the volumetric approach does a very good job of predicting the fatigue life of the notched specimens.

  4. A Combined Random Forests and Active Contour Model Approach for Fully Automatic Segmentation of the Left Atrium in Volumetric MRI

    Science.gov (United States)

    Luo, Gongning

    2017-01-01

    Segmentation of the left atrium (LA) from cardiac magnetic resonance imaging (MRI) datasets is of great importance for image guided atrial fibrillation ablation, LA fibrosis quantification, and cardiac biophysical modelling. However, automated LA segmentation from cardiac MRI is challenging due to limited image resolution, considerable variability in anatomical structures across subjects, and dynamic motion of the heart. In this work, we propose a combined random forests (RFs) and active contour model (ACM) approach for fully automatic segmentation of the LA from cardiac volumetric MRI. Specifically, we employ the RFs within an autocontext scheme to effectively integrate contextual and appearance information from multisource images together for LA shape inference. The inferred shape is then incorporated into a volume-scalable ACM for further improving the segmentation accuracy. We validated the proposed method on the cardiac volumetric MRI datasets from the STACOM 2013 and HVSMR 2016 databases and showed that it outperforms other recent automated LA segmentation methods. Validation metrics, average Dice coefficient (DC) and average surface-to-surface distance (S2S), were computed as 0.9227 ± 0.0598 and 1.14 ± 1.205 mm, versus those of 0.6222–0.878 and 1.34–8.72 mm, obtained by other methods, respectively. PMID:28316992
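
    The validation metrics quoted above (Dice coefficient and surface-to-surface distance) have compact definitions. A sketch for binary masks using NumPy and SciPy's Euclidean distance transform, a common way to approximate S2S; the paper's exact implementation may differ:

        import numpy as np
        from scipy.ndimage import binary_erosion, distance_transform_edt

        def dice(a, b):
            """Dice coefficient of two boolean masks."""
            return 2.0 * np.count_nonzero(a & b) / (
                np.count_nonzero(a) + np.count_nonzero(b))

        def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
            """Symmetric mean distance between the two mask surfaces (mm)."""
            surf_a = a & ~binary_erosion(a)
            surf_b = b & ~binary_erosion(b)
            d_to_b = distance_transform_edt(~surf_b, sampling=spacing)
            d_to_a = distance_transform_edt(~surf_a, sampling=spacing)
            return (d_to_b[surf_a].mean() + d_to_a[surf_b].mean()) / 2.0

        # Two overlapping toy spheres stand in for automatic/manual LA masks
        z, y, x = np.ogrid[:64, :64, :64]
        auto = (z - 32)**2 + (y - 32)**2 + (x - 32)**2 < 15**2
        manual = (z - 33)**2 + (y - 31)**2 + (x - 32)**2 < 15**2
        print("DC = %.4f, S2S = %.3f mm" % (dice(auto, manual),
                                            mean_surface_distance(auto, manual)))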

  5. PEMODELAN OBYEK TIGA DIMENSI DARI GAMBAR SINTETIS DUA DIMENSI DENGAN PENDEKATAN VOLUMETRIC (Three-Dimensional Object Modeling from Two-Dimensional Synthetic Images with a Volumetric Approach)

    Directory of Open Access Journals (Sweden)

    Rudy Adipranata

    2005-01-01

    Full Text Available In this paper, we implement 3D object modeling from 2D input images. Modeling is performed using a volumetric reconstruction approach: the 3D space is tessellated into discrete volumes called voxels. We use the voxel coloring method to reconstruct 3D objects from synthetic input images; with voxel coloring, we obtain a photorealistic result and also solve the occlusion problem that occurs in many cases of 3D reconstruction. Photorealistic 3D object reconstruction is a challenging problem in computer graphics and remains an active research area. Many applications can make use of the result of such reconstruction, including virtual reality, augmented reality, 3D games, and other 3D applications. Voxel coloring treats the reconstruction problem as a color reconstruction problem, instead of a shape reconstruction problem. The method works by discretizing the scene space into voxels, then traversing and coloring those voxels in a special order. The result is a photorealistic 3D object.
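
    The core of voxel coloring is a photo-consistency test: project each voxel into every image that sees it and keep the voxel, with its mean color, only if the observed colors agree. A stripped-down sketch of that test assuming known 3x4 camera projection matrices; the occlusion-ordered traversal, which is the method's key complication, is omitted:

        import numpy as np

        def photo_consistent(voxel_xyz, images, cameras, tau=20.0):
            """Project a voxel into all views; accept if color spread is low.
            images: list of HxWx3 uint8 arrays; cameras: list of 3x4 matrices."""
            X = np.append(voxel_xyz, 1.0)            # homogeneous coordinates
            samples = []
            for img, P in zip(images, cameras):
                u, v, w = P @ X
                if w <= 0:                           # behind the camera
                    continue
                col, row = int(u / w), int(v / w)    # pixel coordinates
                if 0 <= row < img.shape[0] and 0 <= col < img.shape[1]:
                    samples.append(img[row, col].astype(float))
            if len(samples) < 2:
                return False, None                   # not enough observations
            samples = np.array(samples)
            if samples.std(axis=0).mean() >= tau:    # colors disagree: carve
                return False, None
            return True, samples.mean(axis=0)        # consistent: voxel color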

  6. A new laboratory-scale experimental facility for detailed aerothermal characterizations of volumetric absorbers

    Science.gov (United States)

    Gomez-Garcia, Fabrisio; Santiago, Sergio; Luque, Salvador; Romero, Manuel; Gonzalez-Aguilar, Jose

    2016-05-01

    This paper describes a new modular laboratory-scale experimental facility that was designed to conduct detailed aerothermal characterizations of volumetric absorbers for use in concentrating solar power plants. Absorbers are generally considered to be the element with the highest potential for efficiency gains in solar thermal energy systems. The configuration of volumetric absorbers enables concentrated solar radiation to penetrate deep into their solid structure, where it is progressively absorbed, prior to being transferred by convection to a working fluid flowing through the structure. Current design trends towards higher absorber outlet temperatures have led to the use of complex intricate geometries in novel ceramic and metallic elements to maximize the temperature deep inside the structure (thus reducing thermal emission losses at the front surface and increasing efficiency). Although numerical models simulate the conjugate heat transfer mechanisms along volumetric absorbers, they lack, in many cases, the accuracy that is required for precise aerothermal validations. The present work aims to aid this objective by the design, development, commissioning and operation of a new experimental facility which consists of a 7 kWe (1.2 kWth) high flux solar simulator, a radiation homogenizer, inlet and outlet collector modules and a working section that can accommodate volumetric absorbers up to 80 mm × 80 mm in cross-sectional area. Experimental measurements conducted in the facility include absorber solid temperature distributions along its depth, inlet and outlet air temperatures, air mass flow rate and pressure drop, incident radiative heat flux, and overall thermal efficiency. In addition, two windows allow for the direct visualization of the front and rear absorber surfaces, thus enabling full-coverage surface temperature measurements by thermal imaging cameras. This paper presents the results from the aerothermal characterization of a siliconized silicon

  7. Improved volumetric imaging in tomosynthesis using combined multiaxial sweeps.

    Science.gov (United States)

    Gersh, Jacob A; Wiant, David B; Best, Ryan C M; Bennett, Marcus C; Munley, Michael T; King, June D; McKee, Mahta M; Baydush, Alan H

    2010-09-03

    This study explores the volumetric reconstruction fidelity attainable using tomosynthesis with a kV imaging system which has a unique ability to rotate isocentrically and with multiple degrees of mechanical freedom. More specifically, we seek to investigate volumetric reconstructions by combining multiple limited-angle rotational image acquisition sweeps. By comparing these reconstructed images with those of a CBCT reconstruction, we can gauge the volumetric fidelity of the reconstructions. In surgical situations, the described tomosynthesis-based system could provide high-quality volumetric imaging without requiring patient motion, even with rotational limitations present. Projections were acquired using the Digital Integrated Brachytherapy Unit, or IBU-D. A phantom was used which contained several spherical objects of varying contrast. Using image projections acquired during isocentric sweeps around the phantom, reconstructions were performed by filtered backprojection. For each image acquisition sweep configuration, a contrasting sphere is analyzed using two metrics and compared to a gold standard CBCT reconstruction. Since the intersection of a reconstructed sphere and an imaging plane is ideally a circle with an eccentricity of zero, the first metric presented compares the effective eccentricity of intersections of reconstructed volumes and imaging planes. As another metric of volumetric reconstruction fidelity, the volume of one of the contrasting spheres was determined using manual contouring. By comparing these manually delineated volumes with a CBCT reconstruction, we can gauge the volumetric fidelity of reconstructions. The configuration which yielded the highest overall volumetric reconstruction fidelity, as determined by effective eccentricities and volumetric contouring, consisted of two orthogonally-offset 60° L-arm sweeps and a single C-arm sweep which shared a pivot point with one of the L-arm sweeps. When compared to a similar configuration that

  8. Blockwise conjugate gradient methods for image reconstruction in volumetric CT.

    Science.gov (United States)

    Qiu, W; Titley-Peloquin, D; Soleimani, M

    2012-11-01

    Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. The algorithm discretizes the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least-squares problems min_x ||b - Ax||_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but in tomography problems (e.g. CBCT reconstruction) they have not been widely used. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction without further enhancing computer standards. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images.
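
    The blockwise products can be wrapped in a SciPy LinearOperator so that lsqr never holds the full system matrix at once; here the blocks are slices of an in-memory array, whereas the paper's setting streams them from storage. A minimal sketch with row blocks and a Tikhonov damping term:

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, lsqr

        rng = np.random.default_rng(4)
        m, n, block_rows = 1200, 300, 200
        A_full = rng.random((m, n))        # stands in for the CBCT system matrix
        x_true = rng.random(n)
        b = A_full @ x_true                # simulated projection data

        def matvec(x):
            """A @ x computed one row block at a time."""
            out = np.empty(m)
            for i in range(0, m, block_rows):
                out[i:i + block_rows] = A_full[i:i + block_rows] @ x
            return out

        def rmatvec(y):
            """A.T @ y accumulated over row blocks."""
            out = np.zeros(n)
            for i in range(0, m, block_rows):
                out += A_full[i:i + block_rows].T @ y[i:i + block_rows]
            return out

        A_op = LinearOperator((m, n), matvec=matvec, rmatvec=rmatvec)
        x_rec = lsqr(A_op, b, damp=1e-3, iter_lim=200)[0]  # damp = Tikhonov term
        print("relative error: %.2e"
              % (np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)))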

  9. Characterization of hydrological responses to rainfall and volumetric coefficients on the event scale in rural catchments of the Iberian Peninsula

    Science.gov (United States)

    Taguas, Encarnación; Nadal-Romero, Estela; Ayuso, José L.; Casalí, Javier; Cid, Patricio; Dafonte, Jorge; Duarte, Antonio C.; Giménez, Rafael; Giráldez, Juan V.; Gómez-Macpherson, Helena; Gómez, José A.; González-Hidalgo, J. Carlos; Lucía, Ana; Mateos, Luciano; Rodríguez-Blanco, M. Luz; Schnabel, Susanne; Serrano-Muela, M. Pilar; Lana-Renault, Noemí; Mercedes Taboada-Castro, M.; Taboada-Castro, M. Teresa

    2016-04-01

    Analysis of storm rainfall-runoff data is essential to improve our understanding of catchment hydrology and to validate models supporting hydrological planning. In a context of climate change, statistical and process-based models are helpful to explore different scenarios which might be represented by simple parameters such as the volumetric runoff coefficient. In this work, rainfall-runoff event datasets collected at 17 rural catchments in the Iberian Peninsula were studied. The objectives were: i) to describe hydrological patterns/variability of the rainfall-runoff relation; ii) to explore different methodologies to quantify representative volumetric runoff coefficients. Firstly, the criteria used to define an event were examined in order to standardize the analysis. Linear regression adjustments and statistics of the rainfall-runoff relations were examined to identify possible common patterns. In addition, a principal component analysis was applied to evaluate the variability among catchments based on their physical attributes. Secondly, runoff coefficients at the event temporal scale were calculated following different methods. The median, the mean, Hawkins' graphical method (Hawkins, 1993), reference values for engineering projects from Prevert (TRAGSA, 1994), and the ratio of cumulated runoff to cumulated precipitation of the events that generated runoff (Rcum) were compared. Finally, the relations between the most representative volumetric runoff coefficients and the physical features of the catchments were explored using multiple linear regressions. The mean volumetric runoff coefficient in the studied catchments was 0.18, whereas the median was 0.15, both with variation coefficients greater than 100%. In 6 catchments, rainfall-runoff linear adjustments presented coefficients of determination greater than 0.60 (p < 0.05), pointing to hydrological response differences among the catchments. REFERENCES: Hawkins, R. H. (1993). Asymptotic determination of runoff curve numbers from data. J. Irrig. Drain. Eng.
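
    The event volumetric runoff coefficient itself is just the ratio of event runoff depth to event rainfall depth. A sketch of the per-event computation and the summary statistics compared in the study, on a synthetic event table; Hawkins' graphical method and the Prevert reference values are not reproduced:

        import numpy as np

        rng = np.random.default_rng(5)
        rainfall_mm = rng.gamma(2.0, 12.0, size=80)             # event rainfall
        runoff_mm = rainfall_mm * rng.beta(1.5, 6.0, size=80)   # event runoff

        rc = runoff_mm / rainfall_mm                  # per-event coefficient
        rcum = runoff_mm.sum() / rainfall_mm.sum()    # cumulative ratio (Rcum)

        print("mean RC = %.2f, median RC = %.2f, CV = %.0f%%"
              % (rc.mean(), np.median(rc), 100 * rc.std() / rc.mean()))
        print("Rcum = %.2f" % rcum)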

  10. Multi-atlas multi-shape segmentation of fetal brain MRI for volumetric and morphometric analysis of ventriculomegaly.

    Science.gov (United States)

    Gholipour, Ali; Akhondi-Asl, Alireza; Estroff, Judy A; Warfield, Simon K

    2012-04-15

    The recent development of motion-robust super-resolution fetal brain MRI holds out the potential for dramatic new advances in volumetric and morphometric analysis. Quantitative analysis of the developing fetal brain based on volumetric and morphometric biomarkers must rely on accurate segmentation. Automatic segmentation of fetal brain MRI is challenging, however, due to the highly variable size and shape of the developing brain; possible structural abnormalities; and the relatively poor resolution of fetal MRI scans. To overcome these limitations, we present a novel, constrained, multi-atlas, multi-shape automatic segmentation method that specifically addresses the challenge of segmenting multiple structures with similar intensity values in subjects with strong anatomic variability. Accordingly, we have applied this method to shape segmentation of normal, dilated, or fused lateral ventricles for quantitative analysis of ventriculomegaly (VM), which is a pivotal finding in the earliest stages of fetal brain development and warrants further investigation. Utilizing these innovative techniques, we introduce novel volumetric and morphometric biomarkers of VM, comparing these values to those that are generated by standard methods of VM analysis, i.e., by measuring the ventricular atrial diameter (AD) on manually selected sections of 2D ultrasound or 2D MRI. To this end, we studied 25 normal and abnormal fetuses in the gestational age (GA) range of 19 to 39 weeks (mean = 28.26, stdev = 6.56). This heterogeneous dataset was essentially used to 1) validate our segmentation method for normal and abnormal ventricles; and 2) show that the proposed biomarkers may provide improved detection of VM as compared to the AD measurement.

  11. PERFORMANCE COMPARISON FOR INTRUSION DETECTION SYSTEM USING NEURAL NETWORK WITH KDD DATASET

    Directory of Open Access Journals (Sweden)

    S. Devaraju

    2014-04-01

    Full Text Available Intrusion detection systems face the challenging task of deciding whether a user of an organizational information system or IT-industry network is a normal user or an attacker. An intrusion detection system is an effective method to deal with this kind of problem in networks, and different classifiers can be used to detect the different kinds of attacks. In this paper, the performance of intrusion detection is compared across various neural network classifiers. In the proposed research, the four types of classifiers used are the Feed Forward Neural Network (FFNN), Generalized Regression Neural Network (GRNN), Probabilistic Neural Network (PNN) and Radial Basis Neural Network (RBNN). The performance on the full-featured KDD Cup 1999 dataset is compared with that on a reduced-feature KDD Cup 1999 dataset. MATLAB software is used to train and test the dataset, and the efficiency and false alarm rate are measured. It is shown that the reduced-feature dataset performs better than the full-featured dataset.
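
    The experiment described, comparing a neural-network classifier on the full feature set against a reduced one, can be sketched generically. The example below uses scikit-learn's MLPClassifier on synthetic stand-in data, since the record does not detail the KDD Cup 1999 preprocessing; all names and figures are illustrative.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Synthetic stand-in for KDD-style data: 41 features, binary normal/attack label.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(2000, 41))
    y = (X[:, 0] + 0.5 * X[:, 5] > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Reduced feature set: keep the 10 features most correlated with the label.
    selector = SelectKBest(f_classif, k=10)
    X_tr_red = selector.fit_transform(X_tr, y_tr)
    X_te_red = selector.transform(X_te)

    for name, xtr, xte in [("full (41 features)", X_tr, X_te),
                           ("reduced (10 features)", X_tr_red, X_te_red)]:
        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
        clf.fit(xtr, y_tr)
        print(name, "accuracy:", clf.score(xte, y_te))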

  12. BASE MAP DATASET, LOGAN COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  13. BASE MAP DATASET, KENDALL COUNTY, TEXAS, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme, orthographic...

  14. BASE MAP DATASET, LOS ANGELES COUNTY, CALIFORNIA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  15. SIAM 2007 Text Mining Competition dataset

    Data.gov (United States)

    National Aeronautics and Space Administration — Subject Area: Text Mining Description: This is the dataset used for the SIAM 2007 Text Mining competition. This competition focused on developing text mining...

  16. BASE MAP DATASET, ROGERS COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  17. BASE MAP DATASET, HARRISON COUNTY, TEXAS, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  18. BASE MAP DATASET, HONOLULU COUNTY, HAWAII, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  19. BASE MAP DATASET, SEQUOYAH COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme, orthographic...

  20. BASE MAP DATASET, MAYES COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications: cadastral, geodetic control,...

  1. BASE MAP DATASET, CADDO COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  2. Climate Prediction Center IR 4km Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — CPC IR 4km dataset was created from all available individual geostationary satellite data which have been merged to form nearly seamless global (60N-60S) IR...

  3. Environmental Dataset Gateway (EDG) Search Widget

    Data.gov (United States)

    U.S. Environmental Protection Agency — Use the Environmental Dataset Gateway (EDG) to find and access EPA's environmental resources. Many options are available for easily reusing EDG content in other...

  4. BASE MAP DATASET, CHEROKEE COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme, orthographic...

  5. Hajj and Umrah Event Recognition Datasets

    CERN Document Server

    Zawbaa, Hossam

    2012-01-01

    In this note, new Hajj and Umrah Event Recognition datasets (HUER) are presented. The demonstrated datasets are based on videos and images taken during the 2011-2012 Hajj and Umrah seasons. HUER is the first collection of datasets covering the six types of Hajj and Umrah ritual events (rotating in Tawaf around Kabaa, performing Sa'y between Safa and Marwa, standing on the mount of Arafat, staying overnight in Muzdalifah, staying two or three days in Mina, and throwing Jamarat). The HUER datasets also contain video and image databases for nine types of human actions during Hajj and Umrah (walking, drinking from Zamzam water, sleeping, smiling, eating, praying, sitting, shaving hairs and ablutions, reading the holy Quran and making duaa). The spatial resolution is 1280 x 720 pixels for images and 640 x 480 pixels for videos; the videos average 20 seconds in length at 30 frames per second.

  6. VT Hydrography Dataset - cartographic extract lines

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) VHDCARTO is a simplified version of the local resolution Vermont Hydrography Dataset (VHD) that has been enriched with stream perenniality, e.g.,...

  7. VT Hydrography Dataset - cartographic extract polygons

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) VHDCARTO is a simplified version of the local resolution Vermont Hydrography Dataset (VHD) that has been enriched with stream perenniality, e.g.,...

  8. Environmental Dataset Gateway (EDG) REST Interface

    Data.gov (United States)

    U.S. Environmental Protection Agency — Use the Environmental Dataset Gateway (EDG) to find and access EPA's environmental resources. Many options are available for easily reusing EDG content in other...

  9. BASE MAP DATASET, GARVIN COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  10. BASE MAP DATASET, OUACHITA COUNTY, ARKANSAS

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  11. BASE MAP DATASET, SANTA CRUZ COUNTY, CALIFORNIA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  12. Simulation of Smart Home Activity Datasets.

    Science.gov (United States)

    Synnott, Jonathan; Nugent, Chris; Jeffers, Paul

    2015-06-16

    A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches, yet such access is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation.

  13. BASE MAP DATASET, BRYAN COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme, orthographic...

  14. BASE MAP DATASET, DELAWARE COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  15. BASE MAP DATASET, STEPHENS COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  16. BASE MAP DATASET, WOODWARD COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  17. BASE MAP DATASET, HOWARD COUNTY, ARKANSAS

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  18. Structural diversity of biologically interesting datasets: a scaffold analysis approach

    Directory of Open Access Journals (Sweden)

    Khanna Varun

    2011-08-01

    Full Text Available Abstract Background The recent public availability of the human metabolome and natural product datasets has revitalized "metabolite-likeness" and "natural product-likeness" as drug design concepts for designing lead libraries targeting specific pathways. Many reports have analyzed the physicochemical property space of biologically important datasets, but only a few have comprehensively characterized the scaffold diversity in public datasets of biological interest. With large collections of high-quality public data currently available, we carried out a comparative analysis of current-day leads with other biologically relevant datasets. Results In this study, we note a two-fold enrichment of metabolite scaffolds in the drug dataset (42%) as compared to currently used lead libraries (23%). We also note that only a small percentage (5%) of natural product scaffold space is shared by the lead dataset. We have identified specific scaffolds that are present in metabolites and natural products, with close counterparts in the drugs, but missing in the lead dataset. To determine the distribution of compounds in physicochemical property space, we analyzed the molecular polar surface area, the molecular solubility, the number of rings and the number of rotatable bonds, in addition to four well-known Lipinski properties. Here we note that, with only few exceptions, most of the drugs follow Lipinski's rule. The average values of molecular polar surface area and molecular solubility are highest in metabolites, while the number of rings is lowest. In addition, we note that natural products contain more rings and rotatable bonds than any other dataset under consideration. Conclusions Currently used lead libraries make little use of the metabolite and natural product scaffold space. We believe that metabolites and natural products are recognized by at least one protein in the biosphere therefore, sampling the fragment and scaffold
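
    The properties analyzed above (polar surface area, rotatable bonds, ring count, and the Lipinski properties) can be computed for any structure collection with a cheminformatics toolkit. A minimal RDKit sketch follows; the SMILES string is an arbitrary example, not drawn from the study's datasets.

    from rdkit import Chem
    from rdkit.Chem import Descriptors, Lipinski

    def profile(smiles):
        """Compute the scaffold-analysis properties for one structure."""
        mol = Chem.MolFromSmiles(smiles)
        return {
            "MW": Descriptors.MolWt(mol),           # Lipinski: <= 500
            "logP": Descriptors.MolLogP(mol),       # Lipinski: <= 5
            "HBD": Lipinski.NumHDonors(mol),        # Lipinski: <= 5
            "HBA": Lipinski.NumHAcceptors(mol),     # Lipinski: <= 10
            "TPSA": Descriptors.TPSA(mol),          # polar surface area
            "rot_bonds": Descriptors.NumRotatableBonds(mol),
            "rings": Descriptors.RingCount(mol),
        }

    print(profile("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin, as an arbitrary example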

  19. Towards interoperable and reproducible QSAR analyses: Exchange of datasets

    Directory of Open Access Journals (Sweden)

    Spjuth Ola

    2010-06-01

    Full Text Available Abstract Background QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises the addition of chemical structures as well as the selection of descriptors and software implementations prior to calculations. This process is hampered by a lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constraining collaborations and the re-use of data. Results We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies the setup of QSAR datasets and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates the addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Conclusions Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusion regarding descriptors by defining them crisply. This makes it easy to join

  20. A method to detect landmark pairs accurately between intra-patient volumetric medical images.

    Science.gov (United States)

    Yang, Deshan; Zhang, Miao; Chang, Xiao; Fu, Yabo; Liu, Shi; Li, Harold H; Mutic, Sasa; Duan, Ye

    2017-08-23

    An image processing procedure was developed in this study to detect large quantities of landmark pairs accurately in pairs of volumetric medical images. The detected landmark pairs can be used to quantitatively evaluate deformable image registration (DIR) methods. Landmark detection and pair matching were implemented in a Gaussian pyramid multi-resolution scheme. A 3D scale-invariant feature transform (SIFT) feature detection method and a 3D Harris-Laplacian corner detection method were employed to detect feature points, i.e., landmarks. A novel feature matching algorithm, Multi-Resolution Inverse-Consistent Guided Matching (MRICGM), was developed to allow accurate feature pair matching. MRICGM performs feature matching guided by the feature pairs detected at the lower resolution stage and the higher-confidence feature pairs already detected at the same resolution stage, while enforcing inverse consistency. The proposed feature detection and feature pair matching algorithms were optimized to process 3D CT and MRI images. They were successfully applied between the inter-phase abdomen 4DCT images of three patients, between the original and re-scanned radiation therapy simulation CT images of two head-and-neck patients, and between inter-fractional treatment MRIs of two patients. The proposed procedure was able to detect and match over 6300 feature pairs on average. The automatically detected landmark pairs were manually verified and mismatched pairs were rejected. The automatic feature matching accuracy before manual error rejection was 99.4%. Performance of MRICGM was also evaluated using seven digital phantom datasets with known ground truth of tissue deformation. On average, 11855 feature pairs were detected per digital phantom dataset with TRE = 0.77 ± 0.72 mm. A procedure was developed in this study to detect a large number of landmark pairs accurately between two volumetric medical images. It allows a semi-automatic way to generate the
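
    The inverse-consistency constraint at the core of MRICGM can be illustrated with a simpler building block: accepting a match only when two descriptors are mutual nearest neighbors in both matching directions. The sketch below uses random descriptors and is a generic illustration, not the authors' multi-resolution guided algorithm.

    import numpy as np

    def mutual_nearest_matches(desc_a, desc_b):
        """Return index pairs (i, j) where a_i and b_j are each other's nearest neighbor."""
        # Pairwise Euclidean distances between the two descriptor sets.
        d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
        a_to_b = d.argmin(axis=1)          # best b for each a
        b_to_a = d.argmin(axis=0)          # best a for each b
        # Keep a pair only if the match holds in both directions (inverse consistency).
        return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

    rng = np.random.default_rng(1)
    A = rng.normal(size=(50, 128))   # hypothetical 128-D feature descriptors
    B = rng.normal(size=(60, 128))
    print(len(mutual_nearest_matches(A, B)), "inverse-consistent pairs")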

  1. Relevancy Ranking of Satellite Dataset Search Results

    Science.gov (United States)

    Lynnes, Christopher; Quinn, Patrick; Norton, James

    2017-01-01

    As the variety of Earth science datasets increases, science researchers find it more challenging to discover and select the datasets that best fit their needs. The most common way for search providers to address this problem is to rank the datasets returned for a query by their likely relevance to the user. Large web page search engines typically use text matching supplemented with reverse link counts, semantic annotations and user intent modeling. However, this produces uneven results when applied to dataset metadata records simply externalized as web pages. Fortunately, data and search providers have decades of experience in serving data user communities, allowing them to form heuristics that leverage the structure in the metadata together with knowledge about the user community. Some of these heuristics include specific ways of matching the user input to the essential measurements in the dataset and determining overlaps of time ranges and spatial areas. Heuristics based on the novelty of the datasets can prioritize later, better versions of data over similar predecessors. And knowledge of how different user types and communities use data can be brought to bear in cases where characteristics of the user (discipline, expertise) or their intent (applications, research) can be divined. The Earth Observing System Data and Information System has begun implementing some of these heuristics in the relevancy algorithm of its Common Metadata Repository search engine.
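
    One of the heuristics mentioned, scoring how well a dataset's temporal coverage overlaps the user's requested time range, reduces to interval arithmetic. The scoring function below is an illustrative sketch, not the Common Metadata Repository's actual algorithm.

    def temporal_overlap_score(query_start, query_end, ds_start, ds_end):
        """Fraction of the queried time range covered by the dataset (0.0 to 1.0)."""
        overlap = min(query_end, ds_end) - max(query_start, ds_start)
        query_len = query_end - query_start
        return max(0.0, overlap) / query_len if query_len > 0 else 0.0

    # Years used for simplicity; real metadata would use full timestamps.
    print(temporal_overlap_score(2000, 2010, 2005, 2020))  # 0.5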

  2. Robust Machine Learning Applied to Terascale Astronomical Datasets

    Science.gov (United States)

    Ball, N. M.; Brunner, R. J.; Myers, A. D.

    2008-08-01

    We present recent results from the Laboratory for Cosmological Data Mining (http://lcdm.astro.uiuc.edu) at the National Center for Supercomputing Applications (NCSA) to provide robust classifications and photometric redshifts for objects in the terascale-class Sloan Digital Sky Survey (SDSS). Through a combination of machine learning in the form of decision trees, k-nearest neighbor, and genetic algorithms, the use of supercomputing resources at NCSA, and the cyberenvironment Data-to-Knowledge, we are able to provide improved classifications for over 100 million objects in the SDSS, improved photometric redshifts, and a full exploitation of the powerful k-nearest neighbor algorithm. This work is the first to apply the full power of these algorithms to contemporary terascale astronomical datasets, and the improvement over existing results is demonstrable. We discuss issues that we have encountered in dealing with data on the terascale, and possible solutions that can be implemented to deal with upcoming petascale datasets.
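
    Of the algorithms listed, k-nearest neighbor is the simplest to sketch for photometric redshifts: a galaxy's redshift is estimated from its nearest neighbors in color space. The example below runs on synthetic colors and only illustrates the idea, not the Laboratory's pipeline.

    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(7)
    # Synthetic training set: 4 photometric colors (e.g., u-g, g-r, r-i, i-z) per galaxy,
    # with a made-up toy relation between colors and redshift.
    colors = rng.normal(size=(5000, 4))
    redshift = 0.3 * colors[:, 0] ** 2 + 0.1 * colors[:, 1] + rng.normal(0, 0.02, 5000)

    knn = KNeighborsRegressor(n_neighbors=10).fit(colors, redshift)
    print("predicted z:", knn.predict(colors[:3]))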

  3. A high volume, high throughput volumetric sorption analyzer

    Science.gov (United States)

    Soo, Y. C.; Beckner, M.; Romanos, J.; Wexler, C.; Pfeifer, P.; Buckley, P.; Clement, J.

    2011-03-01

    In this talk we will present an overview of our new Hydrogen Test Fixture (HTF), constructed by the Midwest Research Institute for The Alliance for Collaborative Research in Alternative Fuel Technology to test activated carbon monoliths for hydrogen gas storage. The HTF is an automated, computer-controlled volumetric instrument for rapid screening and manipulation of monoliths under an inert atmosphere (to exclude degradation of carbon from exposure to oxygen). The HTF allows us to measure large quantities (up to 500 g) of sample in a 0.5 l test tank, making our results less sensitive to sample inhomogeneity. The HTF can measure isotherms at pressures ranging from 1 to 300 bar at room temperature. For comparison, other volumetric instruments such as Hiden Isochema's HTP-1 Volumetric Analyser can only measure carbon samples up to 150 mg at pressures up to 200 bar. Work supported by the US DOD Contract # N00164-08-C-GS37.

  4. Volumetric (3D) compressive sensing spectral domain optical coherence tomography.

    Science.gov (United States)

    Xu, Daguang; Huang, Yong; Kang, Jin U

    2014-11-01

    In this work, we proposed a novel three-dimensional compressive sensing (CS) approach for spectral domain optical coherence tomography (SD OCT) volumetric image acquisition and reconstruction. Instead of taking a spectral volume whose size is the same as that of the volumetric image, our method uses a subset of the original spectral volume that is under-sampled in all three dimensions, which reduces the amount of spectral measurements to less than 20% of that required by the Shannon/Nyquist theory. The 3D image is recovered from the under-sampled spectral data dimension-by-dimension using the proposed three-step CS reconstruction strategy. Experimental results show that our method can significantly reduce the sampling rate required for a volumetric SD OCT image while preserving the image quality.
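
    The record does not spell out the three-step reconstruction, but CS recovery of under-sampled data is commonly posed as l1-regularized least squares. Below is a generic 1D illustration using the iterative soft-thresholding algorithm (ISTA); it demonstrates the principle only, not the authors' SD OCT pipeline.

    import numpy as np

    def ista(A, y, lam=0.05, n_iter=300):
        """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x + A.T @ (y - A @ x) / L      # gradient step
            x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrinkage
        return x

    rng = np.random.default_rng(3)
    n, m = 200, 60                              # 60 measurements of a length-200 sparse signal
    x_true = np.zeros(n)
    x_true[rng.choice(n, 5, replace=False)] = rng.normal(0, 1, 5)
    A = rng.normal(size=(m, n)) / np.sqrt(m)    # random sensing matrix
    x_hat = ista(A, A @ x_true)
    print("recovery error:", np.linalg.norm(x_hat - x_true))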

  5. Emotion-based dataset of musical video contents

    Directory of Open Access Journals (Sweden)

    Luis Alejandro Solarte Moncayo

    2016-07-01

    Full Text Available Speeding up access to content by reducing the time spent navigating multimedia catalogs is one of the challenges of the video-on-demand (VoD) service, a consequence of the growing amount of content in today's networks. This article describes the process of building a dataset of music videos. The dataset was used for the design and implementation of a VoD service that seeks to improve access to content through emotion-based music classification. The work presents the adaptation of an emotion classification model based on the arousal-valence model. In addition, it describes the development of a Java tool for content classification, which was used in building the dataset. Finally, to evaluate the constructed dataset, the functional structure of the developed VoD service is presented.
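
    The arousal-valence model mentioned above places each item on two axes (activation and pleasantness), so a coarse emotion label can be read off the quadrant. The quadrant labels below are illustrative; the article's actual classification model and Java tool are not reproduced here.

    def emotion_quadrant(valence, arousal):
        """Map a (valence, arousal) point in [-1, 1]^2 to a coarse emotion label."""
        if valence >= 0 and arousal >= 0:
            return "happy/excited"     # positive valence, high arousal
        if valence < 0 and arousal >= 0:
            return "angry/tense"       # negative valence, high arousal
        if valence < 0:
            return "sad/depressed"     # negative valence, low arousal
        return "calm/relaxed"          # positive valence, low arousal

    print(emotion_quadrant(0.7, 0.4))    # happy/excited
    print(emotion_quadrant(-0.3, -0.6))  # sad/depressed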

  6. Multiple sparse volumetric priors for distributed EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-10-15

    We revisit the multiple sparse priors (MSP) algorithm implemented in the statistical parametric mapping software (SPM) for distributed EEG source reconstruction (Friston et al., 2008). In the present implementation, multiple cortical patches are introduced as source priors based on a dipole source space restricted to a cortical surface mesh. In this note, we present a technique to construct volumetric cortical regions to introduce as source priors by restricting the dipole source space to a segmented gray matter layer and using a region growing approach. This extension allows reconstruction of brain structures beyond the cortical surface and facilitates the use of more realistic volumetric head models with more layers, such as cerebrospinal fluid (CSF), compared to the standard 3-layered scalp-skull-brain head models. We illustrated the technique with ERP data and anatomical MR images in 12 subjects. Based on the segmented gray matter of each subject, cortical regions were created and introduced as source priors for MSP inversion, assuming two types of head models: the standard 3-layered scalp-skull-brain head models and extended 4-layered head models including CSF. We compared these models with the current implementation by assessing the free energy corresponding to each reconstruction, using Bayesian model selection for group studies. Strong evidence was found in favor of the volumetric MSP approach compared to the MSP approach based on cortical patches for both types of head models. Overall, the strongest evidence was found in favor of the volumetric MSP reconstructions based on the extended head models including CSF. These results were verified by comparing the reconstructed activity. The use of volumetric cortical regions as source priors is a useful complement to the present implementation, as it allows the introduction of more complex head models and volumetric source priors in future studies.

  7. Microscopic images dataset for automation of RBCs counting

    Directory of Open Access Journals (Sweden)

    Sherif Abbas

    2015-12-01

    Full Text Available A method for Red Blood Corpuscle (RBC) counting has been developed using RBC light microscopic images and a Matlab algorithm. The dataset consists of Red Blood Corpuscle (RBC) images and their RBC-segmented images. A detailed description using a flow chart is given in order to show how to produce the RBC mask. The RBC mask was used to count the number of RBCs in the blood smear image.
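
    Counting cells from a binary segmentation mask is typically done by labeling connected components and discarding regions too small to be cells. The record's implementation is in Matlab; the Python/scipy sketch below shows the equivalent step, with an illustrative minimum-area threshold.

    import numpy as np
    from scipy import ndimage

    def count_rbcs(mask, min_area=50):
        """Count connected foreground regions in a binary mask, ignoring tiny debris."""
        labels, n = ndimage.label(mask)   # label connected components
        areas = ndimage.sum(np.ones_like(labels), labels, index=range(1, n + 1))
        return int(np.sum(np.asarray(areas) >= min_area))

    # Tiny synthetic mask with two blobs (min_area lowered to suit the toy example).
    mask = np.zeros((10, 10), dtype=bool)
    mask[1:4, 1:4] = True
    mask[6:9, 5:9] = True
    print(count_rbcs(mask, min_area=4))  # 2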

  8. Volumetric measurements of a spatially growing dust acoustic wave

    Science.gov (United States)

    Williams, Jeremiah D.

    2012-11-01

    In this study, tomographic particle image velocimetry (tomo-PIV) techniques are used to make volumetric measurements of the dust acoustic wave (DAW) in a weakly coupled dusty plasma system in an argon, dc glow discharge plasma. These tomo-PIV measurements provide the first instantaneous volumetric measurement of a naturally occurring propagating DAW. These measurements reveal over the measured volume that the measured wave mode propagates in all three spatial dimensions and exhibits the same spatial growth rate and wavelength in each spatial direction.

  9. Volumetric measurements of a spatially growing dust acoustic wave

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Jeremiah D. [Physics Department, Wittenberg University, Springfield, Ohio 45504 (United States)

    2012-11-15

    In this study, tomographic particle image velocimetry (tomo-PIV) techniques are used to make volumetric measurements of the dust acoustic wave (DAW) in a weakly coupled dusty plasma system in an argon, dc glow discharge plasma. These tomo-PIV measurements provide the first instantaneous volumetric measurement of a naturally occurring propagating DAW. These measurements reveal over the measured volume that the measured wave mode propagates in all three spatial dimensions and exhibits the same spatial growth rate and wavelength in each spatial direction.

  10. Volumetric Pricing of Agricultural Water Supplies: A Case Study

    Science.gov (United States)

    Griffin, Ronald C.; Perry, Gregory M.

    1985-07-01

    Models of water consumption by rice producers are conceptualized and then estimated using cross-sectional time series data obtained from 16 Texas canal operators for the years 1977-1982. Two alternative econometric models demonstrate that both volumetric and flat rate water charges are strongly and inversely related to agricultural water consumption. Nonprice conservation incentives accompanying flat rates are hypothesized to explain the negative correlation of flat rate charges and water consumption. Application of these results suggests that water supply organizations in the sample population converting to volumetric pricing will generally reduce water consumption.

  11. A new bed elevation dataset for Greenland

    Directory of Open Access Journals (Sweden)

    J. A. Griggs

    2012-11-01

    Full Text Available We present a new bed elevation dataset for Greenland derived from a combination of multiple airborne ice thickness surveys undertaken between the 1970s and 2011. Around 344 000 line kilometres of airborne data were used, with the majority of this having been collected since the year 2000, when the last comprehensive compilation was undertaken. The airborne data were combined with satellite-derived elevations for non-glaciated terrain to produce a consistent bed digital elevation model (DEM) over the entire island, including across the glaciated/ice-free boundary. The DEM was extended to the continental margin with the aid of bathymetric data, primarily from a compilation for the Arctic. Ice shelf thickness was determined where a floating tongue exists, in particular in the north. The across-track spacing between flight lines warranted interpolation at 1 km postings near the ice sheet margin and 2.5 km in the interior. Grids of ice surface elevation, error estimates for the DEM, ice thickness and data sampling density were also produced alongside a mask of land/ocean/grounded ice/floating ice. Errors in bed elevation range from a minimum of ±6 m to about ±200 m, as a function of distance from an observation and local topographic variability. A comparison with the compilation published in 2001 highlights the improvement in resolution afforded by the new datasets, particularly along the ice sheet margin, where ice velocity is highest and changes are most marked. We use the new bed and surface DEMs to calculate the hydraulic potential for subglacial flow and present the large-scale pattern of water routing. We estimate that the volume of ice included in our land/ice mask would raise eustatic sea level by 7.36 m, excluding any solid earth effects that would take place during ice sheet decay.
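
    The subglacial hydraulic potential mentioned above is conventionally computed from the bed elevation and ice thickness grids (Shreve's formulation), so the new DEMs are sufficient inputs. The sketch below is a generic version of that calculation; the physical constants are standard, but the grids are placeholders.

    import numpy as np

    RHO_WATER = 1000.0   # kg/m^3
    RHO_ICE = 917.0      # kg/m^3
    G = 9.81             # m/s^2

    def hydraulic_potential(bed_elevation_m, ice_thickness_m):
        """Shreve hydraulic potential: phi = rho_w*g*z_bed + rho_i*g*H (in Pa)."""
        return RHO_WATER * G * bed_elevation_m + RHO_ICE * G * ice_thickness_m

    # Placeholder 3x3 grids standing in for the gridded DEM products.
    bed = np.array([[100., 90., 80.], [95., 85., 75.], [90., 80., 70.]])
    thickness = np.array([[1500., 1550., 1600.]] * 3)
    phi = hydraulic_potential(bed, thickness)
    print(phi)  # subglacial water is routed down-gradient of phi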

  12. X3D-Earth: Full Globe Coverage Utilizing Multiple Dataset

    Science.gov (United States)

    2010-09-01

    [Abstract unavailable in this record; only table-of-contents fragments survive, covering available imagery, cartography, terrain, and bathymetry, as well as geographic information systems (GIS), spatial information, processing software, and spatial services.]

  13. Comparison of Shallow Survey 2012 Multibeam Datasets

    Science.gov (United States)

    Ramirez, T. M.

    2012-12-01

    The purpose of the Shallow Survey common dataset is a comparison of the different technologies utilized for data acquisition in the shallow-survey marine environment. The common dataset consists of a series of surveys conducted over a common area of seabed using a variety of systems. It provides equipment manufacturers the opportunity to showcase their latest systems while giving hydrographic researchers and scientists a chance to test their latest algorithms on the dataset so that rigorous comparisons can be made. Five companies collected data for the common dataset in the Wellington Harbor area in New Zealand between May 2010 and May 2011: Kongsberg, Reson, R2Sonic, GeoAcoustics, and Applied Acoustics. The Wellington harbor and surrounding coastal area was selected since it has a number of well-defined features, including the HMNZS South Seas and HMNZS Wellington wrecks, an armored seawall constructed of Tetrapods and Akmons, aquifers, wharves and marinas. The seabed inside the harbor basin is largely fine-grained sediment, with gravel and reefs around the coast. The area outside the harbor on the southern coast is an active environment, with moving sand and exposed reefs; a marine reserve is also in this area. For consistency between datasets, the coastal research vessel R/V Ikatere and crew were used for all surveys conducted for the common dataset. Multibeam datasets collected for the Shallow Survey were processed for detailed analysis using Triton's Perspective processing software. Datasets from each sonar manufacturer were processed using the CUBE algorithm developed by the Center for Coastal and Ocean Mapping/Joint Hydrographic Center (CCOM/JHC). Each dataset was gridded at 0.5 and 1.0 meter resolutions for cross comparison and compliance with International Hydrographic Organization (IHO) requirements. Detailed comparisons were made of equipment specifications (transmit frequency, number of beams, beam width), data density, total uncertainty, and

  14. Comparison of global 3-D aviation emissions datasets

    Directory of Open Access Journals (Sweden)

    S. C. Olsen

    2013-01-01

    Full Text Available Aviation emissions are unique among transportation emissions, e.g., from road transportation and shipping, in that they occur at higher altitudes as well as at the surface. Aviation emissions of carbon dioxide, soot, and water vapor have direct radiative impacts on the Earth's climate system, while emissions of nitrogen oxides (NOx), sulfur oxides, carbon monoxide (CO), and hydrocarbons (HC) impact air quality and climate through their effects on ozone, methane, and clouds. The most accurate estimates of the impact of aviation on air quality and climate utilize three-dimensional chemistry-climate models and gridded four-dimensional (space and time) aviation emissions datasets. We compare five available aviation emissions datasets currently and historically used to evaluate the impact of aviation on climate and air quality: NASA-Boeing 1992, NASA-Boeing 1999, QUANTIFY 2000, Aero2k 2002, and AEDT 2006, and aviation fuel usage estimates from the International Energy Agency. Roughly 90% of all aviation emissions are in the Northern Hemisphere and nearly 60% of all fuelburn and NOx emissions occur at cruise altitudes in the Northern Hemisphere. While these datasets were created by independent methods and are thus not strictly suitable for analyzing trends, they suggest that commercial aviation fuelburn and NOx emissions increased over the last two decades while HC emissions likely decreased and CO emissions did not change significantly. The bottom-up estimates compared here are consistently lower than International Energy Agency fuelburn statistics, although the gap is significantly smaller in the more recent datasets. Overall the emissions distributions are quite similar for fuelburn and NOx, with regional peaks over the populated land masses of North America, Europe, and East Asia. For CO and HC there are relatively larger differences. There are, however, some distinct differences in the altitude distribution

  15. Comparison of global 3-D aviation emissions datasets

    Directory of Open Access Journals (Sweden)

    S. C. Olsen

    2012-07-01

    Full Text Available Aviation emissions are unique among transportation emissions, e.g., from road transportation and shipping, in that they occur at higher altitudes as well as at the surface. Aviation emissions of carbon dioxide, soot, and water vapor have direct radiative impacts on the Earth's climate system, while emissions of nitrogen oxides (NOx), sulfur oxides, carbon monoxide (CO), and hydrocarbons (HC) impact air quality and climate through their effects on ozone, methane, and clouds. The most accurate estimates of the impact of aviation on air quality and climate utilize three-dimensional chemistry-climate models and gridded four-dimensional (space and time) aviation emissions datasets. We compare five available aviation emissions datasets currently and historically used to evaluate the impact of aviation on climate and air quality: NASA-Boeing 1992, NASA-Boeing 1999, QUANTIFY 2000, Aero2k 2002, and AEDT 2006, and aviation fuel usage estimates from the International Energy Agency. Roughly 90% of all aviation emissions are in the Northern Hemisphere and nearly 60% of all fuelburn and NOx emissions occur at cruise altitudes in the Northern Hemisphere. While these datasets were created by independent methods and are thus not strictly suitable for analyzing trends, they suggest that commercial aviation fuelburn and NOx emissions increased over the last two decades while HC emissions likely decreased and CO emissions did not change significantly. The bottom-up estimates compared here are consistently lower than International Energy Agency fuelburn statistics, although the gap is significantly smaller in the more recent datasets. Overall the emissions distributions are quite similar for fuelburn and NOx, while for CO and HC there are relatively larger differences. There are, however, some distinct differences in the altitude distribution of emissions in certain regions for the Aero2k dataset.

  16. Geoseq: a tool for dissecting deep-sequencing datasets

    Directory of Open Access Journals (Sweden)

    Homann Robert

    2010-10-01

    Full Text Available Abstract Background Datasets generated on deep-sequencing platforms have been deposited in various public repositories such as the Gene Expression Omnibus (GEO), the Sequence Read Archive (SRA) hosted by the NCBI, or the DNA Data Bank of Japan (DDBJ). Despite being rich data sources, they have not been used much due to the difficulty in locating and analyzing datasets of interest. Results Geoseq http://geoseq.mssm.edu provides a new method of analyzing short reads from deep sequencing experiments. Instead of mapping the reads to reference genomes or sequences, Geoseq maps a reference sequence against the sequencing data. It is web-based, and holds pre-computed data from public libraries. The analysis reduces the input sequence to tiles and measures the coverage of each tile in a sequence library through the use of suffix arrays. The user can upload custom target sequences or use gene/miRNA names for the search and get back results as plots and spreadsheet files. Geoseq organizes the public sequencing data using a controlled vocabulary, allowing identification of relevant libraries by organism, tissue and type of experiment. Conclusions Analysis of small sets of sequences against deep-sequencing datasets, as well as identification of public datasets of interest, is simplified by Geoseq. We applied Geoseq to (a) identify differential isoform expression in mRNA-seq datasets, (b) identify miRNAs (microRNAs) in libraries, and identify mature and star sequences in miRNAs, and (c) identify potentially mis-annotated miRNAs. The ease of using Geoseq for these analyses suggests its utility and uniqueness as an analysis tool.
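
    The tile-coverage idea is easy to prototype without suffix arrays: cut the reference into fixed-length tiles and count, for each tile, how many reads contain it. Geoseq's suffix-array implementation is far more efficient; the hash-based sketch below only illustrates the concept.

    from collections import Counter

    def tile_coverage(reference, reads, tile_len=20):
        """Count how many reads contain each tile of the reference sequence."""
        tiles = [reference[i:i + tile_len]
                 for i in range(len(reference) - tile_len + 1)]
        coverage = Counter()
        for read in reads:
            # Index all substrings of the read once, then look up each tile.
            kmers = {read[i:i + tile_len] for i in range(len(read) - tile_len + 1)}
            for t in tiles:
                if t in kmers:
                    coverage[t] += 1
        return coverage

    reads = ["ACGTACGTACGTACGTACGTAAAA", "TTTTACGTACGTACGTACGTACGT"]
    print(tile_coverage("ACGTACGTACGTACGTACGTACGT", reads, tile_len=20))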

  17. Predicting MHC class I epitopes in large datasets

    Directory of Open Access Journals (Sweden)

    Lengauer Thomas

    2010-02-01

    Full Text Available Abstract Background Experimental screening of large sets of peptides with respect to their MHC binding capabilities is still very demanding due to the large number of possible peptide sequences and the extensive polymorphism of the MHC proteins. Therefore, there is significant interest in the development of computational methods for predicting the binding capability of peptides to MHC molecules, as a first step towards selecting peptides for actual screening. Results We have examined the performance of four diverse MHC Class I prediction methods on comparatively large HLA-A and HLA-B allele peptide binding datasets extracted from the Immune Epitope Database and Analysis resource (IEDB). The chosen methods span a representative cross-section of available methodology for MHC binding predictions. Until the development of IEDB, such an analysis was not possible, as the available peptide sequence datasets were small and spread out over many separate efforts. We tested three datasets which differ in the IC50 cutoff criteria used to select the binders and non-binders. The best performance was achieved when predictions were performed on the dataset consisting only of strong binders (IC50 less than 10 nM) and clear non-binders (IC50 greater than 10,000 nM). In addition, robustness of the predictions was only achieved for alleles that were represented with a sufficiently large (greater than 200) and balanced set of binders and non-binders. Conclusions All four methods show good to excellent performance on the comprehensive datasets, with the artificial neural network-based method outperforming the other methods. However, all methods show pronounced difficulties in correctly categorizing intermediate binders.

  18. Two ultraviolet radiation datasets that cover China

    Science.gov (United States)

    Liu, Hui; Hu, Bo; Wang, Yuesi; Liu, Guangren; Tang, Liqin; Ji, Dongsheng; Bai, Yongfei; Bao, Weikai; Chen, Xin; Chen, Yunming; Ding, Weixin; Han, Xiaozeng; He, Fei; Huang, Hui; Huang, Zhenying; Li, Xinrong; Li, Yan; Liu, Wenzhao; Lin, Luxiang; Ouyang, Zhu; Qin, Boqiang; Shen, Weijun; Shen, Yanjun; Su, Hongxin; Song, Changchun; Sun, Bo; Sun, Song; Wang, Anzhi; Wang, Genxu; Wang, Huimin; Wang, Silong; Wang, Youshao; Wei, Wenxue; Xie, Ping; Xie, Zongqiang; Yan, Xiaoyuan; Zeng, Fanjiang; Zhang, Fawei; Zhang, Yangjian; Zhang, Yiping; Zhao, Chengyi; Zhao, Wenzhi; Zhao, Xueyong; Zhou, Guoyi; Zhu, Bo

    2017-07-01

    Ultraviolet (UV) radiation has significant effects on ecosystems, environments, and human health, as well as atmospheric processes and climate change. Two ultraviolet radiation datasets are described in this paper. One contains hourly observations of UV radiation measured at 40 Chinese Ecosystem Research Network stations from 2005 to 2015. CUV3 broadband radiometers were used to observe the UV radiation, with an accuracy of 5%, which meets the World Meteorological Organization's measurement standards. The extremum method was used to control the quality of the measured datasets. The other dataset contains daily cumulative UV radiation estimates that were calculated using an all-sky estimation model combined with a hybrid model. The reconstructed daily UV radiation data span from 1961 to 2014. The mean absolute bias error and root-mean-square error are smaller than 30% at most stations, and most of the mean bias error values are negative, which indicates underestimation of the UV radiation intensity. These datasets can improve our basic knowledge of the spatial and temporal variations in UV radiation. Additionally, these datasets can be used in studies of potential ozone formation and atmospheric oxidation, as well as simulations of ecological processes.

  19. A New Dataset Size Reduction Approach for PCA-Based Classification in OCR Application

    Directory of Open Access Journals (Sweden)

    Mohammad Amin Shayegan

    2014-01-01

    Full Text Available A major problem of pattern recognition systems is the large volume of training datasets, including duplicate and similar training samples. To overcome this problem, several dataset size reduction and dimensionality reduction techniques have been introduced. The algorithms presently used for dataset size reduction usually remove samples near the centers of classes or support vector samples between different classes. However, samples near a class center include valuable information about the class characteristics, and support vector samples are important for evaluating system efficiency. This paper reports on the use of a Modified Frequency Diagram technique for dataset size reduction. In this newly proposed technique, a training dataset is rearranged and then sieved. The sieved training dataset, along with automatic feature extraction/selection using Principal Component Analysis, is used in an OCR application. The experimental results obtained using the proposed system on one of the biggest handwritten Farsi/Arabic numeral standard OCR datasets, Hoda, show a recognition rate of about 97%. The recognition speed increased by a factor of 2.28, while the accuracy decreased by only 0.7%, when a sieved version of the dataset, only half the size of the initial training dataset, was used.
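
    The pipeline described (sieve the training set, extract PCA features, then classify) can be sketched generically. In the example below, random subsampling stands in for the Modified Frequency Diagram sieving, which the record does not fully specify, and scikit-learn's digits data stands in for the Hoda dataset.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    X, y = load_digits(return_X_y=True)   # stand-in for the Hoda numeral dataset
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Stand-in for sieving: keep a random half of the training samples.
    rng = np.random.default_rng(0)
    keep = rng.choice(len(X_tr), len(X_tr) // 2, replace=False)

    for name, idx in [("full", np.arange(len(X_tr))), ("sieved (half)", keep)]:
        model = make_pipeline(PCA(n_components=30), KNeighborsClassifier())
        model.fit(X_tr[idx], y_tr[idx])
        print(name, "accuracy:", round(model.score(X_te, y_te), 3))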

  20. DSRank: A New Hyper-Linked Based Method to Rank Datasets in LOD Cloud

    Directory of Open Access Journals (Sweden)

    Hamidreza Fardad

    2014-12-01

    Full Text Available The increase of available datasets in the web of data makes ranking of the datasets very important. In the present article, the famous PageRank algorithm is extended and a new link-based method is proposed for ranking the datasets in the web of data. In this method, the number of links to a dataset, the types of the links, and the number of links of each type are considered, and a new hyperlink-based approach named DSRank is proposed. The suggested algorithm has been implemented on datasets collected from the web amounting to 20 GB, and all of the datasets were ranked using the suggested method. For evaluation, the access log files of Dbpedia, DBTune, and Dog Food were used. The number of requests by users in one day for these datasets was calculated, and the datasets were then ordered based on users' demand. Comparing the suggested algorithm's ranking with the number of user requests in a day indicates that the order of the ranks assigned to the datasets by the proposed method is correct.
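
    A PageRank extension that accounts for link types can be sketched as power iteration over type-weighted out-links. The weights and link types below are made up for illustration; this is not the exact DSRank formulation.

    def weighted_pagerank(links, weights, d=0.85, n_iter=50):
        """links: {source: [(target, link_type), ...]}; weights: {link_type: w}."""
        nodes = set(links) | {t for outs in links.values() for t, _ in outs}
        rank = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(n_iter):
            new = {n: (1 - d) / len(nodes) for n in nodes}
            for src, outs in links.items():
                total_w = sum(weights[lt] for _, lt in outs)
                for tgt, lt in outs:
                    # Distribute rank along out-links, proportional to link-type weight.
                    new[tgt] += d * rank[src] * weights[lt] / total_w
            rank = new  # note: dangling nodes simply leak rank in this simplified sketch
        return rank

    links = {"A": [("B", "owl:sameAs"), ("C", "rdfs:seeAlso")], "B": [("C", "owl:sameAs")]}
    weights = {"owl:sameAs": 2.0, "rdfs:seeAlso": 1.0}   # illustrative type weights
    print(weighted_pagerank(links, weights))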

  1. GUDM: Automatic Generation of Unified Datasets for Learning and Reasoning in Healthcare

    Directory of Open Access Journals (Sweden)

    Rahman Ali

    2015-07-01

    Full Text Available A wide array of biomedical data are generated and made available to healthcare experts. However, due to the diverse nature of the data, it is difficult to predict outcomes from them. It is therefore necessary to combine these diverse data sources into a single unified dataset. This paper proposes a global unified data model (GUDM) to provide a global unified data structure for all data sources and to generate a unified dataset using a "data modeler" tool. The proposed tool implements a user-centric, priority-based approach which can easily resolve the problems of unified data modeling and overlapping attributes across multiple datasets. The tool is illustrated using sample diabetes mellitus data. The diverse data sources used to generate the unified dataset for diabetes mellitus include clinical trial information, a social media interaction dataset and physical activity data collected using different sensors. To realize the significance of the unified dataset, we adopted a well-known rough set theory based rule creation process to create rules from the unified dataset. The evaluation of the tool on six different sets of locally created diverse datasets shows that the tool, on average, reduces the time effort of the experts and knowledge engineers by 94.1% while creating unified datasets.

  2. Preliminary performance analysis of a transverse flow spectrally selective two-slab packed bed volumetric receiver

    CSIR Research Space (South Africa)

    Roos, TH

    2016-05-01

    Full Text Available 21st SolarPACES International Conference (SolarPACES 2015), 13-16 October 2015: Preliminary Performance Analysis of a Transverse Flow Spectrally Selective Two-slab Packed Bed Volumetric Receiver, by Thomas H. Roos (Aeronautical Systems...) and Thomas M. Harms.

  3. Integrated circuits for volumetric ultrasound imaging with 2-D CMUT arrays.

    Science.gov (United States)

    Bhuyan, Anshuman; Choe, Jung Woo; Lee, Byung Chul; Wygant, Ira O; Nikoozadeh, Amin; Oralkan, Ömer; Khuri-Yakub, Butrus T

    2013-12-01

    Real-time volumetric ultrasound imaging systems require transmit and receive circuitry to generate ultrasound beams and process received echo signals. The complexity of building such a system is high because the front-end electronics need to be very close to the transducer, a large number of elements need to be interfaced to the back-end system, and image processing of a large dataset can limit the imaging volume rate. In this work, we present a 3-D imaging system using capacitive micromachined ultrasonic transducer (CMUT) technology that addresses many of the challenges in building such a system. We demonstrate two approaches to integrating the transducer and the front-end electronics. The transducer is a 5-MHz CMUT array with an 8 mm × 8 mm aperture size. The aperture consists of 1024 elements (32 × 32) with an element pitch of 250 μm. An integrated circuit (IC) provides a transmit beamformer and receive circuitry to improve the noise performance of the overall system. The assembly was interfaced with an FPGA and a back-end system (comprising a data acquisition system and a PC). The FPGA provided the digital I/O signals for the IC, and the back-end system was used to process the received RF echo data from the IC and reconstruct the volume image using a phased array imaging approach. Imaging experiments were performed using wire and spring targets, a ventricle model and a human prostate. Real-time volumetric images were captured at 5 volumes per second and are presented in this paper.
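
    The phased-array reconstruction mentioned above rests on delay-and-sum beamforming: echoes are aligned by geometric travel time and summed coherently. The sketch below is a single-focal-point toy version with placeholder data and a simplifying assumption (transmission from the array centre); it is not the IC's actual beamformer.

    import numpy as np

    def delay_and_sum(rf, element_x, focus, c=1540.0, fs=40e6):
        """Beamform one focal point from per-element RF traces.

        rf: (n_elements, n_samples) received echo data
        element_x: (n_elements,) lateral element positions in metres
        focus: (x, z) focal point in metres; c: speed of sound; fs: sample rate.
        """
        fx, fz = focus
        # Two-way travel time: transmit from the array centre, receive on each element.
        t_tx = np.hypot(fx, fz) / c
        t_rx = np.hypot(element_x - fx, fz) / c
        idx = np.round((t_tx + t_rx) * fs).astype(int)
        idx = np.clip(idx, 0, rf.shape[1] - 1)
        return rf[np.arange(rf.shape[0]), idx].sum()   # coherent sum across elements

    rf = np.random.default_rng(0).normal(size=(32, 4096))   # placeholder echo data
    x = (np.arange(32) - 15.5) * 250e-6                     # 250 um pitch, as in the record
    print(delay_and_sum(rf, x, focus=(0.0, 0.02)))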

  4. GPU-based computational adaptive optics for volumetric optical coherence microscopy

    Science.gov (United States)

    Tang, Han; Mulligan, Jeffrey A.; Untracht, Gavrielle R.; Zhang, Xihao; Adie, Steven G.

    2016-03-01

    Optical coherence tomography (OCT) is a non-invasive imaging technique that measures reflectance from within biological tissues. Current higher-NA optical coherence microscopy (OCM) technologies with near-cellular resolution have limited volumetric imaging capabilities due to the trade-offs between resolution and depth-of-field and sensitivity to aberrations. Such trade-offs can be addressed using computational adaptive optics (CAO), which corrects aberrations computationally at all depths based on the complex optical field measured by OCT. However, due to the large size of the datasets and the computational complexity of CAO and OCT algorithms, it is a challenge to achieve high-resolution 3D-OCM reconstructions at speeds suitable for clinical and research OCM imaging. In recent years, real-time OCT reconstruction incorporating both dispersion and defocus correction has been achieved through parallel computing on graphics processing units (GPUs). We add to these methods by implementing depth-dependent aberration correction for volumetric OCM using plane-by-plane phase deconvolution. Following both defocus and aberration correction, our reconstruction algorithm achieved a depth-independent transverse resolution of 2.8 μm, equal to the diffraction-limited focal-plane resolution. We have translated the CAO algorithm to a CUDA code implementation and tested the speed of the software in real time using two GPUs - NVIDIA Quadro K600 and GeForce TITAN Z. For a data volume containing 4096×256×256 voxels, our system's processing speed can keep up with the 60 kHz acquisition rate of the line-scan camera, and it takes 1.09 seconds to simultaneously update the CAO correction for 3 en face planes at user-selectable depths.
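
    Plane-by-plane computational refocusing, a building block behind such depth-dependent corrections, can be sketched as an angular-spectrum phase filter applied to each complex en face plane. The code below is a minimal numpy illustration under simplifying assumptions (single-pass propagation, evanescent components dropped, placeholder data); it is not the authors' CUDA pipeline.

    import numpy as np

    def refocus_plane(field, dz, wavelength, pixel_size):
        """Refocus a complex en face field by angular-spectrum propagation over dz."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pixel_size)
        fy = np.fft.fftfreq(ny, d=pixel_size)
        FX, FY = np.meshgrid(fx, fy)
        k = 2 * np.pi / wavelength
        # Axial wavenumber; evanescent components are clamped to zero phase.
        kz = np.sqrt(np.maximum(k**2 - (2*np.pi*FX)**2 - (2*np.pi*FY)**2, 0.0))
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

    field = np.random.default_rng(1).normal(size=(256, 256)) + 0j  # placeholder complex field
    print(np.abs(refocus_plane(field, dz=50e-6, wavelength=1.3e-6, pixel_size=2e-6)).mean())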

  5. Pbm: A new dataset for blog mining

    CERN Document Server

    Aziz, Mehwish

    2012-01-01

    Text mining is becoming vital as Web 2.0 offers collaborative content creation and sharing. Researchers now have a growing interest in text mining methods for discovering knowledge. Text mining researchers come from a variety of areas, such as natural language processing, computational linguistics, machine learning, and statistics. A typical text mining application involves preprocessing of text, stemming and lemmatization, tagging and annotation, deriving knowledge patterns, and evaluating and interpreting the results. There are numerous approaches for performing text mining tasks, such as clustering, categorization, sentiment analysis, and summarization. There is a growing need to standardize the evaluation of these tasks, and one major component of establishing standardization is to provide standard datasets for these tasks. Although there are various standard datasets available for traditional text mining tasks, there are very few, and expensive, datasets for the blog-mining task. Blogs, a new genre in Web 2.0, are a digital...

  6. Genomics dataset of unidentified disclosed isolates.

    Science.gov (United States)

    Rekadwad, Bhagwan N

    2016-09-01

    Analysis of DNA sequences is necessary for higher hierarchical classification of organisms. It gives clues about the characteristics of organisms and their taxonomic position. This dataset was chosen to find complexities in the unidentified DNA in the disclosed patents. A total of 17 unidentified DNA sequences were thoroughly analyzed. Quick response (QR) codes were generated, and AT/GC content analysis of the DNA sequences was carried out. The QR codes are helpful for quick identification of isolates, and the AT/GC content is helpful for studying their stability at different temperatures. Additionally, a dataset on cleavage codes and enzyme codes studied in a restriction digestion study, which is helpful for performing studies using short DNA sequences, is reported. The dataset disclosed here is new revelatory data for the exploration of unique DNA sequences for evaluation, identification, comparison and analysis.
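
    AT/GC content is a simple per-sequence statistic: the fractions of bases that are A/T and G/C. A minimal sketch follows; the example sequence is arbitrary and not one of the 17 patent sequences.

    def at_gc_content(seq):
        """Return (AT fraction, GC fraction) of a DNA sequence."""
        seq = seq.upper()
        at = sum(seq.count(b) for b in "AT")
        gc = sum(seq.count(b) for b in "GC")
        total = len(seq)
        return at / total, gc / total

    at, gc = at_gc_content("ATGCGCGTATTA")
    print(f"AT: {at:.2f}  GC: {gc:.2f}")  # higher GC generally means higher thermal stability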

  7. Spatial Evolution of Openstreetmap Dataset in Turkey

    Science.gov (United States)

    Zia, M.; Seker, D. Z.; Cakir, Z.

    2016-10-01

    A large amount of research work has already been done regarding many aspects of OpenStreetMap (OSM) datasets in recent years for developed countries and major world cities. On the other hand, limited work is present in the scientific literature for developing or underdeveloped ones because of poor data coverage. In the presented study, it is demonstrated how the Turkey-OSM dataset has spatially evolved over an 8-year time span (2007-2015) throughout the country. It is observed that there is an east-west spatial bias in OSM feature density across the country. Population density and literacy level are found to be the two main governing factors controlling this spatial trend. Future research paradigms may involve considering contributors' involvement and commenting on dataset health.

  8. Space-Time Transfinite Interpolation of Volumetric Material Properties.

    Science.gov (United States)

    Sanchez, Mathieu; Fryazinov, Oleg; Adzhiev, Valery; Comninos, Peter; Pasko, Alexander

    2015-02-01

    The paper presents a novel technique based on an extension of the general mathematical method of transfinite interpolation to solve an actual problem in heterogeneous volume modelling. It deals with time-dependent changes to volumetric material properties (material density, colour, and others) as a transformation of the volumetric material distributions in space-time accompanying geometric shape transformations such as metamorphosis. The main idea is to represent the geometry of both objects by scalar fields with distance properties, to establish in a higher-dimensional space a time gap during which the geometric transformation takes place, and to use these scalar fields to apply the new space-time transfinite interpolation to volumetric material attributes within this time gap. The proposed solution is analytical in nature, does not require heavy numerical computations, and can be used in real-time applications. Applications of this technique also include texturing and displacement mapping of time-variant surfaces, and parametric design of volumetric microstructures.

  9. In Vivo Real Time Volumetric Synthetic Aperture Ultrasound Imaging

    DEFF Research Database (Denmark)

    Bouzari, Hamed; Rasmussen, Morten Fischer; Brandt, Andreas Hjelm;

    2015-01-01

    Synthetic aperture (SA) imaging can be used to achieve real-time volumetric ultrasound imaging using 2-D array transducers. The sensitivity of SA imaging is improved by maximizing the acoustic output, but one must consider the limitations of an ultrasound system, both technical and biological...

  10. Automatic segmentation of pulmonary segments from volumetric chest CT scans.

    NARCIS (Netherlands)

    Rikxoort, E.M. van; Hoop, B. de; Vorst, S. van de; Prokop, M.; Ginneken, B. van

    2009-01-01

    Automated extraction of pulmonary anatomy provides a foundation for computerized analysis of computed tomography (CT) scans of the chest. A completely automatic method is presented to segment the lungs, lobes and pulmonary segments from volumetric CT chest scans. The method starts with lung segmentation...

  11. Volumetric T-spline Construction Using Boolean Operations

    Science.gov (United States)

    2013-07-01

  12. Video-rate volumetric optical coherence tomography-based microangiography

    Science.gov (United States)

    Baran, Utku; Wei, Wei; Xu, Jingjiang; Qi, Xiaoli; Davis, Wyatt O.; Wang, Ruikang K.

    2016-04-01

    Video-rate volumetric optical coherence tomography (vOCT) is relatively young in the field of OCT imaging but has great potential in biomedical applications. Owing to the recent development of MHz-range swept laser sources, vOCT has started to gain attention in the community. Here, we report the first in vivo video-rate volumetric OCT-based microangiography (vOMAG) system, built by integrating an 18-kHz resonant microelectromechanical system (MEMS) mirror with a 1.6-MHz FDML swept source operating at ˜1.3 μm wavelength. Because the MEMS scanner can offer an effective B-frame rate of 36 kHz, we are able to engineer vOMAG with a video rate up to 25 Hz. This system was utilized for real-time volumetric in vivo visualization of cerebral microvasculature in mice. Moreover, we monitored the blood perfusion dynamics during stimulation within the mouse ear in vivo. We also discuss the system's limitations. Prospective MEMS-enabled OCT probes with real-time volumetric functional imaging capability can have a significant impact on endoscopic imaging and image-guided surgery applications.

  13. Visualising Large Datasets in TOPCAT v4

    CERN Document Server

    Taylor, Mark

    2014-01-01

    TOPCAT is a widely used desktop application for manipulation of astronomical catalogues and other tables, which has long provided fast interactive visualisation features including 1, 2 and 3-d plots, multiple datasets, linked views, color coding, transparency and more. In Version 4 a new plotting library has been written from scratch to deliver new and enhanced visualisation capabilities. This paper describes some of the considerations in the design and implementation, particularly in regard to providing comprehensible interactive visualisation for multi-million point datasets.

  14. ArcHydro global datasets for Hawaii StreamStats

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This dataset consists of a personal geodatabase containing several vector datasets. These datasets may be used with the ArcHydro Tools, developed by ESRI in...

  15. Augmented Reality Prototype for Visualizing Large Sensors’ Datasets

    Directory of Open Access Journals (Sweden)

    Folorunso Olufemi A.

    2011-04-01

    Full Text Available This paper addresses the development of an augmented reality (AR) based scientific visualization system prototype that supports identification, localisation, and 3D visualisation of oil leakage sensor datasets. Sensors generate significant amounts of multivariate data during normal and leak situations, which makes data exploration and visualisation daunting tasks. A model to manage such data and enhance the computational support needed for effective exploration is therefore developed in this paper. A challenge of this approach is to reduce data inefficiency. The paper presents a model for computing the information gain of each data attribute and determining a lead attribute. The computed lead attribute is then used to develop an AR-based scientific visualization interface which automatically identifies, localises and visualizes all data relevant to a selected region of interest (ROI) on the network. The necessary architectural system support and interface requirements for such visualizations are also presented.
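
    A generic sketch of lead-attribute selection by information gain (the discretized sensor attributes and leak labels are hypothetical; this is not the paper's code):

        import numpy as np

        def entropy(labels):
            # Shannon entropy of a discrete label array.
            _, counts = np.unique(labels, return_counts=True)
            p = counts / counts.sum()
            return -(p * np.log2(p)).sum()

        def information_gain(attr, labels):
            # Reduction in label entropy after splitting on a discrete attribute.
            h = entropy(labels)
            for v in np.unique(attr):
                mask = attr == v
                h -= mask.mean() * entropy(labels[mask])
            return h

        # Hypothetical sensor readings discretized to low/high, with leak labels.
        pressure = np.array(["low", "low", "high", "high", "low"])
        flow     = np.array(["low", "high", "high", "low", "low"])
        leak     = np.array([0, 1, 1, 0, 0])
        gains = {"pressure": information_gain(pressure, leak),
                 "flow": information_gain(flow, leak)}
        print(max(gains, key=gains.get), gains)  # the 'lead attribute'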

  16. Analysis of Changing Swarm Rate using Volumetric Strain

    Science.gov (United States)

    Kumazawa, T.; Ogata, Y.; Kimura, K.; Maeda, K.; Kobayashi, A.

    2015-12-01

    Near the eastern coast of the Izu peninsula is an active submarine volcanic region in Japan, where magma intrusions have been observed many times. The forecasting of earthquake swarm activities and eruptions is a serious concern, particularly in nearby hot spring resort areas. It is well known that the temporal durations of the swarm activities correlate with early volumetric strain changes at an observation station about 20 km away. The Earthquake Research Committee (2010) therefore investigated some empirical statistical relations to predict the sizes of swarm activity. Here we examined the background seismicity rate changes during these swarm periods using the non-stationary ETAS model (Kumazawa and Ogata, 2013, 2014), and found the following. The volumetric strain data, modified by removing the effects of earth tides, precipitation and coseismic jumps, have significantly higher cross-correlations with the estimated background rates of the ETAS model than with the swarm rate changes. Specifically, the background seismicity rate synchronizes more clearly with the strain change, with lags of around half a day. These relations suggest an enhanced prediction of earthquakes in this region using volumetric strain measurements. Hence we propose an extended ETAS model in which the background rate is modulated by the volumetric strain data. We have also found that the response functions to the strain data can be well approximated by exponential functions with the same decay rate, and that their intercepts are inversely proportional to the distances between the volumetric strain-meter and the onset location of the swarm. Numerical results from the proposed model show consistent outcomes for the various major swarms in this region.
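
    A toy sketch of such a modulation (all parameter values are made up; this is not the authors' implementation): the background rate is a baseline scaled by the exponential of the corrected strain signal convolved with an exponential response kernel:

        import numpy as np

        # Toy sketch (not the authors' code): modulate an ETAS-style background
        # rate by convolving a corrected strain signal with an exponential
        # response kernel, mu(t) = mu0 * exp(a * (strain * kernel)(t)).
        dt = 0.05                        # days
        t = np.arange(0, 30, dt)
        strain = np.zeros_like(t)
        strain[(t > 5) & (t < 8)] = 1.0  # hypothetical strain episode

        decay = 2.0                      # 1/day, kernel decay rate (assumed)
        kernel = np.exp(-decay * t)
        response = np.convolve(strain, kernel)[: t.size] * dt

        mu0, a = 0.2, 1.5                # baseline rate and coupling (made up)
        mu = mu0 * np.exp(a * response)
        print(f"peak background rate: {mu.max():.3f} events/day "
              f"(peak lag: {t[mu.argmax()] - 5:.1f} days after onset)")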

  17. Thesaurus Dataset of Educational Technology in Chinese

    Science.gov (United States)

    Wu, Linjing; Liu, Qingtang; Zhao, Gang; Huang, Huan; Huang, Tao

    2015-01-01

    The thesaurus dataset of educational technology is a knowledge description of educational technology in Chinese. The aims of this thesaurus were to collect the subject terms in the domain of educational technology, facilitate the standardization of terminology and promote the communication between Chinese researchers and scholars from various…

  18. The Geometry of Finite Equilibrium Datasets

    DEFF Research Database (Denmark)

    Balasko, Yves; Tvede, Mich

    We investigate the geometry of finite datasets defined by equilibrium prices, income distributions, and total resources. We show that the equilibrium condition imposes no restrictions if total resources are collinear, a property that is robust to small perturbations. We also show that the set...

  19. A Neural Network Classifier of Volume Datasets

    CERN Document Server

    Zukić, Dženan; Kolb, Andreas

    2009-01-01

    Many state-of-the-art visualization techniques must be tailored to the specific type of dataset, its modality (CT, MRI, etc.), the recorded object or anatomical region (head, spine, abdomen, etc.) and other parameters related to the data acquisition process. While parts of the information (imaging modality and acquisition sequence) may be obtained from the meta-data stored with the volume scan, there is important information which is not stored explicitly (anatomical region, tracing compound). Also, meta-data might be incomplete, inappropriate or simply missing. This paper presents a novel and simple method of determining the type of a dataset from previously defined categories. 2D histograms based on intensity and gradient magnitude of datasets are used as input to a neural network, which classifies the dataset into one of several categories it was trained with. The proposed method is an important building block for visualization systems to be used autonomously by non-experts. The method has been tested on 80 datasets,...
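
    A brief sketch of the histogram feature itself (synthetic stand-in volume; the bin counts and normalization are illustrative choices, not the paper's settings):

        import numpy as np

        # Sketch: 2D histogram over (intensity, gradient magnitude) of a volume,
        # the input feature used for dataset-type classification. Synthetic data.
        rng = np.random.default_rng(0)
        vol = rng.normal(100, 20, size=(64, 64, 64))   # stand-in for a CT scan

        gx, gy, gz = np.gradient(vol.astype(float))
        gmag = np.sqrt(gx**2 + gy**2 + gz**2)

        hist, _, _ = np.histogram2d(vol.ravel(), gmag.ravel(), bins=(64, 64))
        feature = np.log1p(hist)               # compress dynamic range
        feature /= feature.max()               # normalize for the network input
        print(feature.shape)                   # (64, 64) -> flatten, feed to NN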

  20. The Derivation of Fault Volumetric Properties from 3D Trace Maps Using Outcrop Constrained Discrete Fracture Network Models

    Science.gov (United States)

    Hodgetts, David; Seers, Thomas

    2015-04-01

    Fault systems are important structural elements within many petroleum reservoirs, acting as potential conduits, baffles or barriers to hydrocarbon migration. Large, seismic-scale faults often serve as reservoir bounding seals, forming structural traps which have proved to be prolific plays in many petroleum provinces. Though inconspicuous within most seismic datasets, smaller subsidiary faults, commonly within the damage zones of parent structures, may also play an important role. These smaller faults typically form narrow, tabular low-permeability zones which serve to compartmentalize the reservoir, negatively impacting upon hydrocarbon recovery. Though the advent of 3D seismic surveys has brought considerable improvements in the visualization of reservoir-scale fault systems, the occlusion of smaller scale faults in such datasets is a source of significant uncertainty during prospect evaluation. The limited capacity of conventional subsurface datasets to probe the spatial distribution of these smaller scale faults has given rise to a large number of outcrop based studies, allowing their intensity, connectivity and size distributions to be explored in detail. Whilst these studies have yielded an improved theoretical understanding of the style and distribution of sub-seismic scale faults, the ability to transform observations from outcrop to quantities that are relatable to reservoir volumes remains elusive. These issues arise from the fact that outcrops essentially offer a pseudo-3D window into the rock volume, making the extrapolation of surficial fault properties such as areal density (fracture length per unit area: P21) to equivalent volumetric measures (i.e. fracture area per unit volume: P32) applicable to fracture modelling extremely challenging. Here, we demonstrate an approach which harnesses advances in the extraction of 3D trace maps from surface reconstructions using calibrated image sequences, in combination with a novel semi
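
    For orientation, the two intensity measures relate roughly as follows (a generic stereological sketch, not the paper's workflow; the conversion factor C is orientation-dependent, and constraining it is what DFN simulation is used for):

        import numpy as np

        # Generic sketch of fracture-intensity measures (not the paper's method).
        # P21 = total trace length / sampling area; P32 = fracture area / volume.
        # A common stereological relation is P32 ~ C * P21, where C depends on
        # the fracture orientation distribution relative to the sampling plane
        # and is usually calibrated against DFN simulations.
        trace_lengths_m = np.array([2.3, 0.8, 5.1, 1.7, 3.4])  # hypothetical
        outcrop_area_m2 = 40.0

        p21 = trace_lengths_m.sum() / outcrop_area_m2
        C = 1.5   # assumed conversion factor; calibrate via DFN modelling
        p32 = C * p21
        print(f"P21 = {p21:.3f} 1/m, P32 ~ {p32:.3f} 1/m")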

  1. Public Availability to ECS Collected Datasets

    Science.gov (United States)

    Henderson, J. F.; Warnken, R.; McLean, S. J.; Lim, E.; Varner, J. D.

    2013-12-01

    Coastal nations have spent considerable resources exploring the limits of their extended continental shelf (ECS) beyond 200 nm. Although these studies are funded to fulfill requirements of the UN Convention on the Law of the Sea, the investments are producing new data sets in frontier areas of Earth's oceans that will be used to understand, explore, and manage the seafloor and sub-seafloor for decades to come. Although many of these datasets are considered proprietary until a nation's potential ECS has become 'final and binding', an increasing amount of data is being released and utilized by the public. Data sets include multibeam, seismic reflection/refraction, bottom sampling, and geophysical data. The U.S. ECS Project, a multi-agency collaboration whose mission is to establish the full extent of the continental shelf of the United States consistent with international law, relies heavily on data and accurate, standard metadata. The United States has made it a priority to make all data collected with ECS funding available to the public as quickly as possible. The National Oceanic and Atmospheric Administration's (NOAA) National Geophysical Data Center (NGDC) supports this objective by partnering with academia and other federal government mapping agencies to archive, inventory, and deliver marine mapping data in a coordinated, consistent manner. This includes ensuring quality, standard metadata and developing and maintaining data delivery capabilities built on modern digital data archives. Other countries, such as Ireland, have submitted their ECS data for public availability and many others have pledged to participate in the future. The data services provided by NGDC support the U.S. ECS effort as well as many developing nations' ECS efforts through the U.N. Environmental Program. Modern discovery, visualization, and delivery of scientific data and derived products that span national and international sources of data ensure the greatest re-use of data and

  2. Interpolation of diffusion weighted imaging datasets.

    Science.gov (United States)

    Dyrby, Tim B; Lundell, Henrik; Burke, Mark W; Reislev, Nina L; Paulson, Olaf B; Ptito, Maurice; Siebner, Hartwig R

    2014-12-01

    Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve high image resolution for finer anatomical details and signal-to-noise-ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal to the voxel size showed that conventional higher-order interpolation methods improved the geometrical representation of white-matter tracts with reduced partial-volume-effect (PVE), except at tract boundaries. Simulations and interpolation of ex-vivo monkey brain DWI datasets revealed that conventional interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data. For validation we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical resolution and more anatomical details in complex regions such as tract boundaries and cortical layers, which are normally only visualized at higher image resolutions. Similar results were found with a typical clinical human DWI dataset. However, a possible bias in quantitative values imposed by the interpolation method used should be considered. The results indicate that conventional interpolation methods can be successfully applied to DWI datasets for mining anatomical details that are normally seen only at higher resolutions, which will aid in tractography and microstructural mapping of tissue compartments.
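
    As an illustration of the pre-reconstruction upsampling step (a generic scipy-based sketch with synthetic data, not the study's pipeline):

        import numpy as np
        from scipy.ndimage import zoom

        # Sketch: upsample a 4-D DWI dataset (x, y, z, gradient direction) by a
        # factor of 2 per spatial axis (8x more voxels) with cubic b-spline
        # interpolation, prior to tensor fitting. Synthetic stand-in data.
        dwi = np.random.rand(32, 32, 20, 33).astype(np.float32)  # 32 dirs + b0

        factor = 2
        upsampled = np.stack(
            [zoom(dwi[..., g], factor, order=3) for g in range(dwi.shape[-1])],
            axis=-1,
        )
        print(dwi.shape, "->", upsampled.shape)  # -> (64, 64, 40, 33)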

  3. Spatially continuous dataset at local scale of Taita Hills in Kenya and Mount Kilimanjaro in Tanzania

    Directory of Open Access Journals (Sweden)

    Sizah Mwalusepo

    2016-09-01

    Full Text Available Climate change is a global concern, requiring spatially continuous datasets at local scale and modeling of meteorological variables. This data article provides interpolated temperature, rainfall and relative humidity datasets at local scale along the Taita Hills and Mount Kilimanjaro altitudinal gradients in Kenya and Tanzania, respectively. Temperature and relative humidity were recorded hourly using automatic Onset THHOBO data loggers, and rainfall was recorded daily using GENERAL® wireless rain gauges. Thin plate spline (TPS) interpolation was used, with the degree of data smoothing determined by minimizing the generalized cross validation. The dataset provides information on the current climatic conditions along the two mountainous altitudinal gradients in Kenya and Tanzania, and will thus enhance future research.
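
    A minimal sketch of TPS interpolation with smoothing (using scipy's RBFInterpolator; the station coordinates and readings below are invented):

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # Sketch: thin plate spline interpolation of temperature along an
        # altitudinal gradient (hypothetical station data). The smoothing
        # parameter plays the role of the data-smoothing degree; the study
        # chose it by minimizing generalized cross validation.
        stations = np.array([[38.30, -3.40,  900.0],   # lon, lat, altitude (m)
                             [38.35, -3.42, 1200.0],
                             [38.40, -3.45, 1500.0],
                             [38.32, -3.38, 1100.0],
                             [38.45, -3.50, 1800.0]])
        temp_c = np.array([24.1, 21.8, 19.2, 22.5, 17.0])

        tps = RBFInterpolator(stations, temp_c, kernel="thin_plate_spline",
                              smoothing=0.1)
        query = np.array([[38.37, -3.43, 1350.0]])
        print(tps(query))   # interpolated temperature at the query point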

  4. End User Licence to Open Government Data? A Simulated Penetration Attack on Two Social Survey Datasets

    Directory of Open Access Journals (Sweden)

    Elliot Mark

    2016-06-01

    Full Text Available In the UK, the transparency agenda is forcing data stewardship organisations to review their dissemination policies and to consider whether to release data that is currently only available to a restricted community of researchers under licence as open data. Here we describe the results of a study providing evidence about the risks of such an approach via a simulated attack on two social survey datasets. This is also the first systematic attempt to simulate a jigsaw identification attack (one using a mashup of multiple data sources) on an anonymised dataset. The information that we draw on is collected from multiple online data sources and purchasable commercial data. The results indicate that such an attack against anonymised end user licence (EUL) datasets, if converted into open datasets, is possible, and therefore we recommend that penetration tests should be factored into any decision to make datasets (that are about people) open.

  5. Detecting and Quantifying Forest Change: The Potential of Existing C- and X-Band Radar Datasets.

    Directory of Open Access Journals (Sweden)

    Mihai A Tanase

    Full Text Available This paper evaluates the opportunity provided by global interferometric radar datasets for monitoring deforestation, degradation and forest regrowth in tropical and semi-arid environments. The paper describes an easy-to-implement method for detecting forest spatial changes and estimating their magnitude. The datasets were acquired within space-borne high-spatial-resolution radar missions at near-global scales, and are thus significant for monitoring systems developed under the United Nations Framework Convention on Climate Change (UNFCCC). The approach presented in this paper was tested in two areas located in Indonesia and Australia. Forest change estimation was based on differences between a reference dataset acquired in February 2000 by the Shuttle Radar Topography Mission (SRTM) and TanDEM-X mission (TDM) datasets acquired in 2011 and 2013. The synergy between the SRTM and TDM datasets allowed not only identifying changes in forest extent but also estimating their magnitude with respect to the reference through variations in forest height.
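
    The core of the change estimation reduces to differencing two co-registered height rasters; a schematic sketch (synthetic arrays, illustrative threshold):

        import numpy as np

        # Schematic sketch: forest change as the difference between a TanDEM-X
        # height raster (2011/2013) and the SRTM reference (2000). Synthetic
        # data; real rasters must be co-registered and corrected first.
        rng = np.random.default_rng(1)
        srtm_2000 = rng.uniform(0, 30, size=(100, 100))   # height proxy, m
        tdm_2013 = srtm_2000 + rng.normal(0, 1, size=(100, 100))
        tdm_2013[20:40, 20:40] -= 15.0                    # simulated clearing

        dh = tdm_2013 - srtm_2000
        loss_threshold_m = -5.0        # illustrative height-loss threshold
        deforested = dh < loss_threshold_m
        print(f"deforested fraction: {deforested.mean():.1%}, "
              f"mean height loss there: {dh[deforested].mean():.1f} m")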

  6. VFDB 2016: hierarchical and refined dataset for big data analysis—10 years on

    Science.gov (United States)

    Chen, Lihong; Zheng, Dandan; Liu, Bo; Yang, Jian; Jin, Qi

    2016-01-01

    The virulence factor database (VFDB, http://www.mgc.ac.cn/VFs/) is dedicated to providing up-to-date knowledge of virulence factors (VFs) of various bacterial pathogens. Since its inception the VFDB has served as a comprehensive repository of bacterial VFs for over a decade. However, the exponential growth in the amount of biological data poses a challenge to the current database with regard to big data analysis. We recently improved two aspects of the infrastructural dataset of the VFDB: (i) we removed the redundancy introduced by previous releases and generated two hierarchical datasets – one core dataset of experimentally verified VFs only and another full dataset including all known and predicted VFs – and (ii) we refined the gene annotation of the core dataset with controlled vocabularies. Our efforts enhanced the data quality of the VFDB and promoted the usability of the database in the big data era for the bioinformatic mining of the explosively growing data regarding bacterial VFs. PMID:26578559

  7. Quantifying uncertainty in observational rainfall datasets

    Science.gov (United States)

    Lennard, Chris; Dosio, Alessandro; Nikulin, Grigory; Pinto, Izidine; Seid, Hussen

    2015-04-01

    The CO-ordinated Regional Downscaling Experiment (CORDEX) has to date seen the publication of at least ten journal papers that examine the African domain during 2012 and 2013. Five of these papers consider Africa generally (Nikulin et al. 2012, Kim et al. 2013, Hernandes-Dias et al. 2013, Laprise et al. 2013, Panitz et al. 2013) and five have regional foci: Tramblay et al. (2013) on Northern Africa, Mariotti et al. (2014) and Gbobaniyi et al. (2013) on West Africa, Endris et al. (2013) on East Africa and Kalagnoumou et al. (2013) on southern Africa. A further three papers that the authors know about are under review. These papers all use observed rainfall and/or temperature data to evaluate/validate the regional model output and often proceed to assess projected changes in these variables due to climate change in the context of these observations. The most popular reference rainfall data used are the CRU, GPCP, GPCC, TRMM and UDEL datasets. However, as Kalagnoumou et al. (2013) point out, there are many other rainfall datasets available for consideration, for example, CMORPH, FEWS, TAMSAT & RIANNAA, TAMORA and the WATCH & WATCH-DEI data. They, with others (Nikulin et al. 2012, Sylla et al. 2012), show that the observed datasets can have a very wide spread at a particular space-time coordinate. As more ground-, space- and reanalysis-based rainfall products become available, all of which use different methods to produce precipitation data, the selection of reference data is becoming an important factor in model evaluation. A number of factors can contribute to uncertainty in the reliability and validity of the datasets, such as radiance conversion algorithms, the quantity and quality of available station data, and the interpolation techniques and blending methods used to combine satellite and gauge-based products. However, to date no comprehensive study has been performed to evaluate the uncertainty in these observational datasets. We assess 18 gridded

  8. Comparison of hybrid volumetric modulated arc therapy (VMAT) technique and double arc VMAT technique in the treatment of prostate cancer

    Directory of Open Access Journals (Sweden)

    Amaloo Christopher

    2015-09-01

    Full Text Available Background. Volumetric modulated arc therapy (VMAT) has quickly become accepted as the standard of care for the treatment of prostate cancer, based on studies showing it is able to provide faster delivery with adequate target coverage and reduced monitor units while maintaining organ-at-risk (OAR) sparing. This study aims to demonstrate the potential to increase dose conformality, with increased planner control and OAR sparing, using a hybrid treatment technique compared to VMAT.

  9. Increasing consistency of disease biomarker prediction across datasets.

    Directory of Open Access Journals (Sweden)

    Maria D Chikina

    Full Text Available Microarray studies with human subjects often have limited sample sizes, which hampers the ability to detect reliable biomarkers associated with disease and motivates the need to aggregate data across studies. However, human gene expression measurements may be influenced by many non-random factors such as genetics, sample preparation, and tissue heterogeneity. These factors can contribute to a lack of agreement among related studies, limiting the utility of their aggregation. We show that it is feasible to carry out an automatic correction of individual datasets to reduce the effect of such 'latent variables' (without prior knowledge of the variables) in such a way that datasets addressing the same condition show better agreement once each is corrected. We build our approach on the method of surrogate variable analysis (SVA), but we demonstrate that the original algorithm is unsuitable for the analysis of human tissue samples that are mixtures of different cell types. We propose a modification to SVA that is crucial to obtaining the improvement in agreement that we observe. We develop our method on a compendium of multiple sclerosis data and verify it on an independent compendium of Parkinson's disease datasets. In both cases, we show that our method is able to improve agreement across varying study designs, platforms, and tissues. This approach has the potential for wide applicability to any field where lack of inter-study agreement has been a concern.

  10. Principal Component Analysis of Process Datasets with Missing Values

    Directory of Open Access Journals (Sweden)

    Kristen A. Severson

    2017-07-01

    Full Text Available Datasets with missing values arising from causes such as sensor failure, inconsistent sampling rates, and merging data from different systems are common in the process industry. Methods for handling missing data typically operate during data pre-processing, but can also occur during model building. This article considers missing data within the context of principal component analysis (PCA), which is a method originally developed for complete data that has widespread industrial application in multivariate statistical process control. Due to the prevalence of missing data and the success of PCA for handling complete data, several PCA algorithms that can act on incomplete data have been proposed. Here, algorithms for applying PCA to datasets with missing values are reviewed. A case study is presented to demonstrate the performance of the algorithms, and suggestions are made with respect to choosing which algorithm is most appropriate for particular settings. An alternating algorithm based on the singular value decomposition achieved the best results in the majority of test cases involving process datasets.
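
    A compact sketch of an alternating SVD approach of this kind (a generic EM-style imputation, not necessarily the exact algorithm benchmarked in the article):

        import numpy as np

        # Sketch: PCA on a dataset with missing values via alternating SVD:
        # fill missing entries, fit a rank-k SVD, re-fill the gaps from the
        # reconstruction, and iterate to convergence (EM-style imputation).
        def pca_missing(X, k, n_iter=100, tol=1e-6):
            missing = np.isnan(X)
            Xf = np.where(missing, np.nanmean(X, axis=0), X)  # init: col means
            prev = np.inf
            for _ in range(n_iter):
                mu = Xf.mean(axis=0)
                U, s, Vt = np.linalg.svd(Xf - mu, full_matrices=False)
                recon = mu + (U[:, :k] * s[:k]) @ Vt[:k]
                err = np.linalg.norm(Xf[missing] - recon[missing])
                Xf[missing] = recon[missing]                  # update gaps only
                if abs(prev - err) < tol:
                    break
                prev = err
            return Xf, Vt[:k]        # imputed data and principal directions

        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 8)) @ rng.normal(size=(8, 8))
        X[rng.random(X.shape) < 0.1] = np.nan                 # 10% missing
        X_imputed, components = pca_missing(X, k=3)
        print(components.shape)   # (3, 8)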

  11. Synchronization of networks of chaotic oscillators: Structural and dynamical datasets

    Directory of Open Access Journals (Sweden)

    Ricardo Sevilla-Escoboza

    2016-06-01

    Full Text Available We provide the topological structure of a series of N=28 Rössler chaotic oscillators diffusively coupled through one of their variables. The dynamics of the y variable describing the evolution of the individual nodes of the network are given for a wide range of coupling strengths. The datasets capture the transition from unsynchronized to synchronized behavior as a function of the coupling strength between oscillators. The fact that both the underlying topology of the system and the dynamics of the nodes are given together makes this dataset a suitable candidate to evaluate the interplay between functional and structural networks and to serve as a benchmark to quantify the ability of a given algorithm to extract the structural network of connections from the observation of the dynamics of the nodes. At the same time, it is possible to use the dataset to analyze the different dynamical properties (randomness, complexity, reproducibility, etc.) of an ensemble of oscillators as a function of the coupling strength.
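
    A toy reproduction of such a setup (a smaller ring of diffusively coupled Rössler oscillators with standard parameters; the dataset's own N=28 topology and coupling variable are not assumed here):

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy sketch: N Rössler oscillators diffusively coupled through the x
        # variable on a ring (the dataset uses N = 28 and its own topology).
        N, sigma = 6, 0.5                  # oscillators, coupling strength
        a, b, c = 0.2, 0.2, 5.7            # standard Rössler parameters
        A = np.roll(np.eye(N), 1, 0) + np.roll(np.eye(N), -1, 0)  # ring
        L = np.diag(A.sum(1)) - A                                 # Laplacian

        def rhs(t, state):
            x, y, z = state.reshape(3, N)
            dx = -y - z - sigma * (L @ x)  # diffusive coupling in x
            dy = x + a * y
            dz = b + z * (x - c)
            return np.concatenate([dx, dy, dz])

        state0 = np.random.default_rng(2).uniform(-1, 1, 3 * N)
        sol = solve_ivp(rhs, (0, 200), state0, max_step=0.05)
        x = sol.y[:N]
        sync_err = np.mean(np.std(x, axis=0))  # 0 when fully synchronized
        print(f"mean sync error: {sync_err:.3f}")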

  12. Scaling statistical multiple sequence alignment to large datasets

    Directory of Open Access Journals (Sweden)

    Michael Nute

    2016-11-01

    Full Text Available Abstract Background Multiple sequence alignment is an important task in bioinformatics, and alignments of large datasets containing hundreds or thousands of sequences are increasingly of interest. While many alignment methods exist, the most accurate alignments are likely to be based on stochastic models where sequences evolve down a tree with substitutions, insertions, and deletions. While some methods have been developed to estimate alignments under these stochastic models, only the Bayesian method BAli-Phy has been able to run on even moderately large datasets, containing 100 or so sequences. A technique to extend BAli-Phy to enable alignments of thousands of sequences could potentially improve alignment and phylogenetic tree accuracy on large-scale data beyond the best-known methods today. Results We use simulated data with up to 10,000 sequences representing a variety of model conditions, including some that are significantly divergent from the statistical models used in BAli-Phy and elsewhere. We give a method for incorporating BAli-Phy into PASTA and UPP, two strategies for enabling alignment methods to scale to large datasets, and give alignment and tree accuracy results measured against the ground truth from simulations. Comparable results are also given for other methods capable of aligning this many sequences. Conclusions Extensions of BAli-Phy using PASTA and UPP produce significantly more accurate alignments and phylogenetic trees than the current leading methods.

  13. ENHANCED DATA DISCOVERABILITY FOR IN SITU HYPERSPECTRAL DATASETS

    Directory of Open Access Journals (Sweden)

    B. Rasaiah

    2016-06-01

    Full Text Available Field spectroscopic metadata is a central component in the quality assurance, reliability, and discoverability of hyperspectral data and the products derived from it. Cataloguing, mining, and interoperability of these datasets rely upon the robustness of metadata protocols for field spectroscopy, and on the software architecture to support the exchange of these datasets. Currently no standard for in situ spectroscopy data or metadata protocols exists. This inhibits the effective sharing of growing volumes of in situ spectroscopy datasets and prevents exploiting the benefits of integrating with the evolving range of data sharing platforms. A core metadataset for field spectroscopy was introduced by Rasaiah et al. (2011-2015), with extended support for specific applications. This paper presents a prototype model for an OGC- and ISO-compliant platform-independent metadata discovery service aligned to the specific requirements of field spectroscopy. In this study, a proof-of-concept metadata catalogue is described and deployed in a cloud-based architecture as a demonstration of an operationalized field spectroscopy metadata standard and web-based discovery service.

  14. Strategies for analyzing highly enriched IP-chip datasets

    Directory of Open Access Journals (Sweden)

    Tavaré Simon

    2009-09-01

    Full Text Available Abstract Background Chromatin immunoprecipitation on tiling arrays (ChIP-chip) has been employed to examine features such as protein binding and histone modifications on a genome-wide scale in a variety of cell types. Array data from the latter studies typically have a high proportion of enriched probes whose signals vary considerably (due to heterogeneity in the cell population), and this makes their normalization and downstream analysis difficult. Results Here we present strategies for analyzing such experiments, focusing our discussion on the analysis of Bromodeoxyuridine immunoprecipitation on tiling array (BrdU-IP-chip) datasets. BrdU-IP-chip experiments map large, recently replicated genomic regions and have similar characteristics to histone modification/location data. To prepare such data for downstream analysis we employ a dynamic programming algorithm that identifies a set of putative unenriched probes, which we use for both within-array and between-array normalization. We also introduce a second dynamic programming algorithm that incorporates a priori knowledge to identify and quantify positive signals in these datasets. Conclusion Highly enriched IP-chip datasets are often difficult to analyze with traditional array normalization and analysis strategies. Here we present and test a set of analytical tools for their normalization and quantification that allows for accurate identification and analysis of enriched regions.

  15. Improving plan quality for prostate volumetric-modulated arc therapy.

    Science.gov (United States)

    Wright, Katrina; Ferrari-Anderson, Janet; Barry, Tamara; Bernard, Anne; Brown, Elizabeth; Lehman, Margot; Pryor, David

    2017-08-04

    We critically evaluated the quality and consistency of volumetric-modulated arc therapy (VMAT) prostate planning at a single institution to quantify objective measures of plan quality and establish clear guidelines for plan evaluation and quality assurance. A retrospective analysis was conducted on 34 plans generated on the Pinnacle³ version 9.4 and 9.8 treatment planning systems to deliver 78 Gy in 39 fractions to the prostate only using VMAT. Data were collected on contoured structure volumes, overlaps and expansions, planning target volume (PTV) and organ-at-risk volumes and their relationship, dose volume histogram, plan conformity, plan homogeneity, low-dose wash, and beam parameters. Standard descriptive statistics were used to describe the data. Despite a standardized planning protocol, variability was present in all steps of the planning process. Deviations from protocol contours by radiation oncologists and radiation therapists occurred in 12% and 50% of cases, respectively, and the number of optimization parameters ranged from 12 to 27 (median 17). This contributed to conflicts within the optimization process, reflected in the mean composite objective value of 0.07 (range 0.01 to 0.44). Methods used to control low-intermediate dose wash were inconsistent. At the PTV-rectum interface, the dose-gradient distance from the 74.1 Gy to the 40 Gy isodose ranged from 0.6 cm to 2.0 cm (median 1.0 cm). Increasing collimator angle was associated with a decrease in monitor units, and a single full 6 MV arc was sufficient for the majority of plans. A significant relationship was found between clinical target volume-rectum distance and the rectal tolerances achieved. A linear relationship was determined between the PTV volume and the volume of the 40 Gy isodose. Objective values and composite objective values were useful in determining plan quality. Anatomic geometry and overlap of structures have a measurable impact on the plan quality achieved for prostate patients.

  16. Updated data on institutions and elections 1960–2012: presenting the IAEP dataset version 2.0

    Directory of Open Access Journals (Sweden)

    Tore Wig

    2015-04-01

    Full Text Available This article presents an updated version of the Institutions and Elections Project (IAEP) dataset. The dataset comprises information on 107 de jure institutional provisions and 16 variables related to electoral procedures and electoral events for 170 countries in the period 1960–2012. It is one of the most encompassing datasets on global institutional variation that explicitly codes de jure formal institutions. This article presents the dataset and compares it with existing datasets on political institutions, highlighting how the IAEP’s focus on disaggregated de jure institutions complements existing datasets that combine de facto and de jure elements. We illustrate the potential uses of the data by constructing indices that capture institutional dimensions beyond the standard democracy–autocracy dimension and that represent different ways of using the data for index construction. Finally, we illustrate potential applications by conducting a short replication and expansion of a recent study of democracy and civil war onset.

  17. A Comparative Analysis of Burned Area Datasets in Canadian Boreal Forest in 2000

    Directory of Open Access Journals (Sweden)

    Laia Núñez-Casillas

    2013-01-01

    Full Text Available The turn of the new millennium was accompanied by a particularly diverse group of burned area datasets from different sensors in the Canadian boreal forests, brought together in a year of low global fire activity. This paper provides an assessment of spatial and temporal accuracy by means of a fire-by-fire comparison of the following: two burned area datasets obtained from SPOT-VEGETATION (VGT) imagery, a MODIS Collection 5 burned area dataset, and three different datasets obtained from NOAA-AVHRR. Results showed that the burned area data from MODIS provided accurate dates of burn but great omission error, partially caused by calibration problems. One of the VGT-derived datasets (L3JRC) represented the largest number of fire sites in spite of its great overall underestimation, whereas the GBA2000 dataset achieved the best burned area quantification, both showing delayed and very variable fire timing. Spatial accuracy was comparable between the 5 km and the 1 km AVHRR-derived datasets but was remarkably lower in the 8 km dataset, leading us to conclude that at higher spatial resolutions, temporal accuracy was lower. The probable methodological and contextual causes of these differences are analyzed in detail.

  18. Epitope Prediction Based on Random Peptide Library Screening: Benchmark Dataset and Prediction Tools Evaluation

    Directory of Open Access Journals (Sweden)

    Yanxin Huang

    2011-06-01

    Full Text Available Epitope prediction based on random peptide library screening has become a focus as a promising method in immunoinformatics research. Some novel software and web-based servers have been proposed in recent years and have succeeded on given test cases. However, since the number of available mimotopes with the relevant structure of the template-target complex is limited, a systematic evaluation of these methods has been absent. In this study, a new benchmark dataset was defined. Using this benchmark dataset and a representative dataset, five of the most popular epitope prediction software products based on random peptide library screening were evaluated. On the benchmark dataset, no method exceeded 0.42 precision and 0.37 sensitivity, and the MCC scores suggest that the epitope prediction results of these software programs exceed random prediction by only about 0.09–0.13; on the representative dataset, most of the values of these performance measures are slightly improved, but the overall performance is still not satisfactory. Many test cases in the benchmark dataset cannot be applied to these software products due to software limitations. Moreover, chances are that these software products are overfitted to small datasets and will fail in other cases. Therefore finding the correlation between mimotopes and genuine epitope residues is still far from resolved, and a much larger dataset for mimotope-based epitope prediction is desirable.
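
    For reference, the Matthews correlation coefficient (MCC) used in the evaluation is computed from confusion-matrix counts (a generic implementation; the counts below are illustrative):

        import math

        # Matthews correlation coefficient from confusion-matrix counts:
        # MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))
        def mcc(tp, tn, fp, fn):
            denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
            return (tp * tn - fp * fn) / denom if denom else 0.0

        # Hypothetical residue-level prediction outcome for one test case.
        print(f"MCC = {mcc(tp=12, tn=140, fp=25, fn=18):.3f}")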

  19. Variable Volumetric Stiffness Fluid Mount Design

    Directory of Open Access Journals (Sweden)

    Nader Vahdati

    2004-01-01

    Full Text Available Passive fluid mounts are commonly used in automotive and aerospace applications to isolate the cabin from engine noise and vibration. Due to manufacturing and material variabilities, no two nominally identical fluid mounts act the same, so fluid mounts are tuned one by one before they are shipped to customers. In some cases, for a batch of fluid mounts manufactured at the same time, one is tuned and the rest are set to the same settings. In other cases they are shipped as is, with their notch frequency not in its most optimum location. Since none of the passive fluid mount parameters are controllable, the only way to tune the mount is to redesign it by changing the fluid, changing the inertia track length or diameter, or changing the rubber stiffness. This trial-and-error manufacturing process is very costly. To reduce the fluid mount notch-frequency tuning cycle time, a new fluid mount design is proposed in which the notch frequency can be easily modified without the need for any redesign. In this paper, the new design concept, its mathematical model and simulation results are presented.

  20. Simultaneous clustering of multiple gene expression and physical interaction datasets.

    Directory of Open Access Journals (Sweden)

    Manikandan Narayanan

    2010-04-01

    Full Text Available Many genome-wide datasets are routinely generated to study different aspects of biological systems, but integrating them to obtain a coherent view of the underlying biology remains a challenge. We propose simultaneous clustering of multiple networks as a framework to integrate large-scale datasets on the interactions among and activities of cellular components. Specifically, we develop an algorithm JointCluster that finds sets of genes that cluster well in multiple networks of interest, such as coexpression networks summarizing correlations among the expression profiles of genes and physical networks describing protein-protein and protein-DNA interactions among genes or gene-products. Our algorithm provides an efficient solution to a well-defined problem of jointly clustering networks, using techniques that permit certain theoretical guarantees on the quality of the detected clustering relative to the optimal clustering. These guarantees coupled with an effective scaling heuristic and the flexibility to handle multiple heterogeneous networks make our method JointCluster an advance over earlier approaches. Simulation results showed JointCluster to be more robust than alternate methods in recovering clusters implanted in networks with high false positive rates. In systematic evaluation of JointCluster and some earlier approaches for combined analysis of the yeast physical network and two gene expression datasets under glucose and ethanol growth conditions, JointCluster discovers clusters that are more consistently enriched for various reference classes capturing different aspects of yeast biology or yield better coverage of the analysed genes. These robust clusters, which are supported across multiple genomic datasets and diverse reference classes, agree with known biology of yeast under these growth conditions, elucidate the genetic control of coordinated transcription, and enable functional predictions for a number of uncharacterized genes.

  1. Circumpolar dataset of sequenced specimens of Promachocrinus kerguelensis (Echinodermata, Crinoidea)

    Directory of Open Access Journals (Sweden)

    Lenaïg G. Hemery

    2013-07-01

    Full Text Available This circumpolar dataset of the comatulid (Echinodermata: Crinoidea) Promachocrinus kerguelensis (Carpenter, 1888) from the Southern Ocean documents the biodiversity associated with the specimens sequenced in Hemery et al. (2012). The aim of the Hemery et al. (2012) paper was to use phylogeographic and phylogenetic tools to assess the genetic diversity, demographic history and evolutionary relationships of this very common and abundant comatulid, in the context of the glacial history of the Antarctic and Sub-Antarctic shelves (Thatje et al. 2005, 2008). One thousand three hundred and seven specimens (1307) used in this study were collected during seventeen cruises from 1996 to 2010, in eight regions of the Southern Ocean: Kerguelen Plateau, Davis Sea, Dumont d’Urville Sea, Ross Sea, Amundsen Sea, West Antarctic Peninsula, East Weddell Sea and Scotia Arc including the tip of the Antarctic Peninsula and the Bransfield Strait. We give here the metadata of this dataset, which lists sampling sources (cruise ID, ship name, sampling date, sampling gear), sampling sites (station, geographic coordinates, depth) and genetic data (phylogroup, haplotype, sequence ID) for each of the 1307 specimens. The identification of the specimens was controlled by an expert taxonomist specialized in crinoids (Marc Eléaume, Muséum national d’Histoire naturelle, Paris) and all the COI sequences were matched against those available on the Barcode of Life Data System (BOLD: http://www.boldsystems.org/index.php/IDS_OpenIdEngine). This dataset can be used by studies dealing with, among other interests, Antarctic and/or crinoid diversity (species richness, distribution patterns, biogeography) or habitat / ecological niche modeling. This dataset is accessible through the GBIF network at http://ipt.biodiversity.aq/resource.do?r=proke.

  2. Nonrigid registration of volumetric images using ranked order statistics

    DEFF Research Database (Denmark)

    Tennakoon, Ruwan; Bab-Hadiashar, Alireza; Cao, Zhenwei

    2014-01-01

    Non-rigid image registration techniques using intensity based similarity measures are widely used in medical imaging applications. Due to the high computational complexity of these techniques, particularly for volumetric images, finding appropriate registration methods that both reduce the computational burden and increase the registration accuracy has become an intensive area of research. In this paper we propose a fast and accurate non-rigid registration method for intra-modality volumetric images. Our approach exploits the information provided by an order statistics based segmentation method to find the important regions for registration, and uses an appropriate sampling scheme to target those areas and reduce the registration computation time. A unique advantage of the proposed method is its ability to identify the point of diminishing returns and stop the registration process. Our experiments...

  3. Volumetric characterization of delamination fields via angle longitudinal wave ultrasound

    Science.gov (United States)

    Wertz, John; Wallentine, Sarah; Welter, John; Dierken, Josiah; Aldrin, John

    2017-02-01

    The volumetric characterization of delaminations necessarily precedes rigorous composite damage progression modeling. Yet, inspection of composite structures for subsurface damage remains largely focused on detection, resulting in a capability gap. In response to this need, angle longitudinal wave ultrasound was employed to characterize a composite surrogate containing a simulated three-dimensional delamination field with distinct regions of occluded features (shadow regions). Simple analytical models of the specimen were developed to guide subsequent experimentation through identification of optimal scanning parameters. The ensuing experiments provided visual evidence of the complete delamination field, including indications of features within the shadow regions. The results of this study demonstrate proof-of-principle for the use of angle longitudinal wave ultrasonic inspection for volumetric characterization of three-dimensional delamination fields. Furthermore, the techniques developed herein form the foundation of succeeding efforts to characterize impact delaminations within inhomogeneous laminar materials such as polymer matrix composites.

  4. Two-dimensional random arrays for real time volumetric imaging

    DEFF Research Database (Denmark)

    Davidsen, Richard E.; Jensen, Jørgen Arendt; Smith, Stephen W.

    1994-01-01

    Two-dimensional arrays are necessary for a variety of ultrasonic imaging techniques, including elevation focusing, 2-D phase aberration correction, and real time volumetric imaging. In order to reduce system cost and complexity, sparse 2-D arrays have been considered with element geometries selected ad hoc, by algorithm, or by random process. Two random sparse array geometries and a sparse array with a Mills cross receive pattern were simulated and compared to a fully sampled aperture with the same overall dimensions. The sparse arrays were designed to the constraints of the Duke University real time volumetric imaging system, which employs a wide transmit beam and receive mode parallel processing to increase image frame rate. Depth-of-field comparisons were made from simulated on-axis and off-axis beamplots at ranges from 30 to 160 mm for both coaxial and offset transmit and receive...

  5. In Vivo Real Time Volumetric Synthetic Aperture Ultrasound Imaging

    DEFF Research Database (Denmark)

    Bouzari, Hamed; Rasmussen, Morten Fischer; Brandt, Andreas Hjelm

    2015-01-01

    Synthetic aperture (SA) imaging can be used to achieve real-time volumetric ultrasound imaging using 2-D array transducers. The sensitivity of SA imaging is improved by maximizing the acoustic output, but one must consider the limitations of an ultrasound system, both technical and biological. This paper investigates the in vivo applicability and sensitivity of volumetric SA imaging. Utilizing the transmit events to generate a set of virtual point sources, a frame rate of 25 Hz for a 90° x 90° field-of-view was achieved. Data were obtained using a 3.5 MHz 32 x 32 elements 2-D phased array transducer connected to the experimental scanner (SARUS). Proper scaling is applied to the excitation signal such that intensity levels are in compliance with the U.S. Food and Drug Administration regulations for in vivo ultrasound imaging. The measured Mechanical Index and spatial-peak-temporal...

  6. Method of generating features optimal to a dataset and classifier

    Energy Technology Data Exchange (ETDEWEB)

    Bruillard, Paul J.; Gosink, Luke J.; Jarman, Kenneth D.

    2016-10-18

    A method of generating features optimal to a particular dataset and classifier is disclosed. A dataset of messages is inputted and a classifier is selected. An algebra of features is encoded. Computable features that are capable of describing the dataset from the algebra of features are selected. Irredundant features that are optimal for the classifier and the dataset are selected.

  7. Volumetric 3D display using a DLP projection engine

    Science.gov (United States)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  8. An initial study on the estimation of time-varying volumetric treatment images and 3D tumor localization from single MV cine EPID images.

    Science.gov (United States)

    Mishra, Pankaj; Li, Ruijiang; Mak, Raymond H; Rottmann, Joerg; Bryant, Jonathan H; Williams, Christopher L; Berbeco, Ross I; Lewis, John H

    2014-08-01

    In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal image device (EPID) images. The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. The algorithm was tested on (1) eight digital eXtended CArdiac-Torso (XCAT) phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is 0.73 ± 0.63 mm for the XCAT data and 0.90 ± 0.65 mm for the patient data. The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model. This is the first method to estimate
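
    A schematic sketch of the PCA motion-model step (synthetic DVFs; the cost-function optimization against the measured EPID projection is omitted and the coefficients are set by hand):

        import numpy as np

        # Schematic sketch of the PCA lung-motion model: stack DVFs from 4DCT
        # phases, extract principal components, and rebuild a new DVF from
        # updated eigen-coefficients. In the paper the coefficients are tuned
        # against the EPID image; here they are simply set by hand.
        n_phases, n_vox = 10, 40 * 40 * 30         # synthetic 4DCT geometry
        rng = np.random.default_rng(3)
        dvfs = rng.normal(size=(n_phases, 3 * n_vox))  # flattened DVFs from DIR

        mean_dvf = dvfs.mean(axis=0)
        U, s, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
        k = 3                                      # few PCs capture most motion
        eigenvectors = Vt[:k]                      # spatial modes
        coeffs = (dvfs - mean_dvf) @ eigenvectors.T  # per-phase coefficients

        # "Updated" coefficients (in the paper: optimized against the EPID image)
        new_coeffs = coeffs[0] * 0.5 + coeffs[5] * 0.5
        new_dvf = mean_dvf + new_coeffs @ eigenvectors
        print(new_dvf.reshape(3, n_vox).shape)     # DVF to warp the reference CT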

  9. An initial study on the estimation of time-varying volumetric treatment images and 3D tumor localization from single MV cine EPID images

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Pankaj, E-mail: pankaj.mishra@varian.com; Mak, Raymond H.; Rottmann, Joerg; Bryant, Jonathan H.; Williams, Christopher L.; Berbeco, Ross I.; Lewis, John H. [Brigham and Women's Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States); Li, Ruijiang [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California 94305 (United States)

    2014-08-15

    Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal image device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) eight digital eXtended CArdiac-Torso (XCAT) phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is 0.73 ± 0.63 mm for the XCAT data and 0.90 ± 0.65 mm for the patient data. Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model

  10. Sharing Video Datasets in Design Research

    DEFF Research Database (Denmark)

    Christensen, Bo; Abildgaard, Sille Julie Jøhnk

    2017-01-01

    This paper examines how design researchers, design practitioners and design education can benefit from sharing a dataset. We present the Design Thinking Research Symposium 11 (DTRS11) as an exemplary project that implied sharing video data of design processes and design activity in natural settings with a large group of fellow academics from the international community of Design Thinking Research, for the purpose of facilitating research collaboration and communication within the field of Design and Design Thinking. This approach emphasizes the social and collaborative aspects of design research, where a multitude of appropriate perspectives and methods may be utilized in analyzing and discussing the singular dataset. The shared data is, from this perspective, understood as a design object in itself, which facilitates new ways of working, collaborating, studying, learning and educating within the expanding...

  11. RTK: efficient rarefaction analysis of large datasets.

    Science.gov (United States)

    Saary, Paul; Forslund, Kristoffer; Bork, Peer; Hildebrand, Falk

    2017-08-15

    The rapidly expanding microbiomics field is generating increasingly larger datasets, characterizing the microbiota in diverse environments. Although classical numerical ecology methods provide a robust statistical framework for their analysis, currently available software is inadequate for large datasets and some computationally intensive tasks, like rarefaction and associated analysis. Here we present a software package for rarefaction analysis of large count matrices, as well as estimation and visualization of diversity, richness and evenness. Our software is designed for ease of use, operating at least 7x faster than existing solutions while requiring 10x less memory. C++ and R source code (GPL v.2) as well as binaries are available from https://github.com/hildebra/Rarefaction and from CRAN (https://cran.r-project.org/). bork@embl.de or falk.hildebrand@embl.de. Supplementary data are available at Bioinformatics online.
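
    A minimal sketch of the core rarefaction operation (numpy-based subsampling without replacement; RTK's actual C++ implementation is, of course, far more efficient):

        import numpy as np

        # Sketch of rarefaction: subsample each sample's counts to a fixed
        # depth without replacement, then compute richness on the result.
        def rarefy(counts, depth, rng):
            reads = np.repeat(np.arange(counts.size), counts)  # expand to reads
            keep = rng.choice(reads, size=depth, replace=False)
            return np.bincount(keep, minlength=counts.size)

        rng = np.random.default_rng(4)
        sample = np.array([500, 120, 40, 7, 3, 0, 1])  # hypothetical taxon counts
        rarefied = [rarefy(sample, depth=100, rng=rng) for _ in range(50)]
        richness = [np.count_nonzero(r) for r in rarefied]
        print(f"rarefied richness at depth 100: {np.mean(richness):.2f} "
              f"+/- {np.std(richness):.2f}")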

  12. Interpolation of diffusion weighted imaging datasets

    DEFF Research Database (Denmark)

    Dyrby, Tim B; Lundell, Henrik; Burke, Mark W

    2014-01-01

    Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve high image resolution for finer...... anatomical details and signal-to-noise-ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal...... to the voxel size showed that conventional higher-order interpolation methods improved the geometrical representation of white-matter tracts with reduced partial-volume-effect (PVE), except at tract boundaries. Simulations and interpolation of ex-vivo monkey brain DWI datasets revealed that conventional...
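
    The interpolation step under study can be sketched in a few lines; the snippet below upsamples a stand-in DWI volume with a higher-order (cubic spline) interpolator using SciPy, with the array sizes invented for the example.

        import numpy as np
        from scipy.ndimage import zoom

        # 4D DWI array (x, y, z, gradient direction); a zoom factor of 2 in
        # the spatial axes halves the voxel size before tensor fitting.
        dwi = np.random.rand(32, 32, 20, 15).astype(np.float32)  # stand-in data
        upsampled = zoom(dwi, zoom=(2, 2, 2, 1), order=3)        # cubic B-spline
        print(upsampled.shape)  # (64, 64, 40, 15)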

  13. Automatic processing of multimodal tomography datasets.

    Science.gov (United States)

    Parsons, Aaron D; Price, Stephen W T; Wadeson, Nicola; Basham, Mark; Beale, Andrew M; Ashton, Alun W; Mosselmans, J Frederick W; Quinn, Paul D

    2017-01-01

    With the development of fourth-generation high-brightness synchrotrons on the horizon, the already large volume of data that will be collected on imaging and mapping beamlines is set to increase by orders of magnitude. As such, an easy and accessible way of dealing with such large datasets as quickly as possible is required in order to be able to address the core scientific problems during the experimental data collection. Savu is an accessible and flexible big data processing framework that is able to deal with both the variety and the volume of multimodal and multidimensional scientific datasets, such as those output by chemical tomography experiments on the I18 microfocus scanning beamline at Diamond Light Source.

  14. Scalable persistent identifier systems for dynamic datasets

    Science.gov (United States)

    Golodoniuc, P.; Cox, S. J. D.; Klump, J. F.

    2016-12-01

    Reliable and persistent identification of objects, whether tangible or not, is essential in information management. Many Internet-based systems have been developed to identify digital data objects, e.g., PURL, LSID, Handle, ARK. These were largely designed for identification of static digital objects. The amount of data made available online has grown exponentially over the last two decades and fine-grained identification of dynamically generated data objects within large datasets using conventional systems (e.g., PURL) has become impractical. We have compared capabilities of various technological solutions to enable resolvability of data objects in dynamic datasets, and developed a dataset-centric approach to resolution of identifiers. This is particularly important in Semantic Linked Data environments where dynamic frequently changing data is delivered live via web services, so registration of individual data objects to obtain identifiers is impractical. We use identifier patterns and pattern hierarchies for identification of data objects, which allows relationships between identifiers to be expressed, and also provides means for resolving a single identifier into multiple forms (i.e. views or representations of an object). The latter can be implemented through (a) HTTP content negotiation, or (b) use of URI querystring parameters. The pattern and hierarchy approach has been implemented in the Linked Data API supporting the United Nations Spatial Data Infrastructure (UNSDI) initiative and later in the implementation of geoscientific data delivery for the Capricorn Distal Footprints project using International Geo Sample Numbers (IGSN). This enables flexible resolution of multi-view persistent identifiers and provides a scalable solution for large heterogeneous datasets.
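
    A pattern-based resolver of the kind described can be sketched compactly; the snippet below is a hypothetical illustration (the URL templates, the pattern, the sample number and the media types are all invented), showing how a single identifier resolves to multiple representations.

        import re

        # A pattern maps a whole family of identifiers to URL templates, and
        # the requested media type selects among representations (mirroring
        # HTTP content negotiation or URI querystring parameters).
        PATTERNS = [
            (re.compile(r"^igsn/(?P<sample>[A-Z0-9.]+)$"), {
                "text/html":        "https://example.org/sample/{sample}",
                "application/json": "https://example.org/api/sample/{sample}.json",
            }),
        ]

        def resolve(identifier, accept="text/html"):
            for pattern, views in PATTERNS:
                m = pattern.match(identifier)
                if m:
                    template = views.get(accept, views["text/html"])
                    return template.format(**m.groupdict())
            raise KeyError(identifier)

        print(resolve("igsn/CSRWA001", accept="application/json"))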

  15. Using surface heave to estimate reservoir volumetric strain

    Energy Technology Data Exchange (ETDEWEB)

    Nanayakkara, A.S.; Wong, R.C.K. [Calgary Univ., AB (Canada)

    2008-07-01

    This paper presented a newly developed numerical tool for estimating reservoir volumetric strain distribution using surface vertical displacements and solving an inverse problem. Waterflooding, steam injection, carbon dioxide sequestration and aquifer storage recovery are among the subsurface injection operations that are responsible for reservoir dilations which propagate to the surrounding formations and extend to the surface, resulting in surface heaves. Global positioning systems and surface tiltmeters are often used to measure the characteristics of these surface heaves and to derive valuable information regarding reservoir deformation and flow characteristics. In this study, Tikhonov regularization was adopted to solve the ill-posed inverse problem, which standard inversion techniques such as Gaussian elimination and least squares methods handle poorly. Reservoir permeability was then estimated by inverting the volumetric strain distribution. Results of the newly developed numerical tool were compared with results from fully-coupled finite element simulation of fluid injection problems. The reservoir volumetric strain distribution was successfully estimated along with an approximate value for reservoir permeability.
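
    The regularized inversion can be illustrated in a few lines of NumPy; this is a generic Tikhonov sketch under an assumed linear forward model d = G m (the matrix, noise level and regularization weight are invented), not the authors' tool.

        import numpy as np

        # Surface heave d relates to volumetric strain m through a forward
        # (Green's function) matrix G. With fewer observations than unknowns
        # the problem is ill-posed, so a penalty lam**2 * ||m||**2 is added,
        # giving the normal equations (G.T @ G + lam**2 * I) m = G.T @ d.
        rng = np.random.default_rng(1)
        G = rng.normal(size=(50, 200))                 # 50 stations, 200 cells
        m_true = np.zeros(200)
        m_true[80:120] = 1e-4                          # dilating reservoir zone
        d = G @ m_true + rng.normal(scale=1e-5, size=50)

        lam = 0.1
        m_est = np.linalg.solve(G.T @ G + lam**2 * np.eye(200), G.T @ d)
        print(float(np.linalg.norm(m_est - m_true)))   # recovery error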

  16. Volumetric Light-field Encryption at the Microscopic Scale

    Science.gov (United States)

    Li, Haoyu; Guo, Changliang; Muniraj, Inbarasan; Schroeder, Bryce C.; Sheridan, John T.; Jia, Shu

    2017-01-01

    We report a light-field based method that allows the optical encryption of three-dimensional (3D) volumetric information at the microscopic scale in a single 2D light-field image. The system consists of a microlens array and an array of random phase/amplitude masks. The method utilizes a wave optics model to account for the dominant diffraction effect at this new scale, and the system point-spread function (PSF) serves as the key for encryption and decryption. We successfully developed and demonstrated a deconvolution algorithm to retrieve both spatially multiplexed discrete data and continuous volumetric data from 2D light-field images. Showing that the method is practical for data transmission and storage, we obtained a faithful reconstruction of the 3D volumetric information from a digital copy of the encrypted light-field image. The method represents a new level of optical encryption, paving the way for broad industrial and biomedical applications in processing and securing 3D data at the microscopic scale.
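
    The encryption principle can be caricatured in NumPy: the PSF is the key, encryption is convolution with it, and decryption is regularized (Wiener-style) deconvolution. This is a 2D, noise-free toy with an invented random key, far simpler than the paper's wave-optics light-field model.

        import numpy as np
        from numpy.fft import fft2, ifft2

        rng = np.random.default_rng(2)
        img = np.zeros((64, 64))
        img[24:40, 24:40] = 1.0                          # stand-in object plane
        psf = np.abs(ifft2(np.exp(2j * np.pi * rng.random((64, 64)))))
        psf /= psf.sum()                                 # random-mask PSF = key

        H = fft2(psf)
        encrypted = np.real(ifft2(fft2(img) * H))        # unreadable without key
        wiener = np.conj(H) / (np.abs(H) ** 2 + 1e-9)    # regularized inverse
        decrypted = np.real(ifft2(fft2(encrypted) * wiener))
        print(float(np.abs(decrypted - img).mean()))     # small for this toy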

  17. Volumetric Light-field Encryption at the Microscopic Scale

    CERN Document Server

    Li, Haoyu; Muniraj, Inbarasan; Schroeder, Bryce C; Sheridan, John T; Jia, Shu

    2016-01-01

    We report a light-field based method that allows the optical encryption of three-dimensional (3D) volumetric information at the microscopic scale in a single 2D light-field image. The system consists of a microlens array and an array of random phase/amplitude masks. The method utilizes a wave optics model to account for the dominant diffraction effect at this new scale, and the system point-spread function (PSF) serves as the key for encryption and decryption. We successfully developed and demonstrated a deconvolution algorithm to retrieve spatially multiplexed discrete and continuous volumetric data from 2D light-field images. Showing that the method is practical for data transmission and storage, we obtained a faithful reconstruction of the 3D volumetric information from a digital copy of the encrypted light-field image. The method represents a new level of optical encryption, paving the way for broad industrial and biomedical applications in processing and securing 3D data at the microscopic scale.

  18. Volumetric Light-field Encryption at the Microscopic Scale.

    Science.gov (United States)

    Li, Haoyu; Guo, Changliang; Muniraj, Inbarasan; Schroeder, Bryce C; Sheridan, John T; Jia, Shu

    2017-01-06

    We report a light-field based method that allows the optical encryption of three-dimensional (3D) volumetric information at the microscopic scale in a single 2D light-field image. The system consists of a microlens array and an array of random phase/amplitude masks. The method utilizes a wave optics model to account for the dominant diffraction effect at this new scale, and the system point-spread function (PSF) serves as the key for encryption and decryption. We successfully developed and demonstrated a deconvolution algorithm to retrieve both spatially multiplexed discrete data and continuous volumetric data from 2D light-field images. Showing that the method is practical for data transmission and storage, we obtained a faithful reconstruction of the 3D volumetric information from a digital copy of the encrypted light-field image. The method represents a new level of optical encryption, paving the way for broad industrial and biomedical applications in processing and securing 3D data at the microscopic scale.

  19. Data Assimilation and Model Evaluation Experiment Datasets.

    Science.gov (United States)

    Lai, Chung-Chieng A.; Qian, Wen; Glenn, Scott M.

    1994-05-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need for data in the four phases of the experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: 1) collection of observational data; 2) analysis and interpretation; 3) interpolation using the Optimum Thermal Interpolation System package; 4) quality control and re-analysis; and 5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usage include 1) ocean modeling and data assimilation studies, 2) diagnostic and theoretical studies, and 3) comparisons with locally detailed observations.

  1. 3D tumor localization through real-time volumetric x-ray imaging for lung cancer radiotherapy

    CERN Document Server

    Li, Ruijiang; Jia, Xun; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Song, William Y; Jiang, Steve B

    2011-01-01

    Recently we have developed an algorithm for reconstructing volumetric images and extracting 3D tumor motion information from a single x-ray projection. We have demonstrated its feasibility using a digital respiratory phantom with regular breathing patterns. In this work, we present a detailed description and a comprehensive evaluation of the improved algorithm. The algorithm was improved by incorporating respiratory motion prediction. The accuracy and efficiency were then evaluated on 1) a digital respiratory phantom, 2) a physical respiratory phantom, and 3) five lung cancer patients. These evaluation cases include both regular and irregular breathing patterns that are different from the training dataset. For the digital respiratory phantom with regular and irregular breathing, the average 3D tumor localization error is less than 1 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for 3D tumor localization from each projection ranges between 0.19 and 0.26 seconds, for both regular and irreg...

  2. Integrated dataset of screening hits against multiple neglected disease pathogens.

    Directory of Open Access Journals (Sweden)

    Solomon Nwaka

    2011-12-01

    Full Text Available New chemical entities are desperately needed that overcome the limitations of existing drugs for neglected diseases. Screening a diverse library of 10,000 drug-like compounds against 7 neglected disease pathogens resulted in an integrated dataset of 744 hits. We discuss the prioritization of these hits for each pathogen and the strong correlation observed between compounds active against more than two pathogens and mammalian cell toxicity. Our work suggests that the efficiency of early drug discovery for neglected diseases can be enhanced through a collaborative, multi-pathogen approach.

  3. Dataset concerning the analytical approximation of the Ae3 temperature

    Directory of Open Access Journals (Sweden)

    B.L. Ennis

    2017-02-01

    The dataset includes the terms of the function and the values for the polynomial coefficients for major alloying elements in steel. A short description of the approximation method used to derive and validate the coefficients has also been included. For discussion and application of this model, please refer to the full-length article entitled “The role of aluminium in chemical and phase segregation in a TRIP-assisted dual phase steel”, 10.1016/j.actamat.2016.05.046 (Ennis et al., 2016 [1]).

  4. 3D Volumetric Modeling and Microvascular Reconstruction of Irradiated Lumbosacral Defects After Oncologic Resection

    Directory of Open Access Journals (Sweden)

    Emilio Garcia-Tutor

    2016-12-01

    Full Text Available Background: Locoregional flaps are sufficient in most sacral reconstructions. However, large sacral defects due to malignancy necessitate a different reconstructive approach, with local flaps compromised by radiation and regional flaps inadequate for broad surface areas or substantial volume obliteration. In this report, we present our experience using free muscle transfer for volumetric reconstruction in such cases, and demonstrate 3D haptic models of the sacral defect to aid preoperative planning. Methods: Five consecutive patients with irradiated sacral defects secondary to oncologic resections were included, with surface areas ranging from 143-600 cm2. Latissimus dorsi-based free flap sacral reconstruction was performed in each case, between 2005 and 2011. Where the superior gluteal artery was compromised, the subcostal artery was used as a recipient vessel. Microvascular technique, complications and outcomes are reported. The use of volumetric analysis and 3D printing is also demonstrated, with imaging data converted to 3D images suitable for 3D printing with OsiriX software (Pixmeo, Geneva, Switzerland). An office-based, desktop 3D printer was used to print 3D models of sacral defects, used to demonstrate surface area and contour and produce a volumetric print of the dead space needed for flap obliteration. Results: The clinical series of latissimus dorsi free flap reconstructions is presented, with successful transfer in all cases, and adequate soft-tissue cover and volume obliteration achieved. The original use of the subcostal artery as a recipient vessel was successfully achieved. All wounds healed uneventfully. 3D printing is also demonstrated as a useful tool for 3D evaluation of volume and dead space. Conclusion: Free flaps offer unique benefits in sacral reconstruction where local tissue is compromised by irradiation and tumor recurrence, and dead space requires accurate volumetric reconstruction. We describe for the first time the use of

  5. Multi-Perspective Vehicle Detection and Tracking: Challenges, Dataset, and Metrics

    DEFF Research Database (Denmark)

    Dueholm, Jacob Velling; Kristoffersen, Miklas Strøm; Satzoda, Ravi K.;

    2016-01-01

    this dataset is introduced along with its challenges and evaluation metrics. A vision-based multi-perspective dataset is presented, containing a full panoramic view from a moving platform driving on U.S. highways capturing 2704x1440 resolution images at 12 frames per second. The dataset serves multiple...... purposes to be used as traditional detection and tracking, together with tracking of vehicles across perspectives. Each of the four perspectives has been annotated, resulting in more than 4000 bounding boxes in order to evaluate and compare novel methods....

  6. A Design Strategy for Volumetric Efficiency Improvement in a Multi-cylinder Stationary Diesel Engine and its Validity under Transient Engine Operation

    Directory of Open Access Journals (Sweden)

    P. Seenikannan

    2008-01-01

    Full Text Available This paper proposes an approach to improving the volumetric efficiency of a multi-cylinder diesel engine. A computer simulation model is used to compare steady-speed volumetric efficiency with instantaneous values. A baseline engine model is first correlated with measured volumetric efficiency data to establish confidence in the engine model's predictions. A derivative of the baseline model, with exhaust manifold, is then subjected to a transient excursion simulating typical, in-service, maximum rates of engine speed change. Instantaneous volumetric efficiency, calculated over discrete engine cycles forming the sequence, is then compared with its steady-speed equivalent at the corresponding speed. It is shown that the engine volumetric efficiency responds almost quasi-steadily under transient operation, thus justifying the assumption of correlation between steady-speed and transient data. The computer model is used to demonstrate the basic gas dynamic phenomena graphically. The paper provides a good example of the application of computer simulation techniques in providing answers to real engineering questions. In particular, the value of a comprehensive analysis of the fundamental physical phenomena characterizing engine mass flow is demonstrated.
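
    For reference, volumetric efficiency itself is a one-line ratio; the sketch below computes it for a four-stroke engine from a measured air mass flow, with all numbers invented for illustration.

        # eta_v = induced air mass flow / mass flow that would fill the swept
        # volume at a reference density (one intake per two revolutions).
        def volumetric_efficiency(m_dot_air, rpm, displacement, rho_air=1.184):
            """m_dot_air [kg/s], displacement [m^3], rho_air [kg/m^3]."""
            cycles_per_s = rpm / 60.0 / 2.0          # four-stroke engine
            ideal_mass_flow = rho_air * displacement * cycles_per_s
            return m_dot_air / ideal_mass_flow

        # A 2.0-litre engine at 2000 rpm inducting 0.034 kg/s of air:
        print(volumetric_efficiency(0.034, rpm=2000, displacement=2.0e-3))  # ~0.86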

  7. British Library Dataset Programme: Supporting Research in the Library of the 21st Century

    Directory of Open Access Journals (Sweden)

    J. Max Wilkinson

    2010-08-01

    Full Text Available Advances in computational science and its application are reshaping the social landscape and the practice of research. Researchers are increasingly exploiting technology for collaborative, experimental and observational research in all disciplines. Digital data and datasets are the fuel that drives these trends; increasingly, datasets are being recognised as a national asset that requires preservation, attribution and access in much the same way as text-based communication. The British Library is in a unique position to enhance UK and international research by extending its presence from the physical collection to the digital dataset domain. To meet this challenge and be a responsible steward of the scholarly record, the Library has defined a programme of activity to support the datasets that underlie modern research and promote them as a national asset. We are designing a mixed model of activity where specific service-level projects with clear goals will provide support for collaborative work aimed at revealing and clarifying requirements related to datasets. For example, there is a clear community need for stable, scalable and agreed data citation mechanisms. In response, the British Library became a founding member of DataCite, the International Data Citation Initiative which, as a member of the International DOI Foundation, assigns Digital Object Identifiers (DOIs) to datasets. We are leveraging the services built for DataCite to actively partner with a number of UK data centres and data publishers to add value to their collections and facilitate their rejoining to the scholarly record by linking the published record with the datasets that underlie it. We are also implementing a similar strategy to promote dataset discovery services through the Library's catalogues and streamlining access to national external collections. The British Library datasets programme will guide activities across the Library and provide a focus for stakeholder

  8. A dosimetric study of volumetric modulated arc therapy planning techniques for treatment of low-risk prostate cancer in patients with bilateral hip prostheses

    Directory of Open Access Journals (Sweden)

    Suresh B Rana

    2014-01-01

    Full Text Available Background and Purpose: Recently, megavoltage (MV) photon volumetric modulated arc therapy (VMAT) has gained widespread acceptance as the technique of choice for prostate cancer patients undergoing external beam radiation therapy. However, radiation treatment planning for patients with metallic hip prostheses composed of high-Z materials can be challenging due to (1) presence of streak artifacts from prosthetic hips in the computed tomography dataset, and (2) inhomogeneous dose distribution within the target volume. The purpose of this study was to compare the dosimetric quality of VMAT techniques in the form of RapidArc (RA) for treating a low-risk prostate cancer patient with bilateral prostheses. Materials and Methods: Three treatment plans were created using RA techniques utilizing 2 arcs (2-RA), 3 arcs (3-RA), and 4 arcs (4-RA) for a 6 MV photon beam in the Eclipse treatment planning system. Each plan was optimized for a total dose of 79.2 Gy prescribed to the planning target volume (PTV) over 44 fractions. All three RA plans were calculated with the anisotropic analytical algorithm. Results: The mean and maximum doses to the PTV as well as the homogeneity index among all three RA plans were comparable. The plan conformity index was highest in the 2-Arc plan (1.19) and lowest in the 4-Arc plan (1.10). In comparison to the 2-RA technique, the 4-RA technique reduced the doses to the rectum by up to 18.8% and to the bladder by up to 7.8%. In comparison to the 3-RA technique, the 4-RA technique reduced the doses to the rectum by up to 14.6% and to the bladder by up to 3.5%. Conclusion: Based on the RA techniques investigated for a low-risk prostate cancer patient with bilateral prostheses, the 4-RA plan produced lower rectal and bladder dose and better dose conformity across the PTV in comparison with the 2-RA and 3-RA plans.
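
    For readers unfamiliar with the two plan metrics quoted, the sketch below computes one common definition of each from a dose grid and a PTV mask; the paper does not spell out its exact formulas, and the toy dose grid is invented.

        import numpy as np

        def conformity_index(dose, ptv_mask, prescription):
            """Prescription-isodose volume / PTV volume (ideal value: 1)."""
            return np.count_nonzero(dose >= prescription) / np.count_nonzero(ptv_mask)

        def homogeneity_index(dose, ptv_mask):
            """(D2% - D98%) / D50% inside the PTV (smaller = more homogeneous)."""
            d = dose[ptv_mask]
            d2, d50, d98 = np.percentile(d, [98, 50, 2])
            return (d2 - d98) / d50

        dose = np.random.normal(79.2, 1.5, size=(40, 40, 40))  # toy dose grid [Gy]
        ptv = np.zeros_like(dose, dtype=bool)
        ptv[10:30, 10:30, 10:30] = True
        print(conformity_index(dose, ptv, 79.2), homogeneity_index(dose, ptv))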

  9. Composite Match Index with Application of Interior Deformation Field Measurement from Magnetic Resonance Volumetric Images of Human Tissues

    Directory of Open Access Journals (Sweden)

    Penglin Zhang

    2012-01-01

    Full Text Available Although a variety of feature-point matching approaches have been reported in computer vision, few have been applied to images of nonrigid, nonuniform human tissues. The present work is concerned with interior deformation field measurement of complex human tissues from three-dimensional magnetic resonance (MR) volumetric images. To improve the reliability of matching results, this paper proposes the composite match index (CMI) as the foundation of a multimethod fusion approach that increases the reliability of the individual matching methods. We discuss the definition, components, and weight determination of CMI. To test the validity of the proposed approach, it is applied to actual MR volumetric images obtained from a volunteer’s calf. The main result is consistent with the actual condition.

  10. The Influence of Aerosol Concentration on Changes in the Volumetric Activities of Indoor Radon Short-Term Decay Products

    Directory of Open Access Journals (Sweden)

    Diana Politova

    2011-02-01

    Full Text Available The article describes the influence of aerosol concentration on changes in the volumetric activities of indoor radon short-term decay products. The concentration of aerosol in the air, equilibrium factors and the unattached fraction were measured under normal living conditions in which the concentration of aerosol increases, e.g. burning a candle or frankincense indoors, smoke-filled rooms, a steamy kitchen, etc. It has been established that when the concentration of aerosol in the air rises, the number of free atoms of radon short-term decay products attached to aerosol particles also increases, and therefore a higher volumetric activity of alpha particles is recorded. A strong positive correlation between the equilibrium factor (F) and aerosol particle concentration in indoor air, as well as a negative correlation between the unattached fraction and the equilibrium factor, has been determined. Article in Lithuanian

  11. Methodological proposal for the volumetric study of archaeological ceramics through 3D edition free-software programs: the case of the celtiberians cemeteries of the meseta

    Directory of Open Access Journals (Sweden)

    Álvaro Sánchez Climent

    2014-10-01

    Full Text Available Free-software programs have nowadays become ideal tools for archaeological research, reaching the same level as commercial programs. For that reason, the 3D modelling tool Blender has gained great popularity in recent years, offering characteristics similar to those of commercial 3D editing programs such as 3D Studio Max or AutoCAD. Recently, the script necessary for the volumetric calculation of three-dimensional objects has been developed, offering great possibilities for calculating the volume of archaeological ceramics. In this paper, we present a methodological approach for volumetric studies with Blender and a case study of funerary urns from several Celtiberian cemeteries of the Spanish Meseta. The goal is to demonstrate the great possibilities that free-software 3D editing tools offer for volumetric studies at the present time.
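
    The kind of Blender script the paper relies on is short; below is a minimal sketch (not the authors' script) that measures the enclosed volume of the active mesh object with Blender's bmesh API, assuming a closed, manifold mesh of the vessel interior.

        import bpy
        import bmesh

        # Run inside Blender: report the volume of the active mesh object.
        obj = bpy.context.active_object
        bm = bmesh.new()
        bm.from_mesh(obj.data)
        bm.transform(obj.matrix_world)          # bake object scale/rotation
        volume = bm.calc_volume(signed=False)   # scene units cubed
        bm.free()
        print(f"{obj.name}: {volume:.2f} cubic units")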

  12. Early clinical experience with volumetric modulated arc therapy in head and neck cancer patients

    Directory of Open Access Journals (Sweden)

    Cozzi Luca

    2010-10-01

    Full Text Available Abstract Background: To report early clinical experience in radiation treatment of head and neck cancer of different sites and histology by volumetric modulated arcs with the RapidArc technology. Methods: During 2009, 45 patients were treated at Istituto Clinico Humanitas with RapidArc (28 males and 17 females, median age 65 years). Of these, 78% received concomitant chemotherapy. Thirty-six patients were treated with exclusive curative intent (group A), three with postoperative curative intent (group B) and six with sinonasal tumours (group C). Dose prescription was to the Planning Target Volumes (PTV) with simultaneous integrated boost: 54.45 Gy and 69.96 Gy in 33 fractions (group A); 54.45 Gy and 66 Gy in 33 fractions (group B); and 55 Gy in 25 fractions (group C). Results: Concerning planning optimization strategies and constraints, as per PTV coverage, for all groups, D98% > 95% and V95% > 99%. As regards organs at risk, all planning objectives were respected, and this was correlated with the observed acute toxicity rates. Only 28% of patients experienced G3 mucositis, 14% G3 dermatitis, and 44% had G2 dysphagia. No patient required a feeding tube to be placed during treatment. Acute toxicity was also related to chemotherapy. Two patients interrupted the course of radiotherapy because of a quick worsening of general clinical condition. Conclusions: These preliminary results indicate that volumetric modulated arc therapy in locally advanced head and neck cancers is feasible and effective, with acceptable toxicities.

  13. Volumetric synthetic aperture imaging with a piezoelectric 2D row-column probe

    Science.gov (United States)

    Bouzari, Hamed; Engholm, Mathias; Christiansen, Thomas Lehrmann; Beers, Christopher; Lei, Anders; Stuart, Matthias Bo; Nikolov, Svetoslav Ivanov; Thomsen, Erik Vilain; Jensen, Jørgen Arendt

    2016-04-01

    The synthetic aperture (SA) technique can be used for achieving real-time volumetric ultrasound imaging using 2-D row-column addressed transducers. This paper investigates SA volumetric imaging performance of an in-house prototyped 3 MHz λ/2-pitch 62+62 element piezoelectric 2-D row-column addressed transducer array. Utilizing single element transmit events, a volume rate of 90 Hz down to 14 cm deep is achieved. Data are obtained using the experimental ultrasound scanner SARUS with a 70 MHz sampling frequency and beamformed using a delay-and-sum (DAS) approach. A signal-to-noise ratio of up to 32 dB is measured on the beamformed images of a tissue mimicking phantom with attenuation of 0.5 dB cm-1 MHz-1, from the surface of the probe to the penetration depth of 300λ. Measured lateral resolution as Full-Width-at-Half-Maximum (FWHM) is between 4λ and 10λ for 18% to 65% of the penetration depth from the surface of the probe. The averaged contrast is 13 dB for the same range. The imaging performance assessment results may represent a reference guide for possible applications of such an array in different medical fields.
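
    The delay-and-sum beamforming mentioned above reduces, per image point, to summing channel samples at geometric delays; the sketch below shows a bare-bones single-emission version with invented geometry and random stand-in channel data (real SA imaging sums over many emissions and applies apodization).

        import numpy as np

        c, fs = 1540.0, 70e6                    # sound speed [m/s], sampling [Hz]
        n_elem = 62
        elem_x = (np.arange(n_elem) - n_elem / 2) * 0.25e-3  # element x [m]
        rf = np.random.randn(n_elem, 4000)      # stand-in RF channel data

        def das_point(x, z):
            """Delay-and-sum value at (x, z) for one emission at the origin."""
            t_tx = z / c                                     # simplified transmit delay
            t_rx = np.sqrt((x - elem_x) ** 2 + z ** 2) / c   # per-element receive delay
            idx = np.clip(np.round((t_tx + t_rx) * fs).astype(int), 0, rf.shape[1] - 1)
            return rf[np.arange(n_elem), idx].sum()

        print(das_point(0.0, 30e-3))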

  14. Impact of Turbocharger Non-Adiabatic Operation on Engine Volumetric Efficiency and Turbo Lag

    Directory of Open Access Journals (Sweden)

    S. Shaaban

    2012-01-01

    Full Text Available Turbocharger performance significantly affects the thermodynamic properties of the working fluid at engine boundaries and hence engine performance. Heat transfer takes place under all circumstances during turbocharger operation. This heat transfer affects the power produced by the turbine, the power consumed by the compressor, and the engine volumetric efficiency. Therefore, non-adiabatic turbocharger performance can restrict the engine charging process and hence engine performance. The present research work investigates the effect of turbocharger non-adiabatic performance on the engine charging process and turbo lag. Two passenger car turbochargers are experimentally and theoretically investigated. The effect of turbine casing insulation is also explored. The present investigation shows that thermal energy is transferred to the compressor under all circumstances. At high rotational speeds, thermal energy is first transferred to the compressor and later from the compressor to the ambient. Therefore, the compressor appears to be “adiabatic” at high rotational speeds despite the complex heat transfer processes inside the compressor. A tangible effect of turbocharger non-adiabatic performance on the charging process is identified at turbocharger part-load operation. The turbine power is the most affected operating parameter, followed by the engine volumetric efficiency. Insulating the turbine is recommended for reducing the turbine size and the turbo lag.

  15. High-throughput volumetric reconstruction for 3D wheat plant architecture studies

    Directory of Open Access Journals (Sweden)

    Wei Fang

    2016-09-01

    Full Text Available For many tiller crops, the plant architecture (PA), including the plant fresh weight, plant height, number of tillers, tiller angle and stem diameter, significantly affects the grain yield. In this study, we propose a method based on volumetric reconstruction for high-throughput three-dimensional (3D) wheat PA studies. The proposed methodology involves plant volumetric reconstruction from multiple images, plant model processing and phenotypic parameter estimation and analysis. This study was performed on 80 Triticum aestivum plants, and the results were analyzed. Comparing the automated measurements with manual measurements, the mean absolute percentage error (MAPE) in the plant height and the plant fresh weight was 2.71% (1.08 cm, with an average plant height of 40.07 cm) and 10.06% (1.41 g, with an average plant fresh weight of 14.06 g), respectively. The root mean square error (RMSE) was 1.37 cm and 1.79 g for the plant height and plant fresh weight, respectively. The correlation coefficients were 0.95 and 0.96 for the plant height and plant fresh weight, respectively. Additionally, the proposed methodology, including plant reconstruction, model processing and trait extraction, required only approximately 20 s on average per plant using parallel computing on a graphics processing unit (GPU), demonstrating that the methodology would be valuable for a high-throughput phenotyping platform.

  16. Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach.

    Science.gov (United States)

    Danyali, Habibiollah; Mertins, Alfred

    2011-01-01

    In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D dataset, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR datasets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications.
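
    As a point of reference, the 3D wavelet stage that such coders build on looks like the following PyWavelets sketch; the wavelet choice, sizes and GOS grouping are illustrative, and the SPIHT tree coding itself is not shown.

        import numpy as np
        import pywt

        gos = np.random.rand(16, 128, 128)        # one group of slices (GOS)
        coeffs = pywt.wavedecn(gos, wavelet="bior4.4", level=2)
        approx = coeffs[0]                        # coarsest approximation subband
        print(approx.shape, len(coeffs) - 1, "detail levels")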

  17. Volumetric capnography: In the diagnostic work-up of chronic thromboembolic disease

    Directory of Open Access Journals (Sweden)

    Marcos Mello Moreira

    2010-05-01

    Full Text Available Marcos Mello Moreira1, Renato Giuseppe Giovanni Terzi1, Laura Cortellazzi2, Antonio Luis Eiras Falcão1, Heitor Moreno Junior2, Luiz Cláudio Martins2, Otavio Rizzi Coelho2; 1Department of Surgery, 2Department of Internal Medicine, State University of Campinas, School of Medical Sciences, Campinas, Sao Paulo, Brazil. Abstract: The morbidity and mortality of pulmonary embolism (PE) have been found to be related to early diagnosis and appropriate treatment. The examinations used to diagnose PE are expensive and not always easily accessible. Alternative options include noninvasive examinations, such as clinical pretests, ELISA D-dimer (DD) tests, and volumetric capnography (VCap). We report the case of a patient whose diagnosis of PE was made via pulmonary arteriography. The clinical pretest revealed a moderate probability of the patient having PE, and the DD result was negative; however, the VCap result, combined with arterial blood gases, was positive. The patient underwent all noninvasive exams following admission to hospital and again eight months after discharge. Results gained from invasive tests were similar to those produced by imaging exams, highlighting the importance of VCap as an important noninvasive tool. Keywords: pulmonary embolism, pulmonary hypertension, volumetric capnography, d-dimers, pretest probability
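
    For context on what a VCap-plus-blood-gas analysis quantifies, one standard derived quantity is the Bohr-Enghoff dead-space fraction; the values in the sketch are invented, and the formula is the textbook one rather than necessarily the exact index used in this case report.

        # VD/VT = (PaCO2 - PECO2) / PaCO2, with PECO2 the mixed-expired CO2
        # obtained from volumetric capnography and PaCO2 from blood gases.
        def dead_space_fraction(paco2_mmHg, peco2_mmHg):
            return (paco2_mmHg - peco2_mmHg) / paco2_mmHg

        print(dead_space_fraction(paco2_mmHg=40.0, peco2_mmHg=18.0))  # 0.55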

  18. Concentrated fed-batch cell culture increases manufacturing capacity without additional volumetric capacity.

    Science.gov (United States)

    Yang, William C; Minkler, Daniel F; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming

    2016-01-10

    Biomanufacturing factories of the future are transitioning from large, single-product facilities toward smaller, multi-product, flexible facilities. Flexible capacity allows companies to adapt to ever-changing pipeline and market demands. Concentrated fed-batch (CFB) cell culture enables flexible manufacturing capacity with limited volumetric capacity; it intensifies cell culture titers such that the output of a smaller facility can rival that of a larger facility. We tested this hypothesis at bench scale by developing a feeding strategy for CFB and applying it to two cell lines. CFB improved cell line A output by 105% and cell line B output by 70% compared to traditional fed-batch (TFB) processes. CFB did not greatly change cell line A product quality, but it improved cell line B charge heterogeneity, suggesting that CFB has both process and product quality benefits. We projected CFB output gains in the context of a 2000-L small-scale facility, but the output was lower than that of a 15,000-L large-scale TFB facility. CFB's high cell mass also complicated operations, eroded volumetric productivity, and showed that our current processes require significant improvements in specific productivity in order to realize their full potential and savings in manufacturing. Thus, improving specific productivity can resolve CFB's cost, scale-up, and operability challenges.
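
    The facility comparison is simple arithmetic (batch output scales with titer times working volume); in the back-of-the-envelope sketch below only the 105% titer gain comes from the abstract, and the baseline titer is an invented placeholder.

        tfb_titer = 5.0                 # g/L, assumed baseline titer
        cfb_titer = tfb_titer * 2.05    # +105% from the CFB process
        small_cfb = cfb_titer * 2_000   # 2000-L flexible facility [g/batch]
        large_tfb = tfb_titer * 15_000  # 15,000-L traditional facility [g/batch]
        print(f"CFB 2 kL: {small_cfb / 1e3:.1f} kg/batch; "
              f"TFB 15 kL: {large_tfb / 1e3:.1f} kg/batch")  # 20.5 vs 75.0

    Whatever baseline titer is assumed, a 2.05x titer gain cannot close a 7.5x volume gap, which is the abstract's point about needing higher specific productivity.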

  19. Integral transform solution of natural convection in a square cavity with volumetric heat generation

    Directory of Open Access Journals (Sweden)

    C. An

    2013-12-01

    Full Text Available The generalized integral transform technique (GITT) is employed to obtain a hybrid numerical-analytical solution of natural convection in a cavity with volumetric heat generation. The hybrid nature of this approach allows for the establishment of benchmark results in the solution of non-linear partial differential equation systems, including the coupled set of heat and fluid flow equations that govern the steady natural convection problem under consideration. Through performing the GITT, the resulting transformed ODE system is then numerically solved by making use of the subroutine DBVPFD from the IMSL Library. Therefore, numerical results under user-prescribed accuracy are obtained for different values of Rayleigh numbers, and the convergence behavior of the proposed eigenfunction expansions is illustrated. Critical comparisons against solutions produced by ANSYS CFX 12.0 are then conducted, which demonstrate excellent agreement. Several sets of reference results for natural convection with volumetric heat generation in a bi-dimensional square cavity are also provided for future verification of numerical results obtained by other researchers.

  20. Comparing methods of analysing datasets with small clusters: case studies using four paediatric datasets.

    Science.gov (United States)

    Marston, Louise; Peacock, Janet L; Yu, Keming; Brocklehurst, Peter; Calvert, Sandra A; Greenough, Anne; Marlow, Neil

    2009-07-01

    Studies of prematurely born infants contain a relatively large percentage of multiple births, so the resulting data have a hierarchical structure with small clusters of size 1, 2 or 3. Ignoring the clustering may lead to incorrect inferences. The aim of this study was to compare statistical methods which can be used to analyse such data: generalised estimating equations, multilevel models, multiple linear regression and logistic regression. Four datasets which differed in total size and in percentage of multiple births (n = 254, multiple 18%; n = 176, multiple 9%; n = 10 098, multiple 3%; n = 1585, multiple 8%) were analysed. With the continuous outcome, two-level models produced similar results in the larger dataset, while generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) produced divergent estimates using the smaller dataset. For the dichotomous outcome, most methods, except generalised least squares multilevel modelling (ML GH 'xtlogit' in Stata) gave similar odds ratios and 95% confidence intervals within datasets. For the continuous outcome, our results suggest using multilevel modelling. We conclude that generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) should be used with caution when the dataset is small. Where the outcome is dichotomous and there is a relatively large percentage of non-independent data, it is recommended that these are accounted for in analyses using logistic regression with adjusted standard errors or multilevel modelling. If, however, the dataset has a small percentage of clusters greater than size 1 (e.g. a population dataset of children where there are few multiples) there appears to be less need to adjust for clustering.
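
    An analogous analysis in Python (rather than the paper's Stata commands) can be sketched with statsmodels: a random-intercept multilevel model for a continuous outcome with infants clustered within births. The simulated data and effect sizes are invented.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n_pairs = 100
        df = pd.DataFrame({
            "cluster": np.repeat(np.arange(n_pairs), 2),     # twin pairs
            "gest_age": rng.normal(30, 2, 2 * n_pairs),
        })
        df["weight"] = (1000 + 120 * (df["gest_age"] - 30)
                        + np.repeat(rng.normal(0, 60, n_pairs), 2)  # birth effect
                        + rng.normal(0, 80, 2 * n_pairs))           # infant noise

        # Two-level model: weight ~ gestational age, random intercept per birth.
        fit = smf.mixedlm("weight ~ gest_age", data=df, groups=df["cluster"]).fit()
        print(fit.params)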

  1. Development of a SPARK Training Dataset

    Energy Technology Data Exchange (ETDEWEB)

    Sayre, Amanda M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Olson, Jarrod R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-03-01

    In its first five years, the National Nuclear Security Administration’s (NNSA) Next Generation Safeguards Initiative (NGSI) sponsored more than 400 undergraduate, graduate, and post-doctoral students in internships and research positions (Wyse 2012). In the past seven years, the NGSI program has produced, and continues to produce, a large body of scientific, technical, and policy work in targeted core safeguards capabilities and human capital development activities. Not only does the NGSI program carry out activities across multiple disciplines, but also across all U.S. Department of Energy (DOE)/NNSA locations in the United States. However, products are not readily shared among disciplines and across locations, nor are they archived in a comprehensive library. Rather, knowledge of NGSI-produced literature is localized to the researchers, clients, and internal laboratory/facility publication systems such as the Electronic Records and Information Capture Architecture (ERICA) at the Pacific Northwest National Laboratory (PNNL). There is also no incorporated way of analyzing existing NGSI literature to determine whether the larger NGSI program is achieving its core safeguards capabilities and activities. A complete library of NGSI literature could prove beneficial to a cohesive, sustainable, and more economical NGSI program. The Safeguards Platform for Automated Retrieval of Knowledge (SPARK) has been developed as a knowledge storage, retrieval, and analysis capability that captures safeguards knowledge so that it persists beyond the lifespan of NGSI. During the development process, it was necessary to build a SPARK training dataset (a corpus of documents) for initial entry into the system and for demonstration purposes. We manipulated these data to gain new information about the breadth of NGSI publications, and evaluated the science-policy interface at PNNL as a practical demonstration of SPARK’s intended analysis capability. The analysis demonstration sought to answer the

  2. Wild Type and PPAR KO Dataset

    Science.gov (United States)

    Data set 1 consists of the experimental data for the Wild Type and PPAR KO animal study and includes data used to prepare Figures 1-4 and Table 1 of the Das et al., 2016 paper. This dataset is associated with the following publication: Das, K., C. Wood, M. Lin, A.A. Starkov, C. Lau, K.B. Wallace, C. Corton, and B. Abbott. Perfluoroalkyl acids-induced liver steatosis: Effects on genes controlling lipid homeostasis. TOXICOLOGY. Elsevier Science Ltd, New York, NY, USA, 378: 32-52, (2017).

  3. An improved Antarctic dataset for high resolution numerical ice sheet models (ALBMAP v1)

    Directory of Open Access Journals (Sweden)

    A. M. Le Brocq

    2010-10-01

    Full Text Available The dataset described in this paper (ALBMAP) has been created for the purposes of high-resolution numerical ice sheet modelling of the Antarctic Ice Sheet. It brings together data on the ice sheet configuration (e.g. ice surface and ice thickness) and boundary conditions, such as the surface air temperature, accumulation and geothermal heat flux. The ice thickness and basal topography are based on the BEDMAP dataset (Lythe et al., 2001); however, there are a number of inconsistencies within BEDMAP and, since its release, more data have become available. The dataset described here addresses these inconsistencies, including some novel interpolation schemes for sub-ice-shelf cavities, and incorporates some major new datasets. The inclusion of new datasets is not exhaustive; this considerable task is left for the next release of BEDMAP. However, the data and procedures documented here provide another step forward and demonstrate the issues that need addressing in a continental-scale dataset useful for high-resolution ice sheet modelling. The dataset provides an initial condition that is as close as possible to the present-day ice sheet configuration, aiding modelling of the response of the Antarctic Ice Sheet to various forcings, which are, at present, not fully understood.

  4. Volumetric CT-images improve testing of radiological image interpretation skills

    Energy Technology Data Exchange (ETDEWEB)

    Ravesloot, Cécile J., E-mail: C.J.Ravesloot@umcutrecht.nl [Radiology Department at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, Room E01.132 (Netherlands); Schaaf, Marieke F. van der, E-mail: M.F.vanderSchaaf@uu.nl [Department of Pedagogical and Educational Sciences at Utrecht University, Heidelberglaan 1, 3584 CS Utrecht (Netherlands); Schaik, Jan P.J. van, E-mail: J.P.J.vanSchaik@umcutrecht.nl [Radiology Department at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, Room E01.132 (Netherlands); Cate, Olle Th.J. ten, E-mail: T.J.tenCate@umcutrecht.nl [Center for Research and Development of Education at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht (Netherlands); Gijp, Anouk van der, E-mail: A.vanderGijp-2@umcutrecht.nl [Radiology Department at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, Room E01.132 (Netherlands); Mol, Christian P., E-mail: C.Mol@umcutrecht.nl [Image Sciences Institute at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht (Netherlands); Vincken, Koen L., E-mail: K.Vincken@umcutrecht.nl [Image Sciences Institute at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht (Netherlands)

    2015-05-15

    Rationale and objectives: Current radiology practice increasingly involves interpretation of volumetric data sets. In contrast, most radiology tests still contain only 2D images. We introduced a new testing tool that allows for stack viewing of volumetric images in our undergraduate radiology program. We hypothesized that tests with volumetric CT-images enhance test quality, in comparison with traditional completely 2D image-based tests, because they might better reflect required skills for clinical practice. Materials and methods: Two groups of medical students (n = 139; n = 143), trained with 2D and volumetric CT-images, took a digital radiology test in two versions (A and B), each containing both 2D and volumetric CT-image questions. In a questionnaire, they were asked to comment on the representativeness for clinical practice, difficulty and user-friendliness of the test questions and testing program. Students’ test scores and reliabilities, measured with Cronbach's alpha, of 2D and volumetric CT-image tests were compared. Results: Estimated reliabilities (Cronbach's alphas) were higher for volumetric CT-image scores (version A: .51 and version B: .54) than for 2D CT-image scores (version A: .24 and version B: .37). Participants found volumetric CT-image tests more representative of clinical practice, and considered them to be less difficult than 2D CT-image questions. However, in one version (A), volumetric CT-image scores (M 80.9, SD 14.8) were significantly lower than 2D CT-image scores (M 88.4, SD 10.4) (p < .001). The volumetric CT-image testing program was considered user-friendly. Conclusion: This study shows that volumetric image questions can be successfully integrated in students’ radiology testing. Results suggest that the inclusion of volumetric CT-images might improve the quality of radiology tests by positively impacting perceived representativeness for clinical practice and increasing reliability of the test.
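
    The reliability statistic used here is easy to reproduce; the sketch below implements the standard Cronbach's alpha formula over an invented item-score matrix.

        import numpy as np

        def cronbach_alpha(items):
            """items: (n_examinees, n_items) score matrix.
            alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_var = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_var / total_var)

        # Toy 0/1-scored answers for 30 students on 10 questions:
        scores = np.random.default_rng(4).integers(0, 2, size=(30, 10))
        print(round(cronbach_alpha(scores), 2))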

  5. Automated fibroglandular tissue segmentation and volumetric density estimation in breast MRI using an atlas-aided fuzzy C-means method

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Shandong; Weinstein, Susan P.; Conant, Emily F.; Kontos, Despina, E-mail: despina.kontos@uphs.upenn.edu [Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)

    2013-12-15

    Purpose: Breast magnetic resonance imaging (MRI) plays an important role in the clinical management of breast cancer. Studies suggest that the relative amount of fibroglandular (i.e., dense) tissue in the breast as quantified in MR images can be predictive of the risk for developing breast cancer, especially for high-risk women. Automated segmentation of the fibroglandular tissue and volumetric density estimation in breast MRI could therefore be useful for breast cancer risk assessment. Methods: In this work the authors develop and validate a fully automated segmentation algorithm, namely, an atlas-aided fuzzy C-means (FCM-Atlas) method, to estimate the volumetric amount of fibroglandular tissue in breast MRI. The FCM-Atlas is a 2D segmentation method working on a slice-by-slice basis. FCM clustering is first applied to the intensity space of each 2D MR slice to produce an initial voxelwise likelihood map of fibroglandular tissue. Then a prior learned fibroglandular tissue likelihood atlas is incorporated to refine the initial FCM likelihood map to achieve enhanced segmentation, from which the absolute volume of the fibroglandular tissue (|FGT|) and the relative amount (i.e., percentage) of the |FGT| relative to the whole breast volume (FGT%) are computed. The authors' method is evaluated by a representative dataset of 60 3D bilateral breast MRI scans (120 breasts) that span the full breast density range of the American College of Radiology Breast Imaging Reporting and Data System. The automated segmentation is compared to manual segmentation obtained by two experienced breast imaging radiologists. Segmentation performance is assessed by linear regression, Pearson's correlation coefficients, Student's paired t-test, and Dice's similarity coefficients (DSC). Results: The inter-reader correlation is 0.97 for FGT% and 0.95 for |FGT|. When compared to the average of the two readers’ manual segmentation, the proposed FCM-Atlas method achieves a
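
    The first stage of the pipeline, plain FCM clustering of slice intensities, can be sketched as below; this toy works on a 1D intensity sample with invented parameters, and the atlas refinement step is omitted.

        import numpy as np

        def fcm(x, n_clusters=2, m=2.0, n_iter=50):
            """Fuzzy C-means on 1D intensities; returns memberships, centers."""
            rng = np.random.default_rng(5)
            u = rng.random((x.size, n_clusters))
            u /= u.sum(axis=1, keepdims=True)
            for _ in range(n_iter):
                um = u ** m
                centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
                d = np.abs(x[:, None] - centers) + 1e-12
                inv = d ** (-2.0 / (m - 1.0))
                u = inv / inv.sum(axis=1, keepdims=True)
            return u, centers

        # Dark (fatty-like) and bright (fibroglandular-like) populations:
        x = np.concatenate([np.random.normal(0.3, 0.05, 500),
                            np.random.normal(0.7, 0.05, 200)])
        u, centers = fcm(x)
        fgt_pct = 100 * u[:, centers.argmax()].sum() / x.size  # fuzzy FGT%
        print(np.round(centers, 2), round(fgt_pct, 1))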

  6. Personalized heterogeneous deformable model for fast volumetric registration.

    Science.gov (United States)

    Si, Weixin; Liao, Xiangyun; Wang, Qiong; Heng, Pheng Ann

    2017-02-20

    Biomechanical deformable volumetric registration can help improve the safety of surgical interventions by ensuring that the operations are extremely precise. However, this technique has been limited by the accuracy and the computational efficiency of patient-specific modeling. This study presents a tissue-tissue coupling strategy based on a penalty method to model the heterogeneous behavior of a deformable body, and estimates the personalized tissue-tissue coupling parameters in a data-driven way. Moreover, considering that the computational efficiency of a biomechanical model is highly dependent on the mechanical resolution, a practical coarse-to-fine scheme is proposed to increase runtime efficiency. Particularly, a detail enrichment database is established in an offline fashion to represent the mapping relationship between the deformation results of a high-resolution hexahedral mesh extracted from the raw medical data and a newly constructed low-resolution hexahedral mesh. At runtime, the mechanical behavior of the human organ under interactions is simulated with this low-resolution hexahedral mesh, then the microstructures are synthesized by virtue of the detail enrichment database. The proposed method is validated by volumetric registration in abdominal phantom compression experiments. Our personalized heterogeneous deformable model can well describe the coupling effects between different tissues of the phantom. Compared with the high-resolution heterogeneous deformable model, the low-resolution deformable model with our detail enrichment database runs 9.4x faster, and the average target registration error is 3.42 mm, which demonstrates that the proposed method shows better volumetric registration performance than the state of the art. Our framework can well balance precision and efficiency, and has great potential to be adopted in practical augmented-reality image-guided robotic systems.

  7. Robust Machine Learning Applied to Terascale Astronomical Datasets

    CERN Document Server

    Ball, Nicholas M; Myers, Adam D

    2007-01-01

    We present recent results from the Laboratory for Cosmological Data Mining (http://lcdm.astro.uiuc.edu) at the National Center for Supercomputing Applications (NCSA) to provide robust classifications and photometric redshifts for objects in the terascale-class Sloan Digital Sky Survey (SDSS). Through a combination of machine learning in the form of decision trees, k-nearest neighbor, and genetic algorithms, the use of supercomputing resources at NCSA, and the cyberenvironment Data-to-Knowledge, we are able to provide improved classifications for over 100 million objects in the SDSS, improved photometric redshifts, and a full exploitation of the powerful k-nearest neighbor algorithm. This work is the first to apply the full power of these algorithms to contemporary terascale astronomical datasets, and the improvement over existing results is demonstrable. We discuss issues that we have encountered in dealing with data on the terascale, and possible solutions that can be implemented to deal with upcoming petasc...
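
    As a toy version of the kNN photometric-redshift idea (scikit-learn here, rather than the authors' supercomputer implementation; the colors and the redshift relation are simulated):

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor

        rng = np.random.default_rng(6)
        colors = rng.normal(size=(5000, 4))      # e.g. u-g, g-r, r-i, i-z
        z_spec = 0.4 + 0.1 * colors[:, 1] + rng.normal(0, 0.02, 5000)

        # Train on the "spectroscopic" subset, predict for the rest.
        knn = KNeighborsRegressor(n_neighbors=10, weights="distance")
        knn.fit(colors[:4000], z_spec[:4000])
        z_phot = knn.predict(colors[4000:])
        print(float(np.abs(z_phot - z_spec[4000:]).mean()))  # mean |dz|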

  8. Volumetric hemispheric ratio as a useful tool in personality psychology.

    Science.gov (United States)

    Montag, Christian; Schoene-Bake, Jan-Christoph; Wagner, Jan; Reuter, Martin; Markett, Sebastian; Weber, Bernd; Quesada, Carlos M

    2013-02-01

    The present study investigates the link between volumetric hemispheric ratios (VHRs) and personality measures in N=267 healthy participants using the Eysenck Personality Questionnaire-Revised (EPQ-R) and the BIS/BAS scales. A robust association between extraversion and VHRs was observed for gray matter in males but not females. Higher gray matter volume in the left than in the right hemisphere was associated with higher extraversion in males. The results are discussed in the context of positive emotionality and laterality of the human brain.

  9. AN ATTRIBUTION OF CAVITATION RESONANCE: VOLUMETRIC OSCILLATIONS OF CLOUD

    Institute of Scientific and Technical Information of China (English)

    ZUO Zhi-gang; LI Sheng-cai; LIU Shu-hong; LI Shuang; CHEN Hui

    2009-01-01

    In order to further verify the proposed theory of cavitation resonance, as well as to extend the investigations to the microscopic level, a series of studies are being carried out on the Warwick venturi. The analysis of the oscillation characteristics of the cavitation resonance has conclusively verified the macro-mechanism proposed by the authors through previous studies on other cavitating flows. Initial observations using a high-speed photographic approach have revealed a new attribution of cavitation resonance: the volumetric oscillation of the cavitation cloud is associated with the cavitation resonance, a collective behaviour of the bubbles in the cloud.

  10. Synthesis of Volumetric Ring Antenna Array for Terrestrial Coverage Pattern

    Science.gov (United States)

    Reyna, Alberto; Panduro, Marco A.; Del Rio Bocio, Carlos

    2014-01-01

    This paper presents a synthesis of a volumetric ring antenna array for a terrestrial coverage pattern. The synthesis considers the spacing among the rings on the X-Y plane, the positions of the rings on the X-Z plane, and uniform and concentric excitations. The optimization is carried out by implementing particle swarm optimization. Compared with previous designs, this geometry provides accurate coverage for satellite applications with a maximum reduction of the antenna hardware as well as a reduction of the side lobe level. PMID:24701150
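
    A minimal particle swarm optimizer, included only to illustrate the technique named above; for an array synthesis, the objective would encode ring spacings/positions and return side lobe level plus coverage error, which is the paper's own design and not reproduced here.

        import numpy as np

        def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            pos = np.random.uniform(-1, 1, (n_particles, dim))
            vel = np.zeros((n_particles, dim))
            pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
            gbest = pbest[pbest_val.argmin()].copy()
            for _ in range(iters):
                r1, r2 = np.random.rand(2, n_particles, dim)
                # Velocity update: inertia + cognitive pull + social pull
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos += vel
                vals = np.array([objective(p) for p in pos])
                better = vals < pbest_val
                pbest[better], pbest_val[better] = pos[better], vals[better]
                gbest = pbest[pbest_val.argmin()].copy()
            return gbest

        best = pso(lambda v: np.sum(v**2), dim=5)   # toy objective for testing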

  11. Estimation of volumetric breast density for breast cancer risk prediction

    Science.gov (United States)

    Pawluczyk, Olga; Yaffe, Martin J.; Boyd, Norman F.; Jong, Roberta A.

    2000-04-01

    Mammographic density (MD) has been shown to be a strong risk predictor for breast cancer. Compared to subjective assessment by a radiologist, computer-aided analysis of digitized mammograms provides a quantitative and more reproducible method for assessing breast density. However, the current methods of estimating breast density based on the area of bright signal in a mammogram do not reflect the true, volumetric quantity of dense tissue in the breast. A computerized method to estimate the amount of radiographically dense tissue in the overall volume of the breast has been developed to provide an automatic, user-independent tool for breast cancer risk assessment. The procedure for volumetric density estimation consists of first correcting the image for inhomogeneity, then performing a volume density calculation. First, optical sensitometry is used to convert all images to the logarithm of relative exposure (LRE), in order to simplify the image correction operations. The field non-uniformity correction, which takes into account heel effect, inverse square law, path obliquity and intrinsic field and grid non-uniformity, is obtained by imaging a spherical section PMMA phantom. The processed LRE image of the phantom is then used as a correction offset for actual mammograms. From information about the thickness and placement of the breast, as well as the parameters of a breast-like calibration step wedge placed in the mammogram, MD of the breast is calculated. Post processing and a simple calibration phantom enable user-independent, reliable and repeatable volumetric estimation of density in breast-equivalent phantoms. Initial results obtained on known density phantoms show the estimation to vary less than 5% in MD from the actual value. This can be compared to estimated mammographic density differences of 30% between the true and non-corrected values. Since a more simplistic breast density measurement based on the projected area has been shown to be a strong indicator
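
    A hedged sketch of the two correction steps described above: map optical density to log relative exposure (LRE) through the measured sensitometric curve, then use the LRE image of the uniform phantom as an additive flat-field offset (additive because the data are in log-exposure space). The calibration arrays are placeholders, not the study's values.

        import numpy as np

        def to_lre(optical_density, od_cal, lre_cal):
            # od_cal must be increasing; paired samples of the sensitometric curve
            return np.interp(optical_density, od_cal, lre_cal)

        def correct_field(lre_image, phantom_lre_image):
            # Subtract the spatial non-uniformity recorded by the phantom image
            return lre_image - (phantom_lre_image - phantom_lre_image.mean())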

  12. Designing the colorectal cancer core dataset in Iran

    Directory of Open Access Journals (Sweden)

    Sara Dorri

    2017-01-01

    Full Text Available Background: There is no need to explain the importance of collecting, recording, and analyzing disease information in any health organization. In this regard, the systematic design of standard data sets can help to record uniform and consistent information and can create interoperability between health care systems. The main purpose of this study was to design a core dataset to record colorectal cancer information in Iran. Methods: For the design of the colorectal cancer core data set, a combination of literature review and expert consensus was used. In the first phase, a draft of the data set was designed based on a colorectal cancer literature review and comparative studies. In the second phase, this data set was evaluated by experts from different disciplines, such as medical informatics, oncology, and surgery, and their comments and opinions were collected. In the third phase, the refined data set was evaluated again by experts and the final data set was proposed. Results: In the first phase, based on the literature review, a draft set of 85 data elements was designed. In the second phase this data set was evaluated by experts and supplementary information was offered by professionals in subgroups, especially in the treatment part; in this phase the total number of elements reached 93. In the third phase, the evaluation was conducted by experts, and the data set was finalized in five main parts: demographic information, diagnostic information, treatment information, clinical status assessment information, and clinical trial information. Conclusion: In this study a comprehensive core data set for colorectal cancer was designed. By facilitating the exchange of health information, this dataset can be useful for collecting colorectal cancer information. Designing such data sets for similar diseases can help providers to collect standard data from patients and can accelerate retrieval from storage systems.

  13. FTSPlot: fast time series visualization for large datasets.

    Directory of Open Access Journals (Sweden)

    Michael Riss

    Full Text Available The analysis of electrophysiological recordings often involves visual inspection of time series data to locate specific experiment epochs, mask artifacts, and verify the results of signal processing steps, such as filtering or spike detection. Long-term experiments with continuous data acquisition generate large amounts of data. Rapid browsing through these massive datasets poses a challenge to conventional data plotting software because the plotting time increases proportionately to the increase in the volume of data. This paper presents FTSPlot, which is a visualization concept for large-scale time series datasets using techniques from the field of high performance computer graphics, such as hierarchic level of detail and out-of-core data handling. In a preprocessing step, time series data, event, and interval annotations are converted into an optimized data format, which then permits fast, interactive visualization. The preprocessing step has a computational complexity of O(n × log(N)); the visualization itself can be done with a complexity of O(1) and is therefore independent of the amount of data. A demonstration prototype has been implemented and benchmarks show that the technology is capable of displaying large amounts of time series data, event, and interval annotations lag-free with < 20 ms delay. The current 64-bit implementation theoretically supports datasets with up to 2^64 bytes, on the x86_64 architecture currently up to 2^48 bytes are supported, and benchmarks have been conducted with 2^40 bytes (1 TiB) or 1.3 × 10^11 double-precision samples. The presented software is freely available and can be included as a Qt GUI component in future software projects, providing a standard visualization method for long-term electrophysiological experiments.
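
    The hierarchic level-of-detail idea can be sketched as a precomputed pyramid of (min, max) block summaries, so that any zoom level is drawn from a suitably coarse level in constant time per screen pixel; this is a simplified, assumed variant, not FTSPlot's actual data format.

        import numpy as np

        def build_minmax_pyramid(samples, base_block=1024):
            # Assumes len(samples) is base_block times a power of two
            mins = samples.reshape(-1, base_block).min(axis=1)
            maxs = samples.reshape(-1, base_block).max(axis=1)
            levels = [(mins, maxs)]
            while mins.size > 1:
                mins = mins.reshape(-1, 2).min(axis=1)
                maxs = maxs.reshape(-1, 2).max(axis=1)
                levels.append((mins, maxs))
            return levels   # levels[k] summarizes blocks of base_block * 2**k samples

        pyramid = build_minmax_pyramid(np.random.randn(2**20))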

  14. Rapid global fitting of large fluorescence lifetime imaging microscopy datasets.

    Directory of Open Access Journals (Sweden)

    Sean C Warren

    Full Text Available Fluorescence lifetime imaging (FLIM) is widely applied to obtain quantitative information from fluorescence signals, particularly using Förster Resonant Energy Transfer (FRET) measurements to map, for example, protein-protein interactions. Extracting FRET efficiencies or population fractions typically entails fitting data to complex fluorescence decay models but such experiments are frequently photon constrained, particularly for live cell or in vivo imaging, and this leads to unacceptable errors when analysing data on a pixel-wise basis. Lifetimes and population fractions may, however, be more robustly extracted using global analysis to simultaneously fit the fluorescence decay data of all pixels in an image or dataset to a multi-exponential model under the assumption that the lifetime components are invariant across the image (dataset). This approach is often considered to be prohibitively slow and/or computationally expensive but we present here a computationally efficient global analysis algorithm for the analysis of time-correlated single photon counting (TCSPC) or time-gated FLIM data based on variable projection. It makes efficient use of both computer processor and memory resources, requiring less than a minute to analyse time series and multiwell plate datasets with hundreds of FLIM images on standard personal computers. This lifetime analysis takes account of repetitive excitation, including fluorescence photons excited by earlier pulses contributing to the fit, and is able to accommodate time-varying backgrounds and instrument response functions. We demonstrate that this global approach allows us to readily fit time-resolved fluorescence data to complex models including a four-exponential model of a FRET system, for which the FRET efficiencies of the two species of a bi-exponential donor are linked, and polarisation-resolved lifetime data, where a fluorescence intensity and bi-exponential anisotropy decay model is applied to the analysis
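
    The essence of the variable projection approach described above can be sketched in a few lines: the lifetimes are shared globally and optimized nonlinearly, while per-pixel amplitudes are solved as a linear subproblem at each step. This toy version, on synthetic data, omits the IRF convolution, repetitive excitation, and background handling the paper accounts for.

        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0, 10e-9, 256)
        rng = np.random.default_rng(0)
        amps_true = rng.uniform(0.2, 1.0, (100, 2))        # per-pixel amplitudes
        decays = amps_true @ np.exp(-t[None, :] / np.array([[0.7e-9], [3.0e-9]]))

        def residuals(taus, t, decays):
            basis = np.exp(-t[None, :] / taus[:, None])    # (2, n_t) decay basis
            # Linear subproblem: best amplitudes for the current lifetimes
            amps, *_ = np.linalg.lstsq(basis.T, decays.T, rcond=None)
            return (decays - (basis.T @ amps).T).ravel()

        fit = least_squares(residuals, x0=np.array([0.5e-9, 5e-9]),
                            args=(t, decays), bounds=(1e-12, 1e-7))
        print(fit.x)   # recovered shared lifetimes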

  15. Conformal Pad-Printing Electrically Conductive Composites onto Thermoplastic Hemispheres: Toward Sustainable Fabrication of 3-Cents Volumetric Electrically Small Antennas.

    Directory of Open Access Journals (Sweden)

    Haoyi Wu

    Full Text Available Electrically small antennas (ESAs) are becoming one of the key components in compact wireless devices for telecommunications, defence, and aerospace systems, especially the spherical type, whose geometric layout more closely approaches Chu's limit and thus yields significant bandwidth improvements relative to linear and planar counterparts. Yet broad applications of volumetric ESAs are still hindered because low-cost fabrication has remained a tremendous challenge. Here we report a state-of-the-art technology for transferring electrically conductive composites (ECCs) from a planar mould to a volumetric thermoplastic substrate by pad-printing without pattern distortion, benefiting from the excellent properties of the ECCs as well as the printing-calibration method that we developed. The antenna samples prepared in this way meet the stringent requirements of an ESA (ka as low as 0.32, with antenna efficiency as high as 57%), suggesting that volumetric electronic components, i.e. antennas, can be produced in such a simple, green, and cost-effective way. This work can be of interest for the development of studies on green and high-performance wireless communication devices.

  16. Volumetric structural magnetic resonance imaging findings in pediatric posttraumatic stress disorder and obsessive-compulsive disorder: a systematic review

    Directory of Open Access Journals (Sweden)

    Fatima eAhmed

    2012-12-01

    Full Text Available Objectives: Structural magnetic resonance imaging (sMRI) studies of anxiety disorders in children and adolescents are limited. Posttraumatic stress disorder (PTSD) and obsessive-compulsive disorder (OCD) have been best studied in this regard. We systematically reviewed structural neuroimaging findings in pediatric PTSD and OCD. Methods: The literature was reviewed for all sMRI studies examining volumetric parameters using the PubMed, ScienceDirect and PsychInfo databases, with no limit on the time frame of publication. Nine studies in pediatric PTSD and 6 in OCD were suitable for inclusion. Results: Volumetric findings were inconsistent in both disorders. In PTSD, findings suggest increased as well as decreased volumes of the prefrontal cortex (PFC) and corpus callosum, whilst in OCD, studies indicate a volumetric increase of the putamen, with inconsistent findings for the anterior cingulate cortex (ACC) and frontal regions. Conclusions: Methodological differences may account for some of this inconsistency, and additional volume-based studies in pediatric anxiety disorders using more uniform approaches are needed.

  17. New Fuzzy Support Vector Machine for the Class Imbalance Problem in Medical Datasets Classification

    Directory of Open Access Journals (Sweden)

    Xiaoqing Gu

    2014-01-01

    Full Text Available In medical dataset classification, the support vector machine (SVM) is considered to be one of the most successful methods. However, most real-world medical datasets contain some outliers/noise, and the data often have class imbalance problems. In this paper, a fuzzy support vector machine (FSVM) for the class imbalance problem (called FSVM-CIP) is presented, which can be seen as a modified class of FSVM obtained by extending manifold regularization and assigning different misclassification costs to the two classes. The proposed FSVM-CIP can handle the class imbalance problem in the presence of outliers/noise and enhance the locality maximum margin. Five real-world medical datasets (breast, heart, hepatitis, BUPA liver, and Pima diabetes) from the UCI medical database are employed to illustrate the method presented in this paper. Experimental results on these datasets show the superior or comparable effectiveness of FSVM-CIP.
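
    FSVM-CIP itself adds fuzzy memberships and manifold regularization; as a simpler stand-in for the same cost-asymmetry idea, a standard SVM can be given class-dependent misclassification costs via class weights. This is a baseline sketch, not the paper's method.

        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        X, y = load_breast_cancer(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        # 'balanced' weights classes inversely to their frequency, i.e. a higher
        # misclassification cost for the minority class
        clf = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)
        print(clf.score(X_te, y_te))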

  18. A collection of Australian Drosophila datasets on climate adaptation and species distributions.

    Science.gov (United States)

    Hangartner, Sandra B; Hoffmann, Ary A; Smith, Ailie; Griffin, Philippa C

    2015-11-24

    The Australian Drosophila Ecology and Evolution Resource (ADEER) collates Australian datasets on drosophilid flies, which are aimed at investigating questions around climate adaptation, species distribution limits and population genetics. Australian drosophilid species are diverse in climatic tolerance, geographic distribution and behaviour. Many species are restricted to the tropics, a few are temperate specialists, and some have broad distributions across climatic regions. Whereas some species show adaptability to climate change through genetic and plastic changes, other species have limited adaptive capacity. This knowledge has been used to identify traits and genetic polymorphisms involved in climate change adaptation and to build predictive models of responses to climate change. ADEER brings together 103 datasets from 39 studies published between 1982 and 2013 in a single online resource. All datasets can be downloaded freely in full, along with maps and other visualisations. These historical datasets are preserved for future studies, which will be especially useful for assessing climate-related changes over time.

  19. Evaluation of linear interpolation method for missing value on solar radiation dataset in Perlis

    Energy Technology Data Exchange (ETDEWEB)

    Saaban, Azizan; Zainudin, Lutfi [School of Science Quantitative, UUMCAS, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia); Bakar, Mohd Nazari Abu [Faculty of Applied Science, Universiti Teknologi MARA, 02600 Arau, Perlis (Malaysia)

    2015-05-15

    This paper intends to reveal the ability of the linear interpolation method to predict missing values in solar radiation time series. A reliable dataset depends on a complete observed time series: the absence of radiation data alters the long-term variation of solar radiation measurements, and such gaps raise the chance of biased outputs in modelling and in the validation process. The completeness of the observed dataset is therefore significantly important for data analysis. Gaps in, and unreliability of, time series solar radiation data are widespread and have become the main problematic issue; however, only a limited number of studies have been carried out that give full attention to estimating missing values in solar radiation datasets.
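
    A minimal sketch of the method under evaluation: linear interpolation of gaps in a radiation time series (illustrative timestamps and values, not the Perlis dataset).

        import numpy as np
        import pandas as pd

        idx = pd.date_range("2014-01-01", periods=8, freq="h")
        radiation = pd.Series([310.0, 342.0, np.nan, np.nan, 401.0,
                               388.0, np.nan, 295.0], index=idx)
        filled = radiation.interpolate(method="time")   # linear in elapsed time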

  20. CoVennTree: A new method for the comparative analysis of large datasets

    Directory of Open Access Journals (Sweden)

    Steffen C. Lott

    2015-02-01

    Full Text Available The visualization of massive datasets, such as those resulting from comparative metatranscriptome analyses or the analysis of microbial population structures using ribosomal RNA sequences, is a challenging task. We developed a new method called CoVennTree (Comparative weighted Venn Tree) that simultaneously compares up to three multifarious datasets by aggregating and propagating information from the bottom to the top level and produces a graphical output in Cytoscape. With the introduction of weighted Venn structures, the contents and relationships of various datasets can be correlated and simultaneously aggregated without losing information. We demonstrate the suitability of this approach using a dataset of 16S rDNA sequences obtained from microbial populations at three different depths of the Gulf of Aqaba in the Red Sea. CoVennTree has been integrated into the Galaxy ToolShed and can be directly downloaded and integrated into the user instance.

  1. Floating volumetric image formation using a dihedral corner reflector array device.

    Science.gov (United States)

    Miyazaki, Daisuke; Hirano, Noboru; Maeda, Yuki; Yamamoto, Siori; Mukai, Takaaki; Maekawa, Satoshi

    2013-01-01

    A volumetric display system using an optical imaging device consisting of numerous dihedral corner reflectors placed perpendicular to the surface of a metal plate is proposed. Image formation by the dihedral corner reflector array (DCRA) is free from distortion and has no focal length. In the proposed volumetric display system, a two-dimensional real image is moved by a mirror scanner to scan a three-dimensional (3D) space. Cross-sectional images of a 3D object are displayed in accordance with the position of the image plane, and a volumetric image is observed as a stack of the cross-sectional images. The use of the DCRA yields a compact system configuration and volumetric real-image generation with very low distortion. An experimental volumetric display system including a DCRA, a galvanometer mirror, and a digital micro-mirror device was constructed to verify the proposed method. A volumetric image consisting of 1024×768×400 voxels was formed by the experimental system.

  2. ArcHydro global datasets for Idaho StreamStats

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This dataset consists of a personal geodatabase containing several vector datasets. This database contains the information needed to link the HUCs together so a...

  3. Strontium removal jar test dataset for all figures and tables.

    Data.gov (United States)

    U.S. Environmental Protection Agency — The datasets were used to demonstrate strontium removal under various water quality and treatment conditions. This dataset is associated with the...

  4. Digital Rocks Portal: a sustainable platform for imaged dataset sharing, translation and automated analysis

    Science.gov (United States)

    Prodanovic, M.; Esteva, M.; Hanlon, M.; Nanda, G.; Agarwal, P.

    2015-12-01

    Recent advances in imaging have provided a wealth of 3D datasets that reveal pore space microstructure (nm to cm length scale) and allow investigation of nonlinear flow and mechanical phenomena from first principles using numerical approaches. This framework has popularly been called "digital rock physics". Researchers, however, have trouble storing and sharing the datasets both due to their size and the lack of standardized image types and associated metadata for volumetric datasets. This impedes scientific cross-validation of the numerical approaches that characterize large-scale porous media properties, as well as the development of the multiscale approaches required for correct upscaling. A single research group typically specializes in an imaging modality and/or related modeling on a single length scale, and the lack of data-sharing infrastructure makes it difficult to integrate different length scales. We developed a sustainable, open and easy-to-use repository called the Digital Rocks Portal, which (1) organizes images and related experimental measurements of different porous materials, and (2) improves access to them for a wider community of geoscience or engineering researchers not necessarily trained in computer science or data analysis. Once widely accepted, the repository will jumpstart productivity and enable scientific inquiry and engineering decisions founded on a data-driven basis. This is the first repository of its kind. We show initial results on incorporating essential software tools and pipelines that make it easier for researchers to store and reuse data, and for educators to quickly visualize and illustrate concepts to a wide audience. For data sustainability and continuous access, the portal is implemented within the reliable, 24/7 maintained High Performance Computing Infrastructure supported by the Texas Advanced Computing Center (TACC) at the University of Texas at Austin. Long-term storage is provided through the University of Texas System Research

  5. Statistics of large detrital geochronology datasets

    Science.gov (United States)

    Saylor, J. E.; Sundell, K. E., II

    2014-12-01

    Implementation of quantitative metrics for inter-sample comparison of detrital geochronological data sets has lagged behind the increase in data set size and the ability to identify sub-populations and quantify their relative proportions. Visual comparison, or the application of some statistical approaches, particularly the Kolmogorov-Smirnov (KS) test, which initially appeared to provide a simple way of comparing detrital data sets, may be inadequate to quantify their similarity. We evaluate several proposed metrics by applying them to four large synthetic datasets drawn randomly from a parent dataset, as well as a recently published large empirical dataset consisting of four separate (n = ~1000 each) analyses of the same rock sample. Visual inspection of the cumulative probability density functions (CDF) and relative probability density functions (PDF) confirms an increasingly close correlation between data sets as the number of analyses increases. However, as data set size increases the KS test yields lower mean p-values, implying greater confidence that the samples were not drawn from the same parent population, and high standard deviations, despite minor decreases in the mean difference between sample CDFs. We attribute this to the increasing sensitivity of the KS test when applied to larger data sets, which in turn limits its use for quantitative inter-sample comparison in detrital geochronology. Proposed alternative metrics, including Similarity, Likeness (the complement of Mismatch), and the coefficient of determination (R2) of a cross-plot of PDF quantiles, point to an increasingly close correlation between data sets with increasing size, although they are most sensitive at different ranges of data set sizes. The Similarity test is most sensitive to variation in data sets with n < 100 and is relatively insensitive to further convergence between larger data sets. The Likeness test reaches 90% of its asymptotic maximum at data set sizes of n = 200. The PDF cross-plot R2 value
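
    The KS-test sensitivity discussed above reflects a well-known property: for two populations that differ only slightly, the p-value collapses as sample size grows. The sketch below reproduces this with a synthetic two-mode age distribution (illustrative numbers only).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def sample(n, frac):
            # Two-mode "age" distribution (Ma); frac sets the mixing proportion
            k = rng.binomial(n, frac)
            return np.concatenate([rng.normal(1100, 40, k),
                                   rng.normal(500, 60, n - k)])

        # Tiny difference in mixing (70% vs 72%): p-value shrinks with n
        for n in (100, 1000, 10000):
            d, p = stats.ks_2samp(sample(n, 0.70), sample(n, 0.72))
            print(n, round(d, 3), round(p, 4))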

  6. Volumetric display containing multiple two-dimensional color motion pictures

    Science.gov (United States)

    Hirayama, R.; Shiraki, A.; Nakayama, H.; Kakue, T.; Shimobaba, T.; Ito, T.

    2014-06-01

    We have developed an algorithm which can record multiple two-dimensional (2-D) gradated projection patterns in a single three-dimensional (3-D) object. Each recorded pattern has an individual projection direction and can only be seen from that direction. The proposed algorithm has two important features: the number of recorded patterns is theoretically infinite, and no meaningful pattern can be seen outside of the projection directions. In this paper, we extended the algorithm to record multiple 2-D projection patterns in color. There are two popular ways of color mixing: additive and subtractive. Additive color mixing, used for mixing light, is based on RGB colors; subtractive color mixing, used for mixing inks, is based on CMY colors. We devised two coloring methods, one based on additive and one on subtractive mixing, performed numerical simulations of both, and confirmed their effectiveness. We also fabricated two types of volumetric display and applied the proposed algorithm to them. One is a cubic display constructed from light-emitting diodes (LEDs) in an 8×8×8 array, whose lighting patterns are controlled by a microcomputer board. The other is made of a 7×7 array of threads, each illuminated by a projector connected to a PC. As a result of the implementation, we succeeded in recording multiple 2-D color motion pictures in the volumetric displays. Our algorithm can be applied to digital signage, media art and so forth.

  7. Volumetric three-dimensional display system with rasterization hardware

    Science.gov (United States)

    Favalora, Gregg E.; Dorval, Rick K.; Hall, Deirdre M.; Giovinco, Michael; Napoli, Joshua

    2001-06-01

    An 8-color multiplanar volumetric display is being developed by Actuality Systems, Inc. It will be capable of utilizing an image volume greater than 90 million voxels, which we believe is the greatest utilizable voxel set of any volumetric display constructed to date. The display is designed to be used for molecular visualization, mechanical CAD, e-commerce, entertainment, and medical imaging. As such, it contains a new graphics processing architecture, novel high-performance line- drawing algorithms, and an API similar to a current standard. Three-dimensional imagery is created by projecting a series of 2-D bitmaps ('image slices') onto a diffuse screen that rotates at 600 rpm. Persistence of vision fuses the slices into a volume-filling 3-D image. A modified three-panel Texas Instruments projector provides slices at approximately 4 kHz, resulting in 8-color 3-D imagery comprised of roughly 200 radially-disposed slices which are updated at 20 Hz. Each slice has a resolution of 768 by 768 pixels, subtending 10 inches. An unusual off-axis projection scheme incorporating tilted rotating optics is used to maintain good focus across the projection screen. The display electronics includes a custom rasterization architecture which converts the user's 3- D geometry data into image slices, as well as 6 Gbits of DDR SDRAM graphics memory.

  8. Myocardial kinematics based on tagged MRI from volumetric NURBS models

    Science.gov (United States)

    Tustison, Nicholas J.; Amini, Amir A.

    2004-04-01

    We present current research in which left ventricular deformation is estimated from tagged cardiac magnetic resonance imaging using volumetric deformable models constructed from nonuniform rational B-splines (NURBS). From a set of short and long axis images at end-diastole, the initial NURBS model is constructed by fitting two surfaces with the same parameterization to the set of epicardial and endocardial contours from which a volumetric model is created. Using normal displacements of the three sets of orthogonal tag planes as well as displacements of both tag line and contour/tag line intersection points, one can solve for the optimal homogeneous coordinates, in a least squares sense, of the control points of the NURBS model at a later time point using quadratic programming. After fitting to all time points of data, lofting the NURBS model at each time point creates a comprehensive 4-D NURBS model. From this model, we can extract 3-D myocardial displacement fields and corresponding strain maps, which are local measures of non-rigid deformation.

  9. Volumetric breast density affects performance of digital screening mammography.

    Science.gov (United States)

    Wanders, Johanna O P; Holland, Katharina; Veldhuis, Wouter B; Mann, Ritse M; Pijnappel, Ruud M; Peeters, Petra H M; van Gils, Carla H; Karssemeijer, Nico

    2017-02-01

    To determine to what extent automatically measured volumetric mammographic density influences screening performance when using digital mammography (DM). We collected a consecutive series of 111,898 DM examinations (2003-2011) from one screening unit of the Dutch biennial screening program (age 50-75 years). Volumetric mammographic density was automatically assessed using Volpara. We determined screening performance measures for four density categories comparable to the American College of Radiology (ACR) breast density categories. Of all the examinations, 21.6% were categorized as density category 1 ('almost entirely fatty') and 41.5, 28.9, and 8.0% as categories 2-4 ('extremely dense'), respectively. We identified 667 screen-detected and 234 interval cancers. Interval cancer rates were 0.7, 1.9, 2.9, and 4.4‰ and false positive rates were 11.2, 15.1, 18.2, and 23.8‰ for categories 1-4, respectively (both p-trend < 0.001). Screening sensitivity was lower in the higher density categories: 85.7, 77.6, 69.5, and 61.0% for categories 1-4, respectively (p-trend < 0.001). Volumetric mammographic density, automatically measured on digital mammograms, impacts screening performance measures along the same patterns as established with ACR breast density categories. Since measuring breast density fully automatically has much higher reproducibility than visual assessment, this automatic method could help with implementing density-based supplemental screening.

  10. Volumetric properties of human islet amyloid polypeptide in liquid water.

    Science.gov (United States)

    Brovchenko, I; Andrews, M N; Oleinikova, A

    2010-04-28

    The volumetric properties of human islet amyloid polypeptide (hIAPP) in water were studied over a wide temperature range by computer simulations. The intrinsic density ρ_p and the intrinsic thermal expansion coefficient α_p of hIAPP were evaluated by taking into account the difference between the volumetric properties of hydration and bulk water. The density of hydration water ρ_h was found to decrease almost linearly with temperature upon heating, and its thermal expansion coefficient was found to be notably higher than that of bulk water. The peptide surface exposed to water is more hydrophobic, and its ρ_h is smaller, in the conformation with a larger number of intrapeptide hydrogen bonds. The two hIAPP peptides studied (with and without the disulfide bridge) show negative α_p, which is close to zero at 250 K and decreases to approximately -1.5 × 10^-3 K^-1 upon heating to 450 K. The analysis of various structural properties of the peptides shows a correlation between the intrinsic peptide volumes and the number of intrapeptide hydrogen bonds. The obtained negative values of α_p can be attributed to the shrinkage of the inner voids of the peptides upon heating.
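
    For reference, the intrinsic thermal expansion coefficient quoted above is the standard volumetric derivative (with V_p the intrinsic peptide volume; this is the textbook definition, not a formula reproduced from the paper):

        \alpha_p \;=\; \frac{1}{V_p} \left( \frac{\partial V_p}{\partial T} \right)_{P}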

  11. Volumetric verification of multiaxis machine tool using laser tracker.

    Science.gov (United States)

    Aguado, Sergio; Samper, David; Santolaria, Jorge; Aguilar, Juan José

    2014-01-01

    This paper aims to present a method of volumetric verification for machine tools with linear and rotary axes using a laser tracker. Beyond a method for a particular machine, it presents a methodology that can be used for any machine type. The paper presents the schema and kinematic model of a machine with three axes of movement (two linear axes and one rotary axis), including the measurement system and the nominal rotation matrix of the rotary axis. Using this, the volumetric error of the machine tool is obtained, and nonlinear optimization techniques are employed to improve the accuracy of the machine tool. The verification provides a mathematical, rather than physical, compensation, in less time than other verification methods, by means of the indirect measurement of the geometric errors of the machine from the linear and rotary axes. This paper presents an extensive study of the appropriateness and drawbacks of the regression function employed depending on the types of movement of the axes of any machine. In the same way, strengths and weaknesses of measurement methods and optimization techniques are presented, depending on the space available to place the measurement system. These studies provide the most appropriate strategies to verify each machine tool, taking into consideration its configuration and its available work space.

  12. The Volumetric Rate of Superluminous Supernovae at z~1

    CERN Document Server

    Prajs, S; Smith, M; Levan, A; Karpenka, N V; Edwards, T D P; Walker, C R; Wolf, W M; Balland, C; Carlberg, R; Howell, A; Lidman, C; Pain, R; Pritchet, C; Ruhlmann-Kleider, V

    2016-01-01

    We present a measurement of the volumetric rate of superluminous supernovae (SLSNe) at z~1, measured using archival data from the first four years of the Canada-France-Hawaii Telescope Supernova Legacy Survey (SNLS). We develop a method for the photometric classification of SLSNe to construct our sample. Our sample includes two previously spectroscopically-identified objects, and a further new candidate selected using our classification technique. We use the point-source recovery efficiencies from Perrett et al. (2010) and a Monte Carlo approach to calculate the rate based on our SLSN sample. We find that the three identified SLSNe from SNLS give a rate of 91 (+76/-36) SNe/yr/Gpc^3 at a volume-weighted redshift of z=1.13. This is equivalent to 2.2 (+1.8/-0.9) × 10^-4 of the volumetric core-collapse supernova rate at the same redshift. When combined with other rate measurements from the literature, we show that the rate of SLSNe increases with redshift in a manner consistent with that of the cosmic star formati...
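
    Schematically, a volumetric rate of this kind is the number of detections divided by the effective surveyed volume-time, corrected for detection efficiency. The numbers below are placeholders for illustration only, not the SNLS values.

        # rate = N_detected / (efficiency * comoving volume * control time)
        n_detected = 3            # photometrically classified SLSNe (placeholder)
        efficiency = 0.4          # mean recovery efficiency from simulations
        volume_gpc3 = 0.05        # comoving volume surveyed within the z range
        control_time_yr = 2.0     # effective rest-frame survey duration
        rate = n_detected / (efficiency * volume_gpc3 * control_time_yr)
        print(f"{rate:.0f} SNe/yr/Gpc^3")   # uncertainties would be Poisson on N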

  13. Visualization of Cosmological Particle-Based Datasets

    CERN Document Server

    Navrátil, Paul Arthur; Bromm, Volker

    2007-01-01

    We describe our visualization process for a particle-based simulation of the formation of the first stars and their impact on cosmic history. The dataset consists of several hundred time-steps of point simulation data, with each time-step containing approximately two million point particles. For each time-step, we interpolate the point data onto a regular grid using a method taken from the radiance estimate of photon mapping. We import the resulting regular grid representation into ParaView, with which we extract isosurfaces across multiple variables. Our images provide insights into the evolution of the early universe, tracing the cosmic transition from an initially homogeneous state to one of increasing complexity. Specifically, our visualizations capture the build-up of regions of ionized gas around the first stars, their evolution, and their complex interactions with the surrounding matter. These observations will guide the upcoming James Webb Space Telescope, the key astronomy mission of the next decade.

  14. Predicting dataset popularity for the CMS experiment

    Science.gov (United States)

    Kuznetsov, V.; Li, T.; Giommi, L.; Bonacorsi, D.; Wildish, T.

    2016-10-01

    The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at the frontier of High Energy Physics, searching for new phenomena and making discoveries. Even though computing plays a significant role in physics analysis, we rarely use its data to predict the behavior of the system itself. Basic information about computing resources, user activities and site utilization can be very useful for improving the throughput of the system and its management. In this paper, we discuss a first CMS analysis of dataset popularity based on CMS meta-data, which can be used as a model for dynamic data placement and provide the foundation of a data-driven approach for the CMS computing infrastructure.

  15. Internationally coordinated glacier monitoring: strategy and datasets

    Science.gov (United States)

    Hoelzle, Martin; Armstrong, Richard; Fetterer, Florence; Gärtner-Roer, Isabelle; Haeberli, Wilfried; Kääb, Andreas; Kargel, Jeff; Nussbaumer, Samuel; Paul, Frank; Raup, Bruce; Zemp, Michael

    2014-05-01

    (c) the Randolph Glacier Inventory (RGI), a new and globally complete digital dataset of outlines from about 180,000 glaciers with some meta-information, which has been used for many applications relating to the IPCC AR5 report. Concerning glacier changes, a database (Fluctuations of Glaciers) exists containing information about mass balance, front variations including past reconstructed time series, geodetic changes and special events. Annual mass balance reporting contains information for about 125 glaciers, with a subset of 37 glaciers with continuous observational series since 1980 or earlier. Front variation observations of around 1800 glaciers are available from most of the mountain ranges world-wide. This database was recently updated with 26 glaciers having an unprecedented dataset of length changes from reconstructions of well-dated historical evidence going back as far as the 16th century. Geodetic observations of about 430 glaciers are available. The database is completed by a dataset containing information on special events, including glacier surges, glacier lake outbursts, ice avalanches, eruptions of ice-clad volcanoes, etc., related to about 200 glaciers. A special database of glacier photographs contains 13,000 pictures from around 500 glaciers, some of them dating back to the 19th century. A key challenge is to combine and extend the traditional observations with fast evolving datasets from new technologies.

  16. Predicting dataset popularity for the CMS experiment

    CERN Document Server

    INSPIRE-00005122; Li, Ting; Giommi, Luca; Bonacorsi, Daniele; Wildish, Tony

    2016-01-01

    The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at the frontier of High Energy Physics, searching for new phenomena and making discoveries. Even though computing plays a significant role in physics analysis, we rarely use its data to predict the behavior of the system itself. Basic information about computing resources, user activities and site utilization can be very useful for improving the throughput of the system and its management. In this paper, we discuss a first CMS analysis of dataset popularity based on CMS meta-data, which can be used as a model for dynamic data placement and provide the foundation of a data-driven approach for the CMS computing infrastructure.

  17. SAGE Research Methods Datasets: A Data Analysis Educational Tool.

    Science.gov (United States)

    Vardell, Emily

    2016-01-01

    SAGE Research Methods Datasets (SRMD) is an educational tool designed to offer users the opportunity to obtain hands-on experience with data analysis. Users can search for and browse authentic datasets by method, discipline, and data type. Each of the datasets is supplemented with educational material on the research method and clear guidelines for how to approach data analysis.

  18. The NCS code of practice for the quality assurance and control for volumetric modulated arc therapy

    Science.gov (United States)

    Mans, Anton; Schuring, Danny; Arends, Mark P.; Vugts, Cornelia A. J. M.; Wolthaus, Jochem W. H.; Lotz, Heidi T.; Admiraal, Marjan; Louwe, Rob J. W.; Öllers, Michel C.; van de Kamer, Jeroen B.

    2016-10-01

    In 2010, the NCS (Netherlands Commission on Radiation Dosimetry) installed a subcommittee to develop guidelines for quality assurance and control for volumetric modulated arc therapy (VMAT) treatments. The report (published in 2015) has been written by Dutch medical physicists and has therefore, inevitably, a Dutch focus. This paper is a condensed version of these guidelines, the full report in English is freely available from the NCS website www.radiationdosimetry.org. After describing the transition from IMRT to VMAT, the paper addresses machine quality assurance (QA) and treatment planning system (TPS) commissioning for VMAT. The final section discusses patient specific QA issues such as the use of class solutions, measurement devices and dose evaluation methods.

  19. Simulating Volumetric Pricing for Irrigation Water Operational Cost Recovery under Complete and Perfect Information

    Directory of Open Access Journals (Sweden)

    Luca Giraldo

    2014-05-01

    Full Text Available This study evaluated the implementation of a volumetric, cost-recovery pricing method for irrigation water under symmetric information conditions, without including implementation costs. The study was carried out in two steps. First, a cost function was estimated for irrigation water supplied by a water user association to a typical Mediterranean agricultural area, based on a translog functional form. Second, the economic impact of a pricing method designed according to this cost function was simulated using a mathematical programming territorial model for the same agricultural area, and the outcomes were compared with those of the current pricing method. The impacts of the proposed pricing method are discussed in terms of its neutral effect on total farm income and, conversely, the importance of its redistributive effects.
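
    The translog specification mentioned above has the standard form (shown here in its generic textbook form, with cost C and quantities x_i; the study's actual regressors and estimates are its own):

        \ln C = \alpha_0 + \sum_i \alpha_i \ln x_i
              + \tfrac{1}{2} \sum_i \sum_j \beta_{ij} \ln x_i \ln x_j ,
        \qquad \beta_{ij} = \beta_{ji}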

  20. Damage in Concrete and its Detection by Use of Stress-Volumetric Strain Diagram

    Directory of Open Access Journals (Sweden)

    Jerga Ján

    2014-05-01

    Full Text Available The reliable determination of the degree of damage of concrete in a structure is difficult, and not seldom are short-term compressive strengths taken as the real strengths of the concrete. Because the load history of the structure is generally unknown, we do not know whether stresses in the vicinity of the peak of the stress-strain diagram have been reached. The strength under sustained or repeated loading would then be significantly lower than that obtained from tests performed on intact samples. The diagnosis of concrete damage is further impeded by environmental effects, which result in anisotropy of the development of micro cracks. The possibility is pointed out of using the characteristics of the stress-volumetric strain diagram for assessing the condition of the material, with the perspective of application to the determination of the residual long-term strength of concrete.

  1. A new bed elevation dataset for Greenland

    Science.gov (United States)

    Bamber, J. L.; Griggs, J. A.; Hurkmans, R. T. W. L.; Dowdeswell, J. A.; Gogineni, S. P.; Howat, I.; Mouginot, J.; Paden, J.; Palmer, S.; Rignot, E.; Steinhage, D.

    2013-03-01

    We present a new bed elevation dataset for Greenland derived from a combination of multiple airborne ice thickness surveys undertaken between the 1970s and 2012. Around 420 000 line kilometres of airborne data were used, with roughly 70% of this having been collected since the year 2000, when the last comprehensive compilation was undertaken. The airborne data were combined with satellite-derived elevations for non-glaciated terrain to produce a consistent bed digital elevation model (DEM) over the entire island including across the glaciated-ice free boundary. The DEM was extended to the continental margin with the aid of bathymetric data, primarily from a compilation for the Arctic. Ice thickness was determined where an ice shelf exists from a combination of surface elevation and radar soundings. The across-track spacing between flight lines warranted interpolation at 1 km postings for significant sectors of the ice sheet. Grids of ice surface elevation, error estimates for the DEM, ice thickness and data sampling density were also produced alongside a mask of land/ocean/grounded ice/floating ice. Errors in bed elevation range from a minimum of ±10 m to about ±300 m, as a function of distance from an observation and local topographic variability. A comparison with the compilation published in 2001 highlights the improvement in resolution afforded by the new datasets, particularly along the ice sheet margin, where ice velocity is highest and changes in ice dynamics most marked. We estimate that the volume of ice included in our land-ice mask would raise mean sea level by 7.36 m, excluding any solid earth effects that would take place during ice sheet decay.

  2. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala

    2015-01-01

    Full Text Available Dimensionality reduction refers to a set of mathematical techniques used to reduce complexity of the original high-dimensional data, while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying the spectral dimensionality reduction techniques, and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate applicability of our framework we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
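
    A single-machine sketch of the kind of spectral dimensionality reduction step that the framework parallelizes (scikit-learn here, with random placeholder data; not the authors' 16,000-core implementation):

        import numpy as np
        from sklearn.manifold import SpectralEmbedding

        X = np.random.rand(500, 1000)              # 500 points, 1000 dimensions
        emb = SpectralEmbedding(n_components=3, n_neighbors=15)
        Y = emb.fit_transform(X)                   # (500, 3) embedding coordinates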

  3. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Directory of Open Access Journals (Sweden)

    Seyhan Yazar

    Full Text Available A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E. coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E. coli and 53.5% (95% CI: 34.4-72.6) for the human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for the E. coli and human assemblies, respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for the analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  4. BLAST-EXPLORER helps you building datasets for phylogenetic analysis

    Directory of Open Access Journals (Sweden)

    Claverie Jean-Michel

    2010-01-01

    Full Text Available Background: The right sampling of homologous sequences for phylogenetic or molecular evolution analyses is a crucial step, the quality of which can have a significant impact on the final interpretation of the study. There is no single way of constructing datasets suitable for phylogenetic analysis, because this task intimately depends on the scientific question to be addressed. Moreover, database mining software such as BLAST, which is routinely used for searching homologous sequences, is not specifically optimized for this task. Results: To fill this gap, we designed BLAST-Explorer, an original and friendly web-based application that combines a BLAST search with a suite of tools allowing interactive, phylogenetic-oriented exploration of the BLAST results and flexible selection of homologous sequences among the BLAST hits. Once the selection of BLAST hits is done using BLAST-Explorer, the corresponding sequences can be imported locally for external analysis or passed to the phylogenetic tree reconstruction pipelines available on the Phylogeny.fr platform. Conclusions: BLAST-Explorer provides a simple, intuitive and interactive graphical representation of the BLAST results and allows selection and retrieval of the BLAST hit sequences based on a wide range of criteria. Although BLAST-Explorer primarily aims at helping the construction of sequence datasets for further phylogenetic study, it can also be used as a standard BLAST server with enriched output. BLAST-Explorer is available at http://www.phylogeny.fr

  5. Comprehensive comparison of large-scale tissue expression datasets

    Directory of Open Access Journals (Sweden)

    Alberto Santos

    2015-06-01

    Full Text Available For tissues to carry out their functions, they rely on the right proteins to be present. Several high-throughput technologies have been used to map out which proteins are expressed in which tissues; however, the data have not previously been systematically compared and integrated. We present a comprehensive evaluation of tissue expression data from a variety of experimental techniques and show that these agree surprisingly well with each other and with results from literature curation and text mining. We further found that most datasets support the assumed but not demonstrated distinction between tissue-specific and ubiquitous expression. By developing comparable confidence scores for all types of evidence, we show that it is possible to improve both quality and coverage by combining the datasets. To facilitate use and visualization of our work, we have developed the TISSUES resource (http://tissues.jensenlab.org), which makes all the scored and integrated data available through a single user-friendly web interface.

  6. Investigating uncertainties in global gridded datasets of climate extremes

    Directory of Open Access Journals (Sweden)

    R. J. H. Dunn

    2014-05-01

    Full Text Available We assess the effects of different methodological choices made during the construction of gridded datasets of climate extremes, focusing primarily on HadEX2. Using global timeseries of the indices and their coverage, as well as uncertainty maps, we show that the choices which have the greatest effect are those relating to the station network used or which drastically change the values for individual grid boxes. The latter are most affected by the number of stations required in or around a grid box and the gridding method used. Most parametric changes have a small impact, on global and on grid box scales, whereas structural changes to the methods or input station networks may have large effects. On grid box scales, trends in temperature indices are very robust to most choices, especially in areas which have high station density (e.g. North America, Europe and Asia. Precipitation trends, being less spatially coherent, can be more susceptible to methodological changes, but are still clear in regions of high station density. Regional trends from all indices derived from areas with few stations should be treated with care. On a global scale, the linear trends over 1951–2010 from almost all choices fall within the statistical range of trends from HadEX2. This demonstrates the robust nature of HadEX2 and related datasets to choices in the creation method.

  7. NGO Presence and Activity in Afghanistan, 2000–2014: A Provincial-Level Dataset

    Directory of Open Access Journals (Sweden)

    David F. Mitchell

    2017-06-01

    Full Text Available This article introduces a new provincial-level dataset on non-governmental organizations (NGOs in Afghanistan. The data—which are freely available for download—provide information on the locations and sectors of activity of 891 international and local (Afghan NGOs that operated in the country between 2000 and 2014. A summary and visualization of the data is presented in the article following a brief historical overview of NGOs in Afghanistan. Links to download the full dataset are provided in the conclusion.

  8. Using Third Party Data to Update a Reference Dataset in a Quality Evaluation Service

    Science.gov (United States)

    Xavier, E. M. A.; Ariza-López, F. J.; Ureña-Cámara, M. A.

    2016-06-01

    Nowadays it is easy to find many data sources for various regions around the globe. In this 'data overload' scenario there is little, if any, information available about the quality of these data sources. In order to easily provide this data quality information, we presented the architecture of a web service for the automation of quality control of spatial datasets running over a Web Processing Service (WPS). For quality procedures that require an external reference dataset, such as positional accuracy or completeness assessment, the architecture permits the use of a reference dataset. However, this reference dataset is not ageless, since it suffers the natural time degradation inherent to geospatial features. In order to mitigate this problem we propose the Time Degradation & Updating Module, which intends to apply assessed data as a tool to keep the reference database updated. The main idea is to utilize datasets sent to the quality evaluation service as a source of 'candidate data elements' for updating the reference database. After evaluation, if some elements of a candidate dataset reach a determined quality level, they can be used as input data to improve the current reference database. In this work we present the first design of the Time Degradation & Updating Module. We believe that the outcomes can be applied in the pursuit of a fully automatic on-line quality evaluation platform.

  9. Gravimetric and volumetric determination of the purity of electrolytically refined silver and the produced silver nitrate

    Directory of Open Access Journals (Sweden)

    Ačanski Marijana M.

    2007-01-01

    Full Text Available Silver is, along with gold and the platinum-group metals, one of the so-called precious metals. Because of its comparative scarcity, brilliant white color, malleability and resistance to atmospheric oxidation, silver has long been used in the manufacture of coins and jewelry. Silver has the highest known electrical and thermal conductivity of all metals and is used in fabricating printed electrical circuits and as a coating for electronic conductors; it is also alloyed with other elements such as nickel or palladium for use in electrical contacts. The most useful silver salt is silver nitrate, a caustic chemical reagent, significant as an antiseptic and as a reagent in analytical chemistry. Pure silver nitrate is an intermediate in the industrial preparation of other silver salts, including the colloidal silver compounds used in medicine and the silver halides incorporated into photographic emulsions. Silver halides become increasingly insoluble in the series AgCl, AgBr, AgI. All silver salts are sensitive to light and are used in photographic coatings on film and paper. The ZORKA-PHARMA company (Sabac, Serbia) specializes in the production of pharmaceutical remedies and lab chemicals. One of its products is the chemical silver nitrate (argentum nitricum). Silver nitrate is generally produced by dissolving pure electrolytically refined silver in hot 48% nitric acid. Since the purity of the silver nitrate produced in 2002 was not in compliance with the p.a. level of purity, there was doubt whether the electrolytically refined silver was pure. The aim of this research was the gravimetric and volumetric determination of the purity of electrolytically refined silver and of silver nitrate produced industrially and in a laboratory. The purity determination was carried out gravimetrically, by the precipitation of silver(I) ions in the form of the insoluble silver salts AgCl, AgBr and AgI, and volumetrically, according to Mohr and Volhardt. The

  10. Rapid mapping of volumetric machine errors using distance measurements

    Energy Technology Data Exchange (ETDEWEB)

    Krulewich, D.A.

    1998-04-01

    This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Each parametric error, expressed as a function of position, is combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function of the commanded location of the machine, the machine error, and the base locations. Using the error model, the nonlinear equation is solved, producing a fit for the error model. Note also that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors, such as thermally induced and load-induced errors, were not considered, although the mathematical model has the ability to account for them. Due to the proprietary nature of the projects we are
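
    A toy version of the fitting step, assuming for illustration that the error model is just one scale error per axis (the paper's kinematic model is far richer): solve for the coefficients that make modeled point-to-base distances match the measured ones.

        import numpy as np
        from scipy.optimize import least_squares

        def apply_errors(nominal_xyz, scale_err):
            return nominal_xyz * (1.0 + scale_err)          # toy error model

        def residuals(scale_err, nominal_xyz, base, meas_dist):
            pts = apply_errors(nominal_xyz, scale_err)
            return np.linalg.norm(pts - base, axis=1) - meas_dist

        nominal = np.random.rand(50, 3) * 500.0             # commanded points (mm)
        base = np.array([100.0, -50.0, 0.0])                # one fixed base socket
        true_scale = np.array([1e-4, -2e-4, 5e-5])
        meas = np.linalg.norm(apply_errors(nominal, true_scale) - base, axis=1)

        fit = least_squares(residuals, x0=np.zeros(3), args=(nominal, base, meas))
        print(fit.x)    # recovers the per-axis scale errors from distances alone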

  11. Full page insight

    DEFF Research Database (Denmark)

    Cortsen, Rikke Platz

    2014-01-01

    Alan Moore and his collaborating artists often manipulate time and space by drawing upon the formal elements of comics and making alternative constellations. This article looks at an element that is used frequently in comics of all kinds – the full page – and discusses how it helps shape spatio......, something that it shares with the full page in comics. Through an analysis of several full pages from Moore titles like Swamp Thing, From Hell, Watchmen and Promethea, it is made clear why the full page provides an apt vehicle for an apocalypse in comics....

  13. The INGV tectonomagnetic network: 2004–2005 preliminary dataset analysis

    Directory of Open Access Journals (Sweden)

    F. Masci

    2006-01-01

    Full Text Available It is well established that earthquakes and volcanic eruptions can produce small variations in the local geomagnetic field. The tectonomagnetic network of the Italian Istituto Nazionale di Geofisica e Vulcanologia (INGV) has operated in Central Italy since 1989 to investigate possible effects of earthquake occurrences on the local geomagnetic field. At present, total geomagnetic field intensity data are collected at four stations using proton precession magnetometers. We report the complete dataset for the period 2004–2005. The data of each station are differenced with respect to the data of the other stations in order to detect local field anomalies, removing the contributions from other sources, external and internal to the Earth. Unfortunately, no correlation between geomagnetic anomalies and the local seismic activity recorded in Central Italy by the INGV Italian Seismic National Network was found in this period. Some deceptive structures present in the differentiated data are pointed out.
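
    A minimal sketch of the differencing idea, assuming the station series are stored as columns of a CSV file (the file name and column layout are hypothetical):

```python
import pandas as pd

# Each column of df is the total-field series of one station.
df = pd.read_csv("total_field.csv", index_col="time", parse_dates=True)

# Differencing pairs of stations cancels global (external/internal) signals;
# an anomaly local to one station survives in every pair containing it.
pairs = [(a, b) for i, a in enumerate(df.columns) for b in df.columns[i + 1:]]
diffs = pd.DataFrame({f"{a}-{b}": df[a] - df[b] for a, b in pairs})

# Flag samples deviating more than 3 sigma; other entries become NaN.
anomalies = diffs[(diffs - diffs.mean()).abs() > 3 * diffs.std()]
```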

  14. xarray: N-D labeled Arrays and Datasets in Python

    Directory of Open Access Journals (Sweden)

    Stephan Hoyer

    2017-04-01

    Full Text Available xarray is an open source project and Python package that provides a toolkit and data structures for N-dimensional labeled arrays. Our approach combines an application programming interface (API) inspired by pandas with the Common Data Model for self-described scientific data. Key features of the xarray package include label-based indexing and arithmetic, interoperability with the core scientific Python packages (e.g., pandas, NumPy, Matplotlib), out-of-core computation on datasets that don’t fit into memory, a wide range of serialization and input/output (I/O) options, and advanced multi-dimensional data manipulation tools such as group-by and resampling. xarray, as a data model and analytics toolkit, has been widely adopted in the geoscience community but is also used more broadly for multi-dimensional data analysis in physics, machine learning and finance.
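
    A minimal usage sketch of the features named above, on synthetic data (all names and values are illustrative):

```python
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2000-01-01", periods=365)
temp = xr.DataArray(
    20 + 5 * np.random.randn(365, 3, 4),
    dims=("time", "lat", "lon"),
    coords={"time": times, "lat": [10.0, 20.0, 30.0],
            "lon": [0.0, 1.0, 2.0, 3.0]},
    name="temperature",
)

subset = temp.sel(lat=20.0, time=slice("2000-06-01", "2000-08-31"))  # labels
monthly = temp.resample(time="1M").mean()       # calendar-aware resampling
by_month = temp.groupby("time.month").mean()    # climatology-style group-by
```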

  15. Dataset concerning the analytical approximation of the Ae3 temperature.

    Science.gov (United States)

    Ennis, B L; Jimenez-Melero, E; Mostert, R; Santillana, B; Lee, P D

    2017-02-01

    In this paper we present a new polynomial function for calculating the local phase transformation temperature (Ae3 ) between the austenite+ferrite and the fully austenitic phase fields during heating and cooling of steel:[Formula: see text] The dataset includes the terms of the function and the values for the polynomial coefficients for major alloying elements in steel. A short description of the approximation method used to derive and validate the coefficients has also been included. For discussion and application of this model, please refer to the full length article entitled "The role of aluminium in chemical and phase segregation in a TRIP-assisted dual phase steel" 10.1016/j.actamat.2016.05.046 (Ennis et al., 2016) [1].

  16. Pooling breast cancer datasets has a synergetic effect on classification performance and improves signature stability

    Directory of Open Access Journals (Sweden)

    van de Vijver Marc J

    2008-08-01

    Full Text Available Abstract Background Michiels et al. (Lancet 2005; 365: 488–92) employed a resampling strategy to show that the genes identified as predictors of prognosis from resamplings of a single gene expression dataset are highly variable. The genes most frequently identified in the separate resamplings were put forward as a 'gold standard'. On a higher level, breast cancer datasets collected by different institutions can be considered as resamplings from the underlying breast cancer population. The limited overlap between published prognostic signatures confirms the trend of signature instability identified by the resampling strategy. Six breast cancer datasets, totaling 947 samples, all measured on the Affymetrix platform, are currently available. This provides a unique opportunity to employ a substantial dataset to investigate the effects of pooling datasets on classifier accuracy, signature stability and enrichment of functional categories. Results We show that the resampling strategy produces a suboptimal ranking of genes, which cannot be considered a 'gold standard'. When pooling breast cancer datasets, we observed a synergetic effect on the classification performance in 73% of the cases. We also observe a significant positive correlation between the number of datasets that are pooled, the validation performance, the number of genes selected, and the enrichment of specific functional categories. In addition, we have evaluated the support for five explanations that have been postulated for the limited overlap of signatures. Conclusion The limited overlap of current signature genes can be attributed to small sample size. Pooling datasets results in more accurate classification and a convergence of signature genes. We therefore advocate the analysis of new data within the context of a compendium, rather than analysis in isolation.
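
    The pooling experiment can be caricatured in a few lines of scikit-learn; this is a hedged sketch with a generic classifier and stand-in arrays, not the authors' pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Xs, ys: per-institution expression matrices and outcome labels (stand-ins).
def single_cohort_scores(Xs, ys):
    clf = LogisticRegression(max_iter=1000)
    return [cross_val_score(clf, X, y, cv=5).mean() for X, y in zip(Xs, ys)]

def pooled_score(Xs, ys):
    X_pool, y_pool = np.vstack(Xs), np.concatenate(ys)
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X_pool, y_pool, cv=5).mean()
```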

  17. 3D volumetric modeling of grapevine biomass using Tripod LiDAR

    Science.gov (United States)

    Keightley, K.E.; Bawden, G.W.

    2010-01-01

    Tripod mounted laser scanning provides the means to generate high-resolution volumetric measures of vegetation structure and perennial woody tissue for the calculation of standing biomass in agronomic and natural ecosystems. Other than costly destructive harvest methods, no technique exists to rapidly and accurately measure above-ground perennial tissue for woody plants such as Vitis vinifera (common grape vine). Data collected from grapevine trunks and cordons were used to study the accuracy of wood volume derived from laser scanning as compared with volume derived from analog measurements. A set of 10 laser scans was collected for each of 36 vines, from which volume was calculated using combinations of two, three, four, six and ten scans. Likewise, analog volume measurements were made by submerging the vine trunks and cordons in water and capturing the displaced water. A regression analysis examined the relationship between digital and non-digital techniques among the 36 vines and found that the standard error drops rapidly as additional scans are added to the volume calculation process and stabilizes at the four-view geometry, with an average Pearson's product moment correlation coefficient of 0.93. Estimates of digital volumes are systematically greater than those of analog volumes, which can be explained by the manner in which each technique interacts with the vine tissue. This laser scanning technique yields a highly linear relationship between vine volume and tissue mass, revealing a new, rapid and non-destructive method to remotely measure standing biomass. This application shows promise for use in other ecosystems such as orchards and forests. © 2010 Elsevier B.V.

  18. Histology-derived volumetric annotation of the human hippocampal subfields in postmortem MRI

    Science.gov (United States)

    Adler, Daniel H.; Pluta, John; Kadivar, Salmon; Craige, Caryne; Gee, James C.; Avants, Brian B.; Yushkevich, Paul A.

    2013-01-01

    Recently, there has been a growing effort to analyze the morphometry of hippocampal subfields using both in vivo and postmortem magnetic resonance imaging (MRI). However, given that boundaries between subregions of the hippocampal formation (HF) are conventionally defined on the basis of microscopic features that often lack discernible signature in MRI, subfield delineation in MRI literature has largely relied on heuristic geometric rules, the validity of which with respect to the underlying anatomy is largely unknown. The development and evaluation of such rules is challenged by the limited availability of data linking MRI appearance to microscopic hippocampal anatomy, particularly in three dimensions (3D). The present paper, for the first time, demonstrates the feasibility of labeling hippocampal subfields in a high resolution volumetric MRI dataset based directly on microscopic features extracted from histology. It uses a combination of computational techniques and manual post-processing to map subfield boundaries from a stack of histology images (obtained with 200 μm spacing and 5 μm slice thickness; stained using the Kluver-Barrera method) onto a postmortem 9.4 Tesla MRI scan of the intact, whole hippocampal formation acquired with 160 μm isotropic resolution. The histology reconstruction procedure consists of sequential application of a graph-theoretic slice stacking algorithm that mitigates the effects of distorted slices, followed by iterative affine and diffeomorphic co-registration to postmortem MRI scans of approximately 1 cm-thick tissue sub-blocks acquired with 200 μm isotropic resolution. These 1 cm blocks are subsequently co-registered to the MRI of the whole HF. Reconstruction accuracy is evaluated as the average displacement error between boundaries manually delineated in both the histology and MRI following the sequential stages of reconstruction. The methods presented and evaluated in this single-subject study can potentially be applied to
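
    The general affine-plus-deformable pattern described above can be sketched with SimpleITK; this is a hedged outline of the idea, not the authors' actual pipeline, and the file names are hypothetical:

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("block_mri.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("histology_stack.nii.gz", sitk.sitkFloat32)

# Stage 1: affine registration with a mutual-information metric.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.AffineTransform(3),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)
affine = reg.Execute(fixed, moving)

# Stage 2: deformable refinement, standing in for the diffeomorphic step.
resampled = sitk.Resample(moving, fixed, affine)
demons = sitk.DiffeomorphicDemonsRegistrationFilter()
demons.SetNumberOfIterations(50)
displacement = demons.Execute(fixed, resampled)
```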

  19. Reconstructing thawing quintessence with multiple datasets

    CERN Document Server

    Lima, Nelson A; Sahlén, Martin; Parkinson, David

    2015-01-01

    In this work we model the quintessence potential in a Taylor series expansion, up to second order, around the present-day value of the scalar field. The field is evolved in a thawing regime assuming zero initial velocity. We use the latest data from the Planck satellite, baryonic acoustic oscillation observations from the Sloan Digital Sky Survey, and supernova luminosity distance information from Union2.1 to constrain our model's parameters, and also include perturbation growth data from WiggleZ. We show explicitly that the growth data do not perform as well as the other datasets in constraining the dark energy parameters we introduce. We also show that the constraints we obtain for our model parameters, when compared to previous works of nearly a decade ago, have not improved significantly. This is indicative of how little dark energy constraints, overall, have improved in the last decade, even when we add new growth-of-structure data to previously existing types of data.

  20. Workflow to numerically reproduce laboratory ultrasonic datasets

    Institute of Scientific and Technical Information of China (English)

    A. Biryukov; N. Tisato; G. Grasselli

    2014-01-01

    The risks and uncertainties related to the storage of high-level radioactive waste (HLRW) can be reduced thanks to focused studies and investigations. HLRWs are going to be placed in deep geological repositories, enveloped in an engineered bentonite barrier, whose physical conditions are subjected to change throughout the lifespan of the infrastructure. Seismic tomography can be employed to monitor its physical state and integrity. The design of the seismic monitoring system can be optimized via conducting and analyzing numerical simulations of wave propagation in a representative repository geometry. However, the quality of the numerical results relies on their initial calibration. The main aim of this paper is to provide a workflow to calibrate numerical tools employing laboratory ultrasonic datasets. The finite difference code SOFI2D was employed to model ultrasonic waves propagating through a laboratory sample. Specifically, the input velocity model was calibrated to achieve a best match between experimental and numerical ultrasonic traces. Likely due to the imperfections of the contact surfaces, the resultant velocities of P- and S-wave propagation tend to be noticeably lower than those a priori assigned. Then, the calibrated model was employed to estimate the attenuation in a montmorillonite sample. The obtained low quality factors (Q) suggest that pronounced inelastic behavior of the clay has to be taken into account in geophysical modeling and analysis. Consequently, this contribution should be considered as a first step towards the creation of a numerical tool to evaluate wave propagation in nuclear waste repositories.
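
    The calibration step amounts to tuning model parameters until simulated traces match measured ones; a toy sketch of that loop, with a stub standing in for the forward solver (an actual workflow would wrap SOFI2D runs):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate(v, t, distance=0.05):
    """Stub forward model: a Gaussian arrival for propagation velocity v."""
    return np.exp(-((t - distance / v) / 2e-6) ** 2)

def misfit(v, t, measured):
    return np.sum((simulate(v, t) - measured) ** 2)   # L2 trace misfit

t = np.linspace(0.0, 5e-5, 2000)
measured = simulate(2400.0, t)                         # stand-in for lab data
best = minimize_scalar(misfit, bounds=(1000.0, 4000.0),
                       args=(t, measured), method="bounded")
print(best.x)   # recovered velocity in m/s
```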

  1. Classification of antimicrobial peptides with imbalanced datasets

    Science.gov (United States)

    Camacho, Francy L.; Torres, Rodrigo; Ramos Pollán, Raúl

    2015-12-01

    In the last years, pattern recognition has been applied to several fields for solving multiple problems in science and technology, for example in protein prediction. This methodology can be useful for predicting the activity of biological molecules, e.g. for determining the antimicrobial activity of synthetic and natural peptides. In this work, we evaluate the performance of different physico-chemical properties of peptides (descriptor groups) in the presence of imbalanced datasets, when facing the task of detecting whether a peptide has antimicrobial activity. We evaluate undersampling and class weighting techniques to deal with the class imbalance, using different classification methods and descriptor groups. Our classification model achieved an estimated precision of 96%, showing that the descriptors used to codify the amino acid sequences contain enough information to correlate the peptide sequences with their antimicrobial activity by means of learning machines. Moreover, we show how certain descriptor groups (pseudo-amino acid composition type I) work better with imbalanced datasets while others (dipeptide composition) work better with balanced ones.
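
    The two imbalance strategies evaluated above look roughly like this in scikit-learn and imbalanced-learn, here on synthetic data standing in for the peptide descriptors:

```python
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from imblearn.under_sampling import RandomUnderSampler

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

# 1) Class weighting: penalize minority-class mistakes more heavily.
weighted = SVC(class_weight="balanced")
print(cross_val_score(weighted, X, y, cv=5, scoring="f1").mean())

# 2) Undersampling: rebalance by discarding majority-class samples.
X_res, y_res = RandomUnderSampler(random_state=0).fit_resample(X, y)
print(cross_val_score(SVC(), X_res, y_res, cv=5, scoring="f1").mean())
```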

  2. Microscopy Image Browser: A Platform for Segmentation and Analysis of Multidimensional Datasets.

    Directory of Open Access Journals (Sweden)

    Ilya Belevich

    2016-01-01

    Full Text Available Understanding the structure-function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data, which is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program.

  3. Quantitative volumetric Raman imaging of three dimensional cell cultures

    Science.gov (United States)

    Kallepitis, Charalambos; Bergholt, Mads S.; Mazo, Manuel M.; Leonardo, Vincent; Skaalure, Stacey C.; Maynard, Stephanie A.; Stevens, Molly M.

    2017-03-01

    The ability to simultaneously image multiple biomolecules in biologically relevant three-dimensional (3D) cell culture environments would contribute greatly to the understanding of complex cellular mechanisms and cell-material interactions. Here, we present a computational framework for label-free quantitative volumetric Raman imaging (qVRI). We apply qVRI to a selection of biological systems: human pluripotent stem cells with their cardiac derivatives, monocytes and monocyte-derived macrophages in conventional cell culture systems and mesenchymal stem cells inside biomimetic hydrogels that supplied a 3D cell culture environment. We demonstrate visualization and quantification of fine details in cell shape, cytoplasm, nucleus, lipid bodies and cytoskeletal structures in 3D with unprecedented biomolecular specificity for vibrational microspectroscopy.

  4. Optimization approaches to volumetric modulated arc therapy planning

    Energy Technology Data Exchange (ETDEWEB)

    Unkelbach, Jan, E-mail: junkelbach@mgh.harvard.edu; Bortfeld, Thomas; Craft, David [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Alber, Markus [Department of Medical Physics and Department of Radiation Oncology, Aarhus University Hospital, Aarhus C DK-8000 (Denmark); Bangert, Mark [Department of Medical Physics in Radiation Oncology, German Cancer Research Center, Heidelberg D-69120 (Germany); Bokrantz, Rasmus [RaySearch Laboratories, Stockholm SE-111 34 (Sweden); Chen, Danny [Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, Indiana 46556 (United States); Li, Ruijiang; Xing, Lei [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Men, Chunhua [Department of Research, Elekta, Maryland Heights, Missouri 63043 (United States); Nill, Simeon [Joint Department of Physics at The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG (United Kingdom); Papp, Dávid [Department of Mathematics, North Carolina State University, Raleigh, North Carolina 27695 (United States); Romeijn, Edwin [H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Salari, Ehsan [Department of Industrial and Manufacturing Engineering, Wichita State University, Wichita, Kansas 67260 (United States)

    2015-03-15

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.

  5. Volumetric properties of water/AOT/isooctane microemulsions.

    Science.gov (United States)

    Du, Changfei; He, Wei; Yin, Tianxiang; Shen, Weiguo

    2014-12-23

    The densities of AOT/isooctane micelles and water/AOT/isooctane microemulsions with the molar ratios R of water to AOT being 2, 8, 10, 12, 16, 18, 20, 25, 30, and 40 were measured at 303.15 K. The apparent specific volumes of AOT and the quasi-component water/AOT at various concentrations were calculated and used to estimate the volumetric properties of AOT and water in the droplets and in the continuous oil phase, to discuss the interaction between the droplets, and to determine the critical micelle concentration and the critical microemulsion concentrations. A thermodynamic model was proposed to analyse the stability boundary of the microemulsion droplets, which confirms that the maximum value of R is about 65 for stable AOT/water/isooctane microemulsion droplets.
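
    For reference, apparent specific volumes of this kind follow from measured densities; with w the solute mass fraction, ρ the solution density and ρ₀ the density of the continuous phase, one standard form (not necessarily the exact expression used by the authors) is:

```latex
% apparent specific volume: total specific volume of the solution minus
% the part attributed to the solvent, per gram of solute
v_{\phi} = \frac{1}{w}\left(\frac{1}{\rho} - \frac{1 - w}{\rho_{0}}\right)
```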

  6. Quantitative volumetric Raman imaging of three dimensional cell cultures

    KAUST Repository

    Kallepitis, Charalambos

    2017-03-22

    The ability to simultaneously image multiple biomolecules in biologically relevant three-dimensional (3D) cell culture environments would contribute greatly to the understanding of complex cellular mechanisms and cell–material interactions. Here, we present a computational framework for label-free quantitative volumetric Raman imaging (qVRI). We apply qVRI to a selection of biological systems: human pluripotent stem cells with their cardiac derivatives, monocytes and monocyte-derived macrophages in conventional cell culture systems and mesenchymal stem cells inside biomimetic hydrogels that supplied a 3D cell culture environment. We demonstrate visualization and quantification of fine details in cell shape, cytoplasm, nucleus, lipid bodies and cytoskeletal structures in 3D with unprecedented biomolecular specificity for vibrational microspectroscopy.

  7. In-line hologram segmentation for volumetric samples.

    Science.gov (United States)

    Orzó, László; Göröcs, Zoltán; Fehér, András; Tőkés, Szabolcs

    2013-01-01

    We propose a fast, noniterative method to segment an in-line hologram of a volumetric sample into in-line subholograms according to its constituent objects. In contrast to phase retrieval or twin-image elimination algorithms, we neither aim nor need to reconstruct the complex wave field of all the objects, which would be a more complex task, but only to provide, quickly, a good estimate of the contribution of each object to the original hologram. The introduced hologram segmentation algorithm exploits the special inner structure of in-line holograms and uses only the estimated supports and reconstruction distances of the corresponding objects as parameters. The performance of the proposed method is demonstrated and analyzed experimentally on both synthetic and measured holograms. We discuss how the proposed algorithm can be efficiently applied to object reconstruction and phase retrieval tasks.
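
    For context, refocusing an in-line hologram at an estimated object distance is commonly done with angular-spectrum propagation; a minimal sketch (the field, pixel pitch dx and distance z are the caller's data):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field by distance z (units match wavelength/dx)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, dx)
    fy = np.fft.fftfreq(ny, dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)        # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)
```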

  8. Three-Dimensional Volumetric Restoration by Structural Fat Grafting

    Science.gov (United States)

    Clauser, Luigi C.; Consorti, Giuseppe; Elia, Giovanni; Galié, Manlio; Tieghi, Riccardo

    2013-01-01

    The use of adipose tissue transfer for the correction of maxillofacial defects was reported for the first time at the end of the 19th century. Structural fat grafting (SFG) was introduced as a way to improve facial esthetics and in recent years has evolved into applications in craniomaxillofacial reconstructive surgery. Several techniques have been proposed for harvesting and grafting the fat. However, owing to the damage done to many adipocytes during these maneuvers, the results have not been satisfactory and have required several fat injection procedures for small corrections. The author (L.C.) overviews the application of SFG in the management of volumetric deficit in the craniomaxillofacial region in patients treated with long-term follow-up. PMID:24624259

  9. Semi-automatic volumetrics system to parcellate ROI on neocortex

    Science.gov (United States)

    Tan, Ou; Ichimiya, Tetsuya; Yasuno, Fumihiko; Suhara, Tetsuya

    2002-05-01

    A template-based, semi-automatic volumetrics system, BrainVol, is built to divide any given patient brain into neocortical and subcortical regions. The standard regions are given as standard ROIs drawn on a standard brain volume. After normalization between the standard MR image and the patient MR image, the subcortical ROI boundaries are refined based on gray matter. The neocortical ROIs are refined using sulcus information that is semi-automatically marked on the patient brain. The segmentation is then applied to a 4D PET image of the same patient for calculation of the TAC (Time Activity Curve) by co-registration between MR and PET.

  10. Volumetric Survey Speed: A Figure of Merit for Transient Surveys

    CERN Document Server

    Bellm, Eric C

    2016-01-01

    Time-domain surveys can exchange sky coverage for revisit frequency, complicating the comparison of their relative capabilities. By using different revisit intervals, a specific camera may execute surveys optimized for discovery of different classes of transient objects. We propose a new figure of merit, the instantaneous volumetric survey speed, for evaluating transient surveys. This metric defines the trade between cadence interval and snapshot survey volume and so provides a natural means of comparing survey capability. The related metric of areal survey speed imposes a constraint on the range of possible revisit times: we show that many modern time-domain surveys are limited by the amount of fresh sky available each night. We introduce the concept of "spectroscopic accessibility" and discuss its importance for transient science goals requiring followup observing. We present an extension of the control time algorithm for cases where multiple consecutive detections are required. Finally, we explore how surv...

  11. The volumetric rate of superluminous supernovae at z ˜ 1

    Science.gov (United States)

    Prajs, S.; Sullivan, M.; Smith, M.; Levan, A.; Karpenka, N. V.; Edwards, T. D. P.; Walker, C. R.; Wolf, W. M.; Balland, C.; Carlberg, R.; Howell, D. A.; Lidman, C.; Pain, R.; Pritchet, C.; Ruhlmann-Kleider, V.

    2017-01-01

    We present a measurement of the volumetric rate of superluminous supernovae (SLSNe) at z ∼ 1.0, measured using archival data from the first four years of the Canada-France-Hawaii Telescope Supernova Legacy Survey (SNLS). We develop a method for the photometric classification of SLSNe to construct our sample. Our sample includes two previously spectroscopically identified objects, and a further new candidate selected using our classification technique. We use the point-source recovery efficiencies from Perrett et al. and a Monte Carlo approach to calculate the rate based on our SLSN sample. We find that the three identified SLSNe from SNLS give a rate of 91 (+76/−36) SNe yr⁻¹ Gpc⁻³ at a volume-weighted redshift of z = 1.13. This is equivalent to (2.2 +1.8/−0.9) × 10⁻⁴ of the volumetric core-collapse supernova rate at the same redshift. When combined with other rate measurements from the literature, we show that the rate of SLSNe increases with redshift in a manner consistent with that of the cosmic star formation history. We also estimate the rate of ultra-long gamma-ray bursts based on the events discovered by the Swift satellite, and show that it is comparable to the rate of SLSNe, providing further evidence of a possible connection between these two classes of events. We also examine the host galaxies of the SLSNe discovered in SNLS, and find them to be consistent with the stellar-mass distribution of other published samples of SLSNe.

  12. Volumetric analysis of corticocancellous bones using CT data

    Energy Technology Data Exchange (ETDEWEB)

    Krappinger, Dietmar; Linde, Astrid von; Rosenberger, Ralf; Blauth, Michael [Medical University Innsbruck, Department of Trauma Surgery and Sports Medicine, Innsbruck (Austria); Glodny, Bernhard; Niederwanger, Christian [Medical University Innsbruck, Department of Radiology I, Innsbruck (Austria)

    2012-05-15

    To present a method for an automated volumetric analysis of corticocancellous bones such as the superior pubic ramus using CT data, and to assess the reliability of this method. Computed tomography scans of a consecutive series of 250 patients were analyzed. A Hounsfield unit (HU) thresholding-based reconstruction technique ("Vessel Tracking", GE Healthcare) was used. A contiguous space of cancellous bone with similar HU values between the starting and end points was automatically identified as the region of interest. The identification was based upon the density gradient to the adjacent cortical bone. The starting point was defined as the middle of the parasymphyseal corticocancellous transition zone on the axial slice showing the parasymphyseal superior pubic ramus in its maximum anteroposterior width. The end point was defined as the middle of the periarticular corticocancellous transition zone on the axial slice showing the quadrilateral plate as a thin cortical plate. The following parameters were automatically obtained on both sides: length of the center line, volume of the superior pubic ramus between the starting point and end point, minimum, maximum and mean diameter perpendicular to the center line, and mean cross-sectional area perpendicular to the center line. An automated analysis without manual adjustments was successful in 207 patients (82.8%). The center line was significantly longer in female patients (67.6 mm vs 65.0 mm). The volume was greater in male patients (21.8 cm³ vs 19.4 cm³). The inter-side reliability was high, with a mean difference between the left and right sides of between 0.1% (cross-sectional area) and 2.3% (volume). The method presented allows for an automated volumetric analysis of a corticocancellous bone using CT data. The method is intended to provide preoperative information for the use of intramedullary devices in fracture fixation and percutaneous cement augmentation techniques

  13. Volumetric and two-dimensional image interpretation show different cognitive processes in learners

    NARCIS (Netherlands)

    van der Gijp, Anouk; Ravesloot, C.J.; van der Schaaf, Marieke F; van der Schaaf, Irene C; Huige, Josephine C B M; Vincken, Koen L; Ten Cate, Olle Th J; van Schaik, JPJ

    2015-01-01

    RATIONALE AND OBJECTIVES: In current practice, radiologists interpret digital images, including a substantial amount of volumetric images. We hypothesized that interpretation of a stack of a volumetric data set demands different skills than interpretation of two-dimensional (2D) cross-sectional imag

  14. MR volumetric study of piriform-cortical amygdala and orbitofrontal cortices: the aging effect.

    Directory of Open Access Journals (Sweden)

    Jing Shen

    Full Text Available INTRODUCTION: The piriform cortex and cortical amygdala (PCA) and the orbitofrontal cortex (OFC) are considered olfactory-related brain regions. This study aims to elucidate the normal volumes of the PCA and OFC for each age group (20.0–70.0 years old), and whether the volumes of the PCA and OFC decline with increasing age and diminishing olfactory function. METHODS: One hundred and eleven healthy right-handed participants (54 males, 57 females, age 20.0 to 70.0 years) were recruited to join this study after excluding all the major causes of olfactory dysfunction. Volumetric measurements of the PCA and OFC were performed using consecutive 1-mm thick coronal slices of high-resolution 3-D MRIs. A validated olfactory function test (Sniffin' Sticks) assessed olfactory function, measuring odor threshold (THD), odor discrimination (DIS), and odor identification (ID), as well as their sum score (TDI). RESULTS: The volume of the OFC decreased with age and significantly correlated with age-related declines in olfactory function. The volume of the OFC showed significant age-group differences, particularly after 40 years of age (p < 0.001), while olfactory function decreased significantly after 60 years of age (p < 0.001). Similar age-related volumetric changes were not found for the PCA (p = 0.772). Additionally, there was a significant correlation between the OFC and DIS on the right side (p = 0.028) and between the OFC and TDI on both sides (p < 0.05). There was no similar correlation for the PCA. CONCLUSIONS: Aging can have a great impact on the volume of the OFC and on olfactory function, while it has a much smaller effect on the volume of the PCA. The results could be useful for establishing normal volumes of the PCA and OFC for each age group to assess neurological disorders that affect olfactory function.

  15. About CABI Full Text

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    Centre for Agriculture and Bioscience International (CABI) is a not-for-profit international Agricultural Information Institute with headquarters in Britain. It aims to improve people’s lives by providing information and applying scientific expertise to solve problems in agriculture and the environment. CABI Full-text is one of the publishing products of CABI. CABI’s full text repository is growing rapidly and has now been integrated into all our databases including CAB Abstracts, Global Health, our Internet Resources and Abstract Journals. There are currently over 60,000 full text articles available to access. These documents, made possible by agreement with third

  16. High Volumetric Energy Density Hybrid Supercapacitors Based on Reduced Graphene Oxide Scrolls.

    Science.gov (United States)

    Rani, Janardhanan R; Thangavel, Ranjith; Oh, Se-I; Woo, Jeong Min; Chandra Das, Nayan; Kim, So-Yeon; Lee, Yun-Sung; Jang, Jae-Hyung

    2017-07-12

    The low volumetric energy density of reduced graphene oxide (rGO)-based electrodes limits their application in commercial electrochemical energy storage devices that require high-performance energy storage capacities in small volumes. The volumetric energy density of rGO-based electrode materials is very low due to their low packing density. A supercapacitor with enhanced packing density and high volumetric energy density is fabricated using doped rGO scrolls (GFNSs) as the electrode material. The restacking of rGO sheets is successfully controlled by synthesizing the doped scroll structures while increasing the packing density. The fabricated cell exhibits an ultrahigh volumetric energy density of 49.66 Wh/L with excellent cycling stability (>10 000 cycles). This unique design strategy for the electrode material has significant potential for future supercapacitors with high volumetric energy densities.

  17. Global segmentation and curvature analysis of volumetric data sets using trivariate B-spline functions.

    Science.gov (United States)

    Soldea, Octavian; Elber, Gershon; Rivlin, Ehud

    2006-02-01

    This paper presents a method to globally segment volumetric images into regions that contain convex or concave (elliptic) iso-surfaces, planar or cylindrical (parabolic) iso-surfaces, and volumetric regions with saddle-like (hyperbolic) iso-surfaces, regardless of the value of the iso-surface level. The proposed scheme relies on a novel approach to globally compute, bound, and analyze the Gaussian and mean curvatures of an entire volumetric data set, using a trivariate B-spline volumetric representation. This scheme derives a new differential scalar field for a given volumetric scalar field, which could easily be adapted to other differential properties. Moreover, this scheme can set the basis for more precise and accurate segmentation of data sets targeting the identification of primitive parts. Since the proposed scheme employs piecewise continuous functions, it is precise and insensitive to aliasing.
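
    The curvature quantities named above have closed forms for implicit (iso-)surfaces in terms of the gradient and Hessian of the scalar field; the sketch below evaluates them with finite differences standing in for the trivariate B-spline derivatives used in the paper:

```python
import numpy as np

def iso_surface_curvatures(vol, eps=1e-12):
    """Gaussian and mean curvature of the iso-surfaces of a sampled volume."""
    g = np.array(np.gradient(vol))                  # gradient, 3 x grid
    H = np.array([np.gradient(gi) for gi in g])     # Hessian, 3 x 3 x grid
    g2 = (g ** 2).sum(axis=0) + eps

    # Adjugate of H, needed for K = (g . adj(H) . g) / |g|^4.
    adj = np.empty_like(H)
    for i in range(3):
        for j in range(3):
            r = [k for k in range(3) if k != i]
            c = [k for k in range(3) if k != j]
            minor = (H[r[0], c[0]] * H[r[1], c[1]]
                     - H[r[0], c[1]] * H[r[1], c[0]])
            adj[j, i] = (-1) ** (i + j) * minor

    K = np.einsum('i...,ij...,j...->...', g, adj, g) / g2 ** 2
    trH = H[0, 0] + H[1, 1] + H[2, 2]
    gHg = np.einsum('i...,ij...,j...->...', g, H, g)
    M = (gHg - g2 * trH) / (2.0 * g2 ** 1.5)        # mean curvature
    return K, M
```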

  18. The duct selective volumetric receiver: potential for different selectivity strategies and stability issues

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Casals, X. [Universidad Pontificia Comillas-ICAI, Madrid (Spain). Dept. de Fluidos y Calor; Ajona, J.I. [Departamento de Energia Solar, Viessemann, Poligono Industrial San Marcos, Getafe (Spain)

    1999-07-01

    Recently much theoretical and experimental work has been conducted on volumetric receivers. However, not much attention has been paid to the possibilities of using different selectivity mechanisms to minimize radiation thermal losses, which are the main ones at high operating temperature. In this paper we present a duct volumetric receiver model and its results, which allow the evaluation of different selectivity strategies such as: conventional ε/α, geometry, frontal absorption and diffuse/specular reflection. We propose a new concept of selective volumetric receivers based on a solar-specular/infrared-diffuse radiative behaviour and evaluate its potential for efficiency improvement. In recent work on volumetric receivers based on simplified models, it has been concluded that the duct volumetric receiver is inherently unstable when working with high solar flux. We didn't find any unstable receiver behaviour even at very high solar fluxes, and conclude that a substantial potential for efficiency improvement exists if selectivity mechanisms are properly combined. (author)

  19. Enhanced volumetric visualization for real time 4D intraoperative ophthalmic swept-source OCT.

    Science.gov (United States)

    Viehland, Christian; Keller, Brenton; Carrasco-Zevallos, Oscar M; Nankivil, Derek; Shen, Liangbo; Mangalesh, Shwetha; Viet, Du Tran; Kuo, Anthony N; Toth, Cynthia A; Izatt, Joseph A

    2016-05-01

    Current-generation software for rendering volumetric OCT data sets based on ray casting results in volume visualizations with indistinct tissue features and sub-optimal depth perception. Recent developments in hand-held and microscope-integrated intrasurgical OCT designed for real-time volumetric imaging motivate development of rendering algorithms which are both visually appealing and fast enough to support real time rendering, potentially from multiple viewpoints for stereoscopic visualization. We report on an enhanced, real time, integrated volumetric rendering pipeline which incorporates high performance volumetric median and Gaussian filtering, boundary and feature enhancement, depth encoding, and lighting into a ray casting volume rendering model. We demonstrate this improved model implemented on graphics processing unit (GPU) hardware for real-time volumetric rendering of OCT data during tissue phantom and live human surgical imaging. We show that this rendering produces enhanced 3D visualizations of pathology and intraoperative maneuvers compared to standard ray casting.
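
    At its core, such a renderer composites samples front to back along each ray; a stripped-down sketch of that accumulation (filtering, boundary enhancement and lighting omitted), with rays assumed parallel to the volume's first axis:

```python
import numpy as np

def raycast(volume, opacity_scale=0.05):
    """volume: (depth, h, w) array of normalized intensities in [0, 1]."""
    color = np.zeros(volume.shape[1:])
    alpha = np.zeros(volume.shape[1:])
    for z in range(volume.shape[0]):
        sample = volume[z]
        a = np.clip(sample * opacity_scale, 0.0, 1.0)  # toy transfer function
        color += (1.0 - alpha) * a * sample            # front-to-back color
        alpha += (1.0 - alpha) * a                     # accumulated opacity
        if np.all(alpha > 0.99):                       # all rays saturated
            break
    return color
```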

  20. About CABI Full Text

    Institute of Scientific and Technical Information of China (English)

    2014-01-01

    Centre for Agriculture and Bioscience International (CABI) is a not-for-profit international Agricultural Information Institute with headquarters in Britain. It aims to improve people’s lives by providing information and applying scientific expertise to solve problems in agriculture and the environment. CABI Full-text is one of the publishing products of CABI. CABI’s full text repository is growing rapidly

  1. Volumetric and two-dimensional image interpretation show different cognitive processes in learners.

    Science.gov (United States)

    van der Gijp, Anouk; Ravesloot, Cécile J; van der Schaaf, Marieke F; van der Schaaf, Irene C; Huige, Josephine C B M; Vincken, Koen L; Ten Cate, Olle Th J; van Schaik, Jan P J

    2015-05-01

    In current practice, radiologists interpret digital images, including a substantial amount of volumetric images. We hypothesized that interpretation of a stack of a volumetric data set demands different skills than interpretation of two-dimensional (2D) cross-sectional images. This study aimed to investigate and compare the knowledge and skills used for interpretation of volumetric versus 2D images. Twenty radiology clerks were asked to think out loud while reading four or five volumetric computed tomography (CT) images in stack mode and four or five 2D CT images. Cases were presented in a digital testing program allowing stack viewing of volumetric data sets and changing of views and window settings. Thoughts verbalized by the participants were registered and coded by a framework of knowledge and skills concerning three components: perception, analysis, and synthesis. The components were subdivided into 16 discrete knowledge and skill elements. A within-subject analysis was performed to compare cognitive processes during volumetric image readings versus 2D cross-sectional image readings. Most utterances contained knowledge and skills concerning perception (46%). A smaller part involved synthesis (31%) and analysis (23%). More utterances regarded perception in volumetric image interpretation than in 2D image interpretation (median 48% vs 35%; z = -3.9; P …). Cognitive processes in volumetric and 2D cross-sectional image interpretation differ substantially. Volumetric image interpretation draws predominantly on perceptual processes, whereas 2D image interpretation is mainly characterized by synthesis. The results encourage the use of volumetric images for teaching and testing perceptual skills. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  2. A reference GNSS tropospheric dataset over Europe.

    Science.gov (United States)

    Pacione, Rosa; Di Tomaso, Simona

    2016-04-01

    The present availability of 18 years of GNSS data belonging to the European Permanent Network (EPN, http://www.epncb.oma.be/) is a valuable basis for the development of a climate data record of GNSS tropospheric products over Europe. This dataset has high potential for monitoring trends and variability in atmospheric water vapour, improving the knowledge of climatic trends of atmospheric water vapour, and being useful for global and regional NWP reanalyses as well as climate model simulations. In the framework of EPN-Repro2, a second reprocessing campaign of the EPN, five Analysis Centres have homogeneously reprocessed the EPN network for 1996-2013. Three Analysis Centres provide homogeneously reprocessed solutions for the entire network, produced with three different software packages: Bernese, GAMIT and GIPSY-OASIS. Smaller subnetworks based on Bernese 5.2 are also provided. A major effort is made to provide solutions that are the basis for deriving new coordinates, velocities and troposphere parameters, Zenith Tropospheric Delays and Horizontal Gradients, for the entire EPN. These individual contributions are combined in order to provide the official EPN reprocessed products. A preliminary tropospheric combined solution for the period 1996-2013 has been carried out. It is based on all the available homogeneously reprocessed solutions and offers the possibility to assess each of them prior to the ongoing final combination. We will present the results of the EPN-Repro2 tropospheric combined products and how the climate community will benefit from them. Acknowledgment: The EPN-Repro2 working group is acknowledged for providing the EPN solutions used in this work. E-GEOS activity is carried out in the framework of ASI contract 2015-050-R.0.

  3. Application of Huang-Hilbert Transforms to Geophysical Datasets

    Science.gov (United States)

    Duffy, Dean G.

    2003-01-01

    The Huang-Hilbert transform is a promising new method for analyzing nonstationary and nonlinear datasets. In this talk I will apply this technique to several important geophysical datasets. To understand the strengths and weaknesses of this method, multi-year, hourly datasets of sea level heights and solar radiation will be analyzed. Then we will apply this transform to the analysis of gravity waves observed in a mesoscale observational net.
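
    A minimal sketch of the transform itself: empirical mode decomposition followed by instantaneous frequency of each mode, here using the third-party PyEMD package (assumed installed) and SciPy's Hilbert transform:

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # assumed third-party EMD implementation

t = np.linspace(0.0, 10.0, 2000)
signal = np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 7.0 * t ** 1.2)

imfs = EMD().emd(signal)                 # intrinsic mode functions
for imf in imfs:
    phase = np.unwrap(np.angle(hilbert(imf)))
    inst_freq = np.diff(phase) / (2 * np.pi * np.diff(t))   # Hz
```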

  4. Norwegian Hydrological Reference Dataset for Climate Change Studies

    Energy Technology Data Exchange (ETDEWEB)

    Magnussen, Inger Helene; Killingland, Magnus; Spilde, Dag

    2012-07-01

    Based on the Norwegian hydrological measurement network, NVE has selected a Hydrological Reference Dataset for studies of hydrological change. The dataset meets international standards with high data quality. It is suitable for monitoring and studying the effects of climate change on the hydrosphere and cryosphere in Norway. The dataset includes streamflow, groundwater, snow, glacier mass balance and length change, lake ice and water temperature in rivers and lakes.(Author)

  5. Creating a distortion characterisation dataset for visual band cameras using fiducial markers

    CSIR Research Space (South Africa)

    Jermy, R

    2015-11-01

    Full Text Available This will allow other researchers to perform the same steps and create better algorithms to accurately locate fiducial markers and calibrate cameras. A second dataset that can be used to assess the accuracy of the stereo vision of two calibrated cameras is also...

  6. full on riot

    Directory of Open Access Journals (Sweden)

    Moses Iten

    2005-08-01

    Full Text Available “hey moses full on riot in lawson st the station’s on fire! been going since 4. molotov and more. full on,” reads an SMS message received on the backseat of a Tasmanian bus. What follows is a journey through the landscape of a Gunavidji, whose brothers have all gone to the land of the dead; metallic scraping in the glass cases of the Hobart Museum; a Palestinian woman giving up on her people; land-snails exposing cultural inaccuracies; photographing Australia’s war zone; entering the St Peter’s Basilica of Rome with bulldozers - all in the name of preparing to interview prominent Israeli writer Etgar Keret.

  7. Full moon and crime.

    Science.gov (United States)

    Thakur, C P; Sharma, D

    The incidence of crimes reported to three police stations in different towns (one rural, one urban, one industrial) was studied to see if it varied with the day of the lunar cycle. The period of the study covered 1978-82. The incidence of crimes committed on full moon days was much higher than on all other days, new moon days, and seventh days after the full moon and new moon. A small peak in the incidence of crimes was observed on new moon days, but this was not significant when compared with crimes committed on other days. The incidence of crimes on equinox and solstice days did not differ significantly from those on other days, suggesting that the sun probably does not influence the incidence of crime. The increased incidence of crimes on full moon days may be due to "human tidal waves" caused by the gravitational pull of the moon.

  8. Full tree harvesting update

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, K.; White, K.

    1981-03-01

    An important harvesting alternative in North America is the Full Tree Method, in which trees are felled and transported to roadside, intermediate or primary landings with limbs and branches intact. The acceptance of Full Tree Systems is due to many factors, including labour productivity and increased demands on the forest for "new products". These conditions are shaping the future of forest harvesting systems, but must not be the sole determinants. All harvesting implications, such as those affecting productivity and silviculture, should be thoroughly understood. This paper does not try to discuss every implication, nor any particular one in depth; its purpose is to highlight those areas requiring consideration and to review several current North American Full Tree Systems. (Refs. 5).

  9. Analysis of Public Datasets for Wearable Fall Detection Systems

    Directory of Open Access Journals (Sweden)

    Eduardo Casilari

    2017-06-01

    Full Text Available Due to the boom of wireless handheld devices such as smartwatches and smartphones, wearable Fall Detection Systems (FDSs) have become a major focus of attention among the research community during the last years. The effectiveness of a wearable FDS must be contrasted against a wide variety of measurements obtained from inertial sensors during the occurrence of falls and Activities of Daily Living (ADLs). In this regard, access to public databases constitutes the basis for an open and systematic assessment of fall detection techniques. This paper reviews and appraises twelve existing publicly available data repositories containing measurements of ADLs and emulated falls envisaged for the evaluation of fall detection algorithms in wearable FDSs. The analysis of the datasets found is performed in a comprehensive way, taking into account the multiple factors involved in the definition of the testbeds deployed for the generation of the mobility samples. The study of the traces brings to light the lack of a common experimental benchmarking procedure and, consequently, the large heterogeneity of the datasets from a number of perspectives (length and number of samples, typology of the emulated falls and ADLs, characteristics of the test subjects, features and positions of the sensors, etc.). Accordingly, the statistical analysis of the samples reveals the impact of the sensor range on the reliability of the traces. In addition, the study evidences the importance of the selection of the ADLs and the need to categorize the ADLs depending on the intensity of the movements in order to evaluate the capability of a certain detection algorithm to discriminate falls from ADLs.

  10. About CABI Full Text

    Institute of Scientific and Technical Information of China (English)

    2013-01-01

    Centre for Agriculture and Bioscience International (CABI) is a not-for-profit international Agricultural Information Institute with headquarters in Britain. It aims to improve people’s lives by providing information and applying scientific expertise to solve problems in agriculture and the environment. CABI Full-text is one of the publishing products of CABI. CABI’s full text repository is growing rapidly and has now been integrated into all our databases including CAB Abstracts, Global Health

  11. Compressive full waveform lidar

    Science.gov (United States)

    Yang, Weiyi; Ke, Jun

    2017-05-01

    To avoid a high-bandwidth detector, a fast A/D converter, and a large memory disk, a compressive full waveform LIDAR system, which uses a temporally modulated laser instead of a pulsed laser, is studied in this paper. Full waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns are used to modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed to make measurements. The SPIRAL algorithm with a canonical basis is employed when Poisson noise is considered in low-illumination conditions.
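
    A toy version of the measurement model: a sparse range profile observed through random binary patterns under Poisson noise, reconstructed here with a plain nonnegative ISTA loop as a rough stand-in for the Poisson-aware SPIRAL solver:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 512, 128                                    # waveform length, patterns
x = np.zeros(n)
x[[60, 300, 310]] = [50.0, 30.0, 20.0]             # sparse surface returns
A = rng.integers(0, 2, size=(m, n)).astype(float)  # binary modulation
y = rng.poisson(A @ x + 1.0)                       # photon-count measurements

L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant
xhat, lam = np.zeros(n), 2.0
for _ in range(500):                               # ISTA with nonnegativity
    grad = A.T @ (A @ xhat - y)
    xhat = np.maximum(xhat - (grad + lam) / L, 0.0)
```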

  12. About CABI Full Text

    Institute of Scientific and Technical Information of China (English)

    2013-01-01

    Centre for Agriculture and Bioscience International (CABI) is a not-for-profit international Agricultural Information Institute with headquarters in Britain. It aims to improve people’s lives by providing information and applying scientific expertise to solve problems in agriculture and the environment. CABI Full-text is one of the publishing products of CABI. CABI’s full text repository is growing rapidly and has now been integrated into all our databases including CAB Abstracts, Global Health, our Internet Resources and Jour-

  13. Compression method based on training dataset of SVM

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A method to compress the training dataset of a Support Vector Machine (SVM), based on the characteristics of the Support Vector Machine, is proposed. First, distances between the samples of the two training classes are computed, and the samples that lie far away from the hyperplane are discarded in order to compress the training dataset. The time spent training the SVM with the training dataset compressed by this method is shortened considerably. The results of the experiment show that the algorithm is effective.
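
    The idea can be sketched with scikit-learn: train once, keep only samples near the decision boundary, and retrain on the reduced set (the margin threshold is an illustrative assumption):

```python
import numpy as np
from sklearn.svm import SVC

def compress_training_set(X, y, margin=1.5):
    svm = SVC(kernel="linear").fit(X, y)
    dist = np.abs(svm.decision_function(X))   # margin-like distance score
    keep = dist < margin                      # near-boundary samples only
    return X[keep], y[keep]

# X_small, y_small = compress_training_set(X, y)
# SVC(kernel="linear").fit(X_small, y_small)  # faster retraining
```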

  14. Providing Geographic Datasets as Linked Data in Sdi

    Science.gov (United States)

    Hietanen, E.; Lehto, L.; Latvala, P.

    2016-06-01

    In this study, a prototype service to provide data from a Web Feature Service (WFS) as linked data is implemented. First, persistent and unique Uniform Resource Identifiers (URIs) are created for all spatial objects in the dataset. The objects are available from those URIs in Resource Description Framework (RDF) data format. Next, a Web Ontology Language (OWL) ontology is created to describe the dataset information content using the Open Geospatial Consortium's (OGC) GeoSPARQL vocabulary. The existing data model is modified in order to take into account the linked data principles. The implemented service produces an HTTP response dynamically. The data for the response is first fetched from the existing WFS. Then the Geography Markup Language (GML) output of the WFS is transformed on-the-fly to the RDF format. Content negotiation is used to serve the data in different RDF serialization formats. This solution facilitates the use of a dataset in different applications without replicating the whole dataset. In addition, individual spatial objects in the dataset can be referred to with URIs. Furthermore, the needed information content of the objects can be easily extracted from the RDF serializations available from those URIs. A solution for linking data objects to the dataset URI is also introduced by using the Vocabulary of Interlinked Datasets (VoID). The dataset is divided into subsets and each subset is given its persistent and unique URI. This enables the whole dataset to be explored with a web browser and all individual objects to be indexed by search engines.
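
    The GML-to-RDF step can be sketched with rdflib and the GeoSPARQL vocabulary; the feature URIs and geometry below are hypothetical:

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

GEO = Namespace("http://www.opengis.net/ont/geosparql#")
g = Graph()
g.bind("geo", GEO)

feature = URIRef("http://example.org/dataset/feature/42")
geom = URIRef("http://example.org/dataset/feature/42/geometry")
g.add((feature, RDF.type, GEO.Feature))
g.add((feature, GEO.hasGeometry, geom))
g.add((geom, RDF.type, GEO.Geometry))
g.add((geom, GEO.asWKT,
       Literal("POINT(24.94 60.17)", datatype=GEO.wktLiteral)))

# Content negotiation would choose among serializations such as Turtle.
print(g.serialize(format="turtle"))
```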

  15. Synthetic neuronal datasets for benchmarking directed functional connectivity metrics

    National Research Council Canada - National Science Library

    Rodrigues, João; Andrade, Alexandre

    2015-01-01

    Background. Datasets consisting of synthetic neural data generated with quantifiable and controlled parameters are a valuable asset in the process of testing and validating directed functional connectivity metrics...

  16. BIA Indian Lands Dataset (Indian Lands of the United States)

    Data.gov (United States)

    Federal Geographic Data Committee — The American Indian Reservations / Federally Recognized Tribal Entities dataset depicts feature location, selected demographics and other associated data for the 561...

  17. BDML Datasets: 9 [SSBD[Archive

    Lifescience Database Archive (English)

    Full Text Available Mn_HKM Mus musculus gene expression data Experiment Hiroki R. Ueda, Koh-hei Masumot...asufumi Shigeyoshi Hiroki R. Ueda, RIKEN, Center for Developmental Biology, Functional Genomics Unit See det

  18. BDML Datasets: 6 [SSBD[Archive

    Lifescience Database Archive (English)

    Full Text Available Eco_MinE_AS E. coli single molecule dynamics Simulation Arjunan, S. N. V. and Tomit...a, M. Satya Nanda Vel Arjunan, RIKEN, Quantitative Biology Center, Laboratory for Biochemical Simulation See

  19. BDML Datasets: 1 [SSBD[Archive

    Lifescience Database Archive (English)

    Full Text Available Ce_KK_P002 C. elegans nuclear division dynamics Measurement Kyoda, K., Furukawa, M....ometer), 40 (second) Nuclear division dynamics in embryogenesis of Caenorhabditis elegans obtained from differential interference contrast microscopy ...

  20. BDML Datasets: 3 [SSBD[Archive

    Lifescience Database Archive (English)

    Full Text Available antella, A., Khairy, K., Bao, Z., Wittbrodt, J., and Stelzer, E.H.K. Philipp J. Keller, European Molecular Biology... Laboratory, Cell Biology and Biophysics Unit, Stelzer Laboratory See details in Keller et al. (2010)

  1. BDML Datasets: 2 [SSBD[Archive

    Lifescience Database Archive (English)

    Full Text Available Ce_AK C. elegans cell simulation Simulation Kimura, A. and Onami, S. Shuichi Onami, RIKEN, Quantitative Biol...ogy Center, Laboratory for Developmental Dynamics See details in Kimura, A. and Ona

  2. About CABI Full Text

    Institute of Scientific and Technical Information of China (English)

    2013-01-01

    Centre for Agriculture and Bioscience International (CABI) is a not-for-profit international Agricultural Information Institute with headquarters in Britain. It aims to improve people’s lives by providing information and applying scientific expertise to solve problems in agriculture and the environment. CABI Full-text is one of the publishing products of CABI.

  4. About CABI Full Text

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    Centre for Agriculture and Bioscience International (CABI) is a not-for-profit international Agricultural Information Institute with headquarters in Britain. It aims to improve people’s lives by providing information and applying scientific expertise to solve problems in agriculture and the environment. CABI Full-text is one of the publishing products of CABI.

  5. Fuzzy-rough set and fuzzy ID3 decision approaches to knowledge discovery in datasets

    Directory of Open Access Journals (Sweden)

    O. G. Elbarbary

    2012-07-01

    Full Text Available Fuzzy rough sets are a generalization of traditional rough sets that deals with both fuzziness and vagueness in data. Existing research on fuzzy rough sets mainly concentrates on the construction of approximation operators; less effort has been devoted to knowledge discovery in datasets with fuzzy rough sets. This paper focuses on knowledge discovery in datasets with fuzzy rough sets. After analyzing previous work on knowledge discovery with fuzzy rough sets, we introduce formal concepts of attribute reduction with fuzzy rough sets and study the structure of attribute reduction in detail.
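
    As a point of reference for the approximation operators mentioned above, the following is a minimal sketch of the standard fuzzy-rough lower approximation (not this paper's specific reduction algorithm), assuming NumPy; the per-attribute similarity relation and the min t-norm are illustrative choices.

        # Sketch of the standard fuzzy-rough lower approximation:
        #   lower_A(x) = min_y max(1 - R(x, y), A(y)),
        # with R a fuzzy similarity relation and A a fuzzy decision class.
        import numpy as np

        def similarity(X):
            """Per-attribute similarity 1 - normalized |difference|,
            aggregated over attributes by the minimum (a common t-norm)."""
            n, m = X.shape
            rng = X.max(axis=0) - X.min(axis=0) + 1e-12
            R = np.ones((n, n))
            for a in range(m):
                diff = 1.0 - np.abs(X[:, None, a] - X[None, :, a]) / rng[a]
                R = np.minimum(R, diff)
            return R

        def lower_approximation(R, A):
            # For each x: inf over y of max(1 - R(x, y), A(y)).
            return np.min(np.maximum(1.0 - R, A[None, :]), axis=1)

        X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1]])
        A = np.array([1.0, 1.0, 0.0])   # fuzzy membership in a decision class
        print(lower_approximation(similarity(X), A))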

  6. Hybridoma cell-culture and glycan profile dataset at various bioreactor conditions

    Directory of Open Access Journals (Sweden)

    Hemlata Bhatia

    2016-12-01

    Full Text Available This is an “11 factor-2 level-12 run” Plackett-Burman experimental design dataset. The dataset includes 11 engineering bioreactor parameters as input variables. These 11 factors were varied at 2 levels, and 23 response variables, which are glycan profile attributes, were measured, as described in “A Design Space Exploration for Control of Critical Quality Attributes of mAb” (H. Bhatia, E.K. Read, C.D. Agarabi, K.A. Brorson, S.C. Lute, S. Yoon, 2016) [2].

  7. Harmonisation of variables names prior to conducting statistical analyses with multiple datasets: an automated approach

    Directory of Open Access Journals (Sweden)

    Bosch-Capblanch Xavier

    2011-05-01

    Full Text Available Abstract Background Data requirements by governments, donors and the international community to measure health and development achievements have increased in the last decade. Datasets produced in surveys conducted in several countries and years are often combined to analyse time trends and geographical patterns of demographic and health related indicators. However, since not all datasets have the same structure, variable definitions and codes, they have to be harmonised prior to submitting them to the statistical analyses. Manually searching, renaming and recoding variables are extremely tedious and error-prone tasks, especially when the number of datasets and variables is large. This article presents an automated approach to harmonise variable names across several datasets, which optimises the search of variables, minimises manual inputs and reduces the risk of error. Results Three consecutive algorithms are applied iteratively to search for each variable of interest for the analyses in all datasets. The first search (A) captures particular cases that could not be solved in an automated way in the search iterations; the second search (B) is run if search A produced no hits and identifies variables the labels of which contain certain key terms defined by the user. If this search produces no hits, a third one (C) is run to retrieve variables which have been identified in other surveys, as an illustration. For each variable of interest, the outputs of these engines can be: (O1) a single best-matching variable is found, (O2) more than one matching variable is found, or (O3) no matching variables are found. Output O2 is solved by user judgement. Examples using four variables are presented showing that the searches have a 100% sensitivity and specificity after a second iteration. Conclusion Efficient and tested automated algorithms should be used to support the harmonisation process needed to analyse multiple datasets. This is especially relevant when
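
    The cascade of searches A, B and C lends itself to a compact illustration. The sketch below is a simplified, hypothetical rendering of that logic in Python; the column names, labels, alias table and key terms are invented for the example, and the real implementation's matching rules are certainly richer.

        # Illustrative sketch of the three-stage search cascade (A, B, C).
        def find_variable(columns, labels, aliases, key_terms, known_names):
            # Search A: particular cases resolved beforehand (alias table).
            hits = [c for c in columns if c in aliases]
            if not hits:
                # Search B: variable labels containing user-defined key terms.
                hits = [c for c in columns
                        if any(t in labels.get(c, "").lower() for t in key_terms)]
            if not hits:
                # Search C: names already identified in other surveys.
                hits = [c for c in columns if c in known_names]
            if len(hits) == 1:
                return "O1", hits                       # single best match
            return ("O2", hits) if hits else ("O3", [])  # user judgement / none

        columns = ["v025", "hw70", "region_code"]
        labels = {"v025": "Type of place of residence",
                  "hw70": "Height/Age z-score"}
        print(find_variable(columns, labels, {"v026"}, ["residence"],
                            {"region_code"}))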

  8. Harmonized dataset of ozone profiles from satellite limb and occultation measurements

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2013-06-01

    Full Text Available In this paper, we present a HARMonized dataset of OZone profiles (HARMOZ) based on limb and occultation measurements from the Envisat (GOMOS, MIPAS and SCIAMACHY), Odin (OSIRIS, SMR) and SCISAT (ACE-FTS) satellite instruments. These measurements provide high-vertical-resolution ozone profiles covering the altitude range from the upper troposphere up to the mesosphere in years 2001–2012. HARMOZ has been created in the framework of the European Space Agency Climate Change Initiative project. The harmonized dataset consists of original retrieved ozone profiles from each instrument, which are screened for invalid data by the instrument teams. While the original ozone profiles are presented in different units and on different vertical grids, the harmonized dataset is given on a common pressure grid in netCDF format. The pressure grid corresponds to a vertical sampling of ~1 km below 20 km and 2–3 km above 20 km. The vertical range of the ozone profiles is specific to each instrument, thus all information contained in the original data is preserved. Provided altitude and temperature profiles allow the representation of ozone profiles in number density or mixing ratio on pressure or altitude vertical grids. Geolocation, uncertainty estimates and vertical resolution are provided for each profile. For each instrument, optional parameters, which might be related to the data quality, are also included. For convenience of users, tables of biases between each pair of instruments for each month, as well as bias uncertainties, are provided. These tables characterize the data consistency and can be used in various bias and drift analyses, which are needed, for instance, for combining several datasets to obtain a long-term climate dataset. This user-friendly dataset can be interesting and useful for various analyses and applications, such as data merging, data validation, assimilation and scientific research. Dataset is available at: http
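
    Since the dataset ships temperature and pressure alongside the profiles, converting between the two ozone representations mentioned above is a short application of the ideal gas law. A minimal sketch, with hypothetical example values:

        # Convert an ozone number-density profile to volume mixing ratio
        # using pressure and temperature (ideal gas law: n_air = p / (k_B T)).
        import numpy as np

        K_B = 1.380649e-23  # Boltzmann constant, J/K

        def number_density_to_vmr(n_o3, p, t):
            """n_o3 in molecules m^-3, p in Pa, t in K -> mixing ratio."""
            n_air = p / (K_B * t)
            return n_o3 / n_air

        p = np.array([10000.0, 5000.0, 1000.0])    # Pa (hypothetical levels)
        t = np.array([220.0, 230.0, 250.0])        # K
        n_o3 = np.array([6.0e18, 3.0e18, 4.0e17])  # molecules m^-3
        print(number_density_to_vmr(n_o3, p, t))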

  9. Filter Bank Common Spatial Pattern algorithm on BCI Competition IV Datasets 2a and 2b

    Directory of Open Access Journals (Sweden)

    Kai Keng eAng

    2012-03-01

    Full Text Available The Common Spatial Pattern (CSP) algorithm is an effective and popular method for classifying 2-class motor imagery electroencephalogram (EEG) data, but its effectiveness depends on the subject-specific frequency band. This paper presents the Filter Bank Common Spatial Pattern (FBCSP) algorithm to optimize the subject-specific frequency band for CSP on Datasets 2a and 2b of the Brain-Computer Interface (BCI) Competition IV. Dataset 2a comprised 4 classes of 22-channel EEG data from 9 subjects, and Dataset 2b comprised 2 classes of 3 bipolar-channel EEG data from 9 subjects. Multi-class extensions to FBCSP are also presented to handle the 4-class EEG data in Dataset 2a, namely, the Divide-and-Conquer (DC), Pair-Wise (PW), and One-Versus-Rest (OVR) approaches. Two feature selection algorithms are also presented to select discriminative CSP features on Dataset 2b, namely, the Mutual Information-based Best Individual Feature (MIBIF) algorithm and the Mutual Information-based Rough Set Reduction (MIRSR) algorithm. The single-trial classification accuracies were presented using 10x10-fold cross-validations on the training data and session-to-session transfer on the evaluation data from both datasets. Disclosure of the test data labels after the BCI Competition IV showed that the FBCSP algorithm performed best among the submitted algorithms, yielding a mean kappa value of 0.569 and 0.600 across all subjects in Datasets 2a and 2b respectively.
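
    For orientation, the CSP step at the core of FBCSP can be written as a generalized eigenvalue problem on the two class covariance matrices. The sketch below shows that step only, with log-variance features, on synthetic data; it omits the filter bank and the MIBIF/MIRSR selection stages that the paper adds.

        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(trials_a, trials_b, n_pairs=2):
            """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
            def mean_cov(trials):
                return np.mean([x @ x.T / np.trace(x @ x.T) for x in trials],
                               axis=0)
            ca, cb = mean_cov(trials_a), mean_cov(trials_b)
            # Generalized eigenproblem ca w = lambda (ca + cb) w;
            # eigenvalues come back in ascending order.
            vals, vecs = eigh(ca, ca + cb)
            picks = np.r_[np.arange(n_pairs), np.arange(-n_pairs, 0)]
            return vecs[:, picks].T              # (2*n_pairs, n_channels)

        def csp_features(trial, w):
            z = w @ trial                        # spatially filtered signals
            var = z.var(axis=1)
            return np.log(var / var.sum())       # log-variance features

        rng = np.random.default_rng(0)
        a = rng.standard_normal((20, 6, 100))    # toy class-A trials
        b = rng.standard_normal((20, 6, 100))    # toy class-B trials
        w = csp_filters(a, b)
        print(csp_features(a[0], w))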

  10. Improving Accuracy and Coverage of Data Mining Systems that are Built from Noisy Datasets: A New Model

    Directory of Open Access Journals (Sweden)

    Luai A. Al Shalabi

    2009-01-01

    Full Text Available Problem statement: Noise within datasets has to be dealt with under most circumstances. This noise includes misclassified data as well as missing data; misclassification is often the result of simple human error. Such errors decrease the accuracy of a data mining system, making it unlikely to be used. The objective was to propose an effective algorithm to deal with noise in the form of missing data in datasets. Approach: A model for improving the accuracy and coverage of data mining systems was proposed and the algorithm of this model was constructed. The algorithm deals with missing values in datasets. It splits the original dataset into two new datasets: one contains tuples that have no missing values and the other contains tuples that have missing values. The proposed algorithm is applied to each of the two new datasets. It finds the reduct of each of them and then merges the new reducts into one new dataset which is ready for training. Results: The results were promising, as the algorithm increased the accuracy and coverage of the tested dataset compared to traditional models. Conclusion: The proposed algorithm performs effectively and generates better results than previous ones.
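
    The split step of the proposed model is straightforward to express with pandas; the sketch below shows that step on a toy table (the reduct computation itself is only indicated by a comment, since the paper's rough-set machinery is not reproduced here).

        import pandas as pd

        df = pd.DataFrame({"age": [34, None, 51, 29],
                           "bp":  [120, 135, None, 118],
                           "cls": ["a", "b", "a", "b"]})

        complete = df.dropna()                    # tuples with no missing values
        incomplete = df[df.isna().any(axis=1)]    # tuples with missing values
        # ...compute a reduct for each subset here, then merge the reducts
        # into a single training set, as the proposed algorithm does.
        print(len(complete), len(incomplete))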

  11. Comparing Visually Assessed BI-RADS Breast Density and Automated Volumetric Breast Density Software: A Cross-Sectional Study in a Breast Cancer Screening Setting.

    Directory of Open Access Journals (Sweden)

    Daniëlle van der Waal

    Full Text Available The objective of this study is to compare different methods for measuring breast density, both visual assessments and automated volumetric density, in a breast cancer screening setting. These measures could potentially be implemented in future screening programmes, in the context of personalised screening or screening evaluation. Digital mammographic exams (N = 992) of women participating in the Dutch breast cancer screening programme (age 50-75 y) in 2013 were included. Breast density was measured in three different ways: BI-RADS density (5th edition) and with two commercially available automated software programs (Quantra and Volpara volumetric density). BI-RADS density (ordinal scale) was assessed by three radiologists. Quantra (v1.3) and Volpara (v1.5.0) provide continuous estimates. Different comparison methods were used, including Bland-Altman plots and correlation coefficients (e.g., the intraclass correlation coefficient [ICC]). Based on the BI-RADS classification, 40.8% of the women had 'heterogeneously or extremely dense' breasts. The median volumetric percent density was 12.1% (IQR: 9.6-16.5) for Quantra, which was higher than the Volpara estimate (median 6.6%, IQR: 4.4-10.9). The mean difference between Quantra and Volpara was 5.19% (95% CI: 5.04-5.34; ICC: 0.64). There was a clear increase in volumetric percent dense volume as BI-RADS density increased. The highest accuracy for predicting the presence of BI-RADS c+d (heterogeneously or extremely dense) was observed with a cut-off value of 8.0% for Volpara and 13.8% for Quantra. Although there was no perfect agreement, there appeared to be a strong association between all three measures. Both volumetric density measures seem to be usable in breast cancer screening programmes, provided that the required data flow can be realized.
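
    For readers unfamiliar with the comparison methods named above, a minimal Bland-Altman sketch follows: the bias is the mean of the paired differences and the 95% limits of agreement are bias ± 1.96 SD. The values are hypothetical, not the study data.

        import numpy as np

        quantra = np.array([13.2, 10.5, 16.8, 9.9, 14.1])  # percent density
        volpara = np.array([7.0, 5.2, 11.3, 4.8, 8.6])

        diff = quantra - volpara
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)
        print(f"bias={bias:.2f}%, "
              f"limits of agreement=({bias - loa:.2f}, {bias + loa:.2f})")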

  12. Volumetric Spectroscopic Imaging of Glioblastoma Multiforme Radiation Treatment Volumes

    Energy Technology Data Exchange (ETDEWEB)

    Parra, N. Andres [Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida (United States); Maudsley, Andrew A. [Department of Radiology, University of Miami Miller School of Medicine, Miami, Florida (United States); Gupta, Rakesh K. [Department of Radiology and Imaging, Fortis Memorial Research Institute, Gurgaon, Haryana (India); Ishkanian, Fazilat; Huang, Kris [Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida (United States); Walker, Gail R. [Biostatistics and Bioinformatics Core Resource, Sylvester Cancer Center, University of Miami Miller School of Medicine, Miami, Florida (United States); Padgett, Kyle [Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida (United States); Department of Radiology, University of Miami Miller School of Medicine, Miami, Florida (United States); Roy, Bhaswati [Department of Radiology and Imaging, Fortis Memorial Research Institute, Gurgaon, Haryana (India); Panoff, Joseph; Markoe, Arnold [Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida (United States); Stoyanova, Radka, E-mail: RStoyanova@med.miami.edu [Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida (United States)

    2014-10-01

    Purpose: Magnetic resonance (MR) imaging and computed tomography (CT) are used almost exclusively in radiation therapy planning of glioblastoma multiforme (GBM), despite their well-recognized limitations. MR spectroscopic imaging (MRSI) can identify biochemical patterns associated with normal brain and tumor, predominantly by observation of choline (Cho) and N-acetylaspartate (NAA) distributions. In this study, volumetric 3-dimensional MRSI was used to map these compounds over a wide region of the brain and to evaluate metabolite-defined treatment targets (metabolic tumor volumes [MTVs]). Methods and Materials: Volumetric MRSI with an effective voxel size of ∼1.0 mL and standard clinical MR images were obtained from 19 GBM patients. Gross tumor volumes and edema were manually outlined, and clinical target volumes (CTVs) receiving 46 and 60 Gy were defined (CTV_46 and CTV_60, respectively). MTV_Cho and MTV_NAA were constructed based on volumes with high Cho and low NAA relative to values estimated from normal-appearing tissue. Results: The MRSI coverage of the brain was between 70% and 76%. The MTV_NAA were almost entirely contained within the edema, and the correlation between the 2 volumes was significant (r=0.68, P=.001). In contrast, a considerable fraction of MTV_Cho was outside of the edema (median, 33%) and for some patients it was also outside of the CTV_46 and CTV_60. These untreated volumes were greater than 10% for 7 patients (37%) in the study, and on average more than one-third (34.3%) of the MTV_Cho for these patients was outside of CTV_60. Conclusions: This study demonstrates the potential usefulness of whole-brain MRSI for radiation therapy planning of GBM and revealed that areas of metabolically active tumor are not covered by standard RT volumes. The described integration of MTV into the RT system will pave the way to future clinical trials investigating outcomes in patients treated based on
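
    The MTV construction described above amounts to thresholding metabolite maps against statistics from normal-appearing tissue. A minimal NumPy sketch follows; the threshold of k standard deviations and the synthetic maps are illustrative assumptions, not the authors' published cut-offs.

        import numpy as np

        def metabolic_volumes(cho, naa, normal_mask, k=2.0):
            mu_c, sd_c = cho[normal_mask].mean(), cho[normal_mask].std()
            mu_n, sd_n = naa[normal_mask].mean(), naa[normal_mask].std()
            mtv_cho = cho > mu_c + k * sd_c   # abnormally high choline
            mtv_naa = naa < mu_n - k * sd_n   # abnormally low N-acetylaspartate
            return mtv_cho, mtv_naa

        rng = np.random.default_rng(1)
        cho, naa = rng.normal(1.0, 0.1, (2, 64, 64, 16))  # toy metabolite maps
        normal = np.ones_like(cho, dtype=bool)            # toy "normal" mask
        mtv_cho, mtv_naa = metabolic_volumes(cho, naa, normal)
        print(mtv_cho.sum(), mtv_naa.sum())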

  13. Hepatosplenic volumetric assessment at MDCT for staging liver fibrosis.

    Science.gov (United States)

    Pickhardt, Perry J; Malecki, Kyle; Hunt, Oliver F; Beaumont, Claire; Kloke, John; Ziemlewicz, Timothy J; Lubner, Meghan G

    2017-07-01

    To investigate hepatosplenic volumetry at MDCT for non-invasive prediction of hepatic fibrosis. Hepatosplenic volume analysis in 624 patients (mean age, 48.8 years; 311 M/313 F) at MDCT was performed using dedicated software and compared against pathological fibrosis stage (F0 = 374; F1 = 48; F2 = 40; F3 = 65; F4 = 97). The liver segmental volume ratio (LSVR) was defined by Couinaud segments I-III over segments IV-VIII. All pre-cirrhotic fibrosis stages (METAVIR F1-F3) were based on liver biopsy within 1 year of MDCT. LSVR and total splenic volumes increased with stage of fibrosis, with mean (±SD) values of: F0: 0.26 ± 0.06 and 215.1 ± 88.5 cm³; F1: 0.25 ± 0.08 and 294.8 ± 153.4 cm³; F2: 0.33 ± 0.12 and 291.6 ± 197.1 cm³; F3: 0.39 ± 0.15 and 509.6 ± 402.6 cm³; F4: 0.56 ± 0.30 and 790.7 ± 450.3 cm³, respectively. Total hepatic volumes showed poor discrimination (F0: 1674 ± 320 cm³; F4: 1631 ± 691 cm³). For discriminating advanced fibrosis (≥F3), the ROC AUC values for LSVR, total liver volume, splenic volume and LSVR/spleen combined were 0.863, 0.506, 0.890 and 0.947, respectively. Relative changes in segmental liver volumes and total splenic volume allow for non-invasive staging of hepatic fibrosis, whereas total liver volume is a poor predictor. Unlike liver biopsy or elastography, these CT volumetric biomarkers can be obtained retrospectively on routine scans obtained for other indications. • Regional changes in hepatic volume (LSVR) correlate well with degree of fibrosis. • Total liver volume is a very poor predictor of underlying fibrosis. • Total splenic volume is associated with the degree of hepatic fibrosis. • Hepatosplenic volume assessment is comparable to elastography for staging fibrosis. • Unlike elastography, volumetric analysis can be performed retrospectively.
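
    A small sketch of the LSVR biomarker and the kind of ROC analysis reported above, assuming scikit-learn; the segment volumes and the score distributions are synthetic stand-ins for the study data.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        def lsvr(seg):
            """Liver segmental volume ratio: Couinaud I-III over IV-VIII."""
            left = seg["I"] + seg["II"] + seg["III"]
            right = (seg["IV"] + seg["V"] + seg["VI"]
                     + seg["VII"] + seg["VIII"])
            return left / right

        example = {"I": 40.0, "II": 120.0, "III": 180.0, "IV": 260.0,
                   "V": 270.0, "VI": 240.0, "VII": 310.0, "VIII": 230.0}
        print("LSVR:", round(lsvr(example), 3))   # ~0.26, typical of F0 above

        # Discriminating advanced fibrosis (>= F3) from LSVR-like scores:
        rng = np.random.default_rng(2)
        y = rng.binomial(1, 0.3, 200)                 # 1 = advanced fibrosis
        scores = rng.normal(0.26 + 0.2 * y, 0.08)     # LSVR rises with stage
        print("ROC AUC:", round(roc_auc_score(y, scores), 3))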

  14. Development of a Global Historic Monthly Mean Precipitation Dataset

    Institute of Scientific and Technical Information of China (English)

    Yang Su; Xu Wenhui; Xu Yan; Li Qingxiang

    2016-01-01

    Global historic precipitation dataset is the base for climate and water cycle research. There have been several global historic land surface precipitation datasets developed by international data centers such as the US National Climatic Data Center (NCDC), European Climate Assessment & Dataset project team, Met Office, etc., but so far there are no such datasets developed by any research institute in China. In addition, each dataset has its own focus of study region, and the existing global precipitation datasets only contain sparse observational stations over China, which may result in uncertainties in East Asian precipitation studies. In order to take into account comprehensive historic information, users might need to employ two or more datasets. However, the non-uniform data formats, data units, station IDs, and so on add extra difficulties for users to exploit these datasets. For this reason, a complete historic precipitation dataset that takes advantage of various datasets has been developed and produced in the National Meteorological Information Center of China. Precipitation observations from 12 sources are aggregated, and the data formats, data units, and station IDs are unified. Duplicated stations with the same ID are identified, with duplicated observations removed. Consistency test, correlation coefficient test, significance t-test at the 95% confidence level, and significance F-test at the 95% confidence level are conducted first to ensure the data reliability. Only those datasets that satisfy all the above four criteria are integrated to produce the China Meteorological Administration global precipitation (CGP) historic precipitation dataset version 1.0. It contains observations at 31 thousand stations with 1.87 × 10⁷ data records, among which 4152 time series of precipitation are longer than 100 yr. This dataset plays a critical role in climate research due to its advantages in large data volume and high density of station network, compared to

  15. Development of a global historic monthly mean precipitation dataset

    Science.gov (United States)

    Yang, Su; Xu, Wenhui; Xu, Yan; Li, Qingxiang

    2016-04-01

    Global historic precipitation dataset is the base for climate and water cycle research. There have been several global historic land surface precipitation datasets developed by international data centers such as the US National Climatic Data Center (NCDC), European Climate Assessment & Dataset project team, Met Office, etc., but so far there are no such datasets developed by any research institute in China. In addition, each dataset has its own focus of study region, and the existing global precipitation datasets only contain sparse observational stations over China, which may result in uncertainties in East Asian precipitation studies. In order to take into account comprehensive historic information, users might need to employ two or more datasets. However, the non-uniform data formats, data units, station IDs, and so on add extra difficulties for users to exploit these datasets. For this reason, a complete historic precipitation dataset that takes advantage of various datasets has been developed and produced in the National Meteorological Information Center of China. Precipitation observations from 12 sources are aggregated, and the data formats, data units, and station IDs are unified. Duplicated stations with the same ID are identified, with duplicated observations removed. Consistency test, correlation coefficient test, significance t-test at the 95% confidence level, and significance F-test at the 95% confidence level are conducted first to ensure the data reliability. Only those datasets that satisfy all the above four criteria are integrated to produce the China Meteorological Administration global precipitation (CGP) historic precipitation dataset version 1.0. It contains observations at 31 thousand stations with 1.87 × 10⁷ data records, among which 4152 time series of precipitation are longer than 100 yr. This dataset plays a critical role in climate research due to its advantages in large data volume and high density of station network, compared to

  16. Accuracy assessment of gridded precipitation datasets in the Himalayas

    Science.gov (United States)

    Khan, A.

    2015-12-01

    Accurate precipitation data are vital for hydro-climatic modelling and water resources assessments. Based on mass balance calculations and Turc-Budyko analysis, this study investigates the accuracy of twelve widely used precipitation gridded datasets for sub-basins in the Upper Indus Basin (UIB) in the Himalayas-Karakoram-Hindukush (HKH) region. These datasets are: 1) Global Precipitation Climatology Project (GPCP), 2) Climate Prediction Centre (CPC) Merged Analysis of Precipitation (CMAP), 3) NCEP / NCAR, 4) Global Precipitation Climatology Centre (GPCC), 5) Climatic Research Unit (CRU), 6) Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE), 7) Tropical Rainfall Measuring Mission (TRMM), 8) European Reanalysis (ERA) interim data, 9) PRINCETON, 10) European Reanalysis-40 (ERA-40), 11) Willmott and Matsuura, and 12) WATCH Forcing Data based on ERA interim (WFDEI). Precipitation accuracy and consistency was assessed by physical mass balance involving sum of annual measured flow, estimated actual evapotranspiration (average of 4 datasets), estimated glacier mass balance melt contribution (average of 4 datasets), and ground water recharge (average of 3 datasets), during 1999-2010. Mass balance assessment was complemented by Turc-Budyko non-dimensional analysis, where annual precipitation, measured flow and potential evapotranspiration (average of 5 datasets) data were used for the same period. Both analyses suggest that all tested precipitation datasets significantly underestimate precipitation in the Karakoram sub-basins. For the Hindukush and Himalayan sub-basins most datasets underestimate precipitation, except ERA-interim and ERA-40. The analysis indicates that for this large region with complicated terrain features and stark spatial precipitation gradients the reanalysis datasets have better consistency with flow measurements than datasets derived from records of only sparsely distributed climatic

  17. Emptiness and Fullness

    DEFF Research Database (Denmark)

    Bregnbæk, Susanne; Bunkenborg, Mikkel

    As critical voices question the quality, authenticity, and value of people, goods, and words in post-Mao China, accusations of emptiness render things open to new investments of meaning, substance, and value. Exploring the production of lack and desire through fine-grained ethnography, this volume shows a pervasive concern with states of lack and emptiness; the contributions suggest that this play of emptiness and fullness is crucial to ongoing constructions of quality, value, and subjectivity in China.

  18. Evolutionary optimization of PAW data-sets for accurate high pressure simulations

    Science.gov (United States)

    Sarkar, Kanchan; Topsakal, Mehmet; Holzwarth, N. A. W.; Wentzcovitch, Renata M.

    2017-10-01

    We examine the challenge of performing accurate electronic structure calculations at high pressures by comparing the results of all-electron full potential linearized augmented-plane-wave calculations, as implemented in the WIEN2k code, with those of the projector augmented wave (PAW) method, as implemented in the Quantum ESPRESSO or Abinit codes. In particular, we focus on developing an automated and consistent way of generating transferable PAW data-sets that can closely reproduce the all-electron equation of state defined from zero to arbitrarily high pressures. The technique we propose is an evolutionary search procedure that exploits the ATOMPAW code to generate atomic data-sets and the Quantum ESPRESSO software suite for total energy calculations. We demonstrate different aspects of its workability by optimizing PAW basis functions of some elements relatively abundant in planetary interiors. In addition, we introduce a new measure of atomic data-set goodness by considering their performance uniformity over an extended pressure range.

  19. Synthetic ALSPAC longitudinal datasets for the Big Data VR project [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Demetris Avraam

    2017-08-01

    Full Text Available Three synthetic datasets - of observation size 15,000, 155,000 and 1,555,000 participants, respectively - were created by simulating eleven cardiac and anthropometric variables from nine collection ages of the ALSPAC birth cohort study. The synthetic datasets retain similar data properties to the ALSPAC study data they are simulated from (covariance matrices, as well as the mean and variance values of the variables), without including the original data itself or disclosing participant information. In this instance, the three synthetic datasets have been utilised in an academia-industry collaboration to build a prototype virtual reality data analysis software, but they could have a broader use in method and software development projects where sensitive data cannot be freely shared.
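
    The simulation recipe described above (preserve means and covariance, discard the records themselves) is essentially a multivariate normal draw. A minimal sketch with toy parameters, which are not ALSPAC estimates:

        import numpy as np

        means = np.array([60.0, 120.0, 1.55])        # e.g., weight, SBP, height
        cov = np.array([[90.0, 30.0, 0.8],
                        [30.0, 150.0, 0.5],
                        [0.8, 0.5, 0.01]])           # positive-definite toy matrix

        rng = np.random.default_rng(3)
        synthetic = rng.multivariate_normal(means, cov, size=15000)
        # The synthetic sample reproduces the target moments without
        # containing any original participant record.
        print(synthetic.mean(axis=0))
        print(np.cov(synthetic, rowvar=False).round(1))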

  20. Dataset Preservation for the Long Term: Results of the DareLux Project

    Directory of Open Access Journals (Sweden)

    Eugène Dürr

    2008-08-01

    Full Text Available The purpose of the DareLux (Data Archiving River Environment Luxembourg) Project was the preservation of unique and irreplaceable datasets, for which we chose hydrology data that will be required for future climatic models. The results are: an operational archive built with XML containers, the OAI-PMH protocol and an architecture based upon web services. Major conclusions are: quality control on ingest is important; digital rights management demands attention; and cost aspects of ingest and retrieval cannot be underestimated. We propose a new paradigm for information retrieval for this type of dataset. We recommend research into visualisation tools for the search and retrieval of this type of dataset.

  1. Dataset from proteomic analysis of rat, mouse, and human liver microsomes and S9 fractions

    Directory of Open Access Journals (Sweden)

    Makan Golizeh

    2015-06-01

    Full Text Available Rat, mouse and human liver microsomes and S9 fractions were analyzed using an optimized method combining ion exchange fractionation of digested peptides and ultra-high performance liquid chromatography (UHPLC) coupled to high-resolution tandem mass spectrometry (HR-MS/MS). The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium (http://proteomecentral.proteomexchange.org) via the PRIDE partner repository (Vizcaíno et al., 2013) [1] with the dataset identifiers PXD000717, PXD000720, PXD000721, PXD000731, PXD000733 and PXD000734. Data related to the peptides (trypsin digests only) were also uploaded to Peptide Atlas (Farrah et al., 2013) [2] and are available with the dataset identifiers PASS00407, PASS00409, PASS00411, PASS00412, PASS00413 and PASS00414. The present dataset is associated with a research article published in EuPA Open Proteomics [3].

  2. The Problem with Big Data: Operating on Smaller Datasets to Bridge the Implementation Gap

    Directory of Open Access Journals (Sweden)

    Richard Mann

    2016-12-01

    Full Text Available Big datasets have the potential to revolutionize public health. However, there is a mismatch between the political and scientific optimism surrounding big data and the public’s perception of its benefit. We suggest a systematic and concerted emphasis on developing models derived from smaller datasets to illustrate to the public how big data can produce tangible benefits in the long-term. In order to highlight the immediate value of a small data approach, we produced a proof-of-concept model predicting hospital length of stay. The results demonstrate that existing small datasets can be used to create models that generate a reasonable prediction, facilitating healthcare delivery. We propose that greater attention (and funding) needs to be directed toward the utilization of existing information resources in parallel with current efforts to create and exploit ‘big data’.

  3. Dataset of milk whey proteins of three indigenous Greek sheep breeds

    Directory of Open Access Journals (Sweden)

    Athanasios K. Anagnostopoulos

    2016-09-01

    Full Text Available The importance and unique biological traits, as well as the growing financial value, of milk from small Greek ruminants is continuously attracting interest from both the scientific community and industry. In this regard the construction of a reference dataset of the milk of the Greek sheep breeds is of great interest. In order to obtain such a dataset we employed cutting-edge proteomics methodologies to investigate and characterize the proteome of milk from the three indigenous Greek sheep breeds Mpoutsko, Karagouniko and Chios. In total, more than 1300 protein groups were identified in milk whey from these breeds, reporting for the first time the most detailed proteome dataset of this precious biological material. The present results are further discussed in the research paper “Milk of Greek sheep and goat breeds; characterization by means of proteomics” (Anagnostopoulos et al., 2016) [1].

  5. Proteomic dataset of Paracentrotus lividus gonads of different sexes and at different maturation stages

    Directory of Open Access Journals (Sweden)

    Stefania Ghisaura

    2016-09-01

    Full Text Available We report the proteomic dataset of gonads from wild Paracentrotus lividus related to the research article entitled “Proteomic changes occurring along gonad maturation in the edible sea urchin Paracentrotus lividus” [1]. Gonads of three individuals per sex in the recovery, pre-mature, mature, and spent stages were analyzed using a shotgun proteomics approach based on filter-aided sample preparation followed by tandem mass spectrometry, protein identification carried out using Sequest-HT as the search engine within the Proteome Discoverer informatics platform, and label-free differential analysis. The dataset has been deposited in the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PRIDE: PXD004200.

  6. A human gut metaproteomic dataset from stool samples pretreated or not by differential centrifugation

    Directory of Open Access Journals (Sweden)

    Alessandro Tanca

    2015-09-01

    Full Text Available We present a human gut metaproteomic dataset deposited in the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD001573. Ten aliquots of a single stool sample collected from a healthy human volunteer were either pretreated by differential centrifugation (DC; N=5) or not centrifuged (NC; N=5). Protein extracts were then processed by filter-aided sample preparation, single-run liquid chromatography and high-resolution mass spectrometry, and peptide identification was carried out using Sequest-HT as the search engine within the Proteome Discoverer informatics platform. The dataset described here is also related to the research article entitled “Enrichment or depletion? The impact of stool pretreatment on metaproteomic characterization of the human gut microbiota” published in Proteomics (Tanca et al., 2015) [1].

  7. Evolutionary optimization of PAW data-sets for accurate high pressure simulations

    CERN Document Server

    Sarkar, Kanchan; Holzwarth, N A W; Wentzcovitch, Renata M

    2016-01-01

    We examine the challenge of performing accurate electronic structure calculations at high pressures by comparing the results of all-electron full potential linearized augmented-plane-wave calculations with those of the projector augmented wave (PAW) method. In particular, we focus on developing an automated and consistent way of generating transferable PAW data-sets that can closely reproduce the all-electron equation of state defined from zero to arbitrarily high pressures. The technique we propose is an evolutionary search procedure that exploits the ATOMPAW code to generate atomic data-sets and the Quantum ESPRESSO software suite for total energy calculations. We demonstrate different aspects of its workability by optimizing PAW basis functions of some elements relatively abundant in planetary interiors. In addition, we introduce a new measure of atomic data-set goodness by considering their performance uniformity over an enlarged pressure range.

  8. Advanced Neuropsychological Diagnostics Infrastructure (ANDI: A Normative Database Created from Control Datasets.

    Directory of Open Access Journals (Sweden)

    Nathalie R. de Vent

    2016-10-01

    Full Text Available In the Advanced Neuropsychological Diagnostics Infrastructure (ANDI), datasets of several research groups are combined into a single database, containing scores on neuropsychological tests from healthy participants. For most popular neuropsychological tests the quantity and range of these data surpasses that of traditional normative data, thereby enabling more accurate neuropsychological assessment. Because of the unique structure of the database, it facilitates normative comparison methods that were not feasible before, in particular those in which entire profiles of scores are evaluated. In this article, we describe the steps that were necessary to combine the separate datasets into a single database. These steps involve matching variables from multiple datasets, removing outlying values, determining the influence of demographic variables, and finding appropriate transformations to normality. Also, a brief description of the current contents of the ANDI database is given.
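
    Two of the steps listed above, removing outlying values and transforming to normality, can be sketched briefly with SciPy; the 3 SD cut-off and the simulated skewed scores are illustrative assumptions, and Box-Cox is only one possible transformation choice.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        scores = rng.lognormal(mean=3.0, sigma=0.4, size=500)  # skewed scores

        z = np.abs(stats.zscore(scores))
        trimmed = scores[z < 3]                   # drop outlying values
        transformed, lam = stats.boxcox(trimmed)  # lambda fitted by MLE
        print(f"Box-Cox lambda: {lam:.2f}, "
              f"skewness after: {stats.skew(transformed):.2f}")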

  9. Structural dataset for the PPARγ V290M mutant

    Directory of Open Access Journals (Sweden)

    Ana C. Puhl

    2016-06-01

    Full Text Available Loss-of-function mutation V290M in the ligand-binding domain of the peroxisome proliferator activated receptor γ (PPARγ) is associated with a ligand resistance syndrome (PLRS), characterized by partial lipodystrophy and severe insulin resistance. In this data article we discuss an X-ray diffraction dataset that yielded the structure of the PPARγ LBD V290M mutant refined at 2.3 Å resolution, which allowed building a 3D model of the receptor mutant with high confidence and revealed continuous, well-defined electron density for the partial agonist diclofenac bound to the hydrophobic pocket of PPARγ. These structural data provide significant insights into the molecular basis of PLRS caused by the V290M mutation and are correlated with the receptor's impaired rosiglitazone binding and increased affinity for corepressors. Furthermore, our structural evidence helps to explain clinical observations which point to a failure to restore receptor function by treatment with a full agonist of PPARγ, rosiglitazone.

  10. The CORA dataset: validation and diagnostics of in-situ ocean temperature and salinity measurements

    Directory of Open Access Journals (Sweden)

    C. Cabanes

    2013-01-01

    Full Text Available The French program Coriolis, as part of the French operational oceanographic system, produces the COriolis dataset for Re-Analysis (CORA) on a yearly basis. This dataset contains in-situ temperature and salinity profiles from different data types. The latest release, CORA3, covers the period 1990 to 2010. Several tests have been developed to ensure a homogeneous quality control of the dataset and to meet the requirements of the physical ocean reanalysis activities (assimilation and validation). Improved tests include some simple tests based on comparison with climatology and a model background check based on a global ocean reanalysis. Visual quality control is performed on all suspicious temperature and salinity profiles identified by the tests, and quality flags are modified in the dataset if necessary. In addition, improved diagnostic tools have been developed (including global ocean indicators) which give information on the quality of the CORA3 dataset and its potential applications. CORA3 is available on request through the MyOcean Service Desk (http://www.myocean.eu/).
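
    A toy version of a climatology comparison test of the kind described above might look as follows; the tolerance of k standard deviations and the example profile are hypothetical, not CORA's actual screening thresholds.

        import numpy as np

        def climatology_flag(values, clim_mean, clim_std, k=4.0):
            """Return True where a profile value departs from the monthly
            climatology by more than k standard deviations."""
            return np.abs(values - clim_mean) > k * clim_std

        profile = np.array([14.2, 13.9, 25.0, 11.5])   # deg C at four depths
        clim_mean = np.array([14.0, 13.5, 12.8, 11.2])
        clim_std = np.array([0.8, 0.7, 0.6, 0.5])
        print(climatology_flag(profile, clim_mean, clim_std))  # third flagged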

  11. Anonymising the Sparse Dataset: A New Privacy Preservation Approach while Predicting Diseases

    Directory of Open Access Journals (Sweden)

    V. Shyamala Susan

    2016-09-01

    Full Text Available Data mining techniques analyze medical datasets with the intention of enhancing patient health and privacy. Most existing techniques are suited to low-dimensional medical datasets. The proposed methodology designs a model for the representation of sparse, high-dimensional medical datasets, with the aim of protecting patient privacy from an adversary and additionally predicting the disease's threat degree. In a sparse dataset, many non-zero values are randomly spread over the entire data space. Hence, the challenge is to cluster correlated patient records to predict the risk degree of the disease before it manifests in patients, while preserving privacy. The first phase converts the sparse dataset into a band matrix using a genetic algorithm combined with Cuckoo Search (GCS). This groups the correlated patient records together and arranges them close to the diagonal. The next phase dissociates the patient's disease, which is a sensitive attribute (SA), from the parameters that normally determine the disease, the quasi-identifiers (QI). Finally, a density-based clustering technique is used over the underlying data to create anonymized groups that maintain privacy and to predict the risk level of disease. Empirical assessments on actual health care data corresponding to the V.A. Medical Centre heart disease dataset reveal the efficiency of this model pertaining to information loss, utility and privacy.

  12. The health care and life sciences community profile for dataset descriptions

    Directory of Open Access Journals (Sweden)

    Michel Dumontier

    2016-08-01

    Full Text Available Access to consistent, high-quality metadata is critical to finding, understanding, and reusing scientific data. However, while there are many relevant vocabularies for the annotation of a dataset, none sufficiently captures all the necessary metadata. This prevents uniform indexing and querying of dataset repositories. Towards providing a practical guide for producing a high quality description of biomedical datasets, the W3C Semantic Web for Health Care and the Life Sciences Interest Group (HCLSIG) identified Resource Description Framework (RDF) vocabularies that could be used to specify common metadata elements and their value sets. The resulting guideline covers elements of description, identification, attribution, versioning, provenance, and content summarization. This guideline reuses existing vocabularies, and is intended to meet key functional requirements including indexing, discovery, exchange, query, and retrieval of datasets, thereby enabling the publication of FAIR data. The resulting metadata profile is generic and could be used by other domains with an interest in providing machine readable descriptions of versioned datasets.

  13. Feature selection using genetic algorithm for breast cancer diagnosis: experiment on three different datasets

    Directory of Open Access Journals (Sweden)

    Shokoufeh Aalaei

    2016-05-01

    Full Text Available Objective(s): This study addresses feature selection for breast cancer diagnosis. The process uses a wrapper approach with GA-based feature selection and a PS-classifier. The results of the experiment show that the proposed model is comparable to other models on the Wisconsin breast cancer datasets. Materials and Methods: To evaluate the effectiveness of the proposed feature selection method, we employed three different classifiers, an artificial neural network (ANN), a PS-classifier, and a genetic-algorithm-based classifier (GA-classifier), on three Wisconsin breast cancer datasets: the Wisconsin breast cancer dataset (WBC), Wisconsin diagnosis breast cancer (WDBC), and Wisconsin prognosis breast cancer (WPBC). Results: For the WBC dataset, it is observed that feature selection improved the accuracy of all classifiers except ANN, and the best accuracy with feature selection was achieved by the PS-classifier. For WDBC and WPBC, the results show that feature selection improved the accuracy of all three classifiers, and the best accuracy with feature selection was achieved by ANN. Specificity and sensitivity also improved after feature selection. Conclusion: The results show that feature selection can improve the accuracy, specificity and sensitivity of classifiers. The result of this study is comparable with other studies on the Wisconsin breast cancer datasets.

  14. A treatment-planning comparison of three beam arrangement strategies for stereotactic body radiation therapy for centrally located lung tumors using volumetric-modulated arc therapy

    OpenAIRE

    Ishii, Kentaro; Okada, Wataru; Ogino, Ryo; Kubo, Kazuki; Kishimoto, Shun; Nakahara, Ryuta; Kawamorita, Ryu; Ishii, Yoshie; Tada, Takuhito; Nakajima, Toshifumi

    2016-01-01

    The purpose of this study was to determine appropriate beam arrangement for volumetric-modulated arc therapy (VMAT)-based stereotactic body radiation therapy (SBRT) in the treatment of patients with centrally located lung tumors. Fifteen consecutive patients with centrally located lung tumors treated at our institution were enrolled. For each patient, three VMAT plans were generated using two coplanar partial arcs (CP VMAT), two non-coplanar partial arcs (NCP VMAT), and one coplanar full arc ...

  15. BDML Datasets: 5 [SSBD Archive]

    Lifescience Database Archive (English)

    Full Text Available ZF_PJK zebrafish nuclear positions Measurement Keller, P.J., Schmidt, A.D., Wittbro...dt, J. and Stelzer, E.H.K. Philipp J. Keller, European Molecular Biology Laboratory, Cell Biology and Biophy...sics Unit, Stelzer Laboratory See details in Keller et al. (2008) Science 322, 1065-1069. CC BY-NC-SA 1 x 1

  16. BDML Datasets: 7 [SSBD Archive]

    Lifescience Database Archive (English)

    Full Text Available , T., Kobayashi, T.J. Md. Khayrul Bashar, The University of Tokyo, Institute of Industrial Science, Laboratory for Quantitative Biolo...gy See details in Bashar et al. (2012) PLoS ONE 7, e35550. CC BY-NC-SA 0.385 x 0.38

  17. BDML Datasets: 8 [SSBD Archive]

    Lifescience Database Archive (English)

    Full Text Available , Y.M., Stirbl, R.C., Bruck, J., and Sternberg, P.W. Paul W. Sternberg, California Institute of Technology, HHMI and Division of Biol...ogy, Sternberg Laboratory See details in Cronin et al. (2005) BMC Genetics 6, 5. CC

  18. Is there a role for the use of volumetric cone beam computed tomography in periodontics?

    Science.gov (United States)

    du Bois, A H; Kardachi, B; Bartold, P M

    2012-03-01

    Volumetric cone beam computed tomography offers a number of significant advantages over conventional intraoral and extraoral panoramic radiography, as well as computed tomography. To date, periodontal diagnosis has relied heavily on the assessment of both intraoral radiographs and extraoral panoramic radiographs. With emerging technology in radiology there has been considerable interest in the role that volumetric cone beam computed tomography might play in periodontal diagnostics. This narrative review examines the current evidence and considers whether there is a role for volumetric cone beam computed tomography in periodontics.

  19. Global Drought Assessment using a Multi-Model Dataset

    NARCIS (Netherlands)

    Lanen, van H.A.J.; Huijgevoort, van M.H.J.; Corzo Perez, G.; Wanders, N.; Hazenberg, P.; Loon, van A.F.; Estifanos, S.; Melsen, L.A.

    2011-01-01

    Large-scale models are often applied to study past drought (forced with global reanalysis datasets) and to assess future drought (using downscaled, bias-corrected forcing from climate models). The EU project WATer and global CHange (WATCH) provides a 0.5° global dataset of meteorological forcing

  20. Really big data: Processing and analysis of large datasets

    Science.gov (United States)

    Modern animal breeding datasets are large and getting larger, due in part to the recent availability of DNA data for many animals. Computational methods for efficiently storing and analyzing those data are under development. The amount of storage space required for such datasets is increasing rapidl...

  1. Primary Datasets for Case Studies of River-Water Quality

    Science.gov (United States)

    Goulder, Raymond

    2008-01-01

    Level 6 (final-year BSc) students undertook case studies on between-site and temporal variation in river-water quality. They used professionally-collected datasets supplied by the Environment Agency. The exercise gave students the experience of working with large, real-world datasets and led to their understanding of how the quality of river water is…

  2. An Analysis of the GTZAN Music Genre Dataset

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2012-01-01

    Most research in automatic music genre recognition has used the dataset assembled by Tzanetakis et al. in 2001. The composition and integrity of this dataset, however, have never been formally analyzed. For the first time, we provide an analysis of its composition, and create a machine...

  5. Querying Patterns in High-Dimensional Heterogenous Datasets

    Science.gov (United States)

    Singh, Vishwakarma

    2012-01-01

    The recent technological advancements have led to the availability of a plethora of heterogeneous datasets, e.g., images tagged with geo-location and descriptive keywords. An object in these datasets is described by a set of high-dimensional feature vectors. For example, a keyword-tagged image is represented by a color-histogram and a…

  6. Parkinson's disease: diagnostic utility of volumetric imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Wei-Che; Chen, Meng-Hsiang [Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Department of Diagnostic Radiology, Kaohsiung (China); Chou, Kun-Hsien [National Yang-Ming University, Brain Research Center, Taipei (China); Lee, Pei-Lin [National Yang-Ming University, Department of Biomedical Imaging and Radiological Sciences, Taipei (China); Tsai, Nai-Wen; Lu, Cheng-Hsien [Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Department of Neurology, Kaohsiung (China); Chen, Hsiu-Ling [Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Department of Diagnostic Radiology, Kaohsiung (China); National Yang-Ming University, Department of Biomedical Imaging and Radiological Sciences, Taipei (China); Hsu, Ai-Ling [National Taiwan University, Institute of Biomedical Electronics and Bioinformatics, Taipei (China); Huang, Yung-Cheng [Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Department of Nuclear Medicine, Kaohsiung (China); Lin, Ching-Po [National Yang-Ming University, Brain Research Center, Taipei (China); National Yang-Ming University, Department of Biomedical Imaging and Radiological Sciences, Taipei (China)

    2017-04-15

    This paper aims to examine the effectiveness of structural imaging as an aid in the diagnosis of Parkinson's disease (PD). High-resolution T1-weighted magnetic resonance imaging was performed in 72 patients with idiopathic PD (mean age, 61.08 years) and 73 healthy subjects (mean age, 58.96 years). The whole brain was parcellated into 95 regions of interest using composite anatomical atlases, and region volumes were calculated. Three diagnostic classifiers were constructed using binary multiple logistic regression modeling: the (i) basal ganglion prior classifier, (ii) data-driven classifier, and (iii) basal ganglion prior/data-driven hybrid classifier. Leave-one-out cross validation was used to evaluate the predictive accuracy of imaging features without bias. Pearson's correlation analysis was further performed to correlate the outcome measurement of the best PD classifier with disease severity. Smaller volume in susceptible regions is diagnostic for Parkinson's disease. Compared with the other two classifiers, the basal ganglion prior/data-driven hybrid classifier had the highest diagnostic reliability, with a sensitivity of 74%, specificity of 75%, and accuracy of 74%. Furthermore, the outcome measurement of this classifier was associated with disease severity. Brain structural volumetric analysis with multiple logistic regression modeling can be a complementary tool for diagnosing PD. (orig.)
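
    The evaluation scheme described above, binary logistic regression with leave-one-out cross validation, maps directly onto scikit-learn. A minimal sketch on synthetic regional volumes (random features, so the accuracy will hover near chance):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        rng = np.random.default_rng(5)
        n, regions = 145, 10
        X = rng.normal(size=(n, regions))   # regional volumes (z-scored)
        y = rng.binomial(1, 0.5, n)         # 1 = PD, 0 = control

        pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                                 cv=LeaveOneOut())
        print("LOOCV accuracy:", (pred == y).mean())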

  7. Volumetric Analysis of Regional Cerebral Development in Preterm Children

    Science.gov (United States)

    Kesler, Shelli R.; Ment, Laura R.; Vohr, Betty; Pajot, Sarah K.; Schneider, Karen C.; Katz, Karol H.; Ebbitt, Timothy B.; Duncan, Charles C.; Makuch, Robert W.; Reiss, Allan L.

    2011-01-01

    Preterm birth is frequently associated with both neuropathologic and cognitive sequelae. This study examined cortical lobe, subcortical, and lateral ventricle development in association with perinatal variables and cognitive outcome. High-resolution volumetric magnetic resonance imaging scans were acquired and quantified using advanced image processing techniques. Seventy-three preterm and 33 term control children ages 7.3-11.4 years were included in the study. Results indicated disproportionately enlarged parietal and frontal gray matter, occipital horn, and ventricular body, as well as reduced temporal and subcortical gray volumes in preterm children compared with control subjects. Birth weight was negatively correlated with parietal and frontal gray, as well as occipital horn volumes. Intraventricular hemorrhage was associated with reduced subcortical gray matter. Ventricular cerebrospinal fluid was negatively correlated with subcortical gray matter volumes but not with white matter volumes. Maternal education was the strongest predictor of cognitive function in the preterm group. Preterm birth appears to be associated with disorganized cortical development, possibly involving disrupted synaptic pruning and neural migration. Lower birth weight and the presence of intraventricular hemorrhage may increase the risk for neuroanatomic abnormality. PMID:15519112

  8. Volumetric microscale particle tracking velocimetry (PTV) in porous media

    Science.gov (United States)

    Guo, Tianqi; Aramideh, Soroush; Ardekani, Arezoo M.; Vlachos, Pavlos P.

    2016-11-01

    The steady-state flow through refractive-index-matched glass bead microchannels is measured using microscopic particle tracking velocimetry (μPTV). A novel technique is developed to volumetrically reconstruct particles from oversampled two-dimensional microscopic images of fluorescent particles. Fast oversampling of the quasi-steady-state flow field in the lateral direction is realized by a nano-positioning piezo stage synchronized with a fast CMOS camera. Experiments at different Reynolds numbers are carried out for flows through a series of both monodispersed and bidispersed glass bead microchannels with various porosities. The obtained velocity fields at pore-scale (on the order of 10 μm) are compared with direct numerical simulations (DNS) conducted in the exact same geometries reconstructed from micro-CT scans of the glass bead microchannels. The developed experimental method would serve as a new approach for exploring the flow physics at pore-scale in porous media, and also provide benchmark measurements for validation of numerical simulations.

  9. Buoyancy Driven Mixing with Continuous Volumetric Energy Deposition

    Science.gov (United States)

    Wachtor, Adam J.; Jebrail, Farzaneh F.; Dennisen, Nicholas A.; Andrews, Malcolm J.; Gore, Robert A.

    2014-11-01

    An experiment involving a miscible fluid pair is presented, in which the system transitioned from a Rayleigh-Taylor (RT) stable to an RT unstable configuration through continuous volumetric energy deposition (VED) by microwave radiation. Initially a light, weakly microwave-absorbing fluid rested above a heavier, more strongly absorbing fluid. The alignment of the density gradient with gravity made the system stable, and the Atwood number (At) for the initial setup was approximately -0.12. Exposing the fluid pair to microwave radiation preferentially heated the bottom fluid and caused its density to drop due to thermal expansion. As heating of the bottom fluid continued, the At varied from negative to positive, and after the system passed through the neutral stability point, At = 0, buoyancy driven mixing ensued. Continued VED caused the At to keep increasing and further drive the mixing process. Successful VED mixing required careful design of the fluid pair used in the experiment. Therefore, fluid selection is discussed, along with challenges and limitations of data collection using the experimental microwave facility. Experimental and model predictions of the neutral stability point, and of the onset of buoyancy driven mixing, are compared, and differences with classical, constant-At RT driven turbulence are discussed.
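
    For reference, the Atwood number tracked in this experiment has the standard definition below, written here with top/bottom subscripts to match the configuration described above:

        \[
          \mathrm{At} \;=\; \frac{\rho_{\mathrm{top}} - \rho_{\mathrm{bottom}}}{\rho_{\mathrm{top}} + \rho_{\mathrm{bottom}}}
        \]

    With the lighter fluid on top, At < 0 (the reported -0.12) and the stratification is RT stable; volumetric heating lowers the bottom fluid's density until At crosses zero and becomes positive, at which point buoyancy-driven mixing begins.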

  10. FELIX 3D display: an interactive tool for volumetric imaging

    Science.gov (United States)

    Langhans, Knut; Bahr, Detlef; Bezecny, Daniel; Homann, Dennis; Oltmann, Klaas; Oltmann, Krischan; Guill, Christian; Rieper, Elisabeth; Ardey, Goetz

    2002-05-01

    The FELIX 3D display belongs to the class of volumetric displays using the swept volume technique. It is designed to display images created by standard CAD applications, which can be easily imported and interactively transformed in real time by the FELIX control software. The images are drawn on a spinning screen by acousto-optic, galvanometric or polygon mirror deflection units with integrated lasers and a color mixer. The modular design of the display enables the user to operate several equal or different projection units in parallel and to use appropriate screens for the specific purpose. The FELIX 3D display is a compact, light, extensible and easy-to-transport system. It mainly consists of inexpensive standard, off-the-shelf components for easy implementation. This setup makes it a powerful and flexible tool that can keep pace with today's rapid technological progress. Potential applications include imaging in the fields of entertainment, air traffic control, medical imaging, computer aided design as well as scientific data visualization.

  11. Toward a Philosophy and Theory of Volumetric Nonthermal Processing.

    Science.gov (United States)

    Sastry, Sudhir K

    2016-06-01

    Nonthermal processes for food preservation have been under intensive investigation for about the past quarter century, with varying degrees of success. We focus this discussion on two volumetrically acting nonthermal processes, high pressure processing (HPP) and pulsed electric fields (PEF), with emphasis on scientific understanding of each, and the research questions that need to be addressed for each to be more successful in the future. We discuss the character or "philosophy" of food preservation, with a question about the nature of the kill step(s), and the sensing challenges that need to be addressed. For HPP, key questions and needs center around whether its nonthermal effectiveness can be increased by increased pressures or pulsing, the theoretical treatment of rates of reaction as influenced by pressure, the assumption of uniform pressure distribution, and the need for (and difficulties involved in) in-situ measurement. For PEF, the questions include the rationale for pulsing, difficulties involved in continuous flow treatment chambers, the difference between electroporation theory and experimental observations, and the difficulties involved in in-situ measurement and monitoring of electric field distribution.

  12. Optical artefact characterization and correction in volumetric scintillation dosimetry

    Science.gov (United States)

    Robertson, Daniel; Hui, Cheukkai; Archambault, Louis; Mohan, Radhe; Beddar, Sam

    2014-01-01

    The goals of this study were (1) to characterize the optical artefacts affecting measurement accuracy in a volumetric liquid scintillator detector, and (2) to develop methods to correct for these artefacts. The optical artefacts addressed were photon scattering, refraction, camera perspective, vignetting, lens distortion, the lens point spread function, stray radiation, and noise in the camera. These artefacts were evaluated by theoretical and experimental means, and specific correction strategies were developed for each artefact. The effectiveness of the correction methods was evaluated by comparing raw and corrected images of the scintillation light from proton pencil beams against validated Monte Carlo calculations. Blurring due to the lens and refraction at the scintillator tank-air interface were found to have the largest effect on the measured light distribution, and lens aberrations and vignetting were important primarily at the image edges. Photon scatter in the scintillator was not found to be a significant source of artefacts. The correction methods effectively mitigated the artefacts, increasing the average gamma analysis pass rate from 66% to 98% for gamma criteria of 2% dose difference and 2 mm distance to agreement. We conclude that optical artefacts cause clinically meaningful errors in the measured light distribution, and we have demonstrated effective strategies for correcting these optical artefacts.
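
    The gamma analysis quoted (2% dose difference, 2 mm distance to agreement) is a standard dose-distribution agreement test; a minimal brute-force sketch for 1D dose profiles, not the authors' analysis code and with grid spacing and profiles invented for illustration, is:

        # Minimal brute-force gamma analysis for 1D dose profiles.
        # `dose_ref`/`dose_eval` share a grid of spacing `dx` (mm); criteria are
        # 2% of the reference maximum and 2 mm distance-to-agreement.
        import numpy as np

        def gamma_pass_rate(dose_ref, dose_eval, dx, dose_crit=0.02, dta_mm=2.0):
            x = np.arange(len(dose_ref)) * dx
            dd = dose_crit * dose_ref.max()          # global dose criterion
            passed = []
            for xi, di in zip(x, dose_ref):
                # gamma at one reference point: min combined dose/distance metric
                gamma2 = ((x - xi) / dta_mm) ** 2 + ((dose_eval - di) / dd) ** 2
                passed.append(np.sqrt(gamma2.min()) <= 1.0)
            return np.mean(passed)

        ref = np.exp(-np.linspace(-3, 3, 200) ** 2)  # synthetic beam profile
        meas = np.roll(ref, 2) * 1.01                # shifted, rescaled copy
        print(f"pass rate: {gamma_pass_rate(ref, meas, dx=0.5):.1%}")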

  13. An MRI-based semiautomated volumetric quantification of hip osteonecrosis

    Energy Technology Data Exchange (ETDEWEB)

    Malizos, K.N.; Siafakas, M.S.; Karachalios, T.S. [Dept. of Orthopaedics, Univ. of Thessalia, Larissa (Greece); Fotiadis, D.I. [Dept. of Computer Science, Univ. of Ioannina (Greece); Soucacos, P.N. [Dept. of Orthopaedic Surgery, Univ. of Ioannina (Greece)

    2001-12-01

    Objective: To objectively and precisely define the spatial distribution of osteonecrosis and to investigate the influence of various factors including etiology. Design: A volumetric method is presented to describe the size and spatial distribution of necrotic lesions of the femoral head, using MRI scans. The technique is based on the definition of an equivalent sphere model for the femoral head. Patients: The gender, age, number of hips involved, disease duration, pain intensity, limping disability and etiology were correlated with the distribution of the pathologic bone. Seventy-nine patients with 122 hips affected by osteonecrosis were evaluated. Results: The lesion size ranged from 7% to 73% of the sphere equivalent. The lateral octants presented considerable variability, ranging from wide lateral lesions extending beyond the lip of the acetabulum, to narrow medial lesions, leaving a lateral supporting pillar of intact bone. Patients with sickle cell disease and steroid administration presented the largest lesions. The extent of the posterior superior medial octant involvement correlated with the symptom intensity, a younger age and male gender. Conclusion: The methodology presented here has proven a reliable and straightforward imaging tool for precise assessment of necrotic lesions. It also enables us to target accurately the drilling and grafting procedures. (orig.)
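
    A schematic reading of the equivalent-sphere octant measurement: the necrotic fraction of each octant is the ratio of lesion voxels to sphere voxels on that side of the three center planes. The sketch below assumes isotropic voxels and a sphere already fitted to the femoral head; the array names and synthetic data are invented:

        # Schematic octant-based lesion quantification on an equivalent sphere.
        # `lesion` is a boolean (z, y, x) mask; `center`/`radius` come from a
        # sphere fitted to the femoral head (fit not shown).
        import numpy as np

        def octant_fractions(lesion, center, radius):
            zz, yy, xx = np.indices(lesion.shape)
            dz, dy, dx = zz - center[0], yy - center[1], xx - center[2]
            in_sphere = dz**2 + dy**2 + dx**2 <= radius**2
            fractions = {}
            for sz in (-1, 1):
                for sy in (-1, 1):
                    for sx in (-1, 1):
                        octant = in_sphere & (sz*dz >= 0) & (sy*dy >= 0) & (sx*dx >= 0)
                        fractions[(sz, sy, sx)] = lesion[octant].mean()
            return fractions  # necrotic voxel fraction per octant

        lesion = np.zeros((64, 64, 64), bool)
        lesion[32:40, 32:44, 20:32] = True       # synthetic lesion
        print(octant_fractions(lesion, center=(32, 32, 32), radius=20))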

  14. Connectivity network measures predict volumetric atrophy in mild cognitive impairment.

    Science.gov (United States)

    Nir, Talia M; Jahanshad, Neda; Toga, Arthur W; Bernstein, Matt A; Jack, Clifford R; Weiner, Michael W; Thompson, Paul M

    2015-01-01

    Alzheimer's disease (AD) is characterized by cortical atrophy and disrupted anatomic connectivity, and leads to abnormal interactions between neural systems. Diffusion-weighted imaging (DWI) and graph theory can be used to evaluate major brain networks and detect signs of a breakdown in network connectivity. In a longitudinal study using both DWI and standard magnetic resonance imaging (MRI), we assessed baseline white-matter connectivity patterns in 30 subjects with mild cognitive impairment (MCI, mean age 71.8 ± 7.5 years, 18 males and 12 females) from the Alzheimer's Disease Neuroimaging Initiative. Using both standard MRI-based cortical parcellations and whole-brain tractography, we computed baseline connectivity maps from which we calculated global "small-world" architecture measures, including mean clustering coefficient and characteristic path length. We evaluated whether these baseline network measures predicted future volumetric brain atrophy in MCI subjects, who are at risk for developing AD, as determined by 3-dimensional Jacobian "expansion factor maps" between baseline and 6-month follow-up anatomic scans. This study suggests that DWI-based network measures may be a novel predictor of AD progression.
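
    Both global measures named here (mean clustering coefficient and characteristic path length) are standard graph statistics; a minimal sketch with networkx, using a random symmetric matrix and an illustrative threshold in place of a real tractography-derived connectivity matrix, is:

        # Global small-world measures from a structural connectivity matrix.
        # A random symmetric matrix stands in for an (N, N) fiber-count matrix
        # obtained from whole-brain tractography between cortical parcels.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(1)
        W = rng.random((30, 30))
        W = (W + W.T) / 2                        # symmetrize the stand-in
        A = (W > 0.7).astype(int)                # illustrative threshold
        np.fill_diagonal(A, 0)

        G = nx.from_numpy_array(A)
        clustering = nx.average_clustering(G)    # mean clustering coefficient
        # characteristic path length is defined on the largest connected component
        giant = G.subgraph(max(nx.connected_components(G), key=len))
        path_length = nx.average_shortest_path_length(giant)
        print(f"C = {clustering:.3f}, L = {path_length:.3f}")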

  15. Femoral head osteonecrosis: Volumetric MRI assessment and outcome

    Energy Technology Data Exchange (ETDEWEB)

    Bassounas, Athanasios E. [Department of Medical Physics, School of Medicine, University of Ioannina, GR 451 10 Ioannina (Greece); Karantanas, Apostolos H. [Department of Radiology, School of Medicine, University of Crete, Heraklion, GR 711 10 (Greece); Fotiadis, Dimitrios I. [Unit of Medical Technology and Intelligent Information Systems, Department of Computer Science, University of Ioannina and Biomedical Research Institute-FORTH, GR 451 10 Ioannina (Greece); Malizos, Konstantinos N. [Orthopaedic Department, Medical School, University of Thessalia, GR 412 22 Larissa (Greece)]. E-mail: kmalizos@otenet.gr

    2007-07-15

    Effective treatment of femoral head osteonecrosis (FHON) requires early diagnosis and accurate assessment of the disease severity. The ability to predict the risk of collapse in the early stages is important for selecting a joint salvage procedure. The aim of the present study was to evaluate the outcome in patients treated with vascularized fibular grafts in relation to preoperative MR imaging volumetry. We studied 58 patients (87 hips) with FHON. A semi-automated octant-based lesion measurement method, previously described, was performed on the T1-w MR images. The mean time of postoperative follow-up was 7.8 years. Sixty-three hips were successful and 24 failed and were converted to total hip arthroplasty within a period of 2-4 years after the initial operation. The failure rate was higher for hips of male patients than for those of female patients. The mean lesion size was 28% of the sphere equivalent of the femoral head, 24 {+-} 12% for the successful hips and 37 {+-} 9% for the failed ones (p < 0.001). The most affected octants were antero-supero-medial (58 {+-} 26%) and postero-supero-medial (54 {+-} 31%). All octants but the postero-infero-medial and postero-infero-lateral showed statistically significant differences in lesion size between patients with successful and failed hips. In conclusion, the volumetric analysis of preoperative MRI provides useful information with regard to a successful outcome in patients treated with vascularized fibular grafts.

  16. Three-dimensional volumetric quantification of fat loss following cryolipolysis.

    Science.gov (United States)

    Garibyan, Lilit; Sipprell, William H; Jalian, H Ray; Sakamoto, Fernanda H; Avram, Mathew; Anderson, R Rox

    2014-02-01

    Cryolipolysis is a noninvasive and well-tolerated treatment for reduction of localized subcutaneous fat. Although several studies demonstrate the safety and efficacy of this procedure, volumetric fat reduction from this treatment has not been quantified. This prospective study investigated the change in fat volume after cryolipolysis treatment using three-dimensional (3D) photography. A prospective study of subjects treated with cryolipolysis on the flank (love handle) was performed at Massachusetts General Hospital. Volume measurements were performed with a Canfield Scientific Vectra three-dimensional camera and software to evaluate the amount of post-procedure volume change. Clinical outcomes were assessed with caliper measurements, subject surveys, and blinded physician assessment of photographs. Eleven subjects were enrolled in this study. Each subject underwent a single cycle of cryolipolysis to one flank; the untreated flank served as an internal control. The follow-up time after treatment was 2 months. The mean absolute fat volume loss calculated using 3D photography from baseline to the 2-month follow-up visit was 56.2 ± 25.6 cc at the treatment site and 16.6 ± 17.6 cc at the control site, a statistically significant difference. These results indicate that cryolipolysis is a fat removal methodology that on average leads to 39.6 cc of fat loss in the treated flank at 2 months after a single treatment cycle. © 2013 Wiley Periodicals, Inc.

  17. Cortical thickness and brain volumetric analysis in body dysmorphic disorder.

    Science.gov (United States)

    Madsen, Sarah K; Zai, Alex; Pirnia, Tara; Arienzo, Donatello; Zhan, Liang; Moody, Teena D; Thompson, Paul M; Feusner, Jamie D

    2015-04-30

    Individuals with body dysmorphic disorder (BDD) suffer from preoccupations with perceived defects in physical appearance, causing severe distress and disability. Although BDD affects 1-2% of the population, the neurobiology is not understood. Discrepant results in previous volumetric studies may be due to small sample sizes, and no study has investigated cortical thickness in BDD. The current study is the largest neuroimaging analysis of BDD. Participants included 49 medication-free, right-handed individuals with DSM-IV BDD and 44 healthy controls matched by age, sex, and education. Using high-resolution T1-weighted magnetic resonance imaging, we computed vertex-wise gray matter (GM) thickness on the cortical surface and GM volume using voxel-based morphometry. We also computed volumes in cortical and subcortical regions of interest. In addition to group comparisons, we investigated associations with symptom severity, insight, and anxiety within the BDD group. In BDD, greater anxiety was significantly associated with thinner GM in the left superior temporal cortex and greater GM volume in the right caudate nucleus. There were no significant differences in cortical thickness, GM volume, or volumes in regions of interest between BDD and control subjects. Subtle associations with clinical symptoms may characterize brain morphometric patterns in BDD, rather than large group differences in brain structure. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Region-of-interest volumetric visual hull refinement

    KAUST Repository

    Knoblauch, Daniel

    2010-01-01

    This paper introduces a region-of-interest visual hull refinement technique, based on flexible voxel grids for volumetric visual hull reconstructions. Region-of-interest refinement is based on a multipass process, beginning with a focused visual hull reconstruction, resulting in a first 3D approximation of the target, followed by a region-of-interest estimation, tasked with identifying features of interest, which in turn are used to locally refine the voxel grid and extract a higher-resolution surface representation for those regions. This approach is illustrated for the reconstruction of avatars for use in tele-immersion environments, where head and hand regions are of higher interest. To allow reproducibility and direct comparison, a publicly available data set for human visual hull reconstruction is used. This paper shows that region-of-interest reconstruction of the target is faster and visually comparable to higher resolution focused visual hull reconstructions. This approach reduces the amount of data generated through the reconstruction, allowing faster post processing, such as rendering or networking of the surface voxels. Reconstruction speeds support smooth interactions between the avatar and the virtual environment, while the improved resolution of its facial region and hands creates a higher degree of immersion and potentially impacts the perception of body language, facial expressions and eye-to-eye contact. Copyright © 2010 by the Association for Computing Machinery, Inc.

  19. A volumetric flow sensor for automotive injection systems

    Science.gov (United States)

    Schmid, U.; Krötz, G.; Schmitt-Landsiedel, D.

    2008-04-01

    For further optimization of the automotive power train of diesel engines, advanced combustion processes require a highly flexible injection system, provided e.g. by the common rail (CR) injection technique. In the past, the feasibility to implement injection nozzle volumetric flow sensors based on the thermo-resistive measurement principle has been demonstrated up to injection pressures of 135 MPa (1350 bar). To evaluate the transient behaviour of the system-integrated flow sensors as well as an injection amount indicator used as a reference method, hydraulic simulations on the system level are performed for a CR injection system. Experimentally determined injection timings were found to be in good agreement with calculated values, especially for the novel sensing element which is directly implemented into the hydraulic system. For the first time pressure oscillations occurring after termination of the injection pulse, predicted theoretically, could be verified directly in the nozzle. In addition, the injected amount of fuel is monitored with the highest resolution ever reported in the literature.

  20. Plate Full of Color

    Centers for Disease Control (CDC) Podcasts

    2008-08-04

    The Eagle Books are a series of four books that are brought to life by wise animal characters - Mr. Eagle, Miss Rabbit, and Coyote - who engage Rain That Dances and his young friends in the joy of physical activity, eating healthy foods, and learning from their elders about health and diabetes prevention. Plate Full of Color teaches the value of eating a variety of colorful and healthy foods.  Created: 8/4/2008 by National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP).   Date Released: 8/5/2008.

  1. Squish: Near-Optimal Compression for Archival of Relational Datasets

    Science.gov (United States)

    Gao, Yihan; Parameswaran, Aditya

    2017-01-01

    Relational datasets are being generated at an alarmingly rapid rate across organizations and industries. Compressing these datasets could significantly reduce storage and archival costs. Traditional compression algorithms, e.g., gzip, are suboptimal for compressing relational datasets since they ignore the table structure and relationships between attributes. We study compression algorithms that leverage the relational structure to compress datasets to a much greater extent. We develop Squish, a system that uses a combination of Bayesian Networks and Arithmetic Coding to capture multiple kinds of dependencies among attributes and achieve near-entropy compression rate. Squish also supports user-defined attributes: users can instantiate new data types by simply implementing five functions for a new class interface. We prove the asymptotic optimality of our compression algorithm and conduct experiments to show the effectiveness of our system: Squish achieves a reduction of over 50% in storage size relative to systems developed in prior work on a variety of real datasets.
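
    The gain from modelling attribute dependencies, the core idea behind Squish, can be illustrated with a back-of-the-envelope entropy calculation: coding a column conditioned on a correlated parent column needs fewer bits per row than coding it independently. The toy sketch below is not the Squish system; the column names and data are invented:

        # Toy illustration of why modelling attribute dependencies shrinks codes:
        # H(city | country) is far below H(city) when the columns are correlated.
        import math
        from collections import Counter

        def entropy(column):
            n = len(column)
            return -sum(c / n * math.log2(c / n) for c in Counter(column).values())

        def conditional_entropy(column, parent):
            n = len(column)
            h = 0.0
            for p in set(parent):
                sub = [v for v, q in zip(column, parent) if q == p]
                h += len(sub) / n * entropy(sub)
            return h

        rows = [("FR", "Paris"), ("FR", "Lyon"), ("DE", "Berlin"),
                ("DE", "Munich"), ("FR", "Paris"), ("DE", "Berlin")] * 100
        country, city = zip(*rows)
        print(f"H(city)         = {entropy(city):.3f} bits/row")
        print(f"H(city|country) = {conditional_entropy(city, country):.3f} bits/row")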

  2. New model for datasets citation and extraction reproducibility in VAMDC

    CERN Document Server

    Zwölf, Carlo Maria; Dubernet, Marie-Lise

    2016-01-01

    In this paper we present a new paradigm for the identification of datasets extracted from the Virtual Atomic and Molecular Data Centre (VAMDC) e-science infrastructure. Such identification includes information on the origin and version of the datasets, references associated to individual data in the datasets, as well as timestamps linked to the extraction procedure. This paradigm is described through the modifications of the language used to exchange data within the VAMDC and through the services that will implement those modifications. This new paradigm should enforce traceability of datasets, favour reproducibility of datasets extraction, and facilitate the systematic citation of the authors having originally measured and/or calculated the extracted atomic and molecular data.

  3. New model for datasets citation and extraction reproducibility in VAMDC

    Science.gov (United States)

    Zwölf, Carlo Maria; Moreau, Nicolas; Dubernet, Marie-Lise

    2016-09-01

    In this paper we present a new paradigm for the identification of datasets extracted from the Virtual Atomic and Molecular Data Centre (VAMDC) e-science infrastructure. Such identification includes information on the origin and version of the datasets, references associated to individual data in the datasets, as well as timestamps linked to the extraction procedure. This paradigm is described through the modifications of the language used to exchange data within the VAMDC and through the services that will implement those modifications. This new paradigm should enforce traceability of datasets, favor reproducibility of datasets extraction, and facilitate the systematic citation of the authors having originally measured and/or calculated the extracted atomic and molecular data.

  4. Needle Segmentation in Volumetric Optical Coherence Tomography Images for Ophthalmic Microsurgery

    Directory of Open Access Journals (Sweden)

    Mingchuan Zhou

    2017-07-01

    Full Text Available Needle segmentation is a fundamental step for needle reconstruction and image-guided surgery. Although there have been success stories in needle segmentation for non-microsurgical applications, those methods cannot be directly extended to ophthalmic surgery because of the required spatial resolution. Since ophthalmic surgery is performed with finer and smaller surgical instruments in micro-structural anatomies, specifically in the retinal domain, delicate operation and sensitive perception become difficult. To address these challenges, in this paper we investigate needle segmentation in ophthalmic operations on 60 Optical Coherence Tomography (OCT) cubes captured during needle injection surgeries on ex-vivo pig eyes. We developed two different approaches, a conventional method based on morphological features (MF) and a specifically designed fully convolutional network (FCN) method, and evaluated them on the benchmark for needle segmentation in volumetric OCT images. The experimental results show that the FCN method has better segmentation performance on four evaluation metrics, while the MF method has a shorter inference time, which provides a valuable reference for future work.
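
    As general background on the MF baseline, a morphological needle segmentation can be sketched as thresholding the OCT volume and keeping the most elongated bright component; the sketch below is a schematic stand-in for the paper's method, with the threshold, size filter, and elongation test as assumptions:

        # Schematic morphological needle segmentation in an OCT volume:
        # threshold, label connected components, keep the most elongated one.
        import numpy as np
        from scipy import ndimage

        def segment_needle(volume, threshold):
            labels, n = ndimage.label(volume > threshold)
            best, best_elong = 0, -1.0
            for i in range(1, n + 1):
                pts = np.argwhere(labels == i)
                if len(pts) < 20:
                    continue                     # ignore speckle components
                ev = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))
                elong = ev[-1] / (ev[0] + 1e-9)  # elongation of the component
                if elong > best_elong:
                    best, best_elong = i, elong
            return labels == best

        vol = np.zeros((32, 64, 64))
        vol[16, 10:60, 30] = 1.0                 # synthetic needle
        print(segment_needle(vol, 0.5).sum(), "voxels segmented")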

  5. The identification of informative genes from multiple datasets with increasing complexity

    Directory of Open Access Journals (Sweden)

    't Hoen Peter AC

    2010-01-01

    Full Text Available Background: In microarray data analysis, factors such as data quality, biological variation, and the increasingly multi-layered nature of more complex biological systems complicate the modelling of regulatory networks that can represent and capture the interactions among genes. We believe that the use of multiple datasets derived from related biological systems leads to more robust models. Therefore, we developed a novel framework for modelling regulatory networks that involves training and evaluation on independent datasets. Our approach includes the following steps: (1) ordering the datasets based on their level of noise and informativeness; (2) selection of a Bayesian classifier with an appropriate level of complexity by evaluation of predictive performance on independent data sets; (3) comparing the different gene selections and the influence of increasing the model complexity; (4) functional analysis of the informative genes. Results: In this paper, we identify the most appropriate model complexity using cross-validation and independent test set validation for predicting gene expression in three published datasets related to myogenesis and muscle differentiation. Furthermore, we demonstrate that models trained on simpler datasets can be used to identify interactions among genes and select the most informative ones. We also show that these models can explain the myogenesis-related genes (genes of interest) significantly better than others, demonstrating the value of the approach for identifying informative genes from multiple datasets with increasing complexity whilst additionally modelling the interactions between genes. Conclusions: We show that Bayesian networks derived from simpler controlled systems have better performance than those trained on datasets from more complex biological systems. Further, we show that highly predictive and consistent genes, from the pool of differentially expressed genes, across independent datasets are more likely to be fundamentally important genes.

  6. 2006 Fynmeet sea clutter measurement trial: Datasets

    CSIR Research Space (South Africa)

    Herselman, PLR

    2007-09-06

    Full Text Available Experiment summary (sea clutter measurements): Type Sea Clutter; Tx frequency 9.125 GHz; date 31-Jul-2006; PRF 5 kHz; tracking range 2300 m; range extent 1440 m (96 gates, 15 m resolution). Run 1: start time 15:45:38.468, duration 183090 PRIs (36.6178 s). Run 2: start time 15:46:15.103, duration 212925 PRIs (42.5848 s).

  7. The Fullness of Space

    Science.gov (United States)

    Wynn-Williams, Gareth

    1992-06-01

    A brief glance at the night sky reveals a remarkable fact about the Universe: it is extremely patchy. The light we see on a moonless night comes from bright specks we call planets and stars. Between the stars we see blackness. Most of astronomy, not to mention geology, biology, and all humanistic studies, is concerned with what happens in and on these bright specks. Yet these lumps and specks, which include the Earth, the Sun, the planets of our solar system, and all the stars together occupy less than one billion billion billionth (10⁻²⁷) of the total volume of the Universe. It is astonishing to think that the interstellar medium within our Galaxy, the Milky Way, is anything but empty space. But in most of the Galaxy, the density of interstellar matter is thousands of times lower than that of the best vacuum produced on Earth. In fact, there is enough interstellar matter in the Galaxy to make ten billion stars the size of the Sun. In this excellently crafted book, the author gives full treatment to the nature of the stuff between the stars and to the methods that astronomers use to study it. He explains where the matter came from in the first place, how it collects together in clouds and clumps, and the way in which new stars and planets form from material in space. Through his descriptions we see the matter as glorious gas clouds, such as the Orion Nebula, shimmering in rich hues of red and orange. Telescopes reveal inky black clouds, the molecule factories in which new stars and planets are made. Radio, infrared, and ultraviolet telescopes have given astronomers stunning new images of interstellar matter. The Fullness of Space is written for the general reader interested in science. It assumes no scientific or mathematical background, and the only equations in the whole book are found in the appendices. It is beautifully illustrated with many of the finest photographs available of dust clouds and bright nebulae. Readers from high school age to adult will find it a rewarding read.

  8. On the Uncertain Future of the Volumetric 3D Display Paradigm

    Science.gov (United States)

    Blundell, Barry G.

    2017-06-01

    Volumetric displays permit electronically processed images to be depicted within a transparent physical volume and enable a range of cues to depth to be inherently associated with image content. Further, images can be viewed directly by multiple simultaneous observers who are able to change vantage positions in a natural way. On the basis of research to date, we assume that the technologies needed to implement useful volumetric displays able to support translucent image formation are available. Consequently, in this paper we review aspects of the volumetric paradigm and identify important issues which have, to date, precluded their successful commercialization. Potentially advantageous characteristics are outlined and demonstrate that significant research is still needed in order to overcome barriers which continue to hamper the effective exploitation of this display modality. Given the recent resurgence of interest in developing commercially viable general purpose volumetric systems, this discussion is of particular relevance.

  9. Volumetric study of the olfactory bulb in patients with chronic rhinonasal sinusitis using MRI

    Directory of Open Access Journals (Sweden)

    Reda A. Alarabawy

    2016-06-01

    Conclusions: MRI with volumetric analysis is a useful tool for assessing olfactory bulb volume in patients with olfactory loss and appears helpful in assessing the degree of recovery in patients after sinus surgery.

  10. Effect of consolidation pressure on volumetric composition and stiffness of unidirectional flax fibre composites

    DEFF Research Database (Denmark)

    Aslan, Mustafa; Mehmood, S.; Madsen, Bo

    2013-01-01

    Unidirectional flax/polyethylene terephthalate composites are manufactured by filament winding, followed by compression moulding with low and high consolidation pressure, and with variable flax fibre content. The experimental data of volumetric composition and tensile stiffness are analysed with ...

  11. Mechanical properties, volumetric shrinkage and depth of cure of short fiber-reinforced resin composite.

    Science.gov (United States)

    Tsujimoto, Akimasa; Barkmeier, Wayne W; Takamizawa, Toshiki; Latta, Mark A; Miyazaki, Masashi

    2016-01-01

    The mechanical properties, volumetric shrinkage and depth of cure of a short fiber-reinforced resin composite (SFRC) were investigated in this study and compared to those of bulk fill resin composites (BFRCs) and conventional glass/ceramic-filled resin composites (CGRCs). Fracture toughness, flexural properties, volumetric shrinkage and depth of cure of the SFRC, BFRCs and CGRCs were measured. The SFRC had significantly higher fracture toughness than the BFRCs and CGRCs. The flexural properties of the SFRC were comparable with those of the BFRCs and CGRCs. The SFRC showed significantly lower volumetric shrinkage than the other tested resin composites. The depth of cure of the SFRC was similar to that of the BFRCs and higher than that of the CGRCs. The data from this laboratory investigation suggest that the SFRC exhibits improvements in fracture toughness, volumetric shrinkage and depth of cure when compared with the CGRCs, but its depth of cure was similar to that of the BFRCs.

  12. Review of prospects and challenges of eye tracking in volumetric imaging.

    Science.gov (United States)

    Venjakob, Antje C; Mello-Thoms, Claudia R

    2016-01-01

    While eye tracking research in conventional radiography has flourished over the past decades, the number of eye tracking studies that look at multislice images lags behind. A possible reason for the lack of studies in this area might be that the eye tracking methodology used in the context of conventional radiography cannot be applied one-to-one to volumetric imaging material. Challenges of eye tracking in volumetric imaging concern in particular the selection of stimulus material, the detection of events in the eye tracking data, the calculation of meaningful eye tracking parameters, and the reporting of abnormalities. However, all of these challenges can be addressed in the design of the experiment. If this is done, eye tracking studies using volumetric imaging material offer almost unlimited opportunity for perception research and are highly relevant, as the number of volumetric images that are acquired and interpreted is rising.
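
    Event detection, one of the challenges listed, is commonly handled with a dispersion-threshold (I-DT) algorithm; the minimal sketch below uses illustrative thresholds and a synthetic 2D gaze trace, and a volumetric study would additionally need to handle the scrolling depth coordinate:

        # Minimal dispersion-threshold (I-DT) fixation detection.
        # `gaze` is an (N, 2) array of gaze coordinates sampled at `hz`;
        # dispersion/duration thresholds are illustrative defaults.
        import numpy as np

        def idt_fixations(gaze, hz, max_disp=1.0, min_dur=0.1):
            min_len = int(min_dur * hz)
            fixations, i = [], 0
            while i + min_len <= len(gaze):
                j = i + min_len
                if np.ptp(gaze[i:j], axis=0).sum() <= max_disp:
                    # grow the window while it stays spatially compact
                    while j < len(gaze) and np.ptp(gaze[i:j + 1], axis=0).sum() <= max_disp:
                        j += 1
                    fixations.append((i / hz, j / hz, gaze[i:j].mean(axis=0)))
                    i = j
                else:
                    i += 1
            return fixations                     # (start s, end s, centroid)

        rng = np.random.default_rng(2)
        trace = np.concatenate([np.full((60, 2), 5.0), np.full((60, 2), 12.0)])
        print(idt_fixations(trace + rng.normal(0, 0.05, trace.shape), hz=120))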

  13. Alpha shape theory for 3D visualization and volumetric measurement of brain tumor progression using magnetic resonance images.

    Science.gov (United States)

    Hamoud Al-Tamimi, Mohammed Sabbih; Sulong, Ghazali; Shuaib, Ibrahim Lutfi

    2015-07-01

    Resection of brain tumors is a delicate task in surgery due to its direct influence on the patients' survival rate. Determining the extent of tumor resection requires accurate estimation and comparison of tumor volume and dimensions in pre- and post-operative Magnetic Resonance Images (MRI). The active contour segmentation technique is used to segment brain tumors in pre-operative MR images using self-developed software, and tumor volume is acquired from the resulting contours via alpha shape theory. A graphical user interface is developed for rendering, visualizing and estimating the volume of a brain tumor. The Internet Brain Segmentation Repository (IBSR) dataset is employed to analyze and determine the repeatability and reproducibility of the tumor volume estimates. The accuracy of the method is validated by comparing the volume estimated using the proposed method with the gold standard. Segmentation by the active contour technique is found to be capable of detecting brain tumor boundaries. Furthermore, the volume description and visualization enable an interactive examination of tumor tissue and its surroundings. The results demonstrate that, for precise volumetric measurement of tumors, alpha shape theory is superior to other existing standard methods.
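
    The alpha-shape volume estimate can be sketched as a filtered Delaunay tetrahedralization: keep the tetrahedra whose circumradius is below the alpha threshold and sum their volumes. The sketch below is a generic implementation of the theory, not the authors' software, and the point cloud and alpha value are invented:

        # Alpha-shape volume of a 3D point cloud: Delaunay tetrahedra whose
        # circumradius is below `alpha` are kept and their volumes summed.
        import numpy as np
        from scipy.spatial import Delaunay

        def circumradius(tet):
            # Circumcenter c solves 2*(p_k - p_0).c = |p_k|^2 - |p_0|^2, k = 1..3
            # (degenerate tetrahedra are assumed absent for this sketch)
            A = 2 * (tet[1:] - tet[0])
            b = (tet[1:] ** 2).sum(1) - (tet[0] ** 2).sum()
            c = np.linalg.solve(A, b)
            return np.linalg.norm(c - tet[0])

        def alpha_shape_volume(points, alpha):
            vol = 0.0
            for simplex in Delaunay(points).simplices:
                tet = points[simplex]
                if circumradius(tet) < alpha:
                    vol += abs(np.linalg.det(tet[1:] - tet[0])) / 6.0
            return vol

        pts = np.random.default_rng(3).random((500, 3))  # unit-cube stand-in
        print(f"alpha-shape volume: {alpha_shape_volume(pts, alpha=0.3):.3f}")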

  14. Full metal jacket!

    CERN Multimedia

    Laëtitia Pedroso

    2011-01-01

    Ten years ago, standard issue clothing only gave CERN firemen partial protection, but today our fire-fighters are equipped with state-of-the-art, full personal protective equipment.   CERN's Fire Brigade team. For many years, the members of CERN's Fire Brigade went on call-outs clad in their work trousers and fire-rescue coats, which only afforded them partial protection. Today, textile manufacturing techniques have moved on a long way and CERN's firemen are now kitted out with state-of-the-art personal protective equipment. The coat and trousers are three-layered, comprising fire-resistant aramide, a protective membrane and a thermal lining. The CERN Fire Brigade's new state-of-the-art personal protection equipment. "This equipment is fully compliant with the standards in force and is therefore resistant to cuts, abrasion, electrical arcs with thermal effects and, of course, fire," explains Patrick Berlinghi, the CERN Fire Brigade's Logistics Officer. You might think that su...

  15. Full Color Holographic Endoscopy

    Science.gov (United States)

    Osanlou, A.; Bjelkhagen, H.; Mirlis, E.; Crosby, P.; Shore, A.; Henderson, P.; Napier, P.

    2013-02-01

    The ability to produce color holograms from human tissue represents a major medical advance, specifically in the areas of diagnosis and teaching. This has been achieved at Glyndwr University. In cooperation with partners at Gooch & Housego, Moor Instruments, Vivid Components and the Peninsula Medical School, Exeter, UK, we have for the first time produced full color holograms of human cell samples in which the cell boundary and the nuclei inside the cells can be clearly focused at different depths, something impossible with a two-dimensional photographic image. This was the main objective set by the Peninsula Medical School at Exeter, UK. Achieving this objective means that clinically useful images essentially indistinguishable from the original human cells could be routinely recorded. This could potentially be done at the tip of a holo-endoscopic probe inside the body. Optimised recording exposure and development processes for the holograms were defined for bulk exposures. This included the optimisation of in-house recording emulsions for coating evaluation on polymer substrates (rather than glass plates), a key step for large-volume commercial exploitation. At Glyndwr University, we also developed a new version of our in-house holographic emulsion, which offers world-leading resolution.

  16. Viability of Controlling Prosthetic Hand Utilizing Electroencephalograph (EEG) Dataset Signal

    Science.gov (United States)

    Miskon, Azizi; A/L Thanakodi, Suresh; Raihan Mazlan, Mohd; Mohd Haziq Azhar, Satria; Nooraya Mohd Tawil, Siti

    2016-11-01

    This project presents the development of an artificial hand controlled by Electroencephalograph (EEG) signal datasets for prosthetic applications. The EEG signal datasets were used to improve the control of the prosthetic hand compared to Electromyograph (EMG) control. EMG is disadvantageous for a person who has not used the relevant muscles for a long time, and also for persons with age-related degenerative issues. Thus, the EEG datasets were found to be an alternative to EMG. The datasets used in this work were taken from a Brain Computer Interface (BCI) project and were already classified for open, close and combined movement operations. They served as the input to control the prosthetic hand through an interface system between Microsoft Visual Studio and Arduino. The obtained results reveal the prosthetic hand to be efficient and fast in responding to the EEG datasets, with an additional LiPo (Lithium Polymer) battery attached to the prosthetic. Some limitations were also identified in terms of the hand movements and the weight of the prosthetic, and suggestions for improvement are given in this paper. Overall, the objective of this paper was achieved, as the prosthetic hand was found to be feasible in operation utilizing the EEG datasets.
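
    The interface layer described (classified EEG commands driving the Arduino) can be sketched as a one-byte serial protocol; in the sketch below the port name, baud rate, and command bytes are hypothetical, and pyserial on the PC side stands in for the Visual Studio application:

        # Sketch of the PC-side bridge: map classified EEG labels to one-byte
        # serial commands for an Arduino driving the prosthetic hand servos.
        # Port, baud rate, and command bytes are hypothetical.
        import serial  # pyserial

        COMMANDS = {"open": b"O", "close": b"C", "combined": b"B"}

        def send_command(port, label):
            port.write(COMMANDS[label])          # one byte per gesture class

        if __name__ == "__main__":
            with serial.Serial("COM3", 9600, timeout=1) as port:
                for label in ["open", "close", "combined"]:  # stand-in classifier output
                    send_command(port, label)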

  17. Planning strategies in volumetric modulated arc therapy for breast.

    Science.gov (United States)

    Nicolini, Giorgia; Fogliata, Antonella; Clivio, Alessandro; Vanetti, Eugenio; Cozzi, Luca

    2011-07-01

    In breast radiotherapy with intensity modulation, it is a well established practice to extend the dose fluence outside the limit of the body contour to account for small changes in size and position of the target and the rest of the tissues due to respiration or to possible oedema. A simple approach is not applicable with RapidArc volumetric modulated arc therapy, which is not based on a fixed field fluence delivery. In this study, a viable technical strategy to account for this need is presented. RapidArc (RA) plans for six breast cancer patients (three right and three left cases) were optimized (PRO version III) on the original CT data set (O) and on an alternative CT (E) generated with an artificial expansion (and assignment of soft-tissue equivalent HU) of 10 mm of the body in the breast region and of the PTV contours toward the external direction. Final dose calculations for the two sets of plans were performed on the same original CT data set O, normalizing the dose prescription (50 Gy) to the target mean. In this way, two treatment plans on the same CT set O for each patient were obtained: the no action plan (OO) and the alternative plan based on an expanded optimization (EO). Fixing MU, these two plans were then recomputed on the expanded CT data set and on an intermediate one (with expansion = 5 mm), to mimic possible changes in size due to oedema during treatment or residual displacements due to breathing not properly controlled. The aim of the study was to quantify the robustness of this planning strategy on dose distributions when either the OO or the EO strategies were adopted. For all the combinations, a DVH analysis of all involved structures is reported. I. The two optimization approaches gave comparable dose distributions on the original CT data set. II. When plans were evaluated on the expanded CTs (mimicking the presence of oedema), the EO approach showed improved target coverage compared to OO: on CT_10 mm, V(D = 98%) [%] = 92.5 ± 0.9 and 68.5 ± 3

  18. Planning strategies in volumetric modulated arc therapy for breast.

    Science.gov (United States)

    Nicolini, Giorgia; Fogliata, Antonella; Clivio, Alessandro; Vanetti, Eugenio; Cozzi, Luca

    2011-07-01

    In breast radiotherapy with intensity modulation, it is a well established practice to extend the dose fluence outside the limit of the body contour to account for small changes in size and position of the target and the rest of the tissues due to respiration or to possible oedema. A simple approach is not applicable with RapidArc volumetric modulated arc therapy, which is not based on a fixed field fluence delivery. In this study, a viable technical strategy to account for this need is presented. RapidArc (RA) plans for six breast cancer patients (three right and three left cases) were optimized (PRO version III) on the original CT data set (O) and on an alternative CT (E) generated with an artificial expansion (and assignment of soft-tissue equivalent HU) of 10 mm of the body in the breast region and of the PTV contours toward the external direction. Final dose calculations for the two sets of plans were performed on the same original CT data set O, normalizing the dose prescription (50 Gy) to the target mean. In this way, two treatment plans on the same CT set O for each patient were obtained: the no action plan (OO) and the alternative plan based on an expanded optimization (EO). Fixing MU, these two plans were then recomputed on the expanded CT data set and on an intermediate one (with expansion = 5 mm), to mimic possible changes in size due to oedema during treatment or residual displacements due to breathing not properly controlled. The aim of the study was to quantify the robustness of this planning strategy on dose distributions when either the OO or the EO strategies were adopted. For all the combinations, a DVH analysis of all involved structures is reported. I. The two optimization approaches gave comparable dose distributions on the original CT data set. II. When plans were evaluated on the expanded CTs (mimicking the presence of oedema), the EO approach showed improved target coverage compared to OO: on CT_10 mm, V(D = 98%) [%] = 92.5 ± 0.9 and 68.5 ± 3
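
    The DVH analysis used to compare the OO and EO plans reduces, per structure, to a cumulative histogram of the dose inside the structure mask; a minimal sketch with invented array names and a synthetic dose grid is:

        # Cumulative dose-volume histogram for one structure: the fraction of
        # the structure volume receiving at least each dose level.
        import numpy as np

        def cumulative_dvh(dose, mask, bins=100):
            d = dose[mask]                       # doses inside the structure
            levels = np.linspace(0, d.max(), bins)
            volume_fraction = [(d >= lvl).mean() for lvl in levels]
            return levels, np.array(volume_fraction)

        rng = np.random.default_rng(4)
        dose = rng.normal(50.0, 2.0, (40, 40, 40))   # stand-in dose grid (Gy)
        mask = np.zeros_like(dose, bool)
        mask[10:30, 10:30, 10:30] = True
        levels, vf = cumulative_dvh(dose, mask)
        print(f"V(49 Gy) = {vf[np.searchsorted(levels, 49.0)]:.1%} of the PTV")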

  19. Volumetric Properties of the Mixture Tetrachloromethane CCl4 + CHCl3 Trichloromethane (VMSD1212, LB4576_V)

    Science.gov (United States)

    Cibulka, I.; Fontaine, J.-C.; Sosnkowska-Kehiaian, K.; Kehiaian, H. V.

    This document is part of Subvolume C 'Binary Liquid Systems of Nonelectrolytes III' of Volume 26 'Heats of Mixing, Vapor-Liquid Equilibrium, and Volumetric Properties of Mixtures and Solutions' of Landolt-Börnstein Group IV 'Physical Chemistry'. It contains the Chapter 'Volumetric Properties of the Mixture Tetrachloromethane CCl4 + CHCl3 Trichloromethane (VMSD1212, LB4576_V)' providing data by calculation of molar excess volume from low-pressure density measurements at variable mole fraction and constant temperature.
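
    The molar excess volume referred to above is obtained from measured densities by the standard relation (general thermodynamic background, not reproduced from the table itself):

        \[
        V^{E}_{m} \;=\; \frac{x_{1}M_{1} + x_{2}M_{2}}{\rho} \;-\; \frac{x_{1}M_{1}}{\rho_{1}} \;-\; \frac{x_{2}M_{2}}{\rho_{2}},
        \]

        where $x_{i}$ and $M_{i}$ are the mole fractions and molar masses of CCl4 and CHCl3, $\rho$ is the measured mixture density, and $\rho_{i}$ are the pure-component densities at the same temperature.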

  20. Determination of ferrous iron in rock and mineral samples by three volumetric methods

    OpenAIRE

    Saikkonen, R.J.; Rautiainen, I.A.

    1993-01-01

    Ferrous iron was determined by three volumetric methods in 13 in-house reference rock samples and in 31 international geological reference samples. The methods used were Amonette & Scott's oxidimetric method, Wilson's oxidimetric method and Pratt's method. The results for FeO by these volumetric methods in the 13 in-house rock samples were compared to the results obtained in other analytical laboratories in Finland. The results for FeO in the international samples were compared with published data...