WorldWideScience

Sample records for volumetric dataset full

  1. Development of an online radiology case review system featuring interactive navigation of volumetric image datasets using advanced visualization techniques

    International Nuclear Information System (INIS)

    Yang, Hyun Kyung; Kim, Boh Kyoung; Jung, Ju Hyun; Kang, Heung Sik; Lee, Kyoung Ho; Woo, Hyun Soo; Jo, Jae Min; Lee, Min Hee

    2015-01-01

    To develop an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques. Our Institutional Review Board approved the use of the patient data and waived the need for informed consent. We determined the following system requirements: volumetric navigation, accessibility, scalability, undemanding case management, trainee encouragement, and simulation of a busy practice. The system comprised a case registry server, client case review program, and commercially available cloud-based image viewing system. In the pilot test, we used 30 cases of low-dose abdomen computed tomography for the diagnosis of acute appendicitis. In each case, a trainee was required to navigate through the images and submit answers to the case questions. The trainee was then given the correct answers and key images, as well as the image dataset with annotations on the appendix. After evaluation of all cases, the system displayed the diagnostic accuracy and average review time, and the trainee was asked to reassess the failed cases. The pilot system was deployed successfully in a hands-on workshop course. We developed an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques.

  2. Development of an online radiology case review system featuring interactive navigation of volumetric image datasets using advanced visualization techniques

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Hyun Kyung; Kim, Boh Kyoung; Jung, Ju Hyun; Kang, Heung Sik; Lee, Kyoung Ho [Dept. of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of); Woo, Hyun Soo [Dept. of Radiology, SMG-SNU Boramae Medical Center, Seoul (Korea, Republic of); Jo, Jae Min [Dept. of Computer Science and Engineering, Seoul National University, Seoul (Korea, Republic of); Lee, Min Hee [Dept. of Radiology, Soonchunhyang University Bucheon Hospital, Bucheon (Korea, Republic of)

    2015-11-15

    To develop an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques. Our Institutional Review Board approved the use of the patient data and waived the need for informed consent. We determined the following system requirements: volumetric navigation, accessibility, scalability, undemanding case management, trainee encouragement, and simulation of a busy practice. The system comprised a case registry server, client case review program, and commercially available cloud-based image viewing system. In the pilot test, we used 30 cases of low-dose abdomen computed tomography for the diagnosis of acute appendicitis. In each case, a trainee was required to navigate through the images and submit answers to the case questions. The trainee was then given the correct answers and key images, as well as the image dataset with annotations on the appendix. After evaluation of all cases, the system displayed the diagnostic accuracy and average review time, and the trainee was asked to reassess the failed cases. The pilot system was deployed successfully in a hands-on workshop course. We developed an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques.

  3. Volumetric full-range magnetomotive optical coherence tomography

    Science.gov (United States)

    Ahmad, Adeel; Kim, Jongsik; Shemonski, Nathan D.; Marjanovic, Marina; Boppart, Stephen A.

    2014-01-01

    Magnetomotive optical coherence tomography (MM-OCT) can be utilized to spatially localize the presence of magnetic particles within tissues or organs. These magnetic particle-containing regions are detected by using the capability of OCT to measure small-scale displacements induced by the activation of an external electromagnet coil typically driven by a harmonic excitation signal. The constraints imposed by the scanning schemes employed and tissue viscoelastic properties limit the speed at which conventional MM-OCT data can be acquired. Realizing that electromagnet coils can be designed to exert MM force on relatively large tissue volumes (comparable to or larger than typical OCT imaging fields of view), we show that an order-of-magnitude improvement in three-dimensional (3-D) MM-OCT imaging speed can be achieved by rapid acquisition of a volumetric scan during the activation of the coil. Furthermore, we show volumetric (3-D) MM-OCT imaging over a large imaging depth range by combining this volumetric scan scheme with full-range OCT. Results with tissue equivalent phantoms and a biological tissue are shown to demonstrate this technique. PMID:25472770

  4. Full-spectrum volumetric solar thermal conversion via photonic nanofluids.

    Science.gov (United States)

    Liu, Xianglei; Xuan, Yimin

    2017-10-12

    Volumetric solar thermal conversion is an emerging technique for a plethora of applications such as solar thermal power generation, desalination, and solar water splitting. However, achieving broadband solar thermal absorption via dilute nanofluids is still a daunting challenge. In this work, full-spectrum volumetric solar thermal conversion is demonstrated over a thin layer of the proposed 'photonic nanofluids'. The underlying mechanism is found to be the photonic superposition of core resonances, shell plasmons, and core-shell resonances at different wavelengths, whose coexistence is enabled by the broken symmetry of specially designed composite nanoparticles, i.e., Janus nanoparticles. The solar thermal conversion efficiency can be improved by 10.8% compared with core-shell nanofluids. The extinction coefficient of Janus dimers with various configurations is also investigated to unveil the effects of particle couplings. This work provides the possibility to achieve full-spectrum volumetric solar thermal conversion, and may have potential applications in efficient solar energy harvesting and utilization.

  5. Inkjet printing-based volumetric display projecting multiple full-colour 2D patterns

    Science.gov (United States)

    Hirayama, Ryuji; Suzuki, Tomotaka; Shimobaba, Tomoyoshi; Shiraki, Atsushi; Naruse, Makoto; Nakayama, Hirotaka; Kakue, Takashi; Ito, Tomoyoshi

    2017-04-01

    In this study, a method to construct a full-colour volumetric display is presented using a commercially available inkjet printer. Photoreactive luminescence materials are minutely and automatically printed as the volume elements, and volumetric displays are constructed with high resolution using easy-to-fabricate means that exploit inkjet printing technologies. The results experimentally demonstrate the first prototype of an inkjet printing-based volumetric display composed of multiple layers of transparent films that yield a full-colour three-dimensional (3D) image. Moreover, we propose a design algorithm with 3D structures that provide multiple different 2D full-colour patterns when viewed from different directions and experimentally demonstrate prototypes. It is considered that these types of 3D volumetric structures and their fabrication methods based on widely deployed existing printing technologies can be utilised as novel information display devices and systems, including digital signage, media art, entertainment and security.

  6. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    Energy Technology Data Exchange (ETDEWEB)

    Hosntalab, Mohammad [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Aghaeizadeh Zoroofi, Reza [University of Tehran, Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, Tehran (Iran); Abbaspour Tehrani-Fard, Ali [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Sharif University of Technology, Department of Electrical Engineering, Tehran (Iran); Shirani, Gholamreza [Faculty of Dentistry Medical Science of Tehran University, Oral and Maxillofacial Surgery Department, Tehran (Iran)

    2008-09-15

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implants, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps as follows: first, we extract a mask from the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of teeth in the jaws. Third, the arc of the upper and lower jaws is estimated and the dataset is re-sampled panoramically. Separation of the upper and lower jaws and initial segmentation of teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of the teeth and apply a variational level set to refine the initial teeth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contours of the teeth. Because this technique is based on the characteristics of the overall region of the tooth image, it can extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)
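The Otsu-thresholding and projection steps of the pipeline above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code; `jaw_projections` is a hypothetical helper name:

```python
import numpy as np

def otsu_threshold(image):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the intensity histogram."""
    hist, edges = np.histogram(image.ravel(), bins=256)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                    # probability of the lower class
    w1 = 1.0 - w0
    mu0 = np.cumsum(p * centers)         # unnormalized cumulative mean
    mu_t = mu0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu0) ** 2 / (w0 * w1)
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centers[int(np.argmax(sigma_b))]

def jaw_projections(mask):
    """Horizontal and vertical projections of a binary mask, the
    quantities used above to separate jaws and localize teeth."""
    return mask.sum(axis=1), mask.sum(axis=0)
```

On a bimodal image the returned threshold lands between the two intensity modes, which is what the masking step relies on.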

  7. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    International Nuclear Information System (INIS)

    Hosntalab, Mohammad; Aghaeizadeh Zoroofi, Reza; Abbaspour Tehrani-Fard, Ali; Shirani, Gholamreza

    2008-01-01

    Quantification of teeth is of clinical importance for various computer-assisted procedures such as dental implants, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps as follows: first, we extract a mask from the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of teeth in the jaws. Third, the arc of the upper and lower jaws is estimated and the dataset is re-sampled panoramically. Separation of the upper and lower jaws and initial segmentation of teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of the teeth and apply a variational level set to refine the initial teeth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contours of the teeth. Because this technique is based on the characteristics of the overall region of the tooth image, it can extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)

  8. A feasibility study of digital tomosynthesis for volumetric dental imaging

    International Nuclear Information System (INIS)

    Cho, M K; Kim, H K; Youn, H; Kim, S S

    2012-01-01

    We present a volumetric dental tomography method that compensates for the insufficient projection views obtained from limited-angle scans. The reconstruction algorithm is based on the backprojection filtering method, which employs apodizing filters that reduce out-of-plane blur artifacts and suppress high-frequency noise. To accomplish this volumetric imaging, two volume-reconstructed datasets are synthesized, obtained from two different limited-angle scans performed at orthogonal angles. The reconstructed images, obtained using less than 15% of the number of projection views needed for a full skull phantom scan, demonstrate the potential use of the proposed method in dental imaging applications. This method enables a much smaller radiation dose for the patient compared to conventional dental tomography.
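The backprojection idea behind limited-angle tomosynthesis can be illustrated with a plain shift-and-add reconstruction. This is a simplified sketch without the paper's apodizing filters, and the per-plane shifts are invented for the example:

```python
import numpy as np

def shift_and_add(projections, shifts_per_plane):
    """Simplest tomosynthesis reconstruction: for each depth plane,
    shift every projection by that plane's per-view shift and average.
    Structures in the chosen plane add coherently; others blur out."""
    planes = []
    for shifts in shifts_per_plane:
        acc = np.zeros_like(projections[0], dtype=float)
        for proj, s in zip(projections, shifts):
            acc += np.roll(proj, s, axis=-1)
        planes.append(acc / len(projections))
    return np.array(planes)
```

A point object reinforces only in the plane whose shifts realign its two views, which is the in-focus/out-of-focus behavior the apodizing filters then clean up.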

  9. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai; Sang, Huiyan; Huang, Jianhua Z.

    2014-01-01

    Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity of model fitting and prediction grows in cubic order with the size of the dataset, so application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points.

  10. An automatic algorithm for detecting stent endothelialization from volumetric optical coherence tomography datasets

    Energy Technology Data Exchange (ETDEWEB)

    Bonnema, Garret T; Barton, Jennifer K [College of Optical Sciences, University of Arizona, Tucson, AZ (United States); Cardinal, Kristen O'Halloran [Biomedical and General Engineering, California Polytechnic State University (United States); Williams, Stuart K [Cardiovascular Innovation Institute, University of Louisville, Louisville, KY 40292 (United States)], E-mail: barton@u.arizona.edu

    2008-06-21

    Recent research has suggested that endothelialization of vascular stents is crucial to reducing the risk of late stent thrombosis. With a resolution of approximately 10 μm, optical coherence tomography (OCT) may be an appropriate imaging modality for visualizing the vascular response to a stent and measuring the percentage of struts covered with an anti-thrombogenic cellular lining. We developed an image analysis program to locate covered and uncovered stent struts in OCT images of tissue-engineered blood vessels. The struts were found by exploiting the highly reflective and shadowing characteristics of the metallic stent material. Coverage was evaluated by comparing the luminal surface with the depth of the strut reflection. Strut coverage calculations were compared to manual assessment of OCT images and epi-fluorescence analysis of the stented grafts. Based on the manual assessment, the strut identification algorithm operated with a sensitivity of 93% and a specificity of 99%. The strut coverage algorithm was 81% sensitive and 96% specific. The present study indicates that the program can automatically determine percent cellular coverage from volumetric OCT datasets of blood vessel mimics. The program could potentially be extended to assessments of stent endothelialization in native stented arteries.
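The reflective-peak-plus-shadow criterion for locating struts might be sketched as follows. The thresholds here are illustrative placeholders, not values from the study:

```python
import numpy as np

def find_strut_columns(bscan, peak_thresh=0.8, shadow_thresh=0.05):
    """Flag A-scan columns (depth-by-lateral B-scan) that contain a
    stent strut: a strong specular reflection followed by a shadow,
    i.e. little signal deeper than the peak."""
    struts = []
    for j in range(bscan.shape[1]):
        a_scan = bscan[:, j]
        peak = int(np.argmax(a_scan))
        below = a_scan[peak + 1:]
        if a_scan[peak] >= peak_thresh and below.size and below.mean() <= shadow_thresh:
            struts.append(j)
    return struts
```

Coverage would then be assessed per flagged column by comparing the depth of the luminal surface with the depth of the strut reflection, as the abstract describes.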

  11. Area and volumetric density estimation in processed full-field digital mammograms for risk assessment of breast cancer.

    Directory of Open Access Journals (Sweden)

    Abbas Cheddad

    INTRODUCTION: Mammographic density, the white radiolucent part of a mammogram, is a marker of breast cancer risk and mammographic sensitivity. There are several means of measuring mammographic density, among which are area-based and volumetric-based approaches. Current volumetric methods use only unprocessed, raw mammograms, which is a problematic restriction since such raw mammograms are normally not stored. We describe fully automated methods for measuring both area and volumetric mammographic density from processed images. METHODS: The dataset used in this study comprises raw and processed images of the same view from 1462 women. We developed two algorithms for processed images, an automated area-based approach (CASAM-Area) and a volumetric-based approach (CASAM-Vol). The latter method was based on training a random forest prediction model with image statistical features as predictors, against a volumetric measure, Volpara, for corresponding raw images. We contrast the three methods, CASAM-Area, CASAM-Vol and Volpara, directly and in terms of association with breast cancer risk and a known genetic variant for mammographic density and breast cancer, rs10995190 in the gene ZNF365. Associations with breast cancer risk were evaluated using images from 47 breast cancer cases and 1011 control subjects. The genetic association analysis was based on 1011 control subjects. RESULTS: All three measures of mammographic density were associated with breast cancer risk and rs10995190 (p < 0.10 for risk, p < 0.03 for rs10995190). CONCLUSIONS: Our results show that it is possible to obtain reliable automated measures of volumetric and area mammographic density from processed digital images. Area and volumetric measures of density on processed digital images performed similarly in terms of risk and genetic association.
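The CASAM-Vol idea, training a random forest on image statistics against a volumetric reference measure, can be sketched with synthetic data. scikit-learn stands in for the authors' implementation, and the features and target below are entirely made up:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in "image statistical features" of processed mammograms
# (e.g. mean, variance, percentiles of pixel values) and a synthetic
# Volpara-like volumetric density target for the same images.
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=200)

# Fit on images where the raw-image volumetric measure is available...
forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X[:150], y[:150])

# ...then predict volumetric density for processed-only images.
pred = forest.predict(X[150:])
```

The point of the design is that once trained, the model needs only processed images, sidestepping the fact that raw mammograms are normally not archived.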

  12. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai

    2014-01-01

    Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity of model fitting and prediction grows in cubic order with the size of the dataset, so application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points. Our approach is applicable to nonseparable and nonstationary spatio-temporal covariance models. We illustrate the effectiveness of our method through simulation experiments and application to an ozone measurement dataset.
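The FSA decomposition itself, a reduced-rank part built from knots plus a tapered correction for the short-range residual, can be sketched in one dimension. This is an illustrative reconstruction following Sang and Huang's idea, not the paper's code; the exponential kernel and taper range are arbitrary choices:

```python
import numpy as np

def exp_cov(d, phi=2.0):
    """Exponential covariance kernel as a stand-in model."""
    return np.exp(-d / phi)

def spherical_taper(d, gamma):
    """Compactly supported correlation; exactly zero beyond range gamma,
    which makes the residual term sparse for large datasets."""
    h = np.minimum(d / gamma, 1.0)
    return (1.0 - 1.5 * h + 0.5 * h ** 3) * (h < 1.0)

def fsa_cov(locs, knots, gamma, phi=2.0):
    """Full-scale approximation: predictive-process (low-rank) part from
    the knots, plus a tapered correction of the residual covariance."""
    d = np.abs(locs[:, None] - locs[None, :])
    C = exp_cov(d, phi)
    C_lk = exp_cov(np.abs(locs[:, None] - knots[None, :]), phi)
    C_kk = exp_cov(np.abs(knots[:, None] - knots[None, :]), phi)
    low_rank = C_lk @ np.linalg.solve(C_kk, C_lk.T)
    return low_rank + (C - low_rank) * spherical_taper(d, gamma)
```

Because the taper leaves the diagonal untouched, the approximation reproduces the exact variances while capturing long-range dependence through the knots and short-range dependence through the sparse residual.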

  13. Interactive visualization and analysis of multimodal datasets for surgical applications.

    Science.gov (United States)

    Kirmizibayrak, Can; Yim, Yeny; Wakid, Mike; Hahn, James

    2012-12-01

    Surgeons use information from multiple sources when making surgical decisions. These include volumetric datasets (such as CT, PET, MRI, and their variants), 2D datasets (such as endoscopic videos), and vector-valued datasets (such as computer simulations). Presenting all the information to the user in an effective manner is a challenging problem. In this paper, we present a visualization approach that displays the information from various sources in a single coherent view. The system allows the user to explore and manipulate volumetric datasets, display analysis of dataset values in local regions, combine 2D and 3D imaging modalities and display results of vector-based computer simulations. Several interaction methods are discussed: in addition to traditional interfaces including mouse and trackers, gesture-based natural interaction methods are shown to control these visualizations with real-time performance. An example of a medical application (medialization laryngoplasty) is presented to demonstrate how the combination of different modalities can be used in a surgical setting with our approach.

  14. Level-1 muon trigger performance with the full 2017 dataset

    CERN Document Server

    CMS Collaboration

    2018-01-01

    This document describes the performance of the CMS Level-1 Muon Trigger with the full dataset of 2017. Efficiency plots are included for each track finder (TF) individually and for the system as a whole. The efficiency is measured to be greater than 90% for all track finders.
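The per-bin quantity behind such efficiency plots is straightforward; a minimal sketch with a simple binomial uncertainty (the actual CMS curves use more careful interval estimates, e.g. Clopper-Pearson):

```python
import math

def trigger_efficiency(n_pass, n_total):
    """Efficiency in one kinematic bin: fraction of probe muons that
    fired the trigger, with a naive binomial standard error."""
    eff = n_pass / n_total
    err = math.sqrt(eff * (1.0 - eff) / n_total)
    return eff, err
```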

  15. CERC Dataset (Full Hadza Data)

    DEFF Research Database (Denmark)

    2016-01-01

    The dataset includes demographic, behavioral, and religiosity data from eight different populations from around the world. The samples were drawn from: (1) Coastal and (2) Inland Tanna, Vanuatu; (3) Hadzaland, Tanzania; (4) Lovu, Fiji; (5) Pointe aux Piment, Mauritius; (6) Pesqueiro, Brazil; (7) Kyzyl, Tyva Republic; and (8) Yasawa, Fiji. Related publication: Purzycki, et al. (2016). Moralistic Gods, Supernatural Punishment and the Expansion of Human Sociality. Nature, 530(7590): 327-330.

  16. Operating scheme for the light-emitting diode array of a volumetric display that exhibits multiple full-color dynamic images

    Science.gov (United States)

    Hirayama, Ryuji; Shiraki, Atsushi; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2017-07-01

    We designed and developed a control circuit for a three-dimensional (3-D) light-emitting diode (LED) array to be used in volumetric displays exhibiting full-color dynamic 3-D images. The circuit was implemented on a field-programmable gate array; therefore, pulse-width modulation, which requires high-speed processing, could be operated in real time. We experimentally evaluated the developed system by measuring the luminance of an LED with varying input and confirmed that the system works appropriately. In addition, we demonstrated that the volumetric display exhibits different full-color dynamic two-dimensional images in two orthogonal directions. Each of the exhibited images could be obtained only from the prescribed viewpoint. Such directional characteristics of the system are beneficial for applications, including digital signage, security systems, art, and amusement.

  17. Visualization and computer graphics on isotropically emissive volumetric displays.

    Science.gov (United States)

    Mora, Benjamin; Maciejewski, Ross; Chen, Min; Ebert, David S

    2009-01-01

    The availability of commodity volumetric displays provides ordinary users with a new means of visualizing 3D data. Many of these displays are in the class of isotropically emissive light devices, which are designed to directly illuminate voxels in a 3D frame buffer, producing X-ray-like visualizations. While this technology can offer intuitive insight into a 3D object, the visualizations are perceptually different from what a computer graphics or visualization system would render on a 2D screen. This paper formalizes rendering on isotropically emissive displays and introduces a novel technique that emulates traditional rendering effects on isotropically emissive volumetric displays, delivering results that are much closer to what is traditionally rendered on regular 2D screens. Such a technique can significantly broaden the capability and usage of isotropically emissive volumetric displays. Our method takes a 3D dataset or object as the input, creates an intermediate light field, and outputs a special 3D volume dataset called a lumi-volume. This lumi-volume encodes approximated rendering effects in a form suitable for display with accumulative integrals along unobtrusive rays. When a lumi-volume is fed directly into an isotropically emissive volumetric display, it creates a 3D visualization with surface shading effects that are familiar to the users. The key to this technique is an algorithm for creating a 3D lumi-volume from a 4D light field. In this paper, we discuss a number of technical issues, including transparency effects due to the dimension reduction and sampling rates for light fields and lumi-volumes. We show the effectiveness and usability of this technique with a selection of experimental results captured from an isotropically emissive volumetric display, and we demonstrate its potential capability and scalability with computer-simulated high-resolution results.

  18. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    Science.gov (United States)

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of the volumetric overlap metric, by comparing with the ground-truth segmentation performed by a radiologist.

  19. Reconstructing flaw image using dataset of full matrix capture technique

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Tae Hun; Kim, Yong Sik; Lee, Jeong Seok [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2017-02-15

    A conventional phased array ultrasonic system offers the ability to steer an ultrasonic beam by applying independent time delays to the individual elements in the array, producing an ultrasonic image. In contrast, full matrix capture (FMC) is a data acquisition process that collects a complete matrix of A-scans from every possible independent transmit-receive combination in a phased array transducer. With post-processing, it makes it possible to reconstruct images equivalent to a conventional phased array image, as well as various images that cannot be produced by a conventional phased array. In this paper, a basic algorithm based on the LLL-mode total focusing method (TFM) that can image crack-type flaws is described. This technique was then applied to reconstruct flaw images from FMC datasets obtained from experiments and ultrasonic simulation.
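The delay-and-sum core of the TFM follows directly from its definition: each image pixel sums every transmit-receive A-scan at the time of flight from transmitter to pixel to receiver. The sketch below shows the basic direct-path TFM only; the paper's LLL-mode variant traces a mode-specific skip path off the backwall, which is omitted here:

```python
import numpy as np

def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
    """Delay-and-sum Total Focusing Method on an FMC dataset.
    fmc[tx, rx, t] holds the A-scan for each transmit-receive pair,
    elem_x the element positions, c the wave speed, fs the sampling rate."""
    n_el = len(elem_x)
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            dist = np.hypot(elem_x - x, z)   # element-to-pixel distances
            for tx in range(n_el):
                for rx in range(n_el):
                    t = int(round((dist[tx] + dist[rx]) / c * fs))
                    if t < fmc.shape[2]:
                        img[iz, ix] += fmc[tx, rx, t]
    return img
```

Because every pair is focused at every pixel in software, a point scatterer reinforces across all N² pairs at its true location, which is why TFM resolution exceeds a single hardware-focused phased-array shot.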

  20. Volumetric multimodality neural network for brain tumor segmentation

    Science.gov (United States)

    Silvana Castillo, Laura; Alexandra Daza, Laura; Carlos Rivera, Luis; Arbeláez, Pablo

    2017-11-01

    Brain lesion segmentation is one of the hardest tasks to be solved in computer vision with an emphasis on the medical field. We present a convolutional neural network that produces a semantic segmentation of brain tumors, capable of processing volumetric data along with information from multiple MRI modalities at the same time. This results in the ability to learn from small training datasets and highly imbalanced data. Our method is based on DeepMedic, the state of the art in brain lesion segmentation. We develop a new architecture with more convolutional layers, organized in three parallel pathways with different input resolution, and additional fully connected layers. We tested our method over the 2015 BraTS Challenge dataset, reaching an average dice coefficient of 84%, while the standard DeepMedic implementation reached 74%.
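The reported overlap metric has a standard closed form (this is the textbook definition, not code from the paper):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice overlap between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|), the metric used in the BraTS evaluation."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0          # two empty masks agree perfectly
    return 2.0 * np.logical_and(pred, target).sum() / denom
```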

  1. COMPARISON OF VOLUMETRIC REGISTRATION ALGORITHMS FOR TENSOR-BASED MORPHOMETRY.

    Science.gov (United States)

    Villalon, Julio; Joshi, Anand A; Toga, Arthur W; Thompson, Paul M

    2011-01-01

    Nonlinear registration of brain MRI scans is often used to quantify morphological differences associated with disease or genetic factors. Recently, surface-guided fully 3D volumetric registrations have been developed that combine intensity-guided volume registrations with cortical surface constraints. In this paper, we compare one such algorithm to two popular high-dimensional volumetric registration methods: large-deformation viscous fluid registration, formulated in a Riemannian framework, and the diffeomorphic "Demons" algorithm. We performed an objective morphometric comparison, using a large MRI dataset from 340 young adult twin subjects to examine 3D patterns of correlations in anatomical volumes. Surface-constrained volume registration gave greater effect sizes for detecting morphometric associations near the cortex, while the other two approaches gave greater effect sizes subcortically. These findings suggest novel ways to combine the advantages of multiple methods in the future.

  2. Hierarchical anatomical brain networks for MCI prediction: revisiting volumetric measures.

    Directory of Open Access Journals (Sweden)

    Luping Zhou

    Owing to its clinical accessibility, T1-weighted MRI (Magnetic Resonance Imaging) has been extensively studied in the past decades for the prediction of Alzheimer's disease (AD) and mild cognitive impairment (MCI). The volumes of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) are the most commonly used measurements, resulting in many successful applications. It has been widely observed that disease-induced structural changes may not occur at isolated spots, but in several inter-related regions. Therefore, for better characterization of brain pathology, we propose in this paper a means to extract inter-regional correlation based features from local volumetric measurements. Specifically, our approach involves constructing an anatomical brain network for each subject, with each node representing a Region of Interest (ROI) and each edge representing the Pearson correlation of tissue volumetric measurements between ROI pairs. As second-order volumetric measurements, network features are more descriptive but also more sensitive to noise. To overcome this limitation, a hierarchy of ROIs is used to suppress noise at different scales. Pairwise interactions are considered not only for ROIs with the same scale in the same layer of the hierarchy, but also for ROIs across different scales in different layers. To address the high dimensionality problem resulting from the large number of network features, a supervised dimensionality reduction method is further employed to embed a selected subset of features into a low-dimensional feature space, while at the same time preserving discriminative information. We demonstrate with experimental results the efficacy of this embedding strategy in comparison with some other commonly used approaches. In addition, although the proposed method can be easily generalized to incorporate other metrics of regional similarities, the benefits of using Pearson correlation in our application are reinforced by the experimental results.
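The per-subject network construction can be sketched as below. This is illustrative only; treating tissue measurements as rows and ROIs as columns is an assumption about the data layout, not taken from the paper:

```python
import numpy as np

def roi_correlation_features(tissue_volumes):
    """Edge features of the anatomical network: Pearson correlation of
    tissue volumetric measurements (rows, e.g. GM/WM/CSF volumes)
    between every pair of ROIs (columns). The upper triangle of the
    ROI-by-ROI matrix serves as the second-order feature vector."""
    corr = np.corrcoef(tissue_volumes, rowvar=False)
    iu = np.triu_indices_from(corr, k=1)
    return corr, corr[iu]
```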

  3. Degree of contribution (DoC) feature selection algorithm for structural brain MRI volumetric features in depression detection.

    Science.gov (United States)

    Kipli, Kuryati; Kouzani, Abbas Z

    2015-07-01

    Accurate detection of depression at an individual level using structural magnetic resonance imaging (sMRI) remains a challenge. Brain volumetric changes at a structural level appear to have importance in depression biomarkers studies. An automated algorithm is developed to select brain sMRI volumetric features for the detection of depression. A feature selection (FS) algorithm called degree of contribution (DoC) is developed for selection of sMRI volumetric features. This algorithm uses an ensemble approach to determine the degree of contribution in detection of major depressive disorder. The DoC is the score of feature importance used for feature ranking. The algorithm involves four stages: feature ranking, subset generation, subset evaluation, and DoC analysis. The performance of DoC is evaluated on the Duke University Multi-site Imaging Research in the Analysis of Depression sMRI dataset. The dataset consists of 115 brain sMRI scans of 88 healthy controls and 27 depressed subjects. Forty-four sMRI volumetric features are used in the evaluation. The DoC score of forty-four features was determined as the accuracy threshold (Acc_Thresh) was varied. The DoC performance was compared with that of four existing FS algorithms. At all defined Acc_Threshs, DoC outperformed the four examined FS algorithms for the average classification score and the maximum classification score. DoC has a good ability to generate reduced-size subsets of important features that could yield high classification accuracy. Based on the DoC score, the most discriminant volumetric features are those from the left-brain region.
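The abstract specifies DoC only at a high level; the following is a hypothetical sketch of the ensemble-averaged importance scoring and ranking stage (the real four-stage DoC algorithm, with subset generation and evaluation, is richer than this):

```python
import numpy as np

def doc_like_scores(importances):
    """Hypothetical ensemble ranking: rows are classifiers in the
    ensemble, columns are sMRI volumetric features. Each row is
    normalized, and the per-feature mean acts as a DoC-style score
    of how much each feature contributes to detection."""
    norm = importances / importances.sum(axis=1, keepdims=True)
    scores = norm.mean(axis=0)
    ranking = np.argsort(scores)[::-1]   # most contributing first
    return scores, ranking
```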

  4. A Combined Random Forests and Active Contour Model Approach for Fully Automatic Segmentation of the Left Atrium in Volumetric MRI

    Directory of Open Access Journals (Sweden)

    Chao Ma

    2017-01-01

Full Text Available Segmentation of the left atrium (LA) from cardiac magnetic resonance imaging (MRI) datasets is of great importance for image-guided atrial fibrillation ablation, LA fibrosis quantification, and cardiac biophysical modelling. However, automated LA segmentation from cardiac MRI is challenging due to limited image resolution, considerable variability in anatomical structures across subjects, and dynamic motion of the heart. In this work, we propose a combined random forests (RFs) and active contour model (ACM) approach for fully automatic segmentation of the LA from cardiac volumetric MRI. Specifically, we employ the RFs within an autocontext scheme to effectively integrate contextual and appearance information from multisource images for LA shape inference. The inferred shape is then incorporated into a volume-scalable ACM to further improve segmentation accuracy. We validated the proposed method on the cardiac volumetric MRI datasets from the STACOM 2013 and HVSMR 2016 databases and showed that it outperforms other state-of-the-art automated LA segmentation methods. Validation metrics, average Dice coefficient (DC) and average surface-to-surface distance (S2S), were computed as 0.9227±0.0598 and 1.14±1.205 mm, versus 0.6222–0.878 and 1.34–8.72 mm obtained by other methods, respectively.
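The Dice coefficient reported above is straightforward to compute from two binary segmentation masks; a minimal self-contained sketch:

```python
def dice_coefficient(a, b):
    """Dice similarity between two binary masks given as flat 0/1 sequences:
    2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    total = sum(a) + sum(b)
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * inter / total if total else 1.0
```

In practice the masks would be flattened voxel arrays from the predicted and reference LA segmentations.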

  5. Vessel suppressed chest Computed Tomography for semi-automated volumetric measurements of solid pulmonary nodules.

    Science.gov (United States)

    Milanese, Gianluca; Eberhard, Matthias; Martini, Katharina; Vittoria De Martini, Ilaria; Frauenfelder, Thomas

    2018-04-01

To evaluate whether vessel-suppressed computed tomography (VSCT) can be reliably used for semi-automated volumetric measurements of solid pulmonary nodules, as compared to standard CT (SCT). MATERIAL AND METHODS: Ninety-three SCT scans were processed by dedicated software (ClearRead CT, Riverain Technologies, Miamisburg, OH, USA) that subtracts vessels from lung parenchyma. Semi-automated volumetric measurements of 65 solid nodules were compared between SCT and VSCT. The measurements were repeated by two readers. For each solid nodule, the volumes measured on SCT by Reader 1 and Reader 2 were averaged, and this average volume acted as the standard of reference. Concordance between measurements was assessed using Lin's concordance correlation coefficient (CCC). Limits of agreement (LoA) between readers and between CT datasets were evaluated. Standard of reference nodule volume ranged from 13 to 366 mm³. The mean overestimation between readers was 3 mm³ and 2.9 mm³ on SCT and VSCT, respectively. Semi-automated volumetric measurements on VSCT showed substantial agreement with the standard of reference (Lin's CCC = 0.990 for Reader 1; 0.985 for Reader 2). The upper and lower LoA between readers' measurements were (16.3, -22.4 mm³) and (15.5, -21.4 mm³) for SCT and VSCT, respectively. VSCT datasets are feasible for the measurement of solid nodules, showing an almost perfect concordance between readers and with measurements on SCT. Copyright © 2018 Elsevier B.V. All rights reserved.
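Lin's concordance correlation coefficient used above measures agreement, penalizing both poor correlation and shifts in location or scale. A self-contained sketch of the standard population formula:

```python
from statistics import mean

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population (biased) variances."""
    mx, my = mean(x), mean(y)
    vx = mean((a - mx) ** 2 for a in x)
    vy = mean((b - my) ** 2 for b in y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike Pearson's r, a constant offset between the two readers' volumes lowers the CCC even when the values are perfectly correlated.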

  6. Full-field mapping of internal strain distribution in red sandstone specimen under compression using digital volumetric speckle photography and X-ray computed tomography

    Directory of Open Access Journals (Sweden)

    Lingtao Mao

    2015-04-01

Full Text Available It is always desirable to know the interior deformation pattern when a rock is subjected to mechanical load. Few experimental techniques exist that can represent the full-field three-dimensional (3D) strain distribution inside a rock specimen, and yet this information is crucial for fully understanding the failure mechanism of rocks or other geomaterials. In this study, by using the newly developed digital volumetric speckle photography (DVSP) technique in conjunction with X-ray computed tomography (CT) and taking advantage of the natural 3D speckles formed inside the rock by material impurities and voids, we can probe the interior of a rock to map its deformation pattern under load and shed light on its failure mechanism. We apply this technique to the analysis of a red sandstone specimen under increasing uniaxial compressive load applied incrementally. The full-field 3D displacement fields are obtained in the specimen as a function of the load, from which both the volumetric and the deviatoric strain fields are calculated. Strain localization zones that lead to the eventual failure of the rock are identified. The results indicate that both shear and tension contribute to the failure mechanism.

  7. Soft bilateral filtering volumetric shadows using cube shadow maps.

    Directory of Open Access Journals (Sweden)

    Hatam H Ali

Full Text Available Volumetric shadows often increase the realism of rendered scenes in computer graphics. Typical volumetric shadow techniques do not provide a smooth transition effect in real time while preserving crisp boundaries. This research presents a new technique for generating high-quality volumetric shadows by sampling and interpolation. Contrary to the conventional ray marching method, which requires extensive time, the proposed technique adopts downsampling when calculating ray marching. Furthermore, light scattering is computed in a high dynamic range buffer to generate tone mapping. Bilateral interpolation is used along view rays to smooth the transition of volumetric shadows while preserving edges. In addition, the technique applies a cube shadow map to create multiple shadows. The contribution of this technique is reducing the number of sample points in evaluating light scattering and then introducing bilateral interpolation to improve volumetric shadows, thereby removing significant inherent deficiencies of shadow maps. The technique produces soft volumetric shadows with good performance and high quality, which shows its potential for interactive applications.

  8. Soil volumetric water content measurements using TDR technique

    Directory of Open Access Journals (Sweden)

    S. Vincenzi

    1996-06-01

Full Text Available A physical model to measure some hydrological and thermal parameters in soils will be set up. The vertical profiles of volumetric water content, matric potential, and temperature will be monitored in different soils. The volumetric soil water content is measured by means of the Time Domain Reflectometry (TDR) technique. The result of a test to determine experimentally the reproducibility of the volumetric water content measurements is reported, together with the methodology and the results of the analysis of the TDR waveforms. The analysis is based on the calculation of the travel time of the TDR signal in the waveguide embedded in the soil.
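The travel-time analysis described above converts the two-way transit time of the TDR pulse along a probe of known length into an apparent dielectric permittivity, which an empirical calibration then maps to volumetric water content. A sketch using the widely cited Topp et al. (1980) polynomial (the record's own calibration may differ):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def apparent_permittivity(travel_time_s, probe_length_m):
    """Apparent dielectric constant Ka from the two-way travel time t of the
    TDR pulse along a probe of length L: Ka = (c * t / (2 * L))**2."""
    return (C * travel_time_s / (2.0 * probe_length_m)) ** 2

def topp_vwc(ka):
    """Topp et al. (1980) empirical calibration: volumetric water content
    (m^3/m^3) as a cubic polynomial in Ka."""
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka ** 2 + 4.3e-6 * ka ** 3
```

For example, a two-way travel time of 10 ns on a 0.3 m probe corresponds to Ka near 25, i.e. a moderately wet soil.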

  9. Volumetric image interpretation in radiology: scroll behavior and cognitive processes.

    Science.gov (United States)

    den Boer, Larissa; van der Schaaf, Marieke F; Vincken, Koen L; Mol, Chris P; Stuijfzand, Bobby G; van der Gijp, Anouk

    2018-05-16

The interpretation of medical images is a primary task for radiologists. Besides two-dimensional (2D) images, current imaging technologies allow for volumetric display of medical images. Whereas current radiology practice increasingly uses volumetric images, the majority of studies on medical image interpretation are conducted on 2D images. The current study aimed to gain deeper insight into the volumetric image interpretation process by examining this process in twenty radiology trainees who all completed four volumetric image cases. Two types of data were obtained: scroll behavior and think-aloud data. Types of scroll behavior concerned oscillations, half runs, full runs, image manipulations, and interruptions. Think-aloud data were coded by a framework of knowledge and skills in radiology comprising three cognitive processes: perception, analysis, and synthesis. Relating scroll behavior to cognitive processes showed that oscillations and half runs coincided more often with analysis and synthesis than full runs did, whereas full runs coincided more often with perception than oscillations and half runs did. Interruptions were characterized by synthesis, and image manipulations by perception. In addition, we investigated relations between cognitive processes and found an overall bottom-up way of reasoning with dynamic interactions between cognitive processes, especially between perception and analysis. In sum, our results highlight the dynamic interactions between these processes and the grounding of cognitive processes in scroll behavior. This suggests that the types of scroll behavior are relevant to describe how radiologists interact with and manipulate volumetric images.

  10. Aspects of volumetric efficiency measurement for reciprocating engines

    Directory of Open Access Journals (Sweden)

    Pešić Radivoje B.

    2013-01-01

Full Text Available The volumetric efficiency significantly influences engine output. Both the design and the dimensions of the intake and exhaust systems have a large impact on volumetric efficiency. Experimental equipment for measuring airflow through the engine, which is placed in the intake system, may affect the results of measurements and distort the real picture of the impact of individual structural factors. This paper deals with the problems of experimental determination of intake airflow using orifice plates and the influence of orifice plate diameter on the results of the measurements. The problems of airflow measurements through a multi-process Otto/Diesel engine were analyzed. An original method for determining volumetric efficiency was developed based on in-cylinder pressure measurement during motored operation, and appropriate calibration of the experimental procedure was performed. Good correlation was found between the results of the original method for determining volumetric efficiency and the results of the theoretical model used in research on the influence of intake pipe length on volumetric efficiency. [Acknowledgment: the paper is the result of the research within the project TR 35041 financed by the Ministry of Science and Technological Development of the Republic of Serbia.]
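The quantities discussed above can be sketched numerically: an incompressible orifice-plate flow estimate, and the volumetric efficiency of a four-stroke engine (one intake stroke every two revolutions). The discharge coefficient and geometry values in the test below are illustrative assumptions, not values from the record:

```python
import math

def orifice_volume_flow(cd, d_orifice_m, dp_pa, rho_kg_m3):
    """Incompressible orifice-plate estimate: Q = Cd * A * sqrt(2 * dp / rho),
    with A the orifice bore area (m^2) and Q in m^3/s."""
    area = math.pi * d_orifice_m ** 2 / 4.0
    return cd * area * math.sqrt(2.0 * dp_pa / rho_kg_m3)

def engine_volumetric_efficiency(q_actual_m3s, displacement_m3, rpm):
    """Ratio of measured intake flow to the theoretical flow of a four-stroke
    engine, which fills its displacement once every two revolutions."""
    q_theoretical = displacement_m3 * rpm / 60.0 / 2.0
    return q_actual_m3s / q_theoretical
```

This is the basic relationship the orifice-plate diameter perturbs: a restrictive plate lowers the measured `q_actual`, biasing the apparent efficiency.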

  11. Analytic Intermodel Consistent Modeling of Volumetric Human Lung Dynamics.

    Science.gov (United States)

    Ilegbusi, Olusegun; Seyfi, Behnaz; Neylon, John; Santhanam, Anand P

    2015-10-01

    Human lung undergoes breathing-induced deformation in the form of inhalation and exhalation. Modeling the dynamics is numerically complicated by the lack of information on lung elastic behavior and fluid-structure interactions between air and the tissue. A mathematical method is developed to integrate deformation results from a deformable image registration (DIR) and physics-based modeling approaches in order to represent consistent volumetric lung dynamics. The computational fluid dynamics (CFD) simulation assumes the lung is a poro-elastic medium with spatially distributed elastic property. Simulation is performed on a 3D lung geometry reconstructed from four-dimensional computed tomography (4DCT) dataset of a human subject. The heterogeneous Young's modulus (YM) is estimated from a linear elastic deformation model with the same lung geometry and 4D lung DIR. The deformation obtained from the CFD is then coupled with the displacement obtained from the 4D lung DIR by means of the Tikhonov regularization (TR) algorithm. The numerical results include 4DCT registration, CFD, and optimal displacement data which collectively provide consistent estimate of the volumetric lung dynamics. The fusion method is validated by comparing the optimal displacement with the results obtained from the 4DCT registration.
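The Tikhonov coupling of the CFD deformation with the DIR displacement can be illustrated, in a drastically simplified voxel-wise scalar form, as the closed-form minimizer of a two-term quadratic objective (the paper's actual TR algorithm operates on full 3-D displacement fields; this is only a sketch of the principle):

```python
def tikhonov_fuse(u_model, u_registration, lam):
    """Voxel-wise minimizer of ||u - u_model||^2 + lam * ||u - u_registration||^2.
    Setting the derivative to zero gives u = (u_model + lam * u_reg) / (1 + lam),
    i.e. a weighted blend of the physics-based and registration displacements."""
    return [(m + lam * r) / (1.0 + lam) for m, r in zip(u_model, u_registration)]
```

The regularization weight `lam` controls how strongly the fused field is pulled toward the 4DCT registration versus the CFD forecast.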

  12. Three-dimensional volumetric gray-scale uterine cervix histogram prediction of days to delivery in full term pregnancy.

    Science.gov (United States)

    Kim, Ji Youn; Kim, Hai-Joong; Hahn, Meong Hi; Jeon, Hye Jin; Cho, Geum Joon; Hong, Sun Chul; Oh, Min Jeong

    2013-09-01

Our aim was to determine whether the volumetric gray-scale histogram difference between the anterior and posterior cervix can indicate the extent of cervical consistency. We collected data from 95 patients at 36 to 37 weeks of gestational age who were appropriate for vaginal delivery, from September 2010 to October 2011 in the Department of Obstetrics and Gynecology, Korea University Ansan Hospital. Patients were excluded who had any of the following: Cesarean section, labor induction, or premature rupture of membranes. Thirty-four patients were finally enrolled. The patients underwent evaluation of the cervix by Bishop score, cervical length, cervical volume, and three-dimensional (3D) cervical volumetric gray-scale histogram. The interval in days from the cervix evaluation to the delivery day was counted. We compared the 3D cervical volumetric gray-scale histogram, Bishop score, cervical length, and cervical volume with the interval in days from the evaluation of the cervix to the delivery. The gray-scale histogram difference between the anterior and posterior cervix was significantly correlated with days to delivery; its correlation coefficient (R) was 0.500 (P = 0.003). The cervical length was also significantly related to days to delivery, with a correlation coefficient (R) of 0.421 and a P-value of 0.013. However, the anterior lip histogram, posterior lip histogram, total cervical volume, and Bishop score were not associated with days to delivery (P > 0.05). The gray-scale histogram difference between the anterior and posterior cervix, together with cervical length, correlated with days to delivery. These methods can be utilized to help predict cervical consistency.

  13. Exploring interaction with 3D volumetric displays

    Science.gov (United States)

    Grossman, Tovi; Wigdor, Daniel; Balakrishnan, Ravin

    2005-03-01

    Volumetric displays generate true volumetric 3D images by actually illuminating points in 3D space. As a result, viewing their contents is similar to viewing physical objects in the real world. These displays provide a 360 degree field of view, and do not require the user to wear hardware such as shutter glasses or head-trackers. These properties make them a promising alternative to traditional display systems for viewing imagery in 3D. Because these displays have only recently been made available commercially (e.g., www.actuality-systems.com), their current use tends to be limited to non-interactive output-only display devices. To take full advantage of the unique features of these displays, however, it would be desirable if the 3D data being displayed could be directly interacted with and manipulated. We investigate interaction techniques for volumetric display interfaces, through the development of an interactive 3D geometric model building application. While this application area itself presents many interesting challenges, our focus is on the interaction techniques that are likely generalizable to interactive applications for other domains. We explore a very direct style of interaction where the user interacts with the virtual data using direct finger manipulations on and around the enclosure surrounding the displayed 3D volumetric image.

  14. Visualization of conserved structures by fusing highly variable datasets.

    Science.gov (United States)

    Silverstein, Jonathan C; Chhadia, Ankur; Dech, Fred

    2002-01-01

Skill, effort, and time are required to identify and visualize anatomic structures in three dimensions from radiological data. Fundamentally, automating these processes requires a technique that uses symbolic information not in the dynamic range of the voxel data. We were developing such a technique based on mutual information for automatic multi-modality image fusion (MIAMI Fuse, University of Michigan). This system previously demonstrated facility at fusing one voxel dataset with integrated symbolic structure information to a CT dataset (different scale and resolution) from the same person. The next step in the development of our technique was aimed at accommodating the variability of anatomy from patient to patient by using warping to fuse our standard dataset to arbitrary patient CT datasets. A standard symbolic information dataset was created from the full-color Visible Human Female by segmenting the liver parenchyma, portal veins, and hepatic veins and overwriting each set of voxels with a fixed color. Two arbitrarily selected patient CT scans of the abdomen were used as reference datasets. We used the warping functions in MIAMI Fuse to align the standard structure data to each patient scan. The key to successful fusion was the focused use of multiple warping control points that place themselves around the structure of interest automatically. The user assigns only a few initial control points to align the scans. Fusions 1 and 2 transformed the atlas with 27 points around the liver to CT1 and CT2, respectively. Fusion 3 transformed the atlas with 45 control points around the liver to CT1, and Fusion 4 transformed the atlas with 5 control points around the portal vein. The CT dataset is augmented with the transformed standard structure dataset, such that the warped structure masks are visualized in combination with the original patient dataset. This combined volume visualization is then rendered interactively in stereo on the ImmersaDesk in an immersive Virtual Reality environment.

  15. Deep learning in breast cancer risk assessment: evaluation of convolutional neural networks on a clinical dataset of full-field digital mammograms.

    Science.gov (United States)

    Li, Hui; Giger, Maryellen L; Huynh, Benjamin Q; Antropova, Natalia O

    2017-10-01

To evaluate deep learning in the assessment of breast cancer risk, in which convolutional neural networks (CNNs) with transfer learning are used to extract parenchymal characteristics directly from full-field digital mammographic (FFDM) images instead of using computerized radiographic texture analysis (RTA), 456 clinical FFDM cases were included: a "high-risk" BRCA1/2 gene-mutation carriers dataset (53 cases), a "high-risk" unilateral cancer patients dataset (75 cases), and a "low-risk" dataset (328 cases). Deep learning was compared to the use of features from RTA, as well as to a combination of both, in the task of distinguishing between high- and low-risk subjects. Similar classification performances were obtained using CNN features [area under the curve [Formula: see text]; standard error [Formula: see text]].

  16. Hologlyphics: volumetric image synthesis performance system

    Science.gov (United States)

    Funk, Walter

    2008-02-01

This paper describes a novel volumetric image synthesis system and artistic technique, which generate moving volumetric images in real-time, integrated with music. The system, called the Hologlyphic Funkalizer, is performance based, wherein the images and sound are controlled by a live performer, for the purposes of entertaining a live audience and creating a performance art form unique to volumetric and autostereoscopic images. While currently configured for a specific parallax barrier display, the Hologlyphic Funkalizer's architecture is completely adaptable to various volumetric and autostereoscopic display technologies. Sound is distributed through a multi-channel audio system; currently a quadraphonic speaker setup is implemented. The system controls volumetric image synthesis, production of music and spatial sound via acoustic analysis and human gestural control, using a dedicated control panel, motion sensors, and multiple musical keyboards. Music can be produced by external acoustic instruments, pre-recorded sounds or custom audio synthesis integrated with the volumetric image synthesis. Aspects of the sound can control the evolution of images and vice versa. Sounds can be associated and interact with images; for example, voice synthesis can be combined with an animated volumetric mouth, where nuances of generated speech modulate the mouth's expressiveness. Different images can be sent to up to 4 separate displays. The system applies many novel volumetric special effects, and extends several film and video special effects into the volumetric realm. Extensive and varied content has been developed and shown to live audiences by a live performer. Real world applications will be explored, with feedback on the human factors.

  17. DIFFERENTIAL ANALYSIS OF VOLUMETRIC STRAINS IN POROUS MATERIALS IN TERMS OF WATER FREEZING

    Directory of Open Access Journals (Sweden)

    Rusin Z.

    2013-06-01

Full Text Available The paper presents the differential analysis of volumetric strain (DAVS). The method allows measurement of the volumetric deformations of capillary-porous materials caused by the water-ice phase change. The VSE indicator (volumetric strain effect), which under certain conditions can be interpreted as the minimum degree of phase change of the water contained in the material pores, is proposed. The DAVS test results for three materials with diversified microstructure (clinker brick, calcium-silicate brick, and Portland cement mortar) were compared with the pore characteristics obtained with mercury intrusion porosimetry.

  18. Extended Kalman filtering for continuous volumetric MR-temperature imaging.

    Science.gov (United States)

    Denis de Senneville, Baudouin; Roujol, Sébastien; Hey, Silke; Moonen, Chrit; Ries, Mario

    2013-04-01

Real time magnetic resonance (MR) thermometry has evolved into the method of choice for the guidance of high-intensity focused ultrasound (HIFU) interventions. For this role, MR-thermometry should preferably have a high temporal and spatial resolution and allow observing the temperature over the entire targeted area and its vicinity with a high accuracy. In addition, the precision of real time MR-thermometry for therapy guidance is generally limited by the available signal-to-noise ratio (SNR) and the influence of physiological noise. MR-guided HIFU would benefit from large-coverage volumetric temperature maps, including characterization of volumetric heating trajectories as well as near- and far-field heating. In this paper, continuous volumetric MR-temperature monitoring was obtained as follows. The targeted area was continuously scanned during the heating process by a multi-slice sequence. Measured data and a priori knowledge of 3-D data derived from a forecast based on a physical model were combined using an extended Kalman filter (EKF). The proposed reconstruction improved the temperature measurement resolution and precision while maintaining guaranteed output accuracy. The method was evaluated experimentally ex vivo on a phantom, and in vivo on a porcine kidney, using HIFU heating. In the in vivo experiment, it allowed the reconstruction from a spatio-temporally under-sampled dataset (with an update rate for each voxel of 1.143 s) to a 3-D dataset covering a field of view of 142.5×285×54 mm³ with a voxel size of 3×3×6 mm³ and a temporal resolution of 0.127 s. The method also provided noise reduction, while having a minimal impact on accuracy and latency.
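At its core, the EKF combines a model forecast with each new measurement, weighted by their respective uncertainties. A deliberately scalar sketch of one predict/update cycle for a single voxel's temperature (the paper's filter is multidimensional and drives its prediction with a physical heating model; `model_drift` below is only a placeholder for that forecast):

```python
def kalman_step(x, p, z, q, r, model_drift=0.0):
    """One predict/update cycle of a scalar Kalman filter.
    x, p: current state estimate and its variance
    z:    new noisy measurement (variance r)
    q:    process-noise variance added by the prediction step."""
    # Predict: advance the state with the (simplified) model forecast.
    x_pred = x + model_drift
    p_pred = p + q
    # Update: blend in the measurement according to the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

Repeated updates with consistent measurements shrink the estimate variance, which is the mechanism behind the noise reduction reported above.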

  19. Adaptive controller for volumetric display of neuroimaging studies

    Science.gov (United States)

    Bleiberg, Ben; Senseney, Justin; Caban, Jesus

    2014-03-01

    Volumetric display of medical images is an increasingly relevant method for examining an imaging acquisition as the prevalence of thin-slice imaging increases in clinical studies. Current mouse and keyboard implementations for volumetric control provide neither the sensitivity nor specificity required to manipulate a volumetric display for efficient reading in a clinical setting. Solutions to efficient volumetric manipulation provide more sensitivity by removing the binary nature of actions controlled by keyboard clicks, but specificity is lost because a single action may change display in several directions. When specificity is then further addressed by re-implementing hardware binary functions through the introduction of mode control, the result is a cumbersome interface that fails to achieve the revolutionary benefit required for adoption of a new technology. We address the specificity versus sensitivity problem of volumetric interfaces by providing adaptive positional awareness to the volumetric control device by manipulating communication between hardware driver and existing software methods for volumetric display of medical images. This creates a tethered effect for volumetric display, providing a smooth interface that improves on existing hardware approaches to volumetric scene manipulation.

  20. Soil moisture datasets at five sites in the central Sierra Nevada and northern Coast Ranges, California

    Science.gov (United States)

    Stern, Michelle A.; Anderson, Frank A.; Flint, Lorraine E.; Flint, Alan L.

    2018-05-03

    In situ soil moisture datasets are important inputs used to calibrate and validate watershed, regional, or statewide modeled and satellite-based soil moisture estimates. The soil moisture dataset presented in this report includes hourly time series of the following: soil temperature, volumetric water content, water potential, and total soil water content. Data were collected by the U.S. Geological Survey at five locations in California: three sites in the central Sierra Nevada and two sites in the northern Coast Ranges. This report provides a description of each of the study areas, procedures and equipment used, processing steps, and time series data from each site in the form of comma-separated values (.csv) tables.
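As an illustration of consuming such hourly .csv time series, a minimal stdlib-only sketch (the column names here are hypothetical; the actual tables in the report define their own headers):

```python
import csv
import io
from statistics import mean

# A tiny in-memory stand-in for one site's hourly file.
sample = io.StringIO(
    "datetime,soil_temp_c,vwc\n"
    "2017-01-01 00:00,8.1,0.21\n"
    "2017-01-01 01:00,8.0,0.22\n"
    "2017-01-01 02:00,7.9,0.20\n"
)

rows = list(csv.DictReader(sample))
# Average volumetric water content over the loaded records.
mean_vwc = mean(float(r["vwc"]) for r in rows)
```

Replacing `sample` with `open(path, newline="")` reads a real file the same way.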

  1. Potential Applications of Flat-Panel Volumetric CT in Morphologic, Functional Small Animal Imaging

    Directory of Open Access Journals (Sweden)

    Susanne Greschus

    2005-08-01

    Full Text Available Noninvasive radiologic imaging has recently gained considerable interest in basic, preclinical research for monitoring disease progression, therapeutic efficacy. In this report, we introduce flat-panel volumetric computed tomography (fpVCT as a powerful new tool for noninvasive imaging of different organ systems in preclinical research. The three-dimensional visualization that is achieved by isotropic high-resolution datasets is illustrated for the skeleton, chest, abdominal organs, brain of mice. The high image quality of chest scans enables the visualization of small lung nodules in an orthotopic lung cancer model, the reliable imaging of therapy side effects such as lung fibrosis. Using contrast-enhanced scans, fpVCT displayed the vascular trees of the brain, liver, kidney down to the subsegmental level. Functional application of fpVCT in dynamic contrast-enhanced scans of the rat brain delivered physiologically reliable data of perfusion, tissue blood volume. Beyond scanning of small animal models as demonstrated here, fpVCT provides the ability to image animals up to the size of primates.

  2. Search for the lepton flavour violating decay μ⁺ → e⁺γ with the full dataset of the MEG experiment

    Energy Technology Data Exchange (ETDEWEB)

    Baldini, A.M.; Cerri, C.; Dussoni, S.; Galli, L.; Grassi, M.; Morsani, F.; Pazzi, R.; Raffaelli, F.; Sergiampietri, F.; Signorelli, G. [Pisa Univ. (Italy); INFN Sezione di Pisa, Pisa (Italy); Bao, Y.; Egger, J.; Hildebrandt, M.; Kettle, P.R.; Mtchedilishvili, A.; Papa, A.; Ritt, S. [Paul Scherrer Institut PSI, Villigen (Switzerland); Baracchini, E. [ICEPP, The University of Tokyo, Tokyo (Japan); Bemporad, C.; Cei, F.; D'Onofrio, A.; Nicolo, D.; Tenchini, F. [Pisa Univ. (Italy). Dipt. di Fisica; INFN Sezione di Pisa, Pisa (Italy); Berg, F.; Hodge, Z.; Rutar, G. [Paul Scherrer Institut PSI, Villigen (Switzerland); Swiss Federal Institute of Technology ETH, Zurich (Switzerland); Biasotti, M.; Gatti, F.; Pizzigoni, G. [INFN Sezione di Genova, Genoa (Italy); Genoa Univ., Dipartimento di Fisica (Italy); Boca, G.; De Bari, A.; Nardo, R.; Simonetta, M. [INFN Sezione di Pavia, Pavia (Italy); Pavia Univ., Dipartimento di Fisica (Italy); Cascella, M. [INFN Sezione di Lecce, Lecce (Italy); Universita del Salento, Dipartimento di Matematica e Fisica, Lecce (Italy); University College London, Department of Physics and Astronomy, London (United Kingdom); Cattaneo, P.W.; Rossella, M. [Pavia Univ. (Italy); INFN Sezione di Pavia, Pavia (Italy); Cavoto, G.; Piredda, G.; Voena, C. [Rome Univ. ''Sapienza'' (Italy); INFN Sezione di Roma, Rome (Italy); Chiarello, G.; Chiri, C.; Corvaglia, A.; Panareo, M.; Pepino, A. [INFN Sezione di Lecce, Lecce (Italy); Universita del Salento, Dipartimento di Matematica e Fisica, Lecce (Italy); De Gerone, M. [Genoa Univ. (Italy); INFN Sezione di Genova, Genoa (Italy); Doke, T. [Waseda University, Research Institute for Science and Engineering, Tokyo (Japan); Fujii, Y.; Ieki, K.; Iwamoto, T.; Kaneko, D.; Mori, Toshinori; Nakaura, S.; Nishimura, M.; Ogawa, S.; Ootani, W.; Orito, S.; Sawada, R.; Uchiyama, Y.; Yoshida, K. [ICEPP, The University of Tokyo, Tokyo (Japan); Grancagnolo, F.; Tassielli, G.F. 
[Universita del Salento (Italy); INFN Sezione di Lecce, Lecce (Italy); Graziosi, A.; Ripiccini, E. [INFN Sezione di Roma, Rome (Italy); Rome Univ. ''Sapienza'', Dipartimento di Fisica (Italy); Grigoriev, D.N. [Budker Institute of Nuclear Physics, Russian Academy of Sciences, Novosibirsk (Russian Federation); Novosibirsk State Technical University, Novosibirsk (Russian Federation); Novosibirsk State University, Novosibirsk (Russian Federation); Haruyama, T.; Maki, A.; Mihara, S.; Nishiguchi, H.; Yamamoto, A. [KEK, High Energy Accelerator Research Organization, Tsukuba, Ibaraki (JP); Ignatov, F.; Khazin, B.I.; Popov, A.; Yudin, Yu.V. [Budker Institute of Nuclear Physics, Russian Academy of Sciences, Novosibirsk (RU); Novosibirsk State University, Novosibirsk (RU); Kang, T.I.; Lim, G.M.A.; Molzon, W.; You, Z.; Zanello, D. [University of California, Irvine, CA (US); Khomutov, N.; Korenchenko, A.; Kravchuk, N.; Mzavia, D. [Joint Institute for Nuclear Research, Dubna (RU); Renga, F. [Paul Scherrer Institut PSI, Villigen (CH); INFN Sezione di Roma, Rome (IT); Rome Univ. ''Sapienza'', Dipartimento di Fisica, Rome (IT); Venturini, M. [INFN Sezione di Pisa, Pisa (IT); Pisa Univ., Scuola Normale Superiore (IT); Collaboration: MEG Collaboration

    2016-08-15

    The final results of the search for the lepton flavour violating decay μ⁺ → e⁺γ based on the full dataset collected by the MEG experiment at the Paul Scherrer Institut in the period 2009-2013 and totalling 7.5 × 10¹⁴ stopped muons on target are presented. No significant excess of events is observed in the dataset with respect to the expected background, and a new upper limit on the branching ratio of this decay of B(μ⁺ → e⁺γ) < 4.2 × 10⁻¹³ (90% confidence level) is established, which represents the most stringent limit on the existence of this decay to date. (orig.)
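As a simplified illustration of how such a counting limit arises (the MEG analysis itself uses a full likelihood fit with background estimates, not this bare counting construction), a classical Poisson upper limit for n observed events can be found by bisection, and a branching-ratio limit follows by dividing by the exposure times efficiency:

```python
import math

def poisson_upper_limit(n_obs, cl=0.90, tol=1e-9):
    """Classical (Neyman) upper limit s on a Poisson mean given n_obs events:
    the value solving P(X <= n_obs | s) = 1 - cl, found by bisection."""
    def p_le(n, s):
        return sum(math.exp(-s) * s ** k / math.factorial(k) for k in range(n + 1))
    lo, hi = 0.0, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if p_le(n_obs, mid) > 1.0 - cl:
            lo = mid  # s still too small
        else:
            hi = mid
    return 0.5 * (lo + hi)

def branching_ratio_limit(s_up, n_stopped, efficiency):
    """Convert an event-count limit into a branching-ratio limit via the
    single-event sensitivity 1 / (n_stopped * efficiency)."""
    return s_up / (n_stopped * efficiency)
```

For zero observed events this reproduces the textbook 90% CL value of ln(10) ≈ 2.30 expected signal events.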

  3. Editorial: Datasets for Learning Analytics

    NARCIS (Netherlands)

    Dietze, Stefan; George, Siemens; Davide, Taibi; Drachsler, Hendrik

    2018-01-01

The European LinkedUp and LACE (Learning Analytics Community Exchange) projects have been responsible for setting up a series of data challenges at the LAK conferences 2013 and 2014 around the LAK dataset. The LAK dataset consists of a rich collection of full-text publications in the domain of learning analytics.

  4. A novel image processing technique for 3D volumetric analysis of severely resorbed alveolar sockets with CBCT.

    Science.gov (United States)

    Manavella, Valeria; Romano, Federica; Garrone, Federica; Terzini, Mara; Bignardi, Cristina; Aimetti, Mario

    2017-06-01

    The aim of this study was to present and validate a novel procedure for the quantitative volumetric assessment of extraction sockets that combines cone-beam computed tomography (CBCT) and image processing techniques. The CBCT dataset of 9 severely resorbed extraction sockets was analyzed by means of two image processing software packages, ImageJ and Mimics, using manual and automated segmentation techniques. They were also applied to 5-mm spherical aluminum markers of known volume and to a polyvinyl chloride model of one alveolar socket scanned with Micro-CT to test the accuracy. Statistically significant differences in alveolar socket volume were found between the different methods of volumetric analysis (P < 0.05). The automated segmentation of the sockets showed more accurate results, excellent inter-observer similarity and increased user friendliness. The clinical application of this method enables a three-dimensional evaluation of extraction socket healing after the reconstructive procedures and during the follow-up visits.

  5. Characterizing volumetric deformation behavior of naturally occurring bituminous sand materials

    CSIR Research Space (South Africa)

    Anochie-Boateng, Joseph

    2009-05-01

    Full Text Available newly proposed hydrostatic compression test procedure. The test procedure applies field loading conditions of off-road construction and mining equipment to closely simulate the volumetric deformation and stiffness behaviour of oil sand materials. Based...

  6. The Influence of Water and Mineral Oil On Volumetric Losses in a Hydraulic Motor

    Directory of Open Access Journals (Sweden)

    Śliwiński Pawel

    2017-04-01

    Full Text Available In this paper, volumetric losses in a hydraulic motor supplied with water and with mineral oil (two liquids having significantly different viscosity and lubricating properties) are described and compared. The experimental tests were conducted using an innovative hydraulic satellite motor designed to work with different liquids, including water. The sources of leakage in this motor are also characterized and described. On this basis, a mathematical model of volumetric losses and a model of effective rotational speed have been developed and presented. The volumetric losses calculated according to the model are compared with the experimental results; the difference is not more than 20%. Furthermore, it has been demonstrated that the model describes well the volumetric losses in the motor supplied with either water or oil. Experimental studies have shown that the volumetric losses in the motor supplied with water are up to three times greater than the volumetric losses in the motor supplied with oil. It has also been shown that, with a small constant stream of water, the motor speed is reduced by as much as half compared with the speed obtained with the same stream of oil.

  7. Optical Addressing of Multi-Colour Photochromic Material Mixture for Volumetric Display

    Science.gov (United States)

    Hirayama, Ryuji; Shiraki, Atsushi; Naruse, Makoto; Nakamura, Shinichiro; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2016-08-01

    This is the first study to demonstrate that colour transformations in the volume of a photochromic material (PM) are induced at the intersections of two control light channels, one controlling PM colouration and the other controlling decolouration. PM colouration is thus induced with position selectivity, and therefore a dynamic volumetric display may be realised using these two control lights. Moreover, a mixture of multiple PM types with different absorption properties exhibits different colours depending on the control light spectrum. In particular, spectrum management of the control light allows colour-selective colouration in addition to position selectivity. Therefore, a PM-based, full-colour volumetric display is realised. We experimentally construct a mixture of two PM types and validate the operating principles of such a volumetric display system. Our system is constructed simply by mixing multiple PM types; therefore, the display hardware structure is extremely simple, and the minimum size of a volume element can be as small as the size of a molecule. Volumetric displays can provide natural three-dimensional (3D) perception; therefore, the potential uses of our system include high-definition 3D visualisation for medical applications, architectural design, human-computer interactions, advertising, and entertainment.

  8. Bounding uncertainty in volumetric geometric models for terrestrial lidar observations of ecosystems.

    Science.gov (United States)

    Paynter, Ian; Genest, Daniel; Peri, Francesco; Schaaf, Crystal

    2018-04-06

    Volumetric models with known biases are shown to provide bounds for the uncertainty in estimations of volume for ecologically interesting objects, observed with a terrestrial laser scanner (TLS) instrument. Bounding cuboids, three-dimensional convex hull polygons, voxels, the Outer Hull Model and Square Based Columns (SBCs) are considered for their ability to estimate the volume of temperate and tropical trees, as well as geomorphological features such as bluffs and saltmarsh creeks. For temperate trees, supplementary geometric models are evaluated for their ability to bound the uncertainty in cylinder-based reconstructions, finding that coarser volumetric methods do not currently constrain volume meaningfully, but may be helpful with further refinement, or in hybridized models. Three-dimensional convex hull polygons consistently overestimate object volume, and SBCs consistently underestimate volume. Voxel estimations vary in their bias, due to the point density of the TLS data, and occlusion, particularly in trees. The response of the models to parametrization is analysed, observing unexpected trends in the SBC estimates for the drumlin dataset. Establishing that this result is due to the resolution of the TLS observations being insufficient to support the resolution of the geometric model, it is suggested that geometric models with predictable outcomes can also highlight data quality issues when they produce illogical results.
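Two of the coarser estimators named above (bounding cuboids and voxels) can be sketched in a few lines. This is a minimal pure-Python illustration on an invented toy point cloud, not the authors' implementation; function names are illustrative, and real TLS clouds with millions of points would use spatial libraries.

```python
# Two coarse volumetric estimators for a point cloud, in the spirit of the
# bounding-cuboid and voxel models above. Toy cloud and names are invented.

def bounding_cuboid_volume(points):
    """Volume of the axis-aligned cuboid enclosing all points (tends to overestimate)."""
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs)) * (max(ys) - min(ys)) * (max(zs) - min(zs))

def voxel_volume(points, size):
    """Sum the volume of cubic voxels (edge `size`) that contain at least one point."""
    occupied = {(int(x // size), int(y // size), int(z // size)) for x, y, z in points}
    return len(occupied) * size ** 3

points = [(0.1, 0.1, 0.1), (0.5, 0.5, 0.5), (0.9, 0.9, 0.9)]
print(bounding_cuboid_volume(points))  # ~0.512 (a 0.8-unit cube)
print(voxel_volume(points, 0.5))       # 0.25: two occupied 0.5-unit voxels
```

As the abstract notes, the voxel estimate depends directly on the chosen `size` relative to the point density, which is exactly the parametrization sensitivity discussed.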

  9. Volumetric polymerization shrinkage of contemporary composite resins

    Directory of Open Access Journals (Sweden)

    Halim Nagem Filho

    2007-10-01

    Full Text Available The polymerization shrinkage of composite resins may negatively affect the clinical outcome of the restoration. Extensive research has been carried out to develop new formulations of composite resins that provide good handling characteristics and some dimensional stability during polymerization. The purpose of this study was to analyze, in vitro, the magnitude of the volumetric polymerization shrinkage of 7 contemporary composite resins (Definite, Suprafill, SureFil, Filtek Z250, Fill Magic, Alert, and Solitaire) to determine whether there are differences among these materials. The tests were conducted with a precision of 0.1 mg. The volumetric shrinkage was measured by hydrostatic weighing before and after polymerization and calculated by known mathematical equations. One-way ANOVA (α ≤ 0.05) was used to determine statistically significant differences in volumetric shrinkage among the tested composite resins. Suprafill (1.87 ± 0.01) and Definite (1.89 ± 0.01) shrank significantly less than the other composite resins. SureFil (2.01 ± 0.06), Filtek Z250 (1.99 ± 0.03), and Fill Magic (2.02 ± 0.02) presented intermediate levels of polymerization shrinkage. Alert and Solitaire presented the highest degree of polymerization shrinkage. Knowing the polymerization shrinkage rates of the commercially available composite resins, the dentist would be able to choose between using composite resins with lower polymerization shrinkage rates or adopting technical or operational procedures to minimize the adverse effects deriving from resin contraction during light-activation.
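The hydrostatic-weighing calculation described above can be sketched as follows: specimen volume follows from Archimedes' principle, and shrinkage is the relative volume change on curing. The masses and water-density value below are invented for illustration; this is not a reproduction of the study's exact equations.

```python
# Shrinkage by hydrostatic weighing: volume from Archimedes' principle,
# then relative volume change on polymerization. Sample masses are made up.

WATER_DENSITY = 0.9982  # g/cm^3 near room temperature (assumed value)

def volume_cm3(mass_in_air_g, mass_in_water_g, rho=WATER_DENSITY):
    """Specimen volume from its apparent mass loss when submerged."""
    return (mass_in_air_g - mass_in_water_g) / rho

def volumetric_shrinkage_percent(v_before, v_after):
    """Relative volume decrease after polymerization, in percent."""
    return 100.0 * (v_before - v_after) / v_before

v_uncured = volume_cm3(0.5000, 0.2509)  # before light-activation
v_cured = volume_cm3(0.5000, 0.2559)    # after curing: denser, smaller
print(round(volumetric_shrinkage_percent(v_uncured, v_cured), 2))  # ~2.01
```

Note that the liquid density cancels in the shrinkage ratio, so only the weighings' precision (0.1 mg in the study) limits the result.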

  10. Volumetric CT-images improve testing of radiological image interpretation skills

    Energy Technology Data Exchange (ETDEWEB)

    Ravesloot, Cécile J., E-mail: C.J.Ravesloot@umcutrecht.nl [Radiology Department at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, Room E01.132 (Netherlands); Schaaf, Marieke F. van der, E-mail: M.F.vanderSchaaf@uu.nl [Department of Pedagogical and Educational Sciences at Utrecht University, Heidelberglaan 1, 3584 CS Utrecht (Netherlands); Schaik, Jan P.J. van, E-mail: J.P.J.vanSchaik@umcutrecht.nl [Radiology Department at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, Room E01.132 (Netherlands); Cate, Olle Th.J. ten, E-mail: T.J.tenCate@umcutrecht.nl [Center for Research and Development of Education at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht (Netherlands); Gijp, Anouk van der, E-mail: A.vanderGijp-2@umcutrecht.nl [Radiology Department at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, Room E01.132 (Netherlands); Mol, Christian P., E-mail: C.Mol@umcutrecht.nl [Image Sciences Institute at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht (Netherlands); Vincken, Koen L., E-mail: K.Vincken@umcutrecht.nl [Image Sciences Institute at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht (Netherlands)

    2015-05-15

    Rationale and objectives: Current radiology practice increasingly involves interpretation of volumetric data sets. In contrast, most radiology tests still contain only 2D images. We introduced a new testing tool that allows stack viewing of volumetric images in our undergraduate radiology program. We hypothesized that tests with volumetric CT-images enhance test quality, in comparison with traditional, completely 2D image-based tests, because they might better reflect the skills required for clinical practice. Materials and methods: Two groups of medical students (n = 139; n = 143), trained with 2D and volumetric CT-images, took a digital radiology test in two versions (A and B), each containing both 2D and volumetric CT-image questions. In a questionnaire, they were asked to comment on the representativeness for clinical practice, difficulty and user-friendliness of the test questions and testing program. Students’ test scores and reliabilities, measured with Cronbach's alpha, of 2D and volumetric CT-image tests were compared. Results: Estimated reliabilities (Cronbach's alphas) were higher for volumetric CT-image scores (version A: .51 and version B: .54) than for 2D CT-image scores (version A: .24 and version B: .37). Participants found volumetric CT-image tests more representative of clinical practice, and considered them to be less difficult than 2D CT-image questions. However, in one version (A), volumetric CT-image scores (M 80.9, SD 14.8) were significantly lower than 2D CT-image scores (M 88.4, SD 10.4) (p < .001). The volumetric CT-image testing program was considered user-friendly. Conclusion: This study shows that volumetric image questions can be successfully integrated in students’ radiology testing. Results suggest that the inclusion of volumetric CT-images might improve the quality of radiology tests by positively impacting perceived representativeness for clinical practice and increasing reliability of the test.
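Cronbach's alpha, the reliability coefficient reported above, is computed from a students-by-items score matrix. A minimal generic sketch (the toy scores are invented, and this is not the authors' analysis code):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).

def variance(values):
    """Unbiased sample variance."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

def cronbach_alpha(scores):
    """scores: one row per student, one column per test item."""
    k = len(scores[0])                      # number of items
    item_columns = list(zip(*scores))       # per-item score lists
    sum_item_var = sum(variance(col) for col in item_columns)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

scores = [[1, 1], [0, 0], [1, 0], [1, 1]]  # 4 students, 2 items (toy data)
print(round(cronbach_alpha(scores), 3))    # 0.727
```

Values like the .24-.54 range reported above would come from matrices with one column per test question and one row per student.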

  11. Breast Density Estimation with Fully Automated Volumetric Method: Comparison to Radiologists' Assessment by BI-RADS Categories.

    Science.gov (United States)

    Singh, Tulika; Sharma, Madhurima; Singla, Veenu; Khandelwal, Niranjan

    2016-01-01

    The objective of our study was to calculate mammographic breast density with a fully automated volumetric breast density measurement method and to compare it to breast imaging reporting and data system (BI-RADS) breast density categories assigned by two radiologists. A total of 476 full-field digital mammography examinations with standard mediolateral oblique and craniocaudal views were evaluated by two blinded radiologists and BI-RADS density categories were assigned. Using fully automated software, mean fibroglandular tissue volume, mean breast volume, and mean volumetric breast density were calculated. Based on percentage volumetric breast density, a volumetric density grade was assigned from 1 to 4. The weighted overall kappa was 0.895 (almost perfect agreement) for the two radiologists' BI-RADS density estimates. A statistically significant difference was seen in mean volumetric breast density among the BI-RADS density categories, and mean volumetric breast density increased with BI-RADS density category (P < .001). A significant correlation was observed between BI-RADS categories and volumetric density grading by the fully automated software (ρ = 0.728, P < .001), while comparison of the volumetric density grade with the BI-RADS density category assigned by the two observers showed fair agreement (κ = 0.398 and 0.388, respectively). In our study, a good correlation was seen between density grading using the fully automated volumetric method and density grading using BI-RADS density categories assigned by the two radiologists. Thus, the fully automated volumetric method may be used to quantify breast density on routine mammography. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
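The percentage volumetric breast density and its mapping onto a 1-4 volumetric density grade can be sketched as below. The cutoff values are hypothetical placeholders for illustration only, not the thresholds used by the commercial software evaluated in the study.

```python
# Percentage volumetric breast density and a 1-4 grade. The grade cutoffs
# are invented placeholders, NOT the study software's actual thresholds.

def volumetric_density_percent(fibroglandular_cm3, breast_cm3):
    """Fibroglandular tissue volume as a percentage of total breast volume."""
    return 100.0 * fibroglandular_cm3 / breast_cm3

def volumetric_density_grade(vbd_percent, cutoffs=(4.5, 7.5, 15.5)):
    """Grade 1-4: one step per cutoff the density meets or exceeds."""
    return 1 + sum(vbd_percent >= c for c in cutoffs)

vbd = volumetric_density_percent(150.0, 1000.0)
print(vbd, volumetric_density_grade(vbd))  # 15.0 -> grade 3
```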

  12. VOLUMETRIC METHOD FOR EVALUATION OF BEACHES VARIABILITY BASED ON GIS-TOOLS

    Directory of Open Access Journals (Sweden)

    V. V. Dolotov

    2015-01-01

    Full Text Available In the frame of cadastral beach evaluation, a volumetric method for an index of natural variability is proposed. It is based on spatial calculations with the Cut-Fill method and on volume accounting of both the common beach contour and the specific areas at each survey time.
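The Cut-Fill computation underlying the proposed method can be sketched as a cell-by-cell comparison of two elevation grids of the same beach area. This is a generic illustration with invented grids and cell size, not the authors' GIS tooling.

```python
# Cut-Fill sketch: compare two equally sized elevation grids and accumulate
# removed (cut) and deposited (fill) volumes. Grids and cell area invented.

def cut_fill(before, after, cell_area):
    """Return (cut, fill) volumes between two surveys of the same grid."""
    cut = fill = 0.0
    for row_before, row_after in zip(before, after):
        for z0, z1 in zip(row_before, row_after):
            dz = z1 - z0
            if dz > 0:
                fill += dz * cell_area   # surface rose: material added
            else:
                cut += -dz * cell_area   # surface dropped: material lost
    return cut, fill

before = [[1.0, 1.2],
          [1.1, 1.3]]
after = [[0.8, 1.2],
         [1.4, 1.5]]
print(cut_fill(before, after, cell_area=4.0))  # ~ (0.8, 2.0) cubic units
```

Summing cut and fill over the common beach contour, and over specific sub-areas, yields the per-survey volume accounting the abstract describes.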

  13. Coaxial volumetric velocimetry

    Science.gov (United States)

    Schneiders, Jan F. G.; Scarano, Fulvio; Jux, Constantin; Sciacchitano, Andrea

    2018-06-01

    This study describes the working principles of the coaxial volumetric velocimeter (CVV) for wind tunnel measurements. The measurement system is derived from the concept of tomographic PIV in combination with recent developments in Lagrangian particle tracking. The main characteristic of the CVV is its small tomographic aperture and the coaxial arrangement between the illumination and imaging directions. The system consists of a multi-camera arrangement subtending only a few degrees of solid angle, with a long focal depth. Contrary to established PIV practice, laser illumination is provided along the same direction as that of the camera views, reducing the optical access requirements to a single viewing direction. The laser light is expanded to illuminate the full field of view of the cameras. Such illumination and imaging conditions along a deep measurement volume dictate the use of tracer particles with a large scattering area. In the present work, helium-filled soap bubbles are used. The fundamental principles of the CVV in terms of dynamic velocity and spatial range are discussed. Maximum particle image density is shown to limit tracer particle seeding concentration and instantaneous spatial resolution. Time-averaged flow fields can be obtained at high spatial resolution by ensemble averaging. The use of the CVV for time-averaged measurements is demonstrated in two wind tunnel experiments. After comparing the CVV measurements with the potential flow in front of a sphere, the near-surface flow around a complex wind tunnel model of a cyclist is measured. The measurements yield the volumetric time-averaged velocity and vorticity fields. The measurements of the streamlines in proximity to the surface give an indication of the skin-friction line pattern, which is of use in the interpretation of the surface flow topology.

  14. Discrete pre-processing step effects in registration-based pipelines, a preliminary volumetric study on T1-weighted images.

    Science.gov (United States)

    Muncy, Nathan M; Hedges-Muncy, Ariana M; Kirwan, C Brock

    2017-01-01

    Pre-processing MRI scans prior to performing volumetric analyses is common practice in MRI studies. As pre-processing steps adjust the voxel intensities, the space in which the scan exists, and the amount of data in the scan, it is possible that these steps affect the volumetric output. To date, studies have compared between pipelines but not within them, so the impact of each step is unknown. This study aims to quantify the effects of pre-processing steps on volumetric measures in T1-weighted scans within a single pipeline. It was our hypothesis that pre-processing steps would significantly impact ROI volume estimations. One hundred fifteen participants from the OASIS dataset were used, where each participant contributed three scans. All scans were then pre-processed using a step-wise pipeline. Bilateral hippocampus, putamen, and middle temporal gyrus volume estimations were assessed following each successive step, and all data were processed by the same pipeline 5 times. Repeated-measures analyses tested for main effects of pipeline step, scan-rescan (for MRI scanner consistency) and repeated pipeline runs (for algorithmic consistency). A main effect of pipeline step was detected, and interestingly an interaction between pipeline step and ROI exists. No effect of either scan-rescan or repeated pipeline run was detected. We then supply a correction for noise in the data resulting from pre-processing.

  15. Volumetric expiratory high-resolution CT of the lung

    International Nuclear Information System (INIS)

    Nishino, Mizuki; Hatabu, Hiroto

    2004-01-01

    We developed a volumetric expiratory high-resolution CT (HRCT) protocol that provides combined inspiratory and expiratory volumetric imaging of the lung without increasing radiation exposure, and conducted a preliminary feasibility assessment of this protocol to evaluate diffuse lung disease with small airway abnormalities. The volumetric expiratory high-resolution CT increased the detectability of the conducting airway to the areas of air trapping (P<0.0001), and added significant information about extent and distribution of air trapping (P<0.0001)

  16. SU-E-J-217: Accuracy Comparison Between Surface and Volumetric Registrations for Patient Setup of Head and Neck Radiation Therapy

    International Nuclear Information System (INIS)

    Kim, Y; Li, R; Na, Y; Jenkins, C; Xing, L; Lee, R

    2014-01-01

    Purpose: Optical surface imaging has been applied to radiation therapy patient setup. This study aims to investigate the accuracy of the surface registration of the optical surface imaging compared with that of the conventional method of volumetric registration for patient setup in head and neck radiation therapy. Methods: Clinical datasets of planning CT and treatment Cone Beam CT (CBCT) were used to compare the surface and volumetric registrations in radiation therapy patient setup. The Iterative Closest Points based on point-plane closest method was implemented for surface registration. We employed 3D Slicer for rigid volumetric registration of planning CT and treatment CBCT. 6 parameters of registration results (3 rotations and 3 translations) were obtained by the two registration methods, and the results were compared. Digital simulation tests in ideal cases were also performed to validate each registration method. Results: Digital simulation tests showed that both of the registration methods were accurate and robust enough to compare the registration results. In experiments with the actual clinical data, the results showed considerable deviation between the surface and volumetric registrations. The average root mean squared translational error was 2.7 mm and the maximum translational error was 5.2 mm. Conclusion: The deviation between the surface and volumetric registrations was considerable. Special caution should be taken in using an optical surface imaging. To ensure the accuracy of optical surface imaging in radiation therapy patient setup, additional measures are required. This research was supported in part by the KIST institutional program (2E24551), the Industrial Strategic technology development program (10035495) funded by the Ministry of Trade, Industry and Energy (MOTIE, KOREA), and the Radiation Safety Research Programs (1305033) through the Nuclear Safety and Security Commission, and the NIH (R01EB016777)
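The error summary reported above (root-mean-square and maximum translational error) can be sketched as follows, assuming per-case translation differences between the surface and volumetric registration results; the sample values are invented.

```python
# RMS and maximum 3D translational deviation between two registration
# results. Per-case (dx, dy, dz) differences in mm are invented.

import math

def translational_errors(deltas_mm):
    """Return (rms, maximum) of the 3D translation-difference magnitudes."""
    norms = [math.sqrt(dx * dx + dy * dy + dz * dz) for dx, dy, dz in deltas_mm]
    rms = math.sqrt(sum(n * n for n in norms) / len(norms))
    return rms, max(norms)

deltas = [(1.0, 2.0, 2.0), (0.0, 3.0, 4.0)]  # magnitudes 3.0 and 5.0 mm
rms_mm, max_mm = translational_errors(deltas)
print(round(rms_mm, 2), max_mm)  # ~4.12 and 5.0
```

With the study's clinical data this summary gave 2.7 mm RMS and 5.2 mm maximum, the figures quoted in the Results.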

  17. SU-E-J-217: Accuracy Comparison Between Surface and Volumetric Registrations for Patient Setup of Head and Neck Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y [Stanford University School of Medicine, Stanford, CA (United States); Korea Institute of Science and Technology, Seoul (Korea, Republic of); Li, R; Na, Y; Jenkins, C; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Lee, R [Ewha Womans University, Seoul (Korea, Republic of)

    2014-06-01

    Purpose: Optical surface imaging has been applied to radiation therapy patient setup. This study aims to investigate the accuracy of the surface registration of the optical surface imaging compared with that of the conventional method of volumetric registration for patient setup in head and neck radiation therapy. Methods: Clinical datasets of planning CT and treatment Cone Beam CT (CBCT) were used to compare the surface and volumetric registrations in radiation therapy patient setup. The Iterative Closest Points based on point-plane closest method was implemented for surface registration. We employed 3D Slicer for rigid volumetric registration of planning CT and treatment CBCT. 6 parameters of registration results (3 rotations and 3 translations) were obtained by the two registration methods, and the results were compared. Digital simulation tests in ideal cases were also performed to validate each registration method. Results: Digital simulation tests showed that both of the registration methods were accurate and robust enough to compare the registration results. In experiments with the actual clinical data, the results showed considerable deviation between the surface and volumetric registrations. The average root mean squared translational error was 2.7 mm and the maximum translational error was 5.2 mm. Conclusion: The deviation between the surface and volumetric registrations was considerable. Special caution should be taken in using an optical surface imaging. To ensure the accuracy of optical surface imaging in radiation therapy patient setup, additional measures are required. This research was supported in part by the KIST institutional program (2E24551), the Industrial Strategic technology development program (10035495) funded by the Ministry of Trade, Industry and Energy (MOTIE, KOREA), and the Radiation Safety Research Programs (1305033) through the Nuclear Safety and Security Commission, and the NIH (R01EB016777)

  18. Volumetric 3D Display System with Static Screen

    Science.gov (United States)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, a volumetric 3D display possesses a true 3D display volume and physically places each 3D voxel of the displayed 3D image at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to aid the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° view without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots are created by using a recently available machining technique called laser subsurface engraving (LSE). LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light around and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous

  19. Full waveform inversion based on scattering angle enrichment with application to real dataset

    KAUST Repository

    Wu, Zedong

    2015-08-19

    Reflected waveform inversion (RWI) provides a method to reduce the nonlinearity of standard full waveform inversion (FWI). However, the drawback of existing RWI methods is their inability to utilize diving waves and their extra sensitivity to the migrated image. We propose a combined FWI and RWI optimization problem by dividing the velocity into background and perturbed components, and we optimize both components as independent parameters. The new objective function is quadratic with respect to the perturbed component, which reduces the nonlinearity of the optimization problem. Solving this optimization provides a true-amplitude image and utilizes the diving waves to update the velocity of the shallow parts. To ensure proper wavenumber continuation, at the early stages we use an efficient scattering-angle filter to direct energy corresponding to large (smooth velocity) scattering angles to the background velocity update and small (high wavenumber) scattering angles to the perturbed velocity update. This efficient implementation of the filter is fast and requires less memory than the conventional approach based on extended images. Thus, the new FWI procedure updates the background velocity mainly along the wavepath for both diving and reflected waves in the initial stages. At the same time, it updates the perturbation mainly with reflections (filtering out the diving waves). To demonstrate the capability of this method, we apply it to a real 2D marine dataset.

  20. Visualization and volumetric structures from MR images of the brain

    Energy Technology Data Exchange (ETDEWEB)

    Parvin, B.; Johnston, W.; Robertson, D.

    1994-03-01

    Pinta is a system for segmentation and visualization of anatomical structures obtained from serial sections reconstructed from magnetic resonance imaging. The system approaches the segmentation problem by assigning each volumetric region to an anatomical structure. This is accomplished by satisfying constraints at the pixel level, slice level, and volumetric level. Each slice is represented by an attributed graph, where nodes correspond to regions and links correspond to the relations between regions. These regions are obtained by grouping pixels based on similarity and proximity. The slice level attributed graphs are then coerced to form a volumetric attributed graph, where volumetric consistency can be verified. The main novelty of our approach is in the use of the volumetric graph to ensure consistency from symbolic representations obtained from individual slices. In this fashion, the system allows errors to be made at the slice level, yet removes them when the volumetric consistency cannot be verified. Once the segmentation is complete, the 3D surfaces of the brain can be constructed and visualized.

  1. An open, multi-vendor, multi-field-strength brain MR dataset and analysis of publicly available skull stripping methods agreement.

    Science.gov (United States)

    Souza, Roberto; Lucena, Oeslle; Garrafa, Julia; Gobbi, David; Saluzzi, Marina; Appenzeller, Simone; Rittner, Letícia; Frayne, Richard; Lotufo, Roberto

    2018-04-15

    This paper presents an open, multi-vendor, multi-field-strength magnetic resonance (MR) T1-weighted volumetric brain imaging dataset, named Calgary-Campinas-359 (CC-359). The dataset is composed of images of older healthy adults (29-80 years) acquired on scanners from three vendors (Siemens, Philips and General Electric) at both 1.5 T and 3 T. CC-359 comprises 359 datasets, approximately 60 subjects per vendor and magnetic field strength. The dataset is approximately age and gender balanced, subject to the constraints of the available images. It provides consensus brain extraction masks for all volumes generated using supervised classification. Manual segmentation results for twelve randomly selected subjects performed by an expert are also provided. The CC-359 dataset allows investigation of 1) the influences of both vendor and magnetic field strength on quantitative analysis of brain MR; 2) parameter optimization for automatic segmentation methods; and potentially 3) machine learning classifiers with big data, specifically those based on deep learning methods, as these approaches require a large amount of data. To illustrate the utility of this dataset, we compared the results of eight publicly available skull stripping methods and one publicly available consensus algorithm to the results of a supervised classifier. A linear mixed effects model analysis indicated that vendor (p-value < 0.001) and magnetic field strength (p-value < 0.001) have statistically significant impacts on skull stripping results. Copyright © 2017 Elsevier Inc. All rights reserved.
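A standard way to quantify agreement between skull stripping masks such as those distributed with CC-359 is the Dice coefficient. The paper's own agreement analysis uses a linear mixed effects model, so the sketch below (with toy 1D masks) is only a generic companion metric, not the authors' code.

```python
# Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).
# Real brain masks are 3D volumes; flat 0/1 lists are used here as toys.

def dice(mask_a, mask_b):
    """Overlap agreement in [0, 1]; 1.0 means identical masks."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0

mask_a = [1, 1, 1, 0, 0]
mask_b = [0, 1, 1, 1, 0]
print(dice(mask_a, mask_b))  # 2*2 / (3+3) = 0.666...
```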

  2. Simulation of Smart Home Activity Datasets

    Directory of Open Access Journals (Sweden)

    Jonathan Synnott

    2015-06-01

    Full Text Available A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches. Access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation.

  3. Process conditions and volumetric composition in composites

    DEFF Research Database (Denmark)

    Madsen, Bo

    2013-01-01

    The obtainable volumetric composition in composites is linked to the gravimetric composition, and it is influenced by the conditions of the manufacturing process. A model for the volumetric composition is presented, in which the volume fractions of fibers, matrix and porosity are calculated as a function of the fiber weight fraction, and in which parameters are included for the composite microstructure and the fiber assembly compaction behavior. Based on experimental data from composites manufactured under different process conditions, together with model predictions, different types of process-related effects are analyzed. The applied consolidation pressure is found to have a marked effect on the volumetric composition. A power-law relationship is found to describe well the observed relation between the maximum obtainable fiber volume fraction and the consolidation pressure. The degree of fiber

  4. QSAR ligand dataset for modelling mutagenicity, genotoxicity, and rodent carcinogenicity

    Directory of Open Access Journals (Sweden)

    Davy Guan

    2018-04-01

    Full Text Available Five datasets were constructed from ligand and bioassay result data from the literature. These datasets include bioassay results from the Ames mutagenicity assay, Greenscreen GADD-45a-GFP assay, Syrian Hamster Embryo (SHE) assay, and 2-year rat carcinogenicity assay results. These datasets provide information about chemical mutagenicity, genotoxicity and carcinogenicity.

  5. Volumetric fat-water separated T2-weighted MRI

    International Nuclear Information System (INIS)

    Vasanawala, Shreyas S.; Sonik, Arvind; Madhuranthakam, Ananth J.; Venkatesan, Ramesh; Lai, Peng; Brau, Anja C.S.

    2011-01-01

    Pediatric body MRI exams often cover multiple body parts, making the development of broadly applicable protocols and obtaining uniform fat suppression a challenge. Volumetric T2 imaging with Dixon-type fat-water separation might address this challenge, but it is a lengthy process. We develop and evaluate a faster two-echo approach to volumetric T2 imaging with fat-water separation. A volumetric spin-echo sequence was modified to include a second shifted echo so two image sets are acquired. A region-growing reconstruction approach was developed to decompose separate water and fat images. Twenty-six children were recruited with IRB approval and informed consent. Fat-suppression quality was graded by two pediatric radiologists and compared against conventional fat-suppressed fast spin-echo T2-W images. Additionally, the value of in- and opposed-phase images was evaluated. Fat suppression on volumetric images had high quality in 96% of cases (95% confidence interval of 80-100%) and was preferred over or considered equivalent to conventional two-dimensional fat-suppressed FSE T2 imaging in 96% of cases (95% confidence interval of 78-100%). In- and opposed-phase images had definite value in 12% of cases. Volumetric fat-water separated T2-weighted MRI is feasible and is likely to yield improved fat suppression over conventional fat-suppressed T2-weighted imaging. (orig.)

  6. Volumetric composition in composites and historical data

    DEFF Research Database (Denmark)

    Lilholt, Hans; Madsen, Bo

    2013-01-01

    The obtainable volumetric composition in composites is of importance for the prediction of mechanical and physical properties, and in particular to assess the best possible (normally the highest) values for these properties. The volumetric model for the composition of (fibrous) composites gives...... guidance to the optimal combination of fibre content, matrix content and porosity content, in order to achieve the best obtainable properties. Several composite materials systems have been shown to be handleable with this model. An extensive series of experimental data for the system of cellulose fibres...... and polymer (resin) was produced in 1942 – 1944, and these data have been (re-)analysed by the volumetric composition model, and the property values for density, stiffness and strength have been evaluated. Good agreement has been obtained and some further observations have been extracted from the analysis....

  7. Cost-effectiveness of volumetric alcohol taxation in Australia.

    Science.gov (United States)

    Byrnes, Joshua M; Cobiac, Linda J; Doran, Christopher M; Vos, Theo; Shakeshaft, Anthony P

    2010-04-19

    To estimate the potential health benefits and cost savings of an alcohol tax rate that applies equally to all alcoholic beverages based on their alcohol content (volumetric tax) and to compare the cost savings with the cost of implementation. Mathematical modelling of three scenarios of volumetric alcohol taxation for the population of Australia: (i) no change in deadweight loss, (ii) no change in tax revenue, and (iii) all alcoholic beverages taxed at the same rate as spirits. Estimated change in alcohol consumption, tax revenue and health benefit. The estimated cost of changing to a volumetric tax rate is $18 million. A volumetric tax that is deadweight loss-neutral would increase the cost of beer and wine and reduce the cost of spirits, resulting in an estimated annual increase in taxation revenue of $492 million and a 2.77% reduction in annual consumption of pure alcohol. The estimated net health gain would be 21 000 disability-adjusted life-years (DALYs), with potential cost offsets of $110 million per annum. A tax revenue-neutral scenario would result in a 0.05% decrease in consumption, and a tax on all alcohol at a spirits rate would reduce consumption by 23.85% and increase revenue by $3094 million [corrected]. All volumetric tax scenarios would provide greater health benefits and cost savings to the health sector than the existing taxation system, based on current understandings of alcohol-related health effects. An equalized volumetric tax that would reduce beer and wine consumption while increasing the consumption of spirits would need to be approached with caution. Further research is required to examine whether alcohol-related health effects vary by type of alcoholic beverage, independent of the amount of alcohol consumed, to provide a strong evidence platform for alcohol taxation policies.

  8. An initial study on the estimation of time-varying volumetric treatment images and 3D tumor localization from single MV cine EPID images

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Pankaj, E-mail: pankaj.mishra@varian.com; Mak, Raymond H.; Rottmann, Joerg; Bryant, Jonathan H.; Williams, Christopher L.; Berbeco, Ross I.; Lewis, John H. [Brigham and Women's Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States); Li, Ruijiang [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California 94305 (United States)

    2014-08-15

    Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal image device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) eight digital eXtended CArdiac-Torso (XCAT) phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is 0.73 ± 0.63 mm for the XCAT data and 0.90 ± 0.65 mm for the patient data. Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model
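The PCA decomposition of DVFs described in the Methods can be sketched as follows. Sizes and data here are synthetic stand-ins, and the EPID-driven cost-function optimization of the eigen-coefficients is omitted; only the spatial-mode/temporal-coefficient split is shown.

```python
import numpy as np

# Toy stand-in for 4DCT-derived displacement vector fields (DVFs):
# each row is one breathing phase's DVF flattened to a vector.
rng = np.random.default_rng(0)
phases, dvf_dim = 10, 300             # hypothetical sizes
dvfs = rng.standard_normal((phases, dvf_dim))

# PCA via SVD of the mean-centred DVFs: the right singular vectors
# hold the spatial modes, the projections hold temporal coefficients.
mean_dvf = dvfs.mean(axis=0)
u, s, vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
n_modes = 3                            # compact representation
eigvecs = vt[:n_modes]                 # spatial modes (eigenvectors)
coeffs = (dvfs - mean_dvf) @ eigvecs.T  # per-phase eigen-coefficients

# A new volumetric estimate = mean + coefficients x modes; in the
# paper the coefficients are instead tuned against the EPID image.
new_dvf = mean_dvf + coeffs[0] @ eigvecs
```

With all modes retained the reconstruction is exact; truncating to a few modes gives the compact motion representation the abstract refers to.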

  9. SIMADL: Simulated Activities of Daily Living Dataset

    Directory of Open Access Journals (Sweden)

    Talal Alshammari

    2018-04-01

    Full Text Available With the realisation of the Internet of Things (IoT) paradigm, the analysis of the Activities of Daily Living (ADLs) in a smart home environment is becoming an active research domain. The existence of representative datasets is a key requirement to advance the research in smart home design. Such datasets are an integral part of the visualisation of new smart home concepts as well as the validation and evaluation of emerging machine learning models. Machine learning techniques that can learn ADLs from sensor readings are used to classify, predict and detect anomalous patterns. Such techniques require data that represent relevant smart home scenarios for training, testing and validation. However, the development of such machine learning techniques is limited by the lack of real smart home datasets, due to the excessive cost of building real smart homes. This paper provides two datasets for classification and anomaly detection. The datasets are generated using OpenSHS (Open Smart Home Simulator), which is a simulation software for dataset generation. OpenSHS records the daily activities of a participant within a virtual environment. Seven participants simulated their ADLs in different contexts, e.g., weekdays, weekends, mornings and evenings. Eighty-four files in total were generated, representing approximately 63 days' worth of activities. Forty-two files of ADL classification data were simulated for the classification dataset, and the other forty-two files are for anomaly detection problems, in which anomalous patterns were simulated and injected into the anomaly detection dataset.

  10. Volumetric breast density estimation from full-field digital mammograms.

    NARCIS (Netherlands)

    Engeland, S. van; Snoeren, P.R.; Huisman, H.J.; Boetes, C.; Karssemeijer, N.

    2006-01-01

    A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM). The thickness of dense tissue mapping to a pixel is determined by using a physical model of image acquisition. This model is based on the assumption that the breast

  11. TU-CD-BRB-04: Automated Radiomic Features Complement the Prognostic Value of VASARI in the TCGA-GBM Dataset

    Energy Technology Data Exchange (ETDEWEB)

    Velazquez, E Rios [Dana-Farber Cancer Institute | Harvard Medical School, Boston, MA (United States); Narayan, V [Dana-Farber Cancer Institute, Brigham and Women's Hospital, Harvard Medic, Boston, MA (United States); Grossmann, P [Dana-Farber Cancer Institute/Harvard Medical School, Boston, MA (United States); Dunn, W; Gutman, D [Emory University School of Medicine, Atlanta, GA (United States); Aerts, H [Dana-Farber/Brigham and Women's Cancer Center, Boston, MA (United States)

    2015-06-15

    Purpose: To compare the complementary prognostic value of automated Radiomic features to that of radiologist-annotated VASARI features in the TCGA-GBM MRI dataset. Methods: For 96 GBM patients, pre-operative MRI images were obtained from The Cancer Imaging Archive. The abnormal tumor bulks were manually defined on post-contrast T1w images. The contrast-enhancing and necrotic regions were segmented using FAST. From these sub-volumes and the total abnormal tumor bulk, a set of Radiomic features quantifying phenotypic differences based on the tumor intensity, shape and texture were extracted from the post-contrast T1w images. Minimum-redundancy-maximum-relevance (MRMR) was used to identify the most informative Radiomic, VASARI and combined Radiomic-VASARI features in 70% of the dataset (training set). Multivariate Cox proportional hazards models were evaluated in 30% of the dataset (validation set) using the C-index for OS. A bootstrap procedure was used to assess significance while comparing the C-indices of the different models. Results: Overall, the Radiomic features showed a moderate correlation with the radiologist-annotated VASARI features (r = −0.37 – 0.49); however, that correlation was stronger for the Tumor Diameter and Proportion of Necrosis VASARI features (r = −0.71 – 0.69). After MRMR feature selection, the best-performing Radiomic, VASARI, and Radiomic-VASARI Cox-PH models showed a validation C-index of 0.56 (p = NS), 0.58 (p = NS) and 0.65 (p = 0.01), respectively. The combined Radiomic-VASARI model C-index was significantly higher than that obtained from either the Radiomic or VASARI model alone (p < 0.001). Conclusion: Quantitative volumetric and textural Radiomic features complement the qualitative and semi-quantitative annotated VASARI feature set. The prognostic value of informative qualitative VASARI features such as Eloquent Brain and Multifocality is increased with the addition of quantitative volumetric and textural features from the

  12. PERFORMANCE COMPARISON FOR INTRUSION DETECTION SYSTEM USING NEURAL NETWORK WITH KDD DATASET

    Directory of Open Access Journals (Sweden)

    S. Devaraju

    2014-04-01

    Full Text Available Intrusion detection systems face the challenging task of deciding whether a user is a normal user or an attacker in organizational information systems and the IT industry. The intrusion detection system is an effective method to deal with this kind of problem in networks. Different classifiers are used to detect the different kinds of attacks in networks. In this paper, the performance of intrusion detection is compared across various neural network classifiers. In the proposed research, the four classifiers used are the Feed Forward Neural Network (FFNN), Generalized Regression Neural Network (GRNN), Probabilistic Neural Network (PNN) and Radial Basis Neural Network (RBNN). The performance on the full-featured KDD Cup 1999 dataset is compared with that on the reduced-featured KDD Cup 1999 dataset. The MATLAB software is used to train and test the dataset, and the efficiency and False Alarm Rate are measured. It is shown that the reduced dataset performs better than the full-featured dataset.

  13. Mapping of coastal landforms and volumetric change analysis in the south west coast of Kanyakumari, South India using remote sensing and GIS techniques

    Directory of Open Access Journals (Sweden)

    S. Kaliraj

    2017-12-01

    Full Text Available The coastal landforms along the south west coast of Kanyakumari have undergone remarkable change in terms of shape and disposition due to both natural and anthropogenic interference. An attempt is made here to map the coastal landforms along the coast using remote sensing and GIS techniques. Spatial data sources, such as the topographical map published by the Survey of India, Landsat ETM+ (30 m) imagery, IKONOS imagery (0.82 m), and SRTM and ASTER DEM datasets, have been comprehensively analyzed for extracting coastal landforms. Change detection methods, such as (i) topographical change detection, (ii) cross-shore profile analysis, and (iii) Geomorphic Change Detection (GCD) using DEM of Difference (DoD), were adopted for assessment of volumetric changes of coastal landforms for the period between 2000 and 2011. The GCD analysis uses ASTER and SRTM DEM datasets by resampling them to a common scale (pixel size) using pixel-by-pixel Wavelet Transform and Pan-Sharpening techniques in ERDAS Imagine software. Volumetric changes of coastal landforms were validated with data derived from a GPS-based field survey. Coastal landform units were mapped based on the process of their evolution: beach landforms, including sandy beach, cusp, berm, scarp, beach terrace, upland, rocky shore, cliffs, wave-cut notches and wave-cut platforms; and fluvial landforms, comprising alluvial plain, flood plains, and other shallow marshes in estuaries. The topographical change analysis reveals that the beach landforms have reduced in elevation by 1 m to 3 m, probably due to sediment removal or flattening. Analysis of cross-shore profiles at twelve locations indicates varying degrees of loss or gain of coastal landforms. For example, the K3-K3′ profile across the Kovalam coast has shown significant erosion (−0.26 m to −0.76 m) of the sandy beaches, resulting in the formation of beach cusps and beach scarps within 300 m of the shoreline. The volumetric change

  14. Design, Implementation and Characterization of a Quantum-Dot-Based Volumetric Display

    Science.gov (United States)

    Hirayama, Ryuji; Naruse, Makoto; Nakayama, Hirotaka; Tate, Naoya; Shiraki, Atsushi; Kakue, Takashi; Shimobaba, Tomoyoshi; Ohtsu, Motoichi; Ito, Tomoyoshi

    2015-02-01

    In this study, we propose and experimentally demonstrate a volumetric display system based on quantum dots (QDs) embedded in a polymer substrate. Unlike conventional volumetric displays, our system does not require electrical wiring; thus, the heretofore unavoidable issue of occlusion is resolved, because irradiation by external light supplies the energy to the light-emitting voxels formed by the QDs. By exploiting the intrinsic attributes of the QDs, the system offers ultrahigh definition and a wide range of colours for volumetric displays. In this paper, we discuss the design, implementation and characterization of the proposed volumetric display's first prototype. We developed an 8 × 8 × 8 display comprising two types of QDs, which presents three different multicolour two-dimensional patterns when viewed from different angles. The QD-based volumetric display provides a new way to represent images and could be applied in the leisure and advertising industries, among others.

  15. The Effect of Elevation on Volumetric Measurements of the Lower Extremity

    Directory of Open Access Journals (Sweden)

    Cordial M. Gillette

    2017-07-01

    Full Text Available Background: The empirical evidence for the use of RICE (rest, ice, compression, elevation) has been questioned regarding its clinical effectiveness. The component of RICE that has the least literature regarding its effectiveness is elevation. Objective: The objective of this study was to determine if various positions of elevation result in volumetric changes of the lower extremity. Methodology: A randomized crossover design was used to determine the effects of the four following conditions on volumetric changes of the lower extremity: seated at the end of a table (seated), lying supine (flat), lying supine with the foot elevated 12 inches off the table (elevated), and lying prone with the knees bent to 90 degrees (prone). The conditions were randomized using a Latin Square. Each subject completed all conditions with at least 24 hours between each session. Pre and post volumetric measurements were taken using a volumetric tank. The subject was placed in one of the four described testing positions for 30 minutes. The change in weight of the displaced water was the main outcome measure. The data were analyzed using an ANOVA of the pre and post measurements with a Bonferroni post hoc analysis. The level of significance was set at P<.05 for all analyses. Results: The only statistically significant difference was between the gravity-dependent position (seated) and all other positions (p<.001). There was no significant difference between lying supine (flat), on a bolster (elevated), or prone with the knees flexed to 90 degrees (prone). Conclusions: From these results, the extent of elevation does not appear to have an effect on changes in lower leg volume. Elevation above the heart did not significantly improve reduction in limb volume, but removing the limb from a gravity-dependent position might be beneficial.

  16. The importance of accurate anatomic assessment for the volumetric analysis of the amygdala

    Directory of Open Access Journals (Sweden)

    L. Bonilha

    2005-03-01

    Full Text Available There is a wide range of values reported in volumetric studies of the amygdala. The use of single-plane thick-slice magnetic resonance imaging (MRI) may prevent the correct visualization of anatomic landmarks and yield imprecise results. To assess whether there is a difference between volumetric analysis of the amygdala performed with single-plane MRI 3-mm slices and with multiplanar analysis of MRI 1-mm slices, we studied healthy subjects and patients with temporal lobe epilepsy. We performed manual delineation of the amygdala on T1-weighted inversion recovery, 3-mm coronal slices and manual delineation of the amygdala on three-dimensional volumetric T1-weighted images with 1-mm slice thickness. The data were compared using a dependent t-test. There was a significant difference between the volumes obtained by the coronal plane-based measurements and the volumes obtained by three-dimensional analysis (P < 0.001). An incorrect estimate of the amygdala volume may preclude a correct analysis of the biological effects of alterations in amygdala volume. Three-dimensional analysis is preferred because it is based on more extensive anatomical assessment and the results are similar to those obtained in post-mortem studies.

  17. Method for Determining Volumetric Efficiency and Its Experimental Validation

    Directory of Open Access Journals (Sweden)

    Ambrozik Andrzej

    2017-12-01

    Full Text Available Modern means of transport are basically powered by piston internal combustion engines. Increasingly rigorous demands are placed on IC engines in order to minimise the detrimental impact they have on the natural environment. That stimulates the development of research on piston internal combustion engines. The research involves experimental and theoretical investigations carried out using computer technologies. While being filled, the cylinder is considered to be an open thermodynamic system, in which non-stationary processes occur. To make calculations of thermodynamic parameters of the engine operating cycle, based on the comparison of cycles, it is necessary to know the mean constant value of cylinder pressure throughout this process. Because of the character of in-cylinder pressure pattern and difficulties in pressure experimental determination, in the present paper, a novel method for the determination of this quantity was presented. In the new approach, the iteration method was used. In the method developed for determining the volumetric efficiency, the following equations were employed: the law of conservation of the amount of substance, the first law of thermodynamics for open system, dependences for changes in the cylinder volume vs. the crankshaft rotation angle, and the state equation. The results of calculations performed with this method were validated by means of experimental investigations carried out for a selected engine at the engine test bench. A satisfactory congruence of computational and experimental results as regards determining the volumetric efficiency was obtained. The method for determining the volumetric efficiency presented in the paper can be used to investigate the processes taking place in the cylinder of an IC engine.
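As a rough illustration of the quantities the method works with, the sketch below combines slider-crank kinematics for the cylinder volume versus crank angle with the ideal-gas state equation to form a volumetric efficiency. This is a textbook-style simplification, not the paper's iterative method, and the engine dimensions in the example are made up.

```python
import math

def cylinder_volume(theta_deg, bore, stroke, conrod, cr):
    """Cylinder volume at crank angle theta (slider-crank kinematics).
    bore/stroke/conrod in metres, cr = compression ratio."""
    r = stroke / 2.0
    a = math.radians(theta_deg)
    area = math.pi * bore ** 2 / 4.0
    v_disp = area * stroke
    v_clear = v_disp / (cr - 1.0)
    # piston displacement from TDC
    x = r * (1.0 - math.cos(a)) + conrod - math.sqrt(
        conrod ** 2 - (r * math.sin(a)) ** 2)
    return v_clear + area * x

def volumetric_efficiency(m_trapped, p_ref, t_ref, v_disp, R=287.0):
    """Trapped mass relative to the mass that would fill the displaced
    volume at reference (ambient) conditions, via the state equation."""
    m_ref = p_ref * v_disp / (R * t_ref)
    return m_trapped / m_ref
```

At 0° (TDC) the function returns the clearance volume and at 180° (BDC) the clearance plus displaced volume, consistent with the geometry assumed above.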

  18. Volumetric Synthetic Aperture Imaging with a Piezoelectric 2-D Row-Column Probe

    DEFF Research Database (Denmark)

    Bouzari, Hamed; Engholm, Mathias; Christiansen, Thomas Lehrmann

    2016-01-01

    The synthetic aperture (SA) technique can be used for achieving real-time volumetric ultrasound imaging using 2-D row-column addressed transducers. This paper investigates SA volumetric imaging performance of an in-house prototyped 3 MHz λ/2-pitch 62+62 element piezoelectric 2-D row-column addressed...

  19. PROVIDING GEOGRAPHIC DATASETS AS LINKED DATA IN SDI

    Directory of Open Access Journals (Sweden)

    E. Hietanen

    2016-06-01

    Full Text Available In this study, a prototype service to provide data from a Web Feature Service (WFS) as linked data is implemented. First, persistent and unique Uniform Resource Identifiers (URIs) are created for all spatial objects in the dataset. The objects are available from those URIs in the Resource Description Framework (RDF) data format. Next, a Web Ontology Language (OWL) ontology is created to describe the dataset's information content using the Open Geospatial Consortium's (OGC) GeoSPARQL vocabulary. The existing data model is modified in order to take into account the linked data principles. The implemented service produces an HTTP response dynamically. The data for the response is first fetched from the existing WFS. Then the Geography Markup Language (GML) output of the WFS is transformed on-the-fly to the RDF format. Content negotiation is used to serve the data in different RDF serialization formats. This solution facilitates the use of a dataset in different applications without replicating the whole dataset. In addition, individual spatial objects in the dataset can be referred to with URIs. Furthermore, the needed information content of the objects can be easily extracted from the RDF serializations available from those URIs. A solution for linking data objects to the dataset URI is also introduced by using the Vocabulary of Interlinked Datasets (VoID). The dataset is divided into subsets and each subset is given its own persistent and unique URI. This enables the whole dataset to be explored with a web browser and all individual objects to be indexed by search engines.
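The URI-minting and RDF serialization step can be sketched as below (Turtle output using the GeoSPARQL vocabulary). The base namespace and feature data are hypothetical, and the WFS/GML fetch and content negotiation are omitted.

```python
# Minimal sketch: mint persistent URIs for spatial objects and emit
# GeoSPARQL-style RDF triples in Turtle. BASE is a hypothetical namespace.
BASE = "http://data.example.org/sdi/"

def to_turtle(objects):
    """Serialize a list of {'id', 'label', 'wkt'} dicts as Turtle."""
    lines = [
        "@prefix geo: <http://www.opengis.net/ont/geosparql#> .",
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
    ]
    for obj in objects:
        uri = f"<{BASE}{obj['id']}>"
        lines.append(f"{uri} a geo:Feature ;")
        lines.append(f'    rdfs:label "{obj["label"]}" ;')
        lines.append(f'    geo:hasGeometry [ geo:asWKT "{obj["wkt"]}"^^geo:wktLiteral ] .')
    return "\n".join(lines)

doc = to_turtle([{"id": "road/42", "label": "Main Road",
                  "wkt": "LINESTRING(0 0, 1 1)"}])
```

Each spatial object thus becomes dereferenceable at its own URI, which is the property the abstract highlights for web-browser exploration and search-engine indexing.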

  20. A volumetric data system for environmental robotics

    International Nuclear Information System (INIS)

    Tourtellott, J.

    1994-01-01

    A three-dimensional, spatially organized or volumetric data system provides an effective means for integrating and presenting environmental sensor data to robotic systems and operators. Because of the unstructured nature of environmental restoration applications, new robotic control strategies are being developed that include environmental sensors and interactive data interpretation. The volumetric data system provides key features to facilitate these new control strategies, including: integrated representation of surface, subsurface and above-surface data; differentiation of mapped and unmapped regions in space; sculpting of regions in space to best exploit data from line-of-sight sensors; integration of diverse sensor data (for example, dimensional, physical/geophysical, chemical, and radiological); incorporation of data provided at different spatial resolutions; efficient access for high-speed visualization and analysis; and geometric modeling tools to update a "world model" of an environment. The applicability to underground storage tank remediation and buried waste site remediation is demonstrated in several examples. By integrating environmental sensor data into robotic control, the volumetric data system will lead to safer, faster, and more cost-effective environmental cleanup

  1. Genomics dataset of unidentified disclosed isolates

    Directory of Open Access Journals (Sweden)

    Bhagwan N. Rekadwad

    2016-09-01

    Full Text Available Analysis of DNA sequences is necessary for the higher hierarchical classification of organisms. It gives clues about the characteristics of organisms and their taxonomic position. This dataset was chosen to find complexities in the unidentified DNA in the disclosed patents. A total of 17 unidentified DNA sequences were thoroughly analyzed. Quick response (QR) codes were generated, and an analysis of the AT/GC content of the DNA sequences was carried out. The QR codes are helpful for quick identification of isolates, and the AT/GC content is helpful for studying their stability at different temperatures. Additionally, a dataset on cleavage codes and enzyme codes, studied under the restriction digestion study and helpful for performing studies using short DNA sequences, was reported. The dataset disclosed here is new revelatory data for the exploration of unique DNA sequences for evaluation, identification, comparison and analysis. Keywords: BioLABs, Blunt ends, Genomics, NEB cutter, Restriction digestion, Short DNA sequences, Sticky ends
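The AT/GC-content analysis mentioned above reduces to a simple base count; a minimal sketch:

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence (case-insensitive).
    The AT fraction is simply 1 minus this value for pure ATGC input."""
    seq = seq.upper()
    gc = sum(seq.count(base) for base in "GC")
    return gc / len(seq) if seq else 0.0
```

For example, `gc_content("ATGC")` gives 0.5. Higher GC fractions correlate with greater thermal stability, which is the property the dataset uses the AT/GC content for.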

  2. Gradients estimation from random points with volumetric tensor in turbulence

    Science.gov (United States)

    Watanabe, Tomoaki; Nagata, Koji

    2017-12-01

    We present an estimation method of fully-resolved/coarse-grained gradients from randomly distributed points in turbulence. The method is based on a linear approximation of spatial gradients expressed with the volumetric tensor, which is a 3 × 3 matrix determined by a geometric distribution of the points. The coarse grained gradient can be considered as a low pass filtered gradient, whose cutoff is estimated with the eigenvalues of the volumetric tensor. The present method, the volumetric tensor approximation, is tested for velocity and passive scalar gradients in incompressible planar jet and mixing layer. Comparison with a finite difference approximation on a Cartesian grid shows that the volumetric tensor approximation computes the coarse grained gradients fairly well at a moderate computational cost under various conditions of spatial distributions of points. We also show that imposing the solenoidal condition improves the accuracy of the present method for solenoidal vectors, such as a velocity vector in incompressible flows, especially when the number of the points is not large. The volumetric tensor approximation with 4 points poorly estimates the gradient because of anisotropic distribution of the points. Increasing the number of points from 4 significantly improves the accuracy. Although the coarse grained gradient changes with the cutoff length, the volumetric tensor approximation yields the coarse grained gradient whose magnitude is close to the one obtained by the finite difference. We also show that the velocity gradient estimated with the present method well captures the turbulence characteristics such as local flow topology, amplification of enstrophy and strain, and energy transfer across scales.
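The linear least-squares construction behind the volumetric tensor approximation can be sketched as follows: the gradient at a point is recovered from scattered samples by solving the normal equations built from the 3 × 3 second-moment tensor of the point offsets. This is a simplified reading of the method; the coarse-graining cutoff estimation and the solenoidal constraint discussed in the abstract are omitted.

```python
import numpy as np

def gradient_from_points(x0, f0, pts, vals):
    """Least-squares estimate of the gradient of a scalar field at x0
    from scattered samples (pts, vals), via the 3x3 second-moment
    ('volumetric') tensor of the point offsets. Linear approximation:
    f(x) ~ f0 + g . (x - x0)."""
    dx = pts - x0            # (n, 3) offsets from x0
    df = vals - f0           # (n,) value differences
    m = dx.T @ dx            # volumetric tensor, 3x3
    return np.linalg.solve(m, dx.T @ df)
```

For an exactly linear field the estimate is exact (given at least three non-coplanar offsets), which matches the abstract's observation that too few or anisotropically distributed points degrade the estimate.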

  3. Determination of Uncertainty for a One Milli Litre Volumetric Pipette

    International Nuclear Information System (INIS)

    Torowati; Asminar; Rahmiati; Arif-Sasongko-Adi

    2007-01-01

    A study was conducted to determine the uncertainty of a one-millilitre volumetric pipette. The uncertainty was determined from data obtained in a determination process using the gravimetric method. The calculated uncertainty of the volumetric pipette is reported at the 95% confidence level with coverage factor k=2. (author)
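A minimal sketch of the Type A part of such an estimate, assuming repeated gravimetric deliveries already converted to millilitres; the k = 2 coverage factor corresponds to the ~95% confidence level quoted above. Type B contributions (balance calibration, temperature, density conversion) are omitted here.

```python
import statistics

def expanded_uncertainty(volumes_ml, k=2.0):
    """Expanded uncertainty of repeated gravimetric pipette deliveries:
    standard uncertainty of the mean times coverage factor k
    (k=2 for ~95% confidence). Type A component only."""
    s = statistics.stdev(volumes_ml)          # sample standard deviation
    u_mean = s / len(volumes_ml) ** 0.5       # standard uncertainty of mean
    return k * u_mean

# Hypothetical repeat deliveries of a nominal 1 mL pipette:
u = expanded_uncertainty([1.001, 0.999, 1.000, 1.002, 0.998])
```

The result would be quoted as "1.000 mL ± U (k = 2)" in the usual calibration-certificate style.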

  4. An Affinity Propagation Clustering Algorithm for Mixed Numeric and Categorical Datasets

    Directory of Open Access Journals (Sweden)

    Kang Zhang

    2014-01-01

    Full Text Available Clustering has been widely used in different fields of science, technology, social science, and so forth. In the real world, numeric as well as categorical features are usually used to describe the data objects. Accordingly, many clustering methods can process datasets that are either numeric or categorical. Recently, algorithms that can handle mixed data clustering problems have been developed. Affinity propagation (AP) is an exemplar-based clustering method which has demonstrated good performance on a wide variety of datasets. However, it has limitations in processing mixed datasets. In this paper, we propose a novel similarity measure for mixed-type datasets, and an adaptive AP clustering algorithm is proposed to cluster the mixed datasets. Several real-world datasets are studied to evaluate the performance of the proposed algorithm. Comparisons with other clustering algorithms demonstrate that the proposed method works well not only on mixed datasets but also on pure numeric and categorical datasets.
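A generic similarity for mixed records can be sketched as below. This is an illustrative stand-in combining a numeric distance term with a categorical matching count, and is not necessarily the paper's proposed measure; AP itself only requires such a pairwise similarity matrix as input.

```python
import numpy as np

def mixed_similarity(a_num, a_cat, b_num, b_cat, w=1.0):
    """Similarity for records with numeric and categorical parts:
    negative squared Euclidean distance on the numeric attributes plus
    a weighted count of matching categorical attributes. Higher is
    more similar, as AP expects. Illustrative stand-in only."""
    num_part = -float(np.sum((np.asarray(a_num) - np.asarray(b_num)) ** 2))
    cat_part = w * sum(x == y for x, y in zip(a_cat, b_cat))
    return num_part + cat_part
```

In practice the numeric attributes would be normalized first, and the weight `w` tuned so neither part dominates; the paper's adaptive scheme addresses exactly this balancing problem.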

  5. Green chemistry volumetric titration kit for pharmaceutical formulations: Econoburette

    Directory of Open Access Journals (Sweden)

    Man Singh

    2009-08-01

    Full Text Available Stopcock (SC) and Spring (Sp) models of the Econoburette (calibrated; RTC (NR), Ministry of Small Scale Industries, Government of India), developed for semimicro volumetric titration of pharmaceutical formulations, are reported. These provide economized and risk-free titration, in which the pipette is replaced by an inbuilt pipette and the conical flask by an inbuilt bulb. The step of pipetting stock solution by mouth, which exposed the user to the solution, is eliminated; with this risk removed, even volatile and toxic solutions can be titrated in complete safety. The Econoburette reduces the use of materials and time by 90 % and prevents the discharge of polluting effluent to the environment. Several acid and base samples were titrated, and an analysis of the experimental expenditure is described in the paper.

  6. Effects of Different Reconstruction Parameters on CT Volumetric Measurement of Pulmonary Nodules

    Directory of Open Access Journals (Sweden)

    Rongrong YANG

    2012-02-01

    Full Text Available Background and objective It has been proven that volumetric measurements can detect subtle changes in small pulmonary nodules in serial CT scans, and thus may play an important role in the follow-up of indeterminate pulmonary nodules and in differentiating malignant nodules from benign ones. The current study aims to evaluate the effects of different reconstruction parameters on the volumetric measurements of pulmonary nodules in chest CT scans. Methods Thirty subjects who underwent chest CT scans because of indeterminate pulmonary nodules in the General Hospital of Tianjin Medical University from December 2009 to August 2011 were retrospectively analyzed. A total of 52 pulmonary nodules were included, and all CT data were reconstructed using three reconstruction algorithms and three slice thicknesses. The volumetric measurements of the nodules were performed using the advanced lung analysis (ALA) software. The effects of the reconstruction algorithms, slice thicknesses, and nodule diameters on the volumetric measurements were assessed using multivariate analysis of variance for repeated measures, correlation analysis, and the Bland-Altman method. Results The reconstruction algorithms (F=13.6, P<0.001) and slice thicknesses (F=4.4, P=0.02) had significant effects on the measured volume of pulmonary nodules. In addition, the coefficients of variation of the nine measurements were inversely related to nodule diameter (r=-0.814, P<0.001). The volume measured at the 2.5 mm slice thickness had poor agreement with the volumes measured at 1.25 mm and 0.625 mm, respectively. Moreover, the best agreement was achieved between the slice thicknesses of 1.25 mm and 0.625 mm using the bone algorithm. Conclusion Reconstruction algorithms and slice thicknesses have significant impacts on the volumetric measurements of lung nodules, especially for small nodules. Therefore, the reconstruction settings in serial CT scans should be kept consistent in the follow-up.
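    The Bland-Altman comparison used in such agreement analyses reduces to the bias and 95% limits of agreement of the paired differences; a small NumPy sketch with invented paired volumes (not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement between paired
    measurements from two methods (e.g. volumes at two slice thicknesses)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)               # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative nodule volumes (mm^3) at two slice thicknesses (made-up data).
v_125 = [105.0, 230.0, 318.0, 97.0, 412.0]
v_0625 = [102.0, 228.0, 322.0, 95.0, 409.0]
bias, lo, hi = bland_altman(v_125, v_0625)
```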

  7. Measurement and genetics of human subcortical and hippocampal asymmetries in large datasets.

    Science.gov (United States)

    Guadalupe, Tulio; Zwiers, Marcel P; Teumer, Alexander; Wittfeld, Katharina; Vasquez, Alejandro Arias; Hoogman, Martine; Hagoort, Peter; Fernandez, Guillen; Buitelaar, Jan; Hegenscheid, Katrin; Völzke, Henry; Franke, Barbara; Fisher, Simon E; Grabe, Hans J; Francks, Clyde

    2014-07-01

    Functional and anatomical asymmetries are prevalent features of the human brain, linked to gender, handedness, and cognition. However, little is known about the neurodevelopmental processes involved. In zebrafish, asymmetries arise in the diencephalon before extending within the central nervous system. We aimed to identify genes involved in the development of subtle, left-right volumetric asymmetries of human subcortical structures using large datasets. We first tested the feasibility of measuring left-right volume differences in such large-scale samples, as assessed by two automated methods of subcortical segmentation (FSL|FIRST and FreeSurfer), using data from 235 subjects who had undergone MRI twice. We tested the agreement between the first and second scan, and the agreement between the segmentation methods, for measures of bilateral volumes of six subcortical structures and the hippocampus, and their volumetric asymmetries. We also tested whether there were biases introduced by left-right differences in the regional atlases used by the methods, by analyzing left-right flipped images. While many bilateral volumes were measured well (scan-rescan r = 0.6-0.8), most asymmetries, with the exception of the caudate nucleus, showed lower repeatabilities. We meta-analyzed genome-wide association scan results for caudate nucleus asymmetry in a combined sample of 3,028 adult subjects but did not detect associations at genome-wide significance (P left-right patterning of the viscera. Our results provide important information for researchers who are currently aiming to carry out large-scale genome-wide studies of subcortical and hippocampal volumes, and their asymmetries. Copyright © 2013 Wiley Periodicals, Inc.

  8. Volumetric breast density affects performance of digital screening mammography

    OpenAIRE

    Wanders, JO; Holland, K; Veldhuis, WB; Mann, RM; Pijnappel, RM; Peeters, PH; Van Gils, CH; Karssemeijer, N

    2016-01-01

    PURPOSE: To determine to what extent automatically measured volumetric mammographic density influences screening performance when using digital mammography (DM). METHODS: We collected a consecutive series of 111,898 DM examinations (2003-2011) from one screening unit of the Dutch biennial screening program (age 50-75 years). Volumetric mammographic density was automatically assessed using Volpara. We determined screening performance measures for four density categories comparable to the Ameri...

  9. MR volumetric assessment of endolymphatic hydrops

    International Nuclear Information System (INIS)

    Guerkov, R.; Berman, A.; Jerin, C.; Krause, E.; Dietrich, O.; Flatz, W.; Ertl-Wagner, B.; Keeser, D.

    2015-01-01

    We aimed to volumetrically quantify endolymph and perilymph spaces of the inner ear in order to establish a methodological basis for further investigations into the pathophysiology and therapeutic monitoring of Meniere's disease. Sixteen patients (eight females, aged 38-71 years) with definite unilateral Meniere's disease were included in this study. Magnetic resonance (MR) cisternography with a T2-SPACE sequence was combined with a Real reconstruction inversion recovery (Real-IR) sequence for delineation of inner ear fluid spaces. Machine learning and automated local thresholding segmentation algorithms were applied for three-dimensional (3D) reconstruction and volumetric quantification of endolymphatic hydrops. Test-retest reliability was assessed by the intra-class coefficient; correlation of cochlear endolymph volume ratio with hearing function was assessed by the Pearson correlation coefficient. Endolymph volume ratios could be reliably measured in all patients, with a mean (range) value of 15 % (2-25) for the cochlea and 28 % (12-40) for the vestibulum. Test-retest reliability was excellent, with an intra-class coefficient of 0.99. Cochlear endolymphatic hydrops was significantly correlated with hearing loss (r = 0.747, p = 0.001). MR imaging after local contrast application and image processing, including machine learning and automated local thresholding, enable the volumetric quantification of endolymphatic hydrops. This allows for a quantitative assessment of the effect of therapeutic interventions on endolymphatic hydrops. (orig.)

  10. MR volumetric assessment of endolymphatic hydrops

    Energy Technology Data Exchange (ETDEWEB)

    Guerkov, R.; Berman, A.; Jerin, C.; Krause, E. [University of Munich, Department of Otorhinolaryngology Head and Neck Surgery, Grosshadern Medical Centre, Munich (Germany); University of Munich, German Centre for Vertigo and Balance Disorders, Grosshadern Medical Centre, Marchioninistr. 15, 81377, Munich (Germany); Dietrich, O.; Flatz, W.; Ertl-Wagner, B. [University of Munich, Institute of Clinical Radiology, Grosshadern Medical Centre, Munich (Germany); Keeser, D. [University of Munich, Institute of Clinical Radiology, Grosshadern Medical Centre, Munich (Germany); University of Munich, German Centre for Vertigo and Balance Disorders, Grosshadern Medical Centre, Marchioninistr. 15, 81377, Munich (Germany); University of Munich, Department of Psychiatry and Psychotherapy, Innenstadtkliniken Medical Centre, Munich (Germany)

    2014-10-16

    We aimed to volumetrically quantify endolymph and perilymph spaces of the inner ear in order to establish a methodological basis for further investigations into the pathophysiology and therapeutic monitoring of Meniere's disease. Sixteen patients (eight females, aged 38-71 years) with definite unilateral Meniere's disease were included in this study. Magnetic resonance (MR) cisternography with a T2-SPACE sequence was combined with a Real reconstruction inversion recovery (Real-IR) sequence for delineation of inner ear fluid spaces. Machine learning and automated local thresholding segmentation algorithms were applied for three-dimensional (3D) reconstruction and volumetric quantification of endolymphatic hydrops. Test-retest reliability was assessed by the intra-class coefficient; correlation of cochlear endolymph volume ratio with hearing function was assessed by the Pearson correlation coefficient. Endolymph volume ratios could be reliably measured in all patients, with a mean (range) value of 15 % (2-25) for the cochlea and 28 % (12-40) for the vestibulum. Test-retest reliability was excellent, with an intra-class coefficient of 0.99. Cochlear endolymphatic hydrops was significantly correlated with hearing loss (r = 0.747, p = 0.001). MR imaging after local contrast application and image processing, including machine learning and automated local thresholding, enable the volumetric quantification of endolymphatic hydrops. This allows for a quantitative assessment of the effect of therapeutic interventions on endolymphatic hydrops. (orig.)

  11. Volumetric display using a roof mirror grid array

    Science.gov (United States)

    Miyazaki, Daisuke; Hirano, Noboru; Maeda, Yuuki; Ohno, Keisuke; Maekawa, Satoshi

    2010-02-01

    A volumetric display system using a roof mirror grid array (RMGA) is proposed. The RMGA consists of a two-dimensional array of dihedral corner reflectors and forms a real image at a plane-symmetric position. A two-dimensional image formed with an RMGA is moved at high speed by a mirror scanner. Cross-sectional images of a three-dimensional object are displayed in accordance with the position of the image plane. A volumetric image can be observed as a stack of the cross-sectional images by high-speed scanning. Image formation by an RMGA is free from aberrations. Moreover, a compact optical system can be constructed because an RMGA does not have a focal length. An experimental volumetric display system using a galvanometer mirror and a digital micromirror device was constructed. The formation of a three-dimensional image consisting of 1024 × 768 × 400 voxels is confirmed by the experimental system.

  12. Enhancing the discrimination accuracy between metastases, gliomas and meningiomas on brain MRI by volumetric textural features and ensemble pattern recognition methods.

    Science.gov (United States)

    Georgiadis, Pantelis; Cavouras, Dionisis; Kalatzis, Ioannis; Glotsos, Dimitris; Athanasiadis, Emmanouil; Kostopoulos, Spiros; Sifaki, Koralia; Malamas, Menelaos; Nikiforidis, George; Solomou, Ekaterini

    2009-01-01

    Three-dimensional (3D) texture analysis of volumetric brain magnetic resonance (MR) images has been identified as an important indicator for discriminating among different brain pathologies. The purpose of this study was to evaluate the efficiency of 3D textural features using a pattern recognition system in the task of discriminating benign, malignant and metastatic brain tissues on T1 postcontrast MR imaging (MRI) series. The dataset consisted of 67 brain MRI series obtained from patients with verified and untreated intracranial tumors. The pattern recognition system was designed as an ensemble classification scheme employing a support vector machine classifier, specially modified in order to integrate the least squares features transformation logic in its kernel function. The latter, in conjunction with using 3D textural features, enabled boosting up the performance of the system in discriminating metastatic, malignant and benign brain tumors with 77.14%, 89.19% and 93.33% accuracy, respectively. The method was evaluated using an external cross-validation process; thus, results might be considered indicative of the generalization performance of the system to "unseen" cases. The proposed system might be used as an assisting tool for brain tumor characterization on volumetric MRI series.

  13. Comparison of CORA and EN4 in-situ datasets validation methods, toward a better quality merged dataset.

    Science.gov (United States)

    Szekely, Tanguy; Killick, Rachel; Gourrion, Jerome; Reverdin, Gilles

    2017-04-01

    CORA and EN4 are both global, delayed-mode, validated in-situ ocean temperature and salinity datasets distributed by the Met Office (http://www.metoffice.gov.uk/) and Copernicus (www.marine.copernicus.eu). A large part of the profiles distributed by CORA and EN4 in recent years are Argo profiles from the Argo DAC, but profiles are also extracted from the World Ocean Database, along with TESAC profiles from GTSPP. In the case of CORA, data coming from the EUROGOOS Regional Operational Observing Systems (ROOS) operated by European institutes not managed by national data centres, as well as other profile datasets provided by scientific sources, can also be found (sea mammal profiles from MEOP, XBT datasets from cruises, ...). (EN4 also takes data from the ASBO dataset to supplement observations in the Arctic.) The first advantage of this new merged product is to enhance the space and time coverage at global and European scales for the period from 1950 until a year before the current year. This product is updated once a year, and T&S gridded fields are also generated for the period from 1990 to year n-1. The enhancement compared to the previous CORA product will be presented. Although the profiles distributed by both datasets are mostly the same, the quality control procedures developed by the Met Office and Copernicus teams differ, sometimes leading to different quality control flags for the same profile. A new study started in 2016 that aims to compare both validation procedures and move towards a Copernicus Marine Service dataset with the best features of CORA and EN4 validation. A reference dataset composed of the full set of in-situ temperature and salinity measurements collected by Coriolis during 2015 is used. These measurements have been made with a wide range of instruments (XBTs, CTDs, Argo floats, instrumented sea mammals, ...), covering the global ocean. The reference dataset has been validated simultaneously by both teams. An exhaustive comparison of the

  14. Agreement of mammographic measures of volumetric breast density to MRI.

    Directory of Open Access Journals (Sweden)

    Jeff Wang

    Full Text Available Clinical scores of mammographic breast density are highly subjective. Automated technologies for mammography exist to quantify breast density objectively, but the technique that most accurately measures the quantity of breast fibroglandular tissue is not known. To compare the agreement of three automated mammographic techniques for measuring volumetric breast density with a quantitative volumetric MRI-based technique in a screening population. Women were selected from the UCSF Medical Center screening population that had received both a screening MRI and digital mammogram within one year of each other, had Breast Imaging Reporting and Data System (BI-RADS) assessments of normal or benign finding, and no history of breast cancer or surgery. Agreement was assessed of three mammographic techniques (Single-energy X-ray Absorptiometry [SXA], Quantra, and Volpara) with MRI for percent fibroglandular tissue volume, absolute fibroglandular tissue volume, and total breast volume. Among 99 women, the automated mammographic density techniques were correlated with MRI measures, with R^2 values ranging from 0.40 (log fibroglandular volume) to 0.91 (total breast volume). Substantial agreement as measured by the kappa statistic was found between all percent fibroglandular tissue measures (0.72 to 0.63), but only moderate agreement for log fibroglandular volumes. The kappa statistics for all percent density measures were highest in the comparisons of the SXA and MRI results. The largest error source between MRI and the mammography techniques was found to be differences in measures of total breast volume. Automated volumetric fibroglandular tissue measures from screening digital mammograms were in substantial agreement with MRI and, if associated with breast cancer, could be used in clinical practice to enhance risk assessment and prevention.

  15. Volumetric evaluation of dual-energy perfusion CT by the presence of intrapulmonary clots using a 64-slice dual-source CT

    Energy Technology Data Exchange (ETDEWEB)

    Okada, Munemasa; Nakashima, Yoshiteru; Kunihiro, Yoshie; Nakao, Sei; Matsunaga, Naofumi [Dept. of Radiology, Yamaguchi Univ. Graduate School of Medicine, Yamaguchi (Japan)], e-mail: radokada@yamaguchi-u.ac.jp; Morikage, Noriyasu [Medical Bioregulation Dept. of Organ Regulatory Surgery, Yamaguchi Univ. Graduate School of Medicine, Yamaguchi (Japan); Sano, Yuichi [Dept. of Radiology, Yamaguchi Univ. Hospital, Yamaguchi (Japan); Suga, Kazuyoshi [Dept. of Radiology, St Hills Hospital, Yamaguchi (Japan)

    2013-07-15

    Background: Dual-energy perfusion CT (DEpCT) directly represents the iodine distribution in lung parenchyma, and low-perfusion areas caused by intrapulmonary clots (IPCs) are visualized as low-attenuation areas. Purpose: To evaluate whether volumetric evaluation of DEpCT can be used as a predictor of right heart strain in the presence of IPCs. Material and Methods: One hundred and ninety-six patients suspected of having acute pulmonary embolism (PE) underwent DEpCT using a 64-slice dual-source CT. DEpCT images were three-dimensionally reconstructed with four threshold ranges: 1-120 HU (V120), 1-15 HU (V15), 1-10 HU (V10), and 1-5 HU (V5). Each relative ratio per V120 was expressed as %V15, %V10, and %V5. Volumetric datasets were compared with D-dimer, pulmonary arterial (PA) pressure, right ventricular (RV) diameter, RV/left ventricular (RV/LV) diameter ratio, PA diameter, and PA/aorta (PA/Ao) diameter ratio. The areas under the ROC curves (AUCs) were examined for their relationship to the presence of IPCs. This study was approved by the local ethics committee. Results: PA pressure and D-dimer were significantly higher in the patients who had IPCs. In the patients with IPCs, V15, V10, V5, %V15, %V10, and %V5 were also significantly higher than in those without IPCs (P = 0.001). %V5 had a better correlation with D-dimer (r = 0.30, P < 0.001) and RV/LV diameter ratio (r = 0.27, P < 0.001), and showed a higher AUC (0.73) than the other CT measurements. Conclusion: The volumetric evaluation by DEpCT correlated with D-dimer and RV/LV diameter ratio, and the relative ratio of volumetric CT measurements with a lower attenuation threshold might be recommended for the analysis of acute PE.
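    The AUC reported for such a threshold measure is equivalent to the Mann-Whitney statistic, which can be computed directly from the two groups of scores; a pure-Python sketch with invented values for patients with and without clots (not the study's data):

```python
def roc_auc(pos, neg):
    """ROC AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive case scores above a random negative one
    (ties count as half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical low-attenuation volume ratios (%) for the two patient groups.
with_ipc = [9.1, 7.4, 6.8, 5.9]
without_ipc = [4.2, 6.1, 3.5, 7.0]
auc = roc_auc(with_ipc, without_ipc)
```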

  16. Very high frame rate volumetric integration of depth images on mobile devices.

    Science.gov (United States)

    Kähler, Olaf; Adrian Prisacariu, Victor; Yuheng Ren, Carl; Sun, Xin; Torr, Philip; Murray, David

    2015-11-01

    Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. Running such methods on mobile devices, providing users with freedom of movement and instantaneous reconstruction feedback, however remains challenging. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. We present (i) optimisations for the basic data structure, and its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system achieves frame rates of up to 47 Hz on an Nvidia Shield Tablet and 910 Hz on an Nvidia GTX Titan X GPU, or even beyond 1.1 kHz without visualisation.
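    The core idea of voxel block hashing is that small fixed-size voxel blocks are allocated on demand and looked up through a hash map keyed by integer block coordinates, so memory grows with the observed surface rather than with the bounding volume. A toy Python sketch of that data structure (the paper's optimised C++/CUDA implementation differs substantially; names and values here are invented):

```python
import numpy as np

BLOCK = 8  # 8x8x8 voxels per block, a typical choice in voxel block hashing

class SparseVolume:
    """Toy sparse TSDF-style volume: blocks are allocated lazily and kept
    in a hash map keyed by block coordinates."""

    def __init__(self, voxel_size=0.01):
        self.voxel_size = voxel_size
        self.blocks = {}  # (bx, by, bz) -> (BLOCK, BLOCK, BLOCK) float array

    def set(self, point, value):
        v = np.floor(np.asarray(point) / self.voxel_size).astype(int)
        key = tuple(v // BLOCK)                       # block containing the voxel
        blk = self.blocks.setdefault(key, np.ones((BLOCK,) * 3))  # 1.0 = empty
        blk[tuple(v % BLOCK)] = value                 # in-block voxel offset

    def get(self, point):
        v = np.floor(np.asarray(point) / self.voxel_size).astype(int)
        blk = self.blocks.get(tuple(v // BLOCK))
        return 1.0 if blk is None else blk[tuple(v % BLOCK)]

vol = SparseVolume()
vol.set((0.005, 0.005, 0.005), 0.0)   # surface sample near the origin
vol.set((1.0, 0.0, 0.0), -0.2)        # another sample one metre away
```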

  17. Efficient Algorithms for Real-Time GPU Volumetric Cloud Rendering with Enhanced Geometry

    Directory of Open Access Journals (Sweden)

    Carlos Jiménez de Parga

    2018-04-01

    Full Text Available This paper presents several new techniques for volumetric cloud rendering using efficient algorithms and data structures based on ray-tracing methods for cumulus generation, achieving an optimum balance between realism and performance. These techniques target applications such as flight simulations, computer games, and educational software, even with conventional graphics hardware. The contours of clouds are defined by implicit mathematical expressions or triangulated structures inside which volumetric rendering is performed. Novel techniques are used to reproduce the asymmetrical nature of clouds and the effects of light-scattering, with low computing costs. The work includes a new method to create randomized fractal clouds using a recursive grammar. The graphical results are comparable to those produced by state-of-the-art, hyper-realistic algorithms. These methods provide real-time performance, and are superior to particle-based systems. These outcomes suggest that our methods offer a good balance between realism and performance, and are suitable for use in the standard graphics industry.

  18. EPA Nanorelease Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — EPA Nanorelease Dataset. This dataset is associated with the following publication: Wohlleben, W., C. Kingston, J. Carter, E. Sahle-Demessie, S. Vazquez-Campos, B....

  19. In-situ volumetric topography of IC chips for defect detection using infrared confocal measurement with active structured light

    International Nuclear Information System (INIS)

    Chen, Liang-Chia; Le, Manh-Trung; Phuc, Dao Cong; Lin, Shyh-Tsong

    2014-01-01

    The article presents the development of in-situ integrated circuit (IC) chip defect detection techniques for automated clipping detection by proposing infrared imaging and full-field volumetric topography. IC chip inspection, especially during or after IC packaging, has become an extremely critical procedure in IC fabrication to assure manufacturing quality and reduce production costs. To address this, in the article, microscopic infrared imaging using an electromagnetic light spectrum that ranges from 0.9 to 1.7 µm is developed to perform volumetric inspection of IC chips, in order to identify important defects such as silicon clipping, cracking or peeling. The main difficulty of infrared (IR) volumetric imaging lies in its poor image contrast, which makes it incapable of achieving reliable inspection, as infrared imaging is sensitive to temperature difference but insensitive to geometric variance of materials, resulting in difficulty detecting and quantifying defects precisely. To overcome this, 3D volumetric topography based on 3D infrared confocal measurement with active structured light, as well as light refractive matching principles, is developed to detect the size, shape and position of defects in ICs. The experimental results show that the algorithm is effective and suitable for in-situ defect detection of IC semiconductor packaging. The quality of defect detection, such as measurement repeatability and accuracy, is addressed. Confirmed by the experimental results, the depth measurement resolution can reach up to 0.3 µm, and the depth measurement uncertainty with one standard deviation was verified to be less than 1.0% of the full-scale depth-measuring range. (paper)

  20. 3DSEM: A 3D microscopy dataset

    Directory of Open Access Journals (Sweden)

    Ahmad P. Tafti

    2016-03-01

    Full Text Available The Scanning Electron Microscope (SEM), as a 2D imaging instrument, has been widely used in many scientific disciplines, including the biological, mechanical, and materials sciences, to determine the surface attributes of microscopic objects. However, SEM micrographs remain 2D images. To effectively measure and visualize surface properties, we need to truly restore the 3D shape model from 2D SEM images. Having 3D surfaces would provide the anatomic shape of micro-samples, allowing quantitative measurements and informative visualization of the specimens being investigated. 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples. Keywords: 3D microscopy dataset, 3D microscopy vision, 3D SEM surface reconstruction, Scanning Electron Microscope (SEM)

  1. Assessment of Volumetric versus Manual Measurement in Disseminated Testicular Cancer; No Difference in Assessment between Non-Radiologists and Genitourinary Radiologist.

    Directory of Open Access Journals (Sweden)

    Çiğdem Öztürk

    Full Text Available The aim of this study was to assess the feasibility and reproducibility of semi-automatic volumetric measurement of retroperitoneal lymph node metastases in testicular cancer (TC) patients treated with chemotherapy versus standardized manual measurements based on RECIST criteria. 21 TC patients with retroperitoneal lymph node metastases of testicular cancer were studied with a CT scan of chest and abdomen before and after cisplatin-based chemotherapy. Three readers (a surgical resident, a radiological technician, and a radiologist) assessed tumor response independently using computerized volumetric analysis with Vitrea software® and manual measurement according to RECIST criteria (version 1.1). Intra- and inter-rater variability were evaluated with intraclass correlations and Bland-Altman analysis. Assessment of intra-observer and inter-observer variance proved non-significant for both measurement modalities. In particular, all intraclass correlation (ICC) values for the volumetric analysis were > .99 per observer and between observers. There was minimal bias in agreement for manual as well as volumetric analysis. In this study, volumetric measurement using Vitrea software® appears to be a reliable, reproducible method to measure initial tumor volume of retroperitoneal lymph node metastases of testicular cancer after chemotherapy. Both measurement methods can also be performed by experienced non-radiologists.
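    The intraclass correlation used in such reliability analyses can be computed from a one-way ANOVA decomposition; a NumPy sketch of ICC(1,1) with invented reader measurements (the study may have used a different ICC form):

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_raters) array:
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    x = np.asarray(ratings, float)
    n, k = x.shape
    msb = k * ((x.mean(axis=1) - x.mean()) ** 2).sum() / (n - 1)   # between subjects
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical tumor volumes (mL) measured by three readers on four patients.
vols = [[10.2, 10.1, 10.3],
        [25.0, 24.8, 25.1],
        [7.5, 7.6, 7.4],
        [40.1, 40.0, 40.3]]
icc = icc_oneway(vols)
```

    Large between-subject variance with tiny reader disagreement drives the ICC toward 1, matching the > .99 values reported above.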

  2. Active Semisupervised Clustering Algorithm with Label Propagation for Imbalanced and Multidensity Datasets

    Directory of Open Access Journals (Sweden)

    Mingwei Leng

    2013-01-01

    Full Text Available The accuracy of most existing semisupervised clustering algorithms based on a small labeled dataset is low when dealing with multidensity and imbalanced datasets, and labeling data is quite expensive and time consuming in many real-world applications. This paper focuses on active data selection and semisupervised clustering in multidensity and imbalanced datasets and proposes an active semisupervised clustering algorithm. The proposed algorithm uses an active mechanism for data selection to minimize the amount of labeled data, and it utilizes multiple thresholds to expand labeled datasets on multidensity and imbalanced datasets. Three standard datasets and one synthetic dataset are used to demonstrate the proposed algorithm, and the experimental results show that it has higher accuracy and more stable performance than other clustering and semisupervised clustering algorithms, especially when the datasets are multidensity and imbalanced.

  3. Volumetric image processing: A new technique for three-dimensional imaging

    International Nuclear Information System (INIS)

    Fishman, E.K.; Drebin, B.; Magid, D.; St Ville, J.A.; Zerhouni, E.A.; Siegelman, S.S.; Ney, D.R.

    1986-01-01

    Volumetric three-dimensional (3D) image processing was performed on CT scans of 25 normal hips, and image quality and potential diagnostic applications were assessed. In contrast to surface-detection 3D techniques, volumetric processing preserves every pixel of transaxial CT data, replacing the gray scale with transparent "gels" and shading. Anatomically accurate 3D images can be rotated and manipulated in real time, including simulated tissue-layer "peeling" and mock surgery or disarticulation. This pilot study suggests that volumetric rendering is a major advance in the signal processing of medical image data, producing a high-quality, uniquely maneuverable image that is useful for fracture interpretation, soft-tissue analysis, surgical planning, and surgical rehearsal.
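    Replacing the gray scale with transparent "gels" amounts to alpha compositing of classified voxel samples along each viewing ray; a minimal front-to-back compositing sketch (generic volume rendering, not the authors' implementation; the sample values are invented):

```python
def composite(samples):
    """Front-to-back alpha compositing of (color, alpha) samples along a
    viewing ray; transparent 'gels' correspond to small alpha values."""
    color, transmittance = 0.0, 1.0
    for c, a in samples:
        color += transmittance * a * c   # light contributed by this sample
        transmittance *= 1.0 - a         # remaining visibility behind it
        if transmittance < 1e-4:         # early ray termination
            break
    return color

# Hypothetical ray: semi-transparent tissue samples ending at opaque bone.
ray = [(0.8, 0.3), (0.5, 0.5), (0.2, 1.0)]
result = composite(ray)
```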

  4. Three-dimensional volumetric display by inclined-plane scanning

    Science.gov (United States)

    Miyazaki, Daisuke; Eto, Takuma; Nishimura, Yasuhiro; Matsushita, Kenji

    2003-05-01

    A volumetric display system based on three-dimensional (3-D) scanning that uses an inclined two-dimensional (2-D) image is described. In the volumetric display system a 2-D display unit is placed obliquely in an imaging system into which a rotating mirror is inserted. When the mirror is rotated, the inclined 2-D image is moved laterally. A locus of the moving image can be observed by persistence of vision as a result of the high-speed rotation of the mirror. Inclined cross-sectional images of an object are displayed on the display unit in accordance with the position of the image plane to observe a 3-D image of the object by persistence of vision. Three-dimensional images formed by this display system satisfy all the criteria for stereoscopic vision. We constructed the volumetric display systems using a galvanometer mirror and a vector-scan display unit. In addition, we constructed a real-time 3-D measurement system based on a light section method. Measured 3-D images can be reconstructed in the 3-D display system in real time.

  5. Volumetric characteristics and compactability of asphalt rubber mixtures with organic warm mix asphalt additives

    Directory of Open Access Journals (Sweden)

    A. M. Rodríguez-Alloza

    2017-04-01

    Full Text Available Warm Mix Asphalt (WMA refers to technologies that reduce manufacturing and compaction temperatures of asphalt mixtures allowing lower energy consumption and reducing greenhouse gas emissions from asphalt plants. These benefits, combined with the effective reuse of a solid waste product, make asphalt rubber (AR mixtures with WMA additives an excellent environmentally-friendly material for road construction. The effect of WMA additives on rubberized mixtures has not yet been established in detail and the lower mixing/compaction temperatures of these mixtures may result in insufficient compaction. In this sense, the present study uses a series of laboratory tests to evaluate the volumetric characteristics and compactability of AR mixtures with organic additives when production/compaction temperatures are decreased. The results of this study indicate that the additives selected can decrease the mixing/compaction temperatures without compromising the volumetric characteristics and compactability.

  6. A Dataset for Visual Navigation with Neuromorphic Methods

    Directory of Open Access Journals (Sweden)

    Francisco eBarranco

    2016-02-01

    Full Text Available Standardized benchmarks in Computer Vision have greatly contributed to the advance of approaches to many problems in the field. If we want to enhance the visibility of event-driven vision and increase its impact, we will need benchmarks that allow comparison among different neuromorphic methods as well as comparison to Computer Vision conventional approaches. We present datasets to evaluate the accuracy of frame-free and frame-based approaches for tasks of visual navigation. Similar to conventional Computer Vision datasets, we provide synthetic and real scenes, with the synthetic data created with graphics packages, and the real data recorded using a mobile robotic platform carrying a Dynamic and Active-pixel Vision Sensor (DAVIS) and an RGB+Depth sensor. For both datasets the cameras move with a rigid motion in a static scene, and the data includes the images, events, optic flow, 3D camera motion, and the depth of the scene, along with calibration procedures. Finally, we also provide simulated event data generated synthetically from well-known frame-based optical flow datasets.

  7. Volumetric 3-component velocimetry measurements of the flow field on the rear window of a generic car model

    Directory of Open Access Journals (Sweden)

    Tounsi Nabil

    2012-01-01

    Full Text Available Volumetric 3-component velocimetry measurements are carried out in the flow field around the rear window of a generic car model, the so-called Ahmed body. This particular flow field is known to be highly unsteady, three-dimensional and characterized by strong vortices. The volumetric velocity measurements from the present experiments provide the most comprehensive data for this flow field to date. The present study focuses on the wake flow modifications which result from using a simple flow control device, such as the one recently employed by Fourrié et al. [1]. The mean data clearly show the structure of this complex flow and confirm the drag reduction mechanism suggested by Fourrié et al. The results show that strengthening the separated flow weakens the longitudinal vortices and vice versa. The present paper shows that the volumetric 3-component velocimetry technique is a powerful tool for better understanding a three-dimensional unsteady complex flow such as the one developing around a bluff body.

  8. Volumetric, dashboard-mounted augmented display

    Science.gov (United States)

    Kessler, David; Grabowski, Christopher

    2017-11-01

    The optical design of a compact volumetric display for drivers is presented. The system displays a true volume image with realistic physical depth cues, such as focal accommodation, parallax and convergence. A large eyebox is achieved with a pupil expander. The windshield is used as the augmented reality combiner. A freeform windshield corrector is placed at the dashboard.

  9. Predicting Soil-Water Characteristics from Volumetric Contents of Pore-Size Analogue Particle Fractions

    DEFF Research Database (Denmark)

    Naveed, Muhammad; Møldrup, Per; Tuller, Markus

    *-model) for the SWC, derived from readily available soil properties such as texture and bulk density. A total of 46 soils from different horizons at 15 locations across Denmark were used for model evaluation. The Xw-model predicts the volumetric water content as a function of volumetric fines content (organic matter...... and clay). It performed reasonably well for the dry-end (above a pF value of 2.0; pF = log(|Ψ|), where Ψ is the matric potential in cm), but did not do as well closer to saturated conditions. The Xw*-model gives the volumetric water content as a function of the volumetric content of particle size fractions...... (organic matter, clay, silt, fine and coarse sand), variably included in the model depending on the pF value. The volumetric content of a particular soil particle size fraction was included in the model if it was assumed to contribute to the pore size fraction still occupied with water at the given p
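The pF notation used in the abstract (pF = log(|Ψ|), with the matric potential Ψ expressed in cm of water) can be computed directly; the helper below is a minimal sketch of that definition, not code from the paper.

```python
import math

def pF(matric_potential_cm):
    """pF = log10(|psi|), with the matric potential psi in cm of water column."""
    return math.log10(abs(matric_potential_cm))
```

A matric potential of -100 cm corresponds to pF 2.0, the dry-end boundary mentioned above, while the conventional wilting point (about -15,000 cm) corresponds to pF ≈ 4.2.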

  10. BanglaLekha-Isolated: A multi-purpose comprehensive dataset of Handwritten Bangla Isolated characters

    Directory of Open Access Journals (Sweden)

    Mithun Biswas

    2017-06-01

    Full Text Available BanglaLekha-Isolated, a Bangla handwritten isolated character dataset, is presented in this article. This dataset contains 84 different characters, comprising 50 Bangla basic characters, 10 Bangla numerals and 24 selected compound characters. For each of the 84 characters, 2000 handwriting samples were collected, digitized and pre-processed. After discarding mistakes and scribbles, 166,105 handwritten character images were included in the final dataset. The dataset also includes labels indicating the age and the gender of the subjects from whom the samples were collected. This dataset could be used not only for optical handwriting recognition research but also to explore the influence of gender and age on handwriting. The dataset is publicly available at https://data.mendeley.com/datasets/hf6sf8zrkc/2.

  11. Reference volumetric samples of gamma-spectroscopic sources

    International Nuclear Information System (INIS)

    Taskaev, E.; Taskaeva, M.; Grigorov, T.

    1993-01-01

    The purpose of this investigation is to determine the requirements for matrices of reference volumetric radiation sources necessary for detector calibration. The first stage of this determination consists in analysing some available organic and nonorganic materials. Different sorts of food, grass, plastics, minerals and building materials have been considered, taking into account the various procedures of their processing (grinding, screening, homogenizing) and their properties (hygroscopicity, storage life, resistance to oxidation during gamma sterilization). The procedures of source processing, sample preparation, matrix irradiation and homogenization have been determined. A rotating homogenizing device has been developed, enabling homogenization of the matrix activity irrespective of the vessel geometry. Thirty-three standard volumetric radioactive sources have been prepared: 14 on an organic matrix and 19 on a nonorganic matrix. (author)

  12. Semi-automated volumetric analysis of artificial lymph nodes in a phantom study

    International Nuclear Information System (INIS)

    Fabel, M.; Biederer, J.; Jochens, A.; Bornemann, L.; Soza, G.; Heller, M.; Bolte, H.

    2011-01-01

    Purpose: Quantification of tumour burden in oncology requires accurate and reproducible image evaluation. The current standard is one-dimensional measurement (e.g. RECIST) with inherent disadvantages. Volumetric analysis is discussed as an alternative for therapy monitoring of lung and liver metastases. The aim of this study was to investigate the accuracy of semi-automated volumetric analysis of artificial lymph node metastases in a phantom study. Materials and methods: Fifty artificial lymph nodes were produced in a size range from 10 to 55 mm; some of them enhanced using iodine contrast media. All nodules were placed in an artificial chest phantom (artiCHEST®) within different surrounding tissues. MDCT was performed using different collimations (1–5 mm) at varying reconstruction kernels (B20f, B40f, B60f). Volume and RECIST measurements were performed using Oncology Software (Siemens Healthcare, Forchheim, Germany) and were compared to the reference volume and diameter by calculating absolute percentage errors. Results: The software performance allowed a robust volumetric analysis in a phantom setting. Unsatisfying segmentation results were frequently found for native nodules within surrounding muscle. The absolute percentage error (APE) for volumetric analysis varied between 0.01% and 225%. No significant differences were seen between different reconstruction kernels. The most unsatisfactory segmentation results occurred at higher slice thicknesses (4 and 5 mm). Contrast-enhanced lymph nodes showed better segmentation results by trend. Conclusion: The semi-automated 3D volumetric analysis software tool allows a reliable and convenient segmentation of artificial lymph nodes in a phantom setting. Lymph nodes adjacent to tissue of similar density cause segmentation problems. For volumetric analysis of lymph node metastases in clinical routine, a slice thickness of ≤3 mm and a medium soft reconstruction kernel (e.g. B40f for Siemens scan systems) may be suitable
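The absolute percentage error used here to compare segmented and reference measurements is simple arithmetic; the sketch below also includes a sphere formula for relating a RECIST-style diameter to a volume, which is a common assumption for roughly spherical nodules and not taken from the paper.

```python
import math

def absolute_percentage_error(measured, reference):
    """APE (%) of a measured volume or diameter against its reference value."""
    return abs(measured - reference) / reference * 100.0

def sphere_volume_from_diameter(d_mm):
    """Volume (mm^3) of a sphere of diameter d_mm, V = pi * d^3 / 6."""
    return math.pi * d_mm**3 / 6.0
```

For example, a segmented volume of 110 mm³ against a 100 mm³ reference gives an APE of 10%; a 10 mm nodule diameter corresponds to roughly 523.6 mm³ under the sphere assumption.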

  13. Proteomics dataset

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Carlsen, Thomas Gelsing; Ellingsen, Torkell

    2017-01-01

    The datasets presented in this article are related to the research articles entitled “Neutrophil Extracellular Traps in Ulcerative Colitis: A Proteome Analysis of Intestinal Biopsies” (Bennike et al., 2015 [1]), and “Proteome Analysis of Rheumatoid Arthritis Gut Mucosa” (Bennike et al., 2017 [2])...... been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifiers PXD001608 for ulcerative colitis and control samples, and PXD003082 for rheumatoid arthritis samples....

  14. PEMODELAN OBYEK TIGA DIMENSI DARI GAMBAR SINTETIS DUA DIMENSI DENGAN PENDEKATAN VOLUMETRIC

    Directory of Open Access Journals (Sweden)

    Rudy Adipranata

    2005-01-01

    Full Text Available In this paper, we implemented 3D object modeling from 2D input images. Modeling is performed using a volumetric reconstruction approach: the 3D space is tessellated into discrete volumes called voxels. We use the voxel coloring method to reconstruct a 3D object from synthetic input images; with voxel coloring, we obtain a photorealistic result and also solve the occlusion problem that occurs in many cases of 3D reconstruction. Photorealistic 3D object reconstruction is a challenging problem in computer graphics and still an active area today. Many applications can make use of the reconstruction result, including virtual reality, augmented reality, 3D games, and other 3D applications. Voxel coloring treats the reconstruction problem as a color reconstruction problem instead of a shape reconstruction problem. The method works by discretizing the scene space into voxels, which are then traversed and colored in a special order. The result is a photorealistic 3D object. Abstract in Bahasa Indonesia (translated): In this research, 3D object modeling from 2D images was implemented. The modeling uses a volumetric approach, in which 3D space is divided into discrete units called voxels. The voxel coloring method is then applied to these voxels to obtain a photorealistic 3D object. Producing photorealistic 3D object models is still an active problem in computer graphics, and many other applications, such as virtual reality and augmented reality, can use the modeling results. Voxel coloring models a 3D object by reconstructing color rather than shape. The method works by discretizing the object into voxels and

  15. A Comparative Analysis of Classification Algorithms on Diverse Datasets

    Directory of Open Access Journals (Sweden)

    M. Alghobiri

    2018-04-01

    Full Text Available Data mining involves the computational process of finding patterns in large data sets. Classification, one of the main domains of data mining, involves generalizing a known structure to apply it to a new dataset and predict its class. Various classification algorithms are used to classify different data sets; they are based on different methods such as probability, decision trees, neural networks, nearest neighbors, Boolean and fuzzy logic, kernel-based methods, etc. In this paper, we apply three diverse classification algorithms to ten datasets. The datasets have been selected based on their size and/or the number and nature of their attributes. Results are discussed using performance evaluation measures such as precision, accuracy, F-measure, Kappa statistics, mean absolute error, relative absolute error, and ROC area. Comparative analysis is carried out using the performance evaluation measures of accuracy, precision, and F-measure. We specify features and limitations of the classification algorithms for datasets of diverse nature.
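The evaluation measures named above reduce to confusion-matrix arithmetic; a minimal, library-free sketch for the binary case (illustrative only, not the authors' code) is:

```python
def evaluation_measures(y_true, y_pred, positive=1):
    """Accuracy, precision, recall and F-measure for a binary classification."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f_measure": f_measure}
```

For multi-class datasets like those in the paper, these per-class measures would be combined by micro- or macro-averaging.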

  16. A volumetric three-dimensional digital light photoactivatable dye display

    Science.gov (United States)

    Patel, Shreya K.; Cao, Jian; Lippert, Alexander R.

    2017-07-01

    Volumetric three-dimensional displays offer spatially accurate representations of images with a 360° view, but have been difficult to implement due to complex fabrication requirements. Herein, a chemically enabled volumetric 3D digital light photoactivatable dye display (3D Light PAD) is reported. The operating principle relies on photoactivatable dyes that become reversibly fluorescent upon illumination with ultraviolet light. Proper tuning of kinetics and emission wavelengths enables the generation of a spatial pattern of fluorescent emission at the intersection of two structured light beams. A first-generation 3D Light PAD was fabricated using the photoactivatable dye N-phenyl spirolactam rhodamine B, a commercial picoprojector, an ultraviolet projector and a custom quartz imaging chamber. The system displays a minimum voxel size of 0.68 mm3, 200 μm resolution and good stability over repeated 'on-off' cycles. A range of high-resolution 3D images and animations can be projected, setting the foundation for widely accessible volumetric 3D displays.

  17. Linking Neurons to Network Function and Behavior by Two-Photon Holographic Optogenetics and Volumetric Imaging.

    Science.gov (United States)

    Dal Maschio, Marco; Donovan, Joseph C; Helmbrecht, Thomas O; Baier, Herwig

    2017-05-17

    We introduce a flexible method for high-resolution interrogation of circuit function, which combines simultaneous 3D two-photon stimulation of multiple targeted neurons, volumetric functional imaging, and quantitative behavioral tracking. This integrated approach was applied to dissect how an ensemble of premotor neurons in the larval zebrafish brain drives a basic motor program, the bending of the tail. We developed an iterative photostimulation strategy to identify minimal subsets of channelrhodopsin (ChR2)-expressing neurons that are sufficient to initiate tail movements. At the same time, the induced network activity was recorded by multiplane GCaMP6 imaging across the brain. From this dataset, we computationally identified activity patterns associated with distinct components of the elicited behavior and characterized the contributions of individual neurons. Using photoactivatable GFP (paGFP), we extended our protocol to visualize single functionally identified neurons and reconstruct their morphologies. Together, this toolkit enables linking behavior to circuit activity with unprecedented resolution. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Homogenised Australian climate datasets used for climate change monitoring

    International Nuclear Information System (INIS)

    Trewin, Blair; Jones, David; Collins, Dean; Jovanovic, Branislava; Braganza, Karl

    2007-01-01

    Full text: The Australian Bureau of Meteorology has developed a number of datasets for use in climate change monitoring. These datasets typically cover 50-200 stations distributed as evenly as possible over the Australian continent, and have been subject to detailed quality control and homogenisation. The time period over which data are available for each element is largely determined by the availability of data in digital form. Whilst nearly all Australian monthly and daily precipitation data have been digitised, a significant quantity of pre-1957 data (for temperature and evaporation) or pre-1987 data (for some other elements) remains to be digitised, and is not currently available for use in the climate change monitoring datasets. In the case of temperature and evaporation, the start date of the datasets is also determined by major changes in instruments or observing practices for which no adjustment is feasible at the present time. The datasets currently available cover: monthly and daily precipitation (most stations commence 1915 or earlier, with many extending back to the late 19th century, and a few to the mid-19th century); annual temperature (commences 1910); daily temperature (commences 1910, with limited station coverage pre-1957); twice-daily dewpoint/relative humidity (commences 1957); monthly pan evaporation (commences 1970); cloud amount (commences 1957) (Jovanovic et al. 2007). As well as the station-based datasets listed above, an additional dataset being developed for use in climate change monitoring (and other applications) covers tropical cyclones in the Australian region. This is described in more detail in Trewin (2007). The datasets already developed are used in analyses of observed climate change, which are available through the Australian Bureau of Meteorology website (http://www.bom.gov.au/silo/products/cli_chg/). They are also used as a basis for routine climate monitoring, and in the datasets used for the development of seasonal

  19. Synthesis of Volumetric Ring Antenna Array for Terrestrial Coverage Pattern

    Directory of Open Access Journals (Sweden)

    Alberto Reyna

    2014-01-01

    Full Text Available This paper presents a synthesis of a volumetric ring antenna array for a terrestrial coverage pattern. This synthesis considers the spacing among the rings on the X-Y planes, the positions of the rings on the X-Z plane, and uniform and concentric excitations. The optimization is carried out by implementing particle swarm optimization. The synthesis is compared with previous designs, showing that this geometry provides accurate coverage for satellite applications with a maximum reduction of the antenna hardware as well as a reduction of the side lobe level.
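The abstract does not detail the particle swarm optimization (PSO) used; a generic global-best PSO, the usual starting point for array-synthesis cost functions such as side-lobe level, can be sketched as follows. All parameter values (inertia weight, acceleration coefficients, swarm size) are illustrative, not the authors' settings.

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize objective(x) over a box [lo, hi]^dim with global-best PSO."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In an array-synthesis setting, the objective would score the radiation pattern (e.g. penalizing side-lobe level and coverage ripple) for a candidate vector of ring spacings, positions, and excitations.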

  20. System analysis of formation and perception processes of three-dimensional images in volumetric displays

    Science.gov (United States)

    Bolshakov, Alexander; Sgibnev, Arthur

    2018-03-01

    The volumetric display is currently one of the most promising visualization devices. Volumetric displays are capable of visualizing complex three-dimensional information in a form as close as possible to its natural, volumetric one, without the use of special glasses. The invention and implementation of volumetric display technology will expand the opportunities for information visualization in various spheres of human activity. The article attempts to structure and describe the interrelation of the essential characteristics of objects in the area of volumetric visualization. A method is also proposed for estimating the total number of voxels perceived by observers during a 3D demonstration generated by a volumetric display with a rotating screen. In the future, it is planned to extend the described technique and implement a system for estimating the quality of generated images, depending on the types of biplanes and their initial characteristics.
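A crude version of such a voxel-count estimate, for a display that sweeps a flat screen through a revolution, is the product of the screen's pixel resolution and the number of angular positions per revolution. The sketch below is an upper bound (it ignores voxel overlap near the rotation axis and observer-perception effects that the authors' method would account for), not the authors' formula; all numbers in the example are hypothetical.

```python
def total_voxels(px_x, px_y, angular_slices):
    """Upper bound on addressable voxels for a swept-screen volumetric display."""
    return px_x * px_y * angular_slices

def voxel_rate(px_x, px_y, angular_slices, revolutions_per_s):
    """Voxels that must be driven per second to refresh the full volume."""
    return total_voxels(px_x, px_y, angular_slices) * revolutions_per_s
```

For a hypothetical 1024x768 screen shown at 360 angular positions, this gives about 2.8e8 addressable voxels per revolution, which illustrates why data bandwidth is a central constraint for rotating-screen displays.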

  1. Increasing the volumetric efficiency of Diesel engines by intake pipes

    Science.gov (United States)

    List, Hans

    1933-01-01

    Development of a method for calculating the volumetric efficiency of piston engines with intake pipes. Application of this method to the scavenging pumps of two-stroke-cycle engines with crankcase scavenging and to four-stroke-cycle engines. The utility of the method is demonstrated by volumetric-efficiency tests of the two-stroke-cycle engines with crankcase scavenging. Its practical application to the calculation of intake pipes is illustrated by example.

  2. Wind and wave dataset for Matara, Sri Lanka

    Directory of Open Access Journals (Sweden)

    Y. Luo

    2018-01-01

    Full Text Available We present a continuous in situ hydro-meteorology observational dataset from a set of instruments first deployed in December 2012 in the south of Sri Lanka, facing toward the north Indian Ocean. In these waters, simultaneous records of wind and wave data are sparse due to difficulties in deploying measurement instruments, although the area hosts one of the busiest shipping lanes in the world. This study describes the survey, deployment, and measurements of wind and waves, with the aim of offering future users of the dataset as comprehensive information as possible. This dataset advances our understanding of the nearshore hydrodynamic processes and wave climate, including sea waves and swells, in the north Indian Ocean. Moreover, it is a valuable resource for ocean model parameterization and validation. The archived dataset (Table 1) is examined in detail, including wave data at two locations with water depths of 20 and 10 m comprising synchronous time series of wind, ocean astronomical tide, air pressure, etc. In addition, we use these wave observations to evaluate the ERA-Interim reanalysis product. Based on Buoy 2 data, the swells are the main component of waves year-round, although monsoons can markedly alter the proportion between swell and wind sea. The dataset (Luo et al., 2017) is publicly available from Science Data Bank (https://doi.org/10.11922/sciencedb.447).

  3. Volumetric Arterial Wall Shear Stress Calculation Based on Cine Phase Contrast MRI

    NARCIS (Netherlands)

    Potters, Wouter V.; van Ooij, Pim; Marquering, Henk; VanBavel, Ed; Nederveen, Aart J.

    2015-01-01

    Purpose: To assess the accuracy and precision of a volumetric wall shear stress (WSS) calculation method applied to cine phase contrast magnetic resonance imaging (PC-MRI) data. Materials and Methods: Volumetric WSS vectors were calculated in software phantoms. WSS algorithm parameters were optimized
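Wall shear stress is the product of dynamic viscosity and the wall-normal velocity gradient. The one-sided finite-difference estimate below is a deliberately simple sketch assuming a no-slip wall; the volumetric method assessed in the paper fits 3D velocity profiles near the wall rather than using this form, and the viscosity value in the example is a nominal blood viscosity, not from the paper.

```python
def wall_shear_stress(mu_pa_s, u_near_wall_m_s, wall_distance_m):
    """tau_w = mu * du/dn, one-sided difference assuming u = 0 at the wall."""
    return mu_pa_s * u_near_wall_m_s / wall_distance_m
```

With mu = 3.2e-3 Pa·s and a velocity of 0.1 m/s measured 0.5 mm from the wall, this gives tau_w = 0.64 Pa, within the range typically reported for large arteries.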

  4. Tandem Gravimetric and Volumetric Apparatus for Methane Sorption Measurements

    Science.gov (United States)

    Burress, Jacob; Bethea, Donald

    Concerns about global climate change have driven the search for alternative fuels. Natural gas (NG, methane) is a cleaner fuel than gasoline and abundantly available due to hydraulic fracturing. One hurdle to the adoption of NG vehicles is the bulky cylindrical storage vessels needed to store the NG at high pressures (3600 psi, 250 bar). The adsorption of methane in microporous materials can store large amounts of methane at low enough pressures to allow conformable, "flat" pressure vessels. The measurement of the amount of gas stored in sorbent materials is typically done by measuring pressure differences (volumetric, manometric) or masses (gravimetric). Volumetric instruments of the Sievert type have uncertainties that compound with each additional measurement. Therefore, the highest-pressure measurement has the largest uncertainty. Gravimetric instruments don't have that drawback, but can have issues with buoyancy corrections. An instrument will be presented with which methane adsorption measurements can be performed using both volumetric and gravimetric methods in tandem. The gravimetric method presented has no buoyancy corrections and low uncertainty. Therefore, the gravimetric measurements can be performed throughout an entire isotherm or just at the extrema to verify the results from the volumetric measurements. Results from methane sorption measurements on an activated carbon (MSC-30) and a metal-organic framework (Cu-BTC, HKUST-1, MOF-199) will be shown. New recommendations for calculations of gas uptake and uncertainty measurements will be discussed.
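In a Sieverts-type (volumetric/manometric) measurement, the adsorbed amount is inferred from pressure changes via the gas law, n = pV/(zRT), with the compressibility factor z accounting for non-ideality (important for methane at high pressure). The sketch below illustrates that bookkeeping in principle only; it is not the instrument's actual data-reduction code, and the z value must come from an equation of state.

```python
R = 8.314  # gas constant, J/(mol K)

def moles_real_gas(p_pa, v_m3, t_k, z=1.0):
    """Moles of gas in a known volume: n = pV/(zRT); z = 1.0 means ideal gas."""
    return p_pa * v_m3 / (z * R * t_k)

def excess_uptake(n_dosed, n_remaining_gas):
    """Moles adsorbed = moles dosed into the cell minus moles still in the gas phase."""
    return n_dosed - n_remaining_gas
```

Because each dose's uncertainty carries into the next, the uncertainty of the cumulative uptake grows along the isotherm, which is the compounding effect the abstract describes.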

  5. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Full Text Available The accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the configuration RTTTR (tilting head B-axis and rotary table on the workpiece side A′) was set up taking into consideration rigid body kinematics and homogeneous transformation matrices, in which 43 error components are included; each of these components can separately reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of the workpiece is determined by the position of the cutting tool center point (TCP) relative to the workpiece; when the cutting tool deviates from its ideal position relative to the workpiece, a machining error results. The compensation process consists of detecting the present tool path, analyzing the geometric error of the RTTTR five-axis CNC machine tool, translating the current component positions to compensated positions using the kinematic error model, converting the newly created components to new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.
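The rigid-body kinematics mentioned above composes 4x4 homogeneous transformation matrices (HTMs), one per axis, with small-angle error terms inserted along the chain. The sketch below shows that composition at first order with made-up error values; it is a toy illustration, not the paper's 43-component model.

```python
def matmul(a, b):
    """Product of two 4x4 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(dx, dy, dz):
    """HTM for a pure translation (e.g. an axis position or a positioning error)."""
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

def small_rotation(ex, ey, ez):
    """First-order HTM for small angular errors (rad) about x, y, z."""
    return [[1, -ez, ey, 0], [ez, 1, -ex, 0], [-ey, ex, 1, 0], [0, 0, 0, 1]]

def apply(htm, p):
    """Map a 3D point through an HTM."""
    x, y, z = p
    v = [x, y, z, 1.0]
    return [sum(htm[i][j] * v[j] for j in range(4)) for i in range(3)]
```

Chaining one such HTM per axis (with its geometric error terms) and comparing the resulting TCP position to the ideal one yields the volumetric error to be compensated.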

  6. Volumetric evaluation of dual-energy perfusion CT by the presence of intrapulmonary clots using a 64-slice dual-source CT

    International Nuclear Information System (INIS)

    Okada, Munemasa; Nakashima, Yoshiteru; Kunihiro, Yoshie; Nakao, Sei; Matsunaga, Naofumi; Morikage, Noriyasu; Sano, Yuichi; Suga, Kazuyoshi

    2013-01-01

    Background: Dual-energy perfusion CT (DEpCT) directly represents the iodine distribution in lung parenchyma, and low-perfusion areas caused by intrapulmonary clots (IPCs) are visualized as low-attenuation areas. Purpose: To evaluate whether volumetric evaluation of DEpCT can be used as a predictor of right heart strain in the presence of IPCs. Material and Methods: One hundred and ninety-six patients suspected of having acute pulmonary embolism (PE) underwent DEpCT using a 64-slice dual-source CT. DEpCT images were three-dimensionally reconstructed with four threshold ranges: 1-120 HU (V120), 1-15 HU (V15), 1-10 HU (V10), and 1-5 HU (V5). Each relative ratio per V120 was expressed as %V15, %V10, and %V5. Volumetric datasets were compared with D-dimer, pulmonary arterial (PA) pressure, right ventricular (RV) diameter, RV/left ventricular (RV/LV) diameter ratio, PA diameter, and PA/aorta (PA/Ao) diameter ratio. The areas under the ROC curves (AUCs) were examined for their relationship to the presence of IPCs. This study was approved by the local ethics committee. Results: PA pressure and D-dimer were significantly higher in patients with IPCs. In patients with IPCs, V15, V10, V5, %V15, %V10, and %V5 were also significantly higher than in those without IPCs (P = 0.001). %V5 had a better correlation with D-dimer (r = 0.30). Volumetric evaluation with DEpCT correlated with D-dimer and the RV/LV diameter ratio, and the relative ratio of volumetric CT measurements with a lower attenuation threshold might be recommended for the analysis of acute PE
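The relative low-attenuation volumes are simply counts of voxels falling in each HU threshold range, normalized by the 1-120 HU volume. A sketch of that computation over a flat list of HU values follows; the threshold ranges are as defined in the abstract, but the code itself is illustrative, not the study's software.

```python
def volume_fractions(hu_values, thresholds=(15, 10, 5)):
    """Relative ratios %V_t = count(1 <= HU <= t) / count(1 <= HU <= 120) * 100."""
    v120 = sum(1 for h in hu_values if 1 <= h <= 120)
    return {f"%V{t}": 100.0 * sum(1 for h in hu_values if 1 <= h <= t) / v120
            for t in thresholds}
```

Multiplying each count by the voxel volume would give the absolute volumes V15, V10, and V5 instead of the relative ratios.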

  7. Layering of Structure in the North American Upper Mantle: Combining Short Period Constraints and Full Waveform Tomography

    Science.gov (United States)

    Roy, C.; Calo, M.; Bodin, T.; Romanowicz, B. A.

    2016-12-01

    Recent receiver function (RF) studies of the North American craton suggest the presence of layering within the cratonic lithosphere, with significant lateral variations in depth. However, the location and character of these discontinuities depend on assumptions made about the background 3D velocity model. On the other hand, the implementation of the Spectral Element Method (SEM) for the computation of the seismic wavefield in 3D structures is allowing improved resolution of volumetric structure in full waveform tomography. The corresponding computations are however very heavy and limit our ability to attain short enough periods to resolve short-scale features such as the existence and lateral variations of discontinuities. In order to overcome these limitations, we have developed a methodology that combines full waveform inversion tomography and information provided by short-period seismic observables. In a first step, we constructed a 3D discontinuous radially anisotropic starting model combining 1D models calculated using RF and Love and Rayleigh wave dispersion data in a Bayesian framework using trans-dimensional MCMC inversion at a collection of 30 stations across the North American continent (Calò et al., 2016). This model was then interpolated and smoothed using a procedure based on residual homogenization (Capdeville et al. 2013) and serves as input model for full waveform tomography using a three-component waveform dataset previously collected (Yuan et al., 2014). The homogenization is necessary to avoid meshing problems and heavy SEM computations. In a second step, several iterations of the full waveform inversion are performed until convergence, using a regional SEM code for forward computations (RegSEM, Cupillard et al., 2012). Results of the inversion are volumetric velocity perturbations around the homogenized starting model, which are then added to the discontinuous 3D starting model.
The final result is a multiscale discontinuous model containing both short and

  8. QUANTITATIVE ESTIMATION OF VOLUMETRIC ICE CONTENT IN FROZEN GROUND BY DIPOLE ELECTROMAGNETIC PROFILING METHOD

    Directory of Open Access Journals (Sweden)

    L. G. Neradovskiy

    2018-01-01

    Full Text Available Volumetric estimation of the ice content in frozen soils is one of the main problems in engineering geocryology and permafrost geophysics. A new way to use the known method of dipole electromagnetic profiling for the quantitative estimation of the volumetric ice content in frozen soils is discussed. Investigations of the railroad foundation in Yakutia (i.e., in the permafrost zone) are used as an example of this new approach. Unlike the conventional approach, in which permafrost is investigated through its resistivity and the construction of geo-electrical cross-sections, the new approach studies the dynamics of the attenuation process in the layer of the annual heat cycle in the field of a high-frequency vertical magnetic dipole. This task is simplified if, instead of all the characteristics of the polarization ellipse, only the vertical component of the dipole field, which is the most easily measured, is recorded. The collected measurements were used to analyze the computational errors of the average values of the volumetric ice content derived from the amplitude attenuation of the vertical component of the dipole field. Note that the volumetric ice content is very important for construction. It is shown that the relative error of computing this characteristic of a frozen soil usually does not exceed 20% if the work is performed by the above procedure using the key-site methodology. This level of accuracy meets the requirements of design-and-survey work for quick, inexpensive, and environmentally friendly zoning of built-up remote and sparsely populated territories of the Russian permafrost zone according to the degree of ice content in the frozen foundations of engineering constructions.

  9. Volumetric B1+ mapping of the brain at 7T using DREAM.

    Science.gov (United States)

    Nehrke, Kay; Versluis, Maarten J; Webb, Andrew; Börnert, Peter

    2014-01-01

    To tailor and optimize the Dual Refocusing Echo Acquisition Mode (DREAM) approach for volumetric B1+ mapping of the brain at 7T. A new DREAM echo timing scheme based on the virtual stimulated echo was derived to minimize potential effects of transverse relaxation. Furthermore, the DREAM B1+ mapping performance was investigated in simulations and experimentally in phantoms and volunteers for volumetric applications, studying and optimizing the accuracy of the sequence with respect to saturation effects, slice profile imperfections, and T1 and T2 relaxation. Volumetric brain protocols were compiled for different isotropic resolutions (5-2.5 mm) and SENSE factors, and were studied in vivo for different RF drive modes (circular/linear polarization) and the application of dielectric pads. Volumetric B1+ maps with good SNR at 2.5 mm isotropic resolution were acquired in about 20 s or less. The specific absorption rate was well below the safety limits for all scans. Mild flow artefacts were observed in the large vessels. Moreover, a slight contrast in the ventricle was observed in the B1+ maps, which could be attributed to T1 and T2 relaxation effects. DREAM enables safe, very fast, and robust volumetric B1+ mapping of the brain at ultrahigh fields. Copyright © 2013 Wiley Periodicals, Inc.
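DREAM derives the actual flip angle per voxel from the ratio of the stimulated-echo and FID signals acquired in a single shot; the commonly quoted relation is α = arctan(sqrt(2·|S_STE|/|S_FID|)). The helper below sketches that relation with hypothetical signal values; it is a simplified illustration of the principle, not the optimized timing scheme described in the abstract.

```python
import math

def dream_flip_angle_deg(s_ste, s_fid):
    """Actual flip angle (degrees) from the DREAM STE/FID signal ratio."""
    return math.degrees(math.atan(math.sqrt(2.0 * abs(s_ste) / abs(s_fid))))
```

A relative B1+ map is then obtained by dividing the measured flip angle by the nominal flip angle at each voxel.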

  10. A new method for calculating volumetric sweep efficiency using streamline simulation concepts

    International Nuclear Information System (INIS)

    Hidrobo, E A

    2000-01-01

    One of the purposes of reservoir engineering is to quantify volumetric sweep efficiency for optimizing reservoir management decisions. The estimation of this parameter has always been a difficult task. Until now, sweep efficiency correlations and calculations have been limited to mostly homogeneous 2-D cases. Calculating volumetric sweep efficiency in a 3-D heterogeneous reservoir becomes difficult due to the inherent complexity of multiple layers and arbitrary well configurations. In this paper, a new method for computing volumetric sweep efficiency for any arbitrary heterogeneity and well configuration is presented. The proposed method is based on Datta-Gupta and King's formulation of streamline time-of-flight (1995). Given that the time-of-flight reflects the fluid front propagation at various times, the connectivity in the time-of-flight represents a direct measure of the volumetric sweep efficiency. The proposed approach has been applied to synthetic as well as field examples. Synthetic examples are used to validate the volumetric sweep efficiency calculations using the streamline time-of-flight connectivity criterion by comparison with analytic solutions and published correlations. The field example, which illustrates the feasibility of the approach for large-scale field applications, is from the North Robertson unit, a low-permeability carbonate reservoir in west Texas.
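The connectivity criterion lends itself to a compact sketch: once each cell (or streamline segment) carries a time-of-flight and a pore volume, the swept fraction at a given time is simply the share of pore volume the front has already reached. The following is an illustrative reduction with made-up numbers, not the authors' implementation:

```python
def sweep_efficiency(tof, pore_vol, t):
    """Fraction of total pore volume contacted by the front at time t."""
    swept = sum(v for tau, v in zip(tof, pore_vol) if tau <= t)
    return swept / sum(pore_vol)

# Toy 5-cell model: cells near the injector have small time-of-flight.
tof = [0.5, 1.0, 2.0, 4.0, 8.0]            # arbitrary time units
pore_vol = [10.0, 10.0, 10.0, 10.0, 10.0]  # pore volume per cell

print(sweep_efficiency(tof, pore_vol, 2.0))  # -> 0.6
```

Because the time-of-flight field is computed once, the sweep efficiency at any time follows without re-running the flow simulation.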

  11. Discovery and Reuse of Open Datasets: An Exploratory Study

    Directory of Open Access Journals (Sweden)

    Sara

    2016-07-01

    Full Text Available Objective: This article analyzes twenty cited or downloaded datasets and the repositories that house them, in order to produce insights that can be used by academic libraries to encourage discovery and reuse of research data in institutional repositories. Methods: Using Thomson Reuters’ Data Citation Index and repository download statistics, we identified twenty cited/downloaded datasets. We documented the characteristics of the cited/downloaded datasets and their corresponding repositories in a self-designed rubric. The rubric includes six major categories: basic information; funding agency and journal information; linking and sharing; factors to encourage reuse; repository characteristics; and data description. Results: Our small-scale study suggests that cited/downloaded datasets generally comply with basic recommendations for facilitating reuse: data are documented well; formatted for use with a variety of software; and shared in established, open access repositories. Three significant factors also appear to contribute to dataset discovery: publishing in discipline-specific repositories; indexing in more than one location on the web; and using persistent identifiers. The cited/downloaded datasets in our analysis came from a few specific disciplines, and tended to be funded by agencies with data publication mandates. Conclusions: The results of this exploratory research provide insights that can inform academic librarians as they work to encourage discovery and reuse of institutional datasets. Our analysis also suggests areas in which academic librarians can target open data advocacy in their communities in order to begin to build open data success stories that will fuel future advocacy efforts.

  12. Region-of-interest volumetric visual hull refinement

    KAUST Repository

    Knoblauch, Daniel; Kuester, Falko

    2010-01-01

    This paper introduces a region-of-interest visual hull refinement technique, based on flexible voxel grids for volumetric visual hull reconstructions. Region-of-interest refinement is based on a multipass process, beginning with a focussed visual

  13. Parallel imaging: is GRAPPA a useful acquisition tool for MR imaging intended for volumetric brain analysis?

    Directory of Open Access Journals (Sweden)

    Frank Anders

    2009-08-01

    Full Text Available Abstract Background The work presented here investigates parallel imaging applied to T1-weighted high resolution imaging for use in longitudinal volumetric clinical studies involving Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI) patients. This was in an effort to shorten acquisition times to minimise the risk of motion artefacts caused by patient discomfort and disorientation. The principal question is, "Can parallel imaging be used to acquire images at 1.5 T of sufficient quality to allow volumetric analysis of patient brains?" Methods Optimisation studies were performed on a young healthy volunteer and the selected protocol (including the use of two different parallel imaging acceleration factors) was then tested on a cohort of 15 elderly volunteers including MCI and AD patients. In addition to automatic brain segmentation, hippocampus volumes were manually outlined and measured in all patients. The 15 patients were scanned on a second occasion approximately one week later using the same protocol and evaluated in the same manner to test repeatability of measurement using images acquired with the GRAPPA parallel imaging technique applied to the MPRAGE sequence. Results Intraclass correlation tests show almost perfect agreement between repeated measurements of both segmented brain parenchyma fraction and regional measurements of hippocampi. The protocol is suitable for both global and regional volumetric measurement in dementia patients. Conclusion In summary, these results indicate that parallel imaging can be used without detrimental effect to brain tissue segmentation and volumetric measurement and should be considered for both clinical and research studies where longitudinal measurements of brain tissue volumes are of interest.

  14. Comparative Study of the Volumetric Methods Calculation Using GNSS Measurements

    Science.gov (United States)

    Şmuleac, Adrian; Nemeş, Iacob; Alina Creţan, Ioana; Sorina Nemeş, Nicoleta; Şmuleac, Laura

    2017-10-01

    This paper aims to perform volumetric calculations for different mineral aggregates using different methods of analysis and to compare the results. For this comparative study, two licensed software packages were chosen, TopoLT 11.2 and Surfer 13. TopoLT is a program dedicated to the development of topographic and cadastral plans: 3D terrain models, level curves, calculation of cut and fill volumes, and georeferencing of images. Surfer 13, produced by Golden Software in 1983, is mainly used in various fields such as agriculture, construction, geophysics, geotechnical engineering, GIS, and water resources. It can also generate GRID terrain models, produce density maps using the method of isolines, perform volumetric calculations, and build 3D maps, and it reads different file types, including SHP, DXF and XLSX. This paper presents a comparison of volumetric calculations made with TopoLT by two methods: one in which a 3D model is chosen for both the bottom and the top surface, and one in which a 3D terrain model is chosen for the bottom surface and another 3D model for the top surface. The two variants are compared with the results of volumetric calculations performed with Surfer 13 by generating a GRID terrain model. The topographical measurements were performed with Leica GPS 1200 Series equipment. Measurements were made using the Romanian position determination system ROMPOS, which ensures accurate positioning of reference points and coordinates in ETRS through the National Network of GNSS Permanent Stations. GPS data processing was performed with the program Leica Geo Combined Office. For the volumetric calculations the GPS points are in the 1970 Stereographic projection system, and altitudes are referenced to the 1975 Black Sea system.
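At its core, the GRID-based cut-and-fill computation that both packages perform reduces to summing signed height differences between the two surfaces over the grid cells. The sketch below is illustrative only; TopoLT and Surfer use their own gridding and integration schemes:

```python
def grid_volumes(z_top, z_bottom, cell_area):
    """Return (fill, cut) volumes between two gridded surfaces.

    z_top, z_bottom: 2D lists of elevations on the same grid.
    cell_area: plan area of one grid cell.
    """
    fill = cut = 0.0
    for zt_row, zb_row in zip(z_top, z_bottom):
        for zt, zb in zip(zt_row, zb_row):
            d = (zt - zb) * cell_area
            if d >= 0:
                fill += d        # top surface above bottom: material to fill
            else:
                cut -= d         # top surface below bottom: material to cut
    return fill, cut

# Toy 2x2 grid with 4 m2 cells.
z_top = [[2.0, 2.0], [1.0, 0.5]]
z_bottom = [[1.0, 1.0], [1.0, 1.0]]
print(grid_volumes(z_top, z_bottom, cell_area=4.0))  # -> (8.0, 2.0)
```

Real packages refine this with interpolation at the zero-difference contour, which matters when a cell straddles the cut/fill boundary.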

  15. Volumetric breast density estimation from full-field digital mammograms.

    Science.gov (United States)

    van Engeland, Saskia; Snoeren, Peter R; Huisman, Henkjan; Boetes, Carla; Karssemeijer, Nico

    2006-03-01

    A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM). The thickness of dense tissue mapping to a pixel is determined by using a physical model of image acquisition. This model is based on the assumption that the breast is composed of two types of tissue, fat and parenchyma. Effective linear attenuation coefficients of these tissues are derived from empirical data as a function of tube voltage (kVp), anode material, filtration, and compressed breast thickness. By employing these, tissue composition at a given pixel is computed after performing breast thickness compensation, using a reference value for fatty tissue determined by the maximum pixel value in the breast tissue projection. Validation has been performed using 22 FFDM cases acquired with a GE Senographe 2000D by comparing the volume estimates with volumes obtained by semi-automatic segmentation of breast magnetic resonance imaging (MRI) data. The correlation between MRI and mammography volumes was 0.94 on a per image basis and 0.97 on a per patient basis. Using the dense tissue volumes from MRI data as the gold standard, the average relative error of the volume estimates was 13.6%.
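Under the two-tissue model described above, the dense-tissue thickness at a pixel follows from the log-ratio of the pixel intensity to the fatty-tissue reference. A minimal sketch with hypothetical attenuation coefficients and pixel values (the paper's empirically derived coefficients depend on kVp, anode material, filtration, and compressed breast thickness):

```python
import math

def dense_thickness(i_pixel, i_fat, mu_fat, mu_dense, thickness_cm):
    """Dense-tissue thickness (cm) mapping to one pixel.

    From mu_fat*h_fat + mu_dense*h_dense = total attenuation and
    h_fat + h_dense = H, it follows that
    ln(i_fat / i_pixel) = (mu_dense - mu_fat) * h_dense.
    """
    h_d = math.log(i_fat / i_pixel) / (mu_dense - mu_fat)
    return min(max(h_d, 0.0), thickness_cm)  # clamp to [0, H]

# Hypothetical effective linear attenuation coefficients (1/cm).
mu_f, mu_d = 0.45, 0.80
h = dense_thickness(i_pixel=600.0, i_fat=1000.0,
                    mu_fat=mu_f, mu_dense=mu_d, thickness_cm=5.0)
# Summing h * pixel_area over the breast projection gives the dense volume.
```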

  16. A cross-country Exchange Market Pressure (EMP) dataset

    Directory of Open Access Journals (Sweden)

    Mohit Desai

    2017-06-01

    Full Text Available The data presented in this article are related to the research article titled “An exchange market pressure measure for cross country analysis” (Patnaik et al. [1]). In this article, we present the dataset of Exchange Market Pressure (EMP) values for 139 countries along with their conversion factors, ρ (rho). Exchange Market Pressure, expressed as a percentage change in the exchange rate, measures the change in the exchange rate that would have taken place had the central bank not intervened. The conversion factor ρ can be interpreted as the change in the exchange rate associated with $1 billion of intervention. Estimates of the conversion factor ρ allow us to calculate a monthly time series of EMP for 139 countries. Additionally, the dataset contains the 68% confidence interval (high and low values) for the point estimates of the ρ’s. Using the standard errors of the estimates of the ρ’s, we obtain one-sigma intervals around the mean estimates of the EMP values. These values are also reported in the dataset.
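The definition above can be written down directly: EMP in percent is the observed percentage change in the exchange rate plus ρ times the intervention in billions of dollars. The numbers below are invented for illustration and the sign convention is simplified:

```python
def emp(pct_change_fx, intervention_bn, rho):
    """EMP (%) = observed %-change in exchange rate + rho * intervention ($bn)."""
    return pct_change_fx + rho * intervention_bn

# The currency depreciated 1.0% while the central bank intervened with
# $2bn; with rho = 0.5 %/bn the pressure exceeded the observed move.
print(emp(1.0, 2.0, 0.5))  # -> 2.0
```

With no intervention the EMP collapses to the observed exchange-rate change, which is the intended interpretation of the measure.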

  17. Plant fibre composites - porosity and volumetric interaction

    DEFF Research Database (Denmark)

    Madsen, Bo; Thygesen, Anders; Lilholt, Hans

    2007-01-01

    Plant fibre composites typically contain a relatively large amount of porosity, which considerably influences the properties and performance of the composites. The large porosity must be integrated in the conversion of weight fractions into volume fractions of the fibre and matrix parts. A model is presented to predict the porosity as a function of the fibre weight fraction, and to calculate the related fibre and matrix volume fractions, as well as the density of the composite. The model predicts two cases of composite volumetric interaction separated by a transition fibre weight fraction, at which the combination of a high fibre volume fraction, a low porosity and a high composite density is optimal. Experimental data from the literature on the volumetric composition and density of four types of plant fibre composites are used to validate the model. It is demonstrated that the model provides a concept...
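The weight-to-volume conversion with porosity uses only standard mixture relations. In the sketch below the porosity is an assumed input rather than the model's predicted value, and the densities are illustrative:

```python
def volumetric_composition(w_f, rho_f, rho_m, v_p):
    """Volume fractions and density for fibre weight fraction w_f,
    fibre/matrix densities rho_f/rho_m, and porosity fraction v_p."""
    w_m = 1.0 - w_f
    v_solid = w_f / rho_f + w_m / rho_m   # solid volume per unit mass
    rho_c = (1.0 - v_p) / v_solid         # composite density
    v_fibre = (1.0 - v_p) * (w_f / rho_f) / v_solid
    v_matrix = (1.0 - v_p) * (w_m / rho_m) / v_solid
    return v_fibre, v_matrix, rho_c

# Example: 40 wt% fibre (1.5 g/cm3) in a 1.2 g/cm3 matrix, 5% porosity;
# the three volume fractions (fibre, matrix, porosity) sum to 1.
vf, vm, rho_c = volumetric_composition(0.40, 1.5, 1.2, 0.05)
```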

  18. Volumetric 3D display using a DLP projection engine

    Science.gov (United States)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on the high speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" in a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  19. A new laboratory-scale experimental facility for detailed aerothermal characterizations of volumetric absorbers

    Science.gov (United States)

    Gomez-Garcia, Fabrisio; Santiago, Sergio; Luque, Salvador; Romero, Manuel; Gonzalez-Aguilar, Jose

    2016-05-01

    This paper describes a new modular laboratory-scale experimental facility that was designed to conduct detailed aerothermal characterizations of volumetric absorbers for use in concentrating solar power plants. Absorbers are generally considered to be the element with the highest potential for efficiency gains in solar thermal energy systems. The configuration of volumetric absorbers enables concentrated solar radiation to penetrate deep into their solid structure, where it is progressively absorbed, prior to being transferred by convection to a working fluid flowing through the structure. Current design trends towards higher absorber outlet temperatures have led to the use of complex intricate geometries in novel ceramic and metallic elements to maximize the temperature deep inside the structure (thus reducing thermal emission losses at the front surface and increasing efficiency). Although numerical models simulate the conjugate heat transfer mechanisms along volumetric absorbers, they lack, in many cases, the accuracy that is required for precise aerothermal validations. The present work aims to aid this objective by the design, development, commissioning and operation of a new experimental facility which consists of a 7 kWe (1.2 kWth) high flux solar simulator, a radiation homogenizer, inlet and outlet collector modules and a working section that can accommodate volumetric absorbers up to 80 mm × 80 mm in cross-sectional area. Experimental measurements conducted in the facility include absorber solid temperature distributions along its depth, inlet and outlet air temperatures, air mass flow rate and pressure drop, incident radiative heat flux, and overall thermal efficiency. In addition, two windows allow for the direct visualization of the front and rear absorber surfaces, thus enabling full-coverage surface temperature measurements by thermal imaging cameras. This paper presents the results from the aerothermal characterization of a siliconized silicon

  20. Erosion of water-based cements evaluated by volumetric and gravimetric methods.

    Science.gov (United States)

    Nomoto, Rie; Uchida, Keiko; Momoi, Yasuko; McCabe, John F

    2003-05-01

    To compare the erosion of glass ionomer, zinc phosphate and polycarboxylate cements using volumetric and gravimetric methods. For the volumetric method, the eroded depth of cement placed in a cylindrical cavity in PMMA was measured using a dial gauge after immersion in an eroding solution. For the gravimetric method, the weight of the residue of a solution in which a cylindrical specimen had been immersed was measured. 0.02 M lactic acid solution (0.02 M acid) and 0.1 M lactic acid/sodium lactate buffer solution (0.1 M buffer) were used as eroding solutions. The pH of both solutions was 2.74 and the test period was 24 h. The ranking of eroded depth and weight of residue was polycarboxylate > zinc phosphate > glass ionomers. Differences in erosion were more clearly defined by differences in eroded depth than by differences in weight of residue. In 0.02 M acid, the erosion of glass ionomer measured by the volumetric method was affected by hygroscopic expansion. In 0.1 M buffer, the erosion of polycarboxylate and zinc phosphate measured by the volumetric method was much greater than that measured by the gravimetric method. This is explained by cryo-SEM images which show many holes in the surface of specimens after erosion. It appears that zinc oxide is dissolved leaving a spongy matrix which easily collapses under the force applied by the dial gauge during measurement. The volumetric method that employs the eroded depth of cement using a 0.1 M buffer solution is able to quantify erosion and to make material comparisons.
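For a cylindrical specimen the two measures can be cross-checked: uniform dissolution to a depth d should leave a residue mass of roughly density × cross-sectional area × d, so a volumetric loss far exceeding the gravimetric one points to a collapsed spongy layer rather than true dissolution. The numbers below are hypothetical:

```python
import math

def expected_residue_mg(eroded_depth_mm, diameter_mm, density_mg_mm3):
    """Residue mass implied by uniform dissolution of a cylinder face."""
    area = math.pi * (diameter_mm / 2.0) ** 2
    return density_mg_mm3 * area * eroded_depth_mm

# Hypothetical specimen: 6 mm diameter, density 2.1 mg/mm3, 0.30 mm depth.
m = expected_residue_mg(0.30, 6.0, 2.1)   # ~17.8 mg
```

If the measured residue were far below this figure for the same dial-gauge depth, the discrepancy would suggest mechanical collapse of a porous surface layer, as the cryo-SEM images indicate.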

  1. Correlation of volumetric mismatch and mismatch of Alberta Stroke program Early CT scores on CT perfusion maps

    International Nuclear Information System (INIS)

    Lin, Ke; Rapalino, Otto; Lee, Benjamin; Do, Kinh G.; Sussmann, Amado R.; Pramanik, Bidyut K.; Law, Meng

    2009-01-01

    We aimed to determine if volumetric mismatch between tissue at risk and tissue destined to infarct on computed tomography perfusion (CTP) can be described by the mismatch of Alberta Stroke Program Early CT Scores (ASPECTS). Forty patients with nonlacunar middle cerebral artery infarcts were studied; tissue at risk and tissue destined to infarct were defined by MTT > 6 s and CBV < 2.0 mL per 100 g, respectively. Two other raters assigned ASPECTS to the same MTT and CBV maps while blinded to the volumetric data. Volumetric mismatch was deemed present if ≥20%. ASPECTS mismatch (=CBV ASPECTS - MTT ASPECTS) was deemed present if ≥1. Correlation between the two types of mismatches was assessed by Spearman's coefficient (ρ). ROC curve analyses were performed to determine the optimal ASPECTS mismatch cut point for volumetric mismatch ≥20%, ≥50%, ≥100%, and ≥150%. Median volumetric mismatch was 130% (range 10.9-2,031%) with 31 (77.5%) being ≥20%. Median ASPECTS mismatch was 2 (range 0-6) with 26 (65%) being ≥1. ASPECTS mismatch correlated strongly with volumetric mismatch with ρ = 0.763 [95% CI 0.585-0.870], p < 0.0001. Sensitivity and specificity for volumetric mismatch ≥20% were 83.9% [95% CI 65.5-93.5] and 100% [95% CI 65.9-100], respectively, using ASPECTS mismatch ≥1. Volumetric mismatch ≥50%, ≥100%, and ≥150% were optimally identified using ASPECTS mismatch ≥1, ≥2, and ≥2, respectively. On CTP, ASPECTS mismatch showed strong correlation with volumetric mismatch. ASPECTS mismatch ≥1 was the optimal cut point for volumetric mismatch ≥20%. (orig.)
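Spearman's ρ, used above, depends only on ranks. A small self-contained sketch on invented, tie-free data (not the study's 40 cases):

```python
def ranks(xs):
    """Ranks 1..n (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman's rho via the rank-difference formula (tie-free data)."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

aspects_mismatch = [0, 1, 2, 3, 4, 6]              # CBV - MTT ASPECTS
volumetric_mismatch = [10, 40, 35, 130, 200, 500]  # percent
print(spearman(aspects_mismatch, volumetric_mismatch))  # -> ~0.943
```

With ties (common for integer ASPECTS differences) the usual practice is mid-ranks, which library implementations handle automatically.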

  2. RARD: The Related-Article Recommendation Dataset

    OpenAIRE

    Beel, Joeran; Carevic, Zeljko; Schaible, Johann; Neusch, Gabor

    2017-01-01

    Recommender-system datasets are used for recommender-system evaluations, training machine-learning algorithms, and exploring user behavior. While there are many datasets for recommender systems in the domains of movies, books, and music, there are rather few datasets from research-paper recommender systems. In this paper, we introduce RARD, the Related-Article Recommendation Dataset, from the digital library Sowiport and the recommendation-as-a-service provider Mr. DLib. The dataset contains ...

  3. Volumetric composition of nanocomposites

    DEFF Research Database (Denmark)

    Madsen, Bo; Lilholt, Hans; Mannila, Juha

    2015-01-01

    A method for determining the volumetric composition of nanocomposites is presented, using cellulose/epoxy and aluminosilicate/polylactate nanocomposites as case materials. The buoyancy method is used for accurate measurement of material density. The accuracy of the method is determined to be high, allowing the measured nanocomposite densities to be reported with 5 significant figures. The plotting of the measured nanocomposite density as a function of the nanofibre weight content is shown to be a good first approach for assessing the porosity content of the materials. The known gravimetric composition of the nanocomposites is converted into a volumetric composition...
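Assessing porosity from the measured density amounts to comparing it with the rule-of-mixtures density implied by the gravimetric composition. The relations are standard; the input values below are made up, not the paper's measurements:

```python
def porosity(w_f, rho_f, rho_m, rho_measured):
    """Porosity from measured density vs. the rule-of-mixtures density
    implied by the gravimetric composition (w_f = filler weight fraction)."""
    rho_theoretical = 1.0 / (w_f / rho_f + (1.0 - w_f) / rho_m)
    return 1.0 - rho_measured / rho_theoretical

# 10 wt% cellulose nanofibre (1.60 g/cm3) in epoxy (1.15 g/cm3),
# measured composite density 1.1650 g/cm3 (made-up values).
p = porosity(0.10, 1.60, 1.15, 1.1650)   # ~0.015, i.e. ~1.5% porosity
```

Because the porosity comes from a small difference between two similar densities, the 5-significant-figure density measurement emphasized above is what makes the estimate meaningful.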

  4. SU-D-18A-02: Towards Real-Time On-Board Volumetric Image Reconstruction for Intrafraction Target Verification in Radiation Therapy

    International Nuclear Information System (INIS)

    Xu, X; Iliopoulos, A; Zhang, Y; Pitsianis, N; Sun, X; Yin, F; Ren, L

    2014-01-01

    Purpose: To expedite on-board volumetric image reconstruction from limited-angle kV-MV projections for intrafraction verification. Methods: A limited-angle intrafraction verification (LIVE) system has recently been developed for real-time volumetric verification of moving targets, using limited-angle kV-MV projections. Currently, it is challenged by the intensive computational load of the prior-knowledge-based reconstruction method. To accelerate LIVE, we restructure the software pipeline to make it adaptable to model and algorithm parameter changes, while enabling efficient utilization of rapidly advancing, modern computer architectures. In particular, an innovative two-level parallelization scheme has been designed: At the macroscopic level, data and operations are adaptively partitioned, taking into account algorithmic parameters and the processing capacity or constraints of underlying hardware. The control and data flows of the pipeline are scheduled in such a way as to maximize operation concurrency and minimize total processing time. At the microscopic level, the partitioned functions act as independent modules, operating on data partitions in parallel. Each module is pre-parallelized and optimized for multi-core processors (CPUs) and graphics processing units (GPUs). Results: We present results from a parallel prototype, where most of the controls and module parallelization are carried out via Matlab and its Parallel Computing Toolbox. The reconstruction is 5 times faster on a dataset of twice the size, compared to recently reported results, without compromising on algorithmic optimization control. Conclusion: The prototype implementation and its results have served to assess the efficacy of our system concept. While a production implementation will yield much higher processing rates by approaching full-capacity utilization of CPUs and GPUs, some mutual constraints between algorithmic flow and architecture specifics remain. Based on a careful analysis

  5. Microscopy Image Browser: A Platform for Segmentation and Analysis of Multidimensional Datasets.

    Directory of Open Access Journals (Sweden)

    Ilya Belevich

    2016-01-01

    Full Text Available Understanding the structure-function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data, which is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program.

  6. Isfahan MISP Dataset.

    Science.gov (United States)

    Kashefpur, Masoud; Kafieh, Rahele; Jorjandi, Sahar; Golmohammadi, Hadis; Khodabande, Zahra; Abbasi, Mohammadreza; Teifuri, Nilufar; Fakharzadeh, Ali Akbar; Kashefpoor, Maryam; Rabbani, Hossein

    2017-01-01

    An online depository was introduced to share clinical ground truth with the public and provide open access for researchers to evaluate their computer-aided algorithms. PHP was used for web programming and MySQL for database managing. The website was entitled "biosigdata.com." It was a fast, secure, and easy-to-use online database for medical signals and images. Freely registered users could download the datasets and could also share their own supplementary materials while maintaining their privacy (citation and fee). Commenting was also available for all datasets, and automatic sitemap and semi-automatic SEO indexing have been set for the site. A comprehensive list of available websites for medical datasets is also presented as a Supplementary (http://journalonweb.com/tempaccess/4800.584.JMSS_55_16I3253.pdf).

  7. Predicting positional error of MLC using volumetric analysis

    International Nuclear Information System (INIS)

    Hareram, E.S.

    2008-01-01

    IMRT normally uses multiple beamlets (beams of small width) to deliver a particular field, so it is imperative to maintain the positional accuracy of the MLC in order to deliver the integrated computed dose accurately. Different manufacturers have reported high precision MLC devices with leaf positional accuracy nearing 0.1 mm, but measuring and rectifying errors at this accuracy is very difficult. Various methods are used to check MLC position, and among these volumetric analysis is one technique. The volumetric approach was adopted in our method using a Primus machine and a 0.6 cc chamber at 5 cm depth in Perspex. An MLC positional error of 1 mm introduces a dose error of 20%, making this method more sensitive than others.

  8. NEW APPROACH FOR TECHNOLOGY OF VOLUMETRIC – SUPERFICIAL HARDENING OF GEAR DETAILS OF THE BACK AXLE OF MOBILE MACHINES

    Directory of Open Access Journals (Sweden)

    A. I. Mihluk

    2010-01-01

    Full Text Available A new approach to the technology of volumetric-superficial hardening of gear parts of the back axle made of steel of lowered hardenability is offered. This approach consists in the formation of an intensely hardened condition over the whole surface of a part.

  9. NGO Presence and Activity in Afghanistan, 2000–2014: A Provincial-Level Dataset

    Directory of Open Access Journals (Sweden)

    David F. Mitchell

    2017-06-01

    Full Text Available This article introduces a new provincial-level dataset on non-governmental organizations (NGOs) in Afghanistan. The data, which are freely available for download, provide information on the locations and sectors of activity of 891 international and local (Afghan) NGOs that operated in the country between 2000 and 2014. A summary and visualization of the data is presented in the article following a brief historical overview of NGOs in Afghanistan. Links to download the full dataset are provided in the conclusion.

  10. Composite Match Index with Application of Interior Deformation Field Measurement from Magnetic Resonance Volumetric Images of Human Tissues

    Directory of Open Access Journals (Sweden)

    Penglin Zhang

    2012-01-01

    Full Text Available Whereas a variety of feature-point matching approaches have been reported in computer vision, few have been applied to images of nonrigid, nonuniform human tissues. The present work is concerned with interior deformation field measurement of complex human tissues from three-dimensional magnetic resonance (MR) volumetric images. To improve the reliability of matching results, this paper proposes the composite match index (CMI) as the foundation of a multimethod fusion approach that increases the reliability of the individual matching methods. We discuss the definition, components, and weight determination of the CMI. To test the validity of the proposed approach, it is applied to actual MR volumetric images obtained from a volunteer's calf. The main result is consistent with the actual condition.
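The paper defines the CMI with its own components and weight-determination scheme; as a generic illustration, a multimethod fusion index can be sketched as a normalized weighted sum of per-method similarity scores (the methods and weights here are assumptions, not the paper's):

```python
def composite_match_index(scores, weights):
    """Weighted fusion of per-method similarity scores in [0, 1]."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Three hypothetical matching methods (e.g. intensity, gradient and
# texture correlation) scoring one candidate point pair:
cmi = composite_match_index([0.9, 0.6, 0.8], [0.5, 0.2, 0.3])
print(round(cmi, 2))  # -> 0.81
```

Candidate matches can then be ranked by the fused index, so that a pair favored by several methods outranks one favored by a single method.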

  11. Spatial and volumetric changes of retroperitoneal sarcomas during pre-operative radiotherapy

    International Nuclear Information System (INIS)

    Wong, Philip; Dickie, Colleen; Lee, David; Chung, Peter; O’Sullivan, Brian; Letourneau, Daniel; Xu, Wei; Swallow, Carol; Gladdy, Rebecca; Catton, Charles

    2014-01-01

    Purpose: To determine the positional and volumetric changes of retroperitoneal sarcomas (RPS) during pre-operative external beam radiotherapy (PreRT). Material and methods: After excluding 2 patients who received chemotherapy prior to PreRT and 15 RPS that were larger than the field-of-view of cone-beam CT (CBCT), the positional and volumetric changes of RPS throughout PreRT were characterized in 19 patients treated with IMRT using CBCT image guidance. Analysis was performed on 118 CBCT images representing one image per week of those acquired daily during treatment. Intra-fraction breathing motions of the gross tumor volume (GTV) and kidneys were measured in 22 RPS patients simulated using 4D-CT. Fifteen other patients were excluded whose tumors were incompletely imaged on CBCT or who received pre-RT chemotherapy. Results: A GTV volumetric increase (mean: 6.6%, p = 0.035) during the first 2 weeks (CBCT1 vs. CBCT2) of treatment was followed by GTV volumetric decrease (mean: 4%, p = 0.009) by completion of radiotherapy (CBCT1 vs. CBCT6). Internal margins of 8.6, 15 and 15 mm in the lateral, anterior/posterior and superior/inferior directions would be required to account for inter-fraction displacements. The extent of GTV respiratory motion was significantly (p < 0.0001) correlated with more superiorly positioned tumors. Conclusion: Inter-fraction CBCT provides important volumetric and positional information of RPS which may improve PreRT quality and prompt re-planning. Planning target volume may be reduced using online soft-tissue matching to account for interfractional displacements of GTVs. Important breathing motion occurred in superiorly placed RPS supporting the utility of 4D-CT planning

  12. Comparison of surface contour and volumetric three-dimensional imaging of the musculoskeletal system

    International Nuclear Information System (INIS)

    Guilford, W.B.; Ullrich, C.G.; Moore, T.

    1988-01-01

    Both surface contour and volumetric three-dimensional image processing from CT data can provide accurate demonstration of skeletal anatomy. While realistic, surface contour images may obscure fine detail such as nondisplaced fractures, and thin bone may disappear. Volumetric processing can provide high detail, but the transparency effect is unnatural and may yield a confusing image. Comparison of both three-dimensional modes is presented to demonstrate those findings best shown with each and to illustrate helpful techniques to improve volumetric display, such as disarticulation of unnecessary anatomy, short-angle repeating rotation (dithering), and image combination into overlay displays

  13. Rapid volumetric imaging with Bessel-Beam three-photon microscopy

    Science.gov (United States)

    Chen, Bingying; Huang, Xiaoshuai; Gou, Dongzhou; Zeng, Jianzhi; Chen, Guoqing; Pang, Meijun; Hu, Yanhui; Zhao, Zhe; Zhang, Yunfeng; Zhou, Zhuan; Wu, Haitao; Cheng, Heping; Zhang, Zhigang; Xu, Chris; Li, Yulong; Chen, Liangyi; Wang, Aimin

    2018-01-01

    Owing to its tissue-penetration ability, multi-photon fluorescence microscopy allows for the high-resolution, non-invasive imaging of deep tissue in vivo; the recently developed three-photon microscopy (3PM) has extended the depth of high-resolution, non-invasive functional imaging of mouse brains to beyond 1.0 mm. However, the low repetition rate of femtosecond lasers that are normally used in 3PM limits the temporal resolution of point-scanning three-photon microscopy. To increase the volumetric imaging speed of 3PM, we propose a combination of an axially elongated needle-like Bessel-beam with three-photon excitation (3PE) to image biological samples with an extended depth of focus. We demonstrate the higher signal-to-background ratio (SBR) of the Bessel-beam 3PM compared to the two-photon version both theoretically and experimentally. Finally, we perform simultaneous calcium imaging of brain regions at different axial locations in live fruit flies and rapid volumetric imaging of neuronal structures in live mouse brains. These results highlight the unique advantage of conducting rapid volumetric imaging with a high SBR in the deep brain in vivo using scanning Bessel-3PM.

  14. An integrated dataset for in silico drug discovery

    Directory of Open Access Journals (Sweden)

    Cockell Simon J

    2010-12-01

    Full Text Available Drug development is expensive and prone to failure. It is potentially much less risky and expensive to reuse a drug developed for one condition for treating a second disease, than it is to develop an entirely new compound. Systematic approaches to drug repositioning are needed to increase throughput and find candidates more reliably. Here we address this need with an integrated systems biology dataset, developed using the Ondex data integration platform, for the in silico discovery of new drug repositioning candidates. We demonstrate that the information in this dataset allows known repositioning examples to be discovered. We also propose a means of automating the search for new treatment indications of existing compounds.

  15. The OXL format for the exchange of integrated datasets

    Directory of Open Access Journals (Sweden)

    Taubert Jan

    2007-12-01

    Full Text Available A prerequisite for systems biology is the integration and analysis of heterogeneous experimental data stored in hundreds of life-science databases and millions of scientific publications. Several standardised formats for the exchange of specific kinds of biological information exist. Such exchange languages facilitate the integration process; however, they are not designed to transport integrated datasets. A format for exchanging integrated datasets needs to (i) cover data from a broad range of application domains, (ii) be flexible and extensible to combine many different complex data structures, (iii) include metadata and semantic definitions, (iv) include inferred information, (v) identify the original data source for integrated entities and (vi) transport large integrated datasets. Unfortunately, none of the exchange formats from the biological domain (e.g. BioPAX, MAGE-ML, PSI-MI, SBML) or the generic approaches (RDF, OWL) fulfil these requirements in a systematic way.

  16. 40 CFR 80.170 - Volumetric additive reconciliation (VAR), equipment calibration, and recordkeeping requirements.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Volumetric additive reconciliation... ADDITIVES Detergent Gasoline § 80.170 Volumetric additive reconciliation (VAR), equipment calibration, and...) For a facility which uses a gauge to measure the inventory of the detergent storage tank, the total...

  17. Open University Learning Analytics dataset.

    Science.gov (United States)

    Kuzilek, Jakub; Hlosta, Martin; Zdrahal, Zdenek

    2017-11-28

    Learning Analytics focuses on the collection and analysis of learners' data to improve their learning experience by providing informed guidance and to optimise learning materials. To support the research in this area we have developed a dataset, containing data from courses presented at the Open University (OU). What makes the dataset unique is the fact that it contains demographic data together with aggregated clickstream data of students' interactions in the Virtual Learning Environment (VLE). This enables the analysis of student behaviour, represented by their actions. The dataset contains the information about 22 courses, 32,593 students, their assessment results, and logs of their interactions with the VLE represented by daily summaries of student clicks (10,655,280 entries). The dataset is freely available at https://analyse.kmi.open.ac.uk/open_dataset under a CC-BY 4.0 license.
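As a sketch of the kind of aggregation the OU dataset describes, the snippet below rolls raw VLE click events up into daily per-student totals. The tuple layout and field names are illustrative assumptions, not the dataset's actual schema.

```python
from collections import defaultdict

def daily_click_summaries(click_events):
    """Aggregate raw VLE click events into daily per-student totals.

    `click_events` is an iterable of (student_id, date, clicks) tuples;
    the field layout is illustrative, not the dataset's actual schema.
    """
    totals = defaultdict(int)
    for student_id, date, clicks in click_events:
        totals[(student_id, date)] += clicks
    return dict(totals)

events = [
    ("s1", "2014-02-01", 3),
    ("s1", "2014-02-01", 2),
    ("s2", "2014-02-01", 7),
]
summary = daily_click_summaries(events)
```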

  18. Towards interoperable and reproducible QSAR analyses: Exchange of datasets

    Directory of Open Access Journals (Sweden)

    Spjuth Ola

    2010-06-01

    Full Text Available Abstract Background QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constraining collaborations and re-use of data. Results We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Conclusions Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusion regarding descriptors by defining them crisply. This makes it easy to join

  19. Dosimetric effects of sectional adjustments of collimator angles on volumetric modulated arc therapy for irregularly-shaped targets.

    Directory of Open Access Journals (Sweden)

    Beom Seok Ahn

    Full Text Available To calculate an optimal collimator angle at each sectional arc in a full-arc volumetric modulated arc therapy (VMAT) plan, and to evaluate the dosimetric quality of these VMAT plans by comparing them with full-arc VMAT plans using a fixed collimator angle. Seventeen patients with irregularly-shaped targets in abdominal, head and neck, and chest cases were selected retrospectively. To calculate an optimal collimator angle at each sectional arc, integrated MLC apertures that could cover all target shapes determined by beam's-eye view (BEV) within each angular section were obtained for each VMAT plan. The angular sections were 40°, 60°, 90° and 120°. Rotating the collimator settings at intervals of 2°, we obtained the optimal collimator angle minimizing the difference in area between the integrated MLC aperture and the collimator settings with 5 mm margins to the integrated MLC aperture. The VMAT plans with the optimal collimator angles (Colli-VMAT) were generated in Eclipse™. For comparison purposes, full-arc VMAT plans with a fixed collimator angle (Std-VMAT) were generated. The dose-volumetric parameters and total MUs were evaluated. The mean dose-volumetric parameters for the target volume of Colli-VMAT were comparable to Std-VMAT. Colli-VMAT improved sparing of most normal organs, except the brain stem, compared to Std-VMAT for all cases. Mean total MUs tended to decrease with decreasing angular section. The mean total MUs for Colli-VMAT with the 40° angular section (434 ± 95 MU, 317 ± 81 MU, and 371 ± 43 MU for abdominal, head and neck, and chest cases, respectively) were lower than those for Std-VMAT (654 ± 182 MU, 517 ± 116 MU, and 533 ± 25 MU, respectively). For an irregularly-shaped target, Colli-VMAT with the 40° angular section reduced total MUs and improved sparing of normal organs, compared to Std-VMAT.
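The sectional optimisation described above can be sketched as a simple grid search over collimator angles. The bounding-box area used below is a simplified stand-in for the paper's aperture-versus-jaw area-difference criterion (with its 5 mm margins), and the outline coordinates are invented for illustration.

```python
import math

def optimal_collimator_angle(points, step_deg=2):
    # Grid-search collimator angles 0-178 deg in steps of `step_deg`:
    # rotate the integrated BEV target outline and score the area of its
    # axis-aligned bounding box -- a simplified proxy for the paper's
    # aperture-vs-collimator area-difference criterion.
    best_angle, best_area = 0, float("inf")
    for angle in range(0, 180, step_deg):
        t = math.radians(angle)
        xs = [x * math.cos(t) - y * math.sin(t) for x, y in points]
        ys = [x * math.sin(t) + y * math.cos(t) for x, y in points]
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if area < best_area:
            best_angle, best_area = angle, area
    return best_angle, best_area

# A 10 x 1 rectangle tilted by 30 degrees; rotating by 60 (or 150)
# degrees re-aligns it with the jaws, minimising the bounding box.
t0 = math.radians(30)
rect = [(0, 0), (10, 0), (10, 1), (0, 1)]
outline = [(x * math.cos(t0) - y * math.sin(t0),
            x * math.sin(t0) + y * math.cos(t0)) for x, y in rect]
angle, area = optimal_collimator_angle(outline)
```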

  20. A New Dataset Size Reduction Approach for PCA-Based Classification in OCR Application

    Directory of Open Access Journals (Sweden)

    Mohammad Amin Shayegan

    2014-01-01

    Full Text Available A major problem of pattern recognition systems is the large volume of training datasets, which include duplicate and similar training samples. To overcome this problem, several dataset size reduction and dimensionality reduction techniques have been introduced. The algorithms presently used for dataset size reduction usually remove samples near the centers of classes or support vector samples between different classes. However, samples near a class center carry valuable information about the class characteristics, and support vector samples are important for evaluating system efficiency. This paper reports on the use of the Modified Frequency Diagram technique for dataset size reduction. In this newly proposed technique, a training dataset is rearranged and then sieved. The sieved training dataset, along with automatic feature extraction/selection using Principal Component Analysis, is used in an OCR application. The experimental results obtained when using the proposed system on one of the biggest handwritten Farsi/Arabic numeral standard OCR datasets, Hoda, show about 97% accuracy in the recognition rate. The recognition speed increased by 2.28 times, while the accuracy decreased only by 0.7%, when a sieved version of the dataset, only half the size of the initial training dataset, was used.
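The abstract does not spell out the Modified Frequency Diagram sieve, so the sketch below substitutes a generic near-duplicate filter to illustrate the idea of dataset size reduction: a sample is dropped when it lies too close to one already kept.

```python
def sieve_dataset(samples, min_dist):
    """Drop samples within `min_dist` (Euclidean) of an already-kept
    sample -- a generic stand-in for the paper's Modified Frequency
    Diagram sieving, whose details are not given in the abstract."""
    kept = []
    for s in samples:
        if all(sum((a - b) ** 2 for a, b in zip(s, k)) ** 0.5 >= min_dist
               for k in kept):
            kept.append(s)
    return kept

# Invented 2D feature vectors; two near-duplicates are removed.
train = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.05, 5.0), (9.0, 1.0)]
reduced = sieve_dataset(train, min_dist=0.5)
```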

  1. Visual Comparison of Multiple Gene Expression Datasets in a Genomic Context

    Directory of Open Access Journals (Sweden)

    Borowski Krzysztof

    2008-06-01

    Full Text Available The need for novel methods of visualizing microarray data is growing. New perspectives are beneficial to finding patterns in expression data. The Bluejay genome browser provides an integrative way of visualizing gene expression datasets in a genomic context. We have now developed the functionality to display multiple microarray datasets simultaneously in Bluejay, in order to provide researchers with a comprehensive view of their datasets linked to a graphical representation of gene function. This will enable biologists to obtain valuable insights on expression patterns, by allowing them to analyze the expression values in relation to the gene locations as well as to compare expression profiles of related genomes or of different experiments for the same genome.

  2. ENHANCED DATA DISCOVERABILITY FOR IN SITU HYPERSPECTRAL DATASETS

    Directory of Open Access Journals (Sweden)

    B. Rasaiah

    2016-06-01

    Full Text Available Field spectroscopic metadata is a central component in the quality assurance, reliability, and discoverability of hyperspectral data and the products derived from it. Cataloguing, mining, and interoperability of these datasets rely upon the robustness of metadata protocols for field spectroscopy, and on the software architecture to support the exchange of these datasets. Currently no standard for in situ spectroscopy data or metadata protocols exists. This inhibits the effective sharing of growing volumes of in situ spectroscopy datasets, to exploit the benefits of integrating with the evolving range of data sharing platforms. A core metadataset for field spectroscopy was introduced by Rasaiah et al. (2011-2015) with extended support for specific applications. This paper presents a prototype model for an OGC and ISO compliant platform-independent metadata discovery service aligned to the specific requirements of field spectroscopy. In this study, a proof-of-concept metadata catalogue has been described and deployed in a cloud-based architecture as a demonstration of an operationalized field spectroscopy metadata standard and web-based discovery service.

  3. Combined use of biochemical and volumetric biomarkers to assess the risk of conversion of mild cognitive impairment to Alzheimer’s disease

    Directory of Open Access Journals (Sweden)

    Marta Nesteruk

    2016-12-01

    Full Text Available Introduction : The aim of our study was to evaluate the usefulness of several biomarkers in predicting the conversion of mild cognitive impairment (MCI) to Alzheimer’s disease (AD): β-amyloid and tau proteins in cerebrospinal fluid and the volumetric evaluation of brain structures including the hippocampus in magnetic resonance imaging (MRI). Material and methods : MRI of the brain with the volumetric assessment of the hippocampus, entorhinal cortex, posterior cingulate gyrus, parahippocampal gyrus, and superior, medial and inferior temporal gyri was performed in 40 patients diagnosed with mild cognitive impairment. Each patient had a lumbar puncture to evaluate β-amyloid and tau protein (total and phosphorylated) levels in the cerebrospinal fluid. The observation period was 2 years. Results : Amongst 40 patients with MCI, 9 (22.5%) converted to AD within 2 years of observation. Discriminant analysis was conducted; sensitivity for MCI conversion to AD on the basis of volumetric measurements was 88.9% and specificity 90.3%; on the basis of β-amyloid and total tau, sensitivity was 77.8% and specificity 83.9%. The combined use of the volumetric measurements with the protein levels in the cerebrospinal fluid did not increase the sensitivity (88.9%) but increased specificity to 96.8% and the percentage of correct classification to 95%.
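The reported sensitivity and specificity follow from the usual confusion-matrix definitions. The counts below (8 of 9 converters and 28 of 31 non-converters correctly classified) are inferred as consistent with the reported 88.9%/90.3% volumetric figures, not taken from the paper.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    return tp / (tp + fn), tn / (tn + fp)

# Cohort of 40 MCI patients, 9 converters; counts are inferred from
# the reported percentages, not stated in the paper.
sens, spec = sensitivity_specificity(tp=8, fn=1, tn=28, fp=3)
```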

  4. A multimodal dataset for authoring and editing multimedia content: The MAMEM project

    Directory of Open Access Journals (Sweden)

    Spiros Nikolopoulos

    2017-12-01

    Full Text Available We present a dataset that combines multimodal biosignals and eye tracking information gathered under a human-computer interaction framework. The dataset was developed in the vein of the MAMEM project, which aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and heart rate) signals collected from 34 individuals (18 able-bodied and 16 motor-impaired). Data were collected during interaction with a specifically designed interface for web browsing and multimedia content manipulation and during imaginary movement tasks. The presented dataset will contribute towards the development and evaluation of modern human-computer interaction systems that would foster the integration of people with severe motor impairments back into society.

  5. Developing predictive imaging biomarkers using whole-brain classifiers: Application to the ABIDE I dataset

    Directory of Open Access Journals (Sweden)

    Swati Rane

    2017-03-01

    Full Text Available We designed a modular machine learning program that uses functional magnetic resonance imaging (fMRI) data in order to distinguish individuals with autism spectrum disorders from neurodevelopmentally normal individuals. Data was selected from the Autism Brain Imaging Dataset Exchange (ABIDE I) Preprocessed Dataset.

  6. 3D Volumetric Modeling and Microvascular Reconstruction of Irradiated Lumbosacral Defects After Oncologic Resection

    Directory of Open Access Journals (Sweden)

    Emilio Garcia-Tutor

    2016-12-01

    Full Text Available Background: Locoregional flaps are sufficient in most sacral reconstructions. However, large sacral defects due to malignancy necessitate a different reconstructive approach, with local flaps compromised by radiation and regional flaps inadequate for broad surface areas or substantial volume obliteration. In this report, we present our experience using free muscle transfer for volumetric reconstruction in such cases, and demonstrate 3D haptic models of the sacral defect to aid preoperative planning. Methods: Five consecutive patients with irradiated sacral defects secondary to oncologic resections were included, with surface areas ranging from 143 to 600 cm². Latissimus dorsi-based free flap sacral reconstruction was performed in each case between 2005 and 2011. Where the superior gluteal artery was compromised, the subcostal artery was used as a recipient vessel. Microvascular technique, complications and outcomes are reported. The use of volumetric analysis and 3D printing is also demonstrated, with imaging data converted to 3D images suitable for 3D printing with OsiriX software (Pixmeo, Geneva, Switzerland). An office-based desktop 3D printer was used to print 3D models of sacral defects, used to demonstrate surface area and contour and to produce a volumetric print of the dead space needed for flap obliteration. Results: The clinical series of latissimus dorsi free flap reconstructions is presented, with successful transfer in all cases, and adequate soft-tissue cover and volume obliteration achieved. The original use of the subcostal artery as a recipient vessel was successfully achieved. All wounds healed uneventfully. 3D printing is also demonstrated as a useful tool for 3D evaluation of volume and dead space. Conclusion: Free flaps offer unique benefits in sacral reconstruction where local tissue is compromised by irradiation and tumor recurrence, and dead space requires accurate volumetric reconstruction. We describe for the first time the use of

  7. Determination of Geometrical REVs Based on Volumetric Fracture Intensity and Statistical Tests

    Directory of Open Access Journals (Sweden)

    Ying Liu

    2018-05-01

    Full Text Available This paper presents a method to estimate a representative element volume (REV) of a fractured rock mass based on the volumetric fracture intensity P32 and statistical tests. A 150 m × 80 m × 50 m 3D fracture network model was generated based on field data collected at the Maji dam site by using the rectangular window sampling method. The volumetric fracture intensity P32 of each cube was calculated by varying the cube location in the generated 3D fracture network model and varying the cube side length from 1 to 20 m, and the distribution of the P32 values was described. The size effect and spatial effect of the fractured rock mass were studied; the P32 values from the same cube sizes and different locations were significantly different, and the fluctuation in P32 values clearly decreases as the cube side length increases. In this paper, a new method that comprehensively considers the anisotropy of rock masses, simplicity of calculation and differences between different methods was proposed to estimate the geometrical REV size. The geometrical REV size of the fractured rock mass was determined based on the volumetric fracture intensity P32 and two statistical test methods, namely, the likelihood ratio test and the Wald–Wolfowitz runs test. The results of the two statistical tests were substantially different; critical cube sizes of 13 m and 12 m were estimated by the Wald–Wolfowitz runs test and the likelihood ratio test, respectively. Because the different test methods emphasize different considerations and impact factors, the larger cube size accepted by both tests, 13 m, was selected as the geometrical REV size of the fractured rock mass at the Maji dam site in China.
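One of the two tests used, the Wald–Wolfowitz runs test, can be sketched for a binary sequence as follows (the application to P32 series is not reproduced here; the example sequence is invented):

```python
import math

def runs_test_z(seq):
    """Wald-Wolfowitz runs test on a 0/1 sequence: z-statistic
    comparing the observed number of runs with its expectation and
    variance under the randomness hypothesis."""
    n1 = sum(seq)            # count of ones
    n2 = len(seq) - n1       # count of zeros
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    mean = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / (
        (n1 + n2) ** 2 * (n1 + n2 - 1))
    return (runs - mean) / math.sqrt(var)

# A perfectly alternating sequence has far more runs than expected,
# so it yields a large positive z (non-random).
z = runs_test_z([0, 1, 0, 1, 0, 1, 0, 1])
```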

  8. Cross-validation of two commercial methods for volumetric high-resolution dose reconstruction on a phantom for non-coplanar VMAT beams

    International Nuclear Information System (INIS)

    Feygelman, Vladimir; Stambaugh, Cassandra; Opp, Daniel; Zhang, Geoffrey; Moros, Eduardo G.; Nelms, Benjamin E.

    2014-01-01

    Background and purpose: Delta 4 (ScandiDos AB, Uppsala, Sweden) and ArcCHECK with 3DVH software (Sun Nuclear Corp., Melbourne, FL, USA) are commercial quasi-three-dimensional diode dosimetry arrays capable of volumetric measurement-guided dose reconstruction. A method to reconstruct dose for non-coplanar VMAT beams with 3DVH is described. The Delta 4 3D dose reconstruction on its own phantom for VMAT delivery has not been thoroughly evaluated previously, and we do so by comparison with 3DVH. Materials and methods: Reconstructed volumetric doses for VMAT plans delivered with different table angles were compared between the Delta 4 and 3DVH using gamma analysis. Results: The average γ (2% local dose-error normalization/2 mm) passing rate comparing the directly measured Delta 4 diode dose with 3DVH was 98.2 ± 1.6% (1 SD). The average passing rate for the full volumetric comparison of the reconstructed doses on a homogeneous cylindrical phantom was 95.6 ± 1.5%. No dependence on the table angle was observed. Conclusions: The modified 3DVH algorithm is capable of 3D VMAT dose reconstruction on an arbitrary volume for the full range of table angles. Our comparison results between different dosimeters make a compelling case for the use of electronic arrays with high-resolution 3D dose reconstruction as the primary means of evaluating spatial dose distributions during IMRT/VMAT verification.
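A minimal sketch of the gamma analysis used for the comparison is shown below for a 1D dose profile. Note that the paper uses local dose-error normalisation; this sketch uses global normalisation to the reference maximum for brevity, so it is illustrative only.

```python
def gamma_pass_rate(ref, evl, dx, dose_tol=0.02, dist_tol=2.0):
    """1D global gamma analysis: for each reference point, take the
    minimum combined dose/distance error over all evaluated points;
    the point passes if that minimum is <= 1. `dx` is grid spacing
    in mm, `dose_tol` a fraction of the reference maximum, `dist_tol`
    the distance-to-agreement in mm."""
    d_max = max(ref)
    passed = 0
    for i, dr in enumerate(ref):
        best = min(
            ((de - dr) / (dose_tol * d_max)) ** 2
            + ((j - i) * dx / dist_tol) ** 2
            for j, de in enumerate(evl)
        )
        if best ** 0.5 <= 1.0:
            passed += 1
    return 100.0 * passed / len(ref)

# Identical profiles trivially pass everywhere (invented values).
ref = [1.0, 2.0, 3.0, 2.0, 1.0]
rate_same = gamma_pass_rate(ref, ref, dx=1.0)
```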

  9. Knowledge Mining from Clinical Datasets Using Rough Sets and Backpropagation Neural Network

    Directory of Open Access Journals (Sweden)

    Kindie Biredagn Nahato

    2015-01-01

    Full Text Available The availability of clinical datasets and knowledge mining methodologies encourages the researchers to pursue research in extracting knowledge from clinical datasets. Different data mining techniques have been used for mining rules, and mathematical models have been developed to assist the clinician in decision making. The objective of this research is to build a classifier that will predict the presence or absence of a disease by learning from the minimal set of attributes that has been extracted from the clinical dataset. In this work rough set indiscernibility relation method with backpropagation neural network (RS-BPNN is used. This work has two stages. The first stage is handling of missing values to obtain a smooth data set and selection of appropriate attributes from the clinical dataset by indiscernibility relation method. The second stage is classification using backpropagation neural network on the selected reducts of the dataset. The classifier has been tested with hepatitis, Wisconsin breast cancer, and Statlog heart disease datasets obtained from the University of California at Irvine (UCI machine learning repository. The accuracy obtained from the proposed method is 97.3%, 98.6%, and 90.4% for hepatitis, breast cancer, and heart disease, respectively. The proposed system provides an effective classification model for clinical datasets.
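The indiscernibility relation at the heart of the first stage partitions objects that share identical values on a chosen attribute subset. A minimal sketch, with a hypothetical toy clinical table:

```python
def indiscernibility_classes(table, attributes):
    """Partition objects into indiscernibility classes: objects whose
    values agree on all `attributes` fall into the same class."""
    classes = {}
    for obj_id, row in table.items():
        key = tuple(row[a] for a in attributes)
        classes.setdefault(key, set()).add(obj_id)
    return list(classes.values())

# Hypothetical toy clinical table (attribute names invented):
table = {
    1: {"fever": "yes", "fatigue": "high"},
    2: {"fever": "yes", "fatigue": "high"},
    3: {"fever": "no",  "fatigue": "low"},
}
classes = indiscernibility_classes(table, ["fever", "fatigue"])
```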

  10. A multimodal data-set of a unidirectional glass fibre reinforced polymer composite

    Directory of Open Access Journals (Sweden)

    Monica J. Emerson

    2018-06-01

    Full Text Available A unidirectional (UD glass fibre reinforced polymer (GFRP composite was scanned at varying resolutions in the micro-scale with several imaging modalities. All six scans capture the same region of the sample, containing well-aligned fibres inside a UD load-carrying bundle. Two scans of the cross-sectional surface of the bundle were acquired at a high resolution, by means of scanning electron microscopy (SEM and optical microscopy (OM, and four volumetric scans were acquired through X-ray computed tomography (CT at different resolutions. Individual fibres can be resolved from these scans to investigate the micro-structure of the UD bundle. The data is hosted at https://doi.org/10.5281/zenodo.1195879 and it was used in Emerson et al. (2018 [1] to demonstrate that precise and representative characterisations of fibre geometry are possible with relatively low X-ray CT resolutions if the analysis method is robust to image quality. Keywords: Geometrical characterisation, Polymer-matrix composites (PMCs, Volumetric fibre segmentation, Automated fibre tracking, X-ray imaging, Microscopy, Non-destructive testing

  11. Dataset of transcriptional landscape of B cell early activation

    Directory of Open Access Journals (Sweden)

    Alexander S. Garruss

    2015-09-01

    Full Text Available Signaling via B cell receptors (BCR and Toll-like receptors (TLRs result in activation of B cells with distinct physiological outcomes, but transcriptional regulatory mechanisms that drive activation and distinguish these pathways remain unknown. At early time points after BCR and TLR ligand exposure, 0.5 and 2 h, RNA-seq was performed allowing observations on rapid transcriptional changes. At 2 h, ChIP-seq was performed to allow observations on important regulatory mechanisms potentially driving transcriptional change. The dataset includes RNA-seq, ChIP-seq of control (Input, RNA Pol II, H3K4me3, H3K27me3, and a separate RNA-seq for miRNA expression, which can be found at Gene Expression Omnibus Dataset GSE61608. Here, we provide details on the experimental and analysis methods used to obtain and analyze this dataset and to examine the transcriptional landscape of B cell early activation.

  12. Integral transform solution of natural convection in a square cavity with volumetric heat generation

    Directory of Open Access Journals (Sweden)

    C. An

    2013-12-01

    Full Text Available The generalized integral transform technique (GITT) is employed to obtain a hybrid numerical-analytical solution of natural convection in a cavity with volumetric heat generation. The hybrid nature of this approach allows for the establishment of benchmark results in the solution of non-linear partial differential equation systems, including the coupled set of heat and fluid flow equations that govern the steady natural convection problem under consideration. Through performing the GITT, the resulting transformed ODE system is then numerically solved by making use of the subroutine DBVPFD from the IMSL Library. Therefore, numerical results under user prescribed accuracy are obtained for different values of Rayleigh numbers, and the convergence behavior of the proposed eigenfunction expansions is illustrated. Critical comparisons against solutions produced by ANSYS CFX 12.0 are then conducted, which demonstrate excellent agreement. Several sets of reference results for natural convection with volumetric heat generation in a bi-dimensional square cavity are also provided for future verification of numerical results obtained by other researchers.
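In generic form (notation illustrative, not taken from the paper), the GITT eigenfunction expansion rests on a transform/inverse pair over normalised eigenfunctions:

```latex
% Generic GITT transform-inverse pair (notation illustrative):
\bar{T}_i(t) = \int_{V} \tilde{\psi}_i(\mathbf{x})\, T(\mathbf{x},t)\, dV ,
\qquad
T(\mathbf{x},t) = \sum_{i=1}^{\infty} \tilde{\psi}_i(\mathbf{x})\, \bar{T}_i(t),
\qquad
\tilde{\psi}_i = \frac{\psi_i}{\sqrt{N_i}},\quad
N_i = \int_{V} \psi_i^2\, dV .
```

Applying the transform to the governing equations converts the PDE system into the coupled ODE system in the transformed potentials $\bar{T}_i(t)$ that is then solved numerically (here, with DBVPFD).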

  13. Geoseq: a tool for dissecting deep-sequencing datasets

    Directory of Open Access Journals (Sweden)

    Homann Robert

    2010-10-01

    Full Text Available Abstract Background Datasets generated on deep-sequencing platforms have been deposited in various public repositories such as the Gene Expression Omnibus (GEO) and Sequence Read Archive (SRA) hosted by the NCBI, or the DNA Data Bank of Japan (DDBJ). Despite being rich data sources, they have not been used much due to the difficulty in locating and analyzing datasets of interest. Results Geoseq http://geoseq.mssm.edu provides a new method of analyzing short reads from deep sequencing experiments. Instead of mapping the reads to reference genomes or sequences, Geoseq maps a reference sequence against the sequencing data. It is web-based, and holds pre-computed data from public libraries. The analysis reduces the input sequence to tiles and measures the coverage of each tile in a sequence library through the use of suffix arrays. The user can upload custom target sequences or use gene/miRNA names for the search and get back results as plots and spreadsheet files. Geoseq organizes the public sequencing data using a controlled vocabulary, allowing identification of relevant libraries by organism, tissue and type of experiment. Conclusions Analysis of small sets of sequences against deep-sequencing datasets, as well as identification of public datasets of interest, is simplified by Geoseq. We applied Geoseq to (a) identify differential isoform expression in mRNA-seq datasets, (b) identify miRNAs (microRNAs) in libraries and identify mature and star sequences in miRNAs, and (c) identify potentially mis-annotated miRNAs. The ease of using Geoseq for these analyses suggests its utility and uniqueness as an analysis tool.
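The tile-and-coverage idea can be sketched with plain substring search standing in for Geoseq's suffix arrays (the sequences below are invented):

```python
def tile_coverage(reference, library, tile_len=20):
    """Split `reference` into fixed-length tiles and count how many
    reads in `library` contain each tile. Geoseq does this lookup with
    suffix arrays; naive substring search stands in here."""
    tiles = [reference[i:i + tile_len]
             for i in range(0, len(reference) - tile_len + 1, tile_len)]
    return [(t, sum(t in read for read in library)) for t in tiles]

# Invented reference and reads: each read covers one of the two tiles.
ref = "ACGT" * 5 + "TTTTGGGGCCCCAAAATTTT"
reads = ["ACGTACGTACGTACGTACGTAAA", "GGTTTTGGGGCCCCAAAATTTTCC"]
cov = tile_coverage(ref, reads)
```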

  14. Boundary expansion algorithm of a decision tree induction for an imbalanced dataset

    Directory of Open Access Journals (Sweden)

    Kesinee Boonchuay

    2017-10-01

    Full Text Available A decision tree is one of the famous classifiers based on a recursive partitioning algorithm. This paper introduces the Boundary Expansion Algorithm (BEA) to improve decision tree induction on an imbalanced dataset. BEA utilizes all attributes to define non-splittable ranges. The computed means of all attributes for minority instances are used to find the nearest minority instance, which is then expanded along all attributes to cover a minority region. As a result, BEA successfully copes with an imbalanced dataset compared with C4.5, Gini, asymmetric entropy, top-down tree, and Hellinger distance decision trees on 25 imbalanced datasets from the UCI Repository.
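The step of locating the expansion seed, the minority instance nearest the minority attribute means, can be sketched as follows; the data points are invented for illustration:

```python
def nearest_minority_to_mean(minority):
    """Compute the per-attribute mean of the minority instances and
    return the minority instance nearest (Euclidean) to that mean --
    the seed BEA expands along all attributes."""
    dims = len(minority[0])
    mean = [sum(s[d] for s in minority) / len(minority)
            for d in range(dims)]
    return min(minority,
               key=lambda s: sum((a - m) ** 2 for a, m in zip(s, mean)))

# Invented minority instances; the mean is (4, 4), nearest is (2, 2).
minority = [(1.0, 1.0), (2.0, 2.0), (9.0, 9.0)]
seed = nearest_minority_to_mean(minority)
```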

  15. Minimum pricing of alcohol versus volumetric taxation: which policy will reduce heavy consumption without adversely affecting light and moderate consumers?

    Directory of Open Access Journals (Sweden)

    Anurag Sharma

    Full Text Available We estimate the effect on light, moderate and heavy consumers of alcohol of implementing a minimum unit price for alcohol (MUP) compared with a uniform volumetric tax. We analyse scanner data from a panel survey of demographically representative households (n = 885) collected over a one-year period (24 Jan 2010-22 Jan 2011) in the state of Victoria, Australia, which includes detailed records of each household's off-trade alcohol purchasing. The heaviest consumers (3% of the sample) currently purchase 20% of the total litres of alcohol (LALs), are more likely to purchase cask wine and full strength beer, and pay significantly less on average per standard drink compared to the lightest consumers (A$1.31 [95% CI 1.20-1.41] compared to $2.21 [95% CI 2.10-2.31]). Applying a MUP of A$1 per standard drink has a greater effect on reducing the mean annual volume of alcohol purchased by the heaviest consumers of wine (15.78 LALs [95% CI 14.86-16.69]) and beer (1.85 LALs [95% CI 1.64-2.05]) compared to a uniform volumetric tax (9.56 LALs [95% CI 9.10-10.01] and 0.49 LALs [95% CI 0.46-0.41], respectively). A MUP results in smaller increases in the annual cost for the heaviest consumers of wine ($393.60 [95% CI 374.19-413.00]) and beer ($108.26 [95% CI 94.76-121.75]), compared to a uniform volumetric tax ($552.46 [95% CI 530.55-574.36] and $163.92 [95% CI 152.79-175.03], respectively). Both a MUP and a uniform volumetric tax have little effect on the annual cost of wine and beer for light and moderate consumers, and likewise little effect upon their purchasing. While both a MUP and a uniform volumetric tax have the potential to reduce heavy consumption of wine and beer without adversely affecting light and moderate consumers, a MUP offers the potential to achieve greater reductions in heavy consumption at a lower overall annual cost to consumers.
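The two policies price a purchase differently: a MUP imposes a floor proportional to the number of standard drinks, while a volumetric tax adds a flat levy per litre of pure alcohol. A short arithmetic sketch (the A$1-per-standard-drink MUP matches the study; the tax rate and product figures are invented for illustration):

```python
def price_under_mup(price, std_drinks, mup=1.00):
    """Minimum unit price: the retail price may not fall below
    `mup` dollars per standard drink."""
    return max(price, mup * std_drinks)

def price_under_tax(price, litres_alcohol, tax_per_lal):
    """Uniform volumetric tax: a flat levy per litre of pure alcohol
    (LAL) on top of the current price. The rate used below is
    illustrative, not the rate modelled in the study."""
    return price + tax_per_lal * litres_alcohol

# An invented 4-litre cask of wine at 9.5% ABV: ~30 standard drinks,
# 0.38 LALs. The MUP floor binds only on the cheap product.
cheap_cask = price_under_mup(price=12.00, std_drinks=30)  # floor binds
premium = price_under_mup(price=45.00, std_drinks=30)     # unaffected
```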

  16. Volumetric Two-photon Imaging of Neurons Using Stereoscopy (vTwINS)

    Science.gov (United States)

    Song, Alexander; Charles, Adam S.; Koay, Sue Ann; Gauthier, Jeff L.; Thiberge, Stephan Y.; Pillow, Jonathan W.; Tank, David W.

    2017-01-01

    Two-photon laser scanning microscopy of calcium dynamics using fluorescent indicators is a widely used imaging method for large scale recording of neural activity in vivo. Here we introduce volumetric Two-photon Imaging of Neurons using Stereoscopy (vTwINS), a volumetric calcium imaging method that employs an elongated, V-shaped point spread function to image a 3D brain volume. Single neurons project to spatially displaced “image pairs” in the resulting 2D image, and the separation distance between images is proportional to depth in the volume. To demix the fluorescence time series of individual neurons, we introduce a novel orthogonal matching pursuit algorithm that also infers source locations within the 3D volume. We illustrate vTwINS by imaging neural population activity in mouse primary visual cortex and hippocampus. Our results demonstrate that vTwINS provides an effective method for volumetric two-photon calcium imaging that increases the number of neurons recorded while maintaining a high frame-rate. PMID:28319111

  17. Non-uniform volumetric structures in Richtmyer-Meshkov flows

    NARCIS (Netherlands)

    Stanić, M.; McFarland, J.; Stellingwerf, R.F.; Cassibry, J.T.; Ranjan, D.; Bonazza, R.; Greenough, J.A.; Abarzhi, S.I.

    2013-01-01

    We perform an integrated study of volumetric structures in Richtmyer-Meshkov (RM) flows induced by moderate shocks. Experiments, theoretical analyses, Smoothed Particle Hydrodynamics simulations, and ARES Arbitrary Lagrange Eulerian simulations are employed to analyze RM evolution for fluids with

  18. High-throughput volumetric reconstruction for 3D wheat plant architecture studies

    Directory of Open Access Journals (Sweden)

    Wei Fang

    2016-09-01

    Full Text Available For many tiller crops, the plant architecture (PA), including the plant fresh weight, plant height, number of tillers, tiller angle and stem diameter, significantly affects the grain yield. In this study, we propose a method based on volumetric reconstruction for high-throughput three-dimensional (3D) wheat PA studies. The proposed methodology involves plant volumetric reconstruction from multiple images, plant model processing and phenotypic parameter estimation and analysis. This study was performed on 80 Triticum aestivum plants, and the results were analyzed. Comparing the automated measurements with manual measurements, the mean absolute percentage error (MAPE) in the plant height and the plant fresh weight was 2.71% (1.08 cm, with an average plant height of 40.07 cm) and 10.06% (1.41 g, with an average plant fresh weight of 14.06 g), respectively. The root mean square error (RMSE) was 1.37 cm and 1.79 g for the plant height and plant fresh weight, respectively. The correlation coefficients were 0.95 and 0.96 for the plant height and plant fresh weight, respectively. Additionally, the proposed methodology, including plant reconstruction, model processing and trait extraction, required only approximately 20 s on average per plant using parallel computing on a graphics processing unit (GPU), demonstrating that the methodology would be valuable for a high-throughput phenotyping platform.
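
    The reported error metrics (MAPE and RMSE between automated and manual measurements) are straightforward to reproduce; the plant-height readings below are invented for illustration:

```python
import math

def mape(auto, manual):
    """Mean absolute percentage error of automated vs manual measurements."""
    return 100.0 * sum(abs(a - m) / m for a, m in zip(auto, manual)) / len(auto)

def rmse(auto, manual):
    """Root mean square error of automated vs manual measurements."""
    return math.sqrt(sum((a - m) ** 2 for a, m in zip(auto, manual)) / len(auto))

# Hypothetical plant-height readings (cm); the study reports these metrics over 80 plants.
manual    = [39.5, 41.0, 38.2, 42.1]
automated = [40.1, 40.2, 39.0, 41.5]
print(mape(automated, manual), rmse(automated, manual))
```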

  19. Volumetric response classification in metastatic solid tumors on MSCT: Initial results in a whole-body setting

    International Nuclear Information System (INIS)

    Wulff, A.M.; Fabel, M.; Freitag-Wolf, S.; Tepper, M.; Knabe, H.M.; Schäfer, J.P.; Jansen, O.; Bolte, H.

    2013-01-01

    Purpose: To examine technical parameters of measurement accuracy and differences in tumor response classification using RECIST 1.1 and volumetric assessment in three common metastasis types (lung nodules, liver lesions, lymph node metastases) simultaneously. Materials and methods: 56 consecutive patients (32 female) aged 41–82 years with a wide range of metastatic solid tumors were examined with MSCT at baseline and follow-up. Images were evaluated by three experienced radiologists using manual measurements and semi-automatic lesion segmentation. Institutional ethics review was obtained and all patients gave written informed consent. Data analysis comprised interobserver variability, operationalized as the coefficient of variation, and categorical response classification according to RECIST 1.1 for both manual and volumetric measures. Continuous data were assessed for statistical significance with the Wilcoxon signed-rank test and categorical data with Fleiss' kappa. Results: Interobserver variability was 6.3% (IQR 4.6%) for manual and 4.1% (IQR 4.4%) for volumetrically obtained sums of relevant diameters (p < 0.05, corrected). The response to therapy of 4–8 patients was classified differently across observers when using volumetry compared to standard manual measurements. Fleiss' kappa revealed no significant difference in categorical agreement of response classification between manual (0.7558) and volumetric (0.7623) measurements. Conclusion: Under standard RECIST thresholds there was no advantage of volumetric compared to manual response evaluation. However, volumetric assessment yielded significantly lower interobserver variability. This may allow narrower thresholds for volumetric response classification in the future.
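
    The categorical rules being compared can be sketched as follows. The RECIST 1.1 diameter cut-offs (-30% / +20%) are standard; the volumetric cut-offs shown are the commonly used sphere-equivalent scaling of the same thresholds, and RECIST 1.1's additional requirement of a >= 5 mm absolute increase for progression is omitted for brevity:

```python
# Sketch of the categorical response rules compared in the study.
# RECIST 1.1 uses the sum of lesion diameters; a common volumetric analogue
# scales the same cut-offs to volume assuming roughly spherical lesions:
# 0.7**3 - 1 ~ -65.7% and 1.2**3 - 1 ~ +72.8%.

def classify(baseline, follow_up, shrink=-0.30, grow=0.20):
    change = (follow_up - baseline) / baseline
    if change <= shrink:
        return "PR"   # partial response
    if change >= grow:
        return "PD"   # progressive disease
    return "SD"       # stable disease

def classify_volumetric(v0, v1):
    return classify(v0, v1, shrink=0.7 ** 3 - 1, grow=1.2 ** 3 - 1)

print(classify(100.0, 65.0))             # a -35% diameter change
print(classify_volumetric(100.0, 65.0))  # the same -35% change, read as volume
```

    The example illustrates why threshold choice matters: the same measured change can fall in different categories depending on whether it is interpreted as a diameter or a volume change.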

  20. Volumetric response classification in metastatic solid tumors on MSCT: Initial results in a whole-body setting

    Energy Technology Data Exchange (ETDEWEB)

    Wulff, A.M., E-mail: a.wulff@rad.uni-kiel.de [Klinik für Diagnostische Radiologie, Arnold-Heller-Straße 3, Haus 23, 24105 Kiel (Germany); Fabel, M. [Klinik für Diagnostische Radiologie, Arnold-Heller-Straße 3, Haus 23, 24105 Kiel (Germany); Freitag-Wolf, S., E-mail: freitag@medinfo.uni-kiel.de [Institut für Medizinische Informatik und Statistik, Brunswiker Str. 10, 24105 Kiel (Germany); Tepper, M., E-mail: m.tepper@rad.uni-kiel.de [Klinik für Diagnostische Radiologie, Arnold-Heller-Straße 3, Haus 23, 24105 Kiel (Germany); Knabe, H.M., E-mail: h.knabe@rad.uni-kiel.de [Klinik für Diagnostische Radiologie, Arnold-Heller-Straße 3, Haus 23, 24105 Kiel (Germany); Schäfer, J.P., E-mail: jp.schaefer@rad.uni-kiel.de [Klinik für Diagnostische Radiologie, Arnold-Heller-Straße 3, Haus 23, 24105 Kiel (Germany); Jansen, O., E-mail: o.jansen@neurorad.uni-kiel.de [Klinik für Diagnostische Radiologie, Arnold-Heller-Straße 3, Haus 23, 24105 Kiel (Germany); Bolte, H., E-mail: hendrik.bolte@ukmuenster.de [Klinik für Nuklearmedizin, Albert-Schweitzer-Campus 1, Gebäude A1, 48149 Münster (Germany)

    2013-10-01

    Purpose: To examine technical parameters of measurement accuracy and differences in tumor response classification using RECIST 1.1 and volumetric assessment in three common metastasis types (lung nodules, liver lesions, lymph node metastases) simultaneously. Materials and methods: 56 consecutive patients (32 female) aged 41–82 years with a wide range of metastatic solid tumors were examined with MSCT at baseline and follow-up. Images were evaluated by three experienced radiologists using manual measurements and semi-automatic lesion segmentation. Institutional ethics review was obtained and all patients gave written informed consent. Data analysis comprised interobserver variability, operationalized as the coefficient of variation, and categorical response classification according to RECIST 1.1 for both manual and volumetric measures. Continuous data were assessed for statistical significance with the Wilcoxon signed-rank test and categorical data with Fleiss' kappa. Results: Interobserver variability was 6.3% (IQR 4.6%) for manual and 4.1% (IQR 4.4%) for volumetrically obtained sums of relevant diameters (p < 0.05, corrected). The response to therapy of 4–8 patients was classified differently across observers when using volumetry compared to standard manual measurements. Fleiss' kappa revealed no significant difference in categorical agreement of response classification between manual (0.7558) and volumetric (0.7623) measurements. Conclusion: Under standard RECIST thresholds there was no advantage of volumetric compared to manual response evaluation. However, volumetric assessment yielded significantly lower interobserver variability. This may allow narrower thresholds for volumetric response classification in the future.

  1. Volumetric capnography: In the diagnostic work-up of chronic thromboembolic disease

    Directory of Open Access Journals (Sweden)

    Marcos Mello Moreira

    2010-05-01

    Full Text Available Marcos Mello Moreira1, Renato Giuseppe Giovanni Terzi1, Laura Cortellazzi2, Antonio Luis Eiras Falcão1, Heitor Moreno Junior2, Luiz Cláudio Martins2, Otavio Rizzi Coelho2; 1Department of Surgery, 2Department of Internal Medicine, State University of Campinas, School of Medical Sciences, Campinas, Sao Paulo, Brazil. Abstract: The morbidity and mortality of pulmonary embolism (PE) have been found to be related to early diagnosis and appropriate treatment. The examinations used to diagnose PE are expensive and not always easily accessible. The options include noninvasive examinations, such as clinical pretests, ELISA D-dimer (DD) tests, and volumetric capnography (VCap). We report the case of a patient whose diagnosis of PE was made via pulmonary arteriography. The clinical pretest revealed a moderate probability of the patient having PE, and the DD result was negative; however, the VCap result, in association with arterial blood gases, was positive. The patient underwent all noninvasive exams following admission to hospital and again eight months after discharge. Results gained from invasive tests were similar to those produced by image exams, highlighting the importance of VCap as a noninvasive tool. Keywords: pulmonary embolism, pulmonary hypertension, volumetric capnography, d-dimers, pretest probability

  2. An Improved Random Walker with Bayes Model for Volumetric Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Chunhua Dong

    2017-01-01

    Full Text Available The random walk (RW) method has been widely used to segment organs in volumetric medical images. However, it leads to a very large-scale graph, because the number of nodes equals the number of voxels, and to inaccurate segmentation when appropriate initial seed points are unavailable. In addition, the classical RW algorithm was designed for a user to mark a few pixels with an arbitrary number of labels, regardless of the intensity and shape information of the organ. Hence, we propose a prior knowledge-based Bayes random walk framework to segment the volumetric medical image in a slice-by-slice manner. Our strategy is to employ the previously segmented slice to obtain the shape and intensity knowledge of the target organ for the adjacent slice. According to this prior knowledge, the object/background seed points can be dynamically updated for the adjacent slice by combining the narrow band threshold (NBT) method and the organ model with a Gaussian process. Finally, a high-quality image segmentation result can be automatically achieved using the Bayes RW algorithm. Comparing our method with the conventional RW and state-of-the-art interactive segmentation methods, our results show an improvement in the accuracy for liver segmentation (p < 0.001).

  3. Investigating the effect of clamping force on the fatigue life of bolted plates using volumetric approach

    International Nuclear Information System (INIS)

    Esmaeili, F.; Chakherlou, T. N.; Zehsaz, M.; Hasanifard, S.

    2013-01-01

    In this paper, the effect of bolt clamping force on the fatigue life of bolted plates made from Al7075-T6 has been studied through the values of the notch strength reduction factor obtained by the volumetric approach. To obtain the stress distribution around the notch (hole), which is required for the volumetric approach, nonlinear finite element simulations were carried out. To estimate the fatigue life, the available smooth S-N curve of Al7075-T6 and the notch strength reduction factor obtained from the volumetric method were used. The estimated fatigue life was compared with the available experimental test results. The investigation shows that there is good agreement between the life predicted by the volumetric approach and the experimental results for various specimens with different amounts of clamping force. The volumetric approach and the experimental results showed that the fatigue life of bolted plates improves because of the compressive stresses created around the plate hole by the clamping force.

  4. Mridangam stroke dataset

    OpenAIRE

    CompMusic

    2014-01-01

    The audio examples were recorded from a professional Carnatic percussionist under semi-anechoic studio conditions by Akshay Anantapadmanabhan using SM-58 microphones and an H4n ZOOM recorder. The audio was sampled at 44.1 kHz and stored as 16-bit wav files. The dataset can be used for training models for each Mridangam stroke.

    A detailed description of the Mridangam and its strokes can be found in the paper below. A part of the dataset was used in the following paper. Akshay Anantapadman...

  5. Short-term mechanisms influencing volumetric brain dynamics

    Directory of Open Access Journals (Sweden)

    Nikki Dieleman

    2017-01-01

    Full Text Available With the use of magnetic resonance imaging (MRI) and brain analysis tools, it has become possible to measure brain volume changes down to around 0.5%. Besides long-term brain changes caused by atrophy in aging or neurodegenerative disease, short-term mechanisms that influence brain volume may exist. When we focus on short-term changes of the brain, changes may be either physiological or pathological; as such, determining the cause of the volumetric dynamics of the brain is essential. Additionally, for an accurate interpretation of longitudinal brain volume measures in terms of neurodegeneration, knowledge about the short-term changes is needed. Therefore, in this review, we discuss the possible mechanisms influencing brain volumes on a short-term basis and set out a framework of MRI techniques to be used for assessing volumetric changes, as well as the analysis tools used. 3D T1-weighted images are the images of choice when it comes to MRI of brain volume. These images are excellent for determining brain volume and can be used together with an analysis tool to determine the degree of volume change. Mechanisms that decrease global brain volume are: fluid restriction, evening MRI measurements, corticosteroids, antipsychotics and short-term effects of pathological processes like Alzheimer's disease, hypertension and Diabetes mellitus type II. Mechanisms increasing the brain volume include fluid intake, morning MRI measurements, surgical revascularization and probably medications like anti-inflammatory drugs and anti-hypertensive medication. Exercise was found to have no effect on brain volume on a short-term basis, which may imply that dehydration caused by exercise differs from dehydration by fluid restriction. In the upcoming years, attention should be directed towards studies investigating physiological short-term changes within the light of long-term pathological changes. Ultimately this may lead to a better understanding of the physiological short-term effects of

  6. Semiautomated volumetric response evaluation as an imaging biomarker in superior sulcus tumors

    International Nuclear Information System (INIS)

    Vos, C.G.; Paul, M.A.; Dahele, M.; Soernsen de Koste, J.R. van; Senan, S.; Bahce, I.; Smit, E.F.; Thunnissen, E.; Hartemink, K.J.

    2014-01-01

    Volumetric response to therapy has been suggested as a biomarker for patient-centered outcomes. The primary aim of this pilot study was to investigate whether the volumetric response to induction chemoradiotherapy was associated with pathological complete response (pCR) or survival in patients with superior sulcus tumors managed with trimodality therapy. The secondary aim was to evaluate a semiautomated method for serial volume assessment. In this retrospective study, treatment outcomes were obtained from a departmental database. The tumor was delineated on the computed tomography (CT) scan used for radiotherapy planning, which was typically performed during the first cycle of chemotherapy. These contours were transferred to the post-chemoradiotherapy diagnostic CT scan using deformable image registration (DIR) with/without manual editing. CT scans from 30 eligible patients were analyzed. Median follow-up was 51 months. Neither absolute nor relative reduction in tumor volume following chemoradiotherapy correlated with pCR or 2-year survival. The tumor volumes determined by DIR alone and DIR + manual editing correlated to a high degree (R² = 0.99, P < 0.01). Volumetric response to induction chemoradiotherapy was not correlated with pCR or survival in patients with superior sulcus tumors managed with trimodality therapy. DIR-based contour propagation merits further evaluation as a tool for serial volumetric assessment. (orig.)

  7. 40 CFR 80.157 - Volumetric additive reconciliation (“VAR”), equipment calibration, and recordkeeping requirements.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Volumetric additive reconciliation (“VAR”)... ADDITIVES Detergent Gasoline § 80.157 Volumetric additive reconciliation (“VAR”), equipment calibration, and... other comparable VAR supporting documentation. (ii) For a facility which uses a gauge to measure the...

  8. Determination of uranium by a gravimetric-volumetric titration method

    International Nuclear Information System (INIS)

    Krtil, J.

    1998-01-01

    A volumetric-gravimetric modification of a method for the determination of uranium is described, based on the reduction of uranium to U(IV) in a phosphoric acid medium and titration with a standard potassium dichromate solution. More than 99% of the stoichiometric amount of the titrating solution is weighed, and the remainder is added volumetrically using the Mettler DL 40 RC Memotitrator. A computer interconnected with the analytical balances continually collects the data on the analyzed samples and evaluates the results of the determination. The method allows uranium to be determined in samples of uranium metal, alloys, oxides, and ammonium diuranate, using aliquot portions containing 30 - 100 mg of uranium, with an error of determination, expressed as the relative standard deviation, of 0.02 - 0.05%. (author)
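
    The stoichiometry behind the weighed-plus-volumetric titrant addition can be illustrated with a hedged back-of-envelope calculation; the titrant figures below are invented for illustration, not the paper's data:

```python
# U(IV) -> U(VI) releases 2 electrons, Cr2O7^2- accepts 6,
# so one mole of dichromate oxidises three moles of uranium.

M_U = 238.029  # g/mol, uranium

def moles_dichromate(weighed_g, molality_mol_kg, volumetric_ml, molarity_mol_l):
    """>99% of the titrant solution is weighed; the last fraction is
    dispensed volumetrically (hypothetical concentrations)."""
    return weighed_g / 1000.0 * molality_mol_kg + volumetric_ml / 1000.0 * molarity_mol_l

def uranium_mass(n_dichromate_mol):
    """Uranium (g) equivalent to the total dichromate added (3:1 ratio)."""
    return 3 * n_dichromate_mol * M_U

n = moles_dichromate(weighed_g=13.95, molality_mol_kg=0.01,
                     volumetric_ml=0.50, molarity_mol_l=0.01)
print(uranium_mass(n))  # roughly 0.1 g U, near the 30-100 mg working range
```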

  9. 2008 TIGER/Line Nationwide Dataset

    Data.gov (United States)

    California Natural Resource Agency — This dataset contains a nationwide build of the 2008 TIGER/Line datasets from the US Census Bureau downloaded in April 2009. The TIGER/Line Shapefiles are an extract...

  10. Design of an audio advertisement dataset

    Science.gov (United States)

    Fu, Yutao; Liu, Jihong; Zhang, Qi; Geng, Yuting

    2015-12-01

    Since more and more advertisements crowd into radio broadcasts, it is necessary to establish an audio advertising dataset which can be used to analyze and classify advertisements. A method for establishing a complete audio advertisement dataset is presented in this paper. The dataset is divided into four different kinds of advertisements. Each advertisement sample is given in *.wav file format and annotated with a txt file which contains its file name, sampling frequency, channel number, broadcasting time and its class. The rationality of the advertisement classes in this dataset is demonstrated by clustering the different advertisements based on Principal Component Analysis (PCA). The experimental results show that this audio advertisement dataset offers a reliable set of samples for related audio advertisement studies.
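
    The abstract only lists the five annotation fields, not their layout; under an assumed "key: value" line format, a parser sketch might look like this (field names and values are hypothetical):

```python
# Sketch of reading one annotation record as described above.
# The exact field layout and key names are assumptions.
from dataclasses import dataclass

@dataclass
class AdAnnotation:
    file_name: str
    sample_rate_hz: int
    channels: int
    broadcast_time: str
    ad_class: str

def parse_annotation(text):
    # assume one "key: value" pair per line
    fields = dict(line.split(":", 1) for line in text.strip().splitlines())
    fields = {k.strip(): v.strip() for k, v in fields.items()}
    return AdAnnotation(
        file_name=fields["file"],
        sample_rate_hz=int(fields["sample_rate"]),
        channels=int(fields["channels"]),
        broadcast_time=fields["broadcast_time"],
        ad_class=fields["class"],
    )

record = parse_annotation("""
file: ad_0042.wav
sample_rate: 22050
channels: 1
broadcast_time: 2015-03-14 08.30
class: retail
""")
print(record.ad_class)
```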

  11. Background qualitative analysis of the European reference life cycle database (ELCD) energy datasets - part II: electricity datasets.

    Science.gov (United States)

    Garraín, Daniel; Fazio, Simone; de la Rúa, Cristina; Recchioni, Marco; Lechón, Yolanda; Mathieux, Fabrice

    2015-01-01

    The aim of this paper is to identify areas of potential improvement in the European Reference Life Cycle Database (ELCD) electricity datasets. The revision is based on the data quality indicators described by the International Life Cycle Data system (ILCD) Handbook, applied on a sectorial basis. These indicators evaluate the technological, geographical and time-related representativeness of the dataset and its appropriateness in terms of completeness, precision and methodology. Results show that the ELCD electricity datasets have very good quality in general terms; nevertheless, some findings and recommendations for improving the quality of Life-Cycle Inventories have been derived. Moreover, these results assure the quality of the electricity-related datasets to any LCA practitioner, and provide insights into the limitations and assumptions underlying the datasets' modelling. Given this information, the LCA practitioner will be able to decide whether the use of the ELCD electricity datasets is appropriate based on the goal and scope of the analysis to be conducted. The methodological approach would also be useful for dataset developers and reviewers, in order to improve the overall Data Quality Requirements of databases.

  12. Resampling Methods Improve the Predictive Power of Modeling in Class-Imbalanced Datasets

    Directory of Open Access Journals (Sweden)

    Paul H. Lee

    2014-09-01

    Full Text Available In the medical field, many outcome variables are dichotomized, and the two possible values of a dichotomized variable are referred to as classes. A dichotomized dataset is class-imbalanced if it consists mostly of one class, and the performance of common classification models on this type of dataset tends to be suboptimal. To tackle such a problem, resampling methods, including oversampling and undersampling, can be used. This paper aims at illustrating the effect of resampling methods using the National Health and Nutrition Examination Survey (NHANES) wave 2009–2010 dataset. A total of 4677 participants aged ≥20 without self-reported diabetes and with valid blood test results were analyzed. The Classification and Regression Tree (CART) procedure was used to build a classification model for undiagnosed diabetes. A participant demonstrated evidence of diabetes according to WHO diabetes criteria. Exposure variables included demographics and socio-economic status. CART models were fitted using a randomly selected 70% of the data (training dataset), and the area under the receiver operating characteristic curve (AUC) was computed using the remaining 30% of the sample for evaluation (testing dataset). CART models were fitted using the training dataset, the oversampled training dataset, the weighted training dataset, and the undersampled training dataset. In addition, resampling case-to-control ratios of 1:1, 1:2, and 1:4 were examined. The effect of resampling methods on the performance of other extensions of CART (random forests and generalized boosted trees) was also examined. CARTs fitted on the oversampled (AUC = 0.70) and undersampled training data (AUC = 0.74) yielded better classification power than that on the training data (AUC = 0.65). Resampling could also improve the classification power of random forests and generalized boosted trees. To conclude, applying resampling methods in a class-imbalanced dataset improved the classification power of CART, random forests
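
    The two resampling schemes compared in the abstract can be sketched with numpy: random oversampling and undersampling to a 1:1 ratio (the 1:2 and 1:4 ratios follow by changing the target counts). The data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def oversample(X, y):
    """Randomly duplicate minority-class rows until the classes balance."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_max, replace=True)
        for c in classes
    ])
    return X[idx], y[idx]

def undersample(X, y):
    """Randomly drop majority-class rows down to the minority count."""
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    return X[idx], y[idx]

# A synthetic 95:5 imbalance, as in many screening datasets.
X = rng.standard_normal((1000, 3))
y = (rng.random(1000) < 0.05).astype(int)
X_over, y_over = oversample(X, y)
X_under, y_under = undersample(X, y)
print(np.bincount(y), np.bincount(y_over), np.bincount(y_under))
```

    Either variant can then feed any downstream classifier (CART, random forests, boosted trees); only the training data changes.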

  13. Dataset Preservation for the Long Term: Results of the DareLux Project

    Directory of Open Access Journals (Sweden)

    Eugène Dürr

    2008-08-01

    Full Text Available The purpose of the DareLux (Data Archiving River Environment Luxembourg) Project was the preservation of unique and irreplaceable datasets, for which we chose hydrology data that will be required for use in future climatic models. The results are: an operational archive built with XML containers, the OAI-PMH protocol and an architecture based upon web services. Major conclusions are: quality control on ingest is important; digital rights management demands attention; and the cost aspects of ingest and retrieval cannot be underestimated. We propose a new paradigm for information retrieval of this type of dataset. We recommend research into visualisation tools for the search and retrieval of this type of dataset.

  14. Effect of cup inclination on predicted contact stress-induced volumetric wear in total hip replacement.

    Science.gov (United States)

    Rijavec, B; Košak, R; Daniel, M; Kralj-Iglič, V; Dolinar, D

    2015-01-01

    In order to increase the lifetime of a total hip endoprosthesis, it is necessary to understand the mechanisms leading to its failure. In this work, we address the volumetric wear of the artificial cup, in particular the effect of its inclination with respect to the vertical. Volumetric wear was calculated using mathematical models for the resultant hip force, contact stress and penetration of the prosthesis head into the cup. The relevance of the dependence of volumetric wear on the inclination of the cup (its abduction angle ϑA) was assessed using the results of 95 hips with implanted endoprostheses. Geometrical parameters obtained from standard antero-posterior radiographs were taken as input data. Volumetric wear decreases with increasing cup abduction angle ϑA. The correlation within the population of 95 hips was statistically significant (P = 0.006). A large cup abduction angle minimises predicted volumetric wear but may increase the risk of dislocation of the artificial head from the cup in the one-legged stance. The cup abduction angle and the direction of the resultant hip force may compensate each other to achieve an optimal position of the cup with respect to wear and dislocation in the one-legged stance for a particular patient.

  15. Structural brain alterations of Down's syndrome in early childhood evaluation by DTI and volumetric analyses

    International Nuclear Information System (INIS)

    Gunbey, Hediye Pinar; Bilgici, Meltem Ceyhan; Aslan, Kerim; Incesu, Lutfi; Has, Arzu Ceylan; Ogur, Methiye Gonul; Alhan, Aslihan

    2017-01-01

    To provide an initial assessment of white matter (WM) integrity with diffusion tensor imaging (DTI) and the accompanying volumetric changes in WM and grey matter (GM) through volumetric analyses of young children with Down's syndrome (DS). Ten children with DS and eight healthy control subjects were included in the study. Tract-based spatial statistics (TBSS) were used in the DTI study for whole-brain voxelwise analysis of fractional anisotropy (FA) and mean diffusivity (MD) of WM. Volumetric analyses were performed with an automated segmentation method to obtain regional measurements of cortical volumes. Children with DS showed significantly reduced FA in association tracts of the fronto-temporo-occipital regions as well as the corpus callosum (CC) and the anterior limb of the internal capsule (p < 0.05). Volumetric reductions included total cortical GM, cerebellar GM and WM volume, basal ganglia, thalamus, brainstem and CC in DS compared with controls (p < 0.05). These preliminary results suggest that DTI and volumetric analyses may reflect the earliest complementary changes of the neurodevelopmental delay in children with DS and can serve as surrogate biomarkers of the specific elements of WM and GM integrity for cognitive development. (orig.)

  16. 3D Space Shift from CityGML LoD3-Based Multiple Building Elements to a 3D Volumetric Object

    Directory of Open Access Journals (Sweden)

    Shen Ying

    2017-01-01

    Full Text Available In contrast with photorealistic visualizations, urban landscape applications, and building information modelling (BIM), 3D volumetric presentations highlight specific calculations and applications of 3D building elements for 3D city planning and 3D cadastres. Knowing the precise volumetric quantities and the 3D boundary locations of 3D building spaces is a vital index which must remain constant during data processing, because the values are related to space occupation, tenure, taxes, and valuation. To meet these requirements, this paper presents a five-step algorithm for performing a 3D building space shift. This algorithm is used to convert multiple building elements into a single 3D volumetric building object while maintaining the precise volume of the 3D space and without changing the 3D locations or displacing the building boundaries. As examples, this study used input data and building elements based on City Geography Markup Language (CityGML) LoD3 models. This paper presents a method for 3D urban space and 3D property management with the goal of constructing a 3D volumetric object for an integral building from CityGML objects, by fusing the geometries of various building elements. The resulting objects possess true 3D geometry that can be represented by solid geometry and saved to a CityGML file for effective use in 3D urban planning and 3D cadastres.
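
    One way to verify that volume is preserved when boundary surfaces are fused into a single solid is the divergence-theorem formula for the volume of a closed triangle mesh. This is a generic sketch, not the paper's five-step algorithm; the unit-cube "building" below is invented:

```python
import numpy as np

def solid_volume(triangles):
    """Signed volume of a closed, outward-oriented triangle mesh
    (divergence theorem: V = 1/6 * sum of a . (b x c))."""
    v = 0.0
    for a, b, c in triangles:
        v += np.dot(a, np.cross(b, c))
    return v / 6.0

def quad(a, b, c, d):
    """Split an outward-oriented quad face into two triangles."""
    a, b, c, d = map(np.asarray, (a, b, c, d))
    return [(a, b, c), (a, c, d)]

# A unit-cube body assembled from six outward-facing surfaces, e.g. a
# building whose ground, roof and wall surfaces are fused into one solid.
tris = (
    quad((0,0,0), (0,1,0), (1,1,0), (1,0,0)) +   # ground (normal -z)
    quad((0,0,1), (1,0,1), (1,1,1), (0,1,1)) +   # roof (+z)
    quad((0,0,0), (0,0,1), (0,1,1), (0,1,0)) +   # wall (-x)
    quad((1,0,0), (1,1,0), (1,1,1), (1,0,1)) +   # wall (+x)
    quad((0,0,0), (1,0,0), (1,0,1), (0,0,1)) +   # wall (-y)
    quad((0,1,0), (0,1,1), (1,1,1), (1,1,0))     # wall (+y)
)
print(solid_volume(tris))  # 1.0 for the unit cube
```

    Because the formula depends only on the closed boundary, the fused solid's volume can be checked against the sum computed from the original element surfaces.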

  17. Approximations of noise covariance in multi-slice helical CT scans: impact on lung nodule size estimation.

    Science.gov (United States)

    Zeng, Rongping; Petrick, Nicholas; Gavrielides, Marios A; Myers, Kyle J

    2011-10-07

    Multi-slice computed tomography (MSCT) scanners have become popular volumetric imaging tools. Deterministic and random properties of the resulting CT scans have been studied in the literature. Due to the large number of voxels in the three-dimensional (3D) volumetric dataset, full characterization of the noise covariance in MSCT scans is difficult to tackle. However, as usage of such datasets for quantitative disease diagnosis grows, so does the importance of understanding the noise properties because of their effect on the accuracy of the clinical outcome. The goal of this work is to study noise covariance in the helical MSCT volumetric dataset. We explore possible approximations to the noise covariance matrix with reduced degrees of freedom, including voxel-based variance, one-dimensional (1D) correlation, two-dimensional (2D) in-plane correlation and the noise power spectrum (NPS). We further examine the effect of various noise covariance models on the accuracy of a prewhitening matched filter nodule size estimation strategy. Our simulation results suggest that the 1D longitudinal, 2D in-plane and NPS prewhitening approaches can improve the performance of nodule size estimation algorithms. When taking into account computational costs in determining noise characterizations, the NPS model may be the most efficient approximation to the MSCT noise covariance matrix.
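
    A textbook NPS estimator of the kind used as one of the reduced-degree-of-freedom noise models can be sketched as follows; the correlated-noise simulation and all parameters are illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def nps_2d(noise_rois, pixel_mm=0.7):
    """Average 2D noise power spectrum from repeated zero-mean noise ROIs
    (a textbook estimator; a simplification of full MSCT noise covariance)."""
    n = noise_rois.shape[-1]
    spectra = [np.abs(np.fft.fft2(roi - roi.mean())) ** 2 for roi in noise_rois]
    return np.mean(spectra, axis=0) * pixel_mm ** 2 / (n * n)

# Correlated noise: white noise blurred in-plane, loosely mimicking
# reconstruction-kernel correlation; 50 realisations of a 64x64 ROI.
white = rng.standard_normal((50, 64, 64))
kernel = np.ones((1, 3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(white) * np.fft.fft2(kernel, s=(64, 64))))
nps = nps_2d(blurred)
# After blurring, power concentrates at low spatial frequencies:
print(nps[1, 1] > nps[32, 32])
```

    The full covariance of a 3D volume is far larger than any single 2D NPS; the point of the paper is that such reduced models can still prewhiten well enough to improve size estimation.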

  18. The GTZAN dataset

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    The GTZAN dataset appears in at least 100 published works, and is the most-used public dataset for evaluation in machine listening research for music genre recognition (MGR). Our recent work, however, shows GTZAN has several faults (repetitions, mislabelings, and distortions), which challenge...... of GTZAN, and provide a catalog of its faults. We review how GTZAN has been used in MGR research, and find few indications that its faults have been known and considered. Finally, we rigorously study the effects of its faults on evaluating five different MGR systems. The lesson is not to banish GTZAN...

  19. Tissue-Based MRI Intensity Standardization: Application to Multicentric Datasets

    Directory of Open Access Journals (Sweden)

    Nicolas Robitaille

    2012-01-01

    Full Text Available Intensity standardization in MRI aims at correcting scanner-dependent intensity variations. Existing simple and robust techniques aim at matching the input image histogram onto a standard, while we think that standardization should aim at matching spatially corresponding tissue intensities. In this study, we present a novel automatic technique, called STI for STandardization of Intensities, which not only shares the simplicity and robustness of histogram-matching techniques, but also incorporates tissue spatial intensity information. STI uses joint intensity histograms to determine the intensity correspondence in each tissue between the input and standard images. We compared STI to an existing histogram-matching technique on two multicentric datasets, Pilot E-ADNI and ADNI, by measuring the intensity error with respect to the standard image after performing nonlinear registration. The Pilot E-ADNI dataset consisted of 3 subjects, each scanned in 7 different sites. The ADNI dataset consisted of 795 subjects scanned in more than 50 different sites. STI was superior to the histogram-matching technique, showing significantly better intensity matching for the brain white matter with respect to the standard image.
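
    The histogram-matching baseline that STI is compared against can be sketched in a few lines of numpy (STI's per-tissue matching via joint histograms is omitted; the synthetic "scans" below just differ by an affine intensity change):

```python
import numpy as np

def match_histogram(source, reference):
    """Classic histogram matching: remap source intensities so their
    empirical CDF follows the reference image's."""
    s_values, s_counts = np.unique(source.ravel(), return_counts=True)
    r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # for each source quantile, look up the reference intensity at that quantile
    mapped = np.interp(s_cdf, r_cdf, r_values)
    return mapped[np.searchsorted(s_values, source.ravel())].reshape(source.shape)

rng = np.random.default_rng(0)
scan_a = rng.normal(100, 15, (64, 64))   # same "anatomy", different scanner scaling
scan_b = scan_a * 1.3 + 40
matched = match_histogram(scan_b, scan_a)
print(abs(matched.mean() - scan_a.mean()))  # intensities realigned to the reference
```

    Matching the global histogram works well for a monotonic intensity change like this one; STI's contribution is handling the case where different tissues need different mappings.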

  20. Principal Component Analysis of Process Datasets with Missing Values

    Directory of Open Access Journals (Sweden)

    Kristen A. Severson

    2017-07-01

    Full Text Available Datasets with missing values, arising from causes such as sensor failure, inconsistent sampling rates, and merging data from different systems, are common in the process industry. Methods for handling missing data typically operate during data pre-processing, but missing-data handling can also occur during model building. This article considers missing data within the context of principal component analysis (PCA), a method originally developed for complete data that has widespread industrial application in multivariate statistical process control. Due to the prevalence of missing data and the success of PCA for handling complete data, several PCA algorithms that can act on incomplete data have been proposed. Here, algorithms for applying PCA to datasets with missing values are reviewed. A case study is presented to demonstrate the performance of the algorithms, and suggestions are made with respect to choosing which algorithm is most appropriate for particular settings. An alternating algorithm based on the singular value decomposition achieved the best results in the majority of test cases involving process datasets.
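
    The alternating SVD-based family the review singles out can be sketched as an iterative impute-and-refit loop. This is a generic sketch under synthetic data, not the article's exact implementation:

```python
import numpy as np

def pca_missing(X, rank, n_iter=200):
    """Alternating SVD imputation: fill missing entries, fit a rank-r PCA,
    re-impute the missing cells from the model, and repeat."""
    mask = np.isnan(X)
    Xf = np.where(mask, np.nanmean(X, axis=0), X)   # start from column means
    for _ in range(n_iter):
        mu = Xf.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xf - mu, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank] + mu
        Xf[mask] = low_rank[mask]                   # update only missing cells
    return Xf

rng = np.random.default_rng(0)
scores = rng.standard_normal((100, 2))
loadings = rng.standard_normal((2, 6))
X_true = scores @ loadings                          # exactly rank-2 data
X = X_true.copy()
X[rng.random(X.shape) < 0.1] = np.nan               # ~10% missing at random
X_hat = pca_missing(X, rank=2)
print(np.abs(X_hat - X_true)[np.isnan(X)].max())    # missing cells recovered
```

    On exactly low-rank data with modest missingness the loop converges to the true values; on real process data it instead converges to the best rank-r reconstruction.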

  1. RetroTransformDB: A Dataset of Generic Transforms for Retrosynthetic Analysis

    Directory of Open Access Journals (Sweden)

    Svetlana Avramova

    2018-04-01

Full Text Available Presently, software tools for retrosynthetic analysis are widely used by organic, medicinal, and computational chemists. Rule-based systems extensively use collections of retro-reactions (transforms). While there are many public datasets with reactions in the synthetic direction (usually non-generic reactions), there are no publicly available databases with generic reactions in computer-readable format which can be used for the purposes of retrosynthetic analysis. Here we present RetroTransformDB, a dataset of transforms that we compiled and coded in SMIRKS line notation. The collection is comprised of more than 100 records, each one including the reaction name, the SMIRKS linear notation, the functional group to be obtained, and the transform type classification. All SMIRKS transforms were tested syntactically, semantically, and from a chemical point of view in different software platforms. The overall dataset design and the retrosynthetic fitness were analyzed and curated by organic chemistry experts. The RetroTransformDB dataset may be used by open-source and commercial software packages, as well as chemoinformatics tools.
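To make the record structure concrete, here is a sketch of what one such transform record and a functional-group lookup might look like. The field names mirror the record contents described above, but the exact schema of RetroTransformDB is an assumption, and the SMIRKS string is an illustrative amide disconnection, not an entry copied from the dataset.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RetroTransform:
    # Assumed schema: name, SMIRKS, target functional group, transform type
    name: str
    smirks: str
    functional_group: str
    transform_type: str

# A hypothetical record for illustration only
DB = [
    RetroTransform(
        name="Amide disconnection",
        smirks="[C:1](=[O:2])[NH:3][C:4]>>[C:1](=[O:2])[OH].[NH2:3][C:4]",
        functional_group="amide",
        transform_type="disconnective",
    ),
]

def transforms_for(group):
    """Return all transforms that produce the given functional group."""
    return [t for t in DB if t.functional_group == group]
```

A retrosynthesis tool would feed the SMIRKS strings of matching records to a reaction-transform engine (e.g., one that parses SMIRKS) to enumerate precursors.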

2. Volumetric determination of tumor size of abdominal masses. Problems - feasibilities

    International Nuclear Information System (INIS)

    Helmberger, H.; Bautz, W.; Sendler, A.; Fink, U.; Gerhardt, P.

    1995-01-01

The most important indication for clinically reliable volumetric determination of tumor size in the abdominal region is monitoring liver metastases during chemotherapy. Determination of volume can be effectively realized using 3D reconstruction; for this, the primary data set must be complete and contiguous, and the mass should be depicted strongly enhanced and free of artifacts. At present, this prerequisite can be met only with thin-slice spiral CT. Phantom studies have proven that a semiautomatic reconstruction algorithm is recommendable. The basic difficulties involved in volumetric determination of tumor size are the problems in differentiating the active malignant mass from changes in the surrounding tissue, as well as the lack of histomorphological correlation. Possible indications for volumetry of gastrointestinal masses in the assessment of neoadjuvant therapeutic concepts are under scientific evaluation. (orig./MG) [de

  3. Combination volumetric and gravimetric sorption instrument for high accuracy measurements of methane adsorption

    Science.gov (United States)

    Burress, Jacob; Bethea, Donald; Troub, Brandon

    2017-05-01

The accurate measurement of adsorbed gas up to high pressures (~100 bars) is critical for the development of new materials for adsorbed gas storage. The typical Sievert-type volumetric method introduces accumulating errors that can become large at maximum pressures. Alternatively, gravimetric methods employing microbalances require careful buoyancy corrections. In this paper, we present a combination gravimetric and volumetric system for methane sorption measurements on samples between ~0.5 and 1 g. The gravimetric method described requires no buoyancy corrections. The tandem use of the gravimetric method allows for a check on the highest uncertainty volumetric measurements. The sources and proper calculation of uncertainties are discussed. Results from methane measurements on activated carbon MSC-30 and metal-organic framework HKUST-1 are compared across methods and within the literature.
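The Sievert-type bookkeeping behind the accumulating-error problem can be sketched in a few lines: each dosing step infers adsorbed moles from a pressure balance, so the per-step uncertainty in pressures and volumes compounds over the run. The ideal-gas version below is a simplification; a quantitative methane analysis would use a real-gas compressibility factor Z(p, T), and all names here are our own.

```python
R = 8.314  # gas constant, J mol^-1 K^-1

def sievert_step(p_dose, p0, p_eq, v_dose, v_free, T):
    """Moles adsorbed in one Sievert dosing step (ideal-gas sketch).
    p_dose: manifold pressure before opening the valve (Pa)
    p0:     sample-cell pressure before the step (Pa)
    p_eq:   equilibrium pressure after the step (Pa)
    v_dose, v_free: dosing and free sample-cell volumes (m^3), T in K."""
    n_before = p_dose * v_dose / (R * T) + p0 * v_free / (R * T)
    n_after = p_eq * (v_dose + v_free) / (R * T)
    return n_before - n_after  # the difference is attributed to adsorption
```

Summing these increments over an isotherm shows why errors accumulate: every step inherits the uncertainty of all previous pressure readings and volume calibrations, which is the weakness the combined gravimetric check addresses.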

  4. Exploring massive, genome scale datasets with the GenometriCorr package.

    Directory of Open Access Journals (Sweden)

    Alexander Favorov

    2012-05-01

Full Text Available We have created a statistically grounded tool for determining the correlation of genomewide data with other datasets or known biological features, intended to guide biological exploration of high-dimensional datasets rather than providing immediate answers. The software enables several biologically motivated approaches to these data, and here we describe the rationale and implementation for each approach. Our models and statistics are implemented in an R package that efficiently calculates the spatial correlation between two sets of genomic intervals (data and/or annotated features), for use as a metric of functional interaction. The software handles any type of pointwise or interval data and, instead of running analyses with predefined metrics, it computes the significance and direction of several types of spatial association; this is intended to suggest potentially relevant relationships between the datasets. The package, GenometriCorr, can be freely downloaded at http://genometricorr.sourceforge.net/. Installation guidelines and examples are available from the sourceforge repository. The package is pending submission to Bioconductor.
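The core idea of testing spatial association between two interval sets can be illustrated with a toy permutation test: count overlaps, then compare against a null built by randomly shifting one set along the chromosome. This is a minimal sketch of the concept, not the statistics GenometriCorr actually computes; all function names are our own.

```python
import random

def count_overlaps(query, reference):
    """Count query intervals overlapping any reference interval.
    Intervals are half-open (start, end); a simple O(n*m) scan."""
    return sum(
        any(qs < re and rs < qe for rs, re in reference)
        for qs, qe in query
    )

def permutation_p(query, reference, chrom_len, n_perm=1000, seed=0):
    """One-sided p-value for overlap enrichment, by randomly shifting
    each query interval to a new position on the chromosome."""
    rng = random.Random(seed)
    obs = count_overlaps(query, reference)
    hits = 0
    for _ in range(n_perm):
        shifted = []
        for s, e in query:
            off = rng.randrange(chrom_len - (e - s))
            shifted.append((off, off + (e - s)))
        if count_overlaps(shifted, reference) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction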

  5. Volumetrics relate to the development of depression after traumatic brain injury.

    Science.gov (United States)

    Maller, Jerome J; Thomson, Richard H S; Pannek, Kerstin; Bailey, Neil; Lewis, Philip M; Fitzgerald, Paul B

    2014-09-01

Previous research suggests that many people who sustain a traumatic brain injury (TBI), even of the mild form, will develop major depression (MD). We previously reported white matter integrity differences between those who did and did not develop MD after mild TBI. In the current paper, we aimed to investigate whether there were also volumetric differences between these groups, as suggested by previous volumetric studies in mild TBI populations. A sample of TBI-with-MD subjects (N=14), TBI-without-MD subjects (N=12), MD-without-TBI subjects (N=26) and control subjects (no TBI or MD, N=23) received structural MRI brain scans. T1-weighted data were analysed using the Freesurfer software package which produces automated volumetric results. The findings of this study indicate that (1) TBI patients who develop MD have reduced volume in temporal, parietal and lingual regions compared to TBI patients who do not develop MD, and (2) MD patients with a history of TBI have decreased volume in the temporal region compared to those who had MD but without a history of TBI. We also found that more severe MD in those with TBI-with-MD significantly correlated with reduced volume in the anterior cingulate, temporal lobe and insula. These findings suggest that volumetric reduction in specific regions, including the parietal, temporal and occipital lobes, after a mild TBI may underlie the susceptibility of these patients to developing major depression, in addition to altered white matter integrity. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Crumpled Nitrogen-Doped Graphene for Supercapacitors with High Gravimetric and Volumetric Performances.

    Science.gov (United States)

    Wang, Jie; Ding, Bing; Xu, Yunling; Shen, Laifa; Dou, Hui; Zhang, Xiaogang

    2015-10-14

Graphene is considered a promising electrochemical capacitor electrode material due to its high surface area and high electrical conductivity. However, restacking interactions between graphene nanosheets significantly decrease the ion-accessible surface area and impede electronic and ionic transfer. This, in turn, severely hinders the realization of high energy density. Herein, we report a strategy for the preparation of a few-layer graphene material with abundant crumples and high-level nitrogen doping. The resulting crumpled nitrogen-doped graphene (CNG) nanosheets feature high ion-accessible surface area, excellent electronic and ion transfer properties, and high packing density, permitting the CNG electrode to exhibit excellent electrochemical performance. In ionic liquid electrolyte, the CNG electrode exhibits gravimetric and volumetric capacitances of 128 F g(-1) and 98 F cm(-3), respectively, achieving gravimetric and volumetric energy densities of 56 Wh kg(-1) and 43 Wh L(-1). The preparation strategy described here provides a new approach for developing graphene-based supercapacitors with high gravimetric and volumetric energy densities.
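The relation between the reported capacitances and energy densities follows from E = C V^2 / 2. The sketch below assumes the common symmetric-cell convention that cell capacitance per total electrode mass is one quarter of the single-electrode gravimetric capacitance, and assumes a ~3.5 V ionic-liquid voltage window; neither figure is stated in the abstract, but with them the formula lands near the reported ~56 Wh/kg.

```python
def cell_energy_density(c_electrode, v_window):
    """Gravimetric energy density (Wh/kg) of a symmetric two-electrode
    cell, from single-electrode specific capacitance c_electrode (F/g)
    and cell voltage window v_window (V).
    Assumption: C_cell = c_electrode / 4 per total electrode mass."""
    c_cell = c_electrode / 4.0 * 1000.0    # F/g electrode -> F/kg total mass
    joules_per_kg = 0.5 * c_cell * v_window ** 2
    return joules_per_kg / 3600.0          # J/kg -> Wh/kg
```

The same arithmetic with volumetric capacitance (F/cm^3) yields volumetric energy density in Wh/L.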

  7. Impact of Turbocharger Non-Adiabatic Operation on Engine Volumetric Efficiency and Turbo Lag

    Directory of Open Access Journals (Sweden)

    S. Shaaban

    2012-01-01

Full Text Available Turbocharger performance significantly affects the thermodynamic properties of the working fluid at engine boundaries and hence engine performance. Heat transfer takes place under all circumstances during turbocharger operation. This heat transfer affects the power produced by the turbine, the power consumed by the compressor, and the engine volumetric efficiency. Therefore, non-adiabatic turbocharger performance can restrict the engine charging process and hence engine performance. The present research work investigates the effect of turbocharger non-adiabatic performance on the engine charging process and turbo lag. Two passenger car turbochargers are experimentally and theoretically investigated. The effect of turbine casing insulation is also explored. The present investigation shows that thermal energy is transferred to the compressor under all circumstances. At high rotational speeds, thermal energy is first transferred to the compressor and later from the compressor to the ambient. Therefore, the compressor appears to be “adiabatic” at high rotational speeds despite the complex heat transfer processes inside the compressor. A tangible effect of turbocharger non-adiabatic performance on the charging process is identified at turbocharger part load operation. The turbine power is the most affected operating parameter, followed by the engine volumetric efficiency. Insulating the turbine is recommended for reducing the turbine size and the turbo lag.

8. Methodological proposal for the volumetric study of archaeological ceramics through 3D editing free-software programs: the case of the Celtiberian cemeteries of the Meseta

    Directory of Open Access Journals (Sweden)

    Álvaro Sánchez Climent

    2014-10-01

Full Text Available Nowadays, free-software programs have become ideal tools for archaeological research, reaching the same level as commercial programs. For that reason, the 3D modeling tool Blender has gained great popularity in recent years, offering characteristics similar to those of commercial 3D editing programs such as 3D Studio Max or AutoCAD. Recently, the script necessary for the volumetric calculation of three-dimensional objects has been developed, offering great possibilities for calculating the volume of archaeological ceramics. In this paper, we present a methodological approach for volumetric studies with Blender and a case study of funerary urns from several Celtiberian cemeteries of the Spanish Meseta. The goal is to demonstrate the great possibilities that 3D editing free-software tools currently offer for volumetric studies.
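The standard way such scripts compute the volume of a closed triangle mesh is the divergence theorem: sum the signed volumes of the tetrahedra spanned by the origin and each outward-oriented face. The sketch below is a generic implementation of that idea, not the Blender script cited in the abstract.

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume of a closed, consistently outward-oriented triangle mesh,
    as the sum of signed origin-apex tetrahedron volumes:
    V = |sum_f v0 . (v1 x v2)| / 6."""
    v = np.asarray(vertices, dtype=float)
    total = 0.0
    for i, j, k in faces:
        total += np.dot(v[i], np.cross(v[j], v[k]))
    return abs(total) / 6.0
```

For a ceramic vessel, the mesh would come from photogrammetry or a 3D scan; the same formula applies to any watertight mesh regardless of where the origin lies.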

  9. Concentrated fed-batch cell culture increases manufacturing capacity without additional volumetric capacity.

    Science.gov (United States)

    Yang, William C; Minkler, Daniel F; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming

    2016-01-10

    Biomanufacturing factories of the future are transitioning from large, single-product facilities toward smaller, multi-product, flexible facilities. Flexible capacity allows companies to adapt to ever-changing pipeline and market demands. Concentrated fed-batch (CFB) cell culture enables flexible manufacturing capacity with limited volumetric capacity; it intensifies cell culture titers such that the output of a smaller facility can rival that of a larger facility. We tested this hypothesis at bench scale by developing a feeding strategy for CFB and applying it to two cell lines. CFB improved cell line A output by 105% and cell line B output by 70% compared to traditional fed-batch (TFB) processes. CFB did not greatly change cell line A product quality, but it improved cell line B charge heterogeneity, suggesting that CFB has both process and product quality benefits. We projected CFB output gains in the context of a 2000-L small-scale facility, but the output was lower than that of a 15,000-L large-scale TFB facility. CFB's high cell mass also complicated operations, eroded volumetric productivity, and showed our current processes require significant improvements in specific productivity in order to realize their full potential and savings in manufacturing. Thus, improving specific productivity can resolve CFB's cost, scale-up, and operability challenges. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Full-motion video analysis for improved gender classification

    Science.gov (United States)

    Flora, Jeffrey B.; Lochtefeld, Darrell F.; Iftekharuddin, Khan M.

    2014-06-01

The ability of computer systems to perform gender classification using the dynamic motion of the human subject has important applications in medicine, human factors, and human-computer interface systems. Previous works in motion analysis have used data from sensors (including gyroscopes, accelerometers, and force plates), radar signatures, and video. However, full-motion video, motion capture, and range data provide a dataset of higher temporal and spatial resolution for the analysis of dynamic motion. Works using motion capture data have been limited by small datasets in a controlled environment. In this paper, we apply machine learning techniques to a new dataset that has a larger number of subjects. Additionally, these subjects move unrestricted through a capture volume, representing a more realistic, less controlled environment. We conclude that existing linear classification methods are insufficient for gender classification on this larger dataset captured in a relatively uncontrolled environment. A method based on a nonlinear support vector machine classifier is proposed to obtain gender classification for the larger dataset. In experimental testing with a dataset consisting of 98 trials (49 subjects, 2 trials per subject), classification rates using leave-one-out cross-validation are improved from 73% using linear discriminant analysis to 88% using the nonlinear support vector machine classifier.
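The evaluation protocol (leave-one-out cross-validation comparing LDA against a nonlinear SVM) can be sketched with scikit-learn. The motion-capture features themselves are not public, so the synthetic XOR-like data below only illustrates why a linear discriminant can fail where an RBF-kernel SVM succeeds; it does not reproduce the paper's numbers.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in: four tight clusters at the corners of a square,
# labeled in an XOR pattern -- linearly inseparable by construction.
rng = np.random.default_rng(0)
centers = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
X = np.vstack([c + 0.08 * rng.standard_normal((10, 2)) for c in centers])
y = np.array([0] * 20 + [1] * 20)

loo = LeaveOneOut()
lda_acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=loo).mean()
svm_acc = cross_val_score(SVC(kernel="rbf", gamma="scale"), X, y, cv=loo).mean()
print(f"LDA LOOCV accuracy: {lda_acc:.2f}, RBF-SVM LOOCV accuracy: {svm_acc:.2f}")
```

On this data the linear discriminant hovers near chance while the kernel SVM separates the classes, mirroring the qualitative gap (73% vs. 88%) reported in the abstract.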

  11. Reducing uncertainties in volumetric image based deformable organ registration

    International Nuclear Information System (INIS)

    Liang, J.; Yan, D.

    2003-01-01

Applying volumetric image feedback in radiotherapy requires image based deformable organ registration. The foundation of this registration is the ability to track subvolume displacement in organs of interest. Subvolume displacement can be calculated by applying a biomechanics model and the finite element method to human organs manifested on the multiple volumetric images. The calculation accuracy, however, is highly dependent on the determination of the corresponding organ boundary points. Lacking sufficient information for such determination, uncertainties are inevitable, thus diminishing the registration accuracy. In this paper, a method of consumed-energy minimization was developed to reduce these uncertainties. Starting from an initial selection of organ boundary point correspondence on volumetric image sets, the subvolume displacement and stress distribution of the whole organ are calculated, and the energy consumed by the subvolume displacements is computed accordingly. The corresponding positions of the initially selected boundary points are then iteratively optimized to minimize the consumed energy under geometry and stress constraints. In this study, a rectal wall delineated from a patient CT image was artificially deformed using a computer simulation and utilized to test the optimization. Subvolume displacements calculated based on the optimized boundary point correspondence were compared to the true displacements, and the calculation accuracy was thereby evaluated. Results demonstrate that a significant improvement in the accuracy of the deformable organ registration can be achieved by applying consumed-energy minimization in the organ deformation calculation.

  12. Two-dimensional random arrays for real time volumetric imaging

    DEFF Research Database (Denmark)

    Davidsen, Richard E.; Jensen, Jørgen Arendt; Smith, Stephen W.

    1994-01-01

Two-dimensional arrays are necessary for a variety of ultrasonic imaging techniques, including elevation focusing, 2-D phase aberration correction, and real time volumetric imaging. In order to reduce system cost and complexity, sparse 2-D arrays have been considered with element geometries selected ad hoc, by algorithm, or by random process. Two random sparse array geometries and a sparse array with a Mills cross receive pattern were simulated and compared to a fully sampled aperture with the same overall dimensions. The sparse arrays were designed to the constraints of the Duke University real time volumetric imaging system, which employs a wide transmit beam and receive mode parallel processing to increase image frame rate. Depth-of-field comparisons were made from simulated on-axis and off-axis beamplots at ranges from 30 to 160 mm for both coaxial and offset transmit and receive apertures.
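A far-field beampattern comparison between a fully sampled aperture and a random sparse aperture can be sketched directly from the array factor. This is a textbook one-dimensional illustration, not the Duke system's simulation code; all names and the 32-element geometry are our own choices.

```python
import numpy as np

def beampattern(positions, weights, angles, wavelength):
    """Far-field array factor magnitude along one aperture axis:
    |sum_n w_n exp(j * k * x_n * sin(theta))|."""
    k = 2 * np.pi / wavelength
    phase = np.outer(np.sin(angles), positions) * k  # (n_angles, n_elements)
    return np.abs(np.exp(1j * phase) @ weights)

# Fully sampled vs. 50% random sparse aperture at half-wavelength pitch
rng = np.random.default_rng(1)
x = np.arange(32) * 0.5  # element positions in wavelengths
angles = np.linspace(-np.pi / 2, np.pi / 2, 721)
full = beampattern(x, np.ones(32), angles, 1.0)
w = np.zeros(32)
w[rng.permutation(32)[:16]] = 1.0  # keep a random half of the elements
sparse = beampattern(x, w, angles, 1.0)
```

The mainlobe peak equals the number of active elements, so random sparsification trades a 6 dB sensitivity loss (here, 32 vs. 16) and a higher sidelobe floor for a large reduction in channel count, which is the cost/complexity trade-off the abstract studies.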

  13. Volumetric properties of ammonium nitrate in N,N-dimethylformamide

    International Nuclear Information System (INIS)

    Vranes, Milan; Dozic, Sanja; Djeric, Vesna; Gadzuric, Slobodan

    2012-01-01

Highlights: ► We observed interactions and changes in the solution using volumetric properties. ► Temperature has the greatest influence on the solvent–solvent interactions. ► Temperature has the smallest influence on the ion–ion interactions. ► Temperature has no influence on concentrated systems and partially solvated melts. - Abstract: The densities of ammonium nitrate in N,N-dimethylformamide (DMF) mixtures were measured at T = (308.15 to 348.15) K for different ammonium nitrate molalities in the range from (0 to 6.8404) mol·kg−1. From the obtained density data, volumetric properties (apparent molar volumes and partial molar volumes) have been evaluated and discussed in terms of the respective ionic and dipole interactions. From the apparent molar volume, determined at various temperatures, the apparent molar expansibility and the coefficients of thermal expansion were also calculated.
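The apparent molar volume used in such studies follows directly from the measured densities. Below is the standard molality-based definition; the numerical values in the test are illustrative only, not the paper's NH4NO3/DMF data.

```python
def apparent_molar_volume(m, rho, rho0, M):
    """Apparent molar volume V_phi (cm^3/mol) of a solute:
        V_phi = M/rho - 1000*(rho - rho0) / (m * rho * rho0)
    m:    molality (mol/kg), rho: solution density (g/cm^3),
    rho0: solvent density (g/cm^3), M: solute molar mass (g/mol)."""
    return M / rho - 1000.0 * (rho - rho0) / (m * rho * rho0)
```

Fitting V_phi(T) at each molality then gives the apparent molar expansibility as the temperature derivative, the quantity the abstract reports alongside the thermal expansion coefficients.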

  14. Densely-packed graphene/conducting polymer nanoparticle papers for high-volumetric-performance flexible all-solid-state supercapacitors

    Science.gov (United States)

    Yang, Chao; Zhang, Liling; Hu, Nantao; Yang, Zhi; Wei, Hao; Xu, Zhichuan J.; Wang, Yanyan; Zhang, Yafei

    2016-08-01

    Graphene-based all-solid-state supercapacitors (ASSSCs) are one of the most ideal candidates for high-performance flexible power sources. The achievement of high volumetric energy density is highly desired for practical application of this type of ASSSCs. Here, we present a facile method to boost volumetric performances of graphene-based flexible ASSSCs through incorporation of ultrafine polyaniline-poly(4-styrenesulfonate) (PANI-PSS) nanoparticles in reduced graphene oxide (rGO) papers. A compact structure is obtained via intimate contact and π-π interaction between PANI-PSS nanoparticles and rGO sheets. The hybrid paper electrode with the film thickness of 13.5 μm, shows an extremely high volumetric specific capacitance of 272 F/cm3 (0.37 A/cm3 in a three-electrode cell). The assembled ASSSCs show a large volumetric specific capacitance of 217 F/cm3 (0.37 A/cm3 in a two-electrode cell), high volumetric energy and power density, excellent capacitance stability, small leakage current as well as low self-discharge characteristics, revealing the usefulness of this robust hybrid paper for high-performance flexible energy storage devices.

  15. The Geometry of Finite Equilibrium Datasets

    DEFF Research Database (Denmark)

    Balasko, Yves; Tvede, Mich

We investigate the geometry of finite datasets defined by equilibrium prices, income distributions, and total resources. We show that the equilibrium condition imposes no restrictions if total resources are collinear, a property that is robust to small perturbations. We also show that the set of equilibrium datasets is path-connected when the equilibrium condition does impose restrictions on datasets, as for example when total resources are widely non-collinear.

  16. Evaluation of Modified Categorical Data Fuzzy Clustering Algorithm on the Wisconsin Breast Cancer Dataset

    Directory of Open Access Journals (Sweden)

    Amir Ahmad

    2016-01-01

Full Text Available The early diagnosis of breast cancer is an important step in the fight against the disease. Machine learning techniques have shown promise in improving our understanding of the disease. As medical datasets consist of data points which cannot be precisely assigned to a class, fuzzy methods have been useful for studying these datasets. Breast cancer datasets are sometimes described by categorical features. Many fuzzy clustering algorithms have been developed for categorical datasets. However, in most of these methods the Hamming distance is used to define the distance between two categorical feature values. In this paper, we use a probabilistic distance measure for the distance computation between a pair of categorical feature values. Experiments demonstrate that this distance measure performs better than the Hamming distance for the Wisconsin breast cancer data.
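The contrast between the two kinds of distance can be made concrete. The Hamming distance treats every pair of distinct categorical values as equally far apart; a probabilistic distance instead compares how two values co-occur with the rest of the data. The second function below is one simple construction in that spirit (total variation distance between conditional distributions of another attribute), not necessarily the exact measure used in the paper.

```python
from collections import Counter

def hamming(a, b):
    """Hamming distance between two categorical records."""
    return sum(x != y for x, y in zip(a, b))

def value_distance(data, attr, x, y, other):
    """A simple probabilistic distance between values x and y of
    attribute `attr`: total variation distance between the conditional
    distributions of attribute `other` given x and given y."""
    def cond(v):
        rows = [r[other] for r in data if r[attr] == v]
        n = len(rows)
        return {k: c / n for k, c in Counter(rows).items()}
    px, py = cond(x), cond(y)
    keys = set(px) | set(py)
    return 0.5 * sum(abs(px.get(k, 0.0) - py.get(k, 0.0)) for k in keys)
```

Under this measure, two category values that behave alike elsewhere in the data end up close, whereas Hamming would score any distinct pair as 1.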

  17. CoVennTree: A new method for the comparative analysis of large datasets

    Directory of Open Access Journals (Sweden)

    Steffen C. Lott

    2015-02-01

Full Text Available The visualization of massive datasets, such as those resulting from comparative metatranscriptome analyses or the analysis of microbial population structures using ribosomal RNA sequences, is a challenging task. We developed a new method called CoVennTree (Comparative weighted Venn Tree) that simultaneously compares up to three multifarious datasets by aggregating and propagating information from the bottom to the top level and produces a graphical output in Cytoscape. With the introduction of weighted Venn structures, the contents and relationships of various datasets can be correlated and simultaneously aggregated without losing information. We demonstrate the suitability of this approach using a dataset of 16S rDNA sequences obtained from microbial populations at three different depths of the Gulf of Aqaba in the Red Sea. CoVennTree has been integrated into the Galaxy ToolShed and can be directly downloaded and integrated into the user instance.

  18. Vector Nonlinear Time-Series Analysis of Gamma-Ray Burst Datasets on Heterogeneous Clusters

    Directory of Open Access Journals (Sweden)

    Ioana Banicescu

    2005-01-01

    Full Text Available The simultaneous analysis of a number of related datasets using a single statistical model is an important problem in statistical computing. A parameterized statistical model is to be fitted on multiple datasets and tested for goodness of fit within a fixed analytical framework. Definitive conclusions are hopefully achieved by analyzing the datasets together. This paper proposes a strategy for the efficient execution of this type of analysis on heterogeneous clusters. Based on partitioning processors into groups for efficient communications and a dynamic loop scheduling approach for load balancing, the strategy addresses the variability of the computational loads of the datasets, as well as the unpredictable irregularities of the cluster environment. Results from preliminary tests of using this strategy to fit gamma-ray burst time profiles with vector functional coefficient autoregressive models on 64 processors of a general purpose Linux cluster demonstrate the effectiveness of the strategy.
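The dynamic loop scheduling component of such a strategy can be illustrated with the classic guided self-scheduling rule: each chunk handed to a processor is the remaining iteration count divided by the number of processors, so chunks shrink as the loop drains and late-arriving stragglers cause less imbalance. This is a generic textbook scheme, not the paper's specific strategy, which additionally groups processors for efficient communication.

```python
import math

def guided_chunks(n_iters, n_procs, min_chunk=1):
    """Guided self-scheduling chunk-size sequence: chunk_i =
    ceil(remaining / n_procs), clamped below by min_chunk."""
    remaining = n_iters
    chunks = []
    while remaining > 0:
        c = min(max(min_chunk, math.ceil(remaining / n_procs)), remaining)
        chunks.append(c)
        remaining -= c
    return chunks

# e.g. guided_chunks(100, 4) starts at 25 and tapers toward 1
```

Large early chunks keep scheduling overhead low; small late chunks smooth out the heterogeneous and unpredictable per-processor speeds the abstract mentions.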

  19. NDE Technology Development Program for Non-Visual Volumetric Inspection Technology; Sensor Effectiveness Testing Report

    Energy Technology Data Exchange (ETDEWEB)

    Moran, Traci L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Larche, Michael R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Denslow, Kayte M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Glass, Samuel W. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-08-31

    The Pacific Northwest National Laboratory (PNNL) located in Richland, Washington, hosted and administered Sensor Effectiveness Testing that allowed four different participants to demonstrate the NDE volumetric inspection technologies that were previously demonstrated during the Technology Screening session. This document provides a Sensor Effectiveness Testing report for the final part of Phase I of a three-phase NDE Technology Development Program designed to identify and mature a system or set of non-visual volumetric NDE technologies for Hanford DST primary liner bottom inspection. Phase I of the program will baseline the performance of current or emerging non-visual volumetric NDE technologies for their ability to detect and characterize primary liner bottom flaws, and identify candidate technologies for adaptation and maturation for Phase II of the program.

  20. Developing a Data-Set for Stereopsis

    Directory of Open Access Journals (Sweden)

    D.W Hunter

    2014-08-01

Full Text Available Current research on binocular stereopsis in humans and non-human primates has been limited by a lack of available data-sets. Current data-sets fall into two categories: stereo-image sets with vergence but no ranging information (Hibbard, 2008, Vision Research, 48(12), 1427-1439) or combinations of depth information with binocular images and video taken from cameras in fixed fronto-parallel configurations exhibiting neither vergence nor focus effects (Hirschmuller & Scharstein, 2007, IEEE Conf. Computer Vision and Pattern Recognition). The techniques for generating depth information are also imperfect. Depth information is normally inaccurate or simply missing near edges and on partially occluded surfaces. For many areas of vision research these are the most interesting parts of the image (Goutcher, Hunter, Hibbard, 2013, i-Perception, 4(7), 484; Scarfe & Hibbard, 2013, Vision Research). Using state-of-the-art open-source ray-tracing software (PBRT) as a back-end, our intention is to release a set of tools that will allow researchers in this field to generate artificial binocular stereoscopic data-sets. Although not as realistic as photographs, computer generated images have significant advantages in terms of control over the final output, and ground-truth information about scene depth is easily calculated at all points in the scene, even partially occluded areas. While individual researchers have been developing similar stimuli by hand for many decades, we hope that our software will greatly reduce the time and difficulty of creating naturalistic binocular stimuli. Our intention in making this presentation is to elicit feedback from the vision community about what sort of features would be desirable in such software.

  1. Combined surface and volumetric occlusion shading

    KAUST Repository

    Schott, Matthias O.; Martin, Tobias; Grosset, A. V Pascal; Brownlee, Carson; Hollt, Thomas; Brown, Benjamin P.; Smith, Sean T.; Hansen, Charles D.

    2012-01-01

    In this paper, a method for interactive direct volume rendering is proposed that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The proposed algorithm extends the recently proposed Directional Occlusion Shading model to allow the rendering of those geometric shapes in combination with a context providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. © 2012 IEEE.

  3. Evolving hard problems: Generating human genetics datasets with a complex etiology

    Directory of Open Access Journals (Sweden)

    Himmelstein Daniel S

    2011-07-01

Full Text Available Abstract Background A goal of human genetics is to discover genetic factors that influence individuals' susceptibility to common diseases. Most common diseases are thought to result from the joint failure of two or more interacting components instead of single component failures. This greatly complicates both the task of selecting informative genetic variants and the task of modeling interactions between them. We and others have previously developed algorithms to detect and model the relationships between these genetic factors and disease. Previously, these methods have been evaluated with datasets simulated according to pre-defined genetic models. Results Here we develop and evaluate a model-free evolution strategy to generate datasets which display a complex relationship between individual genotype and disease susceptibility. We show that this model-free approach is capable of generating a diverse array of datasets with distinct gene-disease relationships for an arbitrary interaction order and sample size. We specifically generate eight hundred Pareto fronts, one for each independent run of our algorithm. In each run, the predictiveness of single genetic variants and pairs of genetic variants has been minimized, while the predictiveness of third-, fourth-, or fifth-order combinations is maximized. Two hundred runs of the algorithm are further dedicated to creating datasets with predictive fourth- or fifth-order interactions and minimized lower-level effects. Conclusions This method and the resulting datasets will allow the capabilities of novel methods to be tested without pre-specified genetic models. This allows researchers to evaluate which methods will succeed on human genetics problems where the model is not known in advance. We further make freely available to the community the entire Pareto-optimal front of datasets from each run so that novel methods may be rigorously evaluated. These 76,600 datasets are available from http://discovery.dartmouth.edu/model_free_data/.
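A minimal illustration of the kind of dataset the evolution strategy searches for: a parity (XOR) relation among three binary variants is fully predictive jointly, while every single variant carries essentially no marginal signal. The published method evolves such datasets without assuming a model; this closed-form construction is only a sketch of the target property.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.integers(0, 2, size=(4000, 3))  # three binary "SNPs"
status = G.sum(axis=1) % 2              # case/control = three-way parity

# Marginal association of each single variant is ~zero: conditioning on
# any one SNP leaves the case/control ratio at roughly 50/50 ...
marginals = [
    abs(status[G[:, i] == 1].mean() - status[G[:, i] == 0].mean())
    for i in range(3)
]
# ... yet the full three-variant genotype determines status exactly.
```

Detecting such "pure" higher-order interactions without lower-order clues is exactly the task that breaks greedy variant-selection methods, which is why model-free benchmark datasets like these are valuable.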

  4. Comparison of global 3-D aviation emissions datasets

    Directory of Open Access Journals (Sweden)

    S. C. Olsen

    2013-01-01

Full Text Available Aviation emissions are unique from other transportation emissions, e.g., from road transportation and shipping, in that they occur at higher altitudes as well as at the surface. Aviation emissions of carbon dioxide, soot, and water vapor have direct radiative impacts on the Earth's climate system, while emissions of nitrogen oxides (NOx), sulfur oxides, carbon monoxide (CO), and hydrocarbons (HC) impact air quality and climate through their effects on ozone, methane, and clouds. The most accurate estimates of the impact of aviation on air quality and climate utilize three-dimensional chemistry-climate models and gridded four-dimensional (space and time) aviation emissions datasets. We compare five available aviation emissions datasets currently and historically used to evaluate the impact of aviation on climate and air quality: NASA-Boeing 1992, NASA-Boeing 1999, QUANTIFY 2000, Aero2k 2002, and AEDT 2006, and aviation fuel usage estimates from the International Energy Agency. Roughly 90% of all aviation emissions are in the Northern Hemisphere and nearly 60% of all fuelburn and NOx emissions occur at cruise altitudes in the Northern Hemisphere. While these datasets were created by independent methods and are thus not strictly suitable for analyzing trends, they suggest that commercial aviation fuelburn and NOx emissions increased over the last two decades while HC emissions likely decreased and CO emissions did not change significantly. The bottom-up estimates compared here are consistently lower than International Energy Agency fuelburn statistics, although the gap is significantly smaller in the more recent datasets. Overall the emissions distributions are quite similar for fuelburn and NOx, with regional peaks over the populated land masses of North America, Europe, and East Asia. For CO and HC there are relatively larger differences. There are however some distinct differences in the altitude distribution

  5. Common integration sites of published datasets identified using a graph-based framework

    Directory of Open Access Journals (Sweden)

    Alessandro Vasciaveo

    2016-01-01

    Full Text Available With next-generation sequencing, the genomic data available for the characterization of integration sites (IS) has dramatically increased. At present, in a single experiment, several thousand viral integration genome targets can be investigated to define genomic hot spots. In a previous article, we reworked a formal CIS analysis based on a rigid fixed-window demarcation into a more flexible definition grounded on graphs. Here, we present a selection of supporting data related to the graph-based framework (GBF) from our previous article, in which a collection of common integration sites (CIS) was identified on six published datasets. In this work, we will focus on two datasets, ISRTCGD and ISHIV, which have been previously discussed. Moreover, we show in more detail the workflow design that originates the datasets.
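
The graph idea behind the record above can be shown in miniature: treat each integration site as a node, link sites closer than a gap threshold, and report connected components as candidate CIS. The positions and threshold below are invented; this is the generic proximity-graph idea, not the article's exact GBF algorithm.

```python
def cis_clusters(positions, max_gap):
    """Connected components of the proximity graph on one chromosome:
    consecutive sorted sites at most max_gap bp apart join one cluster."""
    clusters = []
    for pos in sorted(positions):
        if clusters and pos - clusters[-1][-1] <= max_gap:
            clusters[-1].append(pos)   # extend the current component
        else:
            clusters.append([pos])     # start a new component
    return clusters

sites = [100, 150, 10000, 10040, 10090, 50000]
print(cis_clusters(sites, max_gap=500))  # three clusters
```

Unlike a fixed-window scan, the cluster extent here adapts to the local density of sites, which is the flexibility the abstract refers to.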

  6. Three-Dimensional Dynamic Rupture in Brittle Solids and the Volumetric Strain Criterion

    Science.gov (United States)

    Uenishi, K.; Yamachi, H.

    2017-12-01

    As pointed out by Uenishi (2016 AGU Fall Meeting), source dynamics of ordinary earthquakes is often studied in the framework of 3D rupture in brittle solids, but our knowledge of the mechanics of actual 3D rupture is limited. Typically, criteria derived from 1D frictional observations of sliding materials or post-failure behavior of solids are applied in seismic simulations, and although mode-I cracks are frequently encountered in earthquake-induced ground failures, rupture in tension is in most cases ignored. Even when it is included in analyses, the classical maximum principal tensile stress rupture criterion is repeatedly used. Our recent basic experiments on dynamic rupture of spherical or cylindrical monolithic brittle solids, by applying high-voltage electric discharge impulses or impact loads, have indicated generation of surprisingly simple and often flat rupture surfaces in 3D specimens even without the initial existence of planes of weakness. However, at the same time, the snapshots taken by a high-speed digital video camera have shown rather complicated histories of rupture development in these 3D solid materials, which seem difficult to explain with, for example, the maximum principal stress criterion. Instead, a (tensile) volumetric strain criterion, where the volumetric strain (dilatation, or the first invariant of the strain tensor) is a decisive parameter for rupture, seems more effective in computationally reproducing the multi-directionally propagating waves and rupture. In this study, we try to show the connection between this volumetric strain criterion and other classical rupture criteria or physical parameters employed in continuum mechanics, and indicate that the criterion has, to some degree, physical meaning. First, we mathematically illustrate that the criterion is equivalent to a criterion based on the mean normal stress, a crucial parameter in plasticity. Then, we mention the relation between the volumetric strain criterion and the
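
The quantities the abstract links can be written out directly: the volumetric strain is the first invariant of the strain tensor, and for a linear-elastic solid it is proportional to the mean normal stress through the bulk modulus K. The material value below is an assumed order of magnitude for rock, not from the study.

```python
def volumetric_strain(e1, e2, e3):
    # first invariant of the (principal) strain tensor: dilatation
    return e1 + e2 + e3

def mean_normal_stress(e1, e2, e3, bulk_modulus):
    # linear elasticity: sigma_mean = K * eps_vol, hence the equivalence
    # between a volumetric-strain criterion and a mean-stress criterion
    return bulk_modulus * volumetric_strain(e1, e2, e3)

K = 40e9                         # Pa, illustrative bulk modulus for rock
eps = (1e-4, -2e-5, -2e-5)       # hypothetical principal strains
print(volumetric_strain(*eps))           # ~6e-05, net dilatation
print(mean_normal_stress(*eps, K))       # ~2.4e6 Pa, tensile mean stress
```

This is the sense in which the two criteria are equivalent for an elastic material: thresholding one quantity thresholds the other.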

  7. Spatially continuous dataset at local scale of Taita Hills in Kenya and Mount Kilimanjaro in Tanzania

    Directory of Open Access Journals (Sweden)

    Sizah Mwalusepo

    2016-09-01

    Full Text Available Climate change is a global concern, requiring local-scale spatially continuous datasets and modeling of meteorological variables. This dataset article provides interpolated temperature, rainfall and relative humidity datasets at local scale along the Taita Hills and Mount Kilimanjaro altitudinal gradients in Kenya and Tanzania, respectively. Temperature and relative humidity were recorded hourly using automatic Onset HOBO data loggers, and rainfall was recorded daily using wireless rain gauges. Thin plate spline (TPS) interpolation was used, with the degree of data smoothing determined by minimizing the generalized cross-validation. The dataset provides information on the status of the current climatic conditions along the two mountainous altitudinal gradients in Kenya and Tanzania. The dataset will, thus, enhance future research. Keywords: Spatial climate data, Climate change, Modeling, Local scale
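
The interpolation step can be sketched as a minimal thin plate spline fit, solving the standard exact-interpolation TPS linear system directly. Station coordinates and temperatures are invented, and the GCV-based smoothing selection used in the article is omitted here.

```python
import numpy as np

def tps_kernel(r):
    # thin plate spline radial basis U(r) = r^2 log(r), with U(0) = 0
    safe = np.where(r == 0.0, 1.0, r)
    return np.where(r == 0.0, 0.0, r * r * np.log(safe))

def tps_fit(pts, vals):
    """Solve the TPS system [[K P],[P' 0]] [w; a] = [v; 0]."""
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    P = np.hstack([np.ones((n, 1)), pts])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = tps_kernel(dist)
    A[:n, n:] = P
    A[n:, :n] = P.T
    sol = np.linalg.solve(A, np.concatenate([vals, np.zeros(3)]))
    return sol[:n], sol[n:]            # kernel weights w, affine part a

def tps_eval(pts, w, a, q):
    d = np.linalg.norm(q - pts, axis=-1)
    return a[0] + a[1] * q[0] + a[2] * q[1] + w @ tps_kernel(d)

# five hypothetical stations (x, y in km) with temperatures in deg C
stations = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.], [0.5, 0.5]])
temps = np.array([20.0, 18.0, 19.0, 17.0, 18.5])
w, a = tps_fit(stations, temps)
print(tps_eval(stations, w, a, np.array([0.25, 0.75])))  # interpolated value
```

With zero smoothing the surface passes exactly through the station values; the article's GCV step instead trades that exactness for noise robustness.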

  8. A Unified Framework for Measuring Stewardship Practices Applied to Digital Environmental Datasets

    Directory of Open Access Journals (Sweden)

    Ge Peng

    2015-01-01

    Full Text Available This paper presents a stewardship maturity assessment model in the form of a matrix for digital environmental datasets. Nine key components are identified based on requirements imposed on digital environmental data and information cared for and disseminated by U.S. Federal agencies, drawing on U.S. law (the Information Quality Act of 2001), agencies’ guidance, expert bodies’ recommendations, and user needs. These components include: preservability, accessibility, usability, production sustainability, data quality assurance, data quality control/monitoring, data quality assessment, transparency/traceability, and data integrity. A five-level progressive maturity scale is then defined for each component, associated with measurable practices applied to individual datasets and representing Ad Hoc, Minimal, Intermediate, Advanced, and Optimal stages. The rationale for each key component and its maturity levels is described. This maturity model, leveraging community best practices and standards, provides a unified framework for assessing scientific data stewardship. It can be used to create a stewardship maturity scoreboard of dataset(s) and a roadmap for scientific data stewardship improvement, or to provide data quality and usability information to users, stakeholders, and decision makers.

  9. GPU-based Scalable Volumetric Reconstruction for Multi-view Stereo

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H; Duchaineau, M; Max, N

    2011-09-21

    We present a new scalable volumetric reconstruction algorithm for multi-view stereo using a graphics processing unit (GPU). It is an effectively parallelized GPU algorithm that simultaneously uses a large number of GPU threads, each of which performs voxel carving, in order to integrate depth maps with images from multiple views. Each depth map, triangulated from pair-wise semi-dense correspondences, represents a view-dependent surface of the scene. This algorithm also provides scalability for large-scale scene reconstruction in a high resolution voxel grid by utilizing streaming and parallel computation. The output is a photo-realistic 3D scene model in a volumetric or point-based representation. We demonstrate the effectiveness and the speed of our algorithm with a synthetic scene and real urban/outdoor scenes. Our method can also be integrated with existing multi-view stereo algorithms such as PMVS2 to fill holes or gaps in textureless regions.
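
The core carving operation can be illustrated with a deliberately simplified sketch: each voxel is tested against a view's depth map and removed if the camera sees past it. An axis-aligned orthographic camera and a plain Python loop are assumed for brevity; the paper's GPU version instead assigns one thread per voxel and integrates many views.

```python
def carve(occupancy, depth_map):
    """Carve voxels the camera sees through.

    occupancy[z][y][x]: candidate voxels (True = possibly solid).
    depth_map[y][x]: depth (in voxel units along +z) of the first surface
    seen by an axis-aligned orthographic camera. Any voxel strictly in
    front of that surface must be empty space, so it is carved away.
    """
    for z, layer in enumerate(occupancy):
        for y, row in enumerate(depth_map):
            for x, d in enumerate(row):
                if z < d:                 # camera ray passes through this voxel
                    layer[y][x] = False   # carve it
    return occupancy

grid = [[[True] * 2 for _ in range(2)] for _ in range(3)]  # z, y, x = 3 x 2 x 2
depth = [[1, 2],
         [0, 3]]                          # per-pixel first-surface depth
carve(grid, depth)
print(grid[0][0][0], grid[1][0][0])       # False True
```

Because each voxel's test is independent of every other voxel's, the triple loop parallelizes trivially, which is what makes the one-thread-per-voxel GPU mapping effective.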

  10. In Vivo Real Time Volumetric Synthetic Aperture Ultrasound Imaging

    DEFF Research Database (Denmark)

    Bouzari, Hamed; Rasmussen, Morten Fischer; Brandt, Andreas Hjelm

    2015-01-01

    Synthetic aperture (SA) imaging can be used to achieve real-time volumetric ultrasound imaging using 2-D array transducers. The sensitivity of SA imaging is improved by maximizing the acoustic output, but one must consider the limitations of an ultrasound system, both technical and biological. This paper investigates the in vivo applicability and sensitivity of volumetric SA imaging. Utilizing the transmit events to generate a set of virtual point sources, a frame rate of 25 Hz for a 90° x 90° field-of-view was achieved. Data were obtained using a 3.5 MHz 32 x 32 element 2-D phased-array transducer connected to the experimental scanner (SARUS). Proper scaling is applied to the excitation signal such that intensity levels are in compliance with the U.S. Food and Drug Administration regulations for in vivo ultrasound imaging. The measured Mechanical Index and spatial-peak-temporal

  11. CO2 Capacity Sorbent Analysis Using Volumetric Measurement Approach

    Science.gov (United States)

    Huang, Roger; Richardson, Tra-My Justine; Belancik, Grace; Jan, Darrell; Knox, Jim

    2017-01-01

    In support of air revitalization system sorbent selection for future space missions, Ames Research Center (ARC) has performed CO2 capacity tests on various solid sorbents to complement structural strength tests conducted at Marshall Space Flight Center (MSFC). The materials of interest are: Grace Davison Grade 544 13X, Honeywell UOP APG III, LiLSX VSA-10, BASF 13X, and Grace Davison Grade 522 5A. CO2 capacity was measured for all sorbent materials using a Micromeritics ASAP 2020 physisorption volumetric analysis instrument to produce 0 °C, 10 °C, 25 °C, 50 °C, and 75 °C isotherms. These data are to be used for modeling and to provide a basis for continued sorbent research. The volumetric analysis method proved to be effective in generating consistent and repeatable data for the 13X sorbents, but the method needs to be refined to suit different sorbents.

  12. An Annotated Dataset of 14 Meat Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This note describes a dataset consisting of 14 annotated images of meat. Points of correspondence are placed on each image. As such, the dataset can be readily used for building statistical models of shape. Further, format specifications and terms of use are given.

  13. Minimum pricing of alcohol versus volumetric taxation: which policy will reduce heavy consumption without adversely affecting light and moderate consumers?

    Science.gov (United States)

    Sharma, Anurag; Vandenberg, Brian; Hollingsworth, Bruce

    2014-01-01

    We estimate the effect on light, moderate and heavy consumers of alcohol from implementing a minimum unit price for alcohol (MUP) compared with a uniform volumetric tax. We analyse scanner data from a panel survey of demographically representative households (n = 885) collected over a one-year period (24 Jan 2010-22 Jan 2011) in the state of Victoria, Australia, which includes detailed records of each household's off-trade alcohol purchasing. The heaviest consumers (3% of the sample) currently purchase 20% of the total litres of alcohol (LALs), are more likely to purchase cask wine and full strength beer, and pay significantly less on average per standard drink compared to the lightest consumers (A$1.31 [95% CI 1.20-1.41] compared to $2.21 [95% CI 2.10-2.31]). Applying a MUP of A$1 per standard drink has a greater effect on reducing the mean annual volume of alcohol purchased by the heaviest consumers of wine (15.78 LALs [95% CI 14.86-16.69]) and beer (1.85 LALs [95% CI 1.64-2.05]) compared to a uniform volumetric tax (9.56 LALs [95% CI 9.10-10.01] and 0.49 LALs [95% CI 0.46-0.41], respectively). A MUP results in smaller increases in the annual cost for the heaviest consumers of wine ($393.60 [95% CI 374.19-413.00]) and beer ($108.26 [95% CI 94.76-121.75]), compared to a uniform volumetric tax ($552.46 [95% CI 530.55-574.36] and $163.92 [95% CI 152.79-175.03], respectively). Both a MUP and uniform volumetric tax have little effect on changing the annual cost of wine and beer for light and moderate consumers, and likewise little effect upon their purchasing. While both a MUP and a uniform volumetric tax have potential to reduce heavy consumption of wine and beer without adversely affecting light and moderate consumers, a MUP offers the potential to achieve greater reductions in heavy consumption at a lower overall annual cost to consumers.
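
The mechanism behind this result can be shown with two one-line pricing rules: a MUP only raises drinks currently sold below the floor, whereas a uniform volumetric tax shifts every price upward. The prices and rates below are hypothetical, not figures from the study.

```python
def mup_price(price_per_drink, floor=1.00):
    # A$ per standard drink under a minimum unit price: a floor, not a levy
    return max(price_per_drink, floor)

def taxed_price(price_per_drink, tax=0.30):
    # A$ per standard drink under a uniform volumetric tax: everyone pays
    return price_per_drink + tax

cask_wine, premium_wine = 0.60, 2.20  # assumed current prices per drink
print(mup_price(cask_wine), mup_price(premium_wine))    # 1.0 2.2
print(taxed_price(cask_wine), taxed_price(premium_wine))
```

The cheap cask wine favoured by the heaviest consumers is hit hard by the floor while the premium product is untouched, which is why the MUP concentrates its effect on heavy consumption at a lower total cost.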

  14. Volumetric polymerization shrinkage of contemporary composite resins

    OpenAIRE

    Nagem Filho, Halim; Nagem, Haline Drumond; Francisconi, Paulo Afonso Silveira; Franco, Eduardo Batista; Mondelli, Rafael Francisco Lia; Coutinho, Kennedy Queiroz

    2007-01-01

    The polymerization shrinkage of composite resins may affect negatively the clinical outcome of the restoration. Extensive research has been carried out to develop new formulations of composite resins in order to provide good handling characteristics and some dimensional stability during polymerization. The purpose of this study was to analyze, in vitro, the magnitude of the volumetric polymerization shrinkage of 7 contemporary composite resins (Definite, Suprafill, SureFil, Filtek Z250, Fill ...

  15. Effects of Prepolymerized Particle Size and Polymerization Kinetics on Volumetric Shrinkage of Dental Modeling Resins

    Directory of Open Access Journals (Sweden)

    Tae-Yub Kwon

    2014-01-01

    Full Text Available Dental modeling resins have been developed for use in areas where highly precise resin structures are needed. The manufacturers claim that these polymethyl methacrylate/methyl methacrylate (PMMA/MMA) resins show little or no shrinkage after polymerization. This study examined the polymerization shrinkage of five dental modeling resins as well as one temporary PMMA/MMA resin (control). The morphology and the particle size of the prepolymerized PMMA powders were investigated by scanning electron microscopy and laser diffraction particle size analysis, respectively. Linear polymerization shrinkage strains of the resins were monitored for 20 minutes using a custom-made linometer, and the final values (at 20 minutes) were converted into volumetric shrinkages. The final volumetric shrinkage values for the modeling resins were statistically similar (P>0.05) or significantly larger (P<0.05) than that of the control resin and were related to the polymerization kinetics (P<0.05) rather than the PMMA bead size (P=0.335). Therefore, optimal control of the polymerization kinetics seems to be more important for producing high-precision resin structures than the use of dental modeling resins per se.
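
The conversion from the linometer's linear shrinkage strain to volumetric shrinkage can be sketched as below, assuming isotropic shrinkage (the article does not spell out its exact conversion formula).

```python
def volumetric_shrinkage(linear_strain):
    # isotropic shrinkage: a cube of side (1 - s) keeps (1 - s)^3 of its
    # volume, so the volumetric shrinkage is 1 - (1 - s)^3, which is
    # approximately 3 * s for small strains
    return 1.0 - (1.0 - linear_strain) ** 3

print(round(volumetric_shrinkage(0.02) * 100, 2))  # 5.88 (% for 2% linear)
```

The roughly threefold amplification is why even sub-percent linear strains matter for the dimensional stability these resins are marketed on.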

  16. Volumetric velocity measurements in restricted geometries using spiral sampling: a phantom study.

    Science.gov (United States)

    Nilsson, Anders; Revstedt, Johan; Heiberg, Einar; Ståhlberg, Freddy; Bloch, Karin Markenroth

    2015-04-01

    The aim of this study was to evaluate the accuracy of maximum velocity measurements using volumetric phase-contrast imaging with spiral readouts in a stenotic flow phantom. In a phantom model, maximum velocity, flow, pressure gradient, and streamline visualizations were evaluated using volumetric phase-contrast magnetic resonance imaging (MRI) with velocity encoding in one (extending on current clinical practice) and three directions (for characterization of the flow field) using spiral readouts. Results of maximum velocity and pressure drop were compared to computational fluid dynamics (CFD) simulations, as well as corresponding low-echo-time (TE) Cartesian data. Flow was compared to 2D through-plane phase contrast (PC) upstream from the restriction. Results obtained with 3D through-plane PC as well as 4D PC at shortest TE using a spiral readout showed excellent agreement with the maximum velocity values obtained with CFD, whereas the corresponding Cartesian sequences were respectively 14% and 13% overestimated compared to CFD. Identification of the maximum velocity location, as well as accurate velocity quantification, can be obtained in stenotic regions using short-TE spiral volumetric PC imaging.

  17. Nanofoaming to Boost the Electrochemical Performance of Ni@Ni(OH)2 Nanowires for Ultrahigh Volumetric Supercapacitors.

    Science.gov (United States)

    Xu, Shusheng; Li, Xiaolin; Yang, Zhi; Wang, Tao; Jiang, Wenkai; Yang, Chao; Wang, Shuai; Hu, Nantao; Wei, Hao; Zhang, Yafei

    2016-10-10

    Three-dimensional free-standing film electrodes have aroused great interest for energy storage devices. However, small volumetric capacity and low operating voltage limit their practical application for large energy storage applications. Herein, a facile and novel nanofoaming process was demonstrated to boost the volumetric electrochemical capacitance of the devices via activation of Ni nanowires to form ultrathin nanosheets and porous nanostructures. The as-designed free-standing Ni@Ni(OH)2 film electrodes display a significantly enhanced volumetric capacity (462 C/cm3 at 0.5 A/cm3) and excellent cycle stability. Moreover, the as-developed hybrid supercapacitor employing Ni@Ni(OH)2 film as positive electrode and graphene-carbon nanotube film as negative electrode exhibits a high volumetric capacitance of 95 F/cm3 (at 0.25 A/cm3) and excellent cycle performance (only 14% capacitance reduction over 4500 cycles). Furthermore, the volumetric energy density can reach 33.9 mWh/cm3, which is much higher than that of most thin-film lithium batteries (1-10 mWh/cm3). This work gives insight for designing high-volume three-dimensional electrodes and paves a new way to construct binder-free film electrodes for high-performance hybrid supercapacitor applications.
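
The reported figures can be sanity-checked against the capacitor energy formula E = (1/2)CV². The ~1.6 V operating window is an assumption on our part; the abstract reports only the capacitance (95 F/cm3) and the energy density (33.9 mWh/cm3).

```python
def energy_density_mwh_per_cm3(cap_f_per_cm3, voltage_v):
    # capacitor energy E = 0.5 * C * V^2, converted from J to mWh
    joules_per_cm3 = 0.5 * cap_f_per_cm3 * voltage_v ** 2
    return joules_per_cm3 / 3.6   # 1 mWh = 3.6 J

print(round(energy_density_mwh_per_cm3(95.0, 1.6), 1))  # ~33.8 mWh/cm3
```

With the assumed 1.6 V window the formula reproduces the reported ~33.9 mWh/cm3 almost exactly, consistent with the paper's internal numbers.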

  18. Comparison of recent SnIa datasets

    International Nuclear Information System (INIS)

    Sanchez, J.C. Bueno; Perivolaropoulos, L.; Nesseris, S.

    2009-01-01

    We rank the six latest Type Ia supernova (SnIa) datasets (Constitution (C), Union (U), ESSENCE (Davis) (E), Gold06 (G), SNLS 1yr (S) and SDSS-II (D)) in the context of the Chevalier-Polarski-Linder (CPL) parametrization w(a) = w0 + w1(1−a), according to their Figure of Merit (FoM), their consistency with the cosmological constant (ΛCDM), their consistency with standard rulers (Cosmic Microwave Background (CMB) and Baryon Acoustic Oscillations (BAO)) and their mutual consistency. We find a significant improvement of the FoM (defined as the inverse area of the 95.4% parameter contour) with the number of SnIa of these datasets ((C) highest FoM, (U), (G), (D), (E), (S) lowest FoM). Standard rulers (CMB+BAO) have a better FoM by about a factor of 3, compared to the highest FoM SnIa dataset (C). We also find that the ranking sequence based on consistency with ΛCDM is identical with the corresponding ranking based on consistency with standard rulers ((S) most consistent, (D), (C), (E), (U), (G) least consistent). The ranking sequence of the datasets however changes when we consider the consistency with an expansion history corresponding to evolving dark energy (w0, w1) = (−1.4, 2) crossing the phantom divide line w = −1 (it is practically reversed to (G), (U), (E), (S), (D), (C)). The SALT2 and MLCS2k2 fitters are also compared and some peculiar features of the SDSS-II dataset when standardized with the MLCS2k2 fitter are pointed out. Finally, we construct a statistic to estimate the internal consistency of a collection of SnIa datasets. We find that even though there is good consistency among most samples taken from the above datasets, this consistency decreases significantly when the Gold06 (G) dataset is included in the sample
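
The CPL parametrization used throughout the record is a two-parameter equation of state; written as a function of redshift via a = 1/(1 + z), it is easy to see how the quoted (w0, w1) = (−1.4, 2) model crosses the phantom divide.

```python
def w_cpl(z, w0, w1):
    # CPL equation of state w(a) = w0 + w1 * (1 - a), with a = 1/(1 + z);
    # LambdaCDM corresponds to (w0, w1) = (-1, 0)
    a = 1.0 / (1.0 + z)
    return w0 + w1 * (1.0 - a)

print(w_cpl(0.0, -1.4, 2.0))  # -1.4 today: below the phantom divide w = -1
print(w_cpl(1.0, -1.4, 2.0))  # at z = 1 the same model sits above w = -1
```

The sign change of w(z) + 1 between z = 0 and z = 1 is exactly the phantom-divide crossing that reverses the dataset ranking in the abstract.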

  19. The NOAA Dataset Identifier Project

    Science.gov (United States)

    de la Beaujardiere, J.; Mccullough, H.; Casey, K. S.

    2013-12-01

    The US National Oceanic and Atmospheric Administration (NOAA) initiated a project in 2013 to assign persistent identifiers to datasets archived at NOAA and to create informational landing pages about those datasets. The goals of this project are to enable the citation of datasets used in products and results in order to help provide credit to data producers, to support traceability and reproducibility, and to enable tracking of data usage and impact. A secondary goal is to encourage the submission of datasets for long-term preservation, because only archived datasets will be eligible for a NOAA-issued identifier. A team was formed with representatives from the National Geophysical, Oceanographic, and Climatic Data Centers (NGDC, NODC, NCDC) to resolve questions including which identifier scheme to use (answer: Digital Object Identifier - DOI), whether or not to embed semantics in identifiers (no), the level of granularity at which to assign identifiers (as coarsely as reasonable), how to handle ongoing time-series data (do not break into chunks), creation mechanism for the landing page (stylesheet from formal metadata record preferred), and others. Decisions made and implementation experience gained will inform the writing of a Data Citation Procedural Directive to be issued by the Environmental Data Management Committee in 2014. Several identifiers have been issued as of July 2013, with more on the way. NOAA is now reporting the number as a metric to federal Open Government initiatives. This paper will provide further details and status of the project.

  20. Stability and Volumetric Properties of Asphalt Mixture Containing Waste Plastic

    Directory of Open Access Journals (Sweden)

    Abd Kader Siti Aminah

    2017-01-01

    Full Text Available The objectives of this study are to determine the optimum bitumen content (OBC) for every percentage of waste plastic added to asphalt mixtures and to investigate the stability properties of the asphalt mixtures containing waste plastic. Marshall stability and flow values, along with density, air voids in total mix, voids in mineral aggregate, and voids filled with bitumen, were determined to obtain the OBC at different percentages of waste plastic, i.e., 4%, 6%, 8%, and 10% by weight of bitumen as additive. Results showed that the OBC for the plastic-modified asphalt mixtures at 4%, 6%, 8%, and 10% is 4.98, 5.44, 5.48, and 5.14, respectively. On the other hand, the control specimens show better volumetric properties compared to the plastic mixes. However, the addition of 4% waste plastic indicated better stability than the control specimen.

  1. Control Measure Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — The EPA Control Measure Dataset is a collection of documents describing air pollution control available to regulated facilities for the control and abatement of air...

  2. Comparing the accuracy of food outlet datasets in an urban environment

    Directory of Open Access Journals (Sweden)

    Michelle S. Wong

    2017-05-01

    Full Text Available Studies that investigate the relationship between the retail food environment and health outcomes often use geospatial datasets. Prior studies have identified challenges of using the most common data sources. Retail food environment datasets created through academic-government partnership present an alternative, but their validity (retail existence, type, location) has not been assessed yet. In our study, we used ground-truth data to compare the validity of two datasets, a 2015 commercial dataset (InfoUSA) and data collected from 2012 to 2014 through the Maryland Food Systems Mapping Project (MFSMP), an academic-government partnership, on the retail food environment in two low-income, inner-city neighbourhoods in Baltimore City. We compared sensitivity and positive predictive value (PPV) of the commercial and academic-government partnership data to ground-truth data for two broad categories of unhealthy food retailers: small food retailers and quick-service restaurants. Ground-truth data was collected in 2015 and analysed in 2016. Compared to the ground-truth data, MFSMP and InfoUSA generally had similar sensitivity that was greater than 85%. MFSMP had higher PPV compared to InfoUSA for both small food retailers (MFSMP: 56.3% vs InfoUSA: 40.7%) and quick-service restaurants (MFSMP: 58.6% vs InfoUSA: 36.4%). We conclude that data from academic-government partnerships like MFSMP might be an attractive alternative option and improvement to relying only on commercial data. Other research institutes or cities might consider efforts to create and maintain such an environmental dataset. Even if these datasets cannot be updated on an annual basis, they are likely more accurate than commercial data.
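
The two validity measures compared in the record answer different questions: sensitivity asks "what share of ground-truth outlets did the dataset list?", while PPV asks "what share of listed outlets actually exist?". The validation counts below are made up for illustration, not taken from the study.

```python
def sensitivity(true_pos, false_neg):
    # share of real outlets that the dataset captured
    return true_pos / (true_pos + false_neg)

def ppv(true_pos, false_pos):
    # share of the dataset's listings that exist on the ground
    return true_pos / (true_pos + false_pos)

tp, fp, fn = 90, 68, 14   # hypothetical counts from a ground-truth audit
print(f"sensitivity = {sensitivity(tp, fn):.1%}, PPV = {ppv(tp, fp):.1%}")
```

A dataset can score high on one measure and poorly on the other, which is exactly the pattern reported: both sources found most real outlets, but differed sharply in how many phantom listings they carried.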

  3. Designing remote web-based mechanical-volumetric flow meter ...

    African Journals Online (AJOL)

    Today, the water and wastewater industry uses many mechanical-volumetric flow meters to monitor produced water. Because these meters are deployed across a wide geographical range, their readings are collected physically, through in-person visits. All this makes reading the data costly and, in some cases, due to ...

  4. Datasets of mung bean proteins and metabolites from four different cultivars

    Directory of Open Access Journals (Sweden)

    Akiko Hashiguchi

    2017-08-01

    Full Text Available Plants produce a wide array of nutrients that exert synergistic interactions when consumed in combination. Therefore, comprehensive nutrient profiling is required to evaluate their nutritional/nutraceutical value and health-promoting effects. In order to obtain such datasets for mung bean, which is known as a medicinal plant with a heat-alleviating effect, proteomic and metabolomic analyses were performed using four cultivars from China, Thailand, and Myanmar. In total, 449 proteins and 210 metabolic compounds were identified in seed coat, whereas 480 proteins and 217 metabolic compounds were detected in seed flesh, establishing the first comprehensive dataset of mung bean for nutraceutical evaluation.

  5. Full Life Cycle of Data Analysis with Climate Model Diagnostic Analyzer (CMDA)

    Science.gov (United States)

    Lee, S.; Zhai, C.; Pan, L.; Tang, B.; Zhang, J.; Bao, Q.; Malarout, N.

    2017-12-01

    We have developed a system that supports the full life cycle of a data analysis process, from data discovery, to data customization, to analysis, to reanalysis, to publication, and to reproduction. The system called Climate Model Diagnostic Analyzer (CMDA) is designed to demonstrate that the full life cycle of data analysis can be supported within one integrated system for climate model diagnostic evaluation with global observational and reanalysis datasets. CMDA has four subsystems that are highly integrated to support the analysis life cycle. Data System manages datasets used by CMDA analysis tools, Analysis System manages CMDA analysis tools which are all web services, Provenance System manages the meta data of CMDA datasets and the provenance of CMDA analysis history, and Recommendation System extracts knowledge from CMDA usage history and recommends datasets/analysis tools to users. These four subsystems are not only highly integrated but also easily expandable. New datasets can be easily added to Data System and scanned to be visible to the other subsystems. New analysis tools can be easily registered to be available in the Analysis System and Provenance System. With CMDA, a user can start a data analysis process by discovering datasets of relevance to their research topic using the Recommendation System. Next, the user can customize the discovered datasets for their scientific use (e.g. anomaly calculation, regridding, etc) with tools in the Analysis System. Next, the user can do their analysis with the tools (e.g. conditional sampling, time averaging, spatial averaging) in the Analysis System. Next, the user can reanalyze the datasets based on the previously stored analysis provenance in the Provenance System. Further, they can publish their analysis process and result to the Provenance System to share with other users. Finally, any user can reproduce the published analysis process and results. By supporting the full life cycle of climate data analysis

  6. Advanced Neuropsychological Diagnostics Infrastructure (ANDI): A Normative Database Created from Control Datasets.

    Directory of Open Access Journals (Sweden)

    Nathalie R. de Vent

    2016-10-01

    Full Text Available In the Advanced Neuropsychological Diagnostics Infrastructure (ANDI), datasets of several research groups are combined into a single database, containing scores on neuropsychological tests from healthy participants. For most popular neuropsychological tests the quantity and range of these data surpass those of traditional normative data, thereby enabling more accurate neuropsychological assessment. Because of the unique structure of the database, it facilitates normative comparison methods that were not feasible before, in particular those in which entire profiles of scores are evaluated. In this article, we describe the steps that were necessary to combine the separate datasets into a single database. These steps involve matching variables from multiple datasets, removing outlying values, determining the influence of demographic variables, and finding appropriate transformations to normality. Also, a brief description of the current contents of the ANDI database is given.

  7. Augmented Reality Prototype for Visualizing Large Sensors’ Datasets

    Directory of Open Access Journals (Sweden)

    Folorunso Olufemi A.

    2011-04-01

    Full Text Available This paper addresses the development of an augmented reality (AR)-based scientific visualization system prototype that supports identification, localisation, and 3D visualisation of oil-leakage sensor datasets. Sensors generate significant amounts of multivariate data during normal and leak situations, which makes data exploration and visualisation daunting tasks. Therefore, a model to manage such data and enhance the computational support needed for effective exploration is developed in this paper. A challenge of this approach is to reduce data inefficiency. This paper presents a model for computing the information gain of each data attribute and determining a lead attribute. The computed lead attribute is then used for the development of an AR-based scientific visualization interface which automatically identifies, localises and visualizes all necessary data relevant to a particular selected region of interest (ROI) on the network. The necessary architectural system supports and the interface requirements for such visualizations are also presented.
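
The lead-attribute selection described above can be sketched with the standard information-gain computation: score each attribute by how much it reduces label entropy, then pick the highest-scoring one. Attribute names, readings, and labels are invented for illustration.

```python
from collections import Counter
from math import log2

def entropy(labels):
    # Shannon entropy of a label list, in bits
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, attr, labels):
    # entropy of the labels minus the weighted entropy after splitting on attr
    gain = entropy(labels)
    for value in {r[attr] for r in rows}:
        subset = [lab for r, lab in zip(rows, labels) if r[attr] == value]
        gain -= len(subset) / len(rows) * entropy(subset)
    return gain

# hypothetical sensor readings with leak / normal labels
rows = [{"pressure": "low", "flow": "high"},
        {"pressure": "low", "flow": "high"},
        {"pressure": "high", "flow": "high"},
        {"pressure": "high", "flow": "low"}]
labels = ["leak", "leak", "ok", "ok"]

gains = {attr: info_gain(rows, attr, labels) for attr in ("pressure", "flow")}
lead = max(gains, key=gains.get)
print(lead)  # pressure -- it separates leak from ok perfectly
```

The lead attribute is then the natural axis to drive the AR interface, since it carries the most information about which regions of the network are anomalous.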

  8. Integration of geophysical datasets by a conjoint probability tomography approach: application to Italian active volcanic areas

    Directory of Open Access Journals (Sweden)

    D. Patella

    2008-06-01

    Full Text Available We expand the theory of probability tomography to the integration of different geophysical datasets. The aim of the new method is to improve the information quality using a conjoint occurrence probability function addressed to highlight the existence of common sources of anomalies. The new method is tested on gravity, magnetic and self-potential datasets collected in the volcanic area of Mt. Vesuvius (Naples), and on gravity and dipole geoelectrical datasets collected in the volcanic area of Mt. Etna (Sicily). The application demonstrates that, from a probabilistic point of view, the integrated analysis can delineate the signature of some important volcanic targets better than the analysis of the tomographic image of each dataset considered separately.

  9. The Kinetics Human Action Video Dataset

    OpenAIRE

    Kay, Will; Carreira, Joao; Simonyan, Karen; Zhang, Brian; Hillier, Chloe; Vijayanarasimhan, Sudheendra; Viola, Fabio; Green, Tim; Back, Trevor; Natsev, Paul; Suleyman, Mustafa; Zisserman, Andrew

    2017-01-01

    We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some ...

  10. Structural brain alterations of Down's syndrome in early childhood: evaluation by DTI and volumetric analyses

    Energy Technology Data Exchange (ETDEWEB)

    Gunbey, Hediye Pinar; Bilgici, Meltem Ceyhan; Aslan, Kerim; Incesu, Lutfi [Ondokuz Mayis University, Faculty of Medicine, Department of Radiology, Kurupelit, Samsun (Turkey); Has, Arzu Ceylan [Bilkent University, National Magnetic Resonance Research Center, Ankara (Turkey); Ogur, Methiye Gonul [Ondokuz Mayis University, Department of Genetics, Samsun (Turkey); Alhan, Aslihan [Ufuk University, Department of Statistics, Ankara (Turkey)

    2017-07-15

    To provide an initial assessment of white matter (WM) integrity with diffusion tensor imaging (DTI) and the accompanying volumetric changes in WM and grey matter (GM) through volumetric analyses of young children with Down's syndrome (DS). Ten children with DS and eight healthy control subjects were included in the study. Tract-based spatial statistics (TBSS) were used in the DTI study for whole-brain voxelwise analysis of fractional anisotropy (FA) and mean diffusivity (MD) of WM. Volumetric analyses were performed with an automated segmentation method to obtain regional measurements of cortical volumes. Children with DS showed significantly reduced FA in association tracts of the fronto-temporo-occipital regions as well as the corpus callosum (CC) and anterior limb of the internal capsule (p < 0.05). Volumetric reductions included total cortical GM, cerebellar GM and WM volume, basal ganglia, thalamus, brainstem and CC in DS compared with controls (p < 0.05). These preliminary results suggest that DTI and volumetric analyses may reflect the earliest complementary changes of the neurodevelopmental delay in children with DS and can serve as surrogate biomarkers of the specific elements of WM and GM integrity for cognitive development. (orig.)

  11. 100KE/KW fuel storage basin surface volumetric factors

    International Nuclear Information System (INIS)

    Conn, K.R.

    1996-01-01

    This Supporting Document presents calculations of surface volumetric factors for the 100KE and 100KW Fuel Storage Basins. These factors relate water-level changes to losses or additions of basin water, or to the equivalent water-displacement volumes of objects added to or removed from the basin.
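As a minimal sketch of what such a surface volumetric factor expresses (assuming near-vertical basin walls and hypothetical numbers, not the actual 100KE/KW geometry):

```python
def volume_change(surface_area_m2, level_change_m):
    """Surface volumetric factor in its simplest form: with near-vertical
    basin walls, water volume gained or lost is surface area times the
    water-level change. Numbers are hypothetical, not the 100KE/KW values."""
    return surface_area_m2 * level_change_m

# A 1 cm level drop over a hypothetical 1000 m^2 basin surface is 10 m^3.
print(volume_change(1000.0, 0.01))
```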

  12. Volumetric and superficial characterization of activated carbon

    International Nuclear Information System (INIS)

    Carrera G, L.M.; Garcia S, I.; Jimenez B, J.; Solache R, M.; Lopez M, B.; Bulbulian G, S.; Olguin G, M.T.

    2000-01-01

    Activated carbon is the material that results from the calcination of natural carbonaceous materials such as coconut shells or olive pits. It is an excellent adsorbent of diluted substances, both in colloidal and in particulate form; those substances are attracted to and retained by the carbon surface. In this work, the volumetric and surface characterization of activated carbon treated thermally (at 300 Centigrade) is carried out as a function of average grain size. (Author)

  13. High volumetric power density, non-enzymatic, glucose fuel cells.

    Science.gov (United States)

    Oncescu, Vlad; Erickson, David

    2013-01-01

    The development of new implantable medical devices has been limited in the past by slow advances in lithium battery technology. Non-enzymatic glucose fuel cells are promising replacement candidates for lithium batteries because of good long-term stability and adequate power density. The devices developed to date however use an "oxygen depletion design" whereby the electrodes are stacked on top of each other leading to low volumetric power density and complicated fabrication protocols. Here we have developed a novel single-layer fuel cell with good performance (2 μW cm⁻²) and stability that can be integrated directly as a coating layer on large implantable devices, or stacked to obtain a high volumetric power density (over 16 μW cm⁻³). This represents the first demonstration of a low volume non-enzymatic fuel cell stack with high power density, greatly increasing the range of applications for non-enzymatic glucose fuel cells.
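The relation between the reported areal and volumetric power densities is simple arithmetic; the 1.25 mm stacking pitch below is a hypothetical value chosen only to reproduce the quoted order of magnitude:

```python
def stack_volumetric_power(areal_uW_per_cm2, layer_pitch_cm):
    """Volumetric power density of a stacked single-layer fuel cell:
    areal power density divided by the per-layer stacking pitch."""
    return areal_uW_per_cm2 / layer_pitch_cm

# With the reported ~2 uW/cm^2 per layer, a hypothetical 1.25 mm pitch
# (8 layers per cm) gives 16 uW/cm^3, the order of magnitude reported.
print(stack_volumetric_power(2.0, 0.125))
```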

  14. Statistical exploration of dataset examining key indicators influencing housing and urban infrastructure investments in megacities

    Directory of Open Access Journals (Sweden)

    Adedeji O. Afolabi

    2018-06-01

    Full Text Available Lagos has, by UN standards, attained megacity status, with the attendant challenges of living up to that position; regrettably, it struggles with its present stock of housing and infrastructural facilities to match its new status. The dataset was gathered with a questionnaire instrument based on a survey of the perceptions of construction professionals residing within the state. The statistical exploration contains data on the state of the housing and urban infrastructure deficit, key indicators spurring investment by government to reverse the deficit, and improvement mechanisms to tackle the infrastructural dearth. Descriptive and inferential statistics were used to present the dataset. When analyzed, the dataset can be useful for policy makers, local and international governments, world funding bodies, researchers and infrastructure investors. Keywords: Construction, Housing, Megacities, Population, Urban infrastructures

  15. Genome-wide gene expression dataset used to identify potential therapeutic targets in androgenetic alopecia

    Directory of Open Access Journals (Sweden)

    R. Dey-Rao

    2017-08-01

    Full Text Available The microarray dataset attached to this report is related to the research article with the title: “A genomic approach to susceptibility and pathogenesis leads to identifying potential novel therapeutic targets in androgenetic alopecia” (Dey-Rao and Sinha, 2017 [1]. Male-pattern hair loss that is induced by androgens (testosterone in genetically predisposed individuals is known as androgenetic alopecia (AGA. The raw dataset is being made publicly available to enable critical and/or extended analyses. Our related research paper utilizes the attached raw dataset, for genome-wide gene-expression associated investigations. Combined with several in silico bioinformatics-based analyses we were able to delineate five strategic molecular elements as potential novel targets towards future AGA-therapy.

  16. New Fuzzy Support Vector Machine for the Class Imbalance Problem in Medical Datasets Classification

    Directory of Open Access Journals (Sweden)

    Xiaoqing Gu

    2014-01-01

    Full Text Available In medical dataset classification, the support vector machine (SVM) is considered one of the most successful methods. However, most real-world medical datasets contain some outliers/noise, and the data often suffer from class imbalance. In this paper, a fuzzy support vector machine for the class imbalance problem (called FSVM-CIP) is presented, which can be seen as a modified class of FSVM extended with manifold regularization and two class-specific misclassification costs. The proposed FSVM-CIP can be used to handle the class imbalance problem in the presence of outliers/noise, and enhances the locality maximum margin. Five real-world medical datasets (breast, heart, hepatitis, BUPA liver, and Pima diabetes) from the UCI medical database are employed to illustrate the method presented in this paper. Experimental results on these datasets show the superior or comparable effectiveness of FSVM-CIP.
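The two core ingredients described above, class-specific misclassification costs plus fuzzy memberships that downweight outliers, can be sketched with scikit-learn's SVC. This is a simplified stand-in for FSVM-CIP, with a hypothetical centroid-distance membership function and toy data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Imbalanced toy data: 90 majority-class vs 10 minority-class samples.
X = np.vstack([rng.normal(0.0, 1.0, (90, 2)), rng.normal(3.0, 1.0, (10, 2))])
y = np.array([0] * 90 + [1] * 10)

# Fuzzy memberships: downweight points far from their class centroid, a
# hypothetical stand-in for the paper's membership design.
centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}
dist = np.array([np.linalg.norm(p - centroids[c]) for p, c in zip(X, y)])
membership = 1.0 - dist / (dist.max() + 1e-9)

# Two class-specific misclassification costs via class_weight; the fuzzy
# memberships enter as per-sample weights.
clf = SVC(kernel="rbf", class_weight={0: 1.0, 1: 9.0})
clf.fit(X, y, sample_weight=membership)
print(clf.score(X, y))
```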

  17. Influence of Cobb Angle and ISIS2 Surface Topography Volumetric Asymmetry on Scoliosis Research Society-22 Outcome Scores in Scoliosis.

    Science.gov (United States)

    Brewer, Paul; Berryman, Fiona; Baker, De; Pynsent, Paul; Gardner, Adrian

    2013-11-01

    Retrospective sequential patient series. To establish the relationship between the magnitude of the deformity in scoliosis and patients' perception of their condition, as measured with Scoliosis Research Society-22 scores. A total of 93 untreated patients with adolescent idiopathic scoliosis were included retrospectively. The Cobb angle was measured from a plain radiograph, and volumetric asymmetry was measured by ISIS2 surface topography. The association of the Scoliosis Research Society scores for function, pain, self-image, and mental health with Cobb angle and volumetric asymmetry was investigated using the Pearson correlation coefficient. Correlation of both Cobb angle and volumetric asymmetry with function and pain was weak. Correlation with self-image was higher, although still moderate (-.37 for Cobb angle and -.44 for volumetric asymmetry); both were statistically significant (Cobb angle, p = .0002; volumetric asymmetry, p = .00001). Cobb angle contributed 13.8% to the linear relationship with self-image, whereas volumetric asymmetry contributed 19.3%. For mental health, correlation was statistically significant for both Cobb angle (p = .011) and volumetric asymmetry (p = .0005), but the correlation was low to moderate (-.26 and -.35, respectively). Cobb angle contributed 6.9% to the linear relationship with mental health, whereas volumetric asymmetry contributed 12.4%. Volumetric asymmetry correlates better with both mental health and self-image than Cobb angle does, but the correlation was only moderate. This study suggests that a patient's own perception of self-image and mental health is multifactorial and not completely explained by present objective measurements of the size of the deformity. This helps to explain the difficulties in any objective analysis of a problem with multifactorial perception issues. Further study is required to investigate other physical aspects of the deformity that may have a role in how patients view themselves.
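The percentage contributions quoted above are simply 100·r² from the Pearson correlation; a small sketch with hypothetical data:

```python
import numpy as np

def pearson_contribution(x, y):
    """Pearson r and its percentage contribution to a linear relationship
    (100 * r^2), the quantity quoted in the abstract above."""
    r = np.corrcoef(x, y)[0, 1]
    return r, 100.0 * r ** 2

# The reported r = -0.37 (Cobb angle vs self-image) corresponds to
# 100 * 0.37^2 = 13.7%, matching the quoted 13.8% up to rounding.
cobb = [10, 25, 40, 55, 70]             # hypothetical Cobb angles (degrees)
score = [4.1, 3.8, 3.1, 2.9, 2.4]       # hypothetical self-image scores
r, pct = pearson_contribution(cobb, score)
print(round(r, 2), round(pct, 1))
```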

  18. A new dataset validation system for the Planetary Science Archive

    Science.gov (United States)

    Manaud, N.; Zender, J.; Heather, D.; Martinez, S.

    2007-08-01

    The Planetary Science Archive is the official archive for the Mars Express mission. It received its first data by the end of 2004. These data are delivered by the PI teams to the PSA team as datasets formatted in conformance with the Planetary Data System (PDS). The PI teams are responsible for analyzing and calibrating the instrument data, for producing the reduced and calibrated data, and for the scientific validation of these data. ESA is responsible for the long-term archiving and distribution of the data to the scientific community and must ensure, in this regard, that all archived products meet quality standards. To do so, an archive peer review is used to control the quality of the Mars Express science data archiving process. However, a full validation of the archive's content has been missing. An independent review board recently recommended that the completeness of the archive as well as the consistency of the delivered data be validated following well-defined procedures. A new validation software tool is being developed to complete the overall data quality control system functionality. This new tool aims to improve the quality of data and services provided to the scientific community through the PSA, and shall allow anomalies in datasets to be tracked and their completeness to be controlled. It shall ensure that PSA end-users: (1) can rely on the results of their queries, (2) will get data products that are suitable for scientific analysis, and (3) can find all science data acquired during a mission. We define dataset validation as the verification and assessment process that checks the dataset content against pre-defined top-level criteria, which represent the general characteristics of good-quality datasets. The dataset content that is checked includes the data and all types of information that are essential in the process of deriving scientific results and those interfacing with the PSA database. The validation software tool is a multi-mission tool that

  19. Outlier Removal in Model-Based Missing Value Imputation for Medical Datasets

    Directory of Open Access Journals (Sweden)

    Min-Wei Huang

    2018-01-01

    Full Text Available Many real-world medical datasets contain some proportion of missing (attribute) values. In general, missing value imputation can be performed to solve this problem: estimations for the missing values are provided by a reasoning process based on the (complete) observed data. However, if the observed data contain some noisy information or outliers, the estimations of the missing values may not be reliable or may even be quite different from the real values. The aim of this paper is to examine whether a combination of instance selection from the observed data and missing value imputation offers better performance than performing missing value imputation alone. In particular, three instance selection algorithms, DROP3, GA, and IB3, and three imputation algorithms, KNNI, MLP, and SVM, are used in order to find the best combination. The experimental results show that performing instance selection can have a positive impact on missing value imputation for medical datasets with numerical data types, and that specific combinations of instance selection and imputation methods can improve the imputation results for medical datasets with mixed data types. However, instance selection does not have a definitively positive impact on the imputation results for categorical medical datasets.
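A minimal sketch of the combination studied above, with a simple z-score filter standing in for DROP3/GA/IB3 and scikit-learn's KNNImputer standing in for KNNI (all data hypothetical):

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(1)
X = rng.normal(50.0, 5.0, size=(200, 3))
X[0] = [500.0, 500.0, 500.0]          # an outlier that would skew imputation
X[1, 2] = np.nan                      # a missing value to fill in

# Step 1 -- instance selection (a simple z-score filter standing in for
# DROP3/GA/IB3): keep complete rows whose values all lie within 4 sd.
complete = X[~np.isnan(X).any(axis=1)]
mu, sd = complete.mean(axis=0), complete.std(axis=0)
keep = ~np.isnan(X).any(axis=1) & (np.abs((X - mu) / sd) < 4).all(axis=1)
train = X[keep]

# Step 2 -- imputation: fit the KNN imputer on the selected instances only,
# so the outlier cannot distort the neighbourhood structure.
imputer = KNNImputer(n_neighbors=5).fit(train)
filled = imputer.transform(X[1:2])[0, 2]
print(filled)  # close to the column mean (~50), not dragged toward 500
```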

  20. Discovery of Teleconnections Using Data Mining Technologies in Global Climate Datasets

    Directory of Open Access Journals (Sweden)

    Fan Lin

    2007-10-01

    Full Text Available In this paper, we apply data mining technologies to a 100-year global land precipitation dataset and a 100-year Sea Surface Temperature (SST) dataset. Some interesting teleconnections are discovered, including well-known patterns and, to the best of our knowledge, previously unknown patterns, such as teleconnections between abnormally low temperature events of the North Atlantic and floods in Northern Bolivia, and between abnormally low temperatures of the Venezuelan Coast and floods in Northern Algeria and Tunisia. In particular, we use a high-dimensional clustering method and a method that mines episode association rules in event sequences. The former is used to cluster the original time series datasets into higher spatial granularity, and the latter is used to discover teleconnection patterns among the event sequences generated by the clustering method. In order to verify our method, we also run experiments on the SOI index and a 100-year global land precipitation dataset and find many well-known teleconnections, such as those between SOI-low events and drought events in Eastern Australia, South Africa, and North Brazil, and between SOI-low events and flood events in the middle-lower reaches of the Yangtze River. We also perform explorative experiments to help domain scientists discover new knowledge.
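The episode-association step can be sketched as a confidence measure over boolean event sequences; this simplified rule miner and the toy SOI/drought sequences are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def episode_confidence(antecedent, consequent, window):
    """Confidence of the rule 'antecedent event -> consequent event within
    `window` steps' over boolean event sequences -- a simplified stand-in
    for the episode-association-rule mining described above."""
    hits = total = 0
    for t in np.flatnonzero(antecedent):
        total += 1
        if consequent[t + 1 : t + 1 + window].any():
            hits += 1
    return hits / total if total else 0.0

# Toy sequences: SOI-low events precede drought events by one or two steps.
soi_low = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0], dtype=bool)
drought = np.array([0, 1, 0, 0, 1, 0, 0, 0, 0, 1], dtype=bool)
print(episode_confidence(soi_low, drought, window=2))  # 1.0
```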

  1. Volumetric error modeling, identification and compensation based on screw theory for a large multi-axis propeller-measuring machine

    Science.gov (United States)

    Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu

    2018-05-01

    Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.
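The first-order idea of chaining per-axis geometric errors into a volumetric error at the tool point can be sketched with small-angle homogeneous transforms; the error values below are hypothetical, and this is a simplification of the full screw-theory model:

```python
import numpy as np

def error_htm(rx, ry, rz, tx, ty, tz):
    """Small-angle homogeneous transform built from one axis's six geometric
    errors (three rotations, three translations) -- a first-order sketch of
    the screw-theory error model, not the paper's full formulation."""
    T = np.eye(4)
    T[:3, :3] = [[1.0, -rz, ry],
                 [rz, 1.0, -rx],
                 [-ry, rx, 1.0]]
    T[:3, 3] = [tx, ty, tz]
    return T

# Hypothetical per-axis errors (radians, metres); chain them and compare the
# perturbed tool position against the nominal one.
tool_nominal = np.array([0.0, 0.0, 0.5, 1.0])
E_x = error_htm(1e-5, 2e-5, -1e-5, 5e-6, 0.0, 2e-6)
E_y = error_htm(0.0, -1e-5, 1e-5, 0.0, 3e-6, 0.0)
volumetric_error = (E_x @ E_y @ tool_nominal)[:3] - tool_nominal[:3]
print(volumetric_error)
```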

  2. Dataset of mitochondrial genome variants in oncocytic tumors

    Directory of Open Access Journals (Sweden)

    Lihua Lyu

    2018-04-01

    Full Text Available This dataset presents the mitochondrial genome variants associated with oncocytic tumors. These data were obtained by Sanger sequencing of the whole mitochondrial genomes of oncocytic tumors and the adjacent normal tissues from 32 patients. The mtDNA variants were identified after comparison with the revised Cambridge reference sequence, excluding those defining the haplogroups of our patients. Pathogenicity prediction for the novel missense variants found in this study was performed with the Mitimpact 2 program.

  3. Continuous assessment of carotid intima-media thickness applied to estimate a volumetric compliance using B-mode ultrasound sequences

    International Nuclear Information System (INIS)

    Pascaner, A F; Craiem, D; Casciaro, M E; Graf, S; Danielo, R; Guevara, E

    2015-01-01

    Recent reports have shown that the carotid artery wall had significant movements not only in the radial but also in the longitudinal direction during the cardiac cycle. Accordingly, the idea that longitudinal elongations could be systematically neglected for compliance estimations became controversial. Assuming a dynamic change in vessel length, the standard measurement of cross-sectional compliance can be revised. In this work, we propose to estimate a volumetric compliance based on continuous measurements of carotid diameter and intima-media thickness (IMT) from B-mode ultrasound sequences. Assuming the principle of conservation of the mass of wall volume (compressibility equals zero), a temporal longitudinal elongation can be calculated to estimate a volumetric compliance. Moreover, elongations can also be estimated allowing small compressibility factors to model some wall leakage. The cross-sectional and the volumetric compliance were estimated in 45 healthy volunteers and 19 asymptomatic patients. The standard measurement underestimated the volumetric compliance by 25% for young volunteers (p < 0.01) and 17% for patients (p < 0.05). When compressibility factors different from zero were allowed, volunteers and patients reached values of 9% and 4%, respectively. We conclude that a simultaneous assessment of carotid diameter and IMT can be employed to estimate a volumetric compliance incorporating a longitudinal elongation. The cross-sectional compliance, that neglects the change in vessel length, underestimates the volumetric compliance. (paper)
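Assuming zero wall compressibility, the longitudinal elongation follows from keeping the wall's cross-sectional area times segment length constant; a sketch with hypothetical diameter and IMT values:

```python
import math

def relative_length(d_lumen, imt, d0, imt0):
    """L(t)/L0 from zero wall compressibility: wall cross-sectional area
    times segment length stays constant (a sketch of the paper's premise)."""
    area = lambda d, h: math.pi / 4.0 * ((d + 2 * h) ** 2 - d ** 2)
    return area(d0, imt0) / area(d_lumen, imt)

# Hypothetical systolic frame: lumen dilates 6.0 -> 6.4 mm while the IMT
# thins 0.60 -> 0.55 mm; the wall ring area shrinks, so the segment must
# lengthen for wall volume to be conserved.
ratio = relative_length(6.4, 0.55, 6.0, 0.60)
print(ratio)  # > 1, i.e. longitudinal elongation
```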

  4. A comparison of semi-automated volumetric vs linear measurement of small vestibular schwannomas.

    Science.gov (United States)

    MacKeith, Samuel; Das, Tilak; Graves, Martin; Patterson, Andrew; Donnelly, Neil; Mannion, Richard; Axon, Patrick; Tysome, James

    2018-04-01

    Accurate and precise measurement of vestibular schwannoma (VS) size is key to clinical management decisions. Linear measurements are used in routine clinical practice but are prone to measurement error. This study aims to compare a semi-automated volume segmentation tool against the standard linear method for measuring small VS. The study also examines whether oblique tumour orientation can contribute to linear measurement error. Experimental comparison of observer agreement using two measurement techniques, in a tertiary skull base unit. Twenty-four patients with small unilateral sporadic VS were included; the maximum linear dimension was also remeasured following reformatting to correct for the oblique orientation of the VS. Intra-observer ICC was higher for semi-automated volumetric than for linear measurements, 0.998 (95% CI 0.994-0.999) vs 0.936 (95% CI 0.856-0.972). Inter-observer ICC was likewise higher for volumetric than for linear measurements, 0.989 (95% CI 0.975-0.995) vs 0.946 (95% CI 0.880-0.976), p = 0.0045. The intra-observer %SDD was similar for volumetric and linear measurements, 9.9% vs 11.8%. However, the inter-observer %SDD was greater for volumetric than for linear measurements, 20.1% vs 10.6%. Following oblique reformatting to correct tumour angulation, the mean increase in size was 1.14 mm (p = 0.04). Semi-automated volumetric measurements are more repeatable than linear measurements when measuring small VS and should be considered for use in clinical practice. Oblique orientation of VS may contribute to linear measurement error.
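One common way to obtain a %SDD figure like those above from repeated measurements is via the smallest detectable difference; this sketch uses that formulation with hypothetical volumes, and may differ in detail from the study's method:

```python
import numpy as np

def percent_sdd(m1, m2):
    """Smallest detectable difference as a percentage of the mean, from two
    repeated measurement series: SDD = 1.96 * sqrt(2) * SEM with
    SEM = sd(differences)/sqrt(2), i.e. SDD = 1.96 * sd(differences).
    One common formulation; the study's exact computation may differ."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    sdd = 1.96 * (m1 - m2).std(ddof=1)
    return 100.0 * sdd / np.mean((m1 + m2) / 2.0)

# Hypothetical repeated tumour-volume measurements (mm^3) by one observer.
vol1 = [102, 250, 480, 510, 330]
vol2 = [100, 255, 470, 520, 325]
print(round(percent_sdd(vol1, vol2), 1))
```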

  5. Error characterisation of global active and passive microwave soil moisture datasets

    Directory of Open Access Journals (Sweden)

    W. A. Dorigo

    2010-12-01

    Full Text Available Understanding the error structures of remotely sensed soil moisture observations is essential for correctly interpreting observed variations and trends in the data or assimilating them in hydrological or numerical weather prediction models. Nevertheless, a spatially coherent assessment of the quality of the various globally available datasets is often hampered by the limited availability over space and time of reliable in-situ measurements. As an alternative, this study explores the triple collocation error estimation technique for assessing the relative quality of several globally available soil moisture products from active (ASCAT) and passive (AMSR-E and SSM/I) microwave sensors. The triple collocation is a powerful statistical tool to estimate the root mean square error while simultaneously solving for systematic differences in the climatologies of a set of three linearly related data sources with independent error structures. Prerequisite for this technique is the availability of a sufficiently large number of timely corresponding observations. In addition to the active and passive satellite-based datasets, we used the ERA-Interim and GLDAS-NOAH reanalysis soil moisture datasets as a third, independent reference. The prime objective is to reveal trends in uncertainty related to different observation principles (passive versus active), the use of different frequencies (C-, X-, and Ku-band) for passive microwave observations, and the choice of the independent reference dataset (ERA-Interim versus GLDAS-NOAH). The results suggest that the triple collocation method provides realistic error estimates. Observed spatial trends agree well with the existing theory and studies on the performance of different observation principles and frequencies with respect to land cover and vegetation density. In addition, if all theoretical prerequisites are fulfilled (e.g. a sufficiently large number of common observations is available and errors of the different
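The covariance-notation form of triple collocation can be sketched directly; the synthetic soil moisture series below are assumptions used only to show that the estimator recovers the injected error levels:

```python
import numpy as np

def triple_collocation_rmse(x, y, z):
    """Covariance-notation triple collocation: RMSEs of three linearly
    related datasets with mutually independent errors (textbook form)."""
    C = np.cov(np.vstack([x, y, z]))
    ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return tuple(np.sqrt(max(v, 0.0)) for v in (ex, ey, ez))

# Synthetic soil-moisture series (assumed, for illustration): three products
# observing the same truth with independent noise of known size.
rng = np.random.default_rng(2)
truth = rng.normal(0.25, 0.05, 5000)
x = truth + rng.normal(0.0, 0.02, truth.size)   # "active" product
y = truth + rng.normal(0.0, 0.04, truth.size)   # "passive" product
z = truth + rng.normal(0.0, 0.03, truth.size)   # reanalysis reference
print(triple_collocation_rmse(x, y, z))  # ~ (0.02, 0.04, 0.03)
```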

  6. Volumetric visualization of anatomy for treatment planning

    International Nuclear Information System (INIS)

    Pelizzari, Charles A.; Grzeszczuk, Robert; Chen, George T. Y.; Heimann, Ruth; Haraf, Daniel J.; Vijayakumar, Srinivasan; Ryan, Martin J.

    1996-01-01

    Purpose: Delineation of volumes of interest for three-dimensional (3D) treatment planning is usually performed by contouring on two-dimensional sections. We explore the usage of segmentation-free volumetric rendering of the three-dimensional image data set for tumor and normal tissue visualization. Methods and Materials: Standard treatment planning computed tomography (CT) studies, with typically 5 to 10 mm slice thickness, and spiral CT studies with 3 mm slice thickness were used. The data were visualized using locally developed volume-rendering software. Similar to the method of Drebin et al., CT voxels are automatically assigned an opacity and other visual properties (e.g., color) based on a probabilistic classification into tissue types. Using volumetric compositing, a projection into the opacity-weighted volume is produced. Depth cueing, perspective, and gradient-based shading are incorporated to achieve realistic images. Unlike surface-rendered displays, no hand segmentation is required to produce detailed renditions of skin, muscle, or bony anatomy. By suitable manipulation of the opacity map, tissue classes can be made transparent, revealing muscle, vessels, or bone, for example. Manually supervised tissue masking allows irrelevant tissues overlying tumors or other structures of interest to be removed. Results: Very high-quality renditions are produced in from 5 s to 1 min on midrange computer workstations. In the pelvis, an anteroposterior (AP) volume rendered view from a typical planning CT scan clearly shows the skin and bony anatomy. A muscle opacity map permits clear visualization of the superficial thigh muscles, femoral veins, and arteries. Lymph nodes are seen in the femoral triangle. When overlying muscle and bone are cut away, the prostate, seminal vesicles, bladder, and rectum are seen in 3D perspective. Similar results are obtained for thorax and for head and neck scans. Conclusion: Volumetric visualization of anatomy is useful in treatment planning.
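The volumetric compositing step can be sketched as front-to-back blending along a ray, with opacities taken from a tissue-class opacity map (the sample values are hypothetical):

```python
def composite_ray(opacities, colors):
    """Front-to-back compositing along one ray: each sample contributes its
    color weighted by its opacity and by the transmittance accumulated so
    far -- the core of opacity-weighted volumetric rendering."""
    color, transmittance = 0.0, 1.0
    for a, c in zip(opacities, colors):
        color += transmittance * a * c
        transmittance *= 1.0 - a
    return color, transmittance

# Hypothetical tissue-class opacity map along a ray: faint skin, semi-opaque
# muscle, nearly opaque bone.
color, t = composite_ray([0.1, 0.4, 0.9], [0.8, 0.5, 1.0])
print(color, t)
```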

  7. H-Metric: Characterizing Image Datasets via Homogenization Based on KNN-Queries

    Directory of Open Access Journals (Sweden)

    Welington M da Silva

    2012-01-01

    Full Text Available Precision-Recall is one of the main metrics for evaluating content-based image retrieval techniques. However, it does not provide a full picture of the properties of an image dataset immersed in a metric space. In this work, we describe an alternative metric named H-Metric, which is determined along a sequence of controlled modifications of the image dataset. The process is named homogenization and works by altering the homogeneity characteristics of the classes of images. The result is a process that measures how hard it is to deal with a set of images with respect to content-based retrieval, offering support in the task of analyzing configurations of distance functions and feature extractors.
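Precision and recall, the baseline metric the authors start from, for a single kNN-query answer set (toy identifiers):

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one content-based retrieval answer set."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

# Toy identifiers: 2 of the 4 retrieved images are relevant, out of 3 relevant.
p, r = precision_recall(["a", "b", "c", "d"], ["a", "c", "e"])
print(p, r)  # 0.5 precision, 0.666... recall
```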

  8. Ultrahigh volumetric capacitance and cyclic stability of fluorine and nitrogen co-doped carbon microspheres

    Science.gov (United States)

    Zhou, Junshuang; Lian, Jie; Hou, Li; Zhang, Junchuan; Gou, Huiyang; Xia, Meirong; Zhao, Yufeng; Strobel, Timothy A.; Tao, Lu; Gao, Faming

    2015-09-01

    Highly porous nanostructures with large surface areas are typically employed for electrical double-layer capacitors to improve gravimetric energy storage capacity; however, high surface area carbon-based electrodes result in poor volumetric capacitance because of the low packing density of porous materials. Here, we demonstrate ultrahigh volumetric capacitance of 521 F cm-3 in aqueous electrolytes for non-porous carbon microsphere electrodes co-doped with fluorine and nitrogen synthesized by low-temperature solvothermal route, rivaling expensive RuO2 or MnO2 pseudo-capacitors. The new electrodes also exhibit excellent cyclic stability without capacitance loss after 10,000 cycles in both acidic and basic electrolytes at a high charge current of 5 A g-1. This work provides a new approach for designing high-performance electrodes with exceptional volumetric capacitance with high mass loadings and charge rates for long-lived electrochemical energy storage systems.

  9. Study and modeling of changes in volumetric efficiency of helix conveyors at different rotational speeds and inclination angles by ANFIS and statistical methods

    Directory of Open Access Journals (Sweden)

    A Zareei

    2017-05-01

    Full Text Available Introduction Spiral conveyors effectively carry solid masses as free or partly free flows of material. They provide good throughput and are a practical solution to transport problems, owing to their simple structure, high efficiency and low maintenance costs. This study investigates the performance characteristics of such conveyors as functions of auger diameter, rotational speed and handling inclination angle, with performance characterized by volumetric efficiency. In other words, the purpose of this study was to obtain a suitable model for the changes in volumetric efficiency of a steep auger used to transfer agricultural products. Three auger diameters, five rotational speeds and three slope angles were used to investigate the effects of these parameters on the volumetric efficiency of the auger. The method used is novel in this area, and the results show that the performance of ANFIS models is much better than that of common statistical models. Materials and Methods The experiments were conducted in the Department of Mechanical Engineering of Agricultural Machinery at Urmia University. The SAYOS cultivar of wheat was used; this cultivar has hard seeds, and its moisture content was 12% (wet basis). Before testing, all foreign material, such as stones, dust, plant residues and green seeds, was separated from the wheat. The bulk density of the wheat was 790 kg m-3. The auger shaft of the spiral conveyor received its rotational force through a belt and electric motor, and its rotation transfers the product to the output. In this study, three conveyors with diameters of 13, 17.5, and 22.5 cm, five rotational speeds of 100, 200, 300, 400, and 500 rpm and three handling angles of 10, 20, and 30º were tested. The adaptive neuro-fuzzy inference system (ANFIS) is a combination of fuzzy systems and artificial neural networks, and so has the benefits of both.
    This system is useful for solving complex non
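Volumetric efficiency as used above can be sketched as measured volume throughput over the theoretical swept volume of the screw; the formula and numbers below are a textbook-style approximation with hypothetical values, not the paper's measurements:

```python
import math

def volumetric_efficiency(mass_flow, bulk_density, D, d, pitch, rpm):
    """Volumetric efficiency of a screw (helix) conveyor: measured volume
    throughput divided by the theoretical swept volume per second. Textbook
    approximation with SI units; not the paper's measured values."""
    theoretical = math.pi / 4.0 * (D**2 - d**2) * pitch * rpm / 60.0
    actual = mass_flow / bulk_density
    return actual / theoretical

# Hypothetical wheat run: 0.5 kg/s at 790 kg/m^3 through a 17.5 cm auger
# (3.5 cm shaft, 10 cm pitch) turning at 300 rpm.
eff = volumetric_efficiency(0.5, 790.0, 0.175, 0.035, 0.10, 300)
print(round(eff, 2))
```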

  10. Fluxnet Synthesis Dataset Collaboration Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Deborah A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Humphrey, Marty [Univ. of Virginia, Charlottesville, VA (United States); van Ingen, Catharine [Microsoft. San Francisco, CA (United States); Beekwilder, Norm [Univ. of Virginia, Charlottesville, VA (United States); Goode, Monte [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jackson, Keith [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rodriguez, Matt [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Weber, Robin [Univ. of California, Berkeley, CA (United States)

    2008-02-06

    The Fluxnet synthesis dataset originally compiled for the La Thuile workshop contained approximately 600 site years. Since the workshop, several additional site years have been added and the dataset now contains over 920 site years from over 240 sites. A data refresh update is expected to increase those numbers in the next few months. The ancillary data describing the sites continue to evolve as well. There are on the order of 120 site contacts, and 60 proposals involving around 120 researchers have been approved to use the data. The size and complexity of the dataset and collaboration have led to a new approach to providing access to the data and collaboration support. The support team attended the workshop and worked closely with the attendees and the Fluxnet project office to define the requirements for the support infrastructure. As a result of this effort, a new website (http://www.fluxdata.org) has been created to provide access to the Fluxnet synthesis dataset. This new web site is based on a scientific data server which enables browsing of the data on-line, data download, and version tracking. We leverage database and data analysis tools such as OLAP data cubes and web reports to enable browser and Excel pivot table access to the data.

  11. ProDaMa: an open source Python library to generate protein structure datasets

    Directory of Open Access Journals (Sweden)

    Manconi Andrea

    2009-10-01

    Full Text Available Abstract Background The huge difference between the number of known sequences and known tertiary structures has justified the use of automated methods for protein analysis. Although a general methodology to solve these problems has not yet been devised, researchers are engaged in developing more accurate techniques and algorithms whose training plays a relevant role in determining their performance. From this perspective, particular importance is given to the training data used in experiments, and researchers are often engaged in the generation of specialized datasets that meet their requirements. Findings To facilitate the task of generating specialized datasets we devised and implemented ProDaMa, an open source Python library that provides classes for retrieving, organizing, updating, analyzing, and filtering protein data. Conclusion ProDaMa has been used to generate specialized datasets useful for secondary structure prediction and to develop a collaborative web application aimed at generating and sharing protein structure datasets. The library, the related database, and the documentation are freely available at the URL http://iasc.diee.unica.it/prodama.

  12. Simulation of Smart Home Activity Datasets.

    Science.gov (United States)

    Synnott, Jonathan; Nugent, Chris; Jeffers, Paul

    2015-06-16

    A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches, yet such access is limited by issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation.
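
    As a toy illustration of the model-based approach the review describes, the sketch below expands a scripted activity sequence into the virtual sensor events it would trigger. The activity names, sensor labels, and the timing model are invented for illustration; they are not taken from any surveyed simulator.

```python
import random

def simulate_day(activities, sensor_map, seed=0):
    """Model-based simulation sketch: expand a scripted activity sequence
    into timestamped virtual sensor events (times in minutes).
    `sensor_map` says which sensors each activity would fire."""
    rng = random.Random(seed)
    events, t = [], 0
    for activity in activities:
        for sensor in sensor_map[activity]:
            t += rng.randint(1, 5)  # invented inter-event gap model
            events.append((t, sensor, activity))
    return events

# Hypothetical home layout and activity script.
sensor_map = {"sleep": ["bed_pressure"], "cook": ["kitchen_motion", "stove_power"]}
day = simulate_day(["sleep", "cook"], sensor_map, seed=1)
```

    A real simulator would add sensor noise, overlapping activities, and avatar movement; the point here is only the activity-to-sensor-event mapping.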

  13. Solar Integration National Dataset Toolkit | Grid Modernization | NREL

    Science.gov (United States)

    NREL is working on a Solar Integration National Dataset (SIND) Toolkit to enable researchers to perform U.S. regional solar generation integration studies. It will provide modeled, coherent subhourly solar power data

  14. Wind Integration National Dataset Toolkit | Grid Modernization | NREL

    Science.gov (United States)

    The Wind Integration National Dataset (WIND) Toolkit is an update and expansion of the Eastern Wind Integration Data Set and Western Wind Integration Data Set. It supports the next generation of wind integration studies.

  15. Flexible MXene/Graphene Films for Ultrafast Supercapacitors with Outstanding Volumetric Capacitance

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Jun [Drexel Univ., Philadelphia, PA (United States); Harbin Engineering Univ., Harbin (China); Ren, Chang E. [Drexel Univ., Philadelphia, PA (United States); Maleski, Kathleen [Drexel Univ., Philadelphia, PA (United States); Hatter, Christine B. [Drexel Univ., Philadelphia, PA (United States); Anasori, Babak [Drexel Univ., Philadelphia, PA (United States); Urbankowski, Patrick [Drexel Univ., Philadelphia, PA (United States); Sarycheva, Asya [Drexel Univ., Philadelphia, PA (United States); Gogotsi, Yury G. [Drexel Univ., Philadelphia, PA (United States)

    2017-06-30

    A strategy to prepare flexible and conductive MXene/graphene (reduced graphene oxide, rGO) supercapacitor electrodes by using electrostatic self-assembly between positively charged rGO modified with poly(diallyldimethylammonium chloride) and negatively charged titanium carbide MXene nanosheets is presented. After electrostatic assembly, rGO nanosheets are inserted in-between MXene layers. As a result, the self-restacking of MXene nanosheets is effectively prevented, leading to a considerably increased interlayer spacing. Accelerated diffusion of electrolyte ions enables more electroactive sites to become accessible. The freestanding MXene/rGO-5 wt% electrode displays a volumetric capacitance of 1040 F cm–3 at a scan rate of 2 mV s–1, an impressive rate capability with 61% capacitance retention at 1 V s–1 and long cycle life. Moreover, the fabricated binder-free symmetric supercapacitor shows an ultrahigh volumetric energy density of 32.6 Wh L–1, which is among the highest values reported for carbon and MXene based materials in aqueous electrolytes. Furthermore, this work provides fundamental insight into the effect of interlayer spacing on the electrochemical performance of 2D hybrid materials and sheds light on the design of next-generation flexible, portable and highly integrated supercapacitors with high volumetric and rate performances.

  16. CLARA-A1: a cloud, albedo, and radiation dataset from 28 yr of global AVHRR data

    Directory of Open Access Journals (Sweden)

    K.-G. Karlsson

    2013-05-01

    Full Text Available A new satellite-derived climate dataset – denoted CLARA-A1 ("The CM SAF cLoud, Albedo and RAdiation dataset from AVHRR data") – is described. The dataset covers the 28 yr period from 1982 until 2009 and consists of cloud, surface albedo, and radiation budget products derived from the AVHRR (Advanced Very High Resolution Radiometer) sensor carried by polar-orbiting operational meteorological satellites. Its content, anticipated accuracies, limitations, and potential applications are described. The dataset is produced by the EUMETSAT Climate Monitoring Satellite Application Facility (CM SAF) project. The dataset has its strengths in its long duration, its foundation upon a homogenized AVHRR radiance data record, and in some unique features, e.g. the availability of 28 yr of summer surface albedo and cloudiness parameters over the polar regions. Quality characteristics are also well investigated, and particularly useful results can be found over the tropics, mid to high latitudes and over nearly all oceanic areas. Being the first CM SAF dataset of its kind, an intensive evaluation of the quality of the datasets was performed, and major findings with regard to merits and shortcomings of the datasets are reported. However, the CM SAF's long-term commitment to perform two additional reprocessing events within the time frame 2013–2018 will allow proper handling of limitations as well as upgrading the dataset with new features (e.g. uncertainty estimates) and extension of the temporal coverage.

  17. Valuation of large variable annuity portfolios: Monte Carlo simulation and synthetic datasets

    Directory of Open Access Journals (Sweden)

    Gan Guojun

    2017-12-01

    Full Text Available Metamodeling techniques have recently been proposed to address the computational issues related to the valuation of large portfolios of variable annuity contracts. However, it is extremely difficult, if not impossible, for researchers to obtain real datasets from insurance companies in order to test their metamodeling techniques on such real datasets and publish the results in academic journals. To facilitate the development and dissemination of research related to the efficient valuation of large variable annuity portfolios, this paper creates a large synthetic portfolio of variable annuity contracts based on the properties of real portfolios of variable annuities and implements a simple Monte Carlo simulation engine for valuing the synthetic portfolio. In addition, this paper presents fair market values and Greeks for the synthetic portfolio of variable annuity contracts, which are important quantities for managing the financial risks associated with variable annuities. The resulting datasets can be used by researchers to test and compare the performance of various metamodeling techniques.

  18. Theoretical and Experimental Estimations of Volumetric Inductive Phase Shift in Breast Cancer Tissue

    Science.gov (United States)

    González, C. A.; Lozano, L. M.; Uscanga, M. C.; Silva, J. G.; Polo, S. M.

    2013-04-01

    Impedance measurements based on magnetic induction for breast cancer detection have been proposed in some studies. This study evaluates, theoretically and experimentally, the use of a non-invasive technique based on magnetic induction for detection of patho-physiological conditions in breast cancer tissue associated with its volumetric electrical conductivity changes through inductive phase shift measurements. An induction coils-breast 3D pixel model was designed and tested. The model involves two circular coils coaxially centered and a human breast volume centrally placed with respect to the coils. A time-harmonic numerical simulation study addressed the effects of frequency-dependent electrical properties of tumoral tissue on the volumetric inductive phase shift of the breast model measured with the circular coils as inductor and sensor elements. Experimentally, five female volunteer patients with infiltrating ductal carcinoma previously diagnosed by the radiology and oncology departments of the Specialty Clinic for Women of the Mexican Army were measured with an experimental inductive spectrometer and an ergonomic inductor-sensor coil designed to estimate the volumetric inductive phase shift in human breast tissue. Theoretical and experimental inductive phase shift estimations were developed at four frequencies: 0.01, 0.1, 1 and 10 MHz. The theoretical estimations were qualitatively in agreement with the experimental findings. Important increments in volumetric inductive phase shift measurements were evident at 0.01 MHz in theoretical and experimental observations. The results suggest that the tested technique has the potential to detect pathological conditions in breast tissue associated with cancer by non-invasive monitoring. Further complementary studies are warranted to confirm the observations.

  19. Phylogenetic factorization of compositional data yields lineage-level associations in microbiome datasets

    Directory of Open Access Journals (Sweden)

    Alex D. Washburne

    2017-02-01

    Full Text Available Marker gene sequencing of microbial communities has generated big datasets of microbial relative abundances varying across environmental conditions, sample sites and treatments. These data often come with putative phylogenies, providing unique opportunities to investigate how shared evolutionary history affects microbial abundance patterns. Here, we present a method to identify the phylogenetic factors driving patterns in microbial community composition. We use the method, “phylofactorization,” to re-analyze datasets from the human body and soil microbial communities, demonstrating how phylofactorization is a dimensionality-reducing tool, an ordination-visualization tool, and an inferential tool for identifying edges in the phylogeny along which putative functional ecological traits may have arisen.

  20. Anonymising the Sparse Dataset: A New Privacy Preservation Approach while Predicting Diseases

    Directory of Open Access Journals (Sweden)

    V. Shyamala Susan

    2016-09-01

    Full Text Available Data mining techniques analyze the medical dataset with the intention of enhancing the patient's health and privacy. Most of the existing techniques are properly suited for low-dimensional medical datasets. The proposed methodology designs a model for the representation of sparse high-dimensional medical datasets with the aim of protecting the patient's privacy from an adversary and additionally predicting the disease's threat degree. In a sparse dataset, many non-zero values are randomly spread over the entire data space. Hence, the challenge is to cluster the correlated patients' records in order to predict the risk degree of the disease before it occurs in patients, while preserving privacy. The first phase converts the sparse dataset into a band matrix through a Genetic algorithm along with Cuckoo Search (GCS). This groups the correlated patients' records together and arranges them close to the diagonal. The next phase dissociates the patient's disease, which is a sensitive attribute (SA), from the parameters that normally determine the disease, the Quasi Identifiers (QI). Finally, a density-based clustering technique is used over the underlying data to create anonymized groups that maintain privacy and predict the risk level of the disease. Empirical assessments on actual health care data corresponding to the V.A. Medical Centre heart disease dataset reveal the efficiency of this model pertaining to information loss, utility and privacy.

  1. Rapidly-steered single-element ultrasound for real-time volumetric imaging and guidance

    Science.gov (United States)

    Stauber, Mark; Western, Craig; Solek, Roman; Salisbury, Kenneth; Hristov, Dmitre; Schlosser, Jeffrey

    2016-03-01

    Volumetric ultrasound (US) imaging has the potential to provide real-time anatomical imaging with high soft-tissue contrast in a variety of diagnostic and therapeutic guidance applications. However, existing volumetric US machines utilize "wobbling" linear phased array or matrix phased array transducers which are costly to manufacture and necessitate bulky external processing units. To drastically reduce cost, improve portability, and reduce footprint, we propose a rapidly-steered single-element volumetric US imaging system. In this paper we explore the feasibility of this system with a proof-of-concept single-element volumetric US imaging device. The device uses a multi-directional raster-scan technique to generate a series of two-dimensional (2D) slices that were reconstructed into three-dimensional (3D) volumes. At 15 cm depth, 90° lateral field of view (FOV), and 20° elevation FOV, the device produced 20-slice volumes at a rate of 0.8 Hz. Imaging performance was evaluated using an US phantom. Spatial resolution was 2.0 mm, 4.7 mm, and 5.0 mm in the axial, lateral, and elevational directions at 7.5 cm. Relative motion of phantom targets was automatically tracked within US volumes with a mean error of -0.3+/-0.3 mm, -0.3+/-0.3 mm, and -0.1+/-0.5 mm in the axial, lateral, and elevational directions, respectively. The device exhibited a mean spatial distortion error of 0.3+/-0.9 mm, 0.4+/-0.7 mm, and -0.3+/-1.9 mm in the axial, lateral, and elevational directions. With a production cost near $1000, the performance characteristics of the proposed system make it an ideal candidate for diagnostic and image-guided therapy applications where form factor and low cost are paramount.

  2. Semi-automated volumetric analysis of lymph node metastases in patients with malignant melanoma stage III/IV-A feasibility study

    International Nuclear Information System (INIS)

    Fabel, M.; Tengg-Kobligk, H. von; Giesel, F.L.; Delorme, S.; Kauczor, H.-U.; Bornemann, L.; Dicken, V.; Kopp-Schneider, A.; Moser, C.

    2008-01-01

    Therapy monitoring in oncological patient care requires accurate and reliable imaging and post-processing methods. RECIST criteria are the current standard, with inherent disadvantages. The aim of this study was to investigate the feasibility of semi-automated volumetric analysis of lymph node metastases in patients with malignant melanoma compared to manual volumetric analysis and RECIST. Multislice CT was performed in 47 patients, covering the chest, abdomen and pelvis. In total, 227 suspicious, enlarged lymph nodes were evaluated retrospectively by two radiologists regarding diameters (RECIST), manually measured volume by placement of ROIs and semi-automated volumetric analysis. Volume (ml), quality of segmentation (++/-) and time effort (s) were evaluated in the study. The semi-automated volumetric analysis software tool was rated acceptable to excellent in 81% of all cases (reader 1) and 79% (reader 2). Median time for the entire segmentation process and necessary corrections was shorter with the semi-automated software than by manual segmentation. Bland-Altman plots showed a significantly lower interobserver variability for semi-automated volumetric than for RECIST measurements. The study demonstrated feasibility of volumetric analysis of lymph node metastases. The software allows a fast and robust segmentation in up to 80% of all cases. Ease of use and time needed are acceptable for application in the clinical routine. Variability and interuser bias were reduced to about one third of the values found for RECIST measurements. (orig.)
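
    For readers unfamiliar with the Bland-Altman analysis used above, a minimal sketch of the quantities it computes for paired measurements — the bias (mean difference) and the 95% limits of agreement. The reader volumes below are invented, not the study's data.

```python
from statistics import mean, pstdev

def bland_altman(a, b):
    """Bland-Altman agreement statistics for paired measurements:
    returns the bias (mean difference a - b) and the 95% limits of
    agreement (bias +/- 1.96 standard deviations of the differences)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = pstdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired lymph node volumes (ml) from two readers.
reader1 = [12.1, 8.4, 30.2, 5.5]
reader2 = [11.8, 8.9, 29.5, 5.9]
bias, limits = bland_altman(reader1, reader2)
```

    Narrower limits of agreement between the two readers correspond to the lower interobserver variability the study reports for the semi-automated measurements.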

  3. Automatic Estimation of Volumetric Breast Density Using Artificial Neural Network-Based Calibration of Full-Field Digital Mammography: Feasibility on Japanese Women With and Without Breast Cancer.

    Science.gov (United States)

    Wang, Jeff; Kato, Fumi; Yamashita, Hiroko; Baba, Motoi; Cui, Yi; Li, Ruijiang; Oyama-Manabe, Noriko; Shirato, Hiroki

    2017-04-01

    Breast cancer is the most common invasive cancer among women and its incidence is increasing. Risk assessment is valuable, and recent methods are incorporating novel biomarkers such as mammographic density. Artificial neural networks (ANN) are adaptive algorithms capable of performing pattern-to-pattern learning and are well suited for medical applications. They are potentially useful for calibrating full-field digital mammography (FFDM) for quantitative analysis. This study uses ANN modeling to estimate volumetric breast density (VBD) from FFDM on Japanese women with and without breast cancer. ANN calibration of VBD was performed using phantom data for one FFDM system. Mammograms of 46 Japanese women diagnosed with invasive carcinoma and 53 with negative findings were analyzed using the learned ANN models. ANN-estimated VBD was validated against phantom data; compared intra-patient with qualitative composition scoring and with MRI VBD; and compared inter-patient with classical risk factors of breast cancer as well as cancer status. Phantom validations reached an R² of 0.993. Intra-patient validations ranged from an R² of 0.789 with VBD to 0.908 with breast volume. ANN VBD agreed well with BI-RADS scoring and MRI VBD, with R² ranging from 0.665 with VBD to 0.852 with breast volume. VBD was significantly higher in women with cancer. Previously reported associations with age, BMI, menopause, and cancer status were also confirmed. ANN modeling appears to produce reasonable measures of mammographic density, validated with phantoms, with existing measures of breast density, and with classical biomarkers of breast cancer. FFDM VBD is significantly higher in Japanese women with cancer.

  4. VOLUMETRIC LEAK DETECTION IN LARGE UNDERGROUND STORAGE TANKS - VOLUME I

    Science.gov (United States)

    A set of experiments was conducted to determine whether volumetric leak detection systems presently used to test underground storage tanks (USTs) up to 38,000 L (10,000 gal) in capacity could meet EPA's regulatory standards for tank tightness and automatic tank gauging systems whe...

  5. Power analysis dataset for QCA based multiplexer circuits

    Directory of Open Access Journals (Sweden)

    Md. Abdullah-Al-Shafi

    2017-04-01

    Full Text Available Power consumption in irreversible QCA logic circuits is a vital and major issue; however, in practical cases this focus is mostly omitted. The complete power dissipation datasets of different QCA multiplexers have been worked out in this paper. At −271.15 °C temperature, the dissipation is evaluated under three separate tunneling energy levels. All the circuits are designed with QCADesigner, a broadly used simulation engine, and the QCAPro tool has been applied for estimating the power dissipation.

  6. A New Outlier Detection Method for Multidimensional Datasets

    KAUST Repository

    Abdel Messih, Mario A.

    2012-07-01

    This study develops a novel hybrid method for outlier detection (HMOD) that combines the ideas of distance-based and density-based methods. The proposed method has two main advantages over most other outlier detection methods. The first is that it works well on both dense and sparse datasets. The second is that, unlike most other outlier detection methods that require careful parameter setting and prior knowledge of the data, HMOD is not very sensitive to small changes in parameter values within certain parameter ranges; the only required parameter is the number of nearest neighbors. In addition, we made a fully parallelized implementation of HMOD, which makes it very efficient in applications. Moreover, we proposed a new way of using outlier detection for redundancy reduction in datasets, in which users can specify a confidence level that evaluates how accurately the less redundant dataset represents the original dataset. HMOD is evaluated on synthetic datasets (dense and mixed "dense and sparse") and on a bioinformatics problem: redundancy reduction of a dataset of position weight matrices (PWMs) of transcription factor binding sites. In the process of assessing the performance of our redundancy reduction method, we also developed a simple tool for evaluating the confidence level with which the reduced dataset represents the original. The evaluation of the results shows that our method can be used in a wide range of problems.
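
    The abstract does not give HMOD's exact scoring formula, but the flavor of a hybrid distance/density outlier score can be sketched as follows. The weighting scheme here — mean k-NN distance scaled by its ratio to the set-wide average — is an illustrative assumption, not the published method.

```python
import math

def hybrid_outlier_scores(points, k=3):
    """Hybrid outlier score sketch: a distance-based term (mean k-NN
    distance) weighted by a density-based term (that distance relative
    to the set-wide average). Larger score => more outlying."""
    mean_knn = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        mean_knn.append(sum(dists[:k]) / k)
    global_mean = sum(mean_knn) / len(mean_knn)
    return [d * (d / global_mean) for d in mean_knn]
```

    As in HMOD, the only parameter a user must choose is k, the number of nearest neighbors; points in both dense and sparse regions are scored on the same scale because the density term is relative.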

  7. Global-scale evaluation of 22 precipitation datasets using gauge observations and hydrological modeling

    Directory of Open Access Journals (Sweden)

    H. E. Beck

    2017-12-01

    Full Text Available We undertook a comprehensive evaluation of 22 gridded (quasi-)global (sub-)daily precipitation (P) datasets for the period 2000–2016. Thirteen non-gauge-corrected P datasets were evaluated using daily P gauge observations from 76 086 gauges worldwide. Another nine gauge-corrected datasets were evaluated using hydrological modeling, by calibrating the HBV conceptual model against streamflow records for each of 9053 small to medium-sized (< 50 000 km²) catchments worldwide, and comparing the resulting performance. Marked differences in spatio-temporal patterns and accuracy were found among the datasets. Among the uncorrected P datasets, the satellite- and reanalysis-based MSWEP-ng V1.2 and V2.0 datasets generally showed the best temporal correlations with the gauge observations, followed by the reanalyses (ERA-Interim, JRA-55, and NCEP-CFSR) and the satellite- and reanalysis-based CHIRP V2.0 dataset, the estimates based primarily on passive microwave remote sensing of rainfall (CMORPH V1.0, GSMaP V5/6, and TMPA 3B42RT V7) or near-surface soil moisture (SM2RAIN-ASCAT), and finally, estimates based primarily on thermal infrared imagery (GridSat V1.0, PERSIANN, and PERSIANN-CCS). Two of the three reanalyses (ERA-Interim and JRA-55) unexpectedly obtained lower trend errors than the satellite datasets. Among the corrected P datasets, the ones directly incorporating daily gauge data (CPC Unified, and MSWEP V1.2 and V2.0) generally provided the best calibration scores, although the good performance of the fully gauge-based CPC Unified is unlikely to translate to sparsely or ungauged regions. Next best results were obtained with P estimates directly incorporating temporally coarser gauge data (CHIRPS V2.0, GPCP-1DD V1.2, TMPA 3B42 V7, and WFDEI-CRU), which in turn outperformed the one indirectly incorporating gauge data through another multi-source dataset (PERSIANN-CDR V1R1). Our results highlight large differences in estimation accuracy
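
    As a minimal illustration of the gauge-based part of such an evaluation, the snippet below ranks candidate P datasets by their temporal (Pearson) correlation with a gauge series. The dataset names and values are invented; the real study aggregates this over tens of thousands of gauges.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length daily series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def rank_datasets(gauge, datasets):
    """Rank gridded P datasets by temporal correlation with one gauge."""
    scores = {name: pearson(gauge, series) for name, series in datasets.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

    Correlation rewards getting the timing of wet and dry days right, which is why it is the headline metric for the uncorrected datasets; bias-sensitive metrics would be needed to compare absolute amounts.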

  8. ORBDA: An openEHR benchmark dataset for performance assessment of electronic health record servers.

    Directory of Open Access Journals (Sweden)

    Douglas Teodoro

    Full Text Available The openEHR specifications are designed to support implementation of flexible and interoperable Electronic Health Record (EHR) systems. Despite the increasing number of solutions based on the openEHR specifications, it is difficult to find publicly available healthcare datasets in the openEHR format that can be used to test, compare and validate different data persistence mechanisms for openEHR. To foster research on openEHR servers, we present the openEHR Benchmark Dataset, ORBDA, a very large healthcare benchmark dataset encoded using the openEHR formalism. To construct ORBDA, we extracted and cleaned a de-identified dataset from the Brazilian National Healthcare System (SUS) containing hospitalisation and high-complexity procedure information and formalised it using a set of openEHR archetypes and templates. Then, we implemented a tool to enrich the raw relational data and convert it into the openEHR model using the openEHR Java reference model library. The ORBDA dataset is available in composition, versioned composition and EHR openEHR representations in XML and JSON formats. In total, the dataset contains more than 150 million composition records. We describe the dataset and provide means to access it. Additionally, we demonstrate the usage of ORBDA for evaluating the insert throughput and query latency of some NoSQL database management systems. We believe that ORBDA is a valuable asset for assessing storage models for openEHR-based information systems during the software engineering process. It may also be a suitable component in future standardised benchmarking of available openEHR storage platforms.
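
    A throughput benchmark of the kind described can be sketched generically. The `store` callable below stands in for whatever server client persists one composition record — an assumption, since the paper's benchmarking tooling is not shown here.

```python
import time

def benchmark_inserts(store, records):
    """Measure insert throughput (records/s). `store` is any callable
    that persists one record, e.g. a hypothetical openEHR client call."""
    start = time.perf_counter()
    for record in records:
        store(record)
    elapsed = time.perf_counter() - start
    return len(records) / elapsed if elapsed > 0 else float("inf")

# Example with an in-memory stand-in for a server.
sink = []
rate = benchmark_inserts(sink.append, [{"composition_id": i} for i in range(1000)])
```

    In a real run, `store` would issue network requests against the candidate database, and the same harness would time query latency separately.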

  9. Performance of single and multi-atlas based automated landmarking methods compared to expert annotations in volumetric microCT datasets of mouse mandibles.

    Science.gov (United States)

    Young, Ryan; Maga, A Murat

    2015-01-01

    Here we present an application of the advanced registration and atlas building framework DRAMMS to the automated annotation of mouse mandibles, through a series of tests using single- and multi-atlas segmentation paradigms, and compare the outcomes to the current gold standard, manual annotation. Our results showed that the multi-atlas annotation procedure yields landmark precisions within the human observer error range. The mean shape estimates from the gold standard and the multi-atlas annotation procedure were statistically indistinguishable for both Euclidean Distance Matrix Analysis (mean form matrix) and Generalized Procrustes Analysis (Goodall F-test). Further research needs to be done to validate the consistency of variance-covariance matrix estimates from both methods with larger sample sizes. The multi-atlas annotation procedure shows promise as a framework to facilitate truly high-throughput phenomic analyses by channeling investigators' efforts to annotate only a small portion of their datasets.
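
    One common multi-atlas fusion step — combining the landmark positions proposed by several registered atlases — can be sketched as a per-coordinate median, which is robust to a single badly registered atlas. This is an illustrative choice; the paper's DRAMMS-based pipeline is more involved.

```python
from statistics import median

def fuse_landmarks(atlas_predictions):
    """Multi-atlas landmark fusion sketch: each atlas proposes one
    (x, y, z) position per landmark; return the per-coordinate median
    across atlases for every landmark."""
    n_landmarks = len(atlas_predictions[0])
    fused = []
    for i in range(n_landmarks):
        proposals = [atlas[i] for atlas in atlas_predictions]
        fused.append(tuple(median(p[d] for p in proposals) for d in range(3)))
    return fused
```

    With a single atlas, any registration error passes straight through to the landmark; the median across several atlases is one reason multi-atlas annotation approaches human observer precision.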

  10. Effects of Agitation, Aeration and Temperature on Production of a Novel Glycoprotein GP-1 by Streptomyces kanasenisi ZX01 and Scale-Up Based on Volumetric Oxygen Transfer Coefficient

    Directory of Open Access Journals (Sweden)

    Yong Zhou

    2018-01-01

    Full Text Available The effects of temperature, agitation and aeration on glycoprotein GP-1 production by Streptomyces kanasenisi ZX01 in bench-scale fermentors were systematically investigated. The maximum final GP-1 production was achieved at an agitation speed of 200 rpm, aeration rate of 2.0 vvm and temperature of 30 °C. By using a dynamic gassing out method, the effects of agitation and aeration on the volumetric oxygen transfer coefficient (kLa) were also studied. The values of the volumetric oxygen transfer coefficient in the logarithmic phase increased with increasing agitation speed (from 14.53 to 32.82 h−1) and aeration rate (from 13.21 to 22.43 h−1). In addition, a successful scale-up from bench scale to pilot scale was performed based on the volumetric oxygen transfer coefficient, resulting in final GP-1 production of 3.92, 4.03, 3.82 and 4.20 mg/L in 5 L, 15 L, 70 L and 500 L fermentors, respectively. These results indicated that a constant volumetric oxygen transfer coefficient was appropriate for the scale-up of batch fermentation of glycoprotein GP-1 by Streptomyces kanasenisi ZX01, and this scale-up strategy successfully achieved a 100-fold scale-up from bench-scale to pilot-scale fermentor.
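
    The dynamic gassing-out estimate of kLa fits the linearised re-aeration model ln(C* − C) = ln(C* − C0) − kLa·t to the dissolved-oxygen recovery curve. A minimal least-squares sketch, using synthetic dissolved-oxygen data rather than the paper's measurements:

```python
import math

def kla_from_do_curve(times, do_values, do_sat):
    """Estimate kLa (same units as 1/times) as the negated least-squares
    slope of ln(C* - C) versus t, per the linearised re-aeration model."""
    ys = [math.log(do_sat - c) for c in do_values]
    n = len(times)
    mt, my = sum(times) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(times, ys))
             / sum((t - mt) ** 2 for t in times))
    return -slope

# Synthetic re-aeration curve: C(t) = C* (1 - exp(-kLa t)) with kLa = 20 h^-1.
times = [i * 0.005 for i in range(21)]               # hours
do_values = [7.0 * (1 - math.exp(-20.0 * t)) for t in times]
kla = kla_from_do_curve(times, do_values, 7.0)       # recovers ~20 h^-1
```

    Matching kLa between scales, as the study does, then amounts to choosing agitation and aeration at the larger vessel so this fitted value is preserved.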

  11. Creating a distortion characterisation dataset for visual band cameras using fiducial markers

    CSIR Research Space (South Africa)

    Jermy, R

    2015-11-01

    Full Text Available . This will allow other researchers to perform the same steps and create better algorithms to accurately locate fiducial markers and calibrate cameras. A second dataset that can be used to assess the accuracy of the stereo vision of two calibrated cameras is also...

  12. Calculation of climatic reference values and its use for automatic outlier detection in meteorological datasets

    Directory of Open Access Journals (Sweden)

    B. Téllez

    2008-04-01

    Full Text Available The climatic reference values for monthly and annual average air temperature and total precipitation in Catalonia – northeast of Spain – are calculated using a combination of statistical methods and geostatistical techniques of interpolation. In order to estimate the uncertainty of the method, the initial dataset is split into two parts that are, respectively, used for estimation and validation. The resulting maps are then used in the automatic outlier detection in meteorological datasets.
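
    A simplified version of such reference-based outlier screening can be sketched with a robust (MAD-based) threshold — a stand-in for the geostatistical interpolation machinery actually used to build the reference values:

```python
from statistics import median

def reference_outliers(observed, reference, z=3.0):
    """Flag station indices whose residual against the climatic reference
    exceeds z robust standard deviations (1.4826 * MAD of residuals)."""
    residuals = [o - r for o, r in zip(observed, reference)]
    centre = median(residuals)
    mad = median(abs(r - centre) for r in residuals)
    sigma = 1.4826 * mad
    return [i for i, r in enumerate(residuals) if abs(r - centre) > z * sigma]
```

    The robust estimate of spread matters here: a classical standard deviation would itself be inflated by the very outliers the check is meant to catch.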

  13. NP-PAH Interaction Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dataset presents concentrations of organic pollutants, such as polyaromatic hydrocarbon compounds, in water samples. Water samples of known volume and concentration...

  14. A dataset on tail risk of commodities markets.

    Science.gov (United States)

    Powell, Robert J; Vo, Duc H; Pham, Thach N; Singh, Abhay K

    2017-12-01

    This article contains the datasets related to the research article "The long and short of commodity tails and their relationship to Asian equity markets" (Powell et al., 2017) [1]. The datasets contain the daily prices (and price movements) of 24 different commodities decomposed from the S&P GSCI index and the daily prices (and price movements) of three share market indices including World, Asia, and South East Asia for the period 2004–2015. The dataset is then divided into annual periods, showing the worst 5% of price movements for each year. The datasets make it convenient to examine the tail risk of different commodities, as measured by Conditional Value at Risk (CVaR), as well as its changes over time. The datasets can also be used to investigate the association between commodity markets and share markets.
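
    CVaR as used here — the mean of the worst α fraction of daily price movements — can be computed directly from such a series. A minimal sketch with invented numbers:

```python
def cvar(price_moves, alpha=0.05):
    """Conditional Value at Risk sketch: the mean of the worst `alpha`
    fraction of daily price movements, reported as a positive loss."""
    ordered = sorted(price_moves)              # most negative first
    n_tail = max(1, int(len(ordered) * alpha))
    tail = ordered[:n_tail]
    return -sum(tail) / n_tail
```

    Computing this per commodity per year reproduces the "worst 5% of price movements for each year" view the dataset is organised around.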

  15. Need and trends of volumetric tests in recurring inspection of pressurized components in pressurized water reactors

    International Nuclear Information System (INIS)

    Bergemann, W.

    1982-01-01

    On the basis of the types of stress occurring in nuclear power plants and of practical results it has been shown that cracks in primary circuit components arise due to operating stresses in both the materials surfaces and the bulk of the materials. For this reason, volumetric materials testing is necessary in addition to surface testing. An outlook is given on the trends of volumetric testing. (author)

  16. Electromagnetically controlled measuring device for measuring injection quantities in a diesel injection pump volumetrically. Elektromagnetisch gesteuerte Messvorrichtung zur volumetrischen Messung von Einspritzmengen einer Dieseleinspritzpumpe

    Energy Technology Data Exchange (ETDEWEB)

    Hoffmann, K H; Mueller, M; Decker, R; Huber, G

    1990-11-22

    The invention concerns a device for the volumetric measurement of injection quantities of a diesel injection pump. The pump injects into a volumetric chamber that is controlled electromagnetically by a discharge valve and enclosed by a gas-pressure-loaded volumetric vessel, causing the vessel to retreat. The device is provided with an inductive displacement sensor comprising a differential coil pair containing an axially movable ferromagnetic core; the sensor forms part of a lifter rod connected to the volumetric vessel. After each retreat of the volumetric vessel it gives an opening signal to the discharge valve, and a closing signal as soon as the vessel reaches a defined height corresponding to its original position after its return.

  17. Proteomics dataset

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Carlsen, Thomas Gelsing; Ellingsen, Torkell

    2017-01-01

    patients (Morgan et al., 2012; Abraham and Medzhitov, 2011; Bennike, 2014) [8–10]. Therefore, we characterized the proteome of colon mucosa biopsies from 10 inflammatory bowel disease ulcerative colitis (UC) patients, 11 gastrointestinal healthy rheumatoid arthritis (RA) patients, and 10 controls. We...... been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifiers PXD001608 for ulcerative colitis and control samples, and PXD003082 for rheumatoid arthritis samples.

  18. A coupled melt-freeze temperature index approach in a one-layer model to predict bulk volumetric liquid water content dynamics in snow

    Science.gov (United States)

    Avanzi, Francesco; Yamaguchi, Satoru; Hirashima, Hiroyuki; De Michele, Carlo

    2016-04-01

    Liquid water in snow governs runoff dynamics and wet snow avalanche release. Moreover, it affects snow viscosity and snow albedo. As a result, measuring and modeling liquid water dynamics in snow have important implications for many scientific applications. However, measurements are usually challenging, while modeling is difficult due to an overlap of mechanical, thermal and hydraulic processes. Here, we evaluate the use of a simple one-layer one-dimensional model to predict hourly time-series of bulk volumetric liquid water content in seasonal snow. The model considers both a simple temperature-index approach (melt only) and a coupled melt-freeze temperature-index approach that is able to reconstruct melt-freeze dynamics. Performance of this approach is evaluated at three sites in Japan. These sites (Nagaoka, Shinjo and Sapporo) provide multi-year time-series of snow and meteorological data, vertical profiles of snow physical properties and snowmelt lysimeter data. These datasets offer an interesting opportunity to test this application in different climatic conditions, as the sites span a wide latitudinal range and are subjected to different snow conditions during the season. When melt-freeze dynamics are included in the model, results show that median absolute differences between observations and predictions of bulk volumetric liquid water content are consistently lower than 1 vol%. Moreover, the model is able to predict an observed dry condition of the snowpack in 80% of observed cases at a non-calibration site, where parameters from calibration sites are transferred. Overall, the analysis shows that a coupled melt-freeze temperature-index approach may be a valid solution for predicting the average wetness conditions of a snow cover at the local scale.
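
The coupled melt-freeze temperature-index idea can be sketched in a few lines. The melt factor, refreeze factor and threshold temperature below are illustrative placeholders, not the calibrated parameters from the study (which also works in volumetric units rather than mm w.e.):

```python
# Minimal sketch of a coupled melt-freeze temperature-index ("degree-day")
# model for bulk liquid water in a one-layer snowpack. All parameter
# values are invented for illustration.

def melt_freeze_step(theta_w, T_air, dt_h=1.0,
                     melt_factor=0.18,    # mm w.e. per degC per hour (assumed)
                     freeze_factor=0.06,  # mm w.e. per degC per hour (assumed)
                     T0=0.0):             # threshold temperature (degC)
    """Advance bulk liquid water (here in mm w.e.) by one time step.

    Melt adds liquid water when T_air > T0; refreezing removes it
    when T_air < T0. Liquid water cannot become negative.
    """
    if T_air > T0:
        theta_w += melt_factor * (T_air - T0) * dt_h
    else:
        theta_w -= freeze_factor * (T0 - T_air) * dt_h
    return max(theta_w, 0.0)

# Hourly air temperatures: a warm spell followed by a cold night.
temps = [2.0, 3.0, 1.0, -2.0, -4.0, -4.0]
theta = 0.0
series = []
for T in temps:
    theta = melt_freeze_step(theta, T)
    series.append(round(theta, 3))
```

The asymmetry between the melt and refreeze factors is what lets the model reproduce diurnal wetting-drying cycles instead of melt-only accumulation.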

  19. Creating a Regional MODIS Satellite-Driven Net Primary Production Dataset for European Forests

    Directory of Open Access Journals (Sweden)

    Mathias Neumann

    2016-06-01

    Full Text Available Net primary production (NPP) is an important ecological metric for studying forest ecosystems and their carbon sequestration, for assessing the potential supply of food or timber, and for quantifying the impacts of climate change on ecosystems. The global MODIS NPP dataset using the MOD17 algorithm provides valuable information for monitoring NPP at 1-km resolution. Since coarse-resolution global climate data are used, the global dataset may contain uncertainties for Europe. We used a 1-km daily gridded European climate dataset with the MOD17 algorithm to create the regional NPP dataset MODIS EURO. For evaluation of this new dataset, we compare MODIS EURO with terrestrially driven NPP from analyzing and harmonizing forest inventory data (NFI) from 196,434 plots in 12 European countries, as well as with the global MODIS NPP dataset, for the years 2000 to 2012. Comparing these three NPP datasets, we found that the global MODIS NPP dataset differs from NFI NPP by 26%, while MODIS EURO differs by only 7%. MODIS EURO also agrees with NFI NPP across scales (from continental and regional to country level) and gradients (elevation, location, tree age, dominant species, etc.). The agreement is particularly good for elevation, dominant species and tree height. This suggests that using improved climate data allows the MOD17 algorithm to provide realistic NPP estimates for Europe. Local discrepancies between MODIS EURO and NFI NPP can be related to differences in stand density due to forest management and to the national carbon estimation methods. With this study, we provide a consistent, temporally continuous and spatially explicit productivity dataset for the years 2000 to 2012 at 1-km resolution, which can be used to assess climate change impacts on ecosystems or the potential biomass supply of European forests for an increasing bio-based economy. MODIS EURO data are made freely available at ftp://palantir.boku.ac.at/Public/MODIS_EURO.

  20. Volumetric changes and clinical outcome for petroclival meningiomas after primary treatment with Gamma Knife radiosurgery.

    Science.gov (United States)

    Sadik, Zjiwar H A; Lie, Suan Te; Leenstra, Sieger; Hanssens, Patrick E J

    2018-01-26

    OBJECTIVE Petroclival meningiomas (PCMs) can cause devastating clinical symptoms due to mass effect on cranial nerves (CNs); thus, patients harboring these tumors need treatment. Many neurosurgeons advocate for microsurgery because removal of the tumor can provide relief or result in symptom disappearance. Gamma Knife radiosurgery (GKRS) is often an alternative to surgery because it can cause tumor shrinkage with improvement of symptoms. This study evaluates qualitative volumetric changes of PCMs after primary GKRS and their impact on clinical symptoms. METHODS The authors performed a retrospective study of patients with PCMs who underwent primary GKRS between 2003 and 2015 at the Gamma Knife Center of the Elisabeth-Tweesteden Hospital in Tilburg, the Netherlands. The study includes 53 patients. The authors concentrate on qualitative volumetric tumor changes, local tumor control rate, and the effect of the treatment on trigeminal neuralgia (TN). RESULTS Local tumor control was 98% at 5 years and 93% at 7 years (Kaplan-Meier estimates). More than 90% of the tumors showed regression in volume during the first 5 years. The mean volumetric tumor decrease was 21.2%, 27.1%, and 31% at 1, 3, and 6 years of follow-up, respectively. Improvement in TN was achieved in 61%, 67%, and 70% of the cases at 1, 2, and 3 years of follow-up, respectively. This was associated with a mean volumetric tumor decrease of 25% at the 1-year follow-up to 32% at the 3-year follow-up. CONCLUSIONS GKRS for PCMs yields a high tumor control rate with a low incidence of neurological deficits. Many patients with TN due to PCM experienced improvement in TN after radiosurgery. GKRS achieves significant volumetric tumor decrease in the first years of follow-up and thereafter.

  1. Comparison of Shallow Survey 2012 Multibeam Datasets

    Science.gov (United States)

    Ramirez, T. M.

    2012-12-01

    The purpose of the Shallow Survey common dataset is a comparison of the different technologies utilized for data acquisition in the shallow survey marine environment. The common dataset consists of a series of surveys conducted over a common area of seabed using a variety of systems. It provides equipment manufacturers the opportunity to showcase their latest systems while giving hydrographic researchers and scientists a chance to test their latest algorithms on the dataset so that rigorous comparisons can be made. Five companies collected data for the Common Dataset in the Wellington Harbor area in New Zealand between May 2010 and May 2011: Kongsberg, Reson, R2Sonic, GeoAcoustics, and Applied Acoustics. The Wellington harbor and surrounding coastal area was selected since it has a number of well-defined features, including the HMNZS South Seas and HMNZS Wellington wrecks, an armored seawall constructed of Tetrapods and Akmons, aquifers, wharves and marinas. The seabed inside the harbor basin is largely fine-grained sediment, with gravel and reefs around the coast. The area outside the harbor on the southern coast is an active environment, with moving sand and exposed reefs. A marine reserve is also in this area. For consistency between datasets, the coastal research vessel R/V Ikatere and crew were used for all surveys conducted for the common dataset. Using Triton's Perspective processing software, multibeam datasets collected for the Shallow Survey were processed for detailed analysis. Datasets from each sonar manufacturer were processed using the CUBE algorithm developed by the Center for Coastal and Ocean Mapping/Joint Hydrographic Center (CCOM/JHC). Each dataset was gridded at 0.5 and 1.0 meter resolutions for cross comparison and compliance with International Hydrographic Organization (IHO) requirements. Detailed comparisons were made of equipment specifications (transmit frequency, number of beams, beam width), data density, total uncertainty, and

  2. Sparse multivariate measures of similarity between intra-modal neuroimaging datasets

    Directory of Open Access Journals (Sweden)

    Maria J. Rosa

    2015-10-01

    Full Text Available An increasing number of neuroimaging studies are now based on either combining more than one data modality (inter-modal) or combining more than one measurement from the same modality (intra-modal). To date, most intra-modal studies using multivariate statistics have focused on differences between datasets, for instance relying on classifiers to differentiate between effects in the data. However, to fully characterize these effects, multivariate methods able to measure similarities between datasets are needed. One classical technique for estimating the relationship between two datasets is canonical correlation analysis (CCA). However, in the context of high-dimensional data the application of CCA is extremely challenging. A recent extension of CCA, sparse CCA (SCCA), overcomes this limitation by regularizing the model parameters while yielding a sparse solution. In this work, we modify SCCA with the aim of facilitating its application to high-dimensional neuroimaging data and finding meaningful multivariate image-to-image correspondences in intra-modal studies. In particular, we show how the optimal subset of variables can be estimated independently and we look at the information encoded in more than one set of SCCA transformations. We illustrate our framework using Arterial Spin Labelling data to investigate multivariate similarities between the effects of two antipsychotic drugs on cerebral blood flow.

  3. Equalizing imbalanced imprecise datasets for genetic fuzzy classifiers

    Directory of Open Access Journals (Sweden)

    AnaM. Palacios

    2012-04-01

    Full Text Available Determining whether an imprecise dataset is imbalanced is not immediate. The vagueness in the data means that the prior probabilities of the classes are not precisely known, and therefore the degree of imbalance can also be uncertain. In this paper we propose suitable extensions of different resampling algorithms that can be applied to interval-valued, multi-labelled data. By means of these extended preprocessing algorithms, certain classification systems designed for minimizing the fraction of misclassifications are able to produce knowledge bases that are also adequate under common metrics for imbalanced classification.
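
For intuition, here is a minimal sketch of classical random oversampling, the crisp-data baseline that such resampling extensions generalize to interval-valued, multi-labelled data. The class names and feature values are invented:

```python
# Random oversampling: duplicate randomly chosen minority-class samples
# until every class has as many samples as the largest class.
import random

def random_oversample(X, y, seed=0):
    """Return a class-balanced copy of (X, y) by duplicating samples."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    n_max = max(len(v) for v in by_class.values())
    X_out, y_out = [], []
    for label, samples in by_class.items():
        extra = [rng.choice(samples) for _ in range(n_max - len(samples))]
        for xi in samples + extra:
            X_out.append(xi)
            y_out.append(label)
    return X_out, y_out

X = [[0.1], [0.2], [0.3], [0.9]]          # toy features
y = ["healthy", "healthy", "healthy", "sick"]
Xb, yb = random_oversample(X, y)
```

In the imprecise setting of the paper, class membership itself may be a set of possible labels, so the counting step above is no longer well defined; that is precisely the gap the proposed extensions address.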

  4. Re-inspection of small RNA sequence datasets reveals several novel human miRNA genes.

    Directory of Open Access Journals (Sweden)

    Thomas Birkballe Hansen

    Full Text Available BACKGROUND: miRNAs are key players in gene expression regulation. To fully understand the complex nature of cellular differentiation or initiation and progression of disease, it is important to assess the expression patterns of as many miRNAs as possible. Thereby, identifying novel miRNAs is an essential prerequisite to make possible a comprehensive and coherent understanding of cellular biology. METHODOLOGY/PRINCIPAL FINDINGS: Based on two extensive, but previously published, small RNA sequence datasets from human embryonic stem cells and human embryoid bodies, respectively [1], we identified 112 novel miRNA-like structures and were able to validate miRNA processing in 12 out of 17 investigated cases. Several miRNA candidates were furthermore substantiated by including additional available small RNA datasets, thereby demonstrating the power of combining datasets to identify miRNAs that otherwise may be assigned as experimental noise. CONCLUSIONS/SIGNIFICANCE: Our analysis highlights that existing datasets have not yet been exhaustively studied, and continuous re-analysis of the available data is important to uncover all features of small RNA sequencing.

  5. An enhanced topologically significant directed random walk in cancer classification using gene expression datasets

    Directory of Open Access Journals (Sweden)

    Choon Sen Seah

    2017-12-01

    Full Text Available Microarray technology has become one of the elementary tools for researchers to study the genomes of organisms. As the complexity and heterogeneity of cancer is increasingly appreciated through genomic analysis, cancer classification is an emerging important trend. Significant directed random walk is proposed as a cancer classification approach with higher sensitivity of risk-gene prediction and higher accuracy of cancer classification. In this paper, the methodology and materials used for the experiment are presented. A tuning-parameter selection method and weight as a parameter are applied in the proposed approach. A gene expression dataset is used as the input dataset, while a pathway dataset is used to build a directed graph, as a reference dataset, to complete the bias process in the random walk approach. In addition, we demonstrate that our approach can improve sensitivity of prediction with higher accuracy and biologically meaningful classification results. A comparison between significant directed random walk and directed random walk shows the improvement in terms of sensitivity of prediction and accuracy of cancer classification.
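
The core operation behind directed-random-walk gene scoring can be illustrated with a plain random walk with restart on a toy pathway graph. The graph, seed weights, and restart probability below are invented for illustration and are not the paper's significance-weighted variant:

```python
# Random walk with restart on a directed graph:
# iterate p <- (1 - r) * W^T p + r * p0, where W is the row-normalized
# adjacency matrix and p0 holds the seed (e.g. expression-based) weights.

def random_walk_with_restart(adj, p0, restart=0.7, iters=100):
    """Return the stationary node scores of a restart walk on adj."""
    n = len(adj)
    # Row-normalize outgoing edge weights (dangling rows stay zero).
    W = []
    for row in adj:
        s = sum(row)
        W.append([w / s if s else 0.0 for w in row])
    p = p0[:]
    for _ in range(iters):
        p = [(1 - restart) * sum(W[i][j] * p[i] for i in range(n))
             + restart * p0[j]
             for j in range(n)]
    return p

# 3-gene toy pathway: gene 0 -> gene 1 -> gene 2, seed weight on gene 0.
adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
p = random_walk_with_restart(adj, [1.0, 0.0, 0.0])
```

Downstream genes receive geometrically decaying shares of the seed weight, which is how pathway topology biases the final gene scores.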

  6. WE-D-BRB-03: Current State of Volumetric Image Guidance for Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Hua, C. [St. Jude Children’s Research Hospital (United States)

    2016-06-15

    The goal of this session is to review the physics of proton therapy, treatment planning techniques, and the use of volumetric imaging in proton therapy. The course material covers the physics of proton interaction with matter and the physical characteristics of clinical proton beams. It will provide information on proton delivery systems and beam delivery techniques for double scattering (DS), uniform scanning (US), and pencil beam scanning (PBS). The session covers the treatment planning strategies used in DS, US, and PBS for various anatomical sites, methods to address uncertainties in proton therapy, and uncertainty mitigation to generate robust treatment plans. It introduces the audience to the current status of image guided proton therapy and clinical applications of CBCT for proton therapy. It outlines the importance of volumetric imaging in proton therapy. Learning Objectives: Gain knowledge in proton therapy physics, and treatment planning for proton therapy including intensity modulated proton therapy. The current state of volumetric image guidance equipment in proton therapy. Clinical applications of CBCT and its advantages over orthogonal imaging for proton therapy. B.K. Teo had received travel funds from IBA in 2015.

  7. WE-D-BRB-03: Current State of Volumetric Image Guidance for Proton Therapy

    International Nuclear Information System (INIS)

    Hua, C.

    2016-01-01

    The goal of this session is to review the physics of proton therapy, treatment planning techniques, and the use of volumetric imaging in proton therapy. The course material covers the physics of proton interaction with matter and the physical characteristics of clinical proton beams. It will provide information on proton delivery systems and beam delivery techniques for double scattering (DS), uniform scanning (US), and pencil beam scanning (PBS). The session covers the treatment planning strategies used in DS, US, and PBS for various anatomical sites, methods to address uncertainties in proton therapy, and uncertainty mitigation to generate robust treatment plans. It introduces the audience to the current status of image guided proton therapy and clinical applications of CBCT for proton therapy. It outlines the importance of volumetric imaging in proton therapy. Learning Objectives: Gain knowledge in proton therapy physics, and treatment planning for proton therapy including intensity modulated proton therapy. The current state of volumetric image guidance equipment in proton therapy. Clinical applications of CBCT and its advantages over orthogonal imaging for proton therapy. B.K. Teo had received travel funds from IBA in 2015.

  8. Low-cost Volumetric Ultrasound by Augmentation of 2D Systems: Design and Prototype.

    Science.gov (United States)

    Herickhoff, Carl D; Morgan, Matthew R; Broder, Joshua S; Dahl, Jeremy J

    2018-01-01

    Conventional two-dimensional (2D) ultrasound imaging is a powerful diagnostic tool in the hands of an experienced user, yet 2D ultrasound remains clinically underutilized and inherently incomplete, with output being very operator dependent. Volumetric ultrasound systems can more fully capture a three-dimensional (3D) region of interest, but current 3D systems require specialized transducers, are prohibitively expensive for many clinical departments, and do not register image orientation with respect to the patient; these systems are designed to provide improved workflow rather than operator independence. This work investigates whether it is possible to add volumetric 3D imaging capability to existing 2D ultrasound systems at minimal cost, providing a practical means of reducing operator dependence in ultrasound. In this paper, we present a low-cost method to make 2D ultrasound systems capable of quality volumetric image acquisition: we present the general system design and image acquisition method, including the use of a probe-mounted orientation sensor, a simple probe fixture prototype, and an offline volume reconstruction technique. We demonstrate initial results of the method, implemented using a Verasonics Vantage research scanner.

  9. Study of a spherical torus based volumetric neutron source for nuclear technology testing and development

    International Nuclear Information System (INIS)

    Cheng, E.T.; Cerbone, R.J.; Sviatoslavsky, I.N.; Galambos, L.D.; Peng, Y.-K.M.

    2000-01-01

    A plasma based, deuterium and tritium (DT) fueled, volumetric 14 MeV neutron source (VNS) has been considered as a possible facility to support the development of the demonstration fusion power reactor (DEMO). It can be used to test and develop necessary fusion blanket and divertor components and provide a sufficient database, particularly on the reliability of nuclear components necessary for DEMO. The VNS device can complement ITER by reducing the cost and risk in the development of DEMO. A low cost, scientifically attractive, and technologically feasible volumetric neutron source based on the spherical torus (ST) concept has been conceived. The ST-VNS, which has a major radius of 1.07 m, aspect ratio 1.4, and plasma elongation three, can produce a neutron wall loading from 0.5 to 5 MW m⁻² at the outboard test section with a modest fusion power level from 38 to 380 MW. It can be used to test necessary nuclear technologies for fusion power reactors and to develop fusion core components including the divertor, first wall, and power blanket. Using staged operation leading to high neutron wall loading and optimistic availability, a neutron fluence of more than 30 MW year m⁻² is obtainable within 20 years of operation. This will permit assessments of the lifetime and reliability of promising fusion core components in a reactor-relevant environment. A full scale demonstration of power reactor fusion core components is also made possible because of the high neutron wall loading capability. Tritium breeding in such a full scale demonstration can be very useful to ensure the self-sufficiency of the fuel cycle for a candidate power blanket concept.

  10. National Hydrography Dataset (NHD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The National Hydrography Dataset (NHD) is a feature-based database that interconnects and uniquely identifies the stream segments or reaches that comprise the...

  11. Nitrogen-Doped Holey Graphene as an Anode for Lithium-Ion Batteries with High Volumetric Energy Density and Long Cycle Life.

    Science.gov (United States)

    Xu, Jiantie; Lin, Yi; Connell, John W; Dai, Liming

    2015-12-01

    Nitrogen-doped holey graphene (N-hG) as an anode material for lithium-ion batteries has delivered a maximum volumetric capacity of 384 mAh cm⁻³ with an excellent long-term cycle life of up to 6000 cycles, and as an electrochemical capacitor has delivered a maximum volumetric energy density of 171.2 Wh L⁻¹ and a volumetric capacitance of 201.6 F cm⁻³. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Somatic mutations associated with MRI-derived volumetric features in glioblastoma

    Energy Technology Data Exchange (ETDEWEB)

    Gutman, David A.; Dunn, William D. [Emory University School of Medicine, Departments of Neurology, Atlanta, GA (United States); Emory University School of Medicine, Biomedical Informatics, Atlanta, GA (United States); Grossmann, Patrick; Alexander, Brian M. [Harvard Medical School, Department of Radiation Oncology, Dana-Farber Cancer Institute, Brigham and Women's Hospital, Boston, MA (United States); Cooper, Lee A.D. [Emory University School of Medicine, Biomedical Informatics, Atlanta, GA (United States); Georgia Institute of Technology, Department of Biomedical Engineering, Atlanta, GA (United States); Holder, Chad A. [Emory University School of Medicine, Radiology and Imaging Sciences, Atlanta, GA (United States); Ligon, Keith L. [Brigham and Women's Hospital, Harvard Medical School, Pathology, Dana-Farber Cancer Institute, Boston, MA (United States); Aerts, Hugo J.W.L. [Harvard Medical School, Department of Radiation Oncology, Dana-Farber Cancer Institute, Brigham and Women's Hospital, Boston, MA (United States); Brigham and Women's Hospital, Harvard Medical School, Radiology, Dana-Farber Cancer Institute, Boston, MA (United States)

    2015-12-15

    MR imaging can noninvasively visualize tumor phenotype characteristics at the macroscopic level. Here, we investigated whether somatic mutations are associated with and can be predicted by MRI-derived tumor imaging features of glioblastoma (GBM). Seventy-six GBM patients were identified from The Cancer Imaging Archive for whom preoperative T1-contrast (T1C) and T2-FLAIR MR images were available. For each tumor, a set of volumetric imaging features and their ratios were measured, including necrosis, contrast enhancing, and edema volumes. Imaging genomics analysis assessed the association of these features with mutation status of nine genes frequently altered in adult GBM. Finally, area under the curve (AUC) analysis was conducted to evaluate the predictive performance of imaging features for mutational status. Our results demonstrate that MR imaging features are strongly associated with mutation status. For example, TP53-mutated tumors had significantly smaller contrast enhancing and necrosis volumes (p = 0.012 and 0.017, respectively) and RB1-mutated tumors had significantly smaller edema volumes (p = 0.015) compared to wild-type tumors. MRI volumetric features were also found to significantly predict mutational status. For example, AUC analysis results indicated that TP53, RB1, NF1, EGFR, and PDGFRA mutations could each be significantly predicted by at least one imaging feature. MRI-derived volumetric features are significantly associated with and predictive of several cancer-relevant, drug-targetable DNA mutations in glioblastoma. These results may shed insight into unique growth characteristics of individual tumors at the macroscopic level resulting from molecular events as well as increase the use of noninvasive imaging in personalized medicine. (orig.)
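
As background, the AUC used to score an imaging feature as a predictor of mutation status equals the probability that a randomly chosen case from one group has a higher feature value than a randomly chosen case from the other (the Mann-Whitney interpretation). A minimal sketch with invented numbers:

```python
# AUC by exhaustive pairwise comparison; ties count half.
# The "edema volume" values below are toy numbers, not study data.

def auc(scores_pos, scores_neg):
    """Probability that a positive case outranks a negative case."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# e.g. a volumetric feature for mutated vs wild-type tumors (toy cm^3)
mutated = [10.0, 12.0, 15.0]
wild_type = [11.0, 8.0, 7.0]
score = auc(mutated, wild_type)   # 8 of 9 pairs ranked correctly
```

An AUC of 0.5 means the feature carries no information about mutation status, and 1.0 means perfect separation, which is why significance is assessed against the 0.5 baseline.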

  13. Agreement of mammographic measures of volumetric breast density to MRI.

    Science.gov (United States)

    Wang, Jeff; Azziz, Ania; Fan, Bo; Malkov, Serghei; Klifa, Catherine; Newitt, David; Yitta, Silaja; Hylton, Nola; Kerlikowske, Karla; Shepherd, John A

    2013-01-01

    Clinical scores of mammographic breast density are highly subjective. Automated technologies for mammography exist to quantify breast density objectively, but the technique that most accurately measures the quantity of breast fibroglandular tissue is not known. The aim was to compare the agreement of three automated mammographic techniques for measuring volumetric breast density with a quantitative volumetric MRI-based technique in a screening population. Women were selected from the UCSF Medical Center screening population who had received both a screening MRI and a digital mammogram within one year of each other, had Breast Imaging Reporting and Data System (BI-RADS) assessments of normal or benign findings, and no history of breast cancer or surgery. Agreement of three mammographic techniques (Single-energy X-ray Absorptiometry [SXA], Quantra, and Volpara) with MRI was assessed for percent fibroglandular tissue volume, absolute fibroglandular tissue volume, and total breast volume. Among 99 women, the automated mammographic density techniques were correlated with MRI measures with R² values ranging from 0.40 (log fibroglandular volume) to 0.91 (total breast volume). Substantial agreement measured by the kappa statistic was found between all percent fibroglandular tissue measures (0.72 to 0.63), but only moderate agreement for log fibroglandular volumes. The kappa statistics for all percent density measures were highest in the comparisons of the SXA and MRI results. The largest error source between MRI and the mammography techniques was found to be differences in measures of total breast volume. Automated volumetric fibroglandular tissue measures from screening digital mammograms were in substantial agreement with MRI and, if associated with breast cancer, could be used in clinical practice to enhance risk assessment and prevention.
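
For reference, the kappa statistic reported above measures chance-corrected agreement between two categorical ratings. A minimal sketch of unweighted Cohen's kappa with toy density categories (not the study data):

```python
# Unweighted Cohen's kappa: (observed agreement - chance agreement)
# divided by (1 - chance agreement). Labels below are toy categories.

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters' label lists."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    labels = sorted(set(a) | set(b))
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

rater1 = ["a", "a", "b", "b", "c", "c"]   # e.g. density categories, method 1
rater2 = ["a", "a", "b", "c", "c", "c"]   # same cases, method 2
k = cohens_kappa(rater1, rater2)
```

On the conventional scale, kappa in 0.61-0.80 is read as "substantial" and 0.41-0.60 as "moderate" agreement, which matches the wording of the abstract.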

  14. Somatic mutations associated with MRI-derived volumetric features in glioblastoma

    International Nuclear Information System (INIS)

    Gutman, David A.; Dunn, William D.; Grossmann, Patrick; Alexander, Brian M.; Cooper, Lee A.D.; Holder, Chad A.; Ligon, Keith L.; Aerts, Hugo J.W.L.

    2015-01-01

    MR imaging can noninvasively visualize tumor phenotype characteristics at the macroscopic level. Here, we investigated whether somatic mutations are associated with and can be predicted by MRI-derived tumor imaging features of glioblastoma (GBM). Seventy-six GBM patients were identified from The Cancer Imaging Archive for whom preoperative T1-contrast (T1C) and T2-FLAIR MR images were available. For each tumor, a set of volumetric imaging features and their ratios were measured, including necrosis, contrast enhancing, and edema volumes. Imaging genomics analysis assessed the association of these features with mutation status of nine genes frequently altered in adult GBM. Finally, area under the curve (AUC) analysis was conducted to evaluate the predictive performance of imaging features for mutational status. Our results demonstrate that MR imaging features are strongly associated with mutation status. For example, TP53-mutated tumors had significantly smaller contrast enhancing and necrosis volumes (p = 0.012 and 0.017, respectively) and RB1-mutated tumors had significantly smaller edema volumes (p = 0.015) compared to wild-type tumors. MRI volumetric features were also found to significantly predict mutational status. For example, AUC analysis results indicated that TP53, RB1, NF1, EGFR, and PDGFRA mutations could each be significantly predicted by at least one imaging feature. MRI-derived volumetric features are significantly associated with and predictive of several cancer-relevant, drug-targetable DNA mutations in glioblastoma. These results may shed insight into unique growth characteristics of individual tumors at the macroscopic level resulting from molecular events as well as increase the use of noninvasive imaging in personalized medicine. (orig.)

  15. The Harvard organic photovoltaic dataset.

    Science.gov (United States)

    Lopez, Steven A; Pyzer-Knapp, Edward O; Simm, Gregor N; Lutzow, Trevor; Li, Kewei; Seress, Laszlo R; Hachmann, Johannes; Aspuru-Guzik, Alán

    2016-09-27

    The Harvard Organic Photovoltaic Dataset (HOPV15) presented in this work is a collation of experimental photovoltaic data from the literature, and corresponding quantum-chemical calculations performed over a range of conformers, each with quantum chemical results using a variety of density functionals and basis sets. It is anticipated that this dataset will be of use in both relating electronic structure calculations to experimental observations through the generation of calibration schemes, as well as for the creation of new semi-empirical methods and the benchmarking of current and future model chemistries for organic electronic applications.

  16. Synthetic ALSPAC longitudinal datasets for the Big Data VR project [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Demetris Avraam

    2017-08-01

    Full Text Available Three synthetic datasets - of observation size 15,000, 155,000 and 1,555,000 participants, respectively - were created by simulating eleven cardiac and anthropometric variables from nine collection ages of the ALSPAC birth cohort study. The synthetic datasets retain similar data properties to the ALSPAC study data they are simulated from (covariance matrices, as well as the mean and variance values of the variables) without including the original data itself or disclosing participant information. In this instance, the three synthetic datasets have been utilised in an academia-industry collaboration to build a prototype virtual reality data analysis software, but they could have broader use in method and software development projects where sensitive data cannot be freely shared.
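
The simulation idea, drawing synthetic participants from a distribution that matches the cohort's mean vector and covariance matrix so that no real participant appears in the output, can be sketched as follows. The two variables and all numbers are invented placeholders, not ALSPAC statistics:

```python
# Synthetic data via a multivariate normal: transform independent
# standard-normal draws with the Cholesky factor of the target
# covariance, then shift by the target means.
import random

def cholesky_2x2(cov):
    """Lower-triangular Cholesky factor L of a 2x2 covariance matrix."""
    l11 = cov[0][0] ** 0.5
    l21 = cov[1][0] / l11
    l22 = (cov[1][1] - l21 ** 2) ** 0.5
    return [[l11, 0.0], [l21, l22]]

def simulate(mean, cov, n, seed=0):
    """Draw n synthetic 2-variable participants ~ N(mean, cov)."""
    rng = random.Random(seed)
    L = cholesky_2x2(cov)
    out = []
    for _ in range(n):
        z = [rng.gauss(0, 1), rng.gauss(0, 1)]
        out.append([mean[i] + sum(L[i][j] * z[j] for j in range(2))
                    for i in range(2)])
    return out

mean = [150.0, 80.0]        # toy means (e.g. height cm, heart rate bpm)
cov = [[25.0, 6.0],
       [6.0, 16.0]]         # toy covariance matrix
data = simulate(mean, cov, 50_000)
```

Because only the summary statistics enter the generator, the synthetic records reproduce means, variances and covariances to sampling error while carrying no individual-level information.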

  17. Volumetric Real-Time Imaging Using a CMUT Ring Array

    OpenAIRE

    Choe, Jung Woo; Oralkan, Ömer; Nikoozadeh, Amin; Gencel, Mustafa; Stephens, Douglas N.; O’Donnell, Matthew; Sahn, David J.; Khuri-Yakub, Butrus T.

    2012-01-01

    A ring array provides a very suitable geometry for forward-looking volumetric intracardiac and intravascular ultrasound imaging. We fabricated an annular 64-element capacitive micromachined ultrasonic transducer (CMUT) array featuring a 10-MHz operating frequency and a 1.27-mm outer radius. A custom software suite was developed to run on a PC-based imaging system for real-time imaging using this device.

  18. In-Situ Spatial Variability Of Thermal Conductivity And Volumetric ...

    African Journals Online (AJOL)

    Studies of the spatial variability of thermal conductivity and volumetric water content of a silty topsoil were conducted on a 0.6 ha site at Abeokuta, South-Western Nigeria. The thermal conductivity (k) was measured at depths of up to 0.06 m along four parallel 200-m-long profiles, at an average temperature of 25 °C, using ...

  19. Dataset of Atmospheric Environment Publication in 2016, Source emission and model evaluation of formaldehyde from composite and solid wood furniture in a full-scale chamber

    Data.gov (United States)

    U.S. Environmental Protection Agency — The data presented in this data file is a product of a journal publication. The dataset contains formaldehyde air concentrations in the emission test chamber and...

  20. Tables and figure datasets

    Data.gov (United States)

    U.S. Environmental Protection Agency — Soil and air concentrations of asbestos in Sumas study. This dataset is associated with the following publication: Wroble, J., T. Frederick, A. Frame, and D....

  1. Real-time volumetric scintillation dosimetry

    International Nuclear Information System (INIS)

    Beddar, S

    2015-01-01

    The goal of this brief review is to summarize the current status of real-time 3D scintillation dosimetry and what has been done so far in this area. The basic concept is to use a large volume of a scintillator material (liquid or solid) to measure or image the dose distributions from external radiation therapy (RT) beams in three dimensions. In this configuration, the scintillator material fulfills the dual role of being the detector and the phantom material in which the measurements are being performed. In this case, dose perturbations caused by the introduction of a detector within a phantom are not at issue. All the detector configurations that have been conceived to date use a charge-coupled device (CCD) camera to measure the light produced within the scintillator. In order to accurately measure the scintillation light, one must correct for various optical artefacts that arise as the light propagates from the scintillating centers through the optical chain to the CCD chip. Quenching, defined in its simplest form as a nonlinear response to high-linear energy transfer (LET) charged particles, is one of the disadvantages when such systems are used to measure the absorbed dose from high-LET particles such as protons. However, correction methods that restore the linear dose response through the whole proton range have been proven effective for both liquid and plastic scintillators. Volumetric scintillation dosimetry has the potential to provide fast, high-resolution and accurate 3D imaging of RT dose distributions. Further research is warranted to optimize the necessary image reconstruction methods and optical corrections needed to achieve its full potential
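
The LET-dependent quenching mentioned above is commonly modeled with Birks' law, dL/dx = S·(dE/dx)/(1 + kB·dE/dx). The sketch below uses invented S and kB values as a generic illustration of the saturation behavior, not the correction method of any specific system reviewed here:

```python
# Birks' law for scintillator quenching: light yield per unit path is
# ~linear in stopping power at low LET and saturates at high LET.
# S (scaling) and kB (Birks constant) below are invented values.

def birks_light_yield(dEdx, S=1.0, kB=0.01):
    """Scintillation light per unit path length, Birks' law.

    dEdx : stopping power (MeV/cm); kB : Birks constant (cm/MeV).
    """
    return S * dEdx / (1.0 + kB * dEdx)

low_let = birks_light_yield(1.0)      # nearly linear regime
high_let = birks_light_yield(500.0)   # strongly quenched regime
# fraction of the ideal (linear) light actually emitted at high LET:
quench_factor = high_let / (1.0 * 500.0)
```

Inverting this relation along the known proton depth-dose curve is the kind of correction that restores a linear dose response over the full proton range.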

  2. TAILS N-terminomic and proteomic datasets of healthy human dental pulp

    Directory of Open Access Journals (Sweden)

    Ulrich Eckhard

    2015-12-01

Full Text Available The data described here provide an in-depth proteomic assessment of the human dental pulp proteome and N-terminome (Eckhard et al., 2015 [1]). A total of 9 human dental pulps were processed and analyzed by the positional proteomics technique TAILS (Terminal Amine Isotopic Labeling of Substrates) N-terminomics. 38 liquid chromatography tandem mass spectrometry (LC-MS/MS) datasets were collected and analyzed using four database search engines in combination with statistical downstream evaluation, yielding the by far largest proteomic and N-terminomic dataset of any dental tissue to date. The raw mass spectrometry data and the corresponding metadata have been deposited in ProteomeXchange with the PXD identifier ; Supplementary Tables described in this article are available via Mendeley Data (10.17632/555j3kk4sw.1).

  3. Full waveform inversion based on scattering angle enrichment with application to real dataset

    KAUST Repository

    Wu, Zedong; Alkhalifah, Tariq Ali

    2015-01-01

Reflected waveform inversion (RWI) provides a method to reduce the nonlinearity of the standard full waveform inversion (FWI). However, a drawback of the existing RWI methods is their inability to utilize diving waves and the extra sensitivity

  4. PENERAPAN TEKNIK BAGGING PADA ALGORITMA KLASIFIKASI UNTUK MENGATASI KETIDAKSEIMBANGAN KELAS DATASET MEDIS

    Directory of Open Access Journals (Sweden)

    Rizki Tri Prasetio

    2016-03-01

Full Text Available ABSTRACT – Class imbalance problems have been reported to severely hinder the classification performance of many standard learning algorithms, and have attracted a great deal of attention from researchers in different fields. Therefore, a number of methods, such as sampling methods, cost-sensitive learning methods, and bagging- and boosting-based ensemble methods, have been proposed to solve these problems. Some medical datasets have two (binominal) classes with an imbalance that causes a lack of classification accuracy. This research proposes combining the bagging technique with classification algorithms to improve the accuracy on medical datasets, with bagging used to address the class imbalance. The proposed method is applied to three classifier algorithms, i.e., naïve Bayes, decision tree and k-nearest neighbor. Five medical datasets obtained from the UCI Machine Learning repository are used, i.e., breast-cancer, liver-disorder, heart-disease, pima-diabetes and vertebral-column. The results indicate that the proposed method gives a significant improvement for two classification algorithms, i.e. decision tree (t-test p value 0.0184) and k-nearest neighbor (t-test p value 0.0292), but not for naïve Bayes (t-test p value 0.9236). After the bagging technique is applied to the five medical datasets, naïve Bayes has the highest accuracy for the breast-cancer dataset at 96.14% with an AUC of 0.984, for heart-disease at 84.44% with an AUC of 0.911, and for pima-diabetes at 74.73% with an AUC of 0.806, while k-nearest neighbor has the best accuracy for the liver-disorder dataset at 62.03% with an AUC of 0.632 and for vertebral-column at 82.26% with an AUC of 0.867. Keywords: ensemble technique, bagging, imbalanced class, medical dataset. ABSTRAKSI (translated) – The class imbalance problem has been reported to severely hinder the classification performance of many classification algorithms and has attracted much attention from
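The bagging step described in this abstract (bootstrap resampling plus majority voting over base classifiers) can be sketched in a few lines. The following is a minimal stdlib illustration with a 1-nearest-neighbour base learner and a toy imbalanced dataset, not the paper's implementation:

```python
import random
from collections import Counter

def nn_predict(train, x):
    # 1-nearest-neighbour: return the label of the closest training point
    return min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]

def bagging_predict(train, x, n_estimators=25, rng=None):
    """Bagging: train each base learner on a bootstrap sample, then majority-vote."""
    rng = rng or random.Random(0)
    votes = []
    for _ in range(n_estimators):
        # bootstrap sample: draw len(train) points with replacement
        sample = [rng.choice(train) for _ in train]
        votes.append(nn_predict(sample, x))
    return Counter(votes).most_common(1)[0][0]

# toy imbalanced dataset: 8 points of class 0, 3 of class 1
train = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0), ((1, 1), 0),
         ((0.5, 0.5), 0), ((0.2, 0.8), 0), ((0.9, 0.1), 0), ((0.3, 0.3), 0),
         ((5, 5), 1), ((5, 6), 1), ((6, 5), 1)]

label = bagging_predict(train, (5.5, 5.5))
```

Because each base learner sees a different bootstrap sample, the ensemble vote is less sensitive to the minority class being swamped than a single classifier trained on the raw data.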

  5. A global gridded dataset of daily precipitation going back to 1950, ideal for analysing precipitation extremes

    Science.gov (United States)

    Contractor, S.; Donat, M.; Alexander, L. V.

    2017-12-01

Reliable observations of precipitation are necessary to determine past changes in precipitation and to validate models, allowing for reliable future projections. Existing gauge-based gridded datasets of daily precipitation and satellite-based observations contain artefacts and have a short length of record, making them unsuitable for analysing precipitation extremes. The largest limiting factor for the gauge-based datasets is a dense and reliable station network. Currently, there are two major archives of global in situ daily rainfall data: the Global Historical Climatology Network (GHCN-Daily), hosted by the National Oceanic and Atmospheric Administration (NOAA), and the archive of the Global Precipitation Climatology Centre (GPCC), part of the Deutscher Wetterdienst (DWD). We combine the two data archives and use automated quality control techniques to create a reliable long-term network of raw station data, which we then interpolate using block kriging to create a global gridded dataset of daily precipitation going back to 1950. We compare our interpolated dataset with existing global gridded data of daily precipitation: NOAA Climate Prediction Center (CPC) Global V1.0 and GPCC Full Data Daily Version 1.0, as well as various regional datasets. We find that our raw station density is much higher than that of other datasets. To avoid artefacts due to station network variability, we provide multiple versions of our dataset based on various completeness criteria, and we provide the standard deviation, kriging error and number of stations for each grid cell and timestep to encourage responsible use of our dataset. Despite our efforts to increase the raw data density, the in situ station network remains sparse in India after the 1960s and in Africa throughout the timespan of the dataset. Our dataset will allow for more reliable global analyses of rainfall, including its extremes, and pave the way for better global precipitation observations with lower and more transparent uncertainties.

  6. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2013-01-01

The first part of the Long Shutdown period has been dedicated to the preparation of the samples for the analyses targeting the summer conferences. In particular, the 8 TeV data acquired in 2012, including most of the “parked datasets”, have been reconstructed, profiting from improved alignment and calibration conditions for all the sub-detectors. Careful planning of the resources was essential in order to deliver the datasets to the analysts well in time, and to schedule the update of all the conditions and calibrations needed at the analysis level. The newly reprocessed data have undergone detailed scrutiny by the Dataset Certification team, allowing the recovery of some of the data for analysis usage and further improving the certification efficiency, which now stands at 91% of the recorded luminosity. With the aim of delivering a consistent dataset for 2011 and 2012, both in terms of conditions and release (53X), the PPD team is now working to set up a data re-reconstruction and a new MC pro...

  7. Integrated Surface Dataset (Global)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Integrated Surface (ISD) Dataset (ISD) is composed of worldwide surface weather observations from over 35,000 stations, though the best spatial coverage is...

  8. Aaron Journal article datasets

    Data.gov (United States)

    U.S. Environmental Protection Agency — All figures used in the journal article are in netCDF format. This dataset is associated with the following publication: Sims, A., K. Alapaty , and S. Raman....

  9. Market Squid Ecology Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains ecological information collected on the major adult spawning and juvenile habitats of market squid off California and the US Pacific Northwest....

  10. Mapping Global Ocean Surface Albedo from Satellite Observations: Models, Algorithms, and Datasets

    Science.gov (United States)

    Li, X.; Fan, X.; Yan, H.; Li, A.; Wang, M.; Qu, Y.

    2018-04-01

Ocean surface albedo (OSA) is one of the important parameters in the surface radiation budget (SRB) and is usually considered a controlling factor of the heat exchange between the atmosphere and the ocean. The temporal and spatial dynamics of OSA determine the energy absorption of upper-level ocean water, and influence oceanic currents, atmospheric circulation, and the transport of material and energy in the hydrosphere. Therefore, various parameterizations and models have been developed for describing the dynamics of OSA. However, it has been demonstrated that the currently available OSA datasets cannot fulfill the requirements of global climate change studies. In this study, we present a literature review on mapping global OSA from satellite observations. The models (parameterizations, the coupled ocean-atmosphere radiative transfer (COART) model, and the three-component ocean water albedo (TCOWA) model), algorithms (the estimation method based on reanalysis data, and the direct-estimation algorithm), and datasets (the cloud, albedo and radiation (CLARA) surface albedo product, the dataset derived with the TCOWA model, and the global land surface satellite (GLASS) phase-2 surface broadband albedo product) of OSA are discussed separately.

  11. A Novel Technique for Time-Centric Analysis of Massive Remotely-Sensed Datasets

    Directory of Open Access Journals (Sweden)

    Glenn E. Grant

    2015-04-01

Full Text Available Analyzing massive remotely-sensed datasets presents formidable challenges. The volume of satellite imagery collected often outpaces analytical capabilities; however, thorough analyses of complete datasets may provide new insights into processes that would otherwise go unseen. In this study we present a novel, object-oriented approach to storing, retrieving, and analyzing large remotely-sensed datasets. The objective is to provide a new structure for scalable storage and rapid, Internet-based analysis of climatology data. We introduce the concept of a “data rod”: a conceptual data object that organizes time-series information into a temporally-oriented vertical column at any given location. To demonstrate one possible use, we ingest 25 years of Greenland imagery into a series of pure-object databases, then retrieve and analyze the data. The results provide a basis for evaluating the database performance and scientific analysis capabilities. The project succeeds in demonstrating the effectiveness of the prototype database architecture and analysis approach, not because new scientific information is discovered, but because quality control issues are revealed in the source data that had gone undetected for years.
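As a concrete illustration of the "data rod" idea, the sketch below (with hypothetical timestamps and grid values, not the project's actual schema) pivots per-timestep image grids into one temporally-ordered column per pixel location:

```python
# hypothetical 3 timesteps of a 2x2 raster grid (row-major)
grids = {
    "2001-01": [[1.0, 2.0], [3.0, 4.0]],
    "2001-02": [[1.5, 2.5], [3.5, 4.5]],
    "2001-03": [[2.0, 3.0], [4.0, 5.0]],
}

def build_data_rods(grids):
    """Pivot image-per-timestep storage into one time series ('rod') per pixel,
    so a full history at any location is a single lookup."""
    rods = {}
    for t in sorted(grids):  # chronological order (ISO-style keys sort correctly)
        for r, row in enumerate(grids[t]):
            for c, value in enumerate(row):
                rods.setdefault((r, c), []).append((t, value))
    return rods

rods = build_data_rods(grids)
history = rods[(0, 0)]  # complete time series for the pixel at row 0, col 0
```

Time-centric queries (trends, anomalies at one location) then read one rod instead of scanning every image.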

  12. The analysis of colour uniformity for a volumetric display based on a rotating LED array

    International Nuclear Information System (INIS)

    Wu, Jiang; Liu, Xu; Yan, Caijie; Xia, XinXing; Li, Haifeng

    2011-01-01

A colour nonuniformity zone exists in three-dimensional (3D) volumetric displays based on a rotating colour light-emitting diode (LED) array. We analyse the reason for the colour nonuniformity zone by measuring the light intensity distribution and chromaticity coordinates of the LEDs in the volumetric display, and we calculate the two boundaries of the zone. We measure the colour uniformity for a single cuboid of 3×3×4 voxels displaying red, green, blue and white at different horizontal viewing angles, and for 64 cuboids distributed over the whole cylindrical image space with a fixed viewpoint. To evaluate the colour uniformity of a 3D image, we propose three evaluation indices: the average colour difference, the maximum colour difference and the variance of the colour difference. The measurement results show that the character of colour uniformity differs between the 3D volumetric display and a two-dimensional display.
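The three proposed evaluation indices are straightforward to compute once per-viewpoint colour differences are available. The sketch below uses plain Euclidean distance in colour space as the colour-difference measure, since the abstract does not specify the exact formula used:

```python
import math

def colour_uniformity_indices(samples, reference):
    """Average, maximum and variance of the colour differences between each
    measured colour and a reference colour (Euclidean distance assumed)."""
    diffs = [math.dist(s, reference) for s in samples]
    mean = sum(diffs) / len(diffs)
    variance = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    return mean, max(diffs), variance

# hypothetical chromaticity measurements at three viewing angles vs. a reference
avg, worst, var = colour_uniformity_indices([(1, 0), (0, 1), (0, 0)], (0, 0))
```

The average captures overall shift, the maximum the worst viewing angle, and the variance how unevenly the colour error is spread across viewpoints.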

  13. A spiral-based volumetric acquisition for MR temperature imaging.

    Science.gov (United States)

    Fielden, Samuel W; Feng, Xue; Zhao, Li; Miller, G Wilson; Geeslin, Matthew; Dallapiazza, Robert F; Elias, W Jeffrey; Wintermark, Max; Butts Pauly, Kim; Meyer, Craig H

    2018-06-01

To develop a rapid pulse sequence for volumetric MR thermometry. Simulations were carried out to assess temperature deviation, focal spot distortion/blurring, and focal spot shift across a range of readout durations and maximum temperatures for Cartesian, spiral-out, and retraced spiral-in/out (RIO) trajectories. The RIO trajectory was applied for stack-of-spirals 3D imaging on a real-time imaging platform, and a preliminary evaluation was carried out against a standard 2D sequence in vivo in a swine brain model, comparing the maximum and mean temperatures measured by the two methods, as well as the temporal standard deviation measured by the two methods. In simulations, low-bandwidth Cartesian trajectories showed substantial shift of the focal spot, whereas both spiral trajectories showed no shift while maintaining focal spot geometry. In vivo, the 3D sequence achieved real-time 4D monitoring of thermometry, with an update time of 2.9-3.3 s. Spiral imaging, and RIO imaging in particular, is an effective way to speed up volumetric MR thermometry. Magn Reson Med 79:3122-3127, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  14. Verbal Memory Decline following DBS for Parkinson's Disease: Structural Volumetric MRI Relationships.

    Science.gov (United States)

    Geevarghese, Ruben; Lumsden, Daniel E; Costello, Angela; Hulse, Natasha; Ayis, Salma; Samuel, Michael; Ashkan, Keyoumars

    2016-01-01

Parkinson's disease is a chronic degenerative movement disorder. The mainstay of treatment is medical; in certain patients Deep Brain Stimulation (DBS) may be offered. However, DBS has been associated with post-operative neuropsychological changes, especially in verbal memory. Our aims were, first, to determine whether pre-surgical thalamic and hippocampal volumes were related to verbal memory changes following DBS, and second, to determine whether clinical factors such as age, duration of symptoms or motor severity (UPDRS Part III score) were related to verbal memory changes. A consecutive group of 40 patients undergoing bilateral Subthalamic Nucleus (STN)-DBS for PD was selected. Brain MRI data were acquired and pre-processed, and structural volumetric data were extracted using FSL. Verbal memory test scores from before and after STN-DBS surgery were recorded. Linear regression was used to investigate the relationship between score change and structural volumetric data. A significant relationship was demonstrated between change in List Learning test score and thalamic (left, p = 0.02) and hippocampal (left, p = 0.02 and right, p = 0.03) volumes. Duration of symptoms was also associated with List Learning score change (p = 0.02 to 0.03). Verbal memory score changes thus appear to be related to pre-surgical MRI structural volumetric data. The findings of this study provide a basis for further research into the use of pre-surgical MRI to counsel PD patients regarding post-surgical verbal memory changes.

  15. A prospective pilot study measuring muscle volumetric change in amyotrophic lateral sclerosis.

    Science.gov (United States)

    Jenkins, Thomas M; Burness, Christine; Connolly, Daniel J; Rao, D Ganesh; Hoggard, Nigel; Mawson, Susan; McDermott, Christopher J; Wilkinson, Iain D; Shaw, Pamela J

    2013-09-01

Our objective was to investigate the potential of muscle volume, measured with magnetic resonance (MR), as a biomarker to quantify disease progression in patients with amyotrophic lateral sclerosis (ALS). In this longitudinal pilot study, we first sought to determine the stability of volumetric muscle MR measurements in 11 control subjects at two time-points. We then assessed the feasibility of detecting atrophy in four patients with ALS, followed at three-month intervals for 12 months. Muscle power and MR volume were measured in the thenar eminence (TEm), first dorsal interosseous (1DIO), tibialis anterior (TA) and tongue. Changes over time were assessed using linear regression models and t-tests. In controls, no volumetric MR changes were seen (mean volume variation in all muscles 0.1). In patients, between-subject heterogeneity was identified. Trends for volume loss were found in TEm (mean, -26.84%, p = 0.056) and TA (-8.29%, p = 0.077), but not in 1DIO (-18.47%, p = 0.121) or tongue (< 5%, p = 0.367). In conclusion, volumetric muscle MR appears to be a stable measure in controls, and progressive volume loss was demonstrable in individuals with ALS in whom clinical weakness progressed. In this small study, subclinical atrophy was not demonstrable using muscle MR. The clinico-radiological discordance between muscle weakness and MR atrophy could reflect a contribution of upper motor neuron pathology.

  16. A Hierarchical Volumetric Shadow Algorithm for Single Scattering

    OpenAIRE

    Baran, Ilya; Chen, Jiawen; Ragan-Kelley, Jonathan Millar; Durand, Fredo; Lehtinen, Jaakko

    2010-01-01

    Volumetric effects such as beams of light through participating media are an important component in the appearance of the natural world. Many such effects can be faithfully modeled by a single scattering medium. In the presence of shadows, rendering these effects can be prohibitively expensive: current algorithms are based on ray marching, i.e., integrating the illumination scattered towards the camera along each view ray, modulated by visibility to the light source at each sample. Visibility...

  17. ATLAS File and Dataset Metadata Collection and Use

    CERN Document Server

    Albrand, S; The ATLAS collaboration; Lambert, F; Gallas, E J

    2012-01-01

The ATLAS Metadata Interface (“AMI”) was designed as a generic cataloguing system, and as such it has found many uses in the experiment, including software release management, tracking of reconstructed event sizes and control of dataset nomenclature. The primary use of AMI is to provide a catalogue of datasets (file collections) that is searchable using physics criteria. In this paper we discuss the various mechanisms used for filling the AMI dataset and file catalogues. By correlating information from different sources we can derive aggregate information which is important for physics analysis; for example, the total number of events contained in a dataset, and possible reasons for missing events, such as a lost file. Finally, we describe some specialized interfaces which were developed for the Data Preparation and reprocessing coordinators. These interfaces manipulate information from both the dataset domain held in AMI and the run-indexed information held in the ATLAS COMA application (Conditions and ...

  18. Exploring Parallel Algorithms for Volumetric Mass-Spring-Damper Models in CUDA

    DEFF Research Database (Denmark)

    Rasmusson, Allan; Mosegaard, Jesper; Sørensen, Thomas Sangild

    2008-01-01

    ) from Nvidia. This paper investigates multiple implementations of volumetric Mass-Spring-Damper systems in CUDA. The obtained performance is compared to previous implementations utilizing the GPU through the OpenGL graphics API. We find that both performance and optimization strategies differ widely...

  19. Norwegian Hydrological Reference Dataset for Climate Change Studies

    Energy Technology Data Exchange (ETDEWEB)

    Magnussen, Inger Helene; Killingland, Magnus; Spilde, Dag

    2012-07-01

    Based on the Norwegian hydrological measurement network, NVE has selected a Hydrological Reference Dataset for studies of hydrological change. The dataset meets international standards with high data quality. It is suitable for monitoring and studying the effects of climate change on the hydrosphere and cryosphere in Norway. The dataset includes streamflow, groundwater, snow, glacier mass balance and length change, lake ice and water temperature in rivers and lakes.(Author)

  20. Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach

    Science.gov (United States)

    Danyali, Habibiollah; Mertins, Alfred

    2011-01-01

In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data are grouped into groups of slices (GOS), and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes but also facilitates more efficient random access to certain segments of slices. To achieve higher compression efficiency, the algorithm encodes only the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image archiving and transmission applications. PMID:22606653

  1. Dose-volumetric parameters for predicting hypothyroidism after radiotherapy for head and neck cancer

    International Nuclear Information System (INIS)

    Kim, Mi Young; Yu, Tosol; Wu, Hong-Gyun

    2014-01-01

To investigate predictors affecting the development of hypothyroidism after radiotherapy for head and neck cancer, focusing on radiation dose-volumetric parameters, and to determine the appropriate dose-volumetric threshold for radiation-induced hypothyroidism. A total of 114 patients with head and neck cancer whose radiotherapy fields included the thyroid gland were analysed. The purpose of the radiotherapy was either definitive (n=81) or post-operative (n=33). Thyroid function was monitored before starting radiotherapy and at 1 month, 6 months, 1 year and 2 years after its completion. A diagnosis of hypothyroidism was based on a thyroid stimulating hormone value greater than the upper limit of the laboratory range, regardless of symptoms. Dose-volumetric parameters were analysed in all patients. Median follow-up duration was 25 months (range 6-38). Forty-six percent of the patients were diagnosed with hypothyroidism after a median time of 8 months (range 1-24). There were no significant differences in the distribution of age, gender, surgery, radiotherapy technique or chemotherapy between the euthyroid and hypothyroid groups. In univariate analysis, the mean dose and V35-V50 were significantly associated with hypothyroidism. V45 was the only variable that independently contributed to the prediction of hypothyroidism in multivariate analysis, and a V45 of 50% was the threshold value. If V45 was <50%, the cumulative incidence of hypothyroidism at 1 year was 22.8%, whereas the incidence was 56.1% if V45 was ≥50% (P=0.034). V45 may predict the risk of developing hypothyroidism after radiotherapy for head and neck cancer, and a V45 of 50% can be a useful dose-volumetric threshold for radiation-induced hypothyroidism. (author)
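The dose-volumetric parameter Vx used in this study is simply the percentage of the structure's volume receiving at least x Gy. A minimal sketch (hypothetical voxel doses; equal-volume voxels assumed):

```python
def v_dose(voxel_doses_gy, threshold_gy=45.0):
    """Vx: percentage of the structure's volume receiving at least `threshold_gy`
    (assumes every voxel represents the same volume)."""
    hit = sum(1 for d in voxel_doses_gy if d >= threshold_gy)
    return 100.0 * hit / len(voxel_doses_gy)

# hypothetical per-voxel thyroid doses in Gy
thyroid = [30, 40, 44, 46, 50, 55, 60, 20]
v45 = v_dose(thyroid, 45.0)
risk_group = "high risk" if v45 >= 50.0 else "lower risk"  # 50% threshold from the study
```

In practice the voxel doses would come from the planning system's dose grid restricted to the thyroid contour; the 50% cutoff is the threshold reported above.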

  2. Automated fibroglandular tissue segmentation and volumetric density estimation in breast MRI using an atlas-aided fuzzy C-means method

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Shandong; Weinstein, Susan P.; Conant, Emily F.; Kontos, Despina, E-mail: despina.kontos@uphs.upenn.edu [Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)

    2013-12-15

Purpose: Breast magnetic resonance imaging (MRI) plays an important role in the clinical management of breast cancer. Studies suggest that the relative amount of fibroglandular (i.e., dense) tissue in the breast as quantified in MR images can be predictive of the risk for developing breast cancer, especially for high-risk women. Automated segmentation of the fibroglandular tissue and volumetric density estimation in breast MRI could therefore be useful for breast cancer risk assessment. Methods: In this work the authors develop and validate a fully automated segmentation algorithm, namely, an atlas-aided fuzzy C-means (FCM-Atlas) method, to estimate the volumetric amount of fibroglandular tissue in breast MRI. The FCM-Atlas is a 2D segmentation method working on a slice-by-slice basis. FCM clustering is first applied to the intensity space of each 2D MR slice to produce an initial voxelwise likelihood map of fibroglandular tissue. Then a prior learned fibroglandular tissue likelihood atlas is incorporated to refine the initial FCM likelihood map to achieve enhanced segmentation, from which the absolute volume of the fibroglandular tissue (|FGT|) and the relative amount (i.e., percentage) of the |FGT| relative to the whole breast volume (FGT%) are computed. The authors' method is evaluated by a representative dataset of 60 3D bilateral breast MRI scans (120 breasts) that span the full breast density range of the American College of Radiology Breast Imaging Reporting and Data System. The automated segmentation is compared to manual segmentation obtained by two experienced breast imaging radiologists. Segmentation performance is assessed by linear regression, Pearson's correlation coefficients, Student's paired t-test, and Dice's similarity coefficients (DSC). Results: The inter-reader correlation is 0.97 for FGT% and 0.95 for |FGT|. When compared to the average of the two readers’ manual segmentation, the proposed FCM-Atlas method achieves a

  3. Automated fibroglandular tissue segmentation and volumetric density estimation in breast MRI using an atlas-aided fuzzy C-means method

    International Nuclear Information System (INIS)

    Wu, Shandong; Weinstein, Susan P.; Conant, Emily F.; Kontos, Despina

    2013-01-01

Purpose: Breast magnetic resonance imaging (MRI) plays an important role in the clinical management of breast cancer. Studies suggest that the relative amount of fibroglandular (i.e., dense) tissue in the breast as quantified in MR images can be predictive of the risk for developing breast cancer, especially for high-risk women. Automated segmentation of the fibroglandular tissue and volumetric density estimation in breast MRI could therefore be useful for breast cancer risk assessment. Methods: In this work the authors develop and validate a fully automated segmentation algorithm, namely, an atlas-aided fuzzy C-means (FCM-Atlas) method, to estimate the volumetric amount of fibroglandular tissue in breast MRI. The FCM-Atlas is a 2D segmentation method working on a slice-by-slice basis. FCM clustering is first applied to the intensity space of each 2D MR slice to produce an initial voxelwise likelihood map of fibroglandular tissue. Then a prior learned fibroglandular tissue likelihood atlas is incorporated to refine the initial FCM likelihood map to achieve enhanced segmentation, from which the absolute volume of the fibroglandular tissue (|FGT|) and the relative amount (i.e., percentage) of the |FGT| relative to the whole breast volume (FGT%) are computed. The authors' method is evaluated by a representative dataset of 60 3D bilateral breast MRI scans (120 breasts) that span the full breast density range of the American College of Radiology Breast Imaging Reporting and Data System. The automated segmentation is compared to manual segmentation obtained by two experienced breast imaging radiologists. Segmentation performance is assessed by linear regression, Pearson's correlation coefficients, Student's paired t-test, and Dice's similarity coefficients (DSC). Results: The inter-reader correlation is 0.97 for FGT% and 0.95 for |FGT|. When compared to the average of the two readers’ manual segmentation, the proposed FCM-Atlas method achieves a correlation of r = 0
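The FCM clustering step applied to the intensity space of each slice can be illustrated with a minimal 1-D fuzzy C-means sketch (fuzzifier m = 2, pure stdlib; the paper's atlas-aided refinement is not included):

```python
def fcm_1d(xs, k=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy C-means on voxel intensities.
    Returns cluster centres and the membership matrix u[n][i]."""
    lo, hi = min(xs), max(xs)
    centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    u = []
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in xs:
            d = [abs(x - c) or 1e-12 for c in centers]  # avoid division by zero
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(k))
                      for i in range(k)])
        # centre update: membership-weighted mean with weights u^m
        centers = [sum(u[n][i] ** m * xs[n] for n in range(len(xs))) /
                   sum(u[n][i] ** m for n in range(len(xs))) for i in range(k)]
    return centers, u

intensities = [10, 11, 12, 60, 61, 62]  # two clearly separated intensity clusters
centers, u = fcm_1d(intensities)
```

The membership values for the brighter cluster play the role of the initial fibroglandular-tissue likelihood map that the atlas prior then refines.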

  4. The Harvard organic photovoltaic dataset

    Science.gov (United States)

    Lopez, Steven A.; Pyzer-Knapp, Edward O.; Simm, Gregor N.; Lutzow, Trevor; Li, Kewei; Seress, Laszlo R.; Hachmann, Johannes; Aspuru-Guzik, Alán

    2016-01-01

The Harvard Organic Photovoltaic Dataset (HOPV15) presented in this work is a collation of experimental photovoltaic data from the literature and corresponding quantum-chemical calculations performed over a range of conformers, each using a variety of density functionals and basis sets. It is anticipated that this dataset will be of use both in relating electronic structure calculations to experimental observations through the generation of calibration schemes, and in the creation of new semi-empirical methods and the benchmarking of current and future model chemistries for organic electronic applications. PMID:27676312

  5. Thermodynamic and volumetric databases and software for magnesium alloys

    Science.gov (United States)

    Kang, Youn-Bae; Aliravci, Celil; Spencer, Philip J.; Eriksson, Gunnar; Fuerst, Carlton D.; Chartrand, Patrice; Pelton, Arthur D.

    2009-05-01

    Extensive databases for the thermodynamic and volumetric properties of magnesium alloys have been prepared by critical evaluation, modeling, and optimization of available data. Software has been developed to access the databases to calculate equilibrium phase diagrams, heat effects, etc., and to follow the course of equilibrium or Scheil-Gulliver cooling, calculating not only the amounts of the individual phases, but also of the microstructural constituents.

  6. Synthetic and Empirical Capsicum Annuum Image Dataset

    NARCIS (Netherlands)

    Barth, R.

    2016-01-01

This dataset consists of per-pixel annotated synthetic (10,500) and empirical (50) images of Capsicum annuum, also known as sweet or bell pepper, situated in a commercial greenhouse. Furthermore, the source models used to generate the synthetic images are included. The aim of the datasets is to

  7. Streaming Model Based Volume Ray Casting Implementation for Cell Broadband Engine

    Directory of Open Access Journals (Sweden)

    Jusub Kim

    2009-01-01

Full Text Available Interactive high-quality volume rendering is becoming increasingly important as the amount of complex volumetric data steadily grows. While a number of volume rendering techniques have been widely used, ray casting has been recognized as an effective approach for generating high-quality visualization. However, for most users, the use of ray casting has been limited to very small datasets because of its high demands on computational power and memory bandwidth. The recent introduction of the Cell Broadband Engine (Cell B.E.) processor, which consists of 9 heterogeneous cores designed to handle extremely demanding computations with large streams of data, provides an opportunity to put ray casting into practical use. In this paper, we introduce an efficient parallel implementation of volume ray casting on the Cell B.E. The implementation is designed to take full advantage of the computational power and memory bandwidth of the Cell B.E. through an intricate orchestration of the ray casting computation on the available heterogeneous resources. Specifically, we introduce streaming-model-based schemes and techniques to efficiently implement acceleration techniques for ray casting on the Cell B.E. In addition to ensuring effective SIMD utilization, our method provides two key benefits: there is no cost for empty space skipping, and there is no memory bottleneck in moving volumetric data for processing. Our experimental results show that we can interactively render practical datasets on a single Cell B.E. processor.
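The core of any ray caster is the per-ray compositing loop. The sketch below shows front-to-back alpha compositing with early ray termination, one of the standard acceleration techniques in this setting, written in plain Python rather than the paper's Cell B.E. code:

```python
def composite_ray(samples, alpha_threshold=0.99):
    """Front-to-back compositing of (colour, opacity) samples along one view ray,
    with early ray termination once accumulated opacity is near 1."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # remaining transparency scales each sample
        alpha += (1.0 - alpha) * a
        if alpha >= alpha_threshold:     # early termination: later samples are occluded
            break
    return color, alpha

# a fully opaque white sample hides everything behind it
ray_color, ray_alpha = composite_ray([(0.2, 0.1), (1.0, 1.0), (0.5, 0.5)])
```

Front-to-back order is what makes early termination possible: once accumulated opacity saturates, the remaining samples along the ray can be skipped entirely.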

  8. In situ coating nickel organic complexes on free-standing nickel wire films for volumetric-energy-dense supercapacitors.

    Science.gov (United States)

    Hong, Min; Xu, Shusheng; Yao, Lu; Zhou, Chao; Hu, Nantao; Yang, Zhi; Hu, Jing; Zhang, Liying; Zhou, Zhihua; Wei, Hao; Zhang, Yafei

    2018-07-06

    A self-free-standing core-sheath structured hybrid membrane electrode based on nickel and nickel-based metal-organic complexes (Ni@Ni-OC) was designed and constructed for high-volumetric-performance supercapacitors. The self-standing Ni@Ni-OC film electrode had a high volumetric specific capacity of 1225.5 C cm⁻³ at 0.3 A cm⁻³ and an excellent rate capability. Moreover, when coupled with a graphene-carbon nanotube (G-CNT) film electrode, the as-assembled Ni@Ni-OC//G-CNT hybrid supercapacitor device delivered an extraordinary volumetric capacitance of 85 F cm⁻³ at 0.5 A cm⁻³ and an outstanding energy density of 33.8 at 483 mW cm⁻³. Furthermore, the hybrid supercapacitor showed no capacitance loss after 10 000 cycles at 2 A cm⁻³, indicating its excellent cycle stability. These fascinating performances can be ascribed to its unique core-sheath structure, in which high-capacity nanoporous nickel-based metal-organic complexes (Ni-OC) are coated in situ on highly conductive Ni wires. The impressive results presented here may pave the way to constructing a self-standing membrane electrode for applications in high-volumetric-performance energy storage.

  9. A prototype table-top inverse-geometry volumetric CT system

    International Nuclear Information System (INIS)

    Schmidt, Taly Gilat; Star-Lack, Josh; Bennett, N. Robert; Mazin, Samuel R.; Solomon, Edward G.; Fahrig, Rebecca; Pelc, Norbert J.

    2006-01-01

    A table-top volumetric CT system has been implemented that is able to image a 5-cm-thick volume in one circular scan with no cone-beam artifacts. The prototype inverse-geometry CT (IGCT) scanner consists of a large-area, scanned x-ray source and a detector array that is smaller in the transverse direction. The IGCT geometry provides sufficient volumetric sampling because the source and detector have the same axial (slice-direction) extent. This paper describes the implementation of the table-top IGCT scanner, which is based on the NexRay Scanning-Beam Digital X-ray system (NexRay, Inc., Los Gatos, CA), and an investigation of the system performance. The alignment and flat-field calibration procedures are described, along with a summary of the reconstruction algorithm. The resolution and noise performance of the prototype IGCT system are studied through experiments and further supported by analytical predictions and simulations. To study the presence of cone-beam artifacts, a "Defrise" phantom was scanned on both the prototype IGCT scanner and a micro-CT system with a ±5° cone angle for a 4.5-cm volume thickness. Images of inner ear specimens are presented and compared to those from clinical CT systems. Results showed that the prototype IGCT system has a 0.25-mm isotropic resolution and that noise comparable to that from a clinical scanner with equivalent spatial resolution is achievable. The measured MTF and noise values agreed reasonably well with theoretical predictions and computer simulations. The IGCT system was able to faithfully reconstruct the laminated pattern of the Defrise phantom, while the micro-CT system suffered severe cone-beam artifacts for the same object. The inner ear acquisition verified that the IGCT system can image a complex anatomical object, and the resulting images exhibited more high-resolution details than the clinical CT acquisition. Overall, the successful implementation of the prototype system supports the IGCT concept for

  10. Cosmological models constructed by van der Waals fluid approximation and volumetric expansion

    Science.gov (United States)

    Samanta, G. C.; Myrzakulov, R.

    The universe is modeled with the van der Waals fluid approximation, where the van der Waals equation of state contains a single parameter ωv. Analytical solutions to Einstein’s field equations are obtained by assuming that the mean scale factor of the metric follows volumetric exponential and power-law expansions. The model describes a rapid expansion in which the acceleration grows exponentially and the van der Waals fluid behaves like an inflationary fluid in an initial epoch of the universe. The model also shows that, as time evolves, the acceleration remains positive but decreases to zero, and the van der Waals fluid approximation behaves like the present accelerated phase of the universe. Finally, it is observed that the model contains a type-III future singularity for the volumetric power-law expansion.

  11. Volumetric three-dimensional display system with rasterization hardware

    Science.gov (United States)

    Favalora, Gregg E.; Dorval, Rick K.; Hall, Deirdre M.; Giovinco, Michael; Napoli, Joshua

    2001-06-01

    An 8-color multiplanar volumetric display is being developed by Actuality Systems, Inc. It will be capable of utilizing an image volume greater than 90 million voxels, which we believe is the greatest utilizable voxel set of any volumetric display constructed to date. The display is designed to be used for molecular visualization, mechanical CAD, e-commerce, entertainment, and medical imaging. As such, it contains a new graphics processing architecture, novel high-performance line-drawing algorithms, and an API similar to a current standard. Three-dimensional imagery is created by projecting a series of 2-D bitmaps ('image slices') onto a diffuse screen that rotates at 600 rpm. Persistence of vision fuses the slices into a volume-filling 3-D image. A modified three-panel Texas Instruments projector provides slices at approximately 4 kHz, resulting in 8-color 3-D imagery comprised of roughly 200 radially-disposed slices which are updated at 20 Hz. Each slice has a resolution of 768 by 768 pixels, subtending 10 inches. An unusual off-axis projection scheme incorporating tilted rotating optics is used to maintain good focus across the projection screen. The display electronics includes a custom rasterization architecture which converts the user's 3-D geometry data into image slices, as well as 6 Gbits of DDR SDRAM graphics memory.

  12. Three-dimensional volumetric assessment of response to treatment

    International Nuclear Information System (INIS)

    Willett, C.G.; Stracher, M.A.; Linggood, R.M.; Leong, J.C.; Skates, S.J.; Miketic, L.M.; Kushner, D.C.; Jacobson, J.O.

    1988-01-01

    From 1981 to 1986, 12 patients with Stage I and II diffuse large cell lymphoma of the mediastinum were treated with 4 or more cycles of multiagent chemotherapy and for nine patients this was followed by mediastinal irradiation. The response to treatment was assessed by three-dimensional volumetric analysis utilizing thoracic CT scans. The initial mean tumor volume of the five patients relapsing was 540 ml in contrast to an initial mean tumor volume of 360 ml for the seven patients remaining in remission. Of the eight patients in whom mediastinal lymphoma volumes could be assessed 1-2 months after chemotherapy prior to mediastinal irradiation, the three patients who have relapsed had volumes of 292, 92 and 50 ml (mean volume 145 ml) in contrast to five patients who have remained in remission with residual volume abnormalities of 4-87 ml (mean volume 32 ml). Four patients in prolonged remission with CT scans taken one year after treatment have been noted to have mediastinal tumor volumes of 0-28 ml with a mean value of 10 ml. This volumetric technique to assess the extent of mediastinal large cell lymphoma from thoracic CT scans appears to be a useful method to quantitate the amount of disease at presentation as well as objectively monitor response to treatment. 13 refs.; 2 figs.; 1 table

  13. A public dataset of overground and treadmill walking kinematics and kinetics in healthy individuals

    Directory of Open Access Journals (Sweden)

    Claudiane A. Fukuchi

    2018-04-01

    Full Text Available In a typical clinical gait analysis, the gait patterns of pathological individuals are commonly compared with the typically faster, comfortable pace of healthy subjects. However, due to potential bias related to gait speed, this comparison may not be valid. Publicly available gait datasets have failed to address this issue. Therefore, the goal of this study was to present a publicly available dataset of 42 healthy volunteers (24 young adults and 18 older adults) who walked both overground and on a treadmill at a range of gait speeds. Their lower-extremity and pelvis kinematics were measured using a three-dimensional (3D) motion-capture system. The external forces during both overground and treadmill walking were collected using force plates and an instrumented treadmill, respectively. The results include both raw and processed kinematic and kinetic data in different file formats: c3d and ASCII files. In addition, a metadata file is provided that contains demographic and anthropometric data and data related to each file in the dataset. All data are available at Figshare (DOI: 10.6084/m9.figshare.5722711). We foresee several applications of this public dataset, including examining the influences of speed, age, and environment (overground vs. treadmill) on gait biomechanics, meeting educational needs, and, with the inclusion of additional participants, serving as a normative dataset.

  14. Development of a Detailed Volumetric Finite Element Model of the Spine to Simulate Surgical Correction of Spinal Deformities

    Directory of Open Access Journals (Sweden)

    Mark Driscoll

    2013-01-01

    Full Text Available A large spectrum of medical devices exists that aims to correct deformities associated with spinal disorders. The development of a detailed volumetric finite element model of the osteoligamentous spine would serve as a valuable tool to assess, compare, and optimize spinal devices. Thus, the purpose of the study was to develop and initiate validation of a detailed osteoligamentous finite element model of the spine with simulated correction from spinal instrumentation. A finite element model of the spine from T1 to L5 was developed using properties and geometry from the published literature and patient data. Spinal instrumentation, consisting of segmental translation of a scoliotic spine, was emulated. Postoperative patient data and relevant published data on intervertebral disc stress, screw/vertebra pullout forces, and spinal profiles were used to evaluate the model's validity. Intervertebral disc and vertebral reaction stresses respected published in vivo, ex vivo, and in silico values. Screw/vertebra reaction forces agreed with accepted pullout threshold values. Cobb angle measurements of spinal deformity following simulated surgical instrumentation corroborated patient data. This computational biomechanical analysis validated a detailed volumetric spine model. Future studies seek to exploit the model to explore the performance of corrective spinal devices.

  15. RAE: The Rainforest Automation Energy Dataset for Smart Grid Meter Data Analysis

    Directory of Open Access Journals (Sweden)

    Stephen Makonin

    2018-02-01

    Full Text Available Datasets are important for researchers to build models and test how well their machine learning algorithms perform. This paper presents the Rainforest Automation Energy (RAE) dataset to help smart grid researchers test their algorithms that make use of smart meter data. This initial release of RAE contains 1 Hz data (mains and sub-meters) from two residential houses. In addition to power data, environmental and sensor data from each house’s thermostat are included. Sub-meter data from one of the houses includes heat pump and rental suite captures, which are of interest to power utilities. We also show an energy breakdown of each house and show (by example) how RAE can be used to test non-intrusive load monitoring (NILM) algorithms.
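
    As an aside to this record, the per-channel energy breakdown it mentions reduces to integrating 1 Hz power samples. The channel names and readings below are invented for illustration and do not reflect RAE's actual file layout:

```python
def energy_breakdown_wh(readings):
    """Integrate 1 Hz power samples (watts) into energy (Wh) per channel."""
    return {name: sum(p) / 3600.0 for name, p in readings.items()}

# Hypothetical 1 Hz readings: one hour of data per sub-meter channel.
readings = {
    "heat_pump":    [1500.0] * 3600,   # steady 1.5 kW
    "rental_suite": [200.0] * 3600,    # steady 200 W
}

breakdown = energy_breakdown_wh(readings)
total_wh = sum(breakdown.values())
shares = {name: wh / total_wh for name, wh in breakdown.items()}
```

    At 1 Hz, each sample spans one second of power draw, so dividing the summed watts by 3600 yields watt-hours.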

  16. Synoptic volumetric variations and flushing of the Tampa Bay estuary

    Science.gov (United States)

    Wilson, M.; Meyers, S. D.; Luther, M. E.

    2014-03-01

    Two types of analyses are used to investigate the synoptic wind-driven flushing of Tampa Bay in response to the El Niño-Southern Oscillation (ENSO) cycle from 1950 to 2007. Hourly sea level elevations from the St. Petersburg tide gauge, and wind speed and direction from three different sites around Tampa Bay, are used for the study. The zonal (u) and meridional (v) wind components are rotated clockwise by 40° to obtain axial and co-axial components according to the layout of the bay. First, we use the subtidal observed water level as a proxy for mean tidal height to estimate the rate of volumetric bay outflow. Second, we use wavelet analysis to bandpass the sea level and wind data in the time-frequency domain to isolate the synoptic sea level and surface wind variance. For both analyses the long-term monthly climatology is removed and we focus on the volumetric and wavelet variance anomalies. The overall correlation between the Oceanic Niño Index and the volumetric analysis is small due to the seasonal dependence of the ENSO response. The mean monthly climatologies of the synoptic wavelet variance of elevation and of the axial winds are in close agreement. During the winter, El Niño (La Niña) increases (decreases) the synoptic variability, but decreases (increases) it during the summer. The difference in winter El Niño/La Niña wavelet variances is about 20% of the climatological value, meaning that ENSO can swing the synoptic flushing of the bay by 0.22 bay volumes per month. These changes in circulation associated with synoptic variability have the potential to impact mixing and transport within the bay.

  17. EEG datasets for motor imagery brain-computer interface.

    Science.gov (United States)

    Cho, Hohyun; Ahn, Minkyu; Ahn, Sangtae; Kwon, Moonyoung; Jun, Sung Chan

    2017-07-01

    Most investigators of brain-computer interface (BCI) research believe that BCI can be achieved through induced neuronal activity from the cortex, but not by evoked neuronal activity. Motor imagery (MI)-based BCI is one of the standard concepts of BCI, in that the user can generate induced activity by imagining motor movements. However, variations in performance over sessions and subjects are too severe to overcome easily; therefore, a basic understanding and investigation of BCI performance variation is necessary to find critical evidence of performance variation. Here we present not only EEG datasets for MI BCI from 52 subjects, but also the results of a psychological and physiological questionnaire, EMG datasets, the locations of 3D EEG electrodes, and EEGs for non-task-related states. We validated our EEG datasets by using the percentage of bad trials, event-related desynchronization/synchronization (ERD/ERS) analysis, and classification analysis. After conventional rejection of bad trials, we showed contralateral ERD and ipsilateral ERS in the somatosensory area, which are well-known patterns of MI. Finally, we showed that 73.08% of datasets (38 subjects) included reasonably discriminative information. Our EEG datasets included the information necessary to determine statistical significance; they consisted of well-discriminated datasets (38 subjects) and less-discriminative datasets. These may provide researchers with opportunities to investigate human factors related to MI BCI performance variation, and may also achieve subject-to-subject transfer by using metadata, including a questionnaire, EEG coordinates, and EEGs for non-task-related states. © The Authors 2017. Published by Oxford University Press.

  18. Daily Megavoltage Computed Tomography in Lung Cancer Radiotherapy: Correlation Between Volumetric Changes and Local Outcome

    International Nuclear Information System (INIS)

    Bral, Samuel; De Ridder, Mark; Duchateau, Michael; Gevaert, Thierry; Engels, Benedikt; Schallier, Denis; Storme, Guy

    2011-01-01

    Purpose: To assess the predictive or comparative value of volumetric changes, measured on daily megavoltage computed tomography during radiotherapy for lung cancer. Patients and Methods: We included 80 patients with locally advanced non-small-cell lung cancer treated with image-guided intensity-modulated radiotherapy. The radiotherapy was combined with concurrent chemotherapy, combined with induction chemotherapy, or given as primary treatment. Patients entered two parallel studies with moderately hypofractionated radiotherapy. Tumor volume contouring was done on the daily acquired images. A regression coefficient was derived from the volumetric changes on megavoltage computed tomography, and its predictive value was validated. Logarithmic or polynomial fits were applied to the intratreatment changes to compare the different treatment schedules radiobiologically. Results: Regardless of the treatment type, a high regression coefficient during radiotherapy predicted for a significantly prolonged cause-specific local progression-free survival (p = 0.05). Significant differences were found in the response during radiotherapy. The significant difference in volumetric treatment response between radiotherapy with concurrent chemotherapy and radiotherapy plus induction chemotherapy translated to a superior long-term local progression-free survival for concurrent chemotherapy (p = 0.03). An enhancement ratio of 1.3 was measured for the used platinum/taxane doublet in comparison with radiotherapy alone. Conclusion: Contouring on daily megavoltage computed tomography images during radiotherapy enabled us to predict the efficacy of a given treatment. The significant differences in volumetric response between treatment strategies make it a possible tool for future schedule comparison.

  19. Trapping volumetric measurement by multidetector CT in chronic obstructive pulmonary disease: Effect of CT threshold

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xiaohua; Yuan, Huishu [Department of Radiology, Peking University Third Hospital, Beijing 100191 (China); Duan, Jianghui [Medical School, Peking University, Beijing 100191 (China); Du, Yipeng; Shen, Ning; He, Bei [Department of Respiration Internal Medicine, Peking University Third Hospital, Beijing 100191 (China)

    2013-08-15

    Purpose: The purpose of this study was to evaluate the effect of various computed tomography (CT) thresholds on trapping volumetric measurements by multidetector CT in chronic obstructive pulmonary disease (COPD). Methods: Twenty-three COPD patients were scanned with a 64-slice CT scanner in both the inspiratory and expiratory phase. CT thresholds of −950 Hu in inspiration and −950 to −890 Hu in expiration were used, after which trapping volumetric measurements were made using computer software. Trapping volume percentage (Vtrap%) under the different CT thresholds in the expiratory phase and below −950 Hu in the inspiratory phase was compared and correlated with lung function. Results: Mean Vtrap% was similar under −930 Hu in the expiratory phase and below −950 Hu in the inspiratory phase, being 13.18 ± 9.66 and 13.95 ± 6.72 (both lungs), respectively; this difference was not significant (P = 0.240). Vtrap% under −950 Hu in the inspiratory phase and below the −950 to −890 Hu threshold in the expiratory phase was moderately negatively correlated with the ratio of forced expiratory volume in one second to forced vital capacity and with forced expiratory volume in one second as a percentage of the predicted value. Conclusions: Trapping volumetric measurement with multidetector CT is a promising method for the quantification of COPD. It is important to know the effect of various CT thresholds on trapping volumetric measurements.
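
    The trapping volume percentage (Vtrap%) measured in this study is, at its core, a thresholded voxel count within the lungs. The sketch below uses synthetic HU values and a hypothetical whole-volume lung mask to illustrate the arithmetic only, not the study's software:

```python
import numpy as np

def vtrap_percent(hu, lung_mask, threshold=-930):
    """Percentage of lung voxels at or below a CT threshold (air trapping)."""
    lung = hu[lung_mask]
    return 100.0 * np.count_nonzero(lung <= threshold) / lung.size

# Synthetic expiratory scan: half the voxels fall below the threshold.
hu = np.full((4, 4, 4), -800.0)
hu[:2] = -950.0
mask = np.ones_like(hu, dtype=bool)
```

    Sliding the threshold between −950 and −890 Hu changes which voxels count as trapped, which is exactly the sensitivity the study quantifies.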

  20. Support for external validity of radiological anatomy tests using volumetric images

    NARCIS (Netherlands)

    Ravesloot, Cécile J.; van der Gijp, Anouk; van der Schaaf, Marieke F.; Huige, Josephine C B M; Vincken, Koen L.; Mol, Christian P.; Bleys, Ronald L A W; ten Cate, Olle T.; van Schaik, Jan P J

    2015-01-01

    Rationale and Objectives: Radiology practice has become increasingly based on volumetric images (VIs), but tests in medical education still mainly involve two-dimensional (2D) images. We created a novel, digital, VI test and hypothesized that scores on this test would better reflect radiological

  2. Robust computational analysis of rRNA hypervariable tag datasets.

    Directory of Open Access Journals (Sweden)

    Maksim Sipos

    Full Text Available Next-generation DNA sequencing is increasingly being utilized to probe microbial communities, such as gastrointestinal microbiomes, where it is important to be able to quantify measures of abundance and diversity. The fragmented nature of the 16S rRNA datasets obtained, coupled with their unprecedented size, has led to the recognition that the results of such analyses are potentially contaminated by a variety of artifacts, both experimental and computational. Here we quantify how multiple alignment and clustering errors contribute to overestimates of abundance and diversity, reflected by incorrect OTU assignment, corrupted phylogenies, inaccurate species diversity estimators, and rank abundance distribution functions. We show that straightforward procedural optimizations, combining preexisting tools, are effective in handling large (10⁵–10⁶) 16S rRNA datasets, and we describe metrics to measure the effectiveness and quality of the estimators obtained. We introduce two metrics to ascertain the quality of clustering of pyrosequenced rRNA data, and show that complete linkage clustering greatly outperforms other widely used methods.
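
    Complete linkage clustering, which this abstract reports to outperform other methods, merges clusters only while the maximum pairwise distance stays within a cutoff. A naive sketch with a made-up distance matrix follows (real OTU pipelines use optimized implementations):

```python
def complete_linkage(dist, cutoff):
    """Agglomerative complete-linkage clustering of items 0..n-1.

    dist: symmetric matrix (list of lists) of pairwise distances.
    Repeatedly merges the closest pair of clusters whose complete-linkage
    distance (the maximum pairwise distance between their members) is at
    or below `cutoff`; returns the resulting clusters.
    """
    clusters = [[i] for i in range(len(dist))]
    while True:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = max(dist[i][j] for i in clusters[a] for j in clusters[b])
                if d <= cutoff and (best is None or d < best[0]):
                    best = (d, a, b)
        if best is None:
            return clusters
        _, a, b = best
        clusters[a] += clusters.pop(b)
```

    With a 3% dissimilarity cutoff (a common OTU definition), two sequences at 2% distance cluster together while a distant third stays separate.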

  3. A Solar Volumetric Receiver: Influence of Absorbing Cells Configuration on Device Thermal Performance

    Science.gov (United States)

    Yilbas, B. S.; Shuja, S. Z.

    2017-01-01

    Thermal performance of a solar volumetric receiver incorporating different cell geometric configurations is investigated. Triangular, hexagonal, and rectangular absorbing cells are incorporated in the analysis. The fluid volume fraction, which is the ratio of the volume of the working fluid over the total volume of the solar volumetric receiver, is introduced to assess the effect of cell size on the heat transfer rates in the receiver. In this case, reducing the fluid volume fraction corresponds to increasing cell size in the receiver. SiC is considered as the cell material, and air is used as the working fluid in the receiver. The Beer–Lambert law is incorporated to account for the solar absorption in the receiver. A finite element method is used to solve the governing equations of flow and heat transfer. It is found that the fluid volume fraction has a significant effect on the flow field in the solar volumetric receiver, which also modifies the thermal field in the working fluid. The triangular absorbing cell gives rise to the best effectiveness of the receiver, followed by the hexagonal and rectangular cells. The second law efficiency of the receiver remains high when hexagonal cells are used. This occurs for a fluid volume fraction ratio of 0.5.
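
    The Beer–Lambert absorption law used in this analysis has a compact closed form, I(x) = I0·exp(−αx). The coefficient and path length below are illustrative values, not the study's parameters:

```python
import math

def absorbed_fraction(alpha, depth):
    """Fraction of incident flux absorbed over a path `depth` in a medium
    with absorption coefficient `alpha` (Beer-Lambert: I = I0*exp(-alpha*x))."""
    return 1.0 - math.exp(-alpha * depth)
```

    A plausibly relevant consequence for cell sizing: longer absorption paths capture exponentially more of the incident flux, so the path length through each cell feeds directly into the receiver's volumetric heating.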

  4. A high-resolution European dataset for hydrologic modeling

    Science.gov (United States)

    Ntegeka, Victor; Salamon, Peter; Gomes, Goncalo; Sint, Hadewij; Lorini, Valerio; Thielen, Jutta

    2013-04-01

    There is an increasing demand for large scale hydrological models, not only for modeling the impact of climate change on water resources but also for disaster risk assessments and flood or drought early warning systems. These large scale models need to be calibrated and verified against large amounts of observations in order to judge their capabilities to predict the future. However, the creation of large scale datasets is challenging, as it requires collection, harmonization, and quality checking of large amounts of observations. For this reason, only a limited number of such datasets exist. In this work, we present a pan-European, high-resolution gridded dataset of meteorological observations (EFAS-Meteo) which was designed with the aim to drive a large scale hydrological model. Similar European and global gridded datasets already exist, such as the HadGHCND (Caesar et al., 2006), the JRC MARS-STAT database (van der Goot and Orlandi, 2003) and the E-OBS gridded dataset (Haylock et al., 2008). However, none of those provide similarly high spatial resolution and/or a complete set of variables to force a hydrologic model. EFAS-Meteo contains daily maps of precipitation, surface temperature (mean, minimum and maximum), wind speed and vapour pressure at a spatial grid resolution of 5 x 5 km for the time period 1 January 1990 - 31 December 2011. It furthermore contains radiation, calculated using a staggered approach depending on the availability of sunshine duration, cloud cover and minimum and maximum temperature, as well as evapotranspiration (potential evapotranspiration, bare soil and open water evapotranspiration). The potential evapotranspiration was calculated using the Penman-Monteith equation with the above-mentioned meteorological variables. The dataset was created as part of the development of the European Flood Awareness System (EFAS) and has been continuously updated throughout the last years.
The dataset variables are used as

  5. SPATIO-TEMPORAL DATA MODEL FOR INTEGRATING EVOLVING NATION-LEVEL DATASETS

    Directory of Open Access Journals (Sweden)

    A. Sorokine

    2017-10-01

    Full Text Available The ability to easily combine data from diverse sources in a single analytical workflow is one of the greatest promises of Big Data technologies. However, such integration is often challenging, as datasets originate from different vendors, governments, and research communities, which results in multiple incompatibilities in data representations, formats, and semantics. Semantic differences are the hardest to handle: different communities often use different attribute definitions and associate the records with different sets of evolving geographic entities. Analysis of global socioeconomic variables across multiple datasets over prolonged time is often complicated by differences in how the boundaries and histories of countries or other geographic entities are represented. Here we propose an event-based data model for depicting and tracking the histories of evolving geographic units (countries, provinces, etc.) and their representations in disparate data. The model addresses the semantic challenge of preserving the identity of geographic entities over time by defining criteria for an entity's existence, a set of events that may affect its existence, and rules for mapping between different representations (datasets). The proposed model is used for maintaining an evolving compound database of global socioeconomic and environmental data harvested from multiple sources. A practical implementation of our model is demonstrated using a PostgreSQL object-relational database with temporal, geospatial, and NoSQL database extensions.

  6. A first dataset toward a standardized community-driven global mapping of the human immunopeptidome

    Directory of Open Access Journals (Sweden)

    Pouya Faridi

    2016-06-01

    Full Text Available We present the first standardized HLA peptidomics dataset generated by the immunopeptidomics community. The dataset is composed of native HLA class I peptides as well as synthetic HLA class II peptides that were acquired in data-dependent acquisition mode using multiple types of mass spectrometers. All laboratories used the spiked-in landmark iRT peptides for retention time normalization and data analysis. The mass spectrometric data were deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier http://www.ebi.ac.uk/pride/archive/projects/PXD001872. The generated data were used to build HLA allele-specific peptide spectral and assay libraries, which were stored in the SWATHAtlas database. Data presented here are described in more detail in the original eLife article entitled ‘An open-source computational and data resource to analyze digital maps of immunopeptidomes’.

  7. ASSISTments Dataset from Multiple Randomized Controlled Experiments

    Science.gov (United States)

    Selent, Douglas; Patikorn, Thanaporn; Heffernan, Neil

    2016-01-01

    In this paper, we present a dataset consisting of data generated from 22 previously and currently running randomized controlled experiments inside the ASSISTments online learning platform. This dataset provides data mining opportunities for researchers to analyze ASSISTments data in a convenient format across multiple experiments at the same time.…

  8. Would the ‘real’ observed dataset stand up? A critical examination of eight observed gridded climate datasets for China

    International Nuclear Information System (INIS)

    Sun, Qiaohong; Miao, Chiyuan; Duan, Qingyun; Kong, Dongxian; Ye, Aizhong; Di, Zhenhua; Gong, Wei

    2014-01-01

    This research compared and evaluated the spatio-temporal similarities and differences of eight widely used gridded datasets. The datasets include daily precipitation over East Asia (EA), the Climatic Research Unit (CRU) product, the Global Precipitation Climatology Centre (GPCC) product, the University of Delaware (UDEL) product, Precipitation Reconstruction over Land (PREC/L), the Asian Precipitation Highly Resolved Observational (APHRO) product, the Institute of Atmospheric Physics (IAP) dataset from the Chinese Academy of Sciences, and the National Meteorological Information Center dataset from the China Meteorological Administration (CN05). The meteorological variables focus on surface air temperature (SAT) or precipitation (PR) in China. All datasets presented general agreement on the whole spatio-temporal scale, but some differences appeared for specific periods and regions. On a temporal scale, EA shows the highest amount of PR, while APHRO shows the lowest. CRU and UDEL show higher SAT than IAP or CN05. On a spatial scale, the most significant differences occur in western China for PR and SAT. For PR, the difference between EA and CRU is the largest. When compared with CN05, CRU shows higher SAT in the central and southern Northwest river drainage basin, UDEL exhibits higher SAT over the Southwest river drainage system, and IAP has lower SAT in the Tibetan Plateau. The differences in annual mean PR and SAT primarily come from summer and winter, respectively. Finally, potential factors impacting agreement among gridded climate datasets are discussed, including raw data sources, quality control (QC) schemes, orographic correction, and interpolation techniques. The implications and challenges of these results for climate research are also briefly addressed. (paper)

  9. Estimating parameters for probabilistic linkage of privacy-preserved datasets.

    Science.gov (United States)

    Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H

    2017-07-10

    Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20%. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher
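
    The Bloom-filter encoding used in this style of privacy-preserved linkage can be sketched minimally: each field value is reduced to character bigrams, the bigrams are hashed into a bit set, and the encodings are compared with a Dice coefficient. The filter size, hash count, and padding below are illustrative choices, not the paper's configuration:

```python
import hashlib

def bigrams(s):
    s = f"_{s.lower()}_"                       # pad to capture word edges
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom(s, m=100, k=4):
    """Hash a string's bigrams into an m-bit Bloom filter (as a bit-index set)."""
    bits = set()
    for g in bigrams(s):
        for i in range(k):
            h = hashlib.sha256(f"{i}:{g}".encode()).hexdigest()
            bits.add(int(h, 16) % m)
    return bits

def dice(b1, b2):
    """Dice similarity between two bit sets, in [0, 1]."""
    if not b1 and not b2:
        return 1.0
    return 2.0 * len(b1 & b2) / (len(b1) + len(b2))
```

    Similar spellings share most bigrams, so their filters overlap heavily even though the raw values are never exchanged; the EM step described in the abstract then estimates match probabilities from such similarity scores.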

  10. Viking Seismometer PDS Archive Dataset

    Science.gov (United States)

    Lorenz, R. D.

    2016-12-01

The Viking Lander 2 seismometer operated successfully for over 500 Sols on the Martian surface, recording at least one likely candidate Marsquake. The Viking mission, in an era when data handling hardware (both on board and on the ground) was limited in capability, predated modern planetary data archiving, and ad-hoc repositories of the data, and the very low-level record at NSSDC, were neither convenient to process nor well-known. In an effort supported by the NASA Mars Data Analysis Program, we have converted the bulk of the Viking dataset (namely the 49,000 and 270,000 records made in High- and Event-modes at 20 and 1 Hz respectively) into a simple ASCII table format. Additionally, since wind-generated lander motion is a major component of the signal, contemporaneous meteorological data are included in summary records to facilitate correlation. These datasets are being archived at the PDS Geosciences Node. In addition to brief instrument and dataset descriptions, the archive includes code snippets in the freely-available language 'R' to demonstrate plotting and analysis. Further, we present examples of lander-generated noise, associated with the sampler arm, instrument dumps and other mechanical operations.

  11. Integration of Neuroimaging and Microarray Datasets  through Mapping and Model-Theoretic Semantic Decomposition of Unstructured Phenotypes

    Directory of Open Access Journals (Sweden)

    Spiro P. Pantazatos

    2009-06-01

Full Text Available An approach towards heterogeneous neuroscience dataset integration is proposed that uses Natural Language Processing (NLP) and a knowledge-based phenotype organizer system (PhenOS) to link ontology-anchored terms to underlying data from each database, and then maps these terms based on a computable model of disease (SNOMED CT®). The approach was implemented using sample datasets from fMRIDC, GEO, The Whole Brain Atlas and Neuronames, and allowed for complex queries such as “List all disorders with a finding site of brain region X, and then find the semantically related references in all participating databases based on the ontological model of the disease or its anatomical and morphological attributes”. Precision of the NLP-derived coding of the unstructured phenotypes in each dataset was 88% (n = 50), and precision of the semantic mapping between these terms across datasets was 98% (n = 100). To our knowledge, this is the first example of the use of both semantic decomposition of disease relationships and hierarchical information found in ontologies to integrate heterogeneous phenotypes across clinical and molecular datasets.

  12. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data

    OpenAIRE

    Fischer, Felix; Selver, M. Alper; Gezer, Sinem; Dicle, O?uz; Hillen, Walter

    2015-01-01

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant addi...

  13. Microfabricated pseudocapacitors using Ni(OH)2 electrodes exhibit remarkable volumetric capacitance and energy density

    KAUST Repository

    Kurra, Narendra

    2014-09-10

    Metal hydroxide based microfabricated pseudocapacitors with impressive volumetric stack capacitance and energy density are demonstrated. A combination of top-down photolithographic process and bottom-up chemical synthesis is employed to fabricate the micro-pseudocapacitors (μ-pseudocapacitors). The resulting Ni(OH)2-based devices show several excellent characteristics including high-rate redox activity up to 500 V s-1 and an areal cell capacitance of 16 mF cm-2 corresponding to a volumetric stack capacitance of 325 F cm-3. This volumetric capacitance is two-fold higher than carbon and metal oxide based μ-supercapacitors with interdigitated electrode architecture. Furthermore, these μ-pseudocapacitors show a maximum energy density of 21 mWh cm-3, which is superior to the Li-based thin film batteries. The heterogeneous growth of Ni(OH)2 over the Ni surface during the chemical bath deposition is found to be the key parameter in the formation of uniform monolithic Ni(OH)2 mesoporous nanosheets with vertical orientation, responsible for the remarkable properties of the fabricated devices. Additionally, functional tandem configurations of the μ-pseudocapacitors are shown to be capable of powering a light-emitting diode.

  14. Evaluation of Fatigue Crack Initiation for Volumetric Flaw in Pressure Tube

    International Nuclear Information System (INIS)

    Choi, Sung Nam; Yoo, Hyun Joo

    2005-01-01

CAN/CSA-N285.4-94 requires the periodic in-service inspection and surveillance of pressure tubes in operating CANDU nuclear power reactors. If the inspection results reveal a flaw exceeding the acceptance criteria of the Code, the flaw must be evaluated to determine whether the pressure tube is acceptable for continued service. Currently, the flaw evaluation methodology and acceptance criteria are specified in CSA-N285.05-2005, 'Technical requirements for in-service evaluation of zirconium alloy pressure tubes in CANDU reactors', which is applicable to zirconium alloy pressure tubes. The evaluation methodology for a crack-like flaw is similar to that of ASME B and PV Code Sec. XI, 'Inservice Inspection of Nuclear Power Plant Components', whereas the evaluation methodology for a blunt volumetric flaw is described in CSA-N285.05-2005. The objective of this paper is to address the fatigue crack initiation evaluation for the blunt volumetric flaw as it applies to the pressure tubes at Wolsong NPP

  15. Volumetric Forest Change Detection Through Vhr Satellite Imagery

    Science.gov (United States)

    Akca, Devrim; Stylianidis, Efstratios; Smagas, Konstantinos; Hofer, Martin; Poli, Daniela; Gruen, Armin; Sanchez Martin, Victor; Altan, Orhan; Walli, Andreas; Jimeno, Elisa; Garcia, Alejandro

    2016-06-01

Quick and economical ways of detecting planimetric and volumetric changes of forest areas are in high demand. A research platform, called FORSAT (A satellite processing platform for high resolution forest assessment), was developed for the extraction of 3D geometric information from VHR (very-high resolution) imagery from satellite optical sensors and automatic change detection. This 3D forest information solution was developed during a Eurostars project. FORSAT includes two main units. The first one is dedicated to the geometric and radiometric processing of satellite optical imagery and 2D/3D information extraction. This includes: image radiometric pre-processing, image and ground point measurement, improvement of geometric sensor orientation, quasi-epipolar image generation for stereo measurements, digital surface model (DSM) extraction by using a precise and robust image matching approach specially designed for VHR satellite imagery, generation of orthoimages, and 3D measurements in single images using mono-plotting and in stereo images as well as triplets. FORSAT supports most of the VHR optical imagery commonly used for civil applications: IKONOS, OrbView-3, SPOT-5 HRS, SPOT-5 HRG, QuickBird, GeoEye-1, WorldView-1/2, Pléiades 1A/1B, SPOT 6/7, and sensors of similar type to be expected in the future. The second unit of FORSAT is dedicated to 3D surface comparison for change detection. It allows users to import digital elevation models (DEMs), align them using an advanced 3D surface matching approach and calculate the 3D differences and volume changes between epochs. To this end our 3D surface matching method LS3D is being used. FORSAT is a single-source and flexible forest information solution with a very competitive price/quality ratio, allowing expert and non-expert remote sensing users to monitor forests in three and four dimensions from VHR optical imagery for many forest information needs. 
The capacity and benefits of FORSAT have been tested in

  16. GRIP: A web-based system for constructing Gold Standard datasets for protein-protein interaction prediction

    Directory of Open Access Journals (Sweden)

    Zheng Huiru

    2009-01-01

Full Text Available Abstract Background Information about protein interaction networks is fundamental to understanding protein function and cellular processes. Interaction patterns among proteins can suggest new drug targets and aid in the design of new therapeutic interventions. Efforts have been made to map interactions on a proteomic-wide scale using both experimental and computational techniques. Reference datasets that contain known interacting proteins (positive cases) and non-interacting proteins (negative cases) are essential to support computational prediction and validation of protein-protein interactions. Information on known interacting and non-interacting proteins is usually stored within databases. Extraction of these data can be both complex and time consuming. Although the automatic construction of reference datasets for classification would be a useful resource for researchers, no public resource currently exists to perform this task. Results GRIP (Gold Reference dataset constructor from Information on Protein complexes) is a web-based system that provides researchers with the functionality to create reference datasets for protein-protein interaction prediction in Saccharomyces cerevisiae. Both positive and negative cases for a reference dataset can be extracted, organised and downloaded by the user. GRIP also provides an upload facility whereby users can submit proteins to determine protein complex membership. A search facility is provided where a user can search for protein complex information in Saccharomyces cerevisiae. Conclusion GRIP was developed to retrieve information on protein complexes, cellular localisation, and physical and genetic interactions in Saccharomyces cerevisiae. Manual construction of reference datasets can be a time-consuming process requiring programming knowledge. GRIP simplifies and speeds up this process by allowing users to automatically construct reference datasets. 
GRIP is free to access at http://rosalind.infj.ulst.ac.uk/GRIP/.

  17. Pattern Analysis On Banking Dataset

    Directory of Open Access Journals (Sweden)

    Amritpal Singh

    2015-06-01

Full Text Available Abstract Everyday refinement and development of technology has led to increased competition between tech companies and to attackers going out of their way to crack systems and break them down. This makes data mining a strategically and security-wise important area for many business organizations, including the banking sector. It allows the analysis of important information in the data warehouse and assists banks in looking for obscure patterns in a group and discovering unknown relationships in the data. Banking systems need to process ample amounts of data on a daily basis related to customer information, credit card details, limit and collateral details, transaction details, risk profiles, anti-money-laundering information and trade finance data. Thousands of decisions based on these data are taken in a bank daily. This paper analyzes a banking dataset in the Weka environment for the detection of interesting patterns, based on applications in customer acquisition, customer retention, management and marketing, and management of risk and fraud detection.

  18. Introduction of a simple-model-based land surface dataset for Europe

    Science.gov (United States)

    Orth, Rene; Seneviratne, Sonia I.

    2015-04-01

    Land surface hydrology can play a crucial role during extreme events such as droughts, floods and even heat waves. We introduce in this study a new hydrological dataset for Europe that consists of soil moisture, runoff and evapotranspiration (ET). It is derived with a simple water balance model (SWBM) forced with precipitation, temperature and net radiation. The SWBM dataset extends over the period 1984-2013 with a daily time step and 0.5° × 0.5° resolution. We employ a novel calibration approach, in which we consider 300 random parameter sets chosen from an observation-based range. Using several independent validation datasets representing soil moisture (or terrestrial water content), ET and streamflow, we identify the best performing parameter set and hence the new dataset. To illustrate its usefulness, the SWBM dataset is compared against several state-of-the-art datasets (ERA-Interim/Land, MERRA-Land, GLDAS-2-Noah, simulations of the Community Land Model Version 4), using all validation datasets as reference. For soil moisture dynamics it outperforms the benchmarks. Therefore the SWBM soil moisture dataset constitutes a reasonable alternative to sparse measurements, little validated model results, or proxy data such as precipitation indices. Also in terms of runoff the SWBM dataset performs well, whereas the evaluation of the SWBM ET dataset is overall satisfactory, but the dynamics are less well captured for this variable. This highlights the limitations of the dataset, as it is based on a simple model that uses uniform parameter values. Hence some processes impacting ET dynamics may not be captured, and quality issues may occur in regions with complex terrain. Even though the SWBM is well calibrated, it cannot replace more sophisticated models; but as their calibration is a complex task the present dataset may serve as a benchmark in future. In addition we investigate the sources of skill of the SWBM dataset and find that the parameter set has a similar

  19. Data Mining for Imbalanced Datasets: An Overview

    Science.gov (United States)

    Chawla, Nitesh V.

    A dataset is imbalanced if the classification categories are not approximately equally represented. Recent years brought increased interest in applying machine learning techniques to difficult "real-world" problems, many of which are characterized by imbalanced data. Additionally the distribution of the testing data may differ from that of the training data, and the true misclassification costs may be unknown at learning time. Predictive accuracy, a popular choice for evaluating performance of a classifier, might not be appropriate when the data is imbalanced and/or the costs of different errors vary markedly. In this Chapter, we discuss some of the sampling techniques used for balancing the datasets, and the performance measures more appropriate for mining imbalanced datasets.
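
The sampling techniques surveyed in the chapter can be illustrated with a minimal random over-/under-sampler. This is a generic sketch, not an implementation of any specific method discussed there (such as SMOTE); the label layout of the rows is a made-up example.

```python
import random

def rebalance(rows, label_idx, mode="oversample", seed=0):
    """Naively rebalance a labelled dataset.
    mode='oversample' duplicates minority-class rows up to the majority count;
    mode='undersample' subsamples the majority down to the minority count."""
    rng = random.Random(seed)
    by_label = {}
    for row in rows:
        by_label.setdefault(row[label_idx], []).append(row)
    sizes = [len(g) for g in by_label.values()]
    target = max(sizes) if mode == "oversample" else min(sizes)
    out = []
    for group in by_label.values():
        if len(group) >= target:
            out.extend(rng.sample(group, target))   # subsample without replacement
        else:
            out.extend(group)                       # keep all minority rows...
            out.extend(rng.choices(group, k=target - len(group)))  # ...then duplicate
    return out

# A 90:10 imbalanced toy dataset: label in column 0.
rows = [("neg", i) for i in range(90)] + [("pos", i) for i in range(10)]
balanced = rebalance(rows, 0, "oversample")
reduced = rebalance(rows, 0, "undersample")
```

As the chapter notes, rebalancing alone does not fix misleading evaluation; accuracy should still be replaced with measures suited to imbalanced data.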

  20. Robust and conductive two-dimensional metal-organic frameworks with exceptionally high volumetric and areal capacitance

    Science.gov (United States)

    Feng, Dawei; Lei, Ting; Lukatskaya, Maria R.; Park, Jihye; Huang, Zhehao; Lee, Minah; Shaw, Leo; Chen, Shucheng; Yakovenko, Andrey A.; Kulkarni, Ambarish; Xiao, Jianping; Fredrickson, Kurt; Tok, Jeffrey B.; Zou, Xiaodong; Cui, Yi; Bao, Zhenan

    2018-01-01

    For miniaturized capacitive energy storage, volumetric and areal capacitances are more important metrics than gravimetric ones because of the constraints imposed by device volume and chip area. Typically used in commercial supercapacitors, porous carbons, although they provide a stable and reliable performance, lack volumetric performance because of their inherently low density and moderate capacitances. Here we report a high-performing electrode based on conductive hexaaminobenzene (HAB)-derived two-dimensional metal-organic frameworks (MOFs). In addition to possessing a high packing density and hierarchical porous structure, these MOFs also exhibit excellent chemical stability in both acidic and basic aqueous solutions, which is in sharp contrast to conventional MOFs. Submillimetre-thick pellets of HAB MOFs showed high volumetric capacitances up to 760 F cm-3 and high areal capacitances over 20 F cm-2. Furthermore, the HAB MOF electrodes exhibited highly reversible redox behaviours and good cycling stability with a capacitance retention of 90% after 12,000 cycles. These promising results demonstrate the potential of using redox-active conductive MOFs in energy-storage applications.

  1. Volumetric and calorimetric properties of aqueous ionene solutions.

    Science.gov (United States)

    Lukšič, Miha; Hribar-Lee, Barbara

    2017-02-01

The volumetric properties (partial and apparent molar volumes) and calorimetric properties (apparent heat capacities) of aqueous cationic polyelectrolyte solutions - ionenes - were studied using an oscillating-tube densitometer and a differential scanning calorimeter. The polyion's charge density and the counterion properties were treated as variables. Special attention was paid to evaluating the contributions of electrostatic and hydrophobic effects to the properties studied. The contribution of the CH2 group of the polyion's backbone to molar volumes and heat capacities was estimated. A synergistic effect between polyion and counterions was found.

  2. Spatio-volumetric hazard estimation in the Auckland volcanic field

    Science.gov (United States)

    Bebbington, Mark S.

    2015-05-01

    The idea of a volcanic field `boundary' is prevalent in the literature, but ill-defined at best. We use the elliptically constrained vents in the Auckland Volcanic Field to examine how spatial intensity models can be tested to assess whether they are consistent with such features. A means of modifying the anisotropic Gaussian kernel density estimate to reflect the existence of a `hard' boundary is then suggested, and the result shown to reproduce the observed elliptical distribution. A new idea, that of a spatio-volumetric model, is introduced as being more relevant to hazard in a monogenetic volcanic field than the spatiotemporal hazard model due to the low temporal rates in volcanic fields. Significant dependencies between the locations and erupted volumes of the observed centres are deduced, and expressed in the form of a spatially-varying probability density. In the future, larger volumes are to be expected in the `gaps' between existing centres, with the location of the greatest forecast volume lying in the shipping channel between Rangitoto and Castor Bay. The results argue for tectonic control over location and magmatic control over erupted volume. The spatio-volumetric model is consistent with the hypothesis of a flat elliptical area in the mantle where tensional stresses, related to the local tectonics and geology, allow decompressional melting.
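
One standard way to make a kernel density estimate respect a 'hard' boundary is to reflect each kernel at the edges so that no probability mass leaks outside the field. The 1D, isotropic sketch below is only an illustrative analogue of the anisotropic 2D estimator discussed above; the interval, bandwidth and sample vent locations are made up.

```python
import math

def kde_reflected(x, samples, bandwidth, lo, hi):
    """1D Gaussian kernel density estimate with a 'hard' boundary on [lo, hi]:
    each kernel is mirrored about both edges, so the density integrates to ~1
    inside the interval instead of leaking mass across the boundary."""
    if not (lo <= x <= hi):
        return 0.0
    def kern(u):
        return math.exp(-0.5 * (u / bandwidth) ** 2) / (bandwidth * math.sqrt(2 * math.pi))
    total = 0.0
    for s in samples:
        # original kernel plus its mirror images about each boundary
        total += kern(x - s) + kern(x - (2 * lo - s)) + kern(x - (2 * hi - s))
    return total / len(samples)

# Hypothetical vent positions on a unit-length field.
samples = [0.05, 0.2, 0.5, 0.8, 0.95]
```

Without the reflection terms, kernels centred near 0.05 and 0.95 would place a large fraction of their mass outside the field, underestimating hazard near the boundary.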

  3. Single-chip CMUT-on-CMOS front-end system for real-time volumetric IVUS and ICE imaging.

    Science.gov (United States)

    Gurun, Gokce; Tekes, Coskun; Zahorian, Jaime; Xu, Toby; Satir, Sarp; Karaman, Mustafa; Hasler, Jennifer; Degertekin, F Levent

    2014-02-01

Intravascular ultrasound (IVUS) and intracardiac echography (ICE) catheters with real-time volumetric ultrasound imaging capability can provide unique benefits to many interventional procedures used in the diagnosis and treatment of coronary and structural heart diseases. Integration of capacitive micromachined ultrasonic transducer (CMUT) arrays with front-end electronics in single-chip configuration allows for implementation of such catheter probes with reduced interconnect complexity, miniaturization, and high mechanical flexibility. We implemented a single-chip forward-looking (FL) ultrasound imaging system by fabricating a 1.4-mm-diameter dual-ring CMUT array using CMUT-on-CMOS technology on a front-end IC implemented in 0.35-μm CMOS process. The dual-ring array has 56 transmit elements and 48 receive elements on two separate concentric annular rings. The IC incorporates a 25-V pulser for each transmitter and a low-noise capacitive transimpedance amplifier (TIA) for each receiver, along with digital control and smart power management. The final shape of the silicon chip is a 1.5-mm-diameter donut with a 430-μm center hole for a guide wire. The overall front-end system requires only 13 external connections and provides 4 parallel RF outputs while consuming an average power of 20 mW. We measured RF A-scans from the integrated single-chip array which show full functionality at 20.1 MHz with 43% fractional bandwidth. We also tested and demonstrated the image quality of the system on a wire phantom and an ex vivo chicken heart sample. The measured axial and lateral point resolutions are 92 μm and 251 μm, respectively. We successfully acquired volumetric imaging data from the ex vivo chicken heart at 60 frames per second without any signal averaging. These demonstrative results indicate that single-chip CMUT-on-CMOS systems have the potential to produce real-time volumetric images with image quality and speed suitable for catheter-based clinical applications.

  4. FTSPlot: fast time series visualization for large datasets.

    Directory of Open Access Journals (Sweden)

    Michael Riss

Full Text Available The analysis of electrophysiological recordings often involves visual inspection of time series data to locate specific experiment epochs, mask artifacts, and verify the results of signal processing steps, such as filtering or spike detection. Long-term experiments with continuous data acquisition generate large amounts of data. Rapid browsing through these massive datasets poses a challenge to conventional data plotting software because the plotting time increases proportionately to the increase in the volume of data. This paper presents FTSPlot, which is a visualization concept for large-scale time series datasets using techniques from the field of high-performance computer graphics, such as hierarchic level of detail and out-of-core data handling. In a preprocessing step, time series data, event, and interval annotations are converted into an optimized data format, which then permits fast, interactive visualization. The preprocessing step has a computational complexity of O(n × log(n)); the visualization itself can be done with a complexity of O(1) and is therefore independent of the amount of data. A demonstration prototype has been implemented and benchmarks show that the technology is capable of displaying large amounts of time series data, event, and interval annotations lag-free with < 20 ms delay. The current 64-bit implementation theoretically supports datasets with up to 2^64 bytes; on the x86_64 architecture currently up to 2^48 bytes are supported, and benchmarks have been conducted with 2^40 bytes/1 TiB or 1.3 × 10^11 double precision samples. The presented software is freely available and can be included as a Qt GUI component in future software projects, providing a standard visualization method for long-term electrophysiological experiments.
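
The hierarchic level-of-detail idea behind this kind of browsing can be sketched as a pyramid of per-block (min, max) pairs, so a zoomed-out view reads exponentially fewer points than the raw series. This is a generic sketch of the technique, not FTSPlot's actual data format or code; the decimation factor is an arbitrary choice.

```python
def build_lod(samples, factor=4):
    """Build a level-of-detail pyramid over a time series: level 0 holds the
    raw samples as (min, max) pairs, and each coarser level merges 'factor'
    blocks of the level below into one (min, max) pair."""
    levels = [[(v, v) for v in samples]]
    while len(levels[-1]) > 1:
        prev, coarser = levels[-1], []
        for i in range(0, len(prev), factor):
            block = prev[i:i + factor]
            coarser.append((min(b[0] for b in block), max(b[1] for b in block)))
        levels.append(coarser)
    return levels

# A 16-sample toy series yields three levels: 16, 4, and 1 blocks.
levels = build_lod(list(range(16)), factor=4)
```

A viewer then picks the coarsest level whose block count still exceeds the pixel width of the plot, which is what makes the per-frame drawing cost independent of the dataset size.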

  5. Designing the colorectal cancer core dataset in Iran

    Directory of Open Access Journals (Sweden)

    Sara Dorri

    2017-01-01

Full Text Available Background: There is no need to explain the importance of collecting, recording and analyzing disease information in any health organization. In this regard, systematic design of standard data sets can help to record uniform and consistent information and can create interoperability between health care systems. The main purpose of this study was to design a core dataset to record colorectal cancer information in Iran. Methods: For the design of the colorectal cancer core data set, a combination of literature review and expert consensus was used. In the first phase, a draft of the data set was designed based on a colorectal cancer literature review and comparative studies. In the second phase, this data set was evaluated by experts from different disciplines such as medical informatics, oncology and surgery, and their comments and opinions were collected. In the third phase, the refined data set was evaluated again by experts and the final data set was proposed. Results: In the first phase, based on the literature review, a draft set of 85 data elements was designed. In the second phase this data set was evaluated by experts and supplementary information was offered by professionals in subgroups, especially in the treatment part; at this point the data set had grown to 93 elements. In the third phase, evaluation was conducted by experts and the data set was finalized in five main parts: demographic information, diagnostic information, treatment information, clinical status assessment information, and clinical trial information. Conclusion: In this study a comprehensive core data set for colorectal cancer was designed. This data set can be useful in collecting colorectal cancer information and facilitating the exchange of health information. Designing such data sets for similar diseases can help providers collect standard data from patients and can accelerate retrieval from storage systems.

  6. A hybrid organic-inorganic perovskite dataset

    Science.gov (United States)

    Kim, Chiho; Huan, Tran Doan; Krishnan, Sridevi; Ramprasad, Rampi

    2017-05-01

    Hybrid organic-inorganic perovskites (HOIPs) have been attracting a great deal of attention due to their versatility of electronic properties and fabrication methods. We prepare a dataset of 1,346 HOIPs, which features 16 organic cations, 3 group-IV cations and 4 halide anions. Using a combination of an atomic structure search method and density functional theory calculations, the optimized structures, the bandgap, the dielectric constant, and the relative energies of the HOIPs are uniformly prepared and validated by comparing with relevant experimental and/or theoretical data. We make the dataset available at Dryad Digital Repository, NoMaD Repository, and Khazana Repository (http://khazana.uconn.edu/), hoping that it could be useful for future data-mining efforts that can explore possible structure-property relationships and phenomenological models. Progressive extension of the dataset is expected as new organic cations become appropriate within the HOIP framework, and as additional properties are calculated for the new compounds found.

  7. “Controlled, cross-species dataset for exploring biases in genome annotation and modification profiles”

    Directory of Open Access Journals (Sweden)

    Alison McAfee

    2015-12-01

Full Text Available Since the sequencing of the honey bee genome, proteomics by mass spectrometry has become increasingly popular for biological analyses of this insect; but we have observed that the number of honey bee protein identifications is consistently low compared to other organisms [1]. In this dataset, we use nanoelectrospray ionization-coupled liquid chromatography–tandem mass spectrometry (nLC–MS/MS) to systematically investigate the root cause of low honey bee proteome coverage. To this end, we present here data from three key experiments: a controlled, cross-species analysis of samples from Apis mellifera, Drosophila melanogaster, Caenorhabditis elegans, Saccharomyces cerevisiae, Mus musculus and Homo sapiens; a proteomic analysis of an individual honey bee whose genome was also sequenced; and a cross-tissue honey bee proteome comparison. The cross-species dataset was interrogated to determine relative proteome coverages between species, and the other two datasets were used to search for polymorphic sequences and to compare protein cleavage profiles, respectively.

  8. TH-EF-BRA-05: A Method of Near Real-Time 4D MRI Using Volumetric Dynamic Keyhole (VDK) in the Presence of Respiratory Motion for MR-Guided Radiotherapy

    International Nuclear Information System (INIS)

    Lewis, B; Kim, S; Kim, T

    2016-01-01

Purpose: To develop a novel method that enables 4D MR imaging in near real-time for continuous monitoring of tumor motion in MR-guided radiotherapy. Methods: This method is based on the idea of expanding dynamic keyhole imaging to full volumetric acquisition. In the VDK approach introduced in this study, a library of peripheral volumetric k-space data is generated in advance for a given number of phases (5 and 10 in this study). For 4D MRI at any given time, only volumetric central k-space data are acquired in real-time and combined with the pre-acquired peripheral volumetric k-space data in the library corresponding to the respiratory phase (or amplitude). The combined k-space data are Fourier-transformed into MR images. For the simulation study, the MRXCAT program was used to generate synthetic MR images of the thorax with the desired respiratory motion, contrast levels, and spatial and temporal resolution. 20 phases of volumetric MR images, with 200 ms temporal resolution in a 4 s respiratory period, were generated using a balanced steady-state free precession MR pulse sequence. The total acquisition time was 21.5 s/phase with a voxel size of 3×3×5 mm³ and an image matrix of 128×128×56. Image similarity was evaluated with difference maps between the reference and reconstructed images. The VDK, conventional keyhole, and zero-filling methods were compared in this simulation study. Results: Using 80% of the ky data and 70% of the kz data from the library resulted in a 12.20% average intensity difference from the reference, compared with 21.60% and 28.45% threshold pixel differences for conventional keyhole and zero filling, respectively. The imaging time would be reduced from 21.5 s to 1.3 s per volume using the VDK method. Conclusion: Near real-time 4D MR imaging can be achieved using the volumetric dynamic keyhole method, opening the possibility of utilizing 4D MRI during MR-guided radiotherapy.
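
The keyhole principle — freshly acquired central k-space combined with phase-matched peripheral k-space from a pre-acquired library — can be illustrated in 1D. The toy signals, sizes and naive DFT below are assumptions for demonstration only; they are not the VDK implementation or the MRXCAT simulation.

```python
import cmath

def dft(x):
    """Naive forward DFT, O(N^2); adequate for a 32-sample toy signal."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def keyhole_reconstruct(central_kspace, library_kspace, keep):
    """Keep the central (low-frequency) band of freshly acquired k-space
    (DC at index 0, band wraps around) and fill the periphery from the
    phase-matched library entry, then transform back to image space."""
    N = len(library_kspace)
    combined = list(library_kspace)
    for k in list(range(keep + 1)) + list(range(N - keep, N)):
        combined[k] = central_kspace[k]
    return [v.real for v in idft(combined)]

# Toy 1D "frames": the current frame and a library frame from the matching
# respiratory phase that differs only in amplitude.
N = 32
frame = [1.0 if 8 <= n < 16 else 0.0 for n in range(N)]
library = [0.9 if 8 <= n < 16 else 0.0 for n in range(N)]
k_frame, k_library = dft(frame), dft(library)

keyhole = keyhole_reconstruct(k_frame, k_library, keep=6)
zero_fill = keyhole_reconstruct(k_frame, [0j] * N, keep=6)
```

Because the library periphery approximates the true high-frequency content, the keyhole reconstruction stays much closer to the reference than simply zero-filling the unacquired lines, which is the effect the simulation above quantifies in 3D.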

  9. IPCC Socio-Economic Baseline Dataset

    Data.gov (United States)

    National Aeronautics and Space Administration — The Intergovernmental Panel on Climate Change (IPCC) Socio-Economic Baseline Dataset consists of population, human development, economic, water resources, land...

  10. Volumetric formulation of lattice Boltzmann models with energy conservation

    OpenAIRE

    Sbragaglia, M.; Sugiyama, K.

    2010-01-01

We analyze a volumetric formulation of lattice Boltzmann for compressible thermal fluid flows. The velocity set is chosen with the desired accuracy, based on the Gauss-Hermite quadrature procedure, and tested against controlled problems in bounded and unbounded fluids. The method allows the simulation of thermohydrodynamical problems without the need to preserve the exact space-filling nature of the velocity set, but still ensuring the exact conservation laws for density, momentum and energy. ...

  11. The LANDFIRE Refresh strategy: updating the national dataset

    Science.gov (United States)

    Nelson, Kurtis J.; Connot, Joel A.; Peterson, Birgit E.; Martin, Charley

    2013-01-01

    The LANDFIRE Program provides comprehensive vegetation and fuel datasets for the entire United States. As with many large-scale ecological datasets, vegetation and landscape conditions must be updated periodically to account for disturbances, growth, and natural succession. The LANDFIRE Refresh effort was the first attempt to consistently update these products nationwide. It incorporated a combination of specific systematic improvements to the original LANDFIRE National data, remote sensing based disturbance detection methods, field collected disturbance information, vegetation growth and succession modeling, and vegetation transition processes. This resulted in the creation of two complete datasets for all 50 states: LANDFIRE Refresh 2001, which includes the systematic improvements, and LANDFIRE Refresh 2008, which includes the disturbance and succession updates to the vegetation and fuel data. The new datasets are comparable for studying landscape changes in vegetation type and structure over a decadal period, and provide the most recent characterization of fuel conditions across the country. The applicability of the new layers is discussed and the effects of using the new fuel datasets are demonstrated through a fire behavior modeling exercise using the 2011 Wallow Fire in eastern Arizona as an example.

  12. Systematic characterizations of text similarity in full text biomedical publications.

    Science.gov (United States)

    Sun, Zhaohui; Errami, Mounir; Long, Tara; Renard, Chris; Choradia, Nishant; Garner, Harold

    2010-09-15

    Computational methods have been used to find duplicate biomedical publications in MEDLINE. Full text articles are becoming increasingly available, yet the similarities among them have not been systematically studied. Here, we quantitatively investigated the full text similarity of biomedical publications in PubMed Central. 72,011 full text articles from PubMed Central (PMC) were parsed to generate three different datasets: full texts, sections, and paragraphs. Text similarity comparisons were performed on these datasets using the text similarity algorithm eTBLAST. We measured the frequency of similar text pairs and compared it among different datasets. We found that high abstract similarity can be used to predict high full text similarity with a specificity of 20.1% (95% CI [17.3%, 23.1%]) and sensitivity of 99.999%. Abstract similarity and full text similarity have a moderate correlation (Pearson correlation coefficient: -0.423) when the similarity ratio is above 0.4. Among pairs of articles in PMC, method sections are found to be the most repetitive (frequency of similar pairs, methods: 0.029, introduction: 0.0076, results: 0.0043). In contrast, among a set of manually verified duplicate articles, results are the most repetitive sections (frequency of similar pairs, results: 0.94, methods: 0.89, introduction: 0.82). Repetition of introduction and methods sections is more likely to be committed by the same authors (odds of a highly similar pair having at least one shared author, introduction: 2.31, methods: 1.83, results: 1.03). There is also significantly more similarity in pairs of review articles than in pairs containing one review and one nonreview paper (frequency of similar pairs: 0.0167 and 0.0023, respectively). While quantifying abstract similarity is an effective approach for finding duplicate citations, a comprehensive full text analysis is necessary to uncover all potential duplicate citations in the scientific literature and is helpful when
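The pipeline described above — parse articles into full-text, section, and paragraph datasets, score all pairs, and keep those above a similarity threshold — can be sketched in Python. eTBLAST itself is not scriptable here, so difflib's SequenceMatcher serves as a stand-in similarity score; the 0.4 cutoff mirrors the similarity-ratio threshold mentioned in the abstract.

```python
from difflib import SequenceMatcher

def similarity_ratio(text_a: str, text_b: str) -> float:
    """Word-level similarity in [0, 1]; a crude stand-in for an eTBLAST-style score."""
    return SequenceMatcher(None, text_a.lower().split(), text_b.lower().split()).ratio()

def similar_pairs(docs: dict, threshold: float = 0.4):
    """Return (id_a, id_b, ratio) for every document pair at or above the threshold."""
    ids = list(docs)
    pairs = []
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            r = similarity_ratio(docs[ids[i]], docs[ids[j]])
            if r >= threshold:
                pairs.append((ids[i], ids[j], r))
    return pairs
```

Running the same routine separately on full texts, sections, and paragraphs would reproduce the kind of per-section frequency comparison reported in the study.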

  13. A comparative study of volumetric breast density estimation in digital mammography and magnetic resonance imaging: results from a high-risk population

    Science.gov (United States)

    Kontos, Despina; Xing, Ye; Bakic, Predrag R.; Conant, Emily F.; Maidment, Andrew D. A.

    2010-03-01

    We performed a study to compare methods for volumetric breast density estimation in digital mammography (DM) and magnetic resonance imaging (MRI) for a high-risk population of women. DM and MRI images of the unaffected breast from 32 women with recently detected abnormalities and/or previously diagnosed breast cancer (age range 31-78 yrs, mean 50.3 yrs) were retrospectively analyzed. DM images were analyzed using QuantraTM (Hologic Inc). The MRI images were analyzed using a fuzzy-C-means segmentation algorithm on the T1 map. Both methods were compared to Cumulus (Univ. Toronto). Volumetric breast density estimates from DM and MRI are highly correlated (r = 0.90); discrepancies were observed mainly for women with very low-density breasts and are attributed to effects in MRI and to differences in the computational aspects of the image analysis methods in MRI and DM. The good correlation between the volumetric and the area-based measures, shown to correlate with breast cancer risk, suggests that both DM and MRI volumetric breast density measures can aid in breast cancer risk assessment. Further work is underway to fully investigate the association between volumetric breast density measures and breast cancer risk.
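The abstract names a fuzzy-C-means segmentation of the T1 map but gives no implementation details; the following is a generic, minimal 1-D fuzzy C-means sketch (the standard alternating membership/center updates with fuzzifier m = 2), not the authors' code.

```python
import numpy as np

def fuzzy_c_means(x, centers, m=2.0, n_iter=50):
    """Minimal 1-D fuzzy C-means.
    x: (N,) data values; centers: (C,) initial cluster centers; m: fuzzifier.
    Returns the converged centers and the (N, C) membership matrix."""
    x = np.asarray(x, dtype=float)
    c = np.asarray(centers, dtype=float)
    for _ in range(n_iter):
        # distance of every point to every center; epsilon avoids division by zero
        d = np.abs(x[:, None] - c[None, :]) + 1e-12
        # membership update: u_ik proportional to d_ik**(-2/(m-1)), rows sum to 1
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
        # center update: weighted mean with weights u**m
        w = u ** m
        c = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
    return c, u
```

In a real segmentation the same updates run over T1-map voxel intensities with one cluster per tissue class.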

  14. X-ray volumetric imaging in image-guided radiotherapy: The new standard in on-treatment imaging

    International Nuclear Information System (INIS)

    McBain, Catherine A.; Henry, Ann M.; Sykes, Jonathan; Amer, Ali; Marchant, Tom; Moore, Christopher M.; Davies, Julie; Stratford, Julia; McCarthy, Claire; Porritt, Bridget; Williams, Peter; Khoo, Vincent S.; Price, Pat

    2006-01-01

    Purpose: X-ray volumetric imaging (XVI) for the first time allows for the on-treatment acquisition of three-dimensional (3D) kV cone beam computed tomography (CT) images. Clinical imaging using the Synergy System (Elekta, Crawley, UK) commenced in July 2003. This study evaluated image quality and dose delivered and assessed clinical utility for treatment verification at a range of anatomic sites. Methods and Materials: Single XVIs were acquired from 30 patients undergoing radiotherapy for tumors at 10 different anatomic sites. Patients were imaged in their setup position. Radiation doses received were measured using TLDs on the skin surface. The utility of XVI in verifying target volume coverage was qualitatively assessed by experienced clinicians. Results: X-ray volumetric imaging acquisition was completed in the treatment position at all anatomic sites. At sites where a full gantry rotation was not possible, XVIs were reconstructed from projection images acquired from partial rotations. Soft-tissue definition of organ boundaries allowed direct assessment of 3D target volume coverage at all sites. Individual image quality depended on both imaging parameters and patient characteristics. Radiation dose ranged from 0.003 Gy in the head to 0.03 Gy in the pelvis. Conclusions: On-treatment XVI provided 3D verification images with soft-tissue definition at all anatomic sites at acceptably low radiation doses. This technology sets a new standard in treatment verification and will facilitate novel adaptive radiotherapy techniques.

  15. Experimental investigation of the liquid volumetric mass transfer coefficient for upward gas-liquid two-phase flow in rectangular microchannels

    Directory of Open Access Journals (Sweden)

    X. Y. Ji

    2010-12-01

    Full Text Available The gas-liquid two-phase mass transfer process in microchannels is complicated due to the special dynamical characteristics. In this work, a novel method was explored to measure the liquid side volumetric mass transfer coefficient kLa. Pressure transducers were utilized to measure the pressure variation of upward gas-liquid two-phase flow in three vertical rectangular microchannels and the liquid side volumetric mass transfer coefficient kLa was calculated through the Pressure-Volume-Temperature correlation of the gas phase. Carbon dioxide-water, carbon dioxide-ethanol and carbon dioxide-n-propanol were used as working fluids, respectively. The dimensions of the microchannels were 40 µm×240 µm (depth×width), 100 µm×800 µm and 100 µm×2000 µm, respectively. Results showed that the channel diameter and the capillary number influence kLa markedly and that the maximum value of kLa occurs in the annular flow regime. A new correlation of kLa was proposed based on the Sherwood number, Schmidt number and the capillary number. The predicted values of kLa agreed well with the experimental data.

  16. Merged SAGE II, Ozone_cci and OMPS ozone profile dataset and evaluation of ozone trends in the stratosphere

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2017-10-01

    Full Text Available In this paper, we present a merged dataset of ozone profiles from several satellite instruments: SAGE II on ERBS, GOMOS, SCIAMACHY and MIPAS on Envisat, OSIRIS on Odin, ACE-FTS on SCISAT, and OMPS on Suomi-NPP. The merged dataset is created in the framework of the European Space Agency Climate Change Initiative (Ozone_cci) with the aim of analyzing stratospheric ozone trends. For the merged dataset, we used the latest versions of the original ozone datasets. The datasets from the individual instruments have been extensively validated and intercompared; only those datasets which are in good agreement, and do not exhibit significant drifts with respect to collocated ground-based observations and with respect to each other, are used for merging. The long-term SAGE–CCI–OMPS dataset is created by computation and merging of deseasonalized anomalies from individual instruments. The merged SAGE–CCI–OMPS dataset consists of deseasonalized anomalies of ozone in 10° latitude bands from 90° S to 90° N and from 10 to 50 km in steps of 1 km covering the period from October 1984 to July 2016. This newly created dataset is used for evaluating ozone trends in the stratosphere through multiple linear regression. Negative ozone trends in the upper stratosphere are observed before 1997 and positive trends are found after 1997. The upper stratospheric trends are statistically significant at midlatitudes and indicate ozone recovery, as expected from the decrease of stratospheric halogens that started in the middle of the 1990s and stratospheric cooling.
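The merging step described above — compute deseasonalized anomalies per instrument, then combine them — can be sketched as follows, assuming monthly sampling; the actual SAGE–CCI–OMPS processing performs this per 10° latitude band and 1 km altitude level.

```python
import numpy as np

def deseasonalized_anomaly(series, months):
    """Subtract the monthly climatology (mean seasonal cycle) from a time series.
    series: (T,) ozone values (NaN for gaps); months: (T,) month numbers 1..12."""
    series = np.asarray(series, dtype=float)
    months = np.asarray(months)
    anom = np.full_like(series, np.nan)
    for m in range(1, 13):
        sel = months == m
        if sel.any():
            anom[sel] = series[sel] - np.nanmean(series[sel])
    return anom

def merge_instruments(anomalies):
    """Merge per-instrument anomaly series by the mean over available instruments."""
    return np.nanmean(np.vstack(anomalies), axis=0)
```

Working in anomaly space sidesteps inter-instrument offsets: each instrument is referenced to its own climatology before the series are averaged together.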

  17. Omicseq: a web-based search engine for exploring omics datasets

    Science.gov (United States)

    Sun, Xiaobo; Pittard, William S.; Xu, Tianlei; Chen, Li; Zwick, Michael E.; Jiang, Xiaoqian; Wang, Fusheng

    2017-01-01

    The development and application of high-throughput genomics technologies has resulted in massive quantities of diverse omics data that continue to accumulate rapidly. These rich datasets offer unprecedented and exciting opportunities to address long-standing questions in biomedical research. However, our ability to explore and query the content of diverse omics data is very limited. Existing dataset search tools rely almost exclusively on the metadata. A text-based query for gene name(s) does not work well on datasets wherein the vast majority of their content is numeric. To overcome this barrier, we have developed Omicseq, a novel web-based platform that facilitates the easy interrogation of omics datasets holistically to improve ‘findability’ of relevant data. The core component of Omicseq is trackRank, a novel algorithm for ranking omics datasets that fully uses the numerical content of the dataset to determine relevance to the query entity. The Omicseq system is supported by a scalable and elastic NoSQL database that hosts a large collection of processed omics datasets. In the front end, a simple, web-based interface allows users to enter queries and instantly receive search results as a list of ranked datasets deemed to be the most relevant. Omicseq is freely available at http://www.omicseq.org. PMID:28402462

  18. Nanoparticle-organic pollutant interaction dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dataset presents concentrations of organic pollutants, such as polyaromatic hydrocarbon compounds, in water samples. Water samples of known volume and concentration...

  19. Energy dissipation dataset for reversible logic gates in quantum dot-cellular automata

    Directory of Open Access Journals (Sweden)

    Ali Newaz Bahar

    2017-02-01

    Full Text Available This paper presents an energy dissipation dataset for different reversible logic gates in quantum-dot cellular automata. The proposed circuits have been designed and verified using the QCADesigner simulator. In addition, the energy dissipation has been calculated under three different tunneling energy levels at temperature T=2 K. The QCAPro tool was employed to estimate the energy dissipation of the proposed gates.

  20. Imaging-genomics reveals driving pathways of MRI derived volumetric tumor phenotype features in Glioblastoma

    International Nuclear Information System (INIS)

    Grossmann, Patrick; Gutman, David A.; Dunn, William D. Jr; Holder, Chad A.; Aerts, Hugo J. W. L.

    2016-01-01

    Glioblastoma (GBM) tumors exhibit strong phenotypic differences that can be quantified using magnetic resonance imaging (MRI), but the underlying biological drivers of these imaging phenotypes remain largely unknown. An Imaging-Genomics analysis was performed to reveal the mechanistic associations between MRI derived quantitative volumetric tumor phenotype features and molecular pathways. One hundred forty-one patients with presurgery MRI and survival data were included in our analysis. Volumetric features were defined, including the necrotic core (NE), contrast-enhancement (CE), abnormal tumor volume assessed by post-contrast T1w (tumor bulk or TB), tumor-associated edema based on T2-FLAIR (ED), and total tumor volume (TV), as well as ratios of these tumor components. Based on gene expression where available (n = 91), pathway associations were assessed using a preranked gene set enrichment analysis. These results were put into context of molecular subtypes in GBM and prognostication. Volumetric features were significantly associated with diverse sets of biological processes (FDR < 0.05). While NE and TB were enriched for immune response pathways and apoptosis, CE was associated with signal transduction and protein folding processes. ED was mainly enriched for homeostasis and cell cycling pathways. ED was also the strongest predictor of molecular GBM subtypes (AUC = 0.61). CE was the strongest predictor of overall survival (C-index = 0.6; Noether test, p = 4×10⁻⁴). GBM volumetric features extracted from MRI are significantly enriched for information about the biological state of a tumor that impacts patient outcomes. Clinical decision-support systems could exploit this information to develop personalized treatment strategies on the basis of noninvasive imaging. The online version of this article (doi:10.1186/s12885-016-2659-5) contains supplementary material, which is available to authorized users.

  1. Framework for Interactive Parallel Dataset Analysis on the Grid

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, David A.; Ananthan, Balamurali; /Tech-X Corp.; Johnson, Tony; Serbo, Victor; /SLAC

    2007-01-10

    We present a framework for use at a typical Grid site to facilitate custom interactive parallel dataset analysis targeting terabyte-scale datasets of the type typically produced by large multi-institutional science experiments. We summarize the needs for interactive analysis and show a prototype solution that satisfies those needs. The solution consists of a desktop client tool and a set of Web Services that allow scientists to sign onto a Grid site, compose analysis script code to carry out physics analysis on datasets, distribute the code and datasets to worker nodes, collect the results back to the client, and construct professional-quality visualizations of the results.

  2. The relationship between anatomic noise and volumetric breast density for digital mammography

    International Nuclear Information System (INIS)

    Mainprize, James G.; Tyson, Albert H.; Yaffe, Martin J.

    2012-01-01

    Purpose: The appearance of parenchymal/stromal patterns in mammography has been characterized as having a Wiener power spectrum with an inverse power-law shape described by the exponential parameter, β. The amount of fibroglandular tissue, which can be quantified in terms of volumetric breast density (VBD), influences the texture and appearance of the patterns formed in a mammogram. Here, a large study is performed to investigate the variations in β in a clinical population and to indicate the relationship between β and breast density. Methods: From a set of 2686 cranio-caudal normal screening mammograms, the parameter β was extracted from log-log fits to the Wiener spectrum over the range 0.15–1 mm⁻¹. The Wiener spectrum was calculated from regions of interest in the compression paddle contact region of the breast. An in-house computer program, Cumulus V, was used to extract the volumetric breast density and identify the compression paddle contact regions of the breast. The Wiener spectra were calculated with and without modulation transfer function (MTF) correction to determine the impact of VBD on the intrinsic anatomic noise. Results: The mean volumetric breast density was 25.5% (±12.6%) over all images. The mean β, following an MTF correction that decreased β slightly (by ≈0.08), was found to be 2.87. Varying the maximum of the spatial frequency range of the fits from 0.7 to 1.0, 1.25 or 1.5 mm⁻¹ produced only small decreases in the result, although the effect of the quantum noise power component in reducing β was clearly observed at 1.5 mm⁻¹. Conclusions: The texture parameter, β, was found to increase with VBD at low volumetric breast densities with an apparent leveling off at higher densities. The relationship between β and VBD measured here can be used to create probabilistic models for computer simulations of detectability. As breast density is a known risk predictor for breast cancer, the correlation between β and VBD suggests that
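The parameter β above is simply the (negative) slope of a straight line fitted in log-log coordinates to the Wiener spectrum over a fixed frequency band. A minimal sketch, assuming the spectrum has already been estimated from the region of interest:

```python
import numpy as np

def fit_beta(freqs, power, f_min=0.15, f_max=1.0):
    """Fit P(f) = k * f**(-beta) over [f_min, f_max] by a log-log least-squares line.
    freqs in cycles/mm, power: Wiener spectrum estimate at those frequencies.
    Returns beta (positive for a decaying spectrum)."""
    freqs = np.asarray(freqs, dtype=float)
    power = np.asarray(power, dtype=float)
    band = (freqs >= f_min) & (freqs <= f_max)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(power[band]), 1)
    return -slope
```

Restricting the fit to a band below the quantum-noise-dominated frequencies matters: the study notes that extending the upper limit to 1.5 mm⁻¹ visibly pulls the fitted β down.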

  3. Dataset on records of Hericium erinaceus in Slovakia

    Directory of Open Access Journals (Sweden)

    Vladimír Kunca

    2017-06-01

    Full Text Available The data presented in this article are related to the research article entitled “Habitat preferences of Hericium erinaceus in Slovakia” (Kunca and Čiliak, 2016 [FUNECO607] [2]). The dataset includes all available and unpublished data from Slovakia, excluding repeat records from the same tree or stem. We compiled a database of records of collections by processing data from herbaria, personal records and communication with mycological activists. Data on altitude, tree species, host tree vital status, host tree position and intensity of management of forest stands were evaluated in this study. All surveys were based on basidioma occurrence, and some records result from targeted searches.

  4. Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications

    Science.gov (United States)

    Maskey, M.; Ramachandran, R.; Miller, J.

    2017-12-01

    Deep learning has revolutionized computer vision and natural language processing with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as the ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences for creating large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep learning based applications.

  5. Quantifying spatial and temporal trends in beach-dune volumetric changes using spatial statistics

    Science.gov (United States)

    Eamer, Jordan B. R.; Walker, Ian J.

    2013-06-01

    Spatial statistics are generally underutilized in coastal geomorphology, despite offering great potential for identifying and quantifying spatial-temporal trends in landscape morphodynamics. In particular, local Moran's Ii provides a statistical framework for detecting clusters of significant change in an attribute (e.g., surface erosion or deposition) and quantifying how this changes over space and time. This study analyzes and interprets spatial-temporal patterns in sediment volume changes in a beach-foredune-transgressive dune complex following removal of invasive marram grass (Ammophila spp.). Results are derived by detecting significant changes in post-removal repeat DEMs derived from topographic surveys and airborne LiDAR. The study site was separated into discrete, linked geomorphic units (beach, foredune, transgressive dune complex) to facilitate sub-landscape scale analysis of volumetric change and sediment budget responses. Difference surfaces derived from a pixel-subtraction algorithm between interval DEMs and the LiDAR baseline DEM were filtered using the local Moran's Ii method and two different spatial weights (1.5 and 5 m) to detect statistically significant change. Moran's Ii results were compared with those derived from a more spatially uniform statistical method that uses a simpler Student's t distribution threshold for change detection. Morphodynamic patterns and volumetric estimates were similar between the uniform geostatistical method and Moran's Ii at a spatial weight of 5 m while the smaller spatial weight (1.5 m) consistently indicated volumetric changes of less magnitude. The larger 5 m spatial weight was most representative of broader site morphodynamics and spatial patterns while the smaller spatial weight provided volumetric changes consistent with field observations. All methods showed foredune deflation immediately following removal with increased sediment volumes into the spring via deposition at the crest and on lobes in the lee
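Local Moran's Ii on a raster difference surface can be sketched as below. This toy version uses a binary 3×3 (queen) neighborhood rather than the 1.5 m and 5 m distance-based spatial weights used in the study, and treats off-grid neighbors as having zero deviation.

```python
import numpy as np

def local_morans_i(grid):
    """Local Moran's I_i on a 2-D array with binary 3x3 (queen) weights.
    Positive I_i: the cell sits in a cluster of similar deviations;
    negative I_i: the cell is a spatial outlier relative to its neighbors."""
    x = np.asarray(grid, dtype=float)
    z = x - x.mean()
    m2 = (z ** 2).mean()  # variance normalizer
    zp = np.pad(z, 1)     # zero padding so edge cells still have 8 "neighbors"
    # spatial lag: sum of the 8 neighbors' deviations, built from shifted copies
    lag = sum(np.roll(np.roll(zp, dr, axis=0), dc, axis=1)
              for dr in (-1, 0, 1) for dc in (-1, 0, 1)
              if (dr, dc) != (0, 0))[1:-1, 1:-1]
    return z * lag / m2
```

In the study's workflow, the I_i surface is computed on the DEM-difference raster and thresholded for significance, so that only coherent clusters of erosion or deposition enter the volumetric sums.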

  6. Qualitative values of radioactivity, area and volumetric: Application on phantoms (target and background)

    Energy Technology Data Exchange (ETDEWEB)

    Abdel-Rahman Al-Shakhrah, Issa [Department of Physics, University of Jordan, Queen Rania Street, Amman (Jordan)], E-mail: issashak@yahoo.com

    2009-04-15

    The visualization of a lesion depends on the contrast between the lesion and surrounding background (T/B; (target/background) ratio). For imaging in vivo not only is the radioactivity in the target organ important, but so too is the ratio of radioactivity in the target versus that in the background. Nearly all studies reported in the literature have dealt with the surface index, as a standard factor to study the relationship between the target (tissue or organ) and the background. It is necessary to know the ratio between the volumetric activity of lesions (targets) and normal tissues (background) instead of knowing the ratio between the area activity, the volume index being a more realistic factor than the area index as the targets (tissues or organs) are real volumes that have surfaces. The intention is that this work should aid in approaching a quantitative relationship and differentiation between different tissues (target/background or abnormal/normal tissues). For the background, square regions of interest (ROIs; 11×11 pixels in size) were manually drawn by the observer at locations far from the border of the plastic cylinder (simulated organ), while an isocontour region with 50% threshold was drawn automatically over the cylinder. The total number of counts and pixels in each of these regions was calculated. The relationship between different phantom parameters, cylinder (target) depth, area activity ratio (background/target, A(B/T)) and real volumetric activity ratio (background/target, V(B/T)), was demonstrated. Variations in the area and volumetric activity ratio values with respect to the depth were deduced. To find a realistic value of the ratio, calibration charts have been constructed that relate the area and real volumetric ratios as a function of depth of the tissues and organs. Our experiments show that the cross-sectional area of the cylinder (applying a threshold 50% isocontour) has a weak dependence on the activity concentrations of the

  8. Volumetric image-guidance: Does routine usage prompt adaptive re-planning? An institutional review

    International Nuclear Information System (INIS)

    Tanyi, James A.; Fuss, Martin H.

    2008-01-01

    Purpose. To investigate how the use of volumetric image-guidance using an on-board cone-beam computed tomography (CBCT) system impacts on the frequency of adaptive re-planning. Material and methods. Treatment courses of 146 patients who have undergone a course of external beam radiation therapy (EBRT) using volumetric CBCT image-guidance were analyzed. Target locations included the brain, head and neck, chest, abdomen, as well as prostate and non-prostate pelvis. The majority of patients (57.5%) were treated with hypo-fractionated treatment regimens (three to 15 fraction courses). The frequency of image-guidance ranged from daily (87.7%) to weekly or twice weekly. The underlying medical necessity for adaptive re-planning as well as frequency and consequences of plan adaptation to dose-volume parameters was assessed. Results. Radiation plans of 34 patients (23.3%) were adapted at least once (up to six times) during their course of EBRT as a result of image-guidance CBCT review. Most common causes for adaptive planning were: tumor change (mostly shrinkage: 10 patients; four patients more than one re-plan), change in abdominal girth (systematic change in hollow organ filling; n=7, two patients more than one re-plan), weight loss (n=5), and systematic target setup deviation from simulation (n=5). Adaptive re-planning was required mostly for conventionally fractionated courses; only five patient plans undergoing hypo-fractionated treatment were adjusted. In over 91% of adapted plans, the dose-volume parameters deviated from the prescribed plan parameters by more than 5% for at least 10% of the target volume, or organs-at-risk in close proximity to the target volume. Discussion. Routine use of volumetric image-guidance has in our practice increased the demand for adaptive re-planning. Volumetric CBCT image-guidance provides sufficient imaging information to reliably predict the need for dose adjustment. In the vast majority of cases evaluated, the initial and adapted dose

  9. Engineering three-dimensionally electrodeposited Si-on-Ni inverse opal structure for high volumetric capacity Li-ion microbattery anode.

    Science.gov (United States)

    Liu, Hao; Cho, Hyung-Man; Meng, Ying Shirley; Li, Quan

    2014-06-25

    Aiming at improving the volumetric capacity of nanostructured Li-ion battery anode, an electrodeposited Si-on-Ni inverse opal structure has been proposed in the present work. This type of electrode provides three-dimensional bi-continuous pathways for ion/electron transport and high surface area-to-volume ratios, and thus exhibits lower interfacial resistance, but higher effective Li ions diffusion coefficients, when compared to the Si-on-Ni nanocable array electrode of the same active material mass. As a result, improved volumetric capacities and rate capabilities have been demonstrated in the Si-on-Ni inverse opal anode. We also show that optimization of the volumetric capacities and the rate performance of the inverse opal electrode can be realized by manipulating the pore size of the Ni scaffold and the thickness of the Si deposit.

  10. Dataset concerning the analytical approximation of the Ae3 temperature

    Directory of Open Access Journals (Sweden)

    B.L. Ennis

    2017-02-01

    The dataset includes the terms of the function and the values for the polynomial coefficients for major alloying elements in steel. A short description of the approximation method used to derive and validate the coefficients has also been included. For discussion and application of this model, please refer to the full length article entitled “The role of aluminium in chemical and phase segregation in a TRIP-assisted dual phase steel” 10.1016/j.actamat.2016.05.046 (Ennis et al., 2016 [1]).
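An approximation of this form evaluates, for each alloying element, a polynomial in its weight percent and adds the contribution to a base Ae3 temperature. A sketch with made-up coefficients and a nominal pure-iron base of 910 °C; the real terms and coefficient values are the content of the dataset itself.

```python
import numpy as np

# Hypothetical coefficients for illustration only; the published values are in
# the dataset. Each element maps to polynomial coefficients (highest power
# first, no constant term) in that element's weight percent.
COEFFS = {
    "C":  [250.0, -1050.0],   # quadratic in wt.% C (made-up numbers)
    "Mn": [-30.0],
    "Si": [40.0],
    "Al": [60.0],
}

def ae3_estimate(composition, base=910.0):
    """Approximate Ae3 (deg C) as a base temperature plus per-element polynomial terms.
    composition: dict mapping element symbol -> wt.%."""
    t = base
    for element, wt in composition.items():
        # append a zero constant term so a 0 wt.% addition contributes nothing
        t += np.polyval(COEFFS[element] + [0.0], wt)
    return t
```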

  11. Influence of fluid-mechanical characteristics of the system on the volumetric mass transfer coefficient and gas dispersion in three-phase system

    Directory of Open Access Journals (Sweden)

    Knežević Milena M.

    2014-01-01

    Full Text Available The distribution of gas bubbles and the volumetric mass transfer coefficient, kLa, in a three-phase system with different types of solid particles at different operating conditions were studied in this paper. The ranges of superficial gas and liquid velocities used in this study were 0.03–0.09 m/s and 0–0.1 m/s, respectively. Three different types of solid particles were used as a bed in the column (glass, dp=3 mm and dp=6 mm; ceramic, dp=6 mm). The experiments were carried out in a 2D plexiglass column, 278 × 20.4 × 500 mm, and in a cylindrical plexiglass column with a diameter of 64 mm and a height of 2000 mm. The kLa coefficient increased with gas and liquid velocities. Results showed that the volumetric mass transfer coefficient has higher values in the three-phase system, with solid particles, compared with the two-phase system. The particle properties (diameter and density) have a major impact on oxygen mass transfer in three-phase systems.

  12. 2006 Fynmeet sea clutter measurement trial: Datasets

    CSIR Research Space (South Africa)

    Herselman, PLR

    2007-09-06

    Full Text Available The datasets tabulate radar cross section (RCS, dBm²) as a function of time and absolute range for sea clutter recordings CAD14-001 and CAD14-002, acquired at f1 = 9.000 GHz.

  13. Experimental evaluation and simulation of volumetric shrinkage and warpage on polymeric composite reinforced with short natural fibers

    Science.gov (United States)

    Santos, Jonnathan D.; Fajardo, Jorge I.; Cuji, Alvaro R.; García, Jaime A.; Garzón, Luis E.; López, Luis M.

    2015-09-01

    A polymeric natural fiber-reinforced composite is developed by extrusion and injection molding. The shrinkage and warpage of high-density polyethylene reinforced with short natural fibers of Guadua angustifolia Kunth are analyzed by experimental measurements and computer simulations. Autodesk Moldflow® and SolidWorks® are employed to simulate both volumetric shrinkage and warpage of injected parts at different fiber contents (0 wt.%, 20 wt.%, 30 wt.% and 40 wt.%). The restrictive effect of the reinforcement on the volumetric shrinkage and warpage of injected parts becomes evident. The results indicate that volumetric shrinkage of the natural composite is reduced by up to 58% with increasing fiber content, whereas warpage shows a reduction from 79% to 86% at the highest fiber content. These results suggest that natural fibers are highly beneficial for improving the assembly properties of polymeric natural fiber-reinforced composites.

  14. Using Multiple Big Datasets and Machine Learning to Produce a New Global Particulate Dataset: A Technology Challenge Case Study

    Science.gov (United States)

    Lary, D. J.

    2013-12-01

A BigData case study is described where multiple datasets from several satellites, high-resolution global meteorological data, social media and in-situ observations are combined using machine learning on a distributed cluster with an automated workflow. The global particulate dataset is relevant to global public health studies and could not be produced without the use of multiple big datasets, in-situ data and machine learning. To greatly reduce the development time and enhance the functionality, a high-level language capable of parallel processing (Matlab) has been used. Key considerations for the system are high-speed access due to the large data volume, persistence of the large data volumes and a precise process-time scheduling capability.

  15. Chemical product and function dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Merged product weight fraction and chemical function data. This dataset is associated with the following publication: Isaacs , K., M. Goldsmith, P. Egeghy , K....

  16. Superconductivity in volumetric and film ceramics Bi-Sr-Ca-Cu-O

    International Nuclear Information System (INIS)

    Sukhanov, A.A.; Ozmanyan, Kh.R.; Sandomirskij, B.B.

    1988-01-01

A superconducting transition with Tc0 = 82-95 K and Tc(R = 0) = 82-72 K was observed in volumetric and film Bi(Sr1-xCax)2Cu3Oy samples obtained by solid-phase reaction. Temperature dependences of the resistance, critical current and magnetic susceptibility are measured

  17. Palmprint and Palmvein Recognition Based on DCNN and A New Large-Scale Contactless Palmvein Dataset

    Directory of Open Access Journals (Sweden)

    Lin Zhang

    2018-03-01

Full Text Available Among biometric identifiers, the palmprint and the palmvein have received significant attention due to their stability, uniqueness, and non-intrusiveness. In this paper, we investigate the problem of palmprint/palmvein recognition and propose a Deep Convolutional Neural Network (DCNN) based scheme, namely PalmRCNN (short for palmprint/palmvein recognition using CNNs). The effectiveness and efficiency of PalmRCNN have been verified through extensive experiments conducted on benchmark datasets. In addition, though substantial effort has been devoted to palmvein recognition, it is still quite difficult for researchers to know the potential discriminating capability of the contactless palmvein. One of the root reasons is that a large-scale, publicly available dataset comprising high-quality contactless palmvein images is still lacking. To this end, a user-friendly acquisition device for collecting high-quality contactless palmvein images is first designed and developed in this work. Then, a large-scale palmvein image dataset is established, comprising 12,000 images acquired from 600 different palms in two separate collection sessions. The collected dataset is now publicly available.

  18. General Purpose Multimedia Dataset - GarageBand 2008

    DEFF Research Database (Denmark)

    Meng, Anders

This document describes a general-purpose multimedia dataset to be used in cross-media machine learning problems. In more detail, we describe the genre taxonomy applied at http://www.garageband.com, from where the dataset was collected, and how that taxonomy has been fused into a more human-understandable taxonomy. Finally, a description of the various features extracted from both the audio and the text is presented.

  19. Development of a volumetric projection technique for the digital evaluation of field of view.

    Science.gov (United States)

    Marshall, Russell; Summerskill, Stephen; Cook, Sharon

    2013-01-01

    Current regulations for field of view requirements in road vehicles are defined by 2D areas projected on the ground plane. This paper discusses the development of a new software-based volumetric field of view projection tool and its implementation within an existing digital human modelling system. In addition, the exploitation of this new tool is highlighted through its use in a UK Department for Transport funded research project exploring the current concerns with driver vision. Focusing specifically on rearwards visibility in small and medium passenger vehicles, the volumetric approach is shown to provide a number of distinct advantages. The ability to explore multiple projections of both direct vision (through windows) and indirect vision (through mirrors) provides a greater understanding of the field of view environment afforded to the driver whilst still maintaining compatibility with the 2D projections of the regulatory standards. Field of view requirements for drivers of road vehicles are defined by simplified 2D areas projected onto the ground plane. However, driver vision is a complex 3D problem. This paper presents the development of a new software-based 3D volumetric projection technique and its implementation in the evaluation of driver vision in small- and medium-sized passenger vehicles.

  20. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala

    2015-01-01

    Full Text Available Dimensionality reduction refers to a set of mathematical techniques used to reduce complexity of the original high-dimensional data, while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying the spectral dimensionality reduction techniques, and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate applicability of our framework we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
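The spectral core of such dimensionality reduction pipelines can be illustrated with plain PCA via the SVD. This serial sketch deliberately omits the paper's actual contribution, the parallel cluster implementation; sizes and data are made up:

```python
import numpy as np

def pca_reduce(X, k):
    """Project n points (rows of X) onto the top-k principal directions.
    Serial sketch of the spectral step that a parallel framework would
    distribute across nodes."""
    Xc = X - X.mean(axis=0)                         # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                            # k-dimensional embedding

rng = np.random.default_rng(0)
# 200 points that truly live on a 2-D plane inside a 50-D space.
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 50))
Y = pca_reduce(X, 2)
print(Y.shape)  # (200, 2)
```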

  1. Omicseq: a web-based search engine for exploring omics datasets.

    Science.gov (United States)

    Sun, Xiaobo; Pittard, William S; Xu, Tianlei; Chen, Li; Zwick, Michael E; Jiang, Xiaoqian; Wang, Fusheng; Qin, Zhaohui S

    2017-07-03

The development and application of high-throughput genomics technologies have resulted in massive quantities of diverse omics data that continue to accumulate rapidly. These rich datasets offer unprecedented and exciting opportunities to address long-standing questions in biomedical research. However, our ability to explore and query the content of diverse omics data is very limited. Existing dataset search tools rely almost exclusively on the metadata. A text-based query for gene name(s) does not work well on datasets wherein the vast majority of their content is numeric. To overcome this barrier, we have developed Omicseq, a novel web-based platform that facilitates the easy interrogation of omics datasets holistically to improve 'findability' of relevant data. The core component of Omicseq is trackRank, a novel algorithm for ranking omics datasets that fully uses the numerical content of the dataset to determine relevance to the query entity. The Omicseq system is supported by a scalable and elastic, NoSQL database that hosts a large collection of processed omics datasets. In the front end, a simple, web-based interface allows users to enter queries and instantly receive search results as a list of ranked datasets deemed to be the most relevant. Omicseq is freely available at http://www.omicseq.org. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
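The abstract does not spell out the trackRank algorithm, so the sketch below shows only the general idea of ranking datasets by their numeric content rather than metadata: score each dataset's signal profile against the query entity's profile (here with cosine similarity, an assumption, over hypothetical genomic bins):

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length numeric profiles."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_datasets(query_profile, datasets):
    """Rank datasets by numeric similarity to the query profile.
    'datasets' maps name -> signal values over the same genomic bins.
    Illustrative only: not the actual trackRank algorithm."""
    scores = {name: cosine(query_profile, prof) for name, prof in datasets.items()}
    return sorted(scores, key=scores.get, reverse=True)

query = [0.0, 5.0, 9.0, 1.0]
tracks = {
    "chipseq_A": [0.1, 4.8, 8.5, 0.9],  # strong signal at the query locus
    "rnaseq_B":  [7.0, 0.2, 0.1, 6.5],  # signal elsewhere
}
print(rank_datasets(query, tracks))  # ['chipseq_A', 'rnaseq_B']
```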

  2. Quantifying uncertainty in observational rainfall datasets

    Science.gov (United States)

    Lennard, Chris; Dosio, Alessandro; Nikulin, Grigory; Pinto, Izidine; Seid, Hussen

    2015-04-01

The CO-ordinated Regional Downscaling Experiment (CORDEX) has to date seen the publication of at least ten journal papers that examine the African domain during 2012 and 2013. Five of these papers consider Africa generally (Nikulin et al. 2012, Kim et al. 2013, Hernandes-Dias et al. 2013, Laprise et al. 2013, Panitz et al. 2013) and five have regional foci: Tramblay et al. (2013) on Northern Africa, Mariotti et al. (2014) and Gbobaniyi et al. (2013) on West Africa, Endris et al. (2013) on East Africa and Kalagnoumou et al. (2013) on southern Africa. A further three papers that the authors know about are under review. These papers all use observed rainfall and/or temperature data to evaluate/validate the regional model output and often proceed to assess projected changes in these variables due to climate change in the context of these observations. The most popular reference rainfall data used are the CRU, GPCP, GPCC, TRMM and UDEL datasets. However, as Kalagnoumou et al. (2013) point out, there are many other rainfall datasets available for consideration, for example CMORPH, FEWS, TAMSAT & RIANNAA, TAMORA and the WATCH & WATCH-DEI data. They, with others (Nikulin et al. 2012, Sylla et al. 2012), show that the observed datasets can have a very wide spread at a particular space-time coordinate. As more ground-, space- and reanalysis-based rainfall products become available, all of which use different methods to produce precipitation data, the selection of reference data is becoming an important factor in model evaluation. A number of factors can contribute to uncertainty in terms of the reliability and validity of the datasets, such as radiance conversion algorithms, the quantity and quality of available station data, interpolation techniques and the blending methods used to combine satellite and gauge based products. However, to date no comprehensive study has been performed to evaluate the uncertainty in these observational datasets. We assess 18 gridded
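The spread among observational products at a single space-time coordinate can be summarised very simply, e.g. as the mean, standard deviation and coefficient of variation across products. A sketch with entirely hypothetical values for one grid cell and month:

```python
import statistics

def product_spread(values_mm):
    """Mean, standard deviation and coefficient of variation of monthly
    rainfall (mm) reported by different gridded products at the same
    grid cell and month."""
    mean = statistics.mean(values_mm)
    sd = statistics.stdev(values_mm)
    return mean, sd, sd / mean

# Hypothetical values from five products (e.g. CRU, GPCP, GPCC, TRMM, UDEL).
mean, sd, cv = product_spread([82.0, 95.0, 78.0, 110.0, 88.0])
print(round(mean, 1), round(cv, 2))
```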

  3. Volumetric velocimetry for fluid flows

    Science.gov (United States)

    Discetti, Stefano; Coletti, Filippo

    2018-04-01

    In recent years, several techniques have been introduced that are capable of extracting 3D three-component velocity fields in fluid flows. Fast-paced developments in both hardware and processing algorithms have generated a diverse set of methods, with a growing range of applications in flow diagnostics. This has been further enriched by the increasingly marked trend of hybridization, in which the differences between techniques are fading. In this review, we carry out a survey of the prominent methods, including optical techniques and approaches based on medical imaging. An overview of each is given with an example of an application from the literature, while focusing on their respective strengths and challenges. A framework for the evaluation of velocimetry performance in terms of dynamic spatial range is discussed, along with technological trends and emerging strategies to exploit 3D data. While critical challenges still exist, these observations highlight how volumetric techniques are transforming experimental fluid mechanics, and that the possibilities they offer have just begun to be explored.

  4. Comparison of a radiomic biomarker with volumetric analysis for decoding tumour phenotypes of lung adenocarcinoma with different disease-specific survival

    International Nuclear Information System (INIS)

    Yuan, Mei; Zhang, Yu-Dong; Pu, Xue-Hui; Zhong, Yan; Yu, Tong-Fu; Li, Hai; Wu, Jiang-Fen

    2017-01-01

To compare a multi-feature-based radiomic biomarker with volumetric analysis in discriminating lung adenocarcinomas with different disease-specific survival on computed tomography (CT) scans. This retrospective study obtained institutional review board approval and was Health Insurance Portability and Accountability Act (HIPAA) compliant. Pathologically confirmed lung adenocarcinomas (n = 431) manifesting as subsolid nodules on CT were identified. Volume and percentage solid volume were measured by using a computer-assisted segmentation method. Radiomic features quantifying intensity, texture and wavelet were extracted from the segmented volume of interest (VOI). The twenty best features were chosen by using the Relief method and subsequently fed to a support vector machine (SVM) for discriminating adenocarcinoma in situ (AIS)/minimally invasive adenocarcinoma (MIA) from invasive adenocarcinoma (IAC). Performance of the radiomic signatures was compared with volumetric analysis via receiver-operating characteristic (ROC) curve analysis and logistic regression analysis. The accuracy of the proposed radiomic signatures for predicting AIS/MIA versus IAC reached 80.5% in ROC analysis (Az value, 0.829; sensitivity, 72.1%; specificity, 80.9%), significantly higher than volumetric analysis (69.5%, P = 0.049). Regression analysis showed that the radiomic signatures had superior prognostic performance to volumetric analysis, with AIC values of 81.2% versus 70.8%, respectively. The radiomic tumour-phenotype biomarker exhibited better diagnostic accuracy than traditional volumetric analysis in discriminating lung adenocarcinomas with different disease-specific survival. (orig.)

  5. Comparison of a radiomic biomarker with volumetric analysis for decoding tumour phenotypes of lung adenocarcinoma with different disease-specific survival

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Mei; Zhang, Yu-Dong; Pu, Xue-Hui; Zhong, Yan; Yu, Tong-Fu [First Affiliated Hospital of Nanjing Medical University, Department of Radiology, Nanjing, Jiangsu Province (China); Li, Hai [First Affiliated Hospital of Nanjing Medical University, Department of Pathology, Nanjing (China); Wu, Jiang-Fen [GE Healthcare, Shanghai (China)

    2017-11-15

To compare a multi-feature-based radiomic biomarker with volumetric analysis in discriminating lung adenocarcinomas with different disease-specific survival on computed tomography (CT) scans. This retrospective study obtained institutional review board approval and was Health Insurance Portability and Accountability Act (HIPAA) compliant. Pathologically confirmed lung adenocarcinomas (n = 431) manifesting as subsolid nodules on CT were identified. Volume and percentage solid volume were measured by using a computer-assisted segmentation method. Radiomic features quantifying intensity, texture and wavelet were extracted from the segmented volume of interest (VOI). The twenty best features were chosen by using the Relief method and subsequently fed to a support vector machine (SVM) for discriminating adenocarcinoma in situ (AIS)/minimally invasive adenocarcinoma (MIA) from invasive adenocarcinoma (IAC). Performance of the radiomic signatures was compared with volumetric analysis via receiver-operating characteristic (ROC) curve analysis and logistic regression analysis. The accuracy of the proposed radiomic signatures for predicting AIS/MIA versus IAC reached 80.5% in ROC analysis (Az value, 0.829; sensitivity, 72.1%; specificity, 80.9%), significantly higher than volumetric analysis (69.5%, P = 0.049). Regression analysis showed that the radiomic signatures had superior prognostic performance to volumetric analysis, with AIC values of 81.2% versus 70.8%, respectively. The radiomic tumour-phenotype biomarker exhibited better diagnostic accuracy than traditional volumetric analysis in discriminating lung adenocarcinomas with different disease-specific survival. (orig.)
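The Relief step used above can be illustrated with a minimal, single-neighbour variant: a feature is rewarded when it separates each sample from its nearest "miss" (other class) and penalised when it separates it from its nearest "hit" (same class). This is a toy stand-in for the ReliefF implementations typically used in radiomics pipelines, with made-up data:

```python
import math

def relief_weights(X, y):
    """Minimal single-neighbour Relief sketch: higher weight means the
    feature better separates the classes. Illustrative only."""
    n, d = len(X), len(X[0])
    w = [0.0] * d

    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    for i in range(n):
        hits = [j for j in range(n) if j != i and y[j] == y[i]]
        misses = [j for j in range(n) if y[j] != y[i]]
        h = min(hits, key=lambda j: dist(X[i], X[j]))    # nearest hit
        m = min(misses, key=lambda j: dist(X[i], X[j]))  # nearest miss
        for f in range(d):
            w[f] += abs(X[i][f] - X[m][f]) - abs(X[i][f] - X[h][f])
    return w

# Feature 0 separates the two classes; feature 1 is noise.
X = [[0.0, 0.3], [0.1, 0.9], [1.0, 0.2], [0.9, 0.8]]
y = [0, 0, 1, 1]
w = relief_weights(X, y)
print(w[0] > w[1])  # True: feature 0 scores higher
```

In the study, the twenty highest-weighted features would then be passed to the SVM classifier.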

  6. Turkey Run Landfill Emissions Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — landfill emissions measurements for the Turkey run landfill in Georgia. This dataset is associated with the following publication: De la Cruz, F., R. Green, G....

  7. Prognostic value of (18)F-FDG PET/CT volumetric parameters in recurrent epithelial ovarian cancer.

    Science.gov (United States)

    Mayoral, M; Fernandez-Martinez, A; Vidal, L; Fuster, D; Aya, F; Pavia, J; Pons, F; Lomeña, F; Paredes, P

    2016-01-01

    Metabolic tumour volume (MTV) and total lesion glycolysis (TLG) from (18)F-FDG PET/CT are emerging prognostic biomarkers in various solid neoplasms. These volumetric parameters and the SUVmax have shown to be useful criteria for disease prognostication in preoperative and post-treatment epithelial ovarian cancer (EOC) patients. The purpose of this study was to evaluate the utility of (18)F-FDG PET/CT measurements to predict survival in patients with recurrent EOC. Twenty-six patients with EOC who underwent a total of 31 (18)F-FDG PET/CT studies for suspected recurrence were retrospectively included. SUVmax and volumetric parameters whole-body MTV (wbMTV) and whole-body TLG (wbTLG) with a threshold of 40% and 50% of the SUVmax were obtained. Correlation between PET parameters and progression-free survival (PFS) and the survival analysis of prognostic factors were calculated. Serous cancer was the most common histological subtype (76.9%). The median PFS was 12.5 months (range 10.7-20.6 months). Volumetric parameters showed moderate inverse correlation with PFS but there was no significant correlation in the case of SUVmax. The correlation was stronger for first recurrences. By Kaplan-Meier analysis and log-rank test, wbMTV 40%, wbMTV 50% and wbTLG 50% correlated with PFS. However, SUVmax and wbTLG 40% were not statistically significant predictors for PFS. Volumetric parameters wbMTV and wbTLG 50% measured by (18)F-FDG PET/CT appear to be useful prognostic predictors of outcome and may provide valuable information to individualize treatment strategies in patients with recurrent EOC. Copyright © 2015 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
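The volumetric parameters above have simple definitions: MTV at a fixed-percentage threshold is the volume of lesion voxels with SUV at or above that fraction of SUVmax, and TLG is MTV multiplied by the mean SUV inside that volume. A sketch with a toy voxel list (not patient data):

```python
def mtv_tlg(suv_voxels, voxel_volume_ml, threshold_frac=0.40):
    """Metabolic tumour volume (ml) and total lesion glycolysis for one
    lesion, using a percentage-of-SUVmax threshold (40% or 50% above)."""
    suv_max = max(suv_voxels)
    inside = [v for v in suv_voxels if v >= threshold_frac * suv_max]
    mtv = len(inside) * voxel_volume_ml
    tlg = mtv * (sum(inside) / len(inside))  # MTV x SUVmean
    return mtv, tlg

# Toy lesion: 6 voxels of 0.2 ml each, SUVmax = 10.
suv = [10.0, 8.0, 5.0, 4.0, 3.0, 2.0]
mtv, tlg = mtv_tlg(suv, 0.2)
print(mtv)  # 0.8 (ml): the four voxels with SUV >= 4.0
```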

  8. Topic modeling for cluster analysis of large biological and medical datasets.

    Science.gov (United States)

    Zhao, Weizhong; Zou, Wen; Chen, James J

    2014-01-01

The big data moniker is nowhere better deserved than in describing the ever-increasing prodigiousness and complexity of biological and medical datasets. New methods are needed to generate and test hypotheses, foster biological interpretation, and build validated predictors. Although multivariate techniques such as cluster analysis may allow researchers to identify groups, or clusters, of related variables, the accuracy and effectiveness of traditional clustering methods diminish for large, high-dimensional datasets. Topic modeling is an active research field in machine learning and has mainly been used as an analytical tool to structure large textual corpora for data mining. Its ability to reduce high dimensionality to a small number of latent variables makes it suitable as a means for clustering, or for overcoming clustering difficulties, in large biological and medical datasets. In this study, three topic model-derived clustering methods (highest probable topic assignment, feature selection and feature extraction) are proposed and tested on the cluster analysis of three large datasets: a Salmonella pulsed-field gel electrophoresis (PFGE) dataset, a lung cancer dataset, and a breast cancer dataset, which represent various types of large biological or medical datasets. All three methods are shown to improve the efficacy/effectiveness of clustering results on the three datasets in comparison to traditional methods. A preferable cluster analysis method emerged for each of the three datasets on the basis of replicating known biological truths. Topic modeling could be advantageously applied to the large datasets of biological or medical research. The three proposed topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, yield clustering improvements for the three different data types. Clusters more efficaciously represent truthful groupings and subgroupings in the data than traditional methods, suggesting
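The simplest of the three methods, highest probable topic assignment, clusters each sample under the topic with the largest posterior probability in its fitted topic distribution. A sketch with toy distributions standing in for a fitted model such as LDA:

```python
def cluster_by_top_topic(doc_topic):
    """Assign each sample to its most probable latent topic.
    'doc_topic' holds one P(topic | sample) distribution per sample,
    as produced by a topic model; values here are made up."""
    return [max(range(len(theta)), key=theta.__getitem__) for theta in doc_topic]

# Rows: P(topic | sample) for 4 samples over 3 latent topics.
theta = [
    [0.70, 0.20, 0.10],
    [0.15, 0.80, 0.05],
    [0.05, 0.15, 0.80],
    [0.60, 0.30, 0.10],
]
print(cluster_by_top_topic(theta))  # [0, 1, 2, 0]
```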

  9. Triaxial extensometer for volumetric strain measurement in a hydro-compression loading test for foam materials

    International Nuclear Information System (INIS)

    Feng, Bo; Xu, Ming-long; Zhao, Tian-fei; Zhang, Zhi-jun; Lu, Tian-jian

    2010-01-01

A new strain gauge-based triaxial extensometer (radial extensometers x, y and axial extensometer z) is presented to improve volumetric strain measurement in a hydro-compression loading test for foam materials. With the triaxial extensometer, the triaxial deformations of the foam specimen can be measured directly, from which the volumetric strain is determined. Sensitivities of the triaxial extensometer are predicted using a finite-element model and verified through experimental calibrations. The axial extensometer is validated by conducting a uniaxial compression test on aluminium foam and comparing the deformation measured by the axial extensometer to that measured by the advanced optical 3D deformation analysis system ARAMIS; the result from the axial extensometer agrees well with that from ARAMIS. A new method of two-wire measurement and transmission in a hydrostatic environment is developed to avoid the punching and lead sealing techniques on the pressure vessel for the hydro-compression test. The effect of hydrostatic pressure on the triaxial extensometer is determined through an experimental test. An application in an aluminium foam hydrostatic compression test shows that the triaxial extensometer is effective for volumetric strain measurement in a hydro-compression loading test for foam materials
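Given the three measured strains, the volumetric strain follows from the product of the stretched edge lengths; for small strains it reduces to the familiar sum of the principal strains. A sketch with hypothetical readings:

```python
def volumetric_strain(ex, ey, ez):
    """Exact volumetric strain from three principal strains:
    (1+ex)(1+ey)(1+ez) - 1, which is approximately ex + ey + ez
    when all strains are small."""
    return (1 + ex) * (1 + ey) * (1 + ez) - 1

# Hypothetical hydro-compression readings from the x, y, z extensometers.
ev = volumetric_strain(-0.010, -0.012, -0.015)
approx = -0.010 - 0.012 - 0.015
print(round(ev, 4), round(approx, 4))
```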

  10. Solvent evaporation induced graphene powder with high volumetric capacitance and outstanding rate capability for supercapacitors

    Science.gov (United States)

    Zhang, Xiaozhe; Raj, Devaraj Vasanth; Zhou, Xufeng; Liu, Zhaoping

    2018-04-01

Graphene-based electrode materials for supercapacitors usually suffer from poor volumetric performance due to their low density. The enhancement of volumetric capacitance by densification of graphene materials, however, is usually accompanied by deterioration of rate capability, as the strong contraction of pore size hinders rapid diffusion of electrolytes. Thus, it is important to develop a suitable pore size in graphene materials that can sustain fast ion diffusion while avoiding excessive voids, so as to simultaneously acquire high density for supercapacitor applications. Accordingly, we propose a simple solvent evaporation method to control the pore size of graphene powders by adjusting the surface tension of solvents. Ethanol is used instead of water to reduce the shrinkage of the graphene powder during the solvent evaporation process, owing to its lower surface tension compared with water. With the assistance of mechanical compression, a graphene powder having a high compaction density of 1.30 g cm-3 and a large proportion of mesopores in the pore size range of 2-30 nm is obtained, which delivers a high volumetric capacitance of 162 F cm-3 and simultaneously exhibits outstanding rate performance of 76% capacity retention at a high current density of 100 A g-1.
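Volumetric and gravimetric capacitance are linked through the electrode's compaction density, which is why densification matters: C_vol (F cm^-3) = C_grav (F g^-1) x density (g cm^-3). A back-of-envelope check against the figures above (the implied gravimetric value is our inference, not reported in the abstract):

```python
def volumetric_capacitance(c_gravimetric_f_per_g, density_g_per_cm3):
    """C_vol (F cm^-3) = C_grav (F g^-1) x compaction density (g cm^-3)."""
    return c_gravimetric_f_per_g * density_g_per_cm3

# 162 F cm^-3 at 1.30 g cm^-3 implies roughly 125 F g^-1 gravimetrically.
c_grav = 162 / 1.30
print(round(c_grav, 1), round(volumetric_capacitance(c_grav, 1.30), 1))
```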

  11. An Analysis of the GTZAN Music Genre Dataset

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2012-01-01

    Most research in automatic music genre recognition has used the dataset assembled by Tzanetakis et al. in 2001. The composition and integrity of this dataset, however, has never been formally analyzed. For the first time, we provide an analysis of its composition, and create a machine...

  12. as-PSOCT: Volumetric microscopic imaging of human brain architecture and connectivity.

    Science.gov (United States)

    Wang, Hui; Magnain, Caroline; Wang, Ruopeng; Dubb, Jay; Varjabedian, Ani; Tirrell, Lee S; Stevens, Allison; Augustinack, Jean C; Konukoglu, Ender; Aganj, Iman; Frosch, Matthew P; Schmahmann, Jeremy D; Fischl, Bruce; Boas, David A

    2018-01-15

Polarization sensitive optical coherence tomography (PSOCT) with serial sectioning has enabled the investigation of 3D structures in mouse and human brain tissue samples. By using intrinsic optical properties of back-scattering and birefringence, PSOCT reliably images cytoarchitecture, myeloarchitecture and fiber orientations. In this study, we developed a fully automatic serial sectioning polarization sensitive optical coherence tomography (as-PSOCT) system to enable volumetric reconstruction of human brain samples with unprecedented sample size and resolution. The 3.5 μm in-plane resolution and 50 μm through-plane voxel size allow inspection of cortical layers that are a single cell in width, as well as small crossing fibers. We show the abilities of as-PSOCT in quantifying layer thicknesses of the cerebellar cortex and creating microscopic tractography of intricate fiber networks in the subcortical nuclei and internal capsule regions, all based on volumetric reconstructions. as-PSOCT provides a viable tool for studying quantitative cytoarchitecture and myeloarchitecture and mapping connectivity with microscopic resolution in the human brain. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Dataset definition for CMS operations and physics analyses

    Science.gov (United States)

    Franzoni, Giovanni; Compact Muon Solenoid Collaboration

    2016-04-01

Data recorded at the CMS experiment are funnelled into streams, integrated in the HLT menu, and further organised in a hierarchical structure of primary datasets and secondary datasets/dedicated skims. Datasets are defined according to the final-state particles reconstructed by the high level trigger, the data format and the use case (physics analysis, alignment and calibration, performance studies). During the first LHC run, new workflows have been added to this canonical scheme, to best exploit the flexibility of the CMS trigger and data acquisition systems. The concepts of data parking and data scouting have been introduced to extend the physics reach of CMS, offering the opportunity of defining physics triggers with extremely loose selections (e.g. a dijet resonance trigger collecting data at 1 kHz). In this presentation, we review the evolution of the dataset definition during LHC run I, and we discuss the plans for run II.

  14. Dataset definition for CMS operations and physics analyses

    CERN Document Server

    AUTHOR|(CDS)2051291

    2016-01-01

Data recorded at the CMS experiment are funnelled into streams, integrated in the HLT menu, and further organised in a hierarchical structure of primary datasets, secondary datasets, and dedicated skims. Datasets are defined according to the final-state particles reconstructed by the high level trigger, the data format and the use case (physics analysis, alignment and calibration, performance studies). During the first LHC run, new workflows have been added to this canonical scheme, to best exploit the flexibility of the CMS trigger and data acquisition systems. The concepts of data parking and data scouting have been introduced to extend the physics reach of CMS, offering the opportunity of defining physics triggers with extremely loose selections (e.g. a dijet resonance trigger collecting data at 1 kHz). In this presentation, we review the evolution of the dataset definition during the first run, and we discuss the plans for the second LHC run.

  15. Dataset of NRDA emission data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Emissions data from open air oil burns. This dataset is associated with the following publication: Gullett, B., J. Aurell, A. Holder, B. Mitchell, D. Greenwell, M....

  16. Medical Image Data and Datasets in the Era of Machine Learning-Whitepaper from the 2016 C-MIMI Meeting Dataset Session.

    Science.gov (United States)

    Kohli, Marc D; Summers, Ronald M; Geis, J Raymond

    2017-08-01

At the first annual Conference on Machine Intelligence in Medical Imaging (C-MIMI), held in September 2016, a conference session on medical image data and datasets for machine learning identified multiple issues. The common theme from attendees was that everyone participating in medical image evaluation with machine learning is data starved. There is an urgent need to find better ways to collect, annotate, and reuse medical imaging data. Unique domain issues with medical image datasets require further study, development, and dissemination of best practices and standards, and a coordinated effort among medical imaging domain experts, medical imaging informaticists, government and industry data scientists, and interested commercial, academic, and government entities. High-level attributes of reusable medical image datasets suitable to train, test, validate, verify, and regulate ML products should be better described. NIH and other government agencies should promote and, where applicable, enforce access to medical image datasets. We should improve communication among medical imaging domain experts, medical imaging informaticists, academic clinical and basic science researchers, government and industry data scientists, and interested commercial entities.

  17. Performance-scalable volumetric data classification for online industrial inspection

    Science.gov (United States)

    Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.

    2002-03-01

    Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.
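The first stage of the two-stage Hough transform described above can be sketched via a center-voting trick: because an ellipse is symmetric about its centre, every pair of edge points symmetric through the centre votes for the same midpoint. Stage 2 (estimating axes and orientation per candidate centre) is omitted; points and grid size are made up:

```python
from collections import Counter

def vote_centers(points, grid=1.0):
    """Stage-1 Hough voting for ellipse centres: each pair of edge
    points votes for its midpoint, binned on a coarse grid. Returns
    the most-voted bin. Illustrative sketch of the two-stage scheme."""
    votes = Counter()
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            mx = (points[i][0] + points[j][0]) / 2
            my = (points[i][1] + points[j][1]) / 2
            votes[(round(mx / grid) * grid, round(my / grid) * grid)] += 1
    return votes.most_common(1)[0][0]

# Edge points sampled from an axis-aligned ellipse centred at (5, 3),
# including three diametrically opposite pairs.
ellipse = [(5 + 4, 3), (5 - 4, 3), (5, 3 + 2), (5, 3 - 2),
           (5 + 4 * 0.6, 3 + 2 * 0.8), (5 - 4 * 0.6, 3 - 2 * 0.8)]
print(vote_centers(ellipse))  # (5.0, 3.0)
```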

  18. Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy.

    Science.gov (United States)

    Li, Ruijiang; Jia, Xun; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B

    2010-06-01

To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Given a set of volumetric images of a patient at N breathing phases as the training data, deformable image registration was performed between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, new DVFs can be generated, which, when applied on the reference image, lead to new volumetric images. A volumetric image can then be reconstructed from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image. The algorithm was implemented on graphics processing units (GPUs) to achieve real-time efficiency. The training data were generated using a realistic and dynamic mathematical phantom with ten breathing phases. The testing data were 360 cone beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude. The average relative image intensity error of the reconstructed volumetric images is 6.9% +/- 2.4%. The average 3D tumor localization error is 0.8 +/- 0.5 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for reconstructing a volumetric image from each projection is 0.24 s (range: 0.17-0.35 s). The authors have shown the feasibility of reconstructing volumetric images and localizing tumor positions in 3D in near real-time from a single x-ray image.
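The core idea, parameterizing DVFs by a few PCA coefficients and fitting them so the computed projection matches the measured one, can be sketched with a toy linear forward model. In the actual method the coefficient-to-projection mapping is nonlinear and optimized iteratively on the GPU; all dimensions and operators below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: 3 PCA eigenvectors spanning a 30-dim DVF space, and a
# linear "deform then project" operator P yielding 5 measurements.
E = rng.normal(size=(30, 3))    # PCA eigenvectors of the training DVFs
P = rng.normal(size=(5, 30))    # projection operator (linearized here)
w_true = np.array([1.5, -0.7, 0.3])
measured = P @ (E @ w_true)     # simulated single x-ray projection

# Recover the PCA coefficients by matching the computed projection to
# the measured one; with a linear model this is a least-squares fit.
w_hat, *_ = np.linalg.lstsq(P @ E, measured, rcond=None)
dvf = E @ w_hat                 # reconstructed deformation field
print(np.allclose(w_hat, w_true))  # True
```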

  19. Spirometry and volumetric capnography in lung function assessment of obese and normal-weight individuals without asthma.

    Science.gov (United States)

    Ferreira, Mariana S; Mendes, Roberto T; Marson, Fernando A L; Zambon, Mariana P; Antonio, Maria A R G M; Paschoal, Ilma A; Toro, Adyléia A D C; Severino, Silvana D; Ribeiro, Maria A G O; Ribeiro, José D

    To analyze and compare lung function of obese and healthy, normal-weight children and adolescents, without asthma, through spirometry and volumetric capnography. Cross-sectional study including 77 subjects (38 obese) aged 5-17 years. All subjects underwent spirometry and volumetric capnography. The evaluations were repeated in obese subjects after the use of a bronchodilator. At the spirometry assessment, obese individuals, when compared with the control group, showed lower values of forced expiratory volume in the first second by forced vital capacity (FEV1/FVC) and expiratory flows at 75% and between 25 and 75% of the FVC (p < 0.05), particularly among those older than 11 years (p < 0.05). Even without the diagnosis of asthma by clinical criteria and without response to bronchodilator use, obese individuals showed lower FEV1/FVC values and forced expiratory flow, indicating the presence of an obstructive process. Volumetric capnography showed that obese individuals had higher alveolar tidal volume, with no alterations in ventilation homogeneity, suggesting flow alterations, without affecting lung volumes. Copyright © 2017 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.

  20. An Annotated Dataset of 14 Cardiac MR Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This note describes a dataset consisting of 14 annotated cardiac MR images. Points of correspondence are placed on each image at the left ventricle (LV). As such, the dataset can be readily used for building statistical models of shape. Further, format specifications and terms of use are given.

  1. SU-E-T-540: Volumetric Modulated Total Body Irradiation Using a Rotational Lazy Susan-Like Immobilization System

    International Nuclear Information System (INIS)

    Gu, X; Hrycushko, B; Lee, H; Lamphier, R; Jiang, S; Abdulrahman, R; Timmerman, R

    2014-01-01

    Purpose: Traditional extended SSD total body irradiation (TBI) techniques can be problematic in terms of patient comfort and/or dose uniformity. This work aims to develop a comfortable TBI technique that achieves a uniform dose distribution to the total body while reducing the dose to organs at risk for complications. Methods: To maximize patient comfort, a lazy Susan-like couch top immobilization system which rotates about a pivot point was developed. During CT simulation, a patient is immobilized by a Vac-Lok bag within the body frame. The patient is scanned head-first and then feet-first following 180° rotation of the frame. The two scans are imported into the Pinnacle treatment planning system and concatenated to give a full-body CT dataset. Treatment planning matches multiple isocenter volumetric modulated arc (VMAT) fields of the upper body and multiple isocenter parallel-opposed fields of the lower body. VMAT fields of the torso are optimized to satisfy lung dose constraints while achieving a therapeutic dose to the torso. The multiple isocenter VMAT fields are delivered with an indexed couch, followed by body frame rotation about the pivot point to treat the lower body isocenters. The treatment workflow was simulated with a Rando phantom, and the plan was mapped to a solid water slab phantom for point- and film-dose measurements at multiple locations. Results: The treatment plan of 12 Gy over 8 fractions achieved 80.2% coverage of the total body volume within ±10% of the prescription dose. The mean lung dose was 8.1 Gy. All ion chamber measurements were within ±1.7% compared to the calculated point doses. All relative film dosimetry showed at least a 98.0% gamma passing rate using a 3 mm/3% passing criterion. Conclusion: The proposed patient comfort-oriented TBI technique provides a uniform dose distribution within the total body while reducing the dose to the lungs.

  2. Adsorption indicators in double precipitation volumetry. II. Use of radioactive indicators

    International Nuclear Information System (INIS)

    Carnicero Tejerina, M. I.

    1961-01-01

    131I-fluorescein and 110Ag-silver sulphate have been used in order to check the role of adsorption indicators in the volumetric analysis of double precipitation reactions. It has been shown by using isotopes that adsorption of fluorescein on silver halides depends on the foreign cations present in the solution. (Author) 8 refs

  3. Superconductivity in volumetric and film ceramics Bi-Sr-Ca-Cu-O

    Energy Technology Data Exchange (ETDEWEB)

    Sukhanov, A A; Ozmanyan, Kh R; Sandomirskij, B B

    1988-07-10

    A superconducting transition with T_c0 = 82-95 K and T_c(R=0) = 82-72 K was observed in volumetric and film Bi(Sr1-xCax)2Cu3Oy samples obtained by solid-phase reaction. Temperature dependences of the resistance, critical current, and magnetic susceptibility are measured.

  4. MDCT linear and volumetric analysis of adrenal glands: Normative data and multiparametric assessment

    International Nuclear Information System (INIS)

    Carsin-Vu, Aline; Mule, Sebastien; Janvier, Annaelle; Hoeffel, Christine; Oubaya, Nadia; Delemer, Brigitte; Soyer, Philippe

    2016-01-01

    To study linear and volumetric adrenal measurements, their reproducibility, and correlations between total adrenal volume (TAV) and adrenal micronodularity, age, gender, body mass index (BMI), visceral (VAAT) and subcutaneous adipose tissue volume (SAAT), presence of diabetes, chronic alcohol abuse and chronic inflammatory disease (CID). We included 154 patients (M/F, 65/89; mean age, 57 years) undergoing abdominal multidetector row computed tomography (MDCT). Two radiologists prospectively and independently performed adrenal linear and volumetric measurements with semi-automatic software. Inter-observer reliability was studied using the inter-observer correlation coefficient (ICC). Relationships between TAV and associated factors were studied using bivariate and multivariable analysis. Mean TAV was 8.4 ± 2.7 cm³ (range, 3.3-18.7 cm³). ICC was excellent for TAV (0.97; 95 % CI: 0.96-0.98) and moderate to good for linear measurements. TAV was significantly greater in men (p < 0.0001), alcoholics (p = 0.04), diabetics (p = 0.0003) and those with micronodular glands (p = 0.001). TAV was lower in CID patients (p = 0.0001). TAV correlated positively with VAAT (r = 0.53, p < 0.0001), BMI (r = 0.42, p < 0.0001), SAAT (r = 0.29, p = 0.0003) and age (r = 0.23, p = 0.005). Multivariable analysis revealed gender, micronodularity, diabetes, age and BMI as independent factors influencing TAV. Adrenal gland MDCT-based volumetric measurements are more reproducible than linear measurements. Gender, micronodularity, age, BMI and diabetes independently influence TAV. (orig.)

  5. Dataset - Adviesregel PPL 2010

    NARCIS (Netherlands)

    Evert, van F.K.; Schans, van der D.A.; Geel, van W.C.A.; Slabbekoorn, J.J.; Booij, R.; Jukema, J.N.; Meurs, E.J.J.; Uenk, D.

    2011-01-01

    This dataset contains experimental data from a number of field experiments with potato in The Netherlands (Van Evert et al., 2011). The data are presented as an SQL dump of a PostgreSQL database (version 8.4.4). An outline of the entity-relationship diagram of the database is given in an

  6. A Hybrid Method for Interpolating Missing Data in Heterogeneous Spatio-Temporal Datasets

    Directory of Open Access Journals (Sweden)

    Min Deng

    2016-02-01

    Space-time interpolation is widely used to estimate missing or unobserved values in a dataset integrating both spatial and temporal records. Although space-time interpolation plays a key role in space-time modeling, existing methods were mainly developed for space-time processes that exhibit stationarity in space and time. It is still challenging to model heterogeneity of space-time data in the interpolation model. To overcome this limitation, in this study, a novel space-time interpolation method considering both spatial and temporal heterogeneity is developed for estimating missing data in space-time datasets. The interpolation operation is first implemented in spatial and temporal dimensions. Heterogeneous covariance functions are constructed to obtain the best linear unbiased estimates in spatial and temporal dimensions. Spatial and temporal correlations are then considered to combine the interpolation results in spatial and temporal dimensions to estimate the missing data. The proposed method is tested on annual average temperature and precipitation data in China (1984–2009). Experimental results show that, for these datasets, the proposed method outperforms three state-of-the-art methods: spatio-temporal kriging, spatio-temporal inverse distance weighting, and the point estimation model of biased hospitals-based area disease estimation.
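
One of the baselines named above, spatio-temporal inverse distance weighting, is simple enough to sketch directly. The station coordinates, values, and time-scaling factor below are illustrative; the paper's heterogeneous-covariance method itself is more involved.

```python
import numpy as np

def st_idw(obs, vals, query, p=2.0, time_scale=1.0):
    """Space-time inverse distance weighting (a baseline the study
    compares against). obs holds (x, y, t) rows; time is rescaled so
    one time unit is comparable to one distance unit."""
    d = obs - query
    d[:, 2] *= time_scale                 # weight the temporal axis
    dist = np.sqrt((d ** 2).sum(axis=1))
    if np.any(dist == 0):                 # exact space-time hit
        return float(vals[np.argmin(dist)])
    w = 1.0 / dist ** p
    return float(w @ vals / w.sum())

# Toy network: four station records with illustrative temperature values.
pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 1.]])
vals = np.array([10.0, 12.0, 11.0, 14.0])
est = st_idw(pts, vals, np.array([0.5, 0.5, 0.5]))
```

The estimate is a convex combination of the observed values, so it always lies within their range, which is one reason IDW is a robust but conservative baseline.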

  7. Tension in the recent Type Ia supernovae datasets

    International Nuclear Information System (INIS)

    Wei, Hao

    2010-01-01

    In the present work, we investigate the tension in the recent Type Ia supernovae (SNIa) datasets Constitution and Union. We show that they are in tension not only with the observations of the cosmic microwave background (CMB) anisotropy and the baryon acoustic oscillations (BAO), but also with other SNIa datasets such as Davis and SNLS. Then, we find the main sources responsible for the tension. Further, we make this more robust by employing the method of random truncation. Based on the results of this work, we suggest two truncated versions of the Union and Constitution datasets, namely the UnionT and ConstitutionT SNIa samples, whose behaviors are more regular.

  8. Viability of Controlling Prosthetic Hand Utilizing Electroencephalograph (EEG) Dataset Signal

    Science.gov (United States)

    Miskon, Azizi; A/L Thanakodi, Suresh; Raihan Mazlan, Mohd; Mohd Haziq Azhar, Satria; Nooraya Mohd Tawil, Siti

    2016-11-01

    This project presents the development of an artificial hand controlled by electroencephalograph (EEG) signal datasets for prosthetic applications. The EEG signal datasets were used to improve the way the prosthetic hand is controlled compared with the electromyograph (EMG). EMG has disadvantages for a person who has not used the muscle for a long time, and also for persons with degenerative issues due to age. Thus, the EEG datasets were found to be an alternative to EMG. The datasets used in this work were taken from the Brain Computer Interface (BCI) Project and were already classified for open, close and combined movement operations. They served as input to control the prosthetic hand through an interface system between Microsoft Visual Studio and an Arduino. The obtained results reveal the prosthetic hand to be more efficient and faster in response to the EEG datasets with an additional LiPo (lithium polymer) battery attached to the prosthetic. Some limitations were also identified in terms of the hand movements and the weight of the prosthetic, and suggestions for improvement are presented. Overall, the objective of this paper was achieved, as the prosthetic hand was found to be feasible in operation utilizing the EEG datasets.
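
The dataset-to-hand interface can be pictured as a small dispatch layer that turns a classified EEG movement label into a controller command. The three labels match the classes named above, but the command bytes and function names are hypothetical, and the actual serial write to the Arduino is omitted.

```python
# Hypothetical mapping from classified EEG movement labels (the three
# classes in the BCI Project datasets) to one-byte commands for the
# hand controller. The byte values are illustrative, not from the paper.
COMMANDS = {"open": b"O", "close": b"C", "combined": b"B"}

def to_command(label: str) -> bytes:
    """Translate a classified EEG label into a controller command byte."""
    if label not in COMMANDS:
        raise ValueError(f"unknown EEG class: {label!r}")
    return COMMANDS[label]

# In the real setup this byte would be written to the Arduino over a
# serial link (e.g., with pyserial); here we only show the dispatch.
cmd = to_command("open")
```

Keeping the classification output and the motor command vocabulary decoupled like this is what lets the same prosthetic firmware serve either EEG- or EMG-driven front ends.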

  9. Technical note: An inorganic water chemistry dataset (1972–2011 ...

    African Journals Online (AJOL)

    A national dataset of inorganic chemical data of surface waters (rivers, lakes, and dams) in South Africa is presented and made freely available. The dataset comprises more than 500 000 complete water analyses from 1972 up to 2011, collected from more than 2 000 sample monitoring stations in South Africa. The dataset ...

  10. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2013-01-01

    The PPD activities, in the first part of 2013, have been focused mostly on the final physics validation and preparation for the data reprocessing of the full 8 TeV datasets with the latest calibrations. These samples will be the basis for the preliminary results for summer 2013 but most importantly for the final publications on the 8 TeV Run 1 data. The reprocessing involves also the reconstruction of a significant fraction of “parked data” that will allow CMS to perform a whole new set of precision analyses and searches. In this way the CMSSW release 53X is becoming the legacy release for the 8 TeV Run 1 data. The regular operation activities have included taking care of the prolonged proton-proton data taking and the run with proton-lead collisions that ended in February. The DQM and Data Certification team has deployed a continuous effort to promptly certify the quality of the data. The luminosity-weighted certification efficiency (requiring all sub-detectors to be certified as usab...

  11. Wind and wave dataset for Matara, Sri Lanka

    Science.gov (United States)

    Luo, Yao; Wang, Dongxiao; Priyadarshana Gamage, Tilak; Zhou, Fenghua; Madusanka Widanage, Charith; Liu, Taiwei

    2018-01-01

    We present a continuous in situ hydro-meteorology observational dataset from a set of instruments first deployed in December 2012 in the south of Sri Lanka, facing toward the north Indian Ocean. In these waters, simultaneous records of wind and wave data are sparse due to difficulties in deploying measurement instruments, although the area hosts one of the busiest shipping lanes in the world. This study describes the survey, deployment, and measurements of wind and waves, with the aim of offering future users of the dataset the most comprehensive and as much information as possible. This dataset advances our understanding of the nearshore hydrodynamic processes and wave climate, including sea waves and swells, in the north Indian Ocean. Moreover, it is a valuable resource for ocean model parameterization and validation. The archived dataset (Table 1) is examined in detail, including wave data at two locations with water depths of 20 and 10 m comprising synchronous time series of wind, ocean astronomical tide, air pressure, etc. In addition, we use these wave observations to evaluate the ERA-Interim reanalysis product. Based on Buoy 2 data, the swells are the main component of waves year-round, although monsoons can markedly alter the proportion between swell and wind sea. The dataset (Luo et al., 2017) is publicly available from Science Data Bank (https://doi.org/10.11922/sciencedb.447).

  12. Dosimetric analysis of testicular doses in prostate intensity-modulated and volumetric-modulated arc radiation therapy at different energy levels

    Energy Technology Data Exchange (ETDEWEB)

    Onal, Cem, E-mail: hcemonal@hotmail.com; Arslan, Gungor; Dolek, Yemliha; Efe, Esma

    2016-01-01

    The aim of this study is to evaluate the incidental testicular doses during prostate radiation therapy with intensity-modulated radiotherapy (IMRT) and volumetric-modulated arc radiotherapy (VMAT) at different energies. Dosimetric data of 15 patients with intermediate-risk prostate cancer who were treated with radiotherapy were analyzed. The prescribed dose was 78 Gy in 39 fractions. Dosimetric analysis compared testicular doses generated by 7-field intensity-modulated radiotherapy and volumetric-modulated arc radiotherapy with a single arc at 6, 10, and 15 MV energy levels. Testicular doses calculated from the treatment planning system and doses measured from the detectors were analyzed. Mean testicular doses from the intensity-modulated radiotherapy and volumetric-modulated arc radiotherapy per fraction calculated in the treatment planning system were 16.3 ± 10.3 cGy vs 21.5 ± 11.2 cGy (p = 0.03) at 6 MV, 13.4 ± 10.4 cGy vs 17.8 ± 10.7 cGy (p = 0.04) at 10 MV, and 10.6 ± 8.5 cGy vs 14.5 ± 8.6 cGy (p = 0.03) at 15 MV, respectively. Mean scattered testicular doses in the phantom measurements were 99.5 ± 17.2 cGy, 118.7 ± 16.4 cGy, and 193.9 ± 14.5 cGy at 6, 10, and 15 MV, respectively, in the intensity-modulated radiotherapy plans. In the volumetric-modulated arc radiotherapy plans, corresponding testicular doses per course were 90.4 ± 16.3 cGy, 103.6 ± 16.4 cGy, and 139.3 ± 14.6 cGy at 6, 10, and 15 MV, respectively. In conclusion, this study was the first to measure the incidental testicular doses by intensity-modulated radiotherapy and volumetric-modulated arc radiotherapy plans at different energy levels during prostate-only irradiation. Higher photon energy and volumetric-modulated arc radiotherapy plans resulted in higher incidental testicular doses compared with lower photon energy and intensity-modulated radiotherapy plans.

  13. Heuristics for Relevancy Ranking of Earth Dataset Search Results

    Science.gov (United States)

    Lynnes, Christopher; Quinn, Patrick; Norton, James

    2016-01-01

    As the variety of Earth science datasets increases, science researchers find it more challenging to discover and select the datasets that best fit their needs. The most common way for search providers to address this problem is to rank the datasets returned for a query by their likely relevance to the user. Large web page search engines typically use text matching supplemented with reverse link counts, semantic annotations and user intent modeling. However, this produces uneven results when applied to dataset metadata records simply externalized as a web page. Fortunately, data and search providers have decades of experience in serving data user communities, allowing them to form heuristics that leverage the structure in the metadata together with knowledge about the user community. Some of these heuristics include specific ways of matching the user input to the essential measurements in the dataset and determining overlaps of time range and spatial areas. Heuristics based on the novelty of the datasets can prioritize later, better versions of data over similar predecessors. And knowledge of how different user types and communities use data can be brought to bear in cases where characteristics of the user (discipline, expertise) or their intent (applications, research) can be divined. The Earth Observing System Data and Information System has begun implementing some of these heuristics in the relevancy algorithm of its Common Metadata Repository search engine.
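
A toy composite score gives the flavor of how such heuristics combine: a measurement keyword match, a temporal-overlap term, and a novelty bonus. The field names and weights below are illustrative assumptions, not the actual Common Metadata Repository algorithm.

```python
from datetime import date

def relevance_score(query_terms, dataset, query_range):
    """Toy composite of the heuristics described above. The dataset
    dict layout and the 0.1 novelty weight are hypothetical."""
    terms = {t.lower() for t in query_terms}
    text = " ".join(dataset["measurements"]).lower()
    keyword = sum(t in text for t in terms) / max(len(terms), 1)

    # Fraction of the query's time window covered by the dataset.
    q0, q1 = query_range
    d0, d1 = dataset["time_range"]
    overlap_days = (min(q1, d1) - max(q0, d0)).days
    temporal = max(overlap_days, 0) / max((q1 - q0).days, 1)

    novelty = 0.1 * dataset["version"]   # later versions rank higher
    return keyword + temporal + novelty

ds = {"measurements": ["sea surface temperature"],
      "time_range": (date(2000, 1, 1), date(2010, 1, 1)),
      "version": 2}
score = relevance_score(["temperature"], ds,
                        (date(2005, 1, 1), date(2006, 1, 1)))
```

A spatial-overlap term would plug in the same way, with an area-intersection ratio replacing the day count.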

  14. Condensing Massive Satellite Datasets For Rapid Interactive Analysis

    Science.gov (United States)

    Grant, G.; Gallaher, D. W.; Lv, Q.; Campbell, G. G.; Fowler, C.; LIU, Q.; Chen, C.; Klucik, R.; McAllister, R. A.

    2015-12-01

    Our goal is to enable users to interactively analyze massive satellite datasets, identifying anomalous data or values that fall outside of thresholds. To achieve this, the project seeks to create a derived database containing only the most relevant information, accelerating the analysis process. The database is designed to be an ancillary tool for the researcher, not an archival database to replace the original data. This approach is aimed at improving performance by reducing the overall size by way of condensing the data. The primary challenges of the project include: - The nature of the research question(s) may not be known ahead of time. - The thresholds for determining anomalies may be uncertain. - Problems associated with processing cloudy, missing, or noisy satellite imagery. - The contents and method of creation of the condensed dataset must be easily explainable to users. The architecture of the database will reorganize spatially-oriented satellite imagery into temporally-oriented columns of data (a.k.a., "data rods") to facilitate time-series analysis. The database itself is an open-source parallel database, designed to make full use of clustered server technologies. A demonstration of the system capabilities will be shown. Applications for this technology include quick-look views of the data, as well as the potential for on-board satellite processing of essential information, with the goal of reducing data latency.
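
The "data rods" reorganization, pivoting a time-stack of grids into per-pixel time series, can be shown with a small NumPy reshape. Array sizes and the anomaly threshold are toy values, not the project's actual data.

```python
import numpy as np

# A stack of daily satellite grids: (time, row, col). "Data rods" store
# the same values pixel-major so each pixel's full time series is
# contiguous, which makes time-series scans (e.g., threshold checks)
# cheap compared with reading every image per pixel.
imagery = np.arange(5 * 4 * 4).reshape(5, 4, 4)   # 5 days, 4x4 grid

# Reorganize to (pixel, time): one "rod" per spatial location.
rods = imagery.reshape(5, -1).T                    # (16 pixels, 5 days)

# Flag pixels whose series ever exceeds a threshold (anomaly screen).
anomalous = np.flatnonzero((rods > 70).any(axis=1))
```

In the real system the rods would live in a parallel database rather than in memory, but the access pattern, one contiguous read per time series, is the same.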

  15. The Dataset of Countries at Risk of Electoral Violence

    OpenAIRE

    Birch, Sarah; Muchlinski, David

    2017-01-01

    Electoral violence is increasingly affecting elections around the world, yet researchers have been limited by a paucity of granular data on this phenomenon. This paper introduces and describes a new dataset of electoral violence – the Dataset of Countries at Risk of Electoral Violence (CREV) – that provides measures of 10 different types of electoral violence across 642 elections held around the globe between 1995 and 2013. The paper provides a detailed account of how and why the dataset was ...

  16. Towards interoperable and reproducible QSAR analyses: Exchange of datasets.

    Science.gov (United States)

    Spjuth, Ola; Willighagen, Egon L; Guha, Rajarshi; Eklund, Martin; Wikberg, Jarl Es

    2010-06-30

    QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constrain collaborations and re-use of data. We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusions regarding descriptors by defining them crisply. 
This makes it easy to join, extend, and combine datasets and hence work collectively, but
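
The spirit of such an exchange format can be sketched as a small XML document pairing structures with versioned descriptor references. The element and attribute names below are illustrative only, not the actual QSAR-ML schema, and the ontology identifier is a placeholder.

```python
import xml.etree.ElementTree as ET

# A minimal dataset description in the spirit of QSAR-ML: chemical
# structures plus versioned descriptor references tied to an ontology
# identifier. Names here are illustrative, not the real schema.
root = ET.Element("qsarDataset")
ET.SubElement(root, "structure", id="mol1", smiles="CCO")
ET.SubElement(root, "descriptor", id="XLogP",
              ontologyRef="descriptor-ontology:xlogP",   # placeholder ID
              implementation="CDK", version="1.2.3")
ET.SubElement(root, "value",
              structure="mol1", descriptor="XLogP").text = "-0.14"

xml_text = ET.tostring(root, encoding="unicode")
```

Because each descriptor carries its implementation and version, another lab can recompute the values and reproduce the dataset setup exactly, which is the reproducibility property the paper is after.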

  17. VideoWeb Dataset for Multi-camera Activities and Non-verbal Communication

    Science.gov (United States)

    Denina, Giovanni; Bhanu, Bir; Nguyen, Hoang Thanh; Ding, Chong; Kamal, Ahmed; Ravishankar, Chinya; Roy-Chowdhury, Amit; Ivers, Allen; Varda, Brenda

    Human-activity recognition is one of the most challenging problems in computer vision. Researchers from around the world have tried to solve this problem and have come a long way in recognizing simple motions and atomic activities. As the computer vision community heads toward fully recognizing human activities, a challenging and labeled dataset is needed. To respond to that need, we collected a dataset of realistic scenarios in a multi-camera network environment (VideoWeb) involving multiple persons performing dozens of different repetitive and non-repetitive activities. This chapter describes the details of the dataset. We believe that this VideoWeb Activities dataset is unique and it is one of the most challenging datasets available today. The dataset is publicly available online at http://vwdata.ee.ucr.edu/ along with the data annotation.

  18. Toward computational cumulative biology by combining models of biological datasets.

    Science.gov (United States)

    Faisal, Ali; Peltonen, Jaakko; Georgii, Elisabeth; Rung, Johan; Kaski, Samuel

    2014-01-01

    A main challenge of data-driven sciences is how to make maximal use of the progressively expanding databases of experimental datasets in order to keep research cumulative. We introduce the idea of a modeling-based dataset retrieval engine designed for relating a researcher's experimental dataset to earlier work in the field. The search is (i) data-driven to enable new findings, going beyond the state of the art of keyword searches in annotations, (ii) modeling-driven, to include both biological knowledge and insights learned from data, and (iii) scalable, as it is accomplished without building one unified grand model of all data. Assuming each dataset has been modeled beforehand, by the researchers or automatically by database managers, we apply a rapidly computable and optimizable combination model to decompose a new dataset into contributions from earlier relevant models. By using the data-driven decomposition, we identify a network of interrelated datasets from a large annotated human gene expression atlas. While tissue type and disease were major driving forces for determining relevant datasets, the found relationships were richer, and the model-based search was more accurate than the keyword search; moreover, it recovered biologically meaningful relationships that are not straightforwardly visible from annotations; for instance, between cells in different developmental stages such as thymocytes and T-cells. Data-driven links and citations matched to a large extent; the data-driven links even uncovered corrections to the publication data, as two of the most linked datasets were not highly cited and turned out to have wrong publication entries in the database.
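
The decomposition idea, expressing a new dataset as a weighted combination of earlier models, can be illustrated with ordinary least squares standing in for the paper's probabilistic combination model; the model "signatures" and weights below are synthetic.

```python
import numpy as np

# Each earlier dataset i is summarized by a model "signature" m_i; a new
# dataset x is decomposed as x ≈ sum_i w_i * m_i, and the large weights
# point to the most relevant prior datasets. The real engine combines
# probabilistic models; least squares is a stand-in here.
rng = np.random.default_rng(1)
M = rng.normal(size=(6, 50))          # 6 earlier models, 50-dim signatures
w_true = np.array([0.0, 0.9, 0.0, 0.0, 0.4, 0.0])
x = w_true @ M + rng.normal(scale=0.01, size=50)

w, *_ = np.linalg.lstsq(M.T, x, rcond=None)
top = sorted(np.argsort(w)[::-1][:2].tolist())   # two strongest contributions
```

Because the decomposition is solved per query against precomputed per-dataset models, the engine never needs the "one unified grand model of all data" that the abstract rules out.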

  19. Modeling of macrosegregation caused by volumetric deformation in a coherent mushy zone

    Science.gov (United States)

    Nicolli, Lilia C.; Mo, Asbjørn; M'hamdi, Mohammed

    2005-02-01

    A two-phase volume-averaged continuum model is presented that quantifies macrosegregation formation during solidification of metallic alloys caused by deformation of the dendritic network and associated melt flow in the coherent part of the mushy zone. Also, the macrosegregation formation associated with the solidification shrinkage (inverse segregation) is taken into account. Based on experimental evidence established elsewhere, volumetric viscoplastic deformation (densification/dilatation) of the coherent dendritic network is included in the model. While the thermomechanical model previously outlined (M. M’Hamdi, A. Mo, and C.L. Martin: Metall. Mater. Trans. A, 2002, vol. 33A, pp. 2081-93) has been used to calculate the temperature and velocity fields associated with the thermally induced deformations and shrinkage driven melt flow, the solute conservation equation including both the liquid and a solid volume-averaged velocity is solved in the present study. In modeling examples, the macrosegregation formation caused by mechanically imposed as well as by thermally induced deformations has been calculated. The modeling results for an Al-4 wt pct Cu alloy indicate that even quite small volumetric strains (≈2 pct), which can be associated with thermally induced deformations, can lead to a macroscopic composition variation in the final casting comparable to that resulting from the solidification shrinkage induced melt flow. These results can be explained by the relatively large volumetric viscoplastic deformation in the coherent mush resulting from the applied constitutive model, as well as the relatively large difference in composition for the studied Al-Cu alloy in the solid and liquid phases at high solid fractions at which the deformation takes place.
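
A generic two-phase volume-averaged solute balance consistent with the description above can be written as follows; the exact closure terms used in the paper may differ:

```latex
\frac{\partial}{\partial t}\bigl(g_s \rho_s \langle c_s\rangle + g_l \rho_l \langle c_l\rangle\bigr)
+ \nabla \cdot \bigl(g_s \rho_s \langle c_s\rangle \mathbf{v}_s
                   + g_l \rho_l \langle c_l\rangle \mathbf{v}_l\bigr) = 0
```

Here $g_s$, $g_l$ are the phase volume fractions, $\rho_s$, $\rho_l$ the phase densities, $\langle c_s\rangle$, $\langle c_l\rangle$ the volume-averaged solute concentrations, and $\mathbf{v}_s$, $\mathbf{v}_l$ the solid and liquid volume-averaged velocities. The solid-velocity advection term is what couples viscoplastic densification/dilatation of the coherent mush to macrosegregation, which is why even small volumetric strains shift the final composition.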

  20. The Role of Datasets on Scientific Influence within Conflict Research.

    Directory of Open Access Journals (Sweden)

    Tracy Van Holt

    We inductively tested if a coherent field of inquiry in human conflict research emerged in an analysis of published research involving "conflict" in the Web of Science (WoS) over a 66-year period (1945-2011). We created a citation network that linked the 62,504 WoS records and their cited literature. We performed a critical path analysis (CPA), a specialized social network analysis, on this citation network (~1.5 million works) to highlight the main contributions in conflict research and to test if research on conflict has in fact evolved to represent a coherent field of inquiry. Out of this vast dataset, 49 academic works were highlighted by the CPA, suggesting a coherent field of inquiry; which means that researchers in the field acknowledge seminal contributions and share a common knowledge base. Other conflict concepts that were also analyzed, such as interpersonal conflict or conflict among pharmaceuticals, did not form their own CP. A single path formed, meaning that there was a cohesive set of ideas that built upon previous research. This is in contrast to a main path analysis of conflict from 1957-1971, where ideas did not persist, in that multiple paths existed and died or emerged, reflecting a lack of scientific coherence (Carley, Hummon, and Harty, 1993). The critical path had a number of key features: (1) Concepts that built throughout include the notion that resource availability drives conflict, which emerged in the 1960s-1990s and continued until 2011. More recent intrastate studies that focused on inequalities emerged from interstate studies on the democracy of peace earlier on the path. (2) Recent research on the path focused on forecasting conflict, which depends on well-developed metrics and theories to model. (3) We used keyword analysis to independently show how the CP was topically linked (i.e., through democracy, modeling, resources, and geography). Publicly available conflict datasets developed early on helped
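
A simplified cousin of critical/main path analysis can be computed on a toy citation DAG with search path counts (SPC), which weight each edge by the number of source-to-sink paths passing through it; the heaviest edges then sketch the field's backbone. The five-node network below is illustrative only.

```python
from functools import lru_cache

# Toy citation DAG: edges point from a citing work to the work it cites.
cites = {"E": ["C", "D"], "D": ["B"], "C": ["B"], "B": ["A"], "A": []}

def spc(edges):
    """Search path count per edge: (paths source->u) * (paths v->sink)."""
    nodes = set(edges) | {v for vs in edges.values() for v in vs}
    parents = {n: [] for n in nodes}
    for u, vs in edges.items():
        for v in vs:
            parents[v].append(u)

    @lru_cache(maxsize=None)
    def down(n):                  # number of paths from n to any sink
        return max(1, sum(down(v) for v in edges.get(n, [])))

    @lru_cache(maxsize=None)
    def up(n):                    # number of paths from any source to n
        return max(1, sum(up(p) for p in parents[n]))

    return {(u, v): up(u) * down(v)
            for u, vs in edges.items() for v in vs}

weights = spc(cites)
heaviest = max(weights, key=weights.get)   # backbone edge of the toy DAG
```

Both source-to-sink paths (E-C-B-A and E-D-B-A) traverse the edge B->A, so it carries the highest weight; in a real citation network such convergent edges mark the works a field keeps building on.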

  1. VOLUMETRIC LEAK DETECTION IN LARGE UNDERGROUND STORAGE TANKS - VOLUME II: APPENDICES A-E

    Science.gov (United States)

    The program of experiments conducted at Griffiss Air Force Base was devised to expand the understanding of large underground storage tank behavior as it impacts the performance of volumetric leak detection testing. The report addresses three important questions about testing the ...

  2. Lung, liver and lymph node metastases in follow-up MSCT. Comprehensive volumetric assessment of lesion size changes

    International Nuclear Information System (INIS)

    Wulff, A.M.; Fischer, S.; Biederer, J.; Heller, M.; Fabel, M.; Bolte, H.; Freitag-Wolf, S.; Soza, G.; Tietjen, C.

    2012-01-01

    Purpose: To investigate measurement accuracy in terms of precision and inter-rater variability in the simultaneous volumetric assessment of lung, liver and lymph node metastasis size change over time in comparison to RECIST 1.1. Materials and Methods: Three independent readers evaluated multislice CT data from clinical follow-up studies (chest/abdomen) in 50 patients with metastases. A total of 117 lung, 77 liver and 97 lymph node metastases were assessed manually (RECIST 1.1) and by volumetry with semi-automated software. The quality of segmentation and need for manual adjustments were recorded. Volumes were converted to effective diameters to allow comparison to RECIST. For statistical assessment of precision and interobserver agreement, the Wilcoxon-signed rank test and Bland-Altman plots were utilized. Results: The quality of segmentation after manual correction was acceptable to excellent in 95 % of lesions and manual corrections were applied in 21 - 36 % of all lesions, most predominantly in lymph nodes. Mean precision was 2.6 - 6.3 % (manual) with 0.2 - 1.5 % (effective) relative measurement deviation (p <.001). Inter-reader median variation coefficients ranged from 9.4 - 12.8 % (manual) and 2.9 - 8.2 % (volumetric) for different lesion types (p <.001). The limits of agreement were ± 9.8 to ± 11.2 % for volumetric assessment. Conclusion: Superior precision and inter-rater variability of volumetric over manual measurement of lesion change over time was demonstrated in a whole body setting. (orig.)
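
The abstract's conversion of volumes to effective diameters is presumably the usual sphere-equivalent formula, d = (6V/π)^(1/3), which can be written directly:

```python
import math

def effective_diameter(volume):
    """Sphere-equivalent diameter for a given volume: d = (6*V/pi)**(1/3).
    Units follow the input (e.g., mm^3 in, mm out)."""
    return (6.0 * volume / math.pi) ** (1.0 / 3.0)

# A sphere of diameter 10 mm has volume (4/3)*pi*5**3 ≈ 523.6 mm^3;
# converting that volume back recovers the diameter.
v = (4.0 / 3.0) * math.pi * 5.0 ** 3
d = effective_diameter(v)
```

Because diameter grows as the cube root of volume, relative volume changes are roughly three times the corresponding diameter changes, which is why volumetry can resolve smaller lesion changes than RECIST diameters.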

  3. Lung, liver and lymph node metastases in follow-up MSCT. Comprehensive volumetric assessment of lesion size changes

    Energy Technology Data Exchange (ETDEWEB)

    Wulff, A.M.; Fischer, S.; Biederer, J.; Heller, M.; Fabel, M. [Universitaetsklinikum Schleswig-Holstein, Kiel (Germany). Klinik fuer Diagnostische Radiologie; Bolte, H. [Universitaetsklinikum Muenster (Germany). Klinik und Poliklinik fuer Nuklearmedizin; Freitag-Wolf, S. [Universitaetsklinikum Schleswig-Holstein, Kiel (Germany). Inst. fuer Medizinische Informatik und Statistik; Soza, G.; Tietjen, C. [Siemens AG (Germany). Imaging and IT Div. Computed Tomography

    2012-09-15

    Purpose: To investigate measurement accuracy in terms of precision and inter-rater variability in the simultaneous volumetric assessment of lung, liver and lymph node metastasis size change over time in comparison to RECIST 1.1. Materials and Methods: Three independent readers evaluated multislice CT data from clinical follow-up studies (chest/abdomen) in 50 patients with metastases. A total of 117 lung, 77 liver and 97 lymph node metastases were assessed manually (RECIST 1.1) and by volumetry with semi-automated software. The quality of segmentation and the need for manual adjustments were recorded. Volumes were converted to effective diameters to allow comparison to RECIST. For statistical assessment of precision and interobserver agreement, the Wilcoxon signed-rank test and Bland-Altman plots were utilized. Results: The quality of segmentation after manual correction was acceptable to excellent in 95 % of lesions, and manual corrections were applied in 21 - 36 % of all lesions, predominantly in lymph nodes. Mean precision was 2.6 - 6.3 % (manual) versus 0.2 - 1.5 % (volumetric, as effective diameter) relative measurement deviation (p < .001). Inter-reader median variation coefficients ranged from 9.4 - 12.8 % (manual) and 2.9 - 8.2 % (volumetric) for different lesion types (p < .001). The limits of agreement were ± 9.8 to ± 11.2 % for volumetric assessment. Conclusion: Superior precision and lower inter-rater variability of volumetric compared to manual measurement of lesion change over time were demonstrated in a whole-body setting. (orig.)
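
    The abstracts above convert lesion volumes to effective diameters for comparison with RECIST's diameter-based criteria. Assuming the usual sphere-equivalent definition (the abstract does not state the formula explicitly), the conversion is d = (6V/π)^(1/3), sketched below:

```python
import math

def effective_diameter(volume_mm3: float) -> float:
    """Diameter of a sphere with the given volume: d = (6*V/pi)**(1/3)."""
    return (6.0 * volume_mm3 / math.pi) ** (1.0 / 3.0)

# A 1 cm^3 (1000 mm^3) lesion has an effective diameter of ~12.4 mm.
print(round(effective_diameter(1000.0), 1))  # 12.4
```

    A sanity check on the definition: a sphere of radius 5 mm (V = 4/3·π·125 mm³) maps back to an effective diameter of exactly 10 mm.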

  4. Amphiphilic ligand exchange reaction-induced supercapacitor electrodes with high volumetric and scalable areal capacitances

    Science.gov (United States)

    Nam, Donghyeon; Heo, Yeongbeom; Cheong, Sanghyuk; Ko, Yongmin; Cho, Jinhan

    2018-05-01

    We introduce high-performance supercapacitor electrodes with ternary components prepared from consecutive amphiphilic ligand-exchange-based layer-by-layer (LbL) assembly among amine-functionalized multi-walled carbon nanotubes (NH2-MWCNTs) in alcohol, oleic acid-stabilized Fe3O4 nanoparticles (OA-Fe3O4 NPs) in toluene, and semiconducting polymers (PEDOT:PSS) in water. The periodic insertion of semiconducting polymers within the (OA-Fe3O4 NP/NH2-MWCNT)n multilayer-coated indium tin oxide (ITO) electrode enhanced the volumetric and areal capacitances up to 408 ± 4 F cm-3 and 8.79 ± 0.06 mF cm-2 at 5 mV s-1, respectively, allowing excellent cycling stability (98.8% of the initial capacitance after 5000 cycles) and good rate capability. These values were higher than those of the OA-Fe3O4 NP/NH2-MWCNT multilayered electrode without semiconducting polymer linkers (volumetric capacitance ∼241 ± 4 F cm-3 and areal capacitance ∼1.95 ± 0.03 mF cm-2) at the same scan rate. Furthermore, when the asymmetric supercapacitor cells (ASCs) were prepared using OA-Fe3O4 NP- and OA-MnO NP-based ternary component electrodes, they displayed high volumetric energy (0.36 mW h cm-3) and power densities (820 mW cm-3).

  5. Generation of three-dimensional prototype models based on cone beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lambrecht, J.T.; Berndt, D.C.; Zehnder, M. [University of Basel, Department of Oral Surgery, University Hospital for Oral Surgery, Oral Radiology and Oral Medicine, Basel (Switzerland); Schumacher, R. [University of Applied Sciences Northwestern Switzerland, School of Life Sciences, Institute for Medical and Analytical Technologies, Muttenz (Switzerland)

    2009-03-15

    The purpose of this study was to generate three-dimensional models based on digital volumetric data that can be used in basic and advanced education. Four sets of digital volumetric data were established by cone beam computed tomography (CBCT) (Accuitomo, J. Morita, Kyoto, Japan). Datasets were exported in DICOM format and imported into the Mimics and Magics software programs to separate the different tissues such as nerve, tooth and bone. These data were transferred to a PolyJet 3D printing machine (Eden 330, Objet, Israel) to generate the models. Three-dimensional prototype models of certain limited anatomical structures as acquired volumetrically were fabricated. Generating three-dimensional models based on CBCT datasets is possible. Automated routine fabrication of these models, with the given infrastructure, is too time-consuming and therefore too expensive. (orig.)

  6. Generation of three-dimensional prototype models based on cone beam computed tomography

    International Nuclear Information System (INIS)

    Lambrecht, J.T.; Berndt, D.C.; Zehnder, M.; Schumacher, R.

    2009-01-01

    The purpose of this study was to generate three-dimensional models based on digital volumetric data that can be used in basic and advanced education. Four sets of digital volumetric data were established by cone beam computed tomography (CBCT) (Accuitomo, J. Morita, Kyoto, Japan). Datasets were exported in DICOM format and imported into the Mimics and Magics software programs to separate the different tissues such as nerve, tooth and bone. These data were transferred to a PolyJet 3D printing machine (Eden 330, Objet, Israel) to generate the models. Three-dimensional prototype models of certain limited anatomical structures as acquired volumetrically were fabricated. Generating three-dimensional models based on CBCT datasets is possible. Automated routine fabrication of these models, with the given infrastructure, is too time-consuming and therefore too expensive. (orig.)

  7. Astronaut Photography of the Earth: A Long-Term Dataset for Earth Systems Research, Applications, and Education

    Science.gov (United States)

    Stefanov, William L.

    2017-01-01

    The NASA Earth observations dataset obtained by humans in orbit using handheld film and digital cameras is freely accessible to the global community through the online searchable database at https://eol.jsc.nasa.gov, and offers a useful complement to traditional ground-commanded sensor data. The dataset includes imagery from the NASA Mercury (1961) through present-day International Space Station (ISS) programs, and currently totals over 2.6 million individual frames. Geographic coverage of the dataset includes land and ocean areas between approximately 52 degrees North and South latitudes, but is spatially and temporally discontinuous. The photographic dataset includes some significant impediments to immediate research, applied, and educational use: commercial RGB films and camera systems with overlapping bandpasses; use of different focal length lenses, unconstrained look angles, and variable spacecraft altitudes; and no native geolocation information. Such factors led to this dataset being underutilized by the community, but recent advances in automated and semi-automated image geolocation, image feature classification, and web-based services are adding new value to the astronaut-acquired imagery. A coupled ground software and on-orbit hardware system for the ISS is in development for planned deployment in mid-2017; this system will capture camera pose information for each astronaut photograph to allow automated, full georegistration of the data. The ground system component is currently in use to fully georeference imagery collected in response to International Disaster Charter activations, and the auto-registration procedures are being applied to the extensive historical database of imagery to add value for research and educational purposes. In parallel, machine learning techniques are being applied to automate feature identification and classification throughout the dataset, in order to build descriptive metadata that will improve search

  8. [Benefits of volumetric to facial rejuvenation. Part 1: Fat grafting].

    Science.gov (United States)

    Bui, P; Lepage, C

    2017-10-01

    For a number of years, a volumetric approach using autologous fat injection has been used to improve cosmetic outcomes in face-lift procedures and to achieve lasting rejuvenation. Autologous fat as a filling tissue has been used in plastic surgery since the late 19th century, but has only recently been combined with face-lift procedures. The value of this combination lies on the one hand in the pathophysiology of facial aging, which involves both skin sag and loss of volume, and on the other hand in the tissue induction properties of grafted fat, which "rejuvenates" the injected area. The strict methodology of harvesting, processing and then injecting an autologous fat graft is known as LipoStructure® or lipofilling. We describe the technique overall, then region by region. It is now well known and seems simple, effective and reproducible, but is nevertheless delicate. For each individual, it is necessary to restore a harmonious face with well-distributed volumes. By combining the volumetric approach with the face lift, the plastic surgeon plays a new role: instead of being a tailor, cutting away excess skin, he or she becomes a sculptor, remodeling the face to restore the harmony of youth. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  9. Automated volumetric breast density estimation: A comparison with visual assessment

    International Nuclear Information System (INIS)

    Seo, J.M.; Ko, E.S.; Han, B.-K.; Ko, E.Y.; Shin, J.H.; Hahn, S.Y.

    2013-01-01

    Aim: To compare automated volumetric breast density (VBD) measurement with visual assessment according to the Breast Imaging Reporting and Data System (BI-RADS), and to determine the factors influencing the agreement between them. Materials and methods: One hundred and ninety-three consecutive screening mammograms reported as negative were included in the study. Three radiologists assigned qualitative BI-RADS density categories to the mammograms. An automated volumetric breast-density method was used to measure VBD (% breast density) and density grade (VDG). Each case was classified into an agreement or disagreement group according to the comparison between visual assessment and VDG. The correlation between visual assessment and VDG was obtained. Various physical factors were compared between the two groups. Results: Agreement between visual assessment by the radiologists and VDG was good (ICC value = 0.757). VBD showed a highly significant positive correlation with visual assessment (Spearman's ρ = 0.754, p < 0.001). VBD and the X-ray tube target differed significantly between the agreement and disagreement groups (p = 0.02 and 0.04, respectively). Conclusion: Automated VBD is a reliable objective method for measuring breast density. The agreement between VDG and visual assessment by radiologists might be influenced by physical factors

  10. Computational assessment of visual search strategies in volumetric medical images.

    Science.gov (United States)

    Wen, Gezheng; Aizenman, Avigael; Drew, Trafton; Wolfe, Jeremy M; Haygood, Tamara Miner; Markey, Mia K

    2016-01-01

    When searching through volumetric images [e.g., computed tomography (CT)], radiologists appear to use two different search strategies: "drilling" (restrict eye movements to a small region of the image while quickly scrolling through slices), or "scanning" (search over large areas at a given depth before moving on to the next slice). To computationally identify the type of image information that is used in these two strategies, 23 naïve observers were instructed with either "drilling" or "scanning" when searching for target T's in 20 volumes of faux lung CTs. We computed saliency maps using both classical two-dimensional (2-D) saliency, and a three-dimensional (3-D) dynamic saliency that captures the characteristics of scrolling through slices. Comparing observers' gaze distributions with the saliency maps showed that search strategy alters the type of saliency that attracts fixations. Drillers' fixations aligned better with dynamic saliency and scanners with 2-D saliency. The computed saliency was greater for detected targets than for missed targets. Similar results were observed in data from 19 radiologists who searched five stacks of clinical chest CTs for lung nodules. Dynamic saliency may be superior to the 2-D saliency for detecting targets embedded in volumetric images, and thus "drilling" may be more efficient than "scanning."

  11. A reanalysis dataset of the South China Sea

    Science.gov (United States)

    Zeng, Xuezhi; Peng, Shiqiu; Li, Zhijin; Qi, Yiquan; Chen, Rongyu

    2014-01-01

    Ocean reanalysis provides a temporally continuous and spatially gridded four-dimensional estimate of the ocean state for a better understanding of the ocean dynamics and its spatial/temporal variability. Here we present a 19-year (1992–2010) high-resolution ocean reanalysis dataset of the upper ocean in the South China Sea (SCS) produced from an ocean data assimilation system. A wide variety of observations, including in-situ temperature/salinity profiles, ship-measured and satellite-derived sea surface temperatures, and sea surface height anomalies from satellite altimetry, are assimilated into the outputs of an ocean general circulation model using a multi-scale incremental three-dimensional variational data assimilation scheme, yielding a daily high-resolution reanalysis dataset of the SCS. Comparisons between the reanalysis and independent observations support the reliability of the dataset. The presented dataset provides the research community of the SCS an important data source for studying the thermodynamic processes of the ocean circulation and meso-scale features in the SCS, including their spatial and temporal variability. PMID:25977803

  12. A dataset of forest biomass structure for Eurasia.

    Science.gov (United States)

    Schepaschenko, Dmitry; Shvidenko, Anatoly; Usoltsev, Vladimir; Lakyda, Petro; Luo, Yunjian; Vasylyshyn, Roman; Lakyda, Ivan; Myklush, Yuriy; See, Linda; McCallum, Ian; Fritz, Steffen; Kraxner, Florian; Obersteiner, Michael

    2017-05-16

    The most comprehensive dataset of in situ destructive sampling measurements of forest biomass in Eurasia has been compiled from a combination of experiments undertaken by the authors and from scientific publications. Biomass is reported as four components: live trees (stem, bark, branches, foliage, roots); understory (above- and below-ground); green forest floor (above- and below-ground); and coarse woody debris (snags, logs, dead branches of living trees and dead roots). The dataset consists of 10,351 unique records of sample plots and 9,613 sample trees from ca. 1,200 experiments for the period 1930-2014, with some overlap between the plot and tree datasets. The dataset also contains other forest stand parameters such as tree species composition, average age, tree height, growing stock volume, etc., when available. Such a dataset can be used for the development of models of biomass structure, biomass expansion factors, change detection in biomass structure, investigations into biodiversity and species distribution and the biodiversity-productivity relationship, as well as the assessment of the carbon pool and its dynamics, among many others.

  13. Computational Methods for Large Spatio-temporal Datasets and Functional Data Ranking

    KAUST Repository

    Huang, Huang

    2017-07-16

    separability and full symmetry. We formulate test functions as functions of temporal lags for each pair of spatial locations and develop a rank-based testing procedure induced by functional data depth for assessing these properties. The method is illustrated using simulated data from widely used spatio-temporal covariance models, as well as real datasets from weather stations and climate model outputs.

  14. Sparse Group Penalized Integrative Analysis of Multiple Cancer Prognosis Datasets

    Science.gov (United States)

    Liu, Jin; Huang, Jian; Xie, Yang; Ma, Shuangge

    2014-01-01

    SUMMARY In cancer research, high-throughput profiling studies have been extensively conducted, searching for markers associated with prognosis. Because of the “large d, small n” characteristic, results generated from the analysis of a single dataset can be unsatisfactory. Recent studies have shown that integrative analysis, which simultaneously analyzes multiple datasets, can be more effective than single-dataset analysis and classic meta-analysis. In most existing integrative analyses, the homogeneity model has been assumed, which postulates that different datasets share the same set of markers. Several approaches have been designed to reinforce this assumption. In practice, different datasets may differ in terms of patient selection criteria, profiling techniques, and many other aspects. Such differences may make the homogeneity model too restrictive. In this study, we assume the heterogeneity model, under which different datasets are allowed to have different sets of markers. With multiple cancer prognosis datasets, we adopt the AFT (accelerated failure time) model to describe survival. This model may have the lowest computational cost among popular semiparametric survival models. For marker selection, we adopt a sparse group MCP (minimax concave penalty) approach. This approach has an intuitive formulation and can be computed using an effective group coordinate descent algorithm. Simulation studies show that it outperforms the existing approaches under both the homogeneity and heterogeneity models. Data analysis further demonstrates the merit of the heterogeneity model and the proposed approach. PMID:23938111
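
    The MCP referred to above is defined, for a scalar coefficient t with tuning parameters λ and γ > 1, as λ|t| − t²/(2γ) when |t| ≤ γλ, and γλ²/2 otherwise; the sparse group variant applies such penalties at both the group and the individual-coefficient level. A minimal sketch of the scalar penalty (parameter names are illustrative, and the group machinery is not shown):

```python
def mcp_penalty(t: float, lam: float, gamma: float) -> float:
    """Minimax concave penalty of a scalar coefficient t.

    Grows like lam*|t| near zero, flattens out, and is constant
    (gamma*lam^2/2) once |t| exceeds gamma*lam, so large coefficients
    are not shrunk further (unlike the lasso).
    """
    a = abs(t)
    if a <= gamma * lam:
        return lam * a - a * a / (2.0 * gamma)
    return gamma * lam * lam / 2.0

# With lam=1, gamma=3 the penalty saturates at 1.5 for |t| >= 3.
print(mcp_penalty(0.5, 1.0, 3.0), mcp_penalty(10.0, 1.0, 3.0))
```

    The two branches meet continuously at |t| = γλ, where both equal γλ²/2, which is easy to verify by substitution.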

  15. CT volumetric measurements of the orbits in Graves' disease

    International Nuclear Information System (INIS)

    Krahe, T.; Schlolaut, K.H.; Poss, T.; Trier, H.G.; Lackner, K.; Bonn Univ.; Bonn Univ.

    1989-01-01

    The volumes of the four recti muscles and of the orbital fat were measured by CT in 40 normal persons and in 60 patients with clinically confirmed Graves' disease. Compared with normal persons, 42 patients (70%) showed an increase in muscle volume and 28 patients (46.7%) an increase in the amount of fat. In nine patients (15%), muscle volume was normal but the fat was increased. By using volumetric measurements, the amount of fat in the orbits of patients with Graves' disease could be determined. (orig.) [de]

  16. Modelling of volumetric composition and mechanical properties of unidirectional hemp/epoxy composites - Effect of enzymatic fibre treatment

    DEFF Research Database (Denmark)

    Liu, Ming; Thygesen, Anders; Meyer, Anne S.

    2016-01-01

    The objective of the present study is to assess the effect of enzymatic fibre treatments on fibre performance in unidirectional hemp/epoxy composites by modelling the volumetric composition and mechanical properties of the composites. It is shown that the applied models can well predict the changes in volumetric composition and mechanical properties of the composites when differently treated hemp fibres are used. The decrease in the fibre-correlated porosity factor with the enzymatic fibre treatments shows that the removal of pectin by pectinolytic enzymes results in a better fibre...

  17. WE-G-BRF-04: Robust Real-Time Volumetric Imaging Based On One Single Projection

    International Nuclear Information System (INIS)

    Xu, Y; Yan, H; Ouyang, L; Wang, J; Jiang, S; Jia, X; Zhou, L

    2014-01-01

    Purpose: Real-time volumetric imaging is highly desirable to provide instantaneous image guidance for lung radiation therapy. This study proposes a scheme to achieve this goal using one single projection by utilizing sparse learning and a principal component analysis (PCA) based lung motion model. Methods: A patient-specific PCA-based lung motion model is first constructed by analyzing deformable vector fields (DVFs) between a reference image and 4DCT images at each phase. At the training stage, we “learn” the relationship between the DVFs and the projection using sparse learning. Specifically, we first partition the projections into patches, and then apply sparse learning to automatically identify patches that best correlate with the principal components of the DVFs. Once the relationship is established, at the application stage, we first employ a patch-based intensity correction method to overcome the problem of different intensity scales between the calculated projection in the training stage and the measured projection in the application stage. The corrected projection image is then fed to the trained model to derive a DVF, which is applied to the reference image, yielding a volumetric image corresponding to the projection. We have validated our method through an NCAT phantom simulation case and one experimental case. Results: Sparse learning can automatically select those patches containing motion information, such as those around the diaphragm. For the simulation case, over 98% of the lung region passes the generalized gamma test (10HU/1mm), indicating combined accuracy in both the intensity and spatial domains. For the experimental case, the average tumor localization errors projected to the imager are 0.68 mm and 0.4 mm in the axial and tangential directions, respectively. Conclusion: The proposed method is capable of accurately generating a volumetric image using one single projection. It will potentially offer real-time volumetric image guidance to facilitate lung

  18. WE-G-BRF-04: Robust Real-Time Volumetric Imaging Based On One Single Projection

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Y [UT Southwestern Medical Center, Dallas, TX (United States); Southern Medical University, Guangzhou (China); Yan, H; Ouyang, L; Wang, J; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States); Zhou, L [Southern Medical University, Guangzhou (China)

    2014-06-15

    Purpose: Real-time volumetric imaging is highly desirable to provide instantaneous image guidance for lung radiation therapy. This study proposes a scheme to achieve this goal using one single projection by utilizing sparse learning and a principal component analysis (PCA) based lung motion model. Methods: A patient-specific PCA-based lung motion model is first constructed by analyzing deformable vector fields (DVFs) between a reference image and 4DCT images at each phase. At the training stage, we “learn” the relationship between the DVFs and the projection using sparse learning. Specifically, we first partition the projections into patches, and then apply sparse learning to automatically identify patches that best correlate with the principal components of the DVFs. Once the relationship is established, at the application stage, we first employ a patch-based intensity correction method to overcome the problem of different intensity scales between the calculated projection in the training stage and the measured projection in the application stage. The corrected projection image is then fed to the trained model to derive a DVF, which is applied to the reference image, yielding a volumetric image corresponding to the projection. We have validated our method through an NCAT phantom simulation case and one experimental case. Results: Sparse learning can automatically select those patches containing motion information, such as those around the diaphragm. For the simulation case, over 98% of the lung region passes the generalized gamma test (10HU/1mm), indicating combined accuracy in both the intensity and spatial domains. For the experimental case, the average tumor localization errors projected to the imager are 0.68 mm and 0.4 mm in the axial and tangential directions, respectively. Conclusion: The proposed method is capable of accurately generating a volumetric image using one single projection. It will potentially offer real-time volumetric image guidance to facilitate lung
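
    The PCA step described in the Methods above can be sketched with a plain SVD: stack the flattened DVFs, center them, and keep the leading right-singular vectors as motion modes. The sparse-learning regression from projection patches to PCA coefficients is not shown, and all array shapes and names below are illustrative assumptions:

```python
import numpy as np

def build_pca_motion_model(dvfs: np.ndarray, n_components: int):
    """dvfs: (n_phases, n_voxels*3) array of flattened deformation vector fields.

    Returns (mean, components), where the rows of `components` are
    orthonormal principal motion modes of the centered DVFs.
    """
    mean = dvfs.mean(axis=0)
    centered = dvfs - mean
    # SVD of the centered data; rows of vt are the principal components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def dvf_from_coefficients(mean, components, coeffs):
    """Reconstruct a DVF as the mean field plus a weighted sum of modes."""
    return mean + coeffs @ components

# Toy example: 4 "phases", a 6-element DVF, and a 2-mode model.
rng = np.random.default_rng(0)
dvfs = rng.normal(size=(4, 6))
mean, comps = build_pca_motion_model(dvfs, 2)
recon = dvf_from_coefficients(mean, comps, np.array([0.5, -0.2]))
print(recon.shape)  # (6,)
```

    In the paper's scheme, the coefficient vector passed to `dvf_from_coefficients` would come from the trained patch-to-coefficient regression rather than being chosen by hand.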

  19. Prediction of breast cancer recurrence using lymph node metabolic and volumetric parameters from ¹⁸F-FDG PET/CT in operable triple-negative breast cancer

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yong-il [CHA University, Department of Nuclear Medicine, CHA Bundang Medical Center, Seongnam (Korea, Republic of); Seoul National University Hospital, Department of Nuclear Medicine, Seoul (Korea, Republic of); Kim, Yong Joong [Veterans Health Service Medical Center, Seoul (Korea, Republic of); Paeng, Jin Chul; Cheon, Gi Jeong; Lee, Dong Soo [Seoul National University Hospital, Department of Nuclear Medicine, Seoul (Korea, Republic of); Chung, June-Key [Seoul National University Hospital, Department of Nuclear Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Kang, Keon Wook [Seoul National University Hospital, Department of Nuclear Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Seoul National University College of Medicine, Department of Biomedical Sciences, Seoul (Korea, Republic of); Seoul National University College of Medicine, Department of Nuclear Medicine, Seoul (Korea, Republic of)

    2017-10-15

    Triple-negative breast cancer has a poor prognosis. We evaluated several metabolic and volumetric parameters from preoperative ¹⁸F-fluorodeoxyglucose (FDG) positron emission tomography (PET)/computed tomography (CT) for the prognosis of triple-negative breast cancer and compared them with current clinicopathologic parameters. A total of 228 patients with triple-negative breast cancer (mean age 47.0 ± 10.8 years, all women) who had undergone preoperative PET/CT were included. The PET/CT metabolic parameters evaluated included maximum, peak, and mean standardized uptake values (SUVmax, SUVpeak, and SUVmean, respectively). The volumetric parameters evaluated included metabolic tumor volume (MTV) and total lesion glycolysis (TLG). Metabolic and volumetric parameters were evaluated separately for tumor (T) and lymph nodes (N). The prognostic value of these parameters was compared with that of clinicopathologic parameters. All lymph node metabolic and volumetric parameters showed significant differences between patients with and without recurrence. However, tumor metabolic and volumetric parameters showed no significant differences. In a univariate survival analysis, all lymph node metabolic and volumetric parameters (SUVmax-N, SUVpeak-N, SUVmean-N, MTV-N, and TLG-N; all P < 0.001), T stage (P = 0.010), N stage (P < 0.001), and TNM stage (P < 0.001) were significant parameters. In a multivariate survival analysis, SUVmax-N (P = 0.005), MTV (P = 0.008), and TLG (P = 0.006), together with TNM stage (all P < 0.001), were significant parameters. Lymph node metabolic and volumetric parameters were significant predictors of recurrence in patients with triple-negative breast cancer after surgery. On ¹⁸F-FDG PET/CT, lymph node metabolic and volumetric parameters, rather than tumor parameters, were useful for evaluating prognosis in patients with triple-negative breast cancer. (orig.)
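
    TLG, one of the volumetric parameters above, is conventionally defined as SUVmean multiplied by the metabolic tumor volume. A trivial sketch with hypothetical values (not taken from the study):

```python
def total_lesion_glycolysis(suv_mean: float, mtv_ml: float) -> float:
    """TLG = SUVmean x metabolic tumor volume (standard PET/CT definition)."""
    return suv_mean * mtv_ml

# Hypothetical nodal lesion: SUVmean 4.2 over an MTV of 3.5 mL.
print(round(total_lesion_glycolysis(4.2, 3.5), 1))  # 14.7
```

    In practice, MTV is obtained by segmenting the lesion on the PET image (e.g., with a fixed or adaptive SUV threshold), and SUVmean is averaged over that segmented volume.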

  20. Kinetic, volumetric and structural effects induced by liquid Ga penetration into ultrafine grained Al

    International Nuclear Information System (INIS)

    Naderi, Mehrnoosh; Peterlechner, Martin; Schafler, Erhard; Divinski, Sergiy V.; Wilde, Gerhard

    2015-01-01

    Kinetic, volumetric and structural effects induced by the penetration of liquid Ga into ultrafine grained (UFG) Al produced by severe plastic deformation using high-pressure torsion were studied by isothermal dilatometric measurements, electron microscopy, atomic force microscopy and X-ray diffraction. Severe plastic deformation changed the distribution of impurities, and their segregation was revealed by transmission electron microscopy. Two-stage length changes of UFG Al were observed, which are explained by the counteracting effects of expansion due to grain boundary segregation of Ga and contraction due to precipitation and recrystallization. After applying Ga, the kinetics of liquid Ga penetration into UFG Al was studied in-situ in the electron microscope by the “first appearance” method, and the time scales are in agreement with those of the processes inducing the volumetric changes

  1. Towards ultrahigh volumetric capacitance: graphene derived highly dense but porous carbons for supercapacitors

    Science.gov (United States)

    Tao, Ying; Xie, Xiaoying; Lv, Wei; Tang, Dai-Ming; Kong, Debin; Huang, Zhenghong; Nishihara, Hirotomo; Ishii, Takafumi; Li, Baohua; Golberg, Dmitri; Kang, Feiyu; Kyotani, Takashi; Yang, Quan-Hong

    2013-10-01

    A small volumetric capacitance resulting from a low packing density is one of the major limitations preventing novel nanocarbons from finding real applications in commercial electrochemical energy storage devices. Here we report a carbon with a density of 1.58 g cm-3, 70% of the density of graphite, constructed of compactly interlinked graphene nanosheets, which is produced by evaporation-induced drying of a graphene hydrogel. Such a carbon balances two seemingly incompatible characteristics, a porous microstructure and a high density, and therefore has a volumetric capacitance for electrochemical capacitors (ECs) of up to 376 F cm-3, the highest value so far reported for carbon materials in an aqueous electrolyte. More promisingly, the carbon is conductive and moldable, and thus could be used directly as a well-shaped electrode sheet for the assembly of a supercapacitor device free of any additives, resulting in device-level high energy density ECs.
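
    Volumetric and gravimetric capacitance are related through the electrode density, which is why the high packing density matters here. Using the abstract's figures of 376 F cm⁻³ and 1.58 g cm⁻³ (the back-calculated gravimetric value is our arithmetic, not a number from the paper):

```python
def volumetric_capacitance(gravimetric_F_per_g: float,
                           density_g_per_cm3: float) -> float:
    """C_vol [F cm^-3] = C_grav [F g^-1] * electrode density [g cm^-3]."""
    return gravimetric_F_per_g * density_g_per_cm3

# The reported 376 F cm^-3 at a density of 1.58 g cm^-3 corresponds to a
# gravimetric capacitance of about 376 / 1.58 ≈ 238 F g^-1.
print(round(376.0 / 1.58))  # 238
```

    The same gravimetric capacitance in a typical low-density nanocarbon electrode (say 0.3-0.5 g cm⁻³) would yield only 70-120 F cm⁻³, which is the gap the dense-but-porous design closes.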

  2. Nanocellulose coupled flexible polypyrrole@graphene oxide composite paper electrodes with high volumetric capacitance

    Science.gov (United States)

    Wang, Zhaohui; Tammela, Petter; Strømme, Maria; Nyholm, Leif

    2015-02-01

    A robust and compact freestanding conducting polymer-based electrode material based on nanocellulose coupled polypyrrole@graphene oxide paper is straightforwardly prepared via in situ polymerization for use in high-performance paper-based charge storage devices, exhibiting stable cycling over 16 000 cycles at 5 A g-1 as well as the largest specific volumetric capacitance (198 F cm-3) so far reported for flexible polymer-based electrodes. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr07251k

  3. Numerical evaluation of an innovative cup layout for open volumetric solar air receivers

    Science.gov (United States)

    Cagnoli, Mattia; Savoldi, Laura; Zanino, Roberto; Zaversky, Fritz

    2016-05-01

    This paper proposes an innovative volumetric solar absorber design to be used in high-temperature air receivers of solar power tower plants. The innovative absorber, a so-called CPC-stacked-plate configuration, applies the well-known principle of a compound parabolic concentrator (CPC) for the first time in a volumetric solar receiver, heating air to high temperatures. The proposed absorber configuration is analyzed numerically, first applying the open-source ray-tracing software Tonatiuh in order to obtain the solar flux distribution on the absorber's surfaces. Next, a Computational Fluid Dynamics (CFD) analysis of a representative single channel of the innovative receiver is performed using the commercial CFD software ANSYS Fluent. The solution of the conjugate heat transfer problem shows that the behavior of the new absorber concept is promising; however, further optimization of the geometry will be necessary in order to exceed the performance of classical absorber designs.

  4. Cross-Dataset Analysis and Visualization Driven by Expressive Web Services

    Science.gov (United States)

    Alexandru Dumitru, Mircea; Catalin Merticariu, Vlad

    2015-04-01

    The deluge of data that is hitting us every day from satellite and airborne sensors is changing the workflow of environmental data analysts and modelers. Web geo-services now play a fundamental role: the data no longer need to be downloaded and stored in advance; instead, the services interact in real time with GIS applications. Due to the very large amount of data that is curated and made available by web services, it is crucial to deploy smart solutions for optimizing network bandwidth, reducing duplication of data and moving the processing closer to the data. In this context we have created a visualization application for analysis and cross-comparison of aerosol optical thickness datasets. The application aims to help researchers identify and visualize discrepancies between datasets coming from various sources, having different spatial and time resolutions. It also acts as a proof of concept for integration of OGC Web Services under a user-friendly interface that provides beautiful visualizations of the explored data. The tool was built on top of the World Wind engine, a Java based virtual globe built by NASA and the open source community. For data retrieval and processing we exploited the OGC Web Coverage Service potential: the most exciting aspect being its processing extension, a.k.a. the OGC Web Coverage Processing Service (WCPS) standard. A WCPS-compliant service allows a client to execute a processing query on any coverage offered by the server. By exploiting a full grammar, several different kinds of information can be retrieved from one or more datasets together: scalar condensers, cross-sectional profiles, comparison maps and plots, etc. This combination of technologies made the application versatile and portable. As the processing is done on the server side, we ensured that the minimal amount of data is transferred and that the processing is done on a fully capable server, leaving the client hardware resources to be used for rendering the visualization.
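
A WCPS query is a small expression that the server evaluates against a coverage, so only the result crosses the network. A minimal sketch of building such a query string follows; the coverage name "AOT_MODIS" and the axis names Lat/Long are hypothetical examples, not identifiers from the application described above.

```python
# Sketch: constructing a WCPS (OGC Web Coverage Processing Service) query
# that asks the server for the spatial mean of an aerosol-optical-thickness
# coverage over a lat/lon box. Coverage and axis names are hypothetical.

def wcps_regional_mean(coverage, lat_min, lat_max, lon_min, lon_max):
    """Return a WCPS query string; the server evaluates it and returns
    a single scalar, so only that value is transferred to the client."""
    return (
        f"for $c in ({coverage}) "
        f"return avg($c[Lat({lat_min}:{lat_max}), Long({lon_min}:{lon_max})])"
    )

query = wcps_regional_mean("AOT_MODIS", 35.0, 47.0, 6.0, 18.0)
print(query)
```

In a real deployment the string would be sent as the `query` parameter of a WCPS HTTP request; the point of the pattern is that the condensing operation (`avg`) runs server-side.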

  5. Accuracy and Reliability of Cone-Beam Computed Tomography for Linear and Volumetric Mandibular Condyle Measurements. A Human Cadaver Study.

    Science.gov (United States)

    García-Sanz, Verónica; Bellot-Arcís, Carlos; Hernández, Virginia; Serrano-Sánchez, Pedro; Guarinos, Juan; Paredes-Gallardo, Vanessa

    2017-09-20

    The accuracy of Cone-Beam Computed Tomography (CBCT) on linear and volumetric measurements on condyles has only been assessed on dry skulls. The aim of this study was to evaluate the reliability and accuracy of linear and volumetric measurements of mandibular condyles in the presence of soft tissues using CBCT. Six embalmed cadaver heads were used. CBCT scans were taken, followed by the extraction of the condyles. The water displacement technique was used to calculate the volumes of the condyles and three linear measurements were made using a digital caliper, these measurements serving as the gold standard. Surface models of the condyles were obtained using a 3D scanner, and superimposed onto the CBCT images. Condyles were isolated on the CBCT render volume using the surface models as reference and volumes were measured. Linear measurements were made on CBCT slices. The CBCT method was found to be reliable for both volumetric and linear measurements (CV  0.90). Highly accurate values were obtained for the three linear measurements and volume. CBCT is a reliable and accurate method for taking volumetric and linear measurements on mandibular condyles in the presence of soft tissue, and so a valid tool for clinical diagnosis.

  6. Volumetric Titrations Using Electrolytically Generated Reagents for the Determination of Ascorbic Acid and Iron in Dietary Supplement Tablets: An Undergraduate Laboratory Experiment

    Science.gov (United States)

    Scanlon, Christopher; Gebeyehu, Zewdu; Griffin, Kameron; Dabke, Rajeev B.

    2014-01-01

    An undergraduate laboratory experiment for the volumetric quantitative analysis of ascorbic acid and iron in dietary supplement tablets is presented. Powdered samples of the dietary supplement tablets were volumetrically titrated against electrolytically generated reagents, and the mass of dietary reagent in the tablet was determined from the…

  7. Multicenter assessment of the reproducibility of volumetric radiofrequency-based intravascular ultrasound measurements in coronary lesions that were consecutively stented

    DEFF Research Database (Denmark)

    Huisman, Jennifer; Egede, Rasmus; Rdzanek, Adam

    2012-01-01

    To assess in a multicenter design the between-center reproducibility of volumetric virtual histology intravascular ultrasound (VH-IVUS) measurements with a semi-automated, computer-assisted contour detection system in coronary lesions that were consecutively stented. To evaluate the reproducibility...... of volumetric VH-IVUS measurements, experienced analysts of 4 European IVUS centers performed independent analyses (in total 8,052 cross-sectional analyses) to obtain volumetric data of 40 coronary segments (length 20.0 ± 0.3 mm) from target lesions prior to percutaneous intervention that were performed...... in the setting of stable (65%) or unstable angina pectoris (35%). Geometric and compositional VH-IVUS measurements were highly correlated for the different comparisons. Overall intraclass correlation for vessel, lumen, plaque volume and plaque burden was 0.99, 0.92, 0.96, and 0.83, respectively; for fibrous...
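
Between-analyst reproducibility of this kind is summarized by the intraclass correlation coefficient. A minimal sketch of a one-way ICC(1,1) computation is below; the volume values are illustrative, not data from the study.

```python
# Sketch: one-way intraclass correlation, ICC(1,1), as used to quantify
# agreement between analysts measuring the same segments.
# data: one row per segment, one column per analyst.

def icc_oneway(data):
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    # between-subject and within-subject mean squares from one-way ANOVA
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(data, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Four analysts measuring the same five segments (hypothetical volumes, mm^3):
ratings = [[510, 512, 508, 511],
           [620, 618, 623, 621],
           [430, 433, 429, 431],
           [705, 702, 707, 704],
           [380, 382, 379, 381]]
print(round(icc_oneway(ratings), 3))
```

Large between-segment spread with small between-analyst spread drives the ICC toward 1, which is the regime reported above (0.83–0.99 depending on the measurement).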

  8. Effect of Hydroxylamine Sulfate on Volumetric Behavior of Glycine, L-Alanine, and L-Arginine in Aqueous Solution

    Directory of Open Access Journals (Sweden)

    Jie Chen

    2013-01-01

    Full Text Available The apparent molar volumes of glycine, L-alanine, and L-arginine in aqueous hydroxylamine sulfate solutions have been determined at T=298.15 K and atmospheric pressure. The standard partial molar volumes, V20, corresponding partial molar volumes of transfer, ΔtrV20, and hydration numbers, NH, have been calculated for these α-amino acids from the experimental data. The ΔtrV20 values are positive for glycine, L-alanine, and L-arginine, and all increase with increasing hydroxylamine ion concentration. These parameters obtained from the volumetric data are interpreted in terms of various mixing effects between amino acids and hydroxylamine sulfate in aqueous solutions.
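
Apparent molar volumes of this kind are reduced from measured densities via the standard relation V_phi = M/ρ − 1000(ρ − ρ0)/(m·ρ·ρ0). A minimal sketch follows; the density values are illustrative inputs, not the paper's data.

```python
# Sketch: the standard apparent-molar-volume relation used to reduce
# solution-density data:
#   V_phi = M/rho - 1000*(rho - rho0)/(m*rho*rho0)
# M: solute molar mass (g/mol), m: molality (mol/kg),
# rho, rho0: densities of solution and solvent (g/cm^3).

def apparent_molar_volume(M, m, rho, rho0):
    return M / rho - 1000.0 * (rho - rho0) / (m * rho * rho0)

# Glycine (M = 75.07 g/mol) at a hypothetical 0.1 mol/kg in water at
# 298.15 K, with illustrative (not measured) densities:
v_phi = apparent_molar_volume(75.07, 0.1, 1.00024, 0.99705)
print(round(v_phi, 2), "cm^3/mol")
```

Transfer volumes such as ΔtrV20 are then simply differences of V_phi (extrapolated to infinite dilution) between the mixed solvent and pure water.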

  9. Volumetric Visualization of Human Skin

    Science.gov (United States)

    Kawai, Toshiyuki; Kurioka, Yoshihiro

    We propose a modeling and rendering technique for human skin, which can provide realistic color, gloss and translucency for various applications in computer graphics. Our method is based on a volumetric representation of the structure inside the skin. Our model consists of the stratum corneum and three layers of pigments. The stratum corneum also has a layered structure in which the incident light is reflected, refracted and diffused. Each pigment layer contains carotene, melanin or hemoglobin. The density distributions of the pigments, which define the color of each layer, can be supplied as one of the voxel values. Surface normals of the upper-side voxels are perturbed to produce bumps and lines on the skin. We apply a ray-tracing approach to this model to obtain the rendered image. Multiple scattering in the stratum corneum and the reflection and absorption spectra of the pigments are considered. We also include a Fresnel term to calculate the specular component for the glossy surface of the skin. Some example rendered images are shown, which successfully visualize human skin.

  10. Reliability of Source Mechanisms for a Hydraulic Fracturing Dataset

    Science.gov (United States)

    Eyre, T.; Van der Baan, M.

    2016-12-01

    Non-double-couple components have been inferred for induced seismicity due to fluid injection, yet these components are often poorly constrained due to the acquisition geometry. Likewise, non-double-couple components in microseismic recordings are not uncommon. Microseismic source mechanisms provide an insight into the fracturing behaviour of a hydraulically stimulated reservoir. However, source inversion in a hydraulic fracturing environment is complicated by the likelihood of volumetric contributions to the source due to the presence of high-pressure fluids, which greatly increases the possible solution space and therefore the non-uniqueness of the solutions. Microseismic data is usually recorded on either 2D surface or borehole arrays of sensors. In many cases, surface arrays appear to constrain source mechanisms with high shear components, whereas borehole arrays tend to constrain more variable mechanisms including those with high tensile components. The ability of each geometry to constrain the true source mechanisms is therefore called into question. The ability to distinguish between shear and tensile source mechanisms with different acquisition geometries is investigated using synthetic data. For both inversions, both P- and S-wave amplitudes recorded on three-component sensors need to be included to obtain reliable solutions. Surface arrays appear to give more reliable solutions due to a greater sampling of the focal sphere, but in reality tend to record signals with a low signal-to-noise ratio. Borehole arrays can produce acceptable results; however, the reliability is much more affected by relative source-receiver locations and source orientation, with biases produced in many of the solutions. Therefore more care must be taken when interpreting results. These findings are taken into account when interpreting a microseismic dataset of 470 events recorded by two vertical borehole arrays monitoring a horizontal treatment well. Source locations and
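
The shear versus volumetric trade-off discussed here is usually quantified by decomposing the moment tensor into isotropic, double-couple (DC) and CLVD parts. A minimal sketch of the standard decomposition follows; this is the textbook eigenvalue-based formulation, not the authors' specific inversion code.

```python
import numpy as np

# Sketch: decomposing a seismic moment tensor into an isotropic (volumetric)
# part plus double-couple (DC) and CLVD percentages of the deviatoric part.

def decompose(M):
    iso = np.trace(M) / 3.0                  # volumetric component
    dev = M - iso * np.eye(3)                # deviatoric remainder
    lam = np.linalg.eigvalsh(dev)            # deviatoric eigenvalues
    by_abs = sorted(lam, key=abs)            # order by magnitude
    eps = by_abs[0] / abs(by_abs[-1])        # eps = 0 pure DC, |eps| = 0.5 pure CLVD
    dc_pct = (1.0 - 2.0 * abs(eps)) * 100.0
    return iso, dc_pct, 100.0 - dc_pct

# Pure strike-slip double couple: expect zero isotropic part and ~100% DC.
M_dc = np.array([[0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
iso, dc, clvd = decompose(M_dc)
print(iso, dc, clvd)
```

The non-uniqueness problem described above arises because, with a poorly sampled focal sphere, tensors with very different (iso, DC, CLVD) splits can fit the observed P- and S-wave amplitudes almost equally well.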

  11. Multilayered complex network datasets for three supply chain network archetypes on an urban road grid

    Directory of Open Access Journals (Sweden)

    Nadia M. Viljoen

    2018-02-01

    Full Text Available This article presents the multilayered complex network formulation for three different supply chain network archetypes on an urban road grid and describes how 500 instances were randomly generated for each archetype. Both the supply chain network layer and the urban road network layer are directed unweighted networks. The shortest path set is calculated for each of the 1 500 experimental instances. The datasets are used to empirically explore the impact that the supply chain's dependence on the transport network has on its vulnerability in Viljoen and Joubert (2017) [1]. The datasets are publicly available on Mendeley (Joubert and Viljoen, 2017) [2]. Keywords: Multilayered complex networks, Supply chain vulnerability, Urban road networks
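
For unweighted directed networks such as these, each shortest path set comes from a breadth-first search. A minimal sketch on a small one-way grid follows; the 3x3 layout and edge directions are hypothetical, not the dataset's actual road grid.

```python
from collections import deque

# Sketch: BFS shortest path on a small directed, unweighted grid -- the kind
# of computation behind the shortest path sets in the dataset. Nodes are
# (x, y) intersections; edges run one-way east and one-way north.

def bfs_shortest_path(edges, src, dst):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:                      # reconstruct path by backtracking
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None                           # dst unreachable from src

edges = [((x, y), (x + 1, y)) for x in range(2) for y in range(3)] + \
        [((x, y), (x, y + 1)) for x in range(3) for y in range(2)]
path = bfs_shortest_path(edges, (0, 0), (2, 2))
print(path)
```

Rerouting after a disruption amounts to deleting edges and re-running the search, which is how transport-layer dependence translates into supply chain vulnerability.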

  12. Spatial distribution of bacterial communities on volumetric and planar anodes in single-chamber air-cathode microbial fuel cells

    KAUST Repository

    Vargas, Ignacio T.

    2013-05-29

    Pyrosequencing was used to characterize bacterial communities in air-cathode microbial fuel cells across a volumetric (graphite fiber brush) and a planar (carbon cloth) anode, where different physical and chemical gradients would be expected associated with the distance between anode location and the air cathode. As expected, the stable operational voltage and the coulombic efficiency (CE) were higher for the volumetric anode than the planar anode (0.57 V and CE = 22% vs. 0.51 V and CE = 12%). The genus Geobacter was the only known exoelectrogen among the observed dominant groups, comprising 57±4% of recovered sequences for the brush and 27±5% for the carbon-cloth anode. While the bacterial communities differed between the two anode materials, results showed that Geobacter spp. and other dominant bacterial groups were homogenously distributed across both planar and volumetric anodes. This lends support to previous community analysis interpretations based on a single biofilm sampling location in these systems. © 2013 Wiley Periodicals, Inc.

  13. An Analysis on Better Testing than Training Performances on the Iris Dataset

    NARCIS (Netherlands)

    Schutten, Marten; Wiering, Marco

    2016-01-01

    The Iris dataset is a well known dataset containing information on three different types of Iris flowers. A typical and popular method for solving classification problems on datasets such as the Iris set is the support vector machine (SVM). In order to do so the dataset is separated in a set used

  14. Integrated dataset of anatomical, morphological, and architectural traits for plant species in Madagascar

    Directory of Open Access Journals (Sweden)

    Amira Azizan

    2017-12-01

    Full Text Available In this work, we present a dataset, which provides information on the structural diversity of some endemic tropical species in Madagascar. The data were from the CIRAD xylotheque (since 1937), and were also collected during various fieldworks (since 1964). The field notes and photographs were provided by French botanists, particularly by Francis Hallé. The dataset covers 250 plant species with anatomical, morphological, and architectural traits indexed from digitized wood slides and fieldwork documents. The digitized wood slides comprise the transverse, tangential, and radial sections at three optical magnifications. The main specific anatomical traits can be found within the digitized area. Information on morphological and architectural traits was indexed from digitized field drawings including notes and photographs. The data are hosted in the website ArchiWood (http://archiwood.cirad.fr). Keywords: Morpho-architectural traits, Plant architecture, Wood anatomy, Madagascar

  15. Real-time volumetric deformable models for surgery simulation using finite elements and condensation

    DEFF Research Database (Denmark)

    Bro-Nielsen, Morten; Cotin, S.

    1996-01-01

    This paper discusses the application of 3D solid volumetric Finite Element models to surgery simulation. In particular it introduces three new ideas for solving the problem of achieving real-time performance for these models. The simulation system we have developed is described and we demonstrate

  16. Something From Nothing (There): Collecting Global IPv6 Datasets from DNS

    NARCIS (Netherlands)

    Fiebig, T.; Borgolte, Kevin; Hao, Shuang; Kruegel, Christopher; Vigna, Giovanny; Spring, Neil; Riley, George F.

    2017-01-01

    Current large-scale IPv6 studies mostly rely on non-public datasets, as most public datasets are domain specific. For instance, traceroute-based datasets are biased toward network equipment. In this paper, we present a new methodology to collect IPv6 address datasets that does not require access to

  17. Automatic processing of multimodal tomography datasets.

    Science.gov (United States)

    Parsons, Aaron D; Price, Stephen W T; Wadeson, Nicola; Basham, Mark; Beale, Andrew M; Ashton, Alun W; Mosselmans, J Frederick W; Quinn, Paul D

    2017-01-01

    With the development of fourth-generation high-brightness synchrotrons on the horizon, the already large volume of data that will be collected on imaging and mapping beamlines is set to increase by orders of magnitude. As such, an easy and accessible way of dealing with such large datasets as quickly as possible is required in order to be able to address the core scientific problems during the experimental data collection. Savu is an accessible and flexible big data processing framework that is able to deal with both the variety and the volume of multimodal and multidimensional scientific dataset outputs, such as those from chemical tomography experiments on the I18 microfocus scanning beamline at Diamond Light Source.

  18. Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms

    Science.gov (United States)

    Heidmann, James D.; Hunter, Scott D.

    2001-01-01

    The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.
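
The spatial averaging step described above, collapsing a resolved fine-grid injection field into coarse-grid volumetric source terms, can be sketched as a block reduction that conserves the integrated total. The grid sizes and injection values below are illustrative, not the study's actual meshes.

```python
import numpy as np

# Sketch: collapsing a fine-grid film-cooling injection field onto a coarse
# grid as volumetric source terms. Block-summing per-cell mass rates keeps
# the integrated source strength identical on both grids.

def coarsen_sources(fine, factor):
    """Block-sum a fine per-cell source field onto a grid coarsened by
    `factor` in each direction."""
    ny, nx = fine.shape
    assert ny % factor == 0 and nx % factor == 0
    return fine.reshape(ny // factor, factor,
                        nx // factor, factor).sum(axis=(1, 3))

fine = np.zeros((12, 12))
fine[5:7, 5:7] = 0.25              # injection footprint of one cooling hole
coarse = coarsen_sources(fine, 3)  # coarse cells span 3x3 fine cells
print(fine.sum(), coarse.sum())    # totals agree: source strength conserved
```

The same reduction applies per conservative variable (mass, momentum, energy, turbulence quantities), which is what lets a coarse design-level grid carry the integrated effect of holes smaller than its cells.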

  19. Programmable segmented volumetric modulated arc therapy for respiratory coordination in pancreatic cancer

    International Nuclear Information System (INIS)

    Wu, Jian-Kuen; Wu, Chien-Jang; Cheng, Jason Chia-Hsien

    2012-01-01

    We programmably divided long-arc volumetric modulated arc therapy (VMAT) into split short arcs, each taking less than 30 s for respiratory coordination. The VMAT plans of five pancreatic cancer patients were modified; the short-arc plans had negligible dose differences and satisfied the 3%/3-mm gamma index on a MapCHECK-2 device.

  20. Vehicle Classification Using an Imbalanced Dataset Based on a Single Magnetic Sensor

    Directory of Open Access Journals (Sweden)

    Chang Xu

    2018-05-01

    Full Text Available This paper aims to improve the accuracy of automatic vehicle classifiers for imbalanced datasets. Classification is made through utilizing a single anisotropic magnetoresistive sensor, with the models of vehicles involved being classified into hatchbacks, sedans, buses, and multi-purpose vehicles (MPVs. Using time domain and frequency domain features in combination with three common classification algorithms in pattern recognition, we develop a novel feature extraction method for vehicle classification. These three common classification algorithms are the k-nearest neighbor, the support vector machine, and the back-propagation neural network. Nevertheless, a problem remains: the original vehicle magnetic dataset collected is imbalanced, which may lead to inaccurate classification results. With this in mind, we adopt an approach called SMOTE, which can further boost the performance of classifiers. Experimental results show that the k-nearest neighbor (KNN classifier with the SMOTE algorithm can reach a classification accuracy of 95.46%, thus minimizing the effect of the imbalance.
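
The core of SMOTE is to synthesize new minority-class samples by interpolating between a minority point and one of its nearest minority neighbours. A minimal plain-Python sketch follows; the 2-D feature vectors for the rare "bus" class are hypothetical, and a real pipeline would use the imbalanced-learn library's SMOTE implementation instead.

```python
import random

# Sketch: the interpolation step at the heart of SMOTE. Each synthetic
# sample lies on the line segment between a minority point and one of its
# k nearest minority neighbours.

def smote(minority, n_new, k=3, seed=0):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest minority neighbours of a (squared Euclidean distance)
        neighbours = sorted(
            (p for p in minority if p is not a),
            key=lambda p: sum((pi - ai) ** 2 for pi, ai in zip(p, a)),
        )[:k]
        b = rng.choice(neighbours)
        t = rng.random()  # random position along the a->b segment
        synthetic.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    return synthetic

# Hypothetical 2-D features for the under-represented "bus" class:
buses = [(2.0, 8.1), (2.2, 7.9), (1.9, 8.3), (2.1, 8.0)]
new_samples = smote(buses, n_new=6)
print(len(new_samples))
```

Oversampling the rare class this way, rather than duplicating points, gives the downstream KNN/SVM/neural-network classifiers new but plausible minority examples to learn from.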