WorldWideScience

Sample records for cross-platform positioning datasets

  1. Cross-Platform Technologies

    Directory of Open Access Journals (Sweden)

    Maria Cristina ENACHE

    2017-04-01

Full Text Available Cross-platform is a concept that has become increasingly common in recent years, especially in the development of mobile apps, though it has also long applied to the development of conventional desktop applications. The notion of cross-platform software (multi-platform or platform-independent) refers to a software application that can run on more than one operating system or computing architecture. Thus, a cross-platform application can operate independently of the software or hardware platform on which it is executed. Since this generic definition admits a wide range of meanings, for the purposes of this paper we narrow it functionally as follows: a cross-platform application is a software application that can run on more than one operating system (desktop or mobile) in an identical or similar way.

  2. FASTQSim: platform-independent data characterization and in silico read generation for NGS datasets.

    Science.gov (United States)

    Shcherbina, Anna

    2014-08-15

High-throughput next-generation sequencing technologies have enabled rapid characterization of clinical and environmental samples. Consequently, the largest bottleneck to actionable data has become sample processing and bioinformatics analysis, creating a need for accurate and rapid algorithms to process genetic data. Perfectly characterized in silico datasets are a useful tool for evaluating the performance of such algorithms. Background contaminating organisms are observed in sequenced mixtures of organisms, whereas in silico samples provide exact ground truth. To create the best value for evaluating algorithms, in silico data should mimic actual sequencer data as closely as possible. FASTQSim is a tool that provides the dual functionality of NGS dataset characterization and metagenomic data generation. FASTQSim is sequencing platform-independent, and computes distributions of read length, quality scores, indel rates, single point mutation rates, indel size, and similar statistics for any sequencing platform. To create training or testing datasets, FASTQSim has the ability to convert target sequences into in silico reads with specific error profiles obtained in the characterization step. FASTQSim enables users to assess the quality of NGS datasets. The tool provides information about read length, read quality, repetitive and non-repetitive indel profiles, and single base pair substitutions. FASTQSim allows the user to simulate individual read datasets that can be used as standardized test scenarios for planning sequencing projects or for benchmarking metagenomic software. In this regard, in silico datasets generated with the FASTQSim tool hold several advantages over natural datasets: they are sequencing platform independent, extremely well characterized, and less expensive to generate. Such datasets are valuable in a number of applications, including the training of assemblers for multiple platforms, benchmarking bioinformatics algorithm performance, and creating challenge
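The characterize-then-generate idea can be sketched in a few lines: measure an error profile from real data, then inject errors into a target sequence at those rates to produce an in silico read. The sketch below is an illustration only, not FASTQSim code; the substitution-only model and all names are assumptions (the real tool also models indels, read lengths, and quality scores).

```python
import random

def simulate_read(target: str, sub_rate: float, seed: int = 0) -> str:
    """Turn a target sequence into an in silico read by injecting
    single-base substitutions at a characterized error rate."""
    rng = random.Random(seed)
    bases = "ACGT"
    out = []
    for base in target:
        if rng.random() < sub_rate:
            # replace with one of the three other bases
            out.append(rng.choice([b for b in bases if b != base]))
        else:
            out.append(base)
    return "".join(out)

read = simulate_read("ACGT" * 25, sub_rate=0.01)
```

A full simulator would additionally draw the read length and per-position quality scores from the distributions computed in the characterization step.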

  3. Cross-platform comparison of microarray data using order restricted inference

    Science.gov (United States)

    Klinglmueller, Florian; Tuechler, Thomas; Posch, Martin

    2013-01-01

Motivation Titration experiments measuring the gene expression from two different tissues, along with total RNA mixtures of the pure samples, are frequently used for quality evaluation of microarray technologies. Such a design implies that the true mRNA expression of each gene is either constant or follows a monotonic trend between the mixtures, lending itself to the use of order restricted inference procedures. Exploiting only the postulated monotonicity of titration designs, we propose three statistical analysis methods for the validation of high-throughput genetic data and corresponding preprocessing techniques. Results Our methods allow for inference of accuracy, repeatability and cross-platform agreement, with minimal required assumptions regarding the underlying data generating process. Therefore, they are readily applicable to all sorts of genetic high-throughput data independent of the degree of preprocessing. An application to the EMERALD dataset was used to demonstrate how our methods provide a rich spectrum of easily interpretable quality metrics and allow the comparison of different microarray technologies and normalization methods. The results are on par with previous work, but provide additional new insights that cast doubt on the utility of popular preprocessing techniques, specifically concerning the EMERALD project's dataset. Availability All datasets are available on EBI's ArrayExpress web site (http://www.ebi.ac.uk/microarray-as/ae/) under accession numbers E-TABM-536, E-TABM-554 and E-TABM-555. Source code implemented in C and R is available at: http://statistics.msi.meduniwien.ac.at/float/cross_platform/. Methods for testing and variance decomposition have been made available in the R-package orQA, which can be downloaded and installed from CRAN http://cran.r-project.org. PMID:21317143
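Order restricted inference of this kind rests on fitting the best monotone (isotonic) curve to the observed expression values, classically via the pool-adjacent-violators algorithm (PAVA). The sketch below is a generic, equal-weight PAVA for a non-decreasing fit, shown for illustration; it is not the authors' code from the orQA package.

```python
def pava(y):
    """Least-squares non-decreasing fit via Pool Adjacent Violators:
    merge neighbouring blocks whenever their means violate monotonicity."""
    blocks = [[v] for v in y]
    i = 0
    while i < len(blocks) - 1:
        a, b = blocks[i], blocks[i + 1]
        if sum(a) / len(a) > sum(b) / len(b):
            blocks[i] = a + b          # pool the violating pair
            del blocks[i + 1]
            i = max(i - 1, 0)          # re-check against the left block
        else:
            i += 1
    fit = []
    for blk in blocks:
        fit += [sum(blk) / len(blk)] * len(blk)
    return fit

# distance between data and its best monotone fit: 0 iff already monotone
y = [1.0, 3.0, 2.0, 4.0]
sse = sum((a - b) ** 2 for a, b in zip(y, pava(y)))
```

A large residual `sse` signals a violation of the monotonicity postulated by the titration design, which is the intuition behind using such fits as quality metrics.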

  4. PR-PR: cross-platform laboratory automation system.

    Science.gov (United States)

    Linshiz, Gregory; Stawski, Nina; Goyal, Garima; Bi, Changhao; Poust, Sean; Sharma, Monica; Mutalik, Vivek; Keasling, Jay D; Hillson, Nathan J

    2014-08-15

To enable protocol standardization, sharing, and efficient implementation across laboratory automation platforms, we have further developed the PR-PR open-source high-level biology-friendly robot programming language as a cross-platform laboratory automation system. Beyond liquid-handling robotics, PR-PR now supports microfluidic and microscopy platforms, as well as protocol translation into human languages, such as English. While the same set of basic PR-PR commands and features is available for each supported platform, the underlying optimization and translation modules vary from platform to platform. Here, we describe these further developments to PR-PR, and demonstrate the experimental implementation and validation of PR-PR protocols for combinatorial modified Golden Gate DNA assembly across liquid-handling robotic, microfluidic, and manual platforms. To further test PR-PR cross-platform performance, we then implement and assess PR-PR protocols for Kunkel DNA mutagenesis and hierarchical Gibson DNA assembly for microfluidic and manual platforms.

  5. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.

    Science.gov (United States)

    Scharfe, Michael; Pielot, Rainer; Schreiber, Falk

    2010-01-11

Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high-performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low-cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.

  6. CROPPER: a metagene creator resource for cross-platform and cross-species compendium studies.

    Science.gov (United States)

    Paananen, Jussi; Storvik, Markus; Wong, Garry

    2006-09-22

    Current genomic research methods provide researchers with enormous amounts of data. Combining data from different high-throughput research technologies commonly available in biological databases can lead to novel findings and increase research efficiency. However, combining data from different heterogeneous sources is often a very arduous task. These sources can be different microarray technology platforms, genomic databases, or experiments performed on various species. Our aim was to develop a software program that could facilitate the combining of data from heterogeneous sources, and thus allow researchers to perform genomic cross-platform/cross-species studies and to use existing experimental data for compendium studies. We have developed a web-based software resource, called CROPPER that uses the latest genomic information concerning different data identifiers and orthologous genes from the Ensembl database. CROPPER can be used to combine genomic data from different heterogeneous sources, allowing researchers to perform cross-platform/cross-species compendium studies without the need for complex computational tools or the requirement of setting up one's own in-house database. We also present an example of a simple cross-platform/cross-species compendium study based on publicly available Parkinson's disease data derived from different sources. CROPPER is a user-friendly and freely available web-based software resource that can be successfully used for cross-species/cross-platform compendium studies.
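The core operation — joining measurements from heterogeneous platforms through an identifier/ortholog map — can be sketched as a simple dictionary join. All identifiers and expression values below are made up for illustration; CROPPER itself resolves such mappings from the Ensembl database.

```python
# platform A: human expression values, keyed by (hypothetical) gene IDs
human = {"ENSG01": 2.5, "ENSG02": 1.1}
# platform B: mouse expression values, keyed by (hypothetical) gene IDs
mouse = {"ENSMUSG01": 2.2, "ENSMUSG03": 0.9}
# ortholog map (human -> mouse), as a CROPPER-style resource would supply
orthologs = {"ENSG01": "ENSMUSG01", "ENSG02": "ENSMUSG02"}

# keep only genes measured on both platforms, joined via the mapping
combined = {
    h: (human[h], mouse[orthologs[h]])
    for h in human
    if h in orthologs and orthologs[h] in mouse
}
```

The same join pattern applies to cross-platform probe identifiers within one species; only the mapping table changes.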

  7. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets

    Directory of Open Access Journals (Sweden)

    Pielot Rainer

    2010-01-01

Full Text Available Abstract Background Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results We evaluate the CBE-driven PlayStation 3 as a high-performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low-cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.

  8. Platform Architecture for Decentralized Positioning Systems

    Directory of Open Access Journals (Sweden)

    Zakaria Kasmi

    2017-04-01

Full Text Available A platform architecture for positioning systems is essential for the realization of a flexible localization system that interacts with other systems and supports various positioning technologies and algorithms. Decentralized processing of a position pushes application-level knowledge into the mobile station and avoids communication with a central unit such as a server or a base station. In addition, calculating the position on low-cost, resource-constrained devices presents a challenge due to limited computing power, storage capacity, and power supply. We therefore propose a platform architecture that enables the design of a system with reusable components, extensibility (e.g., with other positioning technologies), and interoperability. Furthermore, the position is computed on a low-cost device such as a microcontroller, which simultaneously performs additional tasks such as data collection or preprocessing on top of an operating system. The platform architecture is designed, implemented and evaluated on the basis of two positioning systems: a field-strength-based system and a time-of-arrival-based positioning system.
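As a concrete illustration of a position calculation light enough for a resource-constrained device, the sketch below solves 2D trilateration from three anchors by linearizing the range equations into a 2x2 system (plain floating point, no matrix library). This is a generic textbook method shown for illustration, not one of the paper's specific algorithms.

```python
import math

def trilaterate(anchors, dists):
    """2D position from three anchor coordinates and measured ranges.
    Subtracting the first range equation cancels the quadratic terms,
    leaving a linear 2x2 system solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21   # anchors must not be collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(a, true_pos) for a in anchors]
pos = trilaterate(anchors, dists)
```

With noisy ranges from more than three anchors, the same linearization feeds a least-squares solve instead of an exact one.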

  9. Multi-task learning for cross-platform siRNA efficacy prediction: an in-silico study.

    Science.gov (United States)

    Liu, Qi; Xu, Qian; Zheng, Vincent W; Xue, Hong; Cao, Zhiwei; Yang, Qiang

    2010-04-10

Gene silencing using exogenous small interfering RNAs (siRNAs) is now a widespread molecular tool for gene functional study and new-drug target identification. The key step in this technique is to design efficient siRNAs that are incorporated into the RNA-induced silencing complexes (RISC) to bind and interact with their mRNA targets and repress their translation into proteins. Although considerable progress has been made in the computational analysis of siRNA binding efficacy, little joint analysis of different RNAi experiments conducted under different experimental scenarios has been done so far, even though such joint analysis is an important issue in cross-platform siRNA efficacy prediction. A collective analysis of RNAi mechanisms across different datasets and experimental conditions can often provide new clues for the design of potent siRNAs. An elegant multi-task learning paradigm for cross-platform siRNA efficacy prediction is proposed. Experimental studies were performed on a large dataset of siRNA sequences encompassing several RNAi experiments recently conducted by different research groups. Using our multi-task learning method, the synergy among different experiments is exploited and an efficient multi-task predictor for siRNA efficacy is obtained. The 19 most popular biological features for siRNAs were ranked according to their joint importance in multi-task learning. Furthermore, the hypothesis is validated that siRNA binding efficacies on different messenger RNAs (mRNAs) have different conditional distributions; thus, multi-task learning can be conducted by viewing tasks at the "mRNA" level rather than at the "experiment" level. Such distribution diversity among siRNAs bound to different mRNAs indicates that the properties of the target mRNA have important implications for siRNA binding efficacy.
The knowledge gained from our study provides useful insights on how to analyze various cross-platform RNAi data for uncovering
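The borrowing of strength across experiments can be illustrated with a mean-regularized multi-task least-squares sketch: each task (experiment or mRNA) keeps its own weight vector but is penalized for straying from the across-task mean. This is a generic multi-task learning formulation chosen for illustration, not the authors' exact model; all parameters and data below are made up.

```python
def multitask_fit(tasks, lam=0.01, mu=1.0, lr=0.05, steps=1000):
    """tasks: list of (X, y) pairs, one per RNAi experiment (or mRNA).
    Minimizes per-task MSE + lam*||w_t||^2 + mu*||w_t - mean(w)||^2
    by plain coordinate-wise gradient descent."""
    d = len(tasks[0][0][0])
    W = [[0.0] * d for _ in tasks]
    for _ in range(steps):
        mean = [sum(w[j] for w in W) / len(W) for j in range(d)]
        for t, (X, y) in enumerate(tasks):
            for j in range(d):
                g = 0.0
                for xi, yi in zip(X, y):
                    err = sum(w * x for w, x in zip(W[t], xi)) - yi
                    g += 2 * err * xi[j] / len(y)
                # ridge term plus pull toward the shared mean
                g += 2 * lam * W[t][j] + 2 * mu * (W[t][j] - mean[j])
                W[t][j] -= lr * g
    return W

# two toy "experiments" sharing the same underlying feature weights
X = [[1.0, 0.0], [0.0, 1.0]]
W = multitask_fit([(X, [1.0, 2.0]), (X, [1.0, 2.0])])
```

The `mu` term is what couples the tasks: with `mu = 0` this degenerates to independent ridge regressions per experiment.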

  10. Multi-task learning for cross-platform siRNA efficacy prediction: an in-silico study

    Directory of Open Access Journals (Sweden)

    Xue Hong

    2010-04-01

Full Text Available Abstract Background Gene silencing using exogenous small interfering RNAs (siRNAs) is now a widespread molecular tool for gene functional study and new-drug target identification. The key step in this technique is to design efficient siRNAs that are incorporated into the RNA-induced silencing complexes (RISC) to bind and interact with their mRNA targets and repress their translation into proteins. Although considerable progress has been made in the computational analysis of siRNA binding efficacy, little joint analysis of different RNAi experiments conducted under different experimental scenarios has been done so far, even though such joint analysis is an important issue in cross-platform siRNA efficacy prediction. A collective analysis of RNAi mechanisms across different datasets and experimental conditions can often provide new clues for the design of potent siRNAs. Results An elegant multi-task learning paradigm for cross-platform siRNA efficacy prediction is proposed. Experimental studies were performed on a large dataset of siRNA sequences encompassing several RNAi experiments recently conducted by different research groups. Using our multi-task learning method, the synergy among different experiments is exploited and an efficient multi-task predictor for siRNA efficacy is obtained. The 19 most popular biological features for siRNAs were ranked according to their joint importance in multi-task learning. Furthermore, the hypothesis is validated that siRNA binding efficacies on different messenger RNAs (mRNAs) have different conditional distributions; thus, multi-task learning can be conducted by viewing tasks at the "mRNA" level rather than at the "experiment" level. Such distribution diversity among siRNAs bound to different mRNAs indicates that the properties of the target mRNA have important implications for siRNA binding efficacy.
Conclusions The knowledge gained from our study provides useful insights on how to

  11. Cross-platform learning: on the nature of children's learning from multiple media platforms.

    Science.gov (United States)

    Fisch, Shalom M

    2013-01-01

    It is increasingly common for an educational media project to span several media platforms (e.g., TV, Web, hands-on materials), assuming that the benefits of learning from multiple media extend beyond those gained from one medium alone. Yet research typically has investigated learning from a single medium in isolation. This paper reviews several recent studies to explore cross-platform learning (i.e., learning from combined use of multiple media platforms) and how such learning compares to learning from one medium. The paper discusses unique benefits of cross-platform learning, a theoretical mechanism to explain how these benefits might arise, and questions for future research in this emerging field. Copyright © 2013 Wiley Periodicals, Inc., A Wiley Company.

  12. A Cross-Platform Tactile Capabilities Interface for Humanoid Robots

    Directory of Open Access Journals (Sweden)

Jie Ma

    2016-04-01

Full Text Available This article presents the core elements of a cross-platform tactile capabilities interface (TCI) for humanoid arms. The aim of the interface is to reduce the cost of developing humanoid robot capabilities by supporting reuse through cross-platform deployment. The article presents a comparative analysis of existing robot middleware frameworks, as well as the technical details of the TCI framework, which builds on the existing YARP platform. The TCI framework currently includes robot arm actuators with robot skin sensors. It presents such hardware in a platform-independent manner, making it possible to write robot control software that can be executed on different robots through the TCI framework. The TCI framework supports multiple humanoid platforms, and this article also presents a case study of a cross-platform implementation of a set of tactile protective withdrawal reflexes that has been realised on both the Nao and iCub humanoid robot platforms using the same high-level source code.

  13. NMRFx Processor: a cross-platform NMR data processing program

    International Nuclear Information System (INIS)

    Norris, Michael; Fetler, Bayard; Marchant, Jan; Johnson, Bruce A.

    2016-01-01

    NMRFx Processor is a new program for the processing of NMR data. Written in the Java programming language, NMRFx Processor is a cross-platform application and runs on Linux, Mac OS X and Windows operating systems. The application can be run in both a graphical user interface (GUI) mode and from the command line. Processing scripts are written in the Python programming language and executed so that the low-level Java commands are automatically run in parallel on computers with multiple cores or CPUs. Processing scripts can be generated automatically from the parameters of NMR experiments or interactively constructed in the GUI. A wide variety of processing operations are provided, including methods for processing of non-uniformly sampled datasets using iterative soft thresholding. The interactive GUI also enables the use of the program as an educational tool for teaching basic and advanced techniques in NMR data analysis.
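Iterative soft thresholding reconstructs a sparse spectrum from non-uniformly sampled (NUS) data by alternating between shrinking the spectrum toward sparsity and restoring the measured time-domain points. The sketch below is a minimal 1D illustration with a naive DFT, not NMRFx code; the sampling schedule, adaptive threshold rule, and iteration count are all assumptions made for the demo.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def soft(z, t):
    """Shrink a complex value toward zero by t, preserving its phase."""
    m = abs(z)
    return z * (m - t) / m if m > t else 0j

def ist(measured, N, iters=50):
    """measured: {time index: sampled value} from a NUS schedule."""
    x = [measured.get(n, 0j) for n in range(N)]      # zero-fill the gaps
    for _ in range(iters):
        X = dft(x)
        t = 0.5 * max(abs(v) for v in X)             # adaptive threshold
        x = idft([soft(v, t) for v in X])
        for n, v in measured.items():                # keep measured data
            x[n] = v
    return dft(x)

N, PEAK = 32, 5
signal = [cmath.exp(2j * cmath.pi * PEAK * n / N) for n in range(N)]
schedule = (0, 1, 3, 4, 6, 9, 11, 12, 14, 17, 19, 22, 25, 26, 28, 31)
spectrum = ist({n: signal[n] for n in schedule}, N)
```

The irregular spacing of the schedule matters: with regular 2x undersampling the alias peak is coherent and thresholding cannot separate it from the true peak.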

  14. NMRFx Processor: a cross-platform NMR data processing program

    Energy Technology Data Exchange (ETDEWEB)

    Norris, Michael; Fetler, Bayard [One Moon Scientific, Inc. (United States); Marchant, Jan [University of Maryland Baltimore County, Howard Hughes Medical Institute (United States); Johnson, Bruce A., E-mail: bruce.johnson@asrc.cuny.edu [One Moon Scientific, Inc. (United States)

    2016-08-15

    NMRFx Processor is a new program for the processing of NMR data. Written in the Java programming language, NMRFx Processor is a cross-platform application and runs on Linux, Mac OS X and Windows operating systems. The application can be run in both a graphical user interface (GUI) mode and from the command line. Processing scripts are written in the Python programming language and executed so that the low-level Java commands are automatically run in parallel on computers with multiple cores or CPUs. Processing scripts can be generated automatically from the parameters of NMR experiments or interactively constructed in the GUI. A wide variety of processing operations are provided, including methods for processing of non-uniformly sampled datasets using iterative soft thresholding. The interactive GUI also enables the use of the program as an educational tool for teaching basic and advanced techniques in NMR data analysis.

  15. Cross-Dataset Analysis and Visualization Driven by Expressive Web Services

    Science.gov (United States)

    Alexandru Dumitru, Mircea; Catalin Merticariu, Vlad

    2015-04-01

The deluge of data hitting us every day from satellite and airborne sensors is changing the workflow of environmental data analysts and modelers. Web geo-services now play a fundamental role: instead of requiring the data to be downloaded and stored beforehand, they interact in real time with GIS applications. Due to the very large amount of data curated and made available by web services, it is crucial to deploy smart solutions for optimizing network bandwidth, reducing duplication of data, and moving the processing closer to the data. In this context we have created a visualization application for analysis and cross-comparison of aerosol optical thickness datasets. The application aims to help researchers identify and visualize discrepancies between datasets coming from various sources with different spatial and time resolutions. It also acts as a proof of concept for the integration of OGC Web Services under a user-friendly interface that provides beautiful visualizations of the explored data. The tool was built on top of the World Wind engine, a Java-based virtual globe built by NASA and the open source community. For data retrieval and processing we exploited the potential of the OGC Web Coverage Service, the most exciting aspect being its processing extension, a.k.a. the OGC Web Coverage Processing Service (WCPS) standard. A WCPS-compliant service allows a client to execute a processing query on any coverage offered by the server. By exploiting a full grammar, several different kinds of information can be retrieved from one or more datasets together: scalar condensers, cross-sectional profiles, comparison maps and plots, etc. This combination of technologies made the application versatile and portable. As the processing is done on the server side, we ensured that a minimal amount of data is transferred and that the processing is done on a fully capable server, leaving the client hardware resources to be used for rendering the visualization.

  16. Xamarin cross-platform application development

    CERN Document Server

    Peppers, Jonathan

    2015-01-01

    If you are a developer with experience in C# and are just getting into mobile development, this is the book for you. If you have experience with desktop applications or the Web, this book will give you a head start on cross-platform development.

  17. A cross-country Exchange Market Pressure (EMP dataset

    Directory of Open Access Journals (Sweden)

    Mohit Desai

    2017-06-01

Full Text Available The data presented in this article are related to the research article titled "An exchange market pressure measure for cross country analysis" (Patnaik et al. [1]). In this article, we present the dataset of Exchange Market Pressure (EMP) values for 139 countries along with their conversion factors, ρ (rho). Exchange Market Pressure, expressed as a percentage change in the exchange rate, measures the change in the exchange rate that would have taken place had the central bank not intervened. The conversion factor ρ can be interpreted as the change in the exchange rate associated with $1 billion of intervention. Estimates of the conversion factor ρ allow us to calculate a monthly time series of EMP for 139 countries. Additionally, the dataset contains the 68% confidence interval (high and low values) for the point estimates of the ρ's. Using the standard errors of the estimates of the ρ's, we obtain one-sigma intervals around the mean estimates of the EMP values. These values are also reported in the dataset.

  18. A cross-country Exchange Market Pressure (EMP) dataset.

    Science.gov (United States)

    Desai, Mohit; Patnaik, Ila; Felman, Joshua; Shah, Ajay

    2017-06-01

The data presented in this article are related to the research article titled "An exchange market pressure measure for cross country analysis" (Patnaik et al. [1]). In this article, we present the dataset of Exchange Market Pressure (EMP) values for 139 countries along with their conversion factors, ρ (rho). Exchange Market Pressure, expressed as a percentage change in the exchange rate, measures the change in the exchange rate that would have taken place had the central bank not intervened. The conversion factor ρ can be interpreted as the change in the exchange rate associated with $1 billion of intervention. Estimates of the conversion factor ρ allow us to calculate a monthly time series of EMP for 139 countries. Additionally, the dataset contains the 68% confidence interval (high and low values) for the point estimates of the ρ's. Using the standard errors of the estimates of the ρ's, we obtain one-sigma intervals around the mean estimates of the EMP values. These values are also reported in the dataset.
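Under this definition the dataset supports a simple computation: combine the observed exchange-rate change with ρ times intervention to recover the counterfactual pressure, and reuse the high/low ρ estimates for a band. The sketch below is a hedged reading of that arithmetic (sign convention and units simplified, all numbers made up), not code from Patnaik et al.

```python
def emp(pct_de, intervention_bn, rho):
    """EMP_t (in %) = observed % change in exchange rate + rho * I_t,
    where I_t is net intervention in $ billion (assumed sign convention)."""
    return [de + rho * i for de, i in zip(pct_de, intervention_bn)]

de = [-0.8, 1.2, 0.3]                 # monthly % change (made up)
iv = [2.0, -1.0, 0.0]                 # intervention, $ bn (made up)
rho, rho_lo, rho_hi = 0.5, 0.4, 0.6   # point estimate and 68% band (made up)

point = emp(de, iv, rho)
# per-month band from the low/high rho estimates, ordered low-to-high
band = [tuple(sorted(p)) for p in zip(emp(de, iv, rho_lo), emp(de, iv, rho_hi))]
```

Note that when intervention is zero the EMP collapses to the observed exchange-rate change, as the definition implies.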

  19. Cross-Platform Mobile Application Development: A Pattern-Based Approach

    Science.gov (United States)

    2012-03-01

Master's Thesis: Cross-Platform Mobile Application Development: A Pattern-Based Approach, by Christian G. Acord. Approved for public release; distribution is unlimited. The thesis surveys commonly occurring design problems and discusses common approaches to and aspects of mobile application development.

  20. Open source platform for collaborative construction of wearable sensor datasets for human motion analysis and an application for gait analysis.

    Science.gov (United States)

    Llamas, César; González, Manuel A; Hernández, Carmen; Vegas, Jesús

    2016-10-01

Nearly every practical improvement in modeling human motion is well founded in a properly designed collection of data or datasets. These datasets must be made publicly available so that the community can validate and accept them. It is reasonable to concede that a collective, guided enterprise could serve to devise solid and substantial datasets as a result of a collaborative effort, in the same sense as the open software community does. In this way datasets could be complemented, extended, and expanded in size with, for example, more individuals, samples, and human actions. For this to be possible, some commitments must be made by the collaborators, one of them being to share the same data acquisition platform. In this paper, we offer an affordable open source hardware and software platform based on inertial wearable sensors, so that several groups can cooperate in the construction of datasets through common software suitable for collaboration. Some experimental results about the throughput of the overall system are reported, showing the feasibility of acquiring data from up to 6 sensors with a sampling frequency no less than 118 Hz. Also, a proof-of-concept dataset is provided comprising sampled data from 12 subjects suitable for gait analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
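The reported throughput bound can be checked with back-of-the-envelope arithmetic. The 12-byte sample size below is an assumption (six 16-bit inertial axes per sample); the sensor count and minimum rate come from the abstract.

```python
SENSORS = 6
RATE_HZ = 118           # minimum per-sensor sampling frequency reported
BYTES_PER_SAMPLE = 12   # assumed: 6 axes x 2 bytes (accel + gyro)

samples_per_sec = SENSORS * RATE_HZ                 # aggregate sample rate
bandwidth_bps = samples_per_sec * BYTES_PER_SAMPLE  # payload bytes per second
print(samples_per_sec, bandwidth_bps)
```

At roughly 8.5 kB/s of payload, the aggregate stream fits comfortably within common wireless links, which is consistent with the feasibility claim.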

  1. Researching intimacy through social media: A cross-platform approach

    Directory of Open Access Journals (Sweden)

    Cristina Miguel

    2016-06-01

Full Text Available This paper aims to contribute to the understanding of how to study the way people build intimacy and manage privacy through social media interaction. It explores the research design and methodology of a research project based on a multi-sited case study composed of three different social media platforms: Badoo, CouchSurfing, and Facebook. This cross-platform approach is useful for observing how intimacy is often negotiated across different platforms. The research project focuses on the cities of Leeds (UK) and Barcelona (Spain). In particular, this article discusses the methods used to recruit participants and collect data for that study, namely participant observation, semi-structured interviews, and user profile analysis. This cross-platform approach and multi-method research design are helpful for investigating the nature of intimacy practices facilitated by social media at several levels: online/offline, across different platforms, among different types of relationships, within both new and existing relationships, and in different locations.

  2. Cross-brain neurofeedback: scientific concept and experimental platform.

    Directory of Open Access Journals (Sweden)

    Lian Duan

    Full Text Available The present study described a new type of multi-person neurofeedback with the neural synchronization between two participants as the direct regulating target, termed as "cross-brain neurofeedback." As a first step to implement this concept, an experimental platform was built on the basis of functional near-infrared spectroscopy, and was validated with a two-person neurofeedback experiment. This novel concept as well as the experimental platform established a framework for investigation of the relationship between multiple participants' cross-brain neural synchronization and their social behaviors, which could provide new insight into the neural substrate of human social interactions.

  3. Cross-platform digital assessment forms for evaluating surgical skills

    Directory of Open Access Journals (Sweden)

    Steven Arild Wuyts Andersen

    2015-04-01

    Full Text Available A variety of structured assessment tools for use in surgical training have been reported, but extant assessment tools often employ paper-based rating forms. Digital assessment forms for evaluating surgical skills could potentially offer advantages over paper-based forms, especially in complex assessment situations. In this paper, we report on the development of cross-platform digital assessment forms for use with multiple raters in order to facilitate the automatic processing of surgical skills assessments that include structured ratings. The FileMaker 13 platform was used to create a database containing the digital assessment forms, because this software has cross-platform functionality on both desktop computers and handheld devices. The database is hosted online, and the rating forms can therefore also be accessed through most modern web browsers. Cross-platform digital assessment forms were developed for the rating of surgical skills. The database platform used in this study was reasonably priced, intuitive for the user, and flexible. The forms have been provided online as free downloads that may serve as the basis for further development or as inspiration for future efforts. In conclusion, digital assessment forms can be used for the structured rating of surgical skills and have the potential to be especially useful in complex assessment situations with multiple raters, repeated assessments at various times and locations, and situations requiring substantial subsequent data processing or complex score calculations.

  4. Cross platform SCA component using C++ builder and KYLIX

    International Nuclear Information System (INIS)

    Nishimura, Hiroshi; Timossi, Chris; McDonald, James L.

    2003-01-01

    A cross-platform component for EPICS Simple Channel Access (SCA) has been developed. EPICS client programs with GUIs become portable at the C++ source-code level on both Windows and Linux by using Borland C++ Builder 6 and Kylix 3 on these platforms, respectively.

  5. Exudate-based diabetic macular edema detection in fundus images using publicly available datasets

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Garg, Seema [University of North Carolina; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy. In a large-scale screening environment, DME can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME through the presence of exudation. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on the MESSIDOR (an independently labelled dataset with 1200 images) with cross-dataset testing (e.g., the classifier was trained on an independent dataset and tested on MESSIDOR). Our algorithm obtained an AUC between 0.88 and 0.94 depending on the dataset/features used. Additionally, it does not need ground truth at lesion level to reject false positives and is computationally efficient, as it generates a diagnosis in an average of 4.4 s (9.3 s, considering the optic nerve localization) per image on a 2.6 GHz platform with an unoptimized Matlab implementation.
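The wavelet-decomposition features mentioned above can be illustrated generically (a hedged sketch, not the authors' implementation): a one-level 2D Haar transform whose sub-band statistics serve as a per-image feature vector.

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar wavelet transform (rows, then columns).
    Expects an array with even height and width."""
    # horizontal pass: averages (low) and differences (high) of column pairs
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # vertical pass on each half yields the four sub-bands
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def wavelet_features(img):
    """Mean absolute value of each sub-band as a simple 4-element feature vector."""
    return [float(np.abs(band).mean()) for band in haar2d(img)]
```

A real exudate detector would combine such statistics with the colour and lesion-segmentation features described in the abstract before training the classifier.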

  6. Competitive Positioning of Complementors on Digital Platforms

    DEFF Research Database (Denmark)

    Wessel, Michael; Thies, Ferdinand; Benlian, Alexander

    2017-01-01

    markets. With increasing numbers of products and services offered via the platforms, signals such as popularity and reputation have become critical market mechanisms that affect the decision-making processes of end-users. In this paper, we examine the positioning strategies of new hosts on Airbnb…, a platform focused on accommodation sharing, to understand how they attempt to cope with the inherent lack of credible quality signals as they join the platform. By analyzing close to 47,000 listings, we find that new hosts follow a cost-leadership strategy rather than trying to differentiate their offerings…

  7. “Controlled, cross-species dataset for exploring biases in genome annotation and modification profiles”

    Directory of Open Access Journals (Sweden)

    Alison McAfee

    2015-12-01

    Full Text Available Since the sequencing of the honey bee genome, proteomics by mass spectrometry has become increasingly popular for biological analyses of this insect, but we have observed that the number of honey bee protein identifications is consistently low compared to other organisms [1]. In this dataset, we use nanoelectrospray ionization-coupled liquid chromatography–tandem mass spectrometry (nLC–MS/MS) to systematically investigate the root cause of low honey bee proteome coverage. To this end, we present here data from three key experiments: a controlled, cross-species analysis of samples from Apis mellifera, Drosophila melanogaster, Caenorhabditis elegans, Saccharomyces cerevisiae, Mus musculus and Homo sapiens; a proteomic analysis of an individual honey bee whose genome was also sequenced; and a cross-tissue honey bee proteome comparison. The cross-species dataset was interrogated to determine relative proteome coverage between species, and the other two datasets were used to search for polymorphic sequences and to compare protein cleavage profiles, respectively.

  8. Researching intimacy through social media: A cross-platform approach

    OpenAIRE

    Miguel, C

    2016-01-01

    This paper aims to contribute to the understanding of how to study the way people build intimacy and manage privacy through social media interaction. It explores the research design and methodology of a research project based on a multi-sited case study composed of three different social media platforms: Badoo, CouchSurfing, and Facebook. This cross-platform approach is useful to observe how intimacy is often negotiated across different platforms. The research project focuses on the cities of...

  9. Professional Cross-Platform Mobile Development in C#

    CERN Document Server

    Olson, Scott; Horgen, Ben; Goers, Kenny

    2012-01-01

    Develop mobile enterprise applications in a language you already know! With employees, rather than the IT department, now driving the decision of which devices to use on the job, many companies are scrambling to integrate enterprise applications. Fortunately, enterprise developers can now create apps for all major mobile devices using C#/.NET and Mono, languages most already know. A team of authors draws on their vast experience to teach you how to create cross-platform mobile applications, while delivering the same functionality to PCs, laptops and the web from a single technology platform.

  10. Building cross-platform apps using Titanium, Alloy, and Appcelerator cloud services

    CERN Document Server

    Saunders, Aaron

    2014-01-01

    Skip Objective-C and Java to get your app to market faster, using the skills you already have. Building Cross-Platform Apps using Titanium, Alloy, and Appcelerator Cloud Services shows you how to build cross-platform iOS and Android apps without learning Objective-C or Java. With detailed guidance on using the Titanium Mobile Platform and Appcelerator Cloud Services, you will quickly develop the skills to build real, native apps, not web apps, using existing HTML, CSS, and JavaScript know-how. This guide takes you step-by-step through the creation of a photo-sharing app that leverages

  11. Analysis of human plasma metabolites across different liquid chromatography/mass spectrometry platforms: Cross-platform transferable chemical signatures.

    Science.gov (United States)

    Telu, Kelly H; Yan, Xinjian; Wallace, William E; Stein, Stephen E; Simón-Manso, Yamil

    2016-03-15

    The metabolite profiling of a NIST plasma Standard Reference Material (SRM 1950) on different liquid chromatography/mass spectrometry (LC/MS) platforms showed significant differences. Although these findings suggest caution when interpreting metabolomics results, the degree of overlap between the profiles allowed us to use tandem mass spectral libraries of recurrent spectra to evaluate to what extent these results are transferable across platforms and to develop cross-platform chemical signatures. Non-targeted global metabolite profiles of SRM 1950 were obtained on different LC/MS platforms using reversed-phase chromatography and different chromatographic scales (conventional HPLC, UHPLC and nanoLC). The data processing and the metabolite differential analysis were carried out using publicly available (XCMS), proprietary (Mass Profiler Professional) and in-house (NIST pipeline) software. Repeatability and intermediate precision showed that the non-targeted SRM 1950 profiling was highly reproducible, in terms of relative standard deviation (RSD), when working on the same platform (HPLC, UHPLC or nanoLC). A substantial degree of overlap (common molecular features) was also found. A procedure to generate consistent chemical signatures using tandem mass spectral libraries of recurrent spectra is proposed. Different platforms rendered significantly different metabolite profiles, but the results were highly reproducible when working within one platform. Tandem mass spectral libraries of recurrent spectra are proposed to evaluate the degree of transferability of chemical signatures generated on different platforms. Chemical signatures based on our procedure are most likely cross-platform transferable. Published in 2016. This article is a U.S. Government work and is in the public domain in the USA.
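Reproducibility claims of this kind rest on the relative standard deviation; a minimal sketch (the replicate peak areas below are illustrative numbers, not SRM 1950 data):

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation (coefficient of variation), in percent:
    sample standard deviation divided by the mean."""
    mean = statistics.fmean(values)
    if mean == 0:
        raise ValueError("RSD undefined for zero mean")
    return 100.0 * statistics.stdev(values) / mean

# hypothetical peak areas of one metabolite across four replicate injections
replicates = [1.02e6, 0.98e6, 1.01e6, 0.99e6]
```

Within-platform repeatability is then a matter of computing this statistic per molecular feature across replicate runs.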

  12. MiSTIC, an integrated platform for the analysis of heterogeneity in large tumour transcriptome datasets.

    Science.gov (United States)

    Lemieux, Sebastien; Sargeant, Tobias; Laperrière, David; Ismail, Houssam; Boucher, Geneviève; Rozendaal, Marieke; Lavallée, Vincent-Philippe; Ashton-Beaucage, Dariel; Wilhelm, Brian; Hébert, Josée; Hilton, Douglas J; Mader, Sylvie; Sauvageau, Guy

    2017-07-27

    Genome-wide transcriptome profiling has enabled unsupervised classification of tumours, revealing different sub-groups characterized by specific gene expression features. However, the biological significance of these subtypes remains for the most part unclear. We describe herein an interactive platform, Minimum Spanning Trees Inferred Clustering (MiSTIC), that integrates the direct visualization and comparison of the gene correlation structure between datasets, the analysis of the molecular causes underlying co-variations in gene expression in cancer samples, and the clinical annotation of tumour sets defined by the combined expression of selected biomarkers. We have used MiSTIC to highlight the roles of specific transcription factors in breast cancer subtype specification, to compare the aspects of tumour heterogeneity targeted by different prognostic signatures, and to highlight biomarker interactions in AML. A version of MiSTIC preloaded with datasets described herein can be accessed through a public web server (http://mistic.iric.ca); in addition, the MiSTIC software package can be obtained (github.com/iric-soft/MiSTIC) for local use with personalized datasets. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
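The minimum-spanning-tree idea behind such clustering can be illustrated generically; the sketch below is textbook single-linkage clustering via Kruskal's algorithm, not MiSTIC's actual code.

```python
import math
from itertools import combinations

def mst_clusters(points, n_clusters):
    """Single-linkage clustering via a minimum spanning tree: build the MST
    with Kruskal's algorithm, drop the n_clusters - 1 longest edges, and
    return a component label per point."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # union-find root lookup with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    mst = []
    for w, i, j in edges:              # Kruskal: add edges joining components
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append((w, i, j))

    mst.sort()                         # cut the longest MST edges
    keep = mst[: len(mst) - (n_clusters - 1)]
    parent = list(range(n))            # rebuild components from kept edges
    for _, i, j in keep:
        parent[find(i)] = find(j)
    return [find(i) for i in range(n)]
```

Cutting the longest MST edges is exactly what separates well-isolated expression sub-groups in this style of analysis.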

  13. Cross platform development using Delphi and Kylix

    International Nuclear Information System (INIS)

    McDonald, J.L.; Nishimura, H.; Timossi, C.

    2002-01-01

    A cross-platform component for EPICS Simple Channel Access (SCA) has been developed for use with Delphi on Windows and Kylix on Linux. An EPICS controls GUI application developed on Windows runs on Linux by simply rebuilding it, and vice versa. This paper describes the technical details of the component.

  14. Evaluation of Smartphone Inertial Sensor Performance for Cross-Platform Mobile Applications

    Science.gov (United States)

    Kos, Anton; Tomažič, Sašo; Umek, Anton

    2016-01-01

    Smartphone sensors are being increasingly used in mobile applications. The performance of sensors varies considerably among different smartphone models, and the development of a cross-platform mobile application can be a very complex and demanding task. A publicly accessible resource containing real-life-situation smartphone sensor parameters could be of great help for cross-platform developers. To address this issue we have designed and implemented a pilot participatory sensing application for measuring, gathering, and analyzing smartphone sensor parameters. We start with smartphone accelerometer and gyroscope bias and noise parameters. The application database presently includes sensor parameters of more than 60 different smartphone models of different platforms. It is a modest but important start, offering information on several statistical parameters of the measured smartphone sensors and insights into their performance. The next step, a large-scale cloud-based version of the application, is already planned. The large database of smartphone sensor parameters may prove particularly useful for cross-platform developers. It may also be interesting for individual participants, who would be able to check up on and compare their smartphone sensors against a large number of similar or identical models. PMID:27049391
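The bias and noise parameters such an application gathers can be estimated from stationary sensor readings; a minimal sketch (function name and sample values are illustrative, not from the published database):

```python
import statistics

def bias_and_noise(samples, reference=0.0):
    """Estimate sensor bias (mean offset from the known reference value)
    and noise (sample standard deviation) from readings taken while the
    device is at rest."""
    bias = statistics.fmean(samples) - reference
    noise = statistics.stdev(samples)
    return bias, noise
```

For a gyroscope at rest the reference is zero angular rate, so the mean reading is the bias and the spread around it is the noise floor.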

  15. Evaluation of Smartphone Inertial Sensor Performance for Cross-Platform Mobile Applications

    Directory of Open Access Journals (Sweden)

    Anton Kos

    2016-04-01

    Full Text Available Smartphone sensors are being increasingly used in mobile applications. The performance of sensors varies considerably among different smartphone models, and the development of a cross-platform mobile application can be a very complex and demanding task. A publicly accessible resource containing real-life-situation smartphone sensor parameters could be of great help for cross-platform developers. To address this issue we have designed and implemented a pilot participatory sensing application for measuring, gathering, and analyzing smartphone sensor parameters. We start with smartphone accelerometer and gyroscope bias and noise parameters. The application database presently includes sensor parameters of more than 60 different smartphone models of different platforms. It is a modest but important start, offering information on several statistical parameters of the measured smartphone sensors and insights into their performance. The next step, a large-scale cloud-based version of the application, is already planned. The large database of smartphone sensor parameters may prove particularly useful for cross-platform developers. It may also be interesting for individual participants, who would be able to check up on and compare their smartphone sensors against a large number of similar or identical models.

  16. Analysis of the development of cross-platform mobile applications

    OpenAIRE

    Pinedo Escribano, Diego

    2012-01-01

    The development of mobile phone applications is a huge market nowadays. There are many companies investing a lot of money to develop successful and profitable applications. The problem emerges when trying to develop an application to be used by every user independently of the platform they are using (Android, iOS, BlackBerry OS, Windows Phone, etc.). For this reason, over the last years many different technologies have appeared that make the development of cross-platform applications easier. In...

  17. Navigation and Positioning System Using High Altitude Platforms Systems (HAPS)

    Science.gov (United States)

    Tsujii, Toshiaki; Harigae, Masatoshi; Harada, Masashi

    Recently, some countries have begun conducting feasibility studies and R&D projects on High Altitude Platform Systems (HAPS). Japan has been investigating the use of an airship system that will function as a stratospheric platform for applications such as environmental monitoring, communications and broadcasting. If pseudolites were mounted on the airships, their GPS-like signals would be stable augmentations that would improve the accuracy, availability, and integrity of GPS-based positioning systems. Moreover, a sufficient number of HAPS could function as a positioning system independent of GPS. In this paper, a system design of the HAPS-based positioning system and its positioning error analyses are described.
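In the simplest case, positioning from pseudolites on platforms at known positions reduces to least-squares trilateration; a hedged sketch that ignores receiver clock bias (which a real GPS-like system must also solve for):

```python
import numpy as np

def trilaterate(anchors, ranges, iters=20):
    """Gauss-Newton least-squares position fix from range measurements to
    platforms at known positions. Clock bias is omitted to keep the sketch
    short; geometry must be non-degenerate."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x = anchors.mean(axis=0)              # initial guess: anchor centroid
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)
        J = (x - anchors) / d[:, None]    # unit line-of-sight vectors (Jacobian)
        r = ranges - d                    # measured minus predicted ranges
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x + dx                        # Gauss-Newton update
    return x
```

The same linearized geometry underlies the positioning error analyses mentioned in the abstract: the covariance of the fix follows from the inverse of `J.T @ J` scaled by the range-noise variance.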

  18. Analysis and experiments of a novel and compact 3-DOF precision positioning platform

    International Nuclear Information System (INIS)

    Huang, Hu; Zhao, Hongwei; Fan, Zunqiang; Zhang, Hui; Ma, Zhichao; Yang, Zhaojun

    2013-01-01

    A novel 3-DOF precision positioning platform with dimensions of 48 mm × 50 mm × 35 mm was designed by integrating piezo actuators and flexure hinges. The platform has a compact structure yet performs high-precision positioning along three axes. The dynamic model of the platform in a single direction was established. Stiffness of the flexure hinges and modal characteristics of the flexure hinge mechanism were analyzed by the finite element method. Output displacements of the platform along the three axes were forecast via stiffness analysis. Output performance of the platform in the x and y axes with open-loop control, as well as in the z axis with closed-loop control, was tested and discussed. The preliminary application of the platform in the field of nanoindentation indicates that the designed platform works well during nanoindentation tests, and the closed-loop control ensures linear displacement output. With suitable control, the platform has the potential to realize different positioning functions under various working conditions.

  19. A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jack Dongarra; Shirley Moore; Bart Miller; Jeffrey Hollingsworth; Tracy Rafferty

    2005-03-15

    The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front-end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies. These technologies are the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images. The Paradyn and KOJAK
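The general idea of runtime instrumentation with per-function counters can be sketched in a few lines (a toy stand-in for illustration, not the PAPI/Dyninst substrate itself):

```python
import time
from functools import wraps

def timed(counters):
    """Decorator that records each call's wall-clock duration into a shared
    counter table keyed by function name, mimicking the front-end 'request
    performance data' / back-end 'collect at runtime' split."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            t0 = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                counters.setdefault(fn.__name__, []).append(
                    time.perf_counter() - t0)
        return wrapper
    return decorate
```

Real infrastructures such as Dyninst inject equivalent probes into an executable image at runtime, without source changes; hardware counters (cache misses, FLOPs) would replace the wall-clock timer here.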

  20. Cross-platform digital assessment forms for evaluating surgical skills

    DEFF Research Database (Denmark)

    Andersen, Steven Arild Wuyts

    2015-01-01

    developed for the rating of surgical skills. The database platform used in this study was reasonably priced, intuitive for the user, and flexible. The forms have been provided online as free downloads that may serve as the basis for further development or as inspiration for future efforts. In conclusion… A variety of structured assessment tools for use in surgical training have been reported, but extant assessment tools often employ paper-based rating forms. Digital assessment forms for evaluating surgical skills could potentially offer advantages over paper-based forms, especially in complex… assessment situations. In this paper, we report on the development of cross-platform digital assessment forms for use with multiple raters in order to facilitate the automatic processing of surgical skills assessments that include structured ratings. The FileMaker 13 platform was used to create a database…

  1. Learning by Doing: How to Develop a Cross-Platform Web App

    Directory of Open Access Journals (Sweden)

    Minh Q. Huynh

    2015-06-01

    Full Text Available As mobile devices become prevalent, there is always a need for apps. How hard is it to develop an app, especially a cross-platform app? The paper shares an experience from a project that involved the development of a student services web app that can run on cross-platform mobile devices. The paper first describes the background of the project, the clients, and the proposed solution. Then, it focuses on the step-by-step development process and illustrates the written code and techniques used. The goal is for readers to gain an understanding of how to develop a mobile-friendly web app. The paper concludes with teaching implications and offers thoughts for further development.

  2. Cross-Platform JavaScript Coding: Shifting Sand Dunes and Shimmering Mirages.

    Science.gov (United States)

    Merchant, David

    1999-01-01

    Most libraries don't have the resources to cross-platform and cross-version test all of their JavaScript coding. Many turn to WYSIWYG; however, WYSIWYG editors don't generally produce optimized coding. Web developers should: test their coding on at least one 3.0 browser, code by hand using tools to help speed that process up, and include a simple…

  3. Automatic Diabetic Macular Edema Detection in Fundus Images Using Publicly Available Datasets

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Garg, Seema [University of North Carolina; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy. In a large-scale screening environment, DME can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on the MESSIDOR (an independently labelled dataset with 1200 images) with cross-dataset testing. Our algorithm is robust to segmentation uncertainties, does not need ground truth at lesion level, and is very fast, generating a diagnosis in an average of 4.4 seconds per image on a 2.6 GHz platform with an unoptimised Matlab implementation.

  4. Robust balancing and position control of a single spherical wheeled mobile platform

    OpenAIRE

    Yavuz, Fırat; Yavuz, Firat; Ünel, Mustafa; Unel, Mustafa

    2016-01-01

    Self-balancing mobile platforms with a single spherical wheel, generally called ballbots, are a suitable example of underactuated systems. Balancing control of a ballbot platform, which aims to maintain the upright orientation by rejecting external disturbances, is important during station keeping or trajectory tracking. In this paper, acceleration-based balancing and position control of a single spherical wheeled mobile platform that has three single-row omniwheel drive m...

  5. ACCURACY ANALYSIS OF A LOW-COST PLATFORM FOR POSITIONING AND NAVIGATION

    Directory of Open Access Journals (Sweden)

    S. Hofmann

    2012-07-01

    Full Text Available This paper presents an accuracy analysis of a platform based on low-cost components for landmark-based navigation, intended for research and teaching purposes. The proposed platform includes a LEGO MINDSTORMS NXT 2.0 kit, an Android-based smartphone, and a compact Hokuyo URG-04LX laser scanner. The robot is used in a small indoor environment where GNSS is not available. Therefore, a landmark map was produced in advance, with the landmark positions provided to the robot. All steps of the procedure to set up the platform are shown. The main focus of this paper is the achievable positioning accuracy, which was analyzed in this type of scenario depending on the accuracy of the reference landmarks and the directional and distance measuring accuracy of the laser scanner. Several experiments were carried out, demonstrating the practically achievable positioning accuracy. To evaluate the accuracy, ground truth was acquired using a total station. These results are compared to the theoretically achievable accuracies and the laser scanner’s characteristics.
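A first-order error budget shows how scanner and landmark accuracies combine in a range-bearing fix (a simplified sketch; all parameter values below are hypothetical, not the paper's measurements):

```python
import math

def landmark_fix_accuracy(r, sigma_r, sigma_theta_deg, sigma_landmark):
    """First-order 1-sigma position uncertainty of a fix to one landmark:
    range noise, cross-range noise (range times angular noise, in radians)
    and landmark survey error combined in quadrature."""
    sigma_cross = r * math.radians(sigma_theta_deg)
    return math.sqrt(sigma_r**2 + sigma_cross**2 + sigma_landmark**2)
```

At 3 m range with 1 cm range noise and 0.5 degrees of bearing noise, the cross-range term (about 2.6 cm) already dominates, which is why angular accuracy of the scanner matters more than its range accuracy at typical indoor distances.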

  6. ASSISTments Dataset from Multiple Randomized Controlled Experiments

    Science.gov (United States)

    Selent, Douglas; Patikorn, Thanaporn; Heffernan, Neil

    2016-01-01

    In this paper, we present a dataset consisting of data generated from 22 previously and currently running randomized controlled experiments inside the ASSISTments online learning platform. This dataset provides data mining opportunities for researchers to analyze ASSISTments data in a convenient format across multiple experiments at the same time.…

  7. PerPos: A Platform Providing Cloud Services for Pervasive Positioning

    DEFF Research Database (Denmark)

    Blunck, Henrik; Godsk, Torben; Grønbæk, Kaj

    2010-01-01

    -based building model manager that allows users to manage building models stored in the PerPos cloud for annotation, logging, and navigation purposes. A core service in the PerPos platform is sensor fusion for positioning that makes it seamless and efficient to combine a rich set of position sensors to obtain...

  8. Multi-platform Integrated Positioning and Attitude Determination using GNSS

    NARCIS (Netherlands)

    Buist, P.J.

    2013-01-01

    There is a trend in spacecraft engineering toward distributed systems in which a number of smaller spacecraft work as a larger satellite. However, in order to make the small satellites work together as a single large platform, the precise relative positions (baseline) and orientations (attitude) of the

  9. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery

    Science.gov (United States)

    Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan

    2013-05-01

    KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multi-scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high-throughput wide format video, also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to potentially petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools on the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms is available to assist the analyst and increase human effectiveness.
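The temporal cache of tiled pyramid data can be illustrated with a minimal LRU cache keyed by (frame, level, row, column) tile addresses (a sketch of the caching idea only, not KOLAM's implementation):

```python
from collections import OrderedDict

class SpatiotemporalTileCache:
    """Minimal LRU cache for tiles addressed by (frame, level, row, col).
    On a miss the supplied loader is called (standing in for disk I/O and
    decoding); the least recently used tile is evicted when full."""

    def __init__(self, capacity=256):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key, load):
        if key in self._store:
            self._store.move_to_end(key)          # mark most recently used
            return self._store[key]
        tile = load(key)                          # cache miss: load tile
        self._store[key] = tile
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)       # evict least recently used
        return tile
```

Keying on the frame index as well as the pyramid level is what lets roam/zoom operations and video animation share one cache during WAMI playback.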

  10. Groundwater Assessment Platform

    OpenAIRE

    Podgorski, Joel; Berg, Michael

    2018-01-01

    The Groundwater Assessment Platform is a free, interactive online GIS platform for the mapping, sharing and statistical modeling of groundwater quality data. The modeling allows users to take advantage of publicly available global datasets of various environmental parameters to produce prediction maps of their contaminant of interest.

  11. CrossCheck: an open-source web tool for high-throughput screen data analysis.

    Science.gov (United States)

    Najafov, Jamil; Najafov, Ayaz

    2017-07-19

    Modern high-throughput screening methods allow researchers to generate large datasets that potentially contain important biological information. However, oftentimes, picking relevant hits from such screens and generating testable hypotheses requires training in bioinformatics and the skills to efficiently perform database mining. There are currently no tools available to the general public that allow users to cross-reference their screen datasets with published screen datasets. To this end, we developed CrossCheck, an online platform for high-throughput screen data analysis. CrossCheck is a centralized database that allows effortless comparison of the user-entered list of gene symbols with 16,231 published datasets. These datasets include published data from genome-wide RNAi and CRISPR screens, interactome proteomics and phosphoproteomics screens, cancer mutation databases, low-throughput studies of major cell signaling mediators, such as kinases, E3 ubiquitin ligases and phosphatases, and gene ontological information. Moreover, CrossCheck includes a novel database of predicted protein kinase substrates, which was developed using proteome-wide consensus motif searches. CrossCheck dramatically simplifies high-throughput screen data analysis and enables researchers to dig deep into the published literature and streamline data-driven hypothesis generation. CrossCheck is freely accessible as a web-based application at http://proteinguru.com/crosscheck.
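The core cross-referencing step, comparing a user gene list against published datasets, amounts to case-insensitive set intersection; a toy sketch (dataset names and gene symbols below are made up for illustration):

```python
def cross_reference(user_genes, datasets):
    """Return, for each named dataset, the sorted list of user genes that
    appear in it, matching gene symbols case-insensitively."""
    user = {g.upper() for g in user_genes}
    return {
        name: sorted(user & {g.upper() for g in genes})
        for name, genes in datasets.items()
    }
```

A production tool like CrossCheck layers symbol-alias resolution and 16,231 curated datasets on top of this operation, but the hit-overlap logic is the same.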

  12. Gene Expression Profiles for Predicting Metastasis in Breast Cancer: A Cross-Study Comparison of Classification Methods

    Directory of Open Access Journals (Sweden)

    Mark Burton

    2012-01-01

    Full Text Available Machine learning has increasingly been applied to microarray gene expression data to develop classifiers using a variety of methods. However, comparisons of methods across cross-study datasets are very scarce. This study compares the performance of seven classification methods, and the effect of voting, for predicting metastasis outcome in breast cancer patients in three situations: within the same dataset, or across datasets on similar or dissimilar microarray platforms. Combining the classification results of the seven classifiers into one voting decision performed significantly better than the underlying classification methods during internal validation, as well as during external validation on similar microarray platforms. When validating between different microarray platforms, random forest, another voting-based method, proved to be the best performing method. We conclude that voting-based classifiers provide an advantage with respect to classifying metastasis outcome in breast cancer patients.
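
    As a minimal illustration of the voting idea described in this record, a majority vote over hard class predictions can be computed as follows. The predictions below are randomly generated stand-ins, not the study's seven classifiers:

```python
import numpy as np

# Hypothetical hard predictions (0 = no metastasis, 1 = metastasis) from
# seven classifiers on ten test samples; a real study would obtain these
# from trained models.
rng = np.random.default_rng(0)
preds = rng.integers(0, 2, size=(7, 10))   # 7 classifiers x 10 samples

# Majority vote: a sample is called positive when more than half of the
# classifiers predict class 1.
votes = preds.sum(axis=0)
majority = (votes > preds.shape[0] / 2).astype(int)
print(majority)
```

    With an odd number of classifiers there are no ties, so the hard vote is always well defined.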

  13. Platform pricing in matching markets

    NARCIS (Netherlands)

    Goos, M.; van Cayseele, P.; Willekens, B.

    2011-01-01

    This paper develops a simple model of monopoly platform pricing accounting for two pertinent features of matching markets. 1) The trading process is characterized by search and matching frictions implying limits to positive cross-side network effects and the presence of own-side congestion.

  14. The Transcriptome Analysis and Comparison Explorer--T-ACE: a platform-independent, graphical tool to process large RNAseq datasets of non-model organisms.

    Science.gov (United States)

    Philipp, E E R; Kraemer, L; Mountfort, D; Schilhabel, M; Schreiber, S; Rosenstiel, P

    2012-03-15

Next generation sequencing (NGS) technologies allow a rapid and cost-effective compilation of large RNA sequence datasets in model and non-model organisms. However, the storage and analysis of transcriptome information from different NGS platforms is still a significant bottleneck, leading to a delay in data dissemination and subsequent biological understanding. In particular, database interfaces with transcriptome analysis modules that go beyond mere read counts are missing. Here, we present the Transcriptome Analysis and Comparison Explorer (T-ACE), a tool designed for the organization and analysis of large sequence datasets, and especially suited for transcriptome projects of non-model organisms with little or no a priori sequence information. T-ACE offers a Tcl-based interface, which accesses a PostgreSQL database via a PHP script. Within T-ACE, information belonging to single sequences or contigs, such as annotation or read coverage, is linked to the respective sequence and immediately accessible. Sequences and assigned information can be searched via keyword or BLAST search. Additionally, T-ACE provides within- and between-transcriptome analysis modules on the level of expression, GO terms, KEGG pathways and protein domains. Results are visualized and can be easily exported for external analysis. We developed T-ACE for laboratory environments with only a limited amount of bioinformatics support, and for collaborative projects in which different partners work on the same dataset from different locations or platforms (Windows/Linux/MacOS). For laboratories with some experience in bioinformatics and programming, the low complexity of the database structure and the open-source code provide a framework that can be customized according to the needs of the user and the transcriptome project.

  15. Ontology-Based Platform for Conceptual Guided Dataset Analysis

    KAUST Repository

    Rodriguez-Garcia, Miguel Angel

    2016-05-31

Nowadays organizations must handle a huge amount of both internal and external data from structured, semi-structured, and unstructured sources. This constitutes a major challenge (and also an opportunity) for current Business Intelligence solutions. The complexity and effort required to analyse such a plethora of data imply considerable execution times. Besides, the large number of data analysis methods and techniques impedes domain experts (laymen from an IT-assisted analytics perspective) from fully exploiting their potential, while technology experts lack the business background to ask the proper questions. In this work, we present a semantically-boosted platform for assisting layman users in (i) extracting a relevant subdataset from all the data, and (ii) selecting the data analysis technique(s) best suited for scrutinising that subdataset. The outcome is getting better answers in significantly less time. The platform has been evaluated in the music domain with promising results.

  16. RMS: a platform for managing cross-disciplinary and multi-institutional research project collaboration.

    Science.gov (United States)

    Luo, Jake; Apperson-Hansen, Carolyn; Pelfrey, Clara M; Zhang, Guo-Qiang

    2014-11-30

Cross-institutional, cross-disciplinary collaboration has become a trend as researchers move toward building more productive and innovative teams for scientific research. Research collaboration is significantly changing the organizational structures and strategies used in the clinical and translational science domain. However, due to the obstacles of diverse administrative structures, differences in areas of expertise, and communication barriers, establishing and managing a cross-institutional research project is still a challenging task. We address these challenges by creating an integrated informatics platform that reduces the barriers to biomedical research collaboration. The Request Management System (RMS) is an informatics infrastructure designed to transform a patchwork of expertise and resources into an integrated support network. The RMS facilitates investigators' initiation of new collaborative projects and supports the management of the collaboration process. In RMS, experts and their knowledge areas are categorized and managed structurally to provide consistent service. A role-based collaborative workflow is tightly integrated with domain experts and services to streamline and monitor the life-cycle of a research project. The RMS has so far tracked over 1,500 investigators with over 4,800 tasks. The research network based on the data collected in RMS illustrates that investigators' collaborative projects increased nearly threefold from 2009 to 2012. Our experience with RMS indicates that the platform reduces barriers to cross-institutional collaboration on biomedical research projects. Building a new generation of infrastructure to enhance cross-disciplinary and multi-institutional collaboration has become an important yet challenging task. In this paper, we share the experience of developing and utilizing a collaborative project management system. The results of this study demonstrate that a web-based integrated informatics platform can facilitate and

  17. Core-cross-linked polymeric micelles: a versatile nanomedicine platform with broad applicability

    NARCIS (Netherlands)

    Hu, Q.

    2015-01-01

    This dissertation addresses the broad applicability of the nanomedicine platform core-cross-linked polymeric micelles (CCL-PMs) composed of thermosensitive mPEG-b-pHPMAmLacn block copolymers. In Chapter 1, a general introduction to nanomedicines is provided, with a particular focus on polymeric

  18. JS-MS: a cross-platform, modular javascript viewer for mass spectrometry signals.

    Science.gov (United States)

    Rosen, Jebediah; Handy, Kyle; Gillan, André; Smith, Rob

    2017-11-06

    Despite the ubiquity of mass spectrometry (MS), data processing tools can be surprisingly limited. To date, there is no stand-alone, cross-platform 3-D visualizer for MS data. Available visualization toolkits require large libraries with multiple dependencies and are not well suited for custom MS data processing modules, such as MS storage systems or data processing algorithms. We present JS-MS, a 3-D, modular JavaScript client application for viewing MS data. JS-MS provides several advantages over existing MS viewers, such as a dependency-free, browser-based, one click, cross-platform install and better navigation interfaces. The client includes a modular Java backend with a novel streaming.mzML parser to demonstrate the API-based serving of MS data to the viewer. JS-MS enables custom MS data processing and evaluation by providing fast, 3-D visualization using improved navigation without dependencies. JS-MS is publicly available with a GPLv2 license at github.com/optimusmoose/jsms.

  19. Epidemic 2014 enterovirus D68 cross-reacts with human rhinovirus on a respiratory molecular diagnostic platform.

    Science.gov (United States)

    McAllister, Shane C; Schleiss, Mark R; Arbefeville, Sophie; Steiner, Marie E; Hanson, Ryan S; Pollock, Catherine; Ferrieri, Patricia

    2015-01-01

Enterovirus D68 (EV-D68) is an emerging virus known to cause sporadic disease and occasional epidemics of severe lower respiratory tract infection. However, the true prevalence of infection with EV-D68 is unknown, due in part to the lack of a rapid and specific nucleic acid amplification test as well as the infrequency with which respiratory samples are analyzed by enterovirus surveillance programs. During the 2014 EV-D68 epidemic in the United States, we noted an increased frequency of "low-positive" results for human rhinovirus (HRV) detected in respiratory tract samples using the GenMark Diagnostics eSensor respiratory viral panel (RVP), a multiplex PCR assay able to detect 14 known respiratory viruses but not enteroviruses. We simultaneously noted markedly increased admissions to our Pediatric Intensive Care Unit for severe lower respiratory tract infections in patients both with and without a history of reactive airway disease. Accordingly, we hypothesized that these "low-positive" RVP results were due to EV-D68 rather than rhinovirus infection. Sequencing of the picornavirus 5' untranslated region (5'-UTR) of 49 samples positive for HRV by the GenMark RVP revealed that 33 (67.3%) were in fact EV-D68. Notably, the mean intensity of the HRV RVP result was significantly lower in the sequence-identified EV-D68 samples (20.3 nA) than in HRV samples (129.7 nA). Using a cut-off of 40 nA for the differentiation of EV-D68 from HRV resulted in 94% sensitivity and 88% specificity. The robust diagnostic characteristics of our data suggest that the cross-reactivity of EV-D68 and HRV on the GenMark Diagnostics eSensor RVP platform may be an important factor to consider in making an accurate molecular diagnosis of EV-D68 at institutions utilizing this system or other molecular respiratory platforms that may also cross-react.
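
    The cut-off logic reported in this record can be sketched as follows. The intensities and labels below are invented for illustration; only the 40 nA threshold and the "lower intensity means EV-D68" direction are taken from the abstract:

```python
import numpy as np

# Hypothetical eSensor intensities (nA) with sequence-confirmed labels;
# the study reported mean intensities of ~20.3 nA for EV-D68 versus
# ~129.7 nA for true HRV, and proposed a 40 nA cut-off.
intensity = np.array([12, 18, 25, 33, 42, 55, 90, 120, 150, 45])
is_ev_d68 = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])  # 1 = EV-D68 by sequencing

called_ev_d68 = intensity < 40          # below the cut-off -> call EV-D68

tp = int(np.sum(called_ev_d68 & (is_ev_d68 == 1)))
fn = int(np.sum(~called_ev_d68 & (is_ev_d68 == 1)))
tn = int(np.sum(~called_ev_d68 & (is_ev_d68 == 0)))
fp = int(np.sum(called_ev_d68 & (is_ev_d68 == 0)))
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(sensitivity, specificity)         # 0.8 1.0 for this toy dataset
```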

  20. ENHANCED DATA DISCOVERABILITY FOR IN SITU HYPERSPECTRAL DATASETS

    Directory of Open Access Journals (Sweden)

    B. Rasaiah

    2016-06-01

    Full Text Available Field spectroscopic metadata is a central component in the quality assurance, reliability, and discoverability of hyperspectral data and the products derived from it. Cataloguing, mining, and interoperability of these datasets rely upon the robustness of metadata protocols for field spectroscopy, and on the software architecture supporting the exchange of these datasets. Currently, no standard for in situ spectroscopy data or metadata protocols exists. This inhibits the effective sharing of growing volumes of in situ spectroscopy datasets and prevents exploiting the benefits of integration with the evolving range of data sharing platforms. A core metadataset for field spectroscopy was introduced by Rasaiah et al. (2011-2015), with extended support for specific applications. This paper presents a prototype model for an OGC- and ISO-compliant, platform-independent metadata discovery service aligned to the specific requirements of field spectroscopy. In this study, a proof-of-concept metadata catalogue is described and deployed in a cloud-based architecture as a demonstration of an operationalized field spectroscopy metadata standard and web-based discovery service.

  1. Browser App Approach: Can It Be an Answer to the Challenges in Cross-Platform App Development?

    Science.gov (United States)

    Huynh, Minh; Ghimire, Prashant

    2017-01-01

    Aim/Purpose: As smartphones proliferate, many different platforms begin to emerge. The challenge to developers as well as IS [Information Systems] educators and students is how to learn the skills to design and develop apps to run on cross-platforms. Background: For developers, the purpose of this paper is to describe an alternative to the complex…

  2. Determination of UAV position using high accuracy navigation platform

    Directory of Open Access Journals (Sweden)

    Ireneusz Kubicki

    2016-07-01

    Full Text Available The choice of a navigation system for a mini UAV is very important because of its application and exploitation, particularly when a synthetic aperture radar installed on board requires highly precise information about the object's position. The exemplary solution presented here draws attention to the possible problems associated with the use of appropriate technology, sensors, and devices, or with a complete navigation system. Position and spatial orientation errors of the measurement platform affect the obtained SAR imaging. Both turbulence and maneuvers performed during flight cause changes in the position of the airborne object, resulting in deteriorated or missing SAR images. Consequently, it is necessary to reduce or eliminate the impact of sensor errors on the UAV position accuracy. Compromise solutions must be sought, both in newer and better technologies and in the field of software. Keywords: navigation systems, unmanned aerial vehicles, sensors integration

  3. A cross-platform solution for light field based 3D telemedicine.

    Science.gov (United States)

    Wang, Gengkun; Xiang, Wei; Pickering, Mark

    2016-03-01

Current telehealth services are dominated by conventional 2D video conferencing systems, which are limited in their capabilities in providing a satisfactory communication experience due to the lack of realism. The "immersiveness" provided by 3D technologies has the potential to promote telehealth services to a wider range of applications. However, conventional stereoscopic 3D technologies are deficient in many aspects, including low resolution and the requirement for complicated multi-camera setup and calibration, and special glasses. The advent of light field (LF) photography enables us to record light rays in a single shot and provide glasses-free 3D display with continuous motion parallax in a wide viewing zone, which is ideally suited for 3D telehealth applications. As far as our literature review suggests, there have been no reports of 3D telemedicine systems using LF technology. In this paper, we propose a cross-platform solution for a LF-based 3D telemedicine system. Firstly, a novel system architecture based on LF technology is established, which is able to capture the LF of a patient, and provide an immersive 3D display at the doctor site. For 3D modeling, we further propose an algorithm which is able to convert the captured LF to a 3D model with a high level of detail. For the software implementation on different platforms (i.e., desktop, web-based and mobile phone platforms), a cross-platform solution is proposed. Demo applications have been developed for 2D/3D video conferencing, 3D model display and edit, blood pressure and heart rate monitoring, and patient data viewing functions. The demo software can be extended to multi-discipline telehealth applications, such as tele-dentistry, tele-wound and tele-psychiatry. The proposed 3D telemedicine solution has the potential to revolutionize next-generation telemedicine technologies by providing a high quality immersive tele-consultation experience. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  4. The scheme and research of TV series multidimensional comprehensive evaluation on cross-platform

    Science.gov (United States)

    Chai, Jianping; Bai, Xuesong; Zhou, Hongjun; Yin, Fulian

    2016-10-01

As a remedy for the shortcomings of the traditional comprehensive evaluation of TV programs, such as reliance on a single data source, neglect of new media, and the high time cost and difficulty of conducting surveys, a new evaluation of TV series is proposed in this paper, which takes the perspective of cross-platform, multidimensional evaluation after broadcasting. This scheme considers data directly collected from cable television and the Internet as its research objects. Based on the TOPSIS principle, after preprocessing and calculation, the data become primary indicators that reflect different profiles of the viewing of TV series. Then, after reasonable weighting and summation by six methods (PCA, AHP, etc.), the primary indicators form composite indices for different channels or websites. The scheme avoids the inefficiency and difficulty of surveying and marking; at the same time, it not only reflects different dimensions of viewing, but also combines TV media and new media, completing a multidimensional comprehensive evaluation of TV series across platforms.
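
    The TOPSIS step mentioned in this record can be sketched as follows. The indicator values and weights are invented, and the paper's actual indicators and weighting methods differ:

```python
import numpy as np

# Minimal TOPSIS sketch: rank three hypothetical TV series on three
# benefit-type viewing indicators (all values and weights invented).
X = np.array([[0.6, 0.8, 0.7],
              [0.9, 0.5, 0.6],
              [0.4, 0.9, 0.8]], dtype=float)
w = np.array([0.5, 0.3, 0.2])

# 1) Vector-normalize each indicator column and apply the weights.
V = w * X / np.linalg.norm(X, axis=0)
# 2) Ideal best/worst points (all indicators are benefit-type here).
best, worst = V.max(axis=0), V.min(axis=0)
# 3) Relative closeness to the ideal solution; higher is better.
d_best = np.linalg.norm(V - best, axis=1)
d_worst = np.linalg.norm(V - worst, axis=1)
closeness = d_worst / (d_best + d_worst)
ranking = np.argsort(-closeness)       # series indices, best first
print(closeness, ranking)
```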

  5. Cross-Cultural Concept Mapping of Standardized Datasets

    DEFF Research Database (Denmark)

    Kano Glückstad, Fumiko

    2012-01-01

    This work compares four feature-based similarity measures derived from cognitive sciences. The purpose of the comparative analysis is to verify the potentially most effective model that can be applied for mapping independent ontologies in a culturally influenced domain [1]. Here, datasets based...

  6. GSHR, a Web-Based Platform Provides Gene Set-Level Analyses of Hormone Responses in Arabidopsis

    Directory of Open Access Journals (Sweden)

    Xiaojuan Ran

    2018-01-01

    Full Text Available Phytohormones regulate diverse aspects of plant growth and environmental responses. Recent high-throughput technologies have promoted more comprehensive profiling of genes regulated by different hormones. However, these omics data generally result in large gene lists that make it challenging to interpret the data and extract insights into biological significance. With the rapid accumulation of these large-scale experiments, especially the transcriptomic data available in public databases, a means of using this information to explore transcriptional networks is needed. Different platforms have different architectures and designs, and even similar studies using the same platform may obtain data with large variances because of the highly dynamic and flexible effects of plant hormones; this makes it difficult to make comparisons across different studies and platforms. Here, we present a web server providing gene set-level analyses of Arabidopsis thaliana hormone responses. GSHR collected 333 RNA-seq and 1,205 microarray datasets from the Gene Expression Omnibus, characterizing transcriptomic changes in Arabidopsis in response to phytohormones including abscisic acid, auxin, brassinosteroids, cytokinins, ethylene, gibberellins, jasmonic acid, salicylic acid, and strigolactones. These data were further processed and organized into 1,368 gene sets regulated by different hormones or hormone-related factors. By comparing input gene lists to these gene sets, GSHR helps to identify gene sets from the input gene list regulated by different phytohormones or related factors. Together, GSHR links prior information about transcriptomic changes induced by hormones and related factors to newly generated data and facilitates cross-study and cross-platform comparisons, which helps in mining biologically significant information from large-scale datasets. GSHR is freely available at http://bioinfo.sibs.ac.cn/GSHR/.
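
    Gene set-level comparison of an input list against a curated gene set is commonly scored with a hypergeometric over-representation test. The sketch below uses invented counts and is not necessarily the statistic GSHR itself computes:

```python
from scipy.stats import hypergeom

# Over-representation sketch: does a user gene list overlap a hormone-
# regulated gene set more than expected by chance? All counts invented.
N = 20000   # genes in the background (roughly the Arabidopsis genome)
K = 300     # genes in the hormone-responsive gene set
n = 150     # genes in the user's input list
k = 12      # genes shared by the two lists

# P(overlap >= k) under random sampling without replacement.
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p_value:.3g}")
```

    The expected overlap by chance is n*K/N = 2.25 genes, so observing 12 yields a very small p-value.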

  7. a Web-Based Interactive Platform for Co-Clustering Spatio-Temporal Data

    Science.gov (United States)

    Wu, X.; Poorthuis, A.; Zurita-Milla, R.; Kraak, M.-J.

    2017-09-01

Since current studies on clustering analysis mainly focus on exploring spatial or temporal patterns separately, a co-clustering algorithm is utilized in this study to enable the concurrent analysis of spatio-temporal patterns. To allow users to adopt and adapt the algorithm for their own analyses, it is integrated within the server side of an interactive web-based platform. The client side of the platform, running within any modern browser, is a graphical user interface (GUI) with multiple linked visualizations that facilitates the understanding, exploration and interpretation of the raw dataset and co-clustering results. Users can also upload their own datasets and adjust clustering parameters within the platform. To illustrate the use of this platform, an annual temperature dataset from 28 weather stations over 20 years in the Netherlands is used. After the dataset is loaded, it is visualized in a set of linked visualizations: a geographical map, a timeline and a heatmap. This aids the user in understanding the nature of the dataset and in selecting appropriate co-clustering parameters. Once the dataset is processed by the co-clustering algorithm, the results are visualized in small multiples, a heatmap and a timeline to provide various views for better understanding and further interpretation. Since the visualization and analysis are integrated in a seamless platform, the user can explore different sets of co-clustering parameters and instantly view the results, enabling iterative, exploratory data analysis. As such, this interactive web-based platform allows users to analyze spatio-temporal data using the co-clustering method and helps the understanding of the results through multiple linked visualizations.
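
    The co-clustering idea can be illustrated with a simple alternating block-means scheme (an illustration only, not the platform's exact algorithm). The station-by-year temperatures below are synthetic, with two planted row groups and two planted column groups:

```python
import numpy as np

# Synthetic "28 stations x 20 years" matrix with a 2x2 block structure.
rng = np.random.default_rng(1)
block_means = np.array([[10.0, 15.0], [20.0, 25.0]])
true_rows = np.repeat([0, 1], 14)               # 28 stations in 2 groups
true_cols = np.repeat([0, 1], 10)               # 20 years in 2 groups
Z = block_means[np.ix_(true_rows, true_cols)] + rng.normal(0, 0.5, (28, 20))

r = true_rows.copy(); r[:3] = 1 - r[:3]         # start from perturbed labels
c = true_cols.copy(); c[:2] = 1 - c[:2]
for _ in range(10):
    # Mean of each (row-cluster, column-cluster) block.
    M = np.array([[Z[r == i][:, c == j].mean() for j in range(2)]
                  for i in range(2)])
    # Reassign every row, then every column, to its best-fitting cluster.
    r = np.array([np.argmin([((Z[i] - M[g][c]) ** 2).sum() for g in range(2)])
                  for i in range(Z.shape[0])])
    c = np.array([np.argmin([((Z[:, j] - M[:, g][r]) ** 2).sum() for g in range(2)])
                  for j in range(Z.shape[1])])
print(r, c)   # recovered row and column cluster labels
```

    With well-separated blocks, the alternating updates recover the planted row and column partitions after a few iterations.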

  8. Design and Analysis of a Compact Precision Positioning Platform Integrating Strain Gauges and the Piezoactuator

    Directory of Open Access Journals (Sweden)

    Shunguang Wan

    2012-07-01

    Full Text Available Miniaturized precision positioning platforms are needed for in situ nanomechanical test applications. This paper proposes a compact precision positioning platform integrating strain gauges and a piezoactuator. The effects of the geometric parameters of two parallel plates on the Von Mises stress distribution, as well as on the static and dynamic characteristics of the platform, were studied by the finite element method. Results of the calibration experiment indicate that the strain gauge sensor has good linearity and a sensitivity of about 0.0468 mV/μm. A closed-loop control system was established to address the nonlinearity of the platform. Experimental results demonstrate that, during the displacement control process, both the increasing and the decreasing portions of the displacement have good linearity, verifying that the control system works as intended. The developed platform has a compact structure yet can realize displacement measurement with the embedded strain gauges, which is useful for closed-loop control and structure miniaturization of piezo devices. It has potential applications in nanoindentation and nanoscratch tests, especially in in situ nanomechanical testing, which requires compact structures.

  9. NASA's Platform for Cross-Disciplinary Microchannel Research

    Science.gov (United States)

    Son, Sang Young; Spearing, Scott; Allen, Jeffrey; Monaco, Lisa A.

    2003-01-01

    A team from the Structural Biology group located at the NASA Marshall Space Flight Center in Huntsville, Alabama is developing a platform suitable for cross-disciplinary microchannel research. The original objective of this engineering development effort was to deliver a multi-user flight-certified facility for iterative investigations of protein crystal growth; that is, Iterative Biological Crystallization (IBC). However, the unique capabilities of this facility are not limited to the low-gravity structural biology research community. Microchannel-based research in a number of other areas may be greatly accelerated through use of this facility. In particular, the potential for gas-liquid flow investigations and cellular biological research utilizing the exceptional pressure control and simplified coupling to macroscale diagnostics inherent in the IBC facility will be discussed. In conclusion, the opportunities for research-specific modifications to the microchannel configuration, control, and diagnostics will be discussed.

  10. Positive Scattering Cross Sections using Constrained Least Squares

    International Nuclear Information System (INIS)

    Dahl, J.A.; Ganapol, B.D.; Morel, J.E.

    1999-01-01

A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section, which reduces the error of these modified moments, is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme are presented
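
    The constrained adjustment described in this record can be sketched as a small optimization problem: hold the 0th and 1st Legendre moments fixed, change moments 2 and above as little as possible, and require the reconstructed kernel to be non-negative at the discrete-ordinate angles. The moment values, expansion order, and quadrature below are invented; this is not the PARTISN implementation:

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.optimize import minimize

L = 5
f = np.array([1.0, 0.6, 0.45, 0.0, 0.0])   # toy truncated moment set
mu = legendre.leggauss(8)[0]                # S8 quadrature angles

# Kernel at angle mu: sum_l (2l+1)/2 * f_l * P_l(mu)
P = np.array([[(2 * l + 1) / 2 * legendre.Legendre.basis(l)(m) for l in range(L)]
              for m in mu])
print("min kernel value before:", (P @ f).min())  # negative -> unphysical

cons = [{"type": "eq", "fun": lambda g: g[0] - f[0]},    # fix moment 0
        {"type": "eq", "fun": lambda g: g[1] - f[1]},    # fix moment 1
        {"type": "ineq", "fun": lambda g: P @ g}]        # kernel >= 0
res = minimize(lambda g: np.sum((g[2:] - f[2:]) ** 2), f, constraints=cons)
print("adjusted moments:", res.x)
print("min kernel value after:", (P @ res.x).min())
```

    The least-squares objective touches only moments 2+, so the adjusted set preserves the total and first-order (transport-relevant) moments exactly while restoring positivity at the quadrature angles.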

  11. Microscopy Image Browser: A Platform for Segmentation and Analysis of Multidimensional Datasets.

    Directory of Open Access Journals (Sweden)

    Ilya Belevich

    2016-01-01

    Full Text Available Understanding the structure-function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data, which is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program.

  12. Augmentation of Quasi-Zenith Satellite Positioning System Using High Altitude Platforms Systems (HAPS)

    Science.gov (United States)

    Tsujii, Toshiaki; Harigae, Masatoshi

Recently, feasibility studies of a regional positioning system using quasi-zenith satellites and geostationary satellites have been conducted in Japan. However, the geometry of this system appears unsatisfactory in terms of positioning accuracy in the north-south direction. In this paper, a satellite positioning system augmented by High Altitude Platform Systems (HAPS) is proposed, since the flexibility of HAPS placement is effective in improving the geometry of the satellite positioning system. The improved positioning performance of the augmented system is also demonstrated.
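
    The geometry argument can be made concrete with a dilution-of-precision (DOP) calculation. The line-of-sight vectors below are invented, showing how one flexibly placed platform to the north improves an otherwise south-heavy constellation:

```python
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def gdop(los):
    # los: unit line-of-sight vectors (east, north, up) to each transmitter.
    G = np.hstack([los, np.ones((len(los), 1))])  # extra column: clock bias
    return np.sqrt(np.trace(np.linalg.inv(G.T @ G)))

# Hypothetical geometry: quasi-zenith and geostationary satellites seen
# from Japan cluster toward the zenith and the south, which weakens the
# north-south component of the solution.
sats = np.array([unit([0.0, 0.1, 1.0]),      # quasi-zenith
                 unit([0.3, -0.8, 0.5]),
                 unit([-0.3, -0.8, 0.5]),
                 unit([0.0, -0.9, 0.4])])    # geostationary, to the south

haps = unit([0.0, 0.9, 0.3])                 # HAPS placed to the north

g_sat = gdop(sats)
g_aug = gdop(np.vstack([sats, haps]))
print(g_sat, g_aug)   # GDOP drops once the northern platform is added
```

    Adding a measurement can only shrink the covariance trace, and a northern line of sight adds exactly the component the south-heavy constellation lacks, so the drop here is large.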

  13. Cross-platform analysis of cancer microarray data improves gene expression based classification of phenotypes

    Directory of Open Access Journals (Sweden)

    Eils Roland

    2005-11-01

    Full Text Available Abstract Background The extensive use of DNA microarray technology in the characterization of the cell transcriptome is leading to an ever increasing amount of microarray data from cancer studies. Although similar questions for the same type of cancer are addressed in these different studies, a comparative analysis of their results is hampered by the use of heterogeneous microarray platforms and analysis methods. Results In contrast to a meta-analysis approach, where the results of different studies are combined on an interpretative level, we investigate here how to directly integrate raw microarray data from different studies for the purpose of supervised classification analysis. We use median rank scores and quantile discretization to derive numerically comparable measures of gene expression from different platforms. These transformed data are then used for training classifiers based on support vector machines. We apply this approach to six publicly available cancer microarray gene expression data sets, which consist of three pairs of studies, each examining the same type of cancer, i.e. breast cancer, prostate cancer or acute myeloid leukemia. For each pair, one study was performed by means of cDNA microarrays and the other by means of oligonucleotide microarrays. In each pair, high classification accuracies (>85%) were achieved with training and testing on data instances randomly chosen from both data sets in a cross-validation analysis. To exemplify the potential of this cross-platform classification analysis, we use two leukemia microarray data sets to show that important genes with regard to the biology of leukemia are selected in an integrated analysis, which are missed in either single-set analysis. Conclusion Cross-platform classification of multiple cancer microarray data sets yields discriminative gene expression signatures that are found and validated on a large number of microarray samples, generated by different laboratories and
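
    Quantile discretization, one of the two transformations named in this record, can be sketched as follows. The data are synthetic and the paper's exact binning may differ:

```python
import numpy as np

# Map each array's expression values onto a small set of equally
# populated bins so that measurements from different microarray
# platforms become numerically comparable (scale-free).
rng = np.random.default_rng(0)
cdna = rng.normal(0, 1, 200)          # "cDNA platform" sample
oligo = rng.lognormal(2, 0.8, 200)    # "oligo platform" sample, other scale

def quantile_discretize(x, n_bins=8):
    # Rank the values, then split the ranks into n_bins equal bins.
    order = np.argsort(x)
    ranks = np.empty(len(x), dtype=int)
    ranks[order] = np.arange(1, len(x) + 1)
    return np.ceil(ranks * n_bins / len(x)).astype(int)  # bins 1..n_bins

a, b = quantile_discretize(cdna), quantile_discretize(oligo)
# Both platforms now share an identical discrete value distribution.
print(np.bincount(a)[1:], np.bincount(b)[1:])
```

    After discretization, both arrays have exactly 25 genes in each of the 8 bins, regardless of their original scales, so a classifier trained on one platform can be applied to the other.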

  14. Cross-Platform Android/iOS-Based Smart Switch Control Middleware in a Digital Home

    Directory of Open Access Journals (Sweden)

    Guo Jie

    2015-01-01

    Full Text Available With technological and economic development, people's lives have improved substantially, especially their home environments. One of the key aspects of these improvements is home intellectualization, whose core is the smart home control system. Furthermore, as smart phones have become increasingly popular, we can use them to control the home system through Wi-Fi, Bluetooth, and GSM; control from phones is more convenient and fast, and the phone has become the primary terminal controller in the smart home. In this paper, we propose middleware for developing a cross-platform Android/iOS-based solution for smart switch control software. We focus on the Wi-Fi based communication protocols between the cellphone and the smart switch, achieve a plugin-based smart switch function, define and implement the JavaScript interface, and then implement the cross-platform Android/iOS-based smart switch control software; usage scenarios are also illustrated. Finally, tests were performed after the completed realization of the smart switch control system.

  15. Pro Smartphone Cross-Platform Development IPhone, Blackberry, Windows Mobile, and Android Development and Distribution

    CERN Document Server

    Allen, Sarah; Lundrigan, Lee

    2010-01-01

    Learn the theory behind cross-platform development, and put the theory into practice with code using the invaluable information presented in this book. With in-depth coverage of development and distribution techniques for iPhone, BlackBerry, Windows Mobile, and Android, you'll learn the native approach to working with each of these platforms. With detailed coverage of emerging frameworks like PhoneGap and Rhomobile, you'll learn the art of creating applications that will run across all devices. You'll also be introduced to the code-signing process and the distribution of applications through t

  16. A signal strength priority based position estimation for mobile platforms

    Science.gov (United States)

    Kalgikar, Bhargav; Akopian, David; Chen, Philip

    2010-01-01

Global Positioning System (GPS) products help to navigate while driving, hiking, boating, and flying. GPS uses a combination of orbiting satellites to determine position coordinates. This works great in most outdoor areas, but the satellite signals are not strong enough to penetrate inside most indoor environments. As a result, a new strain of indoor positioning technologies that make use of 802.11 wireless LANs (WLAN) is beginning to appear on the market. In WLAN positioning the system either monitors propagation delays between wireless access points and wireless device users to apply trilateration techniques, or it maintains a database of location-specific signal fingerprints which is used to identify the most likely match between incoming signal data and fingerprints previously surveyed and saved in the database. In this paper we investigate the issue of deploying WLAN positioning software on mobile platforms with typically limited computational resources. We suggest a novel received-signal-strength rank order based location estimation system that reduces computational loads while maintaining robust performance. The proposed system's performance is compared to conventional approaches.
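The rank-order idea above can be sketched as follows. This is an illustrative reconstruction, not the authors' algorithm: each scan is reduced to the rank order of access points by signal strength, which is cheap to compute and insensitive to device-specific RSS offsets, and the surveyed fingerprint with the smallest rank distance wins. All access point names and RSS values are invented.

```python
# Illustrative sketch: rank-order matching of received signal strength (RSS)
# fingerprints. Instead of comparing raw RSS values, each scan is reduced to
# the rank order of access points by strength.

def rank_order(scan):
    """Map each AP id to its rank (0 = strongest) in the scan."""
    ordered = sorted(scan, key=scan.get, reverse=True)
    return {ap: r for r, ap in enumerate(ordered)}

def rank_distance(scan_a, scan_b):
    """Spearman-footrule-style distance over the APs common to both scans."""
    ra, rb = rank_order(scan_a), rank_order(scan_b)
    common = set(ra) & set(rb)
    if not common:
        return float("inf")
    return sum(abs(ra[ap] - rb[ap]) for ap in common) / len(common)

def locate(scan, fingerprint_db):
    """Return the surveyed location whose fingerprint best matches the scan."""
    return min(fingerprint_db, key=lambda loc: rank_distance(scan, fingerprint_db[loc]))

# Hypothetical survey database: location -> {AP id: RSS in dBm}
db = {
    "lobby":  {"ap1": -40, "ap2": -60, "ap3": -80},
    "office": {"ap1": -75, "ap2": -45, "ap3": -55},
}
print(locate({"ap1": -70, "ap2": -50, "ap3": -58}, db))  # → office
```

Because only the ordering of APs matters, the match survives a constant gain difference between the survey device and the user's phone, which is one motivation for rank-based schemes.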

  17. NEW WEB-BASED ACCESS TO NUCLEAR STRUCTURE DATASETS.

    Energy Technology Data Exchange (ETDEWEB)

    WINCHELL,D.F.

    2004-09-26

    As part of an effort to migrate the National Nuclear Data Center (NNDC) databases to a relational platform, a new web interface has been developed for the dissemination of the nuclear structure datasets stored in the Evaluated Nuclear Structure Data File and Experimental Unevaluated Nuclear Data List.

  18. A Set of Free Cross-Platform Authoring Programs for Flexible Web-Based CALL Exercises

    Science.gov (United States)

    O'Brien, Myles

    2012-01-01

    The Mango Suite is a set of three freely downloadable cross-platform authoring programs for flexible network-based CALL exercises. They are Adobe Air applications, so they can be used on Windows, Macintosh, or Linux computers, provided the freely-available Adobe Air has been installed on the computer. The exercises which the programs generate are…

  19. STRENGTH PERFORMANCE ASSESSMENT IN A SIMULATED MEN'S GYMNASTICS STILL RINGS CROSS

    Directory of Open Access Journals (Sweden)

    Jennifer K. Dunlavy

    2007-03-01

Athletes in sports such as gymnastics who perform the still rings cross position are disadvantaged by a lack of objective and convenient measurement methods. The gymnastics "cross" is a held isometric strength position considered fundamental for all still rings athletes. The purpose of this investigation was to determine whether two small force platforms (FPs) placed on supports to simulate a cross position could demonstrate the fidelity necessary to differentiate between athletes who could perform a cross and those who could not. Ten gymnasts (5 USA Gymnastics Senior National Team, and 5 Age Group Level gymnasts) agreed to participate. The five Senior National Team athletes were grouped as cross Performers; the Age Group gymnasts could not successfully perform the cross position and were grouped as cross Non-Performers. The two small FPs were first tested for reliability and validity and were then used to obtain a force-time record of a simulated cross position. The simulated cross test consisted of standing between two small force platforms placed on top of large solid gymnastics spotting blocks. The gymnasts attempted to perform a cross position by placing their hands at the center of the FPs and pressing downward with sufficient force that they could remove the support of their feet from the floor. Force-time curves (100 Hz) were obtained and analyzed for the sum of peak and mean arm ground reaction forces. The summed arm forces, mean and peak, were compared to body weight to determine how close the gymnasts came to achieving forces equal to body weight and thus the ability to perform the cross. The mean and peak summed arm forces were able to statistically differentiate between athletes who could perform the cross and those who could not (p < 0.05). The force-time curves and small FPs showed sufficient fidelity to differentiate between the Performer and Non-Performer groups.
This experiment showed that small and inexpensive force platforms
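The core computation in the protocol above, summing the two plates' ground reaction forces sample by sample and expressing the mean and peak of the summed signal as a fraction of body weight, can be sketched as follows. The force values and body weight below are invented for illustration, not the study's data.

```python
# Minimal sketch of the summed-arm-force analysis: a gymnast who can hold the
# cross should produce summed arm forces close to body weight.

def summed_arm_forces(left_n, right_n):
    """Sample-wise sum of the two force-plate records (newtons, same rate)."""
    return [l + r for l, r in zip(left_n, right_n)]

def cross_support_ratio(left_n, right_n, body_weight_n):
    """Mean and peak summed arm force as a fraction of body weight."""
    total = summed_arm_forces(left_n, right_n)
    mean_f = sum(total) / len(total)
    peak_f = max(total)
    return mean_f / body_weight_n, peak_f / body_weight_n

# Hypothetical 100 Hz excerpts (N): a performer supports nearly all body weight.
left = [340.0, 345.0, 350.0, 348.0]
right = [335.0, 342.0, 346.0, 344.0]
mean_ratio, peak_ratio = cross_support_ratio(left, right, body_weight_n=700.0)
```

A ratio near 1.0 indicates the feet carry almost no load, i.e. a successful cross; Non-Performers would show clearly lower ratios.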

  20. HipMatch: an object-oriented cross-platform program for accurate determination of cup orientation using 2D-3D registration of single standard X-ray radiograph and a CT volume.

    Science.gov (United States)

    Zheng, Guoyan; Zhang, Xuan; Steppacher, Simon D; Murphy, Stephen B; Siebenrock, Klaus A; Tannast, Moritz

    2009-09-01

The widely used procedure of evaluating cup orientation following total hip arthroplasty using a single standard anteroposterior (AP) radiograph is known to be inaccurate, largely due to the wide variability in individual pelvic orientation relative to the X-ray plate. 2D-3D image registration methods have been introduced for an accurate determination of the post-operative cup alignment with respect to an anatomical reference extracted from the CT data. Although encouraging results have been reported, their extensive usage in clinical routine is still limited. This may be explained by their requirement of a CAD model of the prosthesis, which is often difficult to obtain from the manufacturer due to proprietary issues, and by their requirement of either multiple radiographs or a radiograph-specific calibration, both of which are not available for most retrospective studies. To address these issues, we developed and validated an object-oriented cross-platform program called "HipMatch" in which a hybrid 2D-3D registration scheme combining an iterative landmark-to-ray registration with a 2D-3D intensity-based registration was implemented to estimate a rigid transformation between a pre-operative CT volume and the post-operative X-ray radiograph for a precise estimation of cup alignment. No CAD model of the prosthesis is required. Quantitative and qualitative results evaluated on cadaveric and clinical datasets are given, which indicate the robustness and the accuracy of the program. HipMatch is written in the object-oriented programming language C++ using the cross-platform software Qt (TrollTech, Oslo, Norway), VTK, and Coin3D, and is portable to any platform.

  1. Adding Cross-Platform Support to a High-Throughput Software Stack and Exploration of Vectorization Libraries

    CERN Document Server

    AUTHOR|(CDS)2258962

This master's thesis was written at the LHCb experiment at CERN. It is part of the initiative for improving software in view of the upcoming upgrade in 2021, which will significantly increase the amount of acquired data. The thesis consists of two parts. The first part explores different vectorization libraries and their usefulness for the LHCb collaboration. The second part is about adding cross-platform support to the LHCb software stack. Here, the LHCb stack is successfully ported to ARM (aarch64) and its performance is analyzed. At the end of the thesis, the port to PowerPC (ppc64le) awaits the performance analysis. The main goal of porting the stack is the cost-performance evaluation of the different platforms, to find the most cost-efficient hardware for the new server farm for the upgrade. For this, selected vectorization libraries are extended to support the PowerPC and ARM platforms. And though the same compiler is used, platform-specific changes to the compilation flags are required. In...

  2. Large-scale cross-species chemogenomic platform proposes a new drug discovery strategy of veterinary drug from herbal medicines.

    Directory of Open Access Journals (Sweden)

    Chao Huang

Veterinary Herbal Medicine (VHM) is a comprehensive, current, and informative discipline on the utilization of herbs in veterinary practice. Driven by chemistry but progressively directed by pharmacology and the clinical sciences, drug research has contributed much to addressing the need for innovative veterinary medicines for curing animal diseases. However, research into veterinary medicines of vegetal origin in the pharmaceutical industry has declined, owing to issues such as the lack of compatibility of traditional natural-product extract libraries with high-throughput screening. Here, we present a cross-species chemogenomic screening platform to dissect the genetic basis of multifactorial diseases and to determine the most suitable points of attack for future veterinary medicines, thereby increasing the number of treatment options. First, based on critically examined pharmacology and text mining, we build a cross-species drug-likeness evaluation approach to screen the lead compounds in veterinary medicines. Second, a specific cross-species target prediction model is developed to infer drug-target connections, with the purpose of understanding how drugs work on the specific targets. Third, we focus on exploring the multiple-target interference effects of veterinary medicines by heterogeneous network convergence and modularization analysis. Finally, we manually integrate a disease pathway to test whether the cross-species chemogenomic platform can uncover the active mechanism of a veterinary medicine, which is exemplified by a specific network module. We believe the proposed cross-species chemogenomic platform allows for the systematization of current and traditional knowledge of veterinary medicine and, importantly, for the application of this emerging body of knowledge to the development of new drugs for animal diseases.

  3. General Purpose Multimedia Dataset - GarageBand 2008

    DEFF Research Database (Denmark)

    Meng, Anders

This document describes a general purpose multimedia data-set to be used in cross-media machine learning problems. In more detail, we describe the genre taxonomy applied at http://www.garageband.com, from where the data-set was collected, and how the taxonomy has been fused into a more human-understandable taxonomy. Finally, a description of various features extracted from both the audio and text is presented.

  4. Positive technology: a free mobile platform for the self-management of psychological stress.

    Science.gov (United States)

    Gaggioli, Andrea; Cipresso, Pietro; Serino, Silvia; Campanaro, Danilo Marco; Pallavicini, Federica; Wiederhold, Brenda K; Riva, Giuseppe

    2014-01-01

We describe the main features and preliminary evaluation of Positive Technology, a free mobile platform for the self-management of psychological stress (http://positiveapp.info/). The mobile platform features three main components: (i) guided relaxation, which provides the user with the opportunity of browsing a gallery of relaxation music and video-narrative resources for reducing stress; (ii) 3D biofeedback, which helps the user learn to control his/her responses by visualizing variations of heart rate in an engaging 3D environment; (iii) stress tracking, through the recording of heart rate and self-reports. We evaluated the Positive Technology app in an online trial involving 32 participants, 7 of whom used the application in combination with the wrist sensor. Overall, feedback from users was satisfactory, and the analysis of data collected online indicated the capability of the app for reducing perceived stress levels. A future goal is to improve the usability of the application and include more advanced stress monitoring features based on the analysis of heart rate variability indexes.

  5. XML as a cross-platform representation for medical imaging with fuzzy algorithms.

    Science.gov (United States)

    Gal, Norbert; Stoicu-Tivadar, Vasile

    2011-01-01

Machines that perform linguistic medical image interpretation are based on fuzzy algorithms. There are several frameworks that can edit and simulate fuzzy algorithms, but they are not compatible with most of the implemented applications. This paper suggests a representation for fuzzy algorithms in XML files, using XML as a cross-platform bridge between the simulation framework and the software applications. The paper presents a parsing algorithm that takes files created by the simulation framework and converts them dynamically into an XML file, keeping the original logical structure of the files.
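As a toy illustration of the idea (the paper's actual schema is not reproduced here), a fuzzy rule base can be serialized to XML with the standard library so that a simulation framework and a clinical application exchange the same algorithm. All element names and rule contents below are invented for the example.

```python
# Serialize a toy fuzzy rule base (IF-THEN rules over linguistic terms) to XML,
# preserving the logical structure of antecedents and consequents.
import xml.etree.ElementTree as ET

rules = [
    {"if": {"intensity": "dark", "gradient": "steep"}, "then": ("tissue", "lesion")},
    {"if": {"intensity": "bright"}, "then": ("tissue", "normal")},
]

def rules_to_xml(rules):
    root = ET.Element("fuzzyAlgorithm")
    for rule in rules:
        r = ET.SubElement(root, "rule")
        antecedent = ET.SubElement(r, "if")
        for variable, term in rule["if"].items():
            ET.SubElement(antecedent, "clause", variable=variable, term=term)
        var, term = rule["then"]
        ET.SubElement(r, "then", variable=var, term=term)
    return ET.tostring(root, encoding="unicode")

xml_text = rules_to_xml(rules)
```

Because the XML keeps each rule's antecedent clauses and consequent as explicit elements, either side of the exchange can reconstruct the rule base without knowing the other tool's native file format.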

  6. A Dataset for Visual Navigation with Neuromorphic Methods

    Directory of Open Access Journals (Sweden)

    Francisco eBarranco

    2016-02-01

Standardized benchmarks in Computer Vision have greatly contributed to the advance of approaches to many problems in the field. If we want to enhance the visibility of event-driven vision and increase its impact, we will need benchmarks that allow comparison among different neuromorphic methods as well as comparison to conventional Computer Vision approaches. We present datasets to evaluate the accuracy of frame-free and frame-based approaches for tasks of visual navigation. Similar to conventional Computer Vision datasets, we provide synthetic and real scenes, with the synthetic data created with graphics packages, and the real data recorded using a mobile robotic platform carrying a dynamic and active pixel vision sensor (DAVIS) and an RGB+Depth sensor. For both datasets the cameras move with a rigid motion in a static scene, and the data includes the images, events, optic flow, 3D camera motion, and the depth of the scene, along with calibration procedures. Finally, we also provide simulated event data generated synthetically from well-known frame-based optical flow datasets.

  7. Omicseq: a web-based search engine for exploring omics datasets

    Science.gov (United States)

    Sun, Xiaobo; Pittard, William S.; Xu, Tianlei; Chen, Li; Zwick, Michael E.; Jiang, Xiaoqian; Wang, Fusheng

    2017-01-01

The development and application of high-throughput genomics technologies has resulted in massive quantities of diverse omics data that continue to accumulate rapidly. These rich datasets offer unprecedented and exciting opportunities to address long standing questions in biomedical research. However, our ability to explore and query the content of diverse omics data is very limited. Existing dataset search tools rely almost exclusively on the metadata. A text-based query for gene name(s) does not work well on datasets wherein the vast majority of their content is numeric. To overcome this barrier, we have developed Omicseq, a novel web-based platform that facilitates the easy interrogation of omics datasets holistically to improve ‘findability’ of relevant data. The core component of Omicseq is trackRank, a novel algorithm for ranking omics datasets that fully uses the numerical content of the dataset to determine relevance to the query entity. The Omicseq system is supported by a scalable and elastic, NoSQL database that hosts a large collection of processed omics datasets. In the front end, a simple, web-based interface allows users to enter queries and instantly receive search results as a list of ranked datasets deemed to be the most relevant. Omicseq is freely available at http://www.omicseq.org. PMID:28402462
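The trackRank algorithm itself is not specified in the abstract. As a naive stand-in, the sketch below ranks datasets by the percentile of the queried gene's numeric signal within each dataset, just to illustrate content-based (rather than metadata-based) ranking; the dataset names and signal values are invented.

```python
# Toy content-based dataset ranking: a dataset where the queried gene's signal
# is prominent relative to the dataset's own distribution ranks higher.

def percentile_rank(value, values):
    """Fraction of values less than or equal to `value`."""
    return sum(v <= value for v in values) / len(values)

def rank_datasets(gene, datasets):
    """Order dataset names by the queried gene's within-dataset percentile."""
    scored = {
        name: percentile_rank(signals[gene], list(signals.values()))
        for name, signals in datasets.items() if gene in signals
    }
    return sorted(scored, key=scored.get, reverse=True)

# Hypothetical per-gene signal values in two processed datasets.
datasets = {
    "liver_chipseq": {"TP53": 9.1, "MYC": 2.0, "EGFR": 1.2},
    "blood_rnaseq":  {"TP53": 0.4, "MYC": 7.5, "EGFR": 6.8},
}
print(rank_datasets("TP53", datasets))  # liver dataset first
```

The point of scoring against each dataset's own numeric distribution is that raw signal scales differ across platforms, so a within-dataset percentile is a crude but scale-free notion of relevance.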

  8. Mining the archives: a cross-platform analysis of gene ...

    Science.gov (United States)

    Formalin-fixed paraffin-embedded (FFPE) tissue samples represent a potentially invaluable resource for genomic research into the molecular basis of disease. However, use of FFPE samples in gene expression studies has been limited by technical challenges resulting from degradation of nucleic acids. Here we evaluated gene expression profiles derived from fresh-frozen (FRO) and FFPE mouse liver tissues using two DNA microarray protocols and two whole transcriptome sequencing (RNA-seq) library preparation methodologies. The ribo-depletion protocol outperformed the other three methods by having the highest correlations of differentially expressed genes (DEGs) and best overlap of pathways between FRO and FFPE groups. We next tested the effect of sample time in formalin (18 hours or 3 weeks) on gene expression profiles. Hierarchical clustering of the datasets indicated that test article treatment, and not preservation method, was the main driver of gene expression profiles. Meta- and pathway analyses indicated that biological responses were generally consistent for 18-hour and 3-week FFPE samples compared to FRO samples. However, clear erosion of signal intensity with time in formalin was evident, and DEG numbers differed by platform and preservation method. Lastly, we investigated the effect of age in FFPE block on genomic profiles. RNA-seq analysis of 8-, 19-, and 26-year-old control blocks using the ribo-depletion protocol resulted in comparable quality metrics, inc

  9. Cross platform analysis of methylation, miRNA and stem cell gene expression data in germ cell tumors highlights characteristic differences by tumor histology

    International Nuclear Information System (INIS)

    Poynter, Jenny N.; Bestrashniy, Jessica R. B. M.; Silverstein, Kevin A. T.; Hooten, Anthony J.; Lees, Christopher; Ross, Julie A.; Tolar, Jakub

    2015-01-01

Alterations in methylation patterns, miRNA expression, and stem cell protein expression occur in germ cell tumors (GCTs). Our goal is to integrate molecular data across platforms to identify molecular signatures in the three main histologic subtypes of Type I and Type II GCTs (yolk sac tumor (YST), germinoma, and teratoma). We included 39 GCTs and 7 paired adjacent tissue samples in the current analysis. Molecular data available for analysis include DNA methylation data (Illumina GoldenGate Cancer Methylation Panel I), miRNA expression (NanoString nCounter miRNA platform), and stem cell factor expression (SABiosciences Human Embryonic Stem Cell Array). We evaluated the cross platform correlations of the data features using the Maximum Information Coefficient (MIC). In analyses of individual datasets, differences were observed by tumor histology. Germinomas had higher expression of transcription factors maintaining stemness, while YSTs had higher expression of cytokines, endoderm and endothelial markers. We also observed differences in miRNA expression, with miR-371-5p, miR-122, miR-302a, miR-302d, and miR-373 showing elevated expression in one or more histologic subtypes. Using the MIC, we identified correlations across the data features, including six major hubs with higher expression in YST (LEFTY1, LEFTY2, miR-302b, miR-302a, miR-126, and miR-122) compared with other GCT. While prognosis for GCTs is overall favorable, many patients experience resistance to chemotherapy, relapse and/or long term adverse health effects following treatment. Targeted therapies, based on integrated analyses of molecular tumor data such as that presented here, may provide a way to secure high cure rates while reducing unintended health consequences.

  10. Browser App Approach: Can It Be an Answer to the Challenges in Cross-Platform App Development?

    Directory of Open Access Journals (Sweden)

    Minh Q. Huynh

    2017-02-01

Aim/Purpose: As smartphones proliferate, many different platforms begin to emerge. The challenge to developers as well as IS educators and students is how to learn the skills to design and develop apps that run on cross-platforms. Background: For developers, the purpose of this paper is to describe an alternative to complex native app development. For IS educators and students, the paper provides a feasible way to learn and develop fully functional mobile apps without technical burdens. Methodology: The method used in the development of browser-based apps is prototyping. Our proposed approach is browser-based, supports cross-platforms, uses open-source standards, and takes advantage of the "write-once-and-run-anywhere" (WORA) concept. Contribution: The paper illustrates the application of the browser-based approach to create a series of browser apps without a high learning curve. Findings: The results show the potential of the browser app approach for teaching as well as for creating new apps. Recommendations for Practitioners: Our proposed browser app development approach and example would be useful to mobile app developers/IS educators and non-technical students because the source code as well as documentation in this project are available for downloading. Future Research: For further work, we discuss the use of hybrid development frameworks to enhance browser apps.

  11. GUIDEseq: a bioconductor package to analyze GUIDE-Seq datasets for CRISPR-Cas nucleases.

    Science.gov (United States)

    Zhu, Lihua Julie; Lawrence, Michael; Gupta, Ankit; Pagès, Hervé; Kucukural, Alper; Garber, Manuel; Wolfe, Scot A

    2017-05-15

    Genome editing technologies developed around the CRISPR-Cas9 nuclease system have facilitated the investigation of a broad range of biological questions. These nucleases also hold tremendous promise for treating a variety of genetic disorders. In the context of their therapeutic application, it is important to identify the spectrum of genomic sequences that are cleaved by a candidate nuclease when programmed with a particular guide RNA, as well as the cleavage efficiency of these sites. Powerful new experimental approaches, such as GUIDE-seq, facilitate the sensitive, unbiased genome-wide detection of nuclease cleavage sites within the genome. Flexible bioinformatics analysis tools for processing GUIDE-seq data are needed. Here, we describe an open source, open development software suite, GUIDEseq, for GUIDE-seq data analysis and annotation as a Bioconductor package in R. The GUIDEseq package provides a flexible platform with more than 60 adjustable parameters for the analysis of datasets associated with custom nuclease applications. These parameters allow data analysis to be tailored to different nuclease platforms with different length and complexity in their guide and PAM recognition sequences or their DNA cleavage position. They also enable users to customize sequence aggregation criteria, and vary peak calling thresholds that can influence the number of potential off-target sites recovered. GUIDEseq also annotates potential off-target sites that overlap with genes based on genome annotation information, as these may be the most important off-target sites for further characterization. In addition, GUIDEseq enables the comparison and visualization of off-target site overlap between different datasets for a rapid comparison of different nuclease configurations or experimental conditions. For each identified off-target, the GUIDEseq package outputs mapped GUIDE-Seq read count as well as cleavage score from a user specified off-target cleavage score prediction

  12. An integrated dataset for in silico drug discovery

    Directory of Open Access Journals (Sweden)

    Cockell Simon J

    2010-12-01

Drug development is expensive and prone to failure. It is potentially much less risky and expensive to reuse a drug developed for one condition for treating a second disease, than it is to develop an entirely new compound. Systematic approaches to drug repositioning are needed to increase throughput and find candidates more reliably. Here we address this need with an integrated systems biology dataset, developed using the Ondex data integration platform, for the in silico discovery of new drug repositioning candidates. We demonstrate that the information in this dataset allows known repositioning examples to be discovered. We also propose a means of automating the search for new treatment indications of existing compounds.

  13. Cross-Platform Development Techniques for Mobile Devices

    Science.gov (United States)

    2013-09-01

Mobile devices run on diverse platforms requiring differing constraints that the developer must adhere to. Thus, extra time and resources... and growing market for providing solutions. ...testing are an iOS-based Apple iPhone 4 and an Android-based Samsung Galaxy S III. For user interface analysis this chapter also includes, from both

  14. Stabilisation problem in biaxial platform

    Directory of Open Access Journals (Sweden)

    Lindner Tymoteusz

    2016-12-01

The article describes investigation of the rolling ball stabilization problem on a biaxial platform. The aim of the control system proposed here is to stabilize a ball moving on a plane at an equilibrium point. The authors propose a control algorithm based on cascade PID and compare it with another control method. The article shows the results of the accuracy of ball stabilization and the influence of the applied filter on the signal waveform. The application used to detect the ball position, measured by a digital camera, has been written using a cross-platform .NET wrapper to the OpenCV image processing library, EmguCV. The authors used a bipolar stepper motor with a dedicated electronic controller. The data between the computer and the designed controller are sent using the RS232 standard. The control stand is based on an ATmega-series microcontroller.

  15. Stabilisation problem in biaxial platform

    Science.gov (United States)

    Lindner, Tymoteusz; Rybarczyk, Dominik; Wyrwał, Daniel

    2016-12-01

The article describes investigation of the rolling ball stabilization problem on a biaxial platform. The aim of the control system proposed here is to stabilize a ball moving on a plane at an equilibrium point. The authors propose a control algorithm based on cascade PID and compare it with another control method. The article shows the results of the accuracy of ball stabilization and the influence of the applied filter on the signal waveform. The application used to detect the ball position, measured by a digital camera, has been written using a cross-platform .NET wrapper to the OpenCV image processing library, EmguCV. The authors used a bipolar stepper motor with a dedicated electronic controller. The data between the computer and the designed controller are sent using the RS232 standard. The control stand is based on an ATmega-series microcontroller.
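A minimal 1-D sketch of the cascade PID idea can look like the following: an outer loop turns position error into a velocity setpoint, and an inner loop turns velocity error into a platform tilt command. The gains, plant model, and time step are illustrative, not taken from the paper.

```python
# Cascade PID sketch for 1-D ball-on-plate balancing (illustrative values).

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def simulate(x0, steps=4000, dt=0.005):
    outer = PID(kp=1.2, ki=0.0, kd=0.3, dt=dt)   # position -> velocity setpoint
    inner = PID(kp=2.0, ki=0.1, kd=0.0, dt=dt)   # velocity -> tilt command
    x, v = x0, 0.0
    for _ in range(steps):
        v_set = outer.step(0.0 - x)              # drive ball toward the centre
        tilt = inner.step(v_set - v)
        tilt = max(-0.3, min(0.3, tilt))         # saturate platform angle (rad)
        a = 9.81 * tilt * 5.0 / 7.0              # rolling ball on inclined plane
        v += a * dt
        x += v * dt
    return x

final = simulate(x0=0.10)                        # start 10 cm off-centre
```

Splitting the controller this way lets the fast inner loop reject velocity disturbances while the slower outer loop shapes the approach to the equilibrium point, which is the usual argument for a cascade over a single position PID.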

  16. Bistatic High Frequency Radar Ocean Surface Cross Section for an FMCW Source with an Antenna on a Floating Platform

    Directory of Open Access Journals (Sweden)

    Yue Ma

    2016-01-01

The first- and second-order bistatic high frequency radar cross sections of the ocean surface with an antenna on a floating platform are derived for a frequency-modulated continuous wave (FMCW) source. Based on previous work, the derivation begins with the general bistatic electric field in the frequency domain for the case of a floating antenna. Demodulation and range transformation are used to obtain the range information, distinguishing the process from that used for a pulsed radar. After Fourier-transforming the autocorrelation and comparing the result with the radar range equation, the radar cross sections are derived. The new first- and second-order antenna-motion-incorporated bistatic radar cross section models for an FMCW source are simulated and compared with those for a pulsed source. Results show that, for the same radar operating parameters, the first-order radar cross section for the FMCW waveform is a little lower than that for a pulsed source. The second-order radar cross section for the FMCW waveform reduces to that for the pulsed waveform when the scattering patch limit approaches infinity. The effect of platform motion on the radar cross sections for an FMCW waveform is investigated for a variety of sea states and operating frequencies and, in general, is found to be similar to that for a pulsed waveform.
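For background on the demodulation and range-transformation step mentioned above: in an FMCW radar the dechirped echo from a point target appears at a beat frequency proportional to range, R = c f_b T / (2B), where B is the sweep bandwidth and T the sweep duration. A minimal sketch of this standard relation, with illustrative HF-radar-scale parameters (not the paper's), is:

```python
# Standard FMCW dechirp relations: beat frequency <-> target range.

c = 3.0e8        # speed of light, m/s
B = 50.0e3       # sweep bandwidth, Hz (illustrative)
T = 0.5          # sweep duration, s (illustrative)

def beat_frequency(range_m):
    """Beat frequency produced by a point target at the given range."""
    return 2.0 * B * range_m / (c * T)

def range_from_beat(f_beat):
    """Invert the dechirp relation to recover range from beat frequency."""
    return c * f_beat * T / (2.0 * B)

def range_resolution():
    """Classic FMCW range resolution, c / (2B)."""
    return c / (2.0 * B)

f_b = beat_frequency(30_000.0)   # a target at 30 km -> 20 Hz beat
```

This is why the FMCW processing chain in the abstract replaces the pulsed radar's time-gating with a spectral (range) transform of the demodulated signal.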

  17. Innovative Design of Agricultural Cross-border E-commerce Management Platform Construction between Hainan and Taiwan

    Science.gov (United States)

    Song, Jun; Gao, Yanli

    2018-02-01

This essay is based on the subject research between Hainan and Taiwan. By comparing agricultural development in Hainan with that in other Chinese regions, it finds that Hainan's agriculture is developing slowly. Meanwhile, by drawing on the experience and technology of Taiwan's agricultural development and taking full advantage of modern internet technology, we try to find complementary opportunities in agricultural technology and development experience between Hainan and Taiwan. Therefore, by combining the existing resources of Hainan and Taiwan and following the idea of "Internet + Agriculture", the essay works out an innovative design for an agricultural cross-border e-commerce management platform, integrating the resource advantages of Hainan and Taiwan and specifying the functions of the newly designed management platform.

  18. A Javascript GIS Platform Based on Invocable Geospatial Web Services

    Directory of Open Access Journals (Sweden)

    Konstantinos Evangelidis

    2018-04-01

Semantic Web technologies have been increasingly adopted by the geospatial community during the last decade through the utilization of open standards for expressing and serving geospatial data. This has also been dramatically assisted by the ever-increasing access and usage of geographic mapping and location-based services via smart devices in people's daily activities. In this paper, we explore the developmental framework of a pure JavaScript client-side GIS platform exclusively based on invocable geospatial Web services. We also extend JavaScript utilization on the server side by deploying a node server acting as a bridge between open source WPS libraries and popular geoprocessing engines. The vehicle for such an exploration is a cross-platform Web browser capable of interpreting JavaScript commands to achieve interaction with geospatial providers. The tool is a generic Web interface providing capabilities for acquiring spatial datasets, composing layouts, and applying geospatial processes. Ideally, the end-user identifies the services which satisfy a geo-related need and puts them in the appropriate row. The final output may act as a potential collector of freely available geospatial web services. Its server-side components may exploit geospatial processing suppliers, composing in that way a lightweight, fully transparent, open Web GIS platform.
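To make "invocable geospatial Web services" concrete, the sketch below composes a key-value-pair Execute request of the kind an OGC WPS endpoint accepts; it is illustrative only, and the endpoint URL and process identifier are hypothetical (the exact `DataInputs` encoding varies between servers and WPS versions).

```python
# Build a WPS-style KVP Execute URL for a hypothetical geoprocessing service.
from urllib.parse import urlencode

def wps_execute_url(endpoint, identifier, inputs):
    """Compose a WPS 1.0.0-style key-value-pair Execute request URL."""
    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": identifier,
        "DataInputs": ";".join(f"{k}={v}" for k, v in inputs.items()),
    }
    return endpoint + "?" + urlencode(params)

url = wps_execute_url(
    "https://example.org/wps",                        # hypothetical server
    "gs:Buffer",                                      # hypothetical process id
    {"geom": "POINT(23.7 37.9)", "distance": "0.1"},
)
```

A client-side platform like the one described can chain such URLs: the output of one Execute call becomes a `DataInputs` reference of the next, which is what makes the services composable from a browser.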

  19. Omicseq: a web-based search engine for exploring omics datasets.

    Science.gov (United States)

    Sun, Xiaobo; Pittard, William S; Xu, Tianlei; Chen, Li; Zwick, Michael E; Jiang, Xiaoqian; Wang, Fusheng; Qin, Zhaohui S

    2017-07-03

    The development and application of high-throughput genomics technologies has resulted in massive quantities of diverse omics data that continue to accumulate rapidly. These rich datasets offer unprecedented and exciting opportunities to address long standing questions in biomedical research. However, our ability to explore and query the content of diverse omics data is very limited. Existing dataset search tools rely almost exclusively on the metadata. A text-based query for gene name(s) does not work well on datasets wherein the vast majority of their content is numeric. To overcome this barrier, we have developed Omicseq, a novel web-based platform that facilitates the easy interrogation of omics datasets holistically to improve 'findability' of relevant data. The core component of Omicseq is trackRank, a novel algorithm for ranking omics datasets that fully uses the numerical content of the dataset to determine relevance to the query entity. The Omicseq system is supported by a scalable and elastic, NoSQL database that hosts a large collection of processed omics datasets. In the front end, a simple, web-based interface allows users to enter queries and instantly receive search results as a list of ranked datasets deemed to be the most relevant. Omicseq is freely available at http://www.omicseq.org. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. AutoCAD platform customization autolisp

    CERN Document Server

    Ambrosius, Lee

    2014-01-01

    Customize and personalize programs built on the AutoCAD platform AutoLISP is the key to unlocking the secrets of a more streamlined experience using industry leading software programs like AutoCAD, Civil 3D, Plant 3D, and more. AutoCAD Platform Customization: AutoLISP provides real-world examples that show you how to do everything from modifying graphical objects and reading and setting system variables to communicating with external programs. It also features a resources appendix and downloadable datasets and customization examples-tools that ensure swift and easy adoption. Find out how to r

  1. Verification of target motion effects on SAR imagery using the Gotcha GMTI challenge dataset

    Science.gov (United States)

    Hack, Dan E.; Saville, Michael A.

    2010-04-01

    This paper investigates the relationship between a ground moving target's kinematic state and its SAR image. While effects such as cross-range offset, defocus, and smearing appear well understood, their derivations in the literature typically employ simplifications of the radar/target geometry and assume point scattering targets. This study adopts a geometrical model for understanding target motion effects in SAR imagery, termed the target migration path, and focuses on experimental verification of predicted motion effects using both simulated and empirical datasets based on the Gotcha GMTI challenge dataset. Specifically, moving target imagery is generated from three data sources: first, simulated phase history for a moving point target; second, simulated phase history for a moving vehicle derived from a simulated Mazda MPV X-band signature; and third, empirical phase history from the Gotcha GMTI challenge dataset. Both simulated target trajectories match the truth GPS target position history from the Gotcha GMTI challenge dataset, allowing direct comparison between all three imagery sets and the predicted target migration path. This paper concludes with a discussion of the parallels between the target migration path and the measurement model within a Kalman filtering framework, followed by conclusions.
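    The cross-range offset mentioned above has a commonly cited first-order form that the migration-path analysis generalizes: a target with line-of-sight (radial) velocity appears displaced in cross-range. A minimal sketch, with illustrative numbers not taken from the Gotcha collection:

    ```python
    def cross_range_offset(slant_range_m, radial_velocity_ms, platform_velocity_ms):
        """First-order SAR moving-target relation: a target with line-of-sight
        (radial) velocity v_r imaged from a platform moving at v_p appears
        displaced in cross-range by dx = R * v_r / v_p (the classic
        'train off the tracks' effect)."""
        return slant_range_m * radial_velocity_ms / platform_velocity_ms

    # Example: 10 km slant range, 5 m/s radial target motion, 100 m/s platform
    print(cross_range_offset(10_000, 5.0, 100.0), "m")  # 500.0 m
    ```
    
    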

  2. Review of ATLAS Open Data 8 TeV datasets, tools and activities

    CERN Document Server

    The ATLAS collaboration

    2018-01-01

    The ATLAS Collaboration has released two 8 TeV datasets and relevant simulated samples to the public for educational use. A number of groups within ATLAS have used these ATLAS Open Data 8 TeV datasets, developing tools and educational material to promote particle physics. The general aim of these activities is to provide simple and user-friendly interactive interfaces to simulate the procedures used by high-energy physics researchers. International Masterclasses introduce particle physics to high school students and have been studying 8 TeV ATLAS Open Data since 2015. Inspired by this success, a new ATLAS Open Data initiative was launched in 2016 for university students. A comprehensive educational platform was thus developed featuring a second 8 TeV dataset and a new set of educational tools. The 8 TeV datasets and associated tools are presented and discussed here, as well as a selection of activities studying the ATLAS Open Data 8 TeV datasets.

  3. German crowd-investing platforms: Literature review and survey

    Directory of Open Access Journals (Sweden)

    David Grundy

    2016-12-01

    Full Text Available This article presents a comprehensive overview of the current German crowd-investing market, drawing on a dataset of 31 crowd-investing platforms and the analysis of 265 completed projects. While the crowd-investing market still represents only a niche of the German venture capital market, there is potential for an increase in both market volume and average project investment. Market share is distributed among a few crowd-investing platforms, with high entry barriers for new platforms, although platforms that specialise in certain sectors have managed to enter the market successfully. German crowd-investing platforms are found to promote mainly internet-based enterprises (36%), followed by projects in real estate (24%) and green projects (19%), with a median of 100,000 euros raised per project.

  4. annot8r: GO, EC and KEGG annotation of EST datasets

    Directory of Open Access Journals (Sweden)

    Schmid Ralf

    2008-04-01

    Full Text Available Abstract Background The expressed sequence tag (EST) methodology is an attractive option for the generation of sequence data for species for which no completely sequenced genome is available. The annotation and comparative analysis of such datasets poses a formidable challenge for research groups that do not have the bioinformatics infrastructure of major genome sequencing centres. Therefore, there is a need for user-friendly tools to facilitate the annotation of non-model species EST datasets with well-defined ontologies that enable meaningful cross-species comparisons. To address this, we have developed annot8r, a platform for the rapid annotation of EST datasets with GO-terms, EC-numbers and KEGG-pathways. Results annot8r automatically downloads all files relevant for the annotation process and generates a reference database that stores UniProt entries, their associated Gene Ontology (GO), Enzyme Commission (EC) and Kyoto Encyclopaedia of Genes and Genomes (KEGG) annotation, and additional relevant data. For each of GO, EC and KEGG, annot8r extracts a specific sequence subset from the UniProt dataset based on the information stored in the reference database. These three subsets are then formatted for BLAST searches. The user provides the protein or nucleotide sequences to be annotated and annot8r runs BLAST searches against these three subsets. The BLAST results are parsed and the corresponding annotations retrieved from the reference database. The annotations are saved both as flat files and in a relational PostgreSQL results database to facilitate more advanced searches within the results. annot8r is integrated with the PartiGene suite of EST analysis tools. Conclusion annot8r is a tool that assigns GO, EC and KEGG annotations to datasets resulting from EST sequencing projects both rapidly and efficiently. The benefits of an underlying relational database, flexibility and ease of use make it ideally suited for non-model species EST projects.
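    The parse-and-transfer step described above (BLAST hits mapped back to reference annotations) can be sketched in miniature. The reference table, accessions, and e-value cutoff below are hypothetical stand-ins for annot8r's UniProt-backed database; the input follows the standard BLAST tabular format, whose eleventh column is the e-value.

    ```python
    import csv
    import io

    # Hypothetical miniature "reference database": accession -> GO terms
    REFERENCE_GO = {
        "P12345": ["GO:0008270", "GO:0006508"],
        "Q67890": ["GO:0005515"],
    }

    def annotate_from_blast(blast_tabular, max_evalue=1e-5):
        """Parse BLAST tabular output (qseqid, sseqid, pident, ..., evalue,
        bitscore) and transfer GO annotations from acceptable hits."""
        annotations = {}
        reader = csv.reader(io.StringIO(blast_tabular), delimiter="\t")
        for row in reader:
            query, subject, evalue = row[0], row[1], float(row[10])
            if evalue <= max_evalue and subject in REFERENCE_GO:
                annotations.setdefault(query, set()).update(REFERENCE_GO[subject])
        return annotations

    blast_out = (
        "est1\tP12345\t98.1\t200\t4\t0\t1\t200\t1\t200\t1e-50\t380\n"
        "est2\tQ67890\t35.0\t90\t58\t2\t1\t90\t5\t94\t0.5\t40\n"
    )
    print(annotate_from_blast(blast_out))  # only est1 passes the e-value cutoff
    ```
    
    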

  5. Development of a SPARK Training Dataset

    Energy Technology Data Exchange (ETDEWEB)

    Sayre, Amanda M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Olson, Jarrod R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-03-01

    In its first five years, the National Nuclear Security Administration’s (NNSA) Next Generation Safeguards Initiative (NGSI) sponsored more than 400 undergraduate, graduate, and post-doctoral students in internships and research positions (Wyse 2012). In the past seven years, the NGSI program has produced, and continues to produce, a large body of scientific, technical, and policy work in targeted core safeguards capabilities and human capital development activities. The NGSI program carries out activities not only across multiple disciplines, but also across all U.S. Department of Energy (DOE)/NNSA locations in the United States. However, products are not readily shared among disciplines and across locations, nor are they archived in a comprehensive library. Rather, knowledge of NGSI-produced literature is localized to the researchers, clients, and internal laboratory/facility publication systems such as the Electronic Records and Information Capture Architecture (ERICA) at the Pacific Northwest National Laboratory (PNNL). There is also no integrated way of analyzing existing NGSI literature to determine whether the larger NGSI program is achieving its core safeguards capabilities and activities. A complete library of NGSI literature could prove beneficial to a cohesive, sustainable, and more economical NGSI program. The Safeguards Platform for Automated Retrieval of Knowledge (SPARK) has been developed as a knowledge storage, retrieval, and analysis capability that captures safeguards knowledge so that it exists beyond the lifespan of NGSI. During the development process, it was necessary to build a SPARK training dataset (a corpus of documents) for initial entry into the system and for demonstration purposes. We manipulated these data to gain new information about the breadth of NGSI publications, and evaluated the science-policy interface at PNNL as a practical demonstration of SPARK’s intended analysis capability. The analysis demonstration sought to answer the

  6. Following User Pathways: Cross Platform and Mixed Methods Analysis in Social Media Studies

    DEFF Research Database (Denmark)

    Hall, Margeret; Mazarakis, Athanasios; Peters, Isabella

    2016-01-01

    Social media and the resulting tidal wave of available data have changed the ways and methods researchers analyze communities at scale. But the full potential for social scientists (and others) is not yet achieved. Despite the popularity of social media analysis in the past decade, few researchers employ the mixed method approach (e.g. combining qualitative and quantitative methods) in order to better understand how users and society interact online. The workshop 'Following User Pathways' brings together a community of researchers and professionals to address methodological, analytical, conceptual, and technological challenges and opportunities of cross-platform, mixed method analysis in social media ecosystems.

  7. Hip2Norm: an object-oriented cross-platform program for 3D analysis of hip joint morphology using 2D pelvic radiographs.

    Science.gov (United States)

    Zheng, G; Tannast, M; Anderegg, C; Siebenrock, K A; Langlotz, F

    2007-07-01

    We developed an object-oriented cross-platform program to perform three-dimensional (3D) analysis of hip joint morphology using two-dimensional (2D) anteroposterior (AP) pelvic radiographs. Landmarks extracted from 2D AP pelvic radiographs, and optionally an additional lateral pelvic X-ray, were combined with a cone beam projection model to reconstruct 3D hip joints. Since individual pelvic orientation can vary considerably, a method for standardizing pelvic orientation was implemented to determine the absolute tilt/rotation. The evaluation of anatomical morphologic differences was achieved by reconstructing the projected acetabular rim and the measured hip parameters as if obtained in a standardized neutral orientation. The program has been successfully used to interactively objectify acetabular version in hips with femoro-acetabular impingement or developmental dysplasia. Hip(2)Norm is written in the object-oriented programming language C++ using the cross-platform Qt framework (TrollTech, Oslo, Norway) for the graphical user interface (GUI) and is portable to any platform.

  8. Bionimbus: a cloud for managing, analyzing and sharing large genomics datasets.

    Science.gov (United States)

    Heath, Allison P; Greenway, Matthew; Powell, Raymond; Spring, Jonathan; Suarez, Rafael; Hanley, David; Bandlamudi, Chai; McNerney, Megan E; White, Kevin P; Grossman, Robert L

    2014-01-01

    As large genomics and phenotypic datasets become more common, it is increasingly difficult for most researchers to access, manage, and analyze them. One possible approach is to provide the research community with several petabyte-scale cloud-based computing platforms containing these data, along with tools and resources to analyze them. Bionimbus is an open source cloud-computing platform based primarily upon OpenStack, which manages on-demand virtual machines that provide the required computational resources, and GlusterFS, a high-performance clustered file system. Bionimbus also includes Tukey, a portal and associated middleware that provide a single entry point and single sign-on for the various Bionimbus resources, and Yates, which automates the installation, configuration, and maintenance of the required software infrastructure. Bionimbus is used by a variety of projects to process genomics and phenotypic data. For example, it is used by an acute myeloid leukemia resequencing project at the University of Chicago. The project requires several computational pipelines, including pipelines for quality control, alignment, variant calling, and annotation. For each sample, the alignment step requires eight CPUs for about 12 h, and BAM file sizes range from 5 GB to 10 GB per sample. Most members of the research community have difficulty downloading large genomics datasets and obtaining sufficient storage and computer resources to manage and analyze the data. Cloud computing platforms with data commons that contain large genomics datasets, such as Bionimbus, are one choice for broadening access to research data in genomics. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  9. Prediction potential of candidate biomarker sets identified and validated on gene expression data from multiple datasets

    Directory of Open Access Journals (Sweden)

    Karacali Bilge

    2007-10-01

    Full Text Available Abstract Background Independently derived expression profiles of the same biological condition often have few genes in common. In this study, we created populations of expression profiles from publicly available microarray datasets of cancer (breast, lymphoma and renal) samples linked to clinical information with an iterative machine learning algorithm. ROC curves were used to assess the prediction error of each profile for classification. We compared the prediction error of profiles correlated with molecular phenotype against profiles correlated with relapse-free status. The prediction error of profiles identified with supervised univariate feature selection algorithms was compared to that of profiles selected randomly from (a) all genes on the microarray platform and (b) a list of known disease-related genes (a priori selection). We also determined the relevance of expression profiles on test arrays from independent datasets, measured on either the same or different microarray platforms. Results Highly discriminative expression profiles were produced on both simulated gene expression data and expression data from breast cancer and lymphoma datasets on the basis of ER and BCL-6 expression, respectively. Use of relapse-free status to identify profiles for prognosis prediction resulted in poorly discriminative decision rules. Supervised feature selection resulted in more accurate classifications than random or a priori selection; however, the difference in prediction error decreased as the number of features increased. These results held when decision rules were applied across datasets to samples profiled on the same microarray platform. Conclusion Our results show that many gene sets predict molecular phenotypes accurately. Given this, expression profiles identified using different training datasets should be expected to show little agreement.
    In addition, we demonstrate the difficulty in predicting relapse directly from microarray data using supervised machine learning.
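    The supervised-versus-random feature selection comparison can be reproduced in miniature on synthetic data. Everything below (feature counts, the +1.5 class shift, the mean-difference score, and the nearest-centroid decision rule evaluated by AUC) is an assumption chosen for illustration, not the paper's exact setup.

    ```python
    import random

    random.seed(0)
    N_FEATURES, N_INFORMATIVE = 50, 5

    def make_sample(label):
        # Informative features are shifted by +1.5 in the positive class
        return [random.gauss(1.5 if label == 1 and f < N_INFORMATIVE else 0.0, 1.0)
                for f in range(N_FEATURES)]

    def make_set(n_per_class):
        return [(make_sample(l), l) for l in (0, 1) for _ in range(n_per_class)]

    train, test = make_set(30), make_set(20)

    def univariate_scores(data):
        """Supervised univariate score per feature: |difference of class means|."""
        scores = []
        for f in range(N_FEATURES):
            pos = [x[f] for x, l in data if l == 1]
            neg = [x[f] for x, l in data if l == 0]
            scores.append(abs(sum(pos) / len(pos) - sum(neg) / len(neg)))
        return scores

    def auc(features):
        """Nearest-centroid decision values on the test set, summarized as AUC."""
        cen = {l: [sum(x[f] for x, ll in train if ll == l) / 30 for f in features]
               for l in (0, 1)}
        def dv(x):  # larger value -> closer to the class-1 centroid
            return sum((x[f] - cen[0][i]) ** 2 - (x[f] - cen[1][i]) ** 2
                       for i, f in enumerate(features))
        pos = [dv(x) for x, l in test if l == 1]
        neg = [dv(x) for x, l in test if l == 0]
        return sum(p > n for p in pos for n in neg) / (len(pos) * len(neg))

    scores = univariate_scores(train)
    supervised = sorted(range(N_FEATURES), key=lambda f: -scores[f])[:5]
    rand_feats = random.sample(range(N_FEATURES), 5)
    print(f"supervised AUC: {auc(supervised):.2f}, random AUC: {auc(rand_feats):.2f}")
    ```

    With only 5 informative features out of 50, random selection usually misses most of the signal, which mirrors the paper's finding that supervised selection wins and that the gap shrinks as more features are added.
    
    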

  10. Charged Triazole Cross-Linkers for Hyaluronan-Based Hybrid Hydrogels

    Directory of Open Access Journals (Sweden)

    Maike Martini

    2016-09-01

    Full Text Available Polyelectrolyte hydrogels play an important role in tissue engineering and can be produced from natural polymers, such as the glycosaminoglycan hyaluronan. In order to control charge density and mechanical properties of hyaluronan-based hydrogels, we developed cross-linkers with a neutral or positively charged triazole core with different lengths of spacer arms and two terminal maleimide groups. These cross-linkers react with thiolated hyaluronan in a fast, stoichiometric thio-Michael addition. Introducing a positive charge on the core of the cross-linker enabled us to compare hydrogels with the same interconnectivity, but a different charge density. Positively charged cross-linkers form stiffer hydrogels relatively independent of the size of the cross-linker, whereas neutral cross-linkers only form stable hydrogels at small spacer lengths. These novel cross-linkers provide a platform to tune the hydrogel network charge and thus the mechanical properties of the network. In addition, they might offer a wide range of applications especially in bioprinting for precise design of hydrogels.

  11. Cadastral Database Positional Accuracy Improvement

    Science.gov (United States)

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

    Positional Accuracy Improvement (PAI) is the refining process of the geometry features in a geospatial dataset to improve their actual positions. The actual position relates both to the absolute position in a specific coordinate system and to the relation to neighbourhood features. With the growth of spatial technology, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), a PAI campaign is inevitable, especially for legacy cadastral databases. Integration of a legacy dataset with a higher-accuracy dataset, such as GNSS observations, is a potential solution for improving the legacy dataset. However, merely integrating both datasets will lead to a distortion of the relative geometry. The improved dataset should be further treated to minimize inherent errors and to fit the new, more accurate dataset. The main focus of this study is to describe a method of angular-based Least Squares Adjustment (LSA) for the PAI process of a legacy dataset. The existing high-accuracy dataset known as the National Digital Cadastral Database (NDCDB) is then used as a benchmark to validate the results. It was found that the proposed technique is well suited for positional accuracy improvement of legacy spatial datasets.
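    The paper's method is an angular-based LSA; as a simpler illustration of fitting a legacy dataset to higher-accuracy control points by least squares, a 2D four-parameter (Helmert) similarity transform can be estimated in closed form and then applied to the legacy coordinates. The coordinates below are invented.

    ```python
    def fit_helmert(src, dst):
        """Least-squares 2D similarity (4-parameter Helmert) transform
        mapping src -> dst:  X = a*x - b*y + tx,  Y = b*x + a*y + ty."""
        n = len(src)
        mx = sum(p[0] for p in src) / n
        my = sum(p[1] for p in src) / n
        mX = sum(p[0] for p in dst) / n
        mY = sum(p[1] for p in dst) / n
        num_a = num_b = den = 0.0
        for (x, y), (X, Y) in zip(src, dst):
            xc, yc, Xc, Yc = x - mx, y - my, X - mX, Y - mY
            num_a += xc * Xc + yc * Yc   # scale*cos component
            num_b += xc * Yc - yc * Xc   # scale*sin component
            den += xc * xc + yc * yc
        a, b = num_a / den, num_b / den
        tx, ty = mX - a * mx + b * my, mY - b * mx - a * my
        return a, b, tx, ty

    def apply_helmert(params, p):
        a, b, tx, ty = params
        x, y = p
        return (a * x - b * y + tx, b * x + a * y + ty)

    # Legacy cadastral corners and their GNSS-observed (control) positions
    legacy  = [(0, 0), (100, 0), (100, 100), (0, 100)]
    control = [(10.0, 20.0), (110.0, 21.0), (109.0, 121.0), (9.0, 120.0)]
    params = fit_helmert(legacy, control)
    print([tuple(round(c, 1) for c in apply_helmert(params, p)) for p in legacy])
    # [(10.0, 20.0), (110.0, 21.0), (109.0, 121.0), (9.0, 120.0)]
    ```

    With redundant control points the residuals after the fit indicate how much local distortion remains to be treated, which is where the paper's angular observations come in.
    
    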

  12. Scientific data analysis on data-parallel platforms.

    Energy Technology Data Exchange (ETDEWEB)

    Ulmer, Craig D.; Bayer, Gregory W.; Choe, Yung Ryn; Roe, Diana C.

    2010-09-01

    As scientific computing users migrate to petaflop platforms that promise to generate multi-terabyte datasets, there is a growing need in the community to be able to embed sophisticated analysis algorithms in the computing platforms' storage systems. Data Warehouse Appliances (DWAs) are attractive for this work, due to their ability to store and process massive datasets efficiently. While DWAs have been utilized effectively in data-mining and informatics applications, they remain largely unproven in scientific workloads. In this paper we present our experiences in adapting two mesh analysis algorithms to function on five different DWA architectures: two Netezza database appliances, an XtremeData dbX database, a LexisNexis DAS, and multiple Hadoop MapReduce clusters. The main contribution of this work is insight into the differences between these DWAs from a user's perspective. In addition, we present performance measurements for ten DWA systems to help understand the impact of different architectural trade-offs in these systems.
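    The map/shuffle/reduce pattern used by the Hadoop clusters in the study can be sketched in miniature; the "mesh" records and the volume-by-element-type aggregation below are invented for illustration, not the paper's actual analysis algorithms.

    ```python
    from itertools import groupby
    from operator import itemgetter

    # Toy "mesh" records: (element_id, element_type, volume)
    mesh = [(0, "hex", 1.0), (1, "tet", 0.2), (2, "hex", 0.9), (3, "tet", 0.3)]

    def map_phase(record):
        """Map step: emit a (key, value) pair aggregating volume by element type."""
        _eid, etype, vol = record
        yield (etype, vol)

    def reduce_phase(key, values):
        """Reduce step: total volume per element type."""
        return (key, sum(values))

    # Shuffle: sort and group the mapped pairs by key, as the MapReduce runtime does
    mapped = sorted(kv for rec in mesh for kv in map_phase(rec))
    result = dict(reduce_phase(k, [v for _, v in grp])
                  for k, grp in groupby(mapped, key=itemgetter(0)))
    print(result)  # {'hex': 1.9, 'tet': 0.5}
    ```

    The embarrassingly parallel map step is what lets a DWA push such analyses down into the storage layer instead of shipping the whole dataset to a client.
    
    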

  13. The EU-ADR Web Platform: delivering advanced pharmacovigilance tools.

    Science.gov (United States)

    Oliveira, José Luis; Lopes, Pedro; Nunes, Tiago; Campos, David; Boyer, Scott; Ahlberg, Ernst; van Mulligen, Erik M; Kors, Jan A; Singh, Bharat; Furlong, Laura I; Sanz, Ferran; Bauer-Mehren, Anna; Carrascosa, Maria C; Mestres, Jordi; Avillach, Paul; Diallo, Gayo; Díaz Acedo, Carlos; van der Lei, Johan

    2013-05-01

    Pharmacovigilance methods have advanced greatly during the last decades, making post-market drug assessment an essential drug evaluation component. These methods mainly rely on the use of spontaneous reporting systems and health information databases to collect expertise from huge amounts of real-world reports. The EU-ADR Web Platform was built to further facilitate accessing, monitoring and exploring these data, enabling an in-depth analysis of adverse drug reactions risks. The EU-ADR Web Platform exploits the wealth of data collected within a large-scale European initiative, the EU-ADR project. Millions of electronic health records, provided by national health agencies, are mined for specific drug events, which are correlated with literature, protein and pathway data, resulting in a rich drug-event dataset. Next, advanced distributed computing methods are tailored to coordinate the execution of data-mining and statistical analysis tasks. This permits obtaining a ranked drug-event list, removing spurious entries and highlighting relationships with high risk potential. The EU-ADR Web Platform is an open workspace for the integrated analysis of pharmacovigilance datasets. Using this software, researchers can access a variety of tools provided by distinct partners in a single centralized environment. Besides performing standalone drug-event assessments, they can also control the pipeline for an improved batch analysis of custom datasets. Drug-event pairs can be substantiated and statistically analysed within the platform's innovative working environment. A pioneering workspace that helps in explaining the biological path of adverse drug reactions was developed within the EU-ADR project consortium. This tool, targeted at the pharmacovigilance community, is available online at https://bioinformatics.ua.pt/euadr/. Copyright © 2012 John Wiley & Sons, Ltd.

  14. Development of a SPARK Training Dataset

    International Nuclear Information System (INIS)

    Sayre, Amanda M.; Olson, Jarrod R.

    2015-01-01

    In its first five years, the National Nuclear Security Administration's (NNSA) Next Generation Safeguards Initiative (NGSI) sponsored more than 400 undergraduate, graduate, and post-doctoral students in internships and research positions (Wyse 2012). In the past seven years, the NGSI program has produced, and continues to produce, a large body of scientific, technical, and policy work in targeted core safeguards capabilities and human capital development activities. The NGSI program carries out activities not only across multiple disciplines, but also across all U.S. Department of Energy (DOE)/NNSA locations in the United States. However, products are not readily shared among disciplines and across locations, nor are they archived in a comprehensive library. Rather, knowledge of NGSI-produced literature is localized to the researchers, clients, and internal laboratory/facility publication systems such as the Electronic Records and Information Capture Architecture (ERICA) at the Pacific Northwest National Laboratory (PNNL). There is also no integrated way of analyzing existing NGSI literature to determine whether the larger NGSI program is achieving its core safeguards capabilities and activities. A complete library of NGSI literature could prove beneficial to a cohesive, sustainable, and more economical NGSI program. The Safeguards Platform for Automated Retrieval of Knowledge (SPARK) has been developed as a knowledge storage, retrieval, and analysis capability that captures safeguards knowledge so that it exists beyond the lifespan of NGSI. During the development process, it was necessary to build a SPARK training dataset (a corpus of documents) for initial entry into the system and for demonstration purposes. We manipulated these data to gain new information about the breadth of NGSI publications, and evaluated the science-policy interface at PNNL as a practical demonstration of SPARK's intended analysis capability. The analysis demonstration sought to answer

  15. RetroTransformDB: A Dataset of Generic Transforms for Retrosynthetic Analysis

    Directory of Open Access Journals (Sweden)

    Svetlana Avramova

    2018-04-01

    Full Text Available Presently, software tools for retrosynthetic analysis are widely used by organic, medicinal, and computational chemists. Rule-based systems extensively use collections of retro-reactions (transforms. While there are many public datasets with reactions in synthetic direction (usually non-generic reactions, there are no publicly-available databases with generic reactions in computer-readable format which can be used for the purposes of retrosynthetic analysis. Here we present RetroTransformDB—a dataset of transforms, compiled and coded in SMIRKS line notation by us. The collection is comprised of more than 100 records, with each one including the reaction name, SMIRKS linear notation, the functional group to be obtained, and the transform type classification. All SMIRKS transforms were tested syntactically, semantically, and from a chemical point of view in different software platforms. The overall dataset design and the retrosynthetic fitness were analyzed and curated by organic chemistry experts. The RetroTransformDB dataset may be used by open-source and commercial software packages, as well as chemoinformatics tools.

  16. Positive Noise Cross Correlation in a Cooper Pair Splitter.

    Science.gov (United States)

    Das, Anindya; Ronen, Yuval; Heiblum, Moty; Shtrikman, Hadas; Mahalu, Diana

    2012-02-01

    Entanglement is at the heart of the Einstein-Podolsky-Rosen (EPR) paradox, in which non-locality is a fundamental property. To date, spin entanglement of electrons had not been demonstrated. Here, we provide direct evidence of such entanglement by measuring non-local positive current correlation and positive cross correlation among current fluctuations, both of separated electrons born by a Cooper-pair beam splitter. The splitter is realized by injecting current from an Al superconductor contact into two single-channel, pure InAs nanowires, each intercepted by a Coulomb-blockaded quantum dot (QD). The QDs strongly impede the flow of Cooper pairs while allowing easy single-electron transport. The passage of an electron in one wire enables the simultaneous passage of another in the neighboring wire. The splitting efficiency of the Cooper pairs (relative to the actual Cooper-pair current) was found to be ~40%. The positive cross-correlations in the currents and their fluctuations (shot noise) are fully consistent with entangled electrons produced by the beam splitter.

  17. Cross-platform validation and analysis environment for particle physics

    Science.gov (United States)

    Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.

    2017-11-01

    A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.

  18. Integrative Data Analysis of Multi-Platform Cancer Data with a Multimodal Deep Learning Approach.

    Science.gov (United States)

    Liang, Muxuan; Li, Zhizhong; Chen, Ting; Zeng, Jianyang

    2015-01-01

    Identification of cancer subtypes plays an important role in revealing useful insights into disease pathogenesis and advancing personalized therapy. The recent development of high-throughput sequencing technologies has enabled the rapid collection of multi-platform genomic data (e.g., gene expression, miRNA expression, and DNA methylation) for the same set of tumor samples. Although numerous integrative clustering approaches have been developed to analyze cancer data, few of them are particularly designed to exploit both deep intrinsic statistical properties of each input modality and complex cross-modality correlations among multi-platform input data. In this paper, we propose a new machine learning model, called multimodal deep belief network (DBN), to cluster cancer patients from multi-platform observation data. In our integrative clustering framework, relationships among inherent features of each single modality are first encoded into multiple layers of hidden variables, and then a joint latent model is employed to fuse common features derived from multiple input modalities. A practical learning algorithm, called contrastive divergence (CD), is applied to infer the parameters of our multimodal DBN model in an unsupervised manner. Tests on two available cancer datasets show that our integrative data analysis approach can effectively extract a unified representation of latent features to capture both intra- and cross-modality correlations, and identify meaningful disease subtypes from multi-platform cancer data. In addition, our approach can identify key genes and miRNAs that may play distinct roles in the pathogenesis of different cancer subtypes. Among those key miRNAs, we found that the expression level of miR-29a is highly correlated with survival time in ovarian cancer patients. These results indicate that our multimodal DBN based data analysis approach may have practical applications in cancer pathogenesis studies and provide useful guidelines for
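    The contrastive-divergence step mentioned above trains each restricted Boltzmann machine layer of the DBN. A minimal CD-1 sketch for one binary RBM layer is shown below; the layer sizes, learning rate, and toy two-pattern data are assumptions for illustration, not the paper's multimodal architecture.

    ```python
    import math
    import random

    random.seed(1)

    class RBM:
        """Tiny binary restricted Boltzmann machine trained with CD-1,
        the contrastive-divergence update used for each DBN layer."""
        def __init__(self, n_vis, n_hid, lr=0.1):
            self.lr = lr
            self.W = [[random.gauss(0, 0.1) for _ in range(n_hid)]
                      for _ in range(n_vis)]
            self.b = [0.0] * n_vis  # visible biases
            self.c = [0.0] * n_hid  # hidden biases

        @staticmethod
        def _sig(x):
            return 1.0 / (1.0 + math.exp(-x))

        def p_h(self, v):  # P(h_j = 1 | v)
            return [self._sig(self.c[j] + sum(v[i] * self.W[i][j]
                    for i in range(len(v)))) for j in range(len(self.c))]

        def p_v(self, h):  # P(v_i = 1 | h)
            return [self._sig(self.b[i] + sum(h[j] * self.W[i][j]
                    for j in range(len(h)))) for i in range(len(self.b))]

        def cd1(self, v0):
            ph0 = self.p_h(v0)
            h0 = [1 if random.random() < p else 0 for p in ph0]  # sample hiddens
            v1 = self.p_v(h0)        # mean-field reconstruction
            ph1 = self.p_h(v1)
            for i in range(len(v0)):
                for j in range(len(ph0)):
                    # positive phase minus negative phase
                    self.W[i][j] += self.lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
            for i in range(len(v0)):
                self.b[i] += self.lr * (v0[i] - v1[i])
            for j in range(len(ph0)):
                self.c[j] += self.lr * (ph0[j] - ph1[j])

    data = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
    rbm = RBM(4, 2)
    for _ in range(500):
        for v in data:
            rbm.cd1(v)
    # After training, reconstructions should resemble the training patterns
    print([round(p, 2) for p in rbm.p_v(rbm.p_h([1, 1, 0, 0]))])
    ```

    In the multimodal setting, one such stack is trained per platform (expression, miRNA, methylation) and a joint layer fuses their top-level features.
    
    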

  19. Comparison of recent SnIa datasets

    International Nuclear Information System (INIS)

    Sanchez, J.C. Bueno; Perivolaropoulos, L.; Nesseris, S.

    2009-01-01

    We rank the six latest Type Ia supernova (SnIa) datasets (Constitution (C), Union (U), ESSENCE (Davis) (E), Gold06 (G), SNLS 1yr (S) and SDSS-II (D)) in the context of the Chevallier-Polarski-Linder (CPL) parametrization w(a) = w0 + w1(1 − a), according to their Figure of Merit (FoM), their consistency with the cosmological constant (ΛCDM), their consistency with standard rulers (Cosmic Microwave Background (CMB) and Baryon Acoustic Oscillations (BAO)) and their mutual consistency. We find a significant improvement of the FoM (defined as the inverse area of the 95.4% parameter contour) with the number of SnIa in these datasets ((C) highest FoM, (U), (G), (D), (E), (S) lowest FoM). Standard rulers (CMB+BAO) have a better FoM by about a factor of 3 compared to the highest-FoM SnIa dataset (C). We also find that the ranking sequence based on consistency with ΛCDM is identical to the corresponding ranking based on consistency with standard rulers ((S) most consistent, (D), (C), (E), (U), (G) least consistent). The ranking sequence of the datasets changes, however, when we consider consistency with an expansion history corresponding to evolving dark energy with (w0, w1) = (−1.4, 2), crossing the phantom divide line w = −1 (it is practically reversed to (G), (U), (E), (S), (D), (C)). The SALT2 and MLCS2k2 fitters are also compared, and some peculiar features of the SDSS-II dataset when standardized with the MLCS2k2 fitter are pointed out. Finally, we construct a statistic to estimate the internal consistency of a collection of SnIa datasets. We find that even though there is good consistency among most samples taken from the above datasets, this consistency decreases significantly when the Gold06 (G) dataset is included in the sample.
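    The CPL parametrization and the quoted evolving model can be checked with a few lines, using the standard relation a = 1/(1 + z) between scale factor and redshift:

    ```python
    def w_cpl(a, w0, w1):
        """Chevallier-Polarski-Linder equation of state: w(a) = w0 + w1*(1 - a)."""
        return w0 + w1 * (1.0 - a)

    def w_of_z(z, w0, w1):
        # Scale factor and redshift are related by a = 1/(1+z)
        return w_cpl(1.0 / (1.0 + z), w0, w1)

    # The evolving dark-energy model quoted in the abstract: (w0, w1) = (-1.4, 2)
    w0, w1 = -1.4, 2.0
    # Phantom-divide crossing w = -1 occurs where w0 + w1*(1 - a) = -1
    a_cross = 1.0 - (-1.0 - w0) / w1
    z_cross = 1.0 / a_cross - 1.0
    print(round(w_of_z(0.0, w0, w1), 2), round(z_cross, 2))  # -1.4 0.25
    ```

    So this model sits in the phantom regime (w < −1) today and crosses the divide at z = 0.25, which is why it stresses the datasets differently than ΛCDM.
    
    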

  20. Aorta cross-section calculation and 3D visualization from CT or MRT data using VRML

    Science.gov (United States)

    Grabner, Guenther; Modritsch, Robert; Stiegmaier, Wolfgang; Grasser, Simon; Klinger, Thomas

    2005-04-01

    Quantification of vessel diameters in atherosclerotic or congenital stenosis is very important for the diagnosis of vascular diseases. The aorta extraction and cross-section calculation is a software-based application that offers a three-dimensional, platform-independent, colorized visualization of the extracted aorta with augmented-reality information from MRT or CT datasets. The project is based on several specialized image processing algorithms, dynamic particle filtering and complex mathematical equations. From the resulting three-dimensional model, minimal cross-sections are calculated: at user-specified distances, the aorta is cut in different directions defined by vectors of varying length. The extracted aorta and the derived minimal cross-sections are then rendered with the marching cubes algorithm and presented together in a three-dimensional virtual reality with a very high degree of immersion. The aim of this study was to develop imaging software that gives cardiologists the possibility of (i) fast vascular diagnosis, (ii) precise diameter information, (iii) exact, local stenosis detection, (iv) permanent data storage and easy access to former datasets, and (v) reliable documentation of results in the form of tables and graphical printouts.

  1. Absorption-Modulated Crossed-Optical Fiber-Sensor Platform for Measurements in Liquid Environments and Flow Streams

    Directory of Open Access Journals (Sweden)

    Paul E. Henning

    2017-01-01

    Full Text Available A new evanescent-wave fiber sensor is described that utilizes absorption-modulated luminescence (AML) in combination with a crossed-fiber sensor platform. The luminescence signals of two crossed-fiber reference regions, placed on opposite sides of the stretch of fiber supporting the absorbance sensor, monitor the optical intensity in the fiber core. Evanescent absorption of the sensor reduces a portion of the excitation light and modulates the luminescence of the second reference region. The attenuation is determined from the luminescence intensity of both reference regions, similar to the Beer-Lambert law. The AML-crossed-fiber technique was demonstrated using the absorbance of the Zn(II)-PAN2 complex at 555 nm. A linear response was obtained over a zinc(II) concentration range of 0 to 20 μM (approximately 0 to 1.3 ppm). A nonlinear response was observed at higher zinc(II) concentrations and was attributed to depletion of higher-order modes in the fiber. This was corroborated by the measured induced repopulation of these modes.
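    The attenuation estimate from the two reference regions follows the Beer-Lambert form; a minimal sketch, in which the intensities, molar absorptivity and path length are all made-up illustrative values, not numbers from the paper.

    ```python
    import math

    def absorbance(i_before, i_after):
        """Attenuation between the two crossed-fiber reference regions,
        in Beer-Lambert form: A = log10(I_before / I_after)."""
        return math.log10(i_before / i_after)

    def analyte_concentration(a, epsilon, path_length):
        """Invert A = epsilon * l * c for the concentration c.
        epsilon (molar absorptivity) and path_length (effective
        evanescent path) are illustrative inputs."""
        return a / (epsilon * path_length)

    a = absorbance(1000.0, 800.0)   # hypothetical reference intensities
    c = analyte_concentration(a, epsilon=2.0e4, path_length=1.0e-2)
    ```

    In the linear regime reported in the abstract, c scales directly with A; the high-concentration nonlinearity (mode depletion) is outside this simple model.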

  2. CrossWork: Software-assisted identification of cross-linked peptides

    DEFF Research Database (Denmark)

    Rasmussen, Morten; Refsgaard, Jan; Peng, Li

    2011-01-01

    CrossWork searches batches of tandem mass-spectrometric data, and identifies cross-linked and non-cross-linked peptides using a standard PC. We tested CrossWork by searching mass-spectrometric datasets of cross-linked complement factor C3 against small (1 protein) and large (1000 proteins) search spaces, and show...

  3. Geoseq: a tool for dissecting deep-sequencing datasets

    Directory of Open Access Journals (Sweden)

    Homann Robert

    2010-10-01

    Full Text Available Abstract Background Datasets generated on deep-sequencing platforms have been deposited in various public repositories such as the Gene Expression Omnibus (GEO) and Sequence Read Archive (SRA) hosted by the NCBI, or the DNA Data Bank of Japan (DDBJ). Despite being rich data sources, they have not been used much due to the difficulty in locating and analyzing datasets of interest. Results Geoseq http://geoseq.mssm.edu provides a new method of analyzing short reads from deep sequencing experiments. Instead of mapping the reads to reference genomes or sequences, Geoseq maps a reference sequence against the sequencing data. It is web-based, and holds pre-computed data from public libraries. The analysis reduces the input sequence to tiles and measures the coverage of each tile in a sequence library through the use of suffix arrays. The user can upload custom target sequences or use gene/miRNA names for the search and get back results as plots and spreadsheet files. Geoseq organizes the public sequencing data using a controlled vocabulary, allowing identification of relevant libraries by organism, tissue and type of experiment. Conclusions Analysis of small sets of sequences against deep-sequencing datasets, as well as identification of public datasets of interest, is simplified by Geoseq. We applied Geoseq to (a) identify differential isoform expression in mRNA-seq datasets, (b) identify miRNAs (microRNAs) in libraries and identify mature and star sequences in miRNAs, and (c) identify potentially mis-annotated miRNAs. The ease of using Geoseq for these analyses suggests its utility and uniqueness as an analysis tool.
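    Geoseq's tiling-and-coverage idea can be sketched in a few lines. The real tool uses suffix arrays over the read library; this toy version substitutes a plain substring scan, and the tile size is an assumption for illustration.

    ```python
    def tile_coverage(reference, reads, tile_size=20):
        """Split the reference into fixed-size tiles and count, for each
        tile, how many reads contain it as an exact substring.

        Geoseq performs this lookup with suffix arrays over the read
        library; a simple substring scan stands in for them here."""
        tiles = [reference[i:i + tile_size]
                 for i in range(0, len(reference) - tile_size + 1, tile_size)]
        return [sum(tile in read for read in reads) for tile in tiles]
    ```

    For example, `tile_coverage("AAAATTTT", ["AAAA", "TTTT", "AAAA"], tile_size=4)` yields `[2, 1]`: the first tile occurs in two reads, the second in one.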

  4. Consistency of two global MODIS aerosol products over ocean on Terra and Aqua CERES SSF datasets

    Science.gov (United States)

    Ignatov, Alexander; Minnis, Patrick; Wielicki, Bruce; Loeb, Norman G.; Remer, Lorraine A.; Kaufman, Yoram J.; Miller, Walter F.; Sun-Mack, Sunny; Laszlo, Istvan; Geier, Erika B.

    2004-12-01

    MODIS aerosol retrievals over ocean from the Terra and Aqua platforms are available from the Clouds and the Earth's Radiant Energy System (CERES) Single Scanner Footprint (SSF) datasets generated at NASA Langley Research Center (LaRC). Two aerosol products are reported side by side. The primary M product is generated by subsetting and remapping the multi-spectral (0.44 - 2.1 μm) MOD04 aerosols onto CERES footprints. MOD04 processing uses cloud screening and aerosol algorithms developed by the MODIS science team. The secondary (AVHRR-like) A product is generated in only two MODIS bands: 1 and 6 on Terra, and 1 and 7 on Aqua. The A processing uses NASA/LaRC cloud screening and the NOAA/NESDIS single-channel aerosol algorithm. The M and A products have been documented elsewhere and preliminarily compared using two weeks of global Terra CERES SSF (Edition 1A) data from December 2000 and June 2001. In this study, the M and A aerosol optical depths (AOD) in MODIS band 1 (0.64 μm), τ1M and τ1A, are further checked for cross-platform consistency using 9 days of global Terra CERES SSF (Edition 2A) and Aqua CERES SSF (Edition 1A) data from 13-21 October 2002.
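    A cross-product consistency check on matched AOD values reduces to simple difference statistics. The sketch below uses mean bias and RMS difference, which are common choices for such comparisons, not necessarily the exact metrics of the study; the sample values are made up.

    ```python
    import numpy as np

    def aod_consistency(tau_m, tau_a):
        """Mean bias (M minus A) and root-mean-square difference of
        matched AOD pairs from the same CERES footprints."""
        d = np.asarray(tau_m) - np.asarray(tau_a)
        return d.mean(), np.sqrt((d ** 2).mean())

    # Hypothetical matched footprints
    bias, rms = aod_consistency([0.12, 0.20, 0.31], [0.10, 0.20, 0.30])
    ```

    A near-zero bias with small RMS would indicate the M and A products agree across platforms; a constant offset would show up in the bias term alone.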

  5. The distribution characteristics of pollutants released at different cross-sectional positions of a river

    International Nuclear Information System (INIS)

    Huang Heqing; Chen Guang; Zhang Qianfeng

    2010-01-01

    The distribution characteristics of heavier or lighter pollutants released at different cross-sectional positions of a wide river are investigated with a well-tested three-dimensional numerical model of gravity flows based on the Reynolds-averaged Navier-Stokes equations and the k-ε turbulence model. By focusing on the influences of flow and buoyancy on pollutants, it is found that, while being carried downstream by the river flow: i) a heavier pollutant released from the cross-sectional side position forms a transverse oscillation between the two banks with decreasing amplitude, i.e., a kind of helical flow pattern along the straight part of the channel bed; ii) a heavier pollutant released from the cross-sectional middle position forms a collapse oscillation in the middle of the straight channel part with reducing amplitude; iii) in the downstream sinuous channel, the heavier pollutant shows a higher concentration on the outer side of channel bends; iv) a lighter pollutant released from the cross-sectional side position slips partly to the other side of the river, resulting in higher concentrations on both sides of the channel top; v) a lighter pollutant released from the cross-sectional middle position splits into two parts symmetrically along the two sides of the channel top; vi) in the downstream sinuous channel, the lighter pollutant shows a higher concentration on the inner side of channel bends. These findings may assist in taking cost-effective, scientifically grounded countermeasures for accidental or planned pollutant releases into a river.

  6. Interpolation of diffusion weighted imaging datasets

    DEFF Research Database (Denmark)

    Dyrby, Tim B; Lundell, Henrik; Burke, Mark W

    2014-01-01

    Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve the high image resolution needed for finer anatomical details and the signal-to-noise ratio needed for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal … interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data. For validation we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical …

  7. CrossLink: a novel method for cross-condition classification of cancer subtypes.

    Science.gov (United States)

    Ma, Chifeng; Sastry, Konduru S; Flore, Mario; Gehani, Salah; Al-Bozom, Issam; Feng, Yusheng; Serpedin, Erchin; Chouchane, Lotfi; Chen, Yidong; Huang, Yufei

    2016-08-22

    We considered the prediction of cancer classes (e.g. subtypes) using patient gene expression profiles that contain both systematic and condition-specific biases when compared with the training reference dataset. The conventional normalization-based approaches cannot guarantee that the gene signatures in the reference and prediction datasets always have the same distribution for all different conditions, as the class-specific gene signatures change with the condition. Therefore, the trained classifier would work well under one condition but not under another. To address this problem with current normalization approaches, we propose a novel algorithm called CrossLink (CL). CL recognizes that there is no universal, condition-independent normalization mapping of signatures. Instead, it exploits the fact that the signature is unique to its associated class under any condition, and thus employs an unsupervised clustering algorithm to discover this unique signature. We assessed the performance of CL for cross-condition predictions of PAM50 subtypes of breast cancer by using a simulated dataset modeled after TCGA BRCA tumor samples with a cross-validation scheme, and datasets with known and unknown PAM50 classification. CL achieved a prediction accuracy >73%, the highest among the methods we evaluated. We also applied the algorithm to a set of breast cancer tumors derived from an Arabic population to assign a PAM50 classification to each tumor based on their gene expression profiles. A novel algorithm, CrossLink, for cross-condition prediction of cancer classes was proposed. In all test datasets, CL showed robust and consistent improvement in prediction performance over other state-of-the-art normalization and classification algorithms.
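    CrossLink's central idea, clustering the prediction-condition samples instead of forcing them through a shared normalization, can be illustrated with a toy k-means stand-in. This is not the published algorithm; the function name and all numbers are hypothetical.

    ```python
    import numpy as np

    def crosslink_like_assign(X_pred, ref_centroids, n_iter=20):
        """Assign classes without assuming a shared normalization:
        k-means over the prediction-condition samples (one cluster per
        class, initialised at the reference-class centroids), then label
        each cluster by its nearest reference centroid. A toy stand-in
        for CrossLink's clustering idea."""
        centers = np.array(ref_centroids, dtype=float)
        for _ in range(n_iter):
            # assign each sample to its nearest current cluster center
            labels = np.argmin(((X_pred[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            for j in range(len(centers)):
                if np.any(labels == j):
                    centers[j] = X_pred[labels == j].mean(axis=0)
        # map each discovered cluster to the nearest reference class
        ref = np.asarray(ref_centroids, dtype=float)
        cluster_to_class = np.argmin(((centers[:, None] - ref[None]) ** 2).sum(-1), axis=1)
        return cluster_to_class[labels]
    ```

    Even if the prediction samples carry a systematic shift relative to the reference centroids, the clusters keep their relative structure, so the class mapping survives the condition change.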

  8. ScaMo: Realisation of an OO-functional DSL for cross platform mobile applications development

    Science.gov (United States)

    Macos, Dragan; Solymosi, Andreas

    2013-10-01

    The software market is dynamically changing: the Internet is going mobile, and software applications are shifting from desktop hardware onto mobile devices. The largest markets are mobile applications for iOS, Android and Windows Phone, for which the typical programming languages are Objective-C, Java and C#, respectively. The realization of native applications implies the integration of the developed software into the environments of the mentioned mobile operating systems to enable access to different hardware components of the devices: GPS module, display, GSM module, etc. This paper deals with the definition and possible implementation of an environment for the automatic generation of applications for multiple mobile platforms. It is based on a DSL for mobile application development, comprising the programming language Scala and a DSL defined in Scala. As part of a multi-stage cross-compiling algorithm, this language is translated into the language of the targeted mobile platform. The advantage of our method lies in the expressiveness of the defined language and the transparent source-code translation between different languages, which brings, for example, advantages in debugging and development of the generated code.

  9. Earth observation from the manned low Earth orbit platforms

    Science.gov (United States)

    Guo, Huadong; Dou, Changyong; Zhang, Xiaodong; Han, Chunming; Yue, Xijuan

    2016-05-01

    The manned low Earth orbit platforms (MLEOPs), e.g., the U.S. and Russian human space vehicles, the International Space Station (ISS) and the Chinese Tiangong-1 experimental space laboratory, not only provide laboratories for scientific experiments in a wide range of disciplines, but also serve as exceptional platforms for remote observation of the Earth, astronomical objects and the space environment. As early orbiting platforms, MLEOPs provided humans with revolutionary access to regions of Earth never seen before. Earth observation from MLEOPs began in the early 1960s as a part of manned space flight programs, and will continue with the ISS and the upcoming Chinese Space Station. Through a series of flight missions, a large and varied collection of Earth observation datasets has been acquired, using handheld cameras operated by crewmembers as well as sophisticated automated sensors onboard these space vehicles. Many studies conducted with these datasets demonstrate the importance and uniqueness of studying Earth from the vantage point of MLEOPs. For example, the first near-global-scale digital elevation model (DEM) was developed from data obtained during the Shuttle Radar Topography Mission (SRTM). This review provides an overview of Earth observations from MLEOPs and presents applications of the datasets collected by these missions. As the ISS is the most typical representative of MLEOPs, an introduction to it, including orbital characteristics, payload accommodations, and current and proposed sensors, is emphasized. The advantages and challenges of Earth observation from MLEOPs, using the ISS as an example, are also addressed. Finally, concluding remarks are drawn.

  10. Behind the scenes of GS: cross-platform

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    The year was 1989: the dawn of administrative computing. In a laboratory filled to the rafters with paperwork, CERN's then Director-General Carlo Rubbia saw an opportunity for a complete administrative overhaul. He established the Advanced Information Systems (AIS) project to analyse CERN's administration, which in turn suggested the Electronic Document Handling (EDH) system. By 1992, EDH was up and running - the start of a new chapter in CERN history.   If you think you've never come across EDH, think again. The system is an integral part of CERN life, handling everything from the purchase of materials to leave requests. EDH sees you through your entire CERN life: from your first CERN job application to your final retirement checklist. One platform, sixty-five functions What makes EDH so special is its solitary nature: it is one platform that carries out dozens of varied functions. "Most companies organise their administration in 'vertical' ...

  11. Modular Track System For Positioning Mobile Robots

    Science.gov (United States)

    Miller, Jeff

    1995-01-01

    Conceptual system for positioning mobile robotic manipulators on a large main structure includes modular tracks and ancillary structures assembled easily along with the main structure. The system, called the "tracked robotic location system" (TROLS), was originally intended for application to platforms in outer space, but the TROLS concept might also prove useful on Earth; for example, to position robots in factories and warehouses. A T-cross-section rail keeps each mobile robot on track. Bar codes mark locations along the track. Each robot is equipped with bar-code-recognizing circuitry so it quickly finds its way to its assigned location.

  12. Cross-platform development with React Native

    OpenAIRE

    Beshir, Aymen

    2016-01-01

    In this project a mobile application for dog owners is built, which allows dog owners to create their own profile. The customer is a dog whisperer with the aspiration to create a platform for dog owners where they can share and access articles and experiences and structure their dog's life. This mobile application is built for both Android and iOS. Building native mobile applications has never been easier given the many resources and frameworks available for developers. But since the frameworks are o...

  13. AuTom: a novel automatic platform for electron tomography reconstruction

    KAUST Repository

    Han, Renmin

    2017-07-26

    We have developed a software package towards automatic electron tomography (ET): Automatic Tomography (AuTom). The presented package has the following characteristics: accurate alignment modules for marker-free datasets containing substantial biological structures; fully automatic alignment modules for datasets with fiducial markers; wide coverage of reconstruction methods, including a new iterative method based on compressed-sensing theory that suppresses the “missing wedge” effect; and multi-platform acceleration solutions that support faster iterative algebraic reconstruction. AuTom aims to achieve fully automatic alignment and reconstruction for electron tomography and has already been successful for a variety of datasets. AuTom also offers a user-friendly interface and auxiliary designs for file management and workflow management, in which fiducial-marker-based datasets and marker-free datasets are addressed with entirely different subprocesses. With all of these features, AuTom can serve as a convenient and effective tool for processing in electron tomography.

  14. Virtual network computing: cross-platform remote display and collaboration software.

    Science.gov (United States)

    Konerding, D E

    1999-04-01

    VNC (Virtual Network Computing) is a computer program written to address the problem of cross-platform remote desktop/application display. VNC uses a client/server model in which an image of the desktop of the server is transmitted to the client and displayed. The client collects mouse and keyboard input from the user and transmits it back to the server. The VNC client and server can run on Windows 95/98/NT, MacOS, and Unix (including Linux) operating systems. VNC is multi-user on Unix machines (any number of servers can be run, and they are unrelated to the primary display of the computer), while it is effectively single-user on Macintosh and Windows machines (only one server can be run, displaying the contents of the primary display of the server). The VNC server can be configured to allow more than one client to connect at one time, effectively allowing collaboration through the shared desktop. I describe the function of VNC, provide details of installation, describe how it achieves its goal, and evaluate the use of VNC for molecular modelling. VNC is an extremely useful tool for collaboration, instruction, software development, and debugging of graphical programs with remote users.

  15. Improved Stewart platform state estimation using inertial and actuator position measurements

    NARCIS (Netherlands)

    MiletoviC, I.; Pool, D.M.; Stroosma, O.; van Paassen, M.M.; Chu, Q.

    2017-01-01

    Accurate and reliable estimation of the kinematic state of a six degrees-of-freedom Stewart platform is a problem of interest in various engineering disciplines. Particularly so in the area of flight simulation, where the Stewart platform is in widespread use for the generation of motion similar

  16. Designing platform independent mobile apps and services

    CERN Document Server

    Heckman, Rocky

    2016-01-01

    This book explains how to create an innovative and future-proof architecture for mobile apps by introducing practical approaches to increase the value and flexibility of their service layers and reduce their delivery time. Designing Platform Independent Mobile Apps and Services begins by describing the mobile computing landscape and previous attempts at cross-platform development. Platform-independent mobile technologies and development strategies are described in chapters two and three. Communication protocols, details of a recommended five-layer architecture, service layers, and the data abstraction layer are also introduced in these chapters. Cross-platform languages and multi-client development tools for the User Interface (UI) layer, as well as message processing patterns and message routing of the Service Interface (SI) layer, are explained in chapters four and five. Ways to design the service layer for mobile computing, using Command Query Responsibility Segregation (CQRS) and the Data Abstraction La...
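    The CQRS pattern mentioned above separates state-changing commands from read-only queries, so each side can evolve and scale independently. A minimal sketch with hypothetical class names (Python here purely for illustration; the book targets mobile service layers):

    ```python
    class TodoCommands:
        """Command side of CQRS: operations that change state and return nothing."""
        def __init__(self, store):
            self._store = store

        def add_item(self, text):
            self._store.append(text)

    class TodoQueries:
        """Query side of CQRS: read-only views, free to use its own
        shape or even a separate read-optimized store."""
        def __init__(self, store):
            self._store = store

        def count(self):
            return len(self._store)

        def list_items(self):
            return tuple(self._store)   # immutable view, no mutation possible
    ```

    In a full CQRS service layer the two sides would typically communicate through events rather than share a list, but the command/query split is the essential idea.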

  17. Comparisons of Supergranule Properties from SDO/HMI with Other Datasets

    Science.gov (United States)

    Pesnell, William Dean; Williams, Peter E.

    2010-01-01

    While supergranules, a component of solar convection, have been well studied through the use of Dopplergrams, other datasets also exhibit these features. Quiet-Sun magnetograms show local magnetic field elements distributed around the boundaries of supergranule cells, notably clustering at the common apex points of adjacent cells, while more solid cellular features are seen near active regions. Ca II K images are notable for exhibiting the chromospheric network, a cellular distribution of local magnetic field lines across the solar disk that coincides with supergranulation boundaries. Measurements at 304 Å, further above the solar surface, also show a pattern similar to the chromospheric network, but the boundaries are more nebulous in nature. While previous observations of these different solar features were obtained with a variety of instruments, SDO provides a single platform from which the relevant data products are delivered at high cadence and high-definition image quality. The images may also be cross-referenced thanks to their coincident times of observation. We present images of these different solar features from HMI & AIA and use them to make composite images of supergranules at the different atmospheric layers in which they manifest. We also compare each data product to equivalent data from previous observations, for example HMI magnetograms with those from MDI.

  18. Border Crossing/Entry Data - Border Crossing/Entry Data Time Series tool

    Data.gov (United States)

    Department of Transportation — The dataset is known as “Border Crossing/Entry Data.” The Bureau of Transportation Statistics (BTS) Border Crossing/Entry Data provides summary statistics to the...

  19. {cross-disciplinary} Data CyberInfrastructure: A Different Approach to Developing Collaborative Earth and Environmental Science Research Platforms

    Science.gov (United States)

    Lenhardt, W. C.; Krishnamurthy, A.; Blanton, B.; Conway, M.; Coposky, J.; Castillo, C.; Idaszak, R.

    2017-12-01

    An integrated science cyberinfrastructure platform is fast becoming a norm in science, particularly where access to distributed resources, compute, data management tools, and collaboration tools is available to the end-user scientist without the need to spin up these services on their own. These platforms carry various labels, ranging from data commons to science-as-a-service, and they tend to share the common features outlined above. What tends to distinguish these platforms, however, is their affinity for particular domains: NanoHub for nanomaterials, iPlant for plant biology, Hydroshare for hydrology, and so on. The challenge remains how to enable these platforms to be more easily adopted by other domains. This paper provides an overview of RENCI's approach to creating a science platform that can be more easily adopted by new communities while also endeavoring to accelerate their research. At RENCI, we started with Hydroshare, but have now worked to generalize the methodology for application to other domains. This new effort is called xDCi, or {cross-disciplinary} Data CyberInfrastructure. We have adopted a broader approach to the challenge of domain adoption that includes two key elements in addition to the technology component. The first of these is how development is operationalized. RENCI implements a DevOps model of continuous development and deployment. This greatly increases the speed at which a new platform can come online and be refined to meet domain needs. DevOps also allows for migration over time, i.e. sustainability. The second element is a concierge model. In addition to the technical elements and the more responsive development process, RENCI also supports domain adoption of the platform by providing a concierge service: dedicated expertise in the areas of Information Technology, Sustainable Software, Data Science, and Sustainability. The success of the RENCI methodology is illustrated by the adoption of the

  20. Barium-cross-linked alginate-gelatine microcapsule as a potential platform for stem cell production and modular tissue formation.

    Science.gov (United States)

    Alizadeh Sardroud, Hamed; Nemati, Sorour; Baradar Khoshfetrat, Ali; Nabavinia, Mahbobeh; Beygi Khosrowshahi, Younes

    2017-08-01

    The influence of gelatine concentration and of the cross-linker ions Ca2+ and Ba2+ was evaluated on the characteristics of alginate hydrogels and on the proliferation behaviour of model adherent and suspendable stem cells (fibroblast and U937) embedded in alginate microcapsules. Increasing the gelatine concentration to 2.5% increased the extent of swelling to 15% and 25% for barium- and calcium-cross-linked hydrogels, respectively. Mechanical properties also decreased with increasing swelling of the hydrogels. Both increasing the gelatine concentration and using barium ions considerably increased the proliferation of the encapsulated model stem cells. A barium-cross-linked alginate-gelatine microcapsule tested as a bone building block showed a 13.5 ± 1.5-fold expansion of osteoblast cells after 21 days, with deposition of bone matrix. The haematopoietic stem cells cultured in the microcapsule for 7 days also showed up to a 2-fold increase without adding any growth factor. The study demonstrates that the barium-cross-linked alginate-gelatine microcapsule has potential for use as a simple and efficient 3D platform for stem cell production and modular tissue formation.

  1. Simple Approaches to Improve the Automatic Inventory of ZEBRA Crossing from Mls Data

    Science.gov (United States)

    Arias, P.; Riveiro, B.; Soilán, M.; Díaz-Vilariño, L.; Martínez-Sánchez, J.

    2015-08-01

    City management is increasingly supported by information technologies, leading to paradigms such as smart cities, where decision-makers, companies and citizens are continuously interconnected. 3D modelling becomes highly relevant when the city has to be managed using geospatial databases or Geographic Information Systems. On the other hand, laser scanning technology has experienced significant growth in recent years; in particular, terrestrial mobile laser scanning platforms are increasingly used for inventory purposes in both cities and road environments. Consequently, large datasets are available to produce the geometric basis for the city model; however, these data are not directly exploitable by management systems, constraining the implementation of the technology for such applications. This paper presents a new algorithm for the automatic detection of zebra crossings. The algorithm is divided into three main steps: road segmentation (based on a PCA analysis of the points contained in each cycle collected by a mobile laser system), rasterization (conversion of the point cloud to a raster image coloured as a function of intensity data), and zebra crossing detection (using the Hough transform and logical constraints for line classification). After evaluating different datasets collected in three cities located in Northwest Spain (comprising 25 strips with 30 visible zebra crossings), a completeness of 83% was achieved.
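    The first step, PCA-based road segmentation, can be sketched as a plane fit: the direction of least variance approximates the road-surface normal, and points close to that plane are road candidates. The tolerance below is an illustrative assumption, not a value from the paper.

    ```python
    import numpy as np

    def segment_road_pca(points, height_tol=0.3):
        """Rough road segmentation via PCA of an (N, 3) point cloud.

        The singular vector with the smallest singular value is the
        direction of least variance, i.e. an estimate of the road-plane
        normal; points within height_tol (metres, illustrative) of the
        plane are kept as road candidates."""
        centered = points - points.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normal = vt[-1]                 # least-variance direction
        height = centered @ normal      # signed distance to the PCA plane
        return np.abs(height) < height_tol
    ```

    The rasterization and Hough-transform stages would then operate only on the points this mask retains.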

  2. Facial Expression Recognition Based on TensorFlow Platform

    Directory of Open Access Journals (Sweden)

    Xia Xiao-Ling

    2017-01-01

    Full Text Available Facial expression recognition has a wide range of applications in human-machine interaction, pattern recognition, image understanding, machine vision and other fields, and in recent years it has gradually become a hot research topic. However, different people have different ways of expressing their emotions, and under the influence of brightness, background and other factors, there are some difficulties in facial expression recognition. In this paper, based on the Inception-v3 model of the TensorFlow platform, we use transfer learning techniques to retrain on a facial expression dataset (the Extended Cohn-Kanade dataset), which keeps recognition accuracy while greatly reducing training time.
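    The retraining recipe, freezing the convolutional base and training only a new classification head on cached "bottleneck" features, can be sketched without TensorFlow. The minimal two-class logistic head below illustrates the idea only; it is not the Inception-v3 pipeline, and all data are made up.

    ```python
    import numpy as np

    def retrain_head(bottlenecks, labels, lr=0.5, epochs=500):
        """Train only a new classification head on frozen bottleneck
        features (the essence of transfer-learning retraining).
        Two-class logistic regression by plain gradient descent."""
        w = np.zeros(bottlenecks.shape[1])
        b = 0.0
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-(bottlenecks @ w + b)))   # sigmoid
            grad = (p - labels) / len(labels)                  # cross-entropy gradient
            w -= lr * bottlenecks.T @ grad
            b -= lr * grad.sum()
        return w, b
    ```

    Because the expensive convolutional base is never updated, training reduces to this cheap head fit, which is why retraining is so much faster than training from scratch.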

  3. K, L, and M shell datasets for PIXE spectrum fitting and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, David D., E-mail: dcz@ansto.gov.au; Crawford, Jagoda; Siegele, Rainer

    2015-11-15

    Highlights: • Differences between several datasets commonly used by PIXE codes for spectrum fitting and concentration estimates have been highlighted. • A preferred-option dataset was selected, which includes ionisation cross sections, fluorescence yields, Coster–Kronig probabilities and X-ray line emission rates for the K, L and M subshells. • For PIXE codes, differences of several tens of percent can be seen for selected elements for L and M lines, depending on the datasets selected. - Abstract: Routine PIXE analysis programs, like GUPIX, GEOPIXE and PIXAN, generally perform at least two key functions: firstly, the fitting of K, L and M characteristic X-ray lines to a background, including unfolding of overlapping lines; and secondly, the use of a fitted primary Kα, Lα or Mα line area to determine the elemental concentration in a given matrix. To achieve these two results to better than 3–5%, the datasets for fluorescence yields, emission rates, Coster–Kronig transitions and ionisation cross sections should be determined to better than 3%. There are many different theoretical and experimental K, L and M datasets for these parameters, and how they are applied and used in analysis programs can vary the results obtained for both fitting and concentration determinations. Here we discuss several commonly used datasets for fluorescence yields, emission rates, Coster–Kronig transitions and ionisation cross sections for the K, L and M subshells, and suggest an optimum set to obtain consistent results for PIXE analyses across a range of elements with atomic numbers 5 ⩽ Z ⩽ 100.
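    Why dataset choice matters can be seen from the X-ray production factor K = σ·ω·b: the inferred concentration scales inversely with K, so a few-percent shift in any one tabulated parameter propagates directly into the concentration. A sketch with hypothetical numbers:

    ```python
    def k_factor(sigma_ion, omega, branching):
        """X-ray production factor K = sigma * omega * b
        (ionisation cross-section x fluorescence yield x line emission
        rate). The inferred concentration scales as c ~ line_area / K."""
        return sigma_ion * omega * branching

    def concentration_ratio(k_old, k_new):
        """Relative change in the inferred concentration when the dataset
        behind K changes, for the same fitted line area."""
        return k_old / k_new

    # Hypothetical values: a 3% higher fluorescence yield lowers the
    # inferred concentration by ~3% for the same fitted line area.
    k1 = k_factor(100.0, 0.30, 0.50)
    k2 = k_factor(100.0, 0.30 * 1.03, 0.50)
    ```

    This is why the abstract's tens-of-percent differences between L- and M-shell datasets translate directly into tens-of-percent differences in reported concentrations.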

  4. Genomics dataset of unidentified disclosed isolates

    Directory of Open Access Journals (Sweden)

    Bhagwan N. Rekadwad

    2016-09-01

    Full Text Available Analysis of DNA sequences is necessary for the higher hierarchical classification of organisms. It gives clues about the characteristics of organisms and their taxonomic position. This dataset was chosen to find complexities in the unidentified DNA in the disclosed patents. A total of 17 unidentified DNA sequences were thoroughly analyzed. Quick response (QR) codes were generated, and analysis of the AT/GC content of the DNA sequences was carried out. The QR codes are helpful for quick identification of isolates, and the AT/GC content is helpful for studying their stability at different temperatures. Additionally, a dataset of cleavage codes and enzyme codes from the restriction digestion study, which is helpful for performing studies using short DNA sequences, was reported. The dataset disclosed here is new revelatory data for the exploration of unique DNA sequences for evaluation, identification, comparison and analysis. Keywords: BioLABs, Blunt ends, Genomics, NEB cutter, Restriction digestion, Short DNA sequences, Sticky ends
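    The AT/GC-content analysis mentioned above is a one-line computation; a minimal sketch:

    ```python
    def gc_content(seq):
        """Fraction of G and C bases in a DNA sequence. GC-rich DNA has
        a higher melting temperature, which is why AT/GC content is used
        to study sequence stability at different temperatures."""
        seq = seq.upper()
        return (seq.count("G") + seq.count("C")) / len(seq)
    ```

    For example, `gc_content("ATGC")` returns `0.5`, and the AT fraction is simply `1 - gc_content(seq)` for sequences containing only the four standard bases.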

  5. Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration

    Directory of Open Access Journals (Sweden)

    Mingjun Deng

    2017-12-01

    Full Text Available The Chinese Gaofen-3 (GF-3) mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR) sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving it. Conventional geometric calibration accurately calibrates the geometric calibration parameters of the image (internal delay and azimuth shifts) using high-precision ground control data; it is highly dependent on the control data of the calibration field, and it remains costly and labor-intensive to monitor changes in GF-3's geometric calibration parameters. Based on the positioning-consistency constraint on conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate geometric calibration parameters without using corner reflectors or high-precision digital elevation models, thus improving the absolute positioning accuracy of the GF-3 image. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that this method can achieve a calibration accuracy as high as that achieved by the conventional field calibration method.

  6. A New Outlier Detection Method for Multidimensional Datasets

    KAUST Repository

    Abdel Messih, Mario A.

    2012-07-01

    This study develops a novel hybrid method for outlier detection (HMOD) that combines the ideas of distance-based and density-based methods. The proposed method has two main advantages over most other outlier detection methods. The first is that it works well on both dense and sparse datasets. The second is that, unlike most other outlier detection methods, which require careful parameter setting and prior knowledge of the data, HMOD is not very sensitive to small changes in parameter values within certain ranges; the only parameter to set is the number of nearest neighbors. In addition, we made a fully parallelized implementation of HMOD that makes it very efficient in applications. Moreover, we propose a new way of using outlier detection for redundancy reduction in datasets, in which users can specify a confidence level evaluating how accurately the less redundant dataset represents the original one. HMOD is evaluated on synthetic datasets (dense and mixed “dense and sparse”) and on a bioinformatics problem: redundancy reduction of a dataset of position weight matrices (PWMs) of transcription factor binding sites. In the process of assessing the performance of our redundancy reduction method, we also developed a simple tool that can be used to evaluate the confidence level with which the reduced dataset represents the original dataset. The evaluation of the results shows that our method can be used in a wide range of problems.
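The abstract does not spell out HMOD's update rules, but the general idea of a hybrid distance/density score with the neighbour count as the only parameter can be sketched as follows (a simplified, LOF-style stand-in, not the published algorithm):

```python
import math

def knn_outlier_scores(points, k=3):
    """Illustrative hybrid distance/density outlier score: each point's mean
    k-NN distance (distance idea) is compared with the mean k-NN distance of
    its neighbours (density idea), so the score is high both in globally
    sparse regions and where a point is locally far more isolated than its
    neighbours. The only parameter is k."""
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    # k nearest neighbours of each point, excluding the point itself
    nbrs = [sorted(range(n), key=lambda j: dist[i][j])[1:k + 1] for i in range(n)]
    avg_knn = [sum(dist[i][j] for j in nbrs[i]) / k for i in range(n)]
    scores = []
    for i in range(n):
        local = sum(avg_knn[j] for j in nbrs[i]) / k
        scores.append(avg_knn[i] / local if local > 0 else 1.0)
    return scores
```

On a tight cluster plus one distant point, the distant point receives by far the largest score; parallelising the per-point loop is what the paper's implementation adds.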

  7. Cross-cultural validation of the positivity-scale in five European countries

    OpenAIRE

    Heikamp, Tobias; Alessandri, Guido; Laguna, Mariola; Petrovic, Vesna; Caprara, Maria Giovanna; Trommsdorff, Gisela

    2014-01-01

    The aim of the present paper was to test the cross-cultural validity of the Positivity-Scale (P-Scale), a new questionnaire designed for the measurement of positivity (i.e., the general tendency to evaluate self, life, and future in a positive way). Participants (N = 3544) from Italy, Germany, Spain, Poland, and Serbia answered eight items of the P-Scale and responded to items from other well-validated measures. Confirmatory Factor Analysis supported the assumed one-factor structure of the P-Sca...

  8. Cross-Cultural Detection of Depression from Nonverbal Behaviour.

    Science.gov (United States)

    Alghowinem, Sharifa; Goecke, Roland; Cohn, Jeffrey F; Wagner, Michael; Parker, Gordon; Breakspear, Michael

    2015-05-01

    Millions of people worldwide suffer from depression. Do commonalities exist in their nonverbal behavior that would enable cross-culturally viable screening and assessment of severity? We investigated the generalisability of an approach to detect depression severity cross-culturally using video-recorded clinical interviews from Australia, the USA and Germany. The material varied in type of interview, subtypes of depression, inclusion of healthy control subjects, cultural background, and recording environment. The analysis focussed on temporal features of participants' eye gaze and head pose. Several approaches to training and testing within and between datasets were evaluated. The strongest results were found for training across all datasets and testing across datasets using leave-one-subject-out cross-validation. In contrast, generalisability was attenuated when training on only one or two of the three datasets and testing on subjects from the dataset(s) not used in training. These findings highlight the importance of using training data exhibiting the expected range of variability.
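Leave-one-subject-out cross-validation, as used above, holds out every recording of one subject at a time so the classifier is never tested on a person it saw during training. A minimal sketch of the splitting logic (the `(subject_id, features, label)` layout is our assumption):

```python
def leave_one_subject_out(samples):
    """Yield (held_out_subject, train, test) splits from a list of
    (subject_id, features, label) tuples: all samples of one subject form
    the test set, and every other subject's samples form the training set."""
    subjects = sorted({sid for sid, _, _ in samples})
    for held_out in subjects:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test
```

Splitting by subject rather than by clip is what prevents identity leakage between training and test sets.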

  9. Cross-Platform Learning: On the Nature of Children's Learning from Multiple Media Platforms

    Science.gov (United States)

    Fisch, Shalom M.

    2013-01-01

    It is increasingly common for an educational media project to span several media platforms (e.g., TV, Web, hands-on materials), assuming that the benefits of learning from multiple media extend beyond those gained from one medium alone. Yet research typically has investigated learning from a single medium in isolation. This paper reviews several…

  10. TDat: An Efficient Platform for Processing Petabyte-Scale Whole-Brain Volumetric Images.

    Science.gov (United States)

    Li, Yuxin; Gong, Hui; Yang, Xiaoquan; Yuan, Jing; Jiang, Tao; Li, Xiangning; Sun, Qingtao; Zhu, Dan; Wang, Zhenyu; Luo, Qingming; Li, Anan

    2017-01-01

    Three-dimensional imaging of whole mammalian brains at single-neuron resolution has generated terabyte (TB)- and even petabyte (PB)-sized datasets. Due to their size, processing these massive image datasets can be hindered by the computer hardware and software typically found in biological laboratories. To fill this gap, we have developed an efficient platform named TDat, which adopts a novel data reformatting strategy based on reading cuboid data and employing parallel computing. In data reformatting, TDat is more efficient than other available software. In data accessing, we adopted parallelization to fully exploit the data-transmission capability of the computer. We applied TDat to rigid registration of large-volume data and to neuron tracing in whole-brain data at single-neuron resolution, which has not been demonstrated in other studies. We also showed its compatibility with various computing platforms, image processing software and imaging systems.
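The core reformatting idea, reading the volume as cuboid blocks and processing them in parallel rather than slice by slice, can be sketched with stdlib tools (a toy in-memory stand-in; TDat itself streams from disk, and all names here are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def cuboid_blocks(shape, block):
    """Yield (z0, z1, y0, y1, x0, x1) bounds tiling a volume of `shape`
    into cuboids of at most `block` voxels per axis."""
    (Z, Y, X), (bz, by, bx) = shape, block
    for z0, y0, x0 in product(range(0, Z, bz), range(0, Y, by), range(0, X, bx)):
        yield z0, min(z0 + bz, Z), y0, min(y0 + by, Y), x0, min(x0 + bx, X)

def process_volume(vol, block, fn, workers=4):
    """Apply `fn` to every cuboid sub-volume in parallel; the thread pool
    stands in for TDat's parallel data access."""
    shape = (len(vol), len(vol[0]), len(vol[0][0]))

    def run(b):
        z0, z1, y0, y1, x0, x1 = b
        sub = [[row[x0:x1] for row in plane[y0:y1]] for plane in vol[z0:z1]]
        return b, fn(sub)

    with ThreadPoolExecutor(max_workers=workers) as ex:
        return dict(ex.map(run, cuboid_blocks(shape, block)))
```

Cuboid tiling keeps each unit of work small and cache-friendly regardless of total volume size, which is what makes TB/PB volumes tractable on laboratory hardware.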

  11. Border Crossing/Entry Data

    Data.gov (United States)

    Department of Transportation — The dataset is known as “Border Crossing/Entry Data.” The Bureau of Transportation Statistics (BTS) Border Crossing/Entry Data provides summary statistics to the...

  12. Comparison of Shallow Survey 2012 Multibeam Datasets

    Science.gov (United States)

    Ramirez, T. M.

    2012-12-01

    The purpose of the Shallow Survey common dataset is a comparison of the different technologies utilized for data acquisition in the shallow-survey marine environment. The common dataset consists of a series of surveys conducted over a common area of seabed using a variety of systems. It provides equipment manufacturers the opportunity to showcase their latest systems while giving hydrographic researchers and scientists a chance to test their latest algorithms on the dataset so that rigorous comparisons can be made. Five companies collected data for the common dataset in the Wellington Harbor area in New Zealand between May 2010 and May 2011: Kongsberg, Reson, R2Sonic, GeoAcoustics, and Applied Acoustics. The Wellington harbor and surrounding coastal area was selected since it has a number of well-defined features, including the HMNZS South Seas and HMNZS Wellington wrecks, an armored seawall constructed of Tetrapods and Akmons, aquifers, wharves and marinas. The seabed inside the harbor basin is largely fine-grained sediment, with gravel and reefs around the coast. The area outside the harbor on the southern coast is an active environment, with moving sand and exposed reefs. A marine reserve is also in this area. For consistency between datasets, the coastal research vessel R/V Ikatere and crew were used for all surveys conducted for the common dataset. Using Triton's Perspective processing software, the multibeam datasets collected for the Shallow Survey were processed for detailed analysis. Datasets from each sonar manufacturer were processed using the CUBE algorithm developed by the Center for Coastal and Ocean Mapping/Joint Hydrographic Center (CCOM/JHC). Each dataset was gridded at 0.5 and 1.0 meter resolutions for cross comparison and compliance with International Hydrographic Organization (IHO) requirements. Detailed comparisons were made of equipment specifications (transmit frequency, number of beams, beam width), data density, total uncertainty, and
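CUBE itself is a statistically weighted, uncertainty-propagating depth estimator; as a rough illustration of the gridding step alone, soundings can be binned into cells of the chosen resolution and averaged (a mean-binning stand-in, not CUBE):

```python
from collections import defaultdict

def grid_soundings(soundings, cell_size):
    """Bin (x, y, depth) soundings into square cells of `cell_size` metres
    and return {(ix, iy): mean_depth}. Running with cell_size 0.5 and 1.0
    mirrors the two grid resolutions used for cross comparison."""
    cells = defaultdict(list)
    for x, y, depth in soundings:
        cells[(int(x // cell_size), int(y // cell_size))].append(depth)
    return {cell: sum(depths) / len(depths) for cell, depths in cells.items()}
```

Comparing grids of the same area from different sonars, cell by cell, is the basis for the data-density and uncertainty comparisons described above.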

  13. Evaluation of false positivity and cross reactivity in the investigation ...

    African Journals Online (AJOL)

    This study evaluated the causes of false-positive Human Immunodeficiency Virus test results (F+HIV), the cross-reactivity of HIV antibodies with other non-HIV antibodies, and the efficiency of the serial and parallel testing algorithms. 100 blood samples randomly collected from clients attending the Heart to Heart HIV counseling and ...

  14. Significance of buccopalatal implant position, biotype, platform switching, and pre-implant bone augmentation on the level of the midbuccal mucosa

    NARCIS (Netherlands)

    Zuiderveld, Elise G; den Hartog, Laurens; Vissink, Arjan; Raghoebar, Gerry M; Meijer, Henny J A

    2014-01-01

    This study assessed whether buccopalatal implant position, biotype, platform switching, and pre-implant bone augmentation affect the level of the midbuccal mucosa (MBM). Ninety patients with a single-tooth implant in the esthetic zone were included. The level of the MBM was measured on photographs

  15. Cross-Dependency Inference in Multi-Layered Networks: A Collaborative Filtering Perspective.

    Science.gov (United States)

    Chen, Chen; Tong, Hanghang; Xie, Lei; Ying, Lei; He, Qing

    2017-08-01

    The increasingly connected world has catalyzed the fusion of networks from different domains, facilitating the emergence of a new network model: multi-layered networks. Examples of such network systems include critical infrastructure networks, biological systems, organization-level collaborations, cross-platform e-commerce, and so forth. One crucial structure that distinguishes multi-layered networks from other network models is their cross-layer dependency, which describes the associations between nodes from different layers. Needless to say, the cross-layer dependency in the network plays an essential role in many data mining applications such as system robustness analysis and complex network control. However, it remains a daunting task to know the exact dependency relationships due to noise, limited accessibility, and so forth. In this article, we tackle the cross-layer dependency inference problem by modeling it as a collective collaborative filtering problem. Based on this idea, we propose an effective algorithm, Fascinate, that can reveal unobserved dependencies with linear complexity. Moreover, we derive Fascinate-ZERO, an online variant of Fascinate that can respond to a newly added node in a timely fashion by checking its neighborhood dependencies. We perform extensive evaluations on real datasets to substantiate the superiority of our proposed approaches.
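Casting dependency inference as collaborative filtering means treating the partially observed cross-layer dependency matrix like a user-item rating matrix and filling in the blanks from a low-rank factorization. A generic SGD sketch of that idea (not the Fascinate algorithm, whose update rules and complexity guarantees differ):

```python
import random

def factorize(obs, n_rows, n_cols, rank=1, lr=0.05, epochs=2000, seed=0):
    """Fit dependency_matrix ~= U @ V.T on the observed entries only.
    `obs` maps (i, j) -> observed dependency weight; the returned predictor
    scores unobserved (i, j) pairs, i.e. candidate hidden dependencies."""
    rng = random.Random(seed)
    U = [[rng.uniform(0.0, 0.1) for _ in range(rank)] for _ in range(n_rows)]
    V = [[rng.uniform(0.0, 0.1) for _ in range(rank)] for _ in range(n_cols)]
    for _ in range(epochs):
        for (i, j), w in obs.items():
            err = w - sum(U[i][r] * V[j][r] for r in range(rank))
            for r in range(rank):  # gradient step on both factors
                ui, vj = U[i][r], V[j][r]
                U[i][r] += lr * err * vj
                V[j][r] += lr * err * ui
    return lambda i, j: sum(U[i][r] * V[j][r] for r in range(rank))
```

If the dependency structure is (approximately) low-rank, the factors learned from observed links generalise to the unobserved ones, which is exactly the collaborative-filtering bet.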

  16. An Open Source Software and Web-GIS Based Platform for Airborne SAR Remote Sensing Data Management, Distribution and Sharing

    Science.gov (United States)

    Changyong, Dou; Huadong, Guo; Chunming, Han; Ming, Liu

    2014-03-01

    With more and more Earth observation data available to the community, how to manage and share these valuable remote sensing datasets is becoming an urgent issue. Web-based Geographical Information System (GIS) technology provides a convenient way for users in different locations to share and make use of the same dataset. In order to efficiently use the airborne Synthetic Aperture Radar (SAR) remote sensing data acquired by the Airborne Remote Sensing Center of the Institute of Remote Sensing and Digital Earth (RADI), Chinese Academy of Sciences (CAS), a Web-GIS based platform for airborne SAR data management, distribution and sharing was designed and developed. The major features of the system include a map-based navigation search interface, full-resolution imagery shown overlaid on the map, and the exclusive use of Open Source Software (OSS). The functions of the platform include browsing the imagery on the map-based navigation interface, ordering and downloading data online, and image dataset and user management. At present, the system is under testing in RADI and will enter regular operation soon.

  17. Polymer-based platform for microfluidic systems

    Science.gov (United States)

    Benett, William [Livermore, CA; Krulevitch, Peter [Pleasanton, CA; Maghribi, Mariam [Livermore, CA; Hamilton, Julie [Tracy, CA; Rose, Klint [Boston, MA; Wang, Amy W [Oakland, CA

    2009-10-13

    A method of forming a polymer-based microfluidic system platform using network building blocks selected from a set of interconnectable network building blocks, such as wire, pins, blocks, and interconnects. The selected building blocks are interconnectably assembled and fixedly positioned in precise positions in a mold cavity of a mold frame to construct a three-dimensional model construction of a microfluidic flow path network preferably having meso-scale dimensions. A hardenable liquid, such as poly (dimethylsiloxane) is then introduced into the mold cavity and hardened to form a platform structure as well as to mold the microfluidic flow path network having channels, reservoirs and ports. Pre-fabricated elbows, T's and other joints are used to interconnect various building block elements together. After hardening the liquid the building blocks are removed from the platform structure to make available the channels, cavities and ports within the platform structure. Microdevices may be embedded within the cast polymer-based platform, or bonded to the platform structure subsequent to molding, to create an integrated microfluidic system. In this manner, the new microfluidic platform is versatile and capable of quickly generating prototype systems, and could easily be adapted to a manufacturing setting.

  18. Integrated remotely sensed datasets for disaster management

    Science.gov (United States)

    McCarthy, Timothy; Farrell, Ronan; Curtis, Andrew; Fotheringham, A. Stewart

    2008-10-01

    Video imagery can be acquired from aerial, terrestrial and marine based platforms and has been exploited for a range of remote sensing applications over the past two decades. Examples include coastal surveys using aerial video, route-corridor infrastructure surveys using vehicle-mounted video cameras, aerial surveys over forestry and agriculture, underwater habitat mapping and disaster management. Many of these video systems are based on interlaced television standards such as North America's NTSC and the European SECAM and PAL systems, which are then recorded using various video formats. This technology has recently been employed as a front-line remote sensing technology for post-disaster damage assessment. This paper traces the development of spatial video as a remote sensing tool from the early 1980s to the present day. The background to a new spatial-video research initiative based at the National University of Ireland, Maynooth (NUIM), is described. New improvements are proposed, including low-cost encoders, easy-to-use software decoders, timing issues and interoperability. These developments will enable specialists and non-specialists to collect, process and integrate these datasets with minimal support. This integrated approach will enable decision makers to access relevant remotely sensed datasets quickly and so carry out rapid damage assessment during and post-disaster.

  19. Pyrolytic graphite as an efficient second-order neutron filter at tuned positions of boundary crossing

    International Nuclear Information System (INIS)

    Adib, M.; Abdel Kawy, A.; Habib, N.; El Mesiry, M.

    2010-01-01

    An investigation of pyrolytic graphite (PG) crystal as an efficient second-order neutron filter at tuned boundary crossings has been carried out. The neutron transmission through a PG crystal at these tuned crossing points was calculated as a function of first- and second-order wavelengths in terms of PG mosaic spread and thickness. The filtering features of PG crystals at these tuned boundary crossings were deduced. It was shown that a large number of tuned positions at double and triple crossings of the (hkl) boundary curves are very promising as tuned filter positions; however, only fourteen of them were found to be the most promising ones. These tuned positions lie within the neutron wavelength range from 0.133 up to 0.4050 nm. A computer package, GRAPHITE, was used to provide the required calculations over the whole neutron wavelength range in terms of PG mosaic spread and its orientation with respect to the incident neutron beam direction. It was shown that a 0.5 cm thick PG crystal with an angular mosaic spread of 2° is sufficient to remove 2nd-order neutrons at the wavelengths corresponding to the positions of the intersecting (hkl) boundary curves.
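The order-overlap problem the filter addresses follows directly from the Bragg condition nλ = 2d·sinθ: a monochromator set to reflect λ at a given angle also reflects λ/2 in second order, and the PG filter is tuned to suppress that component. A small sketch (the d-spacing value in the usage line is the standard PG(002) spacing, given here for illustration):

```python
import math

def bragg_wavelengths(d_spacing_nm, theta_deg, max_order=2):
    """Wavelengths satisfying n * lam = 2 * d * sin(theta) at a fixed Bragg
    angle; order 2 is the second-order contamination a PG filter removes."""
    two_d_sin = 2.0 * d_spacing_nm * math.sin(math.radians(theta_deg))
    return {n: two_d_sin / n for n in range(1, max_order + 1)}

# e.g. PG(002), d ~ 0.3354 nm, at a 30 degree Bragg angle
w = bragg_wavelengths(0.3354, 30.0)
```

Whatever first-order wavelength is selected, the contaminant sits at exactly half of it, which is why the filter's tuned positions are stated in terms of paired first- and second-order wavelengths.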

  20. MicroRNA Array Normalization: An Evaluation Using a Randomized Dataset as the Benchmark

    Science.gov (United States)

    Qin, Li-Xuan; Zhou, Qin

    2014-01-01

    MicroRNA arrays possess a number of unique data features that challenge the assumptions key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data, but the false discovery rate remained as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We conclude the paper with some insights on possible causes of false discoveries, to shed light on how to improve normalization for microRNA arrays. PMID:24905456
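As a concrete instance of the kind of method being evaluated (the paper compares several; quantile normalization is one common choice, shown here as a stdlib sketch):

```python
def quantile_normalize(arrays):
    """Quantile-normalize equal-length expression arrays: each array's value
    at rank r is replaced by the mean, across arrays, of the rank-r values,
    forcing all arrays onto an identical empirical distribution."""
    n = len(arrays[0])
    ranked = [sorted(range(n), key=a.__getitem__) for a in arrays]
    mean_at_rank = [
        sum(a[idx[r]] for a, idx in zip(arrays, ranked)) / len(arrays)
        for r in range(n)
    ]
    out = []
    for idx in ranked:
        b = [0.0] * n
        for r, i in enumerate(idx):
            b[i] = mean_at_rank[r]
        out.append(b)
    return out
```

Forcing identical distributions is exactly the assumption the paper probes: when array effects are confounded with biology, it can manufacture the false positives measured against the randomized benchmark.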

  1. Using the eServices platform for detecting behavior patterns deviation in the elderly assisted living: a case study.

    Science.gov (United States)

    Marcelino, Isabel; Lopes, David; Reis, Michael; Silva, Fernando; Laza, Rosalía; Pereira, António

    2015-01-01

    World's aging population is rising and the elderly are increasingly isolated socially and geographically. As a consequence, in many situations, they need assistance that is not granted in time. In this paper, we present a solution that follows the CRISP-DM methodology to detect the elderly's behavior pattern deviations that may indicate possible risk situations. To obtain these patterns, many variables are aggregated to ensure the alert system reliability and minimize possible false-positive alert situations. These variables comprise information provided by a body area network (BAN), by environment sensors, and also by the elderly's interaction in a service provider platform, called eServices--Elderly Support Service Platform. eServices is a scalable platform aggregating a service ecosystem developed especially for elderly people. This pattern recognition will further activate the adequate response. With the system evolution, it will learn to predict potential danger situations for a specified user, acting preventively and ensuring the elderly's safety and well-being. As the eServices platform is still in development, synthetic data, based on a real data sample and empirical knowledge, is being used to populate the initial dataset. The presented work is a proof of concept of knowledge extraction using the eServices platform information. Despite not using real data, this work proves to be an asset, achieving a good performance in preventing alert situations.

  2. Using the eServices Platform for Detecting Behavior Patterns Deviation in the Elderly Assisted Living: A Case Study

    Directory of Open Access Journals (Sweden)

    Isabel Marcelino

    2015-01-01

    Full Text Available World's aging population is rising and the elderly are increasingly isolated socially and geographically. As a consequence, in many situations, they need assistance that is not granted in time. In this paper, we present a solution that follows the CRISP-DM methodology to detect the elderly's behavior pattern deviations that may indicate possible risk situations. To obtain these patterns, many variables are aggregated to ensure the alert system reliability and minimize possible false-positive alert situations. These variables comprise information provided by a body area network (BAN), by environment sensors, and also by the elderly's interaction in a service provider platform, called eServices—Elderly Support Service Platform. eServices is a scalable platform aggregating a service ecosystem developed especially for elderly people. This pattern recognition will further activate the adequate response. With the system evolution, it will learn to predict potential danger situations for a specified user, acting preventively and ensuring the elderly's safety and well-being. As the eServices platform is still in development, synthetic data, based on a real data sample and empirical knowledge, is being used to populate the initial dataset. The presented work is a proof of concept of knowledge extraction using the eServices platform information. Despite not using real data, this work proves to be an asset, achieving a good performance in preventing alert situations.

  3. Becoming at the Borders: the Role of Positioning in Boundary-Crossing between University and Workplaces

    Directory of Open Access Journals (Sweden)

    Amenduni F.,

    2017-08-01

    Full Text Available Boundary-crossing from university to workplaces is one of the most meaningful crises for the professional development of young people. Students need to develop cultural tools to solve the inner conflicts typical of this phase. In this study, the Dialogical Self Theory, inspired by Bakhtin, is used to define boundary-crossing in terms of the development of identity positions. The Trialogical Learning Approach is applied to design the collaborative activities implemented during the course, aimed at building professional objects designed together with some companies. During the course, students are required to build and maintain e-portfolios, which we consider the place where the crossing of I-positions can be observed. One case is selected as representative of the trajectories toward the so-called trialogical position, which has a professional nature and takes the objects built into account. The results show an expansion of the student's identity-position repertoire, including future, professional and collective positions. Furthermore, the object designed with the company is perceived as a boundary object that supports the shift from present to future positions and from university to professional communities.

  4. Interobserver variability of patient positioning using four different CT datasets for image registration in lung stereotactic body radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Oechsner, Markus [Technical University of Munich, Department of Radiation Oncology, Klinikum rechts der Isar, Muenchen (Germany); Technical University of Munich, Zentrum fuer Stereotaxie und personalisierte Hochpraezisionsstrahlentherapie (StereotakTUM), Munich (Germany); Chizzali, Barbara; Devecka, Michal; Muench, Stefan [Technical University of Munich, Department of Radiation Oncology, Klinikum rechts der Isar, Muenchen (Germany); Combs, Stephanie Elisabeth; Wilkens, Jan Jakob; Duma, Marciana Nona [Technical University of Munich, Department of Radiation Oncology, Klinikum rechts der Isar, Muenchen (Germany); Technical University of Munich, Zentrum fuer Stereotaxie und personalisierte Hochpraezisionsstrahlentherapie (StereotakTUM), Munich (Germany); Helmholtz Zentrum Muenchen, Institute of Innovative Radiotherapy (iRT), Munich (Germany)

    2017-10-15

    To assess the impact of different reference CT datasets on manual image registration with free-breathing three-dimensional (3D) cone-beam CTs (FB-CBCT) for patient positioning by several observers. For 48 patients with lung lesions, manual image registration with FB-CBCTs was performed by four observers. A slow planning CT (PCT), average intensity projection (AIP), maximum intensity projection (MIP), and midventilation CT (MidV) were used as reference images. Couch shift differences between the four reference CT datasets for each observer, as well as shift differences between the observers for the same reference CT dataset, were determined. Statistical analyses were performed, and correlations between the registration differences and the 3D tumor motion and the CBCT score were calculated. The mean 3D shift difference between different reference CT datasets was smallest for AIP vs. MIP (range 1.1-2.2 mm) and largest for MidV vs. PCT (2.8-3.5 mm), with differences >10 mm. The 3D shifts showed partially significant correlations with 3D tumor motion and CBCT score. The interobserver comparison for the same reference CTs resulted in the smallest mean Δ3D differences and mean Δ3D standard deviation for ΔAIP (1.5 ± 0.7 mm, 0.7 ± 0.4 mm). The maximal 3D shift difference between observers was 10.4 mm (ΔMidV). Both 3D tumor motion and mean CBCT score correlated with the shift differences (R_s = 0.336-0.740). The applied reference CT dataset impacts image registration and causes interobserver variability. The 3D tumor motion and CBCT quality affect shift differences. The smallest differences were found for AIP, which might therefore be the most appropriate CT dataset for image registration with FB-CBCT. (orig.)
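The 3D shift difference compared throughout is simply the Euclidean norm of the difference between two couch-shift vectors; a minimal sketch of the per-patient differences and their cohort summary (the data layout is our assumption):

```python
import math
from statistics import mean, stdev

def shift_difference_3d(shift_a, shift_b):
    """Magnitude (mm) of the difference between two couch shifts, each given
    as a (lateral, longitudinal, vertical) vector in mm."""
    return math.dist(shift_a, shift_b)

def summarize(shift_pairs):
    """Mean and standard deviation of 3D shift differences over a cohort,
    e.g. one pair per patient comparing two reference CTs or two observers."""
    diffs = [shift_difference_3d(a, b) for a, b in shift_pairs]
    return mean(diffs), stdev(diffs)
```

Computing this over all patients for each pair of reference CTs (or each pair of observers) yields summary figures of the kind reported above, such as 1.5 ± 0.7 mm.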

  5. Analytical expression for position sensitivity of linear response beam position monitor having inter-electrode cross talk

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Mukesh, E-mail: mukeshk@rrcat.gov.in [Beam Diagnostics Section, Indus Operations, Beam Dynamics & Diagnostics Division, Raja Ramanna Centre for Advanced Technology, Indore, 452013 MP (India); Homi Bhabha National Institute, Training School Complex, Anushakti Nagar, Mumbai 400 094 (India); Ojha, A.; Garg, A.D.; Puntambekar, T.A. [Beam Diagnostics Section, Indus Operations, Beam Dynamics & Diagnostics Division, Raja Ramanna Centre for Advanced Technology, Indore, 452013 MP (India); Senecha, V.K. [Homi Bhabha National Institute, Training School Complex, Anushakti Nagar, Mumbai 400 094 (India); Ion Source Lab., Proton Linac & Superconducting Cavities Division, Raja Ramanna Centre for Advanced Technology, Indore, 452013 MP (India)

    2017-02-01

    According to the quasi-electrostatic model of a linear response capacitive beam position monitor (BPM), the position sensitivity of the device depends only on the aperture of the device and is independent of processing frequency and load impedance. In practice, however, due to the inter-electrode capacitive coupling (cross talk), the actual position sensitivity of the device decreases with increasing frequency and load impedance. We have taken the inter-electrode capacitance into account to derive and propose a new analytical expression for the position sensitivity as a function of frequency and load impedance. The sensitivity of a linear response shoe-box type BPM has been obtained through simulation using CST Studio Suite to verify and confirm the validity of the new analytical equation. Good agreement between the simulation results and the new analytical expression suggests that this method can be exploited for the proper design of BPMs.

  6. Task Characterisation and Cross-Platform Programming Through System Identification

    Directory of Open Access Journals (Sweden)

    Theocharis Kyriacou

    2005-12-01

    Full Text Available Developing robust and reliable control code for autonomous mobile robots is difficult, because the interaction between a physical robot and the environment is highly complex, subject to noise and variation, and therefore partly unpredictable. This means that to date it is not possible to predict robot behaviour based on theoretical models. Instead, current methods to develop robot control code still require a substantial trial-and-error component in the software design process. Such iterative refinement could be reduced, we argue, if a more profound theoretical understanding of robot-environment interaction existed. In this paper, we therefore present a modelling method that generates a faithful model of a robot's interaction with its environment, based on data logged while observing a physical robot's behaviour. Because this modelling method — nonlinear modelling using polynomials — is commonly used in the engineering discipline of system identification, we refer to it here as “robot identification”. We show in this paper that using robot identification to obtain a computer model of robot-environment interaction offers several distinct advantages: very compact representations (one-line programs) of the robot control program are generated; the model can be analysed, for example through sensitivity analysis, leading to a better understanding of the essential parameters underlying the robot's behaviour; and the generated, compact robot code can be used for cross-platform robot programming, allowing fast transfer of robot code from one type of robot to another. We demonstrate these points through experiments with a Magellan Pro and a Nomad 200 mobile robot.
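The "one-line program" the method produces is a polynomial fitted to logged sensor/actuator data. As a toy illustration of that fitting step (ordinary least squares on a quadratic; the paper's models are higher-order and multivariate, and this sensor-to-turn-rate framing is our example, not theirs):

```python
def fit_poly2(xs, ys):
    """Least-squares fit of y ~ a + b*x + c*x**2, e.g. turn rate as a
    polynomial in a logged sensor reading. Normal equations solved
    explicitly via Cramer's rule to stay dependency-free."""
    s = [sum(x ** k for x in xs) for k in range(5)]            # power sums
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    coeffs = []
    for col in range(3):  # Cramer's rule: swap in the RHS column by column
        M = [row[:] for row in A]
        for r in range(3):
            M[r][col] = t[r]
        coeffs.append(det3(M) / d)
    return coeffs  # [a, b, c]
```

The fitted coefficients literally are the "compact representation": the resulting one-line polynomial can be re-evaluated on another robot's sensor stream, which is the basis of the cross-platform transfer the paper demonstrates.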

  7. Universal happiness? Cross-cultural measurement invariance of scales assessing positive mental health.

    Science.gov (United States)

    Bieda, Angela; Hirschfeld, Gerrit; Schönfeld, Pia; Brailovskaia, Julia; Zhang, Xiao Chi; Margraf, Jürgen

    2017-04-01

    Research into positive aspects of the psyche is growing as psychologists learn more about the protective role of positive processes in the development and course of mental disorders, and about their substantial role in promoting mental health. With increasing globalization, there is strong interest in studies examining positive constructs across cultures. To obtain valid cross-cultural comparisons, measurement invariance for the scales assessing positive constructs has to be established. The current study aims to assess the cross-cultural measurement invariance of questionnaires for 6 positive constructs: Social Support (Fydrich, Sommer, Tydecks, & Brähler, 2009), Happiness (Subjective Happiness Scale; Lyubomirsky & Lepper, 1999), Life Satisfaction (Diener, Emmons, Larsen, & Griffin, 1985), Positive Mental Health Scale (Lukat, Margraf, Lutz, van der Veld, & Becker, 2016), Optimism (revised Life Orientation Test [LOT-R]; Scheier, Carver, & Bridges, 1994) and Resilience (Schumacher, Leppert, Gunzelmann, Strauss, & Brähler, 2004). Participants included German (n = 4,453), Russian (n = 3,806), and Chinese (n = 12,524) university students. Confirmatory factor analyses and measurement invariance testing demonstrated at least partial strong measurement invariance for all scales except the LOT-R and Subjective Happiness Scale. The latent mean comparisons of the constructs indicated differences between national groups. Potential methodological and cultural explanations for the intergroup differences are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. Sinking offshore platform. Nedsenkbar fralandsplatform

    Energy Technology Data Exchange (ETDEWEB)

    Einstabland, T.B.; Olsen, O.

    1988-12-19

    The invention deals with a sinking offshore platform of the gravitational type, designed to be installed on the sea bed at great depths. The platform consists of at least three inclined pillars placed on a foundation unit. At the upper end, the pillars are connected to a tower structure by means of a rigid construction. The tower supports the platform deck. The rigid construction comprises a centrally positioned cylinder connected to the foundation. 11 figs.

  9. SeqKit: A Cross-Platform and Ultrafast Toolkit for FASTA/Q File Manipulation.

    Directory of Open Access Journals (Sweden)

    Wei Shen

Full Text Available FASTA and FASTQ are basic and ubiquitous formats for storing nucleotide and protein sequences. Common manipulations of FASTA/Q files include converting, searching, filtering, deduplication, splitting, shuffling, and sampling. Existing tools implement only some of these manipulations, often not particularly efficiently, and some are available only for certain operating systems. Furthermore, the complicated installation process of required packages and running environments can render these programs less user friendly. This paper describes a cross-platform, ultrafast, comprehensive toolkit for FASTA/Q processing. SeqKit provides executable binary files for all major operating systems, including Windows, Linux, and Mac OSX, and can be used directly without any dependencies or pre-configuration. SeqKit demonstrates competitive performance in execution time and memory usage compared to similar tools. The efficiency and usability of SeqKit enable researchers to rapidly accomplish common FASTA/Q file manipulations. SeqKit is open source and available on GitHub at https://github.com/shenwei356/seqkit.
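The manipulations listed above are individually simple to express. As an illustration only (not SeqKit's own implementation, which is written for speed), a minimal Python sketch of one of them, deduplication by sequence, might look like:

```python
def parse_fasta(lines):
    """Parse FASTA lines into a list of (header, sequence) pairs."""
    records, header, seq = [], None, []
    for line in lines:
        line = line.strip()
        if line.startswith(">"):
            if header is not None:
                records.append((header, "".join(seq)))
            header, seq = line[1:], []
        elif line:
            seq.append(line)
    if header is not None:
        records.append((header, "".join(seq)))
    return records

def dedup_by_seq(records):
    """Keep only the first record seen for each distinct sequence."""
    seen, out = set(), []
    for header, seq in records:
        if seq not in seen:
            seen.add(seq)
            out.append((header, seq))
    return out
```

A real toolkit would stream records rather than hold them all in memory; this sketch trades that for brevity.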

  10. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)

    Science.gov (United States)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun

    2015-09-01

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by
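The agreement metric quoted above (average dose difference over the region receiving more than 10% of the maximum dose) is straightforward to compute. A hedged Python sketch, assuming flattened dose arrays and taking the first engine's distribution as the reference for the maximum (the abstract does not specify the normalisation):

```python
def avg_dose_difference(dose_a, dose_b, threshold_frac=0.10):
    """Mean absolute dose difference, expressed in % of the maximum dose,
    over voxels receiving more than threshold_frac of the maximum dose."""
    dmax = max(dose_a)
    voxels = [(a, b) for a, b in zip(dose_a, dose_b)
              if a > threshold_frac * dmax]
    if not voxels:
        return 0.0
    return 100.0 * sum(abs(a - b) for a, b in voxels) / (len(voxels) * dmax)
```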

  11. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).

    Science.gov (United States)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

    2015-10-07

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by

  12. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)

    International Nuclear Information System (INIS)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

    2015-01-01

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon–electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783–97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48–0.53% for the electron beam cases and 0.15–0.17% for the photon beam cases. In terms of efficiency, goMC was ∼4–16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was

  13. ORBDA: An openEHR benchmark dataset for performance assessment of electronic health record servers.

    Directory of Open Access Journals (Sweden)

    Douglas Teodoro

Full Text Available The openEHR specifications are designed to support implementation of flexible and interoperable Electronic Health Record (EHR) systems. Despite the increasing number of solutions based on the openEHR specifications, it is difficult to find publicly available healthcare datasets in the openEHR format that can be used to test, compare and validate different data persistence mechanisms for openEHR. To foster research on openEHR servers, we present the openEHR Benchmark Dataset, ORBDA, a very large healthcare benchmark dataset encoded using the openEHR formalism. To construct ORBDA, we extracted and cleaned a de-identified dataset from the Brazilian National Healthcare System (SUS) containing hospitalisation and high complexity procedures information and formalised it using a set of openEHR archetypes and templates. Then, we implemented a tool to enrich the raw relational data and convert it into the openEHR model using the openEHR Java reference model library. The ORBDA dataset is available in composition, versioned composition and EHR openEHR representations in XML and JSON formats. In total, the dataset contains more than 150 million composition records. We describe the dataset and provide means to access it. Additionally, we demonstrate the usage of ORBDA for evaluating inserting throughput and query latency performances of some NoSQL database management systems. We believe that ORBDA is a valuable asset for assessing storage models for openEHR-based information systems during the software engineering process. It may also be a suitable component in future standardised benchmarking of available openEHR storage platforms.

  14. ORBDA: An openEHR benchmark dataset for performance assessment of electronic health record servers

    Science.gov (United States)

    Sundvall, Erik; João Junior, Mario; Ruch, Patrick; Miranda Freire, Sergio

    2018-01-01

    The openEHR specifications are designed to support implementation of flexible and interoperable Electronic Health Record (EHR) systems. Despite the increasing number of solutions based on the openEHR specifications, it is difficult to find publicly available healthcare datasets in the openEHR format that can be used to test, compare and validate different data persistence mechanisms for openEHR. To foster research on openEHR servers, we present the openEHR Benchmark Dataset, ORBDA, a very large healthcare benchmark dataset encoded using the openEHR formalism. To construct ORBDA, we extracted and cleaned a de-identified dataset from the Brazilian National Healthcare System (SUS) containing hospitalisation and high complexity procedures information and formalised it using a set of openEHR archetypes and templates. Then, we implemented a tool to enrich the raw relational data and convert it into the openEHR model using the openEHR Java reference model library. The ORBDA dataset is available in composition, versioned composition and EHR openEHR representations in XML and JSON formats. In total, the dataset contains more than 150 million composition records. We describe the dataset and provide means to access it. Additionally, we demonstrate the usage of ORBDA for evaluating inserting throughput and query latency performances of some NoSQL database management systems. We believe that ORBDA is a valuable asset for assessing storage models for openEHR-based information systems during the software engineering process. It may also be a suitable component in future standardised benchmarking of available openEHR storage platforms. PMID:29293556

  15. ORBDA: An openEHR benchmark dataset for performance assessment of electronic health record servers.

    Science.gov (United States)

    Teodoro, Douglas; Sundvall, Erik; João Junior, Mario; Ruch, Patrick; Miranda Freire, Sergio

    2018-01-01

    The openEHR specifications are designed to support implementation of flexible and interoperable Electronic Health Record (EHR) systems. Despite the increasing number of solutions based on the openEHR specifications, it is difficult to find publicly available healthcare datasets in the openEHR format that can be used to test, compare and validate different data persistence mechanisms for openEHR. To foster research on openEHR servers, we present the openEHR Benchmark Dataset, ORBDA, a very large healthcare benchmark dataset encoded using the openEHR formalism. To construct ORBDA, we extracted and cleaned a de-identified dataset from the Brazilian National Healthcare System (SUS) containing hospitalisation and high complexity procedures information and formalised it using a set of openEHR archetypes and templates. Then, we implemented a tool to enrich the raw relational data and convert it into the openEHR model using the openEHR Java reference model library. The ORBDA dataset is available in composition, versioned composition and EHR openEHR representations in XML and JSON formats. In total, the dataset contains more than 150 million composition records. We describe the dataset and provide means to access it. Additionally, we demonstrate the usage of ORBDA for evaluating inserting throughput and query latency performances of some NoSQL database management systems. We believe that ORBDA is a valuable asset for assessing storage models for openEHR-based information systems during the software engineering process. It may also be a suitable component in future standardised benchmarking of available openEHR storage platforms.
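The insert-throughput and query-latency evaluation described above can be sketched generically. The harness below is illustrative only: the `store_insert` and `store_query` callables are placeholders for whatever client API the database under test exposes, not an openEHR server interface.

```python
import time
from statistics import median

def benchmark(store_insert, store_query, records, queries):
    """Measure bulk-insert throughput (records/s) and median query latency (s).

    records: iterable of (record_id, document) pairs to insert.
    queries: iterable of keys to look up after loading.
    """
    t0 = time.perf_counter()
    for rec_id, doc in records:
        store_insert(rec_id, doc)
    insert_s = time.perf_counter() - t0

    latencies = []
    for q in queries:
        t0 = time.perf_counter()
        store_query(q)
        latencies.append(time.perf_counter() - t0)

    throughput = len(records) / insert_s if insert_s > 0 else float("inf")
    return throughput, median(latencies)
```

For example, an in-memory dict can stand in for a store when testing the harness itself.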

  16. Reducing false-positive incidental findings with ensemble genotyping and logistic regression based variant filtering methods.

    Science.gov (United States)

    Hwang, Kyu-Baek; Lee, In-Hee; Park, Jin-Ho; Hambuch, Tina; Choe, Yongjoon; Kim, MinHyeok; Lee, Kyungjoon; Song, Taemin; Neu, Matthew B; Gupta, Neha; Kohane, Isaac S; Green, Robert C; Kong, Sek Won

    2014-08-01

As whole genome sequencing (WGS) uncovers variants associated with rare and common diseases, an immediate challenge is to minimize false-positive findings due to sequencing and variant calling errors. False positives can be reduced by combining results from orthogonal sequencing methods, but this is costly. Here, we present variant filtering approaches using logistic regression (LR) and ensemble genotyping to minimize false positives without sacrificing sensitivity. We evaluated the methods using paired WGS datasets of an extended family prepared using two sequencing platforms and a validated set of variants in NA12878. Using LR- or ensemble-genotyping-based filtering, false-negative rates were significantly reduced by 1.1- to 17.8-fold at the same levels of false discovery rates (5.4% for heterozygous and 4.5% for homozygous single nucleotide variants (SNVs); 30.0% for heterozygous and 18.7% for homozygous insertions; 25.2% for heterozygous and 16.6% for homozygous deletions) compared to filtering based on genotype quality scores. Moreover, ensemble genotyping excluded > 98% (105,080 of 107,167) of false positives while retaining > 95% (897 of 937) of true positives in de novo mutation (DNM) discovery in NA12878, and performed better than a consensus method using two sequencing platforms. Our proposed methods were effective in prioritizing phenotype-associated variants, and ensemble genotyping would be essential to minimize false-positive DNM candidates. © 2014 WILEY PERIODICALS, INC.
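As a toy illustration of the ensemble-genotyping idea, requiring agreement among several callers before accepting a genotype, consider the sketch below. The paper's actual method is more elaborate (it combines callers with a trained model); this only shows the consensus-filtering principle.

```python
from collections import Counter

def ensemble_genotype(calls, min_agree=2):
    """Return the consensus genotype at a site if at least `min_agree`
    callers agree; otherwise None (site filtered as a likely false positive).

    calls: genotype strings from independent callers, None for no-calls.
    """
    counts = Counter(g for g in calls if g is not None)
    if not counts:
        return None
    genotype, n = counts.most_common(1)[0]
    return genotype if n >= min_agree else None
```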

  17. Spatially varying cross-correlation coefficients in the presence of nugget effects

    KAUST Repository

    Kleiber, William; Genton, Marc G.

    2012-01-01

    We derive sufficient conditions for the cross-correlation coefficient of a multivariate spatial process to vary with location when the spatial model is augmented with nugget effects. The derived class is valid for any choice of covariance functions, and yields substantial flexibility between multiple processes. The key is to identify the cross-correlation coefficient matrix with a contraction matrix, which can be either diagonal, implying a parsimonious formulation, or a fully general contraction matrix, yielding greater flexibility but added model complexity. We illustrate the approach with a bivariate minimum and maximum temperature dataset in Colorado, allowing the two variables to be positively correlated at low elevations and nearly independent at high elevations, while still yielding a positive definite covariance matrix. © 2012 Biometrika Trust.

  18. Spatially varying cross-correlation coefficients in the presence of nugget effects

    KAUST Repository

    Kleiber, William

    2012-11-29

    We derive sufficient conditions for the cross-correlation coefficient of a multivariate spatial process to vary with location when the spatial model is augmented with nugget effects. The derived class is valid for any choice of covariance functions, and yields substantial flexibility between multiple processes. The key is to identify the cross-correlation coefficient matrix with a contraction matrix, which can be either diagonal, implying a parsimonious formulation, or a fully general contraction matrix, yielding greater flexibility but added model complexity. We illustrate the approach with a bivariate minimum and maximum temperature dataset in Colorado, allowing the two variables to be positively correlated at low elevations and nearly independent at high elevations, while still yielding a positive definite covariance matrix. © 2012 Biometrika Trust.

  19. PARAMO: a PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records.

    Science.gov (United States)

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng

    2014-04-01

Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000-patient dataset in 3 hours in parallel, compared to 9 days if run sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines
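The three platform steps above (dependency graph construction, topological scheduling, parallel execution) can be sketched with Python's standard library. PARAMO itself uses Map-Reduce on a cluster; this single-machine thread-pool version is only illustrative, and it simplifies the scheduling by waiting for each ready wave to finish before releasing the next.

```python
import graphlib
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(dependencies, tasks, max_workers=4):
    """Execute tasks respecting a dependency graph, running ready tasks
    in parallel.

    dependencies: maps task name -> set of prerequisite task names.
    tasks: maps task name -> zero-argument callable.
    """
    ts = graphlib.TopologicalSorter(dependencies)
    ts.prepare()
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        while ts.is_active():
            # All tasks whose prerequisites are satisfied run concurrently.
            ready = list(ts.get_ready())
            futures = {name: pool.submit(tasks[name]) for name in ready}
            for name, fut in futures.items():
                results[name] = fut.result()
                ts.done(name)
    return results
```

For example, a cohort-construction task would run before feature construction, which in turn gates cross-validation, while unrelated model variants proceed in parallel.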

  20. ISC-EHB: Reconstruction of a robust earthquake dataset

    Science.gov (United States)

    Weston, J.; Engdahl, E. R.; Harris, J.; Di Giacomo, D.; Storchak, D. A.

    2018-04-01

    The EHB Bulletin of hypocentres and associated travel-time residuals was originally developed with procedures described by Engdahl, Van der Hilst and Buland (1998) and currently ends in 2008. It is a widely used seismological dataset, which is now expanded and reconstructed, partly by exploiting updated procedures at the International Seismological Centre (ISC), to produce the ISC-EHB. The reconstruction begins in the modern period (2000-2013) to which new and more rigorous procedures for event selection, data preparation, processing, and relocation are applied. The selection criteria minimise the location bias produced by unmodelled 3D Earth structure, resulting in events that are relatively well located in any given region. Depths of the selected events are significantly improved by a more comprehensive review of near station and secondary phase travel-time residuals based on ISC data, especially for the depth phases pP, pwP and sP, as well as by a rigorous review of the event depths in subduction zone cross sections. The resulting cross sections and associated maps are shown to provide details of seismicity in subduction zones in much greater detail than previously achievable. The new ISC-EHB dataset will be especially useful for global seismicity studies and high-frequency regional and global tomographic inversions.

  1. Extraction of drainage networks from large terrain datasets using high throughput computing

    Science.gov (United States)

    Gong, Jianya; Xie, Jibo

    2009-02-01

Advanced digital photogrammetry and remote sensing technology produce large terrain datasets (LTD). How to process and use these LTD has become a major challenge for GIS users. Extracting drainage networks, which are fundamental inputs for hydrological applications, from LTD is a typical application of digital terrain analysis (DTA) in geographical information applications. Existing serial drainage algorithms cannot deal with large data volumes in a timely fashion, and few GIS platforms can process LTD beyond the gigabyte size. High throughput computing (HTC), a distributed parallel computing mode, is proposed to improve the efficiency of drainage network extraction from LTD. Drainage network extraction using HTC involves two key issues: (1) how to decompose the large DEM datasets into independent computing units and (2) how to merge the separate outputs into a final result. A new decomposition method is presented in which the large datasets are partitioned into independent computing units using natural watershed boundaries instead of regular 1-dimensional (strip-wise) or 2-dimensional (block-wise) decomposition. Because the distribution of drainage networks is strongly related to watershed boundaries, the new decomposition method is more effective and natural. The method to extract natural watershed boundaries was improved by using multi-scale DEMs instead of single-scale DEMs. An HTC environment is employed to test the proposed methods with real datasets.
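The per-cell kernel at the heart of drainage extraction is the D8 steepest-descent flow direction, which each computing unit would evaluate over its own DEM tile. A minimal serial Python sketch of that step (the watershed decomposition and HTC machinery discussed above are not shown):

```python
def d8_flow_directions(dem):
    """For each interior cell of a DEM grid (list of rows of elevations),
    return the (dr, dc) offset toward the steepest-descent D8 neighbour,
    or None for pits and edge cells."""
    nrows, ncols = len(dem), len(dem[0])
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    flow = [[None] * ncols for _ in range(nrows)]
    for r in range(1, nrows - 1):
        for c in range(1, ncols - 1):
            best, best_drop = None, 0.0
            for dr, dc in offsets:
                # Diagonal neighbours are farther away, so the drop is
                # normalised by the centre-to-centre distance.
                dist = (dr * dr + dc * dc) ** 0.5
                drop = (dem[r][c] - dem[r + dr][c + dc]) / dist
                if drop > best_drop:
                    best, best_drop = (dr, dc), drop
            flow[r][c] = best
    return flow
```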

  2. A novel tripod-driven platform for in-situ positioning of samples and electrical probes in a TEM

    International Nuclear Information System (INIS)

    Medford, B D; Rogers, B L; Laird, D; Berdunov, N; Beton, P H; Lockwood, A J; Gnanavel, T; Guan, W; Wang, J; Moebus, G; Inkson, B J

    2010-01-01

We present a design for a novel coarse positioning system based on a tilting platform positioned by linear slip/stick motors. The design differs from common arrangements of stacked x, y, and z motors, and also from ball-mounted slip/stick motors, by allowing easy access along the central axis of the microscope holder. The drive motors are highly compact and co-linear and may be easily incorporated in an off-axis configuration, leaving a central cylindrical region with an approximate diameter of 3 mm available to accommodate screened electrical wiring and optical fibres. We show that the tripod can be used to manoeuvre two metallic tips towards each other in situ in a TEM in nanometre-scale lateral steps.

  3. Mock Quasar-Lyman-α forest data-sets for the SDSS-III Baryon Oscillation Spectroscopic Survey

    Energy Technology Data Exchange (ETDEWEB)

    Bautista, Julian E.; Busca, Nicolas G. [APC, Université Paris Diderot-Paris 7, CNRS/IN2P3, CEA, Observatoire de Paris, 10, rue A. Domon and L. Duquet, Paris (France); Bailey, Stephen; Font-Ribera, Andreu; Schlegel, David [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA (United States); Pieri, Matthew M. [Aix Marseille Université, CNRS, LAM (Laboratoire d' Astrophysique de Marseille) UMR 7326, 38 rue Frédéric Joliot-Curie, 13388, Marseille (France); Miralda-Escudé, Jordi; Gontcho, Satya Gontcho A. [Institut de Ciències del Cosmos, Universitat de Barcelona/IEEC, 1 Martí i Franquès, Barcelona 08028, Catalonia (Spain); Palanque-Delabrouille, Nathalie; Rich, James; Goff, Jean Marc Le [CEA, Centre de Saclay, Irfu/SPP, D128, F-91191 Gif-sur-Yvette (France); Dawson, Kyle [Department of Physics and Astronomy, University of Utah, 115 S 100 E, RM 201, Salt Lake City, UT 84112 (United States); Feng, Yu; Ho, Shirley [McWilliams Center for Cosmology, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, 15213 (United States); Ge, Jian [Department of Astronomy, University of Florida, 211 Bryant Space Science Center, Gainesville, FL 32611-2055 (United States); Noterdaeme, Pasquier; Pâris, Isabelle [Université Paris 6 et CNRS, Institut d' Astrophysique de Paris, 98bis blvd. Arago, 75014 Paris (France); Rossi, Graziano, E-mail: bautista@astro.utah.edu [Department of Astronomy and Space Science, Sejong University, 209 Neungdong-ro, Gwangjin-gu, Seoul, 143-747 (Korea, Republic of)

    2015-05-01

    We describe mock data-sets generated to simulate the high-redshift quasar sample in Data Release 11 (DR11) of the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS). The mock spectra contain Lyα forest correlations useful for studying the 3D correlation function including Baryon Acoustic Oscillations (BAO). They also include astrophysical effects such as quasar continuum diversity and high-density absorbers, instrumental effects such as noise and spectral resolution, as well as imperfections introduced by the SDSS pipeline treatment of the raw data. The Lyα forest BAO analysis of the BOSS collaboration, described in Delubac et al. 2014, has used these mock data-sets to develop and cross-check analysis procedures prior to performing the BAO analysis on real data, and for continued systematic cross checks. Tests presented here show that the simulations reproduce sufficiently well important characteristics of real spectra. These mock data-sets will be made available together with the data at the time of the Data Release 11.

  4. Detrended cross-correlation coefficient: Application to predict apoptosis protein subcellular localization.

    Science.gov (United States)

    Liang, Yunyun; Liu, Sanyang; Zhang, Shengli

    2016-12-01

Apoptosis, or programmed cell death, plays a central role in the development and homeostasis of an organism. Obtaining information on the subcellular location of apoptosis proteins is very helpful for understanding the apoptosis mechanism. Predicting the subcellular localization of an apoptosis protein is still a challenging task, and existing methods are based mainly on protein primary sequences. In this paper, we introduce a new position-specific scoring matrix (PSSM)-based method using the detrended cross-correlation (DCCA) coefficient of non-overlapping windows. A 190-dimensional (190D) feature vector is then constructed on two widely used datasets, CL317 and ZD98, and a support vector machine is adopted as the classifier. To evaluate the proposed method, objective and rigorous jackknife cross-validation tests are performed on the two datasets. The results show that our approach offers a novel and reliable PSSM-based tool for prediction of apoptosis protein subcellular localization. Copyright © 2016 Elsevier Inc. All rights reserved.
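The DCCA coefficient over non-overlapping windows can be sketched in plain Python: detrend each window of both series with an ordinary least-squares line fit, then form the ratio of the accumulated detrended cross-covariance to the product of the detrended standard deviations. This is a generic sketch of the coefficient itself; the paper applies it to PSSM columns to build its 190D feature vector.

```python
def _detrend(window):
    """Residuals of an ordinary least-squares line fit to the window."""
    n = len(window)
    xs = range(n)
    mx, my = (n - 1) / 2.0, sum(window) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, window))
    slope = sxy / sxx
    return [y - (my + slope * (x - mx)) for x, y in zip(xs, window)]

def dcca_coefficient(a, b, window):
    """Detrended cross-correlation coefficient of two equal-length series
    computed over non-overlapping windows of the given size (>= 3)."""
    f_ab = f_aa = f_bb = 0.0
    for i in range(0, len(a) - window + 1, window):
        ra = _detrend(a[i:i + window])
        rb = _detrend(b[i:i + window])
        f_ab += sum(x * y for x, y in zip(ra, rb))
        f_aa += sum(x * x for x in ra)
        f_bb += sum(y * y for y in rb)
    return f_ab / (f_aa * f_bb) ** 0.5
```

Like an ordinary correlation, the coefficient lies in [-1, 1]: a series against itself gives 1, against its negation gives -1.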

  5. Regenerator cross arm seal assembly

    Science.gov (United States)

    Jackman, Anthony V.

    1988-01-01

A seal assembly for disposition between a cross arm on a gas turbine engine block and a regenerator disc, the seal assembly including: a platform coextensive with the cross arm; a seal and wear layer sealingly and slidingly engaging the regenerator disc; a porous and compliant support layer between the platform and the seal and wear layer, porous enough to permit flow of cooling air therethrough and compliant enough to accommodate relative thermal growth and distortion; a dike between the seal and wear layer and the platform for preventing cross flow through the support layer between engine exhaust and pressurized air passages; and air diversion passages for directing unregenerated pressurized air through the support layer to cool the seal and wear layer and then back into the flow of regenerated pressurized air.

  6. A Model Collaborative Platform for Geoscience Education

    Science.gov (United States)

    Fox, S.; Manduca, C. A.; Iverson, E. A.

    2012-12-01

    Over the last decade SERC at Carleton College has developed a collaborative platform for geoscience education that has served dozens of projects, thousands of community authors and millions of visitors. The platform combines a custom technical infrastructure: the SERC Content Management system (CMS), and a set of strategies for building web-resources that can be disseminated through a project site, reused by other projects (with attribution) or accessed via an integrated geoscience education resource drawing from all projects using the platform. The core tools of the CMS support geoscience education projects in building project-specific websites. Each project uses the CMS to engage their specific community in collecting, authoring and disseminating the materials of interest to them. At the same time the use of a shared central infrastructure allows cross-fertilization among these project websites. Projects are encouraged to use common templates and common controlled vocabularies for organizing and displaying their resources. This standardization is then leveraged through cross-project search indexing which allow projects to easily incorporate materials from other projects within their own collection in ways that are relevant and automated. A number of tools are also in place to help visitors move among project websites based on their personal interests. Related links help visitors discover content related topically to their current location that is in a 'separate' project. A 'best bets' feature in search helps guide visitors to pages that are good starting places to explore resources on a given topic across the entire range of hosted projects. In many cases these are 'site guide' pages created specifically to promote a cross-project view of the available resources. In addition to supporting the cross-project exploration of specific themes the CMS also allows visitors to view the combined suite of resources authored by any particular community member. 

  7. Connecting societal issues, users and data : Scenario-based design of open data platforms

    NARCIS (Netherlands)

    Ruijer, Erna; Grimmelikhuijsen, Stephan; Hogan, Michael; Enzerink, Sem; Ojo, Adegboyega; Meijer, Albert

    Governments around the world make their data available through platforms but, disappointingly, the use of this data is lagging behind. This problem has been recognized in the literature and to facilitate use of open datasets, scholars have focused on identifying general user requirements for open

  8. Digital Astronaut Photography: A Discovery Dataset for Archaeology

    Science.gov (United States)

    Stefanov, William L.

    2010-01-01

    Astronaut photography acquired from the International Space Station (ISS) using commercial off-the-shelf cameras offers a freely-accessible source for high to very high resolution (4-20 m/pixel) visible-wavelength digital data of Earth. Since ISS Expedition 1 in 2000, over 373,000 images of the Earth-Moon system (including land surface, ocean, atmospheric, and lunar images) have been added to the Gateway to Astronaut Photography of Earth online database (http://eol.jsc.nasa.gov ). Handheld astronaut photographs vary in look angle, time of acquisition, solar illumination, and spatial resolution. These attributes of digital astronaut photography result from a unique combination of ISS orbital dynamics, mission operations, camera systems, and the individual skills of the astronaut. The variable nature of astronaut photography makes the dataset uniquely useful for archaeological applications in comparison with more traditional nadir-viewing multispectral datasets acquired from unmanned orbital platforms. For example, surface features such as trenches, walls, ruins, urban patterns, and vegetation clearing and regrowth patterns may be accentuated by low sun angles and oblique viewing conditions (Fig. 1). High spatial resolution digital astronaut photographs can also be used with sophisticated land cover classification and spatial analysis approaches like Object Based Image Analysis, increasing the potential for use in archaeological characterization of landscapes and specific sites.

  9. A cross-platform GUI to control instruments compliant with SCPI through VISA

    Science.gov (United States)

    Roach, Eric; Liu, Jing

    2015-10-01

In nuclear physics experiments, it is necessary and important to control instruments from a PC, which automates many tasks that would otherwise require human operation. Not only does this make long-term measurements possible, but it also makes repetitive operations less error-prone. We created a graphical user interface (GUI) to control instruments connected to a PC through RS232, USB, LAN, etc. The GUI is developed using Qt Creator, a cross-platform integrated development environment, which makes it portable to various operating systems, including those commonly used in mobile devices. The NI-VISA library is used in the back end so that the GUI can control instruments connected through various I/O interfaces without any modification. Commonly used SCPI commands can be sent to different instruments using buttons, sliders, knobs, and other widgets provided by Qt Creator. As an example, we demonstrate how to set and fetch parameters and how to retrieve and display data from an Agilent Digital Storage Oscilloscope X3034A with the GUI. Our GUI can easily be used for other instruments compliant with SCPI and VISA with little or no modification.
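The set/fetch round trip the abstract describes can be illustrated with a mock SCPI instrument. This is a hypothetical stand-in: a real session would go through the NI-VISA back end (e.g. via a VISA binding) rather than an in-memory dictionary, and the `:TIMEBASE:SCALE` subsystem name is only an example.

```python
class MockInstrument:
    """Stand-in for a SCPI-compliant instrument (hypothetical).

    A real setup would open a VISA session through the NI-VISA back
    end the paper uses, instead of this in-memory dictionary.
    """

    def __init__(self):
        self._settings = {":TIMEBASE:SCALE": "1E-3"}

    def write(self, command):
        # SCPI "set" commands have the form '<subsystem> <value>'
        subsystem, _, value = command.partition(" ")
        self._settings[subsystem] = value

    def query(self, command):
        # SCPI "query" commands end with '?'
        assert command.endswith("?")
        return self._settings[command[:-1]]


scope = MockInstrument()
scope.write(":TIMEBASE:SCALE 2E-3")      # set the horizontal scale
scale = scope.query(":TIMEBASE:SCALE?")  # fetch it back
print(scale)
```

The same write/query pattern is what the GUI's widgets ultimately drive, regardless of whether the underlying transport is RS232, USB or LAN.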

  10. Rational Design, Synthesis and Evaluation of γ-CD-Containing Cross-Linked Polyvinyl Alcohol Hydrogel as a Prednisone Delivery Platform

    Directory of Open Access Journals (Sweden)

    Adolfo Marican

    2018-03-01

Full Text Available This study describes the in-silico rational design, synthesis and evaluation of cross-linked polyvinyl alcohol hydrogels containing γ-cyclodextrin (γ-CDHSAs) as platforms for the sustained release of prednisone (PDN). Through in-silico studies using semi-empirical quantum mechanical calculations, the effectiveness of 20 dicarboxylic acids to generate a specific cross-linked hydrogel capable of supporting different amounts of γ-cyclodextrin (γ-CD) was evaluated. According to the interaction energies calculated in the in-silico studies, the hydrogel made from PVA cross-linked with succinic acid (SA) was shown to be the best candidate for containing γ-CD. Later, molecular dynamics simulation studies were performed in order to evaluate the intermolecular interactions between PDN and three cross-linked hydrogel formulations with different proportions of γ-CD (2.44%, 4.76% and 9.1%). These three cross-linked hydrogels were synthesized and characterized. The loading and the subsequent release of PDN from the hydrogels were investigated. The in-silico and experimental results showed that the interaction between PDN and γ-CDHSA occurred mainly with the γ-CDs linked to the hydrogels. Thus, the unique structures and properties of γ-CDHSA demonstrated an interesting multiphasic profile that could be utilized as a promising drug carrier for controlled, sustained and localized release of PDN.

  11. EPA Nanorelease Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — EPA Nanorelease Dataset. This dataset is associated with the following publication: Wohlleben, W., C. Kingston, J. Carter, E. Sahle-Demessie, S. Vazquez-Campos, B....

  12. Spatially continuous dataset at local scale of Taita Hills in Kenya and Mount Kilimanjaro in Tanzania

    Directory of Open Access Journals (Sweden)

    Sizah Mwalusepo

    2016-09-01

Full Text Available Climate change is a global concern, requiring spatially continuous datasets and modeling of meteorological variables at the local scale. This dataset article provides interpolated temperature, rainfall and relative humidity datasets at the local scale along the Taita Hills and Mount Kilimanjaro altitudinal gradients in Kenya and Tanzania, respectively. Temperature and relative humidity were recorded hourly using automatic onset THHOBO data loggers, and rainfall was recorded daily using GENERALR wireless rain gauges. Thin plate spline (TPS) interpolation was used, with the degree of data smoothing determined by minimizing the generalized cross validation. The dataset provides information on the status of the current climatic conditions along the two mountainous altitudinal gradients in Kenya and Tanzania, and will thus enhance future research. Keywords: Spatial climate data, Climate change, Modeling, Local scale
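The thin plate spline interpolation step can be sketched in pure Python. This is an exact interpolating spline; the smoothing term whose weight the authors chose by generalized cross validation is omitted, and the station coordinates and temperatures below are hypothetical.

```python
import math

def tps_kernel(r2):
    # U = r^2 log(r^2); the constant factor relative to the usual
    # r^2 log r is absorbed into the fitted weights
    return 0.0 if r2 == 0.0 else r2 * math.log(r2)

def _solve(A, b):
    # naive Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def tps_fit(points, values):
    """Exact thin plate spline through (x, y) -> value samples."""
    n = len(points)
    A = [[0.0] * (n + 3) for _ in range(n + 3)]
    b = [0.0] * (n + 3)
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            A[i][j] = tps_kernel((xi - xj) ** 2 + (yi - yj) ** 2)
        A[i][n], A[i][n + 1], A[i][n + 2] = 1.0, xi, yi
        A[n][i], A[n + 1][i], A[n + 2][i] = 1.0, xi, yi
        b[i] = values[i]
    sol = _solve(A, b)
    w, (a0, a1, a2) = sol[:n], sol[n:]

    def f(x, y):
        # affine part plus weighted kernel contributions
        s = a0 + a1 * x + a2 * y
        for (xi, yi), wi in zip(points, w):
            s += wi * tps_kernel((x - xi) ** 2 + (y - yi) ** 2)
        return s
    return f

# Hypothetical station temperatures along a transect
stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5)]
temps = [20.0, 21.0, 19.5, 22.0, 20.5]
f = tps_fit(stations, temps)
```

The fitted surface reproduces the station values exactly; adding a smoothing parameter on the diagonal of the kernel block would turn this into the smoothed TPS the article describes.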

  13. PID Controllers Design Applied to Positioning of Ball on the Stewart Platform

    Directory of Open Access Journals (Sweden)

    Koszewnik Andrzej

    2014-12-01

Full Text Available The paper presents the design and practical implementation of PID controllers for a Stewart platform. The platform uses a resistive touch panel as a sensor and servo motors as actuators. The complete control system stabilizing the ball on the platform is realized with the Arduino microcontroller and the Matlab/Simulink software. Two processes required to acquire measurement signals from the touch panel in two perpendicular directions, X and Y, are discussed. The first process is the calibration of the touch panel, and the second is the filtering of the measurement signals with a low-pass Butterworth filter. The obtained signals are used to design the ball stabilization algorithm by decoupling the global system into two local subsystems. The algorithm is implemented in a soft real-time system. The parameters of both PID controllers (PIDx and PIDy) are tuned by the trial-and-error method and implemented in the microcontroller. Finally, the complete control system is tested at the laboratory stand.
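The decoupled per-axis control loop can be sketched as follows. The gains, time step and ball model are illustrative, not the tuned values from the paper; the controller output is treated directly as the commanded ball acceleration along one axis.

```python
class PID:
    """Discrete PID controller for one axis of the platform."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)


# Toy simulation: one axis, ball treated as a double integrator whose
# acceleration is the controller output (illustrative gains).
dt = 0.01
pid = PID(kp=10.0, ki=0.5, kd=5.0, dt=dt)
x, v = 0.10, 0.0            # ball starts 10 cm off-centre, at rest
for _ in range(20000):      # 200 s of simulated time
    u = pid.update(0.0, x)  # setpoint: platform centre
    v += u * dt             # semi-implicit Euler integration
    x += v * dt
print(round(x, 4))
```

In the paper's setup one such loop runs per axis (PIDx and PIDy) on the decoupled subsystems; here the derivative term acts on the error signal, so a real implementation would typically filter the touch-panel measurement first, as the authors do with the Butterworth filter.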

  14. Axillary Lymph Node Evaluation Utilizing Convolutional Neural Networks Using MRI Dataset.

    Science.gov (United States)

    Ha, Richard; Chang, Peter; Karcich, Jenika; Mutasa, Simukayi; Fardanesh, Reza; Wynn, Ralph T; Liu, Michael Z; Jambawalikar, Sachin

    2018-04-25

The aim of this study is to evaluate the role of convolutional neural networks (CNNs) in predicting axillary lymph node metastasis using a breast MRI dataset. An institutional review board (IRB)-approved retrospective review of our database from 1/2013 to 6/2016 identified 275 axillary lymph nodes for this study. 133 biopsy-proven metastatic axillary lymph nodes and 142 negative control lymph nodes were identified, based on benign biopsies (100) and on healthy MRI screening patients (42) with at least 3 years of negative follow-up. For each breast MRI, the axillary lymph node was identified on the first T1 post-contrast dynamic images and underwent 3D segmentation using the open source software platform 3D Slicer. A 32 × 32 patch was then extracted from the center slice of the segmented tumor data. A CNN was designed for lymph node prediction based on each of these cropped images. The CNN consisted of seven convolutional layers and max-pooling layers, with 50% dropout applied in the linear layer. In addition, data augmentation and L2 regularization were performed to limit overfitting. Training was implemented using the Adam optimizer, an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments. Code for this study was written in Python using the TensorFlow module (1.0.0). Experiments and CNN training were done on a Linux workstation with an NVIDIA GTX 1070 Pascal GPU. A two-class axillary lymph node metastasis prediction model was evaluated. For each lymph node, a final softmax score threshold of 0.5 was used for classification. Based on this, the CNN achieved a mean five-fold cross-validation accuracy of 84.3%. It is feasible for current deep CNN architectures to be trained to predict the likelihood of axillary lymph node metastasis. A larger dataset will likely improve our prediction model and can potentially make this a non-invasive alternative to core needle biopsy and even sentinel lymph node biopsy.
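The patch-classification plumbing the study describes can be illustrated with minimal pure-Python versions of the core operations. This is a didactic sketch, not the authors' seven-layer TensorFlow model; the array sizes, kernels and logits below are all illustrative.

```python
import math

def conv2d(image, kernel):
    """'Valid' 2D convolution (cross-correlation, as in most CNN layers)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(iw - kw + 1)]
            for i in range(ih - kh + 1)]

def max_pool2(image):
    """2x2 max pooling with stride 2."""
    return [[max(image[i][j], image[i][j + 1],
                 image[i + 1][j], image[i + 1][j + 1])
             for j in range(0, len(image[0]) - 1, 2)]
            for i in range(0, len(image) - 1, 2)]

def softmax(logits):
    exps = [math.exp(z - max(logits)) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, threshold=0.5):
    """Decision rule from the abstract: metastatic when the
    positive-class softmax score exceeds the 0.5 threshold."""
    return softmax(logits)[1] > threshold
```

A real model stacks many such convolution/pooling stages, learns the kernels by gradient descent (Adam, in the study), and applies dropout and L2 regularization during training.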

  15. Outlier Removal in Model-Based Missing Value Imputation for Medical Datasets

    Directory of Open Access Journals (Sweden)

    Min-Wei Huang

    2018-01-01

Full Text Available Many real-world medical datasets contain some proportion of missing (attribute) values. In general, missing value imputation can be performed to solve this problem, which is to provide estimations for the missing values by a reasoning process based on the (complete) observed data. However, if the observed data contain some noisy information or outliers, the estimations of the missing values may not be reliable, or may even be quite different from the real values. The aim of this paper is to examine whether a combination of instance selection from the observed data and missing value imputation offers better performance than performing missing value imputation alone. In particular, three instance selection algorithms, DROP3, GA, and IB3, and three imputation algorithms, KNNI, MLP, and SVM, are used in order to find the best combination. The experimental results show that performing instance selection can have a positive impact on missing value imputation over the numerical data type of medical datasets, and that specific combinations of instance selection and imputation methods can improve the imputation results over the mixed data type of medical datasets. However, instance selection does not have a definitely positive impact on the imputation result for categorical medical datasets.
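The KNNI imputation step can be sketched as follows. This is a minimal version assuming purely numeric attributes; the instance-selection algorithms the paper combines with it (DROP3, GA, IB3) would simply prune the pool of complete rows before imputation.

```python
import math

def knn_impute(rows, k=2):
    """Fill None entries using the mean of the k nearest complete rows.

    Distance is Euclidean over the attributes observed in the
    incomplete row (a minimal KNNI; no instance selection applied).
    """
    complete = [r for r in rows if None not in r]
    result = []
    for row in rows:
        if None not in row:
            result.append(list(row))
            continue
        observed = [i for i, v in enumerate(row) if v is not None]
        neighbours = sorted(
            complete,
            key=lambda c: math.dist([row[i] for i in observed],
                                    [c[i] for i in observed]))[:k]
        filled = [v if v is not None
                  else sum(n[i] for n in neighbours) / len(neighbours)
                  for i, v in enumerate(row)]
        result.append(filled)
    return result
```

An outlier among the complete rows can land in the neighbour set and skew the imputed mean, which is precisely the failure mode that motivates running instance selection first.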

  16. Secondary markets for transmission rights in the North West European Market. Position Paper of the North West European Market Parties Platform

    International Nuclear Information System (INIS)

    Van Haaster, G.

    2006-06-01

The most important way to acquire cross border transmission rights in the North West European electricity market is through explicit auctions. However, market-driven flexibility, and therefore efficiency, can be further enhanced. One way to do this is to introduce a secondary market for transmission rights. In this paper the North West European Market Parties Platform (NWE MPP) proposes a model that was developed and is preferred by the market parties. The paper provides a converging contribution to the congestion management discussions in the North Western European region

  17. Query Health: standards-based, cross-platform population health surveillance.

    Science.gov (United States)

    Klann, Jeffrey G; Buck, Michael D; Brown, Jeffrey; Hadley, Marc; Elmore, Richard; Weber, Griffin M; Murphy, Shawn N

    2014-01-01

    Understanding population-level health trends is essential to effectively monitor and improve public health. The Office of the National Coordinator for Health Information Technology (ONC) Query Health initiative is a collaboration to develop a national architecture for distributed, population-level health queries across diverse clinical systems with disparate data models. Here we review Query Health activities, including a standards-based methodology, an open-source reference implementation, and three pilot projects. Query Health defined a standards-based approach for distributed population health queries, using an ontology based on the Quality Data Model and Consolidated Clinical Document Architecture, Health Quality Measures Format (HQMF) as the query language, the Query Envelope as the secure transport layer, and the Quality Reporting Document Architecture as the result language. We implemented this approach using Informatics for Integrating Biology and the Bedside (i2b2) and hQuery for data analytics and PopMedNet for access control, secure query distribution, and response. We deployed the reference implementation at three pilot sites: two public health departments (New York City and Massachusetts) and one pilot designed to support Food and Drug Administration post-market safety surveillance activities. The pilots were successful, although improved cross-platform data normalization is needed. This initiative resulted in a standards-based methodology for population health queries, a reference implementation, and revision of the HQMF standard. It also informed future directions regarding interoperability and data access for ONC's Data Access Framework initiative. Query Health was a test of the learning health system that supplied a functional methodology and reference implementation for distributed population health queries that has been validated at three sites. Published by the BMJ Publishing Group Limited. 

  18. MASPECTRAS: a platform for management and analysis of proteomics LC-MS/MS data

    Directory of Open Access Journals (Sweden)

    Rader Robert

    2007-06-01

Full Text Available Abstract Background The advancements of proteomics technologies have led to a rapid increase in the number, size and rate at which datasets are generated. Managing and extracting valuable information from such datasets requires the use of data management platforms and computational approaches. Results We have developed the MAss SPECTRometry Analysis System (MASPECTRAS), a platform for management and analysis of proteomics LC-MS/MS data. MASPECTRAS is based on the Proteome Experimental Data Repository (PEDRo) relational database schema and follows the guidelines of the Proteomics Standards Initiative (PSI). Analysis modules include: (1) import and parsing of the results from the search engines SEQUEST, Mascot, Spectrum Mill, X! Tandem, and OMSSA; (2) peptide validation; (3) clustering of proteins based on Markov Clustering and multiple alignments; and (4) quantification using the Automated Statistical Analysis of Protein Abundance Ratios (ASAPRatio) algorithm. The system provides customizable data retrieval and visualization tools, as well as export to the PRoteomics IDEntifications public repository (PRIDE). MASPECTRAS is freely available at http://genome.tugraz.at/maspectras Conclusion Given the unique features and the flexibility due to the use of standard software technology, our platform represents a significant advance and could be of great interest to the proteomics community.

  19. Internet and Cross Media Productions

    DEFF Research Database (Denmark)

    Petersen, Anja Bechmann

    2006-01-01

Convergence is one of the hot topics in Internet studies. Recently, however, media organizations have turned their focus to cross media communication. Media organizations are interested in optimizing communication across platforms such as TV, radio, websites, mobile telephones and newspapers. The aim of this article is to examine the roles of the Internet when emphasis is put on cross media rather than convergence. This article proposes not one unidirectional convergent tendency but manifold roles of the Internet in cross media communication. Inside the media organizations, however, the Internet continues to play a minor role when compared to older media. The content of the cross media concepts and organizations' history are crucial elements in deciding the priority and use of platforms. Methodologically, the article approaches cross media and the roles of the Internet on a micro…

  20. Comparison and evaluation of datasets for off-angle iris recognition

    Science.gov (United States)

    Kurtuncu, Osman M.; Cerme, Gamze N.; Karakaya, Mahmut

    2016-05-01

In this paper, we investigated publicly available iris recognition datasets and their data capture procedures in order to determine whether they are suitable for stand-off iris recognition research. The majority of iris recognition datasets include only frontal iris images. Even when a dataset includes off-angle iris images, the frontal and off-angle iris images are not captured at the same time. The comparison of frontal and off-angle iris images shows not only differences in the gaze angle but also changes in pupil dilation and accommodation. In order to isolate the effect of the gaze angle from other challenging issues, including dilation and accommodation, the frontal and off-angle iris images should be captured at the same time by two different cameras. Therefore, in this work we developed an iris image acquisition platform using two cameras, where one camera captures the frontal iris image and the other captures iris images from off-angle. Based on the comparison of Hamming distances between frontal and off-angle iris images captured with the two-camera setup and the one-camera setup, we observed that the Hamming distance in the two-camera setup is less than in the one-camera setup, with differences ranging from 0.001 to 0.05. These results show that, in order to have accurate results in off-angle iris recognition research, a two-camera setup is necessary to distinguish the challenging issues from each other.
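The fractional Hamming distance used to compare iris codes can be sketched as follows. This is the standard Daugman-style masked measure, not necessarily the authors' exact implementation, and the bit patterns below are illustrative.

```python
def fractional_hamming(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits over the bits valid in both codes.

    Masks mark bits unobscured by eyelids, lashes or specular
    reflections; only bits valid in both codes are compared.
    """
    valid = [ma and mb for ma, mb in zip(mask_a, mask_b)]
    n_valid = sum(valid)
    if n_valid == 0:
        raise ValueError("no overlapping valid bits")
    disagree = sum(1 for a, b, v in zip(code_a, code_b, valid)
                   if v and a != b)
    return disagree / n_valid
```

Comparing a frontal code against an off-angle code of the same eye with this measure is what yields the 0.001-0.05 differences the abstract reports between the two capture setups.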

  1. Calibrating a numerical model's morphology using high-resolution spatial and temporal datasets from multithread channel flume experiments.

    Science.gov (United States)

    Javernick, L.; Bertoldi, W.; Redolfi, M.

    2017-12-01

Accessing or acquiring high quality, low-cost topographic data has never been easier due to recent developments in the photogrammetric technique of Structure-from-Motion (SfM). Researchers can acquire the necessary SfM imagery with various platforms, capturing millimetre resolution and accuracy, or covering large-scale areas with the help of unmanned platforms. Such datasets, in combination with numerical modelling, have opened up new opportunities to study the physical and ecological relationships of river environments. While a numerical model's overall predictive accuracy is most influenced by topography, proper model calibration requires hydraulic and morphological data; however, rich hydraulic and morphological datasets remain scarce. This lack of field and laboratory data has limited model advancement through the inability to properly calibrate, assess the sensitivity of, and validate model performance. However, new time-lapse imagery techniques have shown success in identifying instantaneous sediment transport in flume experiments and in improving hydraulic model calibration. With new capabilities to capture high resolution spatial and temporal datasets of flume experiments, there is a need to further assess model performance. To address this demand, this research used braided river flume experiments and captured time-lapse observations of sediment transport and repeat SfM elevation surveys to provide unprecedented spatial and temporal datasets. Through newly created metrics that quantified observed and modeled activation, deactivation, and bank erosion rates, the numerical model Delft3D was calibrated. This increased temporal data, with both high-resolution time series and long-term temporal coverage, provided significantly improved calibration routines that refined calibration parameterization. Model results show that there is a trade-off between achieving quantitative statistical and qualitative morphological representations. Specifically, statistical

  2. PySpline: A Modern, Cross-Platform Program for the Processing of Raw Averaged XAS Edge and EXAFS Data

    International Nuclear Information System (INIS)

    Tenderholt, Adam; Hedman, Britt; Hodgson, Keith O.

    2007-01-01

PySpline is a modern computer program for processing raw averaged XAS and EXAFS data using an intuitive approach which allows the user to see the immediate effect of various processing parameters on the resulting k- and R-space data. The Python scripting language and the Qt and Qwt widget libraries were chosen to meet the design requirement that it be cross-platform (i.e. with versions for Windows, Mac OS X, and Linux). PySpline supports polynomial pre- and post-edge background subtraction, splining of the EXAFS region with a multi-segment polynomial spline, and Fast Fourier Transform (FFT) of the resulting k³-weighted EXAFS data
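The pre-edge background subtraction step can be sketched with a first-order polynomial fit. This is a minimal sketch, not PySpline's code: PySpline also supports higher-order polynomials, the multi-segment spline, and the FFT stages, and the energies and absorption values below are illustrative.

```python
def fit_poly1(xs, ys):
    """Least-squares straight line through (xs, ys)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def subtract_pre_edge(energies, mu, edge_energy):
    """Fit the pre-edge region and subtract its extrapolation everywhere."""
    pre = [(e, m) for e, m in zip(energies, mu) if e < edge_energy]
    slope, intercept = fit_poly1([e for e, _ in pre], [m for _, m in pre])
    return [m - (slope * e + intercept) for e, m in zip(energies, mu)]
```

Seeing the subtracted curve update as the fit range or polynomial order changes is exactly the interactive feedback the program is built around.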

  3. CAVAREV-an open platform for evaluating 3D and 4D cardiac vasculature reconstruction

    International Nuclear Information System (INIS)

    Rohkohl, Christopher; Hornegger, Joachim; Lauritsch, Guenter; Keil, Andreas

    2010-01-01

The 3D reconstruction of cardiac vasculature, e.g. the coronary arteries, using C-arm CT (rotational angiography) is an active and challenging field of research. There are numerous publications on different reconstruction techniques. However, there is still a lack of comparability of achieved results, for several reasons: foremost, the datasets used in publications are not open to the public and thus experiments are not reproducible by other researchers. Further, the results depend highly on the vasculature motion, i.e. cardiac and breathing motion patterns, which are also not comparable across publications. We aim to close this gap by providing an open platform, called Cavarev (CArdiac VAsculature Reconstruction EValuation). It features two simulated dynamic projection datasets based on the 4D XCAT phantom with contrasted coronary arteries, which was derived from patient data. In the first dataset, the vasculature undergoes a continuous periodic motion. The second dataset contains aperiodic heart motion by including additional breathing motion. The geometry calibration and acquisition protocol were obtained from a real-world C-arm system. For qualitative evaluation of the reconstruction results, the correlation of the morphology is used. Two segmentation-based quality measures are introduced which allow the 3D and 4D reconstruction quality to be assessed. They are based on the spatial overlap of the vasculature reconstruction with the ground truth. The measures enable a comprehensive analysis and comparison of reconstruction results, independent of the utilized reconstruction algorithm. An online platform (www.cavarev.com) is provided where the datasets can be downloaded, researchers can manage and publish algorithm results, and a reference C++ and Matlab implementation can be downloaded.
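The abstract does not spell out the two overlap measures. A common segmentation-overlap score of this kind is the Dice coefficient, sketched here over sets of voxel indices; this is an assumed illustration of the general idea, not necessarily Cavarev's exact metric.

```python
def dice_overlap(recon_voxels, truth_voxels):
    """Dice coefficient between two voxel sets: 1.0 = perfect overlap,
    0.0 = no overlap. Voxels are hashable index tuples (i, j, k)."""
    recon, truth = set(recon_voxels), set(truth_voxels)
    if not recon and not truth:
        return 1.0
    return 2 * len(recon & truth) / (len(recon) + len(truth))
```

Scoring a segmented reconstruction against the known phantom segmentation this way is algorithm-independent, which is the property the platform's measures are designed to provide.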

  4. SU-E-T-112: An OpenCL-Based Cross-Platform Monte Carlo Dose Engine (oclMC) for Coupled Photon-Electron Transport

    International Nuclear Information System (INIS)

    Tian, Z; Shi, F; Folkerts, M; Qin, N; Jiang, S; Jia, X

    2015-01-01

    Purpose: Low computational efficiency of Monte Carlo (MC) dose calculation impedes its clinical applications. Although a number of MC dose packages have been developed over the past few years, enabling fast MC dose calculations, most of these packages were developed under NVidia’s CUDA environment. This limited their code portability to other platforms, hindering the introduction of GPU-based MC dose engines to clinical practice. To solve this problem, we developed a cross-platform fast MC dose engine named oclMC under OpenCL environment for external photon and electron radiotherapy. Methods: Coupled photon-electron simulation was implemented with standard analogue simulation scheme for photon transport and Class II condensed history scheme for electron transport. We tested the accuracy and efficiency of oclMC by comparing the doses calculated using oclMC and gDPM, a previously developed GPU-based MC code on NVidia GPU platform, for a 15MeV electron beam and a 6MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. We also tested code portability of oclMC on different devices, including an NVidia GPU, two AMD GPUs and an Intel CPU. Results: Satisfactory agreements were observed in all photon and electron cases, with ∼0.48%–0.53% average dose differences at regions within 10% isodose line for electron beam cases and ∼0.15%–0.17% for photon beam cases. It took oclMC 3–4 sec to perform transport simulation for electron beam on NVidia Titan GPU and 35–51 sec for photon beam, both with ∼0.5% statistical uncertainty. The computation was 6%–17% slower than gDPM due to the differences in both physics model and development environment, which is considered not significant for clinical applications. In terms of code portability, gDPM only runs on NVidia GPUs, while oclMC successfully runs on all the tested devices. Conclusion: oclMC is an accurate and fast MC dose engine. Its high cross-platform

  5. SU-E-T-112: An OpenCL-Based Cross-Platform Monte Carlo Dose Engine (oclMC) for Coupled Photon-Electron Transport

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Z; Shi, F; Folkerts, M; Qin, N; Jiang, S; Jia, X [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)

    2015-06-15

    Purpose: Low computational efficiency of Monte Carlo (MC) dose calculation impedes its clinical applications. Although a number of MC dose packages have been developed over the past few years, enabling fast MC dose calculations, most of these packages were developed under NVidia’s CUDA environment. This limited their code portability to other platforms, hindering the introduction of GPU-based MC dose engines to clinical practice. To solve this problem, we developed a cross-platform fast MC dose engine named oclMC under OpenCL environment for external photon and electron radiotherapy. Methods: Coupled photon-electron simulation was implemented with standard analogue simulation scheme for photon transport and Class II condensed history scheme for electron transport. We tested the accuracy and efficiency of oclMC by comparing the doses calculated using oclMC and gDPM, a previously developed GPU-based MC code on NVidia GPU platform, for a 15MeV electron beam and a 6MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. We also tested code portability of oclMC on different devices, including an NVidia GPU, two AMD GPUs and an Intel CPU. Results: Satisfactory agreements were observed in all photon and electron cases, with ∼0.48%–0.53% average dose differences at regions within 10% isodose line for electron beam cases and ∼0.15%–0.17% for photon beam cases. It took oclMC 3–4 sec to perform transport simulation for electron beam on NVidia Titan GPU and 35–51 sec for photon beam, both with ∼0.5% statistical uncertainty. The computation was 6%–17% slower than gDPM due to the differences in both physics model and development environment, which is considered not significant for clinical applications. In terms of code portability, gDPM only runs on NVidia GPUs, while oclMC successfully runs on all the tested devices. Conclusion: oclMC is an accurate and fast MC dose engine. Its high cross-platform

  6. USA Hire Testing Platform

    Data.gov (United States)

    Office of Personnel Management — The USA Hire Testing Platform delivers tests used in hiring for positions in the Federal Government. To safeguard the integrity of the hiring processes and ensure...

  7. Comparison of trends and abrupt changes of the South Asia high from 1979 to 2014 in reanalysis and radiosonde datasets

    Science.gov (United States)

    Shi, Chunhua; Huang, Ying; Guo, Dong; Zhou, Shunwu; Hu, Kaixi; Liu, Yu

    2018-05-01

    The South Asian High (SAH) has an important influence on atmospheric circulation and the Asian climate in summer. However, current comparative analyses of the SAH are mostly between reanalysis datasets and there is a lack of sounding data. We therefore compared the climatology, trends and abrupt changes in the SAH in the Japanese 55-year Reanalysis (JRA-55) dataset, the National Centers for Environmental Prediction Climate Forecast System Reanalysis (NCEP-CFSR) dataset, the European Center for Medium-Range Weather Forecasts Reanalysis Interim (ERA-interim) dataset and radiosonde data from China using linear analysis and a sliding t-test. The trends in geopotential height in the control area of the SAH were positive in the JRA-55, NCEP-CFSR and ERA-interim datasets, but negative in the radiosonde data in the time period 1979-2014. The negative trends for the SAH were significant at the 90% confidence level in the radiosonde data from May to September. The positive trends in the NCEP-CFSR dataset were significant at the 90% confidence level in May, July, August and September, but the positive trends in the JRA-55 and ERA-Interim were only significant at the 90% confidence level in September. The reasons for the differences in the trends of the SAH between the radiosonde data and the three reanalysis datasets in the time period 1979-2014 were updates to the sounding systems, changes in instrumentation and improvements in the radiation correction method for calculations around the year 2000. We therefore analyzed the trends in the two time periods of 1979-2000 and 2001-2014 separately. From 1979 to 2000, the negative SAH trends in the radiosonde data mainly agreed with the negative trends in the NCEP-CFSR dataset, but were in contrast with the positive trends in the JRA-55 and ERA-Interim datasets. In 2001-2014, however, the trends in the SAH were positive in all four datasets and most of the trends in the radiosonde and NCEP-CFSR datasets were significant. It is
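The sliding t-test used for abrupt-change detection can be sketched as follows: a Welch's t statistic comparing the windows before and after each candidate year. The window length and the toy series in the test are illustrative, not the study's geopotential-height data.

```python
import math

def sliding_t(series, window):
    """Welch's t at each index i, comparing series[i-window:i]
    against series[i:i+window]; returns {index: t}."""
    def mean_var(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance
        return m, v

    stats = {}
    for i in range(window, len(series) - window + 1):
        m1, v1 = mean_var(series[i - window:i])
        m2, v2 = mean_var(series[i:i + window])
        stats[i] = (m2 - m1) / math.sqrt(v1 / window + v2 / window)
    return stats
```

A candidate change point is flagged where |t| peaks and exceeds the critical value for the chosen confidence level; in this sketch at least one window must have nonzero variance, or the denominator vanishes.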

  8. Information-computational platform for collaborative multidisciplinary investigations of regional climatic changes and their impacts

    Science.gov (United States)

    Gordov, Evgeny; Lykosov, Vasily; Krupchatnikov, Vladimir; Okladnikov, Igor; Titov, Alexander; Shulgina, Tamara

    2013-04-01

Analysis of the growing volume of climate-change-related data from sensors and model outputs requires collaborative multidisciplinary efforts of researchers. To do this in a timely and reliable way, a modern information-computational infrastructure supporting integrated studies in the environmental sciences is needed. The recently developed experimental software and hardware platform Climate (http://climate.scert.ru/) provides the required environment for regional climate change investigations. The platform combines a modern web 2.0 approach, GIS functionality and capabilities to run climate and meteorological models, process large geophysical datasets and support relevant analysis. It also supports joint software development by distributed research groups, and the organization of thematic education for students and post-graduate students. In particular, the platform software includes dedicated modules for numerical processing of regional and global modeling results for subsequent analysis and visualization. Runs of the integrated WRF and «Planet Simulator» models, preprocessing of modeling results, and visualization are also provided. All functions of the platform are accessible to users through a web portal using a common graphical web browser, in the form of an interactive graphical user interface which provides, in particular, capabilities for selecting a geographical region of interest (pan and zoom), manipulating data layers (order, enable/disable, feature extraction) and visualizing results. The platform provides users with capabilities for heterogeneous geophysical data analysis, including high-resolution data, and for discovering tendencies in climatic and ecosystem changes in the framework of different multidisciplinary researches. Using it, even an unskilled user without specific knowledge can perform reliable computational processing and visualization of large meteorological, climatic and satellite monitoring datasets through

  9. Kinematics of an in-parallel actuated manipulator based on the Stewart platform mechanism

    Science.gov (United States)

    Williams, Robert L., II

    1992-01-01

This paper presents kinematic equations and solutions for an in-parallel actuated robotic mechanism based on Stewart's platform. These equations are required for inverse position and resolved rate (inverse velocity) platform control. NASA LaRC has a Vehicle Emulator System (VES) platform designed by MIT which is based on Stewart's platform. The inverse position solution is straightforward and computationally inexpensive. Given the desired position and orientation of the moving platform with respect to the base, the lengths of the prismatic leg actuators are calculated. The forward position solution is more complicated and theoretically has 16 solutions. The position and orientation of the moving platform with respect to the base are calculated given the leg actuator lengths. Two methods are pursued in this paper to solve this problem. The resolved rate (inverse velocity) solution is also derived. Given the desired Cartesian velocity of the end-effector, the required leg actuator rates are calculated. The Newton-Raphson Jacobian matrix resulting from the second forward position kinematics solution is a modified inverse Jacobian matrix. Examples and simulations are given for the VES.
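The inverse position solution described above reduces to computing, for each leg, the vector from its base attachment to its platform attachment under the commanded pose, and taking its norm. A minimal sketch (anchor coordinates are illustrative, not the VES geometry):

```python
import math

def leg_lengths(base_pts, platform_pts, translation, rotation):
    """Inverse position for a Stewart platform.

    base_pts[i]     : leg attachment on the base (base frame)
    platform_pts[i] : leg attachment on the moving platform (platform frame)
    translation     : position of the platform origin in the base frame
    rotation        : 3x3 rotation matrix (platform -> base)
    """
    lengths = []
    for b, p in zip(base_pts, platform_pts):
        # platform attachment point expressed in the base frame
        world = [translation[k] + sum(rotation[k][j] * p[j] for j in range(3))
                 for k in range(3)]
        lengths.append(math.dist(world, b))
    return lengths


identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
anchors = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
           (0.0, -1.0, 0.0), (0.5, 0.5, 0.0), (-0.5, -0.5, 0.0)]
# Platform raised 2 m directly above the base, no rotation:
print(leg_lengths(anchors, anchors, (0.0, 0.0, 2.0), identity))  # each leg 2.0
```

This closed-form mapping from pose to actuator lengths is what makes the inverse solution computationally inexpensive; the 16-solution forward problem has no such direct formula.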

  10. Flood Modeling Using a Synthesis of Multi-Platform LiDAR Data

    Directory of Open Access Journals (Sweden)

    Ryan M. Csontos

    2013-09-01

    Full Text Available This study examined the utility of a high-resolution ground-based (mobile and terrestrial) Light Detection and Ranging (LiDAR) dataset (0.2 m point spacing) supplemented with a coarser-resolution airborne LiDAR dataset (5 m point spacing) for use in a flood inundation analysis. The techniques for combining multi-platform LiDAR data into a composite dataset in the form of a triangulated irregular network (TIN) are described, and quantitative comparisons were made to a TIN generated solely from the airborne LiDAR dataset. For example, a maximum land surface elevation difference of 1.677 m and a mean difference of 0.178 m were calculated between the datasets based on sample points. Utilizing the composite and airborne LiDAR-derived TINs, a flood inundation comparison was completed using a one-dimensional steady flow hydraulic modeling analysis. Quantitative comparisons of the water surface profiles and depth grids indicated an underestimation of flooding extent, volume, and maximum flood height using the airborne LiDAR data alone. A 35% increase in maximum flood height was observed using the composite LiDAR dataset. In addition, the extents of the water surface profiles generated from the two datasets were found to be statistically significantly different. The urban and mountainous characteristics of the study area as well as the density (file size) of the high-resolution ground-based LiDAR data presented both opportunities and challenges for flood modeling analyses.
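
    The sample-point comparison between the two surfaces can be sketched with linear interpolation over a Delaunay triangulation, which is exactly the interpolation a TIN provides. The point clouds and sample locations below are synthetic stand-ins, not the study's data:

```python
import numpy as np
from scipy.interpolate import griddata

def surface_differences(pts_a, z_a, pts_b, z_b, sample_pts):
    """Interpolate two LiDAR-derived surfaces at common sample points using
    linear interpolation over a Delaunay triangulation (a TIN) and return the
    maximum and mean absolute elevation difference between them."""
    za = griddata(pts_a, z_a, sample_pts, method="linear")
    zb = griddata(pts_b, z_b, sample_pts, method="linear")
    d = np.abs(za - zb)
    d = d[~np.isnan(d)]  # drop samples falling outside either TIN's hull
    return d.max(), d.mean()
```

    With real data, pts_a/z_a would come from the composite (ground plus airborne) point cloud and pts_b/z_b from the airborne-only cloud, yielding statistics analogous to the 1.677 m maximum and 0.178 m mean differences reported above.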

  11. Proteomics dataset

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Carlsen, Thomas Gelsing; Ellingsen, Torkell

    2017-01-01

    The datasets presented in this article are related to the research articles entitled “Neutrophil Extracellular Traps in Ulcerative Colitis: A Proteome Analysis of Intestinal Biopsies” (Bennike et al., 2015 [1]), and “Proteome Analysis of Rheumatoid Arthritis Gut Mucosa” (Bennike et al., 2017 [2])...... been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifiers PXD001608 for ulcerative colitis and control samples, and PXD003082 for rheumatoid arthritis samples....

  12. Psynteract: A flexible, cross-platform, open framework for interactive experiments.

    Science.gov (United States)

    Henninger, Felix; Kieslich, Pascal J; Hilbig, Benjamin E

    2017-10-01

    We introduce a novel platform for interactive studies, that is, any form of study in which participants' experiences depend not only on their own responses, but also on those of other participants who complete the same study in parallel, for example a prisoner's dilemma or an ultimatum game. The software thus especially serves the rapidly growing field of strategic interaction research within psychology and behavioral economics. In contrast to all available software packages, our platform does not handle stimulus display and response collection itself. Instead, we provide a mechanism to extend existing experimental software to incorporate interactive functionality. This approach allows us to draw upon the capabilities already available, such as accuracy of temporal measurement, integration with auxiliary hardware such as eye-trackers or (neuro-)physiological apparatus, and recent advances in experimental software, for example capturing response dynamics through mouse-tracking. Through integration with OpenSesame, an open-source graphical experiment builder, studies can be assembled via a drag-and-drop interface requiring little or no further programming skills. In addition, by using the same communication mechanism across software packages, we also enable interoperability between systems. Our source code, which provides support for all major operating systems and several popular experimental packages, can be freely used and distributed under an open source license. The communication protocols underlying its functionality are also well documented and easily adapted to further platforms. Code and documentation are available at https://github.com/psynteract/.

  13. Responses to positive affect, life satisfaction and self-esteem: A cross-lagged panel analysis during middle adolescence.

    Science.gov (United States)

    Gomez-Baya, Diego; Mendoza, Ramon; Gaspar, Tania; Gomes, Paulo

    2018-05-11

    During middle adolescence, elevated stress and a greater presence of psychological disorders have been documented. The research has paid little attention to the regulation of positive affective states. Fredrickson's broaden-and-build theory suggests that cultivating positive emotions helps to build resources that boost well-being. The current research aimed to examine the longitudinal associations between responses to positive affect (emotion-focused positive rumination, self-focused positive rumination, and dampening) and psychological adjustment (self-esteem and life satisfaction) during middle adolescence. A longitudinal study with two waves separated by one year was conducted, assessing 977 adolescents (M = 13.81, SD = 0.79; 51.5% boys) with self-report measures. A cross-lagged panel analysis was performed by including within the same model the relationships between all of the variables in the two assessment points. The results indicated cross-lagged positive relationships of self-focused positive rumination with both self-esteem and life satisfaction, while dampening showed a negative cross-lagged relationship with self-esteem. Moreover, higher self-esteem predicted more emotion-focused positive rumination, and more dampening predicted lower life satisfaction. Thus, the use of adaptive responses to positive affect and a better psychological adjustment were found to be prospectively interrelated at the one-year follow-up during middle adolescence. The discussion argues for the need to implement programmes to promote more adaptive responses to positive affect to enhance psychological adjustment in the adolescent transition to adulthood. © 2018 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
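
    The cross-lagged panel logic can be sketched with two ordinary least-squares regressions, one per wave-2 variable, each conditioned on both wave-1 variables. This is an illustration of the method with generic variable names, not the authors' modeling code (which would typically use SEM-style estimation with covariates):

```python
import numpy as np

def cross_lagged_paths(x1, y1, x2, y2):
    """Minimal two-wave cross-lagged panel sketch: regress each wave-2
    variable on both wave-1 variables (OLS with intercept) and return the
    two cross-lagged coefficients, x1 -> y2 and y1 -> x2."""
    X = np.column_stack([np.ones_like(x1), x1, y1])
    beta_y2, *_ = np.linalg.lstsq(X, y2, rcond=None)  # [const, x1->y2, y1->y2]
    beta_x2, *_ = np.linalg.lstsq(X, x2, rcond=None)  # [const, x1->x2, y1->x2]
    return beta_y2[1], beta_x2[2]
```

    The cross-lagged coefficients are the off-diagonal paths: how much a wave-1 variable predicts the *other* variable at wave 2 after controlling for that variable's own stability.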

  14. The European Photovoltaic Technology Platform

    International Nuclear Information System (INIS)

    Nowak, S.; Aulich, H.; Bal, J.L.; Dimmler, B.; Garnier, A.; Jongerden, G.; Luther, J.; Luque, A.; Milner, A.; Nelson, D.; Pataki, I.; Pearsall, N.; Perezagua, E.; Pietruszko, S.; Rehak, J.; Schellekens, E.; Shanker, A.; Silvestrini, G.; Sinke, W.; Willemsen, H.

    2006-05-01

    The European Photovoltaic Technology Platform is one of the European Technology Platforms, a new instrument proposed by the European Commission. European Technology Platforms (ETPs) are a mechanism to bring together all interested stakeholders to develop a long-term vision to address a specific challenge, create a coherent, dynamic strategy to achieve that vision and steer the implementation of an action plan to deliver agreed programmes of activities and optimise the benefits for all parties. The European Photovoltaic Technology Platform has recently been established to define, support and accompany the implementation of a coherent and comprehensive strategic plan for photovoltaics. The platform will mobilise all stakeholders sharing a long-term European vision for PV, helping to ensure that Europe maintains and improves its industrial position. The platform will realise a European Strategic Research Agenda for PV for the next decade(s). Guided by a Steering Committee of 20 high-level decision-makers representing all relevant European PV stakeholders, the European PV Technology Platform comprises four Working Groups dealing with policy and instruments; market deployment; science, technology and applications; and developing countries, and is supported by a secretariat.

  15. The COMET Sleep Research Platform.

    Science.gov (United States)

    Nichols, Deborah A; DeSalvo, Steven; Miller, Richard A; Jónsson, Darrell; Griffin, Kara S; Hyde, Pamela R; Walsh, James K; Kushida, Clete A

    2014-01-01

    The Comparative Outcomes Management with Electronic Data Technology (COMET) platform is extensible and designed for facilitating multicenter electronic clinical research. Our research goals were the following: (1) to conduct a comparative effectiveness trial (CET) for two obstructive sleep apnea treatments-positive airway pressure versus oral appliance therapy; and (2) to establish a new electronic network infrastructure that would support this study and other clinical research studies. The COMET platform was created to satisfy the needs of CET with a focus on creating a platform that provides comprehensive toolsets, multisite collaboration, and end-to-end data management. The platform also provides medical researchers the ability to visualize and interpret data using business intelligence (BI) tools. COMET is a research platform that is scalable and extensible, and which, in a future version, can accommodate big data sets and enable efficient and effective research across multiple studies and medical specialties. The COMET platform components were designed for an eventual move to a cloud computing infrastructure that enhances sustainability, overall cost effectiveness, and return on investment.

  16. The Platform Architecture and Key Technology of Cloud Service that Support Wisdom City Management

    Directory of Open Access Journals (Sweden)

    Liang Xiao

    2013-05-01

    Full Text Available According to the new requirement of constructing a “resource sharing and service on demand” wisdom city system, this paper puts forward a cloud service platform architecture for wisdom city management that supports the three service models IaaS, PaaS and SaaS. The architecture is based on research into the operation mode of the wisdom city under a cloud computing environment and into key technologies including mass storage of cloud data, construction of cloud resource pools, scheduling management and monitoring of cloud resources, and security management and control of the cloud platform. The platform enables the wisdom city system to optimize business and resource scheduling and to manage large-scale hardware and software in a unified and efficient way, with the characteristics of cross-domain resource scheduling, cross-domain data sharing, cross-domain facility integration and cross-domain service integration.

  17. The universal modular platform

    International Nuclear Information System (INIS)

    North, R.B.

    1995-01-01

    A new and patented design for offshore wellhead platforms has been developed to meet a 'fast track' requirement for increased offshore production from field locations not yet identified. The new design uses modular construction to allow for radical changes in the water depth of the final location and assembly-line efficiency in fabrication. By utilizing high-strength steels and structural support from the well conductors, the new design accommodates all planned production requirements on a support structure significantly lighter and less expensive than the conventional design it replaces. Twenty-two platforms based on the new design were ready for installation within 18 months of the project start. Installation of the new platforms began in 1992 for drilling support and 1993 for production support. The new design has become the Company standard for all future production platforms. Large savings in construction costs have been realized through its light weight, flexibility in both positioning and water depth, and its modular construction.

  18. Developing a Web Platform to Support a Community of Practice: A Mixed Methods Study in Pediatric Physiotherapy.

    Science.gov (United States)

    Pratte, Gabrielle; Hurtubise, Karen; Rivard, Lisa; Berbari, Jade; Camden, Chantal

    2018-01-01

    Web platforms are increasingly used to support virtual interactions between members of communities of practice (CoP). However, little is known about how to develop these platforms to support the implementation of best practices for health care professionals. The aim of this article is to explore pediatric physiotherapists' (PTs) perspectives regarding the utility and usability of the characteristics of a web platform developed to support virtual communities of practice (vCoP). This study adopted an explanatory sequential mixed methods design. A web platform supporting the interactions of vCoP members was developed for PTs working with children with developmental coordination disorder. Specific strategies and features were created to support the effectiveness of the platform across three domains: social, information-quality, and system-quality factors. Quantitative data were collected from a cross-sectional survey (n = 41) after 5 months of access to the web platform. Descriptive statistics were calculated. Qualitative data were also collected from semistructured interviews (n = 9), which were coded, interpreted, and analyzed by using Boucher's Web Ergonomics Conceptual Framework. The utility of the web platform characteristics targeting the three key domain factors was generally perceived positively by PTs. However, web platform usability issues were noted by PTs, including problems with navigation and information retrieval. Web platforms aiming to support vCoPs should be carefully developed to target potential users' needs. Whenever possible, users should co-construct the web platform with vCoP developers. Moreover, each of the developed characteristics (eg, newsletter, search function) should be evaluated in terms of utility and usability for the users.

  19. Soil chemistry in lithologically diverse datasets: the quartz dilution effect

    Science.gov (United States)

    Bern, Carleton R.

    2009-01-01

    National- and continental-scale soil geochemical datasets are likely to move our understanding of broad soil geochemistry patterns forward significantly. Patterns of chemistry and mineralogy delineated from these datasets are strongly influenced by the composition of the soil parent material, which itself is largely a function of lithology and particle size sorting. Such controls present a challenge by obscuring subtler patterns arising from subsequent pedogenic processes. Here the effect of quartz concentration is examined in moist-climate soils from a pilot dataset of the North American Soil Geochemical Landscapes Project. Due to variable and high quartz contents (6.2–81.7 wt.%), and its residual and inert nature in soil, quartz is demonstrated to influence broad patterns in soil chemistry. A dilution effect is observed whereby concentrations of various elements are significantly and strongly negatively correlated with quartz. Quartz content drives artificial positive correlations between concentrations of some elements and obscures negative correlations between others. Unadjusted soil data show the highly mobile base cations Ca, Mg, and Na to be often strongly positively correlated with intermediately mobile Al or Fe, and generally uncorrelated with the relatively immobile high-field-strength elements (HFS) Ti and Nb. Both patterns are contrary to broad expectations for soils being weathered and leached. After transforming bulk soil chemistry to a quartz-free basis, the base cations are generally uncorrelated with Al and Fe, and negative correlations generally emerge with the HFS elements. Quartz-free element data may be a useful tool for elucidating patterns of weathering or parent-material chemistry in large soil datasets.
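
    The quartz-free transformation described above amounts to renormalizing each element's bulk concentration to the non-quartz fraction of the sample. A minimal sketch of that renormalization:

```python
def quartz_free(concentration_wt_pct, quartz_wt_pct):
    """Renormalize a bulk element concentration (wt.%) to a quartz-free
    basis by dividing by the non-quartz fraction of the soil. A sample that
    is 50 wt.% quartz has the relative concentrations of its remaining
    constituents doubled on a quartz-free basis."""
    return concentration_wt_pct / (1.0 - quartz_wt_pct / 100.0)
```

    For example, 5 wt.% Al in a soil that is 50 wt.% quartz becomes 10 wt.% on a quartz-free basis. Because quartz content varies widely between samples (6.2–81.7 wt.% in the pilot dataset), this rescaling removes the shared dilution term that drives the artificial inter-element correlations.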

  20. On the prospects of cross-calibrating the Cherenkov Telescope Array with an airborne calibration platform

    Science.gov (United States)

    Brown, Anthony M.

    2018-01-01

    Recent advances in unmanned aerial vehicle (UAV) technology have made UAVs an attractive possibility as an airborne calibration platform for astronomical facilities. This is especially true for arrays of telescopes spread over a large area such as the Cherenkov Telescope Array (CTA). In this paper, the feasibility of using UAVs to calibrate CTA is investigated. Assuming a UAV at 1 km altitude above CTA, operating on astronomically clear nights with stratified, low atmospheric dust content, appropriate thermal protection for the calibration light source and an onboard photodiode to monitor its absolute light intensity, inter-calibration of CTA's telescopes of the same size class is found to be achievable with a 6–8% uncertainty. For cross-calibration of different telescope size classes, a systematic uncertainty of 8–10% is found to be achievable. Importantly, equipping the UAV with a multi-wavelength calibration light source affords us the ability to monitor the wavelength-dependent degradation of CTA telescopes' optical systems, allowing us not only to maintain this 6–10% uncertainty after the first few years of telescope deployment, but also to accurately account for the effect of multi-wavelength degradation on the cross-calibration of CTA by other techniques, namely with images of air showers and local muons. A UAV-based system thus provides CTA with several independent and complementary methods of cross-calibrating the optical throughput of individual telescopes. Furthermore, housing environmental sensors on the UAV system allows us not only to minimise the systematic uncertainty associated with the atmospheric transmission of the calibration signal, but also to map the dust content above CTA and to monitor the temperature, humidity and pressure profiles of the first kilometre of atmosphere above CTA with each UAV flight.

  1. SU-C-BRC-06: OpenCL-Based Cross-Platform Monte Carlo Simulation Package for Carbon Ion Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Qin, N; Tian, Z; Pompos, A; Jiang, S; Jia, X [UT Southwestern Medical Ctr, Dallas, TX (United States); Pinto, M; Dedes, G; Parodi, K [Ludwig-Maximilians-Universitaet Muenchen, Garching / Munich (Germany)

    2016-06-15

    Purpose: Monte Carlo (MC) simulation is considered to be the most accurate method for calculation of absorbed dose and fundamental physical quantities related to biological effects in carbon ion therapy. Its long computation time impedes clinical and research applications. We have developed an MC package, goCMC, on parallel processing platforms, aiming at achieving accurate and efficient simulations for carbon therapy. Methods: goCMC was developed under the OpenCL framework. It supported transport simulation in voxelized geometry with kinetic energy up to 450 MeV/u. A Class II condensed history algorithm was employed for charged particle transport, with stopping power computed via the Bethe-Bloch equation. Secondary electrons were not transported; their energy was deposited locally. Energy straggling and multiple scattering were modeled. Production of secondary charged particles from nuclear interactions was implemented based on cross section and yield data from Geant4. They were transported via the condensed history scheme. goCMC supported scoring various quantities of interest, e.g. physical dose, particle fluence, spectrum, linear energy transfer, and positron-emitting nuclei. Results: goCMC has been benchmarked against Geant4 with different phantoms and beam energies. For 100 MeV/u, 250 MeV/u and 400 MeV/u beams impinging on a water phantom, the range difference was 0.03 mm, 0.20 mm and 0.53 mm, and the mean dose difference was 0.47%, 0.72% and 0.79%, respectively. goCMC can run on various computing devices. Depending on the beam energy and voxel size, it took 20–100 seconds to simulate 10^7 carbons on an AMD Radeon GPU card. The corresponding CPU time for Geant4 with the same setup was 60–100 hours. Conclusion: We have developed an OpenCL-based cross-platform carbon MC simulation package, goCMC. Its accuracy, efficiency and portability make goCMC attractive for research and clinical applications in carbon therapy.
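
    The Bethe-Bloch stopping power mentioned above can be sketched in its uncorrected textbook form. This is a generic formula, not goCMC's actual OpenCL kernel, and the water-target parameters in the test are round illustrative values:

```python
import math

ME_C2 = 0.5109989  # electron rest energy [MeV]
K = 0.307075       # 4*pi*N_A*r_e^2*m_e*c^2 [MeV cm^2 / mol]

def bethe_bloch(z, M, beta, Z, A, I):
    """Mean mass stopping power -dE/dx [MeV cm^2/g] of a particle with
    charge z*e, mass M [MeV/c^2] and speed beta, in a medium of atomic
    number Z, molar mass A [g/mol] and mean excitation energy I [MeV].
    Uncorrected Bethe formula (no density or shell corrections) -- an
    illustrative sketch, not the full condensed-history implementation."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    # Maximum energy transfer to a free electron in a single collision
    tmax = (2.0 * ME_C2 * beta**2 * gamma**2 /
            (1.0 + 2.0 * gamma * ME_C2 / M + (ME_C2 / M) ** 2))
    log_term = math.log(2.0 * ME_C2 * beta**2 * gamma**2 * tmax / I**2)
    return K * z**2 * (Z / A) * (1.0 / beta**2) * (0.5 * log_term - beta**2)
```

    The z² scaling is why carbon (z = 6) deposits roughly 36 times the dose per unit path length of a proton at the same velocity, and the 1/β² factor produces the Bragg peak as the ion slows down.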

  2. BayesMotif: de novo protein sorting motif discovery from impure datasets.

    Science.gov (United States)

    Hu, Jianjun; Zhang, Fan

    2010-01-18

    Protein sorting is the process by which newly synthesized proteins are transported to their target locations within or outside of the cell. This process is precisely regulated by protein sorting signals in different forms. A major category of sorting signals consists of amino acid sub-sequences usually located at the N-terminus or C-terminus of protein sequences. Genome-wide experimental identification of protein sorting signals is extremely time-consuming and costly. Effective computational algorithms for de novo discovery of protein sorting signals are needed to improve the understanding of protein sorting mechanisms. We formulated the protein sorting motif discovery problem as a classification problem and proposed a Bayesian classifier based algorithm (BayesMotif) for de novo identification of a common type of protein sorting motifs in which a highly conserved anchor is present along with less conserved motif regions. A false positive removal procedure is developed to iteratively remove sequences that are unlikely to contain true motifs so that the algorithm can identify motifs from impure input sequences. Experiments on both implanted motif datasets and real-world datasets showed that the enhanced BayesMotif algorithm can identify anchored sorting motifs from pure or impure protein sequence datasets. It also shows that the false positive removal procedure can help to identify true motifs even when only 20% of the input sequences contain true motif instances. We proposed BayesMotif, a novel Bayesian classification based algorithm for de novo discovery of a special category of anchored protein sorting motifs from impure datasets. Compared to conventional motif discovery algorithms such as MEME, our algorithm can find less-conserved motifs with short highly conserved anchors.
Our algorithm also has the advantage of easy incorporation of additional meta-sequence features such as hydrophobicity or charge of the motifs which may help to overcome the limitations of
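
    The iterative false-positive removal procedure can be sketched generically: re-fit the motif model on the surviving sequences, score every sequence under it, and discard the lowest-scoring fraction each round. The fit_fn/score_fn interfaces below are hypothetical stand-ins for a motif model, not BayesMotif's actual API:

```python
def iterative_fp_removal(sequences, fit_fn, score_fn, keep_frac=0.8, rounds=5):
    """Generic sketch of iterative false-positive removal. fit_fn(seqs)
    builds a motif model from the current sequence set; score_fn(model, seq)
    returns a likelihood-style score. Each round, the lowest-scoring fraction
    of sequences -- those least likely to contain a true motif -- is dropped,
    and the model is re-fit on the purified set."""
    seqs = list(sequences)
    for _ in range(rounds):
        model = fit_fn(seqs)
        seqs = sorted(seqs, key=lambda s: score_fn(model, s), reverse=True)
        keep = max(1, int(len(seqs) * keep_frac))
        if keep == len(seqs):  # nothing left to remove; converged
            break
        seqs = seqs[:keep]
    return seqs
```

    In the real algorithm the score would be the Bayesian classifier's posterior for a sequence containing a motif instance; the purification loop is what lets the method tolerate input sets where most sequences carry no true motif.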

  3. Embedded Linux platform for data acquisition systems

    International Nuclear Information System (INIS)

    Patel, Jigneshkumar J.; Reddy, Nagaraj; Kumari, Praveena; Rajpal, Rachana; Pujara, Harshad; Jha, R.; Kalappurakkal, Praveen

    2014-01-01

    Highlights: • The design and the development of a data acquisition system on an FPGA based reconfigurable hardware platform. • Embedded Linux configuration and compilation for FPGA based systems. • Hardware logic IP core and its Linux device driver development for the external peripheral to interface it with the FPGA based system. - Abstract: This scalable hardware–software system is designed and developed to explore the emerging open standards for the data acquisition requirements of Tokamak experiments. To address the future need for a scalable data acquisition and control system for fusion experiments, we have explored the capability of a software platform using an Open Source Embedded Linux Operating System on a programmable hardware platform such as an FPGA. The idea was to identify a platform which can be customizable, flexible and scalable to support the data acquisition system requirements. To do this, we have selected an FPGA based reconfigurable and scalable hardware platform to design the system with an Embedded Linux based operating system for flexibility in software development and a Gigabit Ethernet interface for high speed data transactions. The proposed hardware–software platform using an FPGA and Embedded Linux OS offers a single chip solution with a processor and peripherals such as an ADC interface controller, Gigabit Ethernet controller and memory controller, amongst other peripherals. The Embedded Linux platform for data acquisition is implemented and tested on a Virtex-5 FXT FPGA ML507 which has a PowerPC 440 (PPC440) [2] hard block on the FPGA. For this work, we have used Linux Kernel version 2.6.34 with BSP support for the ML507 platform. It is downloaded from the Xilinx [1] GIT server. A cross-compiler tool chain is created using the Buildroot scripts. The Linux Kernel and Root File System are configured and compiled using the cross-tools to support the hardware platform. The Analog to Digital Converter (ADC) IO module is designed and interfaced with the ML507 through Xilinx

  4. Embedded Linux platform for data acquisition systems

    Energy Technology Data Exchange (ETDEWEB)

    Patel, Jigneshkumar J., E-mail: jjp@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Reddy, Nagaraj, E-mail: nagaraj.reddy@coreel.com [Sandeepani School of Embedded System Design, Bangalore, Karnataka (India); Kumari, Praveena, E-mail: praveena@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Rajpal, Rachana, E-mail: rachana@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Pujara, Harshad, E-mail: pujara@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Jha, R., E-mail: rjha@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Kalappurakkal, Praveen, E-mail: praveen.k@coreel.com [Sandeepani School of Embedded System Design, Bangalore, Karnataka (India)

    2014-05-15

    Highlights: • The design and the development of a data acquisition system on an FPGA based reconfigurable hardware platform. • Embedded Linux configuration and compilation for FPGA based systems. • Hardware logic IP core and its Linux device driver development for the external peripheral to interface it with the FPGA based system. - Abstract: This scalable hardware–software system is designed and developed to explore the emerging open standards for the data acquisition requirements of Tokamak experiments. To address the future need for a scalable data acquisition and control system for fusion experiments, we have explored the capability of a software platform using an Open Source Embedded Linux Operating System on a programmable hardware platform such as an FPGA. The idea was to identify a platform which can be customizable, flexible and scalable to support the data acquisition system requirements. To do this, we have selected an FPGA based reconfigurable and scalable hardware platform to design the system with an Embedded Linux based operating system for flexibility in software development and a Gigabit Ethernet interface for high speed data transactions. The proposed hardware–software platform using an FPGA and Embedded Linux OS offers a single chip solution with a processor and peripherals such as an ADC interface controller, Gigabit Ethernet controller and memory controller, amongst other peripherals. The Embedded Linux platform for data acquisition is implemented and tested on a Virtex-5 FXT FPGA ML507 which has a PowerPC 440 (PPC440) [2] hard block on the FPGA. For this work, we have used Linux Kernel version 2.6.34 with BSP support for the ML507 platform. It is downloaded from the Xilinx [1] GIT server. A cross-compiler tool chain is created using the Buildroot scripts. The Linux Kernel and Root File System are configured and compiled using the cross-tools to support the hardware platform. The Analog to Digital Converter (ADC) IO module is designed and interfaced with the ML507 through Xilinx

  5. SatelliteDL: a Toolkit for Analysis of Heterogeneous Satellite Datasets

    Science.gov (United States)

    Galloy, M. D.; Fillmore, D.

    2014-12-01

    SatelliteDL is an IDL toolkit for the analysis of satellite Earth observations from a diverse set of platforms and sensors. The core function of the toolkit is the spatial and temporal alignment of satellite swath and geostationary data. The design features an abstraction layer that allows for easy inclusion of new datasets in a modular way. Our overarching objective is to create utilities that automate the mundane aspects of satellite data analysis, are extensible and maintainable, and do not place limitations on the analysis itself. IDL has a powerful suite of statistical and visualization tools that can be used in conjunction with SatelliteDL. Toward this end we have constructed SatelliteDL to include (1) HTML and LaTeX API document generation, (2) a unit test framework, (3) automatic message and error logs, (4) HTML and LaTeX plot and table generation, and (5) several real world examples with bundled datasets available for download. For ease of use, datasets, variables and optional workflows may be specified in a flexible format configuration file. Configuration statements may specify, for example, a region and date range, and the creation of images, plots and statistical summary tables for a long list of variables. SatelliteDL enforces data provenance; all data should be traceable and reproducible. The output NetCDF file metadata holds a complete history of the original datasets and their transformations, and a method exists to reconstruct a configuration file from this information. Release 0.1.0 distributes with ingest methods for GOES, MODIS, VIIRS and CERES radiance data (L1) as well as select 2D atmosphere products (L2) such as aerosol and cloud (MODIS and VIIRS) and radiant flux (CERES). Future releases will provide ingest methods for ocean and land surface products, gridded and time averaged datasets (L3 Daily, Monthly and Yearly), and support for 3D products such as temperature and water vapor profiles. Emphasis will be on NPP Sensor, Environmental and

  6. PIVOT: platform for interactive analysis and visualization of transcriptomics data.

    Science.gov (United States)

    Zhu, Qin; Fisher, Stephen A; Dueck, Hannah; Middleton, Sarah; Khaladkar, Mugdha; Kim, Junhyong

    2018-01-05

    Many R packages have been developed for transcriptome analysis but their use often requires familiarity with R and integrating results of different packages requires scripts to wrangle the datatypes. Furthermore, exploratory data analyses often generate multiple derived datasets such as data subsets or data transformations, which can be difficult to track. Here we present PIVOT, an R-based platform that wraps open source transcriptome analysis packages with a uniform user interface and graphical data management that allows non-programmers to interactively explore transcriptomics data. PIVOT supports more than 40 popular open source packages for transcriptome analysis and provides an extensive set of tools for statistical data manipulations. A graph-based visual interface is used to represent the links between derived datasets, allowing easy tracking of data versions. PIVOT further supports automatic report generation, publication-quality plots, and program/data state saving, such that all analysis can be saved, shared and reproduced. PIVOT will allow researchers with broad background to easily access sophisticated transcriptome analysis tools and interactively explore transcriptome datasets.

  7. Towards a Market Entry Framework for Digital Payment Platforms

    DEFF Research Database (Denmark)

    Kazan, Erol; Damsgaard, Jan

    2016-01-01

    This study presents a framework to understand and explain the design and configuration of digital payment platforms and how these platforms create conditions for market entries. By embracing the theoretical lens of platform envelopment, we employed a multiple and comparative-case study...... in a European setting by using our framework as an analytical lens to assess market-entry conditions. We found that digital payment platforms have acquired market entry capabilities, which is achieved through strategic platform design (i.e., platform development and service distribution) and technology design...... (i.e., issuing evolutionary and revolutionary payment instruments). The studied cases reveal that digital platforms leverage payment services as a means to bridge and converge core and adjacent platform markets. In so doing, platform envelopment strengthens firms’ market position in their respective...

  8. Vertical Wave Impacts on Offshore Wind Turbine Inspection Platforms

    DEFF Research Database (Denmark)

    Bredmose, Henrik; Jacobsen, Niels Gjøl

    2011-01-01

    Breaking wave impacts on a monopile at 20 m depth are computed with a VOF (Volume Of Fluid) method. The impacting waves are generated by the second-order focused wave group technique, to obtain waves that break at the position of the monopile. The subsequent impact from the vertical run-up flow...... on a horizontal inspection platform is computed for five different platform levels. The computational results show details of monopile impact such as slamming pressures from the overturning wave front and the formation of run-up flow. The results show that vertical platform impacts can occur at 20 m water depth....... The dependence of the vertical platform load to the platform level is discussed. Attention is given to the significant downward force that occur after the upward force associated with the vertical impact. The effect of the numerical resolution on the results is assessed. The position of wave overturning is found...

  9. Integrating pipeline data management application and Google maps dataset on web based GIS application using open source technology Sharp Map and Open Layers

    Energy Technology Data Exchange (ETDEWEB)

    Wisianto, Arie; Sania, Hidayatus [PT PERTAMINA GAS, Bontang (Indonesia); Gumilar, Oki [PT PERTAMINA GAS, Jakarta (Indonesia)

    2010-07-01

    PT Pertamina Gas operates 3 pipe segments carrying natural gas from producers to PT Pupuk Kaltim in the Kalimantan area. The company wants to build a pipeline data management system consisting of pipeline facilities, inspections and risk assessments which would run on Geographic Information Systems (GIS) platforms. The aim of this paper is to present the integration of the pipeline data management system with GIS. A web based GIS application is developed using the combination of Google maps datasets with local spatial datasets. In addition, Open Layers is used to integrate pipeline data model and Google Map dataset into a single map display on Sharp Map. The GIS based pipeline data management system developed herein constitutes a low cost, powerful and efficient web based GIS solution.

  10. RARD: The Related-Article Recommendation Dataset

    OpenAIRE

    Beel, Joeran; Carevic, Zeljko; Schaible, Johann; Neusch, Gabor

    2017-01-01

    Recommender-system datasets are used for recommender-system evaluations, training machine-learning algorithms, and exploring user behavior. While there are many datasets for recommender systems in the domains of movies, books, and music, there are rather few datasets from research-paper recommender systems. In this paper, we introduce RARD, the Related-Article Recommendation Dataset, from the digital library Sowiport and the recommendation-as-a-service provider Mr. DLib. The dataset contains ...

  11. Absence of paired crossing in the positive parity bands of 124Cs

    Science.gov (United States)

    Singh, A. K.; Basu, A.; Nag, Somnath; Hübel, H.; Domscheit, J.; Ragnarsson, I.; Al-Khatib, A.; Hagemann, G. B.; Herskind, B.; Elema, D. R.; Wilson, J. N.; Clark, R. M.; Cromaz, M.; Fallon, P.; Görgen, A.; Lee, I.-Y.; Ward, D.; Ma, W. C.

    2018-02-01

    High-spin states in 124Cs were populated in the 64Ni(64Ni, p3n) reaction and the Gammasphere detector array was used to measure γ-ray coincidences. Both positive- and negative-parity bands, including bands with chiral configurations, have been extended to higher spin, where a shape change has been observed. The configurations of the bands before and after the alignment are discussed within the framework of the cranked Nilsson-Strutinsky model. The calculations suggest that the nucleus undergoes a shape transition from triaxial to prolate around spin I ≃ 22 of the positive-parity states. The alignment gain of 8ℏ, observed in the positive-parity bands, is due to partial alignment of several valence nucleons. This indicates the absence of band crossing due to paired nucleons in the bands.

  12. PIBAS FedSPARQL: a web-based platform for integration and exploration of bioinformatics datasets.

    Science.gov (United States)

    Djokic-Petrovic, Marija; Cvjetkovic, Vladimir; Yang, Jeremy; Zivanovic, Marko; Wild, David J

    2017-09-20

    There are a huge variety of data sources relevant to chemical, biological and pharmacological research, but these data sources are highly siloed and cannot be queried together in a straightforward way. Semantic technologies offer the ability to create links and mappings across datasets and manage them as a single, linked network so that searching can be carried out across datasets, independently of the source. We have developed an application called PIBAS FedSPARQL that uses semantic technologies to allow researchers to carry out such searching across a vast array of data sources. PIBAS FedSPARQL is a web-based query builder and result set visualizer of bioinformatics data. As an advanced feature, our system can detect similar data items identified by different Uniform Resource Identifiers (URIs), using a text-mining algorithm based on the processing of named entities to be used in Vector Space Model and Cosine Similarity Measures. According to our knowledge, PIBAS FedSPARQL was unique among the systems that we found in that it allows detecting of similar data items. As a query builder, our system allows researchers to intuitively construct and run Federated SPARQL queries across multiple data sources, including global initiatives, such as Bio2RDF, Chem2Bio2RDF, EMBL-EBI, and one local initiative called CPCTAS, as well as additional user-specified data source. From the input topic, subtopic, template and keyword, a corresponding initial Federated SPARQL query is created and executed. Based on the data obtained, end users have the ability to choose the most appropriate data sources in their area of interest and exploit their Resource Description Framework (RDF) structure, which allows users to select certain properties of data to enhance query results. The developed system is flexible and allows intuitive creation and execution of queries for an extensive range of bioinformatics topics. Also, the novel "similar data items detection" algorithm can be particularly
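    The "similar data items detection" described above compares named-entity text in a vector space using cosine similarity. A minimal bag-of-words sketch of that comparison (a generic illustration, not the PIBAS FedSPARQL implementation, whose named-entity processing is more involved):

    ```python
    from collections import Counter
    from math import sqrt

    def cosine_similarity(text_a, text_b):
        """Cosine similarity between bag-of-words vectors of two entity labels."""
        va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
        dot = sum(va[t] * vb[t] for t in va)
        norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
        return dot / norm if norm else 0.0

    # Two labels from different data sources that likely denote the same item
    sim = cosine_similarity("aspirin acetylsalicylic acid", "acetylsalicylic acid")
    print(round(sim, 2))  # 0.82
    ```

    Pairs whose similarity exceeds a chosen threshold would be flagged as candidate matches across URIs; real systems also normalise entity names before vectorising.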

  13. ALLocator: an interactive web platform for the analysis of metabolomic LC-ESI-MS datasets, enabling semi-automated, user-revised compound annotation and mass isotopomer ratio analysis.

    Science.gov (United States)

    Kessler, Nikolas; Walter, Frederik; Persicke, Marcus; Albaum, Stefan P; Kalinowski, Jörn; Goesmann, Alexander; Niehaus, Karsten; Nattkemper, Tim W

    2014-01-01

    Adduct formation, fragmentation events and matrix effects impose special challenges to the identification and quantitation of metabolites in LC-ESI-MS datasets. An important step in compound identification is the deconvolution of mass signals. During this processing step, peaks representing adducts, fragments, and isotopologues of the same analyte are allocated to a distinct group, in order to separate peaks from coeluting compounds. From these peak groups, neutral masses and pseudo spectra are derived and used for metabolite identification via mass decomposition and database matching. Quantitation of metabolites is hampered by matrix effects and nonlinear responses in LC-ESI-MS measurements. A common approach to correct for these effects is the addition of a U-13C-labeled internal standard and the calculation of mass isotopomer ratios for each metabolite. Here we present a new web-platform for the analysis of LC-ESI-MS experiments. ALLocator covers the workflow from raw data processing to metabolite identification and mass isotopomer ratio analysis. The integrated processing pipeline for spectra deconvolution "ALLocatorSD" generates pseudo spectra and automatically identifies peaks emerging from the U-13C-labeled internal standard. Information from the latter improves mass decomposition and annotation of neutral losses. ALLocator provides an interactive and dynamic interface to explore and enhance the results in depth. Pseudo spectra of identified metabolites can be stored in user- and method-specific reference lists that can be applied on succeeding datasets. The potential of the software is exemplified in an experiment, in which abundance fold-changes of metabolites of the l-arginine biosynthesis in C. glutamicum type strain ATCC 13032 and l-arginine producing strain ATCC 21831 are compared. Furthermore, the capability for detection and annotation of uncommon large neutral losses is shown by the identification of (γ-)glutamyl dipeptides in the same strains
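    The mass isotopomer ratio correction described above divides each analyte's unlabeled peak intensity by that of its coeluting U-13C-labeled internal standard, so matrix effects cancel. A toy sketch with invented peak intensities (not ALLocator's code):

    ```python
    def mass_isotopomer_ratio(unlabeled_intensity, labeled_intensity):
        """Ratio of an analyte's unlabeled (12C) peak to its U-13C internal-standard
        peak; both species coelute and ionize alike, so matrix effects cancel."""
        if labeled_intensity <= 0:
            raise ValueError("internal-standard intensity must be positive")
        return unlabeled_intensity / labeled_intensity

    # Abundance fold change of one metabolite between two samples, each
    # normalized to the shared internal standard (intensities are invented).
    ratio_a = mass_isotopomer_ratio(4.2e5, 2.1e5)  # sample A
    ratio_b = mass_isotopomer_ratio(8.4e5, 2.0e5)  # sample B
    print(ratio_b / ratio_a)  # fold change between samples
    ```

    Comparing ratios rather than raw intensities is what makes fold changes between, e.g., two strains robust to nonlinear LC-ESI-MS response.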

  14. ALLocator: an interactive web platform for the analysis of metabolomic LC-ESI-MS datasets, enabling semi-automated, user-revised compound annotation and mass isotopomer ratio analysis.

    Directory of Open Access Journals (Sweden)

    Nikolas Kessler

    Full Text Available Adduct formation, fragmentation events and matrix effects impose special challenges to the identification and quantitation of metabolites in LC-ESI-MS datasets. An important step in compound identification is the deconvolution of mass signals. During this processing step, peaks representing adducts, fragments, and isotopologues of the same analyte are allocated to a distinct group, in order to separate peaks from coeluting compounds. From these peak groups, neutral masses and pseudo spectra are derived and used for metabolite identification via mass decomposition and database matching. Quantitation of metabolites is hampered by matrix effects and nonlinear responses in LC-ESI-MS measurements. A common approach to correct for these effects is the addition of a U-13C-labeled internal standard and the calculation of mass isotopomer ratios for each metabolite. Here we present a new web-platform for the analysis of LC-ESI-MS experiments. ALLocator covers the workflow from raw data processing to metabolite identification and mass isotopomer ratio analysis. The integrated processing pipeline for spectra deconvolution "ALLocatorSD" generates pseudo spectra and automatically identifies peaks emerging from the U-13C-labeled internal standard. Information from the latter improves mass decomposition and annotation of neutral losses. ALLocator provides an interactive and dynamic interface to explore and enhance the results in depth. Pseudo spectra of identified metabolites can be stored in user- and method-specific reference lists that can be applied on succeeding datasets. The potential of the software is exemplified in an experiment, in which abundance fold-changes of metabolites of the l-arginine biosynthesis in C. glutamicum type strain ATCC 13032 and l-arginine producing strain ATCC 21831 are compared. Furthermore, the capability for detection and annotation of uncommon large neutral losses is shown by the identification of (γ-)glutamyl dipeptides in

  15. Isfahan MISP Dataset.

    Science.gov (United States)

    Kashefpur, Masoud; Kafieh, Rahele; Jorjandi, Sahar; Golmohammadi, Hadis; Khodabande, Zahra; Abbasi, Mohammadreza; Teifuri, Nilufar; Fakharzadeh, Ali Akbar; Kashefpoor, Maryam; Rabbani, Hossein

    2017-01-01

    An online depository was introduced to share clinical ground truth with the public and provide open access for researchers to evaluate their computer-aided algorithms. PHP was used for web programming and MySQL for database managing. The website was entitled "biosigdata.com." It was a fast, secure, and easy-to-use online database for medical signals and images. Freely registered users could download the datasets and could also share their own supplementary materials while maintaining their privacies (citation and fee). Commenting was also available for all datasets, and automatic sitemap and semi-automatic SEO indexing have been set for the site. A comprehensive list of available websites for medical datasets is also presented as a Supplementary (http://journalonweb.com/tempaccess/4800.584.JMSS_55_16I3253.pdf).

  16. Genesis and Evolution of Digital Payment Platforms

    DEFF Research Database (Denmark)

    Hjelholt, Morten; Damsgaard, Jan

    2012-01-01

    Payment transactions through the use of physical coins, bank notes or credit cards have for centuries been the standard formats of exchanging money. Recently, online and mobile digital payment platforms have entered the stage as contenders to this position and could possibly penetrate societies...... thoroughly and substitute current payment standards in the decades to come. This paper portrays how digital payment platforms evolve in socio-technical niches and how various technological platforms aim for institutional attention in their attempt to challenge earlier platforms and standards. The paper...... applies a co-evolutionary multilevel perspective to model the interplay and processes between technology and society wherein digital payment platforms potentially will substitute other payment platforms just like the credit card negated the check. On this basis this paper formulates a multilevel conceptual...

  17. Basin Assessment Spatial Planning Platform

    Energy Technology Data Exchange (ETDEWEB)

    2017-07-26

    The tool is intended to facilitate hydropower development and water resource planning by improving synthesis and interpretation of disparate spatial datasets that are considered in development actions (e.g., hydrological characteristics, environmentally and culturally sensitive areas, existing or proposed water power resources, climate-informed forecasts). The tool enables this capability by providing a unique framework for assimilating, relating, summarizing, and visualizing disparate spatial data through the use of spatial aggregation techniques, relational geodatabase platforms, and an interactive web-based Geographic Information Systems (GIS). Data are aggregated and related based on shared intersections with a common spatial unit; in this case, industry-standard hydrologic drainage areas for the U.S. (National Hydrography Dataset) are used as the spatial unit to associate planning data. This process is performed using all available scalar delineations of drainage areas (i.e., region, sub-region, basin, sub-basin, watershed, sub-watershed, catchment) to create spatially hierarchical relationships among planning data and drainages. These entity-relationships are stored in a relational geodatabase that provides back-end structure to the web GIS and its widgets. The full technology stack was built using all open-source software in modern programming languages. Interactive widgets that function within the viewport are also compatible with all modern browsers.
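    The hierarchical roll-up described above exploits the fact that hydrologic unit codes (HUCs) encode the drainage hierarchy in their digit prefixes (2 digits for a region, 6 for a basin, 8 for a sub-basin, and so on). A toy aggregation sketch, with invented planning values, of how spatially hierarchical relationships let data summarize upward:

    ```python
    from collections import defaultdict

    # Hypothetical planning metric keyed by 8-digit sub-basin HUC
    # (codes follow the National Hydrography Dataset convention; values invented)
    subbasin_values = {
        "02070002": 3.5,
        "02070008": 1.5,
        "02080101": 2.0,
    }

    def roll_up(values, prefix_len):
        """Aggregate sub-basin values to a coarser hydrologic level by HUC prefix."""
        totals = defaultdict(float)
        for huc, v in values.items():
            totals[huc[:prefix_len]] += v
        return dict(totals)

    print(roll_up(subbasin_values, 6))  # basin level
    print(roll_up(subbasin_values, 2))  # region level
    ```

    In the real tool these relationships live in a relational geodatabase rather than a dictionary, but the prefix-based hierarchy is the same idea.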

  18. Together We Innovate: Cross-Cultural Teamwork through Virtual Platforms

    Science.gov (United States)

    Duus, Rikke; Cooray, Muditha

    2014-01-01

    In a global business environment, marketing education must support students to develop cross-cultural agility and adeptness with an aim to enhance their employability. This article contributes with an experiential cross-cultural exercise that enables students to develop new enterprises in collaboration with other students in a different country…

  19. Specialized food composition dataset for vitamin D content in foods based on European standards: Application to dietary intake assessment.

    Science.gov (United States)

    Milešević, Jelena; Samaniego, Lourdes; Kiely, Mairead; Glibetić, Maria; Roe, Mark; Finglas, Paul

    2018-02-01

    A review of national nutrition surveys from 2000 to date demonstrated a high prevalence of vitamin D intakes below the EFSA Adequate Intake (AI) for vitamin D in adults across Europe. Dietary assessment and modelling are required to monitor the efficacy and safety of ongoing strategic vitamin D fortification. To support these studies, a specialized vitamin D food composition dataset, based on EuroFIR standards, was compiled. The FoodEXplorer™ tool was used to retrieve well-documented analytical data for vitamin D and arrange the data into two datasets - European (8 European countries, 981 data values) and US (1836 data values). Data were classified, using the LanguaL™, FoodEX2 and ODIN classification systems, and ranked according to quality criteria. Significant differences in the content, quality of data values, missing data on vitamin D2 and 25(OH)D3, and documentation of analytical methods were observed. The dataset is available through the EuroFIR platform. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Barriers and Bridges to Positive Cross-Ethnic Relations: African American and White Parent Socialization Beliefs and Practices.

    Science.gov (United States)

    Hamm, Jill V.

    2001-01-01

    Using interviews and focus groups, lower and middle socioeconomic status (SES) African American parents and middle SES white parents discussed their objectives regarding cross-ethnic relations and how they helped their children forge positive cross-ethnic relations. The groups relied on different methods to promote socialization. Parents' efforts…

  1. Open University Learning Analytics dataset.

    Science.gov (United States)

    Kuzilek, Jakub; Hlosta, Martin; Zdrahal, Zdenek

    2017-11-28

    Learning Analytics focuses on the collection and analysis of learners' data to improve their learning experience by providing informed guidance and to optimise learning materials. To support the research in this area we have developed a dataset, containing data from courses presented at the Open University (OU). What makes the dataset unique is the fact that it contains demographic data together with aggregated clickstream data of students' interactions in the Virtual Learning Environment (VLE). This enables the analysis of student behaviour, represented by their actions. The dataset contains the information about 22 courses, 32,593 students, their assessment results, and logs of their interactions with the VLE represented by daily summaries of student clicks (10,655,280 entries). The dataset is freely available at https://analyse.kmi.open.ac.uk/open_dataset under a CC-BY 4.0 license.
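    The daily click summaries described above can be collapsed into per-student engagement totals with a few lines of code. A sketch assuming records shaped like the dataset's published VLE interaction fields (`id_student`, `date`, `sum_click`); the rows themselves are invented:

    ```python
    from collections import defaultdict

    # Daily VLE click summaries in the shape of the OU dataset's interaction
    # records (field names follow the published schema; values are invented).
    rows = [
        {"id_student": 11391, "date": 0, "sum_click": 4},
        {"id_student": 11391, "date": 1, "sum_click": 7},
        {"id_student": 28400, "date": 0, "sum_click": 2},
    ]

    def total_clicks_per_student(records):
        """Collapse daily click summaries into one engagement total per student."""
        totals = defaultdict(int)
        for r in records:
            totals[r["id_student"]] += r["sum_click"]
        return dict(totals)

    print(total_clicks_per_student(rows))  # {11391: 11, 28400: 2}
    ```

    Joining such totals with the demographic and assessment tables is what enables the behaviour analyses the abstract describes.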

  2. The largest human cognitive performance dataset reveals insights into the effects of lifestyle factors and aging

    Directory of Open Access Journals (Sweden)

    Daniel A Sternberg

    2013-06-01

    Full Text Available Making new breakthroughs in understanding the processes underlying human cognition may depend on the availability of very large datasets that have not historically existed in psychology and neuroscience. Lumosity is a web-based cognitive training platform that has grown to include over 600 million cognitive training task results from over 35 million individuals, comprising the largest existing dataset of human cognitive performance. As part of the Human Cognition Project, Lumosity’s collaborative research program to understand the human mind, Lumos Labs researchers and external research collaborators have begun to explore this dataset in order to uncover novel insights about the correlates of cognitive performance. This paper presents two preliminary demonstrations of some of the kinds of questions that can be examined with the dataset. The first example focuses on replicating known findings relating lifestyle factors to baseline cognitive performance in a demographically diverse, healthy population at a much larger scale than has previously been available. The second example examines a question that would likely be very difficult to study in laboratory-based and existing online experimental research approaches: specifically, how learning ability for different types of cognitive tasks changes with age. We hope that these examples will provoke the imagination of researchers who are interested in collaborating to answer fundamental questions about human cognitive performance.

  3. Cross-Platform Learning Media Development of Software Installation on Computer Engineering and Networking Expertise Package

    Directory of Open Access Journals (Sweden)

    Afis Pratama

    2018-03-01

    Full Text Available Software installation is one of the important lessons that must be mastered by students of the computer and network engineering expertise package. However, students often show a lack of attention and concentration during teaching and learning on the subject of software installation, a problem that demands a solution. This research builds on continually advancing technology, which can be used as a tool to support learning activities. Currently, all grade 10 students in public vocational high school (SMK) 8 Semarang, Indonesia already have a gadget, either a smartphone or a laptop, and the intensity of usage is high. Based on this phenomenon, this research aims to create cross-platform learning media for software installation, practical media that can easily be carried on a smartphone or a laptop running a different operating system. This media is thus expected to improve the learning outcomes, understanding and enthusiasm of students in the software installation lesson.

  4. Positive Psychology in Cross-Cultural Narratives: Mexican Students Discover Themselves While Learning Chinese

    Science.gov (United States)

    Oxford, Rebecca L.; Cuéllar, Lourdes

    2014-01-01

    Using the principles of positive psychology and the tools of narrative research, this article focuses on the psychology of five language learners who crossed cultural and linguistic borders. All five were university students learning Chinese in Mexico, and two of them also studied Chinese in China. The grounded theory approach was used to analyze…

  5. 33 CFR 147.809 - Mars Tension Leg Platform safety zone.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Mars Tension Leg Platform safety... SECURITY (CONTINUED) OUTER CONTINENTAL SHELF ACTIVITIES SAFETY ZONES § 147.809 Mars Tension Leg Platform safety zone. (a) Description. The Mars Tension Leg Platform (Mars TLP) is located at position 28°10′10.29...

  6. GRIP: A web-based system for constructing Gold Standard datasets for protein-protein interaction prediction

    Directory of Open Access Journals (Sweden)

    Zheng Huiru

    2009-01-01

    Full Text Available Abstract Background Information about protein interaction networks is fundamental to understanding protein function and cellular processes. Interaction patterns among proteins can suggest new drug targets and aid in the design of new therapeutic interventions. Efforts have been made to map interactions on a proteomic-wide scale using both experimental and computational techniques. Reference datasets that contain known interacting proteins (positive cases) and non-interacting proteins (negative cases) are essential to support computational prediction and validation of protein-protein interactions. Information on known interacting and non-interacting proteins is usually stored within databases. Extraction of these data can be both complex and time consuming. Although the automatic construction of reference datasets for classification is a useful resource for researchers, no public resource currently exists to perform this task. Results GRIP (Gold Reference dataset constructor from Information on Protein complexes) is a web-based system that provides researchers with the functionality to create reference datasets for protein-protein interaction prediction in Saccharomyces cerevisiae. Both positive and negative cases for a reference dataset can be extracted, organised and downloaded by the user. GRIP also provides an upload facility whereby users can submit proteins to determine protein complex membership. A search facility is provided where a user can search for protein complex information in Saccharomyces cerevisiae. Conclusion GRIP is developed to retrieve information on protein complexes, cellular localisation, and physical and genetic interactions in Saccharomyces cerevisiae. Manual construction of reference datasets can be a time consuming process requiring programming knowledge. GRIP simplifies and speeds up this process by allowing users to automatically construct reference datasets.
GRIP is free to access at http://rosalind.infj.ulst.ac.uk/GRIP/.
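    One common convention for building such reference sets from complex membership is to take co-members of a complex as positives and cross-complex pairs as negatives. A toy sketch of that construction (the complexes and protein names here are invented, and this is an illustration of the general approach, not GRIP's code):

    ```python
    from itertools import combinations

    # Toy protein-complex membership (names are invented)
    complexes = {
        "C1": ["YAL001C", "YBR123W", "YCR042C"],
        "C2": ["YDL140C", "YER148W"],
    }

    def reference_sets(complex_map):
        """Positives: pairs within one complex. Negatives: pairs spanning complexes."""
        positives = set()
        for members in complex_map.values():
            positives.update(frozenset(p) for p in combinations(sorted(members), 2))
        all_proteins = sorted({p for ms in complex_map.values() for p in ms})
        negatives = {frozenset(p) for p in combinations(all_proteins, 2)} - positives
        return positives, negatives

    pos, neg = reference_sets(complexes)
    print(len(pos), len(neg))  # 4 6
    ```

    Using frozensets makes each pair order-independent, so A-B and B-A count once; real gold standards refine the negative set further, e.g. by requiring different cellular localisation.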

  7. Visualization of conserved structures by fusing highly variable datasets.

    Science.gov (United States)

    Silverstein, Jonathan C; Chhadia, Ankur; Dech, Fred

    2002-01-01

    Reality (VR) environment. The accuracy of the fusions was determined qualitatively by comparing the transformed atlas overlaid on the appropriate CT. It was examined for where the transformed structure atlas was incorrectly overlaid (false positive) and where it was incorrectly not overlaid (false negative). According to this method, fusions 1 and 2 were correct roughly 50-75% of the time, while fusions 3 and 4 were correct roughly 75-100%. The CT dataset, augmented with the transformed dataset, was viewed arbitrarily in user-centered perspective stereo, taking advantage of features such as scaling, windowing and volumetric region of interest selection. This process of auto-coloring conserved structures in variable datasets is a step toward the goal of a broader, standardized automatic structure visualization method for radiological data. If successful, it would permit identification, visualization or deletion of structures in radiological data by semi-automatically applying canonical structure information to the radiological data (not just processing and visualization of the data's intrinsic dynamic range). More sophisticated selection of control points and patterns of warping may allow for more accurate transforms, and thus advances in visualization, simulation, education, diagnostics, and treatment planning.

  8. Application of GNSS Methods for Monitoring Offshore Platform Deformation

    Science.gov (United States)

    Myint, Khin Cho; Nasir Matori, Abd; Gohari, Adel

    2018-03-01

    Global Navigation Satellite System (GNSS) has become a powerful tool for high-precision deformation monitoring applications, including monitoring of deformation and subsidence of offshore platforms due to factors such as shallow gas phenomena. GNSS refers to the technical interoperability and compatibility between various satellite navigation systems, such as the modernized GPS, Galileo and the reconstructed GLONASS, for use by civilian users. It has been known that excessive deformation affects a platform structurally, causing loss of production and affecting the efficiency of the machinery on board the platform. GNSS has been proven to be one of the most precise positioning methods, whereby users can get accuracy to the nearest centimeter of a given position from carrier phase measurement processing of GPS signals. This research is aimed at using the GNSS technique, one of the most standard methods, to monitor the deformation of offshore platforms. Therefore, station modeling, which accounts for spatially correlated errors and hence speeds up the ambiguity resolution process, is employed. It was found that GNSS combines high accuracy in monitoring offshore platform deformation with the possibility of survey.
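    With centimeter-level carrier-phase solutions, platform movement between monitoring epochs reduces to the displacement between successive position fixes. A toy sketch with invented Earth-centered (ECEF) coordinates, illustrating the arithmetic only:

    ```python
    from math import sqrt

    def displacement_mm(p0, p1):
        """Euclidean displacement (mm) between two ECEF positions given in metres,
        e.g. successive GNSS solutions of a platform monitoring point."""
        return 1000.0 * sqrt(sum((a - b) ** 2 for a, b in zip(p0, p1)))

    # Invented coordinates simulating a 12 mm settlement between two epochs
    epoch0 = (-2432500.000, 4799800.000, 3186300.000)
    epoch1 = (-2432500.000, 4799800.000, 3186299.988)
    print(round(displacement_mm(epoch0, epoch1), 1))  # 12.0
    ```

    A real monitoring pipeline would first resolve carrier-phase ambiguities and filter out spatially correlated errors, as the abstract notes; only then are epoch-to-epoch differences meaningful at this scale.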

  9. False positive reduction in protein-protein interaction predictions using gene ontology annotations

    Directory of Open Access Journals (Sweden)

    Lin Yen-Han

    2007-07-01

    Full Text Available Abstract Background Many crucial cellular operations such as metabolism, signalling, and regulation are based on protein-protein interactions. However, the lack of robust protein-protein interaction information is a challenge. One reason for the lack of solid protein-protein interaction information is poor agreement between experimental findings and computational sets that, in turn, comes from huge false positive predictions in computational approaches. Reduction of false positive predictions and enhancing the true positive fraction of computationally predicted protein-protein interaction datasets based on highly confident experimental results has not been adequately investigated. Results Gene Ontology (GO) annotations were used to reduce false positive protein-protein interaction (PPI) pairs resulting from computational predictions. Using experimentally obtained PPI pairs as a training dataset, eight top-ranking keywords were extracted from GO molecular function annotations. The sensitivity of these keywords is 64.21% in the yeast experimental dataset and 80.83% in the worm experimental dataset. The specificities, a measure of recovery power, of these keywords applied to four predicted PPI datasets for each studied organism are 48.32% and 46.49% (by average of four datasets) in yeast and worm, respectively. Based on the eight top-ranking keywords and co-localization of interacting proteins, a set of two knowledge rules was deduced and applied to remove false positive protein pairs. The 'strength', a measure of improvement provided by the rules, was defined based on the signal-to-noise ratio and implemented to measure the applicability of the knowledge rules to the predicted PPI datasets. Depending on the employed PPI-predicting methods, the strength varies between two and ten-fold of randomly removing protein pairs from the datasets. Conclusion Gene Ontology annotations along with the deduced knowledge rules could be implemented to partially
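    A keyword-based knowledge rule of the kind described above keeps a predicted pair only when both partners share a top-ranking GO annotation. A minimal sketch of that filter (the keyword set and protein annotations here are invented, not the paper's actual eight keywords):

    ```python
    # Hypothetical top-ranking GO molecular-function keywords (invented)
    TOP_KEYWORDS = {"binding", "kinase", "transferase"}

    # Hypothetical GO annotations per protein
    annotations = {
        "P1": {"binding", "catalytic"},
        "P2": {"binding"},
        "P3": {"transporter"},
    }

    def passes_rule(a, b, anno, keywords=TOP_KEYWORDS):
        """True if both proteins carry a common top-ranking GO keyword."""
        shared = anno.get(a, set()) & anno.get(b, set())
        return bool(shared & keywords)

    predicted = [("P1", "P2"), ("P1", "P3")]
    filtered = [pair for pair in predicted if passes_rule(*pair, annotations)]
    print(filtered)  # P1-P3 is dropped as a likely false positive
    ```

    The paper's second rule, co-localization, would be an analogous set-intersection test on cellular-component annotations; the two rules together trade some sensitivity for a cleaner predicted set.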

  10. Integrative analysis of multiple diverse omics datasets by sparse group multitask regression

    Directory of Open Access Journals (Sweden)

    Dongdong eLin

    2014-10-01

    Full Text Available A variety of high throughput genome-wide assays enable the exploration of genetic risk factors underlying complex traits. Although these studies have remarkable impact on identifying susceptible biomarkers, they suffer from issues such as limited sample size and low reproducibility. Combining individual studies of different genetic levels/platforms has the promise to improve the power and consistency of biomarker identification. In this paper, we propose a novel integrative method, namely sparse group multitask regression, for integrating diverse omics datasets, platforms and populations to identify risk genes/factors of complex diseases. This method combines multitask learning with sparse group regularization, which will: (1) treat the biomarker identification in each single study as a task and then combine them by multitask learning; (2) group variables from all studies for identifying significant genes; (3) enforce a sparse constraint on groups of variables to overcome the ‘small sample, but large variables’ problem. We introduce two sparse group penalties, sparse group lasso and sparse group ridge, in our multitask model, and provide an effective algorithm for each model. In addition, we propose a significance test for the identification of potential risk genes. Two simulation studies are performed to evaluate the performance of our integrative method by comparing it with the conventional meta-analysis method. The results show that our sparse group multitask method outperforms the meta-analysis method significantly. In an application to our osteoporosis studies, 7 genes are identified as significant genes by our method and are found to have significant effects in three other independent studies for validation. The most significant gene SOD2 has been identified in our previous osteoporosis study involving the same expression dataset. Several other genes such as TREML2, HTR1E and GLO1 are shown to be novel susceptible genes for osteoporosis, as confirmed
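    The sparse group lasso penalty behind the method combines an elementwise L1 term (sparsity within groups) with a sum of group-wise L2 norms (sparsity of whole groups). A generic sketch of evaluating that penalty for a coefficient matrix whose rows are gene groups and columns are studies/tasks; this illustrates the standard formulation, not the authors' exact objective or optimization algorithm:

    ```python
    import numpy as np

    def sparse_group_penalty(W, alpha=0.5, lam=1.0):
        """Sparse group lasso penalty for coefficient matrix W (rows = groups,
        columns = tasks): lam * (alpha * ||W||_1 + (1 - alpha) * sum_g ||W_g||_2)."""
        l1 = np.abs(W).sum()                      # within-group sparsity term
        group = np.linalg.norm(W, axis=1).sum()   # group-level sparsity term
        return lam * (alpha * l1 + (1 - alpha) * group)

    W = np.array([[3.0, 4.0],   # one gene (group) active across two studies
                  [0.0, 0.0]])  # a zeroed-out group contributes nothing
    print(sparse_group_penalty(W))  # 0.5*7 + 0.5*5 = 6.0
    ```

    During fitting, this penalty is added to the multitask least-squares loss; groups whose combined norm is driven to zero drop out entirely, which is what yields gene-level selection across studies.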

  11. Climate News Across Media Platforms

    DEFF Research Database (Denmark)

    Eskjær, Mikkel Fugl

    2015-01-01

    In a changing media landscape marked by technological, institutional and cultural convergence, comparative and cross-media content analysis represents a valuable analytical tool in mapping the diverse channels of climate change communication. This paper presents a comparative study of climate...... quantitative and qualitative content analysis the paper documents and explores the extent and character of climate change news across different media platforms. The study aims at contributing to the on-going assessment of how news media are addressing climate change at a time when old and new media...... change news on five different media platforms: newspapers, television, radio, web-news and mobile news. It investigates the themes and actors represented in public climate change communication as well as the diverse possibilities of participating in public debates and information sharing. By combining...

  12. Coupled sensor/platform control design for low-level chemical detection with position-adaptive micro-UAVs

    Science.gov (United States)

    Goodwin, Thomas; Carr, Ryan; Mitra, Atindra K.; Selmic, Rastko R.

    2009-05-01

    We discuss the development of Position-Adaptive Sensors [1] for purposes of detecting embedded chemical substances in challenging environments. This concept is a generalization of patented Position-Adaptive Radar Concepts developed at AFRL for challenging conditions such as urban environments. For purposes of investigating the detection of chemical substances using multiple MAV (Micro-UAV) platforms, we have designed and implemented an experimental testbed with sample structures such as wooden carts that contain controlled leakage points. Under this general concept, some of the members of a MAV swarm can serve as external position-adaptive "transmitters" by blowing air over the cart and some of the members of a MAV swarm can serve as external position-adaptive "receivers" that are equipped with chemical or biological (chem/bio) sensors that function as "electronic noses". The objective can be defined as improving the particle count of chem/bio concentrations that impinge on a MAV-based position-adaptive sensor that surrounds a chemical repository, such as a cart, via the development of intelligent position-adaptive control algorithms. The overall effect is to improve the detection and false-alarm statistics of the overall system. Within the major sections of this paper, we discuss a number of different aspects of developing our initial MAV-Based Sensor Testbed. This testbed includes blowers to simulate position-adaptive excitations and a MAV from Draganfly Innovations Inc. with stable design modifications to accommodate our chem/bio sensor boom design. We include details with respect to several critical phases of the development effort including development of the wireless sensor network and experimental apparatus, development of the stable sensor boom for the MAV, integration of chem/bio sensors and sensor node onto the MAV and boom, development of position-adaptive control algorithms and initial tests at IDCAST (Institute for the Development and

  13. Assembly procedure for column cutting platform

    International Nuclear Information System (INIS)

    Routh, R.D.

    1995-01-01

    This supporting document describes the assembly procedure for the Column Cutting Platform and Elevation Support. The Column Cutting Platform is a component of the 241-SY-101 Equipment Removal System. It is set up on the deck of the Strongback Trailer to provide work access to cut off the upper portion of the Mitigation Pump Assembly (MPA). The Elevation Support provides support for the front of the Storage Container with the Strongback in an inclined position. The upper portion of the MPA must be cut off to install the Containment Caps on the Storage Container. The Storage Container must be maintained in an inclined position until the Containment Caps are installed to prevent any residual liquids from migrating forward in the Storage Container

  14. Low-loss compact multilayer silicon nitride platform for 3D photonic integrated circuits.

    Science.gov (United States)

    Shang, Kuanping; Pathak, Shibnath; Guan, Binbin; Liu, Guangyao; Yoo, S J B

    2015-08-10

    We design, fabricate, and demonstrate a silicon nitride (Si₃N₄) multilayer platform optimized for low-loss and compact multilayer photonic integrated circuits. The designed platform, with a 200 nm thick waveguide core and 700 nm interlayer gap, is compatible with active thermal tuning and applicable to realizing compact photonic devices such as arrayed waveguide gratings (AWGs). We achieve ultra-low-loss vertical couplers with 0.01 dB coupling loss, multilayer crossing loss of 0.167 dB at a 90° crossing angle, 50 μm bending radius, 100 × 2 μm² footprint, lateral misalignment tolerance up to 400 nm, and less than -52 dB interlayer crosstalk at 1550 nm wavelength. Based on the designed platform, we demonstrate a 27 × 32 × 2 multilayer star coupler.
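Because insertion losses expressed in dB simply add along an optical path, the component figures reported in this abstract translate directly into a loss budget. A toy calculation, where the routing (numbers of transitions and crossings) is hypothetical and only the 0.01 dB and 0.167 dB per-component figures come from the abstract:

```python
# Reported per-component losses for the multilayer Si3N4 platform
vertical_coupler_db = 0.01   # per interlayer transition (reported)
crossing_db = 0.167          # per 90-degree multilayer crossing (reported)

def path_loss_db(n_transitions, n_crossings):
    """Total excess loss in dB for a hypothetical route: dB losses add."""
    return n_transitions * vertical_coupler_db + n_crossings * crossing_db

# e.g. a signal that changes layers twice and crosses four waveguides
total = path_loss_db(2, 4)           # 0.02 + 0.668 = 0.688 dB
fraction_transmitted = 10 ** (-total / 10)
```

Even with several layer changes and crossings, the budget stays well under 1 dB, which is the practical point of the ultra-low-loss couplers the authors report.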

  15. BLAST-EXPLORER helps you building datasets for phylogenetic analysis

    Directory of Open Access Journals (Sweden)

    Claverie Jean-Michel

    2010-01-01

    Full Text Available Abstract Background The right sampling of homologous sequences for phylogenetic or molecular evolution analyses is a crucial step, the quality of which can have a significant impact on the final interpretation of the study. There is no single way of constructing datasets suitable for phylogenetic analysis, because this task intimately depends on the scientific question to be addressed. Moreover, database mining software such as BLAST, which is routinely used for searching homologous sequences, is not specifically optimized for this task. Results To fill this gap, we designed BLAST-Explorer, an original and friendly web-based application that combines a BLAST search with a suite of tools that allows interactive, phylogenetic-oriented exploration of the BLAST results and flexible selection of homologous sequences among the BLAST hits. Once the selection of the BLAST hits is done using BLAST-Explorer, the corresponding sequences can be imported locally for external analysis or passed to the phylogenetic tree reconstruction pipelines available on the Phylogeny.fr platform. Conclusions BLAST-Explorer provides a simple, intuitive and interactive graphical representation of the BLAST results and allows selection and retrieval of the BLAST hit sequences based on a wide range of criteria. Although BLAST-Explorer primarily aims at helping the construction of sequence datasets for further phylogenetic study, it can also be used as a standard BLAST server with enriched output. BLAST-Explorer is available at http://www.phylogeny.fr
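The hit-selection step that BLAST-Explorer makes interactive can be approximated offline. The sketch below is not BLAST-Explorer's code; it is a stand-in for the kind of filtering it offers, applied to standard BLAST tabular output (-outfmt 6: qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore). The thresholds are illustrative defaults.

```python
def select_hits(lines, min_identity=40.0, max_evalue=1e-5, max_per_query=50):
    """Keep subject IDs whose hits pass identity/e-value filters,
    capped per query so no single query dominates the dataset."""
    kept, counts = [], {}
    for line in lines:
        f = line.rstrip("\n").split("\t")
        qseqid, sseqid = f[0], f[1]
        pident, evalue = float(f[2]), float(f[10])
        if pident < min_identity or evalue > max_evalue:
            continue
        if counts.get(qseqid, 0) >= max_per_query:
            continue
        counts[qseqid] = counts.get(qseqid, 0) + 1
        kept.append(sseqid)
    return kept
```

The selected IDs would then be used to fetch full sequences for tree building, which is what BLAST-Explorer hands to the Phylogeny.fr pipelines.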

  16. NCI's Transdisciplinary High Performance Scientific Data Platform

    Science.gov (United States)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access this data: through the NCI supercomputer; through a private cloud that supports both domain-focused virtual laboratories and in-common interactive analysis interfaces; and remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  17. 33 CFR 147.839 - Mad Dog Truss Spar Platform safety zone.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Mad Dog Truss Spar Platform... SECURITY (CONTINUED) OUTER CONTINENTAL SHELF ACTIVITIES SAFETY ZONES § 147.839 Mad Dog Truss Spar Platform safety zone. (a) Description. Mad Dog Truss Spar Platform, Green Canyon 782 (GC 782), located at position...

  18. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Science.gov (United States)

    Yazar, Seyhan; Gooden, George E C; Mackey, David A; Hewitt, Alex W

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.
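The paper reports percentage differences with 95% confidence intervals. A generic percentile-bootstrap sketch of how such an interval can be obtained is shown below; the timing replicates here are hypothetical, not the study's data, and the study's own CI method is not specified in the abstract.

```python
import random

def pct_diff(a, b):
    """Percentage by which a exceeds b ("a is X% more expensive than b")."""
    return 100.0 * (a - b) / b

def bootstrap_ci(xs, ys, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for pct_diff of the two sample means."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        xs_b = [rng.choice(xs) for _ in xs]   # resample with replacement
        ys_b = [rng.choice(ys) for _ in ys]
        stats.append(pct_diff(sum(xs_b) / len(xs_b), sum(ys_b) / len(ys_b)))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# hypothetical wall-clock replicates in hours (NOT the paper's measurements)
emr = [7.9, 8.3, 8.1, 8.4, 8.0]
gce = [5.2, 5.4, 5.1, 5.3, 5.5]
lo, hi = bootstrap_ci(emr, gce)
```

With the toy numbers above the interval sits around a ~50% difference, mimicking the shape (point estimate plus 95% CI) of the comparisons reported in the abstract.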

  19. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Directory of Open Access Journals (Sweden)

    Seyhan Yazar

    Full Text Available A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  20. VarB Plus: An Integrated Tool for Visualization of Genome Variation Datasets

    KAUST Repository

    Hidayah, Lailatul

    2012-07-01

    Research on genomic sequences has been improving significantly as more advanced technology for sequencing has been developed. This opens enormous opportunities for sequence analysis. Various analytical tools have been built for purposes such as sequence assembly, read alignments, genome browsing, comparative genomics, and visualization. From the visualization perspective, there is an increasing trend towards use of large-scale computation. However, more than power is required to produce an informative image. This is a challenge that we address by providing several ways of representing biological data in order to advance the inference endeavors of biologists. This thesis focuses on visualization of variations found in genomic sequences. We develop several visualization functions and embed them in an existing variation visualization tool as extensions. The tool we improved is named VarB, hence the nomenclature for our enhancement is VarB Plus. To the best of our knowledge, besides VarB, there is no tool that provides the capability of dynamic visualization of genome variation datasets as well as statistical analysis. Dynamic visualization allows users to toggle different parameters on and off and see the results on the fly. The statistical analysis includes Fixation Index, Relative Variant Density, and Tajima’s D. Hence we focused our efforts on this tool. The scope of our work includes plots of per-base genome coverage, Principal Coordinate Analysis (PCoA), integration with a read alignment viewer named LookSeq, and visualization of geo-biological data. In addition to description of embedded functionalities, significance, and limitations, future improvements are discussed. The result is four extensions embedded successfully in the original tool, which is built on the Qt framework in C++. Hence it is portable to numerous platforms. Our extensions have shown acceptable execution time in beta testing with various high-volume published datasets, as well as positive
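Of the statistics listed in this abstract, Tajima's D has a compact closed form. The sketch below uses the standard constants from Tajima (1989); VarB Plus's own C++ implementation is not shown in the abstract, so this is only an illustration of the statistic it visualizes.

```python
def tajimas_d(n, S, pi):
    """Tajima's D from sample size n, number of segregating sites S,
    and mean pairwise differences pi (constants per Tajima 1989)."""
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i ** 2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n * n + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
    e1 = c1 / a1
    e2 = c2 / (a1 * a1 + a2)
    # difference between the two theta estimators, standardised
    return (pi - S / a1) / (e1 * S + e2 * S * (S - 1)) ** 0.5
```

D is negative when pairwise diversity falls below the Watterson expectation S/a1 (e.g. after a sweep or expansion) and positive when it exceeds it, which is the interpretation such a genome-variation browser would surface alongside the plot.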

  1. Design of e-Science platform for biomedical imaging research cross multiple academic institutions and hospitals

    Science.gov (United States)

    Zhang, Jianguo; Zhang, Kai; Yang, Yuanyuan; Ling, Tonghui; Wang, Tusheng; Wang, Mingqing; Hu, Haibo; Xu, Xuemin

    2012-02-01

    More and more image informatics researchers and engineers are considering to re-construct imaging and informatics infrastructure or to build new framework to enable multiple disciplines of medical researchers, clinical physicians and biomedical engineers working together in a secured, efficient, and transparent cooperative environment. In this presentation, we show an outline and our preliminary design work of building an e-Science platform for biomedical imaging and informatics research and application in Shanghai. We will present our consideration and strategy on designing this platform, and preliminary results. We also will discuss some challenges and solutions in building this platform.

  2. PODAAC-CCF35-01AM1

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is derived under the Cross-Calibrated Multi-Platform (CCMP) project and contains a value-added monthly mean ocean surface wind and pseudostress to...

  3. Enabling systematic interrogation of protein-protein interactions in live cells with a versatile ultra-high-throughput biosensor platform | Office of Cancer Genomics

    Science.gov (United States)

    The vast datasets generated by next generation gene sequencing and expression profiling have transformed biological and translational research. However, technologies to produce large-scale functional genomics datasets, such as high-throughput detection of protein-protein interactions (PPIs), are still in early development. While a number of powerful technologies have been employed to detect PPIs, a singular PPI biosensor platform featured with both high sensitivity and robustness in a mammalian cell environment remains to be established.

  4. THE PERFORMANCE ANALYSIS OF A UAV BASED MOBILE MAPPING SYSTEM PLATFORM

    Directory of Open Access Journals (Sweden)

    M. L. Tsai

    2013-08-01

    Full Text Available To facilitate applications such as environment detection or disaster monitoring, the development of rapid low cost systems for collecting near real-time spatial information is very critical. Rapid spatial information collection has become an emerging trend for remote sensing and mapping applications. This study develops a Direct Georeferencing (DG) based fixed-wing Unmanned Aerial Vehicle (UAV) photogrammetric platform where an Inertial Navigation System (INS)/Global Positioning System (GPS) integrated Positioning and Orientation System (POS) system is implemented to provide the DG capability of the platform. The performance verification indicates that the proposed platform can capture aerial images successfully. A flight test is performed to verify the positioning accuracy in DG mode without using Ground Control Points (GCP). The preliminary results illustrate that horizontal DG positioning accuracies in the x and y axes are around 5 m with 300 m flight height. The positioning accuracy in the z axis is less than 10 m. Such accuracy is good for near real-time disaster relief. The DG ready function of proposed platform guarantees mapping and positioning capability even in GCP free environments, which is very important for rapid urgent response for disaster relief. Generally speaking, the data processing time for the DG module, including POS solution generalization, interpolation, Exterior Orientation Parameters (EOP) generation, and feature point measurements, is less than one hour.

  5. The Performance Analysis of a Uav Based Mobile Mapping System Platform

    Science.gov (United States)

    Tsai, M. L.; Chiang, K. W.; Lo, C. F.; Ch, C. H.

    2013-08-01

    To facilitate applications such as environment detection or disaster monitoring, the development of rapid low cost systems for collecting near real-time spatial information is very critical. Rapid spatial information collection has become an emerging trend for remote sensing and mapping applications. This study develops a Direct Georeferencing (DG) based fixed-wing Unmanned Aerial Vehicle (UAV) photogrammetric platform where an Inertial Navigation System (INS)/Global Positioning System (GPS) integrated Positioning and Orientation System (POS) system is implemented to provide the DG capability of the platform. The performance verification indicates that the proposed platform can capture aerial images successfully. A flight test is performed to verify the positioning accuracy in DG mode without using Ground Control Points (GCP). The preliminary results illustrate that horizontal DG positioning accuracies in the x and y axes are around 5 m with 300 m flight height. The positioning accuracy in the z axis is less than 10 m. Such accuracy is good for near real-time disaster relief. The DG ready function of proposed platform guarantees mapping and positioning capability even in GCP free environments, which is very important for rapid urgent response for disaster relief. Generally speaking, the data processing time for the DG module, including POS solution generalization, interpolation, Exterior Orientation Parameters (EOP) generation, and feature point measurements, is less than one hour.
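Direct georeferencing rests on one vector relation: the ground coordinate is the GNSS antenna position plus the attitude-rotated sum of the lever arm and the scaled image-space vector. The sketch below illustrates that relation under assumed conventions (ZYX yaw-pitch-roll attitude, all frame names and values illustrative); the study's actual boresight/lever-arm calibration is not given in the abstract.

```python
import numpy as np

def rotation_zyx(yaw, pitch, roll):
    """Body-to-mapping-frame rotation from POS attitude angles (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def direct_georeference(r_gnss, attitude, lever_arm, R_cam_body, x_image, scale):
    """r_ground = r_GNSS + R_body^map @ (lever_arm + scale * R_cam^body @ x_image)."""
    R = rotation_zyx(*attitude)
    return r_gnss + R @ (lever_arm + scale * (R_cam_body @ x_image))
```

With level flight at 300 m, zero lever arm and a nadir-pointing unit image ray, the georeferenced point lands 300 m directly below the antenna, which is the GCP-free positioning mode the flight test verifies.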

  6. Positive cross-cultural psychology: Exploring similarity and difference in constructions and experiences of wellbeing

    OpenAIRE

    Lomas, Tim

    2015-01-01

    Critical theorists have accused positive psychology of paying insufficient attention to cultural variation in the way wellbeing is constructed and experienced. While there may be some merit to this claim, the field has developed a more nuanced appreciation of culture than its critics suggest. However, it could also be argued that positive psychology has not sufficiently appreciated or absorbed the wealth of literature within cross-cultural psychology pertaining to wellbeing. This paper ...

  7. Striving for Contribution: The Five Cs and Positive Effects of Cross-Age Peer Mentoring

    Science.gov (United States)

    Sinclair, Eric; Larson, Heidi A.

    2018-01-01

    This article explores the relationship between cross-age peer mentoring and positive life outcomes as defined by the Five Cs: competence, character, confidence, connection, and compassion. Qualified high school juniors and seniors were randomly assigned groups of 4-5 freshmen to mentor through the challenges of transitioning to secondary school.…

  8. Mridangam stroke dataset

    OpenAIRE

    CompMusic

    2014-01-01

    The audio examples were recorded from a professional Carnatic percussionist under semi-anechoic studio conditions by Akshay Anantapadmanabhan using SM-58 microphones and an H4n ZOOM recorder. The audio was sampled at 44.1 kHz and stored as 16 bit wav files. The dataset can be used for training models for each Mridangam stroke. A detailed description of the Mridangam and its strokes can be found in the paper below. A part of the dataset was used in the following paper. Akshay Anantapadman...

  9. Commentary on Cross-Cultural Perspectives on Positive Youth Development With Implications for Intervention Research.

    Science.gov (United States)

    Koller, Silvia H; Verma, Suman

    2017-07-01

    There is a growing focus on youth positive development issues among researchers and practitioners around the world. In this special issue of Child Development, each of the international authors provides new perspectives and understanding about youth developmental assets in different cultural settings. The present commentary (a) examines some of the cross-cultural themes that emerge from the four articles by international authors in this issue with implications for positive youth development (PYD) and (b) how intervention science can benefit by incorporating a PYD approach. As evident, youth involved in contexts that provide positive resources from significant others not only were less likely to exhibit negative outcomes, but also were more likely to show evidence of positive development. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.

  10. Wearable Device Control Platform Technology for Network Application Development

    Directory of Open Access Journals (Sweden)

    Heejung Kim

    2016-01-01

    Full Text Available The application development platform is the most important environment in the IT industry, and there is a variety of platforms. Although native development enables applications to be optimized, various languages and software development kits need to be acquired according to the device. The coexistence of smart devices and platforms has rendered the native development approach time- and cost-consuming. Cross-platform development emerged as a response to these issues. These platforms generate applications for multiple devices based on web languages. Nevertheless, development requires additional implementation based on a native language because of the coverage and functions of supported application programming interfaces (APIs). Wearable devices have recently attracted considerable attention. These devices only support Bluetooth-based interdevice communication, thereby making communication and device control impossible beyond a certain range. We propose Network Application Agent (NetApp-Agent) in order to overcome these issues. NetApp-Agent, based on Cordova, is a wearable device control platform for the development of network applications; it controls input/output functions of smartphones and wearable/IoT devices through the Cordova and Native APIs, and enables device control and information exchange by external users by offering a self-defined API. We confirmed the efficiency of the proposed platform through experiments and a qualitative assessment of its implementation.

  11. Pathogenetic role of Factor VII deficiency and thrombosis in cross-reactive material positive patients.

    Science.gov (United States)

    Girolami, A; Sambado, L; Bonamigo, E; Ferrari, S; Lombardi, A M

    2013-12-01

    Congenital Factor VII (FVII) deficiency can be divided into two groups: cases of "true" deficiency, which are cross-reactive material (CRM) negative, and variants that are cross-reactive material positive. The first form is commonly recognized as the Type I condition, whereas the second is known as Type II. FVII deficiency has been occasionally associated with thrombotic events, mainly venous. The reasons underlying this peculiar manifestation are unknown, even though thrombotic risk factors are present in the majority of affected patients. The purpose of the present study was to investigate whether a thrombotic event was more frequent in Type I or in Type II defects. The majority of patients with FVII deficiency and thrombosis belong to Type II defects. In the following paper we discuss the possible role of the dysfunctional FVII cross-reacting material as a contributory cause for the occurrence of thrombosis.

  12. OSM POI ANALYZER: A PLATFORM FOR ASSESSING POSITION OF POIs IN OPENSTREETMAP

    Directory of Open Access Journals (Sweden)

    A. Kashian

    2017-09-01

    Full Text Available In recent years, increased participation in Volunteered Geographical Information (VGI) projects has provided enough data coverage for most places around the world for ordinary mapping and navigation purposes; however, the positional credibility of contributed data becomes more and more important to bring long-term trust in VGI data. Today, it is hard to draw a definite traditional boundary between the authoritative map producers and the public map consumers, and we observe that more and more volunteers are joining crowdsourcing activities for collecting geodata, which might result in higher rates of man-made mistakes in open map projects such as OpenStreetMap. While there are some methods for monitoring the accuracy and consistency of the created data, there is still a lack of advanced systems to automatically discover misplaced objects on the map. One feature type which is contributed daily to OSM is the Point of Interest (POI). In order to understand how likely it is that a newly added POI represents a genuine real-world feature, scientific means to calculate a probability of such a POI existing at that specific position are needed. This paper reports on a new analytic tool which dives into OSM data and finds co-existence patterns between one specific POI and its surrounding objects such as roads, parks and buildings. The platform uses a distance-based classification technique to find relationships among objects and tries to identify the high-frequency association patterns among each category of objects. Using such a method, a probabilistic score would be generated for each newly added POI, and the low-scored POIs can be highlighted for editors for a manual check. The same scoring method can be used for existing registered POIs to check if they are located correctly. For a sample study, this paper reports on the evaluation of 800 pre-registered ATMs in Paris with associated scores to understand how outliers and fake entries could be detected
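The distance-based association idea can be sketched simply: learn how often each surrounding object type co-occurs with trusted POIs, then score a candidate by the association frequencies of its own surroundings. The radius, scoring rule and planar coordinates below are illustrative, not the paper's exact method.

```python
import math

def nearby_types(poi, objects, radius=100.0):
    """Object types found within `radius` metres of a POI (planar coords)."""
    found = set()
    for (x, y, kind) in objects:
        if math.hypot(x - poi[0], y - poi[1]) <= radius:
            found.add(kind)
    return found

def association_frequencies(known_pois, objects, radius=100.0):
    """How often each object type co-occurs with known-good POIs."""
    counts = {}
    for poi in known_pois:
        for kind in nearby_types(poi, objects, radius):
            counts[kind] = counts.get(kind, 0) + 1
    return {k: c / len(known_pois) for k, c in counts.items()}

def plausibility(poi, objects, freqs, radius=100.0):
    """Mean association frequency of the types around a candidate POI;
    a low score flags the POI for manual review by editors."""
    kinds = nearby_types(poi, objects, radius)
    if not kinds:
        return 0.0
    return sum(freqs.get(k, 0.0) for k in kinds) / len(kinds)
```

A candidate ATM surrounded by the same feature types as verified ATMs scores near 1, while one placed in empty space scores 0 and would be surfaced to editors, mirroring the manual-check workflow the paper describes.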

  13. Introducing StatHand: A Cross-Platform Mobile Application to Support Students' Statistical Decision Making.

    Science.gov (United States)

    Allen, Peter J; Roberts, Lynne D; Baughman, Frank D; Loxton, Natalie J; Van Rooy, Dirk; Rock, Adam J; Finlay, James

    2016-01-01

    Although essential to professional competence in psychology, quantitative research methods are a known area of weakness for many undergraduate psychology students. Students find selecting appropriate statistical tests and procedures for different types of research questions, hypotheses and data types particularly challenging, and these skills are not often practiced in class. Decision trees (a type of graphic organizer) are known to facilitate this decision-making process, but extant trees have a number of limitations. Furthermore, emerging research suggests that mobile technologies offer many possibilities for facilitating learning. It is within this context that we have developed StatHand, a free cross-platform application designed to support students' statistical decision making. Developed with the support of the Australian Government Office for Learning and Teaching, StatHand guides users through a series of simple, annotated questions to help them identify a statistical test or procedure appropriate to their circumstances. It further offers the guidance necessary to run these tests and procedures, then interpret and report their results. In this Technology Report we will overview the rationale behind StatHand, before describing the feature set of the application. We will then provide guidelines for integrating StatHand into the research methods curriculum, before concluding by outlining our road map for the ongoing development and evaluation of StatHand.
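StatHand's core is a guided walk down a decision tree of annotated questions. A toy fragment of that idea is sketched below; the branch wording and test names are illustrative, not StatHand's actual content.

```python
# A toy statistical-test decision tree: internal nodes hold a question,
# leaves hold a recommended test. Branch labels are yes/no answers.
TREE = {
    "question": "Comparing group means?",
    "yes": {
        "question": "More than two groups?",
        "yes": "one-way ANOVA",
        "no": {
            "question": "Paired/repeated measures?",
            "yes": "paired-samples t test",
            "no": "independent-samples t test",
        },
    },
    "no": "association between two continuous variables -> correlation",
}

def choose_test(node, answers):
    """Follow a sequence of yes/no answers down the tree to a leaf."""
    for a in answers:
        if isinstance(node, str):   # already at a recommendation
            break
        node = node["yes" if a else "no"]
    return node
```

In the real application each node would also carry the annotations and follow-up guidance (running, interpreting and reporting the test) that the abstract describes.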

  14. 2008 TIGER/Line Nationwide Dataset

    Data.gov (United States)

    California Natural Resource Agency — This dataset contains a nationwide build of the 2008 TIGER/Line datasets from the US Census Bureau downloaded in April 2009. The TIGER/Line Shapefiles are an extract...

  15. PODAAC-CCF30-01XXX

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is derived under the Cross-Calibrated Multi-Platform (CCMP) project and contains a value-added 6-hourly gridded analysis of ocean surface winds. The...

  16. PODAAC-CCF35-01AD5

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset is derived under the Cross-Calibrated Multi-Platform (CCMP) project and contains a value-added 5-day mean ocean surface wind and pseudostress to...

  17. Statistical exploration of dataset examining key indicators influencing housing and urban infrastructure investments in megacities

    Directory of Open Access Journals (Sweden)

    Adedeji O. Afolabi

    2018-06-01

    Full Text Available Lagos, by UN standards, has attained megacity status, with the attendant challenges of living up to that titanic position; regrettably, it struggles with its present stock of housing and infrastructural facilities to match its new status. Based on a survey of the perceptions of construction professionals residing within the state, a questionnaire instrument was used to gather the dataset. The statistical exploration contains a dataset on the state of the housing and urban infrastructural deficit, key indicators spurring government investment to reverse the deficit, and improvement mechanisms to tackle the infrastructural dearth. Descriptive statistics and inferential statistics were used to present the dataset. When analyzed, the dataset can be useful for policy makers, local and international governments, world funding bodies, researchers and infrastructural investors. Keywords: Construction, Housing, Megacities, Population, Urban infrastructures

  18. Influence of the crude oil type to platforming effects

    International Nuclear Information System (INIS)

    Kafedzhiski, Branko; Crvenkova, Suzana; Zikovski, Toni

    1999-01-01

    Platforming is one of the most subtle processes in the refinery industry and a permanent subject of research aimed at finding a higher degree of optimization. The final effects of platforming depend directly on many parameters. One of the more important parameters is the type of crude. The purpose of this work is to present the positive and negative effects on the platforming parameters caused by different types of crude at the OCTA Crude Oil Refinery - Skopje (Macedonia). (Author)

  19. Design of an audio advertisement dataset

    Science.gov (United States)

    Fu, Yutao; Liu, Jihong; Zhang, Qi; Geng, Yuting

    2015-12-01

    Since more and more advertisements swarm into radio broadcasts, it is necessary to establish an audio advertising dataset which could be used to analyze and classify the advertisements. A method for establishing a complete audio advertising dataset is presented in this paper. The dataset is divided into four different kinds of advertisements. Each advertisement sample is given in *.wav file format and annotated with a txt file which contains its file name, sampling frequency, channel number, broadcasting time and its class. The soundness of the classification of the advertisements in this dataset is demonstrated by clustering the different advertisements based on Principal Component Analysis (PCA). The experimental results show that this audio advertisement dataset offers a reliable set of samples for related audio advertisement experimental studies.
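The PCA step used to check the class structure can be reproduced on any feature matrix. The sketch below uses SVD-based PCA on hypothetical per-clip feature vectors (e.g. averaged spectral features); the paper does not specify its exact features, so the data here is synthetic.

```python
import numpy as np

def pca(features, k=2):
    """Project feature vectors onto their top-k principal components."""
    X = features - features.mean(axis=0)
    # SVD of the centred data: rows of Vt are the principal axes
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T

# hypothetical per-clip features for two advertisement classes
rng = np.random.default_rng(0)
class_a = rng.normal(0.0, 0.1, size=(10, 6)) + np.array([1.0, 0, 0, 0, 0, 0])
class_b = rng.normal(0.0, 0.1, size=(10, 6)) + np.array([-1.0, 0, 0, 0, 0, 0])
proj = pca(np.vstack([class_a, class_b]), k=2)
```

If the advertisement classes are coherent, their projections separate into distinct clusters along the leading components, which is the "classifying rationality" check the abstract describes.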

  20. Background qualitative analysis of the European reference life cycle database (ELCD) energy datasets - part II: electricity datasets.

    Science.gov (United States)

    Garraín, Daniel; Fazio, Simone; de la Rúa, Cristina; Recchioni, Marco; Lechón, Yolanda; Mathieux, Fabrice

    2015-01-01

    The aim of this paper is to identify areas of potential improvement of the European Reference Life Cycle Database (ELCD) electricity datasets. The revision is based on the data quality indicators described by the International Life Cycle Data system (ILCD) Handbook, applied on a sectorial basis. These indicators evaluate the technological, geographical and time-related representativeness of the dataset and its appropriateness in terms of completeness, precision and methodology. Results show that the ELCD electricity datasets have very good quality in general terms; nevertheless, some findings and recommendations for improving the quality of the Life Cycle Inventories have been derived. Moreover, these results confirm the quality of the electricity-related datasets to any LCA practitioner, and provide insights into the limitations and assumptions underlying the datasets' modelling. Given this information, the LCA practitioner will be able to decide whether use of the ELCD electricity datasets is appropriate to the goal and scope of the analysis to be conducted. The methodological approach would also be useful for dataset developers and reviewers in order to improve the overall Data Quality Requirements of databases.

  1. Birth and demise of a Middle Jurassic isolated shallow-marine carbonate platform on a tilted fault block: Example from the Southern Iberian continental palaeomargin

    Science.gov (United States)

    Navarro, V.; Ruiz-Ortiz, P. A.; Molina, J. M.

    2012-08-01

    Subbetic Middle Jurassic oolitic limestones of the Jabalcuz Formation crop out in San Cristóbal hill, near Jaén city (Andalucía, Spain), between hemipelagic limestone and marl successions. The Jabalcuz limestones range in facies from calcareous breccias and micritic limestones to white cross-bedded oolitic limestones. Recent erosion has exhumed a Jurassic isolated shallow-water carbonate platform on the San Cristóbal hill. This shallow platform developed on a tilted fault block. An almost continuous, laterally extensive outcrop reveals tectono-sedimentary features distinctive of block-tilting on the different margins of the fault block. The studied sections represent various palaeogeographic positions in the ancient shallow-water carbonate platform-to-basin transition. This exceptional outcrop makes it possible to decipher the triggering mechanisms of the birth, evolution, and drowning of this Jurassic isolated shallow-water carbonate platform. Two shallowing-upward depositional sequences separated by flooding surfaces can be distinguished on two different sides of the fault block. In the southeastern part of the outcrop, proximal sections grade vertically from distal talus fault breccias, with bivalve and serpulid buildup intercalations, to white cross-bedded oolitic limestones defining the lowermost depositional sequence. Upwards, overlying a flooding surface, the second sequence with oolitic limestones prograding over micritic deposits is recorded. In the southwest, oolitic, peloidal, and more distal micritic facies alternate, with notable southeastern progradation of oolitic facies in the upper part of the section, which represents the upper depositional sequence. The top of this second depositional sequence is another flooding surface recorded by the sedimentation of marls with radiolarians from the overlying formation. In the northwestern outcrops, the two depositional sequences are also almost completely preserved and can be differentiated. A 100 m

  2. The VISPA Internet Platform for Students

    Science.gov (United States)

    Asseldonk, D. v.; Erdmann, M.; Fischer, R.; Glaser, C.; Müller, G.; Quast, T.; Rieger, M.; Urban, M.

    2016-04-01

    The VISPA internet platform enables users to remotely run Python scripts and view resulting plots or inspect their output data. With a standard web browser as the only user requirement on the client side, the system becomes suitable for blended learning approaches for university physics students. VISPA was used in two consecutive years, each by approx. 100 third-year physics students at RWTH Aachen University for their homework assignments. For example, in one exercise students gained a deeper understanding of Einstein's mass-energy relation by analyzing experimental data of electron-positron pairs revealing J / Ψ and Z particles. Because the students were free to choose their working hours, only few users accessed the platform simultaneously. The positive feedback from students and the stability of the platform led to further development of the concept. This year, students accessed the platform in parallel while they analyzed the data recorded by demonstration experiments live in the lecture hall. The platform is based on experience in the development of professional analysis tools. It combines core technologies from previous projects: an object-oriented C++ library, a modular data-driven analysis flow, and visual analysis steering. We present the platform and discuss its benefits in the context of teaching, based on surveys that are conducted each semester.

  3. An Interactive Platform to Visualize Data-Driven Clinical Pathways for the Management of Multiple Chronic Conditions.

    Science.gov (United States)

    Zhang, Yiye; Padman, Rema

    2017-01-01

    Patients with multiple chronic conditions (MCC) pose an increasingly complex health management challenge worldwide, particularly due to the significant gap in our understanding of how to provide coordinated care. Drawing on our prior research on learning data-driven clinical pathways from actual practice data, this paper describes a prototype, interactive platform for visualizing the pathways of MCC to support shared decision making. Created using a Python web framework, a JavaScript library and our clinical pathway learning algorithm, the visualization platform allows clinicians and patients to learn the dominant patterns of co-progression of multiple clinical events from their own data, and to interactively explore and interpret the pathways. We demonstrate functionalities of the platform using a cluster of 36 patients, identified from a dataset of 1,084 patients, who are diagnosed with at least chronic kidney disease, hypertension, and diabetes. Future evaluation studies will explore the use of this platform to better understand and manage MCC.
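The idea of extracting "dominant patterns of co-progression of multiple clinical events" can be illustrated with a toy sketch. This is not the authors' pathway-learning algorithm; it merely counts consecutive event transitions (bigrams) across hypothetical patient timelines to surface the most frequent progressions.

```python
from collections import Counter

# Toy patient event sequences (diagnosis codes in temporal order).
# Codes and sequences are invented for illustration only.
patients = [
    ["HTN", "CKD", "DM", "CKD"],
    ["HTN", "DM", "CKD"],
    ["DM", "HTN", "CKD"],
]

def dominant_transitions(sequences, top=3):
    """Count consecutive event pairs across all sequences and
    return the most common transitions."""
    counts = Counter()
    for seq in sequences:
        counts.update(zip(seq, seq[1:]))
    return counts.most_common(top)

top_transitions = dominant_transitions(patients)
```

In a real pathway-learning setting the mined patterns would be far richer (ordered multi-step pathways with timing), but the same frequency-based intuition underlies surfacing "dominant" progressions for visualization.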

  4. WeBCMD: A cross-platform interface for the BCMD modelling framework [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Joshua Russell-Buckland

    2017-07-01

    Full Text Available Multimodal monitoring of the brain generates a great quantity of data, providing the potential for great insight into both healthy and injured cerebral dynamics. In particular, near-infrared spectroscopy can be used to measure various physiological variables of interest, such as haemoglobin oxygenation and the redox state of cytochrome-c-oxidase, alongside systemic signals, such as blood pressure. Interpreting these measurements is a complex endeavour, and much work has been done to develop mathematical models that can help to provide understanding of the underlying processes that contribute to the overall dynamics. BCMD is a software framework that was developed to run such models. However, obtaining, installing and running this software is no simple task. Here we present WeBCMD, an online environment that attempts to make the process simpler and much more accessible. By leveraging modern web technologies, an extensible and cross-platform package has been created that can also be accessed remotely from the cloud. WeBCMD is available as a Docker image and an online service.

  5. Comparing the accuracy of food outlet datasets in an urban environment

    Directory of Open Access Journals (Sweden)

    Michelle S. Wong

    2017-05-01

    Full Text Available Studies that investigate the relationship between the retail food environment and health outcomes often use geospatial datasets. Prior studies have identified challenges of using the most common data sources. Retail food environment datasets created through academic-government partnership present an alternative, but their validity (retail existence, type, location) has not been assessed yet. In our study, we used ground-truth data to compare the validity of two datasets, a 2015 commercial dataset (InfoUSA) and data collected from 2012 to 2014 through the Maryland Food Systems Mapping Project (MFSMP), an academic-government partnership, on the retail food environment in two low-income, inner city neighbourhoods in Baltimore City. We compared sensitivity and positive predictive value (PPV) of the commercial and academic-government partnership data to ground-truth data for two broad categories of unhealthy food retailers: small food retailers and quick-service restaurants. Ground-truth data was collected in 2015 and analysed in 2016. Compared to the ground-truth data, MFSMP and InfoUSA generally had similar sensitivity that was greater than 85%. MFSMP had higher PPV compared to InfoUSA for both small food retailers (MFSMP: 56.3% vs InfoUSA: 40.7%) and quick-service restaurants (MFSMP: 58.6% vs InfoUSA: 36.4%). We conclude that data from academic-government partnerships like MFSMP might be an attractive alternative option and improvement to relying only on commercial data. Other research institutes or cities might consider efforts to create and maintain such an environmental dataset. Even if these datasets cannot be updated on an annual basis, they are likely more accurate than commercial data.

  6. Comparing the accuracy of food outlet datasets in an urban environment.

    Science.gov (United States)

    Wong, Michelle S; Peyton, Jennifer M; Shields, Timothy M; Curriero, Frank C; Gudzune, Kimberly A

    2017-05-11

    Studies that investigate the relationship between the retail food environment and health outcomes often use geospatial datasets. Prior studies have identified challenges of using the most common data sources. Retail food environment datasets created through academic-government partnership present an alternative, but their validity (retail existence, type, location) has not been assessed yet. In our study, we used ground-truth data to compare the validity of two datasets, a 2015 commercial dataset (InfoUSA) and data collected from 2012 to 2014 through the Maryland Food Systems Mapping Project (MFSMP), an academic-government partnership, on the retail food environment in two low-income, inner city neighbourhoods in Baltimore City. We compared sensitivity and positive predictive value (PPV) of the commercial and academic-government partnership data to ground-truth data for two broad categories of unhealthy food retailers: small food retailers and quick-service restaurants. Ground-truth data was collected in 2015 and analysed in 2016. Compared to the ground-truth data, MFSMP and InfoUSA generally had similar sensitivity that was greater than 85%. MFSMP had higher PPV compared to InfoUSA for both small food retailers (MFSMP: 56.3% vs InfoUSA: 40.7%) and quick-service restaurants (MFSMP: 58.6% vs InfoUSA: 36.4%). We conclude that data from academic-government partnerships like MFSMP might be an attractive alternative option and improvement to relying only on commercial data. Other research institutes or cities might consider efforts to create and maintain such an environmental dataset. Even if these datasets cannot be updated on an annual basis, they are likely more accurate than commercial data.
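The two validity metrics compared above are standard: sensitivity is the share of ground-truth outlets the dataset captures, and PPV is the share of the dataset's listings that actually exist. A minimal sketch with invented counts (not the study's record-level data):

```python
def sensitivity_ppv(tp: int, fp: int, fn: int):
    """Sensitivity = TP / (TP + FN); PPV = TP / (TP + FP).

    TP: outlets listed in the dataset that exist on the ground,
    FP: listed outlets that do not exist,
    FN: real outlets the dataset misses.
    """
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return sensitivity, ppv

# Illustrative counts only: of 100 ground-truth small food retailers,
# a dataset lists 90 (TP), misses 10 (FN), and also lists 70 entries
# that were not found on the ground (FP).
sens, ppv = sensitivity_ppv(tp=90, fp=70, fn=10)
```

With these invented numbers the dataset would have 90% sensitivity but only 56% PPV, the same qualitative pattern the study reports: both sources find most real outlets, but differ in how many phantom listings they carry.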

  7. MEGA X: Molecular Evolutionary Genetics Analysis across Computing Platforms.

    Science.gov (United States)

    Kumar, Sudhir; Stecher, Glen; Li, Michael; Knyaz, Christina; Tamura, Koichiro

    2018-06-01

    The Molecular Evolutionary Genetics Analysis (Mega) software implements many analytical methods and tools for phylogenomics and phylomedicine. Here, we report a transformation of Mega to enable cross-platform use on Microsoft Windows and Linux operating systems. Mega X does not require virtualization or emulation software and provides a uniform user experience across platforms. Mega X has additionally been upgraded to use multiple computing cores for many molecular evolutionary analyses. Mega X is available in two interfaces (graphical and command line) and can be downloaded from www.megasoftware.net free of charge.

  8. Large Survey Database: A Distributed Framework for Storage and Analysis of Large Datasets

    Science.gov (United States)

    Juric, Mario

    2011-01-01

    The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures. An LSD database consists of a set of vertically and horizontally partitioned tables, physically stored as compressed HDF5 files. Vertically, we partition the tables into groups of related columns ("column groups"), storing together logically related data (e.g., astrometry, photometry). Horizontally, the tables are partitioned into partially overlapping "cells" by position in space (lon, lat) and time (t). This organization allows for fast lookups based on spatial and temporal coordinates, as well as data and task distribution. The design was inspired by the success of Google BigTable (Chang et al., 2006). Our programming model is a pipelined extension of MapReduce (Dean and Ghemawat, 2004). An SQL-like query language is used to access data. For complex tasks, map-reduce "kernels" that operate on query results on a per-cell basis can be written, with the framework taking care of scheduling and execution. The combination leverages users' familiarity with SQL, while offering a fully distributed computing environment. LSD adds little overhead compared to direct Python file I/O. In tests, we swept through 1.1 Grows (1.1x10^9 rows) of PanSTARRS+SDSS data (220 GB) in less than 15 minutes on a dual-CPU machine. In a cluster environment, we achieved bandwidths of 17 Gbit/s (I/O limited). Based on current experience, we believe LSD should scale to be useful for analysis and storage of LSST-scale datasets. It can be downloaded from http://mwscience.net/lsd.
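The horizontal partitioning scheme described above, binning rows into "cells" by position (lon, lat) and time (t), can be sketched in a few lines. The cell sizes here (10 degrees, 100 days) are arbitrary illustrative choices, not LSD's actual parameters, and the sketch omits the partial overlap between neighbouring cells.

```python
from collections import defaultdict

def cell_id(lon: float, lat: float, t: float,
            deg: float = 10.0, days: float = 100.0):
    """Map a row's (lon, lat, t) coordinates to a discrete cell key.

    Floor division bins coordinates into fixed-size spatial and
    temporal cells; rows sharing a key land in the same partition.
    """
    return (int(lon // deg), int(lat // deg), int(t // days))

rows = [
    (123.4, -10.2, 55250.0),   # (lon, lat, MJD); invented values
    (124.9, -11.7, 55290.0),   # falls in the same cell as the row above
    (200.1, 45.0, 51000.0),    # different cell
]

cells = defaultdict(list)
for lon, lat, t in rows:
    cells[cell_id(lon, lat, t)].append((lon, lat, t))
```

Per-cell grouping of this kind is what lets map-reduce "kernels" run independently on each cell: the framework can schedule one task per populated cell and merge the results.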

  9. The Added Utility of Hydrological Model and Satellite Based Datasets in Agricultural Drought Analysis over Turkey

    Science.gov (United States)

    Bulut, B.; Hüsami Afşar, M.; Yilmaz, M. T.

    2017-12-01

    Analysis of agricultural drought, which causes substantial socioeconomic costs in Turkey and worldwide, is critical for understanding this natural disaster's characteristics (intensity, duration, area of influence) and for research on possible precautions. Soil moisture, one of the most important parameters used to observe agricultural drought, can be obtained using different methods. The most common, consistent and reliable soil moisture datasets used for large-scale analysis are obtained from hydrologic models and remote sensing retrievals. On the other hand, the Normalized Difference Vegetation Index (NDVI) and gauge-based precipitation observations are also commonly used for drought analysis. In this study, soil moisture products obtained from different platforms, together with NDVI and precipitation datasets, are obtained for the growing season over several agricultural regions under various climate conditions in Turkey. These datasets are then used to investigate agricultural drought with the help of annual crop yield data for the selected agricultural lands. The type of vegetation over these regions is obtained using CORINE Land Cover (CLC 2012) data. The crop yield data were taken from the records of the related districts' statistics provided by the Turkish Statistical Institute (TÜİK). This project is supported by TÜBİTAK project number 114Y676.

  10. The GTZAN dataset

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    The GTZAN dataset appears in at least 100 published works, and is the most-used public dataset for evaluation in machine listening research for music genre recognition (MGR). Our recent work, however, shows GTZAN has several faults (repetitions, mislabelings, and distortions), which challenge...... of GTZAN, and provide a catalog of its faults. We review how GTZAN has been used in MGR research, and find few indications that its faults have been known and considered. Finally, we rigorously study the effects of its faults on evaluating five different MGR systems. The lesson is not to banish GTZAN...

  11. A positive deviance approach to early childhood obesity: cross-sectional characterization of positive outliers.

    Science.gov (United States)

    Foster, Byron Alexander; Farragher, Jill; Parker, Paige; Hale, Daniel E

    2015-06-01

    Positive deviance methodology has been applied in the developing world to address childhood malnutrition and has potential for application to childhood obesity in the United States. We hypothesized that among children at high-risk for obesity, evaluating normal weight children will enable identification of positive outlier behaviors and practices. In a community at high-risk for obesity, a cross-sectional mixed-methods analysis was done of normal weight, overweight, and obese children, classified by BMI percentile. Parents were interviewed using a semistructured format in regard to their children's general health, feeding and activity practices, and perceptions of weight. Interviews were conducted in 40 homes in the lower Rio Grande Valley in Texas with a largely Hispanic (87.5%) population. Demographics, including income, education, and food assistance use, did not vary between groups. Nearly all (93.8%) parents of normal weight children perceived their child to be lower than the median weight. Group differences were observed for reported juice and yogurt consumption. Differences in both emotional feeding behaviors and parents' internalization of reasons for healthy habits were identified as different between groups. We found subtle variations in reported feeding and activity practices by weight status among healthy children in a population at high risk for obesity. The behaviors and attitudes described were consistent with previous literature; however, the local strategies associated with a healthy weight are novel, potentially providing a basis for a specific intervention in this population.

  12. Procalcitonin levels in patients with positive blood culture, positive body fluid culture, sepsis, and severe sepsis: a cross-sectional study.

    Science.gov (United States)

    Yu, Ying; Li, Xia-Xi; Jiang, Ling-Xiao; Du, Meng; Liu, Zhan-Guo; Cen, Zhong-Ran; Wang, Hua; Guo, Zhen-Hui; Chang, Ping

    2016-01-01

    Numerous investigations on procalcitonin (PCT) have been carried out, although few with large sample size. To deal with the complexity of sepsis, an understanding of PCT in heterogeneous clinical conditions is required. Hospitalized patients aged 10-79 years were included in this retrospective and cross-sectional study. PCT tests were assayed within 2 days of blood culture. A total of 2952 cases (from 2538 patients) were enrolled in this study, including 440 cases in the 'positive BC' group, 123 cases in the 'positive body fluid culture' group, and 2389 cases in the 'negative all culture' group. Median PCT values were 4.53 ng/ml, 2.95 ng/ml, and 0.49 ng/ml, respectively. Median PCT values in the gram-negative BC group and gram-positive BC group, respectively, were 6.99 ng/ml and 2.96 ng/ml. Median PCT values in the 'positive hydrothorax culture' group, 'positive ascites culture' group, 'positive bile culture' group, and 'positive cerebrospinal fluid culture' group, respectively, were 1.39 ng/ml, 8.32 ng/ml, 5.98 ng/ml, and 0.46 ng/ml. In all, 357 cases were classified into the 'sepsis' group, 150 of them were classified into the 'severe sepsis' group. Median PCT values were 5.63 ng/ml and 11.06 ng/ml, respectively. PCT could be used in clinical algorithms to diagnose positive infections and sepsis. Different PCT levels could be related to different kinds of microbemia, different infection sites, and differing severity of sepsis.

  13. Cross-cultural dataset for the evolution of religion and morality project.

    Science.gov (United States)

    Purzycki, Benjamin Grant; Apicella, Coren; Atkinson, Quentin D; Cohen, Emma; McNamara, Rita Anne; Willard, Aiyana K; Xygalatas, Dimitris; Norenzayan, Ara; Henrich, Joseph

    2016-11-08

    A considerable body of research cross-culturally examines the evolution of religious traditions, beliefs and behaviors. The bulk of this research, however, draws from coded qualitative ethnographies rather than from standardized methods specifically designed to measure religious beliefs and behaviors. Psychological data sets that examine religious thought and behavior in controlled conditions tend to be disproportionately sampled from student populations. Some cross-national databases employ standardized methods at the individual level, but are primarily focused on fully market integrated, state-level societies. The Evolution of Religion and Morality Project sought to generate a data set that systematically probed individual level measures sampling across a wider range of human populations. The set includes data from behavioral economic experiments and detailed surveys of demographics, religious beliefs and practices, material security, and intergroup perceptions. This paper describes the methods and variables, briefly introduces the sites and sampling techniques, notes inconsistencies across sites, and provides some basic reporting for the data set.

  14. Positional Accuracy Assessment for Effective Shoreline Change ...

    African Journals Online (AJOL)

    Michael

    2016-06-01

    Jun 1, 2016 ... as backdrop in GIS environment. Positional error of ... integrated dataset obviously bore the cumulative effect of the input datasets. ... change. The shoreline, which is the interface between land ... modelling, which enables future shoreline change trend to ..... as gaps due to cloud cover and limitation of the.

  15. e-Science platform for translational biomedical imaging research: running, statistics, and analysis

    Science.gov (United States)

    Wang, Tusheng; Yang, Yuanyuan; Zhang, Kai; Wang, Mingqing; Zhao, Jun; Xu, Lisa; Zhang, Jianguo

    2015-03-01

    In order to enable medical researchers, clinical physicians and biomedical engineers from multiple disciplines to work together in a secure, efficient, and transparent cooperative environment, we designed an e-Science platform for biomedical imaging research and application across multiple academic institutions and hospitals in Shanghai, and presented this work at the SPIE Medical Imaging conference held in San Diego in 2012. In the past two years, we implemented a biomedical image chain including communication, storage, cooperation and computing based on this e-Science platform. In this presentation, we present the operating status of this system in supporting biomedical imaging research, and analyze and discuss the results of this system in supporting multi-disciplinary collaboration across multiple institutions.

  16. Platform skin return and retrodirective cross-eye jamming

    CSIR Research Space (South Africa)

    Du Plessis, WP

    2012-01-01

    Full Text Available for, and the effect of variations in Jammer-to-Signal Ratio (JSR) is investigated. The widely-held, though unsubstantiated, view that a JSR of 20 dB is required for effective cross-eye jamming is found to be reasonable, though conservative...

  17. Near Real-time Scientific Data Analysis and Visualization with the ArcGIS Platform

    Science.gov (United States)

    Shrestha, S. R.; Viswambharan, V.; Doshi, A.

    2017-12-01

    Scientific multidimensional data are generated from a variety of sources and platforms. These datasets are mostly produced by earth observation and/or modeling systems. Agencies like NASA, NOAA, USGS, and ESA produce large volumes of near real-time observation, forecast, and historical data that drive fundamental research and its applications in larger aspects of humanity, from basic decision making to disaster response. A common big data challenge for organizations working with multidimensional scientific data and imagery collections is the time and resources required to manage and process such large volumes and varieties of data. The challenge of adopting data-driven real-time visualization and analysis, as well as the need to share these large datasets, workflows, and information products with wider and more diverse communities, brings an opportunity to use the ArcGIS platform to handle such demand. In recent years, a significant effort has been put into expanding the capabilities of ArcGIS to support multidimensional scientific data across the platform. New capabilities in ArcGIS to support scientific data management, processing, and analysis, as well as creating information products from large volumes of data using the image server technology, are becoming widely used in earth science and across other domains. We will discuss and share the challenges associated with big data in the geospatial science community and how we have addressed these challenges in the ArcGIS platform. We will share a few use cases, such as NOAA High-Resolution Rapid Refresh (HRRR) data, that demonstrate how we access large collections of near real-time data (that are stored on-premise or on the cloud), disseminate them dynamically, process and analyze them on-the-fly, and serve them to a variety of geospatial applications. We will also share how on-the-fly processing using raster functions capabilities can be extended to create persisted data and information products using raster analytics

  18. Mechanically latchable tiltable platform for forming micromirrors and micromirror arrays

    Science.gov (United States)

    Garcia, Ernest J [Albuquerque, NM; Polosky, Marc A [Tijeras, NM; Sleefe, Gerard E [Cedar Crest, NM

    2006-12-12

    A microelectromechanical (MEM) apparatus is disclosed which includes a platform that can be electrostatically tilted from being parallel to the substrate on which it is formed to being tilted at an angle of 1-20 degrees with respect to the substrate. Once the platform has been tilted to a maximum angle of tilt, it can be locked in position using an electrostatically-operable latching mechanism which engages a tab protruding below the platform. The platform has a light-reflective upper surface which can optionally be coated to provide enhanced reflectivity and form a micromirror. An array of such micromirrors can be formed on a common substrate for applications including optical switching (e.g. for fiber optic communications), optical information processing, image projection displays or non-volatile optical memories.

  19. A Comprehensive Study on Cross-View Gait Based Human Identification with Deep CNNs.

    Science.gov (United States)

    Wu, Zifeng; Huang, Yongzhen; Wang, Liang; Wang, Xiaogang; Tan, Tieniu

    2017-02-01

    This paper studies an approach to gait-based human identification via similarity learning by deep convolutional neural networks (CNNs). With a pretty small group of labeled multi-view human walking videos, we can train deep networks to recognize the most discriminative changes of gait patterns which suggest a change of human identity. To the best of our knowledge, this is the first work based on deep CNNs for gait recognition in the literature. Here, we provide an extensive empirical evaluation in terms of various scenarios, namely, cross-view and cross-walking-condition, with different preprocessing approaches and network architectures. The method is first evaluated on the challenging CASIA-B dataset in terms of cross-view gait recognition. Experimental results show that it outperforms the previous state-of-the-art methods by a significant margin. In particular, our method shows advantages when the cross-view angle is large, i.e., no less than 36 degrees. And the average recognition rate can reach 94 percent, much better than the previous best result (less than 65 percent). The method is further evaluated on the OU-ISIR gait dataset to test its generalization ability to larger data. OU-ISIR is currently the largest dataset available in the literature for gait recognition, with 4,007 subjects. On this dataset, the average accuracy of our method under identical view conditions is above 98 percent, and the one for cross-view scenarios is above 91 percent. Finally, the method also performs the best on the USF gait dataset, whose gait sequences are imaged in a real outdoor scene. These results show great potential of this method for practical applications.

  20. Arc4nix: A cross-platform geospatial analytical library for cluster and cloud computing

    Science.gov (United States)

    Tang, Jingyin; Matyas, Corene J.

    2018-02-01

    Big Data in geospatial technology is a grand challenge for processing capacity. The ability to use a GIS for geospatial analysis on Cloud Computing and High Performance Computing (HPC) clusters has emerged as a new approach to provide feasible solutions. However, users lack the ability to migrate existing research tools to a Cloud Computing or HPC-based environment because of the incompatibility of the market-dominating ArcGIS software stack and Linux operating system. This manuscript details a cross-platform geospatial library "arc4nix" to bridge this gap. Arc4nix provides an application programming interface compatible with ArcGIS and its Python library "arcpy". Arc4nix uses a decoupled client-server architecture that permits geospatial analytical functions to run on the remote server and other functions to run on the native Python environment. It uses functional programming and meta-programming language to dynamically construct Python codes containing actual geospatial calculations, send them to a server and retrieve results. Arc4nix allows users to employ their arcpy-based script in a Cloud Computing and HPC environment with minimal or no modification. It also supports parallelizing tasks using multiple CPU cores and nodes for large-scale analyses. A case study of geospatial processing of a numerical weather model's output shows that arcpy scales linearly in a distributed environment. Arc4nix is open-source software.
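The decoupled client-server pattern described above, where the client dynamically constructs Python code containing the actual geospatial calculation and a server executes it and returns the result, can be sketched generically. The function names here are illustrative stand-ins, not arcpy's API, and the "server" is a local `exec()` in place of arc4nix's remote execution.

```python
import textwrap

def build_task(func_name: str, *args) -> str:
    """Client side: generate Python source for the actual calculation
    (a simplified stand-in for arc4nix's meta-programming step)."""
    arg_list = ", ".join(repr(a) for a in args)
    return textwrap.dedent(f"""
        result = {func_name}({arg_list})
    """)

def run_on_server(source: str, env: dict):
    """Server side (stand-in): execute the generated code in a scope
    exposing the geoprocessing tools, and return the named result."""
    scope = dict(env)
    exec(source, scope)
    return scope["result"]

# Hypothetical server-side tool; a real deployment would expose the
# actual geospatial functions here.
server_env = {"buffer_area": lambda radius: 3.14159 * radius ** 2}

task = build_task("buffer_area", 2.0)
area = run_on_server(task, server_env)
```

The appeal of this design is that the client-side API can mirror the familiar library exactly while the heavy computation runs wherever the server lives, which is what lets existing scripts move to a cluster with minimal modification.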

  1. Introducing StatHand: A cross-platform mobile application to support students’ statistical decision making

    Directory of Open Access Journals (Sweden)

    Peter James Allen

    2016-02-01

    Full Text Available Although essential to professional competence in psychology, quantitative research methods are a known area of weakness for many undergraduate psychology students. Students find selecting appropriate statistical tests and procedures for different types of research questions, hypotheses and data types particularly challenging, and these skills are not often practiced in class. Decision trees (a type of graphic organizer are known to facilitate this decision making process, but extant trees have a number of limitations. Furthermore, emerging research suggests that mobile technologies offer many possibilities for facilitating learning. It is within this context that we have developed StatHand, a free cross-platform application designed to support students’ statistical decision making. Developed with the support of the Australian Government Office for Learning and Teaching, StatHand guides users through a series of simple, annotated questions to help them identify a statistical test or procedure appropriate to their circumstances. It further offers the guidance necessary to run these tests and procedures, then interpret and report their results. In this Technology Report we will overview the rationale behind StatHand, before describing the feature set of the application. We will then provide guidelines for integrating StatHand into the research methods curriculum, before concluding by outlining our road map for the ongoing development and evaluation of StatHand.
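A decision tree of "simple, annotated questions" leading to a recommended statistical test, as described above, can be sketched as nested nodes. The questions and tests shown are a small illustrative subset, not StatHand's actual tree.

```python
# Each node either asks a question (dict with possible answers) or is a
# leaf string naming the recommended test. Illustrative subset only.
TREE = {
    "question": "Are you comparing groups or examining a relationship?",
    "answers": {
        "groups": {
            "question": "How many groups?",
            "answers": {
                "two": "independent-samples t-test",
                "three_or_more": "one-way ANOVA",
            },
        },
        "relationship": "Pearson correlation",
    },
}

def walk(tree, answers):
    """Follow a sequence of answers down the tree to a recommendation."""
    node = tree
    for a in answers:
        node = node["answers"][a]
        if isinstance(node, str):   # reached a leaf: a recommended test
            return node
    return None  # ran out of answers before reaching a leaf

recommendation = walk(TREE, ["groups", "two"])
```

Representing the tree as data rather than code is what makes such a tool easy to extend: adding a question or test is an edit to the structure, not to the traversal logic.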

  2. SIGMA: A System for Integrative Genomic Microarray Analysis of Cancer Genomes

    Directory of Open Access Journals (Sweden)

    Davies Jonathan J

    2006-12-01

    Full Text Available Abstract Background The prevalence of high resolution profiling of genomes has created a need for the integrative analysis of information generated from multiple methodologies and platforms. Although the majority of data in the public domain are gene expression profiles, and expression analysis software is available, the increase of array CGH studies has enabled integration of high throughput genomic and gene expression datasets. However, tools for direct mining and analysis of array CGH data are limited. Hence, there is a great need for analytical and display software tailored to cross-platform integrative analysis of cancer genomes. Results We have created a user-friendly Java application to facilitate sophisticated visualization and analysis such as cross-tumor and cross-platform comparisons. To demonstrate the utility of this software, we assembled array CGH data representing Affymetrix SNP chip, Stanford cDNA arrays and whole genome tiling path array platforms for cross comparison. This cancer genome database contains 267 profiles from commonly used cancer cell lines representing 14 different tissue types. Conclusion In this study we have developed an application for the visualization and analysis of data from high resolution array CGH platforms that can be adapted for analysis of multiple types of high throughput genomic datasets. Furthermore, we invite researchers using array CGH technology to deposit both their raw and processed data, as this will be a continually expanding database of cancer genomes. This publicly available resource, the System for Integrative Genomic Microarray Analysis (SIGMA) of cancer genomes, can be accessed at http://sigma.bccrc.ca.

  3. Usability of an internet-based platform (Next.Step for adolescent weight management

    Directory of Open Access Journals (Sweden)

    Pedro Sousa

    2015-02-01

    Full Text Available OBJECTIVE: The current study evaluates the usability perception of an e-therapeutic platform (supported by electronic processes and communication) aiming to promote behavior change and improve adolescent health status through increased and interactive contact between the adolescent and the clinical staff. METHODS: This was a correlational study with a sample of 48 adolescents (12-18 years) who attended a Pediatric Obesity Clinic between January and August of 2012. Participants were invited to access, during 24 weeks, the e-therapeutic multidisciplinary platform (Next.Step) in addition to the standard treatment program. A usability questionnaire was administered and the platform performance and utilization indicators were analyzed. RESULTS: The users' perception of satisfaction, efficiency, and effectiveness regarding the Next.Step platform was clearly positive. However, only 54.17% of the enrolled adolescents accessed the platform, with a mean task-completion rate of 14.55% (SD = 18.853). The more platform resources users consulted, the greater their tendency to enjoy the platform, to consider it exciting and quick, to consider the time spent on it useful, to find the information easy to access, and to find the login easier. Post-intervention assessment revealed a significant reduction in anthropometric and behavioral variables, including body mass index z-score, waist circumference percentile, hip circumference, and weekly screen time. CONCLUSION: These results highlight the importance of information and communication technologies in health information access and healthcare provision. Despite the limited adherence rate, platform users expressed a positive overall perception of its usability and presented positive anthropometric and behavioral progress.

  4. Editorial: Datasets for Learning Analytics

    NARCIS (Netherlands)

    Dietze, Stefan; Siemens, George; Taibi, Davide; Drachsler, Hendrik

    2018-01-01

    The European LinkedUp and LACE (Learning Analytics Community Exchange) projects have been responsible for setting up a series of data challenges at the LAK conferences 2013 and 2014 around the LAK dataset. The LAK dataset consists of a rich collection of full-text publications in the domain of

  5. The Geometry of Finite Equilibrium Datasets

    DEFF Research Database (Denmark)

    Balasko, Yves; Tvede, Mich

    We investigate the geometry of finite datasets defined by equilibrium prices, income distributions, and total resources. We show that the equilibrium condition imposes no restrictions if total resources are collinear, a property that is robust to small perturbations. We also show that the set of equilibrium datasets is path-connected when the equilibrium condition does impose restrictions on datasets, as for example when total resources are widely non-collinear.

  6. Influence of the type of crude oil to platforming effects

    International Nuclear Information System (INIS)

    Kafedzhiski, Branko; Crvenkova, Suzana; Zikovski, Toni

    1999-01-01

    Platforming is one of the most delicate processes in the refining industry and a permanent subject of research aimed at finding a higher degree of optimization. The final effectiveness of platforming depends directly on many parameters; one of the more important is the type of crude. The purpose of this work is to present the positive and negative effects on the platforming parameters caused by different types of crude at the OCTA Crude Oil Refinery, Skopje, Macedonia. (Original)

  7. Validating a continental-scale groundwater diffuse pollution model using regional datasets.

    Science.gov (United States)

    Ouedraogo, Issoufou; Defourny, Pierre; Vanclooster, Marnik

    2017-12-11

    In this study, we assess the validity of an African-scale groundwater pollution model for nitrate. In a previous study, we identified a statistical continental-scale groundwater pollution model for nitrate, using a pan-African meta-analysis of available nitrate groundwater pollution studies. The model was implemented in both Random Forest (RF) and multiple regression formats. For both approaches, we collected as predictors a comprehensive GIS database of 13 spatial attributes related to land use, soil type, hydrogeology, topography, climatology, region typology, nitrogen fertiliser application rate, and population density. In this paper, we validate the continental-scale model of groundwater contamination using a nitrate measurement dataset from three African countries. We discuss data availability, quality and scale issues as challenges in validation. Notwithstanding that the modelling procedure exhibited very good success using a continental-scale dataset (e.g. R² = 0.97 in the RF format using a cross-validation approach), the continental-scale model could not be used without recalibration to predict nitrate pollution at the country scale using regional data. In addition, when recalibrating the model using country-scale datasets, the order of model explanatory factors changes. This suggests that the structure and the parameters of a statistical spatially distributed groundwater degradation model for the African continent are strongly scale dependent.
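
The cross-validated R² quoted above can be reproduced in miniature. The sketch below substitutes an ordinary least-squares line for the paper's Random Forest and uses synthetic data, so the numbers are illustrative rather than the paper's; the point is only the k-fold validation logic.

```python
# k-fold cross-validated R² with a simple least-squares fit (numpy only).
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def kfold_r2(x, y, k=5):
    """Mean R² over k folds: fit on k-1 folds, score on the held-out fold."""
    idx = np.arange(len(x))
    scores = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        slope, intercept = np.polyfit(x[train], y[train], 1)
        scores.append(r_squared(y[fold], slope * x[fold] + intercept))
    return float(np.mean(scores))

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3.0 * x + rng.normal(0, 0.5, 100)   # strong linear signal, mild noise
print(round(kfold_r2(x, y), 3))
```

A high held-out R² on one dataset, as the abstract stresses, does not guarantee transfer to another scale: refitting on country-scale data can change both the fit quality and the ranking of predictors.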

  8. Efficient Interaction Recognition through Positive Action Representation

    Directory of Open Access Journals (Sweden)

    Tao Hu

    2013-01-01

    Full Text Available This paper proposes a novel approach that decomposes a two-person interaction into a Positive Action and a Negative Action for more efficient behavior recognition. The Positive Action plays the decisive role in a two-person exchange, so interaction recognition can be simplified to Positive Action-based recognition, focusing on an action representation of just one person. Recently, a new depth sensor, the Microsoft Kinect camera, has become widely available; it provides RGB-D data with 3D spatial information for quantitative analysis. However, there are few publicly accessible test datasets using this camera with which to assess two-person interaction recognition approaches. Therefore, we created a new dataset with six types of complex human interactions (named K3HI): kicking, pointing, punching, pushing, exchanging an object, and shaking hands. Three types of features were extracted for each Positive Action: joint, plane, and velocity features. We used continuous Hidden Markov Models (HMMs) to evaluate the Positive Action-based interaction recognition method and the traditional two-person interaction recognition approach on our test dataset. Experimental results showed that the proposed recognition technique is more accurate than the traditional method and shortens the sample training time, thereby achieving comprehensive superiority.
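
Of the three feature types listed above, the velocity features are the simplest to sketch: frame-to-frame displacements of each tracked 3D joint. The joint coordinates below are invented, and the paper's exact feature definitions may differ.

```python
# Velocity features from skeletal tracking: per-frame, per-joint
# displacement vectors between consecutive frames.

def velocity_features(frames):
    """frames: list of frames, each a list of (x, y, z) joint positions.
    Returns one displacement vector per joint for each consecutive pair."""
    feats = []
    for prev, cur in zip(frames, frames[1:]):
        feats.append([(cx - px, cy - py, cz - pz)
                      for (px, py, pz), (cx, cy, cz) in zip(prev, cur)])
    return feats

# Two joints (e.g. right hand, right foot) tracked over three frames.
frames = [
    [(0.0, 1.0, 2.0), (0.0, 0.0, 2.0)],
    [(0.1, 1.0, 2.0), (0.0, 0.0, 2.1)],
    [(0.3, 1.1, 2.0), (0.0, 0.0, 2.2)],
]
print(velocity_features(frames))
```

Sequences of such vectors are the kind of per-frame observations a continuous HMM can be trained on.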

  9. Web-GIS platform for monitoring and forecasting of regional climate and ecological changes

    Science.gov (United States)

    Gordov, E. P.; Krupchatnikov, V. N.; Lykosov, V. N.; Okladnikov, I.; Titov, A. G.; Shulgina, T. M.

    2012-12-01

    The growing volume of environmental data from sensors and model outputs makes the development of a software infrastructure, based on modern information and telecommunication technologies, for the support of integrated scientific research in the Earth sciences an urgent and important task (Gordov et al., 2012; van der Wel, 2005). It should be considered that the inherent heterogeneity of datasets obtained from different sources and institutions not only hampers the interchange of data and analysis results but also complicates their intercomparison, decreasing the reliability of analysis results. However, modern geophysical data processing techniques allow different technological solutions to be combined when organizing such information resources. It is now generally accepted that an information-computational infrastructure should rely on the combined use of web and GIS technologies for creating applied information-computational web systems (Titov et al., 2009; Gordov et al., 2010; Gordov, Okladnikov and Titov, 2011). Using these approaches to develop internet-accessible thematic information-computational systems, and to arrange data and knowledge interchange between them, is a very promising way to create a distributed information-computational environment supporting multidisciplinary regional and global research in the Earth sciences, including analysis of climate changes and their impact on the spatial-temporal distribution and state of vegetation. An experimental software and hardware platform providing the operation of a web-oriented production and research center for regional climate change investigations, which combines a modern Web 2.0 approach, GIS functionality and capabilities for running climate and meteorological models, processing of large geophysical datasets, visualization, joint software development by distributed research groups, scientific analysis and organization of students' and post-graduate students' education is

  10. Hooke: an open software platform for force spectroscopy.

    Science.gov (United States)

    Sandal, Massimo; Benedetti, Fabrizio; Brucale, Marco; Gomez-Casado, Alberto; Samorì, Bruno

    2009-06-01

    Hooke is an open source, extensible software intended for analysis of atomic force microscope (AFM)-based single molecule force spectroscopy (SMFS) data. We propose it as a platform on which published and new algorithms for SMFS analysis can be integrated in a standard, open fashion, as a general solution to the current lack of standard software for SMFS data analysis. Specific features and support for file formats are coded as independent plugins. Any user can code new plugins, extending the software capabilities. Basic automated dataset filtering and semi-automatic analysis facilities are included. Software and documentation are available at http://code.google.com/p/hooke. Hooke is free software under the GNU Lesser General Public License.
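
The plugin architecture described here, where features are coded as independent plugins that any user can extend, is commonly built around a registry of processing functions. A minimal sketch follows; the names (`PLUGINS`, `register`, the example filters) are hypothetical and not Hooke's real API.

```python
# Plugin registry pattern: each curve-processing step registers itself
# under a name, and an analysis pipeline is just a list of names.
PLUGINS = {}

def register(name):
    """Decorator that adds a curve-processing function to the registry."""
    def wrap(func):
        PLUGINS[name] = func
        return func
    return wrap

@register("baseline_subtract")
def baseline_subtract(curve):
    # Subtract the mean of the first 10 samples, a crude baseline estimate.
    baseline = sum(curve[:10]) / 10.0
    return [v - baseline for v in curve]

@register("invert")
def invert(curve):
    return [-v for v in curve]

def run(curve, pipeline):
    """Apply the named plugins to a force curve, in order."""
    for name in pipeline:
        curve = PLUGINS[name](curve)
    return curve

flat = [1.0] * 10 + [5.0, 7.0]
print(run(flat, ["baseline_subtract", "invert"]))
```

New analysis algorithms can then be dropped in as additional decorated functions without touching the core engine, which is the extensibility property the abstract emphasizes.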

  11. An Analysis of Impact Factors for Positioning Performance in WLAN Fingerprinting Systems Using Ishikawa Diagrams and a Simulation Platform

    Directory of Open Access Journals (Sweden)

    Keqiang Liu

    2017-01-01

    Full Text Available Many factors influence positioning performance in WLAN RSSI fingerprinting systems, and summarizing these factors is an important but challenging job. Moreover, impact analysis of non-algorithm factors is significant for system application and quality control, but little research has been conducted on it. This paper analyzes and summarizes the potential impact factors using an Ishikawa diagram covering radio signal transmission, propagation, reception, and processing. A simulation platform was developed to facilitate the analysis experiments, and the paper classifies the potential factors into controllable, uncontrollable, nuisance, and held-constant factors with simulation feasibility in mind. Five non-algorithm controllable factors were considered: AP density, AP distribution, radio signal propagation attenuation factor, radio signal propagation noise, and RP density; the one-factor-at-a-time (OFAT) analysis method was adopted in the experiments. Positioning results were obtained using deterministic and probabilistic algorithms, and the error is presented by RMSE and CDF. The results indicate that high AP density, signal propagation attenuation factor, and RP density, together with a low signal propagation noise level, are favorable to better performance, while AP distribution has no particular impact pattern on positioning error. Overall, this paper makes a potentially significant contribution to the quality control of WLAN fingerprinting solutions.
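
As background to the deterministic algorithm evaluated here: nearest-neighbour RSSI fingerprinting matches a measured signal vector against a radio map of reference points (RPs). A toy sketch with invented RSSI values and coordinates:

```python
# Deterministic (nearest-neighbour) WLAN fingerprinting in miniature.
import math

# Radio map: RP position (x, y) -> RSSI fingerprint from 3 APs, in dBm.
RADIO_MAP = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-55, -50, -75],
    (0.0, 5.0): [-60, -72, -45],
}

def locate(rssi):
    """Return the RP whose fingerprint is nearest in Euclidean RSSI space."""
    return min(RADIO_MAP, key=lambda rp: math.dist(RADIO_MAP[rp], rssi))

def rmse(errors):
    """Root-mean-square error over a list of positioning errors (metres)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

est = locate([-56, -52, -74])       # noisy reading taken near RP (5, 0)
err = math.dist(est, (5.0, 1.0))    # true position was (5, 1)
print(est, round(rmse([err]), 2))
```

Denser RPs shrink the grid-snapping error of this estimator, and lower RSSI noise makes the nearest fingerprint more likely to be the correct one, consistent with the factor analysis reported above.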

  12. Genomics Portals: integrative web-platform for mining genomics data.

    Science.gov (United States)

    Shinde, Kaustubh; Phatak, Mukta; Freudenberg, Johannes M; Chen, Jing; Li, Qian; Joshi, Vineet K; Hu, Zhen; Ghosh, Krishnendu; Meller, Jaroslaw; Medvedovic, Mario

    2010-01-13

    A large amount of experimental data generated by modern high-throughput technologies is available through various public repositories. Our knowledge about molecular interaction networks, functional biological pathways and transcriptional regulatory modules is rapidly expanding, and is being organized in lists of functionally related genes. Jointly, these two sources of information hold a tremendous potential for gaining new insights into the functioning of living systems. The Genomics Portals platform integrates access to an extensive knowledge base and a large database of human, mouse, and rat genomics data with basic analytical and visualization tools. It provides the context for analyzing and interpreting new experimental data and a tool for effective mining of a large number of publicly available genomics datasets stored in the back-end databases. The uniqueness of this platform lies in the volume and the diversity of genomics data that can be accessed and analyzed (gene expression, ChIP-chip, ChIP-seq, epigenomics, computationally predicted binding sites, etc.), and in the integration with an extensive knowledge base that can be used in such analysis. The integrated access to primary genomics data, functional knowledge and analytical tools makes the Genomics Portals platform a unique tool for interpreting results of new genomics experiments and for mining the vast amount of data stored in its back-end databases. Genomics Portals can be accessed and used freely at http://GenomicsPortals.org.

  13. Inertial particle focusing in serpentine channels on a centrifugal platform

    Science.gov (United States)

    Shamloo, Amir; Mashhadian, Ali

    2018-01-01

    Inertial particle focusing, a powerful passive method, is widely used in diagnostic test devices. It is common to use a curved channel in this approach to achieve particle focusing by balancing the secondary flow drag force and the inertial lift force. Here, we present a focusing device on a disk based on the interaction of the secondary flow drag force, the inertial lift force, and centrifugal forces. By choosing a channel whose cross section has a low aspect ratio, the mixing effect of the secondary flow becomes negligible. To calculate the inertial lift force exerted on the particle by the fluid, the fluid-particle interaction is investigated accurately through implementation of a 3D Direct Numerical Simulation (DNS) method. Particle focusing in three serpentine channels with corner angles of 75°, 85°, and 90° is investigated for three polystyrene particles with diameters of 8 μm, 9.9 μm, and 13 μm. To show the simulation reliability, the results obtained from simulations of two examples, namely particle focusing and the centrifugal platform, are verified against experimental counterparts. The effects of the angular velocity of the disk on the fluid velocity and on the focusing parameters are studied. Fluid velocity in the channel with a corner angle of 75° is greater than in the two other channels. Furthermore, the particle equilibrium positions at the channel cross section are obtained at the outlet; there are two equilibrium positions, located at the centers of the long walls. Finally, the effect of particle density on the focusing length is investigated. A particle with a higher density and larger diameter is focused within a shorter length of the channel than its counterpart with a lower density and smaller diameter. The channel with a corner angle of 90° has better focusing efficiency than the other channels. This design focuses particles without using any pump or sheath flow.

  14. The Role of Datasets on Scientific Influence within Conflict Research.

    Science.gov (United States)

    Van Holt, Tracy; Johnson, Jeffery C; Moates, Shiloh; Carley, Kathleen M

    2016-01-01

    operationalization of conflict. In fact, 94% of the works on the CP that analyzed data either relied on publicly available datasets, or they generated a dataset and made it public. These datasets appear to be important in the development of conflict research, allowing for cross-case comparisons and comparisons to previous works.

  15. Oceanographic profile plankton, Temperature Salinity and other measurements collected using bottle from various platforms in the South Pacific Ocean from 1997 to 1998 (NODC Accession 0014651)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Temperature, salinity, oxygen, nutrients, and other measurements found in the bottle dataset taken from the SNP-1, HUAMANGA (fishing boat) and other platforms in the...

  16. An Annotated Dataset of 14 Meat Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This note describes a dataset consisting of 14 annotated images of meat. Points of correspondence are placed on each image. As such, the dataset can be readily used for building statistical models of shape. Further, format specifications and terms of use are given.

  17. Automated Fault Interpretation and Extraction using Improved Supplementary Seismic Datasets

    Science.gov (United States)

    Bollmann, T. A.; Shank, R.

    2017-12-01

    During the interpretation of seismic volumes, it is necessary to interpret faults along with horizons of interest. With the improvement of technology, the interpretation of faults can be expedited with the aid of different algorithms that create supplementary seismic attributes, such as semblance and coherency. These products highlight discontinuities, but still need a large amount of human interaction to interpret faults and are plagued by noise and stratigraphic discontinuities. Hale (2013) presents a method to improve on these datasets by creating what is referred to as a Fault Likelihood volume. In general, these volumes contain less noise and do not emphasize stratigraphic features. Instead, planar features within a specified strike and dip range are highlighted. Once a satisfactory Fault Likelihood Volume is created, extraction of fault surfaces is much easier. The extracted fault surfaces are then exported to interpretation software for QC. Numerous software packages have implemented this methodology with varying results. After investigating these platforms, we developed a preferred Automated Fault Interpretation workflow.

  18. Research on cross - Project software defect prediction based on transfer learning

    Science.gov (United States)

    Chen, Ya; Ding, Xiaoming

    2018-04-01

    Cross-project software defect prediction faces two challenges: distribution differences between the source project and target project datasets, and class imbalance within the datasets. To address these, we propose a cross-project software defect prediction method based on transfer learning, named NTrA. First, the class imbalance of the source project data is resolved with the Augmented Neighborhood Cleaning Algorithm. Second, the data gravity method is used to assign different weights on the basis of the attribute similarity between source and target project data. Finally, a defect prediction model is constructed using the TrAdaBoost algorithm. Experiments were conducted using data from NASA and SOFTLAB, taken from the published PROMISE dataset. The results show that the method achieves good recall and F-measure values and good prediction results.
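
The attribute-similarity weighting step (the "data gravity" idea) can be sketched as follows. The scoring rule used here, the fraction of a source instance's attributes that fall inside the target project's observed range, is a simplified stand-in; the paper's exact formula may differ, and all data below are invented.

```python
# Weight source-project instances by how well they resemble the target
# project's feature distribution, so dissimilar instances contribute less
# to training.

def gravity_weight(instance, target_min, target_max):
    """Fraction of attributes inside the target project's observed range,
    used as a training weight (a simple stand-in for the paper's formula)."""
    inside = sum(1 for v, lo, hi in zip(instance, target_min, target_max)
                 if lo <= v <= hi)
    return inside / len(instance)

target = [[10, 0.2], [14, 0.4], [12, 0.3]]     # target-project metric rows
t_min = [min(col) for col in zip(*target)]     # per-attribute range bounds
t_max = [max(col) for col in zip(*target)]

source = [[11, 0.25], [40, 0.9]]               # source-project instances
weights = [gravity_weight(s, t_min, t_max) for s in source]
print(weights)
```

In a full pipeline these weights would initialize the instance weights that a transfer-boosting learner such as TrAdaBoost then re-adjusts over its iterations.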

  19. A Bacterial Analysis Platform: An Integrated System for Analysing Bacterial Whole Genome Sequencing Data for Clinical Diagnostics and Surveillance

    DEFF Research Database (Denmark)

    Thomsen, Martin Christen Frølund; Ahrenfeldt, Johanne; Bellod Cisneros, Jose Luis

    2016-01-01

    and made publicly available, providing easy-to-use automated analysis of bacterial whole genome sequencing data. The platform may be of immediate relevance as a guide for investigators using whole genome sequencing for clinical diagnostics and surveillance. The platform is freely available at: https://cge.cbs.dtu.dk/services [...] and antimicrobial resistance genes. A short printable report for each sample will be provided, and an Excel spreadsheet containing all the metadata and a summary of the results for all submitted samples can be downloaded. The pipeline was benchmarked using datasets previously used to test the individual services

  20. SIMADL: Simulated Activities of Daily Living Dataset

    Directory of Open Access Journals (Sweden)

    Talal Alshammari

    2018-04-01

    Full Text Available With the realisation of the Internet of Things (IoT) paradigm, the analysis of Activities of Daily Living (ADLs) in a smart home environment is becoming an active research domain. The existence of representative datasets is a key requirement to advance research in smart home design. Such datasets are an integral part of the visualisation of new smart home concepts as well as the validation and evaluation of emerging machine learning models. Machine learning techniques that can learn ADLs from sensor readings are used to classify, predict and detect anomalous patterns. Such techniques require data that represent relevant smart home scenarios for training, testing and validation. However, the development of such machine learning techniques is limited by the lack of real smart home datasets, due to the excessive cost of building real smart homes. This paper provides two datasets, for classification and anomaly detection. The datasets are generated using OpenSHS (Open Smart Home Simulator), a simulation software for dataset generation. OpenSHS records the daily activities of a participant within a virtual environment. Seven participants simulated their ADLs for different contexts, e.g., weekdays, weekends, mornings and evenings. Eighty-four files in total were generated, representing approximately 63 days' worth of activities. Forty-two files simulating the classification of ADLs form the classification dataset, and the other forty-two files, in which anomalous patterns were simulated and injected, form the anomaly detection dataset.

  1. The NOAA Dataset Identifier Project

    Science.gov (United States)

    de la Beaujardiere, J.; Mccullough, H.; Casey, K. S.

    2013-12-01

    The US National Oceanic and Atmospheric Administration (NOAA) initiated a project in 2013 to assign persistent identifiers to datasets archived at NOAA and to create informational landing pages about those datasets. The goals of this project are to enable the citation of datasets used in products and results in order to help provide credit to data producers, to support traceability and reproducibility, and to enable tracking of data usage and impact. A secondary goal is to encourage the submission of datasets for long-term preservation, because only archived datasets will be eligible for a NOAA-issued identifier. A team was formed with representatives from the National Geophysical, Oceanographic, and Climatic Data Centers (NGDC, NODC, NCDC) to resolve questions including which identifier scheme to use (answer: Digital Object Identifier - DOI), whether or not to embed semantics in identifiers (no), the level of granularity at which to assign identifiers (as coarsely as reasonable), how to handle ongoing time-series data (do not break into chunks), creation mechanism for the landing page (stylesheet from formal metadata record preferred), and others. Decisions made and implementation experience gained will inform the writing of a Data Citation Procedural Directive to be issued by the Environmental Data Management Committee in 2014. Several identifiers have been issued as of July 2013, with more on the way. NOAA is now reporting the number as a metric to federal Open Government initiatives. This paper will provide further details and status of the project.

  2. Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Kozacik, Stephen [EM Photonics, Inc., Newark, DE (United States)

    2017-05-15

    Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.

  3. GWATCH: a web platform for automated gene association discovery analysis

    Science.gov (United States)

    2014-01-01

    Background As genome-wide sequence analyses for complex human disease determinants are expanding, it is increasingly necessary to develop strategies to promote discovery and validation of potential disease-gene associations. Findings Here we present a dynamic web-based platform – GWATCH – that automates and facilitates four steps in genetic epidemiological discovery: 1) Rapid gene association search and discovery analysis of large genome-wide datasets; 2) Expanded visual display of gene associations for genome-wide variants (SNPs, indels, CNVs), including Manhattan plots, 2D and 3D snapshots of any gene region, and a dynamic genome browser illustrating gene association chromosomal regions; 3) Real-time validation/replication of candidate or putative genes suggested from other sources, limiting Bonferroni genome-wide association study (GWAS) penalties; 4) Open data release and sharing by eliminating privacy constraints (The National Human Genome Research Institute (NHGRI) Institutional Review Board (IRB), informed consent, The Health Insurance Portability and Accountability Act (HIPAA) of 1996 etc.) on unabridged results, which allows for open access comparative and meta-analysis. Conclusions GWATCH is suitable for both GWAS and whole genome sequence association datasets. We illustrate the utility of GWATCH with three large genome-wide association studies for HIV-AIDS resistance genes screened in large multicenter cohorts; however, association datasets from any study can be uploaded and analyzed by GWATCH. PMID:25374661

  4. Validity and reliability of stillbirth data using linked self-reported and administrative datasets.

    Science.gov (United States)

    Hure, Alexis J; Chojenta, Catherine L; Powers, Jennifer R; Byles, Julie E; Loxton, Deborah

    2015-01-01

    A high rate of stillbirth was previously observed in the Australian Longitudinal Study of Women's Health (ALSWH). Our primary objective was to test the validity and reliability of self-reported stillbirth data linked to state-based administrative datasets. Self-reported data, collected as part of the ALSWH cohort born in 1973-1978, were linked to three administrative datasets for women in New South Wales, Australia (n = 4374): the Midwives Data Collection; Admitted Patient Data Collection; and Perinatal Death Review Database. Linkages were obtained from the Centre for Health Record Linkage for the period 1996-2009. True cases of stillbirth were defined by being consistently recorded in two or more independent data sources. Sensitivity, specificity, positive predictive value, negative predictive value, percent agreement, and kappa statistics were calculated for each dataset. Forty-nine women reported 53 stillbirths. No dataset was 100% accurate. The administrative datasets performed better than self-reported data, with high accuracy and agreement. Self-reported data showed high sensitivity (100%) but low specificity (30%), meaning women who had a stillbirth always reported it, but there was also over-reporting of stillbirths. About half of the misreported cases in the ALSWH were able to be removed by identifying inconsistencies in longitudinal data. Data linkage provides great opportunity to assess the validity and reliability of self-reported study data. Conversely, self-reported study data can help to resolve inconsistencies in administrative datasets. Quantifying the strengths and limitations of both self-reported and administrative data can improve epidemiological research, especially by guiding methods and interpretation of findings.
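
The agreement statistics reported in this record (sensitivity, specificity, positive and negative predictive value, percent agreement, kappa) all derive from a 2x2 table of self-report versus linked records. A small sketch with invented counts, not the ALSWH figures:

```python
# Validity statistics from a 2x2 confusion table:
# tp = reported and confirmed, fp = reported but not confirmed,
# fn = not reported but confirmed, tn = neither reported nor confirmed.

def diagnostics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)          # sensitivity
    spec = tn / (tn + fp)          # specificity
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    agree = (tp + tn) / n          # percent agreement (as a proportion)
    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    pe = p_yes + p_no
    kappa = (agree - pe) / (1 - pe)
    return sens, spec, ppv, npv, agree, kappa

# Invented counts mimicking the abstract's pattern: every true case is
# reported (sensitivity 1.0) but over-reporting drags the PPV down.
sens, spec, ppv, npv, agree, kappa = diagnostics(tp=50, fp=120, fn=0, tn=4200)
print(round(sens, 2), round(spec, 2), round(ppv, 2), round(kappa, 2))
```

Note how the raw agreement proportion stays high even when kappa is modest: with rare outcomes like stillbirth, chance agreement dominates, which is why kappa is reported alongside the simpler statistics.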

  5. Control Measure Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — The EPA Control Measure Dataset is a collection of documents describing air pollution control available to regulated facilities for the control and abatement of air...

  6. TAILS N-terminomic and proteomic datasets of healthy human dental pulp

    Directory of Open Access Journals (Sweden)

    Ulrich Eckhard

    2015-12-01

    Full Text Available The data described here provide an in-depth proteomic assessment of the human dental pulp proteome and N-terminome (Eckhard et al., 2015 [1]). A total of 9 human dental pulps were processed and analyzed by the positional proteomics technique TAILS (Terminal Amine Isotopic Labeling of Substrates) N-terminomics. 38 liquid chromatography tandem mass spectrometry (LC-MS/MS) datasets were collected and analyzed using four database search engines in combination with statistical downstream evaluation, to yield by far the largest proteomic and N-terminomic dataset of any dental tissue to date. The raw mass spectrometry data and the corresponding metadata have been deposited in ProteomeXchange with the PXD identifier ; Supplementary Tables described in this article are available via Mendeley Data (10.17632/555j3kk4sw.1).

  7. Exel's crossed product and crossed products by completely positive maps

    DEFF Research Database (Denmark)

    Kwaśniewski, Bartosz Kosma

    2017-01-01

    construction we extend a result of Brownlowe, Raeburn and Vittadello, by showing that the C∗-algebra of an arbitrary infinite graph E can be realized as a crossed product of the diagonal algebra DE by a 'Perron-Frobenius' operator L. The important difference to the previous result is that in general...

  8. FIGENIX: Intelligent automation of genomic annotation: expertise integration in a new software platform

    Directory of Open Access Journals (Sweden)

    Pontarotti Pierre

    2005-08-01

    Full Text Available Abstract Background Two of the main objectives of the genomic and post-genomic era are to structurally and functionally annotate genomes, which consists of detecting genes' position and structure and inferring their function (as well as other features of genomes). Structural and functional annotation both require the complex chaining of numerous different software tools, algorithms and methods under the supervision of a biologist. The automation of these pipelines is necessary to manage the huge amounts of data released by sequencing projects. Several pipelines already automate some of this complex chaining but still require substantial contributions from biologists for supervising and controlling the results at various steps. Results Here we propose an innovative automated platform, FIGENIX, which includes an expert system capable of substituting for human expertise at several key steps. FIGENIX currently automates complex pipelines of structural and functional annotation under the supervision of the expert system (which allows it, for example, to make key decisions, check intermediate results or refine the dataset). The quality of the results produced by FIGENIX is comparable to that obtained by expert biologists, with a drastic gain in time and avoidance of errors due to human manipulation of data. Conclusion The core engine and expert system of the FIGENIX platform currently handle complex annotation processes of broad interest for the genomic community. They could be easily adapted to new or more specialized pipelines, such as the annotation of miRNAs, the classification of complex multigenic families, annotation of regulatory elements and other genomic features of interest.

  9. TranslatomeDB: a comprehensive database and cloud-based analysis platform for translatome sequencing data.

    Science.gov (United States)

    Liu, Wanting; Xiang, Lunping; Zheng, Tingkai; Jin, Jingjie; Zhang, Gong

    2018-01-04

    Translation is a key regulatory step linking transcriptome and proteome. Two major methods of translatome investigation are RNC-seq (sequencing of translating mRNA) and Ribo-seq (ribosome profiling). To facilitate the investigation of translation, we built a comprehensive database, TranslatomeDB (http://www.translatomedb.net/), which provides collection and integrated analysis of published and user-generated translatome sequencing data. The current version includes 2453 Ribo-seq, 10 RNC-seq and their 1394 corresponding mRNA-seq datasets in 13 species. The database emphasizes analysis functions in addition to dataset collection. Differential gene expression (DGE) analysis can be performed between any two datasets of the same species and type, on both transcriptome and translatome levels. The translation indices (translation ratio, elongation velocity index and translational efficiency) can be calculated to quantitatively evaluate translation initiation efficiency and elongation velocity. All datasets were analyzed using a unified, robust, accurate and experimentally verifiable pipeline based on the FANSe3 mapping algorithm, with edgeR for DGE analysis. TranslatomeDB also allows users to upload their own datasets and analyze them with the identical unified pipeline. We believe that TranslatomeDB is a comprehensive platform and knowledgebase for translatome and proteome research, freeing biologists from searching, analyzing and comparing huge sequencing datasets and from the need for local computational power. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
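A minimal sketch of a translation index of the kind described above: translational efficiency as the per-gene ratio of translating-mRNA (RNC-seq or Ribo-seq) reads to matched mRNA-seq reads. Gene names and counts are hypothetical, and the real pipeline (FANSe3 with edgeR) applies normalization for library size and gene length that this sketch omits.

```python
# Per-gene translation index: ratio of translating-mRNA reads to total
# mRNA reads. Genes absent from the mRNA-seq dataset get no ratio.

def translation_indices(translatome_counts, mrna_counts):
    """Return gene -> translatome/mRNA read-count ratio (None if undefined)."""
    indices = {}
    for gene, t in translatome_counts.items():
        m = mrna_counts.get(gene, 0)
        indices[gene] = t / m if m > 0 else None
    return indices

# Hypothetical matched counts for three genes.
rnc = {"GAPDH": 900, "TP53": 120, "MYC": 40}
mrna = {"GAPDH": 1000, "TP53": 300, "MYC": 0}
te = translation_indices(rnc, mrna)
```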

  10. The Kinetics Human Action Video Dataset

    OpenAIRE

    Kay, Will; Carreira, Joao; Simonyan, Karen; Zhang, Brian; Hillier, Chloe; Vijayanarasimhan, Sudheendra; Viola, Fabio; Green, Tim; Back, Trevor; Natsev, Paul; Suleyman, Mustafa; Zisserman, Andrew

    2017-01-01

    We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some ...

  11. Dynamics of dump truck entrance onto the hoist platform of a mine inclined elevator

    Energy Technology Data Exchange (ETDEWEB)

    Nosyrev, B.A.; Popov, Yu.V.; Mukhutdinov, Sh.D. (Sverdlovskii Gornyi Institut (USSR))

    1989-01-01

    Analyzes the feasibility of transporting heavy-duty dump trucks along slopes on special platforms in coal surface mines. The platforms are hoisted by winches. Theoretical problems associated with hoisting a loaded platform upwards are analyzed. Problems associated with truck travel in the platform area, its exact positioning and mechanical vibrations of the platform caused by truck movement are discussed. Vibrations of the platform with a loaded truck and vibration amplitudes are analyzed. Five states of the system are evaluated. Methods for prevention of excessive vibrations by optimization of platform design and use of flexible elements are evaluated. Optimum speed of truck movement for platform entering is recommended.

  12. The SAIL databank: linking multiple health and social care datasets.

    Science.gov (United States)

    Lyons, Ronan A; Jones, Kerina H; John, Gareth; Brooks, Caroline J; Verplancke, Jean-Philippe; Ford, David V; Brown, Ginevra; Leake, Ken

    2009-01-16

    Vast amounts of data are collected about patients and service users in the course of health and social care service delivery. Electronic data systems for patient records have the potential to revolutionise service delivery and research. But in order to achieve this, it is essential that the ability to link the data at the individual record level be retained whilst adhering to the principles of information governance. The SAIL (Secure Anonymised Information Linkage) databank has been established using disparate datasets, and over 500 million records from multiple health and social care service providers have been loaded to date, with further growth in progress. Having established the infrastructure of the databank, the aim of this work was to develop and implement an accurate matching process to enable the assignment of a unique Anonymous Linking Field (ALF) to person-based records to make the databank ready for record-linkage research studies. An SQL-based matching algorithm (MACRAL, Matching Algorithm for Consistent Results in Anonymised Linkage) was developed for this purpose. Firstly, the suitability of using a valid NHS number as the basis of a unique identifier was assessed using MACRAL. Secondly, MACRAL was applied in turn to match primary care, secondary care and social services datasets to the NHS Administrative Register (NHSAR), to assess the efficacy of this process and the optimum matching technique. The validation of using the NHS number yielded specificity values > 99.8% and sensitivity values > 94.6% using probabilistic record linkage (PRL) at the 50% threshold, with low error rates. The SAIL databank represents a research-ready platform for record-linkage studies.
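A toy version of the matching idea behind an algorithm like MACRAL, assuming weighted agreement across identifier fields and a 50% acceptance threshold; the fields, weights, and records are invented for illustration and are not SAIL's actual configuration.

```python
# Weighted field-agreement matching: score each candidate register entry,
# and assign the Anonymous Linking Field (ALF) of the best match if its
# score clears the threshold. Weights and records are hypothetical.

FIELD_WEIGHTS = {"nhs_number": 0.6, "surname": 0.15, "dob": 0.15, "postcode": 0.1}

def match_score(rec_a, rec_b):
    """Weighted fraction of identifier fields that agree (non-empty and equal)."""
    return sum(w for f, w in FIELD_WEIGHTS.items()
               if rec_a.get(f) and rec_a.get(f) == rec_b.get(f))

def assign_alf(record, register, threshold=0.5):
    """Return the ALF of the best-matching register entry, or None."""
    best = max(register, key=lambda r: match_score(record, r))
    return best["alf"] if match_score(record, best) >= threshold else None

register = [
    {"alf": 1, "nhs_number": "943-476-5919", "surname": "JONES",
     "dob": "1960-01-02", "postcode": "SA2"},
    {"alf": 2, "nhs_number": "943-476-5920", "surname": "EVANS",
     "dob": "1971-03-04", "postcode": "CF1"},
]
# Incoming record agrees on NHS number, surname and DOB but not postcode.
incoming = {"nhs_number": "943-476-5919", "surname": "JONES",
            "dob": "1960-01-02", "postcode": "SA1"}
alf = assign_alf(incoming, register)
```

In a real deployment the identifiers would be anonymised before matching, and the threshold would be tuned against the sensitivity/specificity trade-off reported in the abstract.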

  13. Comparison of CORA and EN4 in-situ datasets validation methods, toward a better quality merged dataset.

    Science.gov (United States)

    Szekely, Tanguy; Killick, Rachel; Gourrion, Jerome; Reverdin, Gilles

    2017-04-01

    CORA and EN4 are both global delayed-mode validated in-situ ocean temperature and salinity datasets distributed by the Met Office (http://www.metoffice.gov.uk/) and Copernicus (www.marine.copernicus.eu). A large part of the profiles distributed by CORA and EN4 in recent years are Argo profiles from the Argo DAC, but profiles are also extracted from the World Ocean Database, along with TESAC profiles from GTSPP. In the case of CORA, data coming from the EUROGOOS Regional Operational Observing Systems (ROOS) operated by European institutes not managed by National Data Centres, as well as other profile datasets provided by scientific sources, can also be found (sea mammal profiles from MEOP, XBT datasets from cruises ...). (EN4 also takes data from the ASBO dataset to supplement observations in the Arctic.) The first advantage of this new merged product is to enhance the space and time coverage at global and European scales for the period from 1950 until the year before the current year. This product is updated once a year, and T&S gridded fields are also generated for the period from 1990 to year n-1. The enhancement compared to the previous CORA product will be presented. Despite the fact that the profiles distributed by both datasets are mostly the same, the quality control procedures developed by the Met Office and Copernicus teams differ, sometimes leading to different quality control flags for the same profile. In 2016 a new study was started that aims to compare both validation procedures and move towards a Copernicus Marine Service dataset with the best features of CORA and EN4 validation. A reference dataset composed of the full set of in-situ temperature and salinity measurements collected by Coriolis during 2015 is used. These measurements have been made with a wide range of instruments (XBTs, CTDs, Argo floats, instrumented sea mammals, ...), covering the global ocean. The reference dataset has been validated simultaneously by both teams. An exhaustive comparison of the

  14. Integrative multi-platform meta-analysis of gene expression profiles in pancreatic ductal adenocarcinoma patients for identifying novel diagnostic biomarkers.

    Science.gov (United States)

    Irigoyen, Antonio; Jimenez-Luna, Cristina; Benavides, Manuel; Caba, Octavio; Gallego, Javier; Ortuño, Francisco Manuel; Guillen-Ponce, Carmen; Rojas, Ignacio; Aranda, Enrique; Torres, Carolina; Prados, Jose

    2018-01-01

    Applying differentially expressed genes (DEGs) to identify feasible biomarkers in diseases can be a hard task when working with heterogeneous datasets. Expression data are strongly influenced by technology, sample preparation processes, and/or labeling methods. The proliferation of different microarray platforms for measuring gene expression increases the need to develop models able to compare their results, especially when different technologies can lead to signal values that vary greatly. Integrative meta-analysis can significantly improve the reliability and robustness of DEG detection. The objective of this work was to develop an integrative approach for identifying potential cancer biomarkers by integrating gene expression data from two different platforms. Pancreatic ductal adenocarcinoma (PDAC), where there is an urgent need to find new biomarkers due to its late diagnosis, is an ideal candidate for testing this technology. Expression data from two different datasets, namely Affymetrix and Illumina (18 and 36 PDAC patients, respectively), as well as from 18 healthy controls, were used for this study. A meta-analysis based on an empirical Bayesian methodology (ComBat) was then proposed to integrate these datasets. DEGs were finally identified from the integrated data by using the statistical programming language R. After our integrative meta-analysis, 5 genes were commonly identified within the individual analyses of the independent datasets. In addition, 28 novel genes not reported by the individual analyses ('gained' genes) were discovered. Several of these gained genes have already been related to other gastroenterological tumors. The proposed integrative meta-analysis has revealed novel DEGs that may play an important role in PDAC and could be potential biomarkers for diagnosing the disease.
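A much-simplified stand-in for the integration step: ComBat fits an empirical Bayes location/scale model, whereas this sketch merely standardizes each gene within each platform before pooling, which conveys the idea of removing platform-specific offsets and scales. All values are invented.

```python
# Per-platform standardization of one gene's expression values before
# pooling across platforms. This is a pedagogical stand-in for ComBat,
# not the empirical Bayes method itself.

from statistics import mean, pstdev

def standardize_per_platform(platform_values):
    """platform -> list of values for one gene; return pooled z-scores."""
    pooled = []
    for values in platform_values.values():
        mu, sd = mean(values), pstdev(values)
        pooled.extend((v - mu) / sd if sd > 0 else 0.0 for v in values)
    return pooled

# Hypothetical log2 intensities for one gene on two platforms; note the
# platform-specific offset and scale that standardization removes.
affy = [8.1, 8.4, 7.9, 8.6]
illumina = [11.2, 11.9, 10.8, 12.1]
pooled = standardize_per_platform({"affymetrix": affy, "illumina": illumina})
```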

  15. VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies.

    Science.gov (United States)

    Lee, Yooyoung; Micheals, Ross J; Filliben, James J; Phillips, P Jonathon

    2013-01-01

    The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illuminations, environments, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST's measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition, 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform.

  16. Cross-calibrating Spatial Positions of Light-viewing Diagnostics using Plasma Edge Sweeps in DIII-D

    International Nuclear Information System (INIS)

    Solomon, W.M.; Burrell, K.H.; Gohil, P.; Groebner, R.; Kaplan, D.

    2003-01-01

    An experimental technique is presented that permits diagnostics viewing light from the plasma edge to be spatially calibrated relative to one another. By sweeping the plasma edge, each chord of each diagnostic sweeps out a portion of the light emission profile. A nonlinear least-squares fit to such data provides superior cross-calibration of diagnostics located at different toroidal locations compared with simple surveying. Another advantage of the technique is that it can be used to monitor the position of viewing chords during an experimental campaign to ensure that alignment does not change over time. Moreover, should such a change occur, the data can still be cross-calibrated and its usefulness retained.
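The sweep technique can be imitated numerically: each chord records the edge emission profile as the plasma edge moves past it, and a least-squares fit of the recorded trace against a common profile recovers the chord's position. Here an assumed Gaussian profile and a brute-force 1-D scan stand in for the paper's full nonlinear fit; all numbers are invented.

```python
# Recover a viewing chord's position from its signal during an edge sweep
# by least-squares fitting against a model emission profile.

import math

def profile(x):
    """Model edge emission profile (Gaussian, arbitrary units)."""
    return math.exp(-x * x / 2.0)

def fit_chord_position(edge_positions, signal, lo=-3.0, hi=3.0, step=0.001):
    """Scan candidate chord positions, minimizing sum-of-squares misfit."""
    best_pos, best_sse = None, float("inf")
    pos = lo
    while pos <= hi:
        sse = sum((profile(pos - e) - s) ** 2
                  for e, s in zip(edge_positions, signal))
        if sse < best_sse:
            best_pos, best_sse = pos, sse
        pos += step
    return best_pos

# Synthetic sweep: the edge moves from -2 to 2 while a chord actually
# sitting at x = 0.7 records the profile sliding past it.
edge = [i * 0.1 - 2.0 for i in range(41)]
true_pos = 0.7
signal = [profile(true_pos - e) for e in edge]
estimate = fit_chord_position(edge, signal)
```

Fitting every chord of every diagnostic against the same swept profile is what yields the relative spatial calibration the abstract describes.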

  17. The StreamCat Dataset: Accumulated Attributes for NHDPlusV2 (Version 2.1) Catchments for the Conterminous United States: Road and Stream Intersections

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset represents the density of road and stream crossings within individual, local NHDPlusV2 catchments and upstream, contributing watersheds. Attributes of...

  18. The Study of Multifunction External Fixator Based on Stewart Platform

    Directory of Open Access Journals (Sweden)

    Guo Yue

    2015-01-01

    Full Text Available The article develops a model of bone deformities, enabling the 6-DOF parallel mechanism to be widely applied to the correction of deformities. The direct (forward) kinematic solution of the platform gives the posture of the motion platform. The malformation can be measured by X-ray, and a space coordinate transformation then yields the final posture of the motion platform. For the inverse kinematic solution of the platform, the paper gives a fast computational procedure that drives the six actuators to realize the required motion. For computer-assisted fracture reduction, an application interface was produced.
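The inverse-kinematics step mentioned above reduces, for a Stewart platform, to computing six actuator lengths as distances between base anchors and pose-transformed platform anchors. The symmetric hexagonal anchor layout below is a simplified assumption, not the fixator's real geometry.

```python
# Stewart platform inverse kinematics: given the desired platform pose
# (rotation R, translation t), each actuator length is the distance from
# its base anchor to the transformed platform anchor.

import math

def rot_z(theta):
    """Rotation matrix about the z axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(R, t, p):
    """Apply rotation R and translation t to point p."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

def leg_lengths(base_pts, plat_pts, R, t):
    """Actuator length for each (base, platform) anchor pair."""
    return [math.dist(b, apply(R, t, p)) for b, p in zip(base_pts, plat_pts)]

# Simplified symmetric layout: base anchors on a radius-1 circle, platform
# anchors on a radius-0.5 circle rotated by 30 degrees.
base = [(math.cos(a), math.sin(a), 0.0)
        for a in (math.radians(d) for d in (0, 60, 120, 180, 240, 300))]
plat = [(0.5 * math.cos(a), 0.5 * math.sin(a), 0.0)
        for a in (math.radians(d) for d in (30, 90, 150, 210, 270, 330))]

# Neutral pose: platform raised by 1.0 with no rotation; by symmetry all
# six actuator lengths come out equal.
lengths = leg_lengths(base, plat, rot_z(0.0), (0.0, 0.0, 1.0))
```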

  19. StreptoBase: An Oral Streptococcus mitis Group Genomic Resource and Analysis Platform.

    Directory of Open Access Journals (Sweden)

    Wenning Zheng

    Full Text Available The oral streptococci are spherical Gram-positive bacteria categorized under the phylum Firmicutes which are among the most common causative agents of bacterial infective endocarditis (IE) and are also important agents in septicaemia in neutropenic patients. The Streptococcus mitis group is comprised of 13 species including some of the most common human oral colonizers such as S. mitis, S. oralis, S. sanguinis and S. gordonii as well as species such as S. tigurinus, S. oligofermentans and S. australis that have only recently been classified and are poorly understood at present. We present StreptoBase, which provides a specialized free resource focusing on the genomic analyses of oral species from the mitis group. It currently hosts 104 S. mitis group genomes including 27 novel mitis group strains that we sequenced using the high throughput Illumina HiSeq technology platform, and provides a comprehensive set of genome sequences for analyses, particularly comparative analyses and visualization of both cross-species and cross-strain characteristics of S. mitis group bacteria. StreptoBase incorporates sophisticated in-house designed bioinformatics web tools such as Pairwise Genome Comparison (PGC) tool and Pathogenomic Profiling Tool (PathoProT), which facilitate comparative pathogenomics analysis of Streptococcus strains. Examples are provided to demonstrate how StreptoBase can be employed to compare genome structure of different S. mitis group bacteria and putative virulence genes profile across multiple streptococcal strains. In conclusion, StreptoBase offers access to a range of streptococci genomic resources as well as analysis tools and will be an invaluable platform to accelerate research in streptococci. Database URL: http://streptococcus.um.edu.my.

  20. StreptoBase: An Oral Streptococcus mitis Group Genomic Resource and Analysis Platform.

    Science.gov (United States)

    Zheng, Wenning; Tan, Tze King; Paterson, Ian C; Mutha, Naresh V R; Siow, Cheuk Chuen; Tan, Shi Yang; Old, Lesley A; Jakubovics, Nicholas S; Choo, Siew Woh

    2016-01-01

    The oral streptococci are spherical Gram-positive bacteria categorized under the phylum Firmicutes which are among the most common causative agents of bacterial infective endocarditis (IE) and are also important agents in septicaemia in neutropenic patients. The Streptococcus mitis group is comprised of 13 species including some of the most common human oral colonizers such as S. mitis, S. oralis, S. sanguinis and S. gordonii as well as species such as S. tigurinus, S. oligofermentans and S. australis that have only recently been classified and are poorly understood at present. We present StreptoBase, which provides a specialized free resource focusing on the genomic analyses of oral species from the mitis group. It currently hosts 104 S. mitis group genomes including 27 novel mitis group strains that we sequenced using the high throughput Illumina HiSeq technology platform, and provides a comprehensive set of genome sequences for analyses, particularly comparative analyses and visualization of both cross-species and cross-strain characteristics of S. mitis group bacteria. StreptoBase incorporates sophisticated in-house designed bioinformatics web tools such as Pairwise Genome Comparison (PGC) tool and Pathogenomic Profiling Tool (PathoProT), which facilitate comparative pathogenomics analysis of Streptococcus strains. Examples are provided to demonstrate how StreptoBase can be employed to compare genome structure of different S. mitis group bacteria and putative virulence genes profile across multiple streptococcal strains. In conclusion, StreptoBase offers access to a range of streptococci genomic resources as well as analysis tools and will be an invaluable platform to accelerate research in streptococci. Database URL: http://streptococcus.um.edu.my.

  1. P185-M Protein Identification and Validation of Results in Workflows that Integrate over Various Instruments, Datasets, Search Engines

    Science.gov (United States)

    Hufnagel, P.; Glandorf, J.; Körting, G.; Jabs, W.; Schweiger-Hufnagel, U.; Hahner, S.; Lubeck, M.; Suckau, D.

    2007-01-01

    Analysis of complex proteomes often results in long protein lists, but often falls short of validating identification and quantification results for larger numbers of proteins. Biological and technical replicates are mandatory, as is the combination of the MS data from various workflows (gels, 1D-LC, 2D-LC), instruments (TOF/TOF, trap, qTOF or FTMS), and search engines. We describe a database-driven study that combines two workflows, two mass spectrometers, and four search engines with protein identification following a decoy database strategy. The sample was a tryptically digested lysate (10,000 cells) of a human colorectal cancer cell line. Data from two LC-MALDI-TOF/TOF runs and a 2D-LC-ESI-trap run using capillary and nano-LC columns were submitted to the proteomics software platform ProteinScape. The combined MALDI data and the ESI data were searched using Mascot (Matrix Science), Phenyx (GeneBio), ProteinSolver (Bruker and Protagen), and Sequest (Thermo) against a decoy database generated from IPI-human in order to obtain one protein list across all workflows and search engines at a defined maximum false-positive rate of 5%. ProteinScape combined the data into one LC-MALDI and one LC-ESI dataset. The initial separate searches from the two combined datasets generated eight independent peptide lists. These were compiled into an integrated protein list using the ProteinExtractor algorithm. An initial evaluation of the generated data led to the identification of approximately 1200 proteins. Result integration on a peptide level allowed discrimination of protein isoforms that would not have been possible with a mere combination of protein lists.
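The decoy strategy used to control the 5% maximum false-positive rate can be sketched as follows: pooled target and decoy hits are sorted by score, and the acceptance threshold is the lowest score at which the estimated FDR (decoys over targets above threshold) stays within the limit. The scores below are invented; real search engines score very differently.

```python
# Decoy-database FDR thresholding: walk down the score-sorted hit list,
# tracking target and decoy counts, and stop before the estimated FDR
# (decoys / targets) exceeds the chosen limit.

def score_threshold(hits, max_fdr=0.05):
    """hits: list of (score, is_decoy). Return the lowest accepted score."""
    threshold = None
    targets = decoys = 0
    for score, is_decoy in sorted(hits, key=lambda h: -h[0]):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        if targets and decoys / targets > max_fdr:
            break
        threshold = score
    return threshold

# Hypothetical pooled hits; True marks a decoy (e.g. reversed-sequence) match.
hits = [(91, False), (88, False), (85, False), (80, True),
        (78, False), (75, False), (70, True), (66, False)]
cutoff = score_threshold(hits, max_fdr=0.34)
```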

  2. Payment Platform

    DEFF Research Database (Denmark)

    Hjelholt, Morten; Damsgaard, Jan

    2012-01-01

    thoroughly and substitute current payment standards in the decades to come. This paper portrays how digital payment platforms evolve in socio-technical niches and how various technological platforms aim for institutional attention in their attempt to challenge earlier platforms and standards. The paper ... applies a co-evolutionary multilevel perspective to model the interplay and processes between technology and society wherein digital payment platforms potentially will substitute other payment platforms just like the credit card negated the check. On this basis this paper formulates a multilevel conceptual

  3. A bivariate contaminated binormal model for robust fitting of proper ROC curves to a pair of correlated, possibly degenerate, ROC datasets.

    Science.gov (United States)

    Zhai, Xuetong; Chakraborty, Dev P

    2017-06-01

    The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets, possibly degenerate ones, with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit-variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit-variance peaks: one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition (contributing two peaks), and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates, the covariance matrix of the parameters, and other statistics.
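The univariate CBM pdf for diseased cases described above can be written down directly; a numeric check confirms the mixture integrates to one. The bivariate model pairs these peaks across the two reading conditions with correlations, which this sketch does not reproduce. Parameter values are arbitrary.

```python
# Contaminated binormal model, diseased-case pdf: a mixture of a
# unit-variance peak at mu (weight alpha, disease visible) and a
# unit-variance peak at zero (weight 1 - alpha, disease not visible).

import math

def normal_pdf(x, mu=0.0):
    """Unit-variance normal density."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

def cbm_diseased_pdf(x, mu, alpha):
    """Mixture pdf for diseased-case ratings in one reading condition."""
    return alpha * normal_pdf(x, mu) + (1.0 - alpha) * normal_pdf(x, 0.0)

# Numeric sanity check: the mixture density should integrate to ~1
# (rectangle rule over a grid wide enough to cover both peaks).
mu, alpha = 2.5, 0.7
xs = [-8.0 + i * 0.01 for i in range(2001)]
area = sum(cbm_diseased_pdf(x, mu, alpha) * 0.01 for x in xs)
```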

  4. Fractional flow reserve: lessons from PLATFORM and future perspectives.

    Science.gov (United States)

    Pontone, Gianluca; Carità, Patrizia; Verdecchia, Massimo; Buccheri, Dario; Andreini, Daniele; Guaricci, Andrea I; Rabbat, Mark; Pepi, Mauro

    2017-06-01

    In the treatment of stable coronary artery disease (CAD), the identification of patients who may gain the highest benefit from further invasive treatments is of pivotal importance for the healthcare system. In this setting, it has been established that an ischemia-guided revascularization strategy yields improved clinical outcomes in a cost-effective fashion compared with anatomy-guided revascularization alone. Invasive fractional flow reserve (FFR) is considered the gold standard, especially in intermediate-range atherosclerotic lesions, for assessing lesion-specific ischemia at the time of invasive coronary angiography, and has now become the standard of reference for studies assessing the diagnostic performance of the various non-invasive stress tests. Coronary computed tomography angiography (cCTA) is an increasingly utilized non-invasive test that enables direct anatomical visualization of CAD in the epicardial coronary arteries with excellent sensitivity and negative predictive value. However, cCTA alone has poor specificity compared with FFR. With advances in computational fluid dynamics, it is possible to derive FFR from cCTA datasets, improving its positive predictive value and specificity. The aim of this review is to summarize the technical aspects of FFR-CT, the clinical evidence and limitations behind this novel technology, with a special focus on the recent PLATFORM trial analyzing the effectiveness, clinical outcomes and resource utilization of FFR-CT. Finally, the future perspective of FFR-CT will be presented.

  5. BUILDING A BILLION SPATIO-TEMPORAL OBJECT SEARCH AND VISUALIZATION PLATFORM

    Directory of Open Access Journals (Sweden)

    D. Kakkar

    2017-10-01

    Full Text Available With funding from the Sloan Foundation and Harvard Dataverse, the Harvard Center for Geographic Analysis (CGA) has developed a prototype spatio-temporal visualization platform called the Billion Object Platform or BOP. The goal of the project is to lower barriers for scholars who wish to access large, streaming, spatio-temporal datasets. The BOP is now loaded with the latest billion geo-tweets, and is fed a real-time stream of about 1 million tweets per day. The geo-tweets are enriched with sentiment and census/admin boundary codes when they enter the system. The system is open source and is currently hosted on Massachusetts Open Cloud (MOC), an OpenStack environment with all components deployed in Docker orchestrated by Kontena. This paper will provide an overview of the BOP architecture, which is built on an open source stack consisting of Apache Lucene, Solr, Kafka, Zookeeper, Swagger, scikit-learn, OpenLayers, and AngularJS. The paper will further discuss the approach used for harvesting, enriching, streaming, storing, indexing, visualizing and querying a billion streaming geo-tweets.

  6. Building a Billion Spatio-Temporal Object Search and Visualization Platform

    Science.gov (United States)

    Kakkar, D.; Lewis, B.

    2017-10-01

    With funding from the Sloan Foundation and Harvard Dataverse, the Harvard Center for Geographic Analysis (CGA) has developed a prototype spatio-temporal visualization platform called the Billion Object Platform or BOP. The goal of the project is to lower barriers for scholars who wish to access large, streaming, spatio-temporal datasets. The BOP is now loaded with the latest billion geo-tweets, and is fed a real-time stream of about 1 million tweets per day. The geo-tweets are enriched with sentiment and census/admin boundary codes when they enter the system. The system is open source and is currently hosted on Massachusetts Open Cloud (MOC), an OpenStack environment with all components deployed in Docker orchestrated by Kontena. This paper will provide an overview of the BOP architecture, which is built on an open source stack consisting of Apache Lucene, Solr, Kafka, Zookeeper, Swagger, scikit-learn, OpenLayers, and AngularJS. The paper will further discuss the approach used for harvesting, enriching, streaming, storing, indexing, visualizing and querying a billion streaming geo-tweets.

  7. Feature Selection Method Based on Artificial Bee Colony Algorithm and Support Vector Machines for Medical Datasets Classification

    Directory of Open Access Journals (Sweden)

    Mustafa Serter Uzer

    2013-01-01

    Full Text Available This paper offers a hybrid approach that uses the artificial bee colony (ABC) algorithm for feature selection and support vector machines (SVM) for classification. The purpose of this paper is to test the effect of eliminating unimportant and obsolete features of the datasets on the success of classification using the SVM classifier. The approach is applied to the diagnosis of liver diseases and diabetes, which are commonly observed and reduce quality of life. For the diagnosis of these diseases, the hepatitis, liver disorders and diabetes datasets from the UCI database were used, and the proposed system reached classification accuracies of 94.92%, 74.81%, and 79.29%, respectively. For these datasets, the classification accuracies were obtained using the 10-fold cross-validation method. The results show that the performance of the method is highly successful compared to other reported results and seems very promising for pattern recognition applications.
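Wrapper feature selection of the kind evaluated here can be illustrated in a self-contained way; as stand-ins for the paper's ABC search and SVM classifier, this sketch scores feature subsets with a simple nearest-centroid classifier on a held-out split of synthetic data (feature 0 informative, feature 1 noise).

```python
# Wrapper feature selection on synthetic data: score each candidate
# feature subset by held-out classification accuracy and keep the best.
# Nearest-centroid stands in for the paper's SVM; exhaustive subset
# enumeration stands in for the ABC search (feasible here with 2 features).

import random

def centroid_accuracy(train, test, feats):
    """Nearest-centroid accuracy using only the selected feature indices."""
    cents = {}
    for label in (0, 1):
        rows = [x for x, y in train if y == label]
        cents[label] = [sum(r[f] for r in rows) / len(rows) for f in feats]
    correct = 0
    for x, y in test:
        pred = min((0, 1), key=lambda c: sum(
            (x[f] - cents[c][i]) ** 2 for i, f in enumerate(feats)))
        correct += pred == y
    return correct / len(test)

random.seed(0)
# Feature 0 tracks the class label closely; feature 1 is pure noise.
data = [([i % 2 + random.gauss(0, 0.1), random.gauss(0, 1)], i % 2)
        for i in range(200)]
train, test = data[:150], data[150:]

# Enumerate all non-empty feature subsets; an ABC search would instead
# sample subsets stochastically in high-dimensional problems.
best = max([(0,), (1,), (0, 1)],
           key=lambda feats: centroid_accuracy(train, test, feats))
```

Dropping the noise feature should not hurt (and often helps) held-out accuracy, which is the paper's motivation for eliminating obsolete features before classification.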

  8. Toward an E-Government Semantic Platform

    Science.gov (United States)

    Sbodio, Marco Luca; Moulin, Claude; Benamou, Norbert; Barthès, Jean-Paul

    This chapter describes the major aspects of an e-government platform in which semantics underpins more traditional technologies in order to enable new capabilities and to overcome technical and cultural challenges. The design and development of such an e-government Semantic Platform has been conducted with the financial support of the European Commission through the Terregov research project: "Impact of e-government on Territorial Government Services" (Terregov 2008). The goal of this platform is to let local government and government agencies offer online access to their services in an interoperable way, and to allow them to participate in orchestrated processes involving services provided by multiple agencies. Implementing a business process through an electronic procedure is indeed a core goal in any networked organization. However, the field of e-government brings specific constraints to the operations allowed in procedures, especially concerning the flow of private citizens' data: because of legal reasons in most countries, such data are allowed to circulate only from agency to agency directly. In order to promote transparency and responsibility in e-government while respecting the specific constraints on data flows, Terregov supports the creation of centrally controlled orchestrated processes; while the cross agencies data flows are centrally managed, data flow directly across agencies.

  9. Considerations for Achieving Cross-Platform Point Cloud Data Fusion across Different Dryland Ecosystem Structural States.

    Science.gov (United States)

    Swetnam, Tyson L; Gillan, Jeffrey K; Sankey, Temuulen T; McClaran, Mitchel P; Nichols, Mary H; Heilman, Philip; McVay, Jason

    2017-01-01

    Remotely sensing recent growth, herbivory, or disturbance of herbaceous and woody vegetation in dryland ecosystems requires high spatial resolution and multi-temporal depth. Three-dimensional (3D) remote sensing technologies like lidar, and techniques like structure from motion (SfM) photogrammetry, each have strengths and weaknesses at detecting vegetation volume and extent, given the instrument's ground sample distance and ease of acquisition. Yet, a combination of platforms and techniques might provide solutions that overcome the weaknesses of a single platform. To explore the potential for combining platforms, we compared detection bias between two 3D remote sensing techniques (lidar and SfM) using three different platforms [ground-based, small unmanned aerial systems (sUAS), and manned aircraft]. We found aerial lidar to be more accurate for characterizing the bare earth (ground) in dense herbaceous vegetation than either terrestrial lidar or aerial SfM photogrammetry. Conversely, the manned aerial lidar did not detect grass and fine woody vegetation, while the terrestrial lidar and high-resolution near-distance (ground and sUAS) SfM photogrammetry detected these accurately. UAS SfM photogrammetry at lower spatial resolution underestimated maximum heights in grass and shrubs. UAS and handheld SfM photogrammetry in near-distance high-resolution collections had accuracy similar to terrestrial lidar for vegetation, but difficulty measuring bare earth elevation beneath dense herbaceous cover. Combining point cloud data and derivatives (i.e., meshes and rasters) from two or more platforms allowed for more accurate measurement of herbaceous and woody vegetation (height and canopy cover) than any single technique alone. The availability and cost of manned aircraft lidar collection preclude high-frequency repeatability, but this is less limiting for terrestrial lidar, sUAS and handheld SfM, where the post-processing of SfM photogrammetry data became the limiting factor.

  10. The Effect of Body Position on Pain Due to Nasal Continuous Positive Airway Pressure (CPAP) in Premature Neonates: A Cross-Over Clinical Trial Study

    Directory of Open Access Journals (Sweden)

    Mahnaz Jabraeili

    2018-01-01

    Full Text Available Background The most common cause of admission to neonatal intensive care units (NICU) is respiratory distress syndrome. One of the respiratory assistance methods is nasal continuous positive airway pressure (CPAP). Regarding the importance of pain control, which is one of the major priorities in neonatal nursing care, this study aimed to evaluate the effect of body position on pain due to nasal CPAP in premature neonates. Materials and Methods In this cross-over clinical trial, 50 premature neonates receiving nasal CPAP who were admitted to the NICU of Imam Reza Hospital, Kermanshah, Iran, were included. The neonates were randomly placed in three body positions (fetal, supine, and prone). Pain was measured with the Astrid Lindgren Children’s Hospital Pain Scale Neonates (ALPS-Neo) pain assessment scale. The collected data were analyzed using the SPSS software (Version 22.0). Results A significant difference in pain due to nasal CPAP existed among body positions (p < 0.001). Mean (SD) pain was 5.15 (0.822) in the fetal position, 6.260 (0.747) in the prone position and 7.326 (0.792) in the supine position. Conclusion Body positioning of premature neonates under nasal CPAP in the NICU can be an effective non-pharmacological method of alleviating pain due to nasal CPAP. Among the studied positions, the lowest pain score was seen in the fetal position.
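
Since each neonate was measured in all three positions, the comparison is a repeated-measures design, and a nonparametric Friedman test is one conventional way to check for a position effect. The scores below are simulated from the reported means and SDs, not the study's raw data, and the sketch assumes SciPy is available.

```python
# Hedged sketch: simulated ALPS-Neo scores drawn around the reported
# means (SDs); not the study's data, just the shape of the analysis.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(7)
n = 50  # number of neonates in the study
fetal  = rng.normal(5.15, 0.822, n)
prone  = rng.normal(6.260, 0.747, n)
supine = rng.normal(7.326, 0.792, n)

# Friedman test: nonparametric repeated-measures comparison of k positions
stat, p = friedmanchisquare(fetal, prone, supine)
print(f"Friedman chi2 = {stat:.1f}, p = {p:.2e}")
```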

  11. Prognostic breast cancer signature identified from 3D culture model accurately predicts clinical outcome across independent datasets

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.

    2008-10-20

    One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three-dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER+ patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER− patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic
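
The survival comparison above can be reproduced in miniature with a hand-rolled Kaplan-Meier estimator. The two cohorts below are synthetic stand-ins for the good- and poor-prognosis groups, not the paper's data.

```python
# Minimal Kaplan-Meier estimator in NumPy (toy data, no censoring).
import numpy as np

def kaplan_meier(time, event):
    """Return (event times, survival probabilities) for right-censored data.

    time  : array of follow-up times
    event : 1 if the event (e.g. relapse) was observed, 0 if censored
    """
    times, surv, s = [], [], 1.0
    for t in np.unique(time):
        d = np.sum((time == t) & (event == 1))  # events at t
        n = np.sum(time >= t)                   # at risk just before t
        if d > 0:
            s *= 1.0 - d / n                    # product-limit update
            times.append(t)
            surv.append(s)
    return np.array(times), np.array(surv)

rng = np.random.default_rng(1)
event = np.ones(100, dtype=int)          # toy example: no censoring
t_good = rng.exponential(15.0, 100)      # longer median survival
t_poor = rng.exponential(5.0, 100)
for label, t in [("good-prognosis", t_good), ("poor-prognosis", t_poor)]:
    times, surv = kaplan_meier(t, event)
    s10 = surv[times <= 10.0][-1] if np.any(times <= 10.0) else 1.0
    print(f"{label}: S(10) = {s10:.2f}")
```

A log-rank test or Cox regression, as in the paper, would then quantify the difference between the two curves.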

  12. Genomics Portals: integrative web-platform for mining genomics data

    Directory of Open Access Journals (Sweden)

    Ghosh Krishnendu

    2010-01-01

    Full Text Available Abstract Background A large amount of experimental data generated by modern high-throughput technologies is available through various public repositories. Our knowledge about molecular interaction networks, functional biological pathways and transcriptional regulatory modules is rapidly expanding, and is being organized in lists of functionally related genes. Jointly, these two sources of information hold a tremendous potential for gaining new insights into the functioning of living systems. Results The Genomics Portals platform integrates access to an extensive knowledge base and a large database of human, mouse, and rat genomics data with basic analytical and visualization tools. It provides the context for analyzing and interpreting new experimental data and the tool for effective mining of a large number of publicly available genomics datasets stored in the back-end databases. The uniqueness of this platform lies in the volume and the diversity of genomics data that can be accessed and analyzed (gene expression, ChIP-chip, ChIP-seq, epigenomics, computationally predicted binding sites, etc.), and the integration with an extensive knowledge base that can be used in such analysis. Conclusion The integrated access to primary genomics data, functional knowledge and analytical tools makes the Genomics Portals platform a unique tool for interpreting results of new genomics experiments and for mining the vast amount of data stored in the Genomics Portals back-end databases. Genomics Portals can be accessed and used freely at http://GenomicsPortals.org.

  13. Fluxnet Synthesis Dataset Collaboration Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Deborah A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Humphrey, Marty [Univ. of Virginia, Charlottesville, VA (United States); van Ingen, Catharine [Microsoft. San Francisco, CA (United States); Beekwilder, Norm [Univ. of Virginia, Charlottesville, VA (United States); Goode, Monte [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jackson, Keith [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rodriguez, Matt [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Weber, Robin [Univ. of California, Berkeley, CA (United States)

    2008-02-06

    The Fluxnet synthesis dataset originally compiled for the La Thuile workshop contained approximately 600 site years. Since the workshop, several additional site years have been added and the dataset now contains over 920 site years from over 240 sites. A data refresh update is expected to increase those numbers in the next few months. The ancillary data describing the sites continue to evolve as well. There are on the order of 120 site contacts, and 60 proposals involving around 120 researchers have been approved to use the data. The size and complexity of the dataset and collaboration have led to a new approach to providing data access and collaboration support. The support team attended the workshop and worked closely with the attendees and the Fluxnet project office to define the requirements for the support infrastructure. As a result of this effort, a new website (http://www.fluxdata.org) has been created to provide access to the Fluxnet synthesis dataset. This new website is based on a scientific data server which enables browsing of the data on-line, data download, and version tracking. We leverage database and data analysis tools such as OLAP data cubes and web reports to enable browser and Excel pivot table access to the data.

  14. Simulation of Smart Home Activity Datasets

    Directory of Open Access Journals (Sweden)

    Jonathan Synnott

    2015-06-01

    Full Text Available A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches. Access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation.

  15. Simulation of Smart Home Activity Datasets.

    Science.gov (United States)

    Synnott, Jonathan; Nugent, Chris; Jeffers, Paul

    2015-06-16

    A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches. Access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation.

  16. SAXSEV 2.1 CROSS-PLATFORM APPLICATION FOR DATA ANALYSIS OF SMALL-ANGLE X-RAY SCATTERING FROM POLYDISPERSE SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. V. Kuchko

    2015-03-01

    Full Text Available The present paper discusses the development and implementation of a cross-platform application with a graphical user interface for estimating the particle volume fraction distribution function and fitting the specific surface area to this distribution. SAXSEV implements the method of statistical regularization for ill-posed mathematical problems, using the NumPy, SciPy and Matplotlib libraries. The main features of this software application are the ability to adjust the argument grid of the desired function and the ability to select the optimal value of the regularization parameter. This parameter is selected by several specific criteria and one common criterion. The software application consists of modules written in Python 3, combined by a common interface based on the Tkinter library. The current version, SAXSEV 2.1, was tested on Windows XP/Vista/7/8 and Ubuntu 14.1. SAXSEV 2.1 was used successfully in a study of the effectiveness of the statistical regularization method for analyzing dispersed systems by SAXS, in research on powders consisting of nanoparticles, and on composite materials with nanoparticle inclusions.
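
The core of such a statistical-regularization approach can be illustrated with a minimal Tikhonov-style inversion in NumPy. The kernel and "measurement" below are synthetic, not a real SAXS curve, and the simple grid scan stands in for SAXSEV's regularization-parameter selection criteria.

```python
# Hedged sketch: Tikhonov-regularized inversion of an ill-posed linear
# problem, minimizing ||K f - I||^2 + alpha ||f||^2 in closed form.
import numpy as np

n = 60
r = np.linspace(1.0, 20.0, n)              # particle-radius grid
q = np.linspace(0.01, 0.5, 80)             # scattering-vector grid
K = np.exp(-np.outer(q**2, r**2) / 5.0)    # toy smoothing kernel (ill-posed)

true_f = np.exp(-0.5 * ((r - 8.0) / 1.5) ** 2)   # "true" size distribution
rng = np.random.default_rng(3)
I = K @ true_f + rng.normal(0, 1e-3, q.size)     # noisy synthetic measurement

def solve(alpha):
    """Regularized solution: (K^T K + alpha I)^-1 K^T I."""
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ I)

# The regularization parameter trades data fit against stability;
# here we simply scan a grid of values.
for alpha in (1e-6, 1e-3, 1e0):
    f = solve(alpha)
    print(f"alpha={alpha:g}  residual={np.linalg.norm(K @ f - I):.4f}")
```

Larger `alpha` damps the solution and increases the data residual, which is exactly the trade-off the parameter-selection criteria arbitrate.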

  17. Total ozone trends from 1979 to 2016 derived from five merged observational datasets - the emergence into ozone recovery

    Science.gov (United States)

    Weber, Mark; Coldewey-Egbers, Melanie; Fioletov, Vitali E.; Frith, Stacey M.; Wild, Jeannette D.; Burrows, John P.; Long, Craig S.; Loyola, Diego

    2018-02-01

    We report on updated trends using different merged datasets from satellite and ground-based observations for the period from 1979 to 2016. Trends were determined by applying a multiple linear regression (MLR) to annual mean zonal mean data. Merged datasets used here include NASA MOD v8.6 and National Oceanic and Atmospheric Administration (NOAA) merge v8.6, both based on data from the series of Solar Backscatter UltraViolet (SBUV) and SBUV-2 satellite instruments (1978-present) as well as the Global Ozone Monitoring Experiment (GOME)-type Total Ozone (GTO) and GOME-SCIAMACHY-GOME-2 (GSG) merged datasets (1995-present), mainly comprising satellite data from GOME, the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY), and GOME-2A. The fifth dataset consists of the monthly mean zonal mean data from ground-based measurements collected at World Ozone and UV Data Center (WOUDC). The addition of four more years of data since the last World Meteorological Organization (WMO) ozone assessment (2013-2016) shows that for most datasets and regions the trends since the stratospheric halogen reached its maximum (˜ 1996 globally and ˜ 2000 in polar regions) are mostly not significantly different from zero. However, for some latitudes, in particular the Southern Hemisphere extratropics and Northern Hemisphere subtropics, several datasets show small positive trends of slightly below +1 % decade-1 that are barely statistically significant at the 2σ uncertainty level. In the tropics, only two datasets show significant trends of +0.5 to +0.8 % decade-1, while the others show near-zero trends. Positive trends since 2000 have been observed over Antarctica in September, but near-zero trends are found in October as well as in March over the Arctic. Uncertainties due to possible drifts between the datasets, from the merging procedure used to combine satellite datasets and related to the low sampling of ground-based data, are not accounted for in the trend
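
The basic trend step described above amounts to a least-squares fit to annual means, with the slope expressed in percent per decade. The sketch below uses synthetic data with a built-in drift of about +0.5 percent per decade; a full MLR, as used in the paper, would add proxy regressors (solar cycle, QBO, ENSO) as extra columns of the design matrix.

```python
# Hedged sketch: OLS trend on synthetic annual-mean total ozone,
# reported in percent per decade. Not any of the five merged records.
import numpy as np

years = np.arange(2000, 2017)  # post-turnaround period
rng = np.random.default_rng(42)
ozone = 300.0 * (1.0 + 0.0005 * (years - 2000)) + rng.normal(0, 0.5, years.size)

# Design matrix: intercept + centered year (a full MLR adds proxy columns)
X = np.column_stack([np.ones(years.size), years - years.mean()])
beta, *_ = np.linalg.lstsq(X, ozone, rcond=None)  # [mean level, DU per year]
trend_pct_per_decade = 10.0 * beta[1] / beta[0] * 100.0
print(f"trend: {trend_pct_per_decade:+.2f} % per decade")
```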

  18. Solar Integration National Dataset Toolkit | Grid Modernization | NREL

    Science.gov (United States)

    Solar Integration National Dataset Toolkit NREL is working on a Solar Integration National Dataset (SIND) Toolkit to enable researchers to perform U.S. regional solar generation integration studies. It will provide modeled, coherent subhourly solar power data

  19. Platform for Distributed 3D Gaming

    Directory of Open Access Journals (Sweden)

    A. Jurgelionis

    2009-01-01

    Full Text Available Video games are typically executed on Windows platforms with DirectX API and require high performance CPUs and graphics hardware. For pervasive gaming in various environments like at home, hotels, or internet cafes, it is beneficial to run games also on mobile devices and modest performance CE devices avoiding the necessity of placing a noisy workstation in the living room or costly computers/consoles in each room of a hotel. This paper presents a new cross-platform approach for distributed 3D gaming in wired/wireless local networks. We introduce the novel system architecture and protocols used to transfer the game graphics data across the network to end devices. Simultaneous execution of video games on a central server and a novel streaming approach of the 3D graphics output to multiple end devices enable the access of games on low cost set top boxes and handheld devices that natively lack the power of executing a game with high-quality graphical output.

  20. The Role of Datasets on Scientific Influence within Conflict Research.

    Directory of Open Access Journals (Sweden)

    Tracy Van Holt

    shape the operationalization of conflict. In fact, 94% of the works on the CP that analyzed data either relied on publicly available datasets, or they generated a dataset and made it public. These datasets appear to be important in the development of conflict research, allowing for cross-case comparisons, and comparisons to previous works.

  1. The Role of Datasets on Scientific Influence within Conflict Research

    Science.gov (United States)

    Van Holt, Tracy; Johnson, Jeffery C.; Moates, Shiloh; Carley, Kathleen M.

    2016-01-01

    shape the operationalization of conflict. In fact, 94% of the works on the CP that analyzed data either relied on publicly available datasets, or they generated a dataset and made it public. These datasets appear to be important in the development of conflict research, allowing for cross-case comparisons, and comparisons to previous works. PMID:27124569

  2. Doing History, Creating Memory : Representing the Past in Documentary and Archive-Based Television Programmes within a Multi-Platform Landscape

    NARCIS (Netherlands)

    Hagedoorn, B.

    2016-01-01

    Television is a significant mediator of past and historical events in modern media systems. This dissertation studies practices of representing the past on Dutch television as a multi-platform phenomenon. Dynamic screen practices such as broadcasting, cross-media platforms, digital thematic channels

  3. PROVIDING GEOGRAPHIC DATASETS AS LINKED DATA IN SDI

    Directory of Open Access Journals (Sweden)

    E. Hietanen

    2016-06-01

    Full Text Available In this study, a prototype service to provide data from a Web Feature Service (WFS) as linked data is implemented. First, persistent and unique Uniform Resource Identifiers (URIs) are created for all spatial objects in the dataset. The objects are available from those URIs in the Resource Description Framework (RDF) data format. Next, a Web Ontology Language (OWL) ontology is created to describe the dataset's information content using the Open Geospatial Consortium's (OGC) GeoSPARQL vocabulary. The existing data model is modified in order to take the linked data principles into account. The implemented service produces an HTTP response dynamically. The data for the response are first fetched from the existing WFS. Then the Geography Markup Language (GML) output of the WFS is transformed on-the-fly into the RDF format. Content negotiation is used to serve the data in different RDF serialization formats. This solution facilitates the use of a dataset in different applications without replicating the whole dataset. In addition, individual spatial objects in the dataset can be referred to with URIs. Furthermore, the needed information content of the objects can be easily extracted from the RDF serializations available from those URIs. A solution for linking data objects to the dataset URI is also introduced by using the Vocabulary of Interlinked Datasets (VoID). The dataset is divided into subsets and each subset is given its own persistent and unique URI. This enables the whole dataset to be explored with a web browser and all individual objects to be indexed by search engines.
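
The minting-and-serialization step can be caricatured without any RDF library: give each feature a persistent URI and emit Turtle. The namespace and feature below are invented for the sketch; the real service derives its output on-the-fly from WFS GML and uses the GeoSPARQL vocabulary in the same way.

```python
# Hedged sketch: spatial features (as would come from a WFS GetFeature
# response) are given persistent URIs and re-expressed as RDF/Turtle.
# BASE and the feature record are hypothetical.
features = [
    {"id": "lake_17", "name": "Example Lake", "wkt": "POINT(24.94 60.17)"},
]

BASE = "http://data.example.org/feature/"        # invented URI namespace
GEO = "http://www.opengis.net/ont/geosparql#"    # GeoSPARQL vocabulary
RDFS = "http://www.w3.org/2000/01/rdf-schema#"

def to_turtle(feats):
    """Serialize features as Turtle triples under persistent URIs."""
    lines = [f"@prefix geo: <{GEO}> .", f"@prefix rdfs: <{RDFS}> .", ""]
    for f in feats:
        lines.append(f"<{BASE}{f['id']}> a geo:Feature ;")
        lines.append(f'    rdfs:label "{f["name"]}" ;')
        lines.append(f'    geo:asWKT "{f["wkt"]}"^^geo:wktLiteral .')
    return "\n".join(lines)

print(to_turtle(features))
```

Content negotiation would then pick this Turtle serialization (or RDF/XML, JSON-LD, etc.) based on the HTTP `Accept` header.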

  4. Cross-covariance functions for multivariate geostatistics

    KAUST Repository

    Genton, Marc G.

    2015-05-01

    Continuously indexed datasets with multiple variables have become ubiquitous in the geophysical, ecological, environmental and climate sciences, and pose substantial analysis challenges to scientists and statisticians. For many years, scientists developed models that aimed at capturing the spatial behavior for an individual process; only within the last few decades has it become commonplace to model multiple processes jointly. The key difficulty is in specifying the cross-covariance function, that is, the function responsible for the relationship between distinct variables. Indeed, these cross-covariance functions must be chosen to be consistent with marginal covariance functions in such a way that the second-order structure always yields a nonnegative definite covariance matrix. We review the main approaches to building cross-covariance models, including the linear model of coregionalization, convolution methods, the multivariate Matérn and nonstationary and space-time extensions of these among others. We additionally cover specialized constructions, including those designed for asymmetry, compact support and spherical domains, with a review of physics-constrained models. We illustrate select models on a bivariate regional climate model output example for temperature and pressure, along with a bivariate minimum and maximum temperature observational dataset; we compare models by likelihood value as well as via cross-validation co-kriging studies. The article closes with a discussion of unsolved problems. © Institute of Mathematical Statistics, 2015.
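
The linear model of coregionalization mentioned above is the simplest of these constructions and illustrates the consistency requirement directly. With valid univariate correlation functions and coefficient vectors, it reads:

```latex
% Linear model of coregionalization (LMC) for a p-variate random field
% Z(s) = \sum_{k=1}^{r} \mathbf{a}_k W_k(s), with W_k independent
% unit-variance processes having correlation functions \rho_k.
C_{ij}(\mathbf{h}) \;=\; \sum_{k=1}^{r} a_{ik}\, a_{jk}\, \rho_k(\mathbf{h}),
\qquad\text{or in matrix form}\qquad
\mathbf{C}(\mathbf{h}) \;=\; \sum_{k=1}^{r} \mathbf{T}_k\, \rho_k(\mathbf{h}),
\quad \mathbf{T}_k = \mathbf{a}_k \mathbf{a}_k^{\top}.
```

Because each coregionalization matrix T_k is positive semidefinite and each ρ_k is a valid correlation function, the matrix-valued covariance is automatically nonnegative definite, which is exactly the consistency condition between cross-covariances and marginal covariances that the abstract describes.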

  5. Cross-covariance functions for multivariate geostatistics

    KAUST Repository

    Genton, Marc G.; Kleiber, William

    2015-01-01

    Continuously indexed datasets with multiple variables have become ubiquitous in the geophysical, ecological, environmental and climate sciences, and pose substantial analysis challenges to scientists and statisticians. For many years, scientists developed models that aimed at capturing the spatial behavior for an individual process; only within the last few decades has it become commonplace to model multiple processes jointly. The key difficulty is in specifying the cross-covariance function, that is, the function responsible for the relationship between distinct variables. Indeed, these cross-covariance functions must be chosen to be consistent with marginal covariance functions in such a way that the second-order structure always yields a nonnegative definite covariance matrix. We review the main approaches to building cross-covariance models, including the linear model of coregionalization, convolution methods, the multivariate Matérn and nonstationary and space-time extensions of these among others. We additionally cover specialized constructions, including those designed for asymmetry, compact support and spherical domains, with a review of physics-constrained models. We illustrate select models on a bivariate regional climate model output example for temperature and pressure, along with a bivariate minimum and maximum temperature observational dataset; we compare models by likelihood value as well as via cross-validation co-kriging studies. The article closes with a discussion of unsolved problems. © Institute of Mathematical Statistics, 2015.

  6. Wind Integration National Dataset Toolkit | Grid Modernization | NREL

    Science.gov (United States)

    Wind Integration National Dataset Toolkit The Wind Integration National Dataset (WIND) Toolkit is an update and expansion of the Eastern Wind Integration Data Set and Western Wind Integration Data Set. It supports the next generation of wind integration studies.

  7. Research progress of anti-icing/deicing technologies for polar ships and offshore platforms

    Directory of Open Access Journals (Sweden)

    XIE Qiang

    2017-01-01

    Full Text Available The polar regions present adverse circumstances of high humidity and strong air-sea exchange. As such, the surfaces of ships and platforms (oil exploitation and drilling platforms) serving in polar regions can easily be frozen by ice accretion, which not only affects the operation of the equipment but also threatens safety. This paper summarizes the status of anti-icing/deicing technologies for polar ships and offshore platforms in both China and abroad, and introduces the various effects of ice accretion on polar ships and offshore platforms, and the resulting safety impacts. It then reviews existing anti-icing/deicing technologies and methods from both China and abroad, including such active deicing methods as electric heating, infrared heating and ultrasonic guided wave deicing, as well as such passive deicing methods as super-hydrophobic coatings, sacrificial coatings, aqueous lubricating layer coatings and low cross-link density (with interfacial slippage) coatings, summarizes their applicability to polar ships and offshore platforms, and finally discusses their advantages and disadvantages.

  8. iSBatch: a batch-processing platform for data analysis and exploration of live-cell single-molecule microscopy images and other hierarchical datasets.

    Science.gov (United States)

    Caldas, Victor E A; Punter, Christiaan M; Ghodke, Harshad; Robinson, Andrew; van Oijen, Antoine M

    2015-10-01

    Recent technical advances have made it possible to visualize single molecules inside live cells. Microscopes with single-molecule sensitivity enable the imaging of low-abundance proteins, allowing for a quantitative characterization of molecular properties. Such data sets contain information on a wide spectrum of important molecular properties, with different aspects highlighted in different imaging strategies. The time-lapsed acquisition of images provides information on protein dynamics over long time scales, giving insight into expression dynamics and localization properties. Rapid burst imaging reveals properties of individual molecules in real-time, informing on their diffusion characteristics, binding dynamics and stoichiometries within complexes. This richness of information, however, adds significant complexity to analysis protocols. In general, large datasets of images must be collected and processed in order to produce statistically robust results and identify rare events. More importantly, as live-cell single-molecule measurements remain on the cutting edge of imaging, few protocols for analysis have been established and thus analysis strategies often need to be explored for each individual scenario. Existing analysis packages are geared towards either single-cell imaging data or in vitro single-molecule data and typically operate with highly specific algorithms developed for particular situations. Our tool, iSBatch, instead allows users to exploit the inherent flexibility of the popular open-source package ImageJ, providing a hierarchical framework in which existing plugins or custom macros may be executed over entire datasets or portions thereof. This strategy affords users freedom to explore new analysis protocols within large imaging datasets, while maintaining hierarchical relationships between experiments, samples, fields of view, cells, and individual molecules.

  9. Product Platform Performance

    DEFF Research Database (Denmark)

    Munk, Lone

    The aim of this research is to improve understanding of platform-based product development by studying platform performance in relation to internal effects in companies. Platform-based product development makes it possible to deliver product variety and at the same time reduce the needed resources ... engaging in platform-based product development. Similarly, platform assessment criteria lack empirical verification regarding relevance and sufficiency. The thesis focuses on • the process of identifying and estimating internal effects, • verification of the performance of product platforms, (i ... experienced representatives from the different life phase systems of the platform products. The effects are estimated and modeled within different scenarios, taking into account financial and real option aspects. The model illustrates and supports estimation and quantification of internal platform ...

  10. Cross-validation pitfalls when selecting and assessing regression and classification models.

    Science.gov (United States)

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
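
A compact scikit-learn rendition of the assessment procedure defined above: grid-search V-fold cross-validation for hyper-parameter tuning, nested inside an outer cross-validation loop so the reported error is not biased by the tuning. The built-in diabetes data stands in for the paper's QSAR sets; the paper's further point is that both loops should be repeated over different random splits to expose split-to-split variance.

```python
# Hedged sketch of nested cross-validation: inner CV tunes, outer CV assesses.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVR

X, y = load_diabetes(return_X_y=True)  # stand-in for a QSAR dataset
param_grid = {"C": [0.1, 1, 10], "epsilon": [0.1, 1.0]}

inner = KFold(n_splits=5, shuffle=True, random_state=0)  # tuning folds
outer = KFold(n_splits=5, shuffle=True, random_state=1)  # assessment folds
tuned = GridSearchCV(SVR(), param_grid, cv=inner)        # grid-search V-fold CV

# Repeating this with different random_state values gives the "repeated
# nested cross-validation" the paper recommends.
scores = cross_val_score(tuned, X, y, cv=outer, scoring="r2")
print(f"nested-CV R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```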

  11. Connecting Archaeological Data and Grey Literature via Semantic Cross Search

    Directory of Open Access Journals (Sweden)

    Douglas Tudhope

    2011-07-01

    Full Text Available Differing terminology and database structure hinder meaningful cross search of excavation datasets. Matching free-text grey literature reports with datasets poses yet more challenges. Conventional search techniques are unable to cross search between archaeological datasets and Web-based grey literature. Results are reported from two AHRC-funded research projects that investigated the use of semantic techniques to link digital archive databases, vocabularies and associated grey literature. STAR (Semantic Technologies for Archaeological Resources) was a collaboration between the University of Glamorgan, Hypermedia Research Unit and English Heritage (EH). The main outcome is a research Demonstrator (available online), which cross searches over excavation datasets from different database schemas, including Raunds Roman, Raunds Prehistoric, Museum of London, Silchester Roman and Stanwick sampling. The system additionally cross searches over an extract of excavation reports from the OASIS index of grey literature, operated by the Archaeology Data Service (ADS). A conceptual framework provided by the CIDOC Conceptual Reference Model (CRM) integrates the different database structures and the metadata automatically generated from the OASIS reports by natural language processing techniques. The methods employed for extracting semantic RDF representations from the datasets and the information extraction from grey literature are described. The STELLAR project provides freely available tools to reduce the costs of mapping and extracting data to semantic search systems such as the Demonstrator and to linked data representation generally. Detailed use scenarios (and a screen capture video) provide a basis for a discussion of key issues, including cost-benefits, ontology modelling, mapping, terminology control, semantic implementation and information extraction issues. The scenarios show that semantic interoperability can be achieved by mapping and extracting

  12. Validation of MCNP6 Version 1.0 with the ENDF/B-VII.1 Cross Section Library for Plutonium Metals, Oxides, and Solutions on the High Performance Computing Platform Moonlight

    Energy Technology Data Exchange (ETDEWEB)

    Chapman, Bryan Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gough, Sean T. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-12-05

    This report documents a validation of the MCNP6 Version 1.0 computer code on the high performance computing platform Moonlight, for operations at Los Alamos National Laboratory (LANL) that involve plutonium metals, oxides, and solutions. The validation is conducted using the ENDF/B-VII.1 continuous energy group cross section library at room temperature. The results are for use by nuclear criticality safety personnel in performing analysis and evaluation of various facility activities involving plutonium materials.

  13. Dataset on records of Hericium erinaceus in Slovakia.

    Science.gov (United States)

    Kunca, Vladimír; Čiliak, Marek

    2017-06-01

    The data presented in this article are related to the research article entitled "Habitat preferences of Hericium erinaceus in Slovakia" (Kunca and Čiliak, 2016) [FUNECO607] [2]. The dataset includes all available and unpublished data from Slovakia, except for repeated records from the same tree or stem. We compiled a database of collection records by processing data from herbaria, personal records and communication with mycological activists. Data on altitude, tree species, host tree vital status, host tree position and intensity of management of forest stands were evaluated in this study. All surveys were based on basidioma occurrence, and some result from targeted searches.

  14. Dataset on records of Hericium erinaceus in Slovakia

    Directory of Open Access Journals (Sweden)

    Vladimír Kunca

    2017-06-01

    Full Text Available The data presented in this article are related to the research article entitled “Habitat preferences of Hericium erinaceus in Slovakia” (Kunca and Čiliak, 2016) [FUNECO607] [2]. The dataset includes all available and unpublished data from Slovakia, except for repeated records from the same tree or stem. We compiled a database of collection records by processing data from herbaria, personal records and communication with mycological activists. Data on altitude, tree species, host tree vital status, host tree position and intensity of management of forest stands were evaluated in this study. All surveys were based on basidioma occurrence, and some result from targeted searches.

  15. Analyzing engagement in a web-based intervention platform through visualizing log-data.

    Science.gov (United States)

    Morrison, Cecily; Doherty, Gavin

    2014-11-13

    Engagement has emerged as a significant cross-cutting concern within the development of Web-based interventions. There have been calls to institute a more rigorous approach to the design of Web-based interventions, to increase both the quantity and quality of engagement. One approach would be to use log-data to better understand the process of engagement and patterns of use. However, an important challenge lies in organizing log-data for productive analysis. Our aim was to conduct an initial exploration of the use of visualizations of log-data to enhance understanding of engagement with Web-based interventions. We applied exploratory sequential data analysis to highlight sequential aspects of the log data, such as time or module number, to provide insights into engagement. After applying a number of processing steps, a range of visualizations were generated from the log-data. We then examined the usefulness of these visualizations for understanding the engagement of individual users and the engagement of cohorts of users. The visualizations created are illustrated with two datasets drawn from studies using the SilverCloud Platform: (1) a small, detailed dataset with interviews (n=19) and (2) a large dataset (n=326) with 44,838 logged events. We present four exploratory visualizations of user engagement with a Web-based intervention, including Navigation Graph, Stripe Graph, Start-Finish Graph, and Next Action Heat Map. The first represents individual usage and the last three, specific aspects of cohort usage. We provide examples of each with a discussion of salient features. Log-data analysis through data visualization is an alternative way of exploring user engagement with Web-based interventions, which can yield different insights than more commonly used summative measures. We describe how understanding the process of engagement through visualizations can support the development and evaluation of Web-based interventions. 
Specifically, we show how visualizations
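A "Next Action Heat Map" of the kind described above is, at its core, a matrix of transition counts between logged event types. A minimal sketch of how such counts might be derived from raw session logs (the event names below are hypothetical, not the SilverCloud schema):

```python
from collections import Counter

def transition_counts(sessions):
    """Count how often each logged event is immediately followed by another.

    sessions: list of event-name sequences, one per user session.
    Returns (sorted list of event names, dict mapping (prev, next) -> count),
    which can then be rendered as a heat map.
    """
    counts = Counter()
    events = set()
    for seq in sessions:
        events.update(seq)
        for prev, nxt in zip(seq, seq[1:]):
            counts[(prev, nxt)] += 1
    return sorted(events), dict(counts)

# Hypothetical intervention-platform logs: module views and tool uses.
logs = [
    ["login", "module1", "journal", "logout"],
    ["login", "module1", "module2", "logout"],
    ["login", "journal", "logout"],
]
events, counts = transition_counts(logs)
```

The resulting matrix makes cohort-level usage patterns (e.g. which module users typically open first) visible at a glance.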

  16. The Platformization of the Web: Making Web Data Platform Ready

    NARCIS (Netherlands)

    Helmond, A.

    2015-01-01

    In this article, I inquire into Facebook’s development as a platform by situating it within the transformation of social network sites into social media platforms. I explore this shift with a historical perspective on what I refer to as platformization, or the rise of the platform as the dominant

  17. Successfully Operationalizing a Franchise-Level Scientific Communication Platform

    OpenAIRE

    Kistler, Jamie; Wilson, Leanne; Wehner, Erica; Fallon, Judy; Gooljarsingh, Tricia

    2018-01-01

    Objective: To ensure consistency in language and communication points across a franchise of products by developing a franchise-level scientific communication platform (SCP) and creating a tool for its dissemination for use by a cross-functional team. Challenge/Problem: A franchise-level SCP was needed to achieve broad alignment on external communications supporting the products, leverage strengths and opportunities, and optimize differentiation in a competitive landscape. An ef...

  18. Fragmentation cross sections outside the limiting-fragmentation regime

    CERN Document Server

    Sümmerer, K

    2003-01-01

    The empirical parametrization of fragmentation cross sections, EPAX, has been successfully applied to estimate fragment production cross sections in reactions of heavy ions at high incident energies. It is checked whether a similar parametrization can be found for proton-induced spallation around 1 GeV, the range of interest for ISOL-type RIB facilities. The validity of EPAX for medium-energy heavy-ion induced reactions is also checked. Only a few datasets are available, but in general EPAX predicts the cross sections rather well, except for fragments close to the projectile, where the experimental cross sections are found to be larger.

  19. Mastering CMake a cross-platform build system : version 3.1

    CERN Document Server

    Martin, Ken

    2015-01-01

    CMake is an open-source build tool enabling collaboration among software developers working on distinct platforms by using a common build specification to drive their native build tools. Mastering CMake explains how to use the CMake suite of tools, including CTest and CPack, to develop, build, test, and package software for distribution. It covers use of the command-line and GUI tools on Linux (UNIX), Microsoft Windows, and Mac OS X. This book also contains a guide for converting projects to CMake and writing CMake code to specify build rules to compile sources, create static and shared libraries, link executables, run custom commands, run tests, and install artifacts. It also includes a copy of key portions of the official reference documentation.

  20. DART, a platform for the creation and registration of cone beam digital tomosynthesis datasets.

    Science.gov (United States)

    Sarkar, Vikren; Shi, Chengyu; Papanikolaou, Niko

    2011-04-01

    Digital tomosynthesis is an imaging modality that allows for tomographic reconstructions using only a fraction of the images needed for CT reconstruction. Since it offers the advantages of tomographic images with a smaller imaging dose delivered to the patient, the technique offers much promise for use in patient positioning prior to radiation delivery. This paper describes a software environment developed to help in the creation of digital tomosynthesis image sets from digital portal images using three different reconstruction algorithms. The software then allows for use of the tomograms for patient positioning or for dose recalculation if shifts are not applied, possibly as part of an adaptive radiotherapy regimen.

  1. Comparison of gene coverage of mouse oligonucleotide microarray platforms

    Directory of Open Access Journals (Sweden)

    Medrano Juan F

    2006-03-01

    Full Text Available Abstract Background The increasing use of DNA microarrays for genetical genomics studies generates a need for platforms with complete coverage of the genome. We have compared the effective gene coverage in the mouse genome of different commercial and noncommercial oligonucleotide microarray platforms by performing an in-house gene annotation of probes. We only used information about probes that is available from vendors and followed a process that any researcher may take to find the gene targeted by a given probe. In order to make consistent comparisons between platforms, probes in each microarray were annotated with an Entrez Gene id and the chromosomal position for each gene was obtained from the UCSC Genome Browser Database. Gene coverage was estimated as the percentage of Entrez Genes with a unique position in the UCSC Genome database that is tested by a given microarray platform. Results A MySQL relational database was created to store the mapping information for 25,416 mouse genes and for the probes in five microarray platforms (gene coverage level in parentheses): Affymetrix430 2.0 (75.6%), ABI Genome Survey (81.24%), Agilent (79.33%), Codelink (78.09%), Sentrix (90.47%); and four array-ready oligosets: Sigma (47.95%), Operon v.3 (69.89%), Operon v.4 (84.03%), and MEEBO (84.03%). The differences in coverage between platforms were highly conserved across chromosomes. Differences in the number of redundant and unspecific probes were also found among arrays. The database can be queried to compare specific genomic regions using a web interface. The software used to create, update and query the database is freely available as a toolbox named ArrayGene. Conclusion The software developed here allows researchers to create updated custom databases by using public or proprietary information on genes for any organism. ArrayGene allows easy comparisons of gene coverage between microarray platforms for any region of the genome. The comparison presented here
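The coverage metric defined above (percentage of uniquely mapped Entrez Genes hit by at least one probe) reduces to a simple set intersection. A minimal sketch with toy data, not the ArrayGene implementation:

```python
def gene_coverage(mapped_genes, probe_targets):
    """Percentage of uniquely mapped genes targeted by at least one probe.

    mapped_genes: set of gene ids with a unique genomic position.
    probe_targets: iterable of gene ids targeted by the platform's probes
                   (may contain repeats, for redundant probes).
    """
    covered = mapped_genes & set(probe_targets)
    return 100.0 * len(covered) / len(mapped_genes)

# Toy numbers only; the study reports e.g. 75.6% for Affymetrix430 2.0.
genes = {"g1", "g2", "g3", "g4"}
probes = ["g1", "g1", "g3", "gX"]  # "gX": probe not mapping to a unique gene
coverage = gene_coverage(genes, probes)  # 2 of 4 genes covered -> 50.0
```

In the study itself this computation is done against the MySQL database of probe annotations rather than in-memory sets.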

  2. Micro/nano analysis of tooth microstructures by Focused Ion Beam (FIB) cross-sectioning

    Directory of Open Access Journals (Sweden)

    Meltem Sezen

    2017-04-01

    Full Text Available Since dental structures are hard and fragile, cross-sectioning these materials by ultramicrotomy and other techniques, and the subsequent micro- and nano-scale analysis, are problematic. FIB-SEM dual-beam platforms are the most convenient solution for investigating such microstructures site-specifically and in defined geometries. Dual-beam platforms allow imaging at high magnifications and resolutions together with simultaneous elemental analysis. In this study, the micro/nano-structural and chemical differences between dentin and enamel samples were revealed. The investigation of dental tissues with different morphologies and chemical components by ion cross-sectioning is important for the use of FIB-SEM platforms in dentistry in Turkey.

  3. A Digital Knowledge Preservation Platform for Environmental Sciences

    Science.gov (United States)

    Aguilar Gómez, Fernando; de Lucas, Jesús Marco; Pertinez, Esther; Palacio, Aida; Perez, David

    2017-04-01

    The Digital Knowledge Preservation Platform is the evolution of a pilot project for Open Data supporting the full research data life cycle. It is currently being evolved at IFCA (Instituto de Física de Cantabria) as a combination of different open tools that have been extended: DMPTool (https://dmptool.org/) with pilot semantics features (RDF export, parameter definition), a customized version of INVENIO (http://invenio-software.org/) to integrate the entire research data life cycle, and Jupyter (http://jupyter.org/) as a processing tool and reproducibility environment. This complete platform aims to provide an integrated environment for research data management following the FAIR+R principles: Findable: the Web portal based on Invenio provides a search engine, and all elements include metadata to make them easily findable. Accessible: both data and software are available online with internal PIDs and DOIs (provided by DataCite). Interoperable: datasets can be combined to perform new analyses; the OAI-PMH standard is also integrated. Re-usable: different licence types and embargo periods can be defined. +Reproducible: directly integrated with cloud computing resources. The deployment of the entire system over a Cloud framework helps to build a dynamic and scalable solution, not only for managing open datasets but also as a useful tool for the final user, who is able to directly process and analyse the open data. In parallel, the direct use of semantics and metadata is being explored and integrated in the framework. Ontologies, being a knowledge representation, can contribute to defining the elements and relationships of the research data life cycle, including DMPs, datasets, software, etc. The first advantage of developing an ontology of a knowledge domain is that it provides a common vocabulary hierarchy (i.e. a conceptual schema) that can be used and standardized by all the agents interested in the domain (either humans or machines). 
This way of using ontologies

  4. Large-scale machine learning and evaluation platform for real-time traffic surveillance

    Science.gov (United States)

    Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel

    2016-09-01

    In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurements and classification. The proposed positive and negative mining process addresses the quality of crowd-sourced ground truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle the data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% half of the time, and about 78% 19/20 of the time, when tested on ~7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.
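Haar-like features of the kind fed to AdaBoost above are typically evaluated in constant time from an integral image (summed-area table). A minimal numpy sketch of that building block, not the authors' pipeline:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of the h x w rectangle with top-left corner (r, c), in O(1)."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle (left minus right) Haar-like feature of total width 2w."""
    return rect_sum(ii, r, c, h, w) - rect_sum(ii, r, c + w, h, w)

# Tiny 4x4 "frame" standing in for a video frame patch.
img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
```

Because each feature costs only a handful of table lookups, evaluating on the order of a million candidate features per training window stays tractable.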

  5. NP-PAH Interaction Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dataset presents concentrations of organic pollutants, such as polyaromatic hydrocarbon compounds, in water samples. Water samples of known volume and concentration...

  6. A dataset on tail risk of commodities markets.

    Science.gov (United States)

    Powell, Robert J; Vo, Duc H; Pham, Thach N; Singh, Abhay K

    2017-12-01

    This article contains the datasets related to the research article "The long and short of commodity tails and their relationship to Asian equity markets" (Powell et al., 2017) [1]. The datasets contain the daily prices (and price movements) of 24 different commodities decomposed from the S&P GSCI index and the daily prices (and price movements) of three share market indices (World, Asia, and South East Asia) for the period 2004-2015. The dataset is then divided into annual periods, showing the worst 5% of price movements for each year. The datasets are convenient for examining the tail risk of different commodities, as measured by Conditional Value at Risk (CVaR), as well as its changes over time. The datasets can also be used to investigate the association between commodity markets and share markets.
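CVaR at the 95% level, as used with these datasets, is simply the average of the worst 5% of price movements. A minimal numpy sketch on synthetic returns (not the S&P GSCI data):

```python
import numpy as np

def cvar(returns, alpha=0.05):
    """Conditional Value at Risk: mean of the worst alpha fraction of returns.

    Returned as a positive number (the magnitude of the expected tail loss).
    """
    r = np.sort(np.asarray(returns, dtype=float))  # most negative first
    k = max(1, int(np.ceil(alpha * len(r))))       # size of the tail
    return -r[:k].mean()

# Synthetic daily price movements (illustrative only).
moves = np.array([0.01, -0.02, 0.005, -0.05, 0.02, -0.01, 0.03, -0.04,
                  0.015, -0.03, 0.0, 0.01, -0.015, 0.02, 0.01, -0.06,
                  0.005, 0.025, -0.005, 0.01])
tail_risk = cvar(moves, alpha=0.05)  # worst 5% of 20 days = the single worst day
```

Computing this per commodity and per year reproduces the kind of annual tail-risk comparison the dataset is organized for.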

  7. Research on distributed heterogeneous data PCA algorithm based on cloud platform

    Science.gov (United States)

    Zhang, Jin; Huang, Gang

    2018-05-01

    Principal component analysis (PCA) of distributed heterogeneous data sets can address the limited scalability of centralized data. To reduce the generation of intermediate data and error components for distributed heterogeneous data sets, a PCA algorithm for heterogeneous data sets on a cloud platform is proposed. The algorithm processes eigenvalues using Householder tridiagonalization and QR factorization, calculating the error component of the heterogeneous database associated with the public key to obtain the intermediate data set and the lost information. Experiments on distributed DBM heterogeneous datasets show that the method is feasible and reliable in terms of execution time and accuracy.
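The eigenvalue machinery named above (tridiagonalization followed by QR) can be illustrated with a bare unshifted QR iteration on a small symmetric matrix. This numpy sketch shows the idea only; it is not the paper's distributed algorithm, and production code would first reduce the matrix to tridiagonal form with Householder reflections and use shifts:

```python
import numpy as np

def qr_eigvals(A, iters=500):
    """Approximate eigenvalues of a symmetric matrix by unshifted QR iteration.

    Each step A_{k+1} = R_k Q_k is a similarity transform, so eigenvalues are
    preserved; for symmetric matrices the iterates converge toward a diagonal
    matrix carrying the eigenvalues.
    """
    Ak = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return np.sort(np.diag(Ak))

# Small symmetric "covariance-like" matrix with well-separated eigenvalues.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
approx = qr_eigvals(A)
exact = np.sort(np.linalg.eigvalsh(A))
```

In a PCA setting the input would be the (distributed pieces of the) covariance matrix, and the dominant eigenvectors give the principal components.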

  8. Predicting weather regime transitions in Northern Hemisphere datasets

    Energy Technology Data Exchange (ETDEWEB)

    Kondrashov, D. [University of California, Department of Atmospheric and Oceanic Sciences and Institute of Geophysics and Planetary Physics, Los Angeles, CA (United States); Shen, J. [UCLA, Department of Statistics, Los Angeles, CA (United States); Berk, R. [UCLA, Department of Statistics, Los Angeles, CA (United States); University of Pennsylvania, Department of Criminology, Philadelphia, PA (United States); D'Andrea, F.; Ghil, M. [Ecole Normale Superieure, Departement Terre-Atmosphere-Ocean and Laboratoire de Meteorologie Dynamique (CNRS and IPSL), Paris Cedex 05 (France)

    2007-10-15

    A statistical learning method called random forests is applied to the prediction of transitions between weather regimes of wintertime Northern Hemisphere (NH) atmospheric low-frequency variability. A dataset composed of 55 winters of NH 700-mb geopotential height anomalies is used in the present study. A mixture model finds that the three Gaussian components that were statistically significant in earlier work are robust; they are the Pacific-North American (PNA) regime, its approximate reverse (the reverse PNA, or RNA), and the blocked phase of the North Atlantic Oscillation (BNAO). The most significant and robust transitions in the Markov chain generated by these regimes are PNA → BNAO, PNA → RNA and BNAO → PNA. The break of a regime and subsequent onset of another one is forecast for these three transitions. Taking the relative costs of false positives and false negatives into account, the random-forests method shows useful forecasting skill. The calculations are carried out in the phase space spanned by a few leading empirical orthogonal functions of dataset variability. Plots of estimated response functions to a given predictor confirm the crucial influence of the exit angle on a preferred transition path. This result points to the dynamic origin of the transitions. (orig.)

  9. Single top quark production cross section using the ATLAS detector at the LHC

    CERN Document Server

    Cioara, Irina Antonela; The ATLAS collaboration

    2017-01-01

    Measurements of single top-quark production in proton-proton collisions are presented, based on the 8 TeV and 13 TeV ATLAS datasets. In the leading-order process, a W boson is exchanged in the t-channel. The single top-quark and single anti-top-quark total production cross-sections, their ratio, and the inclusive production cross-section are presented. At 8 TeV, differential cross-section measurements of the t-channel process are also reported; these analyses include limits on anomalous contributions to the Wtb vertex and a measurement of the top-quark polarization. A measurement of the production cross-section of a single top quark in association with a W boson, the second-largest single-top production mode, is also presented. Finally, evidence for s-channel single-top production in the 8 TeV ATLAS dataset is presented. All measurements are compared to state-of-the-art theoretical calculations.

  10. Women's satisfaction with care at the birthplace in Austria: Evaluation of the Babies Born Better survey national dataset.

    Science.gov (United States)

    Luegmair, Karolina; Zenzmaier, Christoph; Oblasser, Claudia; König-Bachmann, Martina

    2018-04-01

    To evaluate women's satisfaction with care at the birthplace in Austria and to provide reference data for cross-country comparisons within the international Babies Born Better project. A cross-sectional design was applied. The data were extracted from the Babies Born Better survey as a national sub-dataset that included all participants with Austria as the indicated country of residence. An online survey targeted women who had given birth within the last five years and was distributed primarily via social media. In addition to sociodemographic and closed-ended questions regarding pregnancy and the childbirth environment, the women's childbirth experiences and satisfaction with the birthplace were obtained with three open-ended questions regarding (i) the best experience of care, (ii) required changes in care and (iii) an honest description of the experienced care. Five hundred and thirty-nine women who had given birth in Austria within the last five years participated. Based on the concepts of public health, salutogenesis and self-efficacy, a deductive coding framework was developed and applied to analyse the qualitative data of the Babies Born Better survey. Regarding honest descriptions of the experienced care at the birthplace, 82% were positive, indicating that most of the respondents were largely satisfied with the care experienced. More than 95% of the survey participants' positive experiences and more than 87% of their negative experiences with care could be assigned to the categories of the deductive coding framework. Whereas positive experiences mainly addressed care experienced at the individual level, negative experiences more frequently related to issues of the existing infrastructure, breastfeeding counselling or topics not covered by the coding framework. Evaluation of these unassigned responses revealed an emphasis on antenatal and puerperal care, as well as insufficient reimbursement of expenses by health insurance funds and the desire for more midwifery-led care. 
although the

  11. Detection of the position and cross-section of a tokamak plasma with magnetic probes

    International Nuclear Information System (INIS)

    Aikawa, Hiroshi; Ogata, Atsushi; Suzuki, Yasuo

    1977-02-01

    The position and cross-sectional shape of a Tokamak plasma are obtained analytically from magnetic probe signals, taking into consideration the toroidal effect. Multipole moment analysis of the plasma current density, introducing the vertical asymmetry, shows the horizontal and vertical displacements and the elliptical deviation. The error in the measurement is estimated by means of the least square method. The observed error is proportional to the error of setting the probes, and inversely proportional to the square root of the number of probes. (auth.)

  12. Wrox Cross Platform Android and iOS Mobile Development Three-Pack

    CERN Document Server

    McClure, Wallace B; Croft, John J; Dick, Jonathan; Hardy, Chris; Olson, Scott; Hunter, John; Horgen, Ben; Goers, Kenny; Blyth, Rory; Dunn, Craig; Bowling, Martin

    2012-01-01

    A bundle of 3 best-selling and respected mobile development e-books from Wrox form a complete library on the key tools and techniques for developing apps across the hottest platforms including Android and iOS.  This collection includes the full content of these three books, at a special price:Professional Android Programming with Mono for Android and .NET/C#, ISBN: 9781118026434, by Wallace B. McClure, Nathan Blevins, John J. Croft, IV, Jonathan Dick, and Chris HardyProfessional iPhone Programming with MonoTouch and .NET/C#, ISBN: 9780470637821, by Wallace B. McClure, Rory Blyth, Craig Dunn, C

  13. Service platforms management strategy: case study of an interior design firm

    Directory of Open Access Journals (Sweden)

    Leonel Del Rey de Melo Filho

    2015-03-01

    Full Text Available Platform management is a strategic tool for firms of various sizes, although it demands further study in the service sector. The aim of this paper is to investigate the use of platform management to achieve flexibility and operational dynamism in service projects. The studied platform is evaluated as a strategic resource in a particular case. The contributions of the service platform were explored from the Resource-Based View (RBV) and Service Marketing (SM) perspectives, to study their effects on the firm's performance. The research strategy used was an exploratory case study in an interior design firm. The data collection techniques included participant observation, document analysis and a focus group with firm managers. The research demonstrated that platform management is a strategic resource that assists with the planning of internal capabilities and market positioning, and provides better customer service.

  14. Changes in the peripheral blood transcriptome associated with occupational benzene exposure identified by cross-comparison on two microarray platforms

    Energy Technology Data Exchange (ETDEWEB)

    McHale, Cliona M.; Zhang, Luoping; Lan, Qing; Li, Guilan; Hubbard, Alan E.; Forrest, Matthew S.; Vermeulen, Roel; Chen, Jinsong; Shen, Min; Rappaport, Stephen M.; Yin, Songnian; Smith, Martyn T.; Rothman, Nathaniel

    2009-03-01

    Benzene is an established cause of leukemia and a possible cause of lymphoma in humans, but the molecular pathways underlying this remain largely undetermined. This study sought to determine whether the use of two different microarray platforms could identify robust global gene expression and pathway changes associated with occupational benzene exposure in the peripheral blood mononuclear cell (PBMC) gene expression of a population of shoe-factory workers with well-characterized occupational exposures to benzene. Microarray data were analyzed by a robust t-test using a Quantile Transformation (QT) approach. Differential expression of 2692 genes using the Affymetrix platform and 1828 genes using the Illumina platform was found. While the overall concordance in genes identified as significantly associated with benzene exposure between the two platforms was 26% (475 genes), the most significant genes identified by either array were more likely to be ranked as significant by the other platform (Illumina = 64%, Affymetrix = 58%). Expression ratios were similar among the concordant genes (mean difference in expression ratio = 0.04, standard deviation = 0.17). Four genes (CXCL16, ZNF331, JUN and PF4), which we previously identified by microarray and confirmed by real-time PCR, were identified by both platforms in the current study and were among the top 100 genes. Gene Ontology analysis showed overrepresentation of genes involved in apoptosis among the concordant genes, while Ingenuity® Pathway Analysis (IPA) identified pathways related to lipid metabolism. Using a two-platform approach allows robust changes in the PBMC transcriptome of benzene-exposed individuals to be identified.

  15. Proteomics dataset

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Carlsen, Thomas Gelsing; Ellingsen, Torkell

    2017-01-01

    patients (Morgan et al., 2012; Abraham and Medzhitov, 2011; Bennike, 2014) [8–10]. Therefore, we characterized the proteome of colon mucosa biopsies from 10 inflammatory bowel disease ulcerative colitis (UC) patients, 11 gastrointestinal healthy rheumatoid arthritis (RA) patients, and 10 controls. We...... been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifiers PXD001608 for ulcerative colitis and control samples, and PXD003082 for rheumatoid arthritis samples.

  16. Time-varying spatial data integration and visualization: 4 Dimensions Environmental Observations Platform (4-DEOS)

    Science.gov (United States)

    Paciello, Rossana; Coviello, Irina; Filizzola, Carolina; Genzano, Nicola; Lisi, Mariano; Mazzeo, Giuseppe; Pergola, Nicola; Sileo, Giancanio; Tramutoli, Valerio

    2014-05-01

    In environmental studies, the integration of heterogeneous, time-varying data is a very common requirement for investigating, and possibly visualizing, correlations among the physical parameters underlying the dynamics of complex phenomena. Datasets used in such applications often have different spatial and temporal resolutions. In some cases, superimposition of asynchronous layers is required. Traditionally, the platforms used to perform spatio-temporal visual data analyses allow spatial data to be overlaid, managing time with a 'snapshot' data model, each stack of layers being labeled with a different time. This kind of architecture incorporates neither temporal indexing nor the third spatial dimension, which is usually given as an independent additional layer. Conversely, the full representation of a generic environmental parameter P(x,y,z,t) in the 4D space-time domain could allow asynchronous datasets to be handled, as well as less traditional data products (e.g. vertical sections, punctual time series, etc.). In this paper we present the 4 Dimensions Environmental Observation Platform (4-DEOS), a system based on a Client-Broker-Server web services architecture. This platform is a new open-source solution for both timely access and easy integration and visualization of heterogeneous, asynchronous geospatial products (maps, vertical profiles or sections, punctual time series, etc.). The innovative aspect of the 4-DEOS system is that users can analyze data/products individually while moving through time, with the possibility of pausing the display of some data/products to focus on other parameters and better study their temporal evolution. This platform gives users the opportunity to choose between two distinct display modes: by time interval or by single instant. Users can choose to visualize data/products in two ways: i) showing each parameter in a dedicated window or ii) visualizing all parameters overlapped in a single window. 
A sliding time bar allows

  17. Merged SAGE II, Ozone_cci and OMPS ozone profile dataset and evaluation of ozone trends in the stratosphere

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2017-10-01

    Full Text Available In this paper, we present a merged dataset of ozone profiles from several satellite instruments: SAGE II on ERBS; GOMOS, SCIAMACHY and MIPAS on Envisat; OSIRIS on Odin; ACE-FTS on SCISAT; and OMPS on Suomi-NPP. The merged dataset is created in the framework of the European Space Agency Climate Change Initiative (Ozone_cci) with the aim of analyzing stratospheric ozone trends. For the merged dataset, we used the latest versions of the original ozone datasets. The datasets from the individual instruments have been extensively validated and intercompared; only those datasets which are in good agreement, and do not exhibit significant drifts with respect to collocated ground-based observations and with respect to each other, are used for merging. The long-term SAGE–CCI–OMPS dataset is created by computation and merging of deseasonalized anomalies from individual instruments. The merged SAGE–CCI–OMPS dataset consists of deseasonalized anomalies of ozone in 10° latitude bands from 90° S to 90° N and from 10 to 50 km in steps of 1 km covering the period from October 1984 to July 2016. This newly created dataset is used for evaluating ozone trends in the stratosphere through multiple linear regression. Negative ozone trends in the upper stratosphere are observed before 1997 and positive trends are found after 1997. The upper stratospheric trends are statistically significant at midlatitudes and indicate ozone recovery, as expected from the decrease of stratospheric halogens that started in the middle of the 1990s and stratospheric cooling.
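Deseasonalized anomalies of the kind merged here are obtained by subtracting the mean seasonal cycle before fitting trends. A minimal numpy sketch on a synthetic monthly series (illustrative only, not the SAGE–CCI–OMPS data, and using a simple linear fit rather than the paper's multiple linear regression):

```python
import numpy as np

def deseasonalize(series, period=12):
    """Subtract the mean seasonal cycle (monthly climatology) from a series."""
    x = np.asarray(series, dtype=float)
    clim = np.array([x[m::period].mean() for m in range(period)])
    return x - np.tile(clim, len(x) // period)

# Synthetic 30-year monthly series: a seasonal cycle plus a linear trend.
months = np.arange(360)
trend_per_month = 0.01
series = 5.0 * np.sin(2 * np.pi * months / 12) + trend_per_month * months
anom = deseasonalize(series)
fit = np.polyfit(months, anom, 1)  # fit[0] estimates the trend per month
```

Removing the seasonal cycle first prevents the large annual oscillation from inflating the residuals of the trend fit.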

  18. An Extensible Sensing and Control Platform for Building Energy Management

    Energy Technology Data Exchange (ETDEWEB)

    Rowe, Anthony [Carnegie Mellon Univ., Pittsburgh, PA (United States); Berges, Mario [Carnegie Mellon Univ., Pittsburgh, PA (United States); Martin, Christopher [Robert Bosch LLC, Anderson, SC (United States)

    2016-04-03

    The goal of this project is to develop Mortar.io, an open-source BAS platform designed to simplify data collection, archiving, event scheduling and coordination of cross-system interactions. Mortar.io is optimized for (1) robustness to network outages, (2) ease of installation using plug-and-play and (3) scalable support for small to large buildings and campuses.

  19. Ambient assisted living platform for remote monitoring of bedridden people

    OpenAIRE

    Pereira, F.; Barros, C.; Carvalho, V.; Machado, José; Leão, Celina Pinto; Soares, Filomena; Bezerra, K.; Matos, Demétrio Ferreira

    2015-01-01

    The aim of this paper is to present a platform for remote monitoring of bedridden people developed in the context of Ambient Assisted Living (AAL). This platform, Medical Care Terminal (MCT), includes the measurement of biomedical data (body temperature, galvanic skin resistance, electrocardiogram and electromyogram, level of oxygen, body position and breathing) as well as environmental data (level of alcohol in the air, carbon monoxide level in the air, brightness and temperature). It presents ...

  20. National Hydrography Dataset (NHD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The National Hydrography Dataset (NHD) is a feature-based database that interconnects and uniquely identifies the stream segments or reaches that comprise the...

  1. Oceanographic temperature, salinity, oxygen, phosphate, total phosphorus, silicate, nitrite, pH, alkalinity measurements collected using bottle on multiple platforms in the Pacific, Atlantic, Arctic, Mediterranean from 1910 to 1982 (NODC Accession 0038350)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Temperature, salinity, nutrients, oxygen, and other measurements found in dataset OSD taken from the AGASSIZ; A., ALBACORE and other platforms in the Coastal N...

  2. The Harvard organic photovoltaic dataset.

    Science.gov (United States)

    Lopez, Steven A; Pyzer-Knapp, Edward O; Simm, Gregor N; Lutzow, Trevor; Li, Kewei; Seress, Laszlo R; Hachmann, Johannes; Aspuru-Guzik, Alán

    2016-09-27

    The Harvard Organic Photovoltaic Dataset (HOPV15) presented in this work is a collation of experimental photovoltaic data from the literature, and corresponding quantum-chemical calculations performed over a range of conformers, each with quantum chemical results using a variety of density functionals and basis sets. It is anticipated that this dataset will be of use both in relating electronic structure calculations to experimental observations through the generation of calibration schemes, and in the creation of new semi-empirical methods and the benchmarking of current and future model chemistries for organic electronic applications.

  3. An Ex Vivo Imaging Pipeline for Producing High- Quality and High-Resolution Diffusion-Weighted Imaging Datasets

    DEFF Research Database (Denmark)

    Dyrby, Tim Bjørn; Baaré, William F.C.; Alexander, Daniel C.

    2011-01-01

    Diffusion tensor (DT) imaging and related multifiber reconstruction algorithms allow the study of in vivo microstructure and, by means of tractography, structural connectivity. Although reconstruction algorithms are promising imaging tools, high‐quality diffusion‐weighted imaging (DWI) datasets...... complexity, to establish an ex vivo imaging pipeline for generating high‐quality DWI datasets. Perfusion fixation ensured that tissue characteristics were comparable to in vivo conditions. There were three main results: (i) heat conduction and unstable tissue mechanics accounted for time‐varying artefacts...... in the DWI dataset, which were present for up to 15 h after positioning brain tissue in the scanner; (ii) using fitted DT, q‐ball, and persistent angular structure magnetic resonance imaging algorithms, any b‐value between ∼2,000 and ∼8,000 s/mm2, with an optimal value around 4,000 s/mm2, allowed...

  4. Tables and figure datasets

    Data.gov (United States)

    U.S. Environmental Protection Agency — Soil and air concentrations of asbestos in Sumas study. This dataset is associated with the following publication: Wroble, J., T. Frederick, A. Frame, and D....

  5. Parasol: An Architecture for Cross-Cloud Federated Graph Querying

    Energy Technology Data Exchange (ETDEWEB)

    Lieberman, Michael; Choudhury, Sutanay; Hughes, Marisa; Patrone, Dennis; Hider, Sandy; Piatko, Christine; Chapman, Matthew; Marple, JP; Silberberg, David

    2014-06-22

    Large-scale data fusion of multiple datasets can often provide insights that examining datasets individually cannot. However, when these datasets reside in different data centers and cannot be collocated due to technical, administrative, or policy barriers, a unique set of problems arise that hamper querying and data fusion. To address these problems, a system and architecture named Parasol is presented that enables federated queries over graph databases residing in multiple clouds. Parasol’s design is flexible and requires only minimal assumptions for participant clouds. Query optimization techniques are also described that are compatible with Parasol’s lightweight architecture. Experiments on a prototype implementation of Parasol indicate its suitability for cross-cloud federated graph queries.

  6. CPSS: a computational platform for the analysis of small RNA deep sequencing data.

    Science.gov (United States)

    Zhang, Yuanwei; Xu, Bo; Yang, Yifan; Ban, Rongjun; Zhang, Huan; Jiang, Xiaohua; Cooke, Howard J; Xue, Yu; Shi, Qinghua

    2012-07-15

    Next generation sequencing (NGS) techniques have been widely used to document the small ribonucleic acids (RNAs) implicated in a variety of biological, physiological and pathological processes. An integrated computational tool is needed for handling and analysing the enormous datasets produced by small RNA deep sequencing approaches. Herein, we present a novel web server, CPSS (a computational platform for the analysis of small RNA deep sequencing data), designed to completely annotate and functionally analyse microRNAs (miRNAs) from NGS data on one platform with a single data submission. Small RNA NGS data can be submitted to this server with analysis results being returned in two parts: (i) annotation analysis, which provides the most comprehensive analysis for the small RNA transcriptome, including length distribution and genome mapping of sequencing reads, small RNA quantification, prediction of novel miRNAs, identification of differentially expressed miRNAs, piwi-interacting RNAs and other non-coding small RNAs between paired samples, and detection of miRNA editing and modifications; and (ii) functional analysis, including prediction of miRNA targeted genes by multiple tools, enrichment of gene ontology terms, signalling pathway involvement and protein-protein interaction analysis for the predicted genes. CPSS, a ready-to-use web server that integrates most functions of currently available bioinformatics tools, provides all the information wanted by the majority of users from small RNA deep sequencing datasets. CPSS is implemented in PHP/PERL+MySQL+R and can be freely accessed at http://mcg.ustc.edu.cn/db/cpss/index.html or http://mcg.ustc.edu.cn/sdap1/cpss/index.html.

  7. Mobile Phone Cognitive Bias Modification Research Platform for Substance Use Disorders: Protocol for a Feasibility Study.

    Science.gov (United States)

    Zhang, Melvyn; Ying, JiangBo; Song, Guo; Fung, Daniel Ss; Smith, Helen

    2018-06-12

    Cognitive biases refer to automatic attentional and interpretational tendencies, which could be retrained by cognitive bias modification interventions. Cristea et al and Jones et al have published reviews (in 2016 and 2017, respectively) on the effectiveness of such interventions. The advancement of technologies such as electronic health (eHealth) and mobile health (mHealth) has led to them being harnessed for the delivery of cognitive bias modification. To date, at least eight studies have demonstrated the feasibility of mobile technologies for the delivery of cognitive bias modification. Most of the studies are limited to a description of the conventional cognitive bias modification methodology that has been adopted. None of the studies shared the development process behind the methodology, such that future studies could adopt it for the cost-effective replication of such interventions. It is important to have a common platform that could facilitate the design and customization of cognitive bias modification interventions for a variety of psychiatric and addictive disorders. It is the aim of the current research protocol to describe the design of a research platform that allows for customization of cognitive bias modification interventions for addictive disorders. A multidisciplinary team of 2 addiction psychiatrists, a psychologist with expertise in cognitive bias modification, and a computer engineer was involved in the development of the intervention. The proposed platform would comprise a mobile phone version of the cognitive bias task, controlled by a server that could customize the algorithm for the tasks and collate the reaction-time data in real time. The server would also allow the researcher to program the specific set of images that will be present in the task. The mobile phone app would synchronize with the backend server in real time.
Open-source, cross-platform gaming software based on React Native was used in the current development

  8. Evaluation of multiplex assay platforms for detection of influenza hemagglutinin subtype specific antibody responses.

    Science.gov (United States)

    Li, Zhu-Nan; Weber, Kimberly M; Limmer, Rebecca A; Horne, Bobbi J; Stevens, James; Schwerzmann, Joy; Wrammert, Jens; McCausland, Megan; Phipps, Andrew J; Hancock, Kathy; Jernigan, Daniel B; Levine, Min; Katz, Jacqueline M; Miller, Joseph D

    2017-05-01

    Influenza hemagglutination inhibition (HI) and virus microneutralization (MN) assays are widely used for seroprevalence studies. However, these assays have limited field portability and are difficult to fully automate for high throughput laboratory testing. To address these issues, three multiplex influenza subtype-specific antibody detection assays were developed using recombinant hemagglutinin antigens in combination with Chembio, Luminex ® , and ForteBio ® platforms. Assay sensitivity, specificity, and subtype cross-reactivity were evaluated using a panel of well characterized human sera. Compared to the traditional HI, assay sensitivity ranged from 87% to 92% and assay specificity in sera collected from unexposed persons ranged from 65% to 100% across the platforms. High assay specificity (86-100%) for A(H5N1) rHA was achieved for sera from persons exposed or unexposed to heterosubtype influenza HAs. In contrast, assay specificity for A(H1N1)pdm09 rHA using sera collected from A/Vietnam/1204/2004 (H5N1) vaccinees in 2008 was low (22-30%) in all platforms. Although cross-reactivity against rHA subtype proteins was observed in each assay platform, the correct subtype specific responses were identified 78%-94% of the time when paired samples were available for analysis. These results show that high throughput and portable multiplex assays that incorporate rHA can be used to identify influenza subtype specific infections. Published by Elsevier B.V.

  9. Platform Performance and Challenges - using Platforms in Lego Company

    DEFF Research Database (Denmark)

    Munk, Lone; Mortensen, Niels Henrik

    2009-01-01

    needs focus on the incentive of using the platform. This problem lacks attention in literature, as well as industry, where assessment criteria do not cover this aspect. Therefore, we recommend including user incentive in platform assessment criteria to these challenges. Concrete solution elements...... ensuring user incentive in platforms is an object for future research...

  10. Electron-Impact Ionization Cross Sections of H, He, N, O, Ar, Xe, Au, Pb Atoms and Their Ions in the Electron Energy Range from the Threshold up to 200 keV

    CERN Document Server

    Povyshev, V M; Shevelko, V P; Shirkov, G D; Vasina, E G; Vatulin, V V

    2001-01-01

    Single electron-impact ionization cross sections of H, He, N, O, Ar, Xe, Au, Pb atoms and their positive ions (i.e. all ionization stages) are presented in the electron energy range from the threshold up to 200 keV. The dataset for the cross sections has been created on the basis of available experimental data and calculations performed by the computer code ATOM. Consistent data for the ionization cross sections have been fitted by seven parameters using the least-squares method (LSM). The accuracy of the calculated data presented is within a factor of 2, which in many cases is sufficient to solve plasma kinetics problems. Contributions from excitation-autoionization and resonant-ionization processes as well as ionization of atoms and ions are not considered here. The results of the numerical calculations are compared with the well-known Lotz formulae for ionization of neutral atoms and positive ions. The material is illustrated by figures and includes tables of ionization cross sections, binding energies and fitting para...
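
As a worked illustration of the kind of semi-empirical formula involved, the Lotz expression mentioned above, σ = a·q·ln(E/P)/(E·P)·[1 − b·exp(−c·(E/P − 1))] for a subshell with binding energy P holding q electrons, can be evaluated as follows. The default constants are the values often quoted for neutral hydrogen and are stated here as an assumption for the sketch, not as authoritative fit parameters from this dataset.

```python
import math

def lotz_cross_section(E, P, q, a=4.0e-14, b=0.60, c=0.56):
    """Lotz semi-empirical single-ionization cross section (cm^2) for a
    subshell with binding energy P (eV) holding q electrons, at electron
    energy E (eV).  a, b, c are per-element fit constants; the defaults
    are illustrative values for neutral hydrogen."""
    if E <= P:
        return 0.0  # below the ionization threshold
    u = E / P  # reduced energy
    return a * q * math.log(u) / (E * P) * (1.0 - b * math.exp(-c * (u - 1.0)))
```

The formula reproduces the expected qualitative shape: zero at threshold, a peak at a few times the binding energy, and a slow ln(E)/E fall-off at high energy.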

  11. Pancreatic Expression database: a generic model for the organization, integration and mining of complex cancer datasets

    Directory of Open Access Journals (Sweden)

    Lemoine Nicholas R

    2007-11-01

    Full Text Available Abstract Background Pancreatic cancer is the 5th leading cause of cancer death in both males and females. In recent years, a wealth of gene and protein expression studies have been published, broadening our understanding of pancreatic cancer biology. Due to the explosive growth in publicly available data from multiple different sources, it is becoming increasingly difficult for individual researchers to integrate these into their current research programmes. The Pancreatic Expression database, a generic web-based system, aims to close this gap by providing the research community with an open access tool, not only to mine currently available pancreatic cancer datasets but also to include their own data in the database. Description Currently, the database holds 32 datasets comprising 7636 gene expression measurements extracted from 20 different published gene or protein expression studies from various pancreatic cancer types, pancreatic precursor lesions (PanINs) and chronic pancreatitis. The pancreatic data are stored in a data management system based on the BioMart technology alongside the human genome gene and protein annotations, sequence, homologue, SNP and antibody data. Interrogation of the database can be achieved through both a web-based query interface and web services, using combined criteria from pancreatic data (disease stages, regulation, differential expression, expression, platform technology, publication) and/or public data (antibodies, genomic region, gene-related accessions, ontology, expression patterns, multi-species comparisons, protein data, SNPs). Thus, our database enables connections between otherwise disparate data sources and allows relatively simple navigation between all data types and annotations. Conclusion The database structure and content provides a powerful and high-speed data-mining tool for cancer research. It can be used for target discovery, i.e.
of biomarkers from body fluids, identification and analysis

  12. Platform Constellations

    DEFF Research Database (Denmark)

    Staykova, Kalina Stefanova; Damsgaard, Jan

    2016-01-01

    This research paper presents an initial attempt to introduce and explain the emergence of a new phenomenon, which we refer to as platform constellations. Functioning as highly modular systems, platform constellations are collections of highly connected platforms which co-exist in parallel and a......’ acquisition and users’ engagement rates as well as unlock new sources of value creation and diversify revenue streams....

  13. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2013-01-01

    The first part of the Long Shutdown period has been dedicated to the preparation of the samples for the analyses targeting the summer conferences. In particular, the 8 TeV data acquired in 2012, including most of the “parked datasets”, have been reconstructed benefiting from improved alignment and calibration conditions for all the sub-detectors. Careful planning of the resources was essential in order to deliver the datasets to the analysts well in time, and to schedule the update of all the conditions and calibrations needed at the analysis level. The newly reprocessed data have undergone detailed scrutiny by the Dataset Certification team, allowing some of the data to be recovered for analysis usage and further improving the certification efficiency, which now stands at 91% of the recorded luminosity. With the aim of delivering a consistent dataset for 2011 and 2012, both in terms of conditions and release (53X), the PPD team is now working to set up a data re-reconstruction and a new MC pro...

  14. The next chapter in MOF pillaring strategies: Trigonal heterofunctional ligands to access targeted high-connected three dimensional nets, isoreticular platforms

    KAUST Repository

    Eubank, Jarrod F.

    2011-11-09

    A new pillaring strategy, based on a ligand-to-axial approach that combines the two previous common techniques, axial-to-axial and ligand-to-ligand, and permits design, access, and construction of higher dimensional MOFs, is introduced and validated. Trigonal heterofunctional ligands, in this case isophthalic acid cores functionalized at the 5-position with N-donor (e.g., pyridyl- or triazolyl-type) moieties, are designed and utilized to pillar pretargeted two-dimensional layers (supermolecular building layers, SBLs). These SBLs, based on edge transitive Kagomé and square lattices, are cross-linked into predicted three-dimensional MOFs with tunable large cavities, resulting in isoreticular platforms. © 2011 American Chemical Society.

  15. The next chapter in MOF pillaring strategies: Trigonal heterofunctional ligands to access targeted high-connected three dimensional nets, isoreticular platforms

    KAUST Repository

    Eubank, Jarrod F.; Wojtas, Łukasz; Hight, Matthew R.; Bousquet, Till; Kravtsov, Victor Ch H; Eddaoudi, Mohamed

    2011-01-01

    A new pillaring strategy, based on a ligand-to-axial approach that combines the two previous common techniques, axial-to-axial and ligand-to-ligand, and permits design, access, and construction of higher dimensional MOFs, is introduced and validated. Trigonal heterofunctional ligands, in this case isophthalic acid cores functionalized at the 5-position with N-donor (e.g., pyridyl- or triazolyl-type) moieties, are designed and utilized to pillar pretargeted two-dimensional layers (supermolecular building layers, SBLs). These SBLs, based on edge transitive Kagomé and square lattices, are cross-linked into predicted three-dimensional MOFs with tunable large cavities, resulting in isoreticular platforms. © 2011 American Chemical Society.

  16. VALUE-ADDED SERVICE INVESTING AND PRICING STRATEGIES FOR A TWO-SIDED PLATFORM UNDER INVESTING RESOURCE CONSTRAINT

    Institute of Scientific and Technical Information of China (English)

    Guowei Dou; Ping He

    2017-01-01

    Investing in value-added services (VAS) amplifies users' participation and platform profit. However, investing resources are usually limited in practice. This paper investigates VAS investing and pricing strategies for a two-sided platform under an investing resource constraint. We reveal that with VAS investment, subsidizing can still be used to enlarge users' demand, even when the investing cost becomes higher. For optimal pricing strategies, the network effect will be the dominating determinant if the gap between the two marginal cross-side benefits (i.e. the benefit that users obtain when each new user joins the other side of the platform) is large. Interestingly, we show that as the marginal investing cost increases, users might be priced either higher or lower. If the marginal investing cost increases to a high level, and the gap between the two marginal cross-side benefits is large, lowering the access fee for users possessing the higher cross-side network effect does not necessarily compensate for the profit loss caused by the higher cost. Moreover, after VAS is developed, raising the access fee for those whose marginal investing benefit is large does not necessarily generate more profit either. The opposite strategy further enlarges users' utility and promotes the investment to benefit more users.

  17. Integrated Surface Dataset (Global)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Integrated Surface Dataset (ISD) is composed of worldwide surface weather observations from over 35,000 stations, though the best spatial coverage is...

  18. Aaron Journal article datasets

    Data.gov (United States)

    U.S. Environmental Protection Agency — All figures used in the journal article are in netCDF format. This dataset is associated with the following publication: Sims, A., K. Alapaty , and S. Raman....

  19. Market Squid Ecology Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains ecological information collected on the major adult spawning and juvenile habitats of market squid off California and the US Pacific Northwest....

  20. Towards gaze-controlled platform games

    DEFF Research Database (Denmark)

    Muñoz, Jorge; Yannakakis, Georgios N.; Mulvey, Fiona

    2011-01-01

    This paper introduces the concept of using gaze as a sole modality for fully controlling player characters of fast-paced action computer games. A user experiment is devised to collect gaze and gameplay data from subjects playing a version of the popular Super Mario Bros platform game. The initial...... analysis shows that there is a rather limited grid around Mario where the efficient player focuses her attention the most while playing the game. The useful grid as we name it, projects the amount of meaningful visual information a designer should use towards creating successful player character...... controllers with the use of artificial intelligence for a platform game like Super Mario. Information about the eyes' position on the screen and the state of the game are utilized as inputs of an artificial neural network, which is trained to approximate which keyboard action is to be performed at each game...

  1. Geospatial Data Management Platform for Urban Groundwater

    Science.gov (United States)

    Gaitanaru, D.; Priceputu, A.; Gogu, C. R.

    2012-04-01

    tools) and a front-end geoportal service. The SIMPA platform makes use of mark-up transfer standards to provide a user-friendly application that can be accessed through the internet to query, analyse, and visualise geospatial data related to urban groundwater. The platform holds the information within the local groundwater geospatial databases, and the user is able to access these data through a geoportal service. The database architecture allows storing accurate and very detailed geological, hydrogeological, and infrastructure information that can be straightforwardly generalized and further upscaled. The geoportal service offers the possibility of querying a dataset from the spatial database. The query is coded in a standard mark-up language and sent to the server through the standard Hyper Text Transfer Protocol (HTTP) to be processed by the local application. After validation of the query, the results are sent back to the user to be displayed by the geoportal application. The main advantage of the SIMPA platform is that it offers the user the possibility of making a primary multi-criteria query, which results in a smaller set of data to be analysed afterwards. This improves both the transfer process parameters and the user's means of creating the desired query.
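
The query workflow described, a multi-criteria request encoded in a mark-up language and posted to the server over HTTP, can be illustrated with a minimal query builder. The element names and criteria below are invented for the sketch; SIMPA's actual query schema is not given in the abstract.

```python
import xml.etree.ElementTree as ET

def build_query(layer, bbox, start, end):
    """Encode a multi-criteria geospatial query as a small XML document,
    in the spirit of the mark-up queries sent to the geoportal server.
    Element and attribute names are illustrative only."""
    q = ET.Element("query")
    ET.SubElement(q, "layer").text = layer
    # bounding box as "min_lon,min_lat,max_lon,max_lat"
    ET.SubElement(q, "bbox").text = ",".join(str(c) for c in bbox)
    period = ET.SubElement(q, "period")
    period.set("start", start)
    period.set("end", end)
    return ET.tostring(q, encoding="unicode")
```

The resulting string is what would be sent in the body of an HTTP request; filtering by layer, region, and period on the server side is what keeps the returned dataset small.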

  2. The SAIL databank: linking multiple health and social care datasets

    Directory of Open Access Journals (Sweden)

    Ford David V

    2009-01-01

    Full Text Available Abstract Background Vast amounts of data are collected about patients and service users in the course of health and social care service delivery. Electronic data systems for patient records have the potential to revolutionise service delivery and research. But in order to achieve this, it is essential that the ability to link the data at the individual record level be retained whilst adhering to the principles of information governance. The SAIL (Secure Anonymised Information Linkage) databank has been established using disparate datasets, and over 500 million records from multiple health and social care service providers have been loaded to date, with further growth in progress. Methods Having established the infrastructure of the databank, the aim of this work was to develop and implement an accurate matching process to enable the assignment of a unique Anonymous Linking Field (ALF) to person-based records to make the databank ready for record-linkage research studies. An SQL-based matching algorithm (MACRAL, Matching Algorithm for Consistent Results in Anonymised Linkage) was developed for this purpose. Firstly the suitability of using a valid NHS number as the basis of a unique identifier was assessed using MACRAL. Secondly, MACRAL was applied in turn to match primary care, secondary care and social services datasets to the NHS Administrative Register (NHSAR), to assess the efficacy of this process, and the optimum matching technique. Results The validation of using the NHS number yielded specificity values > 99.8% and sensitivity values > 94.6% using probabilistic record linkage (PRL) at the 50% threshold, and error rates were Conclusion With the infrastructure that has been put in place, the reliable matching process that has been developed enables an ALF to be consistently allocated to records in the databank. The SAIL databank represents a research-ready platform for record-linkage studies.

  3. Positive and Negative Impacts of Cross-border M&A

    Institute of Scientific and Technical Information of China (English)

    裴长洪; 林江

    2007-01-01

    Mergers and acquisitions of Chinese enterprises by foreign investors have moved onto the public radar in recent years. To date, the M&A frenzy has drawn widespread attention, with a mixed reaction from proponents and opponents. Proponents consider such mergers and acquisitions conducive to realizing strategic readjustment of the national economic structure, optimizing resource allocation and improving the corporate governance structure. Opponents, however, are concerned that foreign mergers and acquisitions may jeopardize China's industrial security and erode the executive power of the central government in undertaking industrial development planning. Are the benefits of M&A outweighed by the costs, or vice versa? The focus column of this edition features two articles which debate this issue from opposing viewpoints. In the article "Positive and Negative Impacts of Cross-border M&A", the authors consider foreign M&A to be a new way of boosting the level of foreign investment utilization, and advocate China taking full advantage of this approach. The authors of the article "Self-Improvement or Self-Mutilation", meanwhile, hold foreign M&A to blame for state-owned asset erosion, and insist that China should oppose mergers and acquisitions of key state-owned enterprises by foreign investors at fire-sale prices.

  4. ATLAS File and Dataset Metadata Collection and Use

    CERN Document Server

    Albrand, S; The ATLAS collaboration; Lambert, F; Gallas, E J

    2012-01-01

    The ATLAS Metadata Interface (“AMI”) was designed as a generic cataloguing system, and as such it has found many uses in the experiment, including software release management, tracking of reconstructed event sizes and control of dataset nomenclature. The primary use of AMI is to provide a catalogue of datasets (file collections) which is searchable using physics criteria. In this paper we discuss the various mechanisms used for filling the AMI dataset and file catalogues. By correlating information from different sources we can derive aggregate information which is important for physics analysis; for example the total number of events contained in a dataset, and possible reasons for missing events such as a lost file. Finally we will describe some specialized interfaces which were developed for the Data Preparation and reprocessing coordinators. These interfaces manipulate information from both the dataset domain held in AMI, and the run-indexed information held in the ATLAS COMA application (Conditions and ...
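
The kind of aggregation described, deriving per-dataset totals by correlating file-level records and flagging possible reasons for missing events, can be sketched with a toy catalogue. The row layout and field names here are invented for illustration, not AMI's actual schema.

```python
from collections import defaultdict

# Hypothetical file-catalogue rows: (dataset_name, file_name, n_events, lost)
FILES = [
    ("data12.periodA", "f1.root", 1000, False),
    ("data12.periodA", "f2.root", 1500, True),   # lost file
    ("data12.periodB", "f3.root", 2000, False),
]

def aggregate(files):
    """Derive per-dataset event totals over available files, and flag
    datasets whose totals may be incomplete because of lost files."""
    totals = defaultdict(int)
    has_lost_file = defaultdict(bool)
    for dataset, _, n_events, is_lost in files:
        if not is_lost:
            totals[dataset] += n_events
        has_lost_file[dataset] = has_lost_file[dataset] or is_lost
    return dict(totals), dict(has_lost_file)
```

The point of the correlation is that the event count alone is ambiguous; joining it with file status explains *why* a dataset's total is lower than expected.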

  5. Benchmarking protein classification algorithms via supervised cross-validation

    NARCIS (Netherlands)

    Kertész-Farkas, A.; Dhir, S.; Sonego, P.; Pacurar, M.; Netoteia, S.; Nijveen, H.; Kuzniar, A.; Leunissen, J.A.M.; Kocsor, A.; Pongor, S.

    2008-01-01

    Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold,

  6. Norwegian Hydrological Reference Dataset for Climate Change Studies

    Energy Technology Data Exchange (ETDEWEB)

    Magnussen, Inger Helene; Killingland, Magnus; Spilde, Dag

    2012-07-01

    Based on the Norwegian hydrological measurement network, NVE has selected a Hydrological Reference Dataset for studies of hydrological change. The dataset meets international standards with high data quality. It is suitable for monitoring and studying the effects of climate change on the hydrosphere and cryosphere in Norway. The dataset includes streamflow, groundwater, snow, glacier mass balance and length change, lake ice and water temperature in rivers and lakes.(Author)

  7. Continuous Platform Development

    DEFF Research Database (Denmark)

    Nielsen, Ole Fiil

    low risks and investments but also with relatively fuzzy results. When looking for new platform projects, it is important to make sure that the company and market is ready for the introduction of platforms, and to make sure that people from marketing and sales, product development, and downstream......, but continuous product family evolution challenges this strategy. The concept of continuous platform development is based on the fact that platform development should not be a one-time experience but rather an ongoing process of developing new platforms and updating existing ones, so that product family...

  8. Linked Heritage: a collaborative terminology management platform for a network of multilingual thesauri and controlled vocabularies

    Directory of Open Access Journals (Sweden)

    Marie-Veronique Leroi

    2013-01-01

    Full Text Available Terminology and multilingualism have been among the main focuses of the Athena Project. Linked Heritage, as a legacy of this project, also deals with terminology and brings theory into practice by applying the recommendations given in the Athena Project. Linked Heritage, as a direct follow-up of these recommendations on terminology and multilingualism, is currently working on the development of a Terminology Management Platform (TMP). This platform will allow any cultural institution to register, SKOSify and manage its terminology in a collaborative way. This Terminology Management Platform will provide a network of multilingual and cross-domain terminologies.

  9. Single Top quark production cross section using ATLAS detector at the LHC

    CERN Document Server

    Estrada Pastor, Oscar; The ATLAS collaboration

    2018-01-01

    Measurements of single top-quark production in proton-proton collisions are presented based on the 8 TeV and 13 TeV ATLAS datasets. In the leading order process, a W boson is exchanged in the t-channel. The cross-section for the production of single top-quarks and single anti-top-quarks, their ratio, as well as differential cross-section measurements are also reported. These analyses include limits on anomalous contributions to the Wtb vertex and measurement of the top quark polarization. Measurements of the inclusive and differential cross-sections for the production of a single top quark in association with a W boson, the second largest single-top production mode, are also presented. Finally, evidence for s-channel single-top production in the 8 TeV ATLAS dataset is presented. All measurements are compared to state-of-the-art theoretical calculations.

  10. The Harvard organic photovoltaic dataset

    Science.gov (United States)

    Lopez, Steven A.; Pyzer-Knapp, Edward O.; Simm, Gregor N.; Lutzow, Trevor; Li, Kewei; Seress, Laszlo R.; Hachmann, Johannes; Aspuru-Guzik, Alán

    2016-01-01

    The Harvard Organic Photovoltaic Dataset (HOPV15) presented in this work is a collation of experimental photovoltaic data from the literature, and corresponding quantum-chemical calculations performed over a range of conformers, each with quantum chemical results using a variety of density functionals and basis sets. It is anticipated that this dataset will be of use in both relating electronic structure calculations to experimental observations through the generation of calibration schemes, as well as for the creation of new semi-empirical methods and the benchmarking of current and future model chemistries for organic electronic applications. PMID:27676312

  11. Synthetic and Empirical Capsicum Annuum Image Dataset

    NARCIS (Netherlands)

    Barth, R.

    2016-01-01

    This dataset consists of per-pixel annotated synthetic (10,500) and empirical (50) images of Capsicum annuum, also known as sweet or bell pepper, situated in a commercial greenhouse. Furthermore, the source models used to generate the synthetic images are included. The aim of the datasets is to

  12. Scaling of gene expression data allowing the comparison of different gene expression platforms

    NARCIS (Netherlands)

    van Ruissen, Fred; Schaaf, Gerben J.; Kool, Marcel; Baas, Frank; Ruijter, Jan M.

    2008-01-01

    Serial analysis of gene expression (SAGE) and microarrays have found widespread application, but much ambiguity exists regarding the amalgamation of the data resulting from these technologies. Cross-platform utilization of gene expression data from the SAGE and microarray technology could reduce

  13. Cross-media advertising: brand promotion in an age of media convergence

    NARCIS (Netherlands)

    Voorveld, H.; Smit, E.; Neijens, P.; Diehl, S.; Karmasin, M.

    2013-01-01

    Cross-media advertising, in which more than one medium platform is used to communicate related brand content, has become widespread. Several reasons for cross-media strategies can be distinguished: target group extension, complementary effects, repetition, and synergy. Media synergy—the added value

  14. EEG datasets for motor imagery brain-computer interface.

    Science.gov (United States)

    Cho, Hohyun; Ahn, Minkyu; Ahn, Sangtae; Kwon, Moonyoung; Jun, Sung Chan

    2017-07-01

    Most investigators of brain-computer interface (BCI) research believe that BCI can be achieved through induced neuronal activity from the cortex, but not by evoked neuronal activity. Motor imagery (MI)-based BCI is one of the standard concepts of BCI, in that the user can generate induced activity by imagining motor movements. However, variations in performance over sessions and subjects are too severe to overcome easily; therefore, a basic understanding and investigation of BCI performance variation is necessary to find critical evidence of performance variation. Here we present not only EEG datasets for MI BCI from 52 subjects, but also the results of a psychological and physiological questionnaire, EMG datasets, the locations of 3D EEG electrodes, and EEGs for non-task-related states. We validated our EEG datasets by using the percentage of bad trials, event-related desynchronization/synchronization (ERD/ERS) analysis, and classification analysis. After conventional rejection of bad trials, we showed contralateral ERD and ipsilateral ERS in the somatosensory area, which are well-known patterns of MI. Finally, we showed that 73.08% of datasets (38 subjects) included reasonably discriminative information. Our EEG datasets included the information necessary to determine statistical significance; they consisted of well-discriminated datasets (38 subjects) and less-discriminative datasets. These may provide researchers with opportunities to investigate human factors related to MI BCI performance variation, and may also achieve subject-to-subject transfer by using metadata, including a questionnaire, EEG coordinates, and EEGs for non-task-related states. © The Authors 2017. Published by Oxford University Press.
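The ERD/ERS analysis used above to validate the datasets has a standard form: band power during motor imagery is expressed relative to a resting baseline, with negative values indicating desynchronization (ERD) and positive values synchronization (ERS). A minimal sketch (NumPy with an FFT-based band-power estimate; function names are illustrative, not from the dataset's published scripts):

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Mean spectral power of signal x within the [f_lo, f_hi] Hz band."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

def erd_percent(baseline, event, fs, band=(8, 12)):
    """Classic ERD/ERS index relative to a resting baseline:
    negative = desynchronization (ERD), positive = synchronization (ERS)."""
    p_ref = band_power(baseline, fs, *band)
    p_evt = band_power(event, fs, *band)
    return 100.0 * (p_evt - p_ref) / p_ref
```

Applied per channel, this yields the contralateral-ERD / ipsilateral-ERS pattern over the somatosensory area that the authors use as a sanity check.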

  15. Dataset - Evaluation of Standardized Sample Collection, Packaging, and Decontamination Procedures to Assess Cross-Contamination Potential during Bacillus anthracis Incident Response Operations

    Data.gov (United States)

    U.S. Environmental Protection Agency — Spore recovery data during sample packaging decontamination tests. This dataset is associated with the following publication: Calfee, W., J. Tufts, K. Meyer, K....

  16. Software-Based Wireless Power Transfer Platform for Various Power Control Experiments

    Directory of Open Access Journals (Sweden)

    Sun-Han Hwang

    2015-07-01

    Full Text Available In this paper, we present the design and evaluation of a software-based wireless power transfer platform that enables the development of prototypes involving various open- and closed-loop power control functions. Our platform is based on a loosely coupled planar wireless power transfer circuit that uses a class-E power amplifier. In conjunction with this circuit, we implement flexible control functions using a National Instruments Data Acquisition (NI DAQ) board and algorithms in MATLAB/Simulink. To verify the effectiveness of our platform, we conduct two types of power-control experiments: no-load and metal detection using open-loop power control, and output voltage regulation for different receiver positions using closed-loop power control. The use of MATLAB/Simulink software as part of the planar wireless power transfer platform for power control experiments is shown to serve as a useful and inexpensive alternative to conventional hardware-based platforms.
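The closed-loop voltage regulation described above is, at its core, a PI loop around the measured output. A toy sketch of that idea in plain Python (this is not the authors' NI DAQ/Simulink implementation; the plant model and gains are invented for illustration):

```python
def pi_controller(setpoint, kp, ki, dt):
    """Discrete PI regulator: returns a step function mapping the
    measured output to the next control effort (e.g. amplifier drive)."""
    state = {"integral": 0.0}

    def step(measured):
        error = setpoint - measured
        state["integral"] += error * dt
        return kp * error + ki * state["integral"]

    return step

def simulate(ctrl, steps=200, v0=0.0):
    """Toy first-order plant standing in for the rectified output voltage:
    each step the output moves 20% of the way toward the control effort."""
    v = v0
    for _ in range(steps):
        v += 0.2 * (ctrl(v) - v)
    return v
```

In the paper this role is played by the Simulink model commanding the class-E amplifier; the sketch only illustrates why the integral term removes steady-state error as the receiver position changes.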

  17. A high-resolution European dataset for hydrologic modeling

    Science.gov (United States)

    Ntegeka, Victor; Salamon, Peter; Gomes, Goncalo; Sint, Hadewij; Lorini, Valerio; Thielen, Jutta

    2013-04-01

    There is an increasing demand for large scale hydrological models, not only for modeling the impact of climate change on water resources but also for disaster risk assessments and flood or drought early warning systems. These large scale models need to be calibrated and verified against large amounts of observations in order to judge their capabilities to predict the future. However, the creation of large scale datasets is challenging, for it requires collection, harmonization, and quality checking of large amounts of observations. For this reason, only a limited number of such datasets exist. In this work, we present a pan-European, high-resolution gridded dataset of meteorological observations (EFAS-Meteo) which was designed with the aim to drive a large scale hydrological model. Similar European and global gridded datasets already exist, such as the HadGHCND (Caesar et al., 2006), the JRC MARS-STAT database (van der Goot and Orlandi, 2003) and the E-OBS gridded dataset (Haylock et al., 2008). However, none of those provide similarly high spatial resolution and/or a complete set of variables to force a hydrologic model. EFAS-Meteo contains daily maps of precipitation, surface temperature (mean, minimum and maximum), wind speed and vapour pressure at a spatial grid resolution of 5 x 5 km for the time period 1 January 1990 - 31 December 2011. It furthermore contains radiation, calculated using a staggered approach depending on the availability of sunshine duration, cloud cover and minimum and maximum temperature, as well as evapotranspiration (potential evapotranspiration, bare soil and open water evapotranspiration). The potential evapotranspiration was calculated using the Penman-Monteith equation with the above-mentioned meteorological variables. The dataset was created as part of the development of the European Flood Awareness System (EFAS) and has been continuously updated throughout the last years.
The dataset variables are used as
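For reference, the Penman-Monteith equation cited in the abstract is commonly applied in its FAO-56 form for reference evapotranspiration (the symbols below follow the FAO-56 convention; the record does not specify which exact formulation EFAS-Meteo uses):

```latex
ET_0 = \frac{0.408\,\Delta\,(R_n - G) + \gamma\,\dfrac{900}{T+273}\,u_2\,(e_s - e_a)}
            {\Delta + \gamma\,(1 + 0.34\,u_2)}
```

where ET_0 is the reference evapotranspiration (mm day⁻¹), R_n the net radiation, G the soil heat flux, T the mean daily air temperature at 2 m (°C), u_2 the wind speed at 2 m (m s⁻¹), e_s − e_a the vapour pressure deficit, Δ the slope of the saturation vapour pressure curve, and γ the psychrometric constant.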

  18. Usability of an internet-based platform (Next.Step) for adolescent weight management

    Directory of Open Access Journals (Sweden)

    Pedro Sousa

    2015-01-01

    Conclusion: These results highlight the importance of information and communication technologies in access to health information and in healthcare provision. Despite the limited adherence rate, platform users expressed a positive overall perception of its usability and showed positive anthropometric and behavioral progress.

  19. Simulation of machine-specific topographic indices for use across platforms.

    Science.gov (United States)

    Mahmoud, Ashraf M; Roberts, Cynthia; Lembach, Richard; Herderick, Edward E; McMahon, Timothy T

    2006-09-01

    The objective of this project is to simulate the current published topographic indices used for the detection and evaluation of keratoconus to allow their application to maps acquired from multiple topographic machines. A retrospective analysis was performed on 21 eyes of 14 previously diagnosed keratoconus patients from a single practice using a Tomey TMS-1, an Alcon EyeMap, and a Keratron Topographer. Maps that could not be processed or that contained processing errors were excluded from analysis. Topographic indices native to each of the three devices were recorded from each map. Software was written in ANSI standard C to simulate the indices based on the published formulas and/or descriptions to extend the functionality of The Ohio State University Corneal Topography Tool (OSUCTT), a software package designed to accept the input from many corneal topographic devices and provide consistent display and analysis. Twenty indices were simulated. Linear regression analysis was performed between each simulated index and the corresponding native index. A cross-platform comparison using regression analysis was also performed. All simulated indices were significantly correlated with the corresponding native indices (p simulated. Cross-platform comparisons may be limited for specific indices.
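The per-index comparison described above reduces to fitting each simulated index against its native counterpart by linear regression. A minimal sketch of that step (NumPy only; function and variable names are illustrative, not from the OSUCTT software):

```python
import numpy as np

def regress(native, simulated):
    """Ordinary least-squares fit simulated = a * native + b,
    plus the Pearson correlation coefficient r."""
    a, b = np.polyfit(native, simulated, 1)
    r = np.corrcoef(native, simulated)[0, 1]
    return a, b, r
```

A slope near 1 and intercept near 0 with high r would indicate that a simulated index can stand in for the native one across platforms.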

  20. 2D and 3D virtual interactive laboratories of physics on Unity platform

    Science.gov (United States)

    González, J. D.; Escobar, J. H.; Sánchez, H.; De la Hoz, J.; Beltrán, J. R.

    2017-12-01

    Using the cross-platform game engine Unity, we develop virtual laboratories for PC, consoles, mobile devices and websites as an innovative tool for studying physics. Given the extensive uptake of ICT in science teaching and its impact on learning, and considering the limited availability of laboratories for physics teaching and the difficulties this causes for school students, we design the virtual laboratories to enhance students' knowledge of concepts in physics. To achieve this goal, we use Unity because it supports bump mapping, reflection mapping, parallax mapping, dynamic shadows using shadow maps, full-screen post-processing effects and render-to-texture. Unity can use the best variant for the current video hardware and, if none is compatible, fall back to an alternative shader that may sacrifice features for performance. Control over delivery to mobile devices, web browsers, consoles and desktops is the main reason Unity is the best option among cross-platform engines of its kind. Supported platforms include Android, Apple TV, Linux, iOS, the Nintendo 3DS line, macOS, PlayStation 4, Windows Phone 8 and Wii; Unity also provides an asset server and Nvidia's PhysX physics engine, which is the most relevant Unity tool for our PhysLab.

  1. WLAN Positioning Methods and Supporting Learning Technologies for Mobile Platforms

    Science.gov (United States)

    Melkonyan, Arsen

    2013-01-01

    Location technologies constitute an essential component of systems design for autonomous operations and control. The Global Positioning System (GPS) works well in outdoor areas, but the satellite signals are not strong enough to penetrate inside most indoor environments. As a result, a new strain of indoor positioning technologies that make use of…

  2. [Refusal of care by a HIV-positive adolescent: role of the cross-cultural approach].

    Science.gov (United States)

    Bouaziz, Nora; Titia Rizzi, Alice

    The refusal of treatment is frequent in human immunodeficiency virus-positive adolescents. The clinical history of a teenage girl presenting severe immunodepression secondary to the virus, a depressive disorder and a refusal of treatment, illustrates the benefit of combined paediatric, child psychiatric and cross-cultural care as proposed by the Cochin-Paris Adolescent Centre. Working on the meaning of the refusal was a prerequisite for the construction of a care project forming part of a life project, as the psychopathological work could only begin once somatic care ensuring the patient's protection was in place. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  3. Two-UAV Intersection Localization System Based on the Airborne Optoelectronic Platform.

    Science.gov (United States)

    Bai, Guanbing; Liu, Jinghong; Song, Yueming; Zuo, Yujia

    2017-01-06

    To address the limitation of the existing UAV (unmanned aerial vehicle) photoelectric localization method used for moving objects, this paper proposes an improved two-UAV intersection localization system based on airborne optoelectronic platforms, using the crossed-angle localization method of photoelectric theodolites for reference. This paper introduces the makeup and operating principle of the intersection localization system, creates auxiliary coordinate systems, transforms the LOS (line of sight, from the UAV to the target) vectors into homogeneous coordinates, and establishes a two-UAV intersection localization model. The influence of the positional relationship between the UAVs and the target on localization accuracy has been studied in detail to obtain an ideal measuring position and the optimal localization position, where the optimal intersection angle is 72.6318°. The result shows that, given the optimal position, the localization root mean square (RMS) error will be 25.0235 m when the target is 5 km away from the UAV baselines. The influence of modified adaptive Kalman filtering on the localization results is then analyzed, and an appropriate filtering model is established to reduce the localization RMS error to 15.7983 m. Finally, an outfield experiment was carried out and obtained the optimal results: σ_B = 1.63 × 10⁻⁴ (°), σ_L = 1.35 × 10⁻⁴ (°), σ_H = 15.8 (m), σ_sum = 27.6 (m), where σ_B represents the longitude error, σ_L the latitude error, σ_H the altitude error, and σ_sum the error radius.
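The adaptive Kalman filtering referred to above builds on the standard predict/update recursion. A scalar sketch conveys the core (this is a generic one-dimensional filter, not the paper's modified adaptive variant; q and r are assumed process and measurement noise variances):

```python
def kalman_1d(measurements, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter with random-walk dynamics:
    x_k = x_{k-1} + w (variance q), z_k = x_k + v (variance r)."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                # predict: uncertainty grows by process noise
        k = p / (p + r)       # Kalman gain
        x += k * (z - x)      # update with the measurement innovation
        p *= (1.0 - k)        # posterior variance
        estimates.append(x)
    return estimates
```

Smoothing each localization coordinate this way is what drives the reported RMS error reduction from 25.0 m to 15.8 m; the paper's adaptive version additionally tunes the noise statistics online.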

  4. Would the ‘real’ observed dataset stand up? A critical examination of eight observed gridded climate datasets for China

    International Nuclear Information System (INIS)

    Sun, Qiaohong; Miao, Chiyuan; Duan, Qingyun; Kong, Dongxian; Ye, Aizhong; Di, Zhenhua; Gong, Wei

    2014-01-01

    This research compared and evaluated the spatio-temporal similarities and differences of eight widely used gridded datasets. The datasets include daily precipitation over East Asia (EA), the Climatic Research Unit (CRU) product, the Global Precipitation Climatology Centre (GPCC) product, the University of Delaware (UDEL) product, Precipitation Reconstruction over Land (PREC/L), the Asian Precipitation Highly Resolved Observational (APHRO) product, the Institute of Atmospheric Physics (IAP) dataset from the Chinese Academy of Sciences, and the National Meteorological Information Center dataset from the China Meteorological Administration (CN05). The meteorological variables focus on surface air temperature (SAT) or precipitation (PR) in China. All datasets presented general agreement on the whole spatio-temporal scale, but some differences appeared for specific periods and regions. On a temporal scale, EA shows the highest amount of PR, while APHRO shows the lowest. CRU and UDEL show higher SAT than IAP or CN05. On a spatial scale, the most significant differences occur in western China for PR and SAT. For PR, the difference between EA and CRU is the largest. When compared with CN05, CRU shows higher SAT in the central and southern Northwest river drainage basin, UDEL exhibits higher SAT over the Southwest river drainage system, and IAP has lower SAT in the Tibetan Plateau. The differences in annual mean PR and SAT primarily come from summer and winter, respectively. Finally, potential factors impacting agreement among gridded climate datasets are discussed, including raw data sources, quality control (QC) schemes, orographic correction, and interpolation techniques. The implications and challenges of these results for climate research are also briefly addressed. (paper)

  5. Cloudgene: a graphical execution platform for MapReduce programs on private and public clouds.

    Science.gov (United States)

    Schönherr, Sebastian; Forer, Lukas; Weißensteiner, Hansi; Kronenberg, Florian; Specht, Günther; Kloss-Brandstätter, Anita

    2012-08-13

    The MapReduce framework enables scalable processing and analysis of large datasets by distributing the computational load on connected computer nodes, referred to as a cluster. In Bioinformatics, MapReduce has already been adopted in various scenarios, such as mapping next generation sequencing data to a reference genome, finding SNPs from short read data or matching strings in genotype files. Nevertheless, tasks like installing and maintaining MapReduce on a cluster system, importing data into its distributed file system or executing MapReduce programs require advanced knowledge in computer science and could thus prevent scientists from using currently available and useful software solutions. Here we present Cloudgene, a freely available platform to improve the usability of MapReduce programs in Bioinformatics by providing a graphical user interface for the execution, the import and export of data, and the reproducibility of workflows on in-house (private clouds) and rented clusters (public clouds). The aim of Cloudgene is to build a standardized graphical execution environment for currently available and future MapReduce programs, which can all be integrated by using its plug-in interface. Since Cloudgene can be executed on private clusters, sensitive datasets can be kept in house at all times and data transfer times are therefore minimized. Our results show that MapReduce programs can be integrated into Cloudgene with little effort and without adding any computational overhead to existing programs. This platform gives developers the opportunity to focus on the actual implementation task and provides scientists with a platform that hides the complexity of MapReduce. In addition to MapReduce programs, Cloudgene can also be used to launch predefined systems (e.g. Cloud BioLinux, RStudio) in public clouds. Currently, five different bioinformatic programs using MapReduce and two systems are integrated and have been successfully deployed. Cloudgene is

  6. Cloudgene: A graphical execution platform for MapReduce programs on private and public clouds

    Directory of Open Access Journals (Sweden)

    Schönherr Sebastian

    2012-08-01

    Full Text Available Abstract Background The MapReduce framework enables scalable processing and analysis of large datasets by distributing the computational load on connected computer nodes, referred to as a cluster. In Bioinformatics, MapReduce has already been adopted in various scenarios, such as mapping next generation sequencing data to a reference genome, finding SNPs from short read data or matching strings in genotype files. Nevertheless, tasks like installing and maintaining MapReduce on a cluster system, importing data into its distributed file system or executing MapReduce programs require advanced knowledge in computer science and could thus prevent scientists from using currently available and useful software solutions. Results Here we present Cloudgene, a freely available platform to improve the usability of MapReduce programs in Bioinformatics by providing a graphical user interface for the execution, the import and export of data, and the reproducibility of workflows on in-house (private clouds) and rented clusters (public clouds). The aim of Cloudgene is to build a standardized graphical execution environment for currently available and future MapReduce programs, which can all be integrated by using its plug-in interface. Since Cloudgene can be executed on private clusters, sensitive datasets can be kept in house at all times and data transfer times are therefore minimized. Conclusions Our results show that MapReduce programs can be integrated into Cloudgene with little effort and without adding any computational overhead to existing programs. This platform gives developers the opportunity to focus on the actual implementation task and provides scientists with a platform that hides the complexity of MapReduce. In addition to MapReduce programs, Cloudgene can also be used to launch predefined systems (e.g. Cloud BioLinux, RStudio) in public clouds. Currently, five different bioinformatic programs using MapReduce and two systems are
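The map/shuffle/reduce model that Cloudgene wraps can be illustrated in a few lines. A word-count sketch of the programming model (plain Python standing in for Hadoop; Cloudgene itself only orchestrates such programs, it does not change the model):

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    # mapper: emit (key, value) pairs; here, (word, 1) for word counting
    return [(word, 1) for word in record.split()]

def reduce_phase(key, values):
    # reducer: combine all values observed for one key
    return key, sum(values)

def run_mapreduce(records):
    # shuffle: group mapper output by key, then reduce each group
    groups = defaultdict(list)
    for key, value in chain.from_iterable(map(map_phase, records)):
        groups[key].append(value)
    return dict(reduce_phase(k, vs) for k, vs in groups.items())
```

In a real cluster the map calls run in parallel across nodes and the shuffle moves data over the network; the sequential sketch preserves only the semantics.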

  7. Estimating parameters for probabilistic linkage of privacy-preserved datasets.

    Science.gov (United States)

    Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H

    2017-07-10

    Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20% error. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher
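The Bloom-filter encoding underlying such privacy-preserved linkage is typically built from field bigrams, with similarity between two encrypted fields scored by the Dice coefficient. A sketch of that building block (the hash construction, filter length m and k are illustrative parameters, not the paper's configuration):

```python
import hashlib

def bigrams(s):
    """Padded character bigrams of a field value."""
    s = f"_{s.lower()}_"
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom_encode(field, m=100, k=5):
    """Encode a field's bigrams into an m-bit Bloom filter using k
    double-hashing positions per bigram (Schnell-style building block)."""
    bits = [0] * m
    for g in bigrams(field):
        h1 = int(hashlib.sha1(g.encode()).hexdigest(), 16)
        h2 = int(hashlib.md5(g.encode()).hexdigest(), 16)
        for i in range(k):
            bits[(h1 + i * h2) % m] = 1
    return bits

def dice(a, b):
    """Dice similarity of two bit vectors: 2|A∩B| / (|A| + |B|)."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))
```

Because similar strings share bigrams, and shared bigrams set the same bit positions, similar fields score high even though the raw values are never exchanged; the paper's EM step then estimates match probabilities over such scores.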

  8. Viking Seismometer PDS Archive Dataset

    Science.gov (United States)

    Lorenz, R. D.

    2016-12-01

    The Viking Lander 2 seismometer operated successfully for over 500 Sols on the Martian surface, recording at least one likely candidate Marsquake. The Viking mission, in an era when data handling hardware (both on board and on the ground) was limited in capability, predated modern planetary data archiving, and ad-hoc repositories of the data, and the very low-level record at NSSDC, were neither convenient to process nor well-known. In an effort supported by the NASA Mars Data Analysis Program, we have converted the bulk of the Viking dataset (namely the 49,000 and 270,000 records made in High- and Event- modes at 20 and 1 Hz respectively) into a simple ASCII table format. Additionally, since wind-generated lander motion is a major component of the signal, contemporaneous meteorological data are included in summary records to facilitate correlation. These datasets are being archived at the PDS Geosciences Node. In addition to brief instrument and dataset descriptions, the archive includes code snippets in the freely-available language 'R' to demonstrate plotting and analysis. Further, we present examples of lander-generated noise, associated with the sampler arm, instrument dumps and other mechanical operations.

  9. UniMiB SHAR: A Dataset for Human Activity Recognition Using Acceleration Data from Smartphones

    Directory of Open Access Journals (Sweden)

    Daniela Micucci

    2017-10-01

    Full Text Available Smartphones, smartwatches, fitness trackers, and ad-hoc wearable devices are being increasingly used to monitor human activities. Data acquired by the hosted sensors are usually processed by machine-learning-based algorithms to classify human activities. The success of those algorithms mostly depends on the availability of training (labeled) data that, if made publicly available, would allow researchers to make objective comparisons between techniques. Nowadays, there are only a few publicly available data sets, which often contain samples from subjects with too similar characteristics, and very often lack specific information, so that it is not possible to select subsets of samples according to specific criteria. In this article, we present a new dataset of acceleration samples acquired with an Android smartphone designed for human activity recognition and fall detection. The dataset includes 11,771 samples of both human activities and falls performed by 30 subjects of ages ranging from 18 to 60 years. Samples are divided into 17 fine-grained classes grouped in two coarse-grained classes: one containing samples of 9 types of activities of daily living (ADL) and the other containing samples of 8 types of falls. The dataset has been stored to include all the information useful to select samples according to different criteria, such as the type of ADL performed, the age, the gender, and so on. Finally, the dataset has been benchmarked with four different classifiers and with two different feature vectors. We evaluated four different classification tasks: fall vs. no fall, 9 activities, 8 falls, 17 activities and falls. For each classification task, we performed a 5-fold cross-validation (i.e., including samples from all the subjects in both the training and the test dataset) and a leave-one-subject-out cross-validation (i.e., the test data include the samples of a subject only, and the training data, the samples of all the other subjects). Regarding the
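The leave-one-subject-out protocol described above is easy to mis-implement with generic k-fold utilities, since the whole point is that no samples from the held-out subject leak into training. A minimal sketch of the split logic (NumPy only; the dataset's own benchmark scripts are not reproduced here):

```python
import numpy as np

def loso_splits(subject_ids):
    """Leave-one-subject-out cross-validation: each split holds out
    every sample belonging to one subject and trains on the rest."""
    subject_ids = np.asarray(subject_ids)
    for subject in np.unique(subject_ids):
        test_mask = subject_ids == subject
        yield np.where(~test_mask)[0], np.where(test_mask)[0]
```

Comparing accuracy under 5-fold versus leave-one-subject-out splits, as the authors do, exposes how much a classifier relies on subject-specific signal characteristics.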

  10. Cross-Cultural Adaptation and Validation of the Spanish Version of the Performance Enhancement Attitude Scale (Petróczi, 2002

    Directory of Open Access Journals (Sweden)

    Jaime Morente-Sánchez

    2014-06-01

    Full Text Available The aim of the present study was to cross-culturally adapt and validate the Spanish version of the Performance Enhancement Attitude Scale (PEAS). A cross-sectional multi-sample survey with 17 independent datasets was carried out. Cross-cultural adaptation of the PEAS into Spanish was conducted through forward/backward translations, consensus panels and comparative analyses of known groups to establish evidence for its reliability and validity. Weighted Kappa coefficients with quadratic weighting were used to assess the reliability of each item, with Cronbach's internal consistency coefficients for the overall scale's reliability and Spearman's correlation coefficient for test-retest reliability over a one-week period. Confirmatory factor analysis (CFA) was performed to assess the scale's structure. Differences between self-admitted doping users and non-users were analysed to verify the PEAS' construct validity in 8 datasets. Spearman's correlation coefficient was also used to assess the relationships between the PEAS and self-esteem, self-efficacy and perceived descriptive norm to establish convergent validity. The scale showed satisfactory levels of internal consistency (α = 0.71-0.85), reliability of each item (Kappa values 0.34-0.64) and temporal stability (r = 0.818; p < 0.001). CFA showed acceptable fit (RMSEA < 0.08, mean RMSEA = 0.055; χ2/df < 3, mean χ2/df = 1.89) for all but one sample. As expected, self-admitted doping users showed a more positive attitude toward doping than non-users. A significant and strong negative relationship was found between the PEAS and self-efficacy, with a weak negative correlation with self-esteem and a positive correlation with perceived descriptive norm. The Spanish version of the PEAS showed satisfactory psychometric properties. Considerations for application and improvement are outlined.
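Of the reliability statistics reported, Cronbach's α has the simplest closed form: α = k/(k − 1) · (1 − Σσᵢ² / σ_total²), where σᵢ² are the item variances and σ_total² the variance of the summed scores. A sketch of the computation (NumPy; illustrative only, not the authors' analysis code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)
```

Values in the reported 0.71-0.85 range are conventionally read as acceptable-to-good internal consistency.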

  11. The hackable city : Citymaking in a platform society

    NARCIS (Netherlands)

    de Waal, Martijn; de Lange, Michiel; Bouw, Matthijs

    2017-01-01

    Can computer hacking have positive parallels in the shaping of the built environment? The Hackable City research project was set up with this question in mind, to investigate the potential of digital platforms to open up the citymaking process. Its cofounders Martijn de Waal, Michiel de Lange and

  12. Platform dependence of inference on gene-wise and gene-set involvement in human lung development

    Directory of Open Access Journals (Sweden)

    Kho Alvin T

    2009-06-01

    Full Text Available Abstract Background With the recent development of microarray technologies, the comparability of gene expression data obtained from different platforms poses an important problem. We evaluated two widely used platforms, Affymetrix U133 Plus 2.0 and the Illumina HumanRef-8 v2 Expression Bead Chips, for comparability in a biological system in which changes may be subtle, namely fetal lung tissue as a function of gestational age. Results We performed the comparison via sequence-based probe matching between the two platforms. "Significance grouping" was defined as a measure of comparability. Using both expression correlation and significance grouping as measures of comparability, we demonstrated that despite overall cross-platform differences at the single gene level, increased correlation between the two platforms was found in genes with higher expression level, higher probe overlap, and lower p-value. We also demonstrated that biological function as determined via KEGG pathways or GO categories is more consistent across platforms than single gene analysis. Conclusion We conclude that while the comparability of the platforms at the single gene level may be increased by increasing sample size, they are highly comparable ontologically even for subtle differences in a relatively small sample size. Biologically relevant inference should therefore be reproducible across laboratories using different platforms.

  13. N-grams Based Supervised Machine Learning Model for Mobile Agent Platform Protection against Unknown Malicious Mobile Agents

    Directory of Open Access Journals (Sweden)

    Pallavi Bagga

    2017-12-01

Full Text Available For many years, the detection of unknown malicious mobile agents before they invade the Mobile Agent Platform has been a challenging problem. The ever-growing threat of malicious agents calls for techniques for automated malicious agent detection. In this context, machine learning (ML) methods are acknowledged to be more effective than signature-based and behavior-based detection methods. Therefore, the prime contribution of this paper is the detection of unknown malicious mobile agents based on n-gram features and a supervised ML approach, which has not been done so far in the sphere of Mobile Agent System (MAS) security. To carry out the study, n-grams ranging from 3 to 9 are extracted from a dataset containing 40 malicious and 40 non-malicious mobile agents. Subsequently, classification is performed using different classifiers. A nested 5-fold cross-validation scheme is employed to avoid bias in the selection of the classifier's optimal parameters. The observations from extensive experiments demonstrate that the work done in this paper is suitable for the task of unknown malicious mobile agent detection in a Mobile Agent Environment, and also adds ML to the interest list of researchers dealing with MAS security.
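The n-gram feature extraction step can be sketched as follows (illustrative only; the paper's exact representation of agent code and its classifiers may differ):

```python
# Sketch of byte-level n-gram feature extraction of the kind described
# above (the toy "agent" payload below is invented for illustration).
from collections import Counter

def ngram_features(data: bytes, n: int) -> Counter:
    """Count overlapping byte n-grams of length n."""
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

agent = b"MOVE;READ;MOVE;SEND"
feats = ngram_features(agent, 3)
print(feats[b"MOV"])  # "MOV" occurs twice in this toy agent
```

The resulting counts become the feature vector handed to a supervised classifier, with the n in 3 to 9 treated as a tunable choice.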

  14. Virtual Distances Methodology as Verification Technique for AACMMs with a Capacitive Sensor Based Indexed Metrology Platform.

    Science.gov (United States)

    Acero, Raquel; Santolaria, Jorge; Brau, Agustin; Pueo, Marcos

    2016-11-18

This paper presents a new verification procedure for articulated arm coordinate measuring machines (AACMMs) together with a capacitive sensor-based indexed metrology platform (IMP) based on the generation of virtual reference distances. The novelty of this procedure lies in the possibility of creating virtual points, virtual gauges and virtual distances through the indexed metrology platform's mathematical model, taking as a reference the measurements of a ball bar gauge located in a fixed position of the instrument's working volume. The measurements are carried out with the AACMM assembled on the IMP from the six rotating positions of the platform. In this way, an unlimited number and variety of reference distances can be created without using a physical gauge, thereby optimizing the testing time, the number of gauge positions and the space needed in the calibration and verification procedures. Four evaluation methods are presented to assess the volumetric performance of the AACMM. The results obtained proved the suitability of the virtual distances methodology as an alternative procedure for the verification of AACMMs using the indexed metrology platform.

  15. Homogenised Australian climate datasets used for climate change monitoring

    International Nuclear Information System (INIS)

Trewin, Blair; Jones, David; Collins, Dean; Jovanovic, Branislava; Braganza, Karl

    2007-01-01

Full text: The Australian Bureau of Meteorology has developed a number of datasets for use in climate change monitoring. These datasets typically cover 50-200 stations distributed as evenly as possible over the Australian continent, and have been subject to detailed quality control and homogenisation. The time period over which data are available for each element is largely determined by the availability of data in digital form. Whilst nearly all Australian monthly and daily precipitation data have been digitised, a significant quantity of pre-1957 data (for temperature and evaporation) or pre-1987 data (for some other elements) remains to be digitised, and is not currently available for use in the climate change monitoring datasets. In the case of temperature and evaporation, the start date of the datasets is also determined by major changes in instruments or observing practices for which no adjustment is feasible at the present time. The datasets currently available cover: Monthly and daily precipitation (most stations commence 1915 or earlier, with many extending back to the late 19th century, and a few to the mid-19th century); Annual temperature (commences 1910); Daily temperature (commences 1910, with limited station coverage pre-1957); Twice-daily dewpoint/relative humidity (commences 1957); Monthly pan evaporation (commences 1970); Cloud amount (commences 1957) (Jovanovic et al. 2007). As well as the station-based datasets listed above, an additional dataset being developed for use in climate change monitoring (and other applications) covers tropical cyclones in the Australian region. This is described in more detail in Trewin (2007). The datasets already developed are used in analyses of observed climate change, which are available through the Australian Bureau of Meteorology website (http://www.bom.gov.au/silo/products/cli_chg/). They are also used as a basis for routine climate monitoring, and in the datasets used for the development of seasonal

  16. Accurate 3D Positioning for a Mobile Platform in Non-Line-of-Sight Scenarios Based on IMU/Magnetometer Sensor Fusion.

    Science.gov (United States)

    Hellmers, Hendrik; Kasmi, Zakaria; Norrdine, Abdelmoumen; Eichhorn, Andreas

    2018-01-04

    In recent years, a variety of real-time applications benefit from services provided by localization systems due to the advent of sensing and communication technologies. Since the Global Navigation Satellite System (GNSS) enables localization only outside buildings, applications for indoor positioning and navigation use alternative technologies. Ultra Wide Band Signals (UWB), Wireless Local Area Network (WLAN), ultrasonic or infrared are common examples. However, these technologies suffer from fading and multipath effects caused by objects and materials in the building. In contrast, magnetic fields are able to pass through obstacles without significant propagation errors, i.e. in Non-Line of Sight Scenarios (NLoS). The aim of this work is to propose a novel indoor positioning system based on artificially generated magnetic fields in combination with Inertial Measurement Units (IMUs). In order to reach a better coverage, multiple coils are used as reference points. A basic algorithm for three-dimensional applications is demonstrated as well as evaluated in this article. The established system is then realized by a sensor fusion principle as well as a kinematic motion model on the basis of a Kalman filter. Furthermore, a pressure sensor is used in combination with an adaptive filtering method to reliably estimate the platform's altitude.
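The altitude channel mentioned last can be illustrated with a minimal one-dimensional Kalman filter (a sketch with made-up noise values; the paper's filter fuses magnetic, inertial and pressure measurements with a full kinematic motion model):

```python
# Minimal 1D Kalman filter sketch for a pressure-derived altitude channel
# (illustrative only; all noise variances and readings are invented).
def kalman_1d(z_meas, x0=0.0, p0=1.0, q=0.01, r=0.5):
    """Constant-position model: predict adds process noise q,
    update blends in each measurement with noise variance r."""
    x, p = x0, p0
    estimates = []
    for z in z_meas:
        p = p + q                 # predict: uncertainty grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update toward the measurement
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Noisy altitude readings around a true value of 2.0 m
est = kalman_1d([2.3, 1.8, 2.1, 1.9, 2.2, 2.0])
print(round(est[-1], 2))
```

The estimate settles near the true altitude while smoothing the measurement noise, which is the role the adaptive pressure filtering plays in the described system.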

  17. Introduction of a simple-model-based land surface dataset for Europe

    Science.gov (United States)

    Orth, Rene; Seneviratne, Sonia I.

    2015-04-01

    Land surface hydrology can play a crucial role during extreme events such as droughts, floods and even heat waves. We introduce in this study a new hydrological dataset for Europe that consists of soil moisture, runoff and evapotranspiration (ET). It is derived with a simple water balance model (SWBM) forced with precipitation, temperature and net radiation. The SWBM dataset extends over the period 1984-2013 with a daily time step and 0.5° × 0.5° resolution. We employ a novel calibration approach, in which we consider 300 random parameter sets chosen from an observation-based range. Using several independent validation datasets representing soil moisture (or terrestrial water content), ET and streamflow, we identify the best performing parameter set and hence the new dataset. To illustrate its usefulness, the SWBM dataset is compared against several state-of-the-art datasets (ERA-Interim/Land, MERRA-Land, GLDAS-2-Noah, simulations of the Community Land Model Version 4), using all validation datasets as reference. For soil moisture dynamics it outperforms the benchmarks. Therefore the SWBM soil moisture dataset constitutes a reasonable alternative to sparse measurements, little validated model results, or proxy data such as precipitation indices. Also in terms of runoff the SWBM dataset performs well, whereas the evaluation of the SWBM ET dataset is overall satisfactory, but the dynamics are less well captured for this variable. This highlights the limitations of the dataset, as it is based on a simple model that uses uniform parameter values. Hence some processes impacting ET dynamics may not be captured, and quality issues may occur in regions with complex terrain. Even though the SWBM is well calibrated, it cannot replace more sophisticated models; but as their calibration is a complex task the present dataset may serve as a benchmark in future. In addition we investigate the sources of skill of the SWBM dataset and find that the parameter set has a similar
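A single daily update of a simple water balance model of this general kind might look as follows (functional forms and parameters are invented for illustration; the SWBM calibrates its own formulations against observations):

```python
# Hedged sketch of a daily simple-water-balance update: runoff and ET
# scale with relative soil saturation (all numbers are made up).
def swbm_step(soil, precip, pet, s_max=400.0, gamma=2.0):
    """One daily step: returns updated soil moisture, runoff and ET (mm)."""
    runoff = precip * (soil / s_max) ** gamma   # wetter soil sheds more rain
    et = pet * (soil / s_max)                   # ET limited by available water
    soil = max(0.0, min(s_max, soil + precip - runoff - et))
    return soil, runoff, et

soil, runoff, et = swbm_step(soil=200.0, precip=10.0, pet=4.0)
print(round(soil, 2), round(runoff, 2), round(et, 2))
```

Running such an update with daily precipitation, temperature and net radiation forcing over 1984-2013 is, schematically, how the gridded dataset is produced.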

  18. Data Mining for Imbalanced Datasets: An Overview

    Science.gov (United States)

    Chawla, Nitesh V.

    A dataset is imbalanced if the classification categories are not approximately equally represented. Recent years brought increased interest in applying machine learning techniques to difficult "real-world" problems, many of which are characterized by imbalanced data. Additionally the distribution of the testing data may differ from that of the training data, and the true misclassification costs may be unknown at learning time. Predictive accuracy, a popular choice for evaluating performance of a classifier, might not be appropriate when the data is imbalanced and/or the costs of different errors vary markedly. In this Chapter, we discuss some of the sampling techniques used for balancing the datasets, and the performance measures more appropriate for mining imbalanced datasets.
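Random oversampling, one of the simplest of the sampling techniques the chapter surveys, can be sketched like this (SMOTE and cost-sensitive learning are the more sophisticated alternatives discussed in this literature):

```python
# Sketch of random oversampling: duplicate minority-class samples until
# all classes reach the majority-class size.
import random

def oversample(samples, labels, seed=0):
    """Return a class-balanced copy of (samples, labels)."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_s, out_y = [], []
    for y, group in by_class.items():
        picks = group + [rng.choice(group) for _ in range(target - len(group))]
        out_s.extend(picks)
        out_y.extend([y] * target)
    return out_s, out_y

X, y = [1, 2, 3, 4, 5, 6], [0, 0, 0, 0, 0, 1]  # 5:1 imbalance
Xb, yb = oversample(X, y)
print(yb.count(0), yb.count(1))  # both classes now have 5 samples
```

As the chapter notes, balancing is usually paired with evaluation measures other than plain accuracy, since a classifier can score 83% here by always predicting class 0.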

  19. Context-Aware AAL Services through a 3D Sensor-Based Platform

    Directory of Open Access Journals (Sweden)

    Alessandro Leone

    2013-01-01

Full Text Available The main goal of Ambient Assisted Living solutions is to provide assistive technologies and services in smart environments allowing elderly people to have a high quality of life. Since 3D sensing technologies are increasingly investigated as monitoring solutions able to outperform traditional approaches, this work presents a noninvasive monitoring platform based on 3D sensors, providing a wide-ranging solution suitable for several assisted living scenarios. Detector nodes are managed by low-power embedded PCs in order to process 3D streams and extract postural features related to the person's activities. The level of detail of the features is tuned in accordance with the current context in order to save bandwidth and computational resources. The platform architecture is conceived as a modular system suitable for integration into third-party middleware to provide monitoring functionalities in several scenarios. The event detection capabilities were validated using both synthetic and real datasets collected in controlled and real-home environments. Results show the soundness of the presented solution in adapting to different application requirements, by correctly detecting events related to four relevant AAL services.

  20. SAR image dataset of military ground targets with multiple poses for ATR

    Science.gov (United States)

    Belloni, Carole; Balleri, Alessio; Aouf, Nabil; Merlet, Thomas; Le Caillec, Jean-Marc

    2017-10-01

Automatic Target Recognition (ATR) is the task of automatically detecting and classifying targets. Recognition using Synthetic Aperture Radar (SAR) images is interesting because SAR images can be acquired at night and under any weather conditions, whereas optical sensors operating in the visible band do not have this capability. Existing SAR ATR algorithms have mostly been evaluated using the MSTAR dataset [1]. The problem with MSTAR is that some of the proposed ATR methods have shown good classification performance even when targets were hidden [2], suggesting the presence of a bias in the dataset. Evaluations of SAR ATR techniques are currently challenging due to the lack of publicly available data in the SAR domain. In this paper, we present a high-resolution SAR dataset consisting of images of a set of ground military target models taken at various aspect angles. The dataset can be used for a fair evaluation and comparison of SAR ATR algorithms. We applied the Inverse Synthetic Aperture Radar (ISAR) technique to echoes from targets rotating on a turntable and illuminated with a stepped-frequency waveform. The targets in the database consist of four variants of two 1.7 m long models of T-64 and T-72 tanks. The gun, the turret position and the depression angle are varied to form 26 different sequences of images. The emitted signal spanned the frequency range from 13 GHz to 18 GHz to achieve a bandwidth of 5 GHz sampled with 4001 frequency points. The resolution obtained with respect to the size of the model targets is comparable to typical values obtained using SAR airborne systems. Single-polarized (Horizontal-Horizontal) images are generated using the backprojection algorithm [3]. A total of 1480 images are produced using a 20° integration angle. The images in the dataset are organized into a suggested training and testing set to facilitate a standard evaluation of SAR ATR algorithms.
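The quoted 5 GHz bandwidth fixes the achievable slant-range resolution through the standard radar relation ΔR = c/2B:

```python
# Range resolution implied by the stepped-frequency sweep described above.
c = 299_792_458.0      # speed of light, m/s
B = 5e9                # bandwidth, Hz: 13-18 GHz sweep
dR = c / (2 * B)       # standard relation dR = c / (2B)
print(f"range resolution ~ {dR * 100:.1f} cm")  # ~3.0 cm
```

A resolution on the order of 3 cm against 1.7 m long scale models is what makes the dataset comparable, relative to target size, to typical airborne SAR imagery.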

  1. Design and Evaluation of a Cross-Cultural Training System

    Science.gov (United States)

    Santarelli, Thomas; Stagl, Kevin C.

    2011-01-01

    Cross-cultural competency, and the underlying communication and affective skills required to develop such expertise, is becoming increasingly important for a wide variety of domains. To address this need, we developed a blended learning platform which combines virtual role-play with tutorials, assessment and feedback. A Middle-Eastern Curriculum (MEC) exemplar for cross-cultural training U.S. military personnel was developed to guide the refinement of an existing game-based training platform. To complement this curriculum, we developed scenario authoring tools to enable end-users to define training objectives, link performance measures and feedback/remediation to these objectives, and deploy experiential scenarios within a game-based virtual environment (VE). Lessons learned from the design and development of this exemplar cross-cultural competency curriculum, as well as formative evaluation results, are discussed. Initial findings suggest that the underlying training technology promotes deep levels of semantic processing of the key information of relevant cultural and communication skills.

  2. IMPROVING THE POSITIONING ACCURACY OF TRAIN ON THE APPROACH SECTION TO THE RAILWAY CROSSING

    Directory of Open Access Journals (Sweden)

    V. I. Havryliuk

    2016-02-01

Full Text Available Purpose. The purpose of this paper is to analyze the possibility of improving the positioning accuracy of a train on the approach section to a crossing, for traffic safety control at railway crossings. Methodology. The research was performed using a developed mathematical model describing the dependence of the input impedance of coded and audio-frequency track circuits on the train coordinate, at various values of ballast isolation resistance and for all usable frequencies. Findings. The paper presents the developed mathematical model, describing the dependence of the input impedance of coded and audio-frequency track circuits on the train coordinate at various values of ballast isolation resistance and for all frequencies used in track circuits. The relative error of determining the train coordinate from the input impedance, caused by variation of the ballast isolation resistance, was investigated for coded track circuits. This relative error can reach 40-50 %, which does not allow using the method directly for coded track circuits. For short audio-frequency track circuits at the frequencies of continuous cab signaling (25, 50 Hz), the relative error does not exceed acceptable values, which allows using the examined method for determining train location on the approach section to a railway crossing. Originality. The developed mathematical model allowed determination of the error in the train coordinate obtained from the input impedance of the track circuit, for coded and audio-frequency track circuits at various signal current frequencies and ballast isolation resistances. Practical value. The authors propose a method for determining train location on the approach section to a crossing equipped with audio-frequency track circuits, which combines discrete and continuous monitoring of the train location.
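The idea of reading a train coordinate off the track circuit's input impedance can be illustrated by inverting a monotonic impedance-versus-coordinate model (all values below are invented; the paper derives its model from the track circuit parameters and ballast isolation resistance):

```python
# Hedged sketch: given a monotonic model of input impedance magnitude vs.
# train coordinate, invert a measured impedance by linear interpolation.
coords_m = [0, 200, 400, 600, 800, 1000]       # coordinate along approach section
z_ohm = [0.5, 0.9, 1.4, 2.0, 2.7, 3.5]         # hypothetical |Z_in| model values

def coord_from_impedance(z):
    """Linearly interpolate the coordinate for a measured impedance z."""
    pairs = list(zip(coords_m, z_ohm))
    for (x0, z0), (x1, z1) in zip(pairs, pairs[1:]):
        if z0 <= z <= z1:
            return x0 + (x1 - x0) * (z - z0) / (z1 - z0)
    raise ValueError("impedance outside modelled range")

print(coord_from_impedance(1.7))  # midway between the 400 m and 600 m entries
```

The paper's error analysis amounts to asking how far this inversion drifts when the true impedance curve shifts with ballast isolation resistance.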

  3. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform.

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Tian, Yun

    2018-01-01

The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up processing by approximately 3.4 times on large-scale datasets, which demonstrates the clear advantage of our method. The proposed algorithm in this study demonstrates both better edge detection performance and improved time performance.
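Otsu's method, used above to set the Canny operator's thresholds, picks the grey level that maximises the between-class variance of the image histogram; a pure-Python sketch on a small 4-bit histogram (the histogram values are invented):

```python
# Otsu's threshold selection: scan candidate thresholds t, where class 0
# is grey levels <= t, and keep the t with maximal between-class variance.
def otsu_threshold(hist):
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0, sum0 = 0, 0.0
    best_t, best_var = 0, -1.0
    for t, h in enumerate(hist):
        w0 += h                    # pixels in class 0
        sum0 += t * h              # grey-level mass in class 0
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue               # one class empty: no split to score
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal histogram: dark pixels clustered near level 2, bright near 12
hist = [0, 3, 9, 4, 1, 0, 0, 0, 0, 0, 2, 6, 10, 5, 1, 0]
print(otsu_threshold(hist))
```

The selected threshold falls in the valley between the two modes, which is what makes it a sensible automatic choice for Canny's dual thresholds.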

  4. Overview of the CERES Edition-4 Multilayer Cloud Property Datasets

    Science.gov (United States)

    Chang, F. L.; Minnis, P.; Sun-Mack, S.; Chen, Y.; Smith, R. A.; Brown, R. R.

    2014-12-01

    Knowledge of the cloud vertical distribution is important for understanding the role of clouds on earth's radiation budget and climate change. Since high-level cirrus clouds with low emission temperatures and small optical depths can provide a positive feedback to a climate system and low-level stratus clouds with high emission temperatures and large optical depths can provide a negative feedback effect, the retrieval of multilayer cloud properties using satellite observations, like Terra and Aqua MODIS, is critically important for a variety of cloud and climate applications. For the objective of the Clouds and the Earth's Radiant Energy System (CERES), new algorithms have been developed using Terra and Aqua MODIS data to allow separate retrievals of cirrus and stratus cloud properties when the two dominant cloud types are simultaneously present in a multilayer system. In this paper, we will present an overview of the new CERES Edition-4 multilayer cloud property datasets derived from Terra as well as Aqua. Assessment of the new CERES multilayer cloud datasets will include high-level cirrus and low-level stratus cloud heights, pressures, and temperatures as well as their optical depths, emissivities, and microphysical properties.

  5. Testing the Neutral Theory of Biodiversity with Human Microbiome Datasets.

    Science.gov (United States)

    Li, Lianwei; Ma, Zhanshan Sam

    2016-08-16

The human microbiome project (HMP) has made it possible to test important ecological theories for arguably the most important ecosystem for human health: the human microbiome. The few existing studies have reported conflicting evidence regarding the neutral theory; the present study aims to comprehensively test the neutral theory with extensive HMP datasets covering all five major body sites inhabited by the human microbiome. Utilizing 7437 datasets of bacterial community samples, we discovered that only 49 communities (less than 1%) satisfied the neutral theory, and concluded that human microbial communities are not neutral in general. The 49 positive cases, although only a tiny minority, do demonstrate the existence of neutral processes. We find that the traditional doctrine of microbial biogeography, "Everything is everywhere, but the environment selects", first proposed by Baas-Becking, resolves the apparent contradiction. The first part of the Baas-Becking doctrine states that microbes are not dispersal-limited and therefore are prone to neutrality, and the second part reiterates that the freely dispersed microbes must endure selection by the environment. Therefore, in most cases, it is the host environment that ultimately shapes community assembly and tips the human microbiome toward the niche regime.

  6. A Versatile Microarray Platform for Capturing Rare Cells

    Science.gov (United States)

    Brinkmann, Falko; Hirtz, Michael; Haller, Anna; Gorges, Tobias M.; Vellekoop, Michael J.; Riethdorf, Sabine; Müller, Volkmar; Pantel, Klaus; Fuchs, Harald

    2015-10-01

Analyses of rare events occurring at extremely low frequencies in body fluids are still challenging. We established a versatile microarray-based platform able to capture single target cells from large background populations. As a use case we chose the challenging application of detecting circulating tumor cells (CTCs) - about one cell in a billion normal blood cells. After incubation with an antibody cocktail, targeted cells are extracted on a microarray in a microfluidic chip. The accessibility of our platform allows for subsequent recovery of targets for further analysis. The microarray facilitates exclusion of false positive capture events by co-localization, allowing for detection without fluorescent labelling. Analyzing blood samples from cancer patients with our platform matched and partly exceeded gold-standard performance, demonstrating feasibility for clinical application. The clinical researcher's free choice of antibody cocktail, with no need for altered chip manufacturing or incubation protocols, allows virtually arbitrary targeting of capture species and therefore widespread applications in the biomedical sciences.

  7. Cross-cultural patterns in dynamic ratings of positive and negative natural emotional behaviour.

    Science.gov (United States)

    Sneddon, Ian; McKeown, Gary; McRorie, Margaret; Vukicevic, Tijana

    2011-02-18

    Studies of cross-cultural variations in the perception of emotion have typically compared rates of recognition of static posed stimulus photographs. That research has provided evidence for universality in the recognition of a range of emotions but also for some systematic cross-cultural variation in the interpretation of emotional expression. However, questions remain about how widely such findings can be generalised to real life emotional situations. The present study provides the first evidence that the previously reported interplay between universal and cultural influences extends to ratings of natural, dynamic emotional stimuli. Participants from Northern Ireland, Serbia, Guatemala and Peru used a computer based tool to continuously rate the strength of positive and negative emotion being displayed in twelve short video sequences by people from the United Kingdom engaged in emotional conversations. Generalized additive mixed models were developed to assess the differences in perception of emotion between countries and sexes. Our results indicate that the temporal pattern of ratings is similar across cultures for a range of emotions and social contexts. However, there are systematic differences in intensity ratings between the countries, with participants from Northern Ireland making the most extreme ratings in the majority of the clips. The results indicate that there is strong agreement across cultures in the valence and patterns of ratings of natural emotional situations but that participants from different cultures show systematic variation in the intensity with which they rate emotion. Results are discussed in terms of both 'in-group advantage' and 'display rules' approaches. This study indicates that examples of natural spontaneous emotional behaviour can be used to study cross-cultural variations in the perception of emotion.

  8. Cross-cultural patterns in dynamic ratings of positive and negative natural emotional behaviour.

    Directory of Open Access Journals (Sweden)

    Ian Sneddon

    2011-02-01

Full Text Available Studies of cross-cultural variations in the perception of emotion have typically compared rates of recognition of static posed stimulus photographs. That research has provided evidence for universality in the recognition of a range of emotions but also for some systematic cross-cultural variation in the interpretation of emotional expression. However, questions remain about how widely such findings can be generalised to real life emotional situations. The present study provides the first evidence that the previously reported interplay between universal and cultural influences extends to ratings of natural, dynamic emotional stimuli. Participants from Northern Ireland, Serbia, Guatemala and Peru used a computer based tool to continuously rate the strength of positive and negative emotion being displayed in twelve short video sequences by people from the United Kingdom engaged in emotional conversations. Generalized additive mixed models were developed to assess the differences in perception of emotion between countries and sexes. Our results indicate that the temporal pattern of ratings is similar across cultures for a range of emotions and social contexts. However, there are systematic differences in intensity ratings between the countries, with participants from Northern Ireland making the most extreme ratings in the majority of the clips. The results indicate that there is strong agreement across cultures in the valence and patterns of ratings of natural emotional situations but that participants from different cultures show systematic variation in the intensity with which they rate emotion. Results are discussed in terms of both 'in-group advantage' and 'display rules' approaches. This study indicates that examples of natural spontaneous emotional behaviour can be used to study cross-cultural variations in the perception of emotion.

  9. Comparative and joint analysis of two metagenomic datasets from a biogas fermenter obtained by 454-pyrosequencing.

    Directory of Open Access Journals (Sweden)

    Sebastian Jaenicke

Full Text Available Biogas production from renewable resources is attracting increased attention as an alternative energy source due to the limited availability of traditional fossil fuels. Many countries are promoting the use of alternative energy sources for sustainable energy production. In this study, a metagenome from a production-scale biogas fermenter was analysed employing Roche's GS FLX Titanium technology and compared to a previous dataset obtained from the same community DNA sample that was sequenced on the GS FLX platform. Taxonomic profiling based on 16S rRNA-specific sequences and an Environmental Gene Tag (EGT) analysis employing CARMA demonstrated that both approaches benefit from the longer read lengths obtained on the Titanium platform. Results confirmed Clostridia as the most prevalent taxonomic class, whereas species of the order Methanomicrobiales are dominant among methanogenic Archaea. However, the analyses also identified additional taxa that were missed by the previous study, including members of the genera Streptococcus, Acetivibrio, Garciella, Tissierella, and Gelria, which might also play a role in the fermentation process leading to the formation of methane. Taking advantage of the CARMA feature to correlate taxonomic information of sequences with their assigned functions, it appeared that Firmicutes, followed by Bacteroidetes and Proteobacteria, dominate within the functional context of polysaccharide degradation whereas Methanomicrobiales represent the most abundant taxonomic group responsible for methane production. Clostridia is the most important class involved in the reductive CoA pathway (Wood-Ljungdahl pathway) that is characteristic for acetogenesis. Based on binning of 16S rRNA-specific sequences allocated to the dominant genus Methanoculleus, it could be shown that this genus is represented by several different species. Phylogenetic analysis of these sequences placed them in close proximity to the hydrogenotrophic methanogen

  10. Comparative and Joint Analysis of Two Metagenomic Datasets from a Biogas Fermenter Obtained by 454-Pyrosequencing

    Science.gov (United States)

    Jaenicke, Sebastian; Ander, Christina; Bekel, Thomas; Bisdorf, Regina; Dröge, Marcus; Gartemann, Karl-Heinz; Jünemann, Sebastian; Kaiser, Olaf; Krause, Lutz; Tille, Felix; Zakrzewski, Martha; Pühler, Alfred

    2011-01-01

    Biogas production from renewable resources is attracting increased attention as an alternative energy source due to the limited availability of traditional fossil fuels. Many countries are promoting the use of alternative energy sources for sustainable energy production. In this study, a metagenome from a production-scale biogas fermenter was analysed employing Roche's GS FLX Titanium technology and compared to a previous dataset obtained from the same community DNA sample that was sequenced on the GS FLX platform. Taxonomic profiling based on 16S rRNA-specific sequences and an Environmental Gene Tag (EGT) analysis employing CARMA demonstrated that both approaches benefit from the longer read lengths obtained on the Titanium platform. Results confirmed Clostridia as the most prevalent taxonomic class, whereas species of the order Methanomicrobiales are dominant among methanogenic Archaea. However, the analyses also identified additional taxa that were missed by the previous study, including members of the genera Streptococcus, Acetivibrio, Garciella, Tissierella, and Gelria, which might also play a role in the fermentation process leading to the formation of methane. Taking advantage of the CARMA feature to correlate taxonomic information of sequences with their assigned functions, it appeared that Firmicutes, followed by Bacteroidetes and Proteobacteria, dominate within the functional context of polysaccharide degradation whereas Methanomicrobiales represent the most abundant taxonomic group responsible for methane production. Clostridia is the most important class involved in the reductive CoA pathway (Wood-Ljungdahl pathway) that is characteristic for acetogenesis. Based on binning of 16S rRNA-specific sequences allocated to the dominant genus Methanoculleus, it could be shown that this genus is represented by several different species. Phylogenetic analysis of these sequences placed them in close proximity to the hydrogenotrophic methanogen Methanoculleus

  11. Mobile platform security

    CERN Document Server

    Asokan, N; Dmitrienko, Alexandra

    2013-01-01

    Recently, mobile security has garnered considerable interest in both the research community and industry due to the popularity of smartphones. The current smartphone platforms are open systems that allow application development, also for malicious parties. To protect the mobile device, its user, and other mobile ecosystem stakeholders such as network operators, application execution is controlled by a platform security architecture. This book explores how such mobile platform security architectures work. We present a generic model for mobile platform security architectures: the model illustrat

  12. An open source platform for multi-scale spatially distributed simulations of microbial ecosystems

    Energy Technology Data Exchange (ETDEWEB)

    Segre, Daniel [Boston Univ., MA (United States)

    2014-08-14

    The goal of this project was to develop a tool for facilitating simulation, validation and discovery of multiscale dynamical processes in microbial ecosystems. This led to the development of an open-source software platform for Computation Of Microbial Ecosystems in Time and Space (COMETS). COMETS performs spatially distributed time-dependent flux balance based simulations of microbial metabolism. Our plan involved building the software platform itself, calibrating and testing it through comparison with experimental data, and integrating simulations and experiments to address important open questions on the evolution and dynamics of cross-feeding interactions between microbial species.
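    The core computation COMETS performs in each grid cell, flux balance analysis, reduces to a linear program: maximize a biomass flux subject to steady-state stoichiometry and uptake bounds. A minimal sketch with a hypothetical three-reaction network (an illustration of the technique, not COMETS code):

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis (FBA). Network and bounds are illustrative:
# R1 (uptake) -> A ; R2: A -> B ; R3: B -> biomass
S = np.array([
    [1, -1,  0],   # metabolite A: produced by R1, consumed by R2
    [0,  1, -1],   # metabolite B: produced by R2, consumed by R3
])
c = [0, 0, -1]                           # maximize biomass flux v3 (linprog minimizes)
bounds = [(0, 10), (0, 100), (0, 100)]   # uptake flux capped at 10
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # optimal flux vector; biomass flux equals the uptake cap, 10
```

Because mass balance forces v1 = v2 = v3, the optimum is pinned by the uptake bound, which is exactly how nutrient availability limits simulated growth in each cell.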

  13. Daily precipitation grids for Austria since 1961—development and evaluation of a spatial dataset for hydroclimatic monitoring and modelling

    Science.gov (United States)

    Hiebl, Johann; Frei, Christoph

    2018-04-01

    Spatial precipitation datasets that are long-term consistent, highly resolved and extend over several decades are an increasingly popular basis for modelling and monitoring environmental processes and planning tasks in hydrology, agriculture, energy resources management, etc. Here, we present a grid dataset of daily precipitation for Austria meant to promote such applications. It has a grid spacing of 1 km, extends back till 1961 and is continuously updated. It is constructed with the classical two-tier analysis, involving separate interpolations for mean monthly precipitation and daily relative anomalies. The former was accomplished by kriging with topographic predictors as external drift utilising 1249 stations. The latter is based on angular distance weighting and uses 523 stations. The input station network was kept largely stationary over time to avoid artefacts on long-term consistency. Example cases suggest that the new analysis is at least as plausible as previously existing datasets. Cross-validation and comparison against experimental high-resolution observations (WegenerNet) suggest that the accuracy of the dataset depends on interpretation. Users interpreting grid point values as point estimates must expect systematic overestimates for light and underestimates for heavy precipitation as well as substantial random errors. Grid point estimates are typically within a factor of 1.5 from in situ observations. Interpreting grid point values as area mean values, conditional biases are reduced and the magnitude of random errors is considerably smaller. Together with a similar dataset of temperature, the new dataset (SPARTACUS) is an interesting basis for modelling environmental processes, studying climate change impacts and monitoring the climate of Austria.
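    The daily-anomaly step above relies on angular distance weighting, in which a station's weight combines distance decay with how angularly isolated it is from the other stations, so that a cluster of stations in one direction does not dominate. A minimal sketch of one ADW variant (the decay constant and exact weighting formula are illustrative assumptions, not the SPARTACUS implementation):

```python
import numpy as np

def adw_interpolate(grid_pt, stn_xy, stn_vals, cdd=60.0):
    """Angular distance weighting: distance-decay weights modulated by each
    station's angular isolation. cdd is an assumed decay constant (km)."""
    d = np.linalg.norm(stn_xy - grid_pt, axis=1)
    w_dist = np.exp(-d / cdd)                       # distance decay
    # bearing of each station as seen from the grid point
    theta = np.arctan2(stn_xy[:, 1] - grid_pt[1], stn_xy[:, 0] - grid_pt[0])
    a = np.empty_like(w_dist)
    for k in range(len(theta)):
        others = np.delete(np.arange(len(theta)), k)
        num = np.sum(w_dist[others] * (1 - np.cos(theta[k] - theta[others])))
        a[k] = 1 + num / np.sum(w_dist[others])     # boost isolated stations
    w = w_dist * a
    return np.sum(w * stn_vals) / np.sum(w)

# symmetric station layout: all weights equal, so the result is the mean
stations = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
est = adw_interpolate(np.array([0.0, 0.0]), stations, np.array([1.0, 2.0, 3.0, 4.0]))
print(est)  # 2.5
```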

  14. A hybrid organic-inorganic perovskite dataset

    Science.gov (United States)

    Kim, Chiho; Huan, Tran Doan; Krishnan, Sridevi; Ramprasad, Rampi

    2017-05-01

    Hybrid organic-inorganic perovskites (HOIPs) have been attracting a great deal of attention due to their versatility of electronic properties and fabrication methods. We prepare a dataset of 1,346 HOIPs, which features 16 organic cations, 3 group-IV cations and 4 halide anions. Using a combination of an atomic structure search method and density functional theory calculations, the optimized structures, the bandgap, the dielectric constant, and the relative energies of the HOIPs are uniformly prepared and validated by comparing with relevant experimental and/or theoretical data. We make the dataset available at Dryad Digital Repository, NoMaD Repository, and Khazana Repository (http://khazana.uconn.edu/), hoping that it could be useful for future data-mining efforts that can explore possible structure-property relationships and phenomenological models. Progressive extension of the dataset is expected as new organic cations become appropriate within the HOIP framework, and as additional properties are calculated for the new compounds found.

  15. Dataset of aqueous humor cytokine profile in HIV patients with Cytomegalovirus (CMV) retinitis

    Directory of Open Access Journals (Sweden)

    Jayant Venkatramani Iyer

    2016-09-01

    The data show the aqueous humor cytokine profiling results acquired in a small cohort of 17 HIV patients clinically diagnosed with cytomegalovirus (CMV) retinitis, obtained on the FlexMAP 3D (Luminex®) platform using the Milliplex Human Cytokine® kit. Aqueous humor samples were collected from these patients at different time points (pre-treatment and at 4-weekly intervals) through the 12-week course of intravitreal ganciclovir treatment, and 41 cytokine levels were analyzed at each time point. CMV DNA viral load was assessed in 8 patients at different time points throughout the course of ganciclovir treatment. The data described herein are related to the research article entitled “Aqueous humor immune factors and cytomegalovirus (CMV) levels in CMV retinitis through treatment - The CRIGSS study” (Iyer et al., 2016 [1]). Cytokine levels were analyzed against the different time points, which indicate the response to the given treatment, and against the CMV viral load. Keywords: Cytokines, CMV retinitis, Dataset, HIV, Luminex bead assay

  16. IPCC Socio-Economic Baseline Dataset

    Data.gov (United States)

    National Aeronautics and Space Administration — The Intergovernmental Panel on Climate Change (IPCC) Socio-Economic Baseline Dataset consists of population, human development, economic, water resources, land...

  17. Simulation of Multi-Platform Geolocation Using a Hybrid TDOA/AOA Method

    National Research Council Canada - National Science Library

    Du, Huai-Jing; Lee, Jim P

    2004-01-01

    ...) geolocation of radar emitters. A mathematical model is developed by combining sensor information such as AOA measurements, TDOA measurements and sensor position information from all platforms. A least-squares (LS...
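    The least-squares step can be illustrated with the AOA part alone: each platform's bearing measurement defines a line of bearing through the emitter, and stacking these linear constraints gives an overdetermined system solved in the least-squares sense (a simplified sketch omitting the TDOA terms and measurement noise):

```python
import numpy as np

# Three platforms at known positions measure bearings to one emitter.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
emitter = np.array([5.0, 5.0])                      # ground truth for the demo
theta = np.arctan2(emitter[1] - sensors[:, 1],
                   emitter[0] - sensors[:, 0])      # noise-free bearings
# Line of bearing i: -sin(t_i)*x + cos(t_i)*y = -sin(t_i)*x_i + cos(t_i)*y_i
A = np.column_stack([-np.sin(theta), np.cos(theta)])
b = -np.sin(theta) * sensors[:, 0] + np.cos(theta) * sensors[:, 1]
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(est)  # ≈ [5. 5.]
```

With noisy bearings (and added TDOA rows), the same stacked system is solved in exactly the same way; the extra rows simply over-determine the fix further.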

  18. The LANDFIRE Refresh strategy: updating the national dataset

    Science.gov (United States)

    Nelson, Kurtis J.; Connot, Joel A.; Peterson, Birgit E.; Martin, Charley

    2013-01-01

    The LANDFIRE Program provides comprehensive vegetation and fuel datasets for the entire United States. As with many large-scale ecological datasets, vegetation and landscape conditions must be updated periodically to account for disturbances, growth, and natural succession. The LANDFIRE Refresh effort was the first attempt to consistently update these products nationwide. It incorporated a combination of specific systematic improvements to the original LANDFIRE National data, remote sensing based disturbance detection methods, field collected disturbance information, vegetation growth and succession modeling, and vegetation transition processes. This resulted in the creation of two complete datasets for all 50 states: LANDFIRE Refresh 2001, which includes the systematic improvements, and LANDFIRE Refresh 2008, which includes the disturbance and succession updates to the vegetation and fuel data. The new datasets are comparable for studying landscape changes in vegetation type and structure over a decadal period, and provide the most recent characterization of fuel conditions across the country. The applicability of the new layers is discussed and the effects of using the new fuel datasets are demonstrated through a fire behavior modeling exercise using the 2011 Wallow Fire in eastern Arizona as an example.

  19. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility.

    Science.gov (United States)

    Jaschob, Daniel; Riffle, Michael

    2012-07-30

    Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
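    The client-driven pattern described above can be sketched with an in-process queue standing in for the JobCenter server (a hypothetical simplification, not JobCenter's actual protocol): workers pull jobs whenever they are free, which yields the inherent load balancing mentioned and lets workers sit behind firewalls, since no inbound connection is ever needed.

```python
import queue
import threading

jobs = queue.Queue()   # stands in for the server's job table
results = {}

def worker():
    while True:
        try:
            job_id, task = jobs.get(timeout=0.2)    # "poll the server for work"
        except queue.Empty:
            return                                  # no work left: idle out
        results[job_id] = task()                    # run the job locally
        jobs.task_done()                            # "report completion"

for i in range(5):
    jobs.put((i, lambda i=i: i * i))
threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(results.items()))  # [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16)]
```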

  20. Single top-quark production cross section measurements using the ATLAS detector at the LHC

    CERN Document Server

    Rieck, Patrick; The ATLAS collaboration

    2016-01-01

    Measurements of single top-quark production in proton-proton collisions are presented. The measurements include the first such measurements from the 13 TeV ATLAS dataset. In the leading-order process, a W boson is exchanged in the t-channel. The single top-quark and anti-top-quark total production cross-sections, their ratio, and the inclusive production cross-section are presented. At 8 TeV, differential cross-section measurements of the t-channel process are also presented; these measurements include limits on anomalous contributions to the Wtb vertex. A measurement of the production cross-section of a single top quark in association with a W boson, the second-largest single-top production mode, is also presented. Finally, evidence for single-top production in the 8 TeV ATLAS dataset is presented. All measurements are compared to state-of-the-art theoretical calculations.

  1. Nanoparticle-organic pollutant interaction dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dataset presents concentrations of organic pollutants, such as polyaromatic hydrocarbon compounds, in water samples. Water samples of known volume and concentration...

  2. Platform-based production development

    DEFF Research Database (Denmark)

    Bossen, Jacob; Brunoe, Thomas Ditlev; Nielsen, Kjeld

    2015-01-01

    Platforms as a means for applying modular thinking in product development is relatively well studied, but platforms in the production system has until now not been given much attention. With the emerging concept of platform-based co-development the importance of production platforms is though...

  3. Framework for Interactive Parallel Dataset Analysis on the Grid

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, David A.; Ananthan, Balamurali; /Tech-X Corp.; Johnson, Tony; Serbo, Victor; /SLAC

    2007-01-10

    We present a framework for use at a typical Grid site to facilitate custom interactive parallel dataset analysis targeting terabyte-scale datasets of the type typically produced by large multi-institutional science experiments. We summarize the needs for interactive analysis and show a prototype solution that satisfies those needs. The solution consists of a desktop client tool and a set of Web Services that allow scientists to sign onto a Grid site, compose analysis script code to carry out physics analysis on datasets, distribute the code and datasets to worker nodes, collect the results back to the client, and construct professional-quality visualizations of the results.

  4. Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications

    Science.gov (United States)

    Maskey, M.; Ramachandran, R.; Miller, J.

    2017-12-01

    Deep learning has revolutionized computer vision and natural language processing with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences for creating large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep learning based applications.

  5. An Affinity Propagation Clustering Algorithm for Mixed Numeric and Categorical Datasets

    Directory of Open Access Journals (Sweden)

    Kang Zhang

    2014-01-01

    Clustering has been widely used in different fields of science, technology, social science, and so forth. In the real world, numeric as well as categorical features are usually used to describe data objects. Accordingly, many clustering methods can process datasets that are either numeric or categorical. Recently, algorithms that can handle mixed data clustering problems have been developed. The affinity propagation (AP) algorithm is an exemplar-based clustering method which has demonstrated good performance on a wide variety of datasets. However, it has limitations on processing mixed datasets. In this paper, we propose a novel similarity measure for mixed-type datasets and an adaptive AP clustering algorithm to cluster them. Several real-world datasets are studied to evaluate the performance of the proposed algorithm. Comparisons with other clustering algorithms demonstrate that the proposed method works well not only on mixed datasets but also on pure numeric and categorical datasets.
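    One plausible form of such a mixed-type similarity (an illustration, not the paper's exact measure) adds a negative squared Euclidean term on the numeric features to a simple matching term on the categorical ones:

```python
import numpy as np

def mixed_similarity(num, cat, w=1.0):
    """Pairwise similarity for mixed data: negative squared Euclidean distance
    on numeric features minus w times the count of categorical mismatches.
    The weight w is an assumed balancing parameter."""
    n = len(num)
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d_num = np.sum((num[i] - num[j]) ** 2)
            d_cat = np.sum(cat[i] != cat[j])        # categorical mismatches
            S[i, j] = -(d_num + w * d_cat)
    return S

num = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1]])
cat = np.array([["red", "a"], ["red", "a"], ["blue", "b"]])
S = mixed_similarity(num, cat)   # objects 0 and 1 are far more similar than 0 and 2
```

A matrix like S can then be fed to an exemplar-based clusterer that accepts precomputed similarities, e.g. scikit-learn's AffinityPropagation with affinity="precomputed".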

  6. Omnidirectional holonomic platforms

    International Nuclear Information System (INIS)

    Pin, F.G.; Killough, S.M.

    1994-01-01

    This paper presents the concepts for a new family of wheeled platforms which feature full omnidirectionality with simultaneous and independently controlled rotational and translational motion capabilities. The authors first present the orthogonal-wheels concept and the two major wheel assemblies on which these platforms are based. They then describe how a combination of these assemblies with appropriate control can be used to generate an omnidirectional capability for mobile robot platforms. The design and control of two prototype platforms are then presented and their respective characteristics with respect to rotational and translational motion control are discussed

  7. Platform decommissioning costs

    International Nuclear Information System (INIS)

    Rodger, David

    1998-01-01

    There are over 6500 platforms worldwide contributing to the offshore oil and gas production industry. In the North Sea there are around 500 platforms in place. There are many factors to be considered in planning for platform decommissioning and the evaluation of options for removal and disposal. The environmental impact, technical feasibility, safety and cost factors all have to be considered. This presentation considers what information is available about the overall decommissioning costs for the North Sea and the costs of different removal and disposal options for individual platforms. 2 figs., 1 tab

  8. Performance evaluation of tile-based Fisher Ratio analysis using a benchmark yeast metabolome dataset.

    Science.gov (United States)

    Watson, Nathanial E; Parsons, Brendon A; Synovec, Robert E

    2016-08-12

    Performance of tile-based Fisher Ratio (F-ratio) data analysis, recently developed for discovery-based studies using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS), is evaluated with a metabolomics dataset that had been previously analyzed in great detail, but while taking a brute force approach. The previously analyzed data (referred to herein as the benchmark dataset) were intracellular extracts from Saccharomyces cerevisiae (yeast), either metabolizing glucose (repressed) or ethanol (derepressed), which define the two classes in the discovery-based analysis to find metabolites that are statistically different in concentration between the two classes. Beneficially, this previously analyzed dataset provides a concrete means to validate the tile-based F-ratio software. Herein, we demonstrate and validate the significant benefits of applying tile-based F-ratio analysis. The yeast metabolomics data are analyzed more rapidly in about one week versus one year for the prior studies with this dataset. Furthermore, a null distribution analysis is implemented to statistically determine an adequate F-ratio threshold, whereby the variables with F-ratio values below the threshold can be ignored as not class distinguishing, which provides the analyst with confidence when analyzing the hit table. Forty-six of the fifty-four benchmarked changing metabolites were discovered by the new methodology while consistently excluding all but one of the benchmarked nineteen false positive metabolites previously identified. Copyright © 2016 Elsevier B.V. All rights reserved.
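    The F-ratio and the permutation-based null threshold can be sketched for a plain two-class, per-variable case (without the chromatographic tiling; the data and permutation count are synthetic assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def fisher_ratio(x, y):
    """One-way ANOVA F: between-class variance over within-class variance."""
    g = [x[y == c] for c in np.unique(y)]
    grand = x.mean()
    between = sum(len(gi) * (gi.mean() - grand) ** 2 for gi in g) / (len(g) - 1)
    within = sum(((gi - gi.mean()) ** 2).sum() for gi in g) / (len(x) - len(g))
    return between / within

# two classes, 10 samples each; variable 0 differs between classes
X = rng.normal(size=(20, 2))
X[10:, 0] += 3.0
y = np.repeat([0, 1], 10)

f_obs = np.array([fisher_ratio(X[:, v], y) for v in range(2)])
# null distribution: recompute F-ratios under permuted class labels
null = np.array([[fisher_ratio(X[:, v], rng.permutation(y)) for v in range(2)]
                 for _ in range(500)])
threshold = np.percentile(null, 95)      # F-ratios below this are not class-distinguishing
hits = np.where(f_obs > threshold)[0]    # indices of flagged variables
```

Variables whose observed F-ratio falls below the permutation threshold are ignored, which is what lets the analyst trust the hit table without inspecting every variable.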

  9. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    Science.gov (United States)

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  10. Using Multiple Big Datasets and Machine Learning to Produce a New Global Particulate Dataset: A Technology Challenge Case Study

    Science.gov (United States)

    Lary, D. J.

    2013-12-01

    A BigData case study is described where multiple datasets from several satellites, high-resolution global meteorological data, social media and in-situ observations are combined using machine learning on a distributed cluster using an automated workflow. The global particulate dataset is relevant to global public health studies and would not be possible to produce without the use of the multiple big datasets, in-situ data and machine learning. To greatly reduce the development time and enhance the functionality, a high-level language capable of parallel processing has been used (Matlab). Key considerations for the system are high-speed access due to the large data volume, persistence of the large data volumes, and a precise process-time scheduling capability.
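    The core estimation step, learning a mapping from co-located predictor datasets to in-situ particulate observations and then applying it everywhere, can be sketched with synthetic data and a plain least-squares fit standing in for the machine-learning model (variable names and coefficients are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic co-located training data: satellite aerosol optical depth (AOD)
# and a meteorological field as predictors, gauge PM2.5 as the target.
n = 1000
aod = rng.uniform(0.0, 1.0, n)                          # satellite product
rh = rng.uniform(20.0, 90.0, n)                         # relative humidity
pm25 = 40.0 * aod + 0.2 * rh + rng.normal(0, 1.0, n)    # in-situ "truth"

# Fit the mapping on co-located samples, then it can be applied at every
# grid point where only the satellite and meteorological fields exist.
A = np.column_stack([aod, rh, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, pm25, rcond=None)
pred = A @ coef
rmse = np.sqrt(np.mean((pred - pm25) ** 2))
```

In the real workflow a nonlinear learner replaces the linear fit, but the train-on-colocations, predict-globally structure is the same.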

  11. Chemical product and function dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Merged product weight fraction and chemical function data. This dataset is associated with the following publication: Isaacs , K., M. Goldsmith, P. Egeghy , K....

  12. Product Platform Replacements

    DEFF Research Database (Denmark)

    Sköld, Martin; Karlsson, Christer

    2012-01-01

    . To shed light on this unexplored and growing managerial concern, the purpose of this explorative study is to identify operational challenges to management when product platforms are replaced. Design/methodology/approach – The study uses a longitudinal field-study approach. Two companies, Gamma and Omega...... replacement was chosen in each company. Findings – The study shows that platform replacements primarily challenge managers' existing knowledge about platform architectures. A distinction can be made between “width” and “height” in platform replacements, and it is crucial that managers observe this in order...... to challenge their existing knowledge about platform architectures. Issues on technologies, architectures, components and processes as well as on segments, applications and functions are identified. Practical implications – Practical implications are summarized and discussed in relation to a framework...

  13. Tissue viability monitoring: a multi-sensor wearable platform approach

    Science.gov (United States)

    Mathur, Neha; Davidson, Alan; Buis, Arjan; Glesk, Ivan

    2016-12-01

    Health services worldwide are seeking ways to improve patient care for amputees suffering from diabetes, and at the same time reduce costs. The monitoring of residual limb temperature, interface pressure and gait can be a useful indicator of tissue viability in lower limb amputees especially to predict the occurrence of pressure ulcers. This is further exacerbated by elevated temperatures and humid micro environment within the prosthesis which encourages the growth of bacteria and skin breakdown. Wearable systems for prosthetic users have to be designed such that the sensors are minimally obtrusive and reliable enough to faithfully record movement and physiological signals. A mobile sensor platform has been developed for use with the lower limb prosthetic users. This system uses an Arduino board that includes sensors for temperature, gait, orientation and pressure measurements. The platform transmits sensor data to a central health authority database server infrastructure through the Bluetooth protocol at a suitable sampling rate. The data-sets recorded using these systems are then processed using machine learning algorithms to extract clinically relevant information from the data. Where a sensor threshold is reached a warning signal can be sent wirelessly together with the relevant data to the patient and appropriate medical personnel. This knowledge is also useful in establishing biomarkers related to a possible deterioration in a patient's health or for assessing the impact of clinical interventions.
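    The threshold-based warning rule described above can be sketched as follows (the sensor names and clinical thresholds are illustrative assumptions, not values from the study):

```python
from dataclasses import dataclass

# Assumed per-sensor alert limits; real limits would be clinically derived.
THRESHOLDS = {"temp_C": 37.5, "pressure_kPa": 32.0}

@dataclass
class Alert:
    sensor: str
    value: float
    limit: float

def check(sample: dict) -> list[Alert]:
    """Return an alert for every monitored reading above its threshold."""
    return [Alert(k, v, THRESHOLDS[k])
            for k, v in sample.items()
            if k in THRESHOLDS and v > THRESHOLDS[k]]

# one sampling instant from the wearable platform
alerts = check({"temp_C": 38.1, "pressure_kPa": 30.0, "gait_step_s": 1.1})
```

Each Alert record is what would be forwarded wirelessly, together with the raw data, to the patient and the appropriate medical personnel.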

  14. Impact of network aided platforms as educational tools on academic performance and attitude of pharmacology students.

    Science.gov (United States)

    Khan, Aftab Ahmed; Siddiqui, Adel Zia; Mohsin, Syed Fareed; Momani, Mohammed Mahmoud Al; Mirza, Eraj Humayun

    2017-01-01

    This cross-sectional study aimed to examine the impact of a learning management system and the WhatsApp application as educational tools on students' academic achievement and attitude. The sample population was the students of six medical colleges of Riyadh, Saudi Arabia attending Medical Pharmacology's semester course in the Bachelor of Medicine, Bachelor of Surgery (MBBS) program from September 2016 to January 2017. An exploratory approach was adopted based on a comparison between students exposed to only in-class lectures (Group-N), in-class lectures together with a WhatsApp platform to disseminate the lecture slides (Group-W), and a student group with in-class lectures blended with a Learning Management System (LMS) and the WhatsApp platform (Group-WL). The students' grades were assessed using unified multiple choice questions at the end of the semester. Data were analyzed using descriptive statistics and Pearson correlation. The WhatsApp messenger tool showed a significant positive correlation with improvement in students' grades. Additionally, use of WhatsApp enhanced students' in-class attendance, though statistically insignificantly. The results are pivotal for a paradigm shift from in-class lectures and discussion to mobile learning (M-learning). M-learning through WhatsApp may serve as an alternative, innovative, and collaborative tool in achieving the required goals in medical education.

  15. Quantifying uncertainty in observational rainfall datasets

    Science.gov (United States)

    Lennard, Chris; Dosio, Alessandro; Nikulin, Grigory; Pinto, Izidine; Seid, Hussen

    2015-04-01

    The CO-ordinated Regional Downscaling Experiment (CORDEX) has to date seen the publication of at least ten journal papers that examine the African domain during 2012 and 2013. Five of these papers consider Africa generally (Nikulin et al. 2012, Kim et al. 2013, Hernandes-Dias et al. 2013, Laprise et al. 2013, Panitz et al. 2013) and five have regional foci: Tramblay et al. (2013) on Northern Africa, Mariotti et al. (2014) and Gbobaniyi el al. (2013) on West Africa, Endris et al. (2013) on East Africa and Kalagnoumou et al. (2013) on southern Africa. There are also a further three papers that the authors know about under review. These papers all use observed rainfall and/or temperature data to evaluate/validate the regional model output and often proceed to assess projected changes in these variables due to climate change in the context of these observations. The most popular reference rainfall data used are the CRU, GPCP, GPCC, TRMM and UDEL datasets. However, as Kalagnoumou et al. (2013) point out, there are many other rainfall datasets available for consideration, for example, CMORPH, FEWS, TAMSAT & RIANNAA, TAMORA and the WATCH & WATCH-DEI data. They, with others (Nikulin et al. 2012, Sylla et al. 2012), show that the observed datasets can have a very wide spread at a particular space-time coordinate. As more ground, space and reanalysis-based rainfall products become available, all of which use different methods to produce precipitation data, the selection of reference data is becoming an important factor in model evaluation. A number of factors can contribute to uncertainty in the reliability and validity of the datasets, such as radiance conversion algorithms, the quantity and quality of available station data, interpolation techniques and the blending methods used to combine satellite- and gauge-based products. However, to date no comprehensive study has been performed to evaluate the uncertainty in these observational datasets. We assess 18 gridded
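    A simple first step toward quantifying such inter-product uncertainty is to stack the gridded datasets and compute per-gridpoint disagreement statistics; here random fields stand in for the real products (CRU, GPCP, TRMM, and so on), so the shapes and numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Five observational products on a common 4 x 6 grid (values in mm/day).
n_datasets, ny, nx = 5, 4, 6
fields = rng.gamma(2.0, 2.0, size=(n_datasets, ny, nx))

# Per-gridpoint disagreement across products:
spread = fields.max(axis=0) - fields.min(axis=0)   # inter-product range
cv = fields.std(axis=0) / fields.mean(axis=0)      # relative disagreement
```

Maps of spread or cv immediately show where the choice of reference dataset matters most for model evaluation.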

  16. Robotic vehicle with multiple tracked mobility platforms

    Science.gov (United States)

    Salton, Jonathan R [Albuquerque, NM; Buttz, James H [Albuquerque, NM; Garretson, Justin [Albuquerque, NM; Hayward, David R [Wetmore, CO; Hobart, Clinton G [Albuquerque, NM; Deuel, Jr., Jamieson K.

    2012-07-24

    A robotic vehicle having two or more tracked mobility platforms that are mechanically linked together with a two-dimensional coupling, thereby forming a composite vehicle of increased mobility. The robotic vehicle is operative in hazardous environments and can be capable of semi-submersible operation. The robotic vehicle is capable of remote controlled operation via radio frequency and/or fiber optic communication link to a remote operator control unit. The tracks have a plurality of track-edge scallop cut-outs that allow the tracks to easily grab onto and roll across railroad tracks, especially when crossing the railroad tracks at an oblique angle.

  17. A mini-UAV VTOL Platform for Surveying Applications

    Directory of Open Access Journals (Sweden)

    Kuldeep Rawat

    2014-05-01

    In this paper we discuss the implementation of a mini-Unmanned Aerial Vehicle (UAV) vertical take-off and landing (VTOL) platform for surveying activities related to highway construction. Recent advances in sensor and communication technologies have allowed the sizes of unmanned aerial platforms to be scaled down and have made it possible to explore them for tasks that are economical and safe over populated or inhabited areas. In highway construction, the capability of mini-UAVs to survey in hostile and/or hardly accessible areas can greatly reduce human risks. The project focused on developing a cost-effective, remotely controlled, fuel-powered mini-UAV VTOL (helicopter) platform with a certain payload capacity and configuration, and demonstrated its use in surveying and monitoring activities required for highway planning and construction. With an on-board flight recorder, global positioning system (GPS) device, memory storage card, telemetry, inertial navigation sensors, and a video camera, the mini-UAV can record flying coordinates and relay live video images to a remote ground receiver and surveyor. After all necessary integration and flight tests were done, the mini-UAV helicopter was tested to operate and relay video from areas where construction was underway. The mini-UAV can provide a platform for a range of sensors and instruments that directly support the operational requirements of the transportation sector.

  18. Single Top quark production cross-section measurements using the ATLAS detector at the LHC

    CERN Document Server

    Jimenez Pena, Javier; The ATLAS collaboration

    2018-01-01

    Measurements of single top-quark production in proton-proton collisions are presented, based on the 13 TeV and 8 TeV ATLAS datasets. In the leading-order process, a W boson is exchanged in the t-channel. The cross-sections for the production of single top quarks and single antitop quarks, their ratio, as well as differential cross-section measurements, are reported. Measurements of the inclusive and differential cross-sections for the production of a single top quark in association with a W boson, the second-largest single-top production mode, are also presented. Evidence for s-channel single top-quark production in the 8 TeV dataset is presented. Finally, the first measurement of tZq electroweak production is presented. All measurements are compared to state-of-the-art theoretical calculations. (On behalf of the ATLAS collaboration)

  19. Oceanographic profile plankton, temperature, salinity and other measurements collected using bottle from VICTORIA 1 (FISHING BOAT), ALEJERO HUMBOLDT and other platforms in the South Pacific, Coastal S Pacific and other locations from 1980 to 1982 (NODC Accession 0002083)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Temperature, salinity and other measurements found in dataset OSD taken from the VICTORIA 1 (FISHING BOAT), ALEJERO HUMBOLDT and other platforms in the South...

  20. Oil Fields, Oil and gas production platforms are potential source for oil spills and may interfere with mechanical means to clean up oil spills., Published in 1998, 1:24000 (1in=2000ft) scale, Louisiana State University (LSU).

    Data.gov (United States)

    NSGIC Education | GIS Inventory — Oil Fields dataset current as of 1998. Oil and gas production platforms are potential source for oil spills and may interfere with mechanical means to clean up oil...