WorldWideScience

Sample records for open-source cross-platform multi-modal

  1. DataViewer3D: An open-source, cross-platform multi-modal neuroimaging data visualization tool

    Directory of Open Access Journals (Sweden)

    Andre D Gouws

    2009-03-01

    Full Text Available Integration and display of results from multiple neuroimaging modalities (e.g. MRI, MEG, EEG) rely on the display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to the simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for 2D and 3D rendering, calling VTK's low-level C++ functions from Python. Users interact with data via an intuitive interface that uses Python to bind wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE™ and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and the ease of use of Python allow rapid integration of additional format support and user development. DV3D has been tested on Mac OS X, RedHat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data.
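
    The Python/VTK pattern described above can be illustrated with a minimal sketch (assuming only that the `vtk` Python package is installed; the file name is a placeholder, and DV3D itself wraps this kind of pipeline in a wxWidgets interface):

```python
# Minimal sketch of the Python/VTK pattern DV3D builds on: load a NIfTI
# volume and display one axial slice. The file path is a placeholder.
import vtk

reader = vtk.vtkNIFTIImageReader()
reader.SetFileName("anatomy.nii")           # hypothetical input volume
reader.Update()

viewer = vtk.vtkImageViewer2()              # simple 2D slice viewer
viewer.SetInputConnection(reader.GetOutputPort())
viewer.SetSliceOrientationToXY()
viewer.SetSlice(viewer.GetSliceMax() // 2)  # middle axial slice

interactor = vtk.vtkRenderWindowInteractor()
viewer.SetupInteractor(interactor)
viewer.Render()
interactor.Start()
```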

  2. A new, open-source, multi-modality digital breast phantom

    Science.gov (United States)

    Graff, Christian G.

    2016-03-01

    An anthropomorphic digital breast phantom has been developed with the goal of generating random voxelized breast models that capture the anatomic variability observed in vivo. This is a new phantom and is not based on existing digital breast phantoms or segmentation of patient images. It has been designed at the outset to be modality agnostic (i.e., suitable for use in modeling x-ray based imaging systems, magnetic resonance imaging, and potentially other imaging systems) and open source so that users may freely modify the phantom to suit a particular study. In this work we describe the modeling techniques that have been developed, the capabilities and novel features of this phantom, and study simulated images produced from it. Starting from a base quadric, a series of deformations are performed to create a breast with a particular volume and shape. Initial glandular compartments are generated using a Voronoi technique and a ductal tree structure with terminal duct lobular units is grown from the nipple into each compartment. An additional step involving the creation of fat and glandular lobules using a Perlin noise function is performed to create more realistic glandular/fat tissue interfaces and generate a Cooper's ligament network. A vascular tree is grown from the chest muscle into the breast tissue. Breast compression is performed using a neo-Hookean elasticity model. We show simulated mammographic and T1-weighted MRI images and study properties of these images.
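
    The Voronoi step used to seed the glandular compartments can be sketched in a few lines (a simplified illustration, not the phantom's actual code; assumes NumPy and SciPy, with a toy grid size):

```python
# Sketch of a Voronoi-style partition of a voxel grid into glandular
# compartments: each voxel is assigned to its nearest random seed point.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
shape = (64, 64, 64)                      # toy voxel grid (real phantoms are larger)
n_compartments = 12

seeds = rng.uniform(0, shape, size=(n_compartments, 3))
tree = cKDTree(seeds)

# Coordinates of every voxel centre, queried against the seed tree.
grid = np.indices(shape).reshape(3, -1).T + 0.5
_, labels = tree.query(grid)
compartments = labels.reshape(shape)      # integer label per voxel

print(np.bincount(compartments.ravel()))  # voxels per compartment
```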

  3. PyGaze : An open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments

    NARCIS (Netherlands)

    Dalmaijer, Edwin S.; Mathot, Sebastiaan; Van der Stigchel, Stefan

    2014-01-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility.

  5. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility

    Directory of Open Access Journals (Sweden)

    Jaschob Daniel

    2012-07-01

    Full Text Available Abstract Background Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud” and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
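
    JobCenter itself is written in Java, but the client-driven pattern described above (workers poll the server, so they can run behind firewalls or in the cloud) can be sketched in Python; the endpoint names and payload fields below are hypothetical, not JobCenter's actual protocol:

```python
# Sketch of a client-driven worker loop: the worker polls the server for
# work, runs it, and reports the result. Endpoints and fields are made up.
import time
import subprocess
import requests

SERVER = "http://jobcenter.example.org/api"   # placeholder URL

while True:
    resp = requests.get(f"{SERVER}/next-job", params={"worker": "node-01"})
    job = resp.json()
    if not job:                        # nothing queued: back off and retry
        time.sleep(30)
        continue
    result = subprocess.run(job["command"], shell=True,
                            capture_output=True, text=True)
    requests.post(f"{SERVER}/result/{job['id']}",
                  json={"returncode": result.returncode,
                        "stdout": result.stdout})
```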

  6. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments.

    Science.gov (United States)

    Dalmaijer, Edwin S; Mathôt, Sebastiaan; Van der Stigchel, Stefan

    2014-12-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility. PyGaze can be used for visual and auditory stimulus presentation; for response collection via keyboard, mouse, joystick, and other external hardware; and for the online detection of eye movements using a custom algorithm. A wide range of eyetrackers of different brands (EyeLink, SMI, and Tobii systems) are supported. The novelty of PyGaze lies in providing an easy-to-use layer on top of the many different software libraries that are required for implementing eyetracking experiments. Essentially, PyGaze is a software bridge for eyetracking research.
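
    A minimal PyGaze-style script, following the class names in the PyGaze documentation, might look like the sketch below (assumes a configured setup and a supported eye tracker; timing, logging and error handling are omitted):

```python
# Minimal PyGaze-style experiment sketch: show a fixation cross and read
# gaze samples for two seconds. Class and function names follow the PyGaze
# documentation; a working tracker configuration is assumed.
from pygaze.display import Display
from pygaze.screen import Screen
from pygaze.eyetracker import EyeTracker
import pygaze.libtime as timer

disp = Display()
scr = Screen()
tracker = EyeTracker(disp)

tracker.calibrate()
scr.draw_fixation(fixtype='cross')
disp.fill(scr)
disp.show()

tracker.start_recording()
t0 = timer.get_time()
while timer.get_time() - t0 < 2000:      # record for two seconds
    x, y = tracker.sample()              # current gaze position in pixels
tracker.stop_recording()

tracker.close()
disp.close()
```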

  7. Cell_motility: a cross-platform, open source application for the study of cell motion paths

    Directory of Open Access Journals (Sweden)

    Gevaert Kris

    2006-06-01

    Full Text Available Abstract Background Migration is an important aspect of cellular behaviour and is therefore widely studied in cell biology. Numerous components are known to participate in this process in a highly dynamic manner. In order to obtain better insight into cell migration, mutants or drugs are used and their motility phenotype is then linked with the disturbing factors. One of the typical approaches to study motion paths of individual cells relies on fitting mean square displacements to a persistent random walk function. Since the numerous calculations involved often rely on diverse commercial software packages, the analysis can be expensive, labour-intensive and error-prone. Additionally, due to the nature of the algorithms employed, the calculations involved are not readily reproducible without access to the exact software package(s) used. Results We here present the cell_motility software, an open source Java application under the GNU-GPL license that provides a clear and concise analysis workbench for large amounts of cell motion data. Apart from performing the necessary calculations, the software also visualizes the original motion paths as well as the results of the calculations to help the user interpret the data. The application features an intuitive graphical user interface as well as full user and developer documentation and both source and binary files can be freely downloaded from the project website at http://genesis.UGent.be/cell_motility . Conclusion In providing a free, open source software solution for the automated processing of cell motion data, we aim to achieve two important goals: labs can greatly simplify their data analysis pipeline as switching between different computational software packages becomes obsolete (thus reducing the chances for human error during data manipulation and transfer), and secondly, to provide scientists in the field with a freely available common platform to perform their analyses, enabling more efficient
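
    The mean-square-displacement fit to a persistent random walk mentioned above can be illustrated with a short sketch (a standard formulation of the PRW model, not cell_motility's Java code; requires NumPy and SciPy, and the toy track is a placeholder):

```python
# Sketch: compute the mean square displacement (MSD) of one 2-D cell track
# and fit the persistent random walk (PRW) model
#   MSD(t) = 2 * S**2 * P * (t - P * (1 - exp(-t / P)))
# where S is the cell speed and P the persistence time.
import numpy as np
from scipy.optimize import curve_fit

def msd(track, dt):
    """MSD for lags 1..N-1 from an (N, 2) array of x, y positions."""
    lags = np.arange(1, len(track))
    out = [np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
           for lag in lags]
    return lags * dt, np.array(out)

def prw(t, speed, persistence):
    return 2 * speed**2 * persistence * (
        t - persistence * (1 - np.exp(-t / persistence)))

# Toy track: replace with real positions sampled every `dt` minutes.
rng = np.random.default_rng(1)
track = np.cumsum(rng.normal(size=(200, 2)), axis=0)
t, m = msd(track, dt=5.0)
(speed, persistence), _ = curve_fit(prw, t, m, p0=(1.0, 10.0))
print(f"speed ~ {speed:.2f}, persistence time ~ {persistence:.1f}")
```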

  8. GeolOkit 1.0: a new Open Source, Cross-Platform software for geological data visualization in Google Earth environment

    Science.gov (United States)

    Triantafyllou, Antoine; Bastin, Christophe; Watlet, Arnaud

    2016-04-01

    GIS software suites are today's essential tools for gathering and visualising geological data, applying spatial and temporal analysis and, in fine, creating and sharing interactive maps for further geoscience investigations. For these purposes we developed GeolOkit: an open-source, freeware and lightweight software package written in Python, a high-level, cross-platform programming language. GeolOkit is accessible through a graphical user interface designed to run in parallel with Google Earth. It is a user-friendly toolbox that allows 'geo-users' to import their raw data (e.g. GPS, sample locations, structural data, field pictures, maps), to use fast data analysis tools and to plot these data into the Google Earth environment using KML code. This workflow requires no third-party software other than Google Earth itself. GeolOkit comes with a large number of geoscience labels, symbols, colours and placemarks and may process: (i) multi-point data, (ii) contours via several interpolation methods, (iii) discrete planar and linear structural data in 2D or 3D, supporting a large range of structural input formats, (iv) clustered stereonets and rose diagrams, (v) drawn cross-sections as vertical sections, (vi) georeferenced maps and vectors, and (vii) field pictures, using either geo-tracking metadata from a camera's built-in GPS module or the same-day track of an external GPS. We invite you to discover all the functionalities of the GeolOkit software. As this project is under development, we welcome discussion of your needs, ideas and contributions to the GeolOkit project.
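
    The KML-generation step that pushes such data into Google Earth can be sketched with the Python standard library alone (an illustrative simplification with placeholder coordinates; GeolOkit's own output is much richer):

```python
# Sketch: turn a few GPS sample locations into a KML file that Google Earth
# can open directly. Names and coordinates are placeholders.
samples = [
    ("Sample-01", 4.35, 50.85),   # name, longitude, latitude
    ("Sample-02", 4.40, 50.90),
]

placemarks = "\n".join(
    f"  <Placemark><name>{name}</name>"
    f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
    for name, lon, lat in samples
)

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
    f"<Document>\n{placemarks}\n</Document>\n</kml>\n"
)

with open("samples.kml", "w", encoding="utf-8") as fh:
    fh.write(kml)
```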

  9. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eye-tracking experiments

    NARCIS (Netherlands)

    Dalmaijer, E.S.; Mathôt, S.; van der Stigchel, S.|info:eu-repo/dai/nl/29880977X

    2014-01-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility.

  10. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eye-tracking experiments

    NARCIS (Netherlands)

    Dalmaijer, E.S.; Mathôt, S.; van der Stigchel, S.

    2014-01-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility.

  11. Multi-Modality Phantom Development

    Energy Technology Data Exchange (ETDEWEB)

    Huber, Jennifer S.; Peng, Qiyu; Moses, William W.

    2009-03-20

    Multi-modality imaging has an increasing role in the diagnosis and treatment of a large number of diseases, particularly if both functional and anatomical information are acquired and accurately co-registered. Hence, there is a resulting need for multi-modality phantoms in order to validate image co-registration and calibrate the imaging systems. We present our PET-ultrasound phantom development, including PET and ultrasound images of a simple prostate phantom. We use agar and gelatin mixed with a radioactive solution. We also present our development of custom multi-modality phantoms that are compatible with PET, transrectal ultrasound (TRUS), MRI and CT imaging. We describe both our selection of tissue-mimicking materials and phantom construction procedures. These custom PET-TRUS-CT-MRI prostate phantoms use agar-gelatin radioactive mixtures with additional contrast agents and preservatives. We show multi-modality images of these custom prostate phantoms, as well as discuss phantom construction alternatives. Although we are currently focused on prostate imaging, this phantom development is applicable to many multi-modality imaging applications.

  12. Cross-Platform Technologies

    Directory of Open Access Journals (Sweden)

    Maria Cristina ENACHE

    2017-04-01

    Full Text Available Cross-platform is a concept that has become increasingly common in recent years, especially in the development of mobile apps, but it has also been present consistently over time in the development of conventional desktop applications. The notion of cross-platform software (multi-platform or platform-independent) refers to a software application that can run on more than one operating system or computing architecture. Thus, a cross-platform application can operate independently of the software or hardware platform on which it is executed. Since this generic definition admits a wide range of meanings, for the purposes of this paper we narrow it and use the following working definition: a cross-platform application is a software application that can run on more than one operating system (desktop or mobile) in an identical or similar way.

  13. Open Source Business Solutions

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2008-01-01

    Full Text Available This paper analyses the open source movement. The open source development process and its management are viewed differently from the classical point of view. The paper focuses on the characteristics and software market tendencies of the main open source initiatives. It also points out the future evolution of the labour market for software developers.

  14. On the Bicriterion Multi Modal Assignment Problem

    DEFF Research Database (Denmark)

    Pedersen, Christian Roed; Nielsen, L.R.; Andersen, K.A.

    2005-01-01

    We consider the bicriterion multi modal assignment problem which is a new generalization of the classical linear assignment problem. A two-phase solution method using an effective ranking scheme is presented. The algorithm is valid for generating all nondominated criterion points...

  15. Multi modal child-to-child interaction

    DEFF Research Database (Denmark)

    Fisker, Tine Basse

    In this presentation the interaction and relations of three boys are analyzed using multi-modal analysis. The analysis clearly, and surprisingly, demonstrates that the boys interact via different modes and that they are able to handle several interaction partners at the same time. They co-construct ...

  17. Crux: rapid open source protein tandem mass spectrometry analysis.

    Science.gov (United States)

    McIlwain, Sean; Tamura, Kaipo; Kertesz-Farkas, Attila; Grant, Charles E; Diament, Benjamin; Frewen, Barbara; Howbert, J Jeffry; Hoopmann, Michael R; Käll, Lukas; Eng, Jimmy K; MacCoss, Michael J; Noble, William Stafford

    2014-10-03

    Efficiently and accurately analyzing big protein tandem mass spectrometry data sets requires robust software that incorporates state-of-the-art computational, machine learning, and statistical methods. The Crux mass spectrometry analysis software toolkit ( http://cruxtoolkit.sourceforge.net ) is an open source project that aims to provide users with a cross-platform suite of analysis tools for interpreting protein mass spectrometry data.

  18. Creating Open Source Conversation

    Science.gov (United States)

    Sheehan, Kate

    2009-01-01

    Darien Library, where the author serves as head of knowledge and learning services, launched a new website on September 1, 2008. The website is built with Drupal, an open source content management system (CMS). In this article, the author describes how she and her colleagues overhauled the library's website to provide an open source content…

  19. Open source community organization

    CSIR Research Space (South Africa)

    Molefe, Onkgopotse M

    2009-05-01

    Full Text Available Open Source communities (OSCs), sometimes referred to as virtual or online communities, play a significant role in terms of the contribution they continue to make in producing user-friendly Open Source Software (OSS) solutions. Many projects have...

  1. Multi-Modal Interaction for Robotic Mules

    Science.gov (United States)

    2014-02-26

    Multi-Modal Interaction for Robotic Mules. Glenn Taylor, Mike Quist, Matt Lanting, Cory Dunham, Patrick Theisen, and Paul Muench. Taylor, Quist, Lanting, Dunham, and Theisen are with Soar Technology, Inc. (corresponding author phone: 734-887-7620); Muench is with US Army TARDEC.

  2. Open source molecular modeling.

    Science.gov (United States)

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-09-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io.

  3. mDCC_tools: characterizing multi-modal atomic motions in molecular dynamics trajectories.

    Science.gov (United States)

    Kasahara, Kota; Mohan, Neetha; Fukuda, Ikuo; Nakamura, Haruki

    2016-08-15

    We previously reported the multi-modal Dynamic Cross Correlation (mDCC) method for analyzing molecular dynamics trajectories. This method quantifies the correlation coefficients of atomic motions with complex multi-modal behaviors by using a Bayesian-based pattern recognition technique that can effectively capture transiently formed, unstable interactions. Here, we present an open source toolkit for performing the mDCC analysis, including pattern recognition, complex network analyses and visualizations. We include a tutorial document that thoroughly explains how to apply this toolkit for an analysis, using the example trajectory of the 100 ns simulation of an engineered endothelin-1 peptide dimer. The source code is available for free at http://www.protein.osaka-u.ac.jp/rcsfp/pi/mdcctools/, implemented in C++ and Python, and supported on Linux. kota.kasahara@protein.osaka-u.ac.jp Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
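
    For orientation, the conventional (uni-modal) dynamic cross-correlation matrix that mDCC generalizes can be computed in a few lines of NumPy (a sketch of the standard formula only; the toolkit's Bayesian pattern recognition is considerably more involved):

```python
# Sketch: conventional dynamic cross-correlation (DCC) between atoms,
#   C_ij = <dr_i . dr_j> / sqrt(<|dr_i|^2> <|dr_j|^2>),
# from a trajectory array of shape (n_frames, n_atoms, 3).
import numpy as np

def dcc(traj):
    disp = traj - traj.mean(axis=0)               # displacement from mean position
    dots = np.einsum('fid,fjd->ij', disp, disp) / len(traj)
    norms = np.sqrt(np.diag(dots))
    return dots / np.outer(norms, norms)

# Toy trajectory: 100 frames, 5 atoms.
rng = np.random.default_rng(0)
traj = rng.normal(size=(100, 5, 3))
print(dcc(traj).round(2))
```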

  4. On the Bicriterion Multi Modal Assignment Problem

    DEFF Research Database (Denmark)

    Pedersen, Christian Roed; Nielsen, L.R.; Andersen, K.A.

    2005-01-01

    We consider the bicriterion multi modal assignment problem, which is a new generalization of the classical linear assignment problem. A two-phase solution method using an effective ranking scheme is presented. The algorithm is valid for generating all nondominated criterion points...... or an approximation. Extensive computational experiments are conducted on a large library of test instances to test the performance of the algorithm and to identify hard test instances. Also, test results of the algorithm applied to the bicriterion assignment problem are given. Here our algorithm outperforms all...
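
    Phase one of a two-phase method of this kind, finding the supported nondominated points by scanning weighted sums of the two cost matrices, can be sketched with SciPy's assignment solver (an illustrative simplification, not the authors' ranking algorithm, which is also needed for the unsupported points in phase two):

```python
# Sketch: supported nondominated points of a bicriterion assignment problem
# via weighted-sum scalarization, using the classical Hungarian solver.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 6
c1 = rng.integers(1, 20, size=(n, n))   # first criterion cost matrix
c2 = rng.integers(1, 20, size=(n, n))   # second criterion cost matrix

points = set()
for w in np.linspace(0.01, 0.99, 25):   # coarse sweep of weights
    rows, cols = linear_sum_assignment(w * c1 + (1 - w) * c2)
    points.add((int(c1[rows, cols].sum()), int(c2[rows, cols].sum())))

# Keep only the nondominated criterion points.
nondominated = [p for p in points
                if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                           for q in points)]
print(sorted(nondominated))
```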

  5. MoBILAB: An open source toolbox for analysis and visualization of mobile brain/body imaging data

    Directory of Open Access Journals (Sweden)

    Alejandro eOjeda

    2014-03-01

    Full Text Available A new paradigm for human brain imaging, mobile brain/body imaging (MoBI), involves synchronous collection of human brain activity (via electroencephalography, EEG) and behavior (via body motion capture, eye tracking, etc.), plus environmental events (scene and event recording), to study the joint brain/body dynamics supporting natural human cognition, that is, the performance of naturally motivated human actions and interactions in 3-D environments (Makeig et al., 2009). Processing complex, concurrent, multi-modal, multi-rate data streams requires a signal-processing environment quite different from one designed to process single-modality time series data. Here we describe MoBILAB (more details available at sccn.ucsd.edu/wiki/MoBILAB), an open source, cross platform toolbox running on MATLAB (The Mathworks, Inc.) that supports analysis and visualization of any mixture of synchronously recorded brain, behavioral, and environmental time series plus time-marked event stream data. MoBILAB can serve as a pre-processing environment for adding behavioral and other event markers to EEG data for further processing, and/or as a development platform for expanded analysis of simultaneously recorded data streams.
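
    The core multi-rate problem described above, aligning streams recorded at different sampling rates onto a common timeline, can be illustrated outside MATLAB with a small resampling sketch (NumPy only; not MoBILAB code, and the signals are placeholders):

```python
# Sketch: resample a low-rate motion-capture channel onto the EEG time base
# by linear interpolation, so the two streams share one timeline.
import numpy as np

eeg_rate, mocap_rate, seconds = 512.0, 120.0, 10.0
t_eeg = np.arange(0, seconds, 1.0 / eeg_rate)
t_mocap = np.arange(0, seconds, 1.0 / mocap_rate)

eeg = np.random.randn(t_eeg.size)             # placeholder EEG channel
mocap = np.sin(2 * np.pi * 0.5 * t_mocap)     # placeholder motion channel

mocap_on_eeg_clock = np.interp(t_eeg, t_mocap, mocap)
print(eeg.shape, mocap_on_eeg_clock.shape)    # both now on the EEG clock
```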

  6. Open Source Telecommunication Companies

    Directory of Open Access Journals (Sweden)

    Peter Liu

    2007-08-01

    Full Text Available Little is known about companies whose core business is selling telecommunications products that leverage open source projects. Open source telecommunications (OST) companies operate in markets that are very different from typical software product markets. The telecommunications market is regulated, vertically integrated, and proprietary designs and special chips are widely used. For a telecommunications product to be useful, it must interact with both access network products and core network products. Due to the specifications in Service Level Agreements, penalties for failures of telecommunications products are very high. This article shares information that is not widely known, including a list of OST companies and the open source projects on which they depend, the size and diversity of venture capital investment in OST companies, the nature of the commercial product-open source software and company-project relationships, ways in which OST companies make money, benefits and risks of OST companies, and competition between OST companies. Analysis of this information provides insights into the ways in which companies can build business models around open source software. These findings will be of interest to entrepreneurs, top management teams of incumbent companies that sell telecommunications products, and those who care about Ontario's ability to compete globally.

  7. Open Source Software Acquisition

    DEFF Research Database (Denmark)

    Holck, Jesper; Kühn Pedersen, Mogens; Holm Larsen, Michael

    2005-01-01

    Lately we have seen a growing interest from both public and private organisations to adopt Open Source Software (OSS), not only for a few, specific applications but also on a more general level throughout the organisation. As a consequence, the organisations' decisions on adoption of OSS are becoming...

  8. Open source development

    DEFF Research Database (Denmark)

    Ulhøi, John Parm

    2004-01-01

    This paper addresses innovations based on open source or non-proprietary knowledge. Viewed through the lens of private property theory, such agency appears to be a true anomaly. However, by a further turn of the theoretical kaleidoscope, we will show that there may be perfectly justifiable reason...

  9. Evaluating Open Source Portals

    Science.gov (United States)

    Goh, Dion; Luyt, Brendan; Chua, Alton; Yee, See-Yong; Poh, Kia-Ngoh; Ng, How-Yeu

    2008-01-01

    Portals have become indispensable for organizations of all types trying to establish themselves on the Web. Unfortunately, there have only been a few evaluative studies of portal software and even fewer of open source portal software. This study aims to add to the available literature in this important area by proposing and testing a checklist for…

  10. Open Source in Education

    Science.gov (United States)

    Lakhan, Shaheen E.; Jhunjhunwala, Kavita

    2008-01-01

    Educational institutions have rushed to put their academic resources and services online, bringing the global community onto a common platform and awakening the interest of investors. Despite continuing technical challenges, online education shows great promise. Open source software offers one approach to addressing the technical problems in…

  11. Open-Source Colorimeter

    Science.gov (United States)

    Anzalone, Gerald C.; Glover, Alexandra G.; Pearce, Joshua M.

    2013-01-01

    The high cost of what have historically been sophisticated research-related sensors and tools has limited their adoption to a relatively small group of well-funded researchers. This paper provides a methodology for applying an open-source approach to design and development of a colorimeter. A 3-D printable, open-source colorimeter utilizing only open-source hardware and software solutions and readily available discrete components is discussed and its performance compared to a commercial portable colorimeter. Performance is evaluated with commercial vials prepared for the closed reflux chemical oxygen demand (COD) method. This approach reduced the cost of reliable closed reflux COD by two orders of magnitude making it an economic alternative for the vast majority of potential users. The open-source colorimeter demonstrated good reproducibility and serves as a platform for further development and derivation of the design for other, similar purposes such as nephelometry. This approach promises unprecedented access to sophisticated instrumentation based on low-cost sensors by those most in need of it, under-developed and developing world laboratories. PMID:23604032
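
    The underlying measurement is simple enough to sketch: a raw sensor reading is converted to absorbance and mapped to COD through a linear calibration curve (the numbers below are made-up placeholders, not the paper's calibration):

```python
# Sketch: Beer-Lambert absorbance from raw sensor counts, plus a linear
# calibration from absorbance to chemical oxygen demand (COD).
# All numbers below are illustrative placeholders.
import numpy as np

def absorbance(sample_counts, blank_counts):
    return -np.log10(sample_counts / blank_counts)

# Calibration standards: known COD (mg/L) vs measured absorbance.
cod_std = np.array([0.0, 100.0, 250.0, 500.0, 1000.0])
abs_std = np.array([0.00, 0.035, 0.090, 0.180, 0.360])
slope, intercept = np.polyfit(abs_std, cod_std, 1)

a = absorbance(sample_counts=8200.0, blank_counts=10000.0)
print(f"A = {a:.3f}, estimated COD = {slope * a + intercept:.0f} mg/L")
```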

  12. Open Source Physics

    CERN Document Server

    Wee, Loo Kang

    2013-01-01

    Open Source Physics (Brown, 2012; Christian, 2010; Esquembre, 2012; Hwang, 2010) empowers teachers and students to create and use free tools, with the associated intellectual property rights to customise (Wee & Mak, 2009) the computer models/tools to suit their teaching and learning needs. Open Source Physics (OSP) focuses on the design of computer models, such as Easy Java Simulations (EJS), and the use of video modeling and analysis (Tracker). These allow students to investigate, explore and analyse data which is either real or simulated. The OSP approach helps users overcome barriers in creating, using and scaling up meaningful ICT use in education. In Singapore, teachers and students have created or customised existing computer models to design and re-purpose EJS models to suit their context and learning needs. Tracker tools allow students to analyse different aspects of a physics phenomenon to deepen their understanding of abstract physics concepts. Using Tracker, students record the motion of ob...

  13. Open-Source GIS

    Energy Technology Data Exchange (ETDEWEB)

    Vatsavai, Raju [ORNL; Burk, Thomas E [University of Minnesota; Lime, Steve [Minnesota Department of Natural Resources

    2012-01-01

    The components making up an Open Source GIS are explained in this chapter. A map server (Sect. 30.1) can broadly be defined as a software platform for dynamically generating spatially referenced digital map products. The University of Minnesota MapServer (UMN MapServer) is one such system. Its basic features are visualization, overlay, and query. Section 30.2 names and explains many of the geospatial open source libraries, such as GDAL and OGR. The other libraries are FDO, JTS, GEOS, JCS, MetaCRS, and GPSBabel. The application examples include derived GIS software and data format conversions. Quantum GIS, its origin, and its applications are explained in detail in Sect. 30.3. Its features include a rich GUI, attribute tables, vector symbols, labeling, editing functions, projections, georeferencing, GPS support, analysis, and Web Map Server functionality. Future developments will address mobile applications, 3-D, and multithreading. The origins of PostgreSQL are outlined and PostGIS is discussed in detail in Sect. 30.4. It extends PostgreSQL by implementing the Simple Feature standard. Section 30.5 details the most important open source licenses, such as the GPL, the LGPL, the MIT License, and the BSD License, as well as the role of the Creative Commons.
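
    A few lines of the GDAL Python bindings illustrate the kind of low-level access these libraries provide (a sketch; the GeoTIFF file name is a placeholder):

```python
# Sketch: open a georeferenced raster with GDAL and read one band as a
# NumPy array, then report its geotransform (origin and pixel size).
from osgeo import gdal

ds = gdal.Open("elevation.tif")          # hypothetical GeoTIFF
band = ds.GetRasterBand(1)
data = band.ReadAsArray()                # 2-D NumPy array of pixel values

origin_x, pixel_w, _, origin_y, _, pixel_h = ds.GetGeoTransform()
print(ds.RasterXSize, ds.RasterYSize, data.dtype)
print("origin:", origin_x, origin_y, "pixel size:", pixel_w, pixel_h)
```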

  14. Open source posturography.

    Science.gov (United States)

    Rey-Martinez, Jorge; Pérez-Fernández, Nicolás

    2016-12-01

    The proposed validation goal of 0.9 for the intra-class correlation coefficient was reached with the results of this study. With the obtained results we consider the developed software (RombergLab) to be a validated balance assessment software. The reliability of this software depends on the technical specifications of the force platform used. The objective was to develop and validate posturography software and to share its source code in open source terms. In a prospective non-randomized validation study, 20 consecutive adults underwent two balance assessment tests: six-condition posturography was performed using clinically approved software and force platform, and the same conditions were measured using the newly developed open source software with a low-cost force platform. The intra-class correlation index of the sway area obtained from the center of pressure variations in both devices for the six conditions was the main variable used for validation. Excellent concordance between RombergLab and the clinically approved force platform was obtained (intra-class correlation coefficient = 0.94). A Bland and Altman concordance plot was also obtained. The source code used to develop RombergLab was published in open source terms.
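
    One common definition of the sway area used in such validations, the 95% confidence ellipse of the centre-of-pressure excursions, can be sketched as follows (a standard formulation, not necessarily RombergLab's exact implementation; the COP trace is synthetic):

```python
# Sketch: 95% confidence-ellipse sway area from centre-of-pressure (COP)
# coordinates, a common posturographic sway measure.
import numpy as np

def sway_area_95(cop_x, cop_y):
    cov = np.cov(cop_x, cop_y)            # 2x2 covariance of COP excursions
    chi2_95_2dof = 5.991                  # chi-square quantile, 2 dof, p = 0.95
    return np.pi * chi2_95_2dof * np.sqrt(np.linalg.det(cov))

# Toy COP trace (millimetres); replace with force-platform output.
rng = np.random.default_rng(0)
x = rng.normal(scale=3.0, size=3000)
y = rng.normal(scale=5.0, size=3000)
print(f"sway area ~ {sway_area_95(x, y):.0f} mm^2")
```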

  15. Open Source Software Acquisition

    DEFF Research Database (Denmark)

    Holck, Jesper; Kühn Pedersen, Mogens; Holm Larsen, Michael

    2005-01-01

    Lately we have seen a growing interest from both public and private organisations to adopt Open Source Software (OSS), not only for a few, specific applications but also on a more general level throughout the organisation. As a consequence, the organisations' decisions on adoption of OSS are becoming...... In smaller organisations and in small-scale adoption of OSS, the cheap price of OSS is a major enabler, as it provides a good opportunity for experiments and short-term economic benefits. For small organisations these experiments can lead to development of a common IT-architecture, and in larger organisations...

  16. OPEN SOURCE SOFTWARE, FREE SOFTWARE? (SOFTWARE OPEN SOURCE, SOFTWARE GRATIS?)

    Directory of Open Access Journals (Sweden)

    Nur Aini Rakhmawati

    2006-01-01

    Full Text Available The enactment of the Intellectual Property Rights (HAKI) law has given rise to a new alternative: the use of open source software. The use of open source software is spreading in step with current global issues in Information and Communication Technology (ICT). Several organisations and companies have begun to take open source software into consideration. There are many conceptions of what open source software is, ranging from software that is free of charge to software without a licence. Not all of these conceptions are correct, so it is necessary to introduce the concept of open source software, from its history to its licences, how to choose a licence, and the considerations involved in choosing among the available open source software. Keywords: licence, open source, intellectual property rights (HAKI)

  17. Multi-modality molecular imaging for gastric cancer research

    Science.gov (United States)

    Liang, Jimin; Chen, Xueli; Liu, Junting; Hu, Hao; Qu, Xiaochao; Wang, Fu; Nie, Yongzhan

    2011-12-01

    Because of their ability to integrate the strengths of different modalities and provide fully integrated information, multi-modality molecular imaging techniques offer an excellent solution for detecting and diagnosing cancer earlier, which remains difficult to achieve with existing techniques. In this paper we present an overview of our research efforts on the development of an optical-imaging-centric multi-modality molecular imaging platform, including the development of the imaging system, reconstruction algorithms and preclinical biomedical applications. Preliminary biomedical results show that the developed optical-imaging-centric multi-modality molecular imaging platform may have great potential in preclinical biomedical applications and future clinical translation.

  18. Multi-Modal Intelligent Traffic Signal Systems GPS

    Data.gov (United States)

    Department of Transportation — Data were collected during the Multi-Modal Intelligent Transportation Signal Systems (MMITSS) study. MMITSS is a next-generation traffic signal system that seeks to...

  19. Multi-Modal Intelligent Traffic Signal Systems Basic Safety Message

    Data.gov (United States)

    Department of Transportation — Data were collected during the Multi-Modal Intelligent Transportation Signal Systems (MMITSS) study. MMITSS is a next-generation traffic signal system that seeks to...

  20. PR-PR: cross-platform laboratory automation system.

    Science.gov (United States)

    Linshiz, Gregory; Stawski, Nina; Goyal, Garima; Bi, Changhao; Poust, Sean; Sharma, Monica; Mutalik, Vivek; Keasling, Jay D; Hillson, Nathan J

    2014-08-15

    To enable protocol standardization, sharing, and efficient implementation across laboratory automation platforms, we have further developed the PR-PR open-source high-level biology-friendly robot programming language as a cross-platform laboratory automation system. Beyond liquid-handling robotics, PR-PR now supports microfluidic and microscopy platforms, as well as protocol translation into human languages, such as English. While the same set of basic PR-PR commands and features are available for each supported platform, the underlying optimization and translation modules vary from platform to platform. Here, we describe these further developments to PR-PR, and demonstrate the experimental implementation and validation of PR-PR protocols for combinatorial modified Golden Gate DNA assembly across liquid-handling robotic, microfluidic, and manual platforms. To further test PR-PR cross-platform performance, we then implement and assess PR-PR protocols for Kunkel DNA mutagenesis and hierarchical Gibson DNA assembly for microfluidic and manual platforms.

  1. Based on the cocos2d cross-platform development

    Institute of Scientific and Technical Information of China (English)

    申志兵

    2016-01-01

    Smartphone operating systems are diverse, but Apple's iOS, Google's Android and Microsoft's Windows Phone account for almost the entire mobile phone market, so developing a cross-platform game has very high commercial value. Cocos2d-x is a cross-platform, open source 2D mobile game framework; projects developed with Cocos2d-x can be built and run in C++ on platforms such as iOS, Android and Windows Phone. This paper mainly introduces the stages of developing such a game, including the feasibility analysis, system requirements analysis, outline design, detailed design, coding and testing of the system, as well as its content. The feasibility analysis examines whether the system is worth developing from all relevant aspects; the requirements analysis captures the main requirements and system functions using a function structure diagram, use case diagrams and a system flow chart; the system design covers the game's execution module, main module, monster module and props module in detail, with class diagrams, sequence diagrams and state diagrams. The game itself is a dungeon-style stage game in which the player moves the protagonist to destroy monsters and obtain the key that unlocks the next level, aiming to finish the game as quickly as possible while completing it within the prescribed number of steps.

  2. PR-PR: Cross-Platform Laboratory Automation System

    Energy Technology Data Exchange (ETDEWEB)

    Linshiz, G; Stawski, N; Goyal, G; Bi, CH; Poust, S; Sharma, M; Mutalik, V; Keasling, JD; Hillson, NJ

    2014-08-01

    To enable protocol standardization, sharing, and efficient implementation across laboratory automation platforms, we have further developed the PR-PR open-source high-level biology-friendly robot programming language as a cross-platform laboratory automation system. Beyond liquid-handling robotics, PR-PR now supports microfluidic and microscopy platforms, as well as protocol translation into human languages, such as English. While the same set of basic PR-PR commands and features are available for each supported platform, the underlying optimization and translation modules vary from platform to platform. Here, we describe these further developments to PR-PR, and demonstrate the experimental implementation and validation of PR-PR protocols for combinatorial modified Golden Gate DNA assembly across liquid-handling robotic, microfluidic, and manual platforms. To further test PR-PR cross-platform performance, we then implement and assess PR-PR protocols for Kunkel DNA mutagenesis and hierarchical Gibson DNA assembly for microfluidic and manual platforms.

  3. Defining Open Source

    Directory of Open Access Journals (Sweden)

    Russ Nelson

    2007-09-01

    Full Text Available The Open Source Initiative and the Free Software Foundation share a common goal: that everyone should be free to modify and redistribute the software they commonly use. 'Should' is of course a normative word. For the FSF, 'should' is a moral imperative. Anything else is an immoral restriction on people's activities, just as are restrictions on speech, press, movement, and religion. For the OSI, freedom is a necessary precondition for a world where "software doesn't suck", in the words of a founder of the OSI. The FSF started from its founder's GNU Manifesto, widely published in 1985. Given the manifesto's hostility to copyright, and given the failure of the Free Software Foundation to gain any traction amongst commercial users of software even with a 13-year head start, a group of people gathered together in 1998 to talk about a new strategy to get the corporate world to listen to hackers. They were impressed by the take-up of Eric Raymond's The Cathedral and the Bazaar among business leaders.

  4. Editorial: Open Source in Government

    Directory of Open Access Journals (Sweden)

    Dru Lavigne

    2009-04-01

    Full Text Available Last summer, the Center for Strategic and International Studies published the sixth update to their Open Source Policy survey. The survey "tracks governmental policies on the use of open source software as reported in the press or other media." The report lists 275 open source policy initiatives. It also breaks down by country and by government level whether the policy on the use of open source is considered to be advisory, preferential, or mandatory. The editorial theme for the May issue of the OSBR is "open source in government" and we are pleased that the authors have drawn upon their experiences to provide insight into public policy regarding open source for many parts of the world.

  5. OpenSesame : An open-source, graphical experiment builder for the social sciences

    NARCIS (Netherlands)

    Mathot, Sebastiaan; Schreij, Daniel; Theeuwes, Jan

    2012-01-01

    In the present article, we introduce OpenSesame, a graphical experiment builder for the social sciences. OpenSesame is free, open-source, and cross-platform. It features a comprehensive and intuitive graphical user interface and supports Python scripting for complex tasks. Additional functionality,

  6. Investigating Advances in the Acquisition of Systems Based on Open Architecture and Open Source Software

    Science.gov (United States)

    2011-08-01

    Copyright 2003-2008 Rodrigo B. Oliveira; 4. UnityScript, Copyright 2005-2008 Rodrigo B. Oliveira; 5. OpenAL cross-platform audio library, Copyright... ...do not know. ACM Computing Surveys (in press). [5] Hauge, O., Ayala, C. and Conradi, R. (2010). Adoption of Open Source Software in Software-Intensive...

  7. Cross-platform wireless sensor network development

    DEFF Research Database (Denmark)

    Hansen, Morten Tranberg; Kusy, Branislav

    …an open-source development environment that takes a holistic approach to implementing sensor network applications. Users build applications using the drag-and-drop visual programming language Open Blocks, a language that Google selected for its App Inventor for Android. TinyInventor uses cross-platform programming concepts...

  8. Xamarin cross-platform application development

    CERN Document Server

    Peppers, Jonathan

    2015-01-01

    If you are a developer with experience in C# and are just getting into mobile development, this is the book for you. If you have experience with desktop applications or the Web, this book will give you a head start on cross-platform development.

  10. Handbook of Open Source Tools

    CERN Document Server

    Koranne, Sandeep

    2011-01-01

    Handbook of Open Source Tools introduces a comprehensive collection of advanced open source tools useful in developing software applications. The book contains information on more than 200 open-source tools, including software construction utilities for compilers, virtual machines, databases, graphics, high-performance computing, OpenGL, geometry, algebra, graph theory, GUIs and more. Special highlights for software construction utilities and application libraries are included. Each tool is covered in the context of a real-life application development setting. This unique handbook presents

  11. Comparison of Hybrid Cross-Platform Mobile Applications with Native Cross-Platform Applications

    Directory of Open Access Journals (Sweden)

    NOVAC Ovidiu Constantin

    2016-10-01

    Full Text Available In this paper we present two types of cross-platform mobile applications and look at the advantages and disadvantages of each. Hybrid applications are HTML5/JavaScript applications which are given a native device wrapper in order to be able to run as stand-alone applications, rather than as web pages which have to be rendered in the web browser. These applications look the same on each platform they are deployed to. Native cross-platform applications offer platform-like styling, making it almost impossible to tell the difference between a native and a cross-platform application.

  12. Analyzing huge pathology images with open source software.

    Science.gov (United States)

    Deroulers, Christophe; Ameisen, David; Badoual, Mathilde; Gerin, Chloé; Granier, Alexandre; Lartaud, Marc

    2013-06-06

    Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights into systems biology. However, such virtual slides pose a technical challenge, since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Our open source software enables dealing with huge images using standard software on average computers. The tools are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre doing image analysis of many slides on a computer cluster. The virtual slide(s) for this article can be found here
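
    The mosaicking idea, dividing a huge slide into small tiles with optional overlap so that each piece fits in memory, reduces to simple index arithmetic, as the sketch below shows (coordinates only; the NDPITools operate on NDPI/TIFF files directly):

```python
# Sketch: compute tile boundaries for splitting a huge image of size
# (width, height) into tiles with a fixed overlap, without loading pixels.
def tile_grid(width, height, tile=2048, overlap=64):
    step = tile - overlap
    tiles = []
    for top in range(0, height, step):
        for left in range(0, width, step):
            right = min(left + tile, width)
            bottom = min(top + tile, height)
            tiles.append((left, top, right, bottom))
    return tiles

# A 120000 x 80000 pixel virtual slide yields a few thousand tiles whose
# coordinates can be fed to any region-reading API one tile at a time.
print(len(tile_grid(120_000, 80_000)))
```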

  13. Open Source Vulnerability Database Project

    Directory of Open Access Journals (Sweden)

    Jake Kouns

    2008-06-01

    Full Text Available This article introduces the Open Source Vulnerability Database (OSVDB project which manages a global collection of computer security vulnerabilities, available for free use by the information security community. This collection contains information on known security weaknesses in operating systems, software products, protocols, hardware devices, and other infrastructure elements of information technology. The OSVDB project is intended to be the centralized global open source vulnerability collection on the Internet.

  14. A transformation approach to modelling multi-modal diffusions

    DEFF Research Database (Denmark)

    Forman, Julie Lyng; Sørensen, Michael

    2014-01-01

    This paper demonstrates that flexible and statistically tractable multi-modal diffusion models can be attained by transformation of simple well-known diffusion models such as the Ornstein–Uhlenbeck model, or more generally a Pearson diffusion. The transformed diffusion inherits many properties...

  15. Choice set generation in multi-modal transportation networks

    NARCIS (Netherlands)

    Fiorenzo-Catalano, M.S.

    2007-01-01

    Multi-modal transport relates to trips for which travellers use two or more transport modes, for example bicycle and train, train and bus, or private car and metro. The main theme in this dissertation is to establish a choice set generation model and algorithm, and demonstrate its validity and

  16. Utilizing Multi-Modal Literacies in Middle Grades Science

    Science.gov (United States)

    Saurino, Dan; Ogletree, Tamra; Saurino, Penelope

    2010-01-01

    The nature of literacy is changing. Increased student use of computer-mediated, digital, and visual communication spans our understanding of adolescent multi-modal capabilities that reach beyond the traditional conventions of linear speech and written text in the science curriculum. Advancing technology opens doors to learning that involve…

  17. Reference Resolution in Multi-modal Interaction: Position paper

    NARCIS (Netherlands)

    Fernando, T.; Nijholt, Antinus

    2002-01-01

    In this position paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the necessity to spend more research on reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can

  18. Reference resolution in multi-modal interaction: Preliminary observations

    NARCIS (Netherlands)

    González González, G.R.; Nijholt, Antinus

    2002-01-01

    In this paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the necessity to spend more research on reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can apply

  19. Multi-modal locomotion: from animal to application.

    Science.gov (United States)

    Lock, R J; Burgess, S C; Vaidyanathan, R

    2014-03-01

    The majority of robotic vehicles that can be found today are bound to operations within a single medium (i.e. land, air or water). This is very rarely the case when considering locomotive capabilities in natural systems. Utility for small robots often reflects the exact same problem domain as for small animals, hence providing numerous avenues for biological inspiration. This paper begins to investigate the various modes of locomotion adopted by different genus groups in multiple media as an initial attempt to determine the compromises in ability accepted by the animals when achieving multi-modal locomotion. A review of current biologically inspired multi-modal robots is also presented. The primary aim of this research is to lay the foundation for a generation of vehicles capable of multi-modal locomotion, allowing ambulatory abilities in more than one medium and surpassing current capabilities. By identifying and understanding when natural systems use specific locomotion mechanisms, when they opt for disparate mechanisms for each mode of locomotion rather than a synergized singular mechanism, and how this affects their capability in each medium, similar combinations can be used as inspiration for future multi-modal biologically inspired robotic platforms.

  20. Reference resolution in multi-modal interaction: Preliminary observations

    NARCIS (Netherlands)

    Nijholt, A.; González González, G.R.

    2002-01-01

    In this paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the necessity to spend more research on reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can apply mo

  1. APEX version 2.0: latest version of the cross-platform analysis program for EXAFS.

    Science.gov (United States)

    Dimakis, N; Bunker, G

    2001-03-01

    This report describes recent progress on APEX, a free, open source, cross-platform set of EXAFS data analysis software. In a previous report we described APEX 1.0 (Dimakis, N. and Bunker, G., 1999), a free and open source suite of basic X-Ray Absorption Fine Structure (XAFS) data analysis programs for classical data reduction and single scattering analysis. The first version of APEX was, to our knowledge, the only cross-platform (Linux/IRIX/Windows/MacOS) EXAFS analysis program, but it lacked important features like multiple scattering fitting, generic format conversion from ASCII to University of Washington (UW) binary-type files, and user-friendly interactive graphics. In the enhanced version described here we have added cross-platform interactive graphics based on the BLT package, which is an extension to TCL/TK. Some of the utilities have been rewritten in native TCL/TK, allowing for faster and more integrated functionality with the main package. The package has also been ported to SunOS. APEX 2.0 in its current form is suitable for routine data analysis and training. The addition of more advanced data analysis methods is planned.

  2. Penetration Tester's Open Source Toolkit

    CERN Document Server

    Faircloth, Jeremy

    2011-01-01

    Great commercial penetration testing tools can be very expensive and sometimes hard to use or of questionable accuracy. This book helps solve both of these problems. The open source, no-cost penetration testing tools presented do a great job and can be modified by the user for each situation. Many tools, even ones that cost thousands of dollars, do not come with any type of instruction on how and in which situations the penetration tester can best use them. Penetration Tester's Open Source Toolkit, Third Edition, expands upon existing instructions so that a professional can get the most accura

  3. Open source systems security certification

    CERN Document Server

    Damiani, Ernesto; El Ioini, Nabil

    2009-01-01

    Open Source Advances in Computer Applications book series provides timely technological and business information for: Enabling Open Source Systems (OSS) to become an integral part of systems and devices produced by technology companies; Inserting OSS in the critical path of complex network development and embedded products, including methodologies and tools for domain-specific OSS testing (lab code available), plus certification of security, dependability and safety properties for complex systems; Ensuring integrated systems, including OSS, meet performance and security requirements as well as achieving the necessary certifications, according to the overall strategy of OSS usage on the part of the adopter

  4. Multi-modal image registration using structural features.

    Science.gov (United States)

    Kasiri, Keyvan; Clausi, David A; Fieguth, Paul

    2014-01-01

    Multi-modal image registration has been a challenging task in medical images because of the complex intensity relationship between images to be aligned. Registration methods often rely on the statistical intensity relationship between the images which suffers from problems such as statistical insufficiency. The proposed registration method works based on extracting structural features by utilizing the complex phase and gradient-based information. By employing structural relationships between different modalities instead of complex similarity measures, the multi-modal registration problem is converted into a mono-modal one. Therefore, conventional mono-modal similarity measures can be utilized to evaluate the registration results. This new registration paradigm has been tested on magnetic resonance (MR) brain images of different modes. The method has been evaluated based on target registration error (TRE) to determine alignment accuracy. Quantitative results demonstrate that the proposed method is capable of achieving comparable registration accuracy compared to the conventional mutual information.
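
    The core idea above, converting the multi-modal problem into a mono-modal one by comparing structural representations, can be illustrated with a small sketch that uses gradient magnitude as the structural image and normalized cross-correlation as the mono-modal similarity (a simplified stand-in for the paper's phase- and gradient-based features, on synthetic data):

```python
# Sketch: map two images of different modalities to gradient-magnitude
# "structural" images and score their alignment with normalized
# cross-correlation, a mono-modal similarity measure.
import numpy as np

def structural(img):
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

# Toy "T1" and "T2" images: same structure, inverted contrast.
rng = np.random.default_rng(0)
t1 = rng.random((128, 128))
t2 = 1.0 - t1
print(ncc(structural(t1), structural(t2)))   # close to 1 when aligned
```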

  5. MINERVA - A Multi-Modal Radiation Treatment Planning System

    Energy Technology Data Exchange (ETDEWEB)

    D. E. Wessol; C. A. Wemple; D. W. Nigg; J. J. Cogliati; M. L. Milvich; C. Frederickson; M. Perkins; G. A. Harkin

    2004-10-01

    Recently, research efforts have begun to examine the combination of BNCT with external beam photon radiotherapy (Barth et al. 2004). In order to properly prepare treatment plans for patients being treated with combinations of radiation modalities, appropriate planning tools must be available. To facilitate this, researchers at the Idaho National Engineering and Environmental Laboratory (INEEL) and Montana State University (MSU) have undertaken development of a fully multi-modal radiation treatment planning system.

  6. MULTI MODAL ONTOLOGY SEARCH FOR SEMANTIC IMAGE RETRIEVAL

    Directory of Open Access Journals (Sweden)

    R.I. Minu

    2012-08-01

    Full Text Available In this world of fast computing, automation plays an important role, and in image retrieval automation is a great quest: giving an image as a query and retrieving relevant images is a challenging research area. In this paper we propose the use of multi-modality ontology integration for image retrieval. The core strategy in multi-modal information retrieval is the combination or fusion of different data modalities to expand and complement information. Here we use both visual and textual ontology content to provide search functionality. Images and texts are complementary information units, since the human perspective on each differs; the computational linguistics of images can help disambiguate text meaning when the right sense among several words is unclear. That is why multi-modal information retrieval may lead to an improved information retrieval system. Automation requires a suitable technique to predict the result, so in this paper we use a Support Vector Machine classifier to classify images automatically using general features such as the color, texture and textons of an image; from this result we create both feature and domain ontologies for a particular image. Using this multi-modality ontology we can refine our image searching system.
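
    As a minimal, hypothetical sketch of the classification step mentioned above (an SVM over simple colour/texture features), the fragment below trains scikit-learn's SVC on hand-rolled histogram and gradient-energy features computed from synthetic image patches; the feature choices and the data are placeholders, not the authors' pipeline.

        # Illustrative sketch: classify images with an SVM using simple
        # colour-histogram plus texture (gradient-energy) features.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        def features(image):
            """Concatenate a coarse grey-level histogram with a texture statistic."""
            hist, _ = np.histogram(image, bins=16, range=(0.0, 1.0), density=True)
            gy, gx = np.gradient(image)
            texture = np.array([np.mean(np.hypot(gx, gy))])
            return np.concatenate([hist, texture])

        # Placeholder data: two synthetic "classes" (smooth vs. noisy patches).
        rng = np.random.default_rng(1)
        smooth = [rng.random((32, 32)).cumsum(axis=1) / 32 for _ in range(50)]
        noisy = [rng.random((32, 32)) for _ in range(50)]
        X = np.array([features(im) for im in smooth + noisy])
        y = np.array([0] * 50 + [1] * 50)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
        print("test accuracy:", clf.score(X_te, y_te))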

  7. THE OPEN SOURCING OF EPANET

    Science.gov (United States)

    A proposal was made at the 2009 EWRI Congress in Kansas City, MO to establish an Open Source Project (OSP) for the widely used EPANET pipe network analysis program. This would be an ongoing collaborative effort among a group of geographically dispersed advisors and developers, wo...

  8. Open Source and Open Standards

    NARCIS (Netherlands)

    Koper, Rob

    2006-01-01

    Publication reference: Koper, R. (2008). Open Source and Open Standards. In J. M. Spector, M. Merrill, J. van Merriënboer & M. P. Driscol (Eds.), Handbook of Research on Educational Communications and Technology (3rd ed., pp. 355-368). New York: Routledge.

  9. Open Source Wifi Hotspot Implementation

    Directory of Open Access Journals (Sweden)

    Tyler Sondag

    2007-06-01

    Full Text Available The goal of this paper is to describe a design, including the hardware, software, and configuration, for an open source wireless network. The network designed will require authentication. While care will be taken to keep the authentication exchange secure, the network will otherwise transmit data without encryption.

  10. An 'open source' networked identity

    DEFF Research Database (Denmark)

    Larsen, Malene Charlotte

    2013-01-01

    , but also an important part of their self-presentation online. It is discussed why these emotional statements are almost always publically available – or even strategically, intentionally placed on the young people's profiles. In relation to this, it is argued that young people – through their emotional...... communicative actions – are not only performing their own identity, but are becoming co-constructors of each other's identities, which the author characterizes as an 'open source' networked identity....

  11. Open Source Fundamental Industry Classification

    OpenAIRE

    Kakushadze, Zura; Yu, Willie

    2017-01-01

    We provide complete source code for building a fundamental industry classification based on publicly available and freely downloadable data. We compare various fundamental industry classifications by running a horserace of short-horizon trading signals (alphas) utilizing open source heterotic risk models (https://ssrn.com/abstract=2600798) built using such industry classifications. Our source code includes various stand-alone and portable modules, e.g., for downloading/parsing web data, etc.

  12. Multi-modality image registration using the decomposition model

    Science.gov (United States)

    Ibrahim, Mazlinda; Chen, Ke

    2017-04-01

    In medical image analysis, image registration is one of the crucial steps required to facilitate automatic segmentation, treatment planning and other applications involving imaging machines. Image registration, also known as image matching, aims to align two or more images so that the information obtained can be compared and combined. Different imaging modalities and their characteristics make the task more challenging. We propose a decomposition model combining parametric and non-parametric deformation for multi-modality image registration. Numerical results show that the normalised gradient field performs better than mutual information when used with the decomposition model.
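
    The normalised gradient field (NGF) measure compared against mutual information above can be written down compactly; the sketch below is a generic NGF-style distance (not the authors' decomposition model), with the edge parameter eta chosen arbitrarily.

        # Illustrative sketch of a normalised gradient field (NGF) distance,
        # a similarity measure that depends on edge orientation rather than intensity.
        import numpy as np

        def ngf(image, eta=0.01):
            """Normalised gradient field: gradients divided by a regularised norm."""
            gy, gx = np.gradient(image.astype(float))
            norm = np.sqrt(gx**2 + gy**2 + eta**2)
            return gx / norm, gy / norm

        def ngf_distance(reference, template, eta=0.01):
            """Sum of 1 - (n_R . n_T)^2 over all pixels; small when edges agree."""
            rx, ry = ngf(reference, eta)
            tx, ty = ngf(template, eta)
            dot = rx * tx + ry * ty
            return float(np.sum(1.0 - dot**2))

        rng = np.random.default_rng(2)
        ref = rng.random((64, 64))
        print("same image        :", ngf_distance(ref, ref))
        print("inverted contrast :", ngf_distance(ref, 1 - ref))
        print("unrelated image   :", ngf_distance(ref, rng.random((64, 64))))

    Like mutual information, the NGF distance is insensitive to contrast reversal between modalities, since only the local gradient orientation enters the measure.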

  13. Coercive Region-level Registration for Multi-modal Images

    CERN Document Server

    Chen, Yu-Hui; Newstadt, Gregory; Simmons, Jeffrey; Hero, Alfred

    2015-01-01

    We propose a coercive approach to simultaneously register and segment multi-modal images which share similar spatial structure. Registration is done at the region level to facilitate data fusion while avoiding the need for interpolation. The algorithm performs alternating minimization of an objective function informed by statistical models for pixel values in different modalities. Hypothesis tests are developed to determine whether to refine segmentations by splitting regions. We demonstrate that our approach has significantly better performance than the state-of-the-art registration and segmentation methods on microscopy images.

  14. Multi-modal intervention improved oral intake in hospitalized patients

    DEFF Research Database (Denmark)

    Holst, M; Beermann, T; Mortensen, M N

    2015-01-01

    BACKGROUND: Good nutritional practice (GNP) includes screening, a nutrition plan and monitoring, and is mandatory for targeted treatment of malnourished patients in hospital. AIMS: To optimize energy- and protein-intake in patients at nutritional risk and to improve GNP in a hospital setting. METHODS: A 12-month observational multi-modal intervention study was done, using the top-down and bottom-up principle. All hospitalized patients (>3 days) were included. Setting: A university hospital with 758 beds and all specialities. Measurements: Record audit of GNP, energy- and protein-intake by 24-h...

  15. A software framework for real-time multi-modal detection of microsleeps.

    Science.gov (United States)

    Knopp, Simon J; Bones, Philip J; Weddell, Stephen J; Jones, Richard D

    2017-06-01

    A software framework is described which was designed to process EEG, video of one eye, and head movement in real time, towards achieving early detection of microsleeps for prevention of fatal accidents, particularly in transport sectors. The framework is based around a pipeline structure with user-replaceable signal processing modules. This structure can encapsulate a wide variety of feature extraction and classification techniques and can be applied to detecting a variety of aspects of cognitive state. Users of the framework can implement signal processing plugins in C++ or Python. The framework also provides a graphical user interface and the ability to save and load data to and from arbitrary file formats. Two small studies are reported which demonstrate the capabilities of the framework in typical applications: monitoring eye closure and detecting simulated microsleeps. While specifically designed for microsleep detection/prediction, the software framework can be just as appropriately applied to (i) other measures of cognitive state and (ii) development of biomedical instruments for multi-modal real-time physiological monitoring and event detection in intensive care, anaesthesiology, cardiology, neurosurgery, etc. The software framework has been made freely available for researchers to use and modify under an open source licence.
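
    The framework's source is not part of this record; the toy Python below only illustrates the pipeline-with-replaceable-modules pattern it describes, with invented placeholder stages standing in for real feature extraction and classification plugins.

        # Toy sketch of a pipeline with user-replaceable signal-processing stages.
        from typing import Callable, Dict, List

        Sample = Dict[str, object]        # e.g. {"eeg": [...], "eye": ..., "head": ...}
        Stage = Callable[[Sample], Sample]

        class Pipeline:
            """Runs each registered stage in order on every incoming sample."""
            def __init__(self, stages: List[Stage]):
                self.stages = stages

            def process(self, sample: Sample) -> Sample:
                for stage in self.stages:
                    sample = stage(sample)
                return sample

        # Placeholder stages standing in for feature extraction and classification.
        def bandpass_filter(sample: Sample) -> Sample:
            sample["eeg_filtered"] = [0.9 * x for x in sample["eeg"]]   # stand-in filter
            return sample

        def classify(sample: Sample) -> Sample:
            sample["microsleep"] = sum(sample["eeg_filtered"]) < 0.0    # stand-in rule
            return sample

        pipeline = Pipeline([bandpass_filter, classify])
        print(pipeline.process({"eeg": [0.2, -0.5, -0.1], "eye": 0.8, "head": (0, 0, 1)}))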

  16. Feature-based Alignment of Volumetric Multi-modal Images

    Science.gov (United States)

    Toews, Matthew; Zöllei, Lilla; Wells, William M.

    2014-01-01

    This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g. MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to the poor repeatability of feature detection in different image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm that iteratively alternates between estimating a feature-based model from feature data and realigning the feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology. PMID:24683955

  17. A multi-modal parcellation of human cerebral cortex.

    Science.gov (United States)

    Glasser, Matthew F; Coalson, Timothy S; Robinson, Emma C; Hacker, Carl D; Harwell, John; Yacoub, Essa; Ugurbil, Kamil; Andersson, Jesper; Beckmann, Christian F; Jenkinson, Mark; Smith, Stephen M; Van Essen, David C

    2016-08-11

    Understanding the amazingly complex human cerebral cortex requires a map (or parcellation) of its major subdivisions, known as cortical areas. Making an accurate areal map has been a century-old objective in neuroscience. Using multi-modal magnetic resonance images from the Human Connectome Project (HCP) and an objective semi-automated neuroanatomical approach, we delineated 180 areas per hemisphere bounded by sharp changes in cortical architecture, function, connectivity, and/or topography in a precisely aligned group average of 210 healthy young adults. We characterized 97 new areas and 83 areas previously reported using post-mortem microscopy or other specialized study-specific approaches. To enable automated delineation and identification of these areas in new HCP subjects and in future studies, we trained a machine-learning classifier to recognize the multi-modal 'fingerprint' of each cortical area. This classifier detected the presence of 96.6% of the cortical areas in new subjects, replicated the group parcellation, and could correctly locate areas in individuals with atypical parcellations. The freely available parcellation and classifier will enable substantially improved neuroanatomical precision for studies of the structural and functional organization of human cerebral cortex and its variation across individuals and in development, aging, and disease.

  18. MINERVA - a multi-modal radiation treatment planning system

    Energy Technology Data Exchange (ETDEWEB)

    Wemple, C.A. E-mail: cew@enel.gov; Wessol, D.E.; Nigg, D.W.; Cogliati, J.J.; Milvich, M.L.; Frederickson, C.; Perkins, M.; Harkin, G.J

    2004-11-01

    Researchers at the Idaho National Engineering and Environmental Laboratory and Montana State University have undertaken development of MINERVA, a patient-centric, multi-modal, radiation treatment planning system. This system can be used for planning and analyzing several radiotherapy modalities, either singly or combined, using common modality independent image and geometry construction and dose reporting and guiding. It employs an integrated, lightweight plugin architecture to accommodate multi-modal treatment planning using standard interface components. The MINERVA design also facilitates the future integration of improved planning technologies. The code is being developed with the Java Virtual Machine for interoperability. A full computation path has been established for molecular targeted radiotherapy treatment planning, with the associated transport plugin developed by researchers at the Lawrence Livermore National Laboratory. Development of the neutron transport plugin module is proceeding rapidly, with completion expected later this year. Future development efforts will include development of deformable registration methods, improved segmentation methods for patient model definition, and three-dimensional visualization of the patient images, geometry, and dose data. Transport and source plugins will be created for additional treatment modalities, including brachytherapy, external beam proton radiotherapy, and the EGSnrc/BEAMnrc codes for external beam photon and electron radiotherapy.

  19. Practical open source software for libraries

    CERN Document Server

    Engard, Nicole

    2010-01-01

    Open source refers to an application whose source code is made available for use or modification as users see fit. This means libraries gain more flexibility and freedom than with software purchased with license restrictions. Both the open source community and the library world live by the same rules and principles. Practical Open Source Software for Libraries explains the facts and dispels myths about open source. Chapters introduce librarians to open source and what it means for libraries. The reader is provided with links to a toolbox full of freely available open source products to use in

  20. When to make proprietary software open source

    NARCIS (Netherlands)

    Caulkins, J.P.; Feichtinger, G.; Grass, D.; Hartl, R.F.; Kort, P.M.; Seidl, A.

    2013-01-01

    Software can be distributed closed source (proprietary) or open source (developed collaboratively). While a firm cannot sell open source software, and so loses potential sales revenue, the open source software development process can have a substantial positive impact on the quality of a software, i

  1. Evolution of open source networks in industry

    NARCIS (Netherlands)

    de Laat, P.B.

    2004-01-01

    The open source software movement has become a threat to corporate software development. In response, companies started to develop products and services related to open source software. Subsequently, they also tried to come to terms with the processes that are characteristic of open source software

  2. Future of Open Source systems

    Directory of Open Access Journals (Sweden)

    Karel Charvát

    2010-02-01

    Full Text Available Software distribution strategies have many aspects and can be analysed by reviewing different facets of a strategy. The focus of this paper is on the licensing aspect, which involves licensing strategy, licensing risks and licensing enforcement costs. Furthermore, in formulating a licensing strategy, the main technical and logistical aspects are also anticipated. The key issues of this paper are the different business models for FOSS software and a SWOT analysis of the usage and development of FOSS software from the point of view of different user groups. This analysis was provided as part of the work of Humboldt IP and collaborative@rural IP. Currently this strategy is an important issue for members of the Czech Centre for Science and Society and the WirelessInfo Living Lab, where models based on dual licensing are a key strategy. Keywords: Open Source, Licensing, FOSS-based business models, SWOT analysis, Knowledge society, Knowledge economy

  3. Multi-modal cockpit interface for improved airport surface operations

    Science.gov (United States)

    Arthur, Jarvis J. (Inventor); Bailey, Randall E. (Inventor); Prinzel, III, Lawrence J. (Inventor); Kramer, Lynda J. (Inventor); Williams, Steven P. (Inventor)

    2010-01-01

    A system for multi-modal cockpit interface during surface operation of an aircraft comprises a head tracking device, a processing element, and a full-color head worn display. The processing element is configured to receive head position information from the head tracking device, to receive current location information of the aircraft, and to render a virtual airport scene corresponding to the head position information and the current aircraft location. The full-color head worn display is configured to receive the virtual airport scene from the processing element and to display the virtual airport scene. The current location information may be received from one of a global positioning system or an inertial navigation system.

  4. Exploring Multi-Modal Distributions with Nested Sampling

    CERN Document Server

    Feroz, F

    2013-01-01

    In performing a Bayesian analysis, two difficult problems often emerge. First, in estimating the parameters of some model for the data, the resulting posterior distribution may be multi-modal or exhibit pronounced (curving) degeneracies. Secondly, in selecting between a set of competing models, calculation of the Bayesian evidence for each model is computationally expensive using existing methods such as thermodynamic integration. Nested Sampling is a Monte Carlo method targeted at the efficient calculation of the evidence, but also produces posterior inferences as a by-product and therefore provides means to carry out parameter estimation as well as model selection. The main challenge in implementing Nested Sampling is to sample from a constrained probability distribution. One possible solution to this problem is provided by the Galilean Monte Carlo (GMC) algorithm. We show results of applying Nested Sampling with GMC to some problems which have proven very difficult for standard Markov Chain Monte Carlo (MC...
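
    As a rough, self-contained illustration of the nested sampling idea summarised above (not the Galilean Monte Carlo scheme the record discusses), the toy below estimates the Bayesian evidence for a one-dimensional Gaussian likelihood under a uniform prior, replacing discarded live points by simple rejection sampling above the current likelihood threshold.

        # Minimal nested-sampling toy: uniform prior on [-5, 5], unit-normal likelihood.
        import numpy as np

        rng = np.random.default_rng(3)

        def loglike(theta):
            return -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)

        n_live, n_iter = 200, 1200
        live = rng.uniform(-5, 5, n_live)
        live_logl = loglike(live)
        log_z, log_x_prev = -np.inf, 0.0

        for i in range(1, n_iter + 1):
            worst = int(np.argmin(live_logl))
            log_x = -i / n_live                               # expected log prior volume
            log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))
            log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
            log_x_prev = log_x
            # Replace the worst live point with a prior draw above its likelihood.
            threshold = live_logl[worst]
            while True:
                candidate = rng.uniform(-5, 5)
                if loglike(candidate) > threshold:
                    break
            live[worst], live_logl[worst] = candidate, loglike(candidate)

        # Add the contribution of the remaining live points, then compare with the
        # analytic evidence: a unit-normal likelihood under a uniform prior on [-5, 5].
        log_z = np.logaddexp(log_z, np.log(np.mean(np.exp(live_logl))) + log_x_prev)
        print("estimated log Z:", log_z, " exact:", np.log(1.0 / 10.0))

    In realistic multi-modal posteriors the rejection step becomes the bottleneck, which is exactly the constrained-sampling problem that methods such as Galilean Monte Carlo are designed to address.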

  5. Discovering Knowledge from Multi-modal Lecture Recordings

    CERN Document Server

    Kannan, Rajkumar

    2010-01-01

    Educational media mining is the process of converting raw media data from educational systems into useful information that can be used to design learning systems, answer research questions and enable personalized learning experiences. Knowledge discovery encompasses a wide range of techniques ranging from database queries to more recent developments in machine learning and language technology. Educational media mining techniques are now being used in IT Services research worldwide. Multi-modal lecture recordings are one of the important types of educational media, and this paper explores the research challenges in mining lecture recordings for efficient personalized learning experiences. Keywords: Educational Media Mining; Lecture Recordings; Multimodal Information Systems; Personalized Learning; Online Course Ware; Skills and Competences

  6. Game of Objects: vicarious causation and multi-modal media

    Directory of Open Access Journals (Sweden)

    Aaron Pedinotti

    2013-09-01

    Full Text Available This paper applies philosopher Graham Harman's object-oriented theory of "vicarious causation" to an analysis of the multi-modal media phenomenon known as "Game of Thrones." Examining the manner in which George R.R. Martin's best-selling series of fantasy novels has been adapted into a board game, a video game, and a hit HBO television series, it uses the changes entailed by these processes to trace the contours of vicariously generative relations. In the course of the resulting analysis, it provides new suggestions concerning the eidetic dimensions of Harman's causal model, particularly with regard to causation in linear networks and in differing types of game systems.

  7. A multi-modal approach to perceptual tone mapping

    Directory of Open Access Journals (Sweden)

    Vicent Caselles

    2013-06-01

    Full Text Available We present an improvement of TSTM, a recently proposed tone mapping operator for High Dynamic Range (HDR) images, based on a multi-modal analysis. One of the key features of TSTM is a suitable implementation of the Naka-Rushton equation that mimics the visual adaptation performed by the human visual system, coherently with the Weber-Fechner law of contrast perception. In the present paper we use a Gaussian Mixture Model (GMM) to detect the modes of the log-scale luminance histogram of a given HDR image, and then we use the information provided by the GMM to devise a suitable Naka-Rushton equation for each mode. Finally, we select the parameters so as to merge those equations into a continuous function. Tests and comparisons showing how this new method improves the performance of TSTM are provided and discussed, along with comparisons with state-of-the-art methods.
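
    The TSTM operator itself is not reproduced here; the sketch below only illustrates the two ingredients named in the abstract, fitting a Gaussian mixture to the log-luminance values to find modes and applying a simple Naka-Rushton-style mapping V = L/(L + sigma) per mode. The per-mode choice of sigma is a placeholder, and the final merging into a single continuous function is omitted.

        # Rough sketch: detect modes of the log-luminance distribution with a GMM and
        # apply a simple Naka-Rushton-style mapping V = L / (L + sigma) per mode,
        # with sigma taken from each mode's mean luminance (placeholder choice).
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(4)
        # Synthetic HDR-like luminance with two exposure "modes" (shadows / highlights).
        luminance = np.concatenate([rng.lognormal(0.0, 0.4, 5000),
                                    rng.lognormal(4.0, 0.4, 5000)])

        log_lum = np.log(luminance).reshape(-1, 1)
        gmm = GaussianMixture(n_components=2, random_state=0).fit(log_lum)
        labels = gmm.predict(log_lum)

        tone_mapped = np.empty_like(luminance)
        for k in range(gmm.n_components):
            mask = labels == k
            sigma = np.exp(gmm.means_[k, 0])          # semi-saturation per mode
            tone_mapped[mask] = luminance[mask] / (luminance[mask] + sigma)

        print("input range :", luminance.min(), luminance.max())
        print("output range:", tone_mapped.min(), tone_mapped.max())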

  8. Wearable Brain Imaging with Multi-Modal Physiological Recording.

    Science.gov (United States)

    Strangman, Gary E; Ivkovic, Vladimir; Zhang, Quan

    2017-07-13

    The brain is a central component of cognitive and physical human performance. Measures including functional brain activation, cerebral perfusion, cerebral oxygenation, evoked electrical responses, and resting hemodynamic and electrical activity are all related to, or can predict, health status or performance decrements. However, measuring brain physiology typically requires large, stationary machines that are not suitable for mobile or self-monitoring. Moreover, when individuals are ambulatory, systemic physiological fluctuations, e.g., in heart rate, blood pressure, skin perfusion and more, can interfere with non-invasive brain measurements. In efforts to address the physiological monitoring and performance assessment needs for astronauts during spaceflight, we have developed easy-to-use, wearable prototypes, NINscan (for near-infrared scanning), that can collect synchronized multi-modal physiology data, including hemodynamic deep-tissue imaging (including brain and muscles), electroencephalography, electrocardiography, electromyography, electrooculography, accelerometry, gyroscopy, pressure, respiration and temperature measurements. Given their self-contained and portable nature, these devices can be deployed in a much broader range of settings, including austere environments, thereby enabling a wider range of novel medical and research physiology applications. We review these, including high-altitude assessments, self-deployable multi-modal (e.g., polysomnographic) recordings in remote or low-resource environments, fluid shifts in variable-gravity or spaceflight analog environments, intra-cranial brain motion during high-impact sports, and long-duration monitoring for clinical symptom capture in various clinical conditions. In addition to further enhancing sensitivity and miniaturization, advanced computational algorithms could help support real-time feedback and alerts regarding performance and health. Copyright © 2017, Journal of Applied Physiology.

  9. Multi-modal myocontrol: Testing combined force- and electromyography.

    Science.gov (United States)

    Nowak, Markus; Eiband, Thomas; Castellini, Claudio

    2017-07-01

    Myocontrol, that is, control of prostheses using bodily signals, has proved over the decades to be a surprisingly hard problem for the scientific community of assistive and rehabilitation robotics. In particular, traditional surface electromyography (sEMG) seems to be no longer enough to guarantee dexterity (i.e., control over several degrees of freedom) and, most importantly, reliability. Multi-modal myocontrol is concerned with the idea of using novel signal gathering techniques as a replacement of, or alongside, sEMG, to provide high-density and diverse signals to improve dexterity and make the control more reliable. In this paper we present an offline and online assessment of multi-modal sEMG and force myography (FMG) targeted at hand and wrist myocontrol. A total of twenty sEMG and FMG sensors were used simultaneously, in several combined configurations, to predict opening/closing of the hand and activation of two degrees of freedom of the wrist of ten intact subjects. The analysis was targeted at determining the optimal sensor combination and control parameters; the experimental results indicate that sEMG sensors alone perform worst, yielding an nRMSE of 9.1%, while mixing FMG and sEMG or using FMG only reduces the nRMSE to 5.2-6.6%. To validate these results, we engaged the subject with median performance in an online goal-reaching task. Analysis of this further experiment reveals that the online behaviour is similar to the offline one.
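
    The study's data and models are not available from this record; on synthetic data, the snippet below merely illustrates how a combined sEMG+FMG feature matrix might be regressed against a control target and scored with a range-normalised RMSE. The sensor counts, the regressor and the synthetic target are all placeholders.

        # Synthetic illustration: regress a control target on combined
        # sEMG + FMG features and report a range-normalised RMSE.
        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(5)
        n, n_semg, n_fmg = 1000, 10, 10                 # placeholder sensor counts
        semg = rng.normal(size=(n, n_semg))
        fmg = rng.normal(size=(n, n_fmg))
        target = (fmg @ rng.normal(size=n_fmg)
                  + 0.3 * semg @ rng.normal(size=n_semg)
                  + 0.1 * rng.normal(size=n))

        def nrmse(features, y):
            X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)
            pred = Ridge(alpha=1.0).fit(X_tr, y_tr).predict(X_te)
            rmse = np.sqrt(np.mean((pred - y_te) ** 2))
            return rmse / (y_te.max() - y_te.min())     # normalise by target range

        print("sEMG only :", nrmse(semg, target))
        print("FMG only  :", nrmse(fmg, target))
        print("sEMG + FMG:", nrmse(np.hstack([semg, fmg]), target))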

  10. The Commercial Open Source Business Model

    Science.gov (United States)

    Riehle, Dirk

    Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: How can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.

  11. Development of Convergence Nanoparticles for Multi-Modal Bio-Medical Imaging

    Science.gov (United States)

    2008-09-18

    Report documentation excerpt. Key researcher: Jinwoo Cheon. Affiliation: Department of Chemistry, Yonsei University. Address: 134 Shinchon... (01-02-2008). Title and subtitle: Development of Convergence Nanoparticles for Multi-Modal Bio-Medical Imaging. Contract number: FA48690714016.

  12. Phenomena of Open Source and Slovenia's Adaption

    Directory of Open Access Journals (Sweden)

    Matej Mertik

    2011-06-01

    Full Text Available Although free/libre open source software (FLOSS) is increasing its presence in the media and in debates among IT professionals, and although even citizens in general are starting to talk about it, FLOSS is still quite an unknown quantity. But this is changing. There is more and more interest in open source software among general users, and its influence on the public and business sectors is undeniable. What is open source? Where did it all start? How is Slovenia using open source? What have we learned? Where are the opportunities for FLOSS? These are the sorts of questions we will try to answer in this document. Slovenia has implemented some of the important steps in introducing open source; however, we are still far behind in terms of open source adoption, and great opportunities exist for its better use in the future.

  13. Open source innovation phenomenon, participant behaviour, impact

    CERN Document Server

    Herstatt, Cornelius

    2015-01-01

    Open Source Innovation (OSI) has gained considerable momentum within the last years. Academic and management practice interest grows as more and more end-users consider and even participate in Open Source product development like Linux, Android, or Wikipedia. Open Source Innovation: Phenomenon, Participant Behaviour, Impact brings together rigorous academic research and business relevance in scrutinizing OSI from three perspectives: the phenomenon, participants' behaviour, and business implications. The first section introduces OSI artefacts, including who is participating and why, and provide

  14. Free for All: Open Source Software

    Science.gov (United States)

    Schneider, Karen

    2008-01-01

    Open source software has become a catchword in libraryland. Yet many remain unclear about open source's benefits--or even what it is. So what is open source software (OSS)? It's software that is free in every sense of the word: free to download, free to use, and free to view or modify. Most OSS is distributed on the Web and one doesn't need to…

  16. Contribution to Asterisk Open Source Project

    OpenAIRE

    González Martín, Sergio

    2009-01-01

    With this final master thesis we are going to contribute to the Asterisk open source project. Asterisk is an open source project that started with the main objective of developing an IP telephony platform, completely based on software (so not hardware dependent) and under an open license like the GPL. This project was started in 1999 by the software engineer Mark Spencer at Digium. The main motivation for that open source project was that the telecommunications sector lacks open solutions, and m...

  17. Deep Learning in Open Source Learning Streams

    DEFF Research Database (Denmark)

    Kjærgaard, Thomas

    2016-01-01

    and contrasted to the notion of functionalistic learning in a digital context. The mechanism that enables deep learning in this context is ‘The Open Source Learning Stream’. ‘The Open Source Learning Stream’ is the notion of sharing ‘learning instances’ in a digital space (discussion board, Facebook group......, unistructural, multistructural or relational learning. The research concludes that ‘The Open Source Learning Stream’ can catalyze deep learning and that there are four types of ‘Open Source Learning streams’; individual/ asynchronous, individual/synchronous, shared/asynchronous and shared...

  18. Open-source hardware for medical devices.

    Science.gov (United States)

    Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold

    2016-04-01

    Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reduced costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device.

  19. Free software and open source databases

    Directory of Open Access Journals (Sweden)

    Napoleon Alexandru SIRITEANU

    2006-01-01

    Full Text Available The emergence of free/open source software (FS/OSS) enterprises seeks to push software development out of the academic stream into the commercial mainstream, and as a result, end-user applications such as open source database management systems (PostgreSQL, MySQL, Firebird) are becoming more popular. Companies like Sybase, Oracle, Sun, IBM are increasingly implementing open source strategies and porting programs/applications into the Linux environment. Open source software is redefining the software industry in general and database development in particular.

  20. Open Source Data Warehousing and Business Intelligence

    CERN Document Server

    Bulusu, Lakshman

    2012-01-01

    Open Source Data Warehousing and Business Intelligence is an all-in-one reference for developing open source based data warehousing (DW) and business intelligence (BI) solutions that are business-centric, cross-customer viable, cross-functional, cross-technology based, and enterprise-wide. Considering the entire lifecycle of an open source DW & BI implementation, its comprehensive coverage spans from basic concepts all the way through to customization. Highlighting the key differences between open source and vendor DW and BI technologies, the book identifies end-to-end solutions that are scala

  1. The origin of human multi-modal communication.

    Science.gov (United States)

    Levinson, Stephen C; Holler, Judith

    2014-09-19

    One reason for the apparent gulf between animal and human communication systems is that the focus has been on the presence or the absence of language as a complex expressive system built on speech. But language normally occurs embedded within an interactional exchange of multi-modal signals. If this larger perspective takes central focus, then it becomes apparent that human communication has a layered structure, where the layers may be plausibly assigned different phylogenetic and evolutionary origins--especially in the light of recent thoughts on the emergence of voluntary breathing and spoken language. This perspective helps us to appreciate the different roles that the different modalities play in human communication, as well as how they function as one integrated system despite their different roles and origins. It also offers possibilities for reconciling the 'gesture-first hypothesis' with that of gesture and speech having evolved together, hand in hand--or hand in mouth, rather--as one system. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  2. Multi-modal vertebrae recognition using Transformed Deep Convolution Network.

    Science.gov (United States)

    Cai, Yunliang; Landis, Mark; Laidley, David T; Kornecki, Anat; Lum, Andrea; Li, Shuo

    2016-07-01

    Automatic vertebra recognition, including the identification of vertebra locations and naming in multiple image modalities, is highly demanded in spinal clinical diagnosis, where large amounts of imaging data from various modalities are frequently and interchangeably used. However, the recognition is challenging due to the variations in MR/CT appearance and in the shape/pose of the vertebrae. In this paper, we propose a method for multi-modal vertebra recognition using a novel deep learning architecture called the Transformed Deep Convolution Network (TDCN). This new architecture can fuse image features from different modalities in an unsupervised manner and automatically rectify the pose of the vertebra. The fusion of MR and CT image features improves the discriminativity of the feature representation and enhances the invariance of the vertebra pattern, which allows us to automatically process images of different contrasts, resolutions and protocols, even with different sizes and orientations. The feature fusion and pose rectification are naturally incorporated in a multi-layer deep learning network. Experimental results show that our method outperforms existing detection methods and provides fully automatic location+naming+pose recognition for routine clinical practice. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Multi-Modal Inference in Animacy Perception for Artificial Object

    Directory of Open Access Journals (Sweden)

    Kohske Takahashi

    2011-10-01

    Full Text Available Sometimes we feel animacy for artificial objects and their motion. Animals usually interact with environments through multiple sensory modalities. Here we investigated how the sensory responsiveness of artificial objects to the environment contributes to animacy judgments about them. In a 90-s trial, observers freely viewed four objects moving in a virtual 3D space. The objects, whose position and motion were determined following Perlin-noise series, kept drifting independently in the space. Visual flashes, auditory bursts, or synchronous flashes and bursts appeared at 1–2 s intervals. The first object abruptly accelerated its motion just after visual flashes, giving the impression of responding to the flash. The second object responded to bursts. The third object responded to synchronous flashes and bursts. The fourth object accelerated at a random timing independent of flashes and bursts. The observers rated how strongly they felt animacy for each object. The results showed that the object responding to the auditory bursts was rated as having weaker animacy compared to the other objects. This implies that the sensory modality through which an object interacts with the environment may be a factor in animacy perception of the object and may serve as the basis of multi-modal and cross-modal inference of animacy.

  4. Intrusion Detection using Open Source Tools

    Directory of Open Access Journals (Sweden)

    Jack TIMOFTE

    2008-01-01

    Full Text Available We have witnessed in the recent years that open source tools have gained popularity among all types of users, from individuals or small businesses to large organizations and enterprises. In this paper we will present three open source IDS tools: OSSEC, Prelude and SNORT.

  5. Usability in open source software development

    DEFF Research Database (Denmark)

    Andreasen, M. S.; Nielsen, H. V.; Schrøder, S. O.

    2006-01-01

    Open Source Software (OSS) development has gained significant importance in the production of software products. Open Source Software developers have produced systems with functionality that is competitive with similar proprietary software developed by commercial software organizations. Yet OSS...

  6. 7 Questions to Ask Open Source Vendors

    Science.gov (United States)

    Raths, David

    2012-01-01

    With their budgets under increasing pressure, many campus IT directors are considering open source projects for the first time. On the face of it, the savings can be significant. Commercial emergency-planning software can cost upward of six figures, for example, whereas the open source Kuali Ready might run as little as $15,000 per year when…

  7. Scalable Open Source Smart Grid Simulator (SGSim)

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg; Quaglia, Davide

    2017-01-01

    This paper presents an open source smart grid simulator (SGSim). The simulator is based on the open source SystemC Network Simulation Library (SCNSL) and aims to model scalable smart grid applications. SGSim has been tested under different smart grid scenarios that contain hundreds of thousands of households...

  8. Een boekje open over Open Source ERP

    NARCIS (Netherlands)

    Sneller, A.C.W.(L.)

    2009-01-01

    There are many ERP systems that are developed with the help of open source. Organizations that want to implement open source ERP face two strategic choices: what about continuity, and who will maintain the system?

  9. 77 FR 20097 - Intent To Prepare an Environmental Impact Statement for the Georgia Multi-Modal Passenger Terminal

    Science.gov (United States)

    2012-04-03

    ... surrounding destinations and districts from one another. The purpose of the Georgia Multi-modal Passenger... environment. 4. Alternatives FTA and GDOT will consider all reasonable alternatives to provide a multi-modal... horizon. The Build alternatives will involve construction of a new multi-modal transit terminal and...

  10. Deformable registration of multi-modal data including rigid structures

    Energy Technology Data Exchange (ETDEWEB)

    Huesman, Ronald H.; Klein, Gregory J.; Kimdon, Joey A.; Kuo, Chaincy; Majumdar, Sharmila

    2003-05-02

    Multi-modality imaging studies are becoming more widely utilized in the analysis of medical data. Anatomical data from CT and MRI are useful for analyzing or further processing functional data from techniques such as PET and SPECT. When data are not acquired simultaneously, even when these data are acquired on a dual-imaging device using the same bed, motion can occur that requires registration between the reconstructed image volumes. As the human torso can undergo non-rigid motion, this type of motion should be estimated and corrected. We report a deformable registration technique that utilizes rigid registration for bony structures, while allowing elastic transformation of soft tissue to more accurately register the entire image volume. The technique is applied to the registration of CT and MR images of the lumbar spine. First, a global rigid registration is performed to approximately align features. Bony structures are then segmented from the CT data using a semi-automated process, and bounding boxes for each vertebra are established. Each CT subvolume is then individually registered to the MRI data using a piece-wise rigid registration algorithm and a mutual information image similarity measure. The resulting set of rigid transformations allows for accurate registration of the parts of the CT and MRI data representing the vertebrae, but not the adjacent soft tissue. To align the soft tissue, a smoothly varying deformation is computed using a thin plate spline (TPS) algorithm. The TPS technique requires a sparse set of landmarks that are to be brought into correspondence. These landmarks are automatically obtained from the segmented data using simple edge-detection techniques and random sampling from the edge candidates. A smoothness parameter is also included in the TPS formulation for characterization of the stiffness of the soft tissue. Estimation of an appropriate stiffness factor is obtained iteratively by using the mutual information cost function on the result
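
    The registration code itself is not included in this record; the fragment below sketches only the final thin-plate-spline step described above, using SciPy's RBFInterpolator with a thin-plate-spline kernel to turn a sparse set of corresponding landmarks into a smooth displacement field. The landmark coordinates are invented, and the smoothing argument loosely plays the role of the stiffness parameter mentioned in the abstract.

        # Sketch of a landmark-driven thin-plate-spline (TPS) deformation in 2D.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # Corresponding landmarks (e.g. edge points sampled from CT and MRI).
        src = np.array([[10, 10], [10, 50], [50, 10], [50, 50], [30, 30]], float)
        dst = src + np.array([[2, 1], [1, -2], [-1, 2], [0, 1], [3, 0]], float)

        # Fit a smooth mapping src -> displacement, then evaluate it on a pixel grid.
        tps = RBFInterpolator(src, dst - src, kernel="thin_plate_spline", smoothing=0.5)

        yy, xx = np.meshgrid(np.arange(0, 64), np.arange(0, 64), indexing="ij")
        grid = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
        displacement = tps(grid).reshape(64, 64, 2)

        print("max displacement magnitude:",
              float(np.linalg.norm(displacement, axis=-1).max()))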

  11. Open3DALIGN: an open-source software aimed at unsupervised ligand alignment

    Science.gov (United States)

    Tosco, Paolo; Balle, Thomas; Shiri, Fereshteh

    2011-08-01

    An open-source, cross-platform software aimed at conformer generation and unsupervised rigid-body molecular alignment is presented. Different algorithms have been implemented to perform single and multi-conformation superimpositions on one or more templates. Alignments can be accomplished by matching pharmacophores, heavy atoms or a combination of the two. All methods have been successfully validated on eight comprehensive datasets previously gathered by Sutherland and co-workers. High computational performance has been attained through efficient parallelization of the code. The unsupervised nature of the alignment algorithms, together with its scriptable interface, make Open3DALIGN an ideal component of high-throughput, automated cheminformatics workflows.

  12. ProteoCloud: a full-featured open source proteomics cloud computing pipeline.

    Science.gov (United States)

    Muth, Thilo; Peters, Julian; Blackburn, Jonathan; Rapp, Erdmann; Martens, Lennart

    2013-08-02

    We here present the ProteoCloud pipeline, a freely available, full-featured cloud-based platform to perform computationally intensive, exhaustive searches in a cloud environment using five different peptide identification algorithms. ProteoCloud is entirely open source, and is built around an easy to use and cross-platform software client with a rich graphical user interface. This client allows full control of the number of cloud instances to initiate and of the spectra to assign for identification. It also enables the user to track progress, and to visualize and interpret the results in detail. Source code, binaries and documentation are all available at http://proteocloud.googlecode.com.

  13. Open Source And New Media Artists

    Directory of Open Access Journals (Sweden)

    Katri Halonen

    2007-01-01

    Full Text Available This paper deals with the open source method practiced within the new media art context. I present a case study on an international festival, PixelACHE 2005, which was organized by and for new media artists and served as a platform for demonstrations of new media projects and as a meeting place for experimental new media artists. In this article I discuss how new media artists adapted the open source ideology. Open source is seen both as a more liberal method of distribution and as an open joint creative process. I was particularly interested in what kind of motives the new media artists had for taking part in the PixelACHE festival and the joint artistic creative process. In my analysis, I found four different groups that have diverse motives for participating in open source art projects. One group contains the key persons who use the open source network as an important reference in their professional image. Members of the second and third group are new media artists who earn their main income in either the public or corporate sector and use open source projects as a learning platform. The fourth group comprises young enthusiasts who are seeking jobs and professional networking opportunities in the open source network.

  14. Multi-Modal Intelligent Traffic Signal Systems Vehicle Trajectories for Roadside Equipment

    Data.gov (United States)

    Department of Transportation — Data were collected during the Multi-Modal Intelligent Transportation Signal Systems (MMITSS) study. MMITSS is a next-generation traffic signal system that seeks to...

  15. A new region descriptor for multi-modal medical image registration and region detection.

    Science.gov (United States)

    Xiaonan Wan; Dongdong Yu; Feng Yang; Caiyun Yang; Chengcai Leng; Min Xu; Jie Tian

    2015-08-01

    Establishing accurate anatomical correspondences plays a critical role in multi-modal medical image registration and region detection. Although many feature-based registration methods have been proposed to detect these correspondences, they are mostly based on point descriptors, which lead to high memory cost and cannot represent local region information. In this paper, we propose a new region descriptor that depicts the features of each region, instead of each point, as a vector. First, the feature attributes of each point are extracted by a Gabor filter bank combined with a gradient filter. Then, the region descriptor is defined as the covariance of the feature attributes of the points inside the region, based on which a cost function is constructed for multi-modal image registration. Finally, our proposed region descriptor is applied to both multi-modal region detection and similarity metric measurement in multi-modal image registration. Experiments demonstrate the feasibility and effectiveness of our proposed region descriptor.
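
    As a generic sketch in the spirit of the descriptor described above (not the authors' exact configuration), the fragment below builds per-pixel feature attributes from a small Gabor filter bank plus image gradients and takes their covariance over the pixels of a region; the filter frequencies and the square region are placeholders.

        # Generic region-covariance sketch: per-pixel feature attributes from a small
        # Gabor filter bank plus image gradients, then the covariance of those
        # attributes over all pixels inside a region serves as the region descriptor.
        import numpy as np
        from skimage.filters import gabor

        def region_descriptor(image, mask, frequencies=(0.1, 0.3)):
            """Covariance matrix of per-pixel features restricted to `mask`."""
            gy, gx = np.gradient(image.astype(float))
            channels = [image.astype(float), gx, gy]
            for f in frequencies:                       # placeholder Gabor bank
                real, imag = gabor(image, frequency=f)
                channels.append(np.hypot(real, imag))   # Gabor magnitude response
            feats = np.stack([c[mask] for c in channels], axis=0)   # (features, pixels)
            return np.cov(feats)

        rng = np.random.default_rng(6)
        img = rng.random((64, 64))
        region = np.zeros_like(img, dtype=bool)
        region[16:48, 16:48] = True                     # a single square region

        descriptor = region_descriptor(img, region)
        print("descriptor shape:", descriptor.shape)    # small symmetric matrix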

  16. Multi-Modal Intelligent Traffic Signal Systems Signal Plans for Roadside Equipment

    Data.gov (United States)

    Department of Transportation — Data were collected during the Multi-Modal Intelligent Transportation Signal Systems (MMITSS) study. MMITSS is a next-generation traffic signal system that seeks to...

  17. Lessons from an Open Source Business

    Directory of Open Access Journals (Sweden)

    Fred Dixon

    2011-04-01

    Full Text Available Creating a successful company is difficult; but creating a successful company, a successful open source project, and a successful ecosystem all at the same time is much more difficult. This article takes a retrospective look at some of the lessons we have learned in building BigBlueButton, an open source web conferencing system for distance education, and in building Blindside Networks, a company following the traditional business model of providing support and services to paying customers. Our main message is that the focus must be on creating a successful open source project first, for without it, no company in the ecosystem can flourish.

  18. Bi-objective optimization for multi-modal transportation routing planning problem based on Pareto optimality

    Directory of Open Access Journals (Sweden)

    Yan Sun

    2015-09-01

    Full Text Available Purpose: The purpose of this study is to solve the multi-modal transportation routing planning problem, which aims to select an optimal route to move a consignment of goods from its origin to its destination through the multi-modal transportation network, with the optimization considered from two viewpoints: cost and time. Design/methodology/approach: In this study, a bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. Minimizing the total transportation cost and the total transportation time are set as the optimization objectives of the model. In order to balance the benefit between the two objectives, Pareto optimality is utilized to solve the model by obtaining its Pareto frontier. The Pareto frontier of the model can provide the multi-modal transportation operator (MTO) and customers with better decision support, and it is obtained by the normalized normal constraint method. Then, an experimental case study is designed to verify the feasibility of the model and of Pareto optimality by using the mathematical programming software Lingo. Finally, a sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case. Findings: The calculation results indicate that the proposed model and Pareto optimality perform well in dealing with the bi-objective optimization. The sensitivity analysis also clearly shows the influence of variations in demand and supply on the multi-modal transportation organization. Therefore, this method can be further promoted in practice. Originality/value: A bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem, and a Pareto frontier based sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case.
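
    The mixed integer model and the normalized normal constraint method are not reproduced in this record; the toy below only illustrates the Pareto idea on a tiny, invented two-leg multi-modal network by enumerating candidate routes and keeping the cost/time pairs that are not dominated.

        # Toy illustration of a cost/time Pareto frontier for multi-modal routes.
        # The candidate routes and their cost/time values are invented placeholders.
        from itertools import product

        # Each leg offers several modes with (cost, time) trade-offs.
        legs = [
            {"rail": (40, 10), "road": (60, 6), "water": (25, 20)},   # origin -> hub
            {"rail": (30, 8),  "road": (45, 5)},                      # hub -> destination
        ]

        routes = []
        for combo in product(*[leg.items() for leg in legs]):
            modes = [mode for mode, _ in combo]
            cost = sum(c for _, (c, t) in combo)
            time = sum(t for _, (c, t) in combo)
            routes.append((modes, cost, time))

        def pareto(candidates):
            """Keep routes not dominated in both total cost and total time."""
            front = []
            for r in candidates:
                if not any(o[1] <= r[1] and o[2] <= r[2] and o != r for o in candidates):
                    front.append(r)
            return sorted(front, key=lambda r: r[1])

        for modes, cost, time in pareto(routes):
            print(f"{'+'.join(modes):12s} cost={cost:3d} time={time:3d}")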

  19. Multi-modal Virtual Scenario Enhances Neurofeedback Learning

    Directory of Open Access Journals (Sweden)

    Avihay Cohen

    2016-08-01

    Full Text Available In the past decade neurofeedback has become the focus of a growing body of research. With real-time fMRI enabling on-line monitoring of emotion-related areas such as the amygdala, many have begun testing its therapeutic benefits. However, most existing neurofeedback procedures still use monotonic uni-modal interfaces, thus possibly limiting user engagement and weakening learning efficiency. The current study tested a novel multi-sensory neurofeedback animated scenario aimed at enhancing user experience and improving learning. We examined whether, relative to a simple uni-modal 2D interface, learning via an interface of a complex multi-modal 3D scenario would result in improved neurofeedback learning. As a neural probe, we used the recently developed fMRI-inspired EEG model of amygdala activity (amygdala-EEG fingerprint; amygdala-EFP), enabling low-cost and mobile limbic neurofeedback training. Amygdala-EFP was reflected in the animated scenario by the unrest level of a hospital waiting room in which virtual characters become impatient, approach the admission desk and complain loudly. Successful down-regulation was reflected as an easing of the room's unrest level. We tested whether, relative to a standard uni-modal 2D graphic thermometer interface, this animated scenario could facilitate more effective learning and improve the training experience. Thirty participants underwent two separate neurofeedback sessions (one week apart), practicing down-regulation of the amygdala-EFP signal. In the first session, half trained via the animated scenario and half via a thermometer interface. Learning efficiency was tested by three parameters: (a) the effect size of the change in amygdala-EFP following training, (b) the sustainability of the learned down-regulation in the absence of online feedback, and (c) the transferability to an unfamiliar context. Comparing amygdala-EFP signal amplitude between the last and the first neurofeedback trials revealed that the animated scenario

  20. Cross-platform digital assessment forms for evaluating surgical skills

    DEFF Research Database (Denmark)

    Andersen, Steven Arild Wuyts

    2015-01-01

    ... assessment situations. In this paper, we report on the development of cross-platform digital assessment forms for use with multiple raters in order to facilitate the automatic processing of surgical skills assessments that include structured ratings. The FileMaker 13 platform was used to create a database containing the digital assessment forms, because this software has cross-platform functionality on both desktop computers and handheld devices. The database is hosted online, and the rating forms can therefore also be accessed through most modern web browsers. Cross-platform digital assessment forms were ... developed for the rating of surgical skills. The database platform used in this study was reasonably priced, intuitive for the user, and flexible. The forms have been provided online as free downloads that may serve as the basis for further development or as inspiration for future efforts. In conclusion...

  1. On the Ambiguity of Commercial Open Source

    Directory of Open Access Journals (Sweden)

    Lucian Luca

    2006-01-01

    Full Text Available Open source and commercial applications used to be two separate worlds. The former was the work of amateurs who had little interest in making a profit, while the latter was only profit oriented and was produced by big companies. Nowadays open source is both a threat and an opportunity to serious businesses of all kinds, generating good profits while delivering low-cost products to customers. The competition between commercial and open source software has impacted the industry and society as a whole. But in recent years, the markets for commercial and open source software have been converging rapidly, and it is interesting to review and discuss the implications of this new paradigm, taking into account arguments for and against it.

  2. Deep Learning in Open Source Learning Streams

    DEFF Research Database (Denmark)

    Kjærgaard, Thomas

    2016-01-01

    This chapter presents research on deep learning in a digital learning environment and raises the question if digital instructional designs can catalyze deeper learning than traditional classroom teaching. As a theoretical point of departure the notion of ‘situated learning’ is utilized...... and contrasted to the notion of functionalistic learning in a digital context. The mechanism that enables deep learning in this context is ‘The Open Source Learning Stream’. ‘The Open Source Learning Stream’ is the notion of sharing ‘learning instances’ in a digital space (discussion board, Facebook group......, unistructural, multistructural or relational learning. The research concludes that ‘The Open Source Learning Stream’ can catalyze deep learning and that there are four types of ‘Open Source Learning streams’; individual/ asynchronous, individual/synchronous, shared/asynchronous and shared...

  3. On Open- source Multi-robot simulators

    CSIR Research Space (South Africa)

    Namoshe, M

    2008-07-01

    Full Text Available Open source software simulators play a major role in robotics design and research as platforms for developing, testing and improving architectures, concepts and algorithms for cooperative/multi-robot systems. Simulation environment enables control...

  4. The Efficient Utilization of Open Source Information

    Energy Technology Data Exchange (ETDEWEB)

    Baty, Samuel R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Intelligence and Systems Analysis

    2016-08-11

    These are a set of slides on the efficient utilization of open source information. Open source information consists of a vast set of information from a variety of sources. Not only does the quantity of open source information pose a problem, the quality of such information can hinder efforts. To show this, two case studies are mentioned: Iran and North Korea, in order to see how open source information can be utilized. The huge breadth and depth of open source information can complicate an analysis, especially because open information has no guarantee of accuracy. Open source information can provide key insights either directly or indirectly: looking at supporting factors (flow of scientists, products and waste from mines, government budgets, etc.); direct factors (statements, tests, deployments). Fundamentally, it is the independent verification of information that allows for a more complete picture to be formed. Overlapping sources allow for more precise bounds on times, weights, temperatures, yields or other issues of interest in order to determine capability. Ultimately, a "good" answer almost never comes from an individual, but rather requires the utilization of a wide range of skill sets held by a team of people.

  5. Weather forecasting with open source software

    Science.gov (United States)

    Rautenhaus, Marc; Dörnbrack, Andreas

    2013-04-01

    To forecast the weather situation during aircraft-based atmospheric field campaigns, we employ a tool chain of existing and self-developed open source software tools and open standards. Of particular value are the Python programming language with its extension libraries NumPy, SciPy, PyQt4, Matplotlib and the basemap toolkit, the NetCDF standard with the Climate and Forecast (CF) Metadata conventions, and the Open Geospatial Consortium Web Map Service standard. These open source libraries and open standards helped to implement the "Mission Support System", a Web Map Service based tool to support weather forecasting and flight planning during field campaigns. The tool has been implemented in Python and has also been released as open source (Rautenhaus et al., Geosci. Model Dev., 5, 55-71, 2012). In this presentation we discuss the usage of free and open source software for weather forecasting in the context of research flight planning, and highlight how the field campaign work benefits from using open source tools and open standards.
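
    Purely as an illustration of the kind of open tool chain described above (and not the Mission Support System itself), the fragment below reads one field from a CF-compliant NetCDF forecast file with the netCDF4 library and plots it with matplotlib; the file name and the variable names are hypothetical and depend on the file at hand.

        # Illustration of the open tool chain described above: read a CF-compliant
        # NetCDF forecast file and plot one 2D field with matplotlib.  The file name
        # and variable names ("lon", "lat", "air_temperature") are hypothetical.
        import matplotlib.pyplot as plt
        import netCDF4

        with netCDF4.Dataset("forecast.nc") as nc:             # hypothetical file
            lon = nc.variables["lon"][:]
            lat = nc.variables["lat"][:]
            temp = nc.variables["air_temperature"][0, :, :]    # first forecast step

        plt.figure(figsize=(8, 4))
        plt.pcolormesh(lon, lat, temp, shading="auto")
        plt.colorbar(label="air temperature (K)")
        plt.title("Forecast field from a CF/NetCDF file")
        plt.xlabel("longitude")
        plt.ylabel("latitude")
        plt.savefig("forecast_temperature.png", dpi=150)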

  6. Making a Business out of Open Source

    CERN Document Server

    CERN. Geneva

    2007-01-01

    Marc Fleury, a physicist by training, retired in his thirties after selling the company JBoss, which made an open-source application server, to Red Hat. He will talk about the various business models of open source software. From leveraging available open source software and casual contributions, to on-ramp models and subscription models, various business models have been explored and function. Not all models work for all software fields and business types. He will review those business models in context and survey "state-of-the-art" economic models for open source software production. Speaker Bio: Marc Fleury is the creator of JBoss, an open-source Java application server. Fleury holds a degree in mathematics and a Doctorate in physics from the École Polytechnique in Paris and a Master in Theoretical Physics from the École Normale. He worked in France for Sun Microsystems before moving to the United States where he has worked on various Java projects. Fleury's research interest focused on middleware, a...

  7. Freeing Worldview's development process: Open source everything!

    Science.gov (United States)

    Gunnoe, T.

    2016-12-01

    Freeing your code and your project are important steps for creating an inviting environment for collaboration, with the added side effect of keeping a good relationship with your users. NASA Worldview's codebase was released with the open source NOSA (NASA Open Source Agreement) license in 2014, but this is only the first step. We also have to free our ideas, empower our users by involving them in the development process, and open channels that lead to the creation of a community project. There are many highly successful examples of Free and Open Source Software (FOSS) projects of which we can take note: the Linux kernel, Debian, GNOME, etc. These projects owe much of their success to having a passionate mix of developers/users with a great community and a common goal in mind. This presentation will describe the scope of this openness and how Worldview plans to move forward with a more community-inclusive approach.

  8. Developing a Successful Open Source Training Model

    Directory of Open Access Journals (Sweden)

    Belinda Lopez

    2010-01-01

    Full Text Available Training programs for open source software provide a tangible, and sellable, product. A successful training program not only builds revenue, it also adds to the overall body of knowledge available for the open source project. By gathering best practices and taking advantage of the collective expertise within a community, it may be possible for a business to partner with an open source project to build a curriculum that promotes the project and supports the needs of the company's training customers. This article describes the initial approach used by Canonical, the commercial sponsor of the Ubuntu Linux operating system, to engage the community in the creation of its training offerings. We then discuss alternate curriculum creation models and some of the conditions that are necessary for successful collaboration between creators of existing documentation and commercial training providers.

  9. Free and Open Source Software for Development

    CERN Document Server

    van Reijswoud, Victor

    2008-01-01

    Development organizations and International Non-Governmental Organizations have been emphasizing the high potential of Free and Open Source Software for the Less Developed Countries. Cost reduction, less vendor dependency and increased potential for local capacity development have been their main arguments. In spite of its advantages, Free and Open Source Software is not widely adopted on the African continent. In this book the authors will explore the grounds on which these expectations are based. Where do they come from and is there evidence to support these expectations? Over the past years several projects have been initiated and some good results have been achieved, but at the same time many challenges were encountered. What lessons can be drawn from these experiences and do these experiences contain enough evidence to support the high expectations? Several projects and their achievements will be considered. In the final part of the book the future of Free and Open Source Software for Development will be ...

  10. Open Source Approach to Project Management Tools

    Directory of Open Access Journals (Sweden)

    Romeo MARGEA

    2011-01-01

    Full Text Available Managing large projects involving different groups of people and complex tasks can be challenging. The solution is to use project management software, which allows more efficient management of projects. However, well-known project management systems can be costly and may require expensive custom servers. Even if free software is not as complex as Microsoft Project, it is noteworthy that not all projects need all the features, amenities and power of such systems. There are free and open source software alternatives that meet the needs of most projects, and that allow Web access from different platforms and locations. A starting stage in adopting OSS in-house is finding and identifying existing open source solutions. In this paper we present an overview of Open Source Project Management Software (OSPMS) based on articles, reviews, books and developers’ web sites, covering those that seem to be the most popular software in this category.

  11. Cost Optimization Through Open Source Software

    Directory of Open Access Journals (Sweden)

    Mark VonFange

    2010-12-01

    Full Text Available The cost of information technology (IT) as a percentage of overall operating and capital expenditures is growing as companies modernize their operations and as IT becomes an increasingly indispensable part of company resources. The price tag associated with IT infrastructure is a heavy one, and, in today's economy, companies need to look for ways to reduce overhead while maintaining quality operations and staying current with technology. With its advancements in availability, usability, functionality, choice, and power, free/libre open source software (F/LOSS) provides a cost-effective means for the modern enterprise to streamline its operations. iXsystems wanted to quantify the benefits associated with the use of open source software at their company headquarters. This article is the outgrowth of our internal analysis of using open source software instead of commercial software in all aspects of company operations.

  12. Open Source in Canada's Public Sector

    Directory of Open Access Journals (Sweden)

    Evan Leibovitch

    2008-03-01

    Full Text Available The story of the growth of open source use in Canada has been far more a matter of evolution than revolution, so quiet in its pace that its progress has been difficult to measure. This has posed many challenges to Canadian open source advocates in their efforts to ensure that their country does not lag behind the rest of the world in understanding the social and business benefits open source provides. Perhaps some of the leading soldiers in the trenches might be our civil servants who protect the public purse. In addition to managing and minimizing the costs of delivering necessary services, public sector projects should also advance the social good through the delicate balance of transparency and efficiency.

  13. Open source bioimage informatics for cell biology.

    Science.gov (United States)

    Swedlow, Jason R; Eliceiri, Kevin W

    2009-11-01

    Significant technical advances in imaging, molecular biology and genomics have fueled a revolution in cell biology, in that the molecular and structural processes of the cell are now visualized and measured routinely. Driving much of this recent development has been the advent of computational tools for the acquisition, visualization, analysis and dissemination of these datasets. These tools collectively make up a new subfield of computational biology called bioimage informatics, which is facilitated by open source approaches. We discuss why open source tools for image informatics in cell biology are needed, some of the key general attributes that make an open source imaging application successful, and point to opportunities for further operability that should greatly accelerate future cell biology discovery.

  14. Web accessibility and open source software.

    Science.gov (United States)

    Obrenović, Zeljko

    2009-07-01

    A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long-term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities, for which existing browsers tend to provide only partial solutions. Although functionality for Web accessibility can be found as open source and free software components, their reuse and integration is complex because they were developed in diverse implementation environments, following standards and conventions incompatible with the Web. To address these problems, we have started several activities that aim at exploiting the potential of open-source software for Web accessibility. The first of these activities is the development of Adaptable Multi-Interface COmmunicator (AMICO):WEB, an infrastructure that facilitates efficient reuse and integration of open source software components into the Web environment. The main contribution of AMICO:WEB is in enabling the syntactic and semantic interoperability between Web extension mechanisms and a variety of integration mechanisms used by open source and free software components. Its design is based on our experiences in solving practical problems where we have used open source components to improve accessibility of rich media Web applications. The second of our activities involves improving education, where we have used our platform to teach students how to build advanced accessibility solutions from diverse open-source software. We are also partially involved in the recently started Eclipse projects called Accessibility Tools Framework (ACTF), the aim of which is development of extensible infrastructure, upon which developers can build a variety of utilities that help to evaluate and enhance the accessibility of applications and content for people with disabilities. In this article we briefly report on these activities.

  15. DEIMOS – an Open Source Image Database

    Directory of Open Access Journals (Sweden)

    M. Blazek

    2011-12-01

    Full Text Available The DEIMOS (DatabasE of Images: Open Source) is created as an open-source database of images and videos for testing, verifying and comparing various image and/or video processing techniques such as enhancement, compression and reconstruction. The main advantage of DEIMOS is its orientation to various application fields – multimedia, television, security, assistive technology, biomedicine, astronomy etc. DEIMOS is being created gradually, step by step, based upon the contributions of team members. The paper describes the basic parameters of the DEIMOS database, including application examples.

  16. Open Source Interoperability: It's More than Technology

    Directory of Open Access Journals (Sweden)

    Dominic Sartorio

    2008-01-01

    Full Text Available The Open Solutions Alliance is a consortium of leading commercial open source vendors, integrators and end users dedicated to the growth of open source based solutions in the enterprise. We believe Linux and other infrastructure software, such as Apache, have become mainstream, and packaged solutions represent the next great growth opportunity. However, some unique challenges can temper that opportunity. These challenges include getting the word out about the maturity and enterprise-readiness of those solutions, ensuring interoperability both with each other and with other proprietary and legacy solutions, and ensuring healthy collaboration between vendors and their respective customer and developer communities.

  17. Low-Rank and Joint Sparse Representations for Multi-Modal Recognition.

    Science.gov (United States)

    Zhang, Heng; Patel, Vishal M; Chellappa, Rama

    2017-10-01

    We propose multi-task and multivariate methods for multi-modal recognition based on low-rank and joint sparse representations. Our formulations can be viewed as generalized versions of multivariate low-rank and sparse regression, where sparse and low-rank representations across all modalities are imposed. One of our methods simultaneously couples information within different modalities by enforcing the common low-rank and joint sparse constraints among multi-modal observations. We also modify our formulations by including an occlusion term that is assumed to be sparse. The alternating direction method of multipliers is proposed to efficiently solve the resulting optimization problems. Extensive experiments on three publicly available multi-modal biometrics and object recognition data sets show that our methods compare favorably with other feature-level fusion methods.

  18. Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-05-01

    A novel multi-modal parameter estimation algorithm is introduced. Parameter estimation is an ill-posed inverse problem that might admit many different solutions. This is attributed to the limited amount of measured data used to constrain the inverse problem. The proposed multi-modal model calibration algorithm uses an iterative stochastic ensemble method (ISEM) for parameter estimation. ISEM employs an ensemble of directional derivatives within a Gauss-Newton iteration for nonlinear parameter estimation. ISEM is augmented with a clustering step based on k-means algorithm to form sub-ensembles. These sub-ensembles are used to explore different parts of the search space. Clusters are updated at regular intervals of the algorithm to allow merging of close clusters approaching the same local minima. Numerical testing demonstrates the potential of the proposed algorithm in dealing with multi-modal nonlinear parameter estimation for subsurface flow models. © 2013 Elsevier B.V.
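
    The clustering step of the algorithm can be illustrated in isolation. The sketch below groups a synthetic ensemble of parameter vectors into sub-ensembles with k-means (scikit-learn); the Gauss-Newton ISEM update itself is not reproduced, and the ensemble size, dimension and number of clusters are illustrative only:

        # Clustering step only: group an ensemble of parameter vectors into
        # sub-ensembles with k-means. The Gauss-Newton ISEM update that would
        # follow for each sub-ensemble is not shown; sizes are illustrative.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        ensemble = rng.normal(size=(100, 20))        # 100 members, 20 parameters (synthetic)

        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(ensemble)
        sub_ensembles = [ensemble[labels == k] for k in range(3)]
        print([len(s) for s in sub_ensembles])
        # Clusters would be re-formed at regular intervals so that sub-ensembles
        # approaching the same local minimum can be merged.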

  19. Predicting the multi-modal binding propensity of small molecules: towards an understanding of drug promiscuity.

    Science.gov (United States)

    Park, Keunwan; Lee, Soyoung; Ahn, Hee-Sung; Kim, Dongsup

    2009-08-01

    Drug promiscuity is one of the key issues in current drug development. Many famous drugs have turned out to behave unexpectedly due to their propensity to bind to multiple targets. One of the primary reasons for this promiscuity is that drugs bind to multiple distinctive target environments, a feature that we call multi-modal binding. Accordingly, investigations into whether multi-modal binding propensities can be predicted, and if so, whether the features determining this behavior can be found, would be an important advance. In this study, we have developed a structure-based classifier that predicts whether small molecules will bind to multiple distinct binding sites. The binding sites for all ligands in the Protein Data Bank (PDB) were clustered by binding site similarity, and the ligands that bind to many dissimilar binding sites were identified as multi-modal binding ligands. The mono-binding ligands were also collected, and the classifiers were built using various machine-learning algorithms. A 10-fold cross-validation procedure showed 70-85% accuracy depending on the choice of machine-learning algorithm, and the different definitions used to identify multi-modal binding ligands. In addition, a quantified importance measurement for global and local descriptors was also provided, which suggests that the local features are more likely to have an effect on multi-modal binding than the global ones. The interpretable global and local descriptors were also ranked by their importance. To test the classifier on real examples, several test sets including well-known promiscuous drugs were collected by a literature and database search. Despite the difficulty in constructing appropriate testable sets, the classifier showed reasonable results that were consistent with existing information on drug behavior. Finally, a test on natural enzyme substrates and artificial drugs suggests that the natural compounds tend to exhibit a broader range of multi-modal binding than the
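
    The evaluation style described (10-fold cross-validation of a binary multi-modal vs. mono-binding classifier) can be sketched as follows; the features and labels here are synthetic placeholders rather than the PDB-derived global/local descriptors used in the study:

        # 10-fold cross-validation of a binary "multi-modal binder" classifier.
        # X and y are synthetic placeholders for the ligand descriptors and labels.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 60))       # global + local descriptors (placeholder)
        y = rng.integers(0, 2, size=500)     # 1 = multi-modal binder, 0 = mono binder

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, X, y, cv=10)
        print("10-fold accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
        # clf.fit(X, y).feature_importances_ would play the role of the descriptor
        # importance ranking discussed in the abstract.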

  20. Open source software migration: Best practices

    CSIR Research Space (South Africa)

    Molefe, Onkgopotse M

    2010-09-01

    Full Text Available Open source software (OSS) has gained prominence worldwide, largely due to cost savings and security considerations. This has caused a change in the IT sector and has led to the migration of desktops from proprietary to OSS. The problem...

  1. Crime analysis using open source information

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Shah, Azhar Ali;

    2015-01-01

    In this paper, we present a method of crime analysis from open source information. We employed un-supervised methods of data mining to explore the facts regarding the crimes of an area of interest. The analysis is based on well known clustering and association techniques. The results show...

  2. Of Birkenstocks and Wingtips: Open Source Licenses

    Science.gov (United States)

    Gandel, Paul B.; Wheeler, Brad

    2005-01-01

    The notion of collaborating to create open source applications for higher education is rapidly gaining momentum. From course management systems to ERP financial systems, higher education institutions are working together to explore whether they can in fact build a better mousetrap. As Lois Brooks, of Stanford University, recently observed, the…

  3. An open-source thermodynamic software library

    DEFF Research Database (Denmark)

    Ritschel, Tobias Kasper Skovborg; Gaspar, Jozsef; Capolei, Andrea

    This is a technical report which accompanies the article “An open-source thermodynamic software library”, which describes an efficient Matlab and C implementation for evaluation of thermodynamic properties. In this technical report we present the model equations, which are also presented in the paper...

  4. Communal Resources in Open Source Software Development

    Science.gov (United States)

    Spaeth, Sebastian; Haefliger, Stefan; von Krogh, Georg; Renzl, Birgit

    2008-01-01

    Introduction: Virtual communities play an important role in innovation. The paper focuses on the particular form of collective action in virtual communities underlying Open Source software development projects. Method: Building on resource mobilization theory and private-collective innovation, we propose a theory of collective action in…

  5. The Spaces of Open-Source Politics

    DEFF Research Database (Denmark)

    Husted, Emil; Plesner, Ursula

    constructed by the Alternative as techniques for practicing open-source politics, and observe that physical and digital spaces create a vacillation between openness and closure. This vacillation produces a dialectic relationship between practices of imagination and affirmation. Curiously, it seems...

  6. The SAMI2 Open Source Project

    Science.gov (United States)

    Huba, J. D.; Joyce, G.

    2001-05-01

    In the past decade, the Open Source Model for software development has gained popularity and has had numerous major achievements: emacs, Linux, the Gimp, and Python, to name a few. The basic idea is to provide the source code of the model or application, a tutorial on its use, and a feedback mechanism with the community so that the model can be tested, improved, and archived. Given the success of the Open Source Model, we believe it may prove valuable in the development of scientific research codes. With this in mind, we are `Open Sourcing' the low to mid-latitude ionospheric model that has recently been developed at the Naval Research Laboratory: SAMI2 (Sami2 is Another Model of the Ionosphere). The model is comprehensive and uses modern numerical techniques. The structure and design of SAMI2 make it relatively easy to understand and modify: the numerical algorithms are simple and direct, and the code is reasonably well-written. Furthermore, SAMI2 is designed to run on personal computers; prohibitive computational resources are not necessary, thereby making the model accessible and usable by virtually all researchers. For these reasons, SAMI2 is an excellent candidate to explore and test the open source modeling paradigm in space physics research. We will discuss various topics associated with this project. Research supported by the Office of Naval Research.

  7. Scribus – Open Source Desktop Publishing

    Directory of Open Access Journals (Sweden)

    Christoph Kaindel

    2010-12-01

    Full Text Available Professional layout programs such as Quark XPress or Adobe InDesign are prohibitively expensive and therefore hardly available legally for private or school use. The open source program Scribus offers a viable alternative here.

  8. CONTENT MANAGEMENT SYSTEMS (CMS) OPEN SOURCE WEBSITES

    Directory of Open Access Journals (Sweden)

    Marinela Lăzărică

    2013-01-01

    Full Text Available Firms need flexible software applications that can adapt to the dynamic changes of the modern business environment; they also need more control over their software costs, security and trust in purchased and implemented software. Moreover, they need to be free of individual software vendors, license costs, etc. The solution to this problem lies in open source applications, and open source technology has proven that it can often provide high-quality software, challenging old models of software development and maintenance. The first content management system was announced in the late 90s. The offer of such software systems is varied and each of them has its own characteristics. This requires a comparative analysis of viable open-source systems in order to choose the one most appropriate for the imposed goals. In this context, the paper illustrates the use of an open source content management system, WordPress, to develop a content site and analyzes its characteristics.

  9. Understanding open source communities: an organizational perspective

    NARCIS (Netherlands)

    Van Wendel de Joode, R.

    2005-01-01

    Open source communities are groups of sometimes hundreds if not thousands of individuals with different interests, backgrounds and motives. Many participants are volunteers, who are not paid to take part in the communities. Furthermore, many never get to meet each other in real life. They meet virtually...

  10. DKIE: Open Source Information Extraction for Danish

    DEFF Research Database (Denmark)

    Derczynski, Leon; Field, Camilla Vilhelmsen; Bøgh, Kenneth Sejdenfaden

    2014-01-01

    Danish is a major Scandinavian language spoken daily by around six million people. However, it lacks a unified, open set of NLP tools. This demonstration will introduce DKIE, an extensible open-source toolkit for processing Danish text. We implement an information extraction architecture for Danish...

  11. Intrinsic Motivation in Open Source Software Development

    DEFF Research Database (Denmark)

    Bitzer, J.; W., Schrettl,; Schröder, Philipp

    2004-01-01

    This papers sheds light on the puzzling evidence that even though open source software (OSS) is a public good, it is developed for free by highly qualified, young and motivated individuals, and evolves at a rapid pace. We show that once OSS development is understood as the private provision...

  12. Crime analysis using open source information

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Shah, Azhar Ali

    2015-01-01

    In this paper, we present a method of crime analysis from open source information. We employed un-supervised methods of data mining to explore the facts regarding the crimes of an area of interest. The analysis is based on well known clustering and association techniques. The results show...

  13. Open-source syringe pump library.

    Science.gov (United States)

    Wijnen, Bas; Hunt, Emily J; Anzalone, Gerald C; Pearce, Joshua M

    2014-01-01

    This article explores a new open-source method for developing and manufacturing high-quality scientific equipment suitable for use in virtually any laboratory. A syringe pump was designed using freely available open-source computer aided design (CAD) software and manufactured using an open-source RepRap 3-D printer and readily available parts. The design, bill of materials and assembly instructions are globally available to anyone wishing to use them. Details are provided covering the use of the CAD software and the RepRap 3-D printer. The use of an open-source Raspberry Pi computer as a wireless control device is also illustrated. Performance of the syringe pump was assessed and the methods used for assessment are detailed. The cost of the entire system, including the controller and web-based control interface, is on the order of 5% or less than one would expect to pay for a commercial syringe pump having similar performance. The design should suit the needs of a given research activity requiring a syringe pump including carefully controlled dosing of reagents, pharmaceuticals, and delivery of viscous 3-D printer media among other applications.
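
    The published design files cover the hardware; the controller's core job is the dosing arithmetic, i.e. turning a requested volume and flow rate into stepper steps and step timing. A back-of-the-envelope sketch, with all constants (steps per revolution, leadscrew pitch, syringe bore) assumed rather than taken from the published design:

        # Dosing arithmetic a syringe-pump controller needs: convert a requested
        # volume and flow rate into stepper steps and step timing. All constants
        # (steps/rev, leadscrew pitch, syringe bore) are hypothetical.
        import math

        STEPS_PER_REV = 200                            # typical 1.8-degree stepper (assumed)
        LEADSCREW_PITCH_MM = 1.25                      # plunger travel per revolution (assumed)
        SYRINGE_AREA_MM2 = math.pi * (14.5 / 2) ** 2   # ~14.5 mm bore, e.g. a 10 mL syringe

        def plan_dose(volume_ul, rate_ul_per_min):
            """Return (total_steps, seconds_between_steps) for a requested dose."""
            travel_mm = volume_ul / SYRINGE_AREA_MM2   # 1 uL = 1 mm^3 of plunger displacement
            steps = max(1, int(round(travel_mm / LEADSCREW_PITCH_MM * STEPS_PER_REV)))
            duration_s = volume_ul / rate_ul_per_min * 60.0
            return steps, duration_s / steps

        print(plan_dose(volume_ul=500, rate_ul_per_min=100))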

  14. Open access and open source in chemistry

    Directory of Open Access Journals (Sweden)

    Todd Matthew H

    2007-02-01

    Full Text Available Scientific data are being generated and shared at ever-increasing rates. Two new mechanisms for doing this have developed: open access publishing and open source research. We discuss both, with recent examples, highlighting the differences between the two, and the strengths of both.

  15. Open-source syringe pump library.

    Directory of Open Access Journals (Sweden)

    Bas Wijnen

    Full Text Available This article explores a new open-source method for developing and manufacturing high-quality scientific equipment suitable for use in virtually any laboratory. A syringe pump was designed using freely available open-source computer aided design (CAD) software and manufactured using an open-source RepRap 3-D printer and readily available parts. The design, bill of materials and assembly instructions are globally available to anyone wishing to use them. Details are provided covering the use of the CAD software and the RepRap 3-D printer. The use of an open-source Raspberry Pi computer as a wireless control device is also illustrated. Performance of the syringe pump was assessed and the methods used for assessment are detailed. The cost of the entire system, including the controller and web-based control interface, is on the order of 5% or less than one would expect to pay for a commercial syringe pump having similar performance. The design should suit the needs of a given research activity requiring a syringe pump including carefully controlled dosing of reagents, pharmaceuticals, and delivery of viscous 3-D printer media among other applications.

  16. Open source OCR framework using mobile devices

    Science.gov (United States)

    Zhou, Steven Zhiying; Gilani, Syed Omer; Winkler, Stefan

    2008-02-01

    Mobile phones have evolved from passive one-to-one communication devices to powerful handheld computing devices. Today most new mobile phones are capable of capturing images, recording video, browsing the internet and much more. Exciting new social applications are emerging on the mobile landscape, such as business card readers, sign detectors and translators. These applications help people quickly gather information in digital format and interpret it without the need to carry laptops or tablet PCs. However, with all these advancements we find very little open source software available for mobile phones. For instance, there are currently many open source OCR engines for the desktop platform but, to our knowledge, none are available on the mobile platform. Keeping this in perspective, we propose a complete text detection and recognition system with speech synthesis ability, using existing desktop technology. In this work we developed a complete OCR framework with subsystems from the open source desktop community. This includes Tesseract, a popular open source OCR engine, for text detection and recognition, and the Flite speech synthesis module for adding text-to-speech ability.
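
    The mobile framework itself is not reproduced here, but the same pipeline (Tesseract OCR followed by speech synthesis) can be sketched on the desktop with the pytesseract and pyttsx3 wrappers; "card.png" is a placeholder image path and pyttsx3 merely stands in for Flite:

        # Desktop analogue of the described pipeline (OCR followed by speech
        # synthesis); this is not the authors' mobile framework. "card.png" is a
        # placeholder path, and pyttsx3 stands in for Flite.
        from PIL import Image
        import pytesseract   # Python wrapper around the Tesseract OCR engine
        import pyttsx3       # offline text-to-speech

        text = pytesseract.image_to_string(Image.open("card.png"))
        print(text)

        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()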

  17. DKIE: Open Source Information Extraction for Danish

    DEFF Research Database (Denmark)

    Derczynski, Leon; Field, Camilla Vilhelmsen; Bøgh, Kenneth Sejdenfaden

    2014-01-01

    Danish is a major Scandinavian language spoken daily by around six million people. However, it lacks a unified, open set of NLP tools. This demonstration will introduce DKIE, an extensible open-source toolkit for processing Danish text. We implement an information extraction architecture for Danish... independently or with the Stanford NLP toolkit.

  18. System level modelling with open source tools

    DEFF Research Database (Denmark)

    Jakobsen, Mikkel Koefoed; Madsen, Jan; Niaki, Seyed Hosein Attarzadeh;

    , called ForSyDe. ForSyDe is available under an open source approach, which allows small and medium enterprises (SMEs) to get easy access to advanced modeling capabilities and tools. We give an introduction to the design methodology through the system level modeling of a simple industrial use case, and we...

  19. Editorial: Humanitarian Open Source (December 2010

    Directory of Open Access Journals (Sweden)

    Leslie Hawthorn

    2010-11-01

    Full Text Available In recent years, our increasingly connected world has provided us with a greater understanding of the needs of our fellow global citizens. The devastating worldwide impact of natural disasters, disease, and poverty has been raised in our collective awareness and our ability to collectively alleviate this suffering has been brought to the fore. While many of us are familiar with donating our funds to better the lives of those less fortunate than ourselves, it is often easy to overlook a core component of facing these global challenges: information technology. The humanitarian open source movement seeks to ameliorate these sufferings through the creation of IT infrastructure to support a wide array of goals for the public good, such as providing effective healthcare or microloans to the poorest of the poor. Achieving these goals requires a sophisticated set of software and hardware tools, all of which work to save and improve lives in some of the most difficult of situations where the availability of electricity, data, IT knowledge, etc. may be low or lacking altogether. It should come as no surprise that the humanitarian open source domain attracts a great deal of attention from software developers, engineers, and others who find that they are able to both solve intense technical challenges while helping to improve the lives of others. However, to support ongoing humanitarian needs, the communities who produce humanitarian free and open source software (HFOSS) and hardware have increasingly identified the need for business models to support their efforts. While the lower cost of using open source software and hardware solutions means that more funds can be directed to aid and comfort those in need, the goodwill of developer communities and the funds of grantees alone cannot grow the ecosystem sufficiently to meet ever-growing global needs. To face these challenges - poverty, global health crises, disaster relief, etc. - humanitarian open source

  20. Predicting the Attitude Flow in Dialogue Based on Multi-Modal Speech Cues

    DEFF Research Database (Denmark)

    Juel Henrichsen, Peter; Allwood, Jens

    2013-01-01

    We present our experiments on attitude detection based on annotated multi-modal dialogue data. Our long-term goal is to establish a computational model able to predict the attitudinal patterns in human-human dialogue. We believe such prediction algorithms are useful tools in the pursuit of reali...

  1. Common and uncommon vascular rings and slings: a multi-modality review

    Energy Technology Data Exchange (ETDEWEB)

    Dillman, Jonathan R.; Agarwal, Prachi P.; Hernandez, Ramiro J.; Strouse, Peter J. [University of Michigan Health System, C.S. Mott Children's Hospital, Department of Radiology, Section of Pediatric Radiology, Ann Arbor, MI (United States); Attili, Anil K. [University of Kentucky College of Medicine, Department of Radiology, Lexington, KY (United States); Dorfman, Adam L. [University of Michigan Health System, C.S. Mott Children's Hospital, Department of Radiology, Section of Pediatric Radiology, Ann Arbor, MI (United States); University of Michigan Health System, C.S. Mott Children's Hospital, Department of Pediatrics and Communicable Diseases, Division of Pediatric Cardiology, Ann Arbor, MI (United States)

    2011-11-15

    Vascular rings and pulmonary slings are congenital anomalies of the aortic arch/great vessels and pulmonary arteries, respectively, that commonly present early during infancy and childhood with respiratory and/or feeding difficulties. The diagnosis of these conditions frequently relies on a multi-modality radiological approach, commonly using some combination of radiography, esophagography, CT angiography and MR angiography. The purpose of this pictorial review is to illustrate the radiological findings of common and uncommon vascular rings and pulmonary slings in children using a state-of-the-art multi-modality imaging approach. (orig.)

  2. A Cross-Platform Tactile Capabilities Interface for Humanoid Robots

    Directory of Open Access Journals (Sweden)

    Jie Ma

    2016-04-01

    Full Text Available This article presents the core elements of a cross-platform tactile capabilities interface (TCI) for humanoid arms. The aim of the interface is to reduce the cost of developing humanoid robot capabilities by supporting reuse through cross-platform deployment. The article presents a comparative analysis of existing robot middleware frameworks, as well as the technical details of the TCI framework that builds on the existing YARP platform. The TCI framework currently includes robot arm actuators with robot skin sensors. It presents such hardware in a platform-independent manner, making it possible to write robot control software that can be executed on different robots through the TCI framework. The TCI framework supports multiple humanoid platforms and this article also presents a case study of a cross-platform implementation of a set of tactile protective withdrawal reflexes that have been realised on both the Nao and iCub humanoid robot platforms using the same high-level source code.
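
    Without reproducing the TCI/YARP API, the cross-platform idea can be illustrated with a small abstract interface that per-robot adapters implement; all class and method names below are invented for illustration:

        # Illustration of the cross-platform idea only; the names are invented and
        # this is not the TCI or YARP API. Control code targets an abstract
        # tactile-arm interface, and per-robot adapters supply the specifics.
        from abc import ABC, abstractmethod

        class TactileArm(ABC):
            @abstractmethod
            def read_skin(self) -> list: ...          # normalized taxel pressures
            @abstractmethod
            def set_joint_velocities(self, dq: list) -> None: ...

        def withdrawal_reflex(arm: TactileArm, threshold: float = 0.6) -> None:
            """Platform-independent reflex: retreat if any taxel exceeds the threshold."""
            if max(arm.read_skin(), default=0.0) > threshold:
                arm.set_joint_velocities([-0.2] * 4)  # simple retreat command

        class NaoArm(TactileArm):                     # one adapter per robot platform
            def read_skin(self): return [0.1, 0.8, 0.2]
            def set_joint_velocities(self, dq): print("Nao joints <-", dq)

        withdrawal_reflex(NaoArm())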

  3. Cross-platform digital assessment forms for evaluating surgical skills

    Directory of Open Access Journals (Sweden)

    Steven Arild Wuyts Andersen

    2015-04-01

    Full Text Available A variety of structured assessment tools for use in surgical training have been reported, but extant assessment tools often employ paper-based rating forms. Digital assessment forms for evaluating surgical skills could potentially offer advantages over paper-based forms, especially in complex assessment situations. In this paper, we report on the development of cross-platform digital assessment forms for use with multiple raters in order to facilitate the automatic processing of surgical skills assessments that include structured ratings. The FileMaker 13 platform was used to create a database containing the digital assessment forms, because this software has cross-platform functionality on both desktop computers and handheld devices. The database is hosted online, and the rating forms can therefore also be accessed through most modern web browsers. Cross-platform digital assessment forms were developed for the rating of surgical skills. The database platform used in this study was reasonably priced, intuitive for the user, and flexible. The forms have been provided online as free downloads that may serve as the basis for further development or as inspiration for future efforts. In conclusion, digital assessment forms can be used for the structured rating of surgical skills and have the potential to be especially useful in complex assessment situations with multiple raters, repeated assessments in various times and locations, and situations requiring substantial subsequent data processing or complex score calculations.
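
    The kind of downstream processing that structured, multi-rater digital forms enable can be sketched with pandas (this is not FileMaker code; trainees, raters and items are invented):

        # Aggregating structured ratings from multiple raters per trainee; the
        # items, raters and scores are invented, and this is not FileMaker code.
        import pandas as pd

        ratings = pd.DataFrame([
            {"trainee": "A", "rater": "R1", "item": "instrument handling", "score": 4},
            {"trainee": "A", "rater": "R2", "item": "instrument handling", "score": 3},
            {"trainee": "A", "rater": "R1", "item": "respect for tissue",  "score": 5},
            {"trainee": "A", "rater": "R2", "item": "respect for tissue",  "score": 4},
        ])

        per_item = ratings.groupby(["trainee", "item"])["score"].mean()   # average across raters
        overall = per_item.groupby(level="trainee").mean()                # overall score per trainee
        print(per_item, overall, sep="\n\n")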

  4. Cross-platform digital assessment forms for evaluating surgical skills.

    Science.gov (United States)

    Andersen, Steven Arild Wuyts

    2015-01-01

    A variety of structured assessment tools for use in surgical training have been reported, but extant assessment tools often employ paper-based rating forms. Digital assessment forms for evaluating surgical skills could potentially offer advantages over paper-based forms, especially in complex assessment situations. In this paper, we report on the development of cross-platform digital assessment forms for use with multiple raters in order to facilitate the automatic processing of surgical skills assessments that include structured ratings. The FileMaker 13 platform was used to create a database containing the digital assessment forms, because this software has cross-platform functionality on both desktop computers and handheld devices. The database is hosted online, and the rating forms can therefore also be accessed through most modern web browsers. Cross-platform digital assessment forms were developed for the rating of surgical skills. The database platform used in this study was reasonably priced, intuitive for the user, and flexible. The forms have been provided online as free downloads that may serve as the basis for further development or as inspiration for future efforts. In conclusion, digital assessment forms can be used for the structured rating of surgical skills and have the potential to be especially useful in complex assessment situations with multiple raters, repeated assessments in various times and locations, and situations requiring substantial subsequent data processing or complex score calculations.

  5. An open-source framework for testing tracking devices using Lego Mindstorms

    Science.gov (United States)

    Jomier, Julien; Ibanez, Luis; Enquobahrie, Andinet; Pace, Danielle; Cleary, Kevin

    2009-02-01

    In this paper, we present an open-source framework for testing tracking devices in surgical navigation applications. At the core of image-guided intervention systems is the tracking interface that handles communication with the tracking device and gathers tracking information. Given that the correctness of tracking information is critical for protecting patient safety and for ensuring the successful execution of an intervention, the tracking software component needs to be thoroughly tested on a regular basis. Furthermore, with widespread use of extreme programming methodology that emphasizes continuous and incremental testing of application components, testing design becomes critical. While it is easy to automate most of the testing process, it is often more difficult to test components that require manual intervention, such as tracking devices. Our framework consists of a robotic arm built from a set of Lego Mindstorms and an open-source toolkit written in C++ to control the robot movements and assess the accuracy of the tracking devices. The application program interface (API) is cross-platform and runs on Windows, Linux and MacOS. We applied this framework to the continuous testing of the Image-Guided Surgery Toolkit (IGSTK), an open-source toolkit for image-guided surgery, and showed that regression testing on tracking devices can be performed at low cost and can significantly improve the quality of the software.
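
    The essence of the accuracy check is a comparison of tracker-reported positions against the known positions commanded to the robotic arm. A sketch of that comparison on synthetic data (not IGSTK code):

        # Compare tracker-reported positions against the known positions commanded
        # to the robotic arm (synthetic data; not IGSTK code).
        import numpy as np

        commanded = np.array([[x, 0.0, 0.0] for x in np.linspace(0, 100, 11)])     # mm
        rng = np.random.default_rng(42)
        reported = commanded + rng.normal(scale=0.3, size=commanded.shape)          # tracker readings

        errors = np.linalg.norm(reported - commanded, axis=1)
        print("mean %.2f mm, RMS %.2f mm, max %.2f mm"
              % (errors.mean(), np.sqrt((errors ** 2).mean()), errors.max()))
        # A regression test would fail the build if the RMS error exceeded a tolerance.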

  6. (In)Flexibility of Constituency in Japanese in Multi-Modal Categorial Grammar with Structured Phonology

    Science.gov (United States)

    Kubota, Yusuke

    2010-01-01

    This dissertation proposes a theory of categorial grammar called Multi-Modal Categorial Grammar with Structured Phonology. The central feature that distinguishes this theory from the majority of contemporary syntactic theories is that it decouples (without completely segregating) two aspects of syntax--hierarchical organization (reflecting…

  7. Manifold-based feature point matching for multi-modal image registration.

    Science.gov (United States)

    Hu, Liang; Wang, Manning; Song, Zhijian

    2013-03-01

    Images captured using different modalities usually have significant variations in their intensities, which makes it difficult to reveal their internal structural similarities and achieve accurate registration. Most conventional feature-based image registration techniques are fast and efficient, but they cannot be used directly for the registration of multi-modal images because of these intensity variations. This paper introduces manifold learning to transform the original multi-modal images into a common mono-modal representation, yielding a feature-based method applicable to multi-modal image registration. Subsequently, the scale-invariant feature transform (SIFT) is used to detect highly distinctive local descriptors and matches between corresponding images, and a point-based registration is executed. The algorithm was tested with T1- and T2-weighted magnetic resonance (MR) images obtained from BrainWeb. Both qualitative and quantitative evaluations of the method were performed and the results compared with those produced previously. The experiments showed that feature point matching after manifold learning achieved more accurate results than did the similarity measure for multi-modal image registration. This study provides a new manifold-based feature point matching method for multi-modal medical image registration, especially for MR images. The proposed method performs better than conventional intensity-based techniques in terms of registration accuracy and is suitable for clinical procedures. Copyright © 2012 John Wiley & Sons, Ltd.
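
    A rough sketch of the first stage only, assuming synthetic patches and scikit-learn's Isomap in place of whatever manifold learner the authors used; the subsequent SIFT detection and point matching are not reproduced:

        # First stage only: embed image patches with a manifold learner to obtain a
        # low-dimensional structural representation that is less tied to raw
        # intensities. Patches are synthetic; SIFT detection and point matching on
        # the transformed images are not reproduced.
        import numpy as np
        from sklearn.manifold import Isomap

        rng = np.random.default_rng(0)
        patches = rng.normal(size=(300, 49))          # 7x7 patches from one modality (synthetic)

        embedding = Isomap(n_neighbors=8, n_components=2).fit_transform(patches)
        print(embedding.shape)                        # (300, 2): one coordinate pair per patch
        # Repeating this for the second modality and re-rendering each image from its
        # embedded coordinates yields the "mono-modal" images that are then matched.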

  8. Information content and analysis methods for multi-modal high-throughput biomedical data.

    Science.gov (United States)

    Ray, Bisakha; Henaff, Mikael; Ma, Sisi; Efstathiadis, Efstratios; Peskin, Eric R; Picone, Marco; Poli, Tito; Aliferis, Constantin F; Statnikov, Alexander

    2014-03-21

    The spectrum of modern molecular high-throughput assaying includes diverse technologies such as microarray gene expression, miRNA expression, proteomics, DNA methylation, among many others. Now that these technologies have matured and become increasingly accessible, the next frontier is to collect "multi-modal" data for the same set of subjects and conduct integrative, multi-level analyses. While multi-modal data does contain distinct biological information that can be useful for answering complex biology questions, its value for predicting clinical phenotypes and contributions of each type of input remain unknown. We obtained 47 datasets/predictive tasks that in total span over 9 data modalities and executed analytic experiments for predicting various clinical phenotypes and outcomes. First, we analyzed each modality separately using uni-modal approaches based on several state-of-the-art supervised classification and feature selection methods. Then, we applied integrative multi-modal classification techniques. We have found that gene expression is the most predictively informative modality. Other modalities such as protein expression, miRNA expression, and DNA methylation also provide highly predictive results, which are often statistically comparable but not superior to gene expression data. Integrative multi-modal analyses generally do not increase predictive signal compared to gene expression data.
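
    The comparison described (uni-modal prediction vs. a simple integrative combination) can be sketched as cross-validated classification on each modality and on a feature concatenation; the data below are synthetic placeholders, not the 47 datasets analyzed in the study:

        # Cross-validated prediction from each modality alone vs. a naive feature
        # concatenation. The data are synthetic placeholders for gene expression,
        # miRNA expression and DNA methylation features.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        y = rng.integers(0, 2, size=150)                              # clinical phenotype
        modalities = {
            "gene expression": rng.normal(size=(150, 300)) + 0.4 * y[:, None],
            "miRNA":           rng.normal(size=(150, 100)) + 0.2 * y[:, None],
            "methylation":     rng.normal(size=(150, 200)),
        }

        clf = LogisticRegression(max_iter=1000)
        for name, X in modalities.items():
            print(name, round(cross_val_score(clf, X, y, cv=5).mean(), 3))
        X_all = np.hstack(list(modalities.values()))                  # naive integration
        print("concatenated", round(cross_val_score(clf, X_all, y, cv=5).mean(), 3))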

  9. A Multi-Modal Active Learning Experience for Teaching Social Categorization

    Science.gov (United States)

    Schwarzmueller, April

    2011-01-01

    This article details a multi-modal active learning experience to help students understand elements of social categorization. Each student in a group dynamics course observed two groups in conflict and identified examples of in-group bias, double-standard thinking, out-group homogeneity bias, law of small numbers, group attribution error, ultimate…

  10. Multi-criteria appraisal of multi-modal urban public transport systems

    NARCIS (Netherlands)

    Keyvan Ekbatani, M.; Cats, O.

    2015-01-01

    This study proposes a multi-criteria decision making (MCDM) modelling framework for the appraisal of multi-modal urban public transportation services. MCDM is commonly used to obtain choice alternatives that satisfy a range of performance indicators. The framework embraces both compensatory and

  11. Conceptual Coherence Revealed in Multi-Modal Representations of Astronomy Knowledge

    Science.gov (United States)

    Blown, Eric; Bryce, Tom G. K.

    2010-01-01

    The astronomy concepts of 345 young people were studied over a 10-year period using a multi-media, multi-modal methodology in a research design where survey participants were interviewed three times and control subjects were interviewed twice. The purpose of the research was to search for evidence to clarify competing theories on "conceptual…

  12. DASC: Robust Dense Descriptor for Multi-Modal and Multi-Spectral Correspondence Estimation.

    Science.gov (United States)

    Kim, Seungryong; Min, Dongbo; Ham, Bumsub; Do, Minh N; Sohn, Kwanghoon

    2017-09-01

    Establishing dense correspondences between multiple images is a fundamental task in many applications. However, finding a reliable correspondence between multi-modal or multi-spectral images still remains unsolved due to their challenging photometric and geometric variations. In this paper, we propose a novel dense descriptor, called dense adaptive self-correlation (DASC), to estimate dense multi-modal and multi-spectral correspondences. Based on an observation that self-similarity existing within images is robust to imaging modality variations, we define the descriptor as a series of adaptive self-correlation similarity measures between patches sampled by randomized receptive field pooling, in which the sampling pattern is obtained using discriminative learning. The computational redundancy of dense descriptors is dramatically reduced by applying fast edge-aware filtering. Furthermore, in order to address geometric variations including scale and rotation, we propose a geometry-invariant DASC (GI-DASC) descriptor that effectively leverages the DASC through a superpixel-based representation. For a quantitative evaluation of the GI-DASC, we build a novel multi-modal benchmark under varying photometric and geometric conditions. Experimental results demonstrate the outstanding performance of the DASC and GI-DASC in many cases of dense multi-modal and multi-spectral correspondences.
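
    A heavily simplified version of the core idea, describing a pixel by correlations between patch pairs sampled in its neighbourhood; unlike the paper, the sampling pattern here is random rather than learned and no edge-aware filtering is applied:

        # Describe a pixel by correlations between pairs of patches sampled in its
        # neighbourhood (self-similarity). The sampling pattern is random here
        # rather than learned, and no edge-aware filtering is applied.
        import numpy as np

        def self_similarity_descriptor(img, y, x, n_pairs=32, radius=10, half=3, seed=0):
            rng = np.random.default_rng(seed)                  # same pattern for every pixel
            offsets = rng.integers(-radius, radius + 1, size=(n_pairs, 2, 2))
            desc = []
            for (dy1, dx1), (dy2, dx2) in offsets:
                p1 = img[y + dy1 - half:y + dy1 + half + 1, x + dx1 - half:x + dx1 + half + 1]
                p2 = img[y + dy2 - half:y + dy2 + half + 1, x + dx2 - half:x + dx2 + half + 1]
                a, b = p1.ravel() - p1.mean(), p2.ravel() - p2.mean()
                desc.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)))
            return np.array(desc)

        img = np.random.default_rng(1).normal(size=(64, 64))
        print(self_similarity_descriptor(img, 32, 32)[:5])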

  13. Dispersive multi-modal mud-roll estimation and removal using feedback-loop approach

    NARCIS (Netherlands)

    Ishiyama, T.; Blacquiere, G.

    2013-01-01

    In a shallow water environment, mud-rolls are often dominant and appear as a prevailing coherent linear noise in OBC seismic data. Their complex properties make the noise removal notably challenging in seismic processing. To address the challenges, we propose a method of dispersive multi-modal

  14. Multi-Modal Clique-Graph Matching for View-Based 3D Model Retrieval.

    Science.gov (United States)

    Liu, An-An; Nie, Wei-Zhi; Gao, Yue; Su, Yu-Ting

    2016-05-01

    Multi-view matching is an important but challenging task in view-based 3D model retrieval. To address this challenge, we propose an original multi-modal clique graph (MCG) matching method in this paper. We systematically present a method for MCG generation that is composed of cliques, which consist of neighbor nodes in multi-modal feature space and hyper-edges that link pairwise cliques. Moreover, we propose an image set-based clique/edgewise similarity measure to address the issue of the set-to-set distance measure, which is the core problem in MCG matching. The proposed MCG provides the following benefits: 1) preserves the local and global attributes of a graph with the designed structure; 2) eliminates redundant and noisy information by strengthening inliers while suppressing outliers; and 3) avoids the difficulty of defining high-order attributes and solving hyper-graph matching. We validate the MCG-based 3D model retrieval using three popular single-modal data sets and one novel multi-modal data set. Extensive experiments show the superiority of the proposed method through comparisons. Moreover, we contribute a novel real-world 3D object data set, the multi-view RGB-D object data set. To the best of our knowledge, it is the largest real-world 3D object data set containing multi-modal and multi-view information.

  15. A Hybrid FPGA/Coarse Parallel Processing Architecture for Multi-modal Visual Feature Descriptors

    DEFF Research Database (Denmark)

    Jensen, Lars Baunegaard With; Kjær-Nielsen, Anders; Alonso, Javier Díaz

    2008-01-01

    This paper describes the hybrid architecture developed for speeding up the processing of so-called multi-modal visual primitives which are sparse image descriptors extracted along contours. In the system, the first stages of visual processing are implemented on FPGAs due to their highly parallel...

  16. A Multi-Modal Active Learning Experience for Teaching Social Categorization

    Science.gov (United States)

    Schwarzmueller, April

    2011-01-01

    This article details a multi-modal active learning experience to help students understand elements of social categorization. Each student in a group dynamics course observed two groups in conflict and identified examples of in-group bias, double-standard thinking, out-group homogeneity bias, law of small numbers, group attribution error, ultimate…

  17. Ultrasmall Biocompatible WO3-x Nanodots for Multi-Modality Imaging and Combined Therapy of Cancers.

    Science.gov (United States)

    Wen, Ling; Chen, Ling; Zheng, Shimin; Zeng, Jianfeng; Duan, Guangxin; Wang, Yong; Wang, Guanglin; Chai, Zhifang; Li, Zhen; Gao, Mingyuan

    2016-07-01

    Ultrasmall biocompatible WO3-x nanodots with an outstanding X-ray radiation sensitization effect are prepared, and demonstrated to be applicable for multi-modality tumor imaging through computed tomography and photoacoustic imaging (PAI), and effective cancer treatment combining both photothermal therapy and radiation therapy.

  18. Multi-modal Discourse Analysis of Peng Liyuan’s Dress

    Institute of Scientific and Technical Information of China (English)

    顾伟红

    2016-01-01

    Traditional discourse analysis basically focuses on language rather than non-linguistic symbol resources in terms of meaning construction. The more recently emerging multi-modal discourse analysis breaks this limitation to a large extent. This paper analyzed Peng Liyuan’s dress with the semiotics of Saussure and the visual grammar of Kress and van Leeuwen as its theoretical framework.

  19. Multi-modal affect induction for affective brain-computer interfaces

    NARCIS (Netherlands)

    Mühl, C.; Broek, E.L. van den; Brouwer, A.M.; Nijboer, F.; Wouwe, N.C. van; Heylen, D.

    2011-01-01

    Reliable applications of affective brain-computer interfaces (aBCI) in realistic, multi-modal environments require a detailed understanding of the processes involved in emotions. To explore the modality-specific nature of affective responses, we studied neurophysiological responses (i.e., EEG) of 24

  20. Multi-modal affect induction for affective brain-computer interfaces

    NARCIS (Netherlands)

    Mühl, C.; Broek, E.L. van den; Brouwer, A.M.; Nijboer, F.; Wouwe, N.C. van; Heylen, D.

    2011-01-01

    Reliable applications of affective brain-computer interfaces (aBCI) in realistic, multi-modal environments require a detailed understanding of the processes involved in emotions. To explore the modality-specific nature of affective responses, we studied neurophysiological responses (i.e., EEG) of 24

  1. Multi-criteria appraisal of multi-modal urban public transport systems

    NARCIS (Netherlands)

    Keyvan Ekbatani, M.; Cats, O.

    2015-01-01

    This study proposes a multi-criteria decision making (MCDM) modelling framework for the appraisal of multi-modal urban public transportation services. MCDM is commonly used to obtain choice alternatives that satisfy a range of performance indicators. The framework embraces both compensatory and non-

  2. Distributed Network Control for Mobile Multi-Modal Wireless Sensor Networks

    Science.gov (United States)

    2010-08-19

    Distributed Network Control for Mobile Multi-Modal Wireless Sensor Networks. Doina Bein, Yicheng Wen, Shashi Phoha, Bharat B. Madan, and Asok Ray.

  3. Multi-Modal Obstacle Detection in Unstructured Environments with Conditional Random Fields

    DEFF Research Database (Denmark)

    Kragh, Mikkel; Underwood, James

    2017-01-01

    explicitly handling sparse point cloud data and exploiting spatial, temporal, and multi-modal links between corresponding 2D and 3D regions. The proposed method is evaluated on a diverse dataset, comprising a dairy paddock and a number of different orchards gathered with a perception research robot...

  4. Deep Learning in Open Source Learning Streams

    DEFF Research Database (Denmark)

    Kjærgaard, Thomas

    2016-01-01

    This chapter presents research on deep learning in a digital learning environment and raises the question if digital instructional designs can catalyze deeper learning than traditional classroom teaching. As a theoretical point of departure, the notion of ‘situated learning’ is utilized and contrasted to the notion of functionalistic learning in a digital context. The mechanism that enables deep learning in this context is ‘The Open Source Learning Stream’. ‘The Open Source Learning Stream’ is the notion of sharing ‘learning instances’ in a digital space (discussion board, Facebook group, Twitter hashtags etc.). The ‘learning instances’ are described as mediated signs of learning expressed through text in the stream that is further developed by peers and by the teacher. The expressions of ‘learning instances’ are analyzed and categorized according to whether they express prestructural...

  5. Usability in open source software development

    DEFF Research Database (Denmark)

    Andreasen, M. S.; Nielsen, H. V.; Schrøder, S. O.

    2006-01-01

    Open Source Software (OSS) development has gained significant importance in the production of software products. Open Source Software developers have produced systems with a functionality that is competitive with similar proprietary software developed by commercial software organizations. Yet OSS is usually designed for and by power-users, and OSS products have been criticized for having little or no emphasis on usability. We have conducted an empirical study of the developers’ opinions about usability and the way usability engineering is practiced in a variety of OSS projects. The study included a questionnaire survey and a series of interviews, where we interviewed OSS contributors with both technical and usability backgrounds. Overall we found that OSS developers are interested in usability, but in practice it is not top priority, and OSS projects rarely employ systematic usability evaluation. Most...

  6. Integrated open source mine workers compensation system

    CSIR Research Space (South Africa)

    Coetzee, L

    2006-08-01

    Full Text Available the solution using open-source technologies. Previously the department had already invested heavily in Oracle as a persistence store. It was therefore decided to continue with Oracle as the back-end store, even... (the Oracle database) was obtained using JDBC. To support collaboration between the distributed users (each with different tasks and requirements) a role-based access control module was developed. Each registered user has various roles assigned...
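
    The role-based access control idea mentioned above can be sketched independently of the department's Oracle/JDBC implementation; users, roles and tasks below are invented:

        # Role-based access control in miniature: each user carries a set of roles
        # and each task declares the roles allowed to perform it (invented names;
        # not the department's Oracle/JDBC implementation).
        ROLE_ASSIGNMENTS = {
            "clerk01":  {"claim_capture"},
            "doctor07": {"medical_review"},
            "admin":    {"claim_capture", "medical_review", "payment_approval"},
        }
        TASK_ROLES = {
            "capture_claim":   {"claim_capture"},
            "approve_payment": {"payment_approval"},
        }

        def can_perform(user, task):
            return bool(ROLE_ASSIGNMENTS.get(user, set()) & TASK_ROLES.get(task, set()))

        print(can_perform("clerk01", "capture_claim"))     # True
        print(can_perform("clerk01", "approve_payment"))   # False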

  7. Computer Forensics Education - the Open Source Approach

    Science.gov (United States)

    Huebner, Ewa; Bem, Derek; Cheung, Hon

    In this chapter we discuss the application of the open source software tools in computer forensics education at tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain in-depth understanding and appreciation of the computer forensic process as opposed to familiarity with one software product, however complex and multi-functional. With the access to all source programs the students become more than just the consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and relevant data structures, and in the process gain necessary background to become the future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that without exception more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic field work, as they gain confidence to use a variety of tools, not just a single product they are familiar with.

  8. Measuring Modularity in Open Source Code Bases

    Directory of Open Access Journals (Sweden)

    Roberto Milev

    2009-03-01

    Full Text Available Modularity of an open source software code base has been associated with growth of the software development community, the incentives for voluntary code contribution, and a reduction in the number of users who take code without contributing back to the community. As a theoretical construct, modularity links OSS to other domains of research, including organization theory, the economics of industry structure, and new product development. However, measuring the modularity of an OSS design has proven difficult, especially for large and complex systems. In this article, we describe some preliminary results of recent research at Carleton University that examines the evolving modularity of large-scale software systems. We describe a measurement method and a new modularity metric for comparing code bases of different size, introduce an open source toolkit that implements this method and metric, and provide an analysis of the evolution of the Apache Tomcat application server as an illustrative example of the insights gained from this approach. Although these results are preliminary, they open the door to further cross-discipline research that quantitatively links the concerns of business managers, entrepreneurs, policy-makers, and open source software developers.
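
    The abstract does not spell out the modularity metric itself, so the sketch below uses a common, simple proxy instead: for each module of a code base, the fraction of its outgoing dependencies that stay inside the module. It only illustrates the kind of measurement such a toolkit automates; it is not the metric from the article.

      # Illustrative modularity proxy, not the metric from the article above:
      # for each module, the fraction of its outgoing dependencies that stay
      # inside the module (higher means more self-contained).
      from collections import defaultdict

      def modularity_ratio(dependencies, module_of):
          """dependencies: iterable of (source_file, target_file) edges.
          module_of: dict mapping each file to its module/package name."""
          per_module = defaultdict(lambda: [0, 0])  # module -> [internal, total]
          for src, dst in dependencies:
              module = module_of[src]
              per_module[module][1] += 1
              if module_of[dst] == module:
                  per_module[module][0] += 1
          return {m: internal / total
                  for m, (internal, total) in per_module.items() if total}

      deps = [("a/x.py", "a/y.py"), ("a/y.py", "b/z.py"), ("b/z.py", "b/w.py")]
      modules = {"a/x.py": "a", "a/y.py": "a", "b/z.py": "b", "b/w.py": "b"}
      print(modularity_ratio(deps, modules))  # {'a': 0.5, 'b': 1.0}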

  9. From open source communications to knowledge

    Science.gov (United States)

    Preece, Alun; Roberts, Colin; Rogers, David; Webberley, Will; Innes, Martin; Braines, Dave

    2016-05-01

    Rapid processing and exploitation of open source information, including social media sources, in order to shorten decision-making cycles, has emerged as an important issue in intelligence analysis in recent years. Through a series of case studies and natural experiments, focussed primarily upon policing and counter-terrorism scenarios, we have developed an approach to information foraging and framing to inform decision making, drawing upon open source intelligence, in particular Twitter, due to its real-time focus and frequent use as a carrier for links to other media. Our work uses a combination of natural language (NL) and controlled natural language (CNL) processing to support information collection from human sensors, linking and schematising of collected information, and the framing of situational pictures. We illustrate the approach through a series of vignettes, highlighting (1) how relatively lightweight and reusable knowledge models (schemas) can rapidly be developed to add context to collected social media data, (2) how information from open sources can be combined with reports from trusted observers, for corroboration or to identify conflicting information; and (3) how the approach supports users operating at or near the tactical edge, to rapidly task information collection and inform decision-making. The approach is supported by bespoke software tools for social media analytics and knowledge management.

  10. Web Server Security on Open Source Environments

    Science.gov (United States)

    Gkoutzelis, Dimitrios X.; Sardis, Manolis S.

    Administering critical resources has never been more difficult than it is today. In a changing world of software innovation where major changes occur on a daily basis, it is crucial for webmasters and server administrators to shield their data against an unknown arsenal of attacks in the hands of their attackers. Up until now this kind of defense was a privilege of the few; for the out-budgeted, low-cost solutions left the defender vulnerable to the rise of ever-innovating attacking methods. Luckily, the digital revolution of the past decade left its mark, changing the way we face security forever: open source infrastructure today covers all the prerequisites for a secure web environment in a way we could never imagine fifteen years ago. Online security of large corporations, military and government bodies is more and more handled by open source applications, thus driving the technological trend of the 21st century in adopting open solutions to E-Commerce and privacy issues. This paper describes substantial security precautions in facing privacy and authentication issues in a totally open source web environment. Our goal is to state and face the most known problems in data handling and consequently propose the most appealing techniques to face these challenges through an open solution.

  11. Use of Multi-Modal Media and Tools in an Online Information Literacy Course: College Students' Attitudes and Perceptions

    Science.gov (United States)

    Chen, Hsin-Liang; Williams, James Patrick

    2009-01-01

    This project studies the use of multi-modal media objects in an online information literacy class. One hundred sixty-two undergraduate students answered seven surveys. Significant relationships are found among computer skills, teaching materials, communication tools and learning experience. Multi-modal media objects and communication tools are…

  12. Open Source GIS based integrated watershed management

    Science.gov (United States)

    Byrne, J. M.; Lindsay, J.; Berg, A. A.

    2013-12-01

    Optimal land and water management to address future and current resource stresses and allocation challenges requires the development of state-of-the-art geomatics and hydrological modelling tools. Future hydrological modelling tools should be high-resolution and process-based, with real-time capability to assess changing resource issues critical to short-, medium- and long-term environmental management. The objective here is to merge two renowned, well-published resource modeling programs to create an open-source toolbox for integrated land and water management applications. This work will facilitate a much increased efficiency in land and water resource security, management and planning. Following an 'open-source' philosophy, the tools will be computer platform independent with source code freely available, maximizing knowledge transfer and the global value of the proposed research. The envisioned set of water resource management tools will be housed within 'Whitebox Geospatial Analysis Tools'. Whitebox is an open-source geographical information system (GIS) developed by Dr. John Lindsay at the University of Guelph. The emphasis of the Whitebox project has been to develop a user-friendly interface for advanced spatial analysis in environmental applications. The plugin architecture of the software is ideal for the tight integration of spatially distributed models and spatial analysis algorithms such as those contained within the GENESYS suite. Open-source development extends knowledge and technology transfer to a broad range of end-users and builds Canadian capability to address complex resource management problems with better tools and expertise for managers in Canada and around the world. GENESYS (Generate Earth Systems Science input) is an innovative, efficient, high-resolution hydro- and agro-meteorological model for complex terrain watersheds developed under the direction of Dr. James Byrne. GENESYS is an outstanding research and applications tool to address

  13. Building Energy Management Open Source Software

    Energy Technology Data Exchange (ETDEWEB)

    2017-06-20

    This is the repository for Building Energy Management Open Source Software (BEMOSS), which is an open source operating system that is engineered to improve sensing and control of equipment in small- and medium-sized commercial buildings. BEMOSS offers the following key features: (1) Open source, open architecture – BEMOSS is an open source operating system that is built upon VOLTTRON – a distributed agent platform developed by Pacific Northwest National Laboratory (PNNL). BEMOSS was designed to make it easy for hardware manufacturers to seamlessly interface their devices with BEMOSS. Software developers can also contribute to adding additional BEMOSS functionalities and applications. (2) Plug & play – BEMOSS was designed to automatically discover supported load controllers (including smart thermostats, VAV/RTUs, lighting load controllers and plug load controllers) in commercial buildings. (3) Interoperability – BEMOSS was designed to work with load control devices from different manufacturers that operate on different communication technologies and data exchange protocols. (4) Cost effectiveness – Implementation of BEMOSS is deemed to be cost-effective as it was built upon a robust open source platform that can operate on a low-cost single-board computer, such as Odroid. This feature could contribute to its rapid deployment in small- or medium-sized commercial buildings. (5) Scalability and ease of deployment – With its multi-node architecture, BEMOSS provides a distributed architecture where load controllers in a multi-floor and high-occupancy building could be monitored and controlled by multiple single-board computers hosting BEMOSS. This makes it possible for a building engineer to deploy BEMOSS in one zone of a building, be comfortable with its operation, and later on expand the deployment to the entire building to make it more energy efficient. (6) Ability to provide local and remote monitoring – BEMOSS provides both local and remote monitoring

  14. Free, cross-platform gRaphical software

    DEFF Research Database (Denmark)

    Dethlefsen, Claus

    2006-01-01

    -recursive graphical models, and models defined using the BUGS language. Today, there exists a wide range of packages to support the analysis of data using graphical models. Here, we focus on Open Source software, making it possible to extend the functionality by integrating these packages into more general tools. We...... will attempt to give an overview of the available Open Source software, with focus on the gR project. This project was launched in 2002 to make facilities in R for graphical modelling. Several R packages have been developed within the gR project both for display and analysis of graphical models...

  16. The Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    Directory of Open Access Journals (Sweden)

    Wojtek James eGoscinski

    2014-03-01

    Full Text Available The Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) is a national imaging and visualisation facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computer tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research.

  17. OpenCFU, a new free and open-source software to count cell colonies and other circular objects.

    Science.gov (United States)

    Geissmann, Quentin

    2013-01-01

    Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net.
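
    OpenCFU itself is a standalone C++ application with its own detection pipeline; purely to illustrate the task it automates (counting roughly circular objects in a plate image), here is a rough OpenCV-based sketch. The threshold, size and circularity limits and the input path are arbitrary placeholders, not OpenCFU's settings.

      # Rough illustration of circular-object counting with OpenCV (v4 API).
      # This is NOT OpenCFU's algorithm; thresholds, size limits and the
      # input path below are arbitrary placeholders.
      import cv2
      import numpy as np

      def count_colonies(path, min_area=30, max_area=5000, min_circularity=0.6):
          gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
          blur = cv2.GaussianBlur(gray, (5, 5), 0)
          # Otsu threshold, inverted so the colonies become foreground blobs.
          _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          count = 0
          for contour in contours:
              area = cv2.contourArea(contour)
              perimeter = cv2.arcLength(contour, True)
              if perimeter == 0 or not (min_area <= area <= max_area):
                  continue
              circularity = 4 * np.pi * area / perimeter ** 2
              if circularity >= min_circularity:  # 1.0 is a perfect circle
                  count += 1
          return count

      print(count_colonies("plate.jpg"))  # "plate.jpg" is a placeholder image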

  18. Professional Cross-Platform Mobile Development in C#

    CERN Document Server

    Olson, Scott; Horgen, Ben; Goers, Kenny

    2012-01-01

    Develop mobile enterprise applications in a language you already know! With employees, rather than the IT department, now driving the decision of which devices to use on the job, many companies are scrambling to integrate enterprise applications. Fortunately, enterprise developers can now create apps for all major mobile devices using C#/.NET and Mono, languages most already know. A team of authors draws on their vast experiences to teach you how to create cross-platform mobile applications, while delivering the same functionality to PC's, laptops and the web from a single technology platform

  19. OpenMS: a flexible open-source software platform for mass spectrometry data analysis.

    Science.gov (United States)

    Röst, Hannes L; Sachsenberg, Timo; Aiche, Stephan; Bielow, Chris; Weisser, Hendrik; Aicheler, Fabian; Andreotti, Sandro; Ehrlich, Hans-Christian; Gutenbrunner, Petra; Kenar, Erhan; Liang, Xiao; Nahnsen, Sven; Nilse, Lars; Pfeuffer, Julianus; Rosenberger, George; Rurik, Marc; Schmitt, Uwe; Veit, Johannes; Walzer, Mathias; Wojnar, David; Wolski, Witold E; Schilling, Oliver; Choudhary, Jyoti S; Malmström, Lars; Aebersold, Ruedi; Reinert, Knut; Kohlbacher, Oliver

    2016-08-30

    High-resolution mass spectrometry (MS) has become an important tool in the life sciences, contributing to the diagnosis and understanding of human diseases, elucidating biomolecular structural information and characterizing cellular signaling networks. However, the rapid growth in the volume and complexity of MS data makes transparent, accurate and reproducible analysis difficult. We present OpenMS 2.0 (http://www.openms.de), a robust, open-source, cross-platform software specifically designed for the flexible and reproducible analysis of high-throughput MS data. The extensible OpenMS software implements common mass spectrometric data processing tasks through a well-defined application programming interface in C++ and Python and through standardized open data formats. OpenMS additionally provides a set of 185 tools and ready-made workflows for common mass spectrometric data processing tasks, which enable users to perform complex quantitative mass spectrometric analyses with ease.
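
    OpenMS exposes its C++ API to Python; a minimal sketch of loading an mzML file through the pyopenms bindings could look like the following. The class and method names (MSExperiment, MzMLFile.load, get_peaks) are recalled from recent pyopenms releases and should be checked against the installed version; the input file name is a placeholder.

      # Minimal sketch of reading an mzML file with the pyopenms bindings.
      # Class/method names (MSExperiment, MzMLFile.load, get_peaks) are
      # recalled from recent pyopenms releases; check them against your
      # installed version. "sample.mzML" is a placeholder file name.
      import pyopenms

      exp = pyopenms.MSExperiment()
      pyopenms.MzMLFile().load("sample.mzML", exp)

      n_ms1 = 0
      for spectrum in exp:
          if spectrum.getMSLevel() == 1:
              n_ms1 += 1
              mz, intensity = spectrum.get_peaks()
              # ... hand the peak arrays to downstream processing here ...

      print(f"{exp.getNrSpectra()} spectra loaded, {n_ms1} of them MS1")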

  20. An Open-Source ITS Platform

    DEFF Research Database (Denmark)

    Andersen, Ove; Torp, Kristian

    2012-01-01

    , the trip-based approach requires more GPS data, and of a higher quality, than the point-based approach. The system has been completely implemented using open-source software and is in production. A detailed performance study, using a desktop PC, shows that the system can handle large data sizes...... and that the performance scales, for some components, linearly with the number of processor cores available. The main conclusion is that a large quantity of GPS data can, with a very limited budget, be used for estimating travel times, if enough GPS data is available....

  1. Open Source Live Distributions for Computer Forensics

    Science.gov (United States)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment) that contains a collection of tools wrapped up into a user friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that drives digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  2. Software development an open source approach

    CERN Document Server

    Tucker, Allen; de Silva, Chamindra

    2011-01-01

    Overview and Motivation Software Free and Open Source Software (FOSS) Two Case Studies Working with a Project Team Key FOSS Activities Client-Oriented vs. Community-Oriented Projects Working on a Client-Oriented Project Joining a Community-Oriented Project Using Project Tools Collaboration Tools Code Management Tools Run-Time System Constraints Software Architecture Architectural Patterns Layers, Cohesion, and Coupling Security Concurrency, Race Conditions, and Deadlocks Working with Code Bad Smells and Metrics Refactoring Testing Debugging Extending the Software for a New Project Developing the D

  3. An Open-Source Based ITS Platform

    DEFF Research Database (Denmark)

    Andersen, Ove; Krogh, Benjamin Bjerre; Torp, Kristian

    2013-01-01

    In this paper, a complete platform used to compute travel times from GPS data is described. Two approaches to computing travel time are proposed: one based on points and one based on trips. Overall, both approaches give reasonable results compared to existing manually estimated travel times. However......, the trip-based approach requires more GPS data, and of a higher quality, than the point-based approach. The platform has been completely implemented using open-source software. The main conclusion is that a large quantity of GPS data can be managed with a limited budget and that GPS data is a good source...
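
    To make the distinction between the two estimates concrete, the toy sketch below computes a point-based travel time (segment length divided by the mean spot speed of individual GPS fixes) and a trip-based travel time (mean duration of trips traversing the whole segment) for one road segment. Map matching and data cleaning are assumed away; this is not the platform's implementation, only the idea behind the two approaches.

      # Toy illustration of point-based vs. trip-based travel-time estimation
      # for one road segment. Map matching and data cleaning are assumed done.
      SEGMENT_LENGTH_M = 500.0

      # (timestamp_seconds, position_metres_along_segment) per vehicle trip
      trips = {
          "veh1": [(0, 0.0), (20, 180.0), (42, 360.0), (60, 500.0)],
          "veh2": [(0, 0.0), (25, 200.0), (70, 500.0)],
      }

      def point_based(trips, length):
          """Length divided by the mean spot speed of consecutive GPS fixes."""
          speeds = []
          for fixes in trips.values():
              for (t0, x0), (t1, x1) in zip(fixes, fixes[1:]):
                  if t1 > t0:
                      speeds.append((x1 - x0) / (t1 - t0))
          return length / (sum(speeds) / len(speeds))

      def trip_based(trips, length):
          """Mean travel time of the trips that traverse the whole segment."""
          times = [fixes[-1][0] - fixes[0][0]
                   for fixes in trips.values()
                   if fixes[0][1] == 0.0 and fixes[-1][1] >= length]
          return sum(times) / len(times)

      print(f"point-based travel time: {point_based(trips, SEGMENT_LENGTH_M):.1f} s")
      print(f"trip-based travel time:  {trip_based(trips, SEGMENT_LENGTH_M):.1f} s")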

  4. Developing Open Source System Expertise in Europe

    DEFF Research Database (Denmark)

    Nyborg, Mads; Gustafsson, Finn; Christensen, Jørgen Erik

    2011-01-01

    are interested in knowing which factors play a role in information systems and what the similarities and differences between the various national approaches in open source software systems and techniques are. The event forms a unique opportunity in promoting active learning in an international environment....... Students get experience working in teams across country boundaries. In the paper we will describe the structure and our experiences from participating in this IP with relation to the CDIO initiative. Finally we draw conclusions and give our recommendations based on those....

  5. An Open-Source Microscopic Traffic Simulator

    CERN Document Server

    Treiber, Martin; 10.1109/MITS.2010.939208

    2010-01-01

    We present the interactive Java-based open-source traffic simulator available at www.traffic-simulation.de. In contrast to most closed-source commercial simulators, the focus is on investigating fundamental issues of traffic dynamics rather than simulating specific road networks. This includes testing theories for the spatiotemporal evolution of traffic jams, comparing and testing different microscopic traffic models, modeling the effects of driving styles and traffic rules on the efficiency and stability of traffic flow, and investigating novel ITS technologies such as adaptive cruise control, inter-vehicle and vehicle-infrastructure communication.
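
    The simulator investigates microscopic car-following models; the Intelligent Driver Model (IDM), developed by the first author, is a representative example. As a standalone illustration (not code taken from the simulator), the IDM acceleration equation can be written as follows; the parameter values are typical textbook defaults.

      # Standalone sketch of the Intelligent Driver Model (IDM) acceleration
      # function; parameter values are typical textbook defaults.
      import math

      def idm_acceleration(v, gap, dv,
                           v0=33.3,    # desired speed [m/s]
                           T=1.5,      # desired time headway [s]
                           a_max=1.0,  # maximum acceleration [m/s^2]
                           b=2.0,      # comfortable deceleration [m/s^2]
                           s0=2.0,     # minimum gap [m]
                           delta=4):
          """v: own speed, gap: gap to the leader, dv: own speed minus leader speed."""
          s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
          return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

      # Free road (huge gap): gentle acceleration; closing in on a slower leader
      # at short gap: strong braking.
      print(round(idm_acceleration(v=25.0, gap=1e6, dv=0.0), 3))
      print(round(idm_acceleration(v=25.0, gap=20.0, dv=5.0), 3))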

  6. Google and Open Source (Keynote Talk)

    OpenAIRE

    DiBona, Chris

    2015-01-01

    In this talk Chris DiBona will review Google's use of open source projects and the history of prominent releases like Android, Chromium, Angular.js and some 3500 other projects (though not all of them will be surveyed!). Keeping such releases on track and efficient and, in some cases, retiring them has been his focus since he started at the company. He'll review the various ways Google supports the worldwide community of software developers that it has derived so much value from. Specifically...

  7. New Open Source E-Learning Book

    Directory of Open Access Journals (Sweden)

    TOJDE

    2004-04-01

    Full Text Available We would like to promote and introduce a new book that Terry Anderson and Fathi Elloumi have edited from Athabasca University. It is entitled Theory and Practice of Online Learning. The book is licensed for educational and non-commercial use, download and printing under a Creative Commons license. We have released it as open source so that it can be more easily accessed by professionals and hopefully used in coursework by learners throughout the world. During the two weeks since its release in mid-February, over 4300 individual downloads were made of the whole book, in addition to many for individual chapters.

  8. DStat: A Versatile, Open-Source Potentiostat for Electroanalysis and Integration.

    Science.gov (United States)

    Dryden, Michael D M; Wheeler, Aaron R

    2015-01-01

    Most electroanalytical techniques require the precise control of the potentials in an electrochemical cell using a potentiostat. Commercial potentiostats function as "black boxes," giving limited information about their circuitry and behaviour which can make development of new measurement techniques and integration with other instruments challenging. Recently, a number of lab-built potentiostats have emerged with various design goals including low manufacturing cost and field-portability, but notably lacking is an accessible potentiostat designed for general lab use, focusing on measurement quality combined with ease of use and versatility. To fill this gap, we introduce DStat (http://microfluidics.utoronto.ca/dstat), an open-source, general-purpose potentiostat for use alone or integrated with other instruments. DStat offers picoampere current measurement capabilities, a compact USB-powered design, and user-friendly cross-platform software. DStat is easy and inexpensive to build, may be modified freely, and achieves good performance at low current levels not accessible to other lab-built instruments. In head-to-head tests, DStat's voltammetric measurements are much more sensitive than those of "CheapStat" (a popular open-source potentiostat described previously), and are comparable to those of a compact commercial "black box" potentiostat. Likewise, in head-to-head tests, DStat's potentiometric precision is similar to that of a commercial pH meter. Most importantly, the versatility of DStat was demonstrated through integration with the open-source DropBot digital microfluidics platform. In sum, we propose that DStat is a valuable contribution to the "open source" movement in analytical science, which is allowing users to adapt their tools to their experiments rather than alter their experiments to be compatible with their tools.

  9. DStat: A Versatile, Open-Source Potentiostat for Electroanalysis and Integration.

    Directory of Open Access Journals (Sweden)

    Michael D M Dryden

    Full Text Available Most electroanalytical techniques require the precise control of the potentials in an electrochemical cell using a potentiostat. Commercial potentiostats function as "black boxes," giving limited information about their circuitry and behaviour which can make development of new measurement techniques and integration with other instruments challenging. Recently, a number of lab-built potentiostats have emerged with various design goals including low manufacturing cost and field-portability, but notably lacking is an accessible potentiostat designed for general lab use, focusing on measurement quality combined with ease of use and versatility. To fill this gap, we introduce DStat (http://microfluidics.utoronto.ca/dstat), an open-source, general-purpose potentiostat for use alone or integrated with other instruments. DStat offers picoampere current measurement capabilities, a compact USB-powered design, and user-friendly cross-platform software. DStat is easy and inexpensive to build, may be modified freely, and achieves good performance at low current levels not accessible to other lab-built instruments. In head-to-head tests, DStat's voltammetric measurements are much more sensitive than those of "CheapStat" (a popular open-source potentiostat described previously), and are comparable to those of a compact commercial "black box" potentiostat. Likewise, in head-to-head tests, DStat's potentiometric precision is similar to that of a commercial pH meter. Most importantly, the versatility of DStat was demonstrated through integration with the open-source DropBot digital microfluidics platform. In sum, we propose that DStat is a valuable contribution to the "open source" movement in analytical science, which is allowing users to adapt their tools to their experiments rather than alter their experiments to be compatible with their tools.

  10. Open Source CRM Systems for SMEs

    CERN Document Server

    Tereso, Marco

    2011-01-01

    Customer Relationship Management (CRM) systems are very common in large companies. However, CRM systems are not very common in Small and Medium Enterprises (SMEs). Most SMEs do not implement CRM systems due to several reasons, such as lack of knowledge about CRM or lack of financial resources to implement CRM systems. SMEs have to start incorporating Information Systems (IS) technology into their business operations in order to improve business value and gain more competitive advantage over rivals. A CRM system has the potential to help improve the business value and competitive capabilities of SMEs. Given the high fixed costs of the normal activity of companies, we intend to promote free and viable solutions for small and medium businesses. In this paper, we explain the reasons why SMEs do not implement CRM systems and the benefits of using open source CRM systems in SMEs. We also describe the functionalities of top open source CRM systems, examining the applicability of these tools in fitting the needs of SMEs.

  11. Open source software to control Bioflo bioreactors.

    Directory of Open Access Journals (Sweden)

    David A Burdge

    Full Text Available Bioreactors are designed to support highly controlled environments for growth of tissues, cell cultures or microbial cultures. A variety of bioreactors are commercially available, often including sophisticated software to enhance the functionality of the bioreactor. However, experiments that the bioreactor hardware can support, but that were not envisioned during the software design cannot be performed without developing custom software. In addition, support for third party or custom designed auxiliary hardware is often sparse or absent. This work presents flexible open source freeware for the control of bioreactors of the Bioflo product family. The functionality of the software includes setpoint control, data logging, and protocol execution. Auxiliary hardware can be easily integrated and controlled through an integrated plugin interface without altering existing software. Simple experimental protocols can be entered as a CSV scripting file, and a Python-based protocol execution model is included for more demanding conditional experimental control. The software was designed to be a more flexible and free open source alternative to the commercially available solution. The source code and various auxiliary hardware plugins are publicly available for download from https://github.com/LibourelLab/BiofloSoftware. In addition to the source code, the software was compiled and packaged as a self-installing file for 32 and 64 bit windows operating systems. The compiled software will be able to control a Bioflo system, and will not require the installation of LabVIEW.
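
    The abstract notes that simple protocols can be entered as a CSV scripting file. The sketch below shows what a minimal executor for such a file could look like; the column layout (time_s, parameter, setpoint) and the apply_setpoint stub are invented for illustration only, since the actual format and hardware interface are defined by the BiofloSoftware repository.

      # Hypothetical sketch of a CSV-driven setpoint protocol executor. The
      # real BiofloSoftware defines its own CSV format and hardware interface;
      # the column layout (time_s, parameter, setpoint) and apply_setpoint()
      # below are invented for illustration only.
      import csv
      import time

      def apply_setpoint(parameter, value):
          # Placeholder for the call that would actually talk to the bioreactor.
          print(f"set {parameter} -> {value}")

      def run_protocol(path):
          with open(path, newline="") as f:
              steps = sorted(csv.DictReader(f), key=lambda row: float(row["time_s"]))
          start = time.monotonic()
          for step in steps:
              # Wait until the scheduled time for this step, then apply it.
              delay = float(step["time_s"]) - (time.monotonic() - start)
              if delay > 0:
                  time.sleep(delay)
              apply_setpoint(step["parameter"], float(step["setpoint"]))

      # Example protocol.csv:
      #   time_s,parameter,setpoint
      #   0,agitation_rpm,200
      #   3600,temperature_c,30
      run_protocol("protocol.csv")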

  12. Open source software to control Bioflo bioreactors.

    Science.gov (United States)

    Burdge, David A; Libourel, Igor G L

    2014-01-01

    Bioreactors are designed to support highly controlled environments for growth of tissues, cell cultures or microbial cultures. A variety of bioreactors are commercially available, often including sophisticated software to enhance the functionality of the bioreactor. However, experiments that the bioreactor hardware can support, but that were not envisioned during the software design cannot be performed without developing custom software. In addition, support for third party or custom designed auxiliary hardware is often sparse or absent. This work presents flexible open source freeware for the control of bioreactors of the Bioflo product family. The functionality of the software includes setpoint control, data logging, and protocol execution. Auxiliary hardware can be easily integrated and controlled through an integrated plugin interface without altering existing software. Simple experimental protocols can be entered as a CSV scripting file, and a Python-based protocol execution model is included for more demanding conditional experimental control. The software was designed to be a more flexible and free open source alternative to the commercially available solution. The source code and various auxiliary hardware plugins are publicly available for download from https://github.com/LibourelLab/BiofloSoftware. In addition to the source code, the software was compiled and packaged as a self-installing file for 32 and 64 bit windows operating systems. The compiled software will be able to control a Bioflo system, and will not require the installation of LabVIEW.

  13. The open-source neuroimaging research enterprise.

    Science.gov (United States)

    Marcus, Daniel S; Archie, Kevin A; Olsen, Timothy R; Ramaratnam, Mohana

    2007-11-01

    While brain imaging in the clinical setting is largely a practice of looking at images, research neuroimaging is a quantitative and integrative enterprise. Images are run through complex batteries of processing and analysis routines to generate numeric measures of brain characteristics. Other measures potentially related to brain function - demographics, genetics, behavioral tests, neuropsychological tests - are key components of most research studies. The canonical scanner - PACS - viewing station axis used in clinical practice is therefore inadequate for supporting neuroimaging research. Here, we model the neuroimaging research enterprise as a workflow. The principal components of the workflow include data acquisition, data archiving, data processing and analysis, and data utilization. We also describe a set of open-source applications to support each step of the workflow and the transitions between these steps. These applications include DIGITAL IMAGING AND COMMUNICATIONS IN MEDICINE viewing and storage tools, the EXTENSIBLE NEUROIMAGING ARCHIVE TOOLKIT data archiving and exploration platform, and an engine for running processing/analysis pipelines. The overall picture presented is aimed to motivate open-source developers to identify key integration and communication points for interoperating with complementary applications.

  14. Repositorios digitales y software open source [Digital repositories and open source software]

    Directory of Open Access Journals (Sweden)

    Doria, María Vanesa

    2015-06-01

    Full Text Available Universities today are in constant evolution as a result of the transformation generated by the information and knowledge society, in which Information and Communication Technologies (ICT) play a cross-cutting role, aiming to broaden access to information and knowledge through their most prominent digital tool, the Internet. Universities feed back their knowledge and information through scientific and academic production, and to promote access to it many universities are embracing the Open Access (OA) movement, following the green route by developing digital repositories (DR). Creating digital repositories requires analysing the available open source software, since these are the tools that facilitate their automation. The present study focuses on the analysis of the open source software currently available on the market.

  15. Distributed Detection in Sensor Networks with Limited Range Multi-Modal Sensors

    CERN Document Server

    Ermis, E

    2008-01-01

    We consider a multi-object detection problem over a sensor network (SNET) with limited range multi-modal sensors. Limited range sensing environment arises in a sensing field prone to signal attenuation and path losses. The general problem complements the widely considered decentralized detection problem where all sensors observe the same object. In this paper we develop a distributed detection approach based on recent development of the false discovery rate (FDR) and the associated BH test procedure. The BH procedure is based on rank ordering of scalar test statistics. We first develop scalar test statistics for multidimensional data to handle multi-modal sensor observations and establish its optimality in terms of the BH procedure. We then propose a distributed algorithm in the ideal case of infinite attenuation for identification of sensors that are in the immediate vicinity of an object. We demonstrate communication message scalability to large SNETs by showing that the upper bound on the communication mes...
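
    The distributed algorithm builds on the Benjamini-Hochberg (BH) step-up procedure for false discovery rate control. For reference, the standard centralized BH procedure on scalar p-values is sketched below; the paper's distributed, multi-modal variant is not reproduced here.

      # Centralized Benjamini-Hochberg (BH) step-up procedure for FDR control.
      # The paper's distributed, multi-modal variant is not reproduced here;
      # this is only the standard procedure it builds on.
      import numpy as np

      def benjamini_hochberg(p_values, alpha=0.05):
          """Return a boolean array marking which hypotheses are rejected."""
          p = np.asarray(p_values, dtype=float)
          m = p.size
          order = np.argsort(p)
          thresholds = alpha * np.arange(1, m + 1) / m
          below = p[order] <= thresholds
          rejected = np.zeros(m, dtype=bool)
          if below.any():
              k = np.max(np.nonzero(below)[0])  # largest rank meeting its threshold
              rejected[order[:k + 1]] = True
          return rejected

      p_vals = [0.001, 0.008, 0.039, 0.041, 0.30, 0.74]
      print(benjamini_hochberg(p_vals, alpha=0.05))  # first two hypotheses rejected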

  16. Integration of Multi-Modal Biomedical Data to Predict Cancer Grade and Patient Survival.

    Science.gov (United States)

    Phan, John H; Hoffman, Ryan; Kothari, Sonal; Wu, Po-Yen; Wang, May D

    2016-02-01

    The Big Data era in Biomedical research has resulted in large-cohort data repositories such as The Cancer Genome Atlas (TCGA). These repositories routinely contain hundreds of matched patient samples for genomic, proteomic, imaging, and clinical data modalities, enabling holistic and multi-modal integrative analysis of human disease. Using TCGA renal and ovarian cancer data, we conducted a novel investigation of multi-modal data integration by combining histopathological image and RNA-seq data. We compared the performances of two integrative prediction methods: majority vote and stacked generalization. Results indicate that integration of multiple data modalities improves prediction of cancer grade and outcome. Specifically, stacked generalization, a method that integrates multiple data modalities to produce a single prediction result, outperforms both single-data-modality prediction and majority vote. Moreover, stacked generalization reveals the contribution of each data modality (and specific features within each data modality) to the final prediction result and may provide biological insights to explain prediction performance.
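
    As an illustration of stacked generalization across two modalities, the sketch below trains one base classifier per modality, feeds their out-of-fold predicted probabilities to a logistic-regression meta-learner, and reads the meta-learner's weights as a hint of each modality's contribution. The learners and the synthetic data are placeholders, not the models or data used in the study.

      # Sketch of stacked generalization across two data modalities (e.g.
      # image features and RNA-seq features). Learners and data are placeholders.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(0)
      n = 200
      X_image = rng.normal(size=(n, 50))
      X_rnaseq = rng.normal(size=(n, 100))
      # Synthetic label influenced by both modalities (image a bit more).
      y = (X_image[:, 0] + 0.5 * X_rnaseq[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)

      base_image = RandomForestClassifier(n_estimators=200, random_state=0)
      base_rnaseq = RandomForestClassifier(n_estimators=200, random_state=0)

      # Out-of-fold predicted probabilities from each modality become the
      # meta-learner's inputs, which avoids leakage from the training folds.
      p_image = cross_val_predict(base_image, X_image, y, cv=5, method="predict_proba")[:, 1]
      p_rnaseq = cross_val_predict(base_rnaseq, X_rnaseq, y, cv=5, method="predict_proba")[:, 1]

      meta = LogisticRegression().fit(np.column_stack([p_image, p_rnaseq]), y)
      print("per-modality weights in the meta-learner:", meta.coef_.ravel())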

  17. Design and experimental study of a multi-modal piezoelectric energy harvester

    Energy Technology Data Exchange (ETDEWEB)

    Xiong, Xing Yu [School of Energy, Power and Mechanical Engineering, North China Electric Power University, Beijing (China); Oyadiji, S. Olutunde [School of Mechanical, Aerospace and Civil Engineering, The University of Manchester, Manchester (United States)

    2017-01-15

    A multi-modal piezoelectric vibration energy harvester is designed in this article. It consists of a cantilevered base beam and several upper and lower layer beams, with rigid masses bonded between the beams as spacers. For a four-layer harvester subjected to random base excitations, relocating the mass positions leads to the generation of up to four close resonance frequencies over the frequency range from 10 Hz to 100 Hz with relatively large power output. The harvesters are connected with a resistance decade box, and the frequency response functions of the voltage and power on resistive loads are determined. The experimental results are validated against simulation results obtained using the finite element method. At a given level of power output, the experimental results show that the multi-modal harvesters can generate a frequency band that is more than two times greater than the frequency band produced by a cantilevered beam harvester.

  18. Outcome of transarterial chemoembolization-based multi-modal treatment in patients with unresectable hepatocellular carcinoma.

    Science.gov (United States)

    Song, Do Seon; Nam, Soon Woo; Bae, Si Hyun; Kim, Jin Dong; Jang, Jeong Won; Song, Myeong Jun; Lee, Sung Won; Kim, Hee Yeon; Lee, Young Joon; Chun, Ho Jong; You, Young Kyoung; Choi, Jong Young; Yoon, Seung Kew

    2015-02-28

    To investigate the efficacy and safety of transarterial chemoembolization (TACE)-based multi-modal treatment in patients with large hepatocellular carcinoma (HCC), a total of 146 consecutive patients were included in the analysis, and their medical records and radiological data were reviewed retrospectively. In total, 119 patients received TACE-based multi-modal treatments, and the remaining 27 received conservative management. Overall survival was better not only in the multi-modal treatment group compared with the TACE-only group (P=0.002) but also in the surgical treatment group compared with the loco-regional treatment-only group, and multi-modal treatment (P=0.002) was among the independent post-treatment prognostic factors identified. TACE-based multi-modal treatments were safe and more beneficial than conservative management. Salvage surgery after successful downstaging resulted in long-term survival in patients with large, unresectable HCC.

  19. A flexible graphical model for multi-modal parcellation of the cortex.

    Science.gov (United States)

    Parisot, Sarah; Glocker, Ben; Ktena, Sofia Ira; Arslan, Salim; Schirmer, Markus D; Rueckert, Daniel

    2017-09-06

    Advances in neuroimaging have provided a tremendous amount of in-vivo information on the brain's organisation. Its anatomy and cortical organisation can be investigated from the point of view of several imaging modalities, many of which have been studied for mapping functionally specialised cortical areas. There is strong evidence that a single modality is not sufficient to fully identify the brain's cortical organisation. Combining multiple modalities in the same parcellation task has the potential to provide more accurate and robust subdivisions of the cortex. Nonetheless, existing brain parcellation methods are typically developed and tested on single modalities using a specific type of information. In this paper, we propose Graph-based Multi-modal Parcellation (GraMPa), an iterative framework designed to handle the large variety of available input modalities to tackle the multi-modal parcellation task. At each iteration, we compute a set of parcellations from different modalities and fuse them based on their local reliabilities. The fused parcellation is used to initialise the next iteration, forcing the parcellations to converge towards a set of mutually informed modality specific parcellations, where correspondences are established. We explore two different multi-modal configurations for group-wise parcellation using resting-state fMRI, diffusion MRI tractography, myelin maps and task fMRI. Quantitative and qualitative results on the Human Connectome Project database show that integrating multi-modal information yields a stronger agreement with well established atlases and more robust connectivity networks that provide a better representation of the population. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Importance of multi-modal approaches to effectively identify cataract cases from electronic health records.

    Science.gov (United States)

    Peissig, Peggy L; Rasmussen, Luke V; Berg, Richard L; Linneman, James G; McCarty, Catherine A; Waudby, Carol; Chen, Lin; Denny, Joshua C; Wilke, Russell A; Pathak, Jyotishman; Carrell, David; Kho, Abel N; Starren, Justin B

    2012-01-01

    There is increasing interest in using electronic health records (EHRs) to identify subjects for genomic association studies, due in part to the availability of large amounts of clinical data and the expected cost efficiencies of subject identification. We describe the construction and validation of an EHR-based algorithm to identify subjects with age-related cataracts. We used a multi-modal strategy consisting of structured database querying, natural language processing on free-text documents, and optical character recognition on scanned clinical images to identify cataract subjects and related cataract attributes. Extensive validation on 3657 subjects compared the multi-modal results to manual chart review. The algorithm was also implemented at participating electronic MEdical Records and GEnomics (eMERGE) institutions. An EHR-based cataract phenotyping algorithm was successfully developed and validated, resulting in positive predictive values (PPVs) >95%. The multi-modal approach increased the identification of cataract subject attributes by a factor of three compared to single-mode approaches while maintaining high PPV. Components of the cataract algorithm were successfully deployed at three other institutions with similar accuracy. A multi-modal strategy incorporating optical character recognition and natural language processing may increase the number of cases identified while maintaining similar PPVs. Such algorithms, however, require that the needed information be embedded within clinical documents. We have demonstrated that algorithms to identify and characterize cataracts can be developed utilizing data collected via the EHR. These algorithms provide a high level of accuracy even when implemented across multiple EHRs and institutional boundaries.

  1. Making Faces - State-Space Models Applied to Multi-Modal Signal Processing

    DEFF Research Database (Denmark)

    Lehn-Schiøler, Tue

    2005-01-01

    The two main focus areas of this thesis are State-Space Models and multi modal signal processing. The general State-Space Model is investigated and an addition to the class of sequential sampling methods is proposed. This new algorithm is denoted as the Parzen Particle Filter. Furthermore, the Ma...... application an information theoretic vector quantizer is also proposed. Based on interactions between particles, it is shown how a quantizing scheme based on an analytic cost function can be derived....

  2. Multi-modal gesture recognition using integrated model of motion, audio and video

    Science.gov (United States)

    Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko

    2015-07-01

    Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With increasing motion sensor development, multiple data sources have become available, which leads to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. In order to solve this problem, a novel approach which integrates motion, audio and video models is proposed by using dataset captured by Kinect. The proposed system can recognize observed gestures by using three models. Recognition results of three models are integrated by using the proposed framework and the output becomes the final result. The motion and audio models are learned by using Hidden Markov Model. Random Forest which is the video classifier is used to learn the video model. In the experiments to test the performances of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All the experiments are conducted on dataset provided by the competition organizer of MMGRC, which is a workshop for Multi-Modal Gesture Recognition Challenge. The comparison results show that the multi-modal model composed of three models scores the highest recognition rate. This improvement of recognition accuracy means that the complementary relationship among three models improves the accuracy of gesture recognition. The proposed system provides the application technology to understand human actions of daily life more precisely.
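
    The integration scheme described above can be outlined as follows: per-class hidden Markov models score the motion and audio streams, a random forest scores video features, and the three per-class score vectors are combined by a weighted vote. The sketch uses hmmlearn and scikit-learn for brevity, omits feature extraction and real data, and is not the authors' implementation.

      # Outline of the late fusion described above: per-class HMMs score the
      # motion and audio streams, a random forest scores video features, and
      # the three per-class score vectors are combined by a weighted vote.
      import numpy as np
      from hmmlearn.hmm import GaussianHMM  # assumes the hmmlearn package

      def train_hmms(sequences_per_class, n_states=4):
          """Fit one GaussianHMM per gesture class on that class's sequences."""
          models = {}
          for label, seqs in sequences_per_class.items():
              X = np.vstack(seqs)                # concatenated (frames x features)
              lengths = [len(s) for s in seqs]   # frame count of each sequence
              models[label] = GaussianHMM(n_components=n_states).fit(X, lengths)
          return models

      def hmm_scores(models, sequence):
          """Per-class log-likelihoods turned into a normalized score vector."""
          ll = np.array([models[label].score(sequence) for label in sorted(models)])
          ll -= ll.max()
          scores = np.exp(ll)
          return scores / scores.sum()

      def fuse(score_motion, score_audio, score_video, weights=(0.4, 0.3, 0.3)):
          """Weighted sum of the three score vectors; the argmax is the gesture."""
          combined = (weights[0] * score_motion
                      + weights[1] * score_audio
                      + weights[2] * score_video)
          return int(np.argmax(combined))

      # Video side (sketch): rf = sklearn.ensemble.RandomForestClassifier()
      # fitted on per-clip video features; score_video = rf.predict_proba(x)[0].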

  3. Spatio-temporal multi-modality ontology for indexing and retrieving satellite images

    OpenAIRE

    MESSOUDI, Wassim; FARAH, Imed Riadh; SAHEB ETTABAA, Karim; Ben Ghezala, Henda; SOLAIMAN, Basel

    2009-01-01

    International audience; This paper presents a spatio-temporal multi-modality ontology for indexing and retrieving satellite images at a high level, to improve the quality of system retrieval and to bring semantics into the retrieval process. Our approach is based on three modules: (1) region and feature extraction, (2) ontological indexing and (3) semantic image retrieval. The first module extracts regions from the satellite image using the fuzzy c-means (FCM) segmentation algorith...

  4. A computer vision integration model for a multi-modal cognitive system

    OpenAIRE

    Vrecko A.; Skocaj D.; Hawes N.; Leonardis A.

    2009-01-01

    We present a general method for integrating visual components into a multi-modal cognitive system. The integration is very generic and can combine an arbitrary set of modalities. We illustrate our integration approach with a specific instantiation of the architecture schema that focuses on integration of vision and language: a cognitive system able to collaborate with a human, learn and display some understanding of its surroundings. As examples of cross-modal interaction we describe mechanis...

  5. Automatic extraction of geometric lip features with application to multi-modal speaker identification

    OpenAIRE

    Arsic, I.; Vilagut Abad, R.; Thiran, J.

    2006-01-01

    In this paper we consider the problem of automatic extraction of the geometric lip features for the purposes of multi-modal speaker identification. The use of visual information from the mouth region can be of great importance for improving the speaker identification system performance in noisy conditions. We propose a novel method for automated lip features extraction that utilizes color space transformation and a fuzzy-based c-means clustering technique. Using the obtained visual cues close...

  6. Multi-modal Gesture Recognition using Integrated Model of Motion, Audio and Video

    Institute of Scientific and Technical Information of China (English)

    GOUTSU Yusuke; KOBAYASHI Takaki; OBARA Junya; KUSAJIMA Ikuo; TAKEICHI Kazunari; TAKANO Wataru; NAKAMURA Yoshihiko

    2015-01-01

    Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With increasing motion sensor development, multiple data sources have become available, which leads to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. In order to solve this problem, a novel approach which integrates motion, audio and video models is proposed by using dataset captured by Kinect. The proposed system can recognize observed gestures by using three models. Recognition results of three models are integrated by using the proposed framework and the output becomes the final result. The motion and audio models are learned by using Hidden Markov Model. Random Forest which is the video classifier is used to learn the video model. In the experiments to test the performances of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All the experiments are conducted on dataset provided by the competition organizer of MMGRC, which is a workshop for Multi-Modal Gesture Recognition Challenge. The comparison results show that the multi-modal model composed of three models scores the highest recognition rate. This improvement of recognition accuracy means that the complementary relationship among three models improves the accuracy of gesture recognition. The proposed system provides the application technology to understand human actions of daily life more precisely.

  7. Adaptive Multi-Modal Data Mining and Fusion for Autonomous Intelligence Discovery

    Science.gov (United States)

    2009-03-01

    Final report covering 15-12-2006 to 15-12-2007 on Adaptive Multi-Modal Data Mining and Fusion for Autonomous Intelligence Discovery, including geospatial mapping of documents and images. Subject terms: automated data mining, streaming data, geospatial Internet localization, Arabic..., streaming text data mining. A particularly useful component that was under development was a mixed-language text database search...

  8. Increasing the Efficiency of 6-DoF Visual Localization Using Multi-Modal Sensory Data

    OpenAIRE

    Clark, Ronald; Wang, Sen; Wen, Hongkai; Trigoni, Niki; Markham, Andrew

    2016-01-01

    Localization is a key requirement for mobile robot autonomy and human-robot interaction. Vision-based localization is accurate and flexible, however, it incurs a high computational burden which limits its application on many resource-constrained platforms. In this paper, we address the problem of performing real-time localization in large-scale 3D point cloud maps of ever-growing size. While most systems using multi-modal information reduce localization time by employing side-channel informat...

  9. International free and open source software law review

    National Research Council Canada - National Science Library

    2009-01-01

    "The International Free and Open Source Software Law Review (IFOSS L. Rev.) is a collaborative legal publication aiming to increase knowledge and understanding among lawyers about Free and Open Source Software issues...

  10. Multi-modal image registration based on gradient orientations of minimal uncertainty.

    Science.gov (United States)

    De Nigris, Dante; Collins, D Louis; Arbel, Tal

    2012-12-01

    In this paper, we propose a new multi-scale technique for multi-modal image registration based on the alignment of selected gradient orientations of reduced uncertainty. We show how the registration robustness and accuracy can be improved by restricting the evaluation of gradient orientation alignment to locations where the uncertainty of fixed image gradient orientations is minimal, which we formally demonstrate correspond to locations of high gradient magnitude. We also embed a computationally efficient technique for estimating the gradient orientations of the transformed moving image (rather than resampling pixel intensities and recomputing image gradients). We have applied our method to different rigid multi-modal registration contexts. Our approach outperforms mutual information and other competing metrics in the context of rigid multi-modal brain registration, where we show sub-millimeter accuracy with cases obtained from the retrospective image registration evaluation project. Furthermore, our approach shows significant improvements over standard methods in the highly challenging clinical context of image guided neurosurgery, where we demonstrate misregistration of less than 2 mm with relation to expert selected landmarks for the registration of pre-operative brain magnetic resonance images to intra-operative ultrasound images.
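
    The core idea, evaluating gradient-orientation alignment only where the fixed image's gradient magnitude (and hence orientation certainty) is highest, can be sketched as below. The selection rule and the exact alignment function of the paper are not reproduced; cos(2*dtheta) is used here as one simple orientation-alignment measure that tolerates contrast inversion, which is an assumption of this sketch.

      # Sketch of gradient-orientation alignment restricted to the fixed
      # image's strongest gradients (where orientation is least uncertain).
      # Not the paper's exact metric; cos(2*dtheta) is used so that opposite
      # orientations (contrast inversion) still count as aligned.
      import numpy as np

      def orientation_alignment(fixed, moving, keep_fraction=0.05):
          gy_f, gx_f = np.gradient(fixed.astype(float))
          gy_m, gx_m = np.gradient(moving.astype(float))
          magnitude = np.hypot(gx_f, gy_f)
          # Keep only the locations with the strongest fixed-image gradients.
          mask = magnitude >= np.quantile(magnitude, 1.0 - keep_fraction)
          theta_f = np.arctan2(gy_f[mask], gx_f[mask])
          theta_m = np.arctan2(gy_m[mask], gx_m[mask])
          return float(np.mean(np.cos(2.0 * (theta_f - theta_m))))

      rng = np.random.default_rng(1)
      image = rng.normal(size=(64, 64)).cumsum(axis=1)   # toy structured image
      print(orientation_alignment(image, image))          # 1.0: perfectly aligned
      print(orientation_alignment(image, rng.normal(size=(64, 64))))  # near 0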

  11. Multi-modal discriminative dictionary learning for Alzheimer's disease and mild cognitive impairment.

    Science.gov (United States)

    Li, Qing; Wu, Xia; Xu, Lele; Chen, Kewei; Yao, Li; Li, Rui

    2017-10-01

    The differentiation of mild cognitive impairment (MCI), which is the prodromal stage of Alzheimer's disease (AD), from normal control (NC) is important, given the recent research emphasis on the early pre-clinical stage for possible identification of disease abnormalities, intervention and even prevention. The current study puts forward a multi-modal extension of the supervised within-class-similarity discriminative dictionary learning algorithm (SCDDL) we introduced previously for distinguishing MCI from NC. The new algorithm is based on a weighted combination of modalities and is named multi-modality SCDDL (mSCDDL). Structural magnetic resonance imaging (sMRI), fluorodeoxyglucose (FDG) positron emission tomography (PET) and florbetapir PET data of 113 AD patients, 110 MCI patients and 117 NC subjects from the Alzheimer's Disease Neuroimaging Initiative database were adopted for classification between MCI and NC, as well as between AD and NC. Adopting mSCDDL, the classification accuracy reached 98.5% for AD vs. NC and 82.8% for MCI vs. NC, superior to or comparable with the results of other state-of-the-art approaches reported in recent multi-modality publications. The mSCDDL procedure is thus a promising tool for assisting early disease diagnosis using neuroimaging data. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. EVolution: an edge-based variational method for non-rigid multi-modal image registration.

    Science.gov (United States)

    Denis de Senneville, B; Zachiu, C; Ries, M; Moonen, C

    2016-10-21

    Image registration is part of a large variety of medical applications including diagnosis, monitoring disease progression and/or treatment effectiveness and, more recently, therapy guidance. Such applications usually involve several imaging modalities such as ultrasound, computed tomography, positron emission tomography, x-ray or magnetic resonance imaging, either separately or combined. In the current work, we propose a non-rigid multi-modal registration method (namely EVolution: an edge-based variational method for non-rigid multi-modal image registration) that aims at maximizing edge alignment between the images being registered. The proposed algorithm requires only contrasts between physiological tissues, preferably present in both image modalities, and assumes deformable/elastic tissues. Given both, the method is shown to be well suited for non-rigid co-registration across different image types/contrasts (T1/T2) as well as different modalities (CT/MRI). This is achieved using a variational scheme that provides a fast algorithm with a low number of control parameters. Results obtained on an annotated CT data set were comparable to the ones provided by state-of-the-art multi-modal image registration algorithms, for all tested experimental conditions (image pre-filtering, image intensity variation, noise perturbation). Moreover, we demonstrate that, compared to existing approaches, our method possesses increased robustness to transient structures (i.e. that are only present in some of the images).

  13. A multi-modal face recognition method using complete local derivative patterns and depth maps.

    Science.gov (United States)

    Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun

    2014-10-20

    In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. As for feature extraction, we propose a new feature called Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately on Gabor features extracted from a 2D image and depth map. Then, we obtain two features: CLDP-Gabor and CLDP-Depth. The two features weighted by the corresponding coefficients are combined together in the decision level to compute the total classification distance. At last, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and CLDP performs better than other Local Binary Pattern (LBP) based features.
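
    The decision-level fusion described here reduces to a weighted sum of the per-gallery distances computed from the two features, with the identity of smallest total distance selected. A minimal sketch (the weights are illustrative placeholders, not the coefficients tuned in the paper):

```python
import numpy as np

def fused_identity(dist_cldp_gabor, dist_cldp_depth, w_gabor=0.6, w_depth=0.4):
    """Combine per-gallery-subject distances from the CLDP-Gabor and
    CLDP-Depth features and return the index of the best match."""
    total = (w_gabor * np.asarray(dist_cldp_gabor)
             + w_depth * np.asarray(dist_cldp_depth))
    return int(np.argmin(total))   # smallest fused distance wins
```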

  14. EVolution: an edge-based variational method for non-rigid multi-modal image registration

    Science.gov (United States)

    de Senneville, B. Denis; Zachiu, C.; Ries, M.; Moonen, C.

    2016-10-01

    Image registration is part of a large variety of medical applications including diagnosis, monitoring disease progression and/or treatment effectiveness and, more recently, therapy guidance. Such applications usually involve several imaging modalities such as ultrasound, computed tomography, positron emission tomography, x-ray or magnetic resonance imaging, either separately or combined. In the current work, we propose a non-rigid multi-modal registration method (namely EVolution: an edge-based variational method for non-rigid multi-modal image registration) that aims at maximizing edge alignment between the images being registered. The proposed algorithm requires only contrasts between physiological tissues, preferably present in both image modalities, and assumes deformable/elastic tissues. Under these assumptions, the method is shown to be well suited for non-rigid co-registration across different image types/contrasts (T1/T2) as well as different modalities (CT/MRI). This is achieved using a variational scheme that provides a fast algorithm with a low number of control parameters. Results obtained on an annotated CT data set were comparable to the ones provided by state-of-the-art multi-modal image registration algorithms, for all tested experimental conditions (image pre-filtering, image intensity variation, noise perturbation). Moreover, we demonstrate that, compared to existing approaches, our method possesses increased robustness to transient structures (i.e. that are only present in some of the images).

  15. Discriminative multi-task feature selection for multi-modality classification of Alzheimer's disease.

    Science.gov (United States)

    Ye, Tingting; Zu, Chen; Jie, Biao; Shen, Dinggang; Zhang, Daoqiang

    2016-09-01

    Recently, multi-task based feature selection methods have been used in multi-modality based classification of Alzheimer's disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, in traditional multi-task feature selection methods, some useful discriminative information among subjects is usually not well mined for further improving the subsequent classification performance. Accordingly, in this paper, we propose a discriminative multi-task feature selection method to select the most discriminative features for multi-modality based classification of AD/MCI. Specifically, for each modality, we train a linear regression model using the corresponding modality of data, and further enforce the group-sparsity regularization on weights of those regression models for joint selection of common features across multiple modalities. Furthermore, we propose a discriminative regularization term based on the intra-class and inter-class Laplacian matrices to better use the discriminative information among subjects. To evaluate our proposed method, we perform extensive experiments on 202 subjects, including 51 AD patients, 99 MCI patients, and 52 healthy controls (HC), from the baseline MRI and FDG-PET image data of the Alzheimer's Disease Neuroimaging Initiative (ADNI). The experimental results show that, compared with several state-of-the-art methods for multi-modality based AD/MCI classification, our proposed method not only improves classification performance but also has the potential to discover disease-related biomarkers useful for diagnosis.
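
    The joint selection of common features across modalities via group sparsity can be sketched as an l2,1-regularized multi-task least-squares problem solved by proximal gradient descent. This is a minimal illustration of that regularization step only; the paper's discriminative Laplacian term and the subsequent classifier are omitted, and the solver is a generic ISTA loop rather than the authors' optimizer.

```python
import numpy as np

def l21_multitask_select(Xs, ys, lam=0.1, n_iter=500):
    """Joint feature selection across modalities via an l2,1-regularized
    multi-task least-squares fit, solved by proximal gradient (ISTA).

    Xs: list of (n_samples, n_features) arrays, one per modality/task.
    ys: list of (n_samples,) target vectors, one per task.
    Returns W of shape (n_features, n_tasks); rows that remain non-zero
    index the jointly selected features."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    # Step size from the largest per-task Lipschitz constant.
    L = max(np.linalg.norm(X, 2) ** 2 for X in Xs)
    t = 1.0 / L
    for _ in range(n_iter):
        # Gradient step on the smooth least-squares terms, task by task.
        for k, (X, y) in enumerate(zip(Xs, ys)):
            W[:, k] -= t * X.T @ (X @ W[:, k] - y)
        # Row-wise group soft-thresholding (the l2,1 proximal operator).
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        shrink = np.maximum(0.0, 1.0 - t * lam / np.maximum(norms, 1e-12))
        W *= shrink
    return W
```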

  16. A Multi-Modality CMOS Sensor Array for Cell-Based Assay and Drug Screening.

    Science.gov (United States)

    Chi, Taiyun; Park, Jong Seok; Butts, Jessica C; Hookway, Tracy A; Su, Amy; Zhu, Chengjie; Styczynski, Mark P; McDevitt, Todd C; Wang, Hua

    2015-12-01

    In this paper, we present a fully integrated multi-modality CMOS cellular sensor array with four sensing modalities to characterize different cell physiological responses, including extracellular voltage recording, cellular impedance mapping, optical detection with shadow imaging and bioluminescence sensing, and thermal monitoring. The sensor array consists of nine parallel pixel groups and nine corresponding signal conditioning blocks. Each pixel group comprises one temperature sensor and 16 tri-modality sensor pixels, while each tri-modality sensor pixel can be independently configured for extracellular voltage recording, cellular impedance measurement (voltage excitation/current sensing), and optical detection. This sensor array supports multi-modality cellular sensing at the pixel level, which enables holistic cell characterization and joint-modality physiological monitoring on the same cellular sample with a pixel resolution of 80 μm × 100 μm. Comprehensive biological experiments with different living cell samples demonstrate the functionality and benefit of the proposed multi-modality sensing in cell-based assay and drug screening.

  17. A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps

    Directory of Open Access Journals (Sweden)

    Shouyi Yin

    2014-10-01

    Full Text Available In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. As for feature extraction, we propose a new feature called Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately on Gabor features extracted from a 2D image and depth map. Then, we obtain two features: CLDP-Gabor and CLDP-Depth. The two features weighted by the corresponding coefficients are combined together in the decision level to compute the total classification distance. At last, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and CLDP performs better than other Local Binary Pattern (LBP) based features.

  18. Building Eclectic Personal Learning Landscapes with Open Source Tools

    NARCIS (Netherlands)

    Kalz, Marco

    2008-01-01

    Kalz, M. (2005). Building Eclectic Personal Learning Landscapes with Open Source Tools. In F. de Vries, G. Attwell, R. Elferink & A. Tödt (Eds.), Open Source for Education in Europe. Research & Practice (= Proceedings of the Open Source for Education in Europe Conference) (pp. 163-168). 2005, Heerle

  19. A survey of open source tools for business intelligence

    DEFF Research Database (Denmark)

    Thomsen, Christian; Pedersen, Torben Bach

    2005-01-01

    The industrial use of open source Business Intelligence (BI) tools is not yet common. It is therefore of interest to explore which possibilities are available for open source BI and compare the tools. In this survey paper, we consider the capabilities of a number of open source tools for BI...

  20. An Analysis of Open Source Security Software Products Downloads

    Science.gov (United States)

    Barta, Brian J.

    2014-01-01

    Despite the continued demand for open source security software, a gap in the identification of success factors related to the success of open source security software persists. There are no studies that accurately assess the extent of this persistent gap, particularly with respect to the strength of the relationships of open source software…

  1. Building Eclectic Personal Learning Landscapes with Open Source Tools

    NARCIS (Netherlands)

    Kalz, Marco

    2008-01-01

    Kalz, M. (2005). Building Eclectic Personal Learning Landscapes with Open Source Tools. In F. de Vries, G. Attwell, R. Elferink & A. Tödt (Eds.), Open Source for Education in Europe. Research & Practice (= Proceedings of the Open Source for Education in Europe Conference) (pp. 163-168). 2005,

  2. An Analysis of Open Source Security Software Products Downloads

    Science.gov (United States)

    Barta, Brian J.

    2014-01-01

    Despite the continued demand for open source security software, a gap in the identification of success factors related to the success of open source security software persists. There are no studies that accurately assess the extent of this persistent gap, particularly with respect to the strength of the relationships of open source software…

  3. Behind Linus's Law: Investigating Peer Review Processes in Open Source

    Science.gov (United States)

    Wang, Jing

    2013-01-01

    Open source software has revolutionized the way people develop software, organize collaborative work, and innovate. The numerous open source software systems that have been created and adopted over the past decade are influential and vital in all aspects of work and daily life. The understanding of open source software development can enhance its…

  4. Open Source Software The Challenge Ahead

    CERN Document Server

    CERN. Geneva

    2007-01-01

    The open source community has done amazingly well in terms of challenging the historical epicenter of computing - the supercomputer and data center - and driving change there. Linux now represents a healthy and growing share of infrastructure in large organisations globally. Apache and other infrastructural components have established the new de facto standard for software in the back office: freedom. It would be easy to declare victory. But the real challenge lies ahead - taking free software to the mass market, to your grandparents, to your nieces and nephews, to your friends. This is the next wave, and if we are to be successful we need to articulate the audacious goals clearly and loudly - because that's how the community process works best. Speaker Bio: Mark Shuttleworth founded the Ubuntu Project in early 2004. Ubuntu is an enterprise Linux distribution that is freely available worldwide and has both desktop and enterprise server editions. Mark studied finance and information technology at the Universit...

  5. Evolution of an Open Source Strategy

    Directory of Open Access Journals (Sweden)

    Ronald Baecker

    2007-10-01

    Full Text Available On June 8th, 2005, we officially launched the ePresence (http://epresence.tv/) Interactive Media Open Source Consortium, at the Knowledge Media Design Institute (KMDI), University of Toronto (UofT). We had been researching and developing ePresence, our webcasting, webconferencing, and archiving software project for about five years. Throughout the early phase of the project we used the system to produce live webcasts of KMDI's annual lecture series. Eventually word spread about our webcasting system and other universities, such as Memorial University in Newfoundland, became interested. It was obvious that the time to share our project with the world had come, but what wasn't obvious to us at the time was how we were going to do that.

  6. Open Source Energy Simulation for Elementary School

    CERN Document Server

    Lye, Sze Yee

    2012-01-01

    With their interactivity and multiple-representation features, computer simulations lend themselves to guided inquiry learning. However, these simulations are usually designed for post-elementary students. Thus, the aim of this study is to investigate how the use of a guided inquiry approach with a customized energy simulation can improve students' understanding of this topic. This ongoing research adopts a case-study design. In the first phase of the study, we modified an open source energy simulation based on principles for reducing extraneous processing, existing energy simulations and the guided inquiry approach. The modified simulation was sent to teachers for evaluation and the feedback is encouraging. In the next phase of the study, a guided inquiry lesson package involving the energy simulation will be designed and deployed in an elementary classroom. Multiple data sources will be collected to seek a deeper understanding of how this learning package can possibly impact students' understanding of the phys...

  7. HERAFitter, Open Source QCD Fit Project

    CERN Document Server

    Alekhin, S.; Belov, P.; Borroni, S.; Botje, M.; Britzger, D.; Camarda, S.; Cooper-Sarkar, A.M.; Daum, K.; Diaconu, C.; Feltesse, J.; Gizhko, A.; Glazov, A.; Guffanti, A.; Guzzi, M.; Hautmann, F.; Jung, A.; Jung, H.; Kolesnikov, V.; Kowalski, H.; Kuprash, O.; Kusina, A.; Levonian, S.; Lipka, K.; Lobodzinski, B.; Lohwasser, K.; Luszczak, A.; Malaescu, B.; McNulty, R.; Myronenko, V.; Naumann-Emme, S.; Nowak, K.; Olness, F.; Perez, E.; Pirumov, H.; Plačakytė, R.; Rabbertz, K.; Radescu, V.; Sadykov, R.; Salam, G.P.; Sapronov, A.; Schöning, A.; Schörner-Sadenius, T.; Shushkevich, S.; Slominski, W.; Spiesberger, H.; Starovoitov, P.; Sutton, M.; Tomaszewska, J.; Turkot, O.; Vargas, A.; Watt, G.; Wichmann, K.

    2015-07-02

    HERAFitter is an open-source package that provides a framework for the determination of the parton distribution functions (PDFs) of the proton and for many different kinds of analyses in Quantum Chromodynamics (QCD). It encodes results from a wide range of experimental measurements in lepton-proton deep inelastic scattering and proton-proton (proton-antiproton) collisions at hadron colliders. These are complemented with a variety of theoretical options for calculating PDF-dependent cross section predictions corresponding to the measurements. The framework covers a large number of the existing methods and schemes used for PDF determination. The data and theoretical predictions are brought together through numerous methodological options for carrying out PDF fits and plotting tools to help visualise the results. While primarily based on the approach of collinear factorisation, HERAFitter also provides facilities for fits of dipole models and transverse-momentum dependent PDFs. The package can be used to study the impact of new precise measurements from hadron colliders.

  8. The architecture of open source applications

    CERN Document Server

    Wilson, Greg

    2012-01-01

    Architects look at thousands of buildings during their training, and study critiques of those buildings written by masters. In contrast, most software developers only ever get to know a handful of large programs well - usually programs they wrote themselves - and never study the great programs of history. As a result, they repeat one another's mistakes rather than building on one another's successes. This book's goal is to change that. In it, the authors of twenty-five open source applications explain how their software is structured, and why. What are each program's major components? How do they interact? And what did their builders learn during their development? In answering these questions, the contributors to this book provide unique insights into how they think.

  9. An Affordable Open-Source Turbidimeter

    Science.gov (United States)

    Kelley, Christopher D.; Krolick, Alexander; Brunner, Logan; Burklund, Alison; Kahn, Daniel; Ball, William P.; Weber-Shirk, Monroe

    2014-01-01

    Turbidity is an internationally recognized criterion for assessing drinking water quality, because the colloidal particles in turbid water may harbor pathogens, chemically reduce oxidizing disinfectants, and hinder attempts to disinfect water with ultraviolet radiation. A turbidimeter is an electronic/optical instrument that assesses turbidity by measuring the scattering of light passing through a water sample containing such colloidal particles. Commercial turbidimeters cost hundreds or thousands of dollars, putting them beyond the reach of low-resource communities around the world. An affordable open-source turbidimeter based on a single light-to-frequency sensor was designed and constructed, and evaluated against a portable commercial turbidimeter. The final product, which builds on extensive published research, is intended to catalyze further developments in affordable water and sanitation monitoring. PMID:24759114
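
    A light-to-frequency sensor reports a frequency that grows with the amount of scattered light reaching it, so turning raw readings into turbidity requires calibration against standards of known turbidity. The sketch below fits a simple linear calibration curve in Python; all numbers are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical calibration data: sensor frequency (Hz) measured for
# formazin standards of known turbidity (NTU). Values are illustrative only.
standards_ntu = np.array([0.0, 10.0, 50.0, 100.0, 400.0])
frequencies_hz = np.array([120.0, 310.0, 1050.0, 1980.0, 7400.0])

# Fit a simple linear calibration curve frequency -> NTU.
slope, intercept = np.polyfit(frequencies_hz, standards_ntu, 1)

def to_ntu(freq_hz):
    """Convert a raw light-to-frequency reading into turbidity (NTU)."""
    return slope * freq_hz + intercept

print(round(to_ntu(1500.0), 1))  # interpolated turbidity for a new reading
```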

  10. Building Energy Management Open Source Software

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Saifur [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States)

    2017-08-25

    Funded by the U.S. Department of Energy in November 2013, a Building Energy Management Open Source Software (BEMOSS) platform was engineered to improve sensing and control of equipment in small- and medium-sized commercial buildings. According to the Energy Information Administration (EIA), small- (5,000 square feet or smaller) and medium-sized (between 5,001 and 50,000 square feet) commercial buildings constitute about 95% of all commercial buildings in the U.S. These buildings typically do not have Building Automation Systems (BAS) to monitor and control building operation. While commercial BAS solutions exist, including those from Siemens, Honeywell, Johnson Controls and many more, they are not cost-effective in the context of small- and medium-sized commercial buildings, and typically work with specific controller products from the same company. BEMOSS targets small and medium-sized commercial buildings to address this gap.

  11. A identidade profissional no jornalismo open source

    Directory of Open Access Journals (Sweden)

    Ana Maria Brambilla

    2005-06-01

    Full Text Available By exploring in depth the concept of open source journalism and the values of its application on the Internet, this article focuses on the concern raised among professional journalists in a scenario where every citizen is a reporter. To illustrate this research, the Korean news outlet OhmyNews, online since 2000 and publishing articles written by people with no journalistic training, was studied. The intention of this reflection is to broaden the discussion on reconfiguring the role of the journalist in a medium that is increasingly under the control of the public.

  12. Intrinsic Motivation in Open Source Software Development

    DEFF Research Database (Denmark)

    Bitzer, J.; W., Schrettl,; Schröder, Philipp

    2004-01-01

    This paper sheds light on the puzzling evidence that even though open source software (OSS) is a public good, it is developed for free by highly qualified, young and motivated individuals, and evolves at a rapid pace. We show that once OSS development is understood as the private provision of a public good, these features emerge quite naturally. We adapt a dynamic private-provision-of-public-goods model to reflect key aspects of the OSS phenomenon. In particular, instead of relying on extrinsic motives for programmers (e.g. signaling), the present model is driven by intrinsic motives of OSS programmers, such as user-programmers, play value or homo ludens payoff, and gift culture benefits. Such intrinsic motives feature extensively in the wider OSS literature and turn out to add new insights to the economic analysis.

  13. HERAFitter, Open Source QCD Fit Project

    CERN Document Server

    Alekhin, S; Belov, P; Borroni, S; Botje, M; Britzger, D; Camarda, S; Cooper-Sarkar, A M; Daum, K; Diaconu, C; Feltesse, J; Gizhko, A; Glazov, A; Guffanti, A; Guzzi, M; Hautmann, F; Jung, A; Jung, H; Kolesnikov, V; Kowalski, H; Kuprash, O; Kusina, A; Levonian, S; Lipka, K; Lobodzinski, B; Lohwasser, K; Luszczak, A; Malaescu, B; McNulty, R; Myronenko, V; Naumann-Emme, S; Nowak, K; Olness, F; Perez, E; Pirumov, H; Plačakytė, R; Rabbertz, K; Radescu, V; Sadykov, R; Salam, G P; Sapronov, A; Schöning, A; Schörner-Sadenius, T; Shushkevich, S; Slominski, W; Spiesberger, H; Starovoitov, P; Sutton, M; Tomaszewska, J; Turkot, O; Vargas, A; Watt, G; Wichmann, K

    2015-01-01

    HERAFitter is an open-source package that provides a framework for the determination of the parton distribution functions (PDFs) of the proton and for many different kinds of analyses in Quantum Chromodynamics (QCD). It encodes results from a wide range of experimental measurements in lepton-proton deep inelastic scattering and proton-proton (proton-antiproton) collisions at hadron colliders. These are complemented with a variety of theoretical options for calculating PDF-dependent cross section predictions corresponding to the measurements. The framework covers a large number of the existing methods and schemes used for PDF determination. The data and theoretical predictions are brought together through numerous methodological options for carrying out PDF fits and plotting tools to help visualise the results. While primarily based on the approach of collinear factorisation, HERAFitter also provides facilities for fits of dipole models and transverse-momentum dependent PDFs. The package can be used to study the impact of new precise measurements from hadron colliders.

  14. NMRFx Processor: a cross-platform NMR data processing program.

    Science.gov (United States)

    Norris, Michael; Fetler, Bayard; Marchant, Jan; Johnson, Bruce A

    2016-08-01

    NMRFx Processor is a new program for the processing of NMR data. Written in the Java programming language, NMRFx Processor is a cross-platform application and runs on Linux, Mac OS X and Windows operating systems. The application can be run in both a graphical user interface (GUI) mode and from the command line. Processing scripts are written in the Python programming language and executed so that the low-level Java commands are automatically run in parallel on computers with multiple cores or CPUs. Processing scripts can be generated automatically from the parameters of NMR experiments or interactively constructed in the GUI. A wide variety of processing operations are provided, including methods for processing of non-uniformly sampled datasets using iterative soft thresholding. The interactive GUI also enables the use of the program as an educational tool for teaching basic and advanced techniques in NMR data analysis.
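
    As a rough illustration of what a processing script chains together (apodization, zero-filling, Fourier transform), here is a generic numpy sketch of a 1D pipeline. It deliberately does not use the NMRFx Processor scripting API; function and parameter names are ours, and non-uniformly sampled data would additionally require the iterative soft-thresholding step mentioned above, which is not shown.

```python
import numpy as np

def process_fid(fid, sw_hz, lb_hz=5.0, zero_fill=2):
    """Generic 1D processing chain: exponential apodization, zero-filling,
    Fourier transform and magnitude calculation. Illustrative numpy code,
    not the NMRFx Processor scripting API."""
    n = fid.size
    dwell = 1.0 / sw_hz
    t = np.arange(n) * dwell
    apodized = fid * np.exp(-np.pi * lb_hz * t)          # line broadening
    padded = np.pad(apodized, (0, n * (zero_fill - 1)))  # zero fill
    spectrum = np.fft.fftshift(np.fft.fft(padded))       # FT to frequency domain
    return np.abs(spectrum)
```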

  15. An open source business model for malaria.

    Science.gov (United States)

    Årdal, Christine; Røttingen, John-Arne

    2015-01-01

    Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate malaria. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how best malaria R&D can benefit from an enhanced open source approach and how such a business model may operate. We assess research articles, patents, clinical trials and conducted a smaller survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, 'closed' publications and hidden away physical specimens. This makes little sense since it is also the public and philanthropic sector that purchases the drugs and vaccines. We recommend that a more "open source" approach is taken by making the entire value chain more efficient through greater transparency which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would utilize such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profits are available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S.' President's Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents particularly related to new malaria

  16. An open source business model for malaria.

    Directory of Open Access Journals (Sweden)

    Christine Årdal

    Full Text Available Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate malaria. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how best malaria R&D can benefit from an enhanced open source approach and how such a business model may operate. We assess research articles, patents, clinical trials and conducted a smaller survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, 'closed' publications and hidden away physical specimens. This makes little sense since it is also the public and philanthropic sector that purchases the drugs and vaccines. We recommend that a more "open source" approach is taken by making the entire value chain more efficient through greater transparency which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would utilize such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profits are available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S.' President's Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents particularly related

  17. Open Source Hardware for DIY Environmental Sensing

    Science.gov (United States)

    Aufdenkampe, A. K.; Hicks, S. D.; Damiano, S. G.; Montgomery, D. S.

    2014-12-01

    The Arduino open source electronics platform has been very popular within the DIY (Do It Yourself) community for several years, and it is now providing environmental science researchers with an inexpensive alternative to commercial data logging and transmission hardware. Here we present the designs for our latest series of custom Arduino-based dataloggers, which include wireless communication options like self-meshing radio networks and cellular phone modules. The main Arduino board uses a custom interface board to connect to various research-grade sensors to take readings of turbidity, dissolved oxygen, water depth and conductivity, soil moisture, solar radiation, and other parameters. Sensors with SDI-12 communications can be directly interfaced to the logger using our open Arduino-SDI-12 software library (https://github.com/StroudCenter/Arduino-SDI-12). Different deployment options are shown, like rugged enclosures to house the loggers and rigs for mounting the sensors in both fresh water and marine environments. After the data has been collected and transmitted by the logger, the data is received by a mySQL-PHP stack running on a web server that can be accessed from anywhere in the world. Once there, the data can be visualized on web pages or served though REST requests and Water One Flow (WOF) services. Since one of the main benefits of using open source hardware is the easy collaboration between users, we are introducing a new web platform for discussion and sharing of ideas and plans for hardware and software designs used with DIY environmental sensors and data loggers.

  18. An Open Source Tool to Test Interoperability

    Science.gov (United States)

    Bermudez, L. E.

    2012-12-01

    Scientists interact with information at various levels, from gathering raw observed data to accessing processed, quality-controlled data. Geoinformatics tools help scientists with the acquisition, storage, processing, dissemination and presentation of geospatial information. Most of the interactions occur in a distributed environment between software components that take the role of either client or server. The communication between components includes protocols, message encodings and error handling. Testing of these communication components is important to guarantee proper implementation of standards. The communication between clients and servers can be ad hoc or follow standards. By following standards, interoperability between components increases while the time needed to develop new software is reduced. The Open Geospatial Consortium (OGC) not only coordinates the development of standards but also, within the Compliance Testing Program (CITE), provides a testing infrastructure to test clients and servers. The OGC Web-based Test Engine Facility, based on TEAM Engine, allows developers to test Web services and clients for correct implementation of OGC standards. TEAM Engine is a Java open source facility, available on SourceForge, that can be run via the command line, deployed in a web servlet container or integrated into a developer's environment via Maven. TEAM Engine uses the Compliance Test Language (CTL) and TestNG to test HTTP requests, SOAP services and XML instances against schemas and Schematron-based assertions for any type of web service, not only OGC services. For example, the OGC Web Feature Service (WFS) 1.0.0 test has more than 400 test assertions. Some of these assertions include conformance of HTTP responses; conformance of GML-encoded data; proper values for elements and attributes in the XML; and correct error responses. This presentation will provide an overview of TEAM Engine, an introduction to how to test via the OGC Testing web site and
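
    The kind of assertion such a compliance test makes can be illustrated with a short Python sketch that issues a WFS 1.0.0 GetCapabilities request and checks the basics of the response. This is only an illustration of the idea; real CITE tests are written in CTL/TestNG and run inside TEAM Engine, and the endpoint URL below is a placeholder.

```python
import requests
import xml.etree.ElementTree as ET

def check_wfs_capabilities(base_url):
    """Minimal compliance-style checks against a WFS 1.0.0 endpoint:
    HTTP status, XML well-formedness and the expected root element."""
    params = {"service": "WFS", "version": "1.0.0", "request": "GetCapabilities"}
    resp = requests.get(base_url, params=params, timeout=30)
    assert resp.status_code == 200, "expected HTTP 200"
    root = ET.fromstring(resp.content)        # raises if not well-formed XML
    assert root.tag.endswith("WFS_Capabilities"), "unexpected root element"
    return True

# check_wfs_capabilities("https://example.org/wfs")  # placeholder endpoint
```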

  19. COLOMBOS: access port for cross-platform bacterial expression compendia.

    Directory of Open Access Journals (Sweden)

    Kristof Engelen

    Full Text Available BACKGROUND: Microarrays are the main technology for large-scale transcriptional gene expression profiling, but the large bodies of data available in public databases are of limited use due to their large heterogeneity. There are several initiatives that attempt to bundle these data into expression compendia, but such resources for bacterial organisms are scarce and limited to integration of experiments from the same platform or to indirect integration of per experiment analysis results. METHODOLOGY/PRINCIPAL FINDINGS: We have constructed comprehensive organism-specific cross-platform expression compendia for three bacterial model organisms (Escherichia coli, Bacillus subtilis, and Salmonella enterica serovar Typhimurium) together with an access portal, dubbed COLOMBOS, that not only provides easy access to the compendia, but also includes a suite of tools for exploring, analyzing, and visualizing the data within these compendia. It is freely available at http://bioi.biw.kuleuven.be/colombos. The compendia are unique in directly combining expression information from different microarray platforms and experiments, and we illustrate the potential benefits of this direct integration with a case study: extending the known regulon of the Fur transcription factor of E. coli. The compendia also incorporate extensive annotations for both genes and experimental conditions; these heterogeneous data are functionally integrated in the COLOMBOS analysis tools to interactively browse and query the compendia not only for specific genes or experiments, but also metabolic pathways, transcriptional regulation mechanisms, experimental conditions, biological processes, etc. CONCLUSIONS/SIGNIFICANCE: We have created cross-platform expression compendia for several bacterial organisms and developed a complementary access port, COLOMBOS, which also serves as a convenient expression analysis tool to extract useful biological information. This work is relevant to a large community

  20. Open source PIV software applied to streaming, time-resolved PIV data

    Science.gov (United States)

    Taylor, Zachary; Gurka, Roi; Liberzon, Alex; Kopp, Gregory

    2008-11-01

    The data handling requirements for time-resolved PIV data have increased substantially in recent years with the advent of high-speed imaging and real-time streaming. Therefore, there is a need for new hardware and software solutions for data storage and analysis. The presented solution is based on open source software (OSS), which has proven to be a successful means of development. This includes the PIV algorithms and flow analysis software. The solution, based on OSS known as "URAPIV", was originally developed in Matlab and has recently become available in Python. The advantage of these scripting languages lies in their highly customizable platform; however, their routines cannot compete with commercially available software for computational speed. Thus, an effort has been undertaken to develop URAPIV-C++, a GUI based on the Qt 4 cross-platform open source library. This provides users with features commonly found in commercial packages and is comparable in processing speed to the commercial packages. The uniqueness of this package is in its complete handling of PIV experiments, from the algorithms to post-analysis of large data sets, under an OSS license. The package and its features are utilized in the recent STR-PIV system, which will be operable at the Advanced Facility for Avian Research at UWO. The wake flow behind an elongated body will be presented as a demonstration.
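
    The core operation of a PIV algorithm such as URAPIV is the cross-correlation of corresponding interrogation windows from two consecutive frames. A minimal numpy sketch of that step (integer-pixel peak only; sub-pixel fitting, window overlap and outlier validation are omitted):

```python
import numpy as np

def piv_displacement(window_a, window_b):
    """Estimate the pixel displacement of window_b relative to window_a
    using FFT-based cross-correlation of the two interrogation windows."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    # Cross-correlation via the frequency domain.
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    dy, dx = peak - np.array(corr.shape) // 2   # (row, col) image convention
    return dx, dy
```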

  1. Biomechanical ToolKit: Open-source framework to visualize and process biomechanical data.

    Science.gov (United States)

    Barre, Arnaud; Armand, Stéphane

    2014-04-01

    The C3D file format is widely used in the biomechanical field by companies and laboratories to store motion capture system data. However, few software packages can visualize and modify the entirety of the data in a C3D file. Our objective was to develop an open-source, multi-platform framework to read, write, modify and visualize data from any motion analysis system using standard (C3D) and proprietary file formats (used by many companies producing motion capture systems). The Biomechanical ToolKit (BTK) was developed to provide cost-effective and efficient tools for the biomechanical community to easily deal with motion analysis data. A large panel of operations is available to read, modify and process data through a C++ API, bindings for high-level languages (Matlab, Octave, and Python), and a standalone application (Mokka). All these tools are open-source and cross-platform and run on all major operating systems (Windows, Linux, MacOS X).

  2. The Open Source Snowpack modelling ecosystem

    Science.gov (United States)

    Bavay, Mathias; Fierz, Charles; Egger, Thomas; Lehning, Michael

    2016-04-01

    Of the large number of numerical snow models available, a few stand out as quite mature and widespread. One such model is SNOWPACK, the Open Source model that is developed at the WSL Institute for Snow and Avalanche Research SLF. Over the years, various tools have been developed around SNOWPACK in order to expand its use or to integrate additional features. Today, the model is part of a whole ecosystem that has evolved to both offer seamless integration and high modularity so each tool can easily be used outside the ecosystem. Many of these Open Source tools experience their own, autonomous development and are successfully used in their own right in other models and applications. There is Alpine3D, the spatially distributed version of SNOWPACK, which forces it with terrain-corrected radiation fields and optionally with blowing and drifting snow. This model can be used on parallel systems (either with OpenMP or MPI) and has been used for applications ranging from climate change to reindeer herding. There is the MeteoIO pre-processing library that offers fully integrated data access, data filtering, data correction, data resampling and spatial interpolations. This library is now used by several other models and applications. There is the SnopViz snow profile visualization library and application that supports both measured and simulated snow profiles (relying on the CAAML standard) as well as time series. This JavaScript application can be used standalone without any internet connection or served on the web together with simulation results. There is the OSPER data platform effort with a data management service (built on the Global Sensor Network (GSN) platform) as well as a data documenting system (metadata management as a wiki). There are several distributed hydrological models for mountainous areas in ongoing development that require very little information about the soil structure based on the assumption that in steep terrain, the most relevant information is

  3. Open Source Testing Capability for Geospatial Software

    Science.gov (United States)

    Bermudez, L. E.

    2013-12-01

    resource for technologists responsible for interoperability among scientific tools that are used for sharing data and linking models, both within and between Earth science disciplines. This presentation will focus on the OGC compliance infrastructure and its open source tools, open source tests and and open issue tracker that can be used to improve scientific software. [1] http://www.opengeospatial.org/resource/products/stats [2] http://cite.opengeospatial.org/teamengine/ [3] http://cite.opengeospatial.org/te2

  4. HERAFitter. Open source QCD fit project

    Energy Technology Data Exchange (ETDEWEB)

    Alekhin, S. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute for High Energy Physics, Protvino, Moscow Region (Russian Federation); Behnke, O.; Borroni, S.; Britzger, D.; Camarda, S.; Gizhko, A.; Glazov, A.; Guzzi, M.; Kowalski, H.; Kuprash, O.; Levonian, S.; Lipka, K.; Myronenko, V.; Naumann-Emme, S.; Pirumov, H.; Placakyte, R.; Radescu, V.; Schoerner-Sadenius, T.; Shushkevich, S.; Starovoitov, P.; Turkot, O.; Vargas, A.; Wichmann, K. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Belov, P. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); St. Petersburg State University, Department of Physics, St. Petersburg (Russian Federation); Botje, M. [Nikhef, Amsterdam (Netherlands); Cooper-Sarkar, A.M. [University of Oxford, Department of Physics, Oxford (United Kingdom); Daum, K. [Universitaet Wuppertal, Fachbereich C, Wuppertal (Germany); Universitaet Wuppertal, Rechenzentrum, Wuppertal (Germany); Diaconu, C. [Aix Marseille Universite, CNRS/IN2P3, CPPM UMR 7346, Marseille (France); Feltesse, J. [CEA, DSM/Irfu, CE-Saclay, Gif-sur-Yvette (France); Guffanti, A. [University of Copenhagen, Niels Bohr International Academy and Discovery Center, Niels Bohr Institute, Copenhagen (Denmark); Hautmann, F. [University of Southampton, School of Physics and Astronomy, Southampton (United Kingdom); Rutherford Appleton Laboratory, Chilton (United Kingdom); University of Oxford, Department of Theoretical Physics, Oxford (United Kingdom); Jung, A. [FERMILAB, Batavia, IL (United States); Jung, H. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Universiteit Antwerpen, Elementaire Deeltjes Fysica, Antwerp (Belgium); Kolesnikov, V.; Sadykov, R.; Sapronov, A. [Joint Institute for Nuclear Research (JINR), Dubna, Moscow Region (Russian Federation); Kusina, A.; Olness, F. [Southern Methodist University, Dallas, TX (United States); Lobodzinski, B. [Werner Heisenberg Institut, Max Planck Institut fuer Physik, Muenchen (Germany); Lohwasser, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Luszczak, A. [T. Kosciuszko University of Technology, Krakow (Poland); Malaescu, B. [UPMC and Universite, Paris-Diderot and CNRS/IN2P3, Laboratoire de Physique Nucleaire et de Hautes Energies, Paris (France); McNulty, R. [University College Dublin, Dublin 4 (Ireland); Nowak, K. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); University of Oxford, Department of Physics, Oxford (United Kingdom); Perez, E. [CERN, European Organization for Nuclear Research, Geneva (Switzerland); Rabbertz, K. [Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Salam, G.P. [CERN, PH-TH, Geneva 23 (Switzerland); Leave from LPTHE, CNRS UMR 7589, UPMC Univ. Paris 6, Paris (France); Schoening, A. [Universitaet Heidelberg, Physikalisches Institut, Heidelberg (Germany); Slominski, W. [Jagiellonian University, Institute of Physics, Krakow (Poland); Spiesberger, H. [Johannes-Gutenberg-Universitaet, PRISMA Cluster of Excellence, Institut fuer Physik (WA THEP), Mainz (Germany); Sutton, M. [University of Sussex, Department of Physics and Astronomy, Brighton (United Kingdom); Tomaszewska, J. [Warsaw University of Technology, Faculty of Physics, Warsaw (Poland); Watt, G. [Durham University, Institute for Particle Physics Phenomenology, Durham (United Kingdom)

    2015-07-15

    HERAFitter is an open-source package that provides a framework for the determination of the parton distribution functions (PDFs) of the proton and for many different kinds of analyses in Quantum Chromodynamics (QCD). It encodes results from a wide range of experimental measurements in lepton-proton deep inelastic scattering and proton-proton (proton-antiproton) collisions at hadron colliders. These are complemented with a variety of theoretical options for calculating PDF-dependent cross section predictions corresponding to the measurements. The framework covers a large number of the existing methods and schemes used for PDF determination. The data and theoretical predictions are brought together through numerous methodological options for carrying out PDF fits and plotting tools to help to visualise the results. While primarily based on the approach of collinear factorisation, HERAFitter also provides facilities for fits of dipole models and transverse-momentum dependent PDFs. The package can be used to study the impact of new precise measurements from hadron colliders. This paper describes the general structure of HERAFitter and its wide choice of options. (orig.)

  5. XNAT Central: Open sourcing imaging research data.

    Science.gov (United States)

    Herrick, Rick; Horton, William; Olsen, Timothy; McKay, Michael; Archie, Kevin A; Marcus, Daniel S

    2016-01-01

    XNAT Central is a publicly accessible medical imaging data repository based on the XNAT open-source imaging informatics platform. It hosts a wide variety of research imaging data sets. The primary motivation for creating XNAT Central was to provide a central repository to host and provide access to a wide variety of neuroimaging data. In this capacity, XNAT Central hosts a number of data sets from research labs and investigative efforts from around the world, including the OASIS Brains imaging studies, the NUSDAST study of schizophrenia, and more. Over time, XNAT Central has expanded to include imaging data from many different fields of research, including oncology, orthopedics, cardiology, and animal studies, but continues to emphasize neuroimaging data. Through the use of XNAT's DICOM metadata extraction capabilities, XNAT Central provides a searchable repository of imaging data that can be referenced by groups, labs, or individuals working in many different areas of research. The future development of XNAT Central will be geared towards greater ease of use as a reference library of heterogeneous neuroimaging data and associated synthetic data. It will also become a tool for making data available supporting published research and academic articles. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Closed-loop, open-source electrophysiology

    Directory of Open Access Journals (Sweden)

    John D Rolston

    2010-09-01

    Full Text Available Multiple extracellular microelectrodes (multi-electrode arrays, or MEAs) effectively record rapidly varying neural signals, and can also be used for electrical stimulation. Multi-electrode recording can serve as artificial output (efferents) from a neural system, while complex spatially and temporally targeted stimulation can serve as artificial input (afferents) to the neuronal network. Multi-unit or local field potential recordings can not only be used to control real world artifacts, such as prostheses, computers or robots, but can also trigger or alter subsequent stimulation. Real-time feedback stimulation may serve to modulate or normalize aberrant neural activity, to induce plasticity, or to serve as artificial sensory input. Despite promising closed-loop applications, commercial electrophysiology systems do not yet take advantage of the bidirectional capabilities of multi-electrodes, especially for use in freely moving animals. We addressed this lack of tools for closing the loop with NeuroRighter, an open-source system including recording hardware, stimulation hardware, and control software with a graphical user interface. The integrated system is capable of multi-electrode recording and simultaneous patterned microstimulation triggered by recordings with minimal stimulation artifact. The potential applications of closed-loop systems as research tools and clinical treatments are broad; we provide one example where epileptic activity recorded by a multi-electrode probe is used to trigger targeted stimulation, via that probe, to freely moving rodents.

  7. An open-source laser electronics suite

    Science.gov (United States)

    Pisenti, Neal C.; Reschovsky, Benjamin J.; Barker, Daniel S.; Restelli, Alessandro; Campbell, Gretchen K.

    2016-05-01

    We present an integrated set of open-source electronics for controlling external-cavity diode lasers and other instruments in the laboratory. The complete package includes a low-noise circuit for driving high-voltage piezoelectric actuators, an ultra-stable current controller based on a previously published low-noise design, and a high-performance, multi-channel temperature controller capable of driving thermo-electric coolers or resistive heaters. Each circuit (with the exception of the temperature controller) is designed to fit in a Eurocard rack equipped with a low-noise linear power supply capable of driving up to 5 A at +/- 15 V. A custom backplane allows signals to be shared between modules, and a digital communication bus makes the entire rack addressable by external control software over TCP/IP. The modular architecture makes it easy for additional circuits to be designed and integrated with existing electronics, providing a low-cost, customizable alternative to commercial systems without sacrificing performance.

  8. A hypo-status in drug-dependent brain revealed by multi-modal MRI.

    Science.gov (United States)

    Wang, Ze; Suh, Jesse; Duan, Dingna; Darnley, Stefanie; Jing, Ying; Zhang, Jian; O'Brien, Charles; Childress, Anna Rose

    2016-09-22

    Drug addiction is a chronic brain disorder with no proven effective cure. Assessing both structural and functional brain alterations by using multi-modal, rather than purely unimodal imaging techniques, may provide a more comprehensive understanding of the brain mechanisms underlying addiction, which in turn may facilitate future treatment strategies. However, this type of research remains scarce in the literature. We acquired multi-modal magnetic resonance imaging from 20 cocaine-addicted individuals and 19 age-matched controls. Compared with controls, cocaine addicts showed a multi-modal hypo-status with (1) decreased brain tissue volume in the medial and lateral orbitofrontal cortex (OFC); (2) hypo-perfusion in the prefrontal cortex, anterior cingulate cortex, insula, right temporal cortex and dorsolateral prefrontal cortex and (3) reduced irregularity of resting state activity in the OFC and limbic areas, as well as the cingulate, visual and parietal cortices. In the cocaine-addicted brain, larger tissue volume in the medial OFC, anterior cingulate cortex and ventral striatum and smaller insular tissue volume were associated with higher cocaine dependence levels. Decreased perfusion in the amygdala and insula was also correlated with higher cocaine dependence levels. Tissue volume, perfusion, and brain entropy in the insula and prefrontal cortex, all showed a trend of negative correlation with drug craving scores. The three modalities showed voxel-wise correlation in various brain regions, and combining them improved patient versus control brain classification accuracy. These results, for the first time, demonstrate a comprehensive cocaine-dependence and craving-related hypo-status regarding the tissue volume, perfusion and resting brain irregularity in the cocaine-addicted brain. © 2016 Society for the Study of Addiction.
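
    The observation that combining tissue volume, perfusion and entropy features improves patient-versus-control classification can be sketched as a comparison between single-modality feature sets and their simple concatenation. A generic scikit-learn sketch, assuming per-subject feature arrays have already been extracted (this is not the analysis pipeline used in the study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def modality_vs_fused_accuracy(vol, perf, entropy, labels, cv=5):
    """Compare cross-validated accuracy of each single-modality feature set
    with simple feature concatenation.

    vol, perf, entropy: (n_subjects, n_features) arrays per modality.
    labels: 0 = control, 1 = patient."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = {}
    for name, X in [("volume", vol), ("perfusion", perf), ("entropy", entropy),
                    ("fused", np.hstack([vol, perf, entropy]))]:
        scores[name] = cross_val_score(clf, X, labels, cv=cv).mean()
    return scores
```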

  9. Multi-modal imaging and cancer therapy using lanthanide oxide nanoparticles: current status and perspectives.

    Science.gov (United States)

    Park, J Y; Chang, Y; Lee, G H

    2015-01-01

    Biomedical imaging is an essential tool for the diagnosis and therapy of diseases such as cancers, and medicine has largely developed alongside biomedical imaging methods. The sensitivity and resolution of biomedical imaging methods can be improved with imaging agents. Furthermore, it would be ideal if imaging agents could also be used as therapeutic agents, so that one dose could serve for both diagnosis and therapy of diseases (i.e., theragnosis). This would simplify medical treatment and also benefit patients. Mixed ((Ln1)x(Ln2)yO3, x + y = 2) or unmixed (Ln2O3) lanthanide (Ln) oxide nanoparticles (Ln = Eu, Gd, Dy, Tb, Ho, Er) are potential multi-modal imaging and cancer therapeutic agents. The lanthanides have a variety of magnetic and optical properties, useful for magnetic resonance imaging (MRI) and fluorescent imaging (FI), respectively. They also strongly attenuate X-ray beams, which is useful for X-ray computed tomography (CT). In addition, gadolinium-157 ((157)Gd) has the highest thermal neutron capture cross section among stable radionuclides, useful for gadolinium neutron capture therapy (GdNCT). Therefore, mixed or unmixed lanthanide oxide nanoparticles can be used for multi-modal imaging (i.e., MRI-FI, MRI-CT, CT-FI, and MRI-CT-FI) and cancer therapy (i.e., GdNCT). Since mixed or unmixed lanthanide oxide nanoparticles are single-phase and solid-state, they can be easily synthesized, and are compact and robust, which is beneficial for biomedical applications. In this review, the physical properties of the lanthanides, and the synthesis, characterization, multi-modal imaging and cancer therapy applications of mixed and unmixed lanthanide oxide nanoparticles, are discussed.

  10. Multi-modal 2D-3D non-rigid registration

    Science.gov (United States)

    Prümmer, M.; Hornegger, J.; Pfister, M.; Dörfler, A.

    2006-03-01

    In this paper, we propose a multi-modal non-rigid 2D-3D registration technique. This method allows non-rigid alignment of a patient's pre-operative computed tomography (CT) to a few intra-operatively acquired fluoroscopic X-ray images obtained with a C-arm system. This multi-modal approach is especially focused on the 3D alignment of high contrast reconstructed volumes with intra-interventional low contrast X-ray images in order to make use of up-to-date information for surgical guidance and other interventions. The key issue of non-rigid 2D-3D registration is how to define the distance measure between high contrast 3D data and low contrast 2D projections. In this work, we use algebraic reconstruction theory to handle this problem. We modify the Euler-Lagrange equation by introducing a new 3D force. This external force term is computed from the residual of the algebraic reconstruction procedures. In the multi-modal case, we replace the residual between the digitally reconstructed radiographs (DRR) and the observed X-ray images with a statistics-based distance measure. We integrate the algebraic reconstruction technique into a variational registration framework, so that the 3D displacement field is driven to minimize the reconstruction distance between the volumetric data and its 2D projections using mutual information (MI). The benefits of this 2D-3D registration approach are its scalability in the number of X-ray reference images used and a distance measure that can also handle low-contrast fluoroscopy images. Experimental results are presented on both artificial phantom and 3D C-arm CT images.
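
    The statistical distance used here in the multi-modal case is mutual information between a DRR and the corresponding observed X-ray image. A compact histogram-based sketch of that quantity (illustrative only; the variational framework and the DRR generation are not shown):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two images, e.g. a DRR
    and an observed X-ray image of the same view."""
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()               # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)         # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)         # marginal of image B
    nz = pxy > 0                                # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```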

  11. Deep convolutional neural networks for multi-modality isointense infant brain image segmentation.

    Science.gov (United States)

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-03-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6-8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternately to the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. Copyright © 2014 Elsevier Inc. All rights reserved.
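
    The multi-modality input described here amounts to feeding T1, T2 and FA patches as separate channels of one network. A small, illustrative PyTorch sketch of such a patch classifier (the architecture, patch size and hyper-parameters are placeholders, not the network reported in the paper):

```python
import torch
import torch.nn as nn

class MultiModalPatchCNN(nn.Module):
    """Small CNN taking T1, T2 and FA patches as three input channels and
    predicting WM/GM/CSF for the centre voxel of each patch."""
    def __init__(self, patch=13, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        side = (patch - 4) // 2            # spatial size after the convs and pooling
        self.classifier = nn.Linear(32 * side * side, n_classes)

    def forward(self, x):                  # x: (batch, 3, patch, patch)
        h = self.features(x).flatten(1)
        return self.classifier(h)

# model = MultiModalPatchCNN()
# logits = model(torch.randn(8, 3, 13, 13))   # eight random 13x13 patches
```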

  12. Sustainable Multi-Modal Sensing by a Single Sensor Utilizing the Passivity of an Elastic Actuator

    Directory of Open Access Journals (Sweden)

    Takashi Takuma

    2014-05-01

    Full Text Available When a robot equipped with compliant joints driven by elastic actuators contacts an object and its joints are deformed, multi-modal information, including the magnitude and direction of the applied force and the deformation of the joint, is used to enhance the performance of the robot such as dexterous manipulation. In conventional approaches, some types of sensors used to obtain the multi-modal information are attached to the point of contact where the force is applied and at the joint. However, this approach is not sustainable for daily use in robots, i.e., not durable or robust, because the sensors can undergo damage due to the application of excessive force and wear due to repeated contacts. Further, multiple types of sensors are required to measure such physical values, which add to the complexity of the device system of the robot. In our approach, a single type of sensor is used and it is located at a point distant from the contact point and the joint, and the information is obtained indirectly by the measurement of certain physical parameters that are influenced by the applied force and the joint deformation. In this study, we employ the McKibben pneumatic actuator whose inner pressure changes passively when a force is applied to the actuator. We derive the relationships between information and the pressures of a two-degrees-of-freedom (2-DoF) joint mechanism driven by four pneumatic actuators. Experimental results show that the multi-modal information can be obtained by using the set of pressures measured before and after the force is applied. Further, we apply our principle to obtain the stiffness values of certain contacting objects that can subsequently be categorized by using the aforementioned relationships.
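
    The underlying idea, recovering force and joint-deformation information from the set of actuator pressures measured before and after loading, can be sketched as a calibrated mapping from pressure changes to the quantities of interest. The sketch below fits a simple linear map by least squares from example data; it illustrates the calibration idea only, not the relations derived in the paper, and all array shapes are hypothetical.

```python
import numpy as np

def fit_pressure_map(delta_p, targets):
    """Fit a linear map from actuator pressure changes (n_samples x 4, one
    column per pneumatic actuator) to the quantities of interest, e.g.
    applied force components and joint deflections (n_samples x k)."""
    A = np.hstack([delta_p, np.ones((delta_p.shape[0], 1))])  # add bias term
    coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return coeffs

def predict(coeffs, delta_p_new):
    """Apply the fitted map to new pressure-change measurements."""
    A = np.hstack([delta_p_new, np.ones((delta_p_new.shape[0], 1))])
    return A @ coeffs
```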

  13. Multi-modal spectroscopic imaging with synchrotron light to study mechanisms of brain disease

    Science.gov (United States)

    Summers, Kelly L.; Fimognari, Nicholas; Hollings, Ashley; Kiernan, Mitchell; Lam, Virginie; Tidy, Rebecca J.; Takechi, Ryu; George, Graham N.; Pickering, Ingrid J.; Mamo, John C.; Harris, Hugh H.; Hackett, Mark J.

    2017-04-01

    The international health care costs associated with Alzheimer's disease (AD) and dementia have been predicted to reach $2 trillion USD by 2030. As such, there is an urgent need to develop new treatments and diagnostic methods to stem an international health crisis. A major limitation to therapy and diagnostic development is the lack of a complete understanding of the disease mechanisms. Spectroscopic methods at synchrotron light sources, such as FTIR, XRF, and XAS, offer a "multi-modal imaging platform" to reveal a wealth of important biochemical information in situ within ex vivo tissue sections, increasing our understanding of disease mechanisms.

  14. Multi-Modal Imaging with a Toolbox of Influenza A Reporter Viruses.

    Science.gov (United States)

    Tran, Vy; Poole, Daniel S; Jeffery, Justin J; Sheahan, Timothy P; Creech, Donald; Yevtodiyenko, Aleksey; Peat, Andrew J; Francis, Kevin P; You, Shihyun; Mehle, Andrew

    2015-10-13

    Reporter viruses are useful probes for studying multiple stages of the viral life cycle. Here we describe an expanded toolbox of fluorescent and bioluminescent influenza A reporter viruses. The enhanced utility of these tools enabled kinetic studies of viral attachment, infection, and co-infection. Multi-modal bioluminescence and positron emission tomography-computed tomography (PET/CT) imaging of infected animals revealed that antiviral treatment reduced viral load, dissemination, and inflammation. These new technologies and applications will dramatically accelerate in vitro and in vivo influenza virus studies.

  15. A low-power multi-modal body sensor network with application to epileptic seizure monitoring.

    Science.gov (United States)

    Altini, Marco; Del Din, Silvia; Patel, Shyamal; Schachter, Steven; Penders, Julien; Bonato, Paolo

    2011-01-01

    Monitoring patients' physiological signals during their daily activities in the home environment is one of the challenges of health care. New ultra-low-power wireless technologies could help to achieve this goal. In this paper we present a low-power, multi-modal, wearable sensor platform for the simultaneous recording of activity and physiological data. First, we describe the wearable sensor platform and its characteristics with respect to power consumption. Second, we present preliminary results of a comparison between our sensors and a reference system, on healthy subjects, to test the reliability of the detected physiological (electrocardiogram and respiration) and electromyography signals.

  16. Using Multi-Modal 3D Contours and Their Relations for Vision and Robotics

    DEFF Research Database (Denmark)

    Baseski, Emre; Pugeault, Nicolas; Kalkan, Sinan

    2010-01-01

    In this work, we make use of 3D contours and relations between them (namely, coplanarity, cocolority, distance and angle) for four different applications in the area of computer vision and vision-based robotics. Our multi-modal contour representation covers both geometric and appearance information.... We show the potential of reasoning with global entities in the context of visual scene analysis for driver assistance, depth prediction, robotic grasping and grasp learning. We argue that such 3D global reasoning processes complement widely-used 2D local approaches such as bag-of-features since 3D...

  17. Research on Satellite Fault Diagnosis and Prediction Using Multi-modal Reasoning

    Institute of Scientific and Technical Information of China (English)

    Yang Tianshe; Sun Yanhong; Cao Yuping

    2004-01-01

    Diagnosis and prediction of satellite faults are more difficult than those of other equipment due to the complex structure of satellites and the presence of multiple excitation sources of satellite faults. Generally, one kind of reasoning model can only diagnose and predict one kind of satellite fault. In this paper the authors introduce an application of a new method using multi-modal reasoning to diagnose and predict satellite faults. The method has been used successfully in the development of the knowledge-based satellite fault diagnosis and recovery system (KSFDRS). It is shown that the method is effective.

  18. Data Processing And Machine Learning Methods For Multi-Modal Operator State Classification Systems

    Science.gov (United States)

    Hearn, Tristan A.

    2015-01-01

    This document is intended as an introduction to a set of common signal processing and machine learning methods that may be used in the software portion of a functional crew state monitoring system. It includes overviews of the theory behind the methods as well as examples of their implementation. Practical considerations are discussed for implementing modular, flexible, and scalable processing and classification software for a multi-modal, multi-channel monitoring system. Example source code is also given for all of the discussed processing and classification methods.
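    A minimal sketch of the modular idea described above, assuming a scikit-learn style pipeline (the channel names, the band-power feature and the SVM back end are illustrative choices, not taken from the document): each channel gets its own feature/scaling pipeline, the per-channel features are unioned, and a single classifier sits on top.

```python
import numpy as np
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import StandardScaler, FunctionTransformer
from sklearn.svm import SVC

def band_power(x):
    # Placeholder per-channel feature: mean squared amplitude per epoch.
    return (x ** 2).mean(axis=1, keepdims=True)

# X holds 100 epochs of 512 samples: columns 0-255 are "EEG", 256-511 are "ECG".
channel_features = FeatureUnion([
    ("eeg", Pipeline([("pick", FunctionTransformer(lambda X: X[:, :256])),
                      ("feat", FunctionTransformer(band_power)),
                      ("scale", StandardScaler())])),
    ("ecg", Pipeline([("pick", FunctionTransformer(lambda X: X[:, 256:])),
                      ("feat", FunctionTransformer(band_power)),
                      ("scale", StandardScaler())])),
])

clf = Pipeline([("features", channel_features), ("svm", SVC())])

X = np.random.randn(100, 512)            # random stand-in multi-channel epochs
y = np.random.randint(0, 2, size=100)    # binary operator-state labels
clf.fit(X, y)
print(clf.score(X, y))                   # training accuracy of the toy pipeline
```

    Swapping in a different per-channel feature or classifier only changes one named step, which is the kind of modularity the abstract argues for.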

  19. Multi-Modal Imaging with a Toolbox of Influenza A Reporter Viruses

    Directory of Open Access Journals (Sweden)

    Vy Tran

    2015-10-01

    Full Text Available Reporter viruses are useful probes for studying multiple stages of the viral life cycle. Here we describe an expanded toolbox of fluorescent and bioluminescent influenza A reporter viruses. The enhanced utility of these tools enabled kinetic studies of viral attachment, infection, and co-infection. Multi-modal bioluminescence and positron emission tomography–computed tomography (PET/CT) imaging of infected animals revealed that antiviral treatment reduced viral load, dissemination, and inflammation. These new technologies and applications will dramatically accelerate in vitro and in vivo influenza virus studies.

  20. SU-E-I-83: Error Analysis of Multi-Modality Image-Based Volumes of Rodent Solid Tumors Using a Preclinical Multi-Modality QA Phantom

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Y [University of Kansas Hospital, Kansas City, KS (United States); Fullerton, G; Goins, B [University of Texas Health Science Center at San Antonio, San Antonio, TX (United States)

    2015-06-15

    Purpose: In our previous study a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine the reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)*a*b*c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Linear regression analysis was then performed to compare image-based tumor volumes with the reference tumor volume and the known test object volume for the rats and the phantom, respectively. Results: The slopes of the regression lines for in-vivo tumor volumes measured by the three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US, respectively. For the phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US, respectively. Conclusion: For both the animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent and systematic errors were mainly due to the selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to the ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement
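    A small worked example of the volume formula quoted above, V = (π/6)*a*b*c, together with the slope of a regression line through the origin; the diameters and the assumed 2% oversizing are made-up illustration values, not the study's data.

```python
import numpy as np

def ellipsoid_volume(a_mm, b_mm, c_mm):
    """Tumor volume (mm^3) from three perpendicular maximum diameters."""
    return (np.pi / 6.0) * a_mm * b_mm * c_mm

# Reference volumes for the five test-object diameters, and a hypothetical
# modality that systematically over-measures each diameter by ~2%.
diameters = np.array([2.0, 4.0, 7.0, 10.0, 14.0])
reference = ellipsoid_volume(diameters, diameters, diameters)
measured = ellipsoid_volume(diameters * 1.02, diameters * 1.02, diameters * 1.02)

# Slope of a regression line through the origin, measured vs. reference.
slope = np.sum(measured * reference) / np.sum(reference ** 2)
print(f"regression slope ~ {slope:.3f}")  # > 1 indicates systematic over-estimation
```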

  1. Awareness of Open Source Software (OSS): Promises, Reality and Future

    Directory of Open Access Journals (Sweden)

    Anil (Student of M.Tech.(CSE

    2011-10-01

    Full Text Available Open source is a development method for software that harnesses the power of distributed peer review and transparency of process. The Open Source Initiative Approved License trademark and program creates a nexus of trust around which developers, users, corporations and governments can organize open source cooperation. The promise of open source is better quality, higher reliability, more flexibility, lower cost, and an end to predatory vendor lock-in.

  2. The successes and challenges of open-source biopharmaceutical innovation.

    Science.gov (United States)

    Allarakhia, Minna

    2014-05-01

    Increasingly, open-source-based alliances seek to provide broad access to data, research-based tools, preclinical samples and downstream compounds. The challenge is how to create value from open-source biopharmaceutical innovation. This value creation may occur via transparency and usage of data across the biopharmaceutical value chain as stakeholders move dynamically between open source and open innovation. In this article, several examples are used to trace the evolution of biopharmaceutical open-source initiatives. The article specifically discusses the technological challenges associated with the integration and standardization of big data; the human capacity development challenges associated with skill development around big data usage; and the data-material access challenge associated with data and material access and usage rights, particularly as the boundary between open source and open innovation becomes more fluid. It is the author's opinion that the assessment of when and how value creation will occur, through open-source biopharmaceutical innovation, is paramount. The key is to determine the metrics of value creation and the necessary technological, educational and legal frameworks to support the downstream outcomes of now big data-based open-source initiatives. The continued focus on the early-stage value creation is not advisable. Instead, it would be more advisable to adopt an approach where stakeholders transform open-source initiatives into open-source discovery, crowdsourcing and open product development partnerships on the same platform.

  3. REUSABILITY ASSESSMENT OF OPEN SOURCE COMPONENTS FOR SOFTWARE PRODUCT LINES

    OpenAIRE

    Fazal-e- Amin; Ahmad Kamil Mahmood; Alan Oxley

    2011-01-01

    Software product lines and open source software are two emerging paradigms in software engineering. A common theme in both of these paradigms is 'reuse'. Software product lines are a reuse-centered approach that makes use of existing assets to develop new products. At the moment, a motivation for using open source software is to gain access to source code, which can then be reused. The product line community is being attracted to open source components. The use of open source softwa...

  4. Online multi-modal robust non-negative dictionary learning for visual tracking.

    Science.gov (United States)

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking in both quantity and quality.
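    The non-negativity-preserving multiplicative updates mentioned above can be illustrated with plain non-negative matrix factorisation; the sketch below is only that simplified core (the online updates, the M-estimation fitting loss and the multiple modalities of the full OMRNDL algorithm are omitted, and the data are random stand-ins).

```python
import numpy as np

def nn_dictionary_learning(V, n_atoms=10, n_iter=200, eps=1e-9):
    """Factorise V (features x samples) as W @ H with W, H >= 0."""
    rng = np.random.default_rng(0)
    d, n = V.shape
    W = rng.random((d, n_atoms))     # dictionary atoms ("templates")
    H = rng.random((n_atoms, n))     # non-negative representation coefficients
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update keeps H >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update keeps W >= 0
    return W, H

V = np.abs(np.random.randn(64, 500))   # e.g. vectorised non-negative patches
W, H = nn_dictionary_learning(V)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative reconstruction error
```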

  5. Online multi-modal robust non-negative dictionary learning for visual tracking.

    Directory of Open Access Journals (Sweden)

    Xiang Zhang

    Full Text Available Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking in both quantity and quality.

  6. Multi-modal face parts fusion based on Gabor feature for face recognition

    Institute of Scientific and Technical Information of China (English)

    Xiang Yan; Su Guangda; Shang Yan; Li Congcong

    2009-01-01

    A novel face recognition method, which is a fusion of multi-modal face parts based on Gabor features (MMP-GF), is proposed in this paper. Firstly, the bare face image detached from the normalized image was convolved with a family of Gabor kernels, and then, according to the face structure and the key-point locations, the calculated Gabor images were divided into five parts: Gabor face, Gabor eyebrow, Gabor eye, Gabor nose and Gabor mouth. After that, the multi-modal Gabor features were spatially partitioned into non-overlapping regions and the averages of the regions were concatenated into a low-dimensional feature vector, whose dimension was further reduced by principal component analysis (PCA). In the decision-level fusion, match results calculated separately for the five parts were combined according to linear discriminant analysis (LDA), and a normalized matching algorithm was used to improve the performance. Experiments on the FERET database show that the proposed MMP-GF method achieves good robustness to expression and age variations.
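    A hedged sketch of that feature pipeline: convolve a face-part crop with a small bank of (simplified, isotropic-envelope) Gabor kernels, average the response magnitude over non-overlapping regions, concatenate across orientations, and reduce the dimension with PCA. The kernel parameters, region size and random stand-in crops are assumptions for illustration only, and the decision-level LDA fusion step is not reproduced.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.decomposition import PCA

def gabor_kernel(theta, lam=8.0, sigma=4.0, size=15):
    # Simplified real Gabor kernel with an isotropic Gaussian envelope.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def region_averages(img, block=8):
    # Average over non-overlapping block x block regions and flatten.
    h = (img.shape[0] // block) * block
    w = (img.shape[1] // block) * block
    img = img[:h, :w]
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3)).ravel()

def gabor_features(img, n_orient=4):
    feats = []
    for k in range(n_orient):
        response = convolve2d(img, gabor_kernel(np.pi * k / n_orient), mode="same")
        feats.append(region_averages(np.abs(response)))
    return np.concatenate(feats)

crops = np.random.rand(20, 64, 64)                # stand-in face-part crops
X = np.array([gabor_features(c) for c in crops])  # 4 orientations x 64 regions
X_low = PCA(n_components=10).fit_transform(X)     # further dimension reduction
print(X_low.shape)                                # (20, 10)
```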

  7. Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.

    Science.gov (United States)

    Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu

    2016-01-01

    The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.

  8. Multi-Modal Curriculum Learning for Semi-Supervised Image Classification.

    Science.gov (United States)

    Gong, Chen; Tao, Dacheng; Maybank, Stephen J; Liu, Wei; Kang, Guoliang; Yang, Jie

    2016-07-01

    Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.

  9. Progressive Graph-Based Transductive Learning for Multi-modal Classification of Brain Disorder Disease.

    Science.gov (United States)

    Wang, Zhengxia; Zhu, Xiaofeng; Adeli, Ehsan; Zhu, Yingying; Zu, Chen; Nie, Feiping; Shen, Dinggang; Wu, Guorong

    2016-10-01

    Graph-based Transductive Learning (GTL) is a powerful tool in computer-assisted diagnosis, especially when the training data is not sufficient to build reliable classifiers. Conventional GTL approaches first construct a fixed subject-wise graph based on the similarities of observed features (i.e., extracted from imaging data) in the feature domain, and then follow the established graph to propagate the existing labels from training to testing data in the label domain. However, such a graph is exclusively learned in the feature domain and may not be necessarily optimal in the label domain. This may eventually undermine the classification accuracy. To address this issue, we propose a progressive GTL (pGTL) method to progressively find an intrinsic data representation. To achieve this, our pGTL method iteratively (1) refines the subject-wise relationships observed in the feature domain using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined subject-wise relationships, and (3) verifies the intrinsic data representation on the training data, in order to guarantee an optimal classification on the new testing data. Furthermore, we extend our pGTL to incorporate multi-modal imaging data, to improve the classification accuracy and robustness as multi-modal imaging data can provide complementary information. Promising classification results in identifying Alzheimer's disease (AD), Mild Cognitive Impairment (MCI), and Normal Control (NC) subjects are achieved using MRI and PET data.
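    For readers unfamiliar with the conventional GTL step that pGTL builds on, the following is a minimal label-propagation sketch over a fixed subject-wise similarity graph; the progressive graph refinement and the multi-modal extension of pGTL are not reproduced, and the Gaussian affinity, propagation constant and synthetic two-class data are assumptions.

```python
import numpy as np

def label_propagation(X, y, n_labeled, alpha=0.9, sigma=1.0, n_iter=100):
    """Propagate labels of the first n_labeled rows of X over a similarity graph."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))           # Gaussian subject-wise affinities
    np.fill_diagonal(W, 0.0)
    D = np.diag(1.0 / np.sqrt(W.sum(1)))
    S = D @ W @ D                                # symmetrically normalised graph
    n, c = X.shape[0], int(y.max()) + 1
    Y = np.zeros((n, c))
    Y[np.arange(n_labeled), y[:n_labeled]] = 1.0 # clamp the training labels
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y      # iterative propagation
    return F.argmax(1)

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, (30, 5))               # synthetic class 0 subjects
X1 = rng.normal(3.0, 1.0, (30, 5))               # synthetic class 1 subjects
X = np.vstack([X0[:5], X1[:5], X0[5:], X1[5:]])  # first 10 rows are labelled
y = np.array([0] * 5 + [1] * 5 + [0] * 25 + [1] * 25)
pred = label_propagation(X, y, n_labeled=10)
print((pred == y).mean())                        # transductive accuracy
```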

  10. Aggregation for Computing Multi-Modal Stationary Distributions in 1-D Gene Regulatory Networks.

    Science.gov (United States)

    Avcu, Neslihan; Pekergin, Nihal; Pekergin, Ferhan; Guzelis, Cuneyt

    2017-04-27

    This paper proposes aggregation-based, three-stage algorithms to overcome the numerical problems encountered in computing stationary distributions and mean first passage times for multi-modal birth-death processes with large state space sizes. The considered birth-death processes, which are defined by Chemical Master Equations, are used in modeling the stochastic behavior of gene regulatory networks. Computing stationary probabilities for a multi-modal distribution from Chemical Master Equations is subject to numerical problems because the probability values run out of the representation range of standard programming languages as the size of the state space increases. Aggregation is shown to provide a solution to this problem by first analyzing reduced-size subsystems in isolation and then considering the transitions between these subsystems. The proposed algorithms are applied to study the bimodal behavior of the lac operon of E. coli described with a one-dimensional birth-death model. Thus the determination of the entire parameter range of bimodality for the stochastic model of the lac operon is achieved.
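    The underflow issue described above can be made concrete with a small worked example: for a 1-D birth-death chain, detailed balance gives pi_{n+1} = pi_n * b_n / d_{n+1}, and accumulating the products in log space keeps the computation stable. The switch-like birth rates and linear death rates below are illustrative and deliberately chosen to produce a bimodal stationary distribution; they are not the paper's lac operon model or its aggregation algorithm.

```python
import numpy as np

def stationary_distribution(birth, death):
    """birth[n] = rate n -> n+1, death[n] = rate n -> n-1 (death[0] unused)."""
    n_states = len(birth) + 1
    log_pi = np.zeros(n_states)
    for n in range(n_states - 1):
        # Detailed balance: pi_{n+1} = pi_n * b_n / d_{n+1}, accumulated in logs.
        log_pi[n + 1] = log_pi[n] + np.log(birth[n]) - np.log(death[n + 1])
    log_pi -= log_pi.max()               # stabilise before exponentiating
    pi = np.exp(log_pi)
    return pi / pi.sum()

states = np.arange(201)
birth = np.where(states[:-1] < 50, 5.3, 60.3)   # switch-like production rate
death = 0.5 * states                            # linear degradation; death[0] unused
pi = stationary_distribution(birth, death)

# Interior local maxima of the stationary distribution (two modes expected).
peaks = np.where((pi[1:-1] > pi[:-2]) & (pi[1:-1] > pi[2:]))[0] + 1
print(peaks)                                    # e.g. [ 10 120]
```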

  11. Multi-modal use of a socially directed call in bonobos.

    Directory of Open Access Journals (Sweden)

    Emilie Genty

    Full Text Available 'Contest hoots' are acoustically complex vocalisations produced by adult and subadult male bonobos (Pan paniscus). These calls are often directed at specific individuals and regularly combined with gestures and other body signals. The aim of our study was to describe the multi-modal use of this call type and to clarify its communicative and social function. To this end, we observed two large groups of bonobos, which generated a sample of 585 communicative interactions initiated by 10 different males. We found that contest hooting, with or without other associated signals, was produced to challenge and provoke a social reaction in the targeted individual, usually agonistic chase. Interestingly, 'contest hoots' were sometimes also used during friendly play. In both contexts, males were highly selective in whom they targeted by preferentially choosing individuals of equal or higher social rank, suggesting that the calls functioned to assert social status. Multi-modal sequences were not more successful in eliciting reactions than contest hoots given alone, but we found a significant difference in the choice of associated gestures between playful and agonistic contexts. During friendly play, contest hoots were significantly more often combined with soft than rough gestures compared to agonistic challenges, while the calls' acoustic structure remained the same. We conclude that contest hoots indicate the signaller's intention to interact socially with important group members, while the gestures provide additional cues concerning the nature of the desired interaction.

  12. Multi-Modal Use of a Socially Directed Call in Bonobos

    Science.gov (United States)

    Genty, Emilie; Clay, Zanna; Hobaiter, Catherine; Zuberbühler, Klaus

    2014-01-01

    ‘Contest hoots’ are acoustically complex vocalisations produced by adult and subadult male bonobos (Pan paniscus). These calls are often directed at specific individuals and regularly combined with gestures and other body signals. The aim of our study was to describe the multi-modal use of this call type and to clarify its communicative and social function. To this end, we observed two large groups of bonobos, which generated a sample of 585 communicative interactions initiated by 10 different males. We found that contest hooting, with or without other associated signals, was produced to challenge and provoke a social reaction in the targeted individual, usually agonistic chase. Interestingly, ‘contest hoots’ were sometimes also used during friendly play. In both contexts, males were highly selective in whom they targeted by preferentially choosing individuals of equal or higher social rank, suggesting that the calls functioned to assert social status. Multi-modal sequences were not more successful in eliciting reactions than contest hoots given alone, but we found a significant difference in the choice of associated gestures between playful and agonistic contexts. During friendly play, contest hoots were significantly more often combined with soft than rough gestures compared to agonistic challenges, while the calls' acoustic structure remained the same. We conclude that contest hoots indicate the signaller's intention to interact socially with important group members, while the gestures provide additional cues concerning the nature of the desired interaction. PMID:24454745

  13. Multi-modal signal acquisition using a synchronized wireless body sensor network in geriatric patients.

    Science.gov (United States)

    Pflugradt, Maik; Mann, Steffen; Tigges, Timo; Görnig, Matthias; Orglmeister, Reinhold

    2016-02-01

    Wearable home-monitoring devices acquiring various biosignals such as the electrocardiogram, photoplethysmogram, electromyogram, respirational activity and movements have become popular in many fields of research, medical diagnostics and commercial applications. Ambulatory settings in particular introduce still-unsolved challenges to the development of sensor hardware and smart signal processing approaches. This work gives a detailed insight into a novel wireless body sensor network and addresses critical aspects such as signal quality, synchronicity among multiple devices, as well as the system's overall capabilities and limitations in cardiovascular monitoring. An early sign of typical cardiovascular diseases is often a disturbed autonomic regulation such as orthostatic intolerance. In that context, blood pressure measurements play an important role in observing abnormalities like hypo- or hypertension. Non-invasive and unobtrusive blood pressure monitoring still poses a significant challenge, promoting alternative approaches including pulse wave velocity considerations. In the scope of this work, the presented hardware is applied to demonstrate the continuous extraction of multi-modal parameters such as pulse arrival time within a preliminary clinical study. A Schellong test to diagnose orthostatic hypotension, which is typically based on blood pressure cuff measurements, was conducted, serving as an application that might significantly benefit from novel multi-modal measurement principles. It is further shown that the system's synchronicity is as precise as 30 μs and that the integrated analog preprocessing circuits and additional accelerometer data provide significant advantages in ambulatory measurement environments.

  14. Treating psychological trauma in first responders: a multi-modal paradigm.

    Science.gov (United States)

    Flannery, Raymond B

    2015-06-01

    Responding to critical incidents may result in 5.9-22% of first responders developing psychological trauma and posttraumatic stress disorder. These impacts may be physical, mental, and/or behavioral. This population remains at risk, given the daily occurrence of critical incidents. Current treatments, primarily focused on combat and rape victims, have included single and double interventions, which have proven helpful to some but not all victims and one standard of care has remained elusive. However, even though the need is established, research on the treatment interventions of first responders has been limited. Given the multiplicity of impacts from psychological trauma and the inadequacies of responder treatment intervention research thus far, this paper proposes a paradigmatic shift from single/double treatment interventions to a multi-modal approach to first responder victim needs. A conceptual framework based on psychological trauma is presented and possible multi-modal interventions selected from the limited, extant first responder research are utilized to illustrate how the approach would work and to encourage clinical and experimental research into first responder treatment needs.

  15. Online Multi-Modal Robust Non-Negative Dictionary Learning for Visual Tracking

    Science.gov (United States)

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking in both quantity and quality. PMID:25961715

  16. Eigenanatomy: sparse dimensionality reduction for multi-modal medical image analysis.

    Science.gov (United States)

    Kandel, Benjamin M; Wang, Danny J J; Gee, James C; Avants, Brian B

    2015-02-01

    Rigorous statistical analysis of multimodal imaging datasets is challenging. Mass-univariate methods for extracting correlations between image voxels and outcome measurements are not ideal for multimodal datasets, as they do not account for interactions between the different modalities. The extremely high dimensionality of medical images necessitates dimensionality reduction, such as principal component analysis (PCA) or independent component analysis (ICA). These dimensionality reduction techniques, however, consist of contributions from every region in the brain and are therefore difficult to interpret. Recent advances in sparse dimensionality reduction have enabled construction of a set of image regions that explain the variance of the images while still maintaining anatomical interpretability. The projections of the original data on the sparse eigenvectors, however, are highly collinear and therefore difficult to incorporate into multi-modal image analysis pipelines. We propose here a method for clustering sparse eigenvectors and selecting a subset of the eigenvectors to make interpretable predictions from a multi-modal dataset. Evaluation on a publicly available dataset shows that the proposed method outperforms PCA and ICA-based regressions while still maintaining anatomical meaning. To facilitate reproducibility, the complete dataset used and all source code is publicly available. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. An Evaluation of the Pedestrian Classification in a Multi-Domain Multi-Modality Setup

    Directory of Open Access Journals (Sweden)

    Alina Miron

    2015-06-01

    Full Text Available The objective of this article is to study the problem of pedestrian classification across different light spectrum domains (visible and far-infrared (FIR)) and modalities (intensity, depth and motion). In recent years, there have been a number of approaches for classifying and detecting pedestrians in both FIR and visible images, but the methods are difficult to compare, because either the datasets are not publicly available or they do not offer a comparison between the two domains. Our two primary contributions are the following: (1) we propose a public dataset, named RIFIR, containing both FIR and visible images collected in an urban environment from a moving vehicle during daytime; and (2) we compare the state-of-the-art features in a multi-modality setup: intensity, depth and flow, in far-infrared over visible domains. The experiments show that the feature families intensity self-similarity (ISS), local binary patterns (LBP), local gradient patterns (LGP) and histograms of oriented gradients (HOG), computed from the FIR and visible domains, are highly complementary, but their relative performance varies across different modalities. In our experiments, the FIR domain has proven superior to the visible one for the task of pedestrian classification, but the overall best results are obtained by a multi-domain multi-modality multi-feature fusion.
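    A hedged sketch of feature-level fusion in the spirit of the setup above: HOG descriptors computed separately on an intensity window and a depth window are concatenated and fed to a linear SVM. The window size, HOG parameters, random stand-in data and the choice of classifier are assumptions for illustration; this is not the RIFIR evaluation protocol.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def fused_descriptor(intensity, depth):
    # Early (feature-level) fusion: concatenate per-modality HOG descriptors.
    f_int = hog(intensity, orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(2, 2))
    f_dep = hog(depth, orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(2, 2))
    return np.concatenate([f_int, f_dep])

rng = np.random.default_rng(0)
windows = [(rng.random((64, 32)), rng.random((64, 32))) for _ in range(40)]
labels = rng.integers(0, 2, size=40)            # pedestrian vs. background
X = np.array([fused_descriptor(i, d) for i, d in windows])

clf = LinearSVC().fit(X, labels)
print(clf.score(X, labels))                     # training score of the toy setup
```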

  18. An arbitrary boundary triangle mesh generation method for multi-modality imaging

    Science.gov (United States)

    Zhang, Xuanxuan; Deng, Yong; Gong, Hui; Meng, Yuanzheng; Yang, Xiaoquan; Luo, Qingming

    2012-03-01

    Low resolution and ill-posedness are the major challenges in diffuse optical tomography (DOT) and fluorescence molecular tomography (FMT). Recently, multi-modality imaging technology that combines micro-computed tomography (micro-CT) with DOT/FMT has been developed to improve resolution and reduce ill-posedness. To take advantage of the fine a priori anatomical maps obtained from micro-CT, we present an arbitrary-boundary triangle mesh generation method for FMT/DOT/micro-CT multi-modality imaging. A planar straight line graph (PSLG) based on the micro-CT image is obtained by an adaptive boundary sampling algorithm. The subregions of the mesh are accurately matched with anatomical structures by a two-step solution: first, the triangles and nodes are labeled during mesh refinement, and then a revising algorithm is used to modify the mesh of each subregion. Triangle meshes based on a regular model and on a micro-CT image are generated, respectively. The results show that the subregions of the triangle meshes match anatomical structures accurately and that the triangle meshes have good quality. This provides an arbitrary-boundary triangle mesh generation method with the ability to incorporate fine a priori anatomical information into DOT/FMT reconstructions.

  19. Visual tracking for multi-modality computer-assisted image guidance

    Science.gov (United States)

    Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp

    2017-03-01

    With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support placement of imaging probe and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.

  20. Coronary plaque morphology on multi-modality imagining and periprocedural myocardial infarction after percutaneous coronary intervention

    Directory of Open Access Journals (Sweden)

    Akira Sato

    2016-06-01

    Full Text Available Percutaneous coronary intervention (PCI) may be complicated by periprocedural myocardial infarction (PMI), as manifested by elevated cardiac biomarkers such as creatine kinase-MB (CK-MB) or troponin T. The occurrence of PMI has been shown to be associated with worse short- and long-term clinical outcomes. However, recent studies suggest that PMI defined by biomarker levels alone is a marker of atherosclerosis burden and procedural complexity but in most cases does not have independent prognostic significance. Diagnostic multi-modality imaging such as intravascular ultrasound, optical coherence tomography, coronary angioscopy, near-infrared spectroscopy, multidetector computed tomography, and magnetic resonance imaging can be used to closely investigate the atherosclerotic lesion in order to detect morphological markers of unstable and vulnerable plaques in patients undergoing PCI. With the improvement of technical aspects of multi-modality coronary imaging, clinical practice and research are increasingly shifting toward defining the clinical implications of plaque morphology and patient outcomes. Numerous data have been published regarding the relationship between pre-PCI lesion subsets on multi-modality imaging and post-PCI biomarker levels. In this review, we discuss the relationship between coronary plaque morphology estimated by invasive or noninvasive coronary imaging and the occurrence of PMI. Furthermore, this review underlines that the multi-modality coronary imaging approach will become the gold standard for invasive or noninvasive prediction of PMI in clinical practice.

  1. Exogenous Molecular Probes for Targeted Imaging in Cancer: Focus on Multi-modal Imaging

    Directory of Open Access Journals (Sweden)

    Bishnu P. Joshi

    2010-06-01

    Full Text Available Cancer is one of the major causes of mortality and morbidity in our healthcare system. Molecular imaging is an emerging methodology for the early detection of cancer, guidance of therapy, and monitoring of response. The development of new instruments and exogenous molecular probes that can be labeled for multi-modality imaging is critical to this process. Today, molecular imaging is at a crossroad, and new targeted imaging agents are expected to broadly expand our ability to detect and manage cancer. This integrated imaging strategy will permit clinicians to not only localize lesions within the body but also to manage their therapy by visualizing the expression and activity of specific molecules. This information is expected to have a major impact on drug development and understanding of basic cancer biology. At this time, a number of molecular probes have been developed by conjugating various labels to affinity ligands for targeting in different imaging modalities. This review will describe the current status of exogenous molecular probes for optical, scintigraphic, MRI and ultrasound imaging platforms. Furthermore, we will also shed light on how these techniques can be used synergistically in multi-modal platforms and how these techniques are being employed in current research.

  2. Performances evaluation of different open source

    Directory of Open Access Journals (Sweden)

    Arun Patel

    2016-06-01

    Full Text Available In open source DEMs such as SRTM, ASTER and Cartosat-1, various factors affecting the accuracy of satellite-based DEMs, such as errors during data collection, systematic errors and unknown errors that are geographically dependent on terrain conditions, cannot be avoided. For these reasons it is necessary to check, compare and validate the performance of the above-mentioned satellite-based DEMs. Accuracy assessment of these DEMs was done using DGPS points. For these points, an interpolated surface was developed using different interpolation techniques. The first step in generating the surface was converting the satellite-based DEM heights into linearly interpolated contour maps at 1 m intervals. Random sample points were then selected on the contour lines and the interpolated surface was generated using interpolation techniques such as IDW, GPI, RBF, OK, UK, LPI, TR and BI, which are commonly used in geomorphology research. This interpolated surface helps in proper representation of the terrain and was checked under different terrain surfaces. For validation against the DGPS points, heights were taken at ground control points and standard statistical measures such as ME and RMSE were applied. The investigation reveals that, among the DEMs used in the study, the Cartosat-1 (30 m) data product is better than SRTM (90 m) and ASTER (30 m) because it produced a low RMSE of 3.49 m without applying any interpolation method. It also reveals that the error can be reduced after applying interpolation techniques to these data. In the case of Cartosat-1 and SRTM, low RMSE and ME were produced by the BI method: the Cartosat-1 DEM had an RMSE of 3.36 m with an ME of −2.74 m, while the RMSE and ME of SRTM were 2.73 m and −0.36 m, respectively. BI is designed for image processing and can be used for imagery where a maximum height variation in satellite DEM and terrain
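    A small worked example of the two accuracy statistics used above, ME and RMSE, for DEM heights checked against DGPS reference heights; the numbers are illustrative, not the study's data.

```python
import numpy as np

dgps_height = np.array([412.3, 398.7, 405.1, 420.9, 415.4])  # DGPS reference (m)
dem_height = np.array([409.8, 401.2, 403.0, 424.5, 412.1])   # interpolated DEM (m)

error = dem_height - dgps_height
me = error.mean()                        # mean error (bias)
rmse = np.sqrt((error ** 2).mean())      # root mean square error
print(f"ME = {me:.2f} m, RMSE = {rmse:.2f} m")
```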

  3. Beyond Open Source Software: Solving Common Library Problems Using the Open Source Hardware Arduino Platform

    Directory of Open Access Journals (Sweden)

    Jonathan Younker

    2013-06-01

    Full Text Available Using open source hardware platforms like the Arduino, libraries have the ability to quickly and inexpensively prototype custom hardware solutions to common library problems. The authors present the Arduino environment, what it is, what it does, and how it was used at the James A. Gibson Library at Brock University to create a production portable barcode-scanning utility for in-house use statistics collection as well as a prototype for a service desk statistics tabulation program’s hardware interface.
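    As an illustration of how such a device is typically paired with host-side software, the following is a hedged sketch (not the library's actual code) of a small Python logger that reads barcode strings an Arduino prints over USB serial and appends them, time-stamped, to a CSV file; the serial port name, baud rate and file name are assumptions.

```python
import csv
import datetime

import serial   # pyserial

PORT = "/dev/ttyUSB0"   # assumed port; e.g. "COM3" on Windows
BAUD = 9600             # assumed baud rate matching the Arduino sketch

with serial.Serial(PORT, BAUD, timeout=1) as link, \
        open("scan_log.csv", "a", newline="") as log:
    writer = csv.writer(log)
    while True:
        # The Arduino is assumed to print one barcode string per line.
        line = link.readline().decode("ascii", errors="ignore").strip()
        if line:
            writer.writerow([datetime.datetime.now().isoformat(), line])
            log.flush()
```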

  4. OpenADR Open Source Toolkit: Developing Open Source Software for the Smart Grid

    Energy Technology Data Exchange (ETDEWEB)

    McParland, Charles

    2011-02-01

    Demand response (DR) is becoming an increasingly important part of power grid planning and operation. The advent of the Smart Grid, which mandates its use, further motivates selection and development of suitable software protocols to enable DR functionality. The OpenADR protocol has been developed and is being standardized to serve this goal. We believe that the development of a distributable, open source implementation of OpenADR will benefit this effort and motivate critical evaluation of its capabilities by the wider community for providing wide-scale DR services.

  5. Task Characterisation and Cross-Platform Programming Through System Identification

    Directory of Open Access Journals (Sweden)

    Roberto Iglesias

    2008-11-01

    Full Text Available Developing robust and reliable control code for autonomous mobile robots is difficult, because the interaction between a physical robot and the environment is highly complex, subject to noise and variation, and therefore partly unpredictable. This means that to date it is not possible to predict robot behaviour based on theoretical models. Instead, current methods to develop robot control code still require a substantial trial-and-error component in the software design process. Such iterative refinement could be reduced, we argue, if a more profound theoretical understanding of robot-environment interaction existed. In this paper, we therefore present a modelling method that generates a faithful model of a robot's interaction with its environment, based on data logged while observing a physical robot's behaviour. Because this modelling method - nonlinear modelling using polynomials - is commonly used in the engineering discipline of system identification, we refer to it here as "robot identification". We show in this paper that using robot identification to obtain a computer model of robot-environment interaction offers several distinct advantages (a minimal fitting sketch follows after the list):
    1. Very compact representations (one-line programs) of the robot control program are generated;
    2. The model can be analysed, for example through sensitivity analysis, leading to a better understanding of the essential parameters underlying the robot's behaviour; and
    3. The generated, compact robot code can be used for cross-platform robot programming, allowing fast transfer of robot code from one type of robot to another.
    We demonstrate these points through experiments with a Magellan Pro and a Nomad 200 mobile robot.
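    The sketch referred to above: a minimal, hypothetical example of fitting a low-order polynomial model from logged sensor readings to observed motor commands, then reusing the fitted polynomial as a compact control law. The sonar-to-turn-rate setting and all data are synthetic assumptions, not the paper's experiments.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
sonar = rng.uniform(0.2, 3.0, size=(500, 2))   # logged left/right range readings
# Observed turn command while some existing controller drove the robot.
diff = sonar[:, 0] - sonar[:, 1]
turn = 0.8 * diff - 0.3 * diff ** 3 + rng.normal(0, 0.02, size=500)

# Fit a degree-3 polynomial model: sensor readings -> motor command.
model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
model.fit(sonar, turn)

coef = model.named_steps["linearregression"].coef_
print(np.round(coef, 2))             # the compact, transferable "one-line" model
print(model.predict([[2.0, 0.5]]))   # turn command for a new sensor reading
```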

  6. Interactive, open source, travel time scenario modelling: tools to facilitate participation in health service access analysis.

    Science.gov (United States)

    Fisher, Rohan; Lassa, Jonatan

    2017-04-18

    Modelling travel time to services has become a common public health tool for planning service provision but the usefulness of these analyses is constrained by the availability of accurate input data and limitations inherent in the assumptions and parameterisation. This is particularly an issue in the developing world where access to basic data is limited and travel is often complex and multi-modal. Improving the accuracy and relevance in this context requires greater accessibility to, and flexibility in, travel time modelling tools to facilitate the incorporation of local knowledge and the rapid exploration of multiple travel scenarios. The aim of this work was to develop simple open source, adaptable, interactive travel time modelling tools to allow greater access to and participation in service access analysis. Described are three interconnected applications designed to reduce some of the barriers to the more wide-spread use of GIS analysis of service access and allow for complex spatial and temporal variations in service availability. These applications are an open source GIS tool-kit and two geo-simulation models. The development of these tools was guided by health service issues from a developing world context but they present a general approach to enabling greater access to and flexibility in health access modelling. The tools demonstrate a method that substantially simplifies the process for conducting travel time assessments and demonstrate a dynamic, interactive approach in an open source GIS format. In addition this paper provides examples from empirical experience where these tools have informed better policy and planning. Travel and health service access is complex and cannot be reduced to a few static modeled outputs. The approaches described in this paper use a unique set of tools to explore this complexity, promote discussion and build understanding with the goal of producing better planning outcomes. The accessible, flexible, interactive and
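    A minimal sketch of the core of such a travel-time model, assuming a raster cost surface and a single origin: accumulate the minimum travel time across a grid of per-cell crossing times with Dijkstra's algorithm. The 4-neighbour move model, the cell times and the fast "road" column are simplifying assumptions for illustration, not the published tools' implementation.

```python
import heapq

import numpy as np

def travel_time_surface(cell_minutes, origin):
    """Minimum minutes from origin to every cell, 4-neighbour moves."""
    rows, cols = cell_minutes.shape
    best = np.full((rows, cols), np.inf)
    best[origin] = 0.0
    heap = [(0.0, origin)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > best[r, c]:
            continue                       # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # Edge cost: average crossing time of the two cells.
                nt = t + 0.5 * (cell_minutes[r, c] + cell_minutes[nr, nc])
                if nt < best[nr, nc]:
                    best[nr, nc] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return best

minutes_per_cell = np.full((50, 50), 2.0)   # walking speed everywhere
minutes_per_cell[:, 25] = 0.2               # one fast "road" column (another mode)
clinic = (0, 0)
surface = travel_time_surface(minutes_per_cell, clinic)
print(round(float(surface[49, 49]), 1))     # minutes from the far corner
```

    Editing the cost raster (for example, changing the road column's crossing time) and re-running is the kind of rapid scenario exploration the abstract argues for.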

  7. Open-Source as a strategy for operational software - the case of Enki

    Science.gov (United States)

    Kolberg, Sjur; Bruland, Oddbjørn

    2014-05-01

    Since 2002, SINTEF Energy has been developing what is now known as the Enki modelling system. This development has been financed by Norway's largest hydropower producer Statkraft, motivated by a desire for distributed hydrological models in operational use. As the owner of the source code, Statkraft has recently decided on Open Source as a strategy for further development, and for migration from an R&D context to operational use. A cooperation project is currently being carried out between SINTEF Energy, seven large Norwegian hydropower producers including Statkraft, three universities and one software company. Of course, the most immediate task is that of software maturing. A more important challenge, however, is one of gaining experience within the operational hydropower industry. A transition from lumped to distributed models is likely to also require revision of measurement programs, calibration strategy, and the use of GIS and modern data sources like weather radar and satellite imagery. On the other hand, map-based visualisations enable a richer information exchange between hydrologic forecasters and power market traders. The operating context of a distributed hydrology model within hydropower planning is far from settled. Being both a modelling framework and a library of plugin routines to build models from, Enki supports the flexibility needed in this situation. Recent development has separated the core from the user interface, paving the way for a scripting API, cross-platform compilation, and front-end programs serving different degrees of flexibility, robustness and security. The open source strategy invites anyone to use Enki and to develop and contribute new modules. Once tested, the same modules are available for the operational versions of the program. A core challenge is to offer rigid testing procedures and mechanisms to reject routines in an operational setting, without limiting the experimentation with new modules. The Open Source strategy also has

  8. A Survey of Open Source Tools for Business Intelligence

    DEFF Research Database (Denmark)

    Thomsen, Christian; Pedersen, Torben Bach

    2009-01-01

    The industrial use of open source Business Intelligence (BI) tools is becoming more common, but is still not as widespread as for other types of software. It is therefore of interest to explore which possibilities are available for open source BI and compare the tools. In this survey paper, we...

  9. How Hard Can It Be? : Developing in Open Source

    Directory of Open Access Journals (Sweden)

    Rosalie Blake

    2009-06-01

    Full Text Available In 2000 a small public library system in New Zealand developed and released Koha, the world’s first open source library management system. This is the story of how that came to pass and why, and of the lessons learnt in their first foray into developing in open source.

  10. Open-Source Data and the Study of Homicide.

    Science.gov (United States)

    Parkin, William S; Gruenewald, Jeff

    2015-07-20

    To date, no discussion has taken place in the social sciences as to the appropriateness of using open-source data to augment, or replace, official data sources in homicide research. The purpose of this article is to examine whether open-source data have the potential to be used as a valid and reliable data source in testing theory and studying homicide. Official and open-source homicide data were collected as a case study in a single jurisdiction over a 1-year period. The data sets were compared to determine whether open sources could recreate the population of homicides and variable responses collected in official data. Open-source data were able to replicate the population of homicides identified in the official data. Moreover, for every variable measured, the open sources captured as much of the information presented in the official data, or more. Variables not available in official data, but potentially useful for testing theory, were also identified in the open sources. The results of the case study show that open-source data are potentially as effective as official data in identifying individual- and situational-level characteristics, provide access to variables not found in official homicide data, and offer geographic data that can be used to link macro-level characteristics to homicide events.

  11. Open Source Initiative Powers Real-Time Data Streams

    Science.gov (United States)

    2014-01-01

    Under an SBIR contract with Dryden Flight Research Center, Creare Inc. developed a data collection tool called the Ring Buffered Network Bus. The technology has now been released under an open source license and is hosted by the Open Source DataTurbine Initiative. DataTurbine allows anyone to stream live data from sensors, labs, cameras, ocean buoys, cell phones, and more.

  12. Open source engineering and sustainability tools for the built environment

    NARCIS (Netherlands)

    Coenders, J.L.

    2013-01-01

    This paper presents two novel open source software developments for design and engineering in the built environment. The first development, called "sustainability-open" [1], aims at providing open source design, analysis and assessment software source code for the (environmental) performance of building

  13. Evaluation and Customization of Different Open Source Desktop GIS Software

    Institute of Scientific and Technical Information of China (English)

    QIU Ruqiong; LI Bing

    2012-01-01

    This paper gives a general evaluation of three existing popular free and open source desktop GIS projects, according to the selected evaluation criteria. To further the understanding of open source software, this paper also presents a customization example of QGIS with Python and PyQt.

  14. Integrating an Automatic Judge into an Open Source LMS

    Science.gov (United States)

    Georgouli, Katerina; Guerreiro, Pedro

    2011-01-01

    This paper presents the successful integration of the evaluation engine of Mooshak into the open source learning management system Claroline. Mooshak is an open source online automatic judge that has been used for international and national programming competitions. Although it was originally designed for programming competitions, Mooshak has also…

  15. Open source engineering and sustainability tools for the built environment

    NARCIS (Netherlands)

    Coenders, J.L.

    2013-01-01

    This paper presents two novel open source software developments for design and engineering in the built environment. The first development, called "sustainability-open" [1], aims at providing open source design, analysis and assessment software source code for (environmental) performance of

  16. Migrations of the Mind: The Emergence of Open Source Education

    Science.gov (United States)

    Glassman, Michael; Bartholomew, Mitchell; Jones, Travis

    2011-01-01

    The authors describe an Open Source approach to education. They define Open Source Education (OSE) as a teaching and learning framework where the use and presentation of information is non-hierarchical, malleable, and subject to the needs and contributions of students as they become "co-owners" of the course. The course transforms itself into an…

  17. Open Source Library Management Systems: A Multidimensional Evaluation

    Science.gov (United States)

    Balnaves, Edmund

    2008-01-01

    Open source library management systems have improved steadily in the last five years. They now present a credible option for small to medium libraries and library networks. An approach to their evaluation is proposed that takes account of three additional dimensions that only open source can offer: the developer and support community, the source…

  18. Open Source Communities in Technical Writing: Local Exigence, Global Extensibility

    Science.gov (United States)

    Conner, Trey; Gresham, Morgan; McCracken, Jill

    2011-01-01

    By offering open-source software (OSS)-based networks as an affordable technology alternative, we partnered with a nonprofit community organization. In this article, we narrate the client-based experiences of this partnership, highlighting the ways in which OSS and open-source culture (OSC) transformed our students' and our own expectations of…

  20. BOOK REVIEW: The Success of Open Source by Steven Weber

    Directory of Open Access Journals (Sweden)

    Eric Lease Morgan

    2007-12-01

    Full Text Available The Success of Open Source by Steven Weber details the history, process, motivations, and possible long-term effects of open source software (OSS). Weber’s book can be used as a set of guidelines – a description of a framework – for building software solutions for the computing problems facing libraries.

  2. A Multi-Modal Control Using a Hybrid Pole-Placement-Integral Resonant Controller (PPIR) with Experimental Investigations

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Basu, Biswajit

    2011-01-01

    Control of multi-modal structural vibrations has been an important and challenging problem in flexible structural systems. This paper proposes a new vibration control algorithm for multi-modal structural control. The proposed algorithm combines a pole-placement controller with an integral resonant controller. The pole-placement controller is used to achieve a target equivalent modal viscous damping in the system and helps in the suppression of higher modes, which contribute to the vibration response of flexible structures. The integral resonant controller successfully reduces the low frequency vibrations, e.g. caused by broad-band turbulent wind excitations. Hence, the proposed hybrid controller can effectively suppress complex multi-modal vibrations in flexible systems. Both numerical and experimental studies have been carried out to demonstrate the effectiveness of the proposed algorithm using…
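
    A minimal, hedged sketch of the pole-placement half of such a scheme (not the paper's PPIR controller): state feedback is computed with SciPy's place_poles so that a single vibration mode reaches a chosen target damping ratio. The mode frequency, damping values and gains below are illustrative assumptions only.

      # Illustrative only: pole placement on one vibration mode to reach a target
      # equivalent viscous damping ratio (the integral resonant part is omitted).
      import numpy as np
      from scipy.signal import place_poles

      wn, zeta = 2 * np.pi * 1.5, 0.01                  # open-loop frequency (rad/s) and damping
      A = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
      B = np.array([[0.0], [1.0]])                      # modal force input

      zeta_target = 0.10                                # desired equivalent modal damping
      desired = np.roots([1.0, 2 * zeta_target * wn, wn**2])
      K = place_poles(A, B, desired).gain_matrix        # state feedback u = -K x

      print("closed-loop poles:", np.linalg.eigvals(A - B @ K))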

  3. Open source system options for librarians and archivists

    CERN Document Server

    Tomer, Christinger

    2017-01-01

    The importance of open source systems in the context of libraries and archives is perhaps greater now than ever before. This book explains the essentials of open source systems to benefit academic and public librarians and archivists who have a vested interest in the future of integrated online library systems. Author Christinger Tomer, who has studied open source systems for more than two decades and used them extensively in both teaching and consulting, provides brief histories of both library automation and open source software, in the latter instance focusing on aspects that have more directly influenced library and archival computing. He then describes and analyzes key open source systems and critically compares them to commercial systems in terms of design, functionality, and ease of administration. The book concludes with an in-depth description of how these systems are currently being employed as well as insightful predictions about how this segment of the software environment is likely to evolve.

  4. Open source software and minority languages: a priceless opportunity

    Directory of Open Access Journals (Sweden)

    Jordi Mas

    2003-04-01

    Full Text Available Open source software is a form of software that gives its users freedom. With the advent of the Internet, open source software has consolidated as a technically viable, financially sustainable alternative to proprietary software. Languages such as Breton, Galician, Gaelic and Catalan have seen very little development in the world of proprietary software because of the limitations imposed. In contrast, in the world of open source software these languages have been developed with notable success. Open source projects of the importance of the Mozilla browser, the GNOME environment and the GNU/Linux system have complete or partial translations in all these languages. Open source software presents an unprecedented opportunity for the development of minority languages, such as Catalan, in new technologies thanks to the freedom that they guarantee us.

  5. Institutional and pedagogical criteria for productive open source learning environments

    DEFF Research Database (Denmark)

    Svendsen, Brian Møller; Ryberg, Thomas; Semey, Ian Peter;

    2004-01-01

    In this article we present some institutional and pedagogical criteria for making an informed decision in relation to identifying and choosing a productive open source learning environment. We argue that three concepts (implementation, maintainability and further development) are important when considering the sustainability and cost efficiency of an open source system, and we outline a set of key points for evaluating an open source software in terms of cost of system adoption. Furthermore we identify a range of pedagogical concepts and criteria to emphasize the importance of considering the relation between the local pedagogical practice and the pedagogical design of the open source learning environment. This we illustrate through an analysis of an open source system and our own pedagogical practice at Aalborg University, Denmark (POPP).

  8. Automatic quantification of multi-modal rigid registration accuracy using feature detectors.

    Science.gov (United States)

    Hauler, F; Furtado, H; Jurisic, M; Polanec, S H; Spick, C; Laprie, A; Nestle, U; Sabatini, U; Birkfellner, W

    2016-07-21

    In radiotherapy, the use of multi-modal images can improve tumor and target volume delineation. Images acquired at different times by different modalities need to be aligned into a single coordinate system by 3D/3D registration. State of the art methods for validation of registration are visual inspection by experts and fiducial-based evaluation. Visual inspection is a qualitative, subjective measure, while fiducial markers sometimes suffer from limited clinical acceptance. In this paper we present an automatic, non-invasive method for assessing the quality of intensity-based multi-modal rigid registration using feature detectors. After registration, interest points are identified on both image data sets using either speeded-up robust features or Harris feature detectors. The quality of the registration is defined by the mean Euclidean distance between matching interest point pairs. The method was evaluated on three multi-modal datasets: an ex vivo porcine skull (CT, CBCT, MR), seven in vivo brain cases (CT, MR) and 25 in vivo lung cases (CT, CBCT). Both a qualitative (visual inspection by radiation oncologist) and a quantitative (mean target registration error-mTRE-based on selected markers) method were employed. In the porcine skull dataset, the manual and Harris detectors give comparable results but both overestimated the gold standard mTRE based on fiducial markers. For instance, for CT-MR-T1 registration, the mTREman (based on manually annotated landmarks) was 2.2 mm whereas mTREHarris (based on landmarks found by the Harris detector) was 4.1 mm, and mTRESURF (based on landmarks found by the SURF detector) was 8 mm. In lung cases, the difference between mTREman and mTREHarris was less than 1 mm, while the difference between mTREman and mTRESURF was up to 3 mm. The Harris detector performed better than the SURF detector with a resulting estimated registration error close to the gold standard. Therefore the Harris detector was shown to be the more suitable detector.
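
    As a rough illustration of the idea (not the paper's 3D pipeline, and with OpenCV's ORB standing in for the SURF/Harris detectors used in the study; for truly multi-modal pairs the descriptors would need to be modality-insensitive), one can match interest points between two already-registered 2D slices and report the mean Euclidean distance of the matched pairs:

      # Hedged 2D sketch: detect/match interest points in two registered slices and
      # report the mean Euclidean distance of matched pairs as a quality estimate.
      import cv2
      import numpy as np

      def registration_error(img_a, img_b, max_matches=200):
          orb = cv2.ORB_create(nfeatures=1000)
          kp_a, des_a = orb.detectAndCompute(img_a, None)
          kp_b, des_b = orb.detectAndCompute(img_b, None)
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
          dists = [np.hypot(kp_a[m.queryIdx].pt[0] - kp_b[m.trainIdx].pt[0],
                            kp_a[m.queryIdx].pt[1] - kp_b[m.trainIdx].pt[1])
                   for m in matches[:max_matches]]
          return float(np.mean(dists))                  # mean distance in pixels

      # usage (two co-registered, same-size grayscale slices):
      # err = registration_error(cv2.imread("ct_slice.png", 0), cv2.imread("mr_slice.png", 0))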

  10. Multi-modal hard x-ray imaging with a laboratory source using selective reflection from a mirror.

    Science.gov (United States)

    Pelliccia, Daniele; Paganin, David M

    2014-04-01

    Multi-modal hard x-ray imaging sensitive to absorption, refraction, phase and scattering contrast is demonstrated using a simple setup implemented with a laboratory source. The method is based on selective reflection at the edge of a mirror, aligned to partially reflect a pencil x-ray beam after its interaction with a sample. Quantitative scattering contrast from a test sample is experimentally demonstrated using this method. Multi-modal imaging of a house fly (Musca domestica) is shown as proof of principle of the technique for biological samples.

  11. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    Science.gov (United States)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    The following reports are presented on this project: A first year progress report on: Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; A second year progress report on: Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.

  12. A multi-biometric feature-fusion framework for improved uni-modal and multi-modal human identification

    CSIR Research Space (South Africa)

    Brown, K

    2016-05-01

    Full Text Available provide important guidelines that enable the systematic implementation of multi-modal biometric systems for future research and applications. Feature-level fusion is in particular need of these guidelines because of the "curse of dimensionality" problem. The training samples were sequentially chosen from one to five and the rest were used for testing. To the best of our knowledge there are no studies that fuse face and fingerprint data acquired from the SDUMLA multi-modal database.

  13. Multi-Modal Reasoning Medical Diagnosis System Integrated With Probabilistic Reasoning

    Institute of Scientific and Technical Information of China (English)

    Jia Tian; Xun Chen; Sheng-Ping Dong

    2005-01-01

    In this paper, a Multi Modal Reasoning (MMR) method integrated with probabilistic reasoning is proposed for the diagnosis support module of the open eHealth platform. MMR is based on both Rule Based Reasoning (RBR) and Case Based Reasoning (CBR). It is not only applied to the identification of diseases and syndromes based on medical guidelines, but also deals with exceptional cases and individual therapies in order to improve diagnostic accuracy. Moreover, a new rule expression frame is introduced to deal with uncertainty, which can represent and process vague, imprecise, and incomplete information. Furthermore, this system is capable of updating the attributes of rules and inducing rules with a small data sample.
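
    A toy sketch of the general pattern (in no way the MMR system itself): rules carry a certainty factor, and a stored-case lookup acts as a fallback when no rule fires confidently. The findings, diagnoses and threshold below are made up purely for illustration.

      # Toy rule-based reasoning with certainty factors and a case-based fallback.
      RULES = [
          # (required findings, diagnosis, certainty factor)
          ({"fever", "cough", "dyspnea"}, "pneumonia", 0.7),
          ({"fever", "rash"}, "measles", 0.6),
      ]
      CASES = [({"fever", "cough"}, "bronchitis")]      # previously solved cases

      def diagnose(findings, threshold=0.5):
          findings = set(findings)
          scored = [(cf * len(req & findings) / len(req), dx) for req, dx, cf in RULES]
          best_cf, best_dx = max(scored)
          if best_cf >= threshold:                      # rule-based reasoning wins
              return best_dx, best_cf
          # case-based fallback: most similar stored case (Jaccard similarity)
          sim, dx = max((len(c & findings) / len(c | findings), d) for c, d in CASES)
          return dx, sim

      print(diagnose({"fever", "cough", "dyspnea"}))    # -> ('pneumonia', 0.7)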

  14. Multi-modal vibration energy harvesting approach based on nonlinear oscillator arrays under magnetic levitation

    Science.gov (United States)

    Abed, I.; Kacem, N.; Bouhaddi, N.; Bouazizi, M. L.

    2016-02-01

    We propose a multi-modal vibration energy harvesting approach based on arrays of coupled levitated magnets. The equations of motion, which include the magnetic nonlinearity and the electromagnetic damping, are solved using the harmonic balance method coupled with the asymptotic numerical method. A multi-objective optimization procedure is introduced and performed using a non-dominated sorting genetic algorithm for the cases of small magnet arrays in order to select the optimal solutions in terms of performance, by bringing the eigenmodes close to each other in terms of frequencies and amplitudes. Thanks to the nonlinear coupling and the modal interactions, even for only three coupled magnets, the proposed method enables harvesting the vibration energy in the operating frequency range of 4.6-14.5 Hz, with a bandwidth of 190% and a normalized power of 20.2 mW cm^-3 g^-2.

  15. Development of internal solitary waves in various thermocline regimes - a multi-modal approach

    Directory of Open Access Journals (Sweden)

    T. Gerkema

    2003-01-01

    Full Text Available A numerical analysis is made on the appearance of oceanic internal solitary waves in a multi-modal setting. This is done for observed profiles of stratification from the Sulu Sea and the Bay of Biscay, in which thermocline motion is dominated by the first and third mode, respectively. The results show that persistent solitary waves occur only in the former case, in accordance with the observations. In the Bay of Biscay much energy is transferred from the third mode to lower modes, implying that a uni-modal approach would not have been appropriate. To elaborate on these results in a systematic way, a simple model for the stratification is used; an interpretation is given in terms of regimes of thermocline strength.

  16. Multi-modal human-machine interface of a telerobotic system for remote arc welding

    Institute of Scientific and Technical Information of China (English)

    Li Haichao; Gao Hongming; Wu Lin; Zhang Guangjun

    2008-01-01

    In a telerobotic system for remote welding, the human-machine interface is one of the most important factors for enhancing capability and efficiency. This paper presents an architecture design of a human-machine interface for a welding telerobotic system: a welding multi-modal human-machine interface. The human-machine interface integrates several control modes, namely shared control, teleteaching, supervisory control and local autonomous control. A space mouse, a panoramic vision camera and a graphics simulation system are also integrated into the human-machine interface for welding teleoperation. Finally, weld seam tracing and welding experiments on a U-shape seam are performed under these control modes respectively. The results show that the system has better performance in human-machine interaction and in complex-environment welding.

  17. Creating multi-modal logistics centers: Prospect for development in Central Asia

    Directory of Open Access Journals (Sweden)

    Nodir Jumaniyazov

    2010-10-01

    Full Text Available We have all witnessed several summits of the so-called G-20 aimed at overcoming the crisis and attempting to delineate the "look" of the new rules of the emerging new world economic system. However, according to many experts, these rules will not be able to radically change the current system of economic relations, which is based on the processes of globalization and the economic interpenetration of the world. One can list the many elements of the system. Among them, as a manifestation of a growing specialization of production and a deepening of cooperative relations in the world, a special role is played by multi-modal logistics centers (MLC) of both regional and global concern. While stock and commodity exchanges are the links in the global economy, multi-modal logistics centers serve as their practical and technical support.

  18. Exploiting Higher Order and Multi-modal Features for 3D Object Detection

    DEFF Research Database (Denmark)

    Kiforenko, Lilita

    2017-01-01

    The initial work introduces a feature descriptor that uses edge categorisation in combination with a local multi-modal histogram descriptor in order to detect objects with little or no texture or surface variation. The comparison is performed with a state-of-the-art method, which is outperformed by the presented edge descriptor. The second work presents an approach for robust detection of multiple objects by combining feature descriptors that capture both surface and edge information. This work presents quantitative results, where the performance of the developed feature descriptor combination is compared with a state-of-the-art descriptor and to this date, constant improvements of it are presented. The evaluation of PPFs is performed on seven publicly available datasets and it presents not only the performance comparison towards other popularly used methods, but also investigations of the space of possible point pair relations.

  19. Using Multi-Modal 3D Contours and Their Relations for Vision and Robotics

    DEFF Research Database (Denmark)

    Baseski, Emre; Pugeault, Nicolas; Kalkan, Sinan

    2010-01-01

    In this work, we make use of 3D contours and relations between them (namely, coplanarity, cocolority, distance and angle) for four different applications in the area of computer vision and vision-based robotics. Our multi-modal contour representation covers both geometric and appearance information. We show the potential of reasoning with global entities in the context of visual scene analysis for driver assistance, depth prediction, robotic grasping and grasp learning. We argue that such 3D global reasoning processes complement widely-used 2D local approaches such as bag-of-features, since 3D relations are invariant under camera transformations and 3D information can be directly linked to actions. We therefore stress the necessity of including both global and local features with different spatial dimensions within a representation. We also discuss the importance of an efficient use…

  20. A Distance Measure Comparison to Improve Crowding in Multi-Modal Problems.

    Energy Technology Data Exchange (ETDEWEB)

    D. Todd Vollmer; Terence Soule; Milos Manic

    2010-08-01

    Solving multi-modal optimization problems is of interest to researchers solving real world problems in areas such as control systems and power engineering tasks. Extensions of simple Genetic Algorithms, particularly types of crowding, have been developed to help solve these types of problems. This paper examines the performance of two distance measures, Mahalanobis and Euclidean, exercised in the processing of two different crowding type implementations against five minimization functions. Within the context of the experiments, empirical evidence shows that the statistics-based Mahalanobis distance measure, when used in Deterministic Crowding, produces results equivalent to a Euclidean measure. In the case of Restricted Tournament Selection, use of Mahalanobis found on average 40% more of the global optima, maintained a 35% higher peak count and produced an average final best fitness value that is 3 times better.
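
    The two distance measures under comparison are easy to state; a minimal NumPy sketch (the population size, dimensionality and the use of the current population's covariance are illustrative assumptions) is:

      # Euclidean vs. Mahalanobis distance over a toy GA population.
      import numpy as np

      def euclidean(x, y):
          return float(np.linalg.norm(x - y))

      def mahalanobis(x, y, vi):                        # vi = inverse covariance matrix
          d = x - y
          return float(np.sqrt(d @ vi @ d))

      pop = np.random.rand(50, 5)                       # 50 candidate solutions, 5 genes each
      vi = np.linalg.inv(np.cov(pop, rowvar=False))     # statistics of the current population
      x, y = pop[0], pop[1]
      print(euclidean(x, y), mahalanobis(x, y, vi))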

  1. The evolution of gadolinium based contrast agents: from single-modality to multi-modality

    Science.gov (United States)

    Zhang, Li; Liu, Ruiqing; Peng, Hui; Li, Penghui; Xu, Zushun; Whittaker, Andrew K.

    2016-05-01

    Gadolinium-based contrast agents are extensively used as magnetic resonance imaging (MRI) contrast agents due to their outstanding signal enhancement and ease of chemical modification. However, it is increasingly recognized that information obtained from single-modal molecular imaging cannot satisfy the higher requirements on efficiency and accuracy for clinical diagnosis and medical research, due to the limitations and defects rooted in the single molecular imaging technique itself. To compensate for the deficiencies of single-function magnetic resonance imaging contrast agents, the combination of multi-modality imaging has become a research hotspot in recent years. This review presents an overview of the recent developments in the functionalization of gadolinium-based contrast agents and their applications in biomedicine.

  2. Multi-modal Person Localization And Emergency Detection Using The Kinect

    Directory of Open Access Journals (Sweden)

    Georgios Galatas

    2013-01-01

    Full Text Available Person localization is of paramount importance in an ambient intelligence environment since it is the first step towards context-awareness. In this work, we present the development of a novel system for multi-modal person localization and emergency detection in an assistive ambient intelligence environment for the elderly. Our system is based on the depth sensor and microphone array of 2 Kinect devices. We use skeletal tracking conducted on the depth images and sound source localization conducted on the captured audio signal to estimate the location of a person. In conjunction with the location information, automatic speech recognition is used as a natural and intuitive means of communication in order to detect emergencies and accidents, such as falls. Our system attained high accuracy for both the localization and speech recognition tasks, verifying its effectiveness.

  3. Incidental acquisition of foreign language vocabulary through brief multi-modal exposure.

    Science.gov (United States)

    Bisson, Marie-Josée; van Heuven, Walter J B; Conklin, Kathy; Tunney, Richard J

    2013-01-01

    First language acquisition requires relatively little effort compared to foreign language acquisition and happens more naturally through informal learning. Informal exposure can also benefit foreign language learning, although evidence for this has been limited to speech perception and production. An important question is whether informal exposure to spoken foreign language also leads to vocabulary learning through the creation of form-meaning links. Here we tested the impact of exposure to foreign language words presented with pictures in an incidental learning phase on subsequent explicit foreign language learning. In the explicit learning phase, we asked adults to learn translation equivalents of foreign language words, some of which had appeared in the incidental learning phase. Results revealed rapid learning of the foreign language words in the incidental learning phase showing that informal exposure to multi-modal foreign language leads to foreign language vocabulary acquisition. The creation of form-meaning links during the incidental learning phase is discussed.

  6. Development of Advanced Multi-Modality Radiation Treatment Planning Software for Neutron Radiotherapy and Beyond

    Energy Technology Data Exchange (ETDEWEB)

    Nigg, D; Wessol, D; Wemple, C; Harkin, G; Hartmann-Siantar, C

    2002-08-20

    The Idaho National Engineering and Environmental Laboratory (INEEL) has long been active in the development of advanced Monte Carlo-based computational dosimetry and treatment planning methods and software for advanced radiotherapy, with a particular focus on Neutron Capture Therapy (NCT) and, to a somewhat lesser extent, Fast-Neutron Therapy. The most recent INEEL software system of this type is known as SERA, Simulation Environment for Radiotherapy Applications. As a logical next step in the development of modern radiotherapy planning tools to support the most advanced research, INEEL and Lawrence Livermore National Laboratory (LLNL), the developers of the PEREGRINE computational engine for radiotherapy treatment planning applications, have recently launched a new project to collaborate in the development of a "next-generation" multi-modality treatment planning software system that will be useful for all modern forms of radiotherapy.

  7. Multi-Modal Ultra-Widefield Imaging Features in Waardenburg Syndrome

    Science.gov (United States)

    Choudhry, Netan; Rao, Rajesh C.

    2015-01-01

    Background Waardenburg syndrome is characterized by a group of features including telecanthus, a broad nasal root, synophrys of the eyebrows, piebaldism, heterochromia irides, and deaf-mutism. Hypopigmentation of the choroid is a unique feature of this condition examined with multi-modal Ultra-Widefield Imaging in this report. Material/Methods Report of a single case. Results Bilateral symmetric choroidal hypopigmentation was observed, with hypoautofluorescence in the region of hypopigmentation. Fluorescein angiography revealed a normal vasculature; however, a thickened choroid was seen on Enhanced-Depth Imaging Spectral-Domain OCT (EDI SD-OCT). Conclusion(s) Choroidal hypopigmentation is a unique feature of Waardenburg syndrome, which can be visualized with ultra-widefield fundus autofluorescence. The choroid may also be thickened in this condition and its thickness measured with EDI SD-OCT. PMID:26114849

  8. Tumor Lysing Genetically Engineered T Cells Loaded with Multi-Modal Imaging Agents

    Science.gov (United States)

    Bhatnagar, Parijat; Alauddin, Mian; Bankson, James A.; Kirui, Dickson; Seifi, Payam; Huls, Helen; Lee, Dean A.; Babakhani, Aydin; Ferrari, Mauro; Li, King C.; Cooper, Laurence J. N.

    2014-03-01

    Genetically-modified T cells expressing chimeric antigen receptors (CAR) exert anti-tumor effect by identifying tumor-associated antigen (TAA), independent of major histocompatibility complex. For maximal efficacy and safety of adoptively transferred cells, imaging their biodistribution is critical. This will determine if cells home to the tumor and assist in moderating cell dose. Here, T cells are modified to express CAR. An efficient, non-toxic process with potential for cGMP compliance is developed for loading high cell number with multi-modal (PET-MRI) contrast agents (Super Paramagnetic Iron Oxide Nanoparticles - Copper-64; SPION-64Cu). This can now be potentially used for 64Cu-based whole-body PET to detect T cell accumulation region with high-sensitivity, followed by SPION-based MRI of these regions for high-resolution anatomically correlated images of T cells. CD19-specific-CAR+SPIONpos T cells effectively target in vitro CD19+ lymphoma.

  9. Programmable aperture microscopy: A computational method for multi-modal phase contrast and light field imaging

    Science.gov (United States)

    Zuo, Chao; Sun, Jiasong; Feng, Shijie; Zhang, Minliang; Chen, Qian

    2016-05-01

    We demonstrate a simple and cost-effective programmable aperture microscope to realize multi-modal computational imaging by integrating a programmable liquid crystal display (LCD) into a conventional wide-field microscope. The LCD selectively modulates the light distribution at the rear aperture of the microscope objective, allowing numerous imaging modalities, such as bright field, dark field, differential phase contrast, quantitative phase imaging, multi-perspective imaging, and full resolution light field imaging, to be achieved and switched rapidly in the same setup, without requiring specialized hardware or any moving parts. We experimentally demonstrate the success of our method by imaging unstained cheek cells, profiling a microlens array, and changing perspective views of thick biological specimens. The post-exposure refocusing of a butterfly mouthpart and an RFP-labeled dicot stem cross-section is also presented to demonstrate the full resolution light field imaging capability of our system for both translucent and fluorescent specimens.
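
    Of the modalities listed, differential phase contrast is the simplest to express in code; a hedged sketch (assuming two co-registered intensity images captured with complementary left/right half-aperture patterns on the LCD) is:

      # Differential phase contrast from two complementary half-aperture captures.
      import numpy as np

      def dpc(i_left, i_right, eps=1e-9):
          """Normalised left-right difference; the sign tracks the phase gradient."""
          i_left = np.asarray(i_left, dtype=float)
          i_right = np.asarray(i_right, dtype=float)
          return (i_left - i_right) / (i_left + i_right + eps)

      # Bright/dark field follow the same pattern: display the matching aperture
      # mask on the LCD and capture one image per displayed pattern.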

  10. How Open Source Has Changed the Software Industry: Perspectives from Open Source Entrepreneurs

    Directory of Open Access Journals (Sweden)

    Risto Rajala

    2012-01-01

    Full Text Available The emergence of F/LOSS (free/libre open source software) has triggered several changes in the software industry. F/LOSS has been cited as an archetypal form of open innovation; it consists of the convergence and collaboration of like-minded parties. An increasing number of software firms have taken up this approach to link outsiders into their service development and product design. Also, software firms have increasingly grounded their business models in user-centric and service-oriented operations. This article describes a study that investigates these changes from the perspective of F/LOSS entrepreneurs. The findings are summarized into four issues that are critical in managing an F/LOSS business: (i) dealing with organizational changes in the innovation process; (ii) mastering user involvement; (iii) successfully using resources; and (iv) designing revenue models.

  11. Stability, structure and scale: improvements in multi-modal vessel extraction for SEEG trajectory planning.

    Science.gov (United States)

    Zuluaga, Maria A; Rodionov, Roman; Nowell, Mark; Achhala, Sufyan; Zombori, Gergely; Mendelson, Alex F; Cardoso, M Jorge; Miserocchi, Anna; McEvoy, Andrew W; Duncan, John S; Ourselin, Sébastien

    2015-08-01

    Brain vessels are among the most critical landmarks that need to be assessed for mitigating surgical risks in stereo-electroencephalography (SEEG) implantation. Intracranial haemorrhage is the most common complication associated with implantation, carrying significant associated morbidity. SEEG planning is done pre-operatively to identify avascular trajectories for the electrodes. In current practice, neurosurgeons have no assistance in the planning of electrode trajectories. There is great interest in developing computer-assisted planning systems that can optimise the safety profile of electrode trajectories, maximising the distance to critical structures. This paper presents a method that integrates the concepts of scale, neighbourhood structure and feature stability with the aim of improving robustness and accuracy of vessel extraction within a SEEG planning system. The developed method accounts for scale and vicinity of a voxel by formulating the problem within a multi-scale tensor voting framework. Feature stability is achieved through a similarity measure that evaluates the multi-modal consistency in vesselness responses. The proposed measurement allows the combination of multiple image modalities into a single image that is used within the planning system to visualise critical vessels. Twelve paired data sets from two image modalities available within the planning system were used for evaluation. The mean Dice similarity coefficient was 0.89 ± 0.04, representing a statistically significant improvement when compared to a semi-automated single human rater, single-modality segmentation protocol used in clinical practice (0.80 ± 0.03). Multi-modal vessel extraction is superior to semi-automated single-modality segmentation, indicating the possibility of safer SEEG planning, with reduced patient morbidity.
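
    For reference, the Dice similarity coefficient quoted in the evaluation, together with a very crude stand-in for multi-modal consistency (the paper's actual stability measure is more elaborate), can be sketched as follows; the placeholder volumes are random and illustrative only.

      # Dice similarity between two binary vessel segmentations, plus a naive
      # element-wise consistency combination of two modality vesselness maps.
      import numpy as np

      def dice(a, b):
          a, b = np.asarray(a, bool), np.asarray(b, bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def consistent_vesselness(v_mod1, v_mod2):
          return np.minimum(v_mod1, v_mod2)             # keep response only where both agree

      seg_auto = np.random.rand(64, 64, 64) > 0.7       # placeholder volumes
      seg_manual = np.random.rand(64, 64, 64) > 0.7
      print("Dice:", dice(seg_auto, seg_manual))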

  12. Classification algorithms with multi-modal data fusion could accurately distinguish neuromyelitis optica from multiple sclerosis.

    Science.gov (United States)

    Eshaghi, Arman; Riyahi-Alam, Sadjad; Saeedi, Roghayyeh; Roostaei, Tina; Nazeri, Arash; Aghsaei, Aida; Doosti, Rozita; Ganjgahi, Habib; Bodini, Benedetta; Shakourirad, Ali; Pakravan, Manijeh; Ghana'ati, Hossein; Firouznia, Kavous; Zarei, Mojtaba; Azimi, Amir Reza; Sahraian, Mohammad Ali

    2015-01-01

    Neuromyelitis optica (NMO) exhibits substantial similarities to multiple sclerosis (MS) in clinical manifestations and imaging results and has long been considered a variant of MS. With the advent of a specific biomarker in NMO, known as anti-aquaporin 4, this assumption has changed; however, the differential diagnosis remains challenging and it is still not clear whether a combination of neuroimaging and clinical data could be used to aid clinical decision-making. Computer-aided diagnosis is a rapidly evolving process that holds great promise to facilitate objective differential diagnoses of disorders that show similar presentations. In this study, we aimed to use a powerful method for multi-modal data fusion, known as multi-kernel learning, and performed automatic diagnosis of subjects. We included 30 patients with NMO, 25 patients with MS and 35 healthy volunteers and performed multi-modal imaging with T1-weighted high resolution scans, diffusion tensor imaging (DTI) and resting-state functional MRI (fMRI). In addition, subjects underwent clinical examinations and cognitive assessments. We included 18 a priori predictors from neuroimaging, clinical and cognitive measures in the initial model. We used 10-fold cross-validation to learn the importance of each modality, train and finally test the model performance. The mean accuracy in differentiating between MS and NMO was 88%, where visible white matter lesion load, normal appearing white matter (DTI) and functional connectivity had the most important contributions to the final classification. In a multi-class classification problem we distinguished between all of 3 groups (MS, NMO and healthy controls) with an average accuracy of 84%. In this classification, visible white matter lesion load, functional connectivity, and cognitive scores were the 3 most important modalities. Our work provides preliminary evidence that computational tools can be used to help make an objective differential diagnosis of NMO and MS.
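
    A hedged sketch of the multi-kernel idea (toy data, fixed rather than learned kernel weights, and illustrative modality blocks; not the authors' exact pipeline): one kernel per modality, combined and fed to a precomputed-kernel SVM under cross-validation.

      # One RBF kernel per modality, combined with weights, classified with a
      # precomputed-kernel SVM under stratified 10-fold cross-validation.
      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.model_selection import StratifiedKFold
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n = 90
      y = np.repeat([0, 1, 2], 30)                      # toy labels: NMO / MS / controls
      modalities = {                                    # toy feature blocks
          "lesion_load": rng.normal(size=(n, 5)),
          "dti": rng.normal(size=(n, 10)),
          "fmri_connectivity": rng.normal(size=(n, 20)),
      }
      weights = {"lesion_load": 0.4, "dti": 0.3, "fmri_connectivity": 0.3}

      K = sum(w * rbf_kernel(modalities[m]) for m, w in weights.items())

      accs = []
      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      for tr, te in cv.split(K, y):
          clf = SVC(kernel="precomputed").fit(K[np.ix_(tr, tr)], y[tr])
          accs.append(clf.score(K[np.ix_(te, tr)], y[te]))
      print("mean CV accuracy:", float(np.mean(accs)))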

  13. Multi-Modal Treatment Approach to Painful Rib Syndrome: Case Series and Review of the Literature.

    Science.gov (United States)

    Germanovich, Andrew; Ferrante, Francis Michael

    2016-03-01

    Mechanical chest wall pain is a common presenting complaint in the primary care office, emergency room, and specialty clinic. Diagnostic testing is often expensive due to similar presenting symptoms that may involve the heart or lungs. Since the chest wall biomechanics are poorly understood by many clinicians, few effective treatments are offered to patients with rib-related acute pain, which may lead to chronic pain. This case series and literature review illustrates biomechanics involved in the pathogenesis of rib-related chest wall pain and suggests an effective multi-modal treatment plan using interventional techniques with emphasis on manual manipulative techniques. Case series and literature review. Pain clinic in an academic medical center. This is a case series of 3 patients diagnosed with painful rib syndrome using osteopathic palpatory physical examination techniques. Ultrasound-guided intercostal nerve blocks were followed by manual manipulation of mechanically displaced ribs as a part of our multi-modal treatment plan. A review of the literature was undertaken to clarify nomenclature used in the description of rib-related pain, to describe the biomechanics involved in the pathogenesis of mechanical rib pain, and to illustrate the use of effective manual manipulation techniques. This review is introductory and not a complete review of all manual or interventional pain management techniques applicable to the treatment of mechanical rib-related pain. Manual diagnostic and therapeutic skills can be learned by physicians to treat biomechanically complex rib-related chest wall pain in combination with interventional image-guided techniques. Pain physicians should learn certain basic manual manipulation skills both for diagnostic and therapeutic purposes.

  14. Real-time multi-modal rigid registration based on a novel symmetric-SIFT descriptor

    Institute of Scientific and Technical Information of China (English)

    Jian Chen; Jie Tian

    2009-01-01

    The purpose of image registration is to spatially align two or more single-modality images taken at different times, or several images acquired by multiple imaging modalities. Intensity-based registration usually requires optimization of the similarity metric between the images. However, global optimization techniques are too time-consuming, and local optimization techniques frequently fail to search the global transformation space because of the large initial misalignment of the two images. Moreover, for large non-overlapping area registration, the similarity metric cannot reach its optimum value when the two images are properly registered. In order to solve these problems, we propose a novel Symmetric Scale Invariant Feature Transform (symmetric-SIFT) descriptor and develop a fast multi-modal image registration technique. The proposed technique automatically generates a lot of highly distinctive symmetric-SIFT descriptors for two images, and the registration is performed by matching the corresponding descriptors over two images. These descriptors are invariant to image scale and rotation, and are partially invariant to affine transformation. Moreover, these descriptors are symmetric to contrast, which makes them suitable for multi-modal image registration. The proposed technique abandons the optimization and similarity metric strategy. It works with near real-time performance, and can deal with the large non-overlapping and large initial misalignment situations. Test cases involving scale change, large non-overlapping areas, and large initial misalignment on computed tomography (CT) and magnetic resonance (MR) datasets show that it needs much less runtime and achieves better accuracy when compared to other algorithms. (C) 2009 National Natural Science Foundation of China and Chinese Academy of Sciences. Published by Elsevier Limited and Science in China Press. All rights reserved.
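
    The symmetric-SIFT descriptor itself is not reproduced here, but the matching-based rigid registration flow it plugs into can be sketched with OpenCV's standard SIFT (adequate for mono-modal pairs; the paper's contrast-symmetric descriptor is what makes the same flow work across modalities):

      # Descriptor matching followed by a RANSAC similarity-transform estimate.
      import cv2
      import numpy as np

      def rigid_register(fixed, moving):
          sift = cv2.SIFT_create()
          kp_f, des_f = sift.detectAndCompute(fixed, None)
          kp_m, des_m = sift.detectAndCompute(moving, None)
          knn = cv2.BFMatcher().knnMatch(des_m, des_f, k=2)   # Lowe ratio test below
          good = [m for m, n in knn if m.distance < 0.75 * n.distance]
          src = np.float32([kp_m[m.queryIdx].pt for m in good])
          dst = np.float32([kp_f[m.trainIdx].pt for m in good])
          M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
          return M                                      # 2x3 matrix mapping moving -> fixed

      # warped = cv2.warpAffine(moving, rigid_register(fixed, moving),
      #                         (fixed.shape[1], fixed.shape[0]))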

  16. Multi-modal ECG Holter system for sleep-disordered breathing screening: a validation study.

    Science.gov (United States)

    Poupard, Laurent; Mathieu, Marc; Goldman, Michael; Chouchou, Florian; Roche, Frédéric

    2012-09-01

    The high prevalence of sleep disordered breathing (SDB) among heart disease patients is becoming increasingly recognized. A reliable SDB screening tool, well adapted to cardiologists' practice, would be very useful for the management of these patients. We assessed a novel multi-modal electrocardiogram (ECG) Holter which incorporated both thoracic impedance and pulse oximetry signals. We compared, in a home setting, a standard condition for Holter recordings, results from the novel device to a classical ambulatory polygraph in subjects with suspected SDB. The analysis of cardiac arrhythmias in relationship with SDB is also presented. A total of 118 patients clinically suspected of having SDB were evaluated (mean age 57 ± 14 years, mean body mass index [BMI] 32 ± 6 kg/m(2)). The new device allows calculation of a new index, the thoracic impedance (TI) disturbance index (TIDI+), evaluated from the TI and SpO(2) signals recorded by the Holter monitor. In the population under study, 93% had more than 70% of usable TI signal and 95% had more than 90% for SpO(2) during sleep time recording. Screening performance results based on automatic analysis are accurate: TIDI+ demonstrates a high level of sensitivity (96.8%) and specificity (72.3%), as well as positive (82.4%) and negative (94.4%) predictive value for the detection of SDB. Moreover, detection of SDB periods permits us to observe a possible respiratory association of several nocturnal arrhythmias. The multi-modal Holter should be considered a valuable evaluation tool for SDB screening and a case selection technique for facilitating access to full polysomnography for severe cases. Moreover, it offers a unique opportunity to study arrhythmia consequences with both respiratory and hypoxia disturbances.

  17. FULLY CONVOLUTIONAL NETWORKS FOR MULTI-MODALITY ISOINTENSE INFANT BRAIN IMAGE SEGMENTATION.

    Science.gov (United States)

    Nie, Dong; Wang, Li; Gao, Yaozong; Shen, Dinggang

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development. In the isointense phase (approximately 6-8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, resulting in extremely low tissue contrast and thus making the tissue segmentation very challenging. The existing methods for tissue segmentation in this isointense phase usually employ patch-based sparse labeling on single T1, T2 or fractional anisotropy (FA) modality or their simply-stacked combinations without fully exploring the multi-modality information. To address the challenge, in this paper, we propose to use fully convolutional networks (FCNs) for the segmentation of isointense phase brain MR images. Instead of simply stacking the three modalities, we train one network for each modality image, and then fuse their high-layer features together for final segmentation. Specifically, we conduct a convolution-pooling stream for multimodality information from T1, T2, and FA images separately, and then combine them in high-layer for finally generating the segmentation maps as the outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense phase brain images. Results showed that our proposed model significantly outperformed previous methods in terms of accuracy. In addition, our results also indicated a better way of integrating multi-modality images, which leads to performance improvement.
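
    A minimal PyTorch sketch of the fusion idea described above (2D slices and tiny illustrative layer sizes; not the paper's actual architecture): one convolutional stream per modality, with the high-layer feature maps concatenated before the segmentation head.

      # Per-modality streams (T1, T2, FA) fused at the high-layer feature level.
      import torch
      import torch.nn as nn

      def stream():
          return nn.Sequential(
              nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
              nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
          )

      class FusionFCN(nn.Module):
          def __init__(self, n_classes=4):              # WM, GM, CSF, background
              super().__init__()
              self.t1, self.t2, self.fa = stream(), stream(), stream()
              self.head = nn.Conv2d(3 * 32, n_classes, 1)

          def forward(self, t1, t2, fa):
              fused = torch.cat([self.t1(t1), self.t2(t2), self.fa(fa)], dim=1)
              return self.head(fused)                   # per-pixel class scores

      net = FusionFCN()
      x = torch.randn(1, 1, 64, 64)
      print(net(x, x, x).shape)                         # torch.Size([1, 4, 64, 64])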

  18. Architecture of the Multi-Modal Organizational Research and Production Heterogeneous Network (MORPHnet)

    Energy Technology Data Exchange (ETDEWEB)

    Aiken, R.J.; Carlson, R.A.; Foster, I.T. [and others]

    1997-01-01

    The research and education (R&E) community requires persistent and scalable network infrastructure to concurrently support production and research applications as well as network research. In the past, the R&E community has relied on supporting parallel network and end-node infrastructures, which can be very expensive and inefficient for network service managers and application programmers. The grand challenge in networking is to provide support for multiple, concurrent, multi-layer views of the network for the applications and the network researchers, and to satisfy the sometimes conflicting requirements of both while ensuring one type of traffic does not adversely affect the other. Internet and telecommunications service providers will also benefit from a multi-modal infrastructure, which can provide smoother transitions to new technologies and allow for testing of these technologies with real user traffic while they are still in the pre-production mode. The authors' proposed approach requires the use of as much of the same network and end system infrastructure as possible to reduce the costs needed to support both classes of activities (i.e., production and research). Breaking the infrastructure into segments and objects (e.g., routers, switches, multiplexors, circuits, paths, etc.) gives the capability to dynamically construct and configure the virtual active networks to address these requirements. These capabilities must be supported at the campus, regional, and wide-area network levels to allow for collaboration by geographically dispersed groups. The Multi-Modal Organizational Research and Production Heterogeneous Network (MORPHnet) described in this report is an initial architecture and framework designed to identify and support the capabilities needed for the proposed combined infrastructure and to address related research issues.

  19. Coalescent: an open-source and scalable framework for exact calculations in coalescent theory

    Directory of Open Access Journals (Sweden)

    Tewari Susanta

    2012-10-01

    Full Text Available Abstract Background Currently, there is no open-source, cross-platform and scalable framework for coalescent analysis in population genetics. There is no scalable GUI-based user application either. Such a framework and application would not only drive the creation of more complex and realistic models but also make them truly accessible. Results As a first attempt, we built a framework and user application for the domain of exact calculations in coalescent analysis. The framework provides an API with the concepts of model, data, statistic, phylogeny, gene tree and recursion. Infinite-alleles and infinite-sites models are considered. It defines pluggable computations such as counting and listing all the ancestral configurations and genealogies and computing the exact probability of data. It can visualize a gene tree, trace and visualize the internals of the recursion algorithm for further improvement and attach dynamically a number of output processors. The user application defines jobs in a plug-in-like manner so that they can be activated, deactivated, installed or uninstalled on demand. Multiple jobs can be run and their inputs edited. Job inputs are persisted across restarts and running jobs can be cancelled where applicable. Conclusions Coalescent theory plays an increasingly important role in analysing molecular population genetic data. Models involved are mathematically difficult and computationally challenging. An open-source, scalable framework that lets users immediately take advantage of the progress made by others will enable exploration of yet more difficult and realistic models. As models become more complex and mathematically less tractable, the need for an integrated computational approach is obvious. Object-oriented designs, though they have upfront costs, are practical now and can provide such an integrated approach.
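
    This is not the framework's API, but a small example of the kind of exact infinite-alleles calculation it targets: the Ewens sampling formula gives the probability of an allelic configuration, where a[j-1] denotes the number of allele types carried by exactly j genes in the sample.

      # Ewens sampling formula: exact probability of an allele configuration
      # under the infinite-alleles model with scaled mutation rate theta.
      from math import factorial

      def ewens_probability(a, theta):
          n = sum(j * a_j for j, a_j in enumerate(a, start=1))   # sample size
          rising = 1.0
          for k in range(n):                            # theta^(n) = theta (theta+1) ... (theta+n-1)
              rising *= theta + k
          p = factorial(n) / rising
          for j, a_j in enumerate(a, start=1):
              p *= theta**a_j / (j**a_j * factorial(a_j))
          return p

      # n = 5 genes: three alleles seen once and one allele seen twice
      print(ewens_probability([3, 1], theta=1.0))       # 0.0833... (= 1/12)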

  20. NeuroVR: an open source virtual reality platform for clinical psychology and behavioral neurosciences.

    Science.gov (United States)

    Riva, Giuseppe; Gaggioli, Andrea; Villani, Daniela; Preziosa, Alessandra; Morganti, Francesca; Corsi, Riccardo; Faletti, Gianluca; Vezzadini, Luca

    2007-01-01

    In the past decade, the use of virtual reality for clinical and research applications has become more widespread. However, the diffusion of this approach is still limited by three main issues: poor usability, lack of technical expertise among clinical professionals, and high costs. To address these challenges, we introduce NeuroVR (http://www.neurovr.org--http://www.neurotiv.org), a cost-free virtual reality platform based on open-source software, that allows non-expert users to adapt the content of a pre-designed virtual environment to meet the specific needs of the clinical or experimental setting. Using the NeuroVR Editor, the user can choose the appropriate psychological stimuli/stressors from a database of objects (both 2D and 3D) and videos, and easily place them into the virtual environment. The edited scene can then be visualized in the NeuroVR Player using either immersive or non-immersive displays. Currently, the NeuroVR library includes different virtual scenes (apartment, office, square, supermarket, park, classroom, etc.), covering two of the most studied clinical applications of VR: specific phobias and eating disorders. The NeuroVR Editor is based on Blender (http://www.blender.org), the open source, cross-platform suite of tools for 3D creation, and is available as a completely free resource. An interesting feature of the NeuroVR Editor is the possibility to add new objects to the database. This feature allows the therapist to enhance the patient's feeling of familiarity and intimacy with the virtual scene, i.e., by using photos or movies of objects/people that are part of the patient's daily life, thereby improving the efficacy of the exposure. The NeuroVR platform runs on standard personal computers with Microsoft Windows; the only requirement for the hardware is related to the graphics card, which must support OpenGL.

  1. Code Forking, Governance, and Sustainability in Open Source Software

    Directory of Open Access Journals (Sweden)

    Juho Lindman

    2013-01-01

    Full Text Available The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is, to start a new development effort using existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility of forking code, affects the governance and sustainability of open source initiatives on three distinct levels: software, community, and ecosystem. On the software level, the right to fork makes planned obsolescence, versioning, vendor lock-in, end-of-support issues, and similar initiatives all but impossible to implement. On the community level, forking impacts both sustainability and governance through the power it grants the community to safeguard against unfavourable actions by corporations or project leaders. On the business-ecosystem level, forking can serve as a catalyst for innovation while simultaneously promoting better quality software through natural selection. Thus, forking helps keep open source initiatives relevant and presents opportunities for the development and commercialization of current and abandoned programs.

  2. Open Genetic Code: on open source in the life sciences.

    Science.gov (United States)

    Deibel, Eric

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach to genetic engineering. The first section discusses the greater flexibility with regard to patenting and the relationship to the introduction of open source in the life sciences. The main argument is that the ownership of knowledge in the life sciences should be reconsidered in the context of the centrality of DNA in informatic formats. This is illustrated by discussing a range of examples of open source models. The second part focuses on open source in synthetic biology as exemplary for the re-materialization of information into food, energy, medicine and so forth. The paper ends by raising the question of whether another kind of alternative might be possible: one that looks at open source as a model for an alternative to the commodification of life that is understood as an attempt to comprehensively remove the restrictions from the usage of DNA in any of its formats.

  3. Open source project to aid ionosphere physics research

    Science.gov (United States)

    Huba, J. D.; Joyce, G.

    In the past decade, the Open Source Model for software development has gained popularity and has had many major achievements: emacs, Linux, the Gimp, and Python, to name a few. The basic idea is to provide the source code of the model or application, a tutorial on its use, and a feedback mechanism with the community so that the model can be tested, improved, and archived. Given the success of the Open Source Model, it may prove valuable in the development of scientific research codes. With this in mind, we are "open sourcing" the SAMI2 low- to mid-latitude ionospheric model that was recently developed at the Naval Research Laboratory.

  4. A Survey of Open Source Tools for Business Intelligence

    DEFF Research Database (Denmark)

    Thomsen, Christian; Pedersen, Torben Bach

    2009-01-01

    The industrial use of open source Business Intelligence (BI) tools is becoming more common, but is still not as widespread as for other types of software. It is therefore of interest to explore which possibilities are available for open source BI and compare the tools. In this survey paper, we...... consider the capabilities of a number of open source tools for BI. In the paper, we consider a number of Extract-Transform-Load (ETL) tools, database management systems (DBMSs), On-Line Analytical Processing (OLAP) servers, and OLAP clients. We find that, unlike the situation a few years ago, there now...

  5. Open source R&D - an anomaly in innovation management?

    DEFF Research Database (Denmark)

    Ulhøi, John Parm

    2004-01-01

    This paper addresses innovations based on the principle of open source or non-proprietary knowledge. Viewed through the lens of private property theory, such agency appears to be a true anomaly. However, by a further turn of the theoretical kaleidoscope, we will show that there may be perfectly...... justifiable reasons for not regarding open source innovations as anomalies. The paper has identified three generic cases of open source innovation, which is an offspring of contemporary theory made possible by combining elements of the model of private agency with those of the model of collective agency...

  6. Epistemic Communities, Situated Learning and Open Source Software Development

    DEFF Research Database (Denmark)

    Edwards, Kasper

    2001-01-01

    This paper analyses open source software (OSS) development as an epistemic community where each individual project is perceived as a single epistemic community. OSS development is a learning process where the involved parties contribute to, and learn from the community. It is discovered that theory...... of epistemic communities does indeed contribute to the understanding of open source software development. But, the important learning process of open source software development is not readily explained. The paper then introduces situated learning and legitimate peripheral participation as theoretical...

  8. Learning by Doing: How to Develop a Cross-Platform Web App

    Science.gov (United States)

    Huynh, Minh; Ghimire, Prashant

    2015-01-01

    As mobile devices become prevalent, there is always a need for apps. How hard is it to develop an app, especially a cross-platform app? The paper shares an experience in a project that involved the development of a student services web app that can be run on cross-platform mobile devices. The paper first describes the background of the project,…

  10. A CMake-Based Cross Platform Build System for Tcl/Tk

    Science.gov (United States)

    2011-11-01

    A CMake-Based Cross Platform Build System for Tcl/Tk, by Clifford Yapp. A reprint from the Proceedings of the 18th Annual Tcl/Tk Conference, Manassas, VA, 26 October 2011 (ARL-RP-347, Army Research Laboratory, Aberdeen Proving Ground, MD 21005-5068, November 2011). Approved for public release.

  11. Making Dynamic Digital Maps Cross-Platform and WWW Capable

    Science.gov (United States)

    Condit, C. D.

    2001-05-01

    High-quality color geologic maps are an invaluable information resource for educators, students and researchers. However, maps with large datasets that include images, or various types of movies, in addition to site locations where analytical data has been collected, are difficult to publish in a format that facilitates their easy access, distribution and use. The development of capable desktop computers and object-oriented graphical programming environments has facilitated publication of such data sets in an encapsulated form. The original Dynamic Digital Map (DDM) programs, developed using the Macintosh-based SuperCard programming environment, exemplified this approach, in which all data are included in a single package designed so that display and access to the data did not depend on proprietary programs. These DDMs were designed for ease of use, and allowed data to be displayed by several methods, including point-and-click at icons pin-pointing sample (or image) locations on maps, and from clicklists of sample or site numbers. Each of these DDMs included an overview and automated tour explaining the content organization and program use. This SuperCard development culminated in a "DDM Template", which is a SuperCard shell into which SuperCard users could insert their own content and thus create their own DDMs, following instructions in an accompanying "DDM Cookbook" (URL http://www.geo.umass.edu/faculty/condit/condit2.html). These original SuperCard-based DDMs suffered from two critical limitations: a single user platform (Macintosh) and, although they could be downloaded from the web, a lack of integration with the WWW. Over the last eight months I have been porting the DDM technology to MetaCard, which is aggressively cross-platform (11 UNIX dialects, WIN32 and Macintosh). The new MetaCard DDM is redesigned to make the maps and images accessible either from CD or the web, using the "LoadNGo" concept. LoadNGo allows the user to download the stand-alone DDM

  12. Real Space Multigrid (RMG) Open Source Software Suite for Multi-Petaflops Electronic Structure Calculations

    Science.gov (United States)

    Briggs, Emil; Hodak, Miroslav; Lu, Wenchang; Bernholc, Jerry; Li, Yan

    RMG is a cross-platform open source package for ab initio electronic structure calculations that uses real-space grids, multigrid pre-conditioning, and subspace diagonalization to solve the Kohn-Sham equations. The code has been successfully used for a wide range of problems ranging from complex bulk materials to multifunctional electronic devices and biological systems. RMG makes efficient use of GPU accelerators, if present, but does not require them. Recent work has extended GPU support to systems with multiple GPUs per computational node, as well as optimized both CPU and GPU memory usage to enable large problem sizes, which are no longer limited by the memory of the GPU board. Additional enhancements include increased portability, scalability and performance. New versions of the code are regularly released at sourceforge.net/projects/rmgdft/. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms.

  13. An open source hydroeconomic model for California's water supply system: PyVIN

    Science.gov (United States)

    Dogan, M. S.; White, E.; Herman, J. D.; Hart, Q.; Merz, J.; Medellin-Azuara, J.; Lund, J. R.

    2016-12-01

    Models help operators and decision makers explore and compare different management and policy alternatives, better allocate scarce resources, and predict the future behavior of existing or proposed water systems. Hydroeconomic models are useful tools to increase benefits or decrease costs of managing water. Bringing hydrology and economics together, these models provide a framework for different disciplines that share similar objectives. This work proposes a new model to evaluate operation and adaptation strategies under existing and future hydrologic conditions for California's interconnected water system. This model combines the network structure of CALVIN, a statewide optimization model for California's water infrastructure, with an open source solver written in the Python programming language. With the flexibility of the model, reservoir operations (including water supply and hydropower), groundwater pumping, and Delta water operations and requirements can now be better represented. Given time series of hydrologic inputs to the model, typical outputs include urban, agricultural and wildlife refuge water deliveries and shortage costs, conjunctive use of surface and groundwater systems, and insights into policy and management decisions, such as capacity expansion and groundwater management policies. Water market operations are also represented in the model, reallocating water from lower-valued to higher-valued users. PyVIN serves as a cross-platform, extensible model to evaluate systemwide water operations. PyVIN separates data from the model structure, enabling the model to be easily applied to other parts of the world where water is a scarce resource.
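
    As a rough illustration of the kind of allocation such a hydroeconomic network model solves, the sketch below allocates a scarce supply between two users by minimizing shortage costs with a generic linear-programming solver; the two-user network and all numbers are illustrative assumptions, not PyVIN's data or API.

```python
# Toy hydroeconomic allocation: deliver scarce water to the users whose
# shortages are most costly. Illustrative only; not PyVIN itself.
import numpy as np
from scipy.optimize import linprog

supply = 100.0                            # available water (arbitrary units)
demand = np.array([80.0, 60.0])           # urban, agricultural demands
shortage_cost = np.array([500.0, 100.0])  # cost per unit of unmet demand

# Minimizing shortage cost is equivalent to maximizing the value of deliveries,
# so minimize -shortage_cost . deliveries.
c = -shortage_cost
A_ub = np.ones((1, 2))                    # total deliveries cannot exceed supply
b_ub = np.array([supply])
bounds = [(0.0, d) for d in demand]       # deliveries cannot exceed demand

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
urban, ag = res.x
print(f"urban delivery: {urban:.1f}, agricultural delivery: {ag:.1f}")
# With 100 units available, the cheaper-to-short agricultural user absorbs the
# shortage: urban receives 80, agriculture receives 20.
```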

  14. Interactive multicentre teleconferences using open source software in a team of thoracic surgeons.

    Science.gov (United States)

    Ito, Kazuhiro; Shimada, Junichi; Katoh, Daishiro; Nishimura, Motohiro; Yanada, Masashi; Okada, Satoru; Ishihara, Shunta; Ichise, Kaori

    2012-12-01

    Real-time consultation between a team of thoracic surgeons is important for the management of difficult cases. We established a system for interactive teleconsultation between multiple sites, based on open-source software. The graphical desktop-sharing system VNC (virtual network computing) was used for remotely controlling another computer. An image-processing package (OsiriX) was installed on the server to share the medical images. We set up a voice communication system using Voice Chatter, a free, cross-platform voice communication application. Four hospitals participated in the trials. One was connected by gigabit ethernet, one by WiMAX and one by ADSL. Surgeons at three of the sites found that it was comfortable to view images and consult with each other using the teleconferencing system. However, it was not comfortable using the client that connected via WiMAX, because of dropped frames. Apart from the WiMAX connection, the VNC-based screen-sharing system transferred the clinical images efficiently and in real time. We found the screen-sharing software VNC to be a good application for medical image interpretation, especially for a team of thoracic surgeons using multislice CT scans.

  15. Providing University Education in Physical Geography across the South Pacific Islands: Multi-Modal Course Delivery and Student Grade Performance

    Science.gov (United States)

    Terry, James P.; Poole, Brian

    2012-01-01

    Enormous distances across the vast South Pacific hinder student access to the main Fiji campus of the regional tertiary education provider, the University of the South Pacific (USP). Fortunately, USP has been a pioneer in distance education (DE) and promotes multi-modal delivery of programmes. Geography has embraced DE, but doubts remain about…

  16. Multi-modal assessment of neurovascular coupling during cerebral ischaemia and reperfusion using remote middle cerebral artery occlusion

    DEFF Research Database (Denmark)

    Sutherland, Brad A; Fordsmann, Jonas C; Martin, Chris

    2017-01-01

    how neurovascular coupling is affected hyperacutely during cerebral ischaemia and reperfusion. We have developed a remote middle cerebral artery occlusion model in the rat, which enables multi-modal assessment of neurovascular coupling immediately prior to, during and immediately following reperfusion...

  17. Effective Beginning Handwriting Instruction: Multi-Modal, Consistent Format for 2 Years, and Linked to Spelling and Composing

    Science.gov (United States)

    Wolf, Beverly; Abbott, Robert D.; Berninger, Virginia W.

    2017-01-01

    In Study 1, the treatment group (N = 33 first graders, M = 6 years 10 months, 16 girls) received Slingerland multi-modal (auditory, visual, tactile, motor through hand, and motor through mouth) manuscript (unjoined) handwriting instruction embedded in systematic spelling, reading, and composing lessons; and the control group (N = 16 first graders,…

  18. Sex in the Curriculum: The Effect of a Multi-Modal Sexual History-Taking Module on Medical Student Skills

    Science.gov (United States)

    Lindau, Stacy Tessler; Goodrich, Katie G.; Leitsch, Sara A.; Cook, Sandy

    2008-01-01

    Purpose: The objective of this study was to determine the effect of a multi-modal curricular intervention designed to teach sexual history-taking skills to medical students. The Association of Professors of Gynecology and Obstetrics, the National Board of Medical Examiners, and others, have identified sexual history-taking as a learning objective…

  19. Observing Coaching and Reflecting: A Multi-modal Natural Language-based Dialogue System in a Learning Context

    NARCIS (Netherlands)

    Van Helvert, Joy; Van Rosmalen, Peter; Börner, Dirk; Petukhova, Volha; Alexandersson, Jan

    2016-01-01

    The Metalogue project aims to develop a multi-modal, multi-party dialogue system with metacognitive abilities that will advance our understanding of natural conversational human-machine interaction and dialogue interfaces. This paper introduces the vision for the system and discusses its application

  20. Hopc: a Novel Similarity Metric Based on Geometric Structural Properties for Multi-Modal Remote Sensing Image Matching

    Science.gov (United States)

    Ye, Yuanxin; Shen, Li

    2016-06-01

    Automatic matching of multi-modal remote sensing images (e.g., optical, LiDAR, SAR and maps) remains a challenging task in remote sensing image analysis due to significant non-linear radiometric differences between these images. This paper addresses this problem and proposes a novel similarity metric for multi-modal matching using geometric structural properties of images. We first extend the phase congruency model with illumination and contrast invariance, and then use the extended model to build a dense descriptor called the Histogram of Orientated Phase Congruency (HOPC) that captures geometric structure or shape features of images. Finally, HOPC is integrated as the similarity metric to detect tie-points between images by designing a fast template matching scheme. This novel metric aims to represent geometric structural similarities between multi-modal remote sensing datasets and is robust against significant non-linear radiometric changes. HOPC has been evaluated with a variety of multi-modal images including optical, LiDAR, SAR and map data. Experimental results show its superiority to the recent state-of-the-art similarity metrics (e.g., NCC, MI, etc.), and demonstrate its improved matching performance.

  2. Risk factors for insufficient perioperative oral nutrition after hip fracture surgery within a multi-modal rehabilitation programme

    DEFF Research Database (Denmark)

    Foss, Nicolai B; Jensen, Pia S; Kehlet, Henrik

    2007-01-01

    To examine oral nutritional intake in the perioperative phase in elderly hip fracture patients treated according to a well-defined multi-modal rehabilitation program, including unselected oral nutritional supplementation, and to identify independent risk factors for insufficient nutritional intake....

  3. DIAGNOSIS-GUIDED METHOD FOR IDENTIFYING MULTI-MODALITY NEUROIMAGING BIOMARKERS ASSOCIATED WITH GENETIC RISK FACTORS IN ALZHEIMER'S DISEASE.

    Science.gov (United States)

    Hao, Xiaoke; Yan, Jingwen; Yao, Xiaohui; Risacher, Shannon L; Saykin, Andrew J; Zhang, Daoqiang; Shen, Li

    2016-01-01

    Many recent imaging genetic studies focus on detecting the associations between genetic markers such as single nucleotide polymorphisms (SNPs) and quantitative traits (QTs). Although there exist a large number of generalized multivariate regression analysis methods, few of them have used diagnosis information in subjects to enhance the analysis performance. In addition, few models have investigated the identification of multi-modality phenotypic patterns associated with genotype groups of interest. To reveal disease-relevant imaging genetic associations, we propose a novel diagnosis-guided multi-modality (DGMM) framework to discover multi-modality imaging QTs that are associated with both Alzheimer's disease (AD) and its top genetic risk factor (i.e., APOE SNP rs429358). The strength of our proposed method is that it explicitly models the prior diagnosis information among subjects in the objective function for selecting the disease-relevant and robust multi-modality QTs associated with the SNP. We evaluate our method on two modalities of imaging phenotypes, i.e., those extracted from structural magnetic resonance imaging (MRI) data and fluorodeoxyglucose positron emission tomography (FDG-PET) data in the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The experimental results demonstrate that our proposed method not only achieves better performance under the metrics of root mean squared error and correlation coefficient but also can identify common informative regions of interest (ROIs) across multiple modalities to guide the disease-induced biological interpretation, compared with other reference methods.

  4. Multi-atlas segmentation with joint label fusion and corrective learning-an open source implementation.

    Science.gov (United States)

    Wang, Hongzhi; Yushkevich, Paul A

    2013-01-01

    Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion that combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won the first place of the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight-Toolkit based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools through applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far.
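
    For orientation, the sketch below shows a plain intensity-weighted voting fusion, the baseline that joint label fusion improves on by additionally modeling correlated atlas errors; the function, array shapes and sigma are illustrative assumptions, not the Insight-Toolkit implementation described above.

```python
# Simplified locally-weighted voting label fusion (not joint label fusion).
import numpy as np


def weighted_voting_fusion(target, atlas_images, atlas_labels, sigma=10.0):
    """Fuse warped atlas segmentations into a consensus label map.

    target:        (X, Y, Z) target intensity image
    atlas_images:  (N, X, Y, Z) atlas intensities registered to the target
    atlas_labels:  (N, X, Y, Z) integer label maps in target space
    """
    n_labels = int(atlas_labels.max()) + 1
    # Intensity-similarity weights, computed independently per atlas and voxel.
    weights = np.exp(-((atlas_images - target[None]) ** 2) / (2.0 * sigma**2))

    votes = np.zeros((n_labels,) + target.shape)
    for lbl in range(n_labels):
        votes[lbl] = np.sum(weights * (atlas_labels == lbl), axis=0)
    return np.argmax(votes, axis=0)


# Toy usage: three 8x8x8 atlases segmenting a noisy cube.
rng = np.random.default_rng(0)
target = rng.normal(100, 5, (8, 8, 8))
atlases = target[None] + rng.normal(0, 5, (3, 8, 8, 8))
labels = np.zeros((3, 8, 8, 8), dtype=int)
labels[:, 2:6, 2:6, 2:6] = 1
fused = weighted_voting_fusion(target, atlases, labels)
print(fused.shape, np.unique(fused))
```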

  5. Multi-Atlas Segmentation with Joint Label Fusion and Corrective Learning - An Open Source Implementation

    Directory of Open Access Journals (Sweden)

    Hongzhi eWang

    2013-11-01

    Full Text Available Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion that combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won the first place of the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight-Toolkit based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools through applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far.

  6. Guidelines for the implementation of an open source information system

    Energy Technology Data Exchange (ETDEWEB)

    Doak, J.; Howell, J.A.

    1995-08-01

    This work was initially performed for the International Atomic Energy Agency (IAEA) to help with the Open Source Task of the 93 + 2 Initiative; however, the information should be of interest to anyone working with open sources. The authors cover all aspects of an open source information system (OSIS) including, for example, identifying relevant sources, understanding copyright issues, and making information available to analysts. They foresee this document as a reference point that implementors of a system could augment for their particular needs. The primary organization of this document focuses on specific aspects, or components, of an OSIS; they describe each component and often make specific recommendations for its implementation. This document also contains a section discussing the process of collecting open source data and a section containing miscellaneous information. The appendix contains a listing of various providers, producers, and databases that the authors have come across in their research.

  7. Open-source software gets boost at UN

    CERN Multimedia

    Schenker, J L

    2003-01-01

    "A months-long backroom battle led by Brazil, with support from India, South Africa and China, against the United States over open-source software took center stage Wednesday at the UN information summit meeting" (1 page)

  8. Setting up of an Open Source based Private Cloud

    Directory of Open Access Journals (Sweden)

    G R Karpagam

    2011-05-01

    Full Text Available Cloud computing is an attractive concept in the IT field, since it allows resources to be provisioned according to user needs [11]. It provides services on virtual machines whereby the user can share resources, software and other devices on demand. Cloud services are supported by both proprietary and open source systems. Proprietary products are very expensive, customers are not allowed to experiment on them, and security is a major issue; open source systems help to solve these problems. Cloud computing has motivated many academic and non-academic groups to develop open source cloud setups, where users are allowed to study the source code and experiment with it. This paper describes the configuration of a private cloud using Eucalyptus. Eucalyptus, an open source system, has been used to implement a private cloud using existing hardware and software without modification, and to provide various types of services to the cloud computing environment.

  9. Implementing a real estate management system using open source ...

    African Journals Online (AJOL)

    Implementing a real estate management system using open source GIS software. ... already gets most of the information without having to leave home or work. ... to provide the background on which the parcels of land (plots) are displayed.

  10. Open source Modeling and optimization tools for Planning

    Energy Technology Data Exchange (ETDEWEB)

    Peles, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]

    2017-02-10

    The existing tools and software used for planning and analysis in California are either expensive, difficult to use, or not generally accessible to a large number of participants. These limitations restrict the availability of participants for larger scale energy and grid studies in the state. The proposed initiative would build upon federal and state investments in open source software, and create and improve open source tools for use in the state planning and analysis activities. Computational analysis and simulation frameworks in development at national labs and universities can be brought forward to complement existing tools. An open source platform would provide a path for novel techniques and strategies to be brought into the larger community and reviewed by a broad set of stakeholders.

  11. Open Source Seismic Hazard Analysis Software Framework (OpenSHA)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — OpenSHA is an effort to develop object-oriented, web- & GUI-enabled, open-source, and freely available code for conducting Seismic Hazard Analyses (SHA). Our...

  12. A Survey Of Top 10 Open Source Learning Management Systems

    Directory of Open Access Journals (Sweden)

    Mohamed R. Elabnody

    2015-08-01

    Full Text Available Open Source LMSs are fully flexible and customizable, so they can be designed in line with your school or organization's brand image. Open Source LMSs can also be converted to social learning platforms: you can create an online community through your LMS. This paper describes the most important features in learning management systems (LMS) that are critical to compare and contrast depending on your system requirements. It also presents multiple LMS providers that are designed for use in a university environment.

  13. Open-source 3D-printable optics equipment.

    Science.gov (United States)

    Zhang, Chenlong; Anzalone, Nicholas C; Faria, Rodrigo P; Pearce, Joshua M

    2013-01-01

    Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer-aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of the open-source electronics prototyping platform is illustrated as a controller for optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling relatively easily adapted customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation both as research and teaching platforms than previous proprietary methods.

  14. Open-source 3D-printable optics equipment.

    Directory of Open Access Journals (Sweden)

    Chenlong Zhang

    Full Text Available Just as the power of the open-source design paradigm has driven down the cost of software to the point that it is accessible to most people, the rise of open-source hardware is poised to drive down the cost of doing experimental science to expand access to everyone. To assist in this aim, this paper introduces a library of open-source 3-D-printable optics components. This library operates as a flexible, low-cost public-domain tool set for developing both research and teaching optics hardware. First, the use of parametric open-source designs using an open-source computer-aided design package is described to customize the optics hardware for any application. Second, details are provided on the use of open-source 3-D printers (additive layer manufacturing) to fabricate the primary mechanical components, which are then combined to construct complex optics-related devices. Third, the use of the open-source electronics prototyping platform is illustrated as a controller for optical experimental apparatuses. This study demonstrates an open-source optical library, which significantly reduces the costs associated with much optical equipment, while also enabling relatively easily adapted customizable designs. The cost reductions in general are over 97%, with some components representing only 1% of the current commercial investment for optical products of similar function. The results of this study make it clear that this method of scientific hardware development enables a much broader audience to participate in optical experimentation both as research and teaching platforms than previous proprietary methods.

  15. A novel automated method for doing registration and 3D reconstruction from multi-modal RGB/IR image sequences

    Science.gov (United States)

    Kirby, Richard; Whitaker, Ross

    2016-09-01

    In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor have become increasingly popular for use in surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational methods optimization algorithm to map the optical flow fields computed from different wavelength images. This results in the alignment of the flow fields, which in turn produce correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variations, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against a ground truth.
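
    The sketch below conveys only the starting point of the idea: dense optical flow is computed separately in the RGB and IR streams, giving two flow fields that are largely appearance-independent and can therefore be compared or aligned across modalities. It uses off-the-shelf Farneback flow rather than the authors' variational formulation, and the file names are placeholders.

```python
# Compute per-modality dense optical flow as a common signal for RGB/IR
# alignment (simplified; not the authors' variational method).
import cv2
import numpy as np


def dense_flow(prev_gray, next_gray):
    """Dense Farneback optical flow between two consecutive grayscale frames."""
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)


# Hypothetical consecutive frames from each camera (already undistorted).
rgb0 = cv2.cvtColor(cv2.imread("rgb_000.png"), cv2.COLOR_BGR2GRAY)
rgb1 = cv2.cvtColor(cv2.imread("rgb_001.png"), cv2.COLOR_BGR2GRAY)
ir0 = cv2.imread("ir_000.png", cv2.IMREAD_GRAYSCALE)
ir1 = cv2.imread("ir_001.png", cv2.IMREAD_GRAYSCALE)

flow_rgb = dense_flow(rgb0, rgb1)   # (H, W, 2) displacement field
flow_ir = dense_flow(ir0, ir1)

# Flow magnitude depends on scene motion rather than appearance, so it is a
# candidate common signal for cross-modal registration (only inspected here).
mag_rgb = np.linalg.norm(flow_rgb, axis=2)
mag_ir = np.linalg.norm(flow_ir, axis=2)
print(mag_rgb.shape, mag_ir.shape)
```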

  16. Open source electronic health records and chronic disease management.

    Science.gov (United States)

    Goldwater, Jason C; Kwon, Nancy J; Nathanson, Ashley; Muckle, Alison E; Brown, Alexa; Cornejo, Kerri

    2014-02-01

    To study and report on the use of open source electronic health records (EHR) to assist with chronic care management within safety net medical settings, such as community health centers (CHC). The study was conducted by NORC at the University of Chicago from April to September 2010. The NORC team undertook a comprehensive environmental scan, including a literature review, a dozen key informant interviews using a semistructured protocol, and a series of site visits to CHC that currently use an open source EHR. Two of the sites chosen by NORC were actively using an open source EHR to assist in the redesign of their care delivery system to support more effective chronic disease management. This included incorporating the chronic care model into a CHC and using the EHR to help facilitate its elements, such as care teams for patients, in addition to maintaining health records on indigent populations, such as tuberculosis status on homeless patients. The ability to modify the open-source EHR to adapt to the CHC environment and leverage the ecosystem of providers and users to assist in this process provided significant advantages in chronic care management. Improvements in diabetes management and hypertension control, and increases in tuberculosis vaccinations, were assisted through the use of these open source systems. The flexibility and adaptability of open source EHR demonstrated its utility and viability in the provision of necessary and needed chronic disease care among populations served by CHC.

  17. The 2015 Bioinformatics Open Source Conference (BOSC 2015).

    Directory of Open Access Journals (Sweden)

    Nomi L Harris

    2016-02-01

    Full Text Available The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.

  18. Multi-modal sensor based weight drop spinal cord impact system for large animals.

    Science.gov (United States)

    Kim, Hyeongbeom; Kim, Jong-Wan; Hyun, Jung-Keun; Park, Ilyong

    2017-08-23

    A conventional weight drop spinal cord (SC) impact system for large animals is composed of a high-speed video camera, a vision system, and other components. However, a high-speed camera capable of over 5,000 frames per second (FPS) is very expensive. In addition, the utilization of the vision system involves complex pattern recognition algorithms and accurate arrangement of the camera and the target. The purpose of this study was to develop a large animal spinal cord injury modeling system using a multi-modal sensor instead of a high-speed video camera and vision system. Another objective of this study was to demonstrate the ability of the developed system to measure impact parameters in experiments using different stiffness materials and an in-vivo porcine SC. A multi-modal sensor-based spinal cord injury impact system was developed for large animals. The experiments to measure SC impact parameters were then performed using three different stiffness materials and a Yucatan miniature pig to verify the performance of the developed system. A comparative experiment was performed using three materials of different stiffness (high-density (HD) sponge, rubber, and clay) to demonstrate the system and to measure impact parameters such as impact velocity, impulsive force, and maximally compressed displacement, which reflect the physical properties of the materials. In the animal experiment, a female Yucatan miniature pig weighing 60 kg was used. Impact conditions for all experiments were fixed at a free-falling object mass of 50 g and a drop height of 20 cm. In the impact test, measured impact velocities were almost the same for the three different stiffness materials at 1.84 ± 0.0153 m/s. Impulsive forces for the three materials of rubber, HD sponge, and clay were 50.88 N, 32.35 N, and 6.68 N, respectively. Maximally compressed displacements for rubber, HD sponge, and clay were 1.93 mm, 3.35 mm, and 15.01 mm, respectively. In the pig experiment, impact velocity, impulsive
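
    As a small worked check of the numbers above, the ideal free-fall velocity for a 20 cm drop is sqrt(2gh) ≈ 1.98 m/s, close to the measured 1.84 m/s (the difference being consistent with losses in the drop guide). The sketch below also shows how an impulse could be recovered from a sampled force trace; the synthetic half-sine trace is an illustrative assumption, not data from the described system.

```python
# Worked impact-parameter calculations (synthetic force trace, illustrative).
import numpy as np

g = 9.81        # m/s^2
height = 0.20   # m

v_ideal = np.sqrt(2 * g * height)
print(f"ideal free-fall impact velocity: {v_ideal:.2f} m/s")  # ~1.98 m/s

# Impulse from a force-time trace sampled by the sensor (synthetic half-sine,
# peak ~50 N over a 2 ms contact, roughly like the rubber case).
t = np.linspace(0.0, 2e-3, 200)
force = 50.0 * np.sin(np.pi * t / t[-1])
impulse = float(np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(t)))  # trapezoid
print(f"impulse: {impulse * 1e3:.2f} mN*s, peak force: {force.max():.1f} N")
```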

  19. Anticipation by multi-modal association through an artificial mental imagery process

    Science.gov (United States)

    Gaona, Wilmer; Escobar, Esaú; Hermosillo, Jorge; Lara, Bruno

    2015-01-01

    Mental imagery has become a central issue in research laboratories seeking to emulate basic cognitive abilities in artificial agents. In this work, we propose a computational model to produce an anticipatory behaviour by means of a multi-modal off-line Hebbian association. Unlike the current state of the art, we propose to apply Hebbian learning during an internal sensorimotor simulation, emulating a process of mental imagery. We associate visual and tactile stimuli re-enacted by a long-term predictive simulation chain motivated by covert actions. As a result, we obtain a neural network which provides a robot with a mechanism to produce a visually conditioned obstacle avoidance behaviour. We implemented our system on a physical Pioneer 3-DX robot and performed two experiments. In the first experiment we test our model on one individual navigating in two different mazes. In the second experiment we assess the robustness of the model by testing in a single environment five individuals trained under different conditions. We believe that our work offers an underpinning mechanism in cognitive robotics for the study of motor control strategies based on internal simulations. These strategies can be seen as analogous to the mental imagery process in humans, opening thus interesting pathways to the construction of upper-level grounded cognitive abilities.
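
    A minimal sketch of the underlying mechanism, off-line Hebbian association between re-enacted visual and tactile activity, is given below; the vector sizes, toy "contact" rule and learning rate are illustrative assumptions, not the paper's network.

```python
# Off-line Hebbian association of two modalities via an outer-product rule,
# then used to anticipate touch from vision alone (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
n_visual, n_tactile, eta = 16, 4, 0.1
W = np.zeros((n_tactile, n_visual))

# "Mental imagery" phase: replay simulated sensorimotor episodes and apply the
# Hebbian rule dW = eta * outer(tactile, visual).
for _ in range(200):
    visual = (rng.random(n_visual) < 0.3).astype(float)               # re-enacted vision
    tactile = np.array([float(visual[:4].sum() > 1), 0.0, 0.0, 0.0])  # toy contact
    W += eta * np.outer(tactile, visual)

# Anticipation: a purely visual stimulus now evokes a tactile expectation,
# which can trigger avoidance before any real contact occurs.
test_visual = (rng.random(n_visual) < 0.3).astype(float)
predicted_touch = W @ test_visual
print("anticipated tactile activation:", np.round(predicted_touch, 2))
```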

  20. Multi-modal target detection for autonomous wide area search and surveillance

    Science.gov (United States)

    Breckon, Toby P.; Gaszczak, Anna; Han, Jiwan; Eichner, Marcin L.; Barnes, Stuart E.

    2013-10-01

    Generalised wide area search and surveillance is a commonplace tasking for multi-sensor-equipped autonomous systems. Here we present a key supporting topic for this task: the automatic interpretation, fusion and reporting of detected targets from multi-modal sensor information received from multiple autonomous platforms deployed for wide-area environment search. We detail the realization of a real-time methodology for the automated detection of people and vehicles using combined visible-band (EO), thermal-band (IR) and radar sensing from a deployed network of multiple autonomous platforms (ground and aerial). This facilitates real-time target detection, reported with varying levels of confidence, using information from both multiple sensors and multiple sensor platforms to provide environment-wide situational awareness. A range of automatic classification approaches are proposed, driven by underlying machine learning techniques, that facilitate the automatic detection of either target type with cross-modal target confirmation. Extended results are presented that show both the detection of people and vehicles under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. Performance evaluation is presented at an episodic level with individual classifiers optimized for maximal detection of each object of interest (vehicle/person) over a given search path/pattern of the environment, across all sensors and modalities, rather than on a per sensor sample basis. Episodic target detection, evaluated over a number of wide-area environment search and reporting tasks, generally exceeds 90% for the targets considered here.
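
    A schematic sketch of decision-level fusion, where each modality's classifier reports a confidence that is combined into a single target report, is shown below; the weights, scores and threshold are illustrative assumptions, not the paper's trained classifiers.

```python
# Decision-level fusion of per-modality detection confidences (illustrative).
import numpy as np


def fuse_confidences(scores, weights):
    """Weighted average of per-modality detection confidences in [0, 1]."""
    scores, weights = np.asarray(scores, float), np.asarray(weights, float)
    return float(np.dot(scores, weights) / weights.sum())


# Candidate "person" detection seen by an EO camera, a thermal camera and a
# radar track; IR is weighted highest here (e.g., for a night-time search).
scores = [0.62, 0.91, 0.55]     # EO, IR, radar confidences
weights = [0.8, 1.2, 1.0]
confidence = fuse_confidences(scores, weights)
report = "person" if confidence > 0.7 else "unconfirmed"
print(f"fused confidence {confidence:.2f} -> {report}")
```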

  1. Nano-sensitizers for multi-modality optical diagnostic imaging and therapy of cancer

    Science.gov (United States)

    Olivo, Malini; Lucky, Sasidharan S.; Bhuvaneswari, Ramaswamy; Dendukuri, Nagamani

    2011-07-01

    We report novel bioconjugated nanosensitizers as optical and therapeutic probes for the detection, monitoring and treatment of cancer. These nanosensitisers, consisting of hypericin loaded bioconjugated gold nanoparticles, can act as tumor cell specific therapeutic photosensitizers for photodynamic therapy coupled with additional photothermal effects rendered by plasmonic heating effects of gold nanoparticles. In addition to the therapeutic effects, the nanosensitizer can be developed as optical probes for state-of-the-art multi-modality in-vivo optical imaging technology such as in-vivo 3D confocal fluorescence endomicroscopic imaging, optical coherence tomography (OCT) with improved optical contrast using nano-gold and Surface Enhanced Raman Scattering (SERS) based imaging and bio-sensing. These techniques can be used in tandem or independently as in-vivo optical biopsy techniques to specifically detect and monitor specific cancer cells in-vivo. Such novel nanosensitizer based optical biopsy imaging technique has the potential to provide an alternative to tissue biopsy and will enable clinicians to make real-time diagnosis, determine surgical margins during operative procedures and perform targeted treatment of cancers.

  2. Registration strategies for multi-modal whole-body MRI mosaicing.

    Science.gov (United States)

    Ceranka, Jakub; Polfliet, Mathias; Lecouvet, Frédéric; Michoux, Nicolas; de Mey, Johan; Vandemeulebroucke, Jef

    2017-06-21

    To test and compare different registration approaches for performing whole-body diffusion-weighted (wbDWI) image station mosaicing, and its alignment to the corresponding anatomical T1 whole-body image. Four different registration strategies aiming at mosaicing of diffusion-weighted image stations, and their alignment to the corresponding whole-body anatomical image, were proposed and evaluated. These included two-step approaches, where diffusion-weighted stations are first combined in a pairwise (Strategy 1) or groupwise (Strategy 2) manner and later non-rigidly aligned to the anatomical image; a direct pairwise mapping of DWI stations onto the anatomical image (Strategy 3); and simultaneous mosaicing of DWI and alignment to the anatomical image (Strategy 4). Additionally, different images driving the registration were investigated. Experiments were performed for 20 whole-body images of patients with bone metastases. Strategies 1 and 2 showed significant improvement in mosaicing accuracy with respect to the non-registered images (P multi-modal alignment. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
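
    For readers unfamiliar with the building blocks, the sketch below shows a generic mutual-information rigid alignment of a single DWI station to the anatomical T1 volume using SimpleITK. It only illustrates the kind of multi-modal registration step these strategies are built from; it is not the authors' pipeline, and the file names are placeholders.

```python
# Generic multi-modal (mutual information) rigid registration with SimpleITK.
import SimpleITK as sitk

fixed = sitk.ReadImage("t1_wholebody.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("dwi_station3_b0.nii.gz", sitk.sitkFloat32)

initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(initial, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(aligned, "dwi_station3_in_t1_space.nii.gz")
```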

  3. Multi-Modal Neuroimaging Feature Learning for Multi-Class Diagnosis of Alzheimer’s Disease

    Science.gov (United States)

    Liu, Siqi; Liu, Sidong; Cai, Weidong; Che, Hangyu; Pujol, Sonia; Kikinis, Ron; Feng, Dagan; Fulham, Michael J.

    2015-01-01

    The accurate diagnosis of Alzheimer's disease (AD) is essential for patient care and will be increasingly important as disease-modifying agents become available early in the course of the disease. Although studies have applied machine learning methods for the computer-aided diagnosis (CAD) of AD, a bottleneck in diagnostic performance was evident in previous methods, due to the lack of efficient strategies for representing neuroimaging biomarkers. In this study, we designed a novel diagnostic framework with deep learning architecture to aid the diagnosis of AD. This framework uses a zero-masking strategy for data fusion to extract complementary information from multiple data modalities. Compared to the previous state-of-the-art workflows, our method is capable of fusing multi-modal neuroimaging features in one setting and has the potential to require less labelled data. A performance gain was achieved in both binary classification and multi-class classification of AD. The advantages and limitations of the proposed framework are discussed. PMID:25423647
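
    Only the zero-masking augmentation at the heart of the fusion strategy is sketched below: modality features are concatenated and one modality is randomly zeroed so that a network trained to reconstruct the full vector must learn cross-modal structure. The feature sizes are illustrative assumptions, and the deep architecture itself is omitted.

```python
# Zero-masking augmentation for multi-modal feature fusion (illustrative).
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_mri, n_pet = 8, 10, 6

mri = rng.normal(size=(n_subjects, n_mri))   # e.g., regional MRI volumes
pet = rng.normal(size=(n_subjects, n_pet))   # e.g., regional FDG-PET values
fused = np.concatenate([mri, pet], axis=1)   # network input


def zero_mask(batch, n_mri, p=0.5, rng=rng):
    """Randomly zero out one modality per sample with probability p."""
    masked = batch.copy()
    for i in range(batch.shape[0]):
        if rng.random() < p:
            if rng.random() < 0.5:
                masked[i, :n_mri] = 0.0      # hide the MRI features
            else:
                masked[i, n_mri:] = 0.0      # hide the PET features
    return masked


noisy_input = zero_mask(fused, n_mri)
# An autoencoder would be trained to map noisy_input back to fused, forcing it
# to learn complementary structure shared across the modalities.
print(noisy_input.shape)
```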

  4. Interactive Feature Space Explorer© for multi-modal magnetic resonance imaging.

    Science.gov (United States)

    Özcan, Alpay; Türkbey, Barış; Choyke, Peter L; Akin, Oguz; Aras, Ömer; Mun, Seong K

    2015-07-01

    Wider information content of multi-modal biomedical imaging is advantageous for detection, diagnosis and prognosis of various pathologies. However, the necessity to evaluate a large number of images might hinder these advantages and reduce efficiency. Herein, a new computer-aided approach based on the utilization of feature space (FS) with reduced reliance on multiple image evaluations is proposed for research and routine clinical use. The method introduces the physician's experience into the discovery process of FS biomarkers for addressing biological complexity, e.g., disease heterogeneity. This, in turn, elucidates relevant biophysical information which would not be available when automated algorithms are utilized. Accordingly, the prototype platform was designed and built for interactively investigating the features and their corresponding anatomic loci in order to identify pathologic FS regions. While the platform might be potentially beneficial in decision support generally and specifically for evaluating outlier cases, it is also potentially suitable for accurate ground truth determination in FS for algorithm development. Initial assessments conducted on two different pathologies from two different institutions provided valuable biophysical perspective. Investigations of the prostate magnetic resonance imaging data resulted in locating a potential aggressiveness biomarker in prostate cancer. Preliminary findings on renal cell carcinoma imaging data demonstrated potential for characterization of disease subtypes in the FS. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Analysis of Predictive Values Based on Individual Risk Factors in Multi-Modality Trials

    Directory of Open Access Journals (Sweden)

    Katharina Lange

    2013-03-01

    Full Text Available The accuracy of diagnostic tests with binary end-points is most frequently measured by sensitivity and specificity. However, from the clinical perspective, the main purpose of a diagnostic agent is to assess the probability of a patient actually being diseased and hence predictive values are more suitable here. As predictive values depend on the pre-test probability of disease, we provide a method to take risk factors influencing the patient’s prior probability of disease into account, when calculating predictive values. Furthermore, approaches to assess confidence intervals and a methodology to compare predictive values by statistical tests are presented. Hereby the methods can be used to analyze predictive values of factorial diagnostic trials, such as multi-modality, multi-reader-trials. We further performed a simulation study assessing length and coverage probability for different types of confidence intervals, and we present the R-Package facROC that can be used to analyze predictive values in factorial diagnostic trials in particular. The methods are applied to a study evaluating CT-angiography as a noninvasive alternative to coronary angiography for diagnosing coronary artery disease. Hereby the patients’ symptoms are considered as risk factors influencing the respective predictive values.
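
    A small worked example of the dependence on pre-test probability is given below: with fixed sensitivity and specificity, Bayes' theorem gives very different predictive values for low-risk and high-risk patients. The numbers are illustrative assumptions, not values from the CT-angiography study.

```python
# Predictive values from sensitivity, specificity and risk-adjusted prevalence.
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' theorem."""
    tp = sensitivity * prevalence
    fp = (1.0 - specificity) * (1.0 - prevalence)
    tn = specificity * (1.0 - prevalence)
    fn = (1.0 - sensitivity) * prevalence
    return tp / (tp + fp), tn / (tn + fn)


se, sp = 0.95, 0.85
for label, prev in [("asymptomatic, low risk", 0.05),
                    ("typical angina, high risk", 0.60)]:
    ppv, npv = predictive_values(se, sp, prev)
    print(f"{label:28s} PPV = {ppv:.2f}, NPV = {npv:.2f}")
# PPV rises from about 0.25 in the low-risk group to about 0.90 in the
# high-risk group, even though the test itself is unchanged.
```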

  6. Modeling a Multi-modal Distribution of Wind Direction Data in Kudat, Malaysia

    Directory of Open Access Journals (Sweden)

    Nurulkamal Masseran

    2015-08-01

    Full Text Available Wind direction is the direction from which the wind is blowing. It is expressed in degrees measured clockwise from geographic north. Knowledge of the wind direction can be used to obtain information about wind energy potential, the dispersion of particulate matter in the air, the effects of wind on engineering structures, maritime studies, and so on. This study provides a suitable model for wind direction data that exhibit multi-modal distributional properties. A case study involving data from Kudat, Malaysia, has been analysed. The statistical models known as the finite mixture of von Mises-Fisher (mvMF) distributions and the circular distribution based on nonnegative trigonometric sums (NNTS) were fitted to the data. The suitability of the mvMF and NNTS models was then judged based on graphical representations and goodness-of-fit statistics. The results show that the mvMF model with  components is sufficient to provide the best model.
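
    The sketch below evaluates the density of a two-component von Mises mixture of the kind used for such multi-modal wind-direction data; the component weights, mean directions and concentrations are illustrative assumptions (fitting them, e.g. by EM, is not shown).

```python
# Density of a two-component von Mises mixture on the circle (illustrative).
import numpy as np
from scipy.stats import vonmises

weights = np.array([0.6, 0.4])     # mixing proportions
mus = np.radians([45.0, 225.0])    # mean directions (radians)
kappas = np.array([4.0, 2.0])      # concentrations


def mixture_pdf(theta):
    """Mixture density at angle(s) theta, in radians."""
    comps = [w * vonmises.pdf(theta, kappa, loc=mu)
             for w, mu, kappa in zip(weights, mus, kappas)]
    return np.sum(comps, axis=0)


theta = np.linspace(0.0, 2.0 * np.pi, 361)
density = mixture_pdf(theta)
print(f"dominant mode near {np.degrees(theta[np.argmax(density)]):.0f} degrees")
```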

  7. Holographic Raman tweezers controlled by multi-modal natural user interface

    Science.gov (United States)

    Tomori, Zoltán; Keša, Peter; Nikorovič, Matej; Kaňka, Jan; Jákl, Petr; Šerý, Mojmír; Bernatová, Silvie; Valušová, Eva; Antalík, Marián; Zemánek, Pavel

    2016-01-01

    Holographic optical tweezers provide a contactless way to trap and manipulate several microobjects independently in space using focused laser beams. Although the methods of fast and efficient generation of optical traps are well developed, their user friendly control still lags behind. Even though several attempts have appeared recently to exploit touch tablets, 2D cameras, or Kinect game consoles, they have not yet reached the level of natural human interface. Here we demonstrate a multi-modal ‘natural user interface’ approach that combines finger and gaze tracking with gesture and speech recognition. This allows us to select objects with an operator’s gaze and voice, to trap the objects and control their positions via tracking of finger movement in space and to run semi-automatic procedures such as acquisition of Raman spectra from preselected objects. This approach takes advantage of the power of human processing of images together with smooth control of human fingertips and downscales these skills to control remotely the motion of microobjects at microscale in a natural way for the human operator.

  8. Analysis of Predictive Values Based on Individual Risk Factors in Multi-Modality Trials.

    Science.gov (United States)

    Lange, Katharina; Brunner, Edgar

    2013-03-15

    The accuracy of diagnostic tests with binary end-points is most frequently measured by sensitivity and specificity. However, from the clinical perspective, the main purpose of a diagnostic agent is to assess the probability of a patient actually being diseased and hence predictive values are more suitable here. As predictive values depend on the pre-test probability of disease, we provide a method to take risk factors influencing the patient's prior probability of disease into account, when calculating predictive values. Furthermore, approaches to assess confidence intervals and a methodology to compare predictive values by statistical tests are presented. Hereby the methods can be used to analyze predictive values of factorial diagnostic trials, such as multi-modality, multi-reader-trials. We further performed a simulation study assessing length and coverage probability for different types of confidence intervals, and we present the R-Package facROC that can be used to analyze predictive values in factorial diagnostic trials in particular. The methods are applied to a study evaluating CT-angiography as a noninvasive alternative to coronary angiography for diagnosing coronary artery disease. Hereby the patients' symptoms are considered as risk factors influencing the respective predictive values.

  9. Multi-scale patch and multi-modality atlases for whole heart segmentation of MRI.

    Science.gov (United States)

    Zhuang, Xiahai; Shen, Juan

    2016-07-01

    A whole heart segmentation (WHS) method is presented for cardiac MRI. This segmentation method employs multi-modality atlases from MRI and CT and adopts a new label fusion algorithm which is based on the proposed multi-scale patch (MSP) strategy and a new global atlas ranking scheme. MSP, developed from the scale-space theory, uses the information of multi-scale images and provides different levels of the structural information of images for multi-level local atlas ranking. Both the local and global atlas ranking steps use the information theoretic measures to compute the similarity between the target image and the atlases from multiple modalities. The proposed segmentation scheme was evaluated on a set of data involving 20 cardiac MRI and 20 CT images. Our proposed algorithm demonstrated a promising performance, yielding a mean WHS Dice score of 0.899 ± 0.0340, Jaccard index of 0.818 ± 0.0549, and surface distance error of 1.09 ± 1.11 mm for the 20 MRI data. The average runtime for the proposed label fusion was 12.58 min.
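
    The Dice and Jaccard scores quoted above are standard overlap measures between a computed segmentation and a reference. The sketch below shows how they are typically computed from binary masks; it is not the authors' evaluation code, and the toy 2D masks stand in for 3D whole-heart labels.

```python
import numpy as np

def dice_and_jaccard(seg, ref):
    """Overlap between two binary segmentation masks of equal shape."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    intersection = np.logical_and(seg, ref).sum()
    dice = 2.0 * intersection / (seg.sum() + ref.sum())
    jaccard = intersection / np.logical_or(seg, ref).sum()
    return dice, jaccard

# Toy 2D masks; in the study the labels are 3D whole-heart segmentations
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_and_jaccard(a, b))
```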

  10. Multi-modal, ultrasensitive detection of trace explosives using MEMS devices with quantum cascade lasers

    Science.gov (United States)

    Zandieh, Omid; Kim, Seonghwan

    2016-05-01

    Multi-modal chemical sensors based on microelectromechanical systems (MEMS) have been developed with an electrical readout. Opto-calorimetric infrared (IR) spectroscopy, capable of obtaining molecular signatures of extremely small quantities of adsorbed explosive molecules, has been realized with a microthermometer/microheater device using a widely tunable quantum cascade laser. A microthermometer/microheater device responds to the heat generated by nonradiative decay process when the adsorbed explosive molecules are resonantly excited with IR light. Monitoring the variation in microthermometer signal as a function of illuminating IR wavelength corresponds to the conventional IR absorption spectrum of the adsorbed molecules. Moreover, the mass of the adsorbed molecules is determined by measuring the resonance frequency shift of the cantilever shape microthermometer for the quantitative opto-calorimetric IR spectroscopy. In addition, micro-differential thermal analysis, which can be used to differentiate exothermic or endothermic reaction of heated molecules, has been performed with the same device to provide additional orthogonal signal for trace explosive detection and sensor surface regeneration. In summary, we have designed, fabricated and tested microcantilever shape devices integrated with a microthermometer/microheater which can provide electrical responses used to acquire both opto-calorimetric IR spectra and microcalorimetric thermal responses. We have demonstrated the successful detection, differentiation, and quantification of trace amounts of explosive molecules and their mixtures (cyclotrimethylene trinitramine (RDX) and pentaerythritol tetranitrate (PETN)) using three orthogonal sensing signals which improve chemical selectivity.

  11. MINC 2.0: A Flexible Format for Multi-Modal Images

    Science.gov (United States)

    Vincent, Robert D.; Neelin, Peter; Khalili-Mahani, Najmeh; Janke, Andrew L.; Fonov, Vladimir S.; Robbins, Steven M.; Baghdadi, Leila; Lerch, Jason; Sled, John G.; Adalat, Reza; MacDonald, David; Zijdenbos, Alex P.; Collins, D. Louis; Evans, Alan C.

    2016-01-01

    It is often useful that an imaging data format can afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the early 2000's the MINC developers created MINC 2.0, which added support for 64-bit file sizes, internal compression, and a number of other modern features. Because of its extensible design, it has been easy to incorporate details of provenance in the header metadata, including an explicit processing history, unique identifiers, and vendor-specific scanner settings. This makes MINC ideal for use in large scale imaging studies and databases. It also makes it easy to adapt to new scanning sequences and modalities. PMID:27563289

  12. Multi-modal Patient Cohort Identification from EEG Report and Signal Data

    Science.gov (United States)

    Goodwin, Travis R.; Harabagiu, Sanda M.

    2016-01-01

    Clinical electroencephalography (EEG) is the most important investigation in the diagnosis and management of epilepsies. An EEG records the electrical activity along the scalp and measures spontaneous electrical activity of the brain. Because the EEG signal is complex, its interpretation is known to produce moderate inter-observer agreement among neurologists. This problem can be addressed by providing clinical experts with the ability to automatically retrieve similar EEG signals and EEG reports through a patient cohort retrieval system operating on a vast archive of EEG data. In this paper, we present a multi-modal EEG patient cohort retrieval system called MERCuRY which leverages the heterogeneous nature of EEG data by processing both the clinical narratives from EEG reports as well as the raw electrode potentials derived from the recorded EEG signal data. At the core of MERCuRY is a novel multimodal clinical indexing scheme which relies on EEG data representations obtained through deep learning. The index is used by two clinical relevance models that we have generated for identifying patient cohorts satisfying the inclusion and exclusion criteria expressed in natural language queries. Evaluations of the MERCuRY system measured the relevance of the patient cohorts, obtaining a MAP score of 69.87% and an NDCG of 83.21%. PMID:28269938
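
    The retrieval metrics reported above (MAP and NDCG) are standard ranking measures. The sketch below shows one common way to compute them from relevance judgments given in rank order; it is not taken from the MERCuRY system, and the example relevance lists are invented.

```python
import numpy as np

def average_precision(relevance):
    """AP for one ranked result list; relevance is a 0/1 list in rank order."""
    relevance = np.asarray(relevance)
    hits = np.cumsum(relevance)
    precisions = hits / (np.arange(len(relevance)) + 1)
    return (precisions * relevance).sum() / max(relevance.sum(), 1)

def ndcg(relevance):
    """NDCG for one ranked list with graded relevance scores."""
    relevance = np.asarray(relevance, dtype=float)
    discounts = 1.0 / np.log2(np.arange(len(relevance)) + 2)
    dcg = (relevance * discounts).sum()
    ideal = (np.sort(relevance)[::-1] * discounts).sum()
    return dcg / ideal if ideal > 0 else 0.0

# MAP is the mean of per-query average precisions
queries = [[1, 0, 1, 1], [0, 1, 0, 0]]
print(np.mean([average_precision(q) for q in queries]), ndcg([3, 2, 0, 1]))
```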

  13. Multi-Modal Dictionary Learning for Image Separation With Application in Art Investigation

    Science.gov (United States)

    Deligiannis, Nikos; Mota, Joao F. C.; Cornelis, Bruno; Rodrigues, Miguel R. D.; Daubechies, Ingrid

    2017-02-01

    In support of art investigation, we propose a new source separation method that unmixes a single X-ray scan acquired from double-sided paintings. In this problem, the X-ray signals to be separated have similar morphological characteristics, which brings previous source separation methods to their limits. Our solution is to use photographs taken from the front and back-side of the panel to drive the separation process. The crux of our approach relies on the coupling of the two imaging modalities (photographs and X-rays) using a novel coupled dictionary learning framework able to capture both common and disparate features across the modalities using parsimonious representations; the common component models features shared by the multi-modal images, whereas the innovation component captures modality-specific information. As such, our model enables the formulation of appropriately regularized convex optimization procedures that lead to the accurate separation of the X-rays. Our dictionary learning framework can be tailored both to a single- and a multi-scale framework, with the latter leading to a significant performance improvement. Moreover, to improve further on the visual quality of the separated images, we propose to train coupled dictionaries that ignore certain parts of the painting corresponding to craquelure. Experimentation on synthetic and real data - taken from digital acquisition of the Ghent Altarpiece (1432) - confirms the superiority of our method against the state-of-the-art morphological component analysis technique that uses either fixed or trained dictionaries to perform image separation.

  14. A multi-modal treatment approach for the shoulder: A 4 patient case series

    Directory of Open Access Journals (Sweden)

    Pollard Henry

    2005-09-01

    Full Text Available Abstract Background This paper describes the clinical management of four cases of shoulder impingement syndrome using a conservative multimodal treatment approach. Clinical Features Four patients presented to a chiropractic clinic with chronic shoulder pain, tenderness in the shoulder region and a limited range of motion with pain and catching. After physical and orthopaedic examination a clinical diagnosis of shoulder impingement syndrome was reached. The four patients were admitted to a multi-modal treatment protocol including soft tissue therapy (ischaemic pressure and cross-friction massage), 7 minutes of phonophoresis (driving of medication into tissue with ultrasound) with 1% cortisone cream, diversified spinal and peripheral joint manipulation and rotator cuff and shoulder girdle muscle exercises. The outcome measures for the study were subjective/objective visual analogue pain scales (VAS), range of motion (goniometer) and return to normal daily, work and sporting activities. All four subjects at the end of the treatment protocol were symptom free with all outcome measures being normal. At 1 month follow up all patients continued to be symptom free with full range of motion and complete return to normal daily activities. Conclusion This case series demonstrates the potential benefit of a multimodal chiropractic protocol in resolving symptoms associated with a suspected clinical diagnosis of shoulder impingement syndrome.

  15. Multi-modality image reconstruction for dual-head small-animal PET

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Chang-Han; Chou, Cheng-Ying [National Taiwan University, Taipei, Taiwan (China)

    2015-05-18

    The hybrid positron emission tomography/computed tomography (PET/CT) or positron emission tomography/magnetic resonance imaging (PET/MRI) has become routine practice in clinics. The applications of multi-modality imaging can also benefit research advances. Consequently, a dedicated small-animal imaging system like dual-head small-animal PET (DHAPET), which possesses the advantages of high detection sensitivity and high resolution, can exploit the structural information from CT or MRI. It should be noted that the special detector arrangement in DHAPET leads to severe data truncation, thereby degrading the image quality. We proposed to take advantage of anatomical priors and total variation (TV) minimization methods to reconstruct the PET activity distribution from incomplete measurement data. The objective is to solve a penalized least-squares function consisting of a data fidelity term, a TV norm and median root priors. In this work, we employed the splitting-based fast iterative shrinkage/thresholding algorithm to split smooth and non-smooth functions in the convex optimization problem. Our simulation studies validated that the images reconstructed by use of the proposed method can outperform those obtained by use of conventional expectation maximization algorithms or those reconstructed without considering the anatomical prior information. Additionally, the convergence rate is also accelerated.
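
    As a simplified, self-contained illustration of the splitting-based fast iterative shrinkage/thresholding (FISTA) idea mentioned above, the sketch below minimises a penalized least-squares objective with an l1 penalty standing in for the TV norm and anatomical priors, and a small dense random matrix standing in for the PET system model; none of this is the authors' implementation.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for 0.5*||Ax - b||^2 + lam*||x||_1 (l1 stands in for the TV/prior terms)."""
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

# Toy underdetermined system standing in for truncated projection data
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 3.0]
b = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = fista(A, b, lam=0.1)
```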

  16. A FRAMEWORK FOR AN OPEN SOURCE GEOSPATIAL CERTIFICATION MODEL

    Directory of Open Access Journals (Sweden)

    T. U. R. Khan

    2016-06-01

    Full Text Available The geospatial industry is forecast to see enormous growth in the forthcoming years and an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, in the geospatial and IT arena as well as in political discussion and legislation, Open Source solutions, open data proliferation, and the use of open standards have an increasing significance. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission “Making geospatial education and opportunities accessible to all”. Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, e.g., the GIS Certification Institute, GeoAcademy, ASPRS, and software vendors such as Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination, which are offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge concepts, i.e., the NCGIA Core Curriculum, the URISA Body Of Knowledge, the USGIF Essential Body Of Knowledge, the “Geographic Information: Need to Know”, currently under development, and the Geospatial Technology Competency Model (GTCM). The latter provides a US-oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and substantially influenced the certification framework. In addition to the theoretical analysis of existing resources, the geospatial community was integrated twofold. An online survey about the relevance of Open Source was performed and

  17. a Framework for AN Open Source Geospatial Certification Model

    Science.gov (United States)

    Khan, T. U. R.; Davis, P.; Behr, F.-J.

    2016-06-01

    The geospatial industry is forecast to see enormous growth in the forthcoming years and an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, in the geospatial and IT arena as well as in political discussion and legislation, Open Source solutions, open data proliferation, and the use of open standards have an increasing significance. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission "Making geospatial education and opportunities accessible to all". Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, e.g., the GIS Certification Institute, GeoAcademy, ASPRS, and software vendors such as Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination, which are offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge concepts, i.e., the NCGIA Core Curriculum, the URISA Body Of Knowledge, the USGIF Essential Body Of Knowledge, the "Geographic Information: Need to Know", currently under development, and the Geospatial Technology Competency Model (GTCM). The latter provides a US-oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and substantially influenced the certification framework. In addition to the theoretical analysis of existing resources, the geospatial community was integrated twofold. An online survey about the relevance of Open Source was performed and evaluated with 105

  18. Calculation of Prompt Fission Neutron from 233U(n, f) Reaction by Multi-Modal Los Alamos Model

    Institute of Scientific and Technical Information of China (English)

    郑娜; 钟春来; 樊铁栓

    2012-01-01

    An attempt is made to improve the evaluation of the prompt fission neutron emission from the 233U(n, f) reaction for incident neutron energies below 6 MeV. The multi-modal fission approach is applied to the improved version of the Los Alamos model and the point by point model. The prompt fission neutron spectra and the prompt fission neutron multiplicity as a function of fragment mass (usually named "sawtooth" data) v(A) are calculated independently for the three most dominant fission modes (standard I, standard II and superlong), and the total spectra and v(A) are synthesized. The multi-modal parameters are determined on the basis of experimental data of fission fragment mass distributions. The present calculation results can describe the experimental data very well, and the proposed treatment is thus a useful tool for prompt fission neutron emission prediction.

  19. Your Personal Analysis Toolkit - An Open Source Solution

    Science.gov (United States)

    Mitchell, T.

    2009-12-01

    Open source software is commonly known for its web browsers, word processors and programming languages. However, there is a vast array of open source software focused on geographic information management and geospatial application building in general. As geo-professionals, having easy access to tools for our jobs is crucial. Open source software provides the opportunity to add a tool to your tool belt and carry it with you for your entire career - with no license fees, a supportive community and the opportunity to test, adopt and upgrade at your own pace. OSGeo is a US registered non-profit representing more than a dozen mature geospatial data management applications and programming resources. Tools cover areas such as desktop GIS, web-based mapping frameworks, metadata cataloging, spatial database analysis, image processing and more. Learn about some of these tools as they apply to AGU members, as well as how you can join OSGeo and its members in getting the job done with powerful open source tools. If you haven't heard of OSSIM, MapServer, OpenLayers, PostGIS, GRASS GIS or the many other projects under our umbrella - then you need to hear this talk. Invest in yourself - use open source!

  20. Integrating Free and Open Source Solutions into Geospatial Science Education

    Directory of Open Access Journals (Sweden)

    Vaclav Petras

    2015-06-01

    Full Text Available While free and open source software becomes increasingly important in geospatial research and industry, open science perspectives are generally less reflected in universities’ educational programs. We present an example of how free and open source software can be incorporated into geospatial education to promote open and reproducible science. Since 2008 graduate students at North Carolina State University have the opportunity to take a course on geospatial modeling and analysis that is taught with both proprietary and free and open source software. In this course, students perform geospatial tasks simultaneously in the proprietary package ArcGIS and the free and open source package GRASS GIS. By ensuring that students learn to distinguish between geospatial concepts and software specifics, students become more flexible and stronger spatial thinkers when choosing solutions for their independent work in the future. We also discuss ways to continually update and improve our publicly available teaching materials for reuse by teachers, self-learners and other members of the GIS community. Only when free and open source software is fully integrated into geospatial education, we will be able to encourage a culture of openness and, thus, enable greater reproducibility in research and development applications.

  1. An International Look at Women in Open Source

    Directory of Open Access Journals (Sweden)

    Cathy Malmrose

    2009-05-01

    Full Text Available When attending conferences, working with various open source teams, and generally interacting with people in the open source world, we see women as a small representative minority. The disparity leaves us wondering: "How to activate the other 50% of the population?". The question, "How do we include more women?" has been asked many times and answered in many ways. Cathy Malmrose, CEO of ZaReason, a Linux hardware company, stated, "possibly the most immediately effective solution is to showcase women internationally and their contributions. By simply talking about what women are doing all over the world, it creates an atmosphere of acceptance, encouraging more women to try contributing, no matter where they are located or what their situation is. Our goal is to normalize the experience of having women on open source projects". This issue of OSBR is a powerful effort to do just that. This article provides a glance at women in open source internationally. It is by no means comprehensive and is based solely on a random sampling of women who are currently contributing. The goal of this article is to give you a sense of the breadth and depth of women contributing to open source.

  2. Comparison of open-source linear programming solvers.

    Energy Technology Data Exchange (ETDEWEB)

    Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.; Jones, Katherine A.; Martin, Nathaniel; Detry, Richard Joseph

    2013-10-01

    When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
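
    For illustration, the snippet below solves a small LP of the kind used in such benchmarks with SciPy's linprog front-end to the open-source HiGHS solver (HiGHS was not among the solvers compared in the study, and the problem data are a textbook example, not one of the study's test problems).

```python
from scipy.optimize import linprog

# minimize  c @ x  subject to  A_ub @ x <= b_ub,  x >= 0
c = [-3.0, -5.0]                      # maximize 3*x1 + 5*x2 via minimization
A_ub = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]
b_ub = [4.0, 12.0, 18.0]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)],
                 method="highs")
print(result.x, result.fun)           # optimal point and objective value
```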

  3. The Imagery Exchange (TIE): Open Source Imagery Management System

    Science.gov (United States)

    Alarcon, C.; Huang, T.; Thompson, C. K.; Roberts, J. T.; Hall, J. R.; Cechini, M.; Schmaltz, J. E.; McGann, J. M.; Boller, R. A.; Murphy, K. J.; Bingham, A. W.

    2013-12-01

    NASA's Global Imagery Browse Service (GIBS) is the Earth Observation System (EOS) imagery solution for delivering global, full-resolution satellite imagery in a highly responsive manner. GIBS consists of two major subsystems, OnEarth and The Imagery Exchange (TIE). TIE is the GIBS horizontally scaled imagery workflow manager component, an Open Archival Information System (OAIS) responsible for orchestrating the acquisition, preparation, generation, and archiving of imagery to be served by OnEarth. TIE is an extension of the Data Management and Archive System (DMAS), a high performance data management system developed at the Jet Propulsion Laboratory by leveraging open source tools and frameworks, including Groovy/Grails, Restlet, Apache ZooKeeper, Apache Solr, and other open source solutions. This presentation focuses on the application of Open Source technologies in developing a horizontally scaled data system like DMAS and TIE. As part of our commitment to contributing back to the open source community, TIE is in the process of being open sourced. This presentation will also cover our current effort in getting TIE into the hands of the community from which we have benefited.

  4. Mobile Applications for Agricultural Online Portals – Cross-platform or Native Development

    Directory of Open Access Journals (Sweden)

    J. Masner

    2015-06-01

    Full Text Available There exist several possible approaches to the development of mobile applications. The paper treats the options for native applications for mobile devices. It analyzes the economic aspects of two approaches - native applications developed with the tools for individual platforms (Windows Phone, Android, iOS) and native applications developed with cross-platform tools, represented by the tool Xamarin. In the paper, basic variables and a formula for cost calculation are defined. The conclusions show that the utilization of the cross-platform tool Xamarin can lead to significant cost reduction. However, further research is needed, mainly in the area of both the complexity of development with cross-platform tools and meeting the requirements on UI and UX.

  5. Industrial open source solutions for product life cycle management

    Directory of Open Access Journals (Sweden)

    Jaime Campos

    2014-12-01

    Full Text Available The authors go through open source for product life cycle management (PLM) and the efforts made by communities such as the Open Source Initiative. The characteristics of open source solutions are highlighted as well. Next, the authors go through the requirements for PLM. This is an area that has received more attention as manufacturers compete on the quality and life cycle costs of their products. In particular, the need of companies to secure a strong position in providing services for their products, and thus make themselves less vulnerable to changes in the market, has led to high interest in product life cycle simulation. The potential of applying semantic data management to solve these problems is discussed in the light of recent developments. In addition, a basic roadmap is presented as to how the above-described problems could be tackled with open software solutions.

  6. Dilemmas within Commercial Involvement in Open Source Software

    DEFF Research Database (Denmark)

    Ciesielska, Malgorzata; Westenholz, Ann

    2016-01-01

    Purpose – The purpose of this paper is to contribute to the literature about the commercial involvement in open source software, levels of this involvement and consequences of attempting to mix various logics of action. Design/methodology/approach – This paper uses the case study approach based...... on mixed methods: literature reviews and news searches, electronic surveys, qualitative interviews and observations. It combines discussions from several research projects as well as previous publications to present the scope of commercial choices within open source software and their consequences....... Findings – The findings show that higher levels of involvement in open source software communities pose important questions about the balance between economic, technological, and social logics as well as the benefits of being autonomous, having access to collaborative networks and minimizing risks related...

  7. Open Source AV solution supporting In Situ Simulation

    DEFF Research Database (Denmark)

    Krogh, Kristian; Pociunas, Gintas; Dahl, Mads Ronald

    the software to meet our expectations for a portable AV system for VAD. The system would make use of “off the shelf” hardware components which are widely available and easily replaced or expanded. The developed AV software and coding is contracted to be available as Copyleft Open Source to ensure low cost...... a stable AV software that has been developed and implemented for an in situ simulation initiative. This version (1.3) is the first one released as Open Source (Copyleft) software (see QR tag). We have found that it is possible to deliver multi-camera video assisted debriefing in a mobile, in situ simulation...... environment using an AV system constructed from “off the shelf” components and Open Source software....

  8. Open Source Web Based Geospatial Processing with OMAR

    Directory of Open Access Journals (Sweden)

    Mark Lucas

    2009-01-01

    Full Text Available The availability of geospatial data sets is exploding. New satellites, aerial platforms, video feeds, global positioning system tagged digital photos, and traditional GIS information are dramatically increasing across the globe. These raw materials need to be dynamically processed, combined and correlated to generate value added information products to answer a wide range of questions. This article provides an overview of OMAR web based geospatial processing. OMAR is part of the Open Source Software Image Map project under the Open Source Geospatial Foundation. The primary contributors of OSSIM make their livings by providing professional services to US Government agencies and programs. OMAR provides one example that open source software solutions are increasingly being deployed in US government agencies. We will also summarize the capabilities of OMAR and its plans for near term development.

  9. A Framework for the Systematic Collection of Open Source Intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Pouchard, Line Catherine [ORNL; Trien, Joseph P [ORNL; Dobson, Jonathan D [ORNL

    2009-01-01

    Following legislative directions, the Intelligence Community has been mandated to make greater use of Open Source Intelligence (OSINT). Efforts are underway to increase the use of OSINT but there are many obstacles. One of these obstacles is the lack of tools helping to manage the volume of available data and ascertain its credibility. We propose a unique system for selecting, collecting and storing Open Source data from the Web and the Open Source Center. Some data management tasks are automated, document source is retained, and metadata containing geographical coordinates are added to the documents. Analysts are thus empowered to search, view, store, and analyze Web data within a single tool. We present ORCAT I and ORCAT II, two implementations of the system.

  10. Seizure Onset Detection based on a Uni- or Multi-modal Intelligent Seizure Acquisition (UISA/MISA) System

    DEFF Research Database (Denmark)

    Conradsen, Isa; Beniczky, Sándor; Wolf, Peter

    2010-01-01

    An automatic Uni- or Multi-modal Intelligent Seizure Acquisition (UISA/MISA) system is highly applicable for onset detection of epileptic seizures based on motion data. The modalities used are surface electromyography (sEMG), acceleration (ACC) and angular velocity (ANG). The newly proposed automatic...... algorithm on motion data extracts features as “log-sum” measures of discrete wavelet components. Classification into the two groups “seizure” versus “nonseizure” is made based on the support vector machine (SVM) algorithm. The algorithm performs with a sensitivity of 91-100%, a median latency of 1...... second and a specificity of 100% on multi-modal data from five healthy subjects simulating seizures. The uni-modal algorithm based on sEMG data from the subjects and patients performs satisfactorily in some cases. As expected, our results clearly show superiority of the multimodal approach, as compared...

  11. Open Source Development of Tangible Products - from a business perspective

    DEFF Research Database (Denmark)

    Fjeldsted, Asta S.; Adalsteinsdottir, Gudrun; Howard, Thomas J.

    This article’s objective is to set up some foundational theory and practices for Open Source Development (OSD) of tangible products, a novel and emerging approach derived from the well-known open source software movement. As a contribution to the first steps of research in this discipline a clear...... definition for OSD is proposed and used to describe the key elements of a suggested OSD Process model. Several case studies are analysed to create an Archetypal Business Model characterising OSD of tangible products and the possibilities and delimitations within. Furthermore, strategic tools are suggested...

  12. Data from thermal testing of the Open Source Cryostage

    DEFF Research Database (Denmark)

    Buch, Johannes Lørup; Ramløv, Hans

    2016-01-01

    The data presented here is related to the research article "An open source cryostage and software analysis method for detection of antifreeze activity" (Buch and Ramløv, 2016) [1]. The design of the Open Source Cryostage (OSC) is tested in terms of thermal limits, thermal efficiency and electrical...... efficiency. This article furthermore includes an overview of the electrical circuitry and a flowchart of the software program controlling the temperature of the OSC. The thermal efficiency data is presented here as degrees per volt and maximum cooling capacity....

  13. Impact of Open Source software on the environmental protection

    Directory of Open Access Journals (Sweden)

    D. Viduka

    2015-03-01

    Full Text Available The ongoing development of computer hardware contributes to the constant discarding of old computers, i.e., a direct increase in waste electrical and electronic equipment. This type of waste is a major threat to human health and the environment. By applying Open Source software solutions, all users of computer hardware can contribute significantly to the reduction of this type of waste. The aim of this paper is to present the advantages of using Open Source software packages in terms of preserving and protecting the environment. This paper presents the results of testing the MS Windows and Linux operating systems on an older computer; the results are obtained by applying the benchmark software GeekBench.

  14. Forks impacts and motivations in free and open source projects

    Directory of Open Access Journals (Sweden)

    R. Viseur

    2012-02-01

    Full Text Available Forking is a mechanism of splitting within a community and is typically found in the free and open source software field. As a failure of cooperation in a context of open innovation, forking is a practical and informative subject of study. In-depth research concerning the fork phenomenon is uncommon. We therefore conducted a detailed study of 26 forks from popular free and open source projects. We created fact sheets highlighting the impact of and motivations for forking. We particularly point to the fact that the desire for greater technical differentiation and problems of project governance are major sources of conflict.

  15. An Organizational Perspective on Free and Open Source Software Development

    DEFF Research Database (Denmark)

    Vujovic, Sladjana; Ulhøi, John Parm

    2006-01-01

    The traditional model of innovation, the restricted/closed source (R/CS) model, is based on proprietary knowledge and a private model of production. A fundamentally different one, the open source model, is based on non-proprietary knowledge and non-economic motives. Moreover, between the two......, there are various combinations or hybrids, in the following referred to as free/open source-based (F/OS-based) agency. In the discussions, practical examples from software production are included. In conclusion, the paper identifies avenues for future research as well as important managerial and policy implications....

  16. Open source and DIY hardware for DNA nanotechnology labs.

    Science.gov (United States)

    Damase, Tulsi R; Stephens, Daniel; Spencer, Adam; Allen, Peter B

    A set of instruments and specialized equipment is necessary to equip a laboratory to work with DNA. Reducing the barrier to entry for DNA manipulation should enable and encourage new labs to enter the field. We present three examples of open source/DIY technology with significantly reduced costs relative to commercial equipment. This includes a gel scanner, a horizontal PAGE gel mold, and a homogenizer for generating DNA-coated particles. The overall cost savings obtained by using open source/DIY equipment was between 50 and 90%.

  17. The Challenges of Open Source Software in IT Adoption

    DEFF Research Database (Denmark)

    Holm Larsen, Michael; Holck, Jesper; Pedersen, Mogens Kuhn

    2004-01-01

    Abstract: The paper presents an explorative study of Open Source Software (OSS) focusing on the managerial decisions for acquisition of OSS. Based on three case studies we argue that whereas small organizations may often choose adoption of OSS expecting significant cost savings, a major barrier...... for larger organizations' adoption of OSS lies in the organizations' consolidation of their enterprise architectures; in addition, OSS will not be adopted before satisfactory delivery and procurement models for OSS are established. Keywords: Open Source Software, OSS, Enterprise Architecture, Total Cost...

  18. Data from thermal testing of the Open Source Cryostage

    Directory of Open Access Journals (Sweden)

    Johannes Lørup Buch

    2016-09-01

    Full Text Available The data presented here is related to the research article “An open source cryostage and software analysis method for detection of antifreeze activity” (Buch and Ramløv, 2016) [1]. The design of the Open Source Cryostage (OSC) is tested in terms of thermal limits, thermal efficiency and electrical efficiency. This article furthermore includes an overview of the electrical circuitry and a flowchart of the software program controlling the temperature of the OSC. The thermal efficiency data is presented here as degrees per volt and maximum cooling capacity.

  19. Open source and DIY hardware for DNA nanotechnology labs

    Directory of Open Access Journals (Sweden)

    Tulsi R. Damase

    2015-08-01

    Full Text Available A set of instruments and specialized equipment is necessary to equip a laboratory to work with DNA. Reducing the barrier to entry for DNA manipulation should enable and encourage new labs to enter the field. We present three examples of open source/DIY technology with significantly reduced costs relative to commercial equipment. This includes a gel scanner, a horizontal PAGE gel mold, and a homogenizer for generating DNA-coated particles. The overall cost savings obtained by using open source/DIY equipment was between 50 and 90%.

  20. Open source MySQL Browser for Open Innovation

    Directory of Open Access Journals (Sweden)

    Radu Bucea-Manea-Tonis

    2014-07-01

    Full Text Available Abstract. Our purpose is to cross-compile the MySQL driver source code for Linux on a Windows architecture using a tool chain, in order to build a neutral, valid 32-bit graphical interface. Having achieved this goal, we could say that virtually any Open source application can be built and run on Windows with maximum efficiency in terms of costs and resources. This browser is an example of open innovation because its source code is free for anybody willing to develop new software apps for business, and it uses only Open source tools.

  1. Human genome and open source: balancing ethics and business.

    Science.gov (United States)

    Marturano, Antonio

    2011-01-01

    The Human Genome Project has been completed thanks to a massive use of computer techniques, as well as the adoption of the open-source business and research model by the scientists involved. This model won over the proprietary model and allowed quick propagation of, and feedback on, research results among peers. In this paper, the author analyses some ethical and legal issues emerging from the use of such a computing model with respect to Human Genome property rights. The author argues that Open Source is the best business model, as it is able to balance business and human rights perspectives.

  2. Building cross-platform apps using Titanium, Alloy, and Appcelerator cloud services

    CERN Document Server

    Saunders, Aaron

    2014-01-01

    Skip Objective-C and Java to get your app to market faster, using the skills you already have. Building Cross-Platform Apps using Titanium, Alloy, and Appcelerator Cloud Services shows you how to build cross-platform iOS and Android apps without learning Objective-C or Java. With detailed guidance on using the Titanium Mobile Platform and Appcelerator Cloud Services, you will quickly develop the skills to build real, native apps (not web apps) using existing HTML, CSS, and JavaScript know-how. This guide takes you step-by-step through the creation of a photo-sharing app that leverages

  3. Comparison of Sleep-Wake Classification using Electroencephalogram and Wrist-worn Multi-modal Sensor Data

    OpenAIRE

    Sano, Akane; Picard, Rosalind W.

    2014-01-01

    This paper presents the comparison of sleep-wake classification using electroencephalogram (EEG) and multi-modal data from a wrist wearable sensor. We collected physiological data while participants were in bed: EEG, skin conductance (SC), skin temperature (ST), and acceleration (ACC) data, from 15 college students, computed the features and compared the intra-/inter-subject classification results. As results, EEG features showed 83% while features from a wrist wearable sensor showed 74% and ...

  4. Automatic multi-modal intelligent seizure acquisition (MISA) system for detection of motor seizures from electromyographic data and motion data

    DEFF Research Database (Denmark)

    Conradsen, Isa; Beniczky, Sándor; Wolf, Peter

    2012-01-01

    measures of reconstructed sub-bands from the discrete wavelet transformation (DWT) and the wavelet packet transformation (WPT). Based on the extracted features all data segments were classified using a support vector machine (SVM) algorithm as simulated seizure or normal activity. A case study...... system compared to the uni-modal one. The presented system has a promising potential for seizure detection based on multi-modal data....

  5. Classification of first-episode psychosis: a multi-modal multi-feature approach integrating structural and diffusion imaging.

    Science.gov (United States)

    Peruzzo, Denis; Castellani, Umberto; Perlini, Cinzia; Bellani, Marcella; Marinelli, Veronica; Rambaldelli, Gianluca; Lasalvia, Antonio; Tosato, Sarah; De Santi, Katia; Murino, Vittorio; Ruggeri, Mirella; Brambilla, Paolo

    2015-06-01

    Currently, most classification studies of psychosis have focused on chronic patients and employed single machine learning approaches. To overcome these limitations, we here compare, to the best of our knowledge for the first time, different classification methods for first-episode psychosis (FEP) using multi-modal imaging data extracted from several cortical and subcortical structures and white matter fiber bundles. 23 FEP patients and 23 age-, gender-, and race-matched healthy participants were included in the study. An innovative multivariate approach based on multiple kernel learning (MKL) methods was implemented on structural MRI and diffusion tensor imaging. MKL provides the best classification performance in comparison with the more widely used support vector machine, enabling the definition of a reliable automatic decisional system based on the integration of multi-modal imaging information. Our results show a discrimination accuracy greater than 90 % between healthy subjects and patients with FEP. Regions with an accuracy greater than 70 % on different imaging sources and measures were the middle and superior frontal gyrus, parahippocampal gyrus, uncinate fascicles, and cingulum. This study shows that multivariate machine learning approaches integrating multi-modal and multisource imaging data can classify FEP patients with high accuracy. Interestingly, specific grey matter structures and white matter bundles reach high classification reliability when using different imaging modalities and indices, potentially outlining a prefronto-limbic network impaired in FEP with particular regard to the right hemisphere.
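
    A minimal sketch of the multiple-kernel idea follows: compute one kernel per imaging modality, combine them, and feed the combined kernel to an SVM. The random feature matrices, the RBF kernel choice and the fixed weights are illustrative assumptions; in the study the kernel weights are learned within an MKL framework and performance is assessed with proper cross-validation.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 46                                     # 23 patients + 23 controls
y = np.array([0] * 23 + [1] * 23)
X_struct = rng.normal(size=(n, 50))        # stand-in structural MRI features
X_dti = rng.normal(size=(n, 30))           # stand-in diffusion (DTI) features

# One kernel per modality, then a weighted combination (weights fixed here)
K = 0.6 * rbf_kernel(X_struct) + 0.4 * rbf_kernel(X_dti)

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))                     # training accuracy on the toy data
```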

  6. Female preference for multi-modal courtship: multiple signals are important for male mating success in peacock spiders.

    Science.gov (United States)

    Girard, Madeline B; Elias, Damian O; Kasumovic, Michael M

    2015-12-07

    A long-standing goal for biologists has been to understand how female preferences operate in systems where males have evolved numerous sexually selected traits. Jumping spiders of the Maratus genus are exceptionally sexually dimorphic in appearance and signalling behaviour. Presumably, strong sexual selection by females has played an important role in the evolution of complex signals displayed by males of this group; however, this has not yet been demonstrated. In fact, despite apparent widespread examples of sexual selection in nature, empirical evidence is relatively sparse, especially for species employing multiple modalities for intersexual communication. In order to elucidate whether female preference can explain the evolution of multi-modal signalling traits, we ran a series of mating trials using Maratus volans. We used video recordings and laser vibrometry to characterize, quantify and examine which male courtship traits predict various metrics of mating success. We found evidence for strong sexual selection on males in this system, with success contingent upon a combination of visual and vibratory displays. Additionally, independently produced, yet correlated suites of multi-modal male signals are linked to other aspects of female peacock spider behaviour. Lastly, our data provide some support for both the redundant signal and multiple messages hypotheses for the evolution of multi-modal signalling. © 2015 The Author(s).

  7. Assessment of a multi-modal intervention for the prevention of catheter-associated urinary tract infections.

    Science.gov (United States)

    Ternavasio-de la Vega, H G; Barbosa Ventura, A; Castaño-Romero, F; Sauchelli, F D; Prolo Acosta, A; Rodríguez Alcázar, F J; Vicente Sánchez, A; Ruiz Antúnez, E; Marcos, M; Laso, J

    2016-10-01

    Catheter-associated urinary tract infections (CAUTIs) represent an important healthcare burden. To assess the effectiveness of an evidence-based multi-modal, multi-disciplinary intervention intended to improve outcomes by reducing the use of urinary catheters (UCs) and minimizing the incidence of CAUTIs in the internal medicine department of a university hospital. A multi-modal intervention was developed, including training sessions, urinary catheterization reminders, surveillance systems, and mechanisms for staff feedback of results. The frequency of UC use and incidence of CAUTIs were recorded in three-month periods before (P1) and during the intervention (P2). The catheterization rate decreased significantly during P2 [27.8% vs 16.9%; relative risk (RR): 0.61; 95% confidence interval (CI): 0.57-0.65]. We also observed a reduction in CAUTI risk (18.3 vs 9.8%; RR: 0.53; 95% CI: 0.30-0.93), a reduction in the CAUTI rate per 1000 patient-days [5.5 vs 2.8; incidence ratio (IR): 0.52; 95% CI: 0.28-0.94], and a non-significant decrease in the CAUTI rate per 1000 catheter-days (19.3 vs 16.9; IR: 0.85; 95% CI: 0.46-1.55). The multi-modal intervention was effective in reducing the catheterization rate and the frequency of CAUTIs. Copyright © 2016 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
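
    The relative risks and confidence intervals above are standard epidemiological quantities; the sketch below shows how an RR and an approximate 95% CI are computed from 2x2 counts (normal approximation on the log scale). The counts are hypothetical, chosen only to reproduce the 27.8% vs 16.9% rates, so the resulting CI will not match the one reported in the abstract.

```python
import math

def relative_risk(events_a, n_a, events_b, n_b):
    """RR of group B relative to group A, with an approximate 95% CI."""
    risk_a, risk_b = events_a / n_a, events_b / n_b
    rr = risk_b / risk_a
    se_log_rr = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, (lo, hi)

# e.g. catheterisation before (P1) vs during (P2) the intervention, hypothetical counts
print(relative_risk(events_a=278, n_a=1000, events_b=169, n_b=1000))
```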

  8. Automatic multi-modal intelligent seizure acquisition (MISA) system for detection of motor seizures from electromyographic data and motion data.

    Science.gov (United States)

    Conradsen, Isa; Beniczky, Sándor; Wolf, Peter; Kjaer, Troels W; Sams, Thomas; Sorensen, Helge B D

    2012-08-01

    The objective is to develop a non-invasive automatic method for detection of epileptic seizures with motor manifestations. Ten healthy subjects who simulated seizures and one patient participated in the study. Surface electromyography (sEMG) and motion sensor features were extracted as energy measures of reconstructed sub-bands from the discrete wavelet transformation (DWT) and the wavelet packet transformation (WPT). Based on the extracted features all data segments were classified using a support vector machine (SVM) algorithm as simulated seizure or normal activity. A case study of the seizure from the patient showed that the simulated seizures were visually similar to the epileptic one. The multi-modal intelligent seizure acquisition (MISA) system showed high sensitivity, short detection latency and low false detection rate. The results showed superiority of the multi-modal detection system compared to the uni-modal one. The presented system has a promising potential for seizure detection based on multi-modal data. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
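
    A simplified sketch of the feature-and-classifier pipeline described above: wavelet-decompose each data segment, use the log energy ("log-sum") of the sub-bands as features, and classify segments with an SVM. The wavelet name, decomposition level, segment length and synthetic signals are assumptions for illustration, not the authors' configuration.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def log_energy_features(segment, wavelet="db4", level=4):
    """Log energy of each wavelet sub-band of a 1D segment."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

rng = np.random.default_rng(0)
# Synthetic segments: high-amplitude oscillatory bursts vs low-amplitude noise
seizure = [np.sin(2 * np.pi * 60 * np.linspace(0, 1, 256)) * rng.uniform(2, 4)
           + 0.5 * rng.normal(size=256) for _ in range(40)]
normal = [0.3 * rng.normal(size=256) for _ in range(40)]

X = np.array([log_energy_features(s) for s in seizure + normal])
y = np.array([1] * 40 + [0] * 40)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))                 # training accuracy on the synthetic data
```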

  9. A connectivity-based test-retest dataset of multi-modal magnetic resonance imaging in young healthy adults.

    Science.gov (United States)

    Lin, Qixiang; Dai, Zhengjia; Xia, Mingrui; Han, Zaizhu; Huang, Ruiwang; Gong, Gaolang; Liu, Chao; Bi, Yanchao; He, Yong

    2015-01-01

    Recently, magnetic resonance imaging (MRI) has been widely used to investigate the structures and functions of the human brain in health and disease in vivo. However, there are growing concerns about the test-retest reliability of structural and functional measurements derived from MRI data. Here, we present a test-retest dataset of multi-modal MRI including structural MRI (S-MRI), diffusion MRI (D-MRI) and resting-state functional MRI (R-fMRI). Fifty-seven healthy young adults (age range: 19-30 years) were recruited and completed two multi-modal MRI scan sessions at an interval of approximately 6 weeks. Each scan session included R-fMRI, S-MRI and D-MRI data. Additionally, there were two separated R-fMRI scans at the beginning and at the end of the first session (approximately 20 min apart). This multi-modal MRI dataset not only provides excellent opportunities to investigate the short- and long-term test-retest reliability of the brain's structural and functional measurements at the regional, connectional and network levels, but also allows probing the test-retest reliability of structural-functional couplings in the human brain.

  10. Comparing uni-modal and multi-modal therapies for improving writing in acquired dysgraphia after stroke.

    Science.gov (United States)

    Thiel, Lindsey; Sage, Karen; Conroy, Paul

    2016-01-01

    Writing therapy studies have been predominantly uni-modal in nature; i.e., their central therapy task has typically been either writing to dictation or copying and recalling words. There has not yet been a study that has compared the effects of a uni-modal to a multi-modal writing therapy in terms of improvements to spelling accuracy. A multiple-case study with eight participants aimed to compare the effects of a uni-modal and a multi-modal therapy on the spelling accuracy of treated and untreated target words at immediate and follow-up assessment points. A cross-over design was used and within each therapy a matched set of words was targeted. These words and a matched control set were assessed before as well as immediately after each therapy and six weeks following therapy. The two approaches did not differ in their effects on spelling accuracy of treated or untreated items or degree of maintenance. All participants made significant improvements on treated and control items; however, not all improvements were maintained at follow-up. The findings suggested that multi-modal therapy did not have an advantage over uni-modal therapy for the participants in this study. Performance differences were instead driven by participant variables.

  11. Fatal pulmonary embolism following elective total knee replacement using aspirin in multi-modal prophylaxis - A 12-year study.

    Science.gov (United States)

    Quah, C; Bayley, E; Bhamber, N; Howard, P

    2017-06-13

    The National Institute for Health and Clinical Excellence (NICE) has issued guidelines on which thromboprophylaxis regimens are suitable following lower limb arthroplasty. Aspirin is not a recommended agent despite being accepted in orthopaedic guidelines elsewhere. We assessed the incidence of fatal pulmonary embolism (PE) and all-cause mortality following elective primary total knee replacement (TKR) with a standardised multi-modal prophylaxis regime in a large teaching district general hospital. We utilised a prospective audit database to identify those that had died within 42 and 90 days postoperatively. Data from April 2000 to 2012 were analysed for 42- and 90-day mortality rates. There were a total of 8277 elective primary TKRs performed over the 12-year period. The multi-modal prophylaxis regimen, used for all patients unless contraindicated, included 75 mg aspirin once daily for four weeks. Case note review ascertained the causes of death. Where a patient had been referred to the coroner, they were contacted for post mortem results. The mortality rates at 42 and 90 days were 0.36% and 0.46%. There was one fatal PE within 42 days of surgery (0.01%), in a patient who was taking enoxaparin because of aspirin intolerance. Two fatal PEs occurred at 48 and 57 days post-operatively (0.02%). The leading cause of death was myocardial infarction (0.13%). Fatal PE following elective TKR with a multi-modal prophylaxis regime is a very rare cause of mortality. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. The sweet spot: FDG and other 2-carbon glucose analogs for multi-modal metabolic imaging of tumor metabolism.

    Science.gov (United States)

    Cox, Benjamin L; Mackie, Thomas R; Eliceiri, Kevin W

    2015-01-01

    Multi-modal imaging approaches to tumor metabolism that provide improved specificity, physiological relevance and spatial resolution would improve the diagnosis of tumors and the evaluation of tumor progression. Currently, the molecular probe FDG, glucose fluorinated with (18)F at the 2-carbon, is the primary metabolic approach for clinical diagnostics with PET imaging. However, PET lacks the resolution necessary to yield intratumoral distributions of deoxyglucose at the cellular level. Multi-modal imaging could address this problem, but requires the development of new glucose analogs that are better suited for other imaging modalities. Several such analogs have been created and are reviewed here. Also reviewed are several multi-modal imaging studies that have been performed that attempt to shed light on the cellular distribution of glucose analogs within tumors. Some of these studies are performed in vitro, while others are performed in vivo, in an animal model. The results from these studies reveal a visualization gap between the in vitro and in vivo studies that, if solved, could enable the early detection of tumors, the high resolution monitoring of tumors during treatment, and greater accuracy in the assessment of different imaging agents.

  13. Imaging Neurodegeneration: Steps Toward Brain Network-Based Pathophysiology and Its Potential for Multi-modal Imaging Diagnostics.

    Science.gov (United States)

    Sorg, C; Göttler, J; Zimmer, C

    2015-10-01

    Multi-modal brain imaging provides different in vivo windows into the human brain and thereby different ways to characterize brain disorders. Particularly, resting-state functional magnetic resonance imaging facilitates the study of macroscopic intrinsic brain networks, which are critical for development and spread of neurodegenerative processes in different neurodegenerative diseases. The aim of the current study is to present and highlight some paradigmatic findings in intrinsic network-based pathophysiology of neurodegenerative diseases and its potential for new network-based multimodal tools in imaging diagnostics. Qualitative review of selected multi-modal imaging studies in neurodegenerative diseases particularly in Alzheimer's disease (AD). Functional connectivity of intrinsic brain networks is selectively and progressively impaired in AD, with changes likely starting before the onset of symptoms in fronto-parietal key networks such as default mode or attention networks. Patterns of distribution and development of both amyloid-β plaques and atrophy are linked with network connectivity changes, suggesting that start and spread of pathology interacts with network connectivity. Qualitatively similar findings have been observed in other neurodegenerative disorders, suggesting shared mechanisms of network-based pathophysiology across diseases. Spread of neurodegeneration is intimately linked with the functional connectivity of intrinsic brain networks. These pathophysiological insights pave the way for new multi-modal network-based tools to detect and characterize neurodegeneration in individual patients.

  14. Obstacle traversal and self-righting of bio-inspired robots reveal the physics of multi-modal locomotion

    Science.gov (United States)

    Li, Chen; Fearing, Ronald; Full, Robert

    Most animals move in nature in a variety of locomotor modes. For example, to traverse obstacles like dense vegetation, cockroaches can climb over, push across, reorient their bodies to maneuver through slits, or even transition among these modes forming diverse locomotor pathways; if flipped over, they can also self-right using wings or legs to generate body pitch or roll. By contrast, most locomotion studies have focused on a single mode such as running, walking, or jumping, and robots are still far from capable of life-like, robust, multi-modal locomotion in the real world. Here, we present two recent studies using bio-inspired robots, together with new locomotion energy landscapes derived from locomotor-environment interaction physics, to begin to understand the physics of multi-modal locomotion. (1) Our experiment with a cockroach-inspired legged robot traversing grass-like beam obstacles reveals that, with a terradynamically "streamlined" rounded body like that of the insect, robot traversal becomes more probable by accessing locomotor pathways that overcome lower potential energy barriers. (2) Our experiment with a cockroach-inspired self-righting robot further suggests that body vibrations are crucial for exploring locomotion energy landscapes and reaching lower barrier pathways. Finally, we posit that our new framework of locomotion energy landscapes holds promise to better understand and predict multi-modal biological and robotic movement.

  15. The effectiveness of multi modal representation text books to improve student's scientific literacy of senior high school students

    Science.gov (United States)

    Zakiya, Hanifah; Sinaga, Parlindungan; Hamidah, Ida

    2017-05-01

    The results of field studies showed that students' scientific literacy was still low. One root of the problem lies in the textbooks used in learning, which are not oriented toward the components of scientific literacy. This study focused on the effectiveness of textbooks designed to build scientific literacy by using multi-modal representation. The textbook development method used the Design Representational Approach Learning to Write (DRALW). The textbook design, applied to the topic of "Kinetic Theory of Gases", was implemented with grade XI senior high school students. Effectiveness was determined from the effect size and the normalized gain, while the hypothesis was tested using an independent t-test. The results showed that textbooks developed using multi-modal representation can improve students' scientific literacy skills. Based on the effect size, the textbooks developed with multi-modal representation were found to be effective in improving students' scientific literacy; the improvement occurred across all competences and knowledge domains of scientific literacy. The hypothesis test showed a significant difference in scientific literacy between the class that used the textbook with multi-modal representation and the class that used the regular textbook used in schools.

  16. Embedded security system for multi-modal surveillance in a railway carriage

    Science.gov (United States)

    Zouaoui, Rhalem; Audigier, Romaric; Ambellouis, Sébastien; Capman, François; Benhadda, Hamid; Joudrier, Stéphanie; Sodoyer, David; Lamarque, Thierry

    2015-10-01

    Public transport security is one of the main priorities of the public authorities when fighting against crime and terrorism. In this context, there is a great demand for autonomous systems able to detect abnormal events such as violent acts aboard passenger cars and intrusions when the train is parked at the depot. To this end, we present an innovative approach which aims at providing efficient automatic event detection by fusing video and audio analytics and reducing the false alarm rate compared to classical stand-alone video detection. The multi-modal system is composed of two microphones and one camera and integrates onboard video and audio analytics and fusion capabilities. On the one hand, for detecting intrusion, the system relies on the fusion of "unusual" audio events detection with intrusion detections from video processing. The audio analysis consists in modeling the normal ambience and detecting deviation from the trained models during testing. This unsupervised approach is based on clustering of automatically extracted segments of acoustic features and statistical Gaussian Mixture Model (GMM) modeling of each cluster. The intrusion detection is based on the three-dimensional (3D) detection and tracking of individuals in the videos. On the other hand, for violent events detection, the system fuses unsupervised and supervised audio algorithms with video event detection. The supervised audio technique detects specific events such as shouts. A GMM is used to catch the formant structure of a shout signal. Video analytics use an original approach for detecting aggressive motion by focusing on erratic motion patterns specific to violent events. As data with violent events is not easily available, a normality model with structured motions from non-violent videos is learned for one-class classification. A fusion algorithm based on Dempster-Shafer's theory analyses the asynchronous detection outputs and computes the degree of belief of each probable event.
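    For readers unfamiliar with the unsupervised ambience-modelling idea mentioned above, the sketch below fits a Gaussian Mixture Model to features of "normal" ambience and flags test segments with unusually low likelihood. The feature dimensionality, component count and threshold are illustrative assumptions, not the settings of the system described in the paper.

```python
# Minimal sketch of unsupervised ambience modelling: fit a GMM on acoustic
# features of normal ambience, then flag test segments whose log-likelihood
# falls below a threshold. Features and threshold are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
normal_feats = rng.normal(0.0, 1.0, size=(500, 13))            # stand-in for MFCC-like features
test_feats = np.vstack([rng.normal(0.0, 1.0, size=(50, 13)),   # normal test segments
                        rng.normal(4.0, 1.0, size=(5, 13))])   # "unusual" audio segments

ambience = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
ambience.fit(normal_feats)

loglik = ambience.score_samples(test_feats)
threshold = np.percentile(ambience.score_samples(normal_feats), 1)  # 1st percentile of normal data
unusual = loglik < threshold
print(f"{unusual.sum()} segments flagged as unusual")
```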

  17. Multi-modal MRI classifiers identify excessive alcohol consumption and treatment effects in the brain.

    Science.gov (United States)

    Cosa, Alejandro; Moreno, Andrea; Pacheco-Torres, Jesús; Ciccocioppo, Roberto; Hyytiä, Petri; Sommer, Wolfgang H; Moratal, David; Canals, Santiago

    2017-09-01

    Robust neuroimaging markers of neuropsychiatric disorders have proven difficult to obtain. In alcohol use disorders, profound brain structural deficits can be found in severe alcoholic patients, but the heterogeneity of unimodal MRI measurements has so far precluded the identification of selective biomarkers, especially for early diagnosis. In the present work we used a combination of multiple MRI modalities to provide comprehensive and insightful descriptions of brain tissue microstructure. We performed a longitudinal experiment using Marchigian-Sardinian (msP) rats, an established model of chronic excessive alcohol consumption, and acquired multi-modal images before and after 1 month of alcohol consumption (6.8 ± 1.4 g/kg/day, mean ± SD), as well as after 1 week of abstinence with or without concomitant treatment with the antirelapse opioid antagonist naltrexone (2.5 mg/kg/day). We found remarkable sensitivity and selectivity to accurately classify brains affected by alcohol even after the relatively short exposure period. One month of drinking was enough to imprint a highly specific signature of alcohol consumption. Brain alterations were regionally specific, affected both gray and white matter, and persisted into the early abstinence state without any detectable recovery. Interestingly, naltrexone treatment during early abstinence resulted in subtle brain changes that could be distinguished from non-treated abstinent brains, suggesting the existence of an intermediate state associated with brain recovery from alcohol exposure induced by medication. The presented framework is a promising tool for the development of biomarkers for clinical diagnosis of alcohol use disorders, with the capacity to further inform about disease progression and response to treatment. © 2016 Society for the Study of Addiction.
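    The abstract does not spell out the classification machinery, so the following is only a hedged sketch of the general recipe: concatenate per-modality regional features and cross-validate a linear classifier. The variable names, feature counts and the choice of a linear SVM are assumptions, not the study's pipeline.

```python
# Hedged sketch of multi-modal MRI classification: concatenate per-modality
# regional features and cross-validate a linear classifier on synthetic data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_rats = 40
y = np.array([0] * 20 + [1] * 20)                            # 0 = control, 1 = alcohol-exposed
t2_features = rng.standard_normal((n_rats, 30)) + 0.8 * y[:, None]    # e.g. regional T2 values
dti_features = rng.standard_normal((n_rats, 30)) + 0.8 * y[:, None]   # e.g. regional FA values
mt_features = rng.standard_normal((n_rats, 30)) + 0.8 * y[:, None]    # e.g. magnetisation transfer
X = np.hstack([t2_features, dti_features, mt_features])      # simple feature-level fusion

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```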

  18. Multi-modal classification of neurodegenerative disease by progressive graph-based transductive learning.

    Science.gov (United States)

    Wang, Zhengxia; Zhu, Xiaofeng; Adeli, Ehsan; Zhu, Yingying; Nie, Feiping; Munsell, Brent; Wu, Guorong

    2017-07-01

    Graph-based transductive learning (GTL) is a powerful machine learning technique that is used when sufficient training data is not available. In particular, conventional GTL approaches first construct a fixed inter-subject relation graph that is based on similarities in voxel intensity values in the feature domain, which can then be used to propagate the known phenotype data (i.e., clinical scores and labels) from the training data to the testing data in the label domain. However, this type of graph is exclusively learned in the feature domain, and primarily due to outliers in the observed features, may not be optimal for label propagation in the label domain. To address this limitation, a progressive GTL (pGTL) method is proposed that gradually finds an intrinsic data representation that more accurately aligns imaging features with the phenotype data. In general, optimal feature-to-phenotype alignment is achieved using an iterative approach that: (1) refines inter-subject relationships observed in the feature domain by using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined inter-subject relationships, and (3) verifies the intrinsic data representation on the training data to guarantee an optimal classification when applied to testing data. Additionally, the iterative approach is extended to multi-modal imaging data to further improve pGTL classification accuracy. Using Alzheimer's disease and Parkinson's disease study data, the classification accuracy of the proposed pGTL method is compared to several state-of-the-art classification methods, and the results show pGTL can more accurately identify subjects, even at different progression stages, in these two study data sets. Copyright © 2017 Elsevier B.V. All rights reserved.
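    The conventional GTL baseline that pGTL improves upon can be sketched as a fixed similarity graph over all subjects (labelled and unlabelled) followed by iterative label propagation, as below. The kernel width, propagation parameters and synthetic data are assumptions; pGTL's additional refinement of the graph using the label-domain representation is not shown.

```python
# Sketch of the conventional graph-based transductive learning step:
# a fixed feature-domain similarity graph plus iterative label propagation.
import numpy as np

def propagate_labels(X, y, n_labeled, sigma=1.0, alpha=0.99, n_iter=200):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.exp(-d2 / (2 * sigma ** 2))                    # feature-domain affinity graph
    np.fill_diagonal(W, 0.0)
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(1))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]     # symmetric normalisation
    F = np.zeros((len(X), y.max() + 1))
    F[np.arange(n_labeled), y[:n_labeled]] = 1.0          # one-hot labels for training subjects
    Y0 = F.copy()
    for _ in range(n_iter):                               # propagate labels through the graph
        F = alpha * S @ F + (1 - alpha) * Y0
    return F.argmax(1)

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (5, 4)), rng.normal(3, 1, (5, 4)),     # 10 labelled subjects
               rng.normal(0, 1, (15, 4)), rng.normal(3, 1, (15, 4))])  # 30 test subjects
y = np.array([0] * 5 + [1] * 5 + [0] * 15 + [1] * 15)
pred = propagate_labels(X, y, n_labeled=10)
print("test accuracy:", (pred[10:] == y[10:]).mean())
```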

  19. A Prototype Multi-Modality Picture Archive And Communication System At Victoria General Hospital

    Science.gov (United States)

    Nosil, J.; Justice, G.; Fisher, P.; Ritchie, G.; Weigl, W. J.; Gnoyke, H.

    1988-06-01

    The Medical Imaging Department at Victoria General Hospital is the first in Canada to implement an integrated multi-modality picture archive and communication system for clinical use. The aim of this paper is to present the current status of the picture archive and communication system components and to describe its function. This system was installed in April of 1987, and upgraded in November of 1987. A picture archive and communication system includes image sources, an image management system, and image display and reporting facilities. The installed image sources (digital radiography, digital fluoroscopy, computed tomography, and digital subtraction angiography) provide digital data for the image management system. The image management system provides facilities for receiving, storing, retrieving, and transmitting images using conventional computers and networks. There are two display stations, a viewing console and an image processing workstation, which provide various image display and manipulation functions. In parallel with the implementation of the picture archive and communication system, clinical, physical, and economic evaluations are being pursued. An initial examination of digital image transfer rates indicates that users will experience image availability times similar to those of conventional film imaging. Clinical experience to date with the picture archive and communication system has been limited to that required to evaluate digital imaging as a diagnostic tool, using digital radiography and digital fluoroscopy studies. Computed tomography and digital subtraction angiography have only recently been connected to the picture archive and communication system. Clinical experience with these modalities is limited to several cases, but image fidelity appears to be well above clinically acceptable levels.

  20. Multi-Modal, Multi-Touch Interaction with Maps in Disaster Management Applications

    Directory of Open Access Journals (Sweden)

    V. Paelke

    2012-07-01

    Full Text Available Multi-touch interaction has become popular in recent years and impressive advances in technology have been demonstrated, with digital maps as a common presentation scenario. However, most existing systems are essentially technology demonstrators and have not been designed with real applications in mind. A critical factor in the management of disaster situations is access to current and reliable data. New sensors and data acquisition platforms (e.g. satellites, UAVs, mobile sensor networks) have improved the supply of spatial data tremendously. However, in many cases this data is not well integrated into current crisis management systems and the capabilities to analyze and use it lag behind sensor capabilities. Therefore, it is essential to develop techniques that allow the effective organization, use and management of heterogeneous data from a wide variety of data sources. Standard user interfaces are not well suited to provide this information to crisis managers. Especially in dynamic situations, conventional cartographic displays and mouse-based interaction techniques fail to address the need to review a situation rapidly and act on it as a team. The development of novel interaction techniques like multi-touch and tangible interaction in combination with large displays provides a promising base technology to provide crisis managers with an adequate overview of the situation and to share relevant information with other stakeholders in a collaborative setting. However, design expertise on the use of such techniques in interfaces for real-world applications is still very sparse. In this paper we report on interdisciplinary research with a user- and application-centric focus to establish real-world requirements, to design new multi-modal mapping interfaces, and to validate them in disaster management applications. Initial results show that tangible and pen-based interaction are well suited to provide an intuitive and visible way to …

  1. A multi-modal prostate segmentation scheme by combining spectral clustering and active shape models

    Science.gov (United States)

    Toth, Robert; Tiwari, Pallavi; Rosen, Mark; Kalyanpur, Arjun; Pungavkar, Sona; Madabhushi, Anant

    2008-03-01

    Segmentation of the prostate boundary on clinical images is useful in a large number of applications including calculating prostate volume during biopsy, tumor estimation, and treatment planning. Manual segmentation of the prostate boundary is, however, time consuming and subject to inter- and intra-reader variability. Magnetic Resonance (MR) imaging (MRI) and MR Spectroscopy (MRS) have recently emerged as promising modalities for detection of prostate cancer in vivo. In this paper we present a novel scheme for accurate and automated prostate segmentation on in vivo 1.5 Tesla multi-modal MRI studies. The segmentation algorithm comprises two steps: (1) A hierarchical unsupervised spectral clustering scheme using MRS data to isolate the region of interest (ROI) corresponding to the prostate, and (2) an Active Shape Model (ASM) segmentation scheme where the ASM is initialized within the ROI obtained in the previous step. The hierarchical MRS clustering scheme in step 1 identifies spectra corresponding to locations within the prostate in an iterative fashion by discriminating between potential prostate and non-prostate spectra in a lower dimensional embedding space. The spatial locations of the prostate spectra so identified are used as the initial ROI for the ASM. The ASM is trained by identifying user-selected landmarks on the prostate boundary on T2 MRI images. Boundary points on the prostate are identified using mutual information (MI) as opposed to the traditional Mahalanobis distance, and the trained ASM is deformed to fit the boundary points so identified. Cross validation on 150 prostate MRI slices yields an average segmentation sensitivity, specificity, overlap, and positive predictive value of 89, 86, 83, and 93% respectively. We demonstrate that the accurate initialization of the ASM via the spectral clustering scheme is necessary for automated boundary extraction. Our method is fully automated, robust to system parameters, and computationally efficient.
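    The clustering step of the scheme above can be illustrated, in a much simplified form, by clustering synthetic low-dimensional "spectra" into two groups and keeping one as the candidate ROI. The data, cluster count, kernel width and the rule for picking the ROI cluster are assumptions; the paper uses a hierarchical scheme on real MRS data in a learned embedding space.

```python
# Simplified illustration of unsupervised spectral clustering to isolate an ROI:
# cluster synthetic embedded spectra into two groups and keep the smaller one.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(4)
background = rng.normal(0.0, 1.0, size=(80, 8))      # stand-in embedded background spectra
prostate_like = rng.normal(2.0, 1.0, size=(20, 8))   # stand-in embedded prostate spectra
spectra = np.vstack([background, prostate_like])

labels = SpectralClustering(n_clusters=2, affinity="rbf", gamma=0.1,
                            random_state=0).fit_predict(spectra)
# Keep the smaller cluster as the candidate ROI (here, the prostate-like group).
roi_label = np.argmin(np.bincount(labels))
roi_indices = np.flatnonzero(labels == roi_label)
print(len(roi_indices), "voxels retained as the initial ROI")
```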

  2. Use of different exposure metrics for understanding multi-modal travel injury risk

    Directory of Open Access Journals (Sweden)

    S. Ilgin Guler

    2016-08-01

    Full Text Available The objective of this work is to identify characteristics of different metrics of exposure for quantifying multi-modal travel injury risk. First, a discussion on the use of time-based and trip-based metrics for road user exposure to injury risk, considering multiple travel modes, is presented. The main difference between a time-based and a trip-based metric is argued to be that a time-based metric reflects the actual duration of time spent on the road exposed to the travel risks. This can be shown to be important when considering multiple modes, since different modes typically have different speeds and average travel distances. Next, the use of the total number of trips, total time traveled, and mode share (time-based or trip-based) is considered to compare the injury risk of a given mode at different locations. It is argued that using mode share allows the safety concept, which focuses on absolute numbers, to be generalized. Quantitative results are also obtained by combining travel survey data with police collision reports for ten counties in California. The data are aggregated for five modes: (i) cars, (ii) SUVs, (iii) transit riders, (iv) bicyclists, and (v) pedestrians. These aggregated data are used to compare the travel risk of different modes with time-based or trip-based exposure metrics. These quantitative results confirm the initial qualitative discussions. As the penetration of mobile probes for transportation data collection increases, the insights of this study can provide guidance on how to best utilize the added value of such data to better quantify travel injury risk and improve safety.
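    A small worked example with entirely made-up numbers illustrates the two exposure metrics discussed above: the same injury counts give quite different pictures of relative risk depending on whether trips or time on the road is used as the denominator, because modes differ in speed and trip duration.

```python
# Worked example (made-up numbers): time-based vs trip-based injury risk.
injuries = {"car": 500, "bicycle": 120}                # hypothetical annual injuries
total_trips = {"car": 4_000_000, "bicycle": 300_000}   # hypothetical annual trips
total_hours = {"car": 1_200_000, "bicycle": 150_000}   # hypothetical hours travelled

for mode in injuries:
    per_million_trips = injuries[mode] / total_trips[mode] * 1e6
    per_million_hours = injuries[mode] / total_hours[mode] * 1e6
    print(f"{mode}: {per_million_trips:.0f} injuries per million trips, "
          f"{per_million_hours:.0f} per million hours travelled")
# The bicycle-to-car risk ratio differs between the two metrics (about 3.2x by
# trips vs about 1.9x by hours), which is the point made in the abstract.
```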

  3. Multi-Source Learning for Joint Analysis of Incomplete Multi-Modality Neuroimaging Data.

    Science.gov (United States)

    Yuan, Lei; Wang, Yalin; Thompson, Paul M; Narayan, Vaibhav A; Ye, Jieping

    2012-01-01

    Incomplete data present serious problems when integrating large-scale brain imaging data sets from different imaging modalities. In the Alzheimer's Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. We address this problem by proposing two novel learning methods where all the samples (with at least one available data source) can be used. In the first method, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. Our second method learns a base classifier for each data source independently, based on which we represent each source using a single column of prediction scores; we then estimate the missing prediction scores, which, combined with the existing prediction scores, are used to build a multi-source fusion model. To illustrate the proposed approaches, we classify patients from the ADNI study into groups with Alzheimer's disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI's 780 participants (172 AD, 397 MCI, 211 Normal) have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithms. Comprehensive experiments show that our proposed methods yield stable and promising results.
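    The second method described above (per-source base classifiers whose prediction scores are assembled, imputed and fused) can be sketched roughly as follows. The choice of logistic regression, mean imputation and the synthetic availability pattern are assumptions; the original work estimates the missing scores in a more sophisticated way.

```python
# Rough sketch of score-level fusion with incomplete modalities: one base
# classifier per source, a subjects-by-sources score matrix with missing
# entries imputed, then a fusion model on the filled scores.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.impute import SimpleImputer

rng = np.random.default_rng(5)
n = 120
y = (rng.random(n) > 0.5).astype(int)
sources = {"MRI": rng.standard_normal((n, 20)) + y[:, None],
           "FDG-PET": rng.standard_normal((n, 15)) + y[:, None],
           "CSF": rng.standard_normal((n, 5)) + y[:, None]}
available = {name: rng.random(n) > 0.3 for name in sources}    # ~70% of subjects have each source

score_matrix = np.full((n, len(sources)), np.nan)
for j, (name, X) in enumerate(sources.items()):
    mask = available[name]
    base = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
    score_matrix[mask, j] = base.predict_proba(X[mask])[:, 1]  # one score column per source

scores_filled = SimpleImputer(strategy="mean").fit_transform(score_matrix)  # impute missing scores
fusion = LogisticRegression(max_iter=1000).fit(scores_filled, y)
print("fusion training accuracy:", fusion.score(scores_filled, y))  # in practice, cross-validate
```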

  4. A custom multi-modal sensor suite and data analysis pipeline for aerial field phenotyping

    Science.gov (United States)

    Bartlett, Paul W.; Coblenz, Lauren; Sherwin, Gary; Stambler, Adam; van der Meer, Andries

    2017-05-01

    Our group has developed a custom, multi-modal sensor suite and data analysis pipeline to phenotype crops in the field using unpiloted aircraft systems (UAS). This approach to high-throughput field phenotyping is part of a research initiative intending to markedly accelerate the breeding process for refined energy sorghum varieties. To date, single rotor and multirotor helicopters, roughly 14 kg in total weight, are being employed to provide sensor coverage over multiple hectare-sized fields in tens of minutes. The quick, autonomous operations allow for complete field coverage at consistent plant and lighting conditions, with low operating costs. The sensor suite collects data simultaneously from six sensors and registers it for fusion and analysis. High resolution color imagery targets color and geometric phenotypes, along with lidar measurements. Long-wave infrared imagery targets temperature phenomena and plant stress. Hyperspectral visible and near-infrared imagery targets phenotypes such as biomass and chlorophyll content, as well as novel, predictive spectral signatures. Onboard spectrometers and careful laboratory and in-field calibration techniques aim to increase the physical validity of the sensor data throughout and across growing seasons. Off-line processing of data creates basic products such as image maps and digital elevation models. Derived data products include phenotype charts, statistics, and trends. The outcome of this work is a set of commercially available phenotyping technologies, including sensor suites, a fully integrated phenotyping UAS, and data analysis software. Effort is also underway to transition these technologies to farm management users by way of streamlined, lower cost sensor packages and intuitive software interfaces.

  5. Virtual reality testing of multi-modal integration in schizophrenic patients.

    Science.gov (United States)

    Sorkin, Anna; Peled, Avi; Weinshall, Daphna

    2005-01-01

    Our goal is to develop a new family of automatic tools for the diagnosis of schizophrenia, using Virtual Reality Technology (VRT). VRT is specifically suitable for this purpose, because it allows for multi-modal stimulation in a complex setup, and the simultaneous measurement of multiple parameters. In this work we studied sensory integration within working memory, in a navigation task through a VR maze. Along the way subjects pass through multiple rooms that include three doors each, only one of which can be used to legally exit the room. Specifically, each door is characterized by three features (color, shape and sound), and only one combination of features -- as determined by a transient opening rule -- is legal. The opening rule changes over time. Subjects must learn the rule and use it for successful navigation throughout the maze. 39 schizophrenic patients and 21 healthy controls participated in this study. Upon completion, each subject was assigned a performance profile, including various error scores, response time, navigation ability and strategy. We developed a classification procedure based on the subjects' performance profile, which correctly predicted 85% of the schizophrenic patients (and all the controls). We observed that a number of parameters showed significant correlation with standard diagnosis scores (PANSS), suggesting the potential use of our measurements for future diagnosis of schizophrenia. On the other hand, our patients did not show unusual repetition of response despite stimulus cessation (called perseveration in classical studies of schizophrenia), which is usually considered a robust marker of the disease. Interestingly, this deficit only appeared in our study when subjects did not receive proper explanation of the task.

  6. Multi-modal distraction. Using technology to combat pain in young children with burn injuries.

    Science.gov (United States)

    Miller, Kate; Rodger, Sylvia; Bucolo, Sam; Greer, Ristan; Kimble, Roy M

    2010-08-01

    The use of non-pharmacological pain management remains ad hoc within acute paediatric burns pain management protocols despite ongoing acknowledgement of its role. Advancements in adult-based pain services, including the integration of virtual reality, have been adapted to meet the needs of children in pain, as exemplified by the development of multi-modal distraction (MMD). This easy-to-use, hand-held interactive device uses customized programs designed to inform the child about the procedure he/she is about to experience and to distract the child during dressing changes. The aims were: (1) to investigate whether MMD procedural preparation (MMD-PP) or distraction (MMD-D) has a greater impact on child pain reduction compared to standard distraction (SD) or hand-held video game distraction (VG), (2) to understand the impact of MMD-PP and MMD-D on clinic efficiency by measuring length of treatment across groups, and lastly, (3) to assess the efficacy of distraction techniques over three dressing change procedures. A prospective randomised controlled trial was completed in a paediatric tertiary hospital Burns Outpatient Clinic. Eighty participants were recruited and studied over their first three dressing changes. Pain was assessed using validated child report, caregiver report, nursing observation and physiological measures. MMD-D and MMD-PP were both shown to significantly relieve reported pain, and the effects of both were sustained with subsequent dressing changes. The use of MMD as a preparatory or a distraction tool in an outpatient burns clinic offered children superior pain reduction across three dressing changes when compared to standard practices or hand-held video games. This device has the potential to improve clinic efficiency through reductions in treatment lengths.

  7. A novel technique to incorporate structural prior information into multi-modal tomographic reconstruction

    Science.gov (United States)

    Kazantsev, Daniil; Ourselin, Sébastien; Hutton, Brian F.; Dobson, Katherine J.; Kaestner, Anders P.; Lionheart, William R. B.; Withers, Philip J.; Lee, Peter D.; Arridge, Simon R.

    2014-06-01

    There has been a rapid expansion of multi-modal imaging techniques in tomography. In biomedical imaging, patients are now regularly imaged using both single photon emission computed tomography (SPECT) and x-ray computed tomography (CT), or using both positron emission tomography and magnetic resonance imaging (MRI). In non-destructive testing of materials both neutron CT (NCT) and x-ray CT are widely applied to investigate the inner structure of material or track the dynamics of physical processes. The potential benefits from combining modalities have led to increased interest in iterative reconstruction algorithms that can utilize the data from more than one imaging mode simultaneously. We present a new regularization term in iterative reconstruction that enables information from one imaging modality to be used as a structural prior to improve resolution of the second modality. The regularization term is based on a modified anisotropic tensor diffusion filter, that has shape-adapted smoothing properties. By considering the underlying orientations of normal and tangential vector fields for two co-registered images, the diffusion flux is rotated and scaled adaptively to image features. The images can have different greyscale values and different spatial resolutions. The proposed approach is particularly good at isolating oriented features in images which are important for medical and materials science applications. By enhancing the edges it enables both easy identification and volume fraction measurements aiding segmentation algorithms used for quantification. The approach is tested on a standard denoising and deblurring image recovery problem, and then applied to 2D and 3D reconstruction problems; thereby highlighting the capabilities of the algorithm. Using synthetic data from SPECT co-registered with MRI, and real NCT data co-registered with x-ray CT, we show how the method can be used across a range of imaging modalities.
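    The general idea of a structural prior, i.e. smoothing one modality while respecting edges found in a co-registered second modality, can be sketched with a plain edge-weighted diffusion as below. This is deliberately simpler than the modified anisotropic tensor diffusion filter proposed in the paper: the weights here only attenuate smoothing across prior edges rather than rotating and scaling the diffusion flux.

```python
# Simplified sketch of prior-guided smoothing: diffuse the target image only
# where the co-registered reference (prior) image is flat.
import numpy as np

def prior_guided_smooth(target, reference, n_iter=50, kappa=0.1, step=0.2):
    u = target.astype(float).copy()
    gy, gx = np.gradient(reference.astype(float))
    edge_weight = np.exp(-(gx**2 + gy**2) / kappa**2)     # ~0 at reference edges, ~1 elsewhere
    for _ in range(n_iter):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += step * edge_weight * lap                      # diffuse only where the prior is flat
    return u

rng = np.random.default_rng(6)
reference = np.zeros((64, 64)); reference[:, 32:] = 1.0    # sharp prior edge
target = reference * 0.5 + rng.normal(0, 0.2, (64, 64))    # noisy second modality
smoothed = prior_guided_smooth(target, reference)
# Noise drops in flat regions while the edge shared with the prior is preserved.
print("flat-region std before/after:",
      round(target[:, 8:28].std(), 3), round(smoothed[:, 8:28].std(), 3))
```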

  8. Design and Implementation of a Multi-Modal Biometric System for Company Access Control

    Directory of Open Access Journals (Sweden)

    Elisabetta Stefani

    2017-05-01

    Full Text Available This paper is about the design, implementation, and deployment of a multi-modal biometric system to grant access to a company structure and to internal zones in the company itself. Face and iris have been chosen as biometric traits. Face is feasible for non-intrusive checking with minimal cooperation from the subject, while iris supports a very accurate recognition procedure at a higher degree of invasiveness. The recognition of the face trait is based on Local Binary Patterns histograms, and Daugman's method is implemented for the analysis of the iris data. The recognition process may require either the acquisition of the user's face only or the serial acquisition of both the user's face and iris, depending on the confidence level of the decision with respect to the set of security levels and requirements, stated formally in the Service Level Agreement at a negotiation phase. The quality of the decision depends on the setting of appropriate, distinct thresholds in the decision modules for the two biometric traits. Any time the quality of the decision is not good enough, the system activates proper rules, which ask for new acquisitions (and decisions), possibly with different threshold values, resulting in a system whose behaviour is not fixed and predefined, but which complies with the actual acquisition context. Rules are formalized as deduction rules and grouped together to represent "response behaviors" according to the previous analysis. Therefore, there are different possible working flows, since the actual response of the recognition process depends on the output of the decision-making modules that compose the system. Finally, the deployment phase is described, together with the results from the testing, based on the AT&T Face Database and the UBIRIS database.
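    The face branch described above relies on Local Binary Patterns histograms; a minimal sketch of that representation and a histogram comparison is shown below. The LBP parameters, the chi-square distance and the acceptance threshold are illustrative assumptions, not the deployed system's settings.

```python
# Sketch of an LBP-histogram face representation: compute uniform LBP codes,
# histogram them per image, and compare histograms with a chi-square distance.
import numpy as np
from skimage.feature import local_binary_pattern

P, R = 8, 1                                    # neighbourhood size and radius (assumed)

def lbp_histogram(gray_image):
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def chi_square(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

rng = np.random.default_rng(7)
enrolled = (rng.random((112, 92)) * 255).astype(np.uint8)      # stand-in for a face crop
probe = np.clip(enrolled.astype(int) + rng.integers(-5, 6, enrolled.shape), 0, 255).astype(np.uint8)
distance = chi_square(lbp_histogram(enrolled), lbp_histogram(probe))
print("accept" if distance < 0.05 else "reject", f"(chi-square = {distance:.4f})")
```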

  9. Open-source software for generating electrocardiogram signals

    CERN Document Server

    McSharry, Patrick E.; Clifford, Gari D.

    2004-01-01

    ECGSYN, a dynamical model that faithfully reproduces the main features of the human electrocardiogram (ECG), including heart rate variability, RR intervals and QT intervals, is presented. Details of the underlying algorithm and an open-source software implementation in Matlab, C and Java are described. An example of how this model will facilitate comparisons of signal processing techniques is provided.
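    For orientation, the kind of dynamical model ECGSYN is built on couples a circular limit cycle with Gaussian event terms that shape the P, Q, R, S and T waves on the voltage coordinate. The sketch below integrates such a system with a constant heart rate and simplified parameters; it omits the RR-interval and heart-rate-variability machinery that the full model and the released Matlab/C/Java implementations provide.

```python
# Hedged sketch of an ECGSYN-style three-ODE model: a limit cycle whose angular
# position triggers Gaussian P-QRS-T events on the z (voltage) coordinate.
# Constant heart rate and simplified parameters -- see the paper for the full model.
import numpy as np
from scipy.integrate import solve_ivp

theta_i = np.array([-np.pi/3, -np.pi/12, 0.0, np.pi/12, np.pi/2])  # P, Q, R, S, T angles
a_i = np.array([1.2, -5.0, 30.0, -7.5, 0.75])                      # event amplitudes
b_i = np.array([0.25, 0.1, 0.1, 0.1, 0.4])                         # event widths
omega = 2 * np.pi * 1.0                                            # constant 60 bpm

def ecg_ode(t, state):
    x, y, z = state
    alpha = 1.0 - np.hypot(x, y)                 # attracts (x, y) onto the unit circle
    theta = np.arctan2(y, x)
    dtheta = np.mod(theta - theta_i + np.pi, 2 * np.pi) - np.pi
    dz = -np.sum(a_i * dtheta * np.exp(-dtheta**2 / (2 * b_i**2))) - z
    return [alpha * x - omega * y, alpha * y + omega * x, dz]

t_eval = np.linspace(0, 5, 5 * 256)              # 5 s at 256 Hz
sol = solve_ivp(ecg_ode, (0, 5), [1.0, 0.0, 0.0], t_eval=t_eval, max_step=1e-3)
ecg = sol.y[2]                                   # synthetic ECG trace (arbitrary units)
print("samples:", ecg.size, "peak amplitude:", ecg.max().round(3))
```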

  10. Open Source Projects in Software Engineering Education: A Mapping Study

    Science.gov (United States)

    Nascimento, Debora M. C.; Almeida Bittencourt, Roberto; Chavez, Christina

    2015-01-01

    Context: It is common practice in academia to have students work with "toy" projects in software engineering (SE) courses. One way to make such courses more realistic and reduce the gap between academic courses and industry needs is getting students involved in open source projects (OSP) with faculty supervision. Objective: This study…

  11. The Case for Open Source Software in Digital Forensics

    Science.gov (United States)

    Zanero, Stefano; Huebner, Ewa

    In this introductory chapter we discuss the importance of the use of open source software (OSS), and in particular of free software (FLOSS) in computer forensics investigations including the identification, capture, preservation and analysis of digital evidence; we also discuss the importance of OSS in computer forensics

  12. The Economics of Open Source Software Development: An Introduction

    DEFF Research Database (Denmark)

    2006-01-01

    This chapter introduces the fundamentals of Open Source Software, its nature, the central economic aspects and the key mechanisms of its development. Furthermore, we present the themes of the book and provide a first overview for the reader by giving short summaries of its chapters....

  13. Digital Preservation in Open-Source Digital Library Software

    Science.gov (United States)

    Madalli, Devika P.; Barve, Sunita; Amin, Saiful

    2012-01-01

    Digital archives and digital library projects are being initiated all over the world for materials of different formats and domains. To organize, store, and retrieve digital content, many libraries as well as archiving centers are using either proprietary or open-source software. While it is accepted that print media can survive for centuries with…

  14. Zotero : a free and open-source reference manager

    DEFF Research Database (Denmark)

    Courraud, Julie

    2014-01-01

    Zotero is a free, open-source reference management program compatible with Linux®, Mac®, and Windows® operating systems. Libraries are backed up online allowing sharing between computers and even multiple users. Zotero makes it easy to keep your reference library organised and ‘clean’. Reference...

  15. Open Source Software: All You Do Is Put It Together

    NARCIS (Netherlands)

    Obrenovic, Z.; Gasevic, D.

    2007-01-01

    The authors propose an infrastructure for rapidly prototyping applications from open source software components. The Adaptable Multi-Interface Communicator infrastructure (AMICO) is based on ideas of middleware platforms for component integration, but it focuses on pragmatic aspects of OSS integration.

  16. Exploring Coordination Structures in Open Source Software Development

    NARCIS (Netherlands)

    Amrit, Chintan; Hegeman, J.H.; Hillegersberg, van Jos; Harmsen, Frank; Geisberger, Eva; Keil, Patrick; Kuhrmann, Marco

    2007-01-01

    Coordination is difficult to achieve in a large globally distributed project setting. The problem is multiplied in open source software development projects, where most of the traditional means of coordination such as plans, system-level designs, schedules and defined process are not used. In order

  17. Critical Analysis on Open Source LMSs Using FCA

    Science.gov (United States)

    Sumangali, K.; Kumar, Ch. Aswani

    2013-01-01

    The objective of this paper is to apply Formal Concept Analysis (FCA) to identify the best open source Learning Management System (LMS) for an E-learning environment. FCA is a mathematical framework that represents knowledge derived from a formal context. In constructing the formal context, LMSs are treated as objects and their features as…
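    The FCA machinery referred to above can be illustrated on a toy formal context in a few lines: objects (LMSs) and attributes (features) are related by a binary table, and formal concepts are the pairs of object and attribute sets closed under the two derivation operators. The context below is invented purely for illustration and says nothing about real LMS feature sets.

```python
# Toy Formal Concept Analysis: enumerate formal concepts of a made-up context.
from itertools import combinations

context = {                        # object -> set of attributes it has (illustrative only)
    "Moodle": {"quizzes", "forums", "scorm"},
    "Sakai":  {"forums", "scorm"},
    "ILIAS":  {"quizzes", "scorm"},
}
attributes = set().union(*context.values())

def common_attributes(objs):       # derivation: attributes shared by all given objects
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def objects_having(attrs):         # derivation: objects having all given attributes
    return {o for o, a in context.items() if attrs <= a}

# A pair (A, B) is a formal concept when B = common(A) and A = objects(B).
concepts = set()
for r in range(len(context) + 1):
    for objs in combinations(context, r):
        B = common_attributes(set(objs))
        A = objects_having(B)
        concepts.add((frozenset(A), frozenset(B)))

for A, B in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(A), "<->", sorted(B))
```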

  18. Intrinsic Motivation versus Signaling in Open Source Software Development

    DEFF Research Database (Denmark)

    Bitzer, J; Schrettl, W; Schröder, P

    This paper sheds light on the puzzling fact that even though open source software (OSS) is a public good, it is developed for free by highly qualified, young, motivated individuals, and evolves at a rapid pace. We show that when OSS development is understood as the private provision of a public...

  19. Modular Open-Source Software for Item Factor Analysis

    Science.gov (United States)

    Pritikin, Joshua N.; Hunter, Micheal D.; Boker, Steven M.

    2015-01-01

    This article introduces an item factor analysis (IFA) module for "OpenMx," a free, open-source, and modular statistical modeling package that runs within the R programming environment on GNU/Linux, Mac OS X, and Microsoft Windows. The IFA module offers a novel model specification language that is well suited to programmatic generation…

  20. [Osirix: free and open-source software for medical imagery].

    Science.gov (United States)

    Jalbert, F; Paoli, J R

    2008-02-01

    Osirix is a tool for diagnostic imaging, teaching and research that has many possible applications in maxillofacial and oral surgery. It is free, open-source software developed on Mac OS X (Apple) by Dr Antoine Rosset and Dr Osman Ratib in the department of radiology and medical computing in Geneva (Switzerland).

  1. Is Open Source the ERP Cure-All?

    Science.gov (United States)

    Panettieri, Joseph C.

    2008-01-01

    Conventional and hosted applications thrive, but open source ERP (enterprise resource planning) is coming on strong. In many ways, the evolution of the ERP market is littered with ironies. When Oracle began buying up customer relationship management (CRM) and ERP companies, some universities worried that they would be left with fewer choices and…

  2. Open Source Solutions for Libraries: ABCD vs Koha

    Science.gov (United States)

    Macan, Bojan; Fernandez, Gladys Vanesa; Stojanovski, Jadranka

    2013-01-01

    Purpose: The purpose of this study is to present an overview of the two open source (OS) integrated library systems (ILS)--Koha and ABCD (ISIS family), to compare their "next-generation library catalog" functionalities, and to give comparison of other important features available through ILS modules. Design/methodology/approach: Two open source…

  3. The Value of Open Source Software Tools in Qualitative Research

    Science.gov (United States)

    Greenberg, Gary

    2011-01-01

    In an era of global networks, researchers using qualitative methods must consider the impact of any software they use on the sharing of data and findings. In this essay, I identify researchers' main areas of concern regarding the use of qualitative software packages for research. I then examine how open source software tools, wherein the publisher…

  4. Critical Analysis on Open Source LMSs Using FCA

    Science.gov (United States)

    Sumangali, K.; Kumar, Ch. Aswani

    2013-01-01

    The objective of this paper is to apply Formal Concept Analysis (FCA) to identify the best open source Learning Management System (LMS) for an E-learning environment. FCA is a mathematical framework that represents knowledge derived from a formal context. In constructing the formal context, LMSs are treated as objects and their features as…

  5. Modular Open-Source Software for Item Factor Analysis

    Science.gov (United States)

    Pritikin, Joshua N.; Hunter, Micheal D.; Boker, Steven M.

    2015-01-01

    This article introduces an item factor analysis (IFA) module for "OpenMx," a free, open-source, and modular statistical modeling package that runs within the R programming environment on GNU/Linux, Mac OS X, and Microsoft Windows. The IFA module offers a novel model specification language that is well suited to programmatic generation…

  6. Digital Preservation in Open-Source Digital Library Software

    Science.gov (United States)

    Madalli, Devika P.; Barve, Sunita; Amin, Saiful

    2012-01-01

    Digital archives and digital library projects are being initiated all over the world for materials of different formats and domains. To organize, store, and retrieve digital content, many libraries as well as archiving centers are using either proprietary or open-source software. While it is accepted that print media can survive for centuries with…

  7. Higher Education Sub-Cultures and Open Source Adoption

    Science.gov (United States)

    van Rooij, Shahron Williams

    2011-01-01

    Successful adoption of new teaching and learning technologies in higher education requires the consensus of two sub-cultures, namely the technologist sub-culture and the academic sub-culture. This paper examines trends in adoption of open source software (OSS) for teaching and learning by comparing the results of a 2009 survey of 285 Chief…

  8. Open source and open standards in e-learning research

    NARCIS (Netherlands)

    Koper, Rob

    2007-01-01

    Presentation at the TENCompetence Winterschool directed at Ph.D. students to position e-learning research and the role of open source and open standards to facilitate the research process in a methodological sense. The presentation is based on the paper: http://hdl.handle.net/1820/780

  9. ARLearn - Open source mobile application platform for learning

    NARCIS (Netherlands)

    Börner, Dirk; Ternier, Stefaan; Klemke, Roland; Schmitz, Birgit; Kalz, Marco; Tabuenca, Bernardo; Specht, Marcus

    2013-01-01

    Börner, D., Ternier, S., Klemke, R., Schmitz, B., Kalz, M., Tabuenca, B., & Specht, M. (2013). ARLearn - Open source mobile application platform for learning. In D. Hernández-Leo et al. (Eds.), Scaling up Learning for Sustained Impact. Proceedings of the 8th European Conference on Technology Enhanced Learning.

  10. Open Source Projects in Software Engineering Education: A Mapping Study

    Science.gov (United States)

    Nascimento, Debora M. C.; Almeida Bittencourt, Roberto; Chavez, Christina

    2015-01-01

    Context: It is common practice in academia to have students work with "toy" projects in software engineering (SE) courses. One way to make such courses more realistic and reduce the gap between academic courses and industry needs is getting students involved in open source projects (OSP) with faculty supervision. Objective: This study…

  11. Willingness to Cooperate Within the Open Source Software Domain

    NARCIS (Netherlands)

    Ravesteijn, J.P.P.; Silvius, A.J.G.

    2008-01-01

    Open Source Software (OSS) is an increasingly hot topic in the business domain. One of the key benefits mentioned is the unlimited access to the source code, which enables large communities to continuously improve a software application and prevents vendor lock-in. How attractive these benefits may

  12. Large Data Visualization with Open-Source Tools

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Visualization and post-processing of large data have become increasingly challenging and require more and more tools to support the diversity of data to process. In this seminar, we will present a suite of open-source tools supported and developed by Kitware to perform large-scale data visualization and analysis. In particular, we will present ParaView, an open-source tool for parallel visualization of massive datasets, the Visualization Toolkit (VTK), an open-source toolkit for scientific visualization, and Tangelohub, a suite of tools for large data analytics. About the speaker Julien Jomier is directing Kitware's European subsidiary in Lyon, France, where he focuses on European business development. Julien works on a variety of projects in the areas of parallel and distributed computing, mobile computing, image processing, and visualization. He is one of the developers of the Insight Toolkit (ITK), the Visualization Toolkit (VTK), and ParaView. Julien is also leading the CDash project, an open-source co...

  13. Chinese Localisation of Evergreen: An Open Source Integrated Library System

    Science.gov (United States)

    Zou, Qing; Liu, Guoying

    2009-01-01

    Purpose: The purpose of this paper is to investigate various issues related to Chinese language localisation in Evergreen, an open source integrated library system (ILS). Design/methodology/approach: A Simplified Chinese version of Evergreen was implemented and tested and various issues such as encoding, indexing, searching, and sorting…

  14. Higher Education Sub-Cultures and Open Source Adoption

    Science.gov (United States)

    van Rooij, Shahron Williams

    2011-01-01

    Successful adoption of new teaching and learning technologies in higher education requires the consensus of two sub-cultures, namely the technologist sub-culture and the academic sub-culture. This paper examines trends in adoption of open source software (OSS) for teaching and learning by comparing the results of a 2009 survey of 285 Chief…

  15. Open-Source Urbanism: Creating, Multiplying and Managing Urban Commons

    Directory of Open Access Journals (Sweden)

    Karin Bradley

    2015-06-01

    Full Text Available Within contemporary architecture and urbanism there is marked interest in urban commons. This paper explores the creation of temporary urban commons, or, more specifically, what can be called ‘open-source urbanism’. Citing two practices – urban commons initiated by Atelier d’architecture autogérée in Paris, and Park(ing) Day initiated by San Francisco-based Rebar – I argue that these practices can be understood as open-source urbanism since their initiators act as open-source programmers, constructing practice manuals to be freely copied, used, developed and shared, thus producing self-managed commons. Although this tradition of ‘commoning’ is not new, it is currently being reinvented with the use of digital technologies. Combining Elinor Ostrom’s analysis of self-managed natural resource commons with Yochai Benkler’s assertion that commons-based peer production constitutes a ‘third mode of production’ that lies beyond capitalism, socialism and their blends, I argue that open-source urbanism critiques both government-led and privately-led urban development by advancing a form of postcapitalist urbanism.

  16. OMPC: an open-source MATLAB®-to-Python compiler

    Directory of Open Access Journals (Sweden)

    Peter Jurica

    2009-02-01

    Full Text Available Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we introduce an Open-source MATLAB®-to-Python Compiler (OMPC, a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules run independent of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com.
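    To make the idea of "syntax adaptation" concrete, the snippet below shows, by hand, the sort of mapping from MATLAB® indexing and reduction semantics onto numpy that OMPC automates. This is not OMPC's actual output or API, just an illustration of emulating 1-based, inclusive MATLAB® ranges and matrix reductions on Python's numerical stack.

```python
# Hand-written illustration of a MATLAB-to-numpy mapping (not OMPC output):
#   MATLAB:  A = magic(4); s = sum(A(2:3, :), 2);
import numpy as np

A = np.array([[16,  2,  3, 13],
              [ 5, 11, 10,  8],
              [ 9,  7,  6, 12],
              [ 4, 14, 15,  1]])              # magic(4)
s = A[1:3, :].sum(axis=1, keepdims=True)      # MATLAB 2:3 (1-based, inclusive) -> Python 1:3
print(s)                                      # column vector of row sums, as MATLAB would return
```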

  17. Open source and open standards in e-learning research

    NARCIS (Netherlands)

    Koper, Rob

    2007-01-01

    Presentation at the TENCompetence Winterschool directed at Ph.D. students to position e-learning research and the role of open source and open standards to facilitate the research process in a methodological sense. The presentation is based on the paper: http://hdl.handle.net/1820/780

  18. OMPC: an Open-Source MATLAB-to-Python Compiler.

    Science.gov (United States)

    Jurica, Peter; van Leeuwen, Cees

    2009-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com.

  19. OMPC: an Open-Source MATLAB®-to-Python Compiler

    Science.gov (United States)

    Jurica, Peter; van Leeuwen, Cees

    2008-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com. PMID:19225577

  20. Is Open Source the ERP Cure-All?

    Science.gov (United States)

    Panettieri, Joseph C.

    2008-01-01

    Conventional and hosted applications thrive, but open source ERP (enterprise resource planning) is coming on strong. In many ways, the evolution of the ERP market is littered with ironies. When Oracle began buying up customer relationship management (CRM) and ERP companies, some universities worried that they would be left with fewer choices and…