Sample records for supporting scalable models

  1. Scalability of human models

    NARCIS (Netherlands)

    Rodarius, C.; Rooij, L. van; Lange, R. de


    The objective of this work was to create a scalable human occupant model that allows adaptation of human models with respect to size, weight and several mechanical parameters. Therefore, for the first time two scalable facet human models were developed in MADYMO. First, a scalable human male was

  2. A fully scalable motion model for scalable video coding. (United States)

    Kao, Meng-Ping; Nguyen, Truong


    Motion information scalability is an important requirement for a fully scalable video codec, especially for decoding scenarios of low bit rate or small image size. So far, several scalable coding techniques on motion information have been proposed, including progressive motion vector precision coding and motion vector field layered coding. However, it is still vague on the required functionalities of motion scalability and how it collaborates flawlessly with other scalabilities, such as spatial, temporal, and quality, in a scalable video codec. In this paper, we first define the functionalities required for motion scalability. Based on these requirements, a fully scalable motion model is proposed along with tailored encoding techniques to minimize the coding overhead of scalability. Moreover, the associated rate distortion optimized motion estimation algorithm will be provided to achieve better efficiency throughout various decoding scenarios. Simulation results will be presented to verify the superiorities of proposed scalable motion model over nonscalable ones.
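
    As a toy illustration of the progressive motion-vector precision idea this abstract mentions (a sketch under assumed quarter-pel units, not the authors' codec), a full-precision motion vector can be split into a coarse base layer and a refinement layer by dropping low-order bits:

```python
# Sketch: progressive motion-vector precision (assumed quarter-pel units).
# A vector stored at full precision is decoded at a coarser level by
# discarding low-order bits; the dropped bits form a refinement layer.

def split_mv(mv: int, drop_bits: int) -> tuple[int, int]:
    """Split one motion-vector component into a coarse base and a refinement."""
    base = mv >> drop_bits                 # low-rate base layer
    refine = mv - (base << drop_bits)      # sent only to capable decoders
    return base, refine

def merge_mv(base: int, refine: int, drop_bits: int) -> int:
    """Recombine base and refinement into the full-precision component."""
    return (base << drop_bits) + refine

mv = 37                                    # 9.25 pixels in quarter-pel units
base, refine = split_mv(mv, drop_bits=2)   # integer-pel base layer
assert merge_mv(base, refine, 2) == mv
```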

  3. Scalable Automated Model Search (United States)


    ...distributed learning environment. Specifically, how to best choose between model families for supervised learning problems and configure... [Figure 3: Search methods were compared across several learning problems (comparison of search methods; error vs. maximum calls: 16, 81, 256, 625)] ...while several of the methods in this paper may apply to this setting, optimizing over this many hyperparameters for learning problems is not a well

  4. A Scalability Model for ECS's Data Server (United States)

    Menasce, Daniel A.; Singhal, Mukesh


    This report presents, in four chapters, a model for the scalability analysis of the Data Server subsystem of the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). The model analyzes whether the planned architecture of the Data Server will support an increase in the workload with the possible upgrade and/or addition of processors, storage subsystems, and networks. The report includes a summary of the architecture of ECS's Data Server as well as a high-level description of the Ingest and Retrieval operations as they relate to ECS's Data Server. This description forms the basis for the development of the scalability model of the data server and the methodology used to solve it.

  5. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Lund, Morten; Nielsen, Christian


    Purpose: The purpose of the article is to define what scalable business models are. Central to the contemporary understanding of business models is the value proposition towards the customer and the hypotheses generated about delivering value to the customer, which become a good foundation for a long-term profitable business. However, the main message of this article is that while providing a good value proposition may help the firm ‘get by’, the really successful businesses of today are those able to reach the sweet-spot of business model scalability. Design/Methodology/Approach: The article is based on a five-year longitudinal action research project of over 90 companies that participated in the International Center for Innovation project aimed at building 10 global network-based business models. Findings: This article introduces and discusses the term scalability from a company-level perspective...

  6. From Digital Disruption to Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten; Thomsen, Peter Poulsen


    This article discusses the terms disruption, digital disruption, business models and business model scalability. It illustrates how managers should be using these terms for the benefit of their business by developing business models capable of achieving exponentially increasing returns to scale as a response to digital disruption. A series of case studies illustrate that besides frequent existing messages in the business literature relating to the importance of creating agile businesses, both in growing and declining economies, as well as hard-to-copy value propositions or value propositions that take a long time to replicate, business model scalability can be cornered into four dimensions. In many corporate restructuring exercises and Mergers and Acquisitions there is a tendency to look for synergies in the form of cost reductions, lean workflows and market segments. However, this state of mind...

  7. Scalable Trigram Backoff Language Models, (United States)


    are making the sparse training data problem less serious for certain domains, such as ARPA’s Wall Street Journal corpus, which is part of the 305...our memory calculations. Using a 58,000 word dictionary and 45 million words of Wall Street Journal training data (1992 - 1994), the memory...and used to create models of the same size. The first data set consists of 45.3 million words of Wall Street Journal data (1992 - 1994), the same data

  8. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten


    The power of business models lies in their ability to visualize and clarify how firms may configure their value creation processes. Among the key aspects of business model thinking are a focus on what the customer values, how this value is best delivered to the customer and how strategic partners are leveraged in this value creation, delivery and realization exercise. Central to the mainstream understanding of business models is the value proposition towards the customer, and the hypothesis generated is that if the firm delivers to the customer what he/she requires, then there is a good foundation for a long-term profitable business. However, the message conveyed in this article is that while providing a good value proposition may help the firm ‘get by’, the really successful businesses of today are those able to reach the sweet-spot of business model scalability. This article introduces and discusses...

  9. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin


    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high-quality results on both benchmark and real-world graphs. An example of finding more meaningful communities, in comparison with a popular modularity maximization algorithm, is also illustrated.
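
    A minimal single-machine sketch of the kind of computation such inference scales up: given a partition, the Bernoulli SBM log-likelihood can be evaluated in time linear in the number of edges. This is only an illustration, not the authors' multi-stage or message-passing algorithm:

```python
import numpy as np

def sbm_loglik(edges, labels, k):
    """Bernoulli SBM log-likelihood of a partition, linear in the edge count."""
    sizes = np.bincount(labels, minlength=k).astype(float)
    m = np.zeros((k, k))                     # directed edge counts per block pair
    for u, v in edges:
        m[labels[u], labels[v]] += 1
        m[labels[v], labels[u]] += 1
    pairs = np.outer(sizes, sizes) - np.diag(sizes)   # ordered node pairs
    with np.errstate(divide="ignore", invalid="ignore"):
        p = np.where(pairs > 0, m / pairs, 0.0)       # MLE block densities
        ll = np.where((p > 0) & (p < 1),
                      m * np.log(p) + (pairs - m) * np.log(1.0 - p), 0.0)
    return 0.5 * ll.sum()                    # each unordered pair counted twice

edges = [(0, 1), (1, 2), (3, 4)]             # tiny graph with two communities
labels = np.array([0, 0, 0, 1, 1])
print(sbm_loglik(edges, labels, k=2))
```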

  10. MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems. (United States)

    Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui

    A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this aspect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the data distribution service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter and transport priority, as well as experiments on real robots, validate the effectiveness of this work.

  11. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University


    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  12. Supporting scalable Bayesian networks using configurable discretizer actuators

    CSIR Research Space (South Africa)

    Osunmakinde, I


    Full Text Available ...memory management schemes to provide a complete scalable basis for the optimization strategy. This prevents the limited memory from halting the process, while minimizing the discretization time and adapting to new observations without re-scanning the entire old data...

  13. Semantic Models for Scalable Search in the Internet of Things

    Directory of Open Access Journals (Sweden)

    Dennis Pfisterer


    Full Text Available The Internet of Things is anticipated to connect billions of embedded devices equipped with sensors to perceive their surroundings. Thereby, the state of the real world will be available online and in real-time and can be combined with other data and services in the Internet to realize novel applications such as Smart Cities, Smart Grids, or Smart Healthcare. This requires an open representation of sensor data and scalable search over data from diverse sources including sensors. In this paper we show how the Semantic Web technologies RDF (an open semantic data format) and SPARQL (a query language for RDF-encoded data) can be used to address those challenges. In particular, we describe how prediction models can be employed for scalable sensor search, how these prediction models can be encoded as RDF, and how the models can be queried by means of SPARQL.
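
    A small illustration of the RDF/SPARQL combination described above, using Python's rdflib; the namespace and property names are invented for the sketch, not taken from the paper's ontology:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

EX = Namespace("http://example.org/iot#")   # invented namespace for the sketch
g = Graph()
g.add((EX.sensor1, EX.observes, Literal("temperature")))
g.add((EX.sensor1, EX.value, Literal(21.5, datatype=XSD.double)))
g.add((EX.sensor2, EX.observes, Literal("temperature")))
g.add((EX.sensor2, EX.value, Literal(30.2, datatype=XSD.double)))

# SPARQL search: temperature sensors whose current reading exceeds 25.
q = """
PREFIX ex: <http://example.org/iot#>
SELECT ?s ?v WHERE {
  ?s ex:observes "temperature" .
  ?s ex:value ?v .
  FILTER (?v > 25.0)
}"""
for row in g.query(q):
    print(row.s, row.v)
```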

  14. Bidirectional scalable motion for scalable video coding. (United States)

    Chen, Hu; Kao, Meng-Ping; Nguyen, Truong Q


    Motion information scalability is an important requirement for a fully scalable video codec, especially in low bit rate or small resolution decoding scenarios, for which the fully scalable motion model (SMM) has been proposed. SMM can collaborate flawlessly with other scalabilities, such as spatial, temporal and quality, in a scalable video codec. It performs better than the nonscalable motion model. To further improve the SMM, this paper extends the algorithm to support the hierarchical B frame structure and bidirectional or multidirectional motion estimation. Furthermore, the corresponding rate distortion optimized estimation for improved efficiency in several scenarios is discussed. Several simulation results based on the updated framework are presented to verify the advantage of this extension.

  15. A Scalable Prescriptive Parallel Debugging Model

    DEFF Research Database (Denmark)

    Jensen, Nicklas Bo; Quarfot Nielsen, Niklas; Lee, Gregory L.


    Debugging is a critical step in the development of any parallel program. However, the traditional interactive debugging model, where users manually step through code and inspect their application, does not scale well even for current supercomputers due to its centralized nature. While lightweight...

  16. Scalable learning of probabilistic latent models for collaborative filtering

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre


    Collaborative filtering has emerged as a popular way of making user recommendations, but with the increasing sizes of the underlying databases scalability is becoming a crucial issue. In this paper we focus on a recently proposed probabilistic collaborative filtering model that explicitly...... variational Bayes learning and inference algorithm for these types of models. Empirical results show that the proposed algorithm achieves significantly better accuracy results than other straw-men models evaluated on a collection of well-known data sets. We also demonstrate that the algorithm has a highly...


  17. A P2P Model for Scientific Data Grid

    Directory of Open Access Journals (Sweden)

    Azizol Abdullah


    Full Text Available Scientific Data Grid mostly deals with large computational problems. It provides geographically distributed resources for large-scale data-intensive applications that generate large scientific data sets. This requires scientists in modern scientific computing communities to manage massive amounts of very large, geographically distributed data collections. Research in the grid area has produced various ideas and solutions to address these requirements. However, nowadays the number of participants (scientists and institutions) involved in this kind of environment is increasing tremendously. This situation has led to a problem of scalability. In order to overcome this problem we need a data grid model that can scale well with the increasing number of users. Peer-to-peer (P2P) is one of the architectures that promises scalability and dynamism. In this paper, we present a P2P model for Scientific Data Grid that utilizes P2P services to address the scalability problem. Using this model, we study and propose various decentralized discovery strategies that aim to address the problem of scalability. We also investigate the impact of data replication, which addresses the data distribution and reliability problem for our Scientific Data Grid model, on the proposed discovery strategies. For the purpose of this study, we have developed and used our own data grid simulation written using PARSEC. We illustrate our P2P Scientific Data Grid model and our data grid simulation used in this study. We then analyze the performance of the discovery strategies with and without replication strategies, relative to their success rates, bandwidth consumption and average number of hops.

  18. Scalable singular 3D modeling for digital battlefield applications (United States)

    Jannson, Tomasz P.; Ternovskiy, Igor V.


    We propose a new classification algorithm to detect and classify targets of interest. It is based on an advanced branch of the analytic geometry of manifolds, called catastrophe theory. Physical Optics Corporation's (POC) scalable 3D model representation provides automatic and real-time analysis of a discrete frame of sensed 2D imagery of terrain, urban, and target features. It then transforms this frame of discrete different-perspective 2D views of a target into a continuous 3D model called a pictogram. The unique local stereopsis feature of this modeling is the surprising ability to locally obtain a 3D pictogram from a single monoscopic photograph. The proposed 3D modeling, combined with more standard change detection algorithms and 3D terrain feature models, will constitute a novel classification algorithm and a new type of digital battlefield imagery for imaging systems.

  19. Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    John Mellor-Crummey


    Rice University's achievements as part of the Center for Programming Models for Scalable Parallel Computing include: (1) design and implementation of cafc, the first multi-platform CAF compiler for distributed and shared-memory machines, (2) performance studies of the efficiency of programs written using the CAF and UPC programming models, (3) a novel technique to analyze explicitly-parallel SPMD programs that facilitates optimization, (4) design, implementation, and evaluation of new language features for CAF, including communication topologies, multi-version variables, and distributed multithreading to simplify development of high-performance codes in CAF, and (5) a synchronization strength reduction transformation for automatically replacing barrier-based synchronization with more efficient point-to-point synchronization. The prototype Co-array Fortran compiler cafc developed in this project is available as open source software from

  20. Toward a scalable flexible-order model for 3D nonlinear water waves

    DEFF Research Database (Denmark)

    Engsig-Karup, Allan Peter; Ducrozet, Guillaume; Bingham, Harry B.

    For marine and coastal applications, current work is directed toward the development of a scalable numerical 3D model for fully nonlinear potential water waves over arbitrary depths. The model is high-order accurate, robust and efficient for large-scale problems, and support will be included... strategy on a time-invariant mesh. The 3D numerical model is based on a finite difference method as in the original works \cite{LiFleming1997,BinghamZhang2007}. Full details and other aspects of an improved 3D solution can be found in \cite{EBL08}. The new and improved approach for three...

  1. Efficient Delivery of Scalable Video Using a Streaming Class Model

    Directory of Open Access Journals (Sweden)

    Jason J. Quinlan


    Full Text Available When we couple the rise in video streaming with the growing number of portable devices (smart phones, tablets, laptops), we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide graceful changes in video quality, all while respecting viewing satisfaction. In this context, the use of well-known scalable/layered media streaming techniques, commonly known as scalable video coding (SVC), is an attractive solution. SVC encodes a number of video quality levels within a single media stream. This has been shown to be an especially effective and efficient solution, but it fares badly in the presence of datagram losses. While multiple description coding (MDC) can reduce the effects of packet loss on scalable video delivery, the increased delivery cost is counterproductive for constrained networks. This situation is accentuated in cases where only the lower quality level is required. In this paper, we assess these issues and propose a new approach called Streaming Classes (SC), through which we can define a key set of quality levels, each of which can be delivered in a self-contained manner. This facilitates efficient delivery, yielding reduced transmission byte-cost for devices requiring lower quality, relative to MDC and Adaptive Layer Distribution (ALD) (42% and 76% respective reductions for layer 2), while also maintaining high levels of consistent quality. We also illustrate how a selective packetisation technique can further reduce the effects of packet loss on viewable quality by
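
    The adaptation logic sketched in this abstract reduces, at its simplest, to choosing the highest self-contained quality level whose cumulative rate fits the measured bandwidth. A minimal sketch with invented rates, not the paper's Streaming Classes implementation:

```python
# Sketch: pick the highest quality level that fits the measured bandwidth.
# Rates are cumulative kbit/s per quality level (illustrative numbers only).

def select_level(cumulative_rates, bandwidth_kbps):
    """Return the index of the best quality level deliverable at this rate.

    Falls back to the base layer even if it exceeds the bandwidth.
    """
    best = 0
    for i, rate in enumerate(cumulative_rates):
        if rate <= bandwidth_kbps:
            best = i
        else:
            break
    return best

levels = [400, 900, 1800, 3500]                    # base layer .. full quality
print(select_level(levels, bandwidth_kbps=1200))   # -> 1
```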

  2. Spatiotemporal Stochastic Modeling of IoT Enabled Cellular Networks: Scalability and Stability Analysis

    KAUST Repository

    Gharbieh, Mohammad


    The Internet of Things (IoT) is large-scale by nature, which is manifested by the massive number of connected devices as well as their vast spatial existence. Cellular networks, which provide ubiquitous, reliable, and efficient wireless access, will play a fundamental role in delivering the first-mile access for the data tsunami to be generated by the IoT. However, cellular networks may have scalability problems in providing uplink connectivity to massive numbers of connected things. To characterize the scalability of the cellular uplink in the context of IoT networks, this paper develops a traffic-aware spatiotemporal mathematical model for IoT devices supported by cellular uplink connectivity. The developed model is based on stochastic geometry and queueing theory to account for the traffic requirement per IoT device, the different transmission strategies, and the mutual interference between the IoT devices. To this end, the developed model is utilized to characterize the extent to which cellular networks can accommodate IoT traffic as well as to assess and compare three different transmission strategies that incorporate a combination of transmission persistency, backoff, and power-ramping. The analysis and the results clearly illustrate the scalability problem imposed by IoT on cellular networks and offer insights into effective scenarios for each transmission strategy.
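
    The trade-off between persistency and backoff that the paper analyzes with stochastic geometry and queueing theory can be caricatured in a toy slotted simulation, where success probability decays with the number of concurrent senders; all parameters below are invented and the model is far cruder than the paper's:

```python
import random

def simulate(n_devices, arrival_p, backoff_max, slots, capture=0.1):
    """Toy slotted uplink: success probability decays with concurrent senders."""
    backlog = [0] * n_devices          # queued packets per device
    wait = [0] * n_devices             # backoff slots remaining per device
    delivered = 0
    for _ in range(slots):
        for d in range(n_devices):     # new packets arrive
            if random.random() < arrival_p:
                backlog[d] += 1
        senders = [d for d in range(n_devices) if backlog[d] and wait[d] == 0]
        p_success = min(1.0, capture * n_devices / max(1, len(senders)))
        for d in range(n_devices):     # count down backoff timers
            if wait[d]:
                wait[d] -= 1
        for d in senders:
            if random.random() < p_success:
                backlog[d] -= 1        # delivered
                delivered += 1
            else:
                wait[d] = random.randint(1, backoff_max)  # random backoff
    return delivered / slots           # throughput in packets per slot

random.seed(1)
print(simulate(n_devices=200, arrival_p=0.05, backoff_max=16, slots=500))
```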

  3. An extended systematic mapping study about the scalability of i* Models

    Directory of Open Access Journals (Sweden)

    Paulo Lima


    Full Text Available i* models have been used for requirements specification in many domains, such as healthcare, telecommunication, and air traffic control. Managing the scalability and the complexity of such models is an important challenge in Requirements Engineering (RE. Scalability is also one of the most intractable issues in the design of visual notations in general: a well-known problem with visual representations is that they do not scale well. This issue has led us to investigate scalability in i* models and its variants by means of a systematic mapping study. This paper is an extended version of a previous paper on the scalability of i* including papers indicated by specialists. Moreover, we also discuss the challenges and open issues regarding scalability of i* models and its variants. A total of 126 papers were analyzed in order to understand: how the RE community perceives scalability; and which proposals have considered this topic. We found that scalability issues are indeed perceived as relevant and that further work is still required, even though many potential solutions have already been proposed. This study can be a starting point for researchers aiming to further advance the treatment of scalability in i* models.

  4. Fuzzy-Arden-Syntax-based, Vendor-agnostic, Scalable Clinical Decision Support and Monitoring Platform. (United States)

    Adlassnig, Klaus-Peter; Fehre, Karsten; Rappelsberger, Andrea


    This study's objective is to develop and use a scalable genuine technology platform for clinical decision support based on Arden Syntax, which was extended by fuzzy set theory and fuzzy logic. Arden Syntax is a widely recognized formal language for representing clinical and scientific knowledge in an executable format, and is maintained by Health Level Seven (HL7) International and approved by the American National Standards Institute (ANSI). Fuzzy set theory and logic permit the representation of knowledge and automated reasoning under linguistic and propositional uncertainty. These forms of uncertainty are a common feature of patients' medical data, the body of medical knowledge, and deductive clinical reasoning.
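
    Fuzzy Arden Syntax itself is an HL7-maintained language; as a rough illustration of reasoning under linguistic uncertainty, the sketch below grades two clinical findings with fuzzy membership functions and combines them with a fuzzy AND. The thresholds are invented for the sketch and are not clinical guidance:

```python
def ramp(x, lo, hi):
    """Fuzzy membership rising linearly from 0 at lo to 1 at hi."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

# Degrees of two findings (thresholds invented for the sketch).
fever = lambda t: ramp(t, 37.0, 38.5)        # body temperature in deg C
tachy = lambda hr: ramp(hr, 90.0, 120.0)     # heart rate in bpm

def alert(t, hr):
    """Fuzzy AND (minimum): degree to which both findings are present."""
    return min(fever(t), tachy(hr))

print(alert(38.2, 110))   # a partial truth value in [0, 1], here ~0.67
```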

  5. Progress Report 2008: A Scalable and Extensible Earth System Model for Climate Change Science

    Energy Technology Data Exchange (ETDEWEB)

    Drake, John B [ORNL; Worley, Patrick H [ORNL; Hoffman, Forrest M [ORNL; Jones, Phil [Los Alamos National Laboratory (LANL)


    This project employs multi-disciplinary teams to accelerate development of the Community Climate System Model (CCSM), based at the National Center for Atmospheric Research (NCAR). A consortium of eight Department of Energy (DOE) National Laboratories collaborate with NCAR and the NASA Global Modeling and Assimilation Office (GMAO). The laboratories are Argonne (ANL), Brookhaven (BNL), Los Alamos (LANL), Lawrence Berkeley (LBNL), Lawrence Livermore (LLNL), Oak Ridge (ORNL), Pacific Northwest (PNNL) and Sandia (SNL). The work plan focuses on scalability for petascale computation and extensibility to a more comprehensive earth system model. Our stated goal is to support the DOE mission in climate change research by helping ... To determine the range of possible climate changes over the 21st century and beyond through simulations using a more accurate climate system model that includes the full range of human and natural climate feedbacks with increased realism and spatial resolution.

  6. How to develop a scalable business model? A study on the scalability of business models in the Finnish ICT & software industry


    Nguyen, H. (Hang)


    Abstract The revolution in Information and Communication Technology (ICT) and globalization has made the business model concept more popular as a way of supporting firms in achieving competitive advantage in a dynamic business environment. Startups are not restricted by their size and novelty, but are able to be agile by efficiently and effectively exploiting business opportunities through business model innovation. Give...

  7. Reinforcing user data analysis with Ganga in the LHC era: scalability, monitoring and user-support

    International Nuclear Information System (INIS)

    Elmsheuser, Johannes; Ebke, Johannes; Brochu, Frederic; Dzhunov, Ivan; Kokoszkiewicz, Lukasz; Maier, Andrew; Mościcki, Jakub; Tuckett, David; Vanderster, Daniel; Egede, Ulrik; Reece, Will; Williams, Michael; Jha, Manoj Kumar; Lee, Hurng-Chun; München, Tim; Samset, Bjorn; Slater, Mark


    Ganga is a grid job submission and management system widely used in the ATLAS and LHCb experiments and several other communities in the context of the EGEE project. The particle physics communities have entered the LHC operation era, which brings new challenges for user data analysis: a strong growth in the number of users and jobs is already noticeable. Current work in the Ganga project is focusing on dealing with these challenges. In recent Ganga releases the support for the pilot-job-based grid systems Panda and Dirac of the ATLAS and LHCb experiments, respectively, has been strengthened. A more scalable job repository architecture, which allows efficient storage of many thousands of jobs in XML or several database formats, was recently introduced. A better integration with monitoring systems, including the Dashboard and job execution monitor systems, is underway. These will provide comprehensive and easy job monitoring. A simple-to-use error reporting tool integrated at the Ganga command-line will help to improve user support and the debugging of user problems. Ganga is a mature, stable and widely-used tool with long-term support from the HEP community. We report on how it is being constantly improved following the user needs for faster and easier distributed data analysis on the grid.

  8. Scalable coherent interface

    International Nuclear Information System (INIS)

    Alnaes, K.; Kristiansen, E.H.; Gustavson, D.B.; James, D.V.


    The Scalable Coherent Interface (IEEE P1596) is establishing an interface standard for very high performance multiprocessors, supporting a cache-coherent-memory model scalable to systems with up to 64K nodes. This Scalable Coherent Interface (SCI) will supply a peak bandwidth per node of 1 GigaByte/second. The SCI standard should facilitate assembly of processor, memory, I/O and bus bridge cards from multiple vendors into massively parallel systems with throughput far above what is possible today. The SCI standard encompasses two levels of interface, a physical level and a logical level. The physical level specifies electrical, mechanical and thermal characteristics of connectors and cards that meet the standard. The logical level describes the address space, data transfer protocols, cache coherence mechanisms, synchronization primitives and error recovery. In this paper we address logical level issues such as packet formats, packet transmission, transaction handshake, flow control, and cache coherence. 11 refs., 10 figs

  9. A scalable approach to modeling groundwater flow on massively parallel computers

    International Nuclear Information System (INIS)

    Ashby, S.F.; Falgout, R.D.; Tompson, A.F.B.


    We describe a fully scalable approach to the simulation of groundwater flow on a hierarchy of computing platforms, ranging from workstations to massively parallel computers. Specifically, we advocate the use of scalable conceptual models in which the subsurface model is defined independently of the computational grid on which the simulation takes place. We also describe a scalable multigrid algorithm for computing the groundwater flow velocities. We are thus able to leverage both the engineer's time spent developing the conceptual model and the computing resources used in the numerical simulation. We have successfully employed this approach at the LLNL site, where we have run simulations ranging in size from just a few thousand spatial zones (on workstations) to more than eight million spatial zones (on the CRAY T3D), all using the same conceptual model

  10. NASA's Earth Observing Data and Information System - Supporting Interoperability through a Scalable Architecture (Invited) (United States)

    Mitchell, A. E.; Lowe, D. R.; Murphy, K. J.; Ramapriyan, H. K.


    Initiated in 1990, NASA's Earth Observing System Data and Information System (EOSDIS) is currently a petabyte-scale archive of data designed to receive, process, distribute and archive several terabytes of science data per day from NASA's Earth science missions. Comprised of 12 discipline specific data centers collocated with centers of science discipline expertise, EOSDIS manages over 6800 data products from many science disciplines and sources. NASA supports global climate change research by providing scalable open application layers to the EOSDIS distributed information framework. This allows many other value-added services to access NASA's vast Earth Science Collection and allows EOSDIS to interoperate with data archives from other domestic and international organizations. EOSDIS is committed to NASA's Data Policy of full and open sharing of Earth science data. As metadata is used in all aspects of NASA's Earth science data lifecycle, EOSDIS provides a spatial and temporal metadata registry and order broker called the EOS Clearing House (ECHO) that allows efficient search and access of cross domain data and services through the Reverb Client and Application Programmer Interfaces (APIs). Another core metadata component of EOSDIS is NASA's Global Change Master Directory (GCMD) which represents more than 25,000 Earth science data set and service descriptions from all over the world, covering subject areas within the Earth and environmental sciences. With inputs from the ECHO, GCMD and Soil Moisture Active Passive (SMAP) mission metadata models, EOSDIS is developing a NASA ISO 19115 Best Practices Convention. Adoption of an international metadata standard enables a far greater level of interoperability among national and international data products. NASA recently concluded a 'Metadata Harmony Study' of EOSDIS metadata capabilities/processes of ECHO and NASA's Global Change Master Directory (GCMD), to evaluate opportunities for improved data access and use, reduce

  11. Scalable air cathode microbial fuel cells using glass fiber separators, plastic mesh supporters, and graphite fiber brush anodes

    KAUST Repository

    Zhang, Xiaoyuan


    Brush anodes, glass fiber (GF1) separators, and plastic mesh supporters were combined here for the first time to create a scalable microbial fuel cell architecture. Separators prevented short circuiting of closely-spaced electrodes, and cathode supporters were used to avoid water gaps between the separator and cathode that can reduce power production. The maximum power density with a separator and supporter and a single cathode was 75 ± 1 W/m3. Removing the separator decreased power by 8%. Adding a second cathode increased power to 154 ± 1 W/m3. Current was increased by connecting two MFCs in parallel. These results show that brush anodes, combined with a glass fiber separator and a plastic mesh supporter, produce a useful MFC architecture that is inherently scalable due to good insulation between the electrodes and a compact architecture. © 2010 Elsevier Ltd.

  12. Reinforcing User Data Analysis with Ganga in the LHC Era: Scalability, Monitoring and User-support

    CERN Document Server

    Brochu, F; The ATLAS collaboration; Ebke, J; Egede, U; Elmsheuser, J; Jha, M K; Kokoszkiewicz, L; Lee, H C; Maier, A; Moscicki, J; Munchen, T; Reece, W; Samset, B; Slater, M; Tuckett, D; Van der Ster, D; Williams, M


    Ganga is a grid job submission and management system widely used in the ATLAS and LHCb experiments and several other communities in the context of the EGEE project. The particle physics communities have entered the LHC operation era, which brings new challenges for user data analysis: a strong growth in the number of users and jobs is already noticeable. Current work in the Ganga project is focusing on dealing with these challenges. In recent Ganga releases the support for the pilot-job-based grid systems Panda and Dirac of the ATLAS and LHCb experiments, respectively, has been strengthened. A more scalable job repository architecture, which allows efficient storage of many thousands of jobs in XML or several database formats, was recently introduced. A better integration with monitoring systems, including the Dashboard and job execution monitor systems, is underway. These will provide comprehensive and easy job monitoring. A simple-to-use error reporting tool integrated at the Ganga command-line will help to impr...

  13. Scalability of Semi-Implicit Time Integrators for Nonhydrostatic Galerkin-based Atmospheric Models on Large Scale Cluster (United States)


    Scalability of Semi-Implicit Time Integrators for Nonhydrostatic Galerkin-based Atmospheric Models on Large Scale Cluster. James F. Kelly and Francis... present performance statistics to explain the scalability behavior. Keywords: atmospheric models, time integrators, MPI, scalability, performance... moving toward the nonhydrostatic regime. The nonhydrostatic atmospheric models, which run at resolutions finer than 10 km, possess fast-moving

  14. A Hardware-Friendly Algorithm for Scalable Training and Deployment of Dimensionality Reduction Models on FPGA


    Nazemi, Mahdi; Eshratifar, Amir Erfan; Pedram, Massoud


    With the ever-increasing application of machine learning models in various domains such as image classification, speech recognition and synthesis, and health care, designing efficient hardware for these models has gained a lot of popularity. While the majority of research in this area focuses on efficient deployment of machine learning models (a.k.a. inference), this work concentrates on the challenges of training these models in hardware. In particular, this paper presents a high-performance, scalabl...

  15. Scalable Power-Component Models for Concept Testing (United States)


    ...motor speed can be either positive or negative, dependent upon the propelling or regenerative braking scenario. The simulation provides three... the machine during generation or regenerative braking. To use the model, the user modifies the motor model criteria parameters by double-clicking... model does not have to be an electrical machine expert to scale the model. Similar features are delivered for the battery and inverter models. The

  16. A Fault-Tolerant Mobile Computing Model Based On Scalable Replica

    Directory of Open Access Journals (Sweden)

    Meenakshi Sati


    Full Text Available The most frequent challenge faced by mobile users is staying connected with online data; while disconnected or poorly connected, they store replicas of critical data. Nomadic users require replication to store copies of critical data on their mobile machines. Existing replication services do not provide all classes of mobile users with the capabilities they require, which include: the ability for direct synchronization between any two replicas, support for large numbers of replicas, and detailed control over what files reside on their local (mobile) replica. Existing peer-to-peer solutions would enable direct communication, but suffer from dramatic scaling problems in the number of replicas, limiting the number of overall users and impacting performance. Roam is a replication system designed to satisfy the requirements of the mobile user. Roam is based on the Ward Model, a replication architecture for mobile environments. Using the Ward Model and new distributed algorithms, Roam provides a scalable replication solution for the mobile user. We describe the motivation, design, and implementation of Roam and report its performance. Replication is extremely important in mobile environments because nomadic users require local copies of important data.

  17. Scalable Power-Component Models for Concept Testing (United States)


    [Briefing-slide excerpts: Technology: Permanent Magnet Brushless DC machine; Model: self-generating torque-speed-efficiency map; future improvements: induction machine. Power components listed include diesel engines of 150-1000 hp, a 24 Vdc alternator, a bi-directional 150 kW DC-DC converter, a 400 kW AC-to-DC converter, energy storage and power conversion, and a 250 hp traction motor. ISG model and its associated controls system, with automatic scaling over the scope of relevant machines.]

  18. Scalable Telemonitoring Model in Cloud for Health Care Analysis (United States)

    Sawant, Yogesh; Jayakumar, Naveenkumar, Dr.; Pawar, Sanket Sunil


    The telemonitoring model is a health-observation model for the remote surveillance of patients. It is suitable for patients who wish to avoid the high operating expense of emergency treatment. Telemonitoring provides a path for monitoring medical devices and generates a complete profile of a patient's health by assembling vital signs and additional health information. The model relies on four differential modules capable of generating realistic synthetic electrocardiogram (ECG) signals. It covers four categories of chronic disease: pulmonary conditions, diabetes, hypertension, and cardiovascular diseases. The results of this application model suggest that, regardless of nationality, socioeconomic class, or age, patients can be observed through tele-monitoring programs and the use of these technologies. The results capture multiple aspects of a patient's health status, such as beat-to-beat variation in the morphology and timing of the human ECG, including QT dispersion and R-peak amplitude modulation. This model can be used to evaluate biomedical signal processing methods that extract clinical information from the ECG.
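
    A common way to synthesize a realistic ECG beat (e.g., in ECGSYN-style dynamical models) is a sum of Gaussian bumps placed at the P, Q, R, S and T positions over the cardiac phase. The sketch below uses that idea with illustrative parameters; it is not the authors' four-module model:

```python
import numpy as np

# One synthetic heartbeat as Gaussian bumps at the P, Q, R, S, T phase
# angles (parameters are illustrative, loosely ECGSYN-style; not clinical).
THETA = np.deg2rad([-70.0, -15.0, 0.0, 15.0, 100.0])  # bump centres (rad)
AMP = np.array([0.12, -0.5, 1.0, -0.75, 0.3])         # bump heights (mV)
WIDTH = np.array([0.25, 0.1, 0.1, 0.1, 0.4])          # bump widths (rad)

def ecg_beat(n=500):
    phase = np.linspace(-np.pi, np.pi, n)
    z = sum(a * np.exp(-((phase - th) ** 2) / (2.0 * w ** 2))
            for th, a, w in zip(THETA, AMP, WIDTH))
    return phase, z

phase, z = ecg_beat()
print(float(z.max()), float(z.min()))   # R-peak and S-trough amplitudes
```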

  19. Advances in Intelligent Modelling and Simulation: Artificial Intelligence-Based Models and Techniques in Scalable Computing

    CERN Document Server

    Khan, Samee; Burczyński, Tadeusz


    One of the most challenging issues in today’s large-scale computational modeling and design is to effectively manage the complex distributed environments, such as computational clouds, grids, ad hoc, and P2P networks operating under various types of users with evolving relationships fraught with uncertainties. In this context, the IT resources and services usually belong to different owners (institutions, enterprises, or individuals) and are managed by different administrators. Moreover, uncertainties are presented to the system at hand in various forms of information that are incomplete, imprecise, fragmentary, or overloading, which hinders the full and precise resolution of the evaluation criteria, subsequencing and selection, and the assignment of scores. Intelligent scalable systems enable flexible routing and charging, advanced user interactions and the aggregation and sharing of geographically-distributed resources in modern large-scale systems.   This book presents new ideas, theories, models...

  1. Scalable Topic Modeling: Online Learning, Diagnostics, and Recommendation (United States)


    ...ideas (recommendation). We went beyond the scope of the proposal in several ways, exploring applications as diverse as neuroscience, sociology, and... DiMaggio, M. Nag, and D. Blei. Exploiting affinities between topic modeling and the sociological perspective on culture: Application to newspaper... coverage of U.S. government arts funding. Poetics, 41:6, 2013. 8. D. Blei. Topic modeling and digital humanities. Journal of Digital Humanities, 2(1), 2013

  2. A Scalable Mextram Model for Advanced Bipolar Circuit Design

    NARCIS (Netherlands)

    Wu, H.C.


    In this thesis, a reference-based scaling approach and its parameter extraction for the bipolar transistor model Mextram are proposed. The approach is mainly based on the physical properties of the Mextram parameters, which scale with the junction temperature and geometry of the bipolar transistor. The

  3. Model Transport: Towards Scalable Transfer Learning on Manifolds

    DEFF Research Database (Denmark)

    Freifeld, Oren; Hauberg, Søren; Black, Michael J.


    We consider the intersection of two research fields: transfer learning and statistics on manifolds. In particular, we consider, for manifold-valued data, transfer learning of tangent-space models such as Gaussian distributions, PCA, regression, or classifiers. Though one would hope to simply use ordinary Rn-transfer learning ideas, the manifold structure prevents it. We overcome this by basing our method on inner-product-preserving parallel transport, a well-known tool widely used in other problems of statistics on manifolds in computer vision. At first, this straightforward idea seems to suffer... “commutes” with learning. Consequently, our compact framework, applicable to a large class of manifolds, is not restricted by the size of either the training or test sets. We demonstrate the approach by transferring PCA and logistic-regression models of real-world data involving 3D shapes and image...
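
    The key primitive named in this abstract, inner-product-preserving parallel transport, has a closed form on the unit sphere along the geodesic between two points; a numpy sketch of that special case (not the authors' general implementation):

```python
import numpy as np

def transport_sphere(p, q, v):
    """Parallel-transport tangent vector v at p to q along the geodesic (unit sphere)."""
    cos_t = np.clip(p @ q, -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-12:
        return v.copy()                    # p and q coincide
    e = q - cos_t * p
    e /= np.linalg.norm(e)                 # unit tangent at p pointing toward q
    c = e @ v                              # component of v along the geodesic
    return v + c * ((np.cos(theta) - 1.0) * e - np.sin(theta) * p)

p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])
v = np.array([0.0, 1.0, 0.0])              # tangent vector at p
w = transport_sphere(p, q, v)
print(w, np.linalg.norm(w), w @ q)         # norm preserved; w is tangent at q
```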

  4. Scalable and Robust BDDC Preconditioners for Reservoir and Electromagnetics Modeling

    KAUST Repository

    Zampini, S.


    The purpose of the study is to show the effectiveness of recent algorithmic advances in Balancing Domain Decomposition by Constraints (BDDC) preconditioners for the solution of elliptic PDEs with highly heterogeneous coefficients, and discretized by means of the finite element method. Applications to large linear systems generated by div- and curl-conforming finite element discretizations commonly arising in the contexts of modelling reservoirs and electromagnetics will be presented.

  5. A veracity preserving model for synthesizing scalable electricity load profiles


    Huang, Yunyou; Zhan, Jianfeng; Luo, Chunjie; Wang, Lei; Wang, Nana; Zheng, Daoyi; Fan, Fanda; Ren, Rui


    Electricity users are the major players of the electric systems, and electricity consumption is growing at an extraordinary rate. The research on electricity consumption behaviors is becoming increasingly important to design and deployment of the electric systems. Unfortunately, electricity load profiles are difficult to acquire. Data synthesis is one of the best approaches to solving the lack of data, and the key is the model that preserves the real electricity consumption behaviors. In this...

  6. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems. (United States)

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan


    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system, assuring by design the scalability, interoperability and correctness of component cooperation.

  7. Center for Programming Models for Scalable Parallel Computing - Towards Enhancing OpenMP for Manycore and Heterogeneous Nodes

    Energy Technology Data Exchange (ETDEWEB)

    Barbara Chapman


    OpenMP was not well recognized at the beginning of the project, around year 2003, because of its limited use in DoE production applications and the immature hardware support for an efficient implementation. Yet in recent years, it has been gradually adopted both in HPC applications, mostly in the form of MPI+OpenMP hybrid code, and in mid-scale desktop applications for scientific and experimental studies. We have observed this trend and worked diligently to improve our OpenMP compiler and runtimes, as well as to work with the OpenMP standard organization to make sure OpenMP evolves in a direction close to DoE missions. In the Center for Programming Models for Scalable Parallel Computing project, the HPCTools team at the University of Houston (UH), directed by Dr. Barbara Chapman, has been working with project partners, external collaborators and hardware vendors to increase the scalability and applicability of OpenMP for multi-core (and future manycore) platforms and for distributed memory systems by exploring different programming models, language extensions, compiler optimizations, as well as runtime library support.

  8. A scalable variational inequality approach for flow through porous media models with pressure-dependent viscosity (United States)

    Mapakshi, N. K.; Chang, J.; Nakshatrala, K. B.


    Mathematical models for flow through porous media typically enjoy the so-called maximum principles, which place bounds on the pressure field. It is highly desirable to preserve these bounds on the pressure field in predictive numerical simulations, that is, one needs to satisfy discrete maximum principles (DMP). Unfortunately, many of the existing formulations for flow through porous media models do not satisfy DMP. This paper presents a robust, scalable numerical formulation based on variational inequalities (VI), to model non-linear flows through heterogeneous, anisotropic porous media without violating DMP. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations. To crystallize the ideas, a modification to Darcy equations by taking into account pressure-dependent viscosity will be discretized using the lowest-order Raviart-Thomas (RT0) and Variational Multi-scale (VMS) finite element formulations. It will be shown that these formulations violate DMP, and, in fact, these violations increase with an increase in anisotropy. It will be shown that the proposed VI-based formulation provides a viable route to enforce DMP. Moreover, it will be shown that the proposed formulation is scalable, and can work with any numerical discretization and weak form. A series of numerical benchmark problems are solved to demonstrate the effects of heterogeneity, anisotropy and non-linearity on DMP violations under the two chosen formulations (RT0 and VMS), and that of non-linearity on solver convergence for the proposed VI-based formulation. Parallel scalability on modern computational platforms will be illustrated through strong-scaling studies, which will prove the efficiency of the proposed formulation in a parallel setting. Algorithmic scalability as the problem size is scaled up will be demonstrated through novel static-scaling studies. The performed static-scaling studies can serve as a guide for users to be able to select
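
    As a caricature of the VI idea, enforcing a discrete maximum principle can be posed as a bound-constrained solve. The sketch below keeps a toy 1D diffusion solution inside physical pressure bounds with scipy's bounded least squares; this only stands in for the paper's RT0/VMS formulations, and the system and bounds are invented:

```python
import numpy as np
from scipy.optimize import lsq_linear

# 1D diffusion stencil K p = f (toy stand-in for a Darcy discretization).
n = 8
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.full(n, 0.3)

p_unconstrained = np.linalg.solve(K, f)

# VI-style solve: same system, but the pressure is kept inside its
# physical bounds (here [0, 1], chosen arbitrarily for the sketch).
res = lsq_linear(K, f, bounds=(np.zeros(n), np.ones(n)))
print(p_unconstrained.max())   # exceeds 1 -> maximum-principle violation
print(res.x.max())             # <= 1 by construction
```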

  9. Optimized bit extraction using distortion modeling in the scalable extension of H.264/AVC. (United States)

    Maani, Ehsan; Katsaggelos, Aggelos K


    The newly adopted scalable extension of H.264/AVC video coding standard (SVC) demonstrates significant improvements in coding efficiency in addition to an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. Due to the complicated hierarchical prediction structure of the SVC and the concept of key pictures, content-aware rate adaptation of SVC bit streams to intermediate bit rates is a nontrivial task. The concept of quality layers has been introduced in the design of the SVC to allow for fast content-aware prioritized rate adaptation. However, existing quality layer assignment methods are suboptimal and do not consider all network abstraction layer (NAL) units from different layers for the optimization. In this paper, we first propose a technique to accurately and efficiently estimate the quality degradation resulting from discarding an arbitrary number of NAL units from multiple layers of a bitstream by properly taking drift into account. Then, we utilize this distortion estimation technique to assign quality layers to NAL units for a more efficient extraction. Experimental results show that a significant gain can be achieved by the proposed scheme.

  10. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics for Scientific Data and Analysis (United States)

    National Aeronautics and Space Administration — We will construct SciSpark, a scalable system for interactive model evaluation and for the rapid development of climate metrics and analyses. SciSpark directly...

  11. A Scalable Version of the Navy Operational Global Atmospheric Prediction System Spectral Forecast Model

    Directory of Open Access Journals (Sweden)

    Thomas E. Rosmond


    Full Text Available The Navy Operational Global Atmospheric Prediction System (NOGAPS) includes a state-of-the-art spectral forecast model similar to models run at several major operational numerical weather prediction (NWP) centers around the world. The model, developed by the Naval Research Laboratory (NRL) in Monterey, California, has run operationally at the Fleet Numerical Meteorological and Oceanographic Center (FNMOC) since 1982, and most recently is being run on a Cray C90 in a multi-tasked configuration. Typically the multi-tasked code runs on 10 to 15 processors with overall parallel efficiency of about 90%. The current operational resolution is T159L30, but other operational and research applications run at significantly lower resolutions. A scalable NOGAPS forecast model has been developed by NRL in anticipation of a FNMOC C90 replacement in about 2001, as well as for current NOGAPS research requirements to run on DOD High-Performance Computing (HPC) scalable systems. The model is designed to run with message passing (MPI). Model design criteria include bit reproducibility for different processor numbers and reasonably efficient performance on fully shared memory, distributed memory, and distributed shared memory systems for a wide range of model resolutions. Results for a wide range of processor numbers, model resolutions, and different vendor architectures are presented. Single node performance has been disappointing on RISC-based systems, at least compared to vector processor performance. This is a common complaint, and will require careful re-examination of traditional numerical weather prediction (NWP) model software design and data organization to fully exploit future scalable architectures.

  12. Monte Carlo tests of the Rasch model based on scalability coefficients

    DEFF Research Database (Denmark)

    Christensen, Karl Bang; Kreiner, Svend


    For item responses fitting the Rasch model, the assumptions underlying the Mokken model of double monotonicity are met. This makes non-parametric item response theory a natural starting-point for Rasch item analysis. This paper studies scalability coefficients based on Loevinger's H coefficient that summarizes the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model using p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence...
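
    Loevinger's H compares the observed number of Guttman errors F with the number E expected under item independence, H = 1 - F/E. A small numpy sketch of a scale-level H for dichotomous items (a simplified reading of the coefficient, not the authors' Monte Carlo tests):

```python
import numpy as np

def loevinger_H(X):
    """Scale-level Loevinger H from a 0/1 response matrix (persons x items)."""
    n, k = X.shape
    p = X.mean(axis=0)                  # item popularities
    order = np.argsort(-p)              # sort items from easiest to hardest
    X = X[:, order]
    p = p[order]
    F = E = 0.0
    for i in range(k):
        for j in range(i + 1, k):       # item i is easier than item j
            F += np.sum((X[:, i] == 0) & (X[:, j] == 1))  # Guttman errors
            E += n * (1.0 - p[i]) * p[j]  # errors expected under independence
    return 1.0 - F / E

rng = np.random.default_rng(0)
ability = rng.normal(size=300)
difficulty = np.array([-1.0, 0.0, 1.0, 2.0])
prob = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty)))  # Rasch model
X = (rng.random((300, 4)) < prob).astype(int)
print(round(loevinger_H(X), 3))         # Rasch-generated data should scale well
```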

  13. More scalability, less pain: A simple programming model and its implementation for extreme computing

    International Nuclear Information System (INIS)

    Lusk, E.L.; Pieper, S.C.; Butler, R.M.


    This is the story of a simple programming model, its implementation for extreme computing, and a breakthrough in nuclear physics. A critical issue for the future of high-performance computing is the programming model to use on next-generation architectures. Described here is a promising approach: program very large machines by combining a simplified programming model with a scalable library implementation. The presentation takes the form of a case study in nuclear physics. The chosen application addresses fundamental issues in the origins of our Universe, while the library developed to enable this application on the largest computers may have applications beyond this one.
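
    The "simple programming model plus scalable library" idea boils down to workers pulling independent work units from a shared pool; a single-node caricature with Python's multiprocessing (the real implementations run over MPI on very large machines):

```python
from multiprocessing import Pool

def work_unit(task):
    """An independent unit of work; the library hides who executes it where."""
    lo, hi = task
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    # A pool of independent tasks; workers pull them as they become free.
    tasks = [(i * 10_000, (i + 1) * 10_000) for i in range(32)]
    with Pool(processes=4) as pool:
        partials = pool.map(work_unit, tasks)
    print(sum(partials))
```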

  14. A Scalable Cloud Library Empowering Big Data Management, Diagnosis, and Visualization of Cloud-Resolving Models (United States)

    Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.


    A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25–5 km horizontal grid spacings. The main advantage of the CRM is that it can represent explicit interactive processes between microphysics, radiation, turbulence, surface, and aerosols without subgrid cloud fraction, overlapping, and convective parameterization. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to i) visualize/inter-compare CRM simulations, ii) diagnose key processes for cloud-precipitation formation and intensity, and iii) evaluate against NASA's field campaign data and L1/L2 satellite data products, due to the large data volume (~10 TB) and the complexity of the CRM's physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capability includes: (1) an SCL data model that enables various CRM simulation outputs in NetCDF, including those of the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) models, to be accessed and processed by Hadoop; (2) a parallel NetCDF-to-CSV converter that supports NU-WRF and GCE model outputs; (3) a technique to visualize Hadoop-resident data with IDL; (4) a technique to subset Hadoop-resident data, compliant with the SCL data model, with HIVE or Impala via HUE's web interface; (5) a prototype that enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system, PVFS2 or CephFS, where high-performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located. We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, SCL users can conduct large-domain on-demand tasks without downloading voluminous CRM datasets and various observations from NASA field campaigns and satellites to a…

  15. A framework for scalable parameter estimation of gene circuit models using structural information

    KAUST Repository

    Kuwahara, Hiroyuki


    Motivation: Systematic and scalable parameter estimation is key to constructing complex gene regulatory models and to ultimately facilitating an integrative systems biology approach to quantitatively understanding the molecular mechanisms underpinning gene regulation. Results: Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics, based on three synthetic datasets and one time-series microarray dataset. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher-quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied to modeling of gene circuits, our results suggest that more tailored approaches exploiting domain-specific information may be key to reverse engineering of complex biological systems.
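
    As a toy illustration of the decomposition idea (my construction, not the authors' code), the sketch below treats a two-gene repression circuit: each rate equation is integrated with the other species' trajectory frozen from the previous sweep, and the sweeps are repeated until the mean trajectories converge.

        import numpy as np
        from scipy.integrate import solve_ivp

        t = np.linspace(0.0, 20.0, 201)
        x = np.zeros_like(t)            # previous iterate for gene product 1
        y = np.zeros_like(t)            # previous iterate for gene product 2
        beta, gamma = 2.0, 0.5          # toy production/decay parameters

        for sweep in range(50):         # fixed-point iteration over the circuit
            # integrate dx/dt with y(t) frozen from the previous sweep
            fx = lambda s, u: beta / (1.0 + np.interp(s, t, y)) - gamma * u
            x_new = solve_ivp(fx, (t[0], t[-1]), [0.0], t_eval=t).y[0]
            # integrate dy/dt with the freshly updated x(t)
            fy = lambda s, u: beta / (1.0 + np.interp(s, t, x_new)) - gamma * u
            y_new = solve_ivp(fy, (t[0], t[-1]), [0.0], t_eval=t).y[0]
            done = max(np.max(np.abs(x_new - x)), np.max(np.abs(y_new - y))) < 1e-6
            x, y = x_new, y_new
            if done:
                break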

  16. Accurate geometry scalable complementary metal oxide semiconductor modelling of low-power 90 nm amplifier circuits

    Directory of Open Access Journals (Sweden)

    Apratim Roy


    This paper proposes a technique to accurately estimate the radio frequency behaviour of low-power 90 nm amplifier circuits with geometry-scalable discrete complementary metal oxide semiconductor (CMOS) modelling. Rather than characterising individual elements, the scheme is able to predict the gain, noise and reflection loss of low-noise amplifier (LNA) architectures made with bias, active and passive components. It reduces the number of model parameters by formulating dependent functions in symmetric distributed modelling and shows that simple fitting factors can account for extraneous (interconnect) effects in the LNA structure. Equivalent-circuit model equations based on the physical structure and describing layout parasitics are developed for major amplifier elements like the metal–insulator–metal (MIM) capacitor, spiral symmetric inductor, polysilicon (PS) resistor and bulk RF transistor. The models are geometry-scalable with respect to feature dimensions, i.e. MIM/PS width and length, outer dimension/turns of the planar inductor and channel width/fingers of the active device. Results obtained with the CMOS models are compared against measured literature data for two 1.2 V amplifier circuits, where prediction accuracy for RF parameters (S(21), noise figure, S(11), S(22)) lies within the range of 92–99%.

  17. Feasibility and scalability of spring parameters in distraction enterogenesis in a murine model. (United States)

    Huynh, Nhan; Dubrovsky, Genia; Rouch, Joshua D; Scott, Andrew; Stelzner, Matthias; Shekherdimian, Shant; Dunn, James C Y


    Distraction enterogenesis has been investigated as a novel treatment for short bowel syndrome (SBS). With variable intestinal sizes, it is critical to determine safe, translatable spring characteristics in differently sized animal models before clinical use. Nitinol springs have been shown to lengthen intestines in rats and pigs. Here, we show spring-mediated intestinal lengthening is scalable and feasible in a murine model. A 10-mm nitinol spring was compressed to 3 mm and placed in a 5-mm intestinal segment isolated from continuity in mice. A noncompressed spring placed in a similar fashion served as a control. Spring parameters were proportionally extrapolated from previous spring parameters to accommodate the smaller size of murine intestines. After 2–3 wk, the intestinal segments were examined for size and histology. The experimental group, with spring constants k = 0.2–1.4 N/m, showed intestinal lengthening from 5.0 ± 0.6 mm to 9.5 ± 0.8 mm (P < …). Springs with k ≤ 0.4 N/m can safely yield nearly 2-fold distraction enterogenesis in length and diameter in a scalable mouse model. Not only does this study derive the safe ranges and translatable spring characteristics in a scalable murine model for patients with short bowel syndrome, it also demonstrates the feasibility of spring-mediated intestinal lengthening in a mouse, which can be used to study underlying mechanisms in the future.

  18. Investigation of the blockchain systems’ scalability features using the agent based modelling


    Šulnius, Aleksas


    Investigation of the BlockChain Systems' Scalability Features using the Agent Based Modelling. BlockChain is currently in the spotlight of the whole FinTech industry. This technology is being called revolutionary, groundbreaking, disruptive and even the WEB 3.0. On the other hand, it is widely agreed that the BlockChain is in its early stages of development. In its current state, the BlockChain is in a similar position to that of the Internet in the early nineties. In order for this technology to gain m…

  19. Scalability of the muscular action in a parametric 3D model of the index finger. (United States)

    Sancho-Bru, Joaquín L; Vergara, Margarita; Rodríguez-Cervantes, Pablo-Jesús; Giurintano, David J; Pérez-González, Antonio


    A method for scaling the muscle action is proposed and used to achieve a 3D inverse dynamic model of the human finger with all its components scalable. This method is based on scaling the physiological cross-sectional area (PCSA) in a Hill muscle model. Different anthropometric parameters and maximal grip force data have been measured, and their correlations have been analyzed and used for scaling the PCSA of each muscle. A linear relationship between the normalized PCSA and the product of the length and breadth of the hand has finally been used for scaling, with a slope of 0.01315 cm^-2, with the length and breadth of the hand expressed in centimeters. The parametric muscle model has been included in a parametric finger model previously developed by the authors, and it has been validated by reproducing the results of an experiment in which subjects from different population groups exerted maximal voluntary forces with their index finger in a controlled posture.
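
    Given the reported slope, the scaling step reduces to a one-line calculation. The helper below is a worked example of the published linear fit; treating the fit as a multiplier on a reference PCSA is my reading of the abstract, and the reference value used is a placeholder.

        def scaled_pcsa(pcsa_ref_cm2, hand_length_cm, hand_breadth_cm):
            """Scale a reference muscle PCSA by hand size, using the reported
            fit: normalized PCSA = 0.01315 cm^-2 * length * breadth."""
            return pcsa_ref_cm2 * 0.01315 * hand_length_cm * hand_breadth_cm

        # e.g. an 18 cm long, 8 cm broad hand gives a scale factor of ~1.89
        print(scaled_pcsa(1.0, 18.0, 8.0))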

  20. Perspectives of widely scalable exposure models for multi-hazard global risk assessment (United States)

    Pittore, Massimiliano; Haas, Michael; Wieland, Marc


    Less than 5% of the Earth's surface is urbanized, yet it currently hosts around 7.5 billion people, and these figures are constantly changing as urbanization accelerates. A significant percentage of this population, often in economically developing countries, is exposed to different natural hazards, which further raises the expected economic and social consequences. Global initiatives such as GAR 15 advocate a wide-scale, possibly global perspective on the assessment of risk arising from natural hazards, as a way to increase the risk awareness of decision-makers and stakeholders and to better harmonize large-scale prevention and mitigation actions. Realizing, and even more importantly maintaining, a widely scalable exposure model suited to the assessment of different natural risks would allow large-scale quantitative risk and loss assessment to be carried out more efficiently and reliably. Considering its complexity and extent, such a task is undoubtedly challenging, spanning multiple disciplines and operational contexts. On the other hand, with careful design and an efficient, scalable implementation, such an endeavour would be well within reach and would contribute to significantly improving our understanding of the mechanisms lying behind what we call natural catastrophes. In this contribution we review existing relevant applications, discuss how to tackle the most critical issues, and outline a road map for the implementation of globally scoped exposure models.

  1. Scalability of Several Asynchronous Many-Task Models for In Situ Statistical Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Kolla, Hemanth [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Borghesi, Giulio [Sandia National Lab. (SNL-CA), Livermore, CA (United States)


    This report is a sequel to [PB16], in which we provided a first progress report on research and development towards a scalable, asynchronous many-task, in situ statistical analysis engine using the Legion runtime system. This earlier work included a prototype implementation of a proposed solution, using a proxy mini-application as a surrogate for a full-scale scientific simulation code. The first scalability studies were conducted with the above on modestly-sized experimental clusters. In contrast, in the current work we have integrated our in situ analysis engines with a full-size scientific application (S3D, using the Legion-SPMD model), and have conducted numerical tests on the largest computational platform currently available for DOE science applications. We also provide details regarding the design and development of a light-weight asynchronous collectives library. We describe how this library is utilized within our SPMD-Legion S3D workflow, and compare the data aggregation technique deployed herein to the approach taken within our previous work.
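
    The abstract does not reproduce the aggregation scheme itself, but the core of any distributed descriptive-statistics engine is an order-insensitive merge of partial moments. A standard pairwise update (in the spirit of Pébay's update formulas; a generic sketch, not the Legion engine's code):

        def merge_moments(a, b):
            """Combine two partial summaries (n, mean, M2) of disjoint data
            subsets; M2 is the sum of squared deviations from the mean, so
            the sample variance is M2 / (n - 1)."""
            na, mean_a, m2a = a
            nb, mean_b, m2b = b
            n = na + nb
            delta = mean_b - mean_a
            mean = mean_a + delta * nb / n
            m2 = m2a + m2b + delta * delta * na * nb / n
            return (n, mean, m2)

    Because the merge is associative, partial summaries can be combined in whatever order an asynchronous collective happens to deliver them.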

  2. BISICLES - A Scalable Finite-Volume Adaptive Mesh Refinement Ice Sheet Model (United States)

    Martin, D. F.; Cornford, S. L.; Ranken, D. F.; Le Brocq, A. M.; Gladstone, R. M.; Payne, A. J.; Ng, E. G.; Lipscomb, W. H.


    Understanding the changing behavior of land ice sheets is essential for accurate projection of sea-level change. The dynamics of ice sheets span a wide range of scales. Localized regions such as grounding lines and ice streams require extremely fine (better than 1 km) resolution to correctly capture the dynamics. Resolving such features using a uniform computational mesh would be prohibitively expensive. Conversely, there are large regions where such fine resolution is unnecessary and represents a waste of computational resources. This makes ice sheets a prime candidate for adaptive mesh refinement (AMR), in which finer spatial resolution is added only where needed, enabling the efficient use of computing resources. The Berkeley ISICLES (BISICLES) project is a collaboration among the Lawrence Berkeley and Los Alamos National Laboratories in the U.S. and the University of Bristol in the U.K. We are constructing a high-performance scalable AMR ice sheet model using the Chombo parallel AMR framework. The placement of refined meshes can easily adapt dynamically to follow the changing and evolving features of the ice sheets. We also use a vertically-integrated treatment of the momentum equation based on that of Schoof and Hindmarsh (2010), which permits additional computational efficiency. Using Chombo enables us to take advantage of existing scalable multigrid-based AMR elliptic solvers and PPM-based AMR hyperbolic solvers. Linking to the existing Glimmer-CISM community ice sheet model as an alternative dynamical core allows use of many features of the existing Glimmer-CISM model, including a coupler to CESM. We present results showing the effectiveness of our approach, both for simple benchmark problems which validate our approach, and for application to regional and continental-scale ice-sheet modeling.

  3. A conclusive scalable model for the complete actuation response for IPMC transducers

    International Nuclear Information System (INIS)

    McDaid, A J; Aw, K C; Haemmerle, E; Xie, S Q


    This paper proposes a conclusive scalable model for the complete actuation response of ionic polymer metal composites (IPMCs). This single model is shown to accurately predict the free displacement/velocity and force actuation at varying displacements, with inputs of up to 3 V. An accurate dynamic relationship between force and displacement has been established which can be used to predict the complete actuation response of the IPMC transducer. The model is accurate at large displacements and can also predict the response when interacting with external mechanical systems and loads. This model equips engineers with a useful design tool which enables simple mechanical design, simulation and optimization when integrating IPMC actuators into an application. The response of the IPMC is modelled in three stages: (i) a nonlinear equivalent electrical circuit to predict the current drawn, (ii) an electromechanical coupling term and (iii) a segmented mechanical beam model which includes an electrically induced torque for the polymer. Model parameters are obtained using the dynamic time response, and results are presented demonstrating the correspondence between the model and experimental results over a large operating range. This newly developed model is a large step forward, aiding in the progression of IPMCs towards wide acceptance as replacements for traditional actuators.

  4. A nonlinear scalable model for designing ionic polymer-metal composite actuator systems (United States)

    McDaid, A. J.; Aw, K. C.; Hämmerle, E.; Xie, S. Q.


    This paper proposes a conclusive scalable model for ionic polymer metal composite (IPMC) actuators and their interactions with mechanical systems and external loads. This dynamic, nonlinear model accurately predicts the displacement and force actuation in air for a large range of input voltages. The model addresses all the requirements of a useful design tool for IPMC actuators and is intended for robotic and bio-mimetic (artificial muscle) applications which operate at low frequencies. The response of the IPMC is modeled in three stages: (i) a nonlinear equivalent electrical circuit to predict the current drawn, (ii) an electro-mechanical coupling term, representing the conversion of ion flux to a stress generated in the polymer membrane, and (iii) a mechanical beam model which includes an electrically induced torque for the polymer. Mechanical outputs are given in the rotational coordinate system, as 'tip angle' and 'torque output', to give more practical results for the design and simulation of mechanisms. Model parameters are obtained using the dynamic time response, and results are presented demonstrating excellent correspondence between the model and experimental results. This newly developed model is a large step forward, aiding in the progression of IPMCs towards wide acceptance as replacements for traditional actuators.

  5. Working towards a scalable model of problem-based learning instruction in undergraduate engineering education (United States)

    Mantri, Archana


    The intent of the study presented in this paper is to show that the model of problem-based learning (PBL) can be made scalable by designing the curriculum around a set of open-ended problems (OEPs). A detailed statistical analysis of the data collected to measure the effects of traditional and PBL instruction for three courses in Electronics and Communication Engineering, namely Analog Electronics, Digital Electronics, and Pulse, Digital & Switching Circuits, is presented here. It measures the effects of pedagogy, gender and cognitive styles on the knowledge, skill and attitude of the students. The study was conducted twice, with content designed around the same set of OEPs but with two different trained facilitators for all three courses. The repeatability of the effects of the independent parameters on the dependent parameters is studied and inferences are drawn.

  6. Model of Tryptophan Metabolism, Readily Scalable Using Tissue-specific Gene Expression Data* (United States)

    Stavrum, Anne-Kristin; Heiland, Ines; Schuster, Stefan; Puntervoll, Pål; Ziegler, Mathias


    Tryptophan is utilized in various metabolic routes, including protein synthesis, serotonin and melatonin synthesis, and the kynurenine pathway. Perturbations in these pathways have been associated with neurodegenerative diseases and cancer. Here we present a comprehensive kinetic model of the complex network of human tryptophan metabolism based upon existing kinetic data for all enzymatic conversions and transporters. By integrating tissue-specific expression data, modeling tryptophan metabolism in liver and brain returned intermediate metabolite concentrations in the physiological range. Sensitivity and metabolic control analyses identified the expected key enzymes governing fluxes in the branches of the network. Combining tissue-specific models revealed a considerable impact of the kynurenine pathway in the liver on the concentrations of neuroactive derivatives in the brain. Moreover, using expression data from a cancer study predicted metabolite changes that resembled the experimental observations. We conclude that the combination of the kinetic model with expression data represents a powerful diagnostic tool to predict alterations in tryptophan metabolism. The model is readily scalable to include more tissues, thereby enabling assessment of organismal tryptophan metabolism in health and disease.

  7. A scalable architecture for incremental specification and maintenance of procedural and declarative clinical decision-support knowledge. (United States)

    Hatsek, Avner; Shahar, Yuval; Taieb-Maimon, Meirav; Shalom, Erez; Klimov, Denis; Lunenfeld, Eitan


    Clinical guidelines have been shown to improve the quality of medical care and to reduce its costs. However, most guidelines exist in a free-text representation and, without automation, are not sufficiently accessible to clinicians at the point of care. A prerequisite for automated guideline application is a machine-comprehensible representation of the guidelines. In this study, we designed and implemented a scalable architecture to support medical experts and knowledge engineers in specifying and maintaining the procedural and declarative aspects of clinical guideline knowledge, resulting in a machine-comprehensible representation. The new framework significantly extends our previous work on the Digital electronic Guidelines Library (DeGeL). The current study designed and implemented a graphical framework, Gesher, for the specification of declarative and procedural clinical knowledge. We performed three different experiments to evaluate the functionality and usability of the major aspects of the new framework: specification of procedural clinical knowledge, specification of declarative clinical knowledge, and exploration of a given clinical guideline. The subjects included clinicians and knowledge engineers (overall, 27 participants). The evaluations indicated high levels of completeness and correctness of the guideline specification process by both the clinicians and the knowledge engineers, although the best results, in the case of declarative-knowledge specification, were achieved by teams including a clinician and a knowledge engineer. The usability scores were high as well, although the clinicians' assessment was significantly lower than that of the knowledge engineers.

  9. Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series (United States)

    Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth


    The growing field of large-scale time-domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large data sets. Gaussian processes (GPs) are a popular class of models used for this purpose, but since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small data sets. In this paper, we present a novel method for GP modeling in one dimension where the computational requirements scale linearly with the size of the data set. We demonstrate the method by applying it to simulated and real astronomical time series data sets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators—providing a physical motivation for and interpretation of this choice—but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable GP methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
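
    The open-source Python implementation accompanying this work is the celerite package (a name not stated in the abstract itself). The sketch below shows the intended O(N) usage pattern, assuming celerite's version-1 interface; term names and argument conventions differ between releases.

        import numpy as np
        import celerite
        from celerite import terms

        t = np.sort(np.random.uniform(0.0, 10.0, 200))   # unevenly spaced times
        yerr = 0.1 * np.ones_like(t)
        y = np.sin(t) + yerr * np.random.randn(len(t))

        # covariance: one stochastically driven damped harmonic oscillator
        kernel = terms.SHOTerm(log_S0=0.0, log_Q=np.log(10.0),
                               log_omega0=np.log(2.0 * np.pi))
        gp = celerite.GP(kernel)
        gp.compute(t, yerr)                  # O(N) factorization
        print(gp.log_likelihood(y))          # O(N) likelihood evaluation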

  10. Approaches for scalable modeling and emulation of cyber systems : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.; Rudish, Don W.


    The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of ~10^6 computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems or enhancing existing systems with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with ~10^6 virtualized operating system instances on a high-performance computing cluster - a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers.

  11. Scalable devices

    KAUST Repository

    Krüger, Jens J.


    In computer science in general, and in particular in the fields of high-performance computing and supercomputing, the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales with the size of the problem, i.e., it can not only be used in a very specific setting but is applicable to a wide range of problems, from small scenarios to possibly very large settings. In this spirit, there exist a number of established areas of research on scalability. There are works on scalable algorithms and scalable architectures, but what are scalable devices? In the context of this chapter, we are interested in a whole range of display devices, ranging from small-scale hardware such as tablet computers, pads, smartphones, etc. up to large tiled display walls. What interests us most is not so much the hardware setup but the visualization algorithms behind these display systems, which scale from the average smartphone up to the largest gigapixel display walls.

  12. Scalable approximate policies for Markov decision process models of hospital elective admissions. (United States)

    Zhu, George; Lizotte, Dan; Hoey, Jesse


    To demonstrate the feasibility of using stochastic simulation methods for the solution of a large-scale Markov decision process model of on-line patient admissions scheduling. The problem of admissions scheduling is modeled as a Markov decision process in which the states represent numbers of patients using each of a number of resources. We investigate current state-of-the-art real-time planning methods to compute solutions to this Markov decision process. Due to the complexity of the model, traditional model-based planners are limited in scalability since they require an explicit enumeration of the model dynamics. To overcome this challenge, we apply sample-based planners along with efficient simulation techniques that, given an initial start state, generate an action on demand while avoiding portions of the model that are irrelevant to the start state. We also propose a novel variant of a popular sample-based planner that is particularly well suited to the elective admissions problem. Results show that the stochastic simulation methods allow the problem size to be scaled by a factor of almost 10 in the action space, and exponentially in the state space. We have demonstrated our approach on a problem with 81 actions, four specialities and four treatment patterns, and shown that we can generate solutions that are near-optimal in about 100 s. Sample-based planners are a viable alternative to state-based planners for large Markov decision process models of elective admissions scheduling.
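
    Sample-based planning needs only a generative simulator of the Markov decision process, not an explicit transition matrix. A minimal, generic rollout planner (my illustration, not the authors' variant) looks like this:

        import random

        def rollout_action(state, actions, simulate,
                           horizon=20, n_rollouts=100, gamma=0.99):
            """Pick the action with the best Monte Carlo return estimate;
            simulate(state, action) must return (next_state, reward)."""
            best, best_value = None, float("-inf")
            for a in actions:
                total = 0.0
                for _ in range(n_rollouts):
                    s, r = simulate(state, a)        # first step: action a
                    ret, discount = r, 1.0
                    for _ in range(horizon - 1):     # then a random policy
                        discount *= gamma
                        s, r = simulate(s, random.choice(actions))
                        ret += discount * r
                    total += ret
                value = total / n_rollouts
                if value > best_value:
                    best, best_value = a, value
            return best

    Only states reachable from the start state are ever simulated, which is what lets such planners sidestep enumeration of the full model.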

  13. Robust and scalable 3-D geo-electromagnetic modelling approach using the finite element method (United States)

    Grayver, Alexander V.; Bürg, Markus


    We present a robust and scalable solver for time-harmonic Maxwell's equations for problems with large conductivity contrasts, a wide range of frequencies, stretched grids and locally refined meshes. The solver is part of a fully distributed adaptive 3-D electromagnetic modelling scheme which employs the finite element method and unstructured non-conforming hexahedral meshes for spatial discretization, using the open-source software deal.II. We use the complex-valued electric field formulation and split it into two real-valued equations, for which we utilize an optimal block-diagonal pre-conditioner. Application of this pre-conditioner requires the solution of two smaller real-valued symmetric problems. We solve them by using either a direct solver or the conjugate gradient method pre-conditioned with the recently introduced auxiliary space technique. The auxiliary space pre-conditioner reformulates the original problem in the form of several simpler ones, which are then solved using highly efficient algebraic multigrid methods. In this paper, we consider the magnetotelluric case and verify our numerical scheme by using the COMMEMI 3-D models. Afterwards, we run a series of numerical experiments and demonstrate that the solver converges in a small number of iterations for a wide frequency range and variable problem sizes. The number of iterations is independent of the problem size, but exhibits a mild dependency on frequency. To test the stability of the method on locally refined meshes, we have implemented a residual-based a posteriori error estimator and compared it with uniform mesh refinement for problems of up to 200 million unknowns. We test the scalability of the most time-consuming parts of our code and show that they fulfill the strong scaling assumption as long as each MPI process possesses enough degrees of freedom to alleviate communication overburden. Finally, we refer back to a direct solver-based pre-conditioner and analyse its complexity in time. The results show…
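
    The complex-to-real splitting that the block-diagonal pre-conditioner targets is easy to state concretely: writing the discrete system as (K + iM)x = b with real matrices K and M, one real-equivalent form is the 2x2 block system built below (a schematic illustration with SciPy, not the deal.II-based code).

        import numpy as np
        import scipy.sparse as sp

        def real_equivalent(K, M, b):
            """Rewrite the complex system (K + 1j*M) x = b as
                [ K  -M ] [x_re]   [b_re]
                [ M   K ] [x_im] = [b_im]
            so that real-valued solvers and pre-conditioners apply."""
            A = sp.bmat([[K, -M], [M, K]], format="csr")
            rhs = np.concatenate([b.real, b.imag])
            return A, rhs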

  14. A Scalable and Extensible Earth System Model for Climate Change Science

    Energy Technology Data Exchange (ETDEWEB)

    Gent, Peter; Lamarque, Jean-Francois; Conley, Andrew; Vertenstein, Mariana; Craig, Anthony


    The objective of this award was to build a scalable and extensible Earth System Model that can be used to study climate change science. That objective has been achieved with the public release of the Community Earth System Model, version 1 (CESM1). In particular, the development of the CESM1 atmospheric chemistry component was substantially funded by this award, as was the development of the significantly improved coupler component. The CESM1 allows new climate change science in areas such as future air quality in very large cities, the effects of recovery of the southern hemisphere ozone hole, and effects of runoff from ice melt in the Greenland and Antarctic ice sheets. Results from a whole series of future climate projections using the CESM1 are also freely available via the web from the CMIP5 archive at the Lawrence Livermore National Laboratory. Many research papers using these results have now been published, and will form part of the 5th Assessment Report of the United Nations Intergovernmental Panel on Climate Change, which is to be published late in 2013.

  15. Scalable Nonlinear Solvers for Fully Implicit Coupled Nuclear Fuel Modeling. Final Report

    International Nuclear Information System (INIS)

    Cai, Xiao-Chuan; Yang, Chao; Pernice, Michael


    The focus of the project is on the development and customization of highly scalable domain decomposition based preconditioning techniques for the numerical solution of nonlinear, coupled systems of partial differential equations (PDEs) arising from nuclear fuel simulations. These high-order PDEs represent multiple interacting physical fields (for example, heat conduction, oxygen transport, solid deformation), each modeled by a certain type of Cahn-Hilliard and/or Allen-Cahn equation. Most existing approaches involve a careful splitting of the fields and the use of field-by-field iterations to obtain a solution of the coupled problem. Such approaches have many advantages, such as ease of implementation since only single-field solvers are needed, but also exhibit disadvantages. For example, certain nonlinear interactions between the fields may not be fully captured, and for unsteady problems, stable time integration schemes are difficult to design. In addition, when implemented on large-scale parallel computers, the sequential nature of the field-by-field iterations substantially reduces the parallel efficiency. To overcome these disadvantages, fully coupled approaches have been investigated in order to obtain full-physics simulations.

  16. Salvus: A scalable software suite for full-waveform modelling & inversion (United States)

    Afanasiev, M.; Boehm, C.; van Driel, M.; Krischer, L.; Fichtner, A.


    Full-waveform inversion (FWI), whether at the lab, exploration, or planetary scale, requires the cooperation of five principal components. (1) The geometry of the domain needs to be properly discretized and an initial guess of the model parameters must be projected onto it; (2) large volumes of recorded waveform data must be collected, organized, and processed; (3) synthetic waveform data must be efficiently and accurately computed through complex domains; (4) suitable misfit functions and optimization techniques must be used to relate discrepancies in data space to perturbations in the model; and (5) some form of workflow management must be employed to schedule and run (1)-(4) in the correct order. Each one of these components can represent a formidable technical challenge which redirects energy from the true task at hand: using FWI to extract new information about some underlying continuum. In this presentation we give an overview of the current status of the Salvus software suite, which was introduced to address the challenges listed above. Specifically, we touch on (1) salvus_mesher, which eases the discretization of complex Earth models into hexahedral meshes; (2) salvus_seismo, which integrates with LASIF and ObsPy to streamline the processing and preparation of seismic data; (3) salvus_wave, a high-performance and scalable spectral-element solver capable of simulating waveforms through general unstructured 2- and 3-D domains; and (4) salvus_opt, an optimization toolbox specifically designed for full-waveform inverse problems. Tying everything together, we also discuss (5) salvus_flow: a workflow package designed to orchestrate and manage the rest of the suite. It is our hope that these developments represent a step towards the automation of large-scale seismic waveform inversion, while also lowering the barrier of entry for new applications. We include several examples of Salvus' use in (extra-)planetary seismology, non-destructive testing, and medical…

  17. A scalable and deformable stylized model of the adult human eye for radiation dose assessment. (United States)

    El Basha, Daniel; Furuta, Takuya; Iyer, Siva S R; Bolch, Wesley E


    With recent changes in the recommended annual limit on eye lens exposures to ionizing radiation, there is considerable interest in predictive computational dosimetry models of the human eye and its various ocular structures, including the crystalline lens, ciliary body, cornea, retina, optic nerve, and central retinal artery. Computational eye models to date have been constructed as stylized models, high-resolution voxel models, and polygon mesh models. Their common feature, however, is that they are typically constructed at nominal size and with the roughly spherical shape associated with the emmetropic eye. In this study, we present a geometric eye model that is both scalable (allowing for changes in eye size) and deformable (allowing for changes in eye shape), and that is suitable for use in radiation transport studies of ocular exposures and radiation treatments of eye disease. The model allows continuous and variable changes in eye size (axial lengths from 20 to 26 mm) and eye shape (diopters from -12 to +6). As an explanatory example of its use, five models (emmetropic eyes of small, average, and large size, as well as average-size eyes of -12D and +6D) were constructed and subjected to normally incident beams of monoenergetic electrons and photons, with the resultant energy-dependent dose coefficients presented for both anterior and posterior eye structures. Electron dose coefficients were found to vary with changes to both eye size and shape for the posterior eye structures, while their values for the crystalline lens were found to be sensitive to changes in eye size only. No dependence upon eye size or eye shape was found for photon dose coefficients at energies below 2 MeV. Future applications of the model can include more extensive tabulations of dose coefficients for all ocular structures (not only the lens) as a function of eye size and shape, as well as the assessment of x-ray therapies for ocular disease for patients with non-emmetropic eyes.

  18. Durango: Scalable Synthetic Workload Generation for Extreme-Scale Application Performance Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Carothers, Christopher D. [Rensselaer Polytechnic Institute (RPI); Meredith, Jeremy S. [ORNL; Blanco, Marc [Rensselaer Polytechnic Institute (RPI); Vetter, Jeffrey S. [ORNL; Mubarak, Misbah [Argonne National Laboratory; LaPre, Justin [Rensselaer Polytechnic Institute (RPI); Moore, Shirley V. [ORNL


    Performance modeling of extreme-scale applications on accurate representations of potential architectures is critical for designing next-generation supercomputing systems, because it is impractical to construct prototype systems at scale with new network hardware in order to explore designs and policies. However, these simulations often rely on static application traces that can be difficult to work with because of their size and lack of flexibility to extend or scale up without rerunning the original application. To address this problem, we have created a new technique for generating scalable, flexible workloads from real applications, and we have implemented a prototype, called Durango, that combines a proven analytical performance modeling language, Aspen, with the massively parallel HPC network modeling capabilities of the CODES framework. Our models are compact, parameterized and representative of real applications with computation events. They are not resource-intensive to create and are portable across simulator environments. We demonstrate the utility of Durango by simulating the LULESH application in the CODES simulation environment on several topologies and show that Durango is practical to use for simulation without loss of fidelity, as quantified by simulation metrics. During our validation of Durango's generated communication model of LULESH, we found that the original LULESH miniapp code had a latent bug where the MPI_Waitall operation was used incorrectly. This finding underscores the potential need for a tool such as Durango, beyond its benefits for flexible workload generation and modeling. Additionally, we demonstrate the efficacy of Durango's direct integration approach, which links Aspen into CODES as part of the running network simulation model. Here, Aspen generates the application-level computation timing events, which in turn drive the start of a network communication phase. Results show that Durango's performance scales well when…

  19. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics (United States)

    Wilson, B. D.; Palamuttam, R. S.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; Verma, R.; Waliser, D. E.; Lee, H.


    Remote sensing data and climate model output are multi-dimensional arrays of massive size locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We are developing a lightning-fast Big Data technology called SciSpark based on Apache Spark under a NASA AIST grant (PI Mattmann). Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache Hadoop by 100x in memory and by 10x on disk. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 10 to 1000 compute nodes. This 2nd-generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning-based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. We have implemented a parallel data ingest capability in which the user specifies desired variables (arrays) as several time-sorted lists of URLs (i.e., using OPeNDAP, or local files). The specified variables are partitioned by time/space and then each Spark node pulls its bundle of arrays into memory to begin a computation pipeline. We also investigated the performance of several N-dimensional array libraries (scala breeze, java jblas & netlib-java, and ND4J). We are currently developing science codes using ND4J and studying memory behavior on the JVM. On the pyspark side, many of our science codes already use the numpy and SciPy ecosystems. The talk will cover: the architecture of SciSpark, the design of the scientific RDD (sRDD) data structure, our…
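
    SciSpark itself is built on Spark's Scala/Java API around its scientific RDD, but the processing pattern it enables is plain map-reduce over time-partitioned grids. A hypothetical PySpark sketch of that pattern (the file paths and the variable name "tas" are placeholders):

        import numpy as np
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("climate-metric").getOrCreate()
        sc = spark.sparkContext

        urls = ["/data/tas_%04d.nc" % i for i in range(1000)]  # placeholder paths

        def slice_mean(url):
            from netCDF4 import Dataset          # imported on the workers
            with Dataset(url) as ds:
                return float(np.nanmean(ds.variables["tas"][:]))

        # partition the time axis across the cluster, compute a per-slice
        # metric in memory, then reduce to a single global statistic
        n = len(urls)
        global_mean = (sc.parallelize(urls, 100)
                         .map(slice_mean)
                         .reduce(lambda a, b: a + b)) / n
        print(global_mean)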

  20. Effectiveness of a novel and scalable clinical decision support intervention to improve venous thromboembolism prophylaxis: a quasi-experimental study

    Directory of Open Access Journals (Sweden)

    Umscheid Craig A


    Abstract Background: Venous thromboembolism (VTE) causes morbidity and mortality in hospitalized patients, and regulators and payors are encouraging the use of systems to prevent them. Here, we examine the effect of a computerized clinical decision support (CDS) intervention implemented across a multi-hospital academic health system on VTE prophylaxis and events. Methods: The study included 223,062 inpatients admitted between April 2007 and May 2010, and used administrative and clinical data. The intervention was integrated into a commercial electronic health record (EHR) in an admission orderset used for all admissions. Three time periods were examined: baseline (period 1), the time after implementation of the first CDS intervention (period 2), and the time after a second iteration (period 3). Providers were prompted to accept or decline prophylaxis based on patient risk. Time series analyses examined the impact of the intervention on VTE prophylaxis during periods two and three compared to baseline, and a simple pre-post design examined the impact on VTE events and bleeds secondary to anticoagulation. VTE prophylaxis and events were also examined in a prespecified surgical subset of our population meeting the public reporting criteria defined by the Agency for Healthcare Research and Quality (AHRQ) Patient Safety Indicator (PSI). Results: Unadjusted analyses suggested that "recommended", "any", and "pharmacologic" prophylaxis increased from baseline to the last study period (27.1% to 51.9%, 56.7% to 78.1%, and 42.0% to 54.4%, respectively; p < …). Conclusions: The CDS intervention was associated with an increase in "recommended" and "any" VTE prophylaxis across the multi-hospital academic health system. The intervention was also associated with increased VTE rates in the overall study population, but a subanalysis using only admissions with appropriate POA documentation suggested no change in VTE rates, and a prespecified analysis of a surgical…

  1. Detailed Modeling, Design, and Evaluation of a Scalable Multi-level Checkpointing System

    Energy Technology Data Exchange (ETDEWEB)

    Moody, A T; Bronevetsky, G; Mohror, K M; de Supinski, B R


    High-performance computing (HPC) systems are growing more powerful by utilizing more hardware components. As the system mean time before failure correspondingly drops, applications must checkpoint more frequently to make progress. However, as system memory sizes grow faster than the bandwidth to the parallel file system, the cost of checkpointing begins to dominate application run times. A potential solution to this problem is to use multi-level checkpointing, which employs multiple types of checkpoints with different costs and different levels of resiliency in a single run. The goal is to design light-weight checkpoints to handle the most common failure modes and rely on more expensive checkpoints for less common, but more severe, failures. While this approach is theoretically promising, it has not been fully evaluated in a large-scale, production system context. To this end we have designed a system, called the Scalable Checkpoint/Restart (SCR) library, that writes checkpoints to storage on the compute nodes utilizing RAM, Flash, or disk, in addition to the parallel file system. We present the performance and reliability properties of SCR as well as a probabilistic Markov model that predicts its performance on current and future systems. We show that multi-level checkpointing improves efficiency on existing large-scale systems and that this benefit increases as the system size grows. In particular, we developed low-cost checkpoint schemes that are 100x-1000x faster than the parallel file system and effective against 85% of our system failures. This leads to a gain in machine efficiency of up to 35%, and it reduces the load on the parallel file system by a factor of two on current and future systems.
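
    For intuition about why cheaper checkpoint levels pay off, a standard single-level baseline is the Young/Daly approximation for the optimal compute time between checkpoints, tau ≈ sqrt(2·C·MTBF) for checkpoint cost C. The comparison below (illustrative numbers only, not figures from the report) shows how a 100x cheaper checkpoint slashes the steady-state overhead.

        import math

        def young_daly_interval(cost_s, mtbf_s):
            """Approximate optimal compute time between checkpoints."""
            return math.sqrt(2.0 * cost_s * mtbf_s)

        mtbf = 12 * 3600.0                        # assume a 12 h system MTBF
        for name, cost in [("parallel file system", 600.0),
                           ("node-local storage", 6.0)]:
            tau = young_daly_interval(cost, mtbf)
            overhead = cost / (tau + cost)        # rough checkpointing fraction
            print(f"{name}: tau = {tau / 60:.0f} min, overhead ~ {overhead:.1%}")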

  2. Lumbar model generator: a tool for the automated generation of a parametric scalable model of the lumbar spine. (United States)

    Lavecchia, C E; Espino, D M; Moerman, K M; Tse, K M; Robinson, D; Lee, P V S; Shepherd, D E T


    Low back pain is a major cause of disability and requires the development of new devices to treat pathologies and improve prognosis following surgery. Understanding the effects of new devices on the biomechanics of the spine is crucial in the development of new effective and functional devices. The aim of this study was to develop a preliminary parametric, scalable and anatomically accurate finite-element model of the lumbar spine allowing for the evaluation of the performance of spinal devices. The principal anatomical surfaces of the lumbar spine were first identified, and then accurately fitted from a previous model supplied by S14 Implants (Bordeaux, France). Finally, the reconstructed model was defined according to 17 parameters, which are used to scale the model according to patient dimensions. The developed model, available as a toolbox named the lumbar model generator, enables generating a population of models using subject-specific dimensions obtained from data scans or averaged dimensions evaluated from the correlation analysis. This toolbox allows patient-specific assessment, taking into account individual morphological variation. The models have applications in the design process of new devices, evaluating the biomechanics of the spine and helping clinicians when deciding on treatment strategies.

  3. Model-Based Evaluation Of System Scalability: Bandwidth Analysis For Smartphone-Based Biosensing Applications

    DEFF Research Database (Denmark)

    Patou, François; Madsen, Jan; Dimaki, Maria


    …engineering efforts for scaling a system specification efficaciously. We demonstrate the value of our methodology by investigating a smartphone-based biosensing instrumentation platform. Specifically, we carry out scalability analysis for the system's bandwidth specification: the maximum analog voltage waveform…

  4. The virtual machine (VM) scaler: an infrastructure manager supporting environmental modeling on IaaS clouds (United States)

    Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...

  5. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers (United States)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek


    There is an increasing demand to run environmental models at big scales: simulations over large areas at high resolution. The heterogeneity of available computing hardware, such as multi-core CPUs, GPUs or supercomputers, potentially provides significant computing power to fulfil this demand. However, this requires detailed knowledge of the underlying hardware, of parallel algorithm design, and of the implementation thereof in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge; their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs built-in capabilities to make full use of the available hardware. Developing such a framework, providing understandable code for domain scientists while being runtime-efficient at the same time, poses several challenges for its developers. For example, optimisations can be performed on individual operations or on the whole model, or tasks need to be generated for a well-balanced execution without explicitly knowing the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware whichever combination of model building blocks scientists use. We demonstrate our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework providing spatio-temporal model building blocks, and 2) the parallelisation of about 50 of these building blocks using…

  6. Use of modeling to assess the scalability of Ethernet networks for the ATLAS second level trigger

    CERN Document Server

    Korcyl, K; Dobinson, Robert W; Saka, F


    The second level trigger of LHC's ATLAS experiment has to perform real-time analyses on detector data at 10 GBytes/s. A switching network is required to connect more than a thousand read-out buffers to about a thousand processors that execute the trigger algorithm. We are investigating the use of Ethernet technology to build this large switching network. Ethernet is attractive because of the huge installed base, competitive prices, and the recent introduction of the high-performance Gigabit version. Due to the network's size, it has to be constructed as a layered structure of smaller units. To assess the scalability of such a structure, we evaluated a single switch unit.

  7. Computational Science Research in Support of Petascale Electromagnetic Modeling

    International Nuclear Information System (INIS)

    Lee, L.-Q.


    Computational science research components were vital parts of the SciDAC-1 accelerator project and continue to play a critical role in the newly funded SciDAC-2 accelerator project, the Community Petascale Project for Accelerator Science and Simulation (ComPASS). Recent advances and achievements in the area of computational science research in support of petascale electromagnetic modeling for accelerator design analysis are presented. These include shape determination of superconducting RF cavities, a mesh-based multilevel preconditioner for solving highly indefinite linear systems, a moving window using h- or p-refinement for time-domain short-range wakefield calculations, and improved scalable application I/O.

  8. Engineering three-dimensional stem cell morphogenesis for the development of tissue models and scalable regenerative therapeutics. (United States)

    Kinney, Melissa A; Hookway, Tracy A; Wang, Yun; McDevitt, Todd C


    The physicochemical stem cell microenvironment regulates the delicate balance between self-renewal and differentiation. The three-dimensional assembly of stem cells facilitates cellular interactions that promote morphogenesis, analogous to the multicellular, heterotypic tissue organization that accompanies embryogenesis. Therefore, expansion and differentiation of stem cells as multicellular aggregates provides a controlled platform for studying the biological and engineering principles underlying spatiotemporal morphogenesis and tissue patterning. Moreover, three-dimensional stem cell cultures are amenable to translational screening applications and therapies, which underscores the broad utility of scalable suspension cultures across laboratory and clinical scales. In this review, we discuss stem cell morphogenesis in the context of fundamental biophysical principles, including the three-dimensional modulation of adhesion, mechanics, and molecular transport, and highlight the opportunities to employ stem cell spheroids for tissue modeling, bioprocessing, and regenerative therapies.

  9. Collaboratively Architecting a Scalable and Adaptable Petascale Infrastructure to Support Transdisciplinary Scientific Research for the Australian Earth and Environmental Sciences (United States)

    Wyborn, L. A.; Evans, B. J. K.; Pugh, T.; Lescinsky, D. T.; Foster, C.; Uhlherr, A.


    The National Computational Infrastructure (NCI) at the Australian National University (ANU) is a partnership between CSIRO, ANU, the Bureau of Meteorology (BoM) and Geoscience Australia. Recent investments in a 1.2 PFlop supercomputer (Raijin), ~20 PB of data storage using Lustre filesystems, and a 3000-core high-performance cloud have created a hybrid platform for high-performance computing and data-intensive science, enabling large-scale earth and climate systems modelling and analysis. There are more than 3000 users actively logging in and more than 600 projects on the NCI system. Efficiently scaling and adapting data and software systems to petascale infrastructures requires the collaborative development of an architecture that is designed, programmed and operated to enable users to interactively invoke different forms of in-situ computation over complex and large-scale data collections. NCI makes available major and long-tail data collections from both the government and research sectors based on six themes: 1) weather, climate and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, bio and social. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. Collections are the operational form for data management and access. Similar data types from individual custodians are managed cohesively. Use of international standards for discovery and interoperability allows complex interactions within and between the collections. This design facilitates a transdisciplinary approach to research and enables a shift from small-scale, 'stove-piped' science efforts to large-scale, collaborative systems science. This new and complex infrastructure requires a move to shared, globally trusted software frameworks that can be maintained and updated. Workflow engines become essential and need to integrate provenance, versioning, traceability, repeatability…

  10. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)


    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. The goal of this project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific
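
    The core mechanic of workflow automation over a DAG can be illustrated in a few lines of Python. The sketch below is an assumed, minimal example (not SWAMP itself; task names are invented): dependencies are declared as a mapping, and the standard-library graphlib computes a dependency-respecting execution order.

        # Minimal sketch of DAG-ordered workflow execution (task names invented).
        from graphlib import TopologicalSorter

        workflow = {                      # task -> set of tasks it depends on
            "acquire": set(),
            "calibrate": {"acquire"},
            "simulate": set(),
            "analyze": {"calibrate", "simulate"},
            "visualize": {"analyze"},
        }

        def run(task):
            # A real platform would dispatch this to distributed resources.
            print(f"running {task}")

        for task in TopologicalSorter(workflow).static_order():
            run(task)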

  11. Genetic algorithms and genetic programming for multiscale modeling: Applications in materials science and chemistry and advances in scalability (United States)

    Sastry, Kumara Narasimha


    building blocks in organic chemistry---indicate that MOGAs produce high-quality semiempirical methods that (1) are stable to small perturbations, (2) yield accurate configuration energies on untested and critical excited states, and (3) yield ab initio quality excited-state dynamics. The proposed method enables simulations of more complex systems to realistic, multi-picosecond timescales, well beyond previous attempts or the expectations of human experts, with a 2-3 order-of-magnitude reduction in computational cost. While the two applications use simple evolutionary operators, in order to tackle more complex systems, their scalability and limitations have to be investigated. The second part of the thesis addresses some of the challenges involved with a successful design of genetic algorithms and genetic programming for multiscale modeling. The first issue addressed is the scalability of genetic programming, where facetwise models are built to assess the population size required by GP to ensure adequate supply of raw building blocks and also to ensure accurate decision-making between competing building blocks. This study also presents a design of competent genetic programming, where traditional fixed recombination operators are replaced by building and sampling probabilistic models of promising candidate programs. The proposed scalable GP, called extended compact GP (eCGP), combines the ideas from the extended compact genetic algorithm (eCGA) and probabilistic incremental program evolution (PIPE) and adaptively identifies, propagates and exchanges important subsolutions of a search problem. Results show that eCGP scales cubically with problem size on both GP-easy and GP-hard problems. Finally, facetwise models are developed to explore limitations of scalability of MOGAs, where the scalability of multiobjective algorithms in reliably maintaining Pareto-optimal solutions is addressed. The results show that even when the building blocks are accurately identified, massive multimodality

  12. On the scalability of uncoordinated multiple access for the Internet of Things

    KAUST Repository

    Chisci, Giovanni


    The Internet of things (IoT) will entail a massive number of wireless connections with sporadic traffic patterns. To support the IoT traffic, several technologies are evolving to support low power wide area (LPWA) wireless communications. However, LPWA networks rely on variations of uncoordinated spectrum access, either for data transmissions or scheduling requests, thus imposing a scalability problem on the IoT. This paper presents a novel spatiotemporal model to study the scalability of the ALOHA medium access. In particular, the developed mathematical model relies on stochastic geometry and queueing theory to account for spatial and temporal attributes of the IoT. To this end, the scalability of ALOHA is characterized by the percentile of IoT devices that can be served while keeping their queues stable. The results highlight the scalability problem of ALOHA and quantify the extent to which ALOHA can scale in terms of the number of devices, traffic requirements, and transmission rates.
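
    The stability criterion behind this percentile can be sketched briefly. The following is a simplified illustration (a basic slotted-ALOHA abstraction with invented parameters, not the paper's stochastic-geometry/SINR model): a device's queue is stable when its packet arrival rate is below its per-slot success probability, and scalability is the fraction of devices that remain stable as density grows.

        # Simplified slotted-ALOHA stability check (illustrative parameters).
        import numpy as np

        rng = np.random.default_rng(1)

        def stable_fraction(n_devices, mean_arrival, tx_prob):
            # A tagged transmission succeeds only if none of the other n-1
            # devices transmit in the same slot.
            p_success = tx_prob * (1 - tx_prob) ** (n_devices - 1)
            arrivals = rng.exponential(mean_arrival, n_devices)  # heterogeneous traffic
            return np.mean(arrivals < p_success)  # fraction of stable queues

        for n in (10, 50, 100, 300):
            print(f"n={n:4d}  stable fraction = {stable_fraction(n, 0.002, 0.05):.2f}")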

  13. S-ProvFlow: provenance model and tools for scalable and adaptive analysis pipelines in geoscience. (United States)

    Spinuso, A.; Mihajlovski, A.; Atkinson, M.; Filgueira, R.; Klampanos, I.; Sanchez, S.


    The reproducibility of scientific findings is essential to improve the quality and application of modern data-driven research. Delivering such reproducibility is challenging in the context of systems handling large data-streams with sophisticated computational methods. Similarly, the SKA (Square Kilometer Array) will collect an unprecedented volume of radio-wave signals that will have to be reduced and transformed into derived products, with impact on space-weather research. This highlights the importance of having cross-discipline mechanisms at the producer's side that rely on usable lineage data to support validation and traceability of the new artifacts. To be informative, provenance has to describe each method's abstractions and their implementation as mappings onto distributed platforms and their concurrent execution, capturing relevant internal dependencies at runtime. Producers and intelligent toolsets should be able to exploit the produced provenance, steering real-time monitoring activities and inferring adaptations of methods at runtime. We present a model of provenance (S-PROV) that extends W3C PROV and ProvONE, broadening coverage of provenance to aspects related to distribution, scale-up and steering of stateful streaming operators in analytic pipelines. This is supported by a technical framework for tuneable and actionable lineage, ensuring its relevance to the users' interests, fostering its rapid exploitation to facilitate research practices. By applying concepts such as provenance typing and profiling, users define rules to capture common provenance patterns and activate selective controls based on domain metadata. The traces are recorded in a document store with index optimisation, and a web API serves advanced interactive tools (S-ProvFlow). These allow different classes of consumers to rapidly explore the provenance data. The system, which contributes to the SKA-Link initiative, within technology and

  14. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít


    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  15. A scalable, fully implicit algorithm for the reduced two-field low-β extended MHD model (United States)

    Chacón, L.; Stanier, A.


    We demonstrate a scalable fully implicit algorithm for the two-field low-β extended MHD model. This reduced model describes plasma behavior in the presence of strong guide fields, and is of significant practical impact both in nature and in laboratory plasmas. The model displays strong hyperbolic behavior, as manifested by the presence of fast dispersive waves, which make a fully implicit treatment very challenging. In this study, we employ a Jacobian-free Newton-Krylov nonlinear solver, for which we propose a physics-based preconditioner that renders the linearized set of equations suitable for inversion with multigrid methods. As a result, the algorithm is shown to scale both algorithmically (i.e., the iteration count is insensitive to grid refinement and timestep size) and in parallel in a weak-scaling sense, with the wall-clock time scaling weakly with the number of cores for up to 4096 cores. For a 4096 × 4096 mesh, we demonstrate a wall-clock-time speedup of ∼6700 with respect to explicit algorithms. The model is validated linearly (against linear theory predictions) and nonlinearly (against fully kinetic simulations), demonstrating excellent agreement.
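
    The structure of such a solver can be illustrated with SciPy's Jacobian-free Newton-Krylov interface. The sketch below is an assumed toy (a backward-Euler step for a stiff diffusion equation standing in for the reduced-MHD residual, with an exact sparse factorization standing in for the multigrid preconditioner); it is not the authors' implementation.

        # Toy JFNK implicit step with a preconditioned Krylov inner solve.
        import numpy as np
        from scipy.optimize import newton_krylov
        from scipy.sparse import diags, identity
        from scipy.sparse.linalg import splu, LinearOperator

        n, dt, nu = 256, 1e-2, 1.0           # grid points, timestep, diffusivity
        dx = 1.0 / (n + 1)
        lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2

        u_old = np.sin(np.pi * dx * np.arange(1, n + 1))  # previous timestep

        def residual(u):
            # Backward-Euler residual F(u) = u - u_old - dt*nu*Lap(u).
            return u - u_old - dt * nu * (lap @ u)

        # "Physics-based" preconditioner stand-in: exact inverse of the
        # linearized operator (I - dt*nu*Lap); in practice this inner solve
        # would be handled by multigrid.
        M_lu = splu((identity(n) - dt * nu * lap).tocsc())
        M = LinearOperator((n, n), matvec=M_lu.solve)

        u_new = newton_krylov(residual, u_old, method="lgmres", inner_M=M, f_tol=1e-10)
        print("implicit step done, max |residual| =", np.abs(residual(u_new)).max())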

  16. Implementation of a scalable, web-based, automated clinical decision support risk-prediction tool for chronic kidney disease using C-CDA and application programming interfaces. (United States)

    Samal, Lipika; D'Amore, John D; Bates, David W; Wright, Adam


    Clinical decision support tools for risk prediction are readily available, but typically require workflow interruptions and manual data entry, so they are rarely used. Due to new data interoperability standards for electronic health records (EHRs), other options are available. As a clinical case study, we sought to build a scalable, web-based system that would automate calculation of kidney failure risk and display clinical decision support to users in primary care practices. We developed a single-page application, web server, database, and application programming interface to calculate and display kidney failure risk. Data were extracted from the EHR using the Consolidated Clinical Document Architecture interoperability standard for Continuity of Care Documents (CCDs). EHR users were presented with a noninterruptive alert on the patient's summary screen and a hyperlink to details and recommendations provided through a web application. Clinic schedules and CCDs were retrieved using existing application programming interfaces to the EHR, and we provided a clinical decision support hyperlink to the EHR as a service. We debugged a series of terminology and technical issues. The application was validated with data from 255 patients and subsequently deployed to 10 primary care clinics where, over the course of 1 year, 569 533 CCD documents were processed. We validated the use of interoperable documents and open-source components to develop a low-cost tool for automated clinical decision support. Since Consolidated Clinical Document Architecture-based data extraction extends to any certified EHR, this demonstrates a successful modular approach to clinical decision support. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.
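
    The medication-adherence measure used by systems like this (see also entry 5 below) is simple to compute. Below is a minimal, hypothetical sketch of a medication possession ratio (MPR) threshold test; the field names, dates, and the 0.80 cutoff are assumptions for illustration, not this application's actual code.

        # Hypothetical MPR computation and threshold test.
        from datetime import date

        def mpr(fills, period_start, period_end):
            # MPR = total days' supply dispensed / days in observation period.
            days_supplied = sum(f["days_supply"] for f in fills
                                if period_start <= f["fill_date"] <= period_end)
            period_days = (period_end - period_start).days + 1
            return days_supplied / period_days

        fills = [{"fill_date": date(2024, 1, 1), "days_supply": 30},
                 {"fill_date": date(2024, 2, 15), "days_supply": 30}]
        ratio = mpr(fills, date(2024, 1, 1), date(2024, 3, 31))
        print(f"MPR = {ratio:.2f}; non-adherent: {ratio < 0.80}")  # 0.80: a common cutoff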

  17. A scalable plant-resolving radiative transfer model based on optimized GPU ray tracing (United States)

    A new model for radiative transfer in participating media and its application to complex plant canopies is presented. The goal was to be able to efficiently solve complex canopy-scale radiative transfer problems while also representing sub-plant heterogeneity. In the model, individual leaf surfaces ...

  18. Infopreneurs in service of rural enterprise and economic development: Addressing the critical challenges of scalability and sustainability in support of service extension in developing (rural) economies

    CSIR Research Space (South Africa)

    Van Rensburg, JR


    Years of ongoing research, conducted in a Living Lab fashion, to understand and address the two critical challenges of scalability and sustainability in the utilisation of technology (primarily Information and Communication Technologies, ICTs) as enablers...

  19. Investigating the Role of Biogeochemical Processes in the Northern High Latitudes on Global Climate Feedbacks Using an Efficient Scalable Earth System Model

    Energy Technology Data Exchange (ETDEWEB)

    Jain, Atul K. [Univ. of Illinois, Urbana-Champaign, IL (United States)


    The overall objective of this DOE-funded project is to combine scientific and computational challenges in climate modeling by expanding our understanding of the biogeophysical-biogeochemical processes and their interactions in the northern high latitudes (NHLs) using an earth system modeling (ESM) approach, and by adopting an adaptive parallel runtime system in an ESM to achieve efficient and scalable climate simulations through improved load-balancing algorithms.

  20. A model based message passing approach for flexible and scalable home automation controllers

    Energy Technology Data Exchange (ETDEWEB)

    Bienhaus, D. [INNIAS GmbH und Co. KG, Frankenberg (Germany); David, K.; Klein, N.; Kroll, D. [ComTec Kassel Univ., SE Kassel Univ. (Germany); Heerdegen, F.; Jubeh, R.; Zuendorf, A. [Kassel Univ. (Germany). FG Software Engineering; Hofmann, J. [BSC Computer GmbH, Allendorf (Germany)


    There is a large variety of home automation systems, which are largely proprietary systems from different vendors. In addition, the configuration and administration of home automation systems is frequently a very complex task, especially if more complex functionality is to be achieved. Therefore, an open model for home automation was developed that is especially designed for easy integration of various home automation systems. This solution also provides a simple modeling approach that is inspired by typical home automation components like switches, timers, etc. In addition, a model-based technology to achieve rich functionality and usability was implemented. (orig.)

  1. A scalable community detection algorithm for large graphs using stochastic block models

    KAUST Repository

    Peng, Chengbin


    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of
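
    The quantity at the heart of SBM-based community detection is the model likelihood of a candidate partition. The sketch below is an illustrative, assumed computation (not this paper's algorithm) of the Bernoulli SBM log-likelihood, the objective a greedy or variational detector would maximize.

        # Bernoulli SBM log-likelihood of a community assignment (illustrative).
        import numpy as np
        import networkx as nx

        def sbm_log_likelihood(G, labels, k):
            n = G.number_of_nodes()
            A = nx.to_numpy_array(G, nodelist=range(n))
            Z = np.eye(k)[labels]                  # n x k one-hot memberships
            edges = Z.T @ A @ Z                    # edge counts between blocks
            pairs = np.outer(Z.sum(0), Z.sum(0))   # node pairs between blocks
            np.fill_diagonal(pairs, Z.sum(0) * (Z.sum(0) - 1))
            p = np.clip(edges / np.maximum(pairs, 1), 1e-9, 1 - 1e-9)
            # Factor 0.5 corrects the double counting of the symmetric sums.
            return 0.5 * np.sum(edges * np.log(p) + (pairs - edges) * np.log(1 - p))

        G = nx.stochastic_block_model([50, 50], [[0.3, 0.02], [0.02, 0.3]], seed=1)
        planted = np.array([0] * 50 + [1] * 50)
        print("log-likelihood of planted partition:", sbm_log_likelihood(G, planted, 2))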

  2. Dynamic Scalable Stochastic Petri Net: A Novel Model for Designing and Analysis of Resource Scheduling in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Hua He


    Performance evaluation of cloud computing systems studies the relationships among system configuration, system load, and performance indicators. However, such evaluation is not feasible through measurement or simulation methods alone, due to properties of cloud computing such as large scale, diversity, and dynamics. To overcome those challenges, we present a novel Dynamic Scalable Stochastic Petri Net (DSSPN) to model and analyze the performance of cloud computing systems. DSSPN can not only clearly depict system dynamic behaviors in an intuitive and efficient way but also easily discover performance deficiencies and bottlenecks of systems. In this study, we further elaborate some properties of DSSPN. In addition, we improve fair scheduling by taking into consideration job diversity and resource heterogeneity. To validate the improved algorithm and the applicability of DSSPN, we conduct extensive experiments using the Stochastic Petri Net Package (SPNP). The performance results show that the improved algorithm is better than fair scheduling on some key performance indicators, such as average throughput, response time, and average completion time.

  3. PATHLOGIC-S: a scalable Boolean framework for modelling cellular signalling.

    Directory of Open Access Journals (Sweden)

    Liam G Fearnley

    Curated databases of signal transduction have grown to describe several thousand reactions, and efficient use of these data requires the development of modelling tools to elucidate and explore system properties. We present PATHLOGIC-S, a Boolean specification for a signalling model, with its associated GPL-licensed implementation using integer programming techniques. The PATHLOGIC-S specification has been designed to function on current desktop workstations, and is capable of providing analyses on some of the largest currently available datasets through use of Boolean modelling techniques to generate predictions of stable and semi-stable network states from data in community file formats. PATHLOGIC-S also addresses major problems associated with the presence and modelling of inhibition in Boolean systems, and reduces logical incoherence due to common inhibitory mechanisms in signalling systems. We apply this approach to signal transduction networks including Reactome and two pathways from the Panther Pathways database, and present the results of computations on each along with a discussion of execution time. A software implementation of the framework and model is freely available under a GPL license.
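
    A Boolean signalling update with inhibitor-wins semantics, in the spirit of (but not identical to) the approach described above, can be sketched briefly; the node names and the exact update semantics here are assumptions for illustration.

        # Toy Boolean signalling network, iterated to a stable state.
        def step(state, rules):
            # rules: node -> (activators, inhibitors); a node is ON iff some
            # activator is ON and no inhibitor is ON (inhibitor wins).
            return {n: any(state[a] for a in act) and not any(state[i] for i in inh)
                    for n, (act, inh) in rules.items()}

        rules = {"ligand":      ((), ()),            # input node, clamped below
                 "receptor":    (("ligand",), ()),
                 "kinase":      (("receptor",), ("phosphatase",)),
                 "phosphatase": ((), ()),
                 "output":      (("kinase",), ())}

        state = {n: False for n in rules}
        state["ligand"] = True
        for _ in range(10):                          # small net: fixed point is quick
            new = step(state, rules)
            new["ligand"] = True                     # hold the input ON
            if new == state:
                break
            state = new
        print(state)                                 # 'output' ends up True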

  4. Scalable Bayesian nonparametric regression via a Plackett-Luce model for conditional ranks (United States)

    Gray-Davies, Tristan; Holmes, Chris C.; Caron, François


    We present a novel Bayesian nonparametric regression model for covariates X and continuous response variable Y ∈ ℝ. The model is parametrized in terms of marginal distributions for Y and X and a regression function which tunes the stochastic ordering of the conditional distributions F(y|x). By adopting an approximate composite likelihood approach, we show that the resulting posterior inference can be decoupled for the separate components of the model. This procedure can scale to very large datasets and allows for the use of standard, existing software from Bayesian nonparametric density estimation and Plackett-Luce ranking estimation to be applied. As an illustration, we show an application of our approach to a US Census dataset, with over 1,300,000 data points and more than 100 covariates. PMID:29623150
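
    The Plackett-Luce component can be illustrated through its ranking likelihood. The sketch below is an assumed toy (not the paper's composite-likelihood machinery): it evaluates the log-probability of an observed ranking given latent item scores.

        # Plackett-Luce log-likelihood of a ranking (illustrative).
        import numpy as np

        def plackett_luce_loglik(scores, ranking):
            # ranking lists item indices from best to worst; the model picks
            # each next item with probability proportional to exp(score)
            # among the items not yet placed.
            w = np.exp(scores[np.asarray(ranking)])
            denom = np.cumsum(w[::-1])[::-1]   # denom[k] = sum of w[k:], via reversed cumsum
            return float(np.sum(np.log(w) - np.log(denom)))

        scores = np.array([2.0, 0.5, -1.0])
        print(plackett_luce_loglik(scores, [0, 1, 2]))  # the most probable ordering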

  5. Scalable decision support at the point of care: a substitutable electronic health record app for monitoring medication adherence. (United States)

    Bosl, William; Mandel, Joshua; Jonikas, Magdalena; Ramoni, Rachel Badovinac; Kohane, Isaac S; Mandl, Kenneth D


    of future adherence on a clinician-facing Web interface. The user interface allows the physician to quickly review all medications in a patient record for potential non-adherence problems. A gap-check and current medication possession ratio (MPR) threshold test are applied to all medications in the record to test for current non-adherence. Predictions of 1-year non-adherence are made for certain drug classes for which external data was available. Information is presented graphically to indicate present non-adherence, or predicted non-adherence at one year, based on early prescription fulfillment patterns. The MPR Monitor app is installed in the SMART reference container as the "MPR Monitor", where it is publicly available for use and testing. MPR is an acronym for Medication Possession Ratio, a commonly used measure of adherence to a prescribed medication regimen. This app may be used as an example for creating additional functionality by replacing statistical and display algorithms with new code in a cycle of rapid prototyping and implementation, or as a framework for a new SMART app. The MPR Monitor app is a useful pilot project for monitoring medication adherence. It also provides an example that integrates several open source software components, including the Python-based Django Web framework and Python-based graphics, to build a SMART app that allows complex decision support methods to be encapsulated to enhance EHR functionality.

  6. Chiefly Symmetric: Results on the Scalability of Probabilistic Model Checking for Operating-System Code

    Directory of Open Access Journals (Sweden)

    Marcus Völp


    Reliability in terms of functional properties from the safety-liveness spectrum is an indispensable requirement of low-level operating-system (OS) code. However, with ever more complex and thus less predictable hardware, quantitative and probabilistic guarantees become more and more important. Probabilistic model checking is one technique to automatically obtain these guarantees. First experiences with the automated quantitative analysis of low-level operating-system code confirm the expectation that the naive probabilistic model checking approach rapidly reaches its limits when increasing the number of processes. This paper reports on our work in progress to tackle the state explosion problem for low-level OS code caused by the exponential blow-up of the model size when the number of processes grows. We studied the symmetry reduction approach and carried out our experiments with a simple test-and-test-and-set lock case study as a representative example for a wide range of protocols with natural inter-process dependencies and long-run properties. We quickly see a state-space explosion for scenarios where inter-process dependencies are insignificant. However, once inter-process dependencies dominate the picture, models with a hundred or more processes can be constructed and analysed.

  7. Developmental Impact Analysis of an ICT-Enabled Scalable Healthcare Model in BRICS Economies

    Directory of Open Access Journals (Sweden)

    Dhrubes Biswas


    This article highlights the need for initiating a healthcare business model in a grassroots, emerging-nation context. This article's backdrop is a history of chronic anomalies afflicting the healthcare sector in India and similarly placed BRICS nations. In these countries, a significant percentage of populations remain deprived of basic healthcare facilities and emergency services. Community (primary) care services are being offered by public and private stakeholders as a panacea to the problem. Yet, there is an urgent need for specialized (tertiary) care services at all levels. As a response to this challenge, an all-inclusive health-exchange system (HES) model, which utilizes information communication technology (ICT) to provide solutions in rural India, has been developed. The uniqueness of the model lies in its innovative hub-and-spoke architecture and its emphasis on affordability, accessibility, and availability to the masses. This article describes a developmental impact analysis (DIA) that was used to assess the impact of this model. The article contributes to the knowledge base of readers by making them aware of the healthcare challenges emerging nations are facing and ways to mitigate those challenges using entrepreneurial solutions.

  8. Non parametric, self organizing, scalable modeling of spatiotemporal inputs: the sign language paradigm. (United States)

    Caridakis, G; Karpouzis, K; Drosopoulos, A; Kollias, S


    Modeling and recognizing spatiotemporal, as opposed to static, input is a challenging task since it incorporates input dynamics as part of the problem. The vast majority of existing methods tackle the problem as an extension of the static counterpart, using dynamics, such as input derivatives, at the feature level and adopting artificial intelligence and machine learning techniques originally designed for solving problems that do not specifically address the temporal aspect. The proposed approach deals with the temporal and spatial aspects of the spatiotemporal domain in a discriminative as well as coupled manner. Self Organizing Maps (SOMs) model the spatial aspect of the problem, while Markov models capture its temporal counterpart. Incorporation of adjacency, both in training and classification, enhances the overall architecture with robustness and adaptability. The proposed scheme is validated both theoretically, through an error propagation study, and experimentally, on the recognition of individual signs performed by different native Greek Sign Language users. Results illustrate the architecture's superiority when compared to Hidden Markov Model techniques and variations, both in terms of classification performance and computational cost. Copyright © 2012 Elsevier Ltd. All rights reserved.
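
    The coupling of spatial quantization and temporal modeling can be sketched as follows; this is an assumed, minimal pipeline (winner-only SOM updates, no neighbourhood function, synthetic data), not the paper's implementation.

        # Toy SOM quantization followed by a Markov chain over code words.
        import numpy as np

        rng = np.random.default_rng(0)

        def train_som(codebook, data, epochs=20, lr=0.5):
            # Plain online SOM update (winner only, for brevity).
            for _ in range(epochs):
                for x in data:
                    i = np.argmin(((codebook - x) ** 2).sum(axis=1))  # best unit
                    codebook[i] += lr * (x - codebook[i])
                lr *= 0.9

        def transition_matrix(codes, k):
            # Row-stochastic Markov transitions over code words, smoothed.
            T = np.full((k, k), 1e-3)
            for a, b in zip(codes[:-1], codes[1:]):
                T[a, b] += 1.0
            return T / T.sum(axis=1, keepdims=True)

        traj = np.cumsum(rng.normal(size=(200, 2)), axis=0) * 0.05  # stand-in gesture
        codebook = rng.normal(size=(16, 2))
        train_som(codebook, traj)
        codes = [int(np.argmin(((codebook - x) ** 2).sum(axis=1))) for x in traj]
        T = transition_matrix(codes, 16)
        loglik = sum(np.log(T[a, b]) for a, b in zip(codes[:-1], codes[1:]))
        print(f"sequence log-likelihood under its own chain: {loglik:.1f}")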

  9. Web-video-mining-supported workflow modeling for laparoscopic surgeries. (United States)

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao


    As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the surgical workflow model (SWM) to support their intuitive cooperation with surgeons. Generating a robust and reliable SWM requires a large amount of training data. However, training data collected by physically recording surgery operations is often limited, and data collection is time-consuming and labor-intensive, severely limiting the knowledge scalability of surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost and labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for the robotic cholecystectomy surgery. The generated workflow was evaluated with 4 web-retrieved videos and 4 operation-room-recorded videos. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. The satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising for scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Deep Potential Molecular Dynamics: A Scalable Model with the Accuracy of Quantum Mechanics (United States)

    Zhang, Linfeng; Han, Jiequn; Wang, Han; Car, Roberto; E, Weinan


    We introduce a scheme for molecular simulations, the deep potential molecular dynamics (DPMD) method, based on a many-body potential and interatomic forces generated by a carefully crafted deep neural network trained with ab initio data. The neural network model preserves all the natural symmetries in the problem. It is first-principles based in the sense that there are no ad hoc components aside from the network model. We show that the proposed scheme provides an efficient and accurate protocol in a variety of systems, including bulk materials and molecules. In all these cases, DPMD gives results that are essentially indistinguishable from the original data, at a cost that scales linearly with system size.
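
    The core idea, per-atom energies computed by a small network from symmetry-invariant descriptors, with forces from the negative gradient, can be sketched in plain NumPy. Everything below (the descriptor choice, network size, and finite-difference forces) is an illustrative assumption, not the DPMD architecture.

        # Toy neural-network potential: invariant descriptors -> per-atom energy.
        import numpy as np

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(8, 2)) * 0.1, np.zeros(8)
        W2, b2 = rng.normal(size=8) * 0.1, 0.0

        def descriptors(pos, i, cutoff=3.0):
            # Rotation/translation/permutation-invariant features for atom i:
            # inverse-distance moments over neighbours within the cutoff.
            d = np.linalg.norm(pos - pos[i], axis=1)
            d = d[(d > 0) & (d < cutoff)]
            return np.array([np.sum(1.0 / d), np.sum(1.0 / d**2)])

        def energy(pos):
            # Total energy is a sum of per-atom network outputs.
            return sum(W2 @ np.tanh(W1 @ descriptors(pos, i) + b1) + b2
                       for i in range(len(pos)))

        def forces(pos, eps=1e-5):
            # Central finite differences; autodiff would be used in practice.
            f = np.zeros_like(pos)
            for idx in np.ndindex(pos.shape):
                p, m = pos.copy(), pos.copy()
                p[idx] += eps
                m[idx] -= eps
                f[idx] = -(energy(p) - energy(m)) / (2 * eps)
            return f

        pos = rng.normal(size=(5, 3))
        print("E =", energy(pos), " max|F| =", np.abs(forces(pos)).max())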

  11. Performance Evaluation of the WSN Routing Protocols Scalability

    Directory of Open Access Journals (Sweden)

    L. Alazzawi


    Scalability is an important factor in designing an efficient routing protocol for wireless sensor networks (WSNs). A good routing protocol has to be scalable and adaptive to changes in the network topology. Thus, a scalable protocol should perform well as the network grows larger or as the workload increases. In this paper, routing protocols for wireless sensor networks are simulated and their performance is evaluated to determine their capability for supporting network scalability.

  12. Helicopter model rotor-blade vortex interaction impulsive noise: Scalability and parametric variations (United States)

    Splettstoesser, W. R.; Schultz, K. J.; Boxwell, D. A.; Schmitz, F. H.


    Acoustic data taken in the anechoic Deutsch-Niederlaendischer Windkanal (DNW) have documented the blade vortex interaction (BVI) impulsive noise radiated from a 1/7-scale model main rotor of the AH-1 series helicopter. Averaged model-scale data were compared with averaged full-scale, in-flight acoustic data under similar nondimensional test conditions. At low advance ratios (mu = 0.164 to 0.194), the data scale remarkably well in level and waveform shape, and also duplicate the directivity pattern of BVI impulsive noise. At moderate advance ratios (mu = 0.224 to 0.270), the scaling deteriorates, suggesting that the model-scale rotor is not adequately simulating the full-scale BVI noise; presently, no proven explanation of this discrepancy exists. Carefully performed parametric variations over a complete matrix of testing conditions have shown that BVI noise radiation is highly sensitive to all four governing nondimensional parameters: tip Mach number at hover, advance ratio, local inflow ratio, and thrust coefficient.

  13. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    André Thomas


    We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. Full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.

  15. Architecture Knowledge for Evaluating Scalable Databases (United States)


    Tyree . "Using ontology to support development of software architectures," IBM Systems Journal, 45:4 (2006): 813-825. [18] F. de Almeida, F. Ricardo, Abstract—Designing massively scalable, highly available big data systems is an immense challenge for software architects. applications require distributed systems design principles to create scalable solutions, and the selection and adoption of open source and

  16. Scalable Frequent Subgraph Mining

    KAUST Repository

    Abdelhamid, Ehab


    A graph is a data structure that contains a set of nodes and a set of edges connecting these nodes. Nodes represent objects while edges model relationships among these objects. Graphs are used in various domains due to their ability to model complex relations among several objects. Given an input graph, the Frequent Subgraph Mining (FSM) task finds all subgraphs with frequencies exceeding a given threshold. FSM is crucial for graph analysis, and it is an essential building block in a variety of applications, such as graph clustering and indexing. FSM is computationally expensive, and its existing solutions are extremely slow. Consequently, these solutions are incapable of mining modern large graphs. This slowness is caused by the underlying approaches of these solutions which require finding and storing an excessive amount of subgraph matches. This dissertation proposes a scalable solution for FSM that avoids the limitations of previous work. This solution is composed of four components. The first component is a single-threaded technique which, for each candidate subgraph, needs to find only a minimal number of matches. The second component is a scalable parallel FSM technique that utilizes a novel two-phase approach. The first phase quickly builds an approximate search space, which is then used by the second phase to optimize and balance the workload of the FSM task. The third component focuses on accelerating frequency evaluation, which is a critical step in FSM. To do so, a machine learning model is employed to predict the type of each graph node, and accordingly, an optimized method is selected to evaluate that node. The fourth component focuses on mining dynamic graphs, such as social networks. To this end, an incremental index is maintained during the dynamic updates. Only this index is processed and updated for the majority of graph updates. Consequently, search space is significantly pruned and efficiency is improved. The empirical evaluation shows that the
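
    The frequency-evaluation step that dominates FSM cost can be illustrated with the common minimum-image-based (MNI) support measure. The sketch below is an assumed example using NetworkX induced-subgraph isomorphism on a stand-in graph; it is not the dissertation's system.

        # MNI support of a candidate pattern in a single large graph.
        import networkx as nx
        from networkx.algorithms.isomorphism import GraphMatcher

        def mni_support(G, pattern):
            # For each pattern node, count the distinct graph nodes it maps to
            # across all embeddings, then take the minimum over pattern nodes.
            images = {p: set() for p in pattern.nodes}
            for mapping in GraphMatcher(G, pattern).subgraph_isomorphisms_iter():
                for g_node, p_node in mapping.items():
                    images[p_node].add(g_node)
            return min(len(s) for s in images.values())

        G = nx.barabasi_albert_graph(60, 2, seed=7)   # stand-in "large" graph
        triangle = nx.complete_graph(3)
        print("triangle support:", mni_support(G, triangle))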

  17. An organizational model to support the flexible workflow based on ontology

    International Nuclear Information System (INIS)

    Yuan Feng; Li Xudong; Zhu Guangying; Zhang Xiankun


    Based on ontology theory, the paper proposes an organizational model for flexible workflow. Firstly, the paper describes the conceptual model of the organizational model in an ontology chart, which provides a consistent semantic framework for the organization. Secondly, the paper gives the formalization of the model and describes its six key ontology elements in detail. Finally, the paper discusses in depth how the model supports flexible workflow and shows that the model has the advantages of cross-area, cross-organization and cross-domain operation, multi-process support and scalability. In particular, because the model is represented by ontology, the paper concludes that the model overcomes the lack of shared semantics of traditional models and is, at the same time, more capable and flexible. (authors)

  18. Institutional model for supporting standardization

    International Nuclear Information System (INIS)

    Sanford, M.O.; Jackson, K.J.


    Restoring the nuclear option for utilities requires standardized designs. This premise is widely accepted by all parties involved in ALWR development activities. Achieving and maintaining standardization, however, demands new perspectives on the roles and responsibilities of the various commercial organizations involved in nuclear power. Some effort is needed to define a workable model for a long-term support structure that will allow the benefits of standardization to be realized. The Nuclear Power Oversight Committee (NPOC) has developed a strategic plan that lays out the steps necessary to enable the nuclear industry to be in a position to order a new nuclear power plant by the mid-1990s. One of the key elements of the plan is the "industry commitment to standardization" through design certification, combined license, first-of-a-kind engineering, construction, operation, and maintenance of nuclear power plants. This commitment is a result of the recognition by utilities of the substantial advantages of standardization. Among these are economic benefits, licensing benefits from being treated as one of a family, sharing risks across a broader ownership group, sharing operating experiences, enhancing public safety, and a more coherent market force. Utilities controlled the construction of the past generation of nuclear units in a largely autonomous fashion, procuring equipment and designs from a vendor, engineering services from an architect/engineer, and construction from a construction management firm. This, in addition to forcing the utility to assume virtually all of the risks associated with the project, typically resulted in highly customized designs based on the preferences of the individual utility. However, the benefits of standardization can be realized only through cooperative choices and decision making by the utilities and through working as partners with reactor vendors, architect/engineers, and construction firms.

  19. Complexity scalable motion-compensated temporal filtering (United States)

    Clerckx, Tom; Verdicchio, Fabio; Munteanu, Adrian; Andreopoulos, Yiannis; Devos, Harald; Eeckhaut, Hendrik; Christiaens, Mark; Stroobandt, Dirk; Verkest, Diederik; Schelkens, Peter


    Computer networks and the internet have taken an important role in modern society. Together with their development, the need for digital video transmission over these networks has grown. To cope with the user demands and limitations of the network, compression of the video material has become an important issue. Additionally, many video-applications require flexibility in terms of scalability and complexity (e.g. HD/SD-TV, video-surveillance). Current ITU-T and ISO/IEC video compression standards (MPEG-x, H.26-x) lack efficient support for these types of scalability. Wavelet-based compression techniques have been proposed to tackle this problem, of which the Motion Compensated Temporal Filtering (MCTF)-based architectures couple state-of-the-art performance with full (quality, resolution, and frame-rate) scalability. However, a significant drawback of these architectures is their high complexity. The computational and memory complexity of both spatial domain (SD) MCTF and in-band (IB) MCTF video codec instantiations are examined in this study. Comparisons in terms of complexity versus performance are presented for both types of codecs. The paper indicates how complexity scalability can be achieved in such video-codecs, and analyses some of the trade-offs between complexity and coding performance. Finally, guidelines on how to implement a fully scalable video-codec that incorporates quality, temporal, resolution and complexity scalability are proposed.

  20. Equalizer: a scalable parallel rendering framework. (United States)

    Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato


    Continuing improvements in CPU and GPU performance, as well as increasing multi-core processor and cluster-based parallelism, demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic to support various types of data and visualization applications, and at the same time work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture, the basic API, discuss its advantages over previous approaches, present example configurations and usage scenarios as well as scalability results.

  1. Mathematical models for planning support

    NARCIS (Netherlands)

    L.G. Kroon (Leo); R.A. Zuidwijk (Rob)


    In this paper we describe how computer systems can provide planners with active planning support, when these planners are carrying out their daily planning activities. This means that computer systems actively participate in the planning process by automatically generating plans or

  2. Numeric Analysis for Relationship-Aware Scalable Streaming Scheme

    Directory of Open Access Journals (Sweden)

    Heung Ki Lee


    Frequent packet loss of media data is a critical problem that degrades the quality of streaming services over mobile networks. Packet loss invalidates frames containing lost packets and other related frames at the same time. Indirect loss caused by losing packets decreases the quality of streaming. A scalable streaming service can decrease the amount of dropped multimedia resulting from a single packet loss. Content providers typically divide one large media stream into several layers through a scalable streaming service and then provide each scalable layer to the user depending on the mobile network. Also, a scalable streaming service makes it possible to decode partial multimedia data depending on the relationship between frames and layers. Therefore, a scalable streaming service provides a way to decrease the wasted multimedia data when one packet is lost. However, the hierarchical structure between frames and layers of scalable streams determines the service quality of the scalable streaming service. Even if whole packets of layers are transmitted successfully, they cannot be decoded as a result of the absence of reference frames and layers. Therefore, the complicated relationship between frames and layers in a scalable stream increases the volume of abandoned layers. For providing a high-quality scalable streaming service, we choose a proper relationship between scalable layers as well as the amount of transmitted multimedia data depending on the network situation. We prove that a simple scalable scheme outperforms a complicated scheme in an error-prone network. We suggest an adaptive set-top box (AdaptiveSTB) to lower the dependency between scalable layers in a scalable stream. Also, we provide a numerical model to obtain the indirect loss of multimedia data and apply it to various multimedia streams. Our AdaptiveSTB enhances the quality of a scalable streaming service by removing indirect loss.
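
    Indirect loss can be made concrete with a small dependency computation: a unit is decodable only if it arrived and everything it references is decodable. The sketch below uses invented layer/frame names and assumed dependency semantics, not the paper's numerical model.

        # Illustrative indirect-loss computation over assumed dependencies.
        deps = {                       # unit -> units it references
            "L0-f1": [],
            "L1-f1": ["L0-f1"],
            "L0-f2": ["L0-f1"],
            "L1-f2": ["L1-f1", "L0-f2"],
        }

        def decodable_set(received):
            # Fixed point: keep adding units whose references are decodable.
            ok, changed = set(), True
            while changed:
                changed = False
                for unit, refs in deps.items():
                    if unit not in ok and unit in received and all(r in ok for r in refs):
                        ok.add(unit)
                        changed = True
            return ok

        received = set(deps) - {"L0-f1"}   # only the base layer of frame 1 is lost
        undecodable = sorted(set(deps) - decodable_set(received))
        print("lost directly: ['L0-f1']; undecodable in total:", undecodable)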

  3. Physics modeling support contract: Final report

    Energy Technology Data Exchange (ETDEWEB)


    This document is the final report for the Physics Modeling Support contract between TRW, Inc. and the Lawrence Livermore National Laboratory for fiscal year 1987. It consists of the following projects: TIBER physics modeling and systems code development; advanced blanket modeling task; time-dependent modeling; and free electron maser for TIBER II.

  5. Nurse managers and the sandwich support model. (United States)

    Chisengantambu, Christine; Robinson, Guy M; Evans, Nina


    To explore the interplay between the work of nurse managers and the support they receive and provide. Support is the cornerstone of management practices and is pivotal in employees feeling committed to an organisation. Support for nurse managers is integral to effective health sector management; its characteristics merit more attention. The experiences of 15 nurse managers in rural health institutions in South Australia were explored using structured interviews, observation and document review. Effective decision making requires adequate support, which influences the perceptions and performance of nurse managers, creating an environment in which they feel appreciated and valued. An ideal support system is proposed, the "sandwich support model," to promote effective functioning and desirable patient outcomes via support "from above" and "from below." The need to support nurse managers effectively is crucial to how they function. The sandwich support model can improve management practices, more effectively assisting nurse managers. Organisations should revisit and strengthen support processes for nurse managers to maximize efficiencies. This paper contributes to understanding the importance of supporting nurse managers, identifying the processes used and the type of support offered. It highlights challenges and issues affecting support practices within the health sector. © 2017 John Wiley & Sons Ltd.

  6. Uncertainty modeling and decision support

    International Nuclear Information System (INIS)

    Yager, Ronald R.


    We first formulate the problem of decision making under uncertainty. The importance of the representation of our knowledge about the uncertainty in formulating a decision process is pointed out. We begin with a brief discussion of the case of probabilistic uncertainty. Next, in considerable detail, we discuss the case of decision making under ignorance. For this case the fundamental role of the attitude of the decision maker is noted and its subjective nature is emphasized. Next the case in which a Dempster-Shafer belief structure is used to model our knowledge of the uncertainty is considered. Here we also emphasize the subjective choices the decision maker must make in formulating a decision function. The case in which the uncertainty is represented by a fuzzy measure (monotonic set function) is then investigated. We then return to the Dempster-Shafer belief structure and show its relationship to the fuzzy measure. This relationship allows us to get a deeper understanding of the formulation of the decision function used in the Dempster-Shafer framework. We discuss how this deeper understanding allows a decision analyst to better make the subjective choices needed in the formulation of the decision function.

  7. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

    supports the PostgreSQL dialect of SQL. The prototype implementation is a compiler that translates CVL into SQL and stored procedures. (c) TileHeat is a framework and basic algorithm for partial materialization of hot tile sets for scalable map distribution. The framework predicts future map workloads...

  8. Linking Remote Sensing Data and Energy Balance Models for a Scalable Agriculture Insurance System for sub-Saharan Africa (United States)

    Brown, M. E.; Osgood, D. E.; McCarty, J. L.; Husak, G. J.; Hain, C.; Neigh, C. S. R.


    One of the most immediate and obvious impacts of climate change is on the weather-sensitive agriculture sector. Both local and global impacts on food production will have a negative effect on humanity's ability to meet its growing food demands. Agriculture has become riskier, particularly for farmers in the most vulnerable and food-insecure regions of the world such as East Africa. Smallholders and low-income farmers need better financial tools to reduce risks to food security while enabling productivity increases to meet the needs of a growing population. This paper will describe a recently funded project that brings together climate science, economics, and remote sensing expertise to focus on providing a scalable and sensor-independent remote-sensing-based product that can be used in developing regional rainfed agriculture insurance programs around the world. We will focus our efforts on Ethiopia and Kenya in East Africa and on Senegal and Burkina Faso in West Africa, where there are active index insurance pilots that can test the effectiveness of our remote-sensing-based approach for use in the agriculture insurance industry. The paper will present the overall program, explain links to the insurance industry, and present comparisons of the four remote sensing datasets used to identify drought: the CHIRPS 30-year rainfall data product, the GIMMS 30-year vegetation data product from AVHRR, the ESA ECV 30-year soil moisture data product, and a 15-year MODIS evapotranspiration (ET) dataset. A summary of next year's plans for this project will be presented at the close of the presentation.

  9. Mathematical Modeling Of Life-Support Systems (United States)

    Seshan, Panchalam K.; Ganapathi, Balasubramanian; Jan, Darrell L.; Ferrall, Joseph F.; Rohatgi, Naresh K.


    Generic hierarchical model of life-support system developed to facilitate comparisons of options in design of system. Model represents combinations of interdependent subsystems supporting microbes, plants, fish, and land animals (including humans). Generic model enables rapid configuration of variety of specific life support component models for tradeoff studies culminating in single system design. Enables rapid evaluation of effects of substituting alternate technologies and even entire groups of technologies and subsystems. Used to synthesize and analyze life-support systems ranging from relatively simple, nonregenerative units like aquariums to complex closed-loop systems aboard submarines or spacecraft. Model, called Generic Modular Flow Schematic (GMFS), coded in such chemical-process-simulation languages as Aspen Plus and expressed as three-dimensional spreadsheet.

  10. Scalable Resolution Display Walls

    KAUST Repository

    Leigh, Jason


    This article will describe the progress since 2000 on research and development in 2-D and 3-D scalable resolution display walls that are built from tiling individual lower resolution flat panel displays. The article will describe approaches and trends in display hardware construction, middleware architecture, and user-interaction design. The article will also highlight examples of use cases and the benefits the technology has brought to their respective disciplines. © 1963-2012 IEEE.

  11. Scalable Implementation of Finite Elements by NASA - Implicit (ScIFEi) (United States)

    Warner, James E.; Bomarito, Geoffrey F.; Heber, Gerd; Hochhalter, Jacob D.


    Scalable Implementation of Finite Elements by NASA (ScIFEN) is a parallel finite element analysis code written in C++. ScIFEN is designed to provide scalable solutions to computational mechanics problems. It supports a variety of finite element types, nonlinear material models, and boundary conditions. This report provides an overview of ScIFEi ("Sci-Fi"), the implicit solid mechanics driver within ScIFEN. A description of ScIFEi's capabilities is provided, including an overview of the tools and features that accompany the software as well as a description of the input and output file formats. Results from several problems are included, demonstrating the efficiency and scalability of ScIFEi by comparison to finite element analysis using a commercial code.

  12. The DIAMOND Model of Peace Support Operations

    National Research Council Canada - National Science Library

    Bailey, Peter


    DIAMOND (Diplomatic And Military Operations in a Non-warfighting Domain) is a high-level stochastic simulation developed at Dstl as a key centerpiece within the Peace Support Operations (PSO) 'modelling jigsaw...

  13. Scalable optical quantum computer

    International Nuclear Information System (INIS)

    Manykin, E A; Mel'nichenko, E V


    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare-earth ions Pr3+, regularly located in the lattice of an orthosilicate (Y2SiO5) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as the most promising for quantum computations and communications.

  14. Developing a scalable training model in global mental health: pilot study of a video-assisted training Program for Generalist Clinicians in Rural Nepal. (United States)

    Acharya, B; Tenpa, J; Basnet, M; Hirachan, S; Rimal, P; Choudhury, N; Thapa, P; Citrin, D; Halliday, S; Swar, S B; van Dyke, C; Gauchan, B; Sharma, B; Hung, E; Ekstrand, M


    In low- and middle-income countries, mental health training often consists of sending a few generalist clinicians to specialist-led programs for several weeks. Our objective is to develop and test a video-assisted training model addressing the shortcomings of traditional programs that affect scalability: failing to train all clinicians, disrupting clinical services, and depending on specialists. We implemented the program (video lectures and on-site skills training) for all clinicians at a rural Nepali hospital. We used Wilcoxon signed-rank tests to evaluate pre- and post-test change in knowledge (diagnostic criteria, differential diagnosis, and appropriate treatment). We used a series of 'Yes' or 'No' questions to assess attitudes about mental illness, and utilized exact McNemar's test to analyze the proportions of participants who held a specific belief before and after the training. We assessed acceptability and feasibility through key informant interviews and structured feedback. For each topic except depression, there was a statistically significant increase (Δ) in median scores on knowledge questionnaires: Acute Stress Reaction (Δ = 20, p = 0.03), Depression (Δ = 11, p = 0.12), Grief (Δ = 40, p ...). The training received high ratings; key informants shared examples and views about the training's positive impact and the complementary nature of the program's components. Video lectures and on-site skills training can address the limitations of a conventional training model while being acceptable, feasible, and impactful toward improving knowledge and attitudes of the participants.
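
    The two tests named above are available off the shelf; the sketch below applies them to made-up paired data (the scores and the 2x2 attitude table are invented for illustration).

        # Wilcoxon signed-rank and exact McNemar tests on invented paired data.
        from scipy.stats import wilcoxon
        from statsmodels.stats.contingency_tables import mcnemar

        pre  = [40, 55, 30, 60, 45, 50, 35, 65]   # hypothetical pre-test scores
        post = [60, 70, 55, 80, 50, 75, 60, 85]   # same clinicians after training
        print("Wilcoxon:", wilcoxon(pre, post))

        # 2x2 table of paired attitudes (rows: pre yes/no, cols: post yes/no).
        table = [[12, 3],
                 [9, 6]]
        print("McNemar:", mcnemar(table, exact=True))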

  15. Scalable Boson Sampling with Noisy Components (United States)

    Keating, Tyler; Slote, Joseph; Muraleedharan, Gopikrishnan; Carrasco, Ezequiel; Deutsch, Ivan

    The goal of a Boson Sampler is to efficiently and scalably sample from a probability distribution that cannot be simulated efficiently on a classical computer, thus violating the Extended Church-Turing Thesis (ECTT). To properly falsify the ECTT, the physical device must do so even in the face of realistic noise. Scaling a Boson Sampler requires increasing quantities of a set of fixed-size components (beamsplitters, detectors, etc.), so it is natural to consider noise models that act on each component independently. We show that for any such model, the per-component noise need only decrease polynomially to keep the sampling problem hard. In this sense, Boson Sampling with noise is scalable. However, the same result applies to a number of other quantum information systems, including universal circuit-model quantum computers. Such devices are widely believed to require error correction in order to be truly scalable, even though polynomial reduction of per-component errors would allow them to work without error correction. This belief is consistent with the stricter requirement that error rates should be not just polynomially small, but constant in problem size. We conclude that a more precise definition of scalability with noise is needed to properly evaluate Boson Samplers.

  16. Three-Dimensional Printing of a Scalable Molecular Model and Orbital Kit for Organic Chemistry Teaching and Learning (United States)

    Penny, Matthew R.; Cao, Zi Jing; Patel, Bhaven; dos Santos, Bruno Sil; Asquith, Christopher R. M.; Szulc, Blanka R.; Rao, Zenobia X.; Muwaffak, Zaid; Malkinson, John P.; Hilton, Stephen T.


    Three-dimensional (3D) chemical models are a well-established learning tool used to enhance the understanding of chemical structures by converting two-dimensional paper or screen outputs into realistic three-dimensional objects. While commercial atom model kits are readily available, there is a surprising lack of large molecular and orbital models…

  17. A scalable multi-resolution spatio-temporal model for brain activation and connectivity in fMRI data. (United States)

    Castruccio, Stefano; Ombao, Hernando; Genton, Marc G


    Functional Magnetic Resonance Imaging (fMRI) is a primary modality for studying brain activity. Modeling spatial dependence of imaging data at different spatial scales is one of the main challenges of contemporary neuroimaging, and it could allow for accurate testing for significance in neural activity. The high dimensionality of this type of data (on the order of hundreds of thousands of voxels) poses serious modeling challenges and considerable computational constraints. For the sake of feasibility, standard models typically reduce dimensionality by modeling covariance among regions of interest (ROIs), coarser or larger spatial units, rather than among voxels. However, ignoring spatial dependence at different scales could drastically reduce our ability to detect activation patterns in the brain and hence produce misleading results. We introduce a multi-resolution spatio-temporal model and a computationally efficient methodology to estimate cognitive-control-related activation and whole-brain connectivity. The proposed model allows for testing voxel-specific activation while accounting for non-stationary local spatial dependence within anatomically defined ROIs, as well as regional dependence (between-ROIs). The model is used in a motor-task fMRI study to investigate brain activation and connectivity patterns aimed at identifying associations between these patterns and regaining motor functionality following a stroke. © 2018, The International Biometric Society.
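
    The following toy sketch, with invented sizes and parameters, illustrates the multi-resolution idea only in spirit (it is not the authors' model): a covariance with an exponentially decaying within-ROI part at the voxel scale plus a coarser between-ROI part.

```python
import numpy as np

n_roi, vox_per_roi = 4, 50
n = n_roi * vox_per_roi

# Within-ROI covariance: exponential decay over 1-D voxel "positions"
pos = np.arange(vox_per_roi)
within = np.exp(-np.abs(pos[:, None] - pos[None, :]) / 10.0)

# Between-ROI covariance: small positive-definite regional dependence
idx = np.arange(n_roi)
between = 0.3 * np.exp(-np.abs(idx[:, None] - idx[None, :]))

# Full covariance: block-diagonal within-ROI part plus a between-ROI part
# spread uniformly over the voxels of each pair of ROIs
ones = np.ones((vox_per_roi, vox_per_roi)) / vox_per_roi
cov = np.kron(np.eye(n_roi), within) + np.kron(between, ones)

# Simulate one "image" and confirm the covariance is positive definite
rng = np.random.default_rng(0)
sample = rng.multivariate_normal(np.zeros(n), cov)
print(sample.shape, np.linalg.eigvalsh(cov).min() > 0)
```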

  19. Scalable Performance Measurement and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gamblin, Todd [Univ. of North Carolina, Chapel Hill, NC (United States)]


    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
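
    A minimal sketch of the wavelet-compression idea on a synthetic per-process load signal, using the PyWavelets library; the wavelet, level, and threshold are assumptions for illustration, not Libra's actual choices.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
load = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.1 * rng.standard_normal(1024)

# Multi-level wavelet decomposition of the load signal
coeffs = pywt.wavedec(load, "db4", level=5)
arr, slices = pywt.coeffs_to_array(coeffs)

# Keep only the 5% largest-magnitude coefficients (lossy compression)
thresh = np.quantile(np.abs(arr), 0.95)
arr[np.abs(arr) < thresh] = 0.0

recon = pywt.waverec(pywt.array_to_coeffs(arr, slices, output_format="wavedec"), "db4")
print("relative error:", np.linalg.norm(recon[:1024] - load) / np.linalg.norm(load))
```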

  20. Towards better modelling and decision support

    DEFF Research Database (Denmark)

    Meli, Mattia; Grimm, V; Augusiak, J.


    The potential of ecological models for supporting environmental decision making is increasingly acknowledged. However, it often remains unclear whether a model is realistic and reliable enough. Good practice for developing and testing ecological models has not yet been established. Therefore, TRACE, a general framework for documenting a model's rationale, design, and testing, was recently suggested. Originally TRACE was aimed at documenting good modelling practice. However, the word 'documentation' does not convey TRACE's urgency. Therefore, we re-define TRACE as a tool for planning, performing, and documenting good modelling practice, thereby also linking modellers and model users, for example stakeholders, decision makers, and developers of policies. We report on first experiences in producing TRACE documents. We found that the original idea underlying TRACE was valid, but to make its use more coherent and efficient, an update of its…

  1. Autism Treatment and Family Support Models Review

    Directory of Open Access Journals (Sweden)

    Mehrnoush Esbati


    Autism is a lifelong neurological disability of unknown etiology. The criteria for a diagnosis of autism are based on a triad of impairments: in social interaction, in communication, and a lack of flexibility in thinking and behavior. There are several factors which are likely to contribute to this variation, including the definition of autism and variability in diagnosis amongst professionals; however, anecdotally there appears to have been a steadily increasing demand for services. The purpose of this review of research literature relating to the management and treatment of children with autism is to identify the most effective models of best practice. The review includes comparative evidence supporting a range of treatment and intervention models across the range of individuals included within autism spectrum disorders: psychodynamic treatment/management, which is based on the assumption that autism is the result of emotional damage to the child, usually because of failure to develop a close attachment to parents, especially the mother; biological treatments; educational and behavioral interventions; communication therapies; cost benefits; and supporting families. The research is examined for evidence to support best-practice models in supporting families at the time of diagnosis and assessment, and gives an overview of the nature of comprehensive supports that help reduce stresses that may be experienced by families of a child with autism and promote inclusion in community activities.

  2. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro


    Provided herein are scalable photoreactors that can include a membrane-free water-splitting electrolyzer and systems that can include a plurality of membrane-free water-splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  3. Scalable population estimates using spatial-stream-network (SSN) models, fish density surveys, and national geospatial database frameworks for streams (United States)

    Daniel J. Isaak; Jay M. Ver Hoef; Erin E. Peterson; Dona L. Horan; David E. Nagel


    Population size estimates for stream fishes are important for conservation and management, but sampling costs limit the extent of most estimates to small portions of river networks that encompass 100s–10 000s of linear kilometres. However, the advent of large fish density data sets, spatial-stream-network (SSN) models that benefit from nonindependence among samples,...

  4. Designing a Scalable Fault Tolerance Model for High Performance Computational Chemistry: A Case Study with Coupled Cluster Perturbative Triples. (United States)

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A


    In the past couple of decades, the massive computational power provided by the most modern supercomputers has enabled simulation of higher-order computational chemistry methods previously considered intractable. As system sizes continue to increase, the computational chemistry domain continues this trend using parallel computing with programming models such as the Message Passing Interface (MPI) and Partitioned Global Address Space (PGAS) models such as Global Arrays. The ever-increasing scale of these supercomputers comes at the cost of reduced Mean Time Between Failures (MTBF), currently on the order of days and projected to be on the order of hours for upcoming extreme-scale systems. While traditional disk-based checkpointing methods are ubiquitous for storing intermediate solutions, they suffer from the high overhead of writing and recovering from checkpoints. In practice, checkpointing itself often brings the system down. Clearly, methods beyond checkpointing are imperative to handle the worsening issue of decreasing MTBF. In this paper, we address this challenge by designing and implementing an efficient fault-tolerant version of the Coupled Cluster (CC) method in NWChem, using in-memory data redundancy. We present the challenges associated with our design, including an efficient data storage model, maintenance of at least one consistent data copy, and the recovery process. Our performance evaluation without faults shows that the current design exhibits a small overhead. In the presence of a simulated fault, the proposed design incurs negligible overhead in comparison to the state-of-the-art implementation without faults.

  5. Quantifying XRootD scalability and overheads

    International Nuclear Information System (INIS)

    De Witt, S.; Lahiff, A.


    Both ATLAS and CMS experiments are making increasing use of the XRootD architecture to provide access to data not held locally to where a job is running. The anticipation is that this will lead to fewer job failures, although the efficiency of jobs that would otherwise have failed may be reduced. In this paper we look at the overhead and scalability of the XRootD software system, and the overhead of the infrastructure needed to support remote access to data.

  6. Intratracheal Bleomycin Aerosolization: The Best Route of Administration for a Scalable and Homogeneous Pulmonary Fibrosis Rat Model?

    Directory of Open Access Journals (Sweden)

    Alexandre Robbe


    Idiopathic pulmonary fibrosis (IPF) is a chronic disease with a poor prognosis, characterized by the accumulation of fibrotic tissue in the lungs resulting from a dysfunction in the healing process. In humans, the pathological process is patchy and temporally heterogeneous, and the exact mechanisms remain poorly understood. Different animal models have thus been developed. Among these, intratracheal administration of bleomycin (BLM) is one of the most frequently used methods to induce lung fibrosis in rodents. In the present study, we first characterized histologically the time course of lung alteration in rats submitted to BLM instillation. Heterogeneous damage was observed among lungs, consisting of an inflammatory phase at early time points, followed by a transition to a fibrotic state characterized by an increased myofibroblast number and collagen accumulation. We then compared the instillation and aerosolization routes of BLM administration. The fibrotic process was studied in each pulmonary lobe using a modified Ashcroft scale. The two quantification methods were compared and the interobserver variability evaluated. Both routes induced fibrosis development, as demonstrated by a similar progression of the highest modified Ashcroft score. However, we highlight that aerosolization allows a more homogeneous distribution of lesions among lungs, with higher-grade damage persisting over time.

  7. A scalable satellite-based crop yield mapper: Integrating satellites and crop models for field-scale estimation in India (United States)

    Jain, M.; Singh, B.; Srivastava, A.; Lobell, D. B.


    Food security will be challenged over the upcoming decades due to increased food demand, natural resource degradation, and climate change. In order to identify potential solutions to increase food security in the face of these changes, tools that can rapidly and accurately assess farm productivity are needed. With this aim, we have developed generalizable methods to map crop yields at the field scale using a combination of satellite imagery and crop models, and implement this approach within Google Earth Engine. We use these methods to examine wheat yield trends in Northern India, which provides over 15% of the global wheat supply and where over 80% of farmers rely on wheat as a staple food source. In addition, we identify the extent to which farmers are shifting sow date in response to heat stress, and how well shifting sow date reduces the negative impacts of heat stress on yield. To identify local-level decision-making, we map wheat sow date and yield at a high spatial resolution (30 m) using Landsat satellite imagery from 1980 to the present. This unique dataset allows us to examine sow date decisions at the field scale over 30 years, and by relating these decisions to weather experienced over the same time period, we can identify how farmers learn and adapt cropping decisions based on weather through time.
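
    The sketch below illustrates, on synthetic data, the kind of phenology extraction described above: estimating a green-up date (a sow-date proxy) and peak greenness (a crude yield proxy) from a smoothed NDVI time series. The curve, dates, and threshold are invented; the actual study works with Landsat imagery and crop models inside Google Earth Engine.

```python
import numpy as np

doy = np.arange(300, 460, 8)  # day-of-year of hypothetical satellite scenes
ndvi = 0.15 + 0.55 * np.exp(-((doy - 400) / 35.0) ** 2)  # synthetic wheat curve

# Simple moving-average smoothing to suppress scene-to-scene noise
smooth = np.convolve(ndvi, np.ones(3) / 3, mode="same")

green_up = doy[np.argmax(smooth > 0.3)]  # first date NDVI crosses a threshold
peak = smooth.max()                      # peak greenness as a crude yield proxy
print(f"estimated green-up DOY: {green_up}, peak NDVI: {peak:.2f}")
```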

  8. Modeling, simulation, and fabrication of a fully integrated, acid-stable, scalable solar-driven water-splitting system. (United States)

    Walczak, Karl; Chen, Yikai; Karp, Christoph; Beeman, Jeffrey W; Shaner, Matthew; Spurgeon, Joshua; Sharp, Ian D; Amashukeli, Xenia; West, William; Jin, Jian; Lewis, Nathan S; Xiang, Chengxiang


    A fully integrated solar-driven water-splitting system comprising a WO3/FTO/p+n-Si photoanode, a Pt/TiO2/Ti/n+p-Si photocathode, and a Nafion membrane separator was simulated, assembled, operated in 1.0 M HClO4, and evaluated for performance and safety characteristics under dual-side illumination. A multi-physics model that accounted for the performance of the photoabsorbers and electrocatalysts, ion transport in the solution electrolyte, and gaseous product crossover was first used to define the optimal geometric design space for the system. The photoelectrodes and the membrane separators were then interconnected in a louvered design system configuration, for which the light-absorbing area and the solution-transport pathways were simultaneously optimized. The performance of the photocathode and the photoanode were separately evaluated in a traditional three-electrode photoelectrochemical cell configuration. The photocathode and photoanode were then assembled back-to-back in a tandem configuration to provide sufficient photovoltage to sustain solar-driven unassisted water-splitting. The current-voltage characteristics of the photoelectrodes showed that the low photocurrent density of the photoanode limited the overall solar-to-hydrogen (STH) conversion efficiency due to the large band gap of WO3. A hydrogen-production rate of 0.17 mL/hr and an STH conversion efficiency of 0.24% were observed in a full cell configuration for >20 h with minimal product crossover in the fully operational, intrinsically safe, solar-driven water-splitting system. The solar-to-hydrogen conversion efficiency, ηSTH, calculated using the multiphysics numerical simulation was in excellent agreement with the experimental behavior of the system. The value of ηSTH was entirely limited by the performance of the photoelectrochemical assemblies employed in this study. The louvered design provides a robust platform for implementation of various types of…

  9. Myria: Scalable Analytics as a Service (United States)

    Howe, B.; Halperin, D.; Whitaker, A.


    At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike in databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles, and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine clusters. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.

  10. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

    The prototype implementation is a compiler that translates CVL into SQL and stored procedures, and supports the PostgreSQL dialect of SQL. (c) TileHeat is a framework and basic algorithm for partial materialization of hot tile sets for scalable map distribution. The framework predicts future map workloads… There are indications that the method is scalable for databases that contain millions of records, especially if the target language of the compiler is substituted by a cluster-ready variant of SQL. While several realistic use cases for maps have been implemented in CVL, additional non-geographic data visualization uses… The results for TileHeat show that the prediction method offers a substantial improvement over the current method used by the Danish Geodata Agency. Thus, a large amount of computation can potentially be saved by this public institution, which is responsible for the distribution of government…

  11. Scalable coherent interface: Links to the future (United States)

    Gustavson, D. B.; Kristiansen, E.


    The Scalable Coherent Interface (SCI) was developed to support closely coupled multiprocessors and their caches in a distributed shared-memory environment, but its scalability and the efficient generality of its architecture make it work very well over a wide range of applications. It can replace a local area network for connecting workstations on a campus. It can be a powerful I/O channel for a supercomputer. It can be the processor-cache-memory-I/O connection in a highly parallel computer. It can gather data from enormous particle detectors and distribute it among thousands of processors. It can connect a desktop microprocessor to memory chips a few millimeters away, disk drives a few meters away, and servers a few kilometers away.

  12. Scalable Nonlinear Compact Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Debojyoti [Argonne National Lab. (ANL), Argonne, IL (United States)]; Constantinescu, Emil M. [Univ. of Chicago, IL (United States)]; Brown, Jed [Univ. of Colorado, Boulder, CO (United States)]


    In this work, we focus on compact schemes resulting in tridiagonal systems of equations, specifically the fifth-order CRWENO scheme. We propose a scalable implementation of the nonlinear compact schemes by implementing a parallel tridiagonal solver based on the partitioning/substructuring approach. We use an iterative solver for the reduced system of equations; however, we solve this system to machine zero accuracy to ensure that no parallelization errors are introduced. It is possible to achieve machine-zero convergence with few iterations because of the diagonal dominance of the system. The number of iterations is specified a priori instead of a norm-based exit criterion, and collective communications are avoided. The overall algorithm thus involves only point-to-point communication between neighboring processors. Our implementation of the tridiagonal solver differs from and avoids the drawbacks of past efforts in the following ways: it introduces no parallelization-related approximations (multiprocessor solutions are exactly identical to uniprocessor ones), it involves minimal communication, the mathematical complexity is similar to that of the Thomas algorithm on a single processor, and it does not require any communication and computation scheduling.
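
    For orientation, the serial kernel underlying such solvers is the Thomas algorithm, sketched below on a small diagonally dominant system. The paper's contribution, not shown here, lies in partitioning this elimination across processors and solving the small reduced interface system iteratively.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal, b = diagonal, c = super-diagonal."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):               # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):      # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n = 8
a = np.full(n, -1.0); a[0] = 0.0        # a[0] unused
b = np.full(n, 2.0)                     # diagonally dominant
c = np.full(n, -1.0); c[-1] = 0.0       # c[-1] unused
d = np.ones(n)
x = thomas(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(A @ x, d))            # verify the solve
```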

  13. Decision support models for natural gas dispatch

    Energy Technology Data Exchange (ETDEWEB)

    Chin, L. (Bentley College, Waltham, MA (United States)); Vollmann, T.E. (International Inst. for Management Development, Lausanne (Switzerland))

    A decision support model is presented which gives utilities the tools to manage the purchasing of natural gas supplies in the most cost-effective manner without reducing winter safety stocks below minimum levels. In Business As Usual (BAU) purchasing, quantities vary with the daily forecasts. With Material Requirements Planning (MRP) and Linear Programming (LP), two types of factors are used: seasonal weather and decision rules. Under current practices, the BAU simulation uses the least expensive gas source first, then adds successively more expensive sources. Material Requirements Planning is a production planning technique which uses a parent-item master production schedule (MPS) to determine time-phased requirements for component parts; here, the MPS is the aggregate gas demand forecast for the contract year. This satisfies daily demand with the least expensive gas and uses more expensive gas when necessary; with automatic computation of available-to-promise (ATP) gas, a dispatcher knows daily when extra gas supplies may be ATP. Linear Programming is a mathematical algorithm used to determine optimal allocations of scarce resources to achieve a desired result. The LP model determines optimal daily gas purchase decisions with respect to supply cost minimization. Using these models, it appears possible to raise gross income margins 6 to 10% with minimal additions of customers and no new gas supply.
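
    A toy version of the LP component can be written with scipy.optimize.linprog; the costs, caps, and demands below are invented, and the real model would add storage balance and safety-stock constraints.

```python
import numpy as np
from scipy.optimize import linprog

days = 3
demand = np.array([120.0, 150.0, 90.0])   # hypothetical daily demand
cost = np.array([2.0, 3.5, 6.0])          # $/unit for three gas sources
limit = np.array([80.0, 60.0, 100.0])     # daily cap per source

# Variables x[d, s] = purchase from source s on day d, flattened day-major
c = np.tile(cost, days)
A_eq = np.kron(np.eye(days), np.ones(3))  # per-day totals must equal demand
bounds = [(0.0, limit[s]) for _ in range(days) for s in range(3)]

res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds, method="highs")
print(res.x.reshape(days, 3))             # cheapest-first purchases per day
print("total cost:", res.fun)
```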

  15. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation. (United States)

    Zhang, Meng; Si, Weixin; Qian, Yinling; Sun, Hanqiu; Qin, Jing; Heng, Pheng-Ann


    Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable to dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to efficiently simulate intensive, finely detailed fluids such as smoke with rapidly increasing numbers of vortex filaments and smoke particles. The authors propose a novel vortex-filaments-in-grids scheme in which uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports a trade-off between simulation speed and scale of detail. After computing the whole velocity field, external control can easily be exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of the proposed scheme for visually plausible smoke simulation with macroscopic vortex structures.

  16. Supporting Collaborative Model and Data Service Development and Deployment with DevOps (United States)

    David, O.


    Adopting DevOps practices for model service development and deployment enables a community to engage in service-oriented modeling and data management. The Cloud Services Integration Platform (CSIP), developed over the last 5 years at Colorado State University, provides for collaborative integration of environmental models into scalable model and data services as a micro-services platform with an API and deployment infrastructure. Originally developed to support USDA natural resource applications, it proved suitable for a wider range of applications in the environmental modeling domain. As its scope and visibility extended, it became apparent that community integration and adequate workflow support through the full model development and application cycle drove successful outcomes. DevOps provides best practices, tools, and organizational structures to optimize the transition from model service development to deployment by minimizing (i) the operational burden and (ii) the turnaround time for modelers. We have developed and implemented a methodology to fully automate a suite of applications for application lifecycle management, version control, continuous integration, container management, and container scaling, enabling model and data service developers in various institutions to collaboratively build, run, deploy, test, and scale services within minutes. To date, more than 160 model and data services are available for applications in hydrology (PRMS, Hydrotools, CFA, ESP), water and wind erosion prediction (WEPP, WEPS, RUSLE2), soil quality trends (SCI, STIR), water quality analysis (SWAT-CP, WQM, CFA, AgES-W), stream degradation assessment (SWAT-DEG), hydraulics (cross-section), and grazing management (GRAS). In addition, supporting data services include soil (SSURGO), ecological site (ESIS), climate (CLIGEN, WINDGEN), land management and crop rotations (LMOD), and pesticides (WQM), developed using this workflow automation and decentralized governance.

  17. Programming Scala Scalability = Functional Programming + Objects

    CERN Document Server

    Wampler, Dean


    Learn how to be more productive with Scala, a new multi-paradigm language for the Java Virtual Machine (JVM) that integrates features of both object-oriented and functional programming. With this book, you'll discover why Scala is ideal for highly scalable, component-based applications that support concurrency and distribution. Programming Scala clearly explains the advantages of Scala as a JVM language. You'll learn how to leverage the wealth of Java class libraries to meet the practical needs of enterprise and Internet projects more easily. Packed with code examples, this book provides us

  18. Overload prevention in model supports for wind tunnel model testing

    Directory of Open Access Journals (Sweden)



    Preventing overloads in wind tunnel model supports is crucial to the integrity of the tested system. Results can only be interpreted as valid if the model support, conventionally called a sting, remains sufficiently rigid during testing. Modeling and preliminary calculation can only give an estimate of the sting's behavior under known forces and moments, but unpredictable, aerodynamically caused model behavior can sometimes cause large transient overloads that cannot be taken into account at the sting design phase. To ensure model integrity and data validity, an analog fast protection circuit was designed and tested. A post-factum analysis was carried out to optimize the overload detection, and a short discussion of aeroelastic phenomena is included to show why such a detector has to be very fast. The last refinement of the concept consists of a fast detector coupled with a slightly slower one, to differentiate between transient overloads that decay in time and those that result from unwanted aeroelastic phenomena. The decision to stop or continue the test is therefore taken conservatively, preserving data and model integrity while allowing normal startup loads and transients to manifest.
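
    A conceptual sketch of such a two-speed detector, with invented thresholds and time constants: a hard limit trips immediately on any sample, while a smoothed estimate trips only on sustained loads, so a decaying startup transient is tolerated.

```python
import numpy as np

HARD_LIMIT = 100.0   # immediate trip level (assumed units)
SOFT_LIMIT = 60.0    # sustained-load trip level
ALPHA = 0.05         # smoothing factor of the slow detector

def monitor(load_samples):
    smoothed = 0.0
    for k, f in enumerate(load_samples):
        smoothed = (1 - ALPHA) * smoothed + ALPHA * abs(f)
        if abs(f) > HARD_LIMIT:
            return k, "fast trip: instantaneous overload"
        if smoothed > SOFT_LIMIT:
            return k, "slow trip: sustained load, possible aeroelastic onset"
    return None, "test completed within limits"

t = np.arange(500)
startup = 70 * np.exp(-t / 30.0) + 20.0   # decaying transient: tolerated
flutter = 20.0 + 50.0 * (t > 100)         # sustained load: must trip
print(monitor(startup))
print(monitor(flutter))
```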

  19. Scalable cloud without dedicated storage (United States)

    Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.


    We present a prototype of a scalable computing cloud. It is intended to be deployed on the basis of a cluster without separate dedicated storage. The dedicated storage is replaced by distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources as well as improves fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with relatively low initial and maintenance costs. The solution is built from open-source components such as OpenStack, Ceph, etc.

  20. Scalable group level probabilistic sparse factor analysis

    DEFF Research Database (Denmark)

    Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard


    Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group-level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component pruning using automatic relevance determination (ARD), and subject-specific heteroscedastic spatial noise modeling. For task-based and resting-state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling…

  1. Scalable Simulation of Electromagnetic Hybrid Codes

    International Nuclear Information System (INIS)

    Perumalla, Kalyan S.; Fujimoto, Richard; Karimabadi, Dr. Homa


    New discrete-event formulations of physics simulation models are emerging that can outperform models based on traditional time-stepped techniques. Detailed simulation of the Earth's magnetosphere, for example, requires execution of sub-models that operate at widely differing timescales. In contrast to time-stepped simulation, which requires tightly coupled updates to the entire system state at regular time intervals, the new discrete-event simulation (DES) approaches help evolve the states of sub-models on relatively independent timescales. However, parallel execution of DES-based models raises challenges with respect to their scalability and performance. One of the key challenges is to improve the computation granularity to offset synchronization and communication overheads within and across processors. Our previous work was limited in scalability and runtime performance due to the parallelization challenges. Here we report on optimizations we performed on DES-based plasma simulation models to improve parallel performance. The net result is the capability to simulate hybrid particle-in-cell (PIC) models with over 2 billion ion particles using 512 processors on supercomputing platforms.
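
    For illustration only (not the authors' code), a minimal discrete-event kernel shows the idea: sub-models advance on their own timescales by scheduling future events in a priority queue, instead of sharing a global fixed time step.

```python
import heapq

def simulate(end_time):
    events = []   # (time, sequence, action); sequence breaks ties deterministically
    seq = 0

    def schedule(t, action):
        nonlocal seq
        heapq.heappush(events, (t, seq, action))
        seq += 1

    # Two "sub-models" that update at very different rates
    def fast(t):
        if t + 0.1 <= end_time:
            schedule(t + 0.1, fast)

    def slow(t):
        print(f"slow update at t = {t:.1f}")
        if t + 5.0 <= end_time:
            schedule(t + 5.0, slow)

    schedule(0.0, fast)
    schedule(0.0, slow)
    while events:                      # always process the earliest event next
        t, _, action = heapq.heappop(events)
        action(t)

simulate(10.0)
```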

  2. Scalable Video Coding

    NARCIS (Netherlands)

    Choupani, R.


    With the rapid improvements in digital communication technologies, distributing high-definition visual information has become more widespread. However, the available technologies were not sufficient to support the rising demand for high-definition video. This situation is further complicated when

  3. Scalable shared-memory multiprocessing

    CERN Document Server

    Lenoski, Daniel E


    Dr. Lenoski and Dr. Weber have experience with leading-edge research and practical issues involved in implementing large-scale parallel systems. They were key contributors to the architecture and design of the DASH multiprocessor. Currently, they are involved with commercializing scalable shared-memory technology.

  4. High quality scalable audio codec (United States)

    Kim, Miyoung; Oh, Eunmi; Kim, JungHoe


    The MPEG-4 BSAC (Bit Sliced Arithmetic Coding) codec is a fine-grain scalable codec with a layered structure consisting of a single base layer and several enhancement layers. The scalable functionality allows us to decode subsets of a full bitstream and to deliver audio content adaptively under heterogeneous network and device conditions and user interaction. This bitrate scalability comes at the cost of high-frequency components: the decoded output of BSAC sounds muffled as fewer layers are transmitted under degraded network and device conditions. The goal of the proposed technology is to compensate for the missing high-frequency components while maintaining the fine-grain scalability of BSAC. This paper describes the integration of the SBR (Spectral Band Replication) tool into the existing MPEG-4 BSAC codec. Listening test results show that the sound quality of BSAC is improved when the full bitstream is truncated to lower bitrates, and this quality is comparable to that of BSAC using the SBR tool without truncation at the same bitrate.
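
    A toy illustration of the band-replication idea (deliberately much simpler than MPEG-4 SBR): the missing high band of a truncated spectrum is approximated by copying the low band upward and scaling it with a coarse energy hint; the signal and the envelope value are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
spec = np.fft.rfft(x)        # 513 frequency bins

keep = 256                   # bins a truncated bitstream might actually carry
low = spec[:keep]
envelope = 0.5               # coarse transmitted energy hint (assumed)

rebuilt = np.zeros_like(spec)
rebuilt[:keep] = low
rebuilt[keep:2 * keep] = envelope * low   # replicate the low band upward

y = np.fft.irfft(rebuilt, 1024)           # time-domain signal with synthetic highs
print("low-band energy:", float(np.sum(np.abs(low) ** 2)))
print("rebuilt energy: ", float(np.sum(np.abs(rebuilt) ** 2)))
```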

  5. Scalability study of solid xenon

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, J.; Cease, H.; Jaskierny, W. F.; Markley, D.; Pahlka, R. B.; Balakishiyeva, D.; Saab, T.; Filipenko, M.


    We report a demonstration of the scalability of optically transparent xenon in the solid phase for use as a particle detector above the kilogram scale. We employed a cryostat cooled by liquid nitrogen combined with a xenon purification and chiller system. A modified Bridgman technique reproducibly produces large-scale optically transparent solid xenon.

  6. Applications for the scalable coherent interface (United States)

    Gustavson, David B.


    IEEE P1596, the Scalable Coherent Interface (formerly known as SuperBus), is based on experience gained while developing Fastbus (ANSI/IEEE 960-1986, IEC 935), Futurebus (IEEE P896.x), and other modern high-performance buses. SCI goals include a minimum bandwidth of 1 GByte/s per processor in multiprocessor systems with thousands of processors; efficient support of a coherent distributed-cache image of distributed shared memory; support for bridges which interface to existing or future buses; and support for inexpensive small rings as well as for general switched interconnections like Banyan, Omega, or crossbar networks. This paper reports on the status of the work in progress and suggests some applications in data acquisition and physics.

  7. Building new computational models to support health behavior change and maintenance: new opportunities in behavioral research. (United States)

    Spruijt-Metz, Donna; Hekler, Eric; Saranummi, Niilo; Intille, Stephen; Korhonen, Ilkka; Nilsen, Wendy; Rivera, Daniel E; Spring, Bonnie; Michie, Susan; Asch, David A; Sanna, Alberto; Salcedo, Vicente Traver; Kukafka, Rita; Pavel, Misha


    Adverse and suboptimal health behaviors and habits are responsible for approximately 40% of preventable deaths, in addition to their unfavorable effects on quality of life and economics. Our current understanding of human behavior is largely based on static "snapshots" of human behavior, rather than ongoing, dynamic feedback loops of behavior in response to ever-changing biological, social, personal, and environmental states. This paper first discusses how new technologies (i.e., mobile sensors, smartphones, ubiquitous computing, and cloud-enabled processing/computing) and emerging systems-modeling techniques enable the development of new, dynamic, and empirical models of human behavior that could facilitate just-in-time adaptive, scalable interventions. The paper then describes concrete steps toward the creation of robust dynamic mathematical models of behavior, including: (1) establishing "gold standard" measures; (2) the creation of a behavioral ontology for shared language and understanding tools that enable dynamic theorizing across disciplines; (3) the development of data-sharing resources; and (4) facilitating improved sharing of mathematical models and tools to support rapid aggregation of the models. We conclude with a discussion of what might be incorporated into a "knowledge commons," which could help to bring together these disparate activities into a unified system and structure for organizing knowledge about behavior.

  8. Mutual Support: A Model of Participatory Support by and for People with Learning Difficulties (United States)

    Keyes, Sarah E.; Brandon, Toby


    Mutual Support, a model of peer support by and for people with learning difficulties, was constructed through a participatory research process. The research focussed on individual narratives from people with learning difficulties. These narratives were then brought together to form a collective model of support. This paper outlines the detailed…

  9. Cooperative Scalable Moving Continuous Query Processing

    DEFF Research Database (Denmark)

    Li, Xiaohui; Karras, Panagiotis; Jensen, Christian S.


    A range of applications call for a mobile client to continuously monitor others in close proximity. Past research on such problems has covered two extremes: it has offered totally centralized solutions, where a server takes care of all queries, and totally distributed solutions, in which there is no central authority at all. Unfortunately, neither of these solutions scales to intensive moving-object tracking applications, where each client poses a query. In this paper, we formulate the moving continuous query (MCQ) problem and propose a balanced model where servers cooperatively take care… and computation cost for both servers and clients. An experimental study demonstrates that our approaches offer better scalability than competitors…

  10. On Formal Methods for Collective Adaptive System Engineering. Scalable Approximated, Spatial Analysis Techniques. Extended Abstract.

    Directory of Open Access Journals (Sweden)

    Diego Latella


    In this extended abstract, a view on the role of Formal Methods in System Engineering is briefly presented. Then two examples of useful analysis techniques based on solid mathematical theories are discussed, as well as the software tools which have been built for supporting such techniques. The first technique is Scalable Approximated Population DTMC Model-checking. The second one is Spatial Model-checking for Closure Spaces. Both techniques have been developed in the context of the EU-funded project QUANTICOL.

  11. A system to build distributed multivariate models and manage disparate data sharing policies: implementation in the scalable national network for effectiveness research. (United States)

    Meeker, Daniella; Jiang, Xiaoqian; Matheny, Michael E; Farcas, Claudiu; D'Arcy, Michel; Pearlman, Laura; Nookala, Lavanya; Day, Michele E; Kim, Katherine K; Kim, Hyeoneui; Boxwala, Aziz; El-Kareh, Robert; Kuo, Grace M; Resnic, Frederic S; Kesselman, Carl; Ohno-Machado, Lucila


    Centralized and federated models for sharing data in research networks currently exist. To build multivariate data analysis for centralized networks, transfer of patient-level data to a central computation resource is necessary. The authors implemented distributed multivariate models for federated networks in which patient-level data is kept at each site and data exchange policies are managed in a study-centric manner. The objective was to implement infrastructure that supports the functionality of some existing research networks (e.g., cohort discovery, workflow management, and estimation of multivariate analytic models on centralized data) while adding additional important new features, such as algorithms for distributed iterative multivariate models, a graphical interface for multivariate model specification, synchronous and asynchronous response to network queries, investigator-initiated studies, and study-based control of staff, protocols, and data sharing policies. Based on the requirements gathered from statisticians, administrators, and investigators from multiple institutions, the authors developed infrastructure and tools to support multisite comparative effectiveness studies using web services for multivariate statistical estimation in the SCANNER federated network. The authors implemented massively parallel (map-reduce) computation methods and a new policy management system to enable each study initiated by network participants to define the ways in which data may be processed, managed, queried, and shared. The authors illustrated the use of these systems among institutions with highly different policies and operating under different state laws. Federated research networks need not limit distributed query functionality to count queries, cohort discovery, or independently estimated analytic models. Multivariate analyses can be efficiently and securely conducted without patient-level data transport, allowing institutions with strict local data storage
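
    The sketch below, with invented data, shows the core of a distributed iterative multivariate fit of the kind described: each site computes the gradient of a logistic-regression loss on its own records, and only aggregated gradients, never patient-level data, cross site boundaries.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_site(n):
    """Generate one site's private data (synthetic stand-in for patient records)."""
    X = rng.standard_normal((n, 3))
    p = 1 / (1 + np.exp(-(X @ np.array([1.0, -2.0, 0.5]))))
    return X, (rng.random(n) < p).astype(float)

sites = [make_site(n) for n in (200, 350, 120)]   # three hypothetical sites

def local_gradient(beta, X, y):
    p = 1 / (1 + np.exp(-X @ beta))
    return X.T @ (p - y)          # gradient of the negative log-likelihood

beta = np.zeros(3)
n_total = sum(len(y) for _, y in sites)
for step in range(500):
    # Only these summary gradients leave each site
    total = sum(local_gradient(beta, X, y) for X, y in sites)
    beta -= 0.05 * total / n_total
print("estimated coefficients:", beta.round(2))
```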

  12. Responsive, Flexible and Scalable Broader Impacts (Invited) (United States)

    Decharon, A.; Companion, C.; Steinman, M.


    In many educator professional development workshops, scientists present content in a slideshow-type format and field questions afterwards. Drawbacks of this approach include: inability to begin the lecture with content that is responsive to audience needs; lack of flexible access to specific material within the linear presentation; and “Q&A” sessions are not easily scalable to broader audiences. Often this type of traditional interaction provides little direct benefit to the scientists. The Centers for Ocean Sciences Education Excellence - Ocean Systems (COSEE-OS) applies the technique of concept mapping with demonstrated effectiveness in helping scientists and educators “get on the same page” (deCharon et al., 2009). A key aspect is scientist professional development geared towards improving face-to-face and online communication with non-scientists. COSEE-OS promotes scientist-educator collaboration, tests the application of scientist-educator maps in new contexts through webinars, and is piloting the expansion of maps as long-lived resources for the broader community. Collaboration - COSEE-OS has developed and tested a workshop model bringing scientists and educators together in a peer-oriented process, often clarifying common misconceptions. Scientist-educator teams develop online concept maps that are hyperlinked to “assets” (i.e., images, videos, news) and are responsive to the needs of non-scientist audiences. In workshop evaluations, 91% of educators said that the process of concept mapping helped them think through science topics and 89% said that concept mapping helped build a bridge of communication with scientists (n=53). Application - After developing a concept map, with COSEE-OS staff assistance, scientists are invited to give webinar presentations that include live “Q&A” sessions. The webinars extend the reach of scientist-created concept maps to new contexts, both geographically and topically (e.g., oil spill), with a relatively small

  13. Using MPI to Implement Scalable Libraries (United States)

    Lusk, Ewing

    MPI is an instantiation of a general-purpose programming model, and high-performance implementations of the MPI standard have provided scalability for a wide range of applications. Ease of use was not an explicit goal of the MPI design process, which emphasized completeness, portability, and performance. Thus it is not surprising that MPI is occasionally criticized for being inconvenient to use and thus a drag on software developer productivity. One approach to the productivity issue is to use MPI to implement simpler programming models. Such models may limit the range of parallel algorithms that can be expressed, yet provide sufficient generality to benefit a significant number of applications, even from different domains. We illustrate this concept with the ADLB (Asynchronous, Dynamic Load-Balancing) library, which can be used to express manager/worker algorithms in such a way that their execution is scalable, even on the largest machines. ADLB makes sophisticated use of MPI functionality while providing an extremely simple API for the application programmer. We describe it in the context of solving Sudoku puzzles and a nuclear physics Monte Carlo application currently running on tens of thousands of processors.
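
    A minimal manager/worker pattern in the spirit of ADLB, written directly against mpi4py (ADLB's own API is richer and balances load more cleverly); it assumes at least as many tasks as workers, and runs with, e.g., mpiexec -n 4 python sketch.py.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
TAG_WORK, TAG_STOP = 1, 2

if rank == 0:                                    # manager
    tasks = list(range(20))
    results, active = [], size - 1
    for w in range(1, size):                     # prime each worker
        comm.send(tasks.pop(), dest=w, tag=TAG_WORK)
    while active:
        result, source = comm.recv(source=MPI.ANY_SOURCE, tag=TAG_WORK)
        results.append(result)
        if tasks:
            comm.send(tasks.pop(), dest=source, tag=TAG_WORK)
        else:
            comm.send(None, dest=source, tag=TAG_STOP)
            active -= 1
    print("sum of squares:", sum(results))
else:                                            # worker
    status = MPI.Status()
    while True:
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        comm.send((task * task, rank), dest=0, tag=TAG_WORK)
```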

  14. Support Center for Regulatory Atmospheric Modeling (SCRAM) (United States)

    This technical site provides access to air quality models (including computer code, input data, and model processors) and other mathematical simulation techniques used in assessing air emissions control strategies and source impacts.

  15. Examining the Support Peer Supporters Provide Using Structural Equation Modeling: Nondirective and Directive Support in Diabetes Management. (United States)

    Kowitt, Sarah D; Ayala, Guadalupe X; Cherrington, Andrea L; Horton, Lucy A; Safford, Monika M; Soto, Sandra; Tang, Tricia S; Fisher, Edwin B


    Little research has examined the characteristics of peer support. Pertinent to such examination may be the distinction between nondirective support (accepting recipients' feelings and cooperating with their plans) and directive support (prescribing 'correct' choices and feelings). In a peer support program for individuals with diabetes, this study examined (a) whether the distinction between nondirective and directive support was reflected in participants' ratings of support provided by peer supporters and (b) how nondirective and directive support were related to depressive symptoms, diabetes distress, and hemoglobin A1c (HbA1c). Three hundred fourteen participants with type 2 diabetes provided data on depressive symptoms, diabetes distress, and HbA1c before and after a diabetes management intervention delivered by peer supporters. At post-intervention, participants reported the extent to which the support provided by peer supporters was nondirective or directive. Confirmatory factor analysis (CFA), correlation analyses, and structural equation modeling examined the relationships among reports of nondirective and directive support, depressive symptoms, diabetes distress, and measured HbA1c. CFA confirmed the factor structure distinguishing between nondirective and directive support in participants' reports of support delivered by peer supporters. Controlling for demographic factors, baseline clinical values, and site, structural equation models indicated that at post-intervention, participants' reports of nondirective support were associated with lower, while reports of directive support were associated with greater, depressive symptoms, altogether (with control variables) accounting for 51% of the variance in depressive symptoms. Peer supporters' nondirective support was associated with lower, but directive support was associated with greater, depressive symptoms.

  16. CASTOR: Widely Distributed Scalable Infospaces (United States)


    Shrobe, Howie; Bachrach, Jonathan; Foster, Lester

  17. Le Bon Samaritain: A Community-Based Care Model Supported by Technology. (United States)

    Gay, Valerie; Leijdekkers, Peter; Gill, Asif; Felix Navarro, Karla


    The effective care and well-being of a community is a challenging task, especially in an emergency situation. Traditional technology-based silos between health and emergency services are challenged by the changing needs of communities that could benefit from integrated health and safety services. Low-cost smart-home automation solutions, wearable devices, and Cloud technology make it feasible for communities to interact with each other, and with health and emergency services, in a timely manner. This paper proposes a new community-based care model, supported by technology, that aims at reducing healthcare and emergency services costs while allowing communities to become resilient in response to health and emergency situations. We looked at models of care in different industries and identified the types of technology that can support the suggested new model of care. Two prototypes were developed to validate the adequacy of the technology. The result is a new community-based model of care called 'Le Bon Samaritain'. It relies on a network of people called 'Bons Samaritains' willing to help and deal with the basic care and safety aspects of their community. Their role is to make sure that people in their community receive and understand the messages from emergency and health services. The new care model is integrated with existing emergency warning, community, and health services. The Le Bon Samaritain model is scalable and community-based and can help people feel safer, less isolated, and more integrated in their community. It could be key to reducing healthcare costs, increasing resilience, and driving the change toward a more integrated emergency and care system.

  18. Modeling uncertainty in requirements engineering decision support (United States)

    Feather, Martin S.; Maynard-Zhang, Pedrito; Kiper, James D.


    One inherent characteristic of requirements engineering is a lack of certainty during this early phase of a project. Nevertheless, decisions about requirements must be made in spite of this uncertainty. Here we describe the context in which we are exploring this, and some initial work to support elicitation of uncertain requirements and to deal with the combination of such information from multiple stakeholders.

  19. Modeling a support system for the evaluator

    International Nuclear Information System (INIS)

    Lozano Lima, B.; Ilizastegui Perez, F.; Barnet Izquierdo, B.


    This work gives evaluators a tool they can employ to make their review of operational limits and conditions more sound. The system establishes the most adequate method to carry out the evaluation, as well as to evaluate the bases for technical operational specifications. It also includes generating alternative questions to be supplied to the operating entity to support it in decision-making activities.

  20. Predictive analytics can support the ACO model. (United States)

    Bradley, Paul


    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care, a key tool in accountable care. When considering analytics models, healthcare providers should: make value-based care a priority and act on information from analytics models; create a road map that includes achievable steps, rather than major endeavors; and set long-term expectations, recognizing that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  1. Controlled Ecological Life Support System (CELSS) modeling (United States)

    Drysdale, Alan; Thomas, Mark; Fresa, Mark; Wheeler, Ray


    Attention is given to CELSS, a critical technology for the Space Exploration Initiative. OCAM (object-oriented CELSS analysis and modeling) models carbon, hydrogen, and oxygen recycling. Multiple crops and plant types can be simulated. Resource recovery options from inedible biomass include leaching, enzyme treatment, aerobic digestion, and mushroom and fish growth. The benefit of using many small crops overlapping in time, instead of a single large crop, is demonstrated. Unanticipated results include startup transients which reduce the benefit of multiple small crops. The relative contributions of mass, energy, and manpower to system cost are analyzed in order to determine appropriate research directions.
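
    A back-of-envelope sketch of the staggered-crop comparison, with invented cycle lengths and yields: many small overlapping crops give steadier monthly output than one large crop, while both schedules show the startup transient before the first harvest.

```python
import numpy as np

cycle, horizon = 60, 240      # days to crop maturity, days simulated

def harvest_series(n_crops):
    """Daily harvest output for n_crops equal crops staggered across one cycle."""
    out = np.zeros(horizon)
    for i in range(n_crops):
        first = i * cycle // n_crops + cycle   # staggered planting, first harvest
        out[np.arange(first, horizon, cycle)] += 1.0 / n_crops
    return out

for name, out in [("single", harvest_series(1)), ("staggered", harvest_series(10))]:
    monthly = out.reshape(-1, 30).sum(axis=1)  # 30-day output totals
    print(f"{name:9s}", monthly.round(2))
```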

  2. On Support Functions for the Development of MFM Models

    DEFF Research Database (Denmark)

    Heussen, Kai; Lind, Morten


    A modeling environment and methodology are necessary to ensure quality and reusability of models in any domain. For MFM in particular, as a tool for modeling complex systems, awareness of this need has been increasing. Introducing the context of modeling support functions, this paper provides a review of MFM applications and contextualizes model development with respect to process design and operation knowledge. Developing a perspective on an environment for MFM-oriented model and application development, a tool-chain is outlined and relevant software functions are discussed. With a perspective on MFM modeling for existing processes and automation design, modeling stages and corresponding formal model properties are identified. Finally, practically feasible support functions and model checks to support model development are suggested.

  3. Final Report, Center for Programming Models for Scalable Parallel Computing: Co-Array Fortran, Grant Number DE-FC02-01ER25505

    Energy Technology Data Exchange (ETDEWEB)

    Robert W. Numrich


    The major accomplishment of this project is the production of CafLib, an 'object-oriented' parallel numerical library written in Co-Array Fortran. CafLib contains distributed objects such as block vectors and block matrices, along with procedures, attached to each object, that perform basic linear algebra operations such as matrix multiplication, matrix transpose and LU decomposition. It also contains constructors and destructors for each object that hide the details of data decomposition from the programmer, and it contains collective operations that allow the programmer to calculate global reductions, such as global sums, global minima and global maxima, as well as vector and matrix norms of several kinds. CafLib is designed to be extensible in such a way that programmers can define distributed grid and field objects, based on vector and matrix objects from the library, for finite difference algorithms to solve partial differential equations. A very important extra benefit that resulted from the project is the inclusion of the co-array programming model in the next Fortran standard, called Fortran 2008. It is the first parallel programming model ever included as a standard part of the language. Co-arrays will be a supported feature in all Fortran compilers, and the portability provided by standardization will encourage a large number of programmers to adopt it for new parallel application development. The combination of object-oriented programming in Fortran 2003 with co-arrays in Fortran 2008 provides a very powerful programming model for high-performance scientific computing. Additional benefits from the project, beyond the original goal, include a program to provide access to the co-array model through the Cray compiler as a resource for teaching and research. Several academics, for the first time, included the co-array model as a topic in their courses on parallel computing. A separate collaborative project with LANL and PNNL showed how to

  4. Strategies to Support Students' Mathematical Modeling (United States)

    Jung, Hyunyi


    An important question for mathematics teachers is this: "How can we help students learn mathematics to solve everyday problems, rather than teaching them only to memorize rules and practice mathematical procedures?" Teaching students using modeling activities can help them learn mathematics in real-world problem-solving situations that…

  5. Using Covariation Reasoning to Support Mathematical Modeling (United States)

    Jacobson, Erik


    For many students, making connections between mathematical ideas and the real world is one of the most intriguing and rewarding aspects of the study of mathematics. In the Common Core State Standards for Mathematics (CCSSI 2010), mathematical modeling is highlighted as a mathematical practice standard for all grades. To engage in mathematical…

  6. Scalable and Resilient Middleware to Handle Information Exchange during Environment Crisis (United States)

    Tao, R.; Poslad, S.; Moßgraber, J.; Middleton, S.; Hammitzsch, M.


    The EU FP7 TRIDEC project focuses on enabling real-time, intelligent, information management of collaborative, complex, critical decision processes for earth management. A key challenge is to promote a communication infrastructure to facilitate interoperable environment information services during environment events and crises such as tsunamis and drilling, during which increasing volumes and dimensionality of disparate information sources, including sensor-based and human-based ones, can arise and need to be managed. Such a system needs to support: scalable, distributed messaging; asynchronous messaging; open messaging to handle changing clients, such as new and retired automated and human information sources coming online or going offline; flexible data filtering; and heterogeneous access networks (e.g., GSM, WLAN and LAN). In addition, the system needs to be resilient to ICT system problems, e.g. failure, degradation and overload, during environment events. There are several system middleware choices for TRIDEC based upon a Service-Oriented Architecture (SOA), Event-Driven Architecture (EDA), Cloud Computing, and an Enterprise Service Bus (ESB). In an SOA, everything is a service (e.g. data access, processing and exchange); clients can request on demand or subscribe to services registered by providers; more often, interaction is synchronous. In an EDA system, events that represent significant changes in state can be processed simply, as streams, or more complexly. Cloud computing is a virtualized, interoperable and elastic resource allocation model. An ESB, a fundamental component for enterprise messaging, supports synchronous and asynchronous message exchange models and has inbuilt resilience against ICT failure. Our middleware proposal is an ESB-based hybrid architecture model: an SOA extension supports more synchronous workflows; EDA assists the ESB to handle more complex event processing; Cloud computing can be used to increase and
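
    The decoupling that motivates the ESB/EDA hybrid comes from asynchronous publish/subscribe messaging. A toy broker is sketched below; the names and structure are invented for illustration and are not part of the TRIDEC middleware.

```python
import asyncio
from collections import defaultdict

class ToyBroker:
    """Minimal topic-based pub/sub: publishers and subscribers never
    reference each other, so clients can come online or go offline freely."""
    def __init__(self):
        self.topics = defaultdict(list)      # topic -> list of subscriber queues

    def subscribe(self, topic: str) -> asyncio.Queue:
        queue = asyncio.Queue()
        self.topics[topic].append(queue)
        return queue

    async def publish(self, topic: str, message: dict):
        for queue in self.topics[topic]:     # asynchronous fan-out
            await queue.put(message)

async def main():
    broker = ToyBroker()
    inbox = broker.subscribe("sensor/sea-level")
    await broker.publish("sensor/sea-level", {"station": "buoy-7", "level_cm": 142})
    print(await inbox.get())

asyncio.run(main())
```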

  7. Dengue human infection models supporting drug development. (United States)

    Whitehorn, James; Van, Vinh Chau Nguyen; Simmons, Cameron P


    Dengue is an arboviral infection that represents a major global health burden. There is an unmet need for effective dengue therapeutics to reduce symptoms, duration of illness and incidence of severe complications. Here, we consider the merits of a dengue human infection model (DHIM) for drug development. A DHIM could allow experimentally controlled studies of candidate therapeutics in preselected susceptible volunteers, potentially using smaller sample sizes than trials that recruit patients with dengue in an endemic country. In addition, the DHIM would assist the conduct of intensive pharmacokinetic and basic research investigations and aid in determining optimal drug dosage. Furthermore, a DHIM could help establish proof of concept that chemoprophylaxis against dengue is feasible. The key challenge in developing the DHIM for drug development is to ensure the model reliably replicates the typical clinical and laboratory features of naturally acquired, symptomatic dengue. © The Author 2014. Published by Oxford University Press on behalf of the Infectious Diseases Society of America.

  8. Scalable Techniques for Formal Verification

    CERN Document Server

    Ray, Sandip


    This book presents state-of-the-art formal verification techniques and approaches for seamlessly integrating different formal verification methods within a single logical foundation. It should benefit researchers and practitioners looking to get a broad overview of the spectrum of formal verification techniques, as well as approaches to combining such techniques within a single framework. Coverage includes a range of case studies showing how such combination is fruitful in developing a scalable verification methodology for industrial designs. This book outlines both theoretical and practical issue

  9. Statistical modeling to support power system planning (United States)

    Staid, Andrea

    This dissertation focuses on data-analytic approaches that improve our understanding of power system applications to promote better decision-making. It tackles issues of risk analysis, uncertainty management, resource estimation, and the impacts of climate change. Tools of data mining and statistical modeling are used to bring new insight to a variety of complex problems facing today's power system. The overarching goal of this research is to improve the understanding of the power system risk environment for improved operation, investment, and planning decisions. The first chapter introduces some challenges faced in planning for a sustainable power system. Chapter 2 analyzes the driving factors behind the disparity in wind energy investments among states with a goal of determining the impact that state-level policies have on incentivizing wind energy. Findings show that policy differences do not explain the disparities; physical and geographical factors are more important. Chapter 3 extends conventional wind forecasting to a risk-based focus of predicting maximum wind speeds, which are dangerous for offshore operations. Statistical models are presented that issue probabilistic predictions for the highest wind speed expected in a three-hour interval. These models achieve a high degree of accuracy and their use can improve safety and reliability in practice. Chapter 4 examines the challenges of wind power estimation for onshore wind farms. Several methods for wind power resource assessment are compared, and the weaknesses of the Jensen model are demonstrated. For two onshore farms, statistical models outperform other methods, even when very little information is known about the wind farm. Lastly, chapter 5 focuses on the power system more broadly in the context of the risks expected from tropical cyclones in a changing climate. Risks to U.S. power system infrastructure are simulated under different scenarios of tropical cyclone behavior that may result from climate
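
    The maximum-wind prediction in chapter 3 is an extreme-value problem. As a hedged sketch of that style of model (synthetic data; none of the dissertation's models or datasets are reproduced), one can fit a generalized extreme value distribution to interval maxima and read off a high quantile:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Synthetic stand-in for the maximum wind speed (m/s) in each 3-hour interval
maxima = rng.weibull(2.0, size=1000) * 12 + rng.normal(0, 0.5, size=1000)

shape, loc, scale = genextreme.fit(maxima)   # fit a GEV to the block maxima
p95 = genextreme.ppf(0.95, shape, loc=loc, scale=scale)
print(f"95th-percentile 3-hour maximum wind: {p95:.1f} m/s")
```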

  10. Key Elements of the Tutorial Support Management Model (United States)

    Lynch, Grace; Paasuke, Philip


    In response to an exponential growth in enrolments the "Tutorial Support Management" (TSM) model has been adopted by Open Universities Australia (OUA) after a two-year project on the provision of online tutor support in first year, online undergraduate units. The essential focus of the TSM model was the development of a systemic approach…

  11. Rhode Island Model Evaluation & Support System: Building Administrator. Edition III (United States)

    Rhode Island Department of Education, 2015


    Rhode Island educators believe that implementing a fair, accurate, and meaningful educator evaluation and support system will help improve teaching, learning, and school leadership. The primary purpose of the Rhode Island Model Building Administrator Evaluation and Support System (Rhode Island Model) is to help all building administrators improve.…

  12. Invention software support by integrating function and mathematical modeling

    NARCIS (Netherlands)

    Chechurin, L.S.; Wits, Wessel Willems; Bakker, H.M.


    New idea generation is imperative for successful product innovation and technology development. This paper presents the development of a novel type of invention support software. The support tool integrates both function modeling and mathematical modeling, thereby enabling quantitative analyses on a

  13. A Traceability-based Method to Support Conceptual Model Evolution


    Ruiz Carmona, Luz Marcela


    Renewing software systems is one of the most cost-effective ways to protect software investment, which saves time and money and ensures uninterrupted access to technical support and product upgrades. There are several motivations to promote investment and scientific effort for specifying systems by means of conceptual models and supporting their evolution. As an example, the software engineering community is addressing solutions for supporting model traceability, continuous improvement of busi...

  14. A methodology to support multidisciplinary model-based water management

    NARCIS (Netherlands)

    Scholten, H.; Kassahun, A.; Refsgaard, J.C.; Kargas, Th.; Gavardinas, C.; Beulens, A.J.M.


    Quality assurance in model based water management is needed because of some frequently perceived shortcomings, e.g. a lack of mutual understanding between modelling team members, malpractice and a tendency of modellers to oversell model capabilities. Initiatives to support quality assurance focus on

  15. Highly Scalable Matching Pursuit Signal Decomposition Algorithm (United States)

    National Aeronautics and Space Administration — In this research, we propose a variant of the classical Matching Pursuit Decomposition (MPD) algorithm with significantly improved scalability and computational...
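
    For orientation, the classical MPD that this variant builds on is only a few lines: greedily select the dictionary atom most correlated with the current residual and subtract its projection. A minimal NumPy version (of the classical algorithm, not NASA's scalable variant) follows:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=20):
    """Greedy MPD; the columns of `dictionary` are unit-norm atoms."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        correlations = dictionary.T @ residual   # inner product with every atom
        k = np.argmax(np.abs(correlations))      # best-matching atom
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)              # normalize atoms
x = 3.0 * D[:, 5] - 2.0 * D[:, 42]          # sparse ground truth
coeffs, _ = matching_pursuit(x, D)
print("largest coefficients at atoms:", np.argsort(-np.abs(coeffs))[:2])
```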

  16. Subjective comparison of temporal and quality scalability

    DEFF Research Database (Denmark)

    Korhonen, Jari; Reiter, Ulrich; You, Junyong


    be reduced either by downscaling the frame rate (temporal scalability) or the image quality (quality scalability). However, the user preferences between different scalability types are not well known in different scenarios. In this paper, we present a methodology for subjective comparison between temporal...... and quality scalability. The practical experiments with low resolution video sequences show that in general, distortion is a more crucial factor for the perceived subjective quality than frame rate. However, the results also depend on the content. Moreover, we discuss the role of other different influence......

  17. Organizational Learning Supported by Reference Architecture Models

    DEFF Research Database (Denmark)

    Nardello, Marco; Møller, Charles; Gøtze, John


    The wave of the fourth industrial revolution (Industry 4.0) is bringing a new vision of the manufacturing industry. In manufacturing, one of the buzzwords of the moment is “Smart production”. Smart production involves manufacturing equipment with many sensors that can generate and transmit large...... amounts of data. These data and information from manufacturing operations are however not shared in the organization. Therefore the organization is not using them to learn and improve their operations. To address this problem, the authors implemented in an Industry 4.0 laboratory an instance...... of an emerging technical standard specific for the manufacturing industry. Global manufacturing experts consider the Reference Architecture Model Industry 4.0 (RAMI4.0) as one of the corner stones for the implementation of Industry 4.0. The instantiation contributed to organizational learning in the laboratory...

  18. Scalable rendering on PC clusters

    Energy Technology Data Exchange (ETDEWEB)



    This case study presents initial results from research targeted at the development of cost-effective scalable visualization and rendering technologies. The implementations of two 3D graphics libraries based on the popular sort-last and sort-middle parallel rendering techniques are discussed. An important goal of these implementations is to provide scalable rendering capability for extremely large datasets (>> 5 million polygons). Applications can use these libraries for either run-time visualization, by linking to an existing parallel simulation, or for traditional post-processing by linking to an interactive display program. The use of parallel, hardware-accelerated rendering on commodity hardware is leveraged to achieve high performance. Current performance results show that, using current hardware (a small 16-node cluster), they can utilize up to 85% of the aggregate graphics performance and achieve rendering rates in excess of 20 million polygons/second using OpenGL{reg_sign} with lighting, Gouraud shading, and individually specified triangles (not t-stripped).
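
    In the sort-last approach mentioned above, each node renders its share of the polygons into a full-resolution image, and the per-node images are merged by depth compositing. The NumPy fragment below illustrates only the per-pixel merge; it is a simplification, not the libraries described in the study:

```python
import numpy as np

def depth_composite(color_a, depth_a, color_b, depth_b):
    """Sort-last merge of two nodes' framebuffers: at every pixel,
    keep the fragment closest to the viewer (smallest depth)."""
    nearer = depth_a < depth_b                        # per-pixel boolean mask
    color = np.where(nearer[..., None], color_a, color_b)
    depth = np.minimum(depth_a, depth_b)
    return color, depth

rng = np.random.default_rng(2)
h, w = 480, 640
color_a, color_b = rng.random((h, w, 3)), rng.random((h, w, 3))
depth_a, depth_b = rng.random((h, w)), rng.random((h, w))
color, depth = depth_composite(color_a, depth_a, color_b, depth_b)
print(color.shape, float(depth.max()))
```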

  19. Are complex DCE-MRI models supported by clinical data?

    DEFF Research Database (Denmark)

    Duan, Chong; Kallehauge, Jesper F; Bretthorst, G Larry


    PURPOSE: To ascertain whether complex dynamic contrast enhanced (DCE) MRI tracer kinetic models are supported by data acquired in the clinic and to determine the consequences of limited contrast-to-noise. METHODS: Generically representative in silico and clinical (cervical cancer) DCE-MRI data were...... examined. Bayesian model selection evaluated support for four compartmental DCE-MRI models: the Tofts model (TM), Extended Tofts model, Compartmental Tissue Uptake model (CTUM), and Two-Compartment Exchange model. RESULTS: Complex DCE-MRI models were more sensitive to noise than simpler models with respect...... Model selection is particularly important when high-order, multiparametric models are under consideration. (Parameters obtained from kinetic modeling of cervical cancer clinical DCE-MRI data showed significant changes at an early stage of radiotherapy.)...

  20. Percolator: Scalable Pattern Discovery in Dynamic Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Choudhury, Sutanay; Purohit, Sumit; Lin, Peng; Wu, Yinghui; Holder, Lawrence B.; Agarwal, Khushbu


    We demonstrate Percolator, a distributed system for graph pattern discovery in dynamic graphs. In contrast to conventional mining systems, Percolator advocates efficient pattern mining schemes that (1) support pattern detection with keywords; (2) integrate incremental and parallel pattern mining; and (3) support analytical queries such as trend analysis. The core idea of Percolator is to dynamically decide and verify a small fraction of patterns and their instances that must be inspected in response to buffered updates in dynamic graphs, with a total mining cost independent of graph size. We demonstrate a) the feasibility of incremental pattern mining by walking through each component of Percolator, b) the efficiency and scalability of Percolator over the sheer size of real-world dynamic graphs, and c) how the user-friendly GUI of Percolator interacts with users to support keyword-based queries that detect, browse and inspect trending patterns. We also demonstrate two user cases of Percolator, in social media trend analysis and academic collaboration analysis, respectively.

  1. Network selection, Information filtering and Scalable computation (United States)

    Ye, Changqing

    This dissertation explores two application scenarios of the sparsity pursuit method on large scale data sets. The first scenario is classification and regression in analyzing high dimensional structured data, where predictors correspond to nodes of a given directed graph. This arises in, for instance, identification of disease genes for Parkinson's disease from a network of candidate genes. In such a situation, the directed graph describes dependencies among the genes, where directions of edges represent certain causal effects. Key to high-dimensional structured classification and regression is how to utilize dependencies among predictors as specified by directions of the graph. In this dissertation, we develop a novel method that fully takes into account such dependencies formulated through certain nonlinear constraints. We apply the proposed method to two applications, feature selection in large margin binary classification and in linear regression. We implement the proposed method through difference convex programming for the cost function and constraints. Finally, theoretical and numerical analyses suggest that the proposed method achieves the desired objectives. An application to disease gene identification is presented. The second application scenario is personalized information filtering, which extracts the information specifically relevant to a user, predicting his/her preference over a large number of items based on the opinions of users who think alike or on item content. This problem is cast into the framework of regression and classification, where we introduce novel partial latent models to integrate additional user-specific and content-specific predictors, for higher predictive accuracy. In particular, we factorize a user-over-item preference matrix into a product of two matrices, each representing a user's preference and an item preference by users. Then we propose a likelihood method to seek a sparsest latent factorization, from a class of over
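
    The latent factorization in the second scenario can be sketched compactly: approximate the observed cells of the user-by-item matrix with a product of two low-rank factor matrices. The plain stochastic-gradient version below (synthetic data, no sparsity penalty) is a simplification of the likelihood method described:

```python
import numpy as np

rng = np.random.default_rng(3)
R = rng.integers(1, 6, size=(20, 15)).astype(float)   # user-by-item ratings
mask = rng.random(R.shape) < 0.3                      # only some cells observed
k, lr, reg = 4, 0.01, 0.1
U = 0.1 * rng.normal(size=(R.shape[0], k))            # user factors
V = 0.1 * rng.normal(size=(R.shape[1], k))            # item factors

users, items = np.nonzero(mask)
for epoch in range(200):
    for u, i in zip(users, items):                    # SGD over observed cells
        err = R[u, i] - U[u] @ V[i]
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * U[u] - reg * V[i])

rmse = np.sqrt(np.mean((R[mask] - (U @ V.T)[mask]) ** 2))
print(f"RMSE on observed entries: {rmse:.3f}")
```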

  2. Reference models supporting enterprise networks and virtual enterprises

    DEFF Research Database (Denmark)

    Tølle, Martin; Bernus, Peter


    This article analyses different types of reference models applicable to support the set up and (re)configuration of Virtual Enterprises (VEs). Reference models are models capturing concepts common to VEs aiming to convert the task of setting up of VE into a configuration task, and hence reducing ...

  3. Fully Scalable Porous Metal Electrospray Propulsion (United States)


    This effort addressed the scaling of porous metal electrospray propulsion to CubeSat nanosatellites, done mostly as an exercise on scalability as originally described in the proposal. One of the project's papers describes systems-level scalability of electrospray propulsion in the pure ionic regime to nanosatellites.

  4. Functional scalability through generative representations: the evolution of table designs


    Gregory S Hornby


    One of the main limitations for the functional scalability of automated design systems is the representation used for encoding designs. I argue that generative representations, those which are capable of reusing elements of the encoded design in the translation to the actual artifact, are better suited for automated design because reuse of building blocks captures some design dependencies and improves the ability to make large changes in design space. To support this argument I compare a gen...

  5. Spreadsheet Decision Support Model for Training Exercise Material Requirements Planning

    National Research Council Canada - National Science Library

    Tringali, Arthur


    This thesis focuses on developing a spreadsheet decision support model that can be used by combat engineer platoon and company commanders in determining the material requirements and estimated costs...

  6. A Generic Modeling Process to Support Functional Fault Model Development (United States)

    Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.


    Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.

  7. Cyberinfrastructure to Support Collaborative and Reproducible Computational Hydrologic Modeling (United States)

    Goodall, J. L.; Castronova, A. M.; Bandaragoda, C.; Morsy, M. M.; Sadler, J. M.; Essawy, B.; Tarboton, D. G.; Malik, T.; Nijssen, B.; Clark, M. P.; Liu, Y.; Wang, S. W.


    Creating cyberinfrastructure to support reproducibility of computational hydrologic models is an important research challenge. Addressing this challenge requires open and reusable code and data with machine and human readable metadata, organized in ways that allow others to replicate results and verify published findings. Specific digital objects that must be tracked for reproducible computational hydrologic modeling include (1) raw initial datasets, (2) data processing scripts used to clean and organize the data, (3) processed model inputs, (4) model results, and (5) the model code with an itemization of all software dependencies and computational requirements. HydroShare is a cyberinfrastructure under active development designed to help users store, share, and publish digital research products in order to improve reproducibility in computational hydrology, with an architecture supporting hydrologic-specific resource metadata. Researchers can upload data required for modeling, add hydrology-specific metadata to these resources, and use the data directly within the platform for collaborative modeling using tools like CyberGIS, Sciunit-CLI, and JupyterHub that have been integrated with HydroShare to run models using notebooks, Docker containers, and cloud resources. Current research aims to implement the Structure For Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model within HydroShare to support hypothesis-driven hydrologic modeling while also taking advantage of the HydroShare cyberinfrastructure. The goal of this integration is to create the cyberinfrastructure that supports hypothesis-driven model experimentation, education, and training efforts by lowering barriers to entry, reducing the time spent on informatics technology and software development, and supporting collaborative research within and across research groups.

  8. Reviewing model application to support animal health decision making. (United States)

    Singer, Alexander; Salman, Mo; Thulke, Hans-Hermann


    Animal health is of societal importance as it affects human welfare, and anthropogenic interests shape decision making to assure animal health. Scientific advice to support decision making is manifold. Modelling, as one piece of the scientific toolbox, is appreciated for its ability to describe and structure data, to give insight into complex processes and to predict future outcomes. In this paper we study the application of scientific modelling to support practical animal health decisions. We reviewed the 35 animal health related scientific opinions adopted by the Animal Health and Animal Welfare Panel of the European Food Safety Authority (EFSA). Thirteen of these documents were based on the application of models. The review took two viewpoints, the decision maker's need and the modeller's approach. In the reviewed material three types of modelling questions were addressed by four specific model types. The correspondence between tasks and models underpinned the importance of the modelling question in triggering the modelling approach. End point quantifications were the dominating request from decision makers, implying that prediction of risk is a major need. However, due to knowledge gaps, corresponding modelling studies often shied away from providing exact numbers. Instead, comparative scenario analyses were performed, furthering the understanding of the decision problem and the effects of alternative management options. In conclusion, the most adequate scientific support for decision making - including available modelling capacity - might be expected if the required advice is clearly stated. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. The Scalable Coherent Interface and related standards projects

    International Nuclear Information System (INIS)

    Gustavson, D.B.


    The Scalable Coherent Interface (SCI) project (IEEE P1596) found a way to avoid the limits that are inherent in bus technology. SCI provides bus-like services by transmitting packets on a collection of point-to-point unidirectional links. The SCI protocols support cache coherence in a distributed-shared-memory multiprocessor model, message passing, I/O, and local-area-network-like communication over fiber optic or wire links. VLSI circuits that operate parallel links at 1000 MByte/s and serial links at 1000 Mbit/s will be available early in 1992. Several ongoing SCI-related projects are applying the SCI technology to new areas or extending it to more difficult problems. P1596.1 defines the architecture of a bridge between SCI and VME; P1596.2 compatibly extends the cache coherence mechanism for efficient operation with kiloprocessor systems; P1596.3 defines new low-voltage (about 0.25 V) differential signals suitable for low power interfaces for CMOS or GaAs VLSI implementations of SCI; P1596.4 defines a high performance memory chip interface using these signals; P1596.5 defines data transfer formats for efficient interprocessor communication in heterogeneous multiprocessor systems. This paper reports the current status of SCI, related standards, and new projects. 16 refs

  10. Model based decision support for planning of road maintenance

    NARCIS (Netherlands)

    van Harten, Aart; Worm, J.M.; Worm, J.M.


    In this article we describe a Decision Support Model, based on Operational Research methods, for the multi-period planning of maintenance of bituminous pavements. This model is a tool for the road manager to assist in generating an optimal maintenance plan for a road. Optimal means: minimising the

  11. Integrating Collaborative and Decentralized Models to Support Ubiquitous Learning (United States)

    Barbosa, Jorge Luis Victória; Barbosa, Débora Nice Ferrari; Rigo, Sandro José; de Oliveira, Jezer Machado; Rabello, Solon Andrade, Jr.


    The application of ubiquitous technologies in the improvement of education strategies is called Ubiquitous Learning. This article proposes the integration between two models dedicated to support ubiquitous learning environments, called Global and CoolEdu. CoolEdu is a generic collaboration model for decentralized environments. Global is an…

  12. [Model transfer method based on support vector machine]. (United States)

    Xiong, Yu-hong; Wen, Zhi-yu; Liang, Yu-qian; Chen, Qin; Zhang, Bo; Liu, Yu; Xiang, Xian-yi


    Model transfer is a basic method for making spectrometer data universal and comparable by seeking a mathematical transformation relating different spectrometers. Because pronounced nonlinear effects and small calibration sample sets are common in practice, it is important to solve the model transfer problem under these conditions. This paper summarizes support vector machine theory, puts forward a model transfer method based on support vector machines and piecewise direct standardization, and uses computer simulation to give an example explaining the method, comparing it with an artificial neural network in the end.
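
    The core idea, learning a nonlinear map from one instrument's response to another's, can be sketched with scikit-learn's SVR on synthetic single-channel data. The data and kernel settings are illustrative assumptions, and the paper's piecewise direct standardization step is omitted:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
master = rng.random(200)                                   # master-instrument readings
slave = 0.8 * master**1.3 + 0.05 * rng.normal(size=200)    # nonlinear instrument drift

# Learn the slave -> master transformation on shared calibration standards
transfer = SVR(kernel="rbf", C=10.0, epsilon=0.01)
transfer.fit(slave[:150].reshape(-1, 1), master[:150])

pred = transfer.predict(slave[150:].reshape(-1, 1))
print("transfer RMSE:", np.sqrt(np.mean((pred - master[150:]) ** 2)))
```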

  13. Accounting Fundamentals and the Variation of Stock Price: Factoring in the Investment Scalability


    Sumiyana, Sumiyana; Baridwan, Zaki; Sugiri, Slamet; Hartono, Jogiyanto


    This study develops a new return model with respect to accounting fundamentals. The new return model is based on Chen and Zhang (2007). This study takes into account the investment scalability information. Specifically, this study splits the scale of the firm's operations into short-run and long-run investment scalabilities. We document that five accounting fundamentals explain the variation of annual stock return. The factors comprise book value, earnings yield, short-run and long-run investment s...

  14. Fast and scalable inequality joins

    KAUST Repository

    Khayyat, Zuhair


    Inequality joins, which join relations on inequality conditions, are used in various applications. Optimizing joins has been the subject of intensive research ranging from efficient join algorithms such as sort-merge join, to the use of efficient indices such as (Formula presented.)-tree, (Formula presented.)-tree and Bitmap. However, inequality joins have received little attention and queries containing such joins are notably very slow. In this paper, we introduce fast inequality join algorithms based on sorted arrays and space-efficient bit-arrays. We further introduce a simple method to estimate the selectivity of inequality joins, which is then used to optimize multiple predicate queries and multi-way joins. Moreover, we study an incremental inequality join algorithm to handle scenarios where data keeps changing. We have implemented a centralized version of these algorithms on top of PostgreSQL, a distributed version on top of Spark SQL, and an existing data cleaning system, Nadeef. By comparing our algorithms against well-known optimization techniques for inequality joins, we show our solution is more scalable and several orders of magnitude faster. © 2016 Springer-Verlag Berlin Heidelberg
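
    The sorted-array idea can be shown in miniature: sort one relation on the join column, then binary-search the inequality boundary for each outer tuple instead of scanning all pairs. This is a simplification for illustration, not the paper's full algorithm:

```python
import bisect

def inequality_join(r_values, s_values):
    """All pairs (r, s) with r < s, via sorting plus binary search
    rather than a quadratic nested loop."""
    s_sorted = sorted(s_values)
    result = []
    for r in r_values:
        start = bisect.bisect_right(s_sorted, r)  # first s strictly greater than r
        result.extend((r, s) for s in s_sorted[start:])
    return result

print(inequality_join([3, 7], [1, 5, 9]))  # [(3, 5), (3, 9), (7, 9)]
```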

  15. Prioritization of engineering support requests and advanced technology projects using decision support and industrial engineering models (United States)

    Tavana, Madjid


    The evaluation and prioritization of Engineering Support Requests (ESR's) is a particularly difficult task at the Kennedy Space Center (KSC) -- Shuttle Project Engineering Office. This difficulty is due to the complexities inherent in the evaluation process and the lack of structured information. The evaluation process must consider a multitude of relevant pieces of information concerning Safety, Supportability, O&M Cost Savings, Process Enhancement, Reliability, and Implementation. Various analytical and normative models developed over the past have helped decision makers at KSC utilize large volumes of information in the evaluation of ESR's. The purpose of this project is to build on the existing methodologies and develop a multiple criteria decision support system that captures the decision maker's beliefs through a series of sequential, rational, and analytical processes. The model utilizes the Analytic Hierarchy Process (AHP), subjective probabilities, the entropy concept, and Maximize Agreement Heuristic (MAH) to enhance the decision maker's intuition in evaluating a set of ESR's.
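
    At its core, the AHP step mentioned here reduces to extracting a priority vector from a pairwise-comparison matrix, conventionally via the principal eigenvector. The comparison values below are hypothetical, and the entropy and MAH steps are omitted:

```python
import numpy as np

# Hypothetical pairwise comparisons of three ESR criteria
# (say, Safety vs. Cost Savings vs. Reliability): A[i, j] is how much
# more important criterion i is than criterion j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()     # normalized priority vector
print("criterion weights:", np.round(weights, 3))
```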

  16. Designing Psychological Treatments for Scalability: The PREMIUM Approach.

    Directory of Open Access Journals (Sweden)

    Sukumar Vellakkal

    Full Text Available Lack of access to empirically-supported psychological treatments (EPT) that are contextually appropriate and feasible to deliver by non-specialist health workers (referred to as 'counsellors') is a major barrier to the treatment of mental health problems in resource-poor countries. To address this barrier, the 'Program for Effective Mental Health Interventions in Under-resourced Health Systems' (PREMIUM) designed a method for the development of EPT for severe depression and harmful drinking. This was implemented over three years in India. This study assessed the relative usefulness and costs of the five 'steps' (Systematic reviews, In-depth interviews, Key informant surveys, Workshops with international experts, and Workshops with local experts) in the first phase of identifying the strategies and theoretical model of the treatment, and of the two 'steps' (Case series with specialists, and Case series and pilot trial with counsellors) in the second phase of enhancing the acceptability and feasibility of its delivery by counsellors in PREMIUM, with the aim of arriving at a parsimonious set of steps for future investigators to use for developing scalable EPT. The study used two sources of data: the usefulness ratings by the investigators and the resource utilization. The usefulness of each of the seven steps was assessed through the ratings by the investigators involved in the development of each of the two EPT, viz. the Healthy Activity Program for severe depression and Counselling for Alcohol Problems for harmful drinking. Quantitative responses were elicited to rate the utility (usefulness/influence), followed by open-ended questions for explaining the rankings. The resources used by PREMIUM were computed in terms of time (months) and monetary costs. The theoretical core of the new treatments was consistent with those of EPT derived from global evidence, viz. Behavioural Activation and Motivational Enhancement for severe depression and harmful drinking respectively

  17. Model Checking for Licensing Support in the Finnish Nuclear Industry

    Energy Technology Data Exchange (ETDEWEB)

    Pakonen, Antti; Valkonen, Janne [VTT Technical Research Centre of Finland, VTT (Finland); Matinaho, Sami; Hartikainen, Markus [Fortum Power and Heat, Fortum (Finland)]


    This paper examines how model checking can be used to support the qualification of digital I and C software in nuclear power plants, in a way that is consistent with regulatory demands, specifically the common position of seven European nuclear regulators and authorised technical support organisations. As a practical example, we discuss the third-party review service provided by VTT for the power company Fortum in the I and C renewal project of the Loviisa plant in southern Finland.

  18. "SERPS Up": Support, Engagement and Retention of Postgraduate Students--A Model of Postgraduate Support (United States)

    Alston, Margaret; Allan, Julaine; Bell, Karen; Brown, Andy; Dowling, Jane; Hamilton, Pat; McKinnon, Jenny; McKinnon, Noela; Mitchell, Rol; Whittenbury, Kerri; Valentine, Bruce; Wicks, Alison; Williams, Rachael


    The federal government's 1999 White Paper Knowledge and Innovation: a policy statement on research and research training, notes concerns about retention and completion rates in doctoral studies programs in Australia. This paper outlines a model of higher education support developed at the Centre for Rural Social Research at Charles Sturt…

  19. Design Approaches to Support Preservice Teachers in Scientific Modeling (United States)

    Kenyon, Lisa; Davis, Elizabeth A.; Hug, Barbara


    Engaging children in scientific practices is hard for beginning teachers. One such scientific practice with which beginning teachers may have limited experience is scientific modeling. We have iteratively designed preservice teacher learning experiences and materials intended to help teachers achieve learning goals associated with scientific modeling. Our work has taken place across multiple years at three university sites, with preservice teachers focused on early childhood, elementary, and middle school teaching. Based on results from our empirical studies supporting these design decisions, we discuss design features of our modeling instruction in each iteration. Our results suggest some successes in supporting preservice teachers in engaging students in modeling practice. We propose design principles that can guide science teacher educators in incorporating modeling in teacher education.

  20. Scalable Gravity Offload System, Phase II (United States)

    National Aeronautics and Space Administration — The proposed innovation is a scalable gravity off-load system that enables controlled integrated testing of Surface System elements such as rovers, habitats, and...

  1. Scalable Gravity Offload System, Phase I (United States)

    National Aeronautics and Space Administration — The proposed innovation is a scalable gravity off-load system that enables controlled integrated testing of Surface System elements such as rovers, habitats, and...

  2. Effective Team Support: From Modeling to Software Agents (United States)

    Remington, Roger W. (Technical Monitor); John, Bonnie; Sycara, Katia


    The purpose of this research contract was to perform multidisciplinary research between CMU psychologists, computer scientists and engineers and NASA researchers to design a next generation collaborative system to support a team of human experts and intelligent agents. To achieve robust performance enhancement of such a system, we had proposed to perform task and cognitive modeling to thoroughly understand the impact technology makes on the organization and on key individual personnel. Guided by cognitively-inspired requirements, we would then develop software agents that support the human team in decision making, information filtering, information distribution and integration to enhance team situational awareness. During the period covered by this final report, we made substantial progress in modeling infrastructure and task infrastructure. Work is continuing under a different contract to complete empirical data collection, cognitive modeling, and the building of software agents to support the team's task.

  3. Scalable and near-optimal design space exploration for embedded systems

    CERN Document Server

    Kritikakou, Angeliki; Goutis, Costas


    This book describes scalable and near-optimal, processor-level design space exploration (DSE) methodologies.  The authors present design methodologies for data storage and processing in real-time, cost-sensitive data-dominated embedded systems.  Readers will be enabled to reduce time-to-market, while satisfying system requirements for performance, area, and energy consumption, thereby minimizing the overall cost of the final design.   • Describes design space exploration (DSE) methodologies for data storage and processing in embedded systems, which achieve near-optimal solutions with scalable exploration time; • Presents a set of principles and the processes which support the development of the proposed scalable and near-optimal methodologies; • Enables readers to apply scalable and near-optimal methodologies to the intra-signal in-place optimization step for both regular and irregular memory accesses.

  4. Investigation on Reliability and Scalability of an FBG-Based Hierarchical AOFSN

    Directory of Open Access Journals (Sweden)

    Li-Mei Peng


    Full Text Available The reliability and scalability of large-scale active optical fiber sensor networks (AOFSN) are considered in this paper. The AOFSN consists of a three-level hierarchical sensor network architecture. The first two levels consist of active interrogation and remote nodes (RNs) and the third level, called the sensor subnet (SSN), consists of passive Fiber Bragg Gratings (FBGs) and a few switches. The switch architectures in the RN and various SSNs to improve the reliability and scalability of AOFSN are studied. Two SSNs with a regular topology are proposed to support simple routing and scalability in AOFSN: square-based sensor cells (SSC) and pentagon-based sensor cells (PSC). The reliability and scalability are evaluated in terms of the available sensing coverage in the case of one or multiple link failures.

  5. Internal Models Support Specific Gaits in Orthotic Devices

    DEFF Research Database (Denmark)

    Matthias Braun, Jan; Wörgötter, Florentin; Manoonpong, Poramate


    such limitations is to supply the patient—via the orthosis—with situation-dependent gait models. To achieve this, we present a method for gait recognition using model invalidation. We show that these models are capable of predicting the individual patient's movements and supplying the correct gait. We investigate...... the system's accuracy and robustness on a Knee-Ankle-Foot-Orthosis, introducing behaviour changes depending on the patient's current walking situation. We conclude that the model-based support of different gaits presented here has the power to enhance the patient's mobility....

  6. Visualisation and interpretation of Support Vector Regression models. (United States)

    Ustün, B; Melssen, W J; Buydens, L M C


    This paper introduces a technique to visualise the information content of the kernel matrix and a way to interpret the ingredients of the Support Vector Regression (SVR) model. Recently, the use of Support Vector Machines (SVM) for solving classification (SVC) and regression (SVR) problems has increased substantially in the field of chemistry and chemometrics. This is mainly due to its high generalisation performance and its ability to model non-linear relationships in a unique and global manner. Modeling of non-linear relationships is enabled by applying a kernel function. The kernel function transforms the input data, usually non-linearly related to the associated output property, into a high dimensional feature space where the non-linear relationship can be represented in a linear form. Usually, SVMs are applied as a black box technique. Hence, the model cannot be interpreted like, e.g., Partial Least Squares (PLS) can. For example, the PLS scores and loadings make it possible to visualise and understand the driving force behind the optimal PLS machinery. In this study, we have investigated the possibilities of visualising and interpreting the SVM model. Here, we have focused exclusively on Support Vector Regression to demonstrate these visualisation and interpretation techniques. Our observations show that we are now able to turn an SVR black box model into a transparent and interpretable regression modeling technique.
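
    In the spirit of this paper, a first step toward opening the black box is simply to inspect the kernel matrix itself. The sketch below (scikit-learn's rbf_kernel on synthetic 1-D data, not the authors' exact technique) shows the locality structure an RBF-based SVR exploits:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(5)
X = np.sort(rng.random((30, 1)), axis=0)   # 1-D samples, sorted for readability
K = rbf_kernel(X, gamma=50.0)              # similarity of every pair of samples

# Large entries cluster near the diagonal: each sample "sees" only nearby
# samples, which is the locality that an RBF-based model exploits.
print(np.round(K[:5, :5], 2))
```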

  7. Applications of system dynamics modelling to support health policy. (United States)

    Atkinson, Jo-An M; Wells, Robert; Page, Andrew; Dominello, Amanda; Haines, Mary; Wilson, Andrew


    The value of systems science modelling methods in the health sector is increasingly being recognised. Of particular promise is the potential of these methods to improve operational aspects of healthcare capacity and delivery, analyse policy options for health system reform and guide investments to address complex public health problems. Because it lends itself to a participatory approach, system dynamics modelling has been a particularly appealing method that aims to align stakeholder understanding of the underlying causes of a problem and achieve consensus for action. The aim of this review is to determine the effectiveness of system dynamics modelling for health policy, and explore the range and nature of its application. A systematic search was conducted to identify articles published up to April 2015 from the PubMed, Web of Knowledge, Embase, ScienceDirect and Google Scholar databases. The grey literature was also searched. Papers eligible for inclusion were those that described applications of system dynamics modelling to support health policy at any level of government. Six papers were identified, comprising eight case studies of the application of system dynamics modelling to support health policy. No analytic studies were found that examined the effectiveness of this type of modelling. Only three examples engaged multidisciplinary stakeholders in collective model building. Stakeholder participation in model building reportedly facilitated development of a common 'mental map' of the health problem, resulting in consensus about optimal policy strategy and garnering support for collaborative action. The paucity of relevant papers indicates that, although the volume of descriptive literature advocating the value of system dynamics modelling is considerable, its practical application to inform health policy making is yet to be routinely applied and rigorously evaluated. Advances in software are allowing the participatory model building approach to be extended to

  8. Fracture network modeling and GoldSim simulation support

    International Nuclear Information System (INIS)

    Sugita, Kenichiro; Dershowitz, William


    During Heisei-15, Golder Associates provided support for JNC Tokai through discrete fracture network data analysis and simulation of the MIU Underground Rock Laboratory, participation in Task 6 of the Aespoe Task Force on Modelling of Groundwater Flow and Transport, and development of methodologies for analysis of repository site characterization strategies and safety assessment. MIU Underground Rock Laboratory support during H-15 involved development of new discrete fracture network (DFN) models for the MIU Shoba-sama Site, in the region of shaft development. Golder developed three DFN models for the site using discrete fracture network, equivalent porous medium (EPM), and nested DFN/EPM approaches. Each of these models was compared based upon criteria established for the multiple modeling project (MMP). Golder supported JNC participation in Tasks 6AB, 6D and 6E of the Aespoe Task Force on Modelling of Groundwater Flow and Transport during H-15. For Task 6AB, Golder implemented an updated microstructural model in GoldSim, and used this updated model to simulate the propagation of uncertainty from experimental to safety assessment time scales, for 5 m scale transport path lengths. Tasks 6D and 6E compared safety assessment (PA) and experimental time scale simulations in a 200 m scale discrete fracture network. For Task 6D, Golder implemented a DFN model using FracMan/PA Works, and determined the sensitivity of solute transport to a range of material property and geometric assumptions. For Task 6E, Golder carried out demonstration FracMan/PA Works transport calculations at a 1 million year time scale, to ensure that task specifications are realistic. The majority of work for Task 6E will be carried out during H-16. During H-15, Golder supported JNC's Total System Performance Assessment (TSPA) strategy by developing technologies for the analysis of precipitant concentration. These approaches were based on the GoldSim precipitant data management features, and were

  9. Fracture network modeling and GoldSim simulation support

    International Nuclear Information System (INIS)

    Sugita, Kenichiro; Dershowitz, William


    During Heisei-14, Golder Associates provided support for JNC Tokai through data analysis and simulation of the MIU Underground Rock Laboratory, participation in Task 6 of the Aespoe Task Force on Modelling of Groundwater Flow and Transport, and analysis of repository safety assessment technologies including cell networks for evaluation of the disturbed rock zone (DRZ) and total systems performance assessment (TSPA). MIU Underground Rock Laboratory support during H-14 involved discrete fracture network (DFN) modelling in support of the Multiple Modelling Project (MMP) and the Long Term Pumping Test (LPT). Golder developed updated DFN models for the MIU site, reflecting updated analyses of fracture data. Golder also developed scripts to support JNC simulations of flow and transport pathways within the MMP. Golder supported JNC participation in Task 6 of the Aespoe Task Force on Modelling of Groundwater Flow and Transport during H-14. Tasks 6A and 6B compared safety assessment (PA) and experimental time scale simulations along a pipe transport pathway. Task 6B2 extended Task 6B simulations from 1-D to 2-D. For Task 6B2, Golder carried out single fracture transport simulations on a wide variety of generic heterogeneous 2D fractures using both experimental and safety assessment boundary conditions. The heterogeneous 2D fractures were implemented according to a variety of in-plane heterogeneity patterns. Multiple immobile zones were considered, including stagnant zones, infillings, altered wall rock, and intact rock. During H-14, JNC carried out extensive studies of the disturbed rock zone (DRZ) surrounding repository tunnels and drifts. Golder supported this activity by evaluating the calculation time necessary for simulating a reference heterogeneous DRZ cell network for a range of computational strategies. To support the development of JNC's total system performance assessment (TSPA) strategy, Golder carried out a review of the US DOE Yucca Mountain Project TSPA. This

  10. Fracture network modeling and GoldSim simulation support

    International Nuclear Information System (INIS)

    Sugita, Kenichirou; Dershowitz, W.


    During Heisei-16, Golder Associates provided support for JNC Tokai through discrete fracture network data analysis and simulation of the Mizunami Underground Research Laboratory (MIU), participation in Task 6 of the AEspoe Task Force on Modeling of Groundwater Flow and Transport, and development of methodologies for analysis of repository site characterization strategies and safety assessment. MIU support during H-16 involved updating the H-15 FracMan discrete fracture network (DFN) models for the MIU shaft region, and developing improved simulation procedures. Updates to the conceptual model included incorporation of 'Step2' (2004) versions of the deterministic structures, and revision of background fractures to be consistent with conductive structure data from the DH-2 borehole. Golder developed improved simulation procedures for these models through the use of hybrid discrete fracture network (DFN), equivalent porous medium (EPM), and nested DFN/EPM approaches. For each of these models, procedures were documented for the entire modeling process including model implementation, MMP simulation, and shaft grouting simulation. Golder supported JNC participation in Tasks 6AB, 6D and 6E of the AEspoe Task Force on Modeling of Groundwater Flow and Transport during H-16. For Task 6AB, Golder developed a new technique to evaluate the role of grout in performance assessment time-scale transport. For Task 6D, Golder submitted a report of H-15 simulations to SKB. For Task 6E, Golder carried out safety assessment time-scale simulations at the block scale, using the Laplace Transform Galerkin method. During H-16, Golder supported JNC's Total System Performance Assessment (TSPA) strategy by developing technologies for the analysis of the use of site characterization data in safety assessment. This approach will aid in understanding the use of site characterization to progressively reduce site characterization uncertainty. (author)

  11. A telepsychiatry model to support psychiatric outreach in the public ...

    African Journals Online (AJOL)

    A telepsychiatry model to support psychiatric outreach in the public sector in South Africa. J Chipps, S Ramlall, M Mars. Abstract. The access of rural Mental Health Care Users in South Africa to specialist psychiatrists and quality mental health care is currently sub-optimal. Health professionals and planners working in ...

  12. Fracture Network Modeling and GoldSim Simulation Support


    杉田 健一郎; Dershowitz, W.


    During Heisei-14, Golder Associates provided support for JNC Tokai through data analysis and simulation of the MIU Underground Rock Laboratory, participation in Task 6 of the Aspo Task Force on Modelling of Groundwater Flow and Transport, and analysis of repository safety assessment technologies including cell networks for evaluation of the disturbed rock zone (DRZ) and total systems performance assessment (TSPA).

  13. Making Risk Models Operational for Situational Awareness and Decision Support

    International Nuclear Information System (INIS)

    Paulson, P.R.; Coles, G.; Shoemaker, S.


    We present CARIM, a decision support tool to aid in the evaluation of plans for converting control systems to digital instruments. The model provides the capability to optimize planning and resource allocation to reduce risk from multiple safety and economic perspectives. (author)

  14. Supporting Sophomore Success through a New Learning Community Model (United States)

    Virtue, Emily E.; Wells, Gayle; Virtue, Andrew D.


    The creation of a Sophomore Learning Community (SLC) model can help address concerns about the "sophomore slump" and sophomore attrition. While managing the logistics of a sophomore LC can be difficult, with proper faculty, staff, and administrative support, positive results can be produced. This article outlines the need for Sophomore…

  15. Supporting universal prevention programs: a two-phased coaching model. (United States)

    Becker, Kimberly D; Darney, Dana; Domitrovich, Celene; Keperling, Jennifer Pitchford; Ialongo, Nicholas S


    Schools are adopting evidence-based programs designed to enhance students' emotional and behavioral competencies at increasing rates (Hemmeter et al. in Early Child Res Q 26:96-109, 2011). At the same time, teachers express the need for increased support surrounding implementation of these evidence-based programs (Carter and Van Norman in Early Child Educ 38:279-288, 2010). Ongoing professional development in the form of coaching may enhance teacher skills and implementation (Noell et al. in School Psychol Rev 34:87-106, 2005; Stormont et al. 2012). There exists a need for a coaching model that can be applied to a variety of teacher skill levels and one that guides coach decision-making about how best to support teachers. This article provides a detailed account of a two-phased coaching model with empirical support developed and tested with coaches and teachers in urban schools (Becker et al. 2013). In the initial universal coaching phase, all teachers receive the same coaching elements regardless of their skill level. Then, in the tailored coaching phase, coaching varies according to the strengths and needs of each teacher. Specifically, more intensive coaching strategies are used only with teachers who need additional coaching supports, whereas other teachers receive just enough support to consolidate and maintain their strong implementation. Examples of how coaches used the two-phased coaching model when working with teachers who were implementing two universal prevention programs (i.e., the PATHS curriculum and PAX Good Behavior Game [PAX GBG]) provide illustrations of the application of this model. The potential reach of this coaching model extends to other school-based programs as well as other settings in which coaches partner with interventionists to implement evidence-based programs.

  16. Scalable Combinatorial Tools for Health Disparities Research

    Directory of Open Access Journals (Sweden)

    Michael A. Langston


    Full Text Available Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual’s genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complicated systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene x environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject.

  17. Evaluation of atmospheric dispersion/consequence models supporting safety analysis

    International Nuclear Information System (INIS)

    O'Kula, K.R.; Lazaro, M.A.; Woodard, K.


    Two DOE Working Groups have completed evaluations of the accident phenomenology and consequence methodologies used to support DOE facility safety documentation. The independent evaluations each concluded that no single computer model adequately addresses all accident and atmospheric release conditions. MACCS2, MATHEW/ADPIC, TRAC RA/HA, and COSYMA are adequate for most radiological dispersion and consequence needs. ALOHA, DEGADIS, HGSYSTEM, TSCREEN, and SLAB are recommended for chemical dispersion and consequence applications. Additional work is suggested, principally in evaluating new models, targeting certain models for continued development, providing training, and establishing a Web page offering guidance to safety analysts.

  18. Coal demand prediction based on a support vector machine model

    Energy Technology Data Exchange (ETDEWEB)

    Jia, Cun-liang; Wu, Hai-shan; Gong, Dun-wei [China University of Mining & Technology, Xuzhou (China). School of Information and Electronic Engineering]


    A forecasting model for China's coal demand was constructed using support vector regression. With a selected embedding dimension, the input and output vectors were built from China's coal demand from 1980 to 2002. After comparison with the linear and sigmoid kernels, a radial basis function (RBF) was adopted as the kernel function. The proper parameters were chosen by analyzing the relationship between the prediction error and the model parameters. A support vector machine (SVM) model with multiple inputs and a single output was proposed. Compared with an RBF neural network predictor on the test datasets, the results show that the SVM predictor has higher precision and greater generalization ability. In the end, the coal demand from 2003 to 2006 is accurately forecasted. 10 refs., 2 figs., 4 tabs.
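
    A minimal sketch of the embedding construction described above, assuming a hypothetical embedding dimension of 4 and synthetic demand figures in place of the 1980-2002 data; scikit-learn's SVR stands in for the authors' SVM implementation, and all parameter values are illustrative only.

    ```python
    # Each input vector holds the previous 4 years of demand (the embedding
    # dimension) and the output is the following year's demand.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)
    demand = np.cumsum(rng.uniform(0.5, 1.5, size=23))  # stand-in series

    d = 4  # hypothetical embedding dimension
    X = np.array([demand[i:i + d] for i in range(len(demand) - d)])
    y = demand[d:]

    model = SVR(kernel="rbf", C=100.0, gamma=0.5, epsilon=0.01)
    model.fit(X, y)

    # One-step-ahead forecast from the last 4 observed years
    print(model.predict(demand[-d:].reshape(1, -1)))
    ```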

  19. A multicriteria prioritization model to support public safety planning

    Directory of Open Access Journals (Sweden)

    André Morais Gurgel


    Full Text Available Setting out to solve operational problems is a frequent part of decision making on public safety. However, the pillars of tactics and strategy are normally disregarded. Thus, this paper focuses on a strategic issue, namely how a city should prioritize areas in which criminality has the potential to increase, and it takes a multiple criteria approach, since such a situation is normally analyzed only from the perspective of the number of police occurrences. The proposed model is based on the SMARTS multicriteria method and was applied in a Brazilian city. It combines a multicriteria method and a Monte Carlo simulation to support an analysis of robustness. As a result, we highlight some differences between the model developed and the police-occurrences model. It can support differentiated policies for zones by indicating where there should be strong actions, infrastructure investments, monitoring procedures and other public safety policies.
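
    The abstract does not give the model's criteria or weights, so the following is a hedged sketch of a SMARTS-style additive value model with a Monte Carlo robustness check over perturbed weights; all zones, scores, and weights are invented.

    ```python
    # Rank zones by a weighted additive value, then test how stable the
    # top zone is when the elicited weights are randomly perturbed.
    import numpy as np

    rng = np.random.default_rng(2)
    # Rows: city zones; columns: normalized criteria scores in [0, 1]
    scores = np.array([[0.8, 0.2, 0.5],
                       [0.4, 0.9, 0.6],
                       [0.6, 0.5, 0.9]])
    weights = np.array([0.5, 0.3, 0.2])       # elicited swing weights

    baseline_rank = np.argsort(-(scores @ weights))

    # Robustness: sample noisy weights near the elicited ones and count
    # how often each zone comes out first.
    wins = np.zeros(scores.shape[0])
    for _ in range(10_000):
        w = rng.dirichlet(weights * 20)
        wins[np.argmax(scores @ w)] += 1

    print("baseline ranking:", baseline_rank)
    print("P(zone ranked first):", wins / wins.sum())
    ```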

  20. The effect of alkylating agents on model supported metal clusters

    Energy Technology Data Exchange (ETDEWEB)

    Erdem-Senatalar, A.; Blackmond, D.G.; Wender, I. (Pittsburgh Univ., PA (USA). Dept. of Chemical and Petroleum Engineering); Oukaci, R. (CERHYD, Algiers (Algeria))


    Interactions between model supported metal clusters and alkylating agents were studied in an effort to understand a novel chemical trapping technique developed for identifying species adsorbed on catalyst surfaces. It was found that these interactions are more complex than had previously been suggested. Studies were completed using deuterium-labeled dimethyl sulfate (DMS), (CD₃)₂SO₄, as a trapping agent to interact with the supported metal cluster ethylidyne tricobalt enneacarbonyl. Results showed that oxygenated products formed during the trapping reaction contained −OCD₃ groups from the DMS, indicating that the interaction was not a simple alkylation. 18 refs., 1 fig., 3 tabs.

  1. Relationship model and supporting activities of JIT, TQM and TPM

    Directory of Open Access Journals (Sweden)

    Nuttapon SaeTong


    Full Text Available This paper gives a relationship model and supporting activities of Just-in-Time (JIT), Total Quality Management (TQM), and Total Productive Maintenance (TPM). By reviewing the concepts, 5S, Kaizen, preventive maintenance, Kanban, visual control, Poka-Yoke, and Quality Control tools are the main supporting activities. Based on the analysis, 5S, preventive maintenance, and Kaizen are the foundation of the three concepts. QC tools are required activities for implementing TQM, whereas Poka-Yoke and visual control are necessary activities for implementing TPM. After successfully implementing TQM and TPM, Kanban is needed for JIT.

  2. Side-Information Generation for Temporally and Spatially Scalable Wyner-Ziv Codecs

    Directory of Open Access Journals (Sweden)


    Full Text Available The distributed video coding paradigm enables video codecs to operate with reversed complexity, in which the complexity is shifted from the encoder toward the decoder. Its performance is heavily dependent on the quality of the side information generated by motion estimation at the decoder. We compare the rate-distortion performance of different side-information estimators, for both temporally and spatially scalable Wyner-Ziv codecs. For the temporally scalable codec we compared an established method with a new algorithm that uses a linear-motion model to produce side information. As a continuation of previous work, in this paper, we propose to use a super-resolution method to upsample the nonkey frame, for the spatially scalable codec, using the key frames as reference. We verify the performance of the spatially scalable WZ coding against the state-of-the-art video coding standard H.264/AVC.
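
    As a rough illustration of the linear-motion idea (not the authors' algorithm), the toy functions below estimate block motion between two key frames and predict the intermediate Wyner-Ziv frame halfway along each trajectory; the block size, search range, and midpoint interpolation rule are all simplifying assumptions.

    ```python
    import numpy as np

    def match_block(prev, nxt, y, x, B=8, search=4):
        """Full-search block matching: locate prev's block at (y, x) in nxt."""
        block = prev[y:y + B, x:x + B].astype(float)
        best, best_mv = np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy <= nxt.shape[0] - B and 0 <= xx <= nxt.shape[1] - B:
                    sad = np.abs(block - nxt[yy:yy + B, xx:xx + B]).sum()
                    if sad < best:
                        best, best_mv = sad, (dy, dx)
        return best_mv

    def side_information(prev, nxt, B=8):
        """Predict the frame between prev and nxt (sizes divisible by B)."""
        si = np.zeros(prev.shape)
        for y in range(0, prev.shape[0], B):
            for x in range(0, prev.shape[1], B):
                dy, dx = match_block(prev, nxt, y, x, B)
                # Linear model: the WZ frame lies halfway along the trajectory
                hy = min(max(y + dy // 2, 0), nxt.shape[0] - B)
                hx = min(max(x + dx // 2, 0), nxt.shape[1] - B)
                si[y:y + B, x:x + B] = (prev[y:y + B, x:x + B].astype(float)
                                        + nxt[hy:hy + B, hx:hx + B]) / 2
        return si
    ```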

  3. Support for an expanded tripartite influence model with gay men. (United States)

    Tylka, Tracy L; Andorka, Michael J


    This study investigated whether an expanded tripartite influence model would represent gay men's experiences. This model was extended by adding partners and gay community involvement as sources of social influence and considering dual body image pathways (muscularity and body fat dissatisfaction) to muscularity enhancement and disordered eating behaviors. Latent variable structural equation modeling analyses upheld this model for 346 gay men. Dual body image pathways to body change behaviors were supported, although three unanticipated interrelationships emerged, suggesting that muscularity and body fat concerns and behaviors may be more integrated for gay men. Internalization of the mesomorphic ideal, appearance comparison, muscularity dissatisfaction, and body fat dissatisfaction were key mediators in the model. Of the sources of social influence, friend and media pressure to be lean, gay community involvement, and partner, friend, media, and family pressures to be muscular made incremental contributions. Unexpectedly, certain sources were directly connected to body change behaviors. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Rate control scheme for consistent video quality in scalable video codec. (United States)

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q


    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since a scalable video codec provides various scalabilities to adapt the bitstream to channel conditions and terminal types, it is well suited to wired and wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation degrades the visual perception significantly. It is important to use the target bits efficiently in order to maintain a consistent video quality, or a small distortion variation, throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes control video quality in the H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.
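
    The paper's closed-form formula is not reproduced in the abstract, so the snippet below is only a hypothetical stand-in conveying the control objective: nudge the enhancement-layer quantization parameter so per-frame distortion tracks a target. The distortion-doubling heuristic and step limit are assumptions.

    ```python
    # Hypothetical QP controller, NOT the paper's formula: in H.264-style
    # codecs distortion roughly doubles per +3 QP, so the log2 of the
    # distortion ratio maps to a QP correction.
    import math

    def next_qp(qp, frame_distortion, target_distortion, step_limit=2):
        delta = 3.0 * math.log2(frame_distortion / target_distortion)
        delta = max(-step_limit, min(step_limit, round(delta)))
        # Distortion above target -> lower QP (finer quantization)
        return min(51, max(0, qp - delta))
    ```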

  5. Memory-Scalable GPU Spatial Hierarchy Construction. (United States)

    Qiming Hou; Xin Sun; Kun Zhou; Lauterbach, C; Manocha, D


    Recent GPU algorithms for constructing spatial hierarchies have achieved promising performance for moderately complex models by using the breadth-first search (BFS) construction order. While being able to exploit the massive parallelism on the GPU, the BFS order also consumes excessive GPU memory, which becomes a serious issue for interactive applications involving very complex models with more than a few million triangles. In this paper, we propose to use the partial breadth-first search (PBFS) construction order to control memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm is for kd-trees that automatically balances between the level of parallelism and intermediate memory usage. With PBFS, peak memory consumption during construction can be efficiently controlled without costly CPU-GPU data transfer. We also develop memory allocation strategies to effectively limit memory fragmentation. The resulting algorithm scales well with GPU memory and constructs kd-trees of models with millions of triangles at interactive rates on GPUs with 1 GB memory. Compared with existing algorithms, our algorithm is an order of magnitude more scalable for a given GPU memory bound. The second algorithm is for out-of-core bounding volume hierarchy (BVH) construction for very large scenes based on the PBFS construction order. At each iteration, all constructed nodes are dumped to the CPU memory, and the GPU memory is freed for the next iteration's use. In this way, the algorithm is able to build trees that are too large to be stored in the GPU memory. Experiments show that our algorithm can construct BVHs for scenes with up to 20 M triangles, several times larger than previous GPU algorithms.
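
    A schematic CPU-side rendering of the partial breadth-first search idea (the actual algorithms run on the GPU with parallel node expansion): cap the number of front nodes expanded per iteration so that peak intermediate memory stays bounded. The budget value and callback interface are invented for illustration.

    ```python
    from collections import deque

    def pbfs_build(root, split_fn, max_front=1024):
        """split_fn(node) returns child nodes (empty when node is a leaf)."""
        pending = deque([root])
        built = []
        while pending:
            # Expand at most max_front nodes this iteration (the "partial" front)
            front = [pending.popleft()
                     for _ in range(min(max_front, len(pending)))]
            for node in front:
                built.append(node)
                pending.extend(split_fn(node))   # children deferred to later fronts
        return built

    # Example: median splits over an index range until leaves of size <= 2
    def split(node):
        lo, hi = node
        if hi - lo <= 2:
            return []
        mid = (lo + hi) // 2
        return [(lo, mid), (mid, hi)]

    print(len(pbfs_build((0, 1000), split, max_front=64)))
    ```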

  6. PORFLOW modeling supporting the FY14 Saltstone special analysis

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)]; Taylor, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)]


    PORFLOW related analyses supporting the Saltstone FY14 Special Analysis (SA) described herein are based on prior modeling supporting the Saltstone FY13 SA. Notable changes to the previous round of simulations include: a) consideration of Saltstone Disposal Unit (SDU) design type 6 under “Nominal” and “Margin” conditions, b) omission of the clean cap fill from the nominal SDU 2 and 6 modeling cases as a reasonable approximation of greater waste grout fill heights, c) minor updates to the cementitious materials degradation analysis, d) use of updated I-129 sorption coefficient (Kd) values in soils, e) assignment of the pH/Eh environment of saltstone to the underlying floor concrete, considering down flow through an SDU, and f) implementation of an improved sub-model for Tc release in an oxidizing environment. These new model developments are discussed and followed by a cursory presentation of simulation results. The new Tc release sub-model produced significantly improved (smoother) flux results compared to the FY13 SA. Further discussion of PORFLOW model setup and simulation results will be presented in the FY14 SA, including dose results.

  7. Twin support vector machines models, extensions and applications

    CERN Document Server

    Jayadeva; Chandra, Suresh


    This book provides a systematic and focused study of the various aspects of twin support vector machines (TWSVM) and related developments for classification and regression. In addition to presenting most of the basic models of TWSVM and twin support vector regression (TWSVR) available in the literature, it also discusses the important and challenging applications of this new machine learning methodology. A chapter on “Additional Topics” has been included to discuss kernel optimization and support tensor machine topics, which are comparatively new but have great potential in applications. It is primarily written for graduate students and researchers in the area of machine learning and related topics in computer science, mathematics, electrical engineering, management science and finance.

  8. Modeling the capacity of riverscapes to support beaver dams (United States)

    Macfarlane, William W.; Wheaton, Joseph M.; Bouwes, Nicolaas; Jensen, Martha L.; Gilbert, Jordan T.; Hough-Snee, Nate; Shivik, John A.


    The construction of beaver dams facilitates a suite of hydrologic, hydraulic, geomorphic, and ecological feedbacks that increase stream complexity and channel-floodplain connectivity that benefit aquatic and terrestrial biota. Depending on where beaver build dams within a drainage network, they impact lateral and longitudinal connectivity by introducing roughness elements that fundamentally change the timing, delivery, and storage of water, sediment, nutrients, and organic matter. While the local effects of beaver dams on streams are well understood, broader-coverage network models that predict where beaver dams can be built and highlight their impacts on connectivity across diverse drainage networks are lacking. Here we present a capacity model to assess the limits of riverscapes to support dam-building activities by beaver across physiographically diverse landscapes. We estimated dam capacity with freely and nationally available inputs to evaluate seven lines of evidence: (1) a reliable water source, (2) riparian vegetation conducive to foraging and dam building, (3) vegetation within 100 m of the edge of stream to support expansion of dam complexes and maintain large colonies, (4) the likelihood that channel-spanning dams could be built during low flows, (5) the likelihood that a beaver dam would withstand typical floods, (6) a stream gradient that is neither too low to limit dam density nor too high to preclude the building or persistence of dams, and (7) a river that is not so large as to preclude dam building or persistence. Fuzzy inference systems were used to combine these controlling factors in a framework that also explicitly accounts for model uncertainty. The model was run for 40,561 km of streams in Utah, USA, and portions of surrounding states, predicting an overall network capacity of 356,294 dams at an average capacity of 8.8 dams/km. We validated model performance using 2852 observed dams across 1947 km of streams. The model showed
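
    A miniature, hand-rolled fuzzy inference step in the spirit of the capacity model, combining just two of the seven lines of evidence; the membership breakpoints and output scaling are invented, not the paper's calibrated values.

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def dam_capacity(gradient_pct, streamflow_cms):
        # Fuzzify two controlling factors
        grad_ok = tri(gradient_pct, 0.0, 2.0, 6.0)     # dams favor gentle slopes
        flow_ok = tri(streamflow_cms, 0.0, 1.0, 20.0)  # and not-too-large rivers
        # Rule: capacity is high only when both hold (min = fuzzy AND),
        # defuzzified here as a simple scaled output in dams/km.
        return 40.0 * min(grad_ok, flow_ok)

    print(dam_capacity(1.5, 0.8))   # gentle, small stream -> high capacity
    print(dam_capacity(8.0, 0.8))   # too steep -> near zero
    ```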

  9. WWER reactor fuel performance, modelling and experimental support. Proceedings

    International Nuclear Information System (INIS)

    Stefanova, S.; Chantoin, P.; Kolev, I.


    This publication is a compilation of 36 papers presented at the International Seminar on WWER Reactor Fuel Performance, Modelling and Experimental Support, organised by the Institute for Nuclear Research and Nuclear Energy (BG), in cooperation with the International Atomic Energy Agency. The Seminar was attended by 76 participants from 16 countries, including representatives of all major Russian plants and institutions responsible for WWER reactor fuel manufacturing, design and research. The reports are grouped in four chapters: 1) WWER Fuel Performance and Economics: Status and Improvement Prospects: 2) WWER Fuel Behaviour Modelling and Experimental Support; 3) Licensing of WWER Fuel and Fuel Analysis Codes; 4) Spent Fuel of WWER Plants. The reports from the corresponding four panel discussion sessions are also included. All individual papers are recorded in INIS as separate items

  10. Scalable persistent identifier systems for dynamic datasets (United States)

    Golodoniuc, P.; Cox, S. J. D.; Klump, J. F.


    Reliable and persistent identification of objects, whether tangible or not, is essential in information management. Many Internet-based systems have been developed to identify digital data objects, e.g., PURL, LSID, Handle, ARK. These were largely designed for identification of static digital objects. The amount of data made available online has grown exponentially over the last two decades and fine-grained identification of dynamically generated data objects within large datasets using conventional systems (e.g., PURL) has become impractical. We have compared capabilities of various technological solutions to enable resolvability of data objects in dynamic datasets, and developed a dataset-centric approach to resolution of identifiers. This is particularly important in Semantic Linked Data environments where dynamic frequently changing data is delivered live via web services, so registration of individual data objects to obtain identifiers is impractical. We use identifier patterns and pattern hierarchies for identification of data objects, which allows relationships between identifiers to be expressed, and also provides means for resolving a single identifier into multiple forms (i.e. views or representations of an object). The latter can be implemented through (a) HTTP content negotiation, or (b) use of URI querystring parameters. The pattern and hierarchy approach has been implemented in the Linked Data API supporting the United Nations Spatial Data Infrastructure (UNSDI) initiative and later in the implementation of geoscientific data delivery for the Capricorn Distal Footprints project using International Geo Sample Numbers (IGSN). This enables flexible resolution of multi-view persistent identifiers and provides a scalable solution for large heterogeneous datasets.
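
    A hedged sketch of pattern-based resolution: ordered regular-expression patterns form the hierarchy, and a view parameter (standing in for HTTP content negotiation or a URI querystring) selects among representations. The patterns and URLs are invented placeholders, not the UNSDI or IGSN deployments.

    ```python
    import re

    PATTERNS = [
        # Most specific patterns first (the hierarchy)
        (re.compile(r"^igsn/(?P<sample>[A-Z0-9]+)$"),
         {"html": "https://example.org/sample/{sample}",
          "json": "https://example.org/api/sample/{sample}.json"}),
        (re.compile(r"^igsn/"),
         {"html": "https://example.org/igsn-landing"}),
    ]

    def resolve(identifier, view="html"):
        """Resolve one identifier into the requested representation."""
        for pattern, views in PATTERNS:
            m = pattern.match(identifier)
            if m and view in views:
                return views[view].format(**m.groupdict())
        raise KeyError(f"no resolution for {identifier!r} as {view!r}")

    print(resolve("igsn/AU1234", view="json"))
    ```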

  11. A Model for an Intelligent Support Decision System in Aquaculture


    Novac Ududec, Cornelia


    The paper proposes an intelligent agent-based software system to support decision-making in aquaculture, together with an approach to fish diagnosis using informatics methods, techniques and solutions. A major purpose is to develop new methods and techniques for quick fish diagnosis, treatment and prophylaxis of known infectious and parasitic disorders that may occur in fish raised at high density in intensive raising systems. The goal of this paper is to present a model of an intelligent agents-...

  12. Information Model Translation to Support a Wider Science Community (United States)

    Hughes, John S.; Crichton, Daniel; Ritschel, Bernd; Hardman, Sean; Joyner, Ronald


    The Planetary Data System (PDS), NASA's long-term archive for solar system exploration data, has just released PDS4, a modernization of the PDS architecture, data standards, and technical infrastructure. This next generation system positions the PDS to meet the demands of the coming decade, including big data, international cooperation, distributed nodes, and multiple ways of analysing and interpreting data. It also addresses three fundamental project goals: providing more efficient data delivery by data providers to the PDS, enabling a stable, long-term usable planetary science data archive, and enabling services for the data consumer to find, access, and use the data they require in contemporary data formats. The PDS4 information architecture is used to describe all PDS data using a common model. Captured in an ontology modeling tool it supports a hierarchy of data dictionaries built to the ISO/IEC 11179 standard and is designed to increase flexibility, enable complex searches at the product level, and to promote interoperability that facilitates data sharing both nationally and internationally. A PDS4 information architecture design requirement stipulates that the content of the information model must be translatable to external data definition languages such as XML Schema, XMI/XML, and RDF/XML. To support the semantic Web standards we are now in the process of mapping the contents into RDF/XML to support SPARQL capable databases. We are also building a terminological ontology to support virtually unified data retrieval and access. This paper will provide an overview of the PDS4 information architecture focusing on its domain information model and how the translation and mapping are being accomplished.
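
    To make the translation concrete, the snippet below maps one invented dictionary attribute into RDF with rdflib; the namespace and property names are stand-ins, not actual PDS4 vocabulary.

    ```python
    # Serialize a single (invented) attribute definition as RDF/XML.
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    PDS = Namespace("http://example.org/pds4#")   # placeholder namespace
    g = Graph()
    g.bind("pds", PDS)

    attr = PDS["target_name"]
    g.add((attr, RDF.type, RDF.Property))
    g.add((attr, RDFS.label, Literal("Target Name")))
    g.add((attr, RDFS.comment,
           Literal("Name of the observed solar system body.")))

    print(g.serialize(format="xml"))
    ```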

  13. Modeling bacteria fate and transport in watersheds to support TMDLs


    Benham, B. L.; Baffaut, C.; Zeckoski, R. W.; Mankin, K. R.; Pachepsky, Y. A.; Sadeghi, A. A.; Brannan, Kevin M.; Soupir, M. L.; Habersack, M. J.


    Fecal contamination of surface waters is a critical water-quality issue, leading to human illnesses and deaths. Total Maximum Daily Loads (TMDLs), which set pollutant limits, are being developed to address fecal bacteria impairments. Watershed models are widely used to support TMDLs, although their use for simulating in-stream fecal bacteria concentrations is somewhat rudimentary. This article provides an overview of fecal microorganism fate and transport within watersheds, describes current ...

  14. Support vector machine based battery model for electric vehicles

    International Nuclear Information System (INIS)

    Wang Junping; Chen Quanshi; Cao Binggang


    The support vector machine (SVM) is a novel type of learning machine based on statistical learning theory that can map a nonlinear function successfully. As a battery is a nonlinear system, it is difficult to establish the relationship between the load voltage and the current under different temperatures and state of charge (SOC). The SVM is used to model the battery nonlinear dynamics in this paper. Tests are performed on an 80Ah Ni/MH battery pack with the Federal Urban Driving Schedule (FUDS) cycle to set up the SVM model. Compared with the Nernst and Shepherd combined model, the SVM model can simulate the battery dynamics better with small amounts of experimental data. The maximum relative error is 3.61%

  15. PORFLOW Modeling Supporting The H-Tank Farm Performance Assessment

    International Nuclear Information System (INIS)

    Jordan, J. M.; Flach, G. P.; Westbrook, M. L.


    Numerical simulations of groundwater flow and contaminant transport in the vadose and saturated zones have been conducted using the PORFLOW code in support of an overall Performance Assessment (PA) of the H-Tank Farm. This report provides technical detail on selected aspects of PORFLOW model development and describes the structure of the associated electronic files. The PORFLOW models for the H-Tank Farm PA, Rev. 1 were updated with grout, solubility, and inventory changes. The aquifer model was refined. In addition, a set of flow sensitivity runs were performed to allow flow to be varied in the related probabilistic GoldSim models. The final PORFLOW concentration values are used as input into a GoldSim dose calculator

  17. Vector-model-supported approach in prostate plan optimization

    International Nuclear Information System (INIS)

    Liu, Eva Sau Fan; Wu, Vincent Wing Cheung; Harris, Benjamin; Lehman, Margot; Pryor, David; Chan, Lawrence Wing Chi


    The lengthy time consumed by traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector-model-based method for retrieving similar radiotherapy cases was developed, drawing on the structural and physiologic features extracted from Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters, reducing the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, consisting of 30 cases each of S&S IMRT, 1-arc VMAT, and 2-arc VMAT plans, including first optimization and final optimization with and without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, vector-model-supported optimization reduced the planning time and iteration count by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration count.
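
    A simplified stand-in for the retrieval step only: choose the reference plan whose feature vector is most similar (here by cosine similarity) to the new case and reuse its planning parameters. The feature definitions and values are invented.

    ```python
    import numpy as np

    def most_similar(case_features, reference_db):
        """Return the reference record with the highest cosine similarity."""
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        sims = [cos(case_features, ref["features"]) for ref in reference_db]
        return reference_db[int(np.argmax(sims))]

    reference_db = [
        {"features": np.array([62.0, 48.5, 0.31]), "params": {"iterations": 40}},
        {"features": np.array([81.0, 55.0, 0.42]), "params": {"iterations": 55}},
    ]
    new_case = np.array([78.0, 54.0, 0.40])   # e.g. volumes, overlap metrics
    print(most_similar(new_case, reference_db)["params"])
    ```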

  18. Interactive segmentation: a scalable superpixel-based method (United States)

    Mathieu, Bérengère; Crouzil, Alain; Puel, Jean-Baptiste


    This paper addresses the problem of interactive multiclass segmentation of images. We propose a fast and efficient new interactive segmentation method called superpixel α fusion (SαF). From a few strokes drawn by a user over an image, this method extracts relevant semantic objects. To get a fast calculation and an accurate segmentation, SαF uses superpixel oversegmentation and support vector machine classification. We compare SαF with competing algorithms by evaluating its performances on reference benchmarks. We also suggest four new datasets to evaluate the scalability of interactive segmentation methods, using images from some thousand to several million pixels. We conclude with two applications of SαF.

  19. WIFIRE: A Scalable Data-Driven Monitoring, Dynamic Prediction and Resilience Cyberinfrastructure for Wildfires (United States)

    Altintas, I.; Block, J.; Braun, H.; de Callafon, R. A.; Gollner, M. J.; Smarr, L.; Trouve, A.


    Recent studies confirm that climate change will cause wildfires to increase in frequency and severity in the coming decades, especially for California and much of the North American West. The most critical sustainability issue in the midst of these ever-changing dynamics is how to achieve a new social-ecological equilibrium of this fire ecology. Wildfire wind speeds and directions change in an instant, and first responders can only be effective when they take action as quickly as the conditions change. To deliver information needed for sustainable policy and management in this dynamically changing fire regime, we must capture these details to understand the environmental processes. We are building an end-to-end cyberinfrastructure (CI), called WIFIRE, for real-time and data-driven simulation, prediction and visualization of wildfire behavior. The WIFIRE integrated CI system supports social-ecological resilience to the changing fire ecology regime in the face of urban dynamics and climate change. Networked observations, e.g., heterogeneous satellite data and real-time remote sensor data, are integrated with computational techniques in signal processing, visualization, modeling and data assimilation to provide a scalable, technological, and educational solution to monitor weather patterns to predict a wildfire's Rate of Spread. Our collaborative WIFIRE team of scientists, engineers, technologists, government policy managers, private industry, and firefighters designs and implements CI pathways that enable joint innovation for wildfire management. Scientific workflows are used as an integrative distributed programming model and simplify the implementation of engineering modules for data-driven simulation, prediction and visualization while allowing integration with large-scale computing facilities. WIFIRE will be scalable to users with different skill-levels via specialized web interfaces and user-specified alerts for environmental events broadcasted to receivers before

  20. A Cost Model for Integrated Logistic Support Activities

    Directory of Open Access Journals (Sweden)

    M. Elena Nenni


    Full Text Available An Integrated Logistic Support (ILS) service has the objective of improving a system's efficiency and availability over its life cycle. The system constructor offers the service to the customer, thereby becoming the Contractor Logistic Support (CLS). The aim of this paper is to propose an approach that supports the CLS in budget formulation. Specific goals of the model are the provision of the annual cost of ILS activities through a specific cost model and a comprehensive examination of expected benefits, costs and savings under alternative ILS strategies. A simple example derived from an industrial application is also provided to illustrate the idea. The scientific literature is lacking on this topic, and documents from the military deal only with the issue of performance measurement and are, understandably, focused on the customer's perspective. Other scientific papers are general and focused only on maintenance or life cycle management. The model developed in this paper approaches the problem from the perspective of the CLS, and it is specifically tailored to the main issues of an ILS service.

  1. Scalable multi-core model checking

    NARCIS (Netherlands)

    Laarman, Alfons


    Our modern society relies increasingly on the sound performance of digital systems. Guaranteeing that these systems actually behave correctly according to their specification is not a trivial task, yet it is essential for mission-critical systems like auto-pilots, (nuclear) power-plant controllers

  2. Transformation of UML Behavioral Diagrams to Support Software Model Checking

    Directory of Open Access Journals (Sweden)

    Luciana Brasil Rebelo dos Santos


    Full Text Available Unified Modeling Language (UML) is currently accepted as the standard for modeling (object-oriented) software, and its use is increasing in the aerospace industry. Verification and validation of complex software developed according to UML is not trivial due to the complexity of the software itself and the several different UML models/diagrams that can be used to model the behavior and structure of the software. This paper presents an approach to transform up to three different UML behavioral diagrams (sequence, behavioral state machines, and activity) into a single transition system to support model checking of software developed in accordance with UML. In our approach, properties are formalized based on use case descriptions. The transformation targets the NuSMV model checker, but we see the possibility of using other model checkers, such as SPIN. The main contribution of our work is the transformation of a non-formal language (UML) to a formal language (the input language of the NuSMV model checker), towards greater adoption of formal methods in software development practice.
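
    A hypothetical mini-translator hinting at the target side of such a transformation: a flat list of (source, target) state transitions, as might be harvested from a UML state machine, is printed as a NuSMV module. Real UML semantics (guards, events, hierarchy) need far more care than this sketch.

    ```python
    def to_nusmv(states, init, transitions):
        """Emit a minimal NuSMV module from a flat transition relation."""
        lines = ["MODULE main", "VAR",
                 f"  state : {{{', '.join(states)}}};",
                 "ASSIGN",
                 f"  init(state) := {init};",
                 "  next(state) := case"]
        for src, dst in transitions:
            lines.append(f"    state = {src} : {dst};")
        lines += ["    TRUE : state;", "  esac;"]   # stay put by default
        return "\n".join(lines)

    print(to_nusmv(["idle", "busy", "done"], "idle",
                   [("idle", "busy"), ("busy", "done")]))
    ```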

  3. A repository based on a dynamically extensible data model supporting multidisciplinary research in neuroscience. (United States)

    Corradi, Luca; Porro, Ivan; Schenone, Andrea; Momeni, Parastoo; Ferrari, Raffaele; Nobili, Flavio; Ferrara, Michela; Arnulfo, Gabriele; Fato, Marco M


    Robust, extensible and distributed databases integrating clinical, imaging and molecular data represent a substantial challenge for modern neuroscience. It is even more difficult to provide extensible software environments able to effectively target the rapidly changing data requirements and structures of research experiments. There is an increasing request from the neuroscience community for software tools addressing technical challenges about: (i) supporting researchers in the medical field to carry out data analysis using integrated bioinformatics services and tools; (ii) handling multimodal/multiscale data and metadata, enabling the injection of several different data types according to structured schemas; (iii) providing high extensibility, in order to address different requirements deriving from a large variety of applications simply through a user runtime configuration. A dynamically extensible data structure supporting collaborative multidisciplinary research projects in neuroscience has been defined and implemented. We have considered extensibility issues from two different points of view. First, the improvement of data flexibility has been taken into account. This has been done through the development of a methodology for the dynamic creation and use of data types and related metadata, based on the definition of a "meta" data model. This way, users are not constrained to a set of predefined data, and the model can be easily extended and applied to different contexts. Second, users have been enabled to easily customize and extend the experimental procedures in order to track each step of acquisition or analysis. This has been achieved through a process-event data structure, a multipurpose taxonomic schema composed of two generic main objects: events and processes. Then, a repository has been built based on such data model and structure, and deployed on distributed resources thanks to a Grid-based approach. Finally, data integration aspects have been

  5. Software performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H


    Praise from the reviewers: "The practicality of the subject in a real-world situation distinguishes this book from others available on the market." —Professor Behrouz Far, University of Calgary. "This book could replace the computer organization texts now in use that every CS and CpE student must take. . . . It is much needed, well written, and thoughtful." —Professor Larry Bernstein, Stevens Institute of Technology. A distinctive, educational text on software performance and scalability, this is the first book to take a quantitative approach to the subject.

  6. Content-Aware Scalability-Type Selection for Rate Adaptation of Scalable Video

    Directory of Open Access Journals (Sweden)

    Tekalp A Murat


    Full Text Available Scalable video coders provide different scaling options, such as temporal, spatial, and SNR scalabilities, where rate reduction by discarding enhancement layers of different scalability types results in different kinds and/or levels of visual distortion depending on the content and bitrate. This dependency between scalability type, video content, and bitrate is not well investigated in the literature. To this effect, we first propose an objective function that quantifies the flatness, blockiness, blurriness, and temporal jerkiness artifacts caused by rate reduction through spatial size, frame rate, and quantization parameter scaling. Next, the weights of this objective function are determined for different content (shot) types and different bitrates using a training procedure with subjective evaluation. Finally, a method is proposed for choosing, for each temporal segment, the scaling type that results in minimum visual distortion according to this objective function, given the content type of the temporal segment. Two subjective tests have been performed to validate the proposed procedure for content-aware selection of the best scalability type on soccer videos. Soccer videos scaled from 600 kbps to 100 kbps by the proposed content-aware selection of scalability type were found visually superior to those scaled using a single scalability option over the whole sequence.
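
    The selection step can be condensed as follows: score every scaling option for a segment with a content-weighted sum of artifact measures and keep the minimum. The artifact names echo the abstract, but all weights and values are placeholders.

    ```python
    def pick_scaling(options, weights):
        """options: {name: artifact measures}; weights: per-content weights."""
        def score(artifacts):
            return sum(weights[k] * v for k, v in artifacts.items())
        return min(options, key=lambda name: score(options[name]))

    segment_options = {
        "spatial":  {"blurriness": 0.7, "blockiness": 0.2, "jerkiness": 0.0},
        "temporal": {"blurriness": 0.0, "blockiness": 0.1, "jerkiness": 0.8},
        "SNR":      {"blurriness": 0.3, "blockiness": 0.6, "jerkiness": 0.0},
    }
    weights_for_shot = {"blurriness": 0.5, "blockiness": 0.3, "jerkiness": 0.2}
    print(pick_scaling(segment_options, weights_for_shot))
    ```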

  7. Knowledge representation to support reasoning based on multiple models (United States)

    Gillam, April; Seidel, Jorge P.; Parker, Alice C.


    Model Based Reasoning is a powerful tool used to design and analyze systems, which are often composed of numerous interactive, interrelated subsystems. Models of the subsystems are written independently and may be used together while they are still under development. Thus the models are not static. They evolve as information becomes obsolete, as improved artifact descriptions are developed, and as system capabilities change. Researchers are using three methods to support knowledge/data base growth, to track the model evolution, and to handle knowledge from diverse domains. First, the representation methodology is based on having pools, or types, of knowledge from which each model is constructed. In addition information is explicit. This includes the interactions between components, the description of the artifact structure, and the constraints and limitations of the models. The third principle we have followed is the separation of the data and knowledge from the inferencing and equation solving mechanisms. This methodology is used in two distinct knowledge-based systems: one for the design of space systems and another for the synthesis of VLSI circuits. It has facilitated the growth and evolution of our models, made accountability of results explicit, and provided credibility for the user community. These capabilities have been implemented and are being used in actual design projects.

  8. Energy modelling platforms for policy and strategy support

    International Nuclear Information System (INIS)

    Dyner, I.


    The energy field has been dominated by 'hard' modelling approaches from the engineering and economics disciplines. The recent trend towards a more liberalised environment moves away from central planning to market-based resource allocation, leading to the creation and use of strategic tools with much 'softer' specifications, in the 'systems-thinking' tradition. This paper presents the use of system dynamics in a generalised way, to provide a platform for integrated energy analysis. Issues of modularity and policy evolution are important in the design of the modelling platform to facilitate its use, and reuse. Hence the concept of a platform, rather than a model, has to be implemented in a coherent way if it is to provide sustained value for ongoing support of both government policy and corporate strategy. (author)

  9. Job Demands-Control-Support model and employee safety performance. (United States)

    Turner, Nick; Stride, Chris B; Carter, Angela J; McCaughey, Deirdre; Carroll, Anthony E


    The aim of this study was to explore whether work characteristics (job demands, job control, social support) comprising Karasek and Theorell's (1990) Job Demands-Control-Support framework predict employee safety performance (safety compliance and safety participation; Neal and Griffin, 2006). We used cross-sectional data of self-reported work characteristics and employee safety performance from 280 healthcare staff (doctors, nurses, and administrative staff) from Emergency Departments of seven hospitals in the United Kingdom. We analyzed these data using a structural equation model that simultaneously regressed safety compliance and safety participation on the main effects of each of the aforementioned work characteristics, their two-way interactions, and the three-way interaction among them, while controlling for demographic, occupational, and organizational characteristics. Social support was positively related to safety compliance, and both job control and the two-way interaction between job control and social support were positively related to safety participation. How work design is related to employee safety performance remains an important area for research and provides insight into how organizations can improve workplace safety. The current findings emphasize the importance of the co-worker in promoting both safety compliance and safety participation. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
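
    The model structure, though not the authors' structural equation machinery, can be sketched with an ordinary least squares regression carrying all two- and three-way interactions, on synthetic data; variable names and effect sizes are invented.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    n = 280
    df = pd.DataFrame({
        "demands": rng.normal(size=n),
        "control": rng.normal(size=n),
        "support": rng.normal(size=n),
    })
    # Synthetic outcome with a control x support interaction baked in
    df["participation"] = (0.3 * df["control"] + 0.4 * df["support"]
                           + 0.2 * df["control"] * df["support"]
                           + rng.normal(scale=0.5, size=n))

    # 'a * b * c' expands to main effects plus all two- and three-way terms
    fit = smf.ols("participation ~ demands * control * support", data=df).fit()
    print(fit.params)
    ```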

  10. Using Built-In Domain-Specific Modeling Support to Guide Model-Based Test Generation

    Directory of Open Access Journals (Sweden)

    Teemu Kanstrén


    Full Text Available We present a model-based testing approach to support automated test generation with domain-specific concepts. This involves a language expert, who is an expert at building test models, and domain experts, who are experts in the domain of the system under test. First, we provide a framework to support the language expert in building test models using a full (Java) programming language, with the help of simple but powerful modeling elements of the framework. Second, based on the model built with this framework, the toolset automatically forms a domain-specific modeling language that a domain expert can use to further constrain and guide test generation from these models. This makes it possible to generate a large set of test cases covering the full model, to cover chosen (constrained) parts of the model, or to manually define specific test cases on top of the model while using concepts familiar to the domain experts.

  11. Distributed Hydrologic Modeling Apps for Decision Support in the Cloud (United States)

    Swain, N. R.; Latu, K.; Christiensen, S.; Jones, N.; Nelson, J.


    Advances in computation resources and greater availability of water resources data represent an untapped resource for addressing hydrologic uncertainties in water resources decision-making. The current practice of water authorities relies on empirical, lumped hydrologic models to estimate watershed response. These models are not capable of taking advantage of many of the spatial datasets that are now available. Physically-based, distributed hydrologic models are capable of using these data resources and providing better predictions through stochastic analysis. However, there exists a digital divide that discourages many science-minded decision makers from using distributed models. This divide can be spanned using a combination of existing web technologies. The purpose of this presentation is to introduce a cloud-based environment that offers hydrologic modeling tools or 'apps' for decision support, and the web technologies selected to aid its implementation. Compared to the more commonly used lumped-parameter models, distributed models, while being more intuitive, are still data intensive, computationally expensive, and difficult to modify for scenario exploration. However, web technologies such as web GIS, web services, and cloud computing have made the data more accessible, provided an inexpensive means of high-performance computing, and created an environment for developing user-friendly apps for distributed modeling. Since many water authorities are primarily interested in scenario exploration exercises with hydrologic models, we are creating a toolkit that facilitates the development of a series of apps for manipulating existing distributed models. There are a number of hurdles that cloud-based hydrologic modeling developers face. One of these is how to work with the geospatial data inherent with this class of models in a web environment. Supporting geospatial data in a website is beyond the capabilities of standard web frameworks and it

  12. Numerical Model Metrics Tools in Support of Navy Operations (United States)

    Dykes, J. D.; Fanguy, P.


    Increasing demand for accurate ocean forecasts relevant to Navy mission decision makers calls for tools that quickly provide relevant numerical model metrics to the forecasters. Increasing modelling capabilities with ever-higher resolution domains, including coupled and ensemble systems, as well as the increasing volume of observations and other data sources against which to compare model output, require more tools that enable the forecaster to do more with less. These data can be appropriately handled in a geographic information system (GIS) and fused together to provide useful information and analyses, and ultimately a better understanding of how the pertinent model performs based on ground truth. Oceanographic measurements like surface elevation, profiles of temperature and salinity, and wave height can all be incorporated into a set of layers correlated to geographic information such as bathymetry and topography. In addition, an automated system that runs concurrently with the models on high performance machines matches routinely available observations to modelled values to form a database of matchups with which statistics can be calculated and displayed, to facilitate validation of forecast state and derived variables. ArcMAP, developed by Environmental Systems Research Institute, is a GIS application used by the Naval Research Laboratory (NRL) and naval operational meteorological and oceanographic centers to analyse the environment in support of a range of Navy missions. For example, acoustic propagation in the ocean is described with a three-dimensional analysis of sound speed that depends on profiles of temperature, pressure and salinity predicted by the Navy Coastal Ocean Model. The data and model output must include geo-referencing information suitable for accurately placing the data within the ArcMAP framework. NRL has developed tools that facilitate merging these geophysical data and their analyses, including intercomparisons between model

  13. Laplacian embedded regression for scalable manifold regularization. (United States)

    Chen, Lin; Tsang, Ivor W; Xu, Dong


    Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real
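
    A literal, toy rendering of the "summation of the original kernel and a graph kernel" idea quoted above; the choice of the Laplacian pseudo-inverse as the graph kernel and the trade-off weight are illustrative assumptions, not the paper's exact construction.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.neighbors import kneighbors_graph

    rng = np.random.default_rng(5)
    X = rng.normal(size=(30, 2))

    K = rbf_kernel(X, gamma=1.0)                      # original kernel
    W = kneighbors_graph(X, n_neighbors=5, mode="connectivity").toarray()
    W = np.maximum(W, W.T)                            # symmetrize adjacency
    L = np.diag(W.sum(axis=1)) - W                    # graph Laplacian
    K_graph = np.linalg.pinv(L)                       # captures manifold structure

    mu = 0.1                                          # trade-off weight
    K_transformed = K + mu * K_graph                  # kernel sum, as quoted
    ```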

  14. Performance of scalable coding in depth domain (United States)

    Sjöström, Mårten; Karlsson, Linda S.


    Common autostereoscopic 3D displays are based on multi-view projection. The diversity of resolutions and number of views of such displays implies a necessary flexibility of 3D content formats in order to make broadcasting efficient. Furthermore, distribution of content over a heterogeneous network should adapt to the available network capacity. Present scalable video coding provides the ability to adapt to network conditions; it allows for quality, temporal and spatial scaling of 2D video. Scalability for 3D data extends this list to the depth and the view domains. We have introduced scalability with respect to depth information. Our proposed scheme is based on the multi-view-plus-depth format where the center view data are preserved, and side views are extracted in enhancement layers depending on depth values. We investigate the performance of various layer assignment strategies: the number of layers, and the distribution of layers in depth, based either on equal numbers of pixels or on histogram characteristics. We further consider the consequences of variable distortion due to encoder parameters. The results are evaluated considering their overall distortion versus bit rate, distortion per enhancement layer, as well as visual quality appearance. Scalability with respect to depth (and views) allows for an increased number of quality steps; the cost is a slight increase of required capacity for the whole sequence. The main advantage is, however, an improved quality for objects close to the viewer, even if overall quality is worse.
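
    The two layer-assignment strategies compared above reduce to how the depth range is binned; a small numpy sketch with an invented depth map follows.

    ```python
    import numpy as np

    def assign_layers(depth, n_layers, strategy="equal_pixels"):
        """Map every pixel's depth to an enhancement-layer index."""
        if strategy == "equal_pixels":
            # Quantile edges give each layer roughly the same pixel count
            edges = np.quantile(depth, np.linspace(0, 1, n_layers + 1))
        else:
            # Equal-width bins over the depth range (histogram-style)
            edges = np.linspace(depth.min(), depth.max(), n_layers + 1)
        # digitize against the inner edges yields indices 0..n_layers-1
        return np.clip(np.digitize(depth, edges[1:-1]), 0, n_layers - 1)

    depth = np.random.default_rng(6).random((4, 4))
    print(assign_layers(depth, 3, "equal_pixels"))
    ```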

  15. Using scalable vector graphics to evolve art

    NARCIS (Netherlands)

    den Heijer, E.; Eiben, A. E.


    In this paper, we describe our investigations of the use of scalable vector graphics as a genotype representation in evolutionary art. We describe the technical aspects of using SVG in evolutionary art, and explain our custom, SVG-specific operators for initialisation, mutation and crossover. We perform

  16. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O'Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)]


    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  17. Ubicrawler: a scalable fully distributed web crawler


    Codenotti, Bruno


    We present the design and implementation of UbiCrawler, a scalable distributed web crawler, and we analyze its performance. The main features of UbiCrawler are platform independence, fault tolerance, a very effective assignment function for partitioning the domain to crawl, and, more generally, the complete decentralization of every task.
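
    UbiCrawler's assignment function is reported to be based on consistent hashing; the sketch below is a generic consistent-hashing ring rather than the authors' exact scheme. Hosts map to the nearest agent clockwise on the ring, so removing an agent reassigns only that agent's hosts.

    ```python
    import bisect
    import hashlib

    def h(value):
        """Stable integer hash of a string."""
        return int(hashlib.sha1(value.encode()).hexdigest(), 16)

    class Ring:
        def __init__(self, agents, replicas=64):
            # Each agent gets several virtual points for an even spread
            self.ring = sorted((h(f"{a}#{i}"), a)
                               for a in agents for i in range(replicas))
            self.keys = [k for k, _ in self.ring]

        def assign(self, host):
            i = bisect.bisect(self.keys, h(host)) % len(self.ring)
            return self.ring[i][1]

    ring = Ring(["agent-a", "agent-b", "agent-c"])
    print(ring.assign("www.example.com"))
    ```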

  18. Realization of a scalable airborne radar

    NARCIS (Netherlands)

    Otten, M.P.G.; Vermeulen, B.C.B.; Liempt, L.J. van; Halsema, D. van; Jongh, R.V. de; Es, J. van


    Modern airborne ground surveillance radar systems are increasingly based on Active Electronically Scanned Array (AESA) antennas. Efficient use of array technology and the need for radar solutions for various airborne platforms, manned and unmanned, leads to the design of scalable radar systems. The

  19. Scalable Detection and Isolation of Phishing

    NARCIS (Netherlands)

    Moreira Moura, Giovane; Pras, Aiko


    This paper presents a proposal for scalable detection and isolation of phishing. The main ideas are to move the protection from end users towards the network provider and to employ the novel bad neighborhood concept, in order to detect and isolate both phishing e-mail senders and phishing web

  20. Scalable Open Source Smart Grid Simulator (SGSim)

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg; Stefanni, Francesco


    The future smart power grid will consist of an unlimited number of smart devices that communicate with control units to maintain the grid’s sustainability, efficiency, and balancing. In order to build and verify such controllers over a large grid, a scalable simulation environment is needed...

  1. Models to support students’ understanding of measuring area of circles (United States)

    Rejeki, S.; Putri, R. I. I.


    Many studies have shown that a large number of students are confused about the concepts involved in measuring the area of circles. The main reason is that mathematics classroom practice emphasizes memorizing formulas rather than understanding concepts. Therefore, in this study, a set of learning activities was designed as an innovation in learning area measurement of circles. The activities involved two models, namely grid paper and reshaping, which serve respectively as a means and a strategy to support students’ learning of area measurement of circles. Design research was used as the research approach to achieve this aim. Thirty-eight 8th graders in Indonesia were involved in this study. Together with the contextual problems, the grid paper and the reshaping of sectors, used as the models in this learning, helped the students gradually develop their understanding of the area measurement of circles. The grid paper plays an important role in comparing and estimating areas, whereas the reshaping of sectors supports students’ understanding of the circumference and the area measurement of circles. These two models could be tools for promoting the informal theory of area measurement. Besides, the whole set of activities helped students distinguish between the area and the perimeter of circles.
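
    The reshaping strategy can be made precise with the standard derivation it builds toward: cutting the circle into many equal sectors and interleaving them yields an approximate parallelogram whose base is half the circumference and whose height is the radius.

    ```latex
    % Reshaping: interleaved sectors form an approximate parallelogram
    % with base C/2 and height r, so
    \[
      A \approx \tfrac{1}{2}C \cdot r = \tfrac{1}{2}(2\pi r)\,r = \pi r^{2}.
    \]
    ```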

  2. A Composite Modelling Approach to Decision Support by the Use of the CBA-DK Model

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; Salling, Kim Bang; Leleur, Steen


    This paper presents a decision support system for assessment of transport infrastructure projects. The composite modelling approach, COSIMA, combines a cost-benefit analysis by use of the CBA-DK model with multi-criteria analysis applying the AHP and SMARTER techniques. The modelling uncertaintie...

  3. Design of Graph Analysis Model to support Decision Making

    International Nuclear Information System (INIS)

    An, Sang Ha; Lee, Sung Jin; Chang, Soon Heung; Kim, Sung Ho; Kim, Tae Woon


    Korea is meeting its growing electric power needs by using nuclear, fossil, hydro and other energy sources. However, fossil energy cannot be used forever, and people's attitudes toward nature have changed. Appropriate energy sources therefore have to be prepared in advance, before more energy is needed, and the response must be dynamic because people's needs change over time. We therefore designed the Graph Analysis Model (GAM) for dynamic analysis of decisions on energy sources. It supports Analytic Hierarchy Process (AHP) analysis based on a Graphical User Interface

  4. Advancing LGBT Elder Policy and Support Services: The Massachusetts Model. (United States)

    Krinsky, Lisa; Cahill, Sean R


    The Massachusetts-based LGBT Aging Project has trained elder service providers in affirming and culturally competent care for LGBT older adults, supported development of LGBT-friendly meal programs, and advanced LGBT equality under aging policy. Working across sectors, this innovative model launched the country's first statewide Legislative Commission on Lesbian, Gay, Bisexual, and Transgender Aging. Advocates are working with policymakers to implement key recommendations, including cultural competency training and data collection in statewide networks of elder services. The LGBT Aging Project's success provides a template for improving services and policy for LGBT older adults throughout the country.

  5. Hydraulic modeling support for conflict analysis: The Manayunk canal revisited

    International Nuclear Information System (INIS)

    Chadderton, R.A.; Traver, R.G.; Rao, J.N.


    This paper presents a study which used a standard, hydraulic computer model to generate detailed design information to support conflict analysis of a water resource use issue. As an extension of previous studies, the conflict analysis in this case included several scenarios for stability analysis - all of which reached the conclusion that compromising, shared access to the water resources available would result in the most benefits to society. This expected equilibrium outcome was found to maximize benefit-cost estimates. 17 refs., 1 fig., 2 tabs

  6. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform. (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy


    With the accumulation of large amounts of health-related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized Medicine (PPPM), ultimately affecting both cost and quality of care. However, the high dimensionality and high complexity of the data involved prevent data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting-edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL (Extract, Transform, Load) process was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.
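
    The showcased analysis is built in RapidMiner/Radoop with visual tools; as a rough sketch of the same CRISP-DM-style steps in plain Python, assuming a hypothetical flat extract (the file name and column names below are placeholders, not the MIMIC-II schema):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# ETL: extract a flat table, then clean it (hypothetical columns).
df = pd.read_csv("icu_extract.csv")
df = df.dropna(subset=["platelet_count", "icu_survival"])

X = df[["platelet_count"]]
y = df["icu_survival"]            # 1 = survived ICU stay, 0 = died

# Model building and evaluation, analogous to the paper's use case of
# relating platelet count to ICU survival.
model = LogisticRegression()
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```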

  7. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Directory of Open Access Journals (Sweden)

    Sven Van Poucke

    With the accumulation of large amounts of health-related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized Medicine (PPPM), ultimately affecting both cost and quality of care. However, the high dimensionality and high complexity of the data involved prevent data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting-edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL (Extract, Transform, Load) process was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.


    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas


    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948

  9. A community college model to support nursing workforce diversity. (United States)

    Colville, Janet; Cottom, Sherry; Robinette, Teresa; Wald, Holly; Waters, Tomi


    Community College of Allegheny County (CCAC), Allegheny Campus, is situated on the North Side of Pittsburgh. The neighborhood is 60% African American. At the time of the Health Resources and Services Administration (HRSA) application, approximately one third of the students admitted to the program were African American, less than one third of whom successfully completed it. With the aid of HRSA funding, CCAC developed a model that significantly improved the success rate of disadvantaged students. Through the formation of a viable cohort, the nursing faculty nurtured success among the most at-risk students. The cohort was supported by a social worker, case managers who were nursing faculty, and tutors. Students formed study groups, actively participated in community activities, and developed leadership skills through participation in the Student Nurse Association of Pennsylvania. This article provides the rationale for the Registered Nurse (RN) Achievement Model, describes the components of RN Achievement, and discusses the outcomes of the initiative.

  10. Mass balances for a biological life support system simulation model (United States)

    Volk, Tyler; Rummel, John D.


    Design decisions to aid the development of future space based biological life support systems (BLSS) can be made with simulation models. The biochemistry stoichiometry was developed for: (1) protein, carbohydrate, fat, fiber, and lignin production in the edible and inedible parts of plants; (2) food consumption and production of organic solids in urine, feces, and wash water by the humans; and (3) operation of the waste processor. Flux values for all components are derived for a steady state system with wheat as the sole food source. The large scale dynamics of a materially closed (BLSS) computer model is described in a companion paper. An extension of this methodology can explore multifood systems and more complex biochemical dynamics while maintaining whole system closure as a focus.

  11. Making Risk Models Operational for Situational Awareness and Decision Support

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R.; Coles, Garill A.; Shoemaker, Steven V.


    Modernization of nuclear power operations control systems, in particular the move to digital control systems, creates an opportunity to modernize existing legacy infrastructure and extend plant life. Here we describe decision support tools that allow the assessment of different facets of risk and support the optimization of available resources to reduce risk as plants are upgraded and maintained. This methodology could become an integrated part of the design review process and of operations management systems. The methodology can be applied to the design of new reactors such as small modular reactors (SMRs) and be helpful in assessing the risks of different reactor configurations. Our tool provides a low-cost evaluation of alternative configurations and an expanded safety analysis by considering scenarios early in the implementation cycle, where cost impacts can be minimized. The effects of failures can be modeled and thoroughly vetted to understand their potential impact on risk. The process and tools presented here allow for an integrated assessment of risk by supporting traditional defense-in-depth approaches while taking into consideration the insertion of new digital instrument and control systems.

  12. Subspace identification of Hammerstein models using support vector machines

    International Nuclear Information System (INIS)

    Al-Dhaifallah, Mujahed


    System identification is the art of finding mathematical tools and algorithms that build an appropriate mathematical model of a system from measured input and output data. The Hammerstein model, consisting of a memoryless nonlinearity followed by a dynamic linear element, is often a good trade-off, as it can represent some dynamic nonlinear systems very accurately while remaining quite simple. Moreover, the extensive knowledge about LTI system representations can be applied to the dynamic linear block. On the other hand, finding an effective representation for the nonlinearity is an active area of research. Recently, support vector machines (SVMs) and least squares support vector machines (LS-SVMs) have demonstrated powerful abilities in approximating linear and nonlinear functions. In contrast with other approximation methods, SVMs do not require a priori structural information. Furthermore, there are well-established methods with guaranteed convergence (ordinary least squares, quadratic programming) for fitting LS-SVMs and SVMs. The general objective of this research is to develop new subspace algorithms for Hammerstein systems based on SVM regression.
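
    To make the model structure concrete: a Hammerstein system applies a static nonlinearity to the input and passes the result through a linear dynamic block. A minimal simulation sketch follows; the cubic nonlinearity and the first-order filter are arbitrary choices for illustration, not the systems studied in the record:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 500)          # input sequence

# Static memoryless nonlinearity (unknown in practice; here a cubic).
v = u + 0.5 * u**3

# Linear dynamic block: y[k] = 0.8*y[k-1] + v[k]  (first-order IIR).
y = lfilter(b=[1.0], a=[1.0, -0.8], x=v)

# Identification would then fit the nonlinearity (e.g., via SVM/LS-SVM
# regression) and the linear block (e.g., via subspace methods) from (u, y).
print(y[:5])
```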

  13. Oxide-supported metal clusters: models for heterogeneous catalysts

    International Nuclear Information System (INIS)

    Santra, A K; Goodman, D W


    Understanding the size-dependent electronic, structural and chemical properties of metal clusters on oxide supports is an important aspect of heterogeneous catalysis. Recently model oxide-supported metal catalysts have been prepared by vapour deposition of catalytically relevant metals onto ultra-thin oxide films grown on a refractory metal substrate. Reactivity and spectroscopic/microscopic studies have shown that these ultra-thin oxide films are excellent models for the corresponding bulk oxides, yet are sufficiently electrically conductive for use with various modern surface probes including scanning tunnelling microscopy (STM). Measurements on metal clusters have revealed a metal to nonmetal transition as well as changes in the crystal and electronic structures (including lattice parameters, band width, band splitting and core-level binding energy shifts) as a function of cluster size. Size-dependent catalytic reactivity studies have been carried out for several important reactions, and time-dependent catalytic deactivation has been shown to arise from sintering of metal particles under elevated gas pressures and/or reactor temperatures. In situ STM methodologies have been developed to follow the growth and sintering kinetics on a cluster-by-cluster basis. Although several critical issues have been addressed by several groups worldwide, much more remains to be done. This article highlights some of these accomplishments and summarizes the challenges that lie ahead. (topical review)

  14. Accounting Fundamentals and the Variation of Stock Price: Factoring in the Investment Scalability

    Directory of Open Access Journals (Sweden)

    Sumiyana Sumiyana


    This study develops a new return model with respect to accounting fundamentals, based on Chen and Zhang (2007), that takes investment scalability information into account. Specifically, this study splits the scale of the firm's operations into short-run and long-run investment scalabilities. We document that five accounting fundamentals explain the variation of annual stock return. The factors, comprising book value, earnings yield, short-run and long-run investment scalabilities, and growth opportunities, associate positively with stock price. The remaining factor, the pure interest rate, is negatively related to annual stock return. This study finds that introducing short-run and long-run investment scalabilities into the model improves the degree of association; in other words, they have value relevance. Finally, this study suggests that basic trading strategies will improve if investors revert to the accounting fundamentals. Keywords: accounting fundamentals; book value; earnings yield; growth opportunities; short-run and long-run investment scalabilities; trading strategy; value relevance
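
    The abstract's five-factor structure can be written schematically as a linear return specification. This is our paraphrase with the signs the abstract reports; the actual model in Chen and Zhang (2007) is a valuation-based identity, so the functional form here is only indicative:

```latex
R_t \;=\; \alpha
  \;+\; \beta_1\,\mathrm{BV}_t      % book value
  \;+\; \beta_2\,\mathrm{EY}_t      % earnings yield
  \;+\; \beta_3\,\mathrm{SS}_t      % short-run investment scalability
  \;+\; \beta_4\,\mathrm{LS}_t      % long-run investment scalability
  \;+\; \beta_5\,\mathrm{GO}_t      % growth opportunities
  \;-\; \beta_6\, r_t               % pure interest rate (negative sign)
  \;+\; \varepsilon_t .
```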

  15. Model catalytic oxidation studies using supported monometallic and heterobimetallic oxides

    Energy Technology Data Exchange (ETDEWEB)

    Ekerdt, J.G.


    This research program is directed toward a more fundamental understanding of the effects of catalyst composition and structure on the catalytic properties of metal oxides. Metal oxide catalysts play an important role in many reactions bearing on the chemical aspects of energy processes. Metal oxides are the catalysts for water-gas shift reactions, methanol and higher alcohol synthesis, isosynthesis, selective catalytic reduction of nitric oxides, and oxidation of hydrocarbons. A key limitation to developing insight into how oxides function in catalytic reactions is in not having precise information of the surface composition under reaction conditions. To address this problem we have prepared oxide systems that can be used to study cation-cation effects and the role of bridging (-O-) and/or terminal (=O) surface oxygen anion ligands in a systematic fashion. Since many oxide catalyst systems involve mixtures of oxides, we selected a model system that would permit us to examine the role of each cation separately and in pairwise combinations. Organometallic molybdenum and tungsten complexes were proposed for use, to prepare model systems consisting of isolated monomeric cations, isolated monometallic dimers and isolated bimetallic dimers supported on silica and alumina. The monometallic and bimetallic dimers were to be used as models of more complex mixed- oxide catalysts. Our current program was to develop the systems and use them in model oxidation reactions.

  16. Modeling Global Urbanization Supported by Nighttime Light Remote Sensing (United States)

    Zhou, Y.


    Urbanization, a major driver of global change, profoundly impacts our physical and social world, for example, altering carbon cycling and climate. Understanding these consequences for better scientific insights and effective decision-making unarguably requires accurate information on urban extent and its spatial distributions. In this study, we developed a cluster-based method to estimate the optimal thresholds and map urban extents from the nighttime light remote sensing data, extended this method to the global domain by developing a computational method (parameterization) to estimate the key parameters in the cluster-based method, and built a consistent 20-year global urban map series to evaluate the time-reactive nature of global urbanization (e.g. 2000 in Fig. 1). Supported by urban maps derived from nightlights remote sensing data and socio-economic drivers, we developed an integrated modeling framework to project future urban expansion by integrating a top-down macro-scale statistical model with a bottom-up urban growth model. With the models calibrated and validated using historical data, we explored urban growth at the grid level (1-km) over the next two decades under a number of socio-economic scenarios. The derived spatiotemporal information of historical and potential future urbanization will be of great value with practical implications for developing adaptation and risk management measures for urban infrastructure, transportation, energy, and water systems when considered together with other factors such as climate variability and change, and high impact weather events.

  17. Towards a Tool-Supported Quality Model for Model-Driven Engineering


    Mohagheghi, Parastoo


    This paper reviews definitions of model quality before introducing five properties of models that are important for building high-quality models. These are identified to be correctness, completeness, consistency, comprehensibility and confinement. We have earlier defined a quality model that separates intangible quality goals from tangible quality-carrying properties and practices that should be in place to support these properties.  A part of that work was to define a metamodel for deve...

  18. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report (United States)

    Luke, Edward Allen


    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  19. Urban modeling over Houston in support of SIMMER (United States)

    Barlage, M. J.; Monaghan, A. J.; Feddema, J. J.; Oleson, K. W.; Brunsell, N. A.; Wilhelmi, O.


    Extreme heat is a leading cause of weather-related human mortality in the United States. As global warming patterns continue, researchers anticipate increases in the severity, frequency and duration of extreme heat events, especially in the southern and western U.S. Many cities in these regions may have amplified vulnerability due to their rapidly evolving socioeconomic fabric (for example, growing elderly populations). This raises a series of questions about the increased health risks of urban residents to extreme heat, and about effective means of mitigation and adaptation in present and future climates. We will introduce a NASA-funded project aimed at addressing these questions via the System for Integrated Modeling of Metropolitan Extreme Heat Risk (SIMMER). Through SIMMER, we hope to advance methodology for assessing current and future urban vulnerabilities from the heat waves through the refinement and integration of physical and social science models, and to build local capacity for heat hazard mitigation and climate change adaptation in the public health sector. We will also present results from a series of sensitivity studies over Houston and surrounding area employing a recently-implemented multi-layer urban canopy model (UCM) within the Noah Land Surface Model. The UCM has multiple layers in the atmosphere to explicitly resolve the effects of buildings, and has an indoor-outdoor exchange model that directly interacts with the atmospheric boundary layer. The goal of this work, which supports the physical science component of SIMMER, is to characterize the ill-defined and uncertain parameter space, including building characteristics and spatial organization, in the new multi-layer UCM for Houston, and to assess whether and how this parameter space is sensitive to the choice of urban morphology datasets. Results focus on the seasonal and inter-annual range of both the modeled urban heat island effect and the magnitude of surface energy components and

  20. Integrated models to support multiobjective ecological restoration decisions. (United States)

    Fraser, Hannah; Rumpff, Libby; Yen, Jian D L; Robinson, Doug; Wintle, Brendan A


    Many objectives motivate ecological restoration, including improving vegetation condition, increasing the range and abundance of threatened species, and improving species richness and diversity. Although models have been used to examine the outcomes of ecological restoration, few researchers have attempted to develop models to account for multiple, potentially competing objectives. We developed a combined state-and-transition, species-distribution model to predict the effects of restoration actions on vegetation condition and extent, bird diversity, and the distribution of several bird species in southeastern Australian woodlands. The actions reflected several management objectives. We then validated the models against an independent data set and investigated how the best management decision might change when objectives were valued differently. We also used model results to identify effective restoration options for vegetation and bird species under a constrained budget. In the examples we evaluated, no one action (improving vegetation condition and extent, increasing bird diversity, or increasing the probability of occurrence for threatened species) provided the best outcome across all objectives. In agricultural lands, the optimal management actions for promoting the occurrence of the Brown Treecreeper (Climacteris picumnus), an iconic threatened species, resulted in little improvement in the extent of the vegetation and a high probability of decreased vegetation condition. This result highlights that the best management action in any situation depends on how much the different objectives are valued. In our example scenario, no management or weed control were most likely to be the best management options to satisfy multiple restoration objectives. Our approach to exploring trade-offs in management outcomes through integrated modeling and structured decision-support approaches has wide application for situations in which trade-offs exist between competing

  1. Green Transport Balanced Scorecard Model with Analytic Network Process Support

    Directory of Open Access Journals (Sweden)

    David Staš


    In recent decades, the performance of economic and non-economic activities has been required to be environmentally friendly. Transport is one of the areas with considerable potential in this respect. The main prerequisite for achieving ambitious green goals is an effective green transport evaluation system, yet such systems have been researched from the perspective of industrial companies and supply chains only sporadically. The aim of the paper is to design a conceptual framework for creating Green Transport (GT) Balanced Scorecard (BSC) models from the viewpoint of industrial companies and supply chains using an appropriate multi-criteria decision-making method. The models should allow green transport performance evaluation and support effective implementation of green transport strategies. Since the performance measures used in Balanced Scorecard models are interdependent, the Analytic Network Process (ANP) was used as the multi-criteria decision-making method. The designed conceptual framework was verified on a real supply chain in the European automotive industry.
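
    At the core of both AHP and ANP is deriving priority weights from pairwise-comparison matrices; ANP additionally assembles them into a supermatrix to capture the interdependence mentioned above. A minimal sketch of the eigenvector step, with an invented three-criterion comparison matrix (the values are ours, purely for illustration):

```python
import numpy as np

# Hypothetical pairwise comparisons of three green-transport criteria on
# Saaty's 1-9 scale; A[i, j] = relative importance of criterion i over j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# The principal eigenvector of A gives the priority weights; power
# iteration converges to it for a positive matrix.
w = np.ones(3)
for _ in range(100):
    w = A @ w
    w /= w.sum()

print(np.round(w, 3))   # roughly [0.65, 0.23, 0.12]
```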

  2. Early diagnosis model for meningitis supports public health decision making. (United States)

    Close, Rebecca M; Ejidokun, Oluwatoyin O; Verlander, Neville Q; Fraser, Graham; Meltzer, Margie; Rehman, Yasmin; Muir, Peter; Ninis, Nelly; Stuart, James M


    To develop a predictive model for rapid differential diagnosis of meningitis and meningococcal septicaemia to support public health decisions on chemoprophylaxis for contacts. Prospective study of suspected cases of acute meningitis and meningococcal septicaemia admitted to hospitals in the South West, West Midlands and London Regions of England from July 2008 to June 2009. Epidemiological, clinical and laboratory variables on admission were recorded. Logistic regression was used to derive a predictive model. Of the 719 suspected cases reported, 385 confirmed cases were included in the analysis. A peripheral blood polymorphonuclear count of >16 × 10^9/l, a serum C-reactive protein of >100 mg/l and haemorrhagic rash were strongly and independently associated with a diagnosis of bacterial meningitis or meningococcal septicaemia. Using a simple scoring system, the presence of any one of these factors gave a probability of >95% in predicting the final diagnosis. We have developed a model using laboratory and clinical factors, but not dependent on the availability of CSF, for differentiating acute bacterial from viral meningitis within a few hours of admission to hospital. This scoring system is recommended in the public health management of suspected cases of meningitis and meningococcal septicaemia to inform decisions on chemoprophylaxis. Copyright © 2011 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
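
    The reported decision rule is simple enough to transcribe directly: any one of the three admission findings pushes the predicted probability of bacterial meningitis or meningococcal septicaemia above 95%. A literal transcription of the thresholds from the abstract (the function itself is our illustration, not the published scoring sheet):

```python
def high_risk(pmn_count_1e9_per_l: float,
              crp_mg_per_l: float,
              haemorrhagic_rash: bool) -> bool:
    """True if any factor from the model is present:
    PMN count > 16 x 10^9/l, serum CRP > 100 mg/l, or haemorrhagic rash.
    Per the abstract, any one factor predicts the final diagnosis of
    bacterial meningitis/meningococcal septicaemia with >95% probability."""
    return (pmn_count_1e9_per_l > 16
            or crp_mg_per_l > 100
            or haemorrhagic_rash)

print(high_risk(18.2, 40.0, False))   # True: the PMN criterion is met
```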

  3. MOSDEN: A Scalable Mobile Collaborative Platform for Opportunistic Sensing Applications

    Directory of Open Access Journals (Sweden)

    Prem Prakash Jayaraman


    Mobile smartphones along with embedded sensors have become an efficient enabler for various mobile applications including opportunistic sensing. The hi-tech advances in smartphones are opening up a world of possibilities. This paper proposes a mobile collaborative platform called MOSDEN that enables and supports opportunistic sensing at run time. MOSDEN captures and shares sensor data across multiple apps, smartphones and users. MOSDEN supports the emerging trend of separating sensors from application-specific processing, storing and sharing. MOSDEN promotes reuse and re-purposing of sensor data, hence reducing the effort in developing novel opportunistic sensing applications. MOSDEN has been implemented on Android-based smartphones and tablets. Experimental evaluations validate the scalability and energy efficiency of MOSDEN and its suitability towards real-world applications. The results of the evaluation and lessons learned are presented and discussed in this paper.

  4. Scour around Support Structures of Scaled Model Marine Hydrokinetic Devices (United States)

    Volpe, M. A.; Beninati, M. L.; Krane, M.; Fontaine, A.


    Experiments are presented to explore scour due to flows around support structures of marine hydrokinetic (MHK) devices. Three related studies were performed to understand how submergence, scour condition, and the presence of an MHK device impact scour around the support structure (cylinder). The first study focuses on clear-water scour conditions for a cylinder of varying submergence: surface-piercing and fully submerged. The second study centers on three separate scour conditions (clear-water, transitional and live-bed) around the fully submerged cylinder. Lastly, the third study emphasizes the impact of an MHK turbine on scour around the support structure, in live-bed conditions. Small-scale laboratory testing of model devices can be used to help predict the behavior of MHK devices at full-scale. Extensive studies have been performed on single cylinders, modeling bridge piers, though few have focused on fully submerged structures. Many of the devices being used to harness marine hydrokinetic energy are fully submerged in the flow. Additionally, scour hole dimensions and scour rates have not been addressed. Thus, these three studies address the effect of structure blockage/drag, and the ambient scour conditions on scour around the support structure. The experiments were performed in the small-scale testing platform in the hydraulic flume facility (9.8 m long, 1.2 m wide and 0.4 m deep) at Bucknell University. The support structure diameter (D = 2.54 cm) was held constant for all tests. The submerged cylinder (l/D = 5) and sediment size (d50 = 790 microns) were held constant for all three studies. The MHK device (Dturbine = 10.2 cm) is a two-bladed horizontal axis turbine and the rotating shaft is friction-loaded using a metal brush motor. For each study, bed form topology was measured after a three-hour time interval using a traversing two-dimensional bed profiler. During the experiments, scour hole depth measurements at the front face of the support structure

  5. Conscientiousness in the workplace : Applying mixture IRT to investigate scalability and predictive validity

    NARCIS (Netherlands)

    Egberink, I.J.L.; Meijer, R.R.; Veldkamp, B.P.

    Mixture item response theory (IRT) models have been used to assess multidimensionality of the construct being measured and to detect different response styles for different groups. In this study a mixture version of the graded response model was applied to investigate scalability and predictive
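
    For context, the graded response model underlying the mixture version specifies cumulative category probabilities. In standard notation, with item discrimination a_i and ordered category thresholds b_ik:

```latex
P(X_i \ge k \mid \theta) \;=\;
  \frac{\exp\{a_i(\theta - b_{ik})\}}{1 + \exp\{a_i(\theta - b_{ik})\}},
\qquad b_{i1} < b_{i2} < \dots
```

    Roughly speaking, the mixture version lets these item parameters differ across latent classes, which is what allows it to pick up distinct response styles in different groups.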

  6. Scalable fabrication of perovskite solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Li, Zhen; Klein, Talysa R.; Kim, Dong Hoe; Yang, Mengjin; Berry, Joseph J.; van Hest, Maikel F. A. M.; Zhu, Kai


    Perovskite materials use earth-abundant elements, have low formation energies for deposition and are compatible with roll-to-roll and other high-volume manufacturing techniques. These features make perovskite solar cells (PSCs) suitable for terawatt-scale energy production with low production costs and low capital expenditure. Demonstrations of performance comparable to that of other thin-film photovoltaics (PVs) and improvements in laboratory-scale cell stability have recently made scale up of this PV technology an intense area of research focus. Here, we review recent progress and challenges in scaling up PSCs and related efforts to enable the terawatt-scale manufacturing and deployment of this PV technology. We discuss common device and module architectures, scalable deposition methods and progress in the scalable deposition of perovskite and charge-transport layers. We also provide an overview of device and module stability, module-level characterization techniques and techno-economic analyses of perovskite PV modules.

  7. Scalable Atomistic Simulation Algorithms for Materials Research

    Directory of Open Access Journals (Sweden)

    Aiichiro Nakano


    A suite of scalable atomistic simulation programs has been developed for materials research based on space-time multiresolution algorithms. Design and analysis of parallel algorithms are presented for molecular dynamics (MD) simulations and quantum-mechanical (QM) calculations based on the density functional theory. Performance tests have been carried out on 1,088-processor Cray T3E and 1,280-processor IBM SP3 computers. The linear-scaling algorithms have enabled 6.44-billion-atom MD and 111,000-atom QM calculations on 1,024 SP3 processors with parallel efficiency well over 90%. The production-quality programs also feature wavelet-based computational-space decomposition for adaptive load balancing, spacefilling-curve-based adaptive data compression with user-defined error bound for scalable I/O, and octree-based fast visibility culling for immersive and interactive visualization of massive simulation data.
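
    The wavelet machinery here is domain-specific, but the underlying space-filling-curve idea is easy to illustrate: map each cell's 3-D coordinates to a 1-D key, sort, and cut the sorted list into equal chunks per processor, so spatially nearby cells tend to land on the same rank. A generic Morton (Z-order) sketch, one common curve choice (our illustration, not the authors' code):

```python
def morton3d(x: int, y: int, z: int, bits: int = 10) -> int:
    """Interleave the bits of (x, y, z) into a Z-order key, so that
    cells that are close in space tend to be close on the 1-D curve."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

# Decompose: order cells along the curve, then split evenly across ranks.
cells = [(4, 7, 1), (5, 7, 1), (30, 2, 9), (4, 6, 1)]
ordered = sorted(cells, key=lambda c: morton3d(*c))
print(ordered)   # spatially close cells end up adjacent in the ordering
```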

  8. Odor supported place cell model and goal navigation in rodents

    DEFF Research Database (Denmark)

    Kulvicius, Tomas; Tamosiunaite, Minija; Ainge, James


    ...self-generated scent marks to find a food source. Here we model odor supported place cells by using a simple feed-forward network and analyze the impact of olfactory cues on place cell formation and spatial navigation. The obtained place cells are used to solve a goal navigation task by a novel mechanism based on self-marking by odor patches combined with a Q-learning algorithm. We also analyze the impact of place cell remapping on goal directed behavior when switching between two environments. We emphasize the importance of olfactory cues in place cell formation and show that the utility of environmental and self-generated olfactory cues, together with a mixed navigation strategy, improves goal directed navigation.

  9. Neutronic Modelling in Support of the Irradiation Programmes

    International Nuclear Information System (INIS)

    Koonen, E.


    Irradiation experiments are generally conducted to determine some specific characteristics of the concerned fuels and structural materials under well defined irradiation conditions. For the determination of the latter the BR2 division has an autonomous reactor physics cell and has implemented the required computational tools. The major tool used is a three-dimensional full-scale Monte Carlo model of the BR2 reactor developed under MCNP-4C for the simulation of irradiation conditions. The objectives of work performed by SCK-CEN are to evaluate and adjust irradiation conditions by adjustments of the environment, differential rod positions, axial and azimuthal positioning of the samples, global power level, ...; to deliver reliable, well defined irradiation condition and fluence data during and after irradiation; to assist the designer of new irradiation devices by simulations and neutronic optimisations of design options; to provide computational support to related projects as a way to valorise the capabilities that the BR2 reactor can offer

  10. Experiments and Modeling in Support of Generic Salt Repository Science

    Energy Technology Data Exchange (ETDEWEB)

    Bourret, Suzanne Michelle [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stauffer, Philip H. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Weaver, Douglas James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Caporuscio, Florie Andre [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Otto, Shawn [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boukhalfa, Hakim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Jordan, Amy B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chu, Shaoping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Zyvoloski, George Anthony [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Johnson, Peter Jacob [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    Salt is an attractive material for the disposition of heat-generating nuclear waste (HGNW) because of its self-sealing, viscoplastic, and reconsolidation properties (Hansen and Leigh, 2012). The rate at which salt consolidates and the properties of the consolidated salt depend on the composition of the salt, including its content of accessory minerals and moisture, and the temperature under which consolidation occurs. Physicochemical processes, such as mineral hydration/dehydration and salt dissolution and precipitation, play a significant role in defining the rate of salt structure changes. Understanding the behavior of these complex processes is paramount when considering safe design for disposal of HGNW in salt formations, so experimentation and modeling are underway to characterize these processes. This report presents experiments and simulations in support of the DOE-NE Used Fuel Disposition Campaign (UFDC) for development of drift-scale, in-situ field testing of HGNW in salt formations.

  11. Progressor: social navigation support through open social student modeling (United States)

    Hsiao, I.-Han; Bakalov, Fedor; Brusilovsky, Peter; König-Ries, Birgitta


    The increased volumes of online learning content have produced two problems: how to help students find the most appropriate resources, and how to engage them in using these resources. Personalized and social learning have been suggested as potential ways to address these problems. Our work presented in this paper combines the ideas of personalized and social learning in the context of educational hypermedia. We introduce Progressor, an innovative Web-based tool based on the concepts of social navigation and open student modeling that helps students find the most relevant resources in a large collection of parameterized self-assessment questions on Java programming. We have evaluated Progressor in a semester-long classroom study, the results of which are presented in this paper. The study confirmed the impact of the personalized social navigation support provided by the system in the target context. The interface encouraged students to explore more topics, attempt more questions, and achieve higher success rates in answering them. A deeper analysis of the social navigation support mechanism revealed that the top students successfully led the way to discovering the most relevant resources by creating clear pathways for weaker students.

  12. Scalable Density-Based Subspace Clustering

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan


    ...method that steers mining to few selected subspace clusters. Our novel steering technique reduces subspace processing by identifying and clustering promising subspaces and their combinations directly. Thereby, it narrows down the search space while maintaining accuracy. Thorough experiments on real and synthetic databases show that steering is efficient and scalable, with high-quality results. For future work, our steering paradigm for density-based subspace clustering opens research potential for speeding up other subspace clustering approaches as well.

  13. Highly Scalable Eigensolvers for Petaflop Applications


    Auckenthaler, Thomas


    This thesis presents the development of a new eigensolver for use in massively parallel systems. Current implementations lack both parallel and sequential efficiency on modern computer architectures and are becoming the bottleneck of many scientific applications, e.g. in quantum chemistry. The efficiency and scalability of the new eigensolver are unprecedented and result in an up to 10-fold improvement compared to current state-of-the-art libraries.

  14. ENDEAVOUR: A Scalable SDN Architecture for Real-World IXPs

    KAUST Repository

    Antichi, Gianni


    Innovation in interdomain routing has remained stagnant for over a decade. Recently, IXPs have emerged as economically-advantageous interconnection points for reducing path latencies and exchanging ever increasing traffic volumes among, possibly, hundreds of networks. Given their far-reaching implications on interdomain routing, IXPs are the ideal place to foster network innovation and extend the benefits of SDN to the interdomain level. In this paper, we present, evaluate, and demonstrate ENDEAVOUR, an SDN platform for IXPs. ENDEAVOUR can be deployed on a multi-hop IXP fabric, supports a large number of use cases, and is highly-scalable while avoiding broadcast storms. Our evaluation with real data from one of the largest IXPs, demonstrates the benefits and scalability of our solution: ENDEAVOUR requires around 70% fewer rules than alternative SDN solutions thanks to our rule partitioning mechanism. In addition, by providing an open source solution, we invite everyone from the community to experiment (and improve) our implementation as well as adapt it to new use cases.

  15. Optimizing Nanoelectrode Arrays for Scalable Intracellular Electrophysiology. (United States)

    Abbott, Jeffrey; Ye, Tianyang; Ham, Donhee; Park, Hongkun


    , clarifying how the nanoelectrode attains intracellular access. This understanding will be translated into a circuit model for the nanobio interface, which we will then use to lay out the strategies for improving the interface. The intracellular interface of the nanoelectrode is currently inferior to that of the patch clamp electrode; reaching this benchmark will be an exciting challenge that involves optimization of electrode geometries, materials, chemical modifications, electroporation protocols, and recording/stimulation electronics, as we describe in the Account. Another important theme of this Account, beyond the optimization of the individual nanoelectrode-cell interface, is the scalability of the nanoscale electrodes. We will discuss this theme using a recent development from our groups as an example, where an array of ca. 1000 nanoelectrode pixels fabricated on a CMOS integrated circuit chip performs parallel intracellular recording from a few hundreds of cardiomyocytes, which marks a new milestone in electrophysiology.

  16. DISP: Optimizations towards Scalable MPI Startup

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Huansong [Florida State University, Tallahassee; Pophale, Swaroop S [ORNL; Gorentla Venkata, Manjunath [ORNL; Yu, Weikuan [Florida State University, Tallahassee


    Despite the popularity of MPI for high performance computing, the startup of MPI programs faces a scalability challenge as both the execution time and memory consumption increase drastically at scale. We have examined this problem using the collective modules of Cheetah and Tuned in Open MPI as representative implementations. Previous improvements for collectives have focused on algorithmic advances and hardware off-load. In this paper, we examine the startup cost of the collective module within a communicator and explore various techniques to improve its efficiency and scalability. Accordingly, we have developed a new scalable startup scheme with three internal techniques, namely Delayed Initialization, Module Sharing and Prediction-based Topology Setup (DISP). Our DISP scheme greatly benefits the collective initialization of the Cheetah module. At the same time, it helps boost the performance of non-collective initialization in the Tuned module. We evaluate the performance of our implementation on Titan supercomputer at ORNL with up to 4096 processes. The results show that our delayed initialization can speed up the startup of Tuned and Cheetah by an average of 32.0% and 29.2%, respectively, our module sharing can reduce the memory consumption of Tuned and Cheetah by up to 24.1% and 83.5%, respectively, and our prediction-based topology setup can speed up the startup of Cheetah by up to 80%.
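
    DISP's three techniques are internal to Open MPI, but the delayed-initialization idea generalizes: defer expensive per-communicator setup until a collective actually needs it, and share the cached result afterwards. A minimal, language-agnostic sketch in Python under that reading (names and structure are ours, not Open MPI's):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def topology_for(comm_id: int) -> dict:
    """Expensive per-communicator setup (tree layout, buffers, ...).
    With delayed initialization it runs at first use rather than at
    startup; returning the cached result on later calls is the analogue
    of module sharing."""
    print(f"building topology for communicator {comm_id}")
    return {"comm": comm_id, "tree": "binomial"}

topology_for(0)   # cost paid on the first collective call
topology_for(0)   # subsequent calls reuse the cached setup
```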

  17. Scalable robotic biofabrication of tissue spheroids

    Energy Technology Data Exchange (ETDEWEB)

    Mehesz, A Nagy; Hajdu, Z; Visconti, R P; Markwald, R R; Mironov, V [Advanced Tissue Biofabrication Center, Department of Regenerative Medicine and Cell Biology, Medical University of South Carolina, Charleston, SC (United States); Brown, J [Department of Mechanical Engineering, Clemson University, Clemson, SC (United States); Beaver, W [York Technical College, Rock Hill, SC (United States); Da Silva, J V L, E-mail: [Renato Archer Information Technology Center-CTI, Campinas (Brazil)


    Development of methods for scalable biofabrication of uniformly sized tissue spheroids is essential for tissue spheroid-based bioprinting of large size tissue and organ constructs. The most recent scalable technique for tissue spheroid fabrication employs a micromolded recessed template prepared in a non-adhesive hydrogel, wherein the cells loaded into the template self-assemble into tissue spheroids due to gravitational force. In this study, we present an improved version of this technique. A new mold was designed to enable generation of 61 microrecessions in each well of a 96-well plate. The microrecessions were seeded with cells using an EpMotion 5070 automated pipetting machine. After 48 h of incubation, tissue spheroids formed at the bottom of each microrecession. To assess the quality of constructs generated using this technology, 600 tissue spheroids made by this method were compared with 600 spheroids generated by the conventional hanging drop method. These analyses showed that tissue spheroids fabricated by the micromolded method are more uniform in diameter. Thus, use of micromolded recessions in a non-adhesive hydrogel, combined with automated cell seeding, is a reliable method for scalable robotic fabrication of uniform-sized tissue spheroids.

  18. A scalable distributed RRT for motion planning

    KAUST Repository

    Jacobs, Sam Ade


    Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality, enabling a scalable result. We show that our approaches are scalable. We present results demonstrating almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine. © 2013 IEEE.
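
    The radial subdivision can be pictured concretely: each processor owns an angular wedge of the configuration space around a common root, grows its own subtree from the samples falling in that wedge, and the subtrees are stitched together afterwards. A toy 2-D region-assignment sketch (ours, not the paper's code):

```python
import math, random

def region(sample, root, num_regions):
    """Assign a 2-D sample to one of num_regions angular wedges around
    the root; each wedge would be grown by one processor."""
    angle = math.atan2(sample[1] - root[1],
                       sample[0] - root[0]) % (2 * math.pi)
    return int(angle / (2 * math.pi / num_regions))

root = (0.0, 0.0)
random.seed(1)
samples = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(6)]
for s in samples:
    print(s, "-> region", region(s, root, 4))
```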

  19. On eliminating synchronous communication in molecular simulations to improve scalability (United States)

    Straatsma, T. P.; Chavarría-Miranda, Daniel G.


    Molecular dynamics simulation, as a complementary tool to experimentation, has become an important methodology for the understanding and design of molecular systems as it provides access to properties that are difficult, impossible or prohibitively expensive to obtain experimentally. Many of the available software packages have been parallelized to take advantage of modern massively concurrent processing resources. The challenge in achieving parallel efficiency is commonly attributed to the fact that molecular dynamics algorithms are communication intensive. This paper illustrates how an appropriately chosen data distribution and asynchronous one-sided communication approach can be used to effectively deal with the data movement within the Global Arrays/ARMCI programming model framework. A new put_notify capability is presented here, allowing the implementation of the molecular dynamics algorithm without any explicit global or local synchronization or global data reduction operations. In addition, this push-data model is shown to very effectively allow hiding data communication behind computation. Rather than data movement or explicit global reductions, the implicit synchronization of the algorithm becomes the primary challenge for scalability. Without any explicit synchronous operations, the scalability of molecular simulations is shown to depend only on the ability to evenly balance computational load.


    Directory of Open Access Journals (Sweden)

    V. A. Bogatyrev


    Subject of Research. The paper deals with the effectiveness of multipath transfer of request copies through the network and their redundant service, without the use of laborious analytical modeling. A model and support tools for the design of highly reliable distributed systems based on simulation modeling have been created. Method. Many variants of organizing the service and delivery of requests through the network to the query servers are formulated and analyzed, including options for redundant service and redundant delivery of request copies to the servers. The choice of variants for the distribution and service of requests takes into account how critical queries are with respect to their residence time in the system. A request is considered successful if at least one of its copies is delivered intact to a working server that is ready to service it, and the request is fulfilled within the set time. The efficiency analysis of redundant transmission and service of requests is based on a model built in the AnyLogic 7 simulation environment. Main Results. Simulation experiments based on the proposed models have shown the effectiveness of redundant transmission of request copies (packets) to the servers in the cluster through multiple paths, with redundant service of the request copies by a group of servers in the cluster. It is shown that this solution increases the probability of exact execution of at least one copy of the request within the required time. We have evaluated the efficiency of destroying outdated request copies in the queues of network nodes and the cluster, and analyzed options for network implementation of multipath transfer of request copies to the servers in the cluster over disjoint paths, possibly differing in the number of their constituent nodes. Practical Relevance. The proposed simulation models can be used when selecting the optimal
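
    The value of redundant multipath delivery admits a one-line sanity check under an idealized independence assumption (the simulation model above captures the dependencies this ignores): if copy i is delivered and served in time with probability p_i, the request succeeds with probability

```latex
P_{\text{success}} \;=\; 1 - \prod_{i=1}^{n} (1 - p_i),
```

    so, for instance, two independently routed copies with p_1 = p_2 = 0.9 already give 1 - 0.1^2 = 0.99.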

  1. Caregiver social support quality when interacting with cancer survivors: advancing the dual-process model of supportive communication. (United States)

    Harvey-Knowles, Jacquelyn; Faw, Meara H


    Cancer caregivers often experience significant challenges in their motivation and ability to comfort cancer survivors, particularly in a spousal or romantic context. Spousal cancer caregivers have been known to report even greater levels of burden and distress than cancer sufferers, yet still take on the role of informal caregiver so they can attend to their partner's needs. The current study tested whether a theoretical model of supportive outcomes, the dual-process model of supportive communication, explained variations in cancer caregivers' motivation and ability to create high-quality support messages. The study also tested whether participant engagement in reflective journaling about supportive acts was associated with increased motivation or ability to generate high-quality support messages. Based on the dual-process model, we posited that, following supportive journaling tasks, caregivers of spouses currently managing a cancer experience would report greater motivation but also greater difficulty in generating high-quality support messages, while individuals caring for a patient in remission would report lower motivation but greater ability to create high-quality support messages. Findings provided support for these assertions and suggested that reflective journaling tasks might be a useful tool for improving remission caregivers' ability to provide high-quality social support to survivors. Corresponding theoretical and applied implications are discussed.

  2. An integrated crop model and GIS decision support system for assisting agronomic decision making under climate change. (United States)

    Kadiyala, M D M; Nedumaran, S; Singh, Piara; S, Chukka; Irshad, Mohammad A; Bantilan, M C S


    The semi-arid tropical (SAT) regions of India suffer from low productivity, which may be further aggravated by anticipated climate change. The present study analyzes the spatial variability of climate change impacts on groundnut yields in the Anantapur district of India and examines the relative contribution of adaptation strategies. For this purpose, a web-based decision support tool that integrates a crop simulation model and a Geographical Information System (GIS) was developed to assist agronomic decision making; this tool can be scaled to any location and crop. The climate change projections of five global climate models (GCMs) relative to the 1980-2010 baseline for Anantapur district indicate an increase in rainfall activity of 10.6 to 25% during the mid-century period (2040-69) with RCP 8.5. The GCMs also predict warming exceeding 1.4 to 2.4°C by 2069 in the study region. The spatial crop responses to the projected climate indicate a decrease in groundnut yields with four GCMs (MPI-ESM-MR, MIROC5, CCSM4 and HadGEM2-ES) and a contrasting 6.3% increase with the GCM GFDL-ESM2M. The simulation studies using the CROPGRO-Peanut model reveal that groundnut yields can be increased on average by 1.0%, 5.0%, 14.4%, and 20.2% by adopting the adaptation options of heat-tolerant cultivars, drought-tolerant cultivars, supplemental irrigation, and a combination of drought-tolerant cultivars and supplemental irrigation, respectively. The spatial patterns of the relative benefits of adaptation options differed geographically, and the greatest benefits can be achieved by adopting new drought-tolerant cultivars together with one supplemental irrigation at 60 days after sowing. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Modelling of the Human Knee Joint Supported by Active Orthosis (United States)

    Musalimov, V.; Monahov, Y.; Tamre, M.; Rõbak, D.; Sivitski, A.; Aryassov, G.; Penkov, I.


    The article discusses motion of a healthy knee joint in the sagittal plane and motion of an injured knee joint supported by an active orthosis. A kinematic scheme of a mechanism for the simulation of knee joint motion is developed, and the motion of healthy and injured knee joints is modelled in Matlab. Angles between the links, which simulate the femur and tibia, are controlled by a Simulink Model Predictive Control (MPC) block. The results of simulation have been compared with several samples of real motion of the human knee joint obtained from motion capture systems. On the basis of these analyses, and also of the analysis of the forces created in human lower limbs during motion, an active smart orthosis is developed. The orthosis design was optimized to achieve an energy-saving system with sufficient anatomy, necessary reliability, easy exploitation and low cost. With the orthosis it is possible to unload the knee joint, and also partially or fully compensate the muscle forces required for the bending of the lower limb.

  4. Overcoming barriers to development of cooperative medical decision support models. (United States)

    Hudson, Donna L; Cohen, Maurice E


    Attempts to automate the medical decision-making process have been underway for at least fifty years, beginning with data-based approaches that relied chiefly on statistical methods. Approaches expanded to include knowledge-based systems, both linear and non-linear neural networks, agent-based systems, and hybrid methods. While some of these models produced excellent results, none have been used extensively in medical practice. To move these methods forward into practical use, a number of obstacles must be overcome, including validation of existing systems on large data sets, development of methods for incorporating new knowledge as it becomes available, construction of a broad range of decision models, and development of non-intrusive methods that allow the physician to use these decision aids in conjunction with, not instead of, his or her own medical knowledge. None of these four requirements will come easily. A cooperative effort among researchers, including practicing MDs, is vital, particularly as information on diseases and their contributing factors continues to expand, resulting in more parameters than the human decision maker can process effectively. In this article some of the basic structures necessary to facilitate the use of an automated decision support system are discussed, along with potential methods for overcoming existing barriers.

  5. Modelling of the Human Knee Joint Supported by Active Orthosis

    Directory of Open Access Journals (Sweden)

    Musalimov V.


    The article discusses the motion of a healthy knee joint in the sagittal plane and the motion of an injured knee joint supported by an active orthosis. A kinematic scheme of a mechanism for simulating knee joint motion is developed, and the motions of healthy and injured knee joints are modelled in Matlab. The angles between the links, which simulate the femur and tibia, are controlled by a Simulink model predictive control (MPC) block. The simulation results have been compared with several samples of real human knee joint motion obtained from motion capture systems. On the basis of these analyses, and of the analysis of the forces created in the human lower limbs during motion, an active smart orthosis is developed. The orthosis design was optimized to achieve an energy-saving system with adequate anatomical fit, the necessary reliability, easy operation and low cost. With the orthosis it is possible to unload the knee joint and also to partially or fully compensate the muscle forces required for bending the lower limb.

  6. Observations and models of centrifugally supported magnetospheres in massive stars (United States)

    Oksala, Mary Elizabeth

    Magnetic massive stars, via their strong magnetic fields and radiation-driven winds, strongly influence the dynamical and chemical evolution of their surroundings. The interaction between these two intrinsic stellar properties can produce dynamic circumstellar structures and, in the case of rapidly rotating stars, centrifugally supported magnetospheres. This thesis uses new observations to confront current magnetosphere models, testing their predictive power using photometry and spectropolarimetry of the prototypical magnetic B2Vp star sigma Ori E. In addition, we present the discovery of a magnetic field in a second rapidly rotating massive star. At the time of its discovery, this star was the most rapidly rotating non-degenerate magnetic star. We begin with an overview of magnetism in massive stars and wind-field interactions (Chapter 2) and the observational techniques involved in their study (Chapter 3), and summarize historical studies of sigma Ori E (Chapter 4). Chapter 5 describes the detection of rotational braking in sigma Ori E. We find a 77 ms yr⁻¹ lengthening of the rotational period, corresponding to a spindown time of 1.34 (+0.10/−0.09) Myr. This observed period change agrees well with theoretical predictions for angular momentum loss in a magnetically channeled, line-driven wind. Next we present new spectropolarimetric observations of sigma Ori E (Chapter 6). The observed Hα variability matches the predictions of a rigidly rotating magnetosphere (RRM) model with an offset dipole magnetic field configuration. However, our new, precise longitudinal magnetic field measurements reveal significant discrepancies with respect to the RRM model, challenging its current form as applied to sigma Ori E and suggesting that the field configuration of this star is more complex than a simple dipole. Chapter 7 describes the first detection of a magnetic field in the B2Vn star HR 7355. From analyzing photometric data, we find a 0.5214404(6) d rotational period

  7. Support Vector Machines for Petrophysical Modelling and Lithoclassification (United States)

    Al-Anazi, Ammal Fannoush Khalifah


    Given the increasing challenges of oil and gas production from partially depleted conventional or unconventional reservoirs, reservoir characterization is a key element of the reservoir development workflow. Reservoir characterization impacts well placement, injection and production strategies, and field management. It projects point and line data to a large three-dimensional volume. The relationship between variables, e.g. porosity and permeability, is often established by regression, yet the complexities between measured variables often lead to poor correlation coefficients between the regressed variables. Recent advances in machine learning have provided attractive alternatives for constructing interpretation models of rock properties in heterogeneous reservoirs. Here, Support Vector Machines (SVMs), a class of learning machine formulated to output regression models and classifiers with competitive generalization capability, are explored to determine their ability to capture the relationship, both in regression and in classification, between reservoir rock properties. This thesis documents research on the capability of SVMs to model petrophysical and elastic properties in heterogeneous sandstone and carbonate reservoirs. Specifically, the capabilities of SVM regression and classification have been examined and compared to neural network-based methods, namely multilayered neural networks, radial basis function neural networks, general regression neural networks, probabilistic neural networks, and linear discriminant analysis. The petrophysical properties evaluated include porosity, permeability, Poisson's ratio and Young's modulus. Statistical error analysis reveals that the SVM method yields comparable or superior predictions of petrophysical and elastic rock properties, and classification of lithology, compared to neural networks. The SVM method also shows uniform prediction capability under the
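
    As a toy illustration of the kind of SVM regression this record describes, the hedged sketch below fits an RBF-kernel support vector regressor to a synthetic porosity-permeability relationship and compares it with plain linear regression. The data, the porosity-permeability form, and all hyperparameters are assumptions for illustration, not the thesis's data or settings.

```python
# Hypothetical porosity -> log-permeability regression with an SVM.
# Synthetic data; kernel and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
porosity = rng.uniform(0.05, 0.30, 300)
# assumed nonlinear trend (log10 of permeability in mD) plus noise
log_perm = 2.0 + 25.0 * porosity - 40.0 * porosity**2 + rng.normal(0, 0.3, 300)
X = porosity.reshape(-1, 1)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, log_perm)
lin = LinearRegression().fit(X, log_perm)

print("SVR    R^2:", round(r2_score(log_perm, svr.predict(X)), 3))
print("linear R^2:", round(r2_score(log_perm, lin.predict(X)), 3))
```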

  8. Enhanced JPEG2000 Quality Scalability through Block-Wise Layer Truncation

    Directory of Open Access Journals (Sweden)

    Auli-Llinas Francesc


    Quality scalability is an important feature of image and video coding systems. In JPEG2000, quality scalability is achieved through quality layers that are formed in the encoder through rate-distortion optimization techniques. Quality layers provide optimal rate-distortion representations of the image when the codestream is transmitted and/or decoded at layer boundaries. Nonetheless, applications such as interactive image transmission, video streaming, or transcoding demand layer fragmentation. The common approach to truncating layers is to keep the initial prefix of the to-be-truncated layer, which may greatly penalize the quality of decoded images, especially when the layer allocation is inadequate. So far, only one method providing enhanced quality scalability for compressed JPEG2000 imagery has been proposed in the literature. However, that method provides quality scalability at the expense of high computational costs, which prevents its use in the aforementioned applications. This paper introduces a Block-Wise Layer Truncation (BWLT) that, requiring negligible computational costs, enhances the quality scalability of compressed JPEG2000 images. The main insight behind BWLT is to dismantle and reassemble the to-be-fragmented layer by selecting the most relevant codestream segments of codeblocks within that layer. The selection process is based on a rate-distortion model that finely estimates the rate-distortion contributions of codeblocks. Experimental results suggest that BWLT achieves near-optimal performance even when the codestream contains a single quality layer.
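
    The selection idea can be illustrated with a small greedy sketch: rank each codeblock's candidate segment by estimated distortion reduction per byte, and fill the fragment budget in slope order rather than keeping a blind prefix. The segment data and the slope model below are invented for illustration; this is not the BWLT implementation.

```python
# Greedy rate-distortion-slope selection of codeblock segments (illustrative).
def truncate_layer(segments, byte_budget):
    """segments: list of (codeblock_id, nbytes, distortion_reduction)."""
    ranked = sorted(segments, key=lambda s: s[2] / s[1], reverse=True)
    chosen, used = [], 0
    for cb, nbytes, dred in ranked:
        if used + nbytes <= byte_budget:   # take the steepest slopes first
            chosen.append(cb)
            used += nbytes
    return chosen, used

segs = [("cb0", 120, 9.0), ("cb1", 300, 30.0), ("cb2", 80, 2.0), ("cb3", 200, 24.0)]
print(truncate_layer(segs, 500))   # -> (['cb3', 'cb1'], 500)
```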

  9. Efficient Region-of-Interest Scalable Video Coding with Adaptive Bit-Rate Control

    Directory of Open Access Journals (Sweden)

    Dan Grois


    This work relates to region-of-interest (ROI) coding, a desirable feature in future applications based on scalable video coding, which is an extension of the H.264/MPEG-4 AVC standard. Due to dramatic technological progress, there is a plurality of heterogeneous devices that can be used for viewing a variety of video content. Devices such as smartphones and tablets are mostly resource-limited, which makes it difficult to display high-quality content. Usually, the displayed video content contains one or more ROI(s), which should be adaptively selected from the pre-encoded scalable video bitstream. Thus, an efficient scalable ROI video coding scheme is proposed in this work, enabling the extraction of the desired regions-of-interest and the adaptive setting of the desired ROI location, size, and resolution. In addition, an adaptive bit-rate control is provided for region-of-interest scalable video coding. The performance of the presented techniques is demonstrated and compared with the joint scalable video model reference software (JSVM 9.19), showing significant bit-rate savings at the cost of relatively low PSNR degradation.

  10. Using explanatory models to derive simple tools for Advanced Life Support system studies - Crop Modelling (United States)

    Cavazzoni, J.

    System-level analyses for Advanced Life Support (ALS) require mathematical models for various processes, such as biomass production and waste management, which would ideally be integrated into overall system models. Explanatory models (also referred to as mechanistic or process models) would provide the basis for a more robust system model, as these would be based on an understanding of processes specific to ALS studies. However, integrating such models may not always be practicable because of their complexity, especially for initial system-level analyses where simple sub-models may be satisfactory. One way to address this is to capture important features of explanatory models in simple models that may be readily integrated for system-level analyses. In this paper, explanatory crop models were used to generate parameters and multi-variable polynomial equations for basic models that are suitable for estimating the direction and magnitude of daily changes in canopy gas-exchange, harvest index, and production scheduling due to off-nominal conditions for ALS system studies. The simplest variant of these models consists of only a few equations, and has been integrated into a top-level SIMULINK model for the Bioregenerative Planetary Life Support Systems Test Complex (BIO-Plex), a large-scale human-rated test facility under development at NASA Johnson Space Center. When included in system studies, the simple crop models may help identify issues that need to be addressed using more detailed modeling studies and specific experiments. Similar modeling simplifications may also prove useful for other ALS sub-systems, as well as for Earth system applications.
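
    A minimal sketch of the simplification strategy described here: sample a (stand-in) explanatory model over a grid of off-nominal conditions, then fit a multi-variable polynomial that can serve as a cheap sub-model. The stand-in response surface and the variable ranges are assumptions for illustration.

```python
# Fit a polynomial surrogate to an "explanatory" model's response surface.
import numpy as np

def explanatory_model(dT, dCO2):
    # Stand-in for a mechanistic crop model's daily canopy gas-exchange change
    return -0.8 * dT + 0.02 * dCO2 - 0.05 * dT**2 + 0.001 * dT * dCO2

dT, dCO2 = np.meshgrid(np.linspace(-5, 5, 11), np.linspace(-100, 100, 11))
t, c = dT.ravel(), dCO2.ravel()
y = explanatory_model(t, c)

# Design matrix for a quadratic polynomial in the two off-nominal offsets
A = np.column_stack([np.ones_like(t), t, c, t * c, t**2, c**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted polynomial coefficients:", np.round(coef, 4))
```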

  11. Autonomy support, need satisfaction, and motivation for support among adults with intellectual disability: Testing a self-determination theory model

    NARCIS (Netherlands)

    Frielink, Noud; Schuengel, Carlo; Embregts, Petri J.C.M.


    The tenets of self-determination theory as applied to support were tested with structural equation modelling for 186 people with ID with a mild to borderline level of functioning. The results showed that (a) perceived autonomy support was positively associated with autonomous motivation and with

  12. Autonomy Support, Need Satisfaction, and Motivation for Support among Adults with Intellectual Disability: Testing a Self-Determination Theory Model (United States)

    Frielink, Noud; Schuengel, Carlo; Embregts, Petri J. C. M.


    The tenets of self-determination theory as applied to support were tested with structural equation modelling for 186 people with ID with a mild to borderline level of functioning. The results showed that (a) perceived autonomy support was positively associated with autonomous motivation and with satisfaction of need for autonomy, relatedness, and…

  13. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement (United States)

    Wu, Alex; Song, Youhong; van Oosterom, Erik J.; Hammer, Graeme L.


    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation. PMID:27790232
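
    One classic cross-scale link of the kind reviewed here can be sketched in a few lines: a leaf-level light-response curve integrated over canopy depth with Beer's-law light attenuation. The functional forms and parameter values below are illustrative assumptions, not taken from the review.

```python
# Leaf-to-canopy upscaling sketch: integrate a leaf light response over LAI.
import numpy as np

def leaf_photo(I, amax=30.0, alpha=0.05):
    """Rectangular hyperbola: leaf net assimilation vs absorbed light I."""
    return amax * alpha * I / (amax + alpha * I)

def canopy_photo(I0=1500.0, lai=4.0, k=0.6, layers=50):
    dL = lai / layers
    depths = (np.arange(layers) + 0.5) * dL
    I = I0 * np.exp(-k * depths)       # Beer's-law light at each canopy depth
    return np.sum(leaf_photo(I) * dL)  # integrate assimilation over LAI

print("canopy assimilation ~", round(canopy_photo(), 2), "(assumed umol m-2 s-1)")
```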

  14. GSKY: A scalable distributed geospatial data server on the cloud (United States)

    Rozas Larraondo, Pablo; Pringle, Sean; Antony, Joseph; Evans, Ben


    Earth systems, environmental and geophysical datasets are extremely valuable sources of information about the state and evolution of the Earth. The ability to combine information from different geospatial collections is in increasing demand from the scientific community, and requires managing and manipulating data in different formats and performing operations such as map reprojection, resampling and other transformations. Due to the large data volumes inherent in these collections, storing multiple copies of them is infeasible, so such data manipulation must be performed on the fly using efficient, high-performance techniques. Ideally this should be done by a trusted data service using common system libraries to ensure wide use and reproducibility. Recent developments in distributed computing based on dynamic access to significant cloud infrastructure open the door to such new ways of processing geospatial data on demand. The National Computational Infrastructure (NCI), hosted at the Australian National University (ANU), holds over 10 Petabytes of nationally significant research data collections. Some of these collections, which comprise a variety of observed and modelled geospatial data, are now made available via a highly distributed geospatial data server called GSKY (pronounced [jee-skee]). GSKY supports on-demand processing of large geospatial data products such as satellite earth observation data as well as numerical weather products, allowing interactive exploration and analysis of the data. It dynamically and efficiently distributes the required computations among cloud nodes, providing a scalable analysis framework that can adapt to serve a large number of concurrent users. Typical geospatial workflows handling different file formats and data types, or blending data in different coordinate projections and spatio-temporal resolutions, are handled transparently by GSKY. This is achieved by decoupling the data ingestion and indexing process as

  15. Towards Scalable Strain Gauge-Based Joint Torque Sensors (United States)

    D’Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G.; Cuschieri, Alfred


    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design for a strain gauge-based mono-axial torque sensor, referred to as the square-cut torque sensor (SCTS), whose significant features are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS was better in terms of symmetry (clockwise and counterclockwise rotation) and more linear. These capabilities have been shown through finite element modeling (ANSYS) confirmed by data obtained in load testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR). PMID:28820446

  16. Characterization of infiltration rates from landfills: supporting groundwater modeling efforts. (United States)

    Moo-Young, Horace; Johnson, Barnes; Johnson, Ann; Carson, David; Lew, Christine; Liu, Salley; Hancocks, Katherine


    The purpose of this paper is to review the literature characterizing infiltration rates from landfill liners to support groundwater modeling efforts. The focus of this investigation was on collecting studies that describe the performance of liners 'as installed' or 'as operated'. This document reviews the state of the science and practice on infiltration rates through compacted clay liners (CCLs) for 149 sites and geosynthetic clay liners (GCLs) for 1 site. In addition, it reviews leakage rates through geomembrane (GM) liners and composite liners for 259 sites. For compacted clay liners, information on infiltration rates was limited (only 9 sites reported them), so it was difficult to develop a national distribution. The field hydraulic conductivities for natural clay liners range from 1 × 10⁻⁹ cm s⁻¹ to 1 × 10⁻⁴ cm s⁻¹, with an average of 6.5 × 10⁻⁸ cm s⁻¹. Information on geosynthetic clay liners was also limited. For composite-lined and geomembrane systems, leak detection system flow rates were used. The average monthly flow rate for composite liners ranged from 0-32 lphd for geomembrane-and-GCL systems to 0-1410 lphd for geomembrane-and-CCL systems. The increased infiltration for the geomembrane-and-CCL system may be attributed to consolidation water from the clay.

  17. Improving Sparsity and Scalability in Regularized Nonconvex Truncated-Loss Learning Problems. (United States)

    Tao, Qing; Wu, Gaowei; Chu, Dejun


    The truncated regular L₁-loss support vector machine can eliminate an excessive number of support vectors (SVs); thus, it has significant advantages in robustness and scalability. However, in this paper we discover that the associated state-of-the-art solvers, such as the difference-of-convex algorithm and the concave-convex procedure, not only have a limited sparsity-promoting property for general truncated losses, especially the L₂-loss, but also scale poorly for large-scale problems. To circumvent these drawbacks, we present a general multistage scheme with an explicit interpretation regarding SVs as well as outliers. In particular, we solve the general nonconvex truncated-loss minimization through a sequence of associated convex subproblems, in which the outliers are removed in advance. The proposed algorithm can be regarded as a structural optimization attempt that carefully considers the sparsity imposed by the nonconvex truncated losses. We show that this general multistage algorithm offers sufficient sparsity, especially for the truncated L₂-loss. To further improve scalability, we propose a linear multistep algorithm, which employs a single iteration of coordinate descent to monotonically decrease the objective function at each stage, and a kernel algorithm, which uses the Karush-Kuhn-Tucker conditions to cheaply find most of the outliers for the next stage. Comparison experiments demonstrate that our methods are superior in sparsity as well as scalability.
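
    The multistage idea can be sketched as follows: repeatedly solve a convex (here squared-hinge) SVM subproblem, then remove points whose loss exceeds the truncation level before the next stage. scikit-learn's LinearSVC stands in for the convex solver; the data, threshold, and stage count are assumptions, not the paper's settings.

```python
# Multistage truncated-loss sketch: alternate convex SVM fits with outlier removal.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=5, flip_y=0.1, random_state=0)
ys = 2 * y - 1                       # labels in {-1, +1}
keep = np.ones(len(y), dtype=bool)
tau = 4.0                            # assumed truncation level for the squared hinge

for stage in range(5):
    clf = LinearSVC(loss="squared_hinge", C=1.0, max_iter=5000)
    clf.fit(X[keep], y[keep])
    margin = ys * clf.decision_function(X)
    loss = np.maximum(0.0, 1.0 - margin) ** 2
    new_keep = loss < tau            # points at/above tau are treated as outliers
    if np.array_equal(new_keep, keep):
        break
    keep = new_keep

print("kept", keep.sum(), "of", len(y), "points after", stage + 1, "stages")
```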

  18. Is there a need for hydrological modelling in decision support systems for nuclear emergencies

    International Nuclear Information System (INIS)

    Raskob, W.; Heling, R.; Zheleznyak, M.


    This paper discusses the role of hydrological modelling in decision support systems for nuclear emergencies. In particular, recent developments such as the radionuclide transport models integrated into the decision support system RODOS are explored. Recent progress in the implementation of physically based distributed hydrological models for operational forecasting in national and supranational centres may support closer cooperation between national hydrological services and therefore strengthen the use of the hydrological and radiological models implemented in decision support systems. (authors)

  19. Scalable Task Assignment for Heterogeneous Multi-Robot Teams

    Directory of Open Access Journals (Sweden)

    Paula García


    This work deals with the development of a dynamic task assignment strategy for heterogeneous multi-robot teams in typical real-world scenarios. The strategy must be efficiently scalable to support problems of increasing complexity with minimum designer intervention. To this end, we have selected a very simple auction-based strategy, which has been implemented and analysed in a multi-robot cleaning problem that requires strong coordination and dynamic organization of complex subtasks. We show that the selection of a simple auction strategy provides a linear increase in computational cost with the number of robots that make up the team, and allows highly complex assignment problems to be solved in dynamic conditions by means of a hierarchical sub-auction policy. To coordinate and control the team, a layered behaviour-based architecture has been applied that allows the auction-based strategy to be reused to achieve different coordination levels.
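
    A single-item auction round of the general kind described here can be sketched in a few lines: each robot bids its estimated cost for each open task, and the cheapest bid wins, one task at a time. The cost model and task names below are invented placeholders; the paper's hierarchical sub-auctions are not reproduced.

```python
# Minimal sequential single-item auction for task assignment (illustrative).
def auction(robots, tasks, cost):
    """robots: ids; tasks: ids; cost(r, t) -> estimated execution cost."""
    assignment = {}
    open_tasks = list(tasks)
    while open_tasks:
        # every robot bids on every open task; award the single best bid
        r, t, _ = min(((r, t, cost(r, t)) for r in robots for t in open_tasks),
                      key=lambda bid: bid[2])
        assignment[t] = r
        open_tasks.remove(t)
    return assignment

cost = lambda r, t: (sum(map(ord, r + t)) % 50) + 1   # stand-in cost estimate
print(auction(["r1", "r2", "r3"], ["clean_A", "clean_B", "clean_C"], cost))
```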

  20. CloudETL: Scalable Dimensional ETL for Hadoop and Hive

    DEFF Research Database (Denmark)

    Xiufeng, Liu; Thomsen, Christian; Pedersen, Torben Bach

    Extract-Transform-Load (ETL) programs process data from sources into data warehouses (DWs). Due to the rapid growth of data volumes, there is an increasing demand for systems that can scale on demand. Recently, much attention has been given to MapReduce, a framework for highly parallel handling of massive data sets in cloud environments. The MapReduce-based Hive has been proposed as a DBMS-like system for DWs and provides good and scalable analytical features. It is, however, still challenging to do proper dimensional ETL processing with Hive; for example, UPDATEs are not supported, which makes handling of slowly changing dimensions (SCDs) very difficult. To remedy this, we here present the cloud-enabled ETL framework CloudETL. CloudETL uses the open source MapReduce implementation Hadoop to parallelize the ETL execution and to process data into Hive. The user defines the ETL process...
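
    For readers unfamiliar with why missing UPDATEs hurt, the sketch below shows the type-2 slowly-changing-dimension bookkeeping that dimensional ETL needs: close the current version of a row and append a new one. It uses pandas purely for illustration (CloudETL itself does this with Hadoop jobs; the column names are assumptions).

```python
# SCD type-2 sketch: versioned dimension rows with validity intervals.
import pandas as pd

dim = pd.DataFrame({"customer_id": [1], "city": ["Aalborg"],
                    "valid_from": ["2011-01-01"], "valid_to": [None]})
change = {"customer_id": 1, "city": "Copenhagen", "date": "2012-06-01"}

# Close the current version of the row, then append the new version.
cur = (dim.customer_id == change["customer_id"]) & dim.valid_to.isna()
dim.loc[cur, "valid_to"] = change["date"]
dim = pd.concat([dim, pd.DataFrame([{"customer_id": change["customer_id"],
                                     "city": change["city"],
                                     "valid_from": change["date"],
                                     "valid_to": None}])], ignore_index=True)
print(dim)
```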

  1. Modelling Supported Driving as an Optimal Control Cycle: Framework and Model Characteristics

    NARCIS (Netherlands)

    Wang, M.; Treiber, M.; Daamen, W.; Hoogendoorn, S.P.; Van Arem, B.


    Driver assistance systems support drivers in operating vehicles in a safe, comfortable and efficient way, and thus may induce changes in traffic flow characteristics. This paper puts forward a receding horizon control framework to model driver assistance and cooperative systems. The accelerations of
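
    A receding-horizon controller of the general kind this framework describes can be sketched as follows: at each step, pick the acceleration minimizing a cost over a short prediction horizon, apply only the first move, and re-plan. The car-following dynamics, cost weights, and horizon below are assumptions for illustration, not the paper's model.

```python
# Receding-horizon (MPC-style) acceleration choice for a following vehicle.
import numpy as np

def rollout_cost(v, gap, a, v_lead=28.0, dt=0.5, horizon=10, v_des=30.0):
    """Simulate holding acceleration a over the horizon; accumulate cost."""
    cost = 0.0
    for _ in range(horizon):
        v = max(0.0, v + a * dt)
        gap += (v_lead - v) * dt
        cost += (v - v_des) ** 2 + 50.0 / max(gap, 1.0) + 0.5 * a ** 2
    return cost

v, gap = 25.0, 40.0
for step in range(20):                       # closed loop: re-plan every step
    a = min(np.linspace(-3, 2, 26), key=lambda a: rollout_cost(v, gap, a))
    v = max(0.0, v + a * 0.5)                # apply only the first move
    gap += (28.0 - v) * 0.5
print("final speed %.1f m/s, gap %.1f m" % (v, gap))
```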

  2. Epidemiological models to support animal disease surveillance activities

    DEFF Research Database (Denmark)

    Willeberg, Preben; Paisley, Larry; Lind, Peter


    Epidemiological models have been used extensively as a tool for improving animal disease surveillance activities. A review of published papers identified three main groups of model applications: models for planning surveillance, models for evaluating the performance of surveillance systems, and models for interpreting surveillance data as part of ongoing control or eradication programmes. Two Danish examples are outlined. The first illustrates how models were used in documenting country freedom from disease (trichinellosis), and the second demonstrates how models were of assistance in predicting...

  3. Tip-Based Nanofabrication for Scalable Manufacturing

    Directory of Open Access Journals (Sweden)

    Huan Hu


    Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer-scale tip to fabricate nanostructures. In this review, we first introduce the history of TBN and its technological development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  4. Scalable and Anonymous Group Communication with MTor

    Directory of Open Access Journals (Sweden)

    Lin Dong


    This paper presents MTor, a low-latency anonymous group communication system. We construct MTor as an extension to Tor, allowing the construction of multi-source multicast trees on top of the existing Tor infrastructure. MTor does not depend on an external service to broker the group communication, and avoids central points of failure and trust. MTor's substantial bandwidth savings and graceful scalability enable new classes of anonymous applications that are currently too bandwidth-intensive to be viable through traditional unicast Tor communication, e.g., group file transfer, collaborative editing, streaming video, and real-time audio conferencing.

  5. Towards a Scalable, Biomimetic, Antibacterial Coating (United States)

    Dickson, Mary Nora

    Corneal afflictions are the second leading cause of blindness worldwide. When a corneal transplant is unavailable or contraindicated, an artificial cornea device is the only chance to save sight. Bacterial or fungal biofilm build-up on artificial cornea devices can lead to serious complications, including the need for systemic antibiotic treatment and even explantation. As a result, much emphasis has been placed on anti-adhesion chemical coatings and antibiotic-leaching coatings. These methods are not long-lasting, and microorganisms can eventually circumvent them. Thus, I have developed a surface-topographical antimicrobial coating. Various surface structures, including rough surfaces, superhydrophobic surfaces, and the natural surfaces of insects' wings and sharks' skin, are promising anti-biofilm candidates; however, none meet the criteria necessary for implementation on the surface of an artificial cornea device. In this thesis I: 1) developed scalable fabrication protocols for a library of biomimetic nanostructured polymer surfaces; 2) assessed the potential of poly(methyl methacrylate) nanopillars to kill or prevent biofilm formation by E. coli bacteria and species of Pseudomonas and Staphylococcus bacteria, and improved upon a proposed mechanism for the rupture of Gram-negative bacterial cell walls; 3) developed a scalable, commercially viable method for producing antibacterial nanopillars on a curved PMMA artificial cornea device; and 4) developed scalable fabrication protocols for implantation of antibacterial nanopatterned surfaces on thermoplastic polyurethane materials, commonly used in catheter tubing. This project constitutes a first step towards fabrication of the first entirely PMMA artificial cornea device. The major finding of this work is that by precisely controlling the topography of a polymer surface at the nano-scale, we can kill adherent bacteria and prevent biofilm formation by certain pathogenic bacteria

  6. Scalable Optical-Fiber Communication Networks (United States)

    Chow, Edward T.; Peterson, John C.


    Scalable arbitrary fiber extension network (SAFEnet) is a conceptual fiber-optic communication network passing digital signals among a variety of computers and input/output devices at rates from 200 Mb/s to more than 100 Gb/s. It is intended for use with very-high-speed computers and other data-processing and communication systems in which message-passing delays must be kept short. Its inherent flexibility makes it possible to match network performance to the computers by optimizing the configuration of interconnections. In addition, interconnections are made redundant to provide fault tolerance.

  7. Simplifying Scalable Graph Processing with a Domain-Specific Language

    KAUST Repository

    Hong, Sungpack


    Large-scale graph processing, with its massive data sets, requires distributed processing. However, conventional frameworks for distributed graph processing, such as Pregel, use non-traditional programming models that are well-suited for parallelism and scalability but inconvenient for implementing non-trivial graph algorithms. In this paper, we use Green-Marl, a Domain-Specific Language for graph analysis, to intuitively describe graph algorithms and extend its compiler to generate equivalent Pregel implementations. Using the semantic information captured by Green-Marl, the compiler applies a set of transformation rules that convert imperative graph algorithms into Pregel's programming model. Our experiments show that the Pregel programs generated by the Green-Marl compiler perform similarly to manually coded Pregel implementations of the same algorithms. The compiler is even able to generate a Pregel implementation of a complicated graph algorithm for which a manual Pregel implementation is very challenging.
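
    For readers unfamiliar with the target programming model, the sketch below simulates Pregel-style vertex-centric supersteps for PageRank in a single process: each vertex sends messages along its out-edges, then updates its value from its inbox. This is an illustrative toy, not Pregel or Green-Marl code.

```python
# Toy vertex-centric ("think like a vertex") PageRank over supersteps.
def pregel_pagerank(adj, supersteps=20, d=0.85):
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    for _ in range(supersteps):
        inbox = {v: [] for v in adj}
        for v, out in adj.items():            # each vertex sends messages
            for w in out:
                inbox[w].append(rank[v] / len(out))
        rank = {v: (1 - d) / n + d * sum(inbox[v]) for v in adj}
    return rank

adj = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print({v: round(r, 3) for v, r in pregel_pagerank(adj).items()})
```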

  8. A Scalable Framework and Prototype for CAS e-Science

    Directory of Open Access Journals (Sweden)

    Yuanchun Zhou


    Based on the small-world model of CAS e-Science and the power law of the Internet, this paper presents a scalable CAS e-Science Grid framework based on virtual regions, called the Virtual Region Grid Framework (VRGF). VRGF takes the virtual region and the layer as its logical management units. In VRGF, organization within a virtual region is pure P2P, while organization across virtual regions is centralized. VRGF is therefore a decentralized framework with some P2P properties. Furthermore, VRGF is able to achieve satisfactory performance in resource organization and location at small cost, and is well adapted to the complicated and dynamic nature of scientific collaborations. We have implemented a demonstration VRGF-based Grid prototype, SDG.

  9. Overview of the Scalable Coherent Interface, IEEE STD 1596 (SCI)

    International Nuclear Information System (INIS)

    Gustavson, D.B.; James, D.V.; Wiggers, H.A.


    The Scalable Coherent Interface standard defines a new generation of interconnection that spans the full range from supercomputer memory 'bus' to campus-wide network. SCI provides bus-like services and a shared-memory software model while using an underlying packet protocol on many independent communication links. Initially these links are 1 GByte/s (wires) and 1 GBit/s (fiber), but the protocol scales well to future faster or lower-cost technologies. The interconnect may use switches, meshes, and rings. The SCI distributed shared-memory model is simple and versatile, enabling for the first time a smooth integration of highly parallel multiprocessors, workstations, personal computers, I/O, networking and data acquisition.

  10. Modular Universal Scalable Ion-trap Quantum Computer (United States)


    Final report (1 August 2010 - 31 January 2016) on the modular universal scalable ion-trap quantum computer. This architecture has two separate layers of scalability: the first is to increase the number of ion qubits in a single trap... Keywords: ion trap quantum computation, scalable modular architectures. Distribution unlimited.

  11. Parametric vs. Nonparametric Regression Modelling within Clinical Decision Support

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan; Zvárová, Jana


    Roč. 5, č. 1 (2017), s. 21-27 ISSN 1805-8698 R&D Projects: GA ČR GA17-01251S Institutional support: RVO:67985807 Keywords: decision support systems * decision rules * statistical analysis * nonparametric regression Subject RIV: IN - Informatics, Computer Science OBOR OECD: Statistics and probability

  12. Cause and Event: Supporting Causal Claims through Logistic Models (United States)

    O'Connell, Ann A.; Gray, DeLeon L.


    Efforts to identify and support credible causal claims have received intense interest in the research community, particularly over the past few decades. In this paper, we focus on the use of statistical procedures designed to support causal claims for a treatment or intervention when the response variable of interest is dichotomous. We identify…

  13. Decision support telemedicine systems: A conceptual model and reusable templates

    NARCIS (Netherlands)

    Nannings, Barry; Abu-Hanna, A.


    Decision support telemedicine systems (DSTSs) are systems combining elements from telemedicine and clinical decision support systems. Although increasingly common, these types of systems have not been given much attention in the literature. Our objective is to define the term DSTS, to propose a general

  14. Think 500, not 50! A scalable approach to student success in STEM. (United States)

    LaCourse, William R; Sutphin, Kathy Lee; Ott, Laura E; Maton, Kenneth I; McDermott, Patrice; Bieberich, Charles; Farabaugh, Philip; Rous, Philip


    UMBC, a diverse public research university, "builds" upon its reputation for producing highly capable undergraduate scholars to create a comprehensive new model, STEM BUILD at UMBC. This program is designed to help more students develop the skills, experience and motivation to excel in science, technology, engineering, and mathematics (STEM). This article provides an in-depth description of STEM BUILD at UMBC and situates the initiative within UMBC's vision and mission. The STEM BUILD model targets promising STEM students who enter as freshmen or transfer students and do not qualify for significant university or other scholarship support. Of primary importance to this initiative are capacity, scalability, and institutional sustainability, as we distill the advantages and opportunities of UMBC's successful scholars programs and expand their application to more students. The general approach is to infuse mentoring and training into the fabric of the undergraduate experience while fostering community, scientific identity, and resilience. At the heart of STEM BUILD at UMBC is BUILD Group Research (BGR), a sequence of experiences designed to overcome the challenges that undergraduates without programmatic support often encounter (e.g., limited internship opportunities, mentorships, and research positions, for which top STEM students are favored). BUILD Training Program (BTP) Trainees serve as pioneers in this initiative, which is potentially a national model for universities as they address the call to retain and graduate more students in STEM disciplines, especially those from underrepresented groups. As such, BTP is a research study using random-assignment trial methodology that focuses on the scalability and eventual incorporation of successful measures into the traditional format of the academy. Critical measures to transform institutional culture include establishing an extensive STEM Living and Learning Community to

  15. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    International Nuclear Information System (INIS)

    Desai, Ajit; Pettit, Chris; Poirel, Dominique; Sarkar, Abhijit


    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges exist in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain-level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalability of these algorithms is presented for the diffusion equation with a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
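
    To make the domain-decomposition idea concrete, the hedged sketch below runs an alternating Schwarz iteration on a deterministic 1D Poisson problem with two overlapping subdomains: each subdomain is solved with the current values on its artificial boundary, and the sweeps repeat until the pieces agree. The paper's stochastic, preconditioned, parallel machinery is far beyond this toy; grid size and overlap are assumptions.

```python
# Alternating (overlapping) Schwarz sweeps for -u'' = 1, u(0) = u(1) = 0.
import numpy as np

n, overlap = 101, 6
x = np.linspace(0, 1, n)
f = np.ones(n)
u = np.zeros(n)
h2 = (x[1] - x[0]) ** 2
subdomains = [(0, n // 2 + overlap), (n // 2 - overlap, n)]

for sweep in range(200):
    for lo, hi in subdomains:        # solve each subdomain using the current
        m = hi - lo                  # values at its artificial boundaries
        A = (np.diag(np.full(m - 2, 2.0)) - np.diag(np.ones(m - 3), 1)
             - np.diag(np.ones(m - 3), -1))
        rhs = h2 * f[lo + 1:hi - 1]
        rhs[0] += u[lo]
        rhs[-1] += u[hi - 1]
        u[lo + 1:hi - 1] = np.linalg.solve(A, rhs)

print("max u ~", round(u.max(), 4), "(exact 1/8 = 0.125)")
```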

  16. Scalability Optimization of Seamless Positioning Service

    Directory of Open Access Journals (Sweden)

    Juraj Machaj


    Recently, positioning services have been receiving more attention, not only within the research community but also from service providers. From the service providers' point of view, a positioning service able to work seamlessly in all environments, for example indoor, dense urban, and rural, has huge potential to open new markets. However, such a system must not only provide accurate position estimates but also be scalable and resistant to fake positioning requests. In previous work we proposed a modular system able to provide seamless positioning in various environments. The system automatically selects the optimal positioning module based on the available radio signals, and currently consists of three positioning modules: GPS, GSM-based positioning, and Wi-Fi-based positioning. In this paper we propose an algorithm that reduces the time needed for position estimation, thereby improving the scalability of the modular system and allowing positioning services to be provided to a larger number of users. Such an improvement is extremely important for real-world applications where a large number of users require position estimates, since positioning error is affected by the response time of the positioning server.

  17. Towards Scalable Graph Computation on Mobile Devices (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng


    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scaling up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (the Google+ social graph), at a speed that is only a few times slower than a 13″ MacBook Pro. Through creating a real-world iOS app with this technique, we demonstrate the strong application potential of scalable graph computation on a single mobile device using our approach. PMID:25859564

  18. Towards Scalable Graph Computation on Mobile Devices. (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng


    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scaling up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (the Google+ social graph), at a speed that is only a few times slower than a 13″ MacBook Pro. Through creating a real-world iOS app with this technique, we demonstrate the strong application potential of scalable graph computation on a single mobile device using our approach.
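
    The memory-mapping idea generalizes beyond mobile platforms; the hedged sketch below stores an edge list in a binary file, maps it with numpy's memmap, and streams over it in chunks to compute out-degrees without loading the whole graph into RAM. File layout, sizes, and chunking are assumptions for illustration; the paper's iOS/Android implementation differs.

```python
# Stream a memory-mapped edge list to compute out-degrees (illustrative).
import numpy as np

edges = np.array([[0, 1], [0, 2], [1, 2], [2, 0]], dtype=np.int32)
edges.tofile("graph.bin")                       # persist the edge list

n_edges = 4
mm = np.memmap("graph.bin", dtype=np.int32, mode="r", shape=(n_edges, 2))
degree = np.zeros(3, dtype=np.int64)
chunk = 2                                       # process a window at a time;
for i in range(0, n_edges, chunk):              # the OS pages data in and out
    np.add.at(degree, mm[i:i + chunk, 0], 1)
print("out-degrees:", degree)
```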

  19. Highly scalable Ab initio genomic motif identification

    KAUST Repository

    Marchand, Benoit


    We present results of scaling an ab initio motif family identification system, Dragon Motif Finder (DMF), to 65,536 processor cores of an IBM Blue Gene/P. DMF seeks groups of mutually similar polynucleotide patterns within a set of genomic sequences and builds various motif families from them. Such information is of relevance to many problems in the life sciences. Prior attempts to scale such ab initio motif-finding algorithms achieved limited success. We solve the scalability issues using a combination of mixed-mode MPI-OpenMP parallel programming, master-slave work assignment, multi-level workload distribution, multi-level MPI collectives, and serial optimizations. While the scalability of our algorithm was excellent (94% parallel efficiency on 65,536 cores relative to 256 cores on a modest-size problem), the final speedup with respect to the original serial code exceeded 250,000 when serial optimizations are included. This enabled us to carry out many large-scale ab initio motif-finding simulations in a few hours, where the original serial code would have needed decades of execution time. Copyright 2011 ACM.

  20. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi


    The fast multipole method (FMM) is often used to accelerate the calculation of particle interactions in particle-based methods for simulating incompressible flows. To evaluate the most time-consuming kernels - the Biot-Savart equation and the stretching term of the vorticity equation - we mathematically reformulated them so that only two Laplace scalar potentials are used instead of six, automatically ensuring divergence-free far-field computation. Based on this formulation, we developed a new FMM-based vortex method for heterogeneous architectures, which distributes the work between multicore CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm uses new data structures that can dynamically manage inter-node communication and load balance efficiently, with only a small parallel construction overhead. The algorithm scales to large clusters, showing both strong and weak scalability. Careful error and timing trade-off analyses are also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s.
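
    For orientation, the textbook Biot-Savart recovery of velocity from vorticity, which the reformulation above recasts via two Laplace scalar potentials, has the standard form below (this is the classical statement, not the paper's reformulation):

```latex
% u: velocity field, omega: vorticity field (standard textbook identity)
\mathbf{u}(\mathbf{x}) = \frac{1}{4\pi} \int
  \frac{\boldsymbol{\omega}(\mathbf{y}) \times (\mathbf{x}-\mathbf{y})}
       {\lVert \mathbf{x}-\mathbf{y} \rVert^{3}} \, \mathrm{d}\mathbf{y}
```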

  1. Big data integration: scalability and sustainability

    KAUST Repository

    Zhang, Zhang


    Integration of various types of omics data is critically indispensable for addressing most important and complex biological questions. In the era of big data, however, data integration becomes increasingly tedious, time-consuming and expensive, posing a significant obstacle to fully exploiting the wealth of big biological data. Here we propose a scalable and sustainable architecture that integrates big omics data through community-contributed modules. Community modules are contributed and maintained by different committed groups; each module corresponds to a specific data type, deals with data collection, processing and visualization, and delivers data on demand via web services. Based on this community-based architecture, we built Information Commons for Rice (IC4R), a rice knowledgebase that integrates a variety of rice omics data from multiple community modules, including genome-wide expression profiles derived entirely from RNA-Seq data, genomic variations obtained from re-sequencing data of thousands of rice varieties, plant homologous genes covering multiple diverse plant species, post-translational modifications, rice-related literature, and community annotations. Taken together, such an architecture achieves integration of different types of data from multiple community-contributed modules and accordingly features scalable, sustainable and collaborative integration of big data, as well as low costs for database update and maintenance, making it helpful for building IC4R into a comprehensive knowledgebase covering all aspects of rice data and beneficial for both basic and translational research.

  2. A comparative study of slope failure prediction using logistic regression, support vector machine and least square support vector machine models (United States)

    Zhou, Lim Yi; Shan, Fam Pei; Shimizu, Kunio; Imoto, Tomoaki; Lateh, Habibah; Peng, Koay Swee


    A comparative study of logistic regression, support vector machine (SVM) and least squares support vector machine (LSSVM) models was conducted to predict slope failure (landslides) along the East-West Highway (Gerik-Jeli). The effects of the two monsoon seasons (southwest and northeast) that occur in Malaysia are considered in this study. Two factors related to the occurrence of slope failure are included: rainfall and underground water. For each method, two predictive models are constructed, namely the SOUTHWEST and NORTHEAST models. Based on the results obtained from the logistic regression models, both factors (rainfall and underground water level) contribute to the occurrence of slope failure. The accuracies of the three statistical models for the two monsoon seasons are verified using relative operating characteristic curves. The validation results showed that all models produced predictions of high accuracy. For SVM and LSSVM, the models using the RBF kernel showed better prediction than the models using the linear kernel. The comparative results showed that, for the SOUTHWEST models, the three statistical models have relatively similar performance. For the NORTHEAST models, logistic regression has the best predictive efficiency, whereas the SVM model has the second best.
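
    A comparison of this general shape can be sketched in a few lines: fit logistic regression and an RBF-kernel SVM on rainfall and groundwater features and compare discrimination via ROC AUC. The data below are synthetic stand-ins, not the highway observations, and scikit-learn has no least-squares SVM, so the LSSVM leg is omitted.

```python
# Hedged sketch: logistic regression vs RBF-kernel SVM on synthetic slope data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
rain = rng.gamma(2.0, 20.0, 500)                  # assumed rainfall, mm
gw = rng.normal(5.0, 1.5, 500)                    # assumed groundwater level, m
p_fail = 1 / (1 + np.exp(-(0.03 * rain - 0.8 * gw + 2.0)))
y = rng.random(500) < p_fail                      # synthetic failure labels
X = np.column_stack([rain, gw])

for name, model in [("logistic", LogisticRegression()),
                    ("svm-rbf", SVC(kernel="rbf", probability=True))]:
    model.fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(name, "AUC =", round(auc, 3))
```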

  3. A scalable method for computing quadruplet wave-wave interactions (United States)

    Van Vledder, Gerbrant


    Non-linear four-wave interactions are a key physical process in the evolution of wind-generated ocean waves. The present generation of operational wave models uses the Discrete Interaction Approximation (DIA), but its accuracy is poor. It is now generally acknowledged that the DIA should be replaced with a more accurate method to improve predicted spectral shapes and derived parameters. The search for such a method is challenging, as one must find a balance between accuracy and computational requirements. Such a method is presented here in the form of a scalable and adaptive method that can mimic both the time-consuming exact Snl4 approach and the fast but inaccurate DIA, and everything in between. The method provides an elegant approach to improving the DIA, not by including more arbitrarily shaped wavenumber configurations, but by a mathematically consistent reduction of an exact method, viz. the WRT method. The adaptivity lies in adapting the abscissa of the locus integrand in relation to the magnitude of the known terms, and is extended to the highest level of the WRT method to select interacting wavenumber configurations hierarchically in relation to their importance. This adaptivity results in a speed-up of one to three orders of magnitude, depending on the measure of accuracy. This measure of accuracy should be expressed not in terms of the quality of the transfer integral for academic spectra but rather in terms of wave model performance in a dynamic run. This has consequences for the balance between the required accuracy and the computational workload for evaluating these interactions. The performance of the scalable method on different scales is illustrated with results from academic spectra and simple growth curves to more complicated field cases using a 3G wave model.

  4. Modeling the Construct of an Expert Evidence-Adaptive Knowledge Base for a Pressure Injury Clinical Decision Support System

    Directory of Open Access Journals (Sweden)

    Peck Chui Betty Khong


    The selection of appropriate wound products for the treatment of pressure injuries is paramount in promoting wound healing. However, nurses find it difficult to decide on the most optimal wound product(s) due to limited live experience in managing pressure injuries, a result of successfully implemented pressure injury prevention programs. The challenges of effective decision-making in wound treatment by nurses at the point of care are compounded by the yearly release of wide arrays of newly researched wound products into the consumer market. A clinical decision support system for pressure injury (PI-CDSS) was built to facilitate effective decision-making and selection of optimal wound treatments. This paper describes the development of PI-CDSS with an expert knowledge base using an interactive development environment, Blaze Advisor. A conceptual framework using decision-making and decision theory, knowledge representation, and process modelling guided the construction of the PI-CDSS. This expert system incorporates the practical and relevant decision knowledge of wound experts in assessment and wound treatment in its algorithm. The construct of the PI-CDSS is adaptive, with scalable capabilities for expansion to include other CDSSs and interoperability to interface with other existing clinical and administrative systems. The algorithm was formatively evaluated and tested for usability. The treatment modalities generated using patient-specific assessment data were found to be consistent with the treatment plan(s) proposed by the wound experts. The overall agreement between the wound experts and the generated treatment modalities exceeded 90% for the choice of wound products, instructions, and alerts. The PI-CDSS serves as a just-in-time wound treatment protocol with suggested clinical actions for nurses, based on the best evidence available.
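
    A rule base of the kind such an expert system encodes can be illustrated with a toy first-match rule engine. The real system uses Blaze Advisor and validated clinical rules; the conditions and products below are invented placeholders, not clinical advice.

```python
# Toy first-match rule engine mapping an assessment to a suggested product.
RULES = [
    (lambda a: a["stage"] == 2 and a["exudate"] == "low",  "hydrocolloid dressing"),
    (lambda a: a["stage"] == 2 and a["exudate"] == "high", "foam dressing"),
    (lambda a: a["stage"] >= 3 and a["infected"],          "antimicrobial dressing + review"),
    (lambda a: a["stage"] >= 3,                            "alginate dressing"),
]

def recommend(assessment):
    for condition, product in RULES:
        if condition(assessment):       # first matching rule wins
            return product
    return "escalate to wound specialist"

print(recommend({"stage": 2, "exudate": "high", "infected": False}))
```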

  5. Additional Research Needs to Support the GENII Biosphere Models

    Energy Technology Data Exchange (ETDEWEB)

    Napier, Bruce A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Snyder, Sandra F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Arimescu, Carmen [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)


    In the course of evaluating the current parameter needs for the GENII Version 2 code (Snyder et al. 2013), areas of possible improvement for both the data and the underlying models have been identified. As the data review was implemented, PNNL staff identified areas where the models can be improved both to accommodate the locally significant pathways identified and also to incorporate newer models. The areas are general data needs for the existing models and improved formulations for the pathway models.

  6. Data Collecting and Processing System and Hydraulic Control System of Hydraulic Support Model Test


    Hong-Yu LIU; Jun-Qing LIU; Jun-Jie XI


    Hydraulic supports are important equipment for mechanized caving coal mining in modern coal mines. A hydraulic support must pass a national strength test before quantity production and use. Model testing of hydraulic supports based on similarity theory is a new and effective method for hydraulic support design and testing. Test information such as displacement, stress and strain can be generalized to the hydraulic support prototype, informing the design. In order to satisfy the nee...

  7. Enhancing Formal Modelling Tool Support with Increased Automation

    DEFF Research Database (Denmark)

    Lausdahl, Kenneth

    Progress report for the PhD qualification exam of Kenneth Lausdahl. Initial work on enhancing tool support for the formal method VDM, and on the concept of unifying an abstract syntax tree with the ability for isolated extensions, is described. The tool support includes a connection to UML and a test automation principle based on traces written as a kind of regular expression....

  8. Modeling snail breeding in Bioregenerative Life Support System (United States)

    Kovalev, Vladimir; Tikhomirov, Alexander A.; Manukovsky, Nickolay D.

    It is known that snail meat is a high-quality food rich in protein. Hence heliciculture, or land snail farming, has spread worldwide as a profitable business. The possibility of using Helix pomatia snails in a Biological Life Support System (BLSS) was studied by Japanese researchers. In that study, land snails were considered producers of animal protein. Snail breeding was also an important part of waste processing, because snails are capable of eating inedible plant biomass. As opposed to agricultural snail farming, heliciculture in BLSS must be more carefully planned. The purpose of our work was to develop a model of snail breeding in BLSS that can predict mass flow rates into and out of the snail facility. The model has three linked parts, called “Stoichiometry”, “Population” and “Mass balance”, which are used in turn. The snail population is divided into 12 age groups, from oviposition to one year. In the submodel “Stoichiometry”, individual snail growth and metabolism in each of the 12 age groups are described with stoichiometry equations: reactants are written on the left side of the equations, products on the right. The stoichiometric formulas of reactants and products consist of four chemical elements: C, H, O, N. The reactants are feed and oxygen; the products are carbon dioxide, metabolic water, snail meat, shell, feces, slime and eggs. If the formulas of the substances in the stoichiometry equations are substituted with their molar masses, the stoichiometry equations are transformed into equations of molar mass balance. To obtain the real mass balance of individual snail growth and metabolism, one multiplies each molar mass in the equations by a scale parameter, the ratio between the mass of monthly consumed feed and the molar mass of feed. The mass of monthly consumed feed and the stoichiometric coefficients of the formulas of meat, shell, feces, slime and eggs should be determined experimentally
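
    A minimal numeric sketch of the mass-balance step just described: given stoichiometric coefficients per mole of feed, scale the molar masses by the ratio of monthly feed mass to feed molar mass. All numbers below are placeholders; the real coefficients must come from the experiments the authors describe.

```python
# Scale-parameter mass balance for one age group (illustrative numbers only).
MOLAR_MASS_FEED = 180.0        # g/mol, assumed formula mass of feed

# moles of each product per mole of feed consumed (illustrative)
COEFF = {"CO2": 2.1, "metabolic_H2O": 1.8, "meat": 0.15,
         "shell": 0.05, "feces": 0.6, "slime": 0.1, "eggs": 0.02}
MOLAR_MASS = {"CO2": 44.0, "metabolic_H2O": 18.0, "meat": 120.0,
              "shell": 100.0, "feces": 150.0, "slime": 90.0, "eggs": 130.0}

def monthly_balance(feed_grams):
    scale = feed_grams / MOLAR_MASS_FEED          # the paper's scale parameter
    return {k: round(COEFF[k] * MOLAR_MASS[k] * scale, 2) for k in COEFF}

print(monthly_balance(50.0))   # grams of each product from 50 g feed/month
```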

  9. A lightweight scalable agarose-gel-synthesized thermoelectric composite (United States)

    Kim, Jin Ho; Fernandes, Gustavo E.; Lee, Do-Joong; Hirst, Elizabeth S.; Osgood, Richard M., III; Xu, Jimmy


    Electronic devices are now advancing beyond classical, rigid systems and moving into lightweight, flexible regimes, enabling new applications such as body-wearables and ‘e-textiles’. To support this new electronic platform, composite materials that are highly conductive yet scalable, flexible, and wearable are needed. Materials with high electrical conductivity often have poor thermoelectric properties because their thermal transport is enhanced by the same factors as their electronic conductivity. We demonstrate, in proof-of-principle experiments, that a novel binary composite can disrupt thermal (phononic) transport while maintaining high electrical conductivity, thus yielding promising thermoelectric properties. Highly conductive Multi-Wall Carbon Nanotube (MWCNT) composites are combined with a low-band-gap semiconductor, PbS. The work functions of the two materials are closely matched, minimizing the electrical contact resistance within the composite. Disparities in the speed of sound in MWCNTs and PbS help to inhibit phonon propagation, and boundary-layer scattering at interfaces between these two materials leads to a large Seebeck coefficient (> 150 μV/K) (Mott N F and Davis E A 1971 Electronic Processes in Non-crystalline Materials (Oxford: Clarendon), p 47) and a power factor as high as 10 μW/(K² m). The overall fabrication process is not only scalable but also conformal and compatible with large-area flexible hosts including metal sheets, films, coatings, and possibly arrays of fibers, textiles and fabrics. We explain the behavior of this novel thermoelectric material platform in terms of differing length scales for electrical conductivity and phononic heat transfer, and explore new material configurations for potentially lightweight and flexible thermoelectric devices that could be networked in a textile.
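
    As a quick plausibility check on the figures quoted above: the thermoelectric power factor is PF = S²σ, so the reported Seebeck coefficient and power factor together imply an electrical conductivity of roughly 444 S/m (a back-of-the-envelope illustration only):

```python
# Back-of-the-envelope check of the reported thermoelectric figures.
S = 150e-6        # Seebeck coefficient, V/K (reported lower bound)
PF = 10e-6        # power factor, W/(K^2 m) (reported upper value)

sigma = PF / S**2                                  # PF = S^2 * sigma
print(f"implied conductivity ~ {sigma:.0f} S/m")   # ~444 S/m
```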

  10. Mathematical Models for the Education Sector, Supporting Material to the Survey. (Les Modeles Mathematiques du Secteur Enseignement. Annexes.) Technical Report. (United States)

    Organisation for Economic Cooperation and Development, Paris (France).

    This document contains supporting material for the survey on current practice in the construction and use of mathematical models for education. Two kinds of supporting material are included: (1) the responses to the questionnaire, and (2) supporting documents and other materials concerning the mathematical model-building effort in education.…

  11. Artificial intelligence support for scientific model-building (United States)

    Keller, Richard M.


    Scientific model-building can be a time-intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot easily be distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientific development team to understand. We believe that artificial intelligence techniques can facilitate both the model-building and model-sharing process. In this paper, we overview our effort to build a scientific modeling software tool that aids the scientist in developing and using models. This tool includes an interactive intelligent graphical interface, a high-level domain specific modeling language, a library of physics equations and experimental datasets, and a suite of data display facilities.

  12. Bridge deterioration models to support Indiana's bridge management system. (United States)


    An effective bridge management system that is equipped with reliable deterioration models enables agency engineers to carry out monitoring and long-term programming of bridge repair actions. At the project level, deterioration models help the agenc...

  13. Semantic Integrative Digital Pathology: Insights into Microsemiological Semantics and Image Analysis Scalability. (United States)

    Racoceanu, Daniel; Capron, Frédérique


    be devoted to morphological microsemiology (microscopic morphology semantics). Besides ensuring the traceability of the results (second opinion) and supporting the orchestration of high-content image analysis modules, the role of semantics will be crucial for the correlation between digital pathology and noninvasive medical imaging modalities. In addition, semantics has an important role in modelling the links between traditional microscopy and recent label-free technologies. The massive amount of visual data is challenging and represents a characteristic intrinsic to digital pathology. The design of an operational integrative microscopy framework needs to focus on a scalable multiscale imaging formalism. In this sense, we prospectively consider some of the most recent scalable methodologies adapted to digital pathology, such as marked point processes for nuclear atypia and point-set mathematical morphology for architecture grading. To orchestrate this scalable framework, semantics-based WSI management (analysis, exploration, indexing, retrieval and report generation support) represents an important means of integrating big data into biomedicine. This insight reflects our vision through an instantiation of essential bricks of this type of architecture. The generic approach introduced here is applicable to a number of challenges related to molecular imaging, high-content image management and, more generally, bioinformatics. © 2016 S. Karger AG, Basel.

  14. A 3D Geometry Model Search Engine to Support Learning (United States)

    Tam, Gary K. L.; Lau, Rynson W. H.; Zhao, Jianmin


    Due to the popularity of 3D graphics in animation and games, the use of deformable 3D geometry models is increasing dramatically. Despite their growing importance, these models are difficult and time-consuming to build. A distance learning system for the construction of these models could greatly help students to learn and practice at different…

  15. Scalable parallel programming for high performance seismic simulation on petascale heterogeneous supercomputers (United States)

    Zhou, Jun

    The 1994 Northridge earthquake in Los Angeles, California, killed 57 people, injured over 8,700 and caused an estimated $20 billion in damage. Petascale simulations are needed in California and elsewhere to provide society with a better understanding of the rupture and wave dynamics of the largest earthquakes at the shaking frequencies required to engineer safe structures. As heterogeneous supercomputing infrastructures become more common, numerical developments in earthquake system research are particularly challenged by the dependence on accelerator elements to enable "the Big One" simulations with higher frequency and finer resolution. Reducing time to solution and power consumption are the two primary focus areas today for the enabling technology of fault rupture dynamics and seismic wave propagation in realistic 3D models of the crust's heterogeneous structure. This dissertation presents scalable parallel programming techniques for high performance seismic simulation running on petascale heterogeneous supercomputers. A real-world earthquake simulation code, AWP-ODC, one of the most advanced earthquake codes to date, was chosen as the base code in this research, and the testbed is based on Titan at Oak Ridge National Laboratory, the world's largest heterogeneous supercomputer. The research work is primarily related to architecture study, computation performance tuning and software system scalability. An earthquake simulation workflow has also been developed to support efficient production sets of simulations. The highlights of the technical development are an aggressive performance optimization focusing on data locality and a notable data communication model that hides the data communication latency. This development results in optimal computation efficiency and throughput for the 13-point stencil code on heterogeneous systems, which can be extended to general high-order stencil codes. Started from scratch, the hybrid CPU/GPU version of AWP
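
    The latency-hiding pattern mentioned above can be sketched generically: post non-blocking halo exchanges, update the interior that needs no remote data while the messages are in flight, then finish the boundary points once the halos arrive. The toy below (assuming mpi4py, and a 1D 3-point stencil rather than AWP-ODC's 13-point one) shows the structure only, not the dissertation's implementation:

```python
# Conceptual communication-hiding stencil update with mpi4py.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

u = np.random.rand(1 + 1024 + 1)   # [left ghost | interior | right ghost]
new = np.empty_like(u)

# 1) start non-blocking halo exchange of the boundary values
reqs = [comm.Isend(u[1:2], dest=left),   comm.Isend(u[-2:-1], dest=right),
        comm.Irecv(u[0:1], source=left), comm.Irecv(u[-1:], source=right)]

# 2) update the deep interior, which needs no ghost data, while messages fly
new[2:-2] = 0.5 * u[2:-2] + 0.25 * (u[1:-3] + u[3:-1])

# 3) wait for the halos, then update the two boundary-adjacent points
MPI.Request.Waitall(reqs)
new[1] = 0.5 * u[1] + 0.25 * (u[0] + u[2])
new[-2] = 0.5 * u[-2] + 0.25 * (u[-3] + u[-1])
```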

  16. Coupling hydrological modeling and support vector regression to model hydropeaking in alpine catchments. (United States)

    Chiogna, Gabriele; Marcolini, Giorgia; Liu, Wanying; Pérez Ciria, Teresa; Tuo, Ye


    Water management in the alpine region has an important impact on streamflow. In particular, hydropower production is known to cause hydropeaking, i.e., sudden fluctuations in river stage caused by the release or storage of water in artificial reservoirs. Modeling hydropeaking with hydrological models, such as the Soil and Water Assessment Tool (SWAT), requires knowledge of reservoir management rules. These data are often not available, since they are sensitive information belonging to hydropower production companies. In this short communication, we propose to couple the results of a calibrated hydrological model with a machine learning method to reproduce hydropeaking without requiring knowledge of the actual reservoir management operation. We trained a support vector machine (SVM) with SWAT model outputs, the day of the week and the energy price. We tested the model for the Upper Adige river basin in North-East Italy. A wavelet analysis showed that energy price has a significant influence on river discharge, and a wavelet coherence analysis demonstrated the improved performance of the SVM model in comparison to the SWAT model alone. The SVM model was also able to capture the fluctuations in streamflow caused by hydropeaking when both energy price and river discharge displayed complex temporal dynamics. Copyright © 2018 Elsevier B.V. All rights reserved.
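
    A minimal sketch of this coupling idea with synthetic stand-in data: the feature choices mirror the paper's description (hydrological-model discharge, day of week, energy price), but all numbers and the regressor configuration below are assumptions, not the study's:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000
swat_q = 50 + 10 * np.sin(np.arange(n) / 30)   # simulated baseline discharge
dow = np.arange(n) % 7                          # day of week
price = 40 + 20 * rng.random(n)                 # energy price (assumed units)
# synthetic "observed" discharge: baseline plus price-driven weekday peaking
obs_q = swat_q + 0.3 * price * (dow < 5) + rng.normal(0, 1, n)

X = np.column_stack([swat_q, dow, price])
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
model.fit(X[:800], obs_q[:800])
print("R^2 on held-out days:", model.score(X[800:], obs_q[800:]))
```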

  17. Constitutive modelling of an arterial wall supported by microscopic measurements

    Directory of Open Access Journals (Sweden)

    Vychytil J.


    Full Text Available An idealized model of an arterial wall is proposed as a two-layer system. Distinct mechanical response of each layer is taken into account considering two types of strain energy functions in the hyperelasticity framework. The outer layer, considered as a fibre-reinforced composite, is modelled using the structural model of Holzapfel. The inner layer, on the other hand, is represented by a two-scale model mimicking smooth muscle tissue. For this model, material parameters such as shape, volume fraction and orientation of smooth muscle cells are determined using the microscopic measurements. The resulting model of an arterial ring is stretched axially and loaded with inner pressure to simulate the mechanical response of a porcine arterial segment during inflation and axial stretching. Good agreement of the model prediction with experimental data is promising for further progress.

  18. Cognitive Support using BDI Agent and Adaptive User Modeling

    DEFF Research Database (Denmark)

    Hossain, Shabbir


    challenges of an ageing population. This thesis work is one attempt towards that. The thesis focused on researching approaches to provide cognitive support for users with cognitive disabilities through ICT-based technological solutions. Recent advances in Artificial Intelligence and wireless sensor...... networks have shown potential to improve the quality of life of elderly people with disabilities using current technologies. The primary objective of this thesis is to conduct research on approaches to provide support for elderly users with cognitive disabilities. In our research, we have defined...... a set of goals for attaining the objective of this thesis. The initial goal is to recognize the activities of the users, to assess the user's need for support during the activity. However, one of the challenges of the recognition process is adaptability to variant user behaviour due to physical...

  19. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks (United States)

    Kaltenbacher, Barbara; Hasenauer, Jan


    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
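
    A one-state toy version of adjoint-based gradients for time-discrete measurements, sketching the general recipe (nothing like the paper's genome-scale implementation): integrate the model forward, then sweep the adjoint backward with jumps at the measurement times while accumulating the gradient integral.

```python
import numpy as np
from scipy.integrate import solve_ivp

# toy model dx/dt = -k*x with measurements y_i at times t_i;
# objective J(k) = sum_i (x(t_i) - y_i)^2
k_true, x0 = 0.7, 5.0
t_meas = np.array([1.0, 2.0, 3.0, 4.0])
y = x0 * np.exp(-k_true * t_meas)              # noise-free synthetic data

def objective_and_gradient(k):
    fwd = solve_ivp(lambda t, x: [-k * x[0]], (0.0, t_meas[-1]), [x0],
                    dense_output=True, rtol=1e-10, atol=1e-12)
    x_of = lambda t: fwd.sol(t)[0]
    r = x_of(t_meas) - y
    J = float(np.sum(r ** 2))
    # adjoint: dlam/dt = -(df/dx)*lam = k*lam, lam jumping by
    # dg_i/dx = 2*r_i at each measurement time in the backward sweep;
    # dJ/dk = integral over [0, T] of lam * (df/dk) = lam * (-x)
    lam, grad = 0.0, 0.0
    times = np.concatenate(([0.0], t_meas))
    for i in range(len(t_meas) - 1, -1, -1):
        lam += 2.0 * r[i]
        aug = solve_ivp(lambda t, z: [k * z[0], -z[0] * x_of(t)],
                        (times[i + 1], times[i]), [lam, 0.0],
                        rtol=1e-10, atol=1e-12)
        lam = aug.y[0, -1]
        grad -= aug.y[1, -1]                   # sign flip: integrated backward
    return J, grad

J, g = objective_and_gradient(0.5)
eps = 1e-6                                     # finite-difference cross-check
print(g, (objective_and_gradient(0.5 + eps)[0] - J) / eps)
```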

  20. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    CERN Document Server

    Valassi, A; Kalkhof, A; Salnikov, A; Wache, M


    The CORAL software is widely used at CERN for accessing the data stored by the LHC experiments using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several backends and deployment models, including local access to SQLite files, direct client access to Oracle and MySQL servers, and read-only access to Oracle through the FroNTier web server and cache. Two new components have recently been added to CORAL to implement a model involving a middle tier "CORAL server" deployed close to the database and a tree of "CORAL server proxy" instances, with data caching and multiplexing functionalities, deployed close to the client. The new components are meant to provide advantages for read-only and read-write data access, in both offline and online use cases, in the areas of scalability and performance (multiplexing for several incoming connections, optional data caching) and security (authentication via proxy certificates). A first implementation of the two new c...

  1. Tree-Homomorphic Encryption and Scalable Hierarchical Secret-Ballot Elections (United States)

    Kiayias, Aggelos; Yung, Moti

    In this work we present a new paradigm for trust and work distribution in a hierarchy of servers that aims to achieve scalability of work and trust simultaneously. The paradigm is implemented with a decryption capability which is distributed and forces a workflow along a tree structure, enforcing distribution of the workload as well as fairness and partial disclosure (privacy) properties. We call the method "tree-homomorphic" since it extends traditional homomorphic encryption, and we exemplify its usage within a large scale election scheme, showing how it contributes to the properties that such a scheme needs. We note that the existing design models for e-voting schemes do not adapt to scale with respect to a combination of privacy and trust (fairness); thus we present a model emphasizing the scaling of privacy and fairness in parallel with the growth and distribution of the election structure. We present two instantiations of e-voting schemes that are robust, publicly verifiable, and support multiple-candidate ballot casting, employing tree-homomorphic encryption schemes. We extend the scheme to allow the voters in the smallest administered election unit to employ a security mechanism that protects their privacy even if all authorities are corrupt.
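
    The additively homomorphic tallying that such schemes build on can be illustrated with a textbook Paillier toy (tiny, insecure parameters; this shows the generic homomorphic-tally idea, not the paper's tree-homomorphic construction): leaf precincts encrypt local tallies, inner nodes multiply ciphertexts, and only the root ever decrypts the total.

```python
import math, random

p, q = 2003, 1999                   # toy primes, far too small for real use
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                # valid because we fix g = n + 1

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def add(c1, c2):                    # homomorphic addition of plaintexts
    return (c1 * c2) % n2

leaf_tallies = [3, 5, 2, 7]         # per-precinct counts (illustrative)
ciphers = [enc(t) for t in leaf_tallies]
root = ciphers[0]
for c in ciphers[1:]:
    root = add(root, c)             # aggregate up the tree without decrypting
print(dec(root))                    # 17
```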

  2. Atomic structure of graphene supported heterogeneous model catalysts

    International Nuclear Information System (INIS)

    Franz, Dirk


    Graphene on Ir(111) forms a moire structure with well defined nucleation centres. Therefore it can be utilized to create hexagonal metal cluster lattices with outstanding structural quality. In diffraction experiments these 2D surface lattices cause a coherent superposition of the moire cell structure factor, so that the measured signal intensity scales with the square of the number of coherently scattering unit cells. This artificial signal enhancement gives X-ray diffraction the opportunity to determine the atomic structure of small nano-objects, which are hardly accessible with any other experimental technique. The uniform environment of every metal cluster makes the described metal cluster lattices on graphene/Ir(111) an attractive model system for the investigation of catalytic, magnetic and quantum size properties of ultra-small nano-objects. In this context the use of X-rays provides a maximum of flexibility concerning the possible sample environments (vacuum, selected gases, liquids, sample temperature) and allows in-situ/operando measurements. In the framework of the present thesis the structures of different metal clusters, grown by physical vapor deposition in a UHV environment and after gas exposure, have been investigated. On the one hand the obtained results explore many aspects of the atomic structure of these small metal clusters, and on the other hand they prove the capabilities of the described technique (SXRD on cluster lattices). For iridium, platinum, iridium/palladium and platinum/rhodium the growth on graphene/Ir(111) of epitaxial, crystalline clusters with an ordered hexagonal lattice arrangement has been confirmed using SXRD. The clusters nucleate at the hcp sites of the moire cell and bind via rehybridization of the carbon atoms (sp² → sp³) to the Ir(111) substrate. This causes small displacements of the substrate atoms, which is revealed by the diffraction experiments. All metal clusters exhibit a fcc structure, whereupon

  3. Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.


    Cloud Computing platforms provide scalability and high availability properties for web applications but they sacrifice data consistency at the same time. However, many applications cannot afford any data inconsistency. We present a scalable transaction manager for NoSQL cloud database services to

  4. Building scalable apps with Redis and Node.js

    CERN Document Server

    Johanan, Joshua


    If the phrase scalability sounds alien to you, then this is an ideal book for you. You will not need much Node.js experience as each framework is demonstrated in a way that requires no previous knowledge of the framework. You will be building scalable Node.js applications in no time! Knowledge of JavaScript is required.

  5. New Complexity Scalable MPEG Encoding Techniques for Mobile Applications

    Directory of Open Access Journals (Sweden)

    Stephan Mietens


    Full Text Available Complexity scalability offers the advantage of one-time design of video applications for a large product family, including mobile devices, without the need to redesign the applications on the algorithmic level to meet the requirements of the different products. In this paper, we present complexity-scalable MPEG encoding having core modules with modifications for scalability. The interdependencies of the scalable modules and the system performance are evaluated. Experimental results show that scalability gives a smooth change in complexity and corresponding video quality. Scalability is basically achieved by varying the number of computed DCT coefficients and the number of evaluated motion vectors, but other modules are designed such that they scale with these parameters. In the experiments using the “Stefan” sequence, the elapsed execution time of the scalable encoder, reflecting the computational complexity, can be gradually reduced to roughly 50% of its original execution time. The video quality scales between 20 dB and 48 dB PSNR with unity quantizer setting, and between 21.5 dB and 38.5 dB PSNR for different sequences targeting 1500 kbps. The implemented encoder and the scalability techniques can be successfully applied in mobile systems based on MPEG video compression.
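
    The central knob here, computing fewer DCT coefficients per block, can be mimicked offline by truncating a block's 2-D DCT; the sketch below only imitates the quality/complexity trade-off (a real complexity-scalable encoder skips the computation rather than zeroing results afterwards):

```python
import numpy as np
from scipy.fft import dctn, idctn

def truncated_block(block, k):
    coeffs = dctn(block, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:k, :k] = 1.0                 # keep k*k low-frequency coefficients
    return idctn(coeffs * mask, norm="ortho")

rng = np.random.default_rng(1)
block = rng.random((8, 8)) * 255
for k in (8, 4, 2):                    # full quality -> coarser approximations
    err = np.abs(truncated_block(block, k) - block).mean()
    print(f"k={k}: mean abs reconstruction error {err:.2f}")
```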

  6. NeuroPigPen: A Scalable Toolkit for Processing Electrophysiological Signal Data in Neuroscience Applications Using Apache Pig. (United States)

    Sahoo, Satya S; Wei, Annan; Valdez, Joshua; Wang, Li; Zonjy, Bilal; Tatsuoka, Curtis; Loparo, Kenneth A; Lhatoo, Samden D


    The recent advances in neurological imaging and sensing technologies have led to a rapid increase in the volume, rate of generation, and variety of neuroscience data. This "neuroscience Big data" represents a significant opportunity for the biomedical research community to design experiments using data with greater timescale, large numbers of attributes, and statistically significant data size. The results from these new data-driven research techniques can advance our understanding of complex neurological disorders, help model long-term effects of brain injuries, and provide new insights into dynamics of brain networks. However, many existing neuroinformatics data processing and analysis tools were not built to manage large volumes of data, which makes it difficult for researchers to effectively leverage this available data to advance their research. We introduce a new toolkit called NeuroPigPen that was developed using Apache Hadoop and the Pig data flow language to address the challenges posed by large-scale electrophysiological signal data. NeuroPigPen is a modular toolkit that can process large volumes of electrophysiological signal data, such as Electroencephalogram (EEG), Electrocardiogram (ECG), and blood oxygen levels (SpO2), using a new distributed storage model called Cloudwave Signal Format (CSF) that supports easy partitioning and storage of signal data on commodity hardware. NeuroPigPen was developed with three design principles: (a) Scalability-the ability to efficiently process increasing volumes of data; (b) Adaptability-the toolkit can be deployed across different computing configurations; and (c) Ease of programming-the toolkit can be easily used to compose multi-step data processing pipelines using high-level programming constructs. The NeuroPigPen toolkit was evaluated using 750 GB of electrophysiological signal data over a variety of Hadoop cluster configurations ranging from 3 to 30 data nodes. The evaluation results demonstrate that the toolkit

  7. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian


    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  8. Scalable on-chip quantum state tomography (United States)

    Titchener, James G.; Gräfe, Markus; Heilmann, René; Solntsev, Alexander S.; Szameit, Alexander; Sukhorukov, Andrey A.


    Quantum information systems are on a path to vastly exceed the complexity of any classical device. The number of entangled qubits in quantum devices is rapidly increasing, and the information required to fully describe these systems scales exponentially with qubit number. This scaling is the key benefit of quantum systems; however, it also presents a severe challenge. To characterize such systems typically requires an exponentially long sequence of different measurements, becoming highly resource demanding for large numbers of qubits. Here we propose and demonstrate a novel and scalable method for characterizing quantum systems based on expanding a multi-photon state to larger dimensionality. We establish that the complexity of this new measurement technique only scales linearly with the number of qubits, while providing a tomographically complete set of data without a need for reconfigurability. We experimentally demonstrate an integrated photonic chip capable of measuring two- and three-photon quantum states with statistical reconstruction fidelity of 99.71%.

  9. Parallel scalability of Hartree-Fock calculations. (United States)

    Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R


    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
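
    The purification route mentioned above can be sketched with the McWeeny iteration on a dense toy matrix. Note that this demo places the chemical potential mid-gap using a full eigensolve purely for convenience, which a production linear-scaling code would avoid:

```python
import numpy as np

def mcweeny_density(h, n_occ, iters=50):
    e = np.linalg.eigvalsh(h)                   # demo-only: used to set mu
    mu = 0.5 * (e[n_occ - 1] + e[n_occ])        # mid-gap chemical potential
    alpha = max(e[-1] - mu, mu - e[0])
    # initial guess with spectrum in [0, 1], occupied states above 0.5
    d = 0.5 * (np.eye(len(h)) - (h - mu * np.eye(len(h))) / alpha)
    for _ in range(iters):
        d2 = d @ d
        d = 3 * d2 - 2 * d2 @ d                 # drives eigenvalues to {0, 1}
    return d

rng = np.random.default_rng(0)
a = rng.random((8, 8))
h = 0.5 * (a + a.T)                             # toy symmetric "Fock matrix"
d = mcweeny_density(h, n_occ=4)
print(round(np.trace(d), 6), np.linalg.norm(d @ d - d) < 1e-8)  # 4.0 True
```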

  10. Scalable quantum search using trapped ions

    International Nuclear Information System (INIS)

    Ivanov, S. S.; Ivanov, P. A.; Linington, I. E.; Vitanov, N. V.


    We propose a scalable implementation of Grover's quantum search algorithm in a trapped-ion quantum information processor. The system is initialized in an entangled Dicke state by using adiabatic techniques. The inversion-about-average and oracle operators take the form of single off-resonant laser pulses. This is made possible by utilizing the physical symmetries of the trapped-ion linear crystal. The physical realization of the algorithm represents a dramatic simplification: each logical iteration (oracle and inversion about average) requires only two physical interaction steps, in contrast to the large number of concatenated gates required by previous approaches. This not only facilitates the implementation but also increases the overall fidelity of the algorithm.
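
    The logical iteration being implemented (oracle sign flip followed by inversion about the average) is easy to see in a plain state-vector simulation; the sketch below simulates the abstract algorithm only, not the trapped-ion pulse sequence:

```python
import numpy as np

n_qubits, marked = 4, 5
N = 2 ** n_qubits
state = np.full(N, 1 / np.sqrt(N))              # uniform superposition

for _ in range(int(np.pi / 4 * np.sqrt(N))):    # ~optimal iteration count
    state[marked] *= -1                         # oracle: flip marked amplitude
    state = 2 * state.mean() - state            # inversion about the average

print(np.argmax(np.abs(state)), np.abs(state[marked]) ** 2)  # 5, ~0.96
```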

  11. Towards scalable Byzantine fault-tolerant replication (United States)

    Zbierski, Maciej


    Byzantine fault-tolerant (BFT) replication is a powerful technique, enabling distributed systems to remain available and correct even in the presence of arbitrary faults. Unfortunately, existing BFT replication protocols are mostly load-unscalable, i.e. they fail to respond with adequate performance increase whenever new computational resources are introduced into the system. This article proposes a universal architecture facilitating the creation of load-scalable distributed services based on BFT replication. The suggested approach exploits parallel request processing to fully utilize the available resources, and uses a load balancer module to dynamically adapt to the properties of the observed client workload. The article additionally provides a discussion on selected deployment scenarios, and explains how the proposed architecture could be used to increase the dependability of contemporary large-scale distributed systems.

  12. BASSET: Scalable Gateway Finder in Large Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, H; Papadimitriou, S; Faloutsos, C; Yu, P S; Eliassi-Rad, T


    Given a social network, who is the best person to introduce you to, say, Chris Ferguson, the poker champion? Or, given a network of people and skills, who is the best person to help you learn about, say, wavelets? The goal is to find a small group of 'gateways': persons who are close enough to us, as well as close enough to the target (person, or skill) or, in other words, are crucial in connecting us to the target. The main contributions are the following: (a) we show how to formulate this problem precisely; (b) we show that it is sub-modular and thus it can be solved near-optimally; (c) we give fast, scalable algorithms to find such gateways. Experiments on real data sets validate the effectiveness and efficiency of the proposed methods, achieving up to 6,000,000x speedup.
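
    Submodularity is what makes a greedy strategy near-optimal here. The sketch below uses an assumed toy objective (probabilistic coverage over inverse shortest-path proximities, via networkx), not BASSET's actual gateway score:

```python
import networkx as nx

def gateway_score(g, gateways, source, target):
    """Monotone submodular toy objective: 1 - prod(1 - p_w) over gateways."""
    def prox(a, b):
        try:
            return 1.0 / (1.0 + nx.shortest_path_length(g, a, b))
        except nx.NetworkXNoPath:
            return 0.0
    miss = 1.0
    for w in gateways:
        miss *= 1.0 - prox(source, w) * prox(w, target)
    return 1.0 - miss

def greedy_gateways(g, source, target, k=3):
    chosen = []
    candidates = set(g.nodes) - {source, target}
    for _ in range(k):      # greedily add the largest marginal-gain candidate
        best = max(candidates - set(chosen),
                   key=lambda w: gateway_score(g, chosen + [w], source, target))
        chosen.append(best)
    return chosen

g = nx.karate_club_graph()
print(greedy_gateways(g, source=15, target=16))
```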

  13. Scalable graphene aptasensors for drug quantification (United States)

    Vishnubhotla, Ramya; Ping, Jinglei; Gao, Zhaoli; Lee, Abigail; Saouaf, Olivia; Vrudhula, Amey; Johnson, A. T. Charlie


    Simpler and more rapid approaches for therapeutic drug-level monitoring are highly desirable to enable use at the point-of-care. We have developed an all-electronic approach for detection of the HIV drug tenofovir based on scalable fabrication of arrays of graphene field-effect transistors (GFETs) functionalized with a commercially available DNA aptamer. The shift in the Dirac voltage of the GFETs varied systematically with the concentration of tenofovir in deionized water, with a detection limit less than 1 ng/mL. Tests against a set of negative controls confirmed the specificity of the sensor response. This approach offers the potential for further development into a rapid and convenient point-of-care tool with clinically relevant performance.

  14. Appropriate models in decision support systems for river basin management

    NARCIS (Netherlands)

    Xu, YuePing; Booij, Martijn J.; Morell, M.; Todorovik, O.; Dimitrov, D.; Selenica, A.; Spirkovski, Z.


    In recent years, new ideas and techniques appear very quickly, like sustainability, adaptive management, Geographic Information System, Remote Sensing and participations of new stakeholders, which contribute a lot to the development of decision support systems in river basin management. However, the

  15. Zeolite supported palladium catalysts for hydroalkylation of phenolic model compounds

    Czech Academy of Sciences Publication Activity Database

    Akhmetzyanova, U.; Opanasenko, Maksym; Horáček, J.; Montanari, E.; Čejka, Jiří; Kikhtyanin, O.


    Roč. 252, NOV 2017 (2017), s. 116-124 ISSN 1387-1811 R&D Projects: GA ČR GBP106/12/G015 Institutional support: RVO:61388955 Keywords : Phenol hydroalkylation * Cyclohexylcyclohexane * MWW Subject RIV: CF - Physical ; Theoretical Chemistry OBOR OECD: Physical chemistry Impact factor: 3.615, year: 2016

  16. Designing, Modeling and Evaluating Influence Strategiesfor Behavior Change Support Systems

    NARCIS (Netherlands)

    Öörni, Anssi; Kelders, Saskia Marion; van Gemert-Pijnen, Julia E.W.C.; Oinas-Kukkonen, Harri


    Behavior change support systems (BCSS) research is an evolving area. While such systems have been demonstrated to work, there is still a lot of work to be done to better understand the influence mechanisms of behavior change and to work out their influence on systems architecture. The

  17. Real time traffic models, decision support for traffic management

    NARCIS (Netherlands)

    Wismans, Luc Johannes Josephus; de Romph, E.; Friso, K.; Zantema, K.


    Reliable and accurate short-term traffic state prediction can improve the performance of real-time traffic management systems significantly. Using this short-time prediction based on current measurements delivered by advanced surveillance systems will support decision-making processes on various

  18. Real Time Traffic Models, Decision Support for Traffic Management

    NARCIS (Netherlands)

    Wismans, L.; De Romph, E.; Friso, K.; Zantema, K.


    Reliable and accurate short-term traffic state prediction can improve the performance of real-time traffic management systems significantly. Using this short-time prediction based on current measurements delivered by advanced surveillance systems will support decision-making processes on various

  19. Ordered mesoporous materials as model supports to study catalyst preparation

    NARCIS (Netherlands)

    Sietsma, J.R.A.


    Catalysts are indispensable to modern-day society because of their prominent role in petroleum refining, chemical processing, and the reduction of environmental pollution. The catalytically active component often consists of small metal (oxide) particles that are supported on a carrier such as

  20. Decision support for sustainable forestry: enhancing the basic rational model. (United States)

    H.R. Ekbia; K.M. Reynolds


    Decision-support systems (DSS) have been extensively used in the management of natural resources for nearly two decades. However, practical difficulties with the application of DSS in real-world situations have become increasingly apparent. Complexities of decision making, encountered in the context of ecosystem management, are equally present in sustainable forestry....

  1. Model of Early Support of Child Development in Poland (United States)

    Czyz, Anna Katarzyna


    The development of a child, especially a child with a disability, is conditional upon the initiation of rehabilitation measures immediately after the problem has been identified. The quality of the reaction is conditioned by the functioning of the therapeutic team. The main purpose of the research was the diagnosis of early support system for…

  2. Improving diabetes medication adherence: successful, scalable interventions

    Directory of Open Access Journals (Sweden)

    Zullig LL


    Full Text Available Leah L Zullig,1,2 Walid F Gellad,3,4 Jivan Moaddeb,2,5 Matthew J Crowley,1,2 William Shrank,6 Bradi B Granger,7 Christopher B Granger,8 Troy Trygstad,9 Larry Z Liu,10 Hayden B Bosworth1,2,7,11 1Center for Health Services Research in Primary Care, Durham Veterans Affairs Medical Center, Durham, NC, USA; 2Department of Medicine, Duke University, Durham, NC, USA; 3Center for Health Equity Research and Promotion, Pittsburgh Veterans Affairs Medical Center, Pittsburgh, PA, USA; 4Division of General Internal Medicine, University of Pittsburgh, Pittsburgh, PA, USA; 5Institute for Genome Sciences and Policy, Duke University, Durham, NC, USA; 6CVS Caremark Corporation; 7School of Nursing, Duke University, Durham, NC, USA; 8Department of Medicine, Division of Cardiology, Duke University School of Medicine, Durham, NC, USA; 9North Carolina Community Care Networks, Raleigh, NC, USA; 10Pfizer, Inc., and Weill Medical College of Cornell University, New York, NY, USA; 11Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, NC, USA Abstract: Effective medications are a cornerstone of prevention and disease treatment, yet only about half of patients take their medications as prescribed, resulting in a common and costly public health challenge for the US healthcare system. Since poor medication adherence is a complex problem with many contributing causes, there is no one universal solution. This paper describes interventions that were not only effective in improving medication adherence among patients with diabetes, but were also potentially scalable (ie, easy to implement to a large population. We identify key characteristics that make these interventions effective and scalable. This information is intended to inform healthcare systems seeking proven, low resource, cost-effective solutions to improve medication adherence. Keywords: medication adherence, diabetes mellitus, chronic disease, dissemination research

  3. The adapted model of institutional support for Hispanic student degree completion: revisions and recommendations. (United States)

    Bond, Mary Lou; Cason, Carolyn L; Gray, Jennifer R


    This article describes the historical development of the adapted model of institutional support (AMIS) for Hispanic student degree completion. The model was developed using 6 major categories of support: financial support, emotional and moral support, mentoring, professional socialization, academic advising, and technical support. Studies used to validate the inclusion of each of the components are presented. Two self-assessment instruments based on the model, the Institutional Self-Assessment for Factors Supporting Hispanic Student Recruitment and Persistence and the Healthcare Professions Education Program Self-Assessment (PSA), used to evaluate institutional supports for Hispanic student degree completion are described. This article describes the results of 2 studies using the PSA. The findings from these studies provide support for the AMIS. Limitations of the model and recommendations for further research are presented.


    Almquist, Zack W; Butts, Carter T


    Change in group size and composition has long been an important area of research in the social sciences. Similarly, interest in interaction dynamics has a long history in sociology and social psychology. However, the effects of endogenous group change on interaction dynamics are a surprisingly understudied area. One way to explore these relationships is through social network models. Network dynamics may be viewed as a process of change in the edge structure of a network, in the vertex set on which edges are defined, or in both simultaneously. Although early studies of such processes were primarily descriptive, recent work on this topic has increasingly turned to formal statistical models. Although showing great promise, many of these modern dynamic models are computationally intensive and scale very poorly in the size of the network under study and/or the number of time points considered. Likewise, currently used models focus on edge dynamics, with little support for endogenously changing vertex sets. Here, the authors show how an existing approach based on logistic network regression can be extended to serve as a highly scalable framework for modeling large networks with dynamic vertex sets. The authors place this approach within a general dynamic exponential family (exponential-family random graph modeling) context, clarifying the assumptions underlying the framework (and providing a clear path for extensions), and they show how model assessment methods for cross-sectional networks can be extended to the dynamic case. Finally, the authors illustrate this approach on a classic data set involving interactions among windsurfers on a California beach.
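
    A toy version of the core modeling move, treating each potential edge at time t as a binary outcome regressed on lagged-network covariates, can be written directly with off-the-shelf logistic regression; the covariates and the simulated dynamics below are illustrative assumptions, not the authors' specification:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, T = 30, 20
nets = [(rng.random((n, n)) < 0.1).astype(int)]
for _ in range(T):                      # simulate persistence-driven dynamics
    prev = nets[-1]
    logits = -2.5 + 3.0 * prev + 0.8 * ((prev @ prev) > 0)
    nets.append((rng.random((n, n)) < 1.0 / (1.0 + np.exp(-logits))).astype(int))

rows, cols = np.triu_indices(n, k=1)    # one observation per dyad per slice
X, Y = [], []
for prev, cur in zip(nets[:-1], nets[1:]):
    two_path = ((prev @ prev) > 0).astype(int)
    X.append(np.column_stack([prev[rows, cols], two_path[rows, cols]]))
    Y.append(cur[rows, cols])
X, Y = np.vstack(X), np.concatenate(Y)

model = LogisticRegression().fit(X, Y)
print(model.intercept_, model.coef_)    # roughly recovers -2.5, [3.0, 0.8]
```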

  5. Autonomy Support, Need Satisfaction, and Motivation for Support Among Adults With Intellectual Disability: Testing a Self-Determination Theory Model. (United States)

    Frielink, Noud; Schuengel, Carlo; Embregts, Petri J C M


    The tenets of self-determination theory as applied to support were tested with structural equation modelling for 186 people with ID with a mild to borderline level of functioning. The results showed that (a) perceived autonomy support was positively associated with autonomous motivation and with satisfaction of need for autonomy, relatedness, and competence; (b) autonomous motivation and need satisfaction were associated with higher psychological well-being; (c) autonomous motivation and need satisfaction statistically mediated the association between autonomy support and well-being; and (d) satisfaction of need for autonomy and relatedness was negatively associated with controlled motivation, whereas satisfaction of need for relatedness was positively associated with autonomous motivation. The self-determination theory provides insights relevant for improving support for people with intellectual disability.

  6. Scalable and Media Aware Adaptive Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Béatrice Pesquet-Popescu


    Full Text Available This paper proposes an advanced video streaming system based on scalable video coding in order to optimize resource utilization in wireless networks with retransmission mechanisms at the radio protocol level. The key component of this system is a packet scheduling algorithm which operates on the different substreams of a main scalable video stream and which is implemented in a so-called media-aware network element. The transport channel considered is a dedicated channel subject to parameter (bitrate, loss rate) variations over the long run. Moreover, we propose a combined scalability approach in which common temporal and SNR scalability features can be used jointly with a partitioning of the image into regions of interest. Simulation results show that our approach provides substantial quality gain compared to classical packet transmission methods and demonstrate how ROI coding combined with SNR scalability further improves the visual quality.

  7. An integrated tiered service delivery model (ITSDM) based on local CD4 testing demands can improve turn-around times and save costs whilst ensuring accessible and scalable CD4 services across a national programme.

    Directory of Open Access Journals (Sweden)

    Deborah K Glencross

    Full Text Available The South African National Health Laboratory Service (NHLS) responded to HIV treatment initiatives with two-tiered CD4 laboratory services in 2004. Increasing programmatic burden, as more patients access anti-retroviral therapy (ART), has demanded extending CD4 services to meet increasing clinical needs. The aim of this study was to review existing services and develop a service model that integrated laboratory-based and point-of-care testing (POCT), to extend national coverage, improve local turn-around times (TAT) and contain programmatic costs. NHLS Corporate Data Warehouse CD4 data from 60-70 laboratories and 4756 referring health facilities was reviewed for referral laboratory workload, respective referring facility volumes and related TAT, from 2009-2012. An integrated tiered service delivery model (ITSDM) is proposed. Tier-1/POCT delivers CD4 testing at single health-clinics providing ART in hard-to-reach areas (350-1500 tests/day, serving ≥ 200 health-clinics). Tier-6 provides national support for standardisation, harmonization and quality across the organization. The ITSDM offers improved local TAT by extending CD4 services into rural/remote areas with new Tier-3 or Tier-2/POC-Hub services installed in existing community laboratories, most with developed infrastructure. The advantage of lower laboratory CD4 costs and the use of existing infrastructure enables subsidization of the delivery of more expensive POC services into hard-to-reach districts without reasonable access to a local CD4 laboratory. Full ITSDM implementation across 5 service tiers (as opposed to widespread implementation of POC testing to extend service) can facilitate sustainable 'full service coverage' across South Africa, and save more than R125 million in HIV/AIDS programmatic costs. ITSDM hierarchical parental support also assures laboratory/POC management, equipment maintenance, quality control and on-going training between tiers.

  8. An Integrated Tiered Service Delivery Model (ITSDM) Based on Local CD4 Testing Demands Can Improve Turn-Around Times and Save Costs whilst Ensuring Accessible and Scalable CD4 Services across a National Programme (United States)

    Glencross, Deborah K.; Coetzee, Lindi M.; Cassim, Naseem


    Background The South African National Health Laboratory Service (NHLS) responded to HIV treatment initiatives with two-tiered CD4 laboratory services in 2004. Increasing programmatic burden, as more patients access anti-retroviral therapy (ART), has demanded extending CD4 services to meet increasing clinical needs. The aim of this study was to review existing services and develop a service model that integrated laboratory-based and point-of-care testing (POCT), to extend national coverage, improve local turn-around times (TAT) and contain programmatic costs. Methods NHLS Corporate Data Warehouse CD4 data, from 60–70 laboratories and 4756 referring health facilities, was reviewed for referral laboratory workload, respective referring facility volumes and related TAT, from 2009–2012. Results An integrated tiered service delivery model (ITSDM) is proposed. Tier-1/POCT delivers CD4 testing at single health-clinics providing ART in hard-to-reach areas (350–1500 tests/day, serving ≥200 health-clinics). Tier-6 provides national support for standardisation, harmonization and quality across the organization. Conclusion The ITSDM offers improved local TAT by extending CD4 services into rural/remote areas with new Tier-3 or Tier-2/POC-Hub services installed in existing community laboratories, most with developed infrastructure. The advantage of lower laboratory CD4 costs and the use of existing infrastructure enables subsidization of the delivery of more expensive POC services into hard-to-reach districts without reasonable access to a local CD4 laboratory. Full ITSDM implementation across 5 service tiers (as opposed to widespread implementation of POC testing to extend service) can facilitate sustainable ‘full service coverage’ across South Africa, and save more than R125 million in HIV/AIDS programmatic costs. ITSDM hierarchical parental support also assures laboratory/POC management, equipment maintenance, quality control and on-going training between tiers. PMID:25490718

  9. Prediction Models and Decision Support: Chances and Challenges

    NARCIS (Netherlands)

    Kappen, T.H.


    A clinical prediction model can assist doctors in arriving at the most likely diagnosis or estimating the prognosis. By utilizing various patient- and disease-related properties, such models can yield objective estimations of the risk of a disease or the probability of a certain disease course for

  10. Exploiting Modelling and Simulation in Support of Cyber Defence

    NARCIS (Netherlands)

    Klaver, M.H.A.; Boltjes, B.; Croom-Jonson, S.; Jonat, F.; Çankaya, Y.


    The rapidly evolving environment of Cyber threats against the NATO Alliance has necessitated a renewed focus on the development of Cyber Defence policy and capabilities. The NATO Modelling and Simulation Group is looking for ways to leverage Modelling and Simulation experience in research, analysis

  11. Clearing the air : Air quality modelling for policy support

    NARCIS (Netherlands)

    Hendriks, C.


    The origin of particulate matter (PM) concentrations in the Netherlands is established using the LOTOS-EUROS model with a source attribution module. Emissions from the ten main economic sectors (SNAP1) were tracked, separating Dutch and foreign sources. Of the modelled PM10 in the Netherlands, about

  12. Support of Modelling in Process-Engineering Education

    NARCIS (Netherlands)

    Schaaf, van der H.; Vermuë, M.H.; Tramper, J.; Hartog, R.J.M.


    An objective of the Process Technology curriculum at Wageningen University is to teach students a stepwise modeling approach in the context of process engineering. Many process-engineering students have difficulty with learning to design a model. Some common problems are lack of structure in the

  13. Disaster Reintegration Model: A Qualitative Analysis on Developing Korean Disaster Mental Health Support Model

    Directory of Open Access Journals (Sweden)

    Yun-Jung Choi


    Full Text Available This study sought to describe the mental health problems experienced by Korean disaster survivors, using a qualitative research method to provide empirical resources for effective disaster mental health support in Korea. Participants were 16 adults or elderly adults who had experienced one or more disasters at least 12 months earlier, recruited via theoretical sampling. Participants underwent in-depth individual interviews on their disaster experiences, which were recorded and transcribed for qualitative analysis following Strauss and Corbin's (1998) grounded theory. After open coding, participants' experiences were categorized into 130 codes, 43 sub-categories and 17 categories. The categories were further analyzed in a paradigm model, a conditional model and the Disaster Reintegration Model, which proposed potentially effective mental health recovery strategies for disaster survivors, health providers and administrators. To provide effective assistance for the mental health recovery of disaster survivors, both personal and public resilience should be promoted, while considering both cultural and spiritual elements.

  14. Model of hospital-supported discharge after stroke

    DEFF Research Database (Denmark)

    Torp, Claus Rydahl; Vinkler, Sonja; Pedersen, Kirsten Damgaard


    BACKGROUND AND PURPOSE: Readmission rate within 6 months after a stroke is 40% to 50%. The purpose of the project was to evaluate whether an interdisciplinary stroke team could reduce length of hospital stay, readmission rate, increase patient satisfaction and reduce dependency of help. METHODS......: One hundred and ninety-eight patients with acute stroke were randomized into 103 patients whose discharge was supported by an interdisciplinary stroke team and 95 control patients who received standard aftercare. Baseline characteristics were comparable in the 2 groups. The patients were evaluated...... services. Furthermore, there was no significant difference in functional scores or patient satisfaction. CONCLUSIONS: In this setting we could not show benefit of an interdisciplinary stroke team supporting patients at discharge perhaps because standard aftercare was very efficient already....

  15. fastBMA: scalable network inference and transitive reduction. (United States)

    Hung, Ling-Hong; Shi, Kaiyuan; Wu, Migao; Young, William Chad; Raftery, Adrian E; Yeung, Ka Yee


    Inferring genetic networks from genome-wide expression data is extremely demanding computationally. We have developed fastBMA, a distributed, parallel, and scalable implementation of Bayesian model averaging (BMA) for this purpose. fastBMA also includes a computationally efficient module for eliminating redundant indirect edges in the network by mapping the transitive reduction to an easily solved shortest-path problem. We evaluated the performance of fastBMA on synthetic data and experimental genome-wide time series yeast and human datasets. When using a single CPU core, fastBMA is up to 100 times faster than the next fastest method, LASSO, with increased accuracy. It is a memory-efficient, parallel, and distributed application that scales to human genome-wide expression data. A 10 000-gene regulation network can be obtained in a matter of hours using a 32-core cloud cluster (2 nodes of 16 cores). fastBMA is a significant improvement over its predecessor ScanBMA. It is more accurate and orders of magnitude faster than other fast network inference methods such as the one based on LASSO. The improved scalability allows it to calculate networks from genome-scale data in a reasonable time frame. The transitive reduction method can improve accuracy in denser networks. fastBMA is available as code (M.I.T. license) from GitHub (, as part of the updated networkBMA Bioconductor package ( and as ready-to-deploy Docker images ( © The Authors 2017. Published by Oxford University Press.
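
    The shortest-path formulation of transitive reduction can be sketched in a few lines: with edge weights w = -log(confidence), an edge is redundant when some indirect path is at least as short (i.e., at least as strong). This is an illustrative reimplementation of the general trick with made-up edges, not fastBMA's module:

```python
import math
import networkx as nx

edges = {                                   # hypothetical regulatory edges
    ("A", "B"): 0.9, ("B", "C"): 0.9, ("A", "C"): 0.5, ("C", "D"): 0.8,
}
g = nx.DiGraph()
for (u, v), conf in edges.items():
    g.add_edge(u, v, weight=-math.log(conf))

redundant = []
for u, v in list(g.edges):
    w = g[u][v]["weight"]
    g.remove_edge(u, v)                     # test whether an indirect path
    try:                                    # is at least as strong (short)
        if nx.shortest_path_length(g, u, v, weight="weight") <= w:
            redundant.append((u, v))
            continue                        # keep the redundant edge removed
    except nx.NetworkXNoPath:
        pass
    g.add_edge(u, v, weight=w)

print(redundant)                            # [('A', 'C')]: 0.9*0.9 > 0.5
```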

  16. SWAP-Assembler: scalable and efficient genome assembly towards thousands of cores. (United States)

    Meng, Jintao; Wang, Bingqiang; Wei, Yanjie; Feng, Shengzhong; Balaji, Pavan


    There is a widening gap between the throughput of massively parallel sequencing machines and the ability to analyze these sequencing data. Traditional assembly methods, requiring long execution times and large amounts of memory on a single workstation, limit their use on these massive data. This paper presents a highly scalable assembler named SWAP-Assembler for processing massive sequencing data using thousands of cores, where SWAP is an acronym for Small World Asynchronous Parallel model. In the paper, a mathematical description of the multi-step bi-directed graph (MSG) is provided to resolve the computational interdependence of merging edges, and a highly scalable computational framework for SWAP is developed to automatically perform the parallel computation of all operations. Graph cleaning and contig extension are also included for generating contigs of high quality. Experimental results show that SWAP-Assembler scales up to 2048 cores on the Yanhuang dataset using only 26 minutes, which is better than several other parallel assemblers, such as ABySS, Ray, and PASHA. Results also show that SWAP-Assembler can generate high quality contigs with good N50 size and low error rate; in particular, it generated the longest N50 contig sizes for the Fish and Yanhuang datasets. In this paper, we presented a highly scalable and efficient genome assembly software, SWAP-Assembler. Compared with several other assemblers, it showed very good performance in terms of scalability and contig quality. This software is available at:

  17. Using Technology to Support the Army Learning Model (United States)


    more graphic scenes which provided the most impact on the student . The in-house team was successful in coming up with innovative and creative ...for instructors to monitor the students ’ use of the product during classroom time in order to provide feedback and support. The training the...a sense, benefits to the students relied on the creativity of the instructor. For example, some instructors used the products to preview or review

  18. An Exploratory Analysis of the Navy Personnel Support Delivery Model (United States)


    of Secretary of Defense; PAPA DET: Pay and Personnel Afloat Detachment; PASS: Pay/Personnel Administrative Support System; PASSMAN: PASS Management Manual ...made manually face-to-face. Current efforts are focused on developing the Integrated Personnel and Pay System-Army (IPPS-A), the Army-specific solution...Association, 2017). It is a pioneer of direct marketing, self-service transactions, and online banking services with its patent on remote

  19. Open Models of Decision Support Towards a Framework


    Diasio, Stephen Ray


    This thesis presents a framework for open models of decision support in organizations. The work takes the form of a compendium of articles that analyse inbound and outbound knowledge flows in organizations, as well as existing decision support technologies. The underlying factors driving new models for open forms of decision support are presented. The thesis presents a study of the different typologies of models of support for d...

  20. Atomic structure of graphene supported heterogeneous model catalysts

    Energy Technology Data Exchange (ETDEWEB)

    Franz, Dirk


    Graphene on Ir(111) forms a moire structure with well defined nucleation centres. Therefore it can be utilized to create hexagonal metal cluster lattices with outstanding structural quality. In diffraction experiments these 2D surface lattices cause a coherent superposition of the moire cell structure factor, so that the measured signal intensity scales with the square of the number of coherently scattering unit cells. This artificial signal enhancement gives X-ray diffraction the opportunity to determine the atomic structure of small nano-objects, which are hardly accessible with any other experimental technique. The uniform environment of every metal cluster makes the described metal cluster lattices on graphene/Ir(111) an attractive model system for the investigation of catalytic, magnetic and quantum size properties of ultra-small nano-objects. In this context the use of x-rays provides a maximum of flexibility concerning the possible sample environments (vacuum, selected gases, liquids, sample temperature) and allows in-situ/operando measurements. In the framework of the present thesis the structures of different metal clusters, grown by physical vapor deposition in a UHV environment and after gas exposure, have been investigated. On the one hand the obtained results explore many aspects of the atomic structure of these small metal clusters, and on the other hand they prove the capabilities of the described technique (SXRD on cluster lattices). For iridium, platinum, iridium/palladium and platinum/rhodium the growth on graphene/Ir(111) of epitaxial, crystalline clusters with an ordered hexagonal lattice arrangement has been confirmed using SXRD. The clusters nucleate at the hcp sites of the moire cell and bind via rehybridization of the carbon atoms (sp² → sp³) to the Ir(111) substrate. This causes small displacements of the substrate atoms, which is revealed by the diffraction experiments. All metal clusters exhibit a fcc structure

  1. Spreadsheet Decision Support Model for Training Exercise Material Requirements Planning

    National Research Council Canada - National Science Library

    Tringali, Arthur


    ... associated with military training exercises. The model combines the business practice of Material Requirements Planning and the commercial spreadsheet software capabilities of Lotus 1-2-3 to calculate the requirements for food, consumable...

  2. Computer supported estimation of input data for transportation models


    Cenek, Petr; Tarábek, Peter; Kopf, Marija


    Control and management of transportation systems frequently rely on optimization or simulation methods based on a suitable model. Such a model uses optimization or simulation procedures and correct input data. The input data define transportation infrastructure and transportation flows. Data acquisition is a costly process and so an efficient approach is highly desirable. The infrastructure can be recognized from drawn maps using segmentation, thinning and vectorization. The accurate definiti...

  3. Scalable Inference of Customer Similarities from Interactions Data Using Dirichlet Processes


    Michael Braun; André Bonfrer


    Under the sociological theory of homophily, people who are similar to one another are more likely to interact with one another. Marketers often have access to data on interactions among customers from which, with homophily as a guiding principle, inferences could be made about the underlying similarities. However, larger networks face a quadratic explosion in the number of potential interactions that need to be modeled. This scalability problem renders probability models of social interaction...

  4. Data to support "Boosted Regression Tree Models to Explain Watershed Nutrient Concentrations & Biological Condition" (United States)

    U.S. Environmental Protection Agency — Spreadsheets are included here to support the manuscript "Boosted Regression Tree Models to Explain Watershed Nutrient Concentrations and Biological Condition". This...

  5. Agricultural climate impacts assessment for economic modeling and decision support (United States)

    Thomson, A. M.; Izaurralde, R. C.; Beach, R.; Zhang, X.; Zhao, K.; Monier, E.


    A range of approaches can be used in the application of climate change projections to agricultural impacts assessment. Climate projections can be used directly to drive crop models, which in turn can be used to provide inputs for agricultural economic or integrated assessment models. These model applications, and the transfer of information between models, must be guided by the state of the science. But the methodology must also account for the specific needs of stakeholders and the intended use of model results beyond pure scientific inquiry, including meeting the requirements of agencies responsible for designing and assessing policies, programs, and regulations. Here we present the methodology and results of two climate impacts studies that applied climate model projections from CMIP3 and from the EPA Climate Impacts and Risk Analysis (CIRA) project in a crop model (EPIC - Environmental Policy Integrated Climate) in order to generate estimates of changes in crop productivity for use in an agricultural economic model for the United States (FASOM - Forest and Agricultural Sector Optimization Model). The FASOM model is a forward-looking dynamic model of the US forest and agricultural sector used to assess market responses to changing productivity of alternative land uses. The first study, focused on climate change impacts on the USDA crop insurance program, was designed to use available daily climate projections from the CMIP3 archive. The decision to focus on daily data for this application limited the climate model and time period selection significantly; however, for the intended purpose of assessing impacts on crop insurance payments, consideration of extreme event frequency was critical for assessing periodic crop failures. In a second, coordinated impacts study designed to assess the relative difference in climate impacts under a no-mitigation policy and different future climate mitigation scenarios, the stakeholder specifically requested an assessment of a

  6. Product Quality Modelling Based on Incremental Support Vector Machine

    International Nuclear Information System (INIS)

    Wang, J; Zhang, W; Qin, B; Shi, W


    Incremental Support Vector Machine (ISVM) is a new learning method developed in recent years based on the foundations of statistical learning theory. It is suitable for problems with sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant samples; this affects the learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) conditions; then the distance from each margin vector to the final decision hyperplane is calculated to evaluate its importance, and margin vectors whose distance exceeds a specified value are removed; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM can not only eliminate unimportant samples such as noise, but also preserve the important samples. The MISVM has been evaluated on two public datasets and one field dataset of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method improves prediction accuracy and training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and can also be extended to other process industries, such as chemical and manufacturing processes.
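
    The margin-vector filtering step described above (the record is repeated as item 7 below) can be sketched compactly. The following is a minimal illustration, not the authors' implementation: it assumes a linear kernel, scikit-learn's SVC, and an illustrative distance threshold, and treats "margin vectors" as old samples lying close to the current hyperplane.

      import numpy as np
      from sklearn.svm import SVC

      def misvm_update(X_old, y_old, X_new, y_new, C=1.0, max_dist=1.5):
          """One incremental step: keep old SVs and near-margin samples, refit."""
          svm = SVC(kernel="linear", C=C).fit(X_old, y_old)
          # Geometric distance of every old sample to the decision hyperplane.
          dist = np.abs(svm.decision_function(X_old)) / np.linalg.norm(svm.coef_)
          keep = dist <= max_dist              # distant (likely noisy) samples dropped
          keep[svm.support_] = True            # support vectors are always kept
          X = np.vstack([X_old[keep], X_new])  # retained samples + incremental batch
          y = np.concatenate([y_old[keep], y_new])
          return SVC(kernel="linear", C=C).fit(X, y)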

  7. Product Quality Modelling Based on Incremental Support Vector Machine (United States)

    Wang, J.; Zhang, W.; Qin, B.; Shi, W.


    Incremental Support Vector Machine (ISVM) is a new learning method developed in recent years based on the foundations of statistical learning theory. It is suitable for problems with sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant samples; this affects the learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) conditions; then the distance from each margin vector to the final decision hyperplane is calculated to evaluate its importance, and margin vectors whose distance exceeds a specified value are removed; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM can not only eliminate unimportant samples such as noise, but also preserve the important samples. The MISVM has been evaluated on two public datasets and one field dataset of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method improves prediction accuracy and training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and can also be extended to other process industries, such as chemical and manufacturing processes.

  8. GenePING: secure, scalable management of personal genomic data

    Directory of Open Access Journals (Sweden)

    Kohane Isaac S


    Background: Patient genomic data are rapidly becoming part of clinical decision making. Within a few years, full genome expression profiling and genotyping will be affordable enough to perform on every individual. The management of such sizeable, yet fine-grained, data in compliance with privacy laws and best practices presents significant security and scalability challenges. Results: We present the design and implementation of GenePING, an extension to the PING personal health record system that supports secure storage of large, genome-sized datasets, as well as efficient sharing and retrieval of individual datapoints (e.g. SNPs, rare mutations, gene expression levels). Even with full access to the raw GenePING storage, an attacker cannot discover any stored genomic datapoint on any single patient. Given a large enough number of patient records, an attacker cannot discover which data correspond to which patient, or even the size of a given patient's record. The computational overhead of GenePING's security features is a small constant, making the system usable, even in emergency care, on today's hardware. Conclusion: GenePING is the first personal health record management system to support the efficient and secure storage and sharing of large genomic datasets. GenePING is available online at, licensed under the LGPL.

  9. A Numerical Study of Scalable Cardiac Electro-Mechanical Solvers on HPC Architectures

    Directory of Open Access Journals (Sweden)

    Piero Colli Franzone


    We introduce and study some scalable domain decomposition preconditioners for cardiac electro-mechanical 3D simulations on parallel HPC (High Performance Computing) architectures. The electro-mechanical model of the cardiac tissue is composed of four coupled sub-models: (1) the static finite elasticity equations for the transversely isotropic deformation of the cardiac tissue; (2) the active tension model describing the dynamics of the intracellular calcium, cross-bridge binding and myofilament tension; (3) the anisotropic Bidomain model describing the evolution of the intra- and extra-cellular potentials in the deforming cardiac tissue; and (4) the ionic membrane model describing the dynamics of ionic currents, gating variables, ionic concentrations and stretch-activated channels. This strongly coupled electro-mechanical model is discretized in time with a splitting semi-implicit technique and in space with isoparametric finite elements. The resulting scalable parallel solver is based on Multilevel Additive Schwarz preconditioners for the solution of the Bidomain system and on BDDC preconditioned Newton-Krylov solvers for the non-linear finite elasticity system. The results of several 3D parallel simulations show the scalability of both linear and non-linear solvers and their application to the study of both physiological excitation-contraction cardiac dynamics and re-entrant waves in the presence of different mechano-electrical feedbacks.
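
    For orientation, sub-model (3), the anisotropic Bidomain system, is commonly written in the following standard form (a textbook formulation given here for context; the cited work couples it to finite elasticity, so its domains, coefficients, and sign conventions may differ):

      \[
      \chi C_m \frac{\partial v}{\partial t}
        - \nabla \cdot \big( D_i \nabla u_i \big) + \chi I_{\mathrm{ion}}(v, w) = I^{\mathrm{app}}_i ,
      \]
      \[
      -\chi C_m \frac{\partial v}{\partial t}
        - \nabla \cdot \big( D_e \nabla u_e \big) - \chi I_{\mathrm{ion}}(v, w) = -I^{\mathrm{app}}_e ,
      \qquad v = u_i - u_e ,
      \]

    where u_i and u_e are the intra- and extracellular potentials, D_i and D_e the anisotropic conductivity tensors, χ the membrane surface-to-volume ratio, C_m the membrane capacitance, and w the gating and ionic-concentration variables evolved by sub-model (4).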

  10. Experimentally supported mathematical modeling of continuous baking processes

    DEFF Research Database (Denmark)

    Stenby Andresen, Mette

    and temperature) and control the process (air flow, temperature, and humidity) are therefore emphasized. The oven is furthermore designed to work outside the range of standard tunnel ovens, making it interesting for manufacturers of both baking products and baking equipment. A mathematical model describing......The scope of the PhD project was to increase knowledge on the process-to-product interactions in continuous tunnel ovens. The work has focused on five main objectives. These objectives cover development of new experimental equipment for pilot plant baking experiments, mathematical modeling of heat...... on mass transfer was examined through comparison of different modeling set-ups and experimental data. It was found that while the baking tray is likely to reduce the evaporation from the bottom surface, it is not correct to assume that no evaporation takes place at the covered surface. Parallel...

  11. Decision Support Model for Introduction of Gamification Solution Using AHP

    Directory of Open Access Journals (Sweden)

    Sangkyun Kim


    Gamification means the use of various elements of game design in nongame contexts, including workplace collaboration, marketing, education, military, and medical services. Gamification is effective for both improving workplace productivity and motivating employees. However, the introduction of gamification is not easy because the planning and implementation processes of gamification are very complicated, and it needs interdisciplinary knowledge in areas such as information systems, organizational behavior, and human psychology. The purpose of this paper is to provide a systematic decision-making method for the gamification process. This paper suggests decision criteria for the selection of a gamification platform to support a systematic decision-making process for management. The criteria are derived from previous works on gamification, the introduction of information systems, and the analytic hierarchy process. The weights of the decision criteria are calculated through a survey of professionals in games, information systems, and business administration. The analytic hierarchy process is used to derive the weights. The decision criteria and weights provided in this paper can support management in making a systematic decision on the selection of a gamification platform.
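
    The AHP weight derivation used here (the record is repeated as item 12 below) reduces to a principal-eigenvector computation on the pairwise comparison matrix. A minimal sketch with a hypothetical 3-criterion matrix (the paper's survey-derived matrices are not reproduced):

      import numpy as np

      def ahp_weights(pairwise):
          """Return AHP priority weights: the normalized principal eigenvector."""
          vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
          k = np.argmax(vals.real)                 # principal eigenvalue
          w = np.abs(vecs[:, k].real)
          return w / w.sum()

      # Hypothetical comparison matrix: criterion 1 moderately-to-strongly
      # preferred over criteria 2 and 3 (Saaty's 1-9 scale).
      A = [[1,   3,   5],
           [1/3, 1,   2],
           [1/5, 1/2, 1]]
      print(ahp_weights(A))   # approximately [0.65, 0.23, 0.12]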

  12. Decision support model for introduction of gamification solution using AHP. (United States)

    Kim, Sangkyun


    Gamification means the use of various elements of game design in nongame contexts, including workplace collaboration, marketing, education, military, and medical services. Gamification is effective for both improving workplace productivity and motivating employees. However, the introduction of gamification is not easy because the planning and implementation processes of gamification are very complicated, and it needs interdisciplinary knowledge in areas such as information systems, organizational behavior, and human psychology. The purpose of this paper is to provide a systematic decision-making method for the gamification process. This paper suggests decision criteria for the selection of a gamification platform to support a systematic decision-making process for management. The criteria are derived from previous works on gamification, the introduction of information systems, and the analytic hierarchy process. The weights of the decision criteria are calculated through a survey of professionals in games, information systems, and business administration. The analytic hierarchy process is used to derive the weights. The decision criteria and weights provided in this paper can support management in making a systematic decision on the selection of a gamification platform.

  13. Validation of Superelement Modelling of Complex Offshore Support Structures

    DEFF Research Database (Denmark)

    Wang, Shaofeng; Larsen, Torben J.; Hansen, Anders Melchior


    calculations consisting of up to thousands of design load cases need to be evaluated. However, even the simplest aero-elastic model of such structures has many more DOFs than a monopile, resulting in an excessive computation burden. In order to deal with this problem, the superelement method has been introduced...... for modelling such structures. One superelement method proved very promising in the previous Wave Loads project [1], and a fundamental question in such DOF-reduction methods is which modes are essential and which can be neglected. For the jacket structure, the introduction of a gravity

  14. On Regional Modeling to Support Air Quality Policies (book chapter) (United States)

    We examine the use of the Community Multiscale Air Quality (CMAQ) model in simulating the changes in the extreme values of air quality that are of interest to the regulatory agencies. Year-to-year changes in ozone air quality are attributable to variations in the prevailing meteo...

  15. Speech act theory in support of idealised warning models | Carstens ...

    African Journals Online (AJOL)

    In applied communication studies, warnings (as components of instructional texts) are often characterised in terms of criteria for effectiveness. An idealised model for warnings includes the following elements: a signal word or label appropriate to the level of hazard; a hazard statement; references to the consequences of ...

  16. Verifying OCL specifications of UML models : tool support and compositionality

    NARCIS (Netherlands)

    Kyas, Marcel


    The Unified Modelling Language (UML) and the Object Constraint Language (OCL) serve as specification languages for embedded and real-time systems used in a safety-critical environment. In this dissertation, class diagrams, object diagrams, and OCL constraints are formalised. The formalisation

  17. Using landscape disturbance and succession models to support forest management (United States)

    Eric J. Gustafson; Brian R. Sturtevant; Anatoly S. Shvidenko; Robert M. Scheller


    Managers of forested landscapes must account for multiple, interacting ecological processes operating at broad spatial and temporal scales. These interactions can be of such complexity that predictions of future forest ecosystem states are beyond the analytical capability of the human mind. Landscape disturbance and succession models (LDSM) are predictive and...

  18. Supporting Renewable energies in Europe - The German Model

    International Nuclear Information System (INIS)

    Kreuzer, Karin


    This document presents some key information and figures about Germany's energy transition (Energiewende), the lead-up to the Renewable Energy Sources Act (EEG) and its amendments, the current EEG's push to direct marketing and the market premium model, and the future challenges and the planned EEG reform in 2014

  19. Assessing survivability to support power grid investment decisions

    International Nuclear Information System (INIS)

    Koziolek, Anne; Avritzer, Alberto; Suresh, Sindhu; Menasché, Daniel S.; Diniz, Morganna; Souza e Silva, Edmundo de; Leão, Rosa M.; Trivedi, Kishor; Happe, Lucia


    The reliability of power grids has been the subject of study for the past few decades. Traditionally, detailed models are used to assess how the system behaves after failures. Such models, based on power flow analysis and detailed simulations, yield accurate characterizations of the system under study. However, they fall short on scalability. In this paper, we propose an efficient and scalable approach to assess the survivability of power systems. Our approach takes into account the phased recovery of the system after a failure occurs. The proposed phased-recovery model yields metrics such as the expected accumulated energy not supplied between failure and full recovery. Leveraging the predictive power of the model, we use it as part of an optimization framework to assist in investment decisions. Given a budget and an initial circuit to be upgraded, we propose heuristics to sample the solution space in a principled way, accounting for survivability-related metrics. We have evaluated the feasibility of this approach by applying it to the design of a benchmark distribution automation circuit. Our empirical results indicate that the combination of survivability and power flow analysis can provide meaningful investment decision support for power systems engineers. - Highlights: • We propose metrics and models for scalable survivability analysis of power systems. • The survivability model captures the system's phased recovery, from failure to repair. • The survivability model is used as a building block of an optimization framework. • Heuristics assist in investment options accounting for survivability-related metrics.
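
    As a worked illustration of the headline metric, the expected accumulated energy not supplied is the duration-weighted sum of unsupplied load over the recovery phases. All figures below are hypothetical, chosen only to show the arithmetic:

      # Each phase: (name, expected duration in hours, unsupplied load in kW).
      phases = [
          ("failure isolation",   0.5, 800.0),
          ("partial restoration", 1.5, 300.0),
          ("full repair",         4.0,  50.0),
      ]
      eens = sum(duration * load for _, duration, load in phases)
      print(f"expected energy not supplied: {eens:.0f} kWh")   # 1050 kWh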

  20. Scalability Dilemma and Statistic Multiplexed Computing — A Theory and Experiment

    Directory of Open Access Journals (Sweden)

    Justin Yuan Shi


    For the last three decades, end-to-end computing paradigms, such as MPI (Message Passing Interface), RPC (Remote Procedure Call) and RMI (Remote Method Invocation), have been the de facto paradigms for distributed and parallel programming. Despite these successes, applications built using these paradigms suffer because the probability of a crash grows in proportion to application size. Checkpoint/restore and backup/recovery are the only means to save otherwise lost critical information. The scalability dilemma is such a practical challenge that the probability of data loss increases as the application scales in size. The theoretical significance of this practical challenge is that it undermines the fundamental structure of the scientific discovery process and of mission-critical services in production today. In 1997, the direct use of the end-to-end reference model in distributed programming was recognized as a fallacy, and the scalability dilemma was predicted. However, this warning was overrun by the passage of time. Today, rapidly growing digitized data demands solving the increasingly critical scalability challenges. Computing architecture scalability, although loosely defined, is now front and center of large-scale computing efforts. Constrained only by the economic law of diminishing returns, this paper proposes a narrow definition of a Scalable Computing Service (SCS). Three scalability tests are also proposed in order to distinguish service architecture flaws from poor application programming. Scalable data-intensive services require additional treatment; thus, data storage is assumed reliable in this paper. A single-sided Statistic Multiplexed Computing (SMC) paradigm is proposed. A UVR (Unidirectional Virtual Ring) SMC architecture is examined under the SCS tests. SMC was designed to circumvent the well-known impossibility of end-to-end paradigms. It relies on the proven statistic multiplexing principle to deliver reliable service

  1. A Scalable Data Taking System at a Test Beam for LHC

    CERN Multimedia


    RD-13: A Scalable Data Taking System at a Test Beam for LHC. We have installed a test beam read-out facility for the simultaneous test of LHC detectors, trigger and read-out electronics, together with the development of the supporting architecture in a multiprocessor environment. The aim of the project is to build a system which incorporates all the functionality of a complete read-out chain. Emphasis is put on a highly modular design, such that new hardware and software developments can be conveniently introduced. Exploiting this modularity, the set-up will evolve driven by progress in technologies and new software developments. One of the main thrusts of the project is modelling and integration of different read-out architectures to provide a valuable training ground for new techniques. To address these aspects in a realistic manner, we collaborate with detector R&D projects in order to test higher level trigger systems, event building and high rate data transfers, once the techniques involve...

  2. A model selection support system for numerical simulations of nuclear thermal-hydraulics

    International Nuclear Information System (INIS)

    Gofuku, Akio; Shimizu, Kenji; Sugano, Keiji; Yoshikawa, Hidekazu; Wakabayashi, Jiro


    In order to efficiently execute a dynamic simulation of a large-scale engineering system such as a nuclear power plant, it is necessary to develop an intelligent simulation support system for all phases of the simulation. This study is concerned with intelligent support for the program development phase and develops an adequate model selection support method, applying AI (Artificial Intelligence) techniques so that a simulation is executed consistently with its purpose and conditions. A prototype expert system to support model selection for numerical simulations of nuclear thermal-hydraulics in the case of a cold-leg small-break loss-of-coolant accident of a PWR plant is now under development on a personal computer. The steps to support the selection of both the fluid model and the constitutive equations for the drift flux model have been developed. Several cases of model selection were carried out and reasonable model selection results were obtained. (author)

  3. Launching applications on compute and service processors running under different operating systems in scalable network of processor boards with routers (United States)

    Tomkins, James L [Albuquerque, NM; Camp, William J [Albuquerque, NM


    A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure also permits easy physical scalability of the computing apparatus. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.

  4. Modeling snail breeding in a bioregenerative life support system (United States)

    Kovalev, V. S.; Manukovsky, N. S.; Tikhomirov, A. A.; Kolmakova, A. A.


    The discrete-time model of snail breeding consists of two sequentially linked submodels: "Stoichiometry" and "Population". In both submodels, a snail population is split up into twelve age groups within one year of age. The first submodel is used to simulate the metabolism of a single snail in each age group via the stoichiometric equation; the second submodel is used to optimize the age structure and the size of the snail population. Daily intake of snail meat by crewmen is a guideline which specifies the population productivity. The mass exchange of the snail unit inhabited by land snails of Achatina fulica is given as an outcome of step-by-step modeling. All simulations are performed using Solver Add-In of Excel 2007.
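
    The "Population" sub-model, sizing the age groups against a daily meat-intake target, can be sketched as a small linear program (the paper itself, as noted above and in the duplicate record 5 below, uses Excel's Solver). The yields, the demand figure, and the monotone age-structure constraint are illustrative assumptions, not the paper's stoichiometric data:

      import numpy as np
      from scipy.optimize import linprog

      n = 12                                  # twelve age groups within one year
      meat_yield = np.linspace(0.1, 1.2, n)   # g meat/day per snail (hypothetical)
      demand = 80.0                           # g/day for the crew (hypothetical)

      # Demand: total yield >= demand, written as -yield @ x <= -demand.
      A_ub, b_ub = [-meat_yield], [-demand]
      # Stable age structure: each older group no larger than the preceding one.
      for i in range(n - 1):
          row = np.zeros(n)
          row[i], row[i + 1] = -1.0, 1.0      # x[i+1] - x[i] <= 0
          A_ub.append(row)
          b_ub.append(0.0)

      res = linprog(np.ones(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub))
      print(res.x.round(1), "total population:", round(res.fun, 1))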

  5. Modeling snail breeding in a bioregenerative life support system. (United States)

    Kovalev, V S; Manukovsky, N S; Tikhomirov, A A; Kolmakova, A A


    The discrete-time model of snail breeding consists of two sequentially linked submodels: "Stoichiometry" and "Population". In both submodels, a snail population is split up into twelve age groups within one year of age. The first submodel is used to simulate the metabolism of a single snail in each age group via the stoichiometric equation; the second submodel is used to optimize the age structure and the size of the snail population. Daily intake of snail meat by crewmen is a guideline which specifies the population productivity. The mass exchange of the snail unit inhabited by land snails of Achatina fulica is given as an outcome of step-by-step modeling. All simulations are performed using Solver Add-In of Excel 2007. Copyright © 2015 The Committee on Space Research (COSPAR). Published by Elsevier Ltd. All rights reserved.

  6. Model-supported selection of distribution coefficients for performance assessment

    International Nuclear Information System (INIS)

    Ochs, M.; Lothenbach, B.; Shibata, Hirokazu; Yui, Mikazu


    A thermodynamic speciation/sorption model is used to illustrate typical problems encountered in the extrapolation of batch-type K_d values to repository conditions. For different bentonite-groundwater systems, the composition of the corresponding equilibrium solutions and the surface speciation of the bentonite are calculated by treating simultaneously the solution equilibria of soluble components of the bentonite as well as ion exchange and acid/base reactions at the bentonite surface. K_d values for Cs, Ra, and Ni are calculated by implementing the appropriate ion exchange and surface complexation equilibria in the bentonite model. Based on this approach, hypothetical batch experiments are contrasted with expected conditions in compacted backfill. For each of these scenarios, the variation of K_d values as a function of groundwater composition is illustrated for Cs, Ra, and Ni. The applicability of measured, batch-type K_d values to repository conditions is discussed. (author)
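
    For reference, the distribution coefficient being extrapolated is defined as the ratio of the sorbed to the dissolved concentration at equilibrium,

      \[
      K_d = \frac{C_{\mathrm{sorbed}}}{C_{\mathrm{solution}}}
      \qquad [\mathrm{L\,kg^{-1}}],
      \]

    so the model's task is to predict how the sorbed term, computed from the coupled ion-exchange and surface-complexation equilibria, shifts when the groundwater composition in compacted backfill differs from that of the batch experiment.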

  7. Modeling information flows in clinical decision support: key insights for enhancing system effectiveness

    NARCIS (Netherlands)

    Medlock, Stephanie; Wyatt, Jeremy C.; Patel, Vimla L.; Shortliffe, Edward H.; Abu-Hanna, Ameen


    A fundamental challenge in the field of clinical decision support is to determine what characteristics of systems make them effective in supporting particular types of clinical decisions. However, we lack such a theory of decision support itself and a model to describe clinical decisions and the

  8. Control and modeling of a CELSS (Controlled Ecological Life Support System) (United States)

    Auslander, D. M.; Spear, R. C.; Babcock, P. S.; Nadel, M.


    Research topics that arise from the conceptualization of control for closed life support systems (life support systems in which all or most of the mass is recycled) are discussed. Modeling and control of uncertain and poorly defined systems, resource allocation in closed life support systems, and control structures for systems with delay and closure are emphasized.





    In the contemporary environment characterized by the dynamic structure of factors and the unpredictability of the relations existing between them, the central problem is the selection of strategic goals. Forecasting is the necessary precursor to the planning process and includes research into the future course of events. Numerous methods and techniques of forecasting are used nowadays. Econometric models can be used successfully for predicting the future development of a phenomenon, and there...

  10. AskIT Service Desk Support Value Model

    Energy Technology Data Exchange (ETDEWEB)

    Ashcraft, Phillip Lynn [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cummings, Susan M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fogle, Blythe G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Valdez, Christopher D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    The value model discussed herein provides an accurate and simple calculation of the funding required to adequately staff the AskIT Service Desk (SD). The model is incremental – only technical labor cost is considered. All other costs, such as management, equipment, buildings, HVAC, and training, are considered common elements of providing any labor-related IT service. Depending on the amount of productivity loss and the number of hours a defect remains unresolved, the value of resolving work at the SD is unquestionably an economic winner; the average cost of $16 per SD resolution can commonly translate to cost avoidance well exceeding $100. Attempting to extract too much from the SD will likely create a significant downside. The analysis used to develop the value model indicates that the utilization of the SD is very high (approximately 90%). As a benchmark, consider a comment from a manager at Vitalyst (a commercial IT service desk) that their utilization target is approximately 60%. While high SD utilization is impressive, over the long term it is likely to cause unwanted consequences for staff such as higher turnover, illness, or burnout. A better solution is to staff the SD so that analysts have time to improve skills through training, develop knowledge, improve processes, collaborate with peers, and improve customer relationship skills.
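
    The economics quoted above can be made concrete with a small back-of-the-envelope calculation. The $16 per-resolution cost is from the report; the hourly productivity loss and hours-unresolved figures below are assumptions for illustration only:

      cost_per_resolution = 16.0    # USD per SD resolution (from the value model)
      hourly_loss = 45.0            # USD/h of lost productivity (hypothetical)
      hours_unresolved = 3.0        # defect lifetime without the SD (hypothetical)

      avoided = hourly_loss * hours_unresolved      # 135 USD avoided
      net_value = avoided - cost_per_resolution     # 119 USD net per resolution
      print(f"avoided ${avoided:.0f}, net value ${net_value:.0f}")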

  11. An optimal decision making model for supporting week hospital management. (United States)

    Conforti, Domenico; Guerriero, Francesca; Guido, Rosita; Cerinic, Marco Matucci; Conforti, Maria Letizia


    Week Hospital is an innovative inpatient health care organization and management model, by which hospital stay services are planned in advance and delivered on a week-time basis to elective patients. In this context, a strategic decision is the optimal clinical management of patients and, in particular, devising efficient and effective admission and scheduling procedures that tackle different requirements such as bed availability, diagnostic resources, and treatment capabilities. The main aim is to maximize patient flow while ensuring the delivery of all clinical services during the week. In this paper, the optimal management of Week Hospital patients is considered. We have developed and validated an innovative integer programming model, based on clinical resource allocation and bed utilization. In particular, the model aims at scheduling Week Hospital patients' admission/discharge, possibly reducing the length of stay on the basis of an available timetable of clinical services. The performance of the model has been evaluated, in terms of efficiency and robustness, on real data coming from a Week Hospital Rheumatology Division. The experimental results have been satisfactory and demonstrate the effectiveness of the proposed approach.
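
    The flavor of such an admission model can be conveyed with a toy integer program, sketched here with the PuLP library under purely illustrative data (four patients, two beds, a five-day week); the paper's actual model also covers clinical service timetables and discharge scheduling:

      import pulp

      days = range(5)                                  # Monday..Friday
      stay = {"p1": 2, "p2": 3, "p3": 1, "p4": 4}      # lengths of stay (days)
      beds = 2

      prob = pulp.LpProblem("week_hospital", pulp.LpMaximize)
      # x[p, d] = 1 if patient p is admitted on day d.
      x = pulp.LpVariable.dicts("x", [(p, d) for p in stay for d in days],
                                cat="Binary")

      for p, los in stay.items():
          prob += pulp.lpSum(x[p, d] for d in days) <= 1   # at most one admission
          for d in days:
              if d + los > len(days):                      # stay must end by Friday
                  prob += x[p, d] == 0

      for d in days:                                       # bed capacity per day
          prob += pulp.lpSum(x[p, a] for p, los in stay.items()
                             for a in days if a <= d < a + los) <= beds

      prob += pulp.lpSum(x.values())                       # maximize admissions
      prob.solve(pulp.PULP_CBC_CMD(msg=False))
      print([(p, d) for (p, d), v in x.items() if v.value() == 1])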

  12. Heterogeneous scalable framework for multiphase flows

    Energy Technology Data Exchange (ETDEWEB)

    Morris, Karla Vanessa [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    Two categories of challenges confront the developer of computational spray models: those related to the computation and those related to the physics. Regarding the computation, the trend towards heterogeneous, multi- and many-core platforms will require considerable re-engineering of codes written for current supercomputing platforms. Regarding the physics, accurate methods for transferring mass, momentum and energy from the dispersed phase onto the carrier fluid grid have so far eluded modelers. Significant challenges also lie at the intersection between these two categories. To be competitive, any physics model must be expressible in a parallel algorithm that performs well on evolving computer platforms. This work created an application based on a software architecture in which the physics and software concerns are separated in a way that adds flexibility to both. The developed spray-tracking package includes an application programming interface (API) that abstracts away the platform-dependent parallelization concerns, enabling the scientific programmer to write serial code that the API resolves into parallel processes and threads of execution. The project also developed the infrastructure required to provide similar APIs to other applications. The API allows object-oriented Fortran applications to interact directly with Trilinos to support memory management of distributed objects on central processing unit (CPU) and graphics processing unit (GPU) nodes for applications using C++.

  13. Oracle database performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H


    A data-driven, fact-based, quantitative text on Oracle performance and scalability With database concepts and theories clearly explained in Oracle's context, readers quickly learn how to fully leverage Oracle's performance and scalability capabilities at every stage of designing and developing an Oracle-based enterprise application. The book is based on the author's more than ten years of experience working with Oracle, and is filled with dependable, tested, and proven performance optimization techniques. Oracle Database Performance and Scalability is divided into four parts that enable reader

  14. Systematic flood modelling to support flood-proof urban design (United States)

    Bruwier, Martin; Mustafa, Ahmed; Aliaga, Daniel; Archambeau, Pierre; Erpicum, Sébastien; Nishida, Gen; Zhang, Xiaowei; Pirotton, Michel; Teller, Jacques; Dewals, Benjamin


    Urban flood risk is influenced by many factors such as hydro-meteorological drivers, existing drainage systems, and the vulnerability of population and assets. The urban fabric itself also has a complex influence on inundation flows. In this research, we performed a systematic analysis of how various characteristics of urban patterns control inundation flow within the urban area and upstream of it. An urban generator tool was used to generate over 2,250 synthetic urban networks of 1 km2. This tool is based on the procedural modelling presented by Parish and Müller (2001), which was adapted to generate a broader variety of urban networks. Nine input parameters were used to control the urban geometry. Three of them define the average length, orientation and curvature of the streets. Two orthogonal major roads, for which the width constitutes the fourth input parameter, work as constraints to generate the urban network. The width of secondary streets is given by the fifth input parameter. Each parcel generated by the street network, based on a parcel mean area parameter, can be either a park or a building parcel depending on the park ratio parameter. Three setback parameters constrain the exact location of the building within a building parcel. For each synthetic urban network, detailed two-dimensional inundation maps were computed with a hydraulic model. The computational efficiency was enhanced by means of a porosity model. This enables the use of a coarser computational grid, while preserving information on the detailed geometry of the urban network (Sanders et al. 2008). These porosity parameters reflect not only the void fraction, which influences the storage capacity of the urban area, but also the influence of buildings on flow conveyance (dynamic effects). A sensitivity analysis was performed based on the inundation maps to highlight the respective impact of each input parameter characterizing the urban networks. The findings of the study pinpoint

  15. A scalable neuroinformatics data flow for electrophysiological signals using MapReduce (United States)

    Jayapandian, Catherine; Wei, Annan; Ramesh, Priya; Zonjy, Bilal; Lhatoo, Samden D.; Loparo, Kenneth; Zhang, Guo-Qiang; Sahoo, Satya S.


    Data-driven neuroscience research is providing new insights into the progression of neurological disorders and supporting the development of improved treatment approaches. However, the volume, velocity, and variety of neuroscience data generated from sophisticated recording instruments and acquisition methods have exacerbated the limited scalability of existing neuroinformatics tools. This makes it difficult for neuroscience researchers to effectively leverage the growing multi-modal neuroscience data to advance research into serious neurological disorders, such as epilepsy. We describe the development of the Cloudwave data flow, which uses new data partitioning techniques to store and analyze electrophysiological signals in a distributed computing infrastructure. The Cloudwave data flow uses the MapReduce parallel programming algorithm to implement an integrated signal data processing pipeline that scales with the large volume of data generated at high velocity. Using an epilepsy domain ontology together with an epilepsy-focused extensible data representation format called Cloudwave Signal Format (CSF), the data flow addresses the challenge of data heterogeneity and is interoperable with existing neuroinformatics data representation formats, such as HDF5. The scalability of the Cloudwave data flow is evaluated using a 30-node cluster installed with the open source Hadoop software stack. The results demonstrate that the Cloudwave data flow can process increasing volumes of signal data by leveraging Hadoop Data Nodes to reduce the total data processing time. The Cloudwave data flow is a template for developing highly scalable neuroscience data processing pipelines using MapReduce algorithms to support a variety of user applications. PMID:25852536
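
    The partition-then-aggregate pattern at the heart of such a pipeline can be shown with a toy map/reduce pass in plain Python (the real data flow runs on Hadoop against CSF-formatted signal files, which this sketch does not model):

      from functools import reduce
      import numpy as np

      signal = np.random.randn(100_000)          # stand-in for one EEG channel
      partitions = np.array_split(signal, 10)    # Hadoop would distribute these

      def mapper(chunk):
          # Emit per-partition partial statistics: (count, sum, sum of squares).
          return len(chunk), chunk.sum(), (chunk ** 2).sum()

      def reducer(a, b):
          return a[0] + b[0], a[1] + b[1], a[2] + b[2]

      n, s, ss = reduce(reducer, map(mapper, partitions))
      mean, var = s / n, ss / n - (s / n) ** 2   # channel-wide statistics
      print(mean, var)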

  16. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.

    Directory of Open Access Journals (Sweden)

    Giovanni Delussu

    This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.
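
    The "common driver interface" idea (the record is repeated as item 17 below) can be sketched as an abstract base class with one concrete driver per back end. Method names here are illustrative, not PyEHR's actual API:

      from abc import ABC, abstractmethod

      class DriverInterface(ABC):
          @abstractmethod
          def connect(self): ...
          @abstractmethod
          def add_record(self, record: dict) -> str: ...
          @abstractmethod
          def execute_query(self, query: dict) -> list: ...

      class MongoDriver(DriverInterface):
          def connect(self):
              from pymongo import MongoClient      # requires a running MongoDB
              self._coll = MongoClient()["ehr"]["records"]
          def add_record(self, record):
              return str(self._coll.insert_one(record).inserted_id)
          def execute_query(self, query):
              return list(self._coll.find(query))

    An Elasticsearch driver would implement the same three methods against its own client, leaving the layers above unchanged.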

  17. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data. (United States)

    Delussu, Giovanni; Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi


    This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.

  18. Design and Implementation of a Scalable Membership Service for Supercomputer Resiliency-Aware Runtime

    Energy Technology Data Exchange (ETDEWEB)

    Tock, Yoav [IBM Corporation, Haifa Research Center; Mandler, Benjamin [IBM Corporation, Haifa Research Center; Moreira, Jose [IBM T. J. Watson Research Center; Jones, Terry R [ORNL


    As HPC systems and applications get bigger and more complex, we are approaching an era in which resiliency and run-time elasticity concerns become paramount. We offer a building block for an alternative resiliency approach in which computations will be able to make progress while components fail, in addition to enabling a dynamic set of nodes throughout a computation's lifetime. The core of our solution is a hierarchical, scalable membership service providing eventual consistency semantics. An attribute replication service is used for hierarchy organization, and is exposed to external applications. Our solution is based on P2P technologies and provides resiliency and elastic runtime support at ultra-large scales. The resulting middleware is general purpose while exploiting unique features of the HPC platform and architecture. We have implemented and tested this system on BlueGene/P with Linux and, using worst-case analysis, evaluated the service as scaling effectively to up to 1M nodes.

  19. Supporting Current Energy Conversion Projects through Numerical Modeling (United States)

    James, S. C.; Roberts, J.


    The primary goals of current energy conversion (CEC) technology being developed today are to optimize energy output and minimize environmental impact. CEC turbines generate energy from tidal and current systems and create wakes that interact with turbines located downstream of a device. The placement of devices can greatly influence power generation and structural reliability. CECs can also alter the environment surrounding the turbines, such as flow regimes, sediment dynamics, and water quality. These alterations pose potential stressors to numerous environmental receptors. Software is needed to investigate specific CEC sites to simulate power generation and the hydrodynamic response of a flow through a CEC turbine array so that these potential impacts can be evaluated. Moreover, this software can be used to optimize array layouts that yield the fewest changes to the environment (i.e., hydrodynamics, sediment dynamics, and water quality). Through model calibration exercises, simulated wake profiles and turbulence intensities compare favorably to the experimental data and demonstrate the utility and accuracy of a fast-running tool for future siting and analysis of CEC arrays in complex domains. The Delft3D modeling tool facilitates siting of CEC projects through optimization of array layouts and evaluation of potential environmental effects, all while providing a common "language" for academics, industry, and regulators to discuss the implications of marine renewable energy projects. Given the enormity of any full-scale marine renewable energy project, it necessarily falls to modeling to evaluate how array operations must be addressed in an environmental impact statement in a way that engenders confidence in the assessment of the CEC array's ability to minimize environmental effects.

  20. Computer-Supported Modelling of Multi modal Transportation Networks Rationalization

    Directory of Open Access Journals (Sweden)

    Ratko Zelenika


    This paper deals with issues of shaping and functioning of computer programs in the modelling and solving of multimodal transportation network problems. A methodology for the integrated use of a programming language for mathematical modelling is defined, as well as spreadsheets for the solving of complex multimodal transportation network problems. The paper contains a comparison of the partial and integral methods of solving multimodal transportation networks. The basic hypothesis set forth in this paper is that the integral method results in better multimodal transportation network rationalization effects, whereas a multimodal transportation network model based on the integral method, once built, can be used as the basis for all kinds of transportation problems within multimodal transport. As opposed to linear transport problems, a multimodal transport network can assume very complex shapes. This paper contains a comparison of the partial and integral approach to transportation network solving. In the partial approach, a straightforward model of a transportation network, which can be solved through the use of the Solver computer tool within the Excel spreadsheet interface, is quite sufficient. In solving a multimodal transportation problem through the integral method, it is necessary to apply sophisticated mathematical modelling programming languages which support the use of complex matrix functions and the processing of a vast number of variables and limitations. The LINGO programming language is more abstract than the Excel spreadsheet, and it requires certain programming knowledge. The definition and presentation of a problem logic within Excel, in a manner which is acceptable to computer software, is an ideal basis for modelling in the LINGO programming language, as well as a faster and more effective implementation of the mathematical model. This paper provides proof for the fact that it is more rational to solve the problem of multimodal transportation networks by
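
    The partial approach mentioned above is easy to reproduce outside a spreadsheet. The following toy single-mode transportation problem (illustrative costs, supplies, and demands) is the kind of model Excel's Solver handles directly; the integral multimodal model adds mode-change arcs and far more variables, which is where a modelling language such as LINGO earns its keep:

      import numpy as np
      from scipy.optimize import linprog

      cost = np.array([[4, 6], [5, 3]])     # unit costs, 2 sources x 2 sinks
      supply, demand = [30, 40], [35, 35]

      A_eq, b_eq = [], []
      for i in range(2):                    # each source ships all its supply
          row = np.zeros(4); row[2*i:2*i+2] = 1
          A_eq.append(row); b_eq.append(supply[i])
      for j in range(2):                    # each sink receives its demand
          row = np.zeros(4); row[j::2] = 1
          A_eq.append(row); b_eq.append(demand[j])

      res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
      print(res.x.reshape(2, 2), res.fun)   # optimal flows and total cost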

  1. A human performance modelling approach to intelligent decision support systems (United States)

    Mccoy, Michael S.; Boys, Randy M.


    Manned space operations require that the many automated subsystems of a space platform be controllable by a limited number of personnel. To minimize the interaction required of these operators, artificial intelligence techniques may be applied to embed a human performance model within the automated, or semi-automated, systems, thereby allowing the derivation of operator intent. A similar application has previously been proposed in the domain of fighter piloting, where the demand for pilot intent derivation is primarily a function of limited time and high workload rather than limited operators. The derivation and propagation of pilot intent is presented as it might be applied to some programs.

  2. Software Support of Modelling using Ergonomic Tools in Engineering

    Directory of Open Access Journals (Sweden)

    Darina Dupláková


    One of the preconditions for the correct development of industrial production is the continuous interconnection of virtual reality and the real world by computer software. Computer software is used for product modelling, creation of technical documentation, scheduling, management and optimization of manufacturing processes, and for increasing the efficiency of human work in manufacturing plants. This article describes frequently used ergonomic software which helps to improve human work by reducing error rates, risk factors in the working environment, and workplace injuries, and by eliminating emerging occupational diseases. These tools fall within the field of micro-ergonomics and are applicable at the manufacturing level, with a flexible approach to solving the stated problems.

  3. Modelling of thermal stress in vapor generator supports

    International Nuclear Information System (INIS)

    Halpert, S.; Vazquez, L.


    To assure the safety and availability of a nuclear power plant, stress analyses of components and equipment are performed. When thermal loads are involved, it is necessary to know the temperature field of the component or equipment. This paper describes the structural analysis of a steam generator lug under thermal load, including the model used for computer simulation, and presents the evolution of the temperature profile, the stress intensity and the principal stress during start-up and shutdown of a nuclear power reactor. The temperature field obtained from the code calculation shows good agreement with the experimental data, while the stress analysis results are in agreement with a previous estimation. (author)

  4. Building Scalable Knowledge Graphs for Earth Science (United States)

    Ramachandran, R.; Maskey, M.; Gatlin, P. N.; Zhang, J.; Duan, X.; Bugbee, K.; Christopher, S. A.; Miller, J. J.


    Estimates indicate that the world's information will grow by 800% in the next five years. In any given field, a single researcher or a team of researchers cannot keep up with this rate of knowledge expansion without the help of cognitive systems. Cognitive computing, defined as the use of information technology to augment human cognition, can help tackle large systemic problems. Knowledge graphs, one of the foundational components of cognitive systems, link key entities in a specific domain with other entities via relationships. Researchers can mine these graphs to make probabilistic recommendations and to infer new knowledge. At this point, however, there is a dearth of tools to generate scalable knowledge graphs from the existing corpus of scientific literature for Earth science research. Our project is currently developing an end-to-end automated methodology for incrementally constructing knowledge graphs for Earth science. Semantic Entity Recognition (SER) is one of the key steps in this methodology. SER for Earth science uses external resources (including metadata catalogs and controlled vocabularies) as references to guide entity extraction and recognition (i.e., labeling) from unstructured text, in order to build a large training set to seed the subsequent auto-learning component in our algorithm. Results from several SER experiments will be presented, as well as lessons learned.
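
    The vocabulary-guided labeling step of SER can be illustrated in a few lines of Python; the term list below is hypothetical, standing in for the curated metadata catalogs and controlled vocabularies the project actually uses:

      import re

      VOCAB = {                                # hypothetical controlled vocabulary
          "sea surface temperature": "Variable",
          "aerosol optical depth":   "Variable",
          "MODIS":                   "Instrument",
      }

      def label_entities(text):
          """Return (start, end, type) spans for every vocabulary match."""
          spans = []
          for term, etype in VOCAB.items():
              for m in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
                  spans.append((m.start(), m.end(), etype))
          return sorted(spans)

      print(label_entities("MODIS retrievals of aerosol optical depth over ..."))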

  5. Scalable Notch Antenna System for Multiport Applications

    Directory of Open Access Journals (Sweden)

    Abdurrahim Toktas


    A novel and compact scalable antenna system is designed for multiport applications. The basic design is built on a square patch with an electrical size of 0.82λ0 × 0.82λ0 (at 2.4 GHz) on a dielectric substrate. The design consists of four symmetrical and orthogonal triangular notches with circular feeding slots at the corners of the common patch. The 4-port antenna can be simply rearranged into 8-port and 12-port systems. The operating band of the system can be tuned by scaling (S) the size of the system while fixing the thickness of the substrate. The antenna system with S: 1/1, with a size of 103.5×103.5 mm2, operates in the frequency band of 2.3–3.0 GHz. By scaling the antenna with S: 1/2.3, a system of 45×45 mm2 is achieved, and thus the operating band is tuned to 4.7–6.1 GHz with the same scattering characteristics. A parametric study is also conducted to investigate the effects of changing the notch dimensions. The performance of the antenna is verified in terms of the antenna characteristics as well as diversity and multiplexing parameters. The antenna system can be tuned by scaling so that it is applicable to multiport WLAN, WiMAX, and LTE devices with port upgradability.
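
    The tuning-by-scaling behaviour follows the usual electrical-size argument: shrinking all planar dimensions by a factor S shifts the band up by roughly 1/S,

      \[
      f_{\text{scaled}} \approx \frac{f_{\text{original}}}{S},
      \qquad S = \tfrac{1}{2.3}:\quad
      (2.3\text{--}3.0)\,\text{GHz} \times 2.3 \approx 5.3\text{--}6.9\,\text{GHz},
      \]

    somewhat above the reported 4.7–6.1 GHz band, which is consistent with the substrate thickness being held fixed rather than scaled with the patch.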

  6. Scalability and interoperability within glideinWMS

    International Nuclear Information System (INIS)

    Bradley, D.; Sfiligoi, I.; Padhi, S.; Frey, J.; Tannenbaum, T.


    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.

  7. Scalable conditional induction variables (CIV) analysis

    KAUST Repository

    Oancea, Cosmin E.


    Subscripts using induction variables that cannot be expressed as a formula in terms of the enclosing-loop indices appear in the low-level implementation of common programming abstractions such as filter or stack operations, and pose significant challenges to automatic parallelization. Because the complexity of such induction variables is often due to their conditional evaluation across the iteration space of loops, we name them Conditional Induction Variables (CIV). This paper presents a flow-sensitive technique that summarizes both such CIV-based and affine subscripts to program level, using the same representation. Our technique requires no modification of our dependence tests, which are agnostic to the original shape of the subscripts, and is more powerful than previously reported dependence tests that rely on the pairwise disambiguation of read-write references. We have implemented the CIV analysis in our parallelizing compiler and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in.
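
    A minimal example of the pattern the analysis targets: the write index below is a conditional induction variable, advanced only on some iterations, so the subscript cannot be expressed as an affine function of the loop index (sketched in Python for brevity; the paper's benchmarks are Fortran):

      def compact_positive(src, dst):
          """Copy positive elements of src into a dense prefix of dst."""
          k = 0                       # CIV: incremented only when the branch fires
          for i in range(len(src)):
              if src[i] > 0:
                  dst[k] = src[i]     # dst subscript depends on the CIV, not on i
                  k += 1
          return k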

  8. A Programmable, Scalable-Throughput Interleaver

    Directory of Open Access Journals (Sweden)

    Rijshouwer EJC


    The interleaver stages of digital communication standards show a surprisingly large variation in throughput, state sizes, and permutation functions. Furthermore, data rates for 4G standards such as LTE-Advanced will exceed typical baseband clock frequencies of handheld devices. Multistream operation for Software Defined Radio and iterative decoding algorithms will call for ever higher interleave data rates. Our interleave machine is built around 8 single-port SRAM banks and can be programmed to generate up to 8 addresses every clock cycle. The scalable architecture combines SIMD and VLIW concepts with an efficient resolution of bank conflicts. A wide range of cellular, connectivity, and broadcast interleavers have been mapped onto this machine, with throughputs up to more than 0.5 Gsymbol/second. Although it was designed for channel interleaving, the application domain of the interleaver extends also to Turbo interleaving. The presented configuration of the architecture is designed as part of a programmable outer receiver on a prototype board. It offers (near) universal programmability to enable the implementation of new interleavers. The interleaver measures 2.09 mm² in 65 nm CMOS (including memories) and proves functional on silicon.
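
    The bank-conflict problem the architecture must resolve can be illustrated with a toy scheduler: addresses issued in one cycle that map to the same SRAM bank cannot be served together and must be deferred. The bank mapping and sizes below are illustrative, not the paper's hardware:

      def schedule(addresses, n_banks=8, per_cycle=8):
          """Greedily pack addresses into cycles, one access per bank per cycle."""
          cycles, pending = [], list(addresses)
          while pending:
              used, issued, deferred = set(), [], []
              for a in pending:
                  bank = a % n_banks
                  if bank not in used and len(issued) < per_cycle:
                      used.add(bank); issued.append(a)
                  else:
                      deferred.append(a)       # bank conflict: retry next cycle
              cycles.append(issued); pending = deferred
          return cycles

      print(schedule([0, 8, 1, 2, 16, 3, 9, 4]))   # 0, 8 and 16 all hit bank 0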


    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R.; Hunter, C.


    The United States Forest Service-Savannah River (USFS) routinely performs prescribed fires at the Savannah River Site (SRS), a Department of Energy (DOE) facility located in southwest South Carolina. This facility covers approximately 800 square kilometers and is mainly wooded, except for scattered industrial areas containing facilities used in managing nuclear materials for national defense and waste processing. Prescribed fires of forest undergrowth are necessary to reduce the risk of inadvertent wild fires, which have the potential to destroy large areas and threaten nuclear facility operations. This paper discusses meteorological observations and numerical model simulations from a period in early 2002 of an incident involving an early-morning multicar accident caused by poor visibility along a major roadway on the northern border of the SRS. At the time of the accident, it was not clear whether the limited visibility was due solely to fog or whether smoke from a prescribed burn conducted the previous day, just to the northwest of the crash site, had contributed to the reduced visibility. Through use of available meteorological information and detailed modeling, it was determined that the primary reason for the low visibility on this night was fog induced by meteorological conditions.

  10. Scalable Design of Paired CRISPR Guide RNAs for Genomic Deletion.

    Directory of Open Access Journals (Sweden)

    Carlos Pulido-Quetglas


    Full Text Available CRISPR-Cas9 technology can be used to engineer precise genomic deletions with pairs of single guide RNAs (sgRNAs. This approach has been widely adopted for diverse applications, from disease modelling of individual loci, to parallelized loss-of-function screens of thousands of regulatory elements. However, no solution has been presented for the unique bioinformatic design requirements of CRISPR deletion. We here present CRISPETa, a pipeline for flexible and scalable paired sgRNA design based on an empirical scoring model. Multiple sgRNA pairs are returned for each target, and any number of targets can be analyzed in parallel, making CRISPETa equally useful for focussed or high-throughput studies. Fast run-times are achieved using a pre-computed off-target database. sgRNA pair designs are output in a convenient format for visualisation and oligonucleotide ordering. We present pre-designed, high-coverage library designs for entire classes of protein-coding and non-coding elements in human, mouse, zebrafish, Drosophila melanogaster and Caenorhabditis elegans. In human cells, we reproducibly observe deletion efficiencies of ≥50% for CRISPETa designs targeting an enhancer and exonic fragment of the MALAT1 oncogene. In the latter case, deletion results in production of desired, truncated RNA. CRISPETa will be useful for researchers seeking to harness CRISPR for targeted genomic deletion, in a variety of model organisms, from single-target to high-throughput scales.
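
    The combinatorial core of paired design can be sketched as follows (a toy version with an invented placeholder sequence; CRISPETa's actual pipeline adds its empirical scoring model and a pre-computed off-target database): find NGG-adjacent 20-mers flanking the target interval and enumerate candidate pairs.

        import re

        def find_guides(seq):
            """Return (position, protospacer) for every 20-mer followed by an NGG PAM."""
            return [(m.start(), seq[m.start():m.start() + 20])
                    for m in re.finditer(r'(?=[ACGT]{20}[ACGT]GG)', seq)]

        def pair_guides(seq, del_start, del_end):
            guides = find_guides(seq)
            upstream = [g for g in guides if g[0] + 23 <= del_start]   # guide+PAM before the deletion
            downstream = [g for g in guides if g[0] >= del_end]        # guide after the deletion
            return [(u, d) for u in upstream for d in downstream]

        seq = "ACGTTGACAGG" * 15                      # placeholder target region
        pairs = pair_guides(seq, del_start=60, del_end=100)
        print(len(pairs), pairs[0])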

  11. Modelling of the costs of decision support for small and medium-sized enterprises

    Directory of Open Access Journals (Sweden)

    Viera Tomišová


    Full Text Available The support of decision-making activities in small and medium-sized enterprises (SMEs) has its specific features. When suggesting steps for the implementation of decision-support tools in an enterprise, we identified two main ways of supporting decision-making based on data analysis: ERP (Enterprise Resource Planning) without BI (Business Intelligence), and ERP with BI. In our contribution, we present cost models for both of these decision support systems and their practical interpretation.

  12. Scalable nanohelices for predictive studies and enhanced 3D visualization. (United States)

    Meagher, Kwyn A; Doblack, Benjamin N; Ramirez, Mercedes; Davila, Lilian P


    Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO₂) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization and interaction for a general user with the atomistic helical structures. One application of these methods is the recent study of nanohelices via MD simulations for
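
    The parametric carving step can be sketched as follows (an assumed Python/NumPy reconstruction of the idea, not the authors' AWK or C++ code): keep only those atoms of the bulk model that lie within a tube of radius rt around the helix x = R cos t, y = R sin t, z = P t / 2π.

        import numpy as np
        from scipy.spatial import cKDTree

        def carve_nanospring(atoms, R=30.0, P=40.0, rt=8.0, turns=3):
            """atoms: (N, 3) array of bulk coordinates; returns the helical subset."""
            t = np.linspace(0.0, 2.0 * np.pi * turns, 2000)
            helix = np.column_stack([R * np.cos(t), R * np.sin(t), P * t / (2.0 * np.pi)])
            dist, _ = cKDTree(helix).query(atoms)   # distance to nearest helix point
            return atoms[dist < rt]

        rng = np.random.default_rng(0)
        bulk = rng.uniform([-40, -40, 0], [40, 40, 130], size=(20000, 3))  # stand-in "bulk"
        print(carve_nanospring(bulk).shape)

    Varying R, P (pitch), rt and turns yields the range of ribbon and spring geometries described, independent of the material filling the tube.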

  13. Intelligent Model Management in a Forest Ecosystem Management Decision Support System (United States)

    Donald Nute; Walter D. Potter; Frederick Maier; Jin Wang; Mark Twery; H. Michael Rauscher; Peter Knopp; Scott Thomasma; Mayukh Dass; Hajime Uchiyama


    Decision making for forest ecosystem management can include the use of a wide variety of modeling tools. These tools include vegetation growth models, wildlife models, silvicultural models, GIS, and visualization tools. NED-2 is a robust, intelligent, goal-driven decision support system that integrates tools in each of these categories. NED-2 uses a blackboard...

  14. Implementing a Technology-Supported Model for Cross-Organisational Learning and Knowledge Building for Teachers (United States)

    Tammets, Kairit; Pata, Kai; Laanpere, Mart


    This study proposed using the elaborated learning and knowledge building model (LKB model) derived from Nonaka and Takeuchi's knowledge management model for supporting cross-organisational teacher development in the temporarily extended organisations composed of universities and schools. It investigated the main LKB model components in the context…

  15. Representative Model of the Learning Process in Virtual Spaces Supported by ICT (United States)

    Capacho, José


    This paper shows the results of research activities for building a representative model of the learning process in virtual spaces (e-Learning). The formal basis of the model is supported by the analysis of models of learning assessment in virtual spaces, and specifically by Dembo's teaching-learning model, the systemic approach to evaluating…

  16. Family Members Affected by a Close Relative's Addiction: The Stress-Strain-Coping-Support Model (United States)

    Orford, Jim; Copello, Alex; Velleman, Richard; Templeton, Lorna


    This article outlines the stress-strain-coping-support (SSCS) model which underpins the whole programme of work described in this supplement. The need for such a model is explained: previous models of substance misuse and the family have attributed dysfunction or deficiency to families or family members. In contrast, the SSCS model assumes that…

  17. Spatiotemporal Organization of Spin-Coated Supported Model Membranes (United States)

    Simonsen, Adam Cohen

    All cells of living organisms are separated from their surroundings and organized internally by means of flexible lipid membranes. In fact, there is consensus that the minimal requirements for self-replicating life processes include the following three features: (1) information carriers (DNA, RNA), (2) a metabolic system, and (3) encapsulation in a container structure [1]. Therefore, encapsulation can be regarded as an essential part of life itself. In nature, membranes are highly diverse interfacial structures that compartmentalize cells [2]. While prokaryotic cells only have an outer plasma membrane and a less-well-developed internal membrane structure, eukaryotic cells have a number of internal membranes associated with the organelles and the nucleus. Many of these membrane structures, including the plasma membrane, are complex layered systems, but with the basic structure of a lipid bilayer. Biomembranes contain hundreds of different lipid species in addition to embedded or peripherally associated membrane proteins and connections to scaffolds such as the cytoskeleton. In vitro, lipid bilayers are spontaneously self-organized structures formed by a large group of amphiphilic lipid molecules in aqueous suspensions. Bilayer formation is driven by the entropic properties of the hydrogen bond network in water in combination with the amphiphilic nature of the lipids. The molecular shapes of the lipid constituents play a crucial role in bilayer formation, and only lipids with approximately cylindrical shapes are able to form extended bilayers. The bilayer structure of biomembranes was discovered by Gorter and Grendel in 1925 [3] using monolayer studies of lipid extracts from red blood cells. Later, a number of conceptual models were developed to rationalize the organization of lipids and proteins in biological membranes. One of the most celebrated is the fluid-mosaic model by Singer and Nicolson (1972) [4]. According to this model, the lipid bilayer component of

  18. Pharmaceutical expenditure forecast model to support health policy decision making (United States)

    Rémuzat, Cécile; Urbinati, Duccio; Kornfeld, Åsa; Vataire, Anne-Lise; Cetinsoy, Laurent; Aballéa, Samuel; Mzoughi, Olfa; Toumi, Mondher


    Background and objective: With constant incentives for healthcare payers to contain their pharmaceutical budgets, modelling the impact of policy decisions has become critical. The objective of this project was to test the impact of various policy decisions on the pharmaceutical budget (developed for the European Commission for the project ‘European Union (EU) Pharmaceutical expenditure forecast’). Methods: A model was built to assess the impact of policy scenarios on the pharmaceutical budgets of seven member states of the EU, namely France, Germany, Greece, Hungary, Poland, Portugal, and the United Kingdom. The following scenarios were tested: expanding the UK policies to the EU, changing time to market access, modifying generic price and penetration, and shifting the distribution chain of biosimilars (retail/hospital). Results: Applying the UK policy resulted in dramatic savings for Germany (10 times the base case forecast) and substantial additional savings for France and Portugal (2 and 4 times the base case forecast, respectively). Delaying time to market was found to be a very powerful tool to reduce pharmaceutical expenditure. Applying the EU transparency directive (6-month process for pricing and reimbursement) increased pharmaceutical expenditure for all countries (from 1.1 to 4 times the base case forecast), except in Germany (additional savings). Decreasing the price of generics and boosting their penetration rate, as well as shifting distribution of biosimilars through the hospital chain, were also key methods of reducing pharmaceutical expenditure. Changing the reimbursement rate to 100% in all countries led to an important increase in the pharmaceutical budget. Conclusions: Forecasting pharmaceutical expenditure is a critical exercise to inform policy decision makers. The most important leverages on the pharmaceutical budget identified by the model were driven by generic and biosimilar prices and penetration rate
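
    As a schematic of how such scenario testing works (all figures and category names below are invented placeholders, not the project's data), each policy lever can be expressed as a transformation of a base-case budget forecast:

        base_budget = {"originators": 100.0, "generics": 30.0, "biosimilars": 10.0}

        def apply_scenario(budget, generic_price_cut=0.0, generic_uptake_gain=0.0):
            b = dict(budget)
            shifted = b["originators"] * generic_uptake_gain   # spend moved to generics
            b["originators"] -= shifted
            b["generics"] = (b["generics"] + shifted) * (1 - generic_price_cut)
            return b

        scenario = apply_scenario(base_budget, generic_price_cut=0.2, generic_uptake_gain=0.1)
        saving = sum(base_budget.values()) - sum(scenario.values())
        print(scenario, "saving:", round(saving, 1))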

  19. A short mnemonic to support the comprehensive geriatric assessment model. (United States)

    Han, Brenda; Grant, Cristin


    With an increasing number of older people using emergency services, researchers have raised concerns about the quality of care in an environment that is not designed to address older patients' specific needs and conditions. The comprehensive geriatric assessment (CGA) model was developed to address these issues, and to optimise healthcare delivery to older adults. This article introduces a complementary mnemonic, FRAIL, that refers to important elements of health information to consider before initiating care for older patients - falls/functional decline, reactions, altered mental status, illnesses, and living situation. It is not intended to replace the CGA, but can help to quickly identify high-risk older patients who warrant a more in-depth clinical assessment with CGA.

  20. LOKI: a practical modelling and support system for telepresence systems

    International Nuclear Information System (INIS)

    Griffin, M.; Bridgewater, C.E.


    The use of Virtual Reality headset systems, in combination with a telepresence "head", is discussed. The system is attached to a Unimate Puma robot arm and manipulated by the operator, using information gathered by the camera and auditory system, displayed via the Virtual Reality helmet. Operator commands are cross-checked against a modelling system held on the Virtual Reality system. This system was found to supply a good sense of spatial awareness of the robot's domain. Actions which might move the robot outside its safe operating envelope, or create a collision with the environment, were successfully blocked. This approach is seen as useful within the area of teleoperation. (author)
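
    The command-blocking idea can be illustrated with a minimal sketch (all geometry below is invented; the real system checks against a full model of the robot's domain): reject any commanded target that leaves the operating envelope or intersects a modelled obstacle before it reaches the arm.

        import math

        ARM_REACH = 0.9                          # metres; placeholder envelope radius
        OBSTACLES = [((0.4, 0.2, 0.3), 0.15)]    # (centre, radius) spheres in the world model

        def command_allowed(target):
            if math.dist((0.0, 0.0, 0.0), target) > ARM_REACH:
                return False                     # outside the operating envelope
            for centre, radius in OBSTACLES:
                if math.dist(centre, target) < radius:
                    return False                 # would collide with the environment
            return True

        print(command_allowed((0.5, 0.1, 0.2)))  # True: safe move
        print(command_allowed((1.2, 0.0, 0.0)))  # False: beyond reach, blocked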

  1. Experiences Supporting the Lunar Reconnaissance Orbiter Camera: the Devops Model (United States)

    Licht, A.; Estes, N. M.; Bowman-Cisneros, E.; Hanger, C. D.


    Introduction: The Lunar Reconnaissance Orbiter Camera (LROC) Science Operations Center (SOC) is responsible for instrument targeting, product processing, and archiving [1]. The LROC SOC maintains over 1,000,000 observations with over 300 TB of released data. Processing challenges compound with the acquisition of over 400 Gbits of observations daily, creating the need for a robust, efficient, and reliable suite of specialized software. Development Environment: The LROC SOC's software development methodology has evolved over time. Today, the development team operates in close cooperation with the systems administration team in a model known in the IT industry as DevOps. The DevOps model enables a highly productive development environment that facilitates accomplishment of key goals within tight schedules [2]. The LROC SOC DevOps model incorporates industry best practices including prototyping, continuous integration, unit testing, code coverage analysis, version control, and utilizing existing open source software. Scientists and researchers at LROC often prototype algorithms and scripts in a high-level language such as MATLAB or IDL. After the prototype is functionally complete, the solution is implemented as production-ready software by the developers. Following this process ensures that all controls and requirements set by the LROC SOC DevOps team are met. The LROC SOC also strives to enhance the efficiency of the operations staff by way of weekly presentations and informal mentoring. Many small scripting tasks are assigned to the cognizant operations personnel (end users), allowing the DevOps team to focus on more complex and mission-critical tasks. In addition to leveraging open source software, the LROC SOC has also contributed to the open source community by releasing Lunaserv [3]. Findings: The DevOps software model very efficiently provides smooth software releases and maintains team momentum. Having scientists prototype their own work has proven to be very efficient

  2. Developing a Model to Support Students in Solving Subtraction

    Directory of Open Access Journals (Sweden)

    Nila Mareta Murdiyani


    Full Text Available Subtraction has two meanings, and each meaning leads to different strategies. The meaning of “taking away something” suggests a direct subtraction, while the meaning of “determining the difference between two numbers” is more likely to be modeled as indirect addition. Much prior research found that the second meaning and second strategy rarely appeared in mathematical textbooks and teacher explanations, including in Indonesia. Therefore, this study was conducted to contribute to the development of a local instruction theory for subtraction by designing instructional activities that can facilitate first-grade primary school students in developing a model for solving two-digit subtraction. Consequently, design research was chosen as an appropriate approach for achieving the research aim, and Realistic Mathematics Education (RME) was used as a guide to design the lesson. This study involved 6 students in the pilot experiment, 31 students in the teaching experiment, and a first-grade teacher of SDN 179 Palembang. The result of this study shows that the beads string could bridge students from the contextual problems (taking ginger candies and making grains bracelets) to the use of the empty number line. It also shows that the empty number line could encourage students to use different strategies (direct subtraction, indirect addition, and indirect subtraction) in solving subtraction problems. Based on these findings, it is recommended to apply RME in the teaching-learning process to make it more meaningful for students. Keywords: Subtraction, Design Research, Realistic Mathematics Education, The Beads String, The Empty Number Line
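
    The two strategies the beads string and empty number line support can be written out directly (a small invented illustration of the arithmetic, not the study's materials):

        def direct_subtraction(minuend, subtrahend):
            """'Taking away': jump backwards, e.g. 62 - 58 -> 62 - 50 - 8."""
            pos = minuend
            pos -= subtrahend // 10 * 10   # big jumps of ten
            pos -= subtrahend % 10         # then the ones
            return pos

        def indirect_addition(minuend, subtrahend):
            """'Finding the difference': jump forwards from 58 to 62 (2 + 2)."""
            pos, jumps = subtrahend, 0
            while pos < minuend:
                step = min(minuend - pos, 10 - pos % 10)   # jump to the next ten
                pos += step
                jumps += step
            return jumps

        print(direct_subtraction(62, 58), indirect_addition(62, 58))  # 4 4

    For 62 - 58, the indirect route needs only two short forward jumps, which is why the "difference" meaning favours indirect addition.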

  3. Scalable Concurrency Control and Recovery for Shared Storage Arrays

    National Research Council Canada - National Science Library

    Amiri, Khalil


    Shared storage arrays enable thousands of storage devices to be shared and directly accessed by end hosts over switched system area networks, promising databases and file systems highly scalable, reliable storage...

  4. Scalable Partitioning Algorithms for FPGAs With Heterogeneous Resources

    National Research Council Canada - National Science Library

    Selvakkumaran, Navaratnasothie; Ranjan, Abhishek; Raje, Salil; Karypis, George


    As FPGA densities increase, partitioning-based FPGA placement approaches are becoming increasingly important as they can be used to provide high-quality and computationally scalable placement solutions...

  5. PSOM2—partitioning-based scalable ontology matching using ...

    Indian Academy of Sciences (India)

    B Sathiya


    -based systems to reduce the matching space. ... reduction in execution time, leading to an effective and scalable ontology matching system. Keywords. ... ontology matching results, collaborative and social ontology matching ...

  6. ARC Code TI: Block-GP: Scalable Gaussian Process Regression (United States)

    National Aeronautics and Space Administration — Block GP is a Gaussian Process regression framework for multimodal data, that can be an order of magnitude more scalable than existing state-of-the-art nonlinear...

  7. celerite: Scalable 1D Gaussian Processes in C++, Python, and Julia (United States)

    Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth


    celerite provides fast and scalable Gaussian Process (GP) Regression in one dimension and is implemented in C++, Python, and Julia. The celerite API is designed to be familiar to users of george and, like george, celerite is designed to efficiently evaluate the marginalized likelihood of a dataset under a GP model. This can then be used alongside a non-linear optimization or posterior inference library for the best results.
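
    A minimal usage sketch, based on the library's documented Python interface (the kernel and data here are arbitrary toys, and exact signatures may differ between versions):

        import numpy as np
        import celerite
        from celerite import terms

        rng = np.random.default_rng(1)
        t = np.sort(rng.uniform(0, 10, 200))          # irregular sampling times
        yerr = 0.1 * np.ones_like(t)
        y = np.sin(t) + yerr * rng.standard_normal(len(t))

        kernel = terms.Matern32Term(log_sigma=0.0, log_rho=0.0)
        gp = celerite.GP(kernel)
        gp.compute(t, yerr)                  # scalable factorization, linear in N
        print(gp.log_likelihood(y))          # marginalized likelihood under the GP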

  8. Streamed Sampling on Dynamic data as Support for Classification Model

    Directory of Open Access Journals (Sweden)

    Heru Sukoco


    Full Text Available Data mining on dynamically changing data poses several problems: the data size is unknown and the skew of the data is always changing. Random sampling is commonly applied to extract a general synopsis from a very large database. In this research, Vitter’s reservoir algorithm is used to retrieve k records of data from the database and place them in the sample. The sample is used as input for the classification task in data mining. The sample is a backing sample, saved as a table containing an id and a priority; the priority indicates the probability of how long the data is retained in the sample. Kullback-Leibler divergence is applied to measure the similarity between the population and sample distributions. The results of this research show that samples can be taken randomly and continuously as transactions occur. Kullback-Leibler divergence is a very good measure for maintaining a similar distribution between the population and the sample, with divergence values ranging from 0 to 0.0001. Samples remain up to date with new transactions and preserve similar skewness. For the classification task, the decision tree model improves significantly as the data changes.
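
    The two ingredients can be sketched in standard textbook form (Algorithm R is shown for brevity in place of Vitter's optimized reservoir algorithm; the data and bin counts are invented):

        import math
        import random

        def reservoir_sample(stream, k, seed=42):
            """Keep a uniform random sample of k records from a stream."""
            rng = random.Random(seed)
            sample = []
            for n, record in enumerate(stream, start=1):
                if n <= k:
                    sample.append(record)
                else:
                    j = rng.randrange(n)     # keep this record with probability k/n
                    if j < k:
                        sample[j] = record
            return sample

        def kl_divergence(p, q):
            """D_KL(P || Q) for discrete distributions on the same support."""
            return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

        sample = reservoir_sample(range(100000), k=1000)
        pop = [0.1] * 10                       # population decile histogram
        smp = [sum(1 for x in sample if x // 10000 == d) / len(sample) for d in range(10)]
        print(kl_divergence(smp, pop))         # near 0 for a faithful sample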

  9. Experiments and Modeling to Support Field Test Design

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Peter Jacob [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bourret, Suzanne Michelle [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Zyvoloski, George Anthony [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boukhalfa, Hakim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stauffer, Philip H. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Weaver, Douglas James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    Disposition of heat-generating nuclear waste (HGNW) remains a continuing technical and sociopolitical challenge. We define HGNW as the combination of both heat-generating defense high level waste (DHLW) and civilian spent nuclear fuel (SNF). Numerous concepts for HGNW management have been proposed and examined internationally, including an extensive focus on geologic disposal (cf. Brunnengräber et al., 2013). One type of proposed geologic material is salt, so chosen because of its viscoplastic deformation that causes self-repair of damage or deformation induced in the salt by waste emplacement activities (Hansen and Leigh, 2011). Salt as a repository material has been tested at several sites around the world, notably the Morsleben facility in Germany (cf. Fahland and Heusermann, 2013; Wollrath et al., 2014; Fahland et al., 2015) and at the Waste Isolation Pilot Plant (WIPP) near Carlsbad, NM. Evaluating the technical feasibility of a HGNW repository in salt is an ongoing process involving experiments and numerical modeling of many processes at many facilities.

  10. Nutritional models for a Controlled Ecological Life Support System (CELSS): Linear mathematical modeling (United States)

    Wade, Rose C.


    The NASA Controlled Ecological Life Support System (CELSS) Program is involved in developing a biogenerative life support system that will supply food, air, and water to space crews on long-duration missions. An important part of this effort is in development of the knowledge and technological capability of producing and processing foods to provide optimal diets for space crews. This involves such interrelated factors as determination of the diet, based on knowledge of nutrient needs of humans and adjustments in those needs that may be required as a result of the conditions of long-duration space flight; determination of the optimal mixture of crops required to provide nutrients at levels that are sufficient but not excessive or toxic; and consideration of the critical issues of spacecraft space and power limitations, which impose a phytomass minimization requirement. The complex interactions among these factors are examined with the goal of supplying a diet that will satisfy human needs while minimizing the total phytomass requirement. The approach taken was to collect plant nutritional composition and phytomass production data, identify human nutritional needs and estimate the adjustments to the nutrient requirements likely to result from space flight, and then to generate mathematical models from these data.
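
    The resulting optimization is a linear program. A hedged sketch of its shape (crop names and every number below are invented placeholders, not the study's data; uses SciPy): minimize total phytomass subject to meeting nutrient requirements.

        from scipy.optimize import linprog

        crops = ["wheat", "soybean", "potato"]
        # nutrient content per kg of biomass: [protein g, energy MJ, vitamin mg]
        content = [[120, 15.0, 0.2],    # wheat
                   [360, 18.0, 0.5],    # soybean
                   [20,   3.2, 0.4]]    # potato
        requirements = [90, 12.0, 1.5]  # daily crew needs, in matching units

        # minimize sum(x) subject to content^T x >= requirements, x >= 0
        res = linprog(c=[1.0, 1.0, 1.0],
                      A_ub=[[-content[j][i] for j in range(3)] for i in range(3)],
                      b_ub=[-r for r in requirements],
                      bounds=[(0, None)] * 3)
        print(dict(zip(crops, res.x.round(3))), "total kg:", round(res.fun, 3))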

  11. CX: A Scalable, Robust Network for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Peter Cappello


    Full Text Available CX, a network-based computational exchange, is presented. The system's design integrates variations of ideas from other researchers, such as work stealing, non-blocking tasks, eager scheduling, and space-based coordination. The object-oriented API is simple, compact, and cleanly separates application logic from the logic that supports interprocess communication and fault tolerance. Computations run to completion in the presence of computational hosts that join and leave the ongoing computation. Such hosts, or producers, use task caching and prefetching to overlap computation with interprocessor communication. To break a potential task server bottleneck, a network of task servers is presented. Even though task servers are envisioned as reliable, the self-organizing, scalable network of n servers, described as a sibling-connected height-balanced fat tree, tolerates a sequence of n-1 server failures. Tasks are distributed throughout the server network via a simple "diffusion" process. CX is intended as a test bed for research on automated silent auctions, reputation services, authentication services, and bonding services. CX also provides a test bed for algorithm research into network-based parallel computation.
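
    The "diffusion" distribution of tasks can be sketched as follows (an illustrative reconstruction with an invented topology and counts, not the CX source): each server repeatedly balances its queue with its neighbours in the server tree, so work spreads like heat.

        def diffuse(queues, edges, rounds=10):
            """queues: {server: task count}; edges: neighbouring server pairs."""
            for _ in range(rounds):
                for a, b in edges:
                    delta = (queues[a] - queues[b]) // 2   # move half the imbalance
                    queues[a] -= delta
                    queues[b] += delta
            return queues

        # a tiny server tree; all 600 tasks start at the root
        servers = {"root": 600, "s1": 0, "s2": 0, "s11": 0, "s12": 0}
        tree = [("root", "s1"), ("root", "s2"), ("s1", "s11"), ("s1", "s12")]
        print(diffuse(servers, tree))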

  12. A Scalable proxy cache for Grid Data Access (United States)

    Cristian Cirstea, Traian; Just Keijser, Jan; Koeroo, Oscar Arthur; Starink, Ronald; Templon, Jeffrey Alan


    We describe a prototype grid proxy cache system developed at Nikhef, motivated by a desire to construct the first building block of a future https-based Content Delivery Network for grid infrastructures. Two goals drove the project: firstly to provide a “native view” of the grid for desktop-type users, and secondly to improve performance for physics-analysis type use cases, where multiple passes are made over the same set of data (residing on the grid). We further constrained the design by requiring that the system should be made of standard components wherever possible. The prototype that emerged from this exercise is a horizontally-scalable, cooperating system of web server / cache nodes, fronted by a customized webDAV server. The webDAV server is custom only in the sense that it supports http redirects (providing horizontal scaling) and that the authentication module has, as back end, a proxy delegation chain that can be used by the cache nodes to retrieve files from the grid. The prototype was deployed at Nikhef and tested at a scale of several terabytes of data and approximately one hundred fast cores of computing. Both small and large files were tested, in a number of scenarios, and with various numbers of cache nodes, in order to understand the scaling properties of the system. For properly-dimensioned cache-node hardware, the system showed speedup of several integer factors for the analysis-type use cases. These results and others are presented and discussed.
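
    The redirect-based horizontal scaling can be sketched as follows (hostnames and the hash choice are invented for illustration; the real front end also performs the proxy-delegation authentication described above): the front end hashes the requested path to pick a cache node and answers with an HTTP redirect, so the client fetches the file from that node directly.

        import hashlib

        CACHE_NODES = ["http://cache1.example.org",
                       "http://cache2.example.org",
                       "http://cache3.example.org"]

        def redirect_for(path):
            """Map a grid file path to a cache node and build a 302 response."""
            h = int(hashlib.sha1(path.encode()).hexdigest(), 16)
            node = CACHE_NODES[h % len(CACHE_NODES)]   # same path -> same node
            return 302, {"Location": node + path}

        print(redirect_for("/store/data/run42/file.root"))

    Hashing on the path keeps each file resident in a single node's cache, which is what makes repeated analysis passes over the same dataset fast.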

  13. Scalable, ultra-resistant structural colors based on network metamaterials

    KAUST Repository

    Galinski, Henning


    Structural colors have drawn wide attention for their potential as a future printing technology for various applications, ranging from biomimetic tissues to adaptive camouflage materials. However, an efficient approach to realize robust colors with a scalable fabrication technique is still lacking, hampering the realization of practical applications with this platform. Here, we develop a new approach based on large-scale network metamaterials that combine dealloyed subwavelength structures at the nanoscale with lossless, ultra-thin dielectric coatings. By using theory and experiments, we show how subwavelength dielectric coatings control a mechanism of resonant light coupling with epsilon-near-zero regions generated in the metallic network, generating the formation of saturated structural colors that cover a wide portion of the spectrum. Ellipsometry measurements support the efficient observation of these colors, even at angles of 70°. The network-like architecture of these nanomaterials allows for high mechanical resistance, which is quantified in a series of nano-scratch tests. With such remarkable properties, these metastructures represent a robust design technology for real-world, large-scale commercial applications.

  14. A Scalable Architecture for VoIP Conferencing

    Directory of Open Access Journals (Sweden)

    R Venkatesha Prasad


    Full Text Available Real-time services are traditionally supported on circuit-switched networks; however, there is a need to port these services to packet-switched networks. An architecture for an audio conferencing application over the Internet, in light of the ITU-T H.323 recommendations, is considered. In a conference, considering packets only from a set of selected clients can reduce speech quality degradation, because mixing packets from all clients can lead to a lack of speech clarity. A distributed algorithm and architecture for selecting clients for mixing is suggested here, based on a new quantifier of voice activity called the "Loudness Number" (LN). The proposed system distributes the computation load and reduces the load on client terminals. The highlights of this architecture are scalability, bandwidth saving, and speech quality enhancement. Client selection for playing out tries to mimic a physical conference, where the most vocal participants attract more attention. The contributions of the paper are expected to aid implementations of the H.323 recommendations for Multipoint Processors (MP). A working prototype based on the proposed architecture is already functional.
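
    The selection idea can be sketched as follows (the smoothing used here is an invented stand-in; the LN itself is defined in the paper): keep a running loudness estimate per client and mix only the top few streams.

        def update_ln(ln, frame_energy, alpha=0.9):
            """Exponentially smoothed per-client loudness (placeholder for the LN)."""
            return alpha * ln + (1 - alpha) * frame_energy

        def select_speakers(loudness, k=3):
            """Pick the k most vocal clients; only their packets reach the mixer."""
            return sorted(loudness, key=loudness.get, reverse=True)[:k]

        loudness = {"alice": 0.0, "bob": 0.0, "carol": 0.0, "dave": 0.0}
        for name, energy in {"alice": 0.8, "bob": 0.1, "carol": 0.5, "dave": 0.05}.items():
            loudness[name] = update_ln(loudness[name], energy)
        print(select_speakers(loudness))   # ['alice', 'carol', 'bob']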

  15. Roles of University Support for International Students in the United States: Analysis of a Systematic Model of University Identification, University Support, and Psychological Well-Being (United States)

    Cho, Jaehee; Yu, Hongsik


    Unlike previous research on international students' social support, this current study applied the concept of organizational support to university contexts, examining the effects of university support. Mainly based on the social identity/self-categorization stress model, this study developed and tested a path model composed of four key…

  16. Scalable Quantum Networks for Distributed Computing and Sensing (United States)


    AFRL-AFOSR-UK-TR-2016-0007. Final report for "Scalable Quantum Networks for Distributed Computing and Sensing", Ian Walmsley, The University of Oxford; Project 12-2076, September 2012 through August 2015. Subject terms: EOARD, quantum information processing, quantum computation, photonics, quantum networks, quantum memory.

  17. SOL: A Library for Scalable Online Learning Algorithms


    Wu, Yue; Hoi, Steven C. H.; Liu, Chenghao; Lu, Jing; Sahoo, Doyen; Yu, Nenghai


    SOL is an open-source library for scalable online learning algorithms, and is particularly suitable for learning with high-dimensional data. The library provides a family of regular and sparse online learning algorithms for large-scale binary and multi-class classification tasks with high efficiency, scalability, portability, and extensibility. SOL was implemented in C++, and provided with a collection of easy-to-use command-line tools, python wrappers and library calls for users and develope...
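
    The family of algorithms SOL implements can be illustrated with a plain online logistic-regression update on sparse features (a generic sketch of online SGD, not SOL's C++ internals or command-line interface):

        import math

        def predict(w, x):
            """x is a sparse feature dict {index: value}; w a dense weight list."""
            z = sum(w[i] * v for i, v in x.items())
            return 1.0 / (1.0 + math.exp(-z))

        def online_update(w, x, y, lr=0.1):
            """One SGD step on the logistic loss for a label y in {0, 1}."""
            g = predict(w, x) - y
            for i, v in x.items():
                w[i] -= lr * g * v        # touch only the non-zero features

        w = [0.0] * 4
        stream = [({0: 1.0, 2: 1.0}, 1), ({1: 1.0, 3: 1.0}, 0), ({0: 1.0}, 1)]
        for x, y in stream * 50:          # several passes over a toy stream
            online_update(w, x, y)
        print([round(wi, 2) for wi in w])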

  18. TriG: Next Generation Scalable Spaceborne GNSS Receiver (United States)

    Tien, Jeffrey Y.; Okihiro, Brian Bachman; Esterhuizen, Stephan X.; Franklin, Garth W.; Meehan, Thomas K.; Munson, Timothy N.; Robison, David E.; Turbiner, Dmitry; Young, Lawrence E.


    TriG is the next-generation NASA scalable space GNSS science receiver. It will track all GNSS and additional signals (i.e., GPS, GLONASS, Galileo, Compass, and Doris). Its scalable 3U architecture is fully software- and firmware-reconfigurable, enabling optimization to meet specific mission requirements. The TriG GNSS EM is currently undergoing testing and is expected to complete full performance testing later this year.

  19. Partnerships for success: A collaborative support model to enhance the first year student experience

    Directory of Open Access Journals (Sweden)

    Johanna Einfalt


    Full Text Available Recent discourse about engaging first year students calls for more collaboration in terms of adopting a holistic approach to course delivery and support. This paper discusses a collaborative support model operating at a regional Australian university since 2008. In particular, it describes a collaborative support initiative emerging from this model that is based on providing an informal consultative space where students can drop-in and gain assessment support for research, writing and content. A focus group, online surveys and interviews with co-ordinators were conducted to evaluate the impact of this initiative. Findings suggest that this collaborative support model impacts on the first year student experience by: raising awareness about academic skills and the processes for researching and writing; promoting peer learning opportunities; building confidence and providing suitable support for a diverse range of students.

  20. Information support model and its impact on utility, satisfaction and loyalty of users

    Directory of Open Access Journals (Sweden)

    Sead Šadić


    Full Text Available In today’s world, information systems are of vital importance for the successful performance of any organization, and the most important role of any information system is its information support. This paper develops an information support model and presents the results of a survey examining the effects of such a model. The survey was performed among the employees of the Brčko District Government and comprised three phases. The first phase assesses the influence of the quality of information support and of information itself on the use of information support in decision making. The second phase examines the impact of information support in decision making on the perceived usefulness of, and user satisfaction with, information support. The third phase examines the effects of perceived usefulness and of satisfaction with information support on user loyalty. The model is presented using six hypotheses, which were tested by means of a multivariate regression analysis. The model shows that the quality of information support and of information is of vital importance in the decision-making process. Perceived usefulness and user satisfaction are of vital importance for the continued usage of information support. The model is universal: slightly modified, it can be used in any sphere of life where the satisfaction of clients and users of some service is measured.