WorldWideScience

Sample records for model supporting scalable

  1. Scalability of human models

    NARCIS (Netherlands)

    Rodarius, C.; Rooij, L. van; Lange, R. de

    2007-01-01

    The objective of this work was to create a scalable human occupant model that allows adaptation of human models with respect to size, weight and several mechanical parameters. Therefore, for the first time two scalable facet human models were developed in MADYMO. First, a scalable human male was

  2. Scalable Automated Model Search

    Science.gov (United States)

    2014-05-20

    Recoverable snippet from the technical report "Scalable Automated Model Search" by Evan Sparks, Electrical Engineering and Computer Sciences, University of California at Berkeley (Technical Report, 2014): "... of processing. 6. FUTURE WORK: We note that these optimizations are just the tip of the iceberg in solving this problem faster. Advanced model ..."

  3. Scalable Models Using Model Transformation

    Science.gov (United States)

    2008-07-13

    Recoverable snippets from the report "Scalable Models Using Model Transformation": "... parametrization, and workflow automation." The work was supported by the Air Force Research Laboratory (AFRL), the State of California Micro Program, and the following companies: Agilent, Bosch, HSBC, Lockheed-Martin, National Instruments, and Toyota.

  4. From Digital Disruption to Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten; Thomsen, Peter Poulsen

    2017-01-01

    This article discusses the terms disruption, digital disruption, business models and business model scalability. It illustrates how managers should be using these terms for the benefit of their business by developing business models capable of achieving exponentially increasing returns to scale as a response to digital disruption. A series of case studies illustrate that besides frequent existing messages in the business literature relating to the importance of creating agile businesses, both in growing and declining economies, as well as hard to copy value propositions or value propositions that take ... will seldom lead to business model scalability capable of competing with digital disruption(s).

  5. Scalable software architectures for decision support.

    Science.gov (United States)

    Musen, M A

    1999-12-01

    Interest in decision-support programs for clinical medicine soared in the 1970s. Since that time, workers in medical informatics have been particularly attracted to rule-based systems as a means of providing clinical decision support. Although developers have built many successful applications using production rules, they also have discovered that creation and maintenance of large rule bases is quite problematic. In the 1980s, several groups of investigators began to explore alternative programming abstractions that can be used to build decision-support systems. As a result, the notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) problem-solving methods--domain-independent algorithms for automating stereotypical tasks--and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper highlights how developers can construct large, maintainable decision-support systems using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community.

  6. Scalable Capacity Bounding Models for Wireless Networks

    OpenAIRE

    Du, Jinfeng; Medard, Muriel; Xiao, Ming; Skoglund, Mikael

    2014-01-01

    The framework of network equivalence theory developed by Koetter et al. introduces a notion of channel emulation to construct noiseless networks as upper (resp. lower) bounding models, which can be used to calculate the outer (resp. inner) bounds for the capacity region of the original noisy network. Based on the network equivalence framework, this paper presents scalable upper and lower bounding models for wireless networks with potentially many nodes. A channel decoupling method is proposed...

  7. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten

    2015-01-01

    The power of business models lies in their ability to visualize and clarify how firms may configure their value creation processes. Among the key aspects of business model thinking are a focus on what the customer values, how this value is best delivered to the customer and how strategic partners are leveraged in this value creation, delivery and realization exercise. Central to the mainstream understanding of business models is the value proposition towards the customer, and the hypothesis generated is that if the firm delivers to the customer what he/she requires, then there is a good foundation for a long-term profitable business. However, the message conveyed in this article is that while providing a good value proposition may help the firm ‘get by’, the really successful businesses of today are those able to reach the sweet-spot of business model scalability. This article introduces and discusses ...

  8. MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.

    Science.gov (United States)

    Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui

    A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this aspect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. And to realize this model, a mature data distribution standard, the data distribution service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter and transport priority as well as the experiment on real robots validate the effectiveness of this work.

  9. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  10. Scalable RFCMOS Model for 90 nm Technology

    Directory of Open Access Journals (Sweden)

    Ah Fatt Tong

    2011-01-01

    Full Text Available This paper presents the formation of the parasitic components that exist in the RF MOSFET structure during high-frequency operation. The parasitic components are extracted from the transistor's S-parameter measurements, and their geometry dependence is studied with respect to the layout structure. Physical geometry equations are proposed to represent these parasitic components, and by implementing them in the RF model, a scalable RFCMOS model that is valid up to 49.85 GHz is demonstrated. A new verification technique is proposed to verify the quality of the developed scalable RFCMOS model. The proposed technique can shorten the verification time of the scalable RFCMOS model and ensure that the coded scalable model file is error-free and thus more reliable to use.

  11. Scalable learning of probabilistic latent models for collaborative filtering

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre

    2015-01-01

    Collaborative filtering has emerged as a popular way of making user recommendations, but with the increasing sizes of the underlying databases scalability is becoming a crucial issue. In this paper we focus on a recently proposed probabilistic collaborative filtering model that explicitly...

  12. Scalable Power-Component Models for Concept Testing

    Science.gov (United States)

    2011-08-16

    Recoverable snippets from the briefing: "Scope: Scalable, generic MATLAB/Simulink models in three areas: Electromechanical machines (Integrated Starter ..."; "Technology: Permanent Magnet Brushless DC machine; Model: self-generating torque-speed-efficiency map; Future improvements: Induction machine ..."; "Outline: Motivation and Scope; Integrated Starter Generator Model; Battery Model ..."

  13. Semantic Models for Scalable Search in the Internet of Things

    Directory of Open Access Journals (Sweden)

    Dennis Pfisterer

    2013-03-01

    Full Text Available The Internet of Things is anticipated to connect billions of embedded devices equipped with sensors to perceive their surroundings. Thereby, the state of the real world will be available online and in real-time and can be combined with other data and services in the Internet to realize novel applications such as Smart Cities, Smart Grids, or Smart Healthcare. This requires an open representation of sensor data and scalable search over data from diverse sources including sensors. In this paper we show how the Semantic Web technologies RDF (an open semantic data format) and SPARQL (a query language for RDF-encoded data) can be used to address those challenges. In particular, we describe how prediction models can be employed for scalable sensor search, how these prediction models can be encoded as RDF, and how the models can be queried by means of SPARQL.
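
    To make the idea concrete, here is a minimal Python sketch (using the rdflib package) of encoding a toy linear prediction model for a sensor as RDF and retrieving it with SPARQL. The vocabulary (ex:slope, ex:intercept), the sensor URI and the query threshold are invented for illustration and are not the paper's ontology.

```python
# Hedged sketch: a made-up IoT vocabulary, not the paper's ontology.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/iot#")            # hypothetical namespace
g = Graph()

sensor = URIRef("http://example.org/iot/sensor42")   # hypothetical sensor
g.add((sensor, RDF.type, EX.TemperatureSensor))
g.add((sensor, EX.slope, Literal(0.8, datatype=XSD.double)))      # model coefficient
g.add((sensor, EX.intercept, Literal(3.5, datatype=XSD.double)))  # model offset

# SPARQL: evaluate the linear prediction a*t + b at t = 10 and filter on it.
query = """
PREFIX ex: <http://example.org/iot#>
SELECT ?s ?pred WHERE {
  ?s a ex:TemperatureSensor ;
     ex:slope ?a ;
     ex:intercept ?b .
  BIND(?a * 10 + ?b AS ?pred)
  FILTER(?pred > 10)
}
"""
for row in g.query(query):
    print(row.s, float(row.pred))
```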

  14. A Scalable Prescriptive Parallel Debugging Model

    DEFF Research Database (Denmark)

    Jensen, Nicklas Bo; Quarfot Nielsen, Niklas; Lee, Gregory L.

    2015-01-01

    Debugging is a critical step in the development of any parallel program. However, the traditional interactive debugging model, where users manually step through code and inspect their application, does not scale well even for current supercomputers due to its centralized nature. While lightweight...

  15. TOWARDS A SCALABLE SCIENTIFIC DATA GRID MODEL AND SERVICES

    Directory of Open Access Journals (Sweden)

    Azizol Abdullah

    2010-03-01

    Full Text Available A Scientific Data Grid mostly deals with large computational problems. It provides geographically distributed resources for large-scale data-intensive applications that generate large scientific data sets. This requires scientists in modern scientific computing communities to manage massive amounts of very large, geographically distributed data collections. Research in the area of grids has produced various ideas and solutions to address these requirements. However, the number of participants (scientists and institutions) involved in this kind of environment is nowadays increasing tremendously, which leads to a problem of scalability. In order to overcome this problem we need a data grid model that can scale well with an increasing number of users. Peer-to-peer (P2P) is one architecture that promises scalability in a dynamic environment. In this paper, we present a P2P model for a Scientific Data Grid that utilizes P2P services to address the scalability problem. Using this model, we study and propose various decentralized discovery strategies intended to address the problem of scalability. We also investigate the impact of data replication, which addresses the data distribution and reliability problem for our Scientific Data Grid model, on the proposed discovery strategies. For the purpose of this study, we developed and used our own data grid simulation written in PARSEC. We illustrate our P2P Scientific Data Grid model and the data grid simulation used in this study. We then analyze the performance of the discovery strategies, with and without replication strategies, in terms of their success rates, bandwidth consumption and average number of hops.

  16. Scalable wideband equivalent circuit model for silicon-based on-chip transmission lines

    Science.gov (United States)

    Wang, Hansheng; He, Weiliang; Zhang, Minghui; Tanh, Lu

    2017-06-01

    A scalable wideband equivalent circuit model of silicon-based on-chip transmission lines is presented in this paper along with an efficient analytical parameter extraction method based on an improved characteristic function approach, including a relevant equation to reduce the deviation caused by approximation. The model consists of both series and shunt lumped elements and accounts for high-order parasitic effects. The equivalent circuit model is derived and verified to accurately recover the frequency-dependent parameters over a range from direct current to 50 GHz. The scalability of the model is proved by comparing simulated and measured scattering parameters using the cascade method, attaining excellent results based on samples made in CMOS 0.13 and 0.18 μm processes. Project supported by the National Natural Science Foundation of China (No. 61674036).

  17. ANALYZING AVIATION SAFETY REPORTS: FROM TOPIC MODELING TO SCALABLE MULTI-LABEL CLASSIFICATION

    Data.gov (United States)

    National Aeronautics and Space Administration — ANALYZING AVIATION SAFETY REPORTS: FROM TOPIC MODELING TO SCALABLE MULTI-LABEL CLASSIFICATION AMRUDIN AGOVIC*, HANHUAI SHAN, AND ARINDAM BANERJEE Abstract. The...

  18. Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    John Mellor-Crummey

    2008-02-29

    Rice University's achievements as part of the Center for Programming Models for Scalable Parallel Computing include: (1) design and implementation of cafc, the first multi-platform CAF compiler for distributed and shared-memory machines, (2) performance studies of the efficiency of programs written using the CAF and UPC programming models, (3) a novel technique to analyze explicitly-parallel SPMD programs that facilitates optimization, (4) design, implementation, and evaluation of new language features for CAF, including communication topologies, multi-version variables, and distributed multithreading to simplify development of high-performance codes in CAF, and (5) a synchronization strength reduction transformation for automatically replacing barrier-based synchronization with more efficient point-to-point synchronization. The prototype Co-array Fortran compiler cafc developed in this project is available as open source software from http://www.hipersoft.rice.edu/caf.

  19. Spatiotemporal Stochastic Modeling of IoT Enabled Cellular Networks: Scalability and Stability Analysis

    KAUST Repository

    Gharbieh, Mohammad

    2017-05-02

    The Internet of Things (IoT) is large-scale by nature, which is manifested by the massive number of connected devices as well as their vast spatial existence. Cellular networks, which provide ubiquitous, reliable, and efficient wireless access, will play a fundamental role in delivering the first-mile access for the data tsunami to be generated by the IoT. However, cellular networks may have scalability problems in providing uplink connectivity to massive numbers of connected things. To characterize the scalability of the cellular uplink in the context of IoT networks, this paper develops a traffic-aware spatiotemporal mathematical model for IoT devices supported by cellular uplink connectivity. The developed model is based on stochastic geometry and queueing theory to account for the traffic requirement per IoT device, the different transmission strategies, and the mutual interference between the IoT devices. To this end, the developed model is utilized to characterize the extent to which cellular networks can accommodate IoT traffic as well as to assess and compare three different transmission strategies that incorporate a combination of transmission persistency, backoff, and power-ramping. The analysis and the results clearly illustrate the scalability problem imposed by the IoT on cellular networks and offer insights into effective scenarios for each transmission strategy.
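
    The toy Monte Carlo sketch below is not the paper's stochastic-geometry/queueing model; it only illustrates, on a single shared slot, why a backoff strategy tends to scale better than pure persistency as the number of contending devices grows. All parameters are arbitrary and power-ramping is omitted.

```python
# Toy slotted collision channel (illustration only, not the paper's model).
import random

def simulate(n_devices, slots, p_tx, backoff_max=0):
    """Each backlogged device transmits with probability p_tx; a slot succeeds
    only if exactly one device transmits. backoff_max > 0 imposes a random
    wait after a collision (the 'backoff' strategy); 0 means pure persistency."""
    wait = [0] * n_devices
    delivered = 0
    for _ in range(slots):
        txs = [i for i in range(n_devices)
               if wait[i] == 0 and random.random() < p_tx]
        if len(txs) == 1:
            delivered += 1
        elif len(txs) > 1 and backoff_max:
            for i in txs:
                wait[i] = random.randint(1, backoff_max)
        wait = [max(0, w - 1) for w in wait]
    return delivered / slots          # throughput in packets per slot

random.seed(0)
for n in (10, 50, 200):
    print(n, "persistent:", round(simulate(n, 20000, 0.05), 3),
             "backoff:", round(simulate(n, 20000, 0.05, backoff_max=20), 3))
```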

  20. Optimal, scalable forward models for computing gravity anomalies

    CERN Document Server

    May, Dave A

    2011-01-01

    We describe three approaches for computing a gravity signal from a density anomaly. The first approach consists of the classical "summation" technique, whilst the remaining two methods solve the Poisson problem for the gravitational potential using either a Finite Element (FE) discretization employing a multilevel preconditioner, or a Green's function evaluated with the Fast Multipole Method (FMM). The methods utilizing the PDE formulation described here differ from previously published approaches used in gravity modeling in that they are optimal, implying that both the memory and computational time required scale linearly with respect to the number of unknowns in the potential field. Additionally, all of the implementations presented here are developed such that the computations can be performed in a massively parallel, distributed memory computing environment. Through numerical experiments, we compare the methods on the basis of their discretization error, CPU time and parallel scalability. We demonstrate t...
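
    As a minimal sketch of the first, classical summation approach described above: the vertical gravity contribution of each density-anomaly cell is G·Δm·Δz/r³, summed over all cells. The grid, density contrast and observation point below are made-up values for illustration only.

```python
# Direct-summation gravity sketch (toy values; z measured downward as depth).
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_summation(obs, cell_centers, cell_masses):
    """Vertical gravity component at `obs` = (x, y, z) from point-mass cells."""
    d = cell_centers - obs                 # vectors from observation point to cells
    r = np.linalg.norm(d, axis=1)
    return np.sum(G * cell_masses * d[:, 2] / r**3)

# One cubic kilometre of +300 kg/m^3 density anomaly, buried 500 m, split into cells.
nx = 10
xs = np.linspace(50.0, 950.0, nx)
cx, cy, cz = np.meshgrid(xs, xs, xs, indexing="ij")
centers = np.column_stack([cx.ravel(), cy.ravel(), cz.ravel() + 500.0])
cell_vol = (1000.0 / nx) ** 3
masses = np.full(centers.shape[0], 300.0 * cell_vol)

print(gz_summation(np.array([500.0, 500.0, 0.0]), centers, masses), "m/s^2")
```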

  1. A hybrid random field model for scalable statistical learning.

    Science.gov (United States)

    Freno, A; Trentin, E; Gori, M

    2009-01-01

    This paper introduces hybrid random fields, which are a class of probabilistic graphical models aimed at allowing for efficient structure learning in high-dimensional domains. Hybrid random fields, along with the learning algorithm we develop for them, are especially useful as a pseudo-likelihood estimation technique (rather than a technique for estimating strict joint probability distributions). In order to assess the generality of the proposed model, we prove that the class of pseudo-likelihood distributions representable by hybrid random fields strictly includes the class of joint probability distributions representable by Bayesian networks. Once we establish this result, we develop a scalable algorithm for learning the structure of hybrid random fields, which we call 'Markov Blanket Merging'. On the one hand, we characterize some complexity properties of Markov Blanket Merging both from a theoretical and from the experimental point of view, using a series of synthetic benchmarks. On the other hand, we evaluate the accuracy of hybrid random fields (as learned via Markov Blanket Merging) by comparing them to various alternative statistical models in a number of pattern classification and link-prediction applications. As the results show, learning hybrid random fields by the Markov Blanket Merging algorithm not only reduces significantly the computational cost of structure learning with respect to several considered alternatives, but it also leads to models that are highly accurate as compared to the alternative ones.
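
    For reference, the pseudo-likelihood mentioned above is commonly defined as the product, over all variables, of each variable's conditional probability given its Markov blanket; the paper's exact formulation may differ in detail:

```latex
\mathrm{PL}(x_1,\dots,x_n) \;=\; \prod_{i=1}^{n} P\!\left(x_i \,\middle|\, \mathrm{MB}(x_i)\right)
```

    where MB(x_i) denotes the state of the Markov blanket of variable X_i.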

  2. Detailed Modeling and Evaluation of a Scalable Multilevel Checkpointing System

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moody, Adam [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bronevetsky, Greg [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); de Supinski, Bronis R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-09-01

    High-performance computing (HPC) systems are growing more powerful by utilizing more components. As the system mean time before failure correspondingly drops, applications must checkpoint frequently to make progress. But, at scale, the cost of checkpointing becomes prohibitive. A solution to this problem is multilevel checkpointing, which employs multiple types of checkpoints in a single run. Moreover, lightweight checkpoints can handle the most common failure modes, while more expensive checkpoints can handle severe failures. We designed a multilevel checkpointing library, the Scalable Checkpoint/Restart (SCR) library, that writes lightweight checkpoints to node-local storage in addition to the parallel file system. We present probabilistic Markov models of SCR's performance. We show that on future large-scale systems, SCR can lead to a gain in machine efficiency of up to 35 percent, and reduce the load on the parallel file system by a factor of two. In addition, we predict that checkpoint scavenging, or only writing checkpoints to the parallel file system on application termination, can reduce the load on the parallel file system by 20 × on today's systems and still maintain high application efficiency.
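
    The report's analysis rests on probabilistic Markov models of SCR; as a simpler classical point of reference (added here as an assumption, not a result from the report), Young's first-order approximation of the optimal interval between checkpoints is:

```latex
\tau_{\mathrm{opt}} \;\approx\; \sqrt{2\,\delta\,M}
```

    where δ is the time to write one checkpoint and M is the system mean time between failures; multilevel schemes such as SCR effectively reduce δ for the most common failure modes by writing lightweight checkpoints to node-local storage.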

  3. Model-Based Evaluation Of System Scalability: Bandwidth Analysis For Smartphone-Based Biosensing Applications

    DEFF Research Database (Denmark)

    Patou, François; Madsen, Jan; Dimaki, Maria

    2016-01-01

    Scalability is a design principle often valued for the engineering of complex systems. Scalability is the ability of a system to change the current value of one of its specification parameters. Although targeted frameworks are available for the evaluation of scalability for specific digital systems, methodologies enabling scalability analysis of multidomain, complex systems are still missing. In acknowledgment of the importance for complex systems to present the ability to change or evolve, we present in this work a system-level model-based methodology allowing the multidisciplinary parametric evaluation ... engineering efforts for scaling a system specification efficaciously. We demonstrate the value of our methodology by investigating a smartphone-based biosensing instrumentation platform. Specifically, we carry out scalability analysis for the system's bandwidth specification: the maximum analog voltage waveform ...

  4. Progress Report 2008: A Scalable and Extensible Earth System Model for Climate Change Science

    Energy Technology Data Exchange (ETDEWEB)

    Drake, John B [ORNL; Worley, Patrick H [ORNL; Hoffman, Forrest M [ORNL; Jones, Phil [Los Alamos National Laboratory (LANL)

    2009-01-01

    This project employs multi-disciplinary teams to accelerate development of the Community Climate System Model (CCSM), based at the National Center for Atmospheric Research (NCAR). A consortium of eight Department of Energy (DOE) National Laboratories collaborates with NCAR and the NASA Global Modeling and Assimilation Office (GMAO). The laboratories are Argonne (ANL), Brookhaven (BNL), Los Alamos (LANL), Lawrence Berkeley (LBNL), Lawrence Livermore (LLNL), Oak Ridge (ORNL), Pacific Northwest (PNNL) and Sandia (SNL). The work plan focuses on scalability for petascale computation and extensibility to a more comprehensive earth system model. Our stated goal is to support the DOE mission in climate change research by helping ... to determine the range of possible climate changes over the 21st century and beyond through simulations using a more accurate climate system model that includes the full range of human and natural climate feedbacks with increased realism and spatial resolution.

  5. Clinical Information System Services and Capabilities Desired for Scalable, Standards-Based, Service-oriented Decision Support: Consensus Assessment of the Health Level 7 Clinical Decision Support Work Group

    Science.gov (United States)

    Kawamoto, Kensaku; Jacobs, Jason; Welch, Brandon M.; Huser, Vojtech; Paterno, Marilyn D.; Del Fiol, Guilherme; Shields, David; Strasberg, Howard R.; Haug, Peter J.; Liu, Zhijing; Jenders, Robert A.; Rowed, David W.; Chertcoff, Daryl; Fehre, Karsten; Adlassnig, Klaus-Peter; Curtis, A. Clayton

    2012-01-01

    A standards-based, service-oriented architecture for clinical decision support (CDS) has the potential to significantly enhance CDS scalability and robustness. To enable such a CDS architecture, the Health Level 7 CDS Work Group reviewed the literature, hosted multi-stakeholder discussions, and consulted domain experts to identify and prioritize the services and capabilities required from clinical information systems (CISs) to enable service-oriented CDS. In addition, relevant available standards were identified. Through this process, ten CIS services and eight CIS capabilities were identified as being important for enabling scalable, service-oriented CDS. In particular, through a survey of 46 domain experts, five services and capabilities were identified as being especially critical: 1) the use of standard information models and terminologies; 2) the ability to leverage a Decision Support Service (DSS); 3) support for a clinical data query service; 4) support for an event subscription and notification service; and 5) support for a user communication service. PMID:23304315

  6. Clinical information system services and capabilities desired for scalable, standards-based, service-oriented decision support: consensus assessment of the Health Level 7 clinical decision support Work Group.

    Science.gov (United States)

    Kawamoto, Kensaku; Jacobs, Jason; Welch, Brandon M; Huser, Vojtech; Paterno, Marilyn D; Del Fiol, Guilherme; Shields, David; Strasberg, Howard R; Haug, Peter J; Liu, Zhijing; Jenders, Robert A; Rowed, David W; Chertcoff, Daryl; Fehre, Karsten; Adlassnig, Klaus-Peter; Curtis, A Clayton

    2012-01-01

    A standards-based, service-oriented architecture for clinical decision support (CDS) has the potential to significantly enhance CDS scalability and robustness. To enable such a CDS architecture, the Health Level 7 CDS Work Group reviewed the literature, hosted multi-stakeholder discussions, and consulted domain experts to identify and prioritize the services and capabilities required from clinical information systems (CISs) to enable service-oriented CDS. In addition, relevant available standards were identified. Through this process, ten CIS services and eight CIS capabilities were identified as being important for enabling scalable, service-oriented CDS. In particular, through a survey of 46 domain experts, five services and capabilities were identified as being especially critical: 1) the use of standard information models and terminologies; 2) the ability to leverage a Decision Support Service (DSS); 3) support for a clinical data query service; 4) support for an event subscription and notification service; and 5) support for a user communication service.

  7. NASA's Earth Observing Data and Information System - Supporting Interoperability through a Scalable Architecture (Invited)

    Science.gov (United States)

    Mitchell, A. E.; Lowe, D. R.; Murphy, K. J.; Ramapriyan, H. K.

    2013-12-01

    Initiated in 1990, NASA's Earth Observing System Data and Information System (EOSDIS) is currently a petabyte-scale archive of data designed to receive, process, distribute and archive several terabytes of science data per day from NASA's Earth science missions. Comprised of 12 discipline specific data centers collocated with centers of science discipline expertise, EOSDIS manages over 6800 data products from many science disciplines and sources. NASA supports global climate change research by providing scalable open application layers to the EOSDIS distributed information framework. This allows many other value-added services to access NASA's vast Earth Science Collection and allows EOSDIS to interoperate with data archives from other domestic and international organizations. EOSDIS is committed to NASA's Data Policy of full and open sharing of Earth science data. As metadata is used in all aspects of NASA's Earth science data lifecycle, EOSDIS provides a spatial and temporal metadata registry and order broker called the EOS Clearing House (ECHO) that allows efficient search and access of cross domain data and services through the Reverb Client and Application Programmer Interfaces (APIs). Another core metadata component of EOSDIS is NASA's Global Change Master Directory (GCMD) which represents more than 25,000 Earth science data set and service descriptions from all over the world, covering subject areas within the Earth and environmental sciences. With inputs from the ECHO, GCMD and Soil Moisture Active Passive (SMAP) mission metadata models, EOSDIS is developing a NASA ISO 19115 Best Practices Convention. Adoption of an international metadata standard enables a far greater level of interoperability among national and international data products. NASA recently concluded a 'Metadata Harmony Study' of EOSDIS metadata capabilities/processes of ECHO and NASA's Global Change Master Directory (GCMD), to evaluate opportunities for improved data access and use, reduce

  8. Scalable air cathode microbial fuel cells using glass fiber separators, plastic mesh supporters, and graphite fiber brush anodes

    KAUST Repository

    Zhang, Xiaoyuan

    2011-01-01

    Brush anodes, glass fiber (GF1) separators, and plastic mesh supporters were combined here for the first time to create a scalable microbial fuel cell architecture. Separators prevented short circuiting of closely-spaced electrodes, and cathode supporters were used to avoid water gaps between the separator and cathode that can reduce power production. The maximum power density with a separator and supporter and a single cathode was 75±1W/m3. Removing the separator decreased power by 8%. Adding a second cathode increased power to 154±1W/m3. Current was increased by connecting two MFCs in parallel. These results show that brush anodes, combined with a glass fiber separator and a plastic mesh supporter, produce a useful MFC architecture that is inherently scalable due to good insulation between the electrodes and a compact architecture. © 2010 Elsevier Ltd.

  9. System-agnostic clinical decision support services: benefits and challenges for scalable decision support.

    Science.gov (United States)

    Kawamoto, Kensaku; Del Fiol, Guilherme; Orton, Charles; Lobach, David F

    2010-01-01

    System-agnostic clinical decision support (CDS) services provide patient evaluation capabilities that are independent of specific CDS systems and system implementation contexts. While such system-agnostic CDS services hold great potential for facilitating the widespread implementation of CDS systems, little has been described regarding the benefits and challenges of their use. In this manuscript, the authors address this need by describing potential benefits and challenges of using a system-agnostic CDS service. This analysis is based on the authors' formal assessments of, and practical experiences with, various approaches to developing, implementing, and maintaining CDS capabilities. In particular, the analysis draws on the authors' experience developing and leveraging a system-agnostic CDS Web service known as SEBASTIAN. A primary potential benefit of using a system-agnostic CDS service is the relative ease and flexibility with which the service can be leveraged to implement CDS capabilities across applications and care settings. Other important potential benefits include facilitation of centralized knowledge management and knowledge sharing; the potential to support multiple underlying knowledge representations and knowledge resources through a common service interface; improved simplicity and componentization; easier testing and validation; and the enabling of distributed CDS system development. Conversely, important potential challenges include the increased effort required to develop knowledge resources capable of being used in many contexts and the critical need to standardize the service interface. Despite these challenges, our experiences to date indicate that the benefits of using a system-agnostic CDS service generally outweigh the challenges of using this approach to implementing and maintaining CDS systems.

  10. Reinforcing user data analysis with Ganga in the LHC era: scalability, monitoring and user-support.

    CERN Document Server

    Brochu, F; The ATLAS collaboration; Ebke, J; Egede, U; Elmsheuser, J; Jha, M K; Kokoszkiewicz, L; Lee, H C; Maier, A; Moscicki, J; Munchen, T; Reece, W; Samset, B; Slater, M; Tuckett, D; Van der Ster, D; Williams, M

    2011-01-01

    Ganga is a grid job submission and management system widely used in the ATLAS and LHCb experiments and several other communities in the context of the EGEE project. The particle physics communities have entered the LHC operation era, which brings new challenges for user data analysis: a strong growth in the number of users and jobs is already noticeable. Current work in the Ganga project is focusing on dealing with these challenges. In recent Ganga releases the support for the pilot-job-based grid systems Panda and Dirac of the ATLAS and LHCb experiments, respectively, has been strengthened. A more scalable job repository architecture, which allows efficient storage of many thousands of jobs in XML or several database formats, was recently introduced. A better integration with monitoring systems, including the Dashboard and job execution monitor systems, is underway. These will provide comprehensive and easy job monitoring. A simple to use error reporting tool integrated at the Ganga command-line will help to imp...

  11. Reinforcing User Data Analysis with Ganga in the LHC Era: Scalability, Monitoring and User-support

    CERN Document Server

    Brochu, F; The ATLAS collaboration; Ebke, J; Egede, U; Elmsheuser, J; Jha, M K; Kokoszkiewicz, L; Lee, H C; Maier, A; Moscicki, J; Munchen, T; Reece, W; Samset, B; Slater, M; Tuckett, D; Van der Ster, D; Williams, M

    2010-01-01

    Ganga is a grid job submission and management system widely used in the ATLAS and LHCb experiments and several other communities in the context of the EGEE project. The particle physics communities have entered the LHC operation era, which brings new challenges for user data analysis: a strong growth in the number of users and jobs is already noticeable. Current work in the Ganga project is focusing on dealing with these challenges. In recent Ganga releases the support for the pilot-job-based grid systems Panda and Dirac of the ATLAS and LHCb experiments, respectively, has been strengthened. A more scalable job repository architecture, which allows efficient storage of many thousands of jobs in XML or several database formats, was recently introduced. A better integration with monitoring systems, including the Dashboard and job execution monitor systems, is underway. These will provide comprehensive and easy job monitoring. A simple to use error reporting tool integrated at the Ganga command-line will help to impr...

  12. Scalable Bayesian modeling, monitoring and analysis of dynamic network flow data

    OpenAIRE

    2016-01-01

    Traffic flow count data in networks arise in many applications, such as automobile or aviation transportation, certain directed social network contexts, and Internet studies. Using an example of Internet browser traffic flow through site-segments of an international news website, we present Bayesian analyses of two linked classes of models which, in tandem, allow fast, scalable and interpretable Bayesian inference. We first develop flexible state-space models for streaming count data, able to...

  13. Scalable Telemonitoring Model in Cloud for Health Care Analysis

    Science.gov (United States)

    Sawant, Yogesh; Jayakumar, Naveenkumar, Dr.; Pawar, Sanket Sunil

    2017-08-01

    The telemonitoring model is a health-observation model for monitoring patients remotely. It is suited to patients who need to avoid the high operating expense of emergency treatment. Telemonitoring provides a path for monitoring medical devices and generates a complete profile of a patient’s health by assembling vital signs as well as additional health information. The model relies on four differential modules that are capable of generating realistic synthetic electrocardiogram (ECG) signals. It covers four categories of chronic disease: pulmonary conditions, diabetes, hypertension, and cardiovascular diseases. The results of this application model suggest that patients, regardless of their nationality, socioeconomic grade, or age, can be observed through tele-monitoring programs and the use of these technologies. Multiple aspects of a patient’s health status are shown in the results, such as beat-to-beat variation in the morphology and timing of the human ECG, including QT dispersion and R-peak amplitude modulation. This model will be used to evaluate biomedical signal processing methods that are used to compute clinical information from the ECG.

  14. Advances in Intelligent Modelling and Simulation Artificial Intelligence-Based Models and Techniques in Scalable Computing

    CERN Document Server

    Khan, Samee; Burczyński, Tadeusz

    2012-01-01

    One of the most challenging issues in today’s large-scale computational modeling and design is to effectively manage the complex distributed environments, such as computational clouds, grids, ad hoc, and P2P networks operating under various types of users with evolving relationships fraught with uncertainties. In this context, the IT resources and services usually belong to different owners (institutions, enterprises, or individuals) and are managed by different administrators. Moreover, uncertainties are presented to the system at hand in various forms of information that are incomplete, imprecise, fragmentary, or overloading, which hinders the full and precise resolution of the evaluation criteria, subsequencing and selection, and the assignment of scores. Intelligent scalable systems enable flexible routing and charging, advanced user interactions and the aggregation and sharing of geographically-distributed resources in modern large-scale systems. This book presents new ideas, theories, models...

  15. A Scalable Mextram Model for Advanced Bipolar Circuit Design

    NARCIS (Netherlands)

    Wu, H.-C.

    2007-01-01

    In this thesis, a reference-based scaling approach and its parameter extraction for the bipolar transistor model Mextram are proposed. The approach is mainly based on the physical properties of the Mextram parameters, which scale with the junction temperature and geometry of the bipolar transistor. The scalab

  16. Model Transport: Towards Scalable Transfer Learning on Manifolds

    DEFF Research Database (Denmark)

    Freifeld, Oren; Hauberg, Søren; Black, Michael J.

    2014-01-01

    We consider the intersection of two research fields: transfer learning and statistics on manifolds. In particular, we consider, for manifold-valued data, transfer learning of tangent-space models such as Gaussian distributions, PCA, regression, or classifiers. Though one would hope to simply use ordinary Rn-transfer learning ideas, the manifold structure prevents it. We overcome this by basing our method on inner-product-preserving parallel transport, a well-known tool widely used in other problems of statistics on manifolds in computer vision. At first, this straightforward idea seems to suffer ... “commutes” with learning. Consequently, our compact framework, applicable to a large class of manifolds, is not restricted by the size of either the training or test sets. We demonstrate the approach by transferring PCA and logistic-regression models of real-world data involving 3D shapes and image ...

  17. Scalable and Robust BDDC Preconditioners for Reservoir and Electromagnetics Modeling

    KAUST Repository

    Zampini, S.

    2015-09-13

    The purpose of the study is to show the effectiveness of recent algorithmic advances in Balancing Domain Decomposition by Constraints (BDDC) preconditioners for the solution of elliptic PDEs with highly heterogeneous coefficients, and discretized by means of the finite element method. Applications to large linear systems generated by div- and curl- conforming finite elements discretizations commonly arising in the contexts of modelling reservoirs and electromagnetics will be presented.

  18. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems.

    Science.gov (United States)

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-10-28

    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., Reo coordination language. With rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system assuring, by design scalability and the interoperability, correctness of component cooperation.

  19. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems

    Science.gov (United States)

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-01-01

    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems’ architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., Reo coordination language. With rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system assuring, by design scalability and the interoperability, correctness of component cooperation. PMID:27801829

  20. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems

    Directory of Open Access Journals (Sweden)

    Esther Palomar

    2016-10-01

    Full Text Available Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems’ architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., Reo coordination language. With rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system assuring, by design scalability and the interoperability, correctness of component cooperation.

  1. Center for Programming Models for Scalable Parallel Computing - Towards Enhancing OpenMP for Manycore and Heterogeneous Nodes

    Energy Technology Data Exchange (ETDEWEB)

    Barbara Chapman

    2012-02-01

    OpenMP was not well recognized at the beginning of the project, around year 2003, because of its limited use in DoE production applications and the immature hardware support for an efficient implementation. Yet in recent years, it has been gradually adopted both in HPC applications, mostly in the form of MPI+OpenMP hybrid code, and in mid-scale desktop applications for scientific and experimental studies. We have observed this trend and worked diligently to improve our OpenMP compiler and runtimes, as well as to work with the OpenMP standard organization to make sure OpenMP evolves in a direction close to DoE missions. In the Center for Programming Models for Scalable Parallel Computing project, the HPCTools team at the University of Houston (UH), directed by Dr. Barbara Chapman, has been working with project partners, external collaborators and hardware vendors to increase the scalability and applicability of OpenMP for multi-core (and future manycore) platforms and for distributed memory systems by exploring different programming models, language extensions, compiler optimizations, as well as runtime library support.

  2. The Derivation and Use of a Scalable Model for Network Attack Identification and Path Prediction

    Directory of Open Access Journals (Sweden)

    Sanjeeb Nanda

    2008-04-01

    Full Text Available The rapid growth of the Internet has triggered an explosion in the number of applications that leverage its capabilities. Unfortunately, many are designed to burden or destroy the capabilities of their peers and the network's infrastructure. Hence, considerable effort has been focused on detecting and predicting the security breaches they propagate. However, the enormity of the Internet poses a formidable challenge to analyzing such attacks using scalable models. Furthermore, the lack of complete information on network vulnerabilities makes forecasting the systems that may be exploited by such applications in the future very hard. This paper presents a technique for deriving a scalable model for representing network attacks, and its application to identify actual attacks with greater certainty amongst false positives and false negatives. It also presents a method to forecast the propagation of security failures proliferated by an attack over time and its likely targets in the future.

  3. Scalable Generalization of Hydraulic Conductivity in Quaternary Strata for Use in a Regional Groundwater Model

    Science.gov (United States)

    Jatnieks, J.; Popovs, K.; Klints, I.; Timuhins, A.; Kalvans, A.; Delina, A.; Saks, T.

    2012-04-01

    The cover of Quaternary sediments, especially in formerly glaciated territories, is usually the most complex part of the sedimentary sequence. In regional hydro-geological models it is often represented as a single layer with uniform or calibrated properties (Valner 2003). However, the properties and structure of Quaternary sediments control the groundwater recharge: they can either direct the groundwater flow horizontally towards discharge in topographic lows or vertically, recharging groundwater in the bedrock. This work aims to present calibration results and detail our experience in integrating a scalable generalization of hydraulic conductivity for Quaternary strata in the regional groundwater modelling system for the Baltic artesian basin, MOSYS V1. We also present a method for solving boundary transitions between spatial clusters of lithologically similar structure. In this study the main unit of generalization is the spatial cluster. Clusters are obtained from distance calculations combining the Normalized Compression Distance (NCD) metric, calculated by the CompLearn parameter-free machine learning toolkit, with normalized Euclidean distance measures for the coordinates of the borehole log data. A hierarchical clustering solution is used to obtain a cluster membership identifier for each borehole. Using boreholes as generator points for Voronoi tessellation and dissolving the resulting polygons according to their cluster membership attribute allows us to obtain spatial regions representing a certain degree of similarity in lithological structure. This degree of similarity and the spatial heterogeneity of the cluster polygons can be varied by different flattening of the hierarchical cluster model into a variable number of clusters. This provides a scalable generalization solution which can be adapted according to model calibration performance. Using the dissimilarity matrix of the NCD metric, a borehole most similar to all the others from the lithological structure...
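
    The Python sketch below is a hedged illustration of the clustering step described above: the Normalized Compression Distance is computed here with zlib rather than the CompLearn toolkit, the equal weighting of lithological and spatial distance is an assumption, and the borehole coordinates and logs are invented.

```python
# Combine a compression-based lithology distance with a spatial distance,
# then cluster boreholes hierarchically (toy data, assumed weighting).
import zlib
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def ncd(a: bytes, b: bytes) -> float:
    """Normalized Compression Distance between two byte strings."""
    ca, cb, cab = len(zlib.compress(a)), len(zlib.compress(b)), len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

# toy borehole data: (x, y) coordinates and a textual lithology log per borehole
coords = np.array([[0.0, 0.0], [1.0, 0.2], [10.0, 9.5], [10.5, 10.0]])
logs = [b"sand silt clay", b"sand silt sand", b"till gravel till", b"till till gravel"]

coords_n = (coords - coords.min(0)) / (np.ptp(coords, axis=0) + 1e-9)  # normalize coordinates
n = len(logs)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d_litho = ncd(logs[i], logs[j])
        d_space = np.linalg.norm(coords_n[i] - coords_n[j])
        D[i, j] = D[j, i] = 0.5 * d_litho + 0.5 * d_space  # assumed equal weights

labels = fcluster(linkage(squareform(D), method="average"), t=2, criterion="maxclust")
print(labels)   # cluster membership identifier per borehole
```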

  4. A Scalable Version of the Navy Operational Global Atmospheric Prediction System Spectral Forecast Model

    Directory of Open Access Journals (Sweden)

    Thomas E. Rosmond

    2000-01-01

    Full Text Available The Navy Operational Global Atmospheric Prediction System (NOGAPS) includes a state-of-the-art spectral forecast model similar to models run at several major operational numerical weather prediction (NWP) centers around the world. The model, developed by the Naval Research Laboratory (NRL) in Monterey, California, has run operationally at the Fleet Numerical Meteorological and Oceanographic Center (FNMOC) since 1982, and most recently has been run on a Cray C90 in a multi-tasked configuration. Typically the multi-tasked code runs on 10 to 15 processors with an overall parallel efficiency of about 90%. Resolution is T159L30, but other operational and research applications run at significantly lower resolutions. A scalable NOGAPS forecast model has been developed by NRL in anticipation of a FNMOC C90 replacement in about 2001, as well as for current NOGAPS research requirements to run on DOD High-Performance Computing (HPC) scalable systems. The model is designed to run with message passing (MPI). Model design criteria include bit reproducibility for different processor numbers and reasonably efficient performance on fully shared memory, distributed memory, and distributed shared memory systems for a wide range of model resolutions. Results for a wide range of processor numbers, model resolutions, and different vendor architectures are presented. Single node performance has been disappointing on RISC-based systems, at least compared to vector processor performance. This is a common complaint, and will require careful re-examination of traditional numerical weather prediction (NWP) model software design and data organization to fully exploit future scalable architectures.

  5. SCALABLE PERCEPTUAL AUDIO REPRESENTATION WITH AN ADAPTIVE THREE TIME-SCALE SINUSOIDAL SIGNAL MODEL

    Institute of Scientific and Technical Information of China (English)

    Al-Moussawy Raed; Yin Junxun; Song Shaopeng

    2004-01-01

    This work is concerned with the development and optimization of a signal model for scalable perceptual audio coding at low bit rates. A complementary two-part signal model consisting of Sines plus Noise (SN) is described. The paper presents essentially a fundamental enhancement to the sinusoidal modeling component. The enhancement involves an audio signal scheme based on carrying out overlap-add sinusoidal modeling at three successive time scales: large, medium, and small. The sinusoidal modeling is done in an analysis-by-synthesis overlap-add manner across the three scales by using psychoacoustically weighted matching pursuits. The sinusoidal modeling residual at the first scale is passed to the smaller scales to allow for the modeling of various signal features at appropriate resolutions. This approach greatly helps to correct the pre-echo inherent in the sinusoidal model. This improves the perceptual audio quality over our previous sinusoidal modeling work while using the same number of sinusoids. The most obvious application for the SN model is in scalable, high fidelity audio coding and signal modification.

  6. A Scalable Cloud Library Empowering Big Data Management, Diagnosis, and Visualization of Cloud-Resolving Models

    Science.gov (United States)

    Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.

    2015-12-01

    A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25~5km horizontal grid spacings. The main advantage of the CRM is that it can allow explicit interactive processes between microphysics, radiation, turbulence, surface, and aerosols without subgrid cloud fraction, overlapping and convective parameterization. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to i) visualize/inter-compare CRM simulations, ii) diagnose key processes for cloud-precipitation formation and intensity, and iii) evaluate against NASA's field campaign data and L1/L2 satellite data products due to large data volume (~10TB) and the complexity of CRM's physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capability includes: (1) an SCL data model that enables various CRM simulation outputs in NetCDF, including those of the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) models, to be accessed and processed by Hadoop; (2) a parallel NetCDF-to-CSV converter that supports NU-WRF and GCE model outputs; (3) a technique that visualizes Hadoop-resident data with IDL; (4) a technique that subsets Hadoop-resident data, compliant with the SCL data model, with HIVE or Impala via HUE's Web interface; (5) a prototype that enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system, PVFS2 or CephFS, where high performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located. We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, SCL users can conduct large-domain on-demand tasks without downloading voluminous CRM datasets and various observations from NASA Field Campaigns and Satellite data to a
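
    A minimal sketch of the NetCDF-to-CSV conversion step (item 2 above): the real SCL converter is parallel, and the file name and variable names used here are placeholders. Assumes the netCDF4 Python package and unmasked numeric variables of equal flattened length.

```python
# Flatten selected NetCDF variables into a CSV that Hadoop/HIVE tools can ingest.
import csv
from netCDF4 import Dataset

def netcdf_to_csv(nc_path, csv_path, var_names):
    with Dataset(nc_path) as ds, open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["index"] + list(var_names))
        cols = [ds.variables[v][:].ravel() for v in var_names]  # flatten each variable
        for i, row in enumerate(zip(*cols)):
            writer.writerow([i] + [float(x) for x in row])

# Hypothetical usage with placeholder file and variable names:
# netcdf_to_csv("gce_output.nc", "gce_output.csv", ["QRAIN", "QCLOUD"])
```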

  7. Scalable motion vector coding

    Science.gov (United States)

    Barbarien, Joeri; Munteanu, Adrian; Verdicchio, Fabio; Andreopoulos, Yiannis; Cornelis, Jan P.; Schelkens, Peter

    2004-11-01

    Modern video coding applications require transmission of video data over variable-bandwidth channels to a variety of terminals with different screen resolutions and available computational power. Scalable video coding is needed to optimally support these applications. Recently proposed wavelet-based video codecs employing spatial domain motion compensated temporal filtering (SDMCTF) provide quality, resolution and frame-rate scalability while delivering compression performance comparable to that of the state-of-the-art non-scalable H.264-codec. These codecs require scalable coding of the motion vectors in order to support a large range of bit-rates with optimal compression efficiency. Scalable motion vector coding algorithms based on the integer wavelet transform followed by embedded coding of the wavelet coefficients were recently proposed. In this paper, a new and fundamentally different scalable motion vector codec (MVC) using median-based motion vector prediction is proposed. Extensive experimental results demonstrate that the proposed MVC systematically outperforms the wavelet-based state-of-the-art solutions. To be able to take advantage of the proposed scalable MVC, a rate allocation mechanism capable of optimally dividing the available rate among texture and motion information is required. Two rate allocation strategies are proposed and compared. The proposed MVC and rate allocation schemes are incorporated into an SDMCTF-based video codec and the benefits of scalable motion vector coding are experimentally demonstrated.
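
    The core of the median predictor can be sketched in a few lines; the causal neighbor set (left, top, top-right) follows common block-based practice and is an assumption here, since the abstract does not spell out the exact template or the embedded coding of the residuals.

    ```python
    import numpy as np

    def median_mv_residuals(mv_field):
        """Component-wise median prediction of motion vectors from causal neighbors;
        the returned residuals are what an entropy coder would then encode."""
        rows, cols, _ = mv_field.shape
        residuals = np.zeros((rows, cols, 2), dtype=float)
        for r in range(rows):
            for c in range(cols):
                neighbors = []
                if c > 0:
                    neighbors.append(mv_field[r, c - 1])
                if r > 0:
                    neighbors.append(mv_field[r - 1, c])
                if r > 0 and c + 1 < cols:
                    neighbors.append(mv_field[r - 1, c + 1])
                pred = np.median(neighbors, axis=0) if neighbors else np.zeros(2)
                residuals[r, c] = mv_field[r, c] - pred
        return residuals
    ```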

  8. A framework for scalable parameter estimation of gene circuit models using structural information

    KAUST Repository

    Kuwahara, Hiroyuki

    2013-06-21

    Motivation: Systematic and scalable parameter estimation is key to constructing complex gene regulatory models and to ultimately facilitating an integrative systems biology approach to quantitatively understanding the molecular mechanisms underpinning gene regulation. Results: Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time-series microarray dataset. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher-quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied to the modeling of gene circuits, our results suggest that the use of more tailored approaches exploiting domain-specific information may be key to the reverse engineering of complex biological systems.
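
    The decomposition idea can be illustrated on a toy two-gene circuit: each rate equation is integrated on its own, with the other species held at its previously reconstructed trajectory, and the sweep is repeated. The circuit, its parameters and the fixed number of sweeps are illustrative assumptions; the published framework additionally refines the integration accuracy using the model structure.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.interpolate import interp1d

    a, b, n, d = 4.0, 3.0, 2.0, 1.0              # toy toggle-switch parameters
    t_grid = np.linspace(0.0, 10.0, 201)

    def decoupled_mean_trajectories(n_sweeps=5):
        """Integrate each rate equation separately, holding the other species fixed."""
        x = np.zeros_like(t_grid)
        y = np.zeros_like(t_grid)
        for _ in range(n_sweeps):
            y_of_t = interp1d(t_grid, y, fill_value="extrapolate")
            x = solve_ivp(lambda t, s: a / (1.0 + y_of_t(t) ** n) - d * s,
                          (t_grid[0], t_grid[-1]), [0.0], t_eval=t_grid).y[0]
            x_of_t = interp1d(t_grid, x, fill_value="extrapolate")
            y = solve_ivp(lambda t, s: b / (1.0 + x_of_t(t) ** n) - d * s,
                          (t_grid[0], t_grid[-1]), [0.0], t_eval=t_grid).y[0]
        return x, y   # mean time evolutions to compare against data when scoring parameters
    ```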

  9. Accurate geometry scalable complementary metal oxide semiconductor modelling of low-power 90 nm amplifier circuits

    Directory of Open Access Journals (Sweden)

    Apratim Roy

    2014-05-01

    Full Text Available This paper proposes a technique to accurately estimate the radio frequency behaviour of low-power 90 nm amplifier circuits with geometry-scalable discrete complementary metal oxide semiconductor (CMOS) modelling. Rather than characterising individual elements, the scheme is able to predict gain, noise and reflection loss of low-noise amplifier (LNA) architectures made with bias, active and passive components. It reduces the number of model parameters by formulating dependent functions in symmetric distributed modelling and shows that simple fitting factors can account for extraneous (interconnect) effects in the LNA structure. Equivalent-circuit model equations based on physical structure and describing layout parasitics are developed for major amplifier elements such as the metal–insulator–metal (MIM) capacitor, spiral symmetric inductor, polysilicon (PS) resistor and bulk RF transistor. The models are geometry scalable with respect to feature dimensions, i.e. MIM/PS width and length, outer dimension/turns of the planar inductor and channel width/fingers of the active device. Results obtained with the CMOS models are compared against measured literature data for two 1.2 V amplifier circuits, where prediction accuracy for RF parameters (S(21), noise figure, S(11), S(22)) lies within the range of 92–99%.

  10. A Bloom Filter-Powered Technique Supporting Scalable Semantic Discovery in Data Service Networks

    Science.gov (United States)

    Zhang, J.; Shi, R.; Bao, Q.; Lee, T. J.; Ramachandran, R.

    2016-12-01

    More and more Earth data analytics software products are published onto the Internet as a service, in the format of either a heavyweight WSDL service or a lightweight RESTful API. Such reusable data analytics services form a data service network, which allows Earth scientists to compose (mash up) services into value-added ones. Therefore, it is important to have a technique capable of helping Earth scientists quickly identify appropriate candidate datasets and services in the global data service network. Most existing service discovery techniques, however, mainly rely on syntax- or semantics-based service matchmaking between service requests and available services. Since the scale of the data service network is increasing rapidly, the run-time computational cost will soon become a bottleneck. To address this issue, this project presents a way of applying a network routing mechanism to facilitate data service discovery in a service network, featuring scalability and performance. Earth data services are automatically annotated in Web Ontology Language for Services (OWL-S) based on their metadata, semantic information, and usage history. A Deterministic Annealing (DA) technique is applied to dynamically organize annotated data services into a hierarchical network, where virtual routers are created to represent semantic local networks featuring leading terms. Bloom filters are then generated over the virtual routers. A data service search request is transformed into a network routing problem in order to quickly locate candidate services through the network hierarchy. A neural network-powered technique is applied to assure network address encoding and routing performance. A series of empirical studies has been conducted to evaluate the applicability and effectiveness of the proposed approach.
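
    The Bloom-filter side of the design is easy to sketch: each virtual router keeps a filter over the leading terms of its semantic sub-network, and a query only descends into routers whose filters report a possible match. The bit-array size, hash count and example terms below are illustrative; the OWL-S annotation, deterministic annealing and neural-network address encoding are not shown.

    ```python
    import hashlib

    class BloomFilter:
        """Minimal Bloom filter: no false negatives, tunable false-positive rate."""
        def __init__(self, n_bits=1 << 16, n_hashes=4):
            self.n_bits, self.n_hashes = n_bits, n_hashes
            self.bits = bytearray(n_bits // 8)

        def _positions(self, term):
            for i in range(self.n_hashes):
                digest = hashlib.sha256(f"{i}:{term}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.n_bits

        def add(self, term):
            for p in self._positions(term):
                self.bits[p // 8] |= 1 << (p % 8)

        def might_contain(self, term):
            return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(term))

    router_filter = BloomFilter()
    for term in ("precipitation", "aerosol", "MODIS"):    # leading terms of one router
        router_filter.add(term)
    assert router_filter.might_contain("aerosol")         # true hits are never missed
    ```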

  11. geoKepler Workflow Module for Computationally Scalable and Reproducible Geoprocessing and Modeling

    Science.gov (United States)

    Cowart, C.; Block, J.; Crawl, D.; Graham, J.; Gupta, A.; Nguyen, M.; de Callafon, R.; Smarr, L.; Altintas, I.

    2015-12-01

    The NSF-funded WIFIRE project has developed an open-source, online geospatial workflow platform for unifying geoprocessing tools and models for fire and other geospatially dependent modeling applications. It is a product of WIFIRE's objective to build an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. geoKepler includes a set of reusable GIS components, or actors, for the Kepler Scientific Workflow System (https://kepler-project.org). Actors exist for reading and writing GIS data in formats such as Shapefile, GeoJSON and KML, and for using OGC web services such as WFS. The actors also allow for calling geoprocessing tools in other packages such as GDAL and GRASS. Kepler integrates functions from multiple platforms and file formats into one framework, thus enabling optimal GIS interoperability, model coupling, and scalability. Products of the GIS actors can be fed directly to models such as FARSITE and WRF. Kepler's ability to schedule and scale processes using Hadoop and Spark also makes geoprocessing ultimately extensible and computationally scalable. The reusable workflows in geoKepler can be made to run automatically when alerted by real-time environmental conditions. Here, we show breakthroughs in the speed of creating complex data for hazard assessments with this platform. We also demonstrate geoKepler workflows that use data assimilation to ingest real-time weather data into wildfire simulations, and data mining techniques to gain insight into environmental conditions affecting fire behavior. Existing machine learning tools and libraries such as R and MLlib are being leveraged for this purpose in Kepler, as is Kepler's Distributed Data Parallel (DDP) capability to provide a framework for scalable processing. geoKepler workflows can be executed via an iPython notebook as part of a Jupyter hub at UC San Diego for sharing and reporting of the scientific analysis and results from

  12. NYU3T: teaching, technology, teamwork: a model for interprofessional education scalability and sustainability.

    Science.gov (United States)

    Djukic, Maja; Fulmer, Terry; Adams, Jennifer G; Lee, Sabrina; Triola, Marc M

    2012-09-01

    Interprofessional education is a critical precursor to effective teamwork and the collaboration of health care professionals in clinical settings. Numerous barriers have been identified that preclude scalable and sustainable interprofessional education (IPE) efforts. This article describes NYU3T: Teaching, Technology, Teamwork, a model that uses novel technologies such as Web-based learning, virtual patients, and high-fidelity simulation to overcome some of the common barriers and drive implementation of evidence-based teamwork curricula. It outlines the program's curricular components, implementation strategy, evaluation methods, and lessons learned from the first year of delivery and describes implications for future large-scale IPE initiatives.

  13. Generative model selection using a scalable and size-independent complex network classifier.

    Science.gov (United States)

    Motallebi, Sadegh; Aliakbary, Sadegh; Habibi, Jafar

    2013-12-01

    Real networks exhibit nontrivial topological features, such as heavy-tailed degree distribution, high clustering, and small-worldness. Researchers have developed several generative models for synthesizing artificial networks that are structurally similar to real networks. An important research problem is to identify the generative model that best fits a target network. In this paper, we investigate this problem and our goal is to select the model that is able to generate graphs similar to a given network instance. By means of generating synthetic networks with seven outstanding generative models, we have utilized machine learning methods to develop a decision tree for model selection. Our proposed method, which is named "Generative Model Selection for Complex Networks," outperforms existing methods with respect to accuracy, scalability, and size-independence.
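
    The classification step can be sketched with off-the-shelf tools: generate labelled synthetic graphs, compute a few size-independent structural features, and fit a decision tree. Only three generators and four features are used here for brevity; the paper works with seven generative models and its own feature set.

    ```python
    import networkx as nx
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def graph_features(g):
        """Size-independent structural features of a graph."""
        degrees = np.array([d for _, d in g.degree()], dtype=float)
        return [nx.average_clustering(g),
                nx.degree_assortativity_coefficient(g),
                degrees.std() / (degrees.mean() + 1e-9),
                nx.transitivity(g)]

    X, y = [], []
    rng = np.random.default_rng(0)
    for _ in range(30):
        n = int(rng.integers(200, 400))
        for label, g in (("ER", nx.gnp_random_graph(n, 4.0 / n)),
                         ("BA", nx.barabasi_albert_graph(n, 2)),
                         ("WS", nx.watts_strogatz_graph(n, 4, 0.1))):
            X.append(graph_features(g))
            y.append(label)

    clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
    print(clf.predict([graph_features(nx.barabasi_albert_graph(300, 2))]))  # expect ['BA']
    ```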

  14. Generative model selection using a scalable and size-independent complex network classifier

    Energy Technology Data Exchange (ETDEWEB)

    Motallebi, Sadegh, E-mail: motallebi@ce.sharif.edu; Aliakbary, Sadegh, E-mail: aliakbary@ce.sharif.edu; Habibi, Jafar, E-mail: jhabibi@sharif.edu [Department of Computer Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of)

    2013-12-15

    Real networks exhibit nontrivial topological features, such as heavy-tailed degree distribution, high clustering, and small-worldness. Researchers have developed several generative models for synthesizing artificial networks that are structurally similar to real networks. An important research problem is to identify the generative model that best fits a target network. In this paper, we investigate this problem and our goal is to select the model that is able to generate graphs similar to a given network instance. By means of generating synthetic networks with seven outstanding generative models, we have utilized machine learning methods to develop a decision tree for model selection. Our proposed method, which is named “Generative Model Selection for Complex Networks,” outperforms existing methods with respect to accuracy, scalability, and size-independence.

  15. Scalable Text and Link Analysis with Mixed-Topic Link Models

    CERN Document Server

    Zhu, Yaojia; Getoor, Lise; Moore, Cristopher

    2013-01-01

    Many data sets contain rich information about objects, as well as pairwise relations between them. For instance, in networks of websites, scientific papers, and other documents, each node has content consisting of a collection of words, as well as hyperlinks or citations to other nodes. In order to perform inference on such data sets, and make predictions and recommendations, it is useful to have models that are able to capture the processes which generate the text at each node and the links between them. In this paper, we combine classic ideas in topic modeling with a variant of the mixed-membership block model recently developed in the statistical physics community. The resulting model has the advantage that its parameters, including the mixture of topics of each document and the resulting overlapping communities, can be inferred with a simple and scalable expectation-maximization algorithm. We test our model on three data sets, performing unsupervised topic classification and link prediction. For both task...

  16. Disease prediction based on functional connectomes using a scalable and spatially-informed support vector machine.

    Science.gov (United States)

    Watanabe, Takanori; Kessler, Daniel; Scott, Clayton; Angstadt, Michael; Sripada, Chandra

    2014-08-01

    Substantial evidence indicates that major psychiatric disorders are associated with distributed neural dysconnectivity, leading to a strong interest in using neuroimaging methods to accurately predict disorder status. In this work, we are specifically interested in a multivariate approach that uses features derived from whole-brain resting state functional connectomes. However, functional connectomes reside in a high dimensional space, which complicates model interpretation and introduces numerous statistical and computational challenges. Traditional feature selection techniques are used to reduce data dimensionality, but are blind to the spatial structure of the connectomes. We propose a regularization framework where the 6-D structure of the functional connectome (defined by pairs of points in 3-D space) is explicitly taken into account via the fused Lasso or the GraphNet regularizer. Our method only restricts the loss function to be convex and margin-based, allowing non-differentiable loss functions such as the hinge-loss to be used. Using the fused Lasso or GraphNet regularizer with the hinge-loss leads to a structured sparse support vector machine (SVM) with embedded feature selection. We introduce a novel efficient optimization algorithm based on the augmented Lagrangian and the classical alternating direction method, which can solve both fused Lasso and GraphNet regularized SVM with very little modification. We also demonstrate that the inner subproblems of the algorithm can be solved efficiently in analytic form by coupling the variable splitting strategy with a data augmentation scheme. Experiments on simulated data and resting state scans from a large schizophrenia dataset show that our proposed approach can identify predictive regions that are spatially contiguous in the 6-D "connectome space," offering an additional layer of interpretability that could provide new insights about various disease processes.
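
    A compact way to see what is being optimized is to write down the regularized objective: the average hinge loss plus an l1 sparsity term and a fusion term penalizing differences between neighboring weights. The sketch below uses a simple 1-D chain of features in place of the paper's 6-D connectome neighborhood, and omits the ADMM/variable-splitting solver entirely.

    ```python
    import numpy as np

    def structured_svm_objective(w, X, y, lam_sparse, lam_fuse):
        """Hinge loss + fused-lasso penalty over a chain ordering of the features.

        X: (n_samples, n_features), y: labels in {-1, +1}, w: weight vector.
        In the paper the fusion term runs over spatial neighbors in connectome space."""
        hinge = np.maximum(0.0, 1.0 - y * (X @ w)).mean()
        sparsity = lam_sparse * np.abs(w).sum()
        fusion = lam_fuse * np.abs(np.diff(w)).sum()
        return hinge + sparsity + fusion
    ```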

  17. Standards for scalable clinical decision support: need, current and emerging standards, gaps, and proposal for progress.

    Science.gov (United States)

    Kawamoto, Kensaku; Del Fiol, Guilherme; Lobach, David F; Jenders, Robert A

    2010-01-01

    Despite their potential to significantly improve health care, advanced clinical decision support (CDS) capabilities are not widely available in the clinical setting. An important reason for this limited availability of CDS capabilities is the application-specific and institution-specific nature of most current CDS implementations. Thus, a critical need for enabling CDS capabilities on a much larger scale is the development and adoption of standards that enable current and emerging CDS resources to be more effectively leveraged across multiple applications and care settings. Standards required for such effective scaling of CDS include (i) standard terminologies and information models to represent and communicate about health care data; (ii) standard approaches to representing clinical knowledge in both human-readable and machine-executable formats; and (iii) standard approaches for leveraging these knowledge resources to provide CDS capabilities across various applications and care settings. A number of standards do exist or are under development to meet these needs. However, many gaps and challenges remain, including the excessive complexity of many standards; the limited availability of easily accessible knowledge resources implemented using standard approaches; and the lack of tooling and other practical resources to enable the efficient adoption of existing standards. Thus, the future development and widespread adoption of current CDS standards will depend critically on the availability of tooling, knowledge bases, and other resources that make the adoption of CDS standards not only the right approach to take, but the cost-effective path to follow given the alternative of using a traditional, ad hoc approach to implementing CDS.

  18. LoRa Scalability: A Simulation Model Based on Interference Measurements

    Directory of Open Access Journals (Sweden)

    Jetmir Haxhibeqiri

    2017-05-01

    Full Text Available LoRa is a long-range, low power, low bit rate and single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The number of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability, in terms of the number of end devices per gateway, of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single-gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, losses will be up to 32%. In such a case, pure Aloha would have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data.
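
    A back-of-the-envelope version of such a simulation is sketched below: packets collide when their airtimes overlap, and a crude capture probability stands in for the measured effect that one of two concurrent packets can still be received. The airtime, traffic rate and capture probability are placeholders, not the calibrated values behind the 32% and 90% figures above.

    ```python
    import numpy as np

    def packet_loss(n_nodes, airtime_s=0.07, pkts_per_hour=6, capture_prob=0.0, seed=0):
        """Monte-Carlo packet-loss estimate for a single gateway over one hour."""
        rng = np.random.default_rng(seed)
        starts = rng.uniform(0.0, 3600.0, size=n_nodes * pkts_per_hour)
        lost = np.zeros(starts.size, dtype=bool)
        for i in range(starts.size):
            overlaps = np.abs(starts - starts[i]) < airtime_s   # pure-Aloha vulnerability window
            overlaps[i] = False
            if overlaps.any() and rng.random() > capture_prob:
                lost[i] = True
        return lost.mean()

    for n in (100, 500, 1000):
        print(n, round(packet_loss(n), 3), round(packet_loss(n, capture_prob=0.5), 3))
    ```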

  19. Scalability of the muscular action in a parametric 3D model of the index finger.

    Science.gov (United States)

    Sancho-Bru, Joaquín L; Vergara, Margarita; Rodríguez-Cervantes, Pablo-Jesús; Giurintano, David J; Pérez-González, Antonio

    2008-01-01

    A method for scaling the muscle action is proposed and used to achieve a 3D inverse dynamic model of the human finger with all its components scalable. This method is based on scaling the physiological cross-sectional area (PCSA) in a Hill muscle model. Different anthropometric parameters and maximal grip force data have been measured, and their correlations have been analyzed and used for scaling the PCSA of each muscle. A linear relationship between the normalized PCSA and the product of the length and breadth of the hand has finally been used for scaling, with a slope of 0.01315 cm^-2, with the length and breadth of the hand expressed in centimeters. The parametric muscle model has been included in a parametric finger model previously developed by the authors, and it has been validated by reproducing the results of an experiment in which subjects from different population groups exerted maximal voluntary forces with their index finger in a controlled posture.
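
    Read literally, the reported relation gives a dimensionless scale factor of 0.01315 cm^-2 times the hand length-breadth product, by which a muscle's normalized PCSA is multiplied; the exact normalization convention is not stated in the abstract, so the example below only evaluates the factor itself.

    ```python
    def pcsa_scale_factor(hand_length_cm, hand_breadth_cm, slope_per_cm2=0.01315):
        """Scale factor from the linear PCSA relation reported in the abstract."""
        return slope_per_cm2 * hand_length_cm * hand_breadth_cm

    # For an 18 cm long, 8 cm broad hand: 0.01315 * 18 * 8 ~= 1.89
    print(pcsa_scale_factor(18.0, 8.0))
    ```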

  20. Scalable Content Management System

    Directory of Open Access Journals (Sweden)

    Sandeep Krishna S, Jayant Dani

    2013-10-01

    Full Text Available Immense growth in the volume of content every day demands a more scalable system to handle and overcome difficulties in the capture, storage, transformation, search, sharing and visualization of data, where the data can be structured or unstructured data of any type. A system to manage the growing content and overcome the issues and complexity faced, using appropriate technologies, would provide advantages in measurable qualities like flexibility, interoperability, customizability, security, auditability, quality, community support, options and cost of licensing. So architecting a Content Management System in terms of enterprise needs, and a scalable solution to manage the huge data growth, necessitates a Scalable Content Management System.

  1. A Hybrid EAV-Relational Model for Consistent and Scalable Capture of Clinical Research Data.

    Science.gov (United States)

    Khan, Omar; Lim Choi Keung, Sarah N; Zhao, Lei; Arvanitis, Theodoros N

    2014-01-01

    Many clinical research databases are built for specific purposes and their design is often guided by the requirements of their particular setting. Not only does this lead to issues of interoperability and reusability between research groups in the wider community but, within the project itself, changes and additions to the system could be implemented using an ad hoc approach, which may make the system difficult to maintain and even more difficult to share. In this paper, we outline a hybrid Entity-Attribute-Value and relational model approach for modelling data, in light of frequently changing requirements, which enables the back-end database schema to remain static, improving the extensibility and scalability of an application. The model also facilitates data reuse. The methods used build on the modular architecture previously introduced in the CURe project.

  2. Key Considerations of Community, Scalability, Supportability, Security, and Functionality in Selecting Open-Source Software in California Universities as Perceived by Technology Leaders

    Science.gov (United States)

    Britton, Todd Alan

    2014-01-01

    Purpose: The purpose of this study was to examine the key considerations of community, scalability, supportability, security, and functionality for selecting open-source software in California universities as perceived by technology leaders. Methods: After a review of the cogent literature, the key conceptual framework categories were identified…

  3. Key Considerations of Community, Scalability, Supportability, Security, and Functionality in Selecting Open-Source Software in California Universities as Perceived by Technology Leaders

    Science.gov (United States)

    Britton, Todd Alan

    2014-01-01

    Purpose: The purpose of this study was to examine the key considerations of community, scalability, supportability, security, and functionality for selecting open-source software in California universities as perceived by technology leaders. Methods: After a review of the cogent literature, the key conceptual framework categories were identified…

  4. Scalability of Several Asynchronous Many-Task Models for In Situ Statistical Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Kolla, Hemanth [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Borghesi, Giulio [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-05-01

    This report is a sequel to [PB16], in which we provided a first progress report on research and development towards a scalable, asynchronous many-task, in situ statistical analysis engine using the Legion runtime system. This earlier work included a prototype implementation of a proposed solution, using a proxy mini-application as a surrogate for a full-scale scientific simulation code. The first scalability studies were conducted with the above on modestly-sized experimental clusters. In contrast, in the current work we have integrated our in situ analysis engines with a full-size scientific application (S3D, using the Legion-SPMD model), and have conducted numerical tests on the largest computational platform currently available for DOE science applications. We also provide details regarding the design and development of a light-weight asynchronous collectives library. We describe how this library is utilized within our SPMD-Legion S3D workflow, and compare the data aggregation technique deployed herein to the approach taken within our previous work.

  5. PlanetDR, a scalable architecture for federated repositories supporting IMS Learning Design

    NARCIS (Netherlands)

    Blat, Josep; Griffiths, David; Navarrete, Toni; Santos, José Luis; García, Pedro; Pujol, Jordi

    2006-01-01

    This paper discusses PlanetDR, whose architecture supports very large federated educational digital repositories. It is based on the implementation of current open specifications for interoperability (such as IEEE Learning Object Metadata and IMS Digital Repositories Interoperability, in its Edusour

  6. Scalable Entity-Based Modeling of Population-Based Systems, Final LDRD Report

    Energy Technology Data Exchange (ETDEWEB)

    Cleary, A J; Smith, S G; Vassilevska, T K; Jefferson, D R

    2005-01-27

    The goal of this project has been to develop tools, capabilities and expertise in the modeling of complex population-based systems via scalable entity-based modeling (EBM). Our initial focal application domain has been the dynamics of large populations exposed to disease-causing agents, a topic of interest to the Department of Homeland Security in the context of bioterrorism. In the academic community, discrete simulation technology based on individual entities has shown initial success, but the technology has not been scaled to the problem sizes or computational resources of LLNL. Our developmental emphasis has been on the extension of this technology to parallel computers and maturation of the technology from an academic to a lab setting.

  7. Modelling and stability analysis of emergent behavior of scalable swarm system

    Institute of Scientific and Technical Information of China (English)

    CHEN Shi-ming; FANG Hua-jing

    2006-01-01

    In this paper we propose a two-layer emergent model for a scalable swarm system. The first layer describes the individual flocking behavior toward the local goal position (the center of the minimal circumcircle determined by the neighbors in the positive visual set of an individual), resulting from the individual's motion toward the one or two farthest neighbors in its positive visual set; the second layer describes the emergent aggregating swarm behavior resulting from the individual's motion toward its local goal position. The scale of the swarm is not limited because only local individual information is used for modelling in the two-layer topology. We study the stability properties of the swarm's emergent behavior based on Lyapunov stability theory. Simulations showed that the swarm system can converge to goal regions while maintaining cohesiveness.

  8. Prototyping scalable digital signal processing systems for radio astronomy using dataflow models

    CERN Document Server

    Sane, Nimish; Harris, Andrew I; Bhattacharyya, Shuvra S

    2012-01-01

    There is a growing trend toward using high-level tools for design and implementation of radio astronomy digital signal processing (DSP) systems. Such tools, for example, those from the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER), are usually platform-specific, and lack high-level, platform-independent, portable, scalable application specifications. This limits the designer's ability to experiment with designs at a high level of abstraction and early in the development cycle. We address some of these issues using a model-based design approach employing dataflow models. We demonstrate this approach by applying it to the design of a tunable digital downconverter (TDD) used for narrow-bandwidth spectroscopy. Our design is targeted toward an FPGA platform, called the Interconnect Break-out Board (IBOB), that is available from CASPER. We use the term TDD to refer to a digital downconverter for which the decimation factor and center frequency can be reconfigured without the nee...

  9. Performance and scalability of finite-difference and finite-element wave-propagation modeling on Intel's Xeon Phi

    NARCIS (Netherlands)

    Zhebel, E.; Minisini, S.; Kononov, A.; Mulder, W.A.

    2013-01-01

    With the rapid developments in parallel compute architectures, algorithms for seismic modeling and imaging need to be reconsidered in terms of parallelization. The aim of this paper is to compare scalability of seismic modeling algorithms: finite differences, continuous mass-lumped finite elements

  10. Performance and scalability of finite-difference and finite-element wave-propagation modeling on Intel's Xeon Phi

    NARCIS (Netherlands)

    Zhebel, E.; Minisini, S.; Kononov, A.; Mulder, W.A.

    2013-01-01

    With the rapid developments in parallel compute architectures, algorithms for seismic modeling and imaging need to be reconsidered in terms of parallelization. The aim of this paper is to compare scalability of seismic modeling algorithms: finite differences, continuous mass-lumped finite elements a

  11. A scalable architecture for incremental specification and maintenance of procedural and declarative clinical decision-support knowledge.

    Science.gov (United States)

    Hatsek, Avner; Shahar, Yuval; Taieb-Maimon, Meirav; Shalom, Erez; Klimov, Denis; Lunenfeld, Eitan

    2010-01-01

    Clinical guidelines have been shown to improve the quality of medical care and to reduce its costs. However, most guidelines exist in a free-text representation and, without automation, are not sufficiently accessible to clinicians at the point of care. A prerequisite for automated guideline application is a machine-comprehensible representation of the guidelines. In this study, we designed and implemented a scalable architecture to support medical experts and knowledge engineers in specifying and maintaining the procedural and declarative aspects of clinical guideline knowledge, resulting in a machine-comprehensible representation. The new framework significantly extends our previous work on the Digital electronic Guidelines Library (DeGeL). The current study designed and implemented Gesher, a graphical framework for specification of declarative and procedural clinical knowledge. We performed three different experiments to evaluate the functionality and usability of the major aspects of the new framework: specification of procedural clinical knowledge, specification of declarative clinical knowledge, and exploration of a given clinical guideline. The subjects included clinicians and knowledge engineers (overall, 27 participants). The evaluations indicated high levels of completeness and correctness of the guideline specification process by both the clinicians and the knowledge engineers, although the best results, in the case of declarative-knowledge specification, were achieved by teams including a clinician and a knowledge engineer. The usability scores were high as well, although the clinicians' assessment was significantly lower than the assessment of the knowledge engineers.

  12. Scalable Base-Station Model-Based Multicast in Wireless Sensor Networks

    Institute of Scientific and Technical Information of China (English)

    Shao-Liang Peng; Shan-Shan Li; Lei Chen; Yu-Xing Peng; Nong Xiao

    2008-01-01

    Multicast is essential for wireless sensor network (WSN) applications. Existing multicast protocols in WSNs are often designed in a P2P pattern, assuming a small number of destination nodes and frequent changes in network topologies. In order to truly adopt multicast in WSNs, we propose a base-station model-based multicast, SenCast, to meet the general requirements of applications. SenCast is scalable and energy-efficient for large group communications in WSNs. Theoretical analysis shows that SenCast is able to approximate the Minimum Nonleaf Nodes (MNN) problem to a ratio of ln |R| (R is the set of all destinations), the best known lower bound. We evaluate our design through comprehensive simulations and prototype implementations on Mica2 motes. Experimental results demonstrate that SenCast outperforms previous multicast protocols, including the most recent work uCast.

  13. Working towards a scalable model of problem-based learning instruction in undergraduate engineering education

    Science.gov (United States)

    Mantri, Archana

    2014-05-01

    The intent of the study presented in this paper is to show that the model of problem-based learning (PBL) can be made scalable by designing curriculum around a set of open-ended problems (OEPs). The detailed statistical analysis of the data collected to measure the effects of traditional and PBL instruction for three courses in Electronics and Communication Engineering, namely Analog Electronics, Digital Electronics and Pulse, Digital & Switching Circuits, is presented here. It measures the effects of pedagogy, gender and cognitive styles on the knowledge, skill and attitude of the students. The study was conducted twice, with content designed around the same set of OEPs but with two different trained facilitators for all three courses. The repeatability of results for the effects of the independent parameters on the dependent parameters is studied and inferences are drawn.

  14. File format for storage of scalable video

    Institute of Scientific and Technical Information of China (English)

    BAI Gang; SUN Xiao-yan; WU Feng; YIN Bao-cai; LI Shi-peng

    2006-01-01

    A file format for storage of scalable video is proposed in this paper. A generic model is presented to enable a codec-independent description of a scalable video stream. The relationships, especially the dependencies, among sub-streams in a scalable video stream are specified sufficiently and effectively in the proposed model. Complying with the presented scalable video stream model, the file format for scalable video is proposed based on the ISO Base Media File Format, and is simple and flexible enough to address the demands of scalable video applications as well as non-scalable ones.

  15. Long-Term Impact of an Electronic Health Record-Enabled, Team-Based, and Scalable Population Health Strategy Based on the Chronic Care Model

    Science.gov (United States)

    Kawamoto, Kensaku; Anstrom, Kevin J; Anderson, John B; Bosworth, Hayden B; Lobach, David F; McAdam-Marx, Carrie; Ferranti, Jeffrey M; Shang, Howard; Yarnall, Kimberly S H

    2016-01-01

    The Chronic Care Model (CCM) is a promising framework for improving population health, but little is known regarding the long-term impact of scalable, informatics-enabled interventions based on this model. To address this challenge, this study evaluated the long-term impact of implementing a scalable, electronic health record (EHR)-enabled, and CCM-based population health program to replace a labor-intensive legacy program in 18 primary care practices. Interventions included point-of-care decision support, quality reporting, team-based care, patient engagement, and provider education. Among 6,768 patients with diabetes receiving care over 4 years, hemoglobin A1c levels remained stable during the 2-year pre-intervention and post-intervention periods (0.03% and 0% increases, respectively), compared to a 0.42% increase expected based on A1c progression observed in the United Kingdom Prospective Diabetes Study long-term outcomes cohort. The results indicate that an EHR-enabled, team-based, and scalable population health strategy based on the CCM may be effective and efficient for managing population health.

  16. Scalable devices

    KAUST Repository

    Krüger, Jens J.

    2014-01-01

    In computer science in general, and in particular in the field of high performance computing and supercomputing, the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales with the size of the problem, i.e., it can not only be used in a very specific setting but is applicable to a wide range of problems, from small scenarios to possibly very large settings. In this spirit, there exist a number of established areas of research on scalability: there are works on scalable algorithms and scalable architectures, but what are scalable devices? In the context of this chapter, we are interested in a whole range of display devices, from small-scale hardware such as tablet computers, pads and smart-phones up to large tiled display walls. What interests us most is not so much the hardware setup but the visualization algorithms behind these display systems, which scale from the average smart phone up to the largest gigapixel display walls.

  17. Improvements in the Scalability of the NASA Goddard Multiscale Modeling Framework for Hurricane Climate Studies

    Science.gov (United States)

    Shen, Bo-Wen; Tao, Wei-Kuo; Chern, Jiun-Dar

    2007-01-01

    Improving our understanding of hurricane inter-annual variability and the impact of climate change (e.g., doubling CO2 and/or global warming) on hurricanes brings both scientific and computational challenges to researchers. As hurricane dynamics involves multiscale interactions among synoptic-scale flows, mesoscale vortices, and small-scale cloud motions, an ideal numerical model suitable for hurricane studies should demonstrate its capabilities in simulating these interactions. The newly-developed multiscale modeling framework (MMF, Tao et al., 2007) and the substantial computing power by the NASA Columbia supercomputer show promise in pursuing the related studies, as the MMF inherits the advantages of two NASA state-of-the-art modeling components: the GEOS4/fvGCM and 2D GCEs. This article focuses on the computational issues and proposes a revised methodology to improve the MMF's performance and scalability. It is shown that this prototype implementation enables 12-fold performance improvements with 364 CPUs, thereby making it more feasible to study hurricane climate.

  18. Approaches for scalable modeling and emulation of cyber systems : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.; Rudish, Don W.

    2009-09-01

    The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of ~10^6 computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems or enhancing existing systems with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with ~10^6 virtualized operating system instances on a high-performance computing cluster - a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers.

  19. Scalable rule-based modelling of allosteric proteins and biochemical networks.

    Directory of Open Access Journals (Sweden)

    Julien F Ollivier

    Full Text Available Much of the complexity of biochemical networks comes from the information-processing abilities of allosteric proteins, be they receptors, ion-channels, signalling molecules or transcription factors. An allosteric protein can be uniquely regulated by each combination of input molecules that it binds. This "regulatory complexity" causes a combinatorial increase in the number of parameters required to fit experimental data as the number of protein interactions increases. It therefore challenges the creation, updating, and re-use of biochemical models. Here, we propose a rule-based modelling framework that exploits the intrinsic modularity of protein structure to address regulatory complexity. Rather than treating proteins as "black boxes", we model their hierarchical structure and, as conformational changes, internal dynamics. By modelling the regulation of allosteric proteins through these conformational changes, we often decrease the number of parameters required to fit data, and so reduce over-fitting and improve the predictive power of a model. Our method is thermodynamically grounded, imposes detailed balance, and also includes molecular cross-talk and the background activity of enzymes. We use our Allosteric Network Compiler to examine how allostery can facilitate macromolecular assembly and how competitive ligands can change the observed cooperativity of an allosteric protein. We also develop a parsimonious model of G protein-coupled receptors that explains functional selectivity and can predict the rank order of potency of agonists acting through a receptor. Our methodology should provide a basis for scalable, modular and executable modelling of biochemical networks in systems and synthetic biology.

  20. A Scalable and Extensible Earth System Model for Climate Change Science

    Energy Technology Data Exchange (ETDEWEB)

    Gent, Peter; Lamarque, Jean-Francois; Conley, Andrew; Vertenstein, Mariana; Craig, Anthony

    2013-02-13

    The objective of this award was to build a scalable and extensible Earth System Model that can be used to study climate change science. That objective has been achieved with the public release of the Community Earth System Model, version 1 (CESM1). In particular, the development of the CESM1 atmospheric chemistry component was substantially funded by this award, as was the development of the significantly improved coupler component. The CESM1 allows new climate change science in areas such as future air quality in very large cities, the effects of recovery of the southern hemisphere ozone hole, and effects of runoff from ice melt in the Greenland and Antarctic ice sheets. Results from a whole series of future climate projections using the CESM1 are also freely available via the web from the CMIP5 archive at the Lawrence Livermore National Laboratory. Many research papers using these results have now been published, and will form part of the 5th Assessment Report of the United Nations Intergovernmental Panel on Climate Change, which is to be published late in 2013.

  1. Scalability Test of multiscale fluid-platelet model for three top supercomputers

    Science.gov (United States)

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-07-01

    We have tested the scalability of three supercomputers: the Tianhe-2, Stampede and CS-Storm with multiscale fluid-platelet simulations, in which a highly-resolved and efficient numerical model for nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S: 680,718-particle single-platelet; Exp-M: 2,722,872-particle 4-platelet; and Exp-L: 10,891,488-particle 16-platelet. Our implementations of multiple time-stepping (MTS) algorithm improved the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved the following simulation rates: 12.5, 25.0, 35.5 μs/day for Exp-S and 9.09, 6.25, 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20. The best rate for Exp-L was 6.25 μs/day for Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach performing complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of testing the performance characteristics of supercomputers with advanced computational algorithms that offer optimal trade-off to achieve enhanced computational performance serves to demonstrate that such simulations are feasible with currently available HPC resources.

  2. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics

    Science.gov (United States)

    Wilson, B. D.; Palamuttam, R. S.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; Verma, R.; Waliser, D. E.; Lee, H.

    2015-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing, since each stage requires writing and reading data to and from disk. We are developing a lightning-fast Big Data technology called SciSpark based on Apache Spark under a NASA AIST grant (PI Mattmann). Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache Hadoop by 100x in memory and by 10x on disk. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 10 to 1000 compute nodes. This 2nd-generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. We have implemented a parallel data ingest capability in which the user specifies desired variables (arrays) as several time-sorted lists of URLs (i.e. using OPeNDAP model.nc?varname, or local files). The specified variables are partitioned by time/space and then each Spark node pulls its bundle of arrays into memory to begin a computation pipeline. We also investigated the performance of several N-dim. array libraries (scala breeze, java jblas & netlib-java, and ND4J). We are currently developing science codes using ND4J and studying memory behavior on the JVM. On the pyspark side, many of our science codes already use the numpy and SciPy ecosystems. The talk will cover: the architecture of SciSpark, the design of the scientific RDD (sRDD) data structure, our
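
    The ingest pattern described above (time-sorted variable URLs, one in-memory bundle per node) can be approximated in plain PySpark as below; the URL pattern, variable name and per-granule metric are placeholders, and SciSpark's actual sRDD adds its own time/space partitioning and loaders on top of this.

    ```python
    import numpy as np
    from netCDF4 import Dataset            # also opens OPeNDAP URLs
    from pyspark import SparkContext

    sc = SparkContext(appName="sciSparkIngestSketch")

    urls = ["http://example.org/opendap/model_2015_%02d.nc" % m for m in range(1, 13)]
    variable = "tas"                        # placeholder variable name

    def load(url):
        """Executor-side load of one granule's array into memory."""
        with Dataset(url) as nc:
            return url, np.asarray(nc.variables[variable][:])

    # One partition per granule so each node pulls and keeps its own bundle of arrays.
    rdd = sc.parallelize(urls, numSlices=len(urls)).map(load)

    # A trivial downstream metric computed in parallel: per-granule spatial mean.
    print(rdd.mapValues(lambda arr: float(arr.mean())).collect())
    ```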

  3. Modeling, Fabrication and Characterization of Scalable Electroless Gold Plated Nanostructures for Enhanced Surface Plasmon Resonance

    Science.gov (United States)

    Jang, Gyoung Gug

    The scientific and industrial demand for controllable thin gold (Au) film and Au nanostructures is increasing in many fields including opto-electronics, photovoltaics, MEMS devices, diagnostics, bio-molecular sensors, and spectro-/microscopic surfaces and probes. In this study, a novel continuous flow electroless (CF-EL) Au plating method is developed to fabricate uniform Au thin films under ambient conditions. The enhanced local mass transfer rate and continuous deposition resulting from CF-EL plating improved the physical uniformity of deposited Au films and thermally transformed nanoparticles (NPs). Au films and NPs exhibited improved optical photoluminescence (PL) and surface plasmon resonance (SPR), respectively, relative to batch immersion EL (BI-EL) plating. Suggested mass transfer models of Au mole deposition are consistent with the optical features of CF-EL and BI-EL films. The prototype CF-EL plating system is upgraded to an automated scalable CF-EL plating system with real-time transmission UV-vis (T-UV) spectroscopy, which provides the advantages of CF-EL plating, such as more uniform surface morphology, and overcomes the disadvantages of conventional EL plating, such as the lack of a continuous process and low deposition rates, by offering continuous processing and a controllable deposition rate. Throughout this work, dynamic morphological and chemical transitions during redox-driven self-assembly of Ag and Au films on silica surfaces under kinetic and equilibrium conditions are distinguished by correlating real-time T-UV spectroscopy with X-ray photoelectron spectroscopy (XPS) and scanning electron microscopy (SEM) measurements. The characterization suggests that four previously unrecognized time-dependent physicochemical regimes occur during consecutive EL deposition of silver (Ag) and Au onto tin-sensitized silica surfaces: self-limiting Ag activation; transitory Ag NP formation; transitional Au-Ag alloy formation during galvanic replacement of Ag by Au; and uniform morphology formation under

  4. Detailed Modeling, Design, and Evaluation of a Scalable Multi-level Checkpointing System

    Energy Technology Data Exchange (ETDEWEB)

    Moody, A T; Bronevetsky, G; Mohror, K M; de Supinski, B R

    2010-04-09

    High-performance computing (HPC) systems are growing more powerful by utilizing more hardware components. As the system mean-time-before-failure correspondingly drops, applications must checkpoint more frequently to make progress. However, as the system memory sizes grow faster than the bandwidth to the parallel file system, the cost of checkpointing begins to dominate application run times. A potential solution to this problem is to use multi-level checkpointing, which employs multiple types of checkpoints with different costs and different levels of resiliency in a single run. The goal is to design light-weight checkpoints to handle the most common failure modes and rely on more expensive checkpoints for less common, but more severe failures. While this approach is theoretically promising, it has not been fully evaluated in a large-scale, production system context. To this end we have designed a system, called the Scalable Checkpoint/Restart (SCR) library, that writes checkpoints to storage on the compute nodes utilizing RAM, Flash, or disk, in addition to the parallel file system. We present the performance and reliability properties of SCR as well as a probabilistic Markov model that predicts its performance on current and future systems. We show that multi-level checkpointing improves efficiency on existing large-scale systems and that this benefit increases as the system size grows. In particular, we developed low-cost checkpoint schemes that are 100x-1000x faster than the parallel file system and effective against 85% of our system failures. This leads to a gain in machine efficiency of up to 35%, and it reduces the load on the parallel file system by a factor of two on current and future systems.
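
    A rough feel for why multiple levels help can be had from Young's classical approximation applied independently to each level; this back-of-the-envelope estimate is not SCR's Markov model, and the checkpoint costs and failure rates below are illustrative.

    ```python
    import math

    def level_overhead(ckpt_cost_s, mtbf_s):
        """Fraction of time lost to checkpoints plus rework at Young's optimal interval."""
        interval = math.sqrt(2.0 * ckpt_cost_s * mtbf_s)
        return ckpt_cost_s / interval + interval / (2.0 * mtbf_s)

    def multilevel_efficiency(levels):
        """Crude combined efficiency, treating the checkpoint levels as independent."""
        eff = 1.0
        for ckpt_cost_s, mtbf_s in levels:
            eff *= 1.0 - level_overhead(ckpt_cost_s, mtbf_s)
        return eff

    # Cheap node-local checkpoints for frequent failures vs. rare parallel-file-system ones:
    print(multilevel_efficiency([(10.0, 4 * 3600.0), (1000.0, 48 * 3600.0)]))
    ```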

  5. The virtual machine (VM) scaler: an infrastructure manager supporting environmental modeling on IaaS clouds

    Science.gov (United States)

    Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...

  6. Novel Scalable 3-D MT Inverse Solver

    Science.gov (United States)

    Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.

    2016-12-01

    We present a new, robust and fast, three-dimensional (3-D) magnetotelluric (MT) inverse solver. As the forward modelling engine, the highly-scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits an adjoint-sources approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT (single-site and/or inter-site) responses, and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of the available computational resources for a given problem set-up. To parameterize an inverse domain, a mask approach is implemented, which means that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on different platforms ranging from modern laptops to high-performance clusters, demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.

  7. A scalable model for network situational awareness based on Endsley's situation model

    Institute of Scientific and Technical Information of China (English)

    Hu Wei; Li Jianhua; Chen Xiuzhen; Jiang Xinghao; Zuo Min

    2007-01-01

    The paper introduces Endsley's situation model into network security to describe the network security situation, and improves Endsley's data processing to suit network alerts. The proposed model contains information on incident frequency, incident time and incident space. The HoneyNet dataset is selected to evaluate the proposed model. The paper proposes three definitions to depict and simplify the whole situation extraction in detail, and a fusion component to reduce the influence of alert redundancy on the total security situation. The less complex extraction makes the situation analysis more efficient, and the fine-grained model gives the analysis better extensibility. Finally, the situational variation curves are simulated, and the evaluation results show the situation model to be applicable and efficient.

  8. High performance scalable image coding

    Institute of Scientific and Technical Information of China (English)

    Gan Tao; He Yanmin; Zhu Weile

    2007-01-01

    A high performance scalable image coding algorithm is proposed. The salient features of this algorithm are the ways to form and locate the significant clusters. Thanks to the list structure, the new coding algorithm achieves fine fractional bit-plane coding with negligible additional complexity. Experiments show that it performs comparably or better than the state-of-the-art coders. Furthermore, the flexible codec supports both quality and resolution scalability, which is very attractive in many network applications.

  9. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    Science.gov (United States)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

    There is an increasing demand to run environmental models on a big scale: simulations over large areas at high resolution. The heterogeneity of available computing hardware such as multi-core CPUs, GPUs or supercomputers potentially provides significant computing power to fulfil this demand. However, this requires detailed knowledge of the underlying hardware, parallel algorithm design and the implementation thereof in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge; their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs to have built-in capabilities to make full usage of the available hardware. Developing such a framework that provides understandable code for domain scientists while being runtime efficient poses several challenges for its developers. For example, optimisations can be performed on individual operations or on the whole model, and tasks need to be generated for a well-balanced execution without explicitly knowing the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware whichever combination of model building blocks scientists use. We demonstrate our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks and 2) parallelisation of about 50 of these building blocks using

  10. Developing a scalable model of recombinant protein yield from Pichia pastoris: the influence of culture conditions, biomass and induction regime

    Directory of Open Access Journals (Sweden)

    Wilks Martin DB

    2009-07-01

    Full Text Available Abstract Background The optimisation and scale-up of process conditions leading to high yields of recombinant proteins is an enduring bottleneck in the post-genomic sciences. Typical experiments rely on varying selected parameters through repeated rounds of trial-and-error optimisation. To rationalise this, several groups have recently adopted the 'design of experiments' (DoE) approach frequently used in industry. Studies have focused on parameters such as medium composition, nutrient feed rates and induction of expression in shake flasks or bioreactors, as well as oxygen transfer rates in micro-well plates. In this study we wanted to generate a predictive model that described small-scale screens and to test its scalability to bioreactors. Results Here we demonstrate how the use of a DoE approach in a multi-well mini-bioreactor permitted the rapid establishment of high yielding production phase conditions that could be transferred to a 7 L bioreactor. Using green fluorescent protein secreted from Pichia pastoris, we derived a predictive model of protein yield as a function of the three most commonly-varied process parameters: temperature, pH and the percentage of dissolved oxygen in the culture medium. Importantly, when yield was normalised to culture volume and density, the model was scalable from mL to L working volumes. By increasing pre-induction biomass accumulation, model-predicted yields were further improved. Yield improvement was most significant, however, on varying the fed-batch induction regime to minimise methanol accumulation so that the productivity of the culture increased throughout the whole induction period. These findings suggest the importance of matching the rate of protein production with the host metabolism. Conclusion We demonstrate how a rational, stepwise approach to recombinant protein production screens can reduce process development time.
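
    The predictive model described here is, in spirit, a second-order response-surface fit; a minimal sketch of such a fit is shown below, with invented factor settings and yields standing in for the study's actual DoE data.

    ```python
    # Sketch of fitting a second-order response-surface model of yield as a function
    # of temperature, pH and dissolved oxygen (DO); the data values are invented for
    # illustration and are not taken from the study.
    import numpy as np

    X = np.array([[20, 5.0, 30], [20, 6.0, 60], [25, 5.5, 45], [30, 5.0, 60],
                  [30, 6.0, 30], [25, 5.5, 45], [20, 5.5, 45], [30, 5.5, 45],
                  [25, 5.0, 30], [25, 6.0, 60], [20, 5.0, 60], [30, 6.0, 60]], float)
    y = np.array([1.2, 1.8, 2.6, 2.1, 1.5, 2.5, 1.9, 2.0, 1.7, 2.3, 1.6, 1.9])

    def design_matrix(X):
        t, p, o = X.T
        # intercept, linear, two-way interaction and quadratic terms
        return np.column_stack([np.ones(len(X)), t, p, o,
                                t * p, t * o, p * o, t ** 2, p ** 2, o ** 2])

    coef, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

    # predicted yield at an untested condition (27 degC, pH 5.6, 50% DO)
    print(design_matrix(np.array([[27.0, 5.6, 50.0]])) @ coef)
    ```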

  11. Collaboratively Architecting a Scalable and Adaptable Petascale Infrastructure to Support Transdisciplinary Scientific Research for the Australian Earth and Environmental Sciences

    Science.gov (United States)

    Wyborn, L. A.; Evans, B. J. K.; Pugh, T.; Lescinsky, D. T.; Foster, C.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) at the Australian National University (ANU) is a partnership between CSIRO, ANU, Bureau of Meteorology (BoM) and Geoscience Australia. Recent investments in a 1.2 PFlop Supercomputer (Raijin), ~ 20 PB data storage using Lustre filesystems and a 3000 core high performance cloud have created a hybrid platform for higher performance computing and data-intensive science to enable large scale earth and climate systems modelling and analysis. There are > 3000 users actively logging in and > 600 projects on the NCI system. Efficiently scaling and adapting data and software systems to petascale infrastructures requires the collaborative development of an architecture that is designed, programmed and operated to enable users to interactively invoke different forms of in-situ computation over complex and large scale data collections. NCI makes available major and long tail data collections from both the government and research sectors based on six themes: 1) weather, climate and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology and 6) astronomy, bio and social. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. Collections are the operational form for data management and access. Similar data types from individual custodians are managed cohesively. Use of international standards for discovery and interoperability allow complex interactions within and between the collections. This design facilitates a transdisciplinary approach to research and enables a shift from small scale, 'stove-piped' science efforts to large scale, collaborative systems science. This new and complex infrastructure requires a move to shared, globally trusted software frameworks that can be maintained and updated. Workflow engines become essential and need to integrate provenance, versioning, traceability, repeatability

  12. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)

    2016-06-06

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as a directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific

  14. Scalable spheroid model of human hepatocytes for hepatitis C infection and replication.

    Science.gov (United States)

    Ananthanarayanan, Abhishek; Nugraha, Bramasta; Triyatni, Miriam; Hart, Stefan; Sankuratri, Suryanarayana; Yu, Hanry

    2014-07-07

    Developing effective new drugs against the hepatitis C virus (HCV) has been challenging due to the lack of appropriate small animal and in vitro models recapitulating the entire life cycle of the virus. Current in vitro models fail to recapitulate the complexity of human liver physiology. Here we present a method to study HCV infection and replication in spheroid cultures of Huh 7.5 cells and primary human hepatocytes. Spheroid cultures are constructed using a galactosylated cellulosic sponge with homogeneous macroporosity, enabling the formation and maintenance of uniformly sized spheroids. This facilitates easy handling of the tissue-engineered constructs and overcomes limitations inherent in traditional spheroid cultures. Spheroids formed in the galactosylated cellulosic sponge show enhanced hepatic functions in Huh 7.5 cells and maintain liver-specific functions of primary human hepatocytes for 2 weeks in culture. Establishment of apical and basolateral polarity along with the expression and localization of all HCV-specific entry proteins allow for a 9-fold increase in viral entry in spheroid cultures over conventional monolayer cultures. Huh 7.5 cells cultured in the galactosylated cellulosic sponge also support replication of the HCV clone JFH (Japanese fulminant hepatitis)-1 at higher levels than in monolayer cultures. The advantages of our system in maintaining liver-specific functions and allowing HCV infection, together with its ease of handling, make it suitable for the study of HCV biology in basic research and pharmaceutical R&D.

  15. Rhode Island Model Evaluation & Support System: Support Professional. Edition II

    Science.gov (United States)

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful evaluation and support system for support professionals will help improve student outcomes. The primary purpose of the Rhode Island Model Support Professional Evaluation and Support System (Rhode Island Model) is to help all support professionals do their best work…

  16. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  17. Multiscale Modeling of supported bilayers

    Science.gov (United States)

    Faller, Roland; Xing, Chenyue; Hoopes, Matthew I.

    2009-03-01

    Supported lipid bilayers are a widely used research platform for understanding the behavior of real cell membranes as they allow for additional mechanical stability. We systematically studied the changes that a support induces on a phospholipid bilayer using coarse-grained molecular modeling at different levels. We characterize the density and pressure profiles as well as the density imbalance inflicted on the membrane by the support. We also determine the diffusion coefficients and characterize the influence of different corrugations of the support. We then determine the free energy of transfer of phospholipids between the proximal and distal leaflets of a supported membrane using the coarse-grained Martini model. It turns out that at equilibrium the proximal leaflet has an about 2-3% higher density. These results are in favorable agreement with recent data obtained by very large scale modeling using a water-free model where flip-flop can be observed directly. We compare results for the free energy of transfer obtained by pulling the lipid across the membrane in different ways. There are small quantitative differences but the overall picture is consistent. We additionally characterize the intermediate states which determine the barrier height and therefore the rate of translocation.

  18. Genetic algorithms and genetic programming for multiscale modeling: Applications in materials science and chemistry and advances in scalability

    Science.gov (United States)

    Sastry, Kumara Narasimha

    2007-03-01

    building blocks in organic chemistry---indicate that MOGAs produce high-quality semiempirical methods that (1) are stable to small perturbations, (2) yield accurate configuration energies on untested and critical excited states, and (3) yield ab initio quality excited-state dynamics. The proposed method enables simulations of more complex systems to realistic, multi-picosecond timescales, well beyond previous attempts or the expectations of human experts, with a 2-3 orders-of-magnitude reduction in computational cost. While the two applications use simple evolutionary operators, in order to tackle more complex systems their scalability and limitations have to be investigated. The second part of the thesis addresses some of the challenges involved in a successful design of genetic algorithms and genetic programming for multiscale modeling. The first issue addressed is the scalability of genetic programming, where facetwise models are built to assess the population size required by GP to ensure an adequate supply of raw building blocks and also to ensure accurate decision-making between competing building blocks. This study also presents a design of competent genetic programming, where traditional fixed recombination operators are replaced by building and sampling probabilistic models of promising candidate programs. The proposed scalable GP, called extended compact GP (eCGP), combines ideas from the extended compact genetic algorithm (eCGA) and probabilistic incremental program evolution (PIPE) and adaptively identifies, propagates and exchanges important subsolutions of a search problem. Results show that eCGP scales cubically with problem size on both GP-easy and GP-hard problems. Finally, facetwise models are developed to explore the limitations of scalability of MOGAs, where the scalability of multiobjective algorithms in reliably maintaining Pareto-optimal solutions is addressed. The results show that even when the building blocks are accurately identified, massive multimodality

  19. Defined Essential 8™ Medium and Vitronectin Efficiently Support Scalable Xeno-Free Expansion of Human Induced Pluripotent Stem Cells in Stirred Microcarrier Culture Systems

    Science.gov (United States)

    Badenes, Sara M.; Fernandes, Tiago G.; Cordeiro, Cláudia S. M.; Boucher, Shayne; Kuninger, David; Vemuri, Mohan C.; Diogo, Maria Margarida; Cabral, Joaquim M. S.

    2016-01-01

    Human induced pluripotent stem (hiPS) cell culture using Essential 8™ xeno-free medium and the defined xeno-free matrix vitronectin was successfully implemented under adherent conditions. This matrix was able to support hiPS cell expansion either in coated plates or on polystyrene-coated microcarriers, while maintaining hiPS cell functionality and pluripotency. Importantly, scale-up of the microcarrier-based system was accomplished using a 50 mL spinner flask, under dynamic conditions. A three-level factorial design experiment was performed to identify optimal conditions in terms of (a) initial cell density and (b) agitation speed that maximize cell yield in spinner flask cultures. A maximum cell yield of 3.5 is achieved by inoculating 55,000 cells/cm² of microcarrier surface area and using 44 rpm, which generates a cell density of 1.4×10⁶ cells/mL after 10 days of culture. After dynamic culture, hiPS cells maintained their typical morphology upon re-plating and exhibited pluripotency-associated marker expression as well as tri-lineage differentiation capability, which was verified by inducing their spontaneous differentiation through embryoid body formation; subsequent downstream differentiation to specific lineages such as neural and cardiac fates was also successfully accomplished. In conclusion, a scalable, robust and cost-effective xeno-free culture system was successfully developed and implemented for the scale-up production of hiPS cells. PMID:26999816

  20. Infopreneurs in service of rural enterprise and economic development: Addressing the critical challenges of scalability and sustainability in support of service extension in developing (rural) economies

    CSIR Research Space (South Africa)

    Van Rensburg, JR

    2010-08-31

    Full Text Available years’ work of ongoing research in a Living Lab fashion to understand and address the two critical challenges of scalability and sustainability in the utilisation of technology (primarily Information and Communication Technologies – ICTs) as enablers...

  1. Support for Programming Models in Network-on-Chip-based Many-core Systems

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth

    This thesis addresses aspects of support for programming models in Network-on-Chip-based many-core architectures. The main focus is to consider architectural support for a plethora of programming models in a single system. The thesis has three main parts. The first part considers parallelization and scalability in an image processing application with the aim of providing insight into parallel programming issues. The second part proposes and presents the tile-based Clupea many-core architecture, which has the objective of providing configurable support for programming models to allow different programming...

  2. A SCALABLE HYBRID MODULAR MULTIPLICATION ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    Meng Qiang; Chen Tao; Dai Zibin; Chen Quji

    2008-01-01

    Based on the analysis of several familiar large integer modular multiplication algorithms, this paper proposes a new Scalable Hybrid modular multiplication (SHyb) algorithm which has scalable operands, and presents an RSA algorithm model with scalable key size. Theoretical analysis shows that the SHyb algorithm requires m²n/2 + 2m iterations to complete an mn-bit modular multiplication with the application of an n-bit modular addition hardware circuit. The number of required iterations can be reduced to half that of the scalable Montgomery algorithm. Consequently, the application scope of the RSA cryptosystem is expanded and its operation speed is enhanced based on the SHyb algorithm.
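
    For orientation only, the sketch below shows a generic bit-serial (interleaved) modular multiplication built from shifts, additions and conditional subtractions; it illustrates the style of iteration being counted in such hardware algorithms and is not the SHyb or Montgomery algorithm itself.

    ```python
    # Generic interleaved (bit-serial) modular multiplication: (a * b) mod m using
    # only shifts, additions and conditional subtractions. A plain illustration of
    # iteration counting in bit/word-serial multipliers, not the SHyb algorithm.
    def modmul(a: int, b: int, m: int) -> int:
        a %= m
        b %= m
        acc = 0
        for i in reversed(range(a.bit_length())):   # scan the bits of a, MSB first
            acc <<= 1                               # acc = 2 * acc
            if (a >> i) & 1:
                acc += b                            # add multiplicand when bit is set
            while acc >= m:                         # at most two subtractions, since b < m
                acc -= m
        return acc

    assert modmul(123456789, 987654321, 10**9 + 7) == (123456789 * 987654321) % (10**9 + 7)
    ```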

  3. Scalability of surrogate-assisted multi-objective optimization of antenna structures exploiting variable-fidelity electromagnetic simulation models

    Science.gov (United States)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2016-10-01

    Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.

  4. Generative Model Selection Using a Scalable and Size-Independent Complex Network Classifier

    OpenAIRE

    Motallebi, Sadegh; Aliakbary, Sadegh; Habibi, Jafar

    2013-01-01

    Real networks exhibit nontrivial topological features such as heavy-tailed degree distributions, high clustering, and small-worldness. Researchers have developed several generative models for synthesizing artificial networks that are structurally similar to real networks. An important research problem is to identify the generative model that best fits a target network. In this paper, we investigate this problem and our goal is to select the model that is able to generate graphs similar to a...

  5. Double-π fully scalable model for on-chip spiral inductors

    Institute of Scientific and Technical Information of China (English)

    Liu Jun; Zhong Lin; Wang Huang; Wen Jincai; Sun Lingling; Yu Zhiping; Marissa Condon

    2012-01-01

    A novel double-π equivalent circuit model for on-chip spiral inductors is presented. A hierarchical structure, similar to that of MOS models, is introduced. This enables a strict partition between the geometry scaling in the global model and the model equations in the local model. The major parasitic effects, including the skin effect, the proximity effect, the inductive and capacitive loss in the substrate, and the distributed effect, are analytically calculated from geometric and process parameters at the local level. As accurate values of the layout and process parameters are difficult to obtain, a set of model parameters is introduced to correct the errors caused by using these inaccurate layout and process parameters at the local level. Scaling rules are defined to enable the formation of models that describe the behavior of inductors of a variety of geometric dimensions. A series of asymmetric inductors with different geometries were fabricated on a standard 0.18-μm SiGe BiCMOS process with 100 Ω·cm substrate resistivity to verify the proposed model. Excellent agreement has been obtained between the measured results and the proposed model over a wide frequency range.

  6. Modeling Advance Life Support Systems

    Science.gov (United States)

    Pitts, Marvin; Sager, John; Loader, Coleen; Drysdale, Alan

    1996-01-01

    Activities this summer consisted of two projects that involved computer simulation of bioregenerative life support systems for space habitats. Students in the Space Life Science Training Program (SLSTP) used the simulation, Space Station, to learn about relationships between humans, fish, plants, and microorganisms in a closed environment. One student completed a six-week project to modify the simulation by converting the microbes from anaerobic to aerobic, and then balancing the simulation's life support system. A detailed computer simulation of a closed lunar station using bioregenerative life support was attempted, but not enough was known about system constraints and constants in plant growth, bioreactor design for space habitats, and food preparation to develop an integrated model with any confidence. Instead of a completed detailed model with broad assumptions concerning the unknown system parameters, a framework for an integrated model was outlined and work was begun on plant and bioreactor simulations. The NASA sponsors and the summer Fellow were satisfied with the progress made during the 10 weeks, and we have planned future cooperative work.

  7. A Scalable Approach to Modeling Cascading Risk in the MDAP Network

    Science.gov (United States)

    2014-04-30

    EAC. Images are used to represent the status of a program in a tabular matrix format, called the Program Status Matrix (PSM), for the past 3 months...indicates contracts and APB requirements that have problems but can be resolved, and red indicates the contracts that will not be made available. PSM is...The Program Status Matrix (PSM) in the DAES report is a matrix of circular shapes supported by Microsoft PowerPoint or Word. The PSM is captured in its

  8. Investigating the Role of Biogeochemical Processes in the Northern High Latitudes on Global Climate Feedbacks Using an Efficient Scalable Earth System Model

    Energy Technology Data Exchange (ETDEWEB)

    Jain, Atul K. [Univ. of Illinois, Urbana-Champaign, IL (United States)

    2016-09-14

    The overall objective of this DOE-funded project is to address scientific and computational challenges in climate modeling by expanding our understanding of the biogeophysical-biogeochemical processes and their interactions in the northern high latitudes (NHLs) using an earth system modeling (ESM) approach, and by adopting an adaptive parallel runtime system in an ESM to achieve efficient and scalable climate simulations through improved load balancing algorithms.

  9. Performance Evaluation of the WSN Routing Protocols Scalability

    Directory of Open Access Journals (Sweden)

    L. Alazzawi

    2008-01-01

    Full Text Available Scalability is an important factor in designing an efficient routing protocol for wireless sensor networks (WSNs). A good routing protocol has to be scalable and adaptive to changes in the network topology. Thus a scalable protocol should perform well as the network grows larger or as the workload increases. In this paper, routing protocols for wireless sensor networks are simulated and their performances are evaluated to determine their capability for supporting network scalability.

  10. HYDROSCAPE: A SCAlable and ParallelizablE Rainfall Runoff Model for Hydrological Applications

    Science.gov (United States)

    Piccolroaz, S.; Di Lazzaro, M.; Zarlenga, A.; Majone, B.; Bellin, A.; Fiori, A.

    2015-12-01

    In this work we present HYDROSCAPE, an innovative streamflow routing method based on the travel time approach and modeled through a fine-scale geomorphological description of hydrological flow paths. The model is designed to be easily coupled with weather forecast or climate models providing the hydrological forcing, while preserving the geomorphological dispersion of the river network, which is kept unchanged independently of the grid size of the rainfall input. This makes HYDROSCAPE particularly suitable for multi-scale applications, ranging from medium size catchments up to the continental scale, and for investigating the effects of extreme rainfall events that require an accurate description of basin response timing. A key feature of the model is its computational efficiency, which allows performing a large number of simulations for sensitivity/uncertainty analyses in a Monte Carlo framework. Further, the model is highly parsimonious, involving the calibration of only three parameters: one defining the residence time of the hillslope response, one for channel velocity, and a multiplicative factor accounting for uncertainties in the identification of the potential maximum soil moisture retention in the SCS-CN method. HYDROSCAPE is designed with a simple and flexible modular structure, which makes it well suited to massive parallelization, customization according to specific user needs and preferences (e.g., the rainfall-runoff model), and continuous development and improvement. Finally, the possibility to specify the desired computational time step and evaluate streamflow at any location in the domain makes HYDROSCAPE an attractive tool for many hydrological applications, and a valuable alternative to more complex and highly parametrized large scale hydrological models. Together with model development and features, we present an application to the Upper Tiber River basin (Italy), providing a practical example of model performance and
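
    A toy illustration of the two ingredients named above, assuming an exponential travel-time distribution and applying the SCS-CN relation per time step purely for simplicity; the numbers are invented and do not reflect HYDROSCAPE's actual formulation.

    ```python
    # Toy travel-time routing: SCS-CN effective rainfall convolved with an assumed
    # exponential travel-time distribution. Parameter values are illustrative only.
    import numpy as np

    def scs_cn_runoff(P, CN=75.0):
        """Effective rainfall (mm) from rainfall P (mm) via the SCS-CN relation."""
        S = 25400.0 / CN - 254.0           # potential maximum retention (mm)
        Ia = 0.2 * S                       # initial abstraction
        return np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)

    rain = np.array([0, 5, 20, 35, 10, 2, 0, 0, 0, 0], float)   # rainfall per step (mm)
    runoff = scs_cn_runoff(rain)

    t = np.arange(24.0)                    # travel time (steps)
    ttd = np.exp(-t / 4.0)                 # assumed hillslope+channel travel-time pdf
    ttd /= ttd.sum()

    streamflow = np.convolve(runoff, ttd)  # geomorphological dispersion via convolution
    print(np.round(streamflow[:12], 2))
    ```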

  11. A model based message passing approach for flexible and scalable home automation controllers

    Energy Technology Data Exchange (ETDEWEB)

    Bienhaus, D. [INNIAS GmbH und Co. KG, Frankenberg (Germany); David, K.; Klein, N.; Kroll, D. [ComTec Kassel Univ., SE Kassel Univ. (Germany); Heerdegen, F.; Jubeh, R.; Zuendorf, A. [Kassel Univ. (Germany). FG Software Engineering; Hofmann, J. [BSC Computer GmbH, Allendorf (Germany)

    2012-07-01

    There is a large variety of home automation systems, which are largely proprietary systems from different vendors. In addition, the configuration and administration of home automation systems is frequently a very complex task, especially if more complex functionality is to be achieved. Therefore, an open model for home automation was developed that is especially designed for easy integration of various home automation systems. This solution also provides a simple modeling approach that is inspired by typical home automation components like switches, timers, etc. In addition, a model based technology to achieve rich functionality and usability was implemented. (orig.)

  12. PATHLOGIC-S: a scalable Boolean framework for modelling cellular signalling.

    Directory of Open Access Journals (Sweden)

    Liam G Fearnley

    Full Text Available Curated databases of signal transduction have grown to describe several thousand reactions, and efficient use of these data requires the development of modelling tools to elucidate and explore system properties. We present PATHLOGIC-S, a Boolean specification for a signalling model, with its associated GPL-licensed implementation using integer programming techniques. The PATHLOGIC-S specification has been designed to function on current desktop workstations, and is capable of providing analyses on some of the largest currently available datasets through use of Boolean modelling techniques to generate predictions of stable and semi-stable network states from data in community file formats. PATHLOGIC-S also addresses major problems associated with the presence and modelling of inhibition in Boolean systems, and reduces logical incoherence due to common inhibitory mechanisms in signalling systems. We apply this approach to signal transduction networks including Reactome and two pathways from the Panther Pathways database, and present the results of computations on each along with a discussion of execution time. A software implementation of the framework and model is freely available under a GPL license.
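
    To make the Boolean-modelling idea concrete, the sketch below iterates a tiny, invented three-node network (with one inhibitory edge) synchronously until it reaches a fixed point or a cycle; it is an illustration of Boolean state-space exploration, not the PATHLOGIC-S integer-programming formulation.

    ```python
    # Synchronous update of a tiny Boolean signalling network until a fixed point or
    # cycle is found; the three-node network and its rules are invented, and one
    # inhibitory edge (C blocks activation of B) is included for illustration.
    def step(state):
        a, b, c = state["A"], state["B"], state["C"]
        return {
            "A": a,             # A is an input ligand, held constant
            "B": a and not c,   # B is activated by A and inhibited by C
            "C": b,             # C is activated by B
        }

    def find_attractor(state, max_iter=50):
        seen = []
        for _ in range(max_iter):
            seen.append(state)
            nxt = step(state)
            if nxt == state:
                return nxt, "fixed point"
            if nxt in seen:
                return nxt, "cycle"
            state = nxt
        return state, "no attractor found"

    print(find_attractor({"A": True, "B": False, "C": False}))    # oscillates
    print(find_attractor({"A": False, "B": False, "C": False}))   # trivially stable
    ```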

  13. Towards a portable, scalable, open source model of tree cover derived from Landsat spectra

    Science.gov (United States)

    Greenberg, J. A.; Xu, Q.; Morrison, B. D.; Xu, Z.; Man, A.; Fredrickson, M. M.; Ramirez, C.; Li, B.

    2016-12-01

    Tree cover is a key parameter used in a variety of applications, including ecosystem and fire behavior modeling and wildlife management, and is the primary way by which a variety of biomes are classified. At large scales, quantification of tree cover can help elucidate changes in deforestation and forest recovery and clarify the relationship between climate and forest distributions. To determine tree cover at large scales, remote sensing-based methods are required. A variety of products exist at various scales and extents, including two global products: Hansen et al.'s treecover2000 product and Sexton et al.'s Landsat Vegetation Continuous Fields (VCF) product. While these products serve an important role, they are only available for a limited set of dates: treecover2000 is available for the year 2000, and Landsat VCF for 2000 and 2005. In this analysis, we created a single model of tree cover as a function of Landsat spectra that is both calibrated and validated using small footprint LiDAR estimates of tree cover, trained across multiple Landsat scenes. Our model was found to be accurate and portable across space and time, largely due to the use of a large number of LiDAR-Landsat pixel pairs across multiple Landsat scenes to capture both sensor and scene heterogeneity. We will be releasing the model itself, rather than time-limited products, to allow other users to apply the model to any reflectance-calibrated Landsat scene from any time period.
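
    The abstract does not state the model form, so the sketch below stands in a random-forest regression for whatever the authors actually used; the arrays are random placeholders for real LiDAR-Landsat pixel pairs.

    ```python
    # Sketch of calibrating/validating a tree-cover model against LiDAR-derived
    # cover with Landsat reflectance as predictors. A random forest is an assumed
    # stand-in for the authors' model form; the data are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    reflectance = rng.uniform(0.0, 0.6, size=(5000, 6))          # 6 Landsat bands
    lidar_cover = np.clip(100 * (0.8 - reflectance[:, 3])        # fake NIR dependence
                          + rng.normal(0, 5, 5000), 0, 100)      # percent tree cover

    X_tr, X_te, y_tr, y_te = train_test_split(reflectance, lidar_cover,
                                              test_size=0.25, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("R^2 on held-out pixels:", round(model.score(X_te, y_te), 3))
    ```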

  14. A Scalable Model for the Performance Evaluation of ROADMs with Generic Switching Capabilities

    Directory of Open Access Journals (Sweden)

    Athanasios S Tsokanos

    2010-10-01

    Full Text Available In order to evaluate the performance of Reconfigurable Optical Add/Drop Multiplexers (ROADMs) consisting of a single large switch in circuit-switched Wavelength-Division Multiplexing (WDM) networks, a theoretical Queuing Network Model (QNM) is developed, which consists of two M/M/c/c loss systems, each of which is analyzed in isolation. An overall analytical blocking probability of a ROADM is obtained. This model can also be used for the performance optimization of ROADMs with a single switch capable of switching all or a partial number of the wavelengths being used. It is demonstrated how the proposed model can be used for the performance evaluation of a ROADM for different numbers of wavelengths inside the switch, under various traffic intensity conditions, producing an exact blocking probability solution. The accuracy of the analytical results is validated by simulation.
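
    The M/M/c/c building block referred to above has a closed-form blocking probability given by the Erlang-B formula; the sketch below evaluates two such stages in isolation and combines them under an independence assumption, with illustrative parameter values (the stage sizes, loads and combination rule are assumptions, not the paper's exact QNM).

    ```python
    # Erlang-B blocking of an M/M/c/c loss system with c wavelengths (servers) and
    # offered traffic a = lambda/mu Erlangs, evaluated for two stages in isolation.
    def erlang_b(c: int, a: float) -> float:
        """Blocking probability B(c, a) via the numerically stable recursion."""
        b = 1.0
        for k in range(1, c + 1):
            b = a * b / (k + a * b)
        return b

    p1 = erlang_b(32, 24.0)                 # e.g. full switching stage
    p2 = erlang_b(16, 12.0)                 # e.g. partial switching stage
    overall = 1 - (1 - p1) * (1 - p2)       # combined under an independence assumption
    print(round(p1, 4), round(p2, 4), round(overall, 4))
    ```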

  15. A scalable delivery framework and a pricing model for streaming media with advertisements

    Science.gov (United States)

    Al-Hadrusi, Musab; Sarhan, Nabil J.

    2008-01-01

    This paper presents a delivery framework for streaming media with advertisements and an associated pricing model. The delivery model combines the benefits of periodic broadcasting and stream merging. The advertisements' revenues are used to subsidize the price of the media content. The pricing is determined based on the total ads' viewing time. Moreover, this paper presents an efficient ad allocation scheme and three modified scheduling policies that are well suited to the proposed delivery framework. Furthermore, we study the effectiveness of the delivery framework and various scheduling polices through extensive simulation in terms of numerous metrics, including customer defection probability, average number of ads viewed per client, price, arrival rate, profit, and revenue.

  16. Scalable 3D GIS environment managed by 3D-XML-based modeling

    Science.gov (United States)

    Shi, Beiqi; Rui, Jianxun; Chen, Neng

    2008-10-01

    Nowadays, 3D GIS technologies have become a key factor in establishing and maintaining large-scale 3D geoinformation services. However, with the rapidly increasing size and complexity of the 3D models being acquired, a pressing need for suitable data management solutions has become apparent. This paper outlines that storage and exchange of geospatial data between databases and different front ends like 3D models, GIS or internet browsers require a standardized format which is capable of representing instances of 3D GIS models, minimizing loss of information during data transfer and reducing interface development efforts. After a review of previous methods for spatial 3D data management, a universal lightweight XML-based format for quick and easy sharing of 3D GIS data is presented. 3D data management based on XML is a solution meeting the stated requirements, which can provide an efficient means of opening a new standard way to create an arbitrary data structure and share it over the Internet. To manage reality-based 3D models, this paper uses 3DXML produced by Dassault Systemes. 3DXML uses open XML schemas to communicate product geometry, structure and graphical display properties. It can be read, written and enriched by standard tools, and allows users to add extensions based on their own specific requirements. The paper concludes with the presentation of projects from application areas which will benefit from the functionality presented above.

  17. Chiefly Symmetric: Results on the Scalability of Probabilistic Model Checking for Operating-System Code

    Directory of Open Access Journals (Sweden)

    Marcus Völp

    2012-11-01

    Full Text Available Reliability in terms of functional properties from the safety-liveness spectrum is an indispensable requirement of low-level operating-system (OS) code. However, with ever more complex and thus less predictable hardware, quantitative and probabilistic guarantees become more and more important. Probabilistic model checking is one technique to automatically obtain these guarantees. First experiences with the automated quantitative analysis of low-level operating-system code confirm the expectation that the naive probabilistic model checking approach rapidly reaches its limits when increasing the number of processes. This paper reports on our work in progress to tackle the state explosion problem for low-level OS code caused by the exponential blow-up of the model size when the number of processes grows. We studied the symmetry reduction approach and carried out our experiments with a simple test-and-test-and-set lock case study as a representative example for a wide range of protocols with natural inter-process dependencies and long-run properties. We quickly see a state-space explosion for scenarios where inter-process dependencies are insignificant. However, once inter-process dependencies dominate the picture, models with a hundred and more processes can be constructed and analysed.

  18. Developmental Impact Analysis of an ICT-Enabled Scalable Healthcare Model in BRICS Economies

    Directory of Open Access Journals (Sweden)

    Dhrubes Biswas

    2012-06-01

    Full Text Available This article highlights the need for initiating a healthcare business model in a grassroots, emerging-nation context. This article's backdrop is a history of chronic anomalies afflicting the healthcare sector in India and similarly placed BRICS nations. In these countries, a significant percentage of the population remains deprived of basic healthcare facilities and emergency services. Community (primary) care services are being offered by public and private stakeholders as a panacea to the problem. Yet, there is an urgent need for specialized (tertiary) care services at all levels. As a response to this challenge, an all-inclusive health-exchange system (HES) model, which utilizes information communication technology (ICT) to provide solutions in rural India, has been developed. The uniqueness of the model lies in its innovative hub-and-spoke architecture and its emphasis on affordability, accessibility, and availability to the masses. This article describes a developmental impact analysis (DIA) that was used to assess the impact of this model. The article contributes to the knowledge base of readers by making them aware of the healthcare challenges emerging nations are facing and ways to mitigate those challenges using entrepreneurial solutions.

  19. Toward a scalable flexible-order model for 3D nonlinear water waves

    DEFF Research Database (Denmark)

    Engsig-Karup, Allan Peter; Ducrozet, Guillaume; Bingham, Harry B.

    strategy on a time-invariant mesh. The 3D numerical model is based on a finite difference method as in the original works [LiFleming1997, BinghamZhang2007]. Full details and other aspects of an improved 3D solution can be found in [EBL08]. The new and improved approach for three...

  20. Monte Carlo tests of the Rasch model based on scalability coefficients

    DEFF Research Database (Denmark)

    Christensen, Karl Bang; Kreiner, Svend

    2010-01-01

    that summarizes the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model, with p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence...... and unequal item discrimination, are discussed. The methods are illustrated and motivated using a simulation study and a real data example....
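
    The basic ingredient of these coefficients, the Guttman error count, is easy to illustrate; the sketch below counts Guttman errors in a tiny invented item-response matrix, leaving aside the Markov chain Monte Carlo p-value machinery of the paper.

    ```python
    # Count Guttman errors in a dichotomous item-response matrix: after ordering
    # items from easiest to hardest, an error is a (0, 1) pattern where a person
    # fails the easier item but passes the harder one. Data are invented.
    import numpy as np

    X = np.array([[1, 1, 1, 0],      # rows: persons, columns: items
                  [1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 1, 1, 1]])

    order = np.argsort(-X.sum(axis=0), kind="stable")   # easiest items first
    Xs = X[:, order]

    errors = sum(int(row[i] == 0 and row[j] == 1)
                 for row in Xs
                 for i in range(Xs.shape[1])
                 for j in range(i + 1, Xs.shape[1]))
    print("Guttman errors:", errors)
    ```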

  1. An Extensible and Scalable Framework for Formal Modeling, Analysis, and Development of Distributed Systems

    Science.gov (United States)

    2008-11-30

    the project personnel. All publications are available on request. [P1] R. Canetti, L. Cheung, D. Kaynar, M. Liskov, N. Lynch, O. Pereira, and R...March 2008. [P2] Ran Canetti, Ling Cheung, Dilsun Kaynar, Nancy Lynch, and Olivier Pereira. Modeling Bounded Computation in Long-Lived Systems. CONCUR...pages 153-162, 2001. [4] R. Canetti, L. Cheung, D. Kaynar, M. Liskov, N. Lynch, O. Pereira, and R. Segala. Analyzing Security Protocols Using Time

  2. Helicopter model rotor-blade vortex interaction impulsive noise: Scalability and parametric variations

    Science.gov (United States)

    Splettstoesser, W. R.; Schultz, K. J.; Boxwell, D. A.; Schmitz, F. H.

    1984-01-01

    Acoustic data taken in the anechoic Deutsch-Niederlaendischer Windkanal (DNW) have documented the blade vortex interaction (BVI) impulsive noise radiated from a 1/7-scale model main rotor of the AH-1 series helicopter. Averaged model scale data were compared with averaged full scale, in-flight acoustic data under similar nondimensional test conditions. At low advance ratios (mu = 0.164 to 0.194), the data scale remarkably well in level and waveform shape, and also duplicate the directivity pattern of BVI impulsive noise. At moderate advance ratios (mu = 0.224 to 0.270), the scaling deteriorates, suggesting that the model scale rotor is not adequately simulating the full scale BVI noise; presently, no proven explanation of this discrepancy exists. Carefully performed parametric variations over a complete matrix of testing conditions have shown that BVI noise radiation is highly sensitive to all four governing nondimensional parameters: hover tip Mach number, advance ratio, local inflow ratio, and thrust coefficient.

  3. Scalable adaptive methods for forward and inverse continental ice sheet modelling

    Science.gov (United States)

    Isaac, T.; Ghattas, O.; Stadler, G.; Petra, N.

    2013-12-01

    The simulation of continental ice flow is challenging due to (1) localized regions of fast flow that are separated from slow regions by thin transition zones, (2) the complex and anisotropic geometry of continental ice sheets, (3) stress singularities occurring at the grounding line, and (4) the nonlinear rheology of ice. We present an inexact Newton method for the solution of an adaptive higher-order accurate finite element discretization of the nonlinear Stokes equations that model ice flow. The Newton linearizations are solved using a Krylov method with a block preconditioner with algebraic multigrid for the viscous block and an incomplete factorization smoother. The basal boundary conditions play a crucial role in modeling the dynamics of polar ice sheets. These are typically formulated as Robin-type boundary conditions with a basal friction coefficient, which subsumes several physical processes. This coefficient is uncertain, since it cannot be observed or measured. Hence, it must be inferred from surface ice velocity observations. We formulate this inference problem in a Bayesian framework and present results for the maximum a posteriori (MAP) point computed with different prior knowledge/regularization. Using a low-rank Hessian approximation of the negative log posterior, we construct a Gaussian approximation of the posterior distribution for the friction coefficient. This allows us to compute the pointwise variance field and samples of the basal friction coefficient for continental-scale ice sheet problems.

  4. Motion-adaptive model-assisted compatible coding with spatiotemporal scalability

    Science.gov (United States)

    Lee, JaeBeom; Eleftheriadis, Alexandros

    1997-01-01

    We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye in terms of space and time in moving images, with consideration of object motion. The previous STMAC approach was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas including the eyes and lips need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.

  5. Service Virtualization Using a Non-von Neumann Parallel, Distributed, and Scalable Computing Model

    Directory of Open Access Journals (Sweden)

    Rao Mikkilineni

    2012-01-01

    Full Text Available This paper describes a prototype implementing a high degree of transaction resilience in distributed software systems using a non-von Neumann computing model exploiting parallelism in computing nodes. The prototype incorporates fault, configuration, accounting, performance, and security (FCAPS) management using a signaling network overlay and allows the dynamic control of a set of distributed computing elements in a network. Each node is a computing entity endowed with self-management and signaling capabilities to collaborate with similar nodes in a network. The separation of parallel computing and management channels allows the end-to-end transaction management of computing tasks (provided by the autonomous distributed computing elements) to be implemented as network-level FCAPS management. While the new computing model is operating-system agnostic, a Linux, Apache, MySQL, PHP/Perl/Python (LAMP) based services architecture is implemented in a prototype to demonstrate end-to-end transaction management with auto-scaling, self-repair, dynamic performance management and distributed transaction security assurance. The implementation is made possible by a non-von Neumann middleware library providing Linux process management through multi-threaded parallel execution of self-management and signaling abstractions. We did not use hypervisors, virtual machines, or layers of complex virtualization management systems in implementing this prototype.

  6. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    Science.gov (United States)

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far, numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the spatial discretization methods employed, which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques, which are not sufficiently scalable to deal with the incurred increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate
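
    The AMG-preconditioned Krylov pattern described above can be illustrated generically with pyamg on a model Poisson matrix; this is not the authors' custom preconditioner for nonlinear elasticity, only the standard setup-then-solve workflow.

    ```python
    # Generic AMG-preconditioned Krylov solve: smoothed-aggregation AMG from pyamg
    # used as a preconditioner for SciPy's conjugate gradient on a model Poisson
    # problem. Illustrates the setup/solve split discussed above, nothing more.
    import numpy as np
    import pyamg
    from scipy.sparse.linalg import cg

    A = pyamg.gallery.poisson((200, 200), format="csr")    # SPD model problem
    b = np.random.default_rng(0).standard_normal(A.shape[0])

    ml = pyamg.smoothed_aggregation_solver(A)              # AMG hierarchy (setup phase)
    M = ml.aspreconditioner(cycle="V")                     # one V-cycle per application

    iters = []
    x, info = cg(A, b, M=M, callback=lambda xk: iters.append(1))
    print("converged" if info == 0 else "not converged", "after", len(iters), "iterations")
    ```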

  7. Scalable Video Transcaling for the Wireless Internet

    Directory of Open Access Journals (Sweden)

    van der Schaar Mihaela

    2004-01-01

    Full Text Available The rapid and unprecedented increase in the heterogeneity of multimedia networks and devices emphasizes the need for scalable and adaptive video solutions both for coding and transmission purposes. However, in general, there is an inherent trade-off between the level of scalability and the quality of scalable video streams. In other words, the higher the bandwidth variation, the lower the overall video quality of the scalable stream that is needed to support the desired bandwidth range. In this paper, we introduce the notion of wireless video transcaling (TS, which is a generalization of (nonscalable transcoding. With TS, a scalable video stream, that covers a given bandwidth range, is mapped into one or more scalable video streams covering different bandwidth ranges. Our proposed TS framework exploits the fact that the level of heterogeneity changes at different points of the video distribution tree over wireless and mobile Internet networks. This provides the opportunity to improve the video quality by performing the appropriate TS process. We argue that an Internet/wireless network gateway represents a good candidate for performing TS. Moreover, we describe hierarchical TS (HTS, which provides a “Transcaler” with the option of choosing among different levels of TS processes with different complexities. We illustrate the benefits of TS by considering the recently developed MPEG-4 fine granularity scalability (FGS video coding. Extensive simulation results of video TS over bit rate ranges supported by emerging wireless LANs are presented.

  8. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    André Thomas

    2007-01-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality than nonscalably encoded ones, without a significant increase in complexity. A full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.

  9. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    Thomas André

    2007-03-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality than nonscalably encoded ones, without a significant increase in complexity. A full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.

  10. Scalable Frequent Subgraph Mining

    KAUST Repository

    Abdelhamid, Ehab

    2017-06-19

    A graph is a data structure that contains a set of nodes and a set of edges connecting these nodes. Nodes represent objects while edges model relationships among these objects. Graphs are used in various domains due to their ability to model complex relations among several objects. Given an input graph, the Frequent Subgraph Mining (FSM) task finds all subgraphs with frequencies exceeding a given threshold. FSM is crucial for graph analysis, and it is an essential building block in a variety of applications, such as graph clustering and indexing. FSM is computationally expensive, and its existing solutions are extremely slow. Consequently, these solutions are incapable of mining modern large graphs. This slowness is caused by the underlying approaches of these solutions which require finding and storing an excessive amount of subgraph matches. This dissertation proposes a scalable solution for FSM that avoids the limitations of previous work. This solution is composed of four components. The first component is a single-threaded technique which, for each candidate subgraph, needs to find only a minimal number of matches. The second component is a scalable parallel FSM technique that utilizes a novel two-phase approach. The first phase quickly builds an approximate search space, which is then used by the second phase to optimize and balance the workload of the FSM task. The third component focuses on accelerating frequency evaluation, which is a critical step in FSM. To do so, a machine learning model is employed to predict the type of each graph node, and accordingly, an optimized method is selected to evaluate that node. The fourth component focuses on mining dynamic graphs, such as social networks. To this end, an incremental index is maintained during the dynamic updates. Only this index is processed and updated for the majority of graph updates. Consequently, search space is significantly pruned and efficiency is improved. The empirical evaluation shows that the

  11. MATCHING LSI FOR SCALABLE INFORMATION RETRIEVAL

    Directory of Open Access Journals (Sweden)

    Rajagopal Palsonkennedy

    2012-01-01

    Full Text Available Latent Semantic Indexing (LSI) is one of the popular techniques in the information retrieval field. Unlike traditional information retrieval techniques, LSI is not based on simple keyword matching; it uses statistical and algebraic computations. Based on Singular Value Decomposition (SVD), the higher-dimensional matrix is converted to a lower-dimensional approximate matrix, from which the noise can be filtered. The issues of synonymy and polysemy in the traditional techniques can also be overcome by examining the terms associated with the documents. However, LSI suffers from a scalability issue due to the computational complexity of SVD. This study presents a distributed LSI algorithm, MR-LSI, which can solve the scalability issue using the Hadoop framework based on the distributed computing model MapReduce. It also addresses the overhead caused by the clustering step by using the k-means algorithm. The evaluations indicate that MR-LSI gains noteworthy improvement compared to the other schemes when processing large document collections. One significant advantage of Hadoop is that it supports various computing environments, which highlights the issue of unbalanced load among nodes. Hence, a load balancing algorithm based on a genetic algorithm for balancing load in a static environment is proposed. The results show that it can improve the performance of a cluster at different load levels.
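
    For readers unfamiliar with the core LSI step, the following minimal sketch applies truncated SVD to a small term-document matrix with scikit-learn; the toy documents and the number of latent dimensions are placeholders, and the Hadoop/MapReduce distribution and load balancing of MR-LSI are not shown.

```python
# Hedged single-machine sketch of the LSI step (truncated SVD on TF-IDF vectors).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "scalable information retrieval with latent semantic indexing",
    "singular value decomposition reduces matrix dimensionality",
    "hadoop mapreduce distributes large document collections",
]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)        # documents as rows of a term-document matrix

lsi = TruncatedSVD(n_components=2)   # low-rank approximation filters noise
X_lsi = lsi.fit_transform(X)         # documents projected into the latent space

query = tfidf.transform(["latent semantic indexing"])
q_lsi = lsi.transform(query)
print(cosine_similarity(q_lsi, X_lsi))  # similarity of the query to each document
```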

  12. Mathematical models for planning support

    NARCIS (Netherlands)

    L.G. Kroon (Leo); R.A. Zuidwijk (Rob)

    2003-01-01

    textabstractIn this paper we describe how computer systems can provide planners with active planning support, when these planners are carrying out their daily planning activities. This means that computer systems actively participate in the planning process by automatically generating plans or parti

  13. Equalizer: a scalable parallel rendering framework.

    Science.gov (United States)

    Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato

    2009-01-01

    Continuing improvements in CPU and GPU performance as well as increasing multi-core processor and cluster-based parallelism demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic enough to support various types of data and visualization applications and, at the same time, work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems, ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations, usage scenarios, and scalability results.

  14. Performance Evaluation of the WSN Routing Protocols Scalability

    OpenAIRE

    2008-01-01

    Scalability is an important factor in designing an efficient routing protocol for wireless sensor networks (WSNs). A good routing protocol has to be scalable and adaptive to changes in the network topology. Thus a scalable protocol should perform well as the network grows larger or as the workload increases. In this paper, routing protocols for wireless sensor networks are simulated and their performances are evaluated to determine their capability for supporting network ...

  15. Modeling and Simulation of Scalable Cloud Computing Environments and the CloudSim Toolkit: Challenges and Opportunities

    CERN Document Server

    Buyya, Rajkumar; Calheiros, Rodrigo N

    2009-01-01

    Cloud computing aims to power the next generation data centers and enables application service providers to lease data center capabilities for deploying applications depending on user QoS (Quality of Service) requirements. Cloud applications have different composition, configuration, and deployment requirements. Quantifying the performance of resource allocation policies and application scheduling algorithms at finer details in Cloud computing environments for different application and service models under varying load, energy performance (power consumption, heat dissipation), and system size is a challenging problem to tackle. To simplify this process, in this paper we propose CloudSim: an extensible simulation toolkit that enables modelling and simulation of Cloud computing environments. The CloudSim toolkit supports modelling and creation of one or more virtual machines (VMs) on a simulated node of a Data Center, jobs, and their mapping to suitable VMs. It also allows simulation of multiple Data Centers to...
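
    CloudSim itself is a Java toolkit; as a hedged illustration of the kind of model it expresses (virtual machines mapped onto simulated data-center hosts), the toy Python sketch below applies a simple first-fit placement policy. The class names, capacities, and the policy are illustrative assumptions, not CloudSim's API.

```python
# Toy VM-to-host placement model, loosely in the spirit of CloudSim scenarios.
from dataclasses import dataclass, field

@dataclass
class Host:
    mips: int                      # CPU capacity of the simulated host
    ram: int                       # memory capacity (MB)
    vms: list = field(default_factory=list)

@dataclass
class Vm:
    mips: int
    ram: int

def first_fit(hosts, vm):
    """Place the VM on the first host with enough spare CPU and memory."""
    for host in hosts:
        used_mips = sum(v.mips for v in host.vms)
        used_ram = sum(v.ram for v in host.vms)
        if used_mips + vm.mips <= host.mips and used_ram + vm.ram <= host.ram:
            host.vms.append(vm)
            return host
    return None                    # rejected: no remaining capacity

hosts = [Host(mips=1000, ram=8192), Host(mips=2000, ram=16384)]
for vm in [Vm(500, 4096), Vm(800, 4096), Vm(1500, 8192)]:
    placed = first_fit(hosts, vm)
    print("placed" if placed else "rejected", vm)
```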

  16. Scalability of Hydrodynamic Simulations

    CERN Document Server

    Tang, Shikui

    2009-01-01

    Many hydrodynamic processes can be studied in a way that is scalable over a vast range of the relevant physical parameter space. We systematically examine this scalability, which has so far been only briefly discussed in the astrophysical literature. We show how the scalability is limited by various constraints imposed by physical processes and initial conditions. Using supernova remnants in different environments and evolutionary phases as application examples, we demonstrate the use of the scaling as a powerful tool to explore the interdependence among relevant parameters, based on a minimum set of simulations. In particular, we devise a scaling scheme that can be used to adaptively generate numerous seed remnants and plant them into 3D hydrodynamic simulations of the supernova-dominated interstellar medium.

  17. Numeric Analysis for Relationship-Aware Scalable Streaming Scheme

    Directory of Open Access Journals (Sweden)

    Heung Ki Lee

    2014-01-01

    Full Text Available Frequent packet loss of media data is a critical problem that degrades the quality of streaming services over mobile networks. Packet loss invalidates frames containing lost packets and other related frames at the same time. Indirect loss caused by losing packets decreases the quality of streaming. A scalable streaming service can decrease the amount of dropped multimedia resulting from a single packet loss. Content providers typically divide one large media stream into several layers through a scalable streaming service and then provide each scalable layer to the user depending on the mobile network. Also, a scalable streaming service makes it possible to decode partial multimedia data depending on the relationship between frames and layers. Therefore, a scalable streaming service provides a way to decrease the wasted multimedia data when one packet is lost. However, the hierarchical structure between frames and layers of scalable streams determines the service quality of the scalable streaming service. Even if whole packets of layers are transmitted successfully, they cannot be decoded as a result of the absence of reference frames and layers. Therefore, the complicated relationship between frames and layers in a scalable stream increases the volume of abandoned layers. To provide a high-quality scalable streaming service, we choose a proper relationship between scalable layers as well as the amount of transmitted multimedia data depending on the network situation. We prove that a simple scalable scheme outperforms a complicated scheme in an error-prone network. We suggest an adaptive set-top box (AdaptiveSTB) to lower the dependency between scalable layers in a scalable stream. Also, we provide a numerical model to obtain the indirect loss of multimedia data and apply it to various multimedia streams. Our AdaptiveSTB enhances the quality of a scalable streaming service by removing indirect loss.
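
    A minimal sketch of the indirect-loss idea, assuming a made-up layer dependency graph rather than the paper's numerical model: a lost layer invalidates every layer that directly or transitively references it.

```python
# Hedged toy model of indirect loss in a layered (scalable) stream.
def undecodable(lost, deps):
    """deps maps each layer to the layers it references (decoding prerequisites);
    returns the set of layers that cannot be decoded, direct plus indirect."""
    bad = set(lost)
    changed = True
    while changed:
        changed = False
        for layer, refs in deps.items():
            if layer not in bad and bad & set(refs):
                bad.add(layer)
                changed = True
    return bad

# Base layer L0; L1 and L2 depend on it; L3 depends on L1 (all names assumed).
deps = {"L0": [], "L1": ["L0"], "L2": ["L0"], "L3": ["L1"]}
print(undecodable({"L1"}, deps))  # {'L1', 'L3'} -> losing L1 indirectly drops L3
```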

  18. Multipoint videoconferencing with scalable video coding

    Institute of Scientific and Technical Information of China (English)

    ELEFTHERIADIS Alexandros; CIVANLAR M. Reha; SHAPIRO Ofer

    2006-01-01

    We describe a system for multipoint videoconferencing that offers extremely low end-to-end delay, low cost and complexity, and high scalability, alongside standard features associated with high-end solutions such as rate matching and personal video layout. The system accommodates heterogeneous receivers and networks based on the Internet Protocol and relies on scalable video coding to provide a coded representation of a source video signal at multiple temporal and spatial resolutions as well as quality levels. These are represented by distinct bitstream components which are created at each end-user encoder. Depending on the specific conferencing environment, some or all of these components are transmitted to a Scalable Video Conferencing Server (SVCS). The SVCS redirects these components to one or more recipients depending on, e.g., the available network conditions and user preferences. The scalable aspect of the video coding technique allows the system to adapt to different network conditions, and also accommodates different end-user requirements (e.g., a user may elect to view another user at a high or low spatial resolution). Performance results concerning flexibility, video quality and delay of the system are presented using the Joint Scalable Video Model (JSVM) of the forthcoming SVC (H.264 Annex G) standard, demonstrating that scalable coding outperforms existing state-of-the-art systems and offers the right platform for building next-generation multipoint videoconferencing systems.

  19. PKI Scalability Issues

    OpenAIRE

    Slagell, Adam J; Bonilla, Rafael

    2004-01-01

    This report surveys different PKI technologies such as PKIX and SPKI and the issues of PKI that affect scalability. Much focus is spent on certificate revocation methodologies and status verification systems such as CRLs, Delta-CRLs, CRS, Certificate Revocation Trees, Windowed Certificate Revocation, OCSP, SCVP and DVCS.

  20. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

    database system. This technique allows unprecedented data volumes to be processed for maps. Scalable execution is achieved by translating Glossy SQL queries into pure relational algebra queries that can run natively in SQL-based spatial analytics systems. The implementation developed during this thesis...... supports the PostgreSQL dialect of SQL. The prototype implementation is a compiler that translates CVL into SQL and stored procedures. (c) TileHeat is a framework and basic algorithm for partial materialization of hot tile sets for scalable map distribution. The framework predicts future map workloads...

  1. Scalable Content Management System

    OpenAIRE

    Sandeep Krishna S, Jayant Dani

    2013-01-01

    Immense growth in the volume of contents every day demands a more scalable system to handle and overcome difficulties in the capture, storage, transformation, search, sharing, and visualization of data, where the data can be structured or unstructured data of any type. A system to manage the growing content and overcome the issues and complexity faced, using appropriate technologies, would provide advantages in measurable qualities like flexibility, interoperability, customizabi...

  2. Scalable Resolution Display Walls

    KAUST Repository

    Leigh, Jason

    2013-01-01

    This article will describe the progress since 2000 on research and development in 2-D and 3-D scalable resolution display walls that are built from tiling individual lower resolution flat panel displays. The article will describe approaches and trends in display hardware construction, middleware architecture, and user-interaction design. The article will also highlight examples of use cases and the benefits the technology has brought to their respective disciplines. © 1963-2012 IEEE.

  3. Linking Remote Sensing Data and Energy Balance Models for a Scalable Agriculture Insurance System for sub-Saharan Africa

    Science.gov (United States)

    Brown, M. E.; Osgood, D. E.; McCarty, J. L.; Husak, G. J.; Hain, C.; Neigh, C. S. R.

    2014-12-01

    One of the most immediate and obvious impacts of climate change is on the weather-sensitive agriculture sector. Both local and global impacts on production of food will have a negative effect on the ability of humanity to meet its growing food demands. Agriculture has become more risky, particularly for farmers in the most vulnerable and food insecure regions of the world such as East Africa. Smallholders and low-income farmers need better financial tools to reduce the risk to food security while enabling productivity increases to meet the needs of a growing population. This paper will describe a recently funded project that brings together climate science, economics, and remote sensing expertise to focus on providing a scalable and sensor-independent remote sensing based product that can be used in developing regional rainfed agriculture insurance programs around the world. We will focus our efforts in Ethiopia and Kenya in East Africa and in Senegal and Burkina Faso in West Africa, where there are active index insurance pilots that can test the effectiveness of our remote sensing-based approach for use in the agriculture insurance industry. The paper will present the overall program, explain links to the insurance industry, and present comparisons of the four remote sensing datasets used to identify drought: the CHIRPS 30-year rainfall data product, the GIMMS 30-year vegetation data product from AVHRR, the ESA soil moisture ECV-30 year soil moisture data product, and a MODIS Evapotranspiration (ET) 15-year dataset. A summary of next year's plans for this project will be presented at the close of the presentation.

  4. WISER: realistic and scalable wireless mobile IP network emulator

    Science.gov (United States)

    Kaplan, M. A.; Cichocki, A.; Demers, S.; Fecko, M. A.; Hokelek, I.; Samtani, S.; Unger, J. W.; Uyar, M. U.; Greear, B.

    2009-05-01

    WISER is a scalable network emulation tool for networks with several hundred heterogeneous wireless nodes. It provides high-fidelity network modeling, exchanges packets in real-time, and faithfully captures the complex interactions among network entities. WISER runs on inexpensive COTS platforms and represents multiple full network stacks, one for each individual virtual node. It supports a flexible open source router platform (XORP) to implement routing protocol stacks. WISER offers wireless MAC emulation capabilities for different types of links, waveforms, radio devices, etc. We present experiments to demonstrate WISER's capabilities enabling a new paradigm for performance evaluation of mobile sensor and ad-hoc networks.

  5. Scalable pattern recognition algorithms applications in computational biology and bioinformatics

    CERN Document Server

    Maji, Pradipta

    2014-01-01

    Reviews the development of scalable pattern recognition algorithms for computational biology and bioinformatics. Includes numerous examples and experimental results to support the theoretical concepts described. Concludes each chapter with directions for future research and a comprehensive bibliography.

  6. Scalable Open Source Smart Grid Simulator (SGSim)

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg; Quaglia, Davide

    2017-01-01

    This paper presents an open source smart grid simulator (SGSim). The simulator is based on the open source SystemC Network Simulation Library (SCNSL) and aims to model scalable smart grid applications. SGSim has been tested under different smart grid scenarios that contain hundreds of thousands of households...

  7. Support vector machine applied in QSAR modelling

    Institute of Scientific and Technical Information of China (English)

    MEI Hu; ZHOU Yuan; LIANG Guizhao; LI Zhiliang

    2005-01-01

    Support vector machine (SVM), partial least squares (PLS), and back-propagation artificial neural network (ANN) methods were employed to establish QSAR models of 2 dipeptide datasets. In order to validate the predictive capabilities of the resulting models on external data, both internal and external validations were performed. The division of the dataset into training and test sets was carried out by D-optimal design. The results showed that the support vector machine (SVM) behaved well in both calibration and prediction. For the dataset of 48 bitter tasting dipeptides (BTD), the results obtained by support vector regression (SVR) were superior to those obtained by PLS in both calibration and prediction. When compared with the BP artificial neural network, SVR showed less calibration power but more predictive capability. For the dataset of angiotensin-converting enzyme (ACE) inhibitors, the results obtained by support vector machine (SVM) regression were equivalent to those obtained by PLS and the BP artificial neural network. In both datasets, SVR using a linear kernel function behaved as well as SVR using a radial basis kernel function. The results show that there is a wide prospect for the application of support vector machines (SVM) in QSAR modeling.
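
    A minimal sketch of the workflow described above (support vector regression with linear and RBF kernels, evaluated on a held-out test set); the random descriptors below merely stand in for the dipeptide datasets, which are not reproduced here.

```python
# Hedged SVR calibration/prediction sketch with placeholder data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(48, 6))                     # 48 samples x 6 descriptors (placeholder)
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=48)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for kernel in ("linear", "rbf"):
    model = SVR(kernel=kernel, C=10.0, epsilon=0.1).fit(X_train, y_train)
    print(kernel,
          "calibration R2:", round(r2_score(y_train, model.predict(X_train)), 3),
          "prediction R2:", round(r2_score(y_test, model.predict(X_test)), 3))
```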

  8. Towards better modelling and decision support

    DEFF Research Database (Denmark)

    Meli, Mattia; Grimm, V; Augusiak, J.

    2014-01-01

    The potential of ecological models for supporting environmental decision making is increasingly acknowledged. However, it often remains unclear whether a model is realistic and reliable enough. Good practice for developing and testing ecological models has not yet been established. Therefore, TRACE, a general framework for documenting a model's rationale, design, and testing, was recently suggested. Originally TRACE was aimed at documenting good modelling practice. However, the word 'documentation' does not convey TRACE's urgency. Therefore, we re-define TRACE as a tool for planning, performing..., thereby also linking modellers and model users, for example stakeholders, decision makers, and developers of policies. We report on first experiences in producing TRACE documents. We found that the original idea underlying TRACE was valid, but to make its use more coherent and efficient, an update of its...

  10. An Optimization Model of Tunnel Support Parameters

    Directory of Open Access Journals (Sweden)

    Su Lijuan

    2015-05-01

    Full Text Available An optimization model was developed to obtain the ideal values of the primary support parameters of tunnels, which are wide-ranging in high-speed railway design codes when the surrounding rocks are at the III, IV, and V levels. First, several sets of experiments were designed and simulated using the FLAC3D software under an orthogonal experimental design. Six factors, namely, level of surrounding rock, buried depth of tunnel, lateral pressure coefficient, anchor spacing, anchor length, and shotcrete thickness, were considered. Second, a regression equation was generated by conducting a multiple linear regression analysis following the analysis of the simulation results. Finally, the optimization model of support parameters was obtained by solving the regression equation using the least squares method. In practical projects, the optimized values of support parameters could be obtained by integrating known parameters into the proposed model. In this work, the proposed model was verified on the basis of the Liuyang River Tunnel Project. Results show that the optimization model significantly reduces related costs. The proposed model can also be used as a reliable reference for other high-speed railway tunnels.
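
    A minimal sketch of the regression step, assuming made-up simulation results: a multiple linear model over the six factors named in the abstract is fitted by least squares. The factor values and responses are illustrative placeholders, not the FLAC3D results.

```python
# Hedged least-squares fit of a multiple linear regression over six design factors.
import numpy as np

# Columns: rock level, buried depth (m), lateral pressure coeff., anchor spacing (m),
# anchor length (m), shotcrete thickness (m); one row per simulated experiment.
X = np.array([
    [3, 100, 0.8, 1.0, 3.0, 0.20],
    [3, 200, 1.0, 1.2, 3.5, 0.25],
    [4, 150, 0.9, 1.0, 4.0, 0.22],
    [4, 250, 1.1, 1.5, 3.0, 0.28],
    [4, 300, 1.2, 1.2, 3.5, 0.30],
    [5, 200, 1.0, 1.5, 4.0, 0.25],
    [5, 300, 1.2, 1.0, 3.0, 0.30],
    [5, 100, 0.8, 1.2, 3.5, 0.20],
])
y = np.array([12.1, 15.8, 16.2, 21.4, 23.0, 22.7, 27.9, 19.5])  # placeholder responses

A = np.column_stack([np.ones(len(X)), X])         # add an intercept column
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)    # least-squares regression equation
print("intercept and factor coefficients:", np.round(coeffs, 3))
```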

  11. Scalability enhanced active-passive-integrated access control model

    Institute of Scientific and Technical Information of China (English)

    翟治年; 卢亚辉; 郭玉彬; 贾连印; 奚建清; 刘艳霞

    2011-01-01

    The three-step authorization mechanism based on task classification and role hierarchy integrates the two access control paradigms of active and passive control. However, the scalability of the related models is seriously affected by repetitive authorizations among tasks, conflicts among task inheritances along multiple role hierarchies, and repetitive expressions of task constraints. To deal with these problems, an enhanced active-passive-integrated access control model is proposed. The classification of active/passive tasks is refined through an extendable subdivision of the role hierarchy, so that many kinds of task assignments can be simplified flexibly. Authorization inheritance and constraint coverage mechanisms based on task generalization are introduced to reduce repetitive authorizations and constraints among tasks. A set of correct and complete semantic coverage rules provides the basis for automatic constraint simplification. Finally, a multiple-granularity permission activation mechanism and a dynamic-mutual-exclusion redundancy detection algorithm are presented to eliminate unnecessary access-checking overhead and to reduce the efficiency loss brought by the scalability enhancement.

  12. Scalable Performance Measurement and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gamblin, Todd [Univ. of North Carolina, Chapel Hill, NC (United States)

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
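
    A hedged sketch of the wavelet-compression idea applied to a synthetic load trace using PyWavelets; the wavelet family, decomposition level, and threshold below are arbitrary choices for illustration, not the settings used in Libra.

```python
# Compress a per-timestep load trace by thresholding its wavelet coefficients.
import numpy as np
import pywt

rng = np.random.default_rng(1)
load = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.05 * rng.normal(size=1024)

coeffs = pywt.wavedec(load, "db4", level=5)            # multi-scale decomposition
compressed = [pywt.threshold(c, 0.1, mode="hard") for c in coeffs]

kept = sum(int(np.count_nonzero(c)) for c in compressed)
total = sum(c.size for c in compressed)
recon = pywt.waverec(compressed, "db4")[:load.size]    # reconstruct and trim padding

print(f"kept {kept}/{total} coefficients, "
      f"max reconstruction error {np.max(np.abs(recon - load)):.3f}")
```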

  13. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro

    2017-04-06

    Provided herein are scalable photoreactors that can include a membrane-free water-splitting electrolyzer and systems that can include a plurality of membrane-free water-splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  14. Optimized scalable network switch

    Science.gov (United States)

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    2010-02-23

    In a massively parallel computing system having a plurality of nodes configured in m multi-dimensions, each node including a computing device, a method for routing packets towards their destination nodes is provided which includes generating at least one of a 2m plurality of compact bit vectors containing information derived from downstream nodes. A multilevel arbitration process in which downstream information stored in the compact vectors, such as link status information and fullness of downstream buffers, is used to determine a preferred direction and virtual channel for packet transmission. Preferred direction ranges are encoded and virtual channels are selected by examining the plurality of compact bit vectors. This dynamic routing method eliminates the necessity of routing tables, thus enhancing scalability of the switch.

  15. A Scalable Module System

    CERN Document Server

    Rabe, Florian

    2011-01-01

    Symbolic and logic computation systems ranging from computer algebra systems to theorem provers are finding their way into science, technology, mathematics and engineering. But such systems rely on explicitly or implicitly represented mathematical knowledge that needs to be managed to use such systems effectively. While mathematical knowledge management (MKM) "in the small" is well-studied, scaling up to large, highly interconnected corpora remains difficult. We hold that in order to realize MKM "in the large", we need representation languages and software architectures that are designed systematically with large-scale processing in mind. Therefore, we have designed and implemented the MMT language -- a module system for mathematical theories. MMT is designed as the simplest possible language that combines a module system, a foundationally uncommitted formal semantics, and web-scalable implementations. Due to a careful choice of representational primitives, MMT allows us to integrate existing representation l...

  16. Transonic Cascade Measurements to Support Analytical Modeling

    Science.gov (United States)

    2007-11-02

    Final report for AFOSR Grant F49260-02-1-0284, Transonic Cascade Measurements to Support Analytical Modeling (Paul A. Durbin, Stanford University), submitted to Dr. John Schmisseur, Air Force Office of Scientific Research. Both spline and control points were used for subsequent wall shape definitions; an algebraic grid generator was used to generate the grid for the blade-wall...

  17. Supporting observation campaigns with high resolution modeling

    Science.gov (United States)

    Klocke, Daniel; Brueck, Matthias; Voigt, Aiko

    2017-04-01

    High-resolution simulation in support of measurement campaigns offers a promising and emerging way to create large-scale context for small-scale observations of clouds and precipitation processes. As these simulations include the coupling of measured small-scale processes with the circulation, they also help to integrate the research communities from modeling and observations and allow for detailed model evaluations against dedicated observations. In connection with the measurement campaign NARVAL (August 2016 and December 2013), simulations with a grid spacing of 2.5 km for the tropical Atlantic region (9000x3300 km), with local refinement to 1.2 km for the western part of the domain, were performed using the icosahedral non-hydrostatic (ICON) general circulation model. These simulations are again used to drive large-eddy-resolving simulations with the same model for selected days in the High Definition Clouds and Precipitation for Advancing Climate Prediction (HD(CP)2) project. The simulations are presented with a focus on selected results showing the benefit for the scientific communities doing atmospheric measurements and numerical modeling of climate and weather. Additionally, an outlook will be given on how similar simulations will support the NAWDEX measurement campaign in the North Atlantic and the AC3 measurement campaign in the Arctic.

  18. Continuous and Discontinuous Galerkin Methods for a Scalable Three-Dimensional Nonhydrostatic Atmospheric Model: Limited-Area Mode

    Science.gov (United States)

    2012-03-09

    For these reasons, high-order EBG methods are excellent candidates for next-generation NWP models. Acknowledgment: The authors acknowledge Shiva Gopalakrishnan for his assistance in analyzing the bottlenecks of the MPI codes as well as running some of the simulations. In addition we thank both Shiva

  19. Centralized and distributed architectures of scalable video conferencing services

    OpenAIRE

    Le, Tien Anh; Nguyen, Hang

    2010-01-01

    International audience; The Multipoint Control Unit-based centralized architecture and Application Layer Multicast-based distributed architecture are mainly used for data distribution in video conferencing services. With the contribution of Scalable Video Coding, the latest extension of Advanced Video Coding, video conferencing services are being further researched to support terminals' scalability. The main contribution of this research is to answer a fundamental question before a video conf...

  20. Sherlock: Scalable Fact Learning in Images

    OpenAIRE

    Elhoseiny, Mohamed; Cohen, Scott; Chang, Walter; Price, Brian; Elgammal, Ahmed

    2015-01-01

    We study scalable and uniform understanding of facts in images. Existing visual recognition systems are typically modeled differently for each fact type, such as objects, actions, and interactions. We propose a setting where all these facts can be modeled simultaneously, with a capacity to understand an unbounded number of facts in a structured way. The training data comes as structured facts in images, including (1) objects (e.g., $$), (2) attributes (e.g., $$), (3) actions (e.g., $$), and (4) in...

  1. Designing a Scalable Fault Tolerance Model for High Performance Computational Chemistry: A Case Study with Coupled Cluster Perturbative Triples.

    Science.gov (United States)

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2011-01-11

    In the past couple of decades, the massive computational power provided by the most modern supercomputers has enabled simulation of higher-order computational chemistry methods previously considered intractable. As system sizes continue to increase, the computational chemistry domain continues to escalate this trend using parallel computing with programming models such as the Message Passing Interface (MPI) and Partitioned Global Address Space (PGAS) models such as Global Arrays. The ever-increasing scale of these supercomputers comes at the cost of reduced Mean Time Between Failures (MTBF), currently on the order of days and projected to be on the order of hours for upcoming extreme-scale systems. While traditional disk-based checkpointing methods are ubiquitous for storing intermediate solutions, they suffer from the high overhead of writing and recovering from checkpoints. In practice, checkpointing itself often brings the system down. Clearly, methods beyond checkpointing are imperative for handling the worsening problem of shrinking MTBF. In this paper, we address this challenge by designing and implementing an efficient fault-tolerant version of the Coupled Cluster (CC) method with NWChem, using in-memory data redundancy. We present the challenges associated with our design, including an efficient data storage model, maintenance of at least one consistent data copy, and the recovery process. Our performance evaluation without faults shows that the current design exhibits a small overhead. In the presence of a simulated fault, the proposed design incurs negligible overhead in comparison to the state-of-the-art implementation without faults.

  2. Multiscale Modeling of Supported Lipid Bilayers

    Science.gov (United States)

    Hoopes, Matthew I.; Xing, Chenyue; Faller, Roland

    Cell membranes consist of a multitude of lipid molecules that serve as a framework for the even greater variety of membrane associated proteins [1-4]. As this highly complex (nonequilibrium) system cannot easily be understood and studied in a controlled way, a wide variety of model systems have been devised to understand the dynamics, structure, and thermodynamics in biological membranes. One such model system is a supported lipid bilayer (SLB), a two-dimensional membrane suspended on a surface. SLBs have been realized to be manageable experimentally while reproducing many of the key features of real biological membranes [5,6]. One of the main advantages of supported bilayers is the physical stability due to the solid support that enables a wide range of surface characterization techniques not available to free or unsupported membranes. As SLBs maintain some of the crucial structural and dynamic properties of biological membranes, they provide an important bridge to natural systems. In order to mimic cell membranes reliably, certain structural and dynamic features have to be reliably reproduced in the artificially constructed lipid bilayers. SLBs should display lateral mobility as in living cells, because many membrane activities involve transport, recruitment, or assembly of specific components. It is also critical for membranes to exhibit the correct thermodynamic phase, namely, a fluid lipid bilayer, to respond to environmental stress such as temperature and pressure changes [7]. There are several ways to fabricate supported lipid bilayers (SLBs) on planar substrates. One can use vesicle fusion on solid substrates [5,8-10] as well as Langmuir-Blodgett deposition [11,12]. Proteoliposome adsorption and subsequent membrane formation on a mica surface was first demonstrated by Brian and McConnell [13]. Because of its simplicity and reproducibility, this is one of the most common approaches to prepare supported membranes. A diverse range of different solid substrates

  3. Myria: Scalable Analytics as a Service

    Science.gov (United States)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

    At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.

  4. Clinical Productivity System - A Decision Support Model

    CERN Document Server

    Bennett, Casey C

    2012-01-01

    Purpose: The goal of this study was to evaluate the effects of a data-driven clinical productivity system that leverages Electronic Health Record (EHR) data to provide productivity decision support functionality in a real-world clinical setting. The system was implemented for a large behavioral health care provider seeing over 75,000 distinct clients a year. Design/methodology/approach: The key metric in this system is a "VPU", which simultaneously optimizes multiple aspects of clinical care. The resulting mathematical value of clinical productivity was hypothesized to tightly link the organization's performance to its expectations and, through transparency and decision support tools at the clinician level, effect significant changes in productivity, quality, and consistency relative to traditional models of clinical productivity. Findings: In only 3 months, every single variable integrated into the VPU system showed significant improvement, including a 30% rise in revenue, a 10% rise in clinical percentage, a...

  5. Scalable Nanomanufacturing—A Review

    Directory of Open Access Journals (Sweden)

    Khershed Cooper

    2017-01-01

    Full Text Available This article describes the field of scalable nanomanufacturing, its importance and need, its research activities and achievements. The National Science Foundation is taking a leading role in fostering basic research in scalable nanomanufacturing (SNM). From this effort several novel nanomanufacturing approaches have been proposed, studied and demonstrated, including scalable nanopatterning. This paper will discuss SNM research areas in materials, processes and applications, scale-up methods with project examples, and manufacturing challenges that need to be addressed to move nanotechnology discoveries closer to the marketplace.

  6. A scalable satellite-based crop yield mapper: Integrating satellites and crop models for field-scale estimation in India

    Science.gov (United States)

    Jain, M.; Singh, B.; Srivastava, A.; Lobell, D. B.

    2015-12-01

    Food security will be challenged over the upcoming decades due to increased food demand, natural resource degradation, and climate change. In order to identify potential solutions to increase food security in the face of these changes, tools that can rapidly and accurately assess farm productivity are needed. With this aim, we have developed generalizable methods to map crop yields at the field scale using a combination of satellite imagery and crop models, and implement this approach within Google Earth Engine. We use these methods to examine wheat yield trends in Northern India, which provides over 15% of the global wheat supply and where over 80% of farmers rely on wheat as a staple food source. In addition, we identify the extent to which farmers are shifting sow date in response to heat stress, and how well shifting sow date reduces the negative impacts of heat stress on yield. To identify local-level decision-making, we map wheat sow date and yield at a high spatial resolution (30 m) using Landsat satellite imagery from 1980 to the present. This unique dataset allows us to examine sow date decisions at the field scale over 30 years, and by relating these decisions to weather experienced over the same time period, we can identify how farmers learn and adapt cropping decisions based on weather through time.

  7. Intratracheal Bleomycin Aerosolization: The Best Route of Administration for a Scalable and Homogeneous Pulmonary Fibrosis Rat Model?

    Directory of Open Access Journals (Sweden)

    Alexandre Robbe

    2015-01-01

    Full Text Available Idiopathic pulmonary fibrosis (IPF) is a chronic disease with a poor prognosis and is characterized by the accumulation of fibrotic tissue in the lungs resulting from a dysfunction in the healing process. In humans, the pathological process is patchy and temporally heterogeneous, and the exact mechanisms remain poorly understood. Different animal models were thus developed. Among these, intratracheal administration of bleomycin (BLM) is one of the most frequently used methods to induce lung fibrosis in rodents. In the present study, we first characterized histologically the time-course of lung alteration in rats submitted to BLM instillation. Heterogeneous damage was observed among lungs, consisting of an inflammatory phase at early time-points. It was followed by a transition to a fibrotic state characterized by an increased myofibroblast number and collagen accumulation. We then compared instillation and aerosolization routes of BLM administration. The fibrotic process was studied in each pulmonary lobe using a modified Ashcroft scale. The two quantification methods were compared and the interobserver variability evaluated. Both methods induced fibrosis development, as demonstrated by a similar progression of the highest modified Ashcroft score. However, we highlighted that aerosolization allows a more homogeneous distribution of lesions among lungs, with persistence of higher-grade damage over time.

  8. Scalable and balanced dynamic hybrid data assimilation

    Science.gov (United States)

    Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa

    2017-04-01

    Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small and that results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter EKF. Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother VEnKS. In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble but now using past iterations as surrogate observations until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process by implementing the latter as a wrapper code whose only link to the model is calling for many parallel and totally independent model runs, all of them
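
    A one-dimensional toy sketch of the resampling idea attributed to VEnKF above: a Gaussian approximation (mean and variance) of the state error is kept, a small ensemble is re-sampled from it whenever new observations arrive, and the Gaussian is updated with a Kalman-type step. The model, gain computation, and numbers are simplifications for illustration, not the VEnKF algorithm itself.

```python
# Hedged 1-D illustration of re-sampling an ensemble from a Gaussian approximation.
import numpy as np

rng = np.random.default_rng(2)

def model(x):                  # toy forecast model
    return 0.95 * x + 1.0

mean, var = 0.0, 1.0           # Gaussian approximation of the state error
obs_var = 0.5
n_members = 20

for obs in [2.0, 3.1, 3.9]:    # sequence of incoming observations
    members = mean + np.sqrt(var) * rng.normal(size=n_members)  # re-sample ensemble
    forecast = model(members)                                   # propagate each member
    f_mean, f_var = forecast.mean(), forecast.var(ddof=1)

    gain = f_var / (f_var + obs_var)                            # scalar Kalman gain
    mean = f_mean + gain * (obs - f_mean)
    var = (1.0 - gain) * f_var

print(f"analysis mean {mean:.2f}, analysis variance {var:.3f}")
```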

  9. Modeling, simulation, and fabrication of a fully integrated, acid-stable, scalable solar-driven water-splitting system.

    Science.gov (United States)

    Walczak, Karl; Chen, Yikai; Karp, Christoph; Beeman, Jeffrey W; Shaner, Matthew; Spurgeon, Joshua; Sharp, Ian D; Amashukeli, Xenia; West, William; Jin, Jian; Lewis, Nathan S; Xiang, Chengxiang

    2015-02-01

    A fully integrated solar-driven water-splitting system comprised of WO3/FTO/p(+)n Si as the photoanode, Pt/TiO2/Ti/n(+)p Si as the photocathode, and Nafion as the membrane separator, was simulated, assembled, operated in 1.0 M HClO4, and evaluated for performance and safety characteristics under dual side illumination. A multi-physics model that accounted for the performance of the photoabsorbers and electrocatalysts, ion transport in the solution electrolyte, and gaseous product crossover was first used to define the optimal geometric design space for the system. The photoelectrodes and the membrane separators were then interconnected in a louvered design system configuration, for which the light-absorbing area and the solution-transport pathways were simultaneously optimized. The performance of the photocathode and the photoanode were separately evaluated in a traditional three-electrode photoelectrochemical cell configuration. The photocathode and photoanode were then assembled back-to-back in a tandem configuration to provide sufficient photovoltage to sustain solar-driven unassisted water-splitting. The current-voltage characteristics of the photoelectrodes showed that the low photocurrent density of the photoanode limited the overall solar-to-hydrogen (STH) conversion efficiency due to the large band gap of WO3. A hydrogen-production rate of 0.17 mL hr(-1) and an STH conversion efficiency of 0.24% were observed in a full cell configuration for >20 h with minimal product crossover in the fully operational, intrinsically safe, solar-driven water-splitting system. The solar-to-hydrogen conversion efficiency, ηSTH, calculated using the multiphysics numerical simulation was in excellent agreement with the experimental behavior of the system. The value of ηSTH was entirely limited by the performance of the photoelectrochemical assemblies employed in this study. The louvered design provides a robust platform for implementation of various types of

  10. Practical scalability assesment for parallel scientific numerical applications

    CERN Document Server

    Perlin, Natalie; Kirtman, Ben P

    2016-01-01

    The concept of scalability analysis of numerical parallel applications has been revisited, with specific goals defined for the performance estimation of research applications. A series of Community Climate System Model (CCSM) numerical simulations were used to test several MPI implementations, determine optimal use of the system resources, and assess their scalability. The scaling capacity and model throughput performance metrics for $N$ cores showed a log-linear behavior approximated by a power fit in the form of $C(N)=bN^a$, where $a$ and $b$ are two empirical constants. Different metrics yielded identical power coefficients ($a$), but different dimensionality coefficients ($b$). This model was consistent except for very large numbers of $N$. The power-fit approach appears to be very useful for scalability estimates, especially when no serial testing is possible. Scalability analysis of an additional scientific application has been conducted in a similar way to validate the robustness of the power-fit approach...
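
    The power fit C(N) = bN^a can be estimated with an ordinary linear least-squares fit in log-log space; a minimal sketch with made-up throughput numbers:

```python
# Fit C(N) = b * N**a by linear regression on log-transformed data.
import numpy as np

cores = np.array([16, 32, 64, 128, 256, 512])
throughput = np.array([1.9, 3.5, 6.4, 11.2, 18.0, 27.5])  # e.g. simulated years/day (made up)

a, log_b = np.polyfit(np.log(cores), np.log(throughput), 1)
b = np.exp(log_b)

print(f"C(N) ~ {b:.3f} * N**{a:.3f}")
print("predicted throughput on 1024 cores:", round(b * 1024 ** a, 1))
```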

  11. Scalable Gravity Offload System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — A scalable gravity offload device simulates reduced gravity for the testing of various surface system elements such as mobile robots, excavators, habitats, and...

  12. Scalable Gravity Offload System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed innovation is a scalable gravity off-load system that enables controlled integrated testing of Surface System elements such as rovers, habitats, and...

  13. Scalable Quasineutral solver for gyrokinetic simulation

    OpenAIRE

    Latu, Guillaume; Grandgirard, Virginie; Crouseilles, Nicolas; Dif-Pradalier, Guilhem

    2011-01-01

    Modeling turbulent transport is a major goal in order to predict confinement issues in a tokamak plasma. The gyrokinetic framework considers a computational domain in five dimensions to look at kinetic issues in a plasma. Gyrokinetic simulations lead to huge computational needs. Up to now, the gyrokinetic code GYSELA performed large simulations using a few thousand cores. The work proposed here improves GYSELA on two points: memory scalability and execution time. The new solution allows

  14. Side-information Scalable Source Coding

    CERN Document Server

    Tian, Chao

    2007-01-01

    The problem of side-information scalable (SI-scalable) source coding is considered in this work, where the encoder constructs a progressive description such that the receiver with high-quality side information will be able to truncate the bitstream and reconstruct in the rate-distortion sense, while the receiver with low-quality side information will have to receive further data in order to decode. We provide inner and outer bounds for general discrete memoryless sources. The achievable region is shown to be tight for the case that either of the decoders requires a lossless reconstruction, as well as for the case with degraded deterministic distortion measures. Furthermore, we show that the gap between the achievable region and the outer bounds can be bounded by a constant when the squared-error distortion measure is used. The notion of perfectly scalable coding is introduced as both stages operating on the Wyner-Ziv bound, and necessary and sufficient conditions are given for sources satisfying a mild support condi...

  15. Enabling Highly-Scalable Remote Memory Access Programming with MPI-3 One Sided

    Directory of Open Access Journals (Sweden)

    Robert Gerstenberger

    2014-01-01

    Full Text Available Modern interconnects offer remote direct memory access (RDMA) features. Yet, most applications rely on explicit message passing for communication despite its unwanted overheads. The MPI-3.0 standard defines a programming interface for exploiting RDMA networks directly; however, its scalability and practicability have to be demonstrated in practice. In this work, we develop scalable bufferless protocols that implement the MPI-3.0 specification. Our protocols support scaling to millions of cores with negligible memory consumption while providing highest performance and minimal overheads. To arm programmers, we provide a spectrum of performance models for all critical functions and demonstrate the usability of our library and models with several application studies with up to half a million processes. We show that our design is comparable to, or better than, UPC and Fortran Coarrays in terms of latency, bandwidth, and message rate. We also demonstrate application performance improvements with comparable programming complexity.
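
    A minimal sketch of MPI-3 one-sided communication, here via mpi4py rather than the authors' protocols: each rank exposes a window of memory and writes into its neighbour's window with Put, without any matching receive on the target side.

```python
# Hedged mpi4py sketch of RMA: neighbour exchange through a one-sided Put.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

local = np.zeros(1, dtype='d')            # memory exposed to remote Put operations
win = MPI.Win.Create(local, comm=comm)

value = np.array([float(rank)], dtype='d')
target = (rank + 1) % size

win.Fence()                               # open an RMA access epoch
win.Put(value, target)                    # write into the neighbour's window
win.Fence()                               # close the epoch; all Puts are now visible

print(f"rank {rank} received {local[0]:.0f} from rank {(rank - 1) % size}")
win.Free()
```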

  16. Global Urbanization Modeling Supported by Remote Sensing

    Science.gov (United States)

    Zhou, Y.; Smith, S.; Zhao, K.; Imhoff, M. L.; Thomson, A. M.; Bond-Lamberty, B. P.; Elvidge, C.

    2014-12-01

    Urbanization, one of the major human-induced land cover and land use changes, has profound impacts on the Earth system and plays important roles in a variety of processes such as biodiversity loss, the water and carbon cycles, and climate change. Accurate information on urban areas and their spatial distribution at the regional and global scales is important to both the scientific and policy-making communities. The Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) nighttime stable light (NTL) data provide a potential way to map urban areas and their dynamics economically and in a timely manner. In this study, we developed a cluster-based method to estimate the optimal thresholds and map urban extents from the DMSP/OLS NTL data. The sensitivity analysis demonstrates the robustness of the derived optimal thresholds and the reliability of the cluster-based method. Compared to existing threshold techniques, our method reduces the over- and under-estimation issues when mapping urban extent over a large area. Using this cluster-based method, we built new global maps of 1-km urban extent from the NTL data (Figure 1) and evaluated their temporal dynamics from 1992 to 2013. Supported by the derived global urban maps and socio-economic drivers, we developed an integrated modeling framework by integrating a top-down macro-scale statistical model with a bottom-up urban growth model, and projected future urban expansion.
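
    A hedged toy sketch of a cluster-based thresholding idea for night-time lights: contiguous lit clusters are labelled and each is thresholded separately. Taking each threshold as a fixed fraction of the cluster maximum is an assumption made purely for illustration; the study estimates optimal per-cluster thresholds differently.

```python
# Label contiguous lit clusters in a toy NTL grid and threshold each one locally.
import numpy as np
from scipy import ndimage

ntl = np.array([[ 0,  5, 40, 45,  0],
                [ 0, 10, 60, 50,  0],
                [ 0,  0,  0,  8, 12],
                [ 0,  0,  0, 20, 55]], dtype=float)   # toy digital-number values

lit = ntl > 0
labels, n_clusters = ndimage.label(lit)               # contiguous potentially-urban clusters

urban = np.zeros_like(lit)
for k in range(1, n_clusters + 1):
    cluster = labels == k
    threshold = 0.5 * ntl[cluster].max()              # per-cluster threshold (assumed rule)
    urban |= cluster & (ntl >= threshold)

print(urban.astype(int))
```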

  17. A CommonKADS Model Framework for Web Based Agricultural Decision Support System

    Directory of Open Access Journals (Sweden)

    Jignesh Patel

    2015-01-01

    Full Text Available Increased demand for farm products and depletion of natural resources compel the agriculture community to increase the use of Information and Communication Technology (ICT) in various farming processes. Agricultural Decision Support Systems (DSS) have proved useful in this regard. The majority of available Agricultural DSSs are either crop or task specific. Less emphasis has been placed on the development of comprehensive DSS, which are non-specific regarding crops or farming processes. The crop or task specific DSSs are mainly developed with rule-based or knowledge-transfer-based approaches. The DSSs based on these methodologies lack the ability for scaling up and generalization. The knowledge engineering modeling approach is more suitable for the development of large and generalized DSS. Unfortunately, the model-based knowledge engineering approach is not much exploited for the development of Agricultural DSS. CommonKADS is one of the popular modeling frameworks used for the development of Knowledge Based Systems (KBS). The paper presents the organization, agent, task, communication, knowledge and design models based on the CommonKADS approach for the development of scalable Agricultural DSS. A specific web based DSS application is used for demonstrating the multi-agent CommonKADS modeling approach. The system offers decision support for irrigation scheduling and weather based disease forecasting for the popular crops of India. The proposed framework, along with the required expert knowledge, provides a platform on which larger DSS can be built for any crop at a given location.

  18. Supporting Collaborative Model and Data Service Development and Deployment with DevOps

    Science.gov (United States)

    David, O.

    2016-12-01

    Adopting DevOps practices for model service development and deployment enables a community to engage in service-oriented modeling and data management. The Cloud Services Integration Platform (CSIP), developed over the last 5 years at Colorado State University, provides for collaborative integration of environmental models into scalable model and data services as a micro-services platform with API and deployment infrastructure. Originally developed to support USDA natural resource applications, it proved suitable for a wider range of applications in the environmental modeling domain. While extending its scope and visibility, it became apparent that community integration and adequate workflow support through the full model development and application cycle drove successful outcomes. DevOps provide best practices, tools, and organizational structures to optimize the transition from model service development to deployment by minimizing (i) the operational burden and (ii) the turnaround time for modelers. We have developed and implemented a methodology to fully automate a suite of applications for application lifecycle management, version control, continuous integration, container management, and container scaling to enable model and data service developers in various institutions to collaboratively build, run, deploy, test, and scale services within minutes. To date more than 160 model and data services are available for applications in hydrology (PRMS, Hydrotools, CFA, ESP), water and wind erosion prediction (WEPP, WEPS, RUSLE2), soil quality trends (SCI, STIR), water quality analysis (SWAT-CP, WQM, CFA, AgES-W), stream degradation assessment (SWAT-DEG), hydraulics (cross-section), and grazing management (GRAS). In addition, supporting data services include soil (SSURGO), ecological site (ESIS), climate (CLIGEN, WINDGEN), land management and crop rotations (LMOD), and pesticides (WQM), developed using this workflow automation and decentralized governance.

  19. Modeling PMESII Factors to Support Strategic Education

    Science.gov (United States)

    2008-06-11

    [Fragmentary briefing-slide text. Recoverable points: a notional, educational Cuba scenario was used for a PMESII modeling trial; the scenario modeled support for the ruling government among factions (e.g., Cuban Americans) and in cities such as Havana, Santiago de Cuba and Camaguey; the work was run as an iterative "running start" so as not to distract from the ongoing exercise cycle; JFCOM and USAWC maintained independent Cuba representations; a PMESII capability brief and scenario documentation were provided.]

  20. Overload prevention in model supports for wind tunnel model testing

    Directory of Open Access Journals (Sweden)

    Anton IVANOVICI

    2015-09-01

    Full Text Available Preventing overloads in wind tunnel model supports is crucial to the integrity of the tested system. Results can only be interpreted as valid if the model support, conventionally called a sting, remains sufficiently rigid during testing. Modeling and preliminary calculation can only give an estimate of the sting's behavior under known forces and moments, but unpredictable, aerodynamically caused model behavior can sometimes produce large transient overloads that cannot be accounted for at the sting design phase. To ensure model integrity and data validity, an analog fast protection circuit was designed and tested. A post-factum analysis was carried out to optimize the overload detection, and a short discussion of aeroelastic phenomena is included to show why such a detector has to be very fast. The last refinement of the concept consists of a fast detector coupled with a slightly slower one to differentiate between transient overloads that decay in time and those that result from unwanted aeroelastic phenomena. The decision to stop or continue the test is therefore taken conservatively, preserving data and model integrity while allowing normal startup loads and transients to manifest.
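
    The protection circuit described here is analog; purely as a digital illustration of the dual-detector idea (a fast instantaneous threshold combined with a slower, averaged one that lets decaying startup transients pass), a small sketch follows. The limits, window length and sample signals are invented.

```python
# Digital illustration of a dual-detector overload monitor. All numbers are invented.
import numpy as np

FAST_LIMIT = 100.0   # instantaneous load limit
SLOW_LIMIT = 60.0    # sustained (averaged) load limit
WINDOW = 50          # samples used by the slow detector

def check_overload(load: np.ndarray) -> str:
    for i, value in enumerate(load):
        if abs(value) > FAST_LIMIT:                       # fast detector
            return f"fast trip at sample {i}"
        if i >= WINDOW and np.mean(np.abs(load[i - WINDOW + 1:i + 1])) > SLOW_LIMIT:
            return f"slow trip at sample {i} (sustained load)"
    return "no trip"

t = np.linspace(0.0, 1.0, 1000)
startup_transient = 80.0 * np.exp(-t / 0.05)   # decays quickly: should not trip
sustained_overload = 70.0 * np.ones_like(t)    # persistent load: should trip
print(check_overload(startup_transient))
print(check_overload(sustained_overload))
```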

  1. Programming Scala Scalability = Functional Programming + Objects

    CERN Document Server

    Wampler, Dean

    2009-01-01

    Learn how to be more productive with Scala, a new multi-paradigm language for the Java Virtual Machine (JVM) that integrates features of both object-oriented and functional programming. With this book, you'll discover why Scala is ideal for highly scalable, component-based applications that support concurrency and distribution. Programming Scala clearly explains the advantages of Scala as a JVM language. You'll learn how to leverage the wealth of Java class libraries to meet the practical needs of enterprise and Internet projects more easily. Packed with code examples, this book provides us

  2. Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Gropp, William D.

    2014-06-23

    With the coming end of Moore's law, it has become essential to develop new algorithms and techniques that can provide the performance needed by demanding computational science applications, especially those that are part of the DOE science mission. This work was part of a multi-institution, multi-investigator project that explored several approaches to develop algorithms that would be effective at the extreme scales and with the complex processor architectures that are expected at the end of this decade. The work by this group developed new performance models that have already helped guide the development of highly scalable versions of an algebraic multigrid solver, new programming approaches designed to support numerical algorithms on heterogeneous architectures, and a new, more scalable version of conjugate gradient, an important algorithm in the solution of very large linear systems of equations.

  4. Scalable cloud without dedicated storage

    Science.gov (United States)

    Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.

    2015-05-01

    We present a prototype of a scalable computing cloud. It is intended to be deployed on the basis of a cluster without separate dedicated storage. The dedicated storage is replaced by distributed software storage, and all cluster nodes are used both as computing nodes and as storage nodes. This solution increases the utilization of cluster resources and improves the fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with relatively low initial and maintenance costs. The solution is built from open source components such as OpenStack and Ceph.

  5. Examining the Support Peer Supporters Provide Using Structural Equation Modeling: Nondirective and Directive Support in Diabetes Management.

    Science.gov (United States)

    Kowitt, Sarah D; Ayala, Guadalupe X; Cherrington, Andrea L; Horton, Lucy A; Safford, Monika M; Soto, Sandra; Tang, Tricia S; Fisher, Edwin B

    2017-04-17

    Little research has examined the characteristics of peer support. Pertinent to such examination may be the distinction between nondirective support (accepting recipients' feelings and cooperating with their plans) and directive support (prescribing "correct" choices and feelings). In a peer support program for individuals with diabetes, this study examined (a) whether the distinction between nondirective and directive support was reflected in participants' ratings of support provided by peer supporters and (b) how nondirective and directive support were related to depressive symptoms, diabetes distress, and Hemoglobin A1c (HbA1c). Three hundred fourteen participants with type 2 diabetes provided data on depressive symptoms, diabetes distress, and HbA1c before and after a diabetes management intervention delivered by peer supporters. At post-intervention, participants reported how the support provided by peer supporters was nondirective or directive. Confirmatory factor analysis (CFA), correlation analyses, and structural equation modeling examined the relationships among reports of nondirective and directive support, depressive symptoms, diabetes distress, and measured HbA1c. CFA confirmed the factor structure distinguishing between nondirective and directive support in participants' reports of support delivered by peer supporters. Controlling for demographic factors, baseline clinical values, and site, structural equation models indicated that at post-intervention, participants' reports of nondirective support were significantly associated with lower depressive symptoms, while reports of directive support were significantly associated with greater depressive symptoms, altogether (with control variables) accounting for 51% of the variance in depressive symptoms. Peer supporters' nondirective support was associated with lower, but directive support was associated with greater, depressive symptoms.

  6. Federated Search Scalability

    OpenAIRE

    Txurruka Alberdi, Beñat

    2015-01-01

    Searching for images on the internet has become a natural process for internet users. Most search engines use complex algorithms to look up images, but image metadata is mostly ignored, in part because many image hosting sites remove metadata when an image is uploaded. The JPSearch standard has been developed to handle interoperability in metadata-based searches, but it seems that the market is not interested in supporting it. The starting point of this proje...

  7. Scalability study of solid xenon

    CERN Document Server

    Yoo, J; Jaskierny, W F; Markley, D; Pahlka, R B; Balakishiyeva, D; Saab, T; Filipenko, M

    2015-01-01

    We report a demonstration of the scalability of optically transparent xenon in the solid phase for use as a particle detector above the kilogram scale. We employed a cryostat cooled by liquid nitrogen combined with a xenon purification and chiller system. A modified Bridgman technique reproduces large-scale optically transparent solid xenon.

  8. Scalability study of solid xenon

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, J.; Cease, H.; Jaskierny, W. F.; Markley, D.; Pahlka, R. B.; Balakishiyeva, D.; Saab, T.; Filipenko, M.

    2015-04-01

    We report a demonstration of the scalability of optically transparent xenon in the solid phase for use as a particle detector above the kilogram scale. We employed a cryostat cooled by liquid nitrogen combined with a xenon purification and chiller system. A modified Bridgman technique reproduces large-scale optically transparent solid xenon.

  9. Scalable shared-memory multiprocessing

    CERN Document Server

    Lenoski, Daniel E

    1995-01-01

    Dr. Lenoski and Dr. Weber have experience with leading-edge research and practical issues involved in implementing large-scale parallel systems. They were key contributors to the architecture and design of the DASH multiprocessor. Currently, they are involved with commercializing scalable shared-memory technology.

  10. A Web-based Scalable Multi-Sensor Data Fusion System Model

    Institute of Scientific and Technical Information of China (English)

    何佳洲; 陈世福

    2002-01-01

    With the development of broadband network technology, there is a need for information fusion by anyone, at any time and anywhere. Data fusion systems (DFS) based on the client/server model can process real-time data well and with high security, but they usually lack usability and scalability. The browser/server model underlying the Internet, by contrast, is highly extensible, and a browser is convenient to use. In this paper, by combining the merits of the two architectures, we propose a Web-based multi-sensor DFS model that can not only process real-time data but also ensure the system's usability and scalability. Secondly, separating the data server from the Web server, so that the fusion can focus on its different resources, guarantees the robustness of the system. Thirdly, the security concerns related to the Internet are addressed through the two-fold protection of identity authentication and information encryption. Finally, a dual-blackboard implementation scheme is given.

  11. Scalable resource management in high performance computers.

    Energy Technology Data Exchange (ETDEWEB)

    Frachtenberg, E. (Eitan); Petrini, F. (Fabrizio); Fernandez Peinador, J. (Juan); Coll, S. (Salvador)

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with a background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  12. Light Models of Civilian Support in Blue-Red Operations

    Science.gov (United States)

    2012-06-01

    [Fragmentary briefing-slide text. Recoverable points: the model of civilian support in Blue-Red operations is built from social-cognitive constructs (perceptions, disposition, moral disengagement, personality, beliefs, grievance) together with vulnerability, sacrifice, anger and commitment.]

  13. Enterprise Modelling supported by Manufacturing Systems Theory

    OpenAIRE

    MYKLEBUST, Odd

    2002-01-01

    A large number of enterprise models and enterprise modelling approaches exist today. A study of standards and project-developed models points to two approaches, CIMOSA ("The Open Systems Architecture for CIM") and GERAM ("Generalised Enterprise Reference Architecture"), which show a system orientation that can be followed further as an interesting research topic for a system-theory-oriented approach to enterprise models. In the selection of system theories, manufacturing system theory...

  14. Modeling uncertainty in requirements engineering decision support

    Science.gov (United States)

    Feather, Martin S.; Maynard-Zhang, Pedrito; Kiper, James D.

    2005-01-01

    One inherent characteristic of requirements engineering is a lack of certainty during this early phase of a project. Nevertheless, decisions about requirements must be made in spite of this uncertainty. Here we describe the context in which we are exploring this, and some initial work to support elicitation of uncertain requirements and to deal with the combination of such information from multiple stakeholders.

  16. Cognitive model supported tactical training simulation

    NARCIS (Netherlands)

    Doesburg, W.A. van; Bosch, K. van den

    2005-01-01

    Simulation-based tactical training can be made more effective by using cognitive software agents to play key roles (e.g. team mate, adversaries, instructor). Due to the dynamic and complex nature of military tactics, it is hard to create agents that behave realistically and support the training lead

  17. Rhode Island Model Evaluation & Support System: Teacher. Edition III

    Science.gov (United States)

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful educator evaluation and support system will help improve teaching and learning. The primary purpose of the Rhode Island Model Teacher Evaluation and Support System (Rhode Island Model) is to help all teachers improve. Through the Model, the goal is to help create a…

  18. Le Bon Samaritain: A Community-Based Care Model Supported by Technology.

    Science.gov (United States)

    Gay, Valerie; Leijdekkers, Peter; Gill, Asif; Felix Navarro, Karla

    2015-01-01

    The effective care and well-being of a community is a challenging task, especially in an emergency situation. Traditional technology-based silos between health and emergency services are challenged by the changing needs of the community, which could benefit from integrated health and safety services. Low-cost smart-home automation solutions, wearable devices and Cloud technology make it feasible for communities to interact with each other, and with health and emergency services, in a timely manner. This paper proposes a new community-based care model, supported by technology, that aims at reducing healthcare and emergency services costs while allowing the community to become resilient in response to health and emergency situations. We looked at models of care in different industries and identified the type of technology that can support the suggested new model of care. Two prototypes were developed to validate the adequacy of the technology. The result is a new community-based model of care called 'Le Bon Samaritain'. It relies on a network of people called 'Bons Samaritains' willing to help with the basic care and safety aspects of their community. Their role is to make sure that people in their community receive and understand the messages from emergency and health services. The new care model is integrated with existing emergency warning, community and health services. The Le Bon Samaritain model is scalable and community-based, and can help people feel safer, less isolated and more integrated in their community. It could be the key to reducing healthcare costs, increasing resilience and driving the change towards a more integrated emergency and care system.

  19. Traffic and Quality Characterization of the H.264/AVC Scalable Video Coding Extension

    Directory of Open Access Journals (Sweden)

    Geert Van der Auwera

    2008-01-01

    Full Text Available The recent scalable video coding (SVC) extension to the H.264/AVC video coding standard has unprecedented compression efficiency while supporting a wide range of scalability modes, including temporal, spatial, and quality (SNR) scalability, as well as combined spatiotemporal SNR scalability. The traffic characteristics, especially the bit rate variabilities, of the individual layer streams critically affect their network transport. We study the SVC traffic statistics, including the bit rate distortion and bit rate variability distortion, with long CIF resolution video sequences and compare them with the corresponding MPEG-4 Part 2 traffic statistics. We consider (i) temporal scalability with three temporal layers, (ii) spatial scalability with a QCIF base layer and a CIF enhancement layer, as well as (iii) quality scalability modes FGS and MGS. We find that the significant improvement in RD efficiency of SVC is accompanied by substantially higher traffic variabilities as compared to the equivalent MPEG-4 Part 2 streams. We find that separately analyzing the traffic of temporal-scalability only encodings gives reasonable estimates of the traffic statistics of the temporal layers embedded in combined spatiotemporal encodings and in the base layer of combined FGS-temporal encodings. Overall, we find that SVC achieves significantly higher compression ratios than MPEG-4 Part 2, but produces unprecedented levels of traffic variability, thus presenting new challenges for the network transport of scalable video.
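
    As a simple illustration of the kind of traffic-variability statistic at issue here, the sketch below computes the coefficient of variation and peak-to-mean ratio of frame sizes for a synthetic frame-size trace; it is not the specific metric used in the study.

```python
# Illustrative bit-rate variability measures for a (synthetic) video frame-size trace.
import numpy as np

rng = np.random.default_rng(0)
frame_sizes = rng.lognormal(mean=9.0, sigma=0.8, size=3000)   # bytes per frame (synthetic)

mean_size = frame_sizes.mean()
cov = frame_sizes.std() / mean_size            # higher CoV -> more variable traffic
peak_to_mean = frame_sizes.max() / mean_size   # a common burstiness indicator

print(f"mean frame size: {mean_size:.0f} B, CoV: {cov:.2f}, peak/mean: {peak_to_mean:.2f}")
```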

  20. Implementing a hardware-friendly wavelet entropy codec for scalable video

    Science.gov (United States)

    Eeckhaut, Hendrik; Christiaens, Mark; Devos, Harald; Stroobandt, Dirk

    2005-11-01

    In the RESUME project (Reconfigurable Embedded Systems for Use in Multimedia Environments) we explore the benefits of implementing scalable multimedia applications in reconfigurable hardware by building an FPGA implementation of a scalable wavelet-based video decoder. The term "scalable" refers to a design that can easily accommodate changes in quality of service with minimal computational overhead. This is important for portable devices that have different Quality of Service (QoS) requirements and varying power restrictions. The scalable video decoder consists of three major blocks: a Wavelet Entropy Decoder (WED), an Inverse Discrete Wavelet Transformer (IDWT) and a Motion Compensator (MC). The WED decodes entropy-encoded parts of the video stream into wavelet-transformed frames. These frames are decoded bit layer by bit layer; the more bit layers are decoded, the higher the image quality (quality scalability). Resolution scalability is obtained as an inherent property of the IDWT. Finally, frame-rate scalability is achieved through hierarchical motion compensation. In this article we present the results of our investigation into the hardware implementation of such a scalable video codec. In particular, we found that the implementation of the entropy codec is a significant bottleneck. We present an alternative, hardware-friendly algorithm for entropy coding with excellent data locality (both temporal and spatial), streaming capabilities, a high degree of parallelism, a smaller memory footprint and state-of-the-art compression, while maintaining all required scalability properties. These claims are supported by an effective hardware implementation on an FPGA.
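
    To make the quality-scalability idea concrete (each additional decoded bit layer refines the reconstruction), here is a toy sketch that reconstructs integer wavelet coefficients from only their most significant bit planes; it illustrates the principle, not the RESUME codec.

```python
# Toy bit-plane (quality) scalability: keep only the top bit planes of coefficients.
import numpy as np

def reconstruct_from_bitplanes(coeffs, kept_planes, total_planes=8):
    """Keep the 'kept_planes' most significant bit planes of |coeffs|."""
    signs = np.sign(coeffs)
    mags = np.abs(coeffs).astype(np.int64)
    drop = total_planes - kept_planes
    return signs * ((mags >> drop) << drop)   # zero out the least significant planes

coeffs = np.array([-97, 14, 250, -3, 60])     # synthetic wavelet coefficients
for k in (2, 4, 8):
    print(k, "bit planes ->", reconstruct_from_bitplanes(coeffs, k))
```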

  1. An Open Infrastructure for Scalable, Reconfigurable Analysis

    Energy Technology Data Exchange (ETDEWEB)

    de Supinski, B R; Fowler, R; Gamblin, T; Mueller, F; Ratn, P; Schulz, M

    2008-05-15

    Petascale systems will have hundreds of thousands of processor cores so their applications must be massively parallel. Effective use of petascale systems will require efficient interprocess communication through memory hierarchies and complex network topologies. Tools to collect and analyze detailed data about this communication would facilitate its optimization. However, several factors complicate tool design. First, large-scale runs on petascale systems will be a precious commodity, so scalable tools must have almost no overhead. Second, the volume of performance data from petascale runs could easily overwhelm hand analysis and, thus, tools must collect only data that is relevant to diagnosing performance problems. Analysis must be done in-situ, when available processing power is proportional to the data. We describe a tool framework that overcomes these complications. Our approach allows application developers to combine existing techniques for measurement, analysis, and data aggregation to develop application-specific tools quickly. Dynamic configuration enables application developers to select exactly the measurements needed and generic components support scalable aggregation and analysis of this data with little additional effort.

  2. Data integration technologies to support integrated modelling

    NARCIS (Netherlands)

    Knapen, M.J.R.; Roosenschoon, O.R.; Lokers, R.M.; Janssen, S.J.C.; Randen, van Y.; Verweij, P.J.F.M.

    2013-01-01

    Over recent years, the scientific activities of our organisation in large research projects have shown a shifting priority from model integration to the integration of the data itself. Our work in several large projects on integrated modelling for impact assessment studies has clearly shown the importance

  4. TH*: Scalable Distributed Trie Hashing

    Directory of Open Access Journals (Sweden)

    Aridj Mohamed

    2010-11-01

    Full Text Available In today's world of computers, dealing with huge amounts of data is not unusual. The need to distribute this data in order to increase its availability and to improve the performance of accessing it is more urgent than ever. For these reasons it is necessary to develop scalable distributed data structures. In this paper we propose TH*, a distributed variant of the Trie Hashing data structure. First we propose Thsw, a new version of TH without the Nil node in the digital tree (trie); this version is then adapted to a multicomputer environment. The simulation results reveal that TH* is scalable in the sense that it grows gracefully, one bucket at a time, to a large number of servers; TH* also offers good storage space utilization and high query efficiency, especially for ordered operations.
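
    To give a feel for trie-style hashing, the toy sketch below routes string keys to buckets by key prefix and splits an overflowing bucket by extending its prefix one character, so the structure grows one bucket at a time. It is a simplified illustration, not the TH*/Thsw algorithms themselves.

```python
# Toy trie-style hashing: prefix-addressed buckets that split on overflow.
BUCKET_CAPACITY = 4

class TrieHash:
    def __init__(self):
        self.buckets = {"": []}                  # bucket address (prefix) -> keys

    def _route(self, key):
        """The longest stored prefix of the key is its bucket address."""
        for i in range(len(key), -1, -1):
            if key[:i] in self.buckets:
                return key[:i]
        return ""                                # fall back to the root prefix

    def insert(self, key):
        prefix = self._route(key)
        self.buckets.setdefault(prefix, []).append(key)
        self._split_if_needed(prefix)

    def _split_if_needed(self, prefix):
        bucket = self.buckets[prefix]
        if len(bucket) <= BUCKET_CAPACITY:
            return
        del self.buckets[prefix]                 # split: extend the prefix by one character
        children = {}
        for k in bucket:
            children.setdefault(k[:len(prefix) + 1], []).append(k)
        for child, keys in children.items():
            self.buckets.setdefault(child, []).extend(keys)
            self._split_if_needed(child)

th = TrieHash()
for word in ["cat", "car", "cart", "dog", "door", "cab", "can"]:
    th.insert(word)
print(th.buckets)
```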

  5. Representing Conversations for Scalable Overhearing

    CERN Document Server

    Gutnik, G; 10.1613/jair.1829

    2011-01-01

    Open distributed multi-agent systems are gaining interest in the academic community and in industry. In such open settings, agents are often coordinated using standardized agent conversation protocols. The representation of such protocols (for analysis, validation, monitoring, etc) is an important aspect of multi-agent applications. Recently, Petri nets have been shown to be an interesting approach to such representation, and radically different approaches using Petri nets have been proposed. However, their relative strengths and weaknesses have not been examined. Moreover, their scalability and suitability for different tasks have not been addressed. This paper addresses both these challenges. First, we analyze existing Petri net representations in terms of their scalability and appropriateness for overhearing, an important task in monitoring open multi-agent systems. Then, building on the insights gained, we introduce a novel representation using Colored Petri nets that explicitly represent legal joint conv...

  6. A scalable and low power VLIW DSP core for embedded system design

    Institute of Scientific and Technical Information of China (English)

    Sheraz Anjum; CHEN Jie; HAN Liang; LIN Chuan; ZHANG Xiao-xiao; SU Ye-hua; Chip Cheng

    2008-01-01

    This paper presents the block architecture of the CoStar3400 DSP, a high-performance, low-power and scalable VLIW DSP core. The core efficiently deploys a variable-length execution set (VLES) execution model, which exploits maximum parallelism by allowing multiple address generation and data arithmetic logic units to execute multiple instructions in a single clock cycle. Scalability is provided mainly by using a larger or smaller number of functional units according to the intended application. Low-power support was added through careful architectural design techniques such as fine-grain clock gating and activation of only the required number of control signals at each stage of the pipeline. These features make the core a suitable candidate for many SoC configurations, especially for compute-intensive applications such as wire-line and wireless communications, including infrastructure and subscriber communications. Embedded system designers can efficiently use the scalability and VLIW features of the core by scaling the number of execution units according to the specific needs of the application, effectively reducing power consumption, chip area and time to market for the intended final product.

  7. Information Service Model with Mobile Agent Supported

    Institute of Scientific and Technical Information of China (English)

    邹涛; 王继成; 张福炎

    2000-01-01

    Mobile agents are a novel agent technology characterized by mobile, intelligent, parallel and asynchronous computing. In this paper, a new information service model that adopts mobile agent technology is introduced first, and then DOLTRIA, an experimental system implemented on the basis of the model, is described. The DOLTRIA system, implemented within a WWW framework in Java, can search for relevant HTML documents on a set of Web servers. The results of experiments show that performance improvement can be achieved by this model, and that both elapsed time and network traffic are reduced significantly.

  8. On Support Functions for the Development of MFM Models

    DEFF Research Database (Denmark)

    Heussen, Kai; Lind, Morten

    2012-01-01

    A modeling environment and methodology are necessary to ensure quality and reusability of models in any domain. For MFM in particular, as a tool for modeling complex systems, awareness has been increasing for this need. Introducing the context of modeling support functions, this paper provides a review of MFM applications, and contextualizes the model development with respect to process design and operation knowledge. Developing a perspective for an environment for MFM-oriented model- and application-development, a tool-chain is outlined and relevant software functions are discussed. With a perspective on MFM-modeling for existing processes and automation design, modeling stages and corresponding formal model properties are identified. Finally, practically feasible support functions and model-checks to support the model-development are suggested.

  9. Support vector regression model for complex target RCS predicting

    Institute of Scientific and Technical Information of China (English)

    Wang Gu; Chen Weishi; Miao Jungang

    2009-01-01

    Electromagnetic scattering computation has developed rapidly for many years, yet some computing problems for complex and coated targets cannot be solved with the existing theory and computing models. A computing model based on data is established to make up for the insufficiency of theoretical models. Based on the support vector regression method, which is formulated on the principle of structural risk minimization, a data model to predict the unknown radar cross section of selected targets is given. Comparison between the actual data and the results of this prediction model shows that the support vector regression method is workable and achieves comparable precision.
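
    For readers unfamiliar with support vector regression, the minimal scikit-learn sketch below fits an SVR model to synthetic data; the feature, kernel choice and data are hypothetical and unrelated to the paper's RCS measurements.

```python
# Minimal SVR sketch on synthetic data (illustrative only; not the paper's model).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 2.0 * np.pi, size=(200, 1))      # e.g. aspect angle (synthetic)
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)    # noisy target (synthetic "RCS")

model = SVR(kernel="rbf", C=10.0, epsilon=0.05)        # structural-risk-minimizing regressor
model.fit(X, y)

X_test = np.linspace(0.0, 2.0 * np.pi, 5).reshape(-1, 1)
print(model.predict(X_test))
```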

  10. Controlled Ecological Life Support System (CELSS) modeling

    Science.gov (United States)

    Drysdale, Alan; Thomas, Mark; Fresa, Mark; Wheeler, Ray

    1992-01-01

    Attention is given to CELSS, a critical technology for the Space Exploration Initiative. OCAM (object-oriented CELSS analysis and modeling) models carbon, hydrogen, and oxygen recycling. Multiple crops and plant types can be simulated. Resource recovery options from inedible biomass include leaching, enzyme treatment, aerobic digestion, and mushroom and fish growth. The benefit of using many small crops overlapping in time, instead of a single large crop, is demonstrated. Unanticipated results include startup transients which reduce the benefit of multiple small crops. The relative contributions of mass, energy, and manpower to system cost are analyzed in order to determine appropriate research directions.

  11. Decision Support System for Resource Allocation Model

    Science.gov (United States)

    1989-04-01

    [Fragmentary extract mixing abstract text and reference residue. Recoverable content: the stock fund model builds on the economic order quantity (EOQ) work of Presutti and Trepp; the constraints used in the model are total stock fund dollars and limits on ... Reference residue: Presutti, Victor J., Jr. and Trepp, Richard C., "More Ado About Economic Order Quantities (EOQ)", Operations Analysis Office.]

  12. Rhode Island Model Evaluation & Support System: Building Administrator. Edition III

    Science.gov (United States)

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful educator evaluation and support system will help improve teaching, learning, and school leadership. The primary purpose of the Rhode Island Model Building Administrator Evaluation and Support System (Rhode Island Model) is to help all building administrators improve.…

  13. Invention software support by integrating function and mathematical modeling

    NARCIS (Netherlands)

    Chechurin, L.S.; Wits, Wessel Willems; Bakker, H.M.

    2015-01-01

    New idea generation is imperative for successful product innovation and technology development. This paper presents the development of a novel type of invention support software. The support tool integrates both function modeling and mathematical modeling, thereby enabling quantitative analyses on a

  15. Key Elements of the Tutorial Support Management Model

    Science.gov (United States)

    Lynch, Grace; Paasuke, Philip

    2011-01-01

    In response to an exponential growth in enrolments the "Tutorial Support Management" (TSM) model has been adopted by Open Universities Australia (OUA) after a two-year project on the provision of online tutor support in first year, online undergraduate units. The essential focus of the TSM model was the development of a systemic approach…

  16. Scalable group level probabilistic sparse factor analysis

    DEFF Research Database (Denmark)

    Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard

    2017-01-01

    Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component pruning using automatic relevance determination (ARD) and subject specific heteroscedastic spatial noise modeling. For task-based and resting state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex...

  17. A Transaction Management Model Supporting Service Grids

    Institute of Scientific and Technical Information of China (English)

    CHEN Xiang-yu; SHEN De-rong; YU Ge; LI Rui; ZHANG Xiao

    2004-01-01

    Service grids possess the intelligence and automation needed to compose heterogeneous resources and will be a commercial focus in the network economy. However, service grid business is loosely coupled, distributed and long-running, so providing consistency and reliability for it is a real challenge for researchers. This paper draws on transaction management methods for Web services and the safepoint concept from workflow systems, combines them with the characteristics of service grids, and presents a transaction management model for service grids. This model guarantees the robustness of grid service composition and ensures that the service grid enabling system provides consistent and reliable results for consumers.

  18. Experimental Demonstration of a Bandwidth Scalable LAN Emulation over EPON Employing OFDMA

    DEFF Research Database (Denmark)

    Deng, Lei; Zhao, Ying; Yu, Xianbin

    2011-01-01

    We propose a novel EPON system supporting bandwidth-scalable local area network emulation by using orthogonal frequency division multiple access technology. In addition to the EPON traffic, 250 Mbps and 500 Mbps OFDM LAN traffic streams are experimentally emulated.

  19. PCCM2: A GCM adapted for scalable parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Drake, J.; Semeraro, B.D.; Worley, P. [Oak Ridge National Lab., TN (United States); Foster, I.; Michalakes, J.; Toonen, B. [Argonne National Lab., IL (United States); Hack, J.J.; Williamson, D.L. [National Center for Atmospheric Research, Boulder, CO (United States)

    1994-01-01

    The Computer Hardware, Advanced Mathematics and Model Physics (CHAMMP) program seeks to provide climate researchers with an advanced modeling capability for the study of global change issues. One of the more ambitious projects being undertaken in the CHAMMP program is the development of PCCM2, an adaptation of the Community Climate Model (CCM2) for scalable parallel computers. PCCM2 uses a message-passing, domain-decomposition approach, in which each processor is allocated responsibility for computation on one part of the computational grid, and messages are generated to communicate data between processors. Much of the research effort associated with development of a parallel code of this sort is concerned with identifying efficient decomposition and communication strategies. In PCCM2, this task is complicated by the need to support both semi-Lagrangian transport and spectral transport. Load balancing and parallel I/O techniques are also required. In this paper, the authors review the various parallel algorithms used in PCCM2 and the work done to arrive at a validated model.

  20. A graph algebra for scalable visual analytics.

    Science.gov (United States)

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increasing data demands on computing require redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.
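
    As a rough illustration of the two atomic operators named here (selection and aggregation) over attributed graph data, the Python sketch below filters nodes by a predicate and merges nodes that share a key into super-nodes. The attribute names and grouping rule are hypothetical, not the paper's algebra.

```python
# Illustrative selection and aggregation operators over a tiny attributed graph.
nodes = {                                    # node id -> attributes (hypothetical)
    "a": {"group": "x", "w": 1.0},
    "b": {"group": "x", "w": 2.0},
    "c": {"group": "y", "w": 5.0},
}
edges = [("a", "b"), ("b", "c"), ("a", "c")]

def select(pred):
    """Selection: keep nodes satisfying a predicate, plus their induced edges."""
    kept = {n: a for n, a in nodes.items() if pred(a)}
    return kept, [(u, v) for u, v in edges if u in kept and v in kept]

def aggregate(key):
    """Aggregation: merge nodes sharing a key into super-nodes, collapsing edges."""
    super_of = {n: key(a) for n, a in nodes.items()}
    super_edges = {(super_of[u], super_of[v]) for u, v in edges
                   if super_of[u] != super_of[v]}
    return sorted(set(super_of.values())), sorted(super_edges)

print(select(lambda a: a["w"] >= 2.0))
print(aggregate(lambda a: a["group"]))
```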

  1. Scalable and Resilient Middleware to Handle Information Exchange during Environment Crisis

    Science.gov (United States)

    Tao, R.; Poslad, S.; Moßgraber, J.; Middleton, S.; Hammitzsch, M.

    2012-04-01

    The EU FP7 TRIDEC project focuses on enabling real-time, intelligent information management of collaborative, complex, critical decision processes for earth management. A key challenge is to promote a communication infrastructure that facilitates interoperable environment information services during environment events and crises such as tsunamis and drilling, during which increasing volumes and dimensionality of disparate information sources, both sensor-based and human-based, arise and need to be managed. Such a system needs to support: scalable, distributed messaging; asynchronous messaging; open messaging to handle changing clients, such as new and retired automated systems and human information sources coming online or going offline; flexible data filtering; and heterogeneous access networks (e.g., GSM, WLAN and LAN). In addition, the system needs to be resilient to ICT system failures, e.g. outright failure, degradation and overload, during environment events. There are several system middleware choices for TRIDEC based upon a Service-Oriented Architecture (SOA), Event-Driven Architecture (EDA), Cloud Computing, and an Enterprise Service Bus (ESB). In an SOA, everything is a service (e.g. data access, processing and exchange); clients can request on demand or subscribe to services registered by providers; more often, interaction is synchronous. In an EDA system, events that represent significant changes in state can be processed simply, as streams, or more complexly. Cloud computing is a virtualized, interoperable and elastic resource allocation model. An ESB, a fundamental component for enterprise messaging, supports synchronous and asynchronous message exchange models and has inbuilt resilience against ICT failure. Our middleware proposal is an ESB-based hybrid architecture model: an SOA extension supports more synchronous workflows; EDA assists the ESB to handle more complex event processing; Cloud computing can be used to increase and

  2. The Geodynamo: Models and supporting experiments

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, U.; Stieglitz, R.

    2003-03-01

    The magnetic field is a characteristic feature of our planet Earth. It shields the biosphere against particle radiation from space and, by its direction, offers orientation to living creatures. The question of its origin has challenged scientists to find sound explanations. Major progress has been achieved during the last two decades in developing dynamo models and performing corroborating laboratory experiments to explain convincingly the principle of the Earth's magnetic field. The article reports some significant steps towards our present understanding of this subject and outlines in particular the relevant experiments, which either substantiate crucial elements of the self-excitation of magnetic fields or demonstrate dynamo action completely. The authors are aware that they have not addressed all aspects of geomagnetic studies; rather, they have selected the material from the huge amount of literature so as to motivate the recently growing interest in experimental dynamo research. (orig.)

  3. Academic Support through Information System : Srinivas Integrated Model

    OpenAIRE

    Aithal, Sreeramana; Kumar, Suresh

    2016-01-01

    As part of imparting quality higher education to undergraduate and postgraduate students, Srinivas Institute of Management Studies (SIMS) developed an education service model for integrated academic support known as the Srinivas Integrated Model. Backed by the presumption that knowledge is power and information is fundamental to knowledge building and knowledge sharing, this model aims to provide information support to students for improved academic performance. Information on the college a...

  4. Scalable Techniques for Formal Verification

    CERN Document Server

    Ray, Sandip

    2010-01-01

    This book presents state-of-the-art approaches to formal verification techniques to seamlessly integrate different formal verification methods within a single logical foundation. It should benefit researchers and practitioners looking to get a broad overview of the spectrum of formal verification techniques, as well as approaches to combining such techniques within a single framework. Coverage includes a range of case studies showing how such combination is fruitful in developing a scalable verification methodology for industrial designs. This book outlines both theoretical and practical issue

  5. Final Report, Center for Programming Models for Scalable Parallel Computing: Co-Array Fortran, Grant Number DE-FC02-01ER25505

    Energy Technology Data Exchange (ETDEWEB)

    Robert W. Numrich

    2008-04-22

    The major accomplishment of this project is the production of CafLib, an 'object-oriented' parallel numerical library written in Co-Array Fortran. CafLib contains distributed objects such as block vectors and block matrices along with procedures, attached to each object, that perform basic linear algebra operations such as matrix multiplication, matrix transpose and LU decomposition. It also contains constructors and destructors for each object that hide the details of data decomposition from the programmer, and it contains collective operations that allow the programmer to calculate global reductions, such as global sums, global minima and global maxima, as well as vector and matrix norms of several kinds. CafLib is designed to be extensible in such a way that programmers can define distributed grid and field objects, based on vector and matrix objects from the library, for finite difference algorithms to solve partial differential equations. A very important extra benefit that resulted from the project is the inclusion of the co-array programming model in the next Fortran standard, called Fortran 2008. It is the first parallel programming model ever included as a standard part of the language. Co-arrays will be a supported feature in all Fortran compilers, and the portability provided by standardization will encourage a large number of programmers to adopt it for new parallel application development. The combination of object-oriented programming in Fortran 2003 with co-arrays in Fortran 2008 provides a very powerful programming model for high-performance scientific computing. Additional benefits from the project, beyond the original goal, include a program to provide access to the co-array model through access to the Cray compiler as a resource for teaching and research. Several academics, for the first time, included the co-array model as a topic in their courses on parallel computing. A separate collaborative project with LANL and PNNL showed how to

  6. Scalable Machine Learning Framework for Behavior-Based Access Control

    Science.gov (United States)

    2013-08-01

    Mahout [10] is an open-source project for scalable machine learning. It provides ready implementations for K-Means clustering following a MapReduce ...paradigm, but does not provide MapReduce implementations for SVMs, which are the most expensive models to train in BBAC. Massive Online Analysis

  7. Scalable Engineering of Quantum Optical Information Processing Architectures (SEQUOIA)

    Science.gov (United States)

    2016-12-13

    [Fragmentary report text with figure residue. Recoverable points: interfacing with telecom quantum networks and qubit distribution; discrete-variable (DV) quantum computing using continuous-variable (CV) cluster states, i.e. embedding circuit-model quantum computing into CV resources; linear-optics mode transformations; realizing scalable, high-fidelity interferometric networks as a central challenge; and methods for characterizing these large interferometric networks. Figure 1 caption: Photonic integrated circuit. Left: programmable PIC. Right: Transmission.]

  8. A model for effective planning of SME support services.

    Science.gov (United States)

    Rakićević, Zoran; Omerbegović-Bijelović, Jasmina; Lečić-Cvetković, Danica

    2016-02-01

    This paper presents a model for effective planning of support services for small and medium-sized enterprises (SMEs). The idea is to scrutinize and measure the suitability of support services in order to give recommendations for the improvement of the support planning process. We examined the applied support services and matched them with the problems and needs of SMEs, based on a survey conducted in 2013 on a sample of 336 SMEs in Serbia. We defined and analysed five research questions that refer to support services, their consistency with the SMEs' problems and needs, and the relation between the given support and the SMEs' success. The survey results showed a statistically significant connection between them. Based on this result, we proposed an eight-phase model as a method for the improvement of support service planning for SMEs. This model helps SMEs plan their support requirements better, and helps government and administration bodies at all levels, as well as organizations that provide support services, better understand SMEs' problems and needs for support.

  9. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    Data.gov (United States)

    National Aeronautics and Space Administration — In this research, we propose a variant of the classical Matching Pursuit Decomposition (MPD) algorithm with significantly improved scalability and computational...
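
    For context, classical matching pursuit greedily approximates a signal by repeatedly selecting the dictionary atom most correlated with the current residual and subtracting its contribution. The sketch below implements that classical algorithm on synthetic data; it is not the proposed variant.

```python
# Minimal classical matching pursuit (illustrative; not the proposed variant).
import numpy as np

def matching_pursuit(signal, dictionary, n_iters=10):
    """Greedy decomposition; dictionary columns are assumed to be unit-norm atoms."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iters):
        correlations = dictionary.T @ residual      # inner product with every atom
        k = int(np.argmax(np.abs(correlations)))    # best-matching atom
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)                      # normalize atoms
x = 3.0 * D[:, 5] - 2.0 * D[:, 40]                  # sparse synthetic signal
coeffs, res = matching_pursuit(x, D, n_iters=20)
print("residual norm:", np.linalg.norm(res))
```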

  10. Reference models supporting enterprise networks and virtual enterprises

    DEFF Research Database (Denmark)

    Tølle, Martin; Bernus, Peter

    2003-01-01

    This article analyses different types of reference models applicable to support the set up and (re)configuration of Virtual Enterprises (VEs). Reference models are models capturing concepts common to VEs aiming to convert the task of setting up of VE into a configuration task, and hence reducing...

  11. An Expert support model for ex situ soil remediation

    NARCIS (Netherlands)

    Okx, J.P.; Frankhuizen, E.M.; Wit, de J.C.; Pijls, C.G.J.M.; Stein, A.

    2000-01-01

    This paper presents an expert support model recombining knowledge and experience obtained during ex situ soil remediation. To solve soil remediation problems, an inter-disciplinary approach is required. Responsibilities during the soil remediation process, however, are increasingly decentralised, wh

  12. Feedback model to support designers of blended learning courses

    NARCIS (Netherlands)

    Hummel, Hans

    2006-01-01

    Hummel, H. G. K. (2006, December). Feedback model to support designers of blended learning courses. International Review of Open and Distance Learning [Online], 7(3). Available: http://www.irrodl.org/index.php/irrodl/article/view/379/748

  13. A Markov model for measuring artillery fire support effectiveness

    OpenAIRE

    Guzik, Dennis M.

    1988-01-01

    Approved for public release; distribution is unlimited This thesis presents a Markov model, which, given an indirect fire weapon system's parameters, yields measures of the weapon's effectiveness in providing fire support to a maneuver element. These parameters may be determined for a variety of different scenarios. Any indirect fire weapon system may be a candidate for evaluation. This model may be used in comparing alternative weapon systems for the role of direct support of a Marin...
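
    The abstract gives no model details; purely as a generic illustration of how a Markov chain yields long-run effectiveness measures, the sketch below computes the stationary distribution of a small, invented weapon-state chain.

```python
# Generic illustration: stationary distribution of a small, hypothetical Markov
# chain. States and transition probabilities are invented, not from the thesis.
import numpy as np

# States: 0 = ready to fire, 1 = executing a fire mission, 2 = displacing/reloading
P = np.array([[0.6, 0.3, 0.1],
              [0.5, 0.3, 0.2],
              [0.7, 0.0, 0.3]])

# Solve pi P = pi together with sum(pi) = 1 as a least-squares linear system.
A = np.vstack([P.T - np.eye(3), np.ones((1, 3))])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("long-run fraction of time in each state:", pi.round(3))
```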

  14. Scalable asynchronous execution of cellular automata

    Science.gov (United States)

    Folino, Gianluigi; Giordano, Andrea; Mastroianni, Carlo

    2016-10-01

    The performance and scalability of cellular automata, when executed on parallel/distributed machines, are limited by the necessity of synchronizing all the nodes at each time step, i.e., a node can execute a step only after the execution of the previous step at all the other nodes. However, these synchronization requirements can be relaxed: a node can execute one step after synchronizing only with the adjacent nodes. In this fashion, different nodes can execute different time steps. This can be notably advantageous in many novel and increasingly popular applications of cellular automata, such as smart city applications, simulation of natural phenomena, etc., in which the execution times can be different and variable, due to the heterogeneity of machines and/or data and/or executed functions. Indeed, a longer execution time at a node does not slow down the execution at all the other nodes but only at the neighboring nodes. This is particularly advantageous when the nodes that act as bottlenecks vary during the application execution. The goal of the paper is to analyze the benefits that can be achieved with the described asynchronous implementation of cellular automata, when compared to the classical all-to-all synchronization pattern. The performance and scalability have been evaluated through a Petri net model, as this model is very useful to represent the synchronization barrier among nodes. We examined the usual case in which the territory is partitioned into a number of regions, and the computation associated with a region is assigned to a computing node. We considered both the cases of mono-dimensional and two-dimensional partitioning. The results show that the advantage obtained through the asynchronous execution, when compared to the all-to-all synchronous approach, is notable, and it can be as large as 90% in terms of speedup.
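
    To illustrate the relaxed rule (a region may start step t+1 as soon as its neighbors have finished step t, with no global barrier), the toy simulation below compares the makespan of the two synchronization schemes on a ring of regions with random per-step execution times. The setup is illustrative only, not the paper's Petri net model.

```python
# Toy comparison of all-to-all vs neighbor-only synchronization on a ring of regions.
import random

random.seed(0)
N_REGIONS, N_STEPS = 6, 50
# cost[i][t]: time region i needs for step t (heterogeneous, random workloads)
cost = [[random.uniform(0.5, 1.5) for _ in range(N_STEPS)] for _ in range(N_REGIONS)]

def makespan(neighbor_only):
    """Time at which the slowest region finishes the last step."""
    done = [[0.0] * (N_STEPS + 1) for _ in range(N_REGIONS)]
    for t in range(1, N_STEPS + 1):
        for i in range(N_REGIONS):
            left, right = (i - 1) % N_REGIONS, (i + 1) % N_REGIONS
            if neighbor_only:   # wait only for the adjacent regions
                ready = max(done[i][t - 1], done[left][t - 1], done[right][t - 1])
            else:               # all-to-all barrier: wait for every region
                ready = max(done[j][t - 1] for j in range(N_REGIONS))
            done[i][t] = ready + cost[i][t - 1]
    return max(done[i][N_STEPS] for i in range(N_REGIONS))

print("synchronous makespan :", round(makespan(neighbor_only=False), 2))
print("asynchronous makespan:", round(makespan(neighbor_only=True), 2))
```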

  15. Integrating process and ontology to support supply chain modelling

    OpenAIRE

    2011-01-01

    Abstract Many researchers have recognized a lack of common framework to support supply chain modelling and analysis and proposed their solutions accordingly. Majority of the approaches proposed are more concerned with building an object model of a supply chain than identifying processes which realistically describe a supply chain. Though object models provide means or building blocks necessary to model and analyse different elements of a supply chain, an absence of supply chain pro...

  16. Supporting requirements model evolution throughout the system life-cycle

    OpenAIRE

    Ernst, Neil; Mylopoulos, John; Yu, Yijun; Ngyuen, Tien T.

    2008-01-01

    Requirements models are essential not just during system implementation, but also to manage system changes post-implementation. Such models should be supported by a requirements model management framework that allows users to create, manage and evolve models of domains, requirements, code and other design-time artifacts along with traceability links between their elements. We propose a comprehensive framework which delineates the operations and elements necessary, and then describe a tool imp...

  17. lexiDB:a scalable corpus database management system

    OpenAIRE

    Coole, Matt; Rayson, Paul Edward; Mariani, John Amedeo

    2016-01-01

    lexiDB is a scalable corpus database management system designed to fulfill corpus linguistics retrieval queries on multi-billion-word multiply-annotated corpora. It is based on a distributed architecture that allows the system to scale out to support ever larger text collections. This paper presents an overview of the architecture behind lexiDB as well as a demonstration of its functionality. We present lexiDB's performance metrics based on the AWS (Amazon Web Services) infrastructure with tw...

  18. Human Exposure Modeling - Databases to Support Exposure Modeling

    Science.gov (United States)

    Human exposure modeling relates pollutant concentrations in the larger environmental media to pollutant concentrations in the immediate exposure media. The models described here are available on other EPA websites.

  19. Network selection, Information filtering and Scalable computation

    Science.gov (United States)

    Ye, Changqing

    This dissertation explores two application scenarios of the sparsity pursuit method on large scale data sets. The first scenario is classification and regression in analyzing high dimensional structured data, where predictors correspond to nodes of a given directed graph. This arises in, for instance, identification of disease genes for Parkinson's disease from a network of candidate genes. In such a situation, the directed graph describes dependencies among the genes, where the directions of edges represent certain causal effects. Key to high-dimensional structured classification and regression is how to utilize dependencies among predictors as specified by the directions of the graph. In this dissertation, we develop a novel method that fully takes into account such dependencies formulated through certain nonlinear constraints. We apply the proposed method to two applications, feature selection in large margin binary classification and in linear regression. We implement the proposed method through difference convex programming for the cost function and constraints. Finally, theoretical and numerical analyses suggest that the proposed method achieves the desired objectives. An application to disease gene identification is presented. The second application scenario is personalized information filtering, which extracts the information specifically relevant to a user, predicting his/her preference over a large number of items based on the opinions of users who think alike or on item content. This problem is cast into the framework of regression and classification, where we introduce novel partial latent models to integrate additional user-specific and content-specific predictors, for higher predictive accuracy. In particular, we factorize a user-over-item preference matrix into a product of two matrices, each representing a user's preference and an item preference by users. Then we propose a likelihood method to seek the sparsest latent factorization, from a class of over

  20. Vibrations of a Simply Supported Beam with a Fractional Viscoelastic Material Model - Supports Movement Excitation

    Directory of Open Access Journals (Sweden)

    Jan Freundlich

    2013-01-01

    Full Text Available The paper presents a vibration analysis of a simply supported beam with a fractional-order viscoelastic material model. The Bernoulli-Euler beam model is considered. The beam is excited by the movement of its supports. The Riemann-Liouville fractional derivative of order 0 < α ⩽ 1 is applied. In the first stage, the steady-state vibrations of the beam are analyzed, and therefore the Riemann-Liouville fractional derivative with lower terminal at −∞ is assumed. This assumption simplifies the solution of the fractional differential equations and enables us to directly obtain amplitude-frequency characteristics of the examined system. The characteristics are obtained for various values of the fractional derivative order α and of the Voigt material model parameters. The studies show that selecting appropriate damping coefficients and the order of the fractional damping model enables the dynamic characteristics of the beam to be fitted more accurately than with an integer-order derivative damping model.
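
    As a reference sketch of the standard forms the abstract relies on, the fractional derivative and a generic fractional Voigt beam equation can be written as follows; the beam equation is only an illustrative sketch, and its symbols (E, I, ρ, A, c) are generic notation rather than the paper's parameters.

    ```latex
    % Riemann-Liouville fractional derivative with lower terminal -\infty
    % (the form assumed for the steady-state analysis), for 0 < \alpha \le 1:
    \[
      D^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)}\,\frac{d}{dt}
        \int_{-\infty}^{t} \frac{f(\tau)}{(t-\tau)^{\alpha}}\, d\tau .
    \]
    % Generic sketch of a Bernoulli-Euler beam with a fractional Voigt material
    % (E, I, \rho, A and the damping coefficient c are generic symbols,
    % not the paper's parameter set):
    \[
      E I \left(1 + c\, D_{t}^{\alpha}\right)
        \frac{\partial^{4} w(x,t)}{\partial x^{4}}
      + \rho A\, \frac{\partial^{2} w(x,t)}{\partial t^{2}} = 0 .
    \]
    ```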

  1. Scalable Density-Based Subspace Clustering

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan;

    2011-01-01

    For knowledge discovery in high dimensional databases, subspace clustering detects clusters in arbitrary subspace projections. Scalability is a crucial issue, as the number of possible projections is exponential in the number of dimensions. We propose a scalable density-based subspace clustering ...

  2. Microscopy of a scalable superatom

    CERN Document Server

    Zeiher, Johannes; Hild, Sebastian; Macrì, Tommaso; Bloch, Immanuel; Gross, Christian

    2015-01-01

    Strong interactions can amplify quantum effects such that they become important on macroscopic scales. Controlling these coherently on a single-particle level is essential for the tailored preparation of strongly correlated quantum systems and opens up new prospects for quantum technologies. Rydberg atoms offer such strong interactions, which lead to extreme nonlinearities in laser-coupled atomic ensembles. As a result, multiple excitation of a micrometer-sized cloud can be blocked while the light-matter coupling becomes collectively enhanced. The resulting two-level system, often called a "superatom", is a valuable resource for quantum information, providing a collective qubit. Here we report on the preparation of two orders of magnitude scalable superatoms utilizing the large interaction strength provided by Rydberg atoms combined with precise control of an ensemble of ultracold atoms in an optical lattice. The latter is achieved with sub-shot-noise precision by local manipulation of a two-dimensional Mott ins...

  3. Determining the potential scalability of transport interventions for improving maternal, child, and newborn health in Pakistan.

    Science.gov (United States)

    uddin Mian, Naeem; Malik, Mariam Zahid; Iqbal, Sarosh; Alvi, Muhammad Adeel; Memon, Zahid; Chaudhry, Muhammad Ashraf; Majrooh, Ashraf; Awan, Shehzad Hussain

    2015-11-25

    Pakistan is far behind in achieving the Millennium Development Goals regarding the reduction of child and maternal mortality. Amongst other factors, transport barriers make the requisite obstetric care inaccessible for women during pregnancy and at birth, when complications may become life threatening for mother and child. The significance of efficient transport in maternal and neonatal health calls for identifying which currently implemented transport interventions have potential for scalability. A qualitative appraisal of data and information about selected transport interventions generated primarily by beneficiaries, coordinators, and heads of organizations working with maternal, child, and newborn health programs was conducted against the CORRECT criteria of Credibility, Observability, Relevance, Relative Advantage, Easy-Transferability, Compatibility and Testability. Qualitative comparative analysis (QCA) techniques were used to analyse seven interventions against operational indicators. Logical inference was drawn to assess the implications of each intervention. QCA was used to determine simplifying and complicating factors to measure potential for scaling up of the selected transport intervention. Despite challenges like deficient in-journey care and need for greater community involvement, community-based ambulance services were managed with the support of the community and had a relatively simple model, and therefore had high scalability potential. Other interventions, including facility-based services, public-sector emergency services, and transport voucher schemes, had limitations of governance, long-term sustainability, large capital expenditures, and need for management agencies that adversely affected their scalability potential. To reduce maternal and child morbidity and mortality and increase accessibility of health facilities, it is important to build effective referral linkages through efficient transport systems. Effective linkages between

  4. Reference models supporting enterprise networks and virtual enterprises

    DEFF Research Database (Denmark)

    Tølle, Martin; Bernus, Peter

    2003-01-01

    This article analyses different types of reference models applicable to support the set-up and (re)configuration of Virtual Enterprises (VEs). Reference models are models capturing concepts common to VEs, aiming to convert the task of setting up a VE into a configuration task and hence reducing the time needed for VE creation. The reference models are analysed through a mapping onto the Virtual Enterprise Reference Architecture (VERA), based upon GERAM and created in the IMS GLOBEMEN project.

  5. GASPRNG: GPU accelerated scalable parallel random number generator library

    Science.gov (United States)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. Computer: Any PC or

  6. Model based decision support for planning of road maintenance

    NARCIS (Netherlands)

    Worm, J.M.; Harten, van A.

    1996-01-01

    In this article we describe a Decision Support Model, based on Operational Research methods, for the multi-period planning of maintenance of bituminous pavements. This model is a tool for the road manager to assist in generating an optimal maintenance plan for a road. Optimal means: minimising the N

  7. Interrogative Model of Inquiry and Computer-Supported Collaborative Learning.

    Science.gov (United States)

    Hakkarainen, Kai; Sintonen, Matti

    2002-01-01

    Examines how the Interrogative Model of Inquiry (I-Model), developed for the purposes of epistemology and philosophy of science, could be applied to analyze elementary school students' process of inquiry in computer-supported learning. Suggests that the interrogative approach to inquiry can be productively applied for conceptualizing inquiry in…

  8. The Intelligent Decision Support System Model of SARS

    Institute of Scientific and Technical Information of China (English)

    Zhou Xingyu; Zhang Jiang; Liu Yang; Xie Yanqing; Zhang Ran; Zhao Yang; He Zhongxiong

    2004-01-01

    Based on intelligent decision support systems, a new method is presented to defend against catastrophic infectious diseases such as SARS and bird flu. By using All Set theory, a decision support system (DSS) model can be built to analyze the noisy information, forecast the trend of the catastrophe, and then suggest methods or policies to defend against the disease. The model system is composed of four subsystems: the noise analysis subsystem, the forecast and simulation subsystem, the diagnosis subsystem, and the second recovery subsystem. They are discussed briefly in this paper. This model can be used not only for SARS but also for other sudden (paroxysmal) incidents.

  9. CumuloNimbo: a cloud scalable multi-tier SQL database

    OpenAIRE

    Jiménez Peris, Ricardo; Patiño Martínez, Marta; Kemme, Bettina; Brondino, Ivan; Pereira, José; Vilaça, Ricardo; Cruz, Francisco; de Oliveira, Rui; Ahmad, Yousuf

    2015-01-01

    This article presents an overview of the CumuloNimbo platform. CumuloNimbo is a framework for multi-tier applications that provides scalable and fault-tolerant processing of OLTP workloads. The main novelty of CumuloNimbo is that it provides a standard SQL interface and full transactional support without resorting to sharding and no need to know the workload in advance. Scalability is achieved by distributing request execution and transaction control across many compute nodes while data is pe...

  10. Multi-Purpose, Application-Centric, Scalable I/O Proxy Application

    Energy Technology Data Exchange (ETDEWEB)

    2015-06-15

    MACSio is a Multi-purpose, Application-Centric, Scalable I/O proxy application. It is designed to support a number of goals with respect to parallel I/O performance testing and benchmarking including the ability to test and compare various I/O libraries and I/O paradigms, to predict scalable performance of real applications and to help identify where improvements in I/O performance can be made within the HPC I/O software stack.

  11. Support vector machine-based multi-model predictive control

    Institute of Scientific and Technical Information of China (English)

    Zhejing BA; Youxian SUN

    2008-01-01

    In this paper, a support vector machine-based multi-model predictive control is proposed, in which SVM classification combines well with SVM regression. First, each working environment is modeled by SVM regression, and the support vector machine network-based model predictive control (SVMN-MPC) algorithm corresponding to each environment is developed; then a multi-class SVM model is established to recognize multiple operating conditions. For control, the current environment is identified by the multi-class SVM model and the corresponding SVMN-MPC controller is activated at each sampling instant. The proposed modeling, switching and controller design are demonstrated in simulation results.
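
    The mode-switching idea described above, an SVM classifier recognizing the operating condition and a per-condition SVR model supplying the prediction, can be illustrated with a small scikit-learn sketch; the data, regimes and one-step predictor below are hypothetical stand-ins, not the authors' SVMN-MPC algorithm.

    ```python
    # Sketch: classify the operating regime with an SVC, then predict the next
    # output with the SVR model trained for that regime (hypothetical data).
    import numpy as np
    from sklearn.svm import SVC, SVR

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(300, 2))         # [y_k, u_k] feature vectors
    regime = (X[:, 0] > 0).astype(int)            # two fictitious operating regimes
    y_next = np.where(regime == 0,
                      0.8 * X[:, 0] + 0.3 * X[:, 1],
                      0.2 * X[:, 0] + 0.9 * X[:, 1])

    classifier = SVC(kernel="rbf").fit(X, regime)            # regime recognition
    models = {r: SVR(kernel="rbf").fit(X[regime == r], y_next[regime == r])
              for r in (0, 1)}                               # per-regime SVR models

    def predict_next(state):
        """Pick the regime for the current state, then use that regime's SVR."""
        r = int(classifier.predict([state])[0])
        return models[r].predict([state])[0]

    print(predict_next([0.5, -0.2]))
    ```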

  12. Prioritization of engineering support requests and advanced technology projects using decision support and industrial engineering models

    Science.gov (United States)

    Tavana, Madjid

    1995-01-01

    The evaluation and prioritization of Engineering Support Requests (ESRs) is a particularly difficult task at the Kennedy Space Center (KSC) -- Shuttle Project Engineering Office. This difficulty is due to the complexities inherent in the evaluation process and the lack of structured information. The evaluation process must consider a multitude of relevant pieces of information concerning Safety, Supportability, O&M Cost Savings, Process Enhancement, Reliability, and Implementation. Various analytical and normative models developed in the past have helped decision makers at KSC utilize large volumes of information in the evaluation of ESRs. The purpose of this project is to build on the existing methodologies and develop a multiple-criteria decision support system that captures the decision maker's beliefs through a series of sequential, rational, and analytical processes. The model utilizes the Analytic Hierarchy Process (AHP), subjective probabilities, the entropy concept, and the Maximize Agreement Heuristic (MAH) to enhance the decision maker's intuition in evaluating a set of ESRs.
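
    As a minimal illustration of the AHP step alone, the sketch below derives priority weights from a hypothetical pairwise-comparison matrix using the principal-eigenvector method; the criteria and judgments are invented, not taken from the KSC evaluation.

    ```python
    # Sketch: AHP priority weights from a pairwise comparison matrix
    # (hypothetical judgments for three criteria: Safety, Cost Savings, Reliability).
    import numpy as np

    A = np.array([[1.0,  3.0, 5.0],    # Safety compared to the others
                  [1/3., 1.0, 2.0],    # Cost Savings
                  [1/5., 1/2., 1.0]])  # Reliability

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                  # principal eigenvalue
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                     # normalized priority vector

    # Consistency ratio (random index 0.58 for a 3x3 matrix, from Saaty's table)
    ci = (eigvals[k].real - len(A)) / (len(A) - 1)
    print("weights:", weights.round(3), "CR:", round(ci / 0.58, 3))
    ```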

  13. "SERPS Up": Support, Engagement and Retention of Postgraduate Students--A Model of Postgraduate Support

    Science.gov (United States)

    Alston, Margaret; Allan, Julaine; Bell, Karen; Brown, Andy; Dowling, Jane; Hamilton, Pat; McKinnon, Jenny; McKinnon, Noela; Mitchell, Rol; Whittenbury, Kerri; Valentine, Bruce; Wicks, Alison; Williams, Rachael

    2005-01-01

    The federal government's 1999 White Paper Knowledge and Innovation: a policy statement on research and research training, notes concerns about retention and completion rates in doctoral studies programs in Australia. This paper outlines a model of higher education support developed at the Centre for Rural Social Research at Charles Sturt…

  14. Relationship model and supporting activities of JIT, TQM and TPM

    OpenAIRE

    Nuttapon SaeTong; Ketlada Kitiwanwong; Jirarat Teeravaraprug

    2011-01-01

    This paper gives a relationship model and supporting activities of Just-in-time (JIT), Total Quality Management (TQM), and Total Productive Maintenance (TPM). By reviewing the concepts, 5S, Kaizen, preventive maintenance, Kanban, visual control, Poka-Yoke, and Quality Control tools are the main supporting activities. Based on the analysis, 5S, preventive maintenance, and Kaizen are the foundation of the three concepts. QC tools are required activities for implementing TQM, whereas Poka-Yoke and v...

  15. A Components Library System Model and the Support Tool

    Institute of Scientific and Technical Information of China (English)

    MIAO Huai-kou; LIU Hui; LIU Jing; LI Xiao-bo

    2004-01-01

    Component-based development needs a well-designed components library and a set of support tools. This paper presents the design and implementation of a components library system model and its support tool UMLCASE. A set of practical CASE tools is constructed. UMLCASE can use UML to design Use Case Diagrams, Class Diagrams, etc., and it integrates with the components library system.

  16. Effective Team Support: From Modeling to Software Agents

    Science.gov (United States)

    Remington, Roger W. (Technical Monitor); John, Bonnie; Sycara, Katia

    2003-01-01

    The purpose of this research contract was to perform multidisciplinary research between CMU psychologists, computer scientists and engineers and NASA researchers to design a next generation collaborative system to support a team of human experts and intelligent agents. To achieve robust performance enhancement of such a system, we had proposed to perform task and cognitive modeling to thoroughly understand the impact technology makes on the organization and on key individual personnel. Guided by cognitively-inspired requirements, we would then develop software agents that support the human team in decision making, information filtering, information distribution and integration to enhance team situational awareness. During the period covered by this final report, we made substantial progress in modeling infrastructure and task infrastructure. Work is continuing under a different contract to complete empirical data collection, cognitive modeling, and the building of software agents to support the team's task.

  17. Design Approaches to Support Preservice Teachers in Scientific Modeling

    Science.gov (United States)

    Kenyon, Lisa; Davis, Elizabeth A.; Hug, Barbara

    2011-02-01

    Engaging children in scientific practices is hard for beginning teachers. One such scientific practice with which beginning teachers may have limited experience is scientific modeling. We have iteratively designed preservice teacher learning experiences and materials intended to help teachers achieve learning goals associated with scientific modeling. Our work has taken place across multiple years at three university sites, with preservice teachers focused on early childhood, elementary, and middle school teaching. Based on results from our empirical studies supporting these design decisions, we discuss design features of our modeling instruction in each iteration. Our results suggest some successes in supporting preservice teachers in engaging students in modeling practice. We propose design principles that can guide science teacher educators in incorporating modeling in teacher education.

  18. MPFS: A truly scalable router architecture for next generation Internet

    Institute of Scientific and Technical Information of China (English)

    SUN ZhiGang; DAI Yi; GONG ZhengHu

    2008-01-01

    A new-generation architecture of IP routers called massive parallel forwarding and switching (MPFS) is proposed, which is fundamentally different from that of modern routers. The basic idea of MPFS is to map the complicated forwarding process onto a multilevel scalable switch fabric so as to implement packet forwarding in a pipelined and distributed way. This processing mechanism is named forwarding in switching (FIS). By interconnecting multi-stage, lower-speed components called forwarding and switching nodes (FSNs), MPFS achieves better scalability in forwarding and switching performance, much like MPP. We put emphasis on the IPv6 lookup problem in MPFS and propose a method for partitioning the IPv6 FIB and mapping it to the switch fabric. Simulation and computation results suggest that MPFS routers can support line-speed forwarding with a million IPv6 prefixes at 40 Gbps. Finally, we also propose an implementation of a 160 Tbps core router based on the MPFS architecture.
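
    The abstract's notion of partitioning an IPv6 FIB across a multi-stage fabric can be loosely illustrated as follows; the sketch simply buckets prefixes by length onto hypothetical stages and performs a longest-prefix match stage by stage, and it is not the MPFS mapping algorithm.

    ```python
    # Sketch: bucket IPv6 prefixes by length onto pipeline "stages" and do a
    # longest-prefix match stage by stage (illustrative only, not the MPFS scheme).
    import ipaddress

    fib = {"2001:db8::/32": "FSN-A", "2001:db8:1::/48": "FSN-B", "::/0": "FSN-C"}

    # Partition: one stage per prefix-length band (hypothetical 16-bit bands).
    stages = {}
    for prefix, next_hop in fib.items():
        band = ipaddress.ip_network(prefix).prefixlen // 16
        stages.setdefault(band, {})[ipaddress.ip_network(prefix)] = next_hop

    def lookup(addr):
        """Search stages from the longest-prefix bands down to the default route."""
        ip = ipaddress.ip_address(addr)
        for band in sorted(stages, reverse=True):
            for net, hop in stages[band].items():
                if ip in net:
                    return hop
        return None

    print(lookup("2001:db8:1::42"))   # -> FSN-B (longest matching prefix)
    ```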

  19. Organizational analysis of three community support program models.

    Science.gov (United States)

    Reinke, B; Greenley, J R

    1986-06-01

    Little attention has been paid to the organizational and administrative characteristics of effective community support programs for the chronic mentally ill. The authors analyzed three successful support programs in Wisconsin that employ three different models of service delivery: one provides services through caseworkers who carry specialized caseloads, another through local nonprofessionals who work with a centrally located program coordinator, and the third through a team of various mental health workers. Each program has tailored its organizational process to suit the types of clients it sees, the size of its catchment area, and the availability of other professional resources. The interrelated strengths and weaknesses of each model are discussed.

  20. Fast and scalable inequality joins

    KAUST Repository

    Khayyat, Zuhair

    2016-09-07

    Inequality joins, which join relations on inequality conditions, are used in various applications. Optimizing joins has been the subject of intensive research, ranging from efficient join algorithms such as sort-merge join to the use of efficient indices such as tree-based and Bitmap indices. However, inequality joins have received little attention, and queries containing such joins are notably very slow. In this paper, we introduce fast inequality join algorithms based on sorted arrays and space-efficient bit-arrays. We further introduce a simple method to estimate the selectivity of inequality joins, which is then used to optimize multiple-predicate queries and multi-way joins. Moreover, we study an incremental inequality join algorithm to handle scenarios where data keeps changing. We have implemented a centralized version of these algorithms on top of PostgreSQL, a distributed version on top of Spark SQL, and an existing data cleaning system, Nadeef. By comparing our algorithms against well-known optimization techniques for inequality joins, we show our solution is more scalable and several orders of magnitude faster. © 2016 Springer-Verlag Berlin Heidelberg
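
    A minimal sort-based sketch of an inequality join (R.a < S.b) using sorted arrays and bisection is shown below; the relations are invented for illustration, and this is not the paper's algorithm or its bit-array optimization.

    ```python
    # Sketch: sort-based inequality join R JOIN S ON R.a < S.b using bisection.
    # Tuples and the join condition are invented for illustration.
    from bisect import bisect_right

    R = [("r1", 3), ("r2", 7), ("r3", 1)]          # (id, a)
    S = [("s1", 5), ("s2", 2), ("s3", 9)]          # (id, b)

    S_sorted = sorted(S, key=lambda t: t[1])        # sort S on b once
    b_values = [t[1] for t in S_sorted]

    result = []
    for rid, a in R:
        # every S tuple with b > a joins with this R tuple
        start = bisect_right(b_values, a)
        result.extend((rid, sid) for sid, _ in S_sorted[start:])

    print(result)   # includes e.g. ('r3', 's2') because 1 < 2
    ```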

  1. Model-Driven Engineering Support for Building C# Applications

    Science.gov (United States)

    Derezińska, Anna; Ołtarzewski, Przemysław

    Realization of the Model-Driven Engineering (MDE) vision of software development requires comprehensive and user-friendly tool support. This paper presents a UML-based approach for building trustworthy C# applications. UML models are refined using profiles for assigning class model elements to C# concepts and to elements of the implementation project. Stereotyped elements are verified on the fly and during model-to-code transformation in order to prevent the creation of incorrect code. The Transform OCL Fragments into C# system (T.O.F.I.C.) was created as a feature of the Eclipse environment. The system extends the IBM Rational Software Architect tool.

  2. Designing Psychological Treatments for Scalability: The PREMIUM Approach.

    Directory of Open Access Journals (Sweden)

    Sukumar Vellakkal

    Full Text Available Lack of access to empirically-supported psychological treatments (EPT) that are contextually appropriate and feasible to deliver by non-specialist health workers (referred to as 'counsellors') is a major barrier to the treatment of mental health problems in resource-poor countries. To address this barrier, the 'Program for Effective Mental Health Interventions in Under-resourced Health Systems' (PREMIUM) designed a method for the development of EPT for severe depression and harmful drinking. This was implemented over three years in India. This study assessed the relative usefulness and costs of the five 'steps' (Systematic reviews, In-depth interviews, Key informant surveys, Workshops with international experts, and Workshops with local experts) in the first phase of identifying the strategies and theoretical model of the treatment, and of the two 'steps' (Case series with specialists, and Case series and pilot trial with counsellors) in the second phase of enhancing the acceptability and feasibility of its delivery by counsellors in PREMIUM, with the aim of arriving at a parsimonious set of steps for future investigators to use when developing scalable EPT. The study used two sources of data: usefulness ratings by the investigators and resource utilization. The usefulness of each of the seven steps was assessed through ratings by the investigators involved in the development of each of the two EPT, viz. the Healthy Activity Program for severe depression and Counselling for Alcohol Problems for harmful drinking. Quantitative responses were elicited to rate the utility (usefulness/influence), followed by open-ended questions to explain the rankings. The resources used by PREMIUM were computed in terms of time (months) and monetary costs. The theoretical core of the new treatments was consistent with that of EPT derived from global evidence, viz. Behavioural Activation and Motivational Enhancement for severe depression and harmful drinking respectively

  3. Electronic market models for decision support systems on the Web

    Institute of Scientific and Technical Information of China (English)

    谢勇; 王红卫; 费奇

    2004-01-01

    With the prevalence of the Web, most decision-makers are likely to use the Web to support their decision-making. Web-based technologies are leading a major stream of research on decision support systems (DSS). We propose a formal definition and a conceptual framework for Web-based open DSS (WODSS). The formal definition gives an overall view of WODSS, and the conceptual framework, based on a browser/broker/server computing mode, employs an electronic market to mediate between decision-makers and providers and to facilitate the sharing and reuse of decision resources. We also develop an admitting model, a trading model and a competing model of the electronic market in WODSS based on market theory in economics. These models reveal the key mechanisms that drive WODSS to operate efficiently.

  4. Using overlay network architectures for scalable video distribution

    Science.gov (United States)

    Patrikakis, Charalampos Z.; Despotopoulos, Yannis; Fafali, Paraskevi; Cha, Jihun; Kim, Kyuheon

    2004-11-01

    Within the last years, the enormous growth of Internet based communication as well as the rapid increase of available processing power has led to the widespread use of multimedia streaming as a means to convey information. This work aims at providing an open architecture designed to support scalable streaming to a large number of clients using application layer multicast. The architecture is based on media relay nodes that can be deployed transparently to any existing media distribution scheme, which can support media streamed using the RTP and RTSP protocols. The architecture is based on overlay networks at application level, featuring rate adaptation mechanisms for responding to network congestion.

  5. Supporting the model of ductile iron dendritic solidification

    Energy Technology Data Exchange (ETDEWEB)

    Santos, H.M.C.M. [Porto Univ. (Portugal). Metall. and Mater. Dept.; Pinto, A.M.P. [Minho Univ. (Portugal). Mechanical Engineering Dept.; Jacinto, M.C.P.L. [Porto Polytechnic Inst. and INEGI, Porto (Portugal). Mechanical Engineering Dept.; Sa, C.P.M. [Porto Univ. (Portugal). Materials Center

    2000-08-01

    Microsegregation in ductile iron is generally accepted as modelled by a regular pattern: the graphite promoter elements are assumed to concentrate in the neighborhood of the graphite nodules and the carbide forming elements in the eutectic cell boundaries. The authors have conducted several microanalyses in several ductile irons and concluded that the microsegregation pattern does not agree with this model but supports the mechanism of dendritic ductile iron solidification. (orig.)

  6. A Conversation Model Enabling Intelligent Agents to Give Emotional Support

    OpenAIRE

    Van der Zwaan, J.M.; Dignum, V; Jonker, C.M.

    2012-01-01

    In everyday life, people frequently talk to others to help them deal with negative emotions. To some extent, everybody is capable of comforting other people, but so far conversational agents are unable to deal with this type of situation. To provide intelligent agents with the capability to give emotional support, we propose a domain-independent conversational model that is based on topics suggested by cognitive appraisal theories of emotion and the 5-phase model that is used to structure onl...

  7. Support vector regression-based internal model control

    Institute of Scientific and Technical Information of China (English)

    HUANG Yan-wei; PENG Tie-gen

    2007-01-01

    This paper proposes a design of internal model control systems for processes with delay by using support vector regression (SVR). The proposed system fully uses the excellent nonlinear estimation performance of SVR with the structural risk minimization principle. Closed-loop stability and steady-state error are analyzed in the presence of modeling errors. The simulations show that the proposed control systems have better control performance than those based on neural networks when the training samples are small in size and noisy.

  8. IT-Supported Modeling, Analysis and Design of Supply Chains

    Science.gov (United States)

    Nienhaus, Jörg; Alard, Robert; Sennheiser, Andreas

    A common language is a prerequisite for analyzing and optimizing supply chains. Based on experiences with three case studies, this paper identifies the aspects of a supply chain that have to be mapped to take informed decisions on its operations. Current, integrated modeling approaches for supply chains, like the SCOR and the GSCM model, will be analyzed and an advanced approach will be defined. The resulting approach takes advantage of IT-support.

  9. A General Theory of Computational Scalability Based on Rational Functions

    CERN Document Server

    Gunther, Neil J

    2008-01-01

    The universal scalability law (USL) of computational capacity is a rational function C_p = P(p)/Q(p), with P(p) a linear polynomial and Q(p) a second-degree polynomial in the number of physical processors p, which has long been used for statistical modeling and prediction of computer system performance. We prove that C_p is equivalent to the synchronous throughput bound for a machine-repairman with state-dependent service rate. Simpler rational functions, such as Amdahl's law and Gustafson speedup, are corollaries of this queue-theoretic bound. C_p is both necessary and sufficient for modeling all practical characteristics of computational scalability.
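
    For reference, the usual closed forms behind this abstract can be sketched as follows, using the common USL notation of a contention parameter σ and a coherency-delay parameter κ (the notation is assumed, not quoted from the paper).

    ```latex
    % Universal scalability law: relative capacity at p processors, with
    % contention parameter \sigma and coherency-delay parameter \kappa:
    \[
      C(p) = \frac{p}{1 + \sigma\,(p-1) + \kappa\, p\,(p-1)} .
    \]
    % Amdahl's law is the special case \kappa = 0:
    \[
      C_{A}(p) = \frac{p}{1 + \sigma\,(p-1)} .
    \]
    ```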

  10. Scalable and near-optimal design space exploration for embedded systems

    CERN Document Server

    Kritikakou, Angeliki; Goutis, Costas

    2014-01-01

    This book describes scalable and near-optimal, processor-level design space exploration (DSE) methodologies.  The authors present design methodologies for data storage and processing in real-time, cost-sensitive data-dominated embedded systems.  Readers will be enabled to reduce time-to-market, while satisfying system requirements for performance, area, and energy consumption, thereby minimizing the overall cost of the final design.   • Describes design space exploration (DSE) methodologies for data storage and processing in embedded systems, which achieve near-optimal solutions with scalable exploration time; • Presents a set of principles and the processes which support the development of the proposed scalable and near-optimal methodologies; • Enables readers to apply scalable and near-optimal methodologies to the intra-signal in-place optimization step for both regular and irregular memory accesses.

  11. Investigation on Reliability and Scalability of an FBG-Based Hierarchical AOFSN

    Directory of Open Access Journals (Sweden)

    Li-Mei Peng

    2010-03-01

    Full Text Available The reliability and scalability of large-scale FBG-based hierarchical optical fiber sensor networks (AOFSN) are considered in this paper. The AOFSN network consists of a three-level hierarchical sensor network architecture. The first two levels consist of active interrogation and remote nodes (RNs), and the third level, called the sensor subnet (SSN), consists of passive Fiber Bragg Gratings (FBGs) and a few switches. The switch architectures in the RN and various SSNs to improve the reliability and scalability of AOFSN are studied. Two SSNs with a regular topology are proposed to support simple routing and scalability in AOFSN: square-based sensor cells (SSC) and pentagon-based sensor cells (PSC). The reliability and scalability are evaluated in terms of the available sensing coverage in the case of one or multiple link failures.

  12. Support Vector Machine-Based Nonlinear System Modeling and Control

    Institute of Scientific and Technical Information of China (English)

    张浩然; 韩正之; 冯瑞; 于志强

    2003-01-01

    This paper provides an introduction to the support vector machine, a new kernel-based technique introduced in statistical learning theory and structural risk minimization, and then presents a modeling-control framework based on SVM. Finally, a numerical experiment is presented to demonstrate the proposed approach's correctness and effectiveness.

  13. Modeling Positive Behavior Interventions and Supports for Preservice Teachers

    Science.gov (United States)

    Hill, Doris Adams; Flores, Margaret M.

    2014-01-01

    The authors modeled programwide positive behavior interventions and supports (PBIS) principles to 26 preservice teachers during consolidated yearly extended school year (ESY) services delivered to elementary students from four school districts. While PBIS were in place for preservice teachers to implement with students, a similar system was…

  14. HYDRA: a decision support model for irrigation water management

    NARCIS (Netherlands)

    Jacucci, G.; Kabat, P.; Verrier, P.J.; Teixeira, J.L.; Steduto, P.; Bertanzon, G.; Giannerini, G.; Huygen, J.; Fernando, R.M.; Hooijer, A.A.; Simons, W.; Toller, G.; Tziallas, G.; Uhrik, C.; Broek, van den B.J.; Vera Munoz, J.; Yovchev, P.

    1995-01-01

    HYDRA introduces information modelling and decision-support systems (DSS) to farmers and authorities in European Mediterranean agriculture in order to improve irrigation practices at different levels. Key components of HYDRA-DSS are a hierarchical set of water balance and crop growth simulation

  15. Reference Implementation of Scalable I/O Low-Level API on Intel Paragon

    Institute of Scientific and Technical Information of China (English)

    SUN Ninghui

    1999-01-01

    The Scalable I/O (SIO) Initiative's Low-Level Application Programming Interface (SIO LLAPI) provides file system implementers with a simple low-level interface to support high-level parallel I/O interfaces efficiently and effectively. This paper describes a reference implementation and the evaluation of the SIO LLAPI on the Intel Paragon multicomputer. The implementation provides the file system structure and striping algorithm compatible with the Parallel File System (PFS) of Intel Paragon, and runs either inside the kernel or as a user-level library. The scatter-gather addressing read/write, asynchronous I/O, client caching and prefetching mechanism, file access hint mechanism, collective I/O and highly efficient file copy have been implemented. The preliminary experience shows that the SIO LLAPI provides opportunities of significant performance improvement and is easy to implement. Some high-level file system interfaces and applications, such as PFS, ADIO and the Hartree-Fock application, are also implemented on top of SIO. The performance of PFS is at least the same as that of Intel's native PFS, and in many cases, such as small sequential file access, huge I/O requests and collective I/O, it is stable and much better. The SIO features help to support high-level interfaces easily, quickly and more efficiently, and the cache, prefetching and hints are useful to get better performance based on different access models. The scalability and performance of SIO are limited by the network latency, network scalable bandwidth, memory copy bandwidth, memory size and pattern of I/O requests. The tradeoff between generality and efficiency should be considered in implementation.

  16. Scalable quantum processor with trapped electrons.

    Science.gov (United States)

    Ciaramicoli, G; Marzoli, I; Tombesi, P

    2003-07-04

    A quantum computer can be implemented by trapping electrons in vacuum within an innovative confining structure. Universal processing is realized by controlling the Coulomb interaction and applying electromagnetic pulses. This system offers scalability, high clock speed, and low decoherence.

  17. A Scalable Segmented Decision Tree Abstract Domain

    Science.gov (United States)

    Cousot, Patrick; Cousot, Radhia; Mauborgne, Laurent

    The key to precision and scalability in all formal methods for static program analysis and verification is the handling of disjunctions arising in relational analyses, the flow-sensitive traversal of conditionals and loops, the context-sensitive inter-procedural calls, the interleaving of concurrent threads, etc. Explicit case enumeration immediately yields to combinatorial explosion. The art of scalable static analysis is therefore to abstract disjunctions to minimize cost while preserving weak forms of disjunctions for expressivity.

  18. Supporting universal prevention programs: a two-phased coaching model.

    Science.gov (United States)

    Becker, Kimberly D; Darney, Dana; Domitrovich, Celene; Keperling, Jennifer Pitchford; Ialongo, Nicholas S

    2013-06-01

    Schools are adopting evidence-based programs designed to enhance students' emotional and behavioral competencies at increasing rates (Hemmeter et al. in Early Child Res Q 26:96-109, 2011). At the same time, teachers express the need for increased support surrounding implementation of these evidence-based programs (Carter and Van Norman in Early Child Educ 38:279-288, 2010). Ongoing professional development in the form of coaching may enhance teacher skills and implementation (Noell et al. in School Psychol Rev 34:87-106, 2005; Stormont et al. 2012). There exists a need for a coaching model that can be applied to a variety of teacher skill levels and one that guides coach decision-making about how best to support teachers. This article provides a detailed account of a two-phased coaching model with empirical support developed and tested with coaches and teachers in urban schools (Becker et al. 2013). In the initial universal coaching phase, all teachers receive the same coaching elements regardless of their skill level. Then, in the tailored coaching phase, coaching varies according to the strengths and needs of each teacher. Specifically, more intensive coaching strategies are used only with teachers who need additional coaching supports, whereas other teachers receive just enough support to consolidate and maintain their strong implementation. Examples of how coaches used the two-phased coaching model when working with teachers who were implementing two universal prevention programs (i.e., the PATHS curriculum and PAX Good Behavior Game [PAX GBG]) provide illustrations of the application of this model. The potential reach of this coaching model extends to other school-based programs as well as other settings in which coaches partner with interventionists to implement evidence-based programs.

  19. TYRE DYNAMICS MODELLING OF VEHICLE BASED ON SUPPORT VECTOR MACHINES

    Institute of Scientific and Technical Information of China (English)

    ZHENG Shuibo; TANG Houjun; HAN Zhengzhi; ZHANG Yong

    2006-01-01

    Various methods of tyre modelling are implemented, from purely theoretical to empirical or semi-empirical models based on experimental results. A new way of representing tyre data obtained from measurements is presented via support vector machines (SVMs). The feasibility of applying SVMs to steady-state tyre modelling is investigated by comparison with a three-layer backpropagation (BP) neural network at pure slip and combined slip. The results indicate that SVMs outperform the BP neural network in modelling the tyre characteristics with better generalization performance. The SVMs-tyre is implemented in an 8-DOF vehicle model for vehicle dynamics simulation by means of the PAC 2002 Magic Formula as reference. The SVMs-tyre can be a competitive and accurate method to model a tyre for vehicle dynamics simulation.
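
    As a toy illustration of fitting a tyre characteristic with SVR, the sketch below generates lateral-force data from a generic Magic Formula shape and fits an RBF-kernel SVR to it; the coefficients are arbitrary, and this is neither the paper's SVMs-tyre model nor the PAC 2002 parameter set.

    ```python
    # Sketch: fit an RBF-kernel SVR to synthetic tyre data generated from a
    # generic Magic Formula shape y = D*sin(C*atan(B*x - E*(B*x - atan(B*x)))).
    # The coefficients below are arbitrary, not PAC 2002 values.
    import numpy as np
    from sklearn.svm import SVR

    B, C, D, E = 10.0, 1.9, 1.0, 0.97
    slip = np.linspace(-0.3, 0.3, 200)
    force = D * np.sin(C * np.arctan(B * slip - E * (B * slip - np.arctan(B * slip))))
    force += np.random.default_rng(1).normal(scale=0.02, size=slip.shape)  # noise

    model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(slip.reshape(-1, 1), force)
    print("predicted force at 5% slip:", model.predict([[0.05]])[0])
    ```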

  20. Is Scalability Necessary for Economic Sustainability?

    Directory of Open Access Journals (Sweden)

    Dennis F.X. Mathaisel

    2015-06-01

    Full Text Available The objective of this paper is to investigate the impact of scalability on the sustainability of any entity (ecological, environmental, human, or enterprise) anywhere. Scalability refers to the ability of the enterprise to grow without losing customers, diminishing quality, or changing the core value proposition of the organization. The question to be addressed is whether or not growth is necessary. The author has developed a framework for a sustainable entity that addresses five abilities for an entity to be sustainable: availability, dependability, capability, affordability and marketability. Scalability, as an addition to these five abilities, represents a unique challenge for some institutions, especially small ones. Scalability attempts to describe the sensitivity of the entity to changes in the scope of operations, specifically in the context of SMEs (Small and Medium-sized Enterprises) because their challenges are unique. Larger organizations have the luxury of established practices, cultures, and recognition. Small businesses and new ventures face an environment where their brands and cultures may still be evolving, and the quality of their products and services could change with increases in scale. Consequently, this paper explores the theory that entities may not be scalable because, through growth, they might intrinsically change their core businesses and suffer losses. Keywords: economic sustainability, scalability, SME sustainability, sustainability abilities, small business growth.

  1. A multicriteria prioritization model to support public safety planning

    Directory of Open Access Journals (Sweden)

    André Morais Gurgel

    2013-08-01

    Full Text Available Setting out to solve operational problems is a frequent part of decision making on public safety. However, the pillars of tactics and strategy are normally disregarded. Thus, this paper focuses on a strategic issue, namely that of a city prioritizing areas in which there is a tendency for criminality to increase. A multiple-criteria approach is taken. The reason for this is that such a situation is normally analyzed only from the perspective of the degree of police occurrences. The proposed model is based on the SMARTS multicriteria method and was applied in a Brazilian city. It combines a multicriteria method and a Monte Carlo simulation to support an analysis of robustness. As a result, we highlight some differences between the model developed and the police-occurrences model. It might support differentiated policies for zones, by indicating where there should be strong actions, infrastructure investments, monitoring procedures and other public safety policies.
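
    The combination described above, a SMARTS-style additive score plus Monte Carlo perturbation of the weights to check ranking robustness, can be sketched as follows; the zones, criteria and weights are invented and not from the Brazilian case study.

    ```python
    # Sketch: SMARTS-style additive scores for city zones, with Monte Carlo
    # perturbation of the weights to check how robust the priority ranking is.
    # Zones, criteria and weights are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(42)
    zones = ["North", "Centre", "South"]
    # normalized performance on three criteria (e.g. occurrences, population, trend)
    scores = np.array([[0.9, 0.4, 0.7],
                       [0.5, 0.9, 0.3],
                       [0.2, 0.6, 0.8]])
    weights = np.array([0.5, 0.3, 0.2])            # swing weights, sum to 1

    baseline_rank = np.argsort(-scores @ weights)   # best zone first

    top_counts = np.zeros(len(zones))
    for _ in range(10_000):
        w = rng.dirichlet(50 * weights)             # weights perturbed around baseline
        top_counts[np.argmax(scores @ w)] += 1

    print("baseline order:", [zones[i] for i in baseline_rank])
    print("P(zone is top priority):", dict(zip(zones, top_counts / 10_000)))
    ```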

  2. Scalable Overlay Multicasting in Mobile Ad Hoc Networks (SOM

    Directory of Open Access Journals (Sweden)

    Pariza Kamboj

    2010-10-01

    Full Text Available Many crucial applications of MANETs, such as battlefield, conference and disaster recovery scenarios, define the need for group communications in either one-to-many or many-to-many form. Multicast plays an important role in bandwidth-scarce multihop mobile ad hoc networks comprising mobile nodes with limited battery power. Multicast protocols in MANETs generate much control overhead for the maintenance of multicast routing structures due to frequent changes of network topology. Bigger multicast tables for the maintenance of network structures result in inefficient consumption of the bandwidth of wireless links and the battery power of anemic mobile nodes, which in turn poses scalability problems as the network size is scaled up. However, many MANET applications demand scalability from time to time. Multicasting for MANETs, therefore, needs to reduce state maintenance. As a remedy to these shortcomings, this paper proposes an overlay multicast protocol at the application layer. In the proposed protocol, titled "Scalable Overlay Multicasting in Mobile Ad Hoc Networks (SOM)", the network nodes construct an overlay hierarchical framework to reduce protocol state and constrain its distribution within a limited scope. Based on a zone around each node, it constructs a virtual structure at the application layer mapped onto the physical topology at the network layer, thus forming two levels of hierarchy. The concept of a two-level hierarchy reduces protocol state maintenance and hence supports vertical scalability. The protocol depends on location information obtained using a distributed location service, which effectively reduces the overhead for route searching and updating the source-based multicast tree.

  3. Scalable MPEG-4 Encoder on FPGA Multiprocessor SOC

    Directory of Open Access Journals (Sweden)

    Marko Hännikäinen

    2006-10-01

    Full Text Available High computational requirements combined with rapidly evolving video coding algorithms and standards are a great challenge for contemporary encoder implementations. Rapid specification changes favor full programmability and configurability for both software and hardware. This paper presents a novel scalable MPEG-4 video encoder on an FPGA-based multiprocessor system-on-chip (MPSOC). The MPSOC architecture is truly scalable and is based on a vendor-independent intellectual property (IP) block interconnection network. The scalability in video encoding is achieved by spatial parallelization, where images are divided into horizontal slices. A case design is presented with up to four synthesized processors on an Altera Stratix 1S40 device. A truly portable ANSI-C implementation that supports an arbitrary number of processors gives 11 QCIF frames/s at 50 MHz without processor-specific optimizations. The parallelization efficiency is 97% for two processors and 93% with three. The FPGA utilization is 70%, requiring 28 797 logic elements. The implementation effort is significantly lower compared to traditional multiprocessor implementations.

  4. Scalable MPEG-4 Encoder on FPGA Multiprocessor SOC

    Directory of Open Access Journals (Sweden)

    Kulmala Ari

    2006-01-01

    Full Text Available High computational requirements combined with rapidly evolving video coding algorithms and standards are a great challenge for contemporary encoder implementations. Rapid specification changes favor full programmability and configurability for both software and hardware. This paper presents a novel scalable MPEG-4 video encoder on an FPGA-based multiprocessor system-on-chip (MPSOC). The MPSOC architecture is truly scalable and is based on a vendor-independent intellectual property (IP) block interconnection network. The scalability in video encoding is achieved by spatial parallelization, where images are divided into horizontal slices. A case design is presented with up to four synthesized processors on an Altera Stratix 1S40 device. A truly portable ANSI-C implementation that supports an arbitrary number of processors gives 11 QCIF frames/s at 50 MHz without processor-specific optimizations. The parallelization efficiency is 97% for two processors and 93% with three. The FPGA utilization is 70%, requiring 28 797 logic elements. The implementation effort is significantly lower compared to traditional multiprocessor implementations.

  5. Systematic Literature Review of Agile Scalability for Large Scale Projects

    Directory of Open Access Journals (Sweden)

    Hina Saeeda

    2015-09-01

    Full Text Available Among new methods, "agile" has emerged as the top approach in the software industry for the development of software. Agile is applied in different forms to handle issues such as low cost, tight time-to-market schedules, continuously changing requirements, communication and coordination, team size, and distributed environments. Agile has proved to be successful in small and medium-sized projects; however, it has several limitations when applied to large projects. The purpose of this study is to examine agile techniques in detail, finding and highlighting their restrictions for large projects with the help of a systematic literature review. The systematic literature review is going to find answers to the research questions: 1) How can agile approaches be made scalable and adoptable for large projects? 2) What existing methods, approaches, frameworks and practices support the agile process in large-scale projects? 3) What are the limitations of existing agile approaches, methods, frameworks and practices with reference to large-scale projects? This study will identify the current research problems of agile scalability for large projects by giving a detailed literature review of the identified problems, the existing work providing solutions to these problems, and the limitations of the existing work in covering the identified problems in agile scalability. All the results gathered will be summarized statistically, and based on these findings remedial work will be planned in future to handle the identified limitations of agile approaches for large-scale projects.

  6. MULTI SUPPORT VECTOR MACHINES DECISION MODEL AND ITS APPLICATION

    Institute of Scientific and Technical Information of China (English)

    阎威武; 陈治纲; 邵惠鹤

    2002-01-01

    Support Vector Machines (SVM) is a powerful machine learning method developed from statistical learning theory and is currently an active field in artificial intelligence technology. SVM is sensitive to noise vectors near the hyperplane since it is determined by only a few support vectors. In this paper, a Multi SVM decision model (MSDM) is proposed. MSDM consists of multiple SVMs and makes decisions by synthesizing information from the multiple SVMs. MSDM is applied to heart disease diagnosis based on a UCI benchmark data set. MSDM somewhat improves the robustness of the decision system.

  7. Relationship model and supporting activities of JIT, TQM and TPM

    Directory of Open Access Journals (Sweden)

    Nuttapon SaeTong

    2011-02-01

    Full Text Available This paper gives a relationship model and supporting activities of Just-in-time (JIT), Total Quality Management (TQM), and Total Productive Maintenance (TPM). By reviewing the concepts, 5S, Kaizen, preventive maintenance, Kanban, visual control, Poka-Yoke, and Quality Control tools are the main supporting activities. Based on the analysis, 5S, preventive maintenance, and Kaizen are the foundation of the three concepts. QC tools are required activities for implementing TQM, whereas Poka-Yoke and visual control are necessary activities for implementing TPM. After successfully implementing TQM and TPM, Kanban is needed for JIT.

  8. Leveraging Cloud Technology to Provide a Responsive, Reliable and Scalable Backend for the Virtual Ice Sheet Laboratory Using the Ice Sheet System Model and Amazon's Elastic Compute Cloud

    Science.gov (United States)

    Perez, G. L.; Larour, E. Y.; Halkides, D. J.; Cheng, D. L. C.

    2015-12-01

    The Virtual Ice Sheet Laboratory (VISL) is a Cryosphere outreach effort by scientists at the Jet Propulsion Laboratory (JPL) in Pasadena, CA, Earth and Space Research (ESR) in Seattle, WA, and the University of California at Irvine (UCI), with the goal of providing interactive lessons for K-12 and college level students, while conforming to STEM guidelines. At the core of VISL is the Ice Sheet System Model (ISSM), an open-source project developed jointly at JPL and UCI whose main purpose is to model the evolution of the polar ice caps in Greenland and Antarctica. By using ISSM, VISL students have access to state-of-the-art modeling software that is being used to conduct scientific research by users all over the world. However, providing this functionality is by no means simple. The modeling of ice sheets in response to sea and atmospheric temperatures, among many other possible parameters, requires significant computational resources. Furthermore, this service needs to be responsive and capable of handling burst requests produced by classrooms of students. Cloud computing providers represent a burgeoning industry. With major investments by tech giants like Amazon, Google and Microsoft, it has never been easier or more affordable to deploy computational elements on-demand. This is exactly what VISL needs and ISSM is capable of. Moreover, this is a promising alternative to investing in expensive and rapidly devaluing hardware.

  9. Data and Models Needed to Support Civil Aviation

    Science.gov (United States)

    Onsager, Terrance; Biesecker, D. A.; Berger, Thomas; Rutledge, Robert

    2016-07-01

    The effective utilization of existing data and models is an important element in advancing the goals of the COSPAR/ILWS space weather roadmap. This is recommended to be done through innovative approaches to data utilization, including data driving, data assimilation, and ensemble modeling. This presentation will focus on observations and models needed to support space weather services for civil aviation and commercial space transportation. The service needs for aviation will be discussed, and an overview will be given of some of the existing data and models that can provide these services. Efforts underway to define the requirements for real-time data and to assess current modeling capabilities will be described. Recommendations will be offered for internationally coordinated activities that could identify priorities and further the roadmap goals.

  10. Silicon nanophotonics for scalable quantum coherent feedback networks

    CERN Document Server

    Sarovar, Mohan; Cox, Jonathan; Brif, Constantin; DeRose, Christopher T; Camacho, Ryan; Davids, Paul

    2016-01-01

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully ...

  11. Scalable, sustainable cost-effective surgical care: a model for safety and quality in the developing world, part III: impact and sustainability.

    Science.gov (United States)

    Campbell, Alex; Restrepo, Carolina; Mackay, Don; Sherman, Randy; Varma, Ajit; Ayala, Ruben; Sarma, Hiteswar; Deshpande, Gaurav; Magee, William

    2014-09-01

    The Guwahati Comprehensive Cleft Care Center (GCCCC) utilizes a high-volume, subspecialized institution to provide safe, quality, and comprehensive and cost-effective surgical care to a highly vulnerable patient population. The GCCCC utilized a diagonal model of surgical care delivery, with vertical inputs of mission-based care transitioning to investments in infrastructure and human capital to create a sustainable, local care delivery system. Over the first 2.5 years of service (May 2011-November 2013), the GCCCC made significant advances in numerous areas. Progress was meticulously documented to evaluate performance and provide transparency to stakeholders including donors, government officials, medical oversight bodies, employees, and patients. During this time period, the GCCCC provided free operations to 7,034 patients, with improved safety, outcomes, and multidisciplinary services while dramatically decreasing costs and increasing investments in the local community. The center has become a regional referral cleft center, and governments of surrounding states have contracted the GCCCC to provide care for their citizens with cleft lip and cleft palate. Additional regional and global impact is anticipated through continued investments into education and training, comprehensive services, and research and outcomes. The success of this public private partnership demonstrates the value of this model of surgical care in the developing world, and offers a blueprint for reproduction. The GCCCC experience has been consistent with previous studies demonstrating a positive volume-outcomes relationship, and provides evidence for the value of the specialty hospital model for surgical delivery in the developing world.

  12. Twin support vector machines models, extensions and applications

    CERN Document Server

    Jayadeva; Chandra, Suresh

    2017-01-01

    This book provides a systematic and focused study of the various aspects of twin support vector machines (TWSVM) and related developments for classification and regression. In addition to presenting most of the basic models of TWSVM and twin support vector regression (TWSVR) available in the literature, it also discusses the important and challenging applications of this new machine learning methodology. A chapter on “Additional Topics” has been included to discuss kernel optimization and support tensor machine topics, which are comparatively new but have great potential in applications. It is primarily written for graduate students and researchers in the area of machine learning and related topics in computer science, mathematics, electrical engineering, management science and finance.

  13. Context-aware Workflow Model for Supporting Composite Workflows

    Institute of Scientific and Technical Information of China (English)

    Jong-sun CHOI; Jae-young CHOI; Yong-yun CHO

    2010-01-01

    In recent years, several researchers have applied workflow technologies for service automation in ubiquitous computing environments. However, most context-aware workflows do not offer a method to compose several workflows into a larger or more complicated workflow; they provide only a simple workflow model, not a composite one. In this paper, the authors propose a context-aware workflow model that supports composite workflows by expanding the patterns of existing context-aware workflows, which support the basic workflow patterns. The suggested workflow model offers composite workflow patterns for a context-aware workflow, consisting of various flow patterns such as simple, split, and parallel flows, and subflows. With the suggested model, existing workflows can easily be reused to build a new workflow. As a result, it can save the development effort and time of context-aware workflows and increase workflow reusability. Therefore, the suggested model is expected to make it easier to develop applications related to context-aware workflow services in ubiquitous computing environments.

  14. PORFLOW modeling supporting the FY14 saltstone special analysis

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Taylor, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2014-04-01

    PORFLOW related analyses supporting the Saltstone FY14 Special Analysis (SA) described herein are based on prior modeling supporting the Saltstone FY13 SA. Notable changes to the previous round of simulations include: a) consideration of Saltstone Disposal Unit (SDU) design type 6 under “Nominal” and “Margin” conditions, b) omission of the clean cap fill from the nominal SDU 2 and 6 modeling cases as a reasonable approximation of greater waste grout fill heights, c) minor updates to the cementitious materials degradation analysis, d) use of updated I-129 sorption coefficient (Kd) values in soils, e) assignment of the pH/Eh environment of saltstone to the underlying floor concrete, considering down flow through an SDU, and f) implementation of an improved sub-model for Tc release in an oxidizing environment. These new model developments are discussed and followed by a cursory presentation of simulation results. The new Tc release sub-model produced significantly improved (smoother) flux results compared to the FY13 SA. Further discussion of PORFLOW model setup and simulation results will be presented in the FY14 SA, including dose results.

  15. Supporting crosscutting concern modelling in software architecture design

    Institute of Scientific and Technical Information of China (English)

    CAO Donggang; MEI Hong; ZHOU Minghui

    2007-01-01

    Crosscutting concerns such as logging, security, and transaction are well supported at the programming level by aspect-oriented programming technologies. However, addressing these issues in high-level architecture design still remains open. This paper presents a novel approach to supporting crosscutting concern modelling in the software architecture design of component-based systems. We introduce a new element named "Aspect" into our architecture description language, ABC/ADL, to clearly model the behavior of crosscutting concerns. Aspect is a first-class entity, like Component and Connector, in ABC/ADL. ABC/ADL Connectors provide the weaving points where components and aspects crosscut. This approach effectively enables "separation of concerns" in high-level architecture design and facilitates black-box reuse of COTS components.

  16. A Novel Model for Predicting Rehospitalization Risk Incorporating Physical Function, Cognitive Status, and Psychosocial Support Using Natural Language Processing.

    Science.gov (United States)

    Greenwald, Jeffrey L; Cronin, Patrick R; Carballo, Victoria; Danaei, Goodarz; Choy, Garry

    2017-03-01

    With the increasing focus on reducing hospital readmissions in the United States, numerous readmissions risk prediction models have been proposed, mostly developed through analyses of structured data fields in electronic medical records and administrative databases. Three areas that may have an impact on readmission but are poorly captured using structured data sources are patients' physical function, cognitive status, and psychosocial environment and support. The objective of the study was to build a discriminative model using information germane to these 3 areas to identify hospitalized patients' risk for 30-day all cause readmissions. We conducted clinician focus groups to identify language used in the clinical record regarding these 3 areas. We then created a dataset including 30,000 inpatients, 10,000 from each of 3 hospitals, and searched those records for the focus group-derived language using natural language processing. A 30-day readmission prediction model was developed on 75% of the dataset and validated on the other 25% and also on hospital specific subsets. Focus group language was aggregated into 35 variables. The final model had 16 variables, a validated C-statistic of 0.74, and was well calibrated. Subset validation of the model by hospital yielded C-statistics of 0.70-0.75. Deriving a 30-day readmission risk prediction model through identification of physical, cognitive, and psychosocial issues using natural language processing yielded a model that performs similarly to the better performing models previously published with the added advantage of being based on clinically relevant factors and also automated and scalable. Because of the clinical relevance of the variables in the model, future research may be able to test if targeting interventions to identified risks results in reductions in readmissions.

  17. Toward a Push-Scalable Global Internet

    CERN Document Server

    Agarwal, Sachin

    2010-01-01

    Push message delivery, where a client maintains an "always-on" connection with a server in order to be notified of an (asynchronous) message arrival in real time, is increasingly being used in Internet services. The key message in this paper is that push message delivery on the World Wide Web is not scalable for servers, intermediate network elements, and battery-operated mobile device clients. We present a measurement analysis of a commercially deployed WWW push email service to highlight some of these issues. Next, we suggest content-based optimization to reduce the always-on connection requirement of push messaging. Our idea is based on exploiting the periodic nature of human-to-human messaging. We show how machine learning can accurately model the times of a day or week when messages are least likely to arrive, and turn off always-on connections during these times. We apply our approach to a real email data set and our experiments demonstrate that the number of hours of active always-on connections can be cut b...
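
    The following is a minimal sketch (not the authors' implementation) of the content-based idea described above: estimate per-hour arrival rates from historical message timestamps and flag the hours in which an always-on connection could be dropped. The function name, threshold, and synthetic history are all hypothetical.

    ```python
    from collections import Counter
    from datetime import datetime

    def low_activity_hours(timestamps, threshold=0.02):
        """Hours of the day whose share of historical arrivals falls below `threshold`."""
        counts = Counter(ts.hour for ts in timestamps)
        total = sum(counts.values()) or 1
        return sorted(h for h in range(24) if counts.get(h, 0) / total < threshold)

    # Hypothetical usage: synthetic arrivals clustered in working hours.
    history = [datetime(2010, 1, day, hour) for day in range(1, 29) for hour in (9, 10, 14, 16)]
    print(low_activity_hours(history))   # hours where the always-on connection could be dropped
    ```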

  18. A scalable neuristor built with Mott memristors

    Science.gov (United States)

    Pickett, Matthew D.; Medeiros-Ribeiro, Gilberto; Williams, R. Stanley

    2013-02-01

    The Hodgkin-Huxley model for action potential generation in biological axons is central for understanding the computational capability of the nervous system and emulating its functionality. Owing to the historical success of silicon complementary metal-oxide-semiconductors, spike-based computing is primarily confined to software simulations and specialized analogue metal-oxide-semiconductor field-effect transistor circuits. However, there is interest in constructing physical systems that emulate biological functionality more directly, with the goal of improving efficiency and scale. The neuristor was proposed as an electronic device with properties similar to the Hodgkin-Huxley axon, but previous implementations were not scalable. Here we demonstrate a neuristor built using two nanoscale Mott memristors, dynamical devices that exhibit transient memory and negative differential resistance arising from an insulating-to-conducting phase transition driven by Joule heating. This neuristor exhibits the important neural functions of all-or-nothing spiking with signal gain and diverse periodic spiking, using materials and structures that are amenable to extremely high-density integration with or without silicon transistors.

  19. Scalable Combinatorial Tools for Health Disparities Research

    Directory of Open Access Journals (Sweden)

    Michael A. Langston

    2014-10-01

    Full Text Available Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual’s genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complicated systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene x environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject.

  20. Phone Duration Modeling of Affective Speech Using Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Alexandros Lazaridis

    2012-07-01

    Full Text Available In speech synthesis, accurate modeling of prosody is important for producing high-quality synthetic speech. One of the main aspects of prosody is phone duration. Robust phone duration modeling is a prerequisite for synthesizing natural-sounding emotional speech. In this work, ten phone duration models are evaluated. These models belong to well-known and widely used categories of algorithms, such as decision trees, linear regression, lazy-learning algorithms and meta-learning algorithms. Furthermore, we investigate the effectiveness of Support Vector Regression (SVR) in phone duration modeling in the context of emotional speech. The evaluation of the eleven models is performed on a Modern Greek emotional speech database which consists of four categories of emotional speech (anger, fear, joy, sadness) plus neutral speech. The experimental results demonstrate that the SVR-based modeling outperforms the other ten models across all four emotion categories. Specifically, the SVR model achieved an average relative reduction of 8% in terms of root mean square error (RMSE) throughout all emotional categories.
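
    As a rough illustration of the SVR approach described above, the sketch below fits a support vector regressor to synthetic duration data and reports RMSE; the Greek emotional-speech corpus and the paper's feature set are not reproduced, so all data and parameter values are placeholders.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))                                         # stand-in phonetic/prosodic features
    y = 80 + 20 * X[:, 0] - 10 * X[:, 1] + rng.normal(scale=5, size=500)   # phone duration (ms)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    svr = SVR(kernel="rbf", C=10.0, epsilon=1.0).fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, svr.predict(X_te)) ** 0.5
    print(f"RMSE: {rmse:.2f} ms")
    ```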

  1. Neighborhood Supported Model Level Fuzzy Aggregation for Moving Object Segmentation.

    Science.gov (United States)

    Chiranjeevi, Pojala; Sengupta, Somnath

    2014-02-01

    We propose a new algorithm for moving object detection in the presence of challenging dynamic background conditions. We use a set of fuzzy aggregated multifeature similarity measures applied on multiple models corresponding to multimodal backgrounds. The algorithm is enriched with a neighborhood-supported model initialization strategy for faster convergence. Background model maintenance driven by a model-level fuzzy aggregation measure ensures more robustness. Similarity functions are evaluated between the corresponding elements of the current feature vector and the model feature vectors. Concepts from Sugeno and Choquet integrals are incorporated in our algorithm to compute fuzzy similarities from the ordered similarity function values for each model. Model updating and the foreground/background classification decision are based on the set of fuzzy integrals. Our proposed algorithm is shown to outperform other multi-model background subtraction algorithms. The proposed approach completely avoids explicit offline training to initialize the background model and can be initialized even when moving objects are present. The feature space uses a combination of intensity and statistical texture features for better object localization and robustness. Our qualitative and quantitative studies illustrate the mitigation of a variety of challenging situations by our approach.
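
    To make the aggregation step concrete, the sketch below computes a discrete Choquet integral over per-feature similarity scores; the fuzzy measure and feature names are invented for illustration and are not the ones used in the paper.

    ```python
    def choquet(values, mu):
        """values: feature -> similarity in [0, 1]; mu: frozenset of features -> measure."""
        order = sorted(values, key=values.get)        # features by ascending similarity
        remaining = set(values)
        total, prev = 0.0, 0.0
        for feature in order:
            x = values[feature]
            total += (x - prev) * mu[frozenset(remaining)]
            prev = x
            remaining.discard(feature)
        return total

    similarities = {"intensity": 0.9, "texture": 0.4}     # current pixel vs. one background model
    mu = {frozenset({"intensity", "texture"}): 1.0,       # illustrative fuzzy measure
          frozenset({"intensity"}): 0.7,
          frozenset({"texture"}): 0.5,
          frozenset(): 0.0}
    print(choquet(similarities, mu))   # 0.4 * 1.0 + (0.9 - 0.4) * 0.7 = 0.75
    ```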

  2. Information Model Translation to Support a Wider Science Community

    Science.gov (United States)

    Hughes, John S.; Crichton, Daniel; Ritschel, Bernd; Hardman, Sean; Joyner, Ronald

    2014-05-01

    The Planetary Data System (PDS), NASA's long-term archive for solar system exploration data, has just released PDS4, a modernization of the PDS architecture, data standards, and technical infrastructure. This next generation system positions the PDS to meet the demands of the coming decade, including big data, international cooperation, distributed nodes, and multiple ways of analysing and interpreting data. It also addresses three fundamental project goals: providing more efficient data delivery by data providers to the PDS, enabling a stable, long-term usable planetary science data archive, and enabling services for the data consumer to find, access, and use the data they require in contemporary data formats. The PDS4 information architecture is used to describe all PDS data using a common model. Captured in an ontology modeling tool it supports a hierarchy of data dictionaries built to the ISO/IEC 11179 standard and is designed to increase flexibility, enable complex searches at the product level, and to promote interoperability that facilitates data sharing both nationally and internationally. A PDS4 information architecture design requirement stipulates that the content of the information model must be translatable to external data definition languages such as XML Schema, XMI/XML, and RDF/XML. To support the semantic Web standards we are now in the process of mapping the contents into RDF/XML to support SPARQL capable databases. We are also building a terminological ontology to support virtually unified data retrieval and access. This paper will provide an overview of the PDS4 information architecture focusing on its domain information model and how the translation and mapping are being accomplished.

  3. Cost modelling as decision support when locating manufacturing facilities

    Directory of Open Access Journals (Sweden)

    Christina Windmark

    2016-01-01

    Full Text Available This paper presents a methodology for cost estimation in developing decision support for production location issues. The purpose is to provide a structured work procedure to be used by practitioners to derive the knowledge needed to make informed decisions on where to locate production. The paper places special focus on how to integrate cost effects into the decision process. The work procedure and cost models were developed in close collaboration with a group of industrial partners. The result is a structure of cost estimation tools aligned to different steps in the work procedure. The cost models can facilitate both cost estimation for manufacturing a product under new preconditions, including support costs, and cost simulations to analyse the risks of wrong estimations and uncertainties in the input parameters. Future research aims to test the methodology in ongoing transfer projects to further understand difficulties in managing global production systems. In existing models and methods presented in the literature, cost is usually estimated at too aggregated a level to be suitable for decision support regarding production system design. The cost estimation methodology presented here provides new insights into cost-driving factors related to the production system.

  4. A Cost Model for Integrated Logistic Support Activities

    Directory of Open Access Journals (Sweden)

    M. Elena Nenni

    2013-01-01

    Full Text Available An Integrated Logistic Support (ILS) service has the objective of improving a system's efficiency and availability over the life cycle. The system constructor offers the service to the customer and becomes the Contractor Logistic Support (CLS). The aim of this paper is to propose an approach to support the CLS in budget formulation. Specific goals of the model are the provision of the annual cost of ILS activities through a specific cost model and a comprehensive examination of expected benefits, costs and savings under alternative ILS strategies. A simple example derived from an industrial application is also provided to illustrate the idea. The scientific literature is lacking on this topic, and documents from the military deal only with the issue of performance measurement; moreover, they are obviously focused on the customer's perspective. Other scientific papers are general and focused only on maintenance or life cycle management. The model developed in this paper approaches the problem from the perspective of the CLS, and it is specifically tailored to the main issues of an ILS service.

  5. Rate control scheme for consistent video quality in scalable video codec.

    Science.gov (United States)

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since scalable video codec provides various scalabilities to adapt the bitstream for the channel conditions and terminal types, scalable codec is one of the useful codecs for wired or wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation degrades the visual perception significantly. It is important to efficiently use the target bits in order to maintain a consistent video quality or achieve a small distortion variation throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes have been proposed to control video quality in the H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.

  6. Support Vector Machine active learning for 3D model retrieval

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, we present a novel Support Vector Machine active learning algorithm for effective 3D model retrieval using the concept of relevance feedback. The proposed method learns from the most informative objects which are marked by the user, and then creates a boundary separating the relevant models from irrelevant ones. What it needs is only a small number of 3D models labelled by the user. It can grasp the user's semantic knowledge rapidly and accurately. Experimental results showed that the proposed algorithm significantly improves the retrieval effectiveness. Compared with four state-of-the-art query refinement schemes for 3D model retrieval, it provides superior retrieval performance after no more than two rounds of relevance feedback.
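
    A rough sketch of one relevance-feedback round in the spirit of the method described above is shown below; synthetic feature vectors stand in for 3D shape descriptors, and ranking/querying by distance to the SVM decision boundary is a common active-learning heuristic rather than the authors' exact scheme.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    database = rng.normal(size=(200, 32))            # one descriptor per 3D model (synthetic)
    feedback = {0: 1, 5: 1, 40: 0, 99: 0}            # user-marked relevant (1) / irrelevant (0)

    X = database[list(feedback)]
    y = np.array(list(feedback.values()))
    svm = SVC(kernel="rbf").fit(X, y)

    scores = svm.decision_function(database)         # signed distance to the boundary
    ranking = np.argsort(-scores)                    # retrieval order: most relevant first
    ask_next = np.argsort(np.abs(scores))[:5]        # most informative objects to label next
    print(ranking[:10], ask_next)
    ```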

  7. PORFLOW Modeling Supporting The H-Tank Farm Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, J. M.; Flach, G. P.; Westbrook, M. L.

    2012-08-31

    Numerical simulations of groundwater flow and contaminant transport in the vadose and saturated zones have been conducted using the PORFLOW code in support of an overall Performance Assessment (PA) of the H-Tank Farm. This report provides technical detail on selected aspects of PORFLOW model development and describes the structure of the associated electronic files. The PORFLOW models for the H-Tank Farm PA, Rev. 1 were updated with grout, solubility, and inventory changes. The aquifer model was refined. In addition, a set of flow sensitivity runs were performed to allow flow to be varied in the related probabilistic GoldSim models. The final PORFLOW concentration values are used as input into a GoldSim dose calculator.

  8. Dynamical mass modeling of dispersion-supported dwarf galaxies

    Science.gov (United States)

    Wolf, Joseph

    The currently favored cold dark matter cosmology (LCDM) has had much success in reproducing the large-scale structure of the universe. However, on smaller scales there are some possible discrepancies when attempting to match galactic observations with properties of halos in dissipationless LCDM simulations. One advantageous method to test small-scale simulations against observations is through dynamical mass modeling of nearby dwarf spheroidal galaxies (dSphs). The stellar tracers of dSphs are dispersion-supported, which poses a significant challenge in accurately deriving mass profiles. Unlike rotationally supported galaxies, whose dynamics can be well approximated by one-dimensional physics, modeling dispersion-supported systems given only line-of-sight data results in a well-known degeneracy between the mass profile and the velocity dispersion anisotropy. The core of this dissertation is rooted in a new advancement we have discovered: the range of solutions allowed by the mass-anisotropy degeneracy varies as a function of radius, with a considerable minimum near the deprojected half-light radius of almost all observed dispersion-supported galaxies. This finding allows a wide range of applications in galaxy formation scenarios to be explored in an attempt to address, amongst other hypotheses, whether the LCDM framework needs to be modified in order to reproduce observations on small scales. This thesis comprises both the derivation of this finding and its applicability to all dispersion-supported systems, ranging from dwarf galaxies consisting of a few hundred stars to systems of 'intracluster light', containing over a trillion stars. Rarely does one have the privilege of working with systems that span such a large range in luminosity (or any intrinsic property) in a short graduate career. Although the large applicability of this scale-free finding allows for discussion in many subfields, this thesis will mainly focus on one topic: dwarf

  9. A repository based on a dynamically extensible data model supporting multidisciplinary research in neuroscience

    Directory of Open Access Journals (Sweden)

    Corradi Luca

    2012-10-01

    Full Text Available Abstract Background Robust, extensible and distributed databases integrating clinical, imaging and molecular data represent a substantial challenge for modern neuroscience. It is even more difficult to provide extensible software environments able to effectively target the rapidly changing data requirements and structures of research experiments. There is an increasing request from the neuroscience community for software tools addressing technical challenges about: (i) supporting researchers in the medical field to carry out data analysis using integrated bioinformatics services and tools; (ii) handling multimodal/multiscale data and metadata, enabling the injection of several different data types according to structured schemas; (iii) providing high extensibility, in order to address different requirements deriving from a large variety of applications simply through a user runtime configuration. Methods A dynamically extensible data structure supporting collaborative multidisciplinary research projects in neuroscience has been defined and implemented. We have considered extensibility issues from two different points of view. First, the improvement of data flexibility has been taken into account. This has been done through the development of a methodology for the dynamic creation and use of data types and related metadata, based on the definition of a “meta” data model. This way, users are not constrained to a set of predefined data, and the model can be easily extended and applied to different contexts. Second, users have been enabled to easily customize and extend the experimental procedures in order to track each step of acquisition or analysis. This has been achieved through a process-event data structure, a multipurpose taxonomic schema composed of two generic main objects: events and processes. Then, a repository has been built based on such a data model and structure, and deployed on distributed resources thanks to a Grid-based approach

  10. Memory-Scalable GPU Spatial Hierarchy Construction.

    Science.gov (United States)

    Qiming Hou; Xin Sun; Kun Zhou; Lauterbach, C; Manocha, D

    2011-04-01

    Recent GPU algorithms for constructing spatial hierarchies have achieved promising performance for moderately complex models by using the breadth-first search (BFS) construction order. While being able to exploit the massive parallelism on the GPU, the BFS order also consumes excessive GPU memory, which becomes a serious issue for interactive applications involving very complex models with more than a few million triangles. In this paper, we propose to use the partial breadth-first search (PBFS) construction order to control memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm is for kd-trees that automatically balances between the level of parallelism and intermediate memory usage. With PBFS, peak memory consumption during construction can be efficiently controlled without costly CPU-GPU data transfer. We also develop memory allocation strategies to effectively limit memory fragmentation. The resulting algorithm scales well with GPU memory and constructs kd-trees of models with millions of triangles at interactive rates on GPUs with 1 GB memory. Compared with existing algorithms, our algorithm is an order of magnitude more scalable for a given GPU memory bound. The second algorithm is for out-of-core bounding volume hierarchy (BVH) construction for very large scenes based on the PBFS construction order. At each iteration, all constructed nodes are dumped to the CPU memory, and the GPU memory is freed for the next iteration's use. In this way, the algorithm is able to build trees that are too large to be stored in the GPU memory. Experiments show that our algorithm can construct BVHs for scenes with up to 20 M triangles, several times larger than previous GPU algorithms.

  11. Scalable Resource Discovery Architecture for Large Scale MANETs

    Directory of Open Access Journals (Sweden)

    Saad Al-Ahmadi

    2014-02-01

    Full Text Available The study conducted a primary investigation into using the Gray cube structure, clustering and Distributed Hash Tables (DHTs) to build an efficient virtual network backbone for Resource Discovery (RD) tasks in large-scale Mobile Ad hoc NETworks (MANETs). A MANET is an autonomous system of mobile nodes characterized by wireless links. One of the major challenges in MANETs is the RD protocols responsible for advertising and searching network services. We propose an efficient and scalable RD architecture to meet the challenging requirements of a reliable, scalable and power-efficient RD protocol suitable for MANETs with potentially thousands of wireless mobile devices. Our RD is based on a virtual network backbone created by dividing the network into several non-overlapping localities using multi-hop clustering. In every locality we build a Gray cube with locally adapted dimension. All the Gray cubes are connected through gateways and access points to form a virtual backbone used as a substrate for DHT operations to distribute, register and locate network resources efficiently. The Gray cube is characterized by low network diameter, low average distance and strong connectivity. We evaluated the proposed RD performance and compared it to some of the well-known RD schemes in the literature based on modeling and simulation. The results show the superiority of the proposed RD in terms of delay, load balancing, overloading avoidance, scalability and fault tolerance.

  12. Operations and support cost modeling using Markov chains

    Science.gov (United States)

    Unal, Resit

    1989-01-01

    Systems for future missions will be selected with life cycle cost (LCC) as a primary evaluation criterion. This reflects the current realization that only systems which are considered affordable will be built in the future due to national budget constraints. Such an environment calls for innovative cost modeling techniques which address all of the phases a space system goes through during its life cycle, namely: design and development, fabrication, operations and support, and retirement. A significant portion of the LCC for reusable systems is generated during the operations and support (OS) phase. Typically, OS costs can account for 60 to 80 percent of the total LCC. Clearly, OS costs are wholly determined, or at least strongly influenced, by decisions made during the design and development phases of the project. As a result, OS costs need to be considered and estimated early in the conceptual phase. To be effective, an OS cost estimating model needs to account for actual instead of ideal processes by associating cost elements with probabilities. One approach that may be suitable for OS cost modeling is the use of the Markov chain process. Markov chains are an important method of probabilistic analysis for operations research analysts but they are rarely used for life cycle cost analysis. This research effort evaluates the use of Markov chains in LCC analysis by developing an OS cost model for a hypothetical reusable space transportation vehicle (HSTV) and suggests further uses of the Markov chain process as a design-aid tool.
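
    The toy sketch below illustrates the Markov chain idea in the abstract: transient states a reusable vehicle cycles through, per-visit costs, and the expected operations and support cost accumulated before retirement (the absorbing state). The states, transition probabilities and costs are illustrative and are not taken from the study.

    ```python
    import numpy as np

    # Transient states a vehicle occupies between flights; retirement is the absorbing state.
    states = ["operate", "scheduled_maint", "unscheduled_repair"]
    Q = np.array([[0.00, 0.70, 0.25],    # from "operate": retire with prob 0.05
                  [0.95, 0.00, 0.05],
                  [0.90, 0.10, 0.00]])
    visit_cost = np.array([1.0, 0.4, 2.5])    # cost ($M) incurred each time a state is entered

    N = np.linalg.inv(np.eye(3) - Q)          # fundamental matrix: expected visits per state
    expected_cost = N[0] @ visit_cost         # starting the life cycle in "operate"
    print(f"expected O&S cost until retirement: {expected_cost:.1f} $M")
    ```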

  13. A prototype computer-aided modelling tool for life-support system models

    Science.gov (United States)

    Preisig, H. A.; Lee, Tae-Yeong; Little, Frank

    1990-01-01

    Based on the canonical decomposition of physical-chemical-biological systems, a prototype kernel has been developed to efficiently model alternative life-support systems. It supports (1) the work in an interdisciplinary group through an easy-to-use mostly graphical interface, (2) modularized object-oriented model representation, (3) reuse of models, (4) inheritance of structures from model object to model object, and (5) model data base. The kernel is implemented in Modula-II and presently operates on an IBM PC.

  14. Scalable Machine Learning for Massive Astronomical Datasets

    Science.gov (United States)

    Ball, Nicholas M.; Gray, A.

    2014-04-01

    We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the applications of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms: kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and the two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors. This is likely of particular interest to the radio astronomy community given, for example, that survey projects contain groups dedicated to this topic. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex
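
    As a small-scale illustration of one of the listed operations, the sketch below computes k-nearest-neighbour outlier scores on synthetic data; the full 2MASS catalog would of course require the scalable implementations described above.

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(2)
    catalog = np.vstack([rng.normal(size=(10_000, 4)),         # bulk population (synthetic)
                         rng.normal(loc=8.0, size=(10, 4))])   # a few injected oddballs

    nn = NearestNeighbors(n_neighbors=6).fit(catalog)
    dist, _ = nn.kneighbors(catalog)          # first neighbour of each point is the point itself
    outlier_score = dist[:, 1:].mean(axis=1)  # mean distance to the 5 nearest other objects
    print(np.argsort(-outlier_score)[:10])    # indices of the strongest outliers
    ```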

  15. Parallel Heuristics for Scalable Community Detection

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Howard; Kalyanaraman, Anantharaman; Halappanavar, Mahantesh; Choudhury, Sutanay

    2014-05-17

    Community detection has become a fundamental operation in numerous graph-theoretic applications. It is used to reveal natural divisions that exist within real world networks without imposing prior size or cardinality constraints on the set of communities. Despite its potential for application, there is only limited support for community detection on large-scale parallel computers, largely owing to the irregular and inherently sequential nature of the underlying heuristics. In this paper, we present parallelization heuristics for fast community detection using the Louvain method as the serial template. The Louvain method is an iterative heuristic for modularity optimization. Originally developed by Blondel et al. in 2008, the method has become increasingly popular owing to its ability to detect high modularity community partitions in a fast and memory-efficient manner. However, the method is also inherently sequential, thereby limiting its scalability to problems that can be solved on desktops. Here, we observe certain key properties of this method that present challenges for its parallelization, and consequently propose multiple heuristics that are designed to break the sequential barrier. Our heuristics are agnostic to the underlying parallel architecture. For evaluation purposes, we implemented our heuristics on shared memory (OpenMP) and distributed memory (MapReduce-MPI) machines, and tested them over real world graphs derived from multiple application domains (internet, biological, natural language processing). Experimental results demonstrate the ability of our heuristics to converge to high modularity solutions comparable to those output by the serial algorithm in nearly the same number of iterations, while also drastically reducing time to solution.
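
    For reference, the serial Louvain template that the paper parallelizes can be run on a toy graph as sketched below (assuming networkx 2.8 or later, which ships louvain_communities); the parallel heuristics themselves are not shown.

    ```python
    import networkx as nx
    from networkx.algorithms import community

    G = nx.karate_club_graph()                        # small stand-in for a real-world graph
    parts = community.louvain_communities(G, seed=0)  # serial Louvain (requires networkx >= 2.8)
    print(len(parts), "communities, modularity =", community.modularity(G, parts))
    ```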

  16. Scalable persistent identifier systems for dynamic datasets

    Science.gov (United States)

    Golodoniuc, P.; Cox, S. J. D.; Klump, J. F.

    2016-12-01

    Reliable and persistent identification of objects, whether tangible or not, is essential in information management. Many Internet-based systems have been developed to identify digital data objects, e.g., PURL, LSID, Handle, ARK. These were largely designed for identification of static digital objects. The amount of data made available online has grown exponentially over the last two decades and fine-grained identification of dynamically generated data objects within large datasets using conventional systems (e.g., PURL) has become impractical. We have compared capabilities of various technological solutions to enable resolvability of data objects in dynamic datasets, and developed a dataset-centric approach to resolution of identifiers. This is particularly important in Semantic Linked Data environments where dynamic frequently changing data is delivered live via web services, so registration of individual data objects to obtain identifiers is impractical. We use identifier patterns and pattern hierarchies for identification of data objects, which allows relationships between identifiers to be expressed, and also provides means for resolving a single identifier into multiple forms (i.e. views or representations of an object). The latter can be implemented through (a) HTTP content negotiation, or (b) use of URI querystring parameters. The pattern and hierarchy approach has been implemented in the Linked Data API supporting the United Nations Spatial Data Infrastructure (UNSDI) initiative and later in the implementation of geoscientific data delivery for the Capricorn Distal Footprints project using International Geo Sample Numbers (IGSN). This enables flexible resolution of multi-view persistent identifiers and provides a scalable solution for large heterogeneous datasets.
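
    A hand-rolled illustration of pattern-based identifier resolution with a representation selected per request is sketched below; the patterns and target URLs are invented and are not those of the UNSDI or IGSN deployments mentioned above.

    ```python
    import re

    PATTERNS = [
        # (identifier pattern, target URL template per representation) -- invented examples
        (re.compile(r"^sample/(?P<igsn>[A-Z0-9]+)$"),
         {"html": "https://example.org/samples/{igsn}",
          "json": "https://example.org/api/samples/{igsn}.json"}),
    ]

    def resolve(identifier, view="html"):
        """Return the target for the first matching pattern and requested view, or None."""
        for pattern, targets in PATTERNS:
            match = pattern.match(identifier)
            if match and view in targets:
                return targets[view].format(**match.groupdict())
        return None

    print(resolve("sample/AU1234", view="json"))
    ```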

  17. WIFIRE: A Scalable Data-Driven Monitoring, Dynamic Prediction and Resilience Cyberinfrastructure for Wildfires

    Science.gov (United States)

    Altintas, I.; Block, J.; Braun, H.; de Callafon, R. A.; Gollner, M. J.; Smarr, L.; Trouve, A.

    2013-12-01

    Recent studies confirm that climate change will cause wildfires to increase in frequency and severity in the coming decades, especially for California and much of the North American West. The most critical sustainability issue in the midst of these ever-changing dynamics is how to achieve a new social-ecological equilibrium of this fire ecology. Wildfire wind speeds and directions change in an instant, and first responders can only be effective when they take action as quickly as the conditions change. To deliver information needed for sustainable policy and management in this dynamically changing fire regime, we must capture these details to understand the environmental processes. We are building an end-to-end cyberinfrastructure (CI), called WIFIRE, for real-time and data-driven simulation, prediction and visualization of wildfire behavior. The WIFIRE integrated CI system supports social-ecological resilience to the changing fire ecology regime in the face of urban dynamics and climate change. Networked observations, e.g., heterogeneous satellite data and real-time remote sensor data, are integrated with computational techniques in signal processing, visualization, modeling and data assimilation to provide a scalable, technological, and educational solution to monitor weather patterns and predict a wildfire's Rate of Spread. Our collaborative WIFIRE team of scientists, engineers, technologists, government policy managers, private industry, and firefighters architects and implements CI pathways that enable joint innovation for wildfire management. Scientific workflows are used as an integrative distributed programming model and simplify the implementation of engineering modules for data-driven simulation, prediction and visualization while allowing integration with large-scale computing facilities. WIFIRE will be scalable to users with different skill levels via specialized web interfaces and user-specified alerts for environmental events broadcasted to receivers before

  18. Bridging groundwater models and decision support with a Bayesian network

    Science.gov (United States)

    Fienen, Michael N.; Masterson, John P.; Plant, Nathaniel G.; Gutierrez, Benjamin T.; Thieler, E. Robert

    2013-01-01

    Resource managers need to make decisions to plan for future environmental conditions, particularly sea level rise, in the face of substantial uncertainty. Many interacting processes factor into the decisions they face. Advances in process models and the quantification of uncertainty have made models a valuable tool for this purpose. Long simulation runtimes and, often, numerical instability make linking process models impractical in many cases. A method for emulating the important connections between model input and forecasts, while propagating uncertainty, has the potential to provide a bridge between complicated numerical process models and the efficiency and stability needed for decision making. We explore this using a Bayesian network (BN) to emulate a groundwater flow model. We expand on previous approaches to validating a BN by calculating forecasting skill using cross validation of a groundwater model of Assateague Island in Virginia and Maryland, USA. This BN emulation was shown to capture the important groundwater-flow characteristics and uncertainty of the groundwater system because of its connection to island morphology and sea level. Forecast power metrics associated with the validation of multiple alternative BN designs guided the selection of an optimal level of BN complexity. Assateague Island is an ideal test case for exploring a forecasting tool based on current conditions because the unique hydrogeomorphological variability of the island includes a range of settings indicative of past, current, and future conditions. The resulting BN is a valuable tool for exploring the response of groundwater conditions to sea level rise in decision support.

  19. Computer Modelling of a Tank Battle with Helicopter Support

    Directory of Open Access Journals (Sweden)

    Chatter Singh

    1986-01-01

    Full Text Available The paper attempts to model a tank-versus-tank battle scenario in which the defender is provided with armed helicopter unit support against a surprise advance of the attacker towards an important place. The stochastic and dynamic nature of the battle system has been handled by means of Monte Carlo simulation, in which activities like move, search, fire, hit and kill are simulated and their effects generated in the model. The game has been repeated for parameters relating to (i) fire power, (ii) mobility, (iii) intervisibility, (iv) blind shooting, (v) defender/attacker force ratio and (vi) helicopter unit support with the defender. Then, average numerical effects in each case have been analysed. Although the results are based on tentative data, the trend seems to suggest that a battalion of Centurion tanks, or 2 coys with helicopter unit support, stands a fairly good chance of defeating an attack by M-47/48 tanks equivalent to 4 coys. Nevertheless, the methodology provides an effective basis to systematically approach realistic situations and quantitatively assess weapon system effectiveness under tactical alternatives and battlefield environments.
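
    A greatly simplified Monte Carlo sketch of this kind of stochastic duel is shown below; the hit probabilities, force sizes, and round structure are placeholders, and move, search, and intervisibility are omitted entirely.

    ```python
    import random

    def duel(defenders=35, attackers=70, p_def_kill=0.25, p_att_kill=0.10, rounds=30):
        """One replication: both sides fire simultaneously each round until one is destroyed."""
        for _ in range(rounds):
            if defenders <= 0 or attackers <= 0:
                break
            att_losses = sum(random.random() < p_def_kill for _ in range(defenders))
            def_losses = sum(random.random() < p_att_kill for _ in range(attackers))
            attackers = max(0, attackers - att_losses)
            defenders = max(0, defenders - def_losses)
        return defenders, attackers

    random.seed(0)
    outcomes = [duel() for _ in range(1000)]
    defender_wins = sum(d > 0 and a == 0 for d, a in outcomes)
    print(f"defender destroys the attacking force in {defender_wins / 10:.1f}% of runs")
    ```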

  20. A collaborative model for supporting community-based interdisciplinary education.

    Science.gov (United States)

    Carney, Patricia A; Schifferdecker, Karen E; Pipas, Catherine F; Fall, Leslie H; Poor, Daniel A; Peltier, Deborah A; Nierenberg, David W; Brooks, W Blair

    2002-07-01

    Development and support of community-based, interdisciplinary ambulatory medical education has achieved high priority due to on-site capacity and the unique educational experiences community sites contribute to the educational program. The authors describe the collaborative model their school developed and implemented in 2000 to integrate institution- and community-based interdisciplinary education through a centralized office, the strengths and challenges faced in applying it, the educational outcomes that are being tracked to evaluate its effectiveness, and estimates of funds needed to ensure its success. Core funding of $180,000 is available annually for a centralized office, the keystone of the model described here. With this funding, the office has (1) addressed recruitment, retention, and quality of educators for UME; (2) promoted innovation in education, evaluation, and research; (3) supported development of a comprehensive curriculum for medical school education; and (4) monitored the effectiveness of community-based education programs by tracking product yield and cost estimates needed to generate these programs. The model's Teaching and Learning Database contains information about more than 1,500 educational placements at 165 ambulatory teaching sites (80% in northern New England) involving 320 active preceptors. The centralized office facilitated 36 site visits, 22% of which were interdisciplinary, involving 122 preceptors. A total of 98 follow-up requests by community-based preceptors were fulfilled in 2000. The current submission-to-funding ratio for educational grants is 56%. Costs per educational activity have ranged from $811.50 to $1,938, with costs per preceptor ranging from $101.40 to $217.82. Cost per product (grants, manuscripts, presentations) in research and academic scholarship activities was $2,492. The model allows the medical school to balance institutional and departmental support for its educational programs, and to better position

  1. Scalable High Performance Message Passing over InfiniBand for Open MPI

    Energy Technology Data Exchange (ETDEWEB)

    Friedley, A; Hoefler, T; Leininger, M L; Lumsdaine, A

    2007-10-24

    InfiniBand (IB) is a popular network technology for modern high-performance computing systems. MPI implementations traditionally support IB using a reliable, connection-oriented (RC) transport. However, per-process resource usage that grows linearly with the number of processes makes this approach prohibitive for large-scale systems. IB provides an alternative in the form of a connectionless unreliable datagram transport (UD), which allows for near-constant resource usage and initialization overhead as the process count increases. This paper describes a UD-based implementation for IB in Open MPI as a scalable alternative to existing RC-based schemes. We use the software reliability capabilities of Open MPI to provide the guaranteed delivery semantics required by MPI. Results show that UD not only requires fewer resources at scale, but also allows for shorter MPI startup times. A connectionless model also improves performance for applications that tend to send small messages to many different processes.

  2. T:XML: A Tool Supporting User Interface Model Transformation

    Science.gov (United States)

    López-Jaquero, Víctor; Montero, Francisco; González, Pascual

    Model-driven development of user interfaces is based on the transformation of an abstract specification into the final user interface the user will interact with. The design of transformation rules to carry out this transformation process is a key issue in any model-driven user interface development approach. In this paper, we introduce T:XML, an integrated development environment for managing, creating and previewing transformation rules. The tool supports the specification of transformation rules by using a graphical notation that works on the basis of the transformation of the input model into a graph-based representation. T:XML allows the design and execution of transformation rules in an integrated development environment. Furthermore, the designer can also preview how the generated user interface looks after the transformations have been applied. These previewing capabilities can be used to quickly create prototypes to discuss with the users in user-centered design methods.

  3. General KBE model with inheritance and multi CAD support

    Science.gov (United States)

    Tiuca, T. L.; Rusu, C.; Noveanu, S.; Mandru, D.

    2016-08-01

    Knowledge-Based Engineering (KBE) is a research field that studies methodologies and technologies for capturing and re-using engineering knowledge. The primary objective of KBE is to reduce the time and cost of product research and/or product development processes, which is primarily achieved through automation of repetitive design tasks while capturing, retaining and re-using engineering knowledge. Every CAD system includes KBE tools, whose power is increased by the use of an external high-level programming language. The model presented in this paper aims to reduce the time and cost of developing particular KBE models by programming inheritance concepts as well as multi-CAD support. The model is implemented through a C# application that is also presented.

  4. Extending the Clapper-Yule model to rough printing supports.

    Science.gov (United States)

    Hébert, Mathieu; Hersch, Roger David

    2005-09-01

    The Clapper-Yule model is the only classical spectral reflection model for halftone prints that takes explicitly into account both the multiple internal reflections between the print-air interface and the paper substrate and the lateral propagation of light within the paper bulk. However, the Clapper-Yule model assumes a planar interface and does not take into account the roughness of the print surface. In order to extend the Clapper-Yule model to rough printing supports (e.g., matte coated papers or calendered papers), we model the print surface as a set of randomly oriented microfacets. The influence of the shadowing effect is evaluated and incorporated into the model. By integrating over all incident angles and facet orientations, we are able to express the internal reflectance of the rough interface as a function of the rms facet slope. By considering also the rough interface transmittances both for the incident light and for the emerging light, we obtain a generalization of the Clapper-Yule model for rough interfaces. The comparison between the classical Clapper-Yule model and the model extended to rough surfaces shows that the influence of the surface roughness on the predicted reflectance factor is small. For high-quality papers such as coated and calendered papers, as well as for low-quality papers such as newsprint or copy papers, the influence of surface roughness is negligible, and the classical Clapper-Yule model can be used to predict the halftone-print reflectance factors. The influence of roughness becomes significant only for very rough and thick nondiffusing coatings.
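
    For reference, one commonly cited form of the classical (smooth-interface) Clapper-Yule reflectance prediction is written out below; the rough-surface extension proposed in the paper replaces the interface terms with microfacet-averaged quantities and is not reproduced here. Parameter values are illustrative only.

    ```python
    # Commonly cited Clapper-Yule reflectance of a single-ink halftone on a smooth interface.
    # a: fractional ink coverage, t: ink transmittance, rho: paper substrate reflectance,
    # rs: specular surface reflection reaching the detector, ri: internal reflectance of
    # the print-air interface. Values used in the call below are placeholders.
    def clapper_yule(a, t, rho, rs=0.0, ri=0.6):
        attenuated = 1.0 - a + a * t
        return rs + ((1.0 - rs) * (1.0 - ri) * rho * attenuated ** 2
                     / (1.0 - ri * rho * (1.0 - a + a * t ** 2)))

    print(clapper_yule(a=0.5, t=0.4, rho=0.85))
    ```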

  5. Emulation Modeling with Bayesian Networks for Efficient Decision Support

    Science.gov (United States)

    Fienen, M. N.; Masterson, J.; Plant, N. G.; Gutierrez, B. T.; Thieler, E. R.

    2012-12-01

    Bayesian decision networks (BDN) have long been used to provide decision support in systems that require explicit consideration of uncertainty; applications range from ecology to medical diagnostics and terrorism threat assessments. Until recently, however, few studies have applied BDNs to the study of groundwater systems. BDNs are particularly useful for representing real-world system variability by synthesizing a range of hydrogeologic situations within a single simulation. Because BDN output is cast in terms of probability—an output desired by decision makers—they explicitly incorporate the uncertainty of a system. BDNs can thus serve as a more efficient alternative to other uncertainty characterization methods such as computationally demanding Monte Carlo analyses and other methods restricted to linear model analyses. We present a unique application of a BDN to a groundwater modeling analysis of the hydrologic response of Assateague Island, Maryland to sea-level rise. Using both input and output variables of the modeled groundwater response to different sea-level rise (SLR) scenarios, the BDN predicts the probability of changes in the depth to fresh water, which exerts an important influence on physical and biological island evolution. Input variables included barrier-island width, maximum island elevation, and aquifer recharge. The variability of these inputs and their corresponding outputs is sampled along cross sections in a single model run to form an ensemble of input/output pairs. The BDN outputs, which are the posterior distributions of water table conditions for the sea-level rise scenarios, are evaluated through error analysis and cross-validation to assess both fit to training data and predictive power. The key benefit of using BDNs in groundwater modeling analyses is that they provide a method for distilling complex model results into predictions with associated uncertainty, which is useful to decision makers. Future efforts incorporate
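
    The sketch below gives a bare-bones illustration of the emulation idea: paired model inputs and outputs are discretized into bins and tabulated into conditional probabilities, so a new input combination maps to a probability of the outcome of interest. The variables, bins, and synthetic relationship are invented and do not come from the Assateague model.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(3)
    width = rng.uniform(200, 1500, size=2000)                  # island width (m), synthetic
    slr = rng.choice([0.2, 0.4, 0.6], size=2000)               # sea-level rise scenario (m)
    depth = 2.5 + 0.002 * width - 2.0 * slr + rng.normal(0, 0.3, 2000)   # depth to water (m)

    df = pd.DataFrame({
        "width_bin": pd.cut(width, bins=[0, 500, 1000, 1500]),
        "slr": slr,
        "shallow": depth < 2.0,                                # outcome of interest
    })
    # Conditional probability of a shallow water table given width bin and SLR scenario.
    cpt = df.groupby(["width_bin", "slr"], observed=True)["shallow"].mean()
    print(cpt)
    ```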

  6. Cognitive Support using BDI Agent and Adaptive User Modeling

    DEFF Research Database (Denmark)

    Hossain, Shabbir

    2012-01-01

    a set of goals for attaining the objective of this thesis. The initial goal is to recognize the activities of the users to assess the need of support for the user during the activity. However, one of the challenges of the recognition process is the adaptability to variant user behaviour due to physical... for higher accuracy and reliability. The second goal focuses on the selection process of the type of support required for the user, based on the aptitude for performing the activities. The capability model has been extracted from the International Classification of Functioning, Disability and Health (ICF), a well... Observable Markov Decision Process (POMDP) in the BDI agent is proposed in this thesis to handle the irrational behaviour of the user. The fourth goal represents the implementation of the research approaches and performs validation of the system through experiments. The empirical results of the experiments...

  7. Distributed Hydrologic Modeling Apps for Decision Support in the Cloud

    Science.gov (United States)

    Swain, N. R.; Latu, K.; Christiensen, S.; Jones, N.; Nelson, J.

    2013-12-01

    Advances in computation resources and greater availability of water resources data represent an untapped resource for addressing hydrologic uncertainties in water resources decision-making. The current practice of water authorities relies on empirical, lumped hydrologic models to estimate watershed response. These models are not capable of taking advantage of many of the spatial datasets that are now available. Physically-based, distributed hydrologic models are capable of using these data resources and providing better predictions through stochastic analysis. However, there exists a digital divide that discourages many science-minded decision makers from using distributed models. This divide can be spanned using a combination of existing web technologies. The purpose of this presentation is to present a cloud-based environment that will offer hydrologic modeling tools or 'apps' for decision support and the web technologies that have been selected to aid in its implementation. Compared to the more commonly used lumped-parameter models, distributed models, while being more intuitive, are still data intensive, computationally expensive, and difficult to modify for scenario exploration. However, web technologies such as web GIS, web services, and cloud computing have made the data more accessible, provided an inexpensive means of high-performance computing, and created an environment for developing user-friendly apps for distributed modeling. Since many water authorities are primarily interested in the scenario exploration exercises with hydrologic models, we are creating a toolkit that facilitates the development of a series of apps for manipulating existing distributed models. There are a number of hurdles that cloud-based hydrologic modeling developers face. One of these is how to work with the geospatial data inherent with this class of models in a web environment. Supporting geospatial data in a website is beyond the capabilities of standard web frameworks and it

  8. Scalable tensor factorizations with missing data.

    Energy Technology Data Exchange (ETDEWEB)

    Morup, Morten (Technical University of Denmark); Dunlavy, Daniel M.; Acar, Evrim (Turkish National Research Institute of Electronics and Cryptology); Kolda, Tamara Gibson

    2010-04-01

    The problem of missing data is ubiquitous in domains such as biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, and communication networks—all domains in which data collection is subject to occasional errors. Moreover, these data sets can be quite large and have more than two axes of variation, e.g., sender, receiver, time. Many applications in those domains aim to capture the underlying latent structure of the data; in other words, they need to factorize data sets with missing entries. If we cannot address the problem of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP), and formulate the CP model as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) using a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes.
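    The weighted least squares idea can be illustrated with a few lines of numpy: the objective only penalizes residuals at observed entries, and the factor matrices are updated from the gradient of that masked objective. The sketch below uses plain gradient descent as a stand-in for the paper's first-order optimizer; the tensor sizes, rank, step size, and synthetic data are assumptions.

```python
# Minimal numpy sketch of the CP-WOPT idea: fit a rank-R CP model to a 3-way
# tensor using only the observed entries (mask W), via plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 20, 15, 10, 3

# Synthetic low-rank tensor with ~40% of entries missing
A0, B0, C0 = rng.normal(size=(I, R)), rng.normal(size=(J, R)), rng.normal(size=(K, R))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
W = (rng.random((I, J, K)) > 0.4).astype(float)  # 1 = observed, 0 = missing

# Random initialization of the factor matrices
A, B, C = (rng.normal(scale=0.1, size=s) for s in [(I, R), (J, R), (K, R)])
step = 1e-3
for _ in range(2000):
    E = W * (X - np.einsum('ir,jr,kr->ijk', A, B, C))   # masked residual
    gA = -np.einsum('ijk,jr,kr->ir', E, B, C)            # gradients of 0.5*||E||^2
    gB = -np.einsum('ijk,ir,kr->jr', E, A, C)
    gC = -np.einsum('ijk,ir,jr->kr', E, A, B)
    A, B, C = A - step * gA, B - step * gB, C - step * gC

E = W * (X - np.einsum('ir,jr,kr->ijk', A, B, C))
print("final masked squared error:", 0.5 * np.sum(E ** 2))
```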

  9. A new decision support model for preanesthetic evaluation.

    Science.gov (United States)

    Sobrie, Olivier; Lazouni, Mohammed El Amine; Mahmoudi, Saïd; Mousseau, Vincent; Pirlot, Marc

    2016-09-01

    The principal challenges in the field of anesthesia and intensive care consist of reducing both anesthetic risks and mortality rate. The ASA score plays an important role in patients' preanesthetic evaluation. In this paper, we propose a methodology to derive simple rules which classify patients in a category of the ASA scale on the basis of their medical characteristics. This diagnosis system is based on MR-Sort, a multiple criteria decision analysis model. The proposed method intends to support two steps in this process. The first is the assignment of an ASA score to the patient; the second concerns the decision whether or not to accept the patient for surgery. In order to learn the model parameters and assess its effectiveness, we use a database containing the parameters of 898 patients who underwent preanesthesia evaluation. The accuracy of the learned models for predicting the ASA score and the decision of accepting the patient for surgery is assessed and proves to be better than that of other machine learning methods. Furthermore, simple decision rules can be explicitly derived from the learned model. These are easily interpretable by doctors, and their consistency with medical knowledge can be checked. The proposed model for assessing the ASA score produces accurate predictions on the basis of the (limited) set of patient attributes in the database available for the tests. Moreover, the learned MR-Sort model allows for easy interpretation by providing human-readable classification rules. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
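    To make the MR-Sort assignment rule concrete, the following sketch applies a simplified majority-rule sorting scheme: a patient is placed in the highest category whose lower profile they "outrank", i.e. the criteria on which they meet the profile carry total weight at least lambda. The criteria, profiles, weights, and example patient are invented assumptions, not the model learned from the 898-patient database.

```python
# Illustrative MR-Sort-style assignment (not the authors' learned model).
CRITERIA = ["age_score", "blood_pressure_score", "cardiac_score", "respiratory_score"]
WEIGHTS = {"age_score": 0.2, "blood_pressure_score": 0.3,
           "cardiac_score": 0.3, "respiratory_score": 0.2}
LAMBDA = 0.6

# Profiles are assumed lower limits of ASA categories 2..4 (higher score = worse condition)
PROFILES = [
    {"age_score": 1, "blood_pressure_score": 1, "cardiac_score": 1, "respiratory_score": 1},  # ASA >= 2
    {"age_score": 2, "blood_pressure_score": 2, "cardiac_score": 2, "respiratory_score": 2},  # ASA >= 3
    {"age_score": 3, "blood_pressure_score": 3, "cardiac_score": 3, "respiratory_score": 3},  # ASA >= 4
]

def outranks(patient, profile):
    """Weighted coalition of criteria where the patient is at or above the profile."""
    support = sum(WEIGHTS[c] for c in CRITERIA if patient[c] >= profile[c])
    return support >= LAMBDA

def asa_category(patient):
    # With ordered profiles, the category is 1 plus the number of profiles outranked.
    return 1 + sum(outranks(patient, p) for p in PROFILES)

patient = {"age_score": 2, "blood_pressure_score": 3, "cardiac_score": 1, "respiratory_score": 2}
print("Predicted ASA category:", asa_category(patient))
```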

  10. Scalable still image coding based on wavelet

    Science.gov (United States)

    Yan, Yang; Zhang, Zhengbing

    2005-02-01

    Scalable image coding is an important objective for future image coding technologies. In this paper, we present a scalable image coding scheme based on the wavelet transform. The method uses the well-known EZW (Embedded Zerotree Wavelet) algorithm: the ROI (region of interest) of the original image is given a high-quality encoding, while the rest receives a coarser encoding. The method works well under limited memory conditions, since the background region is encoded according to the available memory capacity. In this way, the encoded image can easily be stored in a limited memory space without losing its main information. Simulation results show that the scheme is effective.

  11. Scalable k-means statistics with Titan.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, David C.; Bennett, Janine C.; Pebay, Philippe Pierre

    2009-11-01

    This report summarizes existing statistical engines in VTK/Titan and presents both the serial and parallel k-means statistics engines. It is a sequel to [PT08], [BPRT09], and [PT09], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, and contingency engines. The ease of use of the new parallel k-means engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the k-means engine.
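    The parallel pattern behind such an engine can be sketched in a few lines: each data partition computes partial sums and counts for the current centroids, and the partials are combined before the centroid update. The pure-numpy stand-in below is an assumption for illustration, not Titan/VTK code; the partition sizes, k, and synthetic data are invented.

```python
# Map/reduce-style parallel k-means sketch with per-partition partial statistics.
import numpy as np

rng = np.random.default_rng(1)
partitions = [rng.normal(loc=c, size=(200, 2)) for c in (0.0, 5.0, 10.0)]  # 3 "processes"
k = 3
centroids = np.vstack([p[0] for p in partitions])  # crude initialization

for _ in range(10):
    sums = np.zeros((k, 2))
    counts = np.zeros(k)
    for part in partitions:                       # "map": local pass over each partition
        labels = np.argmin(((part[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            sums[j] += part[labels == j].sum(axis=0)
            counts[j] += np.sum(labels == j)
    nonempty = counts > 0                         # "reduce": combine partials, update centroids
    centroids[nonempty] = sums[nonempty] / counts[nonempty, None]

print(centroids)
```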

  12. Software performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2009-01-01

    Praise from the Reviewers: "The practicality of the subject in a real-world situation distinguishes this book from others available on the market."—Professor Behrouz Far, University of Calgary. "This book could replace the computer organization texts now in use that every CS and CpE student must take. . . . It is much needed, well written, and thoughtful."—Professor Larry Bernstein, Stevens Institute of Technology. A distinctive, educational text on software performance and scalability. This is the first book to take a quantitative approach to the subject of software performance and scalability

  13. A Quantum Logic Array Microarchitecture: Scalable Quantum Data Movement and Computation

    CERN Document Server

    Metodi, T S; Cross, A W; Chong, F T; Chuang, I L; Metodi, Tzvetan S.; Thaker, Darshan D.; Cross, Andrew W.; Chong, Frederic T.; Chuang, Isaac L.

    2005-01-01

    Recent experimental advances have demonstrated technologies capable of supporting scalable quantum computation. A critical next step is how to put those technologies together into a scalable, fault-tolerant system that is also feasible. We propose a Quantum Logic Array (QLA) microarchitecture that forms the foundation of such a system. The QLA focuses on the communication resources necessary to efficiently support fault-tolerant computations. We leverage the extensive groundwork in quantum error correction theory and provide analysis that shows that our system is both asymptotically and empirically fault tolerant. Specifically, we use the QLA to implement a hierarchical, array-based design and a logarithmic expense quantum-teleportation communication protocol. Our goal is to overcome the primary scalability challenges of reliability, communication, and quantum resource distribution that plague current proposals for large-scale quantum computing.

  14. Using Built-In Domain-Specific Modeling Support to Guide Model-Based Test Generation

    CERN Document Server

    Kanstrén, Teemu; 10.4204/EPTCS.80.5

    2012-01-01

    We present a model-based testing approach to support automated test generation with domain-specific concepts. This includes a language expert who is an expert at building test models and domain experts who are experts in the domain of the system under test. First, we provide a framework to support the language expert in building test models using a full (Java) programming language with the help of simple but powerful modeling elements of the framework. Second, based on the model built with this framework, the toolset automatically forms a domain-specific modeling language that can be used to further constrain and guide test generation from these models by a domain expert. This makes it possible to generate a large set of test cases covering the full model, chosen (constrained) parts of the model, or manually define specific test cases on top of the model while using concepts familiar to the domain experts.

  15. Using Built-In Domain-Specific Modeling Support to Guide Model-Based Test Generation

    Directory of Open Access Journals (Sweden)

    Teemu Kanstrén

    2012-02-01

    We present a model-based testing approach to support automated test generation with domain-specific concepts. This includes a language expert who is an expert at building test models and domain experts who are experts in the domain of the system under test. First, we provide a framework to support the language expert in building test models using a full (Java) programming language with the help of simple but powerful modeling elements of the framework. Second, based on the model built with this framework, the toolset automatically forms a domain-specific modeling language that can be used to further constrain and guide test generation from these models by a domain expert. This makes it possible to generate a large set of test cases covering the full model, chosen (constrained) parts of the model, or manually define specific test cases on top of the model while using concepts familiar to the domain experts.

  16. Content-Aware Scalability-Type Selection for Rate Adaptation of Scalable Video

    Directory of Open Access Journals (Sweden)

    Tekalp A Murat

    2007-01-01

    Scalable video coders provide different scaling options, such as temporal, spatial, and SNR scalabilities, where rate reduction by discarding enhancement layers of a different scalability type results in different kinds and/or levels of visual distortion depending on the content and bitrate. This dependency between scalability type, video content, and bitrate is not well investigated in the literature. To this effect, we first propose an objective function that quantifies flatness, blockiness, blurriness, and temporal jerkiness artifacts caused by rate reduction by spatial size, frame rate, and quantization parameter scaling. Next, the weights of this objective function are determined for different content (shot types) and different bitrates using a training procedure with subjective evaluation. Finally, a method is proposed for choosing the best scaling type for each temporal segment that results in minimum visual distortion according to this objective function, given the content type of temporal segments. Two subjective tests have been performed to validate the proposed procedure for content-aware selection of the best scalability type on soccer videos. Soccer videos scaled from 600 kbps to 100 kbps by the proposed content-aware selection of scalability type have been found visually superior to those that are scaled using a single scalability option over the whole sequence.
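    The selection step can be illustrated with a toy calculation: for each temporal segment, the distortion of each scaling option is estimated as a weighted sum of artifact measures, and the option with minimum predicted distortion is kept. The artifact scores and content-dependent weights below are invented numbers, not the values trained in the paper.

```python
# Toy content-aware scalability-type selection.
# Per-segment artifact measurements for each scaling option at the target bitrate (assumed)
segments = {
    "wide_shot": {
        "spatial":  {"blur": 0.7, "blockiness": 0.2, "flatness": 0.3, "jerkiness": 0.0},
        "temporal": {"blur": 0.1, "blockiness": 0.1, "flatness": 0.1, "jerkiness": 0.8},
        "SNR":      {"blur": 0.2, "blockiness": 0.6, "flatness": 0.4, "jerkiness": 0.0},
    },
    "close_up": {
        "spatial":  {"blur": 0.3, "blockiness": 0.1, "flatness": 0.2, "jerkiness": 0.0},
        "temporal": {"blur": 0.1, "blockiness": 0.1, "flatness": 0.1, "jerkiness": 0.3},
        "SNR":      {"blur": 0.2, "blockiness": 0.5, "flatness": 0.3, "jerkiness": 0.0},
    },
}

# Weights per shot type, standing in for those learned from subjective tests
weights = {
    "wide_shot": {"blur": 1.0, "blockiness": 0.8, "flatness": 0.5, "jerkiness": 0.6},
    "close_up":  {"blur": 0.6, "blockiness": 1.0, "flatness": 0.4, "jerkiness": 1.2},
}

def best_scaling(shot_type):
    options, w = segments[shot_type], weights[shot_type]
    score = {opt: sum(w[a] * v for a, v in artifacts.items())
             for opt, artifacts in options.items()}
    return min(score, key=score.get), score

for shot in segments:
    choice, score = best_scaling(shot)
    print(shot, "->", choice, score)
```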

  17. Advancing LGBT Elder Policy and Support Services: The Massachusetts Model.

    Science.gov (United States)

    Krinsky, Lisa; Cahill, Sean

    2017-04-04

    The Massachusetts-based LGBT Aging Project has trained elder service providers in affirming and culturally competent care for LGBT older adults, supported development of LGBT-friendly meal programs, and advanced LGBT equality under aging policy. Working across sectors, this innovative model launched the country's first statewide Legislative Commission on Lesbian, Gay, Bisexual, and Transgender Aging. Advocates are working with policymakers to implement key recommendations, including cultural competency training and data collection in statewide networks of elder services. The LGBT Aging Project's success provides a template for improving services and policy for LGBT older adults throughout the country.

  18. Multiscale molecular modeling of tertiary supported lipid bilayers

    Science.gov (United States)

    Ranz, Holden T.; Faller, Roland

    2015-08-01

    Ternary lipid bilayer systems assembled from mixtures of dipalmitoylphosphatidylcholine (DPPC), dioleoylphosphatidylcholine (DOPC), and cholesterol have been studied using coarse-grained molecular dynamics at biologically relevant temperatures (280 K to 310 K), which lie between the chain melting temperatures of the pure lipid components. Free lipid bilayers were simulated using the MARTINI model (Stage I) and a variant with water-water interactions reduced to 76% (Stage II). The latter was subsequently used for preparing supported lipid bilayer simulations (Stage III). Clustering of like lipids was observed, but the simulation timescale did not yield larger phase-separated domains.

  19. Scalable multi-core model checking

    NARCIS (Netherlands)

    Laarman, Alfons Wilhelmus

    2014-01-01

    Our modern society relies increasingly on the sound performance of digital systems. Guaranteeing that these systems actually behave correctly according to their specification is not a trivial task, yet it is essential for mission-critical systems like auto-pilots, (nuclear) power-plant controllers a

  20. A Composite Modelling Approach to Decision Support by the Use of the CBA-DK Model

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; Salling, Kim Bang; Leleur, Steen

    2007-01-01

    This paper presents a decision support system for assessment of transport infrastructure projects. The composite modelling approach, COSIMA, combines a cost-benefit analysis by use of the CBA-DK model with multi-criteria analysis applying the AHP and SMARTER techniques. The modelling uncertainties...

  1. A formal model for access control with supporting spatial context

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hong; HE YePing; SHI ZhiGuo

    2007-01-01

    There is an emerging recognition of the importance of utilizing contextual information in authorization decisions. Controlling access to resources in the field of wireless and mobile networking requires the definition of a formal model for access control with supporting spatial context. However, the traditional RBAC model does not specify these spatial requirements. In this paper, we extend the existing RBAC model and propose the SC-RBAC model, which utilizes spatial and location-based information in security policy definitions. The concept of a spatial role is presented, and each role is assigned a logical location domain to specify its spatial boundary. Roles are activated based on the current physical position of the user, which is obtained from a specific mobile terminal. We then extend SC-RBAC to deal with hierarchies, modeling permission, user and activation inheritance, and prove that hierarchical spatial roles are capable of constructing a lattice, which is a means for articulating multi-level security policy and is better suited to controlling information flow security for safety-critical location-aware information systems. Next, constrained SC-RBAC allows expressing various spatial separation-of-duty constraints, location-based cardinality, and temporal constraints to specify the fine-grained spatial semantics that are typical in location-aware systems. Finally, we introduce 9 invariants for the constrained SC-RBAC and prove its basic security theorem. The constrained SC-RBAC provides the foundation for applications in need of constrained spatial-context-aware access control.
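    The core activation rule, that a role becomes usable only while the user's reported position lies inside the role's location domain, can be sketched as follows. The domains, roles, permissions, and coordinates are illustrative assumptions, not the formal model from the paper.

```python
# Hedged sketch of spatial role activation in an SC-RBAC-style policy.
from dataclasses import dataclass

@dataclass
class RectDomain:
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

@dataclass
class SpatialRole:
    name: str
    permissions: frozenset
    domain: RectDomain

def activate_roles(assigned_roles, position):
    """Return the roles active at this position (their permissions become usable)."""
    x, y = position
    return [r for r in assigned_roles if r.domain.contains(x, y)]

ward_nurse = SpatialRole("ward_nurse", frozenset({"read_chart"}), RectDomain(0, 0, 50, 30))
icu_nurse = SpatialRole("icu_nurse", frozenset({"read_chart", "administer_drug"}),
                        RectDomain(40, 0, 60, 30))

active = activate_roles([ward_nurse, icu_nurse], position=(45, 10))
print([r.name for r in active])          # both domains contain (45, 10)
```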

  2. Laplacian embedded regression for scalable manifold regularization.

    Science.gov (United States)

    Chen, Lin; Tsang, Ivor W; Xu, Dong

    2012-06-01

    Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real
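    The ingredients named in the abstract can be sketched numerically: an RBF kernel on the data, a graph Laplacian capturing the manifold, the eigenvectors of that Laplacian with smallest eigenvalues as a projection subspace, and a transformed kernel formed by adding a graph-based term to the original kernel. The specific graph-kernel form used here (inverse of the regularized Laplacian) and all parameters are illustrative assumptions, not the exact LapESVR construction from the paper.

```python
# Sketch of graph-Laplacian machinery for manifold-regularized learning.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))                      # labeled + unlabeled points

def rbf_kernel(X, gamma=0.5):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf_kernel(X)

# Graph Laplacian from an RBF-weighted adjacency (self-loops removed)
W = rbf_kernel(X, gamma=2.0)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

# Subspace spanned by eigenvectors of L with the smallest eigenvalues
eigvals, eigvecs = np.linalg.eigh(L)
U = eigvecs[:, :10]                               # project the decision variable onto these

# One possible graph kernel: inverse of the regularized Laplacian (assumed form)
K_graph = np.linalg.inv(L + 1e-3 * np.eye(len(X)))
K_tilde = K + 0.1 * K_graph                       # original kernel + graph kernel

print(K_tilde.shape, U.shape)
```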

  3. Model and Implementation of Communication Link Management Supporting High Availability

    Institute of Scientific and Technical Information of China (English)

    Luo Juan; Cao Yang; He Zheng; Li Feng

    2004-01-01

    Despite the rapid evolution in all aspects of computer technology, both computer hardware and software are prone to numerous failure conditions. In this paper, we analyze the characteristics of a computer system and the methods of constructing such a system, and propose a communication link management model supporting high availability for network applications, which greatly increases their availability. We then elaborate on the heartbeat/service-detection, fail-over, service take-over, switchback, and error recovery processes of the model. In constructing the communication link, we implemented link management and service take-over under the high-availability requirement, discussed the states and state transitions involved in building the communication link between the hosts, and depicted the message transfer and the starting of timers. Finally, we applied the designed high-availability system to a network billing system and showed how the system was constructed and implemented, satisfying the system requirements.
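    The heartbeat and fail-over cycle described above can be sketched with a standby node that watches heartbeats from the active node and takes over the service after too many are missed. The thresholds, timing, and the service hand-off are simplified assumptions, not the paper's implementation.

```python
# Minimal heartbeat / fail-over sketch.
import time

HEARTBEAT_INTERVAL = 1.0      # seconds between expected heartbeats (assumed)
MISS_THRESHOLD = 3            # missed heartbeats before fail-over (assumed)

class StandbyNode:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.active = False   # False = standby, True = has taken over the service

    def on_heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def check(self):
        missed = (time.monotonic() - self.last_heartbeat) / HEARTBEAT_INTERVAL
        if not self.active and missed >= MISS_THRESHOLD:
            self.take_over()
        return self.active

    def take_over(self):
        # In a real system: bind the service address, replay state, notify peers.
        print("primary presumed failed -> taking over service")
        self.active = True

standby = StandbyNode()
standby.on_heartbeat()                         # heartbeat received from the primary
print("active after fresh heartbeat:", standby.check())
standby.last_heartbeat -= 5                    # simulate a silent primary
print("active after missed heartbeats:", standby.check())
```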

  4. A community college model to support nursing workforce diversity.

    Science.gov (United States)

    Colville, Janet; Cottom, Sherry; Robinette, Teresa; Wald, Holly; Waters, Tomi

    2015-02-01

    Community College of Allegheny County (CCAC), Allegheny Campus, is situated on the North Side of Pittsburgh. The neighborhood is 60% African American. At the time of the Health Resources and Services Administration (HRSA) application, approximately one third of the students admitted to the program were African American, less than one third of whom successfully completed it. With the aid of HRSA funding, CCAC developed a model that significantly improved the success rate of disadvantaged students. Through the formation of a viable cohort, the nursing faculty nurtured success among the most at-risk students. The cohort was supported by a social worker, case managers who were nursing faculty, and tutors. Students formed study groups, actively participated in community activities, and developed leadership skills through participation in the Student Nurse Association of Pennsylvania. This article provides the rationale for the Registered Nurse (RN) Achievement Model, describes the components of RN Achievement, and discusses the outcomes of the initiative.

  5. Concepts to Support HRP Integration Using Publications and Modeling

    Science.gov (United States)

    Mindock, J.; Lumpkins, S.; Shelhamer, M.

    2014-01-01

    Initial efforts are underway to enhance the Human Research Program (HRP)'s identification and support of potential cross-disciplinary scientific collaborations. To increase the emphasis on integration in HRP's science portfolio management, concepts are being explored through the development of a set of tools. These tools are intended to enable modeling, analysis, and visualization of the state of the human system in the spaceflight environment; HRP's current understanding of that state with an indication of uncertainties; and how that state changes due to HRP programmatic progress and design reference mission definitions. In this talk, we will discuss proof-of-concept work performed using a subset of publications captured in the HRP publications database. The publications were tagged in the database with words representing factors influencing health and performance in spaceflight, as well as with words representing the risks HRP research is reducing. Analysis was performed on the publication tag data to identify relationships between factors and between risks. Network representations were then created as one type of visualization of these relationships. This enables future analyses of the structure of the networks based on results from network theory. Such analyses can provide insights into HRP's current human system knowledge state as informed by the publication data. The network structure analyses can also elucidate potential improvements by identifying network connections to establish or strengthen for maximized information flow. The relationships identified in the publication data were subsequently used as inputs to a model captured in the Systems Modeling Language (SysML), which functions as a repository for relationship information to be gleaned from multiple sources. Example network visualization outputs from a simple SysML model were then also created to compare to the visualizations based on the publication data only. We will also discuss ideas for
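    The tag-based relationship analysis mentioned above can be illustrated by building a co-occurrence network: every pair of tags appearing on the same publication gets an edge weighted by the number of shared publications. The tag lists below are invented examples, not records from the HRP publications database.

```python
# Sketch of a factor/risk co-occurrence network from publication tags.
from itertools import combinations
from collections import Counter

publications = [
    {"bone_loss", "exercise", "nutrition"},
    {"bone_loss", "radiation"},
    {"sleep", "performance", "lighting"},
    {"exercise", "performance"},
    {"bone_loss", "exercise"},
]

edge_weights = Counter()
for tags in publications:
    for a, b in combinations(sorted(tags), 2):
        edge_weights[(a, b)] += 1

# Strongest relationships first; these would feed a network visualization or SysML model
for (a, b), w in edge_weights.most_common(5):
    print(f"{a} -- {b}: {w}")
```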

  6. An innovative model of supportive clinical teaching and learning for undergraduate nursing students: the cluster model.

    Science.gov (United States)

    Bourgeois, Sharon; Drayton, Nicola; Brown, Ann-Marie

    2011-03-01

    Students look forward to their clinical practicum to learn within the context of reality nursing. As educators we need to actively develop models of clinical practicum whereby students are supported to engage and learn in the clinical learning environment. The aim of this paper is to describe an innovative model of supportive clinical teaching and learning for undergraduate nursing students as implemented in a large teaching hospital in New South Wales, Australia. The model of supportive clinical teaching and learning situates eight students at a time, across a shift, on one ward, with an experienced registered nurse from the ward specialty, who is employed as the clinical teacher to support nursing students during their one to two week block practicum. Results from written evaluation statements inform the discussion component of the paper for a model that has proved to be successful in this large healthcare facility.

  7. Ecological Footprint Model Using the Support Vector Machine Technique

    Science.gov (United States)

    Ma, Haibo; Chang, Wenjuan; Cui, Guangbai

    2012-01-01

    The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF. These factors are: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports to total GDP), and service intensity (measured by the percentage of services in total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method based on the structural risk minimization principle from statistical learning theory, was constructed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error, which were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance. PMID:22291949
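    The modeling setup (five national indicators in, per-capita EF out) maps directly onto a support vector regression pipeline; a minimal scikit-learn sketch is below. The data are synthetic random numbers purely to make the snippet runnable, and the train/evaluate split is an assumed stand-in for the paper's 123-nation / 24-nation setup.

```python
# SVR sketch for a per-capita ecological footprint model (synthetic data).
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
# Columns: GDP, urbanization, Gini, export share, service share (assumed order)
X = rng.random((123, 5))
y = 0.5 + 3.0 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.1, size=123)  # toy EF values

train, test = slice(0, 99), slice(99, 123)        # hold out 24 "nations" for evaluation
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X[train], y[train])

pred = model.predict(X[test])
print("average absolute error:", mean_absolute_error(y[test], pred))
```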

  8. Scalable Detection and Isolation of Phishing

    NARCIS (Netherlands)

    Moreira Moura, G.C.; Pras, A.

    2009-01-01

    This paper presents a proposal for scalable detection and isolation of phishing. The main ideas are to move the protection from end users towards the network provider and to employ the novel bad neighborhood concept, in order to detect and isolate both phishing e-mail senders and phishing web server

  9. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O'Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  10. Modeling studies in support of the IMHEX MCFC commercialization

    Energy Technology Data Exchange (ETDEWEB)

    Jewulski, J.R.; Resnick, G.L.; Hu, W.C.S.

    1998-07-01

    Performance modeling studies are a necessary and cost-effective element of the IMHEX-MCFC stack commercialization. Technologix Corporation, in cooperation with M-C Power, has developed the algorithms and computer code for two of the models, in addition to modifying the PSI model for applications specific to the IMHEX fuel cell concept. Three performance models support the product development effort: a modified PSI model, a two-dimensional cross-flow cell model, and a three-dimensional stack model. The sizing, number, and location of the inter-coolers in a fuel cell stack are typical model applications. Recently M-C Power modified its stack configuration to cross-flow. The cross-flow allows simplified repeat-parts manufacturing and reduces the risk of gas crossover. The MCFC cross-flow model developed at M-C Power supports heat loss from the stack edges, variable fuel flow rate regions, and variable oxidant flow rate regions (coupled with the optimization module), among other features. Extensive computational experiments were conducted in support of the cross-flow geometry development for the MCFC stack. The oxidant flow distribution optimization was used to mitigate the hardware temperature hot-spot typical of the cross-flow geometry. The hardware temperature hot-spot increases the corrosion rate and electrolyte loss, and leads to deterioration of the long-term MCFC stack performance. Under normal operating conditions, the maximum local temperature of the cell hardware should not exceed 960 K. Mathematical optimization software was applied to find the optimum flow distribution. The minimization of the maximum hardware temperature was defined as the optimization goal. The gas flow rate in each region was selected as an independent variable subject to optimization. In some cases the authors also added the distance between the fuel inlet and the flow region divider to the list of independent variables. The total gas flow rates, inlet gas temperatures and compositions

  11. Model catalytic oxidation studies using supported monometallic and heterobimetallic oxides

    Energy Technology Data Exchange (ETDEWEB)

    Ekerdt, J.G.

    1992-02-03

    This research program is directed toward a more fundamental understanding of the effects of catalyst composition and structure on the catalytic properties of metal oxides. Metal oxide catalysts play an important role in many reactions bearing on the chemical aspects of energy processes. Metal oxides are the catalysts for water-gas shift reactions, methanol and higher alcohol synthesis, isosynthesis, selective catalytic reduction of nitric oxides, and oxidation of hydrocarbons. A key limitation to developing insight into how oxides function in catalytic reactions is in not having precise information of the surface composition under reaction conditions. To address this problem we have prepared oxide systems that can be used to study cation-cation effects and the role of bridging (-O-) and/or terminal (=O) surface oxygen anion ligands in a systematic fashion. Since many oxide catalyst systems involve mixtures of oxides, we selected a model system that would permit us to examine the role of each cation separately and in pairwise combinations. Organometallic molybdenum and tungsten complexes were proposed for use to prepare model systems consisting of isolated monomeric cations, isolated monometallic dimers and isolated bimetallic dimers supported on silica and alumina. The monometallic and bimetallic dimers were to be used as models of more complex mixed-oxide catalysts. Our current program was to develop the systems and use them in model oxidation reactions.

  12. Knowledge model-based decision support system for maize management

    Institute of Scientific and Technical Information of China (English)

    GUO Yinqiao; ZHAO Chuande; WANG Wenxin; LI Cundong

    2007-01-01

    Based on the relationship between crops and circumstances, a dynamic knowledge model for maize management with wide applicability was developed using the system method and mathematical modeling techniques. With soft component characteristics incorporated, a component and digital knowledge model-based decision support system for maize management was established on the Visual C++ platform. This system realizes six major functions: target yield calculation, design of the pre-sowing plan, prediction of regular indices, real-time management control, expert knowledge reference, and system administration. Cases were studied on the target yield knowledge model with data sets that include different eco-sites, yield levels of the last three years, and fertilizer and water management levels. The results indicated that this system overcomes the shortcomings of traditional expert systems and planting patterns, such as site-specific conditions and narrow applicability, and can be used more widely under different conditions and environments. This system provides a scientific knowledge system and a broad decision-making tool for maize management.

  13. Modeling Global Urbanization Supported by Nighttime Light Remote Sensing

    Science.gov (United States)

    Zhou, Y.

    2015-12-01

    Urbanization, a major driver of global change, profoundly impacts our physical and social world, for example, altering carbon cycling and climate. Understanding these consequences for better scientific insights and effective decision-making unarguably requires accurate information on urban extent and its spatial distributions. In this study, we developed a cluster-based method to estimate the optimal thresholds and map urban extents from the nighttime light remote sensing data, extended this method to the global domain by developing a computational method (parameterization) to estimate the key parameters in the cluster-based method, and built a consistent 20-year global urban map series to evaluate the time-reactive nature of global urbanization (e.g. 2000 in Fig. 1). Supported by urban maps derived from nightlights remote sensing data and socio-economic drivers, we developed an integrated modeling framework to project future urban expansion by integrating a top-down macro-scale statistical model with a bottom-up urban growth model. With the models calibrated and validated using historical data, we explored urban growth at the grid level (1-km) over the next two decades under a number of socio-economic scenarios. The derived spatiotemporal information of historical and potential future urbanization will be of great value with practical implications for developing adaptation and risk management measures for urban infrastructure, transportation, energy, and water systems when considered together with other factors such as climate variability and change, and high impact weather events.

  14. Agricultural Model for the Nile Basin Decision Support System

    Science.gov (United States)

    van der Bolt, Frank; Seid, Abdulkarim

    2014-05-01

    To analyze options for increasing food supply in the Nile basin, the Nile Agricultural Model (AM) was developed. The AM includes state-of-the-art descriptions of biophysical, hydrological, and economic processes and realizes a coherent and consistent integration of hydrology, agronomy, and economics. The AM covers both the agro-ecological domain (water, crop productivity) and the economic domain (food supply, demand, and trade) and allows evaluation of the macro-economic and hydrological impacts of scenarios for agricultural development. Starting with the hydrological information from the Nile Basin DSS, the AM calculates the available water for agriculture, the crop production, and the irrigation requirements with the FAO model AquaCrop. With the global commodity trade model MAGNET, scenarios for land development and conversion are evaluated. The AM predicts consequences for trade, food security, and development based on soil and water availability, crop allocation, food demand, and food policy. The model will be used as a decision support tool to contribute to more productive and sustainable agriculture in individual Nile countries and the whole region.

  15. MOLNs: A CLOUD PLATFORM FOR INTERACTIVE, REPRODUCIBLE, AND SCALABLE SPATIAL STOCHASTIC COMPUTATIONAL EXPERIMENTS IN SYSTEMS BIOLOGY USING PyURDME

    Science.gov (United States)

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2017-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948

  16. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Directory of Open Access Journals (Sweden)

    Sven Van Poucke

    With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, the high dimensionality and high complexity of the data involved prevent data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting-edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  17. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Science.gov (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy

    2016-01-01

    With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, the high dimensionality and high complexity of the data involved prevent data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting-edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.
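    As a code-level stand-in (outside RapidMiner) for the kind of predictive process described above, the sketch below relates platelet count and two other features to ICU survival with a cross-validated model. The features and data are synthetic; MIMIC-II access and the Radoop/Hadoop ETL steps are not reproduced here.

```python
# Cross-validated predictive model sketch for a platelet-count / ICU-survival use case.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 1000
platelets = rng.normal(220, 80, n)                 # platelet count (x10^9/L), synthetic
age = rng.normal(65, 12, n)
lactate = rng.normal(2.0, 1.0, n)
X = np.column_stack([platelets, age, lactate])

# Toy outcome: survival probability rises with platelet count, falls with age/lactate
logit = 0.01 * (platelets - 150) - 0.03 * (age - 65) - 0.5 * (lactate - 2.0)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean().round(3))
```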

  18. Urban modeling over Houston in support of SIMMER

    Science.gov (United States)

    Barlage, M. J.; Monaghan, A. J.; Feddema, J. J.; Oleson, K. W.; Brunsell, N. A.; Wilhelmi, O.

    2011-12-01

    Extreme heat is a leading cause of weather-related human mortality in the United States. As global warming patterns continue, researchers anticipate increases in the severity, frequency and duration of extreme heat events, especially in the southern and western U.S. Many cities in these regions may have amplified vulnerability due to their rapidly evolving socioeconomic fabric (for example, growing elderly populations). This raises a series of questions about the increased health risks of urban residents to extreme heat, and about effective means of mitigation and adaptation in present and future climates. We will introduce a NASA-funded project aimed at addressing these questions via the System for Integrated Modeling of Metropolitan Extreme Heat Risk (SIMMER). Through SIMMER, we hope to advance methodology for assessing current and future urban vulnerabilities from the heat waves through the refinement and integration of physical and social science models, and to build local capacity for heat hazard mitigation and climate change adaptation in the public health sector. We will also present results from a series of sensitivity studies over Houston and surrounding area employing a recently-implemented multi-layer urban canopy model (UCM) within the Noah Land Surface Model. The UCM has multiple layers in the atmosphere to explicitly resolve the effects of buildings, and has an indoor-outdoor exchange model that directly interacts with the atmospheric boundary layer. The goal of this work, which supports the physical science component of SIMMER, is to characterize the ill-defined and uncertain parameter space, including building characteristics and spatial organization, in the new multi-layer UCM for Houston, and to assess whether and how this parameter space is sensitive to the choice of urban morphology datasets. Results focus on the seasonal and inter-annual range of both the modeled urban heat island effect and the magnitude of surface energy components and

  19. A Multiple Model Approach to Modeling Based on Fuzzy Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    冯瑞; 张艳珠; 宋春林; 邵惠鹤

    2003-01-01

    A new multiple-model (MM) approach was proposed to model complex industrial processes by using fuzzy support vector machines (F-SVMs). By applying the proposed approach to a pH neutralization titration experiment, the F-SVM MM approach not only provides satisfactory approximation and generalization properties, but also achieves performance superior to the USOCPN multiple-modeling method and to single modeling based on standard SVMs.
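    One way to picture a fuzzy multiple-model scheme is to train one local SVR per operating regime of a nonlinear (pH-titration-like) process and blend their predictions with fuzzy membership weights over the scheduling variable. The membership functions, regime split, and data below are assumptions for illustration, not the paper's F-SVM formulation.

```python
# Hedged sketch of a fuzzy multiple-model combination of local SVR models.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
u = np.sort(rng.uniform(0, 10, 300))               # input (e.g. titrant volume)
y = np.tanh(u - 5) * 3 + rng.normal(scale=0.05, size=u.shape)   # S-shaped "pH" curve

# Two local models for overlapping low and high operating regimes
low = u < 6
high = u > 4
m_low = SVR(kernel="rbf", C=10.0).fit(u[low, None], y[low])
m_high = SVR(kernel="rbf", C=10.0).fit(u[high, None], y[high])

def membership_low(x):                             # simple sigmoidal fuzzy membership
    return 1 / (1 + np.exp(2.0 * (x - 5)))

def predict(x):
    x = np.atleast_1d(x)
    w = membership_low(x)
    return w * m_low.predict(x[:, None]) + (1 - w) * m_high.predict(x[:, None])

print(predict(np.array([2.0, 5.0, 8.0])))
```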

  20. The Controlling Model as Management Support in Decision-Making

    Directory of Open Access Journals (Sweden)

    Berislav Bolfek

    2010-07-01

    …business system to operate successfully and make a profit, which represents the success criterion for each business system on the market. For managing the business result, the management of business systems needs different types of information on which numerous managerial decisions are based. That is why it is necessary to develop and set up a Business System Controlling Model which would have the capability to transform available data into the information necessary for managerial decision-making. The above-mentioned model is prepared in such a way that the whole process of transforming the data into the required information is carried out in two interconnected steps which have to be made in every single case and situation. However, there are a certain number of different activities within each step which do not have to be performed in every case. The forecast, presented in the form of the Profit Forecast Procedure and the Liquidity Forecast Procedure, represents the essence of the Business System Controlling Model. The Business System Controlling Model developed and set out in this paper should enable the business system management to evaluate future business activities, in addition to monitoring past and present business activities. It is precisely the evaluation of future business activities that, in today's conditions of greater market globalization and internationalization, should help the business system management to control the business result in a better and easier way. In this way, the Business System Controlling Model represents a new tool in the sense of management support in making various business decisions.

  1. Green Transport Balanced Scorecard Model with Analytic Network Process Support

    Directory of Open Access Journals (Sweden)

    David Staš

    2015-11-01

    In recent decades, the performance of economic and non-economic activities has required them to be friendly with the environment. Transport is one of the areas having considerable potential within this scope. The main assumption for achieving ambitious green goals is an effective green transport evaluation system. However, these systems are researched only sporadically from the industrial company and supply chain perspective. The aim of the paper is to design a conceptual framework for creating Green Transport (GT) Balanced Scorecard (BSC) models from the viewpoint of industrial companies and supply chains using an appropriate multi-criteria decision making method. The models should allow green transport performance evaluation and support effective implementation of green transport strategies. Since the performance measures used in Balanced Scorecard models are interdependent, the Analytic Network Process (ANP) was used as the appropriate multi-criteria decision making method. The verification of the designed conceptual framework was performed on a real supply chain of the European automotive industry.

  2. Modelling Framework to Support Decision-Making in Manufacturing Enterprises

    Directory of Open Access Journals (Sweden)

    Tariq Masood

    2013-01-01

    Systematic model-driven decision-making is crucial to design, engineer, and transform manufacturing enterprises (MEs). Choosing and applying the best philosophies and techniques is challenging, as most MEs deploy complex and unique configurations of process-resource systems and seek economies of scope and scale in respect of changing and distinctive product flows. This paper presents a novel systematic enhanced integrated modelling framework to facilitate the transformation of MEs, which is centred on CIMOSA. Application of the new framework in an automotive industrial case study is also presented. The following new contributions to knowledge are made: (1) an innovative structured framework that can support various decisions in design, optimisation, and control to reconfigure MEs; (2) an enriched and generic process modelling approach with the capability to represent both static and dynamic aspects of MEs; and (3) an automotive industrial case application showing benefits in terms of reduced lead time and cost with improved responsiveness of the process-resource system, with a special focus on PPC. It is anticipated that the new framework is not limited to the automotive industry but has a wider scope of application. Therefore, it would be interesting to extend its testing with different configurations and decision-making levels.

  3. hQT*: A Scalable Distributed Data Structure for High-Performance Spatial Access

    NARCIS (Netherlands)

    Karlsson, J.S.

    1998-01-01

    Spatial data storage stresses the capability of conventional DBMSs. We present a scalable distributed data structure, hQT*, which offers support for efficient spatial point and range queries using order-preserving hashing. It is designed to deal with skewed data and extends results obtained with sca

  4. Simplex-stochastic collocation method with improved scalability

    Energy Technology Data Exchange (ETDEWEB)

    Edeling, W.N., E-mail: W.N.Edeling@tudelft.nl [Arts et Métiers ParisTech, DynFluid laboratory, 151 Boulevard de l' Hopital, 75013 Paris (France); Delft University of Technology, Faculty of Aerospace Engineering, Kluyverweg 2, Delft (Netherlands); Dwight, R.P., E-mail: R.P.Dwight@tudelft.nl [Delft University of Technology, Faculty of Aerospace Engineering, Kluyverweg 2, Delft (Netherlands); Cinnella, P., E-mail: P.Cinnella@ensam.eu [Arts et Métiers ParisTech, DynFluid laboratory, 151 Boulevard de l' Hopital, 75013 Paris (France)

    2016-04-01

    The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify bottlenecks, and to improve upon this bad scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method in the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.

  5. Aggregation of Environmental Model Data for Decision Support

    Science.gov (United States)

    Alpert, J. C.

    2013-12-01

    …model output offering access to probability and calibrating information for real-time decision making. The aggregation content server reports over ensemble component and forecast time, in addition to the other data dimensions of vertical layer and position, for each variable. The unpacking, organization, and reading of many binary packed files is accomplished most efficiently on the server, while weather element event probability calculations, the thresholds for more accurate decision support, or display remain with the client. Our goal is to reduce uncertainty for variables of interest, e.g., those of agricultural importance. The weather service operational GFS model ensemble and short-range ensemble forecasts can make skillful probability forecasts to alert users if and when their selected weather events will occur. How this framework operates and how it can be implemented using existing NOMADS content services and applications is described.
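    The client-side event-probability calculation mentioned above amounts to a one-liner once the ensemble members are in hand: the probability is the fraction of members exceeding a user-chosen threshold. The ensemble values and the 10 mm precipitation threshold below are invented for illustration.

```python
# Ensemble exceedance probability sketch.
import numpy as np

# 21 ensemble members x 8 forecast times of accumulated precipitation (mm), synthetic
rng = np.random.default_rng(7)
precip = rng.gamma(shape=2.0, scale=3.0, size=(21, 8))

threshold_mm = 10.0
event_probability = (precip > threshold_mm).mean(axis=0)   # probability per forecast time

for t, p in enumerate(event_probability):
    print(f"forecast hour {6 * (t + 1):3d}: P(precip > {threshold_mm} mm) = {p:.2f}")
```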

  6. Progressor: social navigation support through open social student modeling

    Science.gov (United States)

    Hsiao, I.-Han; Bakalov, Fedor; Brusilovsky, Peter; König-Ries, Birgitta

    2013-06-01

    The increased volumes of online learning content have produced two problems: how to help students to find the most appropriate resources and how to engage them in using these resources. Personalized and social learning have been suggested as potential ways to address these problems. Our work presented in this paper combines the ideas of personalized and social learning in the context of educational hypermedia. We introduce Progressor, an innovative Web-based tool based on the concepts of social navigation and open student modeling that helps students to find the most relevant resources in a large collection of parameterized self-assessment questions on Java programming. We have evaluated Progressor in a semester-long classroom study, the results of which are presented in this paper. The study confirmed the impact of personalized social navigation support provided by the system in the target context. The interface encouraged students to explore more topics attempting more questions and achieving higher success rates in answering them. A deeper analysis of the social navigation support mechanism revealed that the top students successfully led the way to discovering most relevant resources by creating clear pathways for weaker students.

  7. Simulation modeling of supported lipid membranes - a review.

    Science.gov (United States)

    Hirtz, Michael; Kumar, Naresh; Chi, Lifeng

    2014-03-01

    Lipid membranes are of great importance for many biological systems and biotechnological applications. One method to gain a profound understanding of the dynamics in lipid membranes and their interaction with other system components is by modeling these systems by computer simulations. Many different approaches have been undertaken in this endeavor that have led to molecular level insights into the underlying mechanisms of several experimental observations and biological processes with an extremely high temporal resolution. As compared to the free-standing lipid bilayers, there are fewer simulation studies addressing the systems of supported lipid membranes. Nevertheless, these have significantly enhanced our understanding of the behavior of lipid layers employed in applications spanning from biosensors to drug delivery and for biological processes such as the breathing cycle of lung surfactants. In this review, we give an account of the state of the art of methods and applications of the simulations of supported lipid bilayers, interfacial membranes at the air/water interface and on solid surfaces.

  8. NEAR-REAL-TIME PARALLEL ETL+Q FOR AUTOMATIC SCALABILITY IN BIGDATA

    Directory of Open Access Journals (Sweden)

    Pedro Martins

    2016-01-01

    …(Extract, transform, load and querying) process of data warehouses. In general, data loading, transformation, and integration are heavy tasks that are performed only periodically during small fixed time windows. We propose an approach to enable the automatic scalability and freshness of any data warehouse and ETL+Q process for near-real-time BigData scenarios. A general framework for testing the proposed system was implemented, supporting parallelization solutions for each part of the ETL+Q pipeline. The results show that the proposed system is capable of handling scalability to provide the desired processing speed.

  9. Experiments and Modeling in Support of Generic Salt Repository Science

    Energy Technology Data Exchange (ETDEWEB)

    Bourret, Suzanne Michelle [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stauffer, Philip H. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Weaver, Douglas James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Caporuscio, Florie Andre [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Otto, Shawn [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boukhalfa, Hakim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Jordan, Amy B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chu, Shaoping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Zyvoloski, George Anthony [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Johnson, Peter Jacob [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-19

    Salt is an attractive material for the disposition of heat generating nuclear waste (HGNW) because of its self-sealing, viscoplastic, and reconsolidation properties (Hansen and Leigh, 2012). The rate at which salt consolidates and the properties of the consolidated salt depend on the composition of the salt, including its content of accessory minerals and moisture, and the temperature under which consolidation occurs. Physicochemical processes, such as mineral hydration/dehydration, salt dissolution, and precipitation, play a significant role in defining the rate of salt structure changes. Understanding the behavior of these complex processes is paramount when considering safe design for disposal of heat-generating nuclear waste (HGNW) in salt formations, so experimentation and modeling are underway to characterize these processes. This report presents experiments and simulations in support of the DOE-NE Used Fuel Disposition Campaign (UFDC) for development of drift-scale, in-situ field testing of HGNW in salt formations.

  10. Scalability issues affecting the design of a dense linear algebra library

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J. (Univ. of Tennessee, Knoxville, TN (United States) Oak Ridge National Lab., TN (United States). Mathematical Sciences Section); Geijn, R.A. van de (Univ. of Texas, Austin, TX (United States). Dept. of Computer Sciences); Walker, D.W. (Oak Ridge National Lab., TN (United States). Mathematical Sciences Section)

    1994-09-01

    This paper discusses the scalability of Cholesky, LU, and QR factorization routines on MIMD distributed memory concurrent computers. These routines form part of the ScaLAPACK mathematical software library that extends the widely used LAPACK library to run efficiently on scalable concurrent computers. To ensure good scalability and performance, the ScaLAPACK routines are based on block-partitioned algorithms that reduce the frequency of data movement between different levels of the memory hierarchy, and particularly between processors. The block-cyclic data distribution, which is used in all three factorization algorithms, is described. An outline of the sequential and parallel block-partitioned algorithms is given. Approximate models of the algorithms' performance are presented to indicate which factors in the design of the algorithms have an impact upon scalability. These models are compared with timing results on a 128-node Intel iPSC/860 hypercube. It is shown that the routines are highly scalable on this machine for problems that occupy more than about 25% of the memory on each processor, and that the measured timings are consistent with the performance model. The contribution of this paper goes beyond reporting experience: the implementations are available in the public domain.
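    The block-cyclic distribution at the heart of these routines is easy to state as an index mapping: a global index is chopped into blocks, blocks are dealt out round-robin to processes, and the local index follows from the block's position on its owner. The block size and process-grid shape below are arbitrary example values.

```python
# Illustration of the two-dimensional block-cyclic distribution used by ScaLAPACK.
def block_cyclic(global_index: int, block: int, nprocs: int):
    """Return (owning process, local index) for one dimension."""
    block_id = global_index // block
    proc = block_id % nprocs
    local_block = block_id // nprocs
    local_index = local_block * block + global_index % block
    return proc, local_index

# 2-D mapping: apply the 1-D rule independently to rows and columns
def owner_2d(i, j, mb=32, nb=32, prow=2, pcol=3):
    p, li = block_cyclic(i, mb, prow)
    q, lj = block_cyclic(j, nb, pcol)
    return (p, q), (li, lj)

print(owner_2d(100, 70))   # which process holds element (100, 70), and where locally
```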

  11. Accounting Fundamentals and the Variation of Stock Price: Factoring in the Investment Scalability

    Directory of Open Access Journals (Sweden)

    Sumiyana Sumiyana

    2010-05-01

    Full Text Available This study develops a new return model with respect to accounting fundamentals. The new return model is based on Chen and Zhang (2007). This study takes into account the investment scalability information. Specifically, this study splits the scale of the firm's operations into short-run and long-run investment scalabilities. We document that five accounting fundamentals explain the variation of annual stock return. The factors comprise book value, earnings yield, short-run and long-run investment scalabilities, and growth opportunities, and associate positively with stock price. The remaining factor, the pure interest rate, is negatively related to annual stock return. This study finds that inducing short-run and long-run investment scalabilities into the model improves the degree of association. In other words, they have value relevance. Finally, this study suggests that basic trading strategies will improve if investors revert to the accounting fundamentals. Keywords: accounting fundamentals; book value; earnings yield; growth opportunities; short-run and long-run investment scalabilities; trading strategy; value relevance

  12. Multi-Pulse Based Code Excited Linear Predictive Speech Coder with Fine Granularity Scalability for Tonal Language

    Directory of Open Access Journals (Sweden)

    Suphattharachai Chomphan

    2010-01-01

    Full Text Available Problem statement: The flexible bit-rate speech coder plays an important role in modern speech communication. The MP-CELP speech coder, a candidate for the MPEG-4 natural speech coder, supports a flexible and wide bit-rate range; however, fine scalability had not been included, so supporting finer scalability of the coding rate was studied in this work. Approach: Based on MP-CELP speech coding with the HPDR technique, Fine Granularity Scalability (FGS) was introduced by adjusting the amount of transmitted fixed-excitation information. The FGS feature aims at changing the bit rate of the conventional coding more finely and more smoothly. Results: Through performance analysis and computer simulation, the scalability of the MP-CELP coding was shown to improve over the conventional scalable MP-CELP. The HPDR technique is also applied to the MP-CELP for tonal languages; the coder supports core coding rates of 4.2, 5.5 and 7.5 kbps as well as additional scaled bit rates. Conclusion: The core coder with the high pitch delay resolution technique and an adaptive codebook for tonal speech quality improvement has been developed, and the FGS brings about further efficient scalability.

  13. MOSDEN: A Scalable Mobile Collaborative Platform for Opportunistic Sensing Applications

    Directory of Open Access Journals (Sweden)

    Prem Prakash Jayaraman

    2014-05-01

    Full Text Available Mobile smartphones along with embedded sensors have become an efficient enabler for various mobile applications, including opportunistic sensing. The hi-tech advances in smartphones are opening up a world of possibilities. This paper proposes a mobile collaborative platform called MOSDEN that enables and supports opportunistic sensing at run time. MOSDEN captures and shares sensor data across multiple apps, smartphones and users. MOSDEN supports the emerging trend of separating sensors from application-specific processing, storing and sharing. MOSDEN promotes reuse and re-purposing of sensor data, hence reducing the effort in developing novel opportunistic sensing applications. MOSDEN has been implemented on Android-based smartphones and tablets. Experimental evaluations validate the scalability and energy efficiency of MOSDEN and its suitability for real-world applications. The results of the evaluation and lessons learned are presented and discussed in this paper.

  14. Supporting the European Water Framework Directive by enhancing the credibility of modelling studies: the HarmoniQuA Modelling Support Tool (MoST)

    NARCIS (Netherlands)

    Old, G.H.; Packman, J.C.; Scholten, H.

    2005-01-01

    This paper aims to illustrate how the HarmoniQuA modelling Support Tool (MoST) supports modelling for the EU Water Framework Directive (WFD). More specifically the objectives are to: … Present an overview of the European Water Framework Directive; … Identify where computer models are likely to be us

  15. Supporting scalable Bayesian networks using configurable discretizer actuators

    CSIR Research Space (South Africa)

    Osunmakinde, I

    2009-04-01

    Full Text Available Bayesian Networks using Configurable Discretizer Actuators Isaac Osunmakinde, SMIEEE and Antoine Bagula Department of Computer Science, Faculty of Sciences, University of Cape Town, 18 University Avenue, Rhodes Gift, 7707 Rondebosch, Cape Town, South... and limited memory space may crash during these operations. This affects business and research deliveries, and may hinder the growing usage of Bayesian networks in industries that keep massive datasets to build intelligent systems. From our practical...

  16. RIPPLE: Scalable Medical Telemetry System for Supporting Combat Rescue

    Science.gov (United States)

    2014-01-09

    utilizing 802.15.4 [1] radios and IPv6 over Low power Wireless Personal Area Networks (6LoWPAN) was chosen. Table 1 summarizes the requirements...standard that provides an IPv6 adaptation layer for low power communication devices, particularly IEEE 802.15.4 devices. The result of employing this...Std 802.15.4-2006 (Revision of IEEE Std 802.15.4-2003) , 2006. [2] G. Montenegro, N. Kushalnagar, J. Hui and D. Culler, "Transmission of IPv6

  17. SIMULATION MODEL FOR DESIGN SUPPORT OF INFOCOMM REDUNDANT SYSTEMS

    Directory of Open Access Journals (Sweden)

    V. A. Bogatyrev

    2016-09-01

    Full Text Available Subject of Research. The paper deals with the effectiveness of multipath transfer of request copies through the network and their redundant service, without resorting to laborious analytical modeling. A model and support tools for the design of highly reliable distributed systems based on simulation modeling have been created. Method. Many variants of organizing the service and delivery of requests through the network to the query servers are formed and analyzed, including options for redundant service and redundant delivery of request copies to the servers. The choice of variants for the distribution and service of requests takes into account how critical a query is to the time it spends in the system. A request is considered successful if at least one of its copies is delivered without error to a working server that is ready to service requests received through the network, and is completed within the set time. Efficiency analysis of the redundant transmission and service of requests is based on a model built in the AnyLogic 7 simulation environment. Main Results. Simulation experiments based on the proposed models have shown the effectiveness of redundant transmission of copies of queries (packets) to the servers in the cluster through multiple paths, with redundant service of request copies by a group of servers in the cluster. It is shown that this solution increases the probability of timely execution of at least one copy of the request. We have evaluated the efficiency of destroying outdated request copies in the queues of network nodes and the cluster, and analyzed options for network implementation of multipath transfer of request copies to the servers in the cluster over disjoint paths, possibly differing in the number of their constituent nodes. Practical Relevance. The proposed simulation models can be used when selecting the optimal
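
    To make the "at least one copy succeeds" criterion above concrete, here is a small Monte Carlo sketch, not the AnyLogic model from the paper; the exponential delay and service distributions, the deadline and the copy count are all illustrative assumptions.

```python
import random

def success_probability(k=3, deadline=1.0, trials=100_000,
                        mean_net_delay=0.3, mean_service=0.4):
    """Estimate P(at least one of k request copies is delivered and served by the deadline).

    Network and service delays are drawn from illustrative exponential distributions;
    a real study would substitute measured or assumed distributions.
    """
    ok = 0
    for _ in range(trials):
        copies = (random.expovariate(1 / mean_net_delay) +
                  random.expovariate(1 / mean_service) for _ in range(k))
        if min(copies) <= deadline:
            ok += 1
    return ok / trials

if __name__ == "__main__":
    for k in (1, 2, 3):
        print(k, "copies:", round(success_probability(k), 3))
```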

  18. Drifting model approach to modeling based on weighted support vector machines

    Institute of Scientific and Technical Information of China (English)

    冯瑞; 宋春林; 邵惠鹤

    2004-01-01

    This paper proposes a novel drifting modeling (DM) method. Briefly, we first employ an improved SVM algorithm named weighted support vector machines (W_SVMs), which is suitable for local learning, and then propose the DM method based on this algorithm. By applying the proposed modeling method to a Fluidized Catalytic Cracking Unit (FCCU), the simulation results show that the proposed approach is superior to a global modeling method based on standard SVMs.
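
    The W_SVMs algorithm itself is not reproduced in this record; the sketch below only illustrates the general idea of weighting training samples in a support vector regressor so that recent operating data dominate, using scikit-learn's sample_weight on synthetic drifting data.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic drifting process: the input-output relation changes slowly over time.
rng = np.random.default_rng(0)
t = np.arange(500)
X = rng.uniform(-1, 1, size=(500, 2))
y = (1 + 0.002 * t) * X[:, 0] - 0.5 * X[:, 1] + 0.05 * rng.standard_normal(500)

# Exponentially decaying weights emphasise recent samples (local learning).
weights = np.exp(-(t[-1] - t) / 100.0)

model = SVR(kernel="rbf", C=10.0)
model.fit(X, y, sample_weight=weights)
print("prediction at the latest operating point:", model.predict(X[-1:]))
```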

  19. A bio-inspired image coder with temporal scalability

    CERN Document Server

    Masmoudi, Khaled; Kornprobst, Pierre

    2011-01-01

    We present a novel bio-inspired and dynamic coding scheme for static images. Our coder aims at reproducing the main steps of visual stimulus processing in the mammalian retina, taking into account its time behavior. The main novelty of this work is to show how to exploit the time behavior of the retina cells to ensure, in a simple way, scalability and bit allocation. To do so, our main source of inspiration is the biologically plausible retina model called Virtual Retina. Following a similar structure, our model has two stages. The first stage is an image transform performed by the outer layers of the retina; here it is modelled by filtering the image with a bank of difference-of-Gaussians filters with time delays. The second stage is a time-dependent analog-to-digital conversion performed by the inner layers of the retina. By design, our coder enables scalability and bit allocation across time. Also, compared to the JPEG standards, our decoded images do not show annoying art...
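
    As a toy illustration of the first stage described above, the sketch below filters an image with a small bank of difference-of-Gaussians filters and tags each band with a transmission delay; the scales and delays are invented for illustration and are not the Virtual Retina parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bank(image, scales=((1, 2), (2, 4), (4, 8)), delays=(0, 1, 2)):
    """Return (delay, band) pairs: coarse bands released first, finer detail later.

    Each band is the difference of two Gaussian blurs; sending bands in order of
    their delay gives a crude form of temporal scalability.
    """
    bands = []
    for (s_center, s_surround), delay in zip(scales, delays):
        band = gaussian_filter(image, s_center) - gaussian_filter(image, s_surround)
        bands.append((delay, band))
    return bands

if __name__ == "__main__":
    img = np.random.rand(64, 64)           # stand-in for a real image
    for delay, band in dog_bank(img):
        print(f"t={delay}: band energy {np.abs(band).sum():.2f}")
```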

  20. Improving Medication Adherence in a Regional Healthcare Information Exchange using a Scalable, Claims-Driven, and Service-Oriented Approach.

    Science.gov (United States)

    Del Fiol, Guilherme; Kawamoto, Kensaku; Lapointe, Nancy M Allen; Eisenstein, Eric L; Anstrom, Kevin J; Wood, Laura L; Lobach, David F

    2010-11-13

    Evidence-based pharmacotherapy is a central aspect of optimal patient care for many chronic conditions. However, medication non-adherence frequently inhibits the attainment of optimal pharmacotherapy regimens. In this study, we designed, developed, and implemented a multifaceted clinical decision support (CDS) intervention that supports evidence-based pharmacotherapy and enhanced medication adherence through the use of a scalable, claims-driven, and service-oriented approach. The intervention includes a medication management report and a low adherence alert based on thirteen evidence-based pharmacotherapy rules for seven chronic conditions. Reports and alerts are delivered to primary care clinics and care managers that participate in a healthcare information exchange in North Carolina. The resulting system architecture may enable this CDS intervention to be widely disseminated to healthcare networks through an open-source model.

  1. Improving Medication Adherence in a Regional Healthcare Information Exchange using a Scalable, Claims-Driven, and Service-Oriented Approach

    Science.gov (United States)

    Del Fiol, Guilherme; Kawamoto, Kensaku; LaPointe, Nancy M Allen; Eisenstein, Eric L; Anstrom, Kevin J; Wood, Laura L; Lobach, David F

    2010-01-01

    Evidence-based pharmacotherapy is a central aspect of optimal patient care for many chronic conditions. However, medication non-adherence frequently inhibits the attainment of optimal pharmacotherapy regimens. In this study, we designed, developed, and implemented a multifaceted clinical decision support (CDS) intervention that supports evidence-based pharmacotherapy and enhanced medication adherence through the use of a scalable, claims-driven, and service-oriented approach. The intervention includes a medication management report and a low adherence alert based on thirteen evidence-based pharmacotherapy rules for seven chronic conditions. Reports and alerts are delivered to primary care clinics and care managers that participate in a healthcare information exchange in North Carolina. The resulting system architecture may enable this CDS intervention to be widely disseminated to healthcare networks through an open-source model. PMID:21346956

  2. ISMuS: interactive, scalable, multimedia streaming platform

    Science.gov (United States)

    Cha, Jihun; Kim, Hyun-Cheol; Jeong, Seyoon; Kim, Kyuheon; Patrikakis, Charalampos; van der Schaar, Mihaela

    2005-08-01

    Technical evolutions in the field of information technology have changed many aspects of industry and of human life. Internet and broadcasting technologies act as core ingredients of this revolution. Various new services that were never possible before are now available to the general public by utilizing these technologies. Multimedia service over IP networks has become one of the most easily accessible services. Technical advances in Internet services, the provision of constantly increasing network bandwidth, and the evolution of multimedia technologies have caused the demand for multimedia streaming services to increase explosively. With this increasing demand, the Internet has become deluged with multimedia traffic. Although multimedia streaming services have become indispensable, the quality of a multimedia service over the Internet cannot be technically guaranteed. Recently, users demand multimedia services whose quality is competitive with the traditional TV broadcasting service, with additional functionalities such as interactivity, scalability, and adaptability. Multimedia that comprises these ancillary functionalities is often called richmedia. In order to satisfy the aforementioned requirements, the Interactive Scalable Multimedia Streaming (ISMuS) platform was designed and developed. In this paper, the architecture, implementation, and additional functionalities of the ISMuS platform are presented. The presented platform is capable of providing user interactions based on MPEG-4 Systems technology [1] and supporting efficient multimedia distribution through an overlay network technology. Loaded with feature-rich technologies, the platform can serve both on-demand and broadcast-like richmedia services.

  3. Facilitating Image Search With a Scalable and Compact Semantic Mapping.

    Science.gov (United States)

    Wang, Meng; Li, Weisheng; Liu, Dong; Ni, Bingbing; Shen, Jialie; Yan, Shuicheng

    2015-08-01

    This paper introduces a novel approach to facilitating image search based on a compact semantic embedding. A novel method is developed to explicitly map concepts and image contents into a unified latent semantic space for the representation of semantic concept prototypes. Then, a linear embedding matrix is learned that maps images into the semantic space, such that each image is closer to its relevant concept prototype than to other prototypes. In our approach, query keywords are equated with semantic concepts, and the images mapped into the vicinity of the corresponding prototype are retrieved. In addition, a computationally efficient method is introduced to incorporate new semantic concept prototypes into the semantic space by updating the embedding matrix. This improves the scalability of the method and allows it to be applied to dynamic image repositories. Therefore, the proposed approach not only narrows the semantic gap but also supports an efficient image search process. We have carried out extensive experiments on various cross-modality image search tasks over three widely used benchmark image datasets. Results demonstrate the superior effectiveness, efficiency, and scalability of our proposed approach.

  4. Scalable Atomistic Simulation Algorithms for Materials Research

    Directory of Open Access Journals (Sweden)

    Aiichiro Nakano

    2002-01-01

    Full Text Available A suite of scalable atomistic simulation programs has been developed for materials research based on space-time multiresolution algorithms. Design and analysis of parallel algorithms are presented for molecular dynamics (MD) simulations and quantum-mechanical (QM) calculations based on the density functional theory. Performance tests have been carried out on 1,088-processor Cray T3E and 1,280-processor IBM SP3 computers. The linear-scaling algorithms have enabled 6.44-billion-atom MD and 111,000-atom QM calculations on 1,024 SP3 processors with parallel efficiency well over 90%. The production-quality programs also feature wavelet-based computational-space decomposition for adaptive load balancing, space-filling-curve-based adaptive data compression with a user-defined error bound for scalable I/O, and octree-based fast visibility culling for immersive and interactive visualization of massive simulation data.

  5. Scalable Noise Estimation with Random Unitary Operators

    CERN Document Server

    Emerson, J; Zyczkowski, K; Emerson, Joseph; Alicki, Robert; Zyczkowski, Karol

    2005-01-01

    We describe a scalable stochastic method for the experimental measurement of generalized fidelities characterizing the accuracy of the implementation of a coherent quantum transformation. The method is based on the motion reversal of random unitary operators. In the simplest case our method enables direct estimation of the average gate fidelity. The more general fidelities are characterized by a universal exponential rate of fidelity loss. In all cases the measurable fidelity decrease is directly related to the strength of the noise affecting the implementation, quantified by the trace of the superoperator describing the non-unitary dynamics. While the scalability of our stochastic protocol makes it most relevant in large Hilbert spaces (when quantum process tomography is infeasible), our method should be immediately useful for evaluating the degree of control that is achievable in any prototype quantum processing device. By varying over different experimental arrangements and error-correction strategies a...

  6. Scalable noise estimation with random unitary operators

    Energy Technology Data Exchange (ETDEWEB)

    Emerson, Joseph [Perimeter Institute for Theoretical Physics, Waterloo, ON (Canada); Alicki, Robert [Institute of Theoretical Physics and Astrophysics, University of Gdansk, Wita Stwosza 57, PL 80-952 Gdansk (Poland); Zyczkowski, Karol [Perimeter Institute for Theoretical Physics, Waterloo, ON (Canada)

    2005-10-01

    We describe a scalable stochastic method for the experimental measurement of generalized fidelities characterizing the accuracy of the implementation of a coherent quantum transformation. The method is based on the motion reversal of random unitary operators. In the simplest case our method enables direct estimation of the average gate fidelity. The more general fidelities are characterized by a universal exponential rate of fidelity loss. In all cases the measurable fidelity decrease is directly related to the strength of the noise affecting the implementation, quantified by the trace of the superoperator describing the non-unitary dynamics. While the scalability of our stochastic protocol makes it most relevant in large Hilbert spaces (when quantum process tomography is infeasible), our method should be immediately useful for evaluating the degree of control that is achievable in any prototype quantum processing device. By varying over different experimental arrangements and error-correction strategies, additional information about the noise can be determined.

  7. Conscientiousness at the workplace: Applying mixture IRT to investigate scalability and predictive validity

    NARCIS (Netherlands)

    Egberink, I.J.L.; Meijer, R.R.; Veldkamp, Bernard P.

    2010-01-01

    Mixture item response theory (IRT) models have been used to assess multidimensionality of the construct being measured and to detect different response styles for different groups. In this study a mixture version of the graded response model was applied to investigate scalability and predictive

  8. Conscientiousness in the workplace : Applying mixture IRT to investigate scalability and predictive validity

    NARCIS (Netherlands)

    Egberink, I.J.L.; Meijer, R.R.; Veldkamp, B.P.

    Mixture item response theory (IRT) models have been used to assess multidimensionality of the construct being measured and to detect different response styles for different groups. In this study a mixture version of the graded response model was applied to investigate scalability and predictive

  9. Conscientiousness in the workplace : Applying mixture IRT to investigate scalability and predictive validity

    NARCIS (Netherlands)

    Egberink, I.J.L.; Meijer, R.R.; Veldkamp, B.P.

    2010-01-01

    Mixture item response theory (IRT) models have been used to assess multidimensionality of the construct being measured and to detect different response styles for different groups. In this study a mixture version of the graded response model was applied to investigate scalability and predictive vali

  10. Conscientiousness in the workplace: Applying mixture IRT to investigate scalability and predictive validity

    NARCIS (Netherlands)

    Egberink, Iris J.L.; Meijer, Rob R.; Veldkamp, Bernard P.

    2010-01-01

    Mixture item response theory (IRT) models have been used to assess multidimensionality of the construct being measured and to detect different response styles for different groups. In this study a mixture version of the graded response model was applied to investigate scalability and predictive vali

  11. Scalable descriptive and correlative statistics with Titan.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, David C.; Pebay, Philippe Pierre

    2008-12-01

    This report summarizes the existing statistical engines in VTK/Titan and presents the parallel versions thereof which have already been implemented. The ease of use of these parallel engines is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; this theoretical property is then verified with test runs that demonstrate optimal parallel speed-up with up to 200 processors.
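
    The Titan/VTK engines themselves are not shown in this record; the sketch below only illustrates why descriptive statistics parallelize well, using the standard pairwise merge of per-process (count, mean, M2) partial results.

```python
def partial_stats(values):
    """Compute (count, mean, M2) for one process's share of the data (Welford update)."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in values:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return n, mean, m2

def merge(a, b):
    """Combine two partial results using the pairwise (Chan et al.) formula."""
    n_a, mean_a, m2_a = a
    n_b, mean_b, m2_b = b
    n = n_a + n_b
    delta = mean_b - mean_a
    mean = mean_a + delta * n_b / n
    m2 = m2_a + m2_b + delta * delta * n_a * n_b / n
    return n, mean, m2

if __name__ == "__main__":
    data = list(range(1000))
    left, right = partial_stats(data[:500]), partial_stats(data[500:])
    n, mean, m2 = merge(left, right)
    print(mean, m2 / (n - 1))   # mean and sample variance of the full data set
```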

  12. Scalable services for massively multiplayer online games

    OpenAIRE

    Veron, Maxime Pierre Andre

    2015-01-01

    Massively Multi-player Online Games (MMOGs) aim at gathering an infinite number of players within the same virtual universe. Yet all existing MMOGs rely on centralized client/server architectures which impose a limit on the maximum number of players (avatars) and resources that can coexist in any given virtual universe. This thesis aims at proposing solutions to improve the scalability of MMOGs. There are many variants of MMOGs, like role playing games (MMORPGs), first-person shooters (MMOFPSs), an...

  13. Scalable TAGS

    Institute of Scientific and Technical Information of China (English)

    闵帆; 张君雁; 杨国纬

    2003-01-01

    In a distributed Web server system where tasks are unpreemptible, the most important issue for improving quality of service (QoS) is how to realize fairness and reduce the average slowdown. In this paper we present an algorithm named Scalable TAGS, obtained by integrating the Central Queue algorithm and Task Assignment by Guessing Size (TAGS), together with its performance analysis, a system parameter setting algorithm subject to a fairness requirement, and an optimal grouping method.
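
    The Scalable TAGS algorithm itself is not reproduced here; the toy sketch below only illustrates the underlying TAGS idea referred to above: each host runs a task up to a cutoff and, if the task exceeds it, the task is killed and restarted from scratch at the next host. The cutoffs and the task-size distribution are illustrative assumptions.

```python
import random

def tags_run(task_size, cutoffs=(1.0, 4.0, float("inf"))):
    """Return total time spent on a task under a TAGS-style policy.

    The task is restarted from scratch at each host whose cutoff it exceeds, so work
    done at earlier hosts is wasted; the last host must have an infinite cutoff.
    """
    total = 0.0
    for cutoff in cutoffs:
        if task_size <= cutoff:
            return total + task_size
        total += cutoff                 # ran up to the cutoff, then killed
    raise RuntimeError("last host must have an infinite cutoff")

if __name__ == "__main__":
    sizes = [random.paretovariate(1.5) for _ in range(10_000)]   # heavy-tailed task sizes
    overheads = [tags_run(s) / s for s in sizes]
    print("mean restart overhead factor:", sum(overheads) / len(overheads))
```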

  14. Architecture Knowledge for Evaluating Scalable Databases

    Science.gov (United States)

    2015-01-16

    Designing massively scalable, highly available big data systems is an immense challenge for software architects. Big...features by software architects. QuABaseBD links the taxonomy to general quality attribute scenarios and design tactics for big data systems. This...evolving design space for an architect to navigate. Architects must carefully compare candidate database technologies and features and select platforms

  15. DISP: Optimizations towards Scalable MPI Startup

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Huansong [Florida State University, Tallahassee; Pophale, Swaroop S [ORNL; Gorentla Venkata, Manjunath [ORNL; Yu, Weikuan [Florida State University, Tallahassee

    2016-01-01

    Despite the popularity of MPI for high performance computing, the startup of MPI programs faces a scalability challenge as both the execution time and memory consumption increase drastically at scale. We have examined this problem using the collective modules of Cheetah and Tuned in Open MPI as representative implementations. Previous improvements for collectives have focused on algorithmic advances and hardware off-load. In this paper, we examine the startup cost of the collective module within a communicator and explore various techniques to improve its efficiency and scalability. Accordingly, we have developed a new scalable startup scheme with three internal techniques, namely Delayed Initialization, Module Sharing and Prediction-based Topology Setup (DISP). Our DISP scheme greatly benefits the collective initialization of the Cheetah module. At the same time, it helps boost the performance of non-collective initialization in the Tuned module. We evaluate the performance of our implementation on Titan supercomputer at ORNL with up to 4096 processes. The results show that our delayed initialization can speed up the startup of Tuned and Cheetah by an average of 32.0% and 29.2%, respectively, our module sharing can reduce the memory consumption of Tuned and Cheetah by up to 24.1% and 83.5%, respectively, and our prediction-based topology setup can speed up the startup of Cheetah by up to 80%.

  16. A scalable distributed RRT for motion planning

    KAUST Repository

    Jacobs, Sam Ade

    2013-05-01

    Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality, enabling a scalable result. We show that our approaches are scalable. We present results demonstrating almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine. © 2013 IEEE.
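
    For readers unfamiliar with RRTs, here is a minimal sequential sketch in a 2D unit square without obstacles; it makes the nearest-neighbour step, the operation whose global cost the paper addresses, explicit. All parameters are illustrative.

```python
import math
import random

def rrt(start=(0.1, 0.1), goal=(0.9, 0.9), step=0.05, iters=2000, goal_tol=0.05):
    """Grow a tree from start toward random samples; stop when the goal is reached."""
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        sample = (random.random(), random.random())
        # Nearest-neighbour search: the global operation that limits parallel scalability.
        near_idx = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near = nodes[near_idx]
        d = math.dist(near, sample)
        t = min(1.0, step / d) if d > 0 else 0.0
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        parent[len(nodes)] = near_idx
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            break                        # a path can be recovered via the parent links
    return nodes, parent

if __name__ == "__main__":
    nodes, _ = rrt()
    print("tree size:", len(nodes))
```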

  17. Scalable robotic biofabrication of tissue spheroids

    Energy Technology Data Exchange (ETDEWEB)

    Mehesz, A Nagy; Hajdu, Z; Visconti, R P; Markwald, R R; Mironov, V [Advanced Tissue Biofabrication Center, Department of Regenerative Medicine and Cell Biology, Medical University of South Carolina, Charleston, SC (United States); Brown, J [Department of Mechanical Engineering, Clemson University, Clemson, SC (United States); Beaver, W [York Technical College, Rock Hill, SC (United States); Da Silva, J V L, E-mail: mironovv@musc.edu [Renato Archer Information Technology Center-CTI, Campinas (Brazil)

    2011-06-15

    Development of methods for scalable biofabrication of uniformly sized tissue spheroids is essential for tissue spheroid-based bioprinting of large size tissue and organ constructs. The most recent scalable technique for tissue spheroid fabrication employs a micromolded recessed template prepared in a non-adhesive hydrogel, wherein the cells loaded into the template self-assemble into tissue spheroids due to gravitational force. In this study, we present an improved version of this technique. A new mold was designed to enable generation of 61 microrecessions in each well of a 96-well plate. The microrecessions were seeded with cells using an EpMotion 5070 automated pipetting machine. After 48 h of incubation, tissue spheroids formed at the bottom of each microrecession. To assess the quality of constructs generated using this technology, 600 tissue spheroids made by this method were compared with 600 spheroids generated by the conventional hanging drop method. These analyses showed that tissue spheroids fabricated by the micromolded method are more uniform in diameter. Thus, use of micromolded recessions in a non-adhesive hydrogel, combined with automated cell seeding, is a reliable method for scalable robotic fabrication of uniform-sized tissue spheroids.

  18. Performance analysis of a scalable optical packet switching architecture

    Science.gov (United States)

    Wu, Ho-Ting; Tuan, Chia-Wei

    2010-10-01

    We carry out the analysis of a scalable switching architecture for all-optical packet switching networks. The underlying switch is based on a 2×2 two-stage multibuffer switched delay-line-based optical switching node. By incorporating an additional bypass line and employing a novel switch control strategy, the optical packet switching node can effectively resolve packet contentions, thus reducing the packet deflection probability substantially. In this work, we develop an exact queueing model from a discrete time Markov chain (DTMC) to evaluate the system performance under bursty, nonbursty, symmetric, and asymmetric traffic conditions. The accurate deflection probability and mean packet delay are obtained from this analytical model. Furthermore, we derive an approximate analysis to calculate the lower bound of deflection probability without the heavy computational complexities incurred by the exact analytical model. Simulation results are performed to confirm the validity of our analytic models.
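
    The exact DTMC of the paper is not reproduced here; the sketch below shows the generic building block such a queueing analysis rests on: computing the stationary distribution of a small discrete-time Markov chain by power iteration, with an illustrative three-state transition matrix.

```python
import numpy as np

def stationary(P, tol=1e-12, max_iter=100_000):
    """Return the stationary distribution of a row-stochastic transition matrix P."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt
    return pi

if __name__ == "__main__":
    # Illustrative 3-state chain (e.g., buffer occupancy of 0, 1 or 2 packets).
    P = np.array([[0.7, 0.3, 0.0],
                  [0.2, 0.5, 0.3],
                  [0.0, 0.4, 0.6]])
    print(stationary(P))
```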

  19. Efficient and scalable graph similarity joins in MapReduce.

    Science.gov (United States)

    Chen, Yifan; Zhao, Xiang; Xiao, Chuan; Zhang, Weiming; Tang, Jiuyang

    2014-01-01

    Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning and near-duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures to filter out non-promising candidates. To address the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results.
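
    The MGSJoin signatures and spectral Bloom filters are not reproduced here; the sketch below only illustrates the filtering-verification pattern on a single machine, using the node-count difference as a cheap lower bound on graph edit distance and NetworkX's exact (and expensive) computation for verification.

```python
import itertools
import networkx as nx

def similarity_join(graphs, tau=2):
    """Return index pairs of graphs whose edit distance is at most tau (filter, then verify)."""
    results = []
    for (i, g1), (j, g2) in itertools.combinations(enumerate(graphs), 2):
        # Filtering: node-count difference is a cheap lower bound on the edit distance,
        # since each edit operation changes the node count by at most one.
        if abs(g1.number_of_nodes() - g2.number_of_nodes()) > tau:
            continue
        # Verification: exact (and expensive) edit distance computation.
        if nx.graph_edit_distance(g1, g2) <= tau:
            results.append((i, j))
    return results

if __name__ == "__main__":
    graphs = [nx.path_graph(4), nx.cycle_graph(4), nx.path_graph(8)]
    print(similarity_join(graphs, tau=2))   # only the first pair survives filter + verify
```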

  20. Silicon nanophotonics for scalable quantum coherent feedback networks

    Energy Technology Data Exchange (ETDEWEB)

    Sarovar, Mohan; Brif, Constantin [Sandia National Laboratories, Livermore, CA (United States); Soh, Daniel B.S. [Sandia National Laboratories, Livermore, CA (United States); Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); Cox, Jonathan; DeRose, Christopher T.; Camacho, Ryan; Davids, Paul [Sandia National Laboratories, Albuquerque, NM (United States)

    2016-12-15

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully applicable to modeling of linear and nonlinear integrated optics networks. We also report the results of a preliminary experiment that studied the performance of an in situ controllable silicon nanophotonic network of two coupled cavities and analyze the properties of this device using the CQFC formalism. (orig.)

  1. Scalable desktop visualisation of very large radio astronomy data cubes

    Science.gov (United States)

    Perkins, Simon; Questiaux, Jacques; Finniss, Stephen; Tyler, Robin; Blyth, Sarah; Kuttel, Michelle M.

    2014-07-01

    Observation data from radio telescopes is typically stored in three (or higher) dimensional data cubes, the resolution, coverage and size of which continues to grow as ever larger radio telescopes come online. The Square Kilometre Array, tabled to be the largest radio telescope in the world, will generate multi-terabyte data cubes - several orders of magnitude larger than the current norm. Despite this imminent data deluge, scalable approaches to file access in Astronomical visualisation software are rare: most current software packages cannot read astronomical data cubes that do not fit into computer system memory, or else provide access only at a serious performance cost. In addition, there is little support for interactive exploration of 3D data. We describe a scalable, hierarchical approach to 3D visualisation of very large spectral data cubes to enable rapid visualisation of large data files on standard desktop hardware. Our hierarchical approach, embodied in the AstroVis prototype, aims to provide a means of viewing large datasets that do not fit into system memory. The focus is on rapid initial response: our system initially rapidly presents a reduced, coarse-grained 3D view of the data cube selected, which is gradually refined. The user may select sub-regions of the cube to be explored in more detail, or extracted for use in applications that do not support large files. We thus shift the focus from data analysis informed by narrow slices of detailed information, to analysis informed by overview information, with details on demand. Our hierarchical solution to the rendering of large data cubes reduces the overall time to complete file reading, provides user feedback during file processing and is memory efficient. This solution does not require high performance computing hardware and can be implemented on any platform supporting the OpenGL rendering library.
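
    A toy sketch of the coarse-first idea described above: produce a heavily downsampled overview of a 3D cube immediately, then return a requested sub-region at full resolution on demand. The cube, the block-averaging factor and the absence of any FITS I/O are illustrative simplifications, not the AstroVis implementation.

```python
import numpy as np

def coarse_view(cube, factor=8):
    """Downsample a 3D cube by averaging non-overlapping blocks (fast overview)."""
    z, y, x = (s - s % factor for s in cube.shape)
    trimmed = cube[:z, :y, :x]
    return trimmed.reshape(z // factor, factor,
                           y // factor, factor,
                           x // factor, factor).mean(axis=(1, 3, 5))

def refine(cube, z_slice, y_slice, x_slice):
    """Return the requested sub-region at full resolution (details on demand)."""
    return cube[z_slice, y_slice, x_slice]

if __name__ == "__main__":
    cube = np.random.rand(128, 128, 128)          # stand-in for a spectral data cube
    print("overview shape:", coarse_view(cube).shape)
    print("detail shape:", refine(cube, slice(0, 32), slice(40, 80), slice(40, 80)).shape)
```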

  2. Garuda: a scalable tiled display wall using commodity PCs.

    Science.gov (United States)

    Nirnimesh; Harish, Pawan; Narayanan, P J

    2007-01-01

    Cluster-based tiled display walls can provide cost-effective and scalable displays with high resolution and a large display area. The software to drive them needs to scale too if arbitrarily large displays are to be built. Chromium is a popular software API used to construct such displays. Chromium transparently renders any OpenGL application to a tiled display by partitioning and sending individual OpenGL primitives to each client per frame. Visualization applications often deal with massive geometric data with millions of primitives. Transmitting them every frame results in huge network requirements that adversely affect the scalability of the system. In this paper, we present Garuda, a client-server-based display wall framework that uses off-the-shelf hardware and a standard network. Garuda is scalable to large tile configurations and massive environments. It can transparently render any application built using the Open Scene Graph (OSG) API to a tiled display without any modification by the user. The Garuda server uses an object-based scene structure represented using a scene graph. The server determines the objects visible to each display tile using a novel adaptive algorithm that culls the scene graph to a hierarchy of frustums. Required parts of the scene graph are transmitted to the clients, which cache them to exploit the interframe redundancy. A multicast-based protocol is used to transmit the geometry to exploit the spatial redundancy present in tiled display systems. A geometry push philosophy from the server helps keep the clients in sync with one another. Neither the server nor a client needs to render the entire scene, making the system suitable for interactive rendering of massive models. Transparent rendering is achieved by intercepting the cull, draw, and swap functions of OSG and replacing them with our own. We demonstrate the performance and scalability of the Garuda system for different configurations of display wall. We also show that the

  3. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement

    Science.gov (United States)

    Wu, Alex; Song, Youhong; van Oosterom, Erik J.; Hammer, Graeme L.

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation.

  4. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement

    Directory of Open Access Journals (Sweden)

    Alex Wu

    2016-10-01

    Full Text Available The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modelling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modelling leaf photosynthesis has progressed from empirical modelling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modelling that connects models at the biochemical and crop levels and utilises developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modelling framework and reinforce the need for connections across levels of modelling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modelling framework to support crop improvement through photosynthetic manipulation.
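
    Neither record above prescribes a specific equation, so as an illustration of the "empirical light response curve" level of leaf modelling they mention, the sketch below evaluates a standard non-rectangular hyperbola; all parameter values are made up.

```python
import math

def leaf_photosynthesis(I, alpha=0.05, a_max=25.0, theta=0.7, r_d=1.0):
    """Net assimilation (umol m^-2 s^-1) from a non-rectangular hyperbola light response.

    I      incident light (umol photons m^-2 s^-1)
    alpha  initial quantum yield;  a_max  light-saturated gross rate;
    theta  curvature;  r_d  dark respiration.  All values are illustrative.
    """
    s = alpha * I + a_max
    gross = (s - math.sqrt(s * s - 4.0 * theta * alpha * I * a_max)) / (2.0 * theta)
    return gross - r_d

if __name__ == "__main__":
    for light in (0, 100, 500, 1500):
        print(light, round(leaf_photosynthesis(light), 2))
```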

  5. GSKY: A scalable distributed geospatial data server on the cloud

    Science.gov (United States)

    Rozas Larraondo, Pablo; Pringle, Sean; Antony, Joseph; Evans, Ben

    2017-04-01

    Earth systems, environmental and geophysical datasets are extremely valuable sources of information about the state and evolution of the Earth. Being able to combine information coming from different geospatial collections is in increasing demand by the scientific community, and requires managing and manipulating data with different formats and performing operations such as map reprojections, resampling and other transformations. Due to the large data volume inherent in these collections, storing multiple copies of them is unfeasible, so such data manipulation must be performed on the fly using efficient, high-performance techniques. Ideally this should be done using a trusted data service and common system libraries to ensure wide use and reproducibility. Recent developments in distributed computing based on dynamic access to significant cloud infrastructure open the door for such new ways of processing geospatial data on demand. The National Computational Infrastructure (NCI), hosted at the Australian National University (ANU), has over 10 petabytes of nationally significant research data collections. Some of these collections, which comprise a variety of observed and modelled geospatial data, are now made available via a highly distributed geospatial data server called GSKY (pronounced [jee-skee]). GSKY supports on-demand processing of large geospatial data products such as satellite earth observation data as well as numerical weather products, allowing interactive exploration and analysis of the data. It dynamically and efficiently distributes the required computations among cloud nodes, providing a scalable analysis framework that can adapt to serve a large number of concurrent users. Typical geospatial workflows handling different file formats and data types, or blending data in different coordinate projections and spatio-temporal resolutions, are handled transparently by GSKY. This is achieved by decoupling the data ingestion and indexing process as

  6. Scalable Task Assignment for Heterogeneous Multi-Robot Teams

    Directory of Open Access Journals (Sweden)

    Paula García

    2013-02-01

    Full Text Available This work deals with the development of a dynamic task assignment strategy for heterogeneous multi‐robot teams in typical real world scenarios. The strategy must be efficiently scalable to support problems of increasing complexity with minimum designer intervention. To this end, we have selected a very simple auction‐based strategy, which has been implemented and analysed in a multi‐robot cleaning problem that requires strong coordination and dynamic complex subtask organization. We will show that the selection of a simple auction strategy provides a linear computational cost increase with the number of robots that make up the team and allows the solving of highly complex assignment problems in dynamic conditions by means of a hierarchical sub‐auction policy. To coordinate and control the team, a layered behaviour‐based architecture has been applied that allows the reusing of the auction‐based strategy to achieve different coordination levels.
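
    The sketch below is a toy version of the simple sequential auction idea described above, not the authors' hierarchical sub-auction policy: each task is auctioned in turn and awarded to the robot with the lowest bid, so the cost of one round grows linearly with the number of robots. Bids are plain straight-line travel distances, an illustrative assumption.

```python
import math

def auction_assign(robots, tasks):
    """Sequentially auction tasks to heterogeneous robots.

    robots: dict name -> (x, y) position; tasks: list of (x, y) goal positions.
    Each robot bids its straight-line travel cost; the lowest bid wins, and the
    winner's position is updated so later bids reflect earlier awards.
    """
    positions = dict(robots)
    assignment = {}
    for task in tasks:
        bids = {name: math.dist(pos, task) for name, pos in positions.items()}
        winner = min(bids, key=bids.get)
        assignment.setdefault(winner, []).append(task)
        positions[winner] = task            # the robot will end up at the task location
    return assignment

if __name__ == "__main__":
    robots = {"r1": (0.0, 0.0), "r2": (5.0, 5.0)}
    tasks = [(1.0, 0.0), (4.0, 5.0), (2.0, 2.0)]
    print(auction_assign(robots, tasks))
```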

  7. A Scalable Policy and SNMP Based Network Management Framework

    Institute of Scientific and Technical Information of China (English)

    LIU Su-ping; DING Yong-sheng

    2009-01-01

    Traditional SNMP-based network management cannot deal with the task of managing large-scale distributed networks, while policy-based management is one of the effective solutions for network and distributed systems management. However, cross-vendor hardware compatibility is one of the limitations of policy-based management: devices in current networks mostly support SNMP rather than the Common Open Policy Service (COPS) protocol. By analyzing traditional network management and policy-based network management, a scalable network management framework is proposed. It combines the Internet Engineering Task Force (IETF) framework for policy-based management with SNMP-based network management. By interpreting and translating policy decisions into SNMP messages, policies can be executed on traditional SNMP-based devices.

  8. A Scalable Intrusion Detection System for IPv6

    Institute of Scientific and Technical Information of China (English)

    LIU Bin; LI Zhitang; LI Zhanchun

    2006-01-01

    The next-generation protocol IPv6 brings new challenges to information security. This paper presents the design and implementation of a network-based intrusion detection system that supports both the IPv6 and IPv4 protocols. The system's architecture is focused on performance, simplicity, and scalability. Four primary subsystems make it up: the packet capture, the packet decoder, the detection engine, and the logging and alerting subsystem. This paper further describes a new approach to packet capture whose goal is to improve the performance of the capture process at high speeds. The evaluation shows that the system performs well in detecting both IPv6 and IPv4 attacks, achieving a 61% correct detection rate with a 20% false detection rate at a speed of 100 Mb·s⁻¹.

  9. SPHERE: a scalable multicast framework in overlay networks

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper presents Sphere, a scalable multicast framework in overlay networks. Sphere is a highly efficient, self-organizing and robust multicast protocol overlaid on the Internet. The main contributions of this paper are twofold. First, Sphere organizes the control topology of the overlay network in two directions: horizontal and vertical. The horizontal meshes are used to locate and organize hosts in tracks, and the vertical meshes are used to manage the data paths between tracks. Second, Sphere balances stress and stretch of the overlay network by assigning hosts to different tracks and clusters. This structure distributes stress on the multicast trees uniformly and, at the same time, keeps path stretch as small as possible. Simulation results show that Sphere can support multicast with large group sizes and has good performance in organizing meshes and building data delivery trees.

  10. D-ZENIC:A Scalable Distributed SDN Controller Architecture

    Institute of Scientific and Technical Information of China (English)

    Yongsheng Hu; Tian Tian; Jun Wang

    2014-01-01

    In a software-defined network, a powerful central controller provides a flexible platform for defining network traffic through the use of software. When SDN is used in a large-scale network, the logically central controller comprises multiple physical servers, and the multiple controllers must act as one to provide transparent control logic to network applications and devices. The challenge is to minimize the cost of network state distribution. To this end, we propose the Distributed ZTE Elastic Network Intelligent Controller (D-ZENIC), a network-control platform that supports distributed deployment and linear scale-out. A dedicated component in the D-ZENIC controller provides a global view of the network topology as well as the distribution of host information. The evaluation shows that, to balance complexity with scalability, the network state distribution needs to be strictly classified.

  11. Flexible three-band motion-compensated temporal filtering for scalable video coding

    Institute of Scientific and Technical Information of China (English)

    WANG Yong-yu; SUN Qu; YUAN Chao-wei

    2009-01-01

    A novel scheme for scalable video coding using a three-band lifting-based motion-compensated transform is presented in this article. A series of flexible three-band motion-compensated lifting steps is used to implement the temporal wavelet transform, which provides improved compression performance by selecting a specific motion model according to the actual video sequence, and offers more flexible temporal scalability through the three-band lifting steps. Experimental results on standard video sequences, compared with a Moving Picture Experts Group (MPEG)-4 codec, demonstrate the effectiveness of the method.

  12. Internal Models Support Specific Gaits in Orthotic Devices

    DEFF Research Database (Denmark)

    Matthias Braun, Jan; Wörgötter, Florentin; Manoonpong, Poramate

    2014-01-01

    Patients use orthoses and prostheses for the lower limbs to support and enable movements that they cannot perform themselves, or can perform only with difficulty. Because traditional devices support only a limited set of movements, patients are restricted in their mobility. A possible approach to overcome...

  13. Cause and Event: Supporting Causal Claims through Logistic Models

    Science.gov (United States)

    O'Connell, Ann A.; Gray, DeLeon L.

    2011-01-01

    Efforts to identify and support credible causal claims have received intense interest in the research community, particularly over the past few decades. In this paper, we focus on the use of statistical procedures designed to support causal claims for a treatment or intervention when the response variable of interest is dichotomous. We identify…
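
    As a minimal sketch of the kind of model discussed above, the code below fits a logistic regression for a dichotomous response with a treatment indicator and one covariate, using statsmodels on synthetic data; the variable names and effect sizes are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
treatment = rng.integers(0, 2, n)                     # 0 = control, 1 = intervention
covariate = rng.standard_normal(n)
logit = -0.5 + 1.2 * treatment + 0.8 * covariate       # true model used only to simulate
outcome = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # dichotomous response

X = sm.add_constant(np.column_stack([treatment, covariate]))
fit = sm.Logit(outcome, X).fit(disp=False)
print(fit.params)                # estimated intercept, treatment effect, covariate effect
print(np.exp(fit.params[1]))     # odds ratio associated with the treatment indicator
```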

  14. Towards Scalable Strain Gauge-Based Joint Torque Sensors

    Science.gov (United States)

    D’Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G.; Cuschieri, Alfred

    2017-01-01

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as the square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS was better in terms of symmetry (clockwise and counterclockwise rotation) and more linear. These capabilities have been shown through finite element modeling (ANSYS) and confirmed by data obtained from load-testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR). PMID:28820446

  15. A Scalable Privacy Preserving Scheme In Vehicular Network

    Institute of Scientific and Technical Information of China (English)

    YAN Gong-jun; SHI Hui; Awny Alnusair; Matthew Todd Bradley

    2014-01-01

    Vehicles equipped with computing, sensing and communicating devices can form vehicular networks, a subset of cooperative systems in heterogeneous environments, aimed at improving safety and entertainment in traffic. In vehicular networks, a vehicle's identity is uniquely linked to its owner's identity. Therefore, it is important to protect the privacy of vehicles from being tracked. Obviously, the privacy protection must be scalable because of the high mobility and large population of vehicles. In this work, we take a non-trivial step towards protecting the privacy of vehicles. As privacy draws public concern, we first present the privacy implications of operational challenges from a public-policy perspective. Additionally, we envision vehicular networks as geographically partitioned subnetworks (cells). Each subnetwork maintains a list of pseudonyms. Each pseudonym includes the cell's geographic id and a random number as the host id. Before starting communication, vehicles request a pseudonym on demand from a pseudonym server. In order to improve the utilization of pseudonyms, we develop a stochastic model with time-varying arrival and departure rates. Our main contributions include: 1) proposing a scalable and effective algorithm to protect privacy; 2) providing analytical results for the probability, variance and expected number of requests on pseudonym servers. The empirical results confirm the accuracy of our analytical predictions.
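
    The sketch below is a toy rendering of the pseudonym structure described above (a cell's geographic id plus a random host id, issued on demand); the identifier format, host-id size and server interface are illustrative assumptions, and the stochastic arrival/departure model is not included.

```python
import secrets

class PseudonymServer:
    """Issues pseudonyms of the form <cell_id>-<random host id> for one geographic cell."""

    def __init__(self, cell_id):
        self.cell_id = cell_id
        self.active = {}                       # pseudonym -> vehicle, kept server-side only

    def request(self, vehicle_id):
        host_id = secrets.token_hex(4)         # random 32-bit host part
        pseudonym = f"{self.cell_id}-{host_id}"
        self.active[pseudonym] = vehicle_id
        return pseudonym

    def release(self, pseudonym):
        self.active.pop(pseudonym, None)       # freed pseudonyms can be reused later

if __name__ == "__main__":
    server = PseudonymServer(cell_id="cell-42")
    p = server.request("vehicle-007")
    print("issued:", p)
    server.release(p)
```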

  16. Epidemiological models to support animal disease surveillance activities

    DEFF Research Database (Denmark)

    Willeberg, Preben; Paisley, Larry; Lind, Peter

    2011-01-01

    Epidemiological models have been used extensively as a tool in improving animal disease surveillance activities. A review of published papers identified three main groups of model applications: models for planning surveillance, models for evaluating the performance of surveillance systems...... and models for interpreting surveillance data as part of ongoing control or eradication programmes. Two Danish examples are outlined. The first illustrates how models were used in documenting country freedom from disease (trichinellosis) and the second demonstrates how models were of assistance in predicting...

  17. Modelling supported driving as an optimal control cycle: Framework and model characteristics

    CERN Document Server

    Wang, Meng; Daamen, Winnie; Hoogendoorn, Serge P; van Arem, Bart

    2014-01-01

    Driver assistance systems support drivers in operating vehicles in a safe, comfortable and efficient way, and thus may induce changes in traffic flow characteristics. This paper puts forward a receding horizon control framework to model driver assistance and cooperative systems. The accelerations of automated vehicles are controlled to optimise a cost function, assuming other vehicles driving at stationary conditions over a prediction horizon. The flexibility of the framework is demonstrated with controller design of Adaptive Cruise Control (ACC) and Cooperative ACC (C-ACC) systems. The proposed ACC and C-ACC model characteristics are investigated analytically, with focus on equilibrium solutions and stability properties. The proposed ACC model produces plausible human car-following behaviour and is unconditionally locally stable. By careful tuning of parameters, the ACC model generates similar stability characteristics as human driver models. The proposed C-ACC model results in convective downstream and abso...
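
    The authors' controllers are not reproduced here; the sketch below is only an illustrative receding-horizon car-following loop in the spirit of the framework: at each step a short acceleration sequence is optimised for a gap-tracking plus comfort cost, assuming the leader keeps a constant speed, the first acceleration is applied, and the horizon rolls forward. All weights, bounds and horizon lengths are made up.

```python
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.5, 10                       # time step [s] and prediction horizon [steps]
DESIRED_GAP, W_GAP, W_ACC = 20.0, 1.0, 5.0  # desired gap [m] and illustrative cost weights

def cost(acc_seq, gap, rel_speed):
    """Predicted cost of an acceleration sequence; the leader keeps a constant speed."""
    total = 0.0
    for a in acc_seq:
        rel_speed -= a * DT                 # rel_speed = leader speed - follower speed
        gap += rel_speed * DT
        total += W_GAP * (gap - DESIRED_GAP) ** 2 + W_ACC * a ** 2
    return total

def mpc_step(gap, rel_speed):
    """Optimise the acceleration sequence and apply only its first element."""
    res = minimize(cost, np.zeros(HORIZON), args=(gap, rel_speed),
                   bounds=[(-3.0, 2.0)] * HORIZON)
    return res.x[0]

if __name__ == "__main__":
    gap, rel_speed = 35.0, -2.0             # gap larger than desired; follower closing in
    for _ in range(20):
        a = mpc_step(gap, rel_speed)
        rel_speed -= a * DT
        gap += rel_speed * DT
    print(round(gap, 1), round(rel_speed, 2))
```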

  18. A comparative study of slope failure prediction using logistic regression, support vector machine and least square support vector machine models

    Science.gov (United States)

    Zhou, Lim Yi; Shan, Fam Pei; Shimizu, Kunio; Imoto, Tomoaki; Lateh, Habibah; Peng, Koay Swee

    2017-08-01

    A comparative study of logistic regression, support vector machine (SVM) and least square support vector machine (LSSVM) models has been done to predict the slope failure (landslide) along East-West Highway (Gerik-Jeli). The effects of two monsoon seasons (southwest and northeast) that occur in Malaysia are considered in this study. Two related factors of occurrence of slope failure are included in this study: rainfall and underground water. For each method, two predictive models are constructed, namely SOUTHWEST and NORTHEAST models. Based on the results obtained from logistic regression models, two factors (rainfall and underground water level) contribute to the occurrence of slope failure. The accuracies of the three statistical models for two monsoon seasons are verified by using Relative Operating Characteristics curves. The validation results showed that all models produced prediction of high accuracy. For the results of SVM and LSSVM, the models using RBF kernel showed better prediction compared to the models using linear kernel. The comparative results showed that, for SOUTHWEST models, three statistical models have relatively similar performance. For NORTHEAST models, logistic regression has the best predictive efficiency whereas the SVM model has the second best predictive efficiency.
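
    As a small sketch of the comparison pipeline described above, the code below fits logistic regression and SVM models (linear and RBF kernels) on synthetic rainfall/groundwater data and compares them by ROC AUC; LSSVM is not available in scikit-learn and is omitted, and all data are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in data: rainfall [mm] and groundwater level [m] vs. slope failure (0/1).
rng = np.random.default_rng(0)
n = 600
rain = rng.gamma(2.0, 40.0, n)
water = rng.normal(5.0, 2.0, n)
p_fail = 1 / (1 + np.exp(-(0.02 * rain + 0.6 * water - 6.0)))
failure = (rng.random(n) < p_fail).astype(int)

X = np.column_stack([rain, water])
X_tr, X_te, y_tr, y_te = train_test_split(X, failure, test_size=0.3, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "svm-linear": SVC(kernel="linear", probability=True),
    "svm-rbf": SVC(kernel="rbf", probability=True),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: ROC AUC = {auc:.3f}")
```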

  19. Supporting prospective teachers' conceptions of modelling in science

    Science.gov (United States)

    Crawford, Barbara A.; Cullin, Michael J.

    2004-11-01

    This study investigated prospective secondary science teachers' understandings of and intentions to teach about scientific modelling in the context of a model-based instructional module. Qualitative methods were used to explore the influence of instruction using dynamic computer modelling. Participants included 14 secondary science prospective teachers in the USA. Research questions included: (1) What do prospective teachers understand about models and modelling in science? (2) How do their understandings change, following building and testing dynamic computer models? and (3) What are prospective teachers' intentions to teach about scientific models? Scaffolds in the software, Model-IT, enabled participants to easily build dynamic models. Findings related to the process, content, and epistemological aspects of modelling, including: (a) prospective teachers became more articulate with the language of modelling; and (b) the module enabled prospective teachers to think critically about aspects of modelling. Still, teachers did not appear to achieve full understanding of scientific modelling.

  20. A Study on Integrated Model of Decision Support Systems

    Institute of Scientific and Technical Information of China (English)

    MO Zan; FENG Shan; TANG Chao

    2002-01-01

    This paper discusses two kinds of system integration models available to DSS: the Multi-Agent-Based Model and the Application-Framework-Oriented Model. Both of them are application-oriented integrations, so it is possible to combine them at the application level. Based on this theory, this paper presents a new model, MAAFUM, which combines the two models and applies them synthetically in DSS.

  1. The Relationship between Social Anxiety and Social Support in Adolescents: A Test of Competing Causal Models

    Science.gov (United States)

    Calsyn, Robert J.; Winter, Joel P.; Burger, Gary K.

    2005-01-01

    This study compared the strength of competing causal models in explaining the relationship between perceived support, enacted support, and social anxiety in adolescents. The social causation hypothesis postulates that social support causes social anxiety, whereas the social selection hypothesis postulates that social anxiety causes social support.…

  2. Scalable and Anonymous Group Communication with MTor

    Directory of Open Access Journals (Sweden)

    Lin Dong

    2016-04-01

    Full Text Available This paper presents MTor, a low-latency anonymous group communication system. We construct MTor as an extension to Tor, allowing the construction of multi-source multicast trees on top of the existing Tor infrastructure. MTor does not depend on an external service to broker the group communication, and avoids central points of failure and trust. MTor’s substantial bandwidth savings and graceful scalability enable new classes of anonymous applications that are currently too bandwidth-intensive to be viable through traditional unicast Tor communication-e.g., group file transfer, collaborative editing, streaming video, and real-time audio conferencing.

  3. Engineering a Scalable High Quality Graph Partitioner

    CERN Document Server

    Holtgrewe, Manuel; Schulz, Christian

    2009-01-01

    We describe an approach to parallel graph partitioning that scales to hundreds of processors and produces a high solution quality. For example, for many instances from Walshaw's benchmark collection we improve the best known partitioning. We use the well known framework of multi-level graph partitioning. All components are implemented by scalable parallel algorithms. Quality improvements compared to previous systems are due to better prioritization of edges to be contracted, better approximation algorithms for identifying matchings, better local search heuristics, and perhaps most notably, a parallelization of the FM local search algorithm that works more locally than previous approaches.

  4. Scalable Optical-Fiber Communication Networks

    Science.gov (United States)

    Chow, Edward T.; Peterson, John C.

    1993-01-01

    Scalable arbitrary fiber extension network (SAFEnet) is a conceptual fiber-optic communication network passing digital signals among a variety of computers and input/output devices at rates from 200 Mb/s to more than 100 Gb/s. Intended for use with very-high-speed computers and other data-processing and communication systems in which message-passing delays must be kept short. Inherent flexibility makes it possible to match performance of network to computers by optimizing configuration of interconnections. In addition, interconnections are made redundant to provide tolerance to faults.

  5. Scalable and Practical Nonblocking Switching Networks

    Institute of Scientific and Technical Information of China (English)

    Si-Qing Zheng; Ashwin Gumaste

    2006-01-01

    Large-scale strictly nonblocking (SNB) and wide-sense nonblocking (WSNB) networks may be infeasible due to their high cost. In contrast, rearrangeable nonblocking (RNB) networks are more scalable because of their much lower cost. However, RNB networks are not suitable for circuit switching. In this paper, the concept of virtual nonblockingness is introduced. It is shown that a virtual nonblocking (VNB) network functions like an SNB or WSNB network, but it is constructed with the cost of an RNB network. The results indicate that for large-scale circuit switching applications, it is only needed to build VNB networks.

  6. Tip-Based Nanofabrication for Scalable Manufacturing

    Directory of Open Access Journals (Sweden)

    Huan Hu

    2017-03-01

    Full Text Available Tip-based nanofabrication (TBN is a family of emerging nanofabrication techniques that use a nanometer scale tip to fabricate nanostructures. In this review, we first introduce the history of the TBN and the technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  7. Physical Principles for Scalable Neural Recording

    Directory of Open Access Journals (Sweden)

    Adam Henry Marblestone

    2013-10-01

    Full Text Available Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. We also study the physics of powering and communicating with microscale devices embedded in brain tissue.

  8. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
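
    The abstract above is truncated; as a loose, hedged illustration of the underlying idea (align each observation's sign with the current estimate, average element-wise with a trimmed mean for robustness, renormalise), a sketch of estimating one robust principal direction might look as follows. This is an interpretation of the idea, not the authors' reference implementation:

      # Minimal sketch of a (Trimmed) Grassmann Average-style iteration for one direction.
      import numpy as np
      from scipy.stats import trim_mean

      def trimmed_grassmann_average(X, trim=0.1, n_iter=50, seed=0):
          """X: (n_samples, n_features) zero-mean data. Returns a unit vector."""
          rng = np.random.default_rng(seed)
          q = rng.normal(size=X.shape[1])
          q /= np.linalg.norm(q)
          for _ in range(n_iter):
              signs = np.sign(X @ q)                       # align each sample with the estimate
              signs[signs == 0] = 1.0
              avg = trim_mean(signs[:, None] * X, trim, axis=0)   # robust, element-wise average
              norm = np.linalg.norm(avg)
              if norm == 0:
                  break
              q = avg / norm
          return q

      # Usage: q = trimmed_grassmann_average(X - X.mean(axis=0))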

  9. Grassmann Averages for Scalable Robust PCA

    OpenAIRE

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA do not scale beyond small-to-medium sized datasets. To address this, we introduce the Grassmann Average (GA), whic...

  10. xQTL workbench : a scalable web environment for multi-level QTL analysis

    NARCIS (Netherlands)

    Arends, Danny; van der Velde, K. Joeri; Prins, Pjotr; Broman, Karl W.; Moller, Steffen; Jansen, Ritsert C.; Swertz, Morris A.

    2012-01-01

    xQTL workbench is a scalable web platform for the mapping of quantitative trait loci (QTLs) at multiple levels: for example gene expression (eQTL), protein abundance (pQTL), metabolite abundance (mQTL) and phenotype (phQTL) data. Popular QTL mapping methods for model organism and human populations

  11. xQTL workbench: a scalable web environment for multi-level QTL analysis

    NARCIS (Netherlands)

    Arends, D.; Velde, van der K.J.; Prins, J.C.P.; Broman, K.W.; Möller, S.; Jansen, R.C.; Swertz, M.A.

    2012-01-01

    Summary: xQTL workbench is a scalable web platform for the mapping of quantitative trait loci (QTLs) at multiple levels: for example gene expression (eQTL), protein abundance (pQTL), metabolite abundance (mQTL) and phenotype (phQTL) data. Popular QTL mapping methods for model organism and human

  12. Scalability Analysis of KVM-Based Private Cloud For Iaas

    Directory of Open Access Journals (Sweden)

    Fayruz Rahma

    2013-10-01

    Full Text Available One of the cloud technology cores is virtualization. A Virtual Machine Manager (VMM), which is also called a hypervisor, is said to have good scalability if it provides services to many virtual machines with a fair management of resources to maintain optimal performance of the virtual machines. Scalability evaluation of virtualization technology needs to be done so that cloud developers can choose the appropriate hypervisor according to the scenario of cloud usage. This study was conducted to determine the scalability of KVM in a cloud with the OpenStack platform. Three scalability metrics were used (overhead, linearity, and isolation) to measure the scalability of different machine resources: CPU, network, and disk. The results showed that KVM exhibits good scalability in CPU and network. KVM is suitable for a scenario in which isolation between its CPU and hard disk is needed. KVM is suggested not to be used in a scenario where the hard disk is accessed intensively.

  13. Modeling the Construct of an Expert Evidence-Adaptive Knowledge Base for a Pressure Injury Clinical Decision Support System

    Directory of Open Access Journals (Sweden)

    Peck Chui Betty Khong

    2017-07-01

    Full Text Available The selection of appropriate wound products for the treatment of pressure injuries is paramount in promoting wound healing. However, nurses find it difficult to decide on the most optimal wound product(s due to limited live experiences in managing pressure injuries resulting from successfully implemented pressure injury prevention programs. The challenges of effective decision-making in wound treatments by nurses at the point of care are compounded by the yearly release of wide arrays of newly researched wound products into the consumer market. A clinical decision support system for pressure injury (PI-CDSS was built to facilitate effective decision-making and selection of optimal wound treatments. This paper describes the development of PI-CDSS with an expert knowledge base using an interactive development environment, Blaze Advisor. A conceptual framework using decision-making and decision theory, knowledge representation, and process modelling guided the construct of the PI-CDSS. This expert system has incorporated the practical and relevant decision knowledge of wound experts in assessment and wound treatments in its algorithm. The construct of the PI-CDSS is adaptive, with scalable capabilities for expansion to include other CDSSs and interoperability to interface with other existing clinical and administrative systems. The algorithm was formatively evaluated and tested for usability. The treatment modalities generated after using patient-specific assessment data were found to be consistent with the treatment plan(s proposed by the wound experts. The overall agreement exceeded 90% between the wound experts and the generated treatment modalities for the choice of wound products, instructions, and alerts. The PI-CDSS serves as a just-in-time wound treatment protocol with suggested clinical actions for nurses, based on the best evidence available.

  14. Simplifying Scalable Graph Processing with a Domain-Specific Language

    KAUST Repository

    Hong, Sungpack

    2014-01-01

    Large-scale graph processing, with its massive data sets, requires distributed processing. However, conventional frameworks for distributed graph processing, such as Pregel, use non-traditional programming models that are well-suited for parallelism and scalability but inconvenient for implementing non-trivial graph algorithms. In this paper, we use Green-Marl, a Domain-Specific Language for graph analysis, to intuitively describe graph algorithms and extend its compiler to generate equivalent Pregel implementations. Using the semantic information captured by Green-Marl, the compiler applies a set of transformation rules that convert imperative graph algorithms into Pregel's programming model. Our experiments show that the Pregel programs generated by the Green-Marl compiler perform similarly to manually coded Pregel implementations of the same algorithms. The compiler is even able to generate a Pregel implementation of a complicated graph algorithm for which a manual Pregel implementation is very challenging.
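
    To make the target programming model concrete, the following toy sketch shows a vertex-centric, superstep-based computation of the kind Pregel uses (here, connected-component labelling by propagating minimum vertex ids). It only illustrates the "think like a vertex" style; it is not Green-Marl, the compiler, or Pregel itself:

      from collections import defaultdict

      def pregel_connected_components(edges, num_vertices):
          """Toy superstep loop: every vertex repeatedly adopts the smallest label it has seen."""
          neighbours = defaultdict(list)
          for u, v in edges:
              neighbours[u].append(v)
              neighbours[v].append(u)
          value = list(range(num_vertices))                     # initial label = own vertex id
          # superstep 0: each vertex "receives" its neighbours' initial labels
          messages = {v: [value[u] for u in neighbours[v]] for v in range(num_vertices)}
          while True:
              outbox = defaultdict(list)
              for v in range(num_vertices):                     # vertex-local compute step
                  inbox = messages.get(v, [])
                  if not inbox:
                      continue                                  # vertex is inactive this superstep
                  candidate = min(inbox)
                  if candidate < value[v]:
                      value[v] = candidate
                      for w in neighbours[v]:                   # send the smaller label onwards
                          outbox[w].append(candidate)
              if not outbox:                                    # no messages left: halt
                  break
              messages = outbox
          return value

      # Usage: pregel_connected_components([(0, 1), (1, 2), (3, 4)], 5) -> [0, 0, 0, 3, 3]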

  15. Enhancing Formal Modelling Tool Support with Increased Automation

    DEFF Research Database (Denmark)

    Lausdahl, Kenneth

    Progress report for the qualification exam report for PhD Student Kenneth Lausdahl. Initial work on enhancing tool support for the formal method VDM and the concept of unifying an abstract syntax tree with the ability for isolated extensions is described. The tool support includes a connection to UML and a test automation principle based on traces written as a kind of regular expressions.

  16. A Knowledge Model- and Growth Model-Based Decision Support System for Wheat Management

    Institute of Scientific and Technical Information of China (English)

    ZHU Yan; CAO Wei-xing; WANG Qi-meng; TIAN Yong-chao; PAN Jie

    2003-01-01

    By applying the system analysis principle and mathematical modeling technique to knowledge expression system for crop cultural management, the fundamental relationships and quantitative algorithms of wheat growth and management indices to variety types, ecological environments and production levels were analysed and extracted, and a dynamic knowledge model with temporal and spatial characters for wheat management (WheatKnow) was developed. By adopting soft component characteristics such as non-language relevance, re-utilization and portable system maintenance, and by further integrating the wheat growth simulation model (WheatGrow) and intelligent system for wheat management, a comprehensive and digital knowledge model, growth model and component-based decision support system for wheat management (MBDSSWM) was established on the platforms of Visual C++ and Visual Basic. The MBDSSWM realized the effective integration and coupling of the prediction and decision-making functions for digital crop management.

  17. Epidemiological models to support animal disease surveillance activities

    DEFF Research Database (Denmark)

    Willeberg, Preben; Paisley, Larry; Lind, Peter

    2011-01-01

    Epidemiological models have been used extensively as a tool in improving animal disease surveillance activities. A review of published papers identified three main groups of model applications: models for planning surveillance, models for evaluating the performance of surveillance systems and mod...

  18. Mathematical Models for the Education Sector, Supporting Material to the Survey. (Les Modeles Mathematiques du Secteur Enseignement. Annexes.) Technical Report.

    Science.gov (United States)

    Organisation for Economic Cooperation and Development, Paris (France).

    This document contains supporting material for the survey on current practice in the construction and use of mathematical models for education. Two kinds of supporting material are included: (1) the responses to the questionnaire, and (2) supporting documents and other materials concerning the mathematical model-building effort in education.…

  19. Modular Universal Scalable Ion-trap Quantum Computer

    Science.gov (United States)

    2016-06-02

    Final report (1 August 2010 to 31 January 2016) on a modular universal scalable ion-trap quantum computer. This architecture has two separate layers of scalability: the first is to increase the number of ion qubits in a single trap... [The remainder of this record is report-documentation-page residue; recoverable keywords: ion trap quantum computation, scalable modular architectures.]

  20. Modeling snail breeding in Bioregenerative Life Support System

    Science.gov (United States)

    Kovalev, Vladimir; Tikhomirov, Alexander A.; Nickolay Manukovsky, D..

    It is known that snail meat is a high-quality food that is rich in protein. Hence, heliciculture or land snail farming spreads worldwide because it is a profitable business. The possibility of using the snail Helix pomatia in Biological Life Support System (BLSS) was studied by Japanese researchers. In that study, land snails were considered to be producers of animal protein. Also, snail breeding was an important part of waste processing, because snails are capable of eating inedible plant biomass. As opposed to agricultural snail farming, heliciculture in BLSS should be more carefully planned. The purpose of our work was to develop a model for snail breeding in BLSS that can predict mass flow rates in and out of the snail facility. There are three linked parts in the model called “Stoichiometry”, “Population” and “Mass balance”, which are used in turn. The snail population is divided into 12 age groups from oviposition to one year. In the submodel “Stoichiometry” the individual snail growth and metabolism in each of the 12 age groups are described with stoichiometry equations. Reactants are written on the left side of the equations, while products are written on the right side. Stoichiometry formulas of reactants and products consist of four chemical elements: C, H, O, N. The reactants are feed and oxygen; the products are carbon dioxide, metabolic water, snail meat, shell, feces, slime and eggs. If the formulas of substances in the stoichiometry equations are substituted with their molar masses, then the stoichiometry equations are transformed into equations of molar mass balance. To get the real mass balance of individual snail growth and metabolism one should multiply the value of each molar mass in the equations by the scale parameter, which is the ratio between the mass of monthly consumed feed and the molar mass of feed. The mass of monthly consumed feed and the stoichiometry coefficients of the formulas of meat, shell, feces, slime and eggs should be determined experimentally
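
    As a worked illustration of the scaling step described above (real monthly mass flow = stoichiometric coefficient x molar mass x monthly feed mass / molar mass of feed), the sketch below uses entirely hypothetical coefficients and molar masses; the experimentally determined values from the study are not reproduced here:

      # Illustrative mass-balance step for one age group; all numbers are placeholders.
      def monthly_mass_flows(feed_mass_g, feed_molar_mass, products):
          """products: dict name -> (stoichiometric coefficient, molar mass of that product)."""
          scale = feed_mass_g / feed_molar_mass            # moles of feed consumed per month
          return {name: coeff * molar_mass * scale
                  for name, (coeff, molar_mass) in products.items()}

      # Hypothetical example (molar masses in g/mol, feed mass in g)
      flows = monthly_mass_flows(
          feed_mass_g=30.0,
          feed_molar_mass=180.0,
          products={
              "CO2":              (2.0, 44.0),
              "metabolic_water":  (1.5, 18.0),
              "meat":             (0.2, 250.0),
              "shell":            (0.1, 100.0),
              "feces_slime_eggs": (0.5, 150.0),
          },
      )
      print(flows)   # grams of each product per snail per month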

  1. Highly scalable Ab initio genomic motif identification

    KAUST Repository

    Marchand, Benoit

    2011-01-01

    We present results of scaling an ab initio motif family identification system, Dragon Motif Finder (DMF), to 65,536 processor cores of IBM Blue Gene/P. DMF seeks groups of mutually similar polynucleotide patterns within a set of genomic sequences and builds various motif families from them. Such information is of relevance to many problems in life sciences. Prior attempts to scale such ab initio motif-finding algorithms achieved limited success. We solve the scalability issues using a combination of mixed-mode MPI-OpenMP parallel programming, master-slave work assignment, multi-level workload distribution, multi-level MPI collectives, and serial optimizations. While the scalability of our algorithm was excellent (94% parallel efficiency on 65,536 cores relative to 256 cores on a modest-size problem), the final speedup with respect to the original serial code exceeded 250,000 when serial optimizations are included. This enabled us to carry out many large-scale ab initio motif-finding simulations in a few hours while the original serial code would have needed decades of execution time. Copyright 2011 ACM.
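
    The master-slave work assignment mentioned above can be sketched with mpi4py as below. This is a generic illustration of the pattern (without the OpenMP layer, multi-level distribution, or the actual motif-finding kernels); the task list and process() function are placeholders:

      # Run with e.g.: mpiexec -n 4 python master_slave.py
      from mpi4py import MPI

      TAG_WORK, TAG_RESULT, TAG_STOP = 1, 2, 3
      NUM_TASKS = 100

      def process(item):
          return item * item                           # placeholder for per-pattern motif work

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()    # assumes size >= 2 and NUM_TASKS >= size - 1

      if rank == 0:                                    # master: keep every worker busy
          tasks = list(range(NUM_TASKS))
          results, status = [], MPI.Status()
          for worker in range(1, size):                # prime each worker with one task
              comm.send(tasks.pop(), dest=worker, tag=TAG_WORK)
          while tasks:                                 # hand out the rest as results come back
              results.append(comm.recv(source=MPI.ANY_SOURCE, tag=TAG_RESULT, status=status))
              comm.send(tasks.pop(), dest=status.Get_source(), tag=TAG_WORK)
          for worker in range(1, size):                # collect the last results and stop workers
              results.append(comm.recv(source=MPI.ANY_SOURCE, tag=TAG_RESULT))
              comm.send(None, dest=worker, tag=TAG_STOP)
          print(len(results), "results collected")
      else:                                            # worker: loop until told to stop
          status = MPI.Status()
          while True:
              item = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
              if status.Get_tag() == TAG_STOP:
                  break
              comm.send(process(item), dest=0, tag=TAG_RESULT)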

  2. Towards Scalable Graph Computation on Mobile Devices.

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2014-10-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach.
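
    A minimal sketch of the memory-mapping principle the paper relies on, using NumPy on a binary edge list (int32 source/target pairs): the OS pages edges in on demand instead of the program loading the whole graph into RAM. The file layout is hypothetical and this is not the authors' iOS/Android code:

      import numpy as np

      def out_degrees(edge_file, num_vertices):
          # One row per edge, columns = (source, target); nothing is read until touched.
          edges = np.memmap(edge_file, dtype=np.int32, mode="r").reshape(-1, 2)
          degrees = np.zeros(num_vertices, dtype=np.int64)
          chunk = 1_000_000                      # walk the mapping in bounded chunks
          for start in range(0, len(edges), chunk):
              np.add.at(degrees, edges[start:start + chunk, 0], 1)
          return degrees

      # Usage: degrees = out_degrees("edges.bin", num_vertices=272_000_000)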

  3. Big data integration: scalability and sustainability

    KAUST Repository

    Zhang, Zhang

    2016-01-26

    Integration of various types of omics data is critically indispensable for addressing most important and complex biological questions. In the era of big data, however, data integration becomes increasingly tedious, time-consuming and expensive, posing a significant obstacle to fully exploit the wealth of big biological data. Here we propose a scalable and sustainable architecture that integrates big omics data through community-contributed modules. Community modules are contributed and maintained by different committed groups and each module corresponds to a specific data type, deals with data collection, processing and visualization, and delivers data on-demand via web services. Based on this community-based architecture, we build Information Commons for Rice (IC4R; http://ic4r.org), a rice knowledgebase that integrates a variety of rice omics data from multiple community modules, including genome-wide expression profiles derived entirely from RNA-Seq data, resequencing-based genomic variations obtained from re-sequencing data of thousands of rice varieties, plant homologous genes covering multiple diverse plant species, post-translational modifications, rice-related literatures, and community annotations. Taken together, such architecture achieves integration of different types of data from multiple community-contributed modules and accordingly features scalable, sustainable and collaborative integration of big data as well as low costs for database update and maintenance, thus helpful for building IC4R into a comprehensive knowledgebase covering all aspects of rice data and beneficial for both basic and translational researches.

  4. Scalability Optimization of Seamless Positioning Service

    Directory of Open Access Journals (Sweden)

    Juraj Machaj

    2016-01-01

    Full Text Available Recently, positioning services are getting more attention not only within the research community but also from service providers. From the service providers' point of view, a positioning service that is able to work seamlessly in all environments, for example, indoor, dense urban, and rural, has a huge potential to open new markets. However, such a system does not only need to provide accurate position estimates but also has to be scalable and resistant to fake positioning requests. In previous work we proposed a modular system, which is able to provide seamless positioning in various environments. The system automatically selects the optimal positioning module based on available radio signals. The system currently consists of three positioning modules—GPS, GSM based positioning, and Wi-Fi based positioning. In this paper we propose an algorithm that reduces the time needed for position estimation and thus allows higher scalability of the modular system, making it possible to provide positioning services to a larger number of users. Such an improvement is extremely important for real-world applications where a large number of users will require position estimates, since positioning error is affected by the response time of the positioning server.

  5. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi

    2014-05-01

    The fast multipole method (FMM) is often used to accelerate the calculation of particle interactions in particle-based methods to simulate incompressible flows. To evaluate the most time-consuming kernels - the Biot-Savart equation and the stretching term of the vorticity equation - we mathematically reformulated them so that only two Laplace scalar potentials are used instead of six. This automatically ensures divergence-free far-field computation. Based on this formulation, we developed a new FMM-based vortex method on heterogeneous architectures, which distributes the work between multicore CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm uses new data structures which can dynamically manage inter-node communication and load balance efficiently, with only a small parallel construction overhead. This algorithm can scale to large-sized clusters showing both strong and weak scalability. Careful error and timing trade-off analyses are also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s.

  6. Artificial intelligence support for scientific model-building

    Science.gov (United States)

    Keller, Richard M.

    1992-01-01

    Scientific model-building can be a time-intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot easily be distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientific development team to understand. We believe that artificial intelligence techniques can facilitate both the model-building and model-sharing process. In this paper, we overview our effort to build a scientific modeling software tool that aids the scientist in developing and using models. This tool includes an interactive intelligent graphical interface, a high-level domain specific modeling language, a library of physics equations and experimental datasets, and a suite of data display facilities.

  7. A 3D Geometry Model Search Engine to Support Learning

    Science.gov (United States)

    Tam, Gary K. L.; Lau, Rynson W. H.; Zhao, Jianmin

    2009-01-01

    Due to the popularity of 3D graphics in animation and games, usage of 3D geometry deformable models increases dramatically. Despite their growing importance, these models are difficult and time consuming to build. A distance learning system for the construction of these models could greatly facilitate students to learn and practice at different…

  8. A scalable method for computing quadruplet wave-wave interactions

    Science.gov (United States)

    Van Vledder, Gerbrant

    2017-04-01

    Non-linear four-wave interactions are a key physical process in the evolution of wind-generated ocean waves. The present generation of operational wave models uses the Discrete Interaction Approximation (DIA), but its accuracy is poor. It is now generally acknowledged that the DIA should be replaced with a more accurate method to improve predicted spectral shapes and derived parameters. The search for such a method is challenging as one should find a balance between accuracy and computational requirements. Such a method is presented here in the form of a scalable and adaptive method that can mimic both the time-consuming exact Snl4 approach and the fast but inaccurate DIA, and everything in between. The method provides an elegant approach to improve the DIA, not by including more arbitrarily shaped wave number configurations, but by a mathematically consistent reduction of an exact method, viz. the WRT method. The adaptiveness is to adapt the abscissa of the locus integrand in relation to the magnitude of the known terms. The adaptiveness is extended to the highest level of the WRT method to select interacting wavenumber configurations in a hierarchical way in relation to their importance. This adaptiveness results in a speed-up of one to three orders of magnitude depending on the measure of accuracy. This definition of accuracy should not be expressed in terms of the quality of the transfer integral for academic spectra but rather in terms of wave model performance in a dynamic run. This has consequences for the balance between the required accuracy and the computational workload for evaluating these interactions. The performance of the scalable method on different scales is illustrated with results ranging from academic spectra and simple growth curves to more complicated field cases using a 3G-wave model.

  9. Semantic Integrative Digital Pathology: Insights into Microsemiological Semantics and Image Analysis Scalability.

    Science.gov (United States)

    Racoceanu, Daniel; Capron, Frédérique

    2016-01-01

    be devoted to morphological microsemiology (microscopic morphology semantics). Besides insuring the traceability of the results (second opinion) and supporting the orchestration of high-content image analysis modules, the role of semantics will be crucial for the correlation between digital pathology and noninvasive medical imaging modalities. In addition, semantics has an important role in modelling the links between traditional microscopy and recent label-free technologies. The massive amount of visual data is challenging and represents a characteristic intrinsic to digital pathology. The design of an operational integrative microscopy framework needs to focus on scalable multiscale imaging formalism. In this sense, we prospectively consider some of the most recent scalable methodologies adapted to digital pathology as marked point processes for nuclear atypia and point-set mathematical morphology for architecture grading. To orchestrate this scalable framework, semantics-based WSI management (analysis, exploration, indexing, retrieval and report generation support) represents an important means towards approaches to integrating big data into biomedicine. This insight reflects our vision through an instantiation of essential bricks of this type of architecture. The generic approach introduced here is applicable to a number of challenges related to molecular imaging, high-content image management and, more generally, bioinformatics. © 2016 S. Karger AG, Basel.

  10. An Intelligent Mobile-Agent Based Scalable Network Management Architecture for Large-Scale Enterprise System

    CERN Document Server

    Sharma, A K; Singh, Vijay

    2012-01-01

    Several Mobile Agent based distributed network management models have been proposed in recent times to address the scalability and flexibility problems of centralized (SNMP or CMIP management models) models. Though the use of Mobile Agents to distribute and delegate management tasks comes handy in dealing with the previously stated issues, many of the agent-based management frameworks like initial flat bed models and static mid-level managers employing mobile agents models cannot efficiently meet the demands of current networks which are growing in size and complexity. Moreover, varied technologies, such as SONET, ATM, Ethernet, DWDM etc., present at different layers of the Access, Metro and Core (long haul) sections of the network, have contributed to the complexity in terms of their own framing and protocol structures. Thus, controlling and managing the traffic in these networks is a challenging task. This paper presents an intelligent scalable hierarchical agent based model for the management of large-scal...

  11. Constitutive modelling of an arterial wall supported by microscopic measurements

    Directory of Open Access Journals (Sweden)

    Vychytil J.

    2012-06-01

    Full Text Available An idealized model of an arterial wall is proposed as a two-layer system. The distinct mechanical response of each layer is taken into account by considering two types of strain energy functions in the hyperelasticity framework. The outer layer, considered as a fibre-reinforced composite, is modelled using the structural model of Holzapfel. The inner layer, on the other hand, is represented by a two-scale model mimicking smooth muscle tissue. For this model, material parameters such as shape, volume fraction and orientation of smooth muscle cells are determined using microscopic measurements. The resulting model of an arterial ring is stretched axially and loaded with inner pressure to simulate the mechanical response of a porcine arterial segment during inflation and axial stretching. Good agreement of the model prediction with experimental data is promising for further progress.
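
    For orientation, the fibre-reinforced strain-energy function usually associated with the Holzapfel structural model has the standard form below (quoted from the general literature rather than from this paper, whose exact parameter values are not reproduced):

      \Psi = \frac{c}{2}\,(\bar{I}_1 - 3)
             + \frac{k_1}{2 k_2} \sum_{i=4,6} \left[ \exp\!\left( k_2 (\bar{I}_i - 1)^2 \right) - 1 \right]

    where c and k_1 are stress-like parameters, k_2 is dimensionless, \bar{I}_1 is the first invariant of the isochoric right Cauchy-Green tensor, and \bar{I}_4, \bar{I}_6 are the squared stretches along the two collagen fibre families.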

  12. Real Time Traffic Models, Decision Support for Traffic Management

    NARCIS (Netherlands)

    Wismans, L.; De Romph, E.; Friso, K.; Zantema, K.

    2014-01-01

    Reliable and accurate short-term traffic state prediction can improve the performance of real-time traffic management systems significantly. Using this short-time prediction based on current measurements delivered by advanced surveillance systems will support decision-making processes on various

  13. Modelling and simulation-based acquisition decision support: present & future

    CSIR Research Space (South Africa)

    Naidoo, S

    2009-10-01

    Full Text Available [Slide/figure residue; the recoverable content lists classes of analysis during support phases: performance specification, performance trade-offs, performance development, performance prediction, ...]

  14. Real time traffic models, decision support for traffic management

    NARCIS (Netherlands)

    Wismans, Luc Johannes Josephus; de Romph, E.; Friso, K.; Zantema, K.

    2014-01-01

    Reliable and accurate short-term traffic state prediction can improve the performance of real-time traffic management systems significantly. Using this short-time prediction based on current measurements delivered by advanced surveillance systems will support decision-making processes on various

  15. Supporting Universal Prevention Programs: A Two-Phased Coaching Model

    Science.gov (United States)

    Becker, Kimberly D.; Darney, Dana; Domitrovich, Celene; Keperling, Jennifer Pitchford; Ialongo, Nicholas S.

    2013-01-01

    Schools are adopting evidence-based programs designed to enhance students' emotional and behavioral competencies at increasing rates (Hemmeter et al. in "Early Child Res Q" 26:96-109, 2011). At the same time, teachers express the need for increased support surrounding implementation of these evidence-based programs (Carter and Van Norman in "Early…

  16. A Gaussian Model of Expert Opinions for Supporting Design Decisions

    NARCIS (Netherlands)

    Rajabalinejad, M.

    2012-01-01

    Decision making in design is of great importance, resulting in success or failure of a system. This paper describes a robust decision support tool for engineering design process, which can be used throughout the design process. The tool is graphical and designed to communicate efficiently with diffe

  17. Appropriate models in decision support systems for river basin management

    NARCIS (Netherlands)

    Xu, YuePing; Booij, Martijn J.; Morell, M.; Todorovik, O.; Dimitrov, D.; Selenica, A.; Spirkovski, Z.

    2004-01-01

    In recent years, new ideas and techniques appear very quickly, like sustainability, adaptive management, Geographic Information System, Remote Sensing and participations of new stakeholders, which contribute a lot to the development of decision support systems in river basin management. However, the

  18. Real time traffic models, decision support for traffic management

    NARCIS (Netherlands)

    Wismans, L.J.J.; Romph, de E.; Friso, K.; Zantema, K.

    2014-01-01

    Reliable and accurate short-term traffic state prediction can improve the performance of real-time traffic management systems significantly. Using this short-time prediction based on current measurements delivered by advanced surveillance systems will support decision-making processes on various con

  19. Real Time Traffic Models, Decision Support for Traffic Management

    NARCIS (Netherlands)

    Wismans, L.; De Romph, E.; Friso, K.; Zantema, K.

    2014-01-01

    Reliable and accurate short-term traffic state prediction can improve the performance of real-time traffic management systems significantly. Using this short-time prediction based on current measurements delivered by advanced surveillance systems will support decision-making processes on various con

  20. Ordered mesoporous materials as model supports to study catalyst preparation

    NARCIS (Netherlands)

    Sietsma, J.R.A.

    2007-01-01

    Catalysts are indispensable to modern-day society because of their prominent role in petroleum refining, chemical processing, and the reduction of environmental pollution. The catalytically active component often consists of small metal (oxide) particles that are supported on a carrier such as silic

  1. Social Validity of a Positive Behavior Interventions and Support Model

    Science.gov (United States)

    Miramontes, Nancy Y.; Marchant, Michelle; Heath, Melissa Allen; Fischer, Lane

    2011-01-01

    As more schools turn to positive behavior interventions and support (PBIS) to address students' academic and behavioral problems, there is an increased need to adequately evaluate these programs for social relevance. The present study used social validation measures to evaluate a statewide PBIS initiative. Active consumers of the program were…

  2. Designing, Modeling and Evaluating Influence Strategiesfor Behavior Change Support Systems

    NARCIS (Netherlands)

    Öörni, Anssi; Kelders, Saskia Marion; van Gemert-Pijnen, Julia E.W.C.; Oinas-Kukkonen, Harri

    2014-01-01

    Behavior change support systems (BCSS) research is an evolving area. While the systems have been demonstrated to work to the effect, there is still a lot of work to be done to better understand the influence mechanisms of behavior change, and work out their influence on the systems architecture. The

  3. Supporting Universal Prevention Programs: A Two-Phased Coaching Model

    Science.gov (United States)

    Becker, Kimberly D.; Darney, Dana; Domitrovich, Celene; Keperling, Jennifer Pitchford; Ialongo, Nicholas S.

    2013-01-01

    Schools are adopting evidence-based programs designed to enhance students' emotional and behavioral competencies at increasing rates (Hemmeter et al. in "Early Child Res Q" 26:96-109, 2011). At the same time, teachers express the need for increased support surrounding implementation of these evidence-based programs (Carter and Van Norman in "Early…

  4. Developing a Language Support Model for Mainstream Primary School Teachers

    Science.gov (United States)

    McCartney, Elspeth; Ellis, Sue; Boyle, James; Turnbull, Mary; Kerr, Jane

    2010-01-01

    In the UK, speech and language therapists (SLTs) work with teachers to support children with language impairment (LI) in mainstream schools. Consultancy approaches are often used, where SLTs advise educational staff who then deliver language-learning activities. However, some research suggests that schools may not always sustain activities as…

  5. Using Technology to Support the Army Learning Model

    Science.gov (United States)

    2016-02-01

    technologies that could support the execution of these scenarios in the most effective manner. It was indicated that the team had benefited greatly from...Leadership & Education, Personnel, and Facilities) studies are conducted which feed into the JCIDS (Joint Capabilities Integration Development...from each organization for inclusion. A total of 21 products were selected for the final sample, with each organization contributing approximately

  6. SUPPORTING MPLS VPN MULTICAST

    Institute of Scientific and Technical Information of China (English)

    Wang Yufeng; Wang Wendong; Cheng Shiduan

    2004-01-01

    MPLS (Multi-Protocol Label Switching) VPN (Virtual Private Network) traffic has been deployed widely, but currently only supports unicast. This paper briefly introduces several available MPLS VPN multicast approaches, and then analyzes their disadvantages. A novel mechanism that uses a two-layer label stack to support MPLS VPN explicit multicast is proposed and the process is discussed in detail. The scalability and performance of the proposed mechanism are studied analytically. The result shows that our solution has a great advantage over the currently available scheme in terms of saving core network bandwidth and improving the scalability.

  7. Using self-made drawings to support modelling in science education

    NARCIS (Netherlands)

    Leenaars, F.A.J.; Joolingen, van W.R.; Bollen, L.

    2013-01-01

    The value of modelling in science education is evident, both from scientific practice and from theories of learning. However, students find modelling difficult and need support. This study investigates how self-made drawings could be used to support the modelling process. An experiment with undergra

  8. A Linear Programming Model to Optimize Various Objective Functions of a Foundation Type State Support Program.

    Science.gov (United States)

    Matzke, Orville R.

    The purpose of this study was to formulate a linear programming model to simulate a foundation type support program and to apply this model to a state support program for the public elementary and secondary school districts in the State of Iowa. The model was successful in producing optimal solutions to five objective functions proposed for…
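
    As a hedged illustration of how a foundation-type support program can be cast as a linear program, the sketch below minimises total state aid subject to each district reaching a guaranteed per-pupil foundation level. All districts, valuations, rates and the single objective shown are invented for illustration; they are not the study's actual objective functions or Iowa data:

      import numpy as np
      from scipy.optimize import linprog

      pupils     = np.array([1200, 800, 2500])       # pupils per district (hypothetical)
      valuation  = np.array([90e6, 30e6, 200e6])     # taxable property valuation ($)
      tax_rate   = 0.015                             # required local effort, as a fraction
      foundation = 6500.0                            # guaranteed $ per pupil

      local_yield = tax_rate * valuation
      # minimize sum of aid_i  subject to  aid_i >= foundation*pupils_i - local_yield_i,  aid_i >= 0
      c = np.ones(len(pupils))
      A_ub = -np.eye(len(pupils))                    # -aid_i <= -(need_i)
      b_ub = -(foundation * pupils - local_yield)
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(pupils), method="highs")
      print("state aid per district:", res.x)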

  9. Scalable parallel programming for high performance seismic simulation on petascale heterogeneous supercomputers

    Science.gov (United States)

    Zhou, Jun

    The 1994 Northridge earthquake in Los Angeles, California, killed 57 people, injured over 8,700 and caused an estimated $20 billion in damage. Petascale simulations are needed in California and elsewhere to provide society with a better understanding of the rupture and wave dynamics of the largest earthquakes at shaking frequencies required to engineer safe structures. As heterogeneous supercomputing infrastructures become more common, numerical developments in earthquake system research are particularly challenged by the dependence on the accelerator elements to enable "the Big One" simulations with higher frequency and finer resolution. Reducing time to solution and power consumption are two primary focus areas today for the enabling technology of fault rupture dynamics and seismic wave propagation in realistic 3D models of the crust's heterogeneous structure. This dissertation presents scalable parallel programming techniques for high-performance seismic simulation running on petascale heterogeneous supercomputers. A real-world earthquake simulation code, AWP-ODC, one of the most advanced earthquake codes to date, was chosen as the base code in this research, and the testbed is based on Titan at Oak Ridge National Laboratory, the world's largest heterogeneous supercomputer. The research work is primarily related to architecture study, computation performance tuning and software system scalability. An earthquake simulation workflow has also been developed to support the efficient production sets of simulations. The highlights of the technical development are an aggressive performance optimization focusing on data locality and a notable data communication model that hides the data communication latency. This development results in the optimal computation efficiency and throughput for the 13-point stencil code on heterogeneous systems, which can be extended to general high-order stencil codes. Started from scratch, the hybrid CPU/GPU version of AWP
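
    For context, a 13-point stencil of the kind mentioned above (a fourth-order 3-D Laplacian) can be sketched in NumPy as follows; this only shows the stencil arithmetic on interior points, not AWP-ODC's halo exchange, GPU kernels, or communication hiding:

      import numpy as np

      def laplacian_13pt(u, h):
          """Fourth-order Laplacian of a 3-D field u (each dimension >= 5) with grid spacing h."""
          c = np.array([-1.0, 16.0, -30.0, 16.0, -1.0]) / (12.0 * h * h)
          out = np.zeros_like(u)
          core = (slice(2, -2),) * 3                 # interior points with a 2-cell margin
          lap = np.zeros_like(u[core])
          for axis in range(3):                      # sum of three 1-D second derivatives
              for k, offset in enumerate(range(-2, 3)):
                  sl = [slice(2, -2)] * 3
                  sl[axis] = slice(2 + offset, u.shape[axis] - 2 + offset)
                  lap += c[k] * u[tuple(sl)]
          out[core] = lap
          return out

      # Usage: lap = laplacian_13pt(np.random.rand(64, 64, 64), h=0.1)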

  10. Memory bandwidth-scalable motion estimation for mobile video coding

    Science.gov (United States)

    Hsieh, Jui-Hung; Tai, Wei-Cheng; Chang, Tian-Sheuan

    2011-12-01

    The heavy memory access of motion estimation (ME) execution consumes significant power and could limit ME execution when the available memory bandwidth (BW) is reduced because of access congestion or changes in the dynamics of the power environment of modern mobile devices. In order to adapt to the changing BW while maintaining the rate-distortion (R-D) performance, this article proposes a novel data BW-scalable algorithm for ME with mobile multimedia chips. The available BW is modeled in a R-D sense and allocated to fit the dynamic contents. The simulation result shows 70% BW savings while keeping equivalent R-D performance compared with H.264 reference software for low-motion CIF-sized video. For high-motion sequences, the result shows our algorithm can better use the available BW to save an average bit rate of up to 13% with up to 0.1-dB PSNR increase for similar BW usage.

  11. Image and geometry processing with Oriented and Scalable Map.

    Science.gov (United States)

    Hua, Hao

    2016-05-01

    We turn the Self-organizing Map (SOM) into an Oriented and Scalable Map (OS-Map) by generalizing the neighborhood function and the winner selection. The homogeneous Gaussian neighborhood function is replaced with the matrix exponential. Thus we can specify the orientation either in the map space or in the data space. Moreover, we associate the map's global scale with the locality of winner selection. Our model is suited for a number of graphical applications such as texture/image synthesis, surface parameterization, and solid texture synthesis. OS-Map is more generic and versatile than the task-specific algorithms for these applications. Our work reveals the overlooked strength of SOMs in processing images and geometries. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  13. Improving Scalability of Java Archive Search Engine through Recursion Conversion And Multithreading

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2016-05-01

    Full Text Available Based on the fact that bytecode always exists in a Java archive, a bytecode-based Java archive search engine has been developed [1, 2]. Although this system is quite effective, it still lacks scalability since many modules apply recursive calls and the system only utilizes one core (single thread). In this research, the Java archive search engine architecture is redesigned in order to improve its scalability. All recursions are converted to iterative forms, although most of these modules are logically recursive and quite difficult to convert (e.g. Tarjan's strongly connected component algorithm). Recursion conversion can be conducted by following its respective recursive pattern. Each recursion is broken down into four parts (the before and after actions of the current node and of its children) and converted to iteration with the help of a caller reference. This conversion mechanism improves scalability by avoiding the stack overflow errors caused by method calls. System scalability is also improved by applying a multithreading mechanism, which successfully cuts its processing time. Shorter processing time may enable the system to handle larger data. Multithreading is applied on major parts, which are the indexer, the vector space model (VSM) retriever, the low-rank vector space model (LRVSM) retriever, and the semantic relatedness calculator (the semantic relatedness calculator also involves multiprocessing). The correctness of both the recursion conversion and the multithreaded design is supported by the fact that all implementations yield similar results.
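
    The recursion-to-iteration pattern described above can be illustrated with a generic tree traversal: the recursive form is replaced by an explicit stack of (node, phase) frames, which removes deep call stacks and the associated stack-overflow risk. The node structure and the before/after actions are placeholders, not the search engine's actual modules:

      def traverse_recursive(node, before, after):
          before(node)                             # action before visiting children
          for child in node.get("children", []):
              traverse_recursive(child, before, after)
          after(node)                              # action after all children are done

      def traverse_iterative(root, before, after):
          stack = [(root, 0)]                      # phase 0 = not yet entered, 1 = children done
          while stack:
              node, phase = stack.pop()
              if phase == 0:
                  before(node)
                  stack.append((node, 1))          # come back later for the "after" action
                  for child in reversed(node.get("children", [])):
                      stack.append((child, 0))
              else:
                  after(node)

      # Both traversals visit nodes in the same order:
      tree = {"name": "a", "children": [{"name": "b"}, {"name": "c", "children": [{"name": "d"}]}]}
      traverse_iterative(tree, lambda n: print("enter", n["name"]), lambda n: print("leave", n["name"]))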

  14. A model of union participation: the impact of perceived union support, union instrumentality, and union loyalty.

    Science.gov (United States)

    Tetrick, Lois E; Shore, Lynn M; McClurg, Lucy Newton; Vandenberg, Robert J

    2007-05-01

    Perceived union support and union instrumentality have been shown to uniquely predict union loyalty. This study was the first to explicitly examine the relation between perceived union support and union instrumentality. Surveys were completed by 273 union members and 29 union stewards. A comparison of 2 models, 1 based on organizational support theory and 1 based on union participation theories, found that the model based on organizational support theory, in which union instrumentality was an antecedent to perceived union support and led to union loyalty and subsequently union participation, best fit the data. The model based on union participation theories, in which perceived union support was an antecedent of union instrumentality and led to union loyalty and subsequently union participation, was not supported. Union instrumentality was related to union commitment, but the relation was completely mediated by perceived union support.

  15. NeuroPigPen: A Scalable Toolkit for Processing Electrophysiological Signal Data in Neuroscience Applications Using Apache Pig.

    Science.gov (United States)

    Sahoo, Satya S; Wei, Annan; Valdez, Joshua; Wang, Li; Zonjy, Bilal; Tatsuoka, Curtis; Loparo, Kenneth A; Lhatoo, Samden D

    2016-01-01

    The recent advances in neurological imaging and sensing technologies have led to rapid increase in the volume, rate of data generation, and variety of neuroscience data. This "neuroscience Big data" represents a significant opportunity for the biomedical research community to design experiments using data with greater timescale, large number of attributes, and statistically significant data size. The results from these new data-driven research techniques can advance our understanding of complex neurological disorders, help model long-term effects of brain injuries, and provide new insights into dynamics of brain networks. However, many existing neuroinformatics data processing and analysis tools were not built to manage large volume of data, which makes it difficult for researchers to effectively leverage this available data to advance their research. We introduce a new toolkit called NeuroPigPen that was developed using Apache Hadoop and Pig data flow language to address the challenges posed by large-scale electrophysiological signal data. NeuroPigPen is a modular toolkit that can process large volumes of electrophysiological signal data, such as Electroencephalogram (EEG), Electrocardiogram (ECG), and blood oxygen levels (SpO2), using a new distributed storage model called Cloudwave Signal Format (CSF) that supports easy partitioning and storage of signal data on commodity hardware. NeuroPigPen was developed with three design principles: (a) Scalability-the ability to efficiently process increasing volumes of data; (b) Adaptability-the toolkit can be deployed across different computing configurations; and (c) Ease of programming-the toolkit can be easily used to compose multi-step data processing pipelines using high-level programming constructs. The NeuroPigPen toolkit was evaluated using 750 GB of electrophysiological signal data over a variety of Hadoop cluster configurations ranging from 3 to 30 Data nodes. The evaluation results demonstrate that the toolkit

  16. Cognitive Support using BDI Agent and Adaptive User Modeling

    DEFF Research Database (Denmark)

    Hossain, Shabbir

    2012-01-01

    -known framework to measure the individual health status and functioning level. The third goal is to develop an approach for supporting users with irrational behaviour due to cognitive impairment. To deal with this challenge, a Belief, Desire and Intention (BDI) agent-based approach is proposed due to its...... conducted using the developed system shows potentials in terms of providing freedom of mobility for the user with cognitive impairment.......The need for constant support and care for elderly people with special needs is gradually increasing. It is becoming the primary challenge for most of the western countries and the research society to innovate new effective approaches and develop novel technologies to address the demographics

  17. Model of hospital-supported discharge after stroke

    DEFF Research Database (Denmark)

    Torp, Claus Rydahl; Vinkler, Sonja; Pedersen, Kirsten Damgaard

    2006-01-01

    BACKGROUND AND PURPOSE: The readmission rate within 6 months after a stroke is 40% to 50%. The purpose of the project was to evaluate whether an interdisciplinary stroke team could reduce length of hospital stay and readmission rate, increase patient satisfaction, and reduce dependency on help. METHODS: One hundred and ninety-eight patients with acute stroke were randomized into 103 patients whose discharge was supported by an interdisciplinary stroke team and 95 control patients who received standard aftercare. Baseline characteristics were comparable in the 2 groups. The patients were evaluated...... services. Furthermore, there was no significant difference in functional scores or patient satisfaction. CONCLUSIONS: In this setting we could not show a benefit of an interdisciplinary stroke team supporting patients at discharge, perhaps because standard aftercare was very efficient already.

  18. Knowledge, Models and Tools in Support of Advanced Distance Learning

    Science.gov (United States)

    2006-06-01

    instruction, such as the need for frequent assessment and for customized pedagogy. In addition, however, certain types of procedural skills cannot...requiring that such components be reinvented in the iRides simulation language. For example, the Text Entry Dialog function makes it possible to use a...that the student can have more than one chance to produce the correct response. A lesson authoring dialog in iRides author supports the specification of

  19. Real time traffic models, decision support for traffic management

    OpenAIRE

    Wismans, L.; De Romph, E.; Friso, K.; Zantema, K.

    2014-01-01

    Reliable and accurate short-term traffic state prediction can significantly improve the performance of real-time traffic management systems. Using this short-term prediction, based on current measurements delivered by advanced surveillance systems, will support decision-making on various control strategies and enhance the performance of the overall network. By proactively deploying traffic management measures, congestion may be prevented or its effects limited. An approach...

  20. Computer-Aided Design Models to Support Ergonomics

    Science.gov (United States)

    1985-12-01

    McDaniel, Joe W.; Askren, William B. Air Force Human Resources Laboratory, December 1985. Approved for public release; distribution is... [Report-form text garbled in extraction; recoverable fragments mention a model of the technician that the designer can effectively use for maintainability analyses, and an approach with three elements.]

  1. Data to support "Boosted Regression Tree Models to Explain Watershed Nutrient Concentrations & Biological Condition"

    Data.gov (United States)

    U.S. Environmental Protection Agency — Spreadsheets are included here to support the manuscript "Boosted Regression Tree Models to Explain Watershed Nutrient Concentrations and Biological Condition". This...

  2. Exploiting Modelling and Simulation in Support of Cyber Defence

    NARCIS (Netherlands)

    Klaver, M.H.A.; Boltjes, B.; Croom-Jonson, S.; Jonat, F.; Çankaya, Y.

    2014-01-01

    The rapidly evolving environment of Cyber threats against the NATO Alliance has necessitated a renewed focus on the development of Cyber Defence policy and capabilities. The NATO Modelling and Simulation Group is looking for ways to leverage Modelling and Simulation experience in research, analysis

  3. Gilbert's Behavior Engineering Model: Contemporary Support for an Established Theory

    Science.gov (United States)

    Crossman, Donna Cangelosi

    2010-01-01

    This study was an effort to add to the body of research surrounding Gilbert's Behavior Engineering Model (BEM). The model was tested to determine its ability to explain factor relationships of organizational safety culture in a high-risk work environment. Three contextual variables were measured: communication, resource availability, and…

  4. Support of Modelling in Process-Engineering Education

    NARCIS (Netherlands)

    Schaaf, van der H.; Vermuë, M.H.; Tramper, J.; Hartog, R.J.M.

    2006-01-01

    An objective of the Process Technology curriculum at Wageningen University is to teach students a stepwise modeling approach in the context of process engineering. Many process-engineering students have difficulty with learning to design a model. Some common problems are lack of structure in the des

  5. Supporting an Externally Developed Model of Education in Greenland

    Science.gov (United States)

    Wyatt, Tasha R.

    2010-01-01

    This study investigated the adaptation process of an externally developed model of reform in Greenland's educational system. Under investigation was how reform leaders responded to the needs of the community after implementing an educational model developed in the United States by researchers at the Center for Research on Education, Diversity, and…

  7. Community Mobilization Model Applied to Support Grandparents Raising Grandchildren

    Science.gov (United States)

    Miller, Jacque; Bruce, Ann; Bundy-Fazioli, Kimberly; Fruhauf, Christine A.

    2010-01-01

    This article discusses the application of a community mobilization model through a case study of one community's response to address the needs of grandparents raising grandchildren. The community mobilization model presented is one that is replicable in addressing diverse community identified issues. Discussed is the building of the partnerships,…

  8. A Model of Teacher Professional Development to Support Technology Integration

    Science.gov (United States)

    Ehman, Lee; Bonk, Curt; Yamagata-Lynch, Lisa

    2005-01-01

    The purpose of this paper is to report on the professional development model of Teacher Institute for Curriculum Knowledge about Integration of Technology (TICKIT). This paper will situate the TICKIT model with past findings from professional development research, and provide researchers and practitioners facilitating future programs advice based…

  9. Two Models of Magnetic Support for Photoevaporated Molecular Clouds

    Energy Technology Data Exchange (ETDEWEB)

    Ryutov, D; Kane, J; Mizuta, A; Pound, M; Remington, B

    2004-05-05

    The thermal pressure inside molecular clouds is insufficient for maintaining the pressure balance at an ablation front at the cloud surface illuminated by nearby UV stars. Most probably, the required stiffness is provided by the magnetic pressure. After surveying existing models of this type, we concentrate on two of them: the model of a quasi-homogeneous magnetic field and the recently proposed model of "magnetostatic turbulence". We discuss observational consequences of the two models, in particular, the structure and the strength of the magnetic field inside the cloud and in the ionized outflow. We comment on the possible role of reconnection events and their observational signatures. We mention laboratory experiments where the most significant features of the models can be tested.
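
    As a generic illustration of the balance referred to above (not the authors' specific derivation), the cloud surface can confine the ablation pressure only if magnetic pressure supplies what thermal pressure lacks; in Gaussian units,

\[
  p_{\mathrm{th}} + \frac{B^{2}}{8\pi} \;\simeq\; p_{\mathrm{abl}},
  \qquad\text{so}\qquad
  B \;\simeq\; \sqrt{8\pi\,\bigl(p_{\mathrm{abl}} - p_{\mathrm{th}}\bigr)} \;\approx\; \sqrt{8\pi\, p_{\mathrm{abl}}},
\]

    where \(p_{\mathrm{abl}}\) is the pressure exerted by the photoevaporative outflow at the ablation front and the last approximation holds when the thermal pressure is negligible, as the abstract states.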

  10. Phylogenies support out-of-equilibrium models of biodiversity.

    Science.gov (United States)

    Manceau, Marc; Lambert, Amaury; Morlon, Hélène

    2015-04-01

    There is a long tradition in ecology of studying models of biodiversity at equilibrium. These models, including the influential Neutral Theory of Biodiversity, have been successful at predicting major macroecological patterns, such as species abundance distributions. But they have failed to predict macroevolutionary patterns, such as those captured in phylogenetic trees. Here, we develop a model of biodiversity in which all individuals have identical demographic rates, metacommunity size is allowed to vary stochastically according to population dynamics, and speciation arises naturally from the accumulation of point mutations. We show that this model generates phylogenies matching those observed in nature if the metacommunity is out of equilibrium. We develop a likelihood inference framework that allows fitting our model to empirical phylogenies, and apply this framework to various mammalian families. Our results corroborate the hypothesis that biodiversity dynamics are out of equilibrium.
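
    A minimal individual-based sketch can make the ingredients above concrete: identical per-capita birth and death rates, a community size that drifts stochastically, and new species labels introduced by mutation at birth. The sketch below is a simplified single-mutation variant written for illustration only; the parameter values and the use of Python are assumptions, not the authors' implementation (in their model speciation arises from the accumulation of point mutations and the model is fitted with a likelihood framework).

```python
# Illustrative neutral, individual-based community with stochastic size and
# mutation-driven speciation (simplified; not the paper's exact model).
import random

def simulate(n0=200, birth=1.0, death=1.0, mu=0.01, steps=20000, seed=0):
    rng = random.Random(seed)
    community = [0] * n0          # each individual carries a species label
    next_label = 1
    for _ in range(steps):
        if not community:
            break                 # the metacommunity can go extinct
        i = rng.randrange(len(community))
        if rng.random() < birth / (birth + death):
            child = community[i]
            if rng.random() < mu: # a point mutation founds a new species
                child = next_label
                next_label += 1
            community.append(child)
        else:
            community.pop(i)      # death; community size drifts since birth == death
    return community

abundances = {}
for s in simulate():
    abundances[s] = abundances.get(s, 0) + 1
print(sorted(abundances.values(), reverse=True))   # species abundance distribution
```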

  11. Scalable Parallel Computers: System Architecture and Up-to-Date Development

    Institute of Scientific and Technical Information of China (English)

    曾庆华; 陈天麒

    2003-01-01

    Scalable parallel computers are becoming a trend in the development of parallel computers. Scalable computers are classified into three system models: the Symmetric Multiprocessor (SMP), the Massively Parallel Processor (MPP), and the Cluster of Workstations (COW). In this paper, the three models are discussed, and the Dawn parallel computers, which belong to the MPP and COW models, are introduced.

  12. Building scalable apps with Redis and Node.js

    CERN Document Server

    Johanan, Joshua

    2014-01-01

    If the phrase scalability sounds alien to you, then this is an ideal book for you. You will not need much Node.js experience as each framework is demonstrated in a way that requires no previous knowledge of the framework. You will be building scalable Node.js applications in no time! Knowledge of JavaScript is required.

  13. New Complexity Scalable MPEG Encoding Techniques for Mobile Applications

    Directory of Open Access Journals (Sweden)

    Stephan Mietens

    2004-03-01

    Complexity scalability offers the advantage of a one-time design of video applications for a large product family, including mobile devices, without the need to redesign the applications at the algorithmic level to meet the requirements of the different products. In this paper, we present complexity-scalable MPEG encoding in which core modules are modified for scalability. The interdependencies of the scalable modules and the system performance are evaluated. Experimental results show that scalability gives a smooth change in complexity and corresponding video quality. Scalability is achieved primarily by varying the number of computed DCT coefficients and the number of evaluated motion vectors, while the other modules are designed such that they scale with these parameters. In experiments using the “Stefan” sequence, the elapsed execution time of the scalable encoder, reflecting the computational complexity, can be gradually reduced to roughly 50% of its original execution time. The video quality scales between 20 dB and 48 dB PSNR with a unity quantizer setting, and between 21.5 dB and 38.5 dB PSNR for different sequences targeting 1500 kbps. The implemented encoder and the scalability techniques can be successfully applied in mobile systems based on MPEG video compression.
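
    One of the knobs mentioned above, the number of computed DCT coefficients, can be illustrated with a short sketch: keeping only the top-left k x k coefficients of each 8x8 block trades complexity for quality. The function below is a generic illustration, not the paper's encoder; for brevity it computes the full transform and zeroes the discarded coefficients, whereas a real complexity-scalable encoder would compute only the retained ones.

```python
# Generic sketch of coefficient-count scalability (not the paper's implementation).
import numpy as np
from scipy.fftpack import dct

def partial_dct_8x8(block, k):
    """2-D DCT-II of an 8x8 block, keeping only the top-left k x k coefficients."""
    coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
    mask = np.zeros_like(coeffs)
    mask[:k, :k] = 1.0           # discard the high-frequency coefficients
    return coeffs * mask

block = np.random.rand(8, 8)
for k in (2, 4, 8):              # fewer coefficients -> lower complexity and quality
    print(k, int(np.count_nonzero(partial_dct_8x8(block, k))))
```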

  14. Atomic structure of graphene supported heterogeneous model catalysts

    Energy Technology Data Exchange (ETDEWEB)

    Franz, Dirk

    2017-04-15

    Graphene on Ir(111) forms a moire structure with well-defined nucleation centres. It can therefore be utilized to create hexagonal metal cluster lattices with outstanding structural quality. In diffraction experiments, these 2D surface lattices cause a coherent superposition of the moire cell structure factor, so that the measured signal intensity scales with the square of the number of coherently scattering unit cells. This artificial signal enhancement makes it possible for X-ray diffraction to determine the atomic structure of small nano-objects, which are hardly accessible with any experimental technique. The uniform environment of every metal cluster makes the described metal cluster lattices on graphene/Ir(111) an attractive model system for the investigation of catalytic, magnetic, and quantum size properties of ultra-small nano-objects. In this context, the use of X-rays provides a maximum of flexibility concerning the possible sample environments (vacuum, selected gases, liquids, sample temperature) and allows in-situ/operando measurements. In the framework of the present thesis, the structure of different metal clusters grown by physical vapor deposition in a UHV environment and after gas exposure has been investigated. On the one hand, the obtained results explore many aspects of the atomic structure of these small metal clusters, and on the other hand, the presented results prove the capabilities of the described technique (SXRD on cluster lattices). For iridium, platinum, iridium/palladium and platinum/rhodium, the growth on graphene/Ir(111) of epitaxial, crystalline clusters with an ordered hexagonal lattice arrangement has been confirmed using SXRD. The clusters nucleate at the hcp sites of the moire cell and bind via rehybridization of the carbon atoms (sp² → sp³) to the Ir(111) substrate. This causes small displacements of the substrate atoms, which is revealed by the diffraction experiments. All metal clusters exhibit an fcc structure

  15. Modelling Per Capita Water Demand Change to Support System Planning

    Science.gov (United States)

    Garcia, M. E.; Islam, S.

    2016-12-01

    Water utilities have a number of levers to influence customer water usage. These include levers to proactively slow demand growth over time such as building and landscape codes as well as levers to decrease demands quickly in response to water stress including price increases, education campaigns, water restrictions, and incentive programs. Even actions aimed at short term reductions can result in long term water usage declines when substantial changes are made in water efficiency, as in incentives for fixture replacement or turf removal, or usage patterns such as permanent lawn watering restrictions. Demand change is therefore linked to hydrological conditions and to the effects of past management decisions - both typically included in water supply planning models. Yet, demand is typically incorporated exogenously using scenarios or endogenously using only price, though utilities also use rules and incentives issued in response to water stress and codes specifying standards for new construction to influence water usage. Explicitly including these policy levers in planning models enables concurrent testing of infrastructure and policy strategies and illuminates interactions between the two. The City of Las Vegas is used as a case study to develop and demonstrate this modeling approach. First, a statistical analysis of system data was employed to rule out alternate hypotheses of per capita demand decrease such as changes in population density and economic structure. Next, four demand sub-models were developed including one baseline model in which demand is a function of only price. The sub-models were then calibrated and tested using monthly data from 1997 to 2012. Finally, the best performing sub-model was integrated with a full supply and demand model. The results highlight the importance of both modeling water demand dynamics endogenously and taking a broader view of the variables influencing demand change.
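
    A minimal sketch of what endogenous demand means in this setting: per-capita demand responds to price through a constant elasticity and retains permanent efficiency gains triggered by conservation programs during water-stress years. The functional form, elasticity, and numbers below are illustrative assumptions, not the calibrated Las Vegas sub-models.

```python
# Illustrative sketch only; parameters and functional form are assumed, not the paper's.
def demand_per_capita(base_demand, price, base_price,
                      price_elasticity=-0.3, efficiency_factor=1.0):
    """Per-capita demand with constant price elasticity and persistent efficiency gains."""
    return base_demand * (price / base_price) ** price_elasticity * efficiency_factor

efficiency = 1.0
demands = []
# (price, water-stress year?) for three hypothetical years
for price, stress in [(3.0, False), (3.2, True), (3.2, False)]:
    if stress:
        efficiency *= 0.95   # e.g. turf-removal incentives permanently cut use by 5%
    demands.append(demand_per_capita(400.0, price, base_price=3.0,
                                     efficiency_factor=efficiency))
print([round(d, 1) for d in demands])   # hypothetical gallons per capita per day
```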

  16. Dependability breakeven point mathematical model for production - quality strategy support

    Science.gov (United States)

    Vilcu, Adrian; Verzea, Ion; Chaib, Rachid

    2016-08-01

    This paper connects the field of system dependability with production-quality strategies through a new mathematical model based on breakeven points. The novelties consist in identifying the parameters of the dependability system, which, in safety control, represents the degree to which an item is capable of performing its required function at any randomly chosen time during its specified operating period, disregarding non-operation-related influences; analyzing the production-quality strategies; defining a mathematical model based on a new concept, dependability breakeven points; validating the model on datasets; and showing the practical applicability of this new approach.

  17. Open Models of Decision Support Towards a Framework

    OpenAIRE

    Diasio, Stephen Ray

    2012-01-01

    This thesis presents a framework for open models of decision support in organizations. The work takes the form of a compendium of articles that analyze the inflows and outflows of knowledge in organizations, as well as the existing decision-support technologies. The underlying factors driving new models for open forms of decision support are presented. The thesis presents a study of the different typologies of models of support for d...

  18. Efficient scalable solid-state neutron detector

    Science.gov (United States)

    Moses, Daniel

    2015-06-01

    We report on a scalable solid-state neutron detector system that is specifically designed to yield high thermal-neutron detection sensitivity. The basic detector unit in this system is made of a 6Li foil coupled to two crystalline silicon diodes. The theoretical intrinsic efficiency of a detector unit is 23.8%, and that of a detector element comprising a stack of five detector units is 60%. Based on the measured performance of this detector unit, the performance of a detector system comprising a planar array of detector elements, scaled to encompass an effective area of 0.43 m², is estimated to yield the minimum absolute efficiency required of radiological portal monitors used in homeland security.

  19. Scalable conditional induction variables (CIV) analysis

    DEFF Research Database (Denmark)

    Oancea, Cosmin Eugen; Rauchwerger, Lawrence

    2015-01-01

    the complexity of such induction variables is often due to their conditional evaluation across the iteration space of loops, we name them Conditional Induction Variables (CIV). This paper presents a flow-sensitive technique that summarizes both such CIV-based and affine subscripts to program level, using the same...... representation. Our technique requires no modifications of our dependence tests, which is agnostic to the original shape of the subscripts, and is more powerful than previously reported dependence tests that rely on the pairwise disambiguation of read-write references. We have implemented the CIV analysis in our...... parallelizing compiler and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in.
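
    For illustration, a loop of the kind described above looks like the following sketch (written in Python for brevity; the benchmarks in the record are Fortran). The variable k is a conditional induction variable: it advances only on iterations that satisfy a data-dependent predicate, so subscripts driven by it are not affine functions of the loop index and defeat classical affine dependence tests.

```python
# Illustrative CIV example (hypothetical, not taken from the evaluated benchmarks).
def compact_positive(values):
    out = [0] * len(values)
    k = 0                        # conditional induction variable
    for i in range(len(values)):
        if values[i] > 0:        # data-dependent condition
            out[k] = values[i]   # write subscript governed by the CIV
            k += 1               # the CIV advances only on this path
    return out[:k]

print(compact_positive([3, -1, 4, -1, 5]))   # -> [3, 4, 5]
```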

  20. A scalable sparse eigensolver for petascale applications

    Science.gov (United States)

    Keceli, Murat; Zhang, Hong; Zapol, Peter; Dixon, David; Wagner, Albert

    2015-03-01

    Exploiting the locality of chemical interactions, and therefore sparsity, is necessary to push the limits of quantum simulations beyond petascale. However, sparse numerical algorithms are known to have poor strong scaling. Here, we show that the shift-and-invert parallel spectral transformations (SIPs) method can scale up to two hundred thousand cores for density-functional-based tight-binding (DFTB) or semi-empirical molecular orbital (SEMO) applications. We demonstrate the robustness and scalability of the SIPs method on various kinds of systems, including metallic carbon nanotubes, diamond crystals, and water clusters. We analyze how the sparsity patterns and eigenvalue spectra of these different types of applications affect the computational performance of SIPs. The SIPs method enables us to perform simulations with more than five hundred thousand basis functions utilizing hundreds of thousands of cores. SIPs scales better in memory and computational time than dense eigensolvers, and it does not require fast interconnects.
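
    The core idea, the shift-and-invert spectral transformation, can be sketched in a few lines: eigenvalues nearest a chosen shift sigma are mapped to the extremal part of the spectrum of (A - sigma*I)^-1, where iterative eigensolvers converge quickly; SIPs partitions the spectrum into many such shifts ("slices") solved in parallel. The sketch below shows a single slice, serially, with SciPy on a synthetic sparse matrix; the matrix, the shift, and the use of SciPy are illustrative assumptions, not the SIPs implementation (which builds on SLEPc).

```python
# Single shift-invert "slice" on a synthetic sparse symmetric matrix (illustrative only).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 2000
diag = np.linspace(-1.0, 1.0, n)          # stand-in for a sparse Hamiltonian spectrum
off = 0.01 * np.ones(n - 1)
A = sp.diags([off, diag, off], offsets=[-1, 0, 1], format='csc')

sigma = 0.25                              # shift: target the eigenvalues near this value
vals, vecs = eigsh(A, k=6, sigma=sigma, which='LM')   # shift-invert mode
print(np.sort(vals))
```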