WorldWideScience

Sample records for hybrid computing environment

  1. Applications integration in a hybrid cloud computing environment: modelling and platform

    Science.gov (United States)

    Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang

    2013-08-01

With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services and even infrastructure services provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications, such as data storage, computing processes, document sharing and even management information system services, as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed in different clouds along with their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds with intra-enterprise ISs. A run-time platform is developed, and a cross-computing-environment process modelling technique is also developed, to improve the feasibility of ISs under hybrid cloud computing environments.

  2. Hybrid Cloud Computing Environment for EarthCube and Geoscience Community

    Science.gov (United States)

    Yang, C. P.; Qin, H.

    2016-12-01

The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments, using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resource synchronization and bursting between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience communities can deploy and manage applications using base or customized virtual machine images, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects, such as CHORDS, BCube, CINERGI, OntoSoft, and other EarthCube building blocks, have deployed or started migrating their projects to this platform. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs (e.g., images, port numbers, usable cloud capacity) of each project in advance, based on communications between ECITE and the participating projects; the scientists or IT technicians in those projects then launch one or more virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents, or data without having to deal with the heterogeneity in structure and operation among different cloud platforms.
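
The record above describes the platform only in prose. As a purely illustrative sketch of the uniform-interface-plus-bursting idea it attributes to ECITE (all class and function names below are hypothetical assumptions, not ECITE's API), consider:

```python
# Hypothetical sketch of the uniform interface and cloud-bursting behaviour
# described above; class and method names are illustrative, not ECITE's API.
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """Uniform facade hiding per-cloud heterogeneity from users."""
    @abstractmethod
    def launch_vm(self, image: str) -> str: ...
    @abstractmethod
    def capacity_left(self) -> int: ...

class PrivateCloud(CloudDriver):  # e.g., an OpenStack or Eucalyptus site
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
    def launch_vm(self, image: str) -> str:
        self.capacity -= 1
        return f"private-vm({image})"
    def capacity_left(self) -> int:
        return self.capacity

class PublicCloud(CloudDriver):  # e.g., Amazon Web Services
    def launch_vm(self, image: str) -> str:
        return f"public-vm({image})"
    def capacity_left(self) -> int:
        return 10**6  # treated as effectively unbounded

def launch_with_bursting(private: CloudDriver, public: CloudDriver, image: str) -> str:
    """Prefer the private cloud; burst to the public cloud when it is full."""
    target = private if private.capacity_left() > 0 else public
    return target.launch_vm(image)

private, public = PrivateCloud(capacity=2), PublicCloud()
for _ in range(3):  # the third request bursts to the public cloud
    print(launch_with_bursting(private, public, image="earthcube-base"))
```

The point of the abstraction is the last function: callers never see which cloud actually served the request.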

  3. A Hybrid Scheme for Fine-Grained Search and Access Authorization in Fog Computing Environment

    Science.gov (United States)

    Xiao, Min; Zhou, Jing; Liu, Xuejiao; Jiang, Mingda

    2017-01-01

In the fog computing environment, encrypted sensitive data may be transferred to multiple fog nodes on the edge of a network for low latency; thus, fog nodes need to implement search over encrypted data just as a cloud server does. Since fog nodes tend to serve IoT applications that often run on resource-constrained end devices, it is necessary to design lightweight solutions. At present, there is little research on this issue. In this paper, we propose a fine-grained, owner-forced data search and access authorization scheme spanning user, fog, and cloud for resource-constrained end users. Compared to existing schemes that support either index encryption with search ability or data encryption with fine-grained access control, the proposed hybrid scheme supports both abilities simultaneously; the index ciphertext and data ciphertext are constructed from a single ciphertext-policy attribute-based encryption (CP-ABE) primitive and share the same key pair, so data access efficiency is significantly improved and the cost of key management is greatly reduced. Moreover, in the proposed scheme, resource-constrained end devices are allowed to rapidly assemble ciphertexts online and to securely outsource most of the decryption task to fog nodes, and a mediated encryption mechanism is adopted to achieve instantaneous user revocation instead of re-encrypting the many copies of ciphertexts held in many fog nodes. The security and performance analysis shows that our scheme is suitable for a fog computing environment. PMID:28629131

  4. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    Science.gov (United States)

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

Cloud computing has attracted significant attention from the research community because of the rapid rate at which Information Technology services are migrating to its domain. Advances in virtualization technology have made cloud computing very popular as a result of the easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in the cloud computing environment. The current formulation of the task scheduling problem has been shown to be NP-complete; hence, finding the exact solution is intractable, especially for large problem sizes. The heterogeneous and dynamic nature of cloud resources makes optimal task scheduling non-trivial, so efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment using a proposed Simulated Annealing (SA) based SOS (SASOS) that improves the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses few parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), reducing both makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
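
For readers unfamiliar with the approach, the following is a minimal, self-contained sketch of how an SA acceptance rule can be layered on an SOS-style population search for task-to-VM scheduling. It assumes synthetic task lengths and a makespan fitness, and does not reproduce the authors' exact SASOS operators:

```python
# Illustrative SASOS-style loop (assumed details, not the authors' exact
# operators): candidate schedules map tasks to VMs, an SOS-style
# recombination explores globally, and a simulated-annealing acceptance
# rule refines solutions locally.
import math
import random

TASKS = [random.uniform(1, 10) for _ in range(50)]  # synthetic task lengths
N_VMS = 5

def makespan(schedule):
    """Completion time of the busiest VM; smaller is better."""
    load = [0.0] * N_VMS
    for length, vm in zip(TASKS, schedule):
        load[vm] += length
    return max(load)

def mutualism(a, b):
    """SOS-inspired move: mix two organisms' task assignments."""
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def anneal_step(schedule, temp):
    """SA refinement: reassign one task; accept worse moves with prob e^(-d/T)."""
    neighbour = schedule[:]
    neighbour[random.randrange(len(neighbour))] = random.randrange(N_VMS)
    delta = makespan(neighbour) - makespan(schedule)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        return neighbour
    return schedule

population = [[random.randrange(N_VMS) for _ in TASKS] for _ in range(20)]
temp = 10.0
for _ in range(200):
    best = min(population, key=makespan)
    population = [anneal_step(mutualism(p, best), temp) for p in population]
    temp *= 0.98  # cooling schedule
print("best makespan:", round(makespan(min(population, key=makespan)), 2))
```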

  7. Parallel Computing Characteristics of CUPID code under MPI and Hybrid environment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Ryong; Yoon, Han Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeon, Byoung Jin; Choi, Hyoung Gwon [Seoul National Univ. of Science and Technology, Seoul (Korea, Republic of)

    2014-05-15

simulation with diagonal preconditioning shows the better speedup. The MPI library was used for node-to-node communication among the partitioned subdomains, and OpenMP threads were activated in every node to exploit the multi-core computing environment. The results of hybrid computing show good performance compared with pure MPI parallel computing.

  8. Auditing Hybrid IT Environments

    Directory of Open Access Journals (Sweden)

    Georgiana Mateescu

    2014-01-01

This paper presents a personal approach to auditing hybrid IT environments consisting of both on-premise and on-demand services and systems. The analysis is performed from both safety and profitability perspectives, and it aims to offer strategy, technical, and business teams a representation of the value added by the cloud programme within the company's portfolio. Starting from the importance of IT governance in today's business environments, the first section presents the main principles that drive technology strategy in order to maximize the value added by IT assets to business products. Section two summarizes the frameworks leveraged by our approach to implement the safety and profitability computation algorithms described in the third section. The paper concludes with the benefits of our frameworks and presents future developments.

  9. Analog and hybrid computing

    CERN Document Server

    Hyndman, D E

    2013-01-01

    Analog and Hybrid Computing focuses on the operations of analog and hybrid computers. The book first outlines the history of computing devices that influenced the creation of analog and digital computers. The types of problems to be solved on computers, computing systems, and digital computers are discussed. The text looks at the theory and operation of electronic analog computers, including linear and non-linear computing units and use of analog computers as operational amplifiers. The monograph examines the preparation of problems to be deciphered on computers. Flow diagrams, methods of ampl

  10. BF-PSO-TS: Hybrid Heuristic Algorithms for Optimizing Task Scheduling on Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Hussin M. Alkhashai

    2016-06-01

Task scheduling is a major problem in cloud computing because the cloud provider has to serve many users, and a good scheduling algorithm helps in the proper and efficient utilization of the resources. Task scheduling is therefore considered one of the major issues in cloud computing systems. The objective of this paper is to assign tasks to multiple computing resources such that the total cost of execution is minimized and load is shared between these computing resources. Two hybrid algorithms based on Particle Swarm Optimization (PSO) are introduced to schedule the tasks: Best-Fit-PSO (BFPSO) and PSO-Tabu Search (PSOTS). In the BFPSO algorithm, the Best-Fit (BF) algorithm is merged into the PSO algorithm to improve performance; its main principle is that BF is used to generate the initial population of the standard PSO algorithm instead of initiating it randomly. In the proposed PSOTS algorithm, Tabu Search (TS) is used to improve the local search by avoiding the trap of local optimality into which the standard PSO algorithm can fall. The two proposed algorithms (i.e., BFPSO and PSOTS) have been implemented using CloudSim and evaluated against the standard PSO algorithm on five problems with different numbers of independent tasks and resources. The performance parameters considered are execution time (makespan), cost, and resource utilization. The implementation results show that the proposed hybrid algorithms (i.e., BFPSO and PSOTS) outperform the standard PSO algorithm.
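
The Best-Fit seeding idea lends itself to a short sketch. The following is an illustrative reading of it (greedy least-loaded placement as the "best fit", plus jitter to form the initial swarm), not the authors' implementation:

```python
# Sketch of the Best-Fit seeding idea (an illustrative reading, not the
# authors' implementation): the initial PSO population is built around a
# greedy best-fit schedule instead of being generated purely at random.
import random

TASKS = [random.uniform(1, 10) for _ in range(40)]  # synthetic task lengths
N_VMS = 4

def best_fit_schedule():
    """Greedily place each task on the currently least-loaded VM."""
    load = [0.0] * N_VMS
    schedule = []
    for length in TASKS:
        vm = min(range(N_VMS), key=load.__getitem__)
        load[vm] += length
        schedule.append(vm)
    return schedule

def perturb(schedule, rate=0.1):
    """Jitter a schedule so particles spread around the best-fit seed."""
    return [random.randrange(N_VMS) if random.random() < rate else vm
            for vm in schedule]

seed = best_fit_schedule()
swarm = [perturb(seed) for _ in range(30)]  # initial particle positions
```

From here, a standard discrete PSO velocity/position update would iterate over the swarm; the seeding only moves the starting region closer to good solutions.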

  11. Exploring a Hybrid of Geospatial Semantic Information in Ubiquitous Computing Environments

    Directory of Open Access Journals (Sweden)

    Raghda Fouad

    2011-11-01

Nowadays, geospatial information plays a critical role. Searching for and obtaining geospatial information, however, is a difficult and time-consuming task. The Semantic Web promises to facilitate this by improving the capability to search for information by better expressing the meaning of search queries. Combining the two approaches to create a Geospatial Semantic Web is an idea that is gaining acceptance in both areas. Here, we present a prototype that promises to prove that the meshing of these two areas is a promising field in conjunction with information retrieval and ubiquitous computing. The aim of this prototype is to exploit geospatial semantic information retrieved from multiple data sources in a mobile environment. Our prototype uses three geospatial data sources: GeoNames, LinkedGeoData, and DBpedia. Experimental results show how the merging of the geospatial data sources and the use of more than one level of indexing is more effective in terms of recall and precision.

  13. Reachability computation for hybrid systems with Ariadne

    NARCIS (Netherlands)

    L. Benvenuti; D. Bresolin; A. Casagrande; P.J. Collins (Pieter); A. Ferrari; E. Mazzi; T. Villa; A. Sangiovanni-Vincentelli

    2008-01-01

Ariadne is an in-progress open environment for designing algorithms for computing with hybrid automata. It relies on a rigorous computable analysis theory to represent geometric objects, in order to achieve provable approximation bounds along the computations. In this paper we discuss the

  14. Computing environment logbook

    Science.gov (United States)

    Osbourn, Gordon C; Bouchard, Ann M

    2012-09-18

A computing environment logbook logs events occurring within a computing environment and displays them as a history of past events. The logbook provides search functionality for finding one or more selected past events within that history and, further, enables an undo of the selected past events.
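
The patent abstract describes a simple data structure: an append-only event history with search and selective undo. A minimal sketch (hypothetical API, not the patented implementation) might look like:

```python
# Minimal sketch of the logbook described above (hypothetical API, not the
# patented implementation): events carry an undo action, are searchable,
# and selected events can be undone.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Event:
    description: str
    undo: Callable[[], None]  # action that reverses this event

@dataclass
class Logbook:
    history: List[Event] = field(default_factory=list)

    def log(self, description: str, undo: Callable[[], None]) -> None:
        self.history.append(Event(description, undo))

    def search(self, term: str) -> List[Event]:
        return [e for e in self.history if term in e.description]

    def undo_events(self, events: List[Event]) -> None:
        for e in reversed(events):  # undo in reverse chronological order
            e.undo()
            self.history.remove(e)

book = Logbook()
book.log("renamed a.txt -> b.txt", undo=lambda: print("restored a.txt"))
book.undo_events(book.search("renamed"))  # prints: restored a.txt
```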

  15. Hybridity in Embedded Computing Systems

    Institute of Scientific and Technical Information of China (English)

    虞慧群; 孙永强

    1996-01-01

An embedded system is a system in which a computer is used as a component of a larger device. In this paper, we study hybridity in embedded systems and present an interval-based temporal logic to express and reason about the hybrid properties of such systems.

  16. Advanced Hybrid Computer Systems. Software Technology.

    Science.gov (United States)

This software technology final report evaluates advances made in Advanced Hybrid Computer System software technology. The report describes what... automatic patching software is available, as well as which analog/hybrid programming languages would be most feasible for the Advanced Hybrid Computer... compiler software. The problem of how software would interface with the hybrid system is also presented.

  17. Hybrid soft computing approaches research and applications

    CERN Document Server

    Dutta, Paramartha; Chakraborty, Susanta

    2016-01-01

The book provides a platform for dealing with the flaws and failings of the soft computing paradigm through different manifestations. The different chapters highlight the necessity of the hybrid soft computing methodology in general, with emphasis on several application perspectives in particular. Typical examples include (a) Study of Economic Load Dispatch by Various Hybrid Optimization Techniques, (b) An Application of Color Magnetic Resonance Brain Image Segmentation by ParaOptiMUSIG Activation Function, (c) Hybrid Rough-PSO Approach in Remote Sensing Imagery Analysis, (d) A Study and Analysis of Hybrid Intelligent Techniques for Breast Cancer Detection using Breast Thermograms, and (e) Hybridization of 2D-3D Images for Human Face Recognition. The elaborate findings of the chapters enhance the exhibition of the hybrid soft computing paradigm in the field of intelligent computing.

  18. Towards Hybrid Overset Grid Simulations of the Launch Environment

    Science.gov (United States)

    Moini-Yekta, Shayan

    A hybrid overset grid approach has been developed for the design and analysis of launch vehicles and facilities in the launch environment. The motivation for the hybrid grid methodology is to reduce the turn-around time of computational fluid dynamic simulations and improve the ability to handle complex geometry and flow physics. The LAVA (Launch Ascent and Vehicle Aerodynamics) hybrid overset grid scheme consists of two components: an off-body immersed-boundary Cartesian solver with block-structured adaptive mesh refinement and a near-body unstructured body-fitted solver. Two-way coupling is achieved through overset connectivity between the off-body and near-body grids. This work highlights verification using code-to-code comparisons and validation using experimental data for the individual and hybrid solver. The hybrid overset grid methodology is applied to representative unsteady 2D trench and 3D generic rocket test cases.

  19. Checkpointing for a hybrid computing node

    Energy Technology Data Exchange (ETDEWEB)

    Cher, Chen-Yong

    2016-03-08

    According to an aspect, a method for checkpointing in a hybrid computing node includes executing a task in a processing accelerator of the hybrid computing node. A checkpoint is created in a local memory of the processing accelerator. The checkpoint includes state data to restart execution of the task in the processing accelerator upon a restart operation. Execution of the task is resumed in the processing accelerator after creating the checkpoint. The state data of the checkpoint are transferred from the processing accelerator to a main processor of the hybrid computing node while the processing accelerator is executing the task.
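
The flow described above (snapshot locally, resume immediately, ship state to the host in the background) can be illustrated with a toy producer/consumer sketch. The threading structure is an assumption for illustration; the claim concerns real accelerator hardware:

```python
# Toy sketch of the checkpointing flow described above (structure assumed
# for illustration): the "accelerator" snapshots its state locally, resumes
# work immediately, and snapshots are shipped to the "main processor" in
# the background.
import copy
import queue
import threading

transfer_queue: "queue.Queue" = queue.Queue()
host_checkpoints = []

def host_receiver():
    """Main-processor side: drain checkpoints while the accelerator works."""
    while True:
        ckpt = transfer_queue.get()
        if ckpt is None:  # shutdown signal
            break
        host_checkpoints.append(ckpt)

def accelerator_task(steps, checkpoint_every):
    state = {"step": 0, "acc": 0.0}
    for i in range(1, steps + 1):
        state["step"], state["acc"] = i, state["acc"] + 0.5 * i  # "compute"
        if i % checkpoint_every == 0:
            # Local snapshot, handed off asynchronously; execution continues.
            transfer_queue.put(copy.deepcopy(state))

receiver = threading.Thread(target=host_receiver)
receiver.start()
accelerator_task(steps=100, checkpoint_every=25)
transfer_queue.put(None)
receiver.join()
print(f"host holds {len(host_checkpoints)} checkpoints; "
      f"latest restartable step: {host_checkpoints[-1]['step']}")
```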

  20. Hybrid Verification by Exploiting the Environment

    Science.gov (United States)

    1994-07-01

  1. Computer code for intraply hybrid composite design

    Science.gov (United States)

    Chamis, C. C.; Sinclair, J. H.

    1981-01-01

    A computer program has been developed and is described herein for intraply hybrid composite design (INHYD). The program includes several composite micromechanics theories, intraply hybrid composite theories and a hygrothermomechanical theory. These theories provide INHYD with considerable flexibility and capability which the user can exercise through several available options. Key features and capabilities of INHYD are illustrated through selected samples.

  2. Universal blind quantum computation for hybrid system

    Science.gov (United States)

    Huang, He-Liang; Bao, Wan-Su; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Zhang, Hai-Long; Wang, Xiang

    2017-08-01

As progress on building quantum computers continues to advance, first-generation practical quantum computers will be available to ordinary users in the cloud, similar to today's IBM Quantum Experience. Clients will be able to remotely access the quantum servers using simple devices. In such a situation, it is of prime importance to keep the client's information secure. Blind quantum computation protocols enable a client with limited quantum technology to delegate her quantum computation to a quantum server without leaking any privacy. To date, blind quantum computation has been considered only for individual quantum systems. However, a practical universal quantum computer is likely to be a hybrid system. Here, we take the first step towards constructing a framework of blind quantum computation for hybrid systems, which provides a more feasible way towards scalable blind quantum computation.

  3. Hybrid Systems: Computation and Control.

    Science.gov (United States)

    2007-11-02

  4. Computational Environment of Software Agents

    Directory of Open Access Journals (Sweden)

    Martin Tomášek

    2008-05-01

The presented process calculus for software agent communication and mobility can be used to express distributed computational environments and mobile code applications in general. Agents are an abstraction of the functional part of the system architecture and are modeled as process terms. Agent actions model interactions within the distributed environment: local/remote communication and mobility. Places are an abstraction of the single computational environment where agents are evaluated and where interactions take place. The distributed environment is modeled as a parallel composition of places, with each place evolving asynchronously. An operational semantics defines rules to describe behavior within the distributed environment and provides a guideline for implementations. Via a series of examples we show that mobile code applications can be naturally modeled.

  5. Printing in ubiquitous computing environments

    NARCIS (Netherlands)

    Karapantelakis, Athanasios; Delvic, Alisa; Zarifi Eslami, Mohammad; Khamit, Saltanat

    2009-01-01

    Document printing has long been considered an indispensable part of the workspace. While this process is considered trivial and simple for environments where resources are ample (e.g. desktop computers connected to printers within a corporate network), it becomes complicated when applied in a mobile

  6. Feasibility study: PASS computer environment

    Energy Technology Data Exchange (ETDEWEB)

    None

    1980-03-10

    The Policy Analysis Screening System (PASS) is a computerized information-retrieval system designed to provide analysts in the Department of Energy, Assistant Secretary for Environment, Office of Technology Impacts (DOE-ASEV-OTI) with automated access to articles, computer simulation outputs, energy-environmental statistics, and graphics. Although it is essential that PASS respond quickly to user queries, problems at the computer facility where it was originally installed seriously slowed PASS's operations. Users attempting to access the computer by telephone repeatedly encountered busy signals and, once logged on, experienced unsatisfactory delays in response to commands. Many of the problems stemmed from the system's facility manager having brought another large user onto the system shortly after PASS was implemented, thereby significantly oversubscribing the facility. Although in March 1980 Energy Information Administration (EIA) transferred operations to its own computer facility, OTI has expressed concern that any improvement in computer access time and response time may not be sufficient or permanent. Consequently, a study was undertaken to assess the current status of the system, to identify alternative computer environments, and to evaluate the feasibility of each alternative in terms of its cost and its ability to alleviate current problems.

  7. Adaptation and hybridization in computational intelligence

    CERN Document Server

    Fister Jr., Iztok

    2015-01-01

This carefully edited book takes a walk through recent advances in adaptation and hybridization in the Computational Intelligence (CI) domain. It consists of ten chapters that are divided into three parts. The first part illustrates background information and provides some theoretical foundation for the CI domain, the second part deals with adaptation in CI algorithms, while the third part focuses on hybridization in CI. This book can serve as an ideal reference for researchers and students of computer science, electrical and civil engineering, economy, and the natural sciences who are confronted with solving optimization, modeling, and simulation problems. It covers recent advances in CI that encompass nature-inspired algorithms such as artificial neural networks, evolutionary algorithms, and swarm intelligence-based algorithms.

  8. Lyapunov exponents computation for hybrid neurons.

    Science.gov (United States)

    Bizzarri, Federico; Brambilla, Angelo; Gajani, Giancarlo Storti

    2013-10-01

Lyapunov exponents are a basic and powerful tool to characterise the long-term behaviour of dynamical systems. The computation of Lyapunov exponents for continuous-time dynamical systems is straightforward whenever they are ruled by vector fields that are sufficiently smooth to admit a variational model. Hybrid neurons do not belong to this wide class of systems, since they are intrinsically non-smooth owing to the impact, and sometimes switching, model used to describe the integrate-and-fire (I&F) mechanism. In this paper we show how a variational model can be defined also for this class of neurons by resorting to saltation matrices. This extension allows the computation of the Lyapunov exponent spectrum of hybrid neurons, and of networks made up of them, through a standard numerical approach, even in the case of neurons firing synchronously.
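
For reference, the saltation-matrix correction this construction relies on is standard in the nonsmooth-dynamics literature (quoted here from that literature, not from the paper itself). For a trajectory crossing a switching surface h(x) = 0 with vector field f^- before and f^+ after the event, and an identity jump map:

```latex
% Saltation matrix (standard form; h is the switching function, f^- the
% pre-event and f^+ the post-event vector field, identity jump map):
\[
  S \;=\; I \;+\; \frac{\left(f^{+}-f^{-}\right)\,\nabla h^{\top}}
                       {\nabla h^{\top}\, f^{-}}
  \qquad \text{evaluated at the crossing point.}
\]
% At each event the variational (fundamental) matrix is corrected:
\[
  \Phi(t^{+}) \;=\; S\,\Phi(t^{-}),
\]
% after which the Lyapunov spectrum is extracted by the usual QR /
% Gram-Schmidt reorthonormalization procedure.
```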

  9. Hybrid Parallel Computation of Integration in GRACE

    CERN Document Server

    Yuasa, Fukuko; Ishikawa, Tadashi; Kawabata, Setsuya; Perret-Gallix, Denis; Itakura, Kazuhiro; Hotta, Yukihiko; Okuda, Motoi

    2000-01-01

With the integrated software package GRACE, it is possible to generate Feynman diagrams, calculate the total cross section, and generate physics events automatically. We outline the hybrid method of parallel computation of the multi-dimensional integration of GRACE. We used MPI (Message Passing Interface) as the parallel library and, to improve performance, we embedded a dynamic load-balancing mechanism. The reduction in practical execution time was studied.

  10. Hybrid Nanoelectronics: Future of Computer Technology

    Institute of Scientific and Technical Information of China (English)

    Wei Wang; Ming Liu; Andrew Hsu

    2006-01-01

Nanotechnology may well prove to be the 21st century's new wave of scientific knowledge that transforms people's lives. Nanotechnology research activities are booming around the globe. This article reviews the recent progress made in nanoelectronics research in the US and China, and introduces several novel hybrid solutions specifically useful for future computer technology. These exciting new directions will lead to many future inventions, and will have a huge impact on research communities and industries.

  11. Airborne Cloud Computing Environment (ACCE)

    Science.gov (United States)

    Hardman, Sean; Freeborn, Dana; Crichton, Dan; Law, Emily; Kay-Im, Liz

    2011-01-01

Airborne Cloud Computing Environment (ACCE) is JPL's internal investment to improve the return on airborne missions, by improving the development performance of the data system and the return on the captured science data. The investment is to develop a common science data system capability for airborne instruments that encompasses the end-to-end lifecycle, covering planning, provisioning of data system capabilities, and support for scientific analysis, in order to improve the quality, cost effectiveness, and capabilities that enable new scientific discovery and research in earth observation.

  13. Hybrid cloud and cluster computing paradigms for life science applications.

    Science.gov (United States)

    Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey

    2010-12-21

Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing, especially for parallel data-intensive applications. However, they have limited applicability to some areas, such as data mining, because MapReduce performs poorly on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of Twister, an open-source iterative MapReduce system. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases, which are linked to MPI applications for the final stages of data analysis. Further, we have released the open-source Twister iterative MapReduce system and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce, and Twister in these different environments.
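
The iterative pattern that motivates Twister is easy to show in miniature. The sketch below is a plain-Python k-means loop written in map/reduce style (hypothetical driver code illustrating the pattern, not Twister's actual API):

```python
# Miniature of the iterative-MapReduce pattern Twister targets: map and
# reduce run repeatedly, with the reduced result fed back into the next
# iteration -- the loop that plain MapReduce handles poorly.
import random

points = [(random.random(), random.random()) for _ in range(1000)]
centroids = random.sample(points, 3)

def map_phase(point, centroids):
    """Emit (nearest-centroid index, point): the map step of one iteration."""
    nearest = min(range(len(centroids)),
                  key=lambda i: (point[0] - centroids[i][0]) ** 2
                              + (point[1] - centroids[i][1]) ** 2)
    return nearest, point

def reduce_phase(group):
    """Average the points assigned to one centroid: the reduce step."""
    xs, ys = zip(*group)
    return sum(xs) / len(xs), sum(ys) / len(ys)

for _ in range(10):  # the iteration wrapper around map/reduce
    groups = {}
    for p in points:
        key, value = map_phase(p, centroids)
        groups.setdefault(key, []).append(value)
    centroids = [reduce_phase(g) for g in groups.values()]  # broadcast back
print("final centroids:", [(round(x, 3), round(y, 3)) for x, y in centroids])
```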

  14. Accelerating Climate Simulations Through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for the connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors and two IBM QS22 Cell blades, connected with InfiniBand), allowing compute-intensive functions to be seamlessly offloaded to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.

  15. Evaluation of a Compact Hybrid Brain-Computer Interface System

    Directory of Open Access Journals (Sweden)

    Jaeyoung Shin

    2017-01-01

We realized a compact hybrid brain-computer interface (BCI) system by integrating a portable near-infrared spectroscopy (NIRS) device with an economical electroencephalography (EEG) system. The NIRS array was located on the subjects' forehead, covering the prefrontal area. The EEG electrodes were distributed over the frontal, motor/temporal, and parietal areas. The experimental paradigm involved a Stroop word-picture matching test in combination with mental arithmetic (MA) and baseline (BL) tasks, in which the subjects were asked to perform either MA or BL in response to congruent or incongruent conditions, respectively. We compared the classification accuracies of each of the modalities (NIRS or EEG) with that of the hybrid system. We showed that the hybrid system outperforms the unimodal EEG and NIRS systems by 6.2% and 2.5%, respectively. Since the proposed hybrid system is based on portable platforms, it is not confined to a laboratory environment and has the potential to be used in real-life situations, such as in neurorehabilitation.
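
Feature-level fusion of the two modalities can be illustrated in a few lines. The sketch below uses synthetic data and scikit-learn's LDA; the paper's actual features and classifier pipeline may differ:

```python
# Illustrative feature-level fusion for a hybrid EEG+NIRS classifier, on
# synthetic data (the paper's actual features and pipeline may differ).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 120
labels = rng.integers(0, 2, n_trials)  # mental arithmetic vs. baseline
# Weakly informative unimodal features buried in noise:
eeg = labels[:, None] * 0.8 + rng.normal(size=(n_trials, 32))
nirs = labels[:, None] * 0.6 + rng.normal(size=(n_trials, 16))
hybrid = np.hstack([eeg, nirs])  # simple feature concatenation

for name, X in [("EEG", eeg), ("NIRS", nirs), ("hybrid", hybrid)]:
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()
    print(f"{name:6s} mean CV accuracy: {acc:.3f}")
```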

  16. Resourceful Computing in Unstructured Environments

    Science.gov (United States)

    1991-07-31

  17. Autonomic Management of Application Workflows on Hybrid Computing Infrastructure

    Directory of Open Access Journals (Sweden)

    Hyunjoo Kim

    2011-01-01

In this paper, we present a programming and runtime framework that enables the autonomic management of complex application workflows on hybrid computing infrastructures. The framework is designed to address system and application heterogeneity and dynamics to ensure that application objectives and constraints are satisfied. The need for such autonomic system and application management is becoming critical as computing infrastructures become increasingly heterogeneous, integrating different classes of resources from high-end HPC systems to commodity clusters and clouds. For example, the framework presented in this paper can be used to provision the appropriate mix of resources based on application requirements and constraints. The framework also monitors the system/application state and adapts the application and/or resources to respond to changing requirements or environments. To demonstrate the operation of the framework and to evaluate its ability, we employ a workflow used to characterize an oil reservoir executing on a hybrid infrastructure composed of TeraGrid nodes and Amazon EC2 instances of various types. Specifically, we show how different application objectives, such as acceleration, conservation, and resilience, can be effectively achieved while satisfying deadline and budget constraints, using an appropriate mix of dynamically provisioned resources. Our evaluations also demonstrate that public clouds can be used to complement and reinforce the scheduling and usage of traditional high-performance computing infrastructure.
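
A toy version of the provisioning decision (choose a mix of already-allocated HPC nodes and paid cloud nodes that meets a deadline within a budget) is sketched below; all rates, costs, and names are made-up assumptions, not the framework's actual model:

```python
# Toy provisioning policy: enumerate node mixes and return the cheapest one
# that meets the deadline within budget. All numbers are assumptions.
from itertools import product

HPC_RATE, CLOUD_RATE = 4.0, 5.0   # work units per node-hour (assumed)
CLOUD_COST = 0.9                  # $ per cloud node-hour (HPC pre-allocated)

def plan(work, deadline_h, budget, max_nodes=16):
    feasible = []
    for hpc, cloud in product(range(max_nodes + 1), repeat=2):
        rate = hpc * HPC_RATE + cloud * CLOUD_RATE
        if rate == 0:
            continue
        hours = work / rate
        cost = cloud * CLOUD_COST * hours
        if hours <= deadline_h and cost <= budget:
            feasible.append((round(cost, 2), round(hours, 2), hpc, cloud))
    return min(feasible, default=None)  # cheapest (cost, hours, hpc, cloud)

# 800 units of work, 10-hour deadline, $40 budget: HPC alone is too slow,
# so the plan bursts onto a few cloud nodes.
print(plan(work=800.0, deadline_h=10.0, budget=40.0))
```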

  18. Computational approaches for urban environments

    NARCIS (Netherlands)

    Helbich, M; Jokar Arsanjani, J; Leitner, M

    2015-01-01

    This book aims to promote the synergistic usage of advanced computational methodologies in close relationship to geospatial information across cities of different scales. A rich collection of chapters subsumes current research frontiers originating from disciplines such as geography, urban planning,

  19. Andrew: CMU's New Computing Environment.

    Science.gov (United States)

    Zabowski, Susan

    1986-01-01

    Reviews the progress and problems associated with the development of Carnegie Mellon University's new computing and communications system, "Andrew." Describes the accomplishments and capacities of the system and provides examples of the programs developed for "Andrew." (ML)

  20. A survey of computational steering environments

    NARCIS (Netherlands)

    Mulder, J.D.; Wijk, J.J. van; Liere, R. van

    1998-01-01

    Computational steering is a powerful concept that allows scientists to interactively control a computational process during its execution. In this paper, a survey of computational steering environments for the on-line steering of ongoing scientific and engineering simulations is presented. These env

  1. Green Computing: Protect Our Environment from Computer and its Devices

    Directory of Open Access Journals (Sweden)

    Jamshed Siddiqui

    2013-12-01

Computers are a basic need for everyone, and everybody uses a computer for their own purposes, but few are aware of the injurious impact that computers and their devices have on our environment. The concept of green computing is about the environmentally responsible and eco-friendly use of computers and their resources. Besides the extensive sensitivity to ecological issues, energy, and costs, such interest also stems from economic needs, and the electrical requirements of IT around the world show a continuously growing trend. In this paper, we discuss how we can protect our environment from the injurious impact of computers through a comparative study of computing devices and eco-friendly devices. The comparative study suggests that we can save a generous amount of power and make the environment greener, while saving costs.

  2. Modelling of data uncertainties on hybrid computers

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Anke (ed.)

    2016-06-15

The codes d³f and r³t are well established for modelling density-driven flow and nuclide transport in the far field of repositories for hazardous material in deep geological formations. They are applicable in porous media as well as in fractured rock or mudstone, and for modelling salt and heat transport as well as a free groundwater surface. Development of the basic framework of d³f and r³t began more than 20 years ago. Since that time, significant advancements have taken place in the requirements for safety assessment as well as in computer hardware development. The period of safety assessment for a repository of high-level radioactive waste was extended to 1 million years, and the complexity of the models is steadily growing. Concurrently, the demands on accuracy increase. Additionally, model and parameter uncertainties become more and more important for an increased understanding of prediction reliability. All this leads to a growing demand for computational power that requires a considerable software speed-up. An effective way to achieve this is the use of modern, hybrid computer architectures, which basically requires the set-up of new data structures and a corresponding code revision, but offers a potential speed-up by several orders of magnitude. The original codes d³f and r³t were applications of the software platform UG /BAS 94/, whose development began in the early nineteen-nineties. However, UG has recently been advanced to the C++ based, substantially revised version UG4 /VOG 13/. To benefit also in the future from state-of-the-art numerical algorithms and to use hybrid computer architectures, the codes d³f and r³t were transferred to this new code platform. Making use of the fact that coupling between different sets of equations is natively supported in UG4, d³f and r³t were combined into one conjoint code d³f++. A direct estimation of uncertainties for complex groundwater flow models with the

  3. Emotions in Pervasive Computing Environments

    Directory of Open Access Journals (Sweden)

    Nevin Vunka Jungum

    2009-11-01

The ability of an intelligent environment to connect and adapt to the real internal states, needs, and behaviors of humans can be made possible by considering users' emotional states as contextual parameters. In this paper, we build on enactive psychology and investigate the incorporation of emotions in pervasive systems. We define emotions, and discuss the coding of emotional human markers by smart environments. In addition, we compare some existing works and identify how emotions can be detected and modeled by a pervasive system in order to enhance its service and response to users. Finally, we closely analyze one XML-based language for representing and annotating emotions, known as EARL, and raise two important issues which pertain to emotion representation and modeling in XML-based languages.

  5. Embedding Moodle into Ubiquitous Computing Environments

    NARCIS (Netherlands)

    Glahn, Christian; Specht, Marcus

    2010-01-01

    Glahn, C., & Specht, M. (2010). Embedding Moodle into Ubiquitous Computing Environments. In M. Montebello, et al. (Eds.), 9th World Conference on Mobile and Contextual Learning (MLearn2010) (pp. 100-107). October, 19-22, 2010, Valletta, Malta.

  6. Computational origami environment on the web

    Institute of Scientific and Technical Information of China (English)

    Asem KASEM; Tetsuo IDA

    2008-01-01

We present a computing environment for origami on the web. The environment consists of the computational origami engine Eos for origami construction, visualization, and geometrical reasoning, WEBEOS for providing a web interface to the functionalities of Eos, and the web service system SCORUM for symbolic computing web services. WEBEOS is developed using Web 2.0 technologies, and provides a graphical interactive web interface for origami construction and proving. In SCORUM, we are preparing web services for a wide range of symbolic computing systems, and are using these services in our origami environment. We explain the functionalities of this environment, and discuss its architectural and technological features.

  7. Intelligent computing for sustainable energy and environment

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kang [Queen's Univ. Belfast (United Kingdom). School of Electronics, Electrical Engineering and Computer Science; Li, Shaoyuan; Li, Dewei [Shanghai Jiao Tong Univ., Shanghai (China). Dept. of Automation; Niu, Qun (eds.) [Shanghai Univ. (China). School of Mechatronic Engineering and Automation

    2013-07-01

Fast-track conference proceedings. State-of-the-art research. Up-to-date results. This book constitutes the refereed proceedings of the Second International Conference on Intelligent Computing for Sustainable Energy and Environment, ICSEE 2012, held in Shanghai, China, in September 2012. The 60 full papers presented were carefully reviewed and selected from numerous submissions, and present theories and methodologies as well as the emerging applications of intelligent computing in sustainable energy and environment.

  8. Security Management Model in Cloud Computing Environment

    OpenAIRE

    2016-01-01

In the cloud computing environment, the number of cloud virtual machines (VMs) grows ever larger, and virtual machine security and management face giant challenges. In order to address the security issues of the cloud computing virtualization environment, this paper presents an efficient and dynamic VM security management model based on deployment, state migration, and scheduling, and studies its virtual machine security architecture, based on AHP (Analytic Hierarchy Process) virtual machine de...

  9. Heterogeneity in Health Care Computing Environments

    OpenAIRE

    Sengupta, Soumitra

    1989-01-01

    This paper discusses issues of heterogeneity in computer systems, networks, databases, and presentation techniques, and the problems it creates in developing integrated medical information systems. The need for institutional, comprehensive goals are emphasized. Using the Columbia-Presbyterian Medical Center's computing environment as the case study, various steps to solve the heterogeneity problem are presented.

  10. Hybrid Computational Model for High-Altitude Aeroassist Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — A hybrid continuum/noncontinuum computational model will be developed for analyzing the aerodynamics and heating on aeroassist vehicles. Unique features of this...

  11. Towards molecular computers that operate in a biological environment

    Science.gov (United States)

    Kahan, Maya; Gil, Binyamin; Adar, Rivka; Shapiro, Ehud

    2008-07-01

Even though electronic computers are the only computer species we are accustomed to, the mathematical notion of a programmable computer has nothing to do with electronics. In fact, Alan Turing's notional computer [A.M. Turing, On computable numbers, with an application to the Entscheidungsproblem, Proc. Lond. Math. Soc. 42 (1936) 230-265], which marked in 1936 the birth of modern computer science and still stands at its heart, has greater similarity to natural biomolecular machines such as the ribosome and polymerases than to electronic computers. This similarity led to the investigation of DNA-based computers [C.H. Bennett, The thermodynamics of computation - Review, Int. J. Theoret. Phys. 21 (1982) 905-940; L.M. Adleman, Molecular computation of solutions to combinatorial problems, Science 266 (1994) 1021-1024]. Although parallelism, sequence-specific hybridization and storage capacity, inherent to DNA and RNA molecules, can be exploited in molecular computers to solve complex mathematical problems [Q. Ouyang, et al., DNA solution of the maximal clique problem, Science 278 (1997) 446-449; R.J. Lipton, DNA solution of hard computational problems, Science 268 (1995) 542-545; R.S. Braich, et al., Solution of a 20-variable 3-SAT problem on a DNA computer, Science 296 (2002) 499-502; Q. Liu, et al., DNA computing on surfaces, Nature 403 (2000) 175-179; D. Faulhammer, et al., Molecular computation: RNA solutions to chess problems, Proc. Natl. Acad. Sci. USA 97 (2000) 1385-1389; C. Mao, et al., Logical computation using algorithmic self-assembly of DNA triple-crossover molecules, Nature 407 (2000) 493-496; A.J. Ruben, et al., The past, present and future of molecular computing, Nat. Rev. Mol. Cell. Biol. 1 (2000) 69-72], we believe that the more significant potential of molecular computers lies in their ability to interact directly with a biochemical environment such as the bloodstream and living cells. From this perspective, even simple molecular computations may have

  12. Service Composition in a Cloud Computing Environment

    OpenAIRE

    Abrha, Abrha; Heggen Skogen, Endre

    2011-01-01

    Today, composite services can be constructed by combining and coordinating a set of independent services in a process referred to as a service composition. Due to their dependencies on external components, composite services are especially dependent on a reliable hosting environment. The next generation service delivery platform, cloud computing, is emerging as a hosting environment that can support these service compositions better than a traditional hosting environment. This is mainly due t...

  13. Digital Potentiometer for Hybrid Computer EAI 680-PDP-8/I

    DEFF Research Database (Denmark)

    Højberg, Kristian Søe; Olsen, Jens V.

    1974-01-01

In this article a description is given of a 12-bit digital potentiometer for hybrid computer application. The system is composed of standard building blocks. Emphasis is laid on the development problems met and the problem solutions developed.

  14. Remarks on parallel computations in MATLAB environment

    Science.gov (United States)

    Opalska, Katarzyna; Opalski, Leszek

    2013-10-01

The paper attempts to summarize the author's investigation of the parallel computation capability of the MATLAB environment in solving large systems of ordinary differential equations (ODEs). Two MATLAB versions were tested, along with two parallelization techniques: one using multiple processor cores, the other CUDA-compatible graphics processing units (GPUs). A set of parameterized test problems was specially designed to expose the different capabilities and limitations of the variants of the parallel computation environment tested. The presented results clearly illustrate the superiority of the newer MATLAB version and the elapsed-time advantage of GPU-parallelized computations for large-dimensionality problems over multiple processor cores (with the speed-up factor strongly dependent on the problem structure).

  15. Genotype x environment interaction on experimental hybrids of chili pepper.

    Science.gov (United States)

    Cabral, N S S; Medeiros, A M; Neves, L G; Sudré, C P; Pimenta, S; Coelho, V J; Serafim, M E; Rodrigues, R

    2017-04-20

In Brazil, the cultivation of hybrid plants comprises nearly 40% of the area grown with vegetables. For Capsicum, hybrids of bell and chili peppers already account for more than 50% and 25%, respectively, of all commercialized seeds. This study aimed to evaluate new pepper hybrids in two environments: Cáceres, MT, and Campos dos Goytacazes, RJ, Brazil. Nine experimental hybrids of C. baccatum var. pendulum were tested, and trials were performed in a randomized block design with three replications and eight plants per plot. In each environment, plants were assessed for canopy diameter, plant height, number of fruits per plant, mean fruit mass per plant, fruit length and diameter, pulp thickness, and content of soluble solids. Seven of the eight traits differed significantly due to environmental variation. Furthermore, the genotype-by-environment interaction was highly significant for number of fruits per plant, mean fruit mass per plant, fruit length, and fruit diameter. Choosing a hybrid to be grown in one of the studied locations must be done in accordance with the sought characteristics, since there is a complex interaction for some of the studied traits.

  16. Hybrid system for computing reachable workspaces for redundant manipulators

    Science.gov (United States)

    Alameldin, Tarek K.; Sobh, Tarek M.

    1991-03-01

An efficient computation of 3D workspaces for redundant manipulators is based on a "hybrid" algorithm combining direct kinematics and screw theory. Direct kinematics enjoys low computational cost but needs edge detection algorithms when workspace boundaries are required. Screw theory has exponential computational cost per workspace point but does not need edge detection. Screw theory allows computing workspace points in prespecified directions, while direct kinematics does not. Applications of the algorithm are discussed.
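
The direct-kinematics side of the trade-off is easy to sketch: forward kinematics makes each workspace sample cheap, but the result is a point cloud whose boundary still needs edge detection. A minimal illustration for a planar three-link (redundant) arm, with made-up link lengths:

```python
# Direct-kinematics sampling of a planar 3-link redundant arm: each sample
# is a cheap forward-kinematics evaluation, but the workspace boundary must
# still be extracted from the resulting point cloud. (Illustration only.)
import math
import random

LINKS = [1.0, 0.8, 0.5]  # link lengths (assumed)

def forward_kinematics(angles):
    x = y = total = 0.0
    for length, theta in zip(LINKS, angles):
        total += theta
        x += length * math.cos(total)
        y += length * math.sin(total)
    return x, y

samples = [forward_kinematics([random.uniform(-math.pi, math.pi)
                               for _ in LINKS])
           for _ in range(10_000)]
reach = max(math.hypot(x, y) for x, y in samples)
print(f"{len(samples)} workspace points sampled; max reach ~ {reach:.2f} "
      f"(analytic limit {sum(LINKS):.1f})")
```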

  17. Generalised Computability and Applications to Hybrid Systems

    DEFF Research Database (Denmark)

    Korovina, Margarita V.; Kudinov, Oleg V.

    2001-01-01

    We investigate the concept of generalised computability of operators and functionals defined on the set of continuous functions, firstly introduced in [9]. By working in the reals, with equality and without equality, we study properties of generalised computable operators and functionals. Also we...

  18. Reducing the Digital Divide among Children Who Received Desktop or Hybrid Computers for the Home

    Directory of Open Access Journals (Sweden)

    Gila Cohen Zilka

    2016-06-01

    Full Text Available Researchers and policy makers have been exploring ways to reduce the digital divide. Parameters commonly used to examine the digital divide worldwide, as well as in this study, are: (a) the digital divide in the accessibility and mobility of the ICT infrastructure and of the content infrastructure (e.g., sites used in school); and (b) the digital divide in literacy skills. In the present study we examined how effective receiving a desktop or hybrid computer for the home was in reducing the digital divide among children of low socio-economic status aged 8-12 from various localities across Israel. The sample consisted of 1,248 respondents assessed in two measurements. As part of the mixed-method study, 128 children were also interviewed. Findings indicate that after the children received desktop or hybrid computers, changes occurred in their frequency of access, mobility, and computer literacy. Differences were found between the groups: hybrid computers reduce disparities and promote work with the computer and surfing the Internet more than desktop computers do. Narrowing the digital divide for this age group has many implications for the acquisition of skills and study habits and, consequently, for the realization of individual potential. The children spoke about self-improvement as a result of exposure to the digital environment, and about a sense of empowerment and an improved position in the social fabric. Many children expressed a desire to continue their education and expand their knowledge of computer applications, software, games, and more. Therefore, if there is no computer in the home and it is necessary to decide between a desktop and a hybrid computer, a hybrid computer is preferable.

  19. Optical-digital hybrid image search system in cloud environment

    Science.gov (United States)

    Ikeda, Kanami; Kodate, Kashiko; Watanabe, Eriko

    2016-09-01

    To improve the versatility and usability of optical correlators, we developed an optical-digital hybrid image search system consisting of digital servers and an optical correlator that can be used to perform image searches in the cloud environment via a web browser. This hybrid system employs a simple method to obtain correlation signals and has a distributed network design. The correlation signals are acquired by using an encoder timing signal generated by a rotating disk, and the distributed network design facilitates the replacement and combination of the digital correlation server and the optical correlator.

  20. Resource management in mobile computing environments

    CERN Document Server

    Mavromoustakis, Constandinos X; Mastorakis, George

    2014-01-01

    This book reports the latest advances on the design and development of mobile computing systems, describing their applications in the context of modeling, analysis and efficient resource management. It explores the challenges on mobile computing and resource management paradigms, including research efforts and approaches recently carried out in response to them to address future open-ended issues. The book includes 26 rigorously refereed chapters written by leading international researchers, providing the readers with technical and scientific information about various aspects of mobile computing, from basic concepts to advanced findings, reporting the state-of-the-art on resource management in such environments. It is mainly intended as a reference guide for researchers and practitioners involved in the design, development and applications of mobile computing systems, seeking solutions to related issues. It also represents a useful textbook for advanced undergraduate and graduate courses, addressing special t...

  1. Computer network environment planning and analysis

    Science.gov (United States)

    Dalphin, John F.

    1989-01-01

    The GSFC Computer Network Environment provides a broadband RF cable between campus buildings and ethernet spines in buildings for the interlinking of Local Area Networks (LANs). This system provides terminal and computer linkage among host and user systems, thereby providing E-mail services, file exchange capability, and certain distributed computing opportunities. The Environment is designed to be transparent and supports multiple protocols. Networking at Goddard has a short history and has been under coordinated control of a Network Steering Committee for slightly more than two years; network growth has been rapid, with more than 1500 nodes currently addressed and greater expansion expected. A new RF cable system with a different topology is being installed during summer 1989; consideration of a fiber optics system for the future will begin soon. Summer study was directed toward Network Steering Committee operation and planning plus consideration of Center Network Environment analysis and modeling. Biweekly Steering Committee meetings were attended to learn the background of the network and the concerns of those managing it. Suggestions for historical data gathering have been made to support future planning and modeling. Data Systems Dynamic Simulator, a simulation package developed at NASA and maintained at GSFC, was studied as a possible modeling tool for the network environment. A modeling concept based on a hierarchical model was hypothesized for further development. Such a model would allow input of newly updated parameters and would provide an estimation of the behavior of the network.

  2. Novel hybrid adaptive controller for manipulation in complex perturbation environments.

    Directory of Open Access Journals (Sweden)

    Alex M C Smith

    Full Text Available In this paper we present a hybrid control scheme, combining the advantages of task-space and joint-space control. The controller is based on a human-like adaptive design, which minimises both control effort and tracking error. Our novel hybrid adaptive controller has been tested in extensive simulations, in a scenario where a Baxter robot manipulator is affected by external disturbances in the form of interaction with the environment and tool-like end-effector perturbations. The results demonstrated improved performance in the hybrid controller over both of its component parts. In addition, we introduce a novel method for online adaptation of learning parameters, using the fuzzy control formalism to utilise expert knowledge from the experimenter. This mechanism of meta-learning induces further improvement in performance and avoids the need for tuning through trial testing.

  3. Novel Hybrid Adaptive Controller for Manipulation in Complex Perturbation Environments

    Science.gov (United States)

    Smith, Alex M. C.; Yang, Chenguang; Ma, Hongbin; Culverhouse, Phil; Cangelosi, Angelo; Burdet, Etienne

    2015-01-01

    In this paper we present a hybrid control scheme, combining the advantages of task-space and joint-space control. The controller is based on a human-like adaptive design, which minimises both control effort and tracking error. Our novel hybrid adaptive controller has been tested in extensive simulations, in a scenario where a Baxter robot manipulator is affected by external disturbances in the form of interaction with the environment and tool-like end-effector perturbations. The results demonstrated improved performance in the hybrid controller over both of its component parts. In addition, we introduce a novel method for online adaptation of learning parameters, using the fuzzy control formalism to utilise expert knowledge from the experimenter. This mechanism of meta-learning induces further improvement in performance and avoids the need for tuning through trial testing. PMID:26029916

  4. InSAR Scientific Computing Environment

    Science.gov (United States)

    Gurrola, E. M.; Rosen, P. A.; Sacco, G.; Zebker, H. A.; Simons, M.; Sandwell, D. T.

    2010-12-01

    The InSAR Scientific Computing Environment (ISCE) is a software development effort in its second year within the NASA Advanced Information Systems and Technology program. The ISCE will provide a new computing environment for geodetic image processing for InSAR sensors that will enable scientists to reduce measurements directly from radar satellites and aircraft to new geophysical products without first requiring them to develop detailed expertise in radar processing methods. The environment can serve as the core of a centralized processing center to bring Level-0 raw radar data up to Level-3 data products, but is adaptable to alternative processing approaches for science users interested in new and different ways to exploit mission data. The NRC Decadal Survey-recommended DESDynI mission will deliver data of unprecedented quantity and quality, making possible global-scale studies in climate research, natural hazards, and Earth's ecosystem. The InSAR Scientific Computing Environment is planned to become a key element in processing DESDynI data into higher level data products and it is expected to enable a new class of analyses that take greater advantage of the long time and large spatial scales of these new data, than current approaches. At the core of ISCE is both legacy processing software from the JPL/Caltech ROI_PAC repeat-pass interferometry package as well as a new InSAR processing package containing more efficient and more accurate processing algorithms being developed at Stanford for this project that is based on experience gained in developing processors for missions such as SRTM and UAVSAR. Around the core InSAR processing programs we are building object-oriented wrappers to enable their incorporation into a more modern, flexible, extensible software package that is informed by modern programming methods, including rigorous componentization of processing codes, abstraction and generalization of data models, and a robust, intuitive user interface with

  5. Cost Optimization Using Hybrid Evolutionary Algorithm in Cloud Computing

    Directory of Open Access Journals (Sweden)

    B. Kavitha

    2015-07-01

    Full Text Available The main aim of this research is to design a hybrid evolutionary algorithm for minimizing the multiple problems of dynamic resource allocation in cloud computing. Resource allocation is one of the big problems in distributed systems when the client wants to decrease the cost of the resources allocated to their task. In order to assign resources to the task, the client must consider both the monetary cost and the computational cost, and allocating resources while balancing these two costs is difficult. To solve this problem, in this study we split the client's main task into many subtasks and allocate resources to each subtask instead of selecting a single resource for the main task. The allocation of resources to each subtask is accomplished through our proposed hybrid optimization algorithm. Here, we hybridize Binary Particle Swarm Optimization (BPSO) and the Binary Cuckoo Search algorithm (BCSO), considering monetary cost and computational cost, which helps to minimize the cost to the client. Finally, experimentation is carried out and our proposed hybrid algorithm is compared with the BPSO and BCSO algorithms. We also demonstrate the efficiency of our proposed hybrid optimization algorithm.
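
    The following sketch illustrates, in Python, the flavor of the hybrid BPSO/BCSO search described above. The encoding is a toy assumption: each bit sends one subtask to a cheap-but-slow resource (0) or a fast-but-expensive one (1), and fitness is a weighted sum of monetary and computational cost. The paper's actual formulation, operators and parameters may well differ.

        # Toy hybrid of binary PSO with a cuckoo-style random-flip step.
        # Encoding, cost model and parameters are invented for illustration.
        import numpy as np

        rng = np.random.default_rng(1)
        N_TASKS, N_PARTICLES, ITERS = 20, 30, 100
        price = rng.uniform(1, 3, (N_TASKS, 2)); price[:, 1] *= 3      # fast costs more
        runtime = rng.uniform(1, 3, (N_TASKS, 2)); runtime[:, 0] *= 3  # slow takes longer

        def fitness(x):
            chosen = x.astype(int)
            idx = np.arange(N_TASKS)
            return 0.5 * price[idx, chosen].sum() + 0.5 * runtime[idx, chosen].sum()

        X = rng.integers(0, 2, (N_PARTICLES, N_TASKS)).astype(float)
        V = np.zeros_like(X)
        pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
        gbest = pbest[pbest_f.argmin()].copy()

        for _ in range(ITERS):
            # BPSO step: velocity update, then a sigmoid transfer to bits
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
            X = (rng.random(X.shape) < 1.0 / (1.0 + np.exp(-V))).astype(float)
            # Cuckoo-style step: rare random bit flips stand in for Levy flights
            flip = rng.random(X.shape) < 0.02
            X[flip] = 1.0 - X[flip]
            f = np.array([fitness(x) for x in X])
            better = f < pbest_f
            pbest[better], pbest_f[better] = X[better], f[better]
            gbest = pbest[pbest_f.argmin()].copy()

        print("best combined cost:", round(fitness(gbest), 3))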

  6. Carbon nanotube reinforced hybrid composites: Computational modeling of environmental fatigue and usability for wind blades

    DEFF Research Database (Denmark)

    Dai, Gaoming; Mishnaevsky, Leon

    2015-01-01

    The potential of advanced carbon/glass hybrid reinforced composites with secondary carbon nanotube reinforcement for wind energy applications is investigated here with the use of computational experiments. Fatigue behavior of hybrid as well as glass and carbon fiber reinforced composites...... with the secondary CNT reinforcements (especially, aligned tubes) present superior fatigue performance compared with those without reinforcements, also under combined environmental and cyclic mechanical loading. This effect is stronger for carbon composites than for hybrid and glass composites....... automatically using the Python based code. 3D computational studies of environment and fatigue analyses of multiscale composites with secondary nano-scale reinforcement in different material phases and different CNT arrangements are carried out systematically in this paper. It was demonstrated that composites

  7. Fitting method for hybrid temperature control in smart home environment

    OpenAIRE

    CHENG, Zhuo; TAN, Yasuo; Lim, Azman Osman

    2014-01-01

    The design of the control system is crucial for improving the comfort level of the home environment. Cyber-Physical Systems (CPSs) offer numerous opportunities to design highly efficient control systems. In this paper, we focus on the design of temperature control systems. Using the idea of CPS, a hybrid temperature control (HTC) system is proposed. It combines supervisory and proportional-integral-derivative (PID) controllers. Through an energy efficient temperature control (EETC) algorithm, HT...

  8. Load flow computations in hybrid transmission - distributed power systems

    NARCIS (Netherlands)

    Wobbes, E.D.; Lahaye, D.J.P.

    2013-01-01

    We interconnect transmission and distribution power systems and perform load flow computations in the hybrid network. In the largest example we managed to build, fifty copies of a distribution network consisting of fifteen nodes are connected to the UCTE study model, resulting in a system consisting

  9. A Hybrid Data Association Approach for SLAM in Dynamic Environments

    Directory of Open Access Journals (Sweden)

    Baifan Chen

    2013-02-01

    Full Text Available Data association is critical for Simultaneous Localization and Mapping (SLAM). In a real environment, dynamic obstacles will lead to false data associations which compromise SLAM results. This paper presents a simple and effective data association method for SLAM in dynamic environments. First, a hybrid data association approach based on local maps, combining the ICNN and JCBB algorithms, is applied. Second, we define a criterion for identifying outlier features in the association hypotheses, and static and dynamic features are then detected according to their spatial and temporal differences. Finally, the association hypotheses are updated by filtering out the dynamic features. Simulations and experimental results show that this method is feasible.
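
    As a hedged sketch of the final filtering step (the ICNN and JCBB association stages themselves are not reproduced here), the snippet below classifies matched features as dynamic when their position in a common frame shifts between scans by more than a gate, and drops them before association. The gate value and toy coordinates are assumptions.

        # Spatial-temporal test: features that move between scans are dynamic.
        import numpy as np

        GATE = 0.3  # metres; assumed threshold for a "static" feature

        def split_static_dynamic(prev_pos, curr_pos):
            """prev_pos, curr_pos: (N, 2) matched feature positions."""
            shift = np.linalg.norm(curr_pos - prev_pos, axis=1)
            static = shift <= GATE
            return curr_pos[static], curr_pos[~static]

        prev = np.array([[1.0, 2.0], [3.0, 1.0], [0.5, 4.0]])
        curr = np.array([[1.02, 2.01], [3.9, 1.6], [0.5, 4.03]])  # feature 1 moved
        static_f, dynamic_f = split_static_dynamic(prev, curr)
        print(len(static_f), "static,", len(dynamic_f), "dynamic")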

  10. Multi-site production planning in hybrid make-to-stock/make-to-order production environment

    Science.gov (United States)

    Rafiei, Hamed; Rabbani, Masoud; Kokabi, Reza

    2014-06-01

    Today's competitive environment has forced practitioners and researchers to pay great attention to issues enhancing both production and marketing competitiveness. Firms must attend to production-side activities while customer requirements sit on the other side of the competition. In this regard, hybrid make-to-stock (MTS)/make-to-order (MTO) production systems have shown outstanding results. This paper addresses multi-site production planning of a hybrid manufacturing firm for the first time in the hybrid systems' body of literature. A network of suppliers, manufacturers and customers is considered, for which a mixed-integer mathematical model is proposed. The objective function of the proposed mathematical model seeks to maximize the profitability of the manufacturing firm. Because of the computational complexity of the developed mathematical model, a genetic algorithm is developed, upon which numerical experiments are reported in order to show the validity and applicability of the proposed model.

  11. Mobile Virtual Environments in Pervasive Computing

    Science.gov (United States)

    Lazem, Shaimaa; Abdel-Hamid, Ayman; Gračanin, Denis; Adams, Kevin P.

    Recently, human computer interaction has shifted from traditional desktop computing to the pervasive computing paradigm where users are engaged with everywhere and anytime computing devices. Mobile virtual environments (MVEs) are an emerging research area that studies the deployment of virtual reality applications on mobile devices. MVEs present additional challenges to application developers due to the restricted resources of the mobile devices, in addition to issues that are specific to wireless computing, such as limited bandwidth, high error rate and handoff intervals. Moreover, adaptive resource allocation is a key issue in MVEs where user interactions affect system resources, which, in turn, affects the user’s experience. Such interplay between the user and the system can be modelled using game theory. This chapter presents MVEs as a real-time interactive distributed system, and investigates the challenges in designing and developing a remote rendering prefetching application for mobile devices. Furthermore, we introduce game theory as a tool for modelling decision-making in MVEs by describing a game between the remote rendering server and the mobile client.
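
    The chapter's closing point, that the server-client interplay can be cast as a game, can be illustrated with a deliberately small sketch: a 2x2 game between a rendering server (low/high quality) and a mobile client (no-prefetch/prefetch), with invented payoffs, searched for pure-strategy Nash equilibria.

        # Brute-force pure-strategy Nash search for a 2x2 server/client game.
        # Payoff values are invented purely for illustration.
        import numpy as np

        server_u = np.array([[3, 1],    # rows: server low/high quality
                             [2, 4]])   # cols: client no-prefetch/prefetch
        client_u = np.array([[2, 3],
                             [1, 4]])

        for i in range(2):
            for j in range(2):
                best_server = server_u[i, j] >= server_u[1 - i, j]
                best_client = client_u[i, j] >= client_u[i, 1 - j]
                if best_server and best_client:
                    print(f"pure Nash equilibrium at (server={i}, client={j})")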

  12. A Hybrid Immigrants Scheme for Genetic Algorithms in Dynamic Environments

    Institute of Scientific and Technical Information of China (English)

    Shengxiang Yang; Renato Tinós

    2007-01-01

    Dynamic optimization problems are a class of optimization problems that involve changes over time. They pose a serious challenge to traditional optimization methods as well as conventional genetic algorithms, since the goal is no longer to search for the optimal solution(s) of a fixed problem but to track the moving optimum over time. Dynamic optimization problems have attracted a growing interest from the genetic algorithm community in recent years. Several approaches have been developed to enhance the performance of genetic algorithms in dynamic environments. One approach is to maintain the diversity of the population via random immigrants. This paper proposes a hybrid immigrants scheme that combines the concepts of elitism, dualism and random immigrants for genetic algorithms to address dynamic optimization problems. In this hybrid scheme, the best individual, i.e., the elite, from the previous generation and its dual individual are retrieved as the bases to create immigrants via a traditional mutation scheme. These elitism-based and dualism-based immigrants, together with some random immigrants, are inserted into the current population, replacing the worst individuals. These three kinds of immigrants aim to address environmental changes of slight, medium and significant degrees, respectively, and hence efficiently adapt genetic algorithms to dynamic environments that are subject to different severities of change. Based on a series of systematically constructed dynamic test problems, experiments are carried out to investigate the performance of genetic algorithms with the hybrid immigrants scheme and the traditional random immigrants scheme. Experimental results validate the efficiency of the proposed hybrid immigrants scheme for improving the performance of genetic algorithms in dynamic environments.
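
    A compact sketch of the scheme as described is given below: each generation, mutants of the previous elite, mutants of its dual (bitwise complement) and purely random individuals replace the worst members of the population. The fitness function (a OneMax against a changeable target), rates and sizes are assumptions, and the base GA's selection and crossover are omitted for brevity.

        # Hybrid immigrants for a GA in a dynamic environment (toy version).
        import numpy as np

        rng = np.random.default_rng(2)
        POP, LEN = 60, 40
        N_IMM = int(POP * 0.2) // 3          # immigrants per type
        target = rng.integers(0, 2, LEN)     # re-randomize to "change" the environment

        def fitness(pop):
            return (pop == target).sum(axis=1)

        def mutate(base, n, rate=0.05):
            clones = np.tile(base, (n, 1))
            flips = rng.random(clones.shape) < rate
            clones[flips] = 1 - clones[flips]
            return clones

        pop = rng.integers(0, 2, (POP, LEN))
        for _ in range(50):
            f = fitness(pop)
            elite = pop[f.argmax()]
            immigrants = np.vstack([
                mutate(elite, N_IMM),              # elitism-based: small shifts
                mutate(1 - elite, N_IMM),          # dualism-based: drastic reversals
                rng.integers(0, 2, (N_IMM, LEN)),  # random: general diversity
            ])
            worst = np.argsort(f)[: len(immigrants)]
            pop[worst] = immigrants                # selection/crossover omitted

        print("best fitness:", fitness(pop).max(), "of", LEN)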

  13. Computer simulation of spacecraft/environment interaction.

    Science.gov (United States)

    Krupnikov, K K; Makletsov, A A; Mileev, V N; Novikov, L S; Sinolits, V V

    1999-10-01

    This report presents some examples of a computer simulation of spacecraft interaction with the space environment. We analysed a set of data on electron and ion fluxes measured in 1991-1994 on the geostationary satellite GORIZONT-35. The influence of spacecraft eclipse and device eclipse by the solar-cell panel on spacecraft charging was investigated. A simple method was developed for an estimation of spacecraft potentials in LEO. Effects of various particle flux impacts and spacecraft orientation are discussed. A computer engineering model for a calculation of space radiation is presented. This model is used as a client/server model with a WWW interface, including spacecraft model description and results representation based on the virtual reality markup language.

  14. Computer simulation of spacecraft/environment interaction

    CERN Document Server

    Krupnikov, K K; Mileev, V N; Novikov, L S; Sinolits, V V

    1999-01-01

    This report presents some examples of a computer simulation of spacecraft interaction with the space environment. We analysed a set of data on electron and ion fluxes measured in 1991-1994 on the geostationary satellite GORIZONT-35. The influence of spacecraft eclipse and device eclipse by the solar-cell panel on spacecraft charging was investigated. A simple method was developed for an estimation of spacecraft potentials in LEO. Effects of various particle flux impacts and spacecraft orientation are discussed. A computer engineering model for a calculation of space radiation is presented. This model is used as a client/server model with a WWW interface, including spacecraft model description and results representation based on the virtual reality markup language.

  15. Security Risk Scoring Incorporating Computers' Environment

    Directory of Open Access Journals (Sweden)

    Eli Weintraub

    2016-04-01

    Full Text Available A framework for a Continuous Monitoring System (CMS) is presented, having new, improved capabilities. The system uses the actual real-time configuration of the system and its environment, characterized by a Configuration Management Data Base (CMDB), which includes detailed information on organizational database contents and security and privacy specifications. The Common Vulnerability Scoring System (CVSS) algorithm produces risk scores incorporating information from the CMDB. By using the real, updated environmental characteristics, the system achieves accurate scores compared to existing practices. The framework presentation includes the system's design and an illustration of the scoring computations.
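
    The abstract does not give the exact formulas, so the sketch below is only illustrative of the idea, not the real CVSS environmental computation: a base vulnerability score is reweighted by per-asset confidentiality/integrity/availability requirements pulled from a hypothetical CMDB record.

        # Illustrative only -- not the actual CVSS environmental formulas.
        CMDB = {  # hypothetical configuration-management entries
            "db-server-01": {"confidentiality": 1.5, "integrity": 1.5, "availability": 1.0},
            "kiosk-17":     {"confidentiality": 0.5, "integrity": 0.8, "availability": 1.2},
        }

        def environmental_score(base, impact_split, host):
            """Reweight a base score by the host's C/I/A requirements."""
            reqs = CMDB[host]
            weight = (sum(impact_split[k] * reqs[k] for k in reqs)
                      / sum(impact_split.values()))
            return round(min(10.0, base * weight), 1)

        # A vulnerability whose impact is mostly on confidentiality:
        split = {"confidentiality": 0.7, "integrity": 0.2, "availability": 0.1}
        print(environmental_score(7.5, split, "db-server-01"))  # stricter asset -> 10.0
        print(environmental_score(7.5, split, "kiosk-17"))      # laxer asset   -> 4.7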

  16. Human-Computer Interaction in Smart Environments

    Directory of Open Access Journals (Sweden)

    Gianluca Paravati

    2015-08-01

    Full Text Available Here, we provide an overview of the content of the Special Issue on “Human-computer interaction in smart environments”. The aim of this Special Issue is to highlight technologies and solutions encompassing the use of mass-market sensors in current and emerging applications for interacting with Smart Environments. Selected papers address this topic by analyzing different interaction modalities, including hand/body gestures, face recognition, gaze/eye tracking, biosignal analysis, speech and activity recognition, and related issues.

  17. Hybrid Algorithm for Optimal Load Sharing in Grid Computing

    Directory of Open Access Journals (Sweden)

    A. Krishnan

    2012-01-01

    Full Text Available Problem statement: Grid computing is a fast-growing industry which shares the resources of an organization in an effective manner. Resource sharing requires a well-optimized algorithmic structure; otherwise the waiting time and response time increase and resource utilization is reduced. Approach: In order to avoid such reductions in the performance of the grid system, an optimal resource-sharing algorithm is required. Recently, many load-sharing techniques have been proposed; they provide feasibility, but many critical issues are still present in these algorithms. Results: In this study a hybrid algorithm for the optimization of load sharing is proposed. The hybrid algorithm contains two components: a Hash Table (HT) and a Distributed Hash Table (DHT). Conclusion: The results of the proposed study show that the hybrid algorithm optimizes load sharing better than existing systems.
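
    The record leaves the HT/DHT combination unspecified, so the sketch below shows only the standard DHT building block one would reach for: a consistent-hash ring that maps task identifiers to grid nodes so that load spreads without a central table. Node names and parameters are assumptions.

        # Consistent-hash ring: the usual DHT primitive for spreading load.
        import bisect
        import hashlib

        def h(key: str) -> int:
            return int(hashlib.sha1(key.encode()).hexdigest(), 16)

        class HashRing:
            def __init__(self, nodes, vnodes=50):
                # virtual nodes smooth the distribution across physical nodes
                self._ring = sorted((h(f"{n}#{i}"), n)
                                    for n in nodes for i in range(vnodes))
                self._keys = [k for k, _ in self._ring]

            def node_for(self, task_id: str) -> str:
                idx = bisect.bisect(self._keys, h(task_id)) % len(self._ring)
                return self._ring[idx][1]

        ring = HashRing(["grid-a", "grid-b", "grid-c"])
        for task in ("job-1", "job-2", "job-3", "job-4"):
            print(task, "->", ring.node_for(task))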

  18. Use of a hybrid computer in engineering-seismology research

    Science.gov (United States)

    Park, R.B.; Hays, W.W.

    1977-01-01

    A hybrid computer is an important tool in the seismological research conducted by the U.S. Geological Survey in support of the Energy Research and Development Administration nuclear explosion testing program at the Nevada Test Site and the U.S. Geological Survey Earthquake Hazard Reduction Program. The hybrid computer system, which employs both digital and analog computational techniques, facilitates efficient seismic data processing. Standard data processing operations include: (1) preview of dubbed magnetic tapes of data; (2) correction of data for instrument response; (3) derivation of displacement and acceleration time histories from velocity recordings; (4) extraction of peak-amplitude data; (5) digitization of time histories; (6) rotation of instrumental axes; (7) derivation of response spectra; and (8) derivation of relative transfer functions between recording sites. Catalogs of time histories and response spectra of ground motion from nuclear explosions and earthquakes that have been processed by the hybrid computer are used in the Earthquake Hazard Research Program to evaluate the effects of source, propagation path, and site on recorded ground motion; to assess seismic risk; to predict system response; and to solve system design problems.

  19. Codesign Environment for Computer Vision Hw/Sw Systems

    Science.gov (United States)

    Toledo, Ana; Cuenca, Sergio; Suardíaz, Juan

    2006-10-01

    In this paper we present a novel codesign environment conceived especially for computer vision hybrid systems. This setting is based on Mathworks Simulink and Xilinx System Generator tools and comprises the following: an incremental codesign flow, diverse libraries of virtual components with three levels of description (high level, hardware and software), semi-automatic tools to help partition the system, and a methodology for building new library components. The use of high-level libraries allows systems to be developed without exhaustive knowledge of the target architecture or special skills in hardware description languages. This enables a non-traumatic incorporation of reconfigurable technologies into image processing systems, which are generally developed by engineers who are not closely acquainted with hardware design disciplines.

  20. A hybrid computational grid architecture for comparative genomics.

    Science.gov (United States)

    Singh, Aarti; Chen, Chen; Liu, Weiguo; Mitchell, Wayne; Schmidt, Bertil

    2008-03-01

    Comparative genomics provides a powerful tool for studying evolutionary changes among organisms, helping to identify genes that are conserved among species, as well as genes that give each organism its unique characteristics. However, the huge datasets involved make this approach impractical on traditional computer architectures, leading to prohibitively long runtimes. In this paper, we present a new computational grid architecture based on a hybrid computing model to significantly accelerate comparative genomics applications. The hybrid computing model consists of two types of parallelism: coarse grained and fine grained. The coarse-grained parallelism uses a volunteer computing infrastructure for job distribution, while the fine-grained parallelism uses commodity computer graphics hardware for fast sequence alignment. We present the deployment and evaluation of this approach on our grid test bed for the all-against-all comparison of microbial genomes. The results of this comparison are then used by the phenotype-genotype explorer (PheGee). PheGee is a new tool that nominates candidate genes responsible for a given phenotype.

  1. A Hybrid Brain-Computer Interface-Based Mail Client

    Directory of Open Access Journals (Sweden)

    Tianyou Yu

    2013-01-01

    Full Text Available Brain-computer interface-based communication plays an important role in brain-computer interface (BCI) applications; electronic mail is one of the most common communication tools. In this study, we propose a hybrid BCI-based mail client that implements electronic mail communication by means of real-time classification of multimodal features extracted from scalp electroencephalography (EEG). With this BCI mail client, users can receive, read, write, and attach files to their mail. Using a BCI mouse that utilizes hybrid brain signals, that is, motor imagery and the P300 potential, the user can select and activate the function keys and links on the mail client graphical user interface (GUI). An adaptive P300 speller is employed for text input. The system has been tested with 6 subjects, and the experimental results validate the efficacy of the proposed method.

  2. Computational simulation of intermingled-fiber hybrid composite behavior

    Science.gov (United States)

    Mital, Subodh K.; Chamis, Christos C.

    1992-01-01

    Three-dimensional finite-element analysis and a micromechanics-based computer code, ICAN (Integrated Composite Analyzer), are used to predict the composite properties and microstresses of a unidirectional graphite/epoxy primary composite with varying percentages of S-glass fibers used as hybridizing fibers at a total fiber volume of 0.54. The three-dimensional finite-element model used in the analyses consists of a group of nine fibers, all unidirectional, in a three-by-three unit cell array. There is generally good agreement between the composite properties and microstresses obtained from both methods. The results indicate that the finite-element methods and the micromechanics equations embedded in the ICAN computer code can be used to obtain the properties of intermingled-fiber hybrid composites needed for the analysis/design of hybrid composite structures. However, the finite-element model should be big enough to simulate the conditions assumed in the micromechanics equations.

  3. Recent developments in computer assisted rehabilitation environments

    Institute of Scientific and Technical Information of China (English)

    Rob van der Meer

    2014-01-01

    Computer Assisted Rehabilitation Environment (CAREN) is a system that integrates a training platform (motion base), a virtual environment, a sensor system (motion capture) and D-flow software. It is useful for both diagnostic and therapeutic purposes. The human gait pattern can be impaired due to disease, trauma or natural decline, and gait analysis is a useful tool for identifying impaired gait patterns. Traditional gait analysis is a very time-consuming process and is therefore only used in exceptional cases. With newer systems a quick and extensive analysis is possible, providing useful tools for therapeutic purposes. The range of systems will be described in this paper, highlighting both their diagnostic use and their therapeutic possibilities. Because wounded warriors often have an impaired gait due to amputations or other extremity trauma, these systems are very useful for military rehabilitation efforts. Additionally, the virtual reality environment creates a very challenging situation for the patient, enhancing the rehabilitation experience. For that reason several armed forces already have these systems in use. The most recent experiences will be discussed, including new developments both in the extension of the range of systems and in the improvement and adaptation of the software. A new and promising development, the use of CAREN in a special application for patients with post-traumatic stress disorder (PTSD), will also be reviewed.

  4. Reach and get capability in a computing environment

    Science.gov (United States)

    Bouchard, Ann M [Albuquerque, NM; Osbourn, Gordon C [Albuquerque, NM

    2012-06-05

    A reach and get technique includes invoking a reach command from a reach location within a computing environment. A user can then navigate to an object within the computing environment and invoke a get command on the object. In response to invoking the get command, the computing environment is automatically navigated back to the reach location and the object copied into the reach location.
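
    A toy model of the patented interaction, with all names invented: "reach" records the current location, the user navigates elsewhere, and "get" copies the chosen object back to the recorded location and returns focus there.

        # Minimal sketch of the reach-and-get interaction (names are invented).
        class Workspace:
            def __init__(self):
                self.locations = {"inbox": [], "folder": ["report.pdf"]}
                self.current = "inbox"

            def reach(self):
                self._reach_location = self.current   # remember the destination

            def get(self, obj):
                # copy the object to the remembered location, then snap back
                self.locations[self._reach_location].append(obj)
                self.current = self._reach_location

        ws = Workspace()
        ws.reach()               # invoked from the inbox
        ws.current = "folder"    # user navigates elsewhere
        ws.get("report.pdf")     # object lands in the inbox; focus returns there
        print(ws.current, ws.locations["inbox"])   # -> inbox ['report.pdf']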

  5. Accelerating Climate and Weather Simulations through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  6. History Matching in Parallel Computational Environments

    Energy Technology Data Exchange (ETDEWEB)

    Steven Bryant; Sanjay Srinivasan; Alvaro Barrera; Sharad Yadav

    2005-10-01

    A novel methodology for delineating multiple reservoir domains for the purpose of history matching in a distributed computing environment has been proposed. A fully probabilistic approach to perturbing permeability within the delineated zones is implemented. The combination of robust schemes for identifying reservoir zones and distributed computing significantly increases the accuracy and efficiency of the probabilistic approach. The information pertaining to the permeability variations in the reservoir that is contained in dynamic data is calibrated in terms of a deformation parameter rD. This information is merged with the prior geologic information in order to generate permeability models consistent with the observed dynamic data as well as the prior geology. The relationship between dynamic response data and reservoir attributes may vary in different regions of the reservoir due to spatial variations in reservoir attributes, well configuration, flow constraints, etc. The probabilistic approach then has to account for multiple rD values in different regions of the reservoir. In order to delineate reservoir domains that can be characterized with different rD parameters, principal component analysis (PCA) of the Hessian matrix has been performed. The Hessian matrix summarizes the sensitivity of the objective function at a given step of the history matching to the model parameters. It also measures the interaction of the parameters in affecting the objective function. The basic premise of PC analysis is to isolate the most sensitive and least correlated regions. The eigenvectors obtained during the PCA are suitably scaled and appropriate grid-block volume cut-offs are defined such that the resultant domains are neither too large (which increases interactions between domains) nor too small (implying ineffective history matching). The delineation of domains requires calculation of the Hessian, which can be computationally costly and also restricts the current approach to
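
    The PCA step lends itself to a short sketch: eigendecompose a symmetric Hessian and keep the leading, weakly interacting directions. The random matrix below merely stands in for the true Hessian of the history-matching objective.

        # PCA of a (stand-in) Hessian: rank directions by sensitivity.
        import numpy as np

        rng = np.random.default_rng(3)
        A = rng.normal(size=(50, 50))
        hessian = A @ A.T                     # symmetric PSD stand-in

        eigvals, eigvecs = np.linalg.eigh(hessian)
        order = np.argsort(eigvals)[::-1]     # most sensitive directions first
        explained = eigvals[order] / eigvals.sum()
        n_keep = int(np.searchsorted(np.cumsum(explained), 0.90)) + 1
        leading = eigvecs[:, order[:n_keep]]  # columns define domain directions
        print(f"{n_keep} principal directions capture 90% of total sensitivity,"
              f" basis shape {leading.shape}")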

  7. Energy efficient hybrid computing systems using spin devices

    Science.gov (United States)

    Sharad, Mrigank

    Emerging spin devices like magnetic tunnel junctions (MTJs), spin valves and domain wall magnets (DWMs) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin devices to gain large benefits at the system level. As an example, lateral spin valves (LSVs) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog characteristics of spin currents facilitate non-Boolean computation, like majority evaluation, that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, thereby resulting in small computation power. Moreover, since nanomagnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-Von-Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital as well as analog. Such low-power hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications, based on a device-circuit co-simulation framework, predict more than ~100x improvement in computation energy as compared to state-of-the-art CMOS designs, for optimal spin-device parameters.

  8. CSP: A Multifaceted Hybrid Architecture for Space Computing

    Science.gov (United States)

    Rudolph, Dylan; Wilson, Christopher; Stewart, Jacob; Gauvin, Patrick; George, Alan; Lam, Herman; Crum, Gary Alex; Wirthlin, Mike; Wilson, Alex; Stoddard, Aaron

    2014-01-01

    Research on the CHREC Space Processor (CSP) takes a multifaceted hybrid approach to embedded space computing. Working closely with the NASA Goddard SpaceCube team, researchers at the National Science Foundation (NSF) Center for High-Performance Reconfigurable Computing (CHREC) at the University of Florida and Brigham Young University are developing hybrid space computers that feature an innovative combination of three technologies: commercial-off-the-shelf (COTS) devices, radiation-hardened (RadHard) devices, and fault-tolerant computing. Modern COTS processors provide the utmost in performance and energy-efficiency but are susceptible to ionizing radiation in space, whereas RadHard processors are virtually immune to this radiation but are more expensive, larger, less energy-efficient, and generations behind in speed and functionality. By featuring COTS devices to perform the critical data processing, supported by simpler RadHard devices that monitor and manage the COTS devices, and augmented with novel uses of fault-tolerant hardware, software, information, and networking within and between COTS devices, the resulting system can maximize performance and reliability while minimizing energy consumption and cost. NASA Goddard has adopted the CSP concept and technology with plans underway to feature flight-ready CSP boards on two upcoming space missions.

  9. Aluminium in Biological Environments: A Computational Approach

    Science.gov (United States)

    Mujika, Jon I; Rezabal, Elixabete; Mercero, Jose M; Ruipérez, Fernando; Costa, Dominique; Ugalde, Jesus M; Lopez, Xabier

    2014-01-01

    The increased availability of aluminium in biological environments, due to human intervention in the last century, raises concerns about the effects that this so far "excluded from biology" metal might have on living organisms. Consequently, the bioinorganic chemistry of aluminium has emerged as a very active field of research. This review will focus on our contributions to this field, based on computational studies that can yield an understanding of aluminium biochemistry at a molecular level. Aluminium can interact and be stabilized in biological environments by complexing with both low molecular mass chelants and high molecular mass peptides. The speciation of the metal is, nonetheless, dictated by the hydrolytic species dominant in each case, which vary according to the pH of the medium. In blood, citrate and serum transferrin are identified as the main low molecular mass and high molecular mass molecules interacting with aluminium. The complexation of aluminium to citrate and the subsequent changes exerted on the deprotonation pathways of its titratable groups will be discussed, along with the mechanisms for the intake and release of aluminium in serum transferrin at two pH conditions, physiologically neutral and endosomally acidic. Aluminium can substitute for other metals, in particular magnesium, in protein buried sites and trigger conformational disorder and alteration of the protonation states of the protein's sidechains. A detailed account of the interaction of aluminium with protein sidechains will be given. Finally, it will be described how aluminium can exert oxidative stress by stabilizing superoxide radicals either as mononuclear aluminium or clustered in boehmite. The possibility of promotion of the Fenton reaction and production of hydroxyl radicals will also be discussed. PMID:24757505

  10. Computational hybrid anthropometric paediatric phantom library for internal radiation dosimetry

    Science.gov (United States)

    Xie, Tianwu; Kuster, Niels; Zaidi, Habib

    2017-04-01

    Hybrid computational phantoms combine voxel-based and simplified equation-based modelling approaches to provide unique advantages and more realism for the construction of anthropomorphic models. In this work, a methodology and C++ code are developed to generate hybrid computational phantoms covering statistical distributions of body morphometry in the paediatric population. The paediatric phantoms of the Virtual Population Series (IT’IS Foundation, Switzerland) were modified to match target anthropometric parameters, including body mass, body length, standing height and sitting height/stature ratio, determined from reference databases of the National Centre for Health Statistics and the National Health and Nutrition Examination Survey. The phantoms were selected as representative anchor phantoms for newborns and 1-, 2-, 5-, 10- and 15-year-old children, and were subsequently remodelled to create 1100 female and male phantoms with 10th, 25th, 50th, 75th and 90th percentile body morphometries. Evaluation was performed qualitatively using 3D visualization and quantitatively by analysing internal organ masses. Overall, the newly generated phantoms appear very reasonable and representative of the main characteristics of the paediatric population at various ages and for different genders, body sizes and sitting stature ratios. The mass of internal organs increases with height and body mass. The comparison of organ masses of the heart, kidney, liver, lung and spleen with published autopsy and ICRP reference data for children demonstrated that they follow the same trend when correlated with age. The constructed hybrid computational phantom library opens up the prospect of comprehensive radiation dosimetry calculations and risk assessment for the paediatric population of different age groups and diverse anthropometric parameters.

  11. Extreme Environments Facilitate Hybrid Superiority – The Story of a Successful Daphnia galeata × longispina Hybrid Clone

    Science.gov (United States)

    Griebel, Johanna; Gießler, Sabine; Poxleitner, Monika; Navas Faria, Amanda; Yin, Mingbo; Wolinska, Justyna

    2015-01-01

    Hybridization within the animal kingdom has long been underestimated. Hybrids have often been considered less fit than their parental species. In the present study, we observed that the Daphnia community of a small lake was dominated by a single D. galeata × D. longispina hybrid clone, during two consecutive years. Notably, in artificial community set-ups consisting of several clones representing parental species and other hybrids, this hybrid clone took over within about ten generations. Neither the fitness assay conducted under different temperatures, or under crowded and non-crowded environments, nor the carrying capacity test revealed any outstanding life history parameters of this hybrid clone. However, under simulated winter conditions (i.e. low temperature, food and light), the hybrid clone eventually showed a higher survival probability and higher fecundity compared to parental species. Hybrid superiority in cold-adapted traits leading to an advantage of overwintering as parthenogenetic lineages might consequently explain the establishment of successful hybrids in natural communities of the D. longispina complex. In extreme cases, like the one reported here, a superior hybrid genotype might be the only clone alive after cold winters. Overall, superiority traits, such as enhanced overwintering here, might explain hybrid dominance in nature, especially in extreme and rapidly changing environments. Although any favoured gene complex in cyclic parthenogens could be frozen in successful clones independent of hybridization, we did not find similarly successful clones among parental species. We conclude that the emergence of the observed trait is linked to the production of novel recombined hybrid genotypes. PMID:26448651

  12. A Review of Hybrid Brain-Computer Interface Systems

    Directory of Open Access Journals (Sweden)

    Setare Amiri

    2013-01-01

    Full Text Available The increasing number of research activities and different types of studies in brain-computer interface (BCI) systems show potential in this young research area. Research teams have studied features of different data acquisition techniques, brain activity patterns, feature extraction techniques, methods of classification, and many other aspects of a BCI system. However, conventional BCIs have not become fully practical, due to limitations in accuracy, reliability, information transfer rate, and user acceptability. A new approach to creating a more reliable BCI that takes advantage of each system is to combine two or more BCI systems with different brain activity patterns or different input signal sources. This type of BCI, called a hybrid BCI, may reduce the disadvantages of each conventional BCI system. In addition, hybrid BCIs may enable more applications and possibly increase the accuracy and the information transfer rate. However, the types of BCIs and their combinations should be considered carefully. In this paper, after introducing several types of BCIs and their combinations, we review and discuss hybrid BCIs, different possibilities of combining them, and their advantages and disadvantages.

  13. Cultivating Curiosity: Integrating Hybrid Teaching in Courses in Human Behavior in the Social Environment

    Science.gov (United States)

    Rodriguez-Keyes, Elizabeth; Schneider, Dana A.

    2013-01-01

    This study illustrates an experience of implementing a hybrid model for teaching human behavior in the social environment in an urban university setting. Developing a hybrid model in a BSW program arose out of a desire to reach students in a different way. Designed to promote curiosity and active learning, this particular hybrid model has students…

  14. Supporting localized activities in ubiquitous computing environments

    OpenAIRE

    Pinto, Helder

    2004-01-01

    The design of pervasive and ubiquitous computing systems must be centered on users' activity in order to bring computing systems closer to people. Adopting an activity-centered approach to the design of pervasive and ubiquitous computing systems leads us to seek to understand: a) how humans naturally accomplish an activity; and b) how computing artifacts from both the environmental and personal domains may contribute to the accomplishment of an activity. This work particularly focuses o...

  15. Computing Environments for Data Analysis. Part 3. Programming Environments.

    Science.gov (United States)

    1986-05-21

    [OCR-garbled abstract; only fragments are recoverable. The report concerns programming environments for data analysis on multi-user minicomputers such as PDP-11s and VAXes running the Unix operating system, with a pen-plotter or graphics terminal for viewing. Cited fragments include ACM Trans. on Programming Languages and Systems, 7, pp. 183-213, and Kernighan, B.W. and Mashey, J.R. (1981) on the Unix programming environment. Keywords: Data Analysis, Workstations, Programming Environments. The research was supported by a 1985 Office of Naval Research Young ...]

  16. Hybrid female mate choice as a species isolating mechanism: environment matters.

    Science.gov (United States)

    Schmidt, E M; Pfennig, K S

    2016-04-01

    A fundamental goal of biology is to understand how new species arise and are maintained. Female mate choice is potentially critical to the speciation process: mate choice can prevent hybridization and thereby generate reproductive isolation between potentially interbreeding groups. Yet, in systems where hybridization occurs, mate choice by hybrid females might also play a key role in reproductive isolation by affecting hybrid fitness and contributing to patterns of gene flow between species. We evaluated whether hybrid mate choice behaviour could serve as such an isolating mechanism using spadefoot toad hybrids of Spea multiplicata and Spea bombifrons. We assessed the mate preferences of female hybrid spadefoot toads for sterile hybrid males vs. pure-species males in two alternative habitat types in which spadefoots breed: deep or shallow water. We found that, in deep water, hybrid females preferred the calls of sterile hybrid males to those of S. multiplicata males. Thus, maladaptive hybrid mate preferences could serve as an isolating mechanism. However, in shallow water, the preference for hybrid male calls was not expressed. Moreover, hybrid females did not prefer hybrid calls to those of S. bombifrons in either environment. Because hybrid female mate choice was context-dependent, its efficacy as a reproductive isolating mechanism will depend on both the environment in which females choose their mates as well as the relative frequencies of males in a given population. Thus, reproductive isolation between species, as well as habitat specific patterns of gene flow between species, might depend critically on the nature of hybrid mate preferences and the way in which they vary across environments.

  17. InSAR Scientific Computing Environment

    Science.gov (United States)

    Rosen, Paul A.; Sacco, Gian Franco; Gurrola, Eric M.; Zabker, Howard A.

    2011-01-01

    This computing environment is the next generation of geodetic image processing technology for repeat-pass Interferometric Synthetic Aperture (InSAR) sensors, identified by the community as a needed capability to provide flexibility and extensibility in reducing measurements from radar satellites and aircraft to new geophysical products. This software allows users of interferometric radar data the flexibility to process from Level 0 to Level 4 products using a variety of algorithms and for a range of available sensors. There are many radar satellites in orbit today delivering to the science community data of unprecedented quantity and quality, making possible large-scale studies in climate research, natural hazards, and the Earth's ecosystem. The proposed DESDynI mission, now under consideration by NASA for launch later in this decade, would provide time series and multiimage measurements that permit 4D models of Earth surface processes so that, for example, climate-induced changes over time would become apparent and quantifiable. This advanced data processing technology, applied to a global data set such as from the proposed DESDynI mission, enables a new class of analyses at time and spatial scales unavailable using current approaches. This software implements an accurate, extensible, and modular processing system designed to realize the full potential of InSAR data from future missions such as the proposed DESDynI, existing radar satellite data, as well as data from the NASA UAVSAR (Uninhabited Aerial Vehicle Synthetic Aperture Radar), and other airborne platforms. The processing approach has been re-thought in order to enable multi-scene analysis by adding new algorithms and data interfaces, to permit user-reconfigurable operation and extensibility, and to capitalize on codes already developed by NASA and the science community. The framework incorporates modern programming methods based on recent research, including object-oriented scripts controlling legacy and

  18. Maze learning by a hybrid brain-computer system

    Science.gov (United States)

    Wu, Zhaohui; Zheng, Nenggan; Zhang, Shaowu; Zheng, Xiaoxiang; Gao, Liqiang; Su, Lijuan

    2016-09-01

    The combination of biological and artificial intelligence is particularly driven by two major strands of research: one involves the control of mechanical, usually prosthetic, devices by conscious biological subjects, whereas the other involves the control of animal behaviour by stimulating nervous systems electrically or optically. However, to our knowledge, no study has demonstrated that spatial learning in a computer-based system can affect the learning and decision-making behaviour of the biological component, namely a rat, when these two types of intelligence are wired together to form a new intelligent entity. Here, we show how rule operations conducted by the computing components of a novel hybrid brain-computer system, i.e., ratbots, enable it to exhibit superior learning abilities in a maze learning task, even when the rats' vision and whisker sensation were blocked. We anticipate that our study will encourage other researchers to investigate combinations of various rule operations and other artificial intelligence algorithms with the learning and memory processes of organic brains to develop more powerful cyborg intelligence systems. Our results potentially have profound implications for a variety of applications in intelligent systems and neural rehabilitation.

  19. Computational fluid dynamics challenges for hybrid air vehicle applications

    Science.gov (United States)

    Carrin, M.; Biava, M.; Steijl, R.; Barakos, G. N.; Stewart, D.

    2017-06-01

    This paper begins by comparing turbulence models for the prediction of hybrid air vehicle (HAV) flows. A 6:1 prolate spheroid is employed for validation of the computational fluid dynamics (CFD) method. An analysis of turbulent quantities is presented and the Shear Stress Transport (SST) k-ω model is compared against a k-ω Explicit Algebraic Stress Model (EASM) within the unsteady Reynolds-Averaged Navier-Stokes (RANS) framework. Further comparisons involve Scale-Adaptive Simulation models and a local transition transport model. The results show that the flow around the vehicle at low pitch angles is sensitive to transition effects. At high pitch angles, the vortices generated on the suction side provide substantial lift augmentation and are better resolved by EASMs. The validated CFD method is employed for the flow around a shape similar to the Airlander aircraft of Hybrid Air Vehicles Ltd. The sensitivity of the transition location to the Reynolds number is demonstrated and the role of each vehicle's component is analyzed. It was found that the fins contributed the most to increasing the lift and drag.

  20. Extreme Environment Silicon Carbide Hybrid Temperature & Pressure Optical Sensors

    Energy Technology Data Exchange (ETDEWEB)

    Nabeel Riza

    2010-09-01

    This final report contains the main results from a 3-year program to further investigate the merits of SiC-based hybrid sensor designs for extreme environment measurements in gas turbines. The study is divided into three parts. Part 1 studies the material properties of SiC such as temporal response, refractive index change with temperature, and the reversibility of the material's thermal response. Sensor data from a combustion rig test using this SiC sensor technology are analyzed and a robust distributed sensor network design is proposed. Part 2 of the study focuses on introducing redundancy in the sensor signal processing to provide improved temperature measurement robustness. In this regard, two distinct measurement methods emerge. The first method uses the laser wavelength sensitivity of the SiC refractive index behavior, and the second engages the Black-Body (BB) radiation of the SiC package. Part 3 of the program investigates a new way to measure pressure via a distance measurement technique that applies to hot objects, including corrosive fluids.

  1. Universal quantum computation using all-optical hybrid encoding

    Institute of Scientific and Technical Information of China (English)

    郭奇; 程留永; 王洪福; 张寿

    2015-01-01

    By employing displacement operations, single-photon subtractions, and weak cross-Kerr nonlinearity, we propose an alternative way of implementing several universal quantum logic gates for all-optical hybrid qubits encoded in both single-photon polarization states and coherent states. Since these schemes can be implemented using only local operations, without a teleportation procedure, they require fewer physical resources and simpler operations than the existing schemes. With the help of displacement operations, a large phase shift of the coherent state can be obtained via currently available tiny cross-Kerr nonlinearity. Thus, all of these schemes are nearly deterministic and feasible under current technology, which makes them suitable for large-scale quantum computing.

  2. "Hybrids" and the Gendering of Computing Jobs in Australia

    Directory of Open Access Journals (Sweden)

    Gillian Whitehouse

    2005-05-01

    This paper presents recent Australian evidence on the extent to which women are entering “hybrid” computing jobs combining technical and communication or “people management” skills, and the way these skill combinations are valued at organisational level. We draw on a survey of detailed occupational roles in large IT firms to examine the representation of women in a range of jobs consistent with the notion of “hybrid”, and analyse the discourse around these sorts of skills in a set of organisational case studies. Our research shows a traditional picture of labour market segmentation, with limited representation of women in high status jobs, and their relatively greater prevalence in more routine areas of the industry. While our case studies highlight perceptions of the need for hybrid roles and assumptions about the suitability of women for such jobs, the ongoing masculinity of core development functions appears untouched by this discourse.

  3. Computational Fluid Dynamics In GARUDA Grid Environment

    CERN Document Server

    Roy, Chandra Bhushan

    2011-01-01

    The GARUDA Grid, developed on the NKN (National Knowledge Network) by the Centre for Development of Advanced Computing (C-DAC), links High Performance Computing (HPC) clusters that are geographically separated all over India. C-DAC has been associated with the development of HPC infrastructure since its establishment in 1988. The Grid infrastructure provides a secure and efficient way of accessing heterogeneous resources. Enabling scientific applications on the Grid has been researched for some time now. In this regard, we have successfully enabled a Computational Fluid Dynamics (CFD) application, which can help the CFD community as a whole carry out computational research that requires huge computational resources beyond one's in-house capability. This work is part of the current on-going project Grid GARUDA funded by the Department of Information Technology.

  4. Authenticating Devices in Ubiquitous Computing Environment

    Directory of Open Access Journals (Sweden)

    Kamarularifin Abd Jalil

    2015-05-01

    The lack of a good authentication protocol in a ubiquitous application environment has made such environments a good target for adversaries. As a result, all the devices participating in such an environment are exposed to attacks such as identity impersonation, man-in-the-middle attacks and unauthorized access. This has made users skeptical and has resulted in them keeping their distance from such applications. For this reason, in this paper we propose a new authentication protocol to be used in such an environment. Unlike other authentication protocols that can be adopted for such an environment, our proposed protocol avoids a single point of failure, implements trust levels in granting access and promotes decentralization. It is hoped that the proposed authentication protocol can reduce or eliminate the problems mentioned.

  5. A Hybrid Approach for Scheduling and Replication based on Multi-criteria Decision Method in Grid Computing

    Directory of Open Access Journals (Sweden)

    Nadia Hadi

    2012-09-01

    Grid computing environments have emerged following the demand of scientists for very high computing power and storage capacity. One of the challenges in the use of these environments is performance. To improve performance, scheduling and replication techniques are used. In this paper we propose an approach to task scheduling combined with a data replication decision based on the multi-criteria principle. This improves performance by reducing the response time of tasks and the system load. This hybrid approach is based on a non-hierarchical model that allows scalability.
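
    The decision logic can be illustrated compactly. Below is a minimal Python sketch of a weighted-sum multi-criteria score used to pick a node, plus a popularity-based replication trigger; the node attributes, weights and threshold are invented for illustration and are not the paper's actual criteria.

        # Illustrative sketch only: weighted-sum multi-criteria node selection
        # plus a simple replication trigger. All attributes are hypothetical.

        def score(node, w_time=0.6, w_load=0.4):
            """Lower is better: combine estimated response time and current load."""
            return w_time * node["est_response_s"] + w_load * node["load"]

        def schedule(task, nodes, access_counts, replicate_above=10):
            best = min(nodes, key=score)
            # Replicate a popular dataset onto the chosen node to cut future transfers.
            if access_counts.get(task["dataset"], 0) > replicate_above and \
                    task["dataset"] not in best["datasets"]:
                best["datasets"].append(task["dataset"])   # stand-in for a real copy
            access_counts[task["dataset"]] = access_counts.get(task["dataset"], 0) + 1
            return best

        nodes = [
            {"name": "n1", "est_response_s": 2.0, "load": 0.7, "datasets": ["d1"]},
            {"name": "n2", "est_response_s": 3.5, "load": 0.2, "datasets": []},
        ]
        counts = {}
        print(schedule({"dataset": "d1"}, nodes, counts)["name"])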

  6. A new self-consistent hybrid chemistry model for Mars and cometary environments

    Science.gov (United States)

    Wedlund, Cyril Simon; Kallio, Esa; Jarvinen, Riku; Dyadechkin, Sergey; Alho, Markku

    2014-05-01

    Over the last 15 years, a 3-D hybrid-PIC planetary plasma interaction modelling platform, named HYB, has been developed and applied to several planetary environments, such as those of Mars, Venus, Mercury, and more recently, the Moon. We present here another evolution of HYB, including a fully consistent ionospheric-chemistry package designed to reproduce the main ions at the lower boundary of the model. This evolution, also permitted by the increase in computing power and the switch to spherical coordinates for higher spatial resolution (Dyadechkin et al., 2013), is motivated by the imminent arrival of the Rosetta spacecraft in the vicinity of comet 67P/Churyumov-Gerasimenko. In this presentation we show the application of the new HYB-ionosphere model to 1D and 2D hybrid simulations at Mars above 100 km altitude and demonstrate that, with a limited number of chemical reactions, good agreement with 1D kinetic models may be found. This is a first validation step before applying the model to the 67P/CG comet environment, which, like Mars, is expected to be rich in carbon oxide compounds.

  7. Ubiquitous Computing in Physico-Spatial Environments

    DEFF Research Database (Denmark)

    Dalsgård, Peter; Eriksson, Eva

    2007-01-01

    Interaction design of pervasive and ubiquitous computing (UC) systems must take into account physico-spatial issues as technology is implemented into our physical surroundings. In this paper we discuss how one conceptual framework for understanding interaction in context, Activity Theory (AT...

  8. Plasma environment of Titan: a 3-D hybrid simulation study

    Directory of Open Access Journals (Sweden)

    S. Simon

    2006-05-01

    Titan possesses a dense atmosphere, consisting mainly of molecular nitrogen. Titan's orbit is located within the Saturnian magnetosphere most of the time, where the corotating plasma flow is super-Alfvénic, yet subsonic and submagnetosonic. Since Titan does not possess a significant intrinsic magnetic field, the incident plasma interacts directly with the atmosphere and ionosphere. Due to the characteristic length scales of the interaction region being comparable to the ion gyroradii in the vicinity of Titan, magnetohydrodynamic models can only offer a rough description of Titan's interaction with the corotating magnetospheric plasma flow. For this reason, Titan's plasma environment has been studied by using a 3-D hybrid simulation code, treating the electrons as a massless, charge-neutralizing fluid, whereas a completely kinetic approach is used to cover ion dynamics. The calculations are performed on a curvilinear simulation grid which is adapted to the spherical geometry of the obstacle. In the model, Titan's dayside ionosphere is mainly generated by solar UV radiation; hence, the local ion production rate depends on the solar zenith angle. Because the Titan interaction features the possibility of having the densest ionosphere located on a face not aligned with the ram flow of the magnetospheric plasma, a variety of different scenarios can be studied. The simulations show the formation of a strong magnetic draping pattern and an extended pick-up region, being highly asymmetric with respect to the direction of the convective electric field. In general, the mechanism giving rise to these structures exhibits similarities to the interaction of the ionospheres of Mars and Venus with the supersonic solar wind. The simulation results are in agreement with data from recent Cassini flybys.

  9. Implementation of Computational Grid Services in Enterprise Grid Environments

    Directory of Open Access Journals (Sweden)

    R. J.A. Richard

    2008-01-01

    Grid computing refers to the development of a high-performance computing environment, or virtual supercomputing environment, by utilizing available computing resources in a LAN, WAN and the Internet. This emerging research field offers enormous opportunities for e-Science applications such as astrophysics, bioinformatics, aerospace modeling, cancer research, etc. Grid involves coordinating and sharing computing power, applications, data storage, network resources, etc., across dynamically and geographically dispersed organizations. Most Grid environments are developed using the Globus toolkit, a UNIX/Linux-based middleware for integrating computational resources over the network. The emergence of the Global Grid concept provides an excellent opportunity for Grid-based e-Science applications to use high-performance supercomputing environments; thus Windows-based enterprise grid environments cannot be neglected in the development of Global Grids. This study discusses the basics of enterprise grids and the implementation of enterprise computational grids using the Alchemi toolkit. The review is organized into three parts: (i) an introduction to Grid technologies, (ii) design concepts of enterprise grids and (iii) implementation of computational Grid services.

  10. A Massive Data Parallel Computational Framework for Petascale/Exascale Hybrid Computer Systems

    CERN Document Server

    Blazewicz, Marek; Diener, Peter; Koppelman, David M; Kurowski, Krzysztof; Löffler, Frank; Schnetter, Erik; Tao, Jian

    2012-01-01

    Heterogeneous systems are becoming more common on High Performance Computing (HPC) systems. Even using tools like CUDA and OpenCL it is a non-trivial task to obtain optimal performance on the GPU. Approaches to simplifying this task include Merge (a library based framework for heterogeneous multi-core systems), Zippy (a framework for parallel execution of codes on multiple GPUs), BSGP (a new programming language for general purpose computation on the GPU) and CUDA-lite (an enhancement to CUDA that transforms code based on annotations). In addition, efforts are underway to improve compiler tools for automatic parallelization and optimization of affine loop nests for GPUs and for automatic translation of OpenMP parallelized codes to CUDA. In this paper we present an alternative approach: a new computational framework for the development of massively data parallel scientific applications suitable for use on such petascale/exascale hybrid systems, built upon the highly scalable Cactus framework. As the first...

  11. Research computing in a distributed cloud environment

    Energy Technology Data Exchange (ETDEWEB)

    Fransham, K; Agarwal, A; Armstrong, P; Bishop, A; Charbonneau, A; Desmarais, R; Hill, N; Gable, I; Gaudet, S; Goliath, S; Impey, R; Leavett-Brown, C; Ouellete, J; Paterson, M; Pritchet, C; Penfold-Brown, D; Podaima, W; Schade, D; Sobie, R J, E-mail: fransham@uvic.ca

    2010-11-01

    The recent increase in availability of Infrastructure-as-a-Service (IaaS) computing clouds provides a new way for researchers to run complex scientific applications. However, using cloud resources for a large number of research jobs requires significant effort and expertise. Furthermore, running jobs on many different clouds presents even more difficulty. In order to make it easy for researchers to deploy scientific applications across many cloud resources, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. In response to a user's job submission to a batch system, the Cloud Scheduler manages the distribution and deployment of user-customized virtual machines across multiple clouds. We describe the motivation for and implementation of a distributed cloud using the Cloud Scheduler that is spread across both commercial and dedicated private sites, and present some early results of scientific data analysis using the system.
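
    As a rough illustration of the workflow described above, the sketch below boots user-specified VM images on whichever cloud still has capacity in response to queued jobs. The Cloud class and job structure are hypothetical stand-ins, not the actual Cloud Scheduler interfaces.

        # Minimal sketch of a cloud-spanning VM resource manager: watch a batch
        # queue and boot user-specified VM images on whichever cloud has free
        # capacity. All names and structures here are invented for illustration.

        from collections import deque

        class Cloud:
            def __init__(self, name, capacity):
                self.name, self.free = name, capacity
            def boot(self, image):
                self.free -= 1
                print(f"booting {image} on {self.name}")

        def run_scheduler(queue, clouds):
            while queue:
                job = queue.popleft()
                target = next((c for c in clouds if c.free > 0), None)
                if target is None:           # no capacity anywhere: requeue and stop
                    queue.appendleft(job)
                    break
                target.boot(job["vm_image"])

        run_scheduler(deque([{"vm_image": "sl5-analysis"}] * 3),
                      [Cloud("private", 2), Cloud("ec2", 8)])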

  13. Pervasive Emotions in Pervasive Computing Environments

    CERN Document Server

    Goyal, Vishal

    2009-01-01

    The capability of an intelligent environment to connect and adapt to the real internal states, needs and behavioral meaning of humans can be made possible by considering people's emotional states as contextual parameters. In this paper, we build on enactive psychology and investigate the incorporation of emotions in pervasive systems. We redefine emotions, and discuss the coding of emotional human markers by smart environments. In addition, we compare several existing works and identify how emotions can be detected and modeled by a pervasive system in order to enhance its service and response to users. Finally, we comprehensively analyze an XML-based language for representing and annotating emotions known as EARL, and raise two important issues which pertain to emotion representation and modeling in XML-based languages.

  14. Developing a Collaborative and Autonomous Training and Learning Environment for Hybrid Wireless Networks

    CERN Document Server

    Lobo, Jose Eduardo M; Brust, Matthias R; Rothkugel, Steffen; Adriano, Christian M

    2007-01-01

    With larger memory capacities and the ability to link into wireless networks, more and more students use palmtop and handheld computers for learning activities. However, existing software for Web-based learning is not well suited for such mobile devices, due to both constrained user interfaces and the communication effort required. A new generation of applications for the learning domain, explicitly designed to work on these kinds of small mobile devices, has to be developed. For this purpose, we introduce CARLA, a cooperative learning system designed to act in hybrid wireless networks. As a cooperative environment, CARLA aims at disseminating teaching material, notes, and even components of itself through both fixed and mobile networks to interested nodes. Due to the mobility of nodes, CARLA deals with upcoming problems such as network partitions and synchronization of teaching material, resource dependencies, and time constraints.

  15. Triangular Dynamic Architecture for Distributed Computing in a LAN Environment

    CERN Document Server

    Hossain, M Shahriar; Fuad, M Muztaba; Deb, Debzani

    2011-01-01

    A computationally intensive job, granularized into concurrent pieces operating in a dynamic environment, should finish in less total processing time. However, distributing jobs across a networked environment is a tedious and difficult task. Job distribution in a Local Area Network based on Triangular Dynamic Architecture (TDA) is a mechanism that establishes a dynamic environment for job distribution, load balancing and distributed processing with minimal interaction from the user. This paper introduces TDA, discusses its architecture and shows the benefits gained by utilizing such an architecture in a distributed computing environment.

  16. High performance computing network for cloud environment using simulators

    CERN Document Server

    Singh, N Ajith

    2012-01-01

    Cloud computing is the next generation of computing. Adopting cloud computing is like signing up for a new form of website: the GUI that controls the cloud directly controls the hardware resources and your applications. The difficult part of cloud computing is deploying in a real environment: it is difficult to know the exact cost and resource requirements until the service is actually bought, or whether the cloud will support an existing application from a traditional data center or a new application must be designed for the cloud environment. Security, latency and fault tolerance are some parameters that need attention before deploying; normally these are only learned after deployment, but by simulation we can run the experiment before deploying to the real environment. Through simulation we can understand the real behavior of a cloud computing environment, and after successful results we can start deploying applications in it. By using the simulator it...
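
    To make the argument concrete, here is a generic toy simulation in Python that estimates makespan and cost for a hypothetical VM fleet before any real deployment; it is not the API of CloudSim or any specific simulator, and all numbers are invented.

        # Toy illustration of why simulation helps before real deployment:
        # estimate makespan and cost of running tasks on a hypothetical VM fleet.

        def simulate(task_lengths_mi, vm_mips, price_per_hour):
            """Round-robin tasks (in million instructions) over VMs of given MIPS."""
            finish = [0.0] * len(vm_mips)
            for i, length in enumerate(task_lengths_mi):
                v = i % len(vm_mips)
                finish[v] += length / vm_mips[v]          # seconds on that VM
            makespan = max(finish)
            cost = sum(price_per_hour) * makespan / 3600.0
            return makespan, cost

        makespan, cost = simulate([40000] * 20, vm_mips=[1000, 2000],
                                  price_per_hour=[0.10, 0.20])
        print(f"makespan {makespan:.0f} s, cost ${cost:.4f}")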

  17. Advanced Scientific Computing Environment Team new scientific database management task

    Energy Technology Data Exchange (ETDEWEB)

    Church, J.P.; Roberts, J.C.; Sims, R.N.; Smetana, A.O.; Westmoreland, B.W.

    1991-06-01

    The mission of the ASCENT Team is to continually keep pace with, evaluate, and select emerging computing technologies to define and implement prototypic scientific environments that maximize the ability of scientists and engineers to manage scientific data. These environments are to be implemented in a manner consistent with the site computing architecture and standards and NRTSC/SCS strategic plans for scientific computing. The major trends in computing hardware and software technology clearly indicate that the future "computer" will be a network environment that comprises supercomputers, graphics boxes, mainframes, clusters, workstations, terminals, and microcomputers. This "network computer" will have an architecturally transparent operating system allowing the applications code to run on any box supplying the required computing resources. The environment will include a distributed database and database managing system(s) that permits use of relational, hierarchical, object-oriented, GIS and other databases. To reach this goal requires a stepwise progression from the present assemblage of monolithic applications codes running on disparate hardware platforms and operating systems. The first steps include converting from the existing JOSHUA system to a new J80 system that complies with modern language standards; development of a new J90 prototype to provide JOSHUA capabilities on Unix platforms; development of portable graphics tools to greatly facilitate preparation of input and interpretation of output; and extension of "Jvv" concepts and capabilities to distributed and/or parallel computing environments.

  18. Hybrid location determination technology within urban and indoor environment based on Cell-ID and path loss

    Institute of Scientific and Technical Information of China (English)

    HU Sheng; LEI Jian-jun; XIA Ying; GE Jun-wei; BAE Hae-young

    2004-01-01

    Location determination based on simple delay evaluations or GPS is not accurate enough, or even impossible, in urban and indoor environments due to multi-path propagation. To enhance location accuracy and reduce operating cost in these environments, this paper proposes a novel hybrid location determination technology which combines Cell-ID with the database correlation method. The proposed method generates a prediction database of path loss according to the Cell-ID; after computing the smallest squared error between the measured path loss and the predicted path loss, the location of the mobile terminal is decided by the coordinates of the best-matching database entry.
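
    The matching step can be sketched in a few lines of Python: pick the database entry whose predicted path-loss vector minimizes the squared error against the measurement. The coordinates and path-loss values below are invented for illustration.

        # Sketch of the database-correlation step: within the candidate cell
        # (Cell-ID), pick the grid point whose predicted path-loss vector has
        # the smallest squared error against the measurement. Values invented.

        def locate(measured, prediction_db):
            """prediction_db: {(x, y): [path loss per base station, dB]}"""
            def sq_err(entry):
                coords, predicted = entry
                return sum((m - p) ** 2 for m, p in zip(measured, predicted))
            return min(prediction_db.items(), key=sq_err)[0]

        db = {(0, 0): [102.0, 95.0, 110.0],
              (0, 1): [99.0, 97.0, 108.0],
              (1, 0): [105.0, 93.0, 112.0]}
        print(locate([100.0, 96.5, 108.5], db))   # -> (0, 1)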

  19. Interspecific pine hybrids II. genotype by environment interactions across Australia, Swaziland, and Zimbabwe

    Science.gov (United States)

    H. S. Dungey; M. J. Dieters; D. P. Gwaze; P. G. Toon; D. G. Nikles

    2000-01-01

    Collaborative research trials of Queensland-bred pine hybrids have been established in many sites outside Australia. These trials enable the estimation of genotype x environment effects, which are important in determining the level of regionalisation needed in any breeding program. Correlations across sites testing hybrids between Pinus caribaea var...

  20. Reliability of computer memories in radiation environment

    Directory of Open Access Journals (Sweden)

    Fetahović Irfan S.

    2016-01-01

    The aim of this paper is to examine the radiation hardness of magnetic (Toshiba MK4007 GAL) and semiconductor (AT 27C010 EPROM and AT 28C010 EEPROM) computer memories. The magnetic memories were examined in a neutron radiation field, and the semiconductor memories in a gamma radiation field. The obtained results show a high radiation hardness of magnetic memories. On the other hand, semiconductor memories proved significantly more sensitive, and radiation can lead to serious damage to their functionality. [Project of the Ministry of Science of the Republic of Serbia, no. 171007]

  1. Computational Tool for Aerothermal Environment Around Transatmospheric Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of this Project is to develop a high-fidelity computational tool for accurate prediction of aerothermal environment on transatmospheric vehicles. This...

  2. Center for Advanced Energy Studies: Computer Assisted Virtual Environment (CAVE)

    Data.gov (United States)

    Federal Laboratory Consortium — The laboratory contains a four-walled 3D computer-assisted virtual environment, or CAVE™, that allows scientists and engineers to literally walk into their data...

  3. Computer Aided Software Engineering (CASE) Environment Issues.

    Science.gov (United States)

    1987-06-01

    ...development process for the understanding required to change a software base. They need to be able to repeat testing and compare results to original tests...decade, software engineering methodologies, tools and environments have exploded on the market, offering and delivering partial solutions to the software...began commanding a large market share. As software engineers came to grips with the software problem, more complex interoperable toolsets appeared

  4. Secured Authorized Data Using Hybrid Encryption in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Dinesh Shinde

    2017-03-01

    In today's world, providing security for a public network like a cloud is a very tough task; to reduce cost, cryptographic techniques are used to delegate the bulk of the decryption task to the cloud servers. As a result, attribute-based encryption with delegation emerges. Still, there are caveats and questions remaining in the previous relevant works. The cloud servers could tamper with or replace the delegated ciphertext and respond with a forged computing result with malicious intent. They may also cheat eligible users by responding that they are ineligible, for the purpose of cost saving. Furthermore, during encryption, the access policies may not be flexible enough. Since a policy for general circuits enables the strongest form of access control, a construction for realizing circuit ciphertext-policy attribute-based hybrid encryption with verifiable delegation has been considered in our work. In such a system, combined with verifiable computation and an encrypt-then-MAC mechanism, the data confidentiality, the fine-grained access control and the correctness of the delegated computing results are well guaranteed at the same time. Besides, our scheme achieves security against chosen-plaintext attacks under the k-multilinear Decisional Diffie-Hellman assumption. Moreover, an extensive simulation campaign confirms the feasibility and efficiency of the proposed solution. There are two complementary forms of attribute-based encryption. One is key-policy attribute-based encryption (KP-ABE) [8], [9], [10], and the other is ciphertext-policy attribute-based encryption. In a KP-ABE system, the decision on the access policy is made by the key distributor instead of the encrypter, which limits the practicability and usability of the system in practical applications. The access policy for general circuits could be
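
    The encrypt-then-MAC mechanism mentioned above can be illustrated independently of the ABE construction. The Python sketch below (using the 'cryptography' package for AES-CTR and the standard library for HMAC) encrypts first, MACs the ciphertext, and verifies the tag before decrypting, so a tampered delegated ciphertext is rejected. It is a generic sketch of the pattern, not the paper's scheme.

        # Encrypt-then-MAC sketch: MAC covers nonce + ciphertext; verify before
        # decrypting. Keys and nonce are random per run, purely for illustration.

        import hmac, hashlib, os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def encrypt_then_mac(enc_key, mac_key, plaintext):
            nonce = os.urandom(16)
            enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
            ct = enc.update(plaintext) + enc.finalize()
            tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
            return nonce, ct, tag

        def verify_and_decrypt(enc_key, mac_key, nonce, ct, tag):
            expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
            if not hmac.compare_digest(expect, tag):
                raise ValueError("MAC check failed: ciphertext tampered or forged")
            dec = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
            return dec.update(ct) + dec.finalize()

        k1, k2 = os.urandom(32), os.urandom(32)
        n, c, t = encrypt_then_mac(k1, k2, b"delegated result")
        assert verify_and_decrypt(k1, k2, n, c, t) == b"delegated result"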

  5. Shifting Contexts in Invisible Computing Environments

    Energy Technology Data Exchange (ETDEWEB)

    McGee, David R.; Pavel, Misha; Cohen, Philip R.; A K Dey, P Ljungstrand, A Schmidt.

    2001-03-30

    Invisible computing systems are highly context-dependent. Consequently, the influence that language has on contextual interpretation cannot be ignored by such systems. Rather, once language and other forms of human action are perceived by a system, its interpretative processes will of necessity be context-dependent. As an example, we illustrate how people simply and naturally create new contexts for interpretation by creating new names and referring expressions. We then describe Rasa, a mixed-reality system that invisibly observes and understands how users in a military command post create such contexts as part of the process of maintaining situational awareness. Rasa augments both the commander's map and the Post-it notes pasted on it, which represent units in the field, with multimodal language, thereby allowing paper-based tools to interact with digital information. Finally, we argue that architectures for such context-aware systems must reduce the inherent ambiguity and uncertainty through fusion and other means.

  6. An overview of computer viruses in a research environment

    Science.gov (United States)

    Bishop, Matt

    1991-01-01

    The threat of attack by computer viruses is in reality a very small part of a much more general threat, specifically threats aimed at subverting computer security. Here, computer viruses are examined as malicious logic in a research and development environment. A relation is drawn between the viruses and various models of security and integrity. Current research techniques aimed at controlling the threats posed to computer systems by viruses in particular, and malicious logic in general, are examined. Finally, a brief examination of the vulnerabilities of research and development systems that malicious logic and computer viruses may exploit is undertaken.

  7. A Hybrid Segmentation Framework for Computer-Assisted Dental Procedures

    Science.gov (United States)

    Hosntalab, Mohammad; Aghaeizadeh Zoroofi, Reza; Abbaspour Tehrani-Fard, Ali; Shirani, Gholamreza; Reza Asharif, Mohammad

    Teeth segmentation in computed tomography (CT) images is a major and challenging task for various computer-assisted procedures. In this paper, we introduce a hybrid method for quantification of teeth in a CT volumetric dataset, inspired by our previous experience and anatomical knowledge of teeth and jaws. In this regard, we propose a novel segmentation technique using adaptive thresholding, morphological operations, panoramic re-sampling and a variational level set algorithm. The proposed method consists of several steps as follows: first, we determine the operation region in the CT slices. Second, the bony tissues are separated from other tissues by utilizing an adaptive thresholding technique based on 3D pulse-coupled neural networks (PCNN). Third, teeth tissue is separated from other bony tissues by employing panorex lines and anatomical knowledge of teeth in the jaws. In this case, the panorex lines are estimated using Otsu thresholding and mathematical morphology operators. The proposed method then calculates the orthogonal lines corresponding to the panorex lines and panoramically re-samples the dataset. Separation of the upper and lower jaws and initial segmentation of teeth are performed by employing the integral projections of the panoramic dataset. Based on the above-mentioned procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of the teeth and apply a variational level set to refine the initial teeth boundaries to the final contour. In the last step, a surface rendering algorithm known as marching cubes (MC) is applied for volumetric visualization. The proposed algorithm was evaluated on 30 cases. Segmented images were compared with manually outlined contours. We compared the performance of the segmentation method using ROC analysis against thresholding, watershed and our previous works. The proposed method performed best. Also, our algorithm has the advantage of high speed compared to our previous works.
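
    Two of the named building blocks, global Otsu thresholding and morphological clean-up, are easy to illustrate on a single slice with scikit-image. The sketch below shows only that illustration on synthetic data; it omits the paper's 3D PCNN adaptive thresholding, panoramic re-sampling and level-set refinement.

        # Rough bone mask for one CT slice: Otsu threshold, then morphological
        # opening/closing to clean up. Synthetic data stands in for a real slice.

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.morphology import binary_opening, binary_closing, disk

        def rough_bone_mask(slice_2d):
            t = threshold_otsu(slice_2d)          # global intensity threshold
            mask = slice_2d > t                   # bright (bony) tissue
            mask = binary_opening(mask, disk(2))  # drop small bright specks
            mask = binary_closing(mask, disk(2))  # fill small holes in bone
            return mask

        rng = np.random.default_rng(0)
        fake_slice = rng.normal(100, 10, (64, 64))
        fake_slice[20:40, 20:40] += 120           # a bright "bone" block
        print(rough_bone_mask(fake_slice).sum(), "pixels flagged as bone")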

  8. Data Dissemination in Mobile Computing Environment

    Directory of Open Access Journals (Sweden)

    Dr A. Venugopal Reddy

    2009-07-01

    Data dissemination in an asymmetrical communication environment, where the downlink communication capacity is much greater than the uplink communication capacity, is best suited for the mobile environment. In this architecture there is a stationary server continuously broadcasting different data items over the air. The mobile clients continuously listen to the channel, access the data of their interest whenever it appears on the channel and download it. Typical applications of such an architecture are stock market information, weather information, traffic information, etc. The important issue to be addressed in this type of data dissemination is how quickly the mobile clients can access the data items of their interest, i.e. minimum access time, so that the mobile clients save precious battery power while they are mobile. This paper reviews the various techniques for achieving the minimum access time. The advantages and disadvantages are discussed, and different research areas for achieving the minimum access time are explored.

  9. Purple Computational Environment With Mappings to ACE Requirements for the General Availability User Environment Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Barney, B; Shuler, J

    2006-08-21

    Purple is an Advanced Simulation and Computing (ASC) funded massively parallel supercomputer located at Lawrence Livermore National Laboratory (LLNL). The Purple Computational Environment documents the capabilities and the environment provided for the FY06 LLNL Level 1 General Availability Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and Sandia National Laboratories, but also documents needs of the LLNL and Alliance users working in the unclassified environment. Additionally, the Purple Computational Environment maps the provided capabilities to the Trilab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the General Availability user environment capabilities of the ASC community. Appendix A lists these requirements and includes a description of ACE requirements met and those requirements that are not met for each section of this document. The Purple Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the Tri-lab community.

  10. Computational analysis on plug-in hybrid electric motorcycle chassis

    Science.gov (United States)

    Teoh, S. J.; Bakar, R. A.; Gan, L. M.

    2013-12-01

    The plug-in hybrid electric motorcycle (PHEM) is an alternative that promotes sustainability and lower emissions. However, the overall PHEM system packaging is constrained by the limited space in a motorcycle chassis. In this paper, a chassis applying the concept of a Chopper is analysed for application in a PHEM. The chassis 3-dimensional (3D) model is built with CAD software. The PHEM power-train components and drive-train mechanisms are integrated into the 3D model to ensure the chassis provides sufficient space. Besides that, a human dummy model is built into the 3D model to ensure the rider's ergonomics and comfort. The chassis 3D model then undergoes stress-strain simulation. The simulation predicts the stress distribution, displacement and factor of safety (FOS). The data are used to identify the critical points, thus suggesting whether the chassis design is applicable or needs to be redesigned/modified to meet the required strength. Critical points mean the highest stress, which might cause the chassis to fail; for a motorcycle chassis these points occur at the joints at the triple tree and the rear absorber bracket. In conclusion, the computational analysis predicts the stress distribution and provides a guideline to develop a safe prototype chassis.

  11. The ICAAP Project, Part Three: OSF Distributed Computing Environment.

    Science.gov (United States)

    Cantor, Scott

    1997-01-01

    DCE (Distributed Computing Environment) is a collection of services, tools, and libraries for building the infrastructure necessary for distributed computing within an enterprise. This articles discusses the Open Software Foundation (OSF); the components of DCE, including the Directory and Security Services, the Distributed Time Service, and the…

  12. Distributed Computing: Considerations for Its Use within Educational Environments.

    Science.gov (United States)

    Pratt, S. J.

    1985-01-01

    Emphasizing more effective use of existing equipment, this article highlights distributed computing design considerations applicable to educational environments; identifies potential roles of networking in the provision of adequate teaching aids; presents a networking model; and describes the development of a distributed computing configuration at…

  13. QCMPI: A parallel environment for quantum computing

    Science.gov (United States)

    Tabakin, Frank; Juliá-Díaz, Bruno

    2009-06-01

    QCMPI is a quantum computer (QC) simulation package written in Fortran 90 with parallel processing capabilities. It is an accessible research tool that permits rapid evaluation of quantum algorithms for a large number of qubits and for various "noise" scenarios. The prime motivation for developing QCMPI is to facilitate numerical examination of not only how QC algorithms work, but also to include noise, decoherence, and attenuation effects and to evaluate the efficacy of error correction schemes. The present work builds on an earlier Mathematica code QDENSITY, which is mainly a pedagogic tool. In that earlier work, although the density matrix formulation was featured, the description using state vectors was also provided. In QCMPI, the stress is on state vectors, in order to employ a large number of qubits. The parallel processing feature is implemented by using the Message-Passing Interface (MPI) protocol. A description of how to spread the wave function components over many processors is provided, along with how to efficiently describe the action of general one- and two-qubit operators on these state vectors. These operators include the standard Pauli, Hadamard, CNOT and CPHASE gates and also Quantum Fourier transformation. These operators make up the actions needed in QC. Codes for Grover's search and Shor's factoring algorithms are provided as examples. A major feature of this work is that concurrent versions of the algorithms can be evaluated with each version subject to alternate noise effects, which corresponds to the idea of solving a stochastic Schrödinger equation. The density matrix for the ensemble of such noise cases is constructed using parallel distribution methods to evaluate its eigenvalues and associated entropy. Potential applications of this powerful tool include studies of the stability and correction of QC processes using Hamiltonian based dynamics. Program summary: Program title: QCMPI; Catalogue identifier: AECS_v1_0; Program summary URL
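
    The core state-vector operation that QCMPI distributes can be shown serially in a few lines: a one-qubit gate acts on pairs of amplitudes whose indices differ only in the target bit. The NumPy sketch below illustrates this pairing on a single process, assuming qubit 0 is the least significant bit; QCMPI performs the same update with the amplitudes spread across MPI ranks.

        # Apply a 2x2 gate to one qubit of an n-qubit state vector by pairing
        # amplitudes whose indices differ only in the target bit.

        import numpy as np

        def apply_one_qubit(state, gate, target, n_qubits):
            """state: complex vector of length 2**n_qubits; gate: 2x2 matrix."""
            stride = 1 << target
            for base in range(0, 1 << n_qubits, stride << 1):
                for off in range(stride):
                    i0, i1 = base + off, base + off + stride
                    a0, a1 = state[i0], state[i1]
                    state[i0] = gate[0, 0] * a0 + gate[0, 1] * a1
                    state[i1] = gate[1, 0] * a0 + gate[1, 1] * a1
            return state

        H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
        psi = np.zeros(2 ** 3, dtype=complex); psi[0] = 1.0   # |000>
        apply_one_qubit(psi, H, target=0, n_qubits=3)
        print(np.round(psi, 3))   # equal weight on |000> and |001>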

  14. Computer-aided design development transition for IPAD environment

    Science.gov (United States)

    Owens, H. G.; Mock, W. D.; Mitchell, J. C.

    1980-01-01

    The relationship of federally sponsored computer-aided design/computer-aided manufacturing (CAD/CAM) programs to the aircraft life cycle design process, an overview of NAAD's CAD development program, an evaluation of the CAD design process, a discussion of the current computing environment within which NAAD is developing its CAD system, some of the advantages/disadvantages of the NAAD-IPAD approach, and CAD developments during transition into the IPAD system are discussed.

  15. The computer revolution in science: steps towards the realization of computer-supported discovery environments

    NARCIS (Netherlands)

    Jong, de Hidde; Rip, Arie

    1997-01-01

    The tools that scientists use in their search processes together form so-called discovery environments. The promise of artificial intelligence and other branches of computer science is to radically transform conventional discovery environments by equipping scientists with a range of powerful compute

  16. Learning with Computer-Based Learning Environments: A Literature Review of Computer Self-Efficacy

    Science.gov (United States)

    Moos, Daniel C.; Azevedo, Roger

    2009-01-01

    Although computer-based learning environments (CBLEs) are becoming more prevalent in the classroom, empirical research has demonstrated that some students have difficulty learning with these environments. The motivation construct of computer self-efficacy plays an integral role in learning with CBLEs. This literature review synthesizes research…

  17. Environments for online maritime simulators with cloud computing capabilities

    Science.gov (United States)

    Raicu, Gabriel; Raicu, Alexandra

    2016-12-01

    This paper presents the cloud computing environments, network principles and methods for graphical development in realistic naval simulation, naval robotics and virtual interactions. The aim of this approach is to achieve good simulation quality in large networked environments using open-source solutions designed for educational purposes. Realistic rendering of maritime environments requires near real-time frameworks with enhanced computing capabilities for distance interactions. E-Navigation concepts coupled with the latest achievements in virtual and augmented reality will enhance the overall experience, leading to new developments and innovations. We have to deal with a multiprocessing situation using advanced technologies and distributed applications, using remote ship scenarios and automation of ship operations.

  18. Combining Online and Hybrid Teaching Environments in German Courses

    Science.gov (United States)

    Keim, Lucrecia

    2015-01-01

    In this article, we briefly offer the main characteristics of a hybrid design for Face-to-Face (FtF) and online German courses in the degree of Translation and Interpreting that combines the textbook with activities moderated with technology. We particularly focus on the activities designed for practicing oral production at level A2.2., where we…

  19. Optimal Joint Multiple Resource Allocation Method for Cloud Computing Environments

    CERN Document Server

    Kuribayashi, Shin-ichi

    2011-01-01

    Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources. To provide cloud computing services economically, it is important to optimize resource allocation under the assumption that the required resource can be taken from a shared resource pool. In addition, to be able to provide processing ability and storage capacity, it is necessary to allocate bandwidth to access them at the same time. This paper proposes an optimal resource allocation method for cloud computing environments. First, this paper develops a resource allocation model of cloud computing environments, assuming both processing ability and bandwidth are allocated simultaneously to each service request and rented out on an hourly basis. The allocated resources are dedicated to each service request. Next, this paper proposes an optimal joint multiple resource allocation method, based on the above resource allocation model. It is demonstrated by simulation evaluation that the p...
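
    The paper's premise, that processing ability and bandwidth must be granted together, can be illustrated with an all-or-nothing admission check; the pool sizes and request values below are invented for illustration.

        # Joint allocation sketch: a request is admitted only if *both* resource
        # types can be taken from the shared pool at once.

        def admit(request, pool):
            ok = (pool["cpu"] >= request["cpu"]) and (pool["bw"] >= request["bw"])
            if ok:                                # all-or-nothing joint allocation
                pool["cpu"] -= request["cpu"]
                pool["bw"] -= request["bw"]
            return ok

        pool = {"cpu": 100, "bw": 50}
        for req in [{"cpu": 60, "bw": 10}, {"cpu": 60, "bw": 10}, {"cpu": 20, "bw": 45}]:
            print(admit(req, pool), pool)

    Note that the last request in this toy run is refused even though enough processing ability remains: the bandwidth side of the joint allocation falls short, which is exactly the coupling the paper models.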

  20. Effects of Hybrid and Non-hybrid Epichloë Endophytes and Their Associated Host Genotypes on the Response of a Native Grass to Varying Environments.

    Science.gov (United States)

    Jia, Tong; Oberhofer, Martina; Shymanovich, Tatsiana; Faeth, Stanley H

    2016-07-01

    Asexual Epichloë endophytes are prevalent in cool season grasses, and many are of hybrid origin. Hybridization of asexual endophytes is thought to provide a rapid influx of genetic variation that may be adaptive to endophyte-host grass symbiota in stressful environments. For Arizona fescue (Festuca arizonica), hybrid symbiota are commonly found in resource-poor environments, whereas non-hybrid symbiota are more common in resource-rich environments. There have been very few experimental tests where infection, hybrid and non-hybrid status, and plant genotype have been controlled to tease apart their effects on host phenotype and fitness in different environments. We conducted a greenhouse experiment where hybrid (H) and non-hybrid (NH) endophytes were inoculated into plant genotypes that were originally uninfected (E-) or once infected with either the H or NH endophytes. Nine endophyte and plant genotypic group combinations were grown under low and high water and nutrient treatments. Inoculation with the resident H endophyte enhanced growth and altered allocation to roots and shoots, but these effects were greatest in resource-rich environments, contrary to expectations. We found no evidence of co-adaptation between endophyte species and their associated host genotypes. However, naturally E- plants performed better when inoculated with the hybrid endophyte, suggesting these plants were derived from H infected lineages. Our results show complex interactions between endophyte species of hybrid and non-hybrid origin with their host plant genotypes and environmental factors.

  1. ANIBAL - a Hybrid Computer Language for EAI 680-PDP 8/I, FPP 12

    DEFF Research Database (Denmark)

    Højberg, Kristian Søe

    1974-01-01

    A hybrid programming language, ANIBAL, has been developed for use in an open-shop computing centre with an EAI-680 analog computer, a PDP-8/I digital computer, and an FPP-12 floating-point processor. An 8K core memory and 812K disk memory are included. The new language consists of standard FORTRAN IV...

  2. Intrusion Detection System Inside Grid Computing Environment (IDS-IGCE

    Directory of Open Access Journals (Sweden)

    Basappa B. Kodada

    2012-01-01

    Grid Computing is an important information technology which enables global resource sharing to solve large-scale problems. It is based on networks and enables large-scale aggregation and sharing of computational, data, sensor and other resources across institutional boundaries. The Globus Toolkit integrated with Web services presents OGSA (Open Grid Services Architecture) as the standard service grid architecture. In OGSA, everything is abstracted as a service, including computers, applications, data as well as instruments. The services and resources in a Grid are heterogeneous and dynamic, and they also belong to different domains. Grid services are still new to business systems, and as more systems are attached, any threat could bring collapse and huge harm; an intruder may come with a new form of attack. Grid computing as a global infrastructure on the Internet has led to security attacks on the computing infrastructure. A wide variety of IDSs (Intrusion Detection Systems) are available, designed to handle specific types of attacks. The technique of [27] protects against future attacks in a service grid computing environment at the grid infrastructure level, but no technique can protect against these types of attacks inside the grid at the node level. So this paper proposes the architecture of IDS-IGCE (Intrusion Detection System – Inside Grid Computing Environment), which can provide protection against the complete range of threats inside the Grid environment.

  3. A Survey on Interoperability in the Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    Bahman Rashidi

    2013-07-01

    In recent years, Cloud Computing has been one of the top ten new technologies, providing various services such as software, platform and infrastructure for Internet users. Cloud Computing is a promising IT paradigm which enables the Internet to evolve into a global market of collaborating services. In order to provide better services for cloud customers, cloud providers need services that cooperate with other services. Therefore, Cloud Computing semantic interoperability plays a key role in Cloud Computing services. In this paper, we address interoperability issues in Cloud Computing environments. After a description of Cloud Computing interoperability from different aspects and references, we describe two architectures of cloud service interoperability. Architecturally, we classify existing interoperability challenges and describe them. Moreover, we use these aspects to discuss and compare several interoperability approaches.

  4. A hybrid matching method for geospatial services in a composition-oriented environment

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    With the development of the Internet and GIS, large volumes of spatial data, powerful computing resources and many spatial data processing functions are published in the form of Web services. Finding suitable geospatial services in the composition-oriented environment is a crucial task. The semantic Web provides a kind of technology to find and compose various service resources automatically through the Web. This paper proposes a hybrid method for the semantic matching of geospatial services. The method includes two parts. Part 1 puts forward a multi-level semantic matching approach, which matches a single geospatial service at four levels: classification, input/output, precondition/effect and the quality of service (QoS). This multi-level matching approach makes single-service matching quicker and more accurate. Part 2 puts forward a matching algorithm for a geospatial service chain based on the context. The algorithm adopts a trace algorithm, taking account of the effect of the context. It restricts the input/output parameters of the current service by the input/output parameters of the service chain, pre-service and sub-service. It matches the atomic service dynamically in a composition-oriented environment, and accurately converts the abstract model of geospatial services into an executable geospatial service chain. A case study of flood analysis for Poyang Lake illustrates the effectiveness of our context-based matching method for geospatial services.
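
    A minimal sketch of the four-level matching idea: a candidate service is accepted only if classification, input/output, precondition/effect and QoS all satisfy the request. The service descriptions and fields below are invented stand-ins, not the paper's ontology.

        # Four-level service matching sketch: every level must pass.

        def matches(request, service):
            return (request["category"] == service["category"]             # level 1
                    and set(request["inputs"]) <= set(service["inputs"])   # level 2
                    and set(request["effects"]) <= set(service["effects"]) # level 3
                    and service["qos"] >= request["min_qos"])              # level 4

        services = [
            {"name": "FloodDepth", "category": "hydrology",
             "inputs": {"DEM", "rainfall"}, "effects": {"depth_grid"}, "qos": 0.9},
            {"name": "SlopeOnly", "category": "terrain",
             "inputs": {"DEM"}, "effects": {"slope_grid"}, "qos": 0.95},
        ]
        req = {"category": "hydrology", "inputs": ["DEM", "rainfall"],
               "effects": ["depth_grid"], "min_qos": 0.8}
        print([s["name"] for s in services if matches(req, s)])  # ['FloodDepth']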

  5. Building an Advanced Computing Environment with SAN Support

    Institute of Scientific and Technical Information of China (English)

    Dajian YANG; Mei MA; et al.

    2001-01-01

    The current computing environment of our Computing Center at IHEP uses a SAS (Server Attached Storage) architecture, attaching all the storage devices directly to the machines. This kind of storage strategy cannot meet the requirements of our BEPC II/BES III project properly. Thus we design and implement a SAN-based computing environment, which consists of several computing farms, a three-level storage pool, a set of storage management software and a web-based data management system. The features of our system include cross-platform data sharing, fast data access, high scalability, convenient storage management and data management.

  6. Proposed congestion control method for cloud computing environments

    CERN Document Server

    Kuribayashi, Shin-ichi

    2012-01-01

    As cloud computing services rapidly expand their customer base, it has become important to share cloud resources, so as to provide them economically. In cloud computing services, multiple types of resources, such as processing ability, bandwidth and storage, need to be allocated simultaneously. If there is a surge of requests, a competition will arise between these requests for the use of cloud resources. This leads to the disruption of the service and it is necessary to consider a measure to avoid or relieve congestion of cloud computing environments. This paper proposes a new congestion control method for cloud computing environments which reduces the size of required resource for congested resource type instead of restricting all service requests as in the existing networks. Next, this paper proposes the user service specifications for the proposed congestion control method, and clarifies the algorithm to decide the optimal size of required resource to be reduced, based on the load offered to the system. I...

  7. HeNCE: A Heterogeneous Network Computing Environment

    Directory of Open Access Journals (Sweden)

    Adam Beguelin

    1994-01-01

    Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.
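
    The graph model is easy to sketch: nodes are ordinary routines, arcs are data dependencies, and any node whose inputs are ready may fire. The Python sketch below runs such a graph serially with the standard library's graphlib; real HeNCE maps ready nodes onto networked machines via PVM, and the node names and functions here are invented.

        # Dependency-graph execution sketch: nodes run once all prerequisites
        # have produced their results.

        from graphlib import TopologicalSorter

        def run_graph(deps, funcs):
            """deps: {node: set of prerequisite nodes}; funcs: {node: callable}."""
            ts = TopologicalSorter(deps)
            ts.prepare()
            results = {}
            while ts.is_active():
                for node in ts.get_ready():       # these could run in parallel
                    results[node] = funcs[node](results)
                    ts.done(node)
            return results

        deps = {"load": set(), "fft": {"load"}, "filter": {"load"},
                "merge": {"fft", "filter"}}
        funcs = {"load":   lambda r: [1.0, 2.0, 3.0],
                 "fft":    lambda r: sum(r["load"]),
                 "filter": lambda r: max(r["load"]),
                 "merge":  lambda r: r["fft"] + r["filter"]}
        print(run_graph(deps, funcs)["merge"])    # 9.0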

  8. Hybrid data storage system in an HPC exascale environment

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Gupta, Uday K.; Tzelnic, Percy; Ting, Dennis P. J.

    2015-08-18

    A computer-executable method, system, and computer program product for managing I/O requests from a compute node in communication with a data storage system, including a first burst buffer node and a second burst buffer node, the computer-executable method, system, and computer program product comprising striping data on the first burst buffer node and the second burst buffer node, wherein a first portion of the data is communicated to the first burst buffer node and a second portion of the data is communicated to the second burst buffer node, processing the first portion of the data at the first burst buffer node, and processing the second portion of the data at the second burst buffer node.
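
    A minimal sketch of the striping step in the claim: fixed-size stripes of an I/O request are dealt round-robin to the burst-buffer nodes, so each absorbs part of the write. The buffer objects below are trivial stand-ins for real burst-buffer daemons, and the stripe size is arbitrary.

        # Round-robin striping of a write across two burst-buffer nodes.

        def stripe_write(data, buffers, stripe_size=4):
            """Deal fixed-size stripes across the burst-buffer nodes in turn."""
            for i in range(0, len(data), stripe_size):
                buffers[(i // stripe_size) % len(buffers)].append(data[i:i + stripe_size])

        bb1, bb2 = [], []
        stripe_write(b"ABCDEFGHIJKLMNOP", [bb1, bb2])
        print(bb1)   # [b'ABCD', b'IJKL']
        print(bb2)   # [b'EFGH', b'MNOP']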

  9. A Compute Environment of ABC95 Array Computer Based on Multi-FPGA Chip

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The ABC95 array computer is a multi-function-network computer based on FPGA technology. The multi-function network supports conflict-free access by processors to data in memory, and supports processor-to-processor data access based on an enhanced MESH network. The ABC95 instruction system includes control instructions, scalar instructions and vector instructions; the network instructions in particular are introduced. A programming environment for ABC95 array computer assembly language is designed, and a programming environment for the ABC95 array computer under VC++ is advanced. It includes functions to load ABC95 array computer programs and data, store data, run programs and so on. In particular, the data type for ABC95 array computer conflict-free access is defined. The results show that these technologies help develop programs for the ABC95 array computer effectively.

  10. An Indoor Ubiquitous Computing Environment Based on Location awareness

    Institute of Scientific and Technical Information of China (English)

    PU Fang; SUN Dao-qing; CAO Qi-ying; CAI Hai-bin; LI Yong-ning

    2006-01-01

    To provide the right services or information to the right users, at the right time and in the right place in a ubiquitous computing environment, an Indoor Ubiquitous Computing Environment based on Location-Awareness (IUCELA) is presented in this paper. A general architecture of IUCELA is designed to connect multiple sensing devices with location-aware applications. Then the function of the location-aware middleware, which is the core component of the proposed architecture, is elaborated. Finally an indoor forum is taken as an example scenario to demonstrate the security, usefulness, flexibility and robustness of IUCELA.

  11. Computer support system for residential environment evaluation for citizen participation

    Institute of Scientific and Technical Information of China (English)

    GE Jian; TEKNOMO Kardi; LU Jiang; HOKAO Kazunori

    2005-01-01

    Though the method of citizen participation in urban planning is quite well established, existing participation systems have not coped adequately with the specific segment of the residential environment. The specific residential environment has detailed aspects that need positive, high-level involvement of the citizens in all stages and every field of the plan. One of the best and most systematic methods to obtain more involved citizens is through a citizen workshop. To get more "educated" citizens participating in the workshop, a special session informing them of what was previously gathered through a survey proved to be a prerequisite before the workshop, and a computer support system is one of the best tools for this purpose. This paper describes the development of the computer support system for residential environment evaluation, which is an essential tool to give more information to citizens before their participation in a public workshop. The significant contribution of this paper is the educational system framework involved in the workshop on the public participation system through computer support, especially for the residential environment. The framework, development and application of the computer support system are described. The application of a workshop on the computer support system was commented on as very valuable and helpful by the audience, as it resulted in the greater benefit of a wider range of participation and a deeper level of citizen understanding.

  12. Collaborative virtual reality environments for computational science and design.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M. E.

    1998-02-17

    The authors are developing a networked, multi-user, virtual-reality-based collaborative environment coupled to one or more petaFLOPs computers, enabling the interactive simulation of 10^9 atom systems. The purpose of this work is to explore the requirements for this coupling. Through the design, development, and testing of such systems, they hope to gain knowledge that allows computational scientists to discover and analyze their results more quickly and in a more intuitive manner.

  13. Networked Environments that Create Hybrid Spaces for Learning Science

    Science.gov (United States)

    Otrel-Cass, Kathrin; Khoo, Elaine; Cowie, Bronwen

    2014-01-01

    Networked learning environments that embed the essence of the Community of Inquiry (CoI) framework utilise pedagogies that encourage dialogic practices. This can be of significance for classroom teaching across all curriculum areas. In science education, networked environments are thought to support student investigations of scientific problems,…

  14. Optimize the Security Performance of the Computing Environment of IHEP

    Institute of Scientific and Technical Information of China (English)

    Rong-sheng XU; Bao-Xu LIU

    2001-01-01

    This paper gives background on crackers, then enumerates and discusses attack events that have occurred on IHEP networks. Finally, it describes in detail a highly efficient defence system that integrates the authors' experience and research results and has been put into practice in the IHEP network environment. The paper also gives network and information security advice, and outlines the security process that will be implemented in the future for the high-energy physics computing environment at the Institute of High Energy Physics.

  15. Understanding the Offender/Environment Dynamic for Computer Crimes

    DEFF Research Database (Denmark)

    Willison, Robert Andrew

    2005-01-01

    There is currently a paucity of literature focusing on the relationship between the actions of staff members, who perpetrate some form of computer abuse, and the organisational environment in which such actions take place. A greater understanding of such a relationship may complement existing security practices by possibly highlighting new areas for safeguard implementation. To help facilitate a greater understanding of the offender/environment dynamic, this paper assesses the feasibility of applying criminological theory to the IS security context. More specifically, three theories are advanced, which focus…

  16. Towards Self Configured Multi-Agent Resource Allocation Framework for Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    M.N.Faruk

    2014-05-01

    Full Text Available Virtualization and cloud computing environments promise numerous benefits for the IT industry, such as improved flexibility and stabilized energy efficiency with minimal operating costs. However, highly unpredictable workloads make it difficult to guarantee quality of service while at the same time ensuring efficient resource utilization. To avoid breaches of SLAs (Service-Level Agreements) or unproductive resource utilization, resource allocations in a virtual environment must be tailored continuously during execution to the dynamic application workloads. In this work, we describe a hybrid, self-configured resource allocation model for cloud environments based on dynamic application workload models. We present a comprehensive setup of a representative simulated enterprise application, the new Virtenterprise_Cloudapp benchmark, deployed on a dynamic virtualized cloud platform.

  17. Toward an automated parallel computing environment for geosciences

    Science.gov (United States)

    Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping

    2007-08-01

    Software for geodynamic modeling has not kept up with the fast growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, to take full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will facilitate high-performance computing to be integrated with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.

  18. Hybrid NN/SVM Computational System for Optimizing Designs

    Science.gov (United States)

    Rai, Man Mohan

    2009-01-01

    A computational method and system based on a hybrid of an artificial neural network (NN) and a support vector machine (SVM) (see figure) has been conceived as a means of maximizing or minimizing an objective function, optionally subject to one or more constraints. Such maximization or minimization could be performed, for example, to solve a data-regression or data-classification problem or to optimize a design associated with a response function. A response function can be considered as a subset of a response surface, which is a surface in a vector space of design and performance parameters. A typical example of a design problem that the method and system can be used to solve is that of an airfoil, for which a response function could be the spatial distribution of pressure over the airfoil. In this example, the response surface would describe the pressure distribution as a function of the operating conditions and the geometric parameters of the airfoil. The use of NNs to analyze physical objects in order to optimize their responses under specified physical conditions is well known. NN analysis is suitable for multidimensional interpolation of data that lack structure and enables the representation and optimization of a succession of numerical solutions of increasing complexity or increasing fidelity to the real world. NN analysis is especially useful in helping to satisfy multiple design objectives. Feedforward NNs can be used to make estimates based on nonlinear mathematical models. One difficulty associated with the use of a feedforward NN arises from the need for nonlinear optimization to determine connection weights among input, intermediate, and output variables. It can be very expensive to train an NN in cases in which it is necessary to model large amounts of information. Less widely known (in comparison with NNs) are support vector machines (SVMs), which were originally applied in statistical learning theory. In terms that are necessarily…

  19. Study on Human-Computer Interaction in Immersive Virtual Environment

    Institute of Scientific and Technical Information of China (English)

    段红; 黄柯棣

    2002-01-01

    Human-computer interaction is one of the most important issues in virtual environments research. This paper introduces interaction software developed for a virtual operating environment for space experiments. The core components of the interaction software are: an object-oriented database for behavior management of virtual objects, a software agent called the virtual eye for viewpoint control, and a software agent called the virtual hand for object manipulation. Based on these components, several example programs for object manipulation have been developed. The user can observe the virtual environment through a head-mounted display system, control the viewpoint by head tracker and/or keyboard, and select and manipulate virtual objects with a 3D mouse.

  20. Context Management Middleware in Heterogeneous Mobile Computing Environments

    Science.gov (United States)

    Qureshi, Salim Raza

    2010-11-01

    Mobile computing environments are characterized by heterogeneity—systems consisting of different device types, operating systems, network interfaces, and communication protocols. Such heterogeneity calls for middleware that can adapt to different execution contexts, hide heterogeneity from applications, and transparently and dynamically switch between network and sensor technologies. Additionally, middleware for context-aware systems must keep a context model (a model of their environment), taking into account several aspects of the environment. The more complex and heterogeneous an execution environment is, the more complicated its underlying context model. Moreover, because systems can evolve, context management must also support model evolution without restarting, reconfiguring, or redeploying applications and services. We describe a context management middleware that can efficiently handle context despite the execution environment's heterogeneity and evolution. It uses context meta-information to improve a context-aware system's overall performance.

  1. Student Perspectives of Computer Literacy Education in an International Environment

    Science.gov (United States)

    Vasilache, Simona

    2016-01-01

    Computer literacy education is an integral part of early university education (it often starts at the high school level). A wide variety of university course structures and teaching styles exist and, at the same time, the knowledge levels of incoming students are varied. This issue is even more pressing in an international environment. This paper…

  2. CMS Monte Carlo production operations in a distributed computing environment

    CERN Document Server

    Mohapatra, A; Khomich, A; Lazaridis, C; Hernández, J M; Caballero, J; Hof, C; Kalinin, S; Flossdorf, A; Abbrescia, M; De Filippis, N; Donvito, G; Maggi, G; My, S; Pompili, A; Sarkar, S; Maes, J; Van Mulders, P; Villella, I; De Weirdt, S; Hammad, G; Wakefield, S; Guan, W; Lajas, J A S; Elmer, P; Evans, D; Fanfani, A; Bacchi, W; Codispoti, G; Van Lingen, F; Kavka, C; Eulisse, G

    2008-01-01

    Monte Carlo production for the CMS experiment is carried out in a distributed computing environment; the goal of producing 30M simulated events per month in the first half of 2007 has been reached. A brief overview of the production operations and statistics is presented.

  3. Delivering Interactive Multimedia Services in Dynamic Pervasive Computing Environments

    NARCIS (Netherlands)

    Hesselman, C.; Cesar Garcia, P.S.; Vaishnavi, I.; Boussard, M.; Kernchen, R.; Meissner, S.; Spedalieri, A.; Sinfreu, A.; Raeck, C.

    2008-01-01

    This paper introduces a distributed system for next generation multimedia support in dynamically changing pervasive computing environments. The overall goal is to enhance the experience of mobile users by intelligently adapting the way a service is presented, in particular by adapting the way the us

  4. The Hyper Apuntes Interactive Learning Environment for Computer Programming Teaching.

    Science.gov (United States)

    Sommaruga, Lorenzo; Catenazzi, Nadia

    1998-01-01

    Describes the "Hyper Apuntes" interactive learning environment, used as a didactic support to a computer programming course taught at the University Carlos III of Madrid, Spain. The system allows students to study the material and see examples, edit, compile and run programs, and evaluate their learning degree. It is installed on a Web server,…

  5. Learning To Read in Culturally Responsive Computer Environments. CIERA Report.

    Science.gov (United States)

    Pinkard, Nichole

    This report is a description and evaluation of two computer-based learning environments, Rappin' Reader and Say Say Oh Playmate, that build upon the lived literacy experiences African-American children bring to classrooms as scaffolds for early literacy instruction. When Rappin' Reader and Say Say Oh Playmate were used with…

  6. Enhancing Computer Science Education with a Wireless Intelligent Simulation Environment

    Science.gov (United States)

    Cook, Diane J.; Huber, Manfred; Yerraballi, Ramesh; Holder, Lawrence B.

    2004-01-01

    The goal of this project is to develop a unique simulation environment that can be used to increase students' interest and expertise in Computer Science curriculum. Hands-on experience with physical or simulated equipment is an essential ingredient for learning, but many approaches to training develop a separate piece of equipment or software for…

  7. Human-Computer Interaction (HCI) in Educational Environments: Implications of Understanding Computers as Media.

    Science.gov (United States)

    Berg, Gary A.

    2000-01-01

    Reviews literature in the field of human-computer interaction (HCI) as it applies to educational environments. Topics include the origin of HCI; human factors; usability; computer interface design; goals, operations, methods, and selection (GOMS) models; command language versus direct manipulation; hypertext; visual perception; interface…

  8. A Matchmaking Strategy Of Mixed Resource On Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Wisam Elshareef

    2015-08-01

    Full Text Available Today, cloud computing has become a key technology for the online allotment of computing resources and the online storage of user data at lower cost, where computing resources are available all the time over the Internet on a pay-per-use basis. Recently there is a growing need for resource management strategies in cloud computing environments that encompass both end-user satisfaction and high job-submission throughput with appropriate scheduling. One of the major and essential issues in resource management is matchmaking: allocating incoming tasks to suitable virtual machines. The main objective of this paper is to propose a matchmaking strategy between incoming requests and the various resources in the cloud environment that satisfies the requirements of users and balances the workload on resources. Load balancing is an important aspect of resource management in a cloud computing environment, so this paper proposes a dynamic weight active monitor (DWAM) load-balancing algorithm, which allocates incoming requests on the fly to all available virtual machines in an efficient manner in order to achieve better performance parameters such as response time, processing time and resource utilization. The feasibility of the proposed algorithm is analyzed using the CloudSim simulator, which demonstrates the superiority of the proposed DWAM algorithm over its counterparts in the literature. Simulation results show that the proposed algorithm dramatically improves response time and data-processing time and utilizes resources better compared to the Active Monitor and VM-assign algorithms.
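
    The abstract specifies DWAM only at this level of detail; purely as an illustration of weight-based VM selection, not the paper's implementation (all names, weights and the load metric below are hypothetical):

```python
# Sketch: each VM carries a capacity weight; an incoming request goes to
# the VM with the lowest load-to-weight ratio, and the monitor updates
# the load counts on assignment.
class VM:
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight   # relative capacity, e.g. number of vCPUs
        self.active = 0        # requests currently being served

def assign(vms):
    vm = min(vms, key=lambda v: v.active / v.weight)
    vm.active += 1
    return vm

vms = [VM("vm-small", 1), VM("vm-medium", 2), VM("vm-large", 4)]
for request_id in range(7):
    print(request_id, "->", assign(vms).name)
```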

  9. A Hierarchical Load Balancing Policy for Grid Computing Environment

    Directory of Open Access Journals (Sweden)

    Said Fathy El Zoghdy

    2012-06-01

    Full Text Available With the rapid development of high-speed wide-area networks and powerful yet low-cost computational resources, grid computing has emerged as an attractive computing paradigm. It provides resources for solving large scientific applications and is typically composed of heterogeneous resources, such as clusters or sites at different administrative domains, connected by networks with widely varying performance characteristics. The service level of the grid software infrastructure provides two essential functions: workload and resource management. To utilize the resources of these environments efficiently, effective load balancing and resource management policies are fundamentally important. This paper addresses the problem of load balancing and task migration in grid computing environments. We propose a fully decentralized two-level load balancing policy for computationally intensive tasks on a heterogeneous multi-cluster grid environment. It resolves the single-point-of-failure problem from which many of the current policies suffer. In this policy, any site manager receives two kinds of tasks, namely remote tasks arriving from its associated local grid manager, and local tasks submitted directly to the site manager by local users in its domain, which makes this policy closer to reality and distinguishes it from similar policies. It distributes the grid workload based on the resource occupation ratio and the communication cost. The grid overall mean task response time is considered the main performance metric that needs to be minimized. The simulation results show that the proposed load balancing policy improves the grid overall mean task response time.
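
    The abstract does not give the exact cost function; a minimal sketch of the site-selection step, assuming a site's score is a weighted sum of its resource occupation ratio and its communication cost (all data and weights below are invented):

```python
# Pick the site minimizing a combined score of occupation ratio and
# communication cost, as in the two-level policy's distribution step.
def pick_site(sites, alpha=0.7):
    def score(site):
        occupation = site["busy"] / site["total"]
        return alpha * occupation + (1 - alpha) * site["comm_cost"]
    return min(sites, key=score)

sites = [
    {"name": "site-A", "busy": 40, "total": 64, "comm_cost": 0.2},
    {"name": "site-B", "busy": 10, "total": 32, "comm_cost": 0.5},
    {"name": "site-C", "busy": 90, "total": 128, "comm_cost": 0.1},
]
print(pick_site(sites)["name"])   # site-B: lowest combined score here
```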

  10. New computing systems, future computing environment, and their implications on structural analysis and design

    Science.gov (United States)

    Noor, Ahmed K.; Housner, Jerrold M.

    1993-01-01

    Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.

  11. Reducing Total Power Consumption Method in Cloud Computing Environments

    CERN Document Server

    Kuribayashi, Shin-ichi

    2012-01-01

    The widespread use of cloud computing services is expected to rapidly increase the power consumed by ICT equipment in cloud computing environments. This paper first identifies the need for collaboration among servers, the communication network and the power network in order to reduce the total power consumption of the entire ICT equipment in cloud computing environments. Five fundamental policies for the collaboration are proposed and the algorithm to realize each collaboration policy is outlined. Next, this paper proposes possible signaling sequences for exchanging information on power consumption between the network and servers, in order to realize the proposed collaboration policies. Then, in order to reduce the power consumption of the network, this paper proposes a method of simply estimating the volume of power consumed by all network devices and assigning it to individual users.
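
    As a worked miniature of the last idea (simple estimation plus per-user assignment), assuming power is apportioned in proportion to each user's traffic; all device counts and wattages below are invented:

```python
# Estimate total network power from device counts, then assign each user
# a share proportional to the user's traffic volume.
def network_power(devices):
    # devices: {name: (count, watts_per_device)}
    return sum(count * watts for count, watts in devices.values())

def user_share(total_power, user_traffic, all_traffic):
    return total_power * user_traffic / all_traffic

devices = {"router": (4, 300.0), "switch": (20, 50.0)}
total = network_power(devices)                                 # 2200.0 W
print(user_share(total, user_traffic=5.0, all_traffic=200.0))  # 55.0 W
```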

  12. Accelerating Computation of DNA Sequence Alignment in Distributed Environment

    Science.gov (United States)

    Guo, Tao; Li, Guiyang; Deaton, Russel

    Sequence similarity and alignment are among the most important operations in computational biology. However, analyzing large sets of DNA sequences is impractical on a regular PC. Using multiple threads with the JavaParty mechanism, this project successfully extended the capabilities of regular Java to a distributed environment for the simulation of DNA computation. With the aid of JavaParty and a multi-threaded design, the results of this study demonstrate that the modified regular Java program could perform parallel computing without using RMI or socket communication. In this paper, an efficient method for modeling and comparing DNA sequences with dynamic programming and JavaParty is first proposed. Additionally, the results of this method in a distributed environment are discussed.
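
    The distribution layer aside, the dynamic-programming core that such a method builds on is standard global alignment; a compact Needleman-Wunsch scorer with illustrative scoring parameters (a sketch, not the paper's code):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    # dp[i][j] = best score aligning a[:i] with b[:j]
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap
    for j in range(1, cols):
        dp[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[-1][-1]

print(nw_score("GATTACA", "GCATGCU"))
```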

  13. Model-Invariant Hybrid Computations of Separated Flows for RCA Standard Test Cases

    Science.gov (United States)

    Woodruff, Stephen

    2016-01-01

    NASA's Revolutionary Computational Aerosciences (RCA) subproject has identified several smooth-body separated flows as standard test cases to emphasize the challenge these flows present for computational methods and their importance to the aerospace community. Results of computations of two of these test cases, the NASA hump and the FAITH experiment, are presented. The computations were performed with the model-invariant hybrid LES-RANS formulation, implemented in the NASA code VULCAN-CFD. The model-invariant formulation employs gradual LES-RANS transitions and compensation for model variation to provide more accurate and efficient hybrid computations. Comparisons revealed that the LES-RANS transitions employed in these computations were sufficiently gradual that the compensating terms were unnecessary. Agreement with experiment was achieved only after reducing the turbulent viscosity to mitigate the effect of numerical dissipation. The stream-wise evolution of peak Reynolds shear stress was employed as a measure of turbulence dynamics in separated flows useful for evaluating computations.

  14. Hybrid computing: CPU+GPU co-processing and its application to tomographic reconstruction.

    Science.gov (United States)

    Agulleiro, J I; Vázquez, F; Garzón, E M; Fernández, J J

    2012-04-01

    Modern computers are equipped with powerful computing engines like multicore processors and GPUs. The 3DEM community has rapidly adapted to this scenario and many software packages now make use of high performance computing techniques to exploit these devices. However, the implementations thus far are purely focused on either GPUs or CPUs. This work presents a hybrid approach that collaboratively combines the GPUs and CPUs available in a computer and applies it to the problem of tomographic reconstruction. Proper orchestration of workload in such a heterogeneous system is an issue. Here we use an on-demand strategy whereby the computing devices request a new piece of work to do when idle. Our hybrid approach thus takes advantage of the whole computing power available in modern computers and further reduces the processing time. This CPU+GPU co-processing can be readily extended to other image processing tasks in 3DEM. Copyright © 2012 Elsevier B.V. All rights reserved.
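
    The on-demand strategy is easy to picture with a shared work queue: idle workers pull the next slice, so faster devices automatically process more slices. A simulated stand-in (device names and speeds are invented; real code would dispatch to GPU kernels or CPU threads):

```python
import queue
import threading
import time

def worker(name, speed, tasks, done):
    while True:
        try:
            slice_id = tasks.get_nowait()   # request new work when idle
        except queue.Empty:
            return
        time.sleep(0.01 / speed)            # simulate reconstructing a slice
        done.append((name, slice_id))

tasks, done = queue.Queue(), []
for s in range(40):
    tasks.put(s)
devices = [("gpu", 8.0), ("cpu-0", 1.0), ("cpu-1", 1.0)]
threads = [threading.Thread(target=worker, args=(n, v, tasks, done))
           for n, v in devices]
for t in threads: t.start()
for t in threads: t.join()
print(sum(1 for n, _ in done if n == "gpu"), "of 40 slices done by the GPU")
```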

  15. GATE Monte Carlo simulation in a cloud computing environment

    Science.gov (United States)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be reduced significantly to clinically feasible levels without the sizable investment in a local high-performance cluster. This study investigated a reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. As high-performance computing continues to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
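
    The two runtimes quoted in the abstract are enough to pin down an inverse power model T(n) = T(1) * n^(-b); a quick check (only the 53 min and 3.11 min figures come from the abstract, the rest is illustration):

```python
import math

t1, t20 = 53.0, 3.11                       # minutes, from the abstract
b = math.log(t1 / t20) / math.log(20)      # fitted exponent, ~0.95
predict = lambda n: t1 * n ** (-b)         # T(n) = T(1) * n**(-b)
print(round(b, 3), round(predict(10), 1))  # b near 1 means near-ideal scaling
```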

  16. A Hybrid Computational Intelligence Approach Combining Genetic Programming And Heuristic Classification for Pap-Smear Diagnosis

    DEFF Research Database (Denmark)

    Tsakonas, Athanasios; Dounias, Georgios; Jantzen, Jan

    2001-01-01

    The paper suggests the combined use of different computational intelligence (CI) techniques in a hybrid scheme as an effective approach to medical diagnosis. Having come to know the advantages and disadvantages of each computational intelligence technique in recent years, the time has come for p...

  17. Proposal of an Effective Computation Environment for the Traveling Salesman Problem Using Cloud Computing

    Science.gov (United States)

    Mizuno, Shinya; Iwamoto, Shogo; Yamaki, Naokazu

    Various methods have been proposed to solve the traveling salesman problem, referred to as the TSP. In order to solve the TSP, a cost metric (e.g., travel time or distance) between nodes is needed. Because we do not always have a specific criterion for the cost metric, we propose obtaining it from a computation environment used all over the world: Google Maps. A cost metric obtained from Google Maps is a good, impartial value with little room for variation, and it makes visualizing map information easier and more efficient. Moreover, a scalable computation environment can be prepared by using cloud computing technology. We can even extend the TSP and calculate routes taken by multiple people. The numerical results show this computation environment to be effective.
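
    Once a cost matrix (e.g., travel times between the nodes) has been fetched, a tour can be computed with any TSP heuristic; a minimal nearest-neighbour sketch with an invented matrix standing in for mapping-service data:

```python
def nearest_neighbour_tour(cost, start=0):
    n = len(cost)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: cost[last][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour + [start]                  # return to the start node

cost = [[0, 12, 29, 22],                   # symmetric toy travel times
        [12, 0, 19, 3],
        [29, 19, 0, 21],
        [22, 3, 21, 0]]
print(nearest_neighbour_tour(cost))        # [0, 1, 3, 2, 0]
```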

  18. EVALUATION & TRENDS OF SURVEILLANCE SYSTEM NETWORK IN UBIQUITOUS COMPUTING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Sunil Kr Singh

    2015-03-01

    Full Text Available With the emergence of ubiquitous computing, the whole scenario of computing has changed, affecting many interdisciplinary fields. This paper envisions the impact of ubiquitous computing on video surveillance systems. With increasing population and highly specific security areas, intelligent monitoring is a major requirement of the modern world. The paper describes the evolution of surveillance systems from analog to multi-sensor ubiquitous systems and notes the demand for context-based architectures. It outlines the benefit of merging in cloud computing to boost surveillance systems while reducing cost and maintenance. It analyzes some surveillance system architectures designed for ubiquitous deployment. It presents major challenges and opportunities for researchers to make surveillance systems highly efficient and embed them seamlessly in our environments.

  19. Comparison of visual programming and hybrid programming environments in transferring programming skills

    Science.gov (United States)

    Alrubaye, Hussein

    Teaching students programming skills at an early age has been one of the most important topics for researchers in recent decades. It may seem practical to leverage students' existing knowledge by extending a block-based environment, which uses blocks to build apps, towards a text-based environment, which uses text code only, rather than having them start to learn a whole new programming language. To simplify the learning process, block-based coding environments (Pencil Code, Scratch, App Inventor) have been introduced and are used by millions of students. However, teachers face challenges in bringing the text-based environment into the classroom. One is that block-based tools do not allow students to write real-world programs, which limits them to writing only simple programs. Another is the big gap between the block-based and text-based environments: when students transfer from block-based to text-based, they feel that they are in a totally new environment, since the transition involves moving between different code styles and code representations with different syntax. They move from commands with friendly shapes and colors to environments with commands only, where they have to memorize all the commands and learn the programming syntax. We want to bridge this gap by developing a new environment, named hybrid-based, that allows the student to drag and drop block code and see real code instead of blocks only. A study was conducted with 18 students divided into two groups: one group used the block-based environment and the other used the hybrid-based environment, and then both groups learned to write code in a text-based environment. We found that hybrid-based environments are better than block-based environments at transferring programming skills to text-based programming, because the hybrid environment enhances students' abilities in learning programming foundations, modifying code, memorizing commands, and handling syntax errors.

  20. Evaluation of genotype x environment interactions in maize hybrids using GGE biplot analysis

    Directory of Open Access Journals (Sweden)

    Fatma Aykut Tonk

    2011-01-01

    Full Text Available Seventeen hybrid maize genotypes were evaluated at four different locations in the 2005 and 2006 cropping seasons under irrigated conditions in Turkey. The analysis of variance showed that mean squares of environments (E), genotypes (G) and GE interactions (GEI) were highly significant and accounted for 74, 7 and 19% of the treatment-combination sum of squares, respectively. To determine the effects of GEI on grain yield, the data were subjected to GGE biplot analysis. Maize hybrid G16 can be recommended as growing reliably in the test locations for high grain yield. Also, only the Yenisehir location could serve as the best representative of the overall locations for deciding which experimental hybrids can be recommended for grain yield in this study. Consequently, private companies could prefer using grain yield per plant instead of grain yield per plot in hybrid maize breeding programs due to some advantages.

  1. Activity recognition using hybrid generative/discriminative models on home environments using binary sensors.

    Science.gov (United States)

    Ordóñez, Fco Javier; de Toledo, Paula; Sanchis, Araceli

    2013-04-24

    Activities of daily living are good indicators of elderly health status, and activity recognition in smart environments is a well-known problem that has been previously addressed by several studies. In this paper, we describe the use of two powerful machine learning schemes, ANN (Artificial Neural Network) and SVM (Support Vector Machines), within the framework of HMM (Hidden Markov Model) in order to tackle the task of activity recognition in a home setting. The output scores of the discriminative models, after processing, are used as observation probabilities of the hybrid approach. We evaluate our approach by comparing these hybrid models with other classical activity recognition methods using five real datasets. We show how the hybrid models achieve significantly better recognition performance, with significance level p < 0.05, proving that the hybrid approach is better suited for the addressed domain.
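
    The key coupling, turning discriminative scores into HMM observation probabilities and then decoding, can be sketched in a few lines (transition and prior values are invented, and the paper's processing step may differ from a plain softmax):

```python
import numpy as np

def softmax(scores):                        # scores: (T, n_activities)
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def viterbi(obs_prob, trans, prior):
    T, N = obs_prob.shape
    delta = np.log(prior) + np.log(obs_prob[0])
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        cand = delta[:, None] + np.log(trans)   # cand[i, j]: state i -> j
        back[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0) + np.log(obs_prob[t])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

svm_scores = np.array([[2.0, -1.0], [1.5, 0.5], [-0.5, 2.5]])  # per-class scores
trans = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi(softmax(svm_scores), trans, prior=np.array([0.5, 0.5])))
```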

  2. Activity Recognition Using Hybrid Generative/Discriminative Models on Home Environments Using Binary Sensors

    Directory of Open Access Journals (Sweden)

    Araceli Sanchis

    2013-04-01

    Full Text Available Activities of daily living are good indicators of elderly health status, and activity recognition in smart environments is a well-known problem that has been previously addressed by several studies. In this paper, we describe the use of two powerful machine learning schemes, ANN (Artificial Neural Network) and SVM (Support Vector Machines), within the framework of HMM (Hidden Markov Model), in order to tackle the task of activity recognition in a home setting. The output scores of the discriminative models, after processing, are used as observation probabilities of the hybrid approach. We evaluate our approach by comparing these hybrid models with other classical activity recognition methods using five real datasets. We show how the hybrid models achieve significantly better recognition performance, with significance level p < 0.05, proving that the hybrid approach is better suited for the addressed domain.

  3. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    Science.gov (United States)

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

    To improve on the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt an automated reverse-engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, adopting the mechanism of cloud computing is advocated as a promising solution; the most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms that infer large gene networks. This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and that the computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel computational framework, high
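
    The step that parallelization targets is the evaluation of candidate parameter sets; a local process-pool stand-in for the map phase (the objective below is a placeholder, not a gene-network model, and Hadoop would replace the pool in the paper's setting):

```python
from multiprocessing import Pool
import random

def fitness(params):                        # placeholder objective function
    return sum((p - 0.5) ** 2 for p in params)

if __name__ == "__main__":
    population = [[random.random() for _ in range(10)] for _ in range(64)]
    with Pool() as pool:
        scores = pool.map(fitness, population)   # "map": evaluate in parallel
    best_score = min(scores)                     # "reduce": pick the best
    print(round(best_score, 4))
```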

  4. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    Science.gov (United States)

    2014-01-01

    Background To improve on the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt an automated reverse-engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, adopting the mechanism of cloud computing is advocated as a promising solution; the most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms that infer large gene networks. Results This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and that the computation time can be largely reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel

  5. InSAR Scientific Computing Environment on the Cloud

    Science.gov (United States)

    Rosen, P. A.; Shams, K. S.; Gurrola, E. M.; George, B. A.; Knight, D. S.

    2012-12-01

    In response to the needs of the international scientific and operational Earth observation communities, spaceborne Synthetic Aperture Radar (SAR) systems are being tasked to produce enormous volumes of raw data daily, with availability to scientists to increase substantially as more satellites come online and data becomes more accessible through more open data policies. The availability of these unprecedentedly dense and rich datasets has led to the development of sophisticated algorithms that can take advantage of them. In particular, interferometric time series analysis of SAR data provides insights into the changing earth and requires substantial computational power to process data across large regions and over large time periods. This poses challenges for existing infrastructure, software, and techniques required to process, store, and deliver the results to the global community of scientists. The current state-of-the-art solutions employ traditional data storage and processing applications that require download of data to the local repositories before processing. This approach is becoming untenable in light of the enormous volume of data that must be processed in an iterative and collaborative manner. We have analyzed and tested new cloud computing and virtualization approaches to address these challenges within the context of InSAR in the earth science community. Cloud computing is democratizing computational and storage capabilities for science users across the world. The NASA Jet Propulsion Laboratory has been an early adopter of this technology, successfully integrating cloud computing in a variety of production applications ranging from mission operations to downlink data processing. We have ported a new InSAR processing suite called ISCE (InSAR Scientific Computing Environment) to a scalable distributed system running in the Amazon GovCloud to demonstrate the efficacy of cloud computing for this application. We have integrated ISCE with Polyphony to

  6. Hybrid Models for Trajectory Error Modelling in Urban Environments

    Science.gov (United States)

    Angelatsa, E.; Parés, M. E.; Colomina, I.

    2016-06-01

    This paper tackles the first step of any strategy aiming to improve the trajectory of terrestrial mobile mapping systems in urban environments. We present an approach to model the error of terrestrial mobile mapping trajectories, combining deterministic and stochastic models. Because of the specifics of the urban environment, the deterministic component is modelled with non-continuous functions composed of linear shifts, drifts or polynomial functions. In addition, we introduce a stochastic error component to model the residual noise of the trajectory error function. The first step of error modelling requires knowing the actual trajectory error values for several representative environments. In order to determine the trajectory errors as accurately as possible, (almost) error-free reference trajectories should be estimated using non-semantic features extracted from a sequence of images collected with the terrestrial mobile mapping system and from a full set of ground control points. Once the references are estimated, they are used to determine the actual errors in the terrestrial mobile mapping trajectory. The rigorous analysis of these data sets allows us to characterize the errors of a terrestrial mobile mapping system for a wide range of environments. This information will be of great use in future campaigns to improve the results of 3D point cloud generation. The proposed approach has been evaluated using real data. The data originate from a mobile mapping campaign over an urban and controlled area of Dortmund (Germany), with harmful GNSS conditions. The mobile mapping system, which includes two laser scanners and two cameras, was mounted on a van and driven over a controlled area for around three hours. The results show the suitability of decomposing the trajectory error into non-continuous deterministic and stochastic components.
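
    A toy version of the proposed error model, with piecewise deterministic terms (a shift plus a linear drift per segment) and additive white noise as the stochastic component; all segment boundaries and magnitudes below are invented:

```python
import numpy as np

def trajectory_error(t, segments, noise_std=0.02, seed=0):
    rng = np.random.default_rng(seed)
    err = np.zeros_like(t)
    for t0, t1, shift, drift in segments:       # deterministic component
        mask = (t >= t0) & (t < t1)
        err[mask] = shift + drift * (t[mask] - t0)
    return err + rng.normal(0.0, noise_std, size=t.shape)  # stochastic part

t = np.linspace(0.0, 60.0, 601)                 # one minute at 10 Hz
segments = [(0, 20, 0.00, 0.001),               # good GNSS: small drift
            (20, 40, 0.15, 0.010),              # outage: shift, strong drift
            (40, 60, 0.05, 0.002)]
print(trajectory_error(t, segments)[:3])
```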

  7. EFFICIENT VM LOAD BALANCING ALGORITHM FOR A CLOUD COMPUTING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Jasmin James

    2012-09-01

    Full Text Available Cloud computing is a fast-growing area in computing research and industry today. With the advancement of the cloud, new possibilities are opening up for how applications can be built and how different services can be offered to the end user through virtualization, on the Internet. Cloud service providers offer large-scale computing infrastructure priced by usage, and provide the infrastructure services in a very flexible manner that users can scale up or down at will. Establishing an effective load-balancing algorithm, and using cloud computing resources efficiently for effective and efficient cloud computing, is one of the cloud service providers' ultimate goals. In this paper, firstly, different virtual machine (VM) load-balancing algorithms are analyzed. Secondly, a new VM load-balancing algorithm, the 'Weighted Active Monitoring Load Balancing Algorithm', is proposed and implemented for an IaaS framework in a simulated cloud computing environment using CloudSim tools; it lets the datacenter effectively balance requests between the available virtual machines by assigning each a weight, in order to achieve better performance parameters such as response time and data processing time.

  8. Hybrid Computational Simulation and Study of Terahertz Pulsed Photoconductive Antennas

    Science.gov (United States)

    Emadi, R.; Barani, N.; Safian, R.; Nezhad, A. Zeidaabadi

    2016-08-01

    A photoconductive antenna (PCA) has been numerically investigated in the terahertz (THz) frequency band based on a hybrid simulation method. This hybrid method utilizes an optoelectronic solver, Silvaco TCAD, and a full-wave electromagnetic solver, CST. The optoelectronic solver is used to find the accurate THz photocurrent by considering realistic material parameters. Performance of photoconductive antennas and temporal behavior of the excited photocurrent for various active region geometries such as bare-gap electrode, interdigitated electrodes, and tip-to-tip rectangular electrodes are investigated. Moreover, investigations have been done on the center of the laser illumination on the substrate, substrate carrier lifetime, and diffusion photocurrent associated with the carriers temperature, to achieve efficient and accurate photocurrent. Finally, using the full-wave electromagnetic solver and the calculated photocurrent obtained from the optoelectronic solver, electromagnetic radiation of the antenna and its associated detected THz signal are calculated and compared with a measurement reference for verification.

  9. Performance Comparison of Hybrid Signed Digit Arithmetic in Efficient Computing

    Directory of Open Access Journals (Sweden)

    VISHAL AWASTHI

    2011-10-01

    Full Text Available In redundant representations, addition can be carried out in constant time, independent of the word length of the operands. The adder is a fundamental building block in the majority of VLSI designs. A hybrid adder can add an unsigned number to a signed-digit number, and hence its performance greatly determines the quality of the final output of the circuit concerned. In this paper we design and compare the speed of adders, reducing the carry propagation time through the combined effect of improved adder architectures and signed-digit representations of number systems. The key idea is to strike a compromise between the execution time of the addition process and the available area, which is often very limited. We also verify various algorithms for signed-digit and hybrid signed-digit adders.
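
    The paper evaluates hardware designs, but the reason signed-digit arithmetic removes the carry chain can be shown in software: in the classic radix-2 signed-digit addition rule (digits in {-1, 0, 1}), every position depends on at most one position below it, so all positions can be computed in parallel. A sketch of that rule, not the paper's adder:

```python
def sd_add(x, y):
    """Carry-free addition of radix-2 signed-digit numbers given as
    little-endian digit lists over {-1, 0, 1}."""
    n = max(len(x), len(y))
    x = x + [0] * (n - len(x))
    y = y + [0] * (n - len(y))
    u = [0] * n           # interim sums
    c = [0] * (n + 1)     # c[i+1] is the transfer out of position i
    for i in range(n):
        w = x[i] + y[i]
        lower_nonneg = i == 0 or (x[i - 1] >= 0 and y[i - 1] >= 0)
        if w == 2 or (w == 1 and lower_nonneg):
            c[i + 1] = 1
        elif w == -2 or (w == -1 and not lower_nonneg):
            c[i + 1] = -1
        u[i] = w - 2 * c[i + 1]
    # Final digits stay in {-1, 0, 1}: the transfer never ripples further.
    return [u[i] + c[i] for i in range(n)] + [c[n]]

value = lambda d: sum(digit * (1 << i) for i, digit in enumerate(d))
a, b = [1, -1, 1], [1, 1, 0]        # 3 and 3 in signed-digit form
print(value(sd_add(a, b)))          # 6
```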

  10. Hybrid Computational Simulation and Study of Terahertz Pulsed Photoconductive Antennas

    Science.gov (United States)

    Emadi, R.; Barani, N.; Safian, R.; Nezhad, A. Zeidaabadi

    2016-11-01

    A photoconductive antenna (PCA) has been numerically investigated in the terahertz (THz) frequency band based on a hybrid simulation method. This hybrid method utilizes an optoelectronic solver, Silvaco TCAD, and a full-wave electromagnetic solver, CST. The optoelectronic solver is used to find the accurate THz photocurrent by considering realistic material parameters. Performance of photoconductive antennas and temporal behavior of the excited photocurrent for various active region geometries such as bare-gap electrode, interdigitated electrodes, and tip-to-tip rectangular electrodes are investigated. Moreover, investigations have been done on the center of the laser illumination on the substrate, substrate carrier lifetime, and diffusion photocurrent associated with the carriers temperature, to achieve efficient and accurate photocurrent. Finally, using the full-wave electromagnetic solver and the calculated photocurrent obtained from the optoelectronic solver, electromagnetic radiation of the antenna and its associated detected THz signal are calculated and compared with a measurement reference for verification.

  11. Computational and experimental study of air hybrid engine concepts

    OpenAIRE

    Lee, Cho-Yu

    2011-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The air hybrid engine absorbs the vehicle kinetic energy during braking, stores it in an air tank in the form of compressed air, and reuses it to start the engine and to propel a vehicle during cruising and acceleration. Capturing, storing and reusing this braking energy to achieve stop-start operation and to give additional power can therefore improve fuel economy, particularly in cities and ...

  12. Nonlinear mechanics of hybrid polymer networks that mimic the complex mechanical environment of cells

    Science.gov (United States)

    Jaspers, Maarten; Vaessen, Sarah L.; van Schayik, Pim; Voerman, Dion; Rowan, Alan E.; Kouwer, Paul H. J.

    2017-05-01

    The mechanical properties of cells and the extracellular environment they reside in are governed by a complex interplay of biopolymers. These biopolymers, which possess a wide range of stiffnesses, self-assemble into fibrous composite networks such as the cytoskeleton and extracellular matrix. They interact with each other both physically and chemically to create a highly responsive and adaptive mechanical environment that stiffens when stressed or strained. Here we show that in hybrid networks of a synthetic mimic of biological networks with stiff, flexible or semi-flexible components, even very low concentrations of the added components strongly affect the network stiffness and/or its strain-responsive character. The stiffness (persistence length) of the second network, its concentration and the interaction between the components are all parameters that can be used to tune the mechanics of the hybrids. The equivalence of these hybrids with biological composites is striking.

  13. Hybrid computer techniques for solving partial differential equations

    Science.gov (United States)

    Hammond, J. L., Jr.; Odowd, W. M.

    1971-01-01

    The techniques overcome equipment limitations that restrict other computer techniques to solving trivial cases. The use of curve fitting by quadratic interpolation greatly reduces the required digital storage space.
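
    The storage saving rests on keeping a sparse table and reconstructing intermediate values by three-point (quadratic Lagrange) interpolation; a minimal sketch:

```python
def quad_interp(xs, ys, x):
    # Quadratic Lagrange interpolation through three stored samples.
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
          + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
          + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

# Three samples of f(x) = x**2 reproduce the parabola exactly:
print(quad_interp((0.0, 1.0, 2.0), (0.0, 1.0, 4.0), 1.5))   # 2.25
```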

  14. A generalized hybrid transfinite element computational approach for nonlinear/linear unified thermal/structural analysis

    Science.gov (United States)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1987-01-01

    The present paper describes the development of a new hybrid computational approach applicable to nonlinear/linear thermal-structural analysis. The proposed transfinite element approach is a hybrid scheme, as it combines the modeling versatility of contemporary finite elements with transform methods and classical Bubnov-Galerkin schemes. Applicability of the proposed formulations to nonlinear analysis is also developed. Several test cases are presented, including nonlinear/linear unified thermal-stress and thermal-stress wave propagation. Comparative results validate the fundamental capabilities of the proposed hybrid transfinite element methodology.

  15. Evaluating hybrid poplar rooting. I. genotype x environment interactions in three contrasting sites

    Science.gov (United States)

    Ronald S., Jr. Zalesny; Don E. Riemenschneider; Richard B. Hall

    2002-01-01

    We need to learn more about environmental conditions that promote or hinder rooting of unrooted dormant hybrid poplar cuttings. Planting cuttings and recording survival after the growing season is not suitable to keep up with industrial demands for improved stock. This method does not provide information about specific genotype x environment interactions. We know very...

  16. Hybrid computing: CPU+GPU co-processing and its application to tomographic reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Agulleiro, J.I.; Vazquez, F.; Garzon, E.M. [Supercomputing and Algorithms Group, Associated Unit CSIC-UAL, University of Almeria, 04120 Almeria (Spain); Fernandez, J.J., E-mail: JJ.Fernandez@csic.es [National Centre for Biotechnology, National Research Council (CNB-CSIC), Campus UAM, C/Darwin 3, Cantoblanco, 28049 Madrid (Spain)

    2012-04-15

    Modern computers are equipped with powerful computing engines like multicore processors and GPUs. The 3DEM community has rapidly adapted to this scenario and many software packages now make use of high performance computing techniques to exploit these devices. However, the implementations thus far are purely focused on either GPUs or CPUs. This work presents a hybrid approach that collaboratively combines the GPUs and CPUs available in a computer and applies it to the problem of tomographic reconstruction. Proper orchestration of workload in such a heterogeneous system is an issue. Here we use an on-demand strategy whereby the computing devices request a new piece of work to do when idle. Our hybrid approach thus takes advantage of the whole computing power available in modern computers and further reduces the processing time. This CPU+GPU co-processing can be readily extended to other image processing tasks in 3DEM. -- Highlights: ► Hybrid computing allows full exploitation of the power (CPU+GPU) in a computer. ► Proper orchestration of workload is managed by an on-demand strategy. ► Total number of threads running in the system should be limited to the number of CPUs.

  17. Distance Based Asynchronous Recovery Approach In Mobile Computing Environment

    Directory of Open Access Journals (Sweden)

    Yogita Khatri

    2012-06-01

    Full Text Available A mobile computing system is a distributed system in which at least one of the processes is mobile. Such systems are constrained by a lack of stable storage, low network bandwidth, mobility, frequent disconnection and limited battery life. Checkpointing is one of the commonly used techniques to provide fault tolerance in mobile computing environments. In order to suit the mobile environment, a distance-based recovery scheme is proposed, based on checkpointing and message logging. After the system recovers from failures, only the failed processes roll back and restart from their respective recent checkpoints, independently of the others. The salient feature of this scheme is its reduced transfer and recovery cost: while the mobile host moves within a specific range, recovery information is not moved; it is transferred nearby only if the mobile host moves out of that range.

  18. Using Virtual Environments in the Teaching of Computer Graphics

    OpenAIRE

    Bowman, Doug A.; Chennupati, Balaprasuna; Gracey, Matthew; Pinho, Marcio S.; Wheeler, Kristin J.

    2003-01-01

    Education has long been touted as an appropriate application area for immersive virtual environments (VEs), but few immersive applications have actually been used in the classroom, and even fewer have been compared empirically with other teaching methods. This paper presents VENTS, a novel immersive VE application intended to teach the concept of the three-dimensional (3D) normalizing transformation in an undergraduate computer graphics class. VENTS was developed based...

  19. Quality control of computational fluid dynamics in indoor environments

    DEFF Research Database (Denmark)

    Sørensen, Dan Nørtoft; Nielsen, P. V.

    2003-01-01

    Computational fluid dynamics (CFD) is used routinely to predict air movement and distributions of temperature and concentrations in indoor environments. Modelling and numerical errors are inherent in such studies and must be considered when the results are presented. Here, we discuss modelling as … the quality of CFD calculations, as well as guidelines for the minimum information that should accompany all CFD-related publications to enable a scientific judgment of the quality of the study.

  20. A distributed spatial computing prototype system in grid environment

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Digital Earth has been a hot topic and research trend since it was proposed, and Digital China has drawn much attention in China. As a key technique to implement Digital China, grid is an excellent and promising concept for constructing a dynamic, inter-domain and distributed computing environment. It is appropriate for processing geographic information across dispersed computing resources in networks effectively and cooperatively. A distributed spatial computing prototype system is designed and implemented with the Globus Toolkit, and several important aspects are discussed in detail. The architecture is first proposed according to the characteristics of grid, and then the spatial resource query and access interfaces are designed for heterogeneous data sources. An open hierarchical architecture for resource discovery and management is presented to detect spatial and computing resources in the grid. A standard spatial job management mechanism is implemented by grid service for convenient use. In addition, the access control mechanism for spatial datasets is developed based on GSI. The prototype system utilizes the Globus Toolkit to implement a common distributed spatial computing framework, and it reveals the spatial computing ability of grid to support Digital China.

  1. A Hybrid Circular Queue Method for Iterative Stencil Computations on GPUs

    Institute of Scientific and Technical Information of China (English)

    Yang Yang; Hui-Min Cui; Xiao-Bing Feng; Jing-Ling Xue

    2012-01-01

    In this paper, we present a hybrid circular queue method that can significantly boost the performance of stencil computations on GPU by carefully balancing usage of registers and shared memory. Unlike earlier methods that rely on circular queues predominantly implemented using indirectly addressable shared memory, our hybrid method exploits a new reuse pattern spanning across the multiple time steps in stencil computations so that circular queues can be implemented by both shared memory and registers effectively in a balanced manner. We describe a framework that automatically finds the best placement of data in registers and shared memory in order to maximize the performance of stencil computations. Validation using four different types of stencils on three different GPU platforms shows that our hybrid method achieves speedups up to 2.93X over methods that use circular queues implemented with shared memory only.
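
    The register/shared-memory balancing is GPU-specific, but the circular-queue idea itself is simple: keep only the rows still needed in a fixed-size queue so a sweep can run in place. A CPU-side analogue for illustration (not the paper's GPU code):

```python
from collections import deque
import numpy as np

def five_point_sweep_inplace(grid):
    """In-place 5-point averaging; the circular queue preserves the
    original values of the three active rows while the grid is being
    overwritten, avoiding a full second copy of the grid."""
    n, m = grid.shape
    q = deque((grid[i].copy() for i in range(3)), maxlen=3)
    for i in range(1, n - 1):
        top, mid, bot = q
        grid[i, 1:m - 1] = (top[1:m - 1] + bot[1:m - 1] +
                            mid[:m - 2] + mid[2:] + mid[1:m - 1]) / 5.0
        if i + 2 < n:
            q.append(grid[i + 2].copy())    # evicts the oldest row
    return grid

print(five_point_sweep_inplace(np.arange(25, dtype=float).reshape(5, 5)))
```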

  2. Effective hybrid evolutionary computational algorithms for global optimization and applied to construct prion AGAAAAGA fibril models

    CERN Document Server

    Zhang, Jiapu

    2010-01-01

    Evolutionary algorithms are parallel computing algorithms, whereas simulated annealing is a sequential computing algorithm. This paper inserts simulated annealing into evolutionary computations and successfully develops a hybrid Self-Adaptive Evolutionary Strategy (μ+λ) method and a hybrid Self-Adaptive Classical Evolutionary Programming method. Numerical results on more than 40 benchmark test problems of global optimization show that the hybrid methods presented in this paper are very effective. Lennard-Jones potential energy minimization is another benchmark for testing new global optimization algorithms; it is studied here through the amyloid fibril constructions. To date, there is little molecular structural data available on the AGAAAAGA palindrome in the hydrophobic region (113-120) of prion proteins. This region belongs to the N-terminal unstructured region (1-123) of prion proteins, the structure of which has proved hard to determine using NMR spectroscopy or X-ray crystallography ...
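
    A minimal sketch of the hybridization idea, inserting a simulated-annealing acceptance test into a (μ+λ) evolution strategy, is given below; the objective function and all parameters are placeholders rather than the paper's actual settings.

        import math
        import random

        def sphere(x):                      # placeholder objective to minimize
            return sum(v * v for v in x)

        def hybrid_mu_plus_lambda(f, dim=5, mu=10, lam=40, gens=200, t0=1.0, cooling=0.97):
            pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
            temp = t0
            for _ in range(gens):
                offspring = []
                for _ in range(lam):
                    parent = random.choice(pop)
                    child = [v + random.gauss(0, 0.3) for v in parent]
                    # Simulated-annealing step: accept a worse child with
                    # probability exp(-delta/T), otherwise keep the parent's genes.
                    delta = f(child) - f(parent)
                    if delta < 0 or random.random() < math.exp(-delta / temp):
                        offspring.append(child)
                    else:
                        offspring.append(parent[:])
                pop = sorted(pop + offspring, key=f)[:mu]   # (mu + lambda) selection
                temp *= cooling                             # annealing schedule
            return pop[0]

        best = hybrid_mu_plus_lambda(sphere)
        print(sphere(best))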

  3. A new hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer

    Science.gov (United States)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper describes new and recent advances in the development of a hybrid transfinite element computational methodology applicable to conduction/convection/radiation heat transfer problems. The transfinite element methodology, while retaining the modeling versatility of contemporary finite element formulations, is a hybrid approach based on the application of transform techniques in conjunction with classical Galerkin schemes. The purpose of this paper is to provide a viable hybrid computational methodology for general transient thermal analysis. Highlights and features of the methodology are described and developed via generalized formulations and applications to several test problems, and the numerical test problems validate the proposed developments for conduction/convection/radiation thermal analysis.

  5. CAVE2: a hybrid reality environment for immersive simulation and information analysis

    Science.gov (United States)

    Febretti, Alessandro; Nishimoto, Arthur; Thigpen, Terrance; Talandis, Jonas; Long, Lance; Pirtle, J. D.; Peterka, Tom; Verlo, Alan; Brown, Maxine; Plepys, Dana; Sandin, Dan; Renambot, Luc; Johnson, Andrew; Leigh, Jason

    2013-03-01

    Hybrid Reality Environments represent a new kind of visualization space that blurs the line between virtual environments and high-resolution tiled display walls. This paper outlines the design and implementation of the CAVE2 (TM) Hybrid Reality Environment. CAVE2 is the world's first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it enables users to simultaneously view both 2D and 3D information, providing more flexibility for mixed-media applications. CAVE2 is a cylindrical system 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axis-optimized passive stereo LCD panels, creating an approximately 320-degree panoramic environment for displaying information at 37 Megapixels (in stereoscopic 3D) or 74 Megapixels in 2D, at a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so that the images in the top and bottom rows of LCDs are optimized for vertical off-center viewing, allowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In the 2D mode, the room can operate like a traditional tiled display wall, enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be supported simultaneously. The ability to treat immersive work spaces in this hybrid way has never been achieved before, and it leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE) - a system for supporting 2D tiled displays - with Omegalib - a virtual reality middleware supporting OpenGL, OpenSceneGraph and Vtk applications.

  6. High-fidelity quantum memory using nitrogen-vacancy center ensemble for hybrid quantum computation

    CERN Document Server

    Yang, W L; Hu, Y; Feng, M; Du, J F

    2011-01-01

    We study a hybrid quantum computing system that uses a nitrogen-vacancy center ensemble (NVE) as quantum memory, a current-biased Josephson junction (CBJJ) superconducting qubit fabricated in a transmission line resonator (TLR) as the quantum computing processor, and the microwave photons in the TLR as the quantum data bus. The storage process is treated rigorously by considering all relevant decoherence mechanisms. Such a hybrid quantum device can also be used to create multi-qubit W states of NVEs through a common CBJJ. The experimental feasibility and challenges are assessed using currently available technology.

  7. Hybrid Computational Model for High-Altitude Aeroassist Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed effort addresses a need for accurate computational models to support aeroassist and entry vehicle system design over a broad range of flight conditions...

  8. Hybrid PSO-MOBA for Profit Maximization in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Dr. Salu George

    2015-02-01

    Cloud service providers, infrastructure vendors and clients/cloud users are the main actors in any cloud enterprise, such as Amazon Web Services' cloud or Google's cloud. These enterprises take care of infrastructure deployment and cloud services management (IaaS/PaaS/SaaS). Cloud users need to specify the correct amount of services needed and the characteristics of their workload in order to avoid over-provisioning of resources, and this is an important pricing factor. The cloud service provider needs to manage the resources, as well as optimize them, to maximize profit. To model the profit we consider the M/M/m queuing model, which manages the queue of jobs and provides the average execution time. Resource scheduling is one of the main concerns in profit maximization, for which we adopt hybrid PSO-MOBA as it resolves the global convergence problem, converges faster, has fewer parameters to tune, searches very large problem spaces more easily and locates the right resource. In hybrid PSO-MOBA we combine the features of PSO and MOBA to achieve the benefits of both and obtain greater compatibility.
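
    The average execution time that the M/M/m model supplies to such a profit calculation is standard queueing theory; the minimal sketch below (not code from the paper) computes it via the Erlang C formula, with arrival and service rates chosen arbitrarily.

        from math import factorial

        def mmm_mean_response_time(lam, mu, m):
            """Mean time in system for an M/M/m queue (Erlang C).

            lam: arrival rate, mu: service rate per server, m: number of servers.
            Requires lam < m * mu for stability.
            """
            rho = lam / (m * mu)
            assert rho < 1, "queue is unstable"
            a = lam / mu                                   # offered load in Erlangs
            erlang_c = (a**m / (factorial(m) * (1 - rho))) / (
                sum(a**k / factorial(k) for k in range(m))
                + a**m / (factorial(m) * (1 - rho))
            )
            wait = erlang_c / (m * mu - lam)               # mean queueing delay
            return wait + 1.0 / mu                         # plus mean service time

        print(mmm_mean_response_time(lam=8.0, mu=1.0, m=10))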

  9. Fracture Toughness Evaluation of Hybrid and Nano-hybrid Resin Composites after Ageing under Acidic Environment

    Directory of Open Access Journals (Sweden)

    Ferooz M

    2015-03-01

    Statement of Problem: Tooth-coloured restorative materials are brittle, with the major shortcoming of sensitivity to flaws and defects. Although various mechanical properties of resin composites have been studied, no fracture toughness test data for nano-hybrid composites under acidic conditions over a long period of time have been published. Objectives: To compare the fracture toughness (KIc) of two types of resin composites under tensile loading and to assess the effect of distilled water and lactic acid on the resistance of the restoratives to fracture after three months of immersion. Materials and Methods: Four resin composites were used: three nano-hybrids [Estelite Sigma Quick (Kuraray), Luna (SDI), Paradigm (3M/ESPE)] and one hybrid, Rok (SDI). The specimens were prepared using a custom-made polytetrafluorethylene split mould, stored in distilled water (pH 6.8) or 0.01 mol/L lactic acid (pH 4), and conditioned at 37°C for 24 hours, 1 month or 3 months. They were loaded under tensile stress using a universal testing machine; the maximum load (N) at specimen failure was recorded and the fracture toughness (KIc) was calculated. Data were analysed by ANOVA and Tukey's test using SPSS, version 18. Results: The results of two-way ANOVA did not show a significant combined effect of material, time and storage medium on fracture toughness (p = 0.056). However, there was a strong interaction between materials and time (p = 0.001) when the storage medium was ignored. After 24 h of immersion in distilled water, Paradigm revealed the highest KIc values, followed by Rok, Luna and Estelite. Immersion in either distilled water or lactic acid significantly decreased the fracture toughness of almost all materials as the time interval increased. Conclusions: Paradigm showed the highest fracture toughness, followed by Rok, Luna and Estelite, respectively. As time increased, KIc significantly decreased for almost all resin composites except for Luna, which showed only a slight decrease.

  10. An Introduction to Computer Forensics: Gathering Evidence in a Computing Environment

    Directory of Open Access Journals (Sweden)

    Henry B. Wolfe

    2001-01-01

    Business has become increasingly dependent on the Internet and computing to operate. It has become apparent that there are issues of evidence gathering in a computing environment, which by their nature are technical and different from other forms of evidence gathering, that must be addressed. This paper offers an introduction to some of the technical issues surrounding this new and specialized field of computer forensics. It attempts to identify and describe sources of evidence that can be found on disk data storage devices in the course of an investigation. It also considers sources of copies of email, which can be used in evidence, as well as case building.

  11. Optimized distributed computing environment for mask data preparation

    Science.gov (United States)

    Ahn, Byoung-Sup; Bang, Ju-Mi; Ji, Min-Kyu; Kang, Sun; Jang, Sung-Hoon; Choi, Yo-Han; Ki, Won-Tai; Choi, Seong-Woon; Han, Woo-Sung

    2005-11-01

    As the critical dimension (CD) becomes smaller, various resolution enhancement techniques (RET) are widely adopted. In developing sub-100nm devices, the complexity of optical proximity correction (OPC) increases severely, and OPC is applied to non-critical layers as well. The transformation of designed pattern data by the OPC operation introduces complexity, which causes runtime overheads in subsequent steps such as mask data preparation (MDP), and collapses the existing design hierarchy. Therefore, many mask shops exploit distributed computing to reduce the runtime of mask data preparation rather than exploit the design hierarchy. Distributed computing uses a cluster of computers connected to a local network. However, two things limit the benefit of distributed computing in MDP. First, running every sequential MDP job with the maximum number of available CPUs is not efficient compared to parallel MDP job execution, due to the input data characteristics. Second, the runtime improvement per added CPU is insufficient because the scalability of fracturing tools is limited. In this paper, we discuss an optimal load-balancing environment that increases the utilization of a distributed computing system by assigning an appropriate number of CPUs to each input design, and we describe distributed processing (DP) parameter optimization to obtain maximum throughput in MDP job processing.
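
    The load-balancing idea, giving each input design a CPU count proportional to its data size instead of handing every job the maximum available CPUs, can be sketched as follows; the job names, sizes and per-job cap are invented for illustration.

        def assign_cpus(jobs, total_cpus, max_per_job):
            """Split a CPU pool across fracturing jobs in proportion to data size.

            jobs: dict of job name -> input data size; every job gets at least
            one CPU, and no job gets more than max_per_job (beyond which the
            fracturing tool no longer scales).
            """
            total_size = sum(jobs.values())
            plan, used = {}, 0
            for name, size in sorted(jobs.items(), key=lambda kv: -kv[1]):
                share = max(1, round(total_cpus * size / total_size))
                plan[name] = min(share, max_per_job)
                used += plan[name]
            # Hand any leftover CPUs to the largest jobs that still have headroom.
            for name, _ in sorted(jobs.items(), key=lambda kv: -kv[1]):
                if used >= total_cpus:
                    break
                extra = min(max_per_job - plan[name], total_cpus - used)
                plan[name] += extra
                used += extra
            return plan

        print(assign_cpus({"metal1": 120, "via1": 40, "poly": 90},
                          total_cpus=32, max_per_job=16))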

  12. Architecture independent environment for developing engineering software on MIMD computers

    Science.gov (United States)

    Valimohamed, Karim A.; Lopez, L. A.

    1990-01-01

    Engineers are constantly faced with solving problems of increasing complexity and detail. Multiple Instruction stream Multiple Data stream (MIMD) computers have been developed to overcome the performance limitations of serial computers. The hardware architectures of MIMD computers vary considerably and are much more sophisticated than serial computers. Developing large scale software for a variety of MIMD computers is difficult and expensive. There is a need to provide tools that facilitate programming these machines. First, the issues that must be considered to develop those tools are examined. The two main areas of concern were architecture independence and data management. Architecture independent software facilitates software portability and improves the longevity and utility of the software product. It provides some form of insurance for the investment of time and effort that goes into developing the software. The management of data is a crucial aspect of solving large engineering problems. It must be considered in light of the new hardware organizations that are available. Second, the functional design and implementation of a software environment that facilitates developing architecture independent software for large engineering applications are described. The topics of discussion include: a description of the model that supports the development of architecture independent software; identifying and exploiting concurrency within the application program; data coherence; engineering data base and memory management.

  13. InSAR Scientific Computing Environment - The Home Stretch

    Science.gov (United States)

    Rosen, P. A.; Gurrola, E. M.; Sacco, G.; Zebker, H. A.

    2011-12-01

    The Interferometric Synthetic Aperture Radar (InSAR) Scientific Computing Environment (ISCE) is a software development effort in its third and final year within the NASA Advanced Information Systems and Technology program. The ISCE is a new computing environment for geodetic image processing for InSAR sensors enabling scientists to reduce measurements directly from radar satellites to new geophysical products with relative ease. The environment can serve as the core of a centralized processing center to bring Level-0 raw radar data up to Level-3 data products, but is adaptable to alternative processing approaches for science users interested in new and different ways to exploit mission data. Upcoming international SAR missions will deliver data of unprecedented quantity and quality, making possible global-scale studies in climate research, natural hazards, and Earth's ecosystem. The InSAR Scientific Computing Environment has the functionality to become a key element in processing data from NASA's proposed DESDynI mission into higher level data products, supporting a new class of analyses that take advantage of the long time and large spatial scales of these new data. At the core of ISCE is a new set of efficient and accurate InSAR algorithms. These algorithms are placed into an object-oriented, flexible, extensible software package that is informed by modern programming methods, including rigorous componentization of processing codes, abstraction and generalization of data models. The environment is designed to easily allow user contributions, enabling an open source community to extend the framework into the indefinite future. ISCE supports data from nearly all of the available satellite platforms, including ERS, EnviSAT, Radarsat-1, Radarsat-2, ALOS, TerraSAR-X, and Cosmo-SkyMed. The code applies a number of parallelization techniques and sensible approximations for speed. It is configured to work on modern linux-based computers with gcc compilers and python

  14. Soft computing applications: the advent of hybrid systems

    Science.gov (United States)

    Bonissone, Piero P.

    1998-10-01

    Soft computing is a new field of computer science that deals with the integration of problem-solving technologies such as fuzzy logic (FL), probabilistic reasoning, neural networks (NNs), and genetic algorithms (GAs). Each of these technologies provides us with complementary reasoning and searching methods for solving complex, real-world problems. We analyze some of the most synergistic combinations of soft computing technologies, with an emphasis on the development of smart algorithm-controllers, such as the use of FL to control GA and NN parameters. We also discuss the application of GAs to evolve NNs or tune FL controllers, and the implementation of FL controllers as NNs tuned by backpropagation-type algorithms. We conclude with a detailed description of a GA-tuned fuzzy controller that implements train handling control.
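
    As a toy instance of the smart algorithm-controller idea, the sketch below adapts a genetic algorithm's mutation rate with a small fuzzy-style rule driven by population diversity; the rules, constants and objective are illustrative, not taken from the paper.

        import random
        import statistics

        def fuzzy_mutation_rate(diversity):
            """Tiny fuzzy-style controller: low diversity -> mutate more."""
            low = max(0.0, 1.0 - diversity / 0.5)     # membership in "low diversity"
            high = min(1.0, diversity / 0.5)          # membership in "high diversity"
            # Weighted average of the rule outputs (0.30 if low, 0.01 if high).
            return (low * 0.30 + high * 0.01) / (low + high)

        def evolve(f, dim=8, pop_size=30, gens=100):
            pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
            for _ in range(gens):
                diversity = statistics.pstdev(x[0] for x in pop)   # crude proxy
                rate = fuzzy_mutation_rate(diversity)
                parents = sorted(pop, key=f)[: pop_size // 2]
                pop = parents + [
                    [v + (random.gauss(0, 0.2) if random.random() < rate else 0.0)
                     for v in random.choice(parents)]
                    for _ in range(pop_size - len(parents))
                ]
            return min(pop, key=f)

        print(evolve(lambda x: sum(v * v for v in x)))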

  15. Massive XML Data Mining in Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Zhao Li

    2014-08-01

    This paper concentrates on how to mine useful information from massive XML documents in a cloud computing environment. The structure of cloud computing and the corresponding tree data model of an XML document are analyzed first. Afterwards, the structure of the proposed XML data mining system is illustrated; it is made up of three layers: the application layer, the data processing layer, and the XML data converting layer. In the XML data converting layer, XML data are collected from databases and documents, and the source data are converted to XML files. In the data processing layer, data selection, cleaning and standardization of the XML data set are carried out, yielding an XML data set with a higher degree of structure and rich semantics. The application layer includes the results report module, the data query module and the results analysis module. Next, a massive XML data mining algorithm is proposed. The main innovations of this algorithm are that (1) the structure of an XML document is represented as an unordered tree, and (2) the sub-structures of an XML document are modeled as sub-trees, with the XML trees regarded as a forest made up of all the sub-trees. Experimental results show that the proposed method can effectively and efficiently mine useful information from massive XML documents in a cloud computing environment.
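
    The unordered-tree model described above, with sub-structures treated as sub-trees forming a forest, can be illustrated using only the Python standard library; the sample document below is invented.

        import xml.etree.ElementTree as ET
        from collections import Counter

        def canonical(node):
            """Order-independent signature of a sub-tree (unordered tree model)."""
            return node.tag + "(" + ",".join(sorted(canonical(c) for c in node)) + ")"

        def subtree_forest(xml_text):
            """Collect every sub-tree signature in the document, as a multiset."""
            root = ET.fromstring(xml_text)
            counts = Counter()
            stack = [root]
            while stack:
                node = stack.pop()
                counts[canonical(node)] += 1
                stack.extend(node)          # children join the traversal
            return counts

        doc = "<order><item><sku/><qty/></item><item><qty/><sku/></item></order>"
        for signature, n in subtree_forest(doc).items():
            print(n, signature)             # both <item> sub-trees match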

  16. Computational Pollutant Environment Assessment from Propulsion-System Testing

    Science.gov (United States)

    Wang, Ten-See; McConnaughey, Paul; Chen, Yen-Sen; Warsi, Saif

    1996-01-01

    An asymptotic plume growth method based on a time-accurate three-dimensional computational fluid dynamics formulation has been developed to assess the exhaust-plume pollutant environment from a simulated RD-170 engine hot-fire test on the F1 Test Stand at Marshall Space Flight Center. Researchers have long known that rocket-engine hot firing has the potential for forming thermal nitric oxides, as well as producing carbon monoxide when hydrocarbon fuels are used. Because of the complex physics involved, most attempts to predict the pollutant emissions from ground-based engine testing have used simplified methods, which may grossly underpredict and/or overpredict the pollutant formations in a test environment. The objective of this work has been to develop a computational fluid dynamics-based methodology that replicates the underlying test-stand flow physics to accurately and efficiently assess pollutant emissions from ground-based rocket-engine testing. A nominal RD-170 engine hot-fire test was computed, and pertinent test-stand flow physics was captured. The predicted total emission rates compared reasonably well with those of the existing hydrocarbon engine hot-firing test data.

  17. Hydrotalcite Intercalated siRNA: Computational Characterization of the Interlayer Environment

    Directory of Open Access Journals (Sweden)

    Sean C. Smith

    2012-06-01

    Using molecular dynamics (MD) simulations, we explore the structural and dynamical properties of siRNA within the intercalated environment of a Mg:Al 2:1 layered double hydroxide (LDH) nanoparticle. An ab initio force field (Condensed-phase Optimized Molecular Potentials for Atomistic Simulation Studies: COMPASS) is used for the MD simulations of the hybrid organic-inorganic systems. The structure, arrangement, mobility, close contacts and hydrogen bonds associated with the intercalated RNA are examined and contrasted with those of the isolated RNA. Computed powder X-ray diffraction patterns are also compared with related LDH-DNA experiments. As a method of probing whether the intercalated environment approximates the crystalline or rather the aqueous state, we explore the stability of the principal parameters (e.g., the major groove width) that differentiate the A- and A'- crystalline forms of siRNA and contrast this with recent findings for the same siRNA simulated in water. We find the crystalline forms remain structurally distinct when intercalated, whereas this is not the case in water. Implications for the stability of hybrid LDH-RNA systems are discussed.

  18. Automatic artefact removal in a self-paced hybrid brain-computer interface system

    Directory of Open Access Journals (Sweden)

    Yong Xinyi

    2012-07-01

    Background: A novel artefact removal algorithm is proposed for a self-paced hybrid brain-computer interface (BCI) system. This hybrid system combines a self-paced BCI with an eye-tracker to operate a virtual keyboard. To select a letter, the user must gaze at the target for at least a specific period of time (the dwell time) and then activate the BCI by performing a mental task. Unfortunately, electroencephalogram (EEG) signals are often contaminated with artefacts. Artefacts change the quality of EEG signals and subsequently degrade the BCI's performance. Methods: To remove artefacts in EEG signals, the proposed algorithm uses the stationary wavelet transform combined with a new adaptive thresholding mechanism. To evaluate the performance of the proposed algorithm and other artefact handling/removal methods, semi-simulated EEG signals (i.e., real EEG signals mixed with simulated artefacts) and real EEG signals obtained from seven participants are used. For real EEG signals, the hybrid BCI system's performance is evaluated in an online-like manner, i.e., using the continuous data from the last session as in a real-time environment. Results: With semi-simulated EEG signals, we show that the proposed algorithm achieves lower signal distortion in both the time and frequency domains. With real EEG signals, we demonstrate that for a dwell time of 0.0 s, the number of false positives per minute is 2 and the true positive rate (TPR) achieved by the proposed algorithm is 44.7%, more than 15.0% higher than that of other state-of-the-art artefact handling methods. As the dwell time increases to 1.0 s, the TPR increases to 73.1%. Conclusions: The proposed artefact removal algorithm greatly improves the BCI's performance. It also has the following advantages: (a) it does not require additional electrooculogram/electromyogram channels, long data segments or a large number of EEG channels, (b) it allows real-time processing, and (c) it reduces signal distortion.
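
    The core of such an algorithm, a stationary wavelet transform with thresholding of the detail coefficients, can be sketched with the PyWavelets package as below; the wavelet choice and the simple universal threshold stand in for the paper's adaptive thresholding mechanism, and the signal is synthetic.

        import numpy as np
        import pywt

        def swt_denoise(signal, wavelet="db4", level=3):
            """Stationary wavelet transform denoising of a 1D EEG segment.

            Detail coefficients at every level are soft-thresholded; the
            universal threshold used here is a stand-in for the paper's
            adaptive rule. Signal length must be divisible by 2**level.
            """
            coeffs = pywt.swt(signal, wavelet, level=level)
            denoised = []
            for approx, detail in coeffs:
                sigma = np.median(np.abs(detail)) / 0.6745          # noise estimate
                thr = sigma * np.sqrt(2 * np.log(len(signal)))      # universal threshold
                denoised.append((approx, pywt.threshold(detail, thr, mode="soft")))
            return pywt.iswt(denoised, wavelet)

        eeg = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.5 * np.random.randn(512)
        print(swt_denoise(eeg)[:5])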

  19. Modeling and Simulation of Renewable Hybrid Power System using Matlab Simulink Environment

    Directory of Open Access Journals (Sweden)

    Cristian Dragoş Dumitru

    2010-12-01

    The paper presents the modeling of a solar-wind-hydroelectric hybrid system in the Matlab/Simulink environment. The application is useful for the analysis and simulation of a real hybrid solar-wind-hydroelectric system connected to a public grid. The application is built on a modular architecture to facilitate easy study of the influence of each component module. Blocks such as the wind model, solar model, hydroelectric model, energy conversion and load are implemented, and the results of simulation are also presented. As an example, one of the most important studies is the behavior of the hybrid system, which allows the use of renewable, time-varying energy sources while providing a continuous supply. The application is a useful tool for research and also for teaching.

  20. The Design and Implementation of Middleware for Application Development within Honeybee Computing Environment

    Directory of Open Access Journals (Sweden)

    Nur Husna Azizul

    2016-12-01

    Computing technology is now moving from ubiquitous computing into an advanced ubiquitous computing environment, an extension of the ubiquitous environment that improves connectivity between devices. This computing environment has five major characteristics, namely: a large number of heterogeneous devices; new communication technology; mobile ad hoc networks (MANET); peer-to-peer communication; and the Internet of Things. Honeybee computing is a concept based on advanced ubiquitous computing technology to support the Smart City Smart Village (SCSV) initiative, a project initiated within Digital Malaysia. This paper describes the design and implementation of a middleware to support application development within the Honeybee computing environment.

  1. Computing membrane-AQP5-phosphatidylserine binding affinities with hybrid steered molecular dynamics approach.

    Science.gov (United States)

    Chen, Liao Y

    2015-01-01

    In order to elucidate how phosphatidylserine (PS6) interacts with AQP5 in a cell membrane, we developed a hybrid steered molecular dynamics (hSMD) method that involves: (1) simultaneously steering two centers of mass of two selected segments of the ligand, and (2) equilibrating the ligand-protein complex with and without biasing the system. To validate hSMD, we first studied vascular endothelial growth factor receptor 1 (VEGFR1) in complex with N-(4-Chlorophenyl)-2-((pyridin-4-ylmethyl)amino)benzamide (8ST), for which the binding energy is known from in vitro experiments. In this study, our computed binding energy agreed well with the experimental value. Knowing the accuracy of this hSMD method, we applied it to the AQP5-lipid-bilayer system to answer an outstanding question relevant to AQP5's physiological function: will PS6, a lipid having a single long hydrocarbon tail that was found in the central pore of the AQP5 tetramer crystal, actually bind to and inhibit AQP5's central pore under near-physiological conditions, namely, when the AQP5 tetramer is embedded in a lipid bilayer? We found, in silico, using the CHARMM 36 force field, that binding PS6 to AQP5 was a factor of 3 million weaker than "binding" it in the lipid bilayer. This suggests that AQP5's central pore will not be inhibited by PS6 or a similar lipid in a physiological environment.

  2. Hybrid and hierarchical nanoreinforced polymer composites: Computational modelling of structure–properties relationships

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon; Dai, Gaoming

    2014-01-01

    Hybrid and hierarchical polymer composites represent a promising group of materials for engineering applications. In this paper, computational studies of the strength and damage resistance of hybrid and hierarchical composites are reviewed. The reserves of the composite improvement are explored...... by using computational micromechanical models. It is shown that while glass/carbon fibers hybrid composites clearly demonstrate higher stiffness and lower weight with increasing the carbon content, they can have lower strength as compared with usual glass fiber polymer composites. Secondary...... nanoreinforcement can drastically increase the fatigue lifetime of composites. Especially, composites with the nanoplatelets localized in the fiber/matrix interface layer (fiber sizing) ensure much higher fatigue lifetime than those with the nanoplatelets in the matrix....

  3. Hybrid Computation Model for Intelligent System Design by Synergism of Modified EFC with Neural Network

    OpenAIRE

    2015-01-01

    In the recent past, it has been observed in many applications that a synergism of computational intelligence techniques outperforms any individual technique. This paper proposes a new hybrid computation model which is a novel synergism of modified evolutionary fuzzy clustering with associated neural networks. It consists of two modules: fuzzy distribution and a neural classifier. In the first module, mean patterns are distributed into a number of clusters based on the modified evolutionary fuzzy cluste...

  4. Adaptation of a Multi-Block Structured Solver for Effective Use in a Hybrid CPU/GPU Massively Parallel Environment

    Science.gov (United States)

    Gutzwiller, David; Gontier, Mathieu; Demeulenaere, Alain

    2014-11-01

    Multi-block structured solvers hold many advantages over their unstructured counterparts, such as a smaller memory footprint and efficient serial performance. Historically, multi-block structured solvers have not been easily adapted for use in a High Performance Computing (HPC) environment, and the recent trend towards hybrid CPU/GPU architectures has further complicated the situation. This paper elaborates on developments and innovations applied to the NUMECA FINE/Turbo solver that have allowed near-linear scalability with real-world problems on over 250 hybrid CPU/GPU cluster nodes. Discussion focuses on the implementation of virtual partitioning and load balancing algorithms using a novel meta-block concept. This implementation is transparent to the user, allowing all pre- and post-processing steps to be performed using a simple, unpartitioned grid topology. Additional discussion elaborates on developments that have improved parallel performance, including fully parallel I/O with the ADIOS API and the GPU porting of the computationally heavy CPUBooster convergence acceleration module.
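
    The virtual partitioning and load balancing described above can be sketched as a greedy placement of meta-blocks onto ranks; the splitting rule and block sizes below are invented for illustration and are not NUMECA's actual algorithm.

        import heapq

        def balance_blocks(block_sizes, n_ranks, max_cells):
            """Greedy meta-block load balancing.

            Blocks larger than max_cells are virtually split into meta-blocks,
            then meta-blocks are placed largest-first onto the least-loaded rank.
            Returns per-rank lists of (block_id, cells).
            """
            meta = []
            for bid, cells in enumerate(block_sizes):
                parts = -(-cells // max_cells)            # ceiling division
                meta += [(bid, cells // parts)] * parts   # split into equal pieces
            meta.sort(key=lambda m: -m[1])

            heap = [(0, r, []) for r in range(n_ranks)]   # (load, rank, assigned)
            heapq.heapify(heap)
            for piece in meta:
                load, rank, assigned = heapq.heappop(heap)
                assigned.append(piece)
                heapq.heappush(heap, (load + piece[1], rank, assigned))
            return {rank: assigned for _, rank, assigned in heap}

        print(balance_blocks([900000, 300000, 120000], n_ranks=4, max_cells=250000))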

  5. Decomposition for optimal synthesis in a parallel computing environment

    Energy Technology Data Exchange (ETDEWEB)

    Bhatt, V.D.

    1989-01-01

    Many practical problems in science and engineering require extensive simplification and abstraction in order to be tractable in currently existing computing environments. Obvious examples are: computational fluid dynamics; nuclear and plasma physics; petroleum reservoir modelling; computer graphics and image processing; and structural synthesis. This research is useful in these (as well as other) application fields, but structural synthesis has been chosen as the area of application. The research has been inspired by a design philosophy called multilevel decomposition, which enables solution of these problems in a reasonable time. In this approach a complex system (for example, a modern aircraft or automobile) is decomposed into several manageably smaller subsystems. These subsystems are solved independently without losing integrity with the main or parent system, ultimately achieving satisfactory results for the large-scale system. In addition to making problem solution more tractable, the decomposition approach is compatible with a typical design office's multidisciplinary organization and with the parallel or distributed computing technology existing today. Several example problems (including classical problems in the field and practical applications from industry) have been used to check the validity of the approach.

  6. Performance Evaluation of Three Distributed Computing Environments for Scientific Applications

    Science.gov (United States)

    Fatoohi, Rod; Weeratunga, Sisira; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    We present performance results for three distributed computing environments using the three simulated CFD applications in the NAS Parallel Benchmark suite. These environments are the DCF cluster, the LACE cluster, and an Intel iPSC/860 machine. The DCF is a prototypic cluster of loosely coupled SGI R3000 machines connected by Ethernet. The LACE cluster is a tightly coupled cluster of 32 IBM RS6000/560 machines connected by Ethernet as well as by either FDDI or an IBM Allnode switch. Results of several parallel algorithms for the three simulated applications are presented and analyzed based on the interplay between the communication requirements of an algorithm and the characteristics of the communication network of a distributed system.

  7. Collaborative editing within the pervasive collaborative computing environment

    Energy Technology Data Exchange (ETDEWEB)

    Perry, Marcia [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Agarwal, Deb [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2003-09-11

    Scientific collaborations are established for a wide variety of tasks for which several communication modes are necessary, including messaging, file-sharing, and collaborative editing. In this position paper, we describe our work on the Pervasive Collaborative Computing Environment (PCCE) which aims to facilitate scientific collaboration within widely distributed environments. The PCCE provides a persistent space in which collaborators can locate each other, exchange messages synchronously and asynchronously and archive conversations. Our current interest is in exploring research and development of shared editing systems with the goal of integrating this technology into the PCCE. We hope to inspire discussion of technology solutions for an integrated approach to synchronous and asynchronous communication and collaborative editing.

  8. Computational Chemotaxis in Ants and Bacteria over Dynamic Environments

    CERN Document Server

    Ramos, Vitorino; Rosa, A C; Abraham, A

    2007-01-01

    Chemotaxis can be defined as an innate behavioural response by an organism to a directional stimulus, in which bacteria and other single-cell or multicellular organisms direct their movements according to certain chemicals in their environment. This is important for bacteria in finding food (e.g., glucose) by swimming towards the highest concentration of food molecules, or in fleeing from poisons. Based on self-organized computational approaches and similar stigmergic concepts, we derive a novel swarm intelligent algorithm. What is striking about these observations is that both eusocial insects such as ant colonies and bacteria have similar natural mechanisms based on stigmergy for producing coherent and sophisticated patterns of global collective behaviour. Keeping in mind the above characteristics, we present a simple model to tackle the collective adaptation of a social swarm based on real ant colony behaviors (the SSA algorithm) for tracking extrema in dynamic environments and highly multimodal complex functions des...

  9. Computer-aided diagnosis system: a Bayesian hybrid classification method.

    Science.gov (United States)

    Calle-Alonso, F; Pérez, C J; Arias-Nicolás, J P; Martín, J

    2013-10-01

    A novel method to classify multi-class biomedical objects is presented. The method is based on a hybrid approach which combines pairwise comparison, Bayesian regression and the k-nearest neighbor technique. It can be applied in a fully automatic way or in a relevance feedback framework. In the latter case, the information obtained from both an expert and the automatic classification is iteratively used to improve the results until a certain accuracy level is achieved; then the learning process is finished and new classifications can be performed automatically. The method has been applied in two biomedical contexts, following the same cross-validation schemes as the original studies. The first refers to cancer diagnosis, leading to an accuracy of 77.35% versus the 66.37% obtained originally. The second considers the diagnosis of pathologies of the vertebral column. The original method achieves accuracies ranging from 76.5% to 96.7%, and from 82.3% to 97.1%, in two different cross-validation schemes. Even with no supervision, the proposed method reaches 96.71% and 97.32% in these two cases. Using a supervised framework, the achieved accuracy is 97.74%. Furthermore, all abnormal cases were correctly classified.
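
    A much-simplified stand-in for the hybrid idea, averaging the class-probability estimates of a probabilistic regression model and a k-nearest-neighbour model, is sketched below with scikit-learn; the data set and equal weighting are illustrative, and the paper's pairwise-comparison Bayesian machinery is not reproduced.

        from sklearn.datasets import load_iris
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier

        X, y = load_iris(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        reg = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

        # Hybrid rule: average the class-probability estimates of both models.
        proba = 0.5 * reg.predict_proba(X_te) + 0.5 * knn.predict_proba(X_te)
        pred = proba.argmax(axis=1)
        print("hybrid accuracy:", (pred == y_te).mean())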

  10. Heterosis in locally adapted sorghum genotypes and potential of hybrids for increased productivity in contrasting environments in Ethiopia

    Institute of Scientific and Technical Information of China (English)

    Taye T. Mindaye; Emma S. Mace; Ian D. Godwin; David R. Jordan

    2016-01-01

    Increased productivity in sorghum has been achieved in the developed world using hybrids. Despite their yield advantage, introduced hybrids have not been adopted in Ethiopia due to the lack of adaptive traits, their short plant stature and small grain size. This study was conducted to investigate hybrid performance and the magnitude of heterosis of locally adapted genotypes, in addition to introduced hybrids, in three contrasting environments in Ethiopia. In total, 139 hybrids, derived from introduced seed parents crossed with locally adapted genotypes and introduced R lines, were evaluated. Overall, the hybrids matured earlier than the adapted parents, but had higher grain yield, plant height, grain number and grain weight in all environments. The lowland adapted hybrids displayed a mean better parent heterosis (BPH) of 19%, equating to 1160 kg ha−1 and a 29% mean increase in grain yield, in addition to increased plant height and grain weight, in comparison to the hybrids derived from the introduced R lines. The mean BPH for grain yield for the highland adapted hybrids was 16% in the highland environment and 52% in the intermediate environment, equating to 698 kg ha−1 and 2031 kg ha−1, respectively, in addition to increased grain weight. The magnitude of heterosis observed for each hybrid group was related to the genetic distance between the parental lines. The majority of hybrids also showed superiority over the standard check varieties. In general, hybrids from locally adapted genotypes were superior in grain yield, plant height and grain weight compared to the high parents and introduced hybrids, indicating the potential for hybrids to increase productivity while addressing farmers' required traits.
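
    Better parent heterosis as reported above is a simple ratio; the minimal check below uses illustrative yields chosen to be consistent with the 19% and 1160 kg ha−1 figures quoted for the lowland hybrids.

        def better_parent_heterosis(f1, better_parent):
            """BPH (%) = 100 * (F1 - BP) / BP."""
            return 100.0 * (f1 - better_parent) / better_parent

        # Illustrative yields (kg/ha): a 1160 kg/ha advantage over a 6104 kg/ha parent.
        print(round(better_parent_heterosis(7264.0, 6104.0), 1))  # -> 19.0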

  11. Hybrid computing using a neural network with dynamic external memory.

    Science.gov (United States)

    Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis

    2016-10-27

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.
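
    The content-based read at the heart of the DNC's memory access can be sketched in a few lines of numpy: memory rows are scored by cosine similarity to a read key and combined under a softmax weighting. The sizes and the key-strength value below are arbitrary.

        import numpy as np

        def content_read(memory, key, beta):
            """Differentiable content-based addressing over a memory matrix.

            memory: (rows, width) matrix, key: (width,) read key,
            beta: key strength; returns the attention-weighted read vector.
            """
            norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
            cosine = memory @ key / norms                 # similarity per row
            weights = np.exp(beta * cosine)
            weights /= weights.sum()                      # softmax read weighting
            return weights @ memory

        mem = np.random.randn(16, 8)
        print(content_read(mem, key=mem[3], beta=10.0).round(2))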

  12. Distributed computations in a dynamic, heterogeneous Grid environment

    Science.gov (United States)

    Dramlitsch, Thomas

    2003-06-01

    In order to face the rapidly increasing need for computational resources of various scientific and engineering applications, one has to think of new ways to make more efficient use of the world's current computational resources. In this respect, the growing speed of wide area networks has made a new kind of distributed computing possible: metacomputing or (distributed) Grid computing. This is a rather new and uncharted field in computational science. The rapidly increasing speed of networks even outperforms the average increase of processor speed: processor speeds double on average every 18 months, whereas network bandwidths double every 9 months. Due to this development of local and wide area networks, Grid computing will certainly play a key role in the future of parallel computing. This type of distributed computing, however, is distinguished from traditional parallel computing in many ways, since it has to deal with many problems not occurring in classical parallel computing, for example heterogeneity, authentication and slow networks, to mention only a few. Some of those problems, e.g. the allocation of distributed resources along with the provision of information about these resources to the application, have already been attacked by the Globus software. Unfortunately, as far as we know, hardly any application or middleware software takes advantage of this information, since most parallelizing algorithms for finite differencing codes are implicitly designed for single supercomputer or cluster execution. We show that although it is possible to apply classical parallelizing algorithms in a Grid environment, in most cases the observed efficiency of the executed code is very poor. This work closes that gap. In this thesis, we will: show that an execution of classical parallel codes in Grid environments is possible but very slow; analyze this situation of bad performance, nail down bottlenecks in communication, remove unnecessary overhead and

  13. Petascale computation performance of lightweight multiscale cardiac models using hybrid programming models.

    Science.gov (United States)

    Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias

    2011-01-01

    Future multiscale and multiphysics models must use the power of high performance computing (HPC) systems to enable research into human disease, translational medical science, and treatment. Previously we showed that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message passing processes (e.g. the message passing interface (MPI)) with multithreading (e.g. OpenMP, POSIX pthreads). The objective of this work is to compare the performance of such hybrid programming models when applied to the simulation of a lightweight multiscale cardiac model. Our results show that the hybrid models do not perform favourably when compared to an implementation using only MPI, which is in contrast to our results using complex physiological models. Thus, with regard to lightweight multiscale cardiac models, the user may not need to increase programming complexity by using a hybrid programming approach. However, considering that model complexity will increase, as will HPC system size in both node count and number of cores per node, it is still foreseeable that we will achieve faster-than-real-time multiscale cardiac simulations on these systems using hybrid programming models.

  14. Hybrid slime mould-based system for unconventional computing

    Science.gov (United States)

    Berzina, T.; Dimonte, A.; Cifarelli, A.; Erokhin, V.

    2015-04-01

    Physarum polycephalum is considered to be promising for the realization of unconventional computational systems. In this work, we present results from three slime mould-based systems. We demonstrate the possibility of transporting biocompatible microparticles using attractors, repellents and a deflector, the latter being an external tool that makes it possible to direct Physarum motion. We also present interactions between slime mould and conducting polymers, resulting in a variation of their colour and conductivity. Finally, incorporation of the Physarum into an organic memristive device resulted in a variation of its electrical characteristics due to the slime mould's internal activity.

  15. Large-scale parallel genome assembler over cloud computing environment.

    Science.gov (United States)

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications have started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model, and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability, with competitive assembly quality, compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure instead of a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of a traditional HPC cluster.
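
    The assembler's central data structure, a de Bruijn graph in which reads contribute k-mer edges between (k-1)-mer nodes, is easy to show in miniature; GiGA distributes this graph over Giraph, whereas the sketch below is a single-machine toy with invented reads.

        from collections import defaultdict

        def de_bruijn(reads, k):
            """Build a de Bruijn graph: edges are k-mers, nodes are (k-1)-mers."""
            graph = defaultdict(list)
            for read in reads:
                for i in range(len(read) - k + 1):
                    kmer = read[i:i + k]
                    graph[kmer[:-1]].append(kmer[1:])  # prefix node -> suffix node
            return graph

        reads = ["ACGTAC", "CGTACG", "GTACGT"]
        for node, nbrs in sorted(de_bruijn(reads, k=4).items()):
            print(node, "->", nbrs)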

  16. All-optical quantum computing with a hybrid solid-state processing unit

    CERN Document Server

    Pei, Pei; Li, Chong

    2011-01-01

    We develop an architecture for a hybrid solid-state quantum processing unit for universal quantum computing. The architecture allows distant and nonidentical solid-state qubits in distinct physical systems to interact and work collaboratively. All the quantum computing procedures are controlled by optical methods using classical fields and cavity QED. Our methods have the prominent advantage of insensitivity to dissipation processes, due to the virtual excitation of subsystems. Moreover, QND measurements and state transfer for the solid-state qubits are proposed. The architecture opens promising perspectives for implementing scalable quantum computation in the broader sense that different solid-state systems can be merged and integrated into one quantum processor.

  17. Special purpose hybrid transfinite elements and unified computational methodology for accurately predicting thermoelastic stress waves

    Science.gov (United States)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper applies extensions of a hybrid transfinite element computational approach to the accurate prediction of thermoelastic stress waves. The applicability of the present formulations for capturing the thermal stress waves induced by boundary heating is demonstrated for the well-known Danilovskaya problems. A unique feature of the proposed formulations for the Danilovskaya problem of thermal stress waves in elastic solids lies in the hybrid nature of the unified formulations and the development of special purpose transfinite elements in conjunction with classical Galerkin techniques and transformation concepts. Numerical test cases validate the applicability and the superior capability to capture the thermal stress waves induced by boundary heating.

  18. Application of Computational Intelligence in Order to Develop Hybrid Orbit Propagation Methods

    Directory of Open Access Journals (Sweden)

    Iván Pérez

    2013-01-01

    We present a new approach in the fields of astrodynamics and celestial mechanics, called hybrid perturbation theory. A hybrid perturbation theory combines an integrating technique (general perturbation theory, special perturbation theory or a semianalytical method) with a forecasting technique (a statistical time series model or a computational intelligence method). This combination increases the accuracy of the integrating technique through the modeling of higher-order terms and other external forces not considered by the integrating technique. In this paper, neural networks have been used as time series forecasters in order to help two economical general perturbation theories describe the motion of an orbiter perturbed only by the Earth's oblateness.

  19. Introduction and evaluation of a novel hybrid brattice for improved dust control in underground mining faces: A computational study

    Institute of Scientific and Technical Information of China (English)

    Kurnia Jundika C.; Sasmito Agus P.; Hassani Ferri P.; Mujumdar Arun S.

    2015-01-01

    Proper control and management of dust dispersion is essential to ensure a safe and productive underground working environment. Brattice installation to direct the flow from the main shaft to the mining face has been found to be the most effective method of dispersing dust particles away from the mining face. However, it limits the movement and disturbs the flexibility of the mining fleets and operators in the tunnel. This study proposes a hybrid brattice system - a combination of a physical brattice with suitably directed and flexibly located air curtains - to mitigate dust dispersion from the mining face and reduce dust concentration to a safe level for the working operators. A validated three-dimensional computational fluid dynamics model utilizing the Eulerian-Lagrangian approach is employed to track the dispersion of dust particles. Several possible hybrid brattice scenarios are evaluated with the objective of improving dust management in underground mines. The results suggest that implementation of a hybrid brattice is beneficial for the mining operation: up to three times lower dust concentration is achieved compared to that of the physical brattice without air curtains.

  20. Team Robot Motion Planning in Dynamic Environments Using a New Hybrid Algorithm (Honey Bee Mating Optimization-Tabu List)

    Directory of Open Access Journals (Sweden)

    Mohammad Abaee Shoushtary

    2014-01-01

    This paper describes a new hybrid algorithm for team robot systems, combining honey bee mating optimization (HBMO), for minimizing robot travelling distance, with a tabu list technique for obstacle avoidance. The algorithm was implemented in the C++ programming language on a Pentium computer and simulated on simple cylindrical robots in simulation software. The environment in this simulation was dynamic, with moving obstacles and goals. The results of the simulation show the validity and reliability of the new algorithm. The outcomes show better performance than the ACO and PSO algorithms (society/nature algorithms) with respect to two well-known metrics: ATPD (average total path deviation) and AUTD (average uncovered target distance).
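
    The tabu-list half of such a hybrid, forbidding recently visited cells while a greedy rule pulls the robot toward its goal, can be illustrated with a small grid walk; the grid, obstacles and tenure below are invented.

        from collections import deque

        def tabu_walk(start, goal, obstacles, tenure=4, max_steps=50):
            """Greedy grid walk toward goal; a tabu list blocks recent cells."""
            pos, path = start, [start]
            tabu = deque(maxlen=tenure)                 # short-term memory
            for _ in range(max_steps):
                if pos == goal:
                    break
                moves = [(pos[0] + dx, pos[1] + dy)
                         for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                moves = [m for m in moves if m not in obstacles and m not in tabu]
                if not moves:                           # trapped: let old cells expire
                    if tabu:
                        tabu.popleft()
                        continue
                    break
                tabu.append(pos)
                # Choose the admissible move closest to the goal (Manhattan metric).
                pos = min(moves, key=lambda m: abs(m[0] - goal[0]) + abs(m[1] - goal[1]))
                path.append(pos)
            return path

        print(tabu_walk((0, 0), (4, 3), obstacles={(1, 0), (1, 1), (1, 2)}))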

  1. A Computational Model for Pedestrian Level Wind Environment Around Tall Buildings

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    A computational model has been developed for the simulation of the pedestrian level wind environment around tall buildings by coupling numerical simulation of the full-scale site with meteorological station records. In the first step, the hybrid/mixed finite element method is employed to solve the two-dimensional Navier-Stokes equations for the flow field around tall buildings; in view of the influence of fluctuating wind, the flow field is then revised with the effective wind velocity. A velocity ratio is defined in order to relate the numerical wind velocity to the oncoming reference wind velocity. In the second step, the frequency of occurrence of discomfort wind velocities, a suitable criterion, is calculated by coupling the numerical wind velocity with the wind velocity at the nearest meteorological station. The prediction accuracy of the wind environment simulation using the computational model is discussed. Using the available wind data at the nearest meteorological station, as well as the established criteria of wind discomfort, the frequency of wind discomfort can be predicted. A numerical example is given to illustrate the application of the proposed method.

  2. Hybrid VLSI/QCA Architecture for Computing FFTs

    Science.gov (United States)

    Fijany, Amir; Toomarian, Nikzad; Modarres, Katayoon; Spotnitz, Matthew

    2003-01-01

    A data-processor architecture that would incorporate elements of both conventional very-large-scale integrated (VLSI) circuitry and quantum-dot cellular automata (QCA) has been proposed to enable the highly parallel and systolic computation of fast Fourier transforms (FFTs). The proposed circuit would complement the QCA-based circuits described in several prior NASA Tech Briefs articles, namely Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), Vol. 25, No. 10 (October 2001), page 42; Compact Interconnection Networks Based on Quantum Dots (NPO-20855), Vol. 27, No. 1 (January 2003), page 32; and Bit-Serial Adder Based on Quantum Dots (NPO-20869), Vol. 27, No. 1 (January 2003), page 35. The cited prior articles described the limitations of VLSI circuitry and the major potential advantage afforded by QCA. To recapitulate: in a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes.

  3. Secure Data Sharing in Cloud Computing using Hybrid cloud

    Directory of Open Access Journals (Sweden)

    Er. Inderdeep Singh

    2015-06-01

    Cloud computing is a fast-growing technology that enables users to store and access their data remotely. Using cloud services, users can enjoy the benefits of on-demand cloud applications and data with limited local infrastructure. While accessing data from the cloud, different users may be related to one another through certain attributes, so sharing of data along with user privacy and data security becomes important for effective results. Most research to date has addressed securing data authentication so that users do not lose their private data stored on the public cloud, but data sharing remains a significant hurdle for researchers to overcome. Research is ongoing to provide secure data sharing with enhanced user privacy and data access security. In this paper, the various research efforts and challenges in this area are discussed in detail. This will help cloud users understand the topic and researchers develop methods to overcome these challenges.

  4. Fatigue of hybrid glass/carbon composites: 3D computational studies

    DEFF Research Database (Denmark)

    Dai, Gaoming; Mishnaevsky, Leon

    2014-01-01

    3D computational simulations of fatigue of hybrid carbon/glass fiber reinforced composites are carried out using X-FEM and multifiber unit cell models. A new software code for the automatic generation of unit cell multifiber models of composites with randomly misaligned fibers of various properties and geometrical parameters is developed. With the use of this program code and the X-FEM method, systematic investigations of the effect of the microstructure of hybrid composites (fraction of carbon versus glass fibers, misalignment, and interface strength) and the loading conditions (tensile versus compression cyclic loading effects) on the fatigue behavior of the materials are carried out. It was demonstrated that a higher fraction of carbon fibers in hybrid composites is beneficial for the fatigue lifetime of the composites under tension-tension cyclic loading, but might have a negative effect on the lifetime…

  5. Decoding of four movement directions using hybrid NIRS-EEG brain-computer interface

    Directory of Open Access Journals (Sweden)

    M. Jawad Khan

    2014-04-01

    Full Text Available The multimodal technology of the hybrid brain-computer interface (BCI) enables precision brain-signal classification that can be used in the formulation of control commands. In the present study, an experimental hybrid near-infrared spectroscopy-electroencephalography (NIRS-EEG) technique was used to extract and decode four different types of brain signals. The NIRS setup was positioned over the prefrontal brain region, and the EEG over the left and right motor cortex regions. Twelve subjects participating in the experiment were shown four direction symbols, namely, forward, backward, left and right. The control commands for forward and backward movement were estimated by performing arithmetic mental tasks related to oxy-hemoglobin (HbO) changes. The left and right direction commands were associated with right and left hand tapping, respectively. The high classification accuracies achieved show that the four different control signals can be accurately estimated using the hybrid NIRS-EEG technology.
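
    As a schematic of the decoding stage only, the sketch below trains a linear discriminant on synthetic feature vectors standing in for NIRS (HbO) and motor-cortex EEG features, with four labels mirroring the four commands. The feature construction and the LDA choice are illustrative assumptions, not the paper's exact pipeline.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        LABELS = ["forward", "backward", "left", "right"]

        # Synthetic trials: 6 features per trial (e.g., HbO mean/slope and EEG
        # band power, all made up here), 40 trials per command.
        n_per_class, n_feat = 40, 6
        X, y = [], []
        for k, _ in enumerate(LABELS):
            centre = np.zeros(n_feat)
            centre[k] = 2.0  # class-dependent offset gives toy separability
            X.append(centre + rng.standard_normal((n_per_class, n_feat)))
            y += [k] * n_per_class

        clf = LinearDiscriminantAnalysis()
        acc = cross_val_score(clf, np.vstack(X), np.array(y), cv=5).mean()
        print(f"4-class cross-validated accuracy: {acc:.2f}")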

  6. Carbon nanotube reinforced hybrid composites: Computational modeling of environmental fatigue and usability for wind blades

    DEFF Research Database (Denmark)

    Dai, Gaoming; Mishnaevsky, Leon

    2015-01-01

    The potential of advanced carbon/glass hybrid reinforced composites with secondary carbon nanotube reinforcement for wind energy applications is investigated here with the use of computational experiments. Fatigue behavior of hybrid as well as glass and carbon fiber reinforced composites with and without secondary CNT reinforcement is simulated using multiscale 3D unit cells. The materials behavior under both mechanical cyclic loading and combined mechanical and environmental loading (with phase properties degraded due to the moisture effects) is studied. The multiscale unit cells are generated … with the secondary CNT reinforcements (especially, aligned tubes) present superior fatigue performance compared to those without reinforcements, also under combined environmental and cyclic mechanical loading. This effect is stronger for carbon composites than for hybrid and glass composites.

  7. Research into display sharing techniques for distributed computing environments

    Science.gov (United States)

    Hugg, Steven B.; Fitzgerald, Paul F., Jr.; Rosson, Nina Y.; Johns, Stephen R.

    1990-01-01

    The X-based Display Sharing solution for distributed computing environments is described. The Display Sharing prototype includes the base functionality for telecast and display copy requirements. Since the prototype implementation is modular and the system design provides flexibility for Mission Control Center Upgrade (MCCU) operational considerations, the prototype implementation can serve as the baseline for a production Display Sharing implementation. To facilitate the process, the following discussions are presented: Theory of operation; System architecture; Using the prototype; Software description; Research tools; Prototype evaluation; and Outstanding issues. The prototype is based on the concept of a dedicated central host performing the majority of the Display Sharing processing, allowing minimal impact on each individual workstation. Each workstation participating in Display Sharing hosts programs that facilitate the user's access to Display Sharing from the host machine.

  8. Computational Investigation of Microstrip Antennas in Plasma Environment

    CERN Document Server

    Vyas, Hardik; Gupta, Sanjeev

    2016-01-01

    Microstrip antennas are extensively used in spacecraft systems and other applications where they encounter a plasma environment. A detailed computational investigation of the change in antenna radiation properties in the presence of plasma is presented in this paper. The study shows that antenna characteristics such as the resonant frequency, return loss, and radiation properties change when the antenna is surrounded by plasma. The particular focus of the work is to understand the causes behind these changes by correlating the complex propagation constant in the plasma medium, the field distribution on the patch, and the effective dielectric of the antenna substrate with the variations in antenna parameters. The study also provides important insights into the possibility of designing tunable microstrip antennas in which the substrate is replaced with plasma and important antenna characteristics are controlled by varying the plasma density.
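
    The quantity linking plasma density to these antenna effects is the permittivity of the surrounding plasma. The sketch below evaluates the standard cold-plasma (Drude) model with illustrative numbers; the model and the 2.4 GHz example are generic assumptions, not parameters taken from the paper.

        import math

        E0 = 8.8541878128e-12   # vacuum permittivity, F/m
        QE = 1.602176634e-19    # elementary charge, C
        ME = 9.1093837015e-31   # electron mass, kg

        def plasma_freq(n_e):
            """Electron plasma angular frequency for density n_e (m^-3)."""
            return math.sqrt(n_e * QE**2 / (E0 * ME))

        def cold_plasma_eps(f, n_e, nu=0.0):
            """Relative permittivity 1 - wp^2 / (w (w - j*nu)) of a cold plasma."""
            w = 2.0 * math.pi * f
            return 1.0 - plasma_freq(n_e) ** 2 / (w * (w - 1j * nu))

        # A 2.4 GHz patch surrounded by a plasma of density 1e16 m^-3:
        eps = cold_plasma_eps(2.4e9, 1e16)
        print(f"plasma frequency: {plasma_freq(1e16) / (2 * math.pi) / 1e9:.2f} GHz")
        print(f"eps_r at 2.4 GHz: {eps.real:.3f}{eps.imag:+.3f}j")

    Because the real part drops below unity as the density rises (and turns negative below the plasma frequency), the effective dielectric seen by the patch, and with it the resonant frequency, shifts with plasma density, which is the tuning effect discussed above.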

  9. Computational Intelligence for Deepwater Reservoir Depositional Environments Interpretation

    CERN Document Server

    Yu, Tina; Clark, Julian; Sullivan, Morgan; 10.1016/j.jngse.2011.07.014

    2013-01-01

    Predicting the oil recovery efficiency of a deepwater reservoir is a challenging task. One approach to characterizing a deepwater reservoir and predicting its producibility is to analyze its depositional information. This research proposes a deposition-based stratigraphic interpretation framework for deepwater reservoir characterization. In this framework, one critical task is the identification and labeling of the stratigraphic components in the reservoir according to their depositional environments. This interpretation process is labor intensive and can produce different results depending on the stratigrapher who performs the analysis. To relieve the stratigrapher's workload and to produce more consistent results, we have developed a novel methodology to automate this process using various computational intelligence techniques. Using a well log data set, we demonstrate that the developed methodology and the designed workflow can produce finite state transducer models that interpret deepwater reservoir depositional...

  10. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    Science.gov (United States)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-10-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the SuperKEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable number of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare-metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machines on the cluster nodes is scheduled by Moab, and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare

  11. Adaptive Execution of Jobs in Computational Grid Environment

    Institute of Scientific and Technical Information of China (English)

    Sarbani Roy; Nandini Mukherjee

    2009-01-01

    In a computational grid, jobs must adapt to the dynamically changing heterogeneous environment with the objective of maintaining quality of service. In order to enable adaptive execution of multiple jobs running concurrently in a computational grid, we propose an integrated performance-based resource management framework that is supported by a multi-agent system (MAS). The multi-agent system initially allocates the jobs onto different resource providers based on a resource selection algorithm. Later, during runtime, if the performance of any job degrades or quality of service cannot be maintained for some reason (resource failure or overloading), the multi-agent system assists the job in adapting to the system. This paper focuses on the part of our framework in which the adaptive execution facility is supported. Adaptive execution is achieved by reallocation and local tuning of jobs. Mobile as well as static agents are employed for this purpose. The paper provides a summary of the design and implementation and demonstrates the efficiency of the framework by conducting experiments on a local grid test bed.

  12. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Science.gov (United States)

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  13. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    Science.gov (United States)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provides a myriad of challenges when running in a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project, we highlight a stack of tools our team utilizes and has developed to make large scale simulation and analysis work commonplace. These tools assist in everything from generation/procurement of data (HTAR/Globus) to automated publication of results to portals like the Earth Systems Grid Federation (ESGF), while executing everything in between in a scalable, task-parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases to which they have been applied.
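
    As one concrete illustration of the task-parallel pattern mentioned above, the following is a minimal, hypothetical mpi4py task farm, not the CASCADE team's actual tooling: rank 0 deals analysis tasks (standing in for per-file routines) to workers as they become free.

        # run with e.g.: mpiexec -n 4 python taskfarm.py
        from mpi4py import MPI

        def analyze(task):
            return sum(task) / len(task)  # placeholder per-file analysis

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        TASK, STOP = 1, 2  # message tags

        if rank == 0:  # coordinator (assumes at least as many tasks as workers)
            tasks = [[float(i), float(i + 1)] for i in range(20)]
            status, results, outstanding = MPI.Status(), [], 0
            for w in range(1, size):  # prime every worker
                comm.send(tasks.pop(), dest=w, tag=TASK)
                outstanding += 1
            while outstanding:
                results.append(comm.recv(source=MPI.ANY_SOURCE, tag=TASK, status=status))
                outstanding -= 1
                w = status.Get_source()
                if tasks:  # hand the now-idle worker its next task
                    comm.send(tasks.pop(), dest=w, tag=TASK)
                    outstanding += 1
                else:
                    comm.send(None, dest=w, tag=STOP)
            print(f"collected {len(results)} results")
        else:  # worker: loop until told to stop
            while True:
                status = MPI.Status()
                task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
                if status.Get_tag() == STOP:
                    break
                comm.send(analyze(task), dest=0, tag=TASK)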

  14. Learning styles: individualizing computer-based learning environments

    Directory of Open Access Journals (Sweden)

    Tim Musson

    1995-12-01

    Full Text Available While the need to adapt teaching to the needs of a student is generally acknowledged (see Corno and Snow, 1986, for a wide review of the literature), little is known about the impact of individual learner-differences on the quality of learning attained within computer-based learning environments (CBLEs). What evidence there is appears to support the notion that individual differences have implications for the degree of success or failure experienced by students (Ford and Ford, 1992) and by trainee end-users of software packages (Bostrom et al, 1990). The problem is to identify the way in which specific individual characteristics of a student interact with particular features of a CBLE, and how the interaction affects the quality of the resultant learning. Teaching in a CBLE is likely to require a subset of teaching strategies different from that subset appropriate to more traditional environments, and the use of a machine may elicit different behaviours from those normally arising in a classroom context.

  15. FAULT TOLERANT SCHEDULING STRATEGY FOR COMPUTATIONAL GRID ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    MALARVIZHI NANDAGOPAL

    2010-09-01

    Full Text Available Computational grids have the potential for solving large-scale scientific applications using heterogeneous and geographically distributed resources. In addition to the challenges of managing and scheduling these applications, reliability challenges arise because of the unreliable nature of the grid infrastructure. Two major problems that are critical to the effective utilization of computational resources are efficient scheduling of jobs and providing fault tolerance in a reliable manner. This paper addresses these problems by combining a checkpoint-replication-based fault tolerance mechanism with the Minimum Total Time to Release (MTTR) job scheduling algorithm. TTR includes the service time of the job, the waiting time in the queue, and the transfer of input and output data to and from the resource. The MTTR algorithm minimizes the TTR by selecting a computational resource based on job requirements, job characteristics and hardware features of the resources. The fault tolerance mechanism used here sets the job checkpoints based on the resource failure rate. If a resource failure occurs, the job is restarted from its last successful state using a checkpoint file from another grid resource. A critical aspect of automatic recovery is the availability of checkpoint files; a strategy to increase their availability is replication. A Replica Resource Selection Algorithm (RRSA) is proposed to provide a Checkpoint Replication Service (CRS). The Globus Toolkit is used as the grid middleware to set up a grid environment and evaluate the performance of the proposed approach. The monitoring tools Ganglia and NWS (Network Weather Service) are used to gather hardware and network details, respectively. The experimental results demonstrate that the proposed approach effectively schedules grid jobs in a fault-tolerant way, thereby reducing the TTR of the jobs submitted to the grid. It also increases the percentage of jobs completed within the specified deadline, making the grid
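
    To make the selection rule concrete, here is a small sketch with invented numbers (the paper's exact formulas are not reproduced in the abstract): each candidate resource is scored by its TTR, and a checkpoint interval is derived from the chosen resource's failure rate.

        # Illustrative MTTR-style resource selection; all values are hypothetical.
        resources = [
            {"name": "gridA", "wait_s": 120.0, "speed": 1.0, "bw_mb_s": 50.0, "fail_per_h": 0.02},
            {"name": "gridB", "wait_s": 30.0,  "speed": 0.6, "bw_mb_s": 20.0, "fail_per_h": 0.10},
            {"name": "gridC", "wait_s": 300.0, "speed": 1.5, "bw_mb_s": 80.0, "fail_per_h": 0.01},
        ]

        def ttr(res, base_service_s, io_mb):
            """Total Time to Release = queue wait + service time + data transfer."""
            return res["wait_s"] + base_service_s / res["speed"] + io_mb / res["bw_mb_s"]

        job = {"base_service_s": 3600.0, "io_mb": 500.0}
        best = min(resources, key=lambda r: ttr(r, **job))

        # Simple heuristic (not the paper's formula): checkpoint more often on
        # failure-prone resources, aiming at several checkpoints per expected failure.
        interval_s = 3600.0 / (10.0 * best["fail_per_h"])
        print(best["name"], round(ttr(best, **job)), round(interval_s))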

  16. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    OpenAIRE

    Lukas Falat; Dusan Marcek; Maria Durisova

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. Authors test the sug...
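
    The core idea (an RBF regression whose output is corrected by a moving average of its own residuals) can be sketched in a few lines of numpy. The GA step that tunes the network in the paper is omitted here; centers are chosen at random and all data are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(300, dtype=float)
        y = np.sin(t / 20.0) + 0.1 * rng.standard_normal(t.size)  # stand-in "rate" series

        def rbf_design(x, centers, width):
            """Gaussian RBF design matrix."""
            return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width**2))

        # One-step-ahead regression y[t] -> y[t+1].
        x_in, target = y[:-1], y[1:]
        centers = rng.choice(x_in, size=10, replace=False)  # a GA would tune these
        Phi = rbf_design(x_in, centers, width=0.3)
        w, *_ = np.linalg.lstsq(Phi, target, rcond=None)

        pred = Phi @ w
        resid = target - pred
        # Moving-average correction of the error part. NB: a genuine forecaster must
        # average only past residuals; "same" mode is used here purely for brevity.
        ma = np.convolve(resid, np.ones(5) / 5.0, mode="same")
        hybrid = pred + ma

        print("RBF RMSE:   ", np.sqrt(np.mean(resid**2)))
        print("hybrid RMSE:", np.sqrt(np.mean((target - hybrid) ** 2)))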

  17. Solving Problems in Various Domains by Hybrid Models of High Performance Computations

    Directory of Open Access Journals (Sweden)

    Yurii Rogozhin

    2014-03-01

    Full Text Available This work presents a hybrid model of high performance computations. The model is based on a membrane system (P system) where some membranes may contain a quantum device that is triggered by the data entering the membrane. This model is intended to take advantage of both the biomolecular and the quantum paradigms and to overcome some of their inherent limitations. The proposed approach is demonstrated through two selected problems: SAT and image retrieval.

  18. Performance of hybrid programming models for multiscale cardiac simulations: preparing for petascale computation.

    Science.gov (United States)

    Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias

    2011-10-01

    Future multiscale and multiphysics models that support research into human disease, translational medical science, and treatment can utilize the power of high-performance computing (HPC) systems. We anticipate that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message-passing processes [e.g., the message-passing interface (MPI)] with multithreading (e.g., OpenMP, Pthreads). The objective of this study is to compare the performance of such hybrid programming models when applied to the simulation of a realistic physiological multiscale model of the heart. Our results show that the hybrid models perform favorably when compared to an implementation using only the MPI and, furthermore, that OpenMP in combination with the MPI provides a satisfactory compromise between performance and code complexity. Having the ability to use threads within MPI processes enables the sophisticated use of all processor cores for both computation and communication phases. Considering that HPC systems in 2012 will have two orders of magnitude more cores than what was used in this study, we believe that faster than real-time multiscale cardiac simulations can be achieved on these systems.
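
    The hybrid pattern evaluated here, message passing between nodes with shared-memory threads within each node, can be outlined schematically. The following is a Python analogue (mpi4py plus a thread pool standing in for MPI+OpenMP), not the authors' cardiac code; in Python, in-process threads only help when the kernels release the GIL, as numpy's do.

        # run with e.g.: mpiexec -n 2 python hybrid_step.py
        from concurrent.futures import ThreadPoolExecutor

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # Each MPI rank owns one slab of "tissue" (toy membrane potentials, mV).
        local = np.full(1_000_000 // size, -85.0)

        def relax(chunk):  # per-thread cell update, standing in for an OpenMP loop
            return chunk + 0.01 * (-65.0 - chunk)

        def step(state, n_threads=4):
            parts = np.array_split(state, n_threads)
            with ThreadPoolExecutor(max_workers=n_threads) as pool:
                return np.concatenate(list(pool.map(relax, parts)))

        for _ in range(10):
            local = step(local)  # threaded compute phase
            # Communication phase: exchange a boundary value with neighbours.
            left, right = (rank - 1) % size, (rank + 1) % size
            halo = comm.sendrecv(local[-1], dest=right, source=left)
            # (a real solver would feed `halo` into the next stencil update)

        print(rank, float(local.mean()))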

  19. Computationally efficient double hybrid density functional theory using dual basis methods

    CERN Document Server

    Byrd, Jason N

    2015-01-01

    We examine the application of the recently developed dual basis methods of Head-Gordon and co-workers to double hybrid density functional computations. Using the B2-PLYP, B2GP-PLYP, DSD-BLYP and DSD-PBEP86 density functionals, we assess the performance of dual basis methods for the calculation of conformational energy changes in C4–C7 alkanes and for the S22 set of noncovalent interaction energies. The dual basis methods, combined with resolution-of-the-identity second-order Møller-Plesset theory, are shown to give results in excellent agreement with conventional methods at a much reduced computational cost.
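
    For orientation, all four functionals share the generic two-parameter double-hybrid form (standard in the literature, not specific to this paper's dual-basis treatment):

        E_{xc}^{\mathrm{DH}} = (1 - a_x)\,E_x^{\mathrm{DFT}} + a_x\,E_x^{\mathrm{HF}}
                             + (1 - a_c)\,E_c^{\mathrm{DFT}} + a_c\,E_c^{\mathrm{PT2}}

    The PT2 term is the second-order perturbative correlation evaluated with the hybrid's orbitals; it is the cost of this term that the dual-basis and resolution-of-the-identity approximations reduce.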

  20. Welfare Mix and Hybridity. Flexible Adjustments to Changed Environments. Introduction to the Special Issue

    DEFF Research Database (Denmark)

    Henriksen, Lars Skov; Smith, Steven Rathgeb; Zimmer, Annette

    2015-01-01

    Present day welfare societies rely on a complex mix of different providers ranging from the state, markets, family, and non-profit organizations to unions, grassroots organizations, and informal networks. At the same time changing welfare discourses have opened up space for new partnerships … organizations and organizational fields adjust to a new environment that is increasingly dominated by the logic of the market, and how in particular nonprofit organizations, as hybrids by definition, are able to cope with new demands, funding structures, and control mechanisms…

  1. 16th International Conference on Hybrid Intelligent Systems and the 8th World Congress on Nature and Biologically Inspired Computing

    CERN Document Server

    Haqiq, Abdelkrim; Alimi, Adel; Mezzour, Ghita; Rokbani, Nizar; Muda, Azah

    2017-01-01

    This book presents the latest research in hybrid intelligent systems. It includes 57 carefully selected papers from the 16th International Conference on Hybrid Intelligent Systems (HIS 2016) and the 8th World Congress on Nature and Biologically Inspired Computing (NaBIC 2016), held on November 21–23, 2016 in Marrakech, Morocco. HIS - NaBIC 2016 was jointly organized by the Machine Intelligence Research Labs (MIR Labs), USA; Hassan 1st University, Settat, Morocco and University of Sfax, Tunisia. Hybridization of intelligent systems is a promising research field in modern artificial/computational intelligence and is concerned with the development of the next generation of intelligent systems. The conference’s main aim is to inspire further exploration of the intriguing potential of hybrid intelligent systems and bio-inspired computing. As such, the book is a valuable resource for practicing engineers /scientists and researchers working in the field of computational intelligence and artificial intelligence.

  2. High-performance computing, high-speed networks, and configurable computing environments: progress toward fully distributed computing.

    Science.gov (United States)

    Johnston, W E; Jacobson, V L; Loken, S C; Robertson, D W; Tierney, B L

    1992-01-01

    The next several years will see the maturing of a collection of technologies that will enable fully and transparently distributed computing environments. Networks will be used to configure independent computing, storage, and I/O elements into "virtual systems" that are optimal for solving a particular problem. This environment will make the most powerful computing systems those that are logically assembled from network-based components and will also make those systems available to a widespread audience. Anticipating that the necessary technology and communications infrastructure will be available in the next 3 to 5 years, we are developing and demonstrating prototype applications that test and exercise the currently available elements of this configurable environment. The Lawrence Berkeley Laboratory (LBL) Information and Computing Sciences and Research Medicine Divisions have collaborated with the Pittsburgh Supercomputer Center to demonstrate one distributed application that illuminates the issues and potential of using networks to configure virtual systems. This application allows the interactive visualization of large three-dimensional (3D) scalar fields (voxel data sets) by using a network-based configuration of heterogeneous supercomputers and workstations. The specific test case is visualization of 3D magnetic resonance imaging (MRI) data. The virtual system architecture consists of a Connection Machine-2 (CM-2) that performs surface reconstruction from the voxel data, a Cray Y-MP that renders the resulting geometric data into an image, and a workstation that provides the display of the image and the user interface for specifying the parameters for the geometry generation and 3D viewing. These three elements are configured into a virtual system by using several different network technologies. This paper reviews the current status of the software, hardware, and communications technologies that are needed to enable this configurable environment. These

  3. Computer-Aided Design of Drugs on Emerging Hybrid High Performance Computers

    Science.gov (United States)

    2013-09-01

    Extraction fragments only; no abstract was recovered. The fragments reference MapReduce-based scalable clustering of high-throughput molecular datasets (Workshop on Trends in High-Performance Distributed Computing, Vrije Universiteit, Amsterdam, NL, invited talk) and middleware packages for polarizable force fields on multi-core and GPU systems, supported by the MapReduce paradigm (NSF MRI #0922657, $451,051).

  4. Multi-Language Programming Environments for High Performance Java Computing

    Directory of Open Access Journals (Sweden)

    Vladimir Getov

    1999-01-01

    Full Text Available Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community's reliance on established scientific packages. As a consequence, programmers of high-performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java-to-C Interface (JCI) tool, which provides application programmers wishing to use Java with immediate accessibility to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed-language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool complements other ongoing projects such as IBM's High-Performance Compiler for Java (HPCJ) and IceT's metacomputing environment.

  5. A Computational Workbench Environment For Virtual Power Plant Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Bockelie, Michael J.; Swensen, David A.; Denison, Martin K.; Sarofim, Adel F.

    2001-11-06

    In this paper we describe our progress toward creating a computational workbench for performing virtual simulations of Vision 21 power plants. The workbench provides a framework for incorporating a full complement of models, ranging from simple heat/mass balance reactor models that run in minutes to detailed models that can require several hours to execute. The workbench is being developed using the SCIRun software system. To leverage a broad range of visualization tools the OpenDX visualization package has been interfaced to the workbench. In Year One our efforts have focused on developing a prototype workbench for a conventional pulverized coal fired power plant. The prototype workbench uses a CFD model for the radiant furnace box and reactor models for downstream equipment. In Year Two and Year Three, the focus of the project will be on creating models for gasifier based systems and implementing these models into an improved workbench. In this paper we describe our work effort for Year One and outline our plans for future work. We discuss the models included in the prototype workbench and the software design issues that have been addressed to incorporate such a diverse range of models into a single software environment. In addition, we highlight our plans for developing the energyplex based workbench that will be developed in Year Two and Year Three.

  6. Implementing interactive computing in an object-oriented environment

    Directory of Open Access Journals (Sweden)

    Frederic Udina

    2000-04-01

    Full Text Available Statistical computing in which input/output is driven by a graphical user interface is considered. A proposal is made for automatic control of computational flow to ensure that only strictly required computations are actually carried out. The computational flow is modeled by a directed graph for implementation in any object-oriented programming language with symbolic manipulation capabilities. A complete implementation example is presented to compute and display frequency-based piecewise linear density estimators such as histograms or frequency polygons.
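
    The directed-graph control described above (recompute a node only when one of its inputs has changed) can be sketched generically; this is an illustration, not the paper's implementation:

        class Node:
            """A computation node that caches its value until an input changes."""
            def __init__(self, fn, *inputs):
                self.fn, self.inputs = fn, inputs
                self.value, self.dirty = None, True
                self.dependents = []
                for n in inputs:
                    n.dependents.append(self)

            def invalidate(self):
                self.dirty = True
                for d in self.dependents:  # propagate staleness downstream only
                    d.invalidate()

            def get(self):
                if self.dirty:  # recompute strictly when required
                    self.value = self.fn(*(n.get() for n in self.inputs))
                    self.dirty = False
                return self.value

        def source(v):
            n = Node(lambda: v)
            def assign(v2):
                n.fn = lambda: v2
                n.invalidate()
            n.assign = assign
            return n

        data = source([1.0, 4.0, 2.0])
        hist = Node(lambda xs: sorted(xs), data)     # e.g., a binning step
        plot = Node(lambda h: f"plot of {h}", hist)  # e.g., a display step
        print(plot.get())        # computes hist, then plot
        print(plot.get())        # cached: nothing is recomputed
        data.assign([3.0, 0.5])  # a GUI edit invalidates only downstream nodes
        print(plot.get())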

  7. Learning Design Patterns for Hybrid Synchronous Video-Mediated Learning Environments

    DEFF Research Database (Denmark)

    Weitze, Charlotte Lærke

    2016-01-01

    This article describes an innovative learning environment where remote and face-to-face full-time general upper secondary adult students jointly participate in the same live classes at VUC Storstrøm, an adult learning centre in Denmark. The teachers developed new learning designs as a part of their daily practices and also participated in a design-based research project exploring new learning designs for this environment (Weitze, 2015). The teachers' traditional learning designs were challenged, and this led to altered pedagogical approaches with less group-work and an extensive use of monologue-based teaching. The findings were, however, that the teachers, through pedagogically innovative strategies, developed knowledge about how their pedagogical patterns in this hybrid synchronous learning situation could be supported by an array of additional educational technologies and strategies to create …

  8. Performance-Driven Hybrid Full-Body Character Control for Navigation and Interaction in Virtual Environments

    Science.gov (United States)

    Mousas, Christos; Anagnostopoulos, Christos-Nikolaos

    2017-06-01

    This paper presents a hybrid character control interface that provides the ability to synthesize in real-time a variety of actions based on the user's performance capture. The proposed methodology enables three different performance interaction modules: the performance animation control, which directly maps the user's pose to the character; the motion controller, which synthesizes the desired motion of the character based on an activity recognition methodology; and the hybrid control, which lies between the performance animation and the motion controller. With the methodology presented, the user has the freedom to interact within the virtual environment, as well as the ability to manipulate the character and to trigger a variety of actions that cannot be performed directly by him/her but are synthesized by the system. Therefore, the user is able to interact with the virtual environment in a more sophisticated fashion. This paper presents examples of different scenarios based on the three different full-body character control methodologies.

  9. Contributions to Desktop Grid Computing : From High Throughput Computing to Data-Intensive Sciences on Hybrid Distributed Computing Infrastructures

    OpenAIRE

    Fedak, Gilles

    2015-01-01

    Since the mid-90s, Desktop Grid Computing - i.e. the idea of using a large number of remote PCs distributed on the Internet to execute large parallel applications - has proved to be an efficient paradigm for providing large computational power at a fraction of the cost of a dedicated computing infrastructure. This document presents my contributions over the last decade to broaden the scope of Desktop Grid Computing. My research has followed three different directions. The first direction has ...

  10. Numerical approach for solving kinetic equations in two-dimensional case on hybrid computational clusters

    Science.gov (United States)

    Malkov, Ewgenij A.; Poleshkin, Sergey O.; Kudryavtsev, Alexey N.; Shershnev, Anton A.

    2016-10-01

    The paper presents the software implementation of a Boltzmann equation solver based on a deterministic finite-difference method. The solver allows one to carry out parallel computations of rarefied flows on a hybrid computational cluster with an arbitrary number of central processing units (CPUs) and graphics processing units (GPUs). Employment of GPUs leads to a significant acceleration of the computations, which enables us to simulate two-dimensional flows with high resolution in a reasonable time. The developed numerical code was validated by comparing the obtained solutions with Direct Simulation Monte Carlo (DSMC) data. For this purpose the supersonic flow past a flat plate at zero angle of attack is used as a test case.

  11. Hybrid annealing using a quantum simulator coupled to a classical computer

    CERN Document Server

    Graß, Tobias

    2016-01-01

    Finding the global minimum in a rugged potential landscape is a computationally hard task, often equivalent to relevant optimization problems. Simulated annealing is a computational technique which explores the configuration space by mimicking thermal noise. By slow cooling, it freezes the system in a low-energy configuration, but the algorithm often gets stuck in local minima. In quantum annealing, the thermal noise is replaced by controllable quantum fluctuations, and the technique can be implemented in modern quantum simulators. However, quantum-adiabatic schemes become prohibitively slow in the presence of quasidegeneracies. Here we propose a strategy which combines ideas from simulated annealing and quantum annealing. In such a hybrid algorithm, the outcome of a quantum simulator is processed on a classical device. While the quantum simulator explores the configuration space by repeatedly applying quantum fluctuations and performing projective measurements, the classical computer evaluates each configurati...
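
    The classical half of such a loop is easy to caricature. Below, a mock "quantum simulator" proposes configurations by random spin flips (standing in for quantum fluctuations followed by projective measurement) on a toy Ising-type energy, and the classical computer keeps the best configuration while the fluctuation strength is annealed down. This illustrates the division of labour only, not the paper's protocol.

        import random

        random.seed(1)
        N = 20
        J = [[random.choice((-1.0, 1.0)) for _ in range(N)] for _ in range(N)]

        def energy(s):
            return -sum(J[i][j] * s[i] * s[j] for i in range(N) for j in range(i + 1, N))

        def quantum_proposal(s, gamma):
            """Mock simulator: each spin flips with probability gamma, then 'measure'."""
            return [-x if random.random() < gamma else x for x in s]

        best = [random.choice((-1, 1)) for _ in range(N)]
        best_e = energy(best)
        for step in range(500):
            gamma = 0.5 * (1.0 - step / 500)  # anneal the fluctuation strength down
            cand = quantum_proposal(best, gamma)
            e = energy(cand)
            if e < best_e:  # classical post-processing keeps the lowest energy
                best, best_e = cand, e
        print("best energy found:", best_e)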

  12. Step Response Enhancement of Hybrid Stepper Motors Using Soft Computing Techniques

    Directory of Open Access Journals (Sweden)

    Amged S. El-Wakeel

    2014-05-01

    Full Text Available This paper presents the use of different soft computing techniques for step response enhancement of hybrid stepper motors. The basic differential equations of the hybrid stepper motor are used to build up a model using the MATLAB software package. Fuzzy Logic (FL) and Proportional-Integral-Derivative (PID) controllers are implemented to improve the motor performance. The numerical simulations with a PC-based controller show that the PID controller tuned by a Genetic Algorithm (GA) produces better performance than that tuned by a Fuzzy controller. They show that the Fuzzy PID-like controller produces better performance than the other linear Fuzzy controllers. Finally, the comparison between the PID controller tuned by the genetic algorithm and the Fuzzy PID-like controller shows that the Fuzzy PID-like controller produces better performance.
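
    For reference, the controller being tuned has the textbook discrete PID form. The sketch below applies generic gains (illustrative values, not the paper's GA-tuned ones) to a first-order stand-in for the motor dynamics:

        class PID:
            """Discrete PID: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral, self.prev_err = 0.0, 0.0

            def update(self, setpoint, measured):
                err = setpoint - measured
                self.integral += err * self.dt
                deriv = (err - self.prev_err) / self.dt
                self.prev_err = err
                return self.kp * err + self.ki * self.integral + self.kd * deriv

        pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.001)  # gains a GA or fuzzy rules would tune
        pos, setpoint = 0.0, 1.0
        for _ in range(2000):
            u = pid.update(setpoint, pos)
            pos += 0.001 * (u - 0.5 * pos)  # toy first-order plant: dx/dt = u - 0.5x
        print(f"final position: {pos:.3f} (target {setpoint})")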

  13. Theoretical and computational studies of the interactions between small nanoparticles and with aqueous environments

    Science.gov (United States)

    Villarreal, Oscar D.

    Interactions between nanoparticles (metallic, biological or a hybrid mix of the two) in aqueous solutions can have multiple biological applications. In some of them their tendency towards aggregation can be desirable (e.g. self-assembly), while in others it may impact negatively on their reliability (e.g. drug delivery). A realistic model of these systems contains about a million or more degrees of freedom, but their study has become feasible with today's high performance computing. In particular, nanoparticles of a few nanometers in size interacting at sub-nanometer distances have become a novel area of research. The standard mean-field model of colloid science, the Derjaguin-Landau-Verwey-Overbeak (DLVO) theory, and even the extended version (XDLVO) have encountered multiple challenges when attempting to understand the interactions of small nanoparticles in the short range, since assumptions of continuous effects no longer apply. Because the region of the interaction is in the angstrom scale, the effects of atomic finite sizes and unique entropic interactions cannot be described through simple analytical formulae corresponding to generalized interaction potentials. In this work, all-atom molecular dynamics simulations have been performed on small nanoparticles in order to provide a theoretical background for their interactions with various liquid environments as well as with each other. Such interactions have been quantified and visualized as the processes occur. Potentials of mean force have been computed as functions of the separation distances in order to obtain the binding affinities. The atomistic details of how a nanoparticle interacts with its aqueous environments and with another nanoparticle have been understood for various ligands and aqueous solutions.

  14. Students' experiences with collaborative learning in asynchronous computer-supported collaborative learning environments.

    OpenAIRE

    Dewiyanti, Silvia; Brand-Gruwel, Saskia; Jochems, Wim; Broers, Nick

    2008-01-01

    Dewiyanti, S., Brand-Gruwel, S., Jochems, W., & Broers, N. (2007). Students' experiences with collaborative learning in asynchronous computer-supported collaborative learning environments. Computers in Human Behavior, 23, 496-514.

  15. Interactive Computational Algorithms for Acoustic Simulation in Complex Environments

    Science.gov (United States)

    2015-07-19

    Extraction fragments only; no abstract was recovered. The fragments cite related publications by Jia Pan, Min Tang, Jie-yi Zhao, Ruo-feng Tong, Charlie C.L. Wang and Dinesh Manocha: VolCCD (ACM Transactions on Graphics, October 2011, doi: 10.1145/2019627.2019630); GPU-based parallel collision detection for real-time motion planning (IEEE Transactions on Visualization and Computer Graphics, February 2011); and GPU-accelerated convex hull computation (Computers & Graphics, August 2012).

  16. The Effects of a Robot Game Environment on Computer Programming Education for Elementary School Students

    Science.gov (United States)

    Shim, Jaekwoun; Kwon, Daiyoung; Lee, Wongyu

    2017-01-01

    In the past, computer programming was perceived as a task only carried out by computer scientists; in the 21st century, however, computer programming is viewed as a critical and necessary skill that everyone should learn. In order to improve teaching of problem-solving abilities in a computing environment, extensive research is being done on…

  17. The UF family of hybrid phantoms of the developing human fetus for computational radiation dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Maynard, Matthew R; Geyer, John W; Bolch, Wesley [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL (United States); Aris, John P [Department of Anatomy and Cell Biology, University of Florida, Gainesville, FL (United States); Shifrin, Roger Y, E-mail: wbolch@ufl.edu [Department of Radiology, University of Florida, Gainesville, FL (United States)

    2011-08-07

    Historically, the development of computational phantoms for radiation dosimetry has primarily been directed at capturing and representing adult and pediatric anatomy, with less emphasis devoted to models of the human fetus. As concern grows over possible radiation-induced cancers from medical and non-medical exposures of the pregnant female, the need to better quantify fetal radiation doses, particularly at the organ-level, also increases. Studies such as the European Union's SOLO (Epidemiological Studies of Exposed Southern Urals Populations) hope to improve our understanding of cancer risks following chronic in utero radiation exposure. For projects such as SOLO, currently available fetal anatomic models do not provide sufficient anatomical detail for organ-level dose assessment. To address this need, two fetal hybrid computational phantoms were constructed using high-quality magnetic resonance imaging and computed tomography image sets obtained for two well-preserved fetal specimens aged 11.5 and 21 weeks post-conception. Individual soft tissue organs, bone sites and outer body contours were segmented from these images using 3D-DOCTOR(TM) and then imported to the 3D modeling software package Rhinoceros(TM) for further modeling and conversion of soft tissue organs, certain bone sites and outer body contours to deformable non-uniform rational B-spline surfaces. The two specimen-specific phantoms, along with a modified version of the 38 week UF hybrid newborn phantom, comprised a set of base phantoms from which a series of hybrid computational phantoms was derived for fetal ages 8, 10, 15, 20, 25, 30, 35 and 38 weeks post-conception. The methodology used to construct the series of phantoms accounted for the following age-dependent parameters: (1) variations in skeletal size and proportion, (2) bone-dependent variations in relative levels of bone growth, (3) variations in individual organ masses and total fetal masses and (4) statistical percentile variations

  1. A hybrid model for the computationally-efficient simulation of the cerebellar granular layer

    Directory of Open Access Journals (Sweden)

    Anna Cattani

    2016-04-01

    Full Text Available The aim of the present paper is to efficiently describe the membrane potential dynamics of neural populations formed by species having a high density difference in specific brain areas. We propose a hybrid model whose main ingredients are a conductance-based model (ODE system) and its continuous counterpart (PDE system) obtained through a limit process in which the number of neurons confined in a bounded region of the brain tissue is sent to infinity. Specifically, in the discrete model, each cell is described by a set of time-dependent variables, whereas in the continuum model, cells are grouped into populations that are described by a set of continuous variables. Communications between populations, which translate into interactions among the discrete and the continuous models, are the essence of the hybrid model we present here. The cerebellum and cerebellum-like structures show in their granular layer a large difference in the relative density of neuronal species, making them a natural testing ground for our hybrid model. By reconstructing the ensemble activity of the cerebellar granular layer network and by comparing our results to a more realistic computational network, we demonstrate that our description of the network activity, even though it is not biophysically detailed, is still capable of reproducing salient features of neural network dynamics. Our modeling approach yields a significant computational cost reduction, increasing the simulation speed at least 270-fold. The hybrid model reproduces interesting dynamics such as local microcircuit synchronization, traveling waves, center-surround patterns and time-windowing.
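
    The discrete/continuum coupling can be caricatured in a few lines. The sketch below is a toy illustration of the idea only (a handful of explicitly simulated cells driving, and driven by, a single mean-field population variable), not the authors' conductance-based granular-layer model:

        import numpy as np

        rng = np.random.default_rng(0)
        dt, steps = 0.1, 500           # ms
        v = np.full(5, -65.0)          # sparse species: individually simulated cells (mV)
        rate = 0.0                     # dense species: one mean-field population variable

        for _ in range(steps):
            # Continuum population relaxes toward the drive from the discrete cells.
            drive = float(np.mean(v > -50.0)) * 100.0
            rate += dt / 5.0 * (-rate + drive)

            # Discrete cells: leaky dynamics + population feedback + noise.
            v += dt * (-(v + 65.0) / 10.0 + 0.3 * rate) + rng.normal(0.0, 1.0, v.size)
            v[v > -45.0] = -65.0       # crude spike-and-reset

        print(f"final population rate: {rate:.2f}")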

  2. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison; Adel Sarofim; Connie Senior

    2004-12-22

    … immersive environment. The Virtual Engineering Framework (VEF), in effect a prototype framework, was developed through close collaboration with NETL-supported research teams from the Iowa State University Virtual Reality Applications Center (ISU-VRAC) and Carnegie Mellon University (CMU). The VEF is open source, compatible across systems ranging from inexpensive desktop PCs to large-scale, immersive facilities, and provides support for heterogeneous distributed computing of plant simulations. The ability to compute plant economics through an interface that coupled the CMU IECM tool to the VEF was demonstrated, as was the ability to couple the VEF to Aspen Plus, a commercial flowsheet modeling tool. Models were interfaced to the framework using VES-Open. Tests were performed for interfacing CAPE-Open-compliant models to the framework. Where available, the developed models and plant simulations have been benchmarked against data from the open literature. The VEF has been installed at NETL. The VEF provides simulation capabilities not available in commercial simulation tools. It provides DOE engineers, scientists, and decision makers with a flexible and extensible simulation system that can be used to reduce the time, technical risk, and cost to develop the next generation of advanced, coal-fired power systems that will have low emissions and high efficiency. Furthermore, the VEF provides a common simulation system that NETL can use to help manage Advanced Power Systems Research projects, including both combustion- and gasification-based technologies.

  3. Building Efficient Wireless Infrastructures for Pervasive Computing Environments

    Science.gov (United States)

    Sheng, Bo

    2010-01-01

    Pervasive computing is an emerging concept that thoroughly brings computing devices and the consequent technology into people's daily life and activities. Most of these computing devices are very small, sometimes even "invisible", and often embedded into the objects surrounding people. In addition, these devices usually are not isolated, but…

  4. Securing the Data Storage and Processing in Cloud Computing Environment

    Science.gov (United States)

    Owens, Rodney

    2013-01-01

    Organizations increasingly utilize cloud computing architectures to reduce costs and energy consumption both in the data warehouse and on mobile devices by better utilizing the computing resources available. However, the security and privacy issues with publicly available cloud computing infrastructures have not been studied to a sufficient depth…

  5. A Hybrid Stochastic Approach for Self-Location of Wireless Sensors in Indoor Environments

    Directory of Open Access Journals (Sweden)

    Alejandro Canovas

    2009-05-01

    Full Text Available Indoor location systems, especially those using wireless sensor networks, are used in many application areas. While the need for these systems is widely proven, there is a clear lack of accuracy. Many of the implemented applications have high errors in their location estimation because of the issues arising in the indoor environment. Two different approaches have been proposed for WLAN location systems: on the one hand, the so-called deductive methods take into account the physical properties of signal propagation; these systems require a propagation model, an environment map, and the positions of the radio stations. On the other hand, the so-called inductive methods require a previous training phase where the system learns the received signal strength (RSS) in each location; this phase can be very time consuming. This paper proposes a new stochastic approach based on a combination of deductive and inductive methods whereby wireless sensors can determine their positions using WLAN technology inside a floor of a building. Our goal is to reduce the training phase in an indoor environment, but without a loss of precision. Finally, we compare the measurements taken using our proposed method in a real environment with the measurements taken by other developed systems. Comparisons between the proposed system and other hybrid methods are also provided.
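
    To make the deductive/inductive combination concrete, here is a hypothetical sketch (not the paper's algorithm): a log-distance path-loss model generates predicted RSS fingerprints that replace part of the survey grid, and a weighted nearest-neighbour match over the combined radio map yields the position.

        import math

        def predicted_rss(p, ap, rss0=-40.0, n=3.0):
            """Deductive part: log-distance path-loss model (assumed parameters)."""
            return rss0 - 10.0 * n * math.log10(max(math.dist(p, ap), 0.5))

        aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # known radio-station positions

        # Inductive part: a couple of measured fingerprints (location -> RSS vector)...
        radio_map = {(2.0, 2.0): [-52.0, -68.0, -67.0],
                     (8.0, 3.0): [-66.0, -48.0, -71.0]}
        # ...augmented with model-predicted fingerprints to shorten the training phase.
        for p in [(2.0, 8.0), (8.0, 8.0), (5.0, 5.0)]:
            radio_map[p] = [predicted_rss(p, ap) for ap in aps]

        def locate(rss, k=2):
            """Weighted k-nearest-neighbour match in signal space."""
            def d2(p):
                return sum((a - b) ** 2 for a, b in zip(rss, radio_map[p]))
            near = sorted(radio_map, key=d2)[:k]
            wts = [1.0 / (1e-9 + d2(p)) for p in near]
            return tuple(sum(w * p[i] for w, p in zip(wts, near)) / sum(wts)
                         for i in range(2))

        print(locate([-55.0, -60.0, -63.0]))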

  6. An Approach for Location privacy in Pervasive Computing Environment

    Directory of Open Access Journals (Sweden)

    Sudheer Kumar Singh

    2010-05-01

    Full Text Available This paper focuses on location privacy in location-based services. Location privacy is a particular type of information privacy that can be defined as the ability to prevent others from learning one's current or past location. Many systems, such as GPS, implicitly and automatically give their users location privacy. Once a user sends his or her current location to the application server, the server stores it in its database; the user cannot delete or modify this location data after it has been sent. Addressing this problem, we give here a theoretical approach to protecting location privacy in a pervasive computing environment, based on user anonymity. We start from a basic anonymity-based location privacy approach that uses a trusted proxy and, after analyzing it, propose an improvement that uses dummy locations of users and also dummies of the services requested from the application server. This approach reduces the user's overhead in extracting the necessary information from the reply message coming from the application server. The user sends a message containing the current location, ID and requested service to the trusted proxy; the trusted proxy generates dummy locations related to the current location and a temporary pseudonym corresponding to the user's real ID. Analysis of this approach revealed a problem with the requested service, which we address by having the trusted proxy also generate dummies of the requested service. The dummy (false) positions are produced by dummy-location generation algorithms.
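
    A minimal sketch of the proxy's dummy-generation step (a generic illustration; the abstract does not specify the algorithm): the proxy swaps the real ID for a temporary pseudonym and surrounds the true position with k-1 plausible dummies, each paired with a dummy service request.

        import math
        import random
        import secrets

        SERVICES = ["weather", "traffic", "poi_search", "transit"]

        def anonymize(location, service, k=4, radius=300.0):
            """Return a pseudonym and k shuffled (location, service) queries."""
            pseudonym = secrets.token_hex(8)     # temporary stand-in for the real ID
            queries = [(location, service)]      # the genuine request
            for _ in range(k - 1):
                ang = random.uniform(0.0, 2.0 * math.pi)
                r = random.uniform(0.3, 1.0) * radius  # keep dummies plausibly nearby
                dummy = (location[0] + r * math.cos(ang),
                         location[1] + r * math.sin(ang))
                queries.append((dummy, random.choice(SERVICES)))
            random.shuffle(queries)              # the server cannot tell which is real
            return pseudonym, queries

        pseudonym, queries = anonymize((481230.0, 5456750.0), "poi_search")
        print(pseudonym, queries)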

  7. The Integrated Computational Environment for Airbreathing Hypersonic Flight Vehicle Modeling and Design Evaluation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — An integrated computational environment for multidisciplinary, physics-based simulation and analyses of airbreathing hypersonic flight vehicles will be developed....

  8. A HYBRID METHOD FOR AUTOMATIC SPEECH RECOGNITION PERFORMANCE IMPROVEMENT IN REAL WORLD NOISY ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Urmila Shrawankar

    2013-01-01

    Full Text Available It is a well known fact that speech recognition systems perform well when used in conditions similar to those under which the acoustic models were trained. However, mismatches degrade the performance. In an adverse environment it is very difficult to predict the category of noise in advance in the case of real-world environmental noise, and difficult to achieve environmental robustness. A rigorous experimental study showed that no single method is available that will both clean the noisy speech and preserve the quality of speech that has been corrupted by real, natural environmental (mixed) noise. It was also observed that back-end techniques alone are not sufficient to improve the performance of a speech recognition system; it is necessary to implement performance improvement techniques at every step of the back-end as well as the front-end of the Automatic Speech Recognition (ASR) model. Current recognition systems solve this problem using a technique called adaptation. This study presents an experimental study with two aims. The first is to implement a hybrid method that cleans the speech signal as much as possible using all combinations of filters and enhancement techniques. The second is to develop a method for training on all categories of noise that can adapt the acoustic models to a new environment, helping to improve the performance of the speech recognizer under real-world environmental mismatched conditions. This experiment confirms that hybrid adaptation methods improve ASR performance on both levels: signal-to-noise ratio (SNR) improvement as well as word recognition accuracy in real-world noisy environments.

  10. Back to the future: virtualization of the computing environment at the W. M. Keck Observatory

    Science.gov (United States)

    McCann, Kevin L.; Birch, Denny A.; Holt, Jennifer M.; Randolph, William B.; Ward, Josephine A.

    2014-07-01

    Over its two decades of science operations, the W.M. Keck Observatory computing environment has evolved to contain a distributed hybrid mix of hundreds of servers, desktops and laptops of multiple different hardware platforms, O/S versions and vintages. Supporting the growing computing capabilities to meet the observatory's diverse, evolving computing demands within fixed budget constraints presents many challenges. This paper describes the significant role that virtualization is playing in addressing these challenges while improving the level and quality of service as well as realizing significant savings across many cost areas. Starting in December 2012, the observatory embarked on an ambitious plan to incrementally test and deploy a migration to virtualized platforms to address a broad range of specific opportunities. Implementation to date has been surprisingly glitch free, progressing well and yielding tangible benefits much faster than many expected. We describe here the general approach, starting with the initial identification of some low-hanging fruit which also provided an opportunity to gain experience and build confidence among both the implementation team and the user community. We describe the range of challenges, opportunities and cost-savings potential. Very significant among these was the substantial power savings, which resulted in strong broad support for moving forward. We go on to describe the phasing plan, the evolving scalable architecture, some of the specific technical choices, as well as some of the individual technical issues encountered along the way. The phased implementation spans Windows and Unix servers for scientific, engineering and business operations, and virtualized desktops for typical office users as well as the more demanding graphics-intensive CAD users. Other areas discussed in this paper include staff training, load balancing, redundancy, scalability, remote access, disaster readiness and recovery.

  11. Distributed computing environments for future space control systems

    Science.gov (United States)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.

  12. A high performance scientific cloud computing environment for materials simulations

    CERN Document Server

    Jorissen, Kevin; Rehr, John J

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offer functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and performance monitoring, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a Java-based GUI. Our SCC platform may be an alternative to traditi...
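
    A minimal sketch of programmatic virtual-cluster creation on EC2 with the boto3 SDK is shown below; it approximates what the authors' SCC toolset automates, but is not their code. The AMI ID, key pair name and instance type are placeholders.

```python
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Launch a small virtual cluster from a (hypothetical) scientific VM image.
nodes = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder scientific VM AMI
    InstanceType="c5.xlarge",
    MinCount=4,
    MaxCount=4,
    KeyName="scc-keypair",             # assumed pre-existing key pair
)
for node in nodes:
    node.wait_until_running()
    node.reload()
    print(node.id, node.private_ip_address)
```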

  13. Computational and experimental determinations of the UV adsorption of polyvinylsilsesquioxane-silica and titanium dioxide hybrids.

    Science.gov (United States)

    Wang, Haiyan; Lin, Derong; Wang, Di; Hu, Lijiang; Huang, Yudong; Liu, Li; Loy, Douglas A

    2014-01-01

    Sunscreens that absorb UV light without photodegradation could reduce skin cancer. Polyvinyl silsesquioxanes are known to have greater thermal and photochemical stability than organic compounds, such as those in sunscreens. This paper evaluates the UV transparency of vinyl silsesquioxanes (VS) and its hybrids with SiO2 (VSTE) and TiO2 (VSTT) experimentally and computationally. Films of VS were prepared by sol-gel polymerization of vinyltrimethoxysilane (VMS), with benzoyl peroxide as an initiator, followed by thermal curing of the formulated oligomer. Similarly, VSTE films were prepared from VMS and 5-25 wt-% tetraethoxysilane (TEOS), and VSTT films from VMS and 5-25 wt-% titanium tetrabutoxide (TTB). Experimental average transparencies of the modified films were found to be about 9-14% between 280-320 nm, 67-73% between 320-350 nm, and 86-89% between 350-400 nm. Computed band gaps and absorption edges for the hybrids were in excellent agreement with experimental data. VS, VSTE and VSTT showed good absorption in the UV-C and UV-B range, but absorbed virtually no UV-A. Addition of SiO2 or TiO2 does not improve UV-B absorption but, on the contrary, increases the transparency of thin films to UV. This increase was validated with molecular simulations. The results show that computational design can predict better sunscreens and reduce the effort of creating sunscreens capable of absorbing more UV-B and UV-A.

  14. Higher Order Modeling in Hybrid Approaches to the Computation of Electromagnetic Fields

    Science.gov (United States)

    Wilton, Donald R.; Fink, Patrick W.; Graglia, Roberto D.

    2000-01-01

    Higher order geometry representations and interpolatory basis functions for computational electromagnetics are reviewed. Two types of vector-valued basis functions are described: curl-conforming bases, used primarily in finite element solutions, and divergence-conforming bases used primarily in integral equation formulations. Both sets satisfy Nedelec constraints, which optimally reduce the number of degrees of freedom required for a given order. Results are presented illustrating the improved accuracy and convergence properties of higher order representations for hybrid integral equation and finite element methods.

  15. Public vs Private vs Hybrid vs Community - Cloud Computing: A Critical Review

    Directory of Open Access Journals (Sweden)

    Sumit Goyal

    2014-02-01

    Full Text Available These days cloud computing is booming like no other technology. Every organization, whether small, mid-sized or big, wants to adopt this cutting-edge technology for its business. As cloud technology becomes immensely popular among these businesses, the question arises: which cloud model should you consider for your business? There are four types of cloud models available in the market: Public, Private, Hybrid and Community. This review paper answers the question of which model would be most beneficial for your business. All four models are defined, discussed and compared with their benefits and pitfalls, thus giving you a clear idea of which model to adopt for your organization.

  16. Quantum computation in a quantum-dot-Majorana-fermion hybrid system

    CERN Document Server

    Xue, Zheng-Yuan

    2012-01-01

    We propose a scheme to implement universal quantum computation in a quantum-dot-Majorana-fermion hybrid system. Quantum information is encoded in pairs of Majorana fermions, which live on the interface between topologically trivial and nontrivial sections of a quantum nanowire deposited on an s-wave superconductor. Universal single-qubit gates on a topological qubit can be achieved. A measurement-based two-qubit controlled-NOT gate is produced with the help of parity measurements assisted by the quantum dot and followed by prescribed single-qubit gates. The parity measurement, on the quantum dot and a topological qubit, is achieved by the Aharonov-Casher effect.

  17. Hybrid EEG-EOG brain-computer interface system for practical machine control.

    Science.gov (United States)

    Punsawad, Yunyong; Wongsawat, Yodchanan; Parnichkun, Manukid

    2010-01-01

    Practical issues such as accuracy across subjects, the number of sensors, and training time are important problems of existing brain-computer interface (BCI) systems. In this paper, we propose a hybrid framework for the BCI system that can make machine control more practical. The electrooculogram (EOG) is employed to control the machine in the left and right directions, while the electroencephalogram (EEG) is employed to control the forward, no-action, and complete-stop motions of the machine. By using only 2-channel biosignals, an average classification accuracy of more than 95% can be achieved.
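
    A toy sketch of such a two-channel decision rule is given below, assuming a horizontal EOG channel for steering and alpha-band EEG power as a go/stop switch; the thresholds and features are illustrative stand-ins, not the authors' classifier.

```python
import numpy as np

def eog_direction(eog, thresh=50e-6):
    """Map a horizontal EOG deflection (volts) to a steering command."""
    peak = eog.max() if abs(eog.max()) > abs(eog.min()) else eog.min()
    if peak > thresh:
        return "left"
    if peak < -thresh:
        return "right"
    return None

def eeg_drive(eeg, fs=256, band=(8, 13), power_thresh=1e-11):
    """Use alpha-band EEG power as a simple go / no-go switch."""
    spec = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    alpha = spec[(freqs >= band[0]) & (freqs <= band[1])].mean()
    return "forward" if alpha < power_thresh else "stop"

def hybrid_command(eog, eeg):
    # EOG steering takes priority; otherwise fall back to EEG drive state.
    return eog_direction(eog) or eeg_drive(eeg)

rng = np.random.default_rng(0)
print(hybrid_command(rng.normal(0, 10e-6, 256), rng.normal(0, 5e-6, 256)))
```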

  18. Treatment of early and late reflections in a hybrid computer model for room acoustics

    DEFF Research Database (Denmark)

    Naylor, Graham

    1992-01-01

    The ODEON computer model for acoustics in large rooms is intended for use both in design (by predicting room acoustical indices quickly and easily) and in research (by forming the basis of an auralization system and allowing study of various room acoustical phenomena). These conflicting demands...... preclude the use of both "pure" image source and "pure" particle tracing methods. A hybrid model has been developed, in which rays discover potential image sources up to a specified order. Thereafter, the same ray tracing process is used in a different way to rapidly generate a dense reverberant decay...

  19. Assessment of asthmatic inflammation using hybrid fluorescence molecular tomography-x-ray computed tomography

    Science.gov (United States)

    Ma, Xiaopeng; Prakash, Jaya; Ruscitti, Francesca; Glasl, Sarah; Stellari, Fabio Franco; Villetti, Gino; Ntziachristos, Vasilis

    2016-01-01

    Nuclear imaging plays a critical role in asthma research but is limited in its readings of biology due to the short-lived signals of radio-isotopes. We employed hybrid fluorescence molecular tomography (FMT) and x-ray computed tomography (XCT) for the assessment of asthmatic inflammation based on resolving cathepsin activity and matrix metalloproteinase activity in dust mite, ragweed, and Aspergillus species-challenged mice. The reconstructed multimodal fluorescence distribution showed good correspondence with ex vivo cryosection images and histological images, confirming FMT-XCT as an interesting alternative for asthma research.

  20. Geocomputation over Hybrid Computer Architecture and Systems: Prior Works and On-going Initiatives at UARK

    Science.gov (United States)

    Shi, X.

    2015-12-01

    As NSF indicated - "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and the geosciences. With the exponential growth of geodata, the challenge of scalable and high performance computing for big data analytics becomes urgent, because many research activities are constrained by software or tools that cannot even complete the computation process. Heterogeneous geodata integration and analytics obviously magnify the complexity and the operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) architecture and the Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions that employ massive parallelism and hardware resources to achieve scalability and high performance for data intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in a massively parallel computing environment to achieve the capability for scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise of achieving scalability and high performance by exploiting task- and data-level parallelism that is not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data, as proved by our prior works, while the potential of such advanced

  1. Toward Distributed Service Discovery in Pervasive Computing Environments

    Science.gov (United States)

    2006-02-01

  2. Computing environment for the ASSIST data warehouse at Lawrence Livermore National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Shuk, K.

    1995-11-01

    The current computing environment for the ASSIST data warehouse at Lawrence Livermore National Laboratory is that of a central server accessed by a terminal or terminal emulator. The initiative to move to a client/server environment is strong, backed by desktop machines becoming more and more powerful. The desktop machines can now take on parts of tasks once run entirely on the central server, making the whole environment computationally more efficient as a result. Services are tasks that are repeated throughout the environment such that it makes sense to share them; tasks such as email, user authentication and file transfer are services. The new client/server environment needs to determine which services must be included in the environment for basic functionality. These services then unify the computing environment, not only for the forthcoming ASSIST+, but for Administrative Information Systems as a whole, joining various server platforms with heterogeneous desktop computing platforms.

  3. Hybrid simulation of scatter intensity in industrial cone-beam computed tomography

    Science.gov (United States)

    Thierry, R.; Miceli, A.; Hofmann, J.; Flisch, A.; Sennhauser, U.

    2009-01-01

    A cone-beam computed tomography (CT) system using a 450 kV X-ray tube has been developed to address the three-dimensional imaging of automotive parts in short acquisition times. Because the probability of detecting scattered photons is high given the energy range and the area of detection, a scattering correction becomes mandatory for generating reliable images with enhanced contrast detectability. In this paper, we present a hybrid simulator for the fast and accurate calculation of the scattered intensity distribution. The full acquisition chain is simulated, from the generation of a polyenergetic photon beam and its interaction with the scanned object to the energy deposited in the detector. Object phantoms can be spatially described in the form of voxels, mathematical primitives or CAD models. Uncollided radiation is treated with a ray-tracing method, and scattered radiation is split into single and multiple scattering. The single scattering is calculated with a deterministic approach accelerated by a forced-detection method. The residual noisy signal is subsequently deconvolved with the iterative Richardson-Lucy method. Finally, the multiple scattering is addressed with a coarse Monte Carlo (MC) simulation. The proposed hybrid method has been validated on aluminium phantoms of varying size and object-to-detector distance, and found to be in good agreement with the MC code Geant4. The acceleration achieved by the hybrid method over standard MC on a single projection is approximately three orders of magnitude.
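
    The Richardson-Lucy step can be reproduced with off-the-shelf tools; the sketch below uses scikit-image's implementation on stand-in Poisson data with a Gaussian kernel. The data and kernel are illustrative, not the paper's detector model.

```python
import numpy as np
from skimage import restoration

# Noisy single-scatter estimate (stand-in data) and a Gaussian blur kernel.
rng = np.random.default_rng(0)
scatter = rng.poisson(lam=5.0, size=(128, 128)).astype(float)

x = np.arange(-3, 4)
g = np.exp(-x**2 / 2.0)
psf = np.outer(g, g)
psf /= psf.sum()                      # normalized point spread function

# Iterative Richardson-Lucy deconvolution (30 iterations, unclipped output).
deblurred = restoration.richardson_lucy(scatter, psf, 30, clip=False)
print(deblurred.shape)
```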

  4. Hybrid Numerical Solvers for Massively Parallel Eigenvalue Computation and Their Benchmark with Electronic Structure Calculations

    CERN Document Server

    Imachi, Hiroto

    2015-01-01

    Optimally hybrid numerical solvers were constructed for the massively parallel generalized eigenvalue problem (GEP). The strong scaling benchmark was carried out on the K computer and other supercomputers for electronic structure calculation problems with matrix sizes of M = 10^4-10^6 on up to 10^5 cores. The GEP procedure is decomposed into two subprocedures: the reducer to the standard eigenvalue problem (SEP) and the solver of the SEP. A hybrid solver is constructed by choosing a routine for each subprocedure from the three parallel solver libraries ScaLAPACK, ELPA and EigenExa. The hybrid solvers with the two newer libraries, ELPA and EigenExa, give better benchmark results than the conventional ScaLAPACK library. Detailed analysis of the results implies that the reducer can be a bottleneck on next-generation (exa-scale) supercomputers, which provides guidance for future research. The code was developed as a middleware and a mini-application and will appear online.
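
    On a single workstation the same reducer-plus-solver pipeline is hidden behind one LAPACK-backed call; the sketch below poses a small symmetric-definite GEP A x = lambda B x with SciPy as a point of reference (toy matrices, not the electronic-structure problems benchmarked in the paper).

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 500
A = rng.standard_normal((n, n)); A = (A + A.T) / 2              # symmetric
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)    # positive definite

# Generalized eigenvalue problem A x = lambda B x: internally reduced to a
# standard problem via a Cholesky factorization of B, the same reducer/solver
# split that the benchmark study parallelizes at scale.
w, v = eigh(A, B)
print(w[:5])    # lowest five eigenvalues
```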

  5. Research related to improved computer aided design software package. [comparative efficiency of finite, boundary, and hybrid element methods in elastostatics

    Science.gov (United States)

    Walston, W. H., Jr.

    1986-01-01

    The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.

  6. Hybrid ecologies: interactions between artificial and natural organisms in telematic environments

    Directory of Open Access Journals (Sweden)

    Guto Nóbrega

    2011-12-01

    Full Text Available This paper reports and analyses two projects in telematic art realized in 2011 that had the participation of NANO – Nucleus of Art and New Organisms - School of Fine Arts - UFRJ, research laboratory coordinated by Dr. Carlos (Guto Nobrega and Dr. Maria Luisa Fragoso, as part of the Post Graduate Program in Visual Arts. Both projects involved the creation of artificial systems for interactivity in telematic environments. The text will present relevant points of the two projects, their relations, resonances and unfoldings. The focus of our analysis is the process of invention of artificial interfaces, their hybridizations, complexity and modes of interaction and presence in the context of works of telematic art.

  7. Project Scheduling Using Hybrid Genetic Algorithm with Fuzzy Logic Controller in SCM Environment

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In a supply chain management (SCM) environment, we consider a resource-constrained project scheduling problem (rcPSP) model, one of the advanced scheduling problems addressed by constraint programming techniques. We develop a hybrid genetic algorithm (hGA) with a fuzzy logic controller (FLC) to solve the rcPSP, a well-known NP-hard problem. This new approach is based on the design of genetic operators with an FLC, initialized through the serial method, which is superior for large rcPSP instances. For these rcPSP problems, we first demonstrate that our hGA with FLC (flc-hGA) yields better results than several heuristic procedures presented in the literature, and we show that flc-hGA exhibits better average-fitness evolutionary behavior than an hGA without an FLC.
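
    A toy sketch of the central mechanism, a controller that retunes the mutation rate from fitness feedback, is shown below. The piecewise rule stands in for a genuine fuzzy logic controller, and the one-max objective is purely illustrative.

```python
import random

def fuzzy_adjust(rate, improvement):
    """Crude stand-in for the FLC: raise exploration when progress stalls."""
    if improvement < 0.001:      # "stalled" -> more mutation
        return min(rate * 1.5, 0.5)
    if improvement > 0.01:       # "improving fast" -> exploit
        return max(rate * 0.7, 0.01)
    return rate

def hga(fitness, length=20, pop_size=40, gens=100):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    rate, best_prev = 0.05, 0.0
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        best = fitness(pop[0])
        rate = fuzzy_adjust(rate, best - best_prev)   # FLC-style feedback
        best_prev = best
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]                 # one-point crossover
            children.append([1 - g if random.random() < rate else g for g in child])
        pop = parents + children
    return max(pop, key=fitness)

print(sum(hga(sum)))   # toy objective: maximize the number of ones
```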

  8. A Semantic Based Policy Management Framework for Cloud Computing Environments

    Science.gov (United States)

    Takabi, Hassan

    2013-01-01

    Cloud computing paradigm has gained tremendous momentum and generated intensive interest. Although security issues are delaying its fast adoption, cloud computing is an unstoppable force and we need to provide security mechanisms to ensure its secure adoption. In this dissertation, we mainly focus on issues related to policy management and access…

  10. A Hybrid Computational Intelligence Approach Combining Genetic Programming And Heuristic Classification for Pap-Smear Diagnosis

    DEFF Research Database (Denmark)

    Tsakonas, Athanasios; Dounias, Georgios; Jantzen, Jan;

    2001-01-01

    The paper suggests the combined use of different computational intelligence (CI) techniques in a hybrid scheme, as an effective approach to medical diagnosis. Getting to know the advantages and disadvantages of each computational intelligence technique in the recent years, the time has come...... diagnoses. The final result is a short but robust rule based classification scheme, achieving high degree of classification accuracy (exceeding 90% of accuracy for most classes) in a meaningful and user-friendly representation form for the medical expert. The domain of application analyzed through the paper...... is the well-known Pap-Test problem, corresponding to a numerical database, which consists of 450 medical records, 25 diagnostic attributes and 5 different diagnostic classes. Experimental data are divided in two equal parts for the training and testing phase, and 8 mutually dependent rules for diagnosis...

  11. PWR hybrid computer model for assessing the safety implications of control systems

    Energy Technology Data Exchange (ETDEWEB)

    Smith, O L; Renier, J P; Difilippo, F C; Clapp, N E; Sozer, A; Booth, R S; Craddick, W G; Morris, D G

    1986-03-01

    The ORNL study of safety-related aspects of nuclear power plant control systems consists of two interrelated tasks: (1) failure mode and effects analysis (FMEA) that identified single and multiple component failures that might lead to significant plant upsets and (2) computer models that used these failures as initial conditions and traced the dynamic impact on the control system and remainder of the plant. This report describes the simulation of Oconee Unit 1, the first plant analyzed. A first-principles, best-estimate model was developed and implemented on a hybrid computer consisting of AD-4 analog and PDP-10 digital machines. Controls were placed primarily on the analog to use its interactive capability to simulate operator action. 48 refs., 138 figs., 15 tabs.

  12. Semiempirical Quantum Chemical Calculations Accelerated on a Hybrid Multicore CPU-GPU Computing Platform.

    Science.gov (United States)

    Wu, Xin; Koslowski, Axel; Thiel, Walter

    2012-07-10

    In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.
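
    The basic operation such a GPU kernel applies in bulk is the 2x2 Jacobi rotation; a plain NumPy version of a single rotation (CPU-side and purely illustrative, not the authors' CUDA code) is sketched below.

```python
import numpy as np

def jacobi_rotate(F, i, j):
    """Zero F[i, j] (and F[j, i]) of a symmetric matrix with one rotation."""
    if F[i, j] == 0.0:
        return F
    theta = 0.5 * np.arctan2(2 * F[i, j], F[j, j] - F[i, i])
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(len(F))
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = s, -s
    return R.T @ F @ R

F = np.array([[2.0, 1.0], [1.0, 3.0]])
print(jacobi_rotate(F, 0, 1))   # off-diagonal elements driven to zero
```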

  13. Libraries and Development Environments for Monte Carlo Simulations of Lattice Gauge Theories on Parallel Computers

    Science.gov (United States)

    Decker, K. M.; Jayewardena, C.; Rehmann, R.

    We describe the library lgtlib, and lgttool, the corresponding development environment for Monte Carlo simulations of lattice gauge theory on multiprocessor vector computers with shared memory. We explain why distributed memory parallel processor (DMPP) architectures are particularly appealing for compute-intensive scientific applications, and introduce the design of a general application and program development environment system for scientific applications on DMPP architectures.

  14. The UF family of reference hybrid phantoms for computational radiation dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Choonsik [Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institute of Health, Bethesda, MD 20852 (United States); Lodwick, Daniel; Hurtado, Jorge; Pafundi, Deanna [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States); Williams, Jonathan L [Department of Radiology, University of Florida, Gainesville, FL 32611 (United States); Bolch, Wesley E [Departments of Nuclear and Radiological and Biomedical Engineering, University of Florida, Gainesville, FL 32611 (United States)], E-mail: wbolch@ufl.edu

    2010-01-21

    Computational human phantoms are computer models used to obtain dose distributions within the human body exposed to internal or external radiation sources. In addition, they are increasingly used to develop detector efficiencies for in vivo whole-body counters. Two classes of computational human phantoms have been widely utilized for dosimetry calculation: stylized and voxel phantoms that describe human anatomy through mathematical surface equations and 3D voxel matrices, respectively. Stylized phantoms are flexible in that changes to organ position and shape are possible given avoidance of region overlap, while voxel phantoms are typically fixed to a given patient anatomy, yet can be proportionally scaled to match individuals of larger or smaller stature, but of equivalent organ anatomy. Voxel phantoms provide much better anatomical realism as compared to stylized phantoms, which are intrinsically limited by mathematical surface equations. To address the drawbacks of these phantoms, hybrid phantoms based on non-uniform rational B-spline (NURBS) surfaces have been introduced wherein anthropomorphic flexibility and anatomic realism are both preserved. Researchers at the University of Florida have introduced a series of hybrid phantoms representing the ICRP Publication 89 reference newborn, 15 year, and adult male and female. In this study, six additional phantoms are added to the UF family of hybrid phantoms: those of the reference 1 year, 5 year and 10 year child. Head and torso CT images of patients whose ages were close to the targeted ages were obtained under approved protocols. Major organs and tissues were segmented from these images using an image processing software, 3D-DOCTOR(TM). NURBS and polygon mesh surfaces were then used to model individual organs and tissues after importing the segmented organ models to the 3D NURBS modeling software, Rhinoceros(TM). The phantoms were matched to four reference datasets: (1) standard anthropometric data, (2) reference

  15. Construction of a Digital Learning Environment Based on Cloud Computing

    Science.gov (United States)

    Ding, Jihong; Xiong, Caiping; Liu, Huazhong

    2015-01-01

    Constructing the digital learning environment for ubiquitous learning and asynchronous distributed learning has opened up immense amounts of concrete research. However, current digital learning environments do not fully fulfill the expectations on supporting interactive group learning, shared understanding and social construction of knowledge.…

  17. A simplified computational fluid-dynamic approach to the oxidizer injector design in hybrid rockets

    Science.gov (United States)

    Di Martino, Giuseppe D.; Malgieri, Paolo; Carmicino, Carmine; Savino, Raffaele

    2016-12-01

    Fuel regression rate in hybrid rockets is non-negligibly affected by the oxidizer injection pattern. In this paper a simplified computational approach developed in an attempt to optimize the oxidizer injector design is discussed. Numerical simulations of the thermo-fluid-dynamic field in a hybrid rocket are carried out with a commercial solver to investigate several injection configurations, with the aim of increasing the fuel regression rate and minimizing consumption unevenness, while still favoring the establishment of flow recirculation at the motor head end; such recirculation is generated with an axial nozzle injector and has been demonstrated to promote combustion stability as well as larger efficiency and regression rate. All the computations have been performed on the configuration of a lab-scale hybrid rocket motor available at the propulsion laboratory of the University of Naples under typical operating conditions. After a preliminary comparison between the two baseline limiting cases of an axial subsonic nozzle injector and uniform injection through the prechamber, a parametric analysis has been carried out by varying the oxidizer jet flow divergence angle, as well as the grain port diameter and the oxidizer mass flux, to study the effect of the flow divergence on the heat transfer distribution over the fuel surface. Some experimental firing test data are presented, and, under the hypothesis that fuel regression rate and surface heat flux are proportional, the measured fuel consumption axial profiles are compared with the predicted surface heat flux, showing fairly good agreement, which allowed validating the employed design approach. Finally an optimized injector design is proposed.

  18. Analytical Study of Object Components for Distributed and Ubiquitous Computing Environment

    CERN Document Server

    Batra, Usha; Bhardwaj, Sachin

    2011-01-01

    Distributed object computing is a paradigm that allows objects to be distributed across a heterogeneous network and allows each of the components to interoperate as a unified whole. A new generation of distributed applications, such as telemedicine and e-commerce applications, is being deployed in heterogeneous and ubiquitous computing environments. The objective of this paper is to explore the applicability of component-based services in a ubiquitous computing environment. While the fundamental structure of the various distributed object components is similar, there are differences that can profoundly impact an application developer or the administrator of a distributed simulation exercise, as well as their implementation in a ubiquitous computing environment.

  19. Dynamic tracking of elementary preservice teachers' experiences with computer-based mathematics learning environments

    Science.gov (United States)

    Campbell, Stephen R.

    2003-05-01

    A challenging task in educational research today is to understand the implications of recent developments in computer-based learning environments. On the other hand, questions regarding learning and mathematical cognition have long been a central focus of research in mathematics education. Adding technology compounds an already complex problematic. Fortunately, computer-based technology also provides researchers with new ways of studying cognition and instruction. This paper introduces a new method for dynamically tracking learners' experiences in computer-based learning environments. Dynamic tracking is illustrated in both a classroom and a clinical setting by drawing on two studies with elementary preservice teachers working in computer-based mathematics learning environments.

  20. Design and Implement of Astronomical Cloud Computing Environment In China-VO

    Science.gov (United States)

    Li, Changhua; Cui, Chenzhou; Mi, Linying; He, Boliang; Fan, Dongwei; Li, Shanshan; Yang, Sisi; Xu, Yunfei; Han, Jun; Chen, Junyi; Zhang, Hailong; Yu, Ce; Xiao, Jian; Wang, Chuanjun; Cao, Zihuang; Fan, Yufeng; Liu, Liang; Chen, Xiao; Song, Wenming; Du, Kangyu

    2017-06-01

    The astronomy cloud computing environment is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on virtualization technology, the astronomy cloud computing environment was designed and implemented by the China-VO team. It consists of five distributed nodes across the mainland of China. Astronomers can obtain computing and storage resources in this cloud computing environment. Through this environment, astronomers can easily search and analyze astronomical data collected by different telescopes and data centers, and avoid large-scale dataset transportation.

  1. The study of hybrid model identification,computation analysis and fault location for nonlinear dynamic circuits and systems

    Institute of Scientific and Technical Information of China (English)

    XIE Hong; HE Yi-gang; ZENG Guan-da

    2006-01-01

    This paper presents hybrid model identification for a class of nonlinear circuits and systems via a combination of the block-pulse function transform with the Volterra series. After discussing the method to establish the hybrid model and introducing the hybrid model identification, a set of related formulas is derived for calculating the hybrid model and computing the Volterra series solution of nonlinear dynamic circuits and systems. In order to significantly reduce the computational cost of fault location, the paper presents a new fault diagnosis method based on multiple preset models that can be realized online. An example of identification simulation and fault diagnosis is given. Results show that the method has high accuracy and efficiency for fault location of nonlinear dynamic circuits and systems.
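
    For reference, the truncated Volterra expansion underlying such models can be written in standard continuous-time form (the paper's own notation may differ):

```latex
y(t) = h_0 + \int_{0}^{t} h_1(\tau_1)\, u(t-\tau_1)\, d\tau_1
           + \int_{0}^{t}\!\int_{0}^{t} h_2(\tau_1, \tau_2)\, u(t-\tau_1)\, u(t-\tau_2)\, d\tau_1\, d\tau_2
           + \cdots
```

    Projecting each kernel h_n onto the piecewise-constant block-pulse basis turns these integrals into finite matrix products, which is what makes the identification computationally tractable.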

  2. Hydrodynamics and Water Quality forecasting over a Cloud Computing environment: INDIGO-DataCloud

    Science.gov (United States)

    Aguilar Gómez, Fernando; de Lucas, Jesús Marco; García, Daniel; Monteoliva, Agustín

    2017-04-01

    Algae blooms due to eutrophication are an extended problem for water reservoirs and lakes that directly impacts water quality. A bloom can create a dead zone that lacks enough oxygen to support life and can also be harmful to humans, so it must be controlled in water masses used for supply, bathing or other purposes. Hydrodynamic and water quality modelling can contribute to forecasting the status of the water system in order to alert authorities before an algae bloom event occurs, and can be used to predict scenarios and find solutions that reduce the harmful impact of blooms. High-resolution models need to process a large amount of data on a sufficiently robust computing infrastructure. INDIGO-DataCloud (https://www.indigo-datacloud.eu/) is a European Commission funded project that aims at developing a data and computing platform targeting scientific communities, deployable on multiple hardware platforms and provisioned over hybrid (private or public) e-infrastructures. The project addresses the development of solutions for different case studies using different cloud-based alternatives. In the first INDIGO software release, a set of components is ready to manage the deployment of services that perform N Delft3D simulations (for calibration or scenario definition) over a cloud computing environment, using Docker technology: TOSCA requirement descriptions, a Docker repository, an orchestrator, AAI (authentication and authorization) and OneData (a distributed storage system). Moreover, the Future Gateway portal, based on Liferay, provides a user-friendly interface where the user can configure the simulations. Due to the data approach of INDIGO, the developed solutions can contribute to managing the full data life cycle of a project, thanks to different tools for managing datasets and metadata. Furthermore, the cloud environment provides a dynamic, scalable and easy-to-use framework for non-IT expert users. This framework is potentially capable of automating the processing of

  3. Computer Aided Design Tools for Extreme Environment Electronics Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This project aims to provide Computer Aided Design (CAD) tools for radiation-tolerant, wide-temperature-range digital, analog, mixed-signal, and radio-frequency...

  4. Distributed metadata in a high performance computing environment

    Energy Technology Data Exchange (ETDEWEB)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store. The computer-executable method, system, and computer program product comprise receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and, upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
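
    A minimal sketch of the routing decision, namely which burst buffer owns a given metadata key, is given below using simple hash partitioning; the node names and partitioning rule are illustrative assumptions, not the patented design.

```python
import hashlib

BURST_BUFFERS = ["bb0", "bb1", "bb2"]           # illustrative node names
metadata_store = {bb: {} for bb in BURST_BUFFERS}

def owner(key: str) -> str:
    """Hash-partition the key space across burst buffers."""
    digest = hashlib.sha256(key.encode()).digest()
    return BURST_BUFFERS[digest[0] % len(BURST_BUFFERS)]

def put_meta(key: str, value: dict) -> None:
    metadata_store[owner(key)][key] = value

def get_meta(key: str) -> dict:
    # Determine the owning buffer, then look the key up in its store.
    return metadata_store[owner(key)][key]

put_meta("block-42", {"offset": 0, "len": 4096, "buffer": "bb1"})
print(owner("block-42"), get_meta("block-42"))
```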

  5. Redesigning Computer-based Learning Environments: Evaluation as Communication

    CERN Document Server

    Brust, Matthias R; Ricarte, Ivan M L

    2007-01-01

    In the field of evaluation research, computer scientists live constantly upon dilemmas and conflicting theories. As evaluation is differently perceived and modeled among educational areas, it is not difficult to become trapped in dilemmas, which reflects an epistemological weakness. Additionally, designing and developing a computer-based learning scenario is not an easy task. Advancing further, with end-users probing the system in realistic settings, is even harder. Computer science research in evaluation faces an immense challenge, having to cope with contributions from several conflicting and controversial research fields. We believe that deep changes must be made in our field if we are to advance beyond the CBT (computer-based training) learning model and to build an adequate epistemology for this challenge. The first task is to relocate our field by building upon recent results from philosophy, psychology, social sciences, and engineering. In this article we locate evaluation in respect to communication s...

  6. Imaging SKA-Scale data in three different computing environments

    CERN Document Server

    Dodson, Richard; Wu, Chen; Popping, Attila; Meyer, Martin; Wicenec, Andreas; Quinn, Peter; van Gorkom, Jacqueline; Momjian, Emmanuel

    2015-01-01

    We present the results of our investigations into options for the computing platform for the imaging pipeline in the CHILES project, an ultra-deep HI pathfinder for the era of the Square Kilometre Array. CHILES pushes the current computing infrastructure to its limits and understanding how to deliver the images from this project is clarifying the Science Data Processing requirements for the SKA. We have tested three platforms: a moderately sized cluster, a massive High Performance Computing (HPC) system, and the Amazon Web Services (AWS) cloud computing platform. We have used well-established tools for data reduction and performance measurement to investigate the behaviour of these platforms for the complicated access patterns of real-life Radio Astronomy data reduction. All of these platforms have strengths and weaknesses and the system tools allow us to identify and evaluate them in a quantitative manner. With the insights from these tests we are able to complete the imaging pipeline processing on both the ...

  7. Efficient Approach for Load Balancing in Virtual Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Harvinder singh

    2014-10-01

    Full Text Available Cloud computing technology is changing the focus of the IT world and is becoming famous because of its great characteristics. Load balancing is one of the main challenges in cloud computing: distributing workloads across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources. Successful load balancing optimizes resource use, maximizes throughput, minimizes response time, and avoids overload. The objective of this paper is to propose an approach to scheduling algorithms that can maintain load balancing and provide improved strategies through efficient job scheduling and modified resource allocation techniques. The results discussed in this paper are based on the existing round robin, least connection, throttled load balance, and fastest response time scheduling algorithms, together with a newly proposed algorithm, fastest with least connection. The new algorithm improves overall response time and data centre processing time, and reduces cost, in comparison with the existing scheduling parameters.
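
    A toy sketch of the proposed "fastest with least connection" rule is given below: restrict the pool to the fastest-responding nodes, then break ties by active connection count. The data layout and the fastest-half cutoff are illustrative assumptions, not the paper's exact algorithm.

```python
def pick_node(nodes):
    """nodes: list of dicts with 'rtt' (seconds) and 'active' connections."""
    # Keep the fastest half (at least one node), then pick least-connected.
    fastest = sorted(nodes, key=lambda n: n["rtt"])[:max(1, len(nodes) // 2)]
    return min(fastest, key=lambda n: n["active"])

pool = [
    {"name": "vm-a", "rtt": 0.021, "active": 12},
    {"name": "vm-b", "rtt": 0.018, "active": 30},
    {"name": "vm-c", "rtt": 0.055, "active": 2},
]
chosen = pick_node(pool)
chosen["active"] += 1            # account for the newly assigned job
print(chosen["name"])
```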

  9. Computer vision and laser scanner road environment perception

    OpenAIRE

    García, Fernando; Ponz Vila, Aurelio; Martín Gómez, David; Escalera, Arturo de la; Armingol, José M.

    2014-01-01

    A data fusion procedure is presented to enhance classical Advanced Driver Assistance Systems (ADAS). The novel vehicle safety approach combines two classical sensors: computer vision and laser scanner. The laser scanner algorithm performs detection of vehicles and pedestrians based on pattern-matching algorithms. The computer vision approach is based on Haar-like features for vehicles and Histogram of Oriented Gradients (HOG) features for pedestrians. The high-level fusion procedure uses a Kalman Filter...
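
    The pedestrian branch can be approximated with OpenCV's pretrained HOG-plus-SVM people detector, as in the sketch below; the input file name and detection parameters are illustrative, and this is not the authors' trained model.

```python
import cv2

# Pretrained HOG + linear-SVM people detector shipped with OpenCV.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("road_scene.jpg")            # illustrative input frame
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

for (x, y, w, h), score in zip(boxes, weights):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)
```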

  10. Providing a computing environment for a high energy physics workshop

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, C.; Butler, J.; Carter, T.; DeMar, P.; Fagan, D.; Gibbons, R.; Grigaliunas, V.; Haibeck, M.; Haring, P.; Horvath, C.; Hughart, N.; Johnstad, H.; Jones, S.; Kreymer, A.; LeBrun, P.; Lego, A.; Leninger, M.; Loebel, L.; McNamara, S.; Nguyen, T.; Nicholls, J.; O' Reilly, C.; Pabrai, U.; Pfister, J.; Ritchie, D.; Roberts, L.; Sazama, C.; Wohlt, D. (Fermi National Accelerator Lab., Batavia, IL (USA)); Carven, R. (Wiscons

    1989-12-01

    Although computing facilities have been provided at conferences and workshops remote from the host institution for some years, the equipment provided has rarely been capable of providing for much more than simple editing and electronic mail. This report documents the effort involved in providing a local computing facility with world-wide networking capability for a physics workshop so that we and others can benefit from the knowledge gained through the experience.

  11. A hybrid multi-scale computational scheme for advection-diffusion-reaction equation

    Science.gov (United States)

    Karimi, S.; Nakshatrala, K. B.

    2016-12-01

    Simulation of transport and reaction processes in porous media and subsurface science has become more vital than ever. Over the past few decades, a variety of mathematical models and numerical methodologies for porous media simulations have been developed. As the demand for higher accuracy and validity of the models grows, the issue of disparate temporal and spatial scales becomes more problematic. The variety of reaction processes and the complexity of pore geometry pose a huge computational burden in a real-world or reservoir-scale simulation. Meanwhile, methods based on averaging or upscaling techniques do not provide reliable estimates of pore-scale processes. To overcome this problem, development of hybrid and multi-scale computational techniques is considered a promising approach. In these methods, pore-scale and continuum-scale models are combined; hence, a more reliable estimate of pore-scale processes is obtained without having to deal with the tremendous computational overhead of pore-scale methods. In this presentation, we propose a computational framework that allows coupling of the lattice Boltzmann method (for pore-scale simulation) and the finite element method (for continuum-scale simulation) for advection-diffusion-reaction equations. To capture events disparate in time and length scales, non-matching grids and time steps are allowed. Apart from application of this method to benchmark problems, multi-scale simulation of chemical reactions in porous media is also showcased.
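
    For reference, the advection-diffusion-reaction equation that both scales discretize is, in standard form,

```latex
\frac{\partial c}{\partial t} + \nabla \cdot (\mathbf{v}\, c)
  = \nabla \cdot \left( D\, \nabla c \right) + R(c)
```

    where c is the concentration, v the velocity field, D the diffusion tensor and R(c) the reaction term; the lattice Boltzmann method resolves this at the pore scale while the finite element method handles the continuum scale, coupled across the non-matching grids and time steps described above.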

  12. Grouping Based Job Scheduling Algorithm Using Priority Queue and Hybrid Algorithm in Grid Computing

    Directory of Open Access Journals (Sweden)

    Pinky Rosemarry

    2013-01-01

    Full Text Available Grid computing provides a computing platform in which heterogeneous computing resources, connected by a network across dynamic and geographically dispersed organizations, form a distributed high performance computing infrastructure. Grid computing solves complex computing problems across multiple machines and meets large-scale computational demands in a high performance computing environment. The main emphasis in grid computing is on resource management and the job scheduler. The goal of the job scheduler is to maximize resource utilization and minimize the processing time of the jobs. Existing approaches to grid scheduling do not give much emphasis to the processing-time performance of the grid scheduler; schedulers typically allocate resources to jobs using the first come, first served algorithm. In this paper, we provide an optimized algorithm for the scheduler's queue using various scheduling methods such as shortest job first, first in first out, and round robin. The job scheduling system is responsible for selecting the best suitable machines in a grid for user jobs. The management and scheduling system generates job schedules for each machine in the grid by taking static restrictions and dynamic parameters of jobs and machines into consideration. The main purpose of this paper is to develop an efficient job scheduling algorithm to maximize resource utilization and minimize the processing time of jobs. Queues can be optimized by using various scheduling algorithms depending upon the performance criteria to be improved, e.g. response time or throughput. The work has been done in MATLAB using the Parallel Computing Toolbox.
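
    A toy sketch of the grouping-plus-priority-queue idea follows: small jobs are packed together until a granularity threshold is reached, then groups are dispatched shortest-first from a heap. The granularity rule is a simplification of the paper's algorithm, and the runtimes are made up.

```python
import heapq

def group_jobs(jobs, granularity):
    """Pack small jobs together until each group reaches `granularity` (sec)."""
    groups, current, total = [], [], 0.0
    for job in sorted(jobs):
        current.append(job)
        total += job
        if total >= granularity:
            groups.append((total, current))
            current, total = [], 0.0
    if current:
        groups.append((total, current))
    return groups

def schedule(groups):
    """Dispatch grouped jobs shortest-first via a priority queue."""
    heap = list(groups)
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)

jobs = [3.0, 0.5, 1.2, 0.3, 7.0, 0.8]        # estimated runtimes (sec)
for runtime, members in schedule(group_jobs(jobs, granularity=2.0)):
    print(f"dispatch group {members} (total {runtime:.1f}s)")
```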

  13. Imaging SKA-scale data in three different computing environments

    Science.gov (United States)

    Dodson, R.; Vinsen, K.; Wu, C.; Popping, A.; Meyer, M.; Wicenec, A.; Quinn, P.; van Gorkom, J.; Momjian, E.

    2016-01-01

    We present the results of our investigations into options for the computing platform for the imaging pipeline in the CHILES project, an ultra-deep HI pathfinder for the era of the Square Kilometre Array. CHILES pushes the current computing infrastructure to its limits and understanding how to deliver the images from this project is clarifying the Science Data Processing requirements for the SKA. We have tested three platforms: a moderately sized cluster, a massive High Performance Computing (HPC) system, and the Amazon Web Services (AWS) cloud computing platform. We have used well-established tools for data reduction and performance measurement to investigate the behaviour of these platforms for the complicated access patterns of real-life Radio Astronomy data reduction. All of these platforms have strengths and weaknesses and the system tools allow us to identify and evaluate them in a quantitative manner. With the insights from these tests we are able to complete the imaging pipeline processing on both the HPC platform and also on the cloud computing platform, which paves the way for meeting big data challenges in the era of SKA in the field of Radio Astronomy. We discuss the implications that all similar projects will have to consider, in both performance and costs, to make recommendations for the planning of Radio Astronomy imaging workflows.

  14. Team-computer interfaces in complex task environments

    Energy Technology Data Exchange (ETDEWEB)

    Terranova, M.

    1990-09-01

    This research focused on the interfaces (media of information exchange) teams use to interact about the task at hand. This report is among the first to study human-system interfaces in which the human component is a team, and the system functions as part of the team. Two operators dynamically shared a simulated fluid flow process, coordinating control and failure detection responsibilities through computer-mediated communication. Different computer interfaces representing the same system information were used to affect the individual operators' mental models of the process. Communication was identified as the most critical variable, consequently future research is being designed to test effective modes of communication. The results have relevance for the development of team-computer interfaces in complex systems in which responsibility must be shared dynamically among all members of the operation.

  15. Demonstration of a semi-autonomous hybrid brain-machine interface using human intracranial EEG, eye tracking, and computer vision to control a robotic upper limb prosthetic.

    Science.gov (United States)

    McMullen, David P; Hotson, Guy; Katyal, Kapil D; Wester, Brock A; Fifer, Matthew S; McGee, Timothy G; Harris, Andrew; Johannes, Matthew S; Vogelstein, R Jacob; Ravitz, Alan D; Anderson, William S; Thakor, Nitish V; Crone, Nathan E

    2014-07-01

    To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracy, with system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs.

  16. Performance Assessment of OVERFLOW on Distributed Computing Environment

    Science.gov (United States)

    Djomehri, M. Jahed; Rizk, Yehia M.

    2000-01-01

    The aerodynamic computer code OVERFLOW, with a multi-zone overset grid feature, has been parallelized to enhance its performance on distributed and shared memory paradigms. Practical application benchmarks have been set up to assess the efficiency of the code's parallelism on high-performance architectures. The code's performance has also been tested in the context of the distributed computing paradigm on distant computer resources using the Information Power Grid (IPG) toolkit, Globus. Two parallel versions of the code, namely OVERFLOW-MPI and -MLP, have been developed around the natural coarse-grained parallelism inherent in a multi-zonal domain decomposition paradigm. The algorithm invokes a strategy that forms a number of groups, each consisting of a zone, a cluster of zones and/or a partition of a large zone. Each group can be thought of as a process with one or more threads assigned to it, and all groups run in parallel. The -MPI version of the code uses explicit message passing based on the standard MPI library for sending and receiving interzonal boundary data across processors. The -MLP version employs no message-passing paradigm; the boundary data are transferred through shared memory. The -MPI code is suited to both distributed and shared memory architectures, while the -MLP code can only be used on shared memory platforms. The IPG applications are implemented by the -MPI code using the Globus toolkit. While a computational task is distributed across multiple computer resources, the parallelism can be exploited on each resource alone. Performance studies are carried out on practical aerodynamic problems with complex geometries, consisting of 2.5 up to 33 million grid points and a large number of zonal blocks. The computations were executed primarily on SGI Origin 2000 multiprocessors and on the Cray T3E. OVERFLOW's IPG applications are carried out on NASA homogeneous metacomputing machines located at three sites, Ames, Langley and Glenn. Plans
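
    The interzonal boundary exchange in the -MPI version follows the standard halo-exchange pattern; a minimal mpi4py sketch of that pattern (not OVERFLOW's code) is shown below.

```python
# Run with e.g.: mpiexec -n 4 python halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

zone = np.full(10, float(rank))          # this rank's zone data
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

recv_l, recv_r = np.full(1, np.nan), np.full(1, np.nan)
# Swap one-cell boundary values with each neighbor in both directions.
comm.Sendrecv(zone[-1:], dest=right, recvbuf=recv_l, source=left)
comm.Sendrecv(zone[:1], dest=left, recvbuf=recv_r, source=right)
print(f"rank {rank}: left ghost {recv_l[0]}, right ghost {recv_r[0]}")
```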

  17. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    Directory of Open Access Journals (Sweden)

    Yanhui Li

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical, highly interrelated problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business, and much of the returned merchandise has no quality defects and can re-enter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing model that accounts for returns with no quality defects. To solve this NP-hard problem, an effective hybrid genetic-simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms a plain genetic algorithm in computing time, solution quality, and computational stability. The proposed model can help managers make sound decisions in an e-supply-chain environment.
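
    A minimal sketch of the general genetic/simulated-annealing hybrid named above: a GA population whose offspring are accepted under a Metropolis rule with a cooling temperature. Operators and parameters are illustrative assumptions, not the authors' exact HGSAA:

    ```python
    import math, random

    def hgsaa(cost, init_pop, n_gen=200, t0=1.0, alpha=0.95):
        """Generic GA/SA hybrid: GA-style crossover and mutation, but a child
        replaces its parent under the simulated-annealing Metropolis rule."""
        pop, temp = [list(p) for p in init_pop], t0
        for _ in range(n_gen):
            for i, parent in enumerate(pop):
                mate = random.choice(pop)
                cut = random.randrange(1, len(parent))
                child = parent[:cut] + mate[cut:]      # one-point crossover
                j = random.randrange(len(child))
                child[j] += random.gauss(0.0, 0.1)     # Gaussian mutation
                delta = cost(child) - cost(parent)
                if delta < 0 or random.random() < math.exp(-delta / temp):
                    pop[i] = child                     # Metropolis acceptance
            temp *= alpha                              # geometric cooling
        return min(pop, key=cost)

    # Toy usage: minimise a quadratic over 5 real-valued genes.
    best = hgsaa(lambda x: sum(v * v for v in x),
                 init_pop=[[random.uniform(-5, 5) for _ in range(5)]
                           for _ in range(20)])
    print(best)
    ```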

  18. A hybrid genetic-simulated annealing algorithm for the location-inventory-routing problem considering returns under e-supply chain environment.

    Science.gov (United States)

    Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical, highly interrelated problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business, and much of the returned merchandise has no quality defects and can re-enter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing model that accounts for returns with no quality defects. To solve this NP-hard problem, an effective hybrid genetic-simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms a plain genetic algorithm in computing time, solution quality, and computational stability. The proposed model can help managers make sound decisions in an e-supply-chain environment.

  19. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    Science.gov (United States)

    Guo, Hao; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical, highly interrelated problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business, and much of the returned merchandise has no quality defects and can re-enter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing model that accounts for returns with no quality defects. To solve this NP-hard problem, an effective hybrid genetic-simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms a plain genetic algorithm in computing time, solution quality, and computational stability. The proposed model can help managers make sound decisions in an e-supply-chain environment. PMID:24489489

  20. Creating the Environment for the Prosperity of Cloud Computing Technology

    Directory of Open Access Journals (Sweden)

    Wen-Hsing Lai

    2012-08-01

    The key to the success of cloud computing technology in practice is whether its operation can produce a sense of trustworthiness in its users. Technical measures have always been the fundamental precaution for producing that sense of trustworthiness. Beyond technical measures, two developing issues surrounding this central idea deserve notice: the protection of information privacy and the question of jurisdiction. The main purpose of this article is to examine the protection of information privacy and the jurisdictional problems raised by the newly developed cloud computing technology. The article first introduces the characteristics of cloud computing technology in order to pave the way for further discussion. It then discusses the protection of information privacy and the jurisdictional problem in light of the disparity in legal protection of information privacy and the principles for asserting jurisdiction on the Internet. Personal observations and suggestions are offered at the end of the article for possible future adjustments to the infrastructure for information privacy protection and jurisdictional decisions in cyberspace, in order to promote users' trust in cloud computing technology.

  1. Music Teachers' Experiences in One-to-One Computing Environments

    Science.gov (United States)

    Dorfman, Jay

    2016-01-01

    Ubiquitous computing scenarios such as the one-to-one model, in which every student is issued a device that is to be used across all subjects, have increased in popularity and have shown both positive and negative influences on education. Music teachers in schools that adopt one-to-one models may be inadequately equipped to integrate this kind of…

  2. Cloud Computing E-Communication Services in the University Environment

    Science.gov (United States)

    Babin, Ron; Halilovic, Branka

    2017-01-01

    The use of cloud computing services has grown dramatically in post-secondary institutions in the last decade. In particular, universities have been attracted to the low-cost and flexibility of acquiring cloud software services from Google, Microsoft and others, to implement e-mail, calendar and document management and other basic office software.…

  3. Hybrid Simulation Environment for Construction Projects: Identification of System Design Criteria

    Directory of Open Access Journals (Sweden)

    Mohamed Moussa

    2014-01-01

    Large construction projects are complex, dynamic, and unpredictable. They are subject to external and uncontrollable events that affect their schedule and financial outcomes. Project managers take decisions throughout the lifecycle of a project to keep it aligned with its objectives. These decisions are data-dependent, and the data change over time. Simulation-based modeling and experimentation in such a dynamic environment are a challenge, and modeling large projects or multiprojects is difficult and impractical on standalone computers. This paper presents the criteria required of a simulation environment suitable for modeling large and complex systems such as construction projects to support their lifecycle management. Also presented is a platform that encompasses the identified criteria. The objective of the platform is to facilitate and simplify the simulation and modeling process and to enable the inclusion of complexity in simulation models.

  4. Resource-Efficient, Hierarchical Auto-Tuning of a Hybrid Lattice Boltzmann Computation on the Cray XT4

    OpenAIRE

    Williams, Samuel; Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; NERSC, Lawrence Berkeley National Laboratory; Computer Science Department, University of California, Irvine, CA

    2009-01-01

    We apply auto-tuning to a hybrid MPI-pthreads lattice Boltzmann computation running on the Cray XT4 at National Energy Research Scientific Computing Center (NERSC). Previous work showed that multicore-specific auto-tuning can improve the performance of lattice Boltzmann magnetohydrodynamics (LBMHD) by a factor of 4x when running on dual- and quad-core Opteron dual-socket SMPs. We extend these studies to the distributed memory arena via a hybrid MPI/pthreads implementation. In addition to con...
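
    A minimal sketch of the auto-tuning idea itself (benchmark every candidate configuration, keep the fastest); the toy kernel and the parameter grid are assumptions for illustration, not the LBMHD tuner:

    ```python
    import itertools, time

    def autotune(kernel, grid):
        """Run `kernel(**params)` for every point in the parameter grid and
        return the fastest configuration (exhaustive auto-tuning)."""
        best, best_t = None, float("inf")
        keys = sorted(grid)
        for values in itertools.product(*(grid[k] for k in keys)):
            params = dict(zip(keys, values))
            t0 = time.perf_counter()
            kernel(**params)                  # time one candidate code variant
            dt = time.perf_counter() - t0
            if dt < best_t:
                best, best_t = params, dt
        return best, best_t

    # Toy usage: pretend the block size changes how fast we sum a list.
    data = list(range(1_000_000))
    def kernel(block):
        s = 0
        for i in range(0, len(data), block):
            s += sum(data[i:i + block])

    print(autotune(kernel, {"block": [1024, 4096, 16384]}))
    ```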

  5. Detecting awareness in patients with disorders of consciousness using a hybrid brain-computer interface

    Science.gov (United States)

    Pan, Jiahui; Xie, Qiuyou; He, Yanbin; Wang, Fei; Di, Haibo; Laureys, Steven; Yu, Ronghao; Li, Yuanqing

    2014-10-01

    Objective. The bedside detection of potential awareness in patients with disorders of consciousness (DOC) currently relies only on behavioral observations and tests; however, the misdiagnosis rates in this patient group are historically relatively high. In this study, we proposed a visual hybrid brain-computer interface (BCI) combining P300 and steady-state evoked potential (SSVEP) responses to detect awareness in severely brain injured patients. Approach. Four healthy subjects, seven DOC patients who were in a vegetative state (VS, n = 4) or minimally conscious state (MCS, n = 3), and one locked-in syndrome (LIS) patient attempted a command-following experiment. In each experimental trial, two photos were presented to each patient; one was the patient's own photo, and the other photo was unfamiliar. The patients were instructed to focus on their own or the unfamiliar photos. The BCI system determined which photo the patient focused on with both P300 and SSVEP detections. Main results. Four healthy subjects, one of the 4 VS, one of the 3 MCS, and the LIS patient were able to selectively attend to their own or the unfamiliar photos (classification accuracy, 66-100%). Two additional patients (one VS and one MCS) failed to attend the unfamiliar photo (50-52%) but achieved significant accuracies for their own photo (64-68%). All other patients failed to show any significant response to commands (46-55%). Significance. Through the hybrid BCI system, command following was detected in four healthy subjects, two of 7 DOC patients, and one LIS patient. We suggest that the hybrid BCI system could be used as a supportive bedside tool to detect awareness in patients with DOC.
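
    A minimal sketch of how two detector outputs can be fused into one decision per trial, in the spirit of the P300 + SSVEP combination above; the scores, weighting, and function names are illustrative assumptions, not the authors' classifier:

    ```python
    import numpy as np

    def hybrid_decision(p300_scores, ssvep_scores, w=0.5):
        """Fuse per-photo P300 and SSVEP classifier scores; the photo with
        the highest weighted score is taken as the attended target.
        Both inputs are arrays of shape (n_photos,); higher = stronger."""
        fused = w * np.asarray(p300_scores) + (1 - w) * np.asarray(ssvep_scores)
        return int(np.argmax(fused)), fused

    # Toy trial: photo 0 = the patient's own photo, photo 1 = unfamiliar.
    target, fused = hybrid_decision([0.8, 0.3], [0.6, 0.5])
    print("attended photo:", target, "fused scores:", fused)
    ```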

  6. Social Knowledge Awareness Map for Computer Supported Ubiquitous Learning Environment

    Science.gov (United States)

    El-Bishouty, Moushir M.; Ogata, Hiroaki; Rahman, Samia; Yano, Yoneo

    2010-01-01

    Social networks are helpful for people to solve problems by providing useful information. Therefore, the importance of mobile social software for learning has been supported by many researches. In this research, a model of personalized collaborative ubiquitous learning environment is designed and implemented in order to support learners doing…

  8. Multiscale Computing with the Multiscale Modeling Library and Runtime Environment

    NARCIS (Netherlands)

    Borgdorff, J.; Mamonski, M.; Bosak, B.; Groen, D.; Ben Belgacem, M.; Kurowski, K.; Hoekstra, A.G.

    2013-01-01

    We introduce a software tool to simulate multiscale models: the Multiscale Coupling Library and Environment 2 (MUSCLE 2). MUSCLE 2 is a component-based modeling tool inspired by the multiscale modeling and simulation framework, with an easy-to-use API which supports Java, C++, C, and Fortran.

  9. Paper, Piles, and Computer Files: Folklore of Information Work Environments.

    Science.gov (United States)

    Neumann, Laura J.

    1999-01-01

    Reviews literature to form a folklore of information workspace and emphasizes the importance of studying folklore of information work environments in the context of the current shift toward removing work from any particular place via information systems, e-mail, and the Web. Discusses trends in workplace design and corporate culture. Contains 84…

  10. Near-term hybrid vehicle program, phase 1. Appendix B: Design trade-off studies report. Volume 3: Computer program listings

    Science.gov (United States)

    1979-01-01

    A description and listing of two computer programs are presented: the Hybrid Vehicle Design Program (HYVELD) and the Hybrid Vehicle Simulation Program (HYVEC). Both programs are modifications and extensions of similar programs developed as part of the Electric and Hybrid Vehicle System Research and Development Project.

  11. Adaptation of hybrid human-computer interaction systems using EEG error-related potentials.

    Science.gov (United States)

    Chavarriaga, Ricardo; Biasiucci, Andrea; Forster, Killian; Roggen, Daniel; Troster, Gerhard; Millan, Jose Del R

    2010-01-01

    Performance improvement in both humans and artificial systems strongly relies on the ability to recognize erroneous behavior or decisions. This paper, which builds upon previous studies on EEG error-related signals, presents a hybrid approach for human-computer interaction that uses human gestures to send commands to a computer and exploits brain activity to provide implicit feedback about the recognition of such commands. Using a simple computer game as a case study, we show that EEG activity evoked by erroneous gesture recognition can be classified in single trials above random levels. Automatic artifact rejection techniques are used, taking into account that subjects are allowed to move during the experiment. Moreover, we present a simple adaptation mechanism that uses the EEG signal to label newly acquired samples and can be used to re-calibrate the gesture recognition system in a supervised manner. Offline analyses show that, although the achieved EEG decoding accuracy is far from perfect, these signals convey sufficient information to significantly improve the overall system performance.
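
    A minimal sketch of the adaptation mechanism described above (execute the recognised command, use the error-related potential as an implicit label, and periodically retrain); the scikit-learn-style classifier objects and the batch size are assumptions, not the paper's implementation:

    ```python
    def adapt(gesture_clf, errp_detector, stream):
        """Self-labelling loop: after each recognised gesture, the EEG
        error-related potential decides whether the recognition was correct;
        correct trials become new labelled training samples."""
        X, y = [], []
        for features, eeg_after_feedback in stream:
            predicted = gesture_clf.predict([features])[0]          # command sent
            error = errp_detector.predict([eeg_after_feedback])[0]  # 1 = ErrP seen
            if not error:            # brain signal says the command was right
                X.append(features)
                y.append(predicted)  # implicit label, no manual annotation
            if len(X) >= 50:         # periodic supervised re-calibration
                gesture_clf.fit(X, y)   # a real system would merge with the
                X, y = [], []           # original training set as well
        return gesture_clf
    ```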

  12. Cluster implementation for parallel computation within MATLAB software environment

    Energy Technology Data Exchange (ETDEWEB)

    Santana, Antonio O. de; Dantas, Carlos C.; Charamba, Luiz G. da R.; Souza Neto, Wilson F. de; Melo, Silvio B. Melo; Lima, Emerson A. de O., E-mail: mailto.aos@ufpe.br, E-mail: ccd@ufpe.br, E-mail: sbm@cin.ufpe.br, E-mail: emathematics@gmail.com [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil)

    2013-07-01

    A cluster for parallel computation with the MATLAB software environment, the COCGT (Cluster for Optimizing Computing in Gamma ray Transmission methods), is implemented. The implementation corresponds to the creation of a local network of computers, the installation and configuration of software, and cluster tests to determine and optimize data-processing performance. The COCGT implementation was required for computing data from gamma transmission measurements applied to fluid dynamics and tomographic reconstruction in an FCC (Fluid Catalytic Cracking) cold pilot unit, as well as simulated data. As an initial test, the determination of the SVD (Singular Value Decomposition) of a random matrix of dimension (n, n), n = 1000, generated using a modified Girko's law, revealed that COCGT was faster than a similar cluster from the literature [1] operating under the same conditions. The solution of a system of linear equations provided a further test of COCGT performance: processing a square matrix with n = 10000 took 27 s, and a square matrix with n = 12000 took 45 s. To determine the cluster's behavior with respect to 'parfor' (parallel for-loop) and 'spmd' (single program, multiple data), two codes containing those commands were applied to the same problem: the SVD of a square matrix with n = 1000. Execution on COCGT showed that: 1) for the 'parfor' code, performance improved as the number of labs grew from 1 to 8; 2) for the 'spmd' code, a single lab (core) was enough to process and return results in less than 1 s. Similar runs, with the SVD determined from square matrices with n = 1500 for the 'parfor' code and n = 7000 for the 'spmd' code, led to the same conclusions.
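
    MATLAB's 'parfor' distributes independent loop iterations over worker labs. A rough Python analogue of the SVD timing test (an illustration under assumed sizes, not the COCGT code) uses a process pool:

    ```python
    import time
    import numpy as np
    from multiprocessing import Pool

    def svd_of_random(n):
        """One loop body: SVD of an (n, n) random matrix, as in the cluster test."""
        a = np.random.rand(n, n)
        return np.linalg.svd(a, compute_uv=False)[0]   # largest singular value

    if __name__ == "__main__":
        tasks = [1000] * 8                 # eight independent iterations
        for workers in (1, 2, 4, 8):       # analogue of varying the lab count
            t0 = time.perf_counter()
            with Pool(workers) as pool:    # analogue of MATLAB's parfor pool
                pool.map(svd_of_random, tasks)
            print(workers, "workers:", round(time.perf_counter() - t0, 2), "s")
    ```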

  13. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    Directory of Open Access Journals (Sweden)

    Lukas Falat

    2016-01-01

    This paper deals with the application of quantitative soft-computing prediction models in finance, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is intended to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making bad decisions in the decision-making process.
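
    A minimal sketch of the named combination (an RBF network fit to the series, with a moving average of its residuals added back as an error correction); the toy data, basis width, and window length are assumptions, not the authors' configuration:

    ```python
    import numpy as np

    def rbf_design(x, centers, width):
        """Gaussian RBF features for scalar inputs."""
        return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

    # Toy series and one-step-ahead pairs (x_t -> x_{t+1}).
    rng = np.random.default_rng(0)
    series = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.standard_normal(400)
    x, y = series[:-1], series[1:]

    centers = np.linspace(x.min(), x.max(), 10)
    phi = rbf_design(x, centers, width=0.3)
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)   # linear output weights

    pred = phi @ w
    resid = y - pred
    k = 5
    ma = np.convolve(resid, np.ones(k) / k, mode="same")  # moving average of errors
    hybrid = pred + ma                                    # error-corrected forecast
    print("RMSE rbf:", np.sqrt(np.mean(resid ** 2)),
          "hybrid:", np.sqrt(np.mean((y - hybrid) ** 2)))
    ```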

  14. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network.

    Science.gov (United States)

    Falat, Lukas; Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft-computing prediction models in finance, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is intended to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making bad decisions in the decision-making process.

  15. Requirements for Control Room Computer-Based Procedures for use in Hybrid Control Rooms

    Energy Technology Data Exchange (ETDEWEB)

    Le Blanc, Katya Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States); Oxstrand, Johanna Helene [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey Clark [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-05-01

    Many plants in the U.S. are currently undergoing control room modernization. The main drivers for modernization are the aging and obsolescence of existing equipment, which typically results in a like-for-like replacement of analog equipment with digital systems. However, the modernization efforts present an opportunity to employ advanced technology that would not only extend the life, but also enhance the efficiency and cost competitiveness, of nuclear power. Computer-based procedures (CBPs) are one example of near-term advanced technology that may provide efficiencies above and beyond like-for-like replacements of analog systems. Researchers in the LWRS program are investigating the benefits of advanced technologies such as CBPs, with the goal of assisting utilities in decision making during modernization projects. This report describes the existing research on CBPs, discusses the unique issues related to using CBPs in hybrid control rooms (i.e., partially modernized analog control rooms), and defines the requirements of CBPs for hybrid control rooms.

  16. Optimization of a Continuous Hybrid Impeller Mixer via Computational Fluid Dynamics

    Directory of Open Access Journals (Sweden)

    N. Othman

    2014-01-01

    This paper presents the preliminary steps required for conducting experiments to obtain the optimal operating conditions of a hybrid impeller mixer and to determine the residence time distribution (RTD) using computational fluid dynamics (CFD). In this paper, impeller speed and clearance parameters are examined. The hybrid impeller mixer consists of a single Rushton turbine mounted above a single pitched-blade turbine (PBT). Four impeller speeds (50, 100, 150, and 200 rpm) and four impeller clearances (25, 50, 75, and 100 mm) were the operating variables used in this study. CFD was utilized to initially screen the parameter ranges to reduce the number of actual experiments needed. Afterward, the RTD was determined using the respective parameters. Finally, the Fluent-predicted RTD and the experimentally measured RTD were compared. The CFD investigations revealed that an impeller speed of 50 rpm and an impeller clearance of 25 mm were not viable for experimental investigation and were thus eliminated from further analyses. The determination of the RTD was performed with CFD techniques using a k-ε turbulence model. The multiple reference frame (MRF) approach was implemented, and a steady state was achieved initially, followed by a transient condition for the RTD determination.
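
    The RTD is summarised by the normalised exit-age curve E(t); its first moment gives the mean residence time. A small worked example on a synthetic tracer curve (the data are illustrative, not the paper's measurements):

    ```python
    import numpy as np

    # Synthetic tracer concentration at the outlet (arbitrary units).
    t = np.linspace(0, 120, 241)                  # s
    c = t * np.exp(-t / 20.0)                     # toy pulse response

    E = c / np.trapz(c, t)                        # E(t): normalised RTD curve
    t_mean = np.trapz(t * E, t)                   # mean residence time (1st moment)
    var = np.trapz((t - t_mean) ** 2 * E, t)      # spread of residence times
    print(f"mean residence time = {t_mean:.1f} s, variance = {var:.1f} s^2")
    ```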

  17. SWNT-DNA and SWNT-polyC hybrids: AFM study and computer modeling.

    Science.gov (United States)

    Karachevtsev, M V; Lytvyn, O S; Stepanian, S G; Leontiev, V S; Adamowicz, L; Karachevtsev, V A

    2008-03-01

    Hybrids of single-walled carbon nanotubes (SWNT) with fragmented single- or double-stranded DNA (fss- or fds-DNA) or polyC were studied by Atomic Force Microscopy (AFM) and computer modeling. It was found that fragments of the polymer wrap in several layers around the nanotube, forming a strand-like spindle. In contrast to the fss-DNA, the fds-DNA also forms compact structures near the tube surface due to the formation of self-assembled structures consisting of a few DNA fragments. The hybrids of SWNT with wrapped single, double or triple strands of the biopolymer were simulated, and it was shown that such structures are stable. To explain the multi-layer polymeric coating of the nanotube surface, the energy of the intermolecular interactions between the different components of polyC, as well as the interaction energy in the SWNT-cytosine complex, was calculated at the MP2/6-31++G** level.

  18. Feasibility of a Hybrid Brain-Computer Interface for Advanced Functional Electrical Therapy

    Directory of Open Access Journals (Sweden)

    Andrej M. Savić

    2014-01-01

    We present a feasibility study of a novel hybrid brain-computer interface (BCI) system for advanced functional electrical therapy (FET) of grasp. The FET procedure is improved with both automated stimulation-pattern selection and stimulation triggering. The proposed hybrid BCI comprises two BCI control signals: steady-state visual evoked potentials (SSVEP) and event-related desynchronization (ERD). The sequence of the two stages, SSVEP-BCI and ERD-BCI, runs in a closed-loop architecture. The first stage, SSVEP-BCI, acts as a selector of the electrical stimulation pattern that corresponds to one of the three basic types of grasp: palmar, lateral, or precision. In the second stage, ERD-BCI operates as a brain switch which activates the stimulation pattern selected in the previous stage. The system was tested in 6 healthy subjects who were all able to control the device with accuracies in the range of 0.64–0.96. The results provide the reference data needed for the planned clinical study. This novel BCI may promote further restoration of impaired motor function by closing the loop between the “will to move” and contingent, temporally synchronized sensory feedback.

  19. A Hybrid Autonomic Computing-Based Approach to Distributed Constraint Satisfaction Problems

    Directory of Open Access Journals (Sweden)

    Abhishek Bhatia

    2015-03-01

    Distributed constraint satisfaction problems (DisCSPs) are among the problems widely studied using agent-based simulation. Fernandez et al. formulated the sensor and mobile tracking problem as a DisCSP, known as SensorDCSP. In this paper, we adopt a customized ERE (environment, reactive rules and entities) algorithm for the SensorDCSP, which is otherwise a computationally intractable problem. An amalgamation of the autonomy-oriented computing (AOC)-based algorithm (ERE) and a genetic algorithm (GA) provides an early solution of the modeled DisCSP. Incorporating the GA into ERE facilitates auto-tuning of the simulation parameters, thereby leading to an early solution of constraint satisfaction. This study further contributes a model, built in the NetLogo simulation environment, to demonstrate the efficacy of the proposed approach.

  20. Sentiment analysis and ontology engineering an environment of computational intelligence

    CERN Document Server

    Chen, Shyi-Ming

    2016-01-01

    This edited volume provides the reader with a fully updated, in-depth treatise on the emerging principles, conceptual underpinnings, algorithms and practice of Computational Intelligence in the realization of concepts and implementation of models of sentiment analysis and ontology-oriented engineering. The volume involves studies devoted to key issues of sentiment analysis, sentiment models, and ontology engineering. The book is structured into three main parts. The first part offers a comprehensive and prudently structured exposure to the fundamentals of sentiment analysis and natural language processing. The second part consists of studies devoted to the concepts, methodologies, and algorithmic developments elaborating on fuzzy linguistic aggregation to emotion analysis, carrying out interpretability of computational sentiment models, emotion classification, sentiment-oriented information retrieval, a methodology of adaptive dynamics in knowledge acquisition. The third part includes a plethora of applica...

  1. Single-Board-Computer-Based Traffic Generator for a Heterogeneous and Hybrid Smart Grid Communication Network

    Directory of Open Access Journals (Sweden)

    Do Nguyet Quang

    2014-02-01

    In smart grid communication implementations, the network traffic pattern is one of the main factors that affect the system's performance. Examining different traffic patterns in the smart grid is therefore crucial when analyzing network performance. Due to the heterogeneous and hybrid nature of the smart grid, the type of traffic distribution in the network is still unknown. The traffic models popularly used for simulation and analysis no longer reflect the real traffic in a multi-technology, bi-directional communication system. Hence, in this study, a single-board computer is implemented as a traffic generator which can generate network traffic similar to that generated by various applications in a fully operational smart grid. Placed at strategic and appropriate positions, a collection of such traffic generators allows network administrators to investigate and test the effect of heavy traffic on the performance of the smart grid communication system.
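
    A minimal sketch of a software traffic generator (UDP datagrams with exponential inter-arrival times, i.e. a Poisson packet process); the destination address, rate, and the Poisson model are assumptions for illustration:

    ```python
    import random, socket, time

    def generate(dest=("127.0.0.1", 9000), rate_pps=50.0, payload=256, n=1000):
        """Send n UDP datagrams with exponential inter-arrival times,
        i.e. a Poisson packet process at rate_pps packets per second."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        data = bytes(payload)                         # dummy payload of fixed size
        for _ in range(n):
            sock.sendto(data, dest)
            time.sleep(random.expovariate(rate_pps))  # Poisson arrivals

    if __name__ == "__main__":
        generate(n=100)   # small smoke test against a local sink
    ```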

  2. Creating Virtual Experiences in Computer-Mediated Environments

    OpenAIRE

    Lisa Klein

    2002-01-01

    Although much excitement has arisen over the potential for "interactivity" on the Web, very little is understood about what exactly creates a sense of interactivity and what impact it has on user behavior. Businesses are spending millions of dollars to add interactivity to their Web sites, in the form of games, animated pictures, and personalization tools, without knowing exactly what impact this has on their customers. In this research, the critical components of this computer-mediated inter...

  3. Personal Semantic Web Through A Space Based Computing Environment

    CERN Document Server

    Oliver, Ian

    2008-01-01

    The Semantic Web, through its supporting technologies, aims at a canonical representation of information, presented to users in a way whose meaning can be understood, or at least communicated and interpreted, by all parties. As the Semantic Web evolves into more of a computing platform rather than an information platform, more dynamic structures, interactions and behaviours will evolve, leading to systems which localise and personalise this Dynamic Semantic Web.

  4. Cost Optimization of Cloud Computing Services in a Networked Environment

    Directory of Open Access Journals (Sweden)

    Eli WEINTRAUB

    2015-04-01

    Cloud computing service providers offer their customers services so as to maximize their revenues, whereas customers wish to minimize their costs. In this paper we concentrate on the consumers' point of view. Cloud computing services are organized according to a hierarchy of software application services, beneath them platform services, which in turn use infrastructure services. Providers currently offer software services as bundles which include the software, platform and infrastructure services. Providers also offer platform services bundled with infrastructure services. Bundling prevents customers from splitting their service purchases between a provider of software and a different provider of the underlying platform or infrastructure. This bundling policy is likely to change in the long run, since it contradicts economic competition theory, causes an unfair pricing model, and locks consumers in to specific service providers. In this paper we assume the existence of a free competitive market, in which consumers are free to switch their services among providers. We assume that free market competition will force vendors to adopt open standards, improve the quality of their services, and offer a large variety of cloud services in all layers. Our model is aimed at the potential customer who wishes to find the optimal combination of service providers that minimizes his costs. We propose three possible strategies for implementing the model in organizations, formulate the mathematical model, and illustrate its advantages compared to existing pricing practices used by cloud computing consumers.
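
    The consumer's problem can be read as choosing one provider per layer (software, platform, infrastructure) to minimise total cost subject to compatibility constraints. A brute-force sketch with made-up prices and a hypothetical constraint:

    ```python
    from itertools import product

    # Hypothetical per-layer price lists (provider -> monthly cost).
    software = {"S1": 120, "S2": 100}
    platform = {"P1": 80, "P2": 60}
    infra    = {"I1": 90, "I2": 110}

    def compatible(s, p, i):
        """Hypothetical constraint: S2 is certified only on platform P1."""
        return not (s == "S2" and p != "P1")

    best = min((s_c + p_c + i_c, s, p, i)
               for (s, s_c), (p, p_c), (i, i_c)
               in product(software.items(), platform.items(), infra.items())
               if compatible(s, p, i))
    print("cheapest feasible stack:", best)   # (cost, software, platform, infra)
    ```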

  5. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison; Zumao Chen; Temi Linjewile; Mike Maguire; Adel Sarofim; Connie Senior; Changguan Yang; Hong-Shig Shim

    2004-04-28

    This is the fourteenth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a Virtual Engineering-based framework for simulating the performance of Advanced Power Systems. Within the last quarter, good progress has been made on all aspects of the project. Software development efforts have focused primarily on completing a prototype detachable user interface for the framework and on integrating Carnegie Mellon University's IECM model core with the computational engine. In addition to this work, progress has been made on several other development and modeling tasks for the program. These include: (1) improvements to the infrastructure code of the computational engine, (2) enhancements to the model interfacing specifications, (3) additional development to increase the robustness of all framework components, (4) enhanced coupling of the computational and visualization engine components, (5) a series of detailed simulations studying the effects of gasifier inlet conditions on the heat flux to the gasifier injector, and (6) the creation of detailed plans for implementing models for mercury capture for both warm and cold gas cleanup.

  6. Advanced Scientific Computing Environment Team new scientific database management task. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Church, J.P.; Roberts, J.C.; Sims, R.N.; Smetana, A.O.; Westmoreland, B.W.

    1991-06-01

    The mission of the ASCENT Team is to continually keep pace with, evaluate, and select emerging computing technologies to define and implement prototypic scientific environments that maximize the ability of scientists and engineers to manage scientific data. These environments are to be implemented in a manner consistent with the site computing architecture and standards and NRTSC/SCS strategic plans for scientific computing. The major trends in computing hardware and software technology clearly indicate that the future "computer" will be a network environment comprising supercomputers, graphics boxes, mainframes, clusters, workstations, terminals, and microcomputers. This "network computer" will have an architecturally transparent operating system, allowing applications code to run on any box supplying the required computing resources. The environment will include a distributed database and database management system(s) permitting the use of relational, hierarchical, object-oriented, GIS, and other databases. Reaching this goal requires a stepwise progression from the present assemblage of monolithic applications codes running on disparate hardware platforms and operating systems. The first steps include converting from the existing JOSHUA system to a new J80 system that complies with modern language standards; developing a new J90 prototype to provide JOSHUA capabilities on Unix platforms; developing portable graphics tools to greatly facilitate preparation of input and interpretation of output; and extending "Jvv" concepts and capabilities to distributed and/or parallel computing environments.

  7. An Efficient Framework for EEG Analysis with Application to Hybrid Brain Computer Interfaces Based on Motor Imagery and P300

    Directory of Open Access Journals (Sweden)

    Jinyi Long

    2017-01-01

    The hybrid brain-computer interface (BCI) based on motor imagery (MI) and P300 has been a preferred strategy for improving detection performance by combining the features of each modality. However, current methods for combining these two modalities optimize them separately, which does not yield optimal performance. Here, we present an efficient framework to optimize them together by concatenating the features of MI and P300 in a block-diagonal form. A linear classifier under a dual spectral norm regularizer is then applied to the combined features. Under this framework, the hybrid features of MI and P300 can be learned, selected, and combined together directly. Experimental results on a dataset from a hybrid BCI based on MI and P300 illustrate the competitive performance of the proposed method against other conventional methods. This provides evidence that the method contributes to the discrimination performance of the brain state in hybrid BCIs.
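
    A minimal sketch of the block-diagonal feature combination (scipy's block_diag per trial, then a linear classifier); ordinary logistic regression stands in for the paper's dual-spectral-norm-regularised classifier, and all sizes and data are toy assumptions:

    ```python
    import numpy as np
    from scipy.linalg import block_diag
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_trials = 100
    mi   = rng.standard_normal((n_trials, 8))    # motor-imagery features per trial
    p300 = rng.standard_normal((n_trials, 12))   # P300 features per trial
    y = rng.integers(0, 2, n_trials)             # toy labels

    # Block-diagonal combination: each trial's feature matrix is
    # diag(mi_i, p300_i); flattening keeps the two modalities in
    # disjoint blocks so the classifier can weight them jointly.
    X = np.stack([block_diag(mi[i][None, :], p300[i][None, :]).ravel()
                  for i in range(n_trials)])

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```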

  8. An Efficient Framework for EEG Analysis with Application to Hybrid Brain Computer Interfaces Based on Motor Imagery and P300

    Science.gov (United States)

    Wang, Jue; Yu, Tianyou

    2017-01-01

    The hybrid brain-computer interface (BCI) based on motor imagery (MI) and P300 has been a preferred strategy for improving detection performance by combining the features of each modality. However, current methods for combining these two modalities optimize them separately, which does not yield optimal performance. Here, we present an efficient framework to optimize them together by concatenating the features of MI and P300 in a block-diagonal form. A linear classifier under a dual spectral norm regularizer is then applied to the combined features. Under this framework, the hybrid features of MI and P300 can be learned, selected, and combined together directly. Experimental results on a dataset from a hybrid BCI based on MI and P300 illustrate the competitive performance of the proposed method against other conventional methods. This provides evidence that the method contributes to the discrimination performance of the brain state in hybrid BCIs. PMID:28316617

  9. A hybrid method for the computation of quasi-3D seismograms.

    Science.gov (United States)

    Masson, Yder; Romanowicz, Barbara

    2013-04-01

    The development of powerful computer clusters and efficient numerical computation methods, such as the Spectral Element Method (SEM), has made possible the computation of seismic wave propagation in a heterogeneous 3D earth. However, the cost of these computations is still problematic for global-scale tomography, which requires hundreds of such simulations. Part of the ongoing research effort is dedicated to the development of faster modeling methods based on the spectral element method. Capdeville et al. (2002) proposed to couple SEM simulations with normal-mode calculations (C-SEM). Nissen-Meyer et al. (2007) used 2D SEM simulations to compute 3D seismograms in a 1D earth model. Thanks to these developments, and for the first time, Lekic et al. (2011) developed a 3D global model of the upper mantle using SEM simulations. At the local and continental scale, adjoint tomography, which uses many SEM simulations, can be implemented on current computers (Tape, Liu et al. 2009). Due to their smaller size, these models offer higher resolution and provide us with images of the crust and the upper part of the mantle. In an attempt to teleport such local adjoint tomographic inversions into the deep earth, we are developing a hybrid method in which SEM computations are limited to a region of interest within the earth. That region can have an arbitrary shape and size. Outside this region, the seismic wavefield is extrapolated to obtain synthetic data at the Earth's surface. A key feature of the method is the use of a time-reversal mirror to inject the wavefield induced by a distant seismic source into the region of interest (Robertsson and Chapman 2000). We compute synthetic seismograms as follows: inside the region of interest, we use the regional spectral element software RegSEM to compute wave propagation in 3D; outside this region, the wavefield is extrapolated to the surface by convolution with the Green's functions from the mirror to the seismic stations.

  10. A hybrid search algorithm for swarm robots searching in an unknown environment.

    Science.gov (United States)

    Li, Shoutao; Li, Lina; Lee, Gordon; Zhang, Hao

    2014-01-01

    This paper proposes a novel method to improve the efficiency of a swarm of robots searching in an unknown environment. The approach focuses on the process of feeding and individual coordination characteristics inspired by the foraging behavior in nature. A predatory strategy was used for searching; hence, this hybrid approach integrated a random search technique with a dynamic particle swarm optimization (DPSO) search algorithm. If a search robot could not find any target information, it used a random search algorithm for a global search. If the robot found any target information in a region, the DPSO search algorithm was used for a local search. This particle swarm optimization search algorithm is dynamic as all the parameters in the algorithm are refreshed synchronously through a communication mechanism until the robots find the target position, after which, the robots fall back to a random searching mode. Thus, in this searching strategy, the robots alternated between two searching algorithms until the whole area was covered. During the searching process, the robots used a local communication mechanism to share map information and DPSO parameters to reduce the communication burden and overcome hardware limitations. If the search area is very large, search efficiency may be greatly reduced if only one robot searches an entire region given the limited resources available and time constraints. In this research we divided the entire search area into several subregions, selected a target utility function to determine which subregion should be initially searched and thereby reduced the residence time of the target to improve search efficiency.
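
    A minimal sketch of the switching logic described above (random walk while no target information exists, dynamic PSO updates once some robot has sensed the target); the constants and the sensing interface are illustrative assumptions, not the paper's controller:

    ```python
    import random

    def step(robot, swarm_best, sensed, w=0.7, c1=1.5, c2=1.5):
        """One motion update for the hybrid search.
        robot: dict with 'pos', 'vel', 'best' (2-D tuples);
        swarm_best: best target-signal position shared so far, or None."""
        if not sensed or swarm_best is None:
            # Global phase: random walk while no target information exists.
            robot["vel"] = (random.uniform(-1, 1), random.uniform(-1, 1))
        else:
            # Local phase: dynamic PSO update toward personal and swarm bests.
            r1, r2 = random.random(), random.random()
            robot["vel"] = tuple(
                w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
                for v, x, pb, gb in zip(robot["vel"], robot["pos"],
                                        robot["best"], swarm_best))
        robot["pos"] = tuple(x + v for x, v in zip(robot["pos"], robot["vel"]))
        return robot
    ```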

  11. Communicator Style as a Predictor of Cyberbullying in a Hybrid Learning Environment

    Directory of Open Access Journals (Sweden)

    Özcan Özgür Dursun

    2012-07-01

    This study aimed to describe the characteristics of undergraduate students in a hybrid learning environment with regard to their communicator styles and cyberbullying behaviors. Moreover, relationships between cyberbullying victimization and learners' perceived communicator styles were investigated. Cyberbullying victimization was measured through a recently developed 28-item scale with a single-factor structure, whereas communicator styles were measured through Norton's (1983) scale, which was recently validated in Turkey. Participants were a total of 59 undergraduate Turkish students enrolled in an effective communication course in the 2010 spring and fall semesters. Face-to-face instruction was supported through Web 2.0 tools, where learners hid their real identities behind nicknames. Participants used personal blogs in addition to the official online platform of the course, and their posts on these platforms were used as the source of the qualitative data. Descriptive analyses were followed by the investigation of qualitative and quantitative interrelationships between the cyberbullying variable and the components of the communicator style measure. Correlations between victimization and communicator style variables were not significant. However, qualitative analysis revealed that cyberbullying instances varied with regard to discussion topics, the nature of the discussions, and communicator styles. Example patterns from the log files are presented, accompanied by suggestions for further implementations.

  12. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison; Zumao Chen; Mike Maguire; Adel Sarofim; Changguan Yang; Hong-Shig Shim

    2004-01-28

    This is the thirteenth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a Virtual Engineering-based framework for simulating the performance of Advanced Power Systems. Within the last quarter, good progress has been made on all aspects of the project. Software development efforts have focused on a preliminary detailed software design for the enhanced framework. Given the complexity of the individual software tools from each team (i.e., Reaction Engineering International, Carnegie Mellon University, Iowa State University), a robust, extensible design is required for the success of the project. In addition to achieving a preliminary software design, significant progress has been made on several development tasks for the program. These include: (1) the enhancement of the controller user interface to support detachment from the Computational Engine and support for multiple computer platforms, (2) modification of the Iowa State University interface-to-kernel communication mechanisms to meet the requirements of the new software design, (3) decoupling of the Carnegie Mellon University computational models from their parent IECM (Integrated Environmental Control Model) user interface for integration with the new framework and (4) development of a new CORBA-based model interfacing specification. A benchmarking exercise to compare process and CFD based models for entrained flow gasifiers was completed. A summary of our work on intrinsic kinetics for modeling coal gasification has been completed. Plans for implementing soot and tar models into our entrained flow gasifier models are outlined. Plans for implementing a model for mercury capture based on conventional capture technology, but applied to an IGCC system, are outlined.

  13. A USER PROTECTION MODEL FOR THE TRUSTED COMPUTING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Marwan Ibrahim Alshar’e

    2014-01-01

    Information security presents a huge challenge for both individuals and organizations. The Trusted Computing Group (TCG) has introduced the Trusted Platform Module (TPM) as a solution to ensure end-users' privacy and confidentiality. The TPM acts as the root of trust for systems and users by providing protected storage that is accessible only within the TPM, thereby protecting computers against unwanted access. The TPM is designed to prevent software attacks, with minimal consideration given to physical attacks; it therefore relies on PIN/password identification to confirm the physical presence of a user. The PIN/password method is not an ideal user-verification method. Evil Maid is one attack in which a piece of code is loaded and hidden in the boot loader before the TPM is loaded; the code then collects confidential information at the next boot and stores it or sends it to attackers via the network. A number of solutions to this problem have been proposed, but most do not provide a sufficient level of protection to the TPM. In this study we introduce the TPM User Authentication Model (TPM-UAM), which could assist in protecting the TPM against physical attack and thus increase the security of the computer system. The proposed model has been evaluated through a focus group discussion with a panel of experts, who confirmed that the model is sufficient to provide the expected level of protection to the TPM and to assist in preventing physical attacks against it.

  14. Tablet computers and eBooks. Unlocking the potential for personal learning environments?

    NARCIS (Netherlands)

    Kalz, Marco

    2012-01-01

    Kalz, M. (2012, 9 May). Tablet computers and eBooks. Unlocking the potential for personal learning environments? Invited presentation during the annual conference of the European Association for Distance Learning (EADL), Noordwijkerhout, The Netherlands.

  16. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison; Connie Senior; Zumao Chen; Temi Linjewile; Adel Sarofim; Bene Risio

    2003-04-25

    This is the tenth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, good progress has been made on all aspects of the project. Calculations for a full Vision 21 plant configuration have been performed for two gasifier types. An improved process model for simulating entrained flow gasifiers has been implemented into the workbench. Model development has focused on: a pre-processor module to compute global gasification parameters from standard fuel properties and intrinsic rate information; a membrane based water gas shift; and reactors to oxidize fuel cell exhaust gas. The data visualization capabilities of the workbench have been extended by implementing the VTK visualization software that supports advanced visualization methods, including inexpensive Virtual Reality techniques. The ease-of-use, functionality and plug-and-play features of the workbench were highlighted through demonstrations of the workbench at a DOE sponsored coal utilization conference. A white paper has been completed that contains recommendations on the use of component architectures, model interface protocols and software frameworks for developing a Vision 21 plant simulator.

  17. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison; Connie Senior; Zumao Chen; Temi Linjewile; Adel Sarofim; Bene Risio

    2003-01-25

    This is the eighth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, good progress has been made on all aspects of the project. Calculations for a full Vision 21 plant configuration have been performed for two coal types and two gasifier types. Good agreement with DOE computed values has been obtained for the Vision 21 configuration under "baseline" conditions. Additional model verification has been performed for the flowing slag model that has been implemented into the CFD based gasifier model. Comparisons for the slag, wall and syngas conditions predicted by our model versus values from predictive models that have been published by other researchers show good agreement. The software infrastructure of the Vision 21 workbench has been modified to use a recently released, upgraded version of SCIRun.

  18. A Comparative Study of Load Balancing Algorithms in Cloud Computing Environment

    OpenAIRE

    Katyal, Mayanka; Mishra, Atul

    2014-01-01

    Cloud computing is a new trend emerging in the IT environment, with huge requirements for infrastructure and resources. Load balancing is an important aspect of a cloud computing environment: an efficient load balancing scheme ensures efficient resource utilization by provisioning resources to cloud users on demand in a pay-as-you-go manner. Load balancing may even support prioritizing users by applying appropriate scheduling criteria. This paper presents various load balancing schemes in differ...
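
    A toy comparison of two classic policies over the same job trace (round-robin versus least-loaded assignment); the job costs and VM count are made up for illustration:

    ```python
    import itertools, random

    random.seed(1)
    jobs = [random.randint(1, 10) for _ in range(30)]   # job costs
    N = 4                                               # number of VMs

    # Round-robin: cycle through VMs regardless of their current load.
    rr = [0] * N
    for vm, cost in zip(itertools.cycle(range(N)), jobs):
        rr[vm] += cost

    # Least-loaded: always assign to the currently lightest VM.
    ll = [0] * N
    for cost in jobs:
        ll[ll.index(min(ll))] += cost

    print("round-robin loads :", rr, "max =", max(rr))
    print("least-loaded loads:", ll, "max =", max(ll))
    ```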

  19. Bridging Theory and Practice: Developing Guidelines to Facilitate the Design of Computer-based Learning Environments

    Directory of Open Access Journals (Sweden)

    Lisa D. Young

    2003-10-01

    The design of computer-based learning environments has undergone a paradigm shift, moving students away from instruction that was considered to promote technical rationality grounded in objectivism, toward the application of computers to create cognitive tools utilized in constructivist environments. The goal of the resulting computer-based learning environment design principles is to have students learn with technology, rather than from technology. This paper reviews the general constructivist theory that has guided the development of these environments and offers suggestions for the adaptation of modest, generic guidelines (not mandated principles) that can be flexibly applied and allow for the expression of true constructivist ideals in online learning environments.

  20. Identifying the pitfalls for social interaction in computer-supported collaborative learning environments: a review of the research

    NARCIS (Netherlands)

    Kreijns, K.; Kirschner, P.A.; Jochems, W.

    2003-01-01

    Computer-mediated worldwide networks have enabled a shift from contiguous learning groups to asynchronous distributed learning groups utilizing computer-supported collaborative learning environments. Although these environments can support communication and collaboration, both research and field observations…

  1. COED Transactions, Vol. IX, No. 3, March 1977. Evaluation of a Complex Variable Using Analog/Hybrid Computation Techniques.

    Science.gov (United States)

    Marcovitz, Alan B., Ed.

    Described is the use of an analog/hybrid computer installation to study those physical phenomena that can be described through the evaluation of an algebraic function of a complex variable. This is an alternative way to study such phenomena on an interactive graphics terminal. The typical problem used, involving complex variables, is that of…

  2. The GOSTT concept and hybrid mixed/virtual/augmented reality environment radioguided surgery.

    Science.gov (United States)

    Valdés Olmos, R A; Vidal-Sicart, S; Giammarile, F; Zaknun, J J; Van Leeuwen, F W; Mariani, G

    2014-06-01

    The popularity gained by the sentinel lymph node (SLN) procedure in the last two decades has increased the interest of the surgical disciplines in other applications of radioguided surgery. An example is the gamma-probe-guided localization of occult or difficult-to-locate neoplastic lesions. Such guidance can be achieved by intralesional delivery (ultrasound, stereotaxis or CT) of a radiolabelled agent that remains accumulated at the site of the injection. Another possibility rests on the systemic administration of a tumour-seeking radiopharmaceutical with favourable tumour accumulation and retention. At the same time, new intraoperative imaging devices for radioguided surgery in complex anatomical areas have become available. All this led a few years ago to the delineation of the concept of Guided intraOperative Scintigraphic Tumour Targeting (GOSTT), which covers the whole spectrum of basic and advanced nuclear medicine procedures required for providing a roadmap that optimises surgery. The introduction of allied signatures using, e.g., hybrid tracers for simultaneous detection of the radioactive and fluorescent signals has amplified the GOSTT concept. It is now possible to combine perioperative nuclear medicine imaging with the superior resolution of additional optical guidance in the operating room. This hybrid approach is currently in progress and probably will become an important model to follow in the coming years. A cornerstone of the GOSTT concept is constituted by diagnostic imaging technologies like SPECT/CT. SPECT/CT was introduced halfway through the past decade and was immediately incorporated into the SLN procedure. Important reasons contributing to the success of SPECT/CT were its combination with lymphoscintigraphy and its ability to display SLNs in an anatomical environment. This latter aspect has been significantly improved in the new generation of SPECT/CT cameras and provides the basis for the novel mixed-reality protocols of image-guided surgery.

  3. Adaptive quantum computation in changing environments using projective simulation

    Science.gov (United States)

    Tiersch, M.; Ganahl, E. J.; Briegel, H. J.

    2015-08-01

    Quantum information processing devices need to be robust and stable against external noise and internal imperfections to ensure correct operation. In a setting of measurement-based quantum computation, we explore how an intelligent agent endowed with a projective simulator can act as controller to adapt measurement directions to an external stray field of unknown magnitude in a fixed direction. We assess the agent’s learning behavior in static and time-varying fields and explore composition strategies in the projective simulator to improve the agent’s performance. We demonstrate the applicability by correcting for stray fields in a measurement-based algorithm for Grover’s search. Thereby, we lay out a path for adaptive controllers based on intelligent agents for quantum information tasks.
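
    As a simplified stand-in for the adaptive-measurement idea (not projective simulation proper), an epsilon-greedy bandit can learn which correction angle cancels an unknown stray-field rotation; the reward model and all constants are assumptions for illustration:

    ```python
    import math, random

    TRUE_OFFSET = 0.42                     # unknown stray-field rotation (rad)
    ACTIONS = [i * math.pi / 32 for i in range(-16, 17)]  # candidate corrections

    def reward_prob(angle):
        """Probability of a correct measurement outcome given the correction."""
        return math.cos(angle - TRUE_OFFSET) ** 2

    values = {a: 0.0 for a in ACTIONS}     # estimated value of each action
    counts = {a: 0 for a in ACTIONS}
    for step in range(2000):
        a = (random.choice(ACTIONS) if random.random() < 0.1
             else max(values, key=values.get))           # epsilon-greedy choice
        r = 1.0 if random.random() < reward_prob(a) else 0.0  # stochastic outcome
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]         # incremental mean
    best = max(values, key=values.get)
    print(f"learned correction = {best:.3f} rad (true offset = {TRUE_OFFSET})")
    ```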

  4. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison; Connie Senior; Adel Sarofim; Bene Risio

    2002-07-28

    This is the seventh Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, good progress has been made on the development of the IGCC workbench. A series of parametric CFD simulations for single-stage and two-stage generic gasifier configurations have been performed. An advanced flowing slag model has been implemented into the CFD-based gasifier model. A literature review has been performed on published gasification kinetics. Reactor models have been developed and implemented into the workbench for the majority of the heat exchangers, the gas clean-up system and the power generation system for the Vision 21 reference configuration. Modifications to the software infrastructure of the workbench have commenced to allow interfacing the workbench to reactor models that utilize the CAPE-Open software interface protocol.

  5. Parallel Implementation of Classification Algorithms Based on Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Wenbo Wang

    2012-09-01

    As an important task of data mining, classification has received considerable attention in many applications, such as information retrieval and web searching. The growing volumes of information produced by technological progress and the growing individual needs for data mining make classifying very large datasets a challenging task. To deal with this problem, many researchers try to design efficient parallel classification algorithms. This paper briefly introduces classification algorithms and cloud computing, analyses the shortcomings of existing parallel classification algorithms on that basis, and then proposes a new model for parallel classification algorithms. It mainly introduces a parallel Naïve Bayes classification algorithm based on MapReduce, a simple yet powerful parallel programming technique. The experimental results demonstrate that the proposed algorithm improves on the original algorithm's performance and can process large datasets efficiently on commodity hardware.
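
    The reason Naïve Bayes parallelizes so cleanly under MapReduce is that training reduces to counting: mappers emit (class, feature) count pairs and reducers sum them. The Python sketch below simulates both phases in-process; the tiny dataset, the Laplace smoothing constant and the helper names are illustrative assumptions, not the paper's implementation.

      from math import log

      # Mappers emit ((class, feature), 1) plus a per-document counter;
      # reducers sum counts per key. Both phases run in-process here.
      def map_phase(records):
          for label, features in records:
              yield (label, "__doc__"), 1
              for f in features:
                  yield (label, f), 1

      def reduce_phase(pairs):
          counts = {}
          for key, value in pairs:
              counts[key] = counts.get(key, 0) + value
          return counts

      def classify(counts, labels, vocab_size, features, alpha=1.0):
          # Multinomial Naive Bayes scoring with Laplace smoothing.
          total_docs = sum(counts.get((l, "__doc__"), 0) for l in labels)
          best, best_score = None, float("-inf")
          for l in labels:
              n_docs = counts.get((l, "__doc__"), 0)
              n_words = sum(v for (lab, f), v in counts.items()
                            if lab == l and f != "__doc__")
              score = log(n_docs / total_docs)
              for f in features:
                  score += log((counts.get((l, f), 0) + alpha)
                               / (n_words + alpha * vocab_size))
              if score > best_score:
                  best, best_score = l, score
          return best

      train = [("spam", ["win", "money"]), ("spam", ["win", "prize"]),
               ("ham", ["meeting", "notes"]), ("ham", ["project", "notes"])]
      counts = reduce_phase(map_phase(train))
      print(classify(counts, ["spam", "ham"], 6, ["win", "prize"]))  # -> spam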

  6. Computer modeling for investigating the stress-strain state of beams with hybrid reinforcement

    Directory of Open Access Journals (Sweden)

    Rakhmonov Ahmadzhon Dzhamoliddinovich

    2014-01-01

    In this article the operation of a continuous double-span beam with hybrid reinforcement (steel and composite reinforcement) under the action of concentrated forces is considered. The stress-strain state of the structure is investigated with the help of computer modeling using a three-dimensional model. Five models of beams with different characteristics were studied. Based on the results of the numerical studies, data on the distribution of stresses and displacements in continuous beams are provided. The dependence of the stress-strain state on an increasing percentage of the top (composite) reinforcement and on a change in the concrete class is determined and presented in the article. Currently, interest in the use of composite reinforcement as working reinforcement of concrete structures in Russia has increased significantly, which is reflected in the growing number of scientific and practical publications devoted to the study of the properties and use of composite materials in construction, as well as in emerging draft documents for the design of such structures. One proposal for the application of basalt reinforcement is to use it in bending elements with combined reinforcement. For theoretical justification of the proposed reinforcement scheme and improvement of the calculation method, the authors conduct a study of the stress-strain state of continuous beams with the use of modern computing systems. The LIRA software package, used more often than other programs for stress-strain state analysis of concrete structures, was employed.

  7. Weighted Local Active Pixel Pattern (WLAPP) for Face Recognition in Parallel Computation Environment

    Directory of Open Access Journals (Sweden)

    Gundavarapu Mallikarjuna Rao

    2013-10-01

    The availability of multi-core technology has resulted in a totally new computational era. Researchers are keen to explore the potential available in state-of-the-art machines for breaking the barrier imposed by serial computation. Face recognition is a challenging application in any computational environment. The main difficulty of traditional face recognition algorithms is their lack of scalability. In this paper Weighted Local Active Pixel Pattern (WLAPP), a new scalable face recognition algorithm suitable for parallel environments, is proposed. Local Active Pixel Pattern (LAPP) is found to be simple and computationally inexpensive compared to Local Binary Patterns (LBP). WLAPP is developed based on the concept of LAPP. The experimentation is performed on the FG-Net Aging Database with deliberately introduced 20% distortion, and the results are encouraging. Keywords: active pixels, face recognition, Local Binary Pattern (LBP), Local Active Pixel Pattern (LAPP), pattern computing, parallel workers, template, weight computation.

  8. Hybrid hierarchical bio-based materials: Development and characterization through experimentation and computational simulations

    Science.gov (United States)

    Haq, Mahmoodul

    Environmentally friendly bio-based composites with improved properties can be obtained by harnessing the synergy offered by hybrid constituents such as multiscale (nano- and micro-scale) reinforcement in bio-based resins composed of blends of synthetic and natural resins. Bio-based composites have recently gained much attention due to their low cost, environmental appeal and their potential to compete with synthetic composites. The advantage of multiscale reinforcement is that it offers synergy at various length scales, and when combined with bio-based resins it provides a stiffness-toughness balance, improved thermal and barrier properties, and increased environmental appeal in the resulting composites. Moreover, these hybrid materials are tailorable in performance and in environmental impact. While different concepts of multiscale reinforcement have been studied for synthetic composites, the study of multiphase/multiscale reinforcements for developing new types of sustainable materials is limited. The research summarized in this dissertation focused on the development of multiscale reinforced bio-based composites and the effort to understand and exploit the synergy of their constituents through experimental characterization and computational simulations. Bio-based composites consisting of a petroleum-based resin (unsaturated polyester), natural or bio-resins (epoxidized soybean and linseed oils), natural fibers (industrial hemp), and nanosilicate (nanoclay) inclusions were developed. The work followed the "materials by design" philosophy by incorporating an integrated experimental and computational approach to strategically explore the design possibilities and limits. Experiments demonstrated that the drawbacks of bio-resin addition, which lowers stiffness and strength and increases permeability, can be counter-balanced through nanoclay reinforcement. Bio-resin addition yields benefits in impact strength and ductility. Conversely, nanoclay enhances stiffness

  9. Formative Questioning in Computer Learning Environments: A Course for Pre-Service Mathematics Teachers

    Science.gov (United States)

    Akkoç, Hatice

    2015-01-01

    This paper focuses on a specific aspect of formative assessment, namely questioning. Given that computers have gained widespread use in learning and teaching, specific attention should be paid to organizing formative assessment in computer learning environments (CLEs). A course including various workshops was designed to develop knowledge and…

  10. Distributed Parallel Computing Environment: MPI

    Institute of Scientific and Technical Information of China (English)

    王萃寒; 赵晨; 许小刚; 吴国新

    2003-01-01

    Message Passing Interface (MPI) is a network-based distributed parallel computing environment that has been widely used on parallel supercomputers and networks. This paper first describes the research background and development status of MPI. On this basis it then studies and analyzes the functions and features of MPI, summarizes its insufficiencies and gives some suggestions for improvement.
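
    As a concrete taste of the message-passing model that MPI standardizes, the hedged sketch below uses mpi4py (a Python binding to MPI) for blocking point-to-point communication between two ranks; the payload, tag and script name are arbitrary illustrative choices.

      # Run with e.g.: mpiexec -n 2 python demo.py  (assumes mpi4py installed)
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      if rank == 0:
          data = {"payload": list(range(5))}
          comm.send(data, dest=1, tag=11)      # blocking send to rank 1
          print("rank 0 sent", data)
      elif rank == 1:
          data = comm.recv(source=0, tag=11)   # blocking receive from rank 0
          print("rank 1 received", data)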

  11. A Client-Server Architecture for an Instructional Environment Based on Computer Networks and the Internet.

    Science.gov (United States)

    Guidon, Jacques; Pierre, Samuel

    1996-01-01

    Discusses the use of computers in education and training and proposes a client-server architecture for an experimental computer environment as an approach to a virtual classroom. Highlights include the World Wide Web and client software, document delivery, hardware architecture, and Internet resources and services. (Author/LRW)

  12. Examining Student Outcomes in University Computer Laboratory Environments: Issues for Educational Management

    Science.gov (United States)

    Newby, Michael; Marcoulides, Laura D.

    2008-01-01

    Purpose: The purpose of this paper is to model the relationship between student performance, student attitudes, and computer laboratory environments. Design/methodology/approach: Data were collected from 234 college students enrolled in courses that involved the use of a computer to solve problems and provided the laboratory experience by means of…

  13. Probing the local environment of hybrid materials designed from ionic liquids and synthetic clay by Raman spectroscopy

    Science.gov (United States)

    Siqueira, Leonardo J. A.; Constantino, Vera R. L.; Camilo, Fernanda F.; Torresi, Roberto M.; Temperini, Marcia L. A.; Ribeiro, Mauro C. C.; Izumi, Celly M. S.

    2014-03-01

    Hybrid organic-inorganic materials containing Laponite clay and ionic-liquid-forming cations have been prepared and characterized by FT-Raman spectroscopy, X-ray diffraction, and thermal analysis. The effect of varying the length of the alkyl side chain and the conformations of the cations has been investigated by using different ionic liquids based on piperidinium and imidazolium cations. The structure of the N,N-butyl-methyl-piperidinium cation and the assignment of its vibrational spectrum have been further elucidated by quantum chemistry calculations. The X-ray data indicate that the organic cations are intercalated parallel to the layers of the clay. Comparison of Raman spectra of pure ionic liquids with different anions and of the resulting solid hybrid materials in which the organic cations have been intercalated into the clay characterizes the local environment experienced by the cations in the hybrid materials. The Raman spectra of the hybrid materials suggest that the local environment of all confined cations, in spite of their diversity in properties, resembles the liquid state of ionic liquids with a relatively disordered structure.

  14. Touch in Computer-Mediated Environments: An Analysis of Online Shoppers' Touch-Interface User Experiences

    Science.gov (United States)

    Chung, Sorim

    2016-01-01

    Over the past few years, one of the most fundamental changes in current computer-mediated environments has been input devices, moving from mouse devices to touch interfaces. However, most studies of online retailing have not considered device environments as retail cues that could influence users' shopping behavior. In this research, I examine the…

  15. Educational Game Design. Bridging the gap between computer based learning and experimental learning environments

    DEFF Research Database (Denmark)

    Andersen, Kristine

    2007-01-01

    Considering the rapidly growing amount of digital educational materials, only a few of them bridge the gap between experimental learning environments and computer based learning environments (Gardner, 1991). Observations from two cases in primary school and lower secondary school in the subject

  16. Using a Cloud-Based Computing Environment to Support Teacher Training on Common Core Implementation

    Science.gov (United States)

    Robertson, Cory

    2013-01-01

    A cloud-based computing environment, Google Apps for Education (GAFE), has provided the Anaheim City School District (ACSD) a comprehensive and collaborative avenue for creating, sharing, and editing documents, calendars, and social networking communities. With this environment, teachers and district staff at ACSD are able to utilize the deep…

  19. Application of Selective Algorithm for Effective Resource Provisioning in Cloud Computing Environment

    OpenAIRE

    Katyal, Mayanka; Mishra, Atul

    2014-01-01

    The continued modern-day demand for resource-hungry services and applications in the IT sector has led to the development of cloud computing. A cloud computing environment involves high-cost infrastructure on the one hand and requires large-scale computational resources on the other. These resources need to be provisioned (allocated and scheduled) to end users in the most efficient manner, so that the tremendous capabilities of the cloud are utilized effectively and efficiently. In this paper we discuss a selecti...

  20. Maintaining Traceability in an Evolving Distributed Computing Environment

    Science.gov (United States)

    Collier, I.; Wartel, R.

    2015-12-01

    The management of risk is fundamental to the operation of any distributed computing infrastructure. Identifying the cause of incidents is essential to prevent them from re-occurring. In addition, it is a goal to contain the impact of an incident while keeping services operational. For the response to incidents to be acceptable, it needs to be commensurate with the scale of the problem. The minimum level of traceability for distributed computing infrastructure usage is to be able to identify the source of all actions (executables, file transfers, pilot jobs, portal jobs, etc.) and the individual who initiated them. In addition, sufficiently fine-grained controls, such as blocking the originating user and monitoring to detect abnormal behaviour, are necessary for keeping services operational. It is essential to be able to understand the cause and to fix any problems before re-enabling access for the user. The aim is to be able to answer the basic questions who, what, where, and when concerning any incident. This requires retaining all relevant information, including timestamps and the digital identity of the user, sufficient to identify, for each service instance, every security event, including at least the following: connect, authenticate, authorize (including identity changes) and disconnect. In traditional grid infrastructures (WLCG, EGI, OSG etc.) best practices and procedures for gathering and maintaining the information required to maintain traceability are well established. In particular, sites collect and store the information required to ensure traceability of events at their sites. With the increased use of virtualisation and private and public clouds for HEP workloads, established procedures, which are unable to see 'inside' running virtual machines, no longer capture all the information required. Maintaining traceability will at least involve a shift of responsibility from sites to Virtual Organisations (VOs), bringing with it new requirements for their

  1. Preparing Students for Success in Hybrid Learning Environments with Academic Resource Centers

    Science.gov (United States)

    Newman, Daniel; Dickinson, Michael

    2017-01-01

    This chapter describes institutional and andragogical best practices for preparing students to succeed in hybrid courses through the programming of academic resource centers, offers information on how to create peer support systems for students, and outlines some of the common pitfalls for students encountering a hybrid course for the first time.

  2. Research of Dependent Tasks Scheduling Algorithm in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    Chen Qing-Yi

    2016-01-01

    As the dependency relationships among tasks submitted by users in cloud computing resource scheduling models become stronger and stronger, it is worth studying how to optimize the scheduling strategy and algorithm to meet the different demands of users. In this article, the author first analyses the factors that affect the execution of an entire task set, and then proposes a new task scheduling model based on the original priority calculation method and the idea of redundant duplication of tasks. In the scheduling phase of the model, the execution results of all parent tasks of the subtask being executed are considered. The cost of communication between tasks is reduced by redundant duplication of tasks, so that the execution time of some subtasks can be advanced and the overall execution efficiency of the task set can be increased. A comparison of the space-time complexity of the proposed algorithm against contrasting algorithms when processing dependent tasks shows that subtask execution times can be advanced and the completion time of the whole task set can be cut down to a certain extent.
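
    The scheduling idea can be illustrated with a compact list scheduler: tasks are placed in priority (topological) order on the machine with the earliest finish time, and an edge's communication cost is paid only when parent and child land on different machines, which is exactly the overhead that redundant duplication of parent tasks targets. The DAG, costs and two-machine setup below are illustrative assumptions, not the paper's algorithm.

      # Hedged sketch of earliest-finish-time list scheduling for a task DAG.
      comp = {"A": 2, "B": 3, "C": 4, "D": 2}          # computation costs
      edges = {("A", "B"): 3, ("A", "C"): 1, ("B", "D"): 2, ("C", "D"): 3}
      parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
      order = ["A", "B", "C", "D"]                     # topological priority

      m = 2
      machine_free = [0.0] * m
      placed, finish = {}, {}
      for t in order:
          best = None                                  # (finish_time, machine)
          for mach in range(m):
              ready = machine_free[mach]
              for p in parents[t]:
                  # Communication delay only across machines; co-location
                  # (or duplication of the parent) removes it.
                  delay = 0 if placed[p] == mach else edges[(p, t)]
                  ready = max(ready, finish[p] + delay)
              eft = ready + comp[t]
              if best is None or eft < best[0]:
                  best = (eft, mach)
          finish[t], placed[t] = best
          machine_free[placed[t]] = finish[t]
      print(placed, finish)   # makespan = max(finish.values())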

  3. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison

    2002-01-31

    This is the fifth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, our efforts have become focused on developing an improved workbench for simulating a gasifier based Vision 21 energyplex. To provide for interoperability of models developed under Vision 21 and other DOE programs, discussions have been held with DOE and other organizations developing plant simulator tools to review the possibility of establishing a common software interface or protocol to use when developing component models. A component model that employs the CCA protocol has successfully been interfaced to our CCA-enabled workbench. To investigate the software protocol issue, DOE has selected a gasifier based Vision 21 energyplex configuration for use in testing and evaluating the impacts of different software interface methods. A Memorandum of Understanding with the Cooperative Research Centre for Coal in Sustainable Development (CCSD) in Australia has been completed that will enable collaborative research efforts on gasification issues. Preliminary results have been obtained for a CFD model of a pilot scale, entrained flow gasifier. A paper was presented at the Vision 21 Program Review Meeting at NETL (Morgantown) that summarized our accomplishments for Year One and plans for Year Two and Year Three.

  4. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB of major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected, large-scale, high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  5. Effect of Genetics, Environment, and Phenotype on the Metabolome of Maize Hybrids Using GC/MS and LC/MS.

    Science.gov (United States)

    Tang, Weijuan; Hazebroek, Jan; Zhong, Cathy; Harp, Teresa; Vlahakis, Chris; Baumhover, Brian; Asiago, Vincent

    2017-06-28

    We evaluated the variability of metabolites in various maize hybrids due to the effect of environment, genotype, phenotype as well as the interaction of the first two factors. We analyzed 480 forage and the same number of grain samples from 21 genetically diverse non-GM Pioneer brand maize hybrids, including some with drought tolerance and viral resistance phenotypes, grown at eight North American locations. As complementary platforms, both GC/MS and LC/MS were utilized to detect a wide diversity of metabolites. GC/MS revealed 166 and 137 metabolites in forage and grain samples, respectively, while LC/MS captured 1341 and 635 metabolites in forage and grain samples, respectively. Univariate and multivariate analyses were utilized to investigate the response of the maize metabolome to the environment, genotype, phenotype, and their interaction. Based on combined percentages from GC/MS and LC/MS datasets, the environment affected 36% to 84% of forage metabolites, while less than 7% were affected by genotype. The environment affected 12% to 90% of grain metabolites, whereas less than 27% were affected by genotype. Less than 10% and 11% of the metabolites were affected by phenotype in forage and grain, respectively. Unsupervised PCA and HCA analyses revealed similar trends, i.e., environmental effect was much stronger than genotype or phenotype effects. On the basis of comparisons of disease tolerant and disease susceptible hybrids, neither forage nor grain samples originating from different locations showed obvious phenotype effects. Our findings demonstrate that the combination of GC/MS and LC/MS based metabolite profiling followed by broad statistical analysis is an effective approach to identify the relative impact of environmental, genetic and phenotypic effects on the forage and grain composition of maize hybrids.

  6. Image selection as a service for cloud computing environments

    KAUST Repository

    Filepp, Robert

    2010-12-01

    Customers of Cloud Services are expected to choose specific machine images to instantiate in order to host their workloads. Unfortunately very little information is provided to the users to enable them to make intelligent choices. We believe that as the number of images proliferates it will become increasingly difficult for users to decide effectively. Cloud service providers often allow their customers to instantiate standard system images, to modify their instances, and to store images of these customized instances for public or private future use. Storing modified instances as images enables customers to avoid re-provisioning and re-configuration of required resources thereby reducing their future costs. However Cloud service providers generally do not expose details regarding the configurations of the images in a rigorous canonical fashion nor offer services that assist clients in the best target image selection to support client transformation objectives. Rather, they allow customers to enter a free-form description of an image based on the client's best effort. This means in order to find a "best fit" image to instantiate, a human user must review potentially thousands of image descriptions, reading each description to evaluate its suitability as a platform to host their source application. Furthermore, the actual content of the selected image may differ greatly from its description. Finally, even images that have been customized and retained for future use may need additional provisioning and customization to accommodate specific needs. In this paper we propose a service that accumulates image configuration details in a canonical fashion and a further service that employs an algorithm to order images per best fit/least cost in conformance to user-specified policies. These services collectively facilitate workload transformation into enterprise cloud environments.
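
    The proposed ordering service can be approximated by a simple score-and-sort: compute each image's fit against the user policy, filter by the cost constraint, and rank by best fit then least cost. The catalogue records, policy fields and fit metric in the sketch below are illustrative assumptions, not an actual provider catalogue or the paper's algorithm.

      # Hedged sketch: rank candidate images by best fit, then least cost.
      images = [
          {"name": "web-base",  "packages": {"nginx"},                   "cost": 0.10},
          {"name": "lamp",      "packages": {"apache", "mysql"},         "cost": 0.12},
          {"name": "full-lamp", "packages": {"apache", "mysql", "php"},  "cost": 0.20},
      ]
      policy = {"required": {"apache", "mysql", "php"}, "max_cost": 0.25}

      def fit(image, policy):
          # Fraction of the required configuration already in the image;
          # anything missing would need post-instantiation provisioning.
          return len(image["packages"] & policy["required"]) / len(policy["required"])

      candidates = [i for i in images if i["cost"] <= policy["max_cost"]]
      ranked = sorted(candidates, key=lambda i: (-fit(i, policy), i["cost"]))
      for i in ranked:
          print(i["name"], round(fit(i, policy), 2), i["cost"])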

  7. A PC/workstation cluster computing environment for reservoir engineering simulation applications

    Energy Technology Data Exchange (ETDEWEB)

    Hermes, C.E.; Koo, J. [Texaco Inc., Houston, TX (United States). Exploration and Production Technology Dept.

    1995-06-01

    Like the rest of the petroleum industry, Texaco has been transferring its applications and databases from mainframes to PCs and workstations. This transition has been very positive because it provides an environment for integrating applications, increases end-user productivity, and in general reduces overall computing costs. On the down side, the transition typically results in a dramatic increase in workstation purchases and raises concerns regarding the cost and effective management of computing resources in this new environment. The workstation transition also places the user in a Unix computing environment which, to say the least, can be quite frustrating to learn and to use. This paper describes the approach, philosophy, architecture, and current status of the new reservoir engineering/simulation computing environment developed at Texaco's E and P Technology Dept. (EPTD) in Houston. The environment is representative of those under development at several other large oil companies and is based on a cluster of IBM and Silicon Graphics Intl. (SGI) workstations connected by a fiber-optics communications network and engineering PCs connected to local area networks, or Ethernets. Because computing resources and software licenses are shared among a group of users, the new environment enables the company to get more out of its investments in workstation hardware and software.

  8. Urbancontext: A Management Model For Pervasive Environments In User-Oriented Urban Computing

    Directory of Open Access Journals (Sweden)

    Claudia L. Zuniga-Canon

    2014-01-01

    Nowadays, urban computing has gained a lot of interest for guiding the evolution of cities into intelligent environments. These environments are appropriate for individuals whose interactions and behaviors change, and these changes require new approaches that allow the understanding of how urban computing systems should be modeled. In this work we present UrbanContext, a new model for the design of urban computing platforms that applies the theory of roles to manage the individual's context in urban environments. The theory of roles helps to understand the individual's behavior within a social environment, allowing the modeling of urban computing systems able to adapt to individuals' states and their needs. UrbanContext collects data in urban atmospheres and classifies individuals' behaviors according to their changes of role, in order to optimize social interaction and offer secure services. Likewise, UrbanContext serves as a generic model to provide interoperability, and to facilitate the design, implementation and expansion of urban computing systems.

  9. Customized Architecture for Complex Routing Analysis: Case Study for the Convey Hybrid-Core Computer

    Science.gov (United States)

    2014-02-18

    circuits that can be reconfigured using a hardware description language such as Verilog. Current state... a Verilog-based design environment, is used to implement a custom-designed computer architecture, or

  10. HyCFS, a high-resolution shock capturing code for numerical simulation on hybrid computational clusters

    Science.gov (United States)

    Shershnev, Anton A.; Kudryavtsev, Alexey N.; Kashkovsky, Alexander V.; Khotyanovsky, Dmitry V.

    2016-10-01

    The present paper describes HyCFS code, developed for numerical simulation of compressible high-speed flows on hybrid CPU/GPU (Central Processing Unit / Graphical Processing Unit) computational clusters on the basis of full unsteady Navier-Stokes equations, using modern shock capturing high-order TVD (Total Variation Diminishing) and WENO (Weighted Essentially Non-Oscillatory) schemes on general curvilinear structured grids. We discuss the specific features of hybrid architecture and details of program implementation and present the results of code verification.
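
    To give a flavour of the shock-capturing ingredients named above, the sketch below implements a minmod-limited second-order TVD upwind scheme for 1D linear advection in Python/NumPy. The grid size, CFL number and step initial condition are illustrative assumptions; the actual HyCFS code of course solves the full Navier-Stokes equations with TVD/WENO schemes on curvilinear grids using CUDA kernels.

      import numpy as np

      def minmod(a, b):
          # Limited slope: zero at extrema, smallest magnitude otherwise.
          return np.where(a * b > 0,
                          np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

      n, adv, cfl = 200, 1.0, 0.8
      dx = 1.0 / n
      dt = cfl * dx / adv
      x = np.linspace(0.0, 1.0, n, endpoint=False)
      u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)        # step profile

      for _ in range(int(0.25 / dt)):
          s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slopes
          flux = adv * (u + 0.5 * (1.0 - cfl) * s)           # upwind, adv > 0
          u = u - dt / dx * (flux - np.roll(flux, 1))        # periodic update

      print("min/max after advection:", u.min(), u.max())    # stays in [0, 1]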

  11. Protection Against DDoS and Data Modification Attack in Computational Grid Cluster Environment

    Directory of Open Access Journals (Sweden)

    Basappa B. Kodada

    2012-07-01

    In the past decades, the focus of computation has shifted to high performance computing like grid computing and cloud computing. In grid computing, the grid server is responsible for managing all resources like processors, memory and CPU cycles. Grids are basically networks that pool resources, CPU cycles, storage or data from many different nodes used to solve complex or scientific problems. In this setting, however, security is a major concern, and most grid security research focuses on user authentication, authorization and secure communication. This paper presents DDoS and data modification attack scenarios and also provides solutions to prevent them. In the case of the data modification attack, it shows how easy it is to read/forward/modify the data exchanged between a cluster head node and computing nodes. The paper therefore provides a solution to protect the grid computing environment against data modification and DDoS attacks.

  12. A Scheme for Verification on Data Integrity in Mobile Multicloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Laicheng Cao

    2016-01-01

    In order to verify data integrity in mobile multicloud computing environments, a MMCDIV (mobile multicloud data integrity verification) scheme is proposed. First, computability and nondegeneracy of verification are obtained by adopting the BLS (Boneh-Lynn-Shacham) short signature scheme. Second, communication overhead is reduced based on HVR (Homomorphic Verifiable Response) with random masking and sMHT (sequence-enforced Merkle hash tree) construction. Finally, considering the resource constraints of mobile devices, data integrity is verified by lightweight computing and low data transmission. The scheme compensates for the limited communication and computing power of mobile devices, supports dynamic data operation in mobile multicloud environments, and verifies data integrity without using the direct source file block. Experimental results also demonstrate that this scheme can achieve a lower cost of computing and communications.
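
    A Merkle hash tree is the ingredient that lets a verifier check block-level integrity from a single stored digest. The sketch below builds a plain binary Merkle tree with SHA-256; it is not the paper's sequence-enforced sMHT variant and omits the BLS signature layer, and the block contents are illustrative.

      import hashlib

      def h(data: bytes) -> bytes:
          return hashlib.sha256(data).digest()

      def merkle_root(leaves):
          # Hash the leaves, then repeatedly hash adjacent pairs upward.
          level = [h(b) for b in leaves]
          while len(level) > 1:
              if len(level) % 2:                # duplicate last node if odd
                  level.append(level[-1])
              level = [h(level[i] + level[i + 1])
                       for i in range(0, len(level), 2)]
          return level[0]

      blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
      root = merkle_root(blocks)        # verifier stores only this digest
      blocks[2] = b"tampered"           # any modification changes the root
      assert merkle_root(blocks) != root
      print("tampering detected")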

  13. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    Science.gov (United States)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly increasing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors, as well as massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will utilize the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  14. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, Benny Manuel [Los Alamos National Laboratory]; Ballance, Robert [SNL]; Haskell, Karen [SNL]

    2012-08-09

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

  15. A visualization environment for supercomputing-based applications in computational mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Pavlakos, C.J.; Schoof, L.A.; Mareda, J.F.

    1993-06-01

    In this paper, we characterize a visualization environment that has been designed and prototyped for a large community of scientists and engineers, with an emphasis on supercomputing-based computational mechanics. The proposed environment makes use of a visualization server concept to provide effective, interactive visualization to the user's desktop. Benefits of using the visualization server approach are discussed. Some thoughts regarding desirable features for visualization server hardware architectures are also addressed. A brief discussion of the software environment is included. The paper concludes by summarizing certain observations which we have made regarding the implementation of such visualization environments.

  16. Potential of Hybrid Computational Phantoms for Retrospective Heart Dosimetry After Breast Radiation Therapy: A Feasibility Study

    Energy Technology Data Exchange (ETDEWEB)

    Moignier, Alexandra, E-mail: alexandra.moignier@irsn.fr [Institut de Radioprotection et de Surete Nucleaire, Fontenay-aux-Roses (France); Derreumaux, Sylvie; Broggio, David; Beurrier, Julien [Institut de Radioprotection et de Surete Nucleaire, Fontenay-aux-Roses (France); Chea, Michel; Boisserie, Gilbert [Groupe Hospitalier Pitie Salpetriere, Service de Radiotherapie, Paris (France); Franck, Didier; Aubert, Bernard [Institut de Radioprotection et de Surete Nucleaire, Fontenay-aux-Roses (France); Mazeron, Jean-Jacques [Groupe Hospitalier Pitie Salpetriere, Service de Radiotherapie, Paris (France)

    2013-02-01

    Purpose: Current retrospective cardiovascular dosimetry studies are based on a representative patient or simple mathematic phantoms. Here, a process of patient modeling was developed to personalize the anatomy of the thorax and to include a heart model with coronary arteries. Methods and Materials: The patient models were hybrid computational phantoms (HCPs) with an inserted detailed heart model. A computed tomography (CT) acquisition (pseudo-CT) was derived from HCP and imported into a treatment planning system where treatment conditions were reproduced. Six current patients were selected: 3 were modeled from their CT images (A patients) and the others were modeled from 2 orthogonal radiographs (B patients). The method's performance and limitations were investigated by quantitative comparison between the initial CT and the pseudo-CT, namely, the morphology and the dose calculation were compared. For the B patients, a comparison with 2 kinds of representative patients was also conducted. Finally, dose assessment was focused on the whole coronary artery tree and the left anterior descending coronary. Results: When 3-dimensional anatomic information was available, the dose calculations performed on the initial CT and the pseudo-CT were in good agreement. For the B patients, comparison of doses derived from HCP and representative patients showed that the HCP doses were either better or equivalent. In the left breast radiation therapy context and for the studied cases, coronary mean doses were at least 5-fold higher than heart mean doses. Conclusions: For retrospective dose studies, it is suggested that HCP offers a better surrogate, in terms of dose accuracy, than representative patients. The use of a detailed heart model eliminates the problem of identifying the coronaries on the patient's CT.

  17. Prediction of monthly regional groundwater levels through hybrid soft-computing techniques

    Science.gov (United States)

    Chang, Fi-John; Chang, Li-Chiu; Huang, Chien-Wei; Kao, I.-Feng

    2016-10-01

    Groundwater systems are intrinsically heterogeneous with dynamic temporal-spatial patterns, which cause great difficulty in quantifying their complex processes, while reliable predictions of regional groundwater levels are commonly needed for managing water resources to ensure proper service of water demands within a region. In this study, we proposed a novel and flexible soft-computing technique that could effectively extract the complex high-dimensional input-output patterns of basin-wide groundwater-aquifer systems in an adaptive manner. The soft-computing models combined the Self-Organizing Map (SOM) and the Nonlinear Autoregressive with Exogenous Inputs (NARX) network for predicting monthly regional groundwater levels based on hydrologic forcing data. The SOM could effectively classify the temporal-spatial patterns of regional groundwater levels, the NARX could accurately predict the mean of regional groundwater levels for adjusting the selected SOM, Kriging was used to interpolate the predictions of the adjusted SOM onto finer grids of locations, and consequently the prediction of a monthly regional groundwater level map could be obtained. The Zhuoshui River basin in Taiwan was the study case, and its monthly data sets collected from 203 groundwater stations, 32 rainfall stations and 6 flow stations between 2000 and 2013 were used for modelling purposes. The results demonstrated that the hybrid SOM-NARX model could reliably and suitably predict monthly basin-wide groundwater levels with high correlations (R2 > 0.9 in both training and testing cases). The proposed methodology presents a milestone in modelling regional environmental issues and offers an insightful and promising way to predict monthly basin-wide groundwater levels, which is beneficial to authorities for sustainable water resources management.
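
    The pattern-classification stage of the proposed pipeline rests on the standard SOM update: find the best-matching unit for an input and pull it and its grid neighbours toward that input with a decaying learning rate and radius. The NumPy sketch below shows that update on synthetic data; the map size, decay schedules and random "station" vectors are illustrative assumptions, and the NARX and Kriging stages are omitted.

      import numpy as np

      rng = np.random.default_rng(0)
      data = rng.normal(size=(500, 10))          # 500 patterns, 10 "stations"
      rows, cols = 4, 4
      w = rng.normal(size=(rows, cols, data.shape[1]))   # codebook vectors
      grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)

      n_iter, lr0, sigma0 = 3000, 0.5, 2.0
      for t in range(n_iter):
          x = data[rng.integers(len(data))]
          # Best-matching unit: node whose weight vector is closest to x.
          d = np.linalg.norm(w - x, axis=-1)
          bmu = np.unravel_index(np.argmin(d), d.shape)
          # Decay learning rate and neighbourhood radius over time.
          frac = t / n_iter
          lr = lr0 * (1.0 - frac)
          sigma = sigma0 * (1.0 - frac) + 0.5
          dist2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
          nb = np.exp(-dist2 / (2.0 * sigma ** 2))[..., None]
          w += lr * nb * (x - w)                 # pull neighbourhood toward x
      print("trained codebook shape:", w.shape)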

  18. Tools for Brain-Computer Interaction: a general concept for a hybrid BCI (hBCI)

    Directory of Open Access Journals (Sweden)

    Gernot R. Mueller-Putz

    2011-11-01

    The aim of this work is to present the development of a hybrid Brain-Computer Interface (hBCI) which combines existing input devices with a BCI. Thereby, the BCI should be available if the user wishes to extend the types of inputs available to an assistive technology system, but the user can also choose not to use the BCI at all; the BCI is active in the background. The hBCI might decide on the one hand which input channel(s) offer the most reliable signal(s) and switch between input channels to improve the information transfer rate, usability, or other factors, or on the other hand fuse various input channels. One major goal therefore is to bring BCI technology to a level where it can be used in a maximum number of scenarios in a simple way. To achieve this, it is of great importance that the hBCI is able to operate reliably for long periods, recognizing and adapting to changes as it does so. This goal is only possible if many different subsystems in the hBCI can work together. Since one research institute alone cannot provide such different functionality, collaboration between institutes is necessary. To allow for such a collaboration, a common software framework was investigated.

  19. sBCI-Headset—Wearable and Modular Device for Hybrid Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Tatsiana Malechka

    2015-02-01

    Severely disabled people, like completely paralyzed persons with tetraplegia or similar disabilities who cannot use their arms and hands, are often considered as a user group of Brain-Computer Interfaces (BCI). In order to achieve high acceptance of the BCI by this user group and their supporters, the BCI system has to be integrated into their support infrastructure. Critical disadvantages of a BCI are the time-consuming preparation of the user for the electroencephalography (EEG) measurements and the low information transfer rate of EEG-based BCI. These disadvantages become apparent if a BCI is used to control complex devices. In this paper, a hybrid BCI is described that enables research into a Human-Machine Interface (HMI) that is optimally adapted to the requirements of the user and the tasks to be carried out. The solution is based on the integration of a steady-state visual evoked potential (SSVEP) BCI, an event-related (de)synchronization (ERD/ERS) BCI, an eye tracker, an environmental observation camera, and a new EEG head cap for wearing comfort and easy preparation. The design of the new fast multimodal BCI (called sBCI) system is described and first test results, obtained in experiments with six healthy subjects, are presented. The sBCI concept may also become useful for healthy people in cases where a "hands-free" handling of devices is necessary.

  20. Computing resource trading models in hybrid cloud market (Computer Engineering and Applications, 2014, 50(18):25-32)

    Institute of Scientific and Technical Information of China (English)

    孙英华; 吴哲辉; 郭振波; 顾卫东

    2014-01-01

    A computing resource trading model named HCRM (Hybrid Cloud Resource Market) is proposed for hybrid cloud environments. The market structure, management layers and quality models of supply and demand are discussed. A quality-aware double auction algorithm named QaDA (Quality-aware Double Auction) is designed and simulated. Compared with the traditional CDA (Continuous Double Auction), the simulation results show that QaDA can not only guide reasonable pricing but also obtain a higher matching ratio and a higher total deal amount.
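
    A double auction of the kind described can be sketched in a few lines: sort buyers by descending bid and sellers by ascending ask, then match a pair only when the offered quality satisfies the buyer's requirement and the bid covers the ask. The bids, quality values and midpoint pricing rule below are illustrative assumptions, not the published QaDA algorithm.

      # Hedged sketch of one quality-aware double auction matching round.
      buyers = [("b1", 0.9, 0.8), ("b2", 0.7, 0.5), ("b3", 0.6, 0.9)]    # (id, bid, min_quality)
      sellers = [("s1", 0.5, 0.9), ("s2", 0.6, 0.6), ("s3", 0.8, 0.95)]  # (id, ask, quality)

      buyers.sort(key=lambda b: -b[1])      # highest bid first
      sellers.sort(key=lambda s: s[1])      # lowest ask first

      deals, used = [], set()
      for bid_id, bid, min_q in buyers:
          for ask_id, ask, q in sellers:
              if ask_id in used or q < min_q or bid < ask:
                  continue                  # quality or price not satisfied
              price = (bid + ask) / 2       # midpoint clearing price
              deals.append((bid_id, ask_id, round(price, 3), q))
              used.add(ask_id)
              break
      print(deals)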

  1. Impact of Hybrid Intelligent Computing in Identifying Constructive Weather Parameters for Modeling Effective Rainfall Prediction

    Directory of Open Access Journals (Sweden)

    M. Sudha

    2015-12-01

    An uncertain atmosphere is a prevalent factor affecting existing prediction approaches. Rough set and fuzzy set theories, as proposed by Pawlak and Zadeh, have become effective tools for handling vagueness and fuzziness in real-world scenarios. This research work describes the impact of a Hybrid Intelligent System (HIS) for strategic decision support in meteorology. In this research a novel exhaustive-search-based Rough Set reduct Selection using Genetic Algorithm (RSGA) is introduced to identify the significant input feature subset. The proposed model could identify the most effective weather parameters more efficiently than other existing input techniques. In the model evaluation phase two adaptive techniques were constructed and investigated. The proposed Artificial Neural Network based on Back Propagation learning (ANN-BP) and Adaptive Neuro-Fuzzy Inference System (ANFIS) were compared with the existing Fuzzy Unordered Rule Induction Algorithm (FURIA), Structural Learning Algorithm on Vague Environment (SLAVE) and Particle Swarm Optimization (PSO). The proposed rainfall prediction models performed best when trained with the input generated using RSGA. A meticulous comparison of the performance indicates the ANN-BP model as a suitable HIS for effective rainfall prediction. The ANN-BP achieved 97.46% accuracy with a nominal misclassification rate of 0.0254.
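
    Genetic-algorithm feature-subset selection of the kind RSGA performs can be sketched as evolution over bitmasks: selection keeps the fittest subsets, one-point crossover recombines them, and mutation flips feature bits. The fitness proxy below (class-mean separation minus a size penalty), the synthetic data and all GA constants are illustrative assumptions, not the rough-set reduct criterion used in the paper.

      import random

      random.seed(0)
      N_FEAT, POP, GENS = 8, 20, 40

      def sample(label):
          # Synthetic data: only features 1 and 4 carry class signal.
          x = [random.gauss(0, 1) for _ in range(N_FEAT)]
          x[1] += 2.0 * label
          x[4] -= 2.0 * label
          return x

      data = [(sample(l), l) for l in (0, 1) for _ in range(50)]

      def fitness(mask):
          idx = [i for i in range(N_FEAT) if mask[i]]
          if not idx:
              return -1.0
          def mean(label):
              rows = [x for x, l in data if l == label]
              return [sum(r[i] for r in rows) / len(rows) for i in idx]
          m0, m1 = mean(0), mean(1)
          sep = sum((p - q) ** 2 for p, q in zip(m0, m1)) ** 0.5
          return sep - 0.1 * len(idx)     # reward separation, penalize size

      pop = [[random.randint(0, 1) for _ in range(N_FEAT)] for _ in range(POP)]
      for _ in range(GENS):
          pop.sort(key=fitness, reverse=True)
          survivors = pop[:POP // 2]
          children = []
          while len(children) < POP - len(survivors):
              p1, p2 = random.sample(survivors, 2)
              cut = random.randrange(1, N_FEAT)
              child = p1[:cut] + p2[cut:]
              if random.random() < 0.2:   # mutation: flip one feature bit
                  j = random.randrange(N_FEAT)
                  child[j] ^= 1
              children.append(child)
          pop = survivors + children
      best = max(pop, key=fitness)
      print("selected features:", [i for i in range(N_FEAT) if best[i]])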

  2. Hybrid computational phantoms of the male and female newborn patient: NURBS-based whole-body models

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Choonsik [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States); Lodwick, Daniel [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States); Hasenauer, Deanna [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States); Williams, Jonathan L [Department of Radiology, University of Florida, Gainesville, FL 32611 (United States); Lee, Choonik [MD Anderson Cancer Center-Orlando, Orlando, FL 32806 (United States); Bolch, Wesley E [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States)

    2007-07-21

    Anthropomorphic computational phantoms are computer models of the human body for use in the evaluation of dose distributions resulting from either internal or external radiation sources. Currently, two classes of computational phantoms have been developed and widely utilized for organ dose assessment: (1) stylized phantoms and (2) voxel phantoms which describe the human anatomy via mathematical surface equations or 3D voxel matrices, respectively. Although stylized phantoms based on mathematical equations can be very flexible in regard to making changes in organ position and geometrical shape, they are limited in their ability to fully capture the anatomic complexities of human internal anatomy. In turn, voxel phantoms have been developed through image-based segmentation and correspondingly provide much better anatomical realism in comparison to simpler stylized phantoms. However, they themselves are limited in defining organs presented in low contrast within either magnetic resonance or computed tomography images, the two major sources in voxel phantom construction. By definition, voxel phantoms are typically constructed via segmentation of transaxial images, and thus while fine anatomic features are seen in this viewing plane, slice-to-slice discontinuities become apparent in viewing the anatomy of voxel phantoms in the sagittal or coronal planes. This study introduces the concept of a hybrid computational newborn phantom that takes full advantage of the best features of both its stylized and voxel counterparts: flexibility in phantom alterations and anatomic realism. Non-uniform rational B-spline (NURBS) surfaces, a mathematical modeling tool traditionally applied to graphical animation studies, was adopted to replace the limited mathematical surface equations of stylized phantoms. A previously developed whole-body voxel phantom of the newborn female was utilized as a realistic anatomical framework for hybrid phantom construction. The construction of a hybrid

  4. BEAM: A computational workflow system for managing and modeling material characterization data in HPC environments

    Energy Technology Data Exchange (ETDEWEB)

    Lingerfelt, Eric J [ORNL]; Endeve, Eirik [ORNL]; Ovchinnikov, Oleg S [ORNL]; Borreguero Calvo, Jose M [ORNL]; Park, Byung H [ORNL]; Archibald, Richard K [ORNL]; Symons, Christopher T [ORNL]; Kalinin, Sergei V [ORNL]; Messer, Bronson [ORNL]; Shankar, Mallikarjun [ORNL]; Jesse, Stephen [ORNL]

    2016-01-01

    Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales, many spectroscopic modes, and now with the rise of multimodal acquisition systems and the associated processing capability the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges best summarized as a necessity for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides material scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation via an intuitive, cross-platform client user interface. This framework delivers authenticated, push-button execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing the converged compute-and-data infrastructure at Oak Ridge National Laboratory's (ORNL) Compute and Data Environment for Science (CADES) and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF). In this work we address the underlying HPC needs for characterization in the material science community, elaborate how BEAM's design and infrastructure tackle those needs, and present a small sub-set of user cases where scientists utilized BEAM across a broad range of analytical techniques and analysis modes.

  5. Grid Computing: A Collaborative Approach in Distributed Environment for Achieving Parallel Performance and Better Resource Utilization

    Directory of Open Access Journals (Sweden)

    Sashi Tarun

    2011-01-01

    From the very beginning, various measures have been taken or considered for better utilization of the limited resources available in a computer system, because much of the time a system sits idle and cannot exploit its resources and capabilities as a whole, causing low performance. Parallel computing can work efficiently where operations are handled by multiple processors independently, all processing units working in parallel fashion and increasing system throughput without resource allocation problems among the different processing units. But this is limited and effective only within a single machine. In today's computing world, establishing and maintaining a high-speed computational work environment in a distributed scenario is a challenging task, because in this environment operations do not depend on single resources but on interaction with other resources in a vast network architecture. Current resource management systems can only work smoothly if they apply these resources within their clusters or local organizations, or distribute them among many users who need processing power; in a vast distributed environment, performing various operational activities is difficult because data is not physically maintained in a centralized location, it is geographically dispersed on multiple remote computer systems. Computers in the distributed environment have to depend on multiple resources for their task completion. Effective performance with high availability of resources to each computer in this speedy distributed computational environment is the major concern. To solve this problem a new approach is coined, called the "Grid Computing" environment. A grid uses middleware to coordinate disparate resources across a network, allowing users to function as a virtual whole and making computing fast. In this paper I want to

  6. Testing the hybrid-3D Hillslope Hydrological Model in a Real-World Controlled Environment

    Science.gov (United States)

    Hazenberg, P.; Broxton, P. D.; Gochis, D. J.; Niu, G. Y.; Pelletier, J. D.; Troch, P. A. A.; Zeng, X.

    2015-12-01

    Hillslopes play an important role for converting rainfall into runoff, and as such, influence the terrestrial dynamics of the Earth's climate system. Recently, we have developed a hybrid-3D (h3D) hillslope hydrological model that couples a 1D vertical soil column model with a lateral pseudo-2D saturated zone and overland flow model. The h3D model gives results similar to the CATchment HYdrological model (CATHY), which simulates the subsurface movement of water with the 3D Richards equation, though the runtime efficiency of the h3D model is about 2-3 orders of magnitude faster. In the current work, the ability of the h3D model to predict real-world hydrological dynamics is assessed using a number of recharge-drainage experiments within the Landscape Evolution Observatory (LEO) at the Biosphere 2 near Tucson, Arizona, USA. LEO offers accurate and high-resolution (both temporally and spatially) observations of the inputs, outputs and storage dynamics of several hillslopes. The level of detail of these observations is generally not possible with real-world hillslope studies. Therefore, LEO offers an optimal environment to test the h3D model. The h3D model captures the observed storage, baseflow, and overland flow dynamics of both a larger and a smaller hillslope. Furthermore, it simulates overland flow better than CATHY. The h3D model has difficulties correctly representing the height of the saturated zone close to the seepage face of the smaller hillslope, though. There is a gravel layer near this seepage face, and the numerical boundary condition of the h3D model is insufficient to capture the hydrological dynamics within this region. In addition, the h3D model is used to test the hypothesis that model parameters change through time due to the migration of soil particles during the recharge-drainage experiments. An in depth calibration of the h3D model parameters reveals that the best results are obtained by applying an event-based optimization procedure as compared

  7. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2015-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, MODIS, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS) and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. HySDS is a Hybrid-Cloud Science Data System that has been developed and applied under NASA AIST, MEaSUREs, and ACCESS grants. HySDS uses the SciFlow workflow engine to partition analysis workflows into parallel tasks (e.g. segmenting by time or space) that are pushed into a durable job queue. The tasks are "pulled" from the queue by worker Virtual Machines (VMs) and executed in an on-premise Cloud (Eucalyptus or OpenStack) or at Amazon in the public Cloud or GovCloud. In this way, years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the transferred data. We are using HySDS to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a MEaSUREs grant. We will present the architecture of HySDS, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. Our system demonstrates how one can pull A-Train variables (Levels 2 & 3) on demand into the Amazon Cloud, and cache only those variables that are heavily used, so that any number of compute jobs can be

  8. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment.

    Science.gov (United States)

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suited to meeting the computational needs of large tasks. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for heuristic methods. Several heuristic algorithms have been developed and used to address this problem, but choosing the appropriate algorithm for a task assignment problem of a particular nature is difficult, since the methods are developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments, with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
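
    To make the comparison concrete, the sketch below implements one of the six rules, Min-min, in Python: repeatedly pick the unscheduled task whose earliest possible completion time is smallest and assign it to the machine achieving that time. The execution-time matrix is a hypothetical illustration, not data from the study.

        # Minimal sketch of the Min-min scheduling heuristic (illustrative only).
        # eet[i][j] = estimated execution time of task i on machine j (hypothetical values).

        def min_min(eet):
            n_tasks, n_machines = len(eet), len(eet[0])
            ready = [0.0] * n_machines          # machine ready times
            unscheduled = set(range(n_tasks))
            schedule = {}                       # task -> (machine, completion time)
            while unscheduled:
                # For each remaining task, find its minimum completion time over all machines,
                # then take the task whose minimum is smallest.
                best = None
                for t in unscheduled:
                    for m in range(n_machines):
                        ct = ready[m] + eet[t][m]
                        if best is None or ct < best[2]:
                            best = (t, m, ct)
                t, m, ct = best
                schedule[t] = (m, ct)
                ready[m] = ct
                unscheduled.remove(t)
            return schedule, max(ready)         # schedule and its makespan

        if __name__ == "__main__":
            eet = [[14, 16, 9], [13, 19, 18], [11, 13, 19], [13, 8, 17]]
            schedule, makespan = min_min(eet)
            print(schedule, makespan)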

  9. Performance measurement and modeling of component applications in a high performance computing environment : a case study.

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, Robert C.; Ray, Jaideep; Malony, A. (University of Oregon, Eugene, OR); Shende, Sameer (University of Oregon, Eugene, OR); Trebon, Nicholas D.

    2003-11-01

    We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and to construct performance models for two of them. Both computational and message-passing performance are addressed.

  10. Dynamic Geometry Environments as a Tool for Computer Modeling in the System of Modern Mathematics Education

    Directory of Open Access Journals (Sweden)

    Rushan Ziatdinov

    2012-01-01

    Full Text Available This paper discusses a number of issues and problems associated with the use of computer models in the study of geometry at university, as well as in school mathematics, in order to improve its efficiency. We show that one efficient way to solve a number of problems in present-day mathematics education is to use the dynamic geometry environment GeoGebra. We also provide some examples of computer models created with GeoGebra.

  11. A mixed-methods exploration of an environment for learning computer programming

    OpenAIRE

    Mather, Richard

    2015-01-01

    A mixed-methods approach is evaluated for exploring collaborative behaviour, acceptance and progress surrounding an interactive technology for learning computer programming. A review of literature reveals a compelling case for using mixed-methods approaches when evaluating technology-enhanced-learning environments. Here, ethnographic approaches used for the requirements engineering of computing systems are combined with questionnaire-based feedback and skill tests. These are applied to ...

  12. Environment-sensitive manipulator control. [real time, decision making computer aided control

    Science.gov (United States)

    Bejczy, A. K.

    1974-01-01

    Environment-sensitive manipulator control (control systems capable of controlling manipulator motion based on real-time response to sensor data obtained during the attempt to perform a requested task) is described, and experiments on (1) proximity control in manipulation and (2) the application of an articulated and adaptively controlled hand to environment-sensitive manipulator control are reported. The efficiency of such systems is determined by the separation of control and data processing functions between the operator and the computer.

  13. Enhanced Survey and Proposal to secure the data in Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    MR.S.SUBBIAH

    2013-01-01

    Full Text Available Cloud computing has the power to eliminate the cost of setting up high-end computing infrastructure. It is a promising design offering a very flexible architecture, accessible through the internet. In the cloud computing environment, data may reside at any of the data centers. As a result, some data centers may leak the data stored in them beyond the reach and control of the users. To deal with such misbehaving data centers, service providers should take care of the security and privacy of the data stored in the data centers through the cloud computing environment. This survey paper elaborates and analyzes the various unresolved issues in the cloud computing environment and proposes an alternate method that can be useful to the various kinds of users who are willing to enter the new era of cloud computing. Moreover, this paper offers some suggestions in the areas of securing data while storing it on the cloud server, implementing new data displacement strategies, Service Level Agreements between the user and the Cloud Service Provider, and finally how to improve the Quality of Service.

  14. Numerical methodologies for investigation of moderate-velocity flow using a hybrid computational fluid dynamics - molecular dynamics simulation approach

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Soon Heum [Linkoeping University, Linkoeping (Sweden); Kim, Na Yong; Nikitopoulos, Dimitris E.; Moldovan, Dorel [Louisiana State University, Baton Rouge (United States); Jha, Shantenu [Rutgers University, Piscataway (United States)

    2014-01-15

    Numerical approaches are presented to minimize the statistical errors inherently present due to finite sampling and the presence of thermal fluctuations in the molecular region of a hybrid computational fluid dynamics (CFD) - molecular dynamics (MD) flow solution. Near the fluid-solid interface the hybrid CFD-MD simulation approach provides a more accurate solution, especially in the presence of significant molecular-level phenomena, than traditional continuum-based simulation techniques. It also involves less computational cost than pure particle-based MD. Despite these advantages, the hybrid CFD-MD methodology has been applied mostly in flow studies at high velocities, mainly because of the higher statistical errors associated with low velocities. As an alternative to the costly increase of the size of the MD region to decrease statistical errors, we investigate a few numerical approaches that reduce the sampling noise of the solution at moderate velocities. These methods are based on sampling of multiple simulation replicas and on linear regression of multiple spatial/temporal samples. We discuss the advantages and disadvantages of each technique from the perspective of solution accuracy and computational cost.

  15. Design and performance evaluation of dynamic wavelength scheduled hybrid WDM/TDM PON for distributed computing applications.

    Science.gov (United States)

    Zhu, Min; Guo, Wei; Xiao, Shilin; Dong, Yi; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2009-01-19

    This paper investigates the design and implementation of distributed computing applications in a local area network. We propose a novel dynamically wavelength-scheduled hybrid WDM/TDM passive optical network, termed DWS-HPON. The system is implemented using spectrum slicing of a broadband light source and an overlay broadcast-signaling scheme. The Time-Wavelength Co-Allocation (TWCA) problem is defined, and an effective greedy approach to this problem is presented for aggregating large files in distributed computing applications. The simulations demonstrate that performance is improved significantly compared with conventional TDM-over-WDM PONs.
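
    The abstract does not spell out the greedy TWCA procedure, so the following Python sketch shows one plausible reading under stated assumptions: treat each wavelength as a channel with a ready time and assign each file transfer to the channel that frees up earliest, taking larger files first. The file sizes, wavelength count and line rate are hypothetical.

        import heapq

        # Hypothetical sketch of a greedy time-wavelength co-allocation:
        # schedule file transfers onto the wavelength that becomes free earliest.

        def greedy_twca(file_sizes_mb, n_wavelengths, rate_mb_per_s):
            # Heap of (time the wavelength becomes free, wavelength id).
            free_at = [(0.0, w) for w in range(n_wavelengths)]
            heapq.heapify(free_at)
            plan = []
            # Scheduling the longest transfers first tends to balance finish times.
            for fid, size in sorted(enumerate(file_sizes_mb), key=lambda x: -x[1]):
                start, w = heapq.heappop(free_at)
                finish = start + size / rate_mb_per_s
                plan.append((fid, w, start, finish))
                heapq.heappush(free_at, (finish, w))
            return plan, max(f for _, _, _, f in plan)

        plan, total_time = greedy_twca([800, 120, 560, 430, 90],
                                       n_wavelengths=3, rate_mb_per_s=125.0)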

  16. Hybrid MPI/OpenMP parallelization of the explicit Volterra integral equation solver for multi-core computer architectures

    KAUST Repository

    Al Jarro, Ahmed

    2011-08-01

    A hybrid MPI/OpenMP scheme for efficiently parallelizing the explicit marching-on-in-time (MOT)-based solution of the time-domain volume (Volterra) integral equation (TD-VIE) is presented. The proposed scheme equally distributes the tested field values, and the operations pertinent to the computation of tested fields, among the nodes using the MPI standard, while the source field values are stored on all nodes. Within each node, the OpenMP standard is used to further accelerate the computation of the tested fields. Numerical results demonstrate that the proposed parallelization scheme scales well for problems involving three million or more spatial discretization elements. © 2011 IEEE.
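
    The two-level decomposition can be illustrated in Python, with mpi4py standing in for the MPI layer and a thread pool standing in for OpenMP (the paper's own code presumably uses native MPI/OpenMP; this is only a structural sketch, and compute_block is a hypothetical placeholder for the tested-field kernel).

        from concurrent.futures import ThreadPoolExecutor
        import numpy as np
        from mpi4py import MPI   # inter-node distribution layer (assumes mpi4py is installed)

        def compute_block(chunk, sources):
            # Placeholder for the tested-field computation on one chunk of rows.
            return chunk @ sources

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_total, n_src = 4096, 256
        sources = np.ones(n_src)                           # source fields replicated on all nodes
        local = np.random.rand(n_total // size, n_src)     # this node's share of tested fields

        # Intra-node acceleration: split the local block into thread-sized work units.
        chunks = np.array_split(local, 4)
        with ThreadPoolExecutor(max_workers=4) as pool:
            tested = np.concatenate(list(pool.map(lambda c: compute_block(c, sources), chunks)))

        # Collect per-node results on rank 0, mirroring the final MPI gather step.
        all_tested = comm.gather(tested, root=0)
        if rank == 0:
            total = np.concatenate(all_tested)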

  17. Reaction time for processing visual stimulus in a computer-assisted rehabilitation environment.

    Science.gov (United States)

    Sanchez, Yerly; Pinzon, David; Zheng, Bin

    2017-10-01

    To examine reaction time when human subjects process information presented in the visual channel under both direct vision and a virtual rehabilitation environment while walking. Visual stimuli comprised eight math problems displayed in the peripheral vision of seven healthy human subjects in a virtual rehabilitation training setting (computer-assisted rehabilitation environment (CAREN)) and a direct vision environment. Subjects were required to verbally report the results of these math calculations within a short period of time. Reaction time, measured by a Tobii eye tracker, and calculation accuracy were recorded and compared between the direct vision and virtual rehabilitation environments. Performance outcomes measured for both groups included reaction time, reading time, answering time and the verbal answer score. A significant difference between the groups was only found for the reaction time (p = .004). Participants had more difficulty recognizing the first equation in the virtual environment. Participants' reaction time was faster in the direct vision environment. This reaction time delay should be kept in mind when designing skill training scenarios in virtual environments. This was a pilot project for a series of studies assessing the cognitive ability of stroke patients who are undertaking a rehabilitation program with a virtual training environment. Implications for rehabilitation: Eye tracking is a reliable tool that can be employed in rehabilitation virtual environments. Reaction time changes between direct vision and virtual environments.

  18. The Needs of Virtual Machines Implementation in Private Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Edy Kristianto

    2015-12-01

    Full Text Available The Internet of Things (IoT) has become a focus of the development of information and communication technology. Cloud computing has a very important role in supporting the IoT, because cloud computing makes it possible to provide services in the form of infrastructure (IaaS), platform (PaaS), and software (SaaS) to its users. One of the fundamental services is Infrastructure as a Service (IaaS). This study analyzed the requirements, based on the NIST framework, for realizing infrastructure as a service in the form of virtual machines to be built in a private cloud computing environment.

  19. A Hybrid Computational Model to Explore the Topological Characteristics of Epithelial Tissues.

    Science.gov (United States)

    González-Valverde, Ismael; García Aznar, José Manuel

    2017-03-01

    Epithelial tissues show a particular topology where cells resemble a polygon-like shape, but some biological processes can alter this tissue topology. During cell proliferation, mitotic cell dilation deforms the tissue and modifies the tissue topology. Additionally, cells are reorganized in the epithelial layer and these rearrangements also alter the polygon distribution. We present here a computer-based hybrid framework focused on the simulation of epithelial layer dynamics that combines discrete and continuum numerical models. In this framework, we consider topological and mechanical aspects of the epithelial tissue. Individual cells in the tissue are simulated by an off-lattice agent-based model, which keeps the information of each cell. In addition, we model the cell-cell interaction forces and the cell cycle. In parallel, we simulate the passive mechanical behaviour of the cell monolayer using a material that approximates the mechanical properties of the cell. This continuum approach is solved by the finite element method, which uses a dynamic mesh generated by the triangulation of cell polygons. Forces generated by cell-cell interaction in the agent-based model are also applied on the finite element mesh. Cell movement in the agent-based model is driven by the displacements obtained from the deformed finite element mesh of the continuum mechanical approach. We successfully compare the results of our simulations with experiments on the topology of proliferating epithelial tissues in Drosophila. Our framework is able to model the emergent behaviour of the cell monolayer that is due to local cell-cell interactions, which have a direct influence on the dynamics of the epithelial tissue.

  20. A hybrid three-class brain-computer interface system utilizing SSSEPs and transient ERPs

    Science.gov (United States)

    Breitwieser, Christian; Pokorny, Christoph; Müller-Putz, Gernot R.

    2016-12-01

    Objective. This paper investigates the fusion of steady-state somatosensory evoked potentials (SSSEPs) and transient event-related potentials (tERPs), evoked through tactile stimulation of the left and right-hand fingertips, in a three-class EEG-based hybrid brain-computer interface. It was hypothesized that fusing the input signals leads to higher classification rates than classifying tERPs and SSSEPs individually. Approach. Fourteen subjects participated in the studies, which consisted of a screening paradigm to determine person-dependent resonance-like frequencies and a subsequent online paradigm. The whole setup of the BCI system was based on open interfaces, following suggestions for a common implementation platform. During the online experiment, subjects were instructed to focus their attention on the stimulated fingertips as indicated by a visual cue. The recorded data were classified at runtime using a multi-class shrinkage LDA classifier and the outputs were fused together by applying a posterior-probability-based fusion. Data were further analyzed offline, involving a combined classification of SSSEP and tERP features as a second fusion principle. The final results were tested for statistical significance by applying a repeated measures ANOVA. Main results. A significant classification increase was achieved when fusing the results with a combined classification compared to performing an individual classification. Furthermore, the SSSEP classifier was significantly better at detecting a non-control state, whereas the tERP classifier was significantly better at detecting control states. Subjects who had a higher relative band power increase during the screening session also achieved significantly higher classification results than subjects with lower relative band power increases. Significance. It could be shown that utilizing SSSEPs and tERPs for hBCIs increases the classification accuracy and also that tERPs and SSSEPs are not classifying control- and non
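
    A minimal sketch of the posterior-probability fusion step, assuming a product rule over conditionally independent classifier outputs (the study's exact weighting is not reproduced here):

        import numpy as np

        # Sketch of posterior-probability fusion of two classifiers (SSSEP and tERP).
        # The example posteriors are hypothetical; a product rule with renormalization
        # is one standard fusion choice under a conditional-independence assumption.

        def fuse_posteriors(p_sssep, p_terp):
            fused = np.asarray(p_sssep) * np.asarray(p_terp)
            return fused / fused.sum()

        p_sssep = [0.50, 0.30, 0.20]   # P(class | SSSEP features) for 3 classes
        p_terp  = [0.20, 0.60, 0.20]   # P(class | tERP features)
        fused = fuse_posteriors(p_sssep, p_terp)
        decision = int(np.argmax(fused))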

  1. Noise Threshold and Resource Cost of Fault-Tolerant Quantum Computing with Majorana Fermions in Hybrid Systems

    Science.gov (United States)

    Li, Ying

    2016-09-01

    Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.

  2. Dynamic modelling of an adsorption storage tank using a hybrid approach combining computational fluid dynamics and process simulation

    Science.gov (United States)

    Mota, J.P.B.; Esteves, I.A.A.C.; Rostam-Abadi, M.

    2004-01-01

    A computational fluid dynamics (CFD) software package has been coupled with the dynamic process simulator of an adsorption storage tank for methane fuelled vehicles. The two solvers run as independent processes and handle non-overlapping portions of the computational domain. The codes exchange data on the boundary interface of the two domains to ensure continuity of the solution and of its gradient. A software interface was developed to dynamically suspend and activate each process as necessary, and be responsible for data exchange and process synchronization. This hybrid computational tool has been successfully employed to accurately simulate the discharge of a new tank design and evaluate its performance. The case study presented here shows that CFD and process simulation are highly complementary computational tools, and that there are clear benefits to be gained from a close integration of the two. © 2004 Elsevier Ltd. All rights reserved.
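
    The coupling pattern described above can be sketched as a lockstep exchange loop. Both solver functions below are hypothetical stand-ins for the real CFD and process-simulation codes; only the synchronization and data-exchange structure is the point.

        # Hypothetical sketch of the co-simulation pattern: two solvers advance
        # non-overlapping domains and exchange interface data every time step.
        # cfd_step() and process_step() stand in for the real coupled codes.

        def cfd_step(state, boundary_from_process, dt):
            # Advance the CFD domain one step using the received interface values.
            return state + dt * (boundary_from_process - state)

        def process_step(state, boundary_from_cfd, dt):
            # Advance the process-simulation domain one step.
            return state + dt * (boundary_from_cfd - state)

        def couple(t_end, dt):
            cfd, proc = 300.0, 280.0        # e.g. interface temperatures (illustrative)
            t = 0.0
            while t < t_end:
                # Exchange boundary data first, then advance both domains in lockstep,
                # which keeps the solution and its gradient continuous at the interface.
                cfd_if, proc_if = cfd, proc
                cfd = cfd_step(cfd, proc_if, dt)
                proc = process_step(proc, cfd_if, dt)
                t += dt
            return cfd, proc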

  3. Beyond the photocopy machine: document delivery in a hybrid library environment

    NARCIS (Netherlands)

    Dekker, R.; Waaijers, L.

    2001-01-01

    Document delivery bridges the gap between where the customer is and where the document is. Libraries have to offer user-friendly access to hybrid collections, and design and implement document delivery mechanisms from paper originals to provide a seamless integration between delivery from electronic

  4. Learning Style, Sense of Community and Learning Effectiveness in Hybrid Learning Environment

    Science.gov (United States)

    Chen, Bryan H.; Chiou, Hua-Huei

    2014-01-01

    The purpose of this study is to investigate how hybrid learning instruction affects undergraduate students' learning outcome, satisfaction and sense of community. The other aim of the present study is to examine the relationship between students' learning style and learning conditions in mixed online and face-to-face courses. A quasi-experimental…

  5. Assessing the Therapeutic Environment in Hybrid Models of Treatment: Prisoner Perceptions of Staff

    Science.gov (United States)

    Kubiak, Sheryl Pimlott

    2009-01-01

    Hybrid treatment models within prisons are staffed by both criminal justice and treatment professionals. Because these models may be indicative of future trends, examining the perceptions of prisoners/participants may provide important information. This study examines the perceptions of male and female inmates in three prisons, comparing those in…

  7. Multi-objective approach for energy-aware workflow scheduling in cloud computing environments.

    Science.gov (United States)

    Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost, without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and to present a hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different voltage supply levels by sacrificing clock frequency; operating at multiple voltages involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.
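
    As an illustration of the energy/time trade-off that DVFS introduces, the sketch below runs a plain continuous PSO (not the paper's hybrid variant) over per-processor frequency factors, where lowering a frequency lengthens execution but reduces a quadratic dynamic-energy proxy. All weights, loads and PSO constants are hypothetical.

        import numpy as np

        # Illustrative continuous PSO minimizing a weighted time/energy objective
        # under a DVFS-style model: lowering a processor's frequency f lengthens
        # execution (load / f) but cuts dynamic energy (~ f^2 * load).

        rng = np.random.default_rng(0)
        loads = np.array([4.0, 2.5, 3.2, 1.8])        # per-processor workload (arbitrary units)
        w_time, w_energy = 0.6, 0.4

        def cost(f):
            time = np.max(loads / f)                  # makespan proxy
            energy = np.sum(f**2 * loads)             # DVFS dynamic-energy proxy
            return w_time * time + w_energy * energy

        n_particles, n_iters, dim = 20, 200, len(loads)
        x = rng.uniform(0.4, 1.0, (n_particles, dim)) # frequencies as fractions of max
        v = np.zeros_like(x)
        pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
        gbest = pbest[np.argmin(pbest_cost)].copy()

        for _ in range(n_iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = np.clip(x + v, 0.4, 1.0)
            c = np.array([cost(p) for p in x])
            improved = c < pbest_cost
            pbest[improved], pbest_cost[improved] = x[improved], c[improved]
            gbest = pbest[np.argmin(pbest_cost)].copy()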

  8. A Concept of a Hybrid WDM/TDM Topology Using the Fabry-Perot Laser in the Optiwave Simulation Environment

    Directory of Open Access Journals (Sweden)

    Jan Skapa

    2011-01-01

    Full Text Available The aim of this article is to point out possibilities for solving problems related to the concept of a flexible hybrid optical access network. The entire topology design was realized using the OPTIWAVE development environment, in which particular test measurements were carried out as well. In the following chapters we therefore focus in turn on the individual parts of the proposed topology and give the reasons for their functions, whilst the last part of the article presents the values measured in the topology and their overall evaluation.

  9. Encountering the Expertise Reversal Effect with a Computer-Based Environment on Electrical Circuit Analysis

    Science.gov (United States)

    Reisslein, Jana; Atkinson, Robert K.; Seeling, Patrick; Reisslein, Martin

    2006-01-01

    This study examined the effectiveness of a computer-based environment employing three example-based instructional procedures (example-problem, problem-example, and fading) to teach series and parallel electrical circuit analysis to learners classified by two levels of prior knowledge (low and high). Although no differences between the…

  10. Ecological Affordance and Anxiety in an Oral Asynchronous Computer-Mediated Environment

    Science.gov (United States)

    McNeil, Levi

    2014-01-01

    Previous research suggests that the affordances (van Lier, 2000) of asynchronous computer-mediated communication (ACMC) environments help reduce foreign language anxiety (FLA). However, FLA is rarely the focus of these studies and research has not adequately addressed the relationship between FLA and the affordances that students use. This study…

  11. The Use of Engineering Design Concept for Computer Programming Course: A Model of Blended Learning Environment

    Science.gov (United States)

    Tritrakan, Kasame; Kidrakarn, Pachoen; Asanok, Manit

    2016-01-01

    The aim of this research is to develop a learning model which blends factors from learning environment and engineering design concept for learning in computer programming course. The usage of the model was also analyzed. This study presents the design, implementation, and evaluation of the model. The research methodology is divided into three…

  12. Detecting and Understanding the Impact of Cognitive and Interpersonal Conflict in Computer Supported Collaborative Learning Environments

    Science.gov (United States)

    Prata, David Nadler; Baker, Ryan S. J. d.; Costa, Evandro d. B.; Rose, Carolyn P.; Cui, Yue; de Carvalho, Adriana M. J. B.

    2009-01-01

    This paper presents a model which can automatically detect a variety of student speech acts as students collaborate within a computer supported collaborative learning environment. In addition, an analysis is presented which gives substantial insight as to how students' learning is associated with students' speech acts, knowledge that will…

  13. Secondary School Students' Attitudes towards Mathematics Computer--Assisted Instruction Environment in Kenya

    Science.gov (United States)

    Mwei, Philip K.; Wando, Dave; Too, Jackson K.

    2012-01-01

    This paper reports the results of research conducted in six classes (Form IV) with 205 students with a sample of 94 respondents. Data represent students' statements that describe (a) the role of Mathematics teachers in a computer-assisted instruction (CAI) environment and (b) effectiveness of CAI in Mathematics instruction. The results indicated…

  14. Speeding-up MADYMO 3D on serial and parallel computers using a portable coding environment

    NARCIS (Netherlands)

    Tsiandikos, T.; Rooijackers, H.F.L.; Asperen, F.G.J. van; Lupker, H.A.

    1996-01-01

    This paper outlines the strategy and methodology used to create a portable coding environment for the commercial package MADYMO. The objective is to design a global data structure that efficiently utilises the memory and cache of computers, so that one source code can be used for serial, vector and

  15. A Mixed-Methods Exploration of an Environment for Learning Computer Programming

    Science.gov (United States)

    Mather, Richard

    2015-01-01

    A mixed-methods approach is evaluated for exploring collaborative behaviour, acceptance and progress surrounding an interactive technology for learning computer programming. A review of literature reveals a compelling case for using mixed-methods approaches when evaluating technology-enhanced-learning environments. Here, ethnographic approaches…

  16. Computer program determines thermal environment and temperature history of lunar orbiting space vehicles

    Science.gov (United States)

    Head, D. E.; Mitchell, K. L.

    1967-01-01

    Program computes the thermal environment of a spacecraft in a lunar orbit. The quantities determined include the incident flux /solar and lunar emitted radiation/, total radiation absorbed by a surface, and the resulting surface temperature as a function of time and orbital position.

  17. A virtual machine-based invasion detection system for the virtual computing environment

    Institute of Scientific and Technical Information of China (English)

    Zeng Yu; Wang Jie; Sun Ninghui; Li Jun; Nie Hua

    2006-01-01

    Under a virtualization approach based on large-scale decomposition and sharing, implementing the network interconnection of computing components and storage components through loose coupling (components that are tightly coupled in a traditional server) achieves application-level, on-demand distribution of computing capacity, storage capacity and service capacity. Under this new server model, the segregation and protection of user space and system space, as well as the security monitoring of virtual resources, are the key factors in the ultimate security guarantee. This article presents a large-scale, extensible, distributed intrusion detection system for the virtual computing environment based on virtual machines. The system supports security monitoring and management of global resources and provides a uniform view of security attacks in the virtual computing environment, thereby protecting user applications and system security in the capacity services domain.

  18. Architecture-Adaptive Computing Environment: A Tool for Teaching Parallel Programming

    Science.gov (United States)

    Dorband, John E.; Aburdene, Maurice F.

    2002-01-01

    Recently, networked and cluster computing have become very popular. This paper is an introduction to a new C-based parallel language for architecture-adaptive programming, aCe C. The primary purpose of aCe (Architecture-adaptive Computing Environment) is to encourage programmers to implement applications on parallel architectures by providing them the assurance that future architectures will be able to run their applications with a minimum of modification. A secondary purpose is to encourage computer architects to develop new types of architectures by providing an easily implemented software development environment and a library of test applications. This new language should be an ideal tool to teach parallel programming. In this paper, we focus on some fundamental features of aCe C.

  19. Improvement of uniformity in cultivation environment and crop growth rate by hybrid control of air flow devices

    Institute of Scientific and Technical Information of China (English)

    BAEK Min-Seon; KWON Sook-Youn; LIM Jae-Hyun

    2015-01-01

    A complete-control-type plant factory achieves high efficiency in terms of cultivation area by constructing vertically layered cultivation beds. However, it suffers from irregular crop growth due to temperature deviation between the upper and lower beds, and from increased energy consumption caused by a prolonged cultivation period. In this work, the air flow inside the facility was improved by hybrid control of air flow devices, namely the air conditioner and air-circulation fans, with an established wireless sensor network, to minimize temperature deviations between upper and lower beds and to promote crop growth. The performance of the proposed system was verified under two experimental conditions: Case A, wherein the air conditioning device was operated without a control algorithm, and Case B, wherein the air conditioning and circulation fans were alternately operated based on the hybrid control algorithm. After planting leafy vegetables under each experimental condition, crops were cultivated for 21 days. As a result, Case B, wherein the AC (air conditioning) and ACF (air-circulation fan) were alternately operated based on the hybrid control algorithm, showed that the fresh mass, number of leaves, and leaf length of the crops grown increased by 40.6%, 41.1%, and 11.1%, respectively, compared to Case A.
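
    The alternating control idea can be sketched as a simple decision rule, shown below in Python: run the circulation fans when the bed-to-bed temperature deviation is too large, otherwise let the air conditioner regulate the mean temperature. The thresholds and setpoint are hypothetical, not the study's values.

        # Hypothetical sketch of the alternating AC/ACF control: homogenize the air
        # when the upper/lower temperature gap is large, regulate otherwise.

        DEVIATION_LIMIT = 1.0      # max tolerated upper/lower temperature gap (deg C)
        SETPOINT = 22.0            # target mean air temperature (deg C)

        def control_step(t_upper, t_lower):
            deviation = abs(t_upper - t_lower)
            mean_t = (t_upper + t_lower) / 2.0
            if deviation > DEVIATION_LIMIT:
                return {"ac": False, "fan": True}    # mix the air between beds first
            if abs(mean_t - SETPOINT) > 0.5:
                return {"ac": True, "fan": False}    # then regulate the mean temperature
            return {"ac": False, "fan": False}

        actions = control_step(t_upper=24.3, t_lower=22.1)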

  20. Erosion effects of atomic oxygen on polyhedral oligomeric silsesquioxane-polyimide hybrid films in low earth orbit space environment.

    Science.gov (United States)

    Duo, Shuwang; Song, Mimi; Liu, Tingzhi; Hu, Changyuan; Li, Meishuan

    2013-02-01

    A novel polyimide (PI) hybrid nanocomposite containing polyhedral oligomeric silsesquioxane (POSS) was prepared by copolymerization of trisilanolphenyl-POSS, 4,4'-oxydianiline (ODA), and pyromellitic dianhydride (PMDA). The AO resistance of these PI/POSS hybrid films was tested in a ground-based AO simulation facility. Exposed and unexposed surfaces were characterized by SEM and X-ray photoelectron spectroscopy. SEM images showed that the surface of the 20 wt% PI/POSS film became much less rough than that of the pristine polyimide. Mass measurements of the samples showed that the erosion yield of the PI/POSS (20 wt%) hybrid film was 1.2 × 10^(-25) cm^3/atom, reduced to 4% of that of the polyimide film. The XPS data indicated that the carbon content of the near-surface region decreased from 60.1 to 13.2 at% after AO exposure. The oxygen and silicon concentrations in the near-surface region increased to 1.96 after AO exposure. The nanometer-sized structure of POSS, with its large surface area, led the AO-irradiated samples to form a SiO2 passivation layer, which protected the underlying polymer from further AO attack. The incorporation of POSS into the polyimide can dramatically improve the AO resistance of polyimide films in the low earth orbit environment.

  1. A Multi-Language Computing Environment for Literate Programming and Reproducible Research

    Directory of Open Access Journals (Sweden)

    Eric Schulte

    2012-01-01

    Full Text Available We present a new computing environment for authoring mixed natural and computer language documents. In this environment a single hierarchically-organized plain text source file may contain a variety of elements such as code in arbitrary programming languages, raw data, links to external resources, project management data, working notes, and text for publication. Code fragments may be executed in situ with graphical, numerical and textual output captured or linked in the file. Export to LaTeX, HTML, LaTeX beamer, DocBook and other formats permits working reports, presentations and manuscripts for publication to be generated from the file. In addition, functioning pure code files can be automatically extracted from the file. This environment is implemented as an extension to the Emacs text editor and provides a rich set of features for authoring both prose and code, as well as sophisticated project management capabilities.

  2. Bridging Social and Semantic Computing - Design and Evaluation of User Interfaces for Hybrid Systems

    Science.gov (United States)

    Bostandjiev, Svetlin Alex I.

    2012-01-01

    The evolution of the Web brought new interesting problems to computer scientists that we loosely classify in the fields of social and semantic computing. Social computing is related to two major paradigms: computations carried out by a large amount of people in a collective intelligence fashion (i.e. wikis), and performing computations on social…

  3. Development of a Computational Steering Framework for High Performance Computing Environments on Blue Gene/P Systems

    KAUST Repository

    Danani, Bob K.

    2012-07-01

    Computational steering has revolutionized the traditional workflow in high performance computing (HPC) applications. The standard workflow that consists of preparation of an application’s input, running of a simulation, and visualization of simulation results in a post-processing step is now transformed into a real-time interactive workflow that significantly reduces development and testing time. Computational steering provides the capability to direct or re-direct the progress of a simulation application at run-time. It allows modification of application-defined control parameters at run-time using various user-steering applications. In this project, we propose a computational steering framework for HPC environments that provides an innovative solution and easy-to-use platform, which allows users to connect and interact with running application(s) in real-time. This framework uses RealityGrid as the underlying steering library and adds several enhancements to the library to enable steering support for Blue Gene systems. Included in the scope of this project is the development of a scalable and efficient steering relay server that supports many-to-many connectivity between multiple steered applications and multiple steering clients. Steered applications can range from intermediate simulation and physical modeling applications to complex computational fluid dynamics (CFD) applications or advanced visualization applications. The Blue Gene supercomputer presents special challenges for remote access because the compute nodes reside on private networks. This thesis presents an implemented solution and demonstrates it on representative applications. Thorough implementation details and application enablement steps are also presented in this thesis to encourage direct usage of this framework.

  4. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Energy Technology Data Exchange (ETDEWEB)

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. A tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm^3) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm^3, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm^3, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0

  5. Heuristic for Critical Machine Based a Lot Streaming for Two-Stage Hybrid Production Environment

    Science.gov (United States)

    Vivek, P.; Saravanan, R.; Chandrasekaran, M.; Pugazhenthi, R.

    2017-03-01

    Lot streaming in a hybrid flowshop (HFS) is encountered in many real-world problems. This paper deals with a heuristic approach for lot streaming, based on critical machine considerations, for a two-stage hybrid flowshop. The first stage has two identical parallel machines and the second stage has only one machine; the second-stage machine is considered critical for valid reasons, and this kind of problem is known to be NP-hard. A mathematical model was developed for the selected problem. Simulation modelling and analysis were carried out in the Extend V6 software. A heuristic was developed for obtaining the optimal lot streaming schedule. Eleven cases of lot streaming were considered. The proposed heuristic was verified and validated by real-time simulation experiments. All possible lot streaming strategies, and all possible sequences under each strategy, were simulated and examined. The heuristic yielded the optimal schedule consistently in all eleven cases. A procedure for identifying the best lot streaming strategy is suggested.
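
    For intuition, the Python sketch below evaluates the makespan of one lot-streaming choice for this two-stage layout: sublots go to the earlier-available stage-1 machine, and the single critical stage-2 machine processes each sublot once both it and the sublot are ready. Sublot sizes and unit processing times are hypothetical.

        # Sketch of evaluating one lot-streaming schedule for a two-stage hybrid
        # flowshop: two identical parallel machines at stage 1, one (critical)
        # machine at stage 2.

        def makespan(sublots, p1, p2):
            """sublots: list of sublot sizes; p1, p2: unit processing times per stage."""
            m1 = [0.0, 0.0]                 # ready times of the two stage-1 machines
            stage2_free = 0.0
            for q in sublots:
                k = 0 if m1[0] <= m1[1] else 1      # earliest-available stage-1 machine
                done1 = m1[k] + q * p1
                m1[k] = done1
                # Stage 2 starts a sublot when both the machine and the sublot are ready.
                stage2_free = max(stage2_free, done1) + q * p2
            return stage2_free

        # Compare two ways of splitting a 60-unit lot into sublots.
        print(makespan([60], p1=1.0, p2=1.5))           # no streaming   -> 150.0
        print(makespan([20, 20, 20], p1=1.0, p2=1.5))   # three sublots  -> 110.0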

  6. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    Science.gov (United States)

    Chiadamrong, N.; Piyathanavong, V.

    2017-04-01

    Models that aim to optimize the design of supply chain networks have gained much interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that run until the difference between subsequent solutions satisfies a pre-determined termination criterion. The effectiveness of the proposed approach is illustrated by an example, which shows near-optimal results obtained in much less solving time than the conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems that incorporate dynamism and uncertainty.
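
    The iterative procedure can be sketched generically: an analytical update proposes a design, a stochastic simulation evaluates it, and a correction is fed back until successive solutions agree within tolerance. Everything below is a hypothetical stand-in for the paper's mixed-integer program and discrete-event simulation.

        import random

        # Generic sketch of the analytic/simulation iteration: adjust a capacity
        # design until its simulated service level hits a target and successive
        # designs agree. All numbers are illustrative, not the paper's settings.

        TARGET_SERVICE = 0.95

        def simulate_service_level(design, n_reps=2000):
            random.seed(7)                  # fixed seed keeps the loop deterministic
            met = sum(1 for _ in range(n_reps) if random.gauss(95.0, 10.0) <= design)
            return met / n_reps

        design, prev = 100.0, None
        for _ in range(100):
            service = simulate_service_level(design)
            design += 40.0 * (TARGET_SERVICE - service)   # damped analytical correction
            if prev is not None and abs(design - prev) < 1e-3:
                break                                     # termination criterion met
            prev = design
        print(f"capacity ~ {design:.2f}, service ~ {service:.3f}")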

  7. Land use and wind direction drive hybridization between cultivated poplar and native species in a Mediterranean floodplain environment.

    Science.gov (United States)

    Paffetti, Donatella; Travaglini, Davide; Labriola, Mariaceleste; Buonamici, Anna; Bottalico, Francesca; Materassi, Alessandro; Fasano, Gianni; Nocentini, Susanna; Vettori, Cristina

    2018-01-01

    Deforestation and intensive land use management with plantations of fast-growing tree species, like Populus spp., may endanger native trees not only by eliminating or reducing their habitats, but also by diminishing their species integrity via hybridization and introgression. The genus Populus has persistent natural hybrids because clonal and sexual reproduction is common. The objective of this study was to assess the effect of land use management of poplar plantations on the spatial genetic structure and species composition in poplar stands. Specifically, we studied the potential breeding between natural and cultivated poplar populations in the Mediterranean environment to gain insight into spontaneous hybridization events between exotic and native poplars; we also used a GIS-based model to evaluate the potential threats related to an intensive land use management. Two study areas, both near to poplar plantations (P.×euramericana), were designated in the native mixed stands of P. alba, P. nigra and P.×canescens within protected areas. We found that the spatial genetic structure differed between the two stands and their differences depended on their environmental features. We detected a hybridization event with P.×canescens that was made possible by the synchrony of flowering between the poplar plantation and P.×canescens and facilitated by the wind intensity and direction favoring the spread of pollen. Taken together, our results indicate that natural and artificial barriers are crucial to mitigate the threats, and so they should be explicitly considered in land use planning. For example, our results suggest the importance of conserving rows of trees and shrubs along rivers and in agricultural landscapes. In sum, it is necessary to understand, evaluate, and monitor the spread of exotic species and genetic material to ensure effective land use management and mitigation of their impact on native tree populations. Copyright © 2017 Elsevier B.V. All rights

  8. A hybrid model for predicting carbon monoxide from vehicular exhausts in urban environments

    Science.gov (United States)

    Gokhale, Sharad; Khare, Mukesh

    Several deterministic-based air quality models evaluate and predict the frequently occurring pollutant concentrations well but, in general, are incapable of predicting the 'extreme' concentrations. In contrast, statistical distribution models overcome this limitation of the deterministic models and predict the 'extreme' concentrations. However, environmental damage is caused both by the extremes and by the sustained average concentration of pollutants. Hence, a model should predict not only the 'extreme' ranges but also the 'middle' ranges of pollutant concentrations, i.e. the entire range. Hybrid modelling is one technique that estimates/predicts the 'entire range' of the distribution of pollutant concentrations by combining deterministic models with suitable statistical distribution models (Jakeman et al., 1988). In the present paper, a hybrid model has been developed to predict the carbon monoxide (CO) concentration distributions at one of the traffic intersections, the Income Tax Office (ITO), in Delhi, where the traffic is heterogeneous in nature and the meteorology is 'tropical'. The model combines the general finite line source model (GFLSM) as its deterministic component and the log-logistic distribution (LLD) model as its statistical component. The hybrid (GFLSM-LLD) model is then applied at the ITO intersection. The results show that the hybrid model predictions match the observed CO concentration data within the 5-99 percentile range. The model is further validated at a different street location, i.e. the Sirifort roadway. The validation results show that the model predicts CO concentrations fairly well (d=0.91) in the 10-95 percentile range. A regulatory compliance estimate is also developed for the probability of hourly CO concentrations exceeding the National Ambient Air Quality Standards (NAAQS) of India. The traffic consists of light vehicles, heavy vehicles, three-wheelers (auto rickshaws) and two
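
    The statistical half of such a hybrid model can be sketched with SciPy, whose fisk distribution is the log-logistic. The synthetic concentrations and the exceedance limit below are hypothetical placeholders for observed hourly CO data and the NAAQS value.

        import numpy as np
        from scipy import stats

        # Sketch: fit a log-logistic distribution (scipy's "fisk") to hourly CO
        # concentrations and estimate the probability of exceeding a standard.
        # The data and the 4.0 limit are hypothetical placeholders.

        rng = np.random.default_rng(1)
        observed_co = stats.fisk.rvs(c=3.0, scale=2.0, size=1000, random_state=rng)

        # Fit the log-logistic with the location fixed at zero.
        c, loc, scale = stats.fisk.fit(observed_co, floc=0)

        standard = 4.0                                  # hypothetical hourly limit
        p_exceed = stats.fisk.sf(standard, c, loc=loc, scale=scale)
        print(f"P(CO > {standard}) = {p_exceed:.3f}")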

  9. The ion-ion hybrid Alfvén resonator in a fusion environment

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, W. A. [Univ. of California, Los Angeles, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Morales, G. J. [Univ. of California, Los Angeles, CA (United States)

    2014-06-01

    An investigation is made of a shear Alfvén wave resonator for burning plasma conditions expected in the ITER device. For small perpendicular scale-lengths the shear mode, which propagates predominantly along the magnetic field direction, experiences a parallel reflection where the wave frequency matches the local ion-ion hybrid frequency. In a tokamak device operating with a deuterium–tritium fuel, this effect can form a natural resonator because of the variation in local field strength along a field line. The relevant kinetic dispersion relation is examined to determine the relative importance of Landau and cyclotron damping over the possible resonator parameter space. A WKB model based on the kinetic dispersion relation is used to determine the eigenfrequencies and the quality factors of modes trapped in the resonator. The lowest frequency found has a value slightly larger than the ion-ion hybrid frequency at the outboard side of a given flux surface. The possibility that the resonator modes can be driven unstable by energetic alpha particles is considered. It is found that within a bandwidth of roughly 600 kHz above the ion-ion hybrid frequency on the outboard side of the flux surface, the shear modes can experience significant spatial amplification. An assessment is made of the form of an approximate global eigenmode that possesses the features of a resonator. It is identified that magnetic field shear combined with large ion temperature can cause coupling to an ion-Bernstein wave, which can limit the instability.

  10. Energy-Efficient Scheduling of HPC Applications in Cloud Computing Environments

    CERN Document Server

    Garg, Saurabh Kumar; Anandasivam, Arun; Buyya, Rajkumar

    2009-01-01

    The use of High Performance Computing (HPC) in commercial and consumer IT applications is becoming popular. These applications need the ability to gain rapid and scalable access to high-end computing capabilities. Cloud computing promises to deliver such a computing infrastructure using data centers, so that HPC users can access applications and data from a Cloud anywhere in the world on demand and pay based on what they use. However, the growing demand drastically increases the energy consumption of data centers, which has become a critical issue. High energy consumption not only translates to high energy cost, which reduces the profit margin of Cloud providers, but also to high carbon emissions, which are not environmentally sustainable. Hence, energy-efficient solutions are required that address the increase in energy consumption from the perspective not only of the Cloud provider but also of the environment. To address this issue we propose near-optimal scheduling policies that exploit heterogeneity across mu...

  11. INS/GPS/LiDAR Integrated Navigation System for Urban and Indoor Environments Using Hybrid Scan Matching Algorithm.

    Science.gov (United States)

    Gao, Yanbin; Liu, Shifei; Atia, Mohamed M; Noureldin, Aboelmagd

    2015-09-15

    This paper takes advantage of the complementary characteristics of the Global Positioning System (GPS) and Light Detection and Ranging (LiDAR) to provide periodic corrections to an Inertial Navigation System (INS) alternately in different environmental conditions. In open sky, where GPS signals are available and LiDAR measurements are sparse, GPS is integrated with INS. Meanwhile, in confined outdoor environments and indoors, where GPS is unreliable or unavailable and LiDAR measurements are rich, LiDAR replaces GPS to integrate with INS. This paper also proposes an innovative hybrid scan matching algorithm that combines the feature-based scan matching method and the Iterative Closest Point (ICP)-based scan matching method. The algorithm can work in, and transition between, two modes depending on the number of matched line features over two scans, thus achieving efficiency and robustness concurrently. Two integration schemes of INS and LiDAR with the hybrid scan matching algorithm are implemented and compared. Real experiments are performed on an Unmanned Ground Vehicle (UGV) in both outdoor and indoor environments. Experimental results show that the multi-sensor integrated system can maintain sub-meter navigation accuracy over the whole trajectory.
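
    The mode-switching logic can be sketched as below: if enough line features match across two scans, take the feature-based pose, otherwise fall back to a point-to-point ICP alignment. The feature front end is a hypothetical stub; only the ICP step and the switch are worked out.

        import numpy as np
        from scipy.spatial import cKDTree

        MIN_FEATURES = 4   # hypothetical threshold on matched line features

        def icp_2d(src, dst, iters=20):
            """Point-to-point ICP: returns rotation R and translation t (dst ~ R@src + t)."""
            src = src.copy()
            tree = cKDTree(dst)
            R_total, t_total = np.eye(2), np.zeros(2)
            for _ in range(iters):
                _, idx = tree.query(src)             # nearest-neighbor correspondences
                matched = dst[idx]
                mu_s, mu_d = src.mean(0), matched.mean(0)
                H = (src - mu_s).T @ (matched - mu_d)
                U, _, Vt = np.linalg.svd(H)
                R = Vt.T @ U.T
                if np.linalg.det(R) < 0:             # guard against reflections
                    Vt[-1] *= -1
                    R = Vt.T @ U.T
                t = mu_d - R @ mu_s
                src = src @ R.T + t
                R_total, t_total = R @ R_total, R @ t_total + t
            return R_total, t_total

        def hybrid_match(scan_a, scan_b, matched_line_features):
            # Feature-rich scans use the (not shown) feature-based pose estimate;
            # otherwise fall back to point-based ICP.
            if len(matched_line_features) >= MIN_FEATURES:
                return "feature", None
            return "icp", icp_2d(scan_a, scan_b)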

  12. INS/GPS/LiDAR Integrated Navigation System for Urban and Indoor Environments Using Hybrid Scan Matching Algorithm

    Directory of Open Access Journals (Sweden)

    Yanbin Gao

    2015-09-01

    Full Text Available This paper takes advantage of the complementary characteristics of the Global Positioning System (GPS) and Light Detection and Ranging (LiDAR) to provide periodic corrections to an Inertial Navigation System (INS) alternately in different environmental conditions. In open sky, where GPS signals are available and LiDAR measurements are sparse, GPS is integrated with INS. Meanwhile, in confined outdoor environments and indoors, where GPS is unreliable or unavailable and LiDAR measurements are rich, LiDAR replaces GPS to integrate with INS. This paper also proposes an innovative hybrid scan matching algorithm that combines the feature-based scan matching method and the Iterative Closest Point (ICP)-based scan matching method. The algorithm can work in, and transition between, two modes depending on the number of matched line features over two scans, thus achieving efficiency and robustness concurrently. Two integration schemes of INS and LiDAR with the hybrid scan matching algorithm are implemented and compared. Real experiments are performed on an Unmanned Ground Vehicle (UGV) in both outdoor and indoor environments. Experimental results show that the multi-sensor integrated system can maintain sub-meter navigation accuracy over the whole trajectory.

  13. A Multi-Compartment Hybrid Computational Model Predicts Key Roles for Dendritic Cells in Tuberculosis Infection

    Directory of Open Access Journals (Sweden)

    Simeone Marino

    2016-10-01

    Full Text Available Tuberculosis (TB) is a world-wide health problem with approximately 2 billion people infected with Mycobacterium tuberculosis (Mtb), the causative bacterium of TB. The pathologic hallmark of Mtb infection in humans and Non-Human Primates (NHPs) is the formation of spherical structures, primarily in lungs, called granulomas. Infection occurs after inhalation of bacteria into lungs, where resident antigen-presenting cells (APCs) take up bacteria and initiate the immune response to Mtb infection. APCs traffic from the site of infection (lung) to lung-draining lymph nodes (LNs), where they prime T cells to recognize Mtb. These T cells, circulating back through blood, migrate back to lungs to perform their immune effector functions. We have previously developed a hybrid agent-based model (ABM), labeled GranSim, describing in silico immune cell, bacterial (Mtb) and molecular behaviors during tuberculosis infection, and recently linked that model to operate across three physiological compartments: lung (the infection site where granulomas form), lung-draining lymph node (LN, the site of generation of adaptive immunity) and blood (a measurable compartment). Granuloma formation and function is captured by a spatio-temporal model (i.e., the ABM), while the LN and blood compartments represent temporal dynamics of the whole body in response to infection and are captured with ordinary differential equations (ODEs). In order to have a more mechanistic representation of APC trafficking from the lung to the lymph node, and to better capture antigen presentation in a draining LN, this current study incorporates the role of dendritic cells (DCs) in a computational fashion into GranSim. Results: The model was calibrated using experimental data from the lungs and blood of NHPs. The addition of DCs allowed us to investigate in greater detail mechanisms of recruitment, trafficking and antigen presentation and their role in tuberculosis infection. Conclusion: The main conclusion of this study is

  14. A distributed computing environment with support for constraint-based task scheduling and scientific experimentation

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L. [Univ. of Washington, Seattle, WA (United States). Dept. of Computer Science and Engineering

    1997-04-01

    This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies show that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.

  15. A Computing Environment to Support Repeatable Scientific Big Data Experimentation of World-Wide Scientific Literature

    Energy Technology Data Exchange (ETDEWEB)

    Schlicher, Bob G [ORNL; Kulesz, James J [ORNL; Abercrombie, Robert K [ORNL; Kruse, Kara L [ORNL

    2015-01-01

    A principal tenet of the scientific method is that experiments must be repeatable, relying on ceteris paribus (i.e., all other things being equal). As a scientific community involved in data sciences, we must investigate ways to establish an environment where experiments can be repeated. We can no longer merely allude to where the data comes from; we must add rigor to the data collection and management process from which our analysis is conducted. This paper describes a computing environment to support repeatable scientific big data experimentation of world-wide scientific literature, and recommends a system that is housed at the Oak Ridge National Laboratory in order to provide value to investigators from government agencies, academic institutions, and industry entities. The described computing environment also adheres to the recently instituted digital data management plan mandated by multiple US government agencies, which involves all stages of the digital data life cycle including capture, analysis, sharing, and preservation. It particularly focuses on the sharing and preservation of digital research data. The details of this computing environment are explained within the context of cloud services by the three-layer classification of Software as a Service, Platform as a Service, and Infrastructure as a Service.

  16. The use of computer vision techniques to augment home based sensorised environments.

    Science.gov (United States)

    Uhríková, Zdenka; Nugent, Chris D; Hlavác, Václav

    2008-01-01

    Technology within the home environment is becoming widely accepted as a means to facilitate independent living. Nevertheless, practical issues of detecting different tasks between multiple persons within the same environment, along with managing instances of uncertainty associated with recorded sensor data, are two key challenges yet to be fully solved. This work presents details of how computer vision techniques can be used as both an alternative and a complementary means in the assessment of behaviour in home-based sensorised environments. Within our work we assessed the ability of vision processing techniques, in conjunction with sensor-based data, to deal with instances of multiple occupancy. Our results indicate that the inclusion of the video data improved the overall process of task identification by detecting and recognizing multiple people in the environment using a color-based tracking algorithm.
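
    As a loose illustration of the color-based tracking the authors mention, the following sketch (assuming the OpenCV library, an illustrative HSV color band, and a default camera source; not the authors' actual pipeline) flags colored regions in a video stream:

    ```python
    # Minimal color-based tracking sketch: threshold in HSV space, find blobs,
    # and draw bounding boxes. Color band and area threshold are invented.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)                         # illustrative video source
    lower = np.array([100, 80, 50])                   # assumed hue band (blue-ish)
    upper = np.array([130, 255, 255])

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, lower, upper)         # keep pixels in the band
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 500:              # ignore small blobs
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    ```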

  17. Electronic Structure Calculations and Adaptation Scheme in Multi-core Computing Environments

    Energy Technology Data Exchange (ETDEWEB)

    Seshagiri, Lakshminarasimhan; Sosonkina, Masha; Zhang, Zhao

    2009-05-20

    Multi-core processing environments have become the norm in generic computing and are being considered for adding an extra dimension to the execution of any application. The T2 Niagara processor is a unique environment consisting of eight cores, each capable of running eight threads simultaneously. Applications like the General Atomic and Molecular Electronic Structure System (GAMESS), used for ab initio molecular quantum chemistry calculations, can be good indicators of the performance of such machines and a guideline for both hardware designers and application programmers. In this paper we benchmark GAMESS performance on a T2 Niagara processor for a couple of molecules. We also show the suitability of using a middleware-based adaptation algorithm with GAMESS in such a multi-core environment.

  18. Data of NODDI diffusion metrics in the brain and computer simulation of hybrid diffusion imaging (HYDI) acquisition scheme

    Directory of Open Access Journals (Sweden)

    Chandana Kodiweera

    2016-06-01

    This article provides NODDI diffusion metrics in the brains of 52 healthy participants and computer simulation data to support the compatibility of the hybrid diffusion imaging (HYDI) acquisition scheme (“Hybrid diffusion imaging” [1]) for fitting the neurite orientation dispersion and density imaging (NODDI) model (“NODDI: practical in vivo neurite orientation dispersion and density imaging of the human brain” [2]). HYDI is an extremely versatile diffusion magnetic resonance imaging (dMRI) technique that enables various analysis methods using a single diffusion dataset. One of these analysis methods is the NODDI computation, which models the brain tissue with three compartments: fast isotropic diffusion (e.g., cerebrospinal fluid), anisotropic hindered diffusion (e.g., extracellular space), and anisotropic restricted diffusion (e.g., intracellular space). The NODDI model produces microstructural metrics in the developing brain, the aging brain, or the brain with neurologic disorders. The first dataset provided here comprises the means and standard deviations of NODDI metrics in 48 white matter regions of interest (ROIs), averaged across the 52 healthy participants. The second dataset is a computer simulation with initial conditions guided by the first dataset as inputs and a gold standard for model fitting. The computer simulation data provide a direct comparison of NODDI indices computed from the HYDI acquisition [1] to NODDI indices computed from the originally proposed acquisition [2]. These data are related to the accompanying research article “Age Effects and Sex Differences in Human Brain White Matter of Young to Middle-Aged Adults: A DTI, NODDI, and q-Space Study” [3].
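
    For orientation, a highly simplified version of the three-compartment NODDI signal can be written down directly. The sketch below ignores the orientation-dispersion machinery of the full model [2] and uses illustrative parameter values; it is not the fitting code behind these datasets.

    ```python
    # Simplified three-compartment diffusion signal: restricted "stick"
    # (intracellular) + tortuosity-scaled hindered (extracellular) + free water.
    import numpy as np

    def noddi_signal(b, cos_theta, v_iso=0.1, v_ic=0.6,
                     d_par=1.7e-3, d_iso=3.0e-3):   # mm^2/s, illustrative values
        a_ic = np.exp(-b * d_par * cos_theta**2)            # restricted stick
        d_perp = d_par * (1 - v_ic)                         # tortuosity model
        a_ec = np.exp(-b * (d_perp + (d_par - d_perp) * cos_theta**2))
        a_iso = np.exp(-b * d_iso)                          # free water (CSF)
        return (1 - v_iso) * (v_ic * a_ic + (1 - v_ic) * a_ec) + v_iso * a_iso

    print(noddi_signal(b=1000.0, cos_theta=0.5))            # b in s/mm^2
    ```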

  19. Robust computation of dipole electromagnetic fields in arbitrarily anisotropic, planar-stratified environments.

    Science.gov (United States)

    Sainath, Kamalesh; Teixeira, Fernando L; Donderici, Burkay

    2014-01-01

    We develop a general-purpose formulation, based on two-dimensional spectral integrals, for computing electromagnetic fields produced by arbitrarily oriented dipoles in planar-stratified environments, where each layer may exhibit arbitrary and independent anisotropy in both its (complex) permittivity and permeability tensors. Among the salient features of our formulation are (i) computation of eigenmodes (characteristic plane waves) supported in arbitrarily anisotropic media in a numerically robust fashion, (ii) implementation of an hp-adaptive refinement for the numerical integration to evaluate the radiation and weakly evanescent spectra contributions, and (iii) development of an adaptive extension of an integral convergence acceleration technique to compute the strongly evanescent spectrum contribution. While other semianalytic techniques exist to solve this problem, none have full applicability to media exhibiting arbitrary double anisotropies in each layer, where one must account for the whole range of possible phenomena (e.g., mode coupling at interfaces and nonreciprocal mode propagation). Brute-force numerical methods can tackle this problem but only at a much higher computational cost. The present formulation provides an efficient and robust technique for field computation in arbitrary planar-stratified environments. We demonstrate the formulation for a number of problems related to geophysical exploration.

  20. ARCHER, a New Monte Carlo Software Tool for Emerging Heterogeneous Computing Environments

    Science.gov (United States)

    Xu, X. George; Liu, Tianyu; Su, Lin; Du, Xining; Riblett, Matthew; Ji, Wei; Gu, Deyang; Carothers, Christopher D.; Shephard, Mark S.; Brown, Forrest B.; Kalra, Mannudeep K.; Liu, Bob

    2014-06-01

    The Monte Carlo radiation transport community faces a number of challenges associated with peta- and exa-scale computing systems that rely increasingly on heterogeneous architectures involving hardware accelerators such as GPUs. Existing Monte Carlo codes and methods must be strategically upgraded to meet emerging hardware and software needs. In this paper, we describe the development of a software tool called ARCHER (Accelerated Radiation-transport Computations in Heterogeneous EnviRonments), which is designed as a versatile testbed for future Monte Carlo codes. Preliminary results from five projects in nuclear engineering and medical physics are presented.
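
    As a minimal illustration of the kind of embarrassingly parallel Monte Carlo kernel such a testbed targets, here is a toy photon-attenuation estimate (not ARCHER's actual physics; the coefficient and geometry are invented):

    ```python
    # Toy Monte Carlo: fraction of photons crossing a homogeneous slab,
    # ignoring scattering. Each history samples an exponential free path.
    import math
    import random

    def transmitted_fraction(mu=0.2, thickness=10.0, n=100_000):
        hits = sum(1 for _ in range(n)
                   if -math.log(random.random()) / mu > thickness)
        return hits / n

    print(transmitted_fraction())   # ~ exp(-mu * thickness) ≈ 0.135
    ```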

  1. Secure encapsulation and publication of biological services in the cloud computing environment.

    Science.gov (United States)

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    Secure encapsulation and publication of bioinformatics software products based on web services are presented, and the basic functions of biological information processing are realized in the cloud computing environment. In the encapsulation phase, the workflow and functions of the bioinformatics software are analyzed, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. Functions such as remote user job submission and job status query are implemented using the GRAM components, and the services of the bioinformatics software are published to remote users. Finally, a basic prototype system of the biological cloud is achieved.

  2. Peat hybrid sorbents for treatment of wastewaters and remediation of polluted environment

    Science.gov (United States)

    Klavins, Maris; Burlakovs, Juris; Robalds, Artis; Ansone-Bertina, Linda

    2015-04-01

    For the remediation of soils and the purification of polluted waters and wastewaters, sorbents can be considered a promising group of materials; among them, peat has a special role due to its low cost, biodegradability, high number of functional groups, well-developed surface area, and combination of hydrophilic/hydrophobic structural elements. Peat as a sorbent has good application potential for the removal of trace metals, and we have demonstrated peat sorption capacities, sorption kinetics, and thermodynamics with respect to metals of different valencies: Tl(I), Cu(II), Cr(III). However, peat's sorption capacity with respect to nonmetallic (anionic) species is low, and its mechanical properties do not support application in large-scale column processes. To expand peat's application possibilities, the approach of biomass-based hybrid sorbents has been elaborated. The concept "hybrid sorbent", in our understanding, means a natural, biomass-based sorbent modified or covered with another sorbent material, thus combining two types of sorbent properties, functionalities, surface properties, etc. As the "covering layer", both inorganic mineral phases (iron oxohydroxides, oxyapatite) and organic polymers (using graft polymerization) were used. The obtained sorbents were characterised by their spectral properties, surface area, and elemental composition. The hybrid sorbents were tested for sorption of compounds in anionic speciation forms, for example arsenic, antimony, tellurium, and phosphorus compounds, in comparison with weakly basic anionites. The highest sorption capacity was observed when peat sorbents modified with iron compounds were used. Sorption of different arsenic speciation forms onto iron-modified peat sorbents was investigated as a function of pH and temperature. It was established that sorption capacity increases with a rise in temperature, and the calculation of sorption process thermodynamic parameters indicates the spontaneity of sorption

  3. Higher Education Reform for Computer Major Students in Open and Research Environments

    Institute of Scientific and Technical Information of China (English)

    LI Xin; XU Xin-shun; JIA Zhi-ping; MENG Xiang-xu

    2012-01-01

    This paper analyzes the requirements for professional computer talent in Chinese universities and introduces the innovative educational practices taken by Shandong University in an open and research-oriented environment. In order to improve educational quality, we have carried out a series of reforms, including the "Four Experiences" program, which aims at diversifying study environments, fostering students' adaptability, and extending their vision. Students are encouraged to join the "Research Assistant" program and participate in scientific projects to improve their ability in research and innovation. They also conduct "Engineering Practice" to learn the latest modeling and programming skills. Compound talents characterized by a solid foundation, high quality, and strong practical ability are shaped through these initiatives.

  4. Techniques and environments for big data analysis parallel, cloud, and grid computing

    CERN Document Server

    Dehuri, Satchidananda; Kim, Euiwhan; Wang, Gi-Name

    2016-01-01

    This volume aims at a wide range of readers and researchers in the area of Big Data, presenting recent advances in the field of Big Data analysis as well as the techniques and tools used to analyze it. The book includes 10 distinct chapters providing a concise introduction to Big Data analysis and recent techniques and environments for Big Data analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in big data analysis by adopting parallel, grid, and cloud computing environments.

  5. Plasma environment of magnetized asteroids: a 3-D hybrid simulation study

    Directory of Open Access Journals (Sweden)

    S. Simon

    2006-03-01

    The interaction of a magnetized asteroid with the solar wind is studied by using a three-dimensional hybrid simulation code (fluid electrons, kinetic ions). When the obstacle's intrinsic magnetic moment is sufficiently strong, the interaction region develops signs of magnetospheric structures. On the one hand, an area from which the solar wind is excluded forms downstream of the obstacle. On the other hand, the interaction region is surrounded by a boundary layer which indicates the presence of a bow shock. By analyzing the trajectories of individual ions, it is demonstrated that kinetic effects have global consequences for the structure of the interaction region.
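
    Hybrid codes of this kind typically advance the kinetic ions with a particle pusher such as the Boris algorithm; the paper does not specify its pusher, so the following is a generic sketch with illustrative solar-wind-like values:

    ```python
    # Boris algorithm: half electric kick, magnetic rotation, half electric kick.
    # Commonly used to push kinetic ions in hybrid (fluid-electron) codes.
    import numpy as np

    def boris_push(v, E, B, q_m, dt):
        """Advance an ion velocity by one time step in fields E, B (SI units)."""
        v_minus = v + 0.5 * q_m * dt * E            # first half electric kick
        t = 0.5 * q_m * dt * B                      # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)     # full magnetic rotation
        return v_plus + 0.5 * q_m * dt * E          # second half electric kick

    v = boris_push(np.array([4e5, 0.0, 0.0]),               # ~solar wind speed, m/s
                   E=np.zeros(3), B=np.array([0.0, 0.0, 5e-9]),  # ~5 nT field
                   q_m=9.58e7, dt=1e-3)                     # proton charge/mass
    print(v)
    ```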

  6. Hybrid Finite Element Developments for Rotorcraft Interior Noise Computations within a Multidisciplinary Design Environment Project

    Data.gov (United States)

    National Aeronautics and Space Administration — One of the main attributes contributing to the civil competitiveness of rotorcraft is the continuously increasing expectation for passenger comfort, which is...

  7. Computationally Probing the Performance of Hybrid, Heterogeneous, and Homogeneous Iridium-Based Catalysts for Water Oxidation

    Energy Technology Data Exchange (ETDEWEB)

    García-Melchor, Max [SUNCAT Center for Interface Science and Catalysis, Department of Chemical Engineering, Stanford University, Stanford CA (United States); Vilella, Laia [Institute of Chemical Research of Catalonia (ICIQ), The Barcelona Institute of Science and Technology (BIST),Tarragona (Spain); Departament de Quimica, Universitat Autonoma de Barcelona, Barcelona (Spain); López, Núria [Institute of Chemical Research of Catalonia (ICIQ), The Barcelona Institute of Science and Technology (BIST), Tarragona (Spain); Vojvodic, Aleksandra [SUNCAT Center for Interface Science and Catalysis, SLAC National Accelerator Laboratory, Menlo Park CA (United States)

    2016-04-29

    An attractive strategy to improve the performance of water oxidation catalysts would be to anchor a homogeneous molecular catalyst on a heterogeneous solid surface to create a hybrid catalyst. The idea of this combined system is to take advantage of the individual properties of each of the two catalyst components. We use density functional theory to determine the stability and activity of a model hybrid water oxidation catalyst consisting of a dimeric Ir complex attached to the IrO2(110) surface through two oxygen atoms. We find that a homogeneous catalyst can be bound to its matrix oxide without losing significant activity. Hence, designing hybrid systems that benefit from both the highly tunable activity of homogeneous catalysts and the stability of heterogeneous systems seems feasible.

  8. DiFX: A Software Correlator for Very Long Baseline Interferometry Using Multiprocessor Computing Environments

    Science.gov (United States)

    Deller, A. T.; Tingay, S. J.; Bailes, M.; West, C.

    2007-03-01

    We describe the development of an FX-style correlator for very long baseline interferometry (VLBI), implemented in software and intended to run in multiprocessor computing environments, such as large clusters of commodity machines (Beowulf clusters) or computers specifically designed for high-performance computing, such as multiprocessor shared-memory machines. We outline the scientific and practical benefits for VLBI correlation, which chiefly stem from the inherent flexibility of software and the fact that the highly parallel and scalable nature of the correlation task is well suited to a multiprocessor computing environment. We suggest scientific applications where such an approach to VLBI correlation is most suited and will give the best returns. We report detailed results from the Distributed FX (DiFX) software correlator running on the Swinburne supercomputer (a Beowulf cluster of ~300 commodity processors), including measures of the performance of the system. For example, to correlate all Stokes products for a 10 antenna array with an aggregate bandwidth of 64 MHz per station, and using typical time and frequency resolution, currently requires on the order of 100 desktop-class compute nodes. Due to the effect of Moore's law on commodity computing performance, the total number and cost of compute nodes required to meet a given correlation task continues to decrease rapidly with time. We show detailed comparisons between DiFX and two existing hardware-based correlators: the Australian Long Baseline Array S2 correlator and the NRAO Very Long Baseline Array correlator. In both cases, excellent agreement was found between the correlators. Finally, we describe plans for the future operation of DiFX on the Swinburne supercomputer for both astrophysical and geodetic science.
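
    The FX idea itself is compact: Fourier-transform each station's sampled voltages (the "F" step), then cross-multiply and accumulate spectra (the "X" step). A minimal single-baseline sketch, omitting the delay tracking, fringe rotation, and data handling that DiFX actually performs, is:

    ```python
    # Single-baseline FX correlation on synthetic data: a common "sky" signal
    # buried in independent station noise shows up as correlated power.
    import numpy as np

    def fx_correlate(s1, s2, nchan=256):
        """Average cross-power spectrum of two sampled voltage streams."""
        nseg = min(len(s1), len(s2)) // nchan
        acc = np.zeros(nchan, dtype=complex)
        for k in range(nseg):
            f1 = np.fft.fft(s1[k * nchan:(k + 1) * nchan])   # F: channelize
            f2 = np.fft.fft(s2[k * nchan:(k + 1) * nchan])
            acc += f1 * np.conj(f2)                          # X: cross-multiply
        return acc / nseg                                    # accumulate/average

    rng = np.random.default_rng(0)
    common = rng.normal(size=65536)              # shared "sky" signal
    v1 = common + rng.normal(size=65536)         # station 1 = signal + noise
    v2 = common + rng.normal(size=65536)         # station 2 = signal + noise
    print(np.abs(fx_correlate(v1, v2))[:4])      # correlated power per channel
    ```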

  9. Method and system for rendering and interacting with an adaptable computing environment

    Science.gov (United States)

    Osbourn, Gordon Cecil [Albuquerque, NM; Bouchard, Ann Marie [Albuquerque, NM

    2012-06-12

    An adaptable computing environment is implemented with software entities termed "s-machines", which self-assemble into hierarchical data structures capable of rendering and interacting with the computing environment. A hierarchical data structure includes a first hierarchical s-machine bound to a second hierarchical s-machine. The first hierarchical s-machine is associated with a first layer of a rendering region on a display screen and the second hierarchical s-machine is associated with a second layer of the rendering region overlaying at least a portion of the first layer. A screen element s-machine is linked to the first hierarchical s-machine. The screen element s-machine manages data associated with a screen element rendered to the display screen within the rendering region at the first layer.

  10. Yabi: An online research environment for grid, high performance and cloud computing

    Directory of Open Access Journals (Sweden)

    Hunter Adam A

    2012-02-01

    Background: There is a significant demand for creating pipelines or workflows in the life science discipline that chain a number of discrete compute and data intensive analysis tasks into sophisticated analysis procedures. This need has led to the development of general as well as domain-specific workflow environments that are either complex desktop applications or Internet-based applications. Complexities can arise when configuring these applications in heterogeneous compute and storage environments if the execution and data access models are not designed appropriately. These complexities manifest themselves through limited access to available HPC resources, significant overhead required to configure tools, and the inability for users to simply manage files across heterogeneous HPC storage infrastructure. Results: In this paper, we describe the architecture of a software system that is adaptable to a range of both pluggable execution and data backends in an open source implementation called Yabi. Enabling seamless and transparent access to heterogeneous HPC environments at its core, Yabi then provides an analysis workflow environment that can create and reuse workflows as well as manage large amounts of both raw and processed data in a secure and flexible way across geographically distributed compute resources. Yabi can be used via a web-based environment to drag-and-drop tools to create sophisticated workflows. Yabi can also be accessed through the Yabi command line, which is designed for users who are more comfortable writing scripts or for enabling external workflow environments to leverage the features in Yabi. Configuring tools can be a significant overhead in workflow environments; Yabi greatly simplifies this task by enabling system administrators to configure and manage running tools via a web-based environment, without the need to write or edit software programs or scripts. In this paper, we highlight Yabi's capabilities

  11. Hybrid heating systems optimization of residential environment to have thermal comfort conditions by numerical simulation.

    Science.gov (United States)

    Jahantigh, Nabi; Keshavarz, Ali; Mirzaei, Masoud

    2015-01-01

    The aim of this study is to determine optimum hybrid heating system parameters, such as the temperature and surface area of a radiant heater and the vent area, to achieve thermal comfort conditions. The DOE factorial design method is used to determine the optimum values for the input parameters. A 3D model of a virtual standing thermal manikin with real dimensions is considered in this study. The continuity, momentum, energy, and species equations for turbulent flow, together with a physiological equation for thermal comfort, are numerically solved to study the heat, moisture and flow fields. The k-ε RNG model is used for turbulence modeling and the discrete ordinates (DO) method is used for radiation effects. Numerical results show good agreement with the experimental data reported in the literature. The effect of various combinations of inlet parameters on thermal comfort is considered. According to the Pareto graph, some of these combinations, which have a significant effect on thermal comfort while requiring no additional energy, can be used as useful tools. A better symmetrical velocity distribution around the manikin is also obtained with the hybrid system.
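
    A two-level full factorial design over the three inputs named above is easy to enumerate; the sketch below uses invented level values, not the study's, and each enumerated run would correspond to one CFD simulation:

    ```python
    # Enumerate a 2^3 full factorial design over the three heating parameters.
    from itertools import product

    factors = {
        "heater_temp_C": (40, 60),        # illustrative low/high levels
        "heater_area_m2": (0.5, 1.0),
        "vent_area_m2": (0.05, 0.10),
    }

    runs = [dict(zip(factors, levels))
            for levels in product(*factors.values())]   # 2^3 = 8 runs

    for run in runs:
        # each run would drive one simulation and record a comfort index (PMV)
        print(run)
    ```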

  12. Second language writing anxiety, computer anxiety, and performance in a classroom versus a web-based environment

    OpenAIRE

    2011-01-01

    This study examined the impact of writing anxiety and computer anxiety on language learning for 45 ESL adult learners enrolled in an English grammar and writing course. Two sections of the course were offered in a traditional classroom setting whereas two others were given in a hybrid form that involved distance learning. Contrary to previous research, writing anxiety showed no correlation with learning performance, whereas computer anxiety only yielded a positive correlation with performance...

  14. Engine control strategy for a series hybrid electric vehicle incorporating load-leveling and computer controlled energy management

    Energy Technology Data Exchange (ETDEWEB)

    Hochgraf, C.G.; Ryan, M.J.; Wiegman, H.L. [Univ. of Wisconsin, Madison, WI (United States)

    1996-09-01

    This paper identifies important engine, alternator and battery characteristics needed for determining an appropriate engine control strategy for a series hybrid electric vehicle. Examination of these characteristics indicates that a load-leveling strategy applied to the small engine will provide better fuel economy than a power-tracking scheme. An automatic energy management strategy is devised whereby a computer controller determines the engine-alternator turn-on and turn-off conditions and controls the engine-alternator autonomously. Battery state of charge is determined from battery voltage and current measurements. Experimental results of the system's performance in a test vehicle during city driving are presented.
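
    The load-leveling strategy described here amounts to switching the engine-alternator on and off by battery state of charge with hysteresis, running it at a fixed efficient power level. A minimal sketch with invented thresholds and power level (the paper's actual turn-on/turn-off conditions are not reproduced) is:

    ```python
    # Bang-bang engine-alternator control with state-of-charge hysteresis.
    def engine_command(soc, engine_on, soc_low=0.55, soc_high=0.75,
                       p_level_kw=15.0):
        """Return (engine_on, alternator power in kW) for this control step."""
        if soc <= soc_low:
            engine_on = True       # battery depleted: start load-leveled engine
        elif soc >= soc_high:
            engine_on = False      # battery replenished: shut engine down
        return engine_on, (p_level_kw if engine_on else 0.0)

    on = False
    for soc in (0.70, 0.60, 0.54, 0.60, 0.76):     # simulated SOC trajectory
        on, p = engine_command(soc, on)
        print(f"SOC={soc:.2f} engine_on={on} P={p} kW")
    ```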

  15. Usability Studies in Virtual and Traditional Computer Aided Design Environments for Fault Identification

    Science.gov (United States)

    2017-08-08

    Dr. Syed Adeel Ahmed, Xavier University. In a typical design review process, a design space is presented to the reviewer(s), who examine the space for design ... sacrificing accuracy. The research team timed each task and recorded activity on evaluation sheets for the fault identification test. At the ...

  16. Usability Studies In Virtual And Traditional Computer Aided Design Environments For Spatial Awareness

    Science.gov (United States)

    2017-08-08

    Dr. Syed Adeel Ahmed, Xavier University of ... sacrificing accuracy. The research team timed each task and recorded activity on evaluation sheets for the spatial awareness test. At the completion of the ... The significance and detailed description of this study are well explained by Satter (2012) in his recent paper. Here we only present the ...

  17. Training troubleshooting skills with an anchored instruction module in an authentic computer based simulation environment

    OpenAIRE

    2013-01-01

    To improve the application and transfer of troubleshooting skills when diagnosing faults in complex automated production units, we developed and implemented an "anchored instruction" learning module in the context of a computer-based simulation environment. The effects of the instructional module were evaluated in a quasi-experimental study, during which 42 mechatronic apprentices were trained in two parallel experimental groups, with and without the anchored instruction module....

  18. Workshop on Pervasive Computing and Cooperative Environments in a Global Context

    Science.gov (United States)

    Selvarajah, Kirusnapillai; Speirs, Neil

    The increasing number of devices that are invisibly embedded into our surrounding environment, as well as the proliferation of wireless communication and sensing technologies, are the basis for visions like ambient intelligence and ubiquitous and pervasive computing. In this context, the objective of the PECES EU project is the creation of a comprehensive software layer to enable the seamless cooperation of embedded devices across various smart spaces on a global scale in a context-dependent, secure and trustworthy manner.

  19. Do Social Computing Make You Happy? A Case Study of Nomadic Children in Mixed Environments

    DEFF Research Database (Denmark)

    Christensen, Bent Guldbjerg

    2005-01-01

    In this paper I describe a perspective on ambient, ubiquitous, and pervasive computing called the happiness perspective. By using the happiness perspective, the application domain and how the technology is used and experienced become a central and integral part of perceiving ambient technology. I will use the perspective in a case study on field test experiments with nomadic children in mixed environments using the eBag system.

  20. A study of the impact of scheduling parameters in heterogeneous computing environments

    Energy Technology Data Exchange (ETDEWEB)

    Powers, Sarah S [ORNL

    2014-01-01

    This paper describes a tool for exploring system scheduler parameter settings in a heterogeneous computing environment. Through the coupling of simulation and optimization techniques, this work investigates optimal scheduling intervals, the impact of job arrival prediction on scheduling, and how best to apply fair-use policies. The developed simulation framework is quick and modular, enabling decision makers to explore, in real time, decisions regarding scheduling policies or parameter changes.
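
    As a toy illustration of coupling a queue simulation to a parameter sweep (the ORNL tool itself is far richer, and the distributions and metric below are invented for the example), one can estimate mean job wait as a function of the scheduling interval:

    ```python
    # Sweep the scheduling interval in a toy periodic-dispatch queue simulation.
    import random

    def mean_wait(interval, n_jobs=2000, seed=1):
        rng = random.Random(seed)
        t, queue, waits = 0.0, [], []
        arrivals = sorted(rng.uniform(0, 1000) for _ in range(n_jobs))
        while arrivals or queue:
            t += interval                        # scheduler wakes up periodically
            while arrivals and arrivals[0] <= t:
                queue.append(arrivals.pop(0))    # jobs arrived since last wake-up
            waits += [t - a for a in queue]      # queued jobs dispatch at this tick
            queue.clear()
        return sum(waits) / len(waits)

    for interval in (0.5, 2.0, 8.0):             # candidate scheduling intervals
        print(interval, round(mean_wait(interval), 2))
    ```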