WorldWideScience

Sample records for hybrid computing environment

  1. Applications integration in a hybrid cloud computing environment: modelling and platform

    Science.gov (United States)

    Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang

    2013-08-01

With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services and even infrastructure services provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications, such as data storage, computing processes, document sharing and even management information system services, as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed in different clouds as well as their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds with intra-enterprise ISs. A run-time platform is developed, and a cross-computing-environment process modelling technique is also developed, to improve the feasibility of ISs under hybrid cloud computing environments.

  2. Hybrid Cloud Computing Environment for EarthCube and Geoscience Community

    Science.gov (United States)

    Yang, C. P.; Qin, H.

    2016-12-01

The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resource synchronization and bursting between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience communities can deploy and manage applications using base or customized virtual machine images, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and other EarthCube building blocks. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares for the specific needs (e.g. images, port numbers, usable cloud capacity, etc.) of each project in advance, based on communications between ECITE and the participating projects; the scientists or IT technicians in those projects then launch one or more virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents or data without having to care about the heterogeneity in structure and operations among the different cloud platforms.

  3. A Hybrid Scheme for Fine-Grained Search and Access Authorization in Fog Computing Environment

    Science.gov (United States)

    Xiao, Min; Zhou, Jing; Liu, Xuejiao; Jiang, Mingda

    2017-01-01

In the fog computing environment, encrypted sensitive data may be transferred to multiple fog nodes on the edge of a network for low latency; thus, fog nodes need to implement search over encrypted data just as a cloud server does. Since fog nodes tend to serve IoT applications that often run on resource-constrained end devices, it is necessary to design lightweight solutions. At present, there is little research on this issue. In this paper, we propose a fine-grained, owner-enforced data search and access authorization scheme spanning user, fog and cloud for resource-constrained end users. Compared with existing schemes that support only either index encryption with search capability or data encryption with fine-grained access control, the proposed hybrid scheme supports both capabilities simultaneously; the index ciphertext and data ciphertext are constructed from a single ciphertext-policy attribute-based encryption (CP-ABE) primitive and share the same key pair, so data access efficiency is significantly improved and the cost of key management is greatly reduced. Moreover, in the proposed scheme, resource-constrained end devices can rapidly assemble ciphertexts online and securely outsource most of the decryption task to fog nodes, and a mediated encryption mechanism is adopted to achieve instantaneous user revocation instead of re-encrypting ciphertexts with many copies on many fog nodes. The security and performance analyses show that our scheme is suitable for a fog computing environment. PMID:28629131
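
    The paper's CP-ABE construction is involved; as a rough stand-in for just the encrypted-search flow a fog node could run, the Python sketch below uses HMAC keyword tokens. All names are hypothetical, and real CP-ABE ties index and data ciphertexts to attribute policies, which this deliberately omits.

    ```python
    # Simplified illustration only: keyword search tokens are HMAC tags, so the
    # fog node can match queries without learning the keywords themselves.
    import hmac, hashlib, os

    class Owner:
        def __init__(self):
            self.k_search = os.urandom(32)       # shared with authorized users only

        def build_index(self, doc_id, keywords):
            # One HMAC tag per keyword; the fog node stores tags, not keywords.
            return {hmac.new(self.k_search, w.encode(), hashlib.sha256).hexdigest(): doc_id
                    for w in keywords}

    class FogNode:
        def __init__(self):
            self.index = {}                      # tag -> doc_id

        def store(self, index):
            self.index.update(index)

        def search(self, trapdoor):
            return self.index.get(trapdoor)      # matches without seeing plaintext

    owner, fog = Owner(), FogNode()
    fog.store(owner.build_index("doc-42", ["sensor", "temperature"]))
    trapdoor = hmac.new(owner.k_search, b"sensor", hashlib.sha256).hexdigest()
    print(fog.search(trapdoor))                  # -> doc-42
    ```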

  4. A Hybrid Scheme for Fine-Grained Search and Access Authorization in Fog Computing Environment.

    Science.gov (United States)

    Xiao, Min; Zhou, Jing; Liu, Xuejiao; Jiang, Mingda

    2017-06-17

In the fog computing environment, encrypted sensitive data may be transferred to multiple fog nodes on the edge of a network for low latency; thus, fog nodes need to implement search over encrypted data just as a cloud server does. Since fog nodes tend to serve IoT applications that often run on resource-constrained end devices, it is necessary to design lightweight solutions. At present, there is little research on this issue. In this paper, we propose a fine-grained, owner-enforced data search and access authorization scheme spanning user, fog and cloud for resource-constrained end users. Compared with existing schemes that support only either index encryption with search capability or data encryption with fine-grained access control, the proposed hybrid scheme supports both capabilities simultaneously; the index ciphertext and data ciphertext are constructed from a single ciphertext-policy attribute-based encryption (CP-ABE) primitive and share the same key pair, so data access efficiency is significantly improved and the cost of key management is greatly reduced. Moreover, in the proposed scheme, resource-constrained end devices can rapidly assemble ciphertexts online and securely outsource most of the decryption task to fog nodes, and a mediated encryption mechanism is adopted to achieve instantaneous user revocation instead of re-encrypting ciphertexts with many copies on many fog nodes. The security and performance analyses show that our scheme is suitable for a fog computing environment.

  5. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    Science.gov (United States)

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

Cloud computing has attracted significant attention from the research community because of the rapid rate at which Information Technology services are migrating to its domain. Advances in virtualization technology have made cloud computing very popular as a result of the easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in the cloud computing environment. The current formulation of the task scheduling problem has been shown to be NP-complete; hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic features of cloud resources make optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA)-based SOS (SASOS), in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces the makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
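
    As a rough sketch of the SASOS idea (not the authors' implementation; task lengths and VM speeds are synthetic), the following Python combines a discrete SOS-style recombination step with a Simulated Annealing acceptance test over task-to-VM assignments, using makespan as the fitness:

    ```python
    import random, math

    def makespan(assign, task_len, vm_speed):
        # Completion time of the most loaded VM under this assignment.
        load = [0.0] * len(vm_speed)
        for t, vm in enumerate(assign):
            load[vm] += task_len[t] / vm_speed[vm]
        return max(load)

    def sasos(task_len, vm_speed, pop=20, iters=300, T0=10.0, alpha=0.95):
        n, m = len(task_len), len(vm_speed)
        X = [[random.randrange(m) for _ in range(n)] for _ in range(pop)]
        fit = lambda s: makespan(s, task_len, vm_speed)
        T = T0
        for _ in range(iters):
            best = min(X, key=fit)
            for i in range(pop):
                j = random.choice([k for k in range(pop) if k != i])
                # Discrete analogue of SOS mutualism: move genes toward the
                # best organism, guided by a random partner j.
                child = [bi if random.random() < 0.5 else random.choice((xi, xj))
                         for xi, xj, bi in zip(X[i], X[j], best)]
                delta = fit(child) - fit(X[i])
                if delta < 0 or random.random() < math.exp(-delta / T):  # SA test
                    X[i] = child
            T *= alpha                                   # geometric cooling
        return min(X, key=fit)

    tasks = [random.uniform(5, 50) for _ in range(40)]
    vms = [1.0, 1.5, 2.0, 2.5]
    sched = sasos(tasks, vms)
    print(round(makespan(sched, tasks, vms), 2))
    ```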

  6. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    Directory of Open Access Journals (Sweden)

    Mohammed Abdullahi

Full Text Available Cloud computing has attracted significant attention from the research community because of the rapid rate at which Information Technology services are migrating to its domain. Advances in virtualization technology have made cloud computing very popular as a result of the easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in the cloud computing environment. The current formulation of the task scheduling problem has been shown to be NP-complete; hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic features of cloud resources make optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA)-based SOS (SASOS), in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces the makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.

  7. Parallel Computing Characteristics of CUPID code under MPI and Hybrid environment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Ryong; Yoon, Han Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeon, Byoung Jin; Choi, Hyoung Gwon [Seoul National Univ. of Science and Technology, Seoul (Korea, Republic of)

    2014-05-15

simulation with diagonal preconditioning shows the better speedup. The MPI library was used for node-to-node communication among the partitioned subdomains, and OpenMP threads were activated within every node to exploit the multi-core computing environment. The results of hybrid computing show good performance compared with pure MPI parallel computing.
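
    The MPI-plus-threads pattern described can be illustrated in miniature with mpi4py. This is a sketch only: CUPID itself is not Python, and NumPy vectorization stands in here for the in-node OpenMP threads. It assumes an MPI runtime and would be launched with, e.g., `mpiexec -n 4 python halo.py`.

    ```python
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    u = np.random.rand(1000)                     # this rank's subdomain
    left, right = (rank - 1) % size, (rank + 1) % size

    # Node-to-node halo exchange among the partitioned subdomains (periodic).
    halo_left = comm.sendrecv(u[-1], dest=right, source=left)
    halo_right = comm.sendrecv(u[0], dest=left, source=right)

    # In-node work: vectorized smoothing step over the local subdomain.
    smoothed = np.empty_like(u)
    smoothed[1:-1] = (u[:-2] + u[1:-1] + u[2:]) / 3.0
    smoothed[0] = (halo_left + u[0] + u[1]) / 3.0
    smoothed[-1] = (u[-2] + u[-1] + halo_right) / 3.0
    print(f"rank {rank}/{size}: mean {smoothed.mean():.4f}")
    ```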

  8. Analog and hybrid computing

    CERN Document Server

    Hyndman, D E

    2013-01-01

    Analog and Hybrid Computing focuses on the operations of analog and hybrid computers. The book first outlines the history of computing devices that influenced the creation of analog and digital computers. The types of problems to be solved on computers, computing systems, and digital computers are discussed. The text looks at the theory and operation of electronic analog computers, including linear and non-linear computing units and use of analog computers as operational amplifiers. The monograph examines the preparation of problems to be deciphered on computers. Flow diagrams, methods of ampl

  9. Hybrid quantum computation

    International Nuclear Information System (INIS)

    Sehrawat, Arun; Englert, Berthold-Georg; Zemann, Daniel

    2011-01-01

We present a hybrid model of the unitary-evolution-based quantum computation model and the measurement-based quantum computation model. In the hybrid model, part of a quantum circuit is simulated by unitary evolution and the rest by measurements on star graph states, thereby combining the advantages of the two standard quantum computation models. In the hybrid model, a complicated unitary gate under simulation is decomposed into a sequence of single-qubit operations, controlled-z gates, and multiqubit rotations around the z axis. Each single-qubit gate and controlled-z gate is realized by a respective unitary evolution, and every multiqubit rotation is executed by a single measurement on a required star graph state. The classical information processing in our model requires only an information flow vector and propagation matrices. We provide an implementation of multicontrol gates in the hybrid model; they are very useful for implementing Grover's search algorithm, which is studied as an illustrative example.
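
    As a small numeric illustration (two qubits, NumPy only; not from the paper), the multiqubit rotation around the z axis that a single measurement executes is the diagonal unitary exp(-i θ/2 Z⊗Z):

    ```python
    import numpy as np

    Z = np.diag([1, -1])
    ZZ = np.kron(Z, Z)                                    # Z (x) Z, eigenvalues +/-1
    theta = np.pi / 3
    # exp of a diagonal operator: exponentiate the diagonal entrywise.
    Rzz = np.diag(np.exp(-1j * theta / 2 * np.diag(ZZ)))

    # Check unitarity; being diagonal, Rzz commutes with all z-basis operations.
    assert np.allclose(Rzz @ Rzz.conj().T, np.eye(4))
    print(np.round(np.diag(Rzz), 3))
    ```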

  10. Security in hybrid cloud computing

    OpenAIRE

    Koudelka, Ondřej

    2016-01-01

This bachelor thesis deals with the area of hybrid cloud computing, specifically with its security. The major aim of the thesis is to analyze and compare chosen hybrid cloud providers. As a minor aim, the thesis compares the security challenges of hybrid cloud as opposed to other deployment models. In order to accomplish these aims, the thesis defines the terms cloud computing and hybrid cloud computing in its theoretical part. Furthermore the security challenges for cloud computing a...

  11. CHPS IN CLOUD COMPUTING ENVIRONMENT

    OpenAIRE

    K.L.Giridas; A.Shajin Nargunam

    2012-01-01

Workflows have been utilized to characterize various forms of applications with high processing and storage demands. So, to make the cloud computing environment more eco-friendly, our research project aimed at reducing the e-waste accumulated by computers. In a hybrid cloud, the user has the flexibility offered by public cloud resources, which can be combined with the private resource pool as required. Our previous work described the process of combining the low range and mid range proce...

  12. Computing environment logbook

    Science.gov (United States)

    Osbourn, Gordon C; Bouchard, Ann M

    2012-09-18

    A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.
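
    The mechanism described (log, search, selective undo) can be sketched in a few lines of Python; the names here are illustrative, not from the patent:

    ```python
    # Events are logged together with inverse actions; the history is
    # searchable, and selected past events can be undone.
    class Logbook:
        def __init__(self):
            self.history = []                    # (description, undo_fn), oldest first

        def log(self, description, undo_fn):
            self.history.append((description, undo_fn))

        def search(self, term):
            return [i for i, (d, _) in enumerate(self.history) if term in d]

        def undo(self, indices):
            for i in sorted(indices, reverse=True):   # undo newest first
                _, undo_fn = self.history.pop(i)
                undo_fn()

    state = {"x": 0}
    book = Logbook()
    state["x"] = 1
    book.log("set x=1", lambda: state.__setitem__("x", 0))
    print(book.search("x"), state)   # [0] {'x': 1}
    book.undo(book.search("x"))
    print(state)                     # {'x': 0}
    ```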

  13. DCE. Future IHEP's computing environment

    International Nuclear Information System (INIS)

    Zheng Guorui; Liu Xiaoling

    1995-01-01

IHEP's computing environment consists of several different computing environments established on IHEP computer networks, of which the BES environment supporting HEP computing is the main part. In connection with the improvement and extension of the BES environment, the authors outline the development of these computing environments from the viewpoint of establishing a high energy physics (HEP) computing environment. The direction of development of the IHEP computing environment towards distributed computing, based on current trends in distributed computing, is presented.

  14. Hybrid computing - Generalities and bibliography

    International Nuclear Information System (INIS)

    Neel, Daniele

    1970-01-01

This note presents the content of a research thesis. It describes the evolution of hybrid computing systems, discusses the benefits and shortcomings of analogue and hybrid systems, discusses the building up of a hybrid system (required properties), comments on different possible uses, addresses the issues of language and programming, and discusses analysis methods and scopes of application. An appendix proposes a bibliography on these issues, notably the different scopes of application (simulation, fluid dynamics, biology, chemistry, electronics, energy, errors, space, programming languages, hardware, mechanics, and optimisation of equations or processes, physics) [fr]

  15. Hybrid soft computing approaches research and applications

    CERN Document Server

    Dutta, Paramartha; Chakraborty, Susanta

    2016-01-01

    The book provides a platform for dealing with the flaws and failings of the soft computing paradigm through different manifestations. The different chapters highlight the necessity of the hybrid soft computing methodology in general with emphasis on several application perspectives in particular. Typical examples include (a) Study of Economic Load Dispatch by Various Hybrid Optimization Techniques, (b) An Application of Color Magnetic Resonance Brain Image Segmentation by ParaOptiMUSIG activation Function, (c) Hybrid Rough-PSO Approach in Remote Sensing Imagery Analysis,  (d) A Study and Analysis of Hybrid Intelligent Techniques for Breast Cancer Detection using Breast Thermograms, and (e) Hybridization of 2D-3D Images for Human Face Recognition. The elaborate findings of the chapters enhance the exhibition of the hybrid soft computing paradigm in the field of intelligent computing.

  16. Checkpointing for a hybrid computing node

    Science.gov (United States)

    Cher, Chen-Yong

    2016-03-08

    According to an aspect, a method for checkpointing in a hybrid computing node includes executing a task in a processing accelerator of the hybrid computing node. A checkpoint is created in a local memory of the processing accelerator. The checkpoint includes state data to restart execution of the task in the processing accelerator upon a restart operation. Execution of the task is resumed in the processing accelerator after creating the checkpoint. The state data of the checkpoint are transferred from the processing accelerator to a main processor of the hybrid computing node while the processing accelerator is executing the task.
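
    A toy sketch of the flow in the abstract, with a Python thread standing in for the accelerator-to-host transfer that overlaps continued task execution (all names hypothetical):

    ```python
    import copy, threading, queue

    host_store = queue.Queue()              # stands in for main-processor memory
    workers = []

    def transfer(ckpt):
        host_store.put(ckpt)                # runs while the task keeps executing

    def run_task(steps=12):
        state = {"step": 0, "acc": 0.0}
        for _ in range(steps):
            state["step"] += 1              # task progress "on the accelerator"
            state["acc"] += state["step"]
            if state["step"] % 4 == 0:
                ckpt = copy.deepcopy(state)             # checkpoint in local memory
                t = threading.Thread(target=transfer, args=(ckpt,))
                t.start()                               # overlap transfer with work
                workers.append(t)
        return state

    final = run_task()
    for t in workers:
        t.join()
    print(final, "| checkpoints on host:", host_store.qsize())
    ```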

  17. Generalised Computability and Applications to Hybrid Systems

    DEFF Research Database (Denmark)

    Korovina, Margarita V.; Kudinov, Oleg V.

    2001-01-01

We investigate the concept of generalised computability of operators and functionals defined on the set of continuous functions, first introduced in [9]. By working in the reals, with equality and without equality, we study properties of generalised computable operators and functionals. We also propose an interesting application to the formalisation of hybrid systems. We obtain a class of hybrid systems whose trajectories are computable in the sense of computable analysis. This research was supported in part by the RFBR (grants N 99-01-00485, N 00-01-00810) and by the Siberian Branch of RAS (a grant for young researchers, 2000).

  18. Universal blind quantum computation for hybrid system

    Science.gov (United States)

    Huang, He-Liang; Bao, Wan-Su; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Zhang, Hai-Long; Wang, Xiang

    2017-08-01

As progress on building quantum computers continues to advance, first-generation practical quantum computers will become available to ordinary users in a cloud style, similar to IBM's Quantum Experience today. Clients will be able to remotely access the quantum servers using simple devices. In such a situation, it is of prime importance to protect the security of the client's information. Blind quantum computation protocols enable a client with limited quantum technology to delegate her quantum computation to a quantum server without leaking any privacy. To date, blind quantum computation has been considered only for individual quantum systems. However, a practical universal quantum computer is likely to be a hybrid system. Here, we take the first step in constructing a framework of blind quantum computation for hybrid systems, which provides a more feasible way towards scalable blind quantum computation.

  19. Heterotic computing: exploiting hybrid computational devices.

    Science.gov (United States)

    Kendon, Viv; Sebald, Angelika; Stepney, Susan

    2015-07-28

    Current computational theory deals almost exclusively with single models: classical, neural, analogue, quantum, etc. In practice, researchers use ad hoc combinations, realizing only recently that they can be fundamentally more powerful than the individual parts. A Theo Murphy meeting brought together theorists and practitioners of various types of computing, to engage in combining the individual strengths to produce powerful new heterotic devices. 'Heterotic computing' is defined as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This post-meeting collection of articles provides a wide-ranging survey of the state of the art in diverse computational paradigms, together with reflections on their future combination into powerful and practical applications. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  20. Hybrid computer modelling in plasma physics

    International Nuclear Information System (INIS)

    Hromadka, J; Ibehej, T; Hrach, R

    2016-01-01

Our contribution is devoted to the development of hybrid modelling techniques. We investigate sheath structures in the vicinity of solids immersed in low-temperature argon plasma at different pressures by means of particle and fluid computer models. We discuss the differences in the results obtained by these methods and propose a way to improve the results of fluid models in the low-pressure regime. There is a possibility to employ the Chapman-Enskog method to find appropriate closure relations for the fluid equations in cases where the particle distribution function is not Maxwellian. We follow this route to enhance the fluid model and then use it further in a hybrid plasma model. (paper)

  1. Printing in Ubiquitous Computing Environments

    NARCIS (Netherlands)

    Karapantelakis, Athanasios; Delvic, Alisa; Zarifi Eslami, Mohammed; Khamit, Saltanat

    Document printing has long been considered an indispensable part of the workspace. While this process is considered trivial and simple for environments where resources are ample (e.g. desktop computers connected to printers within a corporate network), it becomes complicated when applied in a mobile

  2. Adaptation and hybridization in computational intelligence

    CERN Document Server

Fister, Iztok Jr.

    2015-01-01

This carefully edited book takes a walk through recent advances in adaptation and hybridization in the Computational Intelligence (CI) domain. It consists of ten chapters that are divided into three parts. The first part illustrates background information and provides some theoretical foundation tackling the CI domain, the second part deals with the adaptation in CI algorithms, while the third part focuses on the hybridization in CI. This book can serve as an ideal reference for researchers and students of computer science, electrical and civil engineering, economy, and natural sciences that are confronted with solving the optimization, modeling and simulation problems. It covers the recent advances in CI that encompass nature-inspired algorithms, like Artificial Neural Networks, Evolutionary Algorithms and Swarm Intelligence-based algorithms.

  3. Hybrid cloud and cluster computing paradigms for life science applications.

    Science.gov (United States)

    Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey

    2010-12-21

Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing, especially for parallel data-intensive applications. However, they have limited applicability to some areas, such as data mining, because MapReduce performs poorly on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open-source iterative MapReduce system, Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases. These are linked to MPI applications for the final stages of the data analysis. Further, we have released the open-source Twister iterative MapReduce system and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments.
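
    The value of an iterative MapReduce runtime is easiest to see in a toy k-means job, where the same map/reduce pair runs every iteration and only the small centroid model changes between rounds; the sketch below is illustrative Python, not Twister code:

    ```python
    import random

    def map_phase(points, centroids):
        # map: emit (nearest-centroid-index, point)
        out = {}
        for p in points:
            k = min(range(len(centroids)), key=lambda i: (p - centroids[i]) ** 2)
            out.setdefault(k, []).append(p)
        return out

    def reduce_phase(grouped, centroids):
        # reduce: new centroid = mean of assigned points (keep old if empty)
        return [sum(v) / len(v) if (v := grouped.get(i)) else c
                for i, c in enumerate(centroids)]

    points = [random.gauss(mu, 1) for mu in (0, 10) for _ in range(100)]
    centroids = [random.choice(points) for _ in range(2)]
    for _ in range(20):                     # the iterative outer loop Twister keeps hot
        centroids = reduce_phase(map_phase(points, centroids), centroids)
    print([round(c, 2) for c in centroids])
    ```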

  4. Accelerating Climate Simulations Through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for the connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors and two IBM QS22 Cell blades, connected with InfiniBand), allowing compute-intensive functions to be seamlessly offloaded to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.

  5. Airborne Cloud Computing Environment (ACCE)

    Science.gov (United States)

    Hardman, Sean; Freeborn, Dana; Crichton, Dan; Law, Emily; Kay-Im, Liz

    2011-01-01

Airborne Cloud Computing Environment (ACCE) is JPL's internal investment to improve the return on airborne missions by improving the development performance of the data system and the return on the captured science data. The investment is to develop a common science data system capability for airborne instruments that encompasses the end-to-end lifecycle, covering planning, provisioning of data system capabilities, and support for scientific analysis, in order to improve the quality, cost effectiveness, and capabilities that enable new scientific discovery and research in earth observation.

  6. Evaluation of a Compact Hybrid Brain-Computer Interface System

    Directory of Open Access Journals (Sweden)

    Jaeyoung Shin

    2017-01-01

Full Text Available We realized a compact hybrid brain-computer interface (BCI) system by integrating a portable near-infrared spectroscopy (NIRS) device with an economical electroencephalography (EEG) system. The NIRS array was located on the subjects' forehead, covering the prefrontal area. The EEG electrodes were distributed over the frontal, motor/temporal, and parietal areas. The experimental paradigm involved a Stroop word-picture matching test in combination with mental arithmetic (MA) and baseline (BL) tasks, in which the subjects were asked to perform either MA or BL in response to congruent or incongruent conditions, respectively. We compared the classification accuracies of each of the modalities (NIRS or EEG) with that of the hybrid system. We showed that the hybrid system outperforms the unimodal EEG and NIRS systems by 6.2% and 2.5%, respectively. Since the proposed hybrid system is based on portable platforms, it is not confined to a laboratory environment and has the potential to be used in real-life situations, such as in neurorehabilitation.

  7. Autonomic Management of Application Workflows on Hybrid Computing Infrastructure

    Directory of Open Access Journals (Sweden)

    Hyunjoo Kim

    2011-01-01

Full Text Available In this paper, we present a programming and runtime framework that enables the autonomic management of complex application workflows on hybrid computing infrastructures. The framework is designed to address system and application heterogeneity and dynamics to ensure that application objectives and constraints are satisfied. The need for such autonomic system and application management is becoming critical as computing infrastructures become increasingly heterogeneous, integrating different classes of resources from high-end HPC systems to commodity clusters and clouds. For example, the framework presented in this paper can be used to provision the appropriate mix of resources based on application requirements and constraints. The framework also monitors the system/application state and adapts the application and/or resources to respond to changing requirements or environment. To demonstrate the operation of the framework and to evaluate its ability, we employ a workflow used to characterize an oil reservoir executing on a hybrid infrastructure composed of TeraGrid nodes and Amazon EC2 instances of various types. Specifically, we show how different application objectives such as acceleration, conservation and resilience can be effectively achieved while satisfying deadline and budget constraints, using an appropriate mix of dynamically provisioned resources. Our evaluations also demonstrate that public clouds can be used to complement and reinforce the scheduling and usage of traditional high performance computing infrastructure.

  8. HTTR plant dynamic simulation using a hybrid computer

    International Nuclear Information System (INIS)

    Shimazaki, Junya; Suzuki, Katsuo; Nabeshima, Kunihiko; Watanabe, Koichi; Shinohara, Yoshikuni; Nakagawa, Shigeaki.

    1990-01-01

A plant dynamic simulation of the High-Temperature Engineering Test Reactor (HTTR) has been made using a new type of hybrid computer. This report describes a dynamic simulation model of the HTTR, a hybrid simulation method for SIMSTAR, and some results obtained from the dynamics analysis of the HTTR simulation. It concludes that hybrid plant simulation is useful for on-line simulation on account of its capability for high-speed computation, compared with all-digital computer simulation. With sufficient accuracy, computation 40 times faster than real time was achieved simply by changing the analog time scale of the HTTR simulation. (author)

  9. Modelling of data uncertainties on hybrid computers

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Anke (ed.)

    2016-06-15

The codes d³f and r³t are well established for modelling density-driven flow and nuclide transport in the far field of repositories for hazardous material in deep geological formations. They are applicable in porous media as well as in fractured rock or mudstone, and for modelling salt and heat transport as well as a free groundwater surface. Development of the basic framework of d³f and r³t began more than 20 years ago. Since that time, significant advancements have taken place in the requirements for safety assessment as well as in computer hardware development. The period of safety assessment for a repository of high-level radioactive waste was extended to 1 million years, and the complexity of the models is steadily growing. Concurrently, the demands on accuracy increase. Additionally, model and parameter uncertainties become more and more important for an increased understanding of prediction reliability. All this leads to a growing demand for computational power that requires a considerable software speed-up. An effective way to achieve this is the use of modern, hybrid computer architectures, which basically requires the set-up of new data structures and a corresponding code revision but offers a potential speed-up by several orders of magnitude. The original codes d³f and r³t were applications of the software platform UG /BAS 94/, whose development began in the early nineteen-nineties. However, UG has recently been advanced to the C++ based, substantially revised version UG4 /VOG 13/. To benefit also in the future from state-of-the-art numerical algorithms and to use hybrid computer architectures, the codes d³f and r³t were transferred to this new code platform. Making use of the fact that coupling between different sets of equations is natively supported in UG4, d³f and r³t were combined into one conjoint code d³f++. A direct estimation of uncertainties for complex groundwater flow models with the

  10. Vanderbilt University: Campus Computing Environment.

    Science.gov (United States)

    CAUSE/EFFECT, 1988

    1988-01-01

    Despite the decentralized nature of computing at Vanderbilt, there is significant evidence of cooperation and use of each other's resources by the various computing entities. Planning for computing occurs in every school and department. Caravan, a campus-wide network, is described. (MLW)

  11. Computing all hybridization networks for multiple binary phylogenetic input trees.

    Science.gov (United States)

    Albrecht, Benjamin

    2015-07-30

    The computation of phylogenetic trees on the same set of species that are based on different orthologous genes can lead to incongruent trees. One possible explanation for this behavior are interspecific hybridization events recombining genes of different species. An important approach to analyze such events is the computation of hybridization networks. This work presents the first algorithm computing the hybridization number as well as a set of representative hybridization networks for multiple binary phylogenetic input trees on the same set of taxa. To improve its practical runtime, we show how this algorithm can be parallelized. Moreover, we demonstrate the efficiency of the software Hybroscale, containing an implementation of our algorithm, by comparing it to PIRNv2.0, which is so far the best available software computing the exact hybridization number for multiple binary phylogenetic trees on the same set of taxa. The algorithm is part of the software Hybroscale, which was developed specifically for the investigation of hybridization networks including their computation and visualization. Hybroscale is freely available(1) and runs on all three major operating systems. Our simulation study indicates that our approach is on average 100 times faster than PIRNv2.0. Moreover, we show how Hybroscale improves the interpretation of the reported hybridization networks by adding certain features to its graphical representation.

  12. Evaluation of Hybrid and Distance Education Learning Environments in Spain

    Science.gov (United States)

    Ferrer-Cascales, Rosario; Walker, Scott L.; Reig-Ferrer, Abilio; Fernandez-Pascual, Maria Dolores; Albaladejo-Blazquez, Natalia

    2011-01-01

    This article describes the adaptation and validation of the "Distance Education Learning Environments Survey" (DELES) for use in investigating the qualities found in distance and hybrid education psycho-social learning environments in Spain. As Europe moves toward post-secondary student mobility, equanimity in access to higher education,…

  13. Advances in hybrid optics physical sensors for extreme environments

    Science.gov (United States)

    Riza, Nabeel A.

    2010-04-01

Highlighted are novel innovations in hybrid optical design physical sensors for extreme environments. Various hybrid design compositions are proposed, each suited to a particular sensor application. Examples include combining free-space (wireless) and fiber-optic (wired) links for gas turbine sensing, and combining single-crystal and sintered Silicon Carbide (SiC) materials for a robust extreme-environment, Coefficient of Thermal Expansion (CTE)-matched frontend probe design. Sensor signal processing also follows the hybrid theme, where, for example, black-body radiation thermometry (pyrometry) is combined with laser interferometry to provide extreme temperature measurements. The hybrid theme also operates at the optical device level, where a digital optical device such as a Digital Micromirror Device (DMD) is combined with an analog optical device such as an Electronically Controlled Variable Focal Length Lens (ECVFL) to deliver a smart and compressive Three-Dimensional (3-D) imaging sensor for remote scene and object shape capture, including both an ambient light (passive) mode and active laser targeting and receive processing. Within a device, the hybrid theme also operates via combined analog and digital control, such as within a wavelength-coded variable optical delay line. These powerful hybrid design optical sensors have numerous applications in engineering and science, from the military to the commercial/industrial sectors.

  14. Embedding Moodle into Ubiquitous Computing Environments

    NARCIS (Netherlands)

    Glahn, Christian; Specht, Marcus

    2010-01-01

    Glahn, C., & Specht, M. (2010). Embedding Moodle into Ubiquitous Computing Environments. In M. Montebello, et al. (Eds.), 9th World Conference on Mobile and Contextual Learning (MLearn2010) (pp. 100-107). October, 19-22, 2010, Valletta, Malta.

  15. Security Management Model in Cloud Computing Environment

    OpenAIRE

    Ahmadpanah, Seyed Hossein

    2016-01-01

In the cloud computing environment, the number of cloud virtual machines (VMs) grows ever larger, and virtual machine security and management face giant challenges. In order to address the security issues of the cloud computing virtualization environment, this paper presents an efficient and dynamic VM security management model based on deployment, state migration and scheduling, and studies the virtual machine security architecture, based on AHP (Analytic Hierarchy Process) virtual machine de...

  16. Intelligent computing for sustainable energy and environment

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kang [Queen' s Univ. Belfast (United Kingdom). School of Electronics, Electrical Engineering and Computer Science; Li, Shaoyuan; Li, Dewei [Shanghai Jiao Tong Univ., Shanghai (China). Dept. of Automation; Niu, Qun (eds.) [Shanghai Univ. (China). School of Mechatronic Engineering and Automation

    2013-07-01

Fast-track conference proceedings. State-of-the-art research. Up-to-date results. This book constitutes the refereed proceedings of the Second International Conference on Intelligent Computing for Sustainable Energy and Environment, ICSEE 2012, held in Shanghai, China, in September 2012. The 60 full papers presented were carefully reviewed and selected from numerous submissions and present theories and methodologies as well as the emerging applications of intelligent computing in sustainable energy and environment.

  17. A Hybrid Scheduler for Many Task Computing in Big Data Systems

    Directory of Open Access Journals (Sweden)

    Vasiliu Laura

    2017-06-01

Full Text Available With the rapid evolution of the distributed computing world in the last few years, the amount of data created and processed has increased quickly, to the petabyte or even exabyte scale. Such huge data sets need data-intensive computing applications and impose performance requirements on the infrastructures that support them, such as high scalability, storage, and fault tolerance, but also efficient scheduling algorithms. This paper focuses on providing a hybrid scheduling algorithm for many-task computing that addresses big data environments with few penalties, taking into consideration the deadlines and satisfying a data-dependent task model. The hybrid solution consists of several heuristics and algorithms (min-min, min-max and earliest deadline first) combined to provide a scheduling algorithm that matches our problem. The experimental results are obtained by simulation and prove that the proposed hybrid algorithm behaves very well in terms of meeting deadlines.
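
    A compact sketch of how such heuristics can be combined, simplified relative to the paper: tasks are ordered by earliest deadline first, then each is placed by a min-min rule on the machine giving the earliest completion time. Task and machine parameters are assumed for illustration.

    ```python
    def hybrid_schedule(tasks, speeds):
        # tasks: (name, length, deadline); speeds: per-machine processing rate
        ready = sorted(tasks, key=lambda t: t[2])        # earliest deadline first
        finish = [0.0] * len(speeds)
        plan, missed = [], []
        for name, length, deadline in ready:
            # min-min placement: machine yielding the earliest completion time
            m = min(range(len(speeds)), key=lambda i: finish[i] + length / speeds[i])
            finish[m] += length / speeds[m]
            plan.append((name, m, round(finish[m], 2)))
            if finish[m] > deadline:
                missed.append(name)
        return plan, missed

    plan, missed = hybrid_schedule(
        [("t1", 4, 5), ("t2", 2, 3), ("t3", 6, 12), ("t4", 1, 2)], speeds=[1.0, 2.0])
    print(plan, "missed:", missed)
    ```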

  18. Hybrid E-Textbooks as Comprehensive Interactive Learning Environments

    Science.gov (United States)

    Ghaem Sigarchian, Hajar; Logghe, Sara; Verborgh, Ruben; de Neve, Wesley; Salliau, Frank; Mannens, Erik; Van de Walle, Rik; Schuurman, Dimitri

    2018-01-01

    An e-TextBook can serve as an interactive learning environment (ILE), facilitating more effective teaching and learning processes. In this paper, we propose the novel concept of an EPUB 3-based Hybrid e-TextBook, which allows for interaction between the digital and the physical world. In that regard, we first investigated the gap between the…

  19. Architecture for Collaborative Learning Activities in Hybrid Learning Environments

    OpenAIRE

    Ibáñez, María Blanca; Maroto, David; García Rueda, José Jesús; Leony, Derick; Delgado Kloos, Carlos

    2012-01-01

    3D virtual worlds are recognized as collaborative learning environments. However, the underlying technology is not sufficiently mature and the virtual worlds look cartoonish, unlinked to reality. Thus, it is important to enrich them with elements from the real world to enhance student engagement in learning activities. Our approach is to build learning environments where participants can either be in the real world or in its mirror world while sharing the same hybrid space in a collaborative ...

  20. Evaluation of hybrid and distance education learning environments in Spain

    OpenAIRE

    Ferrer-Cascales, Rosario; Walker, Scott L.; Reig-Ferrer, Abilio; Fernández-Pascual, M. Dolores; Albaladejo-Blázquez, Natalia

    2011-01-01

    This article describes the adaptation and validation of the Distance Education Learning Environments Survey (DELES) for use in investigating the qualities found in distance and hybrid education psycho-social learning environments in Spain. As Europe moves toward post-secondary student mobility, equanimity in access to higher education, and more standardised degree programs across the European Higher Education Area (EHEA) the need for a high quality method for continually assessing the excelle...

  1. Reducing the Digital Divide among Children Who Received Desktop or Hybrid Computers for the Home

    Directory of Open Access Journals (Sweden)

    Gila Cohen Zilka

    2016-06-01

Full Text Available Researchers and policy makers have been exploring ways to reduce the digital divide. Parameters commonly used to examine the digital divide worldwide, as well as in this study, are: (a) the digital divide in the accessibility and mobility of the ICT infrastructure and of the content infrastructure (e.g., sites used in school); and (b) the digital divide in literacy skills. In the present study we examined the degree of effectiveness of receiving a desktop or hybrid computer for the home in reducing the digital divide among children of low socio-economic status aged 8-12 from various localities across Israel. The sample consisted of 1,248 respondents assessed in two measurements. As part of the mixed-method study, 128 children were also interviewed. Findings indicate that after the children received desktop or hybrid computers, changes occurred in their frequency of access, mobility, and computer literacy. Differences were found between the groups: hybrid computers reduce disparities and promote work with the computer and surfing the Internet more than do desktop computers. Narrowing the digital divide for this age group has many implications for the acquisition of skills and study habits, and consequently, for the realization of individual potential. The children spoke about self improvement as a result of exposure to the digital environment, about a sense of empowerment and of improvement in their advantage in the social fabric. Many children expressed a desire to continue their education and expand their knowledge of computer applications, the use of software, of games, and more. Therefore, if there is no computer in the home and it is necessary to decide between a desktop and a hybrid computer, a hybrid computer is preferable.

  2. Knowledge-inducing Global Path Planning for Robots in Environment with Hybrid Terrain

    Directory of Open Access Journals (Sweden)

    Yi-nan Guo

    2010-09-01

Full Text Available In complex environments with hybrid terrain, different regions may have different terrain. Path planning for robots in such environments is an open NP-complete problem for which effective methods are lacking. This paper develops a novel global path planning method based on common sense and evolution knowledge, adopting the dual evolution structure of cultural algorithms. Common sense describes the terrain information and feasibility of the environment, and is used to evaluate and select paths. Evolution knowledge describes the angle relationship between a path and the obstacles, or the common segments of paths, and is used to judge and repair infeasible individuals. Taking two types of environments with different obstacles and terrain as examples, simulation results indicate that the algorithm can effectively solve the path planning problem in complex environments and decrease the computational complexity of judging and repairing infeasible individuals. It also improves the convergence speed and has better computational stability.

  3. Hybrid Monte Carlo methods in computational finance

    NARCIS (Netherlands)

    Leitao Rodriguez, A.

    2017-01-01

    Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the

  4. Novel hybrid adaptive controller for manipulation in complex perturbation environments.

    Directory of Open Access Journals (Sweden)

    Alex M C Smith

    Full Text Available In this paper we present a hybrid control scheme, combining the advantages of task-space and joint-space control. The controller is based on a human-like adaptive design, which minimises both control effort and tracking error. Our novel hybrid adaptive controller has been tested in extensive simulations, in a scenario where a Baxter robot manipulator is affected by external disturbances in the form of interaction with the environment and tool-like end-effector perturbations. The results demonstrated improved performance in the hybrid controller over both of its component parts. In addition, we introduce a novel method for online adaptation of learning parameters, using the fuzzy control formalism to utilise expert knowledge from the experimenter. This mechanism of meta-learning induces further improvement in performance and avoids the need for tuning through trial testing.

  5. Input/output routines for a hybrid computer

    International Nuclear Information System (INIS)

    Izume, Akitada; Yodo, Terutaka; Sakama, Iwao; Sakamoto, Akira; Miyake, Osamu

    1976-05-01

This report is concerned with data processing programs for a hybrid computer system. In particular, the pre-processing of magnetic tapes recorded by the FACOM 270/25 data logging system during dynamic experiments at the 50 MW steam generator test facility is described in detail. The magnetic tape is a most effective recording medium for data logging, but the recording formats of magnetic tapes differ between data logging systems. In our section, the final data analyses are performed on data held on the disk of the EAI-690 hybrid computer system, and to transfer all the required information on the magnetic tapes to the disk, magnetic tape editing and data transfer by the NEAC-3200 sub-computer system are necessary. This report is written as a manual and reference handbook for users on pre-data processing between different types of computers. (auth.)

  6. Computer code MLCOSP for multiple-correlation and spectrum analysis with a hybrid computer

    International Nuclear Information System (INIS)

    Oguma, Ritsuo; Fujii, Yoshio; Usui, Hozumi; Watanabe, Koichi

    1975-10-01

Usage of the computer code MLCOSP (Multiple Correlation and Spectrum), developed for a hybrid computer installed at JAERI, is described. Functions of the hybrid computer and its terminal devices are utilized ingeniously in the code to reduce the complexity of the data handling that occurs in the analysis of multivariable experimental data and to keep the analysis in perspective. Features of the code are as follows: experimental data can be fed to the digital computer through the analog part of the hybrid computer by connecting a data recorder; the computed results are displayed in figures, and hardcopies are taken when necessary; series messages from the code are shown on the terminal, so man-machine communication is possible; and, further, data can be entered through a keyboard, so case studies based on the results of the analysis are possible. (auth.)
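
    On modern hardware the core of such an analysis is a few lines of NumPy/SciPy; the following stand-in (not the original hybrid-computer code; the signals are synthetic) estimates the cross-correlation and cross-spectral density of two measured signals:

    ```python
    import numpy as np
    from scipy.signal import csd, correlate

    fs = 100.0                                   # sampling rate in Hz
    t = np.arange(0, 60, 1 / fs)
    x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)
    y = np.sin(2 * np.pi * 5 * t + 0.8) + 0.5 * np.random.randn(t.size)

    # Roughly normalized cross-correlation; peak index gives the relative lag.
    r = correlate(x - x.mean(), y - y.mean(), mode="full") / (x.std() * y.std() * t.size)
    lag = (np.argmax(r) - (t.size - 1)) / fs

    # Cross-spectral density via Welch's method.
    f, Pxy = csd(x, y, fs=fs, nperseg=1024)
    print(f"peak correlation at lag {lag:.3f} s, "
          f"dominant frequency {f[np.argmax(np.abs(Pxy))]:.2f} Hz")
    ```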

  7. Computational hybrid anthropometric paediatric phantom library for internal radiation dosimetry

    DEFF Research Database (Denmark)

    Xie, Tianwu; Kuster, Niels; Zaidi, Habib

    2017-01-01

    for children demonstrated that they follow the same trend when correlated with age. The constructed hybrid computational phantom library opens up the prospect of comprehensive radiation dosimetry calculations and risk assessment for the paediatric population of different age groups and diverse anthropometric...

  8. Hybrid computer optimization of systems with random parameters

    Science.gov (United States)

    White, R. C., Jr.

    1972-01-01

    A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.

  9. Computer network environment planning and analysis

    Science.gov (United States)

    Dalphin, John F.

    1989-01-01

The GSFC Computer Network Environment provides a broadband RF cable between campus buildings and ethernet spines in buildings for the interlinking of Local Area Networks (LANs). This system provides terminal and computer linkage among host and user systems, thereby providing E-mail services, file exchange capability, and certain distributed computing opportunities. The Environment is designed to be transparent and supports multiple protocols. Networking at Goddard has a short history and has been under the coordinated control of a Network Steering Committee for slightly more than two years; network growth has been rapid, with more than 1500 nodes currently addressed and greater expansion expected. A new RF cable system with a different topology is being installed during summer 1989; consideration of a fiber optics system for the future will begin soon. Summer study was directed toward Network Steering Committee operation and planning plus consideration of Center Network Environment analysis and modeling. Biweekly Steering Committee meetings were attended to learn the background of the network and the concerns of those managing it. Suggestions for historical data gathering have been made to support future planning and modeling. Data Systems Dynamic Simulator, a simulation package developed at NASA and maintained at GSFC, was studied as a possible modeling tool for the network environment. A modeling concept based on a hierarchical model was hypothesized for further development. Such a model would allow input of newly updated parameters and would provide an estimation of the behavior of the network.

  10. Hermes: Seamless delivery of containerized bioinformatics workflows in hybrid cloud (HTC environments

    Directory of Open Access Journals (Sweden)

    Athanassios M. Kintsakis

    2017-01-01

    Full Text Available Hermes introduces a new “describe once, run anywhere” paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.

  11. Hermes: Seamless delivery of containerized bioinformatics workflows in hybrid cloud (HTC) environments

    Science.gov (United States)

    Kintsakis, Athanassios M.; Psomopoulos, Fotis E.; Symeonidis, Andreas L.; Mitkas, Pericles A.

    Hermes introduces a new "describe once, run anywhere" paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.

  12. Hybrid parallel computing architecture for multiview phase shifting

    Science.gov (United States)

    Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun

    2014-11-01

The multiview phase-shifting method shows powerful capability in achieving high-resolution three-dimensional (3-D) shape measurement. Unfortunately, this ability results in very high computation costs, and 3-D computations have to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit cooperates with the graphics processing unit (GPU) to achieve hybrid parallel computing. The high-computation-cost procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented on the GPU, and a three-layer kernel function model is designed to simultaneously realize coarse-grained and fine-grained parallel computing. Experimental results verify that the developed system can perform 50 fps (frames per second) real-time 3-D measurement with 260 K 3-D points per frame. A speedup of up to 180 times is obtained for the performance of the proposed technique using an NVIDIA GT560Ti graphics card rather than sequential C code on a 3.4 GHz Intel Core i7 3770.
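
    The per-pixel phase computation that dominates the cost is, in the standard four-step algorithm, a single arctangent per pixel; a NumPy version is shown below on synthetic data (the paper's GPU kernels evaluate the same kind of arithmetic in parallel; this is an illustration, not the authors' code):

    ```python
    import numpy as np

    def four_step_phase(I1, I2, I3, I4):
        # Four-step phase shifting: I_k = A + B*cos(phi + k*pi/2), k = 0..3,
        # so I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi).
        return np.arctan2(I4 - I2, I1 - I3)      # wrapped phase in (-pi, pi]

    h, w = 480, 640
    phi_true = np.random.uniform(-np.pi, np.pi, (h, w))
    A, B = 100.0, 50.0
    frames = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
    phi = four_step_phase(*frames)
    print(np.allclose(phi, phi_true))            # True on noise-free data
    ```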

  13. Human-Computer Interaction in Smart Environments

    Science.gov (United States)

    Paravati, Gianluca; Gatteschi, Valentina

    2015-01-01

    Here, we provide an overview of the content of the Special Issue on “Human-computer interaction in smart environments”. The aim of this Special Issue is to highlight technologies and solutions encompassing the use of mass-market sensors in current and emerging applications for interacting with Smart Environments. Selected papers address this topic by analyzing different interaction modalities, including hand/body gestures, face recognition, gaze/eye tracking, biosignal analysis, speech and activity recognition, and related issues.

  14. VECTR: Virtual Environment Computational Training Resource

    Science.gov (United States)

    Little, William L.

    2018-01-01

    The Westridge Middle School Curriculum and Community Night is an annual event designed to introduce students and parents to potential employers in the Central Florida area. NASA participated in the event in 2017, and has been asked to come back for the 2018 event on January 25. We will be demonstrating our Microsoft Hololens Virtual Rovers project, and the Virtual Environment Computational Training Resource (VECTR) virtual reality tool.

  15. Scheduling multimedia services in cloud computing environment

    Science.gov (United States)

    Liu, Yunchang; Li, Chunlin; Luo, Youlong; Shao, Yanling; Zhang, Jing

    2018-02-01

    Currently, security is a critical factor for multimedia services running in the cloud computing environment. As an effective mechanism, trust can improve the security level and mitigate attacks within cloud computing environments. Unfortunately, existing scheduling strategies for multimedia services in the cloud computing environment do not integrate a trust mechanism when making scheduling decisions. In this paper, we propose a scheduling scheme for multimedia services across multiple clouds. First, a novel scheduling architecture is presented. Then, we build a trust model, including both subjective trust and objective trust, to evaluate the trust degree of multimedia service providers. By employing Bayesian theory, the subjective trust degree between multimedia service providers and users is obtained. According to the attributes of QoS, the objective trust degree of multimedia service providers is calculated. Finally, a scheduling algorithm integrating the trust of entities is proposed by considering the deadline, cost and trust requirements of multimedia services. The scheduling algorithm heuristically hunts for reasonable resource allocations that satisfy the trust requirement and meet the deadlines of the multimedia services. Detailed simulation experiments demonstrate the effectiveness and feasibility of the proposed trust scheduling scheme.
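
    A minimal sketch of how such a scheme might combine subjective trust (a Bayesian Beta-posterior estimate from past interactions) with objective, QoS-based trust, and then pick a provider under deadline, cost and trust constraints. The providers, weight and thresholds are invented for illustration; this is not the paper's model.

        def subjective_trust(successes, failures):
            # Beta-posterior mean under a uniform prior: a common Bayesian
            # estimate of success probability from interaction history.
            return (successes + 1) / (successes + failures + 2)

        def combined_trust(successes, failures, qos_score, w=0.6):
            # Weighted blend of subjective and objective trust (w is assumed).
            return w * subjective_trust(successes, failures) + (1 - w) * qos_score

        # Hypothetical providers: (name, successes, failures, qos, time, cost)
        providers = [
            ("cloud-a", 40, 2, 0.9, 120, 5.0),
            ("cloud-b", 10, 5, 0.8, 90, 3.5),
            ("cloud-c", 25, 1, 0.7, 200, 2.0),
        ]

        def schedule(deadline, max_cost, min_trust=0.7):
            # Keep providers meeting deadline, cost and trust requirements,
            # then choose the cheapest feasible one (a simple heuristic).
            feasible = [p for p in providers
                        if p[4] <= deadline and p[5] <= max_cost
                        and combined_trust(p[1], p[2], p[3]) >= min_trust]
            return min(feasible, key=lambda p: p[5]) if feasible else None

        print(schedule(deadline=150, max_cost=6.0))  # -> cloud-b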

  16. Computing, Environment and Life Sciences | Argonne National Laboratory

    Science.gov (United States)

    Navigation overview for the Computing, Environment and Life Sciences directorate at Argonne National Laboratory, covering its research divisions (Biosciences, Computational Science, Data Science and Learning, Environmental Science, and Mathematics and Computer Science) and facilities and institutes such as the Argonne Leadership Computing Facility.

  17. CERR: A computational environment for radiotherapy research

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Blanco, Angel I.; Clark, Vanessa H.

    2003-01-01

    A software environment is described, called the computational environment for radiotherapy research (CERR, pronounced 'sir'). CERR partially addresses four broad needs in treatment planning research: (a) it provides a convenient and powerful software environment to develop and prototype treatment planning concepts, (b) it serves as a software integration environment to combine treatment planning software written in multiple languages (MATLAB, FORTRAN, C/C++, JAVA, etc.), together with treatment plan information (computed tomography scans, outlined structures, dose distributions, digital films, etc.), (c) it provides the ability to extract treatment plans from disparate planning systems using the widely available AAPM/RTOG archiving mechanism, and (d) it provides a convenient and powerful tool for sharing and reproducing treatment planning research results. The functional components currently being distributed, including source code, include: (1) an import program which converts the widely available AAPM/RTOG treatment planning format into a MATLAB cell-array data object, facilitating manipulation; (2) viewers which display axial, coronal, and sagittal computed tomography images, structure contours, digital films, and isodose lines or dose colorwash; (3) a suite of contouring tools to edit and/or create anatomical structures; (4) dose-volume and dose-surface histogram calculation and display tools; and (5) various predefined commands. CERR allows the user to retrieve any AAPM/RTOG key word information about the treatment plan archive. The code is relatively self-describing, because it relies on MATLAB structure field name definitions based on the AAPM/RTOG standard. New structure field names can be added dynamically or permanently. New components of arbitrary data type can be stored and accessed without disturbing system operation. CERR has been applied to aid research in dose-volume-outcome modeling, Monte Carlo dose calculation, and treatment planning optimization.

  18. Accelerating Climate and Weather Simulations through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM® Cell B.E.™ and NVIDIA® GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization™ (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand® (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  19. Hybrid soft computing systems for electromyographic signals analysis: a review

    Science.gov (United States)

    2014-01-01

    The electromyographic (EMG) signal is a bio-signal collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for these purposes. Hybrid soft computing system (HSCS), the integration of these different techniques, aims to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural network, support vector machine, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combination of these techniques, and their other applications in EMG analysis. PMID:24490979

  20. Hybrid soft computing systems for electromyographic signals analysis: a review.

    Science.gov (United States)

    Xie, Hong-Bo; Guo, Tianruo; Bai, Siwei; Dokos, Socrates

    2014-02-03

    The electromyographic (EMG) signal is a bio-signal collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for these purposes. Hybrid soft computing system (HSCS), the integration of these different techniques, aims to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural network, support vector machine, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combination of these techniques, and their other applications in EMG analysis.
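
    To make one such pipeline concrete, here is a toy NumPy sketch of a classical EMG-analysis chain: time-domain feature extraction followed by a simple classifier. The data are synthetic, and the nearest-centroid classifier merely stands in for the neural, fuzzy or evolutionary components that the review surveys.

        import numpy as np

        def emg_features(window):
            # Classic time-domain EMG features for one analysis window.
            rms = np.sqrt(np.mean(window ** 2))          # root mean square
            mav = np.mean(np.abs(window))                # mean absolute value
            zc = np.sum(np.diff(np.sign(window)) != 0)   # zero crossings
            wl = np.sum(np.abs(np.diff(window)))         # waveform length
            return np.array([rms, mav, zc, wl])

        rng = np.random.default_rng(0)
        # Synthetic two-class "EMG" windows standing in for real recordings.
        windows = np.vstack([rng.normal(0, s, (50, 256)) for s in (0.5, 1.5)])
        labels = np.repeat([0, 1], 50)
        X = np.array([emg_features(w) for w in windows])

        # Nearest-centroid classification on the extracted features.
        centroids = np.array([X[labels == c].mean(axis=0) for c in (0, 1)])
        pred = np.argmin(np.linalg.norm(X[:, None] - centroids, axis=2), axis=1)
        print("training accuracy:", (pred == labels).mean())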

  1. Computer simulation of spacecraft/environment interaction

    International Nuclear Information System (INIS)

    Krupnikov, K.K.; Makletsov, A.A.; Mileev, V.N.; Novikov, L.S.; Sinolits, V.V.

    1999-01-01

    This report presents some examples of computer simulation of spacecraft interaction with the space environment. We analysed a set of data on electron and ion fluxes measured in 1991-1994 on the geostationary satellite GORIZONT-35. The influence of spacecraft eclipse and device eclipse by the solar-cell panel on spacecraft charging was investigated. A simple method was developed for the estimation of spacecraft potentials in LEO. Effects of various particle flux impacts and spacecraft orientation are discussed. A computer engineering model for the calculation of space radiation is presented. This model is used as a client/server model with a WWW interface, including spacecraft model description and results representation based on the virtual reality markup language.

  2. Computer simulation of spacecraft/environment interaction

    CERN Document Server

    Krupnikov, K K; Mileev, V N; Novikov, L S; Sinolits, V V

    1999-01-01

    This report presents some examples of computer simulation of spacecraft interaction with the space environment. We analysed a set of data on electron and ion fluxes measured in 1991-1994 on the geostationary satellite GORIZONT-35. The influence of spacecraft eclipse and device eclipse by the solar-cell panel on spacecraft charging was investigated. A simple method was developed for the estimation of spacecraft potentials in LEO. Effects of various particle flux impacts and spacecraft orientation are discussed. A computer engineering model for the calculation of space radiation is presented. This model is used as a client/server model with a WWW interface, including spacecraft model description and results representation based on the virtual reality markup language.

  3. System administration of ATLAS TDAQ computing environment

    Science.gov (United States)

    Adeel-Ur-Rehman, A.; Bujor, F.; Benes, J.; Caramarcu, C.; Dobson, M.; Dumitrescu, A.; Dumitru, I.; Leahu, M.; Valsan, L.; Oreshkin, A.; Popov, D.; Unel, G.; Zaytsev, A.

    2010-04-01

    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which deals with administration of the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating at the LHC collider at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, about 40 multi-screen user interface machines installed in the control rooms, and various hardware and service monitoring machines as well. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The software distribution requirements are matched by a two-level NFS-based solution. The hardware and network monitoring systems of ATLAS TDAQ are based on NAGIOS, with a MySQL cluster behind it for accounting and storing the collected monitoring data, IPMI tools, CERN LANDB and dedicated tools developed by the group, e.g. ConfdbUI. The user management schema deployed in the TDAQ environment is founded on an authentication and role management system based on LDAP. External access to the ATLAS online computing facilities is provided by means of gateways supplied with an accounting system as well. Current activities of the group include deployment of the centralized storage system, testing and validating hardware solutions for future use within the ATLAS TDAQ environment including new multi-core blade servers, developing GUI tools for user authentication and role management, testing and validating 64-bit OS, and upgrading the existing TDAQ hardware components, authentication servers and the gateways.

  4. CSP: A Multifaceted Hybrid Architecture for Space Computing

    Science.gov (United States)

    Rudolph, Dylan; Wilson, Christopher; Stewart, Jacob; Gauvin, Patrick; George, Alan; Lam, Herman; Crum, Gary Alex; Wirthlin, Mike; Wilson, Alex; Stoddard, Aaron

    2014-01-01

    Research on the CHREC Space Processor (CSP) takes a multifaceted hybrid approach to embedded space computing. Working closely with the NASA Goddard SpaceCube team, researchers at the National Science Foundation (NSF) Center for High-Performance Reconfigurable Computing (CHREC) at the University of Florida and Brigham Young University are developing hybrid space computers that feature an innovative combination of three technologies: commercial-off-the-shelf (COTS) devices, radiation-hardened (RadHard) devices, and fault-tolerant computing. Modern COTS processors provide the utmost in performance and energy-efficiency but are susceptible to ionizing radiation in space, whereas RadHard processors are virtually immune to this radiation but are more expensive, larger, less energy-efficient, and generations behind in speed and functionality. By featuring COTS devices to perform the critical data processing, supported by simpler RadHard devices that monitor and manage the COTS devices, and augmented with novel uses of fault-tolerant hardware, software, information, and networking within and between COTS devices, the resulting system can maximize performance and reliability while minimizing energy consumption and cost. NASA Goddard has adopted the CSP concept and technology with plans underway to feature flight-ready CSP boards on two upcoming space missions.

  5. Xcache in the ATLAS Distributed Computing Environment

    CERN Document Server

    Hanushevsky, Andrew; The ATLAS collaboration

    2018-01-01

    Built upon the Xrootd Proxy Cache (Xcache), we developed additional features to adapt it to the ATLAS distributed computing and data environment, especially its data management system RUCIO, to help improve the cache hit rate, as well as features that make Xcache easy to use, similar to the way the Squid cache is used by the HTTP protocol. We are optimizing Xcache for HPC environments, and adapting it to the HL-LHC Data Lakes design as a component for data delivery. We packaged the software in CVMFS and in Docker and Singularity containers in order to standardize the deployment and reduce the cost of resolving issues at remote sites. We are also integrating it into RUCIO as a volatile storage system, and into various ATLAS workflows such as user analysis.

  6. Energy efficient hybrid computing systems using spin devices

    Science.gov (United States)

    Sharad, Mrigank

    Emerging spin devices like magnetic tunnel junctions (MTJs), spin valves and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin devices, to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog characteristics of spin currents facilitate non-Boolean computation, like majority evaluation, that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, thereby resulting in small computation power. Moreover, since nanomagnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-Von-Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital as well as analog. Such low-power hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications, based on a device-circuit co-simulation framework, predict more than ~100x improvement in computation energy as compared to state-of-the-art CMOS design, for optimal spin-device parameters.

  7. Human-Computer Interaction in Smart Environments

    Directory of Open Access Journals (Sweden)

    Gianluca Paravati

    2015-08-01

    Full Text Available Here, we provide an overview of the content of the Special Issue on “Human-computer interaction in smart environments”. The aim of this Special Issue is to highlight technologies and solutions encompassing the use of mass-market sensors in current and emerging applications for interacting with Smart Environments. Selected papers address this topic by analyzing different interaction modalities, including hand/body gestures, face recognition, gaze/eye tracking, biosignal analysis, speech and activity recognition, and related issues.

  8. Hybrid epidemics--a case study on computer worm conficker.

    Directory of Open Access Journals (Sweden)

    Changwang Zhang

    Full Text Available Conficker is a computer worm that erupted on the Internet in 2008. It is unique in combining three different spreading strategies: local probing, neighbourhood probing, and global probing. We propose a mathematical model that combines three modes of spreading: local, neighbourhood, and global, to capture the worm's spreading behaviour. The parameters of the model are inferred directly from network data obtained during the first day of the Conficker epidemic. The model is then used to explore the tradeoff between spreading modes in determining the worm's effectiveness. Our results show that the Conficker epidemic is an example of a critically hybrid epidemic, in which the different modes of spreading in isolation do not lead to successful epidemics. Such hybrid spreading strategies may be used beneficially to provide the most effective strategies for promulgating information across a large population. When used maliciously, however, they can present a dangerous challenge to current internet security protocols.

  9. Hybrid epidemics--a case study on computer worm conficker.

    Science.gov (United States)

    Zhang, Changwang; Zhou, Shi; Chain, Benjamin M

    2015-01-01

    Conficker is a computer worm that erupted on the Internet in 2008. It is unique in combining three different spreading strategies: local probing, neighbourhood probing, and global probing. We propose a mathematical model that combines three modes of spreading: local, neighbourhood, and global, to capture the worm's spreading behaviour. The parameters of the model are inferred directly from network data obtained during the first day of the Conficker epidemic. The model is then used to explore the tradeoff between spreading modes in determining the worm's effectiveness. Our results show that the Conficker epidemic is an example of a critically hybrid epidemic, in which the different modes of spreading in isolation do not lead to successful epidemics. Such hybrid spreading strategies may be used beneficially to provide the most effective strategies for promulgating information across a large population. When used maliciously, however, they can present a dangerous challenge to current internet security protocols.

  10. Hybrid Epidemics—A Case Study on Computer Worm Conficker

    Science.gov (United States)

    Zhang, Changwang; Zhou, Shi; Chain, Benjamin M.

    2015-01-01

    Conficker is a computer worm that erupted on the Internet in 2008. It is unique in combining three different spreading strategies: local probing, neighbourhood probing, and global probing. We propose a mathematical model that combines three modes of spreading: local, neighbourhood, and global, to capture the worm’s spreading behaviour. The parameters of the model are inferred directly from network data obtained during the first day of the Conficker epidemic. The model is then used to explore the tradeoff between spreading modes in determining the worm’s effectiveness. Our results show that the Conficker epidemic is an example of a critically hybrid epidemic, in which the different modes of spreading in isolation do not lead to successful epidemics. Such hybrid spreading strategies may be used beneficially to provide the most effective strategies for promulgating information across a large population. When used maliciously, however, they can present a dangerous challenge to current internet security protocols. PMID:25978309
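
    A toy simulation of the three spreading modes these records describe. The address-space layout, equal per-mode probing rates and one probe per mode per step are simplifying assumptions, not parameters inferred from the Conficker data.

        import random

        random.seed(1)
        N_SUBNETS, HOSTS = 50, 100               # toy address space
        infected = {0}                           # patient zero

        def probe(src, mode):
            # Local: same subnet; neighbourhood: adjacent subnet;
            # global: uniformly random over the whole address space.
            sub = src // HOSTS
            if mode == "local":
                tgt_sub = sub
            elif mode == "neighbourhood":
                tgt_sub = (sub + random.choice([-1, 1])) % N_SUBNETS
            else:
                tgt_sub = random.randrange(N_SUBNETS)
            return tgt_sub * HOSTS + random.randrange(HOSTS)

        for step in range(30):
            for src in list(infected):
                for mode in ("local", "neighbourhood", "global"):
                    infected.add(probe(src, mode))
            print(step, len(infected))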

  11. Printing in heterogeneous computer environment at DESY

    International Nuclear Information System (INIS)

    Jakubowski, Z.

    1996-01-01

    The number of registered hosts at DESY reaches 3500, while the number of print queues approaches 150. The spectrum of the computing environment in use is very wide: from Macs and PCs, through SUN, DEC and SGI machines, to the IBM mainframe. In 1994 we used 18 tons of paper. We present a solution for providing print services in such an environment for more than 3500 registered users. The availability of the print service is a serious issue. Centralized printing has many advantages for software administration but creates a single point of failure. We solved this problem partially without using expensive software and hardware. The talk provides information about the DESY central print spooler concept. None of the systems available on the market provides a ready-to-use, reliable solution for all platforms used at DESY. We discuss concepts for the installation, administration and monitoring of a large number of printers. We found a solution for printing both on central computing facilities and in support of stand-alone workstations. (author)

  12. The Use of Computer Simulation to Compare Student performance in Traditional versus Distance Learning Environments

    Directory of Open Access Journals (Sweden)

    Retta Guy

    2015-06-01

    Full Text Available Simulations have been shown to be an effective tool in traditional learning environments; however, as distance learning grows in popularity, the need to examine simulation effectiveness in this environment has become paramount. A causal-comparative design was chosen for this study to determine whether students using a computer-based instructional simulation in hybrid and fully online environments learned better than traditional classroom learners. The study spans a period of 6 years, beginning fall 2008 through spring 2014. The population studied was 281 undergraduate business students self-enrolled in a 200-level microcomputer application course. The overall results support previous studies in that computer simulations are most effective when used as a supplement to face-to-face lectures and in hybrid environments.

  13. Computational hybrid anthropometric paediatric phantom library for internal radiation dosimetry

    Science.gov (United States)

    Xie, Tianwu; Kuster, Niels; Zaidi, Habib

    2017-04-01

    Hybrid computational phantoms combine voxel-based and simplified equation-based modelling approaches to provide unique advantages and more realism for the construction of anthropomorphic models. In this work, a methodology and C++ code are developed to generate hybrid computational phantoms covering statistical distributions of body morphometry in the paediatric population. The paediatric phantoms of the Virtual Population Series (IT’IS Foundation, Switzerland) were modified to match target anthropometric parameters, including body mass, body length, standing height and sitting height/stature ratio, determined from reference databases of the National Center for Health Statistics and the National Health and Nutrition Examination Survey. The phantoms were selected as representative anchor phantoms for newborns and 1-, 2-, 5-, 10- and 15-year-old children, and were subsequently remodelled to create 1100 female and male phantoms with 10th, 25th, 50th, 75th and 90th percentile body morphometries. Evaluation was performed qualitatively using 3D visualization and quantitatively by analysing internal organ masses. Overall, the newly generated phantoms appear very reasonable and representative of the main characteristics of the paediatric population at various ages and for different genders, body sizes and sitting stature ratios. The mass of internal organs increases with height and body mass. The comparison of organ masses of the heart, kidney, liver, lung and spleen with published autopsy and ICRP reference data for children demonstrated that they follow the same trend when correlated with age. The constructed hybrid computational phantom library opens up the prospect of comprehensive radiation dosimetry calculations and risk assessment for the paediatric population of different age groups and diverse anthropometric parameters.

  14. OPTHYLIC: An Optimised Tool for Hybrid Limits Computation

    Science.gov (United States)

    Busato, Emmanuel; Calvet, David; Theveneaux-Pelzer, Timothée

    2018-05-01

    A software tool, computing observed and expected upper limits on Poissonian process rates using a hybrid frequentist-Bayesian CLs method, is presented. This tool can be used for simple counting experiments where only signal, background and observed yields are provided or for multi-bin experiments where binned distributions of discriminating variables are provided. It allows the combination of several channels and takes into account statistical and systematic uncertainties, as well as correlations of systematic uncertainties between channels. It has been validated against other software tools and analytical calculations, for several realistic cases.
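
    A toy version of the hybrid frequentist-Bayesian CLs calculation for a single counting channel: p-values come from Poisson pseudo-experiments, and the background uncertainty is marginalized over a truncated Gaussian prior. The yields, step size and toy counts are illustrative; the real tool also handles multi-bin inputs, channel combination and correlated systematics.

        import numpy as np

        rng = np.random.default_rng(42)

        def cls(n_obs, b, db, s, n_toys=20000):
            # CLs = CL_{s+b} / CL_b, with the background nuisance
            # integrated out via the (truncated) Gaussian prior.
            b_toys = np.clip(rng.normal(b, db, n_toys), 0, None)
            p_sb = np.mean(rng.poisson(b_toys + s) <= n_obs)   # CL_{s+b}
            p_b = np.mean(rng.poisson(b_toys) <= n_obs)        # CL_b
            return p_sb / p_b if p_b > 0 else 0.0

        def upper_limit(n_obs, b, db, cl=0.95):
            # Scan the signal rate until CLs falls below 1 - CL.
            s = 0.0
            while cls(n_obs, b, db, s) > 1 - cl:
                s += 0.1
            return s

        print("95% CL upper limit:", upper_limit(n_obs=3, b=2.0, db=0.5))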

  15. Reach and get capability in a computing environment

    Science.gov (United States)

    Bouchard, Ann M [Albuquerque, NM]; Osbourn, Gordon C [Albuquerque, NM]

    2012-06-05

    A reach and get technique includes invoking a reach command from a reach location within a computing environment. A user can then navigate to an object within the computing environment and invoke a get command on the object. In response to invoking the get command, the computing environment is automatically navigated back to the reach location and the object is copied into the reach location.
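
    A minimal sketch of the interaction pattern the abstract describes; the Workspace class and its locations are invented stand-ins for the computing environment.

        class Workspace:
            """Toy model of the reach-and-get interaction pattern."""

            def __init__(self):
                self.locations = {"inbox": [], "archive": ["report.pdf"]}
                self.current = "inbox"
                self.reach_from = None

            def reach(self):
                # Remember where the reach command was invoked.
                self.reach_from = self.current

            def navigate(self, location):
                self.current = location

            def get(self, obj):
                # Copy the object into the reach location and return there.
                self.locations[self.reach_from].append(obj)
                self.current = self.reach_from

        ws = Workspace()
        ws.reach()                 # invoked while in "inbox"
        ws.navigate("archive")     # user browses elsewhere
        ws.get("report.pdf")       # copy lands back in "inbox"
        print(ws.current, ws.locations["inbox"])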

  16. A Review of Hybrid Brain-Computer Interface Systems

    Directory of Open Access Journals (Sweden)

    Setare Amiri

    2013-01-01

    Full Text Available An increasing number of research activities and different types of studies on brain-computer interface (BCI) systems show the potential of this young research area. Research teams have studied features of different data acquisition techniques, brain activity patterns, feature extraction techniques, methods of classification, and many other aspects of a BCI system. However, conventional BCIs have not become fully practical, due to their lack of high accuracy and reliability, low information transfer rate, and limited user acceptability. A new approach to creating a more reliable BCI that takes advantage of each system is to combine two or more BCI systems with different brain activity patterns or different input signal sources. This type of BCI, called a hybrid BCI, may reduce the disadvantages of each conventional BCI system. In addition, hybrid BCIs may enable more applications and possibly increase the accuracy and the information transfer rate. However, the types of BCIs and their combinations should be considered carefully. In this paper, after introducing several types of BCIs and their combinations, we review and discuss hybrid BCIs, different possibilities for combining them, and their advantages and disadvantages.

  17. ComputerApplications and Virtual Environments (CAVE)

    Science.gov (United States)

    1993-01-01

    Virtual Reality (VR) can provide cost-effective methods to design and evaluate components and systems for maintenance and refurbishment operations. The Marshall Space Flight Center (MSFC) in Huntsville, Alabama began to utilize VR for design analysis in the X-34 experimental reusable space vehicle. Analysts at MSFC's Computer Applications and Virtual Environments (CAVE) used Head Mounted Displays (HMD) (pictured), spatial trackers and gesture inputs as a means to animate or inhabit a properly sized virtual human model. These models were used in a VR scenario as a way to determine functionality of space and maintenance requirements for the virtual X-34. The primary function of the virtual X-34 mockup was to support operations development and design analysis for engine removal, the engine compartment and the aft fuselage. This capability provided general visualization support to engineers and designers at MSFC and to the System Design Freeze Review at Orbital Sciences Corporation (OSC). The X-34 program was cancelled in 2001.

  18. A Hybrid Metaheuristic for Multi-Objective Scientific Workflow Scheduling in a Cloud Environment

    Directory of Open Access Journals (Sweden)

    Nazia Anwar

    2018-03-01

    Full Text Available Cloud computing has emerged as a high-performance computing environment with a large pool of abstracted, virtualized, flexible, and on-demand resources and services. Scheduling of scientific workflows in a distributed environment is a well-known NP-complete problem and therefore intractable with exact solutions. It becomes even more challenging in the cloud computing platform due to its dynamic and heterogeneous nature. The aim of this study is to optimize multi-objective scheduling of scientific workflows in a cloud computing environment based on the proposed metaheuristic-based algorithm, Hybrid Bio-inspired Metaheuristic for Multi-objective Optimization (HBMMO). The strong global exploration ability of the nature-inspired metaheuristic Symbiotic Organisms Search (SOS) is enhanced by involving an efficient list-scheduling heuristic, Predict Earliest Finish Time (PEFT), in the proposed algorithm to obtain better convergence and diversity of the approximate Pareto front in terms of reduced makespan, minimized cost, and efficient load balance of the Virtual Machines (VMs). The experiments using different scientific workflow applications highlight the effectiveness, practicality, and better performance of the proposed algorithm.

  19. History Matching in Parallel Computational Environments

    Energy Technology Data Exchange (ETDEWEB)

    Steven Bryant; Sanjay Srinivasan; Alvaro Barrera; Sharad Yadav

    2005-10-01

    A novel methodology for delineating multiple reservoir domains for the purpose of history matching in a distributed computing environment has been proposed. A fully probabilistic approach to perturb permeability within the delineated zones is implemented. The combination of robust schemes for identifying reservoir zones and distributed computing significantly increases the accuracy and efficiency of the probabilistic approach. The information pertaining to the permeability variations in the reservoir that is contained in dynamic data is calibrated in terms of a deformation parameter rD. This information is merged with the prior geologic information in order to generate permeability models consistent with the observed dynamic data as well as the prior geology. The relationship between dynamic response data and reservoir attributes may vary in different regions of the reservoir due to spatial variations in reservoir attributes, well configuration, flow constraints, etc. The probabilistic approach then has to account for multiple rD values in different regions of the reservoir. In order to delineate reservoir domains that can be characterized with different rD parameters, principal component analysis (PCA) of the Hessian matrix has been done. The Hessian matrix summarizes the sensitivity of the objective function at a given step of the history matching to model parameters. It also measures the interaction of the parameters in affecting the objective function. The basic premise of PC analysis is to isolate the most sensitive and least correlated regions. The eigenvectors obtained during the PCA are suitably scaled and appropriate grid block volume cut-offs are defined such that the resultant domains are neither too large (which increases interactions between domains) nor too small (implying ineffective history matching). The delineation of domains requires calculation of the Hessian, which could be computationally costly and restricts the current approach to

  20. Ubiquitous computing in shared-care environments.

    Science.gov (United States)

    Koch, S

    2006-07-01

    In light of future challenges, such as growing numbers of elderly people, an increase in chronic diseases, insufficient health-care budgets and problems with staff recruitment for the health-care sector, information and communication technology (ICT) becomes a possible means to meet these challenges. Organizational changes such as the decentralization of the health-care system lead to a shift from in-hospital to both advanced and basic home health care. Advanced medical technologies provide solutions for distant home care in the form of specialist consultations and home monitoring. Furthermore, the shift towards home health care will increase mobile work and the establishment of shared-care teams, which require ICT-based solutions that support ubiquitous information access and cooperative work. Clinical documentation and decision support systems are the main ICT-based solutions of interest in the context of ubiquitous computing for shared-care environments. This paper therefore describes the prerequisites for clinical documentation and decision support at the point of care, the impact of mobility on the documentation process, and how the introduction of ICT-based solutions will influence organizations and people. Furthermore, the role of dentistry in shared-care environments is discussed and illustrated in the form of a future scenario.

  1. Hybrid female mate choice as a species isolating mechanism: environment matters.

    Science.gov (United States)

    Schmidt, E M; Pfennig, K S

    2016-04-01

    A fundamental goal of biology is to understand how new species arise and are maintained. Female mate choice is potentially critical to the speciation process: mate choice can prevent hybridization and thereby generate reproductive isolation between potentially interbreeding groups. Yet, in systems where hybridization occurs, mate choice by hybrid females might also play a key role in reproductive isolation by affecting hybrid fitness and contributing to patterns of gene flow between species. We evaluated whether hybrid mate choice behaviour could serve as such an isolating mechanism using spadefoot toad hybrids of Spea multiplicata and Spea bombifrons. We assessed the mate preferences of female hybrid spadefoot toads for sterile hybrid males vs. pure-species males in two alternative habitat types in which spadefoots breed: deep or shallow water. We found that, in deep water, hybrid females preferred the calls of sterile hybrid males to those of S. multiplicata males. Thus, maladaptive hybrid mate preferences could serve as an isolating mechanism. However, in shallow water, the preference for hybrid male calls was not expressed. Moreover, hybrid females did not prefer hybrid calls to those of S. bombifrons in either environment. Because hybrid female mate choice was context-dependent, its efficacy as a reproductive isolating mechanism will depend on both the environment in which females choose their mates as well as the relative frequencies of males in a given population. Thus, reproductive isolation between species, as well as habitat specific patterns of gene flow between species, might depend critically on the nature of hybrid mate preferences and the way in which they vary across environments.

  2. Hybrid glowworm swarm optimization for task scheduling in the cloud environment

    Science.gov (United States)

    Zhou, Jing; Dong, Shoubin

    2018-06-01

    In recent years many heuristic algorithms have been proposed to solve task scheduling problems in the cloud environment owing to their optimization capability. This article proposes a hybrid glowworm swarm optimization (HGSO) algorithm based on glowworm swarm optimization (GSO), which uses a technique of evolutionary computation, a strategy of quantum behaviour based on the principle of neighbourhood, offspring production and random walk, to achieve more efficient scheduling with reasonable scheduling costs. The proposed HGSO reduces the redundant computation and the dependence on the initialization of GSO, accelerates the convergence and more easily escapes from local optima. The conducted experiments and statistical analysis showed that in most cases the proposed HGSO algorithm outperformed previous heuristic algorithms in dealing with independent tasks.
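
    For orientation, a bare-bones glowworm swarm optimization loop on a toy objective: luciferin decays and is recharged by fitness, each agent moves toward a randomly chosen brighter neighbour, and isolated agents take a random walk. It omits HGSO's quantum-behaviour and offspring-production extensions, and the uniform neighbour choice simplifies canonical GSO's luciferin-weighted selection.

        import numpy as np

        rng = np.random.default_rng(0)

        def cost(x):                       # toy objective to minimize
            return np.sum(x ** 2, axis=-1)

        n, dim, steps = 30, 2, 100
        rho, gamma, step_size, radius = 0.4, 0.6, 0.03, 1.0
        pos = rng.uniform(-3, 3, (n, dim))
        luciferin = np.zeros(n)

        for _ in range(steps):
            # Luciferin update: decay plus fitness-proportional deposit
            # (cost is negated because GSO maximizes brightness).
            luciferin = (1 - rho) * luciferin + gamma * (-cost(pos))
            new_pos = pos.copy()
            for i in range(n):
                d = np.linalg.norm(pos - pos[i], axis=1)
                nbrs = np.where((d > 0) & (d < radius)
                                & (luciferin > luciferin[i]))[0]
                if len(nbrs):
                    j = rng.choice(nbrs)   # move toward a brighter neighbour
                    new_pos[i] += step_size * (pos[j] - pos[i]) / d[j]
                else:
                    new_pos[i] += rng.normal(0, step_size, dim)  # random walk
            pos = new_pos

        print("best cost found:", cost(pos).min())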

  3. CSNS computing environment Based on OpenStack

    Science.gov (United States)

    Li, Yakang; Qi, Fazhi; Chen, Gang; Wang, Yanming; Hong, Jianshu

    2017-10-01

    Cloud computing allows for more flexible configuration of IT resources and optimized hardware utilization; it can also provide computing services according to real needs. We are applying this computing mode to the China Spallation Neutron Source (CSNS) computing environment. Firstly, the CSNS experiment and its computing scenarios and requirements are introduced in this paper. Secondly, the design and practice of a cloud computing platform based on OpenStack are demonstrated, covering the cloud computing system framework, network, storage and so on. Thirdly, some improvements we made to OpenStack are discussed further. Finally, the current status of the CSNS cloud computing environment is summarized at the end of this paper.

  4. Specialized Computer Systems for Environment Visualization

    Science.gov (United States)

    Al-Oraiqat, Anas M.; Bashkov, Evgeniy A.; Zori, Sergii A.

    2018-06-01

    The need for real-time image generation of landscapes arises in various fields as part of tasks solved by virtual and augmented reality systems, as well as geographic information systems. Such systems provide opportunities for collecting, storing, analyzing and graphically visualizing geographic data. Algorithmic and hardware/software tools for increasing the realism and efficiency of environment visualization in 3D visualization systems are proposed. This paper discusses a modified path tracing algorithm with a two-level hierarchy of bounding volumes that finds intersections with Axis-Aligned Bounding Boxes. The proposed algorithm eliminates branching and hence is more suitable for implementation on multi-threaded CPUs and GPUs. A modified ROAM algorithm is used to solve the problem of high-quality visualization of reliefs and landscapes. The algorithm is implemented on parallel systems: clusters and Compute Unified Device Architecture (CUDA) networks. Results show that the implementation on MPI clusters is more efficient than on Graphics Processing Units/Graphics Processing Clusters and allows real-time synthesis. The organization and algorithms of a parallel GPU system for 3D pseudo-stereo image/video synthesis are proposed. After analyzing the feasibility of implementing each stage on a parallel GPU architecture, 3D pseudo-stereo synthesis is performed. An experimental prototype of a specialized hardware-software system for 3D pseudo-stereo imaging and video was developed on the CPU/GPU. The experimental results show that the proposed adaptation of 3D pseudo-stereo imaging to the architecture of GPU systems is efficient. It also accelerates the computational procedures of 3D pseudo-stereo synthesis for the anaglyph and anamorphic formats of the 3D stereo frame without performing optimization procedures. The acceleration is on average 11 and 54 times for the test GPUs.

  5. Maze learning by a hybrid brain-computer system.

    Science.gov (United States)

    Wu, Zhaohui; Zheng, Nenggan; Zhang, Shaowu; Zheng, Xiaoxiang; Gao, Liqiang; Su, Lijuan

    2016-09-13

    The combination of biological and artificial intelligence is particularly driven by two major strands of research: one involves the control of mechanical, usually prosthetic, devices by conscious biological subjects, whereas the other involves the control of animal behaviour by stimulating nervous systems electrically or optically. However, to our knowledge, no study has demonstrated that spatial learning in a computer-based system can affect the learning and decision making behaviour of the biological component, namely a rat, when these two types of intelligence are wired together to form a new intelligent entity. Here, we show how rule operations conducted by computing components contribute to a novel hybrid brain-computer system, i.e., ratbots, which exhibit superior learning abilities in a maze learning task, even when their vision and whisker sensation were blocked. We anticipate that our study will encourage other researchers to investigate combinations of various rule operations and other artificial intelligence algorithms with the learning and memory processes of organic brains to develop more powerful cyborg intelligence systems. Our results potentially have profound implications for a variety of applications in intelligent systems and neural rehabilitation.

  6. Maze learning by a hybrid brain-computer system

    Science.gov (United States)

    Wu, Zhaohui; Zheng, Nenggan; Zhang, Shaowu; Zheng, Xiaoxiang; Gao, Liqiang; Su, Lijuan

    2016-09-01

    The combination of biological and artificial intelligence is particularly driven by two major strands of research: one involves the control of mechanical, usually prosthetic, devices by conscious biological subjects, whereas the other involves the control of animal behaviour by stimulating nervous systems electrically or optically. However, to our knowledge, no study has demonstrated that spatial learning in a computer-based system can affect the learning and decision making behaviour of the biological component, namely a rat, when these two types of intelligence are wired together to form a new intelligent entity. Here, we show how rule operations conducted by computing components contribute to a novel hybrid brain-computer system, i.e., ratbots, which exhibit superior learning abilities in a maze learning task, even when their vision and whisker sensation were blocked. We anticipate that our study will encourage other researchers to investigate combinations of various rule operations and other artificial intelligence algorithms with the learning and memory processes of organic brains to develop more powerful cyborg intelligence systems. Our results potentially have profound implications for a variety of applications in intelligent systems and neural rehabilitation.

  7. Three-dimensional pseudo-random number generator for implementing in hybrid computer systems

    International Nuclear Information System (INIS)

    Ivanov, M.A.; Vasil'ev, N.P.; Voronin, A.V.; Kravtsov, M.Yu.; Maksutov, A.A.; Spiridonov, A.A.; Khudyakova, V.I.; Chugunkov, I.V.

    2012-01-01

    An algorithm for generating pseudo-random numbers, oriented toward implementation on hybrid computer systems, is considered. The proposed solution is characterized by a high degree of parallelism [ru]

  8. DEFACTO: A Design Environment for Adaptive Computing Technology

    National Research Council Canada - National Science Library

    Hall, Mary

    2003-01-01

    This report describes the activities of the DEFACTO project, a Design Environment for Adaptive Computing Technology funded under the DARPA Adaptive Computing Systems and Just-In-Time-Hardware programs...

  9. Workflow Scheduling Using Hybrid GA-PSO Algorithm in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Ahmad M. Manasrah

    2018-01-01

    Full Text Available The cloud computing environment provides several on-demand services and resource sharing for clients. Business processes are managed using the workflow technology over the cloud, which represents one of the challenges in using the resources in an efficient manner due to the dependencies between the tasks. In this paper, a Hybrid GA-PSO algorithm is proposed to allocate tasks to the resources efficiently. The Hybrid GA-PSO algorithm aims to reduce the makespan and the cost and balance the load of the dependent tasks over the heterogeneous resources in cloud computing environments. The experimental results show that the GA-PSO algorithm decreases the total execution time of the workflow tasks, in comparison with the GA, PSO, HSGA, WSGA, and MTCT algorithms. Furthermore, it reduces the execution cost. In addition, it improves the load balancing of the workflow application over the available resources. Finally, the obtained results also show that the proposed algorithm converges to optimal solutions faster and with higher quality compared to other algorithms.
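
    A compact sketch of one way to hybridize GA operators with a PSO-style pull toward the global best for task-to-VM assignment. The runtime matrix is random, only makespan is optimized (the paper also treats cost and load balance), and all rates are invented.

        import random

        random.seed(7)
        TASKS, VMS = 20, 4
        runtime = [[random.uniform(1, 10) for _ in range(VMS)]
                   for _ in range(TASKS)]

        def makespan(assign):
            load = [0.0] * VMS
            for t, vm in enumerate(assign):
                load[vm] += runtime[t][vm]
            return max(load)

        def crossover(a, b):                 # GA: one-point crossover
            cut = random.randrange(TASKS)
            return a[:cut] + b[cut:]

        def mutate(a, rate=0.1):             # GA: random reassignment
            return [random.randrange(VMS) if random.random() < rate else v
                    for v in a]

        def pso_move(a, best, pull=0.3):     # PSO-like pull toward gbest
            return [g if random.random() < pull else v
                    for v, g in zip(a, best)]

        pop = [[random.randrange(VMS) for _ in range(TASKS)] for _ in range(30)]
        for _ in range(200):
            pop.sort(key=makespan)
            gbest, parents = pop[0], pop[:15]
            children = [mutate(crossover(*random.sample(parents, 2)))
                        for _ in range(15)]
            pop = parents + [pso_move(c, gbest) for c in children]

        print("best makespan:", round(makespan(min(pop, key=makespan)), 2))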

  10. The sociability of computer-supported collaborative learning environments

    NARCIS (Netherlands)

    Kreijns, C.J.; Kirschner, P.A.; Jochems, W.M.G.

    2002-01-01

    There is much positive research on computer-supported collaborative learning (CSCL) environments in asynchronous distributed learning groups (DLGs). There is also research that shows that contemporary CSCL environments do not completely fulfil expectations on supporting interactive group learning,

  11. Distributed Computations Environment Protection Using Artificial Immune Systems

    Directory of Open Access Journals (Sweden)

    A. V. Moiseev

    2011-12-01

    Full Text Available In this article the authors describe the possibility of applying artificial immune systems to protect distributed computing environments from certain types of malicious impacts.

  12. A hybrid computer simulation of reactor spatial dynamics

    International Nuclear Information System (INIS)

    Hinds, H.W.

    1977-08-01

    The partial differential equations describing the one-speed spatial dynamics of thermal neutron reactors were converted to a set of ordinary differential equations, using finite-difference approximations for the spatial derivatives. The variables were then normalized to a steady-state reference condition in a novel manner, to yield an equation set particularly suitable for implementation on a hybrid computer. One Applied Dynamics AD/FIVE analog-computer console is capable of solving, all in parallel, up to 30 simultaneous differential equations. This corresponds roughly to eight reactor nodes, each with two active delayed-neutron groups. To improve accuracy, an increase in the number of nodes is usually required. Using the Hsu-Howe multiplexing technique, an 8-node, one-dimensional module was switched back and forth between the left and right halves of the reactor, to simulate a 16-node model, also in one dimension. These two versions (8 or 16 nodes) of the model were tested on benchmark problems of the loss-of-coolant type, which were also solved using the digital code FORSIM, with two energy groups and 26 nodes. Good agreement was obtained between the two solution techniques. (author)
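
    The PDE-to-ODE conversion can be illustrated with a small method-of-lines sketch for a 1-D one-speed diffusion model: finite differences turn the spatial derivative into coupled ODEs, which are then time-stepped (here with RK4, in place of the analog integration). The grid size, diffusion coefficient and multiplication factor are illustrative, not the report's values.

        import numpy as np

        # dphi/dt = D * d2phi/dx2 + (k_eff - 1) * phi on N interior nodes,
        # with vacuum (phi = 0) boundaries.
        N, L, D, k_eff = 16, 1.0, 0.01, 0.98
        dx = L / (N + 1)
        phi = np.sin(np.pi * np.linspace(dx, L - dx, N))   # initial shape

        def rhs(phi):
            padded = np.concatenate(([0.0], phi, [0.0]))   # boundary values
            lap = (padded[2:] - 2 * padded[1:-1] + padded[:-2]) / dx**2
            return D * lap + (k_eff - 1) * phi

        dt, steps = 1e-3, 1000
        for _ in range(steps):                             # classic RK4 step
            k1 = rhs(phi)
            k2 = rhs(phi + 0.5 * dt * k1)
            k3 = rhs(phi + 0.5 * dt * k2)
            k4 = rhs(phi + dt * k3)
            phi += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        print("peak flux after transient:", phi.max())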

  13. Plasma environment of Titan: a 3-D hybrid simulation study

    Directory of Open Access Journals (Sweden)

    S. Simon

    2006-05-01

    Full Text Available Titan possesses a dense atmosphere, consisting mainly of molecular nitrogen. Titan's orbit is located within the Saturnian magnetosphere most of the time, where the corotating plasma flow is super-Alfvénic, yet subsonic and submagnetosonic. Since Titan does not possess a significant intrinsic magnetic field, the incident plasma interacts directly with the atmosphere and ionosphere. Due to the characteristic length scales of the interaction region being comparable to the ion gyroradii in the vicinity of Titan, magnetohydrodynamic models can only offer a rough description of Titan's interaction with the corotating magnetospheric plasma flow. For this reason, Titan's plasma environment has been studied by using a 3-D hybrid simulation code, treating the electrons as a massless, charge-neutralizing fluid, whereas a completely kinetic approach is used to cover ion dynamics. The calculations are performed on a curvilinear simulation grid which is adapted to the spherical geometry of the obstacle. In the model, Titan's dayside ionosphere is mainly generated by solar UV radiation; hence, the local ion production rate depends on the solar zenith angle. Because the Titan interaction features the possibility of having the densest ionosphere located on a face not aligned with the ram flow of the magnetospheric plasma, a variety of different scenarios can be studied. The simulations show the formation of a strong magnetic draping pattern and an extended pick-up region, being highly asymmetric with respect to the direction of the convective electric field. In general, the mechanism giving rise to these structures exhibits similarities to the interaction of the ionospheres of Mars and Venus with the supersonic solar wind. The simulation results are in agreement with data from recent Cassini flybys.

  14. Plasma environment of Titan: a 3-D hybrid simulation study

    Directory of Open Access Journals (Sweden)

    S. Simon

    2006-05-01

    Full Text Available Titan possesses a dense atmosphere, consisting mainly of molecular nitrogen. Titan's orbit is located within the Saturnian magnetosphere most of the time, where the corotating plasma flow is super-Alfvénic, yet subsonic and submagnetosonic. Since Titan does not possess a significant intrinsic magnetic field, the incident plasma interacts directly with the atmosphere and ionosphere. Due to the characteristic length scales of the interaction region being comparable to the ion gyroradii in the vicinity of Titan, magnetohydrodynamic models can only offer a rough description of Titan's interaction with the corotating magnetospheric plasma flow. For this reason, Titan's plasma environment has been studied by using a 3-D hybrid simulation code, treating the electrons as a massless, charge-neutralizing fluid, whereas a completely kinetic approach is used to cover ion dynamics. The calculations are performed on a curvilinear simulation grid which is adapted to the spherical geometry of the obstacle. In the model, Titan's dayside ionosphere is mainly generated by solar UV radiation; hence, the local ion production rate depends on the solar zenith angle. Because the Titan interaction features the possibility of having the densest ionosphere located on a face not aligned with the ram flow of the magnetospheric plasma, a variety of different scenarios can be studied. The simulations show the formation of a strong magnetic draping pattern and an extended pick-up region, being highly asymmetric with respect to the direction of the convective electric field. In general, the mechanism giving rise to these structures exhibits similarities to the interaction of the ionospheres of Mars and Venus with the supersonic solar wind. The simulation results are in agreement with data from recent Cassini flybys.

  15. InSAR Scientific Computing Environment

    Science.gov (United States)

    Rosen, Paul A.; Sacco, Gian Franco; Gurrola, Eric M.; Zebker, Howard A.

    2011-01-01

    This computing environment is the next generation of geodetic image processing technology for repeat-pass Interferometric Synthetic Aperture Radar (InSAR) sensors, identified by the community as a needed capability to provide flexibility and extensibility in reducing measurements from radar satellites and aircraft to new geophysical products. This software allows users of interferometric radar data the flexibility to process from Level 0 to Level 4 products using a variety of algorithms and for a range of available sensors. There are many radar satellites in orbit today delivering to the science community data of unprecedented quantity and quality, making possible large-scale studies in climate research, natural hazards, and the Earth's ecosystem. The proposed DESDynI mission, now under consideration by NASA for launch later in this decade, would provide time series and multi-image measurements that permit 4D models of Earth surface processes so that, for example, climate-induced changes over time would become apparent and quantifiable. This advanced data processing technology, applied to a global data set such as from the proposed DESDynI mission, enables a new class of analyses at time and spatial scales unavailable using current approaches. This software implements an accurate, extensible, and modular processing system designed to realize the full potential of InSAR data from future missions such as the proposed DESDynI, existing radar satellite data, as well as data from the NASA UAVSAR (Uninhabited Aerial Vehicle Synthetic Aperture Radar), and other airborne platforms. The processing approach has been re-thought in order to enable multi-scene analysis by adding new algorithms and data interfaces, to permit user-reconfigurable operation and extensibility, and to capitalize on codes already developed by NASA and the science community. The framework incorporates modern programming methods based on recent research, including object-oriented scripts controlling legacy and

  16. Does hybridization of endophytic symbionts in a native grass increase fitness in resource-limited environments?

    DEFF Research Database (Denmark)

    Faeth, Stanley H.; Oberhofer, Martina; Saari, Susanna Talvikki

    2017-01-01

    to their grass hosts, especially in stressful environments. We tested the hybrid fitness hypothesis (HFH) that hybrid endophytes enhance fitness in stressful environments relative to non-hybrid endophytes. In a long-term field experiment, we monitored growth and reproduction of hybrid-infected (H+), non-hybrid-infected (NH+), naturally endophyte-free (E-) plants and those plants from which the endophyte had been experimentally removed (H- and NH-) in resource-rich and resource-poor environments. Infection by both endophyte species enhanced growth and reproduction. H+ plants outperformed NH+ plants in terms of growth by the end of the experiment, supporting HFH. However, H+ plants only outperformed NH+ plants in the resource-rich treatment, contrary to HFH. Plant genotypes associated with each endophyte species had strong effects on growth and reproduction. Our results provide some support for the HFH hypothesis...

  17. Energy-aware hybrid fruitfly optimization for load balancing in cloud environments for EHR applications

    Directory of Open Access Journals (Sweden)

    M. Lawanyashri

    Full Text Available Cloud computing has gained considerable attention from the research community and IT management, due to its scalable and dynamic capabilities. It is evolving as a vibrant technology to modernize and restructure healthcare organizations to provide the best services to consumers. The rising demand for healthcare services and applications in cloud computing leads to imbalanced resource usage and drastically increases power consumption, resulting in high operating costs. To achieve fast execution time and optimum utilization of the virtual machines, we propose a multi-objective hybrid fruitfly optimization technique based on simulated annealing to improve the convergence rate and optimization accuracy. The proposed approach is used to achieve optimal resource utilization and reduces energy consumption and cost in the cloud computing environment. The results attained by the proposed technique provide an improved solution. The experimental results show that the proposed algorithm outperforms existing load-balancing algorithms. Keywords: Cloud computing, Electronic Health Records (EHR, Load balancing, Fruitfly Optimization Algorithm (FOA, Simulated Annealing (SA, Energy consumption

  18. High performance computing network for cloud environment using simulators

    OpenAIRE

    Singh, N. Ajith; Hemalatha, M.

    2012-01-01

    Cloud computing is the next generation of computing. Adopting cloud computing is like signing up for a new form of website. The GUI that controls the cloud deployment directly controls the hardware resources and your applications. The difficult part of cloud computing is deploying it in a real environment. It is difficult to know the exact cost and the resource requirements until the service is bought, or whether it will support the existing applications available on traditional...

  19. A Hybrid Approach for Supporting Adaptivity in E-Learning Environments

    Science.gov (United States)

    Al-Omari, Mohammad; Carter, Jenny; Chiclana, Francisco

    2016-01-01

    Purpose: The purpose of this paper is to identify a framework to support adaptivity in e-learning environments. The framework reflects a novel hybrid approach incorporating the concept of the event-condition-action (ECA) model and intelligent agents. Moreover, a system prototype is developed reflecting the hybrid approach to supporting adaptivity…
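
    A minimal sketch of the event-condition-action idea underlying the framework may help; the rule content, state fields, and class names below are invented for illustration and are not from the paper's prototype.

        from dataclasses import dataclass, field
        from typing import Callable

        # Toy ECA engine: on an event, rules whose condition holds fire their action.
        @dataclass
        class Rule:
            event: str                          # e.g. "quiz_submitted"
            condition: Callable[[dict], bool]   # predicate over learner state
            action: Callable[[dict], None]      # adaptation to perform

        @dataclass
        class ECAEngine:
            rules: list = field(default_factory=list)

            def fire(self, event: str, state: dict) -> None:
                for rule in self.rules:
                    if rule.event == event and rule.condition(state):
                        rule.action(state)

        engine = ECAEngine([
            Rule("quiz_submitted",
                 lambda s: s["score"] < 0.5,
                 lambda s: s.setdefault("recommend", []).append("remedial_module")),
        ])
        learner = {"score": 0.4}
        engine.fire("quiz_submitted", learner)
        print(learner["recommend"])   # ['remedial_module']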

  20. Computers and the Environment: Minimizing the Carbon Footprint

    Science.gov (United States)

    Kaestner, Rich

    2009-01-01

    Computers can be good and bad for the environment; one can maximize the good and minimize the bad. When dealing with environmental issues, it's difficult to ignore the computing infrastructure. With an operations carbon footprint equal to the airline industry's, computer energy use is only part of the problem; everyone is also dealing with the use…

  1. Computational analysis on plug-in hybrid electric motorcycle chassis

    Science.gov (United States)

    Teoh, S. J.; Bakar, R. A.; Gan, L. M.

    2013-12-01

    Plug-in hybrid electric motorcycles (PHEM) are an alternative to promote sustainability and lower emissions. However, the overall PHEM system packaging is constrained by the limited space in a motorcycle chassis. In this paper, a chassis applying the concept of a Chopper is analysed for application in a PHEM. The three-dimensional (3D) model of the chassis is built with CAD software. The PHEM power-train components and drive-train mechanisms are integrated into the 3D model to ensure the chassis provides sufficient space. Besides that, a human dummy model is built into the 3D model to ensure the rider's ergonomics and comfort. The chassis 3D model then undergoes stress-strain simulation. The simulation predicts the stress distribution, displacement and factor of safety (FOS). The data are used to identify the critical points, thus suggesting whether the chassis design is applicable or needs to be redesigned/modified to meet the required strength. Critical points are the locations of highest stress, which might cause the chassis to fail; for a motorcycle chassis they occur at the joints at the triple tree and the rear absorber bracket. In conclusion, computational analysis predicts the stress distribution and provides a guideline for developing a safe prototype chassis.

  2. Ubiquitous Computing in Physico-Spatial Environments

    DEFF Research Database (Denmark)

    Dalsgård, Peter; Eriksson, Eva

    2007-01-01

    Interaction design of pervasive and ubiquitous computing (UC) systems must take into account physico-spatial issues as technology is implemented into our physical surroundings. In this paper we discuss how one conceptual framework for understanding interaction in context, Activity Theory (AT...

  3. Carbon nanotube reinforced hybrid composites: Computational modeling of environmental fatigue and usability for wind blades

    DEFF Research Database (Denmark)

    Dai, Gaoming; Mishnaevsky, Leon

    2015-01-01

    The potential of advanced carbon/glass hybrid reinforced composites with secondary carbon nanotube reinforcement for wind energy applications is investigated here with the use of computational experiments. Fatigue behavior of hybrid as well as glass and carbon fiber reinforced composites ... Composites with the secondary CNT reinforcements (especially, aligned tubes) show superior fatigue performance compared to those without reinforcements, also under combined environmental and cyclic mechanical loading. This effect is stronger for carbon composites than for hybrid and glass composites...

  4. THE VALUE OF CLOUD COMPUTING IN THE BUSINESS ENVIRONMENT

    OpenAIRE

    Mircea GEORGESCU; Marian MATEI

    2013-01-01

    Without any doubt, cloud computing has become one of the most significant trends in any enterprise, not only for IT businesses. Besides the fact that the cloud can offer access to low-cost, considerably flexible computing resources, cloud computing also provides the capacity to create a new relationship between business entities and corporate IT departments. The value added to the business environment is given by the balanced use of resources, offered by cloud computing. The cloud mentality i...

  5. Research computing in a distributed cloud environment

    International Nuclear Information System (INIS)

    Fransham, K; Agarwal, A; Armstrong, P; Bishop, A; Charbonneau, A; Desmarais, R; Hill, N; Gable, I; Gaudet, S; Goliath, S; Impey, R; Leavett-Brown, C; Ouellete, J; Paterson, M; Pritchet, C; Penfold-Brown, D; Podaima, W; Schade, D; Sobie, R J

    2010-01-01

    The recent increase in availability of Infrastructure-as-a-Service (IaaS) computing clouds provides a new way for researchers to run complex scientific applications. However, using cloud resources for a large number of research jobs requires significant effort and expertise. Furthermore, running jobs on many different clouds presents even more difficulty. In order to make it easy for researchers to deploy scientific applications across many cloud resources, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. In response to a user's job submission to a batch system, the Cloud Scheduler manages the distribution and deployment of user-customized virtual machines across multiple clouds. We describe the motivation for and implementation of a distributed cloud using the Cloud Scheduler that is spread across both commercial and dedicated private sites, and present some early results of scientific data analysis using the system.
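
    The scheduling loop described above can be caricatured in a few lines. This is an illustrative sketch of the general idea only, not the actual Cloud Scheduler code; the class, site, and image names are invented.

        from dataclasses import dataclass

        # Toy scheduler: watch a batch queue and boot user-customized VMs on
        # whichever cloud (private or commercial) still has free capacity.
        @dataclass
        class Cloud:
            name: str
            capacity: int          # free VM slots
            def boot(self, image: str) -> str:
                self.capacity -= 1
                return f"{self.name}:{image}"

        def schedule(queue: list, clouds: list) -> list:
            placements = []
            for job in queue:      # each job names the VM image it needs
                target = next((c for c in clouds if c.capacity > 0), None)
                if target is None:
                    break          # no capacity anywhere; retry next cycle
                placements.append((job["id"], target.boot(job["image"])))
            return placements

        clouds = [Cloud("private-site", 2), Cloud("commercial-ec2", 8)]
        queue = [{"id": i, "image": "astro-analysis.img"} for i in range(4)]
        print(schedule(queue, clouds))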

  6. Security in cloud computing and virtual environments

    OpenAIRE

    Aarseth, Raymond

    2015-01-01

    Cloud computing is a big buzzword today. Just watch the commercials on TV and I can promise that you will hear the words cloud service at least once. With the growth of cloud technology steadily rising, and everything from cellphones to cars connected to the cloud, how secure is cloud technology? What are the caveats of using cloud technology? And how does it all work? This thesis will discuss cloud security and the underlying technology called Virtualization to ...

  7. Distributed computing environment for Mine Warfare Command

    OpenAIRE

    Pritchard, Lane L.

    1993-01-01

    Approved for public release; distribution is unlimited. The Mine Warfare Command in Charleston, South Carolina has been converting its information systems architecture from a centralized mainframe-based system to a decentralized network of personal computers over the past several years. This thesis analyzes the progress of the evolution as of May 1992. The building blocks of a distributed architecture are discussed in relation to the choices the Mine Warfare Command has made to date. Ar...

  8. Deterministic linear-optics quantum computing based on a hybrid approach

    International Nuclear Information System (INIS)

    Lee, Seung-Woo; Jeong, Hyunseok

    2014-01-01

    We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources

  9. Deterministic linear-optics quantum computing based on a hybrid approach

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Woo; Jeong, Hyunseok [Center for Macroscopic Quantum Control, Department of Physics and Astronomy, Seoul National University, Seoul, 151-742 (Korea, Republic of)

    2014-12-04

    We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources.

  10. Heart dosimetry in radiotherapy with hybrid computational phantoms

    International Nuclear Information System (INIS)

    Moignier, Cyril

    2014-01-01

    Cardiovascular diseases following radiotherapy are major secondary late effects raising questions among the scientific community, especially regarding the dose-effect relationship and confounding risk factors (chemotherapy, cholesterolemia, age at treatment, blood pressure,..). Post-radiation coronary diseases are one of the main causes of cardiac morbidity. Some approximations are made when coronary doses due to radiotherapy are estimated, especially regarding the morphology. For retrospective studies with old medical records, only radiographs are usually available with sometimes some contours made with a simulator. For recent medical records, CT scans displaying the anatomy in 3D are used for radiotherapy simulation but do not allow the coronary artery visualization due to low resolution and contrast. Currently, coronary doses are barely assessed in clinical practice, and when it is done, anatomical prior knowledge is generally used. This thesis proposes an original approach based on hybrid computational phantoms to study coronary artery doses following radiotherapy for left-side breast cancer and Hodgkin lymphoma. During the thesis, a method inserting hybrid computational phantoms in a DICOM format into the treatment planning system has been developed and validated. It has been adapted and tested in conditions where only radiographs provide anatomical information, as with old medical records for left side breast radiotherapy. The method has also been adapted to perform precise dose reconstructions to the coronary artery for patients treated for a mediastinal Hodgkin lymphoma and diagnosed with coronary stenosis through a coroscanner. A case-control study was carried out and the risk of coronary stenosis on a coronary artery segment was assessed to be multiplied by 1.049 at each additional gray on the median dose to the coronary artery segment. For recent medical records, coronary doses uncertainties related to an approach by anatomical prior knowledge

  11. Cardiovascular dosimetry using hybrid computational phantoms after external radiotherapy

    International Nuclear Information System (INIS)

    Moignier, Alexandra

    2014-01-01

    Cardiovascular diseases following radiotherapy are major secondary late effects raising questions among the scientific community, especially regarding the dose-effect relationship and confounding risk factors (chemotherapy, cholesterolemia, age at treatment, blood pressure,..). Post-radiation coronary diseases are one of the main causes of cardiac morbidity. Some approximations are made when coronary doses due to radiotherapy are estimated, especially regarding the morphology. For retrospective studies with old medical records, only radiographs are usually available with sometimes some contours made with a simulator. For recent medical records, CT scans displaying the anatomy in 3D are used for radiotherapy simulation but do not allow the coronary artery visualization due to low resolution and contrast. Currently, coronary doses are barely assessed in clinical practice, and when it is done, anatomical prior knowledge is generally used. This thesis proposes an original approach based on hybrid computational phantoms to study coronary artery doses following radiotherapy for left-side breast cancer and Hodgkin lymphoma. During the thesis, a method inserting hybrid computational phantoms in a DICOM format into the treatment planning system has been developed and validated. It has been adapted and tested in conditions where only radiographs provide anatomical information, as with old medical records for left side breast radiotherapy. The method has also been adapted to perform precise dose reconstructions to the coronary artery for patients treated for a mediastinal Hodgkin lymphoma and diagnosed with coronary stenosis through a coro-scanner. A case-control study was carried out and the risk of coronary stenosis on a coronary artery segment was assessed to be multiplied by 1.049 at each additional gray on the median dose to the coronary artery segment. For recent medical records, coronary doses uncertainties related to an approach by anatomical prior knowledge

  12. Hybrid Computational Model for High-Altitude Aeroassist Vehicles, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — A hybrid continuum/noncontinuum computational model will be developed for analyzing the aerodynamics and heating on aeroassist vehicles. Unique features of this...

  13. Center for Advanced Energy Studies: Computer Assisted Virtual Environment (CAVE)

    Data.gov (United States)

    Federal Laboratory Consortium — The laboratory contains a four-walled 3D computer assisted virtual environment - or CAVE TM — that allows scientists and engineers to literally walk into their data...

  14. Computational Tool for Aerothermal Environment Around Transatmospheric Vehicles, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of this Project is to develop a high-fidelity computational tool for accurate prediction of aerothermal environment on transatmospheric vehicles. This...

  15. Influence of environment and mitochondrial heritage on the ecological characteristics of fish in a hybrid zone.

    Directory of Open Access Journals (Sweden)

    Nicolas Stolzenberg

    2009-06-01

    Ecological characteristics (growth, morphology, reproduction) arise from the interaction between environmental factors and genetics. Genetic analysis of individuals' life history traits might be used to improve our understanding of mechanisms that form and maintain a hybrid zone. A fish hybrid zone was used to characterize the process of natural selection. Data were collected during two reproductive periods (2001 and 2002) and 1117 individuals (nase, Chondrostoma nasus nasus, sofie C. toxostoma toxostoma and hybrids) were sampled. Reproductive dates of the two parental species overlapped at sympatric sites. The nase had an earlier reproductive period than the sofie; males had longer reproductive periods for both species. Hybridisation between female nase and male sofie was the most likely. Hybrids had a reproductive period similar to the inherited parental mitochondrial type. Growth and reproductive information from different environments has been synthesised following a Bayesian approach to the von Bertalanffy model. Hybrid life history traits appear to be linked with maternal heritage. Hybrid size from the age of two and size at first maturity appeared to be closer to the size of the maternal origin species (nase or sofie). Median growth rates for hybrids were similar and intermediate between those of the parental species. We observed variable life history traits for hybrids and pure forms in the different parts of the hybrid zone. Geometrical analysis of the hybrid fish shape gave evidence of two main morphologies with a link to maternal heritage. Selective mating seemed to be the underlying process which, with mitochondrial heritage, could explain the evolution of the studied hybrid zone. More generally, we showed the importance of studies on hybrid zones and specifically the study of individuals' ecological characteristics, to improve our understanding of speciation.

  16. Influence of environment and mitochondrial heritage on the ecological characteristics of fish in a hybrid zone.

    Science.gov (United States)

    Stolzenberg, Nicolas; Nguyen The, Bénédicte; Salducci, Marie Dominique; Cavalli, Laurent

    2009-06-18

    Ecological characteristics (growth, morphology, reproduction) arise from the interaction between environmental factors and genetics. Genetic analysis of individuals' life history traits might be used to improve our understanding of mechanisms that form and maintain a hybrid zone. A fish hybrid zone was used to characterize the process of natural selection. Data were collected during two reproductive periods (2001 and 2002) and 1117 individuals (nase, Chondrostoma nasus nasus, sofie C. toxostoma toxostoma and hybrids) were sampled. Reproductive dates of the two parental species overlapped at sympatric sites. The nase had an earlier reproductive period than the sofie; males had longer reproductive periods for both species. Hybridisation between female nase and male sofie was the most likely. Hybrids had a reproductive period similar to the inherited parental mitochondrial type. Growth and reproductive information from different environments has been synthesised following a Bayesian approach to the von Bertalanffy model. Hybrid life history traits appear to link with maternal heritage. Hybrid size from the age of two and size at first maturity appeared to be closer to the size of the maternal origin species (nase or sofie). Median growth rates for hybrids were similar and intermediate between those of the parental species. We observed variable life history traits for hybrids and pure forms in the different parts of the hybrid zone. Geometrical analysis of the hybrid fish shape gave evidence of two main morphologies with a link to maternal heritage. Selective mating seemed to be the underlying process which, with mitochondrial heritage, could explain the evolution of the studied hybrid zone. More generally, we showed the importance of studies on hybrid zones and specifically the study of individuals' ecological characteristics, to improve our understanding of speciation.
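
    Both records above rest their growth synthesis on the von Bertalanffy model, whose deterministic core is compact enough to state directly: L(t) = Linf(1 - exp(-k(t - t0))). The sketch below shows only that curve; the Bayesian fitting layer is omitted and the parameter values are invented.

        import numpy as np

        # Deterministic core of the von Bertalanffy growth model used above.
        # Parameter values are invented; the papers fit them by Bayesian means.
        def von_bertalanffy(t, linf, k, t0):
            return linf * (1.0 - np.exp(-k * (t - t0)))

        ages = np.arange(1, 9)   # years
        print(von_bertalanffy(ages, linf=45.0, k=0.35, t0=-0.5))  # lengths, cm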

  17. Effect of genotype x environment interactions of grapevine hybrids characteristics

    Directory of Open Access Journals (Sweden)

    Nikolić Dragan

    2017-01-01

    Research in this paper was performed at two different locations in Serbia: Radmilovac and Vršac. Four new interspecific hybrids (9846, 9896, 19574 and 20506) intended for table consumption were used as material. Grape yield per unit area, the properties of the bunch (bunch weight, bunch length, bunch width and number of berries per bunch), the properties of the berry (berry weight, berry length and berry width), as well as the characteristics of grape quality (sugar content and total acids in the must) were studied in the selected hybrids. The highest yield per unit area at both localities, Radmilovac and Vršac, was obtained by hybrid 9896 (14,998 kg/ha and 11,365 kg/ha, respectively). Analysis of variance showed significant differences among the genotypes for bunch weight, bunch width, number of berries per bunch, berry weight and berry length. Significant differences between the investigated localities were determined for bunch length and all the berry characters. The interaction between genotype and locality showed significant differences for bunch length, berry length and berry width. Since the genotypes were in their initial yielding phase (third year after planting), they showed satisfactory results in relation to the objectives of selection.

  18. Fostering computational thinking skills with a tangible blocks programming environment

    OpenAIRE

    Turchi, T; Malizia, A

    2016-01-01

    Computational Thinking has recently returned to the limelight as an essential skill for both the general public and disciplines outside Computer Science. It encapsulates those thinking skills integral to solving complex problems using a computer, and is thus widely applicable in our technological society. Several public initiatives such as the Hour of Code have successfully introduced it to millions of people of different ages and backgrounds, mostly using Blocks Programming Environments like S...

  19. The DIII-D Computing Environment: Characteristics and Recent Changes

    International Nuclear Information System (INIS)

    McHarg, B.B. Jr.

    1999-01-01

    The DIII-D tokamak national fusion research facility along with its predecessor Doublet III has been operating for over 21 years. The DIII-D computing environment consists of real-time systems controlling the tokamak, heating systems, and diagnostics, and systems acquiring experimental data from instrumentation; major data analysis server nodes performing short term and long term data access and data analysis; and systems providing mechanisms for remote collaboration and the dissemination of information over the world wide web. Computer systems for the facility have undergone incredible changes over the course of time as the computer industry has changed dramatically. Yet there are certain valuable characteristics of the DIII-D computing environment that have been developed over time and have been maintained to this day. Some of these characteristics include: continuous computer infrastructure improvements, distributed data and data access, computing platform integration, and remote collaborations. These characteristics are being carried forward as well as new characteristics resulting from recent changes which have included: a dedicated storage system and a hierarchical storage management system for raw shot data, various further infrastructure improvements including deployment of Fast Ethernet, the introduction of MDSplus, LSF and common IDL based tools, and improvements to remote collaboration capabilities. This paper will describe this computing environment, important characteristics that over the years have contributed to the success of DIII-D computing systems, and recent changes to computer systems

  20. Hybrid NN/SVM Computational System for Optimizing Designs

    Science.gov (United States)

    Rai, Man Mohan

    2009-01-01

    A computational method and system based on a hybrid of an artificial neural network (NN) and a support vector machine (SVM) (see figure) has been conceived as a means of maximizing or minimizing an objective function, optionally subject to one or more constraints. Such maximization or minimization could be performed, for example, to solve a data-regression or data-classification problem or to optimize a design associated with a response function. A response function can be considered as a subset of a response surface, which is a surface in a vector space of design and performance parameters. A typical example of a design problem that the method and system can be used to solve is that of an airfoil, for which a response function could be the spatial distribution of pressure over the airfoil. In this example, the response surface would describe the pressure distribution as a function of the operating conditions and the geometric parameters of the airfoil. The use of NNs to analyze physical objects in order to optimize their responses under specified physical conditions is well known. NN analysis is suitable for multidimensional interpolation of data that lack structure and enables the representation and optimization of a succession of numerical solutions of increasing complexity or increasing fidelity to the real world. NN analysis is especially useful in helping to satisfy multiple design objectives. Feedforward NNs can be used to make estimates based on nonlinear mathematical models. One difficulty associated with use of a feedforward NN arises from the need for nonlinear optimization to determine connection weights among input, intermediate, and output variables. It can be very expensive to train an NN in cases in which it is necessary to model large amounts of information. Less widely known (in comparison with NNs) are support vector machines (SVMs), which were originally applied in statistical learning theory. In terms that are necessarily
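
    The general pattern the record describes — fit inexpensive learned models to samples of a response function, then search the surrogate — can be sketched briefly. The toy response, blend weights, and scikit-learn estimators below are illustrative assumptions, not the article's actual NN/SVM formulation.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.svm import SVR

        # Fit cheap surrogates (an NN and an SVM) to samples of a response
        # function, then minimize a blend of the two. Toy 1-D response.
        rng = np.random.default_rng(1)
        x = rng.uniform(-2, 2, size=(200, 1))
        y = x[:, 0] ** 2 + 0.1 * rng.standard_normal(200)   # noisy response

        nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(x, y)
        svm = SVR(kernel="rbf").fit(x, y)

        grid = np.linspace(-2, 2, 401).reshape(-1, 1)
        blend = 0.5 * nn.predict(grid) + 0.5 * svm.predict(grid)  # naive hybrid
        print("surrogate minimum near x =", grid[np.argmin(blend)][0])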

  1. Performance evaluation of commercial maize hybrids across diverse Terai environments during the winter season in Nepal

    Directory of Open Access Journals (Sweden)

    Mahendra Prasad Tripathi

    2016-12-01

    Full Text Available The hybrid maize cultivars of multinational seed companies are gradually being popular among the farmers in Nepal. This paper reports on research finding of 117 maize hybrids of 20 seed companies assessed for grain yield and other traits at three sites in winter season of 2011 and 2012. The objective of the study was to identify superior maize hybrids suitable for winter time planting in eastern, central and inner Terai of Nepal. Across site analysis of variance revealed that highly significant effect of genotype and genotype × environment interaction (GEI on grain yield of commercial hybrids. Overall, 47 genotypes of 16 seed companies identified as high yielding and stable based on superiority measures. The statistical analysis ranked topmost three genotypes among tested hybrids as P3856 (10515 kg ha-1, Bisco prince (8763 kg ha-1 as well as Shaktiman (8654 kg ha-1 in the first year; and 3022 (8378 kg ha-1, Kirtiman manik (8323 kg ha-1 as well as Top class (7996 kg ha-1 in the second year. It can be concluded that stable and good performing hybrids identified as potential commercial hybrids for general cultivation on similar environments in Nepal.

  2. Hybrid data storage system in an HPC exascale environment

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Gupta, Uday K.; Tzelnic, Percy; Ting, Dennis P. J.

    2015-08-18

    A computer-executable method, system, and computer program product for managing I/O requests from a compute node in communication with a data storage system that includes a first burst buffer node and a second burst buffer node. The method comprises striping data on the first and second burst buffer nodes, wherein a first portion of the data is communicated to the first burst buffer node and a second portion of the data is communicated to the second burst buffer node; processing the first portion of the data at the first burst buffer node; and processing the second portion of the data at the second burst buffer node.
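
    A toy sketch of that striping pattern follows: one write is split into two portions, each absorbed by a different buffer in parallel. The BurstBuffer class and its method names are invented for illustration; the patent describes no such API.

        from concurrent.futures import ThreadPoolExecutor

        class BurstBuffer:
            def __init__(self, name):
                self.name, self.store = name, []
            def absorb(self, chunk: bytes) -> int:
                self.store.append(chunk)   # stage data for later flush to disk
                return len(chunk)

        def striped_write(data: bytes, bb1: BurstBuffer, bb2: BurstBuffer):
            half = len(data) // 2
            parts = (data[:half], data[half:])       # stripe in two portions
            with ThreadPoolExecutor(max_workers=2) as pool:
                # each buffer processes its portion concurrently
                sizes = list(pool.map(lambda p: p[0].absorb(p[1]),
                                      zip((bb1, bb2), parts)))
            return sizes

        print(striped_write(b"x" * 1024, BurstBuffer("bb0"), BurstBuffer("bb1")))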

  3. Environments for online maritime simulators with cloud computing capabilities

    Science.gov (United States)

    Raicu, Gabriel; Raicu, Alexandra

    2016-12-01

    This paper presents cloud computing environments, network principles and methods for graphical development in realistic naval simulation, naval robotics and virtual interactions. The aim of this approach is to achieve good simulation quality in large networked environments using open-source solutions designed for educational purposes. Realistic rendering of maritime environments requires near real-time frameworks with enhanced computing capabilities during distance interactions. E-Navigation concepts coupled with the latest achievements in virtual and augmented reality will enhance the overall experience, leading to new developments and innovations. We have to deal with a multiprocessing situation using advanced technologies and distributed applications, using remote ship scenarios and automation of ship operations.

  4. Proposing Hybrid Architecture to Implement Cloud Computing in Higher Education Institutions Using a Meta-synthesis Approach

    Directory of Open Access Journals (Sweden)

    hamid reza bazi

    2017-12-01

    Cloud computing is a new technology that considerably helps Higher Education Institutions (HEIs) to develop and create competitive advantage, with inherent characteristics such as flexibility, scalability, accessibility, reliability, fault tolerance and economic efficiency. Due to the numerous advantages of cloud computing, and in order to take advantage of cloud computing infrastructure, services of universities and HEIs need to migrate to the cloud. However, this transition involves many challenges, one of which is the lack or shortage of appropriate architecture for migration to the technology. Using a reliable architecture for migration assures managers that risks in the cloud computing technology are mitigated. Therefore, organizations always search for suitable cloud computing architecture. In previous studies, these important features have received less attention and have not been achieved in a comprehensive way. The aim of this study is to use a meta-synthesis method for the first time to analyze the previously published studies and to suggest an appropriate hybrid cloud migration architecture (IUHEC). We reviewed many papers from relevant journals and conference proceedings. The concepts extracted from these papers are classified into related categories and sub-categories. Then, we developed our proposed hybrid architecture based on these concepts and categories. The proposed architecture was validated by a panel of experts and Lawshe's model was used to determine the content validity. Due to its innovative yet user-friendly nature, comprehensiveness, and high security, this architecture can help HEIs have an effective migration to a cloud computing environment.

  5. A Hybrid Computational Intelligence Approach Combining Genetic Programming And Heuristic Classification for Pap-Smear Diagnosis

    DEFF Research Database (Denmark)

    Tsakonas, Athanasios; Dounias, Georgios; Jantzen, Jan

    2001-01-01

    The paper suggests the combined use of different computational intelligence (CI) techniques in a hybrid scheme, as an effective approach to medical diagnosis. Getting to know the advantages and disadvantages of each computational intelligence technique in the recent years, the time has come...

  6. Micro-computer cards for hard industrial environment

    Energy Technology Data Exchange (ETDEWEB)

    Breton, J M

    1984-03-15

    Approximately 60% of present or future distributed systems have, or will have, operational units installed in hard environments. These applications, which include canalization and industrial motor control, robotics and process control, require systems that can easily be applied in environments not made for electronic use. The development of card systems for this hard industrial environment, as found in the petrochemical industry and mines, is described. The CMOS technology of the National Semiconductor CIM card system allows real-time microcomputer applications to be efficient and functional in hard industrial environments.

  7. ASDC Advances in the Utilization of Microservices and Hybrid Cloud Environments

    Science.gov (United States)

    Baskin, W. E.; Herbert, A.; Mazaika, A.; Walter, J.

    2017-12-01

    The Atmospheric Science Data Center (ASDC) is transitioning many of its software tools and applications to standalone microservices deployable in a hybrid cloud, offering benefits such as scalability and efficient environment management. This presentation features several projects the ASDC staff have implemented leveraging the OpenShift Container Application Platform and the OpenStack Hybrid Cloud Environment, focusing on key tools and techniques applied to: Earth Science data processing; spatial-temporal metadata generation, validation, repair, and curation; and archived data discovery, visualization, and access.

  8. Phoneme-based speech segmentation using hybrid soft computing framework

    CERN Document Server

    Sarma, Mousmita

    2014-01-01

    The book discusses intelligent system design using soft computing and similar systems and their interdisciplinary applications. It also focuses on the recent trends to use soft computing as a versatile tool for designing a host of decision support systems.

  9. Activity Recognition Using Hybrid Generative/Discriminative Models on Home Environments Using Binary Sensors

    Directory of Open Access Journals (Sweden)

    Araceli Sanchis

    2013-04-01

    Activities of daily living are good indicators of elderly health status, and activity recognition in smart environments is a well-known problem that has been previously addressed by several studies. In this paper, we describe the use of two powerful machine learning schemes, ANN (Artificial Neural Network) and SVM (Support Vector Machines), within the framework of HMM (Hidden Markov Model) in order to tackle the task of activity recognition in a home setting. The output scores of the discriminative models, after processing, are used as observation probabilities of the hybrid approach. We evaluate our approach by comparing these hybrid models with other classical activity recognition methods using five real datasets. We show how the hybrid models achieve significantly better recognition performance, with significance level p < 0.05, proving that the hybrid approach is better suited for the addressed domain.
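
    The coupling the abstract describes — discriminative posteriors standing in for HMM emission probabilities — can be sketched with a plain forward recursion. The transition matrix, prior, and posterior values below are invented toy numbers, not the paper's trained models, and the usual division by state priors is omitted for brevity.

        import numpy as np

        # Forward recursion where a classifier's per-window class posteriors
        # replace the HMM emission probabilities.
        def forward(posteriors, trans, prior):
            """posteriors: (T, S) discriminative scores P(state | observation)."""
            alpha = prior * posteriors[0]
            alpha /= alpha.sum()
            for t in range(1, len(posteriors)):
                alpha = posteriors[t] * (alpha @ trans)   # predict, then correct
                alpha /= alpha.sum()                      # normalize for stability
            return alpha                                  # filtered state belief

        trans = np.array([[0.9, 0.1],    # e.g. "sleeping" tends to persist
                          [0.2, 0.8]])   # "cooking" likewise
        prior = np.array([0.5, 0.5])
        post = np.array([[0.8, 0.2], [0.7, 0.3], [0.1, 0.9]])  # ANN/SVM outputs
        print(forward(post, trans, prior))   # belief after three sensor windows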

  10. Evaluation of genotype x environment interactions in maize hybrids using GGE biplot analysis

    Directory of Open Access Journals (Sweden)

    Fatma Aykut Tonk

    2011-01-01

    Seventeen hybrid maize genotypes were evaluated at four different locations in the 2005 and 2006 cropping seasons under irrigated conditions in Turkey. The analysis of variance showed that mean squares of environments (E), genotypes (G) and GE interactions (GEI) were highly significant and accounted for 74, 7 and 19% of the treatment combination sum of squares, respectively. To determine the effects of GEI on grain yield, the data were subjected to GGE biplot analysis. Maize hybrid G16 can be proposed as reliably growing in the test locations for high grain yield. Also, only the Yenisehir location could be the best representative of the overall locations for deciding which experimental hybrids can be recommended for grain yield in this study. Consequently, the use of grain yield per plant instead of grain yield per plot in hybrid maize breeding programs could be preferred by private companies due to some advantages.
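
    The arithmetic behind a GGE biplot is brief enough to show: center the genotype-by-environment yield table by environment means, then take a rank-2 singular value decomposition; the leading components give genotype and environment coordinates. The 5x4 yield table below is invented for illustration.

        import numpy as np

        # Invented genotype (rows) x environment (columns) yield table, t/ha.
        yields = np.array([[6.2, 5.9, 7.1, 6.5],
                           [5.8, 6.4, 6.9, 6.0],
                           [7.0, 6.8, 7.4, 7.2],
                           [5.5, 5.2, 6.0, 5.7],
                           [6.6, 6.1, 7.0, 6.8]])
        centered = yields - yields.mean(axis=0)   # remove environment main effect
        u, s, vt = np.linalg.svd(centered, full_matrices=False)
        gen_scores = u[:, :2] * s[:2]             # genotype coordinates (PC1, PC2)
        env_scores = vt[:2].T                     # environment coordinates
        print(gen_scores.round(2))
        print(env_scores.round(2))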

  11. Design requirements for ubiquitous computing environments for healthcare professionals.

    Science.gov (United States)

    Bång, Magnus; Larsson, Anders; Eriksson, Henrik

    2004-01-01

    Ubiquitous computing environments can support clinical administrative routines in new ways. The aim of such computing approaches is to enhance routine physical work, thus it is important to identify specific design requirements. We studied healthcare professionals in an emergency room and developed the computer-augmented environment NOSTOS to support teamwork in that setting. NOSTOS uses digital pens and paper-based media as the primary input interface for data capture and as a means of controlling the system. NOSTOS also includes a digital desk, walk-up displays, and sensor technology that allow the system to track documents and activities in the workplace. We propose a set of requirements and discuss the value of tangible user interfaces for healthcare personnel. Our results suggest that the key requirements are flexibility in terms of system usage and seamless integration between digital and physical components. We also discuss how ubiquitous computing approaches like NOSTOS can be beneficial in the medical workplace.

  12. HeNCE: A Heterogeneous Network Computing Environment

    Directory of Open Access Journals (Sweden)

    Adam Beguelin

    1994-01-01

    Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.
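
    The graph model HeNCE uses — nodes as ordinary routines, arcs as data and control dependencies — can be mimicked in a few lines. The sketch below is a toy scheduler under the assumption of an acyclic graph, with invented node names; it is not HeNCE itself.

        from concurrent.futures import ThreadPoolExecutor

        graph = {            # node -> prerequisite nodes (assumed acyclic)
            "load": [], "filterA": ["load"], "filterB": ["load"],
            "merge": ["filterA", "filterB"],
        }
        routines = {name: (lambda n=name: f"{n} done") for name in graph}

        def run(graph, routines):
            done, results = set(), {}
            with ThreadPoolExecutor() as pool:
                while len(done) < len(graph):
                    # every node whose inputs are complete may run in parallel
                    ready = [n for n in graph
                             if n not in done and all(p in done for p in graph[n])]
                    for n, out in zip(ready, pool.map(lambda n: routines[n](), ready)):
                        results[n] = out
                        done.add(n)
            return results

        print(run(graph, routines))   # load first, filters in parallel, merge last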

  13. A Compute Environment of ABC95 Array Computer Based on Multi-FPGA Chip

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The ABC95 array computer is a multi-function-network computer based on FPGA technology. The multi-function network supports conflict-free processor access to data in memory, and supports processor-to-processor data access over an enhanced MESH network. The ABC95 instruction system includes control instructions, scalar instructions and vector instructions; the network instructions in particular are introduced. A programming environment for ABC95 assembly language is designed, and a VC++-based programming environment for the ABC95 array computer is put forward. It includes functions to load ABC95 programs and data, store results, run programs, and so on. In particular, the data type for conflict-free access on the ABC95 array computer is defined. The results show that these technologies support effective programming of the ABC95 array computer.

  14. Collaborative virtual reality environments for computational science and design

    International Nuclear Information System (INIS)

    Papka, M. E.

    1998-01-01

    The authors are developing a networked, multi-user, virtual-reality-based collaborative environment coupled to one or more petaFLOPs computers, enabling the interactive simulation of 10^9-atom systems. The purpose of this work is to explore the requirements for this coupling. Through the design, development, and testing of such systems, they hope to gain knowledge that allows computational scientists to discover and analyze their results more quickly and in a more intuitive manner

  15. A Secure Authenticate Framework for Cloud Computing Environment

    OpenAIRE

    Nitin Nagar; Pradeep k. Jatav

    2014-01-01

    Cloud computing is an important option for companies wishing to build and deploy their infrastructure and applications. Data storage services in cloud computing are easy to use compared with other data storage services. At the same time, security in the cloud environment is a challenging task. Security issues range from missing system configuration and lack of proper updates to unwise user actions with remote data storage. These can expose a user's private data and information to unwanted access. i...

  16. Hybrid Computational Simulation and Study of Terahertz Pulsed Photoconductive Antennas

    Science.gov (United States)

    Emadi, R.; Barani, N.; Safian, R.; Nezhad, A. Zeidaabadi

    2016-11-01

    A photoconductive antenna (PCA) has been numerically investigated in the terahertz (THz) frequency band based on a hybrid simulation method. This hybrid method utilizes an optoelectronic solver, Silvaco TCAD, and a full-wave electromagnetic solver, CST. The optoelectronic solver is used to find the accurate THz photocurrent by considering realistic material parameters. Performance of photoconductive antennas and temporal behavior of the excited photocurrent for various active region geometries such as bare-gap electrode, interdigitated electrodes, and tip-to-tip rectangular electrodes are investigated. Moreover, investigations have been done on the center of the laser illumination on the substrate, substrate carrier lifetime, and diffusion photocurrent associated with the carriers temperature, to achieve efficient and accurate photocurrent. Finally, using the full-wave electromagnetic solver and the calculated photocurrent obtained from the optoelectronic solver, electromagnetic radiation of the antenna and its associated detected THz signal are calculated and compared with a measurement reference for verification.

  17. A computational model for lower hybrid current drive

    International Nuclear Information System (INIS)

    Englade, R.C.; Bonoli, P.T.; Porkolab, M.

    1983-01-01

    A detailed simulation model for lower hybrid (LH) current drive in toroidal devices is discussed. This model accounts reasonably well for the magnitude of radio frequency (RF) current observed in the PLT and Alcator C devices. It also reproduces the experimental dependencies of RF current generation on toroidal magnetic field and has provided insights about mechanisms which may underlie the observed density limit of current drive. (author)

  18. Cloud Computing and Virtual Desktop Infrastructures in Afloat Environments

    OpenAIRE

    Gillette, Stefan E.

    2012-01-01

    The phenomenon of “cloud computing” has become ubiquitous among users of the Internet and many commercial applications. Yet, the U.S. Navy has conducted limited research in this nascent technology. This thesis explores the application and integration of cloud computing both at the shipboard level and in a multi-ship environment. A virtual desktop infrastructure, mirroring a shipboard environment, was built and analyzed in the Cloud Lab at the Naval Postgraduate School, which offers a potentia...

  19. Achieving a hybrid brain-computer interface with tactile selective attention and motor imagery

    Science.gov (United States)

    Ahn, Sangtae; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan

    2014-12-01

    Objective. We propose a new hybrid brain-computer interface (BCI) system that integrates two different EEG tasks: tactile selective attention (TSA) using a vibro-tactile stimulator on the left/right finger and motor imagery (MI) of left/right hand movement. Event-related desynchronization (ERD) from the MI task and steady-state somatosensory evoked potential (SSSEP) from the TSA task are retrieved and combined into two hybrid senses. Approach. One hybrid approach is to measure two tasks simultaneously; the features of each task are combined for testing. Another hybrid approach is to measure two tasks consecutively (TSA first and MI next) using only MI features. For comparison with the hybrid approaches, the TSA and MI tasks are measured independently. Main results. Using a total of 16 subject datasets, we analyzed the BCI classification performance for MI, TSA and two hybrid approaches in a comparative manner; we found that the consecutive hybrid approach outperformed the others, yielding about a 10% improvement in classification accuracy relative to MI alone. It is understood that TSA may play a crucial role as a prestimulus in that it helps to generate earlier ERD prior to MI and thus sustains ERD longer and to a stronger degree; this ERD may give more discriminative information than ERD in MI alone. Significance. Overall, our proposed consecutive hybrid approach is very promising for the development of advanced BCI systems.
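
    As a rough illustration of the simultaneous hybrid scheme, the sketch below concatenates per-trial MI features (ERD band power) with TSA features (SSSEP amplitude) and classifies them jointly. All data are synthetic and the feature dimensions are invented, so this shows only the shape of the idea, not the paper's pipeline.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(7)
        n = 100
        labels = rng.integers(0, 2, n)   # left vs right
        # Synthetic stand-ins for the two feature sets, shifted by class.
        erd = rng.standard_normal((n, 8)) + labels[:, None] * 0.8    # MI features
        sssep = rng.standard_normal((n, 4)) + labels[:, None] * 0.5  # TSA features

        hybrid = np.hstack([erd, sssep])  # combined per-trial feature vector
        clf = LinearDiscriminantAnalysis().fit(hybrid[:70], labels[:70])
        print("held-out accuracy:", clf.score(hybrid[70:], labels[70:]))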

  20. Extended Immersive Learning Environment: A Hybrid Remote/Virtual Laboratory

    Directory of Open Access Journals (Sweden)

    Lírio Shaeffer

    2010-09-01

    This paper presents a collaborative virtual learning environment, which includes technologies such as 3D virtual representations, learning and content management systems, remote experiments, and collaborative learning spaces, among others. It intends to facilitate the construction, management and sharing of knowledge among teachers and students, in a global perspective. The environment proposes the use of 3D social representations for accessing learning materials in a dynamic and interactive form, which is regarded to be closer to the physical reality experienced by teachers and students in a learning context. A first implementation of the proposed extended immersive learning environment, in the area of solid mechanics, is also described, including the access to theoretical contents and a remote experiment to determine the elastic modulus of a given object.

  1. A Hybrid Three Layer Architecture for Fire Agent Management in Rescue Simulation Environment

    Directory of Open Access Journals (Sweden)

    Alborz Geramifard

    2008-11-01

    This paper presents a new architecture called FAIS for implementing intelligent agents cooperating in a special Multi Agent environment, namely the RoboCup Rescue Simulation System. This is a layered architecture which is customized for solving the fire extinguishing problem. Structural decision making algorithms are combined with heuristic ones in this model, so it's a hybrid architecture.

  2. A Hybrid Three Layer Architecture for Fire Agent Management in Rescue Simulation Environment

    Directory of Open Access Journals (Sweden)

    Alborz Geramifard

    2005-06-01

    This paper presents a new architecture called FAIS for implementing intelligent agents cooperating in a special Multi Agent environment, namely the RoboCup Rescue Simulation System. This is a layered architecture which is customized for solving the fire extinguishing problem. Structural decision making algorithms are combined with heuristic ones in this model, so it's a hybrid architecture.

  3. Communicator Style as a Predictor of Cyberbullying in a Hybrid Learning Environment

    Science.gov (United States)

    Dursun, Ozcan Ozgur; Akbulut, Yavuz

    2012-01-01

    This study aimed to describe the characteristics of undergraduate students in a hybrid learning environment with regard to their communicator styles and cyberbullying behaviors. Moreover, relationships between cyberbullying victimization and learners' perceived communicator styles were investigated. Cyberbullying victimization was measured through…

  4. Protect Heterogeneous Environment Distributed Computing from Malicious Code Assignment

    Directory of Open Access Journals (Sweden)

    V. S. Gorbatov

    2011-09-01

    The paper describes the practical implementation of a system protecting distributed computing in a heterogeneous environment from malicious code in submitted assignments. The choice of technologies, the development of data structures, and a performance evaluation of the implemented security system are presented.

  5. Visual Reasoning in Computational Environment: A Case of Graph Sketching

    Science.gov (United States)

    Leung, Allen; Chan, King Wah

    2004-01-01

    This paper reports the case of a form six (grade 12) Hong Kong student's exploration of graph sketching in a computational environment. In particular, the student summarized his discovery in the form of two empirical laws. The student was interviewed and the interviewed data were used to map out a possible path of his visual reasoning. Critical…

  6. New computing systems, future computing environment, and their implications on structural analysis and design

    Science.gov (United States)

    Noor, Ahmed K.; Housner, Jerrold M.

    1993-01-01

    Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.

  7. A Matchmaking Strategy Of Mixed Resource On Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Wisam Elshareef

    2015-08-01

    Today cloud computing has become a key technology for online allotment of computing resources and online storage of user data at a lower cost, where computing resources are available all the time over the Internet on a pay-per-use basis. Recently there is a growing need for resource management strategies in a cloud computing environment that encompass both end-user satisfaction and a high job submission throughput with appropriate scheduling. One of the major and essential issues in resource management is allocating incoming tasks to suitable virtual machines (matchmaking). The main objective of this paper is to propose a matchmaking strategy between the incoming requests and the various resources in the cloud environment, to satisfy the requirements of users and to balance the workload on resources. Load balancing is an important aspect of resource management in a cloud computing environment. This paper therefore proposes a dynamic weight active monitor (DWAM) load-balancing algorithm, which allocates incoming requests on the fly to all available virtual machines in an efficient manner, in order to achieve better performance parameters such as response time, processing time and resource utilization. The feasibility of the proposed algorithm is analyzed using the CloudSim simulator, which proves the superiority of the proposed DWAM algorithm over its counterparts in the literature. Simulation results demonstrate that the proposed algorithm dramatically improves response time and data processing time and makes better use of resources compared with the Active Monitor and VM-assign algorithms.
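
    The allocation rule can be caricatured as follows: keep a weight per virtual machine derived from its capacity and current load, recomputed for each arriving request, and send the request to the heaviest weight. This is a hedged sketch of the general idea only; the paper's actual weight formula and parameters are not reproduced here.

        from dataclasses import dataclass

        @dataclass
        class VM:
            name: str
            capacity: float      # requests it can absorb per tick
            load: float = 0.0

            @property
            def weight(self) -> float:
                return self.capacity - self.load   # free headroom as the weight

        def assign(requests, vms):
            placements = []
            for req in requests:
                vm = max(vms, key=lambda v: v.weight)   # recomputed per request
                vm.load += req
                placements.append((req, vm.name))
            return placements

        vms = [VM("vm1", 10), VM("vm2", 6), VM("vm3", 8)]
        print(assign([2, 2, 3, 1, 4], vms))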

  8. Study of an analog/logic processor for the design of an auto patch hybrid computer

    International Nuclear Information System (INIS)

    Koched, Hassen

    1976-01-01

    This paper presents the experimental study of an analog multiprocessor designed at SES/CEN-Saclay. An application of such a device as a basic component of an auto-patch hybrid computer is presented. First, a description of the processor and a presentation of the theoretical concepts which governed its design are given. Experiments on a hybrid computer are then presented. Finally, different systems of automatic patching are presented and conveniently modified for the use of such a processor. (author) [fr

  9. The Computer Revolution in Science: Steps towards the realization of computer-supported discovery environments

    NARCIS (Netherlands)

    de Jong, Hidde; Rip, Arie

    1997-01-01

    The tools that scientists use in their search processes together form so-called discovery environments. The promise of artificial intelligence and other branches of computer science is to radically transform conventional discovery environments by equipping scientists with a range of powerful

  10. Transpiration Response and Growth in Pearl Millet Parental Lines and Hybrids Bred for Contrasting Rainfall Environments

    Directory of Open Access Journals (Sweden)

    Susan Medina

    2017-10-01

    Under conditions of high vapor pressure deficit (VPD) and soil drying, restricting transpiration is an important avenue to gain efficiency in water use. The question we raise in this article is whether breeding for agro-ecological environments that differ in rainfall has selected for traits that control plant water use. These are measured in pearl millet materials bred for zones varying in rainfall (8 combinations of parent and F1-hybrids, 18 F1-hybrids and then 40 F1-hybrids). In all cases, we found an agro-ecological variation in the slope of the transpiration response to increasing VPD, and parental line variation in the transpiration response to soil drying within hybrid/parent combinations. The hybrids adapted to lower rainfall had higher transpiration response curves than those from the highest rainfall zones, but showed no variation in how transpiration responded to soil drying. The genotypes bred for lower rainfall zones showed lower leaf area, dry matter, thicker leaves, root development, and exudation than the ones bred for the high rainfall zone when grown in the low VPD environment of the greenhouse, but there was no difference in their root length nor in the root/shoot index in these genotypes. By contrast, when grown under high VPD conditions outdoors, the lower rainfall hybrids had the highest leaf, tiller, and biomass development. Finally, under soil drying the genotypes from the lower rainfall zone accumulated less biomass than the ones from the higher rainfall zone, and so did the parental lines compared to the hybrids. These differences in the transpiration response and growth clearly showed that breeding for different agro-ecological zones also bred for different genotype strategies in relation to traits related to plant water use. Highlights: • Variation in transpiration response reflected breeding for agro-ecological zones • Different growth strategies depended on the environmental conditions • Different ideotypes reflected

  11. Transpiration Response and Growth in Pearl Millet Parental Lines and Hybrids Bred for Contrasting Rainfall Environments.

    Science.gov (United States)

    Medina, Susan; Gupta, S K; Vadez, Vincent

    2017-01-01

    Under conditions of high vapor pressure deficit (VPD) and soil drying, restricting transpiration is an important avenue to gain efficiency in water use. The question we raise in this article is whether breeding for agro-ecological environments that differ in rainfall has selected for traits that control plant water use. These are measured in pearl millet materials bred for zones varying in rainfall (8 combinations of parent and F1-hybrids, 18 F1-hybrids and then 40 F1-hybrids). In all cases, we found an agro-ecological variation in the slope of the transpiration response to increasing VPD, and parental line variation in the transpiration response to soil drying within hybrid/parent combinations. The hybrids adapted to lower rainfall had higher transpiration response curves than those from the highest rainfall zones, but showed no variation in how transpiration responded to soil drying. The genotypes bred for lower rainfall zones showed lower leaf area, dry matter, thicker leaves, root development, and exudation than the ones bred for the high rainfall zone when grown in the low VPD environment of the greenhouse, but there was no difference in their root length nor in the root/shoot index in these genotypes. By contrast, when grown under high VPD conditions outdoors, the lower rainfall hybrids had the highest leaf, tiller, and biomass development. Finally, under soil drying the genotypes from the lower rainfall zone accumulated less biomass than the ones from the higher rainfall zone, and so did the parental lines compared to the hybrids. These differences in the transpiration response and growth clearly showed that breeding for different agro-ecological zones also bred for different genotype strategies in relation to traits related to plant water use. Highlights: • Variation in transpiration response reflected breeding for agro-ecological zones • Different growth strategies depended on the environmental conditions • Different ideotypes reflected

  12. A Hybrid Soft Computing Approach for Subset Problems

    Directory of Open Access Journals (Sweden)

    Broderick Crawford

    2013-01-01

    Subset problems (set partitioning, packing, and covering) are formal models for many practical optimization problems. A set partitioning problem determines how the items in one set (S) can be partitioned into smaller subsets. All items in S must be contained in one and only one partition. Related problems are set packing (all items must be contained in zero or one partition) and set covering (all items must be contained in at least one partition). Here, we present a hybrid solver based on ant colony optimization (ACO) combined with arc consistency for solving these kinds of problems. ACO is a swarm intelligence metaheuristic inspired by the behavior of ants searching for food. It makes it possible to solve complex combinatorial problems for which traditional mathematical techniques may fail. On the other side, in constraint programming, the process of solving constraint satisfaction problems can dramatically reduce the search space by means of arc consistency, enforcing constraint consistency either prior to or during search. Our hybrid approach was tested with set covering and set partitioning dataset benchmarks. It was observed that the performance of ACO was improved by embedding this filtering technique in its constructive phase.
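
    A compressed sketch of an ACO constructive phase for set covering follows. The pruning step here merely discards subsets that no longer cover anything, standing in loosely for the consistency filtering the paper describes; the instance and parameters are invented.

        import random

        # Toy set-covering instance: cover all items using few named subsets.
        items = set(range(6))
        subsets = {"A": {0, 1, 2}, "B": {2, 3}, "C": {3, 4, 5},
                   "D": {0, 5}, "E": {1, 4}}
        pher = {s: 1.0 for s in subsets}   # pheromone per subset

        def build_cover():
            uncovered, chosen = set(items), []
            while uncovered:
                # candidate filter (loose stand-in for consistency pruning):
                # keep only subsets that still cover something
                cands = [s for s in subsets if subsets[s] & uncovered]
                total = sum(pher[s] for s in cands)
                r, acc = random.uniform(0, total), 0.0
                for s in cands:            # roulette-wheel selection
                    acc += pher[s]
                    if acc >= r:
                        chosen.append(s)
                        uncovered -= subsets[s]
                        break
            return chosen

        best = min((build_cover() for _ in range(200)), key=len)
        for s in best:                     # reinforce the best cover found
            pher[s] += 1.0
        print(best)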

  13. A hybrid computational strategy to address WGS variant analysis in >5000 samples.

    Science.gov (United States)

    Huang, Zhuoyi; Rustagi, Navin; Veeraraghavan, Narayanan; Carroll, Andrew; Gibbs, Richard; Boerwinkle, Eric; Venkata, Manjunath Gorentla; Yu, Fuli

    2016-09-10

    The decreasing costs of sequencing are driving the need for cost-effective and real-time variant calling of whole genome sequencing data. The scale of these projects is far beyond the capacity of the computing resources available to most research labs, and other infrastructures such as the AWS cloud environment and supercomputers have limitations that make large-scale joint variant calling infeasible: infrastructure-specific variant calling strategies either fail to scale up to large datasets or abandon joint calling strategies. We present a high throughput framework including multiple variant callers for single nucleotide variant (SNV) calling, which leverages a hybrid computing infrastructure consisting of the AWS cloud, supercomputers and local high performance computing infrastructures. We present a novel binning approach for large scale joint variant calling and imputation which can scale up to over 10,000 samples while producing SNV callsets with high sensitivity and specificity. As a proof of principle, we present results of analysis on the Cohorts for Heart And Aging Research in Genomic Epidemiology (CHARGE) WGS freeze 3 dataset, in which joint calling, imputation and phasing of over 5300 whole genome samples was produced in under 6 weeks using four state-of-the-art callers: SNPTools, GATK-HaplotypeCaller, GATK-UnifiedGenotyper and GotCloud. We used Amazon AWS, a 4000-core in-house cluster at Baylor College of Medicine, IBM power PC Blue BioU at Rice and Rhea at Oak Ridge National Laboratory (ORNL) for the computation. AWS was used for joint calling of 180 TB of BAM files, and the ORNL and Rice supercomputers were used for the imputation and phasing step. All other steps were carried out on the local compute cluster. The entire operation used 5.2 million core hours and transferred only a total of 6 TB of data across the platforms. Even with increasing sizes of whole genome datasets, ensemble joint calling of SNVs for low

  14. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    Science.gov (United States)

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

    Reconstructing gene networks by experimentally testing the possible interactions between genes is tedious, so it has become a trend to adopt automated reverse engineering procedures instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and speed up the computation, the mechanism of cloud computing is a promising solution: most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed: they show that our parallel approach can successfully infer networks with desired behaviors while largely reducing computation time. Parallel population-based algorithms can effectively determine network parameters and perform better than the widely used sequential algorithms in gene network inference; they can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel-model population-based optimization method and the parallel computational framework, high
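
    The abstract names the ingredients (GA, PSO, MapReduce) without giving the update rule. The sketch below shows one plausible generation of a sequential hybrid GA-PSO search in Python, minimizing a toy sphere function; the operator choices and constants are assumptions for illustration, not the authors' parallel implementation.

```python
# Sketch of one generation of a hybrid GA-PSO search (assumption-level
# illustration of the general idea, not the authors' MapReduce code).
import numpy as np

rng = np.random.default_rng(0)

def hybrid_ga_pso_step(pos, vel, pbest, gbest, fitness, w=0.7, c1=1.5, c2=1.5,
                       mutation_rate=0.1):
    n, d = pos.shape
    # PSO velocity/position update.
    vel = (w * vel
           + c1 * rng.random((n, d)) * (pbest - pos)
           + c2 * rng.random((n, d)) * (gbest - pos))
    pos = pos + vel
    # GA step: one-point crossover between adjacent pairs, Gaussian mutation.
    for i in range(0, n - 1, 2):
        cut = rng.integers(1, d)
        pos[i, cut:], pos[i + 1, cut:] = pos[i + 1, cut:].copy(), pos[i, cut:].copy()
    mask = rng.random((n, d)) < mutation_rate
    pos = pos + mask * rng.normal(0.0, 0.1, (n, d))
    # Update personal and global bests (lower fitness is better).
    f = np.apply_along_axis(fitness, 1, pos)
    fp = np.apply_along_axis(fitness, 1, pbest)
    improved = f < fp
    pbest[improved] = pos[improved]
    gbest = pbest[np.argmin(np.apply_along_axis(fitness, 1, pbest))]
    return pos, vel, pbest, gbest

if __name__ == "__main__":
    sphere = lambda x: float((x ** 2).sum())
    pos = rng.normal(size=(10, 4)); vel = np.zeros((10, 4))
    pbest = pos.copy(); gbest = pos[0].copy()
    for _ in range(50):
        pos, vel, pbest, gbest = hybrid_ga_pso_step(pos, vel, pbest, gbest, sphere)
    print(sphere(gbest))  # should shrink toward 0
```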

  15. A Distributed Snapshot Protocol for Efficient Artificial Intelligence Computation in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    JongBeom Lim

    2018-01-01

    Full Text Available Many artificial intelligence applications often require a huge amount of computing resources. As a result, cloud computing adoption rates are increasing in the artificial intelligence field. To support the demand for artificial intelligence applications and guarantee the service level agreement, cloud computing should provide not only computing resources but also fundamental mechanisms for efficient computing. In this regard, a snapshot protocol has been used to create a consistent snapshot of the global state in cloud computing environments. However, the existing snapshot protocols are not optimized in the context of artificial intelligence applications, where large-scale iterative computation is the norm. In this paper, we present a distributed snapshot protocol for efficient artificial intelligence computation in cloud computing environments. The proposed snapshot protocol is based on a distributed algorithm to run interconnected multiple nodes in a scalable fashion. Our snapshot protocol is able to deal with artificial intelligence applications, in which a large number of computing nodes are running. We reveal that our distributed snapshot protocol guarantees the correctness, safety, and liveness conditions.
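
    The abstract does not detail the protocol, but snapshot protocols of this kind build on the classic Chandy-Lamport marker algorithm. The Python sketch below simulates that textbook algorithm on a three-node ring; it is a hedged illustration of the family of protocols, not the authors' optimized variant for iterative workloads.

```python
# Minimal simulation of the classic Chandy-Lamport marker-based snapshot.
from collections import deque

class Node:
    def __init__(self, nid, state):
        self.nid, self.state = nid, state
        self.recorded_state = None   # local state captured for the snapshot
        self.marker_seen = False
        self.closed = set()          # in-channels on which a marker arrived
        self.channel_log = {}        # in-channel -> in-flight messages recorded

def snapshot(nodes, channels, initiator):
    """channels maps (src, dst) to a deque of in-flight messages."""
    nodes[initiator].marker_seen = True
    nodes[initiator].recorded_state = nodes[initiator].state
    for (s, d), q in channels.items():
        if s == initiator:
            q.append("MARKER")
    drained = False
    while not drained:
        drained = True
        for (s, d), q in list(channels.items()):
            if not q:
                continue
            drained = False
            msg, node = q.popleft(), nodes[d]
            if msg == "MARKER":
                node.closed.add((s, d))
                if not node.marker_seen:
                    # First marker: record local state, flood markers onward.
                    node.marker_seen = True
                    node.recorded_state = node.state
                    for (s2, d2), q2 in channels.items():
                        if s2 == d:
                            q2.append("MARKER")
            elif node.marker_seen and (s, d) not in node.closed:
                # Application message still in flight when the snapshot
                # started: it belongs to the recorded channel state.
                node.channel_log.setdefault((s, d), []).append(msg)

nodes = {i: Node(i, state=i * 10) for i in range(3)}
channels = {(0, 1): deque(["m01"]), (1, 2): deque(), (2, 0): deque()}
snapshot(nodes, channels, initiator=0)
print({i: n.recorded_state for i, n in nodes.items()})  # {0: 0, 1: 10, 2: 20}
```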

  16. A hybrid method for the parallel computation of Green's functions

    DEFF Research Database (Denmark)

    Petersen, Dan Erik; Li, Song; Stokbro, Kurt

    2009-01-01

    Because of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size.

  17. Modeling and Simulation of Renewable Hybrid Power System using Matlab Simulink Environment

    Directory of Open Access Journals (Sweden)

    Cristian Dragoş Dumitru

    2010-12-01

    Full Text Available The paper presents the modeling of a solar-wind-hydroelectric hybrid system in the Matlab/Simulink environment. The application is useful for the analysis and simulation of a real hybrid solar-wind-hydroelectric system connected to a public grid. The application is built on a modular architecture to facilitate easy study of the influence of each component module. Blocks like the wind model, solar model, hydroelectric model, energy conversion and load are implemented and the results of simulation are also presented. As an example, one of the most important studies is the behavior of the hybrid system, which makes it possible to employ renewable, time-varying energy sources while providing a continuous supply. The application represents a useful tool in research activity and also in teaching
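
    As a rough illustration of what such a model computes at each simulation step, the following Python stand-in balances the three renewable sources against the load and settles the difference with the public grid; the block structure and numbers are assumptions, not taken from the Simulink application.

```python
# Toy power-balance step of the kind such a modular model evaluates each
# tick (illustrative Python stand-in for the Matlab/Simulink blocks).
def power_balance(p_solar, p_wind, p_hydro, p_load):
    """Positive surplus is exported to the public grid, deficit is imported."""
    surplus = p_solar + p_wind + p_hydro - p_load
    return {"grid_export": max(surplus, 0.0), "grid_import": max(-surplus, 0.0)}

print(power_balance(p_solar=2.5, p_wind=1.0, p_hydro=3.0, p_load=5.0))
```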

  18. Operational computer graphics in the flight dynamics environment

    Science.gov (United States)

    Jeletic, James F.

    1989-01-01

    Over the past five years, the Flight Dynamics Division of the National Aeronautics and Space Administration's (NASA's) Goddard Space Flight Center has incorporated computer graphics technology into its operational environment. In an attempt to increase the effectiveness and productivity of the Division, computer graphics software systems have been developed that display spacecraft tracking and telemetry data in 2-d and 3-d graphic formats that are more comprehensible than the alphanumeric tables of the past. These systems vary in functionality from real-time mission monitoring systems, to mission planning utilities, to system development tools. Here, the capabilities and architecture of these systems are discussed.

  19. Hybrid computer simulation of the dynamics of the Hoger Onderwijs Reactor

    International Nuclear Information System (INIS)

    Moers, J.C.; Vries, J.W. de.

    1976-01-01

    A distributed parameter model for the dynamics of the Hoger Onderwijs Reactor (HOR) at Delft is presented. The neutronic and the thermodynamic part of this model have been separately implemented on the AD4-IBM1800 Hybrid Computer of the Delft University of Technology Computation Centre. A continuous-space/discrete-time solution method has been employed. Some test results of the simulation are included

  20. A Cost–Effective Computer-Based, Hybrid Motorised and Gravity ...

    African Journals Online (AJOL)

    A Cost-Effective Computer-Based, Hybrid Motorised and Gravity-Driven Material Handling System for the Mauritian Apparel Industry. ... Thus, many companies are investing significantly in a Research & Development department in order to design new techniques to improve workers' efficiency, and to decrease the amount ...

  1. Computation of hybrid static potentials in SU(3) lattice gauge theory

    Directory of Open Access Journals (Sweden)

    Reisinger Christian

    2018-01-01

    Full Text Available We compute hybrid static potentials in SU(3) lattice gauge theory. We present a method to automatically generate a large set of suitable creation operators with defined quantum numbers from elementary building blocks. We show preliminary results for several channels and discuss which structures of the gluonic flux tube seem to be realized by the ground states in these channels.

  2. Saccharomyces interspecies hybrids as model organisms for studying yeast adaptation to stressful environments.

    Science.gov (United States)

    Lopandic, Ksenija

    2018-01-01

    The strong development of molecular biology techniques and next-generation sequencing technologies in the last two decades has significantly improved our understanding of the evolutionary history of Saccharomyces yeasts. It has been shown that many strains isolated from man-made environments are not pure genetic lines, but contain genetic materials from different species that substantially increase their genome complexity. A number of strains have been described as interspecies hybrids, implying different yeast species that under specific circumstances exchange and recombine their genomes. Such fusing usually results in a wide variety of alterations at the genetic and chromosomal levels. The observed changes have suggested a high genome plasticity and a significant role of interspecies hybridization in the adaptation of yeasts to environmental stresses and industrial processes. There is a high probability that harsh wine and beer fermentation environments, from which the majority of interspecies hybrids have been isolated so far, influence their selection and stabilization as well as their genomic and phenotypic heterogeneity. The lessons we have learned about geno- and phenotype plasticity and the diversity of natural and commercial yeast hybrids have already had a strong impact on the development of artificial hybrids that can be successfully used in the fermentation-based food and beverage industry. The creation of artificial hybrids through the crossing of strains with desired attributes is a possibility to obtain a vast variety of new, but not genetically modified yeasts with a range of improved and beneficial traits. Copyright © 2017 John Wiley & Sons, Ltd.

  3. Selection of hybrid vehicle for green environment using multi-attributive border approximation area comparison method

    Directory of Open Access Journals (Sweden)

    Tapas Kumar Biswas

    2018-02-01

    Full Text Available The mobility sector, including all kinds of transportation systems, is facing global challenges with respect to green environmental issues. There has been a paradigm shift in the concept of design and manufacturing of automotive vehicles keeping in mind the scarcity of fossil fuel and the impact of emissions on the environment due to the burning of it. The addition of hybrid and electric vehicles in the passenger car segment has gained significant momentum in addressing these global challenges. This research investigates the performance of a group of hybrid vehicles from the customers' perspective. Among the different brands available in the hybrid vehicle market, smart customers give priority to vehicle cost, mileage, tailpipe emission, comfort and a high tank volume for long drives. Considering these attributes, a selection strategy for hybrid vehicles has been developed using the entropy-based multi-attributive border approximation area comparison (MABAC) method. This research highlights the best hybrid vehicle, which reduces air pollution in cities with other significant environmental benefits, reduces dependence on foreign energy imports and minimizes the annual fuel cost.
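
    For readers unfamiliar with the method, the following Python sketch implements the standard entropy-weighting and MABAC steps on a made-up decision matrix; the criteria values are placeholders, not the paper's vehicle data.

```python
# Hedged sketch of entropy-weighted MABAC ranking (standard formulation;
# the vehicle data below are illustrative placeholders).
import numpy as np

def entropy_weights(X):
    P = X / X.sum(axis=0)                              # column-wise proportions
    e = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(X))
    d = 1.0 - e                                        # degree of divergence
    return d / d.sum()

def mabac_scores(X, weights, benefit):
    lo, hi = X.min(axis=0), X.max(axis=0)
    # Normalize: benefit criteria rise with value, cost criteria fall.
    N = np.where(benefit, (X - lo) / (hi - lo), (X - hi) / (lo - hi))
    V = weights * (N + 1.0)                            # weighted matrix
    G = np.prod(V, axis=0) ** (1.0 / len(X))           # border approximation area
    return (V - G).sum(axis=1)                         # higher = better

# Criteria: cost (cost-type), mileage, emission (cost-type), comfort, tank size.
X = np.array([[25e3, 22.0, 95.0, 7.5, 45.0],
              [28e3, 25.0, 80.0, 8.0, 43.0],
              [23e3, 20.0, 110.0, 7.0, 50.0]], dtype=float)
benefit = np.array([False, True, False, True, True])
scores = mabac_scores(X, entropy_weights(X), benefit)
print(scores.argsort()[::-1])   # alternatives ranked best-first
```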

  4. Computation of 3D form factors in complex environments

    International Nuclear Information System (INIS)

    Coulon, N.

    1989-01-01

    The calculation of radiant interchange among opaque surfaces in a complex environment poses the general problem of determining the visible and hidden parts of the environment. In many thermal engineering applications, surfaces are separated by radiatively non-participating media and may be idealized as diffuse emitters and reflectors. Consequently, the net radiant energy fluxes are intimately related to purely geometrical quantities called form factors, which take hidden parts into account: the problem is reduced to the evaluation of form factors. This paper presents the method developed for the computation of 3D form factors in the finite-element module of the system TRIO, which is a general computer code for thermal and fluid flow analysis. The method is derived from an algorithm devised for synthetic image generation. A comparison is performed with the standard contour integration method, which is also implemented and suited to convex geometries. Several illustrative examples of finite-element thermal calculations in radiating enclosures are given
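
    For reference, the standard diffuse form factor between two patches, including the visibility term V that encodes the hidden parts, can be written as follows (textbook radiometry, not TRIO-specific notation):

```latex
% Diffuse form factor from patch i to patch j, with visibility term V:
F_{ij} \;=\; \frac{1}{A_i} \int_{A_i}\!\int_{A_j}
  \frac{\cos\theta_i \,\cos\theta_j}{\pi r^2}\, V(x_i, x_j)\;
  \mathrm{d}A_j\,\mathrm{d}A_i
```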

  5. A Novel Biometric Approach for Authentication In Pervasive Computing Environments

    OpenAIRE

    Rachappa,; Divyajyothi M G; D H Rao

    2016-01-01

    The paradigm of embedding computing devices in our surrounding environment has gained more interest in recent days. Along with contemporary technology come challenges, the most important being security and privacy. Keeping in mind the compactness and memory constraints of pervasive devices, the biometric techniques proposed for identification should be robust and dynamic. In this work, we propose an emerging scheme that is based on a few exclusive human traits and characte...

  6. Quality control of computational fluid dynamics in indoor environments

    DEFF Research Database (Denmark)

    Sørensen, Dan Nørtoft; Nielsen, P. V.

    2003-01-01

    Computational fluid dynamics (CFD) is used routinely to predict air movement and distributions of temperature and concentrations in indoor environments. Modelling and numerical errors are inherent in such studies and must be considered when the results are presented. Here, we discuss modelling as... ...the quality of CFD calculations, as well as guidelines for the minimum information that should accompany all CFD-related publications to enable a scientific judgment of the quality of the study.

  7. Hybrid Epidemics - A Case Study on Computer Worm Conficker

    OpenAIRE

    Zhang, Changwang; Zhou, Shi; Chain, Benjamin M.

    2014-01-01

    Conficker is a computer worm that erupted on the Internet in 2008. It is unique in combining three different spreading strategies: local probing, neighbourhood probing, and global probing. We propose a mathematical model that combines three modes of spreading: local, neighbourhood, and global, to capture the worm's spreading behaviour. The parameters of the model are inferred directly from network data obtained during the first day of the Conficker epidemic. The model is then used to explore ...
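
    To make the three-mode idea concrete, the following toy Python simulation spreads an infection by local, neighbourhood, and global probing on a ring of hosts; the probabilities are illustrative assumptions, not the parameters inferred from the Conficker data.

```python
# Toy discrete-time simulation of a hybrid epidemic with the three spreading
# modes named in the abstract: local, neighbourhood, and global probing.
import random

random.seed(1)
N = 1000
p_local, p_neigh, p_global = 0.3, 0.1, 0.002
infected = {0}

for step in range(30):
    new = set()
    for i in infected:
        # Local probing: immediate ring neighbours.
        for j in ((i - 1) % N, (i + 1) % N):
            if random.random() < p_local:
                new.add(j)
        # Neighbourhood probing: a random host within a small window.
        j = (i + random.randint(-10, 10)) % N
        if random.random() < p_neigh:
            new.add(j)
        # Global probing: a uniformly random host.
        if random.random() < p_global:
            new.add(random.randrange(N))
    infected |= new
    print(step, len(infected))
```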

  8. A hybrid method for the parallel computation of Green's functions

    International Nuclear Information System (INIS)

    Petersen, Dan Erik; Li, Song; Stokbro, Kurt; Sorensen, Hans Henrik B.; Hansen, Per Christian; Skelboe, Stig; Darve, Eric

    2009-01-01

    Quantum transport models for nanodevices using the non-equilibrium Green's function method require the repeated calculation of the block tridiagonal part of the Green's and lesser Green's function matrices. This problem is related to the calculation of the inverse of a sparse matrix. Because of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size.
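
    For orientation, the sequential baseline that such parallel strategies compete with can be sketched compactly: the standard forward/backward scheme below computes the diagonal blocks of the inverse of a block-tridiagonal matrix with NumPy. It is the textbook recurrence, not the paper's parallel Schur-complement/cyclic-reduction algorithm.

```python
# Standard recursive scheme for the diagonal blocks of the inverse of a
# block-tridiagonal matrix (the sequential baseline discussed above).
import numpy as np

def block_tridiag_inverse_diag(D, U, L):
    """D[i]: diagonal blocks; U[i] = A[i, i+1]; L[i] = A[i+1, i]. Returns G[i][i]."""
    n = len(D)
    gL = [None] * n                      # left-connected auxiliary blocks
    gL[0] = np.linalg.inv(D[0])
    for i in range(1, n):                # forward sweep
        gL[i] = np.linalg.inv(D[i] - L[i - 1] @ gL[i - 1] @ U[i - 1])
    G = [None] * n
    G[-1] = gL[-1]
    for i in range(n - 2, -1, -1):       # backward sweep
        G[i] = gL[i] + gL[i] @ U[i] @ G[i + 1] @ L[i] @ gL[i]
    return G

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b, n = 2, 4
    D = [rng.normal(size=(b, b)) + 5 * np.eye(b) for _ in range(n)]
    U = [rng.normal(size=(b, b)) for _ in range(n - 1)]
    L = [rng.normal(size=(b, b)) for _ in range(n - 1)]
    A = np.zeros((b * n, b * n))
    for i in range(n):
        A[b*i:b*(i+1), b*i:b*(i+1)] = D[i]
        if i < n - 1:
            A[b*i:b*(i+1), b*(i+1):b*(i+2)] = U[i]
            A[b*(i+1):b*(i+2), b*i:b*(i+1)] = L[i]
    G = block_tridiag_inverse_diag(D, U, L)
    print(np.allclose(G[2], np.linalg.inv(A)[2*b:3*b, 2*b:3*b]))  # True
```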

  9. Hybrid computing using a neural network with dynamic external memory.

    Science.gov (United States)

    Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis

    2016-10-27

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.
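
    One small, well-documented ingredient of the DNC is content-based addressing: read weights are a softmax over the cosine similarity between a key and each memory row. A minimal NumPy sketch of that single component (not the full architecture) follows.

```python
# Content-based addressing as used for DNC memory reads: cosine similarity
# between a key and each memory row, sharpened by a strength beta and
# normalized with a softmax.
import numpy as np

def content_weights(memory, key, beta):
    """memory: (N, W) matrix; key: (W,) vector; beta: sharpening scalar."""
    eps = 1e-8
    sim = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + eps)
    logits = beta * sim
    w = np.exp(logits - logits.max())    # numerically stable softmax
    return w / w.sum()

mem = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
print(content_weights(mem, np.array([1.0, 0.0]), beta=5.0))
```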

  10. The Virtual Cell: a software environment for computational cell biology.

    Science.gov (United States)

    Loew, L M; Schaff, J C

    2001-10-01

    The newly emerging field of computational cell biology requires software tools that address the needs of a broad community of scientists. Cell biological processes are controlled by an interacting set of biochemical and electrophysiological events that are distributed within complex cellular structures. Computational modeling is familiar to researchers in fields such as molecular structure, neurobiology and metabolic pathway engineering, and is rapidly emerging in the area of gene expression. Although some of these established modeling approaches can be adapted to address problems of interest to cell biologists, relatively few software development efforts have been directed at the field as a whole. The Virtual Cell is a computational environment designed for cell biologists as well as for mathematical biologists and bioengineers. It serves to aid the construction of cell biological models and the generation of simulations from them. The system enables the formulation of both compartmental and spatial models, the latter with either idealized or experimentally derived geometries of one, two or three dimensions.

  11. Spatially hybrid computations for streamer discharges with generic features of pulled fronts: I. Planar fronts

    International Nuclear Information System (INIS)

    Li Chao; Ebert, Ute; Hundsdorfer, Willem

    2010-01-01

    Streamers are the first stage of sparks and lightning; they grow due to a strongly enhanced electric field at their tips; this field is created by a thin curved space charge layer. These multiple scales are already challenging when the electrons are approximated by densities. However, electron density fluctuations in the leading edge of the front and non-thermal stretched tails of the electron energy distribution (as a cause of X-ray emissions) require a particle model to follow the electron motion. But present computers cannot deal with all electrons in a fully developed streamer. Therefore, super-particles have to be introduced, which leads to wrong statistics and numerical artifacts. The method of choice is a hybrid computation in space where individual electrons are followed in the region of high electric field and low density while the bulk of the electrons is approximated by densities (or fluids). We here develop the hybrid coupling for planar fronts. First, to obtain a consistent flux at the interface between the particle and fluid model in the hybrid computation, the widely used classical fluid model is replaced by an extended fluid model. Then the coupling algorithm and the numerical implementation of the spatially hybrid model are presented in detail, in particular the position of the model interface and the construction of the buffer region. The method carries generic features of pulled fronts and can be applied to similar problems, such as large deviations in the leading edge of population fronts.

  12. STAR - A computer language for hybrid AI applications

    Science.gov (United States)

    Borchardt, G. C.

    1986-01-01

    Constructing Artificial Intelligence application systems which rely on both symbolic and non-symbolic processing places heavy demands on the communication of data between dissimilar languages. This paper describes STAR (Simple Tool for Automated Reasoning), a computer language for the development of AI application systems which supports the transfer of data structures between a symbolic level and a non-symbolic level defined in languages such as FORTRAN, C and PASCAL. The organization of STAR is presented, followed by the description of an application involving STAR in the interpretation of airborne imaging spectrometer data.

  13. A Hybrid Verifiable and Delegated Cryptographic Model in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Jaber Ibrahim Naser

    2018-02-01

    Full Text Available Access control is very important in cloud data sharing. Especially in domains like healthcare, it is essential to have access control mechanisms in place for confidentiality and secure data access. Attribute-based encryption has been around for many years to secure data and provide controlled access. In this paper, we propose a framework that supports a circuit- and attribute-based encryption mechanism involving multiple parties: the data owner, data user, cloud server and attribute authority. An important feature of the proposed system is the verifiable delegation of the decryption process to the cloud server. The data owner encrypts data and delegates the decryption process to the cloud. The cloud server performs partial decryption, and the final decrypted data are shared with users as per their privileges. The data owner thus reduces computational complexity by delegating the decryption process to the cloud server. We built a prototype application using the Microsoft .NET platform as proof of the concept. The empirical results revealed that there is controlled access with multiple user roles and access control rights for secure and confidential data access in cloud computing.
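
    The paper's CP-ABE construction is not given in the abstract. The toy sketch below illustrates only the delegation idea using a split-key ElGamal decryption, where the secret exponent is shared between the cloud and the user so the cloud can strip most of the mask without learning the plaintext; all parameters are illustrative and far too small for real use.

```python
# Toy split-key ElGamal decryption illustrating delegated partial decryption
# (a stand-in for the idea only; the paper uses CP-ABE, not ElGamal).
p = 467          # toy prime; real deployments use large safe groups
g = 2
x = 153                       # full secret key
z_cloud, z_user = 99, x - 99  # shares held by cloud server and end user
y = pow(g, x, p)              # public key

m = 123                       # message (group element)
k = 77                        # encryption randomness
c1, c2 = pow(g, k, p), (m * pow(y, k, p)) % p

partial = pow(c1, z_cloud, p)                 # cloud: partial decryption
mask = (partial * pow(c1, z_user, p)) % p     # user: finish the unmasking
recovered = (c2 * pow(mask, -1, p)) % p       # modular inverse (Python >= 3.8)
assert recovered == m
```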

  14. Modelling the WWER-type reactor dynamics using a hybrid computer. Part 1

    International Nuclear Information System (INIS)

    Karpeta, C.

    Results of simulation studies into reactor and steam generator dynamics of a WWER type power plant are presented. Spatial kinetics of the reactor core is described by a nodal approximation to diffusion equations, xenon poisoning equations and heat transfer equations. The simulation of the reactor model dynamics was performed on a hybrid computer. Models of both a horizontal and a vertical steam generator were developed. The dynamics was investigated over a large range of power by computing the transients on a digital computer. (author)
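
    The xenon poisoning equations referred to above are not written out in the abstract; a standard textbook form of the iodine-xenon balance (notation assumed, not taken from the paper) is:

```latex
% Iodine-xenon dynamics (phi: neutron flux, Sigma_f: fission cross-section,
% gamma: fission yields, lambda: decay constants, sigma_X: Xe absorption):
\frac{\mathrm{d}I}{\mathrm{d}t} = \gamma_I \Sigma_f \phi - \lambda_I I,
\qquad
\frac{\mathrm{d}X}{\mathrm{d}t} = \gamma_X \Sigma_f \phi + \lambda_I I
  - \lambda_X X - \sigma_X \phi X
```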

  15. An Introduction to Computer Forensics: Gathering Evidence in a Computing Environment

    Directory of Open Access Journals (Sweden)

    Henry B. Wolfe

    2001-01-01

    Full Text Available Business has become increasingly dependent on the Internet and computing to operate. It has become apparent that there are issues of evidence gathering in a computing environment, which by their nature are technical and different from other forms of evidence gathering, that must be addressed. This paper offers an introduction to some of the technical issues surrounding this new and specialized field of Computer Forensics. It attempts to identify and describe sources of evidence that can be found on disk data storage devices in the course of an investigation. It also considers sources of copies of email, which can be used in evidence, as well as case building.

  16. Architecture independent environment for developing engineering software on MIMD computers

    Science.gov (United States)

    Valimohamed, Karim A.; Lopez, L. A.

    1990-01-01

    Engineers are constantly faced with solving problems of increasing complexity and detail. Multiple Instruction stream Multiple Data stream (MIMD) computers have been developed to overcome the performance limitations of serial computers. The hardware architectures of MIMD computers vary considerably and are much more sophisticated than serial computers. Developing large scale software for a variety of MIMD computers is difficult and expensive. There is a need to provide tools that facilitate programming these machines. First, the issues that must be considered to develop those tools are examined. The two main areas of concern were architecture independence and data management. Architecture independent software facilitates software portability and improves the longevity and utility of the software product. It provides some form of insurance for the investment of time and effort that goes into developing the software. The management of data is a crucial aspect of solving large engineering problems. It must be considered in light of the new hardware organizations that are available. Second, the functional design and implementation of a software environment that facilitates developing architecture independent software for large engineering applications are described. The topics of discussion include: a description of the model that supports the development of architecture independent software; identifying and exploiting concurrency within the application program; data coherence; engineering data base and memory management.

  17. Fatigue of hybrid glass/carbon composites: 3D computational studies

    DEFF Research Database (Denmark)

    Dai, Gaoming; Mishnaevsky, Leon

    2014-01-01

    3D computational simulations of fatigue of hybrid carbon/glass fiber reinforced composites are carried out using X-FEM and multifiber unit cell models. A new software code for the automatic generation of unit cell multifiber models of composites with randomly misaligned fibers of various properties and geometrical parameters is developed. With the use of this program code and the X-FEM method, systematic investigations of the effect of the microstructure of hybrid composites (fraction of carbon versus glass fibers, misalignment, and interface strength) and the loading conditions (tensile versus compression cyclic loading effects) on the fatigue behavior of the materials are carried out. It was demonstrated that a higher fraction of carbon fibers in hybrid composites is beneficial for the fatigue lifetime of the composites under tension-tension cyclic loading, but might have a negative effect on the lifetime...

  18. Distributed computing testbed for a remote experimental environment

    International Nuclear Information System (INIS)

    Butner, D.N.; Casper, T.A.; Howard, B.C.; Henline, P.A.; Davis, S.L.; Barnes, D.

    1995-01-01

    Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady state operation and interactive, real-time experimentation. We are developing tools to provide for the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a "Collaboratory." The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on the DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation's Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large scale experimental facility

  19. Hybrid VLSI/QCA Architecture for Computing FFTs

    Science.gov (United States)

    Fijany, Amir; Toomarian, Nikzad; Modarres, Katayoon; Spotnitz, Matthew

    2003-01-01

    A data-processor architecture that would incorporate elements of both conventional very-large-scale integrated (VLSI) circuitry and quantum-dot cellular automata (QCA) has been proposed to enable the highly parallel and systolic computation of fast Fourier transforms (FFTs). The proposed circuit would complement the QCA-based circuits described in several prior NASA Tech Briefs articles, namely Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), Vol. 25, No. 10 (October 2001), page 42; Compact Interconnection Networks Based on Quantum Dots (NPO-20855) Vol. 27, No. 1 (January 2003), page 32; and Bit-Serial Adder Based on Quantum Dots (NPO-20869), Vol. 27, No. 1 (January 2003), page 35. The cited prior articles described the limitations of very-large-scale integrated (VLSI) circuitry and the major potential advantage afforded by QCA. To recapitulate: In a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes.

  20. Improved signal processing approaches in an offline simulation of a hybrid brain–computer interface

    Science.gov (United States)

    Brunner, Clemens; Allison, Brendan Z.; Krusienski, Dean J.; Kaiser, Vera; Müller-Putz, Gernot R.; Pfurtscheller, Gert; Neuper, Christa

    2012-01-01

    In a conventional brain–computer interface (BCI) system, users perform mental tasks that yield specific patterns of brain activity. A pattern recognition system determines which brain activity pattern a user is producing and thereby infers the user’s mental task, allowing users to send messages or commands through brain activity alone. Unfortunately, despite extensive research to improve classification accuracy, BCIs almost always exhibit errors, which are sometimes so severe that effective communication is impossible. We recently introduced a new idea to improve accuracy, especially for users with poor performance. In an offline simulation of a “hybrid” BCI, subjects performed two mental tasks independently and then simultaneously. This hybrid BCI could use two different types of brain signals common in BCIs – event-related desynchronization (ERD) and steady-state evoked potentials (SSEPs). This study suggested that such a hybrid BCI is feasible. Here, we re-analyzed the data from our initial study. We explored eight different signal processing methods that aimed to improve classification and further assess both the causes and the extent of the benefits of the hybrid condition. Most analyses showed that the improved methods described here yielded a statistically significant improvement over our initial study. Some of these improvements could be relevant to conventional BCIs as well. Moreover, the number of BCI-illiterate users could be reduced with the hybrid condition. Results are also discussed in terms of dual task interference and relevance to protocol design in hybrid BCIs. PMID:20153371

  1. A hybrid source-driven method to compute fast neutron fluence in reactor pressure vessel - 017

    International Nuclear Information System (INIS)

    Ren-Tai, Chiang

    2010-01-01

    A hybrid source-driven method is developed to compute fast neutron fluence with neutron energy greater than 1 MeV in nuclear reactor pressure vessel (RPV). The method determines neutron flux by solving a steady-state neutron transport equation with hybrid neutron sources composed of peripheral fixed fission neutron sources and interior chain-reacted fission neutron sources. The relative rod-by-rod power distribution of the peripheral assemblies in a nuclear reactor obtained from reactor core depletion calculations and subsequent rod-by-rod power reconstruction is employed as the relative rod-by-rod fixed fission neutron source distribution. All fissionable nuclides other than U-238 (such as U-234, U-235, U-236, Pu-239 etc) are replaced with U-238 to avoid counting the fission contribution twice and to preserve fast neutron attenuation for heavy nuclides in the peripheral assemblies. An example is provided to show the feasibility of the method. Since the interior fuels only have a marginal impact on RPV fluence results due to rapid attenuation of interior fast fission neutrons, a generic set or one of several generic sets of interior fuels can be used as the driver and only the neutron sources in the peripheral assemblies will be changed in subsequent hybrid source-driven fluence calculations. Consequently, this hybrid source-driven method can simplify and reduce cost for fast neutron fluence computations. This newly developed hybrid source-driven method should be a useful and simplified tool for computing fast neutron fluence at selected locations of interest in RPV of contemporary nuclear power reactors. (authors)

  2. Acoustic radiosity for computation of sound fields in diffuse environments

    Science.gov (United States)

    Muehleisen, Ralph T.; Beamer, C. Walter

    2002-05-01

    The use of image and ray tracing methods (and variations thereof) for the computation of sound fields in rooms is relatively well developed. In their regime of validity, both methods work well for prediction in rooms with small amounts of diffraction and mostly specular reflection at the walls. While extensions to the method to include diffuse reflections and diffraction have been made, they are limited at best. In the fields of illumination and computer graphics the ray tracing and image methods are joined by another method called luminous radiative transfer or radiosity. In radiosity, an energy balance between surfaces is computed assuming diffuse reflection at the reflective surfaces. Because the interaction between surfaces is constant, much of the computation required for sound field prediction with multiple or moving source and receiver positions can be reduced. In acoustics the radiosity method has had little attention because of the problems of diffraction and specular reflection. The utility of radiosity in acoustics and an approach to a useful development of the method for acoustics will be presented. The method looks especially useful for sound level prediction in industrial and office environments. [Work supported by NSF.
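
    For reference, the discrete energy balance that radiosity methods solve can be stated in one line (standard form, not specific to the acoustic variant proposed here):

```latex
% Radiosity of patch i: emission plus reflected fraction of incoming power
% (B_i: radiosity, E_i: emitted power, rho_i: diffuse reflectance,
% F_{ij}: form factor from patch i to patch j).
B_i \;=\; E_i + \rho_i \sum_{j=1}^{N} F_{ij}\, B_j
```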

  3. Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces

    International Nuclear Information System (INIS)

    Brown, W. Michael; Wang, Peng; Plimpton, Steven J.; Tharrington, Arnold N.

    2011-01-01

    The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines - (1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, (2) minimizing the amount of code that must be ported for efficient acceleration, (3) utilizing the available processing power from both many-core CPUs and accelerators, and (4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
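
    The abstract mentions dynamic load balancing between CPU and accelerator cores without giving the rule. A common scheme, sketched below in Python as an assumption rather than the actual LAMMPS code, adjusts the GPU's share of the work so that both sides finish a timestep at the same time.

```python
# Hedged sketch of time-based dynamic load balancing between CPU and
# accelerator cores: the fraction of work offloaded to the GPU is nudged
# toward the split that makes both sides finish simultaneously.

def rebalance(gpu_fraction, t_gpu, t_cpu, damping=0.5):
    """Shift work toward whichever side finished earlier."""
    total = t_gpu + t_cpu
    if total == 0.0:
        return gpu_fraction
    # Split that equalizes finish times, given the measured per-side rates.
    ideal = gpu_fraction * t_cpu / (gpu_fraction * t_cpu
                                    + (1 - gpu_fraction) * t_gpu)
    new_fraction = gpu_fraction + damping * (ideal - gpu_fraction)
    return min(0.95, max(0.05, new_fraction))

# Example: GPU did 50% of the work but finished twice as fast -> give it more.
f = 0.5
for step in range(5):
    f = rebalance(f, t_gpu=1.0, t_cpu=2.0)
    print(round(f, 3))   # converges toward 2/3
```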

  4. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    International Nuclear Information System (INIS)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Quast, Günter; Janczyk, Michael; Von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-01-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed and a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare-metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup, no static partitioning of the cluster into a physical and virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare

  5. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    Science.gov (United States)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-10-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed and a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare-metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup, no static partitioning of the cluster into a physical and virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare

  6. An online hybrid brain-computer interface combining multiple physiological signals for webpage browse.

    Science.gov (United States)

    Long Chen; Zhongpeng Wang; Feng He; Jiajia Yang; Hongzhi Qi; Peng Zhou; Baikun Wan; Dong Ming

    2015-08-01

    The hybrid brain computer interface (hBCI) can provide a higher information transfer rate than classical BCIs. It includes more than one brain-computer or human-machine interaction paradigm, such as the combination of the P300 and SSVEP paradigms. We first constructed independent subsystems for three different paradigms and tested each of them in online experiments. Then we constructed a serial hybrid BCI system which combined these paradigms to achieve the functions of typing letters, moving and clicking a cursor, and switching among them for the purpose of browsing webpages. Five subjects were involved in this study. They all successfully realized these functions in the online tests. The subjects could achieve an accuracy above 90% after training, which met the requirement for operating the system efficiently. The results demonstrated that it is an efficient system capable of robustness, which provides an approach for clinical application.

  7. Hybrid and hierarchical nanoreinforced polymer composites: Computational modelling of structure–properties relationships

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon; Dai, Gaoming

    2014-01-01

    ...by using computational micromechanical models. It is shown that while glass/carbon fiber hybrid composites clearly demonstrate higher stiffness and lower weight with increasing carbon content, they can have lower strength as compared with usual glass fiber polymer composites. Secondary nanoreinforcement can drastically increase the fatigue lifetime of composites. Especially, composites with the nanoplatelets localized in the fiber/matrix interface layer (fiber sizing) ensure much higher fatigue lifetime than those with the nanoplatelets in the matrix.

  8. Solving Problems in Various Domains by Hybrid Models of High Performance Computations

    Directory of Open Access Journals (Sweden)

    Yurii Rogozhin

    2014-03-01

    Full Text Available This work presents a hybrid model of high performance computations. The model is based on a membrane system (P system), where some membranes may contain a quantum device that is triggered by the data entering the membrane. This model is supposed to take advantage of both the biomolecular and quantum paradigms and to overcome some of their inherent limitations. The proposed approach is demonstrated through two selected problems: SAT and image retrieval.

  9. Hybrid magic state distillation for universal fault-tolerant quantum computation

    OpenAIRE

    Zheng, Wenqiang; Yu, Yafei; Pan, Jian; Zhang, Jingfu; Li, Jun; Li, Zhaokai; Suter, Dieter; Zhou, Xianyi; Peng, Xinhua; Du, Jiangfeng

    2014-01-01

    A set of stabilizer operations augmented by some special initial states known as 'magic states', gives the possibility of universal fault-tolerant quantum computation. However, magic state preparation inevitably involves nonideal operations that introduce noise. The most common method to eliminate the noise is magic state distillation (MSD) by stabilizer operations. Here we propose a hybrid MSD protocol by connecting a four-qubit H-type MSD with a five-qubit T-type MSD, in order to overcome s...

  10. Preserving access to ALEPH computing environment via virtual machines

    International Nuclear Information System (INIS)

    Coscetti, Simone; Boccali, Tommaso; Arezzini, Silvia; Maggi, Marcello

    2014-01-01

    The ALEPH Collaboration [1] took data at the LEP (CERN) electron-positron collider in the period 1989-2000, producing more than 300 scientific papers. While most of the Collaboration's activities stopped in recent years, the data collected still have physics potential, with new theoretical models emerging which call for checks against data at the Z and WW production energies. An attempt to revive and preserve the ALEPH Computing Environment is presented; the aim is not only the preservation of the data files (usually called bit preservation), but of the full environment a physicist would need to perform brand new analyses. Technically, a Virtual Machine approach has been chosen, using the VirtualBox platform. Concerning simulated events, the full chain from event generators to physics plots is possible, and reprocessing of data events is also functioning. Interactive tools like the DALI event display can be used on both data and simulated events. The Virtual Machine approach is suited both for interactive usage and for massive computing using Cloud-like approaches.

  11. Enabling Computational Dynamics in Distributed Computing Environments Using a Heterogeneous Computing Template

    Science.gov (United States)

    2011-08-09

    ...heterogeneous computing concept advertised recently as the paradigm capable of delivering exascale flop rates by the end of the decade.

  12. Large-scale parallel genome assembler over cloud computing environment.

    Science.gov (United States)

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of a traditional HPC cluster.
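
    The de Bruijn graph at the heart of such assemblers is simple to state: nodes are (k-1)-mers and each k-mer in a read contributes one edge. The minimal Python sketch below builds that graph from two toy reads; it illustrates the data structure only, not GiGA's Giraph/Hadoop implementation.

```python
# Minimal de Bruijn graph construction from reads.
from collections import defaultdict

def de_bruijn(reads, k):
    """Nodes are (k-1)-mers; an edge joins the prefix and suffix of each k-mer."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

reads = ["ACGTAC", "GTACGT"]
for node, successors in sorted(de_bruijn(reads, 4).items()):
    print(node, "->", successors)
```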

  13. Cloud computing-based energy optimization control framework for plug-in hybrid electric bus

    International Nuclear Information System (INIS)

    Yang, Chao; Li, Liang; You, Sixiong; Yan, Bingjie; Du, Xian

    2017-01-01

    Considering the complicated characteristics of traffic flow on a city bus route and the nonlinear vehicle dynamics, optimal energy management integrated with clustering and recognition of driving conditions in a plug-in hybrid electric bus is still a challenging problem. Motivated by this issue, this paper presents an innovative energy optimization control framework based on cloud computing for the plug-in hybrid electric bus. This framework, which includes an offline part and an online part, realizes driving-condition clustering in the offline part and energy management in the online part. In the offline part, utilizing the operating data transferred from a bus to the remote monitoring center, the K-means algorithm is adopted to cluster the driving conditions, and Markov probability transfer matrixes are then generated to predict the possible operating demand of the bus driver. In the online part, the current driving condition is identified in real time by a well-trained support vector machine, and Markov chains-based driving behaviors are selected accordingly. With the stochastic inputs, a stochastic receding horizon control method is adopted to obtain the optimized energy management of the hybrid powertrain. Simulations and hardware-in-the-loop tests are carried out with the real-world city bus route, and the results show that the presented strategy can greatly improve the vehicle fuel economy; as the traffic flow data feedback increases, the fuel consumption of every plug-in hybrid electric bus running on a specific bus route tends toward a stable minimum. - Highlights: • Cloud computing-based energy optimization control framework is proposed. • Driving cycles are clustered into 6 types by the K-means algorithm. • A support vector machine is employed to realize online recognition of the driving condition. • A stochastic receding horizon control-based energy management strategy is designed for the plug-in hybrid electric bus. • The proposed framework is verified by simulation and hard
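
    The Markov step described above can be illustrated compactly: discretize logged speeds into states and count state-to-state transitions. The Python sketch below does this with made-up bin edges and data, as an illustration rather than the paper's calibration.

```python
# Estimate a Markov transition probability matrix over discretized driving
# states (speed bins) from logged cycle data; bins and samples are made up.
import numpy as np

def transition_matrix(speeds, bins):
    states = np.digitize(speeds, bins)           # map speeds to state indices
    n = len(bins) + 1
    counts = np.zeros((n, n))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0                        # avoid division by zero
    return counts / rows

speeds = np.array([0, 5, 12, 18, 15, 9, 3, 0, 4, 11])   # km/h samples
print(transition_matrix(speeds, bins=[5, 10, 15]))
```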

  14. Computational modeling of electrically conductive networks formed by graphene nanoplatelet-carbon nanotube hybrid particles

    KAUST Repository

    Mora Cordova, Angel

    2018-01-30

    One strategy to ensure that nanofiller networks in a polymer composite percolate at low volume fractions is to promote segregation. In a segregated structure, the concentration of nanofillers is kept low in some regions of the sample. In turn, the concentration in the remaining regions is much higher than the average concentration of the sample. This selective placement of the nanofillers ensures percolation at low average concentration. One original strategy to promote segregation is by tuning the shape of the nanofillers. We use a computational approach to study the conductive networks formed by hybrid particles obtained by growing carbon nanotubes (CNTs) on graphene nanoplatelets (GNPs). The objective of this study is (1) to show that the higher electrical conductivity of these composites is due to the hybrid particles forming a segregated structure and (2) to understand which parameters defining the hybrid particles determine the efficiency of the segregation. We construct a microstructure to observe the conducting paths and determine whether a segregated structure has indeed been formed inside the composite. A measure of efficiency is presented based on the fraction of nanofillers that contribute to the conductive network. Then, the efficiency of the hybrid-particle networks is compared to those of three other networks of carbon-based nanofillers in which no hybrid particles are used: only CNTs, only GNPs, and a mix of CNTs and GNPs. Finally, some parameters of the hybrid particle are studied: the CNT density on the GNPs, and the CNT and GNP geometries. We also present recommendations for the further improvement of a composite's conductivity based on these parameters.

  15. Computational modeling of electrically conductive networks formed by graphene nanoplatelet-carbon nanotube hybrid particles

    Science.gov (United States)

    Mora, A.; Han, F.; Lubineau, G.

    2018-04-01

    One strategy to ensure that nanofiller networks in a polymer composite percolate at low volume fractions is to promote segregation. In a segregated structure, the concentration of nanofillers is kept low in some regions of the sample. In turn, the concentration in the remaining regions is much higher than the average concentration of the sample. This selective placement of the nanofillers ensures percolation at low average concentration. One original strategy to promote segregation is by tuning the shape of the nanofillers. We use a computational approach to study the conductive networks formed by hybrid particles obtained by growing carbon nanotubes (CNTs) on graphene nanoplatelets (GNPs). The objective of this study is (1) to show that the higher electrical conductivity of these composites is due to the hybrid particles forming a segregated structure and (2) to understand which parameters defining the hybrid particles determine the efficiency of the segregation. We construct a microstructure to observe the conducting paths and determine whether a segregated structure has indeed been formed inside the composite. A measure of efficiency is presented based on the fraction of nanofillers that contribute to the conductive network. Then, the efficiency of the hybrid-particle networks is compared to those of three other networks of carbon-based nanofillers in which no hybrid particles are used: only CNTs, only GNPs, and a mix of CNTs and GNPs. Finally, some parameters of the hybrid particle are studied: the CNT density on the GNPs, and the CNT and GNP geometries. We also present recommendations for the further improvement of a composite’s conductivity based on these parameters.

  16. Computational modeling of electrically conductive networks formed by graphene nanoplatelet-carbon nanotube hybrid particles

    KAUST Repository

    Mora Cordova, Angel; Han, Fei; Lubineau, Gilles

    2018-01-01

    One strategy to ensure that nanofiller networks in a polymer composite percolate at low volume fractions is to promote segregation. In a segregated structure, the concentration of nanofillers is kept low in some regions of the sample. In turn, the concentration in the remaining regions is much higher than the average concentration of the sample. This selective placement of the nanofillers ensures percolation at low average concentration. One original strategy to promote segregation is by tuning the shape of the nanofillers. We use a computational approach to study the conductive networks formed by hybrid particles obtained by growing carbon nanotubes (CNTs) on graphene nanoplatelets (GNPs). The objective of this study is (1) to show that the higher electrical conductivity of these composites is due to the hybrid particles forming a segregated structure and (2) to understand which parameters defining the hybrid particles determine the efficiency of the segregation. We construct a microstructure to observe the conducting paths and determine whether a segregated structure has indeed been formed inside the composite. A measure of efficiency is presented based on the fraction of nanofillers that contribute to the conductive network. Then, the efficiency of the hybrid-particle networks is compared to those of three other networks of carbon-based nanofillers in which no hybrid particles are used: only CNTs, only GNPs, and a mix of CNTs and GNPs. Finally, some parameters of the hybrid particle are studied: the CNT density on the GNPs, and the CNT and GNP geometries. We also present recommendations for the further improvement of a composite's conductivity based on these parameters.
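
    As an illustration of this record's efficiency measure (the fraction of nanofillers that actually join the conductive network), the following is a minimal percolation sketch in Python. It is a stand-in, not the paper's model: spheres in a unit box replace the rod and platelet geometries, and the particle count, radius, and contact rule are arbitrary placeholders.

```python
import numpy as np

# Spheres stand in for nanofillers; two fillers are "in contact" when their
# centres are closer than 2r. A cluster percolates along x if it touches both
# faces of the unit box; efficiency = fraction of fillers in such a cluster.
rng = np.random.default_rng(0)
N, r = 2000, 0.02
pts = rng.random((N, 3))

parent = list(range(N))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i
def union(i, j):
    parent[find(i)] = find(j)

for i in range(N):
    d = np.linalg.norm(pts - pts[i], axis=1)
    for j in np.nonzero(d < 2 * r)[0]:
        if j > i:
            union(i, j)

roots = np.array([find(i) for i in range(N)])
left = set(roots[pts[:, 0] < 2 * r])          # clusters touching x = 0
right = set(roots[pts[:, 0] > 1 - 2 * r])     # clusters touching x = 1
spanning = left & right
efficiency = np.isin(roots, list(spanning)).mean()
print(f"fraction of fillers in a percolating network: {efficiency:.3f}")
```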

  17. McMaster University: College and University Computing Environment.

    Science.gov (United States)

    CAUSE/EFFECT, 1988

    1988-01-01

    The computing and information services (CIS) organization includes administrative computing, academic computing, and networking and has three divisions: computing services, development services, and information services. Other computing activities include Health Sciences, Humanities Computing Center, and Department of Computer Science and Systems.…

  18. Phenology, sterility and inheritance of two environment genic male sterile (EGMS) lines for hybrid rice.

    Science.gov (United States)

    El-Namaky, R; van Oort, P A J

    2017-12-01

    There is still limited quantitative understanding of how environmental factors affect the sterility of environment-conditioned genic male sterility (EGMS) lines. A model was developed for this purpose and tested based on experimental data from Ndiaye (Senegal) in 2013-2015. For the two EGMS lines tested here, it was not clear whether one or more recessive gene(s) were causing male sterility. This was tested by studying sterility segregation in the F2 populations. Daylength (photoperiod) and minimum temperatures during the period from panicle initiation to flowering had significant effects on male sterility. Results clearly showed that only one recessive gene was involved in causing male sterility. The model was applied to determine the set of sowing dates of the two EGMS lines such that both would flower at the same time, during a period when their pollen would be completely sterile, while the local popular variety (Sahel 108, the male pollen donor) would be sufficiently fertile to produce the hybrid seeds. The model was also applied to investigate the viability of the two-line breeding system in the same location under climate change (+2°C) and in two other potential locations: M'Be in Ivory Coast and the Nile delta in Egypt. Apart from giving new insights into the relation between environment and EGMS, this study shows that these insights can be used to assess safe sowing windows and the suitability of the sterility and fertility periods of different environments for a two-line hybrid rice production system.
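
    As a purely illustrative companion to this record, the sketch below encodes the qualitative relationship described in the abstract (longer days and higher minimum temperatures between panicle initiation and flowering raise pollen sterility) as a logistic response, and scans a synthetic year for a window where sterility is high enough for safe hybrid seed production. Every threshold, slope, and weather series here is invented for the sketch; none are the paper's fitted values.

```python
import numpy as np

def sterility(daylength_h, tmin_c, dl_crit=13.5, t_crit=24.0, k_dl=4.0, k_t=1.5):
    # Hypothetical logistic pollen-sterility response to photoperiod and
    # minimum temperature; all thresholds and slopes are illustrative only.
    z = k_dl * (daylength_h - dl_crit) + k_t * (tmin_c - t_crit)
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic annual daylength and minimum-temperature series (placeholders).
day = np.arange(365)
daylength = 11.5 + 2.5 * np.sin(2 * np.pi * (day - 80) / 365)
tmin = 22.0 + 6.0 * np.sin(2 * np.pi * (day - 110) / 365)

# A "safe" flowering window requires near-complete sterility of the EGMS line.
safe = np.nonzero(sterility(daylength, tmin) >= 0.995)[0]
print(f"safe flowering window: days {safe.min()}-{safe.max()}" if safe.size else "none")
```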

  19. 16th International Conference on Hybrid Intelligent Systems and the 8th World Congress on Nature and Biologically Inspired Computing

    CERN Document Server

    Haqiq, Abdelkrim; Alimi, Adel; Mezzour, Ghita; Rokbani, Nizar; Muda, Azah

    2017-01-01

    This book presents the latest research in hybrid intelligent systems. It includes 57 carefully selected papers from the 16th International Conference on Hybrid Intelligent Systems (HIS 2016) and the 8th World Congress on Nature and Biologically Inspired Computing (NaBIC 2016), held on November 21–23, 2016 in Marrakech, Morocco. HIS - NaBIC 2016 was jointly organized by the Machine Intelligence Research Labs (MIR Labs), USA; Hassan 1st University, Settat, Morocco and University of Sfax, Tunisia. Hybridization of intelligent systems is a promising research field in modern artificial/computational intelligence and is concerned with the development of the next generation of intelligent systems. The conference’s main aim is to inspire further exploration of the intriguing potential of hybrid intelligent systems and bio-inspired computing. As such, the book is a valuable resource for practicing engineers/scientists and researchers working in the field of computational intelligence and artificial intelligence.

  20. Welfare Mix and Hybridity. Flexible Adjustments to Changed Environments. Introduction to the Special Issue

    DEFF Research Database (Denmark)

    Henriksen, Lars Skov; Smith, Steven Rathgeb; Zimmer, Annette

    2015-01-01

    Present day welfare societies rely on a complex mix of different providers ranging from the state, markets, family, and non-profit organizations to unions, grassroots organizations, and informal networks. At the same time changing welfare discourses have opened up space for new partnerships...... organizations and organizational fields adjust to a new environment that is increasingly dominated by the logic of the market, and how in particular nonprofit organizations, as hybrids by definition, are able to cope with new demands, funding structures, and control mechanisms....

  1. FAST - A multiprocessed environment for visualization of computational fluid dynamics

    International Nuclear Information System (INIS)

    Bancroft, G.V.; Merritt, F.J.; Plessel, T.C.; Kelaita, P.G.; Mccabe, R.K.

    1991-01-01

    The paper presents the Flow Analysis Software Toolset (FAST) to be used for fluid-mechanics analysis. The design criteria for FAST including the minimization of the data path in the computational fluid-dynamics (CFD) process, consistent user interface, extensible software architecture, modularization, and the isolation of three-dimensional tasks from the application programmer are outlined. Each separate process communicates through the FAST Hub, while other modules such as FAST Central, NAS file input, CFD calculator, surface extractor and renderer, titler, tracer, and isolev might work together to generate the scene. An interprocess communication package making it possible for FAST to operate as a modular environment where resources could be shared among different machines as well as a single host is discussed. 20 refs

  2. Student Advising and Retention Application in Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Gurdeep S Hura

    2016-11-01

    Full Text Available This paper proposes a new user-friendly application that enhances and expands the current advising services of Gradesfirst, the tool currently used for advising and retention by the Athletic Department of UMES, by adding performance activities such as mentoring, tutoring, scheduling, and study hall hours to the existing tools. The application includes various measurements that can be used to monitor and improve student performance in the Athletic Department of UMES by tracking students’ weekly study hall hours and tutoring schedules. It also supervises tutors’ login and logout activities in order to monitor their effectiveness, supervises tutor-tutee interaction, and stores and analyzes the overall academic progress of each student. A dedicated server for providing services will be developed at the local site. The work has been implemented in three steps. The first step involves the creation of an independent cloud computing environment that provides resources such as database creation, query-based statistical data, performance-measure activities, and automated support of performance measures such as advising, mentoring, monitoring and tutoring. The second step involves the creation of an application known as Student Advising and Retention (SAR) in a cloud computing environment. This application has been designed to be a comprehensive database management system containing relevant data on student academic development that supports strategic advising and monitoring of students. The third step involves the creation of a systematic advising chart and frameworks that help advisors. The paper shows ways of creating the most appropriate advising technique based on the student’s academic needs. The proposed application runs on a Windows-based system. As stated above, the proposed application is expected to enhance and expand the current advising services of the Gradesfirst tool. A brief

  3. Preliminary Study on Hybrid Computational Phantom for Radiation Dosimetry Based on Subdivision Surface

    International Nuclear Information System (INIS)

    Jeong, Jong Hwi; Choi, Sang Hyoun; Cho, Sung Koo; Kim, Chan Hyeong

    2007-01-01

    The anthropomorphic computational phantoms are classified into two groups. One group is the stylized phantoms, or MIRD phantoms, which are based on mathematical representations of the anatomical structures. The shapes and positions of the organs and tissues in these phantoms can be adjusted by changing the coefficients of the equations in use. The other group is the voxel phantoms, which are based on tomographic images of a real person such as CT, MR and serially sectioned color slice images from a cadaver. Obviously, the voxel phantoms represent the anatomical structures of a human body much more realistically than the stylized phantoms. A realistic representation of anatomical structure is very important for an accurate calculation of radiation dose in the human body. Consequently, the ICRP recently has decided to use the voxel phantoms for the forthcoming update of the dose conversion coefficients. However, the voxel phantoms also have some limitations: (1) The topology and dimensions of the organs and tissues in a voxel model are extremely difficult to change, and (2) The thin organs, such as oral mucosa and skin, cannot be realistically modeled unless the voxel resolution is prohibitively high. Recently, a new approach has been implemented by several investigators. The investigators converted their voxel phantoms to hybrid computational phantoms based on NURBS (Non-Uniform Rational B-Splines) surface, which is smooth and deformable. It is claimed that these new phantoms have the flexibility of the stylized phantom along with the realistic representations of the anatomical structures. The topology and dimensions of the anatomical structures can be easily changed as necessary. Thin organs can be modeled without affecting computational speed or memory requirement. The hybrid phantoms can be also used for 4-D Monte Carlo simulations. In this preliminary study, the external shape of a voxel phantom (i.e., skin), HDRK-Man, was converted to a hybrid computational

  4. A new methodological development for solving linear bilevel integer programming problems in hybrid fuzzy environment

    Directory of Open Access Journals (Sweden)

    Animesh Biswas

    2016-04-01

    Full Text Available This paper deals with a fuzzy goal programming approach to solve fuzzy linear bilevel integer programming problems with fuzzy probabilistic constraints following Pareto distribution and Frechet distribution. In the proposed approach a new chance constrained programming methodology is developed from the viewpoint of managing those probabilistic constraints in a hybrid fuzzy environment. A method of defuzzification of fuzzy numbers using α-cut has been adopted to reduce the problem into a linear bilevel integer programming problem. The individual optimal value of the objective of each decision maker (DM) is found in isolation to construct the fuzzy membership goals. Finally, a fuzzy goal programming approach is used to achieve the maximum degree of each of the membership goals by minimizing the under-deviational variables in the decision making environment. To demonstrate the efficiency of the proposed approach, a numerical example is provided.
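
    The defuzzification step mentioned in this record rests on α-cuts. For a triangular fuzzy number (l, m, u), the α-cut is the interval of values whose membership is at least α. The sketch below shows only that interval arithmetic; the bilevel structure and the probabilistic constraints of the paper are omitted, and the numbers are arbitrary.

```python
# α-cut of a triangular fuzzy number (l, m, u): membership rises linearly
# from 0 at l to 1 at m, then falls linearly back to 0 at u.
def alpha_cut(l, m, u, alpha):
    return (l + alpha * (m - l), u - alpha * (u - m))

lo, hi = alpha_cut(2.0, 5.0, 9.0, alpha=0.6)   # -> (3.8, 6.6)
crisp = 0.5 * (lo + hi)                        # midpoint as one crisp surrogate
print(lo, hi, crisp)
```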

  5. KeyWare: an open wireless distributed computing environment

    Science.gov (United States)

    Shpantzer, Isaac; Schoenfeld, Larry; Grindahl, Merv; Kelman, Vladimir

    1995-12-01

    Deployment of distributed applications in the wireless domain lacks the equivalent tools, methodologies, architectures, and network management that exist in LAN-based applications. A wireless distributed computing environment (KeyWare(TM)) based on intelligent agents within a multiple-client multiple-server scheme was developed to resolve this problem. KeyWare renders concurrent application services to wireline and wireless client nodes encapsulated in multiple paradigms such as message delivery, database access, e-mail, and file transfer. These services and paradigms are optimized to cope with temporal and spatial radio coverage, high latency, limited throughput and transmission costs. A unified network management paradigm for both wireless and wireline facilitates seamless extensions of LAN-based management tools to include wireless nodes. A set of object-oriented tools and methodologies enables direct asynchronous invocation of agent-based services supplemented by tool-sets matched to supported KeyWare paradigms. The open architecture embodiment of KeyWare enables a wide selection of client node computing platforms, operating systems, transport protocols, radio modems and infrastructures while maintaining application portability.

  6. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Science.gov (United States)

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  7. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Directory of Open Access Journals (Sweden)

    Bruno Guazzelli Batista

    Full Text Available Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  8. Learning styles: individualizing computer-based learning environments

    Directory of Open Access Journals (Sweden)

    Tim Musson

    1995-12-01

    Full Text Available While the need to adapt teaching to the needs of a student is generally acknowledged (see Corno and Snow, 1986, for a wide review of the literature), little is known about the impact of individual learner-differences on the quality of learning attained within computer-based learning environments (CBLEs). What evidence there is appears to support the notion that individual differences have implications for the degree of success or failure experienced by students (Ford and Ford, 1992) and by trainee end-users of software packages (Bostrom et al, 1990). The problem is to identify the way in which specific individual characteristics of a student interact with particular features of a CBLE, and how the interaction affects the quality of the resultant learning. Teaching in a CBLE is likely to require a subset of teaching strategies different from that subset appropriate to more traditional environments, and the use of a machine may elicit different behaviours from those normally arising in a classroom context.

  9. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    Science.gov (United States)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provides a myriad of challenges when running in a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well-tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large-scale simulation and analysis work is commonplace, providing operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task-parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases to which they have been applied.
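
    The task-parallel pattern mentioned at the end of this record (MPI) can be sketched with mpi4py: each rank takes a slice of the work list and the root gathers the results. The file names and the per-file routine below are placeholders, not part of the CASCADE tool stack.

```python
from mpi4py import MPI  # run with e.g.: mpirun -n 8 python analyze.py

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

files = [f"cascade_run_{i:04d}.nc" for i in range(200)]  # hypothetical inputs
my_files = files[rank::size]                             # round-robin assignment

def analyze(path):
    # placeholder for a per-file statistical routine (extremes detection, etc.)
    return path, 0.0

local = [analyze(f) for f in my_files]
gathered = comm.gather(local, root=0)    # collect per-rank result lists
if rank == 0:
    results = [item for chunk in gathered for item in chunk]
    print(f"collected {len(results)} results")
```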

  10. Environment Modules on the Peregrine System | High-Performance Computing |

    Science.gov (United States)

    Peregrine uses environment modules to easily manage software environments. The modules commands set up a basic environment for the default compilers, tools and libraries, such as the

  11. All-optical quantum computing with a hybrid solid-state processing unit

    International Nuclear Information System (INIS)

    Pei Pei; Zhang Fengyang; Li Chong; Song Heshan

    2011-01-01

    We develop an architecture of a hybrid quantum solid-state processing unit for universal quantum computing. The architecture allows distant and nonidentical solid-state qubits in distinct physical systems to interact and work collaboratively. All the quantum computing procedures are controlled by optical methods using classical fields and cavity QED. Our methods have the prominent advantage of insensitivity to the dissipation process, benefiting from the virtual excitation of subsystems. Moreover, quantum nondemolition measurements and state transfer for the solid-state qubits are proposed. The architecture opens promising perspectives for implementing scalable quantum computation in a broader sense, in that different solid-state systems can merge and be integrated into one quantum processor.

  12. A novel hybrid approach with multidimensional-like effects for compressible flow computations

    Science.gov (United States)

    Kalita, Paragmoni; Dass, Anoop K.

    2017-07-01

    A multidimensional scheme achieves good resolution of strong and weak shocks irrespective of whether the discontinuities are aligned with or inclined to the grid. However, these schemes are computationally expensive. This paper achieves similar effects by hybridizing two schemes, namely, AUSM and DRLLF and coupling them through a novel shock switch that operates - unlike existing switches - on the gradient of the Mach number across the cell-interface. The schemes that are hybridized have contrasting properties. The AUSM scheme captures grid-aligned (and strong) shocks crisply but it is not so good for non-grid-aligned weaker shocks, whereas the DRLLF scheme achieves sharp resolution of non-grid-aligned weaker shocks, but is not as good for grid-aligned strong shocks. It is our experience that if conventional shock switches based on variables like density, pressure or Mach number are used to combine the schemes, the desired effect of crisp resolution of grid-aligned and non-grid-aligned discontinuities are not obtained. To circumvent this problem we design a shock switch based - for the first time - on the gradient of the cell-interface Mach number with very impressive results. Thus the strategy of hybridizing two carefully selected schemes together with the innovative design of the shock switch that couples them, affords a method that produces the effects of a multidimensional scheme with a lower computational cost. It is further seen that hybridization of the AUSM scheme with the recently developed DRLLFV scheme using the present shock switch gives another scheme that provides crisp resolution for both shocks and boundary layers. Merits of the scheme are established through a carefully selected set of numerical experiments.
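
    A minimal sketch of the hybridization idea described here: two baseline fluxes blended by a switch driven by the jump in Mach number across the cell interface. The reference gradient and the blending direction below are placeholders for illustration; the actual switch design and flux formulas are those of the paper.

```python
import numpy as np

def shock_switch(M_L, M_R, dM_ref=0.25):
    # Switch in [0, 1] driven by the interface Mach-number jump;
    # dM_ref is an arbitrary normalizing gradient for this sketch.
    return np.clip(np.abs(M_R - M_L) / dM_ref, 0.0, 1.0)

def hybrid_flux(F_ausm, F_drllf, M_L, M_R):
    s = shock_switch(M_L, M_R)
    # Blend the two baseline fluxes; which scheme dominates as s -> 1
    # is a design choice of the paper, taken here as illustrative only.
    return s * F_ausm + (1.0 - s) * F_drllf

print(hybrid_flux(1.0, 1.2, M_L=0.8, M_R=1.6))   # strong jump -> 1.0
```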

  13. A hybrid model for the computationally-efficient simulation of the cerebellar granular layer

    Directory of Open Access Journals (Sweden)

    Anna Cattani

    2016-04-01

    Full Text Available The aim of the present paper is to efficiently describe the membrane potential dynamics of neural populations formed by species having a high density difference in specific brain areas. We propose a hybrid model whose main ingredients are a conductance-based model (ODE system) and its continuous counterpart (PDE system) obtained through a limit process in which the number of neurons confined in a bounded region of the brain tissue is sent to infinity. Specifically, in the discrete model, each cell is described by a set of time-dependent variables, whereas in the continuum model, cells are grouped into populations that are described by a set of continuous variables. Communications between populations, which translate into interactions among the discrete and the continuous models, are the essence of the hybrid model we present here. The cerebellum and cerebellum-like structures show in their granular layer a large difference in the relative density of neuronal species, making them a natural testing ground for our hybrid model. By reconstructing the ensemble activity of the cerebellar granular layer network and by comparing our results to a more realistic computational network, we demonstrate that our description of the network activity, even though it is not biophysically detailed, is still capable of reproducing salient features of neural network dynamics. Our modeling approach yields a significant computational cost reduction by increasing the simulation speed at least 270 times. The hybrid model reproduces interesting dynamics such as local microcircuit synchronization, traveling waves, center-surround and time-windowing.

  14. The UF family of hybrid phantoms of the developing human fetus for computational radiation dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Maynard, Matthew R; Geyer, John W; Bolch, Wesley [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL (United States); Aris, John P [Department of Anatomy and Cell Biology, University of Florida, Gainesville, FL (United States); Shifrin, Roger Y, E-mail: wbolch@ufl.edu [Department of Radiology, University of Florida, Gainesville, FL (United States)

    2011-08-07

    Historically, the development of computational phantoms for radiation dosimetry has primarily been directed at capturing and representing adult and pediatric anatomy, with less emphasis devoted to models of the human fetus. As concern grows over possible radiation-induced cancers from medical and non-medical exposures of the pregnant female, the need to better quantify fetal radiation doses, particularly at the organ-level, also increases. Studies such as the European Union's SOLO (Epidemiological Studies of Exposed Southern Urals Populations) hope to improve our understanding of cancer risks following chronic in utero radiation exposure. For projects such as SOLO, currently available fetal anatomic models do not provide sufficient anatomical detail for organ-level dose assessment. To address this need, two fetal hybrid computational phantoms were constructed using high-quality magnetic resonance imaging and computed tomography image sets obtained for two well-preserved fetal specimens aged 11.5 and 21 weeks post-conception. Individual soft tissue organs, bone sites and outer body contours were segmented from these images using 3D-DOCTOR(TM) and then imported to the 3D modeling software package Rhinoceros(TM) for further modeling and conversion of soft tissue organs, certain bone sites and outer body contours to deformable non-uniform rational B-spline surfaces. The two specimen-specific phantoms, along with a modified version of the 38 week UF hybrid newborn phantom, comprised a set of base phantoms from which a series of hybrid computational phantoms was derived for fetal ages 8, 10, 15, 20, 25, 30, 35 and 38 weeks post-conception. The methodology used to construct the series of phantoms accounted for the following age-dependent parameters: (1) variations in skeletal size and proportion, (2) bone-dependent variations in relative levels of bone growth, (3) variations in individual organ masses and total fetal masses and (4) statistical percentile variations

  15. The UF family of hybrid phantoms of the developing human fetus for computational radiation dosimetry

    International Nuclear Information System (INIS)

    Maynard, Matthew R; Geyer, John W; Bolch, Wesley; Aris, John P; Shifrin, Roger Y

    2011-01-01

    Historically, the development of computational phantoms for radiation dosimetry has primarily been directed at capturing and representing adult and pediatric anatomy, with less emphasis devoted to models of the human fetus. As concern grows over possible radiation-induced cancers from medical and non-medical exposures of the pregnant female, the need to better quantify fetal radiation doses, particularly at the organ-level, also increases. Studies such as the European Union's SOLO (Epidemiological Studies of Exposed Southern Urals Populations) hope to improve our understanding of cancer risks following chronic in utero radiation exposure. For projects such as SOLO, currently available fetal anatomic models do not provide sufficient anatomical detail for organ-level dose assessment. To address this need, two fetal hybrid computational phantoms were constructed using high-quality magnetic resonance imaging and computed tomography image sets obtained for two well-preserved fetal specimens aged 11.5 and 21 weeks post-conception. Individual soft tissue organs, bone sites and outer body contours were segmented from these images using 3D-DOCTOR(TM) and then imported to the 3D modeling software package Rhinoceros(TM) for further modeling and conversion of soft tissue organs, certain bone sites and outer body contours to deformable non-uniform rational B-spline surfaces. The two specimen-specific phantoms, along with a modified version of the 38 week UF hybrid newborn phantom, comprised a set of base phantoms from which a series of hybrid computational phantoms was derived for fetal ages 8, 10, 15, 20, 25, 30, 35 and 38 weeks post-conception. The methodology used to construct the series of phantoms accounted for the following age-dependent parameters: (1) variations in skeletal size and proportion, (2) bone-dependent variations in relative levels of bone growth, (3) variations in individual organ masses and total fetal masses and (4) statistical percentile variations in

  16. Second language writing anxiety, computer anxiety, and performance in a classroom versus a web-based environment

    Directory of Open Access Journals (Sweden)

    Effie Dracopoulos

    2011-04-01

    Full Text Available This study examined the impact of writing anxiety and computer anxiety on language learning for 45 ESL adult learners enrolled in an English grammar and writing course. Two sections of the course were offered in a traditional classroom setting whereas two others were given in a hybrid form that involved distance learning. Contrary to previous research, writing anxiety showed no correlation with learning performance, whereas computer anxiety only yielded a positive correlation with performance in the case of classroom learners. There were no significant differences across learning environments on any measures. These observations are discussed in light of the role computer technologies now play in our society as well as the merging of socio-demographic profiles between classroom and distance learners. Our data suggest that comparisons of profiles between classroom and distance learners may not be an issue worth investigating anymore in language studies, at least in developed countries.

  17. A hybrid stochastic approach for self-location of wireless sensors in indoor environments.

    Science.gov (United States)

    Lloret, Jaime; Tomas, Jesus; Garcia, Miguel; Canovas, Alejandro

    2009-01-01

    Indoor location systems, especially those using wireless sensor networks, are used in many application areas. While the need for these systems is widely proven, there is a clear lack of accuracy. Many of the implemented applications have high errors in their location estimation because of the issues arising in the indoor environment. Two different approaches have been proposed for WLAN location systems: on the one hand, the so-called deductive methods take into account the physical properties of signal propagation. These systems require a propagation model, an environment map, and the position of the radio-stations. On the other hand, the so-called inductive methods require a previous training phase where the system learns the received signal strength (RSS) in each location. This phase can be very time consuming. This paper proposes a new stochastic approach which is based on a combination of deductive and inductive methods whereby wireless sensors could determine their positions using WLAN technology inside a floor of a building. Our goal is to reduce the training phase in an indoor environment, but without any loss of precision. Finally, we compare the measurements taken using our proposed method in a real environment with the measurements taken by other developed systems. Comparisons between the proposed system and other hybrid methods are also provided.

  18. A Hybrid Stochastic Approach for Self-Location of Wireless Sensors in Indoor Environments

    Directory of Open Access Journals (Sweden)

    Alejandro Canovas

    2009-05-01

    Full Text Available Indoor location systems, especially those using wireless sensor networks, are used in many application areas. While the need for these systems is widely proven, there is a clear lack of accuracy. Many of the implemented applications have high errors in their location estimation because of the issues arising in the indoor environment. Two different approaches have been proposed for WLAN location systems: on the one hand, the so-called deductive methods take into account the physical properties of signal propagation. These systems require a propagation model, an environment map, and the position of the radio-stations. On the other hand, the so-called inductive methods require a previous training phase where the system learns the received signal strength (RSS) in each location. This phase can be very time consuming. This paper proposes a new stochastic approach which is based on a combination of deductive and inductive methods whereby wireless sensors could determine their positions using WLAN technology inside a floor of a building. Our goal is to reduce the training phase in an indoor environment, but without any loss of precision. Finally, we compare the measurements taken using our proposed method in a real environment with the measurements taken by other developed systems. Comparisons between the proposed system and other hybrid methods are also provided.
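
    A minimal sketch of the deductive half of this record's approach: a log-distance path-loss model scored against observed RSS over a grid of candidate positions. In the paper's hybrid scheme the inductive part enters when the model parameters are calibrated from a reduced set of training fingerprints; here they are simply assumed known, and all positions and parameters are invented for the example.

```python
import numpy as np

def model_rss(p, ap, rss0=-40.0, n=2.5):
    # Deductive part: log-distance path-loss model (parameters are placeholders;
    # in a hybrid scheme rss0 and n would be fitted from a few fingerprints).
    return rss0 - 10.0 * n * np.log10(np.linalg.norm(p - ap) + 1e-9)

def locate(observed, aps, grid):
    # Pick the grid position whose predicted RSS vector best matches the
    # observation (least-squares score).
    preds = np.array([[model_rss(p, ap) for ap in aps] for p in grid])
    return grid[np.argmin(np.sum((preds - observed) ** 2, axis=1))]

aps = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])          # AP positions (m)
grid = np.array([[x, y] for x in range(11) for y in range(9)], float)
true = np.array([4.0, 3.0])
obs = np.array([model_rss(true, ap) for ap in aps])
print(locate(obs, aps, grid))   # -> [4. 3.]
```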

  19. An integrated computer design environment for the development of micro-computer critical software

    International Nuclear Information System (INIS)

    De Agostino, E.; Massari, V.

    1986-01-01

    The paper deals with the development of micro-computer software for nuclear safety systems. More specifically, it describes experimental work in the field of software development methodologies to be used for the implementation of micro-computer based safety systems. An investigation of the technological improvements provided by state-of-the-art integrated packages for micro-based systems development has been carried out. The work has aimed to assess a suitable automated tools environment for the whole software life-cycle. The main safety functions of a nuclear power reactor, such as DNBR and KW/FT, have been implemented in a host-target approach. A prototype test-bed microsystem has been implemented to run the safety functions in order to derive a concrete evaluation of the feasibility of critical software according to the new technological trends of "Software Factories". (author)

  20. Genetics of phenotypic plasticity and biomass traits in hybrid willows across contrasting environments and years.

    Science.gov (United States)

    Berlin, Sofia; Hallingbäck, Henrik R; Beyer, Friderike; Nordh, Nils-Erik; Weih, Martin; Rönnberg-Wästljung, Ann-Christin

    2017-07-01

    Phenotypic plasticity can affect the geographical distribution of taxa and greatly impact the productivity of crops across contrasting and variable environments. The main objectives of this study were to identify genotype-phenotype associations in key biomass and phenology traits and the strength of phenotypic plasticity of these traits in a short-rotation coppice willow population across multiple years and contrasting environments to facilitate marker-assisted selection for these traits. A hybrid Salix viminalis × (S. viminalis × Salix schwerinii) population with 463 individuals was clonally propagated and planted in three common garden experiments comprising one climatic contrast between Sweden and Italy and one water availability contrast in Italy. Several key phenotypic traits were measured and phenotypic plasticity was estimated as the trait value difference between experiments. Quantitative trait locus (QTL) mapping analyses were conducted using a dense linkage map and phenotypic effects of S. schwerinii haplotypes derived from detected QTL were assessed. Across the climatic contrast, clone predictor correlations for biomass traits were low and few common biomass QTL were detected. This indicates that the genetic regulation of biomass traits was sensitive to environmental variation. Biomass QTL were, however, frequently shared across years and across the water availability contrast. Phenology QTL were generally shared between all experiments. Substantial phenotypic plasticity was found among the hybrid offspring, which to a large extent had a genetic origin. Individuals carrying influential S. schwerinii haplotypes generally performed well in Sweden but less well in Italy in terms of biomass production. The results indicate that specific genetic elements of S. schwerinii are more suited to Swedish conditions than to those of Italy. Therefore, selection should preferably be conducted separately for such environments in order to maximize biomass

  1. ANIBAL - a Hybrid Computer Language for EAI 680-PDP 8/I, FPP 12

    DEFF Research Database (Denmark)

    Højberg, Kristian Søe

    1974-01-01

    and special hybrid computer commands. ANIBAL consists of a general-purpose analog interface subroutine ANI and the macro processor 8BAL (DECUS NO 8-497A.1). When a source program with FORTRAN and 8BAL statements is processed, the FORTRAN statements are transferred unchanged, while the 8BAL code is translated...... essentially to ANI sub-routine calls, which are defined in a macro library. The resulting code is translated by the standard FORTRAN compiler. The language is very flexible as the instructions can be changed and commands can be added to or excluded from the library ....

  2. Treatment of early and late reflections in a hybrid computer model for room acoustics

    DEFF Research Database (Denmark)

    Naylor, Graham

    1992-01-01

    The ODEON computer model for acoustics in large rooms is intended for use both in design (by predicting room acoustical indices quickly and easily) and in research (by forming the basis of an auralization system and allowing study of various room acoustical phenomena). These conflicting demands...... preclude the use of both "pure" image source and "pure" particle tracing methods. A hybrid model has been developed, in which rays discover potential image sources up to a specified order. Thereafter, the same ray tracing process is used in a different way to rapidly generate a dense reverberant decay...
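
    The image-source half of such a hybrid model can be summarized by the mirror construction below: the first-order image of a source across a plane wall. This is only the textbook construction, not ODEON's implementation.

```python
import numpy as np

def image_source(src, wall_point, wall_normal):
    # Mirror a source position across a plane wall: p' = p - 2((p - o)·n)n
    # with o a point on the wall and n the unit wall normal.
    n = wall_normal / np.linalg.norm(wall_normal)
    return src - 2.0 * np.dot(src - wall_point, n) * n

# First-order image of a source across the floor z = 0.
print(image_source(np.array([2.0, 3.0, 1.5]),
                   np.array([0.0, 0.0, 0.0]),
                   np.array([0.0, 0.0, 1.0])))   # -> [ 2.  3. -1.5]
```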

  3. Verification of a hybrid adjoint methodology in Titan for single photon emission computed tomography - 316

    International Nuclear Information System (INIS)

    Royston, K.; Haghighat, A.; Yi, C.

    2010-01-01

    The hybrid deterministic transport code TITAN is being applied to a Single Photon Emission Computed Tomography (SPECT) simulation of a myocardial perfusion study. The TITAN code's hybrid methodology allows the use of a discrete ordinates solver in the phantom region and a characteristics method solver in the collimator region. Currently we seek to validate the adjoint methodology in TITAN for this application using a SPECT model that has been created in the MCNP5 Monte Carlo code. The TITAN methodology was examined based on the response of a single-voxel detector placed in front of the heart, with and without collimation. For the case without collimation, the TITAN response for the single voxel-sized detector had a -9.96% difference relative to the MCNP5 response. To simulate collimation, the adjoint source was specified in directions located within the collimator acceptance angle. For a single collimator hole with a diameter matching the voxel dimension, a difference of -0.22% was observed. Comparisons to groupings of smaller collimator holes of two different sizes resulted in relative differences of 0.60% and 0.12%. The number of adjoint source directions within an acceptance angle was increased and showed no significant change in accuracy. Our results indicate that the hybrid adjoint methodology of TITAN yields accurate solutions more than a factor of two faster than MCNP5. (authors)

  4. EEG Classification for Hybrid Brain-Computer Interface Using a Tensor Based Multiclass Multimodal Analysis Scheme.

    Science.gov (United States)

    Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang

    2016-01-01

    Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of changes in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady state visual evoked potential (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, EEG data were always divided into groups and analyzed by separate processing procedures. As a result, the interactive effects were ignored when different types of BCI tasks were executed simultaneously. In this work, we propose an improved tensor based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and support vector machine (SVM) is extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential use for hybrid BCI.

  5. Hybrid simulation of scatter intensity in industrial cone-beam computed tomography

    International Nuclear Information System (INIS)

    Thierry, R.; Miceli, A.; Hofmann, J.; Flisch, A.; Sennhauser, U.

    2009-01-01

    A cone-beam computed tomography (CT) system using a 450 kV X-ray tube has been developed to address the challenge of three-dimensional imaging of automotive parts in short acquisition times. Because the probability of detecting scattered photons is high for this energy range and detection area, a scattering correction becomes mandatory for generating reliable images with enhanced contrast detectability. In this paper, we present a hybrid simulator for the fast and accurate calculation of the scattering intensity distribution. The full acquisition chain, from the generation of a polyenergetic photon beam, through its interaction with the scanned object, to the energy deposit in the detector, is simulated. Object phantoms can be spatially described in the form of voxels, mathematical primitives or CAD models. Uncollided radiation is treated with a ray-tracing method and scattered radiation is split into single and multiple scattering. The single scattering is calculated with a deterministic approach accelerated with a forced detection method. The residual noisy signal is subsequently deconvoluted with the iterative Richardson-Lucy method. Finally the multiple scattering is addressed with a coarse Monte Carlo (MC) simulation. The proposed hybrid method has been validated on aluminium phantoms of varying size and object-to-detector distance, and found in good agreement with the MC code Geant4. The acceleration achieved by the hybrid method over the standard MC on a single projection is approximately three orders of magnitude.

  6. A Hybrid Density Functional Theory/Molecular Mechanics Approach for Linear Response Properties in Heterogeneous Environments.

    Science.gov (United States)

    Rinkevicius, Zilvinas; Li, Xin; Sandberg, Jaime A R; Mikkelsen, Kurt V; Ågren, Hans

    2014-03-11

    We introduce a density functional theory/molecular mechanics approach for the computation of linear response properties of molecules in heterogeneous environments, such as metal surfaces or nanoparticles embedded in solvents. The heterogeneous embedding environment, consisting of metallic and nonmetallic parts, is described by combined force fields, where conventional force fields are used for the nonmetallic part and capacitance-polarization-based force fields are used for the metallic part. The presented approach enables studies of properties and spectra of systems embedded in or placed at arbitrarily shaped metallic surfaces, clusters, or nanoparticles. The capability and performance of the proposed approach are illustrated by sample calculations of the optical absorption spectra of thymidine adsorbed on gold surfaces in an aqueous environment, where we study how different organizations of the gold surface and the combined, nonadditive effect of the two environments are reflected in the optical absorption spectrum.

  7. Security issues of cloud computing environment in possible military applications

    OpenAIRE

    Samčović, Andreja B.

    2013-01-01

    The evolution of cloud computing over the past few years is potentially one of major advances in the history of computing and telecommunications. Although there are many benefits of adopting cloud computing, there are also some significant barriers to adoption, security issues being the most important of them. This paper introduces the concept of cloud computing; looks at relevant technologies in cloud computing; takes into account cloud deployment models and some military applications. Addit...

  8. Implementing interactive computing in an object-oriented environment

    Directory of Open Access Journals (Sweden)

    Frederic Udina

    2000-04-01

    Full Text Available Statistical computing when input/output is driven by a Graphical User Interface is considered. A proposal is made for automatic control of computational flow to ensure that only strictly required computations are actually carried out. The computational flow is modeled by a directed graph for implementation in any object-oriented programming language with symbolic manipulation capabilities. A complete implementation example is presented to compute and display frequency-based piecewise linear density estimators such as histograms or frequency polygons.
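
    A minimal sketch of the directed-graph control of computational flow described here: nodes carry dirty flags, so that when a GUI input changes, only the strictly required downstream computations are redone on the next access. The histogram example is invented for illustration.

```python
class Node:
    """Directed-graph model of computational flow: a node recomputes only
    when marked dirty, so only strictly required work is done on access."""
    def __init__(self, func=None, deps=()):
        self.func, self.deps, self.children = func, list(deps), []
        for d in self.deps:
            d.children.append(self)
        self.value, self.dirty = None, True

    def set(self, value):                  # input node updated from the GUI
        self.value, self.dirty = value, False
        self._invalidate()

    def _invalidate(self):                 # mark all downstream nodes stale
        for c in self.children:
            if not c.dirty:
                c.dirty = True
                c._invalidate()

    def get(self):                         # lazy pull: recompute iff dirty
        if self.dirty and self.func is not None:
            self.value = self.func(*(d.get() for d in self.deps))
            self.dirty = False
        return self.value

data, width = Node(), Node()
hist = Node(lambda xs, k: [sum(1 for x in xs if i <= x < i + k)
                           for i in range(0, 10, k)], (data, width))
data.set([1, 2, 2, 7]); width.set(5)
print(hist.get())   # -> [3, 1], computed now
print(hist.get())   # cached: no recomputation
```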

  9. Multi-Language Programming Environments for High Performance Java Computing

    Directory of Open Access Journals (Sweden)

    Vladimir Getov

    1999-01-01

    Full Text Available Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool, which provides application programmers wishing to use Java with immediate accessibility to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed‐language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool is complementing other ongoing projects such as IBM’s High‐Performance Compiler for Java (HPCJ) and IceT’s metacomputing environment.

  10. One of the Approaches to Creation of Hybrid Cloud Secure Environment

    Directory of Open Access Journals (Sweden)

    Andrey Konstantinovich Kachko

    2014-02-01

    Full Text Available In response to the ever-growing needs in data storage and processing, the leading position is occupied by information and telecommunication systems operating on the basis of cloud computing. Here, the key issue in the use of cloud computing is information security. This article is primarily intended to cover the main information security issues that occur in cloud environments and ways of solving them in the construction of an integrated information security management system on a cloud architecture.

  11. Enrichment of Human-Computer Interaction in Brain-Computer Interfaces via Virtual Environments

    Directory of Open Access Journals (Sweden)

    Alonso-Valerdi Luz María

    2017-01-01

    Full Text Available Tridimensional representations stimulate cognitive processes that are the core and foundation of human-computer interaction (HCI). Those cognitive processes take place while a user navigates and explores a virtual environment (VE) and are mainly related to spatial memory storage, attention, and perception. VEs have many distinctive features (e.g., involvement, immersion, and presence) that can significantly improve HCI in highly demanding and interactive systems such as brain-computer interfaces (BCI). BCI is a nonmuscular communication channel that attempts to reestablish the interaction between an individual and his/her environment. Although BCI research started in the sixties, this technology is not efficient or reliable yet for everyone at any time. Over the past few years, researchers have argued that the main BCI flaws could be associated with HCI issues. The evidence presented thus far shows that VEs can (1) set out working environmental conditions, (2) maximize the efficiency of BCI control panels, (3) implement navigation systems based not only on user intentions but also on user emotions, and (4) regulate user mental state to increase the differentiation between control and noncontrol modalities.

  12. Hybrid Single Photon Emission Computed Tomography/Computed Tomography Sulphur Colloid Scintigraphy in Focal Nodular Hyperplasia

    International Nuclear Information System (INIS)

    Bhoil, Amit; Gayana, Shankramurthy; Sood, Ashwani; Bhattacharya, Anish; Mittal, Bhagwant Rai

    2013-01-01

    It is important to differentiate focal nodular hyperplasia (FNH), a benign condition of liver most commonly affecting women, from other neoplasm such as hepatic adenoma and metastasis. The functional reticuloendothelial features of FNH can be demonstrated by scintigraphy. We present a case of breast cancer in whom fluorodeoxyglucose positron emission tomography/computerized tomography (CT) showed a homogenous hyperdense lesion in liver, which on Tc99m sulfur colloid single-photon emission computed tomography/CT was found to have increased focal tracer uptake suggestive of FNH

  13. Symmetric and asymmetric hybrid cryptosystem based on compressive sensing and computer generated holography

    Science.gov (United States)

    Ma, Lihong; Jin, Weimin

    2018-01-01

    A novel symmetric and asymmetric hybrid optical cryptosystem is proposed based on compressive sensing combined with computer generated holography. In this method there are six encryption keys, among which two decryption phase masks are different from the two random phase masks used in the encryption process. Therefore, the encryption system has the features of both symmetric and asymmetric cryptography. On the other hand, because computer generated holography can flexibly digitize the encrypted information, compressive sensing can significantly reduce the data volume, and phase truncation makes the final encrypted image a real-valued function, the method favors the storage and transmission of the encrypted data. The experimental results demonstrate that the proposed encryption scheme boosts security and has high robustness against noise and occlusion attacks.
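
    The sketch below covers only two stages of the pipeline described in this record, double random phase encoding followed by a compressive measurement, with invented sizes and keys. The computer generated holography and phase truncation steps are omitted, and decryption would additionally require a sparse-recovery solver.

```python
import numpy as np

rng = np.random.default_rng(7)
img = rng.random((32, 32))                        # stand-in plaintext image

# Two random phase masks (two of the scheme's six keys).
P1 = np.exp(2j * np.pi * rng.random(img.shape))
P2 = np.exp(2j * np.pi * rng.random(img.shape))
field = np.fft.ifft2(np.fft.fft2(img * P1) * P2)  # double random phase encoding

# Compressive measurement: keep far fewer samples than pixels.
M = img.size // 2
Phi = rng.normal(size=(M, img.size)) / np.sqrt(M)  # measurement-matrix key
cipher = Phi @ field.ravel()                       # compressed ciphertext
print(cipher.shape)                                # -> (512,)
```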

  14. PWR hybrid computer model for assessing the safety implications of control systems

    International Nuclear Information System (INIS)

    Smith, O.L.; Renier, J.P.; Difilippo, F.C.; Clapp, N.E.; Sozer, A.; Booth, R.S.; Craddick, W.G.; Morris, D.G.

    1986-03-01

    The ORNL study of safety-related aspects of nuclear power plant control systems consists of two interrelated tasks: (1) failure mode and effects analysis (FMEA) that identified single and multiple component failures that might lead to significant plant upsets and (2) computer models that used these failures as initial conditions and traced the dynamic impact on the control system and remainder of the plant. This report describes the simulation of Oconee Unit 1, the first plant analyzed. A first-principles, best-estimate model was developed and implemented on a hybrid computer consisting of AD-4 analog and PDP-10 digital machines. Controls were placed primarily on the analog to use its interactive capability to simulate operator action. 48 refs., 138 figs., 15 tabs

  15. Semiempirical Quantum Chemical Calculations Accelerated on a Hybrid Multicore CPU-GPU Computing Platform.

    Science.gov (United States)

    Wu, Xin; Koslowski, Axel; Thiel, Walter

    2012-07-10

    In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.
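
    The rotations mentioned in this record can be illustrated with the classical Jacobi eigenvalue algorithm; note that the pseudodiagonalization ported to the GPU in the paper is a noniterative variant, so the sketch below is only the textbook relative, with an arbitrary test matrix.

```python
import numpy as np

def jacobi_sweep(A, V):
    # One sweep of classical Jacobi rotations on a symmetric matrix:
    # each rotation zeroes one off-diagonal pair A[p, q], A[q, p].
    n = A.shape[0]
    for p in range(n - 1):
        for q in range(p + 1, n):
            if abs(A[p, q]) < 1e-12:
                continue
            theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
            c, s = np.cos(theta), np.sin(theta)
            J = np.eye(n)
            J[p, p] = J[q, q] = c
            J[p, q], J[q, p] = s, -s
            A, V = J.T @ A @ J, V @ J
    return A, V

A = np.array([[4.0, 1.0, 0.5], [1.0, 3.0, 0.2], [0.5, 0.2, 1.0]])
V = np.eye(3)
for _ in range(10):
    A, V = jacobi_sweep(A, V)
print(np.round(np.diag(A), 6))   # eigenvalues accumulate on the diagonal
```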

  16. The UF family of reference hybrid phantoms for computational radiation dosimetry

    International Nuclear Information System (INIS)

    Lee, Choonsik; Lodwick, Daniel; Hurtado, Jorge; Pafundi, Deanna; Williams, Jonathan L; Bolch, Wesley E

    2010-01-01

    Computational human phantoms are computer models used to obtain dose distributions within the human body exposed to internal or external radiation sources. In addition, they are increasingly used to develop detector efficiencies for in vivo whole-body counters. Two classes of computational human phantoms have been widely utilized for dosimetry calculation: stylized and voxel phantoms that describe human anatomy through mathematical surface equations and 3D voxel matrices, respectively. Stylized phantoms are flexible in that changes to organ position and shape are possible given avoidance of region overlap, while voxel phantoms are typically fixed to a given patient anatomy, yet can be proportionally scaled to match individuals of larger or smaller stature, but of equivalent organ anatomy. Voxel phantoms provide much better anatomical realism as compared to stylized phantoms which are intrinsically limited by mathematical surface equations. To address the drawbacks of these phantoms, hybrid phantoms based on non-uniform rational B-spline (NURBS) surfaces have been introduced wherein anthropomorphic flexibility and anatomic realism are both preserved. Researchers at the University of Florida have introduced a series of hybrid phantoms representing the ICRP Publication 89 reference newborn, 15 year, and adult male and female. In this study, six additional phantoms are added to the UF family of hybrid phantoms-those of the reference 1 year, 5 year and 10 year child. Head and torso CT images of patients whose ages were close to the targeted ages were obtained under approved protocols. Major organs and tissues were segmented from these images using an image processing software, 3D-DOCTOR(TM). NURBS and polygon mesh surfaces were then used to model individual organs and tissues after importing the segmented organ models to the 3D NURBS modeling software, Rhinoceros(TM). The phantoms were matched to four reference datasets: (1) standard anthropometric data, (2) reference

  17. A hybrid multiple attribute decision making method for solving problems of industrial environment

    Directory of Open Access Journals (Sweden)

    Dinesh Singh

    2011-01-01

    Full Text Available The selection of an appropriate alternative in the industrial environment is an important but, at the same time, complex and difficult problem because of the availability of a wide range of alternatives and the similarity among them. Therefore, there is a need for simple, systematic, and logical methods or mathematical tools to guide decision makers in considering a number of selection attributes and their interrelations. In this paper, a hybrid decision making method combining the graph theory and matrix approach (GTMA) and the analytical hierarchy process (AHP) is proposed. Three examples are presented to illustrate the potential of the proposed GTMA-AHP method, and the results are compared with the results obtained using other decision making methods.
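
    As a concrete illustration of how the two ingredients combine, the sketch below derives attribute weights from an AHP pairwise-comparison matrix and computes the GTMA selection index as the permanent of a small attribute matrix; the function names and the brute-force permanent are assumptions for illustration, not the paper's code.

        import numpy as np
        from itertools import permutations

        def ahp_weights(pairwise):
            """Attribute weights as the normalized principal eigenvector
            of an AHP pairwise-comparison matrix."""
            vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
            w = np.real(vecs[:, np.argmax(np.real(vals))])
            return w / w.sum()

        def gtma_index(attribute_matrix):
            """GTMA selection index: the permanent of the alternative's
            attribute digraph matrix (brute force, fine for small n)."""
            M = np.asarray(attribute_matrix, dtype=float)
            n = M.shape[0]
            return sum(np.prod([M[i, p[i]] for i in range(n)])
                       for p in permutations(range(n)))

    Alternatives would then be ranked by their index values, larger being better.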

  18. Development of a field measurement methodology for studying the thermal indoor environment in hybrid GEOTABS buildings

    DEFF Research Database (Denmark)

    Kazanci, Ongun Berk; Khovalyg, Dolaana; Olesen, Bjarne W.

    2018-01-01

    The three demonstration buildings were an office building in Luxembourg, an elderly care home in Belgium, and an elementary school in the Czech Republic. All of these buildings are equipped with hybrid GEOTABS systems; however, they vary in size and function, which requires a unique measurement methodology for studying them. These buildings already have advanced Building Management Systems (BMS); however, a more detailed measurement plan was needed for the purposes of the project to document the current performance of these systems regarding thermal indoor environment and energy performance, and to be able to document the improvements after the implementation of the MPC. This study provides the details of the developed field measurement methodology for each of these buildings to study indoor environmental quality (IEQ) in detail. The developed measurement methodology can be applied to other similar buildings.

  19. A simplified computational fluid-dynamic approach to the oxidizer injector design in hybrid rockets

    Science.gov (United States)

    Di Martino, Giuseppe D.; Malgieri, Paolo; Carmicino, Carmine; Savino, Raffaele

    2016-12-01

    Fuel regression rate in hybrid rockets is non-negligibly affected by the oxidizer injection pattern. In this paper a simplified computational approach, developed in an attempt to optimize the oxidizer injector design, is discussed. Numerical simulations of the thermo-fluid-dynamic field in a hybrid rocket are carried out, with a commercial solver, to investigate several injection configurations with the aim of increasing the fuel regression rate and minimizing the consumption unevenness, while still favoring the establishment of flow recirculation at the motor head end, which is generated with an axial nozzle injector and has been demonstrated to promote combustion stability as well as larger efficiency and regression rate. All the computations have been performed on the configuration of a lab-scale hybrid rocket motor available at the propulsion laboratory of the University of Naples with typical operating conditions. After a preliminary comparison between the two baseline limiting cases of an axial subsonic nozzle injector and a uniform injection through the prechamber, a parametric analysis has been carried out by varying the oxidizer jet flow divergence angle, as well as the grain port diameter and the oxidizer mass flux, to study the effect of the flow divergence on the heat transfer distribution over the fuel surface. Some experimental firing test data are presented, and, under the hypothesis that fuel regression rate and surface heat flux are proportional, the measured fuel consumption axial profiles are compared with the predicted surface heat flux, showing fairly good agreement, which allowed validating the employed design approach. Finally, an optimized injector design is proposed.

  20. Autumn Algorithm-Computation of Hybridization Networks for Realistic Phylogenetic Trees.

    Science.gov (United States)

    Huson, Daniel H; Linz, Simone

    2018-01-01

    A minimum hybridization network is a rooted phylogenetic network that displays two given rooted phylogenetic trees using a minimum number of reticulations. Previous mathematical work on their calculation has usually assumed the input trees to be bifurcating, correctly rooted, or that they both contain the same taxa. These assumptions do not hold in biological studies and "realistic" trees have multifurcations, are difficult to root, and rarely contain the same taxa. We present a new algorithm for computing minimum hybridization networks for a given pair of "realistic" rooted phylogenetic trees. We also describe how the algorithm might be used to improve the rooting of the input trees. We introduce the concept of "autumn trees", a nice framework for the formulation of algorithms based on the mathematics of "maximum acyclic agreement forests". While the main computational problem is hard, the run-time depends mainly on how different the given input trees are. In biological studies, where the trees are reasonably similar, our parallel implementation performs well in practice. The algorithm is available in our open source program Dendroscope 3, providing a platform for biologists to explore rooted phylogenetic networks. We demonstrate the utility of the algorithm using several previously studied data sets.

  1. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    OpenAIRE

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS).

  2. Earth observation scientific workflows in a distributed computing environment

    CSIR Research Space (South Africa)

    Van Zyl, TL

    2011-09-01

    Full Text Available Work on leveraging distributed computing capabilities has focused on the web services approach, as exemplified by the OGC's Web Processing Service, and on GRID computing. The approach to leveraging distributed computing resources described in this paper instead uses remote objects via RPy...

  3. Back to the future: virtualization of the computing environment at the W. M. Keck Observatory

    Science.gov (United States)

    McCann, Kevin L.; Birch, Denny A.; Holt, Jennifer M.; Randolph, William B.; Ward, Josephine A.

    2014-07-01

    Over its two decades of science operations, the W.M. Keck Observatory computing environment has evolved to contain a distributed hybrid mix of hundreds of servers, desktops and laptops of multiple different hardware platforms, O/S versions and vintages. Supporting the growing computing capabilities to meet the observatory's diverse, evolving computing demands within fixed budget constraints presents many challenges. This paper describes the significant role that virtualization is playing in addressing these challenges while improving the level and quality of service as well as realizing significant savings across many cost areas. Starting in December 2012, the observatory embarked on an ambitious plan to incrementally test and deploy a migration to virtualized platforms to address a broad range of specific opportunities. Implementation to date has been surprisingly glitch-free, progressing well and yielding tangible benefits much faster than many expected. We describe here the general approach, starting with the initial identification of some low-hanging fruit which also provided an opportunity to gain experience and build confidence among both the implementation team and the user community. We describe the range of challenges, opportunities and cost savings potential. Very significant among these was the substantial power savings which resulted in strong broad support for moving forward. We go on to describe the phasing plan, the evolving scalable architecture, some of the specific technical choices, as well as some of the individual technical issues encountered along the way. The phased implementation spans Windows and Unix servers for scientific, engineering and business operations, virtualized desktops for typical office users as well as for the more demanding graphics-intensive CAD users. Other areas discussed in this paper include staff training, load balancing, redundancy, scalability, remote access, disaster readiness and recovery.

  4. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  5. Securing the Data Storage and Processing in Cloud Computing Environment

    Science.gov (United States)

    Owens, Rodney

    2013-01-01

    Organizations increasingly utilize cloud computing architectures to reduce costs and energy consumption both in the data warehouse and on mobile devices by better utilizing the computing resources available. However, the security and privacy issues with publicly available cloud computing infrastructures have not been studied to a sufficient depth…

  6. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison; Adel Sarofim; Connie Senior

    2004-12-22

    The Virtual Engineering Framework (VEF), in effect a prototype framework, was developed through close collaboration with NETL supported research teams from Iowa State University Virtual Reality Applications Center (ISU-VRAC) and Carnegie Mellon University (CMU). The VEF is open source, compatible across systems ranging from inexpensive desktop PCs to large-scale, immersive facilities and provides support for heterogeneous distributed computing of plant simulations. The ability to compute plant economics through an interface that coupled the CMU IECM tool to the VEF was demonstrated, and the ability to couple the VEF to Aspen Plus, a commercial flowsheet modeling tool, was demonstrated. Models were interfaced to the framework using VES-Open. Tests were performed for interfacing CAPE-Open-compliant models to the framework. Where available, the developed models and plant simulations have been benchmarked against data from the open literature. The VEF has been installed at NETL. The VEF provides simulation capabilities not available in commercial simulation tools. It provides DOE engineers, scientists, and decision makers with a flexible and extensible simulation system that can be used to reduce the time, technical risk, and cost to develop the next generation of advanced, coal-fired power systems that will have low emissions and high efficiency. Furthermore, the VEF provides a common simulation system that NETL can use to help manage Advanced Power Systems Research projects, including both combustion- and gasification-based technologies.

  7. Usability Studies in Virtual and Traditional Computer Aided Design Environments for Fault Identification

    Science.gov (United States)

    2017-08-08

    This paper provides analysis, via usability study methods, of the differences in interaction in virtual environments when compared with traditional human computer interfaces, including how users communicate their subjective opinions. Keywords: Usability Analysis; CAVE (Cave Automatic Virtual Environment); Human Computer Interface (HCI)

  8. Students' experiences with collaborative learning in asynchronous computer-supported collaborative learning environments.

    NARCIS (Netherlands)

    Dewiyanti, Silvia; Brand-Gruwel, Saskia; Jochems, Wim; Broers, Nick

    2008-01-01

    Dewiyanti, S., Brand-Gruwel, S., Jochems, W., & Broers, N. (2007). Students' experiences with collaborative learning in asynchronous computer-supported collaborative learning environments. Computers in Human Behavior, 23, 496-514.

  9. A heterogeneous computing environment to solve the 768-bit RSA challenge

    OpenAIRE

    Kleinjung, Thorsten; Bos, Joppe Willem; Lenstra, Arjen K.; Osvik, Dag Arne; Aoki, Kazumaro; Contini, Scott; Franke, Jens; Thomé, Emmanuel; Jermini, Pascal; Thiémard, Michela; Leyland, Paul; Montgomery, Peter L.; Timofeev, Andrey; Stockinger, Heinz

    2010-01-01

    In December 2009 the 768-bit, 232-digit number RSA-768 was factored using the number field sieve. Overall, the computational challenge would take more than 1700 years on a single, standard core. In the article we present the heterogeneous computing approach, involving different compute clusters and Grid computing environments, used to solve this problem.

  10. The Effects of a Robot Game Environment on Computer Programming Education for Elementary School Students

    Science.gov (United States)

    Shim, Jaekwoun; Kwon, Daiyoung; Lee, Wongyu

    2017-01-01

    In the past, computer programming was perceived as a task only carried out by computer scientists; in the 21st century, however, computer programming is viewed as a critical and necessary skill that everyone should learn. In order to improve teaching of problem-solving abilities in a computing environment, extensive research is being done on…

  11. PWR hybrid computer model for assessing the safety implications of control systems

    International Nuclear Information System (INIS)

    Smith, O.L.; Booth, R.S.; Clapp, N.E.; DiFilippo, F.C.; Renier, J.P.; Sozer, A.

    1985-01-01

    The ORNL study of safety-related aspects of control systems consists of two interrelated tasks: (1) a failure mode and effects analysis that, in part, identifies single and multiple component failures that may lead to significant plant upsets, and (2) a hybrid computer model that uses these failures as initial conditions and traces the dynamic impact on the control system and the remainder of the plant. The second task is reported here. The initial step in model development was to define a suitable interface between the FMEA and computer simulation tasks. This involved identifying primary plant components that must be simulated in dynamic detail and secondary components that can be treated adequately by the FMEA alone. The FMEA in general explores broader spectra of initiating events that may collapse into a reduced number of computer runs. A portion of the FMEA includes consideration of power supply failures. Consequences of the transients may feed back on the initiating causes, and there may be an interactive relationship between the FMEA and the computer simulation. Since the thrust of this program is to investigate control system behavior, the controls are modeled in detail to accurately reproduce characteristic response under normal and off-normal transients. The balance of the model, including neutronics, thermohydraulics and component submodels, is developed in sufficient detail to provide suitable support for the control system.

  12. Distributed computing environments for future space control systems

    Science.gov (United States)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.

  13. A high performance scientific cloud computing environment for materials simulations

    Science.gov (United States)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  14. The Integrated Computational Environment for Airbreathing Hypersonic Flight Vehicle Modeling and Design Evaluation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — An integrated computational environment for multidisciplinary, physics-based simulation and analyses of airbreathing hypersonic flight vehicles will be developed....

  15. Computation Offloading Algorithm for Arbitrarily Divisible Applications in Mobile Edge Computing Environments: An OCR Case

    Directory of Open Access Journals (Sweden)

    Bo Li

    2018-05-01

    Full Text Available Divisible applications are a class of tasks whose loads can be partitioned into smaller fractions, each of which can be executed independently by a processor. A wide variety of divisible applications have been found in the area of parallel and distributed processing. This paper addresses the problem of how to partition and allocate divisible applications to available resources in mobile edge computing environments with the aim of minimizing the completion time of the applications. A theoretical model was proposed for partitioning an entire divisible application according to the load of the application and the capabilities of available resources, and the solutions were derived in closed form. Both simulations and real experiments were carried out to justify this model.
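
    A minimal sketch of the equal-finish-time partitioning idea (parameter names and the compute-only cost model are assumptions; the paper's closed-form solution also accounts for transfer times):

        def partition_load(total_load, speeds, overheads=None):
            """Split a divisible workload so all resources finish together.
            With per-unit processing time t_i = 1/speed_i and a fixed
            offloading overhead o_i, a common finish time T satisfies
            sum((T - o_i) / t_i) = total_load; each share is then
            alpha_i = (T - o_i) / t_i (assumes T > max(o_i))."""
            o = overheads or [0.0] * len(speeds)
            t = [1.0 / s for s in speeds]
            T = (total_load + sum(oi / ti for oi, ti in zip(o, t))) / sum(1.0 / ti for ti in t)
            return [(T - oi) / ti for oi, ti in zip(o, t)]

        # Example: 100 units of OCR work over three heterogeneous nodes
        print(partition_load(100, speeds=[1.0, 2.0, 4.0]))
        # -> [14.28..., 28.57..., 57.14...]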

  16. Near-term hybrid vehicle program, phase 1. Appendix B: Design trade-off studies report. Volume 3: Computer program listings

    Science.gov (United States)

    1979-01-01

    A description and listing is presented of two computer programs: Hybrid Vehicle Design Program (HYVELD) and Hybrid Vehicle Simulation Program (HYVEC). Both of the programs are modifications and extensions of similar programs developed as part of the Electric and Hybrid Vehicle System Research and Development Project.

  17. Cloud Computing as Network Environment in Students Work

    OpenAIRE

    Piotrowski, Dominik Mirosław

    2013-01-01

    The purpose of the article was to show the need for literacy education in the variety of services available in cloud computing, as a specialist field of information activity. Teaching cloud computing related to information management at university could provide tangible benefits in the form of useful learning outcomes. This allows students and future information professionals to begin enjoying the benefits of the cloud computing SaaS model at work, thereby freeing up of...

  18. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    Directory of Open Access Journals (Sweden)

    Yanhui Li

    2013-01-01

    Full Text Available Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of a logistics system for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful to help managers make the right decisions under an e-supply chain environment.
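
    A skeleton of the hybrid loop, assuming problem-specific encoding, crossover, and mutation callbacks (all hypothetical names): GA operators generate candidates, and a simulated-annealing Metropolis test decides whether a worse child may still replace a parent.

        import math, random

        def hgsaa(init_pop, cost, crossover, mutate,
                  t0=1000.0, cooling=0.95, generations=200):
            """Hybrid genetic-simulated-annealing skeleton (illustrative)."""
            pop, T = list(init_pop), t0
            for _ in range(generations):
                random.shuffle(pop)
                for i in range(0, len(pop) - 1, 2):
                    child = mutate(crossover(pop[i], pop[i + 1]))
                    worse = max((i, i + 1), key=lambda k: cost(pop[k]))
                    delta = cost(child) - cost(pop[worse])
                    # keep improvements; sometimes accept a worse child
                    if delta < 0 or random.random() < math.exp(-delta / T):
                        pop[worse] = child
                T *= cooling  # geometric cooling schedule
            return min(pop, key=cost)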

  19. A hybrid genetic-simulated annealing algorithm for the location-inventory-routing problem considering returns under e-supply chain environment.

    Science.gov (United States)

    Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of a logistics system for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful to help managers make the right decisions under an e-supply chain environment.

  20. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    Science.gov (United States)

    Guo, Hao; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of a logistics system for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful to help managers make the right decisions under an e-supply chain environment. PMID:24489489

  1. Hybrid Donor-Dot Devices made using Top-down Ion Implantation for Quantum Computing

    Science.gov (United States)

    Bielejec, Edward; Bishop, Nathan; Carroll, Malcolm

    2012-02-01

    We present progress towards fabricating hybrid donor-quantum dot (QD) devices for quantum computing. These devices will exploit the long coherence time of the donor system and the surface-state manipulation associated with a QD. Fabrication requires detection of single ions implanted with tens-of-nanometer precision. We show in this talk 100% detection efficiency for single ions using a single ion Geiger mode avalanche (SIGMA) detector integrated into a Si MOS QD process flow. The NanoImplanter (nI), a focused ion beam system, is used for precision top-down placement of the implanted ion. This machine has 10 nm resolution combined with a mass velocity filter, allowing the use of multi-species liquid metal ion sources (LMIS) to implant P and Sb ions, and a fast blanking and chopping system for single ion implants. The combination of the nI and the integration of SIGMA with the MOS QD process flow establishes a path to fabricate hybrid single donor-dot devices. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  2. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network.

    Science.gov (United States)

    Falat, Lukas; Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can be helpful in eliminating the risk of making a bad decision in the decision-making process.
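
    The moving-average correction can be pictured as below: a Gaussian RBF network produces the raw forecast, and the average of its own recent errors is added back. This is a minimal sketch under assumed shapes and an assumed additive correction; the GA-trained network parameters are taken as given.

        import numpy as np

        def rbf_forecast(X, centers, widths, weights):
            """Gaussian RBF network output for input rows X."""
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-d2 / (2.0 * widths ** 2)) @ weights

        def hybrid_forecast(X, y_true, centers, widths, weights, window=5):
            """RBF prediction plus a causal moving average of past errors."""
            y_hat = rbf_forecast(X, centers, widths, weights)
            errors = y_true - y_hat
            out = np.copy(y_hat)
            for t in range(len(y_hat)):
                past = errors[max(0, t - window):t]   # only past residuals
                if past.size:
                    out[t] += past.mean()
            return out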

  3. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    Directory of Open Access Journals (Sweden)

    Lukas Falat

    2016-01-01

    Full Text Available This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can be helpful in eliminating the risk of making a bad decision in the decision-making process.

  4. Requirements for Control Room Computer-Based Procedures for use in Hybrid Control Rooms

    Energy Technology Data Exchange (ETDEWEB)

    Le Blanc, Katya Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States); Oxstrand, Johanna Helene [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey Clark [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-05-01

    Many plants in the U.S. are currently undergoing control room modernization. The main drivers for modernization are the aging and obsolescence of existing equipment, which typically results in a like-for-like replacement of analogue equipment with digital systems. However, the modernization efforts present an opportunity to employ advanced technology that would not only extend the life, but enhance the efficiency and cost competitiveness of nuclear power. Computer-based procedures (CBPs) are one example of near-term advanced technology that may provide enhanced efficiencies above and beyond like-for-like replacements of analog systems. Researchers in the LWRS program are investigating the benefits of advanced technologies such as CBPs, with the goal of assisting utilities in decision making during modernization projects. This report will describe the existing research on CBPs, discuss the unique issues related to using CBPs in hybrid control rooms (i.e., partially modernized analog control rooms), and define the requirements of CBPs for hybrid control rooms.

  5. Optimization of a Continuous Hybrid Impeller Mixer via Computational Fluid Dynamics

    Directory of Open Access Journals (Sweden)

    N. Othman

    2014-01-01

    Full Text Available This paper presents the preliminary steps required for conducting experiments to obtain the optimal operating conditions of a hybrid impeller mixer and to determine the residence time distribution (RTD) using computational fluid dynamics (CFD). In this paper, impeller speed and clearance parameters are examined. The hybrid impeller mixer consists of a single Rushton turbine mounted above a single pitched blade turbine (PBT). Four impeller speeds, 50, 100, 150, and 200 rpm, and four impeller clearances, 25, 50, 75, and 100 mm, were the operation variables used in this study. CFD was utilized to initially screen the parameter ranges to reduce the number of actual experiments needed. Afterward, the residence time distribution (RTD) was determined using the respective parameters. Finally, the Fluent-predicted RTD and the experimentally measured RTD were compared. The CFD investigations revealed that an impeller speed of 50 rpm and an impeller clearance of 25 mm were not viable for experimental investigations and were thus eliminated from further analyses. The determination of RTD using a k-ε turbulence model was performed using CFD techniques. The multiple reference frame (MRF) was implemented and a steady state was initially achieved followed by a transient condition for RTD determination.
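
    Once the outlet tracer curve is available (from the CFD runs or the experiments), the RTD and the mean residence time follow from the standard definitions; a small sketch with trapezoidal integration, assuming a sampled concentration signal:

        import numpy as np

        def rtd_from_tracer(t, c):
            """E(t) = C(t) / int C dt; mean residence time = int t E(t) dt."""
            t, c = np.asarray(t, float), np.asarray(c, float)
            E = c / np.trapz(c, t)          # normalize the tracer curve
            t_mean = np.trapz(t * E, t)     # first moment of E(t)
            return E, t_mean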

  6. Optimization of a continuous hybrid impeller mixer via computational fluid dynamics.

    Science.gov (United States)

    Othman, N; Kamarudin, S K; Takriff, M S; Rosli, M I; Engku Chik, E M F; Meor Adnan, M A K

    2014-01-01

    This paper presents the preliminary steps required for conducting experiments to obtain the optimal operating conditions of a hybrid impeller mixer and to determine the residence time distribution (RTD) using computational fluid dynamics (CFD). In this paper, impeller speed and clearance parameters are examined. The hybrid impeller mixer consists of a single Rushton turbine mounted above a single pitched blade turbine (PBT). Four impeller speeds, 50, 100, 150, and 200 rpm, and four impeller clearances, 25, 50, 75, and 100 mm, were the operation variables used in this study. CFD was utilized to initially screen the parameter ranges to reduce the number of actual experiments needed. Afterward, the residence time distribution (RTD) was determined using the respective parameters. Finally, the Fluent-predicted RTD and the experimentally measured RTD were compared. The CFD investigations revealed that an impeller speed of 50 rpm and an impeller clearance of 25 mm were not viable for experimental investigations and were thus eliminated from further analyses. The determination of RTD using a k-ε turbulence model was performed using CFD techniques. The multiple reference frame (MRF) was implemented and a steady state was initially achieved followed by a transient condition for RTD determination.

  7. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    Science.gov (United States)

    Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can be helpful in eliminating the risk of making a bad decision in the decision-making process. PMID:26977450

  8. A Semantic Based Policy Management Framework for Cloud Computing Environments

    Science.gov (United States)

    Takabi, Hassan

    2013-01-01

    Cloud computing paradigm has gained tremendous momentum and generated intensive interest. Although security issues are delaying its fast adoption, cloud computing is an unstoppable force and we need to provide security mechanisms to ensure its secure adoption. In this dissertation, we mainly focus on issues related to policy management and access…

  9. A hybrid computational method for the discovery of novel reproduction-related genes.

    Science.gov (United States)

    Chen, Lei; Chu, Chen; Kong, Xiangyin; Huang, Guohua; Huang, Tao; Cai, Yu-Dong

    2015-01-01

    Uncovering the molecular mechanisms underlying reproduction is of great importance to infertility treatment and to the generation of healthy offspring. In this study, we discovered novel reproduction-related genes with a hybrid computational method, integrating three different types of method, which offered new clues for further reproduction research. This method was first executed on a weighted graph, constructed based on known protein-protein interactions, to search the shortest paths connecting any two known reproduction-related genes. Genes occurring in these paths were deemed to have a special relationship with reproduction. These newly discovered genes were filtered with a randomization test. Then, the remaining genes were further selected according to their associations with known reproduction-related genes measured by protein-protein interaction score and alignment score obtained by BLAST. The in-depth analysis of the high confidence novel reproduction genes revealed hidden mechanisms of reproduction and provided guidelines for further experimental validations.
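
    The first stage, collecting genes that lie on shortest paths between known reproduction-related genes, can be sketched as below with a hand-rolled Dijkstra over a weighted interaction graph; the graph encoding and the function names are assumptions for illustration.

        import heapq
        from itertools import combinations

        def dijkstra_path(graph, src, dst):
            """Shortest path in a weighted graph {u: {v: w}} via Dijkstra."""
            dist, prev, seen = {src: 0.0}, {}, set()
            heap = [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == dst:
                    break
                if u in seen:
                    continue
                seen.add(u)
                for v, w in graph.get(u, {}).items():
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            if dst not in dist:
                return None                 # unreachable
            path, node = [dst], dst
            while node != src:
                node = prev[node]
                path.append(node)
            return path[::-1]

        def candidate_genes(graph, known_genes):
            """Genes lying on shortest paths between any two known
            reproduction-related genes (before the randomization and
            BLAST/PPI filters described above)."""
            found = set()
            for a, b in combinations(known_genes, 2):
                path = dijkstra_path(graph, a, b)
                if path:
                    found.update(path[1:-1])   # interior nodes only
            return found - set(known_genes)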

  10. A hybrid computation method for determining fluctuations of temperature in branched structures

    International Nuclear Information System (INIS)

    Czomber, L.

    1982-01-01

    A hybrid computation method for determining temperature fluctuations at discrete points of slab-like geometries is developed on the basis of a new formulation of the finite difference method. For this purpose, a new finite difference method is combined with an exact solution of the heat equation within the range of values of the Laplace transformation. Whereas the exact solution can be applied to arbitrarily large ranges, the finite difference formulation is given for structural ranges which need finer discretization. The boundary conditions of the exact solution are substituted by finite difference terms for the boundary residual flow or an internal heat source, depending on the problem. The resulting system of conditional equations contains only the node parameters of the finite difference method. (orig.) [de]
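
    The exact building block referred to above is the heat equation solved in the Laplace domain; for a one-dimensional region with thermal diffusivity a (notation assumed here, not taken from the paper), the transform turns the PDE into an ODE with a closed-form solution, whose constants A(s), B(s) are then fixed by the finite-difference terms at the interfaces:

        \frac{\partial T}{\partial t} = a\,\frac{\partial^2 T}{\partial x^2}
        \quad\xrightarrow{\;\mathcal{L}\;}\quad
        s\,\hat{T}(x,s) - T(x,0) = a\,\frac{d^2 \hat{T}}{dx^2},
        \qquad
        \hat{T}(x,s) = A(s)\,e^{x\sqrt{s/a}} + B(s)\,e^{-x\sqrt{s/a}}
        \quad (T(x,0) = 0)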

  11. A Hybrid Soft-computing Method for Image Analysis of Digital Plantar Scanners

    Science.gov (United States)

    Razjouyan, Javad; Khayat, Omid; Siahi, Mehdi; Mansouri, Ali Alizadeh

    2013-01-01

    Digital foot scanners have been developed in recent years to give anthropometrists a digital image of the insole with pressure distribution and anthropometric information. In this paper, a hybrid algorithm combining a gray level spatial correlation (GLSC) histogram and Shanbag entropy is presented for the analysis of scanned foot images. An evolutionary algorithm is also employed to find the optimum parameters of GLSC and the transform function of the membership values. The resulting binary (thresholded) images then undergo anthropometric measurements, taking into account the scale factor from pixel size to metric scale. The proposed method is finally applied to plantar images obtained by scanning the feet of randomly selected subjects with a foot scanner system, our experimental setup described in the paper. Running computation time and the effects of the GLSC parameters are investigated in the simulation results. PMID:24083133

  12. Hybrid automata models of cardiac ventricular electrophysiology for real-time computational applications.

    Science.gov (United States)

    Andalam, Sidharta; Ramanna, Harshavardhan; Malik, Avinash; Roop, Parthasarathi; Patel, Nitish; Trew, Mark L

    2016-08-01

    Virtual heart models have been proposed for closed loop validation of safety-critical embedded medical devices, such as pacemakers. These models must react in real-time to off-the-shelf medical devices. Real-time performance can be obtained by implementing models in computer hardware, and methods of compiling classes of Hybrid Automata (HA) onto FPGA have been developed. Models of ventricular cardiac cell electrophysiology have been described using HA which capture the complex nonlinear behavior of biological systems. However, many models that have been used for closed-loop validation of pacemakers are highly abstract and do not capture important characteristics of the dynamic rate response. We developed a new HA model of cardiac cells which captures dynamic behavior and we implemented the model in hardware. This potentially enables modeling the heart with over 1 million dynamic cells, making the approach ideal for closed loop testing of medical devices.

  13. A Hybrid Soft-computing Method for Image Analysis of Digital Plantar Scanners.

    Science.gov (United States)

    Razjouyan, Javad; Khayat, Omid; Siahi, Mehdi; Mansouri, Ali Alizadeh

    2013-01-01

    Digital foot scanners have been developed in recent years to give anthropometrists a digital image of the insole with pressure distribution and anthropometric information. In this paper, a hybrid algorithm combining a gray level spatial correlation (GLSC) histogram and Shanbag entropy is presented for the analysis of scanned foot images. An evolutionary algorithm is also employed to find the optimum parameters of GLSC and the transform function of the membership values. The resulting binary (thresholded) images then undergo anthropometric measurements, taking into account the scale factor from pixel size to metric scale. The proposed method is finally applied to plantar images obtained by scanning the feet of randomly selected subjects with a foot scanner system, our experimental setup described in the paper. Running computation time and the effects of the GLSC parameters are investigated in the simulation results.

  14. Insect-computer hybrid legged robot with user-adjustable speed, step length and walking gait.

    Science.gov (United States)

    Cao, Feng; Zhang, Chao; Choo, Hao Yu; Sato, Hirotaka

    2016-03-01

    We have constructed an insect-computer hybrid legged robot using a living beetle (Mecynorrhina torquata; Coleoptera). The protraction/retraction and levation/depression motions in both forelegs of the beetle were elicited by electrically stimulating eight corresponding leg muscles via eight pairs of implanted electrodes. To perform a defined walking gait (e.g., gallop), different muscles were individually stimulated in a predefined sequence using a microcontroller. Different walking gaits were performed by reordering the applied stimulation signals (i.e., applying different sequences). By varying the duration of the stimulation sequences, we successfully controlled the step frequency and hence the beetle's walking speed. To the best of our knowledge, this paper presents the first demonstration of living insect locomotion control with a user-adjustable walking gait, step length and walking speed. © 2016 The Author(s).
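
    In software terms, the control scheme amounts to replaying an ordered stimulation sequence: reordering the sequence changes the gait, and shortening the per-step period raises the step frequency and hence the walking speed. The channel numbers, gait tables, and stimulate() callback below are hypothetical stand-ins for the microcontroller's pulse generator, not the authors' firmware.

        import time

        # Hypothetical schedules over the eight implanted electrode pairs:
        # each step stimulates one pair of leg-muscle channels.
        GAITS = {
            "gallop": [(0, 4), (1, 5), (2, 6), (3, 7)],
            "walk":   [(0, 5), (2, 7), (1, 4), (3, 6)],
        }

        def perform_gait(stimulate, gait="gallop", step_period_s=0.5, steps=20):
            """Replay a predefined muscle-stimulation sequence."""
            seq = GAITS[gait]
            for n in range(steps):
                a, b = seq[n % len(seq)]
                stimulate(a)
                stimulate(b)
                time.sleep(step_period_s)  # sets step frequency / speed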

  15. Hydrodynamics and Water Quality forecasting over a Cloud Computing environment: INDIGO-DataCloud

    Science.gov (United States)

    Aguilar Gómez, Fernando; de Lucas, Jesús Marco; García, Daniel; Monteoliva, Agustín

    2017-04-01

    Algae bloom due to eutrophication is a widespread problem for water reservoirs and lakes that impacts directly on water quality. It can create a dead zone that lacks enough oxygen to support life, and it can also be harmful to humans, so it must be controlled in water masses used for supply, bathing or other purposes. Hydrodynamic and water quality modelling can contribute to forecasting the status of the water system in order to alert authorities before an algae bloom event occurs. It can be used to predict scenarios and find solutions to reduce the harmful impact of the blooms. High resolution models need to process a big amount of data using a robust enough computing infrastructure. INDIGO-DataCloud (https://www.indigo-datacloud.eu/) is a European Commission funded project that aims at developing a data and computing platform targeting scientific communities, deployable on multiple hardware and provisioned over hybrid (private or public) e-infrastructures. The project addresses the development of solutions for different case studies using different Cloud-based alternatives. In the first INDIGO software release, a set of components is ready to manage the deployment of services to perform N Delft3D simulations (for calibrating or scenario definition) over a Cloud Computing environment, using Docker technology: TOSCA requirement description, Docker repository, Orchestrator, AAI (Authorization, Authentication) and OneData (Distributed Storage System). Moreover, the Future Gateway portal, based on Liferay, provides a user-friendly interface where the user can configure the simulations. Due to the data approach of INDIGO, the developed solutions can contribute to managing the full data life cycle of a project, thanks to different tools to manage datasets and metadata. Furthermore, the cloud environment contributes to providing a dynamic, scalable and easy-to-use framework for non-IT expert users. This framework is potentially capable of automating the processing of...

  16. An Efficient Framework for EEG Analysis with Application to Hybrid Brain Computer Interfaces Based on Motor Imagery and P300

    Directory of Open Access Journals (Sweden)

    Jinyi Long

    2017-01-01

    Full Text Available The hybrid brain computer interface (BCI) based on motor imagery (MI) and P300 has been a preferred strategy aiming to improve detection performance by combining the features of each. However, current methods used for combining these two modalities optimize them separately, which does not result in optimal performance. Here, we present an efficient framework to optimize them together by concatenating the features of MI and P300 in a block diagonal form. Then a linear classifier under a dual spectral norm regularizer is applied to the combined features. Under this framework, the hybrid features of MI and P300 can be learned, selected, and combined together directly. Experimental results on the data set of hybrid BCI based on MI and P300 are provided to illustrate the competitive performance of the proposed method against other conventional methods. This provides evidence that the method used here contributes to the discrimination performance of the brain state in hybrid BCI.
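
    The block-diagonal concatenation is simple to picture; a minimal sketch with scipy.linalg.block_diag (the trial shapes are illustrative, and the dual-spectral-norm classifier itself is omitted):

        import numpy as np
        from scipy.linalg import block_diag

        def combine_mi_p300(x_mi, x_p300):
            """Stack MI and P300 feature matrices in block diagonal form so
            one linear classifier can weight both modalities jointly."""
            return block_diag(np.atleast_2d(x_mi), np.atleast_2d(x_p300))

        # Example: a 4x6 MI block and a 3x5 P300 block give a 7x11 matrix
        # that is zero off the two diagonal blocks.
        W = combine_mi_p300(np.ones((4, 6)), np.ones((3, 5)))
        print(W.shape)  # (7, 11)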

  17. Construction of a Digital Learning Environment Based on Cloud Computing

    Science.gov (United States)

    Ding, Jihong; Xiong, Caiping; Liu, Huazhong

    2015-01-01

    Constructing digital learning environments for ubiquitous learning and asynchronous distributed learning has opened up an immense amount of concrete research. However, current digital learning environments do not fully fulfill expectations of supporting interactive group learning, shared understanding and social construction of knowledge.…

  18. Monte Carlo in radiotherapy: experience in a distributed computational environment

    Science.gov (United States)

    Caccia, B.; Mattia, M.; Amati, G.; Andenna, C.; Benassi, M.; D'Angelo, A.; Frustagli, G.; Iaccarino, G.; Occhigrossi, A.; Valentini, S.

    2007-06-01

    New technologies in cancer radiotherapy need a more accurate computation of the dose delivered by the radiotherapy treatment plan, and it is important to integrate sophisticated mathematical models and advanced computing knowledge into the treatment planning (TP) process. We present some results on using Monte Carlo (MC) codes in dose calculation for treatment planning. A distributed computing resource located in the Technologies and Health Department of the Italian National Institute of Health (ISS), along with other computer facilities (CASPUR - Inter-University Consortium for the Application of Super-Computing for Universities and Research), has been used to perform a fully complete MC simulation to compute the dose distribution on phantoms irradiated with a radiotherapy accelerator. Using the BEAMnrc and GEANT4 MC-based codes, we calculated dose distributions on a plain water phantom and an air/water phantom. Experimental and calculated dose values agreed to within ±2% (for depths between 5 mm and 130 mm), both in PDD (percentage depth dose) and in transversal sections of the phantom. We consider these results a first step towards a system suitable for medical physics departments to simulate a complete treatment plan using remote computing facilities for MC simulations.

  19. A Hybrid Autonomic Computing-Based Approach to Distributed Constraint Satisfaction Problems

    Directory of Open Access Journals (Sweden)

    Abhishek Bhatia

    2015-03-01

    Full Text Available Distributed constraint satisfaction problems (DisCSPs) are among the widely endeavored problems using agent-based simulation. Fernandez et al. formulated the sensor and mobile tracking problem as a DisCSP, known as SensorDCSP. In this paper, we adopt a customized ERE (environment, reactive rules and entities) algorithm for the SensorDCSP, which is otherwise proven to be a computationally intractable problem. An amalgamation of the autonomy-oriented computing (AOC)-based algorithm (ERE) and a genetic algorithm (GA) provides an early solution of the modeled DisCSP. Incorporation of the GA into ERE facilitates auto-tuning of the simulation parameters, thereby leading to an early solution of constraint satisfaction. This study further contributes towards a model, built in the NetLogo simulation environment, to infer the efficacy of the proposed approach.

  20. Computer Aided Design Tools for Extreme Environment Electronics, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project aims to provide Computer Aided Design (CAD) tools for radiation-tolerant, wide-temperature-range digital, analog, mixed-signal, and radio-frequency...

  1. Distributed metadata in a high performance computing environment

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.

  2. Ubiquitous fuzzy computing in open ambient intelligence environments

    NARCIS (Netherlands)

    Acampora, G.; Loia, V.

    2006-01-01

    Ambient intelligence (AmI) is considered as the composition of three emergent technologies: ubiquitous computing, ubiquitous communication and intelligent user interfaces. The aim of integrating the aforesaid technologies is to widen the interaction between human beings and information

  3. Human face recognition using eigenface in cloud computing environment

    Science.gov (United States)

    Siregar, S. T. M.; Syahputra, M. F.; Rahmat, R. F.

    2018-02-01

    Recognizing one single face does not take long to process, but if we implement an attendance system or a security system for companies that have many faces to be recognized, it will take a long time. Cloud computing is a computing service that is performed not on a local device, but on an internet-connected data center infrastructure. Cloud computing also provides a scalability solution, as it can increase the resources needed when doing larger data processing. This research is done by applying eigenface, while data collection for training data is done using the REST concept to provide resources; the server can then process the data according to the existing stages. After the research and development of this application, it can be concluded that face recognition can be performed by implementing eigenface and applying the REST concept as an endpoint for giving or receiving the information used as a resource in forming the recognition model.
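
    A minimal eigenface sketch (plain PCA via SVD; array shapes and function names are assumptions, and the REST/cloud plumbing is omitted):

        import numpy as np

        def train_eigenfaces(faces, k=20):
            """PCA on vectorized training faces (one face per row):
            returns the mean face and the top-k eigenfaces."""
            mean = faces.mean(axis=0)
            _, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
            return mean, Vt[:k]

        def recognize(probe, mean, eigenfaces, gallery_weights, labels):
            """Project a probe face and return the nearest gallery label."""
            w = eigenfaces @ (probe - mean)
            d = np.linalg.norm(gallery_weights - w, axis=1)
            return labels[int(np.argmin(d))]

        # gallery_weights would be (faces - mean) @ eigenfaces.T, one row
        # per enrolled face; in the cloud setting both live on the server.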

  4. Computer-based Role Playing Game Environment for Analogue Electronics

    Directory of Open Access Journals (Sweden)

    Lachlan M MacKinnon

    2009-02-01

    Full Text Available An implementation of a design for a game-based virtual learning environment is described. The game is developed for a course in analogue electronics, and the topic is the design of a power supply. This task can be solved in a number of different ways, with certain constraints, giving the students a certain amount of freedom, although the game is designed not to facilitate a trial-and-error approach. The use of storytelling and a virtual gaming environment provides the student with the learning material in an MMORPG environment.

  5. A hybrid method for the computation of quasi-3D seismograms.

    Science.gov (United States)

    Masson, Yder; Romanowicz, Barbara

    2013-04-01

    The development of powerful computer clusters and efficient numerical computation methods, such as the Spectral Element Method (SEM), made possible the computation of seismic wave propagation in a heterogeneous 3D earth. However, the cost of these computations is still problematic for global scale tomography that requires hundreds of such simulations. Part of the ongoing research effort is dedicated to the development of faster modeling methods based on the spectral element method. Capdeville et al. (2002) proposed to couple SEM simulations with normal mode calculations (C-SEM). Nissen-Meyer et al. (2007) used 2D SEM simulations to compute 3D seismograms in a 1D earth model. Thanks to these developments, and for the first time, Lekic et al. (2011) developed a 3D global model of the upper mantle using SEM simulations. At the local and continental scale, adjoint tomography, which uses many SEM simulations, can be implemented on current computers (Tape, Liu et al. 2009). Due to their smaller size, these models offer higher resolution. They provide us with images of the crust and the upper part of the mantle. In an attempt to teleport such local adjoint tomographic inversions into the deep earth, we are developing a hybrid method where SEM computations are limited to a region of interest within the earth. That region can have an arbitrary shape and size. Outside this region, the seismic wavefield is extrapolated to obtain synthetic data at the Earth's surface. A key feature of the method is the use of a time reversal mirror to inject the wavefield induced by a distant seismic source into the region of interest (Robertsson and Chapman 2000). We compute synthetic seismograms as follows: Inside the region of interest, we use the regional spectral element software RegSEM to compute wave propagation in 3D. Outside this region, the wavefield is extrapolated to the surface by convolution with the Green's functions from the mirror to the seismic stations. For now, these

  6. COED Transactions, Vol. X, No. 9, September 1978. Use of the Analog/Hybrid Computer in Boundary Layer and Convection Studies.

    Science.gov (United States)

    Mitchell, Eugene E., Ed.

    In certain boundary layer or natural convection work, where a similarity transformation is valid, the equations can be reduced to a set of nonlinear ordinary differential equations. They are therefore well-suited to a fast solution on an analog/hybrid computer. This paper illustrates such usage of the analog/hybrid computer by a set of…

  7. COED Transactions, Vol. X, No. 10, October 1978. Simulation of a Sampled-Data System on a Hybrid Computer.

    Science.gov (United States)

    Mitchell, Eugene E., Ed.

    The simulation of a sampled-data system using a full parallel hybrid computer is described. The sampled-data system simulated illustrates proportional-integral-derivative (PID) discrete control of a continuous second-order process representing a stirred-tank. The stirred-tank is simulated using continuous analog components, while PID…

  8. Communicator Style as a Predictor of Cyberbullying in a Hybrid Learning Environment

    Directory of Open Access Journals (Sweden)

    Özcan Özgür Dursun

    2012-07-01

    Full Text Available This study aimed to describe the characteristics of undergraduate students in a hybrid learning environment with regard to their communicator styles and cyberbullying behaviors. Moreover, relationships between cyberbullying victimization and learners' perceived communicator styles were investigated. Cyberbullying victimization was measured through a recently developed 28-item scale with a single-factor structure, whereas the communicator styles were measured through Norton's (1983) scale, which was recently validated in Turkey. Participants were a total of 59 undergraduate Turkish students enrolled in an effective communication course in the 2010 spring and fall semesters. Face-to-face instruction was supported through web 2.0 tools, where learners hid their real identities behind nicknames. Participants used personal blogs in addition to the official online platform of the course. Their posts on these platforms were used as the source of the qualitative data. Descriptive analyses were followed by the investigation of qualitative and quantitative interrelationships between the cyberbullying variable and the components of the communicator style measure. Correlations among victimization and communicator style variables were not significant. However, qualitative analysis revealed that cyberbullying instances varied with regard to discussion topics, the nature of the discussions and communicator styles. Example patterns from the log files were presented, accompanied by suggestions for further implementations.

  10. Iterative Relay Scheduling with Hybrid ARQ under Multiple User Equipment (Type II) Relay Environments

    KAUST Repository

    Nam, Sung Sik

    2018-01-09

    In this work, we propose an iterative relay scheduling with hybrid ARQ (IRS-HARQ) scheme which realizes fast jump-in/successive relaying and subframe-based decoding under multiple user equipment (UE) relay environments applicable to next-generation cellular systems (e.g., LTE-Advanced and beyond). The proposed IRS-HARQ aims to increase the achievable data rate by iteratively scheduling a relatively better UE relay closer to the end user in a probabilistic sense, provided that the relay-to-end-user link is operated in an open-loop and transparent mode. The latter is due to the fact that not only are there no dedicated control channels between the UE relay and the end user, but also no new cell is created. Under this open-loop and transparent mode, our proposed protocol is implemented by partially exploiting the channel state information based on the overhearing mechanism of ACK/NACK for HARQ. Further, the iterative scheduling enables UE-to-UE direct communication in proximity, which offers spatial frequency reuse and energy saving.

  11. A hybrid search algorithm for swarm robots searching in an unknown environment.

    Science.gov (United States)

    Li, Shoutao; Li, Lina; Lee, Gordon; Zhang, Hao

    2014-01-01

    This paper proposes a novel method to improve the efficiency of a swarm of robots searching in an unknown environment. The approach focuses on the process of feeding and individual coordination characteristics inspired by foraging behavior in nature. A predatory strategy was used for searching; hence, this hybrid approach integrated a random search technique with a dynamic particle swarm optimization (DPSO) search algorithm. If a search robot could not find any target information, it used a random search algorithm for a global search. If the robot found any target information in a region, the DPSO search algorithm was used for a local search. This particle swarm optimization search algorithm is dynamic, as all the parameters in the algorithm are refreshed synchronously through a communication mechanism until the robots find the target position, after which the robots fall back to a random searching mode. Thus, in this searching strategy, the robots alternated between the two searching algorithms until the whole area was covered. During the searching process, the robots used a local communication mechanism to share map information and DPSO parameters, reducing the communication burden and overcoming hardware limitations. If the search area is very large, search efficiency may be greatly reduced if only one robot searches the entire region, given the limited resources available and the time constraints. In this research we divided the entire search area into several subregions, selected a target utility function to determine which subregion should be searched initially, and thereby reduced the residence time of the target to improve search efficiency.
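
    As a rough illustration of the alternation described above, here is a toy sketch of a few robots switching between a random walk and a DPSO-style velocity update once target information (a nonzero sensor "signal") has been found; the dynamics, parameters and signal model are all invented:

      import numpy as np

      rng = np.random.default_rng(1)
      target = np.array([8.0, 6.0])

      def signal(p):
          # toy sensor: target information is available only within radius 4
          d = np.linalg.norm(p - target)
          return 1.0 / (1.0 + d) if d < 4.0 else 0.0

      n = 5
      pos = rng.uniform(0.0, 10.0, (n, 2))
      vel = np.zeros((n, 2))
      pbest = pos.copy()
      pbest_val = np.array([signal(p) for p in pos])
      w, c1, c2 = 0.6, 1.5, 1.5          # DPSO coefficients (assumed values)

      for step in range(400):
          gbest = pbest[np.argmax(pbest_val)]
          for i in range(n):
              if pbest_val.max() == 0.0:            # no target info: random search
                  pos[i] += rng.uniform(-0.5, 0.5, 2)
              else:                                  # target info found: DPSO mode
                  r1, r2 = rng.random(2)
                  vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                            + c2 * r2 * (gbest - pos[i]))
                  pos[i] += vel[i]
              s = signal(pos[i])
              if s > pbest_val[i]:
                  pbest_val[i], pbest[i] = s, pos[i].copy()

      print("closest robot-target distance:",
            round(float(np.linalg.norm(pos - target, axis=1).min()), 3))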

  12. Context-aware Cloud Computing for Personal Learning Environment

    OpenAIRE

    Chen, Feng; Al-Bayatti, Ali Hilal; Siewe, Francois

    2016-01-01

    Virtual learning means learning from social interactions on a virtual platform that enables people to study anywhere and at any time. Current Virtual Learning Environments (VLEs) are a range of integrated web-based applications to support and enhance education. Normally, VLEs are institution-centric: they are owned by the institutions and are designed to support formal learning, and they do not support lifelong learning. These limitations led to the research of Personal Learning Environments (PLE...

  13. Tacit knowledge in action: basic notions of knowledge sharing in computer supported work environments

    OpenAIRE

    Mackenzie Owen, John

    2001-01-01

    An important characteristic of most computer supported work environments is the distribution of work over individuals or teams in different locations. This leads to what we nowadays call `virtual' environments. In these environments communication between actors is to a large degree mediated, i.e. established through communications media (telephone, fax, computer networks) rather than in a face-to-face way. Unfortunately, mediated communication limits the effectiveness of knowledge exchange in virt...

  14. Bridging context management systems for different types of pervasive computing environments

    NARCIS (Netherlands)

    Hesselman, C.E.W.; Benz, Hartmut; Benz, H.P.; Pawar, P.; Liu, F.; Wegdam, M.; Wibbels, Martin; Broens, T.H.F.; Brok, Jacco

    2008-01-01

    A context management system is a distributed system that enables applications to obtain context information about (mobile) users and forms a key component of any pervasive computing environment. Context management systems are however very environment-specific (e.g., specific for home environments)

  15. The Development and Evaluation of a Computer-Simulated Science Inquiry Environment Using Gamified Elements

    Science.gov (United States)

    Tsai, Fu-Hsing

    2018-01-01

    This study developed a computer-simulated science inquiry environment, called the Science Detective Squad, to engage students in investigating an electricity problem that may happen in daily life. The environment combined the simulation of scientific instruments and a virtual environment, including gamified elements, such as points and a story for…

  16. Performance Measurements in a High Throughput Computing Environment

    CERN Document Server

    AUTHOR|(CDS)2145966; Gribaudo, Marco

    The IT infrastructures of companies and research centres are implementing new technologies to satisfy the increasing need for computing resources for big data analysis. In this context, resource profiling plays a crucial role in identifying areas where the improvement of utilisation efficiency is needed. In order to deal with the profiling and optimisation of computing resources, two complementary approaches can be adopted: the measurement-based approach and the model-based approach. The measurement-based approach gathers and analyses performance metrics by executing benchmark applications on computing resources. Instead, the model-based approach implies the design and implementation of a model as an abstraction of the real system, selecting only those aspects relevant to the study. This Thesis originates from a project carried out by the author within the CERN IT department. CERN is an international scientific laboratory that conducts fundamental research in the domain of elementary particle physics. The p...

  17. Providing a computing environment for a high energy physics workshop

    International Nuclear Information System (INIS)

    Nicholls, J.

    1991-03-01

    Although computing facilities have been provided at conferences and workshops remote from the host institution for some years, the equipment provided has rarely been capable of providing for much more than simple editing and electronic mail over leased lines. This presentation describes the pioneering effort by the Computing Department/Division at Fermilab in providing a local computing facility with world-wide networking capability for the Physics at Fermilab in the 1990's workshop held in Breckenridge, Colorado, in August 1989, as well as the enhanced facilities provided for the 1990 Summer Study on High Energy Physics at Snowmass, Colorado, in June/July 1990. Issues discussed include the type and sizing of the facilities, advance preparations, shipping, on-site support, as well as an evaluation of the value of the facility to the workshop participants

  18. Cluster implementation for parallel computation within MATLAB software environment

    International Nuclear Information System (INIS)

    Santana, Antonio O. de; Dantas, Carlos C.; Charamba, Luiz G. da R.; Souza Neto, Wilson F. de; Melo, Silvio B. Melo; Lima, Emerson A. de O.

    2013-01-01

    A cluster for parallel computation with MATLAB software, the COCGT (Cluster for Optimizing Computing in Gamma ray Transmission methods), is implemented. The implementation corresponds to the creation of a local network of computers, facilities and configurations of software, as well as the accomplishment of cluster tests to determine and optimize performance in data processing. The COCGT implementation was required for data computation from gamma transmission measurements applied to fluid dynamics and tomography reconstruction in an FCC (Fluid Catalytic Cracking) cold pilot unit, as well as for simulation data. As an initial test, the determination of the SVD (Singular Value Decomposition) of a random matrix with dimension (n, n), n=1000, using Girco's law modified, revealed that COCGT was faster in comparison to the cluster in the literature [1], which is similar and operates under the same conditions. Solution of a system of linear equations provided a new test of COCGT performance: processing a square matrix with n=10000 took 27 s, and a square matrix with n=12000 took 45 s. To determine the cluster behavior with respect to 'parfor' (parallel for-loop) and 'spmd' (single program multiple data), two codes containing those two commands were applied to the same problem: determination of the SVD of a square matrix with n=1000. The execution of the codes on COCGT proved: 1) for the code with 'parfor', the performance improved with the number of labs, from 1 to 8 labs; 2) for the code with 'spmd', just 1 lab (core) was enough to process and give results in less than 1 s. A similar experiment was then run, with the difference that the SVD was now determined for a square matrix with n=1500 for the 'parfor' code and n=7000 for the 'spmd' code. Those results lead to the conclusions: 1) for the code with 'parfor', the behavior was the same as described above; 2) for the code with 'spmd', besides producing higher performance, it supports a
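
    For readers without MATLAB, the 'parfor'-style scaling test can be approximated in Python with a process pool standing in for labs; the matrix sizes and worker counts are arbitrary choices, and this is not the COCGT code:

      import time
      import numpy as np
      from multiprocessing import Pool

      def svd_task(n):
          # one independent SVD job on a random n x n matrix
          a = np.random.default_rng(n).standard_normal((n, n))
          return np.linalg.svd(a, compute_uv=False)[0]  # largest singular value

      if __name__ == "__main__":
          sizes = [500] * 8          # eight independent SVD jobs
          for workers in (1, 2, 4, 8):
              t0 = time.perf_counter()
              with Pool(workers) as pool:
                  pool.map(svd_task, sizes)
              print(f"{workers} workers: {time.perf_counter() - t0:.2f} s")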

  19. On-line computing in a classified environment

    International Nuclear Information System (INIS)

    O'Callaghan, P.B.

    1982-01-01

    Westinghouse Hanford Company (WHC) recently developed a Department of Energy (DOE) approved real-time, on-line computer system to control nuclear material. The system simultaneously processes both classified and unclassified information. Implementation of this system required the application of many security techniques. The system has a secure but user-friendly interface. Many software applications protect the integrity of the database from malevolent or accidental errors. Programming practices ensure the integrity of the computer system software. The audit trail and the report generation capability record user actions and the status of the nuclear material inventory

  20. The GOSTT concept and hybrid mixed/virtual/augmented reality environment radioguided surgery

    International Nuclear Information System (INIS)

    Valdés Olmos, R. A.; Van Leeuwen, F. W. B.; Vidal-Sicart, S.; Giammarile, F.; Zaknun, J. J.; Mariani, G.

    2014-01-01

    The popularity gained by the sentinel lymph node (SLN) procedure in the last two decades has increased the interest of the surgical disciplines in other applications of radioguided surgery. An example is the gamma-probe-guided localization of occult or difficult-to-locate neoplastic lesions. Such guidance can be achieved by intralesional delivery (ultrasound, stereotaxis or CT) of a radiolabelled agent that remains accumulated at the site of the injection. Another possibility rests on the systemic administration of a tumour-seeking radiopharmaceutical with favourable tumour accumulation and retention. On the other hand, new intraoperative imaging devices for radioguided surgery in complex anatomical areas became available. All this a few years ago led to the delineation of the concept of Guided intraOperative Scintigraphic Tumour Targeting (GOSTT), to include the whole spectrum of basic and advanced nuclear medicine procedures required for providing a roadmap that would optimise surgery. The introduction of allied signatures using, e.g., hybrid tracers for simultaneous detection of the radioactive and fluorescent signals amplified the GOSTT concept. It became possible to combine perioperative nuclear medicine imaging with the superior resolution of additional optical guidance in the operating room. This hybrid approach is currently in progress and probably will become an important model to follow in the coming years. A cornerstone of the GOSTT concept is constituted by diagnostic imaging technologies like SPECT/CT. SPECT/CT was introduced halfway through the past decade and was immediately incorporated into the SLN procedure. Important reasons contributing to the success of SPECT/CT were its combination with lymphoscintigraphy and the ability to display SLNs in an anatomical environment. This latter aspect has been significantly improved in the new generation of SPECT/CT cameras and provides the basis for the novel mixed-reality protocols of image-guided surgery. In

  2. Cloud Computing E-Communication Services in the University Environment

    Science.gov (United States)

    Babin, Ron; Halilovic, Branka

    2017-01-01

    The use of cloud computing services has grown dramatically in post-secondary institutions in the last decade. In particular, universities have been attracted to the low-cost and flexibility of acquiring cloud software services from Google, Microsoft and others, to implement e-mail, calendar and document management and other basic office software.…

  3. Music Teachers' Experiences in One-to-One Computing Environments

    Science.gov (United States)

    Dorfman, Jay

    2016-01-01

    Ubiquitous computing scenarios such as the one-to-one model, in which every student is issued a device that is to be used across all subjects, have increased in popularity and have shown both positive and negative influences on education. Music teachers in schools that adopt one-to-one models may be inadequately equipped to integrate this kind of…

  4. Formal and Informal Learning in a Computer Clubhouse Environment

    Science.gov (United States)

    McDougall, Anne; Lowe, Jenny; Hopkins, Josie

    2004-01-01

    This paper outlines the establishment and running of an after-school Computer Clubhouse, describing aspects of the leadership, mentoring and learning activities undertaken there. Research data has been collected from examination of documents associated with the Clubhouse, interviews with its founders, Director, session leaders and mentors, and…

  5. The environment strongly affects estimates of heterosis in hybrid sweet sorghum

    Science.gov (United States)

    Sweet sorghum (Sorghum bicolor (L.) Moench) has potential as a biofuel feedstock but hybrid cultivars are needed to support an industry based on this crop. The purpose of this study was to compare five inbred sweet sorghum lines and 15 hybrids derived from them, and to determine the extent of envir...

  6. Control of a nursing bed based on a hybrid brain-computer interface.

    Science.gov (United States)

    Nengneng Peng; Rui Zhang; Haihua Zeng; Fei Wang; Kai Li; Yuanqing Li; Xiaobin Zhuang

    2016-08-01

    In this paper, we propose an intelligent nursing bed system which is controlled by a hybrid brain-computer interface (BCI) involving steady-state visual evoked potential (SSVEP) and P300. Specifically, the hybrid BCI includes an asynchronous brain switch based on SSVEP and P300, and a P300-based BCI. The brain switch is used to turn on/off the control system of the electric nursing bed through idle/control state detection, whereas the P300-based BCI is for operating the nursing bed. At the beginning, the user may focus on one group of flashing buttons in the graphic user interface (GUI) of the brain switch, which can simultaneously evoke SSVEP and P300, to switch on the control system. Here, the combination of SSVEP and P300 is used for improving the performance of the brain switch. Next, the user can control the nursing bed using the P300-based BCI. The GUI of the P300-based BCI includes 10 flashing buttons, which correspond to 10 functional operations, namely, left-side up, left-side down, back up, back down, bedpan open, bedpan close, legs up, legs down, right-side up, and right-side down. For instance, he/she can focus on the flashing button "back up" in the GUI of the P300-based BCI to activate the corresponding control such that the nursing bed is adjusted up. Eight healthy subjects participated in our experiment, and obtained an average accuracy of 93.75% and an average false positive rate (FPR) of 0.15 event/min. The effectiveness of our system was thus demonstrated.
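
    The control flow described above, a brain switch gating a ten-command P300 menu, can be caricatured as a toy state machine. The detector outputs below are random stand-ins; the real SSVEP and P300 pipelines are far beyond a sketch:

      import random

      COMMANDS = ["left-side up", "left-side down", "back up", "back down",
                  "bedpan open", "bedpan close", "legs up", "legs down",
                  "right-side up", "right-side down"]

      def brain_switch_fires():      # stand-in for SSVEP+P300 idle/control detection
          return random.random() < 0.2

      def p300_target():             # stand-in for the P300 classifier output
          return random.choice(COMMANDS)

      random.seed(3)
      system_on = False
      for trial in range(10):
          if not system_on:
              if brain_switch_fires():
                  system_on = True
                  print(f"trial {trial}: control system switched on")
          else:
              print(f"trial {trial}: executing '{p300_target()}'")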

  7. Computational lymphatic node models in pediatric and adult hybrid phantoms for radiation dosimetry

    International Nuclear Information System (INIS)

    Lee, Choonsik; Lamart, Stephanie; Moroz, Brian E

    2013-01-01

    We developed models of lymphatic nodes for six pediatric and two adult hybrid computational phantoms to calculate the lymphatic node dose estimates from external and internal radiation exposures. We derived the number of lymphatic nodes from the recommendations in International Commission on Radiological Protection (ICRP) Publications 23 and 89 at 16 cluster locations for the lymphatic nodes: extrathoracic, cervical, thoracic (upper and lower), breast (left and right), mesentery (left and right), axillary (left and right), cubital (left and right), inguinal (left and right) and popliteal (left and right), for different ages (newborn, 1-, 5-, 10-, 15-year-old and adult). We modeled each lymphatic node within the voxel format of the hybrid phantoms by assuming that all nodes have an identical size derived from published data, except at narrow cluster sites. The lymph nodes were generated by the following algorithm: (1) selection of the lymph node site among the 16 cluster sites; (2) random sampling of the location of the lymph node within a spherical space centered at the chosen cluster site; (3) creation of the sphere or ovoid of tissue representing the node based on lymphatic node characteristics defined in ICRP Publications 23 and 89. We created lymph nodes until the pre-defined number of lymphatic nodes at the selected cluster site was reached. This algorithm was applied to pediatric (newborn, 1-, 5- and 10-year-old male, and 15-year-old male) and adult male and female ICRP-compliant hybrid phantoms after voxelization. To assess the performance of our models for internal dosimetry, we calculated dose conversion coefficients, called S values, for selected organs and tissues with Iodine-131 distributed in six lymphatic node cluster sites using MCNPX2.6, a well-validated Monte Carlo radiation transport code. Our analysis of the calculations indicates that the S values were significantly affected by the location of the lymph node clusters and that the values increased for
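
    The three-step placement algorithm is simple enough to sketch directly. Cluster coordinates, node counts and radii below are placeholders, not the ICRP-derived values used by the authors:

      import numpy as np

      rng = np.random.default_rng(7)

      # placeholder cluster sites: name -> (center xyz in mm, sampling radius in mm)
      clusters = {"axillary_left": (np.array([150.0, 80.0, 400.0]), 25.0),
                  "inguinal_left": (np.array([120.0, 60.0, 150.0]), 30.0)}
      nodes_per_cluster = {"axillary_left": 20, "inguinal_left": 10}
      node_radius = 4.0   # mm, identical size for all nodes (placeholder)

      def sample_in_sphere(center, radius):
          # rejection sampling of a uniform point inside a sphere
          while True:
              p = rng.uniform(-radius, radius, 3)
              if np.dot(p, p) <= radius**2:
                  return center + p

      nodes = []
      for site, (center, r) in clusters.items():          # step 1: pick the site
          for _ in range(nodes_per_cluster[site]):
              c = sample_in_sphere(center, r)             # step 2: sample a location
              nodes.append((site, c, node_radius))        # step 3: create the node
      print(f"placed {len(nodes)} lymph nodes")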

  8. Computation of the Lyapunov exponents in the compass-gait model under OGY control via a hybrid Poincaré map

    International Nuclear Information System (INIS)

    Gritli, Hassène; Belghith, Safya

    2015-01-01

    Highlights:
    • A numerical calculation method of the Lyapunov exponents in the compass-gait model under OGY control is proposed.
    • A new linearization method of the impulsive hybrid dynamics around a one-periodic hybrid limit cycle is achieved.
    • We develop a simple analytical expression of a controlled hybrid Poincaré map.
    • A dimension reduction of the hybrid Poincaré map is realized.
    • We describe the numerical computation procedure of the Lyapunov exponents via the designed hybrid Poincaré map.

    Abstract: This paper aims at providing a numerical calculation method of the spectrum of Lyapunov exponents in a four-dimensional impulsive hybrid nonlinear dynamics of a passive compass-gait model under the OGY control approach by means of a controlled hybrid Poincaré map. We present a four-dimensional simplified analytical expression of such a hybrid map obtained by linearizing the uncontrolled impulsive hybrid nonlinear dynamics around a desired one-periodic passive hybrid limit cycle. In order to compute the spectrum of Lyapunov exponents, a dimension reduction of the controlled hybrid Poincaré map is realized. The numerical calculation of the spectrum of Lyapunov exponents using the reduced-dimension controlled hybrid Poincaré map is given in detail. In order to show the effectiveness of the developed method, the spectrum of Lyapunov exponents is calculated as the slope (bifurcation) parameter varies and hence used to predict the walking dynamics behavior of the compass-gait model under the OGY control.
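
    For reference, the quantity being computed can be stated compactly. Writing P for the reduced-dimension controlled hybrid Poincaré map and DP for its Jacobian, the Lyapunov exponents follow from products of Jacobians along an orbit (this is the standard discrete-map definition, not necessarily the paper's exact notation):

      \lambda_i \;=\; \lim_{k\to\infty} \frac{1}{k}\,
          \ln \sigma_i\!\left( DP(x_{k-1}) \cdots DP(x_1)\, DP(x_0) \right),
      \qquad x_{j+1} = P(x_j),

    where \sigma_i(\cdot) denotes the i-th singular value of the accumulated Jacobian product and x_0 lies on the attractor.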

  9. Computational Photophysics in the Presence of an Environment

    Science.gov (United States)

    Nogueira, Juan J.; González, Leticia

    2018-04-01

    Most processes triggered by ultraviolet (UV) or visible (vis) light in nature take place in complex biological environments. The first step in these photophysical events is the excitation of the absorbing system or chromophore to an electronically excited state. Such an excitation can be monitored by the UV-vis absorption spectrum. A precise calculation of the UV-vis spectrum of a chromophore embedded in an environment is a challenging task that requires the consideration of several ingredients, besides an accurate electronic-structure method for the excited states. Two of the most important are an appropriate description of the interactions between the chromophore and the environment and accounting for the vibrational motion of the whole system. In this contribution, we review the most common theoretical methodologies to describe the environment (including quantum mechanics/continuum and quantum mechanics/molecular mechanics models) and to account for vibrational sampling (including Wigner sampling and molecular dynamics). Further, we illustrate in a series of examples how the lack of these ingredients can lead to a wrong interpretation of the electronic features behind the UV-vis absorption spectrum.

  10. Multiscale Computing with the Multiscale Modeling Library and Runtime Environment

    NARCIS (Netherlands)

    Borgdorff, J.; Mamonski, M.; Bosak, B.; Groen, D.; Ben Belgacem, M.; Kurowski, K.; Hoekstra, A.G.

    2013-01-01

    We introduce a software tool to simulate multiscale models: the Multiscale Coupling Library and Environment 2 (MUSCLE 2). MUSCLE 2 is a component-based modeling tool inspired by the multiscale modeling and simulation framework, with an easy-to-use API which supports Java, C++, C, and Fortran. We

  11. Sentiment analysis and ontology engineering an environment of computational intelligence

    CERN Document Server

    Chen, Shyi-Ming

    2016-01-01

    This edited volume provides the reader with a fully updated, in-depth treatise on the emerging principles, conceptual underpinnings, algorithms and practice of Computational Intelligence in the realization of concepts and implementation of models of sentiment analysis and ontology-oriented engineering. The volume involves studies devoted to key issues of sentiment analysis, sentiment models, and ontology engineering. The book is structured into three main parts. The first part offers a comprehensive and prudently structured exposure to the fundamentals of sentiment analysis and natural language processing. The second part consists of studies devoted to the concepts, methodologies, and algorithmic developments elaborating on fuzzy linguistic aggregation to emotion analysis, carrying out interpretability of computational sentiment models, emotion classification, sentiment-oriented information retrieval, a methodology of adaptive dynamics in knowledge acquisition. The third part includes a plethora of applica...

  12. An u-Service Model Based on a Smart Phone for Urban Computing Environments

    Science.gov (United States)

    Cho, Yongyun; Yoe, Hyun

    In urban computing environments, all services should be based on the interaction between humans and the environments around them, which occurs frequently and ordinarily at home and in the office. This paper proposes a u-service model based on a smart phone for urban computing environments. The suggested service model includes a context-aware and personalized service scenario development environment that can instantly describe a user's u-service demand or situation information with smart devices. To do this, the architecture of the suggested service model consists of a graphical service-editing environment for smart devices, a u-service platform, and an infrastructure with sensors and WSN/USN. The graphic editor expresses contexts as execution conditions of a new service through a context model based on ontology. The service platform deals with the service scenario according to contexts. With the suggested service model, a user in urban computing environments can quickly and easily create a u-service or a new service using smart devices.

  13. Multi-Language Programming Environments for High Performance Java Computing

    OpenAIRE

    Vladimir Getov; Paul Gray; Sava Mintchev; Vaidy Sunderam

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community's reliance on established scientific packages. As a consequence, programmers of high-performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java-to-C Interface (JCI) tool which provides ...

  14. A FUNCTIONAL MODEL OF COMPUTER-ORIENTED LEARNING ENVIRONMENT OF A POST-DEGREE PEDAGOGICAL EDUCATION

    Directory of Open Access Journals (Sweden)

    Kateryna R. Kolos

    2014-06-01

    Full Text Available The study substantiates the need for a systematic study of the functioning of a computer-oriented learning environment of post-degree pedagogical education; it defines the term “functional model of a computer-oriented learning environment of a post-degree pedagogical education”; and it builds such a functional model in accordance with the functions of business, information and communication technology, academic and administrative staff, and the peculiarities of the teachers of training courses.

  15. Effect of Genetics, Environment, and Phenotype on the Metabolome of Maize Hybrids Using GC/MS and LC/MS.

    Science.gov (United States)

    Tang, Weijuan; Hazebroek, Jan; Zhong, Cathy; Harp, Teresa; Vlahakis, Chris; Baumhover, Brian; Asiago, Vincent

    2017-06-28

    We evaluated the variability of metabolites in various maize hybrids due to the effect of environment, genotype, phenotype as well as the interaction of the first two factors. We analyzed 480 forage and the same number of grain samples from 21 genetically diverse non-GM Pioneer brand maize hybrids, including some with drought tolerance and viral resistance phenotypes, grown at eight North American locations. As complementary platforms, both GC/MS and LC/MS were utilized to detect a wide diversity of metabolites. GC/MS revealed 166 and 137 metabolites in forage and grain samples, respectively, while LC/MS captured 1341 and 635 metabolites in forage and grain samples, respectively. Univariate and multivariate analyses were utilized to investigate the response of the maize metabolome to the environment, genotype, phenotype, and their interaction. Based on combined percentages from GC/MS and LC/MS datasets, the environment affected 36% to 84% of forage metabolites, while less than 7% were affected by genotype. The environment affected 12% to 90% of grain metabolites, whereas less than 27% were affected by genotype. Less than 10% and 11% of the metabolites were affected by phenotype in forage and grain, respectively. Unsupervised PCA and HCA analyses revealed similar trends, i.e., environmental effect was much stronger than genotype or phenotype effects. On the basis of comparisons of disease tolerant and disease susceptible hybrids, neither forage nor grain samples originating from different locations showed obvious phenotype effects. Our findings demonstrate that the combination of GC/MS and LC/MS based metabolite profiling followed by broad statistical analysis is an effective approach to identify the relative impact of environmental, genetic and phenotypic effects on the forage and grain composition of maize hybrids.
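
    The unsupervised part of such a workflow (PCA over sample-by-metabolite matrices) is easy to sketch. The data below are simulated with a deliberately dominant location effect, merely to illustrate the kind of separation reported; scikit-learn is an assumed tooling choice:

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(42)
      n_hybrids, n_locations, n_metabolites = 21, 8, 137

      # simulate: location (environment) effect much stronger than genotype effect
      loc_effect = rng.standard_normal((n_locations, n_metabolites)) * 3.0
      gen_effect = rng.standard_normal((n_hybrids, n_metabolites)) * 0.5
      rows, labels = [], []
      for g in range(n_hybrids):
          for l in range(n_locations):
              rows.append(gen_effect[g] + loc_effect[l]
                          + rng.standard_normal(n_metabolites) * 0.3)
              labels.append(l)
      X = np.array(rows)

      scores = PCA(n_components=2).fit_transform(X)
      # samples from the same location cluster together in PC space
      for l in range(n_locations):
          c = scores[np.array(labels) == l].mean(axis=0)
          print(f"location {l}: PC1/PC2 centroid = {c.round(1)}")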

  16. New computer and communications environments for light armored vehicles

    Science.gov (United States)

    Rapanotti, John L.; Palmarini, Marc; Dumont, Marc

    2002-08-01

    Light Armoured Vehicles (LAVs) are being developed to meet the modern requirements of rapid deployment and operations other than war. To achieve these requirements, passive armour is minimized and survivability depends more on sensors, computers and countermeasures to detect and avoid threats. The performance, reliability, and ultimately the cost of these components will be determined by the trends in computing and communications. These trends and their potential impact on DAS (Defensive Aids Suite) development were investigated and are reported in this paper. Vehicle performance is affected by communication with other vehicles and other ISTAR (Intelligence, Surveillance, Target Acquisition and Reconnaissance) battlefield assets. This investigation includes the networking technology Jini, developed by Sun Microsystems, which can be used to interface the vehicle to the ISTAR network, and VxWorks by Wind River Systems, a real-time operating system designed for military systems and compatible with Jini. Other technologies affecting computer hardware development include dynamic reconfiguration, hot swap, alternate pathing, CompactPCI, and Fibre Channel serial communication. To achieve the necessary performance at reasonable cost, and over the long service life of the vehicle, a DAS should have two essential features. A “fitted for, but not fitted with” approach will provide the necessary rapid deployment without a need to equip the entire fleet. With an expected vehicle service life of 50 years, 5-year technology upgrades can be used to maintain vehicle performance over the entire service life. A federation of modules instead of integrated fused sensors will provide the capability for incremental upgrades and mission configurability. A plug-and-play capability can be used for both hardware and expendables.

  17. Hybrid Cloud Computing Architecture Optimization by Total Cost of Ownership Criterion

    Directory of Open Access Journals (Sweden)

    Elena Valeryevna Makarenko

    2014-12-01

    Full Text Available Achieving the goals of information security is a key factor in the decision to outsource information technology and, in particular, in the decision to migrate organizational data, applications, and other resources to an infrastructure based on cloud computing. The key issue in the selection of an optimal architecture, and in the subsequent migration of business applications and data to the cloud information environment of an organization, is the question of the total cost of ownership of the IT infrastructure. This paper focuses on solving the problem of minimizing the total cost of ownership of a hybrid cloud.

  18. Performance dependence of hybrid x-ray computed tomography/fluorescence molecular tomography on the optical forward problem.

    Science.gov (United States)

    Hyde, Damon; Schulz, Ralf; Brooks, Dana; Miller, Eric; Ntziachristos, Vasilis

    2009-04-01

    Hybrid imaging systems combining x-ray computed tomography (CT) and fluorescence tomography can improve fluorescence imaging performance by incorporating anatomical x-ray CT information into the optical inversion problem. While the use of image priors has been investigated in the past, little is known about the optimal use of forward photon propagation models in hybrid optical systems. In this paper, we explore the impact on reconstruction accuracy of the use of propagation models of varying complexity, specifically in the context of these hybrid imaging systems where significant structural information is known a priori. Our results demonstrate that the use of generically known parameters provides near optimal performance, even when parameter mismatch remains.
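
    The abstract does not spell out which propagation models were compared; for orientation, the standard baseline forward model in fluorescence tomography is the diffusion approximation to the radiative transfer equation (continuous-wave form, notation assumed):

      -\nabla \cdot \left( D(\mathbf{r})\, \nabla \Phi(\mathbf{r}) \right)
        + \mu_a(\mathbf{r})\, \Phi(\mathbf{r}) = S(\mathbf{r}),
      \qquad D = \frac{1}{3\,(\mu_a + \mu_s')},

    where \Phi is the photon fluence, \mu_a and \mu_s' are the absorption and reduced scattering coefficients, and S is the source term. "Generically known parameters" in the authors' sense would then correspond to using literature values of \mu_a and \mu_s' rather than patient-specific ones.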

  19. Understanding the Offender/Environment Dynamic for Computer Crimes

    DEFF Research Database (Denmark)

    Willison, Robert Andrew

    2005-01-01

    practices by possibly highlighting new areas for safeguard implementation. To help facilitate a greater understanding of the offender/environment dynamic, this paper assesses the feasibility of applying criminological theory to the IS security context. More specifically, three theories are advanced, which focus ... on the offender's behaviour in a criminal setting. Drawing on an account of the Barings Bank collapse, events highlighted in the case study are used to assess whether concepts central to the theories are supported by the data. It is noted that while one of the theories is to be found wanting in terms of conceptual ...

  20. Shell stability analysis in a computer aided engineering (CAE) environment

    Science.gov (United States)

    Arbocz, J.; Hol, J. M. A. M.

    1993-01-01

    The development of 'DISDECO', the Delft Interactive Shell DEsign COde, is described. The purpose of this project is to make the accumulated theoretical, numerical and practical knowledge of the last 25 years or so readily accessible to users interested in the analysis of buckling-sensitive structures. With this open-ended, hierarchical, interactive computer code the user can successively access, from his workstation, programs of increasing complexity. The computational modules currently operational in DISDECO provide the prospective user with facilities to calculate the critical buckling loads of stiffened anisotropic shells under combined loading, to investigate the effects the various types of boundary conditions will have on the critical load, and to get a complete picture of the degrading effects the different shapes of possible initial imperfections might cause, all in one interactive session. Once a design is finalized, its collapse load can be verified by running a large refined model remotely from the workstation with one of the current generation two-dimensional codes, with advanced capabilities to handle both geometric and material nonlinearities.

  1. Tablet computers and eBooks. Unlocking the potential for personal learning environments?

    NARCIS (Netherlands)

    Kalz, Marco

    2012-01-01

    Kalz, M. (2012, 9 May). Tablet computers and eBooks. Unlocking the potential for personal learning environments? Invited presentation during the annual conference of the European Association for Distance Learning (EADL), Noordwijkerhout, The Netherlands.

  2. Deception Detection in a Computer-Mediated Environment: Gender, Trust, and Training Issues

    National Research Council Canada - National Science Library

    Dziubinski, Monica

    2003-01-01

    .... This research draws on communication and deception literature to develop a conceptual model proposing relationships between deception detection abilities in a computer-mediated environment, gender, trust, and training...

  3. Potential of Hybrid Computational Phantoms for Retrospective Heart Dosimetry After Breast Radiation Therapy: A Feasibility Study

    Energy Technology Data Exchange (ETDEWEB)

    Moignier, Alexandra, E-mail: alexandra.moignier@irsn.fr [Institut de Radioprotection et de Surete Nucleaire, Fontenay-aux-Roses (France); Derreumaux, Sylvie; Broggio, David; Beurrier, Julien [Institut de Radioprotection et de Surete Nucleaire, Fontenay-aux-Roses (France); Chea, Michel; Boisserie, Gilbert [Groupe Hospitalier Pitie Salpetriere, Service de Radiotherapie, Paris (France); Franck, Didier; Aubert, Bernard [Institut de Radioprotection et de Surete Nucleaire, Fontenay-aux-Roses (France); Mazeron, Jean-Jacques [Groupe Hospitalier Pitie Salpetriere, Service de Radiotherapie, Paris (France)

    2013-02-01

    Purpose: Current retrospective cardiovascular dosimetry studies are based on a representative patient or simple mathematic phantoms. Here, a process of patient modeling was developed to personalize the anatomy of the thorax and to include a heart model with coronary arteries. Methods and Materials: The patient models were hybrid computational phantoms (HCPs) with an inserted detailed heart model. A computed tomography (CT) acquisition (pseudo-CT) was derived from each HCP and imported into a treatment planning system where treatment conditions were reproduced. Six current patients were selected: 3 were modeled from their CT images (A patients) and the others were modeled from 2 orthogonal radiographs (B patients). The method's performance and limitations were investigated by quantitative comparison between the initial CT and the pseudo-CT; namely, the morphology and the dose calculation were compared. For the B patients, a comparison with 2 kinds of representative patients was also conducted. Finally, dose assessment was focused on the whole coronary artery tree and the left anterior descending coronary artery. Results: When 3-dimensional anatomic information was available, the dose calculations performed on the initial CT and the pseudo-CT were in good agreement. For the B patients, comparison of doses derived from HCPs and representative patients showed that the HCP doses were either better or equivalent. In the left breast radiation therapy context and for the studied cases, coronary mean doses were at least 5-fold higher than heart mean doses. Conclusions: For retrospective dose studies, it is suggested that HCPs offer a better surrogate, in terms of dose accuracy, than representative patients. The use of a detailed heart model eliminates the problem of identifying the coronaries on the patient's CT.

  4. Prediction of monthly regional groundwater levels through hybrid soft-computing techniques

    Science.gov (United States)

    Chang, Fi-John; Chang, Li-Chiu; Huang, Chien-Wei; Kao, I.-Feng

    2016-10-01

    Groundwater systems are intrinsically heterogeneous with dynamic temporal-spatial patterns, which causes great difficulty in quantifying their complex processes, while reliable predictions of regional groundwater levels are commonly needed for managing water resources to ensure proper service of water demands within a region. In this study, we proposed a novel and flexible soft-computing technique that can effectively extract the complex high-dimensional input-output patterns of basin-wide groundwater-aquifer systems in an adaptive manner. The soft-computing models combined the Self-Organizing Map (SOM) and the Nonlinear Autoregressive with Exogenous Inputs (NARX) network for predicting monthly regional groundwater levels based on hydrologic forcing data. The SOM could effectively classify the temporal-spatial patterns of regional groundwater levels, the NARX could accurately predict the mean of regional groundwater levels for adjusting the selected SOM, the Kriging was used to interpolate the predictions of the adjusted SOM onto finer grids of locations, and consequently the prediction of a monthly regional groundwater level map could be obtained. The Zhuoshui River basin in Taiwan served as the study case, and its monthly data sets collected from 203 groundwater stations, 32 rainfall stations and 6 flow stations from 2000 to 2013 were used for modelling purposes. The results demonstrated that the hybrid SOM-NARX model could reliably and suitably predict monthly basin-wide groundwater levels with high correlations (R2 > 0.9 in both training and testing cases). The proposed methodology presents a milestone in modelling regional environmental issues and offers an insightful and promising way to predict monthly basin-wide groundwater levels, which is beneficial to authorities for sustainable water resources management.
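
    A compressed illustration of the SOM half of the proposed pipeline: a small self-organizing map trained on synthetic "monthly groundwater pattern" vectors, after which a new observation is assigned to its best-matching unit. The NARX mean-correction and Kriging steps are omitted, and all data, map size and training schedules are invented:

      import numpy as np

      rng = np.random.default_rng(5)
      n_stations = 20
      X = rng.standard_normal((500, n_stations))     # synthetic monthly patterns

      # 3x3 SOM, online training with shrinking neighborhood and learning rate
      grid = np.array([(i, j) for i in range(3) for j in range(3)], float)
      W = rng.standard_normal((9, n_stations)) * 0.1
      for t, x in enumerate(X):
          lr = 0.5 * (1 - t / len(X))                 # decaying learning rate
          sigma = 1.5 * (1 - t / len(X)) + 0.3        # decaying neighborhood width
          bmu = np.argmin(((W - x) ** 2).sum(axis=1)) # best-matching unit
          h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma**2))
          W += lr * h[:, None] * (x - W)              # pull units toward the sample

      x_new = rng.standard_normal(n_stations)
      print("new month maps to SOM unit",
            int(np.argmin(((W - x_new) ** 2).sum(axis=1))))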

  5. Evaluation of External Memory Access Performance on a High-End FPGA Hybrid Computer

    Directory of Open Access Journals (Sweden)

    Konstantinos Kalaitzis

    2016-10-01

    Full Text Available The motivation for this research was to evaluate the main memory performance of a hybrid supercomputer such as the Convey HC-x, and to ascertain how the memory controller performs in several access scenarios, vis-à-vis hand-coded memory prefetches. Such memory patterns are very useful in stencil computations. The theoretical memory bandwidth of the Convey is compared with the results of our measurements. An accurate study of the memory subsystem is particularly useful for users when they are developing their application-specific personality. Experiments were performed to measure the bandwidth between the coprocessor and the memory subsystem. The experiments aimed mainly at measuring the read access speed of the memory from the Application Engines (FPGAs). Different ways of accessing data were used in order to find the most efficient way to access memory; this way was proposed for future work on the Convey HC-x. When performing a series of accesses to memory, non-uniform latencies occur, which the memory controller of the Convey HC-x coprocessor attempts to hide. We measure memory efficiency as the ratio of the number of memory accesses to the number of execution cycles; this ratio converges to one in most cases. In addition, we performed experiments with hand-coded memory accesses. The analysis of the experimental results shows how the memory subsystem and the memory controllers work. From this work we conclude that the memory controllers do an excellent job, largely because, transparently to the user, they seem to cache large amounts of data, and hence hand-coding is not needed in most situations.

  6. Adaptive hybrid brain-computer interaction: ask a trainer for assistance!

    Science.gov (United States)

    Müller-Putz, Gernot R; Steyrl, David; Faller, Josef

    2014-01-01

    In applying mental imagery brain-computer interfaces (BCIs) to end users, training is a key part for novice users to get control. In general learning situations, it is an established concept that a trainer assists a trainee to improve his/her aptitude in certain skills. In this work, we want to evaluate whether we can apply this concept in the context of event-related desynchronization (ERD) based, adaptive, hybrid BCIs. Hence, in a first session we merged the features of a high-aptitude BCI user, the trainer, and a novice user, the trainee, in a closed-loop BCI feedback task and automatically adapted the classifier over time. In a second session the trainees operated the system unassisted. Twelve healthy participants ran through this protocol. Along with the trainer, the trainees achieved a very high overall peak accuracy of 95.3%. In the second session, where users operated the BCI unassisted, they still achieved a high overall peak accuracy of 83.6%. Ten of twelve first-time BCI users achieved accuracy significantly better than chance. Concluding, we can say that this trainer-trainee approach is very promising. Future research should investigate whether this approach is superior to conventional training approaches. This trainer-trainee concept could have potential for future application of BCIs to end users.

  7. Tools for Brain-Computer Interaction: a general concept for a hybrid BCI (hBCI

    Directory of Open Access Journals (Sweden)

    Gernot R. Mueller-Putz

    2011-11-01

    Full Text Available The aim of this work is to present the development of a hybrid Brain-Computer Interface (hBCI) which combines existing input devices with a BCI. Thereby, the BCI should be available if the user wishes to extend the types of inputs available to an assistive technology system, but the user can also choose not to use the BCI at all; the BCI is active in the background. The hBCI might, on the one hand, decide which input channel(s) offer the most reliable signal(s) and switch between input channels to improve information transfer rate, usability, or other factors, or, on the other hand, fuse various input channels. One major goal therefore is to bring BCI technology to a level where it can be used in a maximum number of scenarios in a simple way. To achieve this, it is of great importance that the hBCI is able to operate reliably for long periods, recognizing and adapting to changes as it does so. This goal is only possible if many different subsystems in the hBCI can work together. Since one research institute alone cannot provide such different functionality, collaboration between institutes is necessary. To allow for such a collaboration, a common software framework was investigated.

  8. Hybrid setup for micro- and nano-computed tomography in the hard X-ray range

    Science.gov (United States)

    Fella, Christian; Balles, Andreas; Hanke, Randolf; Last, Arndt; Zabler, Simon

    2017-12-01

    With increasing miniaturization in industry and medical technology, non-destructive testing techniques are an area of ever-increasing importance. In this framework, X-ray microscopy offers an efficient tool for the analysis, understanding, and quality assurance of microscopic samples, in particular as it allows reconstructing three-dimensional data sets of the whole sample's volume via computed tomography (CT). The following article describes a compact X-ray microscope in the hard X-ray regime around 9 keV, based on a highly brilliant liquid-metal-jet source. In comparison to commercially available instruments, it is a hybrid that works in two different modes. The first one is a micro-CT mode without optics, which uses a high-resolution detector to allow scans of samples in the millimeter range with a resolution of 1 μm. The second mode is a microscope, which contains an X-ray optical element to magnify the sample and allows resolving 150 nm features. Changing between the modes is possible without moving the sample. Thus, the instrument represents an important step towards establishing high-resolution laboratory-based multi-mode X-ray microscopy as a standard investigation method.

  9. sBCI-Headset—Wearable and Modular Device for Hybrid Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Tatsiana Malechka

    2015-02-01

    Full Text Available Severely disabled people, like completely paralyzed persons either with tetraplegia or similar disabilities who cannot use their arms and hands, are often considered as a user group of Brain-Computer Interfaces (BCI). In order to achieve high acceptance of the BCI by this user group and their supporters, the BCI system has to be integrated into their support infrastructure. Critical disadvantages of a BCI are the time-consuming preparation of the user for the electroencephalography (EEG) measurements and the low information transfer rate of EEG-based BCI. These disadvantages become apparent if a BCI is used to control complex devices. In this paper, a hybrid BCI is described that enables research for a Human-Machine Interface (HMI) that is optimally adapted to the requirements of the user and the tasks to be carried out. The solution is based on the integration of a steady-state visual evoked potential (SSVEP) BCI, an event-related (de)synchronization (ERD/ERS) BCI, an eye tracker, an environmental observation camera, and a new EEG head cap for wearing comfort and easy preparation. The design of the new fast multimodal BCI (called sBCI) is described and first test results, obtained in experiments with six healthy subjects, are presented. The sBCI concept may also become useful for healthy people in cases where “hands-free” handling of devices is necessary.

  10. Hybrid computational phantoms of the male and female newborn patient: NURBS-based whole-body models

    International Nuclear Information System (INIS)

    Lee, Choonsik; Lodwick, Daniel; Hasenauer, Deanna; Williams, Jonathan L; Lee, Choonik; Bolch, Wesley E

    2007-01-01

    Anthropomorphic computational phantoms are computer models of the human body for use in the evaluation of dose distributions resulting from either internal or external radiation sources. Currently, two classes of computational phantoms have been developed and widely utilized for organ dose assessment: (1) stylized phantoms and (2) voxel phantoms, which describe the human anatomy via mathematical surface equations or 3D voxel matrices, respectively. Although stylized phantoms based on mathematical equations can be very flexible in regard to making changes in organ position and geometrical shape, they are limited in their ability to fully capture the anatomic complexities of human internal anatomy. In turn, voxel phantoms have been developed through image-based segmentation and correspondingly provide much better anatomical realism in comparison to simpler stylized phantoms. However, they themselves are limited in defining organs presented in low contrast within either magnetic resonance or computed tomography images, the two major sources in voxel phantom construction. By definition, voxel phantoms are typically constructed via segmentation of transaxial images, and thus while fine anatomic features are seen in this viewing plane, slice-to-slice discontinuities become apparent in viewing the anatomy of voxel phantoms in the sagittal or coronal planes. This study introduces the concept of a hybrid computational newborn phantom that takes full advantage of the best features of both its stylized and voxel counterparts: flexibility in phantom alterations and anatomic realism. Non-uniform rational B-spline (NURBS) surfaces, a mathematical modeling tool traditionally applied to graphical animation studies, were adopted to replace the limited mathematical surface equations of stylized phantoms. A previously developed whole-body voxel phantom of the newborn female was utilized as a realistic anatomical framework for hybrid phantom construction. The construction of a hybrid
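
    NURBS surfaces are assembled from B-spline basis functions, so the machinery behind such hybrid phantoms ultimately rests on the Cox-de Boor recursion, sketched here in a textbook form unrelated to the authors' modeling software:

      def bspline_basis(i, p, u, knots):
          """Cox-de Boor recursion: value of the i-th degree-p basis function at u."""
          if p == 0:
              return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
          left = right = 0.0
          d1 = knots[i + p] - knots[i]
          if d1 > 0:
              left = (u - knots[i]) / d1 * bspline_basis(i, p - 1, u, knots)
          d2 = knots[i + p + 1] - knots[i + 1]
          if d2 > 0:
              right = ((knots[i + p + 1] - u) / d2
                       * bspline_basis(i + 1, p - 1, u, knots))
          return left + right

      # quadratic basis functions on a clamped knot vector
      knots = [0, 0, 0, 1, 2, 3, 3, 3]
      vals = [bspline_basis(i, 2, 1.5, knots) for i in range(5)]
      print(vals, "sum =", sum(vals))   # partition of unity: the sum is 1.0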

  11. A Comparative Study of Load Balancing Algorithms in Cloud Computing Environment

    OpenAIRE

    Katyal, Mayanka; Mishra, Atul

    2014-01-01

    Cloud Computing is a new trend emerging in the IT environment, with huge requirements for infrastructure and resources. Load Balancing is an important aspect of a cloud computing environment. An efficient load balancing scheme ensures efficient resource utilization by provisioning resources to cloud users on an on-demand basis in a pay-as-you-go manner. Load Balancing may even support prioritizing users by applying appropriate scheduling criteria. This paper presents various load balancing schemes in differ...

  12. A synthetic computational environment: To control the spread of respiratory infections in a virtual university

    Science.gov (United States)

    Ge, Yuanzheng; Chen, Bin; Liu, Liang; Qiu, Xiaogang; Song, Hongbin; Wang, Yong

    2018-02-01

    Individual-based computational environments provide an effective solution for studying complex social events by reconstructing scenarios. Challenges remain in reconstructing the virtual scenarios and reproducing the complex evolution. In this paper, we propose a framework to reconstruct a synthetic computational environment, reproduce an epidemic outbreak, and evaluate management interventions in a virtual university. The reconstructed computational environment includes 4 fundamental components: the synthetic population, behavior algorithms, multiple social networks, and the geographic campus environment. In the virtual university, influenza H1N1 transmission experiments are conducted, and gradually enhanced interventions are evaluated and compared quantitatively. The experiment results indicate that the reconstructed virtual environment provides a solution to reproduce complex emergencies and evaluate policies to be executed in the real world.
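
    A drastically reduced sketch of this kind of individual-based outbreak experiment: agents on a random contact network, probabilistic per-contact transmission, and one intervention knob (a daily isolation probability). Every number is an invented toy parameter:

      import random

      random.seed(11)
      N, p_contact, p_transmit, t_recover = 500, 0.02, 0.05, 7

      # random contact network (Erdos-Renyi style)
      neighbors = {i: set() for i in range(N)}
      for i in range(N):
          for j in range(i + 1, N):
              if random.random() < p_contact:
                  neighbors[i].add(j); neighbors[j].add(i)

      state = {i: "S" for i in range(N)}   # S(usceptible), I(nfected), R(ecovered)
      timer = {}
      state[0] = "I"; timer[0] = t_recover # index case

      def step(p_isolate=0.0):
          new_inf = []
          for i, s in state.items():
              if s == "I":
                  if random.random() >= p_isolate:       # not isolated today
                      for j in neighbors[i]:
                          if state[j] == "S" and random.random() < p_transmit:
                              new_inf.append(j)
                  timer[i] -= 1
                  if timer[i] <= 0:
                      state[i] = "R"
          for j in new_inf:
              if state[j] == "S":
                  state[j] = "I"; timer[j] = t_recover

      for day in range(60):
          step(p_isolate=0.5)   # intervention: isolate half the infectious contacts
      print("attack rate:", sum(s != "S" for s in state.values()) / N)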

  13. Bridging Theory and Practice: Developing Guidelines to Facilitate the Design of Computer-based Learning Environments

    Directory of Open Access Journals (Sweden)

    Lisa D. Young

    2003-10-01

    Full Text Available The design of computer-based learning environments has undergone a paradigm shift, moving students away from instruction that was considered to promote technical rationality grounded in objectivism, towards the application of computers to create cognitive tools utilized in constructivist environments. The goal of the resulting computer-based learning environment design principles is to have students learn with technology, rather than from technology. This paper reviews the general constructivist theory that has guided the development of these environments, and offers suggestions for the adaptation of modest, generic guidelines, not mandated principles, that can be flexibly applied and allow for the expression of true constructivist ideals in online learning environments.

  14. Perspectives on Emerging/Novel Computing Paradigms and Future Aerospace Workforce Environments

    Science.gov (United States)

    Noor, Ahmed K.

    2003-01-01

    The accelerating pace of computing technology development shows no signs of abating. Computing power of 100 Tflop/s is likely to be reached by 2004, and Pflop/s (10^15 Flop/s) by 2007. The fundamental physical limits of computation, including information storage limits, communication limits and computation rate limits, will likely be reached by the middle of the present millennium. To overcome these limits, novel technologies and new computing paradigms will be developed. An attempt is made in this overview to put the diverse activities related to new computing paradigms in perspective and to set the stage for the succeeding presentations. The presentation is divided into five parts. In the first part, a brief historical account is given of the development of computer and networking technologies. The second part provides brief overviews of the three emerging computing paradigms: grid, ubiquitous and autonomic computing. The third part lists future computing alternatives and the characteristics of the future computing environment. The fourth part describes future aerospace workforce research, learning and design environments. The fifth part lists the objectives of the workshop and some of the sources of information on future computing paradigms.

  15. A hybrid NIRS-EEG system for self-paced brain computer interface with online motor imagery.

    Science.gov (United States)

    Koo, Bonkon; Lee, Hwan-Gon; Nam, Yunjun; Kang, Hyohyeong; Koh, Chin Su; Shin, Hyung-Cheul; Choi, Seungjin

    2015-04-15

    For a self-paced motor imagery based brain-computer interface (BCI), the system should be able to recognize the occurrence of a motor imagery, as well as the type of the motor imagery. However, because of the difficulty of detecting the occurrence of a motor imagery, motor imagery based BCI studies have generally focused on the cued motor imagery paradigm. In this paper, we present a novel hybrid BCI system that uses near infrared spectroscopy (NIRS) and electroencephalography (EEG) systems together to achieve an online self-paced motor imagery based BCI. We designed a unique sensor frame that records NIRS and EEG simultaneously for the realization of our system. Based on this hybrid system, we propose a novel analysis method that detects the occurrence of a motor imagery with the NIRS system, and classifies its type with the EEG system. An online experiment demonstrated that our hybrid system had a true positive rate of about 88% and a false positive rate of 7%, with an average response time of 10.36 s. As far as we know, there is no previous report that has explored a hemodynamic brain switch for self-paced motor imagery based BCI with a hybrid EEG and NIRS system. From our experimental results, our hybrid system showed enough reliability for use in a practical self-paced motor imagery based BCI. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. An EEG/EOG-based hybrid brain-neural computer interaction (BNCI) system to control an exoskeleton for the paralyzed hand.

    Science.gov (United States)

    Soekadar, Surjo R; Witkowski, Matthias; Vitiello, Nicola; Birbaumer, Niels

    2015-06-01

    The loss of hand function can result in severe physical and psychosocial impairment. Thus, compensation of a lost hand function using assistive robotics that can be operated in daily life is very desirable. However, versatile, intuitive, and reliable control of assistive robotics is still an unsolved challenge. Here, we introduce a novel brain/neural-computer interaction (BNCI) system that integrates electroencephalography (EEG) and electrooculography (EOG) to improve control of assistive robotics in daily life environments. To evaluate the applicability and performance of this hybrid approach, five healthy volunteers (HV) (four men, average age 26.5 ± 3.8 years) and a 34-year-old patient with complete finger paralysis due to a brachial plexus injury (BPI) used EEG (condition 1) and EEG/EOG (condition 2) to control grasping motions of a hand exoskeleton. All participants were able to control the BNCI system (BNCI control performance HV: 70.24 ± 16.71%, BPI: 65.93 ± 24.27%), but inclusion of EOG significantly improved performance across all participants (HV: 80.65 ± 11.28%, BPI: 76.03 ± 18.32%). This suggests that hybrid BNCI systems can achieve substantially better control over assistive devices, e.g., a hand exoskeleton, than systems using brain signals alone, and thus may increase the applicability of brain-controlled assistive devices in daily life environments.

  17. Weighted Local Active Pixel Pattern (WLAPP) for Face Recognition in Parallel Computation Environment

    Directory of Open Access Journals (Sweden)

    Gundavarapu Mallikarjuna Rao

    2013-10-01

    Abstract - The availability of multi-core technology has resulted in a totally new computational era. Researchers are keen to explore the available potential in state-of-the-art machines for breaking the barrier imposed by serial computation. Face recognition is a challenging application in any computational environment. The main difficulty of traditional face recognition algorithms is their lack of scalability. In this paper Weighted Local Active Pixel Pattern (WLAPP), a new scalable face recognition algorithm suitable for parallel environments, is proposed. Local Active Pixel Pattern (LAPP) is found to be simple and computationally inexpensive compared to Local Binary Patterns (LBP). WLAPP is developed based on the concept of LAPP. The experimentation is performed on the FG-Net Aging Database with a deliberately introduced 20% distortion, and the results are encouraging. Keywords — Active pixels, Face Recognition, Local Binary Pattern (LBP), Local Active Pixel Pattern (LAPP), Pattern computing, parallel workers, template, weight computation.
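
    For reference, the baseline LBP descriptor that LAPP/WLAPP is compared against can be sketched in a few lines. This is the standard 8-neighbour LBP, not the WLAPP formulation (which the abstract does not fully specify); the random test image is a placeholder.

        import numpy as np

        def lbp_image(gray):
            """Basic 8-neighbour Local Binary Pattern over a 2-D grayscale array.

            Each interior pixel is encoded by comparing its 8 neighbours with the
            centre value; neighbours >= centre set a 1-bit in a fixed order.
            """
            g = gray.astype(np.int32)
            c = g[1:-1, 1:-1]                            # centre pixels
            shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                      (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise neighbours
            codes = np.zeros_like(c)
            rows, cols = g.shape
            for bit, (dy, dx) in enumerate(shifts):
                neigh = g[1 + dy:rows - 1 + dy, 1 + dx:cols - 1 + dx]
                codes |= (neigh >= c).astype(np.int32) << bit
            return codes

        img = np.random.default_rng(0).integers(0, 256, (6, 6)).astype(np.uint8)
        print(lbp_image(img))      # 4x4 array of 8-bit LBP codes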

  18. Service ORiented Computing EnviRonment (SORCER) for Deterministic Global and Stochastic Optimization

    OpenAIRE

    Raghunath, Chaitra

    2015-01-01

    With rapid growth in the complexity of large scale engineering systems, the application of multidisciplinary analysis and design optimization (MDO) in the engineering design process has garnered much attention. MDO addresses the challenge of integrating several different disciplines into the design process. Primary challenges of MDO include computational expense and poor scalability. The introduction of a distributed, collaborative computational environment results in better...

  19. Dynamic Scaffolding of Socially Regulated Learning in a Computer-Based Learning Environment

    Science.gov (United States)

    Molenaar, Inge; Roda, Claudia; van Boxtel, Carla; Sleegers, Peter

    2012-01-01

    The aim of this study is to test the effects of dynamically scaffolding social regulation of middle school students working in a computer-based learning environment. Dyads in the scaffolding condition (N=56) are supported with computer-generated scaffolds and students in the control condition (N=54) do not receive scaffolds. The scaffolds are…

  20. Dynamic Scaffolding of Socially Regulated Learning in a Computer-Based Learning Environment

    NARCIS (Netherlands)

    Molenaar, I.; Roda, Claudia; van Boxtel, Carla A.M.; Sleegers, P.J.C.

    2012-01-01

    The aim of this study is to test the effects of dynamically scaffolding social regulation of middle school students working in a computer-based learning environment. Dyads in the scaffolding condition (N = 56) are supported with computer-generated scaffolds and students in the control condition (N =

  1. Maintaining Traceability in an Evolving Distributed Computing Environment

    Science.gov (United States)

    Collier, I.; Wartel, R.

    2015-12-01

    The management of risk is fundamental to the operation of any distributed computing infrastructure. Identifying the cause of incidents is essential to prevent them from re-occurring. In addition, it is a goal to contain the impact of an incident while keeping services operational. For the response to an incident to be acceptable, it needs to be commensurate with the scale of the problem. The minimum level of traceability for distributed computing infrastructure usage is to be able to identify the source of all actions (executables, file transfers, pilot jobs, portal jobs, etc.) and the individual who initiated them. In addition, sufficiently fine-grained controls, such as blocking the originating user and monitoring to detect abnormal behaviour, are necessary for keeping services operational. It is essential to be able to understand the cause and to fix any problems before re-enabling access for the user. The aim is to be able to answer the basic questions who, what, where, and when concerning any incident. This requires retaining all relevant information, including timestamps and the digital identity of the user, sufficient to identify, for each service instance, every security event including at least the following: connect, authenticate, authorize (including identity changes) and disconnect. In traditional grid infrastructures (WLCG, EGI, OSG etc.) best practices and procedures for gathering and maintaining the information required to maintain traceability are well established. In particular, sites collect and store the information required to ensure traceability of events at their sites. With the increased use of virtualisation and of private and public clouds for HEP workloads, established procedures, which are unable to see 'inside' running virtual machines, no longer capture all the information required. Maintaining traceability will at least involve a shift of responsibility from sites to Virtual Organisations (VOs), bringing with it new requirements for their
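
    A minimal sketch of what one such traceability record might look like, covering the who/what/where/when fields listed above. The field names and identity format are illustrative choices, not a schema mandated by WLCG, EGI or OSG.

        import json
        import time
        import uuid

        def audit_event(action, identity, service, vm_id=None):
            """One traceability record answering who, what, where and when."""
            return {
                "event_id": str(uuid.uuid4()),
                "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                "action": action,        # connect/authenticate/authorize/disconnect
                "identity": identity,    # digital identity of the initiator
                "service": service,      # service instance where the event occurred
                "vm_id": vm_id,          # VO-side context for virtualised workloads
            }

        print(json.dumps(audit_event("authenticate",
                                     "/DC=org/DC=example/CN=Jane Doe",
                                     "ce01.example.org",
                                     vm_id="vm-42"), indent=2))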

  2. Degradation of lindane by a novel embedded bio-nano hybrid system in aqueous environment.

    Science.gov (United States)

    Salam, Jaseetha Abdul; Das, Nilanjana

    2015-03-01

    The objective of this study was to evaluate the effect of an embedded bio-nano hybrid system using nanoscale zinc oxide (n-ZnO) and the lindane-degrading yeast Candida VITJzN04 on lindane degradation. Nano-embedding of the yeast was done with chemically synthesized n-ZnO particles (50 mg/mL) and was visualized by atomic force microscopy (AFM) and scanning electron microscopy (SEM). Nanoparticles were embedded substantially on the surfaces of the yeast cells and translocated into the cell cytoplasm without causing any lethal effect to the cells at concentrations up to 50 mg/mL. Lindane (600 mg/L) degradation was studied in both the individual and hybrid systems. Rapid reductive dechlorination of lindane was attained with n-ZnO under illuminated conditions, with the generation of chlorobenzene and benzene as dechlorination products. The bio-nano hybrid was found to be more effective than the native yeasts for lindane degradation and resulted in complete removal within 3 days. The kinetic data analysis implied that the half-life of lindane was 9 h for the bio-nano hybrid and 28 h for Candida VITJzN04 alone. The enhanced lindane degradation by the bio-nano hybrid might be due to increased porosity and permeability of the yeast cell membrane, facilitating the easy entry of lindane into the cell cytoplasm, and to n-ZnO-mediated dechlorination. To the best of our knowledge, this report, for the first time, suggests the use of n-ZnO-mediated dechlorination of lindane and a novel bio-nano hybrid system that reduces the half-life to one third of the time taken by the yeast alone. The embedded bio-nano hybrid system may be exploited as an effective remediation tool for the treatment of lindane-contaminated wastewaters.
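
    The reported half-lives translate directly into rate constants via t1/2 = ln 2 / k. The short calculation below assumes simple first-order decay (an assumption, since the abstract does not state the kinetic model) and reproduces why three days suffice for the hybrid but not for the yeast alone.

        import math

        C0 = 600.0                       # initial lindane concentration, mg/L
        for label, t_half in [("bio-nano hybrid", 9.0), ("yeast alone", 28.0)]:
            k = math.log(2) / t_half     # first-order rate constant, 1/h
            c_72h = C0 * math.exp(-k * 72)   # residual after 3 days (72 h)
            print(f"{label}: k = {k:.3f} 1/h, residual after 3 days = {c_72h:.1f} mg/L")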

  3. Towards a Cloud Computing Environment: Near Real-time Cloud Product Processing and Distribution for Next Generation Satellites

    Science.gov (United States)

    Nguyen, L.; Chee, T.; Minnis, P.; Palikonda, R.; Smith, W. L., Jr.; Spangenberg, D.

    2016-12-01

    The NASA LaRC Satellite ClOud and Radiative Property retrieval System (SatCORPS) processes and derives near real-time (NRT) global cloud products from operational geostationary satellite imager datasets. These products are being used in NRT to improve forecast models, aircraft icing warnings, and support for aircraft field campaigns. Next generation satellites, such as the Japanese Himawari-8 and the upcoming NOAA GOES-R, present challenges for NRT data processing and product dissemination due to the increase in temporal and spatial resolution. The volume of data is expected to increase approximately tenfold. This increase in data volume will require additional IT resources to keep up with the processing demands to satisfy NRT requirements. In addition, these resources are not readily available due to cost and other technical limitations. To anticipate and meet these computing resource requirements, we have employed a hybrid cloud computing environment to augment the generation of SatCORPS products. This paper will describe the workflow to ingest, process, and distribute SatCORPS products and the technologies used. Lessons learned from working on both AWS Clouds and GovCloud will be discussed: benefits, similarities, and differences that could impact the decision to use cloud computing and storage. A detailed cost analysis will be presented. In addition, future cloud utilization, parallelization, and architecture layout will be discussed for GOES-R.

  4. Massive calculations of electrostatic potentials and structure maps of biopolymers in a distributed computing environment

    International Nuclear Information System (INIS)

    Akishina, T.P.; Ivanov, V.V.; Stepanenko, V.A.

    2013-01-01

    Among the key factors determining the processes of transcription and translation are the distributions of the electrostatic potentials of DNA, RNA and proteins. Calculations of electrostatic distributions and structure maps of biopolymers are time consuming and require large computational resources. We developed procedures for organizing massive calculations of electrostatic potentials and structure maps for biopolymers in a distributed computing environment (several thousand cores).
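
    The potential evaluation is embarrassingly parallel: each grid point is an independent Coulomb sum over the fixed charges, so the work distributes cleanly over any number of cores. A toy single-machine sketch with multiprocessing (random placeholder charges and illustrative units; the paper's production procedure is not reproduced here):

        import numpy as np
        from multiprocessing import Pool

        COULOMB_K = 332.06  # kcal*Angstrom/(mol*e^2), a common biomolecular unit choice

        # Toy "biopolymer": random point charges in a 50 A box (illustrative only).
        rng = np.random.default_rng(0)
        charges = rng.uniform(-1, 1, 500)
        positions = rng.uniform(0, 50, (500, 3))

        def potential_at(point):
            """Coulomb potential at one grid point from all point charges."""
            r = np.linalg.norm(positions - point, axis=1)
            return COULOMB_K * np.sum(charges / r)

        if __name__ == "__main__":
            # Grid points are independent, so the map parallelizes trivially;
            # the same structure scales out to thousands of cores with MPI.
            grid = [np.array([x, 25.0, 25.0]) for x in np.linspace(5, 45, 64)]
            with Pool() as pool:
                phi = pool.map(potential_at, grid)
            print(f"potential range: {min(phi):.1f} .. {max(phi):.1f}")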

  5. Application of Selective Algorithm for Effective Resource Provisioning in Cloud Computing Environment

    OpenAIRE

    Katyal, Mayanka; Mishra, Atul

    2014-01-01

    The continued modern-day demand for resource-hungry services and applications in the IT sector has led to the development of cloud computing. A cloud computing environment involves high-cost infrastructure on the one hand and the need for large-scale computational resources on the other. These resources need to be provisioned (allocated and scheduled) to the end users in the most efficient manner, so that the tremendous capabilities of the cloud are utilized effectively and efficiently. In this paper we discuss a selecti...

  6. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2015-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, MODIS, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. HySDS is a Hybrid-Cloud Science Data System that has been developed and applied under NASA AIST, MEaSUREs, and ACCESS grants. HySDS uses the SciFlow workflow engine to partition analysis workflows into parallel tasks (e.g. segmenting by time or space) that are pushed into a durable job queue. The tasks are "pulled" from the queue by worker Virtual Machines (VMs) and executed in an on-premise Cloud (Eucalyptus or OpenStack) or at Amazon in the public Cloud or GovCloud. In this way, years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the transferred data. We are using HySDS to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a MEaSUREs grant. We will present the architecture of HySDS, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. Our system demonstrates how one can pull A-Train variables (Levels 2 & 3) on-demand into the Amazon Cloud, and cache only those variables that are heavily used, so that any number of compute jobs can be
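
    The push/pull pattern at the heart of this design is easy to sketch: tasks partitioned by time go into a shared queue, and workers pull until it drains. The sketch below uses an in-process thread-safe queue as a stand-in for HySDS's durable job queue, and the per-task work is faked with a sleep.

        import queue
        import threading
        import time

        # A durable message broker would sit here in production; a thread-safe
        # in-process queue suffices to illustrate the push/pull pattern.
        jobs = queue.Queue()
        results = []

        def worker(worker_id):
            """Worker 'VM': pull tasks until the queue is drained."""
            while True:
                try:
                    time_range = jobs.get_nowait()
                except queue.Empty:
                    return
                time.sleep(0.01)   # stand-in for subsetting and comparing variables
                results.append((worker_id, time_range, "done"))
                jobs.task_done()

        # Partition a ten-year analysis by month and push each piece as a task.
        for year in range(2003, 2013):
            for month in range(1, 13):
                jobs.put((year, month))

        threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
        for t in threads: t.start()
        for t in threads: t.join()
        print(f"{len(results)} monthly tasks processed by {len(threads)} workers")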

  7. High performance computing environment for multidimensional image analysis.

    Science.gov (United States)

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-07-10

    The processing of images acquired through microscopy is a challenging task due to the large size of the datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact on microscopy applications. We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478x speedup. Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets.
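
    The decomposition idea can be demonstrated on one node: split the volume into slabs and filter each slab with a one-voxel ghost (halo) layer, which is exactly the data that would be exchanged between nearest-neighbour processors. A single-process sketch, with scipy standing in for the Blue Gene kernels:

        import numpy as np
        from scipy.ndimage import median_filter

        def median_filter_chunked(volume, size=3, n_chunks=4):
            """Filter a 3-D volume in slabs with ghost layers.

            Each slab could live on its own processor; only the thin ghost
            layers need to be exchanged with nearest neighbours.
            """
            halo = size // 2
            bounds = np.linspace(0, volume.shape[0], n_chunks + 1, dtype=int)
            out = np.empty_like(volume)
            for lo, hi in zip(bounds[:-1], bounds[1:]):
                glo, ghi = max(lo - halo, 0), min(hi + halo, volume.shape[0])
                filtered = median_filter(volume[glo:ghi], size=size)
                out[lo:hi] = filtered[lo - glo: lo - glo + (hi - lo)]
            return out

        vol = np.random.default_rng(0).random((32, 32, 32))
        # Slab-wise result matches the global filter exactly.
        assert np.allclose(median_filter_chunked(vol), median_filter(vol, size=3))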

  8. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    International Nuclear Information System (INIS)

    Mike Bockelie; Dave Swensen; Martin Denison

    2002-01-01

    This is the fifth Quarterly Technical Report for DOE Cooperative Agreement No. DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, our efforts have become focused on developing an improved workbench for simulating a gasifier-based Vision 21 energyplex. To provide for interoperability of models developed under Vision 21 and other DOE programs, discussions have been held with DOE and other organizations developing plant simulator tools to review the possibility of establishing a common software interface or protocol to use when developing component models. A component model that employs the CCA protocol has successfully been interfaced to our CCA-enabled workbench. To investigate the software protocol issue, DOE has selected a gasifier-based Vision 21 energyplex configuration for use in testing and evaluating the impacts of different software interface methods. A Memorandum of Understanding with the Cooperative Research Centre for Coal in Sustainable Development (CCSD) in Australia has been completed that will enable collaborative research efforts on gasification issues. Preliminary results have been obtained for a CFD model of a pilot-scale, entrained flow gasifier. A paper was presented at the Vision 21 Program Review Meeting at NETL (Morgantown) that summarized our accomplishments for Year One and plans for Year Two and Year Three.

  9. Mathematical Language Development and Talk Types in Computer Supported Collaborative Learning Environments

    Science.gov (United States)

    Symons, Duncan; Pierce, Robyn

    2015-01-01

    In this study we examine the use of cumulative and exploratory talk types in a year 5 computer supported collaborative learning environment. The focus for students in this environment was to participate in mathematical problem solving, with the intention of developing the proficiencies of problem solving and reasoning. Findings suggest that…

  10. Using a Cloud-Based Computing Environment to Support Teacher Training on Common Core Implementation

    Science.gov (United States)

    Robertson, Cory

    2013-01-01

    A cloud-based computing environment, Google Apps for Education (GAFE), has provided the Anaheim City School District (ACSD) a comprehensive and collaborative avenue for creating, sharing, and editing documents, calendars, and social networking communities. With this environment, teachers and district staff at ACSD are able to utilize the deep…

  11. Distributed multiscale computing with MUSCLE 2, the Multiscale Coupling Library and Environment

    NARCIS (Netherlands)

    Borgdorff, J.; Mamonski, M.; Bosak, B.; Kurowski, K.; Ben Belgacem, M.; Chopard, B.; Groen, D.; Coveney, P.V.; Hoekstra, A.G.

    2014-01-01

    We present the Multiscale Coupling Library and Environment: MUSCLE 2. This multiscale component-based execution environment has a simple-to-use Java, C++, C, Python and Fortran API, compatible with MPI, OpenMP and threading codes. We demonstrate its local and distributed computing capabilities and

  12. Touch in Computer-Mediated Environments: An Analysis of Online Shoppers' Touch-Interface User Experiences

    Science.gov (United States)

    Chung, Sorim

    2016-01-01

    Over the past few years, one of the most fundamental changes in current computer-mediated environments has been input devices, moving from mouse devices to touch interfaces. However, most studies of online retailing have not considered device environments as retail cues that could influence users' shopping behavior. In this research, I examine the…

  13. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB of major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class, 3000-core OpenStack cloud system and several highly connected, large-scale, high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated, reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  14. Touch in Computer-Mediated Environments: An Analysis of Online Shoppers’ Touch-Interface User Experiences

    OpenAIRE

    Chung, Sorim

    2016-01-01

    Over the past few years, one of the most fundamental changes in current computer-mediated environments has been input devices, moving from mouse devices to touch interfaces. However, most studies of online retailing have not considered device environments as retail cues that could influence users’ shopping behavior. In this research, I examine the underlying mechanisms between input device environments and shoppers’ decision-making processes. In particular, I investigate the impact of input d...

  15. A quadratic kernel for computing the hybridization number of multiple trees

    NARCIS (Netherlands)

    L.J.J. van Iersel (Leo); S. Linz

    2012-01-01

    It has recently been shown that the NP-hard problem of calculating the minimum number of hybridization events that is needed to explain a set of rooted binary phylogenetic trees by means of a hybridization network is fixed-parameter tractable if an instance of the problem consists of

  16. Image selection as a service for cloud computing environments

    KAUST Repository

    Filepp, Robert

    2010-12-01

    Customers of Cloud Services are expected to choose specific machine images to instantiate in order to host their workloads. Unfortunately very little information is provided to the users to enable them to make intelligent choices. We believe that as the number of images proliferates it will become increasingly difficult for users to decide effectively. Cloud service providers often allow their customers to instantiate standard system images, to modify their instances, and to store images of these customized instances for public or private future use. Storing modified instances as images enables customers to avoid re-provisioning and re-configuration of required resources thereby reducing their future costs. However Cloud service providers generally do not expose details regarding the configurations of the images in a rigorous canonical fashion nor offer services that assist clients in the best target image selection to support client transformation objectives. Rather, they allow customers to enter a free-form description of an image based on the client's best effort. This means in order to find a "best fit" image to instantiate, a human user must review potentially thousands of image descriptions, reading each description to evaluate its suitability as a platform to host their source application. Furthermore, the actual content of the selected image may differ greatly from its description. Finally, even images that have been customized and retained for future use may need additional provisioning and customization to accommodate specific needs. In this paper we propose a service that accumulates image configuration details in a canonical fashion and a further service that employs an algorithm to order images per best fit / least cost in conformance to user-specified policies. These services collectively facilitate workload transformation into enterprise cloud environments.
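
    As a sketch of what such an ordering service could do, the snippet below ranks canonical image records by fewest missing required packages, breaking ties on hourly cost. The catalogue entries, field names and policy are hypothetical; the paper's actual algorithm and schema are not specified in the abstract.

        # Hypothetical catalogue entries: the canonical configuration details the
        # paper argues providers should expose for each machine image.
        images = [
            {"id": "img-101", "os": "linux", "packages": {"python", "nginx"},
             "hourly_cost": 0.12},
            {"id": "img-102", "os": "linux", "packages": {"python", "mysql", "nginx"},
             "hourly_cost": 0.20},
            {"id": "img-203", "os": "windows", "packages": {"iis"},
             "hourly_cost": 0.35},
        ]

        def rank_images(required_os, required_packages, catalogue):
            """Order candidate images by fewest missing packages, then by cost."""
            candidates = [img for img in catalogue if img["os"] == required_os]
            def fit(img):
                missing = required_packages - img["packages"]
                return (len(missing), img["hourly_cost"])
            return sorted(candidates, key=fit)

        best = rank_images("linux", {"python", "mysql"}, images)
        print([img["id"] for img in best])   # img-102 first: no gaps, despite cost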

  17. Numerical methodologies for investigation of moderate-velocity flow using a hybrid computational fluid dynamics - molecular dynamics simulation approach

    International Nuclear Information System (INIS)

    Ko, Soon Heum; Kim, Na Yong; Nikitopoulos, Dimitris E.; Moldovan, Dorel; Jha, Shantenu

    2014-01-01

    Numerical approaches are presented to minimize the statistical errors inherently present due to finite sampling and the presence of thermal fluctuations in the molecular region of a hybrid computational fluid dynamics (CFD) - molecular dynamics (MD) flow solution. Near the fluid-solid interface the hybrid CFD-MD simulation approach provides a more accurate solution, especially in the presence of significant molecular-level phenomena, than traditional continuum-based simulation techniques. It also involves less computational cost than pure particle-based MD. Despite these advantages, the hybrid CFD-MD methodology has been applied mostly in flow studies at high velocities, mainly because of the higher statistical errors associated with low velocities. As an alternative to the costly increase of the size of the MD region to decrease statistical errors, we investigate a few numerical approaches that reduce the sampling noise of the solution at moderate velocities. These methods are based on sampling of multiple simulation replicas and linear regression of multiple spatial/temporal samples. We discuss the advantages and disadvantages of each technique in the perspective of solution accuracy and computational cost.
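
    The replica idea rests on a standard statistical fact: averaging N independent replicas shrinks the standard error of the estimated mean velocity by 1/sqrt(N). A toy demonstration follows; the signal and noise magnitudes are invented, merely chosen so that thermal fluctuations dwarf the mean flow, as at moderate velocities.

        import numpy as np

        rng = np.random.default_rng(0)
        TRUE_VELOCITY = 0.05     # a "moderate" mean flow, in reduced units
        THERMAL_NOISE = 1.0      # thermal fluctuations dwarf the signal

        def md_velocity_sample(n_steps=1000):
            """Stand-in for one MD bin average: signal buried in thermal noise."""
            return TRUE_VELOCITY + THERMAL_NOISE * rng.standard_normal(n_steps).mean()

        for n_replicas in (1, 4, 16, 64):
            estimates = [np.mean([md_velocity_sample() for _ in range(n_replicas)])
                         for _ in range(200)]
            print(f"{n_replicas:3d} replicas: std of estimate = {np.std(estimates):.4f}")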

  18. Hybrid MPI/OpenMP parallelization of the explicit Volterra integral equation solver for multi-core computer architectures

    KAUST Repository

    Al Jarro, Ahmed

    2011-08-01

    A hybrid MPI/OpenMP scheme for efficiently parallelizing the explicit marching-on-in-time (MOT)-based solution of the time-domain volume (Volterra) integral equation (TD-VIE) is presented. The proposed scheme equally distributes tested field values and operations pertinent to the computation of tested fields among the nodes using the MPI standard, while the source field values are stored on all nodes. Within each node, the OpenMP standard is used to further accelerate the computation of the tested fields. Numerical results demonstrate that the proposed parallelization scheme scales well for problems involving three million or more spatial discretization elements. © 2011 IEEE.
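
    A Python analogue of the two-level decomposition, assuming mpi4py is available and the script is launched under mpirun: MPI ranks own disjoint blocks of tested-field indices (with the source field replicated everywhere, as in the scheme above), and a thread pool plays the role of OpenMP inside each rank. The field evaluation is a stand-in, not the MOT kernel.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor
        from mpi4py import MPI   # assumes an MPI environment (mpirun -n <N>)

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        N_FIELDS = 4096          # assumed divisible by the number of ranks
        # Source field values replicated on every node, as in the scheme above.
        sources = np.random.default_rng(0).random(N_FIELDS)

        def tested_field(i):
            """Stand-in for one tested-field evaluation (a dense interaction sum)."""
            return float(np.sum(sources / (1.0 + np.abs(i - np.arange(N_FIELDS)))))

        # MPI level: each rank owns a contiguous block of tested-field indices.
        my_indices = range(rank * N_FIELDS // size, (rank + 1) * N_FIELDS // size)

        # Thread level (the OpenMP analogue): evaluate the block in parallel.
        with ThreadPoolExecutor(max_workers=4) as pool:
            my_values = list(pool.map(tested_field, my_indices))

        # Gather every rank's block so all nodes hold the full tested field.
        all_values = [v for block in comm.allgather(my_values) for v in block]
        if rank == 0:
            print(len(all_values), "tested-field values computed")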

  19. A PC/workstation cluster computing environment for reservoir engineering simulation applications

    International Nuclear Information System (INIS)

    Hermes, C.E.; Koo, J.

    1995-01-01

    Like the rest of the petroleum industry, Texaco has been transferring its applications and databases from mainframes to PCs and workstations. This transition has been very positive because it provides an environment for integrating applications, increases end-user productivity, and in general reduces overall computing costs. On the down side, the transition typically results in a dramatic increase in workstation purchases and raises concerns regarding the cost and effective management of computing resources in this new environment. The workstation transition also places the user in a Unix computing environment which, to say the least, can be quite frustrating to learn and to use. This paper describes the approach, philosophy, architecture, and current status of the new reservoir engineering/simulation computing environment developed at Texaco's E and P Technology Dept. (EPTD) in Houston. The environment is representative of those under development at several other large oil companies and is based on a cluster of IBM and Silicon Graphics Intl. (SGI) workstations connected by a fiber-optics communications network and engineering PCs connected to local area networks, or Ethernets. Because computing resources and software licenses are shared among a group of users, the new environment enables the company to get more out of its investments in workstation hardware and software

  20. Urbancontext: A Management Model For Pervasive Environments In User-Oriented Urban Computing

    Directory of Open Access Journals (Sweden)

    Claudia L. Zuniga-Canon

    2014-01-01

    Nowadays, urban computing has gained a lot of interest for guiding the evolution of cities into intelligent environments. These environments are appropriate for individuals' interactions, which change their behaviors. These changes require new approaches that allow an understanding of how urban computing systems should be modeled. In this work we present UrbanContext, a new model for the design of urban computing platforms that applies the theory of roles to manage the individual's context in urban environments. The theory of roles helps to understand an individual's behavior within a social environment, allowing urban computing systems to be modeled so that they can adapt to individuals' states and needs. UrbanContext collects data in urban atmospheres and classifies individuals' behaviors according to their changes of role, to optimize social interaction and offer secure services. Likewise, UrbanContext serves as a generic model to provide interoperability, and to facilitate the design, implementation and expansion of urban computing systems.

  1. A hybrid three-class brain-computer interface system utilizing SSSEPs and transient ERPs

    Science.gov (United States)

    Breitwieser, Christian; Pokorny, Christoph; Müller-Putz, Gernot R.

    2016-12-01

    Objective. This paper investigates the fusion of steady-state somatosensory evoked potentials (SSSEPs) and transient event-related potentials (tERPs), evoked through tactile stimulation of the left and right-hand fingertips, in a three-class EEG based hybrid brain-computer interface. It was hypothesized that fusing the input signals leads to higher classification rates than classifying tERP and SSSEP individually. Approach. Fourteen subjects participated in the studies, consisting of a screening paradigm to determine person-dependent resonance-like frequencies and a subsequent online paradigm. The whole setup of the BCI system was based on open interfaces, following suggestions for a common implementation platform. During the online experiment, subjects were instructed to focus their attention on the stimulated fingertips as indicated by a visual cue. The recorded data were classified during runtime using a multi-class shrinkage LDA classifier and the outputs were fused together applying a posterior probability based fusion. Data were further analyzed offline, involving a combined classification of SSSEP and tERP features as a second fusion principle. The final results were tested for statistical significance applying a repeated measures ANOVA. Main results. A significant classification increase was achieved when fusing the results with a combined classification compared to performing an individual classification. Furthermore, the SSSEP classifier was significantly better in detecting a non-control state, whereas the tERP classifier was significantly better in detecting control states. Subjects who had a higher relative band power increase during the screening session also achieved significantly higher classification results than subjects with a lower relative band power increase. Significance. It could be shown that utilizing SSSEP and tERP for hBCIs increases the classification accuracy and also that tERP and SSSEP are not classifying control- and non
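
    The posterior-probability fusion step can be illustrated with a product rule: assuming the two classifiers' outputs are approximately independent class posteriors, their renormalized element-wise product gives the fused decision. The three-class probability vectors below are invented for illustration; the paper's exact fusion formula may differ.

        import numpy as np

        def fuse_posteriors(p_a, p_b):
            """Naive-Bayes style fusion: renormalized product of class posteriors."""
            fused = np.asarray(p_a) * np.asarray(p_b)
            return fused / fused.sum()

        # Three classes, e.g. left hand, right hand, non-control (made-up numbers).
        p_sssep = [0.50, 0.20, 0.30]    # posteriors from the SSSEP classifier
        p_terp = [0.60, 0.25, 0.15]     # posteriors from the tERP classifier
        print(fuse_posteriors(p_sssep, p_terp))   # fused vote favours class 0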

  2. A hybrid agent-based computational economics and optimization approach for supplier selection problem

    Directory of Open Access Journals (Sweden)

    Zahra Pourabdollahi

    2017-12-01

    The supplier evaluation and selection problem is among the most important logistics decisions and has been addressed extensively in supply chain management. The same decision is also important in freight transportation, since it identifies trade relationships between business establishments and determines commodity flows between production and consumption points. The commodity flows are then used as input to freight transportation models to determine cargo movements and their characteristics, including mode choice and shipment size. Various approaches have been proposed to explore this latter problem in previous studies. Traditionally, potential suppliers are evaluated and selected using only price/cost as the influential criterion and the state-of-practice methods. This paper introduces a hybrid agent-based computational economics and optimization approach for supplier selection. The proposed model combines an agent-based multi-criteria supplier evaluation approach with a multi-objective optimization model to capture both the behavioral and economic aspects of the supplier selection process. The model uses a system of ordered response models to determine the importance weights of the different criteria in supplier evaluation from a buyer's point of view. The estimated weights are then used to calculate a utility for each potential supplier in the market and rank them. The calculated utilities are then entered into a mathematical programming model in which the best suppliers are selected by maximizing the total accrued utility for all buyers and minimizing total shipping costs, while balancing the capacity of potential suppliers to ensure market clearing mechanisms. The proposed model, herein, was implemented under an operational agent-based supply chain and freight transportation framework for the Chicago Metropolitan Area.

  3. A hybrid computational model to explore the topological characteristics of epithelial tissues.

    Science.gov (United States)

    González-Valverde, Ismael; García-Aznar, José Manuel

    2017-11-01

    Epithelial tissues show a particular topology where cells resemble a polygon-like shape, but some biological processes can alter this tissue topology. During cell proliferation, mitotic cell dilation deforms the tissue and modifies the tissue topology. Additionally, cells are reorganized in the epithelial layer and these rearrangements also alter the polygon distribution. We present here a computer-based hybrid framework focused on the simulation of epithelial layer dynamics that combines discrete and continuum numerical models. In this framework, we consider topological and mechanical aspects of the epithelial tissue. Individual cells in the tissue are simulated by an off-lattice agent-based model, which keeps the information of each cell. In addition, we model the cell-cell interaction forces and the cell cycle. Separately, we simulate the passive mechanical behaviour of the cell monolayer using a material that approximates the mechanical properties of the cell. This continuum approach is solved by the finite element method, which uses a dynamic mesh generated by the triangulation of cell polygons. Forces generated by cell-cell interaction in the agent-based model are also applied on the finite element mesh. Cell movement in the agent-based model is driven by the displacements obtained from the deformed finite element mesh of the continuum mechanical approach. We successfully compare the results of our simulations with experiments on the topology of proliferating epithelial tissues in Drosophila. Our framework is able to model the emergent behaviour of the cell monolayer that is due to local cell-cell interactions, which have a direct influence on the dynamics of the epithelial tissue. Copyright © 2017 John Wiley & Sons, Ltd.

  4. Dynamic modelling of an adsorption storage tank using a hybrid approach combining computational fluid dynamics and process simulation

    Science.gov (United States)

    Mota, J.P.B.; Esteves, I.A.A.C.; Rostam-Abadi, M.

    2004-01-01

    A computational fluid dynamics (CFD) software package has been coupled with the dynamic process simulator of an adsorption storage tank for methane fuelled vehicles. The two solvers run as independent processes and handle non-overlapping portions of the computational domain. The codes exchange data on the boundary interface of the two domains to ensure continuity of the solution and of its gradient. A software interface was developed to dynamically suspend and activate each process as necessary, and be responsible for data exchange and process synchronization. This hybrid computational tool has been successfully employed to accurately simulate the discharge of a new tank design and evaluate its performance. The case study presented here shows that CFD and process simulation are highly complementary computational tools, and that there are clear benefits to be gained from a close integration of the two. © 2004 Elsevier Ltd. All rights reserved.

  5. Noise Threshold and Resource Cost of Fault-Tolerant Quantum Computing with Majorana Fermions in Hybrid Systems.

    Science.gov (United States)

    Li, Ying

    2016-09-16

    Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.

  6. Enhancing Security by System-Level Virtualization in Cloud Computing Environments

    Science.gov (United States)

    Sun, Dawei; Chang, Guiran; Tan, Chunguang; Wang, Xingwei

    Many trends are opening up the era of cloud computing, which will reshape the IT industry. Virtualization techniques have become an indispensable ingredient of almost all cloud computing systems. Through virtual environments, a cloud provider is able to run the variety of operating systems needed by each cloud user. Virtualization can improve the reliability, security, and availability of applications by using consolidation, isolation, and fault tolerance. In addition, it is possible to balance workloads by using live migration techniques. In this paper, the definition of cloud computing is given, and then the service and deployment models are introduced. An analysis of security issues and challenges in the implementation of cloud computing is presented. Moreover, a system-level virtualization case is established to enhance the security of cloud computing environments.

  7. A Scheme for Verification on Data Integrity in Mobile Multicloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Laicheng Cao

    2016-01-01

    In order to verify data integrity in a mobile multicloud computing environment, an MMCDIV (mobile multicloud data integrity verification) scheme is proposed. First, the computability and nondegeneracy of verification are obtained by adopting the BLS (Boneh-Lynn-Shacham) short signature scheme. Second, communication overhead is reduced based on HVR (Homomorphic Verifiable Response) with random masking and sMHT (sequence-enforced Merkle hash tree) construction. Finally, considering the resource constraints of mobile devices, data integrity is verified with lightweight computing and low data transmission. The scheme compensates for the limited communication and computing power of mobile devices, supports dynamic data operations in a mobile multicloud environment, and verifies data integrity without using the source file blocks directly. Experimental results also demonstrate that this scheme achieves a lower cost of computing and communications.
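
    The sequence-enforced Merkle hash tree can be sketched as a standard Merkle tree whose leaves bind each block to its position, so that re-ordering blocks changes the root. This illustrates the general idea only, not the paper's exact sMHT definition.

        import hashlib

        def sha256(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def merkle_root(blocks):
            """Root hash of a position-binding Merkle tree over file blocks.

            Each leaf hashes the block index together with its content, so
            re-ordering blocks changes the root (the 'sequence-enforced' idea).
            """
            level = [sha256(i.to_bytes(8, "big") + blk)
                     for i, blk in enumerate(blocks)]
            while len(level) > 1:
                if len(level) % 2:                 # duplicate last node if odd
                    level.append(level[-1])
                level = [sha256(level[i] + level[i + 1])
                         for i in range(0, len(level), 2)]
            return level[0]

        blocks = [b"block-0", b"block-1", b"block-2"]
        root = merkle_root(blocks)
        # Tampering with or re-ordering any block is detectable from the root.
        assert merkle_root([b"block-0", b"block-2", b"block-1"]) != root
        print(root.hex())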

  8. Assessing the Therapeutic Environment in Hybrid Models of Treatment: Prisoner Perceptions of Staff

    Science.gov (United States)

    Kubiak, Sheryl Pimlott

    2009-01-01

    Hybrid treatment models within prisons are staffed by both criminal justice and treatment professionals. Because these models may be indicative of future trends, examining the perceptions of prisoners/participants may provide important information. This study examines the perceptions of male and female inmates in three prisons, comparing those in…

  9. College Faculty Understanding of Hybrid Teaching Environments and Their Levels of Trainability by Departments

    Science.gov (United States)

    Martinucci, Kenneth P.; Stein, Daniel; Wittmann, Helen C.; Morote, Elsa-Sofia

    2015-01-01

    We explored whether the knowledge of hybrid teaching (conceptions) or incorrect knowledge (misconceptions) or lack of knowledge differed among faculty from various teaching areas--education, social sciences, business, art and humanities, and math and sciences--in New York. One hundred twenty-eight faculty members responded to a test of their…

  10. Mineral content in grains of seven food-grade sorghum hybrids grown in Mediterranean environment

    Science.gov (United States)

    Sorghum is a major crop used for food, feed and industrial purposes worldwide. The objective of this study was to determine the mineral content in grains of seven white food-grade sorghum hybrids bred and adapted for growth in the central USA and grown in a Mediterranean area of Southern Italy. The ...

  11. Beyond the photocopy machine : Document delivery in a hybrid library environment

    NARCIS (Netherlands)

    Dekker, R.; Waaijers, L.

    2001-01-01

    Document delivery bridges the gap between where the customer is and where the document is. Libraries have to offer user-friendly access to hybrid collections, and design and implement document delivery mechanisms from paper originals to provide a seamless integration between delivery from electronic

  12. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, Benny Manuel [Los Alamos National Laboratory]; Ballance, Robert [SNL]; Haskell, Karen [SNL]

    2012-08-09

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

  13. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    Science.gov (United States)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special-purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large-scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations can be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that
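
    In its simplest form, run-time autotuning is an empirical search: time a kernel under each candidate configuration and keep the fastest. The sketch below tunes the block size of a toy blocked stencil sweep; the kernel, candidate list and timing protocol are illustrative only and unrelated to CUSH's actual interface.

        import time
        import numpy as np

        def stencil_pass(grid, block):
            """Blocked 5-point stencil sweep; the block size is the tunable knob."""
            out = np.zeros_like(grid)
            n = grid.shape[0]
            for i0 in range(1, n - 1, block):
                for j0 in range(1, n - 1, block):
                    i1, j1 = min(i0 + block, n - 1), min(j0 + block, n - 1)
                    out[i0:i1, j0:j1] = 0.25 * (grid[i0-1:i1-1, j0:j1] +
                                                grid[i0+1:i1+1, j0:j1] +
                                                grid[i0:i1, j0-1:j1-1] +
                                                grid[i0:i1, j0+1:j1+1])
            return out

        grid = np.random.default_rng(0).random((1024, 1024))
        timings = {}
        for block in (16, 32, 64, 128, 256):      # candidate configurations
            t0 = time.perf_counter()
            stencil_pass(grid, block)
            timings[block] = time.perf_counter() - t0
        best = min(timings, key=timings.get)
        print(f"selected block size {best} ({timings[best]*1e3:.1f} ms)")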

  14. A visualization environment for supercomputing-based applications in computational mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Pavlakos, C.J.; Schoof, L.A.; Mareda, J.F.

    1993-06-01

    In this paper, we characterize a visualization environment that has been designed and prototyped for a large community of scientists and engineers, with an emphasis on supercomputing-based computational mechanics. The proposed environment makes use of a visualization server concept to provide effective, interactive visualization to the user's desktop. Benefits of using the visualization server approach are discussed. Some thoughts regarding desirable features for visualization server hardware architectures are also addressed. A brief discussion of the software environment is included. The paper concludes by summarizing certain observations which we have made regarding the implementation of such visualization environments.

  15. Advanced quadrature sets and acceleration and preconditioning techniques for the discrete ordinates method in parallel computing environments

    Science.gov (United States)

    Longoni, Gianluca

    In the nuclear science and engineering field, radiation transport calculations play a key role in the design and optimization of nuclear devices. The linear Boltzmann equation describes the angular, energy and spatial variations of the particle or radiation distribution. The discrete ordinates method (SN) is the most widely used technique for solving the linear Boltzmann equation. However, for realistic problems, the memory and computing-time requirements demand the use of supercomputers. This research is devoted to the development of new formulations for the SN method, especially for highly angular dependent problems, in parallel environments. The present research work addresses two main issues affecting the accuracy and performance of SN transport theory methods: quadrature sets and acceleration techniques. New advanced quadrature techniques which allow for large numbers of angles with a capability for local angular refinement have been developed. These techniques have been integrated into the 3-D SN PENTRAN (Parallel Environment Neutral-particle TRANsport) code and applied to highly angular dependent problems, such as CT-scan devices, which are widely used to obtain detailed 3-D images for industrial/medical applications. In addition, the accurate simulation of core physics and shielding problems with strong heterogeneities and transport effects requires the numerical solution of the transport equation. In general, the convergence rate of the solution methods for the transport equation is reduced for large problems with optically thick regions and scattering ratios approaching unity. To remedy this situation, new acceleration algorithms based on the Even-Parity Simplified SN (EP-SSN) method have been developed. A new stand-alone code system, PENSSn (Parallel Environment Neutral-particle Simplified SN), has been developed based on the EP-SSN method. The code is designed for parallel computing environments with spatial, angular and hybrid (spatial/angular) domain

  16. An extended Intelligent Water Drops algorithm for workflow scheduling in cloud computing environment

    Directory of Open Access Journals (Sweden)

    Shaymaa Elsherbiny

    2018-03-01

    Cloud computing is emerging as a high performance computing environment with a large-scale, heterogeneous collection of autonomous systems and a flexible computational architecture. Many resource management methods may enhance the efficiency of the whole cloud computing system. The key part of cloud computing resource management is resource scheduling. Optimized scheduling of tasks on the cloud virtual machines is an NP-hard problem, and many algorithms have been presented to solve it. The variations among these schedulers are due to the fact that their scheduling strategies are adapted to the changing environment and the types of tasks. The focus of this paper is on workflow scheduling in cloud computing, which is gaining a lot of attention recently because workflows have emerged as a paradigm to represent complex computing problems. We propose a novel algorithm extending the nature-inspired Intelligent Water Drops (IWD) algorithm that optimizes the scheduling of workflows on the cloud. The proposed algorithm is implemented and embedded within a workflow simulation toolkit and tested in different simulated cloud environments with different cost models. Our algorithm showed noticeable enhancements over the classical workflow scheduling algorithms. We made a comparison between the proposed IWD-based algorithm and other well-known scheduling algorithms, including MIN-MIN, MAX-MIN, Round Robin, FCFS, MCT, PSO and C-PSO, where the proposed algorithm presented noticeable enhancements in performance and cost in most situations.
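
    One of the baselines named above, Min-Min, is compact enough to sketch: repeatedly assign the pending task whose earliest completion time across all VMs is smallest. The task lengths and VM speeds below are invented for illustration; the IWD algorithm itself involves more machinery (soil and velocity updates) than fits a short sketch.

        # Min-Min: repeatedly pick the (task, VM) pair with the smallest earliest
        # completion time -- one of the baselines the IWD scheduler is compared to.
        tasks = {"t1": 400, "t2": 150, "t3": 900, "t4": 300}   # instructions
        vm_speed = {"vm1": 10, "vm2": 25}                      # instructions/sec
        vm_ready = {vm: 0.0 for vm in vm_speed}                # next free time

        schedule = []
        pending = dict(tasks)
        while pending:
            # Earliest completion time for every remaining (task, VM) pair.
            ect = {(t, v): vm_ready[v] + length / vm_speed[v]
                   for t, length in pending.items() for v in vm_speed}
            (task, vm), finish = min(ect.items(), key=lambda kv: kv[1])
            vm_ready[vm] = finish
            schedule.append((task, vm, round(finish, 1)))
            del pending[task]

        print(schedule)       # makespan = max VM ready time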

  17. BEAM: A computational workflow system for managing and modeling material characterization data in HPC environments

    Energy Technology Data Exchange (ETDEWEB)

    Lingerfelt, Eric J [ORNL; Endeve, Eirik [ORNL; Ovchinnikov, Oleg S [ORNL; Borreguero Calvo, Jose M [ORNL; Park, Byung H [ORNL; Archibald, Richard K [ORNL; Symons, Christopher T [ORNL; Kalinin, Sergei V [ORNL; Messer, Bronson [ORNL; Shankar, Mallikarjun [ORNL; Jesse, Stephen [ORNL

    2016-01-01

    Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales and in many spectroscopic modes, and now, with the rise of multimodal acquisition systems and the associated processing capability, the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges best summarized as a necessity for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides material scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation via an intuitive, cross-platform client user interface. This framework delivers authenticated, push-button execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing the converged compute-and-data infrastructure at Oak Ridge National Laboratory's (ORNL) Compute and Data Environment for Science (CADES) and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF). In this work we address the underlying HPC needs for characterization in the material science community, elaborate how BEAM's design and infrastructure tackle those needs, and present a small subset of user cases where scientists utilized BEAM across a broad range of analytical techniques and analysis modes.

  18. Simultaneous detection of P300 and steady-state visually evoked potentials for hybrid brain-computer interface.

    Science.gov (United States)

    Combaz, Adrien; Van Hulle, Marc M

    2015-01-01

    We study the feasibility of a hybrid Brain-Computer Interface (BCI) combining simultaneous visual oddball and Steady-State Visually Evoked Potential (SSVEP) paradigms, where both types of stimuli are superimposed on a computer screen. Potentially, such a combination could result in a system able to operate faster than a purely P300-based BCI and to encode more targets than a purely SSVEP-based BCI. We analyse the interactions between the brain responses of the two paradigms and assess the possibility of detecting the brain activity evoked by both paradigms simultaneously, in a series of three experiments in which EEG data are analysed offline. Despite differences in the shape of the P300 response between the pure oddball and hybrid conditions, we observe that the classification accuracy of this P300 response is not affected by the SSVEP stimulation. Nor do we observe any effect of the oddball stimulation on the power of the SSVEP response at the stimulation frequency. Finally, results from the last experiment show the possibility of detecting both types of brain responses simultaneously and suggest not only the feasibility of such a hybrid BCI but also a gain over pure oddball- and pure SSVEP-based BCIs in terms of communication rate.
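
    The abstract does not detail the authors' detection pipeline, so the following Python fragment only illustrates the SSVEP half of such a system: scoring the spectral power at each candidate stimulation frequency and its harmonics, then picking the best-scoring target. The function names, the windowing choice and the harmonic count are assumptions for illustration.

      import numpy as np

      def ssvep_power(eeg, fs, f_stim, harmonics=2):
          # power at the stimulation frequency and its harmonics (one EEG channel)
          windowed = eeg * np.hanning(len(eeg))
          spectrum = np.abs(np.fft.rfft(windowed)) ** 2
          freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
          return sum(spectrum[np.argmin(np.abs(freqs - k * f_stim))]
                     for k in range(1, harmonics + 1))

      def detect_target(eeg, fs, stim_freqs):
          # attended target = candidate frequency with the strongest evoked power
          return max(stim_freqs, key=lambda f: ssvep_power(eeg, fs, f))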

  19. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Energy Technology Data Exchange (ETDEWEB)

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice-by-slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. A tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm

  20. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    International Nuclear Information System (INIS)

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing; Zhang, Jianwei

    2015-01-01

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice-by-slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. A tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm
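
    As a concrete illustration of the slice-by-slice propagation idea (not the authors' specific hybrid energy functional), the Python sketch below uses scikit-image's morphological geodesic active contour as a stand-in level set, seeding each slice with the converged mask of the previous one; exact scikit-image parameter names vary between versions, so positional arguments are used for the iteration count.

      import numpy as np
      from skimage.segmentation import (inverse_gaussian_gradient,
                                        morphological_geodesic_active_contour)

      def segment_tooth(volume, seed_slice, seed_mask, n_iter=100):
          # propagate a 2-D tooth contour through the axial slices of a CT volume
          masks = {seed_slice: seed_mask.astype(np.int8)}
          prev = masks[seed_slice]
          for k in range(seed_slice + 1, volume.shape[0]):
              gimg = inverse_gaussian_gradient(volume[k].astype(float))  # edge map
              prev = morphological_geodesic_active_contour(gimg, n_iter, prev)
              if prev.sum() == 0:      # contour vanished: past the tooth root
                  break
              masks[k] = prev          # converged mask seeds the next slice
          return masks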

  1. Bridging Social and Semantic Computing - Design and Evaluation of User Interfaces for Hybrid Systems

    Science.gov (United States)

    Bostandjiev, Svetlin Alex I.

    2012-01-01

    The evolution of the Web brought new interesting problems to computer scientists that we loosely classify in the fields of social and semantic computing. Social computing is related to two major paradigms: computations carried out by a large amount of people in a collective intelligence fashion (i.e. wikis), and performing computations on social…

  2. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    Science.gov (United States)

    Chiadamrong, N.; Piyathanavong, V.

    2017-12-01

    Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that run until the difference between subsequent solutions satisfies a pre-determined termination criterion. The effectiveness of the proposed approach is illustrated by an example, which yields near-optimal results with a much shorter solving time than the conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.
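
    The iterative analytical/simulation hand-off can be sketched generically. In the Python fragment below, solve_lp and simulate are placeholders for the paper's mixed-integer program and discrete-event model, and the convergence test on shared parameters is an assumed, simplified version of the paper's termination criteria.

      def hybrid_optimize(solve_lp, simulate, init_params, tol=1e-3, max_iter=20):
          # alternate between the deterministic model and the simulation
          params, design = dict(init_params), None
          for _ in range(max_iter):
              design = solve_lp(params)       # optimistic analytical solution
              new_params = simulate(design)   # re-estimate under uncertainty
              gap = max(abs(new_params[k] - params[k]) for k in params)
              params = new_params
              if gap < tol:                   # subsequent solutions agree
                  break
          return design, params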

  3. An Advanced Environment for Hybrid Modeling of Biological Systems Based on Modelica

    Directory of Open Access Journals (Sweden)

    Proß Sabrina

    2011-03-01

    Biological systems are often very complex so that an appropriate formalism is needed for modeling their behavior. Hybrid Petri Nets, consisting of time-discrete Petri Net elements as well as continuous ones, have proven to be ideal for this task. Therefore, a new Petri Net library was implemented based on the object-oriented modeling language Modelica which allows the modeling of discrete, stochastic and continuous Petri Net elements by differential, algebraic and discrete equations. An appropriate Modelica-tool performs the hybrid simulation with discrete events and the solution of continuous differential equations. A special sub-library contains so-called wrappers for specific reactions to simplify the modeling process.
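
    Although the library itself is written in Modelica, the essence of a hybrid Petri net step, integrating continuous places while firing discrete transitions on events, can be shown in a few lines. The Python sketch below is a deliberately minimal toy (explicit Euler integration, instantaneous firing), not the library's solver.

      def simulate_hybrid_pn(marking, cont_rates, transitions, t_end, dt=0.01):
          # marking: dict place -> token level (continuous places hold reals)
          # cont_rates(marking) -> dict place -> d(level)/dt from continuous transitions
          # transitions: list of (pre_place, post_place, tokens_needed) discrete arcs
          t = 0.0
          while t < t_end:
              for p, rate in cont_rates(marking).items():
                  marking[p] = max(0.0, marking[p] + rate * dt)   # Euler step
              for pre, post, need in transitions:
                  if marking[pre] >= need:       # enabled: fire immediately
                      marking[pre] -= need
                      marking[post] += need
              t += dt
          return marking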

  4. Silica- and silylated europium-based luminescent hybrids: new analysis tools for biological environments

    International Nuclear Information System (INIS)

    Pereira Duarte, Adriana

    2012-01-01

    The association of the very interesting luminescence properties of lanthanide chelates with the physicochemical properties of an inorganic matrix such as silica is a promising way to obtain new probes or luminescent markers for biological analyses. With this in mind, this work focuses on the preparation of new hybrid materials based on the grafting of new europium(III) complexes onto silica nanoparticles. These europium complexes were developed in our group using bifunctional ligands containing both complexing and grafting sites. This intrinsic characteristic of the ligands gives us the ability to form a covalent bond between the material surface and the complex. Two different methodologies were used; the first is a direct grafting reaction involving the complex and silica nanoparticles (i.e. dense or meso-porous particles). The second is the Stoeber reaction, where the SiO2 nanoparticles are prepared in the presence of the europium complex. The latter methodology presents an additional difficulty: because of the presence of the silylated europium complex, it requires closer control of the physicochemical conditions. The new organic-inorganic hybrid materials obtained in this work present an interesting luminescence behavior that depends on the localization of the europium complex, i.e. on the surface or within the nanoparticles. In addition, the obtained hybrids have nanometric dimensions and the complex is not leachable. Analyses were performed to describe the luminescence properties, as well as the surface and structural characteristics. Initial results show that the new hybrids are promising candidates for luminescent bio-markers, particularly for time-resolved analysis. (author)

  5. Integration of Computed Tomography and Three-Dimensional Echocardiography for Hybrid Three-Dimensional Printing in Congenital Heart Disease.

    Science.gov (United States)

    Gosnell, Jordan; Pietila, Todd; Samuel, Bennett P; Kurup, Harikrishnan K N; Haw, Marcus P; Vettukattil, Joseph J

    2016-12-01

    Three-dimensional (3D) printing is an emerging technology aiding diagnostics, education, and interventional and surgical planning in congenital heart disease (CHD). Three-dimensional printed models have been derived from computed tomography, cardiac magnetic resonance, and 3D echocardiography. However, individually these imaging modalities may not provide adequate visualization of complex CHD. Integrating the strengths of two or more imaging modalities has the potential to enhance visualization of cardiac pathomorphology. We describe the feasibility of hybrid 3D printing from two imaging modalities in a patient with congenitally corrected transposition of the great arteries (L-TGA). Hybrid 3D printing may be useful as an additional tool for cardiologists and cardiothoracic surgeons in planning interventions in children and adults with CHD.

  6. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment

    Science.gov (United States)

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in a cloud computing environment has been proven to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving a task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing. PMID:28467505

  7. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment.

    Science.gov (United States)

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in a cloud computing environment has been proven to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving a task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
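
    Of the six heuristics compared, Min-min is representative and compact enough to sketch. The Python fragment below is a textbook Min-min over an expected-time-to-compute (ETC) matrix, not the authors' exact implementation; swapping the final selection from min to max gives Max-min, and ranking tasks by the gap between their best and second-best completion times gives Sufferage.

      def min_min(etc):
          # etc[t][m]: expected time to compute task t on machine m
          n_tasks, n_machines = len(etc), len(etc[0])
          ready = [0.0] * n_machines          # machine-available times
          unscheduled = set(range(n_tasks))
          schedule = {}
          while unscheduled:
              # best machine (earliest completion) for every unscheduled task
              best = {t: min(range(n_machines), key=lambda m: ready[m] + etc[t][m])
                      for t in unscheduled}
              # Min-min: schedule the task whose best completion time is smallest
              t = min(unscheduled, key=lambda t: ready[best[t]] + etc[t][best[t]])
              ready[best[t]] += etc[t][best[t]]
              schedule[t] = best[t]
              unscheduled.remove(t)
          return schedule, max(ready)         # placement and makespan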

  8. A Multi-Compartment Hybrid Computational Model Predicts Key Roles for Dendritic Cells in Tuberculosis Infection

    Directory of Open Access Journals (Sweden)

    Simeone Marino

    2016-10-01

    Tuberculosis (TB) is a worldwide health problem, with approximately 2 billion people infected with Mycobacterium tuberculosis (Mtb), the causative bacterium of TB. The pathologic hallmark of Mtb infection in humans and Non-Human Primates (NHPs) is the formation of spherical structures, primarily in lungs, called granulomas. Infection occurs after inhalation of bacteria into lungs, where resident antigen-presenting cells (APCs) take up bacteria and initiate the immune response to Mtb infection. APCs traffic from the site of infection (lung) to lung-draining lymph nodes (LNs), where they prime T cells to recognize Mtb. These T cells, circulating back through blood, migrate back to lungs to perform their immune effector functions. We have previously developed a hybrid agent-based model (ABM), labeled GranSim, describing in silico immune cell, bacterial (Mtb) and molecular behaviors during tuberculosis infection, and recently linked that model to operate across three physiological compartments: lung (infection site where granulomas form), lung-draining lymph node (LN, site of generation of adaptive immunity) and blood (a measurable compartment). Granuloma formation and function is captured by a spatio-temporal model (i.e., the ABM), while the LN and blood compartments represent temporal dynamics of the whole body in response to infection and are captured with ordinary differential equations (ODEs). In order to have a more mechanistic representation of APC trafficking from the lung to the lymph node, and to better capture antigen presentation in a draining LN, the current study incorporates the role of dendritic cells (DCs) computationally into GranSim. Results: The model was calibrated using experimental data from the lungs and blood of NHPs. The addition of DCs allowed us to investigate in greater detail mechanisms of recruitment, trafficking and antigen presentation and their role in tuberculosis infection. Conclusion: The main conclusion of this study is
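
    GranSim's equations and parameters are not given in this record, but the general ABM-ODE coupling pattern it describes can be sketched. In the Python fragment below, abm is a placeholder object for the spatial granuloma model, the two-state ODE stands in for the much richer LN/blood dynamics, and all rate names are illustrative assumptions.

      import numpy as np
      from scipy.integrate import solve_ivp

      def ln_blood_odes(t, y, k_prime, k_mig, k_death):
          # toy compartments: y[0] = primed T cells in LN, y[1] = effector T cells in blood
          primed, blood = y
          return [k_prime - k_mig * primed,
                  k_mig * primed - k_death * blood]

      def hybrid_step(abm, y, dt, p):
          dc_arrivals = abm.step(dt)          # spatial ABM advances; DCs reach the LN
          k_prime = p["prime_per_dc"] * dc_arrivals / dt
          sol = solve_ivp(ln_blood_odes, (0.0, dt), y,
                          args=(k_prime, p["k_mig"], p["k_death"]))
          y = sol.y[:, -1]                    # updated LN/blood state
          abm.recruit_t_cells(p["recruit_per_cell"] * y[1] * dt)  # feedback to the lung
          return y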

  9. Research on Digital Forensic Readiness Design in a Cloud Computing-Based Smart Work Environment

    Directory of Open Access Journals (Sweden)

    Sangho Park

    2018-04-01

    Recently, the work environments of organizations have been transitioning into smart work environments by applying cloud computing technology to the existing work environment. The smart work environment allows access to information assets inside the company from outside through cloud computing technology, sharing of information without restriction on location by using mobile terminals, and a work environment where work can be conducted effectively in various locations and mobile environments. Thus, in the cloud computing-based smart work environment, changes are occurring in security risks, such as an increased risk of leakage of an organization's information assets through mobile terminals, which are prone to loss and theft, and an increased risk of hacking of wireless networks in mobile environments. Given these changes in security risk, the reactive digital forensic method, which investigates digital evidence after the occurrence of security incidents, appears to have reached its limits, which has led to a rise in the necessity of proactive digital forensic approaches wherein security incidents can be addressed preemptively. Accordingly, in this research, we design a digital forensic readiness model at the level of preemptive prevention by considering changes in the cloud computing-based smart work environment. Firstly, we investigate previous research related to the cloud computing-based smart work environment and digital forensic readiness and analyze a total of 50 components of digital forensic readiness. In addition, through the analysis of the corresponding preceding research, we design seven detailed areas, namely, outside the organization environment, within the organization guideline, system information, terminal information, user information, usage information, and additional function. Then, we design a draft of the digital forensic readiness model in the cloud

  10. COMPUTATIONAL MODELS USED FOR MINIMIZING THE NEGATIVE IMPACT OF ENERGY ON THE ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Oprea D.

    2012-04-01

    Optimizing energy systems is a problem that has been studied extensively for many years by scientists. The problem can be approached from different viewpoints and with different computer programs. This work characterizes one of the calculation methods used in Europe for modelling and optimizing power systems, a method based on reducing the impact of the energy system on the environment. The computer program used and characterized in this article is GEMIS.

  11. ReputationPro: The Efficient Approaches to Contextual Transaction Trust Computation in E-Commerce Environments

    OpenAIRE

    Zhang, Haibin; Wang, Yan; Zhang, Xiuzhen; Lim, Ee-Peng

    2013-01-01

    In e-commerce environments, the trustworthiness of a seller is utterly important to potential buyers, especially when the seller is unknown to them. Most existing trust evaluation models compute a single value to reflect the general trust level of a seller without taking any transaction context information into account. In this paper, we first present a trust vector consisting of three values for Contextual Transaction Trust (CTT). In the computation of the three CTT values, the identified three ...

  12. Improving Quality of Service and Reducing Power Consumption with WAN accelerator in Cloud Computing Environments

    OpenAIRE

    Shin-ichi Kuribayashi

    2013-01-01

    The widespread use of cloud computing services is expected to degrade the Quality of Service and to increase the power consumption of ICT devices, since the distance to a server becomes longer than before. Migration of virtual machines over a wide area can solve many problems such as load balancing and power saving in cloud computing environments. This paper proposes to dynamically apply a WAN accelerator within the network when a virtual machine is moved to a distant center, in order to preve...

  13. Nuclides.net: A computational environment for nuclear data and applications in radioprotection and radioecology

    International Nuclear Information System (INIS)

    Berthou, V.; Galy, J.; Leutzenkirchen, K.

    2004-01-01

    An interactive multimedia tool, Nuclides.net, has been developed at the Institute for Transuranium Elements. The Nuclides.net 'integrated environment' is a suite of computer programs ranging from a powerful user-friendly interface, which allows the user to navigate the nuclides chart and explore the properties of nuclides, to various computational modules for decay calculations, dosimetry and shielding calculations, etc. The product is particularly suitable for environmental radioprotection and radioecology. (authors)

  14. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks

    OpenAIRE

    Devi, D. Chitra; Uthariaraj, V. Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of the nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint and hence it has to be assigned to the most appropriate VMs at the initial placement itself. Practically, the arrived jobs consist of multiple interdependent tasks and they may execute the independent tasks in multiple VMs or in the same VM’s mul...
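
    The record above stops short of the algorithmic details, but the weighted round robin baseline it improves upon is easy to state. The Python sketch below shows plain weighted rotation, where a VM's integer weight would reflect its relative processing capacity; the paper's improved variant additionally accounts for task dependencies and current VM load, which this toy omits.

      from itertools import cycle

      def weighted_round_robin(tasks, vms):
          # vms: list of (vm_id, weight); integer weight ~ relative processing capacity
          rotation = [vm_id for vm_id, weight in vms for _ in range(weight)]
          slots = cycle(rotation)
          return {task: next(slots) for task in tasks}

      # Example: 3 of every 4 tasks go to the faster VM.
      placement = weighted_round_robin(range(8), [("vm-fast", 3), ("vm-slow", 1)])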

  15. Blockchain-based database to ensure data integrity in cloud computing environments

    OpenAIRE

    Gaetani, Edoardo; Aniello, Leonardo; Baldoni, Roberto; Lombardi, Federico; Margheri, Andrea; Sassone, Vladimiro

    2017-01-01

    Data is nowadays an invaluable resource, indeed it guides all business decisions in most of the computer-aided human activities. Threats to data integrity are thus of paramount relevance, as tampering with data may maliciously affect crucial business decisions. This issue is especially true in cloud computing environments, where data owners cannot control fundamental data aspects, like the physical storage of data and the control of its accesses. Blockchain has recently emerged as a fascinati...

  16. Towards the Automatic Detection of Efficient Computing Assets in a Heterogeneous Cloud Environment

    OpenAIRE

    Iglesias, Jesus Omana; Stokes, Nicola; Ventresque, Anthony; Murphy, Liam, B.E.; Thorburn, James

    2013-01-01

    In a heterogeneous cloud environment, the manual grading of computing assets is the first step in the process of configuring IT infrastructures to ensure optimal utilization of resources. Grading the efficiency of computing assets is, however, a difficult, subjective and time-consuming manual task. Thus, an automatic efficiency grading algorithm is highly desirable. In this paper, we compare the effectiveness of the different criteria used in the manual gr...

  17. INS/GPS/LiDAR Integrated Navigation System for Urban and Indoor Environments Using Hybrid Scan Matching Algorithm.

    Science.gov (United States)

    Gao, Yanbin; Liu, Shifei; Atia, Mohamed M; Noureldin, Aboelmagd

    2015-09-15

    This paper takes advantage of the complementary characteristics of the Global Positioning System (GPS) and Light Detection and Ranging (LiDAR) to provide periodic corrections to an Inertial Navigation System (INS), alternating between them in different environmental conditions. In open sky, where GPS signals are available and LiDAR measurements are sparse, GPS is integrated with INS. Meanwhile, in confined outdoor environments and indoors, where GPS is unreliable or unavailable and LiDAR measurements are rich, LiDAR replaces GPS in the integration with INS. This paper also proposes an innovative hybrid scan matching algorithm that combines the feature-based scan matching method and the Iterative Closest Point (ICP) based scan matching method. The algorithm can operate in and transition between the two modes depending on the number of matched line features over two scans, thus achieving efficiency and robustness concurrently. Two integration schemes of INS and LiDAR with the hybrid scan matching algorithm are implemented and compared. Real experiments were performed on an Unmanned Ground Vehicle (UGV) in both outdoor and indoor environments. Experimental results show that the multi-sensor integrated system can maintain sub-meter navigation accuracy over the whole trajectory.
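
    The record gives only the mode-switching idea, but the ICP half of the hybrid matcher can be sketched concretely. The following Python fragment is a textbook point-to-point ICP for 2-D laser scans (nearest neighbours via a k-d tree, rigid transform via SVD); it illustrates the general method, not the authors' implementation.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp_2d(src, dst, n_iter=30, tol=1e-6):
          # align scan `src` (Nx2) to reference scan `dst` (Mx2); returns R, t
          R, t = np.eye(2), np.zeros(2)
          tree = cKDTree(dst)
          cur, prev_err = src.copy(), np.inf
          for _ in range(n_iter):
              dists, idx = tree.query(cur)        # nearest-neighbour correspondences
              matched = dst[idx]
              mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
              H = (cur - mu_s).T @ (matched - mu_d)
              U, _, Vt = np.linalg.svd(H)         # Kabsch: best rotation from SVD
              Rk = Vt.T @ U.T
              if np.linalg.det(Rk) < 0:           # guard against reflections
                  Vt[-1] *= -1
                  Rk = Vt.T @ U.T
              tk = mu_d - Rk @ mu_s
              cur = cur @ Rk.T + tk               # apply incremental transform
              R, t = Rk @ R, Rk @ t + tk          # accumulate into the total
              if abs(prev_err - dists.mean()) < tol:
                  break
              prev_err = dists.mean()
          return R, t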

  18. Data of NODDI diffusion metrics in the brain and computer simulation of hybrid diffusion imaging (HYDI acquisition scheme

    Directory of Open Access Journals (Sweden)

    Chandana Kodiweera

    2016-06-01

    This article provides NODDI diffusion metrics in the brains of 52 healthy participants and computer simulation data to support the compatibility of the hybrid diffusion imaging (HYDI) acquisition scheme, "Hybrid diffusion imaging" [1], with fitting the neurite orientation dispersion and density imaging (NODDI) model, "NODDI: practical in vivo neurite orientation dispersion and density imaging of the human brain" [2]. HYDI is an extremely versatile diffusion magnetic resonance imaging (dMRI) technique that enables various analysis methods using a single diffusion dataset. One of the diffusion data analysis methods is the NODDI computation, which models the brain tissue with three compartments: fast isotropic diffusion (e.g., cerebrospinal fluid), anisotropic hindered diffusion (e.g., extracellular space), and anisotropic restricted diffusion (e.g., intracellular space). The NODDI model produces microstructural metrics in the developing brain, aging brain or human brain with neurologic disorders. The first dataset provided here comprises the means and standard deviations of NODDI metrics in 48 white matter regions of interest (ROIs), averaged across the 52 healthy participants. The second dataset provided here is the computer simulation with initial conditions guided by the first dataset as inputs and a gold standard for model fitting. The computer simulation data provide a direct comparison of NODDI indices computed from the HYDI acquisition [1] to the NODDI indices computed from the originally proposed acquisition [2]. These data are related to the accompanying research article "Age Effects and Sex Differences in Human Brain White Matter of Young to Middle-Aged Adults: A DTI, NODDI, and q-Space Study" [3].
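
    For reference, the three-compartment tissue model that NODDI fits to each voxel's signal can be written compactly; the sketch below follows the form of the cited NODDI paper [2], with the caveat that symbol conventions vary. Here ν_iso and ν_ic are the isotropic and intracellular volume fractions, and A_iso, A_ic, A_ec are the normalized signals of the free-water, intracellular and extracellular compartments:

      \frac{S}{S_0} \;=\; \nu_{\mathrm{iso}}\, A_{\mathrm{iso}}
      \;+\; \left(1-\nu_{\mathrm{iso}}\right)
      \left[\, \nu_{\mathrm{ic}}\, A_{\mathrm{ic}}
      \;+\; \left(1-\nu_{\mathrm{ic}}\right) A_{\mathrm{ec}} \,\right]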

  19. Enhanced Survey and Proposal to secure the data in Cloud Computing Environment

    OpenAIRE

    MR.S.SUBBIAH; DR.S.SELVA MUTHUKUMARAN; DR.T.RAMKUMAR

    2013-01-01

    Cloud computing has the power to eliminate the cost of setting up high-end computing infrastructure. It is a promising architecture that provides very flexible services, accessible through the internet. In the cloud computing environment, the data may reside at any of the data centers. Consequently, a data center may leak the data stored there, beyond the reach and control of the users. For this kind of misbehaving data center, the service providers should take care of the security and...

  20. The Needs of Virtual Machines Implementation in Private Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Edy Kristianto

    2015-12-01

    The Internet of Things (IOT) has become a focus of the development of information and communication technology. Cloud computing has a very important role in supporting the IOT, because cloud computing provides services in the form of infrastructure (IaaS), platform (PaaS), and software (SaaS) to its users. One of the fundamental services is infrastructure as a service (IaaS). This study analyzed the requirements, based on the NIST framework, for realizing infrastructure as a service in the form of virtual machines built in a cloud computing environment.

  1. Computationally Probing the Performance of Hybrid, Heterogeneous, and Homogeneous Iridium-Based Catalysts for Water Oxidation

    Energy Technology Data Exchange (ETDEWEB)

    García-Melchor, Max [SUNCAT Center for Interface Science and Catalysis, Department of Chemical Engineering, Stanford University, Stanford CA (United States); Vilella, Laia [Institute of Chemical Research of Catalonia (ICIQ), The Barcelona Institute of Science and Technology (BIST),Tarragona (Spain); Departament de Quimica, Universitat Autonoma de Barcelona, Barcelona (Spain); López, Núria [Institute of Chemical Research of Catalonia (ICIQ), The Barcelona Institute of Science and Technology (BIST), Tarragona (Spain); Vojvodic, Aleksandra [SUNCAT Center for Interface Science and Catalysis, SLAC National Accelerator Laboratory, Menlo Park CA (United States)

    2016-04-29

    An attractive strategy to improve the performance of water oxidation catalysts is to anchor a homogeneous molecular catalyst on a heterogeneous solid surface to create a hybrid catalyst. The idea of this combined system is to take advantage of the individual properties of each of the two catalyst components. We use Density Functional Theory to determine the stability and activity of a model hybrid water oxidation catalyst consisting of a dimeric Ir complex attached to the IrO2(110) surface through two oxygen atoms. We find that the homogeneous catalyst can be bound to its matrix oxide without losing significant activity. Hence, designing hybrid systems that benefit from both the highly tunable activity of homogeneous catalysts and the stability of heterogeneous systems seems feasible.

  2. Reaction time for processing visual stimulus in a computer-assisted rehabilitation environment.

    Science.gov (United States)

    Sanchez, Yerly; Pinzon, David; Zheng, Bin

    2017-10-01

    To examine the reaction time when human subjects process information presented in the visual channel under both direct vision and a virtual rehabilitation environment while walking. Visual stimuli comprised eight math problems displayed in the peripheral vision of seven healthy human subjects in a virtual rehabilitation training system (computer-assisted rehabilitation environment, CAREN) and in a direct vision environment. Subjects were required to verbally report the results of these math calculations in a short period of time. Reaction time, measured by a Tobii eye tracker, and calculation accuracy were recorded and compared between the direct vision and virtual rehabilitation environments. Performance outcomes measured for both groups included reaction time, reading time, answering time and the verbal answer score. A significant difference between the groups was found only for reaction time (p = .004). Participants had more difficulty recognizing the first equation in the virtual environment, and their reaction time was faster in the direct vision environment. This reaction time delay should be kept in mind when designing skill training scenarios in virtual environments. This was a pilot project for a series of studies assessing the cognitive ability of stroke patients undertaking a rehabilitation program in a virtual training environment. Implications for rehabilitation: Eye tracking is a reliable tool that can be employed in rehabilitation virtual environments. Reaction time changes between direct vision and virtual environments.

  3. Structural analysis of magnetic fusion energy systems in a combined interactive/batch computer environment

    International Nuclear Information System (INIS)

    Johnson, N.E.; Singhal, M.K.; Walls, J.C.; Gray, W.H.

    1979-01-01

    A system of computer programs has been developed to aid in the preparation of input data for and the evaluation of output data from finite element structural analyses of magnetic fusion energy devices. The system utilizes the NASTRAN structural analysis computer program and a special set of interactive pre- and post-processor computer programs, and has been designed for use in an environment wherein a time-share computer system is linked to a batch computer system. In such an environment, the analyst must only enter, review and/or manipulate data through interactive terminals linked to the time-share computer system. The primary pre-processor programs include NASDAT, NASERR and TORMAC. NASDAT and TORMAC are used to generate NASTRAN input data. NASERR performs routine error checks on this data. The NASTRAN program is run on a batch computer system using data generated by NASDAT and TORMAC. The primary post-processing programs include NASCMP and NASPOP. NASCMP is used to compress the data initially stored on magnetic tape by NASTRAN so as to facilitate interactive use of the data. NASPOP reads the data stored by NASCMP and reproduces NASTRAN output for selected grid points, elements and/or data types

  4. Peat hybrid sorbents for treatment of wastewaters and remediation of polluted environment

    Science.gov (United States)

    Klavins, Maris; Burlakovs, Juris; Robalds, Artis; Ansone-Bertina, Linda

    2015-04-01

    For remediation of soils and purification of polluted waters and wastewaters, sorbents can be considered a promising group of materials, and among them peat has a special role due to its low cost, biodegradability, high number of functional groups, well-developed surface area and combination of hydrophilic/hydrophobic structural elements. Peat as a sorbent has good application potential for removal of trace metals, and we have demonstrated peat sorption capacities, sorption kinetics and thermodynamics with respect to metals of different valencies - Tl(I), Cu(II), Cr(III). However, peat's sorption capacity with respect to nonmetallic (anionic) species is low, and its mechanical properties do not support application in large-scale column processes. To expand peat's application possibilities, an approach based on biomass-derived hybrid sorbents has been elaborated. The concept "hybrid sorbent" in our understanding means a natural, biomass-based sorbent modified or covered with another sorbent material, thus combining two types of sorbent properties, functionalities, surface properties, etc. As the covering layer, both inorganic mineral phases (iron oxohydroxides, oxyapatite) and organic polymers (using graft polymerization) were used. The obtained sorbents were characterised by their spectral properties, surface area and elemental composition. The obtained hybrid sorbents were tested for sorption of compounds in anionic speciation forms, for example arsenic, antimony, tellurium and phosphorus compounds, in comparison with weakly basic anionites. The highest sorption capacity was observed when peat sorbents modified with iron compounds were used. Sorption of different arsenic speciation forms onto iron-modified peat sorbents was investigated as a function of pH and temperature. It was established that sorption capacity increases with a rise in temperature, and the calculation of sorption process thermodynamic parameters indicates the spontaneity of sorption

  5. Computer modeling of the dynamics of surface tension on rotating fluids in low and microgravity environments

    Science.gov (United States)

    Hung, R. J.; Tsao, Y. D.; Hong, B. B.; Leslie, Fred W.

    1989-01-01

    Time-dependent evolutions of the profile of the free surface (bubble shapes) for a cylindrical container partially filled with a Newtonian fluid of constant density, rotating about its axis of symmetry, have been studied. Numerical computations have been carried out with the following situations: (1) linear functions of spin-up and spin-down in low- and microgravity environments, (2) linear functions of increasing and decreasing gravity environments at high- and low-rotating cylinder speeds, and (3) step functions of spin-up and spin-down in a low-gravity environment.

  6. Research on elastic resource management for multi-queue under cloud computing environment

    Science.gov (United States)

    CHENG, Zhenjing; LI, Haibo; HUANG, Qiulan; Cheng, Yaodong; CHEN, Gang

    2017-10-01

    As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments. However, this method cannot adapt well to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system for the cloud computing environment has been designed. This system performs unified management of virtual computing nodes on the basis of the job queues in HTCondor, based on dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. Practical runs show that virtual computing resources dynamically expand or shrink as computing requirements change. Additionally, the CPU utilization ratio of computing resources increased significantly compared with traditional resource management. The system also performs well when there are multiple Condor schedulers and multiple job queues.
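
    The dual-threshold policy described above can be condensed into a few lines. The Python sketch below is an illustrative reconstruction: the threshold values, the quota, and the shrink rate are assumptions, not IHEP's production settings.

      def resize_pool(queued_jobs, busy_vms, idle_vms,
                      upper=0.8, lower=0.2, quota=100):
          # returns how many VMs to start (+) or delete (-) for one job queue
          total = busy_vms + idle_vms
          if total == 0:
              return min(queued_jobs, quota)           # cold start up to the quota
          load = busy_vms / total
          if load > upper and queued_jobs > 0:
              return min(queued_jobs, quota - total)   # expand, respecting the quota
          if load < lower and idle_vms > 0:
              return -(idle_vms // 2)                  # shrink gradually
          return 0                                     # between thresholds: hold steady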

  7. A case of dual ectopy thyroid along the thyroglossal tract demonstrated on 99mTc-Pertechnatate hybrid single photon emission computed tomography/computed tomography

    International Nuclear Information System (INIS)

    Agarwal, Krishan Kant; Karunanithi, Sellam; Jain, Sachin; Tripathi, Madhavi

    2014-01-01

    Ectopic thyroid tissue (ETT) refers to the presence of thyroid tissue in locations other than the normal anterior neck region between the second and fourth tracheal cartilages. Multiple ectopia of the thyroid is extremely rare. Here we report the case of a 10-year-old girl with an anterior midline neck swelling and hypothyroidism, with dual ectopia of the thyroid gland and no orthotopic thyroid gland. A planar 99m-technetium pertechnetate scan identified ETT corresponding to the palpable neck swelling. Single photon emission computed tomography/computed tomography (SPECT/CT) demonstrated ETT in two locations, one corresponding to the palpable mass and another in the sublingual location. This case thus demonstrates the important role of hybrid SPECT/CT in the identification of dual ectopia along the thyroglossal tract

  8. Secondary School Students' Attitudes towards Mathematics Computer--Assisted Instruction Environment in Kenya

    Science.gov (United States)

    Mwei, Philip K.; Wando, Dave; Too, Jackson K.

    2012-01-01

    This paper reports the results of research conducted in six classes (Form IV) with 205 students with a sample of 94 respondents. Data represent students' statements that describe (a) the role of Mathematics teachers in a computer-assisted instruction (CAI) environment and (b) effectiveness of CAI in Mathematics instruction. The results indicated…

  9. Modeling Students' Problem Solving Performance in the Computer-Based Mathematics Learning Environment

    Science.gov (United States)

    Lee, Young-Jin

    2017-01-01

    Purpose: The purpose of this paper is to develop a quantitative model of problem solving performance of students in the computer-based mathematics learning environment. Design/methodology/approach: Regularized logistic regression was used to create a quantitative model of problem solving performance of students that predicts whether students can…

  10. The Use of Engineering Design Concept for Computer Programming Course: A Model of Blended Learning Environment

    Science.gov (United States)

    Tritrakan, Kasame; Kidrakarn, Pachoen; Asanok, Manit

    2016-01-01

    The aim of this research is to develop a learning model which blends factors from learning environment and engineering design concept for learning in computer programming course. The usage of the model was also analyzed. This study presents the design, implementation, and evaluation of the model. The research methodology is divided into three…

  11. Computer Graphics Orientation and Training in a Corporate/Production Environment.

    Science.gov (United States)

    McDevitt, Marsha Jean

    This master's thesis provides an overview of a computer graphics production environment and proposes a realistic approach to orientation and on-going training for employees working within a fast-paced production schedule. Problems involved in meeting the training needs of employees are briefly discussed in the first chapter, while the second…

  12. Visual Perspectives within Educational Computer Games: Effects on Presence and Flow within Virtual Immersive Learning Environments

    Science.gov (United States)

    Scoresby, Jon; Shelton, Brett E.

    2011-01-01

    The mis-categorizing of cognitive states involved in learning within virtual environments has complicated instructional technology research. Further, most educational computer game research does not account for how learning activity is influenced by factors of game content and differences in viewing perspectives. This study is a qualitative…

  13. The Effects of Study Tasks in a Computer-Based Chemistry Learning Environment

    Science.gov (United States)

    Urhahne, Detlef; Nick, Sabine; Poepping, Anna Christin; Schulz, Sarah Jayne

    2013-01-01

    The present study examines the effects of different study tasks on the acquisition of knowledge about acids and bases in a computer-based learning environment. Three different task formats were selected to create three treatment conditions: learning with gap-fill and matching tasks, learning with multiple-choice tasks, and learning only from text…

  14. Understanding Student Retention in Computer Science Education: The Role of Environment, Gains, Barriers and Usefulness

    Science.gov (United States)

    Giannakos, Michail N.; Pappas, Ilias O.; Jaccheri, Letizia; Sampson, Demetrios G.

    2017-01-01

    Researchers have been working to understand the high dropout rates in computer science (CS) education. Despite the great demand for CS professionals, little is known about what influences individuals to complete their CS studies. We identify gains of studying CS, the (learning) environment, degree's usefulness, and barriers as important predictors…

  15. Encountering the Expertise Reversal Effect with a Computer-Based Environment on Electrical Circuit Analysis

    Science.gov (United States)

    Reisslein, Jana; Atkinson, Robert K.; Seeling, Patrick; Reisslein, Martin

    2006-01-01

    This study examined the effectiveness of a computer-based environment employing three example-based instructional procedures (example-problem, problem-example, and fading) to teach series and parallel electrical circuit analysis to learners classified by two levels of prior knowledge (low and high). Although no differences between the…

  16. Detecting and Understanding the Impact of Cognitive and Interpersonal Conflict in Computer Supported Collaborative Learning Environments

    Science.gov (United States)

    Prata, David Nadler; Baker, Ryan S. J. d.; Costa, Evandro d. B.; Rose, Carolyn P.; Cui, Yue; de Carvalho, Adriana M. J. B.

    2009-01-01

    This paper presents a model which can automatically detect a variety of student speech acts as students collaborate within a computer supported collaborative learning environment. In addition, an analysis is presented which gives substantial insight as to how students' learning is associated with students' speech acts, knowledge that will…

  17. MOO: Using a Computer Gaming Environment to Teach about Community Arts

    Science.gov (United States)

    Garber, Elizabeth

    2004-01-01

    In this paper, the author discusses the use of an interactive computer technology, "MOO" (Multi-user domain, Object-Oriented), in her art education classes for preservice teachers. A MOO is a text-based environment wherein interactivity is centered on text exchanges made between users based on problems or other materials created by teachers. The…

  18. Hybrid heating systems optimization of residential environment to have thermal comfort conditions by numerical simulation.

    Science.gov (United States)

    Jahantigh, Nabi; Keshavarz, Ali; Mirzaei, Masoud

    2015-01-01

    The aim of this study is to determine the optimum parameters of hybrid heating systems, such as the temperature and surface area of a radiant heater and the vent area, to achieve thermal comfort conditions. The design-of-experiments (DOE) factorial method is used to determine the optimum values of the input parameters. A 3D model of a virtual standing thermal manikin with realistic dimensions is considered in this study. The continuity, momentum, energy and species equations for turbulent flow, together with a physiological equation for thermal comfort, are solved numerically to study the heat, moisture and flow fields. The k-ε RNG model is used for turbulence modeling and the discrete ordinates (DO) method is used for radiation effects. Numerical results show good agreement with experimental data reported in the literature. The effect of various combinations of input parameters on thermal comfort is considered. According to the Pareto graph, some of these combinations have a significant effect on thermal comfort while requiring no additional energy, and can thus be used as useful tools. A better symmetrical velocity distribution around the manikin is also achieved with the hybrid system.

  19. Possibilities of the new hybrid technology single photon emission computer technology/computer tomography (SPECT/CT) and the first impressions of its application

    International Nuclear Information System (INIS)

    Kostadinova, I.

    2010-01-01

    With the help of the new hybrid technique SPECT/CT it is possible, using a single investigation, to acquire a combined image of the investigated organ, visualizing both its function and its structure. The new multimodality method, which combines Single Photon Emission Computed Tomography (SPECT) and Computed Tomography (CT), makes it possible to precisely localize pathologically changed organ function. With the further combination of the tomographic gamma camera with diagnostic CT, a detailed morphological evaluation of the finding is possible. The main clinical applications of the new hybrid diagnostics are in the fields of cardiology, oncology and orthopedics, with a growing number of applications not connected with oncology, such as thyroid, parathyroid, brain (especially localization of epileptic foci), visualization of local infection and, recently, radiotherapy planning. According to the literature data, around 35% of SPECT investigations have to be combined with CT in order to increase the specificity of the diagnosis, which changes the interpretation of the result in 56% of cases. After installation of the SPECT/CT camera in the University Hospital 'Alexandrovska' in January 2009, the following changes have occurred: the number of investigated patients has increased, including the number of heart, thyroid (especially scintigraphy with 131I), bone and parathyroid gland studies. As a result of the application of the hybrid technique, a shortening of the investigation time and a decrease in price, compared with performing the investigations separately, were realized. Summarizing the literature data and the preliminary impressions of the first multimodality scanner in our country, at the University Hospital 'Alexandrovska', it can be said that there is continuously increasing information on new clinical applications of SPECT/CT. It is now accepted that its usage will increase in

  20. FMT-XCT: in vivo animal studies with hybrid fluorescence molecular tomography-X-ray computed tomography.

    Science.gov (United States)

    Ale, Angelique; Ermolayev, Vladimir; Herzog, Eva; Cohrs, Christian; de Angelis, Martin Hrabé; Ntziachristos, Vasilis

    2012-06-01

    The development of hybrid optical tomography methods to improve imaging performance was suggested over a decade ago and has since been experimentally demonstrated in animals and humans. Here we examined the in vivo performance of a camera-based hybrid fluorescence molecular tomography (FMT) system for 360° imaging combined with X-ray computed tomography (XCT). Offering an accurately co-registered, information-rich hybrid data set, FMT-XCT has new imaging possibilities compared to stand-alone FMT and XCT. We applied FMT-XCT to a subcutaneous 4T1 tumor mouse model, an Aga2 osteogenesis imperfecta model and a Kras lung cancer mouse model, using the XCT information during FMT inversion. We validated the in vivo imaging results against post-mortem planar fluorescence images of cryoslices and histology data. Besides offering concurrent anatomical and functional information, FMT-XCT resulted in the most accurate FMT performance to date. These findings indicate that the addition of FMT optics into the XCT gantry may be a potent upgrade for small-animal XCT systems.

  1. Error compensation for hybrid-computer solution of linear differential equations

    Science.gov (United States)

    Kemp, N. H.

    1970-01-01

    Z-transform technique compensates for digital transport delay and digital-to-analog hold. Method determines best values for compensation constants in multi-step and Taylor series projections. Technique also provides hybrid-calculation error compared to continuous exact solution, plus system stability properties.

  2. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

    Science.gov (United States)

    Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

    2017-05-01

    Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach to tackling this problem with viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, coupling procedures, model evaluation, and performance comparisons of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structures of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.

  3. Using the CAVE virtual-reality environment as an aid to 3-D electromagnetic field computation

    International Nuclear Information System (INIS)

    Turner, L.R.; Levine, D.; Huang, M.; Papka, M.

    1995-01-01

    One of the major problems in three-dimensional (3-D) field computation is visualizing the resulting 3-D field distributions. A virtual-reality environment, such as the CAVE (CAVE Automatic Virtual Environment), is helping to overcome this problem, thus making the results of computation more usable for designers and users of magnets and other electromagnetic devices. As a demonstration of the capabilities of the CAVE, the elliptical multipole wiggler (EMW), an insertion device being designed for the Advanced Photon Source (APS) now being commissioned at Argonne National Laboratory (ANL), was made visible, along with its fields and beam orbits. Other uses of the CAVE in preprocessing and postprocessing computation for electromagnetic applications are also discussed

  4. NOSTOS: a paper-based ubiquitous computing healthcare environment to support data capture and collaboration.

    Science.gov (United States)

    Bång, Magnus; Larsson, Anders; Eriksson, Henrik

    2003-01-01

    In this paper, we present a new approach to clinical workplace computerization that departs from the window-based user interface paradigm. NOSTOS is an experimental computer-augmented work environment designed to support data capture and teamwork in an emergency room. NOSTOS combines multiple technologies, such as digital pens, walk-up displays, headsets, a smart desk, and sensors to enhance an existing paper-based practice with computer power. The physical interfaces allow clinicians to retain mobile paper-based collaborative routines and still benefit from computer technology. The requirements for the system were elicited from situated workplace studies. We discuss the advantages and disadvantages of augmenting a paper-based clinical work environment.

  5. Development of a Computational Steering Framework for High Performance Computing Environments on Blue Gene/P Systems

    KAUST Repository

    Danani, Bob K.

    2012-07-01

    Computational steering has revolutionized the traditional workflow in high performance computing (HPC) applications. The standard workflow that consists of preparation of an application’s input, running of a simulation, and visualization of simulation results in a post-processing step is now transformed into a real-time interactive workflow that significantly reduces development and testing time. Computational steering provides the capability to direct or re-direct the progress of a simulation application at run-time. It allows modification of application-defined control parameters at run-time using various user-steering applications. In this project, we propose a computational steering framework for HPC environments that provides an innovative solution and easy-to-use platform, which allows users to connect and interact with running application(s) in real-time. This framework uses RealityGrid as the underlying steering library and adds several enhancements to the library to enable steering support for Blue Gene systems. Included in the scope of this project is the development of a scalable and efficient steering relay server that supports many-to-many connectivity between multiple steered applications and multiple steering clients. Steered applications can range from intermediate simulation and physical modeling applications to complex computational fluid dynamics (CFD) applications or advanced visualization applications. The Blue Gene supercomputer presents special challenges for remote access because the compute nodes reside on private networks. This thesis presents an implemented solution and demonstrates it on representative applications. Thorough implementation details and application enablement steps are also presented in this thesis to encourage direct usage of this framework.

  6. Hybrid Finite Element Developments for Rotorcraft Interior Noise Computations within a Multidisciplinary Design Environment, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — One of the main attributes contributing to the civil competitiveness of rotorcraft, is the continuously increasing expectations for passenger comfort which is...

  7. A hybrid input–output multi-objective model to assess economic–energy–environment trade-offs in Brazil

    International Nuclear Information System (INIS)

    Carvalho, Ariovaldo Lopes de; Antunes, Carlos Henggeler; Freire, Fausto; Henriques, Carla Oliveira

    2015-01-01

    A multi-objective linear programming (MOLP) model based on a hybrid Input–Output (IO) framework is presented. This model aims at assessing the trade-offs between economic, energy, environmental (E3) and social objectives in the Brazilian economic system. This combination of multi-objective models with Input–Output Analysis (IOA) plays a supplementary role in understanding the interactions between the economic and energy systems, and the corresponding impacts on the environment, offering a consistent framework for assessing the effects of distinct policies on these systems. Firstly, the System of National Accounts (SNA) is reorganized to include the National Energy Balance, creating a hybrid IO framework that is extended to assess Greenhouse Gas (GHG) emissions and the employment level. The objective functions considered are the maximization of GDP (gross domestic product) and employment levels, as well as the minimization of energy consumption and GHG emissions. An interactive method enabling a progressive and selective search of non-dominated solutions with distinct characteristics and underlying trade-offs is utilized. Illustrative results indicate that the maximization of GDP and the employment levels lead to an increase of both energy consumption and GHG emissions, while the minimization of either GHG emissions or energy consumption cause negative impacts on GDP and employment. - Highlights: • A hybrid Input–Output multi-objective model is applied to the Brazilian economy. • Objective functions are GDP, employment level, energy consumption and GHG emissions. • Interactive search process identifies trade-offs between the competing objectives. • Positive correlations between GDP growth and employment. • Positive correlations between energy consumption and GHG emissions
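
    One standard way to explore trade-offs in such a multi-objective LP is scalarization: solving a weighted sum of the objectives subject to the IO constraints. The sketch below uses invented two-sector coefficients, not the paper's Brazilian hybrid IO data:

        # Toy weighted-sum scalarization of an E3 multi-objective LP (Python).
        import numpy as np
        from scipy.optimize import linprog

        gdp  = np.array([0.40, 0.30])   # value added per unit sectoral output
        jobs = np.array([0.02, 0.05])   # employment per unit output
        ener = np.array([0.60, 0.20])   # energy use per unit output
        ghg  = np.array([0.50, 0.10])   # emissions per unit output

        # Maximize w1*GDP + w2*jobs, minimize w3*energy + w4*GHG
        # (linprog minimizes, so the maximized terms enter with a minus sign).
        w = [1.0, 10.0, 0.5, 0.5]
        c = -(w[0]*gdp + w[1]*jobs) + w[2]*ener + w[3]*ghg

        # Single capacity constraint standing in for the full IO balance.
        A_ub = [[1.0, 1.0]]; b_ub = [100.0]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
        print(res.x, "GDP:", gdp @ res.x, "GHG:", ghg @ res.x)

    Sweeping the weights traces out non-dominated solutions, which is the role the interactive search method plays in the study.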

  8. Hybrid brain-computer interfaces and hybrid neuroprostheses for restoration of upper limb functions in individuals with high-level spinal cord injury.

    Science.gov (United States)

    Rohm, Martin; Schneiders, Matthias; Müller, Constantin; Kreilinger, Alex; Kaiser, Vera; Müller-Putz, Gernot R; Rupp, Rüdiger

    2013-10-01

    The bilateral loss of the grasp function associated with a lesion of the cervical spinal cord severely limits the affected individuals' ability to live independently and return to gainful employment after sustaining a spinal cord injury (SCI). Any improvement in lost or limited grasp function is highly desirable. With current neuroprostheses, relevant improvements can be achieved in end users with preserved shoulder and elbow, but missing hand function. The aim of this single case study is to show that (1) with the support of hybrid neuroprostheses combining functional electrical stimulation (FES) with orthoses, restoration of hand, finger and elbow function is possible in users with high-level SCI and (2) shared control principles can be effectively used to allow for a brain-computer interface (BCI) control, even if only moderate BCI performance is achieved after extensive training. The individual in this study is a right-handed 41-year-old man who sustained a traumatic SCI in 2009 and has a complete motor and sensory lesion at the level of C4. He is unable to generate functionally relevant movements of the elbow, hand and fingers on either side. He underwent extensive FES training (30-45min, 2-3 times per week for 6 months) and motor imagery (MI) BCI training (415 runs in 43 sessions over 12 months). To meet individual needs, the system was designed in a modular fashion including an intelligent control approach encompassing two input modalities, namely an MI-BCI and shoulder movements. After one year of training, the end user's MI-BCI performance ranged from 50% to 93% (average: 70.5%). The performance of the hybrid system was evaluated with different functional assessments. The user was able to transfer objects of the grasp-and-release-test and he succeeded in eating a pretzel stick, signing a document and eating an ice cream cone, which he was unable to do without the system. This proof-of-concept study has demonstrated that with the support of hybrid FES

  9. iMAGE cloud: medical image processing as a service for regional healthcare in a hybrid cloud environment.

    Science.gov (United States)

    Liu, Li; Chen, Weiping; Nie, Min; Zhang, Fengjuan; Wang, Yu; He, Ailing; Wang, Xiaonan; Yan, Gen

    2016-11-01

    To handle the emergence of the regional healthcare ecosystem, physicians and surgeons in various departments and healthcare institutions must process medical images securely, conveniently, and efficiently, and must integrate them with electronic medical records (EMRs). In this manuscript, we propose a software as a service (SaaS) cloud called the iMAGE cloud. A three-layer hybrid cloud was created to provide medical image processing services in the smart city of Wuxi, China, in April 2015. In the first step, medical images and EMR data were received and integrated via the hybrid regional healthcare network. Then, traditional and advanced image processing functions were proposed and computed in a unified manner in the high-performance cloud units. Finally, the image processing results were delivered to regional users using the virtual desktop infrastructure (VDI) technology. Security infrastructure was also taken into consideration. Integrated information query and many advanced medical image processing functions, such as coronary extraction, pulmonary reconstruction, vascular extraction, intelligent detection of pulmonary nodules, image fusion, and 3D printing, were available to local physicians and surgeons in various departments and healthcare institutions. Implementation results indicate that the iMAGE cloud can provide convenient, efficient, compatible, and secure medical image processing services in regional healthcare networks. The iMAGE cloud has been proven to be valuable in applications in the regional healthcare system, and it could have a promising future in the healthcare system worldwide.

  10. Creating and Understanding Hybrid Interfaces of Multifunctional Composite Laminates for Extreme Environments

    Data.gov (United States)

    National Aeronautics and Space Administration — Due to increasing needs for lightweight and multifunctional structures and materials that can operate at and sustain the extreme environment such as high temperature...

  11. Overview of hybrid fiber-coaxial network deployment in the deregulated UK environment

    Science.gov (United States)

    Cox, Alan L.

    1995-11-01

    Cable operators in the U.K. enjoy unprecedented license to construct networks and operate cable TV and telecommunications services within their franchise areas. In general, operators have built hybrid-fiber-coax (HFC) networks for cable TV in parallel with fiber-copper-pair networks for telephony. The commonly used network architectures are reviewed, together with their present and future capacities. Despite this dual-technology approach, there is considerable interest in the integration of telephony services onto the HFC network and the development of new interactive services for which HFC may be more suitable than copper pairs. Certain technological and commercial developments may have considerable significance for HFC networks and their operators. These include the digitalization of TV distribution and the rising demand for high-rate digital access lines. Possible scenarios are discussed.

  12. Morphable 3D-Mosaics: A Hybrid Framework for Photorealistic Walkthroughs of Large Natural Environments

    Directory of Open Access Journals (Sweden)

    Nikos Komodakis

    2008-01-01

    Full Text Available This paper presents a hybrid (geometry- and image-based) framework suitable for providing photorealistic walkthroughs of large, complex outdoor scenes at interactive frame rates. To this end, based on just a sparse set of real stereoscopic views of the scene, a set of morphable 3D-mosaics is automatically constructed first; then, during rendering, a continuous morphing between the 3D-mosaics nearest to the current viewpoint takes place. The morphing is both photometric and geometric, and we also ensure that it proceeds in a physically valid manner, thus remaining transparent to the user. The effectiveness of our framework has been demonstrated in the 3D visual reconstruction of the Samaria Gorge in Crete, one of the largest and most beautiful gorges in Europe.

  13. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    Science.gov (United States)

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided. PMID:28790910

  14. Hybrid Brain-Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review.

    Science.gov (United States)

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain-computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain-computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided.

  15. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    Directory of Open Access Journals (Sweden)

    Keum-Shik Hong

    2017-07-01

    Full Text Available In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided.

  16. Resource-Efficient, Hierarchical Auto-Tuning of a Hybrid Lattice Boltzmann Computation on the Cray XT4

    International Nuclear Information System (INIS)

    Williams, Samuel; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2009-01-01

    We apply auto-tuning to a hybrid MPI-pthreads lattice Boltzmann computation running on the Cray XT4 at National Energy Research Scientific Computing Center (NERSC). Previous work showed that multicore-specific auto-tuning can improve the performance of lattice Boltzmann magnetohydrodynamics (LBMHD) by a factor of 4x when running on dual- and quad-core Opteron dual-socket SMPs. We extend these studies to the distributed memory arena via a hybrid MPI/pthreads implementation. In addition to conventional auto-tuning at the local SMP node, we tune at the message-passing level to determine the optimal aspect ratio as well as the correct balance between MPI tasks and threads per MPI task. Our study presents a detailed performance analysis when moving along an isocurve of constant hardware usage: fixed total memory, total cores, and total nodes. Overall, our work points to approaches for improving intra- and inter-node efficiency on large-scale multicore systems for demanding scientific applications
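
    The message-passing-level tuning described above amounts to a sweep over the MPI-task/thread split and the process-grid aspect ratio at a fixed core count. A schematic outer loop follows (aprun is the Cray launcher convention; the lbmhd binary and its arguments are placeholders, not the actual benchmark interface):

        # Sketch of an auto-tuning sweep at a fixed core budget (Python).
        import subprocess, time

        TOTAL_CORES = 256

        def run_trial(ntasks, nthreads, px, py):
            # Launch the hybrid MPI/pthreads binary and time it (placeholder).
            t0 = time.time()
            subprocess.run(["aprun", "-n", str(ntasks), "-d", str(nthreads),
                            "./lbmhd", str(px), str(py)], check=True)
            return time.time() - t0

        best = None
        for nthreads in (1, 2, 4, 8):
            ntasks = TOTAL_CORES // nthreads
            for px in (p for p in range(1, ntasks + 1) if ntasks % p == 0):
                py = ntasks // px           # 2D decomposition: px * py == ntasks
                elapsed = run_trial(ntasks, nthreads, px, py)
                if best is None or elapsed < best[0]:
                    best = (elapsed, ntasks, nthreads, px, py)
        print("best (time, tasks, threads, px, py):", best)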

  17. A hybrid FDTD-Rayleigh integral computational method for the simulation of the ultrasound measurement of proximal femur.

    Science.gov (United States)

    Cassereau, Didier; Nauleau, Pierre; Bendjoudi, Aniss; Minonzio, Jean-Gabriel; Laugier, Pascal; Bossy, Emmanuel; Grimal, Quentin

    2014-07-01

    The development of novel quantitative ultrasound (QUS) techniques to measure the hip is critically dependent on the possibility to simulate the ultrasound propagation. One specificity of hip QUS is that ultrasound propagates through a large thickness of soft tissue, which can be modeled by a homogeneous fluid in a first approach. Finite difference time domain (FDTD) algorithms have been widely used to simulate QUS measurements, but they are not adapted to simulate ultrasonic propagation over long distances in homogeneous media. In this paper, a hybrid numerical method is presented to simulate hip QUS measurements. A two-dimensional FDTD simulation in the vicinity of the bone is coupled to the semi-analytic calculation of the Rayleigh integral to compute the wave propagation between the probe and the bone. The method is used to simulate a setup dedicated to the measurement of circumferential guided waves in the cortical compartment of the femoral neck. The proposed approach is validated by comparison with a full FDTD simulation and with an experiment on a bone phantom. For a realistic QUS configuration, the computation time is estimated to be sixty times less with the hybrid method than with a full FDTD approach. Copyright © 2013 Elsevier B.V. All rights reserved.
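
    For reference, the semi-analytic step evaluates the (first) Rayleigh integral, which for a baffled planar source with normal velocity v_n radiating into a homogeneous fluid reads, in the frequency domain,

        p(\mathbf{r}, \omega) = \frac{j \omega \rho_0}{2\pi}
            \int_S v_n(\mathbf{r}_s, \omega)\,
            \frac{e^{-jk|\mathbf{r} - \mathbf{r}_s|}}{|\mathbf{r} - \mathbf{r}_s|}\, \mathrm{d}S,

    where rho_0 is the fluid density, k the wavenumber, and S the radiating surface. This is the textbook form; the paper's exact coupling of the integral to the FDTD boundary is not reproduced in this record.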

  18. HEPLIB '91: International users meeting on the support and environments of high energy physics computing

    International Nuclear Information System (INIS)

    Johnstad, H.

    1991-01-01

    The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the Meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.

  19. MDA-image: an environment of networked desktop computers for teleradiology/pathology.

    Science.gov (United States)

    Moffitt, M E; Richli, W R; Carrasco, C H; Wallace, S; Zimmerman, S O; Ayala, A G; Benjamin, R S; Chee, S; Wood, P; Daniels, P

    1991-04-01

    MDA-Image, a project of The University of Texas M. D. Anderson Cancer Center, is an environment of networked desktop computers for teleradiology/pathology. Radiographic film is digitized with a film scanner and histopathologic slides are digitized using a red, green, and blue (RGB) video camera connected to a microscope. Digitized images are stored on a data server connected to the institution's computer communication network (Ethernet) and can be displayed from authorized desktop computers connected to Ethernet. Images are digitized for cases presented at the Bone Tumor Management Conference, a multidisciplinary conference in which treatment options are discussed among clinicians, surgeons, radiologists, pathologists, radiotherapists, and medical oncologists. These radiographic and histologic images are shown on a large screen computer monitor during the conference. They are available for later review for follow-up or representation.

  20. A Computing Environment to Support Repeatable Scientific Big Data Experimentation of World-Wide Scientific Literature

    Energy Technology Data Exchange (ETDEWEB)

    Schlicher, Bob G [ORNL; Kulesz, James J [ORNL; Abercrombie, Robert K [ORNL; Kruse, Kara L [ORNL

    2015-01-01

    A principal tenet of the scientific method is that experiments must be repeatable, relying on ceteris paribus (i.e., all other things being equal). As a scientific community involved in data sciences, we must investigate ways to establish an environment where experiments can be repeated. We can no longer merely allude to where the data comes from; we must add rigor to the data collection and management process from which our analysis is conducted. This paper describes a computing environment to support repeatable scientific big data experimentation of world-wide scientific literature, and recommends a system that is housed at the Oak Ridge National Laboratory in order to provide value to investigators from government agencies, academic institutions, and industry entities. The described computing environment also adheres to the recently instituted digital data management plan mandated by multiple US government agencies, which involves all stages of the digital data life cycle including capture, analysis, sharing, and preservation. It particularly focuses on the sharing and preservation of digital research data. The details of this computing environment are explained within the context of cloud services by the three-layer classification of Software as a Service, Platform as a Service, and Infrastructure as a Service.

  1. Computer Modeling of Radiative Transfer in Hybrid-Stabilized Argon–Water Electric Arc

    Czech Academy of Sciences Publication Activity Database

    Jeništa, Jiří; Takana, H.; Nishiyama, H.; Křenek, Petr; Bartlová, M.; Aubrecht, V.

    2011-01-01

    Vol. 39, No. 11 (2011), p. 2892-2893 ISSN 0093-3813 Institutional research plan: CEZ:AV0Z20430508 Keywords: Divergence of radiation flux * hybrid-stabilized electric arc * mass flow rate * partial characteristics * radiation flux Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 1.174, year: 2011 http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=27

  2. Computational analysis of electrical conduction in hybrid nanomaterials with embedded non-penetrating conductive particles

    Science.gov (United States)

    Cai, Jizhe; Naraghi, Mohammad

    2016-08-01

    In this work, a comprehensive multi-resolution two-dimensional (2D) resistor network model is proposed to analyze the electrical conductivity of hybrid nanomaterials made of an insulating matrix with conductive particles, such as CNT-reinforced nanocomposites and thick film resistors. Unlike existing approaches, our model takes into account the impenetrability of the particles and their random placement within the matrix. Moreover, our model presents a detailed description of intra-particle conductivity via finite element analysis, which to the authors’ best knowledge has not been addressed before. The inter-particle conductivity is assumed to be primarily due to electron tunneling. The model is then used to predict the electrical conductivity of electrospun carbon nanofibers as a function of microstructural parameters such as turbostratic domain alignment and aspect ratio. To simulate the microstructure of a single CNF, randomly positioned nucleation sites were seeded and grown as turbostratic particles with anisotropic growth rates. Particle growth proceeded in steps, and growth of each particle in each direction was stopped upon contact with other particles. The study points to the significant contribution of both intra-particle and inter-particle conductivity to the overall conductivity of hybrid composites. The influence of particle alignment and anisotropic growth rate ratio on electrical conductivity is also discussed. The results show that partial alignment, in contrast to complete alignment, can result in maximum electrical conductivity of the whole CNF. High degrees of alignment can adversely affect conductivity by lowering the probability of the formation of a conductive path. The results demonstrate approaches to enhance the electrical conductivity of hybrid materials through controlling their microstructure, which is applicable not only to carbon nanofibers but also to many other types of hybrid composites, such as thick film resistors.
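
    The core numerical step of any such resistor network model is a nodal-analysis solve of Kirchhoff's equations. A minimal sketch on a small random grid follows (illustrative scale only; the paper's model adds FEM intra-particle conduction and tunneling-resistor links):

        # Nodal analysis of a small 2D resistor network (Python/NumPy).
        import numpy as np

        n = 4                               # nodes per side of the grid
        N = n * n
        G = np.zeros((N, N))                # conductance (Laplacian) matrix

        def add_resistor(i, j, g):
            G[i, i] += g; G[j, j] += g
            G[i, j] -= g; G[j, i] -= g

        rng = np.random.default_rng(0)
        for r in range(n):
            for c in range(n):
                i = r * n + c
                if c + 1 < n: add_resistor(i, i + 1, rng.uniform(0.1, 1.0))
                if r + 1 < n: add_resistor(i, i + n, rng.uniform(0.1, 1.0))

        # Fix node 0 at 1 V and the opposite corner at 0 V, solve for the rest.
        fixed = {0: 1.0, N - 1: 0.0}
        free = [i for i in range(N) if i not in fixed]
        b = -sum(v * G[np.ix_(free, [i])] for i, v in fixed.items()).ravel()
        v_free = np.linalg.solve(G[np.ix_(free, free)], b)
        print("interior node voltages:", v_free.round(3))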

  3. Second Language Writing Anxiety, Computer Anxiety, and Performance in a Classroom versus a Web-Based Environment

    Science.gov (United States)

    Dracopoulos, Effie; Pichette, François

    2011-01-01

    This study examined the impact of writing anxiety and computer anxiety on language learning for 45 ESL adult learners enrolled in an English grammar and writing course. Two sections of the course were offered in a traditional classroom setting whereas two others were given in a hybrid form that involved distance learning. Contrary to previous…

  4. Protein loop modeling using a new hybrid energy function and its application to modeling in inaccurate structural environments.

    Directory of Open Access Journals (Sweden)

    Hahnbeom Park

    Full Text Available Protein loop modeling is a tool for predicting protein local structures of particular interest, providing opportunities for applications involving protein structure prediction and de novo protein design. Until recently, the majority of loop modeling methods have been developed and tested by reconstructing loops in frameworks of experimentally resolved structures. In many practical applications, however, the protein loops to be modeled are located in inaccurate structural environments. These include loops in model structures, low-resolution experimental structures, or experimental structures of different functional forms. Accordingly, discrepancies in the accuracy of the structural environment assumed in development of the method and that in practical applications present additional challenges to modern loop modeling methods. This study demonstrates a new strategy for employing a hybrid energy function combining physics-based and knowledge-based components to help tackle this challenge. The hybrid energy function is designed to combine the strengths of each energy component, simultaneously maintaining accurate loop structure prediction in a high-resolution framework structure and tolerating minor environmental errors in low-resolution structures. A loop modeling method based on global optimization of this new energy function is tested on loop targets situated in different levels of environmental errors, ranging from experimental structures to structures perturbed in backbone as well as side chains and template-based model structures. The new method performs comparably to force field-based approaches in loop reconstruction in crystal structures and better in loop prediction in inaccurate framework structures. This result suggests that higher-accuracy predictions would be possible for a broader range of applications. The web server for this method is available at http://galaxy.seoklab.org/loop with the PS2 option for the scoring function.
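
    Schematically, hybrid energy functions of this kind take the weighted form

        E_{\text{hybrid}}(\mathbf{x}) = w\, E_{\text{physics}}(\mathbf{x}) + (1 - w)\, E_{\text{knowledge}}(\mathbf{x}), \qquad 0 \le w \le 1,

    where x is the loop conformation: the physics-based term drives accurate prediction in high-resolution frameworks, while the knowledge-based term tolerates environmental error. This generic form is given for orientation only; the abstract does not specify the study's actual weighting scheme.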

  5. Secure encapsulation and publication of biological services in the cloud computing environment.

    Science.gov (United States)

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    Secure encapsulation and publication for bioinformatics software products based on web services are presented, and the basic functions of biological information are realized in the cloud computing environment. In the encapsulation phase, the workflow and function of bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. The functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are published to remote users. Finally, the basic prototype system of the biological cloud is achieved.

  6. Phenology, sterility and inheritance of two environment genic male sterile (EGMS) lines for hybrid rice

    NARCIS (Netherlands)

    El-Namaky, R.; Oort, van P.A.J.

    2017-01-01

    Background: There is still limited quantitative understanding of how environmental factors affect sterility of Environment-conditioned genic male sterility (EGMS) lines. A model was developed for this purpose and tested based on experimental data from Ndiaye (Senegal) in 2013-2015. For the two

  7. Robust localisation of automated guided vehicles for computer-integrated manufacturing environments

    Directory of Open Access Journals (Sweden)

    Dixon, R. C.

    2013-05-01

    Full Text Available As industry moves toward an era of complete automation and mass customisation, automated guided vehicles (AGVs) are used as material handling systems. However, the current techniques that provide navigation, control, and manoeuvrability of automated guided vehicles threaten to create bottlenecks and inefficiencies in manufacturing environments that strive towards the optimisation of part production. This paper proposes a decentralised localisation technique for an automated guided vehicle without any non-holonomic constraints. Incorporation of these vehicles into the material handling system of a computer-integrated manufacturing environment would increase robustness, efficiency, flexibility, and manoeuvrability.

  8. Techniques and environments for big data analysis parallel, cloud, and grid computing

    CERN Document Server

    Dehuri, Satchidananda; Kim, Euiwhan; Wang, Gi-Name

    2016-01-01

    This volume aims at a wide range of readers and researchers in the area of Big Data, presenting the recent advances in the field of Big Data Analysis, as well as the techniques and tools used to analyze it. The book includes 10 distinct chapters providing a concise introduction to Big Data Analysis and recent techniques and environments for Big Data Analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in big data analysis by adopting Parallel, Grid, and Cloud computing environments.

  9. Helminths of wild hybrid marmosets (Callithrix sp.) living in an environment with high human activity

    Directory of Open Access Journals (Sweden)

    Alexandre de Oliveira Tavela

    Full Text Available The objective of this study was to identify the helminth fauna in hybrid, non-native marmosets, through analysis of fecal samples. The study involved 51 marmosets (genus Callithrix) from five groups living in places with levels of human impact in Viçosa-MG. The marmosets were caught using a multiple-entrance trap and were anaesthetized. Feces were collected, refrigerated and analyzed by means of the sedimentation technique (Hoffmann-Pons-Janner). Eggs and parasites were identified, but not counted. Most of the marmosets (86%) were parasitized by at least one genus of helminths. Among the infected marmosets, 37% presented co-infection. The intestinal helminths comprised four different taxa: Primasubulura jacchi, Ancylostomatidae, Prosthenorchis sp. and Dilepididae. P. jacchi and Ancylostomatidae had higher prevalences (> 80% and > 40%, respectively) and were found in all marmoset groups. Dilepididae species were found in almost all the groups, but only accounted for around 30% of the marmosets. Prosthenorchis sp. showed a relatively low prevalence (< 10%) and was only found in one group. Although two parasites are commonly found in marmosets and other primates (P. jacchi and Prosthenorchis sp.), our study is the first record for Ancylostomatidae and Dilepididae. Factors like marmosets' feeding behavior and their contact with humans and other species of nonhuman primates seem to be determinants of infection among marmosets.

  10. A computational environment for creating and testing reduced chemical kinetic mechanisms

    Energy Technology Data Exchange (ETDEWEB)

    Montgomery, C.J.; Swensen, D.A.; Harding, T.V.; Cremer, M.A.; Bockelie, M.J. [Reaction Engineering International, Salt Lake City, UT (USA)

    2002-02-01

    This paper describes software called the computer-assisted reduced mechanism problem-solving environment (CARM-PSE) that gives the engineer the ability to rapidly set up, run and examine large numbers of problems comparing detailed and reduced (approximate) chemistry. CARM-PSE integrates the automatic chemical mechanism reduction code CARM and the codes that simulate perfectly stirred reactors and plug flow reactors into a user-friendly computational environment. CARM-PSE gives the combustion engineer the ability to easily test chemical approximations over many hundreds of combinations of inputs in a multidimensional parameter space. The demonstration problems compare detailed and reduced chemical kinetic calculations for methane-air combustion, including nitrogen oxide formation, in a stirred reactor and selective non-catalytic reduction of NOx in coal combustion flue gas.

  11. Method and system for rendering and interacting with an adaptable computing environment

    Science.gov (United States)

    Osbourn, Gordon Cecil [Albuquerque, NM; Bouchard, Ann Marie [Albuquerque, NM

    2012-06-12

    An adaptable computing environment is implemented with software entities termed "s-machines", which self-assemble into hierarchical data structures capable of rendering and interacting with the computing environment. A hierarchical data structure includes a first hierarchical s-machine bound to a second hierarchical s-machine. The first hierarchical s-machine is associated with a first layer of a rendering region on a display screen and the second hierarchical s-machine is associated with a second layer of the rendering region overlaying at least a portion of the first layer. A screen element s-machine is linked to the first hierarchical s-machine. The screen element s-machine manages data associated with a screen element rendered to the display screen within the rendering region at the first layer.

  12. Bound on quantum computation time: Quantum error correction in a critical environment

    International Nuclear Information System (INIS)

    Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.

    2010-01-01

    We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user.
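
    The figure of merit used here is the trace distance between density matrices,

        D(\rho, \sigma) = \tfrac{1}{2} \| \rho - \sigma \|_1 = \tfrac{1}{2} \mathrm{Tr}\, |\rho - \sigma|,

    and the quoted bound is the time at which D between the real and ideal logical states first exceeds a user-set tolerance. The definition is standard; the paper's explicit expressions for the Ohmic and super-Ohmic cases are not reproduced in this record.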

  13. Synthesis, Characterization And Modeling Of Functionally Graded Multifunctional Hybrid Composites For Extreme Environments

    Science.gov (United States)

    2017-04-04

    [Abstract not available. The record text consists of fragments of the report's presentation list, including the Conference on Advanced Computational Engineering and Experimenting (ACE-X2010), Paris, France, July 2010; the International Symposium on Multifunctional and Functionally Graded Materials, Taua Resort, SP, Brazil, October 2014; and a plenary lecture at the Fourth International Symposium on Solid Mechanics (MecSol 2013), Porto Alegre, Brazil, April 2013.]

  14. A Drawing and Multi-Representational Computer Environment for Beginners' Learning of Programming Using C: Design and Pilot Formative Evaluation

    Science.gov (United States)

    Kordaki, Maria

    2010-01-01

    This paper presents both the design and the pilot formative evaluation study of a computer-based problem-solving environment (named LECGO: Learning Environment for programming using C using Geometrical Objects) for the learning of computer programming using C by beginners. In its design, constructivist and social learning theories were taken into…

  15. Effects Of Social Networking Sites (SNSs) On Hyper Media Computer Mediated Environments (HCMEs)

    OpenAIRE

    Yoon C. Cho

    2011-01-01

    Social Networking Sites (SNSs) are known as tools to interact and build relationships between users/customers in Hyper Media Computer Mediated Environments (HCMEs). This study explored how social networking sites play a significant role in communication between users. While numerous researchers have examined the effectiveness of social networking websites, few studies have investigated which factors affect customers' attitudes and behavior toward social networking sites. In this paper, the authors inv...

  16. Do Social Computing Make You Happy? A Case Study of Nomadic Children in Mixed Environments

    DEFF Research Database (Denmark)

    Christensen, Bent Guldbjerg

    2005-01-01

    In this paper I describe a perspective on ambient, ubiquitous, and pervasive computing called the happiness perspective. By using the happiness perspective, the application domain and how the technology is used and experienced becomes a central and integral part of perceiving ambient technology. ... I will use the perspective in a case study on field test experiments with nomadic children in mixed environments using the eBag system.

  17. Discharge Fee Policy Analysis: A Computable General Equilibrium (CGE) Model of Water Resources and Water Environments

    OpenAIRE

    Guohua Fang; Ting Wang; Xinyi Si; Xin Wen; Yu Liu

    2016-01-01

    To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and out...

  18. Applications of the pipeline environment for visual informatics and genomics computations

    Directory of Open Access Journals (Sweden)

    Genco Alex

    2011-07-01

    Full Text Available Abstract. Background: Contemporary informatics and genomics research require efficient, flexible and robust management of large heterogeneous data, advanced computational tools, powerful visualization, reliable hardware infrastructure, interoperability of computational resources, and detailed data and analysis-protocol provenance. The Pipeline is a client-server distributed computational environment that facilitates the visual graphical construction, execution, monitoring, validation and dissemination of advanced data analysis protocols. Results: This paper reports on the applications of the LONI Pipeline environment to address two informatics challenges - graphical management of diverse genomics tools, and the interoperability of informatics software. Specifically, this manuscript presents the concrete details of deploying general informatics suites and individual software tools to new hardware infrastructures, the design, validation and execution of new visual analysis protocols via the Pipeline graphical interface, and integration of diverse informatics tools via the Pipeline eXtensible Markup Language syntax. We demonstrate each of these processes using several established informatics packages (e.g., miBLAST, EMBOSS, mrFAST, GWASS, MAQ, SAMtools, Bowtie) for basic local sequence alignment and search, molecular biology data analysis, and genome-wide association studies. These examples demonstrate the power of the Pipeline graphical workflow environment to enable integration of bioinformatics resources which provide a well-defined syntax for dynamic specification of the input/output parameters and the run-time execution controls. Conclusions: The LONI Pipeline environment http://pipeline.loni.ucla.edu provides a flexible graphical infrastructure for efficient biomedical computing and distributed informatics research. The interactive Pipeline resource manager enables the utilization and interoperability of diverse types of informatics resources. The

  19. Improving Communicative Competence through Synchronous Communication in Computer-Supported Collaborative Learning Environments: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Xi Huang

    2018-01-01

    Full Text Available Computer-supported collaborative learning facilitates the extension of second language acquisition into social practice. Studies on its achievement effects speak directly to the pedagogical notion of treating communicative practice in synchronous computer-mediated communication (SCMC: real-time communication that takes place between human beings via the instrumentality of computers in forms of text, audio and video communication, such as live chat and chatrooms) as socially-oriented meaning construction. This review begins by considering the adoption of social interactionist views to identify key paradigms and supportive principles of computer-supported collaborative learning. A special focus on two components of communicative competence is then presented to explore interactional variables in synchronous computer-mediated communication along with a review of research. There follows a discussion on a synthesis of interactional variables in negotiated interaction and co-construction of knowledge from psycholinguistic and social cohesion perspectives. This review reveals both possibilities and disparities of language socialization in promoting intersubjective learning and diversifying the salient use of interactively creative language in computer-supported collaborative learning environments in service of communicative competence.

  20. Learning Design Patterns for Hybrid Synchronous Video-Mediated Learning Environments

    DEFF Research Database (Denmark)

    Weitze, Charlotte Lærke

    2016-01-01

    This article describes an innovative learning environment where remote and face-to-face full-time general upper secondary adult students jointly participate in the same live classes at VUC Storstrøm, an adult learning centre in Denmark. The teachers developed new learning designs as a part of the... activating and equal learning designs for the students. This article is written on the basis of a chapter in the PhD thesis by the author.

  1. Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment

    Science.gov (United States)

    Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.

    2013-12-01

    Dust storms have serious negative impacts on the environment, human health, and assets. The continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The developments and applications of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process. Normally, a simulation for a single dust storm event may take several hours or even days to run. This seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with other geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is a key factor that may impact the feasibility of parallelization. The allocation algorithm needs to carefully leverage the computing cost and communication cost for each computing node to minimize the total execution time and reduce the overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with the evenly distributed allocation method. Specifically, 1) In order to get optimized solutions, a
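
    A common baseline for the allocation problem sketched above is longest-processing-time-first greedy assignment, which balances compute load across nodes (the presentation's own algorithms also weigh communication cost; the costs below are invented):

        # Greedy LPT task allocation across computing nodes (Python).
        import heapq

        subdomain_cost = [9, 7, 6, 5, 5, 4, 3, 2]      # estimated compute cost
        NODES = 3

        heap = [(0.0, node) for node in range(NODES)]  # (load, node id)
        heapq.heapify(heap)
        assignment = {node: [] for node in range(NODES)}

        # Give the next-largest subdomain to the currently least-loaded node.
        for sub, cost in sorted(enumerate(subdomain_cost), key=lambda t: -t[1]):
            load, node = heapq.heappop(heap)
            assignment[node].append(sub)
            heapq.heappush(heap, (load + cost, node))

        print(assignment)
        print("makespan:", max(load for load, _ in heap))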

  2. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm

    Science.gov (United States)

    Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on-demand via a front-end interface. Scientific applications scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of applications scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific applications scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement rate on the makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239

  3. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm.

    Science.gov (United States)

    Abdulhamid, Shafi'i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on-demand via a front-end interface. Scientific applications scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of applications scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific applications scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement rate on the makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques.

  4. Study on User Authority Management for Safe Data Protection in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    Su-Hyun Kim

    2015-03-01

    Full Text Available In cloud computing environments, user data are encrypted using numerous distributed servers before storing such data. Global Internet service companies, such as Google and Yahoo, recognized the importance of Internet service platforms and conducted self-research and development to create and utilize large cluster-based cloud computing platform technology based on low-priced commercial nodes. As diverse data services become possible in distributed computing environments, high-capacity distributed management is emerging as a major issue. Meanwhile, because of the diverse forms of using high-capacity data, security vulnerability and privacy invasion by malicious attackers or internal users can occur. As such, when various sensitive data are stored in cloud servers and used from there, the problem of data spill might occur because of external attackers or the poor management of internal users. Data can be managed through encryption to prevent such problems. However, existing simple encryption methods involve problems associated with the management of access to data stored in cloud environments. Therefore, in the present paper, a technique for data access management by user authority, based on Attribute-Based Encryption (ABE) and secret distribution techniques, is proposed.
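
    Of the two primitives named above, the secret distribution component is commonly realized with Shamir's k-of-n threshold scheme; a minimal sketch over a prime field follows (the paper's specific CP-ABE-based construction is not reproduced here):

        # Shamir (k-of-n) secret sharing over a prime field (Python).
        import random

        P = 2**127 - 1                      # prime modulus for a toy secret

        def make_shares(secret, k, n):
            # Random degree-(k-1) polynomial with f(0) = secret.
            coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
            f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            return [(x, f(x)) for x in range(1, n + 1)]

        def reconstruct(shares):
            # Lagrange interpolation at x = 0 recovers the secret.
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num = den = 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, P - 2, P)) % P
            return secret

        shares = make_shares(123456789, k=3, n=5)
        print(reconstruct(shares[:3]))      # any 3 of 5 shares suffice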

  5. An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology.

    Science.gov (United States)

    Deodhar, Suruchi; Bisset, Keith R; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V

    2014-07-01

    We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented-architecture that allows analysts to explore various counter factual scenarios. As the modeling tools for public health epidemiology are getting more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll-back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity.

  6. Improving science and mathematics education with computational modelling in interactive engagement environments

    Science.gov (United States)

    Neves, Rui Gomes; Teodoro, Vítor Duarte

    2012-09-01

    A teaching approach aiming at an epistemologically balanced integration of computational modelling in science and mathematics education is presented. The approach is based on interactive engagement learning activities built around computational modelling experiments that span the range of different kinds of modelling from explorative to expressive modelling. The activities are designed to make a progressive introduction to scientific computation without requiring prior development of a working knowledge of programming, generate and foster the resolution of cognitive conflicts in the understanding of scientific and mathematical concepts and promote performative competency in the manipulation of different and complementary representations of mathematical models. The activities are supported by interactive PDF documents which explain the fundamental concepts, methods and reasoning processes using text, images and embedded movies, and include free space for multimedia enriched student modelling reports and teacher feedback. To illustrate, an example from physics implemented in the Modellus environment and tested in undergraduate university general physics and biophysics courses is discussed.

  7. Daphnia hybridization along ecological gradients in pelagic environments: the potential for the presence of hybrid zones in plankton

    Czech Academy of Sciences Publication Activity Database

    Petrusek, A.; Seďa, Jaromír; Macháček, Jiří; Ruthová, Š.; Šmilauer, P.

    2008-01-01

    Vol. 363, No. 1505 (2008), p. 2931-2941 ISSN 0962-8436 R&D Projects: GA ČR(CZ) GA206/04/0190 Institutional research plan: CEZ:AV0Z60170517 Keywords: interspecific hybridization * Daphnia longispina complex * hybrid zones * canyon-shaped reservoirs * ITS-RFLP * allozyme electrophoresis Subject RIV: EH - Ecology, Behaviour Impact factor: 5.556, year: 2008

  8. Developing a computationally efficient dynamic multilevel hybrid optimization scheme using multifidelity model interactions.

    Energy Technology Data Exchange (ETDEWEB)

    Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr. (; .); Giunta, Anthony Andrew

    2006-01-01

    Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictates possible optimization algorithms. Often, a gradient-based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous, and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic and computational-time-saving optimization algorithm. First, a direct search method is applied to the high-fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high-fidelity model to that of a computationally cheaper low-fidelity model using space mapping techniques. Then, in the low-fidelity space, an optimum is obtained using gradient or non-gradient based optimization, and it is mapped back to the high-fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
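
    The loop described above can be caricatured with a first-order additive correction standing in for the space-mapping oracle: correct the cheap model so it matches the expensive model's value and slope at the current point, optimize the corrected model, and iterate. The functions below are toys, not the paper's simulators:

        # Two-fidelity optimization sketch with additive correction (Python).
        def f_hi(x):  return (x - 2.0)**2 + 0.05*(x - 2.0)**4   # "expensive"
        def f_lo(x):  return (x - 1.7)**2 + 0.1                 # "cheap"

        def grad(f, x, h=1e-6):
            return (f(x + h) - f(x - h)) / (2*h)

        x = 0.0
        for it in range(10):
            d0 = f_hi(x) - f_lo(x)              # match value at x
            d1 = grad(f_hi, x) - grad(f_lo, x)  # match slope at x
            # Corrected model f_lo(z) + d0 + d1*(z - x) is quadratic in z,
            # so its minimizer solves 2*(z - 1.7) + d1 = 0 analytically.
            x_new = 1.7 - d1 / 2.0
            print(f"iter {it}: x = {x_new:.5f}, f_hi = {f_hi(x_new):.6f}")
            if abs(x_new - x) < 1e-6:
                break
            x = x_new
        # Converges to the high-fidelity optimum x = 2 in a few iterations.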

  9. A web-based, collaborative modeling, simulation, and parallel computing environment for electromechanical systems

    Directory of Open Access Journals (Sweden)

    Xiaoliang Yin

    2015-03-01

    Full Text Available A complex electromechanical system is usually composed of multiple components from different domains, including mechanical, electronic, hydraulic, control, and so on. Modeling and simulation of electromechanical systems on a unified platform is one of the research hotspots in system engineering at present, and it is also the development trend in the design of complex electromechanical systems. The unified modeling techniques and tools based on the Modelica language provide a satisfactory solution. To meet the requirements of collaborative modeling, simulation, and parallel computing for complex electromechanical systems based on Modelica, a general web-based modeling and simulation prototype environment, namely WebMWorks, is designed and implemented. Based on rich Internet application technologies, an interactive graphic user interface for modeling and post-processing on a web browser was implemented; with the collaborative design module, the environment supports top-down, concurrent modeling and team cooperation; additionally, a service-oriented architecture was applied to supply compiling and solving services which run on cloud-like servers, so the environment can manage and dispatch large-scale simulation tasks in parallel on multiple computing servers simultaneously. An engineering application about a pure electric vehicle is tested on WebMWorks. The results of simulation and parametric experiment demonstrate that the tested web-based environment can effectively shorten the design cycle of a complex electromechanical system.

  10. Hybrid-PIC Computer Simulation of the Plasma and Erosion Processes in Hall Thrusters

    Science.gov (United States)

    Hofer, Richard R.; Katz, Ira; Mikellides, Ioannis G.; Gamero-Castano, Manuel

    2010-01-01

    HPHall software simulates and tracks the time-dependent evolution of the plasma and erosion processes in the discharge chamber and near-field plume of Hall thrusters. HPHall is an axisymmetric solver that employs a hybrid fluid/particle-in-cell (Hybrid-PIC) numerical approach. HPHall, originally developed by MIT in 1998, was upgraded to HPHall-2 by the Polytechnic University of Madrid in 2006. The Jet Propulsion Laboratory has continued the development of HPHall-2 through upgrades to the physical models employed in the code, and the addition of entirely new ones. Primary among these are the inclusion of a three-region electron mobility model that more accurately depicts the cross-field electron transport, and the development of an erosion sub-model that allows for the tracking of the erosion of the discharge chamber wall. The code is being developed to provide NASA science missions with a predictive tool for Hall thruster performance and lifetime that can be used to validate Hall thrusters for missions.

  11. Short term load forecasting of anomalous load using hybrid soft computing methods

    Science.gov (United States)

    Rasyid, S. A.; Abdullah, A. G.; Mulyadi, Y.

    2016-04-01

    Load forecast accuracy has a direct impact on how economical generation costs are. Electricity use by consumers on holidays tends to follow load patterns that are not identical to those of normal days; such a load is defined as an anomalous load. In this paper, a hybrid ANN-Particle Swarm method is proposed to improve the accuracy of forecasting the anomalous loads that often occur on holidays. The proposed methodology has been used to forecast the half-hourly electricity demand for power systems in the Indonesia National Electricity Market in the West Java region. Experiments were conducted by testing various learning rates and training data inputs. The performance of the methodology was validated with real data from the national electricity company. The results show that the proposed method is very effective for short-term load forecasting in the case of anomalous loads. The hybrid ANN-Particle Swarm approach is relatively simple and easy for engineers to use as an analysis tool.
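
    As a rough illustration of the hybrid idea, the sketch below trains a small feed-forward network by particle swarm optimization instead of backpropagation; the network size, swarm parameters, and data interface are assumptions, not the settings used in the paper.

        # Illustrative ANN-PSO hybrid: PSO searches the flat weight vector of a
        # single-hidden-layer network mapping recent load values to a forecast.
        import numpy as np

        rng = np.random.default_rng(0)
        N_IN, N_HID = 4, 6                      # assumed input lags / hidden units
        DIM = N_IN * N_HID + N_HID + N_HID + 1  # total number of weights

        def forecast(w, X):
            W1 = w[: N_IN * N_HID].reshape(N_IN, N_HID)
            b1 = w[N_IN * N_HID : N_IN * N_HID + N_HID]
            W2, b2 = w[N_IN * N_HID + N_HID : -1], w[-1]
            return np.tanh(X @ W1 + b1) @ W2 + b2

        def pso_train(X, y, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            pos = rng.normal(size=(n_particles, DIM))
            vel = np.zeros_like(pos)
            pbest, pbest_err = pos.copy(), np.full(n_particles, np.inf)
            gbest, gbest_err = pos[0].copy(), np.inf
            for _ in range(iters):
                err = np.array([np.mean((forecast(p, X) - y) ** 2) for p in pos])
                better = err < pbest_err
                pbest[better], pbest_err[better] = pos[better], err[better]
                if err.min() < gbest_err:
                    gbest, gbest_err = pos[err.argmin()].copy(), err.min()
                r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos = pos + vel
            return gbest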

  12. A hybrid computational approach to estimate solar global radiation: An empirical evidence from Iran

    International Nuclear Information System (INIS)

    Mostafavi, Elham Sadat; Ramiyani, Sara Saeidi; Sarvar, Rahim; Moud, Hashem Izadi; Mousavi, Seyyed Mohammad

    2013-01-01

    This paper presents an innovative hybrid approach for the estimation of the solar global radiation. New prediction equations were developed for the global radiation using an integrated search method of genetic programming (GP) and simulated annealing (SA), called GP/SA. The solar radiation was formulated in terms of several climatological and meteorological parameters. Comprehensive databases containing monthly data collected for 6 years in two cities of Iran were used to develop GP/SA-based models. Separate models were established for each city. The generalization of the models was verified using a separate testing database. A sensitivity analysis was conducted to investigate the contribution of the parameters affecting the solar radiation. The derived models make accurate predictions of the solar global radiation and notably outperform the existing models. -- Highlights: ► A hybrid approach is presented for the estimation of the solar global radiation. ► The proposed method integrates the capabilities of GP and SA. ► Several climatological and meteorological parameters are included in the analysis. ► The GP/SA models make accurate predictions of the solar global radiation.
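
    The GP half of the method evolves the functional form of the prediction equation, which is beyond a short sketch; the fragment below illustrates only the SA half, refining the coefficients of one fixed candidate formula. The linear model form, the cooling schedule, and all settings are assumptions for illustration.

        # Toy simulated-annealing refinement of coefficients in a candidate
        # radiation formula H = theta[0] + X @ theta[1:], with Metropolis acceptance.
        import numpy as np

        rng = np.random.default_rng(1)

        def sa_fit(X, y, iters=5000, temp=1.0, cooling=0.999, step=0.1):
            def rmse(t):
                return np.sqrt(np.mean((t[0] + X @ t[1:] - y) ** 2))
            cur = rng.normal(size=X.shape[1] + 1)
            cur_err = rmse(cur)
            best, best_err = cur.copy(), cur_err
            for _ in range(iters):
                cand = cur + rng.normal(scale=step, size=cur.shape)
                delta = rmse(cand) - cur_err
                # accept improvements always, worse moves with Boltzmann probability
                if delta < 0 or rng.random() < np.exp(-delta / temp):
                    cur, cur_err = cand, cur_err + delta
                    if cur_err < best_err:
                        best, best_err = cur.copy(), cur_err
                temp *= cooling
            return best, best_err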

  13. Prototype of an auto-calibrating, context-aware, hybrid brain-computer interface.

    Science.gov (United States)

    Faller, J; Torrellas, S; Miralles, F; Holzner, C; Kapeller, C; Guger, C; Bund, J; Müller-Putz, G R; Scherer, R

    2012-01-01

    We present the prototype of a context-aware framework that allows users to control smart home devices and to access internet services via a hybrid BCI system consisting of an auto-calibrating sensorimotor rhythm (SMR) based BCI and another assistive device (Integra Mouse mouth joystick). While there is extensive literature describing the merits of hybrid BCIs, auto-calibrating and co-adaptive ERD BCI training paradigms, specialized BCI user interfaces, context-awareness, and smart home control, there is, up to now, no system that includes all these concepts in one integrated, easy-to-use framework that can truly benefit individuals with severe functional disabilities by increasing independence and social inclusion. Here we integrate all these technologies in a prototype framework that does not require expert knowledge or excess time for calibration. In a first pilot study, 3 healthy volunteers successfully operated the system using input signals from an ERD BCI and an Integra Mouse, reaching average positive predictive values (PPV) of 72% and 98%, respectively. Based on what we learned here, we are planning to improve the system for a test with a larger number of healthy volunteers, so we can soon bring the system to benefit individuals with severe functional disability.

  14. Computer simulations of upper-hybrid and electron cyclotron resonance heating

    International Nuclear Information System (INIS)

    Lin, A.T.; Lin, C.C.

    1983-01-01

    A 2½-dimensional relativistic electromagnetic particle code is used to investigate the dynamic behavior of electron heating around the electron cyclotron and upper-hybrid layers when an extraordinary wave is obliquely launched from the high-field side into a magnetized plasma. With a large angle of incidence, most of the radiation wave energy converts into electrostatic electron Bernstein waves at the upper-hybrid layer. These mode-converted waves propagate back to the cyclotron layer and deposit their energy in the electrons through resonant interactions dominated first by Doppler broadening and later by the relativistic mass correction. The line shape for both mechanisms has been observed in the simulations. At a later stage, the relativistic resonance effects shift the peak of the temperature profile to the high-field side. The heating ultimately causes the extraordinary wave to be substantially absorbed by the high-energy electrons. The steep temperature gradient created by the electron cyclotron heating eventually reflects a substantial part of the incident wave energy. The diamagnetic effects due to the gradient of the mode-converted Bernstein wave pressure enhance the spreading of the electron heating from the original electron cyclotron layer.

  15. Secure Enclaves: An Isolation-centric Approach for Creating Secure High Performance Computing Environments

    Energy Technology Data Exchange (ETDEWEB)

    Aderholdt, Ferrol [Tennessee Technological Univ., Cookeville, TN (United States); Caldwell, Blake A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hicks, Susan Elaine [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Koch, Scott M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Naughton, III, Thomas J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pelfrey, Daniel S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pogge, James R [Tennessee Technological Univ., Cookeville, TN (United States); Scott, Stephen L [Tennessee Technological Univ., Cookeville, TN (United States); Shipman, Galen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sorrillo, Lawrence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-01-01

    High performance computing environments are often used for a wide variety of workloads, ranging from simulation to data transformation and analysis to complex workflows, to name just a few. These systems may process data at various security levels, but in doing so are often enclaved at the highest security posture. This approach places significant restrictions on the users of the system, even when processing data at a lower security level, and exposes data at higher levels of confidentiality to a much broader population than otherwise necessary. The traditional approach of isolation, while effective in establishing security enclaves, poses significant challenges for the use of shared infrastructure in HPC environments. This report details the current state-of-the-art in virtualization, reconfigurable network enclaving via Software Defined Networking (SDN), and storage architectures and bridging techniques for creating secure enclaves in HPC environments.

  16. Hybrid brain-computer interface for biomedical cyber-physical system application using wireless embedded EEG systems.

    Science.gov (United States)

    Chai, Rifai; Naik, Ganesh R; Ling, Sai Ho; Nguyen, Hung T

    2017-01-07

    One of the key challenges of the biomedical cyber-physical system is to combine cognitive neuroscience with the integration of physical systems to assist people with disabilities. Electroencephalography (EEG) has been explored as a non-invasive method of providing assistive technology by using brain electrical signals. This paper presents a unique prototype of a hybrid brain-computer interface (BCI) which performs combined classification of mental tasks, steady-state visual evoked potentials (SSVEP) and eyes-closed detection using only two EEG channels. In addition, a microcontroller-based, head-mounted, battery-operated wireless EEG sensor combined with a separate embedded system is used to enhance portability, convenience and cost effectiveness. The experiment was conducted with five healthy participants and five patients with tetraplegia. Overall, the results show comparable classification accuracies between healthy subjects and tetraplegia patients. For the offline artificial neural network classification for the target group of patients with tetraplegia, the hybrid BCI system combines three mental tasks, three SSVEP frequencies and eyes-closed detection, with an average classification accuracy of 74% and an average information transfer rate (ITR) of 27 bits/min. For real-time testing of the intentional signal on patients with tetraplegia, the average detection success rate is 70% and the speed of detection varies from 2 to 4 s.
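
    For reference, information transfer rates such as the 27 bits/min quoted above are conventionally computed with the Wolpaw formula; the abstract does not state the exact estimator used, so the expression below is the standard form rather than the authors' definition. With N selectable targets, classification accuracy P, and selection time T in seconds:

        \mathrm{ITR} = \frac{60}{T}\left[\log_2 N + P\log_2 P + (1-P)\log_2\frac{1-P}{N-1}\right] \ \text{bits/min}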

  17. Multilevel and Hybrid Architecture for Device Abstraction and Context Information Management in Smart Home Environments

    Science.gov (United States)

    Peláez, Víctor; González, Roberto; San Martín, Luis Ángel; Campos, Antonio; Lobato, Vanesa

    Hardware device management and context information acquisition and abstraction are key factors in developing the ambient intelligence paradigm in smart homes. This work presents an architecture that addresses these two problems and provides a usable framework for developing applications easily. In contrast to other proposals, this work specifically addresses performance issues. Results show that the execution performance of the developed prototype is suitable for deployment in a real environment. In addition, the modular design of the system allows the user to develop applications using different techniques and different levels of abstraction.

  18. Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments

    Science.gov (United States)

    Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost, without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds and to present a hybrid PSO algorithm to optimize scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different voltage supply levels by sacrificing clock frequencies. This use of multiple voltages involves a compromise between the quality of schedules and energy consumption. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361

  19. Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    Sonia Yassa

    2013-01-01

    Full Text Available We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost, without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds and to present a hybrid PSO algorithm to optimize scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different voltage supply levels by sacrificing clock frequencies. This use of multiple voltages involves a compromise between the quality of schedules and energy consumption. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.
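
    The DVFS trade-off at the heart of the scheduling objective can be made concrete in a few lines of Python; the voltage/frequency pairs and the switched-capacitance constant below are illustrative assumptions, not values from the paper.

        # Dynamic energy per task is modeled as E = C * V^2 * cycles (since
        # f * t = cycles): lowering the supply voltage saves energy, while the
        # lower clock frequency it requires stretches the task's runtime.
        LEVELS = [(1.2, 2.0), (1.0, 1.6), (0.8, 1.2)]  # (volts, GHz), assumed

        def task_options(gigacycles, capacitance_nf=1.0):
            # Return (runtime_s, energy_J) for each DVFS level of one task.
            options = []
            for volts, ghz in LEVELS:
                runtime = gigacycles / ghz
                energy = capacitance_nf * 1e-9 * volts ** 2 * gigacycles * 1e9
                options.append((runtime, energy))
            return options

        # A multi-objective scheduler such as the hybrid PSO above searches over
        # these per-task choices to trade schedule length against total energy.
        print(task_options(5.0))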

  20. Hybrid Computational Model for High-Altitude Aeroassist Vehicles, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed effort addresses a need for accurate computational models to support aeroassist and entry vehicle system design over a broad range of flight conditions...

  1. Hybrid GPU-CPU adaptive precision ray-triangle intersection tests for robust high-performance GPU dosimetry computations

    International Nuclear Information System (INIS)

    Perrotte, Lancelot; Bodin, Bruno; Chodorge, Laurent

    2011-01-01

    Before an intervention on a nuclear site, it is essential to study different scenarios in order to identify the least dangerous one for the operator. It is therefore essential to have an efficient dosimetry simulation code that produces accurate results. One classical method in radiation protection is the straight-line attenuation method with build-up factors. In the case of 3D industrial scenes composed of meshes, the computational cost lies in the fast computation of all of the intersections between the rays and the triangles of the scene. Efficient GPU algorithms have already been proposed that enable dosimetry calculation for a huge scene (800,000 rays, 800,000 triangles) in a fraction of a second. But these algorithms are not robust: because of the rounding caused by floating-point arithmetic, the numerical results of the ray-triangle intersection tests can differ from the expected mathematical results. In the worst case, this can lead to a computed dose rate dramatically lower than the real dose rate to which the operator is exposed. In this paper, we present a hybrid GPU-CPU algorithm to manage adaptive-precision floating-point arithmetic. This algorithm allows robust ray-triangle intersection tests, with a very small loss of performance (less than 5% overhead) and without any need for scene-dependent tuning. (author)
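
    The idea can be sketched as a filter: run the fast floating-point test first and fall back to exact arithmetic only for numerically ambiguous cases. The fragment below uses the Möller–Trumbore test and an illustrative epsilon; the actual adaptive-precision predicates in the paper are more refined.

        # Classify a ray-triangle test as hit / miss / uncertain in float32;
        # "uncertain" cases would be re-run with exact (e.g. rational) arithmetic
        # on the CPU, which is the adaptive-precision fallback.
        import numpy as np

        def classify(origin, direction, tri, dtype=np.float32, eps=1e-6):
            o, d = np.asarray(origin, dtype), np.asarray(direction, dtype)
            v0, v1, v2 = (np.asarray(v, dtype) for v in tri)
            e1, e2 = v1 - v0, v2 - v0
            p = np.cross(d, e2)
            det = float(e1 @ p)
            if abs(det) < eps:
                return "uncertain"            # near-parallel ray and triangle
            u = float((o - v0) @ p) / det
            q = np.cross(o - v0, e1)
            v = float(d @ q) / det
            margin = min(u, v, 1.0 - u - v)   # distance to the triangle's edges
            if margin < -eps:
                return "miss"                 # clearly outside the triangle
            if margin > eps:
                return "hit"                  # clearly inside (t-range test omitted)
            return "uncertain"                # within eps of an edge: re-test exactly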

  2. Compression of Born ratio for fluorescence molecular tomography/x-ray computed tomography hybrid imaging: methodology and in vivo validation.

    Science.gov (United States)

    Mohajerani, Pouyan; Ntziachristos, Vasilis

    2013-07-01

    The 360° rotation geometry of the hybrid fluorescence molecular tomography/x-ray computed tomography modality allows for the acquisition of very large datasets, which pose numerical limitations on the reconstruction. We propose a compression method that takes advantage of the correlation of the Born-normalized signal among sources in spatially formed clusters to reduce the size of the system model. The proposed method has been validated in an ex vivo study and an in vivo study of a nude mouse with a subcutaneous 4T1 tumor, with and without the inclusion of a priori anatomical information. Compression rates of up to two orders of magnitude with minimal distortion of the reconstruction have been demonstrated, resulting in a large reduction in weight matrix size and reconstruction time.

  3. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    Science.gov (United States)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

    Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting the overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high-resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for system throughput, as new lots have to wait until the previous lot is measured. One solution is to use a less dense overlay sampling scheme and computationally up-sample the data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper discusses a hybrid system, shown in Fig. 1, that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and in better on-product overlay performance.
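
    A toy version of the hybrid fingerprint construction is sketched below: a low-order global model is fitted to the sparse measurements and the measured residuals are then reinstated locally, so that real local errors survive the up-sampling. The quadratic model form and nearest-neighbour residual spreading are assumptions for illustration, not the production algorithm.

        # Combine a computationally up-sampled global fingerprint with measured
        # local residuals on a dense evaluation grid.
        import numpy as np
        from scipy.interpolate import griddata

        def design(xy):
            x, y = xy[:, 0], xy[:, 1]
            return np.column_stack([np.ones(len(xy)), x, y, x * y, x**2, y**2])

        def hybrid_fingerprint(xy_meas, err_meas, xy_dense):
            coef, *_ = np.linalg.lstsq(design(xy_meas), err_meas, rcond=None)
            global_fp = design(xy_dense) @ coef          # smooth wafer-level model
            resid = err_meas - design(xy_meas) @ coef    # what the model missed
            local = griddata(xy_meas, resid, xy_dense, method="nearest")
            return global_fp + local                     # dense, locally faithful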

  4. Using high performance interconnects in a distributed computing and mass storage environment

    International Nuclear Information System (INIS)

    Ernst, M.

    1994-01-01

    Detector Collaborations of the HERA Experiments typically involve more than 500 physicists from a few dozen institutes. These physicists require access to large amounts of data in a fully transparent manner. Important issues include Distributed Mass Storage Management Systems in a Distributed and Heterogeneous Computing Environment. At the very center of a distributed system including tens of CPUs and network-attached mass storage peripherals are the communication links. Today scientists are witnessing an integration of computing and communication technology, with the 'network' becoming the computer. This contribution reports on a centrally operated computing facility for the HERA Experiments at DESY, including Symmetric Multiprocessor Machines (84 processors), presently more than 400 GB of magnetic disk and 40 TB of automated tape storage, tied together by a HIPPI 'network'. Focusing on the High Performance Interconnect technology, details are provided about the HIPPI-based 'backplane' configured around a 20 Gigabit/s Multi Media Router, and the performance and efficiency of the related computer interfaces.

  5. The “Chimera”: An Off-The-Shelf CPU/GPGPU/FPGA Hybrid Computing Platform

    Directory of Open Access Journals (Sweden)

    Ra Inta

    2012-01-01

    Full Text Available The nature of modern astronomy means that a number of interesting problems exhibit a substantial computational bound, and this situation is gradually worsening. Scientists, increasingly fighting for valuable resources on conventional high-performance computing (HPC) facilities (often with a limited customizable user environment), are increasingly looking to hardware acceleration solutions. We describe here a heterogeneous CPU/GPGPU/FPGA desktop computing system (the “Chimera”), built with commercial off-the-shelf components. We show that this platform may be a viable alternative solution to many common computationally bound problems found in astronomy, though not without significant challenges. The most significant bottleneck in pipelines involving real data is most likely to be the interconnect (in this case the PCI Express bus residing on the CPU motherboard). Finally, we speculate on the merits of our Chimera system across the entire landscape of parallel computing, through the analysis of representative problems from UC Berkeley’s “Thirteen Dwarves.”

  6. Procrastinating Behavior in Computer-Based Learning Environments to Predict Performance: A Case Study in Moodle

    Science.gov (United States)

    Cerezo, Rebeca; Esteban, María; Sánchez-Santillán, Miguel; Núñez, José C.

    2017-01-01

    Introduction: Research about student performance has traditionally considered academic procrastination as a behavior that has negative effects on academic achievement. Although there is much evidence for this in class-based environments, there is a lack of research on Computer-Based Learning Environments (CBLEs). Therefore, the purpose of this study is to evaluate student behavior in a blended learning program, and specifically procrastination behavior in relation to performance, through Data Mining techniques. Materials and Methods: A sample of 140 undergraduate students participated in a blended learning experience implemented in a Moodle (Modular Object Oriented Developmental Learning Environment) Management System. Relevant interaction variables were selected for the study, taking into account student achievement, and the data were analyzed by means of association rules, a mining technique. The association rules were derived and filtered using two selection criteria: (1) rules must have an accuracy over 0.8, and (2) they must be present in both sub-samples. Results: The findings of our study highlight the influence of time management in online learning environments, particularly on academic achievement, as there is an association between procrastination variables and student performance. Conclusion: The negative impact of procrastination on learning outcomes has been observed again, but in virtual learning environments, where practical implications, prevention, and intervention differ from class-based learning. These aspects are discussed to help resolve student difficulties at various ages. PMID:28883801
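
    The rule-mining step can be illustrated in a few lines; mlxtend's apriori/association_rules is one standard implementation of the technique, and the interaction flags below are hypothetical stand-ins for the study's variables.

        # Mine association rules from binary interaction indicators and keep
        # those whose confidence ("accuracy") exceeds 0.8, as in criterion (1).
        import pandas as pd
        from mlxtend.frequent_patterns import apriori, association_rules

        df = pd.DataFrame({                      # one row per student (toy data)
            "late_submissions":   [True, True, False, True],
            "low_forum_activity": [True, False, False, True],
            "low_achievement":    [True, True, False, True],
        })

        itemsets = apriori(df, min_support=0.3, use_colnames=True)
        rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)
        print(rules[["antecedents", "consequents", "confidence"]])

    Criterion (2) would then keep only the rules that reappear when the same mining is run on the second sub-sample.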

  7. Procrastinating Behavior in Computer-Based Learning Environments to Predict Performance: A Case Study in Moodle.

    Science.gov (United States)

    Cerezo, Rebeca; Esteban, María; Sánchez-Santillán, Miguel; Núñez, José C

    2017-01-01

    Introduction: Research about student performance has traditionally considered academic procrastination as a behavior that has negative effects on academic achievement. Although there is much evidence for this in class-based environments, there is a lack of research on Computer-Based Learning Environments (CBLEs). Therefore, the purpose of this study is to evaluate student behavior in a blended learning program, and specifically procrastination behavior in relation to performance, through Data Mining techniques. Materials and Methods: A sample of 140 undergraduate students participated in a blended learning experience implemented in a Moodle (Modular Object Oriented Developmental Learning Environment) Management System. Relevant interaction variables were selected for the study, taking into account student achievement, and the data were analyzed by means of association rules, a mining technique. The association rules were derived and filtered using two selection criteria: (1) rules must have an accuracy over 0.8, and (2) they must be present in both sub-samples. Results: The findings of our study highlight the influence of time management in online learning environments, particularly on academic achievement, as there is an association between procrastination variables and student performance. Conclusion: The negative impact of procrastination on learning outcomes has been observed again, but in virtual learning environments, where practical implications, prevention, and intervention differ from class-based learning. These aspects are discussed to help resolve student difficulties at various ages.

  8. Procrastinating Behavior in Computer-Based Learning Environments to Predict Performance: A Case Study in Moodle

    Directory of Open Access Journals (Sweden)

    Rebeca Cerezo

    2017-08-01

    Full Text Available Introduction: Research about student performance has traditionally considered academic procrastination as a behavior that has negative effects on academic achievement. Although there is much evidence for this in class-based environments, there is a lack of research on Computer-Based Learning Environments (CBLEs). Therefore, the purpose of this study is to evaluate student behavior in a blended learning program, and specifically procrastination behavior in relation to performance, through Data Mining techniques. Materials and Methods: A sample of 140 undergraduate students participated in a blended learning experience implemented in a Moodle (Modular Object Oriented Developmental Learning Environment) Management System. Relevant interaction variables were selected for the study, taking into account student achievement, and the data were analyzed by means of association rules, a mining technique. The association rules were derived and filtered using two selection criteria: (1) rules must have an accuracy over 0.8, and (2) they must be present in both sub-samples. Results: The findings of our study highlight the influence of time management in online learning environments, particularly on academic achievement, as there is an association between procrastination variables and student performance. Conclusion: The negative impact of procrastination on learning outcomes has been observed again, but in virtual learning environments, where practical implications, prevention, and intervention differ from class-based learning. These aspects are discussed to help resolve student difficulties at various ages.

  9. A hybrid optical switch architecture to integrate IP into optical networks to provide flexible and intelligent bandwidth on demand for cloud computing

    Science.gov (United States)

    Yang, Wei; Hall, Trevor J.

    2013-12-01

    The Internet is entering an era of cloud computing to provide more cost-effective, eco-friendly and reliable services to consumer and business users. As a consequence, the nature of Internet traffic has been fundamentally transformed from a pure packet-based pattern to today's predominantly flow-based pattern. Cloud computing has also brought about an unprecedented growth in Internet traffic. In this paper, a hybrid optical switch architecture is presented to deal with flow-based Internet traffic, aiming to offer flexible and intelligent bandwidth on demand and to improve fiber capacity utilization. The hybrid optical switch is capable of integrating IP into optical networks for cloud-based traffic with predictable performance; to this end, the delay performance of the electronic module in the hybrid optical switch architecture is evaluated through simulation.

  10. Multi-VO support in IHEP's distributed computing environment

    International Nuclear Information System (INIS)

    Yan, T; Suo, B; Zhao, X H; Zhang, X M; Ma, Z T; Yan, X F; Lin, T; Deng, Z Y; Li, W D; Belov, S; Pelevanyuk, I; Zhemchugov, A; Cai, H

    2015-01-01

    Inspired by the success of BESDIRAC, the distributed computing environment based on DIRAC for the BESIII experiment, several other experiments operated by the Institute of High Energy Physics (IHEP), such as the Circular Electron Positron Collider (CEPC), the Jiangmen Underground Neutrino Observatory (JUNO), the Large High Altitude Air Shower Observatory (LHAASO) and the Hard X-ray Modulation Telescope (HXMT), wish to use DIRAC to integrate the geographically distributed computing resources made available by their collaborations. In order to minimize manpower and hardware costs, we extended the BESDIRAC platform to support a multi-VO scenario, instead of setting up a self-contained distributed computing environment for each VO. This makes DIRAC a service for the community of those experiments. To support multiple VOs, the system architecture of BESDIRAC was adjusted for scalability. The VOMS and DIRAC servers were reconfigured to manage users and groups belonging to several VOs. A lightweight storage resource manager, StoRM, is employed as the central SE to integrate local and grid data. A frontend system is designed for users' massive job splitting, submission and management, with plugins to support new VOs. A monitoring and accounting system is also provided to ease system administration and VO-related resource usage accounting. (paper)

  11. Solving difficult problems creatively: A role for energy optimised deterministic/stochastic hybrid computing

    Directory of Open Access Journals (Sweden)

    Tim Palmer

    2015-10-01

    Full Text Available How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.

  12. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing.

    Science.gov (United States)

    Palmer, Tim N; O'Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for 'eureka' moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.

  13. Modeling and Analysis Compute Environments, Utilizing Virtualization Technology in the Climate and Earth Systems Science domain

    Science.gov (United States)

    Michaelis, A.; Nemani, R. R.; Wang, W.; Votava, P.; Hashimoto, H.

    2010-12-01

    Given the increasing complexity of climate modeling and analysis tools, it is often difficult and expensive to build or recreate an exact replica of the software compute environment used in past experiments. With the recent development of new technologies for hardware virtualization, an opportunity exists to create full modeling, analysis and compute environments that are “archivable”, transferable and may be easily shared amongst a scientific community or presented to a bureaucratic body if the need arises. By encapsulating an entire modeling and analysis environment in a virtual machine image, others may quickly gain access to the fully built system used in past experiments, potentially easing the task and reducing the costs of reproducing and verifying past results produced by other researchers. Moreover, these virtual machine images may be used as a pedagogical tool for others who are interested in performing an academic exercise but don't yet possess the broad expertise required. We built two virtual machine images, one with the Community Earth System Model (CESM) and one with the Weather Research and Forecasting (WRF) model, then ran several small experiments to assess the feasibility, performance overhead costs, reusability, and transferability. We present a list of the pros and cons, as well as lessons learned from utilizing virtualization technology in the climate and earth systems modeling domain.

  14. PABLM: a computer program to calculate accumulated radiation doses from radionuclides in the environment

    International Nuclear Information System (INIS)

    Napier, B.A.; Kennedy, W.E. Jr.; Soldat, J.K.

    1980-03-01

    A computer program, PABLM, was written to facilitate the calculation of internal radiation doses to man from radionuclides in food products and of external radiation doses from radionuclides in the environment. This report contains details of the mathematical models used and the calculational procedures required to run the computer program. Radiation doses from radionuclides in the environment may be calculated from deposition on the soil or plants during an atmospheric or liquid release, or from exposure to residual radionuclides in the environment after the releases have ended. Radioactive decay is considered during the release of radionuclides, after they are deposited on the plants or ground, and during holdup of food after harvest. The radiation dose models consider several exposure pathways. Doses may be calculated either for a maximum-exposed individual or for a population group. The doses calculated are accumulated doses from continuous chronic exposure. A first-year committed dose is calculated, as well as an integrated dose for a selected number of years. The equations for calculating internal radiation doses are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and MPCs of each radionuclide. The radiation doses from external exposure to contaminated water and soil are calculated using the basic assumption that the contaminated medium is large enough to be considered an infinite volume or plane relative to the range of the emitted radiations. The equations for calculating the radiation dose from external exposure to shoreline sediments include a correction for the finite width of the contaminated beach.

  15. On an interesting project with hybrid ventilation in an urban environment with heavy traffic

    International Nuclear Information System (INIS)

    Mysen, Mads

    2001-01-01

    Kampen school, built in 1888, is located centrally in Oslo, Norway. Problematic indoor climate made it necessary to rehabilitate the school. This rehabilitation project is used as a 'case' in a large international research project that deals with energy-efficient rehabilitation of educational buildings. The project aims to (1) demonstrate that schools can be rehabilitated by means of ventilation systems that exploit the natural driving forces without this entailing considerable extra costs, (2) demonstrate that natural sunlight can be utilized as an energy-conserving measure at the same time as human need for daylight is satisfied, (3) demonstrate the importance of optimum selection of armatures and light sources with respect to energy and comfort, (4) demonstrate that these solutions imply reduced energy and maintenance costs such as to be profitable in a life-cycle perspective, (5) demonstrate that these solutions can inspire learning even in urban environments, and (6) demonstrate and exploit the potential for reduced energy consumption by demand-controlled ventilation and electric lighting according to area, natural driving forces and availability of daylight

  16. Titan's plasma environment: 3D hybrid simulation and comparison with observations

    Science.gov (United States)

    Lipatov, A. S.; Sittler, E. C.; Hartle, R. E.

    2007-12-01

    A multiscale combined (fluid-kinetic) numerical method allows us to use more realistic plasma models at Titan. This method takes into account charge-exchange and photoionization processes. We study Titan's plasma environment in cases of high, intermediate and low exosphere density in Chamberlain models. The background ions H+, O+ and pickup ions H2+, CH4+ and N2+ are described in a kinetic approximation, whereas the electrons are approximated as a fluid. We also include an immobile ionosphere at a height of 1300 km. In this report we consider the multiscale spatial structure of the plasma and electromagnetic field, and the velocity distribution of ions that results from the coupling between background ions and pickup ions. Special attention is paid to comparisons of our numerical results with Voyager and Cassini observations (see e.g. [Sittler, Hartle, et al., 2005; Hartle, Sittler et al., 2006]). We estimate the mass loading rate for the Ta encounter and the energy input to the upper atmosphere from ambient and pickup ions, and we examine the T9 encounter with its two crossings.

  17. Computing the binding affinity of a ligand buried deep inside a protein with the hybrid steered molecular dynamics

    International Nuclear Information System (INIS)

    Villarreal, Oscar D.; Yu, Lili; Rodriguez, Roberto A.; Chen, Liao Y.

    2017-01-01

    Computing the ligand-protein binding affinity (or the Gibbs free energy) with chemical accuracy has long been a challenge for which many methods/approaches have been developed and refined, with various successful applications. False positives and, even more harmful, false negatives have been and still are a common occurrence in practical applications. Inevitable in all approaches are the errors in the force field parameters we obtain from quantum mechanical computation and/or empirical fittings for the intra- and inter-molecular interactions. These errors propagate to the final results of the computed binding affinities, even if we were able to perfectly implement the statistical mechanics of all the processes relevant to a given problem. They are actually amplified to various degrees even in mature, sophisticated computational approaches. In particular, the free energy perturbation (alchemical) approaches amplify the errors in the force field parameters because they rely on extracting small differences between similarly large numbers. In this paper, we develop a hybrid steered molecular dynamics (hSMD) approach to the difficult binding problems of a ligand buried deep inside a protein. By sampling the transition along a physical (not alchemical) dissociation path, opening up the binding cavity, pulling out the ligand, and closing the cavity again, we can avoid the problem of error amplification by not relying on small differences between similar numbers. We tested this new form of hSMD on retinol inside cellular retinol-binding protein 1 and three cases of a ligand (a benzylacetate, a 2-nitrothiophene, and a benzene) inside a T4 lysozyme L99A/M102Q(H) double mutant. In all cases, we obtained binding free energies in close agreement with the experimentally measured values. This indicates that the force field parameters we employed are accurate and that hSMD (a brute force, unsophisticated approach) is free from the problem of error amplification suffered by
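
    For context, steered-MD free-energy estimates are commonly anchored in Jarzynski's equality, which relates the equilibrium free-energy difference to the nonequilibrium work W accumulated along repeated pulls; the abstract does not spell out the exact estimator used in hSMD, so the relation below is the standard reference form rather than the authors' specific formula:

        \Delta G = -k_B T \,\ln \left\langle e^{-W/k_B T} \right\rangle_{\text{pulls}}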

  18. Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure

    International Nuclear Information System (INIS)

    Yokohama, Noriya

    2013-01-01

    This report describes the architectural design and performance measurement of a parallel computing environment for Monte Carlo particle therapy planning simulations, using high performance computing (HPC) instances within a public cloud-computing infrastructure. Performance measurements showed speeds approximately 28 times faster than a single-threaded architecture, combined with improved stability. A study of methods for optimizing the system operations also indicated lower cost. (author)
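
    The parallel pattern being benchmarked is embarrassingly parallel: independent particle histories are spread across workers and their tallies combined. The toy sketch below illustrates it with Python multiprocessing and a stand-in scoring kernel; the report's actual transport code and HPC-instance setup are not shown.

        # Spread independent Monte Carlo histories over worker processes.
        import multiprocessing as mp
        import random

        def simulate_batch(args):
            seed, n = args
            rng = random.Random(seed)
            # stand-in "history": accumulate a random score per particle
            return sum(rng.random() for _ in range(n))

        if __name__ == "__main__":
            n_workers, histories = 8, 1_000_000
            per_worker = histories // n_workers
            with mp.Pool(n_workers) as pool:
                tallies = pool.map(simulate_batch,
                                   [(s, per_worker) for s in range(n_workers)])
            print("mean score:", sum(tallies) / histories)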

  19. Determination of a tissue-level failure evaluation standard for rat femoral cortical bone utilizing a hybrid computational-experimental method.

    Science.gov (United States)

    Fan, Ruoxun; Liu, Jie; Jia, Zhengbin; Deng, Ying; Liu, Jun

    2018-01-01

    Macro-level failure in bone structure can be diagnosed from pain or by physical examination. However, diagnosing tissue-level failure in a timely manner is challenging due to the difficulty of observing the interior mechanical environment of bone tissue. Because most fractures begin with tissue-level failure caused by continually applied loading, researchers attempt to monitor the tissue-level failure of bone and provide corresponding measures to prevent fracture. Many tissue-level mechanical parameters of bone can be predicted or measured; however, the value of a parameter may vary among specimens of the same kind of bone structure, even at the same age and anatomical site. These variations make it difficult to represent tissue-level bone failure. Therefore, determining an appropriate tissue-level failure evaluation standard is necessary. In this study, the yield and failure processes of rat femoral cortical bones were first simulated through a hybrid computational-experimental method. Subsequently, the tissue-level strains and the ratio between tissue-level failure and yield strains in cortical bones were predicted. The results indicated that certain differences existed in the tissue-level strains; however, only slight variations in the ratio were observed among different cortical bones. Therefore, the ratio between tissue-level failure and yield strains for a kind of bone structure can be determined. This ratio may then be regarded as an appropriate tissue-level failure evaluation standard representing the mechanical status of bone tissue.

  20. A Hybrid Computing Testbed for Mobile Threat Detection and Enhanced Research and Education in Information

    Science.gov (United States)

    2014-11-20

    after by the cloud computing and cybersecurity industry. By participating in this project, the students gain strong competitiveness in the job market ...team in May 2013 and another Ph.D. student, Michael Grace, has been helping Samsung Mobile on the KNOX team to better ensure OS kernel integrity since January 2014.