WorldWideScience

Sample records for performance virtual machines

  1. Virtual machine performance benchmarking.

    Science.gov (United States)

    Langer, Steve G; French, Todd

    2011-10-01

The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever-tightening reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
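
As a rough illustration of the kind of micro-benchmark described above, the sketch below times a simple integer workload in Python. The function name, workload, and iteration count are illustrative assumptions, not the authors' actual harness; the idea is only to run the identical benchmark on bare metal and inside a VM and compare the two rates.

```python
import time

def integer_benchmark(iterations: int = 1_000_000) -> float:
    """Time a simple integer workload; return operations per second."""
    start = time.perf_counter()
    acc = 0
    for i in range(iterations):
        acc += i * 3 + 1  # representative integer arithmetic
    elapsed = time.perf_counter() - start
    return iterations / elapsed

# Run the same benchmark on bare metal and inside a VM, then compare
# the two rates to estimate the virtualization overhead.
rate = integer_benchmark()
print(f"integer ops/s: {rate:.0f}")
```

The same pattern extends to the other metrics in the study (disk, memory, and network bandwidth) by swapping in the corresponding workload.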

  2. Performance and portability of the SciBy virtual machine

    DEFF Research Database (Denmark)

    Andersen, Rasmus; Vinter, Brian

    2010-01-01

    The Scientific Bytecode Virtual Machine is a virtual machine designed specifically for performance, security, and portability of scientific applications deployed in a Grid environment. The performance overhead normally incurred by virtual machines is mitigated using native optimized scientific li...

  3. A Performance Survey on Stack-based and Register-based Virtual Machines

    OpenAIRE

    Fang, Ruijie; Liu, Siqi

    2016-01-01

    Virtual machines have been widely adapted for high-level programming language implementations and for providing a degree of platform neutrality. As the overall use and adaptation of virtual machines grow, the overall performance of virtual machines has become a widely-discussed topic. In this paper, we present a survey on the performance differences of the two most widely adapted types of virtual machines - the stack-based virtual machine and the register-based virtual machine - using various...
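
A minimal sketch of the two VM designs being compared, with illustrative opcodes rather than any production instruction set: both evaluate (2 + 3) * 4, the stack machine through an implicit operand stack, the register machine through explicitly named registers.

```python
def run_stack_vm(code):
    """Stack-based VM: operands live on an implicit stack."""
    stack = []
    for op, *args in code:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

def run_register_vm(code, nregs=4):
    """Register-based VM: instructions name their operands explicitly."""
    regs = [0] * nregs
    for op, *args in code:
        if op == "LOAD":        # LOAD rd, imm
            regs[args[0]] = args[1]
        elif op == "ADD":       # ADD rd, rs1, rs2
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "MUL":       # MUL rd, rs1, rs2
            regs[args[0]] = regs[args[1]] * regs[args[2]]
    return regs[0]

# (2 + 3) * 4 expressed for each machine
stack_code = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
reg_code = [("LOAD", 1, 2), ("LOAD", 2, 3), ("ADD", 0, 1, 2),
            ("LOAD", 1, 4), ("MUL", 0, 0, 1)]
print(run_stack_vm(stack_code), run_register_vm(reg_code))  # → 20 20
```

Stack instructions are compact but need more dispatches per expression; register instructions are larger but fewer, which is the trade-off such surveys measure.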

  4. Virtual Machine in Automation Projects

    OpenAIRE

    Xing, Xiaoyuan

    2010-01-01

    Virtual machine, as an engineering tool, has recently been introduced into automation projects in Tetra Pak Processing System AB. The goal of this paper is to examine how to better utilize virtual machine for the automation projects. This paper designs different project scenarios using virtual machine. It analyzes installability, performance and stability of virtual machine from the test results. Technical solutions concerning virtual machine are discussed such as the conversion with physical...

  5. Simple, parallel, high-performance virtual machines for extreme computations

    International Nuclear Information System (INIS)

    Chokoufe Nejad, Bijan; Ohl, Thorsten; Reuter, Jurgen

    2014-11-01

We introduce a high-performance virtual machine (VM) written in a numerically fast language like Fortran or C to evaluate very large expressions. We discuss the general concept of how to perform computations in terms of a VM and present specifically a VM that is able to compute tree-level cross sections for any number of external legs, given the corresponding byte code from the optimal matrix element generator, O'Mega. Furthermore, this approach allows one to formulate the parallel computation of a single phase space point in a simple and obvious way. We analyze the scaling behaviour with multiple threads as well as the benefits and drawbacks introduced by this method. Our implementation of a VM can run faster than the corresponding native, compiled code for certain processes and compilers, especially for very high multiplicities, and in general has runtimes of the same order of magnitude. By avoiding the tedious compile and link steps, which may fail for source code files of gigabyte sizes, new processes or complex higher order corrections that are currently out of reach could be evaluated with a VM given enough computing power.
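
The core idea of evaluating a large expression from byte code can be sketched as a three-address interpreter over a flat value table. The instruction format below is a hypothetical illustration, not O'Mega's actual byte code; it also shows why parallelism falls out naturally, since instructions whose inputs are already computed are independent of one another.

```python
def eval_bytecode(instructions, inputs):
    """Evaluate three-address byte code over a flat value table.

    Each instruction is (op, dst, src1, src2). Instructions whose source
    values are already computed are mutually independent and could be
    dispatched to separate threads.
    """
    values = dict(inputs)
    ops = {"ADD": lambda a, b: a + b, "MUL": lambda a, b: a * b}
    for op, dst, s1, s2 in instructions:
        values[dst] = ops[op](values[s1], values[s2])
    return values

# (x + y) * (x * y): t0 and t1 do not depend on each other
code = [("ADD", "t0", "x", "y"),
        ("MUL", "t1", "x", "y"),
        ("MUL", "t2", "t0", "t1")]
print(eval_bytecode(code, {"x": 2, "y": 3})["t2"])  # → 30
```

Because the byte code is data, not source, no compile or link step is needed no matter how large the expression grows.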

  6. Performance Analysis of Ivshmem for High-Performance Computing in Virtual Machines

    Science.gov (United States)

    Ivanovic, Pavle; Richter, Harald

    2018-01-01

High-performance computing (HPC) is rarely accomplished via virtual machines (VMs). In this paper, we present a remake of ivshmem which can change this. Ivshmem provided shared memory (SHM) between virtual machines on the same server, with SHM-access synchronization included, until about five years ago, when newer versions of Linux and its virtualization library libvirt evolved. We restored that SHM-access synchronization feature, because it is indispensable for HPC, and made ivshmem runnable with contemporary versions of Linux, libvirt, KVM, QEMU and especially MPICH, which is an implementation of MPI - the standard HPC communication library. Additionally, we transparently modified MPICH to include ivshmem, resulting in a three- to ten-fold performance improvement compared to TCP/IP. Furthermore, we have transparently replaced MPI_PUT, a one-sided MPICH communication mechanism, with our own MPI_PUT wrapper. As a result, our ivshmem even surpasses non-virtualized SHM data transfers for block lengths greater than 512 KBytes, showing the benefits of virtualization. All improvements were possible without using SR-IOV.

  7. vSphere virtual machine management

    CERN Document Server

    Fitzhugh, Rebecca

    2014-01-01

    This book follows a step-by-step tutorial approach with some real-world scenarios that vSphere businesses will be required to overcome every day. This book also discusses creating and configuring virtual machines and also covers monitoring virtual machine performance and resource allocation options. This book is for VMware administrators who want to build their knowledge of virtual machine administration and configuration. It's assumed that you have some experience with virtualization administration and vSphere.

  8. Quantum Virtual Machine (QVM)

    Energy Technology Data Exchange (ETDEWEB)

    2016-11-18

    There is a lack of state-of-the-art HPC simulation tools for simulating general quantum computing. Furthermore, there are no real software tools that integrate current quantum computers into existing classical HPC workflows. This product, the Quantum Virtual Machine (QVM), solves this problem by providing an extensible framework for pluggable virtual, or physical, quantum processing units (QPUs). It enables the execution of low level quantum assembly codes and returns the results of such executions.

  9. Performance of machine-learning scoring functions in structure-based virtual screening.

    Science.gov (United States)

    Wójcikowski, Maciej; Ballester, Pedro J; Siedlecki, Pawel

    2017-04-25

Classical scoring functions have reached a plateau in their performance in virtual screening and binding affinity prediction. Recently, machine-learning scoring functions trained on protein-ligand complexes have shown great promise in small tailored studies. They have also raised controversy, specifically concerning model overfitting and applicability to novel targets. Here we provide a new ready-to-use scoring function (RF-Score-VS) trained on 15 426 active and 893 897 inactive molecules docked to a set of 102 targets. We use the full DUD-E data sets along with three docking tools, five classical and three machine-learning scoring functions for model building and performance assessment. Our results show RF-Score-VS can substantially improve virtual screening performance: at the top 1% of the ranked library, RF-Score-VS provides a 55.6% hit rate, whereas Vina achieves only 16.2% (at smaller fractions the difference is even larger: at the top 0.1%, RF-Score-VS achieves an 88.6% hit rate versus 27.5% for Vina). In addition, RF-Score-VS provides much better prediction of measured binding affinity than Vina (Pearson correlation of 0.56 versus -0.18). Lastly, we test RF-Score-VS on an independent test set from the DEKOIS benchmark and observe comparable results. We provide full data sets to facilitate further research in this area (http://github.com/oddt/rfscorevs) as well as ready-to-use RF-Score-VS (http://github.com/oddt/rfscorevs_binary).
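
The "top 1% hit rate" quoted above is a standard virtual-screening metric: rank the whole library by predicted score and count the fraction of true actives in the top slice. A minimal sketch (the function name and toy data are illustrative, not from the paper):

```python
def hit_rate_at(scores, labels, fraction=0.01):
    """Fraction of actives among the top-scoring `fraction` of the library.

    scores: higher = predicted more active; labels: 1 = active, 0 = inactive.
    """
    ranked = sorted(zip(scores, labels), key=lambda t: t[0], reverse=True)
    n_top = max(1, int(len(ranked) * fraction))
    top = ranked[:n_top]
    return sum(label for _, label in top) / n_top

# Toy library of 10 compounds, 3 of them active
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
labels = [1,   1,   0,   0,   1,   0,   0,   0,   0,   0]
print(hit_rate_at(scores, labels, fraction=0.2))  # top 2 of 10 → 1.0
```

Comparing this number between two scoring functions at the same fraction is exactly the RF-Score-VS vs. Vina comparison reported in the abstract.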

  10. Some measurements of Java-to-bytecode compiler performance in the Java Virtual Machine

    OpenAIRE

    Daly, Charles; Horgan, Jane; Power, James; Waldron, John

    2001-01-01

    In this paper we present a platform independent analysis of the dynamic profiles of Java programs when executing on the Java Virtual Machine. The Java programs selected are taken from the Java Grande Forum benchmark suite, and five different Java-to-bytecode compilers are analysed. The results presented describe the dynamic instruction usage frequencies.

  11. Formal modeling of virtual machines

    Science.gov (United States)

    Cremers, A. B.; Hibbard, T. N.

    1978-01-01

    Systematic software design can be based on the development of a 'hierarchy of virtual machines', each representing a 'level of abstraction' of the design process. The reported investigation presents the concept of 'data space' as a formal model for virtual machines. The presented model of a data space combines the notions of data type and mathematical machine to express the close interaction between data and control structures which takes place in a virtual machine. One of the main objectives of the investigation is to show that control-independent data type implementation is only of limited usefulness as an isolated tool of program development, and that the representation of data is generally dictated by the control context of a virtual machine. As a second objective, a better understanding is to be developed of virtual machine state structures than was heretofore provided by the view of the state space as a Cartesian product.

  12. Performance of machine learning methods for ligand-based virtual screening.

    Science.gov (United States)

    Plewczynski, Dariusz; Spieser, Stéphane A H; Koch, Uwe

    2009-05-01

    Computational screening of compound databases has become increasingly popular in pharmaceutical research. This review focuses on the evaluation of ligand-based virtual screening using active compounds as templates in the context of drug discovery. Ligand-based screening techniques are based on comparative molecular similarity analysis of compounds with known and unknown activity. We provide an overview of publications that have evaluated different machine learning methods, such as support vector machines, decision trees, ensemble methods such as boosting, bagging and random forests, clustering methods, neuronal networks, naïve Bayesian, data fusion methods and others.

  13. Virtual Machine Language

    Science.gov (United States)

    Grasso, Christopher; Page, Dennis; O'Reilly, Taifun; Fteichert, Ralph; Lock, Patricia; Lin, Imin; Naviaux, Keith; Sisino, John

    2005-01-01

Virtual Machine Language (VML) is a mission-independent, reusable software system for programming spacecraft operations. Features of VML include a rich set of data types, named functions, parameters, IF and WHILE control structures, polymorphism, and on-the-fly creation of spacecraft commands from calculated values. Spacecraft functions can be abstracted into named blocks that reside in files aboard the spacecraft. These named blocks accept parameters and execute in a repeatable fashion. The sizes of uplink products are minimized by the ability to call blocks that implement most of the command steps. This block approach also enables some autonomous operations aboard the spacecraft, such as aerobraking, telemetry conditional monitoring, and anomaly response, without developing autonomous flight software. Operators on the ground write blocks and command sequences in a concise, high-level, human-readable programming language (also called VML). A compiler translates the human-readable blocks and command sequences into binary files (the operations products). The flight portion of VML interprets the uplinked binary files. The ground subsystem of VML also includes an interactive sequence-execution tool hosted on workstations, which runs sequences at several thousand times real-time speed, affords debugging, and generates reports. This tool enables iterative development of blocks and sequences within times of the order of seconds.

  14. Using a vision cognitive algorithm to schedule virtual machines

    OpenAIRE

    Zhao Jiaqi; Mhedheb Yousri; Tao Jie; Jrad Foued; Liu Qinghuai; Streit Achim

    2014-01-01

Scheduling virtual machines is a major research topic for cloud computing, because it directly influences the performance, the operation cost and the quality of services. A large cloud center is normally equipped with several hundred thousand physical machines. The mission of the scheduler is to select the best one to host a virtual machine. This is an NP-hard global optimization problem with grand challenges for researchers. This work studies the Virtual Machine (VM) scheduling problem on the...

  15. Machine learning in virtual screening.

    Science.gov (United States)

    Melville, James L; Burke, Edmund K; Hirst, Jonathan D

    2009-05-01

    In this review, we highlight recent applications of machine learning to virtual screening, focusing on the use of supervised techniques to train statistical learning algorithms to prioritize databases of molecules as active against a particular protein target. Both ligand-based similarity searching and structure-based docking have benefited from machine learning algorithms, including naïve Bayesian classifiers, support vector machines, neural networks, and decision trees, as well as more traditional regression techniques. Effective application of these methodologies requires an appreciation of data preparation, validation, optimization, and search methodologies, and we also survey developments in these areas.

  16. Virtual Machine Logbook - Enabling virtualization for ATLAS

    International Nuclear Information System (INIS)

    Yao Yushu; Calafiura, Paolo; Leggett, Charles; Poffet, Julien; Cavalli, Andrea; Frederic, Bapst

    2010-01-01

ATLAS software has been developed mostly on the CERN Linux cluster lxplus or on similar facilities at the experiment's Tier 1 centers. The fast rise of virtualization technology has the potential to change this model, turning every laptop or desktop into an ATLAS analysis platform. In the context of the CernVM project we are developing a suite of tools and CernVM plug-in extensions to promote the use of virtualization for ATLAS analysis and software development. The Virtual Machine Logbook (VML), in particular, is an application to organize physicists' work on multiple projects, logging their progress, and speeding up "context switches" from one project to another. An important feature of VML is the ability to share the status of a given project with other colleagues with a single "click". VML builds upon the save and restore capabilities of mainstream virtualization software like VMware, and provides a technology-independent client interface to them. A lot of emphasis in the design and implementation has gone into optimizing the save and restore process to make it practical to store many VML entries on a typical laptop disk or to share a VML entry over the network. At the same time, taking advantage of CernVM's plugin capabilities, we are extending the CernVM platform to help increase the usability of ATLAS software. For example, we added the ability to start the ATLAS event display on any computer running CernVM simply by clicking a button in a web browser. We want to integrate VML seamlessly with CernVM's unique file system design to distribute ATLAS software efficiently to every physicist's computer. The CernVM File System (CVMFS) downloads files on demand via HTTP and caches them locally for future use. This reduces download sizes by an order of magnitude, making it practical for a developer to work with multiple software releases on a virtual machine.

  17. Untyped Memory in the Java Virtual Machine

    DEFF Research Database (Denmark)

    Gal, Andreas; Probst, Christian; Franz, Michael

    2005-01-01

We have implemented a virtual execution environment that executes legacy binary code on top of the type-safe Java Virtual Machine by recompiling native code instructions to type-safe bytecode. As it is essentially impossible to infer static typing into untyped machine code, our system emulates...... untyped memory on top of Java’s type system. While this approach allows executing native code on any off-the-shelf JVM, the resulting runtime performance is poor. We propose a set of virtual machine extensions that add type-unsafe memory objects to the JVM. We contend that these JVM extensions do not relax...... Java’s type system as the same functionality can be achieved in pure Java, albeit much less efficiently....

  18. Machine Learning Consensus Scoring Improves Performance Across Targets in Structure-Based Virtual Screening.

    Science.gov (United States)

    Ericksen, Spencer S; Wu, Haozhen; Zhang, Huikun; Michael, Lauren A; Newton, Michael A; Hoffmann, F Michael; Wildman, Scott A

    2017-07-24

In structure-based virtual screening, compound ranking through a consensus of scores from a variety of docking programs or scoring functions, rather than ranking by scores from a single program, provides better predictive performance and reduces target performance variability. Here we compare traditional consensus scoring methods with a novel, unsupervised gradient boosting approach. We also observed increased score variation among active ligands and developed a statistical mixture model consensus score based on combining score means and variances. To evaluate performance, we used the common performance metrics ROC AUC and EF1 on 21 benchmark targets from DUD-E. Traditional consensus methods, such as taking the mean of quantile-normalized docking scores, outperformed individual docking methods and are more robust to target variation. The mixture model and gradient boosting provided further improvements over the traditional consensus methods. These methods are readily applicable to new targets in academic research and overcome the potentially poor performance of using a single docking method on a new target.
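
A sketch of the simplest consensus method named above, the mean of normalized docking scores. As a simplification, rank-based normalization to [0, 1] is used here as a stand-in for full quantile normalization; the toy data and function names are illustrative.

```python
def rank_normalize(scores):
    """Map scores to [0, 1] by rank (assumes at least two scores, no ties)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    for r, i in enumerate(order):
        ranks[i] = r / (len(scores) - 1)
    return ranks

def consensus(score_lists):
    """Mean of rank-normalized scores from several docking programs."""
    normalized = [rank_normalize(s) for s in score_lists]
    return [sum(col) / len(col) for col in zip(*normalized)]

# Three programs score four compounds (assume higher = better for each
# program; real scores would first be oriented consistently).
program_scores = [
    [0.1, 0.9, 0.5, 0.3],
    [10,  80,  60,  20],
    [0.2, 0.7, 0.9, 0.1],
]
print(consensus(program_scores))  # compound 1 ranks highest overall
```

Normalizing per program before averaging is what keeps one program's score scale from dominating the consensus.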

  19. Virtual Vector Machine for Bayesian Online Classification

    OpenAIRE

Minka, Thomas P.; Xiang, Rongjing; Qi, Yuan

    2012-01-01

In a typical online learning scenario, a learner is required to process a large data stream using a small memory buffer. Such a requirement is usually in conflict with a learner's primary pursuit of prediction accuracy. To address this dilemma, we introduce a novel Bayesian online classification algorithm, called the Virtual Vector Machine. The virtual vector machine allows one to smoothly trade off prediction accuracy with memory size. The virtual vector machine summarizes the information con...

  20. Managing virtual machines with Vac and Vcycle

    Science.gov (United States)

    McNab, A.; Love, P.; MacMahon, E.

    2015-12-01

We compare the Vac and Vcycle virtual machine lifecycle managers and our experiences in providing production job execution services for ATLAS, CMS, LHCb, and the GridPP VO at sites in the UK, France and at CERN. In both the Vac and Vcycle systems, the virtual machines are created outside of the experiment's job submission and pilot framework. In the case of Vac, a daemon runs on each physical host and manages a pool of virtual machines on that host, and a peer-to-peer UDP protocol is used to achieve the desired target shares between experiments across the site. In the case of Vcycle, a daemon manages a pool of virtual machines on an Infrastructure-as-a-Service cloud system such as OpenStack, and has within itself enough information to create the types of virtual machines to achieve the desired target shares. Both systems allow unused shares for one experiment to be temporarily taken up by other experiments with work to be done. The virtual machine lifecycle is managed with a minimum of information, gathered from the virtual machine creation mechanism (such as libvirt or OpenStack) and using the proposed Machine/Job Features API from WLCG. We demonstrate that the same virtual machine designs can be used to run production jobs on Vac and Vcycle/OpenStack sites for ATLAS, CMS, LHCb, and GridPP, and that these technologies allow sites to be operated in a reliable and robust way.

  1. Virtual Class Support at the Virtual Machine Level

    DEFF Research Database (Denmark)

    Nielsen, Anders Bach; Ernst, Erik

    2009-01-01

    This paper describes how virtual classes can be supported in a virtual machine.  Main-stream virtual machines such as the Java Virtual Machine and the .NET platform dominate the world today, and many languages are being executed on these virtual machines even though their embodied design choices...... conflict with the design choices of the virtual machine.  For instance, there is a non-trivial mismatch between the main-stream virtual machines mentioned above and dynamically typed languages.  One language concept that creates an even greater mismatch is virtual classes, in particular because fully...... general support for virtual classes requires generation of new classes at run-time by mixin composition.  Languages like CaesarJ and ObjectTeams can express virtual classes restricted to the subset that does not require run-time generation of classes, because of the restrictions imposed by the Java...

  2. An object-oriented extension for debugging the virtual machine

    Energy Technology Data Exchange (ETDEWEB)

    Pizzi, Jr, Robert G. [Univ. of California, Davis, CA (United States)

    1994-12-01

A computer is nothing more than a virtual machine programmed by source code to perform a task. The program's source code expresses abstract constructs which are compiled into some lower-level target language. When a virtual machine breaks, it can be very difficult to debug because typical debuggers provide only low-level target implementation information to the software engineer. We believe that the debugging task can be simplified by introducing aspects of the abstract design and data into the source code. We introduce OODIE, an object-oriented extension to programming languages that allows programmers to specify a virtual environment by describing the meaning of the design and data of a virtual machine. This specification is translated into symbolic information such that an augmented debugger can present engineers with a programmable debugging environment specifically tailored for the virtual machine that is to be debugged.

  3. Enhancing MINIX 3 Input/Output performance using a virtual machine approach

    OpenAIRE

    Pessolani, Pablo Andrés; González, César Daniel

    2010-01-01

MINIX 3 is an open-source operating system designed to be highly reliable, flexible, and secure. The kernel is extremely small and user processes, specialized servers and device drivers run as user-mode insulated processes. These features, the tiny amount of kernel code, and other aspects greatly enhance system reliability. The drawbacks of running device drivers in user-mode are the performance penalties on input/output ports access, kernel data structures access, interrupt indirect manage...

  4. The influence of the negative-positive ratio and screening database size on the performance of machine learning-based virtual screening.

    Science.gov (United States)

    Kurczab, Rafał; Bojarski, Andrzej J

    2017-01-01

    The machine learning-based virtual screening of molecular databases is a commonly used approach to identify hits. However, many aspects associated with training predictive models can influence the final performance and, consequently, the number of hits found. Thus, we performed a systematic study of the simultaneous influence of the proportion of negatives to positives in the testing set, the size of screening databases and the type of molecular representations on the effectiveness of classification. The results obtained for eight protein targets, five machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest), two types of molecular fingerprints (MACCS and CDK FP) and eight screening databases with different numbers of molecules confirmed our previous findings that increases in the ratio of negative to positive training instances greatly influenced most of the investigated parameters of the ML methods in simulated virtual screening experiments. However, the performance of screening was shown to also be highly dependent on the molecular library dimension. Generally, with the increasing size of the screened database, the optimal training ratio also increased, and this ratio can be rationalized using the proposed cost-effectiveness threshold approach. To increase the performance of machine learning-based virtual screening, the training set should be constructed in a way that considers the size of the screening database.
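
The central experimental variable above, the negative:positive training ratio, can be made concrete with a small sketch of how such a training set might be assembled. The function and the toy compound names are illustrative assumptions, not the authors' pipeline.

```python
import random

def sample_training_set(actives, inactives, neg_pos_ratio, seed=0):
    """Build a training set with a chosen negative:positive ratio.

    All actives are kept; inactives are subsampled (capped at what is
    available) so negatives ≈ neg_pos_ratio × positives.
    """
    rng = random.Random(seed)
    n_neg = min(len(inactives), int(len(actives) * neg_pos_ratio))
    negatives = rng.sample(inactives, n_neg)
    data = [(x, 1) for x in actives] + [(x, 0) for x in negatives]
    rng.shuffle(data)
    return data

actives = [f"act{i}" for i in range(10)]
inactives = [f"dec{i}" for i in range(1000)]
train = sample_training_set(actives, inactives, neg_pos_ratio=9)
print(len(train))  # 10 positives + 90 negatives → 100
```

Sweeping `neg_pos_ratio` while holding the screening library fixed is the kind of systematic experiment the abstract describes.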

  5. Optimal Placement Algorithms for Virtual Machines

    OpenAIRE

    Bellur, Umesh; Rao, Chetan S; SD, Madhu Kumar

    2010-01-01

    Cloud computing provides a computing platform for the users to meet their demands in an efficient, cost-effective way. Virtualization technologies are used in the clouds to aid the efficient usage of hardware. Virtual machines (VMs) are utilized to satisfy the user needs and are placed on physical machines (PMs) of the cloud for effective usage of hardware resources and electricity in the cloud. Optimizing the number of PMs used helps in cutting down the power consumption by a substantial amo...

  6. Human Machine Interfaces for Teleoperators and Virtual Environments

    Science.gov (United States)

    Durlach, Nathaniel I. (Compiler); Sheridan, Thomas B. (Compiler); Ellis, Stephen R. (Compiler)

    1991-01-01

    In Mar. 1990, a meeting organized around the general theme of teleoperation research into virtual environment display technology was conducted. This is a collection of conference-related fragments that will give a glimpse of the potential of the following fields and how they interplay: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.

  7. Tensor Network Quantum Virtual Machine (TNQVM)

    Energy Technology Data Exchange (ETDEWEB)

    2016-11-18

    There is a lack of state-of-the-art quantum computing simulation software that scales on heterogeneous systems like Titan. Tensor Network Quantum Virtual Machine (TNQVM) provides a quantum simulator that leverages a distributed network of GPUs to simulate quantum circuits in a manner that leverages recent results from tensor network theory.

  8. A Flattened Hierarchical Scheduler for Real-Time Virtual Machines

    OpenAIRE

    Drescher, Michael Stuart

    2015-01-01

    The recent trend of migrating legacy computer systems to a virtualized, cloud-based environment has expanded to real-time systems. Unfortunately, modern hypervisors have no mechanism in place to guarantee the real-time performance of applications running on virtual machines. Past solutions to this problem rely on either spatial or temporal resource partitioning, both of which under-utilize the processing capacity of the host system. Paravirtualized solutions in which the guest communicates it...

  9. Using a vision cognitive algorithm to schedule virtual machines

    Directory of Open Access Journals (Sweden)

    Zhao Jiaqi

    2014-09-01

Scheduling virtual machines is a major research topic for cloud computing, because it directly influences the performance, the operation cost and the quality of services. A large cloud center is normally equipped with several hundred thousand physical machines. The mission of the scheduler is to select the best one to host a virtual machine. This is an NP-hard global optimization problem with grand challenges for researchers. This work studies the Virtual Machine (VM) scheduling problem on the cloud. Our primary concern with VM scheduling is the energy consumption, because the largest part of a cloud center's operation cost goes to the kilowatts used. We designed a scheduling algorithm that allocates an incoming virtual machine instance on the host machine which results in the lowest energy consumption of the entire system. More specifically, we developed a new algorithm, called vision cognition, to solve the global optimization problem. This algorithm is inspired by the observation of how human eyes see the smallest/largest item directly, without comparing items pairwise. We theoretically proved that the algorithm works correctly and converges fast. Practically, we validated the novel algorithm, together with the scheduling concept, using a simulation approach. The adopted cloud simulator models different cloud infrastructures with various properties and detailed runtime information that usually cannot be acquired from real clouds. The experimental results demonstrate the benefit of our approach in terms of reducing the cloud center's energy consumption.
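
The objective the abstract describes, placing each incoming VM on the host that raises total energy draw the least, can be sketched as a greedy baseline. This is not the paper's vision cognition algorithm, only the underlying placement objective under a simple linear power model; host structure and wattage figures are illustrative assumptions.

```python
def place_vm(hosts, vm_cpu, idle_w=100.0, peak_w=200.0):
    """Greedy baseline: pick the host whose power draw rises least.

    hosts maps a host name to {'used': ..., 'capacity': ...} CPU shares.
    Power is modeled linearly between idle and peak as utilization grows.
    Returns the chosen host name, or None if no host can fit the VM.
    """
    def power(used, capacity):
        return idle_w + (peak_w - idle_w) * (used / capacity)

    best, best_delta = None, float("inf")
    for name, h in hosts.items():
        if h["used"] + vm_cpu > h["capacity"]:
            continue  # host cannot fit the VM
        delta = (power(h["used"] + vm_cpu, h["capacity"])
                 - power(h["used"], h["capacity"]))
        if delta < best_delta:
            best, best_delta = name, delta
    return best

hosts = {"h1": {"used": 2, "capacity": 8},
         "h2": {"used": 2, "capacity": 16}}
print(place_vm(hosts, vm_cpu=2))  # → h2 (larger host, smaller power rise)
```

With hundreds of thousands of hosts, scanning every candidate per placement is what makes the problem hard and motivates smarter search such as the paper's vision cognition approach.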

  10. Virtual NC machine model with integrated knowledge data

    International Nuclear Information System (INIS)

    Sidorenko, Sofija; Dukovski, Vladimir

    2002-01-01

The concept of virtual NC machining was established to provide a virtual product that can be compared with the corresponding designed product, in order to evaluate the correctness of NC programs without real experiments. This concept is applied in the intelligent CAD/CAM system named VIRTUAL MANUFACTURE. This paper presents the first intelligent module, which enables the creation of virtual models of existing NC machines and the virtual creation of new ones by applying modular composition. Creation of a virtual NC machine is carried out via automatic saving of knowledge data (the features of the created NC machine). (Author)

  11. Virtual Machine Language Controls Remote Devices

    Science.gov (United States)

    2014-01-01

    Kennedy Space Center worked with Blue Sun Enterprises, based in Boulder, Colorado, to enhance the company's virtual machine language (VML) to control the instruments on the Regolith and Environment Science and Oxygen and Lunar Volatiles Extraction mission. Now the NASA-improved VML is available for crewed and uncrewed spacecraft, and has potential applications on remote systems such as weather balloons, unmanned aerial vehicles, and submarines.

  12. An incremental anomaly detection model for virtual machines

    Science.gov (United States)

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

The Self-Organizing Map (SOM) algorithm, as an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organizing and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which makes the algorithm exhibit low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large-scale and highly dynamic features of virtual machines on a cloud platform. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on a cloud platform. PMID:29117245
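
The Weighted Euclidean Distance component can be sketched as follows: each SOM neuron holds a weight vector, and the best-matching unit for an input is found under a per-feature weighting. The feature weights and neuron values below are illustrative, not taken from the paper.

```python
import math

def weighted_euclidean(x, w, feature_weights):
    """Weighted Euclidean distance between input x and neuron weights w."""
    return math.sqrt(sum(g * (a - b) ** 2
                         for g, a, b in zip(feature_weights, x, w)))

def best_matching_unit(x, neurons, feature_weights):
    """Index of the SOM neuron closest to x under the weighted distance."""
    return min(range(len(neurons)),
               key=lambda i: weighted_euclidean(x, neurons[i], feature_weights))

neurons = [[0.0, 0.0], [1.0, 1.0], [0.5, 0.2]]
feature_weights = [2.0, 0.5]  # e.g. emphasize a CPU metric over memory
print(best_matching_unit([0.4, 0.3], neurons, feature_weights))  # → 2
```

In detection, an input whose distance to its best-matching unit exceeds a threshold would be flagged as anomalous; the paper's neighborhood-based search restricts which neurons are scanned to speed this lookup up.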

  13. An incremental anomaly detection model for virtual machines.

    Directory of Open Access Journals (Sweden)

    Hancui Zhang

    Full Text Available The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied to anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, Cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which makes the algorithm exhibit low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms.

  14. Virtual Machine Lifecycle Management in Grid and Cloud Computing

    OpenAIRE

    Schwarzkopf, Roland

    2015-01-01

    Virtualization is the foundation for two important technologies: Virtualized Grid and Cloud Computing. Virtualized Grid Computing is an extension of the Grid Computing concept introduced to satisfy the security and isolation requirements of commercial Grid users. Applications are confined in virtual machines to isolate them from each other and the data they process from other users. Apart from these important requirements, Virtual...

  15. A Review of Virtual Machine Attack Based on Xen

    Directory of Open Access Journals (Sweden)

    Ren xun-yi

    2016-01-01

    Full Text Available Virtualization technology, as the foundation of cloud computing, is receiving more and more attention because cloud computing has been widely adopted. This paper analyzes threats to virtual machine security and surveys attacks on Xen-based virtual machines in order to anticipate hidden security risks. It can thus serve as a reference for further research on the security of virtual machines.

  16. CFCC: A Covert Flows Confinement Mechanism for Virtual Machine Coalitions

    Science.gov (United States)

    Cheng, Ge; Jin, Hai; Zou, Deqing; Shi, Lei; Ohoussou, Alex K.

    Normally, virtualization technology is adopted to construct the infrastructure of cloud computing environment. Resources are managed and organized dynamically through virtual machine (VM) coalitions in accordance with the requirements of applications. Enforcing mandatory access control (MAC) on the VM coalitions will greatly improve the security of VM-based cloud computing. However, the existing MAC models lack the mechanism to confine covert flows and find it hard to eliminate covert channels. In this paper, we propose a covert flows confinement mechanism for virtual machine coalitions (CFCC), which introduces dynamic conflicts of interest based on the activity history of VMs, each of which is attached with a label. The proposed mechanism can be used to confine the covert flows between VMs in different coalitions. We implement a prototype system, evaluate its performance, and show that our mechanism is practical.
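    The dynamic conflict-of-interest idea can be sketched as a Chinese-wall-style check: once a VM's activity history includes one coalition of a conflict class, flows to competing coalitions in that class are denied. This is a minimal, hypothetical sketch; the VM labels and coalition names are invented:

```python
# Coalitions inside the same set must never exchange (even covert) flows
# through a shared VM.
CONFLICT_CLASSES = [{"coalitionA", "coalitionB"}]

history = {}  # VM label -> set of coalitions it has interacted with

def may_flow(vm, coalition):
    """Deny a flow if the VM already touched a competing coalition."""
    past = history.get(vm, set())
    for conflict_class in CONFLICT_CLASSES:
        if coalition in conflict_class:
            if any(c in conflict_class and c != coalition for c in past):
                return False
    return True

def record_flow(vm, coalition):
    """Record an allowed flow in the VM's activity history."""
    if may_flow(vm, coalition):
        history.setdefault(vm, set()).add(coalition)
        return True
    return False

print(record_flow("vm1", "coalitionA"))  # → True
print(record_flow("vm1", "coalitionB"))  # → False: conflict of interest
```

    A fresh VM with no history may still join either coalition; only the accumulated activity of a specific VM narrows its options.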

  17. Virtual Machine Images Management in Cloud Environments

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Nowadays, the demand for scalability in distributed systems has led to a design philosophy in which virtual resources need to be configured in a flexible way to provide services to a large number of users. The configuration and management of such an architecture is challenging (e.g.: 100,000 compute cores on the private cloud together with thousands of cores on external cloud resources). There is the need to process CPU-intensive work whilst ensuring that the resources are shared fairly between different users of the system, and to guarantee that all nodes are up to date with new images containing the latest software configurations. Different types of automated systems can be used to facilitate the orchestration. CERN’s current system, composed of different technologies such as OpenStack, Packer, Puppet, Rundeck and Docker, will be introduced and explained, together with the process used to create new Virtual Machine images at CERN.

  18. A modification of Java virtual machine for counting bytecode commands

    OpenAIRE

    Nikolaj, Janko

    2014-01-01

    The objective of the thesis was to implement or modify an existing Java virtual machine (JVM) in a way that allows insight into statistics of the Java instructions executed by a user program. This functionality will allow analysis of algorithms in the Java environment. After studying the theory of Java and the Java virtual machine, we decided to modify an existing Java virtual machine. We chose JamVM, which is a lightweight, open-source Java virtual machine under the GNU license. The i...

  19. A portable virtual machine target for proof-carrying code

    DEFF Research Database (Denmark)

    Franz, Michael; Chandra, Deepak; Gal, Andreas

    2005-01-01

    Virtual Machines (VMs) and Proof-Carrying Code (PCC) are two techniques that have been used independently to provide safety for (mobile) code. Existing virtual machines, such as the Java VM, have several drawbacks: First, the effort required for safety verification is considerable. Second and mor...... simultaneously providing efficient justin-time compilation and target-machine independence. In particular, our approach reduces the complexity of the required proofs, resulting in fewer proof obligations that need to be discharged at the target machine....

  20. Virtual Machine Language 2.1

    Science.gov (United States)

    Riedel, Joseph E.; Grasso, Christopher A.

    2012-01-01

    VML (Virtual Machine Language) is an advanced computing environment that allows spacecraft to operate using mechanisms ranging from simple, time-oriented sequencing to advanced, multicomponent reactive systems. VML has developed in four evolutionary stages. VML 0 is a core execution capability providing multi-threaded command execution, integer data types, and rudimentary branching. VML 1 added named parameterized procedures, extensive polymorphism, data typing, branching, looping, issuance of commands using run-time parameters, and named global variables. VML 2 added for loops, data verification, telemetry reaction, and an open flight adaptation architecture. VML 2.1 contains major advances in control flow capabilities for executable state machines. On the resource requirements front, VML 2.1 features a reduced memory footprint in order to fit more capability into modestly sized flight processors, and endian-neutral data access for compatibility with Intel little-endian processors. Sequence packaging has been improved with object-oriented programming constructs and the use of implicit (rather than explicit) time tags on statements. Sequence event detection has been significantly enhanced with multi-variable waiting, which allows a sequence to detect and react to conditions defined by complex expressions with multiple global variables. This multi-variable waiting serves as the basis for implementing parallel rule checking, which in turn, makes possible executable state machines. The new state machine feature in VML 2.1 allows the creation of sophisticated autonomous reactive systems without the need to develop expensive flight software. Users specify named states and transitions, along with the truth conditions required, before taking transitions. 
Transitions with the same signal name allow separate state machines to coordinate actions: the conditions distributed across all state machines necessary to arm a particular signal are evaluated, and once found true, that
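    The multi-variable waiting and signal coordination described above can be sketched in miniature: a shared signal is armed only when the guard conditions of every state machine that uses it evaluate true, and only then do the transitions fire. This is an illustrative toy, not VML syntax or semantics; all names and conditions are invented:

```python
# Global variables visible to all state machines, as in the VML description.
global_vars = {"pressure": 12, "temp": 80}

class StateMachine:
    def __init__(self, name, state, transitions):
        self.name, self.state = name, state
        self.transitions = transitions  # list of (from_state, signal, condition, to_state)

    def conditions_for(self, signal):
        return [cond for (frm, sig, cond, to) in self.transitions
                if frm == self.state and sig == signal]

    def take(self, signal):
        for frm, sig, cond, to in self.transitions:
            if frm == self.state and sig == signal:
                self.state = to

def signal_armed(machines, signal):
    """Armed only when every machine sharing the signal finds its condition true."""
    conds = [c for m in machines for c in m.conditions_for(signal)]
    return bool(conds) and all(c(global_vars) for c in conds)

sm1 = StateMachine("valves", "closed",
                   [("closed", "GO", lambda g: g["pressure"] > 10, "open")])
sm2 = StateMachine("heater", "off",
                   [("off", "GO", lambda g: g["temp"] > 50, "on")])

machines = [sm1, sm2]
if signal_armed(machines, "GO"):
    for m in machines:
        m.take("GO")
print(sm1.state, sm2.state)  # → open on
```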

  1. A Journey from Interpreters to Compilers and Virtual Machines

    DEFF Research Database (Denmark)

    Danvy, Olivier

    2003-01-01

    We review a simple sequence of steps to stage a programming-language interpreter into a compiler and virtual machine. We illustrate the applicability of this derivation with a number of existing virtual machines, mostly for functional languages. We then outline its relevance for today's language...

  2. Making extreme computations possible with virtual machines

    International Nuclear Information System (INIS)

    Reuter, J.; Chokoufe Nejad, B.

    2016-02-01

    State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can be translated to byte-code instructions, which even reduce the size by one order of magnitude. The byte-code is interpreted by a Virtual Machine with runtimes comparable to compiled code and a better scaling with additional legs. We study the properties of this algorithm, as an extension of the Optimizing Matrix Element Generator (O'Mega). The bytecode matrix elements are available as alternative input for the event generator WHIZARD. The bytecode interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.
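    The byte-code approach can be illustrated with a toy stack-based interpreter: an expression is flattened into instructions that a small dispatch loop evaluates, trading compiled code for a compact instruction stream. The opcodes below are invented for illustration and are not O'Mega's actual instruction set:

```python
# Minimal stack-based byte-code interpreter: each instruction is an
# (opcode, argument) pair evaluated against an operand stack.
def run(program, memory):
    stack = []
    for op, arg in program:
        if op == "LOAD":       # push a named value from memory
            stack.append(memory[arg])
        elif op == "CONST":    # push an immediate constant
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# Evaluate (x * y) + 2 for x = 3, y = 4.
program = [("LOAD", "x"), ("LOAD", "y"), ("MUL", None),
           ("CONST", 2), ("ADD", None)]
print(run(program, {"x": 3, "y": 4}))  # → 14
```

    The same program data can be shipped to any host with the interpreter installed, which is the portability argument made above for matrix-element byte-code.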

  3. On the Impossibility of Detecting Virtual Machine Monitors

    Science.gov (United States)

    Gueron, Shay; Seifert, Jean-Pierre

    Virtualization based upon Virtual Machines is a central building block of Trusted Computing, and it is believed to offer isolation and confinement of privileged instructions among other security benefits. However, it is not necessarily bullet-proof: some recent publications have shown that Virtual Machine technology could potentially allow the installation of undetectable malware rootkits. As a result, it was suggested that such virtualization attacks could be mitigated by checking whether a threatened system runs in a virtualized or in a native environment. This naturally raises the following problem: Can a program determine whether it is running in a virtualized environment, or in a native machine environment? We prove here that, under a classical VM model, this problem is not decidable. Further, although our result seems to be quite theoretic, we also show that it has practical implications on related virtualization problems.

  4. VIRTUAL MACHINES IN EDUCATION – CNC MILLING MACHINE WITH SINUMERIK 840D CONTROL SYSTEM

    Directory of Open Access Journals (Sweden)

    Ireneusz Zagórski

    2014-11-01

    Full Text Available Machining processes nowadays could not be conducted without their inseparable elements: cutting tools and, frequently, numerically controlled milling machines. Milling and lathe machining centres comprise standard equipment in many companies of the machinery industry, e.g. automotive or aircraft. It is for that reason that tertiary education should account for this rising demand. This entails introducing into the curricula forms which enable visualisation of machining, the milling process and virtual production, as well as simulation of virtual machining centres. The Siemens Virtual Machine (Virtual Workshop) sets an example of such software, whose high functionality offers a range of learning experiences, such as: learning the design of machine tools, their configuration, basic operation functions, as well as the basics of CNC.

  5. Virtualization Security Combining Mandatory Access Control and Virtual Machine Introspection

    OpenAIRE

    Win, Thu Yein; Tianfield, Huaglory; Mair, Quentin

    2014-01-01

    Virtualization has become a target for attacks in cloud computing environments. Existing approaches to protecting the virtualization environment against the attacks are limited in protection scope and are with high overheads. This paper proposes a novel virtualization security solution which aims to provide comprehensive protection of the virtualization environment.

  6. Preserving access to ALEPH computing environment via virtual machines

    International Nuclear Information System (INIS)

    Coscetti, Simone; Boccali, Tommaso; Arezzini, Silvia; Maggi, Marcello

    2014-01-01

    The ALEPH Collaboration [1] took data at the LEP (CERN) electron-positron collider in the period 1989-2000, producing more than 300 scientific papers. While most of the Collaboration's activities stopped in recent years, the data collected still have physics potential, with new theoretical models emerging that call for checks against data at the Z and WW production energies. An attempt to revive and preserve the ALEPH computing environment is presented; the aim is not only the preservation of the data files (usually called bit preservation), but of the full environment a physicist would need to perform brand-new analyses. Technically, a Virtual Machine approach has been chosen, using the VirtualBox platform. Concerning simulated events, the full chain from event generators to physics plots is possible, and reprocessing of data events is also functioning. Interactive tools like the DALI event display can be used on both data and simulated events. The Virtual Machine approach is suited both for interactive usage and for massive computing using Cloud-like approaches.

  7. A Real-Time Java Virtual Machine for Avionics (Preprint)

    National Research Council Canada - National Science Library

    Armbruster, Austin; Pla, Edward; Baker, Jason; Cunei, Antonio; Flack, Chapman; Pizlo, Filip; Vitek, Jan; Procházka, Marek; Holmes, David

    2006-01-01

    ...) in the DARPA Program Composition for Embedded System (PCES) program. Within the scope of PCES, Purdue University and the Boeing Company collaborated on the development of Ovm, an open source implementation of the RTSJ virtual machine...

  8. Virtual C Machine and Integrated Development Environment for ATMS Controllers.

    Science.gov (United States)

    2000-04-01

    The overall objective of this project is to develop a prototype virtual machine that fits on current Advanced Traffic Management Systems (ATMS) controllers and provides functionality for complex traffic operations. Prepared in cooperation with Utah S...

  9. LHCb experience with running jobs in virtual machines

    Science.gov (United States)

    McNab, A.; Stagni, F.; Luzzi, C.

    2015-12-01

    The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch queue based sites. We describe our operational experiences in running production on VM based sites managed using Vcycle/OpenStack, Vac, and HTCondor Vacuum. Finally we show how our use of these resources is monitored using Ganglia and DIRAC.

  10. LHCb experience with running jobs in virtual machines

    CERN Document Server

    McNab, A; Luzzi, C

    2015-01-01

    The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch queue based sites. We describe our operational experiences in running production on VM based sites mana...

  11. Mobile virtual synchronous machine for vehicle-to-grid applications

    Energy Technology Data Exchange (ETDEWEB)

    Pelczar, Christopher

    2012-03-20

    The Mobile Virtual Synchronous Machine (VISMA) is a power electronics device for Vehicle to Grid (V2G) applications which behaves like an electromechanical synchronous machine and offers the same beneficial properties to the power network, increasing the inertia in the system, stabilizing the grid voltage, and providing a short-circuit current in case of grid faults. The VISMA performs a real-time simulation of a synchronous machine and calculates the phase currents that an electromagnetic synchronous machine would produce under the same local grid conditions. An inverter with a current controller feeds the currents calculated by the VISMA into the grid. In this dissertation, the requirements for a machine model suitable for the Mobile VISMA are set, and a mathematical model suitable for use in the VISMA algorithm is found and tested in a custom-designed simulation environment prior to implementation on the Mobile VISMA hardware. A new hardware architecture for the Mobile VISMA based on microcontroller and FPGA technologies is presented, and experimental hardware is designed, implemented, and tested. The new architecture is designed in such a way that allows reducing the size and cost of the VISMA, making it suitable for installation in an electric vehicle. A simulation model of the inverter hardware and hysteresis current controller is created, and the simulations are verified with various experiments. The verified model is then used to design a new type of PWM-based current controller for the Mobile VISMA. The performance of the hysteresis- and PWM-based current controllers is evaluated and compared for different operational modes of the VISMA and configurations of the inverter hardware. Finally, the behavior of the VISMA during power network faults is examined. A desired behavior of the VISMA during network faults is defined, and experiments are performed which verify that the VISMA, inverter hardware, and current controllers are capable of supporting this

  12. Analysis towards VMEM File of a Suspended Virtual Machine

    Science.gov (United States)

    Song, Zheng; Jin, Bo; Sun, Yongqing

    With the popularity of virtual machines, forensic investigators are challenged with more complicated situations, among which discovering the evidences in virtualized environment is of significant importance. This paper mainly analyzes the file suffixed with .vmem in VMware Workstation, which stores all pseudo-physical memory into an image. The internal file structure of .vmem file is studied and disclosed. Key information about processes and threads of a suspended virtual machine is revealed. Further investigation into the Windows XP SP3 heap contents is conducted and a proof-of-concept tool is provided. Different methods to obtain forensic memory images are introduced, with both advantages and limits analyzed. We conclude with an outlook.
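    The kind of low-level carving such a tool performs can be illustrated with the standard struct module: fixed-layout, little-endian fields are unpacked from raw memory-image bytes. The layout below is invented for illustration and is not the actual .vmem structure disclosed in the paper:

```python
import struct

# Fabricated 12-byte "image": a 2-byte signature, 2 bytes of padding, then a
# little-endian 4-byte value and two 2-byte identifiers.
image = bytes.fromhex("4d5a9000") + struct.pack("<IHH", 0x12345678, 4, 2)

magic = image[:2]                       # e.g. an 'MZ'-style header signature
value, pid, tid = struct.unpack_from("<IHH", image, 4)
print(magic, hex(value), pid, tid)      # → b'MZ' 0x12345678 4 2
```

    Real memory forensics repeats exactly this step at scale: locate a known structure signature, then interpret the bytes that follow according to a reverse-engineered layout.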

  13. Two Approaches for the Management of Virtual Machines on Grid Infrastructures

    International Nuclear Information System (INIS)

    Tapiador, D.; Rubio-Montero, A. J.; Huedo, E.; Montero, R. S.; Llorente, I. M.

    2007-01-01

    Virtual machines are a promising technology to overcome some of the problems found in current Grid infrastructures, like heterogeneity, performance partitioning or application isolation. This work shows a comparison between two strategies to manage virtual machines in Globus Grids. The first alternative is a straightforward deployment that does not require additional middleware to be installed. It is only based on standard Grid services and is not bound to a given virtualization technology. Although this option is fully functional, it is only suitable for single-process batch jobs. The second solution makes use of the Virtual Workspace Service, which allows a remote client to securely negotiate and manage a virtual resource. This approach better exploits the potential benefits offered by the virtualization technology and provides a wider application range. (Author)

  14. Comparative analysis of machine learning methods in ligand-based virtual screening of large compound libraries.

    Science.gov (United States)

    Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z

    2009-05-01

    Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds of specific pharmacodynamic, pharmacokinetic or toxicological properties based on their structure-derived structural and physicochemical properties. Increasing attention has been directed at these methods because of their capability in predicting compounds of diverse structures and complex structure-activity relationships without requiring the knowledge of target 3D structure. This article reviews current progresses in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performances of machine learning tools with those of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility to improve the performance of machine learning methods in screening large libraries is discussed.
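    A minimal instance of ligand-based screening with a machine learning flavour is a nearest-neighbour classifier over binary structural fingerprints using Tanimoto similarity, a common similarity measure in this field. The fingerprints and activity labels below are invented for illustration and do not come from the review:

```python
def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprints."""
    on_both = sum(1 for x, y in zip(a, b) if x and y)
    on_any = sum(1 for x, y in zip(a, b) if x or y)
    return on_both / on_any if on_any else 0.0

def predict(query, library):
    """1-nearest-neighbour: label of the most similar training compound.

    library: list of (fingerprint, is_active) training examples.
    """
    best = max(library, key=lambda item: tanimoto(query, item[0]))
    return best[1]

library = [([1, 1, 0, 0], True),   # known active
           ([0, 0, 1, 1], False)]  # known inactive
print(predict([1, 1, 1, 0], library))  # → True
```

    Real screening pipelines use far longer fingerprints (hundreds to thousands of bits) and richer models such as support vector machines, but the structure-derived-descriptor idea is the same.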

  15. Virtual machine migration in an over-committed cloud

    KAUST Repository

    Zhang, Xiangliang

    2012-04-01

    While early emphasis of Infrastructure as a Service (IaaS) clouds was on providing resource elasticity to end users, providers are increasingly interested in over-committing their resources to maximize the utilization and returns of their capital investments. In principle, over-committing resources hedges that users - on average - only need a small portion of their leased resources. When such hedge fails (i.e., resource demand far exceeds available physical capacity), providers must mitigate this provider-induced overload, typically by migrating virtual machines (VMs) to underutilized physical machines. Recent works on VM placement and migration assume the availability of target physical machines [1], [2]. However, in an over-committed cloud data center, this is not the case. VM migration can even trigger cascading overloads if performed haphazardly. In this paper, we design a new VM migration algorithm (called Scattered) that minimizes VM migrations in over-committed data centers. Compared to a traditional implementation, our algorithm can balance host utilization across all time epochs. Using real-world data traces from an enterprise cloud, we show that our migration algorithm reduces the risk of overload, minimizes the number of needed migrations, and has minimal impact on communication cost between VMs. © 2012 IEEE.
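    The overload-mitigation step can be sketched as a greedy heuristic: when a host exceeds its capacity, move its smallest VMs to the least-utilized host that can still hold them. This illustrates the problem setting only and is not the Scattered algorithm itself; hosts, loads and the capacity value are invented:

```python
def migrate(hosts, capacity):
    """Greedy overload mitigation.

    hosts: {host: {vm: load}}. Mutates hosts in place and returns the
    list of (vm, source_host, destination_host) migrations performed.
    """
    moves = []
    for src, vms in hosts.items():
        while sum(vms.values()) > capacity:
            vm = min(vms, key=vms.get)            # cheapest VM to move
            load = vms.pop(vm)
            dst = min((h for h in hosts if h != src),
                      key=lambda h: sum(hosts[h].values()))
            if sum(hosts[dst].values()) + load > capacity:
                vms[vm] = load                     # no feasible target; give up
                break
            hosts[dst][vm] = load
            moves.append((vm, src, dst))
    return moves

hosts = {"h1": {"vm1": 60, "vm2": 30, "vm3": 30}, "h2": {"vm4": 20}}
print(migrate(hosts, 100))  # → [('vm2', 'h1', 'h2')]
```

    Note the failure mode the paper highlights: in an over-committed data center the `dst` search can find no host with spare capacity, and a naive policy that migrates anyway would simply shift, or cascade, the overload.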

  16. Case study of virtual reality in CNC machine tool exhibition

    Directory of Open Access Journals (Sweden)

    Kao Yung-Chou

    2017-01-01

    Full Text Available Exhibition and demonstration are generally used in the promotion and sales assistance of manufactured products. However, transporting the real goods from the vendor's factory to the exposition venue is generally expensive for huge and heavy commodities. With the advancement of computing, graphics, mobile apps and mobile hardware, 3D visualisation technology is becoming more and more popular for visually assisted communication, such as amusement games. Virtual reality (VR) technology has therefore received great attention for emulating expensive small and/or huge and heavy equipment. Virtual reality can be characterised as a 3D extension offering immersion, interaction and imagination. This paper focuses on the study of virtual reality in the assistance of CNC machine tool demonstration and exhibition. A commercial CNC machine tool was used in this study to illustrate the effectiveness and usability of using virtual reality for an exhibition. The adopted CNC machine tool is a large and heavy mill-turn machine, up to eleven metres wide and weighing about 35 tons. A head-mounted display (HMD) was attached to the developed VR CNC machine tool for immersive viewing. A user can look around the 3D scene of the large mill-turn machine, and the operation of the virtual CNC machine can be actuated by bare hand. Coolant was added to demonstrate more realistic operation, while a collision detection function was also added to alert the operator. The developed VR demonstration system was presented at the 2017 Taipei International Machine Tool Show (TIMTOS 2017). This case study has shown that young engineers and/or students are very impressed by the VR-based demonstration, while older persons could not adapt easily to the VR-based scene because of eyesight issues. Nevertheless, virtual reality has successfully been adopted and integrated with the CNC machine tool in an international show. Another machine tool on

  17. VIRTUAL MODELING OF A NUMERICAL CONTROL MACHINE TOOL USED FOR COMPLEX MACHINING OPERATIONS

    Directory of Open Access Journals (Sweden)

    POPESCU Adrian

    2015-11-01

    Full Text Available This paper presents the 3D virtual model of the numerically controlled machine tool Modustar 100, in terms of machine elements. This is a CNC machine of modular construction, with all components allowing assembly in various configurations. The paper focuses on the design of the subassemblies specific to the numerically controlled axes by means of CATIA v5, covering the drive kinematic chains of the translation modules that ensure translation on the X, Y and Z axes. Machine tool development for high-speed and highly precise cutting demands the employment of advanced simulation techniques, which is reflected in the total development cost of the machine.

  18. Data preparation for municipal virtual assistant using machine learning

    OpenAIRE

    Jovan, Leon Noe

    2016-01-01

    The main goal of this master’s thesis was to develop a procedure that will automate the construction of the knowledge base for a virtual assistant that answers questions about municipalities in Slovenia. The aim of the procedure is to replace or facilitate manual preparation of the virtual assistant's knowledge base. Theoretical backgrounds of different machine learning fields, such as multilabel classification, text mining and learning from weakly labeled data were examined to gain a better ...

  19. System Center 2012 R2 Virtual Machine Manager cookbook

    CERN Document Server

    Cardoso, Edvaldo Alessandro

    2014-01-01

    This book is a step-by-step guide packed with recipes that cover architecture design and planning. The book is also full of deployment tips, techniques, and solutions. If you are a solutions architect, technical consultant, administrator, or any other virtualization enthusiast who needs to use Microsoft System Center Virtual Machine Manager in a real-world environment, then this is the book for you. We assume that you have previous experience with Windows 2012 R2 and Hyper-V.

  20. Virtual machining considering dimensional, geometrical and tool deflection errors in three-axis CNC milling machines

    OpenAIRE

    Soori, Mohsen; Arezoo, Behrooz; Habibi, Mohsen

    2014-01-01

    Virtual manufacturing systems can provide useful means for products to be manufactured without the need of physical testing on the shop floor. As a result, the time and cost of part production can be decreased. There are different error sources in machine tools such as tool deflection, geometrical deviations of moving axis and thermal distortions of machine tool structures. Some of these errors can be decreased by controlling the machining process and environmental parameters. However other e...

  1. Virtual machining considering dimensional, geometrical and tool deflection errors in three-axis CNC milling machines

    OpenAIRE

    Soori, Mohsen; Arezoo, Behrooz; Habibi, Mohsen

    2016-01-01

    Virtual manufacturing systems can provide useful means for products to be manufactured without the need of physical testing on the shop floor. As a result, the time and cost of part production can be decreased. There are different error sources in machine tools such as tool deflection, geometrical deviations of moving axis and thermal distortions of machine tool structures. Some of these errors can be decreased by controlling the machining process and environmental parameters. However other e...

  2. Development of Web-based Virtual Training Environment for Machining

    Science.gov (United States)

    Yang, Zhixin; Wong, S. F.

    2010-05-01

    With the boom in the manufacturing sectors of shoes, garments, toys, etc. in the Pearl River Delta region, training the usage of various facilities and designing the facility layout have become crucial for the success of industrial companies. There is evidence that the use of virtual training may provide benefits in improving the effect of learning and reducing risk in the physical work environment. This paper proposes an advanced web-based training environment that demonstrates the usage of a CNC machine in terms of working conditions and parameter selection. The developed virtual environment provides training at junior and advanced levels. Junior-level training explains machining knowledge, including safety factors and machine parameters (e.g. material, speed, feed rate). Advanced-level training enables interactive programming of NC code and effect simulation. An operation sequence is used to assist the user in choosing the appropriate machining conditions. Several case studies were also carried out with animation of milling and turning operations.

  3. Reducing Deadline Miss Rate for Grid Workloads running in Virtual Machines: a deadline-aware and adaptive approach

    CERN Document Server

    Khalid, Omer; Anthony, Richard; Petridis, Miltos

    2011-01-01

    This thesis explores three major areas of research: integration of virtualization into scientific grid infrastructures, evaluation of the virtualization overhead on HPC grid jobs' performance, and optimization of job execution times to increase their throughput by reducing the job deadline miss rate. Integration of virtualization into the grid to deploy on-demand virtual machines for jobs, in a way that is transparent to the end users and has minimum impact on the existing system, poses a significant challenge. This involves the creation of virtual machines, decompression of the operating system image, adapting the virtual environment to satisfy the software requirements of the job, constant update of the job state once it is running without modifying the batch system or existing grid middleware, and finally bringing the host machine back to a consistent state. To facilitate this research, an existing and in-production pilot job framework has been modified to deploy virtual machines on demand on the grid using...
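    The deadline-aware decision the thesis optimizes can be caricatured as a simple admission test: start a job in a virtual machine only if the estimated boot and virtualization overheads still let it finish before its deadline. The overhead figures below are purely illustrative, not measurements from the thesis:

```python
def can_run_in_vm(cpu_seconds, deadline_seconds,
                  boot_overhead=90, slowdown=1.05):
    """Estimate wall-clock time inside a VM and compare against the deadline.

    boot_overhead: assumed seconds to create/boot the VM and decompress the image.
    slowdown: assumed virtualization overhead factor on CPU-bound work.
    """
    estimated = boot_overhead + cpu_seconds * slowdown
    return estimated <= deadline_seconds

print(can_run_in_vm(3600, 4000))  # → True : 90 + 3780 = 3870 <= 4000
print(can_run_in_vm(3600, 3700))  # → False: 3870 > 3700
```

    A deadline-aware scheduler would fall back to a conventional (non-virtualized) slot, or a cached image, whenever this test fails, which is how the miss rate is reduced.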

  4. Using virtual machine monitors to overcome the challenges of monitoring and managing virtualized cloud infrastructures

    Science.gov (United States)

    Bamiah, Mervat Adib; Brohi, Sarfraz Nawaz; Chuprat, Suriayati

    2012-01-01

    Virtualization is one of the hottest research topics nowadays. Several academic researchers and developers from the IT industry are designing approaches to solve the security and manageability issues of Virtual Machines (VMs) residing on virtualized cloud infrastructures. Moving an application from a physical to a virtual platform increases efficiency and flexibility and reduces management cost as well as effort. Cloud computing adopts the paradigm of virtualization; using this technique, memory, CPU, and computational power are provided to clients' VMs by utilizing the underlying physical hardware. Besides these advantages, there are a few challenges in adopting virtualization, such as management of VMs and network traffic, unexpected additional costs, and resource allocation. The Virtual Machine Monitor (VMM), or hypervisor, is the tool used by cloud providers to manage the VMs on the cloud. There are several heterogeneous hypervisors provided by various vendors, including VMware, Hyper-V, Xen, and Kernel Virtual Machine (KVM). Considering the challenge of VM management, this paper describes several techniques to monitor and manage virtualized cloud infrastructures.

  5. Virtual Things for Machine Learning Applications

    OpenAIRE

    Bovet , Gérôme; Ridi , Antonio; Hennebert , Jean

    2014-01-01

    International audience; Internet-of-Things (IoT) devices, especially sensors, are producing large quantities of data that can be used for gathering knowledge. In this field, machine learning technologies are increasingly used to build versatile data-driven models. In this paper, we present a novel architecture able to execute machine learning algorithms within the sensor network, presenting advantages in terms of privacy and data transfer efficiency. We first argue that some classes of ...

  6. RBMK full scope simulator gets virtual refuelling machine

    International Nuclear Information System (INIS)

    Khoudiakov, M.; Slonimsky, V.; Mitrofanov, S.

    2006-01-01

    The paper describes the continuation of efforts by an international Russian-Norwegian joint team to drastically increase operational safety during the refuelling process of an RBMK-type reactor by implementing a training simulator based on an innovative Virtual Reality (VR) approach. During the preceding stage of the project, a display-based simulator was extended with VR models of the real Refuelling Machine (RM) and its environment in order to improve both the learning process and operational effectiveness. The simulator's challenge is, first, to support the performance (operational activity) of RM operational staff, and also to play a major part in developing basic knowledge and skills, as well as to keep skilled staff in close touch with the complex machinery of the Refuelling Machine. At the given second stage, the functional scope of the VR simulator was greatly enhanced: firstly, by connecting it to the RBMK-unit full-scope simulator, and secondly, by upgrading the training program and simulator model. (author)

  7. Simulation of machine-maintenance training in virtual environment

    International Nuclear Information System (INIS)

    Yoshikawa, Hidekazu; Tezuka, Tetsuo; Kashiwa, Ken-ichiro; Ishii, Hirotake

    1997-01-01

    The periodical inspection of nuclear power plants requires a large workforce with a high degree of technical skill for the maintenance of various sorts of machines. Therefore, a new type of maintenance training system is required, in which trainees can get training safely, easily, and effectively. In this study we developed a training simulation system for disassembling a check valve in a virtual environment (VE). The features of this system are as follows. Firstly, trainees can execute tasks even in the wrong order and can experience the resultant conditions. In order to realize this environment, we developed a new Petri-net model for representing the objects' states in the VE. This Petri-net model has several original characteristics, which make it easier to manage changes in the objects' states. Furthermore, we built a support system for constructing the Petri-net model of machine-disassembling training, because the Petri-net model tends to become large. The effectiveness of this support system is shown through the system development. Secondly, the system can present the appropriate tasks to be done next in the VE whenever the trainee wants, even after mistakes have been made. The effectiveness of this function has also been confirmed by experiments. (author)
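    The Petri-net gating described above can be sketched in a few lines: a disassembly step is allowed only when tokens are present in all of its input places, so invalid orderings are rejected while the state stays consistent. The places and transitions below (a check-valve cover held by bolts) are illustrative, not taken from the paper.

```python
# A minimal Petri-net sketch for gating maintenance-training steps.
# Place and transition names are illustrative, not from the paper.

class PetriNet:
    def __init__(self, places, transitions):
        # transitions: name -> (set of input places, set of output places)
        self.marking = dict(places)          # place -> token count
        self.transitions = dict(transitions)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            return False                     # invalid step: state unchanged
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True

# Disassembling a check valve: the cover bolts must come off before the cover.
net = PetriNet(
    places={"bolts_on": 1, "cover_on": 1, "bolts_off": 0, "cover_off": 0},
    transitions={
        "remove_bolts": ({"bolts_on"}, {"bolts_off"}),
        "remove_cover": ({"bolts_off", "cover_on"}, {"cover_off"}),
    },
)
print(net.fire("remove_cover"))  # False: bolts still on
print(net.fire("remove_bolts"))  # True
print(net.fire("remove_cover"))  # True
```

Because an out-of-order `fire` simply returns `False`, a trainee's mistake can be experienced and explained without corrupting the VE state.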

  8. An Embeddable Virtual Machine for State Space Generation

    NARCIS (Netherlands)

    Weber, M.; Bosnacki, D.; Edelkamp, S.

    2007-01-01

    The semantics of modelling languages are not always specified in a precise and formal way, and their rather complex underlying models make it a non-trivial exercise to reuse them in newly developed tools. We report on experiments with a virtual machine-based approach for state space generation. The

  9. Adapting Virtual Machine Techniques for Seamless Aspect Support

    NARCIS (Netherlands)

    Bockisch, Christoph; Arnold, Matthew; Dinkelaker, Tom; Mezini, Mira

    2006-01-01

    Current approaches to compiling aspect-oriented programs are inefficient. This inefficiency has negative effects on the productivity of the development process and is especially prohibitive for dynamic aspect deployment. In this work, we present how well-known virtual machine techniques can be used

  10. New approach for virtual machines consolidation in heterogeneous computing systems

    Czech Academy of Sciences Publication Activity Database

    Fesl, Jan; Cehák, J.; Doležalová, Marie; Janeček, J.

    2016-01-01

    Roč. 9, č. 12 (2016), s. 321-332 ISSN 1738-9968 Institutional support: RVO:60077344 Keywords : consolidation * virtual machine * distributed Subject RIV: JD - Computer Applications, Robotics http://www.sersc.org/journals/IJHIT/vol9_no12_2016/29.pdf

  11. Detecting System of Nested Hardware Virtual Machine Monitor

    Directory of Open Access Journals (Sweden)

    Artem Vladimirovich Iuzbashev

    2015-03-01

    Full Text Available A method for detecting a nested hardware virtual machine monitor was proposed in this work. The method is based on an HVM timing attack. In the case of HVM presence in the system, the number of distinct execution-time values for different instruction sequences increases. We used this property as an indicator in our detection.

  12. An Effective Mechanism for Virtual Machine Placement using Aco in IAAS Cloud

    Science.gov (United States)

    Shenbaga Moorthy, Rajalakshmi; Fareentaj, U.; Divya, T. K.

    2017-08-01

    Cloud computing provides an effective way to dynamically provision numerous resources to meet customer demands. A major challenge for cloud providers is designing efficient mechanisms for Optimal Virtual Machine Placement (OVMP). Such mechanisms enable cloud providers to effectively utilize their available resources and obtain higher profits. In order to provide appropriate resources to clients, an optimal virtual machine placement algorithm is proposed. Virtual machine placement is an NP-hard problem, and such problems can be solved using heuristic algorithms. In this paper, Ant Colony Optimization (ACO) based virtual machine placement is proposed. Our proposed system focuses on minimizing the cost incurred by each plan for hosting virtual machines in a multiple-cloud-provider environment, and the response time of each cloud provider is monitored periodically so as to minimize the delay in providing resources to users. The performance of the proposed algorithm is compared with a greedy mechanism. The proposed algorithm is simulated in the Eclipse IDE. The results clearly show that the proposed algorithm minimizes cost, response time, and the number of migrations.
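    The ACO approach mentioned above can be sketched compactly: ants probabilistically assign each VM to a feasible host, weighting choices by pheromone and cost desirability, and the best plan found reinforces the trail. The per-unit hosting costs, parameter values, and tiny problem size below are hypothetical, not the paper's model.

```python
import random

# Illustrative ACO sketch for VM placement (hypothetical cost model).

def aco_place(vm_demands, host_caps, host_costs,
              ants=20, iters=30, alpha=1.0, beta=2.0, rho=0.1, seed=0):
    rng = random.Random(seed)
    n_vms, n_hosts = len(vm_demands), len(host_caps)
    tau = [[1.0] * n_hosts for _ in range(n_vms)]      # pheromone trails
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            free, plan, cost = list(host_caps), [], 0.0
            for v, d in enumerate(vm_demands):
                cands = [h for h in range(n_hosts) if free[h] >= d]
                if not cands:                          # infeasible partial plan
                    plan = None
                    break
                # desirability = pheromone^alpha * (1/cost)^beta
                w = [tau[v][h] ** alpha * (1.0 / host_costs[h]) ** beta
                     for h in cands]
                h = rng.choices(cands, weights=w)[0]
                free[h] -= d
                plan.append(h)
                cost += host_costs[h] * d
            if plan is not None and cost < best_cost:
                best, best_cost = plan, cost
        for v in range(n_vms):                         # pheromone evaporation
            for h in range(n_hosts):
                tau[v][h] *= 1.0 - rho
        if best is not None:                           # reinforce best-so-far plan
            for v, h in enumerate(best):
                tau[v][h] += 1.0 / best_cost
    return best, best_cost

plan, cost = aco_place([2, 3, 1], host_caps=[4, 4], host_costs=[1.0, 2.0])
print(plan, cost)
```

The greedy baseline the paper compares against would simply take the cheapest feasible host per VM in arrival order, which here gets trapped by capacity and pays more.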

  13. Concept of Operations for a Virtual Machine for C3I Applications

    National Research Council Canada - National Science Library

    Bagrodia, Rajive

    1997-01-01

    This 12-month research endeavor, entitled "Concept of Operations for a Virtual Machine for C3I Applications," examined issues in using a concurrent virtual machine for the design of C3I applications...

  14. Automatic Generation of Machine Emulators: Efficient Synthesis of Robust Virtual Machines for Legacy Software Migration

    DEFF Research Database (Denmark)

    Franz, Michael; Gal, Andreas; Probst, Christian

    2006-01-01

    As older mainframe architectures become obsolete, the corresponding legacy software is increasingly executed via platform emulators running on top of more modern commodity hardware. These emulators are virtual machines that often include a combination of interpreters and just-in-time compilers. Implementing interpreters and compilers for each combination of emulated and target platform independently of each other is a redundant and error-prone task. We describe an alternative approach that automatically synthesizes specialized virtual-machine interpreters and just-in-time compilers, which then execute on top of an existing software portability platform such as Java. The result is a considerably reduced implementation effort.

  15. Managing the Virtual Machine Lifecycle of the CernVM Project

    International Nuclear Information System (INIS)

    Charalampidis, I; Blomer, J; Buncic, P; Harutyunyan, A; Larsen, D

    2012-01-01

    CernVM is a virtual software appliance designed to support the development cycle and provide a runtime environment for LHC applications. It consists of a minimal Linux distribution, a specially tuned file system designed to deliver application software on demand, and contextualization tools. The maintenance of these components involves a variety of different procedures and tools that cannot always connect with each other. Additionally, most of these procedures need to be performed frequently. Currently, in the CernVM project, every time we build a new virtual machine image, we have to perform the whole process manually, because of the heterogeneity of the tools involved. The overall process is error-prone and time-consuming. Therefore, to simplify and aid this continuous maintenance process, we are developing a framework that combines these virtually unrelated tools with a single, coherent interface. To do so, we identified all the involved procedures and their tools, tracked their dependencies and organized them into logical groups (e.g. build, test, instantiate). These groups define the procedures that are performed throughout the lifetime of a virtual machine. In this paper we describe the Virtual Machine Lifecycle and the framework we developed (iAgent) in order to simplify the maintenance process.

  16. Optimizing Virtual Network Functions Placement in Virtual Data Center Infrastructure Using Machine Learning

    Science.gov (United States)

    Bolodurina, I. P.; Parfenov, D. I.

    2018-01-01

    We have elaborated a neural network model of virtual network flow identification based on the statistical properties of flows circulating in the data center network and on characteristics that describe the content of packets transmitted through network objects. This enabled us to establish the optimal set of attributes to identify virtual network functions. We have established an algorithm for optimizing the placement of virtual network functions using the data obtained in our research. Our approach uses a hybrid method of virtualization based on virtual machines and containers, which reduces the infrastructure load and the response time in the network of the virtual data center. The algorithmic solution is based on neural networks, which allows it to scale to any number of network function copies.

  17. A discrete Fourier transform for virtual memory machines

    Science.gov (United States)

    Galant, David C.

    1992-01-01

    An algebraic theory of the Discrete Fourier Transform is developed in great detail. Examination of the details of the theory leads to a computationally efficient fast Fourier transform for use on computers with virtual memory. Such an algorithm is of great use on modern desktop machines. A FORTRAN-coded version of the algorithm is given for the case when the length of the sequence of numbers to be transformed is a power of two.
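    For reference, the underlying power-of-two transform can be sketched with the textbook radix-2 decimation-in-time recursion. This generic version (in Python rather than the paper's FORTRAN) makes no attempt at the virtual-memory access-pattern optimizations that are the paper's contribution.

```python
import cmath

# Textbook radix-2 decimation-in-time FFT for power-of-two lengths.

def fft(x):
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])          # transform of even-indexed samples
    odd = fft(x[1::2])           # transform of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # combine with the twiddle factor e^{-2*pi*i*k/n}
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# A constant signal transforms to a single DC component.
spectrum = fft([1.0, 1.0, 1.0, 1.0])
print([round(abs(c), 6) for c in spectrum])  # [4.0, 0.0, 0.0, 0.0]
```

The virtual-memory angle in the paper is about reordering these butterfly passes so that page faults are minimized for sequences larger than physical memory; the recursion above shows only the arithmetic.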

  18. A Reference Model for Virtual Machine Launching Overhead

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Hao; Ren, Shangping; Garzoglio, Gabriele; Timm, Steven; Bernabeu, Gerard; Chadwick, Keith; Noh, Seo-Young

    2016-07-01

    Cloud bursting is one of the key research topics in the cloud computing communities. A well designed cloud bursting module enables private clouds to automatically launch virtual machines (VMs) to public clouds when more resources are needed. One of the main challenges in developing a cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on system operational data obtained from FermiCloud, a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows, the VM launching overhead is not a constant. It varies with physical resource utilization, such as CPU and I/O device utilizations, at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launching overhead reference model is needed. In this paper, we first develop a VM launching overhead reference model based on operational data we have obtained on FermiCloud. Second, we apply the developed reference model on FermiCloud and compare calculated VM launching overhead values based on the model with measured overhead values on FermiCloud. Our empirical results on FermiCloud indicate that the developed reference model is accurate. We believe, with the guidance of the developed reference model, efficient resource allocation algorithms can be developed for cloud bursting process to minimize the operational cost and resource waste.
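    The idea of a launching-overhead reference model can be illustrated with a least-squares fit of overhead against resource utilization at launch time. The features, coefficients, and data points below are hypothetical stand-ins, not FermiCloud's actual model.

```python
# Hypothetical sketch of a VM launching overhead reference model: fit
# overhead (seconds) as a linear function of CPU and I/O utilization
# measured at launch time.

def fit_linear(samples):
    """Ordinary least squares for overhead = a*cpu + b*io + c,
    solved via the 3x3 normal equations with Gauss-Jordan elimination."""
    xtx = [[0.0] * 3 for _ in range(3)]
    xty = [0.0] * 3
    for cpu, io, overhead in samples:
        row = (cpu, io, 1.0)
        for i in range(3):
            for j in range(3):
                xtx[i][j] += row[i] * row[j]
            xty[i] += row[i] * overhead
    m = [xtx[i] + [xty[i]] for i in range(3)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [u - f * v for u, v in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

# Synthetic observations lying on overhead = 30*cpu + 20*io + 15.
data = [(0.1, 0.1, 20.0), (0.5, 0.2, 34.0), (0.8, 0.6, 51.0), (0.3, 0.9, 42.0)]
a, b, c = fit_linear(data)

def predict(cpu, io):
    return a * cpu + b * io + c

print(round(predict(0.5, 0.5), 1))  # 40.0
```

A cloud bursting module could then query `predict` with the candidate host's current utilization and launch wherever the projected overhead is lowest.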

  19. Modeling the Virtual Machine Launching Overhead under Fermicloud

    Energy Technology Data Exchange (ETDEWEB)

    Garzoglio, Gabriele [Fermilab]; Wu, Hao [Fermilab]; Ren, Shangping [IIT, Chicago]; Timm, Steven [Fermilab]; Bernabeu, Gerard [Fermilab]; Noh, Seo-Young [KISTI, Daejeon]

    2014-11-12

    FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module of FermiCloud enables it, when more computational resources are needed, to automatically launch virtual machines on available resources such as public clouds. One of the main challenges in developing the cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on FermiCloud's system operational data, the VM launching overhead is not a constant. It varies with physical resource (CPU, memory, I/O device) utilization at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops a VM launch overhead reference model based on operational data obtained on FermiCloud and uses the reference model to guide the cloud bursting process.

  20. A critical survey of live virtual machine migration techniques

    Directory of Open Access Journals (Sweden)

    Anita Choudhary

    2017-11-01

    Full Text Available Virtualization techniques effectively handle the growing demand for computing, storage, and communication resources in large-scale Cloud Data Centers (CDCs). They help to achieve different resource management objectives like load balancing, online system maintenance, proactive fault tolerance, power management, and resource sharing through Virtual Machine (VM) migration. VM migration is a resource-intensive procedure, as VMs continuously demand appropriate CPU cycles, cache memory, memory capacity, and communication bandwidth. Therefore, this process degrades the performance of running applications and adversely affects the efficiency of data centers, particularly when Service Level Agreements (SLAs) and critical business objectives are to be met. Live VM migration is frequently used because it keeps application services available while migration is performed. In this paper, we make an exhaustive survey of the literature on live VM migration and analyze the various proposed mechanisms. We first classify the types of live VM migration (single, multiple, and hybrid). Next, we categorize VM migration techniques based on duplication mechanisms (replication, de-duplication, redundancy, and compression) and awareness of context (dependency, soft page, dirty page, and page fault) and evaluate the various live VM migration techniques. We discuss various performance metrics like application service downtime, total migration time, and amount of data transferred. CPU, memory, and storage data are transferred during the process of VM migration, and we identify the category of data that needs to be transferred in each case. We present a brief discussion of security threats in live VM migration and categorize them into three different classes (control plane, data plane, and migration module). We also explain the security requirements and existing solutions to mitigate possible attacks. Specific gaps are identified and the research challenges in improving...

  1. Virtual Machine Scheduling in Dedicated Computing Clusters

    CERN Document Server

    Boettger, Stefan; Zicari, V Roberto

    2014-01-08

    Time-critical applications process a continuous stream of input data and have to meet specific timing constraints. A common approach to ensure that such an application satisfies its constraints is over-provisioning: the application is deployed in a dedicated cluster environment with enough processing power to achieve the target performance for every specified data input rate. This approach comes with a drawback: at times of decreased data input rates, the cluster resources are not fully utilized. A typical use case is the HLT-Chain application that processes physics data at runtime of the ALICE experiment at CERN. From a perspective of cost and efficiency it is desirable to exploit temporarily unused cluster resources. Existing approaches aim for that goal by running additional applications. These approaches, however, a) lack the flexibility to dynamically grant the time-critical application the resources it needs, and b) are insufficient for isolating the time-critical application from harmful side-effects i...

  2. A Cooperative Approach to Virtual Machine Based Fault Injection

    Energy Technology Data Exchange (ETDEWEB)

    Naughton III, Thomas J [ORNL]; Engelmann, Christian [ORNL]; Vallee, Geoffroy R [ORNL]; Aderholdt, William Ferrol [ORNL]; Scott, Stephen L [Tennessee Technological University (TTU)]

    2017-01-01

    Resilience investigations often employ fault injection (FI) tools to study the effects of simulated errors on a target system. It is important to keep the system under test (SUT) isolated from the controlling environment in order to maintain control of the experiment. Virtual machines (VMs) have been used to aid these investigations due to the strong isolation properties of system-level virtualization. A key challenge in fault injection tools is to gain proper insight and context about the SUT. In VM-based FI tools, this challenge of target context is increased due to the separation between host and guest (VM). We discuss an approach to VM-based FI that leverages virtual machine introspection (VMI) methods to gain insight into the target's context running within the VM. The key to this environment is the ability to provide basic information to the FI system that can be used to create a map of the target environment. We describe a proof-of-concept implementation and a demonstration of its use to introduce simulated soft errors into an iterative solver benchmark running in user-space of a guest VM.

  3. An RTT-Aware Virtual Machine Placement Method

    Directory of Open Access Journals (Sweden)

    Li Quan

    2017-12-01

    Full Text Available Virtualization is a key technology for mobile cloud computing (MCC), and the virtual machine (VM) is a core component of virtualization. A VM provides a relatively independent running environment for different applications. Therefore, the VM placement problem focuses on how to place VMs on optimal physical machines, which ensures efficient use of resources, quality of service, etc. Most previous work focuses on energy consumption, network traffic between VMs, and so on, and rarely considers the delay of end users' requests. In contrast, the latency between requests and VMs is considered in this paper for the scenario of optimal VM placement in MCC. In order to minimize the average latency of all requests, the round-trip time (RTT) is first adopted as the latency metric. Based on our proposed RTT metric, an RTT-aware VM placement algorithm is then proposed to minimize the average RTT. Furthermore, the case in which one of the core switches does not work is considered. A VM rescheduling algorithm is proposed to keep the average RTT low and to reduce fluctuations in the average RTT. Finally, in the simulation study, our algorithm shows its advantage over existing methods, including random placement, the traffic-aware VM placement algorithm, and the remaining-utilization-aware algorithm.
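    A greedy toy version of RTT-aware placement can be sketched as: send each request's VM to the lowest-RTT host that still has a free slot, then report the average RTT. Host names, regions, capacities, and RTT values below are made up; the paper's algorithm and its rescheduling step for core-switch failure are more involved.

```python
# Illustrative greedy RTT-aware VM placement (hypothetical topology).

def place_by_rtt(requests, capacity, rtt):
    """requests: list of (request_id, region); capacity: free slots per host;
    rtt[region][host]: measured round-trip time in ms."""
    free = dict(capacity)
    placement, total = {}, 0.0
    for req, region in requests:
        # choose the host with the lowest RTT that still has a free slot
        host = min((h for h in free if free[h] > 0),
                   key=lambda h: rtt[region][h])
        free[host] -= 1
        placement[req] = host
        total += rtt[region][host]
    return placement, total / len(requests)

rtt = {"eu": {"fra": 12.0, "nyc": 85.0}, "us": {"fra": 88.0, "nyc": 10.0}}
reqs = [("r1", "eu"), ("r2", "us"), ("r3", "eu")]
placement, avg = place_by_rtt(reqs, {"fra": 2, "nyc": 2}, rtt)
print(placement, round(avg, 2))
```

When a core switch fails, some `rtt[region][host]` entries jump; rerunning the placement on the new measurements is the crude analogue of the paper's rescheduling algorithm.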

  4. Optimized Virtual Machine Placement with Traffic-Aware Balancing in Data Center Networks

    Directory of Open Access Journals (Sweden)

    Tao Chen

    2016-01-01

    Full Text Available Virtualization has been an efficient method to fully utilize computing resources such as servers. The way virtual machines (VMs) are placed among a large pool of servers greatly affects the performance of data center networks (DCNs). As network resources have become a main bottleneck for the performance of DCNs, we concentrate on VM placement with traffic-aware balancing to evenly utilize the links in DCNs. In this paper, we first propose the Virtual Machine Placement Problem with Traffic-Aware Balancing (VMPPTB), prove it to be NP-hard, and design a Longest Processing Time Based Placement (LPTBP) algorithm to solve it. To take advantage of communication locality, we propose the Locality-Aware Virtual Machine Placement Problem with Traffic-Aware Balancing (LVMPPTB), a multiobjective optimization problem that simultaneously minimizes the maximum number of VM partitions of requests and the maximum bandwidth occupancy on uplinks of Top of Rack (ToR) switches. We also prove it to be NP-hard and design a heuristic Least-Load First Based Placement (LLBP) algorithm to solve it. Through extensive simulations, the proposed heuristic algorithm is shown to significantly balance the bandwidth occupancy on uplinks of ToR switches, while keeping the number of VM partitions of each request small enough.
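    The LPT heuristic named above is easy to sketch: sort VMs by bandwidth demand and repeatedly assign the heaviest remaining one to the least-loaded uplink. This toy version ignores the partitioning and locality constraints of LVMPPTB; the bandwidth figures are illustrative.

```python
# Longest-Processing-Time-first sketch for balancing VM traffic
# across uplinks (illustrative; constraints of the paper omitted).

def lpt_balance(vm_bandwidths, n_links):
    loads = [0.0] * n_links
    assignment = []
    # Place the heaviest VMs first, each onto the least-loaded uplink.
    for bw in sorted(vm_bandwidths, reverse=True):
        link = min(range(n_links), key=lambda i: loads[i])
        loads[link] += bw
        assignment.append((bw, link))
    return loads, assignment

loads, _ = lpt_balance([7, 5, 4, 3, 3, 2], n_links=2)
print(loads)  # [12.0, 12.0]
```

LPT is the classic makespan heuristic: placing large items first leaves the small ones to even out the residual imbalance, which is exactly the "traffic-aware balancing" objective in miniature.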

  5. Machine performance assessment and enhancement for a hexapod machine

    Energy Technology Data Exchange (ETDEWEB)

    Mou, J.I. [Arizona State Univ., Tempe, AZ (United States); King, C. [Sandia National Labs., Livermore, CA (United States). Integrated Manufacturing Systems Center

    1998-03-19

    The focus of this study is to develop a sensor-fused process modeling and control methodology to model, assess, and then enhance the performance of a hexapod machine for precision product realization. A deterministic modeling technique was used to derive models for machine performance assessment and enhancement. A sensor fusion methodology was adopted to identify the parameters of the derived models. Empirical models and computational algorithms were also derived and implemented to model, assess, and then enhance the machine performance. The developed sensor fusion algorithms can be implemented on a PC-based open-architecture controller to receive information from various sensors, assess the status of the process, determine the proper action, and deliver commands to actuators for task execution. This will enhance a hexapod machine's capability to produce workpieces within the imposed dimensional tolerances.

  6. Integrating Heuristic and Machine-Learning Methods for Efficient Virtual Machine Allocation in Data Centers

    OpenAIRE

    Pahlevan, Ali; Qu, Xiaoyu; Zapater Sancho, Marina; Atienza Alonso, David

    2017-01-01

    Modern cloud data centers (DCs) need to tackle efficiently the increasing demand for computing resources and address the energy efficiency challenge. Therefore, it is essential to develop resource provisioning policies that are aware of virtual machine (VM) characteristics, such as CPU utilization and data communication, and that are applicable in dynamic scenarios. Traditional approaches fall short in terms of flexibility and applicability for large-scale DC scenarios. In this paper we propose a heur...

  7. Modeling and simulation of five-axis virtual machine based on NX

    Science.gov (United States)

    Li, Xiaoda; Zhan, Xianghui

    2018-04-01

    Virtual technology has played a growing role in the machinery manufacturing industry. In this paper, the Siemens NX software is used to model a virtual CNC machine tool, and the parameters of the virtual machine are defined according to the actual parameters of the machine tool so that the virtual simulation can be carried out without loss of simulation accuracy. How to use the machine builder of the CAM module to define the kinematic chain and machine components is described. The simulation of the virtual machine can provide users with alarm information about tool collisions and overcutting during the process, and can evaluate and forecast the rationality of the technological process.

  8. Virtual reality solutions for the design of machine tools in practice

    OpenAIRE

    Zickner, H.; Neugebauer, Reimund; Weidlich, D.

    2006-01-01

    At the Virtual Reality Centre Production Engineering (VRCP) the Institute for Machine Tools and Production Processes (IWP) of the Chemnitz University of Technology and the Fraunhofer Institute for Machine Tools and Forming Technology (IWU) have developed several practical Virtual Reality (VR) based solutions for the industry. Some practical examples will show the benefits gained by the application of Virtual Reality techniques in the design process of machine tools and assembly lines.

  9. Increasing performance in KVM virtualization within a Tier-1 environment

    International Nuclear Information System (INIS)

    Chierici, Andrea; Salomoni, Davide

    2012-01-01

    This work shows the optimizations we have been investigating and implementing at the KVM (Kernel-based Virtual Machine) virtualization layer in the INFN Tier-1 at CNAF, based on more than a year of experience in running thousands of virtual machines in a production environment used by several international collaborations. These optimizations increase the adaptability of virtualization solutions to demanding applications like those run in our institute (High-Energy Physics). We will show performance differences among different filesystems (like ext3 vs ext4) when used as KVM host local storage. We will provide guidelines for solid state disks (SSD) adoption, for deployment of SR-IOV (Single Root I/O Virtualization) enabled hardware and what is the best solution to distribute and instantiate read-only virtual machine images. This work has been driven by the project called Worker Nodes on Demand Service (WNoDeS), a framework designed to offer local, grid or cloud-based access to computing and storage resources, preserving maximum compatibility with existing computing center policies and workflows.

  10. Virtual screening by a new Clustering-based Weighted Similarity Extreme Learning Machine approach.

    Science.gov (United States)

    Pasupa, Kitsuchart; Kudisthalert, Wasu

    2018-01-01

    Machine learning techniques are becoming popular in virtual screening tasks. One of the most powerful machine learning algorithms is the Extreme Learning Machine (ELM), which has been applied to many applications and has recently been applied to virtual screening. We propose the Weighted Similarity ELM (WS-ELM), which is based on a single-hidden-layer feed-forward neural network using 16 different similarity coefficients as activation functions in the hidden layer. It is known that the performance of conventional ELM is not robust due to random weight selection in the hidden layer. Thus, we propose a Clustering-based WS-ELM (CWS-ELM) that deterministically assigns weights by utilising clustering algorithms, i.e. k-means clustering and support vector clustering. The experiments were conducted on one of the most challenging datasets, the Maximum Unbiased Validation Dataset, which contains 17 activity classes carefully selected from PubChem. The proposed algorithms were then compared with other machine learning techniques such as support vector machines, random forests, and similarity searching. The results show that CWS-ELM in conjunction with support vector clustering yields the best performance when utilised together with the Sokal/Sneath(1) coefficient. Furthermore, the ECFP_6 fingerprint presents the best results in our framework compared to the other types of fingerprints, namely ECFP_4, FCFP_4, and FCFP_6.
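    The similarity-as-activation idea can be sketched with a toy ELM whose hidden units compute Tanimoto similarity against anchor fingerprints, with output weights solved in closed form. The anchors, fingerprints, and ridge term below are illustrative assumptions; CWS-ELM instead chooses its weights deterministically via clustering, and the paper evaluates 16 similarity coefficients.

```python
# Toy similarity-activated ELM for virtual screening (illustrative data).

def tanimoto(a, b):
    inter, union = len(a & b), len(a | b)
    return inter / union if union else 0.0

def train_elm(X, y, anchors, ridge=1e-6):
    # Hidden layer: H[i][j] = similarity(sample i, anchor j)
    H = [[tanimoto(x, a) for a in anchors] for x in X]
    k = len(anchors)
    # Solve (H^T H + ridge*I) beta = H^T y by Gauss-Jordan elimination.
    A = [[sum(H[i][r] * H[i][c] for i in range(len(X)))
          + (ridge if r == c else 0.0) for c in range(k)] for r in range(k)]
    b = [sum(H[i][r] * y[i] for i in range(len(X))) for r in range(k)]
    m = [A[r] + [b[r]] for r in range(k)]
    for col in range(k):
        p = max(range(col, k), key=lambda r: abs(m[r][col]))
        m[col], m[p] = m[p], m[col]
        for r in range(k):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [u - f * v for u, v in zip(m[r], m[col])]
    beta = [m[r][k] / m[r][r] for r in range(k)]
    return lambda x: sum(tanimoto(x, a) * w for a, w in zip(anchors, beta))

# Fingerprints as sets of on-bits; actives share the substructure {1, 2, 3}.
actives = [{1, 2, 3, 9}, {1, 2, 3, 7}, {1, 2, 3, 5}]
decoys = [{4, 6, 8}, {5, 6, 10}, {4, 8, 11}]
score = train_elm(actives + decoys, [1.0] * 3 + [0.0] * 3,
                  anchors=[{1, 2, 3}, {4, 6, 8}])
print(score({1, 2, 3, 4}) > score({4, 6, 12}))  # True: active-like ranks higher
```

The ranking by `score` is what a screening pipeline would use to prioritize compounds for assay, which is why enrichment-style metrics dominate evaluation on datasets like MUV.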

  11. A Virtual Inertia Control Strategy for DC Microgrids Analogized with Virtual Synchronous Machines

    DEFF Research Database (Denmark)

    Wu, Wenhua; Chen, Yandong; Luo, An

    2017-01-01

    In a DC microgrid (DC-MG), the dc bus voltage is vulnerable to power fluctuations derived from intermittent distributed energy sources or local load variations. In this paper, a virtual inertia control strategy for the DC-MG through bidirectional grid-connected converters (BGCs), analogized with the virtual synchronous machine (VSM), is proposed to enhance the inertia of the DC-MG and to restrain dc bus voltage fluctuation. The small-signal model of the BGC system is established, and the small-signal transfer function between the dc bus voltage and the dc output current of the BGC is deduced. The dynamic ... for the BGC is introduced to smooth the dynamic response of the dc bus voltage. By analyzing the control system stability, appropriate virtual inertia control parameters are selected. Finally, simulations and experiments verified the validity of the proposed control strategy.

  12. Two-Stage Performance Engineering of Container-based Virtualization

    Directory of Open Access Journals (Sweden)

    Zheng Li

    2018-02-01

    Full Text Available Cloud computing has become a compelling paradigm built on compute and storage virtualization technologies. The current virtualization solution in the Cloud widely relies on hypervisor-based technologies. Given the recent booming of the container ecosystem, container-based virtualization is receiving more attention as a promising alternative. Although container technologies are generally considered to be lightweight, no virtualization solution is ideally resource-free, and the corresponding performance overheads lead to negative impacts on the quality of Cloud services. To facilitate understanding container technologies from a performance engineering perspective, we conducted a two-stage performance investigation of Docker containers as a concrete example. At the first stage, we used a physical machine with "just-enough" resources as a baseline to investigate the performance overhead of a standalone Docker container against a standalone virtual machine (VM). With findings contrary to the related work, our evaluation results show that the virtualization performance overhead can vary not only on a feature-by-feature basis but also on a job-to-job basis. Moreover, hypervisor-based technology does not come with higher performance overhead in every case. For example, Docker containers particularly exhibit lower QoS in terms of storage transaction speed. At the ongoing second stage, we employed a physical machine with "fair-enough" resources to implement a container-based MapReduce application and tried to optimize its performance. In fact, this machine failed to afford VM-based MapReduce clusters at the same scale. The performance tuning results show that the effects of different optimization strategies can be largely related to the data characteristics. For example, LZO compression brings the most significant performance improvement when dealing with text data in our case.
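    Per-job overhead comparisons of this kind rest on a timing harness along the following lines; this is a generic sketch run once inside each environment (bare metal, container, VM), not the paper's actual benchmark suite or workloads.

```python
import time

# Minimal micro-benchmark harness for job-to-job overhead comparisons.

def benchmark(task, repeats=5):
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        task()
        times.append(time.perf_counter() - start)
    return min(times)   # best-of-n damps scheduler and warm-up noise

def cpu_bound():
    return sum(i * i for i in range(100_000))

print(f"cpu-bound best-of-5: {benchmark(cpu_bound):.4f}s")
```

Comparing the same `benchmark` figure across a host, a container, and a VM for several job types (CPU-bound, storage-bound, network-bound) is what surfaces the feature-by-feature and job-to-job variation the abstract describes.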

  13. Application of virtual machine technology to real-time mapping of Thomson scattering data to flux coordinates for the LHD

    International Nuclear Information System (INIS)

    Emoto, Masahiko; Yoshida, Masanobu; Suzuki, Chihiro; Suzuki, Yasuhiro; Ida, Katsumi; Nagayama, Yoshio; Akiyama, Tsuyoshi; Kawahata, Kazuo; Narihara, Kazumichi; Tokuzawa, Tokihiko; Yamada, Ichihiro

    2012-01-01

Highlights: ► We have developed a system that maps the electron temperature profile to flux coordinates. ► To increase performance, multiple virtual machines are used. ► Virtual machine technology is flexible when increasing the number of computers. - Abstract: This paper presents a system called “TSMAP” that maps electron temperature profiles to flux coordinates for the Large Helical Device (LHD). Assuming that flux surfaces are isothermal, TSMAP searches an equilibrium database for the LHD equilibrium that best fits the electron temperature profile. The equilibrium database is built through many VMEC computations of helical equilibria. Because the number of equilibria is large, the most important technical issue in realizing the TSMAP system is computational performance. Therefore, we use multiple personal computers to enhance performance when building the database for TSMAP. We use virtual machines on multiple Linux computers to run the TSMAP program. Virtual machine technology is flexible, allowing the number of computers to be easily increased. This paper discusses how the use of virtual machine technology enhances the performance of TSMAP calculations when multiple CPU cores are used.
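
The isothermal-surface search that TSMAP performs can be illustrated with a toy sketch. Everything below is invented for illustration (a one-dimensional "flux" and a three-entry database standing in for the VMEC-computed equilibria): each candidate equilibrium is scored by how much the electron temperature varies within a flux surface, and the best-scoring equilibrium is selected.

```python
import math

# Hypothetical equilibrium database: each entry maps a major radius R to a
# toy flux value (distance from a magnetic axis). A real TSMAP database
# would hold many VMEC-computed equilibria.
def make_flux(axis):
    return lambda R: abs(R - axis)

equilibria = {f"eq_axis_{a:.2f}": make_flux(a) for a in (3.5, 3.6, 3.7)}

def isothermal_mismatch(flux, profile):
    """Score how badly an equilibrium violates the assumption that points
    on the same flux surface share one electron temperature."""
    bins = {}
    for R, Te in profile:  # bin measurements by (rounded) flux value
        bins.setdefault(round(flux(R), 2), []).append(Te)
    return sum(max(v) - min(v) for v in bins.values())

def best_equilibrium(profile):
    return min(equilibria, key=lambda k: isothermal_mismatch(equilibria[k], profile))

# Symmetric Te profile centred at R = 3.6: the matching axis should win.
profile = [(3.6 + dR, math.exp(-dR * dR)) for dR in (-0.2, -0.1, 0.0, 0.1, 0.2)]
print(best_equilibrium(profile))  # → eq_axis_3.60
```

The point of the sketch is the scoring criterion: the correct equilibrium is the one for which temperature collapses to a single-valued function of flux.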

  14. Prediction Based Proactive Thermal Virtual Machine Scheduling in Green Clouds

    Science.gov (United States)

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

Cloud computing has rapidly emerged as a widely accepted computing paradigm, but research on Cloud computing is still at an early stage. Cloud computing provides many advanced features, but it still has some shortcomings, such as relatively high operating costs and environmental hazards like an increasing carbon footprint. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The temperature at which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions, with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing VM scheduling systems, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated. PMID:24737962

  15. Prediction Based Proactive Thermal Virtual Machine Scheduling in Green Clouds

    Directory of Open Access Journals (Sweden)

    Supriya Kinger

    2014-01-01

Full Text Available Cloud computing has rapidly emerged as a widely accepted computing paradigm, but research on Cloud computing is still at an early stage. Cloud computing provides many advanced features, but it still has some shortcomings, such as relatively high operating costs and environmental hazards like an increasing carbon footprint. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The temperature at which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions, with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing VM scheduling systems, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated.

  16. Prediction based proactive thermal virtual machine scheduling in green clouds.

    Science.gov (United States)

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

Cloud computing has rapidly emerged as a widely accepted computing paradigm, but research on Cloud computing is still at an early stage. Cloud computing provides many advanced features, but it still has some shortcomings, such as relatively high operating costs and environmental hazards like an increasing carbon footprint. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The temperature at which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions, with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing VM scheduling systems, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated.
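
A minimal sketch of the proactive idea, with invented temperatures, thresholds, and a toy linear heat predictor standing in for the paper's temperature predictor: a VM is placed only on an SM whose *predicted* temperature stays below its maximum threshold, rather than reacting after the threshold is crossed.

```python
def predict_temperature(sm, vm_load, heat_per_load=0.3):
    """Toy linear predictor: current temperature plus heat from the new load."""
    return sm["temp"] + heat_per_load * vm_load

def schedule_vm(sms, vm_load):
    """Return the name of the coolest SM predicted to stay below its maximum
    threshold temperature, or None if every placement would exceed one."""
    safe = [sm for sm in sms
            if predict_temperature(sm, vm_load) < sm["max_temp"]]
    if not safe:
        return None
    return min(safe, key=lambda sm: sm["temp"])["name"]

sms = [
    {"name": "sm1", "temp": 68.0, "max_temp": 70.0},
    {"name": "sm2", "temp": 55.0, "max_temp": 70.0},
]
print(schedule_vm(sms, vm_load=10))  # sm1 would reach 71 > 70, so sm2 is chosen
```

A reactive scheduler would happily place the VM on sm1 and migrate it away later; the proactive check rules sm1 out before the threshold is ever reached.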

  17. The Design and Realization of Virtual Machine of Embedded Soft PLC Running System

    Directory of Open Access Journals (Sweden)

    Qingzhao Zeng

    2014-11-01

Full Text Available Soft PLC is currently a focus of study in many countries. A soft PLC system consists of a developing system and a running system. The Virtual Machine is an important part of the running system, and indeed of the whole soft PLC system. It interprets and executes the intermediate code generated by the developing system and updates the I/O status of the PLC in order to complete its control function. This paper introduces the implementation scheme and execution process of the embedded soft PLC running system Virtual Machine, mainly its software implementation method, including the realization of the input sampling program, the instruction execution program, and the output refresh program. Besides, an operation code matching method is put forward in the design of the instruction execution program. Finally, a test was performed taking PowerPC/P1010 (Freescale) as the hardware platform and VxWorks as the operating system; the system test results show the accuracy, real-time performance, and reliability of the Virtual Machine.
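
The scan cycle described above (input sampling, instruction execution, output refresh) can be sketched as a tiny interpreter. The instruction set and names below are illustrative, not the paper's intermediate code; the dispatch table stands in for the operation-code matching step.

```python
def run_scan_cycle(program, inputs):
    """One PLC scan: sample inputs, execute intermediate code, refresh outputs."""
    image = dict(inputs)  # input sampling: freeze I/O into a process image
    acc = False           # accumulator, as in IL-style intermediate code
    outputs = {}

    def op_ld(arg):       # load an input into the accumulator
        nonlocal acc
        acc = image[arg]
    def op_and(arg):
        nonlocal acc
        acc = acc and image[arg]
    def op_or(arg):
        nonlocal acc
        acc = acc or image[arg]
    def op_st(arg):       # store the accumulator to an output coil
        outputs[arg] = acc

    opcode_table = {"LD": op_ld, "AND": op_and, "OR": op_or, "ST": op_st}
    for opcode, arg in program:    # instruction execution
        opcode_table[opcode](arg)  # opcode matching via table lookup
    return outputs                 # output refresh

# (start OR run) AND stop_n is the classic motor start/stop rung; stop is
# modelled here as an already-inverted input for brevity.
program = [("LD", "start"), ("OR", "run"), ("AND", "stop_n"), ("ST", "run")]
print(run_scan_cycle(program, {"start": True, "run": False, "stop_n": True}))
```

A real soft PLC runs this cycle continuously with strict timing guarantees; the sketch shows only the three-phase structure of a single scan.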

  18. Virtual-view PSNR prediction based on a depth distortion tolerance model and support vector machine.

    Science.gov (United States)

    Chen, Fen; Chen, Jiali; Peng, Zongju; Jiang, Gangyi; Yu, Mei; Chen, Hua; Jiao, Renzhi

    2017-10-20

Quality prediction of virtual views is important for free viewpoint video systems, and can be used as feedback to improve the performance of depth video coding and virtual-view rendering. In this paper, an efficient virtual-view peak signal to noise ratio (PSNR) prediction method is proposed. First, the effect of depth distortion on virtual-view quality is analyzed in detail, and a depth distortion tolerance (DDT) model that determines the DDT range is presented. Next, the DDT model is used to predict the virtual-view quality. Finally, a support vector machine (SVM) is utilized to train and obtain the virtual-view quality prediction model. Experimental results show that the Spearman's rank correlation coefficient and root mean square error between the actual PSNR and the PSNR predicted by the DDT model are 0.8750 and 0.6137 on average, and by the SVM prediction model 0.9109 and 0.5831, respectively. The computational complexity of the SVM method is lower than that of the DDT model and the state-of-the-art methods.

  19. Secure Hardware Performance Analysis in Virtualized Cloud Environment

    Directory of Open Access Journals (Sweden)

    Chee-Heng Tan

    2013-01-01

Full Text Available The main obstacle to mass adoption of cloud computing for database operations is the data security issue. In this paper, it is shown that IT services, particularly hardware performance evaluation in virtual machines, can be accomplished effectively without IT personnel gaining access to real data for diagnostic and remediation purposes. The proposed mechanisms utilize the TPC-H benchmark to achieve two objectives. First, the underlying hardware performance and consistency are supervised via a control system, which is constructed using a combination of TPC-H queries, linear regression, and machine learning techniques. Second, linear programming techniques are employed to provide input to the algorithms that construct stress-testing scenarios in the virtual machine, using combinations of TPC-H queries. These stress-testing scenarios serve two purposes. They provide boundary resource threshold verification to the first control system, so that periodic training of the synthetic data sets for performance evaluation is not constrained by hardware inadequacy, particularly when the resources in the virtual machine are scaled up or down, which changes the utilization threshold. Second, they provide a platform for response-time verification on critical transactions, so that the expected Quality of Service (QoS) from these transactions is assured.

  20. Exploiting GPUs in Virtual Machine for BioCloud

    OpenAIRE

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

Recently, biological applications have started to be reimplemented to exploit the many cores of GPUs for better computation performance. Therefore, by providing virtualized GPUs to VMs in a cloud computing environment, many biological applications will willingly move into the cloud to enhance their computation performance and utilize infinite cloud computing resources while reducing computation expenses. In this paper, we propose a BioCloud system architecture that ena...

  1. VirtualSpace: A vision of a machine-learned virtual space environment

    Science.gov (United States)

    Bortnik, J.; Sarno-Smith, L. K.; Chu, X.; Li, W.; Ma, Q.; Angelopoulos, V.; Thorne, R. M.

    2017-12-01

Spaceborne instrumentation tends to come and go. A typical instrument will go through a phase of design and construction, be deployed on a spacecraft for several years while it collects data, and then be decommissioned and fade into obscurity. The data collected from that instrument will typically receive much attention while it is being collected, perhaps in the form of event studies, conjunctions with other instruments, or a few statistical surveys, but once the instrument or spacecraft is decommissioned, the data will be archived and receive progressively less attention with every passing year. This is the fate of all historical data, and will be the fate of data being collected by instruments even at the present time. But what if those instruments could come alive, and all be simultaneously present at any and every point in time and space? Imagine the scientific insights and societal gains that could be achieved with a grand (virtual) heliophysical observatory that consists of every current and historical mission ever deployed. We propose that this is not just fantasy but is eminently doable with the data currently available, with the present computational resources, and with currently available algorithms. This project revitalizes existing data resources and lays the groundwork for incorporating data from every future mission to expand the scope and refine the resolution of the virtual observatory. We call this project VirtualSpace: a machine-learned virtual space environment.

  2. Human Machine Interfaces for Teleoperators and Virtual Environments Conference

    Science.gov (United States)

    1990-01-01

    In a teleoperator system the human operator senses, moves within, and operates upon a remote or hazardous environment by means of a slave mechanism (a mechanism often referred to as a teleoperator). In a virtual environment system the interactive human machine interface is retained but the slave mechanism and its environment are replaced by a computer simulation. Video is replaced by computer graphics. The auditory and force sensations imparted to the human operator are similarly computer generated. In contrast to a teleoperator system, where the purpose is to extend the operator's sensorimotor system in a manner that facilitates exploration and manipulation of the physical environment, in a virtual environment system, the purpose is to train, inform, alter, or study the human operator to modify the state of the computer and the information environment. A major application in which the human operator is the target is that of flight simulation. Although flight simulators have been around for more than a decade, they had little impact outside aviation presumably because the application was so specialized and so expensive.

  3. Using Machine Learning to Predict Student Performance

    OpenAIRE

    Pojon, Murat

    2017-01-01

    This thesis examines the application of machine learning algorithms to predict whether a student will be successful or not. The specific focus of the thesis is the comparison of machine learning methods and feature engineering techniques in terms of how much they improve the prediction performance. Three different machine learning methods were used in this thesis. They are linear regression, decision trees, and naïve Bayes classification. Feature engineering, the process of modification ...

  4. The influence of negative training set size on machine learning-based virtual screening.

    Science.gov (United States)

    Kurczab, Rafał; Smusz, Sabina; Bojarski, Andrzej J

    2014-01-01

The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. The impact of this rather neglected aspect of applying machine learning methods was examined for sets containing a fixed number of positive examples and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluation parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed, in conjunction with some decreases in hit recall. The analysis of the dynamics of those variations allowed us to recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with the SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening.
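
The core effect, precision falling as randomly selected negatives are added while recall stays put, can be reproduced with a toy score-threshold classifier. The score distributions below are invented for the demonstration; they merely need to overlap, as actives and random ZINC decoys do for an imperfect classifier.

```python
import random

random.seed(0)

def evaluate(n_negatives, threshold=0.5):
    """Fixed positives, variable random negatives, fixed decision threshold."""
    positives = [0.4 + 0.6 * random.random() for _ in range(100)]    # skew high
    negatives = [0.8 * random.random() for _ in range(n_negatives)]  # skew low
    tp = sum(s >= threshold for s in positives)
    fp = sum(s >= threshold for s in negatives)
    precision = tp / (tp + fp)
    recall = tp / len(positives)
    return precision, recall

# More random negatives -> more false positives -> precision drops,
# while recall (a property of the positives alone) is unaffected.
for n in (100, 1000, 10000):
    p, r = evaluate(n)
    print(f"negatives={n:>5}  precision={p:.2f}  recall={r:.2f}")
```

This is the flip side of the finding quoted above: shrinking the negative set (raising the positive-to-negative ratio) buys precision, at some cost in how representative the negatives are.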

  5. DISTRIBUTED SYSTEM FOR HUMAN MACHINE INTERACTION IN VIRTUAL ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    Abraham Obed Chan-Canche

    2017-07-01

Full Text Available Communication networks built from multiple devices and sensors are becoming more common. These device networks enable the development of human-machine interaction that aims to improve human performance by generating an adaptive environment in response to the information the user provides. The problem addressed in this work is the quick integration of a device network that allows the development of a flexible immersive environment for different uses.

  6. Clustered Data Management in Virtual Docker Networks Spanning Geo-Redundant Data Centers : A Performance Evaluation Study of Docker Networking

    OpenAIRE

    Alansari, Hayder

    2017-01-01

Software containers in general, and Docker in particular, are becoming more popular both in software development and deployment. Software containers are intended to be a lightweight virtualization that provides the isolation of virtual machines with performance close to native. Docker provides not only virtual isolation but also virtual networking to connect the isolated containers in the desired way. Many alternatives exist when it comes to the virtual networking provided by Docke...

  7. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.

    Science.gov (United States)

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-02-18

Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions but also raises cloud users' costs. Therefore, multimedia cloud providers should try to minimize energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware (PA) algorithm to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workloads to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically.
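
A rough sketch of the two-step consolidation idea, with invented utilization figures and a simplified one-dimensional resource model (not the paper's RUA/PA algorithms): placement keeps headroom against overload, and a separate power-aware pass switches off hosts left empty.

```python
def place_vm(hosts, vm_demand, headroom=0.2):
    """Pick the powered-on host whose remaining utilization after placement
    is smallest while still leaving `headroom` against overload."""
    candidates = [h for h in hosts
                  if h["on"] and h["used"] + vm_demand <= 1.0 - headroom]
    if not candidates:
        return None
    best = min(candidates, key=lambda h: 1.0 - (h["used"] + vm_demand))
    best["used"] += vm_demand
    return best["name"]

def power_down_idle(hosts):
    """Power-aware pass: switch off hosts that carry no load."""
    for h in hosts:
        if h["on"] and h["used"] == 0.0:
            h["on"] = False
    return [h["name"] for h in hosts if h["on"]]

hosts = [{"name": "h1", "used": 0.5, "on": True},
         {"name": "h2", "used": 0.0, "on": True}]
print(place_vm(hosts, 0.25))   # h1 fits (0.75 <= 0.8) and leaves the least room
print(power_down_idle(hosts))  # h2 is idle and gets switched off
```

The headroom parameter is what makes the placement "remaining-utilization-aware" in spirit: packing tightly saves hosts, but leaving margin prevents the post-placement overloads and SLA violations the abstract warns about.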

  8. Tunnel Boring Machine Performance Study. Final Report

    Science.gov (United States)

    1984-06-01

Full-face tunnel boring machine (TBM) performance during the excavation of 6 tunnels in sedimentary rock is considered in terms of utilization, penetration rates, and cutter wear. The construction records are analyzed and the results are used to inves...

  9. Issues of Application of Machine Learning Models for Virtual and Real-Life Buildings

    Directory of Open Access Journals (Sweden)

    Young Min Kim

    2016-06-01

Full Text Available The current Building Energy Performance Simulation (BEPS) tools are based on first principles. For the correct use of BEPS tools, simulationists should have an in-depth understanding of building physics, numerical methods, control logics of building systems, etc. However, it takes significant time and effort to develop a first-principles-based simulation model for existing buildings, mainly due to the laborious process of data gathering, uncertain inputs, model calibration, etc. Rather than resorting to an expert's effort, a data-driven approach (a so-called “inverse” approach) has received growing attention for the simulation of existing buildings. This paper reports a cross-comparison of three popular machine learning models (Artificial Neural Network (ANN), Support Vector Machine (SVM), and Gaussian Process (GP)) for predicting a chiller's energy consumption in a virtual and a real-life building. The predictions based on the three models are sufficiently accurate compared to the virtual and real measurements. This paper addresses the following issues for the successful development of machine learning models: reproducibility, selection of inputs, training period, outlying data obtained from the building energy management system (BEMS), and validation of the models. From the results of this comparative study, it was found that SVM has a disadvantage in computation time compared to ANN and GP. GP is the most sensitive to the training period among the three models.

  10. Global detection of live virtual machine migration based on cellular neural networks.

    Science.gov (United States)

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

In order to meet the demands of operation monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through analysis of the detection process, the parameter relationship of the CNN is mapped as an optimization problem, in which an improved particle swarm optimization algorithm based on bubble sort is used to solve the problem. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and emerging evidence has indicated that this new approach is amenable to parallelism and analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better.

  11. Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks

    Directory of Open Access Journals (Sweden)

    Kang Xie

    2014-01-01

Full Text Available In order to meet the demands of operation monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through analysis of the detection process, the parameter relationship of the CNN is mapped as an optimization problem, in which an improved particle swarm optimization algorithm based on bubble sort is used to solve the problem. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and emerging evidence has indicated that this new approach is amenable to parallelism and analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better.

  12. How can machine-learning methods assist in virtual screening for hyperuricemia? A healthcare machine-learning approach.

    Science.gov (United States)

    Ichikawa, Daisuke; Saito, Toki; Ujita, Waka; Oyama, Hiroshi

    2016-12-01

    Our purpose was to develop a new machine-learning approach (a virtual health check-up) toward identification of those at high risk of hyperuricemia. Applying the system to general health check-ups is expected to reduce medical costs compared with administering an additional test. Data were collected during annual health check-ups performed in Japan between 2011 and 2013 (inclusive). We prepared training and test datasets from the health check-up data to build prediction models; these were composed of 43,524 and 17,789 persons, respectively. Gradient-boosting decision tree (GBDT), random forest (RF), and logistic regression (LR) approaches were trained using the training dataset and were then used to predict hyperuricemia in the test dataset. Undersampling was applied to build the prediction models to deal with the imbalanced class dataset. The results showed that the RF and GBDT approaches afforded the best performances in terms of sensitivity and specificity, respectively. The area under the curve (AUC) values of the models, which reflected the total discriminative ability of the classification, were 0.796 [95% confidence interval (CI): 0.766-0.825] for the GBDT, 0.784 [95% CI: 0.752-0.815] for the RF, and 0.785 [95% CI: 0.752-0.819] for the LR approaches. No significant differences were observed between pairs of each approach. Small changes occurred in the AUCs after applying undersampling to build the models. We developed a virtual health check-up that predicted the development of hyperuricemia using machine-learning methods. The GBDT, RF, and LR methods had similar predictive capability. Undersampling did not remarkably improve predictive power. Copyright © 2016 Elsevier Inc. All rights reserved.
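
The undersampling step mentioned above can be sketched in a few lines; the records and the 1:1 target ratio below are illustrative assumptions (the study does not specify its exact resampling ratio in this abstract). The majority class is shrunk to the size of the minority class before training, so the classifier is not dominated by negatives.

```python
import random

def undersample(records, label_key="label", seed=42):
    """Balance a binary-labelled dataset by randomly shrinking the majority
    class to the size of the minority class."""
    pos = [r for r in records if r[label_key] == 1]
    neg = [r for r in records if r[label_key] == 0]
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    rng = random.Random(seed)
    balanced = minority + rng.sample(majority, len(minority))
    rng.shuffle(balanced)
    return balanced

# Toy check-up records: 9 hyperuricemia cases among 100 people.
records = [{"uric_acid": i, "label": int(i > 90)} for i in range(100)]
balanced = undersample(records)
print(len(balanced))  # 18: nine positives plus nine sampled negatives
```

Consistent with the abstract's finding, balancing like this mainly shifts the sensitivity/specificity trade-off; it does not necessarily raise the AUC.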

  13. Minimizing Total Busy Time with Application to Energy-efficient Scheduling of Virtual Machines in IaaS clouds

    OpenAIRE

    Quang-Hung, Nguyen; Thoai, Nam

    2016-01-01

Infrastructure-as-a-Service (IaaS) clouds have become more popular, enabling users to run applications in virtual machines. Energy efficiency for IaaS clouds is still a challenge. This paper investigates the energy-efficient scheduling problems of virtual machines (VMs) onto physical machines (PMs) in IaaS clouds along the following characteristics: multiple resources, fixed intervals, and non-preemption of virtual machines. The scheduling problems are NP-hard. Most existing works on VM placement reduce ...

  14. Comparison of confirmed inactive and randomly selected compounds as negative training examples in support vector machine-based virtual screening.

    Science.gov (United States)

    Heikamp, Kathrin; Bajorath, Jürgen

    2013-07-22

    The choice of negative training data for machine learning is a little explored issue in chemoinformatics. In this study, the influence of alternative sets of negative training data and different background databases on support vector machine (SVM) modeling and virtual screening has been investigated. Target-directed SVM models have been derived on the basis of differently composed training sets containing confirmed inactive molecules or randomly selected database compounds as negative training instances. These models were then applied to search background databases consisting of biological screening data or randomly assembled compounds for available hits. Negative training data were found to systematically influence compound recall in virtual screening. In addition, different background databases had a strong influence on the search results. Our findings also indicated that typical benchmark settings lead to an overestimation of SVM-based virtual screening performance compared to search conditions that are more relevant for practical applications.

  15. An Automatic Decision-Making Mechanism for Virtual Machine Live Migration in Private Clouds

    Directory of Open Access Journals (Sweden)

    Ming-Tsung Kao

    2014-01-01

Full Text Available Due to the increasing number of computer hosts deployed in an enterprise, automatic management of electronic applications is inevitable. To provide diverse services, there will be increases in procurement, maintenance, and electricity costs. Virtualization technology is becoming popular in cloud computing environments, as it enables the efficient use of computing resources and reduces operating costs. In this paper, we present an automatic mechanism to consolidate virtual servers and shut down idle physical machines during off-peak hours, while activating more machines at peak times. Through the monitoring of system resources, heavy system loads can be evenly distributed over physical machines to achieve load balancing. By integrating the feature of load balancing with virtual machine live migration, we have developed an automatic private cloud management system. Experimental results demonstrate that, during off-peak hours, we can save about 69 W of power consumption by consolidating idle virtual servers. The load-balancing implementation has shown that two machines with 80% and 40% CPU loads can be uniformly balanced to 60% each. Moreover, through the use of preallocated virtual machine images, the proposed mechanism can be easily applied to a large number of physical machines.
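
The 80%/40% to 60%/60% balancing result quoted above can be illustrated with a toy migration loop (a sketch with invented per-VM loads, not the paper's mechanism): VMs migrate from the busiest machine to the idlest until no single migration would narrow the gap.

```python
def balance(machines):
    """machines maps name -> list of per-VM CPU loads (percent).
    Migrate VMs from the hottest to the coldest machine until no
    single migration narrows the load gap; return final loads."""
    def load(m):
        return sum(machines[m])
    while True:
        hot = max(machines, key=load)
        cold = min(machines, key=load)
        gap = load(hot) - load(cold)
        movable = [vm for vm in machines[hot] if vm < gap]
        if not movable:
            return {m: load(m) for m in machines}
        # pick the VM whose migration leaves the smallest residual gap
        vm = min(movable, key=lambda v: abs(gap - 2 * v))
        machines[hot].remove(vm)
        machines[cold].append(vm)

# Two machines at 80% and 40% total CPU load, as in the experiment above.
machines = {"pm1": [20, 20, 20, 20], "pm2": [20, 20]}
print(balance(machines))  # → {'pm1': 60, 'pm2': 60}
```

Each migration here stands in for a live VM migration; requiring the moved VM's load to be smaller than the current gap guarantees the loop terminates, since every move strictly shrinks the imbalance.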

  16. Elevating Virtual Machine Introspection for Fine-Grained Process Monitoring: Techniques and Applications

    Science.gov (United States)

    Srinivasan, Deepa

    2013-01-01

    Recent rapid malware growth has exposed the limitations of traditional in-host malware-defense systems and motivated the development of secure virtualization-based solutions. By running vulnerable systems as virtual machines (VMs) and moving security software from inside VMs to the outside, the out-of-VM solutions securely isolate the anti-malware…

  17. Hybrid polylingual object model: an efficient and seamless integration of Java and native components on the Dalvik virtual machine.

    Science.gov (United States)

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. A metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, while requiring no JNI bridging code.

  18. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Pais Pitta de Lacerda Ruivo, Tiago [IIT, Chicago; Bernabeu Altayo, Gerard [Fermilab; Garzoglio, Gabriele [Fermilab; Timm, Steven [Fermilab; Kim, Hyun-Woo [Fermilab; Noh, Seo-Young [KISTI, Daejeon; Raicu, Ioan [IIT, Chicago

    2014-11-11

It has been widely accepted that software virtualization has a big negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SR-IOV). Our solution spanned modifications to the Linux hypervisor as well as the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).

  19. Complementary Machine Intelligence and Human Intelligence in Virtual Teaching Assistant for Tutoring Program Tracing

    Science.gov (United States)

    Chou, Chih-Yueh; Huang, Bau-Hung; Lin, Chi-Jen

    2011-01-01

    This study proposes a virtual teaching assistant (VTA) to share teacher tutoring tasks in helping students practice program tracing and proposes two mechanisms of complementing machine intelligence and human intelligence to develop the VTA. The first mechanism applies machine intelligence to extend human intelligence (teacher answers) to evaluate…

  20. Machine-learning scoring functions to improve structure-based binding affinity prediction and virtual screening.

    Science.gov (United States)

    Ain, Qurrat Ul; Aleksandrova, Antoniya; Roessler, Florian D; Ballester, Pedro J

    2015-01-01

    Docking tools to predict whether and how a small molecule binds to a target can be applied if a structural model of such target is available. The reliability of docking depends, however, on the accuracy of the adopted scoring function (SF). Despite intense research over the years, improving the accuracy of SFs for structure-based binding affinity prediction or virtual screening has proven to be a challenging task for any class of method. New SFs based on modern machine-learning regression models, which do not impose a predetermined functional form and thus are able to exploit effectively much larger amounts of experimental data, have recently been introduced. These machine-learning SFs have been shown to outperform a wide range of classical SFs at both binding affinity prediction and virtual screening. The emerging picture from these studies is that the classical approach of using linear regression with a small number of expert-selected structural features can be strongly improved by a machine-learning approach based on nonlinear regression allied with comprehensive data-driven feature selection. Furthermore, the performance of classical SFs does not grow with larger training datasets and hence this performance gap is expected to widen as more training data becomes available in the future. Other topics covered in this review include predicting the reliability of a SF on a particular target class, generating synthetic data to improve predictive performance and modeling guidelines for SF development. WIREs Comput Mol Sci 2015, 5:405-424. doi: 10.1002/wcms.1225 For further resources related to this article, please visit the WIREs website.
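    The contrast the review draws between classical linear SFs and nonlinear machine-learning SFs can be illustrated with a small scikit-learn sketch. The data below are synthetic stand-ins for structural features and binding affinities, not a real docking benchmark, and the nonlinear target function is an assumption chosen purely for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for structural features and binding affinities:
# the "true" affinity depends nonlinearly on a few of the features.
X = rng.normal(size=(600, 10))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=600)

X_train, X_test = X[:450], X[450:]
y_train, y_test = y[:450], y[450:]

# Classical-style SF: linear regression with a fixed functional form.
linear = LinearRegression().fit(X_train, y_train)

# Machine-learning SF: nonlinear ensemble regression on the same features.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

print("linear R^2:", round(linear.score(X_test, y_test), 2))
print("forest R^2:", round(forest.score(X_test, y_test), 2))
```

    On data of this kind the nonlinear model recovers structure the linear form cannot express, which mirrors the review's observation that classical SFs plateau while machine-learning SFs keep improving with more data.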

  1. Exploiting GPUs in Virtual Machine for BioCloud

    Science.gov (United States)

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

    Recently, biological applications have begun to be reimplemented to exploit the many cores of GPUs for better computational performance. Therefore, by providing virtualized GPUs to VMs in a cloud computing environment, many biological applications will willingly move into the cloud to enhance their computational performance and to utilize effectively unlimited cloud computing resources while reducing the expense of computation. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Because much of the previous research has focused on mechanisms for sharing GPUs among VMs, it cannot achieve sufficient performance for biological applications, for which computational throughput is more crucial than sharing. The proposed system exploits the pass-through mode of the PCI Express (PCI-E) channel. By allowing each VM to access the underlying GPUs directly, applications can show almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs from each VM on demand, VMs on the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment. PMID:23710465

  4. Controlling a virtual forearm prosthesis using an adaptive and affective Human-Machine Interface.

    Science.gov (United States)

    Rezazadeh, I Mohammad; Firoozabadi, S M P; Golpayegani, S M R Hashemi; Hu, H

    2011-01-01

    This paper presents the design of an adaptive Human-Machine Interface (HMI) for controlling a virtual forearm prosthesis. Direct physical performance measures (obtained score and completion time) for the requested tasks were calculated. Furthermore, bioelectric signals were recorded using one pair of electrodes placed on the frontal region of the subject's head to extract mental (affective) measures while performing the tasks. By employing the proposed algorithm and the above measures, the proposed HMI can adapt itself to the subject's mental states, thus improving the usability of the interface. The quantitative results from 15 subjects show that the proposed HMI achieved better physical performance measures in comparison to a conventional non-adaptive myoelectric controller (p < 0.001).

  5. Indicators of ADHD symptoms in virtual learning context using machine learning techniques

    Directory of Open Access Journals (Sweden)

    Laura Patricia Mancera Valetts

    2015-12-01

    Full Text Available This paper presents a user model for students performing virtual learning processes. The model is used to infer the presence of indicators of Attention Deficit Hyperactivity Disorder (ADHD) in a student. The user model is built from three user characteristics, which can also be used as variables in different contexts: behavioral conduct (BC), executive functions performance (EFP), and emotional state (ES). To infer the ADHD symptomatic profile of a student and his/her emotional alterations, these features are used as input to a set of classification rules. Based on the testing of the proposed model, training examples are obtained and used to train a machine learning classification algorithm to perform, and improve, the task of profiling a student. The proposed user model can provide the first step towards adapting learning resources in e-learning platforms to people with attention problems, specifically young adult students with ADHD.

  6. MODELS OF LIVE MIGRATION WITH ITERATIVE APPROACH AND MOVE OF VIRTUAL MACHINES

    Directory of Open Access Journals (Sweden)

    S. M. Aleksankov

    2015-11-01

    Full Text Available Subject of Research. The processes of live migration without shared storage using a pre-copy approach, and of move migration, are researched. Migration of virtual machines is an important capability of virtualization technology: it enables applications to move transparently, together with their runtime environments, between physical machines. Live migration has become a notable technology for efficient load balancing and for optimizing the deployment of virtual machines onto physical hosts in data centres. Before the advent of live migration, only network migration (the so-called «Move») was used, which entails stopping the virtual machine while it is copied to another physical server and, consequently, unavailability of the service. Method. Algorithms for live migration without shared storage with a pre-copy approach and for move migration of virtual machines are reviewed from the perspective of migration time and service unavailability during migration. Main Results. Analytical models are proposed that predict the migration time of virtual machines and the unavailability of services during migration with two technologies: live migration with a pre-copy approach without shared storage, and move migration. The latest works on estimating service unavailability and migration time using live migration without shared storage describe experimental results that support general conclusions about how service unavailability and migration time change, but not the prediction of their values. Practical Significance. The proposed models can be used for predicting migration time and service unavailability time, for example, when implementing preventive and emergency works on the physical nodes in data centres.
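    The pre-copy process the authors model can be sketched numerically: each iteration retransmits the pages dirtied during the previous round, so with a dirty rate below the network bandwidth the volume to send shrinks geometrically until a stop-and-copy threshold is reached. The symbols and numbers below (v_mem, rate_dirty, bandwidth, threshold) are illustrative assumptions, not the paper's notation:

```python
def precopy_migration(v_mem, rate_dirty, bandwidth, threshold, max_rounds=30):
    """Estimate total migration time and downtime for pre-copy live migration.

    v_mem      -- VM memory size (MB)
    rate_dirty -- page dirtying rate (MB/s)
    bandwidth  -- network bandwidth (MB/s)
    threshold  -- remaining data (MB) below which the VM is stopped and copied
    """
    assert rate_dirty < bandwidth, "pre-copy converges only if pages dirty slower than they transfer"
    total_time = 0.0
    to_send = v_mem                      # round 0: copy all memory
    for _ in range(max_rounds):
        t = to_send / bandwidth          # time to push this round's data
        total_time += t
        to_send = rate_dirty * t         # pages dirtied meanwhile -> next round
        if to_send <= threshold:
            break
    downtime = to_send / bandwidth       # final stop-and-copy of the residue
    return total_time + downtime, downtime

total, down = precopy_migration(v_mem=4096, rate_dirty=20, bandwidth=100, threshold=10)
print(f"migration ~{total:.1f}s, downtime ~{down:.3f}s")
```

    A move migration, by contrast, has downtime equal to the whole transfer, v_mem / bandwidth, which is exactly the service-unavailability gap the paper attributes to the «Move» approach.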

  7. Porting Gravitational Wave Signal Extraction to Parallel Virtual Machine (PVM)

    Science.gov (United States)

    Thirumalainambi, Rajkumar; Thompson, David E.; Redmon, Jeffery

    2009-01-01

    Laser Interferometer Space Antenna (LISA) is a planned NASA-ESA mission to be launched around 2012. Gravitational wave detection is fundamentally the determination of frequency, source parameters, and waveform amplitude, derived in a specific order from the interferometric time series of the rotating LISA spacecraft. The LISA Science Team has developed a Mock LISA Data Challenge intended to promote the testing of complicated nested search algorithms to detect signals in the 1-100 millihertz band at amplitudes of 10^-21. However, it has become clear that sequential search of the parameters is very time consuming and ultra-sensitive; hence, a new strategy has been developed. Parallelization of existing sequential search algorithms for gravitational wave signal identification consists of decomposing sequential search loops, beginning with the outermost loops and working inward. In this process, the main challenge is to detect interdependencies among loops and to partition the loops so as to preserve concurrency. Existing parallel programs are based upon either shared-memory or distributed-memory paradigms. In PVM, master and node programs are used to execute parallelization and process spawning. PVM can handle process management and process addressing schemes using a virtual machine configuration. Task scheduling, messaging, and signaling can be implemented efficiently for the LISA gravitational wave search process using a master and 6 nodes. This approach is accomplished using a server available at NASA Ames Research Center that has been dedicated to the LISA Data Challenge Competition. Historically, gravitational wave and source identification parameters have taken around 7 days on this dedicated single-thread Linux-based server. Using the PVM approach, the parameter extraction problem can be reduced to within a day. The low-frequency computation and a proxy signal-to-noise ratio are calculated in separate nodes that are controlled by the master.

  8. A Study of Applications of Machine Learning Based Classification Methods for Virtual Screening of Lead Molecules.

    Science.gov (United States)

    Vyas, Renu; Bapat, Sanket; Jain, Esha; Tambe, Sanjeev S; Karthikeyan, Muthukumarasamy; Kulkarni, Bhaskar D

    2015-01-01

    The ligand-based virtual screening of combinatorial libraries employs a number of statistical modeling and machine learning methods. A comprehensive analysis of the application of these methods for the diversity-oriented virtual screening of biological targets/drug classes is presented here. A number of classification models have been built for virtual screening using three types of inputs, namely structure-based descriptors, molecular fingerprints, and therapeutic category. The activity and affinity descriptors of a set of inhibitors of four target classes, DHFR, COX, LOX, and NMDA, were utilized to train a total of six classifiers: Artificial Neural Network (ANN), k-Nearest Neighbor (k-NN), Support Vector Machine (SVM), Naïve Bayes (NB), Decision Tree (DT), and Random Forest (RF). Among these classifiers, the ANN was found to be the best, with an AUC of 0.9 irrespective of the target. New molecular fingerprints based on pharmacophore, toxicophore, and chemophore (PTC) were used to build the ANN models for each dataset. A good accuracy of 87.27% was obtained using 296 chemophoric binary fingerprints for the COX-LOX inhibitors, compared to pharmacophoric (67.82%) and toxicophoric (70.64%) fingerprints. The methodology was validated on the classical Ames mutagenicity dataset of 4337 molecules. To evaluate it further, the selectivity and promiscuity of molecules from five drug classes (anti-anginal, anti-convulsant, anti-depressant, anti-arrhythmic, and anti-diabetic) were studied. The PTC fingerprints computed for each category were able to capture the drug-class-specific features using the k-NN classifier. These models can be useful for selecting optimal molecules for drug design.
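    The fingerprint-based classification workflow described above can be sketched with scikit-learn. The fingerprints here are random binary vectors with a few artificially "informative" bits standing in for 296-bit chemophoric fingerprints; this is a hypothetical setup for illustration, not the paper's inhibitor data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Hypothetical stand-in for 296-bit binary fingerprints: "actives" set a
# handful of informative bits (the first 8) more often than inactives.
n, bits = 600, 296
X = (rng.random((n, bits)) < 0.2).astype(int)
y = (X[:, :8].sum(axis=1) + rng.normal(0, 0.8, n) > 2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two of the six classifier families mentioned in the abstract.
for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```

    Swapping in real fingerprints and the remaining classifiers (ANN, SVM, NB, DT) follows the same fit/score pattern.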

  9. Virtual reality boosts performance at AREVA Projects

    International Nuclear Information System (INIS)

    Bernasconi, F.

    2017-01-01

    AREVA Projects is one of the 6 business units of New AREVA; it is dedicated to engineering work across a broad range of activities, from mining to waste management via uranium chemistry and nuclear fuel recycling. AREVA Projects has opted for innovation to improve performance. Since 2012 virtual reality has been used through the creation of a room equipped with a high-definition screen and stereoscopic goggles. At the beginning, virtual reality was used to test and validate procedures for handling equipment thanks to a dynamic digital simulation of that equipment. Now virtual reality is used extensively to validate the design phase of projects without having to fabricate a physical mock-up, which saves time. The next step in the use of virtual reality is the implementation of a new generation of devices such as helmets and gloves that will allow better interaction with the virtual world. The continuous increase in computing power keeps pushing back the limits of what is possible in virtual reality. (A.C.)

  10. Commissioning of Theratron phoenix telecobalt machine and its performance assessment

    International Nuclear Information System (INIS)

    Rajendran, M.; Reddy, K.D.; Reddy, R.M.; Reddy, J.M.; Reddy, B.V.N.; Kumar, K.; Gopi, S.; Rajan, Dharani; Janardhanan

    2002-01-01

    Teletherapy machines such as cobalt-60 units and linear accelerators are extensively used for radiotherapy. A Theratron Phoenix telecobalt machine has been installed, and a brief report on its commissioning and performance is presented

  11. A self-calibrating robot based upon a virtual machine model of parallel kinematics

    DEFF Research Database (Denmark)

    Pedersen, David Bue; Eiríksson, Eyþór Rúnar; Hansen, Hans Nørgaard

    2016-01-01

    A delta-type parallel kinematics system for Additive Manufacturing has been created, which through a probing system can recognise its geometrical deviations from nominal and compensate for these in the driving inverse kinematic model of the machine. Novelty is that this model is derived from...... a virtual machine of the kinematics system, built on principles from geometrical metrology. Relevant mathematically non-trivial deviations to the ideal machine are identified and decomposed into elemental deviations. From these deviations, a routine is added to a physical machine tool, which allows...

  12. A virtualized software based on the NVIDIA cuFFT library for image denoising: performance analysis

    DEFF Research Database (Denmark)

    Galletti, Ardelio; Marcellino, Livia; Montella, Raffaele

    2017-01-01

    Abstract Generic Virtualization Service (GVirtuS) is a new solution for enabling GPGPU on Virtual Machines or low powered devices. This paper focuses on the performance analysis that can be obtained using a GPGPU virtualized software. Recently, GVirtuS has been extended in order to support CUDA...... ancillary libraries with good results. Here, our aim is to analyze the applicability of this powerful tool to a real problem, which uses the NVIDIA cuFFT library. As case study we consider a simple denoising algorithm, implementing a virtualized GPU-parallel software based on the convolution theorem...

  13. Towards the development of run times leveraging virtualization for high performance computing

    International Nuclear Information System (INIS)

    Diakhate, F.

    2010-12-01

    In recent years, there has been a growing interest in using virtualization to improve the efficiency of data centers. This success is rooted in virtualization's excellent fault tolerance and isolation properties, in the overall flexibility it brings, and in its ability to exploit multi-core architectures efficiently. These characteristics also make virtualization an ideal candidate to tackle issues found in new compute cluster architectures. However, in spite of recent improvements in virtualization technology, overheads in the execution of parallel applications remain, which prevent its use in the field of high performance computing. In this thesis, we propose a virtual device dedicated to message passing between virtual machines, so as to improve the performance of parallel applications executed in a cluster of virtual machines. We also introduce a set of techniques facilitating the deployment of virtualized parallel applications. These functionalities have been implemented as part of a runtime system which allows to benefit from virtualization's properties in a way that is as transparent as possible to the user while minimizing performance overheads. (author)

  14. INFORMATION INFRASTRUCTURE OF THE EDUCATIONAL ENVIRONMENT WITH VIRTUAL MACHINE TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Artem D. Beresnev

    2014-09-01

    Full Text Available Subject of research. An information infrastructure for a training environment using virtual machine technology for small pedagogical systems (separate classes, authors' courses) is created and investigated. Research technique. A life-cycle model of the information infrastructure for small pedagogical systems using virtual machines is constructed in the ARIS methodology. A technique for forming the information infrastructure with virtual machines on the basis of the process approach is offered. An event-chain model combined with an environment chart is used as the basic model. For each function of the event chain, the necessary set of information and software support is defined. The application of the technique is illustrated with the example of designing an information infrastructure for an educational environment, taking into account the specific character of small pedagogical systems. The advantages of the designed information infrastructure are: maximum use of open or free components; use of standard protocols (mainly HTTP and HTTPS); maximum portability (application servers can be started on any widespread operating system); a uniform interface for managing various virtualization platforms; the possibility of taking inventory of the contents of a virtual machine without starting it; and flexible inventory management of the virtual machine by means of adjustable chains of rules. Approbation. The results were tested at the training center "Institute of Informatics and Computer Facilities" (Tallinn, Estonia). Applying the technique within the course "Computer and Software Usage" halved the number of failures of information-infrastructure components requiring the intervention of a technical specialist, as well as the time needed to eliminate such malfunctions. In addition, the pupils, who gained broader experience with computers and software, showed better results

  15. A mathematical framework for virtual IMRT QA using machine learning.

    Science.gov (United States)

    Valdes, G; Scheuermann, R; Hung, C Y; Olszanski, A; Bellerive, M; Solberg, T D

    2016-07-01

    It is common practice to perform patient-specific pretreatment verification of clinical IMRT deliveries. This process can be time-consuming and not altogether instructive, due to the myriad sources that may produce a failing result. The purpose of this study was to develop an algorithm capable of predicting IMRT QA passing rates a priori. In total, 498 IMRT plans from all treatment sites were planned in Eclipse version 11 and delivered using a dynamic sliding-window technique on Clinac iX or TrueBeam linacs. Passing rates at 3%/3 mm local dose/distance-to-agreement (DTA) were recorded using a commercial 2D diode array. Each plan was characterized by 78 metrics that describe different aspects of plan complexity that could lead to disagreements between the calculated and measured dose. A Poisson regression with Lasso regularization was trained to learn the relation between the plan characteristics and each passing rate. Passing rates at 3%/3 mm local dose/DTA could be predicted with an error smaller than 3% for all plans analyzed. The most important metrics describing the passing rates were determined to be the MU factor (MU per Gy), small-aperture score, irregularity factor, and the fraction of the plan delivered at the corners of a 40 × 40 cm field: the higher the value of these metrics, the worse the passing rate. The virtual QA process predicts IMRT passing rates with high likelihood, allows the detection of failures due to setup errors, and is sensitive enough to detect small differences between matched linacs.
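    The regression step can be sketched with scikit-learn. Two caveats: scikit-learn's PoissonRegressor supports only an L2 penalty, so this sketch substitutes a plain Lasso linear regression for the paper's Poisson-Lasso model, and the "complexity metrics" below are simulated stand-ins (columns 0 and 1 loosely playing the roles of MU factor and small-aperture score), not clinical data:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Simulated plan-complexity metrics (stand-ins for the paper's 78 metrics):
# only columns 0 and 1 actually influence the simulated passing rate.
X = rng.random((498, 20))
passing = 100 - 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 0.5, 498)

X_tr, X_te, y_tr, y_te = train_test_split(X, passing, random_state=0)

model = Lasso(alpha=0.01).fit(X_tr, y_tr)
err = np.abs(model.predict(X_te) - y_te)
print(f"mean |error| = {err.mean():.2f}%")
print("nonzero coefficients:", np.flatnonzero(model.coef_))
```

    The L1 penalty drives the coefficients of irrelevant metrics towards zero, which is how a model of this family can single out a handful of metrics (MU factor, small-aperture score, and so on) as the important predictors.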

  16. Holistic virtual machine scheduling in cloud datacenters towards minimizing total energy

    OpenAIRE

    Li, Xiang; Garraghan, Peter; Jiang, Xiaohong; Wu, Zhaohui; Xu, Jie

    2018-01-01

    Energy consumed by Cloud datacenters has dramatically increased, driven by rapid uptake of applications and services globally provisioned through virtualization. By applying energy-aware virtual machine scheduling, Cloud providers are able to achieve enhanced energy efficiency and reduced operation cost. Energy consumption of datacenters consists of computing energy and cooling energy. However, due to the complexity of energy and thermal modeling of realistic Cloud datacenter operation, tradi...

  17. Protecting Files Hosted on Virtual Machines With Out-of-Guest Access Control

    Science.gov (United States)

    2017-12-01

    When an operating system (OS) runs on a virtual machine (VM), a hypervisor, the software that facilitates virtualization of computer

  18. Motion in Human and Machine: A Virtual Fatigue Approach

    NARCIS (Netherlands)

    Potkonjak, V.; Kostic, D.; Rasic, M.; Djordjevic, G.

    2002-01-01

    Achieving human-like behavior of a robot is a key issue of the paper. Redundancy in the inverse kinematics problem is resolved using a biological analogue. It is shown that by means of "virtual fatigue" functions, it is possible to generate robot movements similar to movements of a human arm subject

  19. Virtual Team Governance: Addressing the Governance Mechanisms and Virtual Team Performance

    Science.gov (United States)

    Zhan, Yihong; Bai, Yu; Liu, Ziheng

    As technology has improved and collaborative software has been developed, virtual teams, whose geographically dispersed members are spread across diverse physical locations, have become increasingly prominent. Virtual teams are supported by advancing communication technologies, which enable them largely to transcend time and space. They have changed the corporate landscape and are more complex and dynamic than traditional teams, since their members are spread across diverse geographical locations and play different roles within the team. How to achieve good governance of a virtual team and arrive at good virtual team performance is therefore becoming critical and challenging: good governance is essential for a high-performance virtual team. This paper explores the performance and governance mechanisms of virtual teams and establishes a model to explain the relationship between them. Focusing on managing virtual teams, it aims to find strategies that help business organizations improve the performance of their virtual teams and meet the objectives of good virtual team management.

  20. Extending the features of RBMK refuelling machine simulator with a training tool based on virtual reality

    International Nuclear Information System (INIS)

    Khoudiakov, M.; Slonimsky, V.; Mitrofanov, S.

    2004-01-01

    The paper describes the continuing efforts of an international Russian-Norwegian joint team to improve operational safety during the refuelling process of an RBMK-type reactor by implementing a training simulator based on an innovative Virtual Reality (VR) approach. During the preceding first stage of the project, a display-based simulator was extended with VR models of the real Refuelling Machine (RM) and its environment in order to improve both the learning process and operational effectiveness. The simulator's challenge is to support the performance (operational activity) of RM staff, first by helping them develop basic knowledge and skills, and also by keeping skilled staff in close touch with the complex machinery of the Refuelling Machine. During the second stage of the joint project, the functional scope of the VR simulator was greatly enhanced: first, by connecting it to the RBMK-unit full-scope simulator, and second, by including a training program and a simulator model upgrade. The present third stage of the project is primarily oriented towards improving the training process for maintenance and operational personnel by developing a Training Support Methodology and Courses (TSMC) based on Virtual Reality and the enlarged functionality of 3D and process modelling. The TSMC development is based on the requirements and recommendations of Russian and international regulatory bodies. The design, development, and creation of a specialized VR-based training system for RM maintenance personnel are very important for the Russian RBMK plants. The main goal is to create a powerful, autonomous VR-based simulator for training technical maintenance personnel on the Refuelling Machine. VR-based training is expected to improve the effect of training compared with current training based on traditional methods using printed documentation. The LNPP management and the regulatory bodies supported this goal. The VR-based Training System should

  1. Electrical Machines Laminations Magnetic Properties: A Virtual Instrument Laboratory

    Science.gov (United States)

    Martinez-Roman, Javier; Perez-Cruz, Juan; Pineda-Sanchez, Manuel; Puche-Panadero, Ruben; Roger-Folch, Jose; Riera-Guasp, Martin; Sapena-Baño, Angel

    2015-01-01

    Undergraduate courses in electrical machines often include an introduction to their magnetic circuits and to the various magnetic materials used in their construction and their properties. The students must learn to be able to recognize and compare the permeability, saturation, and losses of these magnetic materials, relate each material to its…

  2. A distributed parallel genetic algorithm of placement strategy for virtual machines deployment on cloud platform.

    Science.gov (United States)

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

    The cloud platform provides various services to users, and more and more cloud centers provide infrastructure as their main way of operating. To improve the utilization rate of the cloud center and to decrease operating costs, the cloud center provides services according to users' requirements by sharing resources through virtualization. Considering both QoS for users and cost savings for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) as a placement strategy for virtual machine deployment on a cloud platform. In the first stage, it executes the genetic algorithm in parallel and in a distributed manner on several selected physical hosts. It then continues with a second-stage genetic algorithm whose initial population consists of the solutions obtained in the first stage; the solution calculated by the second stage is the optimal one of the proposed approach. The experimental results show that the proposed placement strategy for VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform.
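    The two-stage structure described here (independent first-stage GA runs whose winners seed a second-stage GA) can be sketched on a toy placement problem. Everything below is illustrative: the VM demands, host capacity, energy proxy (number of active hosts), and GA parameters are assumptions, not the paper's DPGA:

```python
import random

random.seed(0)
VMS = [2, 3, 1, 4, 2, 2, 3, 1, 2, 4, 1, 3]   # hypothetical VM CPU demands
HOSTS = 6
CAP = 8                                       # per-host capacity

def cost(ind):
    """Energy proxy: active hosts, with a heavy penalty for overloaded hosts."""
    load = [0] * HOSTS
    for vm, h in zip(VMS, ind):
        load[h] += vm
    used = sum(1 for l in load if l > 0)
    overload = sum(max(0, l - CAP) for l in load)
    return used + 10 * overload

def ga(pop, generations=60):
    """A minimal generational GA: elitist selection, one-point crossover, mutation."""
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: len(pop) // 2]
        children = []
        while len(elite) + len(children) < len(pop):
            a, b = random.sample(elite, 2)
            cut = random.randrange(len(VMS))
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < 0.3:          # mutation: move one VM
                child[random.randrange(len(VMS))] = random.randrange(HOSTS)
            children.append(child)
        pop = elite + children
    return sorted(pop, key=cost)

def random_pop(size):
    return [[random.randrange(HOSTS) for _ in VMS] for _ in range(size)]

# Stage 1: independent GA runs, standing in for runs on several physical hosts.
seeds = [ga(random_pop(20))[0] for _ in range(4)]

# Stage 2: a final GA seeded with the stage-1 winners as part of its population.
best = ga(seeds + random_pop(16))[0]
print("active hosts in best placement:", cost(best))
```

    In the real DPGA the stage-1 runs execute concurrently on separate hosts; the sequential loop above only mimics the data flow between the stages.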

  4. A novel artificial bee colony approach of live virtual machine migration policy using Bayes theorem.

    Science.gov (United States)

    Xu, Gaochao; Ding, Yan; Zhao, Jia; Hu, Liang; Fu, Xiaodong

    2013-01-01

    The green cloud data center has become a research hotspot of virtualized cloud computing architecture. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we focus on the VM placement selection of live migration for power saving. We present a novel heuristic approach called PS-ABC. Its algorithm comprises two parts. The first combines the artificial bee colony (ABC) idea with uniform random initialization, binary search, and a Boltzmann selection policy to obtain an improved ABC-based approach with better global exploration and local exploitation ability. The second uses Bayes' theorem to further optimize the improved ABC-based process so that the final optimal solution is reached faster. As a result, the whole approach achieves a longer-term, efficient optimization for power saving. The experimental results demonstrate that PS-ABC evidently reduces the total incremental power consumption and better protects the performance of running and migrating VMs compared with existing research, making the result of live VM migration more effective and meaningful.
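The Boltzmann selection policy named above can be sketched generically; this is an illustration of the selection rule only, not the PS-ABC algorithm, and the demo placement plans and scores are invented.

```python
import math
import random

def boltzmann_select(candidates, fitness, temperature):
    # Boltzmann selection: a candidate is picked with probability
    # proportional to exp(fitness / T). High T behaves almost uniformly
    # (global exploration); low T behaves almost greedily (exploitation).
    weights = [math.exp(fitness(c) / temperature) for c in candidates]
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for candidate, weight in zip(candidates, weights):
        acc += weight
        if r <= acc:
            return candidate
    return candidates[-1]

# Demo with hypothetical placement plans scored by estimated power saving.
power_saving = {"plan_a": 0.1, "plan_b": 0.5, "plan_c": 0.9}
choice = boltzmann_select(list(power_saving), power_saving.get, temperature=0.05)
```

Annealing the temperature downward over iterations shifts the search from exploration toward exploitation, which is why the policy pairs naturally with population-based heuristics such as ABC.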

  5. A Location Selection Policy of Live Virtual Machine Migration for Power Saving and Load Balancing

    Directory of Open Access Journals (Sweden)

    Jia Zhao

    2013-01-01

    Full Text Available The green cloud data center has become a research hotspot of virtualized cloud computing architecture, and load balancing has also been one of the most important goals in cloud data centers. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we focus on the location selection (migration policy) of live VM migration for power saving and load balancing. We propose a novel approach, MOGA-LS, a heuristic and self-adaptive multiobjective optimization algorithm based on an improved genetic algorithm (GA). This paper presents the specific design and implementation of MOGA-LS, including the genetic operators, fitness values, and elitism. We introduce Pareto dominance theory and the simulated annealing (SA) idea into MOGA-LS and present the specific process of obtaining the final solution; thus, the whole approach achieves a long-term, efficient optimization for power saving and load balancing. The experimental results demonstrate that MOGA-LS evidently reduces the total incremental power consumption, better protects the performance of VM migration, and balances the system load compared with existing research, making the result of live VM migration more effective and meaningful.

  6. A location selection policy of live virtual machine migration for power saving and load balancing.

    Science.gov (United States)

    Zhao, Jia; Ding, Yan; Xu, Gaochao; Hu, Liang; Dong, Yushuang; Fu, Xiaodong

    2013-01-01

    The green cloud data center has become a research hotspot of virtualized cloud computing architecture, and load balancing has also been one of the most important goals in cloud data centers. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we focus on the location selection (migration policy) of live VM migration for power saving and load balancing. We propose a novel approach, MOGA-LS, a heuristic and self-adaptive multiobjective optimization algorithm based on an improved genetic algorithm (GA). This paper presents the specific design and implementation of MOGA-LS, including the genetic operators, fitness values, and elitism. We introduce Pareto dominance theory and the simulated annealing (SA) idea into MOGA-LS and present the specific process of obtaining the final solution; thus, the whole approach achieves a long-term, efficient optimization for power saving and load balancing. The experimental results demonstrate that MOGA-LS evidently reduces the total incremental power consumption, better protects the performance of VM migration, and balances the system load compared with existing research, making the result of live VM migration more effective and meaningful.
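The two ingredients named above, Pareto dominance and the simulated annealing acceptance rule, can be sketched generically. This is not the paper's MOGA-LS implementation; objective vectors are assumed to be minimised (e.g. power consumption and load imbalance).

```python
import math
import random

def dominates(a, b):
    # Pareto dominance for minimisation: a dominates b if it is no worse in
    # every objective and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    # Keep only the non-dominated objective vectors.
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

def sa_accept(delta, temperature):
    # Simulated-annealing acceptance: always keep improvements (delta <= 0),
    # accept a worsening move with probability exp(-delta / T).
    return delta <= 0 or random.random() < math.exp(-delta / temperature)
```

A multiobjective GA typically ranks individuals by which Pareto front they fall on, while the SA rule lets the search occasionally accept worse candidates early on to escape local optima.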

  7. A Novel Artificial Bee Colony Approach of Live Virtual Machine Migration Policy Using Bayes Theorem

    Directory of Open Access Journals (Sweden)

    Gaochao Xu

    2013-01-01

    Full Text Available The green cloud data center has become a research hotspot of virtualized cloud computing architecture. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we focus on the VM placement selection of live migration for power saving. We present a novel heuristic approach called PS-ABC. Its algorithm comprises two parts. The first combines the artificial bee colony (ABC) idea with uniform random initialization, binary search, and a Boltzmann selection policy to obtain an improved ABC-based approach with better global exploration and local exploitation ability. The second uses Bayes' theorem to further optimize the improved ABC-based process so that the final optimal solution is reached faster. As a result, the whole approach achieves a longer-term, efficient optimization for power saving. The experimental results demonstrate that PS-ABC evidently reduces the total incremental power consumption and better protects the performance of running and migrating VMs compared with existing research, making the result of live VM migration more effective and meaningful.

  8. Virtual Machine Replication on Achieving Energy-Efficiency in a Cloud

    Directory of Open Access Journals (Sweden)

    Subrota K. Mondal

    2016-07-01

    Full Text Available The rapid growth in cloud service demand has led to the establishment of large-scale virtualized data centers in which virtual machines (VMs) are used to handle user requests for service. A user's request cannot be completed if the VM fails. Replication mechanisms can be used to mitigate the impact of failures. Further, data centers consume a large amount of energy, resulting in high operating costs and contributing to significant greenhouse gas (GHG) emissions. In this paper, we focus on Infrastructure as a Service (IaaS) clouds where user job requests are processed by VMs, and we analyze the effectiveness of VM replication in terms of both job completion time and energy consumption. Three different schemes are considered: cold, warm, and hot replication. The trade-offs between job completion time and energy consumption in the different replication schemes are characterized through comprehensive analytical models which capture VM state transitions and the associated power consumption patterns. The effectiveness of the replication schemes is demonstrated through experimental results. To verify the validity of the proposed analytical models, we extend the widely used cloud simulator CloudSim and compare the simulation results with the analytical solutions.
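The cold/warm/hot trade-off can be illustrated with a toy cost model. All numbers below are hypothetical placeholders, not values from the paper's analytical model, which tracks VM state transitions in detail.

```python
# Hypothetical per-scheme parameters (illustrative only): the fraction of
# full power the replica draws while the primary runs, and the recovery
# delay (seconds) a job suffers if the primary VM fails.
SCHEMES = {
    "cold": (0.0, 120.0),  # replica off: no extra power, boot + restart on failure
    "warm": (0.4, 30.0),   # replica booted but idle: some power, shorter delay
    "hot":  (1.0, 1.0),    # replica mirrors execution: full power, near-zero delay
}

def expected_cost(scheme, job_time, fail_prob, power_unit=100.0):
    # Expected job completion time, and energy drawn by primary plus replica.
    replica_power, recovery = SCHEMES[scheme]
    completion = job_time + fail_prob * recovery
    energy = job_time * power_unit * (1.0 + replica_power)
    return completion, energy
```

Even this crude model reproduces the qualitative trade-off the paper studies: hot replication minimises completion time at maximum energy, cold replication the reverse, with warm in between.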

  9. Liberating Virtual Machines from Physical Boundaries through Execution Knowledge

    Science.gov (United States)

    2015-12-01

    Report excerpts: "…trivial infrastructures such as VM distribution networks, clients need to wait for an extended period of time before launching a VM. In cloud settings…"; "…hardware support. MobiDesk [28] efficiently supports virtual desktops in mobile environments by decoupling the user's workload from host systems and…"; "…experiment set-up. VMs are migrated between a pair of source and destination hosts, which are connected through a backend 10 Gbps network for…"

  10. CloudGC: Recycling Idle Virtual Machines in the Cloud

    OpenAIRE

    Zhang , Bo; Al-Dhuraibi , Yahya; Rouvoy , Romain; Paraiso , Fawaz; Seinturier , Lionel

    2017-01-01

    Cloud computing conveys the image of a pool of unlimited virtual resources that can be quickly and easily provisioned to accommodate user requirements. However, this flexibility may require adjusting physical resources at the infrastructure level to keep pace with user requests. While elasticity can be considered the de facto solution to this issue, it can still be broken by budget requirements or the physical limitations of a private cloud. I...

  11. 3D-e-Chem-VM: Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine

    NARCIS (Netherlands)

    McGuire, R.; Verhoeven, S.; Vass, M.; Vriend, G.; Esch, I.J. de; Lusher, S.J.; Leurs, R.; Ridder, L.; Kooistra, A.J.; Ritschel, T.; Graaf, C. de

    2017-01-01

    3D-e-Chem-VM is an open source, freely available Virtual Machine ( http://3d-e-chem.github.io/3D-e-Chem-VM/ ) that integrates cheminformatics and bioinformatics tools for the analysis of protein-ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools

  12. 3D-e-Chem-VM : Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine

    NARCIS (Netherlands)

    McGuire, Ross; Verhoeven, Stefan; Vass, Márton; Vriend, Gerrit; De Esch, Iwan J P; Lusher, Scott J.; Leurs, Rob; Ridder, Lars; Kooistra, Albert J.; Ritschel, Tina; de Graaf, C.

    2017-01-01

    3D-e-Chem-VM is an open source, freely available Virtual Machine ( http://3d-e-chem.github.io/3D-e-Chem-VM/ ) that integrates cheminformatics and bioinformatics tools for the analysis of protein-ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools

  13. VMIL 2011 : the 5th Workshop on Virtual Machines and Intermediate Languages

    NARCIS (Netherlands)

    Rajan, Hridesh; Hauptmann, Michael; Bockisch, Christoph; Dyer, Robert

    2011-01-01

    The VMIL workshop is a forum for research in virtual machines and intermediate languages. It is dedicated to identifying programming mechanisms and constructs that are currently realized as code transformations or implemented in libraries but should rather be supported at VM level. Candidates for

  14. 6th Workshop on Virtual Machines and Intermediate Languages (VMIL’12)

    NARCIS (Netherlands)

    Rajan, Hridesh; Hauptmann, Michael; Bockisch, Christoph; Blackburn, Steve

    2012-01-01

    The VMIL workshop is a forum for research in virtual machines and intermediate languages. It is dedicated to identifying programming mechanisms and constructs that are currently realized as code transformations or implemented in libraries but should rather be supported at VM level. Candidates for

  15. Seamless live migration of virtual machines over the MAN/WAN

    NARCIS (Netherlands)

    Travostino, F.; Daspit, P.; Gommans, L.; Jog, C.; de Laat, C.; Mambretti, J.; Monga, I.; van Oudenaarde, B.; Raghunath, S.; Wang, P.Y.

    2006-01-01

    The “VM Turntable” demonstrator at iGRID 2005 pioneered the integration of Virtual Machines (VMs) with deterministic “lightpath” network services across a MAN/WAN. The results provide for a new stage of virtualization—one for which computation is no longer localized within a data center but rather

  16. Slot Machines: Pursuing Responsible Gaming Practices for Virtual Reels and Near Misses

    Science.gov (United States)

    Harrigan, Kevin A.

    2009-01-01

    Since 1983, slot machines in North America have used a computer and virtual reels to determine the odds. Since at least 1988, a technique called clustering has been used to create a high number of near misses, failures that are close to wins. The result is that what the player sees does not represent the underlying probabilities and randomness,…
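The virtual-reel mechanism and near-miss clustering described above can be illustrated with a toy model. The symbol counts are invented, chosen only to show how mapping many virtual stops next to the jackpot symbol inflates the near misses a player sees relative to the true odds.

```python
import random

# Hypothetical 64-stop virtual reel. "Clustering" maps many virtual stops
# to positions adjacent to the jackpot symbol, so displayed near misses
# greatly outnumber the actual winning odds.
VIRTUAL_REEL = (
    ["jackpot"] * 2 +        # true win odds on this reel: 2/64
    ["near_jackpot"] * 14 +  # stops showing the jackpot just off the payline
    ["blank"] * 48
)

def spin():
    # Each virtual stop is equally likely; what the player *sees* is not
    # representative of the underlying probabilities.
    return random.choice(VIRTUAL_REEL)
```

Here a near miss is seven times more likely than a win, even though every virtual stop is drawn uniformly, which is exactly the perceptual distortion the article discusses.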

  17. Software architecture standard for simulation virtual machine, version 2.0

    Science.gov (United States)

    Sturtevant, Robert; Wessale, William

    1994-01-01

    The Simulation Virtual Machine (SVM) is an Ada architecture which eases the effort involved in real-time software maintenance and sustaining engineering. The Software Architecture Standard defines the infrastructure from which all the simulation models are built. SVM was developed for and used in the Space Station Verification and Training Facility.

  18. Micro-CernVM: slashing the cost of building and deploying virtual machines

    International Nuclear Information System (INIS)

    Blomer, J; Berzano, D; Buncic, P; Charalampidis, I; Ganis, G; Lestaris, G; Meusel, R; Nicolaou, V

    2014-01-01

    The traditional virtual machine (VM) building and deployment process is centered around the virtual machine hard disk image. The packages comprising the VM operating system are carefully selected, hard disk images are built for a variety of different hypervisors, and images have to be distributed and decompressed in order to instantiate a virtual machine. Within the HEP community, the CernVM File System (CernVM-FS) has been established in order to decouple the distribution of the experiment software from the building and distribution of the VM hard disk images. We show how to get rid of such pre-built hard disk images altogether. Due to the high requirements on POSIX compliance imposed by HEP application software, CernVM-FS can also be used to host and boot a Linux operating system. This allows the use of a tiny bootable CD image that comprises only a Linux kernel, while the rest of the operating system is provided on demand by CernVM-FS. This approach speeds up the initial instantiation time and reduces virtual machine image sizes by an order of magnitude. Furthermore, security updates can be distributed instantaneously through CernVM-FS. By leveraging the fact that CernVM-FS is a versioning file system, a historic analysis environment can be easily re-spawned by selecting the corresponding CernVM-FS file system snapshot.

  19. Automated Analysis of ARM Binaries using the Low-Level Virtual Machine Compiler Framework

    Science.gov (United States)

    2011-03-01

    Report excerpts: "…Maintenance ABACAS offers a level of flexibility in software development that would be very useful later in the software engineering life cycle. New…"; a citation fragment ("Blackjacking: security threats to BlackBerry devices, PDAs and cell phones in the enterprise. Indianapolis, Indiana, U.S.A.: Wiley Publishing, 2007"); and the cover page: "Automated Analysis of ARM Binaries Using the Low-Level Virtual Machine Compiler Framework. Thesis, Jeffrey B. Scott."

  20. The man, the machine and the sacred: when the virtual reality reenchants the world

    Directory of Open Access Journals (Sweden)

    Olivier NANNIPIERI

    2011-01-01

    Full Text Available The rationality associated with technical progress could lead one to believe that the world has become disenchanted. Yet, far from disenchanting the world, certain technical devices reveal the sacred dimension inherent in any human activity. Indeed, paradoxically, we show that the human-machine interaction producing virtual environments is an experience of the sacred.

  1. Estimation of the applicability domain of kernel-based machine learning models for virtual screening

    Directory of Open Access Journals (Sweden)

    Fechner Nikolas

    2010-03-01

    Full Text Available Background: The virtual screening of large compound databases is an important application of structural-activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. Results: We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemspace, in which the model gives reliable predictions, from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening.
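The threshold comparison described above can be sketched as follows. This is a toy illustration with invented (applicability score, absolute error) pairs, not the paper's kernel-based score formulations.

```python
def screen_with_domain(scored_predictions, keep_fraction=0.5):
    # scored_predictions: hypothetical (applicability_score, abs_error) pairs.
    # Rank by applicability score, keep only the most applicable fraction of
    # the screening set, and report the mean prediction error on that subset.
    ranked = sorted(scored_predictions, key=lambda p: p[0], reverse=True)
    kept = ranked[: max(1, int(len(ranked) * keep_fraction))]
    return sum(err for _, err in kept) / len(kept)

# Toy screening set where low applicability coincides with large errors.
data = [(0.9, 0.1), (0.8, 0.2), (0.3, 1.0), (0.1, 2.0)]
error_all = screen_with_domain(data, keep_fraction=1.0)
error_top_half = screen_with_domain(data, keep_fraction=0.5)
```

Sweeping `keep_fraction` and comparing the resulting error (or ranking quality) per threshold is the shape of the evaluation the abstract describes.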

  2. Estimation of the applicability domain of kernel-based machine learning models for virtual screening.

    Science.gov (United States)

    Fechner, Nikolas; Jahn, Andreas; Hinselmann, Georg; Zell, Andreas

    2010-03-11

    The virtual screening of large compound databases is an important application of structural-activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemspace, in which the model gives reliable predictions, from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening. 
The proposed applicability domain formulations

  3. Protection of Mission-Critical Applications from Untrusted Execution Environment: Resource Efficient Replication and Migration of Virtual Machines

    Science.gov (United States)

    2015-09-28

    Report excerpts: "…in the same LAN; this setup resembles the typical setup in a virtualized datacenter where protected and backup hosts are connected by an internal LAN…"; form fields (Grant Number: FA9550-10-1-0393; Author: Kang G. Shin; Distribution A, approved for public release); and the abstract opening: "Continuous replication and live migration of Virtual Machines (VMs…"

  4. Efficient Hybrid Genetic Based Multi Dimensional Host Load Aware Algorithm for Scheduling and Optimization of Virtual Machines

    OpenAIRE

    Thiruvenkadam, T; Karthikeyani, V

    2014-01-01

    Mapping virtual machines to a cluster of physical machines is called VM placement. Placing a VM on the appropriate host is necessary for ensuring effective resource utilization and minimizing datacenter cost as well as power. Here we present an efficient hybrid genetic-based, host-load-aware algorithm for the scheduling and optimization of virtual machines in a cluster of physical hosts. We developed the algorithm based on two different methods: first, initial VM packing is done by...

  5. Applying machine learning techniques for forecasting flexibility of virtual power plants

    DEFF Research Database (Denmark)

    MacDougall, Pamela; Kosek, Anna Magdalena; Bindner, Henrik W.

    2016-01-01

    This paper presents an approach to investigating the longevity of the aggregated response of a virtual power plant, using historic bidding and aggregated behaviour with machine learning techniques. The two supervised machine learning techniques investigated and compared are multivariate linear regression and a single artificial neural network (ANN). It is found that it is possible to estimate the longevity of flexibility with machine learning. The linear regression algorithm is, on average, able to estimate the longevity with a 15% error. However, there was a significant improvement with the ANN algorithm, achieving, on average, a 5.3% error; this is lowered to 2.4% when learning for the same virtual power plant. With this information it would be possible to accurately offer residential VPP flexibility for market operations and safely avoid causing further imbalances and financial penalties.
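The regression baseline quoted above can be sketched with a univariate stand-in for the paper's multivariate regression, together with the percentage-error metric in which its results are reported. All data and names here are illustrative.

```python
def fit_linear(xs, ys):
    # Ordinary least squares with a single predictor; a univariate stand-in
    # for the multivariate linear regression used in the paper.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return lambda x: slope * x + intercept

def mean_pct_error(model, xs, ys):
    # Mean absolute percentage error, the style of figure quoted above
    # (e.g. "a 15% error").
    return 100.0 * sum(abs(model(x) - y) / abs(y)
                       for x, y in zip(xs, ys)) / len(xs)
```

Comparing this metric for the regression model and for an ANN on held-out flexibility data is the shape of the evaluation the abstract reports.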

  6. A Virtual Machine Migration Strategy Based on Time Series Workload Prediction Using Cloud Model

    Directory of Open Access Journals (Sweden)

    Yanbing Liu

    2014-01-01

    Full Text Available Aimed at resolving the imbalance of resources and workloads at data centers, together with the overhead and high cost of virtual machine (VM) migrations, this paper proposes a new VM migration strategy based on a cloud-model time-series workload prediction algorithm. The strategy sets upper and lower workload bounds for host machines, forecasts the tendency of their subsequent workloads by building a workload time series with the cloud model, and applies a general VM migration criterion, workload-aware migration (WAM), to select a source host machine, a destination host machine, and a VM on the source host to migrate. Experimental results and analyses show, through comparison with other peer research works, that the proposed method can effectively avoid VM migrations caused by momentary peak workload values, significantly lower the number of VM migrations, and dynamically reach and maintain a resource and workload balance for virtual machines, promoting improved utilization of resources in the entire data center.
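The workload-aware trigger described above can be sketched as follows. The forecaster here is a plain moving average standing in for the paper's cloud-model time-series prediction; the bounds and window are hypothetical.

```python
def predict_next(history, window=3):
    # Stand-in forecaster: moving average of the recent window. The paper
    # instead builds a workload time series with a cloud model.
    recent = history[-window:]
    return sum(recent) / len(recent)

def needs_migration(history, lower=0.2, upper=0.8):
    # Workload-aware migration (WAM) criterion, sketched: trigger only when
    # the *predicted* workload breaches a bound, so a momentary peak in the
    # observed series does not cause a needless migration.
    forecast = predict_next(history)
    return forecast > upper or forecast < lower
```

Gating migration on the forecast rather than the latest sample is what lets the strategy ignore momentary peak values, the property the experiments above highlight.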

  7. HEP specific benchmarks of virtual machines on multi-core CPU architectures

    International Nuclear Information System (INIS)

    Alef, M; Gable, I

    2010-01-01

    Virtualization technologies such as Xen can be used in order to satisfy the disparate and often incompatible system requirements of different user groups in shared-use computing facilities. This capability is particularly important for HEP applications, which often have restrictive requirements. The use of virtualization adds flexibility; however, it is essential that the virtualization technology place little overhead on the HEP application. We present an evaluation of the practicality of running HEP applications in multiple Virtual Machines (VMs) on a single multi-core Linux system. We use the benchmark suite used by the HEPiX CPU Benchmarking Working Group to give a quantitative evaluation relevant to the HEP community. Benchmarks are packaged inside VMs, which are then booted onto a single multi-core system and executed simultaneously to simulate highly loaded VMs running HEP applications. These techniques are applied to a variety of multi-core CPU architectures and VM configurations.

  8. Milling Process Planning of a Complex Workpiece Using a Virtual Machine Tool

    Directory of Open Access Journals (Sweden)

    Jorge‐Andrés García‐Barbosa

    2014-08-01

    Full Text Available We designed and successfully manufactured a complex experimental part composed of surfaces with zero, positive, and negative curvature. We planned and executed the machining process using milling with ball-nose tools on a vertical machining center equipped with a fourth, external rotary axis. For the planning, simulation, and verification of the process, we developed a virtual model of the available machine tool and its accessories in a commercial computer-aided machining system. With the virtual set-up of the manufacturing system, we verified and adjusted the process until good performance was observed. This confirmed the advantages of using the recent virtual methods offered by several computer-aided machining systems for process simulation, especially for complex components machined on machine tools with more than three axes. Keywords: virtual machine tools, process planning, machining of complex parts, process simulation and verification, multi-axis machining.

  9. Exploiting the ALICE HLT for PROOF by scheduling of Virtual Machines

    International Nuclear Information System (INIS)

    Meoni, Marco; Boettger, Stefan; Zelnicek, Pierre; Kebschull, Udo; Lindenstruth, Volker

    2011-01-01

    The HLT (High-Level Trigger) group of the ALICE experiment at the LHC has prepared a virtual Parallel ROOT Facility (PROOF) enabled cluster (HAF - HLT Analysis Facility) for fast physics analysis, detector calibration and reconstruction of data samples. The HLT cluster currently consists of 2860 CPU cores and 175 TB of storage. Its purpose is the online filtering of the relevant part of the data produced by the particle detector. However, data taking does not run continuously, and exploiting unused cluster resources for other applications is highly desirable, as it improves the usage-cost ratio of the HLT cluster. As such, unused computing resources are dedicated to a PROOF-enabled virtual cluster available to the entire collaboration. This setup is especially aimed at the prototyping phase of analyses that need a high number of development iterations and a short response time, e.g. tuning of analysis cuts, calibration and alignment. HAF machines are enabled and disabled upon user request to start or complete analysis tasks. This is achieved by a virtual machine scheduling framework which dynamically assigns and migrates virtual machines running PROOF workers to unused physical resources. Using this approach we extend the HLT usage scheme to run both online and offline computing, thereby optimizing the resource usage.

  10. Developing Parametric Models for the Assembly of Machine Fixtures for Virtual Multiaxial CNC Machining Centers

    Science.gov (United States)

    Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.

    2018-01-01

    This paper dwells upon a variance parameterization method. Variance or dimensional parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated in a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the build tree. In this research the authors consider a parameterization method for the machine tooling used in manufacturing parts on multiaxial CNC machining centers in a real manufacturing process. The developed method allows a significant reduction in tooling design time when a part's geometric parameters change. The method can also reduce the time for design and engineering preproduction, in particular for the development of control programs for CNC equipment and control and measuring machines, and automate the release of design and engineering documentation. Variance parameterization helps to optimize the design of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.

  11. A Load Balancing Scheme Using Federate Migration Based on Virtual Machines for Cloud Simulations

    Directory of Open Access Journals (Sweden)

    Xiao Song

    2015-01-01

    Full Text Available A maturing and promising technology, Cloud computing can benefit large-scale simulations by providing on-demand, anywhere simulation services to users. In order to enable multitask and multiuser simulation systems with Cloud computing, the Cloud simulation platform (CSP) was proposed and developed. To use key techniques of Cloud computing, such as virtualization, to promote the running efficiency of large-scale military HLA systems, this paper proposes a new type of federate container, the virtual machine (VM), and a dynamic migration algorithm for it that considers both computation and communication cost. Experiments show that the migration scheme effectively improves the running efficiency of the HLA system when the distributed system is not saturated.
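A cost function combining computation and communication cost, in the spirit of the migration algorithm described above, might look like the following. The model, weights, and field names are entirely hypothetical, not the paper's formulation.

```python
def migration_cost(vm_mem_gb, bandwidth_gbps, cpu_load, comm_penalty=0.5):
    # Hypothetical cost model: communication cost is the time to copy the
    # VM's memory over the network, computation cost approximates the work
    # disrupted on the loaded destination host while the copy runs.
    transfer_time = vm_mem_gb * 8 / bandwidth_gbps  # seconds, GB -> Gbit
    compute_cost = cpu_load * transfer_time
    return transfer_time + comm_penalty * compute_cost

def pick_destination(vm, hosts):
    # Migrate the federate VM to the host with the lowest combined cost.
    return min(hosts, key=lambda h: migration_cost(vm["mem"], h["bw"], h["load"]))
```

Weighing both terms keeps the scheduler from migrating onto a well-connected but heavily loaded host, or a lightly loaded host behind a slow link.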

  12. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment

    Science.gov (United States)

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S.; Phoon, Sin Ye

    2016-06-01

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis aligned bounding box collision detection. The conducted case study revealed that given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced the highest deviation of 3.83 mm and 5.8 mm respectively.
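The axis-aligned bounding box (AABB) test used for collision detection in the CNC module works by checking interval overlap on every axis; a generic sketch (the box representation here is an assumption, not the paper's data structure):

```python
def aabb_overlap(a, b):
    # Two axis-aligned bounding boxes overlap iff their intervals overlap on
    # every axis. A box is ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    (a_min, a_max), (b_min, b_max) = a, b
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))
```

Because a single comparison per axis rejects most non-colliding pairs, AABB tests are a cheap first pass before any exact tool-workpiece intersection check.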

  13. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment

    Science.gov (United States)

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S.; Phoon, Sin Ye

    2016-01-01

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis-aligned bounding box collision detection. The conducted case study revealed that, given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced maximum deviations of 3.83 mm and 5.8 mm, respectively. PMID:27271840

  14. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment.

    Science.gov (United States)

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S; Phoon, Sin Ye

    2016-06-07

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis-aligned bounding box collision detection. The conducted case study revealed that, given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced maximum deviations of 3.83 mm and 5.8 mm, respectively.
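The axis-aligned bounding box (AABB) test used by the CNC module above reduces to an interval-overlap check on each axis; a minimal sketch (the box coordinates are made up):

```python
def aabb_overlap(a_min, a_max, b_min, b_max):
    """Two axis-aligned boxes collide iff their extents overlap on every axis."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

# Tool envelope vs. workpiece vs. a clearly separated box (illustrative)
tool = ((0, 0, 0), (1, 1, 1))
part = ((0.5, 0.5, 0.5), (2, 2, 2))
clear = ((3, 3, 3), (4, 4, 4))
print(aabb_overlap(*tool, *part))   # True  - boxes interpenetrate
print(aabb_overlap(*tool, *clear))  # False - fully separated
```

Because the test is so cheap, AABBs are typically used as a broad-phase filter before any exact (e.g. triangle-level) collision check.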

  15. Measuring performance in virtual reality phacoemulsification surgery

    Science.gov (United States)

    Söderberg, Per; Laurell, Carl-Gustaf; Simawi, Wamidh; Skarman, Eva; Nordh, Leif; Nordqvist, Per

    2008-02-01

    We have developed a virtual reality (VR) simulator for phacoemulsification surgery. The current work aimed at developing a relative performance index that characterizes the performance of an individual trainee. We recorded measurements of 28 response variables during three iterated surgical sessions in 9 experienced cataract surgeons, separately for the sculpting phase and the evacuation phase of phacoemulsification surgery, and compared their outcome to that of a reference group of naive trainees. We defined an individual overall performance index, an individual class-specific performance index, and an individual variable-specific performance index. We found that, on average, the experienced surgeons performed at a lower level than a reference group of naive trainees, but that this was particularly attributed to a few surgeons. When their overall performance index was further analyzed as class-specific and variable-specific performance indices, it was found that the low-level performance was attributed to a behavior that is acceptable for an experienced surgeon but not for a naive trainee. It was concluded that relative performance indices should use a reference group that corresponds to the measured individual, since the definition of optimal surgery may vary among trainee groups depending on their level of experience.
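A relative performance index of the kind described above can be sketched as z-scores against a reference group; the variable names and reference readings below are invented for illustration, and the simulator's actual 28-variable weighting is certainly more involved.

```python
from statistics import mean, stdev

def variable_index(value, reference):
    """Performance on one response variable, as a z-score relative to the
    reference group's distribution for that variable."""
    return (value - mean(reference)) / stdev(reference)

def overall_index(values, references):
    """Overall index: mean of the variable-specific indices."""
    return mean(variable_index(v, r) for v, r in zip(values, references))

# Hypothetical reference-group readings for two response variables
ref_phaco_time = [410, 450, 430, 470]   # seconds
ref_ultrasound = [35, 40, 38, 42]       # percent power
print(round(overall_index([400, 30], [ref_phaco_time, ref_ultrasound]), 2))
```

As the abstract notes, the choice of reference group matters: the same raw readings yield a different index when compared against experienced surgeons rather than naive trainees.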

  16. Parallel Compilation on Virtual Machines in a Development Cloud Environment

    Science.gov (United States)

    2013-09-01

    the potential impact of a possible course of action. 2. Approach: We performed a simple experiment to determine whether the multiple CPUs... (Institute for Defense Analyses report D-4996)

  17. CloVR: A virtual machine for automated and portable sequence analysis from the desktop using cloud computing

    Science.gov (United States)

    2011-01-01

    Background: Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. Results: We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome, and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources, and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion: The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing. PMID:21878105

  18. A Cross-Entropy-Based Admission Control Optimization Approach for Heterogeneous Virtual Machine Placement in Public Clouds

    Directory of Open Access Journals (Sweden)

    Li Pan

    2016-03-01

    Full Text Available Virtualization technologies make it possible for cloud providers to consolidate multiple IaaS provisions into a single server in the form of virtual machines (VMs). Additionally, in order to fulfill the divergent service requirements from multiple users, a cloud provider needs to offer several types of VM instances, which are associated with varying configurations and performance, as well as different prices. In such a heterogeneous virtual machine placement process, one significant problem faced by a cloud provider is how to optimally accept and place multiple VM service requests into its cloud data centers to achieve revenue maximization. To address this issue, in this paper, we first formulate such a revenue maximization problem during VM admission control as a multiple-dimensional knapsack problem, which is known to be NP-hard to solve. Then, we propose to use a cross-entropy-based optimization approach to address this revenue maximization problem, by obtaining a near-optimal eligible set for the provider to accept into its data centers, from the waiting VM service requests in the system. Finally, through extensive experiments and measurements in a simulated environment with the settings of VM instance classes derived from real-world cloud systems, we show that our proposed cross-entropy-based admission control optimization algorithm is efficient and effective in maximizing cloud providers’ revenue in a public cloud computing environment.
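The cross-entropy idea can be sketched as follows: sample accept/reject vectors from per-request Bernoulli probabilities, then refit those probabilities to the highest-revenue feasible samples. This toy uses a one-dimensional knapsack as a stand-in for the paper's multiple-dimensional formulation, and all numbers are invented.

```python
import random

def ce_knapsack(values, weights, capacity, n_samples=200, elite_frac=0.1,
                n_iters=30, seed=1):
    """Cross-entropy method for a 0/1 knapsack: sample candidate accept/reject
    vectors, rank them by revenue (infeasible ones last), and move the
    Bernoulli sampling probabilities toward the elite samples."""
    rng = random.Random(seed)
    n = len(values)

    def revenue(x):  # -1 marks an infeasible (over-capacity) selection
        if sum(w * xi for w, xi in zip(weights, x)) > capacity:
            return -1
        return sum(v * xi for v, xi in zip(values, x))

    p = [0.5] * n                       # acceptance probability per request
    best_x, best_v = [0] * n, 0
    for _ in range(n_iters):
        samples = [[int(rng.random() < p[i]) for i in range(n)]
                   for _ in range(n_samples)]
        samples.sort(key=revenue, reverse=True)
        if revenue(samples[0]) > best_v:
            best_x, best_v = samples[0], revenue(samples[0])
        elites = samples[:max(1, int(elite_frac * n_samples))]
        p = [sum(e[i] for e in elites) / len(elites) for i in range(n)]
    return best_x, best_v

# Four waiting VM requests: revenues and resource demands are toy numbers
accepted, total = ce_knapsack(values=[10, 7, 5, 3], weights=[4, 3, 2, 1],
                              capacity=6)
print(accepted, total)
```

The same loop generalizes to the multiple-dimensional case by checking every resource dimension in the feasibility test.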

  19. CloVR: a virtual machine for automated and portable sequence analysis from the desktop using cloud computing.

    Science.gov (United States)

    Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian

    2011-08-30

    Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome, and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources, and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing.

  20. Virtual machine provisioning, code management, and data movement design for the Fermilab HEPCloud Facility

    Science.gov (United States)

    Timm, S.; Cooper, G.; Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Grassano, D.; Tiradani, A.; Krishnamurthy, R.; Vinayagam, S.; Raicu, I.; Wu, H.; Ren, S.; Noh, S.-Y.

    2017-10-01

    The Fermilab HEPCloud Facility Project has as its goal to extend the current Fermilab facility interface to provide transparent access to disparate resources including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity to respond to peaks of demand without overprovisioning local resources. Full scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics Experiments, CMS and NOνA, at the scale of 58000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, code caching system, and data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions, and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontier-squid server and CERN VM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEPCloud Facility.
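The Decision Engine's core choice, picking the cheapest availability zone and instance type that still satisfies a job's requirements, can be sketched as a filtered minimum; the offer records and prices below are illustrative, not real AWS quotes, and the prototype also weighs job-interruption risk, which this sketch omits.

```python
def pick_instance(offers, cores_needed, mem_needed_gb):
    """Choose the offer with the lowest hourly price that still meets the
    job's core and memory requirements."""
    eligible = [o for o in offers
                if o["cores"] >= cores_needed and o["mem_gb"] >= mem_needed_gb]
    if not eligible:
        raise ValueError("no instance type satisfies the request")
    return min(eligible, key=lambda o: o["price_per_hour"])

offers = [  # hypothetical spot-market offers, not real AWS prices
    {"zone": "us-east-1a", "type": "m4.xlarge",  "cores": 4, "mem_gb": 16, "price_per_hour": 0.09},
    {"zone": "us-east-1b", "type": "c4.2xlarge", "cores": 8, "mem_gb": 15, "price_per_hour": 0.13},
    {"zone": "us-west-2a", "type": "m4.2xlarge", "cores": 8, "mem_gb": 32, "price_per_hour": 0.18},
]
print(pick_instance(offers, cores_needed=8, mem_needed_gb=15)["type"])
```

A production decision engine would add an expected-interruption term to the objective, since the cheapest spot offer is often also the most likely to be preempted.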

  1. Virtual Machine Provisioning, Code Management, and Data Movement Design for the Fermilab HEPCloud Facility

    Energy Technology Data Exchange (ETDEWEB)

    Timm, S. [Fermilab; Cooper, G. [Fermilab; Fuess, S. [Fermilab; Garzoglio, G. [Fermilab; Holzman, B. [Fermilab; Kennedy, R. [Fermilab; Grassano, D. [Fermilab; Tiradani, A. [Fermilab; Krishnamurthy, R. [IIT, Chicago; Vinayagam, S. [IIT, Chicago; Raicu, I. [IIT, Chicago; Wu, H. [IIT, Chicago; Ren, S. [IIT, Chicago; Noh, S. Y. [KISTI, Daejeon

    2017-11-22

    The Fermilab HEPCloud Facility Project has as its goal to extend the current Fermilab facility interface to provide transparent access to disparate resources including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity to respond to peaks of demand without overprovisioning local resources. Full scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics Experiments, CMS and NOνA, at the scale of 58000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, code caching system, and data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions, and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontier-squid server and CERN VM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEPCloud Facility.

  2. Using simulation and virtual machines to identify information assurance requirements

    Science.gov (United States)

    Banks, Sheila B.; Stytz, Martin R.

    2010-04-01

    The US military is changing its philosophy, approach, and technologies used for warfare. In the process of achieving this vision for high-speed, highly mobile warfare, there are a number of issues that must be addressed and solved; issues that are not addressed by commercial systems because Department of Defense (DoD) Information Technology (IT) systems operate in an environment different from the commercial world. The differences arise from the differences in the scope and skill used in attacks upon DoD systems, the interdependencies between DoD software systems used for network centric warfare (NCW), and the need to rely upon commercial software components in virtually every DoD system. As a result, while NCW promises more effective and efficient means for employing DoD resources, it also increases the vulnerability and allure of DoD systems to cyber attack. A further challenge arises due to the rapid changes in software and information assurance (IA) requirements and technologies over the course of a project. Therefore, the four challenges that must be addressed are determining how to specify the information assurance requirements for a DoD system, minimizing changes to commercial software, incorporation of new system and IA requirements in a timely manner with minimal impact, and ensuring that the interdependencies between systems do not result in cyber attack vulnerabilities. In this paper, we address all four issues. In addition to addressing the four challenges outlined above, the interdependencies and interconnections between systems indicate that the IA requirements for a system must consider two important facets of a system's IA defensive capabilities. The facets are the types of IA attacks that the system must repel and the ability of a system to ensure that any IA attack that penetrates the system is contained within the system and does not spread. The IA requirements should be derived from threat assessments for the system as well as for the need to

  3. Simulation of Digital Control Computer of Nuclear Power Plant Based on Virtual Machine Technology

    International Nuclear Information System (INIS)

    Hou, Xue Yan; Li, Shu; Li, Qing

    2011-01-01

    Based on analyzing the DCC (Digital Control Computer) instruction sets, memory map, display controllers, and I/O system, a virtual machine of the DCC (abbr. VM DCC) has been developed. The executive and control programs, identical to those running on an NPP (Nuclear Power Plant) unit's DCC, run smoothly on the VM DCC and produce the same control results. The dual VM DCC system has been successfully applied in NPP FSS (Full Scope Simulator) training. It not only improves the FSS's fidelity but also makes maintenance easier

  4. A Virtual Astronomical Research Machine in No Time (VARMiNT)

    Science.gov (United States)

    Beaver, John

    2012-05-01

    We present early results of using virtual machine software to help make astronomical research computing accessible to a wider range of individuals. Our Virtual Astronomical Research Machine in No Time (VARMiNT) is an Ubuntu Linux virtual machine with free, open-source software already installed and configured (and in many cases documented). The purpose of VARMiNT is to provide a ready-to-go astronomical research computing environment that can be freely shared between researchers, or between amateur and professional, teacher and student, etc., and to circumvent the often-difficult task of configuring a suitable computing environment from scratch. Thus we hope that VARMiNT will make it easier for individuals to engage in research computing even if they have no ready access to the facilities of a research institution. We describe our current version of VARMiNT and some of the ways it is being used at the University of Wisconsin - Fox Valley, a two-year teaching campus of the University of Wisconsin System, as a means to enhance student independent study research projects and to facilitate collaborations with researchers at other locations. We also outline some future plans and prospects.

  5. Toward Confirming a Framework for Securing the Virtual Machine Image in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Raid Khalid Hussein

    2017-04-01

    Full Text Available The concept of cloud computing has arisen thanks to academic work in the fields of utility computing, distributed computing, virtualisation, and web services. By using cloud computing, which can be accessed from anywhere, newly-launched businesses can minimise their start-up costs. Among the most important notions when it comes to the construction of cloud computing is virtualisation. While this concept brings its own security risks, these risks are not necessarily related to the cloud. The main disadvantage of using cloud computing is linked to safety and security. This is because anyone who chooses to employ cloud computing will use someone else’s hard disk and CPU in order to sort and store data. In cloud environments, a great deal of importance is placed on guaranteeing that the virtual machine image is safe and secure. Indeed, a previous study has put forth a framework with which to protect the virtual machine image in cloud computing. As such, the present study is primarily concerned with confirming this theoretical framework so as to ultimately secure the virtual machine image in cloud computing. This will be achieved by carrying out interviews with experts in the field of cloud security.

  6. Effect of machining fluid on the process performance of wire electrical discharge machining of nanocomposite ceramic

    Directory of Open Access Journals (Sweden)

    Zhang Chengmao

    2015-01-01

    Full Text Available Wire electrical discharge machining (WEDM) promises to be an effective and economical technique for the production of tools and parts from conducting ceramic blanks. However, the manufacturing of nanocomposite ceramic blanks with these processes is a long and costly process. This paper presents a new process of machining nanocomposite ceramics using WEDM. The WEDM process uses water-based emulsion, polyvinyl alcohol, and distilled water as the machining fluids. Machining fluid is a primary factor that affects the material removal rate and surface quality of WEDM. The effects of the emulsion concentration, polyvinyl alcohol concentration, and distilled water content of the machining fluid on process performance have been investigated.

  7. sRNAtoolboxVM: Small RNA Analysis in a Virtual Machine.

    Science.gov (United States)

    Gómez-Martín, Cristina; Lebrón, Ricardo; Rueda, Antonio; Oliver, José L; Hackenberg, Michael

    2017-01-01

    High-throughput sequencing (HTS) data for small RNAs (noncoding RNA molecules that are 20-250 nucleotides in length) can now be routinely generated by minimally equipped wet laboratories; however, the bottleneck in HTS-based research has now shifted to the analysis of such huge amounts of data. One of the reasons is that many analysis types require a Linux environment, but computers, system administrators, and bioinformaticians entail additional costs that often cannot be afforded by small to mid-sized groups or laboratories. Web servers are an alternative that can be used if the data is not subject to privacy issues (which is very often an important concern with medical data). However, in any case they are less flexible than stand-alone programs, limiting the number of workflows and analysis types that can be carried out. We show in this protocol how virtual machines can be used to overcome those problems and limitations. sRNAtoolboxVM is a virtual machine that can be executed on all common operating systems through virtualization programs like VirtualBox or VMware, providing the user with a high number of preinstalled programs like sRNAbench for small RNA analysis without the need to maintain additional servers and/or operating systems.

  8. Dynamic Placement of Virtual Machines with Both Deterministic and Stochastic Demands for Green Cloud Computing

    Directory of Open Access Journals (Sweden)

    Wenying Yue

    2014-01-01

    Full Text Available Cloud computing has come to be a significant commercial infrastructure offering utility-oriented IT services to users worldwide. However, data centers hosting cloud applications consume huge amounts of energy, leading to high operational cost and greenhouse gas emission. Therefore, green cloud computing solutions are needed not only to achieve high level service performance but also to minimize energy consumption. This paper studies the dynamic placement of virtual machines (VMs) with deterministic and stochastic demands. In order to ensure a quick response to VM requests and improve the energy efficiency, a two-phase optimization strategy has been proposed, in which VMs are deployed in runtime and consolidated into servers periodically. Based on an improved multidimensional space partition model, a modified energy efficient algorithm with balanced resource utilization (MEAGLE) and a live migration algorithm based on the basic set (LMABBS) are, respectively, developed for each phase. Experimental results have shown that under different VMs’ stochastic demand variations, MEAGLE guarantees the availability of stochastic resources with a defined probability and reduces the number of required servers by 2.49% to 20.40% compared with the benchmark algorithms. Also, the difference between the LMABBS solution and Gurobi solution is fairly small, but LMABBS significantly excels in computational efficiency.
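For context, a plain first-fit-decreasing consolidation, the kind of baseline that MEAGLE-style placement algorithms are measured against, can be sketched as follows; this is not the paper's MEAGLE or LMABBS algorithm, and the VM demand vectors are invented.

```python
def first_fit_decreasing(vms, server_capacity):
    """Place VMs (cpu, mem demand pairs) in decreasing order of total demand
    into the first active server with room, opening a new server only when
    none fits. Fewer active servers means less idle energy burned."""
    servers = []  # each entry tracks a server's remaining [cpu, mem]
    for cpu, mem in sorted(vms, key=lambda d: d[0] + d[1], reverse=True):
        for s in servers:
            if s[0] >= cpu and s[1] >= mem:
                s[0] -= cpu
                s[1] -= mem
                break
        else:  # no existing server fits: power on a new one
            servers.append([server_capacity[0] - cpu, server_capacity[1] - mem])
    return len(servers)

# Normalized (cpu, mem) demands of five VMs against unit-capacity servers
vms = [(0.5, 0.3), (0.4, 0.6), (0.3, 0.3), (0.2, 0.2), (0.6, 0.4)]
print(first_fit_decreasing(vms, server_capacity=(1.0, 1.0)))
```

Handling *stochastic* demands, as the paper does, replaces the deterministic capacity check with a probabilistic guarantee that overload stays below a chosen threshold.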

  9. Representability of algebraic topology for biomolecules in machine learning based scoring and virtual screening.

    Science.gov (United States)

    Cang, Zixuan; Mu, Lin; Wei, Guo-Wei

    2018-01-01

    This work introduces a number of algebraic topology approaches, including multi-component persistent homology, multi-level persistent homology, and electrostatic persistence for the representation, characterization, and description of small molecules and biomolecular complexes. In contrast to the conventional persistent homology, multi-component persistent homology retains critical chemical and biological information during the topological simplification of biomolecular geometric complexity. Multi-level persistent homology enables a tailored topological description of inter- and/or intra-molecular interactions of interest. Electrostatic persistence incorporates partial charge information into topological invariants. These topological methods are paired with Wasserstein distance to characterize similarities between molecules and are further integrated with a variety of machine learning algorithms, including k-nearest neighbors, ensemble of trees, and deep convolutional neural networks, to manifest their descriptive and predictive powers for protein-ligand binding analysis and virtual screening of small molecules. Extensive numerical experiments involving 4,414 protein-ligand complexes from the PDBBind database and 128,374 ligand-target and decoy-target pairs in the DUD database are performed to test respectively the scoring power and the discriminatory power of the proposed topological learning strategies. It is demonstrated that the present topological learning outperforms other existing methods in protein-ligand binding affinity prediction and ligand-decoy discrimination.
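Once topological features are summarized by pairwise (e.g. Wasserstein) distances, a learner like k-nearest neighbors needs only the precomputed distances; a toy sketch in which the distances and labels are invented:

```python
def knn_predict(dist_row, train_labels, k=3):
    """Classify one query molecule from its precomputed distances (e.g.
    Wasserstein distances between persistence diagrams) to the training
    set: majority vote among the k nearest neighbors."""
    nearest = sorted(range(len(dist_row)), key=lambda i: dist_row[i])[:k]
    votes = {}
    for i in nearest:
        votes[train_labels[i]] = votes.get(train_labels[i], 0) + 1
    return max(votes, key=votes.get)

# Distances from one query to five training complexes (illustrative numbers)
d = [0.9, 0.2, 0.4, 0.8, 0.3]
labels = ["decoy", "binder", "binder", "decoy", "binder"]
print(knn_predict(d, labels, k=3))
```

The more powerful learners in the study (tree ensembles, deep networks) instead consume vectorized topological features, but the distance-based k-NN is the most direct use of the Wasserstein pairing described above.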

  10. A four-dimensional virtual hand brain-machine interface using active dimension selection.

    Science.gov (United States)

    Rouse, Adam G

    2016-06-01

    Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. ADS utilizes a two-stage decoder by using neural signals to both (i) select an active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits/s for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer-assisted one-dimensional control. ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand.
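The reported bit rate is consistent with the standard Wolpaw information-transfer formula for an N-target selection task, which is an assumption on our part; at 93% correct over eight targets it yields about 2.4 bits per selection:

```python
from math import log2

def bits_per_selection(n_targets, accuracy):
    """Wolpaw information-transfer rate, in bits per selection, for a task
    with n_targets equally likely targets and the given accuracy."""
    p, n = accuracy, n_targets
    if p >= 1.0:
        return log2(n)  # perfect accuracy transfers the full log2(N) bits
    return log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))

# Eight grasp targets at 93% correct, as reported above; the time per
# selection is an assumption (about one selection per second would give
# the reported bits-per-second figure)
print(round(bits_per_selection(8, 0.93), 2))
```

Dividing bits per selection by the average selection time converts the figure to bits/s, which is how BMI throughput is usually quoted.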

  11. Representability of algebraic topology for biomolecules in machine learning based scoring and virtual screening

    Science.gov (United States)

    Mu, Lin

    2018-01-01

    This work introduces a number of algebraic topology approaches, including multi-component persistent homology, multi-level persistent homology, and electrostatic persistence for the representation, characterization, and description of small molecules and biomolecular complexes. In contrast to the conventional persistent homology, multi-component persistent homology retains critical chemical and biological information during the topological simplification of biomolecular geometric complexity. Multi-level persistent homology enables a tailored topological description of inter- and/or intra-molecular interactions of interest. Electrostatic persistence incorporates partial charge information into topological invariants. These topological methods are paired with Wasserstein distance to characterize similarities between molecules and are further integrated with a variety of machine learning algorithms, including k-nearest neighbors, ensemble of trees, and deep convolutional neural networks, to manifest their descriptive and predictive powers for protein-ligand binding analysis and virtual screening of small molecules. Extensive numerical experiments involving 4,414 protein-ligand complexes from the PDBBind database and 128,374 ligand-target and decoy-target pairs in the DUD database are performed to test respectively the scoring power and the discriminatory power of the proposed topological learning strategies. It is demonstrated that the present topological learning outperforms other existing methods in protein-ligand binding affinity prediction and ligand-decoy discrimination. PMID:29309403

  12. Virtual screening approach to identifying influenza virus neuraminidase inhibitors using molecular docking combined with machine-learning-based scoring function.

    Science.gov (United States)

    Zhang, Li; Ai, Hai-Xin; Li, Shi-Meng; Qi, Meng-Yuan; Zhao, Jian; Zhao, Qi; Liu, Hong-Sheng

    2017-10-10

    In recent years, an epidemic of the highly pathogenic avian influenza H7N9 virus has persisted in China, with a high mortality rate. To develop novel anti-influenza therapies, we have constructed a machine-learning-based scoring function (RF-NA-Score) for the effective virtual screening of lead compounds targeting the viral neuraminidase (NA) protein. RF-NA-Score is more accurate than RF-Score, with a root-mean-square error of 1.46, Pearson's correlation coefficient of 0.707, and Spearman's rank correlation coefficient of 0.707 in a 5-fold cross-validation study. The performance of RF-NA-Score in a docking-based virtual screening of NA inhibitors was evaluated with a dataset containing 281 NA inhibitors and 322 noninhibitors. Compared with other docking-rescoring virtual screening strategies, rescoring with RF-NA-Score significantly improved the efficiency of virtual screening, and a strategy that averaged the scores given by RF-NA-Score, based on the binding conformations predicted with AutoDock, AutoDock Vina, and LeDock, was shown to be the best strategy. This strategy was then applied to the virtual screening of NA inhibitors in the SPECS database. The 100 selected compounds were tested in an in vitro H7N9 NA inhibition assay, and two compounds with novel scaffolds showed moderate inhibitory activities. These results indicate that RF-NA-Score improves the efficiency of virtual screening for NA inhibitors, and can be used successfully to identify new NA inhibitor scaffolds. Scoring functions specific for other drug targets could also be established with the same method.
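The best-performing strategy above, averaging the machine-learned rescoring of poses from several docking programs, reduces to a per-ligand mean; the ligand names and scores below are hypothetical, and RF-NA-Score itself (a random-forest model) is not reproduced here.

```python
def consensus_score(scores_by_ligand):
    """Average each ligand's rescored values across docking programs."""
    return {lig: sum(s) / len(s) for lig, s in scores_by_ligand.items()}

def rank_ligands(scores_by_ligand, top_n=2):
    """Rank ligands by consensus (averaged) score, best first, and keep the
    top_n candidates for experimental testing."""
    avg = consensus_score(scores_by_ligand)
    return sorted(avg, key=avg.get, reverse=True)[:top_n]

# Hypothetical rescored values for poses from AutoDock, Vina, and LeDock
scores = {
    "cmpd-001": [6.1, 5.8, 6.0],
    "cmpd-002": [7.2, 7.5, 7.0],
    "cmpd-003": [5.0, 5.2, 4.9],
}
print(rank_ligands(scores))
```

Averaging across pose generators damps the idiosyncratic errors of any single docking program, which is the usual rationale for consensus rescoring.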

  13. Effective Cost Mechanism for Cloudlet Retransmission and Prioritized VM Scheduling Mechanism over Broker Virtual Machine Communication Framework

    OpenAIRE

    Raj, Gaurav; Setia, Sonika

    2012-01-01

    In the current scenario, cloud computing is a rapidly growing platform for task execution, and much research aims to cut down cost and execution time. In this paper, we propose an efficient algorithm for effective and fast execution of the tasks assigned by the user. We propose an effective communication framework between the broker and the virtual machine for assigning tasks and fetching results in optimal time and at optimal cost, using the Broker Virtual Machine Communication Framework (BVCF). ...

  14. Dynamic Performances of Asynchronous Machines | Ubeku ...

    African Journals Online (AJOL)

    The per-phase parameters of a 1.5 hp, 380 V, 50 Hz, 4-pole, 3-phase asynchronous machine used in the simulation were computed from readings obtained from dc, no-load and blocked-rotor tests carried out on the machine in the laboratory. The results obtained from the computer simulations confirmed the capabilities ...

  15. Hybrid PolyLingual Object Model: An Efficient and Seamless Integration of Java and Native Components on the Dalvik Virtual Machine

    Directory of Open Access Journals (Sweden)

    Yukun Huang

    2014-01-01

    Full Text Available JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed that allows a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI for reusing CAR-compliant components in Android applications in a seamless and efficient way. A metadata injection mechanism is designed to support automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, while requiring no JNI bridging code.

  16. Merging assistance function with task distribution model to enhance user performance in collaborative virtual environment

    International Nuclear Information System (INIS)

    Khalid, S.; Alam, A.

    2016-01-01

    Collaborative Virtual Environments (CVEs) fall under Virtual Reality (VR), where two or more users manipulate objects collaboratively. In this paper we conducted experiments in which an assembly was built from constituent parts scattered in a Virtual Environment (VE), based on a task distribution model and using assistance functions for checking and enhancing user performance. The CVE subjects worked on distinct machines connected via a local area network. In this setting, we consider the effects of assistance functions with oral communication on collaboration, co-presence and user performance. Twenty subjects collaboratively performed an assembly task under static and dynamic task distribution. We examine the degree to which assistance functions with oral communication influence users' performance under the task distribution model. The results show that assistance functions with oral communication based on the task distribution model not only increase user performance but also enhance the sense of co-presence and awareness. (author)

  17. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  18. Virtual reality hardware and graphic display options for brain-machine interfaces.

    Science.gov (United States)

    Marathe, Amar R; Carey, Holle L; Taylor, Dawn M

    2008-01-15

    Virtual reality hardware and graphic displays are reviewed here as a development environment for brain-machine interfaces (BMIs). Two desktop stereoscopic monitors and one 2D monitor were compared in a visual depth discrimination task and in a 3D target-matching task where able-bodied individuals used actual hand movements to match a virtual hand to different target hands. Three graphic representations of the hand were compared: a plain sphere, a sphere attached to the fingertip of a realistic hand and arm, and a stylized pacman-like hand. Several subjects had great difficulty using either stereo monitor for depth perception when perspective size cues were removed. A mismatch in stereo and size cues generated inappropriate depth illusions. This phenomenon has implications for choosing target and virtual hand sizes in BMI experiments. Target-matching accuracy was about as good with the 2D monitor as with either 3D monitor. However, users achieved this accuracy by exploring the boundaries of the hand in the target with carefully controlled movements. This method of determining relative depth may not be possible in BMI experiments if movement control is more limited. Intuitive depth cues, such as including a virtual arm, can significantly improve depth perception accuracy with or without stereo viewing.

  19. BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences.

    Science.gov (United States)

    Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola

    2015-01-01

    Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org.

  20. The Influence of Virtual Learning Environments in Students' Performance

    Science.gov (United States)

    Alves, Paulo; Miranda, Luísa; Morais, Carlos

    2017-01-01

    This paper focuses mainly on the relation between the use of a virtual learning environment (VLE) and students' performance. Therefore, virtual learning environments are characterised and a study is presented emphasising the frequency of access to a VLE and its relation with the students' performance from a public higher education institution…

  1. Simulation of Digital Control Computer of Nuclear Power Plant Based on Virtual Machine Technology

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Xue Yan; Li, Shu; Li, Qing [China Nuclear Power Operation Technology Co., Wuhan (China)]

    2011-08-15

    Based on an analysis of the DCC (Digital Control Computer) instruction set, memory map, display controllers and I/O system, a virtual machine of the DCC (abbr. VM DCC) has been developed. The executive and control programs, identical to those running on an NPP (Nuclear Power Plant) unit's DCC, run smoothly on the VM DCC and produce the same control results. A dual VM DCC system has been successfully applied in NPP FSS (Full Scope Simulator) training. It not only improves the FSS's fidelity but also makes maintenance easier.

  2. The Needs of Virtual Machines Implementation in Private Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Edy Kristianto

    2015-12-01

    Full Text Available The Internet of Things (IoT) has become a central goal of the development of information and communication technology. Cloud computing plays a very important role in supporting the IoT, because it allows services to be provided in the form of infrastructure (IaaS), platform (PaaS), and software (SaaS) for its users. One of the fundamental services is infrastructure as a service (IaaS). This study analyzed the requirements, based on the NIST framework, for realizing infrastructure as a service in the form of virtual machines built in a cloud computing environment.

  3. Thinking computers and virtual persons essays on the intentionality of machines

    CERN Document Server

    Dietrich, Eric

    1994-01-01

    Thinking Computers and Virtual Persons: Essays on the Intentionality of Machines explains how computations are meaningful and how computers can be cognitive agents like humans. This book focuses on the concept that cognition is computation.Organized into four parts encompassing 13 chapters, this book begins with an overview of the analogy between intentionality and phlogiston, the 17th-century principle of burning. This text then examines the objection to computationalism that it cannot prevent arbitrary attributions of content to the various data structures and representations involved in a c

  4. VPN (Virtual Private Network) Performance Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Calderon, Calixto; Goncalves, Joao G.M.; Sequeira, Vitor [Joint Research Centre, Ispra (Italy). Inst. for the Protection and Security of the Citizen]; Vandaele, Roland; Meylemans, Paul [European Commission, DG-TREN (Luxembourg)]

    2003-05-01

    Virtual Private Networks (VPN) is an important technology allowing for secure communications through insecure transmission media (i.e., Internet) by adding authentication and encryption to the existing protocols. This paper describes some VPN performance indicators measured over international communication links. An ISDN-based VPN link was established between the Joint Research Centre, Ispra site, Italy, and EURATOM Safeguards in Luxembourg. This link connected two EURATOM Safeguards FAST surveillance stations and used hardware from different vendors (Cisco 1720 router and Nokia CC-500 Gateway). To authenticate and secure this international link, we used several methods at different levels of the seven-layer ISO network protocol stack (e.g., the callback feature and the CHAP (Challenge Handshake Authentication Protocol) authentication protocol). The tests involved different encryption algorithms and different ways of periodically renewing session secret keys, since these elements significantly influence transmission throughput. Future tests will include a wide variety of wireless transmission media and terminal equipment technologies, in particular PDAs (Personal Digital Assistants) and notebook PCs. These tests aim at characterising the functionality of VPNs whenever field inspectors wish to contact headquarters to access information from a central archive database or transmit local measurements or documents. These technologies cover wireless transmission needs at different geographical scales: room level (Bluetooth), floor or building level (Wi-Fi), and region or country level (GPRS).

  5. The differential induction machine: Theory and performance

    Indian Academy of Sciences (India)

    feasibility of taking a turn. ... Thus the axial gap between the two rotors is also ... inductance than a normal machine due to the separating gap between the two rotors .... Crelerot O, Bernot F, Kauffmann J F 1993 Study of an electrical differential ...

  6. Performance Analysis of Abrasive Waterjet Machining Process at Low Pressure

    Science.gov (United States)

    Murugan, M.; Gebremariam, MA; Hamedon, Z.; Azhari, A.

    2018-03-01

    Normally, a commercial waterjet cutting machine can generate water pressure of up to 600 MPa, a range used to machine a wide variety of materials; such machines are therefore expensive. There is thus a need to develop a low-cost waterjet machine to make the technology more accessible. Due to its low cost, such a machine may only be able to generate water pressure at a much reduced level. The present study investigates the performance of the abrasive waterjet machining process at low cutting pressure using a self-developed low-cost waterjet machine. It aims to study the feasibility of machining various materials at low pressure, which can later aid the further development of an effective low-cost waterjet machine. A total of three different materials were machined at a low pressure of 34 MPa: mild steel, aluminium alloy 6061 and Delrin® plastic. Furthermore, the traverse rate was varied between 1 and 3 mm/min. Cutting performance at low pressure was assessed for each material in terms of depth of penetration, kerf taper ratio and surface roughness. It was found that all samples could be machined at low cutting pressure with varying quality. Also, the depth of penetration decreases with an increase in the traverse rate, while the surface roughness and kerf taper ratio increase with the traverse rate. It can be concluded that a low-cost waterjet machine with a much reduced water pressure can successfully machine certain materials with acceptable quality.

  7. MO-FG-202-09: Virtual IMRT QA Using Machine Learning: A Multi-Institutional Validation

    Energy Technology Data Exchange (ETDEWEB)

    Valdes, G; Scheuermann, R; Solberg, T [University of Pennsylvania, Philadelphia, PA (United States)]; Chan, M; Deasy, J [Memorial Sloan-Kettering Cancer Center, New York, NY (United States)]

    2016-06-15

    Purpose: To validate a machine learning approach to Virtual IMRT QA for accurately predicting gamma passing rates using different QA devices at different institutions. Methods: A Virtual IMRT QA model was constructed using a machine learning algorithm based on 416 IMRT plans, in which QA measurements were performed using diode-array detectors and a 3% local/3 mm gamma criterion with a 10% threshold. An independent set of 139 IMRT measurements from a different institution, with QA data based on portal dosimetry using the same gamma index and 10% threshold, was used to further test the algorithm. Plans were characterized by 90 different complexity metrics. A weighted Poisson regression with Lasso regularization was trained to predict passing rates using the complexity metrics as input. Results: In addition to predicting passing rates with 3% accuracy for all composite plans using diode-array detectors, passing rates for portal dosimetry on a per-beam basis were predicted with an error <3.5% for 120 IMRT measurements. The remaining 19 measurements had large areas of low CU, where portal dosimetry has larger disagreement with the calculated dose and, as such, large errors were expected. These beams need to be further modeled to correct the under-response in low-dose regions. Important features selected by Lasso to predict gamma passing rates were: complete irradiated area outline (CIAO) area, jaw position, fraction of MLC leaves with gaps smaller than 20 mm or 5 mm, fraction of the area receiving less than 50% of the total CU, fraction of the area receiving dose from penumbra, weighted average irregularity factor, and duty cycle, among others. Conclusion: We have demonstrated that Virtual IMRT QA can predict passing rates using different QA devices and across multiple institutions. Prediction of QA passing rates could have profound implications on the current IMRT process.
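
    As a rough illustration of the Lasso-regularized Poisson regression named above, the following minimal proximal-gradient sketch fits y_i ~ Poisson(exp(w·x_i)) with an L1 penalty. The data, learning rate and penalty weight are invented for the example; they have nothing to do with the study's 90 complexity metrics or its weighting scheme:

```python
import math

def fit_poisson_lasso(X, y, lam=0.05, lr=0.05, steps=3000):
    """Proximal-gradient (ISTA-style) fit of a Lasso-regularized Poisson
    regression, y_i ~ Poisson(exp(w . x_i)). Minimal sketch, not the
    authors' implementation."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(steps):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            mu = math.exp(sum(wj * xj for wj, xj in zip(w, xi)))
            for j in range(d):
                grad[j] += (mu - yi) * xi[j] / n
        for j in range(d):
            step = w[j] - lr * grad[j]
            # soft-thresholding implements the L1 (Lasso) penalty
            w[j] = math.copysign(max(abs(step) - lr * lam, 0.0), step)
    return w

# Invented data: the response depends only on the first feature
# (true weights [1, 0]), so the Lasso should keep w[0] large and
# drive w[1] to (near) zero.
X = [[0.0, 0.3], [0.2, 0.8], [0.4, 0.1], [0.6, 0.9], [0.8, 0.2], [1.0, 0.7]]
y = [math.exp(row[0]) for row in X]
w = fit_poisson_lasso(X, y)
```

    The soft-thresholding step is what gives Lasso its feature-selection behaviour, mirroring how the study's model singled out a handful of important complexity metrics.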

  8. MO-FG-202-09: Virtual IMRT QA Using Machine Learning: A Multi-Institutional Validation

    International Nuclear Information System (INIS)

    Valdes, G; Scheuermann, R; Solberg, T; Chan, M; Deasy, J

    2016-01-01

    Purpose: To validate a machine learning approach to Virtual IMRT QA for accurately predicting gamma passing rates using different QA devices at different institutions. Methods: A Virtual IMRT QA model was constructed using a machine learning algorithm based on 416 IMRT plans, in which QA measurements were performed using diode-array detectors and a 3% local/3 mm gamma criterion with a 10% threshold. An independent set of 139 IMRT measurements from a different institution, with QA data based on portal dosimetry using the same gamma index and 10% threshold, was used to further test the algorithm. Plans were characterized by 90 different complexity metrics. A weighted Poisson regression with Lasso regularization was trained to predict passing rates using the complexity metrics as input. Results: In addition to predicting passing rates with 3% accuracy for all composite plans using diode-array detectors, passing rates for portal dosimetry on a per-beam basis were predicted with an error <3.5% for 120 IMRT measurements. The remaining 19 measurements had large areas of low CU, where portal dosimetry has larger disagreement with the calculated dose and, as such, large errors were expected. These beams need to be further modeled to correct the under-response in low-dose regions. Important features selected by Lasso to predict gamma passing rates were: complete irradiated area outline (CIAO) area, jaw position, fraction of MLC leaves with gaps smaller than 20 mm or 5 mm, fraction of the area receiving less than 50% of the total CU, fraction of the area receiving dose from penumbra, weighted average irregularity factor, and duty cycle, among others. Conclusion: We have demonstrated that Virtual IMRT QA can predict passing rates using different QA devices and across multiple institutions. Prediction of QA passing rates could have profound implications on the current IMRT process.

  9. A conceptual model to improve performance in virtual teams

    Directory of Open Access Journals (Sweden)

    Shopee Dube

    2016-09-01

    Full Text Available Background: The vast improvement in communication technologies and sophisticated project management tools, methods and techniques has allowed geographically and culturally diverse groups to operate and function in a virtual environment. To succeed in this virtual environment, where time and space are becoming increasingly irrelevant, organisations must define new ways of implementing initiatives. This virtual environment phenomenon has brought about the formation of virtual project teams that allow organisations to harness the skills and know-how of the best resources, irrespective of their location. Objectives: The aim of this article was to investigate performance criteria and develop a conceptual model which can be applied to enhance the success of virtual project teams. There are no clear guidelines on the performance criteria for managing virtual project teams. Method: A qualitative research methodology was used in this article. The purpose of the content analysis was to explore the literature, to understand the concept of performance in virtual project teams and to summarise the findings of the literature reviewed. Results: The research identified a set of performance criteria for virtual project teams: leadership, trust, communication, team cooperation, reliability, motivation, comfort and social interaction. These were used to conceptualise the model. Conclusion: The conceptual model can be used in a holistic way to determine the overall performance of a virtual project team, while each factor can be analysed individually to determine its impact on overall performance. Knowledge of performance criteria for virtual project teams could aid project managers in enhancing the success of these teams and in taking a different approach to managing and coordinating them.

  10. An efficient approach for improving virtual machine placement in cloud computing environment

    Science.gov (United States)

    Ghobaei-Arani, Mostafa; Shamsi, Mahboubeh; Rahmanian, Ali A.

    2017-11-01

    The ever increasing demand for cloud services requires more data centres. Power consumption in data centres is a challenging problem for cloud computing that has not been considered properly by data centre developers. Large data centres in particular struggle with power costs and greenhouse gas production, so power-efficient mechanisms are necessary to mitigate these effects. Moreover, virtual machine (VM) placement can be used as an effective method to reduce power consumption in data centres. In this paper, by grouping both virtual and physical machines and taking the maximum absolute deviation into account during VM placement, both the power consumption and the service level agreement (SLA) violations in data centres are reduced. To this end, the best-fit decreasing algorithm is used in the simulation, reducing power consumption by about 5% compared with the modified best-fit decreasing algorithm while at the same time improving SLA violations by 6%. Finally, learning automata are used to trade off power consumption reduction against the SLA violation percentage.
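
    The best-fit decreasing heuristic named above can be sketched as a simple bin-packing routine over CPU demands. The demands and unit host capacity below are hypothetical, and the sketch omits the paper's grouping and maximum-absolute-deviation refinements:

```python
def best_fit_decreasing(vm_demands, host_capacity):
    """Best-fit decreasing placement: sort VM demands in decreasing order,
    then put each VM on the powered-on host that would be left with the
    least free capacity; power on a new host only when nothing fits."""
    hosts = []       # remaining capacity of each powered-on host
    placement = []   # chosen host index, in sorted-demand order
    for demand in sorted(vm_demands, reverse=True):
        best = None
        for i, free in enumerate(hosts):
            if free >= demand and (best is None or free < hosts[best]):
                best = i
        if best is None:                 # nothing fits: power on a new host
            hosts.append(host_capacity)
            best = len(hosts) - 1
        hosts[best] -= demand
        placement.append(best)
    return hosts, placement

# Hypothetical normalised CPU demands, unit host capacity
hosts, placement = best_fit_decreasing([0.5, 0.7, 0.3, 0.2, 0.4], 1.0)
```

    Fewer powered-on hosts translates directly into lower power draw, which is why bin-packing heuristics of this kind are the standard starting point for energy-aware placement.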

  11. Application of a virtual coordinate measuring machine for measurement uncertainty estimation of aspherical lens parameters

    International Nuclear Information System (INIS)

    Küng, Alain; Meli, Felix; Nicolet, Anaïs; Thalmann, Rudolf

    2014-01-01

    Tactile ultra-precise coordinate measuring machines (CMMs) are very attractive for accurately measuring optical components with high slopes, such as aspheres. The METAS µ-CMM, which exhibits a single point measurement repeatability of a few nanometres, is routinely used for measurement services of microparts, including optical lenses. However, estimating the measurement uncertainty is very demanding. Because of the many combined influencing factors, an analytic determination of the uncertainty of parameters that are obtained by numerical fitting of the measured surface points is almost impossible. The application of numerical simulation (Monte Carlo methods) using a parametric fitting algorithm coupled with a virtual CMM based on a realistic model of the machine errors offers an ideal solution to this complex problem: to each measurement data point, a simulated measurement variation calculated from the numerical model of the METAS µ-CMM is added. Repeated several hundred times, these virtual measurements deliver the statistical data for calculating the probability density function, and thus the measurement uncertainty for each parameter. Additionally, the eventual cross-correlation between parameters can be analyzed. This method can be applied for the calibration and uncertainty estimation of any parameter of the equation representing a geometric element. In this article, we present the numerical simulation model of the METAS µ-CMM and the application of a Monte Carlo method for the uncertainty estimation of measured asphere parameters. (paper)
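
    The Monte Carlo procedure described above (add a simulated measurement variation to every point, refit, repeat, and take the spread of each fitted parameter) can be illustrated with a deliberately simplified example: fitting a circle radius stands in for the asphere parameter fit, and plain Gaussian noise stands in for the METAS µ-CMM error model:

```python
import math, random, statistics

def fit_radius(points):
    """Least-squares radius of a circle centred at the origin -- a simple
    stand-in for the asphere parameter fit described above."""
    return sum(math.hypot(x, y) for x, y in points) / len(points)

def monte_carlo_uncertainty(points, sigma, runs=500, seed=1):
    """Virtual-CMM-style uncertainty estimation: perturb every measured
    point with the (here: Gaussian) error model, refit, and take the
    spread of the fitted parameter over many repetitions."""
    rng = random.Random(seed)
    fits = []
    for _ in range(runs):
        noisy = [(x + rng.gauss(0.0, sigma), y + rng.gauss(0.0, sigma))
                 for x, y in points]
        fits.append(fit_radius(noisy))
    return statistics.mean(fits), statistics.stdev(fits)

# 36 noise-free points on a circle of radius 5; Gaussian noise with
# sigma = 0.01 (same units) is the invented stand-in for the machine model
points = [(5.0 * math.cos(2 * math.pi * k / 36),
           5.0 * math.sin(2 * math.pi * k / 36)) for k in range(36)]
mean_r, u_r = monte_carlo_uncertainty(points, sigma=0.01)
```

    Keeping all the per-run fitted parameters, rather than only their spread, is what lets the full method also examine cross-correlations between parameters.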

  12. Statistical and Machine Learning Models to Predict Programming Performance

    OpenAIRE

    Bergin, Susan

    2006-01-01

    This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...

  13. Performance analysis of a composite dual-winding reluctance machine

    International Nuclear Information System (INIS)

    Anih, Linus U.; Obe, Emeka S.

    2009-01-01

    The electromagnetic energy conversion process of a composite dual-winding asynchronous reluctance machine is presented. The mechanism of torque production is explained using the magnetic fields distributions. The dynamic model developed in dq-rotor reference frame from first principles depicts the machine operation and response to sudden load change. The device is self-starting in the absence of rotor conductors and its starting current is lower than that of a conventional induction machine. Although the machine possesses salient pole rotors, it is clearly shown that its performance is that of an induction motor operating at half the synchronous speed. Hence the device produces synchronous torque while operating asynchronously. Simple tests were conducted on a prototype demonstration machine and the results obtained are seen to be in tune with the theory and the steady-state calculations.

  14. myChEMBL: a virtual machine implementation of open data and cheminformatics tools.

    Science.gov (United States)

    Ochoa, Rodrigo; Davies, Mark; Papadatos, George; Atkinson, Francis; Overington, John P

    2014-01-15

    myChEMBL is a completely open platform, which combines public domain bioactivity data with open source database and cheminformatics technologies. myChEMBL consists of a Linux (Ubuntu) Virtual Machine featuring a PostgreSQL schema with the latest version of the ChEMBL database, as well as the latest RDKit cheminformatics libraries. In addition, a self-contained web interface is available, which can be modified and improved according to user specifications. The VM is available at: ftp://ftp.ebi.ac.uk/pub/databases/chembl/VM/myChEMBL/current. The web interface and web services code is available at: https://github.com/rochoa85/myChEMBL.

  15. Energy Efficient Multiresource Allocation of Virtual Machine Based on PSO in Cloud Data Center

    Directory of Open Access Journals (Sweden)

    An-ping Xiong

    2014-01-01

    Full Text Available Presently, massive energy consumption in cloud data centers is becoming an escalating threat to the environment. To reduce energy consumption in cloud data centers, an energy-efficient virtual machine allocation algorithm is proposed in this paper, based on a proposed energy-efficient multiresource allocation model and the particle swarm optimization (PSO) method. In this algorithm, the fitness function of PSO is defined as the total Euclidean distance to the optimal point between resource utilization and energy consumption. This algorithm can avoid falling into the local optima that are common in traditional heuristic algorithms. Compared to the traditional heuristic algorithms MBFD and MBFH, our algorithm shows significant energy savings in the cloud data center while keeping the utilization of system resources reasonable.
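
    A minimal particle swarm optimizer with a Euclidean-distance fitness, in the spirit of the algorithm described above, can be sketched as follows. The ideal operating point, bounds and PSO constants are invented for the sketch and are not the authors' model:

```python
import math, random

def pso_minimize(fitness, dim, bounds, particles=20, iters=100, seed=7):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, and the swarm tracks a global best (illustrative sketch only)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

# Fitness: Euclidean distance between a (cpu utilisation, normalised energy)
# operating point and an assumed ideal point -- a stand-in for the paper's model
ideal = (0.8, 0.2)

def fitness(point):
    return math.dist(point, ideal)

best = pso_minimize(fitness, dim=2, bounds=(0.0, 1.0))
```

    The swarm-wide exchange through the global best is what helps PSO escape the local optima that greedy heuristics such as MBFD get trapped in.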

  16. An Adaptive Method For Texture Characterization In Medical Images Implemented on a Parallel Virtual Machine

    Directory of Open Access Journals (Sweden)

    Socrates A. Mylonas

    2003-06-01

    Full Text Available This paper describes the application of a new texture characterization algorithm for the segmentation of medical ultrasound images. The morphology of these images poses significant problems for the application of traditional image processing techniques and their analysis has been the subject of research for several years. The basis of the algorithm is an optimum signal modelling algorithm (Least Mean Squares-based, which estimates a set of parameters from small image regions. The algorithm has been converted to a structure suitable for implementation on a Parallel Virtual Machine (PVM consisting of a Network of Workstations (NoW, to improve processing speed. Tests were initially carried out on standard textured images. This paper describes preliminary results of the application of the algorithm in texture discrimination and segmentation of medical ultrasound images. The images examined are primarily used in the diagnosis of carotid plaques, which are linked to the risk of stroke.
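
    The LMS-based parameter estimation at the heart of the algorithm can be illustrated on a 1-D signal: an adaptive filter learns the linear-prediction coefficients of a region, and those coefficients serve as texture parameters. The sinusoidal "texture row" below is a made-up stand-in for real ultrasound data, not material from the study:

```python
import math

def lms_coefficients(signal, order=2, mu=0.05, passes=50):
    """Least-mean-squares (LMS) estimate of linear-prediction coefficients:
    predict signal[n] from the previous `order` samples. A 1-D stand-in for
    the per-region parameter estimation described above."""
    w = [0.0] * order
    for _ in range(passes):
        for n in range(order, len(signal)):
            x = signal[n - order:n]                     # previous samples
            e = signal[n] - sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # LMS update
    return w

# Hypothetical "texture row": a sampled sinusoid, which exactly satisfies
# s[n] = 2*cos(0.5)*s[n-1] - s[n-2]
s = [math.cos(0.5 * n) for n in range(200)]
w = lms_coefficients(s)
```

    Because each region yields its own small coefficient vector, the per-region estimates are independent and map naturally onto the PVM workers described in the abstract.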

  17. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    Science.gov (United States)

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple-input-driven realistic facial animation system based on a 3-D virtual head for human-machine interfaces is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. A combination of a parameterized model and a muscular model is used to obtain a tradeoff between the computational efficiency and the high realism of 3-D facial animation. An online appearance model is used to track 3-D facial motion from video in a particle filtering framework, and multiple measurements, i.e., the pixel color values of the input image and the Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence in the construction of the online appearance model. A tri-phone model is used to reduce the computational cost of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction.

  18. Parallelization of MCNP Monte Carlo neutron and photon transport code in parallel virtual machine and message passing interface

    International Nuclear Information System (INIS)

    Deng Li; Xie Zhongsheng

    1999-01-01

    The coupled neutron and photon transport Monte Carlo code MCNP (version 3B) has been parallelized under the parallel virtual machine (PVM) and message passing interface (MPI) systems by modifying the previous serial code. The new code has been verified by solving sample problems. The speedup increases linearly with the number of processors, and the average efficiency is up to 99% for 12 processors. (author)
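
    Two details implicit in this abstract, the even partitioning of particle histories across processes and the definition of parallel efficiency, can be sketched as follows. The history count and timings are invented for illustration:

```python
def partition_histories(total, n_procs):
    """Split `total` particle histories as evenly as possible across
    processes -- the master/worker decomposition PVM and MPI codes use.
    Monte Carlo histories are independent, which is why the speedup
    scales almost linearly."""
    base, extra = divmod(total, n_procs)
    return [base + 1 if i < extra else base for i in range(n_procs)]

def parallel_efficiency(t_serial, t_parallel, n_procs):
    """Speedup = T1/Tn; efficiency = speedup / n (the 99% figure above)."""
    speedup = t_serial / t_parallel
    return speedup, speedup / n_procs

parts = partition_histories(1_000_000, 12)
speedup, eff = parallel_efficiency(120.0, 10.1, 12)   # invented timings
```

    With these invented timings the efficiency comes out near 0.99, i.e. the 12 processes spend almost all their time on independent histories rather than on communication.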

  19. 3D-e-Chem-VM: Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine.

    Science.gov (United States)

    McGuire, Ross; Verhoeven, Stefan; Vass, Márton; Vriend, Gerrit; de Esch, Iwan J P; Lusher, Scott J; Leurs, Rob; Ridder, Lars; Kooistra, Albert J; Ritschel, Tina; de Graaf, Chris

    2017-02-27

    3D-e-Chem-VM is an open source, freely available Virtual Machine ( http://3d-e-chem.github.io/3D-e-Chem-VM/ ) that integrates cheminformatics and bioinformatics tools for the analysis of protein-ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools that can analyze and combine small molecule and protein structural information in a graphical programming environment. New chemical and biological data analytics tools and workflows have been developed for the efficient exploitation of structural and pharmacological protein-ligand interaction data from proteome-wide databases (e.g., ChEMBLdb and PDB), as well as customized information systems focused on, e.g., G protein-coupled receptors (GPCRdb) and protein kinases (KLIFS). The integrated structural cheminformatics research infrastructure compiled in the 3D-e-Chem-VM enables the design of new approaches in virtual ligand screening (Chemdb4VS), ligand-based metabolism prediction (SyGMa), and structure-based protein binding site comparison and bioisosteric replacement for ligand design (KRIPOdb).

  20. Optimizing virtual machine placement for energy and SLA in clouds using utility functions

    Directory of Open Access Journals (Sweden)

    Abdelkhalik Mosa

    2016-10-01

Full Text Available Abstract Cloud computing provides on-demand access to a shared pool of computing resources, which enables organizations to outsource their IT infrastructure. Cloud providers are building data centers to handle the continuous increase in cloud users’ demands. Consequently, these cloud data centers consume, and have the potential to waste, substantial amounts of energy. This energy consumption increases the operational cost and the CO2 emissions. The goal of this paper is to develop an optimized energy- and SLA-aware virtual machine (VM) placement strategy that dynamically assigns VMs to physical machines (PMs) in cloud data centers. This placement strategy co-optimizes energy consumption and service level agreement (SLA) violations. The proposed solution adopts utility functions to formulate the VM placement problem. A genetic algorithm searches the possible VMs-to-PMs assignments with a view to finding an assignment that maximizes utility. Simulation results using CloudSim show that the proposed utility-based approach reduced the average energy consumption by approximately 6% and the overall SLA violations by more than 38%, using fewer VM migrations and PM shutdowns, compared to a well-known heuristics-based approach.
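A minimal sketch of the utility-driven genetic search the abstract describes, with hypothetical VM loads, a linear power model, and a weighted energy-plus-SLA utility (all numbers, weights, and operators are assumptions, not taken from the paper):

```python
import random

rng = random.Random(0)
VM_LOAD = [20, 30, 10, 40, 25, 15]    # CPU demand of each VM (hypothetical units)
PM_CAPACITY = 100                      # identical physical machines
NUM_PMS = 3
IDLE_POWER, PEAK_POWER = 70.0, 250.0   # assumed linear power model per active PM

def utility(assignment):
    """Co-optimize energy and SLA: higher utility is better."""
    loads = [0] * NUM_PMS
    for vm, pm in enumerate(assignment):
        loads[pm] += VM_LOAD[vm]
    energy = sum(IDLE_POWER + (PEAK_POWER - IDLE_POWER) * min(l, PM_CAPACITY) / PM_CAPACITY
                 for l in loads if l > 0)          # idle PMs assumed powered off
    sla_violation = sum(max(0, l - PM_CAPACITY) for l in loads)
    return -(energy + 10.0 * sla_violation)        # weighted sum as a simple utility

def genetic_search(generations=200, pop_size=30):
    """Elitist GA over VMs-to-PMs assignments."""
    pop = [[rng.randrange(NUM_PMS) for _ in VM_LOAD] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=utility, reverse=True)
        survivors = pop[:pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(VM_LOAD))
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.2:                 # mutation: reassign one VM
                child[rng.randrange(len(child))] = rng.randrange(NUM_PMS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=utility)

best = genetic_search()
```

The search easily beats the degenerate assignment that stacks every VM on one PM, since overload is penalized through the SLA term of the utility.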

  1. Development and Performance Evaluation of Fluted Pumpkin Seed Dehulling Machine

    Directory of Open Access Journals (Sweden)

    M. M. Odewole

    2017-08-01

Full Text Available A machine for dehulling fluted pumpkin seed (Telfairia occidentalis) was developed. The main objective of developing the machine was to provide a better substitute for traditional methods of dehulling the seed, which contains edible oil of high medicinal and nutritional value. Traditional methods are full of drudgery, slow, injury prone, and lead to low and poor outputs in terms of quantity and quality of dehulled products. The machine is made of five major parts: the feed hopper (for holding the seeds to be dehulled before they enter the dehulling chamber), the dehulling chamber (the part of the machine that impacts forces on the seeds, thereby causing fractures and opening of the seed coats for the delivery of the oily kernels), the discharge unit (exit for oily kernels and seed coats after dehulling), the frame (for structural support and stability of all parts of the machine), and the electric motor (power source of the machine). The development process involved the design of major components (shaft diameter of 20 mm, machine velocity of 7.59 m/s, power requirement of a 3 hp single-phase electric motor, and structural support of mild steel angle iron), selection of construction materials, and fabrication. ANSYS R14.5 machine design software was used to design the shaft and structural support, while the other components were designed with conventional design equations. The machine works on the principle of centrifugal and impact forces. Performance evaluation was carried out after fabrication: 87.26%, 2.83 g/s, 8.9% and 3.84% were obtained for dehulling efficiency, throughput capacity, percentage partially dehulled, and percentage undehulled, respectively.

  2. Human performance data collected in a virtual environment

    Directory of Open Access Journals (Sweden)

    Mashrura Musharraf

    2017-12-01

Full Text Available This data article describes the experimental data used in the research article “Incorporating individual differences in human reliability analysis: an extension to the virtual experimental technique” (Musharraf et al., 2017) [1]. The article provides human performance data for 36 individuals collected using a virtual environment. Each participant was assigned to one of two groups for training: 1) G1: high level training and 2) G2: low level training. Participants’ performance was tested in 4 different virtual scenarios with different levels of visibility and complexity. Several performance metrics of the participants were recorded during each scenario. The metrics include: time to muster, time spent running, interaction with fire doors and watertight doors, interaction with hazards, and reporting at different muster locations.

  3. Human performance data collected in a virtual environment.

    Science.gov (United States)

    Musharraf, Mashrura; Smith, Jennifer; Khan, Faisal; Veitch, Brian; MacKinnon, Scott

    2017-12-01

    This data article describes the experimental data used in the research article "Incorporating individual differences in human reliability analysis: an extension to the virtual experimental technique" (Musharraf et al., 2017) [1]. The article provides human performance data for 36 individuals collected using a virtual environment. Each participant was assigned to one of two groups for training: 1) G1: high level training and 2) G2: low level training. Participants' performance was tested in 4 different virtual scenarios with different levels of visibility and complexity. Several performance metrics of the participants were recorded during each scenario. The metrics include: time to muster, time spent running, interaction with fire doors and watertight doors, interaction with hazards, and reporting at different muster locations.

  4. Boxing headguard performance in punch machine tests.

    Science.gov (United States)

    McIntosh, Andrew S; Patton, Declan A

    2015-09-01

The paper presents a novel laboratory method for assessing boxing headguard impact performance. The method is applied to examine the effects of headguards on head impact dynamics and injury risk. A linear impactor was developed, and a range of impacts was delivered to an instrumented Hybrid III head and neck system both with and without an AIBA (Association Internationale de Boxe Amateur)-approved headguard. Impacts at selected speeds between 4.1 and 8.3 m/s were undertaken. The impactor mass was approximately 4 kg, and an interface comprising a semi-rigid 'fist' with a glove was used. The peak contact forces were in the range 1.9-5.9 kN. Differences in head impact responses between the Top Ten AIBA-approved headguard and the bare headform in the lateral and forehead tests were large and/or significant. In the 8.3 m/s fist-glove impacts, the mean peak resultant headform acceleration in the bare headform tests was approximately 130 g, compared with approximately 85 g in the forehead impacts with the headguard. In the 6.85 m/s bare headform impacts, mean peak resultant angular head accelerations were in the range of 5200-5600 rad/s² and were almost halved by the headguard. Linear and angular accelerations in 45° forehead and 60° jaw impacts were reduced by the headguard. The data support the opinion that current AIBA headguards can play an important role in reducing the risk of concussion and superficial injury in boxing competition and training.

  5. Fabrication and Performance Evaluation of a Thevetia Nut Cracking Machine

    Directory of Open Access Journals (Sweden)

    M. M. Odewole

    2015-06-01

Full Text Available Thevetia seed contains about 64 percent non-edible oil in its oily kernel; this oil can be used for various purposes such as biofuel and bio-oil, and in the making of paints, insecticides, cosmetics, lubricants and cooling oil for electrical transformers. The cake obtained after oil extraction is incorporated on the field as manure. In order to get quality oil kernels from the hard nuts, there is a need to crack them properly; this cracking process is still a great challenge. As a result of the aforementioned problem, this work focused on the design, fabrication and performance evaluation of a thevetia nut cracking machine. The machine works on the principle of attrition force. The parts designed include the shaft diameter (13 mm solid shaft), belt length (A55), power required to operate the machine (2.5 hp) and speed of operation (9.14 m/s); an angle iron bar of 45 mm × 45 mm × 3 mm was used for the structural support. The fabrication was done systematically, followed by performance evaluation of the machine. The overall cracking efficiency and throughput capacity of the machine were evaluated to be 96.65% and 510 g/min, respectively.

  6. Management of Virtual Machine as an Energy Conservation in Private Cloud Computing System

    Directory of Open Access Journals (Sweden)

    Fauzi Akhmad

    2016-01-01

Full Text Available Cloud computing is a service model in which computing resources are packaged, accessed through the Internet on demand, and hosted in data centers. Data center architectures in cloud computing environments are heterogeneous and distributed, composed of clusters of network servers with computing resources of different capacities on different physical servers. Fluctuating demand for, and availability of, cloud services in the data center can be addressed through abstraction with virtualization technology. A virtual machine (VM) is a representation of available computing resources that can be dynamically allocated and reallocated on demand. This study addresses the consolidation of VMs as energy conservation in private cloud computing systems, targeting the optimization of the VM selection policy and of VM migration in the consolidation procedure. In a cloud data center, VMs hosting particular applications require different levels of computing resources. Unbalanced use of computing resources by VMs on physical servers can be reduced by using live VM migration to achieve workload balancing. A practical approach was used in developing an OpenStack-based cloud computing environment, integrating the VM selection and VM placement procedures with OpenStack Neat VM consolidation. The CPU time value is used to obtain the average CPU utilization, in MHz, within a specific time period: the current CPU_time minus the CPU_time from the previous data retrieval is multiplied by the maximum frequency of the CPU, and the result is divided by the elapsed time, in milliseconds, between the two retrievals.
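The CPU utilization calculation described above condenses into a single formula. The function below is an illustrative sketch of it (the names are hypothetical, not OpenStack Neat's actual API):

```python
def avg_cpu_utilization_mhz(cpu_time_now, cpu_time_prev, t_now_ms, t_prev_ms, max_freq_mhz):
    """Average CPU utilization (MHz) of a VM over one sampling interval:
    (delta CPU time) * max CPU frequency / elapsed wall-clock time,
    as described in the abstract above."""
    busy = cpu_time_now - cpu_time_prev   # CPU time consumed in the interval (ms)
    elapsed = t_now_ms - t_prev_ms        # length of the sampling interval (ms)
    return busy * max_freq_mhz / elapsed
```

For example, a VM that consumed 500 ms of CPU time over a 1000 ms interval on a 2600 MHz core averages 1300 MHz of utilization.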

  7. Machine Learning-based Virtual Screening and Its Applications to Alzheimer's Drug Discovery: A Review.

    Science.gov (United States)

    Carpenter, Kristy A; Huang, Xudong

    2018-06-07

Virtual Screening (VS) has emerged as an important tool in the drug development process, as it conducts efficient in silico searches over millions of compounds, ultimately increasing yields of potential drug leads. As a subset of Artificial Intelligence (AI), Machine Learning (ML) is a powerful way of conducting VS for drug leads. ML for VS generally involves assembling a filtered training set of compounds, comprised of known actives and inactives. After training, the model is validated and, if sufficiently accurate, used on previously unseen databases to screen for novel compounds with the desired drug target binding activity. This study aims to review ML-based methods used for VS and their applications to Alzheimer's disease (AD) drug discovery. To update the current knowledge on ML for VS, we review thorough backgrounds, explanations, and VS applications of the following ML techniques: Naïve Bayes (NB), k-Nearest Neighbors (kNN), Support Vector Machines (SVM), Random Forests (RF), and Artificial Neural Networks (ANN). All techniques have found success in VS, but the future of VS is likely to lean more heavily toward the use of neural networks, and more specifically Convolutional Neural Networks (CNN), a subset of ANN that utilize convolution. We additionally conceptualize a workflow for conducting ML-based VS for potential therapeutics for AD, a complex neurodegenerative disease with no known cure or prevention. This serves both as an example of how to apply the concepts introduced earlier in the review and as a potential workflow for future implementation. Different ML techniques, each with its own advantages and disadvantages, are powerful tools for VS, and ML-based VS can be applied to AD drug development.
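As a toy illustration of the ML-for-VS workflow the review describes (train on known actives and inactives, then rank an unseen library), here is a minimal Bernoulli Naive Bayes over binary fingerprints; the four-bit fingerprints and labels are synthetic, purely for demonstration:

```python
import math

def train_bernoulli_nb(X, y, alpha=1.0):
    """Fit per-bit Bernoulli likelihoods for inactives (0) and actives (1),
    with Laplace smoothing so no probability is ever exactly 0 or 1."""
    model = {}
    n_bits = len(X[0])
    for c in (0, 1):
        rows = [x for x, label in zip(X, y) if label == c]
        log_prior = math.log(len(rows) / len(X))
        probs = [(sum(r[j] for r in rows) + alpha) / (len(rows) + 2 * alpha)
                 for j in range(n_bits)]
        model[c] = (log_prior, probs)
    return model

def log_score(model, x, c):
    """Log joint probability of fingerprint x under class c."""
    log_prior, probs = model[c]
    ll = sum(math.log(p) if bit else math.log(1 - p) for bit, p in zip(x, probs))
    return log_prior + ll

def screen(model, library):
    """Rank unseen compounds by log-odds of being active (the VS step)."""
    return sorted(library,
                  key=lambda x: log_score(model, x, 1) - log_score(model, x, 0),
                  reverse=True)

# synthetic training set: actives carry bits 0-1, inactives carry bits 2-3
X = [[1, 1, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0],
     [0, 0, 1, 1], [0, 0, 1, 0], [0, 0, 0, 1]]
y = [1, 1, 1, 0, 0, 0]
model = train_bernoulli_nb(X, y)
ranked = screen(model, [[0, 0, 1, 1], [1, 1, 0, 0]])
```

The active-like fingerprint `[1, 1, 0, 0]` is ranked ahead of the inactive-like one, which is the prioritization step the review refers to.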

  8. Influence of electrical resistivity and machining parameters on electrical discharge machining performance of engineering ceramics.

    Science.gov (United States)

    Ji, Renjie; Liu, Yonghong; Diao, Ruiqiang; Xu, Chenchen; Li, Xiaopeng; Cai, Baoping; Zhang, Yanzhen

    2014-01-01

Engineering ceramics have been widely used in modern industry for their excellent physical and mechanical properties, but they are difficult to machine owing to their high hardness and brittleness. Electrical discharge machining (EDM) is an appropriate process for machining engineering ceramics provided they are electrically conducting. However, the electrical resistivity of popular engineering ceramics is high, and there has been no research on the relationship between the EDM parameters and the electrical resistivity of engineering ceramics. This paper investigates the effects of the electrical resistivity and EDM parameters, such as tool polarity, pulse interval, and electrode material, on the EDM performance of ZnO/Al2O3 ceramic, in terms of the material removal rate (MRR), electrode wear ratio (EWR), and surface roughness (SR). The results show that the electrical resistivity and the EDM parameters have a great influence on the EDM performance. ZnO/Al2O3 ceramic with an electrical resistivity up to 3410 Ω·cm can be effectively machined by EDM with a copper electrode, negative tool polarity, and a shorter pulse interval. Under most machining conditions, the MRR increases and the SR decreases as the electrical resistivity decreases. Moreover, the tool polarity and the pulse interval each affect the EWR, and the electrical resistivity and electrode material have a combined effect on the EWR. Furthermore, the EDM performance of ZnO/Al2O3 ceramic with an electrical resistivity higher than 687 Ω·cm differs markedly from that with an electrical resistivity lower than 687 Ω·cm when the electrode material changes. Microstructural analysis of the machined ZnO/Al2O3 ceramic surface shows that the material is removed by melting, evaporation and thermal spalling, and that material from the working fluid and the graphite electrode can transfer to the workpiece surface during electrical discharge machining.

  9. Prediction of tunnel boring machine performance using machine and rock mass data

    International Nuclear Information System (INIS)

    Dastgir, G.

    2012-01-01

Performance of the tunnel boring machine (TBM) and its prediction by different methods have been hot issues since the first TBM came into being. TBMs are quite frequently used in transport, hydro-power, mining, civil and many other tunneling projects that cannot be driven efficiently and economically by conventional drill and blast. TBM parameters and rock mass properties, which heavily influence machine performance, should be estimated or known before the choice of TBM type and the start of excavation. By applying linear regression analysis (SPSS 19), fuzzy logic tools and a special MATLAB code to actual field data collected from seven TBM-driven tunnels (Hieflau expansion, Queen water tunnel, Vereina, Hemerwald, Maen, Pieve and Varzo tunnels), an attempt was made to provide predictions of rock mass class (RMC), rock fracture class (RFC), penetration rate (PR) and advance rate (AR). For detailed analysis of TBM performance, machine parameters (thrust, machine rpm, torque, power, etc.), machine types and specifications, and rock mass properties (UCS, discontinuities in the rock mass, RMC, RFC, RMR, etc.) were analyzed by 3-D surface plotting using the statistical software R. Correlations between machine parameters and rock mass properties, which effectively influence the prediction models, are presented as well. In the Hieflau expansion tunnel, AR decreases linearly with increasing thrust due to the high dependence of machine advance rate on rock strength. For the Hieflau expansion tunnel, three types of data (TBM, rock mass and seismic data, e.g. amplitude, pseudo velocity, etc.) were coupled and simultaneously analyzed by plotting 3-D surfaces. No appreciable correlation between the seismic data (amplitude and pseudo velocity) and the rock mass properties or machine parameters could be found. Tool wear as a function of TBM operational parameters was also analyzed, which revealed that tool wear is minimal if the applied thrust is moderate and high when the thrust is excessive.
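The simplest of the prediction tools the thesis mentions is linear regression of a performance measure on a rock mass property. A stdlib sketch of ordinary least squares, fitted to synthetic UCS/penetration-rate pairs rather than the actual tunnel data:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x, the kind of single-predictor
    regression (cf. the SPSS analysis) used for penetration-rate prediction."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# synthetic (UCS [MPa], penetration rate [mm/rev]) pairs, not field data
ucs = [50, 80, 110, 140, 170]
pr = [9.0, 7.8, 6.6, 5.4, 4.2]
a, b = fit_line(ucs, pr)
predicted_pr_at_100 = a + b * 100
```

On these made-up points the fit recovers a negative slope, reflecting the usual trend that penetration rate drops as rock strength rises.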

  10. Detection of Stress Levels from Biosignals Measured in Virtual Reality Environments Using a Kernel-Based Extreme Learning Machine.

    Science.gov (United States)

    Cho, Dongrae; Ham, Jinsil; Oh, Jooyoung; Park, Jeanho; Kim, Sayup; Lee, Nak-Kyu; Lee, Boreom

    2017-10-24

Virtual reality (VR) is a computer technique that creates an artificial environment composed of realistic images, sounds, and other sensations. Many researchers have used VR devices to generate various stimuli and have utilized them to perform experiments or to provide treatment. In this study, the participants performed mental tasks using a VR device while physiological signals were measured: a photoplethysmogram (PPG), electrodermal activity (EDA), and skin temperature (SKT). In general, stress is an important factor that can influence the autonomic nervous system (ANS). Heart-rate variability (HRV) is known to be related to ANS activity, so we used an HRV signal derived from the PPG peak intervals. In addition, the peak characteristics of the skin conductance (SC) from EDA and SKT variation can also reflect ANS activity, so we utilized them as well. We then applied a kernel-based extreme learning machine (K-ELM) to classify the stress levels induced by the VR task, reflecting five different stress situations: baseline, mild stress, moderate stress, severe stress, and recovery. Twelve healthy subjects voluntarily participated in the study. Three physiological signals were measured in the stress environment generated by the VR device. As a result, the average classification accuracy was over 95% using K-ELM and the integrated feature set (IT = HRV + SC + SKT). In addition, the proposed algorithm can be embedded in a microcontroller chip, since the K-ELM algorithm has a very short computation time. Therefore, a compact wearable device that classifies stress levels from physiological signals can be developed.
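A compact sketch of a kernel-based ELM classifier of the kind used here: the output weights are obtained by solving (I/C + Ω)β = T, where Ω is the RBF kernel matrix over the training samples and T holds one-hot targets. The toy 2-D points below stand in for the HRV/SC/SKT feature vectors; everything numeric is an assumption:

```python
import math

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def solve(A, B):
    """Gauss-Jordan elimination with partial pivoting for A X = B (multiple RHS)."""
    n = len(A)
    M = [row[:] + rhs[:] for row, rhs in zip(A, B)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [[M[i][n + j] / M[i][i] for j in range(len(B[0]))] for i in range(n)]

def train_kelm(X, labels, n_classes, C=100.0):
    """K-ELM training: beta = (I/C + Omega)^-1 T, Omega_ij = K(x_i, x_j)."""
    n = len(X)
    omega = [[rbf(X[i], X[j]) + (1.0 / C if i == j else 0.0) for j in range(n)]
             for i in range(n)]
    T = [[1.0 if labels[i] == c else 0.0 for c in range(n_classes)] for i in range(n)]
    return solve(omega, T)

def predict_kelm(X_train, beta, x):
    """Score each class via k(x)·beta and return the argmax class index."""
    k = [rbf(x, xi) for xi in X_train]
    scores = [sum(ki * bi[c] for ki, bi in zip(k, beta)) for c in range(len(beta[0]))]
    return scores.index(max(scores))

X = [[0, 0], [0.2, 0], [0, 0.2], [2, 2], [2.2, 2], [2, 2.2]]
labels = [0, 0, 0, 1, 1, 1]
beta = train_kelm(X, labels, 2)
```

Because training reduces to one regularized linear solve, the inference step is just a kernel row times fixed weights, which is why the authors note K-ELM is fast enough for a microcontroller.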

  11. Machine learning-based assessment tool for imbalance and vestibular dysfunction with virtual reality rehabilitation system.

    Science.gov (United States)

    Yeh, Shih-Ching; Huang, Ming-Chun; Wang, Pa-Chun; Fang, Te-Yung; Su, Mu-Chun; Tsai, Po-Yi; Rizzo, Albert

    2014-10-01

Dizziness is a major consequence of imbalance and vestibular dysfunction. Compared to surgery and drug treatments, balance training is non-invasive and more desirable. However, training exercises are usually tedious, and existing assessment tools are insufficient to diagnose a patient's severity rapidly. An interactive virtual reality (VR) game-based rehabilitation program that adopted Cawthorne-Cooksey exercises, together with a sensor-based measuring system, was introduced. To verify the therapeutic effect, a clinical experiment with 48 patients and 36 normal subjects was conducted. Quantified balance indices were measured and analyzed with statistical tools and a Support Vector Machine (SVM) classifier. In terms of balance indices, patients who completed the training process showed progress, and the difference between normal subjects and patients was evident. Further analysis with the SVM classifier shows that the differences between patients and normal subjects can be recognized accurately, and these results can be used to evaluate a patient's severity and to make rapid assessments.

  12. A security-awareness virtual machine management scheme based on Chinese wall policy in cloud computing.

    Science.gov (United States)

    Yu, Si; Gui, Xiaolin; Lin, Jiancai; Tian, Feng; Zhao, Jianqiang; Dai, Min

    2014-01-01

Cloud computing is getting increasing attention for its capacity to relieve developers of infrastructure management tasks. However, recent works reveal that side channel attacks can lead to privacy leakage in the cloud. Enhancing isolation between users is an effective solution to eliminate such attacks. In this paper, to eliminate side channel attacks, we investigate the isolation enhancement scheme from the aspect of virtual machine (VM) management. The security-awareness VM management scheme (SVMS), a VM isolation enhancement scheme to defend against side channel attacks, is proposed. First, we use the aggressive conflict of interest relation (ACIR) and aggressive in ally with relation (AIAR) to describe user constraint relations. Second, based on the Chinese wall policy, we put forward four isolation rules. Third, VM placement and migration algorithms are designed to enforce VM isolation between conflicting users. Finally, based on the normal distribution, we conduct a series of experiments to evaluate SVMS. The experimental results show that SVMS is efficient in guaranteeing isolation between VMs owned by conflicting users, while the resource utilization rate decreases only slightly.
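The four isolation rules are not spelled out in the abstract, but the core Chinese wall check behind them (never co-locate VMs of users who share a conflict-of-interest class) can be sketched as follows; the data model and function names are hypothetical, not from the paper:

```python
def conflicts(owner_a, owner_b, acir):
    """Two distinct owners conflict if they share a conflict-of-interest class.
    acir is a list of sets, each set one conflict class (cf. ACIR in the paper)."""
    if owner_a == owner_b:
        return False                        # a user never conflicts with itself
    return any(owner_a in group and owner_b in group for group in acir)

def can_place(vm_owner, host_vms, acir):
    """Chinese wall rule: a VM may be placed on a host only if no co-resident
    VM belongs to a conflicting user."""
    return all(not conflicts(vm_owner, other, acir) for other in host_vms)

def place(vm_owner, hosts, acir):
    """Greedy placement honouring the isolation rule; returns the host index,
    or None if a new physical host would be required."""
    for i, host_vms in enumerate(hosts):
        if can_place(vm_owner, host_vms, acir):
            host_vms.append(vm_owner)
            return i
    return None

acir = [{"bankA", "bankB"}]                 # bankA and bankB are competitors
hosts = [["bankA"], []]
chosen = place("bankB", hosts, acir)        # bankB must avoid bankA's host
```

Here `bankB` lands on the empty host, which is exactly the isolation-versus-utilization trade-off the experiments quantify.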

  13. Virtual Networking Performance in OpenStack Platform for Network Function Virtualization

    Directory of Open Access Journals (Sweden)

    Franco Callegati

    2016-01-01

Full Text Available The emerging Network Function Virtualization (NFV) paradigm, coupled with the highly flexible and programmatic control of network devices offered by Software Defined Networking solutions, enables unprecedented levels of network virtualization that will definitely change the shape of future network architectures, where legacy telco central offices will be replaced by cloud data centers located at the edge. On the one hand, this software-centric evolution of telecommunications will allow network operators to take advantage of the increased flexibility and reduced deployment costs typical of cloud computing. On the other hand, it will pose a number of challenges in terms of virtual network performance and customer isolation. This paper intends to provide some insights on how an open-source cloud computing platform such as OpenStack implements multitenant network virtualization and how it can be used to deploy NFV, focusing in particular on packet forwarding performance issues. To this purpose, a set of experiments is presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single-tenant and multitenant scenarios. From the results of the evaluation it is possible to highlight potentials and limitations of running NFV on OpenStack.

  14. Machine Learning Approaches Toward Building Predictive Models for Small Molecule Modulators of miRNA and Its Utility in Virtual Screening of Molecular Databases.

    Science.gov (United States)

    Periwal, Vinita; Scaria, Vinod

    2017-01-01

The ubiquitous role of microRNAs (miRNAs) in a number of pathological processes has suggested that they could act as potential drug targets. RNA-binding small molecules offer an attractive means of modulating miRNA function. The availability of bioassay data sets for a variety of biological assays and molecules in the public domain provides a new opportunity to utilize them to create models, and further to employ those models in in silico virtual screening approaches to prioritize or assign potential functions to small molecules. Here, we describe a computational strategy based on machine learning for the creation of predictive models from high-throughput biological screens for virtual screening of small molecules with the potential to inhibit microRNAs. Such models could potentially be used for computational prioritization of small molecules before performing high-throughput biological assays.

  15. Institutional Forecasting: The Performance of Thin Virtual Stock Markets

    NARCIS (Netherlands)

    G.H. van Bruggen (Gerrit); M. Spann (Martin); G.L. Lilien (Gary); B. Skiera (Bernd)

    2006-01-01

    textabstractWe study the performance of Virtual Stock Markets (VSMs) in an institutional forecasting environment. We compare VSMs to the Combined Judgmental Forecast (CJF) and the Key Informant (KI) approach. We find that VSMs can be effectively applied in an environment with a small number of

  16. Coupling Matched Molecular Pairs with Machine Learning for Virtual Compound Optimization.

    Science.gov (United States)

    Turk, Samo; Merget, Benjamin; Rippmann, Friedrich; Fulle, Simone

    2017-12-26

Matched molecular pair (MMP) analyses are widely used in compound optimization projects to gain insights into structure-activity relationships (SAR). The analysis is traditionally done via statistical methods but can also be employed together with machine learning (ML) approaches to extrapolate to novel compounds. The MMP/ML method introduced here combines a fragment-based MMP implementation with different machine learning methods to obtain automated SAR decomposition and prediction. To test the prediction capabilities and model transferability, two different compound optimization scenarios were designed: (1) "new fragments", which occurs when exploring new fragments for a defined compound series, and (2) "new static core and transformations", which resembles, for instance, the identification of a new compound series. Very good results were achieved by all employed machine learning methods, especially for the new fragments case, but overall deep neural network models performed best, allowing reliable predictions even for the new static core and transformations scenario, where comprehensive SAR knowledge of the compound series is missing. Furthermore, we show that models trained on all available data have higher generalizability compared with models trained on focused series and can extend beyond the chemical space covered in the training data. Thus, coupling MMP with deep neural networks provides a promising approach to making high-quality predictions on various data sets and in different compound optimization scenarios.

  17. Synthetic hardware performance analysis in virtualized cloud environment for healthcare organization.

    Science.gov (United States)

    Tan, Chee-Heng; Teh, Ying-Wah

    2013-08-01

The main obstacles to mass adoption of cloud computing for database operations in healthcare organizations are data security and privacy issues. In this paper, it is shown that IT services, particularly hardware performance evaluation in virtual machines, can be accomplished effectively without IT personnel gaining access to actual data for diagnostic and remediation purposes. The proposed mechanisms utilize hypothetical data from the TPC-H benchmark to achieve two objectives. First, the underlying hardware performance and consistency are monitored via a control system constructed from TPC-H queries. Second, a mechanism to construct stress-testing scenarios in the host, using a single TPC-H query or a combination of queries, is envisaged, so that the resource threshold point at which the virtual machine is still capable of serving critical transactions can be verified. This threshold point uses server run queue size as an input parameter, and it serves two purposes: it provides the boundary threshold to the control system, so that periodic learning of the synthetic data sets for performance evaluation does not reach the host's constraint level; and, when the host undergoes a hardware change, stress-testing scenarios are simulated in the host by loading it up to this resource threshold level, for subsequent response-time verification with real and critical transactions.

  18. A Heuristic Placement Selection of Live Virtual Machine Migration for Energy-Saving in Cloud Computing Environment

    Science.gov (United States)

    Zhao, Jia; Hu, Liang; Ding, Yan; Xu, Gaochao; Hu, Ming

    2014-01-01

The field of live VM (virtual machine) migration has been a hotspot problem in green cloud computing. The live VM migration problem is divided into two research aspects: the live VM migration mechanism and the live VM migration policy. Meanwhile, with the development of energy-aware computing, we have focused on the VM placement selection of live migration, namely the live VM migration policy for energy saving. In this paper, a novel heuristic approach, PS-ES, is presented. Its main idea includes two parts. One is that it combines the PSO (particle swarm optimization) idea with the SA (simulated annealing) idea to achieve an improved PSO-based approach with better global search ability. The other is that it uses probability theory and mathematical statistics, and once again utilizes the SA idea, to process the data obtained from the improved PSO-based stage and obtain the final solution. Thus the whole approach achieves a long-term optimization for energy saving, as it considers not only the optimization of the current problem scenario but also that of future ones. The experimental results demonstrate that PS-ES evidently reduces the total incremental energy consumption and better protects the performance of VM running and migration compared with the randomly-migrating and optimally-migrating strategies. As a result, the proposed PS-ES approach can make the results of live VM migration events more effective and valuable. PMID:25251339
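An illustrative miniature of the PS-ES idea: PSO velocity updates combined with an SA-style acceptance test and cooling schedule, minimizing a toy "energy" function in place of the paper's data-center model (all constants, the objective, and the way SA is grafted onto PSO are assumptions for illustration):

```python
import math
import random

rng = random.Random(1)

def energy(x):
    """Toy incremental-energy objective to minimize (stands in for the
    data-center energy model that PS-ES actually optimizes)."""
    return sum(xi ** 2 for xi in x)

def ps_es(dim=3, particles=10, iters=150, temp=1.0, cooling=0.97):
    """PSO with an SA-flavoured acceptance rule on each personal best."""
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=energy)
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]                                   # inertia
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            delta = energy(pos[i]) - energy(pbest[i])
            # SA-style acceptance: occasionally keep a worse personal best
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=energy)   # global best is never lost
        temp *= cooling                            # annealing schedule
    return gbest

best = ps_es()
```

The annealing term lets the swarm escape local optima early on, while the decaying temperature makes the search behave like plain PSO in later iterations, mirroring the improved-global-search claim in the abstract.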

  19. On the Parallel Elliptic Single/Multigrid Solutions about Aligned and Nonaligned Bodies Using the Virtual Machine for Multiprocessors

    Directory of Open Access Journals (Sweden)

    A. Averbuch

    1994-01-01

Full Text Available Parallel elliptic single/multigrid solutions around an aligned and a nonaligned body are presented and implemented on two multi-user and single-user shared-memory multiprocessors (Sequent Symmetry and MOS) and on a distributed-memory multiprocessor (a Transputer network). Our parallel implementation uses the Virtual Machine for Multi-Processors (VMMP), a software package that provides a coherent set of services for explicitly parallel application programs running on diverse multiple instruction multiple data (MIMD) multiprocessors, both shared memory and message passing. VMMP is intended to simplify parallel program writing and to promote portable and efficient programming. Furthermore, it ensures high portability of application programs by implementing the same services on all target multiprocessors. The performance of our algorithm is investigated in detail. It is seen to fit the above architectures well when the number of processors is less than the maximal number of grid points along the axes. In general, the efficiency in the nonaligned case is higher than in the aligned case. Alignment overhead is observed to be up to 200% in the shared-memory case and up to 65% in the message-passing case. We have demonstrated that when using VMMP, porting the algorithms is straightforward and efficient.

  20. A heuristic placement selection of live virtual machine migration for energy-saving in cloud computing environment.

    Science.gov (United States)

    Zhao, Jia; Hu, Liang; Ding, Yan; Xu, Gaochao; Hu, Ming

    2014-01-01

The field of live VM (virtual machine) migration has been a hotspot problem in green cloud computing. The live VM migration problem is divided into two research aspects: the live VM migration mechanism and the live VM migration policy. Meanwhile, with the development of energy-aware computing, we have focused on the VM placement selection of live migration, namely the live VM migration policy for energy saving. In this paper, a novel heuristic approach, PS-ES, is presented. Its main idea includes two parts. One is that it combines the PSO (particle swarm optimization) idea with the SA (simulated annealing) idea to achieve an improved PSO-based approach with better global search ability. The other is that it uses probability theory and mathematical statistics, and once again utilizes the SA idea, to process the data obtained from the improved PSO-based stage and obtain the final solution. Thus the whole approach achieves a long-term optimization for energy saving, as it considers not only the optimization of the current problem scenario but also that of future ones. The experimental results demonstrate that PS-ES evidently reduces the total incremental energy consumption and better protects the performance of VM running and migration compared with the randomly-migrating and optimally-migrating strategies. As a result, the proposed PS-ES approach can make the results of live VM migration events more effective and valuable.

  1. A heuristic placement selection of live virtual machine migration for energy-saving in cloud computing environment.

    Directory of Open Access Journals (Sweden)

    Jia Zhao

    Full Text Available The field of live VM (virtual machine) migration has been a hotspot in green cloud computing. The live VM migration problem divides into two research aspects: the live VM migration mechanism and the live VM migration policy. Meanwhile, with the development of energy-aware computing, we have focused on the VM placement selection of live migration, namely the live VM migration policy for energy saving. In this paper, a novel heuristic approach, PS-ES, is presented. Its main idea has two parts. One is that it combines the PSO (particle swarm optimization) idea with the SA (simulated annealing) idea to achieve an improved PSO-based approach with better global-search ability. The other is that it uses probability theory and mathematical statistics, and once again applies the SA idea, to process the data obtained from the improved PSO-based stage into the final solution. The whole approach thus achieves a long-term optimization for energy saving, as it considers not only the optimization of the current problem scenario but also that of future ones. The experimental results demonstrate that PS-ES evidently reduces the total incremental energy consumption and better protects the performance of VM running and migration compared with randomly migrating and optimally migrating. As a result, the proposed PS-ES approach can make the results of live VM migration events more effective and valuable.

  2. Factors Affecting Slot Machine Performance in an Electronic Gaming Machine Facility

    OpenAIRE

    Etienne Provencal; David L. St-Pierre

    2017-01-01

    A facility exploiting only electronic gambling machines (EGMs) opened in 2007 in Quebec City, Canada under the name of Salons de Jeux du Québec (SdjQ). This facility is one of the first worldwide to rely on that business model. This paper models the performance of such EGMs. The interest from a managerial point of view is to identify the variables that can be controlled or influenced so that a comprehensive model can help improve the overall performance of the business. The EGM individual per...

  3. Ghost Whisperer's Ghost in the Machine: An example of pop cultural representation of virtual worlds

    DEFF Research Database (Denmark)

    Reinhard, CarrieLynn D.

    2009-01-01

    Analysis of an episode of the CBS series "Ghost Whisperer" for how it depicts a) what is a virtual world and b) the tensions that are involved in discussing the uses and effects of a virtual world.  Discussion focuses on the overriding negative reception of virtual worlds in popular culture due...

  4. Performance evaluation of coherent Ising machines against classical neural networks

    Science.gov (United States)

    Haribara, Yoshitaka; Ishikawa, Hitoshi; Utsunomiya, Shoko; Aihara, Kazuyuki; Yamamoto, Yoshihisa

    2017-12-01

    The coherent Ising machine is expected to find a near-optimal solution in various combinatorial optimization problems, which has been experimentally confirmed with optical parametric oscillators and a field programmable gate array circuit. Similar mathematical models were proposed three decades ago by Hopfield et al. in the context of classical neural networks. In this article, we compare the computational performance of both models.

  5. Improving virtual screening predictive accuracy of Human kallikrein 5 inhibitors using machine learning models.

    Science.gov (United States)

    Fang, Xingang; Bagui, Sikha; Bagui, Subhash

    2017-08-01

    The readily available high throughput screening (HTS) data from the PubChem database provides an opportunity for mining of small molecules in a variety of biological systems using machine learning techniques. From the thousands of available molecular descriptors developed to encode useful chemical information representing the characteristics of molecules, descriptor selection is an essential step in building an optimal quantitative structure-activity relationship (QSAR) model. For the development of a systematic descriptor selection strategy, we need an understanding of the relationship between: (i) the descriptor selection; (ii) the choice of the machine learning model; and (iii) the characteristics of the target bio-molecule. In this work, we employed the Signature descriptor to generate a dataset on the Human kallikrein 5 (hK 5) inhibition confirmatory assay data and compared multiple classification models including logistic regression, support vector machine, random forest and k-nearest neighbor. Under optimal conditions, the logistic regression model provided extremely high overall accuracy (98%) and precision (90%), with good sensitivity (65%) in the cross validation test. In testing the primary HTS screening data with more than 200K molecular structures, the logistic regression model exhibited the capability of eliminating more than 99.9% of the inactive structures. As part of our exploration of the descriptor-model-target relationship, the excellent predictive performance of the combination of the Signature descriptor and the logistic regression model on the assay data of the Human kallikrein 5 (hK 5) target suggested a feasible descriptor/model selection strategy on similar targets. Copyright © 2017 Elsevier Ltd. All rights reserved.
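    As a rough illustration of the kind of classifier this record describes, the sketch below trains a from-scratch logistic regression on synthetic binary substructure bits. The descriptors, data, and training settings are invented stand-ins; the study itself uses Signature descriptors and far larger PubChem assay data, and in practice one would use a library implementation.

```python
import math
import random

def train_logreg(X, y, lr=0.5, epochs=300):
    """Plain batch-gradient-descent logistic regression (toy stand-in
    for the screening model in the record above)."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - yi                      # gradient of log-loss w.r.t. z
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, x)) + b)))

# Toy data: presence/absence of 4 hypothetical substructure bits;
# bit 0 drives "activity" in this synthetic set (with 10% label noise).
rng = random.Random(0)
X = [[rng.randint(0, 1) for _ in range(4)] for _ in range(200)]
y = [1 if xi[0] == 1 and rng.random() < 0.9 else 0 for xi in X]
w, b = train_logreg(X, y)
acc = sum((predict(w, b, xi) > 0.5) == bool(yi) for xi, yi in zip(X, y)) / len(X)
```

    Raising the 0.5 decision threshold trades sensitivity for precision, which is how a screening filter of this kind can discard the overwhelming majority of inactive structures while keeping most actives.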

  6. Pavement Subgrade Performance Study in the Danish Road Testing Machine

    DEFF Research Database (Denmark)

    Ullidtz, Per; Ertman Larsen, Hans Jørgen

    1997-01-01

    Most existing pavement subgrade criteria are based on the AASHO Road Test, where only one material was tested and for only one climatic condition. To study the validity of these criteria and to refine the criteria, a co-operative research program entitled the "International Pavement Subgrade Performance Study" was sponsored by the FHWA with American, Finnish and Danish partners. This paper describes the first test series, which was carried out in the Danish Road Testing Machine (RTM). The first step in this program is a full scale test on an instrumented pavement in the Danish Road Testing Machine. Pressure gauges and strain cells were installed in the upper part of the subgrade, for measuring stresses and strains in all three directions. During and after construction, FWD testing was carried out to evaluate the elastic parameters of the materials. These parameters were then used with the theory…

  7. Aurally Aided Visual Search Performance Comparing Virtual Audio Systems

    DEFF Research Database (Denmark)

    Larsen, Camilla Horne; Lauritsen, David Skødt; Larsen, Jacob Junker

    2014-01-01

    Due to increased computational power, reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between a HRTF enhanced audio system (3D) and an … with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger amounts of distractors. The applications of this research could fit virtual environments such as video games or virtual simulations.

  9. Virtual screening for cytochromes p450: successes of machine learning filters.

    Science.gov (United States)

    Burton, Julien; Ijjaali, Ismail; Petitet, François; Michel, André; Vercauteren, Daniel P

    2009-05-01

    Cytochromes P450 (CYPs) are crucial targets when predicting the ADME properties (absorption, distribution, metabolism, and excretion) of drugs in development. In particular, CYP-mediated drug-drug interactions are responsible for major failures in the drug design process. Accurate and robust screening filters are thus needed to predict interactions of potent compounds with CYPs as early as possible in the process. In recent years, more and more 3D structures of various CYP isoforms have been solved, opening the door to accurate structure-based studies of interactions. Nevertheless, the ligand-based approach still remains popular. This success can be explained by the growing number of available data and the satisfying performances of existing machine learning (ML) methods. The aim of this contribution is to give an overview of the recent achievements in ML applications to CYP datasets. In particular, popular methods such as support vector machines, decision trees, artificial neural networks, k-nearest neighbors, and partial least squares will be compared, as well as the quality of the datasets and the descriptors used. Consensus of different methods will also be discussed. Often reaching 90% accuracy, the models will be analyzed to highlight the key descriptors permitting the good prediction of CYP binding.

  10. Investigations of Cutting Fluid Performance Using Different Machining Operations

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo; Belluco, Walter

    2002-01-01

    An analysis of cutting fluid performance in different metal cutting operations is presented, based on performance criteria, work material and fluid type. Cutting fluid performance was evaluated in turning, drilling, reaming and tapping operations, with respect to tool life, cutting forces and product quality. In the case of austenitic stainless steel as the workpiece material, results using the different operations under different cutting conditions show that the performance of vegetable oil based products is superior or equal to that of mineral oil based products. The hypothesis was investigated that one will get the same performance ranking for different metalworking fluids, no matter what machining test is used, when the fluids are of the same type. Results show that this is mostly true for the water-based fluids on austenitic stainless steel, while ranking did change depending on the test with straight oils.

  11. Man, mind, and machine: the past and future of virtual reality simulation in neurologic surgery.

    Science.gov (United States)

    Robison, R Aaron; Liu, Charles Y; Apuzzo, Michael L J

    2011-11-01

    To review virtual reality in neurosurgery, including the history of simulation and virtual reality and some of the current implementations; to examine some of the technical challenges involved; and to propose a potential paradigm for the development of virtual reality in neurosurgery going forward. A search was made on PubMed using key words surgical simulation, virtual reality, haptics, collision detection, and volumetric modeling to assess the current status of virtual reality in neurosurgery. Based on previous results, investigators extrapolated the possible integration of existing efforts and potential future directions. Simulation has a rich history in surgical training, and there are numerous currently existing applications and systems that involve virtual reality. All existing applications are limited to specific task-oriented functions and typically sacrifice visual realism for real-time interactivity or vice versa, owing to numerous technical challenges in rendering a virtual space in real time, including graphic and tissue modeling, collision detection, and direction of the haptic interface. With ongoing technical advancements in computer hardware and graphic and physical rendering, incremental or modular development of a fully immersive, multipurpose virtual reality neurosurgical simulator is feasible. The use of virtual reality in neurosurgery is predicted to change the nature of neurosurgical education, and to play an increased role in surgical rehearsal and the continuing education and credentialing of surgical practitioners. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. MLViS: A Web Tool for Machine Learning-Based Virtual Screening in Early-Phase of Drug Discovery and Development.

    Science.gov (United States)

    Korkmaz, Selcuk; Zararsiz, Gokmen; Goksuluk, Dincer

    2015-01-01

    Virtual screening is an important step in the early phase of the drug discovery process. Since there are thousands of compounds, this step should be both fast and effective in order to distinguish drug-like and nondrug-like molecules. Statistical machine learning methods are widely used in drug discovery studies for classification purposes. Here, we aim to develop a new tool, which can classify molecules as drug-like and nondrug-like based on various machine learning methods, including discriminant, tree-based, kernel-based, ensemble and other algorithms. To construct this tool, first, the performances of twenty-three different machine learning algorithms are compared by ten different measures; then, the ten best-performing algorithms are selected based on principal component and hierarchical cluster analysis results. Besides classification, this application also has the ability to create heat maps and dendrograms for visual inspection of the molecules through hierarchical cluster analysis. Moreover, users can connect to the PubChem database to download molecular information and to create two-dimensional structures of compounds. This application is freely available through www.biosoft.hacettepe.edu.tr/MLViS/.

  13. Usage of OpenStack Virtual Machine and MATLAB HPC Add-on leads to faster turnaround

    KAUST Repository

    Van Waveren, Matthijs

    2017-03-16

    We need to run hundreds of MATLAB® simulations while changing the parameters between each simulation. These simulations need to be run sequentially, and the parameters are defined manually from one simulation to the next. This makes this type of workload unsuitable for a shared cluster. For this reason we are using a cluster running in an OpenStack® Virtual Machine and are using the MATLAB HPC Add-on for submitting jobs to the cluster. As a result we are now able to have a turnaround time for the simulations of the order of a few hours, instead of the 24 hours needed on a local workstation.

  14. Assessment of In-Cloud Enterprise Resource Planning System Performed in a Virtual Cluster

    Directory of Open Access Journals (Sweden)

    Bao Rong Chang

    2015-01-01

    Full Text Available This paper introduces a high-performance, high-availability in-cloud enterprise resource planning (in-cloud ERP) system deployed in a virtual machine cluster. The proposed approach resolves the crucial problems of ERP failure due to unexpected downtime and of failover between physical hosts in enterprises, which cause operation termination and hence data loss. Besides, together with access control authentication and network security, the proposed approach is capable of preventing intrusion and/or malicious attacks via the internet. Regarding system assessment, the cost-performance (C-P) ratio, a remarkable cost-effectiveness evaluation, has been applied to several notable ERP systems. As a result, the C-P ratio evaluated from the experiments shows that the proposed approach outperforms two well-known benchmark ERP systems, namely in-house ECC 6.0 and in-cloud ByDesign.

  15. Improving the performance and energy-efficiency of virtual memory

    OpenAIRE

    Karakostas, Vasileios

    2016-01-01

    Virtual memory improves programmer productivity, enhances process security, and increases memory utilization. However, virtual memory requires an address translation from the virtual to the physical address space on every memory operation. Page-based implementations of virtual memory divide physical memory into fixed size pages, and use a per-process page table to map virtual pages to physical pages. The hardware key component for accelerating address translation is the Translation Lookasi...

  16. From the Symbolic Analysis of Virtual Faces to a Smiles Machine.

    Science.gov (United States)

    Ochs, Magalie; Diday, Edwin; Afonso, Filipe

    2016-02-01

    In this paper, we present an application of symbolic data processing for the design of virtual character's smiling facial expressions. A collected database of virtual character's smiles directly created by users has been explored using symbolic data analysis methods. An unsupervised analysis has enabled us to identify the morphological and dynamic characteristics of different types of smiles as well as of combinations of smiles. Based on the symbolic data analysis, to generate different smiling faces, we have developed procedures to automatically reconstitute smiling virtual faces from a point in a multidimensional space corresponding to a principal component analysis plane.

  17. Virtual Prototyping and Performance Analysis of Two Memory Architectures

    Directory of Open Access Journals (Sweden)

    Huda S. Muhammad

    2009-01-01

    Full Text Available The gap between CPU and memory speed has always been a critical concern that motivated researchers to study and analyze the performance of memory hierarchical architectures. In the early stages of the design cycle, performance evaluation methodologies can be used to leverage exploration at the architectural level and assist in making early design tradeoffs. In this paper, we use simulation platforms developed using the VisualSim tool to compare the performance of two memory architectures, namely, the Direct Connect architecture of the Opteron, and the Shared Bus of the Xeon multicore processors. Key variations exist between the two memory architectures and both design approaches provide rich platforms that call for the early use of virtual system prototyping and simulation techniques to assess performance at an early stage in the design cycle.

  18. Video Quality Assessment and Machine Learning: Performance and Interpretability

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2015-01-01

    In this work we compare a simple and a complex Machine Learning (ML) method used for the purpose of Video Quality Assessment (VQA). The simple ML method chosen is the Elastic Net (EN), which is a regularized linear regression model and easier to interpret. The more complex method chosen is Support Vector Regression (SVR), which has gained popularity in VQA research. Additionally, we present an ML-based feature selection method. Also, it is investigated how well the methods perform when tested on videos from other datasets. Our results show that content-independent cross-validation performance on a single dataset can be misleading and that in the case of very limited training and test data, especially in regards to different content as is the case for many video datasets, a simple ML approach is the better choice.

  19. HAVmS: Highly Available Virtual Machine Computer System Fault Tolerant with Automatic Failback and Close to Zero Downtime

    Directory of Open Access Journals (Sweden)

    Memmo Federici

    2014-12-01

    Full Text Available In scientific computing, systems often manage computations that require continuous acquisition of satellite data and the management of large databases, as well as the execution of analysis software and simulation models (e.g. Monte Carlo or molecular dynamics cell simulations) which may require several weeks of continuous run. These systems, consequently, should ensure continuity of operation even in the case of serious faults. HAVmS (High Availability Virtual machine System) is a highly available, "fault tolerant" system with zero downtime in case of fault. It is based on the use of virtual machines and implemented by two servers with similar characteristics. HAVmS, thanks to the developed software solutions, is unique in its kind since it automatically fails back once faults have been fixed. The system has been designed to be used both with professional or inexpensive hardware and supports the simultaneous execution of multiple services such as: web, mail, computing and administrative services, uninterrupted computing, and database management. Finally, the system is cost effective, adopting exclusively open source solutions, is easily manageable, and is suited for general use.

  20. Latency and User Performance in Virtual Environments and Augmented Reality

    Science.gov (United States)

    Ellis, Stephen R.

    2009-01-01

    System rendering latency has been recognized by senior researchers, such as Professor Fredrick Brooks of UNC (Turing Award 1999), as a major factor limiting the realism and utility of head-referenced display systems. Latency has been shown to reduce the user's sense of immersion within a virtual environment, disturb user interaction with virtual objects, and to contribute to motion sickness during some simulation tasks. Latency, however, is not just an issue for external display systems, since finite nerve conduction rates and variation in transduction times in the human body's sensors also pose problems for latency management within the nervous system. Some of the phenomena arising from the brain's handling of sensory asynchrony due to latency will be discussed as a prelude to consideration of the effects of latency in interactive displays. The causes and consequences of the erroneous movement that appears in displays due to latency will be illustrated with examples of the user performance impact provided by several experiments. These experiments will review the generality of user sensitivity to latency when users judge either object or environment stability. Hardware and signal processing countermeasures will also be discussed. In particular, the tuning of a simple extrapolative predictive filter not using a dynamic movement model will be presented. Results show that it is possible to adjust this filter so that the appearance of some latencies may be hidden without the introduction of perceptual artifacts such as overshoot. Several examples of the effects on user performance will be illustrated by three-dimensional tracking and tracing tasks executed in virtual environments. These experiments demonstrate classic phenomena known from work on manual control and show the need for very responsive systems if they are intended to support precise manipulation. The practical benefits of removing interfering latencies from interactive systems will be emphasized with some
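    An extrapolative predictive filter of the kind mentioned in this record can be illustrated with a minimal first-order sketch: estimate velocity from the last two samples and project the pose forward over the known latency. The sampling rate, latency value, and the `alpha` under-prediction gain are assumptions for illustration, not the experiment's actual tuning.

```python
def predict_pose(samples, latency, alpha=1.0):
    """First-order extrapolation over a known latency: estimate velocity
    from the last two (t, x) samples and project forward. Choosing
    alpha < 1 under-predicts, trading residual lag for reduced overshoot
    (the tuning trade-off the record above discusses)."""
    (t0, x0), (t1, x1) = samples[-2], samples[-1]
    v = (x1 - x0) / (t1 - t0)
    return x1 + alpha * v * latency

# Head position sampled every 10 ms while moving at 2 units/s;
# compensate a 50 ms render latency (all values illustrative).
samples = [(0.00, 0.00), (0.01, 0.02)]
predicted = predict_pose(samples, latency=0.05)
```

    For the constant-velocity toy input, the prediction lands exactly on the true position at render time; real head motion accelerates, which is why overshoot appears and why the gain must be tuned.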

  1. Performance of a Folded-Strip Toroidally Wound Induction Machine

    DEFF Research Database (Denmark)

    Jensen, Bogi Bech; Jack, Alan G.; Atkinson, Glynn J.

    2011-01-01

    This paper presents the measured experimental results from a four-pole toroidally wound induction machine, where the stator is constructed as a pre-wound foldable strip. It shows that if the machine is axially restricted in length, the toroidally wound induction machine can have substantially shorter stator end-windings than conventionally wound induction machines, and hence that a toroidally wound induction machine can have lower losses and a higher efficiency. The paper also presents the employed construction method, which emphasizes manufacturability, and highlights the advantages…

  2. Integrated Real-Virtuality System and Environments for Advanced Control System Developers and Machines Builders

    OpenAIRE

    Hussein, Mohamed

    2008-01-01

    The pace of technological change is increasing, and sophisticated customer-driven markets are forcing rapid machine evolution, increasing complexity and quality, and faster response. To survive and thrive in these markets, machine builders/suppliers require absolute customer and market orientation, focusing on rapid provision of solutions rather than products. Their production systems will need to accommodate unpredictable changes while maintaining financial and operational efficiency with…

  3. Performance of Process Damping in Machining Titanium Alloys at Low Cutting Speed with Different Helix Tools

    International Nuclear Information System (INIS)

    Shaharun, M A; Yusoff, A R; Reza, M S; Jalal, K A

    2012-01-01

    Titanium is a strong, lustrous, corrosion-resistant transition metal with a silver color, used to produce strong lightweight alloys for industrial processes, automotive, medical instruments and other applications. However, titanium is very difficult to machine due to its poor machinability. When machining titanium alloys with conventional tools, the wear rate of the tool accelerates rapidly, and high cutting speeds are generally difficult to achieve. In order to get a better understanding of machining titanium alloys, the interaction between the machining structural system and the cutting process, which results in machining instability, is studied. Process damping is a useful phenomenon that can be exploited to improve the limited productivity of low-speed machining. In this study, experiments are performed to evaluate the performance of process damping in milling under different tool helix geometries. The results showed that a 42° helix angle significantly increases process damping performance in machining titanium alloy.

  4. Performance of Virtual Current Meters in Hydroelectric Turbine Intakes

    Energy Technology Data Exchange (ETDEWEB)

    Harding, Samuel F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Hydrology Group; Romero-Gomez, Pedro D. J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Hydrology Group; Richmond, Marshall C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Hydrology Group

    2016-04-30

    Standards such as PTC-18 and IEC-41 provide recommendations for best practices in the installation of current meters for measuring fluid flow in closed conduits. Both of these standards refer to the requirements of ISO Standard 3354 for cases where the velocity distribution is assumed to be regular and the flow steady. Due to the nature of the short converging intakes of Kaplan hydroturbines, these assumptions may be invalid if current meters are intended to be used to characterize turbine flows. In this study, we examine a combination of measurement guidelines from both ISO standards by means of virtual current meters (VCMs) set up over a simulated hydroturbine flow field. To this purpose, a computational fluid dynamics (CFD) model was developed to model the velocity field of a short converging intake of the Ice Harbor Dam on the Snake River, in the State of Washington. The detailed geometry and resulting wake of the submersible traveling screen (STS) at the first gate slot were of particular interest in the development of the CFD model using a detached eddy simulation (DES) turbulence solution. An array of virtual point velocity measurements was extracted from the resulting velocity field to simulate VCMs at two virtual measurement (VM) locations at different distances downstream of the STS. The discharge through each bay was calculated from the VM using the graphical integration solution to the velocity-area method. This method of representing practical velocimetry techniques in a numerical flow field has been successfully used in a range of marine and conventional hydropower applications. A sensitivity analysis was performed to observe the effect of the VCM array resolution on the discharge error. The downstream VM section required 11–33% fewer VCMs in the array than the upstream VM location to achieve a given discharge error. In general, more instruments were required to quantify the discharge at high levels of accuracy when the STS was
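    As a simplified illustration of the velocity-area method this record applies, the sketch below integrates gridded point velocities over a rectangular cross-section with the trapezoidal rule in both directions. The grid, bay dimensions, and uniform flow field are invented for illustration; the study itself uses graphical integration on a CFD-derived field.

```python
def discharge_velocity_area(velocities, ys, zs):
    """Velocity-area method on a rectangular grid of point measurements:
    integrate the normal velocity over the cross-section using trapezoidal
    weights along each axis (a simplified stand-in for the graphical
    integration used with the virtual current meters)."""
    def weights(coords):
        # trapezoidal quadrature weights for one axis
        w = [0.0] * len(coords)
        for i in range(len(coords) - 1):
            h = coords[i + 1] - coords[i]
            w[i] += h / 2
            w[i + 1] += h / 2
        return w
    wy, wz = weights(ys), weights(zs)
    return sum(velocities[i][j] * wy[i] * wz[j]
               for i in range(len(ys)) for j in range(len(zs)))

# Uniform 2 m/s flow through a 3 m x 2 m bay sampled on a 4x3 grid
ys = [0.0, 1.0, 2.0, 3.0]          # horizontal measurement positions (m)
zs = [0.0, 1.0, 2.0]               # vertical measurement positions (m)
v = [[2.0] * len(zs) for _ in ys]  # point velocities (m/s)
Q = discharge_velocity_area(v, ys, zs)
```

    For a uniform field the quadrature is exact (Q = 2 m/s × 6 m² = 12 m³/s); with a sheared or wake-disturbed field, the error depends on the array resolution, which is exactly the sensitivity the study quantifies.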

  5. Performance Comparison for Virtual Impedance Techniques Used in Droop Controlled Islanded Microgrids

    DEFF Research Database (Denmark)

    Micallef, Alexander; Apap, Maurice; Spiteri-Staines, Cyril

    2016-01-01

    … Virtual impedance loops were proposed in the literature to improve the current sharing between the inverters by normalizing the output impedance of the inverters. However, virtual impedance loops have constraints in this respect, since the improvement in the current sharing occurs through redistribution … for a single-phase microgrid setup to achieve a fair performance comparison of the different virtual impedance techniques.

  6. Virtual Schools in the U.S. 2014: Politics, Performance, Policy, and Research Evidence

    Science.gov (United States)

    Huerta, Luis; Rice, Jennifer King; Shafer, Sheryl Rankin; Barbour, Michael K.; Miron, Gary; Gulosino, Charisse; Horvitz, Brian

    2014-01-01

    This report is the second of a series of annual reports by the National Education Policy Center (NEPC) on virtual education in the U.S. The NEPC reports contribute to the existing evidence and discourse on virtual education by providing an objective analysis of the evolution and performance of full-time, publicly funded K-12 virtual schools. This…

  7. Share (And Not) Share Alike: Improving Virtual Team Climate and Decision Performance

    Science.gov (United States)

    Cordes, Sean

    2017-01-01

    Virtual teams face unique communication and collaboration challenges that impact climate development and performance. First, virtual teams rely on technology mediated communication which can constrain communication. Second, team members lack skill for adapting process to the virtual setting. A collaboration process structure was designed to…

  8. Statistical Analysis of EGFR Structures’ Performance in Virtual Screening

    Science.gov (United States)

    Li, Yan; Li, Xiang; Dong, Zigang

    2015-01-01

    In this work the ability of EGFR structures to distinguish true inhibitors from decoys in docking and MM-PBSA is assessed by statistical procedures. The docking performance depends critically on the receptor conformation and bound state. The enrichment of known inhibitors is well correlated with the difference between EGFR structures rather than the bound-ligand property. The optimal structures for virtual screening can be selected based purely on the complex information. And the mixed combination of distinct EGFR conformations is recommended for ensemble docking. In MM-PBSA, a variety of EGFR structures have identically good performance in the scoring and ranking of known inhibitors, indicating that the choice of the receptor structure has little effect on the screening. PMID:26476847
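    The enrichment of known inhibitors that this record assesses is commonly summarized by an enrichment factor (EF). The sketch below computes EF at a chosen fraction on an invented toy screen; the scores, labels, and 10% cutoff are illustrative assumptions, not the paper's data.

```python
def enrichment_factor(scores, labels, fraction=0.1):
    """EF at a given fraction: the active rate among the top-scored
    fraction of the library divided by the active rate in the whole
    screened set (EF = 1 means no better than random selection)."""
    ranked = sorted(zip(scores, labels), key=lambda p: p[0], reverse=True)
    n_top = max(1, int(len(ranked) * fraction))
    top_actives = sum(lbl for _, lbl in ranked[:n_top])
    total_actives = sum(labels)
    return (top_actives / n_top) / (total_actives / len(labels))

# Toy screen: 100 compounds, 10 actives; a good scorer ranks
# 8 actives inside the top 10 compounds
scores = [1.0 - i / 100 for i in range(100)]      # strictly descending scores
labels = [1] * 8 + [0] * 2 + [1] * 2 + [0] * 88   # 8 of the top 10 are active
ef10 = enrichment_factor(scores, labels, 0.1)
```

    Here EF at 10% is 8.0, since the top decile holds an 80% active rate against a 10% base rate; comparing such EF values across receptor conformations is one way to select structures for ensemble docking.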

  9. Effects of team emotional authenticity on virtual team performance

    Directory of Open Access Journals (Sweden)

    Catherine E Connelly

    2016-08-01

    Full Text Available Members of virtual teams lack many of the visual or auditory cues that are usually used as the basis for impressions about fellow team members. We focus on the effects of the impressions formed in this context, and use social exchange theory to understand how these impressions affect team performance. Our pilot study, using content analysis (n = 191 students), suggested that most individuals believe that they can assess others' emotional authenticity in online settings by focusing on the content and tone of the messages. Our quantitative study examined the effects of these assessments. Structural equation modeling (SEM) analysis (n = 81 student teams) suggested that team-level trust and teamwork behaviors mediate the relationship between team emotional authenticity and team performance, and illuminated the importance of team emotional authenticity for team processes and outcomes.

  10. Effects of Virtuality on Employee Performance and Commitment: A Research

    OpenAIRE

    Baysal, Zeynep; Baraz, Barış

    2017-01-01

    In the modern business world, due to the impact of technological advancements and globalisation, organisations are obliged to keep up with the change in order to seize new opportunities they encounter and overcome the obstacles in their way. Concepts of virtuality and virtual organisations are among those which surfaced as a result of these changes. All white-collar employees have a certain degree of virtuality and the fact that organisations are taking rapid steps towards virtualisation...

  11. Comparative Performance Analysis of Machine Learning Techniques for Software Bug Detection

    OpenAIRE

    Saiqa Aleem; Luiz Fernando Capretz; Faheem Ahmed

    2015-01-01

    Machine learning techniques can be used to analyse data from different perspectives and enable developers to retrieve useful information. Machine learning techniques are proven to be useful in terms of software bug prediction. In this paper, a comparative performance analysis of different machine learning techniques is explored for software bug prediction on publicly available data sets. Results showed most of the mac ...

  12. Setting Organizational Key Performance Indicators in the Precision Machine Industry

    Directory of Open Access Journals (Sweden)

    Mei-Hsiu Hong

    2015-11-01

    The aim of this research is to define (or set) organizational key performance indicators (KPIs) in the precision machine industry using the concept of core competence and the supply chain operations reference (SCOR) model. The research is conducted in three steps. In the first step, a benchmarking study is conducted to collect major items of core competence and to group them into main categories in order to form a foundation for the research. In the second step, a case company questionnaire and interviews are conducted to identify the key factors of core competence in the precision machine industry. The analysis is conducted based on four dimensions and hence several analysis rounds are completed. Questionnaire data is analyzed with grey relational analysis (GRA), resulting in 5–6 key factors in each dimension or sub-dimension. Based on the conducted interviews, 13 of these identified key factors are separated into one organization objective, five key factors of core competence and seven key factors of core ability. In the final step, organizational KPIs are defined (or set) for the five identified key factors of core competence. The most competitive core abilities for each of the five key factors are established. After that, organizational KPIs are set based on the core abilities within 3 main categories of KPIs (departmental, office grade and hierarchical) for each key factor. The developed KPI system based on organizational objectives, core competences, and core abilities allows enterprises to handle dynamic market demand and business environments, as well as changes in overall corporate objectives.
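    Grey relational analysis, used above to extract key factors from questionnaire data, ranks alternatives by their closeness to an ideal reference series. A minimal sketch under common textbook assumptions (larger-the-better normalization, distinguishing coefficient ζ = 0.5); the toy questionnaire values are invented:

```python
def grey_relational_grades(data, zeta=0.5):
    """Grey relational grades against the ideal (all-ones) reference series.

    data: one row per alternative, one column per attribute
          (larger-the-better values assumed).
    """
    cols = list(zip(*data))
    # Min-max normalize each attribute column to [0, 1].
    norm = [
        [(x - min(c)) / (max(c) - min(c)) if max(c) > min(c) else 1.0 for x in c]
        for c in cols
    ]
    rows = list(zip(*norm))                       # back to per-alternative rows
    deltas = [[abs(1.0 - x) for x in row] for row in rows]
    d_min = min(min(r) for r in deltas)
    d_max = max(max(r) for r in deltas)
    grades = []
    for row in deltas:
        # Grey relational coefficient per attribute, then average to a grade.
        coeffs = [(d_min + zeta * d_max) / (d + zeta * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Hypothetical survey scores for three candidate key factors on two criteria.
scores = grey_relational_grades([[80, 7], [95, 9], [60, 5]])
print(max(range(3), key=scores.__getitem__))  # 1: the second factor ranks highest
```

    Factors are then ranked by grade; in a study like the one above, the top 5–6 per dimension would be retained.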

  13. Examining Effects of Virtual Machine Settings on Voice over Internet Protocol in a Private Cloud Environment

    Science.gov (United States)

    Liao, Yuan

    2011-01-01

    The virtualization of computing resources, as represented by the sustained growth of cloud computing, continues to thrive. Information Technology departments are building their private clouds due to the perception of significant cost savings by managing all physical computing resources from a single point and assigning them to applications or…

  14. The virtual machine (VM) scaler: an infrastructure manager supporting environmental modeling on IaaS clouds

    Science.gov (United States)

    Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...

  15. The Development and Evaluation of a Virtual Radiotherapy Treatment Machine Using an Immersive Visualisation Environment

    Science.gov (United States)

    Bridge, P.; Appleyard, R. M.; Ward, J. W.; Philips, R.; Beavis, A. W.

    2007-01-01

    Due to the lengthy learning process associated with complicated clinical techniques, undergraduate radiotherapy students can struggle to access sufficient time or patients to gain the level of expertise they require. By developing a hybrid virtual environment with real controls, it was hoped that group learning of these techniques could take place…

  16. Formalizing the Safety of Java, the Java Virtual Machine and Java Card

    NARCIS (Netherlands)

    Hartel, Pieter H.; Moreau, Luc

    2001-01-01

    We review the existing literature on Java safety, emphasizing formal approaches, and the impact of Java safety on small footprint devices such as smart cards. The conclusion is that while a lot of good work has been done, a more concerted effort is needed to build a coherent set of machine readable formal models of the whole of Java and its implementation.

  17. Employing a virtual reality tool to explicate tacit knowledge of machine operations

    NARCIS (Netherlands)

    Vasenev, Alexandr; Hartmann, Timo; Doree, Andries G.; Hassani, F.

    2013-01-01

    The quality and durability of asphalted roads strongly depends on the final step in the road construction process; the profiling and compaction of the fresh spread asphalt. During compaction machine operators continuously make decisions on how to proceed with the compaction accounting for…

  18. High-performance reconfigurable hardware architecture for restricted Boltzmann machines.

    Science.gov (United States)

    Ly, Daniel Le; Chow, Paul

    2010-11-01

    Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
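    The "connection-updates-per-second" figure above counts how many visible-hidden links each alternating Gibbs step touches. A small software sketch of one such step; the weights, sizes, and the omission of bias terms are simplifying assumptions, and the paper's FPGA pipeline is of course far more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 8, 8
W = rng.normal(0, 0.1, (n_visible, n_hidden))   # toy weight matrix

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, rng):
    """One alternating Gibbs sampling step of a binary RBM (biases omitted)."""
    p_h = sigmoid(v @ W)                          # hidden unit probabilities
    h = (rng.random(p_h.shape) < p_h).astype(float)
    p_v = sigmoid(h @ W.T)                        # reconstruction probabilities
    v_new = (rng.random(p_v.shape) < p_v).astype(float)
    return v_new, h

v = rng.integers(0, 2, n_visible).astype(float)
v, h = gibbs_step(v, W, rng)
# Each step touches every connection twice: once up, once down.
print("connection updates per step:", 2 * n_visible * n_hidden)  # 128
```

    Scaling this dense matrix arithmetic across hardware units is exactly the parallelism the FPGA architecture exploits; the 256 × 256 RBM reported above performs 131,072 connection updates per step.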

  19. Using an open-source PACS virtual machine for a digital angiography unit: methods and initial impressions.

    Science.gov (United States)

    Kagadis, George C; Alexakos, Christos; Langer, Steve G; French, Todd

    2012-02-01

    The productivity gains, diagnostic benefit, and enhanced data availability to clinicians enabled by picture archiving and communication systems (PACS) are no longer in doubt. However, commercial PACS offerings are often extremely expensive initially and require ongoing support contracts with vendors to maintain them. Recently, several open-source offerings have become available that put PACS within reach of more users. However, they can be resource-intensive to install, and ensuring that they have room for future growth, in both computational and storage capacity, can be difficult. An alternate approach, which we describe herein, is to use a PACS built on virtual machines, which can be moved from smaller to larger hardware as needed in a just-in-time manner. This leverages the cost benefits of Moore's Law for both storage and compute costs. We describe the approach and current results in this paper.

  20. Using a Virtual Tablet Machine to Improve Student Understanding of the Complex Processes Involved in Tablet Manufacturing.

    Science.gov (United States)

    Mattsson, Sofia; Sjöström, Hans-Erik; Englund, Claire

    2016-06-25

    Objective. To develop and implement a virtual tablet machine simulation to aid distance students' understanding of the processes involved in tablet production. Design. A tablet simulation was created enabling students to study the effects different parameters have on the properties of the tablet. Once results were generated, students interpreted and explained them on the basis of current theory. Assessment. The simulation was evaluated using written questionnaires and focus group interviews. Students appreciated the exercise and considered it to be motivational. Students commented that they found the simulation, together with the online seminar and the writing of the report, was beneficial for their learning process. Conclusion. According to students' perceptions, the use of the tablet simulation contributed to their understanding of the compaction process.

  1. Virtual machines & volunteer computing: Experience from LHC@Home: Test4Theory project

    CERN Document Server

    Lombraña González, Daniel; Blomer, Jakob; Buncic, Predrag; Harutyunyan, Artem; Marquina, Miguel; Segal, Ben; Skands, Peter; Karneyeu, Anton

    2012-01-01

    Volunteer desktop grids are nowadays becoming more and more powerful thanks to improved high end components: multi-core CPUs, larger RAM memories and hard disks, better network connectivity and bandwidth, etc. As a result, desktop grid systems can run more complex experiments or simulations, but some problems remain: the heterogeneity of hardware architectures and software (library dependencies, code length, big repositories, etc.) make it very difficult for researchers and developers to deploy and maintain a software stack for all the available platforms. In this paper, the employment of virtualization is shown to be the key to solve these problems. It provides a homogeneous layer allowing researchers to focus their efforts on running their experiments. Inside virtual custom execution environments, researchers can control and deploy very complex experiments or simulations running on heterogeneous grids of high-end computers. The following work presents the latest results from CERN’s LHC@home Test4Theory p...

  2. Development and experimental test of support vector machines virtual screening method for searching Src inhibitors from large compound libraries

    Directory of Open Access Journals (Sweden)

    Han Bucong

    2012-11-01

    Background: Src plays various roles in tumour progression, invasion, metastasis, angiogenesis and survival. It is one of the multiple targets of multi-target kinase inhibitors in clinical uses and trials for the treatment of leukemia and other cancers. These successes and appearances of drug resistance in some patients have raised significant interest and efforts in discovering new Src inhibitors. Various in-silico methods have been used in some of these efforts. It is desirable to explore additional in-silico methods, particularly those capable of searching large compound libraries at high yields and reduced false-hit rates. Results: We evaluated support vector machines (SVM) as virtual screening tools for searching Src inhibitors from large compound libraries. SVM trained and tested by 1,703 inhibitors and 63,318 putative non-inhibitors correctly identified 93.53%~95.01% inhibitors and 99.81%~99.90% non-inhibitors in 5-fold cross validation studies. SVM trained by 1,703 inhibitors reported before 2011 and 63,318 putative non-inhibitors correctly identified 70.45% of the 44 inhibitors reported since 2011, and predicted as inhibitors 44,843 (0.33%) of 13.56M PubChem, 1,496 (0.89%) of 168 K MDDR, and 719 (7.73%) of 9,305 MDDR compounds similar to the known inhibitors. Conclusions: SVM showed comparable yield and reduced false hit rates in searching large compound libraries compared to the similarity-based and other machine-learning VS methods developed from the same set of training compounds and molecular descriptors. We tested three virtual hits of the same novel scaffold from in-house chemical libraries not reported as Src inhibitors, one of which showed moderate activity. SVM may be potentially explored for searching Src inhibitors from large compound libraries at low false-hit rates.

  3. Development and experimental test of support vector machines virtual screening method for searching Src inhibitors from large compound libraries.

    Science.gov (United States)

    Han, Bucong; Ma, Xiaohua; Zhao, Ruiying; Zhang, Jingxian; Wei, Xiaona; Liu, Xianghui; Liu, Xin; Zhang, Cunlong; Tan, Chunyan; Jiang, Yuyang; Chen, Yuzong

    2012-11-23

    Src plays various roles in tumour progression, invasion, metastasis, angiogenesis and survival. It is one of the multiple targets of multi-target kinase inhibitors in clinical uses and trials for the treatment of leukemia and other cancers. These successes and appearances of drug resistance in some patients have raised significant interest and efforts in discovering new Src inhibitors. Various in-silico methods have been used in some of these efforts. It is desirable to explore additional in-silico methods, particularly those capable of searching large compound libraries at high yields and reduced false-hit rates. We evaluated support vector machines (SVM) as virtual screening tools for searching Src inhibitors from large compound libraries. SVM trained and tested by 1,703 inhibitors and 63,318 putative non-inhibitors correctly identified 93.53%~ 95.01% inhibitors and 99.81%~ 99.90% non-inhibitors in 5-fold cross validation studies. SVM trained by 1,703 inhibitors reported before 2011 and 63,318 putative non-inhibitors correctly identified 70.45% of the 44 inhibitors reported since 2011, and predicted as inhibitors 44,843 (0.33%) of 13.56M PubChem, 1,496 (0.89%) of 168 K MDDR, and 719 (7.73%) of 9,305 MDDR compounds similar to the known inhibitors. SVM showed comparable yield and reduced false hit rates in searching large compound libraries compared to the similarity-based and other machine-learning VS methods developed from the same set of training compounds and molecular descriptors. We tested three virtual hits of the same novel scaffold from in-house chemical libraries not reported as Src inhibitor, one of which showed moderate activity. SVM may be potentially explored for searching Src inhibitors from large compound libraries at low false-hit rates.
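    The yield and false-hit-rate figures of merit used throughout this abstract can be computed directly from a labelled screen. A hedged sketch: the 0/1 vectors below are toy data, and the exact false-hit-rate convention varies between papers, so the definition here is one reasonable choice, not necessarily the authors':

```python
def screening_metrics(predicted, actual):
    """Yield and false-hit rate for a virtual screen.

    predicted, actual: parallel 0/1 lists (1 = inhibitor).
    yield_         = fraction of true inhibitors recovered, TP / (TP + FN)
    false_hit_rate = fraction of predicted hits that are not inhibitors,
                     FP / (TP + FP)  (conventions vary across papers)
    """
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum((not p) and a for p, a in zip(predicted, actual))
    yield_ = tp / (tp + fn) if tp + fn else 0.0
    false_hit_rate = fp / (tp + fp) if tp + fp else 0.0
    return yield_, false_hit_rate

# 4 known inhibitors among 10 compounds; the screen flags 5 of them.
predicted = [1, 1, 1, 0, 1, 1, 0, 0, 0, 0]
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
print(screening_metrics(predicted, actual))  # (0.75, 0.4)
```

    On a realistic library the negatives vastly outnumber the actives (13.56M PubChem compounds above), which is why even a 0.33% virtual-hit rate matters.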

  4. How the choice of Operating System can affect databases on a Virtual Machine

    OpenAIRE

    Karlsson, Jan; Eriksson, Patrik

    2014-01-01

    As databases grow in size, the need for optimizing databases is becoming a necessity. Choosing the right operating system to support your database becomes paramount to ensure that the database is fully utilized. Furthermore with the virtualization of operating systems becoming more commonplace, we find ourselves with more choices than we ever faced before. This paper demonstrates why the choice of operating system plays an integral part in deciding the right database for your system in a virt...

  5. Performance of a Horizontal Triple Cylinder Type Pulping Machine

    Directory of Open Access Journals (Sweden)

    Sukrisno Widyotomo

    2011-05-01

    Pulping is one important step in the wet coffee processing method. Pulping usually uses a machine constructed of wood or metal. The horizontal single-cylinder pulping machine for fresh coffee cherries is the most popular machine in coffee processing. One of its weaknesses is a high proportion of broken beans; broken beans are a major defect that contributes to low quality. The Indonesian Coffee and Cocoa Research Institute has designed and tested a horizontal double-cylinder pulping machine for fresh coffee cherries, which resulted in 12.6–21.4% broken beans. To reduce the percentage of broken beans, the Institute has developed and tested a horizontal triple-cylinder pulping machine. The material tested was fresh mature Robusta coffee cherries at 60–65% (wet basis) moisture content, classified into 3 levels (i.e. unsorted, small, and medium) and cleaned of metal and foreign materials. The results showed that the machine achieved an optimal capacity of 6,340 kg/h at a rotor speed of 1,400 rpm for unsorted coffee cherries, with a composition of 55.5% whole parchment coffee, 3.66% broken beans, and 1% beans in wet skin. Key words: coffee, pulp, pulper, cylinder, quality.

  6. Dynamic virtual machine allocation policy in cloud computing complying with service level agreement using CloudSim

    Science.gov (United States)

    Aneri, Parikh; Sumathy, S.

    2017-11-01

    Cloud computing provides services over the internet, delivering application resources and data to users on demand. Cloud computing is based on a consumer-provider model: the cloud provider supplies resources that consumers access in order to build their applications according to their needs. A cloud data center is a pool of shared resources available to cloud users. Virtualization is the heart of the cloud computing model; it provides virtual machines with application-specific configurations, and applications are free to choose their own configuration. On one hand there is a huge number of resources, and on the other hand a huge number of requests must be served effectively. Therefore, the resource allocation and scheduling policies play a very important role in allocating and managing resources in this cloud computing model. This paper proposes a load balancing policy using the Hungarian algorithm. The Hungarian algorithm provides a dynamic load balancing policy with a monitor component. The monitor component helps to increase cloud resource utilization by monitoring the algorithm's state and altering it based on artificial intelligence. CloudSim, used in this proposal, is an extensible toolkit that simulates the cloud computing environment.
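    The Hungarian algorithm referenced above solves a minimum-cost assignment between virtual machines and hosts. For illustration only, here is a brute-force version over a toy cost matrix; a real scheduler would use the O(n³) Hungarian method (e.g. scipy.optimize.linear_sum_assignment), and the penalty values are invented:

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Optimal one-to-one VM-to-host assignment for a square cost matrix.

    cost[i][j] = load penalty of placing VM i on host j.
    Brute force over all permutations, feasible only for tiny n;
    the Hungarian algorithm finds the same optimum in O(n^3).
    """
    n = len(cost)
    best = min(
        permutations(range(n)),
        key=lambda perm: sum(cost[i][perm[i]] for i in range(n)),
    )
    return list(best), sum(cost[i][best[i]] for i in range(n))

# Hypothetical penalties for placing 3 VMs across 3 hosts.
cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]
print(min_cost_assignment(cost))  # ([1, 0, 2], 5)
```

    In a dynamic policy like the one proposed, a monitor would rebuild the cost matrix as loads change and re-solve the assignment periodically.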

  7. Combinatorial support vector machines approach for virtual screening of selective multi-target serotonin reuptake inhibitors from large compound libraries.

    Science.gov (United States)

    Shi, Z; Ma, X H; Qin, C; Jia, J; Jiang, Y Y; Tan, C Y; Chen, Y Z

    2012-02-01

    Selective multi-target serotonin reuptake inhibitors enhance antidepressant efficacy. Their discovery can be facilitated by multiple methods, including in silico ones. In this study, we developed and tested an in silico method, combinatorial support vector machines (COMBI-SVMs), for virtual screening (VS) multi-target serotonin reuptake inhibitors of seven target pairs (serotonin transporter paired with noradrenaline transporter, H(3) receptor, 5-HT(1A) receptor, 5-HT(1B) receptor, 5-HT(2C) receptor, melanocortin 4 receptor and neurokinin 1 receptor respectively) from large compound libraries. COMBI-SVMs trained with 917-1951 individual target inhibitors correctly identified 22-83.3% (majority >31.1%) of the 6-216 dual inhibitors collected from literature as independent testing sets. COMBI-SVMs showed moderate to good target selectivity in misclassifying as dual inhibitors 2.2-29.8% (majority virtual hits correlate with the reported effects of their predicted targets. COMBI-SVM is potentially useful for searching selective multi-target agents without explicit knowledge of these agents. Copyright © 2011 Elsevier Inc. All rights reserved.

  8. Loop shaping design for tracking performance in machine axes.

    Science.gov (United States)

    Schinstock, Dale E; Wei, Zhouhong; Yang, Tao

    2006-01-01

    A modern interpretation of classical loop shaping control design methods is presented in the context of tracking control for linear motor stages. Target applications include noncontacting machines such as laser cutters and markers, water jet cutters, and adhesive applicators. The methods are directly applicable to the common PID controller and are pertinent to many electromechanical servo actuators other than linear motors. In addition to explicit design techniques a PID tuning algorithm stressing the importance of tracking is described. While the theory behind these techniques is not new, the analysis of their application to modern systems is unique in the research literature. The techniques and results should be important to control practitioners optimizing PID controller designs for tracking and in comparing results from classical designs to modern techniques. The methods stress high-gain controller design and interpret what this means for PID. Nothing in the methods presented precludes the addition of feedforward control methods for added improvements in tracking. Laboratory results from a linear motor stage demonstrate that with large open-loop gain very good tracking performance can be achieved. The resultant tracking errors compare very favorably to results from similar motions on similar systems that utilize much more complicated controllers.
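    The claim above, that large open-loop gain yields good tracking, can be checked on a toy model. A sketch assuming an integrator plant and pure proportional control (not the authors' linear-motor stage or full PID): tracking a ramp of slope v, the steady error settles at v/Kp, so a tenfold gain increase cuts the error tenfold:

```python
def ramp_tracking_error(kp, v=1.0, dt=0.001, t_end=5.0):
    """Steady tracking error of a proportional controller driving an
    integrator plant (x' = u) along a ramp reference r(t) = v*t."""
    x, t = 0.0, 0.0
    while t < t_end:
        r = v * t
        u = kp * (r - x)      # control effort = gain times tracking error
        x += u * dt           # integrator plant, forward-Euler step
        t += dt
    return v * t_end - x      # settles near v / kp

low, high = ramp_tracking_error(kp=5.0), ramp_tracking_error(kp=50.0)
print(round(low, 3), round(high, 3))  # roughly 0.2 and 0.02
```

    The same high-gain intuition carries over to PID loop shaping, where integral action can drive the ramp-following error to zero and feedforward can improve tracking further, as the abstract notes.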

  9. Application of Machine Learning Algorithms for the Query Performance Prediction

    Directory of Open Access Journals (Sweden)

    MILICEVIC, M.

    2015-08-01

    This paper analyzes the relationship between the system load/throughput and the query response time in a real Online Transaction Processing (OLTP) system environment. Although OLTP systems are characterized by short transactions, which normally entail high availability and consistently short response times, the need for operational reporting may jeopardize these objectives. We suggest a new approach to performance prediction for concurrent database workloads, based on a system state vector consisting of 36 attributes. There is no bias toward the importance of certain attributes; instead, machine learning methods are used to determine which attributes best describe the behavior of the particular database server and how to model that system. During the learning phase, the system's profile is created using multiple reference queries, selected to represent frequent business processes. Accurate response time prediction may be a foundation for automated decision-making for database (DB) query scheduling. Possible applications of the proposed method include adaptive resource allocation, quality of service (QoS) management, and real-time dynamic query scheduling (e.g. estimation of the optimal moment for a complex query execution).
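    The learning-phase idea, fitting a model that maps the system state vector to query response time, can be sketched with ordinary least squares. The paper does not specify this estimator, and the 3-attribute state and its coefficients below are synthetic stand-ins for the 36 attributes:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "system state vectors": e.g. active sessions, buffer hit
# ratio, I/O queue length (stand-ins for the real 36 attributes).
X = rng.uniform(0, 1, size=(200, 3))
true_w = np.array([2.0, -0.5, 3.0])
y = X @ true_w + 0.1 + rng.normal(0, 0.01, 200)   # response time + noise

# Learning phase: fit a linear model of response time from the state.
A = np.hstack([X, np.ones((200, 1))])             # append intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the response time for a new state vector (intercept term = 1).
state = np.array([0.5, 0.2, 0.8, 1.0])
print(round(float(state @ w), 2))  # close to 2.0*0.5 - 0.5*0.2 + 3.0*0.8 + 0.1 = 3.4
```

    A scheduler could use such predictions to defer a complex reporting query until the predicted response time drops below a threshold.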

  10. Impact of time on task on ADHD patient's performances in a virtual classroom.

    Science.gov (United States)

    Bioulac, Stéphanie; Lallemand, Stéphanie; Rizzo, Albert; Philip, Pierre; Fabrigoule, Colette; Bouvard, Manuel Pierre

    2012-09-01

    Use of a virtual reality tool is interesting for the evaluation of Attention Deficit/Hyperactivity Disorder (ADHD) patients. The virtual environment offers the opportunity to administer controlled tasks like typical neuropsychological tools, but in an environment much more like a standard classroom. Previous studies showed that a virtual classroom was able to distinguish the performances of children with and without ADHD, but the evolution of performances over time had not been explored. The aim of this work was to study time-on-task effects on the performances of ADHD children compared to controls in a virtual classroom (VC). 36 boys aged from 7 to 10 years completed the virtual classroom task. We compared the performance of the children diagnosed with ADHD with those of the control children. We also compared attentional performances recorded in the virtual classroom with measures of the Continuous Performance Test (CPT II). Our results showed that patients differ from control subjects in terms of the effect of time on performance. While controls sustained performance over time in the virtual reality task, ADHD patients showed a significant performance decrement over time. Performances in the VC correlated with CPT II measures. ADHD children are vulnerable to a time-on-task effect on performance, which could explain part of their difficulties. Virtual reality is a reliable method to test ADHD children's ability to sustain performance over time. Copyright © 2012 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.

  11. Commonality and Variability Analysis for Xenon Family of Separation Virtual Machine Monitors (CVAX)

    Science.gov (United States)

    2017-07-18

    the sponsor (e.g., military, intelligence community, other government, commercial, medical) and upon the type of system (e.g., application in the...loads.
    • Machine memory. Xen’s terminology for hardware memory present on a chip.
    • Misuse case. Abuse case. Attacker-product interaction that the...on connections between domains.
    • Physical memory. Xen’s terminology, short for pseudo-physical memory. Physical memory is the Xen term for the

  12. Method and apparatus for characterizing and enhancing the functional performance of machine tools

    Science.gov (United States)

    Barkman, William E; Babelay, Jr., Edwin F; Smith, Kevin Scott; Assaid, Thomas S; McFarland, Justin T; Tursky, David A; Woody, Bethany; Adams, David

    2013-04-30

    Disclosed are various systems and methods for assessing and improving the capability of a machine tool. The disclosure applies to machine tools having at least one slide configured to move along a motion axis. Various patterns of dynamic excitation commands are employed to drive the one or more slides, typically involving repetitive short distance displacements. A quantification of a measurable merit of machine tool response to the one or more patterns of dynamic excitation commands is typically derived for the machine tool. Examples of measurable merits of machine tool performance include workpiece surface finish, and the ability to generate chips of the desired length.

  13. Exploration of Machine Learning Approaches to Predict Pavement Performance

    Science.gov (United States)

    2018-03-23

    Machine learning (ML) techniques were used to model and predict pavement condition index (PCI) for various pavement types using a variety of input variables. The primary objective of this research was to develop and assess PCI predictive models for t...

  14. Macroscopic transport by synthetic molecular machines

    NARCIS (Netherlands)

    Berna, J; Leigh, DA; Lubomska, M; Mendoza, SM; Perez, EM; Rudolf, P; Teobaldi, G; Zerbetto, F

    Nature uses molecular motors and machines in virtually every significant biological process, but demonstrating that simpler artificial structures operating through the same gross mechanisms can be interfaced with - and perform physical tasks in - the macroscopic world represents a significant hurdle.

  15. Virtual Schools Report 2016: Directory and Performance Review

    Science.gov (United States)

    Miron, Gary; Gulosino, Charisse

    2016-01-01

    This 2016 report is the fourth in an annual series of National Education Policy Center (NEPC) reports on the fast-growing U.S. virtual school sector. This year's report provides a comprehensive directory of the nation's full-time virtual and blended learning school providers. It also pulls together and assesses the available evidence on the…

  16. Upnp-Based Discovery And Management Of Hypervisors And Virtual Machines

    Directory of Open Access Journals (Sweden)

    Sławomir Zieliński

    2011-01-01

    The paper introduces a Universal Plug and Play based discovery and management toolkit that facilitates collaboration between cloud infrastructure providers and users. The presented tools construct a unified hierarchy of devices and their management-related services that represents the current deployment of users’ (virtual) infrastructures in the provider’s (physical) infrastructure, as well as the management interfaces of the respective devices. The hierarchy can be used to enhance the capabilities of the provider’s infrastructure management system. To maintain user independence, the set of management operations exposed by a particular device is always defined by the device owner (either the provider or the user).

  17. Formalising the Safety of Java, the Java Virtual Machine and Java Card

    OpenAIRE

    Hartel, Pieter H.; Moreau, Luc

    2001-01-01

    We review the existing literature on Java safety, emphasizing formal approaches, and the impact of Java safety on small footprint devices such as smart cards. The conclusion is that while a lot of good work has been done, a more concerted effort is needed to build a coherent set of machine readable formal models of the whole of Java and its implementation. This is a formidable task but we believe it is essential to building trust in Java safety, and thence to achieve ITSEC level 6 or Common C...

  18. A study on constructing a machine-maintenance training system based on virtual reality technology

    International Nuclear Information System (INIS)

    Ishii, H.; Kashiwa, K.; Tezuka, T.; Yoshikawa, H.

    1997-01-01

    The development of a VR-based training system is presented for teaching disassembly procedures for mechanical machines used in nuclear power plants. Petri net models are developed for describing trainees' plausible actions in the disassembly process and for deducing a correct action sequence, together with the realization of a related Petri net editor. The developed VR-based training system was demonstrated by an example disassembly simulation of a check valve. Needed future work is also discussed.

  19. Target-specific support vector machine scoring in structure-based virtual screening: computational validation, in vitro testing in kinases, and effects on lung cancer cell proliferation.

    Science.gov (United States)

    Li, Liwei; Khanna, May; Jo, Inha; Wang, Fang; Ashpole, Nicole M; Hudmon, Andy; Meroueh, Samy O

    2011-04-25

    We assess the performance of our previously reported structure-based support vector machine target-specific scoring function across 41 targets, 40 among them from the Directory of Useful Decoys (DUD). The area under the curve of receiver operating characteristic plots (ROC-AUC) revealed that scoring with SVM-SP resulted in consistently better enrichment over all target families, outperforming Glide and other scoring functions, most notably among kinases. In addition, SVM-SP performance showed little variation among protein classes, exhibited excellent performance in a test case using a homology model, and in some cases showed high enrichment even with few structures used to train a model. We put SVM-SP to the test by virtual screening 1125 compounds against two kinases, EGFR and CaMKII. Among the top 25 EGFR compounds, three compounds (1-3) inhibited kinase activity in vitro with IC₅₀ of 58, 2, and 10 μM. In cell cultures, compounds 1-3 inhibited nonsmall cell lung carcinoma (H1299) cancer cell proliferation with similar IC₅₀ values for compound 3. For CaMKII, one compound inhibited kinase activity in a dose-dependent manner among 20 tested with an IC₅₀ of 48 μM. These results are encouraging given that our in-house library consists of compounds that emerged from virtual screening of other targets with pockets that are different from typical ATP binding sites found in kinases. In light of the importance of kinases in chemical biology, these findings could have implications in future efforts to identify chemical probes of kinases within the human kinome.
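    The ROC-AUC used above to compare scoring functions equals the probability that a randomly chosen active is scored above a randomly chosen decoy, so it can be computed from ranks without plotting the curve. A small sketch with invented scores:

```python
def roc_auc(scores, labels):
    """ROC-AUC via the rank-sum (Mann-Whitney) identity:
    the probability that a random active outscores a random decoy,
    counting ties as half a win.
    scores: higher = predicted more active; labels: 1 active, 0 decoy.
    """
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy screen: 3 actives and 3 decoys; one decoy outranks one active.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]
labels = [1, 1, 0, 1, 0, 0]
print(roc_auc(scores, labels))  # 8/9: 8 of 9 active-decoy pairs ordered correctly
```

    An AUC of 0.5 corresponds to random ranking and 1.0 to perfect enrichment; comparing such values across targets is how the consistency of SVM-SP over Glide is established above.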

  20. Performance Evaluation of a Prototyped Breadfruit Seed Dehulling Machine

    Directory of Open Access Journals (Sweden)

    Nnamdi Anosike

    2016-05-01

    Full Text Available The drudgery involved in dehulling breadfruit seed by traditional methods has been highlighted as one of the major problems hindering the realization of the full potential of breadfruit from field to food material. This paper describes the development of an African breadfruit seed dehulling machine with a throughput about 70% above previously reported machines. The machine consists of a 20 mm diameter shaft carrying a spiral wound around its circumference (the feeder). The feeder provides the required rotational motion and turns a circular disk that rotates against a fixed disk. The two disks can be adjusted to maintain a pre-determined gap for dehulling. An inbuilt drying unit reduces the moisture content of the breadfruit for easy separation of the cotyledon from the endosperm immediately after the dehulling process. In this design, the sifting unit that separates the shell from the seed uses an electric fan. The machine is designed to run at a speed of 250 rpm with an electric motor as the prime mover. A dehulling efficiency of up to 86% and breakage of less than 1.3% were obtained at a clearance setting of 12.4 mm between the disks. A sifting efficiency of 100% was achieved. Based on the design diameter and clearance between the dehulling disks, the machine throughput was 216 kg/h with an electric power requirement of 1.207 kW.

  1. Effects of ICT Assisted Real and Virtual Learning on the Performance of Secondary School Students

    Science.gov (United States)

    Deka, Monisha; Jena, Ananta Kumar

    2017-01-01

    The study aimed to assess the effect of ICT-assisted real and virtual learning on the performance of secondary school students relative to the traditional approach. A non-equivalent pretest-posttest quasi-experimental design was used to assess and relate the effects of the independent variable (virtual learning) on the dependent variable (learning performance).…

  2. The Machine within the Machine

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    Although Virtual Machines are widespread across CERN, you probably won't have heard of them unless you work for an experiment. Virtual machines - known as VMs - allow you to create a separate machine within your own, letting you run Linux on your Mac, or Windows on your Linux - whatever combination you need.   Using a CERN Virtual Machine, Linux analysis software runs on a MacBook. When it comes to LHC data, one of the primary issues collaborations face is the diversity of computing environments among collaborators spread across the world. What if an institute cannot run the analysis software because they use different operating systems? "That's where the CernVM project comes in," says Gerardo Ganis, PH-SFT staff member and leader of the CernVM project. "We were able to respond to experimentalists' concerns by providing a virtual machine package that could be used to run experiment software. This way, no matter what hardware they have ...

  3. NeuroDebian Virtual Machine Deployment Facilitates Trainee-Driven Bedside Neuroimaging Research.

    Science.gov (United States)

    Cohen, Alexander; Kenney-Jung, Daniel; Botha, Hugo; Tillema, Jan-Mendelt

    2017-01-01

    Freely available software, derived from the past 2 decades of neuroimaging research, is significantly more flexible for research purposes than presently available clinical tools. Here, we describe and demonstrate the utility of rapidly deployable analysis software to facilitate trainee-driven translational neuroimaging research. A recipe and video tutorial were created to guide the creation of a NeuroDebian-based virtual computer that conforms to current neuroimaging research standards and can exist within a HIPAA-compliant system. This allows for retrieval of clinical imaging data, conversion to standard file formats, and rapid visualization and quantification of individual patients' cortical and subcortical anatomy. As an example, we apply this pipeline to a pediatric patient's data to illustrate the advantages of research-derived neuroimaging tools in asking quantitative questions "at the bedside." Our goal is to provide a path of entry for trainees to become familiar with common neuroimaging tools and foster an increased interest in translational research.

  4. A Universal Motor Performance Test System Based on Virtual Instrument

    Directory of Open Access Journals (Sweden)

    Wei Li

    2014-09-01

    Full Text Available With the development of technology, universal motors play an increasingly important role in daily life and production; they are used in ever wider fields and the requirements placed on them continue to grow. How to control the speed and monitor the real-time temperature of motors are key issues. The cost of a motor testing system based on a traditional technology platform is very high for many reasons. In this paper a universal motor performance test system based on virtual instrument is presented. The system achieves precise control of the motor speed and measures the real-time temperature of the motor bearing support in order to test general-purpose motor properties. Experimental results show that the system works stably in controlling the speed and monitoring the real-time temperature. It has advantages in speed, stability, cost, and accuracy that traditional SCM-based systems cannot match. Besides, it is easy to expand and reconfigure.

  5. Virtual street-crossing performance in persons with multiple sclerosis: Feasibility and task performance characteristics.

    Science.gov (United States)

    Stratton, M E; Pilutti, L A; Crowell, J A; Kaczmarski, H; Motl, R W

    2017-01-02

    Multiple sclerosis (MS) is a neurological disease that commonly results in physical and cognitive dysfunction. Accordingly, MS might impact the ability to safely cross the street. The purpose of this study was to examine the feasibility of a simulated street-crossing task in persons with MS and to determine differences in street-crossing performance between persons with MS and non-MS controls. 26 participants with MS (median Expanded Disability Status Scale [EDSS] score = 3.5) and 19 controls completed 40 trials of a virtual street-crossing task. There were 2 crossing conditions (i.e., no distraction and phone conversation), and participants performed 20 trials per condition. Participants were instructed that the goal of the task was to cross the street successfully (i.e., without being hit by a vehicle). The primary outcome was task feasibility, assessed as completion and adverse events. Secondary outcomes were measures of street-crossing performance. Overall, the simulated street-crossing task was feasible (i.e., 90% completion, no adverse events) in participants with MS. Participants with MS waited longer and were less attentive to traffic before entering the street compared with controls (all P < .05). A virtual street-crossing task is feasible for studying street-crossing behavior in persons with mild MS and most individuals with moderate MS. Virtual street-crossing performance is impaired in persons with MS compared to controls; however, persons with MS do not appear to be more vulnerable to a distracting condition. The virtual reality environment presents a safe and useful setting for understanding pedestrian behavior in persons with MS.

  6. Cognitive Performance Assessment in Mixed and Virtual Environment Systems

    National Research Council Canada - National Science Library

    Pair, Jarrell; Rizzo, Albert

    2006-01-01

    The U.S. Army is currently interested in developing state-of-the-art training methods that leverage technology based on established and emerging immersive mixed and virtual environment systems employing...

  7. Performance Analysis of Millimeter-Wave Multi-hop Machine-to-Machine Networks Based on Hop Distance Statistics

    Directory of Open Access Journals (Sweden)

    Haejoon Jung

    2018-01-01

    Full Text Available As an intrinsic part of the Internet of Things (IoT) ecosystem, machine-to-machine (M2M) communications are expected to provide ubiquitous connectivity between machines. Millimeter-wave (mmWave) communication is another promising technology for the future communication systems to alleviate the pressure of scarce spectrum resources. For this reason, in this paper, we consider multi-hop M2M communications, where a machine-type communication (MTC) device with the limited transmit power relays to help other devices using mmWave. To be specific, we focus on hop distance statistics and their impacts on system performances in multi-hop wireless networks (MWNs) with directional antenna arrays in mmWave for M2M communications. Different from microwave systems, in mmWave communications, wireless channel suffers from blockage by obstacles that heavily attenuate line-of-sight signals, which may result in limited per-hop progress in MWNs. We consider two routing strategies aiming at different types of applications and derive the probability distributions of their hop distances. Moreover, we provide their baseline statistics assuming the blockage-free scenario to quantify the impact of blockages. Based on the hop distance analysis, we propose a method to estimate the end-to-end performances (e.g., outage probability, hop count, and transmit energy) of the mmWave MWNs, which provides important insights into mmWave MWN design without time-consuming and repetitive end-to-end simulation.

  8. Performance Analysis of Millimeter-Wave Multi-hop Machine-to-Machine Networks Based on Hop Distance Statistics.

    Science.gov (United States)

    Jung, Haejoon; Lee, In-Ho

    2018-01-12

    As an intrinsic part of the Internet of Things (IoT) ecosystem, machine-to-machine (M2M) communications are expected to provide ubiquitous connectivity between machines. Millimeter-wave (mmWave) communication is another promising technology for the future communication systems to alleviate the pressure of scarce spectrum resources. For this reason, in this paper, we consider multi-hop M2M communications, where a machine-type communication (MTC) device with the limited transmit power relays to help other devices using mmWave. To be specific, we focus on hop distance statistics and their impacts on system performances in multi-hop wireless networks (MWNs) with directional antenna arrays in mmWave for M2M communications. Different from microwave systems, in mmWave communications, wireless channel suffers from blockage by obstacles that heavily attenuate line-of-sight signals, which may result in limited per-hop progress in MWNs. We consider two routing strategies aiming at different types of applications and derive the probability distributions of their hop distances. Moreover, we provide their baseline statistics assuming the blockage-free scenario to quantify the impact of blockages. Based on the hop distance analysis, we propose a method to estimate the end-to-end performances (e.g., outage probability, hop count, and transmit energy) of the mmWave MWNs, which provides important insights into mmWave MWN design without time-consuming and repetitive end-to-end simulation.

  9. Virtual overlay metrology for fault detection supported with integrated metrology and machine learning

    Science.gov (United States)

    Lee, Hong-Goo; Schmitt-Weaver, Emil; Kim, Min-Suk; Han, Sang-Jun; Kim, Myoung-Soo; Kwon, Won-Taik; Park, Sung-Ki; Ryan, Kevin; Theeuwes, Thomas; Sun, Kyu-Tae; Lim, Young-Wan; Slotboom, Daan; Kubis, Michael; Staecker, Jens

    2015-03-01

    While semiconductor manufacturing moves toward the 7nm node for logic and 15nm node for memory, an increased emphasis has been placed on reducing the influence known contributors have on the on-product overlay budget. With a machine learning technique known as function approximation, we use a neural network to gain insight into how known contributors, such as those collected with scanner metrology, influence the on-product overlay budget. The result is a sufficiently trained function that can approximate overlay for all wafers exposed with the lithography system. As a real world application, inline metrology can be used to measure overlay for a few wafers while using the trained function to approximate overlay vector maps for the entire lot of wafers. With the approximated overlay vector maps for all wafers coming off the track, a process engineer can redirect wafers or lots with overlay signatures outside the standard population to offline metrology for excursion validation. With this added flexibility, engineers will be given more opportunities to catch wafers that need to be reworked, resulting in improved yield. The quality of the derived corrections from measured overlay metrology feedback can be improved by using the approximated overlay to trigger which wafers should or shouldn't be measured inline. As a development or integration engineer, one can use the approximated overlay to gain insight into lots and wafers used for design of experiments (DOE) troubleshooting. In this paper we will present the results of a case study that follows the machine learning function approximation approach to data analysis, with production overlay measured on an inline metrology system at SK hynix.
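The workflow in this record (train on a few inline-measured wafers, then approximate overlay for the whole lot and flag excursions) can be sketched in miniature. The paper uses a neural network; this illustration substitutes a one-feature linear least-squares fit as a stand-in, and all data, names, and thresholds are hypothetical:

```python
def fit_linear(xs, ys):
    """Closed-form 1-D least squares: returns (slope, intercept).
    Stand-in for the trained approximation function."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical scanner-metrology feature for 6 wafers in a lot;
# only 3 of them receive inline overlay measurements (nm).
feature  = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
measured = {0: 1.0, 2: 3.0, 4: 5.0}   # wafer index -> measured overlay

a, b = fit_linear([feature[i] for i in measured],
                  [measured[i] for i in measured])
approx = [a * f + b for f in feature]            # overlay for all wafers
flagged = [i for i, v in enumerate(approx) if v > 4.5]  # excursion check
print(approx, flagged)   # -> [1.0, 2.0, 3.0, 4.0, 5.0, 6.0] [4, 5]
```

Flagged wafers would be redirected to offline metrology for validation, exactly as the record describes for the neural-network version.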

  10. Performance Characterization and Auto-Ignition Performance of a Rapid Compression Machine

    OpenAIRE

    Hao Liu; Hongguang Zhang; Zhicheng Shi; Haitao Lu; Guangyao Zhao; Baofeng Yao

    2014-01-01

    A rapid compression machine (RCM) test bench is developed in this study. The performance characterization and auto-ignition performance tests are conducted at an initial temperature of 293 K, a compression ratio of 9.5 to 16.5, a compressed temperature of 650 K to 850 K, a driving gas pressure range of 0.25 MPa to 0.7 MPa, an initial pressure of 0.04 MPa to 0.09 MPa, and a nitrogen dilution ratio of 35% to 65%. A new type of hydraulic piston is used to address the problem in which the hydraul...

  11. Intraoperative performance and postoperative outcome comparison of longitudinal, torsional, and transversal phacoemulsification machines.

    Science.gov (United States)

    Christakis, Panos G; Braga-Mele, Rosa M

    2012-02-01

    To compare the intraoperative performance and postoperative outcomes of 3 phacoemulsification machines that use different modes. Kensington Eye Institute, Toronto, Ontario, Canada. Comparative case series. This chart and video review comprised consecutive eligible patients who had phacoemulsification by the same surgeon using a Whitestar Signature Ellips-FX (transversal), Infiniti-Ozil-IP (torsional), or Stellaris (longitudinal) machine. The review included 98 patients. Baseline characteristics in the groups were similar; the mean nuclear sclerosis grade was 2.0 ± 0.8. There were no significant intraoperative complications. The torsional machine averaged less phacoemulsification needle time (83 ± 33 seconds) than the transversal (99 ± 40 seconds; P=.21) or longitudinal (110 ± 45 seconds; P=.02) machines; the difference was accentuated in cases with high-grade nuclear sclerosis. The torsional machine had less chatter and better followability than the transversal or longitudinal machines (P<.001). The torsional and longitudinal machines had better anterior chamber stability than the transversal machine (P<.001). Postoperatively, the torsional machine yielded less central corneal edema than the transversal (P<.001) and longitudinal (P=.04) machines, corresponding to a smaller increase in mean corneal thickness (torsional 5%, transversal 10%, longitudinal 12%; P=.04). Also, the torsional machine had better 1-day postoperative visual acuities (P<.001). All 3 phacoemulsification machines were effective with no significant intraoperative complications. The torsional machine outperformed the transversal and longitudinal machines, with a lower mean needle time, less chatter, and improved followability. This corresponded to less corneal edema 1 day postoperatively and better visual acuity. Copyright © 2011 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  12. Utilization of a virtual patient for advanced assessment of student performance in pain management.

    Science.gov (United States)

    Smith, Michael A; Waite, Laura H

    2017-09-01

    To assess student performance and achievement of course objectives following the integration of a virtual patient case designed to promote active, patient-centered learning in a required pharmacy course. DecisionSim™ (Kynectiv, Inc., Chadsford, PA), a dynamic virtual patient platform, was used to implement an interactive patient case to augment pain management material presented during a didactic session in a pharmacotherapy course. Simulation performance data were collected and analyzed. Student exam performance on pain management questions was compared to student exam performance on nearly identical questions from a prior year when a paper-based case was used instead of virtual patient technology. Students who performed well on the virtual patient case performed better on exam questions related to patient assessment (p = 0.0244), primary pharmacological therapy (p = 0.0001), and additional pharmacological therapy (p = 0.0001). Overall exam performance did not differ between the two groups. However, students with exposure to the virtual patient case demonstrated significantly better performance on higher level Bloom's Taxonomy questions that required them to create pharmacotherapy regimens (p=0.0005). Students in the previous year (exposed only to a paper patient case) performed better in calculating conversions of opioids for patients (p = 0.0001). Virtual patient technology may enhance student performance on high-level Bloom's Taxonomy examination questions. This study adds to the current literature demonstrating the value of virtual patient technology as an active-learning strategy. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Virtual reality simulation training of mastoidectomy - studies on novice performance.

    Science.gov (United States)

    Andersen, Steven Arild Wuyts

    2016-08-01

    Virtual reality (VR) simulation-based training is increasingly used in surgical technical skills training including in temporal bone surgery. The potential of VR simulation in enabling high-quality surgical training is great and VR simulation allows high-stakes and complex procedures such as mastoidectomy to be trained repeatedly, independent of patients and surgical tutors, outside traditional learning environments such as the OR or the temporal bone lab, and with fewer of the constraints of traditional training. This thesis aims to increase the evidence-base of VR simulation training of mastoidectomy and, by studying the final-product performances of novices, investigates the transfer of skills to the current gold-standard training modality of cadaveric dissection, the effect of different practice conditions and simulator-integrated tutoring on performance and retention of skills, and the role of directed, self-regulated learning. Technical skills in mastoidectomy were transferable from the VR simulation environment to cadaveric dissection with significant improvement in performance after directed, self-regulated training in the VR temporal bone simulator. Distributed practice led to a better learning outcome and more consolidated skills than massed practice and also resulted in a more consistent performance after three months of non-practice. Simulator-integrated tutoring accelerated the initial learning curve but also caused over-reliance on tutoring, which resulted in a drop in performance when the simulator-integrated tutor-function was discontinued. The learning curves were highly individual but often plateaued early and at an inadequate level, which related to issues concerning both the procedure and the VR simulator, over-reliance on the tutor function and poor self-assessment skills. Future simulator-integrated automated assessment could potentially resolve some of these issues and provide trainees with both feedback during the procedure and immediate

  14. Comparative analysis of partial imaging performance parameters of home and imported X-ray machines

    International Nuclear Information System (INIS)

    Cao Yunxi; Wang Xianyun; Liu Huiqin; Guo Yongxin

    2002-01-01

    Objective: To compare and analyze the performance indexes and the imaging quality of the home and imported X-ray machines through testing their partial imaging performance parameters. Methods: By separate sampling from 10 home and 10 imported X-ray machines, the parameters including tube current, time of exposure, machine total exposure, and repeatability were tested, and the imaging performance was evaluated according to the national standard. Results: All the performance indexes met the standard of GB4505-84. The first sampling tests showed the maximum changing coefficient of imaging performance repeatability of the home X-ray machines was Δmax1 = 0.025, while that of the imported X-ray machine was Δmax1 = 0.016. In the second sampling tests, the maximum changing coefficients of the two were Δmax2 = 0.048 and Δmax2 = 0.022, respectively. Conclusion: The 2 years' follow-up tests indicate that there is no significant difference between the above-mentioned parameters of the elaborately adjusted home X-ray machines and imported ones, but the home X-ray machines are no better than the imported X-ray machines in stability and consistency
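The "changing coefficient" of repeatability reported in the record above is a coefficient of variation: the standard deviation of repeated exposure readings divided by their mean. A small sketch, with purely illustrative readings:

```python
from statistics import mean, pstdev

def changing_coefficient(readings):
    """Coefficient of variation of repeated exposure readings:
    population standard deviation divided by the mean."""
    return pstdev(readings) / mean(readings)

# Illustrative output readings from five repeated exposures
readings = [10.0, 10.2, 9.8, 10.1, 9.9]
print(round(changing_coefficient(readings), 4))   # -> 0.0141
```

Smaller values indicate more consistent machine output, which is how the record compares the home and imported units (e.g., 0.025 vs. 0.016 in the first sampling).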

  15. Towards a virtual platform for aerodynamic design, performance assessment and optimization of horizontal axis wind turbines

    OpenAIRE

    Martínez Valdivieso, Daniel

    2017-01-01

    This thesis focuses on the study and improvement of the techniques involved in a virtual platform for the simulation of the Aerodynamics of Horizontal Axis Wind Turbines, with the ultimate objective of making Wind Energy more competitive. Navier-Stokes equations govern Aerodynamics, which is an unresolved and very active field of research due to the current inability to capture the relevant scales in both time and space for today's industrial-size machines (with rotors over 100 m...

  16. Improvement of the dynamic performance of an AC linear permanent magnet machine

    NARCIS (Netherlands)

    Jansen, J.W.; Lomonova, E.; Vandenput, A.J.A.; Compter, J.C.; Verweij, A.H.

    2003-01-01

    This paper discusses the controller design and test approaches leading to the performance improvement of a brushless 3-phase AC synchronous permanent magnet linear machine. The feasible controller design concept for the linear machine is presented and further implemented in Simulink and dSPACE. Two

  17. Quality control and performance evaluation of microselectron HDR machine over 30 months

    International Nuclear Information System (INIS)

    Balasubramanian, N.; Annex, E.H.; Sunderam, N.; Patel, N.P.; Kaushal, V.

    2008-01-01

    To assess the performance of the Microselectron HDR machine, the standard quality control and quality assurance checks were carried out after each loading of a new 192 Ir brachytherapy source in the machine. A total of 9 loadings were done over a period of 30 months.

  18. Using GPS to evaluate productivity and performance of forest machine systems

    Science.gov (United States)

    Steven E. Taylor; Timothy P. McDonald; Matthew W. Veal; Ton E. Grift

    2001-01-01

    This paper reviews recent research and operational applications of using GPS as a tool to help monitor the locations, travel patterns, performance, and productivity of forest machines. The accuracy of dynamic GPS data collected on forest machines under different levels of forest canopy is reviewed first. Then, the paper focuses on the use of GPS for monitoring forest...

  19. Virtual reality training improves da Vinci performance: a prospective trial.

    Science.gov (United States)

    Cho, Jae Sung; Hahn, Koo Yong; Kwak, Jung Myun; Kim, Jin; Baek, Se Jin; Shin, Jae Won; Kim, Seon Hahn

    2013-12-01

    The DV-Trainer™ (a virtual reality [VR] simulator) (Mimic Technologies, Inc., Seattle, WA) is one of several different robotic surgical training methods. We designed a prospective study to determine whether VR training could improve da Vinci(®) Surgical System (Intuitive Surgical, Inc., Sunnyvale, CA) performance. Surgeons (n=12) were enrolled using a randomized protocol. Groups 1 (VR training) and 2 (control) participated in VR and da Vinci exercises. Participants' time and moving distance were combined to determine a composite score: VR index=1000/(time×moving distance). The da Vinci exercises included needle control and suturing. Procedure time and error were measured. A composite index (DV index) was computed and used to measure da Vinci competency. After the initial trial with both the VR and da Vinci exercises, only Group 1 was trained with the VR simulator following our institutional curriculum for 3 weeks. All members of both groups then participated in the second trial of the VR and da Vinci exercises and were scored in the same way as in the initial trial. In the initial trial, there was no difference in the VR index (Group 1 versus Group 2, 8.9 ± 3.3 versus 9.4 ± 3.7; P=.832) and the DV index (Group 1 versus Group 2, 3.85 ± 0.73 versus 3.66 ± 0.65; P=.584) scores between the two groups. At the second time point, Group 1 showed increased VR index scores in comparison with Group 2 (19.3 ± 4.5 versus 9.7 ± 4.1, respectively; P=.001) and improved da Vinci performance skills as measured by the DV index (5.80 ± 1.13 versus 4.05 ± 1.03, respectively; P=.028) and by suturing time (7.1 ± 1.54 minutes versus 10.55 ± 1.93 minutes, respectively; P=.018). We found that VR simulator training can improve da Vinci performance. VR practice can result in an early plateau in the learning curve for robotic practice under controlled circumstances.
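The composite score defined in the record above, VR index = 1000/(time × moving distance), is straightforward to compute. The abstract does not state units, so the values below are purely illustrative:

```python
def vr_index(time, moving_distance):
    """Composite score from the record: 1000 / (time * moving distance).
    Shorter completion times and more economical instrument paths
    both raise the score; units are not stated in the abstract."""
    return 1000.0 / (time * moving_distance)

print(vr_index(50.0, 2.0))   # -> 10.0
print(vr_index(25.0, 2.0))   # -> 20.0  (halving the time doubles the score)
```

Note the reciprocal form: because time and distance multiply in the denominator, a trainee must improve speed without lengthening the instrument path to raise the index.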

  20. Superconductor Armature Winding for High Performance Electrical Machines

    Science.gov (United States)

    2016-12-05

    Eddy-induced currents are used for shielding (Sec. 3.1, Solid Shield). Machine concepts listed include eddy-current shields, SuperSat, switched reluctance generators, AC homopolar, and toroidal (Gramme) windings. Measured losses were higher than expected, due probably to the highly conducting Nb sheath around the MgB2 filaments (the measured losses were coupling or eddy-current losses).

  1. Virtual Classroom Instruction and Academic Performance of Educational Technology Students in Distance Education, Enugu State

    Science.gov (United States)

    Akpan, Sylvester J.; Etim, Paulinus J.; Udom, Stella Ogechi

    2016-01-01

    The virtual classroom and distance education have created new teaching pedagogy. This study was carried out to investigate Virtual Classroom Instruction on Academic Performance of Educational Technology Students in Distance Education, Enugu State. The population for this study was limited to the Students in National Open University, Enugu study…

  2. Elbows higher! Performing, observing and correcting exercises by a Virtual Trainer

    NARCIS (Netherlands)

    Ruttkay, Z.M.; van Welbergen, H.

    2008-01-01

    In the framework of our Reactive Virtual Trainer (RVT) project, we are developing an Intelligent Virtual Agent (IVA) capable of acting similarly to a real trainer. Besides presenting the physical exercises to be performed, she keeps an eye on the user. She provides feedback whenever appropriate, to

  3. Prospective Teachers' Likelihood of Performing Unethical Behaviors in the Real and Virtual Environments

    Science.gov (United States)

    Akdemir, Ömür; Vural, Ömer F.; Çolakoglu, Özgür M.

    2015-01-01

    Individuals act differently in virtual environments than in real life. The primary purpose of this study is to investigate prospective teachers' likelihood of performing unethical behaviors in real and virtual environments. Prospective teachers were surveyed online and their perceptions collected for various scenarios. Findings revealed…

  4. Microsoft Virtualization Master Microsoft Server, Desktop, Application, and Presentation Virtualization

    CERN Document Server

    Olzak, Thomas; Boomer, Jason; Keefer, Robert M

    2010-01-01

    Microsoft Virtualization helps you understand and implement the latest virtualization strategies available with Microsoft products. This book focuses on: Server Virtualization, Desktop Virtualization, Application Virtualization, and Presentation Virtualization. Whether you are managing Hyper-V, implementing desktop virtualization, or even migrating virtual machines, this book is packed with coverage on all aspects of these processes. Written by a talented team of Microsoft MVPs, Microsoft Virtualization is the leading resource for a full installation, migration, or integration of virtual syste

  5. Architectural Principles and Experimentation of Distributed High Performance Virtual Clusters

    Science.gov (United States)

    Younge, Andrew J.

    2016-01-01

    With the advent of virtualization and Infrastructure-as-a-Service (IaaS), the broader scientific computing community is considering the use of clouds for their scientific computing needs. This is due to the relative scalability, ease of use, advanced user environment customization abilities, and the many novel computing paradigms available for…

  6. The Role of a Multidimensional Concept of Trust in the Performance of Global Virtual Teams

    Science.gov (United States)

    Bodensteiner, Nan Muir; Stecklein, Jonette M.

    2002-01-01

    This paper focuses on the concept of trust as an important ingredient of effective global virtual team performance. Definitions of trust and virtual teams are presented. The concept of trust is developed from its unilateral application (trust, absence of trust) to a multidimensional concept including cognitive and affective components. The special challenges of a virtual team are then discussed with particular emphasis on how a multidimensional concept of trust impacts these challenges. Propositions suggesting the multidimensional concept of trust moderates the negative impacts of distance, cross cultural and organizational differences, the effects of electronically mediated communication, reluctance to share information and a lack of history/future on the performance of virtual teams are stated. The paper concludes with recommendations and a set of techniques to build both cognitive and affective trust in virtual teams.

  7. Performance of quantum cloning and deleting machines over coherence

    Science.gov (United States)

    Karmakar, Sumana; Sen, Ajoy; Sarkar, Debasis

    2017-10-01

    Coherence, being at the heart of interference phenomena, is found to be a useful resource in quantum information theory. Here we want to understand quantum coherence under the combination of two fundamentally dual processes, viz., cloning and deleting. We examine the role of quantum cloning and deletion machines in the consumption and generation of quantum coherence. We establish cloning as a cohering process and deletion as a decohering process. The fidelity of each process is shown to be connected with the coherence it generates or consumes.

  8. Significant improvements of electrical discharge machining performance by step-by-step updated adaptive control laws

    Science.gov (United States)

    Zhou, Ming; Wu, Jianyang; Xu, Xiaoyi; Mu, Xin; Dou, Yunping

    2018-02-01

    In order to obtain improved electrical discharge machining (EDM) performance, we have dedicated more than a decade to correcting one essential EDM defect, the weak stability of the machining, by developing adaptive control systems. The instabilities of machining are mainly caused by complicated disturbances in discharging. To counteract the effects of these disturbances on machining, we theoretically developed three control laws, from a minimum variance (MV) control law to a minimum variance and pole placement coupled (MVPPC) control law, and then to a two-step-ahead prediction (TP) control law. Based on real-time estimation of EDM process model parameters and the measured ratio of arcing pulses, which is also called the gap state, the electrode discharging cycle was directly and adaptively tuned so that stable machining could be achieved. To this end, we not only theoretically provide three proven control laws for the developed EDM adaptive control system, but also practically prove the TP control law to be the best in dealing with machining instability and machining efficiency, though the MVPPC control law provided much better EDM performance than the MV control law. It was also shown that the TP control law provided burn-free machining.
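The feedback idea in this record (tune the discharging cycle from the measured ratio of arcing pulses, i.e., the gap state) can be illustrated with a deliberately simplified proportional rule. This is not the paper's MV, MVPPC, or TP law, and all constants below are hypothetical:

```python
def adapt_off_time(off_time_us, arc_ratio, setpoint=0.1, gain=50.0,
                   lo=5.0, hi=200.0):
    """One adaptation step: lengthen the pulse off-time when the measured
    arcing-pulse ratio (gap state) exceeds the setpoint, so the gap can
    deionize; shorten it when the gap is overly stable, to regain
    machining efficiency. Proportional rule only, for illustration."""
    new = off_time_us + gain * (arc_ratio - setpoint)
    return min(hi, max(lo, new))   # clamp to machine limits

off = 20.0
for arc_ratio in [0.30, 0.25, 0.12, 0.08, 0.05]:  # simulated gap states
    off = adapt_off_time(off, arc_ratio)
print(round(off, 1))
```

The clamp mirrors the practical constraint that the discharging cycle can only be tuned within the generator's limits; the paper's prediction-based laws go further by anticipating the gap state rather than reacting to it.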

  9. Surface Coating of Plastic Parts for Business Machines (Industrial Surface Coating): New Source Performance Standards (NSPS)

    Science.gov (United States)

    Learn more about the new source performance standards (NSPS) for surface coating of plastic parts for business machines by reading the rule summary and history and finding the code of federal regulations as well as related rules.

  10. Effectiveness of Hamstring Knee Rehabilitation Exercise Performed in Training Machine vs. Elastic Resistance Electromyography Evaluation Study

    DEFF Research Database (Denmark)

    Jakobsen, M. D.; Sundstrup, E.; Andersen, C. H.

    2014-01-01

    Objective The aim of this study was to evaluate muscle activity during hamstring rehabilitation exercises performed in training machine compared with elastic resistance. Design Six women and 13 men aged 28-67 yrs participated in a crossover study. Electromyographic (EMG) activity was recorded...... inclinometers. Results Training machines and elastic resistance showed similar high levels of muscle activity (biceps femoris and semitendinosus peak normalized EMG >80%). EMG during the concentric phase was higher than during the eccentric phase regardless of exercise and muscle. However, compared with machine.......001) during hamstring curl performed with elastic resistance (7.58 +/- 0.08) compared with hamstring curl performed in a machine (5.92 +/- 0.03). Conclusions Hamstring rehabilitation exercise performed with elastic resistance induces similar peak hamstring muscle activity but slightly lower EMG values at more...

  11. Insider Threat Detection on the Windows Operating System using Virtual Machine Introspection

    Science.gov (United States)

    2012-06-14

    layer security (TLS) to prevent an organization from performing a man-in-the-middle (MITM) attack to determine the user’s activity, thus defeating...available on that system. As previously mentioned, RDP can be encrypted to prevent MITM attacks, which also defeats any network level traffic monitoring

  12. Usability of a virtual reality environment simulating an automated teller machine for assessing and training persons with acquired brain injury.

    Science.gov (United States)

    Fong, Kenneth N K; Chow, Kathy Y Y; Chan, Bianca C H; Lam, Kino C K; Lee, Jeff C K; Li, Teresa H Y; Yan, Elaine W H; Wong, Asta T Y

    2010-04-30

    This study aimed to examine the usability of a newly designed virtual reality (VR) environment simulating the operation of an automated teller machine (ATM) for assessment and training. Part I involved evaluation of the sensitivity and specificity of a non-immersive VR program simulating an ATM (VR-ATM). Part II consisted of a clinical trial providing baseline and post-intervention outcome assessments. A rehabilitation hospital and university-based teaching facilities were used as the setting. A total of 24 persons in the community with acquired brain injury (ABI)--14 in Part I and 10 in Part II--made up the participants in the study. In Part I, participants were randomized to receive instruction in either an "early" or a "late" VR-ATM program and were assessed using both the VR program and a real ATM. In Part II, participants were assigned in matched pairs to either VR training or computer-assisted instruction (CAI) teaching programs for six 1-hour sessions over a three-week period. Two behavioral checklists based on activity analysis of cash withdrawals and money transfers using a real ATM were used to measure average reaction time, percentage of incorrect responses, level of cues required, and time spent as generated by the VR system; also used was the Neurobehavioral Cognitive Status Examination. The sensitivity of the VR-ATM was 100% for cash withdrawals and 83.3% for money transfers, and the specificity was 83% and 75%, respectively. For cash withdrawals, the average reaction time of the VR group was significantly shorter than that of the CAI group (p = 0.021). We found no significant differences in average reaction time or accuracy between groups for money transfers, although we did note positive improvement for the VR-ATM group. We found the VR-ATM to be usable as a valid assessment and training tool for relearning the use of ATMs prior to real-life practice in persons with ABI.

  13. Modification and Performance Evaluation of a Low Cost Electro-Mechanically Operated Creep Testing Machine

    Directory of Open Access Journals (Sweden)

    John J. MOMOH

    2010-12-01

    Full Text Available An existing mechanically operated tensile and creep testing machine was modified into a low-cost, electro-mechanically operated creep testing machine capable of determining the creep properties of aluminum, lead and thermoplastic materials as a function of applied stress, time and temperature. The modification was motivated by the need for an electro-mechanically operated creep testing machine as a demonstration model, ideal for laboratory use and demonstrations, which provides an economical means of performing standard creep experiments. The result is a more comprehensive laboratory experience, as the technology behind the creep testing machine, the test methodology and the response of materials loaded during the experiment are explored. The machine provides a low-cost solution for Mechanics of Materials laboratories interested in creep testing experiments and demonstrations but not capable of funding the acquisition of commercially available creep testing machines. Creep curves of strain versus time for a thermoplastic material were plotted at stress levels of 1.95 MPa, 3.25 MPa and 4.55 MPa and temperatures of 20 °C, 40 °C and 60 °C, respectively. The machine is satisfactory since it is always ready for operation at any given time.

  14. SHINE Virtual Machine Model for In-flight Updates of Critical Mission Software

    Science.gov (United States)

    Plesea, Lucian

    2008-01-01

    This software is a new target for the Spacecraft Health Inference Engine (SHINE) knowledge base that compiles a knowledge base to a language called Tiny C - an interpreted version of C that can be embedded on flight processors. This new target allows portions of a running SHINE knowledge base to be updated on a "live" system without needing to halt and restart the containing SHINE application. This enhancement will directly provide this capability without the risk of software validation problems and can also enable complete integration of BEAM and SHINE into a single application. This innovation enables SHINE deployment in domains where autonomy is used during flight-critical applications that require updates. This capability eliminates the need for halting the application and performing potentially serious total system uploads before resuming the application with the loss of system integrity. This software enables additional applications at JPL (microsensors, embedded mission hardware) and increases the marketability of these applications outside of JPL.

  15. Virtual and augmented reality technologies in Human Performance: a review

    Directory of Open Access Journals (Sweden)

    Tânia Brusque Crocetta

    Full Text Available Abstract Introduction : Today's society is influenced by Information and Communication Technologies. Toys that were once built by hand have been reinterpreted and have become highly commercialized products. In this context, games using Augmented Reality (AR) and Virtual Reality (VR) technologies are present in the everyday lives of children, youth and adults. Objective : To investigate how Physical Education professionals in Brazil have been making use of AR and VR games to benefit their work. Materials and methods : We only included studies that addressed exercise or physical activity using AR or VR games. We searched the databases of the Virtual Health Library (VHL) and the Scientific Electronic Library Online (SciELO), using the words augmented reality, virtual reality, exergames, Wii and serious games. Results : Nineteen articles were included in the systematic review. The most frequently used device was the Nintendo(r) Wii, with over 25 different kinds of games. With regard to the subjects of the studies, four studies were conducted with healthy individuals (mean = 65.7), three with patients with Parkinson's disease (mean = 18.0), three with elderly women (mean = 7.7) and two with patients with stroke injury (mean = 6.0). Conclusion : Many physical therapists and occupational therapists use serious games with AR or VR technologies as another work tool, especially for rehabilitation practices. The fact that these technologies are also used in Physical Education classes in Brazil indicates that electronic games are available and can be a tool that can contribute to the widespread adoption of exercise as an enjoyable form of recreation.

  16. Evaluating the use of prior information under different pacing conditions on aircraft inspection performance: The use of virtual reality technology

    Science.gov (United States)

    Bowling, Shannon Raye

    The aircraft maintenance industry is a complex system consisting of human and machine components; because of this, much emphasis has been placed on improving aircraft inspection performance. One proven technique for improving inspection performance is training. There are several strategies that have been implemented for training, one of which is feedforward information. The use of prior information (feedforward) is known to positively affect inspection performance. This information can consist of knowledge about defect characteristics (types, severity/criticality, and location) and the probability of occurrence. Although several studies have been conducted that demonstrate the usefulness of feedforward as a training strategy, there are certain research issues that need to be addressed. This study evaluates the effect of feedforward information in a simulated 3-dimensional environment through the use of virtual reality. A controlled study was conducted to evaluate the effectiveness of feedforward information in a simulated aircraft inspection environment. The study was conducted in two phases. The first phase evaluated the difference between general and detailed inspection at different pacing levels. The second phase evaluated the effect of feedforward information pertaining to severity, probability and location. Analyses of the results showed that subjects performed significantly better during detailed inspection than during general inspection. Pacing also had the effect of reducing performance for both general and detailed inspection. The study also found that as the level of feedforward information increases, performance also increases. In addition to evaluating performance measures, the study also evaluated process and subjective measures. It was found that process measures such as number of fixation points, fixation groups, mean fixation duration, and percent area covered were all affected by the treatment levels. Analyses of the subjective

  17. 'Virtual' central business office: how UMMS improved revenue cycle performance.

    Science.gov (United States)

    Henciak, Bill; Fontaine, Christine; Fields, Keith; Parks, Stacy

    2010-06-01

    Based on its experience with implementing a virtual central business office, UMMS recommends the following steps to ensure the success of such an initiative: Define the process flow for the organization's day-to-day revenue cycle operations prior to implementation, then select best practices and milestones for managing accounts. Identify any technology issues that could arise during implementation prior to go-live. Hold a midproject debriefing with staff. Develop an organizational chart that details who is responsible for handling issues that arise during implementation and afterward.

  18. Development of Virtual Reality Cycling Simulator

    OpenAIRE

    Schramka, Filip; Arisona, Stefan; Joos, Michael; Erath, Alexander

    2017-01-01

    This paper presents a cycling simulator implemented using consumer virtual reality hardware and additional off-the-shelf sensors. Challenges like real-time motion tracking within the performance requirements of state-of-the-art virtual reality are successfully mastered. Retrieved data from digital motion processors are sent over Bluetooth to a render machine running Unity3D. By processing these data, a bicycle is mapped into virtual space. Physically correct behaviour is simulated and high quali...

  19. Alpha band functional connectivity correlates with the performance of brain-machine interfaces to decode real and imagined movements

    Directory of Open Access Journals (Sweden)

    Hisato Sugata

    2014-08-01

    Full Text Available Brain signals recorded from the primary motor cortex (M1) are known to serve a significant role in coding the information brain-machine interfaces (BMIs) need to perform real and imagined movements, and also to form several functional networks with motor association areas. However, whether functional networks between M1 and other brain regions, such as these motor association areas, are related to the performance of BMIs is unclear. To examine the relationship between functional connectivity and performance of BMIs, we analyzed the correlation coefficient between performance of neural decoding and functional connectivity over the whole brain using magnetoencephalography. Ten healthy participants were instructed to execute or imagine three simple right upper limb movements. To decode the movement type, we extracted 40 virtual channels in the left M1 via the beamforming approach and used them as a decoding feature. In addition, seed-based functional connectivities of activities in the alpha band during real and imagined movements were calculated using imaginary coherence. Seed voxels were set as the same virtual channels in M1. After calculating the imaginary coherence in individuals, the correlation coefficient between decoding accuracy and strength of imaginary coherence was calculated over the whole brain. The significant correlations were distributed mainly to motor association areas for both real and imagined movements. These regions largely overlapped with brain regions that had significant connectivity to M1. Our results suggest that use of the strength of functional connectivity between M1 and motor association areas has the potential to improve the performance of BMIs to perform real and imagined movements.
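
Imaginary coherence, the connectivity measure used in this study, keeps only the imaginary part of the coherency spectrum and is therefore insensitive to zero-lag (volume-conduction) coupling. A minimal sketch of the computation on synthetic signals follows; the sampling rate, window length, and test signals are assumptions for illustration, not stand-ins for the paper's MEG data.

```python
import numpy as np
from scipy.signal import csd, welch

def imaginary_coherence(x, y, fs, nperseg=256):
    """|Im(coherency)| between x and y, per frequency bin."""
    f, sxy = csd(x, y, fs=fs, nperseg=nperseg)  # cross-spectrum (complex)
    _, sxx = welch(x, fs=fs, nperseg=nperseg)   # auto-spectra
    _, syy = welch(y, fs=fs, nperseg=nperseg)
    return f, np.abs(np.imag(sxy / np.sqrt(sxx * syy)))

# Two noisy signals sharing a 10 Hz component with a 90-degree phase lag
fs = 250.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t - np.pi / 2) + 0.5 * rng.standard_normal(t.size)

f, icoh = imaginary_coherence(x, y, fs)
alpha = (f >= 8) & (f <= 12)  # alpha band, where the coupling lives
```

Because the shared component is phase-lagged by 90 degrees, the imaginary part of the coherency captures nearly all of the coupling in the alpha band, while frequencies carrying only independent noise stay near zero.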

  20. Experimental analysis of the performance of machine learning algorithms in the classification of navigation accident records

    Directory of Open Access Journals (Sweden)

    REIS, M V. S. de A.

    2017-06-01

    Full Text Available This paper aims to evaluate the use of machine learning techniques in a database of marine accidents. We analyzed and evaluated the main causes and types of marine accidents in the Northern Fluminense region. For this, machine learning techniques were used. The study showed that the modeling can be done in a satisfactory manner using different configurations of classification algorithms, varying the activation functions and training parameters. The SMO (Sequential Minimal Optimization) algorithm showed the best performance result.
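
The SMO algorithm named above is the solver behind libsvm, which scikit-learn's `SVC` wraps, so a comparable classification experiment can be sketched in a few lines. The accident features and cause labels below are synthetic stand-ins, not the paper's database.

```python
# Synthetic stand-in for accident records: 6 invented features, 3 cause classes
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# SVC is fit with an SMO-type solver (libsvm) under the hood
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

Varying the kernel and training parameters (`C`, `gamma`), as the study varied classifier configurations, changes `acc` and is how one would reproduce the comparison.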

  1. TRACEABILITY OF COORDINATE MEASURING MACHINES – CALIBRATION AND PERFORMANCE VERIFICATION

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo; Savio, Enrico; Bariani, Paolo

    This document is used in connection with three exercises each of 45 minutes duration as a part of the course GEOMETRICAL METROLOGY AND MACHINE TESTING. The exercises concern three aspects of coordinate measurement traceability: 1) Performance verification of a CMM using a ball bar; 2) Calibration...... of an optical coordinate measuring machine; 3) Uncertainty assessment using the ISO 15530-3 “Calibrated workpieces” procedure....

  2. ALife for Real and Virtual Audio-Video Performances

    DEFF Research Database (Denmark)

    Pagliarini, Luigi; Lund, Henrik Hautop

    2014-01-01

    of neural networks – one dimensional artificial agents that populate their two dimensional artificial world, and which are served by a simple input output control system – that can use both genetic and reinforcement learning algorithms to evolve appropriate behavioural answers to an impressively large...... shapes of inputs, through both a fitness formula based genetic pressure, and, eventually, a user-machine based feedbacks. More closely, in the first version of MAG algorithm the agents’ control system is a perceptron; the world of the agents is a two dimensional grid that changes its dimensions...

  3. Operating Room Performance Improves after Proficiency-Based Virtual Reality Cataract Surgery Training

    DEFF Research Database (Denmark)

    Thomsen, Ann Sofia Skou; Bach-Holm, Daniella; Kjærbo, Hadi

    2017-01-01

    PURPOSE: To investigate the effect of virtual reality proficiency-based training on actual cataract surgery performance. The secondary purpose of the study was to define which surgeons benefit from virtual reality training. DESIGN: Multicenter masked clinical trial. PARTICIPANTS: Eighteen cataract...... surgeons with different levels of experience. METHODS: Cataract surgical training on a virtual reality simulator (EyeSi) until a proficiency-based test was passed. MAIN OUTCOME MEASURES: Technical performance in the operating room (OR) assessed by 3 independent, masked raters using a previously validated...... task-specific assessment tool for cataract surgery (Objective Structured Assessment of Cataract Surgical Skill). Three surgeries before and 3 surgeries after the virtual reality training were video-recorded, anonymized, and presented to the raters in random order. RESULTS: Novices (non...

  4. What limits the performance of current invasive Brain Machine Interfaces?

    Directory of Open Access Journals (Sweden)

    Gytis Baranauskas

    2014-04-01

    Full Text Available The concept of a brain-machine interface (BMI), or a computer-brain interface, is simple: a BMI creates a communication pathway for direct control of an external device by the brain. In reality BMIs are very complex devices, and only recently has the increase in computing power of microprocessors enabled a boom in BMI research that continues almost unabated to this date, the high point being the insertion of electrode arrays into the brains of 5 human patients in a clinical trial run by Cyberkinetics, with a few other clinical tests still in progress. Meanwhile several EEG-based BMI devices (non-invasive BMIs) were launched commercially. Modern electronics and dry electrode technology made it possible to drive the cost of some of these devices below a few hundred dollars. However, the initial excitement of the direct control of a computer or other equipment by brain waves is dampened by the large efforts required for learning, high error rates and slow response speed. All these problems are directly related to the low information transfer rates typical for such EEG-based BMIs. In invasive BMIs employing multiple electrodes inserted into the brain one may expect much higher information transfer rates than in EEG-based BMIs because, in theory, each electrode provides an independent information channel. However, although invasive BMIs require more expensive equipment and have ethical problems related to the need to insert electrodes in the live brain, such financial and ethical costs are often not offset by a dramatic improvement in the information transfer rate. Thus the main topic of this review is why, in invasive BMIs, the apparently much larger information content obtained with multiple extracellular electrodes does not translate into much higher rates of information transfer. This paper explores possible answers to this question by concluding that more research on what movement parameters are encoded by neurons in motor cortex is needed before we can enjoy the next

  5. APPLICATION OF THE PERFORMANCE SELECTION INDEX METHOD FOR SOLVING MACHINING MCDM PROBLEMS

    Directory of Open Access Journals (Sweden)

    Dušan Petković

    2017-04-01

    Full Text Available The complex nature of machining processes requires the use of different methods and techniques for process optimization. Over the past few years a number of different optimization methods have been proposed for solving continuous machining optimization problems. In the manufacturing environment, engineers also face a number of discrete machining optimization problems. In order to help decision makers solve this type of optimization problem, a number of multi-criteria decision making (MCDM) methods have been proposed. This paper introduces the use of an almost unexplored MCDM method, i.e. the performance selection index (PSI) method, for solving machining MCDM problems. The main motivation for using the PSI method is that it is not necessary to determine criteria weights, as in other MCDM methods. The applicability and effectiveness of the PSI method have been demonstrated by solving two case studies dealing with the machinability of materials and the selection of the most suitable cutting fluid for a given machining application. The obtained rankings correlate well with those derived by past researchers using other MCDM methods, which validates the usefulness of this method for solving machining MCDM problems.
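
One common formulation of the preference/performance selection index (due to Maniya and Bhatt) derives the criterion weights from the variation of the normalized decision matrix itself, which is why no subjective weights are needed. The sketch below applies that formulation to an invented three-alternative machining decision matrix; all numbers are illustrative assumptions, not the paper's case-study data.

```python
import numpy as np

def psi_rank(X, benefit):
    """Performance selection index for decision matrix X (alternatives x criteria).

    benefit[j] is True for benefit criteria (larger is better),
    False for cost criteria (smaller is better).
    """
    X = np.asarray(X, dtype=float)
    # Normalize: benefit -> x/max, cost -> min/x
    R = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
    mean = R.mean(axis=0)
    phi = ((R - mean) ** 2).sum(axis=0)  # preference variation per criterion
    omega = 1.0 - phi                    # deviation in preference value
    psi = omega / omega.sum()            # data-driven criterion weights
    return R @ psi                       # overall preference index per alternative

# Three machining alternatives, two benefit criteria and one cost criterion
X = [[0.80, 70, 3.2],
     [0.95, 60, 2.5],
     [0.70, 90, 4.0]]
scores = psi_rank(X, benefit=np.array([True, True, False]))
best = int(np.argmax(scores))
```

The alternative with the largest index is ranked first; changing a criterion from benefit to cost only changes its normalization, not the weighting step.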

  6. Correlation of cutting fluid performance in different machining operations

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo; Belluco, Walter

    2001-01-01

    An analysis of cutting fluid performance in different metal cutting operations is presented, based on experimental investigations in which type of operation, performance criteria, work material, and fluid type are considered. Cutting fluid performance was evaluated in turning, drilling, reaming...... investigated. Results show that correlation of cutting fluid performance in different operations exists, within the same group of cutting fluids, in the case of stainless steel as workpiece material. Under the tested conditions, the average correlation coefficients between efficiency parameters with different...... operations on austenitic stainless steel lied in the range 0.87-0.97 for waterbased fluids and 0.79-0.89 for straight oils. A similar correlation could not be found for the other workpiece materials investigated in this work. A rationalisation of cutting fluid performance tests is suggested....

  7. Managing creative team performance in virtual environments : An empirical study in 44 R&D teams

    NARCIS (Netherlands)

    Kratzer, J.; Leenders, R.Th.A.J.; van Engelen, J.M.L.

    Creative performance in R&D is of vital importance to organizations. Because R&D is usually organized in teams, the management of creative performance inherently refers to team-level creative performance. Over the last decades, R&D teams have become increasingly virtual. In this article we

  8. Solution procedure and performance evaluation for a water–LiBr absorption refrigeration machine

    International Nuclear Information System (INIS)

    Wonchala, Jason; Hazledine, Maxwell; Goni Boulama, Kiari

    2014-01-01

    The water–lithium bromide absorption cooling machine was investigated theoretically in this paper. A detailed solution procedure was proposed and validated. A parametric study was conducted over the entire admissible ranges of the desorber, condenser, absorber and evaporator temperatures. The performance of the machine was evaluated based on the circulation ratio which is a measure of the system size and cost, the first law coefficient of performance and the second law exergy efficiency. The circulation ratio and the coefficient of performance were seen to improve as the temperature of the heat source increased, while the second law performance deteriorated. The same qualitative responses were obtained when the temperature of the refrigerated environment was increased. On the other hand, simultaneously raising the condenser and absorber temperatures was seen to result in a severe deterioration of both the circulation ratio and first law coefficient of performance, while the second law performance indicator improved significantly. The influence of the difference between the condenser and absorber exit temperatures, as well as that of the internal recovery heat exchanger on the different performance indicators was also calculated and discussed. - Highlights: • Analysis of a water–LiBr absorption machine, including detailed solution procedure. • Performance assessed using first and second law considerations, as well as flow ratio. • Effects of heat source and refrigerated environment temperatures on the performance. • Effects of the difference between condenser and absorber temperatures. • Effects of internal heat exchanger efficiency on overall cooling machine performance
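
Two of the performance indicators used above can be written down directly: the circulation ratio from the LiBr mass fractions of the strong and weak solutions, and the first-law coefficient of performance as the ratio of evaporator to desorber heat duty. The sketch below uses assumed round numbers, not property-table values or results from the paper.

```python
def circulation_ratio(x_strong, x_weak):
    """kg of solution pumped per kg of refrigerant vapor desorbed
    (x are LiBr mass fractions of the strong and weak solutions)."""
    return x_strong / (x_strong - x_weak)

def cop_first_law(q_evaporator, q_desorber):
    """First-law coefficient of performance of an absorption chiller."""
    return q_evaporator / q_desorber

# Assumed round numbers: mass fractions, and heat duties in kW
f = circulation_ratio(x_strong=0.62, x_weak=0.56)
cop = cop_first_law(q_evaporator=10.0, q_desorber=13.0)
```

The small denominator in `circulation_ratio` makes the sensitivity reported in the abstract visible: narrowing the gap between the two mass fractions (as when the absorber temperature rises) quickly inflates the circulation ratio, and with it the pump size and cost.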

  9. Performance Evaluation and Development of Virtual Reality Bike Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J.Y.; Song, C.G.; Kim, N.G. [Chonbuk National University, Chonju (Korea)]

    2002-03-01

    This paper describes a new bike system for postural balance rehabilitation training. The virtual environment and three-dimensional graphic model are designed with CAD tools such as 3D Studio Max and World Up. For real-time bike simulation, the optimized WorldToolKit graphic library is embedded with the dynamic geometry generation method, multi-thread method, and portal generation method. In this experiment, 20 normal adults were tested to investigate the influencing factors of balancing posture. We evaluated the system by measuring parameters such as path deviation, driving velocity, COP (center of pressure), and average weight shift. We also investigated the usefulness of visual feedback information by weight shift. The results showed that continuous visual feedback by weight shift was more effective than no visual feedback for postural balance control. It is concluded that this system might be applied to clinical use as a new postural balance training system. (author). 19 refs., 15 figs., 2 tabs.

  10. Virtual Reality Exposure Training for Musicians: Its Effect on Performance Anxiety and Quality.

    Science.gov (United States)

    Bissonnette, Josiane; Dubé, Francis; Provencher, Martin D; Moreno Sala, Maria T

    2015-09-01

    Music performance anxiety affects numerous musicians, with many of them reporting impairment of performance due to this problem. This exploratory study investigated the effects of virtual reality exposure training on students with music performance anxiety. Seventeen music students were randomly assigned to a control group (n=8) or a virtual training group (n=9). Participants were asked to play a musical piece by memory in two separate recitals within a 3-week interval. Anxiety was then measured with the Personal Report of Confidence as a Performer Scale and the S-Anxiety scale from the State-Trait Anxiety Inventory (STAI-Y). Between pre- and post-tests, the virtual training group took part in virtual reality exposure training consisting of six 1-hour long sessions of virtual exposure. The results indicate a significant decrease in performance anxiety for musicians in the treatment group for those with a high level of state anxiety, for those with a high level of trait anxiety, for women, and for musicians with high immersive tendencies. Finally, between the pre- and post-tests, we observed a significant increase in performance quality for the experimental group, but not for the control group.

  11. STUDENT ACADEMIC PERFORMANCE PREDICTION USING SUPPORT VECTOR MACHINE

    OpenAIRE

    S.A. Oloruntoba; J.L. Akinode

    2017-01-01

    This paper investigates the relationship between students' preadmission academic profile and final academic performance. A data sample of students from one of the Federal Polytechnics in the South West part of Nigeria was used. The preadmission academic profile used for this study is the 'O' level grades (terminal high school results). Academic performance is defined using the student's Grade Point Average (GPA). This research focused on using data mining techniques to develop a model for predicting stude...
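
Predicting a continuous GPA from grade profiles is a regression task, which a support vector machine handles via support vector regression. The sketch below is a hedged stand-in: the grade scale, the assumed linear relation, and the noise level are all invented for illustration, and the paper's actual dataset and features are not reproduced.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic students: five 'O' level subject grade points each (invented scale)
rng = np.random.default_rng(42)
o_level = rng.uniform(1, 9, size=(120, 5))
gpa = 0.4 * o_level.mean(axis=1) + rng.normal(0, 0.2, 120)  # assumed relation

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(o_level[:100], gpa[:100])          # train on the first 100 students
pred = model.predict(o_level[100:])          # predict GPA for the held-out 20
mae = float(np.abs(pred - gpa[100:]).mean()) # mean absolute error
```

A held-out error metric like `mae` is the natural way to judge such a prediction model before using it for admissions decisions.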

  12. Usability of a virtual reality environment simulating an automated teller machine for assessing and training persons with acquired brain injury

    Directory of Open Access Journals (Sweden)

    Li Teresa HY

    2010-04-01

    Full Text Available Abstract Objective This study aimed to examine the usability of a newly designed virtual reality (VR) environment simulating the operation of an automated teller machine (ATM) for assessment and training. Design Part I involved evaluation of the sensitivity and specificity of a non-immersive VR program simulating an ATM (VR-ATM). Part II consisted of a clinical trial providing baseline and post-intervention outcome assessments. Setting A rehabilitation hospital and university-based teaching facilities were used as the setting. Participants A total of 24 persons in the community with acquired brain injury (ABI) - 14 in Part I and 10 in Part II - made up the participants in the study. Interventions In Part I, participants were randomized to receive instruction in either an "early" or a "late" VR-ATM program and were assessed using both the VR program and a real ATM. In Part II, participants were assigned in matched pairs to either VR training or computer-assisted instruction (CAI) teaching programs for six 1-hour sessions over a three-week period. Outcome Measures Two behavioral checklists based on activity analysis of cash withdrawals and money transfers using a real ATM were used to measure average reaction time, percentage of incorrect responses, level of cues required, and time spent as generated by the VR system; also used was the Neurobehavioral Cognitive Status Examination. Results The sensitivity of the VR-ATM was 100% for cash withdrawals and 83.3% for money transfers, and the specificity was 83% and 75%, respectively. For cash withdrawals, the average reaction time of the VR group was significantly shorter than that of the CAI group (p = 0.021). We found no significant differences in average reaction time or accuracy between groups for money transfers, although we did note positive improvement for the VR-ATM group.
Conclusion We found the VR-ATM to be usable as a valid assessment and training tool for relearning the use of ATMs prior to real

  13. The Impact of Virtual Collaboration and Collaboration Technologies on Knowledge Transfer and Team Performance in Distributed Organizations

    Science.gov (United States)

    Ngoma, Ngoma Sylvestre

    2013-01-01

    Virtual teams are increasingly viewed as a powerful determinant of competitive advantage in geographically distributed organizations. This study was designed to provide insights into the interdependencies between virtual collaboration, collaboration technologies, knowledge transfer, and virtual team performance in an effort to understand whether…

  14. Consistency of performance of robot-assisted surgical tasks in virtual reality.

    Science.gov (United States)

    Suh, I H; Siu, K-C; Mukherjee, M; Monk, E; Oleynikov, D; Stergiou, N

    2009-01-01

    The purpose of this study was to investigate consistency of performance of robot-assisted surgical tasks in a virtual reality environment. Eight subjects performed two surgical tasks, bimanual carrying and needle passing, with both the da Vinci surgical robot and a virtual reality equivalent environment. Nonlinear analysis was utilized to evaluate consistency of performance by calculating the regularity and the amount of divergence in the movement trajectories of the surgical instrument tips. Our results revealed that movement patterns for both training tasks were statistically similar between the two environments. Consistency of performance as measured by nonlinear analysis could be an appropriate methodology to evaluate the complexity of the training tasks between actual and virtual environments and assist in developing better surgical training programs.
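
Regularity of a movement trajectory, one of the nonlinear measures this study relies on, is commonly quantified with sample entropy: lower values indicate a more regular, more consistent signal. A compact (unoptimized) sketch on synthetic trajectories follows; the template length, tolerance, and test signals are assumptions for illustration, not the study's instrument-tip data.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D signal; lower means more regular."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()  # tolerance, scaled to the signal

    def matches(mm):
        # All length-mm templates, compared pairwise with Chebyshev distance
        T = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(T[:, None] - T[None, :]), axis=2)
        return (d <= r).sum() - len(T)  # exclude self-matches

    return -np.log(matches(m + 1) / matches(m))

# A smooth trajectory vs. the same trajectory corrupted by noise
t = np.linspace(0, 20 * np.pi, 1000)
rng = np.random.default_rng(0)
regular = np.sin(t)
irregular = np.sin(t) + 0.8 * rng.standard_normal(t.size)
```

Comparing the entropy of trajectories recorded in the real and virtual environments, as this study does conceptually, amounts to comparing such regularity scores between conditions.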

  15. Performance of Color Camera Machine Vision in Automated Furniture Rough Mill Systems

    Science.gov (United States)

    D. Earl Kline; Agus Widoyoko; Janice K. Wiedenbeck; Philip A. Araman

    1998-01-01

    The objective of this study was to evaluate the performance of color camera machine vision for lumber processing in a furniture rough mill. The study used 134 red oak boards to compare the performance of automated gang-rip-first rough mill yield based on a prototype color camera lumber inspection system developed at Virginia Tech with both estimated optimum rough mill...

  16. Performance Characterization and Auto-Ignition Performance of a Rapid Compression Machine

    Directory of Open Access Journals (Sweden)

    Hao Liu

    2014-09-01

    Full Text Available A rapid compression machine (RCM) test bench is developed in this study. The performance characterization and auto-ignition performance tests are conducted at an initial temperature of 293 K, a compression ratio of 9.5 to 16.5, a compressed temperature of 650 K to 850 K, a driving gas pressure range of 0.25 MPa to 0.7 MPa, an initial pressure of 0.04 MPa to 0.09 MPa, and a nitrogen dilution ratio of 35% to 65%. A new type of hydraulic piston is used to address the problem in which the hydraulic buffer adversely affects the rapid compression process. Auto-ignition performance tests of the RCM are then performed using a DME–O2–N2 mixture. The two-stage ignition delay and negative temperature coefficient (NTC) behavior of the mixture are observed. The effects of driving gas pressure, compression ratio, initial pressure, and nitrogen dilution ratio on the two-stage ignition delay are investigated. Results show that both the first-stage and overall ignition delays tend to increase with increasing driving gas pressure. The driving gas pressure within a certain range does not significantly influence the compressed pressure. With increasing compression ratio, the first-stage ignition delay is shortened, whereas the second-stage ignition delay is extended. With increasing initial pressure, both the first-stage and second-stage ignition delays are shortened; the second-stage ignition delay is shortened to a greater extent than the first-stage delay. With increasing nitrogen dilution ratio, the first-stage ignition delay is shortened, whereas the second-stage delay is extended. Thus, the overall ignition delay presents different trends under various compression ratio and compressed pressure conditions.

  17. Hybrid PolyLingual Object Model: An Efficient and Seamless Integration of Java and Native Components on the Dalvik Virtual Machine

    OpenAIRE

    Yukun Huang; Rong Chen; Jingbo Wei; Xilong Pei; Jing Cao; Prem Prakash Jayaraman; Rajiv Ranjan

    2014-01-01

    JNI in the Android platform is often associated with low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and the complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse the CAR-compliant co...

  18. Machining of high performance workpiece materials with CBN coated cutting tools

    International Nuclear Information System (INIS)

    Uhlmann, E.; Fuentes, J.A. Oyanedel; Keunecke, M.

    2009-01-01

    The machining of high performance workpiece materials requires significantly harder cutting materials. In hard machining, early tool wear occurs due to high process forces and temperatures. The hardest known material is diamond, but steel materials cannot be machined with diamond tools because of the reactivity of iron with carbon. Cubic boron nitride (cBN) is the second hardest of all known materials. PcBN indexable inserts, which are available only in geometrically simple forms, require several production steps and are cost-intensive. The development of a cBN coating for cutting tools combines the advantages of a thin-film system with those of cBN: cemented carbide tools of flexible geometry can be coated. The cBN films, with a thickness of up to 2 μm on cemented carbide substrates, show excellent mechanical and physical properties. This paper describes the results of machining various workpiece materials in turning and milling operations with regard to tool life, resultant cutting force components, and workpiece surface roughness. In turning tests of Inconel 718 and milling tests of chrome steel, the high potential of cBN coatings for dry machining was proven. The results of the experiments were compared with commonly used tool coatings for hard machining. Additionally, the wear mechanisms of adhesion, abrasion, surface fatigue, and tribo-oxidation were investigated in model wear experiments.

  19. Design and Performance Improvement of AC Machines Sharing a Common Stator

    Science.gov (United States)

    Guo, Lusu

    With the increasing demand for electric motors in various industrial applications, especially electric-powered vehicles (electric cars, more-electric aircraft, and future electric ships and submarines), both synchronous reluctance machines (SynRMs) and interior permanent magnet (IPM) machines are recognized as good candidates for high performance variable speed applications. Developing a single stator design which can be used for both SynRM and IPM motors is a good way to reduce manufacturing and maintenance cost. SynRMs can be used as a low cost solution for many electric driving applications, and IPM machines can be used where power density is crucial, or can work as generators to meet the increasing demand for electrical power on board. In this research, SynRM and IPM machines are designed sharing a common stator structure. The prototype motors are designed with the aid of finite element analysis (FEA). Machine performances with different stator slot and rotor pole numbers are compared by FEA. An 18-slot, 4-pole structure is selected based on the comparison for this prototype design. Torque pulsation is sometimes the major drawback of permanent magnet synchronous machines. There are several sources of torque pulsation, such as back-EMF distortion, inductance variation, and cogging torque due to the presence of permanent magnets. Efforts to reduce torque pulsations in permanent magnet machines fall into two categories. One is at the design stage: the structure of the permanent magnet machine can be optimized with the aid of finite element analysis. The other applies after the permanent magnet machine has been manufactured, or when the machine structure cannot be changed for other reasons: the currents fed into the permanent magnet machine can be controlled to follow a certain profile which will make the machine generate a smoother torque waveform. Torque pulsation reduction methods in both categories will be

  20. Predicting the Performance of Chain Saw Machines Based on Shore Scleroscope Hardness

    Science.gov (United States)

    Tumac, Deniz

    2014-03-01

    Shore hardness has been used to estimate several physical and mechanical properties of rocks over the last few decades. However, the number of studies correlating Shore hardness with rock cutting performance is quite limited, and rather limited research has been carried out on predicting the performance of chain saw machines. This study differs from previous investigations in that Shore hardness values (SH1, SH2, and the deformation coefficient) are used to determine the field performance of chain saw machines. The measured Shore hardness values are correlated with the physical and mechanical properties of natural stone samples, cutting parameters (normal force, cutting force, and specific energy) obtained from linear cutting tests in unrelieved cutting mode, and the areal net cutting rate of chain saw machines. Two empirical models developed previously are improved for the prediction of the areal net cutting rate of chain saw machines. The first model is based on a revised chain saw penetration index, which uses SH1, machine weight, and useful arm cutting depth as predictors. The second model is based on the power consumed for only cutting the stone, arm thickness, and specific energy as a function of the deformation coefficient. While cutting force has a strong relationship with Shore hardness values, normal force has a weak or moderate correlation. Uniaxial compressive strength, Cerchar abrasivity index, and density can also be predicted by Shore hardness values.
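
    The two empirical models above are regression fits on measured predictors. As an illustration only - the data, linear form, and resulting coefficients below are synthetic assumptions, not values from the study - a least-squares model of areal net cutting rate against SH1, machine weight, and useful arm cutting depth can be set up as:

```python
import numpy as np

# Synthetic stand-in data, one row per chain saw / stone combination:
# [SH1, machine weight (t), useful arm cutting depth (m)].
# These numbers are illustrative, not measurements from the study.
X = np.array([
    [45.0, 11.0, 2.8],
    [52.0, 11.0, 2.8],
    [61.0, 18.0, 3.0],
    [70.0, 18.0, 3.0],
    [78.0, 25.0, 3.6],
])
y = np.array([14.2, 12.5, 11.8, 9.6, 8.9])  # areal net cutting rate (m^2/h)

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(sh1, weight, depth):
    """Predicted areal net cutting rate for a new stone/machine pair."""
    return float(coef @ np.array([1.0, sh1, weight, depth]))

print(predict(60.0, 18.0, 3.0))
```

    A real model of this kind would be fitted to the field data and validated against independently measured cutting rates before use.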

  1. Permanent Magnet Flux-Switching Machine, Optimal Design and Performance Analysis

    Directory of Open Access Journals (Sweden)

    Liviu Emilian Somesan

    2013-01-01

    Full Text Available In this paper, an analytical sizing-design procedure for a typical permanent magnet flux-switching machine (PMFSM) with 12 stator and 10 rotor poles is presented. An optimal design, based on the Hooke-Jeeves method with maximum torque density as the objective function, is performed. The results were validated via two-dimensional finite element analysis (2D-FEA) applied to the optimized structure. The influence of the permanent magnet (PM) dimensions and type, and of the rotor pole shape, on the machine performance was also studied via 2D-FEA.

  2. Virtual Reality Simulation as a Tool to Monitor Surgical Performance Indicators: VIRESI Observational Study.

    Science.gov (United States)

    Muralha, Nuno; Oliveira, Manuel; Ferreira, Maria Amélia; Costa-Maia, José

    2017-05-31

    Virtual reality simulation is a topic of discussion as a complementary tool to traditional laparoscopic surgical training in the operating room. However, it is unclear whether virtual reality training can have an impact on the surgical performance of advanced laparoscopic procedures. Our objective was to assess the ability of the virtual reality simulator LAP Mentor to identify and quantify changes in surgical performance indicators after LAP Mentor training for digestive anastomosis. Twelve surgeons from Centro Hospitalar de São João in Porto (Portugal) performed two sessions of advanced task 5: anastomosis in LAP Mentor, before and after completing the tutorial, and were evaluated on 34 surgical performance indicators. The results show that six surgical performance indicators significantly changed after LAP Mentor training; the surgeons performed the task significantly faster, as the median 'total time' was significantly reduced. The results support the use of virtual reality training simulation as a benchmark tool to assess the surgical performance of Portuguese surgeons. LAP Mentor is able to identify variations in surgical performance indicators of digestive anastomosis.

  3. Mastoidectomy performance assessment of virtual simulation training using final-product analysis

    DEFF Research Database (Denmark)

    Andersen, Steven A W; Cayé-Thomasen, Per; Sørensen, Mads S

    2015-01-01

    ...a modified Welling scale. The simulator gathered basic metrics on time, steps, and volumes in relation to the on-screen tutorial and collisions with vital structures. RESULTS: Substantial inter-rater reliability (kappa = 0.77) for virtual simulation and moderate inter-rater reliability (kappa = 0.59) for dissection final-product assessment was found. The simulation and dissection performance scores had significant correlation (P = .014). None of the basic simulator metrics correlated significantly with the final-product score except for number of steps completed in the simulator. CONCLUSIONS: A modified version of a validated final-product performance assessment tool can be used to assess mastoidectomy on virtual temporal bones. Performance assessment of virtual mastoidectomy could potentially save the use of cadaveric temporal bones for more advanced training when a basic level of competency...

  4. Software platform virtualization in chemistry research and university teaching.

    Science.gov (United States)

    Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver

    2009-11-16

    Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers, it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single host operating system to execute multiple guest operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages, we have confirmed that the computational speed penalty for using virtual machines is low, around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance, the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs, especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics, as well as the missing cheminformatics education at universities worldwide.
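
    The 5% to 10% speed penalty cited above comes from running the same benchmarks on bare metal and inside a virtual machine. A minimal, generic sketch of that comparison follows; the workload is an illustrative stand-in, not one of the chemistry benchmarks used in the paper:

```python
import time

def cpu_workload(n=200_000):
    # Stand-in CPU-bound task; a real comparison would run the actual
    # chemistry software benchmark on the host and in the guest.
    total = 0
    for i in range(1, n):
        total += (i * i) % 7
    return total

def benchmark(fn, repeats=5):
    """Best-of-n wall-clock time, to reduce scheduling noise."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return min(times)

# In practice: run this script once on bare metal and once inside the VM,
# record the two times, and compare them.
host_time = benchmark(cpu_workload)    # as measured on the physical machine
guest_time = benchmark(cpu_workload)   # as measured again inside the guest
overhead = guest_time / host_time - 1.0
print(f"estimated virtualization overhead: {overhead:+.1%}")
```

    Taking the best of several repeats, rather than the mean, damps out interference from other processes on either platform.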

  5. Deploying HEP applications using Xen and Globus Virtual Workspaces

    International Nuclear Information System (INIS)

    Agarwal, A; Desmarais, R; Gable, I; Grundy, D; P-Brown, D; Seuster, R; Vanderster, D C; Sobie, R; Charbonneau, A; Enge, R

    2008-01-01

    The deployment of HEP applications in heterogeneous grid environments can be challenging because many of the applications are dependent on specific OS versions and have a large number of complex software dependencies. Virtual machine monitors such as Xen could be used to package HEP applications, complete with their execution environments, to run on resources that do not meet their operating system requirements. Our previous work has shown HEP applications running within Xen suffer little or no performance penalty as a result of virtualization. However, a practical strategy is required for remotely deploying, booting, and controlling virtual machines on a remote cluster. One tool that promises to overcome the deployment hurdles using standard grid technology is the Globus Virtual Workspaces project. We describe strategies for the deployment of Xen virtual machines using Globus Virtual Workspace middleware that simplify the deployment of HEP applications

  6. The effect of virtual audiences on music performance anxiety

    OpenAIRE

    Castle-Green, Teresa Anne

    2015-01-01

    Music Performance Anxiety (MPA) is experienced by a large number of musicians throughout their careers. In many cases this anxiety can have debilitating consequences for the performer preventing them from delivering a performance to the standard they are capable of. Very little research has been conducted within the field of Human-Computer Interaction (HCI) into the ways in which technology can assist musicians with learning to manage their MPA. This paper builds on the limited body of resear...

  7. Performance Evaluation of Machine Learning Algorithms for Urban Pattern Recognition from Multi-spectral Satellite Images

    Directory of Open Access Journals (Sweden)

    Marc Wieland

    2014-03-01

    Full Text Available In this study, a classification and performance evaluation framework for the recognition of urban patterns in medium (Landsat ETM, TM and MSS) and very high resolution (WorldView-2, Quickbird, Ikonos) multi-spectral satellite images is presented. The study aims at exploring the potential of machine learning algorithms in the context of an object-based image analysis and at thoroughly testing the algorithms’ performance under varying conditions to optimize their usage for urban pattern recognition tasks. Four classification algorithms, Normal Bayes, K Nearest Neighbors, Random Trees and Support Vector Machines, which represent different concepts in machine learning (probabilistic, nearest neighbor, tree-based, function-based), have been selected and implemented on a free and open-source basis. Particular focus is given to assessing the generalization ability of machine learning algorithms and the transferability of trained learning machines between different image types and image scenes. Moreover, the influence of the number and choice of training data, the influence of the size and composition of the feature vector, and the effect of image segmentation on the classification accuracy are evaluated.
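
    The evaluation idea can be sketched with scikit-learn analogues of the four algorithm families on synthetic features; the study's own open-source implementation and its per-object image features are not reproduced here:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for per-object spectral/shape features of image segments.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# One representative per machine learning concept named in the abstract.
classifiers = {
    "Normal Bayes": GaussianNB(),                                   # probabilistic
    "K Nearest Neighbors": KNeighborsClassifier(n_neighbors=5),     # nearest neighbor
    "Random Trees": RandomForestClassifier(n_estimators=100,
                                           random_state=0),         # tree-based
    "Support Vector Machine": SVC(kernel="rbf", gamma="scale"),     # function-based
}
scores = {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
          for name, clf in classifiers.items()}
for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:24s} {acc:.3f}")
```

    A transferability test in the spirit of the study would train on segments from one image scene and score on segments from another, rather than on a random split of the same scene.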

  8. Investigations on the performance of ultrasonic drilling process with special reference to precision machining of advanced ceramics

    International Nuclear Information System (INIS)

    Adithan, M.; Laroiya, S.C.

    1997-01-01

    Advanced ceramics are assuming an important role in modern industrial technology. The applications and advantages of using advanced ceramics are many. There are several reasons why machining of advanced ceramics is needed after their compacting and sintering; these are discussed in this paper. However, precision machining of advanced ceramics must be economical. Critical technological issues to be addressed in cost-effective machining of ceramics include the design of machine tools, tooling arrangements, improved yield and precision, the relationship of part dimensions and finish specifications to functional performance, and on-line inspection. Considering the above, ultrasonic drilling is an important process used for the precision machining of advanced ceramics. Extensive studies on tool wear occurring in the ultrasonic machining of advanced ceramics have been carried out. In addition, the production accuracy of holes drilled, the surface finish obtained, and surface integrity aspects in the machining of advanced ceramics have also been investigated. Some specific findings with reference to surface integrity are: a) no cracks or micro-cracks developed during or after ultrasonic machining of advanced ceramics; b) while machining Hexoloy alpha silicon carbide, a recast layer is formed as a result of ultrasonic machining, attributed to viscous heating from high-energy impacts during ultrasonic machining, whereas no such recast layer was observed while machining any other type of ceramic; and c) there is no change in the microstructure of the advanced ceramics as a result of ultrasonic machining

  9. Teaching Machines, Programming, Computers, and Instructional Technology: The Roots of Performance Technology.

    Science.gov (United States)

    Deutsch, William

    1992-01-01

    Reviews the history of the development of the field of performance technology. Highlights include early teaching machines, instructional technology, learning theory, programed instruction, the systems approach, needs assessment, branching versus linear program formats, programing languages, and computer-assisted instruction. (LRW)

  10. Effectiveness and resolution of tests for evaluating the performance of cutting fluids in machining aerospace alloys

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo; Axinte, Dragos A.

    2008-01-01

    The paper discusses effectiveness and resolution of five cutting tests (turning, milling, drilling, tapping, VIPER grinding) and their quality output measures used in a multi-task procedure for evaluating the performance of cutting fluids when machining aerospace materials. The evaluation takes...

  11. Construction and performance of the scanning and measuring machine HOLMES used for bubble chamber holograms

    International Nuclear Information System (INIS)

    Drevermann, H.; Geissler, K.K.; Johansson, K.E.

    1985-01-01

    The construction and performance of the scanning and measuring machine HOLMES are described. It has been used to analyse in-line holograms taken with the small bubble chamber HOBC. A total of 8000 holograms has up to now been analysed on HOLMES. (orig.)

  12. Field weakening performance of flux-switching machines for hybrid/electric vehicles

    NARCIS (Netherlands)

    Tang, Y.; Paulides, J.J.H.; Lomonova, E.A.

    2015-01-01

    Flux-switching machines (FSMs) are a viable candidate for electric propulsion of hybrid/electric vehicles. This paper investigates the field weakening performance of FSMs. The investigation starts with general torque and voltage expressions, which reveal the relationships between certain parameters

  13. TRACEABILITY OF PRECISION MEASUREMENTS ON COORDINATE MEASURING MACHINES - PERFORMANCE VERIFICATION OF CMMs

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo; Sobiecki, René; Tosello, Guido

    This document is used in connection with one exercise of 30 minutes duration as a part of the course VISION ONLINE – One week course on Precision & Nanometrology. The exercise concerns performance verification of the volumetric measuring capability of a small volume coordinate measuring machine...

  14. Effect of changing polarity of graphite tool/ Hadfield steel workpiece couple on machining performances in die sinking EDM

    Directory of Open Access Journals (Sweden)

    Özerkan Haci Bekir

    2017-01-01

    Full Text Available In this study, machining performance output parameters such as machined surface roughness (SR), material removal rate (MRR), and tool wear rate (TWR) were experimentally examined and analyzed under varying machining parameters in die sinking EDM. The processing (input) parameters of this research are tool material, peak current (I), pulse duration (ton), and pulse interval (toff). The experimental machining runs were carried out on Hadfield steel workpieces using prismatic and cylindrical graphite electrodes with kerosene dielectric at different machining current, polarity, and pulse time settings. The experiments have shown that the type of tool material, polarity (direct polarity yields higher MRR, SR, and TWR), current (high current lowers TWR and enhances MRR), and pulse-on time (ton = 48 μs is a critical threshold value for MRR and TWR) were influential on machining performance in electrical discharge machining.

  15. Performance and beam characteristics of the PANTAK THERAPAX HF225 X-ray therapy machine

    Energy Technology Data Exchange (ETDEWEB)

    Yiannakkaras, C; Papadopoulos, N; Christodoulides, G [Department of Medical Physics, Nicosia General Hospital, 1450 Nicosia (Cyprus)

    1999-12-31

    The performance and beam characteristics of the new PANTAK THERAPAX HF225 X-ray therapy machine have been measured, evaluated and discussed. Eight beam qualities within the working range of generating potentials between 50 and 225 kVp are used in our department. These beam qualities have been investigated in order to provide a data base specific to our machine. Beam Quality, Central Axis Depth Dose, Output, Relative Field Uniformity and Timer Error were investigated. (authors) 11 refs., 4 figs., 9 tabs.

  16. Performance Evaluation of Eleven-Phase Induction Machine with Different PWM Techniques

    Directory of Open Access Journals (Sweden)

    M.I. Masoud

    2015-06-01

    Full Text Available Multiphase induction machines are used extensively in low and medium voltage (MV) drives. In MV drives, power switches have a limitation associated with switching frequency. This paper is a comparative study of the eleven-phase induction machine’s performance when used as a prototype and fed with sinusoidal pulse-width-modulation (SPWM) at a low switching frequency, selective harmonic elimination (SHE), and single pulse modulation (SPM) techniques. The comparison relies on voltage/frequency control with the same phase voltage applied to the machine terminals for all of the previous techniques. The comparative study covers torque ripple, stator and harmonic currents, and motor efficiency.

  17. Design Enhancement and Performance Examination of External Rotor Switched Flux Permanent Magnet Machine for Downhole Application

    Science.gov (United States)

    Kumar, R.; Sulaiman, E.; Soomro, H. A.; Jusoh, L. I.; Bahrim, F. S.; Omar, M. F.

    2017-08-01

    With recent innovation and the employment of high-temperature magnets, the permanent magnet flux switching machine (PMFSM) has become one of the suitable contenders for offshore drilling, though it is less suited to downhole use because of the high ambient temperature. This extensive review therefore deals with the design enhancement and performance examination of an external-rotor PMFSM for the downhole application. First, the essential design parameters required for the machine configuration are computed numerically. The design enhancement strategy is then implemented through a deterministic technique. Finally, the preliminary and refined performance of the machine is compared; as a consequence, the output torque is raised from 16.39 Nm to 33.57 Nm while the cogging torque and PM weight are reduced to 1.77 Nm and 0.79 kg, respectively. It is therefore inferred that the proposed enhanced design, a 12-slot/22-pole structure with an external rotor, is suitable for the downhole application.

  18. Manufacturing and performance tests of in-pile creep measuring machine of zirconium alloys

    International Nuclear Information System (INIS)

    Choi, Y.; Kim, B. G.; Kang, Y. H.

    2000-01-01

    A mock-up of the in-pile creep test machine of zirconium alloys for HANARO was designed and manufactured, and its performance tests were carried out. The in-pile creep machine is 55 mm in diameter and 700 mm in length. Load is transferred to the specimen through a mechanism in which the contraction of bellows under gas pressure simultaneously moves a yoke and an upper grip connected to the specimen. It was observed that the extension of the specimen mounted in the grips was transferred perfectly to a linear variable differential transformer (LVDT) by the yoke and a push rod in a bearing. The displacement of the specimen with applied pressure was determined with the LVDT and a pressure gauge, respectively. The resultant stress-strain behavior of the specimen was determined from the displacement versus applied gas pressure curve, which showed values similar to those obtained with a standard tensile test machine

  19. Design of Parameter Independent, High Performance Sensorless Controllers for Permanent Magnet Synchronous Machines

    DEFF Research Database (Denmark)

    Xie, Ge

    The Permanent Magnet Synchronous Machine (PMSM) has become an attractive candidate for various industrial applications due to its high efficiency and torque density. In the PMSM drive system, simple and robust control methods play an important role in achieving satisfactory drive performance. For reducing the cost and increasing the reliability of the drive system, eliminating the mechanical sensor brings many advantages to the PMSM drive system. Therefore, sensorless control was developed and has been increasingly used in different PMSM drive systems over the last 20 years. However, machine... The transient fluctuation of the estimated rotor position error is around 20 degrees with a step load torque change from 0% to 100% of the rated torque. The position error in steady state is within ±2 electrical degrees for the best case. The proposed method may also be used for e.g. online machine parameter...

  20. Fire Response performance - Behavioural research in virtual Reality

    NARCIS (Netherlands)

    Kobes, M.; Oberije, N.; Rosmuller, N.; Helsloot, I.; Vries, de B.

    2007-01-01

    Fire response performance is the ability of an individual to perceive and validate clues of danger and to make decisions that are effective for surviving a fire situation with no or few subsequent health complications. In general, little information is known about human behaviour in

  1. Influence of Workpiece Material on Tool Wear Performance and Tribofilm Formation in Machining Hardened Steel

    Directory of Open Access Journals (Sweden)

    Junfeng Yuan

    2016-04-01

    Full Text Available In addition to the bulk properties of a workpiece material, characteristics of the tribofilms formed as a result of workpiece material mass transfer to the friction surface play a significant role in friction control. This is especially true in cutting of hardened materials, where it is very difficult to use liquid based lubricants. To better understand wear performance and the formation of beneficial tribofilms, this study presents an assessment of uncoated mixed alumina ceramic tools (Al2O3+TiC) in the turning of two grades of steel, AISI T1 and AISI D2. Both workpiece materials were hardened to 59 HRC then machined under identical cutting conditions. Comprehensive characterization of the resulting wear patterns and the tribofilms formed at the tool/workpiece interface was made using X-ray Photoelectron Spectroscopy and Scanning Electron Microscopy. Metallographic studies on the workpiece material were performed before the machining process, and the surface integrity of the machined part was investigated after machining. Tool life was 23% higher when turning D2 than T1. This improvement in cutting tool life and wear behaviour was attributed to a difference in: (1) tribofilm generation on the friction surface and (2) the amount and distribution of carbide phases in the workpiece materials. The results show that wear performance depends both on properties of the workpiece material and on characteristics of the tribofilms formed on the friction surface.

  2. Performance verification of network function virtualization in software defined optical transport networks

    Science.gov (United States)

    Zhao, Yongli; Hu, Liyazhou; Wang, Wei; Li, Yajie; Zhang, Jie

    2017-01-01

    With the continuous opening of resource acquisition and application, a large variety of network hardware appliances are deployed as communication infrastructure. Launching a new network application often implies replacing obsolete devices and providing the related space and power to accommodate the new ones, which increases the energy and capital investment. Network function virtualization (NFV) aims to address these problems by consolidating many types of network equipment onto industry-standard elements such as servers, switches, and storage. Many types of IT resources have been deployed to run Virtual Network Functions (vNFs), such as virtual switches and routers. How to deploy NFV in optical transport networks is therefore a problem of great importance. This paper focuses on this problem and gives an implementation architecture of NFV-enabled optical transport networks based on Software Defined Optical Networking (SDON), with the procedure of vNF call and return. In particular, an implementation solution for an NFV-enabled optical transport node is designed, and a parallel processing method for NFV-enabled OTN nodes is proposed. To verify the performance of NFV-enabled SDON, the protocol interaction procedures of control function virtualization and node function virtualization are demonstrated on an SDON testbed. Finally, the benefits and challenges of the parallel processing method for NFV-enabled OTN nodes are simulated and analyzed.

  3. Using social media and machine learning to predict financial performance of a company

    OpenAIRE

    Forouzani, Sepehr

    2016-01-01

Social media have recently become one of the most popular forms of communication for a vast number of people. The text and posts shared on social media are widely used by researchers for analysis and study in various fields. In this master's thesis, sentiment analysis was performed on Twitter posts containing information about two companies, and machine learning algorithms were used to predict the financial performance of these companies.

  4. Virtual reality in surgical training.

    Science.gov (United States)

    Lange, T; Indelicato, D J; Rosen, J M

    2000-01-01

    Virtual reality in surgery and, more specifically, in surgical training, faces a number of challenges in the future. These challenges are building realistic models of the human body, creating interface tools to view, hear, touch, feel, and manipulate these human body models, and integrating virtual reality systems into medical education and treatment. A final system would encompass simulators specifically for surgery, performance machines, telemedicine, and telesurgery. Each of these areas will need significant improvement for virtual reality to impact medicine successfully in the next century. This article gives an overview of, and the challenges faced by, current systems in the fast-changing field of virtual reality technology, and provides a set of specific milestones for a truly realistic virtual human body.

  5. Virtualization, The next step for online services

    Directory of Open Access Journals (Sweden)

    Haller Piroska

    2013-06-01

Full Text Available Virtualization allows hardware resources to be shared and allocated among multiple virtual machines, thus increasing their usage rate. There are multiple solutions available today, such as VMware vSphere, Microsoft Hyper-V, Xen Server and Red Hat KVM, each with its own advantages and disadvantages. Choosing the right virtualization solution largely depends on the applications used and their resource requirements. The comparative analysis of the available virtualization solutions shows that it is essential to establish performance criteria and minimum and maximum resource usage thresholds over a given period of time. The coexistence of different services in different virtual machines that use different amounts of resources allows a more efficient use of the available hardware resources.

  6. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. On the path towards Exascale, new HPC runtime systems are also emerging in ways that differ from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  7. Virtual environment to quantify the influence of colour stimuli on the performance of tasks requiring attention

    OpenAIRE

    Frère Annie F; Silva Alessandro P

    2011-01-01

Abstract Background Recent studies indicate that blue-yellow colour discrimination is impaired in ADHD individuals. However, the relationship between colour and performance has not been investigated. This paper describes the development and testing of a virtual environment capable of quantifying the influence of red-green versus blue-yellow colour stimuli on the performance of people in a fun and interactive way, being appropriate for the target audience. Methods An interactive c...

  8. Motor performance of individuals with cerebral palsy in a virtual game using a mobile phone.

    Science.gov (United States)

    de Paula, Juliana Nobre; de Mello Monteiro, Carlos Bandeira; da Silva, Talita Dias; Capelini, Camila Miliani; de Menezes, Lilian Del Cielo; Massetti, Thais; Tonks, James; Watson, Suzanna; Nicolai Ré, Alessandro Hervaldo

    2017-11-01

Cerebral palsy (CP) is a permanent disorder of movement, muscle tone or posture that is caused by damage to the immature and developing brain. Research has shown that Virtual Reality (VR) technology can be used in rehabilitation to support the acquisition of motor skills and the achievement of functional tasks. The aim of this study was to explore improvements in the performance of individuals with CP with practice in the use of a virtual game on a mobile phone, and to compare their performance with that of a control group. Twenty-five individuals with CP were matched for age and sex with twenty-five typically developing individuals. Participants were asked to complete a VR maze task as fast as possible on a mobile phone. All participants performed 20 repetitions in the acquisition phase, five repetitions for retention and five more repetitions for transfer tests, in order to evaluate motor learning from the task. The CP group improved their performance in the acquisition phase and maintained the performance, as shown by the retention test; in addition, they were able to transfer the performance acquired to an opposite maze path. The CP group had longer task-execution times compared to the control group for all phases of the study. Individuals with cerebral palsy were able to learn a virtual reality game (maze task) using a mobile phone, and despite their differences from the control group, this kind of device offers new possibilities for improving function. Implications for rehabilitation: A virtual game on a mobile phone can enable individuals with Cerebral Palsy (CP) to improve performance. This illustrates the potential for use of mobile phone games to improve function. Individuals with CP had poorer performance than individuals without CP, but they demonstrated immediate improvements from using a mobile phone device.
Individuals with CP were able to transfer their skills to a similar task indicating that they were able to learn these motor skills by

  9. The Performance of Self in the Context of Shopping in a Virtual Dressing Room System

    DEFF Research Database (Denmark)

    Gao, Yi; Petersson, Eva; Brooks, Anthony Lewis

    2014-01-01

This paper investigates the performance of self in a virtual dressing room based on a camera-based system reflecting a full-body mirrored image of the self. The study was based on a qualitative research approach and a user-centered design methodology. Twenty-two participants took part in design sessions, semi-structured interviews and a questionnaire investigation. The results showed that the system facilitated self-recognition, self-perception, and shared experience, which afforded an enriched experience of the performing self.

  10. Evaluating the Effect of Virtual Reality Temporal Bone Simulation on Mastoidectomy Performance: A Meta-analysis.

    Science.gov (United States)

    Lui, Justin T; Hoy, Monica Y

    2017-06-01

Background The increasing prevalence of virtual reality simulation in temporal bone surgery warrants an investigation to assess training effectiveness. Objectives To determine if temporal bone simulator use improves mastoidectomy performance. Data Sources Ovid Medline, Embase, and PubMed databases were systematically searched per the PRISMA guidelines. Review Methods Inclusion criteria were peer-reviewed publications that utilized quantitative data of mastoidectomy performance following the use of a temporal bone simulator. The search was restricted to human studies published in English. Studies were excluded if they were in non-peer-reviewed format, were descriptive in nature, or failed to provide surgical performance outcomes. Meta-analysis calculations were then performed. Results A meta-analysis based on the random-effects model revealed an improvement in overall mastoidectomy performance following training on the temporal bone simulator. A standardized mean difference of 0.87 (95% CI, 0.38-1.35) was generated in the setting of a heterogeneous study population (I2 = 64.3%). Across the included virtual reality simulation temporal bone surgery studies, meta-analysis calculations demonstrate an improvement in trainee mastoidectomy performance with virtual simulation training.
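For reference, the standardized mean difference reported in this abstract is computed from group summary statistics roughly as follows. A minimal Python sketch; the means, SDs and group sizes below are invented for illustration, not taken from the pooled studies:

```python
import math

def smd_with_ci(m1, s1, n1, m2, s2, n2):
    """Cohen's d (standardized mean difference) with an approximate 95% CI."""
    # Pooled standard deviation of the two groups
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    # Large-sample standard error of d
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical simulator-trained vs. control mastoidectomy scores
d, (lo, hi) = smd_with_ci(m1=78.0, s1=10.0, n1=20, m2=70.0, s2=10.0, n2=20)
print(round(d, 2), round(lo, 2), round(hi, 2))   # → 0.8 0.16 1.44
```

A random-effects meta-analysis then pools such per-study SMDs weighted by their variances; this sketch covers only the single-study statistic.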

  11. Utilizing Virtual Reality to Understand Athletic Performance and Underlying Sensorimotor Processing

    Directory of Open Access Journals (Sweden)

    Toshitaka Kimura

    2018-02-01

Full Text Available In behavioral sports sciences, knowledge of athletic performance and underlying sensorimotor processing remains limited, because most data is obtained in the laboratory. In laboratory experiments we can strictly control the measurement conditions, but the action we can target may be limited and differ from actual sporting action. Thus, the obtained data is potentially unrealistic. We propose using virtual reality (VR) technology to compensate for the lack of actual reality. We have developed a head mounted display (HMD)-based VR system for application to baseball batting where the user can experience hitting a pitch in a virtual baseball stadium. The batter and the bat movements are measured using nine-axis inertial sensors attached to various parts of the body and bat, and they are represented by a virtual avatar in real time. The pitched balls are depicted by computer graphics based on previously recorded ball trajectories and are thrown in time with the motion of a pitcher avatar based on simultaneously recorded motion capture data. The ball bounces depending on its interaction with the bat. In a preliminary measurement, where the VR system was combined with measurement equipment, we found some differences between the behavioral and physiological data (i.e., the body movements and respiration of experts and beginners and between the types of pitches during virtual batting. This VR system with a sufficiently real visual experience will provide novel findings as regards athletic performance that were formerly hard to obtain and allow us to elucidate their sensorimotor processing in detail.

  12. Effects of physical randomness training on virtual and laboratory golf putting performance in novices.

    Science.gov (United States)

    Pataky, T C; Lamb, P F

    2018-06-01

External randomness exists in all sports but is perhaps most obvious in golf putting, where robotic putters sink only 80% of 5 m putts due to unpredictable ball-green dynamics. The purpose of this study was to test whether physical randomness training can improve putting performance in novices. A virtual random-physics golf-putting game was developed based on controlled ball-roll data. Thirty-two subjects were assigned a unique randomness gain (RG) ranging from 0.1 to 2.0-times real-world randomness. Putter face kinematics were measured in 5 m laboratory putts before and after five days of virtual training. Performance was quantified using putt success rate and "miss-adjustment correlation" (MAC), the correlation between left-right miss magnitude and subsequent right-left kinematic adjustments. Results showed no RG-success correlation (r = -0.066, p = 0.719) but mildly stronger correlations with MAC for face angle (r = -0.168, p = 0.358) and clubhead path (r = -0.302, p = 0.093). The strongest RG-MAC correlation was observed during virtual training (r = -0.692), with only limited transfer to laboratory golf putting kinematics. Adaptation to external physical randomness during virtual training may therefore help golfers adapt to external randomness in real-world environments.
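The "miss-adjustment correlation" described here is an ordinary Pearson correlation between miss magnitudes and the subsequent corrective adjustments. A toy sketch with invented putt data (a strongly negative r means misses are followed by opposite-direction corrections):

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: left-right miss magnitude (cm) on putt i, and the
# right-left face-angle adjustment (deg) made on putt i+1
misses      = [4.0, -2.0, 1.0, -3.0, 2.5]
adjustments = [-1.2, 0.7, -0.2, 1.0, -0.8]
mac = pearson_r(misses, adjustments)
print(round(mac, 2))   # → -1.0
```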

  13. Design and Performance of the Virtualization Platform for Offline computing on the ATLAS TDAQ Farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Brasolin, F; Contescu, C; Di Girolamo, A; Lee, C J; Pozo Astigarraga, M E; Scannicchio, D A; Twomey, M S; Zaytsev, A

    2013-01-01

With the LHC collider at CERN currently going through the period of Long Shutdown 1 (LS1) there is a remarkable opportunity to use the computing resources of the large trigger farms of the experiments for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of all the stages of the Sim@P1 project, dedicated to the design and deployment of a virtualized platform running on the ATLAS TDAQ computing resources and to its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to avoid interference with TDAQ usage of the farm and to guarantee the security and the usability of the ATLAS private network; Openstack has been chosen to provide a cloud management layer. The approaches to organizing support for the sustained operation of...

  14. Design and Performance of the Virtualization Platform for Offline computing on the ATLAS TDAQ Farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Brasolin, F; Contescu, C; Di Girolamo, A; Lee, C J; Pozo Astigarraga, M E; Scannicchio, D A; Twomey, M S; Zaytsev, A

    2014-01-01

With the LHC collider at CERN currently going through the period of Long Shutdown 1 (LS1) there is a remarkable opportunity to use the computing resources of the large trigger farms of the experiments for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of all the stages of the Sim@P1 project, dedicated to the design and deployment of a virtualized platform running on the ATLAS TDAQ computing resources and to its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to avoid interference with TDAQ usage of the farm and to guarantee the security and the usability of the ATLAS private network; Openstack has been chosen to provide a cloud management layer. The approaches to organizing support for the sustained operation of...

  15. Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm

    International Nuclear Information System (INIS)

    Ballestrero, S; Lee, C J; Batraneanu, S M; Scannicchio, D A; Brasolin, F; Contescu, C; Girolamo, A Di; Astigarraga, M E Pozo; Twomey, M S; Zaytsev, A

    2014-01-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. Openstack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.

  16. Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm

    Science.gov (United States)

    Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Twomey, M. S.; Zaytsev, A.

    2014-06-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. Openstack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.

  17. PROCESSING OF SOFT MAGNETIC MATERIALS BY POWDER METALLURGY AND ANALYSIS OF THEIR PERFORMANCE IN ELECTRICAL MACHINES

    Directory of Open Access Journals (Sweden)

    W. H. D. Luna

    2017-12-01

Full Text Available This article presents the use of finite elements to analyze the performance of electric machines whose rotor and stator are built from different soft magnetic materials, in order to verify the performance of electric machines produced by powder metallurgy. Traditionally, the cores of electric machines are built from rolled steel plates; the cores developed in this work are instead obtained by an alternative process known as powder metallurgy, in which powders of soft magnetic materials are compacted and sintered. The properties of interest (magnetic, electric and mechanical properties) were analyzed and introduced into the software database. The topology of the rotor used was that of a 400 W three-phase synchronous motor manufactured by WEG Motors. The results show the feasibility of replacing the steel sheets of electric machines with solid blocks obtained by the powder metallurgy process, with only 0.37% efficiency loss. In addition, the powder metallurgy process reduces the use of raw materials and the energy consumption per kg of raw material processed.

  18. Implementation of Total Productive Maintenance (TPM to Improve Sheeter Machine Performance

    Directory of Open Access Journals (Sweden)

    Candra Nofri Eka

    2017-01-01

Full Text Available The purpose of this paper is to evaluate a TPM implementation, as a case study of the sheeter machine in cut-size line 5 of the finishing department, PT RAPP, Indonesia. The research methodology collected Overall Equipment Effectiveness (OEE) data for the sheeter machine and computed its scores. Then, an OEE big-losses analysis, statistical analysis using SPSS 20, and an evaluation of the focused-maintenance pillar of TPM were performed. The data covered the sheeter machine's production over 10 months (January-October 2016). The analysis yielded an average OEE score of 82.75%. This score is still below the world-class OEE level (85%) and the company target (90%). The big-losses analysis identified reduced-speed losses as the most significant contributor, accounting for 44.79% of total losses during the research period. These losses were largely due to operators decreasing the machine's production speed with the intention of improving the quality of the resulting products. The statistical analysis of OEE scores found that breakdown losses and reduced-speed losses significantly affected OEE. The implementation of TPM focused maintenance in this case study still needs improvement, as unexpected losses occurred during the research period.
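The OEE score at the heart of this evaluation is the product of availability, performance and quality rates. A minimal sketch with invented shift numbers (not the study's data):

```python
def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    """Overall Equipment Effectiveness as Availability x Performance x Quality."""
    run_time = planned_time - downtime
    availability = run_time / planned_time
    # Performance compares actual output to the ideal output for the run time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality, availability, performance, quality

# Hypothetical shift: 480 min planned, 40 min down, 0.5 min ideal cycle time,
# 800 sheets produced of which 784 are good
score, a, p, q = oee(480, 40, 0.5, 800, 784)
print(f"OEE = {score:.1%} (A={a:.1%}, P={p:.1%}, Q={q:.1%})")
```

Reduced-speed losses, the dominant loss in this study, show up in the performance factor: running below the ideal cycle time drags P, and hence OEE, down.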

  19. PGHPF – An Optimizing High Performance Fortran Compiler for Distributed Memory Machines

    Directory of Open Access Journals (Sweden)

    Zeki Bozkus

    1997-01-01

    Full Text Available High Performance Fortran (HPF is the first widely supported, efficient, and portable parallel programming language for shared and distributed memory systems. HPF is realized through a set of directive-based extensions to Fortran 90. It enables application developers and Fortran end-users to write compact, portable, and efficient software that will compile and execute on workstations, shared memory servers, clusters, traditional supercomputers, or massively parallel processors. This article describes a production-quality HPF compiler for a set of parallel machines. Compilation techniques such as data and computation distribution, communication generation, run-time support, and optimization issues are elaborated as the basis for an HPF compiler implementation on distributed memory machines. The performance of this compiler on benchmark programs demonstrates that high efficiency can be achieved executing HPF code on parallel architectures.

  20. ASSESSMENT OF PERFORMANCES OF VARIOUS MACHINE LEARNING ALGORITHMS DURING AUTOMATED EVALUATION OF DESCRIPTIVE ANSWERS

    Directory of Open Access Journals (Sweden)

    C. Sunil Kumar

    2014-07-01

Full Text Available Automation of descriptive-answer evaluation is the need of the hour because of the huge increase in the number of students enrolling each year in educational institutions and the limited staff available to spare time for evaluations. In this paper, we use a machine learning workbench called LightSIDE to accomplish automatic evaluation and scoring of descriptive answers. We attempted to identify the best supervised machine learning algorithm for a limited-training-set scenario. We evaluated the performance of Bayes, SVM, Logistic Regression, Random Forest, Decision Stump and Decision Tree algorithms. We confirmed SVM as the best-performing algorithm based on quantitative measurements of accuracy, kappa, training speed and prediction accuracy on the supplied test set.
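Among the quantitative measurements mentioned, kappa (Cohen's kappa) corrects raw agreement between the human grades and the classifier's predictions for chance agreement. A small self-contained sketch with made-up labels; LightSIDE reports this metric itself, the code below just shows how it is defined:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two label sequences beyond chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each label's marginal frequencies
    expected = sum((ca[l] / n) * (cb[l] / n) for l in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

# Hypothetical grades: human grader vs. automated scorer on 10 answers
human = ["A", "A", "B", "B", "C", "A", "B", "C", "C", "A"]
auto  = ["A", "A", "B", "C", "C", "A", "B", "C", "B", "A"]
print(round(cohens_kappa(human, auto), 3))   # → 0.697
```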

  1. Missing depth cues in virtual reality limit performance and quality of three dimensional reaching movements.

    Science.gov (United States)

    Gerig, Nicolas; Mayo, Johnathan; Baur, Kilian; Wittmann, Frieder; Riener, Robert; Wolf, Peter

    2018-01-01

Goal-directed reaching for real-world objects by humans is enabled through visual depth cues. In virtual environments, the number and quality of available visual depth cues is limited, which may affect reaching performance and quality of reaching movements. We assessed three-dimensional reaching movements in five experimental groups, each with ten healthy volunteers. Three groups used a two-dimensional computer screen and two groups used a head-mounted display. The first screen group received the typically recreated visual depth cues, such as aerial and linear perspective, occlusion, shadows, and texture gradients. The second screen group received an abstract minimal rendering lacking those. The third screen group received the cues of the first screen group plus absolute depth cues enabled by the retinal image size of a known object, which were realized with visual renderings of the handheld device and of a ghost handheld at the target location. The two head-mounted display groups received the same virtually recreated visual depth cues as the second and the third screen group, respectively. Additionally, they could rely on stereopsis and on motion parallax due to head movements. All groups using the screen performed significantly worse than both groups using the head-mounted display in terms of completion time normalized by the straight-line distance to the target. Both groups using the head-mounted display achieved the optimal minimum in number of speed peaks and in hand path ratio, indicating that our subjects performed natural movements when using a head-mounted display. Virtually recreated visual depth cues had a minor impact on reaching performance. Only the screen group with rendered handhelds could outperform the other screen groups. Thus, if reaching performance in virtual environments is the main scope of a study, we suggest applying a head-mounted display. Otherwise, when two-dimensional screens are used, achievable performance is likely limited by the reduced depth
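Two of the movement-quality measures used here, the number of speed peaks and the hand path ratio, can be computed directly from a sampled trajectory. A toy sketch assuming uniform sampling; the study's actual processing pipeline is not described in this abstract:

```python
import math

def movement_metrics(points):
    """Speed-peak count and hand path ratio for a uniformly sampled 3D path."""
    # Speed profile ~ distance covered between consecutive samples
    speeds = [math.dist(p, q) for p, q in zip(points, points[1:])]
    # A speed peak is a strict local maximum of the speed profile
    peaks = sum(1 for i in range(1, len(speeds) - 1)
                if speeds[i - 1] < speeds[i] > speeds[i + 1])
    path_length = sum(speeds)
    straight_line = math.dist(points[0], points[-1])
    return peaks, path_length / straight_line  # ratio 1.0 = perfectly straight

# Toy reach along one axis with a mid-movement hesitation (two speed peaks)
traj = [(0, 0, 0), (1, 0, 0), (3, 0, 0), (4, 0, 0), (4.5, 0, 0),
        (6, 0, 0), (8, 0, 0), (9, 0, 0), (10, 0, 0)]
peaks, ratio = movement_metrics(traj)
print(peaks, round(ratio, 2))   # → 2 1.0
```

A natural, ballistic reach has a single speed peak and a path ratio near 1; hesitations and detours inflate both numbers.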

  2. Virtual Prototyping at CERN

    Science.gov (United States)

    Gennaro, Silvano De

The VENUS (Virtual Environment Navigation in the Underground Sites) project is probably the largest Virtual Reality application to engineering design in the world. VENUS is just over one year old and offers a fully immersive and stereoscopic "flythru" of the LHC pits for the proposed experiments, including the experimental area equipment and the surface models that are being prepared for a territorial impact study. VENUS' Virtual Prototypes are an ideal replacement for the wooden models traditionally built for past CERN machines, as they are generated directly from the EUCLID CAD files; therefore they are totally reliable, they can be updated in a matter of minutes, and they allow designers to explore them from the inside, at a one-to-one scale. Navigation can be performed on the computer screen, on a stereoscopic large projection screen, or in immersive conditions, with a helmet and 3D mouse. By using specialised collision-detection software, the computer can find optimal paths to lower each detector part into the pits and position it at its destination, letting us visualize the whole assembly process. During construction, these paths can be fed to a robot controller, which can operate the bridge cranes and build the LHC almost without human intervention. VENUS is currently developing a multiplatform VR browser that will let the whole HEP community access LHC's Virtual Prototypes over the web. Many interesting things took place during the conference on Virtual Reality. For more information please refer to the Virtual Reality section.

  3. Prospective performance evaluation of selected common virtual screening tools. Case study: Cyclooxygenase (COX) 1 and 2.

    Science.gov (United States)

    Kaserer, Teresa; Temml, Veronika; Kutil, Zsofia; Vanek, Tomas; Landa, Premysl; Schuster, Daniela

    2015-01-01

Computational methods can be applied in drug development for the identification of novel lead candidates, but also for the prediction of pharmacokinetic properties and potential adverse effects, thereby helping to prioritize and identify the most promising compounds. In principle, several techniques are available for this purpose; however, which one is the most suitable for a specific research objective still requires further investigation. Within this study, the performance of several programs, representing common virtual screening methods, was compared in a prospective manner. First, we selected top-ranked virtual screening hits from the three methods pharmacophore modeling, shape-based modeling, and docking. For comparison, these hits were then additionally predicted by external pharmacophore- and 2D similarity-based bioactivity profiling tools. Subsequently, the biological activities of the selected hits were assessed in vitro, which allowed for evaluating and comparing the prospective performance of the applied tools. Although all methods performed well, considerable differences were observed concerning hit rates, true positive and true negative hits, and hitlist composition. Our results suggest that a rational selection of the applied method represents a powerful strategy to maximize the success of a research project, tightly linked to its aims. We employed cyclooxygenase as an application example; however, the focus of this study lay in highlighting the differences in the virtual screening tools' performances and not in the identification of novel COX inhibitors. Copyright © 2015 The Authors. Published by Elsevier Masson SAS. All rights reserved.

  4. Application of support vector machine to three-dimensional shape-based virtual screening using comprehensive three-dimensional molecular shape overlay with known inhibitors.

    Science.gov (United States)

    Sato, Tomohiro; Yuki, Hitomi; Takaya, Daisuke; Sasaki, Shunta; Tanaka, Akiko; Honma, Teruki

    2012-04-23

In this study, machine learning using a support vector machine was combined with three-dimensional (3D) molecular shape overlay to improve the screening efficiency. Since 3D molecular shape overlay does not use fingerprints or descriptors to compare two compounds, unlike 2D similarity methods, the application of machine learning to a 3D shape-based method has not been extensively investigated. The 3D similarity profile of a compound is defined as the array of 3D shape similarities with multiple known active compounds of the target protein and is used as the explanatory variable of the support vector machine. As the measures of 3D shape similarity for our new prediction models, the prediction performances of the 3D shape similarity metrics implemented in ROCS, such as ShapeTanimoto and ScaledColor, were validated using the known inhibitors of 15 target proteins derived from the ChEMBL database. The learning models based on the 3D similarity profiles stably outperformed the original ROCS when more than 10 known inhibitors were available as queries. The results demonstrated the advantages of combining machine learning with the 3D similarity profile to process the 3D shape information of plural active compounds.
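The central idea, a compound's similarities to a panel of known actives serving as its feature vector, can be sketched in a few lines. A hypothetical Python sketch that uses Tanimoto similarity on toy binary fingerprints purely as a stand-in for the 3D shape similarity scores (the paper's method uses ROCS shape overlays, not bit vectors):

```python
def tanimoto(a, b):
    """Tanimoto similarity of two binary fingerprints (here a stand-in for a
    3D shape similarity such as ROCS ShapeTanimoto)."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 0.0

def similarity_profile(compound, known_actives):
    """The similarity profile: one similarity value per known active compound.
    This vector is what gets fed to the SVM as explanatory variables."""
    return [tanimoto(compound, active) for active in known_actives]

# Toy fingerprints for three known inhibitors and one query compound
actives = [[1, 1, 0, 1, 0, 0], [1, 0, 1, 1, 0, 0], [0, 1, 1, 1, 0, 1]]
query   =  [1, 1, 0, 1, 0, 1]
profile = similarity_profile(query, actives)
print([round(s, 2) for s in profile])   # → [0.75, 0.4, 0.6]
```

Training then reduces to fitting any standard classifier on these profile vectors, which is why more known inhibitors (a longer profile) helped in the study.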

  5. Self-attitude awareness training: An aid to effective performance in microgravity and virtual environments

    Science.gov (United States)

    Parker, Donald E.; Harm, D. L.; Florer, Faith L.

    1993-01-01

    This paper describes ongoing development of training procedures to enhance self-attitude awareness in astronaut trainees. The procedures are based on observations regarding self-attitude (perceived self-orientation and self-motion) reported by astronauts. Self-attitude awareness training is implemented on a personal computer system and consists of lesson stacks programmed using Hypertalk with Macromind Director movie imports. Training evaluation will be accomplished by an active search task using the virtual Spacelab environment produced by the Device for Orientation and Motion Environments Preflight Adaptation Trainer (DOME-PAT) as well as by assessment of astronauts' performance and sense of well-being during orbital flight. The general purpose of self-attitude awareness training is to use as efficiently as possible the limited DOME-PAT training time available to astronauts prior to a space mission. We suggest that similar training procedures may enhance the performance of virtual environment operators.

  6. Development, Implementation, and Assessment of General Chemistry Lab Experiments Performed in the Virtual World of Second Life

    Science.gov (United States)

    Winkelmann, Kurt; Keeney-Kennicutt, Wendy; Fowler, Debra; Macik, Maria

    2017-01-01

    Virtual worlds are a potential medium for teaching college-level chemistry laboratory courses. To determine the feasibility of conducting chemistry experiments in such an environment, undergraduate students performed two experiments in the immersive virtual world of Second Life (SL) as part of their regular General Chemistry 2 laboratory course.…

  7. The assessment of Attention Deficit Hyperactivity Disorder in children using continuous performance tasks in virtual environments

    OpenAIRE

    Gutiérrez Maldonado, José; Letosa Porta, A.; Rus Calafell, M.; Peñaloza Salazar, C.

    2009-01-01

    The assessment of Attention-Deficit/Hyperactivity Disorder (ADHD) involves the use of different instruments, one of the most frequently used being the Continuous Performance Test (CPT). Virtual reality allows stimuli to be presented with a high level of control. In addition, it facilitates the presentation of distracters that closely resemble elements found in the real world by placing them in a similar context. Thus, it is possible ...

  8. Rapid and Accurate Machine Learning Recognition of High Performing Metal Organic Frameworks for CO2 Capture.

    Science.gov (United States)

    Fernandez, Michael; Boyd, Peter G; Daff, Thomas D; Aghaji, Mohammad Zein; Woo, Tom K

    2014-09-04

    In this work, we have developed quantitative structure-property relationship (QSPR) models using advanced machine learning algorithms that can rapidly and accurately recognize high-performing metal organic framework (MOF) materials for CO2 capture. More specifically, QSPR classifiers have been developed that can, in a fraction of a second, identify candidate MOFs with enhanced CO2 adsorption capacity (>1 mmol/g at 0.15 bar and >4 mmol/g at 1 bar). The models were tested on a large set of 292 050 MOFs that were not part of the training set. The QSPR classifier could recover 945 of the top 1000 MOFs in the test set while flagging only 10% of the whole library for compute-intensive screening. Thus, using the machine learning classifiers as part of a high-throughput screening protocol would result in an order of magnitude reduction in compute time and allow intractably large structure libraries and search spaces to be screened.
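
    A screening protocol of this kind (train a classifier on a labelled subset, then flag only the most promising fraction of the library for expensive simulation) can be sketched as below. All descriptors and the structure-property rule are synthetic placeholders, not the paper's actual QSPR features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Hypothetical geometric/chemical descriptors for a MOF library and a toy
# structure-property relationship: uptake grows with the first two descriptors
# (think void fraction and surface area); all values are synthetic.
n_lib = 5000
X = rng.uniform(0, 1, size=(n_lib, 4))
uptake = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 0.3, n_lib)
label = (uptake > np.quantile(uptake, 0.8)).astype(int)  # "high performing"

# Train on a small labelled subset, as if only it had been simulated in full.
train = rng.choice(n_lib, size=500, replace=False)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[train], label[train])

# Flag the 10% of the library the classifier deems most promising ...
proba = clf.predict_proba(X)[:, 1]
flagged = np.argsort(proba)[-n_lib // 10:]

# ... and check how many of the overall top-100 materials it recovered.
top100 = np.argsort(uptake)[-100:]
recall = len(set(flagged) & set(top100)) / 100
```

    The compute saving comes from running the expensive simulation only on the flagged 10% instead of the whole library.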

  9. Performance of palm oil as a biobased machining lubricant when drilling Inconel 718

    Directory of Open Access Journals (Sweden)

    Abd Rahim Erween

    2017-01-01

    Full Text Available Metalworking fluid acts as a cooling and lubrication agent at the cutting zone in the machining process. However, conventional metalworking fluids such as mineral oil have a negative impact on humans and the environment. Therefore, manufacturers tend to substitute mineral oil with bio-based oils such as vegetable and synthetic oils. In this paper, a drilling experiment was carried out to evaluate the efficiency of palm oil and compare it with the minimal quantity lubrication technique using synthetic ester, flood coolant and air blow with respect to cutting temperature, cutting force, torque and tool life. The experimental results showed that the application of palm oil under minimal quantity lubrication conditions as the cutting fluid was the more efficient process, as it improved the machining performances.

  10. Transducer-actuator systems and methods for performing on-machine measurements and automatic part alignment

    Science.gov (United States)

    Barkman, William E.; Dow, Thomas A.; Garrard, Kenneth P.; Marston, Zachary

    2016-07-12

    Systems and methods for performing on-machine measurements and automatic part alignment, including: a measurement component operable for determining the position of a part on a machine; and an actuation component operable for adjusting the position of the part by contacting the part with a predetermined force responsive to the determined position of the part. The measurement component consists of a transducer. The actuation component consists of a linear actuator. Optionally, the measurement component and the actuation component consist of a single linear actuator operable for contacting the part with a first lighter force for determining the position of the part and with a second harder force for adjusting the position of the part. The actuation component is utilized in a substantially horizontal configuration and the effects of gravitational drop of the part are accounted for in the force applied and the timing of the contact.

  11. Task performance in virtual environments used for cognitive rehabilitation after traumatic brain injury.

    Science.gov (United States)

    Christiansen, C; Abreu, B; Ottenbacher, K; Huffman, K; Masel, B; Culpepper, R

    1998-08-01

    This report describes a reliability study using a prototype computer-simulated virtual environment to assess basic daily living skills in a sample of persons with traumatic brain injury (TBI). The benefits of using virtual reality in training for situations where safety is a factor have been established in defense and industry, but have not been demonstrated in rehabilitation. Participants were thirty subjects with TBI receiving comprehensive rehabilitation services at a residential facility. An immersive virtual kitchen was developed in which a meal preparation task involving multiple steps could be performed. The prototype was tested by having subjects complete the task twice within 7 days. The stability of performance was estimated using intraclass correlation coefficients (ICCs). The ICC value for total performance based on all steps involved in the meal preparation task was .73. When three items with low variance were removed, the ICC improved to .81. Little evidence of vestibular optical side-effects was noted in the subjects tested. Adequate initial reliability exists to continue development of the environment as an assessment and training prototype for persons with brain injury.
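
    Test-retest stability of this kind is quantified with an intraclass correlation coefficient. The sketch below implements one common form, ICC(2,1) (two-way random effects, absolute agreement, single measure); the subject scores are simulated, not the study's data.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `ratings` is an (n subjects x k sessions) array."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    # Mean squares from the two-way ANOVA decomposition.
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # sessions
    sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical task scores for 30 subjects tested twice within 7 days.
rng = np.random.default_rng(2)
true_ability = rng.normal(50, 10, size=30)
scores = np.column_stack([true_ability + rng.normal(0, 5, 30),
                          true_ability + rng.normal(0, 5, 30)])
icc = icc_2_1(scores)
```

    With a between-subject spread of 10 and session noise of 5, the expected ICC is around 0.8, in the same range as the .73-.81 reported above.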

  12. Impact of Health Care Employees’ Job Satisfaction on Organizational Performance: Support Vector Machine Approach

    Directory of Open Access Journals (Sweden)

    CEMIL KUZEY

    2018-01-01

    Full Text Available This study is undertaken to search for key factors that contribute to job satisfaction among health care workers, and also to determine the impact of these underlying dimensions of employee satisfaction on organizational performance. Exploratory Factor Analysis (EFA) is applied to initially uncover the key factors; then, in the next stage of analysis, a popular data mining technique, the Support Vector Machine (SVM), is employed on a sample of 249 to determine the impact of job satisfaction factors on organizational performance. According to the proposed model, the main factors are revealed to be management’s attitude, pay/reward, job security and colleagues.

  13. Secure Virtualization Environment Based on Advanced Memory Introspection

    Directory of Open Access Journals (Sweden)

    Shuhui Zhang

    2018-01-01

    Full Text Available Most existing virtual machine introspection (VMI) technologies analyze the status of a target virtual machine under the assumption that the operating system (OS) version and kernel structure information are known at the hypervisor level. In this paper, we propose a model of virtual machine (VM) security monitoring based on memory introspection. Using a hardware-based approach to acquire the physical memory of the host machine in real time, the security of the host machine and VM can be diagnosed. Furthermore, a novel approach for VM memory forensics based on the virtual machine control structure (VMCS) is put forward. By analyzing the memory of the host machine, the running VMs can be detected and their high-level semantic information can be reconstructed. Then, malicious activity in the VMs can be identified in a timely manner. Moreover, by mutually analyzing the memory content of the host machine and VMs, VM escape may be detected. Compared with previous memory introspection technologies, our solution can automatically reconstruct the comprehensive running state of a target VM without any prior knowledge and is strongly resistant to attacks, with high reliability. We developed a prototype system called VEDefender. Experimental results indicate that our system can handle the VMs of mainstream Linux and Windows OS versions with high efficiency and does not influence the performance of the host machine and VMs.

  14. Empirical Analysis of Server Consolidation and Desktop Virtualization in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Bao Rong Chang

    2013-01-01

    Full Text Available The transition from physical servers to a virtual server infrastructure (VSI) and from desktop devices to a virtual desktop infrastructure (VDI) raises the crucial problems of server consolidation, virtualization performance, virtual machine density, total cost of ownership (TCO), and return on investment (ROI). Besides, appropriately choosing a hypervisor for the desired server/desktop virtualization is really challenging, because the trade-off between virtualization performance and cost is a hard decision to make in the cloud. This paper introduces five hypervisors to establish the virtual environment and then gives a careful assessment based on the C/P ratio, which is derived from a composite index, consolidation ratio, virtual machine density, TCO, and ROI. As a result, even though the ESX server obtains the highest ROI and lowest TCO in server virtualization and Hyper-V R2 gains the best performance of virtual machine management, both of them cost too much. Instead, the best choice is Proxmox Virtual Environment (Proxmox VE) because it not only greatly reduces the initial investment needed to own a virtual server/desktop infrastructure, but also obtains the lowest C/P ratio.
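
    The C/P (cost/performance) ranking can be illustrated with a toy calculation; the cost and composite performance figures below are invented for illustration and are not the paper's measurements.

```python
# Hypothetical figures: total cost of ownership (USD) and a composite
# performance index aggregating consolidation ratio, VM density, TCO and ROI
# (higher is better).  The real study derives its own index from benchmarks.
hypervisors = {
    "ESX":        {"cost": 12000, "perf": 90},
    "Hyper-V R2": {"cost": 10000, "perf": 95},
    "Proxmox VE": {"cost": 1500,  "perf": 80},
}

# Cost/performance ratio: dollars paid per unit of composite performance.
cp_ratio = {name: h["cost"] / h["perf"] for name, h in hypervisors.items()}

# The recommended hypervisor is the one with the lowest C/P ratio.
best = min(cp_ratio, key=cp_ratio.get)
```

    With these illustrative numbers the cheap hypervisor wins on C/P despite a lower raw performance index, which mirrors the paper's conclusion.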

  15. A Navier-Stokes Chimera Code on the Connection Machine CM-5: Design and Performance

    Science.gov (United States)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1994-01-01

    in-processor. Form explicit terms in y, then transpose so z-lines are in processor. Form explicit terms in z, then solve linear systems in the z-direction. Transpose to the y-direction, then solve linear systems in the y-direction. Finally transpose to the x direction and solve linear systems in the x-direction. This strategy avoids inter-processor communication when differencing and solving linear systems, but requires a large amount of communication when doing the transposes. The transpose method is more efficient than the non-transpose strategy when dealing with scalar pentadiagonal or block tridiagonal systems. For handling geometrically complex problems the chimera strategy was adopted. For multiple zone cases we compute on each zone sequentially (using the whole parallel machine), then send the chimera interpolation data to a distributed data structure (array) laid out over the whole machine. This information transfer implies an irregular communication pattern, and is the second possible barrier to an efficient algorithm. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran. We make use of the Connection Machine Scientific Software Library (CMSSL) for the linear solver and array transpose operations.
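
    The transpose strategy can be sketched in serial form: solve independent tridiagonal line systems along the contiguous axis, transpose so the other direction's lines become contiguous, and solve again. This NumPy sketch is only a stand-in for the data-parallel CMF/CMSSL code, using the Thomas algorithm for each line.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve one tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    b, d = b.astype(float).copy(), d.astype(float).copy()
    for i in range(1, n):                      # forward elimination
        w = a[i - 1] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def solve_lines(rhs):
    """Solve an independent tridiagonal system along each contiguous line
    (the 'in-processor' direction); coefficients are illustrative."""
    n = rhs.shape[-1]
    a = np.full(n - 1, -1.0)
    c = np.full(n - 1, -1.0)
    b = np.full(n, 4.0)
    return np.stack([thomas(a, b, c, line) for line in rhs])

rng = np.random.default_rng(3)
rhs = rng.normal(size=(8, 8))

# x-sweep: lines already contiguous along x.
x_sweep = solve_lines(rhs)
# y-sweep: transpose so y-lines become contiguous, solve, transpose back.
y_sweep = solve_lines(x_sweep.T).T
```

    On the CM-5 the two `.T` operations are the expensive all-to-all communications; everything else is communication-free, which is the point of the method.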

  16. Virtual airway simulation to improve dexterity among novices performing fibreoptic intubation.

    Science.gov (United States)

    De Oliveira, G S; Glassenberg, R; Chang, R; Fitzgerald, P; McCarthy, R J

    2013-10-01

    We developed a virtual reality software application (iLarynx) using the built-in accelerometer of the iPhone® or iPad® (Apple Inc., Cupertino, CA, USA) that mimics hand movements for the performance of fibreoptic intubation skills. Twenty novice medical students were randomly assigned to virtual airway training with the iLarynx software or to no additional training. Eight of the 10 subjects in the standard training group had at least one failed (> 120 s) attempt, compared with two of the 10 participants in the iLarynx group (p = 0.01). There were a total of 24 failed attempts in the standard training group and four in the iLarynx group (p < 0.005). Cusum analysis demonstrated continued improvement in the iLarynx group, but not in the standard training group. Virtual airway simulation using freely available software on a smartphone/tablet device improves dexterity among novices performing upper airway endoscopy. © 2013 The Association of Anaesthetists of Great Britain and Ireland.

  17. A methodology for performing virtual measurements in a nuclear reactor system

    International Nuclear Information System (INIS)

    Ikonomopoulos, A.; Uhrig, R.E.; Tsoukalas, L.H.

    1992-01-01

    A novel methodology is presented for monitoring nonphysically measurable variables in an experimental nuclear reactor. It is based on the employment of artificial neural networks to generate fuzzy values. Neural networks map spatiotemporal information (in the form of time series) to algebraically defined membership functions. The entire process can be thought of as a virtual measurement. Through such virtual measurements the values of nondirectly monitored parameters with operational significance, e.g., transient-type, valve-position, or performance, can be determined. Generating membership functions is a crucial step in the development and practical utilization of fuzzy reasoning, a computational approach that offers the advantage of describing the state of the system in a condensed, linguistic form, convenient for monitoring, diagnostics, and control algorithms

  18. Applying Machine Learning and High Performance Computing to Water Quality Assessment and Prediction

    Directory of Open Access Journals (Sweden)

    Ruijian Zhang

    2017-12-01

    Full Text Available Water quality assessment and prediction is an increasingly important issue. Traditional methods either take a long time or can only perform assessments. In this research, by applying a machine learning algorithm to a long period of water-attribute data, we can generate a decision tree that predicts the next day's water quality in an easy and efficient way. The idea is to combine traditional methods with computer algorithms. Using machine learning algorithms, the assessment of water quality becomes far more efficient, and by generating the decision tree, the prediction is quite accurate. The drawback of machine learning modelling is that execution takes quite a long time, especially when we employ a more accurate but more time-consuming clustering algorithm. Therefore, we applied a high performance computing (HPC) system to deal with this problem. Up to now, the pilot experiments have achieved very promising preliminary results. The visualized water quality assessment and prediction obtained from this project will be published on an interactive website so that the public and environmental managers can use the information in their decision making.
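
    A decision-tree predictor of the sort described can be sketched as follows; the water attributes and the quality rule are synthetic placeholders for a real monitoring record.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)

# Synthetic daily water attributes and a toy quality rule, standing in for a
# long-period monitoring record (columns: pH, dissolved oxygen mg/L,
# turbidity NTU; thresholds are illustrative, not regulatory limits).
n_days = 400
X = np.column_stack([
    rng.uniform(6.0, 9.0, n_days),
    rng.uniform(2.0, 12.0, n_days),
    rng.uniform(0.0, 50.0, n_days),
])
good = (X[:, 0] > 6.5) & (X[:, 0] < 8.5) & (X[:, 1] > 5.0) & (X[:, 2] < 25.0)
y = good.astype(int)

# Fit a decision tree on the history and predict the "next day" sample.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X[:-1], y[:-1])
next_day = tree.predict(X[-1:])[0]
accuracy = tree.score(X[:-1], y[:-1])
```

    The fitted tree is also directly inspectable (via `sklearn.tree.export_text`), which is what makes the approach attractive for environmental managers.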

  19. Identification of human flap endonuclease 1 (FEN1) inhibitors using a machine learning based consensus virtual screening.

    Science.gov (United States)

    Deshmukh, Amit Laxmikant; Chandra, Sharat; Singh, Deependra Kumar; Siddiqi, Mohammad Imran; Banerjee, Dibyendu

    2017-07-25

    Human Flap endonuclease1 (FEN1) is an enzyme that is indispensable for DNA replication and repair processes and inhibition of its Flap cleavage activity results in increased cellular sensitivity to DNA damaging agents (cisplatin, temozolomide, MMS, etc.), with the potential to improve cancer prognosis. Reports of the high expression levels of FEN1 in several cancer cells support the idea that FEN1 inhibitors may target cancer cells with minimum side effects to normal cells. In this study, we used large publicly available, high-throughput screening data of small molecule compounds targeted against FEN1. Two machine learning algorithms, Support Vector Machine (SVM) and Random Forest (RF), were utilized to generate four classification models from huge PubChem bioassay data containing probable FEN1 inhibitors and non-inhibitors. We also investigated the influence of randomly selected Zinc-database compounds as negative data on the outcome of classification modelling. The results show that the SVM model with inactive compounds was superior to RF with Matthews's correlation coefficient (MCC) of 0.67 for the test set. A Maybridge database containing approximately 53 000 compounds was screened and top ranking 5 compounds were selected for enzyme and cell-based in vitro screening. The compound JFD00950 was identified as a novel FEN1 inhibitor with in vitro inhibition of flap cleavage activity as well as cytotoxic activity against a colon cancer cell line, DLD-1.
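
    A machine learning based consensus of SVM and RF classifiers can be sketched like this: each model votes, only compounds both models call active are kept, and MCC is the evaluation metric. The "fingerprints" below are random placeholders, not real bioassay descriptors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.svm import SVC

rng = np.random.default_rng(5)

# Synthetic descriptor vectors for inhibitors vs. non-inhibitors.
X_act = rng.normal(1.0, 1.0, size=(100, 16))
X_inact = rng.normal(-1.0, 1.0, size=(100, 16))
X = np.vstack([X_act, X_inact])
y = np.array([1] * 100 + [0] * 100)

idx = rng.permutation(200)
train, test = idx[:150], idx[150:]

svm = SVC().fit(X[train], y[train])
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[train], y[train])

# Consensus virtual screening: keep only compounds both models call active.
consensus = svm.predict(X[test]) & rf.predict(X[test])
mcc = matthews_corrcoef(y[test], consensus)
```

    Requiring agreement between the two models trades some recall for a lower false-positive rate, which is usually the right trade when only a handful of hits can go on to in vitro testing.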

  20. Modelling of the electromechanical conversion of machines ...

    African Journals Online (AJOL)

    These implemented models would constitute a module of candidate generators that could be coupled with a model of a wind turbine in order to study, within the framework of a virtual laboratory, the performance of wind-driven electricity-generation systems. This article presents the models of electrical machines ...

  1. Neuropsychological performance and integrated evaluation for disabled people using Virtual Reality: integrated VR profile.

    Science.gov (United States)

    Piccini, PierAntonio

    2002-01-01

    This chapter describes a Virtual Reality (VR) based innovative model of evaluation of the performance and potentiality of young mentally/psychically disabled subjects with learning difficulties. Using an immersive PC-based VR system, the study investigated the characteristics of 150 disabled subjects in the EU funded project "Horizon O.D.A.--Catania-1998--2000". The result is the definition of an individual neuropsychological "Integrated Profile", based on VR performance, that allows an objective functional benchmark between different subjects. This model can be used to investigate the possibility of job integration for mentally/psychically disabled subjects.

  2. Comparing the performance of different meta-heuristics for unweighted parallel machine scheduling

    Directory of Open Access Journals (Sweden)

    Adamu, Mumuni Osumah

    2015-08-01

    Full Text Available This article considers the due window scheduling problem of minimising the number of early and tardy jobs on identical parallel machines. This problem is known to be NP-complete, and thus finding an optimal solution is unlikely. Three meta-heuristics and their hybrids are proposed, and extensive computational experiments are conducted. The purpose of this paper is to compare the performance of these meta-heuristics and their hybrids and to determine the best among them. Detailed comparative tests have also been conducted to analyse the different heuristics, with the simulated annealing hybrid giving the best result.
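
    A simulated annealing heuristic for this due window problem (minimise the number of jobs completing outside their window on identical parallel machines) might look like the sketch below; the job data, cooling schedule and move operator are illustrative choices, not the paper's.

```python
import math
import random

random.seed(6)

M = 3  # identical parallel machines

# Hypothetical jobs: (processing time, window start e, window end d).
jobs = []
for _ in range(30):
    p = random.randint(1, 9)
    e = random.randint(p, 40)
    jobs.append((p, e, e + random.randint(2, 8)))

def cost(schedule):
    """Number of jobs completing outside their due window [e, d]."""
    bad = 0
    for machine in schedule:
        t = 0
        for j in machine:
            p, e, d = jobs[j]
            t += p
            bad += not (e <= t <= d)
    return bad

def neighbour(schedule):
    """Move one random job to a random position on a random machine."""
    new = [m[:] for m in schedule]
    src = random.choice([m for m in new if m])
    j = src.pop(random.randrange(len(src)))
    dst = random.choice(new)
    dst.insert(random.randrange(len(dst) + 1), j)
    return new

# Round-robin initial assignment, then anneal.
current = [list(range(m, len(jobs), M)) for m in range(M)]
cur_cost = cost(current)
best, best_cost = current, cur_cost
T = 5.0
for _ in range(4000):
    cand = neighbour(current)
    c = cost(cand)
    if c <= cur_cost or random.random() < math.exp(-(c - cur_cost) / T):
        current, cur_cost = cand, c
        if c < best_cost:
            best, best_cost = cand, c
    T *= 0.999  # geometric cooling
```

    A hybrid in the paper's sense would seed this loop with a constructive heuristic or interleave it with local search instead of starting from round-robin.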

  3. Performance test of the prototype-unit for J-PARC machine protection system

    International Nuclear Information System (INIS)

    Sakaki, Hironao; Nakamura, Naoki; Takahashi, Hiroki; Yoshikawa, Hiroshi

    2004-03-01

    In the High Intensity Proton Accelerator Project (J-PARC), a high-power proton beam is accelerated. If the beam is not stopped within a few microseconds, the high beam power causes fatal thermal-shock destruction on the surface of the accelerating structures. To avoid this thermal shock damage, we designed a high-speed machine protection system, and a prototype unit for the system was produced. This report shows the results of its performance test. (author)

  4. Performance Improvement of Servo Machine Low Speed Operation Using RBFN Disturbance Observer

    DEFF Research Database (Denmark)

    Lee, Kyo-Beum; Blaabjerg, Frede

    2004-01-01

    A new scheme to estimate the moment of inertia in the servo motor drive system at very low speed is proposed in this paper. The typical speed estimation scheme in most servo systems for low-speed operation is sensitive to the variation of machine parameters, especially the moment of inertia. To estimate the motor inertia value, an observer using Radial Basis Function Networks (RBFN) is applied. The effectiveness of the proposed inertia estimation method is verified by experiments. It is concluded that the speed control performance in the low-speed region is improved with the proposed...

  5. Introduction on performance analysis and profiling methodologies for KVM on ARM virtualization

    Science.gov (United States)

    Motakis, Antonios; Spyridakis, Alexander; Raho, Daniel

    2013-05-01

    The introduction of hardware virtualization extensions on ARM Cortex-A15 processors has enabled the implementation of full virtualization solutions for this architecture, such as KVM on ARM. This trend motivates the need to quantify and understand the performance impact that emerges from the application of this technology. In this work we start looking into some interesting performance metrics on KVM for ARM processors, which can provide us with useful insight that may lead to potential improvements in the future. This includes measurements such as interrupt latency and guest exit cost, performed on ARM Versatile Express and Samsung Exynos 5250 hardware platforms. Furthermore, we discuss additional methodologies that can provide us with a deeper understanding of the performance footprint of KVM in the future. We identify some of the most interesting approaches in this field, and perform a tentative analysis of how these may be implemented in the KVM on ARM port. These take into consideration hardware- and software-based counters for profiling, and issues related to the limitations of the simulators that are often used, such as the ARM Fast Models platform.

  6. Virtual tape measure for the operating microscope: system specifications and performance evaluation.

    Science.gov (United States)

    Kim, M Y; Drake, J M; Milgram, P

    2000-01-01

    The Virtual Tape Measure for the Operating Microscope (VTMOM) was created to assist surgeons in making accurate 3D measurements of anatomical structures seen in the surgical field under the operating microscope. The VTMOM employs augmented reality techniques by combining stereoscopic video images with stereoscopic computer graphics, and functions by relying on an operator's ability to align a 3D graphic pointer, which serves as the end-point of the virtual tape measure, with designated locations on the anatomical structure being measured. The VTMOM was evaluated for its baseline and application performances as well as its application efficacy. Baseline performance was determined by measuring the mean error (bias) and standard deviation of error (imprecision) in measurements of non-anatomical objects. Application performance was determined by comparing the error in measuring the dimensions of aneurysm models with and without the VTMOM. Application efficacy was determined by comparing the error in selecting the appropriate aneurysm clip size with and without the VTMOM. Baseline performance indicated a bias of 0.3 mm and an imprecision of 0.6 mm. Application bias was 3.8 mm and imprecision was 2.8 mm for aneurysm diameter. The VTMOM did not improve aneurysm clip size selection accuracy. The VTMOM is a potentially accurate tool for use under the operating microscope. However, its performance when measuring anatomical objects is highly dependent on complex visual features of the object surfaces. Copyright 2000 Wiley-Liss, Inc.

  7. CernVM Co-Pilot: a Framework for Orchestrating Virtual Machines Running Applications of LHC Experiments on the Cloud

    International Nuclear Information System (INIS)

    Harutyunyan, A; Sánchez, C Aguado; Blomer, J; Buncic, P

    2011-01-01

    CernVM Co-Pilot is a framework for the delivery and execution of the workload on remote computing resources. It consists of components which are developed to ease the integration of geographically distributed resources (such as commercial or academic computing clouds, or the machines of users participating in volunteer computing projects) into existing computing grid infrastructures. The Co-Pilot framework can also be used to build an ad-hoc computing infrastructure on top of distributed resources. In this paper we present the architecture of the Co-Pilot framework, describe how it is used to execute the jobs of the ALICE and ATLAS experiments, as well as to run the Monte-Carlo simulation application of CERN Theoretical Physics Group.

  8. Virtual microscopy: an evaluation of its validity and diagnostic performance in routine histologic diagnosis of skin tumors

    DEFF Research Database (Denmark)

    Nielsen, Patricia Switten; Lindebjerg, Jan; Rasmussen, Jan

    2010-01-01

    Digitization of histologic slides is associated with many advantages, and its use in routine diagnosis holds great promise. Nevertheless, few articles evaluate virtual microscopy in routine settings. This study is an evaluation of the validity and diagnostic performance of virtual microscopy in routine histologic diagnosis of skin tumors. Our aim is to investigate whether conventional microscopy of skin tumors can be replaced by virtual microscopy. Ninety-six skin tumors and skin-tumor-like changes were consecutively gathered over a 1-week period. Specimens were routinely processed, and digital slides were captured on Mirax Scan (Carl Zeiss MicroImaging, Göttingen, Germany). Four pathologists evaluated the 96 virtual slides and the associated 96 conventional slides twice with intermediate time intervals of at least 3 weeks. Virtual slides that caused difficulties were reevaluated to identify...

  9. High correlation between performance on a virtual-reality simulator and real-life cataract surgery

    DEFF Research Database (Denmark)

    Thomsen, Ann Sofia Skou; Smith, Phillip; Subhi, Yousif

    2017-01-01

    PURPOSE: To investigate the correlation in performance of cataract surgery between a virtual-reality simulator and real-life surgery using two objective assessment tools with evidence of validity. METHODS: Cataract surgeons with varying levels of experience were included in the study. All... antitremor training, forceps training, bimanual training, capsulorhexis and phaco divide and conquer. RESULTS: Eleven surgeons were enrolled. After a designated warm-up period, the proficiency-based test on the EyeSi simulator was strongly correlated to real-life performance measured by motion-tracking software of cataract surgical videos, with a Pearson correlation coefficient of -0.70 (p = 0.017). CONCLUSION: Performance on the EyeSi simulator is significantly and highly correlated to real-life surgical performance. However, it is recommended that performance assessments are made using multiple data...

  10. Advances in three-dimensional field analysis and evaluation of performance parameters of electrical machines

    Science.gov (United States)

    Sivasubramaniam, Kiruba

    This thesis makes advances in three dimensional finite element analysis of electrical machines and the quantification of their parameters and performance. The principal objectives of the thesis are: (1)the development of a stable and accurate method of nonlinear three-dimensional field computation and application to electrical machinery and devices; and (2)improvement in the accuracy of determination of performance parameters, particularly forces and torque computed from finite elements. Contributions are made in two general areas: a more efficient formulation for three dimensional finite element analysis which saves time and improves accuracy, and new post-processing techniques to calculate flux density values from a given finite element solution. A novel three-dimensional magnetostatic solution based on a modified scalar potential method is implemented. This method has significant advantages over the traditional total scalar, reduced scalar or vector potential methods. The new method is applied to a 3D geometry of an iron core inductor and a permanent magnet motor. The results obtained are compared with those obtained from traditional methods, in terms of accuracy and speed of computation. A technique which has been observed to improve force computation in two dimensional analysis using a local solution of Laplace's equation in the airgap of machines is investigated and a similar method is implemented in the three dimensional analysis of electromagnetic devices. A new integral formulation to improve force calculation from a smoother flux-density profile is also explored and implemented. Comparisons are made and conclusions drawn as to how much improvement is obtained and at what cost. This thesis also demonstrates the use of finite element analysis to analyze torque ripples due to rotor eccentricity in permanent magnet BLDC motors. A new method for analyzing torque harmonics based on data obtained from a time stepping finite element analysis of the machine is

  11. Electrical performance of a string of magnets representing a half-cell of the LHC machine

    International Nuclear Information System (INIS)

    Rodriguez-Mateos, F.; Coull, L.; Dahlerup-Petersen, K.; Hagedorn, D.; Krainz, G.; Rijllart, A.; McInturff, A.

    1996-01-01

    Tests have been carried out on a string of prototype superconducting magnets, consisting of one double-quadrupole and two double-dipoles forming the major part of a half-cell of the LHC machine. The magnets are protected individually by cold diodes and quench heaters. The electrical aspects of these tests are described here. The performance during quench of the protection diodes and the associated interconnections was studied. Tests determined the magnet quench performance in training and at different ramp-rates, and investigated the inter-magnet propagation of quenches. Current lead and inter-magnet contact resistances were controlled and the performance of the power converter and the dump switches assessed

  12. Electrical performance of a string of magnets representing a half-cell of the LHC machine

    International Nuclear Information System (INIS)

    Rodriguez-Mateos, F.; Coull, L.; Dahlerup-Petersen, K.; Hagedorn, D.; Krainz, G.; Rijllart, A.; McInturff, A.

    1995-01-01

    Tests have been carried out on a string of prototype superconducting magnets, consisting of one double-quadrupole and two double-dipoles forming the major part of a half-cell of the LHC machine. The magnets are protected individually by "cold diodes" and quench heaters. The electrical aspects of these tests are described here. The performance during quench of the protection diodes and the associated interconnections was studied. Tests determined the magnet quench performance in training and at different ramp-rates, and investigated the inter-magnet propagation of quenches. Current lead and inter-magnet contact resistances were controlled and the performance of the power converter and the dump switches assessed

  13. Predictive Power of Machine Learning for Optimizing Solar Water Heater Performance: The Potential Application of High-Throughput Screening

    Directory of Open Access Journals (Sweden)

    Hao Li

    2017-01-01

    Full Text Available Predicting the performance of a solar water heater (SWH) is challenging due to the complexity of the system. Fortunately, knowledge-based machine learning can provide a fast and precise prediction method for SWH performance. With the predictive power of machine learning models, we can further address a more challenging question: how to cost-effectively design a high-performance SWH? Here, we summarize our recent studies and propose a general framework for SWH design using a machine learning-based high-throughput screening (HTS) method. The design of a water-in-glass evacuated tube solar water heater (WGET-SWH) is selected as a case study to show the potential application of machine learning-based HTS to the design and optimization of solar energy systems.
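
    The machine learning based HTS framework can be sketched as: fit a surrogate model on measured designs, then screen a large batch of candidate designs cheaply and keep the most promising ones for physical testing. The design variables and the response function below are invented placeholders for real SWH test data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)

# Hypothetical design variables (tube length m, tube count, tilt angle deg)
# and a synthetic heat-collection response standing in for test-rig data.
def heat_collection(x):
    length, tubes, tilt = x[:, 0], x[:, 1], x[:, 2]
    return 2.0 * length + 0.5 * tubes - 0.01 * (tilt - 35) ** 2

X_known = np.column_stack([rng.uniform(1.0, 2.0, 150),
                           rng.uniform(10, 40, 150),
                           rng.uniform(0, 70, 150)])
y_known = heat_collection(X_known) + rng.normal(0, 0.5, 150)

# Train a cheap surrogate on the measured designs.
surrogate = GradientBoostingRegressor(random_state=0).fit(X_known, y_known)

# High-throughput screening: score many candidate designs with the surrogate
# and shortlist the best for (expensive) physical testing.
candidates = np.column_stack([rng.uniform(1.0, 2.0, 10000),
                              rng.uniform(10, 40, 10000),
                              rng.uniform(0, 70, 10000)])
scores = surrogate.predict(candidates)
shortlist = candidates[np.argsort(scores)[-10:]]
```

    The cost saving is the ratio of candidates screened to prototypes built: only the shortlist ever reaches the test rig.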

  14. Visuospatial and psychomotor aptitude predicts endovascular performance of inexperienced individuals on a virtual reality simulator.

    Science.gov (United States)

    Van Herzeele, Isabelle; O'Donoghue, Kevin G L; Aggarwal, Rajesh; Vermassen, Frank; Darzi, Ara; Cheshire, Nicholas J W

    2010-04-01

    This study evaluated virtual reality (VR) simulation for endovascular training of medical students to determine whether innate perceptual, visuospatial, and psychomotor aptitude (VSA) can predict the initial and plateau phases of technical endovascular skills acquisition. Twenty medical students received didactic and endovascular training on a commercially available VR simulator. Each student treated a series of 10 identical noncomplex renal artery stenoses endovascularly. The simulator recorded performance data instantly and objectively. An experienced interventionalist rated the performance at the initial and final sessions using generic (out of 40) and procedure-specific (out of 30) rating scales. VSA were tested with fine motor dexterity (FMD, Purdue Pegboard), psychomotor ability (minimally invasive virtual reality surgical trainer [MIST-VR]), image recall (Rey-Osterrieth), and organizational aptitude (map-planning). VSA scores were correlated with the assessment parameters of endovascular skills at commencement and completion of training. Medical students exhibited statistically significant learning curves from the initial to the plateau performance, for example in contrast usage (medians, 28 vs 17 mL). Endovascular performance correlated with fine motor dexterity as well as with image recall at the end of the training period. In addition to current recruitment strategies, VSA may be a useful tool for predictive validity studies.

  15. A cross docking pipeline for improving pose prediction and virtual screening performance

    Science.gov (United States)

    Kumar, Ashutosh; Zhang, Kam Y. J.

    2018-01-01

    Pose prediction and virtual screening performance of a molecular docking method depend on the choice of protein structures used for docking. Multiple structures for a target protein are often used to take into account the receptor flexibility and problems associated with a single receptor structure. However, the use of multiple receptor structures is computationally expensive when docking a large library of small molecules. Here, we propose a new cross-docking pipeline suitable to dock a large library of molecules while taking advantage of multiple target protein structures. Our method involves the selection of a suitable receptor for each ligand in a screening library utilizing ligand 3D shape similarity with crystallographic ligands. We have prospectively evaluated our method in D3R Grand Challenge 2 and demonstrated that our cross-docking pipeline can achieve similar or better performance than using either single or multiple-receptor structures. Moreover, our method displayed not only decent pose prediction performance but also better virtual screening performance over several other methods.
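The receptor-selection step of the pipeline reduces to an argmax over ligand-shape similarity scores. A minimal sketch, assuming precomputed Tanimoto-like similarities between each screening ligand and the crystallographic ligand of every receptor structure (the molecule and receptor names below are made up):

```python
def select_receptor(ligand_sims):
    """Pick, for one ligand, the receptor whose crystallographic ligand
    is most shape-similar (higher score = more similar)."""
    return max(ligand_sims, key=ligand_sims.get)

def build_docking_plan(shape_similarity):
    """shape_similarity[ligand][receptor] -> similarity score in [0, 1].
    Returns one receptor per screening ligand for the cross-docking run."""
    return {lig: select_receptor(sims) for lig, sims in shape_similarity.items()}

# hypothetical precomputed 3D shape-similarity scores
sims = {
    "mol_1": {"receptor_A": 0.42, "receptor_B": 0.71, "receptor_C": 0.15},
    "mol_2": {"receptor_A": 0.88, "receptor_B": 0.34, "receptor_C": 0.49},
}
plan = build_docking_plan(sims)
```

Each ligand is then docked only against its selected structure, which is what keeps the cost close to single-receptor docking while still exploiting the ensemble.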

  16. Performance of refrigerating machineries with new refrigerants; Performance des machines frigorifiques avec les nouveaux refrigerants

    Energy Technology Data Exchange (ETDEWEB)

    Bailly, A; Jurkowski, R [CIAT, 01 - Culoz (France)

    1998-12-31

    This paper reports on a comparative study of the thermal performances of different refrigerants like R-22, R-134a, R-404A and R-407C when used as possible substitutes for the HCFC22 refrigerant in a given refrigerating machinery equipped with compact high performance plate exchangers. Thermal performances are compared in identical operating conditions. The behaviour of the two-phase exchange coefficient is analyzed with respect to the different parameters. The composition of the mixture after one year of operation has been analyzed too and the influence of oil on the performances is studied. (J.S.)

  18. Smith machine counterbalance system affects measures of maximal bench press throw performance.

    Science.gov (United States)

    Vingren, Jakob L; Buddhadev, Harsh H; Hill, David W

    2011-07-01

    Equipment with counterbalance weight systems is commonly used for the assessment of performance in explosive resistance exercise movements, but it is not known whether such systems affect performance measures. The purpose of this study was to determine the effect of using a counterbalance weight system on measures of Smith machine bench press throw performance. Ten men and 14 women (mean ± SD: age, 25 ± 4 years; height, 173 ± 10 cm; weight, 77.7 ± 18.3 kg) completed maximal Smith machine bench press throws under 4 different conditions (2 × 2; counterbalance × load): with or without a counterbalance weight system and using 'light' or 'moderate' net barbell loads. Performance variables (peak force, peak velocity, and peak power) were measured using a linear accelerometer attached to the barbell. The counterbalance weight system resulted in significant reductions in peak velocity (light: -0.49 ± 0.10 m·s⁻¹; moderate: -0.33 ± 0.07 m·s⁻¹) and peak power (light: -220 ± 43 W; moderate: -143 ± 28 W) compared with no counterbalance system for both load conditions. Load condition did not affect the absolute or percentage reductions from the counterbalance weight system for any variable. In conclusion, the use of a counterbalance weight system reduces accelerometer-based performance measures for the bench press throw exercise at light and moderate loads. This reduction is likely because of an increase in the external resistance during the movement, which results in a discrepancy between the manually input and the actual value for external load. A counterbalance weight system should not be used when measuring performance in explosive resistance exercises with an accelerometer.
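The stated discrepancy between input and actual load matters because accelerometer-based metrics are derived, not measured directly: force is reconstructed as m·(a + g) and power as force times a velocity integrated from acceleration, so any error in the assumed load propagates into every metric. A generic sketch of that reconstruction (the sampling rate, load, and acceleration trace are hypothetical; the study's proprietary device is not modelled):

```python
G = 9.81  # gravitational acceleration, m/s^2

def velocity_series(accel, dt):
    """Integrate barbell acceleration (m/s^2) to velocity via the trapezoid rule."""
    v, out = 0.0, [0.0]
    for a0, a1 in zip(accel, accel[1:]):
        v += 0.5 * (a0 + a1) * dt
        out.append(v)
    return out

def peak_metrics(accel, dt, load_kg):
    """Peak force, velocity, and power as a linear accelerometer would report them.
    Force on the load = m * (a + g); power = force * velocity."""
    vel = velocity_series(accel, dt)
    force = [load_kg * (a + G) for a in accel]
    power = [f * v for f, v in zip(force, vel)]
    return max(force), max(vel), max(power)

# hypothetical concentric-phase acceleration samples at 100 Hz
accel = [0.0, 4.0, 8.0, 6.0, 2.0, 0.0, -2.0]
pf, pv, pp = peak_metrics(accel, dt=0.01, load_kg=40.0)
```

Because `load_kg` enters both force and power, feeding the nominal net load to the device while the counterbalance changes the true resistance shifts every derived value, which is the mechanism the abstract describes.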

  19. Nonplanar machines

    International Nuclear Information System (INIS)

    Ritson, D.

    1989-05-01

    This talk examines methods available to minimize, but never entirely eliminate, degradation of machine performance caused by terrain following. Breaking of planar machine symmetry for engineering convenience and/or monetary savings must be balanced against small performance degradation, and can only be decided on a case-by-case basis. 5 refs

  20. Positioning the endoscope in laparoscopic surgery by foot: Influential factors on surgeons' performance in virtual trainer.

    Science.gov (United States)

    Abdi, Elahe; Bouri, Mohamed; Burdet, Etienne; Himidan, Sharifa; Bleuler, Hannes

    2017-07-01

    We investigated how surgeons can use the foot to position a laparoscopic endoscope, a task that normally requires an extra assistant. Surgeons need to train in order to exploit the possibilities offered by this new technique and to manipulate the endoscope safely together with the hand movements. A realistic abdominal cavity was developed as a training simulator to investigate this multi-arm manipulation. In this virtual environment, the surgeon's hands are modelled as laparoscopic graspers while the viewpoint is controlled by the dominant foot. Twenty-three surgeons and medical students performed single-handed and bimanual manipulation in this environment. The results show that residents had superior performance compared to both medical students and more experienced surgeons, suggesting that residency is an ideal period for this training. Performing the single-handed task improved performance in the bimanual task, whereas the converse was not true.

  1. Performance Analysis of Virtual MIMO Relaying Schemes Based on Detect–Split–Forward

    KAUST Repository

    Al-Basit, Suhaib M.; Al-Ghadhban, Samir; Zummo, Salam A.

    2014-01-01

    © 2014, Springer Science+Business Media New York. Virtual multi-input multi-output (vMIMO) schemes in wireless communication systems improve coverage, throughput, capacity, and quality of service. In this paper, we propose three uplink vMIMO relaying schemes based on detect–split–forward (DSF). In addition, we investigate the effect of several physical parameters such as distance, modulation type, and number of relays. Furthermore, an adaptive vMIMO DSF scheme based on VBLAST and STBC is proposed. To that end, we provide analytical tools to evaluate the performance of the proposed vMIMO relaying schemes.

  3. Assessing the Performance of a Machine Learning Algorithm in Identifying Bubbles in Dust Emission

    Science.gov (United States)

    Xu, Duo; Offner, Stella S. R.

    2017-12-01

    Stellar feedback created by radiation and winds from massive stars plays a significant role in both physical and chemical evolution of molecular clouds. This energy and momentum leaves an identifiable signature (“bubbles”) that affects the dynamics and structure of the cloud. Most bubble searches are performed “by eye,” which is usually time-consuming, subjective, and difficult to calibrate. Automatic classifications based on machine learning make it possible to perform systematic, quantifiable, and repeatable searches for bubbles. We employ a previously developed machine learning algorithm, Brut, and quantitatively evaluate its performance in identifying bubbles using synthetic dust observations. We adopt magnetohydrodynamics simulations, which model stellar winds launching within turbulent molecular clouds, as an input to generate synthetic images. We use a publicly available three-dimensional dust continuum Monte Carlo radiative transfer code, HYPERION, to generate synthetic images of bubbles in three Spitzer bands (4.5, 8, and 24 μm). We designate half of our synthetic bubbles as a training set, which we use to train Brut along with citizen-science data from the Milky Way Project (MWP). We then assess Brut’s accuracy using the remaining synthetic observations. We find that Brut’s performance after retraining increases significantly, and it is able to identify yellow bubbles, which are likely associated with B-type stars. Brut continues to perform well on previously identified high-score bubbles, and over 10% of the MWP bubbles are reclassified as high-confidence bubbles, which were previously marginal or ambiguous detections in the MWP data. We also investigate the influence of the size of the training set, dust model, evolutionary stage, and background noise on bubble identification.

  4. The performance model of dynamic virtual organization (VO) formations within grid computing context

    International Nuclear Information System (INIS)

    Han Liangxiu

    2009-01-01

    Grid computing aims to enable 'resource sharing and coordinated problem solving in dynamic, multi-institutional virtual organizations (VOs)'. Within the grid computing context, a successful dynamic VO formation means that a number of individuals and institutions associated with certain resources join together and form a new VO in order to execute tasks effectively within given time steps. To date, while the concept of VOs has been accepted, little research has been done on the impact of effective dynamic virtual organization formation. In this paper, we develop a performance model of dynamic VO formation and analyze the effect of different complex organizational structures and their statistical parameters on dynamic VO formation from three aspects: (1) the probability of a successful VO formation under different organizational structures and statistical parameters (e.g., average degree); (2) the effect of task complexity on dynamic VO formation; (3) the impact of network scale on dynamic VO formation. The experimental results show that the proposed model can be used to understand the dynamic VO formation performance of the simulated organizations. The work provides a good path to understanding how to effectively schedule and utilize resources over a complex grid network and therefore improve overall performance within a grid environment.
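The first aspect, success probability as a function of average degree, can be illustrated with a toy Monte Carlo: model the organization as an Erdős–Rényi network and call a VO formation successful when all selected members sit in one connected component. This is an illustrative simplification of the paper's model; the network size, team size, and trial counts below are arbitrary.

```python
import random
from collections import deque

def random_graph(n, avg_degree, rng):
    """Erdős–Rényi graph with the given expected average degree."""
    p = avg_degree / (n - 1)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def same_component(adj, members):
    """True if every selected member is reachable from the first (BFS)."""
    seen, queue = {members[0]}, deque([members[0]])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return set(members) <= seen

def vo_success_probability(n, avg_degree, team_size, trials=300, seed=1):
    """Estimated probability that a randomly drawn team can form a VO."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        adj = random_graph(n, avg_degree, rng)
        wins += same_component(adj, rng.sample(range(n), team_size))
    return wins / trials

p_sparse = vo_success_probability(40, avg_degree=0.5, team_size=5)
p_dense = vo_success_probability(40, avg_degree=6.0, team_size=5)
```

As in the paper's experiments, raising the average degree sharply increases the chance that a randomly assembled VO can actually coordinate.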

  5. A virtual reality endoscopic simulator augments general surgery resident cancer education as measured by performance improvement.

    Science.gov (United States)

    White, Ian; Buchberg, Brian; Tsikitis, V Liana; Herzig, Daniel O; Vetto, John T; Lu, Kim C

    2014-06-01

    Colorectal cancer is the second most common cause of cancer death in the USA. The need for screening colonoscopies, and thus adequately trained endoscopists, particularly in rural areas, is on the rise. Recent increases in the endoscopic cases required for surgical resident graduation by the Surgery Residency Review Committee (RRC) further emphasize the need for more effective endoscopic training during residency. We sought to determine whether a virtual reality colonoscopy simulator enhances surgical resident endoscopic education by detecting improvement in colonoscopy skills before and after 6 weeks of formal clinical endoscopic training. We conducted a retrospective review of prospectively collected surgery resident data on an endoscopy simulator. Residents performed four different clinical scenarios on the endoscopic simulator before and after a 6-week endoscopic training course. Data were collected over a 5-year period from 94 different residents performing a total of 795 colonoscopic simulation scenarios. Main outcome measures included time to cecal intubation, "red out" time, and severity of simulated patient discomfort (mild, moderate, severe, extreme) during colonoscopy scenarios. Average time to intubation of the cecum was 6.8 min for residents who had not undergone endoscopic training versus 4.4 min for those who had, a statistically significant improvement. Virtual reality endoscopic simulation is an effective tool for both augmenting surgical resident endoscopy cancer education and measuring improvement in resident performance after formal clinical endoscopic training.

  6. Performance of Rotary Cutter Type Breaking Machine for Breakingand Deshelling Cocoa Roasted Beans

    Directory of Open Access Journals (Sweden)

    Sukrisno Widyotomo

    2005-12-01

    Conversion of cocoa beans into chocolate products is one of the promising alternatives for increasing the added value of dried cocoa beans. On the other hand, the development of a chocolate industry requires an appropriate technology that is not yet available for small or medium scale businesses. Breaking and deshelling roasted cocoa beans is an important step in cocoa processing to ensure good chocolate quality. The aim of this research was to study the performance of a rotary cutter type breaking machine for breaking and deshelling roasted cocoa beans. The Indonesian Coffee and Cocoa Research Institute has designed and tested such a machine. The breaker unit is rotated by a ½ HP, single phase, 110/220 V, 1440 rpm motor. The transmission system used for rotating the breaker unit is a pulley with a single V-belt. The centrifugal blower used as the separation unit between cotyledon and shell is rated at 0.5 m³/min air flow, 780 Pa, 370 W, and 220 V. Field tests showed that the optimum capacity of the machine was 268 kg/h at a rotary cutter speed of 500 rpm and a separator air flow of 2.8 m/s; the power requirement was 833 W. The product percentages at outlets 1 and 2 were 94.5% and 5.5%, respectively. The particle distribution from outlet 1 was 92% cotyledon and 8% shell, and from outlet 2 was 97% shell and 3% cotyledon. Keywords: cocoa, breaking, rotary cutter, quality.

  7. Machine Learning Techniques for Optical Performance Monitoring from Directly Detected PDM-QAM Signals

    DEFF Research Database (Denmark)

    Thrane, Jakob; Wass, Jesper; Piels, Molly

    2017-01-01

    Linear signal processing algorithms are effective in dealing with linear transmission channels and linear signal detection, while nonlinear signal processing algorithms, from the machine learning community, are effective in dealing with nonlinear transmission channels and nonlinear signal detection. In this paper, a brief overview of the various machine learning methods and their application in optical communication is presented and discussed. Moreover, supervised machine learning methods, such as neural networks and support vector machines, are experimentally demonstrated for in-band optical performance monitoring of directly detected PDM-QAM signals.

  8. Modification and Performance Evaluation of a Low Cost Electro-Mechanically Operated Creep Testing Machine

    OpenAIRE

    John J. MOMOH; Lanre Y. SHUAIB-BABATA; Gabriel O. ADELEGAN

    2010-01-01

    An existing mechanically operated tensile and creep testing machine was modified into a low-cost, electro-mechanically operated creep testing machine capable of determining the creep properties of aluminum, lead, and thermoplastic materials as a function of applied stress, time, and temperature. The modification was motivated by the need for an electro-mechanically operated creep testing machine as a demonstration model, ideal for teaching use and laboratory demonstrations.

  9. Very Large-Scale Neighborhoods with Performance Guarantees for Minimizing Makespan on Parallel Machines

    NARCIS (Netherlands)

    Brueggemann, T.; Hurink, Johann L.; Vredeveld, T.; Woeginger, Gerhard

    2006-01-01

    We study the problem of minimizing the makespan on m parallel machines. We introduce a very large-scale neighborhood of exponential size (in the number of machines) that is based on a matching in a complete graph. The idea is to partition the jobs assigned to the same machine into two sets.
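The paper's matching-based neighborhood is exponential in size; for orientation, here is the much simpler classic single-exchange neighborhood for the same makespan objective (swap one job between two machines and keep the best improving neighbor). This is not the authors' neighborhood, only the baseline such very large-scale neighborhoods generalize.

```python
def makespan(assignment):
    """assignment: one list of job processing times per machine."""
    return max(sum(jobs) for jobs in assignment)

def best_pairwise_swap(assignment):
    """Search the exchange neighborhood: swap one job between two
    machines whenever it lowers the makespan; return the best neighbor."""
    best = [list(m) for m in assignment]
    best_val = makespan(assignment)
    for a in range(len(assignment)):
        for b in range(len(assignment)):
            if a == b:
                continue
            for i, x in enumerate(assignment[a]):
                for j, y in enumerate(assignment[b]):
                    neigh = [list(m) for m in assignment]
                    neigh[a][i], neigh[b][j] = y, x
                    v = makespan(neigh)
                    if v < best_val:
                        best, best_val = neigh, v
    return best, best_val

start = [[7, 5], [2, 1]]  # machine loads 12 and 3 -> makespan 12
improved, value = best_pairwise_swap(start)
```

Exchanging the 7-unit job with the 2-unit job balances the loads to 7 and 8, dropping the makespan from 12 to 8.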

  10. Virtual Resting Pd/Pa From Coronary Angiography and Blood Flow Modelling: Diagnostic Performance Against Fractional Flow Reserve.

    Science.gov (United States)

    Papafaklis, Michail I; Muramatsu, Takashi; Ishibashi, Yuki; Bourantas, Christos V; Fotiadis, Dimitrios I; Brilakis, Emmanouil S; Garcia-Garcia, Héctor M; Escaned, Javier; Serruys, Patrick W; Michalis, Lampros K

    2018-03-01

    Fractional flow reserve (FFR) has been established as a useful diagnostic tool. The resting ratio of distal coronary pressure to aortic pressure (Pd/Pa) is a simpler physiologic index but still requires use of a pressure wire, whereas recently proposed virtual functional indices derived from coronary imaging require complex blood flow modelling and/or are time-consuming. Our aim was to test the diagnostic performance of a virtual resting Pd/Pa computed from routine angiographic images and a simple flow model. Three-dimensional quantitative coronary angiography (3D-QCA) was performed in 139 vessels (120 patients) with intermediate lesions assessed by FFR. The resting Pd/Pa for each lesion was assessed by computational fluid dynamics. The discriminatory power of virtual resting Pd/Pa against FFR (reference: ≤0.80) was high (area under the receiver operating characteristic curve [AUC]: 90.5% [95% CI: 85.4-95.6%]). Diagnostic accuracy, sensitivity, and specificity for the optimal virtual resting Pd/Pa cut-off (≤0.94) were 84.9%, 90.4%, and 81.6%, respectively. Virtual resting Pd/Pa demonstrated superior diagnostic performance and correlated significantly with FFR (r=0.69). Virtual resting Pd/Pa using routine angiographic data and a simple flow model provides fast functional assessment of coronary lesions without requiring a pressure wire or hyperaemia induction. The high diagnostic performance of virtual resting Pd/Pa for predicting FFR shows promise for using this simple and fast virtual index in clinical practice. Copyright © 2017 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.
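The reported numbers (AUC, plus sensitivity and specificity at the ≤0.94 cut-off) follow from standard ROC arithmetic, which is easy to reproduce. A sketch with made-up Pd/Pa values, where "positive" means FFR ≤ 0.80 and a lower virtual Pd/Pa indicates disease:

```python
def auc_lower_is_positive(pos, neg):
    """Rank-based AUC: probability a diseased vessel gets a lower virtual
    Pd/Pa than a non-diseased one; ties count half."""
    wins = sum((p < n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def diagnostics(pos, neg, cutoff):
    """Sensitivity, specificity, accuracy when calling 'diseased' at Pd/Pa <= cutoff."""
    tp = sum(p <= cutoff for p in pos)
    tn = sum(n > cutoff for n in neg)
    return tp / len(pos), tn / len(neg), (tp + tn) / (len(pos) + len(neg))

# hypothetical virtual resting Pd/Pa values
diseased = [0.88, 0.91, 0.93, 0.95]       # vessels with FFR <= 0.80
healthy = [0.93, 0.96, 0.97, 0.98, 0.99]  # vessels with FFR > 0.80
a = auc_lower_is_positive(diseased, healthy)
sens, spec, acc = diagnostics(diseased, healthy, cutoff=0.94)
```

The study's 90.5% AUC and its operating point at ≤0.94 were computed this way, just over 139 real vessels instead of a toy sample.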

  11. Do "Virtual" and "Outpatient" Public Health Tuberculosis Clinics Perform Equally Well? A Program-Wide Evaluation in Alberta, Canada.

    Directory of Open Access Journals (Sweden)

    Richard Long

    Meeting the challenge of tuberculosis (TB) elimination will require adopting new models of delivering patient-centered care customized to diverse settings and contexts. In areas of low incidence with cases spread out across jurisdictions and large geographic areas, a "virtual" model is attractive. However, whether "virtual" clinics and telemedicine deliver the same outcomes as face-to-face encounters in general, and within the sphere of public health in particular, is unknown. That evidence is generated here by analyzing outcomes between the "virtual" and "outpatient" public health TB clinics in Alberta, a province of Western Canada with a large geographic area and relatively small population. In response to the challenge of delivering equitable TB services over long distances and to hard-to-reach communities, Alberta established three public health clinics for the delivery of its program: two outpatient clinics serving major metropolitan areas, and one virtual clinic serving mainly rural areas. The virtual clinic receives paper-based or electronic referrals and generates directives which are acted upon by local providers. Clinics are staffed by dedicated public health nurses and university-based TB physicians. Performance of the two types of clinics is compared between the years 2008 and 2012 using 16 case management and treatment outcome indicators and 12 contact management indicators. In the outpatient and virtual clinics, respectively, 691 and 150 cases and their contacts were managed. Individually and together, both types of clinics met most performance targets. Compared to outpatient clinics, virtual clinic performance was comparable, superior, and inferior in 22, 3, and 3 indicators, respectively. Outpatient and virtual public health TB clinics perform equally well. In low-incidence settings, a combination of the two clinic types has the potential to address issues around equitable service delivery and declining expertise.

  12. Machine site preparation improves seedling performance on a high-elevation site in southwest Oregon

    International Nuclear Information System (INIS)

    McNabb, D.H.; Baker-Katz, K.; Tesch, S.D.

    1993-01-01

    Douglas-fir (Pseudotsuga menziesii) seedlings planted on areas receiving one of four site-preparation treatments (scarify, scarify/till, soil removal, and soil removal/till) and on unprepared control areas were compared for 5 yr at a high-elevation, nutrient-poor site in the western Siskiyou Mountains of southwest Oregon. Fifth-year survival of seedlings was at least 85% among machine-prepared plots, compared to 42% on control plots. Cover of competing vegetation remained less than 25% during the period for all machine treatments. In contrast, vegetation cover on control plots was 30% at the time of planting and increased to nearly 75% after 5 yr. Competing vegetation clearly impeded seedling performance. The effects of unusually droughty conditions at the time of planting in 1982 were examined further by interplanting additional seedlings in the soil-removal treatment in 1985. The interplanting was followed by more normal spring precipitation, and seedlings grew better over 5 yr than those planted in 1982. The slow recovery of competing vegetation and generally poor seedling growth on all treatments during both planting years are attributed to low soil fertility.

  13. Short-Term Solar Forecasting Performance of Popular Machine Learning Algorithms: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Florita, Anthony R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Elgindy, Tarek [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dobbs, Alex [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-03

    A framework for assessing the performance of short-term solar forecasting is presented in conjunction with a range of numerical results using global horizontal irradiation (GHI) from the open-source Surface Radiation Budget (SURFRAD) data network. A suite of popular machine learning algorithms is compared according to a set of statistically distinct metrics and benchmarked against the persistence-of-cloudiness forecast and a cloud motion forecast. Results show significant improvement compared to the benchmarks with trade-offs among the machine learning algorithms depending on the desired error metric. Training inputs include time series observations of GHI for a history of years, historical weather and atmospheric measurements, and corresponding date and time stamps such that training sensitivities might be inferred. Prediction outputs are GHI forecasts for 1, 2, 3, and 4 hours ahead of the issue time, and they are made for every month of the year for 7 locations. Photovoltaic power and energy outputs can then be made using the solar forecasts to better understand power system impacts.
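Benchmarking against persistence, as described, amounts to comparing the error of the model forecast against that of a constant last-observation forecast. A minimal sketch with hypothetical GHI values (the real study uses SURFRAD observations and trained learners):

```python
import math

def persistence_forecast(ghi_history, horizon):
    """Persistence benchmark: every future hour equals the last observation."""
    return [ghi_history[-1]] * horizon

def rmse(pred, actual):
    """Root-mean-square error between forecast and observations."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred))

def skill_score(model_rmse, benchmark_rmse):
    """Positive skill means the model beats the benchmark; 1.0 is perfect."""
    return 1.0 - model_rmse / benchmark_rmse

history = [300, 350, 420, 480]            # W/m^2, hypothetical GHI observations
actual_next = [500, 520, 510, 490]        # the next four hours, 1-4 h ahead
bench = persistence_forecast(history, 4)  # [480, 480, 480, 480]
model = [495, 515, 505, 485]              # hypothetical ML forecast
s = skill_score(rmse(model, actual_next), rmse(bench, actual_next))
```

The "significant improvement compared to the benchmarks" reported above corresponds to a positive skill score of this form, evaluated per horizon, month, and site.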

  14. Strategic Performance Measurement Using Balanced Scorecard: A Case of Machine Tool Industry

    Directory of Open Access Journals (Sweden)

    Kshatriya Anil

    2017-02-01

    This paper focuses on the implementation, monitoring, and application of balanced scorecard (BSC) techniques in an organization that provides machine tool solutions to the industrial sector. Growth of the company required improving both the top and bottom lines; in the industry under consideration, it was observed that our company's top line was steadily growing, but its bottom line was not. This is when we got down to brass tacks and strategically focused on growing the company's overall profits: growing revenues by improving EBITDA (earnings before interest, taxes, depreciation, and amortization) and by increasing efficiency (i.e., cutting costs). These improvements were implemented through a comprehensive BSC designed to suit the machine tool industry. The four perspectives of management, namely internal business process, organizational learning, financial perspective, and customer perspective, are considered, along with the parameters that affect the BSC. The BSC design comprised 9 objectives and 27 related measures to quantify the quantitative and qualitative dimensions that affect the company's performance. A Balanced Lean Index (BL Score) was used to measure the results for company X.

  15. Application of rare-earth magnets in high-performance electric machines

    International Nuclear Information System (INIS)

    Ramsden, V.S.

    1998-01-01

    Some state-of-the-art developments of high-performance machines using rare-earth magnets are reviewed, with particular examples drawn from a number of novel machine designs developed jointly by the Faculty of Engineering, University of Technology, Sydney (UTS) and CSIRO Telecommunications and Industrial Physics. These designs include an 1800 W, 1060 rev/min, 98% efficient solar-car in-wheel motor using a Halbach magnet array, axial flux, and an ironless winding; a 1200 W, 3000 rev/min, 91% efficient solar-powered, water-filled, submersible bore-hole pump motor using a surface magnet rotor; a 500 W, 10000 rev/min, 87% efficient, oil-filled, oil-well tractor motor using a 2-pole cylindrical magnet rotor and a slotless winding; a 75 kW, 48000 rev/min, 97% efficient, high-speed compressor drive with a 2-pole cylindrical magnet rotor, slotted stator, and refrigerant cooling; and a 20 kW, 211 rev/min, 87% efficient, direct-drive generator for wind turbines with very low starting torque, using an outer rotor with surface magnets and a slotted stator. (orig.)

  16. Performance Evaluation of a Bench-Top Precision Glass Molding Machine

    Directory of Open Access Journals (Sweden)

    Peter Wachtel

    2013-01-01

    A Dyna Technologies Inc. GP-5000HT precision glass molding machine has been found to be a capable tool for bridging the gap between research-level instruments and the higher-volume production machines typically used in industry, providing a means to apply the results of rigorous instrumentation analysis performed in the lab to industrial PGM applications. The GP-5000HT's thermal and mechanical functionality is explained and characterized through baseline measurements and their associated error. These baseline measurements were used to determine the center-thickness repeatability of pressed glass parts, which is the main metric used in industrial pressing settings. The baselines and the repeatability tests both confirmed the need for three warm-up pressing cycles before the press reaches a thermal steady state. The baselines used for pressing a 2 mm glass piece to a 1 mm target center thickness yielded an average center thickness of 1.001 mm and a standard deviation of 0.0055 mm for glass samples pressed over 3 consecutive days. The baseline tests were then used to deconvolve the sources of error in the final pressed-piece center thickness.

  17. Investigation of tool engagement and cutting performance in machining a pocket

    Science.gov (United States)

    Adesta, E. Y. T.; Hamidon, R.; Riza, M.; Alrashidi, R. F. F. A.; Alazemi, A. F. F. S.

    2018-01-01

    This study investigates the variation of tool engagement for different cutting profiles, and explores the behaviour of cutting force and cutting temperature for the different tool engagements encountered when machining a pocket. Initially, simple tool engagement models were developed for peripheral and slot cutting with different types of corner. Based on these models, the tool engagements for contour and zig-zag tool path strategies for a rectangular pocket of 80 mm x 60 mm were analyzed. Experiments were conducted to investigate the effect of tool engagement on cutting force and cutting temperature in the machining of a pocket in AISI H13 material. The cutting parameters used were 150 m/min cutting speed, 0.05 mm/tooth feed, and 0.1 mm depth of cut. The results show a clear relationship between tool engagement and both cutting force and cutting temperature: higher cutting force and cutting temperature are obtained during up milling and when the cutting tool is fully engaged with the workpiece.
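For peripheral milling, the kind of engagement these models describe can be expressed as an engagement angle that grows with radial depth of cut, θ = arccos(1 − ae/R), reaching 180° for a full slot. A small sketch using this standard geometric relation (it is not taken from the paper itself):

```python
import math

def engagement_angle_deg(radial_depth, tool_diameter):
    """Tool-workpiece engagement angle for peripheral (up/down) milling.
    Slotting (radial_depth == tool_diameter) gives full 180-degree engagement."""
    r = tool_diameter / 2.0
    if not 0 < radial_depth <= tool_diameter:
        raise ValueError("radial depth must be in (0, tool diameter]")
    return math.degrees(math.acos(1.0 - radial_depth / r))

light = engagement_angle_deg(radial_depth=1.0, tool_diameter=10.0)   # shallow pass
half = engagement_angle_deg(radial_depth=5.0, tool_diameter=10.0)    # ae = R
slot = engagement_angle_deg(radial_depth=10.0, tool_diameter=10.0)   # full slot
```

The jump from a shallow peripheral pass to full slotting roughly quintuples the engagement angle, which is consistent with the higher forces and temperatures reported at full engagement.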

  18. Virtual navigation performance: the relationship to field of view and prior video gaming experience.

    Science.gov (United States)

    Richardson, Anthony E; Collaer, Marcia L

    2011-04-01

    Two experiments examined whether learning a virtual environment was influenced by field of view and how it related to prior video gaming experience. In the first experiment, participants (42 men, 39 women; M age = 19.5 yr., SD = 1.8) performed worse on a spatial orientation task displayed with a narrow field of view in comparison to medium and wide field-of-view displays. Counter to initial hypotheses, wide field-of-view displays did not improve performance over medium displays, and this was replicated in a second experiment (30 men, 30 women; M age = 20.4 yr., SD = 1.9) presenting a more complex learning environment. Self-reported video gaming experience correlated with several spatial tasks: virtual environment pointing and tests of Judgment of Line Angle and Position, mental rotation, and Useful Field of View (with correlations between .31 and .45). When prior video gaming experience was included as a covariate, sex differences in spatial tasks disappeared.

  19. Tracking Systems for Virtual Rehabilitation: Objective Performance vs. Subjective Experience. A Practical Scenario

    Directory of Open Access Journals (Sweden)

    Roberto Lloréns

    2015-03-01

    Full Text Available Motion tracking systems are commonly used in virtual reality-based interventions to detect movements in the real world and transfer them to the virtual environment. There are different tracking solutions based on different physical principles, which mainly define their performance parameters. However, special requirements have to be considered for rehabilitation purposes. This paper studies and compares the accuracy and jitter of three tracking solutions (optical, electromagnetic, and skeleton tracking) in a practical scenario and analyzes the subjective perceptions of 19 healthy subjects, 22 stroke survivors, and 14 physical therapists. The optical tracking system provided the best accuracy (1.074 ± 0.417 cm) while the electromagnetic device provided the most inaccurate results (11.027 ± 2.364 cm). However, this tracking solution provided the best jitter values (0.324 ± 0.093 cm), in contrast to the skeleton tracking, which had the worst results (1.522 ± 0.858 cm). Healthy individuals and professionals preferred the skeleton tracking solution rather than the optical and electromagnetic solutions (in that order). Individuals with stroke chose the optical solution over the other options. Our results show that subjective perceptions and preferences are far from being constant among different populations, thus suggesting that these considerations, together with the performance parameters, should also be taken into account when designing a rehabilitation system.
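    The two performance parameters compared above have simple operational definitions: accuracy is typically the mean distance between tracked and ground-truth positions, and jitter the dispersion of repeated samples of a stationary marker. The sketch below computes both under those common definitions; the sample coordinates are invented, not the study's data.

```python
def accuracy_cm(measured, truth):
    """Mean Euclidean distance (cm) between tracked and ground-truth positions."""
    dists = [sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
             for p, q in zip(measured, truth)]
    return sum(dists) / len(dists)

def jitter_cm(samples):
    """Dispersion of repeated samples of a stationary marker:
    mean distance from each sample to the centroid."""
    n = len(samples)
    centroid = [sum(c) / n for c in zip(*samples)]
    dists = [sum((a - b) ** 2 for a, b in zip(p, centroid)) ** 0.5
             for p in samples]
    return sum(dists) / n

# Hypothetical tracker readings vs. known marker positions (cm)
measured = [(0.9, 0.1, 0.0), (2.1, 0.0, 0.1)]
truth = [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
acc = accuracy_cm(measured, truth)                    # ~0.141 cm
jit = jitter_cm([(0.0, 0.0, 0.0), (0.0, 0.0, 2.0)])  # 1.0 cm
```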

  1. Virtual environment to quantify the influence of colour stimuli on the performance of tasks requiring attention.

    Science.gov (United States)

    Silva, Alessandro P; Frère, Annie F

    2011-08-19

    Recent studies indicate that blue-yellow colour discrimination is impaired in ADHD individuals. However, the relationship between colour and performance has not been investigated. This paper describes the development and testing of a virtual environment capable of quantifying the influence of red-green versus blue-yellow colour stimuli on people's performance in a fun and interactive way, appropriate for the target audience. An interactive computer game based on virtual reality was developed to evaluate the performance of the players. The game's storyline was based on the story of an old pirate who runs across islands and dangerous seas in search of a lost treasure. Within the game, the player must find and interpret the hints scattered in different scenarios. Two versions of this game were implemented. In the first, hints and information boards were painted using red and green colours. In the second version, these objects were painted using blue and yellow colours. For modelling, texturing, and animating virtual characters and objects, the three-dimensional computer graphics tool Blender 3D was used. The textures were created with the GIMP editor to provide visual effects increasing the realism and immersion of the players. The games were tested on 20 non-ADHD volunteers who were divided into two subgroups (A1 and A2) and 20 volunteers with ADHD who were divided into subgroups B1 and B2. Subgroups A1 and B1 used the first version of the game with the hints painted in green-red colours, and subgroups A2 and B2 the second version with the same hints now painted in blue-yellow. The time spent to complete each task of the game was measured. Data analyzed with two-way ANOVA and post-hoc Tukey LSD showed that the use of blue/yellow instead of green/red colours decreased the game performance of all participants. However, a greater decrease in performance was observed in ADHD participants, where tasks that require attention were most affected.

  2. Psychometric Properties of Virtual Reality Vignette Performance Measures: A Novel Approach for Assessing Adolescents' Social Competency Skills

    Science.gov (United States)

    Paschall, Mallie J.; Fishbein, Diana H.; Hubal, Robert C.; Eldreth, Diana

    2005-01-01

    This study examined the psychometric properties of performance measures for three novel, interactive virtual reality vignette exercises developed to assess social competency skills of at-risk adolescents. Performance data were collected from 117 African-American male 15-17 year olds. Data for 18 performance measures were obtained, based on…

  3. A Machine Learning Approach to Discover Rules for Expressive Performance Actions in Jazz Guitar Music

    Science.gov (United States)

    Giraldo, Sergio I.; Ramirez, Rafael

    2016-01-01

    Expert musicians introduce expression in their performances by manipulating sound properties such as timing, energy, pitch, and timbre. Here, we present a data-driven computational approach to induce expressive performance rule models for note duration, onset, energy, and ornamentation transformations in jazz guitar music. We extract high-level features from a set of 16 commercial audio recordings (and corresponding music scores) of jazz guitarist Grant Green in order to characterize the expression in the pieces. We apply machine learning techniques to the resulting features to learn expressive performance rule models. We (1) quantitatively evaluate the accuracy of the induced models, (2) analyse the relative importance of the considered musical features, (3) discuss some of the learnt expressive performance rules in the context of previous work, and (4) assess their generality. The accuracies of the induced predictive models are significantly above baseline levels, indicating that the audio performances and the musical features extracted contain sufficient information to automatically learn informative expressive performance patterns. Feature analysis shows that the most important musical features for predicting expressive transformations are note duration, pitch, metrical strength, phrase position, Narmour structure, and tempo and key of the piece. Similarities and differences between the induced expressive rules and the rules reported in the literature were found. Differences may be due to the fact that most previously studied performance data has consisted of classical music recordings. Finally, the rules' performer specificity/generality is assessed by applying the induced rules to performances of the same pieces performed by two other professional jazz guitar players. Results show a consistency in the ornamentation patterns between Grant Green and the other two musicians, which may be interpreted as a good indicator for generality of the ornamentation rules.
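    As a concrete illustration of inducing rules over note-level features, the toy sketch below uses a OneR-style learner on invented nominal features. The feature names, data, and the OneR algorithm itself are illustrative stand-ins, not the models or features used in the paper.

```python
def one_r(examples, labels):
    """Toy OneR rule induction: keep the single feature whose
    value -> majority-class mapping makes the fewest training errors."""
    best = None
    for f in range(len(examples[0])):
        by_value = {}
        for x, y in zip(examples, labels):
            by_value.setdefault(x[f], []).append(y)
        rule = {v: max(set(ys), key=ys.count) for v, ys in by_value.items()}
        errors = sum(rule[x[f]] != y for x, y in zip(examples, labels))
        if best is None or errors < best[2]:
            best = (f, rule, errors)
    return best  # (feature index, value -> class rule, training errors)

# Invented note features: (duration class, metrical strength) -> ornamented?
X = [("short", "strong"), ("short", "weak"), ("long", "strong"), ("long", "weak")]
y = ["yes", "yes", "no", "no"]
feature, rule, errors = one_r(X, y)   # picks duration: short -> yes, long -> no
```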

  6. The effect of self-directed virtual reality simulation on dissection training performance in mastoidectomy

    DEFF Research Database (Denmark)

    Andersen, Steven Arild Wuyts; Foghsgaard, Søren; Konge, Lars

    2016-01-01

    OBJECTIVES/HYPOTHESIS: To establish the effect of self-directed virtual reality (VR) simulation training on cadaveric dissection training performance in mastoidectomy and the transferability of skills acquired in VR simulation training to the cadaveric dissection training setting. STUDY DESIGN: Prospective study. METHODS: Two cohorts of 20 novice otorhinolaryngology residents received either self-directed VR simulation training before cadaveric dissection training or vice versa. Cadaveric and VR simulation performances were assessed using final-product analysis with three blinded expert raters. RESULTS: The group receiving VR simulation training before cadaveric dissection had a mean final-product score of 14.9 (95% confidence interval [CI] [12.9-16.9]) compared with 9.8 (95% CI [8.4-11.1]) in the group not receiving VR simulation training before cadaveric dissection. This 52% increase...

  7. The performance of disk arrays in shared-memory database machines

    Science.gov (United States)

    Katz, Randy H.; Hong, Wei

    1993-01-01

    In this paper, we examine how disk arrays and shared memory multiprocessors lead to an effective method for constructing database machines for general-purpose complex query processing. We show that disk arrays can lead to cost-effective storage systems if they are configured from suitably small formfactor disk drives. We introduce the storage system metric data temperature as a way to evaluate how well a disk configuration can sustain its workload, and we show that disk arrays can sustain the same data temperature as a more expensive mirrored-disk configuration. We use the metric to evaluate the performance of disk arrays in XPRS, an operational shared-memory multiprocessor database system being developed at the University of California, Berkeley.
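    Data temperature is commonly quantified as sustained I/O accesses per second per gigabyte of stored data; a configuration can sustain a workload when its attainable temperature meets the workload's demand. The sketch below applies that common definition with invented drive figures (the paper's exact formulation and numbers may differ).

```python
def data_temperature(io_per_sec, capacity_gb):
    """Data temperature: sustained I/Os per second per gigabyte stored."""
    return io_per_sec / capacity_gb

# Hypothetical configurations storing roughly the same database:
# small-form-factor drives raise the attainable temperature because the
# aggregate I/O rate grows with spindle count while capacity stays fixed.
small_array = data_temperature(io_per_sec=8 * 80, capacity_gb=8 * 2)   # 40.0
large_disks = data_temperature(io_per_sec=2 * 100, capacity_gb=2 * 9)  # ~11.1
```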

  8. Performance implications of virtualization and hyper-threading on high energy physics applications in a Grid environment

    CERN Document Server

    Gilbert, Laura; Cobban, M; Iqbal, Saima; Jenwei, Hsieh; Newman, R; Pepper, R; Tseng, Jeffrey

    2005-01-01

    The simulations used in the field of high energy physics are compute intensive and exhibit a high level of data parallelism. These features make such simulations ideal candidates for Grid computing. We take as an example the GEANT4 detector simulation used for physics studies within the ATLAS experiment at CERN. One key issue in Grid computing is that of network and system security, which can potentially inhibit the widespread use of such simulations. Virtualization provides a feasible solution because it allows the creation of virtual compute nodes in both local and remote compute clusters, thus providing an insulating layer which can play an important role in satisfying the security concerns of all parties involved. However, it has performance implications. This study provides quantitative estimates of the virtualization and hyper-threading overhead for GEANT on commodity clusters. Results show that virtualization has less than 15% run-time overhead, and that the best run time (with the non-SMP lice...

  9. Virtualization for the LHCb experiment

    International Nuclear Information System (INIS)

    Bonaccorsi, E.; Brarda, L.; Chebbi, M.; Neufeld, N.; Sborzacci, F.

    2012-01-01

    The LHCb experiment, one of the four large particle detectors at CERN, counts in its Online System more than 2000 servers and embedded systems. As a result of ever-increasing CPU performance in modern servers, many of the applications in the controls system are excellent candidates for virtualization technologies. We see virtualization as an approach to cut down cost, optimize resource usage and manage the complexity of the IT infrastructure of LHCb. Recently we have added a Kernel Virtual Machine (KVM) cluster based on Red Hat Enterprise Virtualization for Servers (RHEV), complementary to the existing Hyper-V cluster devoted only to the virtualization of the Windows guests. This paper describes the architecture of our solution based on KVM and RHEV, along with its integration with the existing Hyper-V infrastructure and the Quattor cluster management tools, and in particular how we use it to run controls applications on a virtualized infrastructure. We present performance results of both the KVM and Hyper-V solutions, problems encountered and a description of the management tools developed for the integration with the Online cluster and the LHCb SCADA control system based on PVSS. (authors)

  10. Influence of virtual reality soccer game on walking performance in robotic assisted gait training for children

    Directory of Open Access Journals (Sweden)

    Zimmerli Lukas

    2010-04-01

    Full Text Available Abstract Background Virtual reality (VR) offers powerful therapy options within a functional, purposeful and motivating context. Several studies have shown that patients' motivation plays a crucial role in determining therapy outcome. However, few studies have demonstrated the potential of VR in pediatric rehabilitation. Therefore, we developed a VR-based soccer scenario, which provided interactive elements to engage patients during robotic assisted treadmill training (RAGT). The aim of this study was to compare the immediate effect of different supportive conditions (VR versus non-VR conditions) on motor output in patients and healthy control children during training with the driven gait orthosis Lokomat®. Methods A total of 18 children (ten patients with different neurological gait disorders, eight healthy controls) took part in this study. They were instructed to walk on the Lokomat in four different, randomly-presented conditions: (1) walk normally without supporting assistance, (2) with therapists' instructions to promote active participation, (3) with VR as a motivating tool to walk actively and (4) with the VR tool combined with therapists' instructions. The Lokomat gait orthosis is equipped with sensors at the hip and knee joints to measure man-machine interaction forces. Additionally, subjects' acceptance of the RAGT with VR was assessed using a questionnaire. Results The mixed ANOVA revealed significant main effects for the factor CONDITIONS (p Conclusions The VR scenario used here induces an immediate effect on motor output to a similar degree as the effect resulting from verbal instructions by the therapists. Further research needs to focus on the implementation of interactive design elements, which keep motivation high across and beyond RAGT sessions, especially in pediatric rehabilitation.

  11. Acute Effect of Static Stretching on Lower Limb Movement Performance by Using STABL Virtual Reality System.

    Science.gov (United States)

    Ameer, Mariam A; Muaidi, Qassim I

    2017-07-17

    The effect of acute static stretching (ASS) on lower limb reaction time (RT) has recently been questioned with respect to decreasing the risk of falling and injuries in situations requiring a rapid reaction, as in cases of balance disturbance. The main purpose of this study was to detect the effect of ASS on lower limb RT by using a virtual reality device. Design: two-group control-group design. Setting: research laboratory. The control and experimental groups were formed randomly from sixty female university students. Each participant in the experimental group was tested before and after ASS of the quadriceps, hamstrings and plantar flexor muscles, and compared with the control group with warming-up exercise only. The stretching program involved warming-up in the form of circular running inside the lab for 5 minutes, followed by stretching of each muscle group thrice, to the limit of discomfort, for 45 s, with a resting period of 15 s between stretches. The measurements included the RT of the dominant lower extremity using the dynamic stability program of the STABL Virtual Reality System (Model No. DIZ 2709, Motek Medical and Force Link Merged Co., Amsterdam). There was a statistically significant reduction (F = 162, P = .00) in post-test RT between the two groups, and a significant decrease in RT after stretching in the experimental group (7.5%) (P = .00). ASS of the lower limb muscles tends to decrease lower limb RT and improve movement performance.

  12. Performance optimization in electro- discharge machining using a suitable multiresponse optimization technique

    Directory of Open Access Journals (Sweden)

    I. Nayak

    2017-06-01

    Full Text Available In the present research work, four different multi-response optimization techniques, viz. multiple response signal-to-noise (MRSN) ratio, weighted signal-to-noise (WSN) ratio, Grey relational analysis (GRA) and VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje in Serbian) methods, have been used to optimize the electro-discharge machining (EDM) performance characteristics such as material removal rate (MRR), tool wear rate (TWR) and surface roughness (SR) simultaneously. Experiments have been planned on a D2 steel specimen based on an L9 orthogonal array. Experimental results are analyzed using the standard procedure. The optimum level combinations of input process parameters such as voltage, current, pulse-on-time and pulse-off-time, and the percentage contributions of each process parameter using the ANOVA technique, have been determined. Different correlations have been developed between the various input process parameters and output performance characteristics. Finally, the optimum performances of these four methods are compared and the results show that the WSN ratio method is the best multi-response optimization technique for this process. From the analysis, it is also found that the current has the maximum effect on the overall performance of the EDM operation as compared to other process parameters.
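    Of the four techniques compared, Grey relational analysis is easy to sketch end to end: normalize each response (larger-the-better or smaller-the-better), measure each run's deviation from the ideal, convert deviations to grey relational coefficients, and rank runs by their mean coefficient (the grey relational grade). The sketch below is the textbook procedure with invented response values, not the paper's data.

```python
def grey_relational_grades(runs, larger_better, zeta=0.5):
    """Grey relational grade of each experimental run.

    runs: list of response vectors, e.g. (MRR, TWR, SR) per run.
    larger_better[j]: True if response j should be maximized.
    zeta: distinguishing coefficient, conventionally 0.5.
    """
    cols = list(zip(*runs))
    norm = []
    for j, col in enumerate(cols):
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0
        norm.append([(x - lo) / span if larger_better[j] else (hi - x) / span
                     for x in col])
    # Deviation of each normalized value from the ideal (1.0)
    devs = [[1.0 - v for v in col] for col in norm]
    dmin = min(d for col in devs for d in col)
    dmax = max(d for col in devs for d in col)
    coeff = [[(dmin + zeta * dmax) / (d + zeta * dmax) for d in col]
             for col in devs]
    # Grade of a run = mean coefficient across its responses
    return [sum(c[i] for c in coeff) / len(coeff) for i in range(len(runs))]

# Invented EDM runs: (MRR, TWR, SR); maximize MRR, minimize TWR and SR
runs = [(12.0, 0.8, 3.2), (15.0, 1.1, 2.9), (10.0, 0.6, 3.8)]
grades = grey_relational_grades(runs, larger_better=[True, False, False])
best_run = grades.index(max(grades))
```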

  13. Assessment of performance measures and learning curves for use of a virtual-reality ultrasound simulator in transvaginal ultrasound examination

    DEFF Research Database (Denmark)

    Madsen, M E; Konge, L; Nørgaard, L N

    2014-01-01

    OBJECTIVE: To assess the validity and reliability of performance measures, develop credible performance standards and explore learning curves for a virtual-reality simulator designed for transvaginal gynecological ultrasound examination. METHODS: A group of 16 ultrasound novices, along with a group......-6), corresponding to an average of 219 min (range, 150-251 min) of training. The test/retest reliability was high, with an intraclass correlation coefficient of 0.93. CONCLUSIONS: Competence in the performance of gynecological ultrasound examination can be assessed in a valid and reliable way using virtual-reality...

  14. High availability using virtualization

    International Nuclear Information System (INIS)

    Calzolari, Federico; Arezzini, Silvia; Ciampa, Alberto; Mazzoni, Enrico; Domenici, Andrea; Vaglini, Gigliola

    2010-01-01

    High availability has always been one of the main problems for a data center. Until now, high availability was achieved by host-per-host redundancy, a highly expensive method in terms of hardware and human costs. A new approach to the problem can be offered by virtualization. Using virtualization, it is possible to achieve a redundancy system for all the services running in a data center. This new approach to high availability allows the running virtual machines to be distributed over a small number of servers, by exploiting the features of the virtualization layer: start, stop and move virtual machines between physical hosts. The 3RC system is based on a finite state machine, providing the possibility to restart each virtual machine on any physical host, or reinstall it from scratch. A complete infrastructure has been developed to install operating system and middleware in a few minutes. To virtualize the main servers of a data center, a new procedure has been developed to migrate physical to virtual hosts. The whole Grid data center SNS-PISA is running at the moment in a virtual environment under the high availability system.
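    The recovery logic described (restart a virtual machine on any physical host, falling back to reinstallation from scratch) maps naturally onto a small finite state machine. The sketch below is an illustrative reconstruction; the state names, events, and policy are assumptions, not the actual 3RC design.

```python
# States a virtual machine can be in during recovery
RUNNING, RESTARTING, REINSTALLING, FAILED = "running", "restarting", "reinstalling", "failed"

TRANSITIONS = {
    (RUNNING, "host_down"): RESTARTING,       # relocate and restart on another host
    (RESTARTING, "boot_ok"): RUNNING,
    (RESTARTING, "boot_fail"): REINSTALLING,  # fall back: reinstall OS and middleware
    (REINSTALLING, "install_ok"): RUNNING,
    (REINSTALLING, "install_fail"): FAILED,   # give up, alert an operator
}

def step(state, event):
    """Apply one event; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = RUNNING
for event in ("host_down", "boot_fail", "install_ok"):
    state = step(state, event)
# after a failed restart and a successful reinstall, the VM is running again
```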

  15. Machine learning analysis of binaural rowing sounds

    DEFF Research Database (Denmark)

    Johard, Leonard; Ruffaldi, Emanuele; Hoffmann, Pablo F.

    2011-01-01

    Techniques for machine hearing are increasing in potential due to new application domains. In this work we address the analysis of rowing sounds in a natural context for the purpose of supporting a training system based on virtual environments. This paper presents the acquisition methodology and the evaluation of different machine learning techniques for classifying rowing-sound data. We see that a combination of principal component analysis and shallow networks performs equally well as deep architectures, while being much faster to train.
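    The winning pipeline (principal component analysis followed by a shallow classifier) can be sketched with standard-library code. Below, the PCA stage is reduced to the first principal component found by power iteration, and a nearest-centroid rule in the projected space stands in for the shallow network; the two-dimensional "sound features" and class names are invented for illustration.

```python
def first_component(X, iters=200):
    """First principal component via power iteration on the covariance
    matrix -- a standard-library stand-in for the PCA stage."""
    n, d = len(X), len(X[0])
    mean = [sum(col) / n for col in zip(*X)]
    centered = [[x - m for x, m in zip(row, mean)] for row in X]
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

def project(x, mean, v):
    return sum((xi - m) * vi for xi, m, vi in zip(x, mean, v))

def classify(x, mean, v, centroids):
    """Nearest class centroid in projected space (standing in for the shallow net)."""
    p = project(x, mean, v)
    return min(centroids, key=lambda label: abs(p - centroids[label]))

# Invented 2-D "sound features" for two rowing stroke phases
catch = [(0.0, 0.0), (0.2, 0.1), (0.1, -0.1)]
drive = [(3.0, 0.1), (3.2, -0.1), (2.9, 0.0)]
mean, v = first_component(catch + drive)
centroids = {
    "catch": sum(project(x, mean, v) for x in catch) / len(catch),
    "drive": sum(project(x, mean, v) for x in drive) / len(drive),
}
label = classify((0.05, 0.0), mean, v, centroids)
```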

  16. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable a fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential of high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
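    The scalability analysis mentioned rests on Amdahl's law, which bounds the speedup attainable on n cores when a fraction f of the runtime parallelizes and the remainder stays serial. A quick sketch (the fractions below are illustrative, not measured values from the paper):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: S(n) = 1 / ((1 - f) + f / n)."""
    f = parallel_fraction
    return 1.0 / ((1.0 - f) + f / n_cores)

# Even a 5% serial fraction caps a 12-core speedup well below 12x,
# and no core count can exceed the ceiling 1 / (1 - f).
speedup_12 = amdahl_speedup(0.95, 12)   # ~7.74x
ceiling = 1.0 / (1.0 - 0.95)            # 20x
```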

  17. Academic Performance, Course Completion Rates, and Student Perception of the Quality and Frequency of Interaction in a Virtual High School

    Science.gov (United States)

    Hawkins, Abigail; Graham, Charles R.; Sudweeks, Richard R.; Barbour, Michael K.

    2013-01-01

    This study examined the relationship between students' perceptions of teacher-student interaction and academic performance at an asynchronous, self-paced, statewide virtual high school. Academic performance was measured by grade awarded and course completion. There were 2269 students who responded to an 18-item survey designed to measure student…

  18. Tangible interfaces in virtual environments, case study: Instituto de Engenharia Nuclear Virtual

    International Nuclear Information System (INIS)

    Santo, Andre Cotelli do E.; Mol, Antonio Carlos A.; Pinto, Emanuele Oliveira; Melo, Joao Victor da C.; Paula, Vanessa Marcia de; Freitas, Victor Goncalves Gloria; Machado, Daniel Mol

    2015-01-01

    Virtual Reality (VR) techniques allow the creation of realistic representations of an individual. These technologies are being applied in several fields such as training, simulations, and virtual experiments, and new applications are constantly being found. This work aims to present an interactive system for virtual environments without the use of peripherals typically found in computers, such as mouse and keyboard. Through the movement of the head and hands it is possible to control and navigate the virtual character (avatar) in a virtual environment, an improvement in man-machine integration. The head movements are recognized using a virtual helmet with a tracking system. An infrared camera detects the position of infrared LEDs located on the operator's head and places the vision of the virtual character in accordance with the operator's vision. The avatar control is performed by a system that detects the movement of the hands, using infrared sensors, allowing the user to move it in the virtual environment. This interaction system was implemented in the virtual model of the Instituto de Engenharia Nuclear (IEN), which is located on the Ilha do Fundao - Rio de Janeiro - Brazil. This three-dimensional environment, in which avatars can move and interact according to the user's movements, gives a feeling of realism to the operator. The results show an interface that allows a higher degree of immersion of the operator in the virtual environment, promoting a more engaging and dynamic way of working. (author)

  20. Predicting subject-driven actions and sensory experience in a virtual world with relevance vector machine regression of fMRI data.

    Science.gov (United States)

    Valente, Giancarlo; De Martino, Federico; Esposito, Fabrizio; Goebel, Rainer; Formisano, Elia

    2011-05-15

    In this work we illustrate the approach of the Maastricht Brain Imaging Center to the PBAIC 2007 competition, where participants had to predict, based on fMRI measurements of brain activity, subject driven actions and sensory experience in a virtual world. After standard pre-processing (slice scan time correction, motion correction), we generated rating predictions based on linear Relevance Vector Machine (RVM) learning from all brain voxels. Spatial and temporal filtering of the time series was optimized rating by rating. For some of the ratings (e.g. Instructions, Hits, Faces, Velocity), linear RVM regression was accurate and very consistent within and between subjects. For other ratings (e.g. Arousal, Valence) results were less satisfactory. Our approach ranked overall second. To investigate the role of different brain regions in ratings prediction we generated predictive maps, i.e. maps of the weighted contribution of each voxel to the predicted rating. These maps generally included (but were not limited to) "specialized" regions which are consistent with results from conventional neuroimaging studies and known functional neuroanatomy. In conclusion, Sparse Bayesian Learning models, such as RVM, appear to be a valuable approach to the multivariate regression of fMRI time series. The implementation of the Automatic Relevance Determination criterion is particularly suitable and provides a good generalization, despite the limited number of samples which is typically available in fMRI. Predictive maps allow disclosing multi-voxel patterns of brain activity that predict perceptual and behavioral subjective experience. Copyright © 2010 Elsevier Inc. All rights reserved.
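
    The sparse Bayesian regression described above can be sketched with scikit-learn's ARDRegression, which implements the Automatic Relevance Determination prior underlying the RVM. The synthetic "voxel" time series and rating below are invented stand-ins for illustration, not the PBAIC data or the authors' pipeline.

```python
# Sketch of sparse Bayesian (ARD) regression in the spirit of the RVM
# approach above. The "voxel" data and rating are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
n_scans, n_voxels = 200, 50            # time points x voxels (toy sizes)
X = rng.standard_normal((n_scans, n_voxels))

# Only a few voxels actually carry the rating signal (sparsity assumption)
true_w = np.zeros(n_voxels)
true_w[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = X @ true_w + 0.1 * rng.standard_normal(n_scans)

model = ARDRegression().fit(X, y)

# A "predictive map" is just the learned weight per voxel
predictive_map = model.coef_
top_voxels = np.argsort(np.abs(predictive_map))[-3:]
```

    With the ARD prior, most weights shrink toward zero and the informative voxels dominate the map, which is what makes the weighted-contribution maps in the abstract interpretable.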

  1. Virtualization Technologies for the Business

    OpenAIRE

    Sabina POPESCU

    2011-01-01

    There is a new trend of change in today's IT industry: virtualization. In the datacenter, virtualization can occur on several levels, but the type that has created this trend is operating system, or server, virtualization. OS virtualization technologies come in two forms. First, there is a software component used to simulate a physical machine, which runs under the control of an operating system on the host equipment. The second is a hypervisor, ...

  2. Utilizing Machine Learning and Automated Performance Metrics to Evaluate Robot-Assisted Radical Prostatectomy Performance and Predict Outcomes.

    Science.gov (United States)

    Hung, Andrew J; Chen, Jian; Che, Zhengping; Nilanon, Tanachat; Jarc, Anthony; Titus, Micha; Oh, Paul J; Gill, Inderbir S; Liu, Yan

    2018-05-01

    Surgical performance is critical for clinical outcomes. We present a novel machine learning (ML) method of processing automated performance metrics (APMs) to evaluate surgical performance and predict clinical outcomes after robot-assisted radical prostatectomy (RARP). We trained three ML algorithms utilizing APMs directly from robot system data (training material) and hospital length of stay (LOS; training label) (≤2 days and >2 days) from 78 RARP cases, and selected the algorithm with the best performance. The selected algorithm categorized the cases as "Predicted as expected LOS (pExp-LOS)" and "Predicted as extended LOS (pExt-LOS)." We compared postoperative outcomes of the two groups (Kruskal-Wallis/Fisher's exact tests). The algorithm then predicted individual clinical outcomes, which we compared with actual outcomes (Spearman's correlation/Fisher's exact tests). Finally, we identified the five most relevant APMs adopted by the algorithm during prediction. The "Random Forest-50" (RF-50) algorithm had the best performance, reaching 87.2% accuracy in predicting LOS (73 cases as "pExp-LOS" and 5 cases as "pExt-LOS"). The "pExp-LOS" cases outperformed the "pExt-LOS" cases in surgery time (3.7 hours vs 4.6 hours, p = 0.007), LOS (2 days vs 4 days, p = 0.02), and Foley duration (9 days vs 14 days, p = 0.02). Patient outcomes predicted by the algorithm had a significant association with the "ground truth" in surgery time. The five most relevant APMs adopted by the algorithm in prediction were largely related to camera manipulation. To our knowledge, ours is the first study to show that APMs and ML algorithms may help assess surgical RARP performance and predict clinical outcomes. With further accrual of clinical data (oncologic and functional data), this process will become increasingly relevant and valuable in surgical assessment and training.
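
    A minimal sketch of the random-forest step, under invented assumptions: a classifier predicts binary length of stay (≤2 vs >2 days) from a feature matrix standing in for the APMs, and feature importances play the role of the "most relevant APMs." The feature values, label rule, and feature count are hypothetical; only the case count (78) comes from the abstract.

```python
# Hypothetical illustration of predicting binary LOS from performance
# metrics with a random forest. Data and label rule are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_cases = 78                               # as in the study
# e.g. camera-move counts, instrument travel, idle time... (invented here)
apm = rng.standard_normal((n_cases, 6))
# Synthetic label loosely driven by the first two features
extended_los = (apm[:, 0] + 0.5 * apm[:, 1]
                + 0.3 * rng.standard_normal(n_cases) > 0.8).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
scores = cross_val_score(clf, apm, extended_los, cv=5)  # held-out accuracy
clf.fit(apm, extended_los)
importances = clf.feature_importances_     # analogue of "most relevant APMs"
```

    Ranking `importances` is one common way to surface which metrics the forest leans on, analogous to the camera-manipulation finding reported above.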

  3. Navigation performance in virtual environments varies with fractal dimension of landscape.

    Science.gov (United States)

    Juliani, Arthur W; Bies, Alexander J; Boydston, Cooper R; Taylor, Richard P; Sereno, Margaret E

    2016-09-01

    Fractal geometry has been used to describe natural and built environments, but has yet to be studied in navigational research. In order to establish a relationship between the fractal dimension (D) of a natural environment and humans' ability to navigate such spaces, we conducted two experiments using virtual environments that simulate the fractal properties of nature. In Experiment 1, participants completed a goal-driven search task either with or without a map in landscapes that varied in D. In Experiment 2, participants completed a map-reading and location-judgment task in separate sets of fractal landscapes. In both experiments, task performance was highest at the low-to-mid range of D, which was previously reported as most preferred and discriminable in studies of fractal aesthetics and discrimination, respectively, supporting a theory of visual fluency. The applicability of these findings to architecture, urban planning, and the general design of constructed spaces is discussed.
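
    The fractal dimension D varied across these landscapes is commonly estimated by box counting. The sketch below is a generic estimator, not the authors' terrain-generation code; as a sanity check it is applied to a diagonal line, whose D should be close to 1.

```python
# Box-counting estimate of fractal dimension D of a binary image.
import numpy as np

def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate D by counting occupied s-by-s boxes at each scale.

    Assumes a square image whose side is divisible by every size.
    """
    counts = []
    n = img.shape[0]
    for s in sizes:
        # Partition the image into s x s blocks and count non-empty ones
        blocks = img.reshape(n // s, s, n // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    # Slope of log(count) vs log(1/size) gives the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

n = 256
line = np.zeros((n, n), dtype=bool)
idx = np.arange(n)
line[idx, idx] = True                      # a diagonal line, D ~ 1
d_line = box_count_dimension(line)
```

    A filled square estimates near D = 2 by the same procedure; fractal terrain falls between those extremes, which is the D range manipulated in the experiments.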

  4. A Case-Based Study with Radiologists Performing Diagnosis Tasks in Virtual Reality.

    Science.gov (United States)

    Venson, José Eduardo; Albiero Berni, Jean Carlo; Edmilson da Silva Maia, Carlos; Marques da Silva, Ana Maria; Cordeiro d'Ornellas, Marcos; Maciel, Anderson

    2017-01-01

    In radiology diagnosis, medical images are most often visualized slice by slice. At the same time, the visualization based on 3D volumetric rendering of the data is considered useful and has increased its field of application. In this work, we present a case-based study with 16 medical specialists to assess the diagnostic effectiveness of a Virtual Reality interface in fracture identification over 3D volumetric reconstructions. We developed a VR volume viewer compatible with both the Oculus Rift and handheld-based head mounted displays (HMDs). We then performed user experiments to validate the approach in a diagnosis environment. In addition, we assessed the subjects' perception of the 3D reconstruction quality, ease of interaction and ergonomics, and also the users' opinions on how VR applications can be useful in healthcare. Among other results, we have found a high level of effectiveness of the VR interface in identifying superficial fractures on head CTs.

  5. Virtual reality, ultrasound-guided liver biopsy simulator: development and performance discrimination

    Science.gov (United States)

    Johnson, S J; Hunt, C M; Woolnough, H M; Crawshaw, M; Kilkenny, C; Gould, D A; England, A; Sinha, A; Villard, P F

    2012-01-01

    Objectives The aim of this article was to identify and prospectively investigate simulated ultrasound-guided targeted liver biopsy performance metrics as differentiators between levels of expertise in interventional radiology. Methods Task analysis produced detailed procedural step documentation allowing identification of critical procedure steps and performance metrics for use in a virtual reality ultrasound-guided targeted liver biopsy procedure. Consultant (n=14; male=11, female=3) and trainee (n=26; male=19, female=7) scores on the performance metrics were compared. Ethical approval was granted by the Liverpool Research Ethics Committee (UK). Independent t-tests and analysis of variance (ANOVA) investigated differences between groups. Results Independent t-tests revealed significant differences between trainees and consultants on three performance metrics: targeting, p=0.018, t=−2.487 (−2.040 to −0.207); probe usage time, p = 0.040, t=2.132 (11.064 to 427.983); mean needle length in beam, p=0.029, t=−2.272 (−0.028 to −0.002). ANOVA reported significant differences across years of experience (0–1, 1–2, 3+ years) on seven performance metrics: no-go area touched, p=0.012; targeting, p=0.025; length of session, p=0.024; probe usage time, p=0.025; total needle distance moved, p=0.038; number of skin contacts, p<0.001; total time in no-go area, p=0.008. More experienced participants consistently received better performance scores on all 19 performance metrics. Conclusion It is possible to measure and monitor performance using simulation, with performance metrics providing feedback on skill level and differentiating levels of expertise. However, a transfer of training study is required. PMID:21304005

  6. Relationship between perceived competence and performance during real and virtual motor tasks by children with developmental coordination disorder.

    Science.gov (United States)

    Engel-Yeger, Batya; Sido, Rotem; Mimouni-Bloch, Aviva; Weiss, Patrice L

    2017-10-01

    (i) To compare children with DCD and typically developing participants via standard motor assessments, two interactive virtual games, measures of physical, social and cognitive self-competence and feedback while playing the virtual games and (ii) to examine the contribution of age and each motor assessment to predicting self-competence. Participants were 25 boys with DCD and 25 typically developing boys, aged 5-9 years. They completed the M-ABC-2, the Pictorial Scale of Perceived Competence, the 6-Minute Walk Test, and then played the two Kinect games and completed the Short Feedback Questionnaire for Children. Children with DCD showed lower physical competence and lower performance than the typical controls in all standard motor assessments. This performance significantly correlated with the children's achievements in part of the virtual games and with their self-perceived experience while performing within the virtual environments. Among the DCD group, the Kinect Running game significantly predicted physical and social competence. The significant correlations between the virtual games and standard motor assessments support the feasibility of using these games when evaluating children with DCD for the richer profile they provide. Implications for rehabilitation: Clinicians should consider the impacts of DCD on a child's self-competence and daily life. Technological rehabilitation and the use of VR games have the potential to improve the self-competence of children with DCD. By including VR games that simulate real life in the intervention for DCD, clinicians may raise the child's enjoyment, self-competence and involvement in therapy.

  7. Collaborative Systems – Finite State Machines

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2011-01-01

    Full Text Available In this paper, finite state machines are defined and formalized. Collaborative banking systems are presented and their correspondence with finite state machines is established. The paper highlights the role of finite state machines in complexity analysis and models operations on very large virtual databases as finite state machines. It builds the state diagram and presents the command and document transitions between the collaborative system states. The paper analyzes the data sets from the Collaborative Multicash Servicedesk application and performs a combined analysis in order to determine certain statistics. Indicators are obtained, such as the number of requests by category and the load degree of an agent in the collaborative system.
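
    The correspondence with finite state machines can be illustrated by a minimal transition table for a service-desk request; the states and events here are hypothetical, not taken from the Collaborative Multicash Servicedesk data.

```python
# A toy finite state machine for a service-desk request lifecycle.
# (state, event) -> next state; states and events are illustrative.
SERVICE_DESK_FSM = {
    ("open",        "assign"):   "in_progress",
    ("in_progress", "resolve"):  "resolved",
    ("in_progress", "escalate"): "escalated",
    ("escalated",   "resolve"):  "resolved",
    ("resolved",    "close"):    "closed",
}

def run(events, state="open"):
    """Fold a sequence of events through the transition table."""
    for event in events:
        try:
            state = SERVICE_DESK_FSM[(state, event)]
        except KeyError:
            raise ValueError(f"event {event!r} not allowed in state {state!r}")
    return state

final = run(["assign", "escalate", "resolve", "close"])
```

    Counting which transitions fire per request category is exactly the kind of per-state statistic (requests by category, agent load) the paper derives from the log data.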

  8. Performance analysis of parallel identical machines with a generalized shortest queue arrival mechanism

    NARCIS (Netherlands)

    van Houtum, Geert-Jan; Adan, I.J.B.F.; Wessels, J.; Zijm, Willem H.M.

    In this paper we study a production system consisting of a group of parallel machines producing multiple job types. Each machine has its own queue and it can process a restricted set of job types only. On arrival a job joins the shortest queue among all queues capable of serving that job. Under the

  9. Machine performance and its effects on experiments in JT-60U

    International Nuclear Information System (INIS)

    Kondo, I.

    1995-01-01

    The operational results of JT-60U were reviewed in light of the strategy made at the design stage. The operational plan for better confinement shifted from that of low q to high poloidal beta plasma configuration with higher q value according to the revealed machine properties. Some technical and operational skills helped bring about the recent results out of the machine. (orig.)

  10. Optimizing infrastructure for software testing using virtualization

    International Nuclear Information System (INIS)

    Khalid, O.; Shaikh, A.; Copy, B.

    2012-01-01

    Virtualization technology and cloud computing have brought a paradigm shift in the way we utilize, deploy and manage computer resources. They allow fast deployment of multiple operating systems as containers on physical machines, which can be either discarded after use or check-pointed for later re-deployment. At the European Organization for Nuclear Research (CERN), we have been using virtualization technology to quickly set up virtual machines for our developers with pre-configured software to enable them to quickly test/deploy a new version of a software patch for a given application. This paper both reports on the techniques that have been used to set up a private cloud on commodity hardware and presents the optimization techniques we used to remove deployment-specific performance bottlenecks. (authors)

  11. Virtual Distances Used for Optimization of Applicationsin the Pervasive Computing Domain

    DEFF Research Database (Denmark)

    Schougaard, Kari Rye

    2004-01-01

    This paper presents the notion of virtual distances -- communication proximity -- to describe the quality of a connection between two devices. We use virtual distances as the basis of optimizations performed by a virtual machine where a part of an application can be moved to another device if th...... advantage of temporarily available resources at the current local area network or through ad-hoc networks...

  12. A virtual reality dental simulator predicts performance in an operative dentistry manikin course.

    Science.gov (United States)

    Imber, S; Shapira, G; Gordon, M; Judes, H; Metzger, Z

    2003-11-01

    This study was designed to test the ability of a virtual reality dental simulator to predict the performance of students in a traditional operative dentistry manikin course. Twenty-six dental students were pre-tested on the simulator, prior to the course. They were briefly instructed and asked to prepare 12 class I cavities which were automatically graded by the simulator. The instructors in the manikin course that followed were unaware of the students' performances in the simulator pre-test. The scores achieved by each student in the last six simulator cavities were compared to their final comprehensive grades in the manikin course. Class standing of the students in the simulator pre-test positively correlated with their achievements in the manikin course with a correlation coefficient of 0.49 (P = 0.012). Eighty-nine percent of the students in the lower third of the class in the pre-test remained in the low performing half of the class in the manikin course. These results indicate that testing students in a dental simulator, prior to a manikin course, may be an efficient way to allow early identification of those who are likely to perform poorly. This in turn could enable early allocation of personal tutors to these students in order to improve their chances of success.

  13. Impact of examinees' stereopsis and near visual acuity on laparoscopic virtual reality performance.

    Science.gov (United States)

    Hoffmann, Henry; Ruiz-Schirinzi, Rebecca; Goldblum, David; Dell-Kuster, Salome; Oertli, Daniel; Hahnloser, Dieter; Rosenthal, Rachel

    2015-10-01

    Laparoscopic surgery represents specific challenges, such as the reduction of a three-dimensional anatomic environment to two dimensions. The aim of this study was to investigate the impact of the loss of the third dimension on laparoscopic virtual reality (VR) performance. We compared a group of examinees with impaired stereopsis (group 1, n = 28) to a group with accurate stereopsis (group 2, n = 29). The primary outcome was the difference between the mean total score (MTS) of all tasks taken together and the performance in task 3 (eye-hand coordination), which was a priori considered to be the most dependent on intact stereopsis. The MTS and performance in task 3 tended to be slightly, but not significantly, better in group 2 than in group 1 [MTS: -0.12 (95 % CI -0.32, 0.08; p = 0.234); task 3: -0.09 (95 % CI -0.29, 0.11; p = 0.385)]. The difference in MTS between the simulated impaired stereopsis of group 2 (induced by attaching an eye patch over the dominant eye in the 2nd run) and the first run of group 1 was not significant (MTS: p = 0.981; task 3: p = 0.527). We were unable to demonstrate an impact of impaired examinees' stereopsis on laparoscopic VR performance. Individuals with accurate stereopsis seem to be able to compensate for the loss of the third dimension in laparoscopic VR simulations.

  14. Inventory management performance in machine tool SMEs: What factors do influence them?

    Directory of Open Access Journals (Sweden)

    Rajeev Narayana Pillai

    2010-12-01

    Full Text Available Small and Medium Enterprises (SMEs) are one of the principal driving forces in the development of an economy because of their significant contribution in terms of number of enterprises, employment, output and exports in most developing as well as developed countries. But SMEs, particularly in developing countries like India, face constraints in key areas such as technology, finance, marketing and human resources. Moreover, these SMEs have been exposed to intense competition since the early 1990s because of globalization. However, globalization, the process of continuing integration of the countries of the world, has opened up new opportunities for SMEs of developing countries to cater to a wider international market, which brings out the need for these SMEs to develop competitiveness for their survival as well as growth. It is observed from the literature that pursuing appropriate inventory management (IM) practices is one of the ways, among others, of acquiring competitiveness, by effectively managing and minimizing inventory investment. Inventory management can therefore be one of the crucial determinants of competitiveness as well as operational performance of SMEs in inventory-intensive manufacturing industries. The key issue is whether Indian SMEs pursue better IM practices with an intention to reduce their inventory cost and enhance their competitiveness. If so, what are the IM practices pursued by these enterprises? What are the factors which influence the inventory cost and IM performance of enterprises? These questions have been addressed in this study with reference to machine tool SMEs located in the city of Bangalore, India.

  15. Research on Dynamic Models and Performances of Shield Tunnel Boring Machine Cutterhead Driving System

    Directory of Open Access Journals (Sweden)

    Xianhong Li

    2013-01-01

    Full Text Available A general nonlinear time-varying (NLTV) dynamic model and a linear time-varying (LTV) dynamic model are presented for the shield tunnel boring machine (TBM) cutterhead driving system. Different gear backlashes, mesh damping, and transmission errors are considered in the NLTV dynamic model. The corresponding multiple-input and multiple-output (MIMO) state space models are also presented. Through analyzing the linear dynamic model, the optimal reducer ratio (ORR) and optimal transmission ratio (OTR) are obtained for the shield TBM cutterhead driving system. The NLTV and LTV dynamic models are numerically simulated, and the effects of the physical parameters of the NLTV dynamic model are analyzed under various conditions. Physical parameters such as the load torque, gear backlash and transmission error, gear mesh stiffness and damping, pinion inertia and damping, large gear inertia and damping, and motor rotor inertia and damping are investigated in detail to analyze their effects on the dynamic response and performance of the shield TBM cutterhead driving system. Some preliminary approaches are proposed to improve the dynamic performance of the cutterhead driving system, and the dynamic models will provide a foundation for the shield TBM cutterhead driving system's fault diagnosis, motion control, and torque synchronous control.
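
    The full NLTV model is MIMO with backlash and time-varying mesh stiffness; the sketch below illustrates only its simplest linear ingredient, a two-inertia (motor rotor / cutterhead) torsional model with spring-damper coupling, integrated by explicit Euler. All parameter values are invented for illustration.

```python
# Two-inertia torsional model: motor torque Tm drives inertia J1, coupled
# through stiffness k and damping c to cutterhead inertia J2 under load TL.
J1, J2 = 0.5, 20.0        # motor-side / cutterhead-side inertias [kg m^2]
k, c = 5.0e3, 50.0        # coupling stiffness [N m/rad], damping [N m s/rad]
Tm = TL = 1.0e3           # motor torque balanced by constant load torque [N m]

dt, steps = 1e-4, 100_000
th1 = th2 = w1 = w2 = 0.0
for _ in range(steps):
    twist = th1 - th2      # shaft wind-up between motor and cutterhead
    slip = w1 - w2
    a1 = (Tm - k * twist - c * slip) / J1
    a2 = (k * twist + c * slip - TL) / J2
    w1 += a1 * dt
    w2 += a2 * dt
    th1 += w1 * dt
    th2 += w2 * dt

steady_twist = TL / k      # analytic equilibrium: k * twist = TL
```

    At equilibrium the shaft twist carries the full load torque; how stiffness, damping, and the inertias shape this transient is the kind of trade-off the paper's ORR/OTR analysis formalizes.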

  16. Development and operating performance of the refuelling machine of the Fugen

    International Nuclear Information System (INIS)

    Kaneko, Jun; Kasai, Yoshimitsu; Takeshita, Norito; Ohta, Takeo

    1985-01-01

    In the advanced thermal reactor ''Fugen'' power station, fuel replacement during operation is performed through the reactor bottom by the refuelling machine. Its design was started in 1967, and various tests were conducted up to 1975. Fugen's refuelling machine has been in use since the initial fuel loading in 1978 and has so far handled about 1300 fuel assemblies over seven refuellings. During Fugen operation, failures of the grab drive occurred due to crud and similar causes. At present, with such troubles all eliminated, the refuelling machine is in steady operation with proper maintenance. The results with Fugen's refuelling machine are reflected in the development of the refuelling machine for the demonstration ATR. (Mori, K.)

  17. Towards the Efficient Creation of Accurate and High-Performance Virtual Prototypes

    OpenAIRE

    Hufnagel, Simon

    2014-01-01

    As the complexity of embedded systems continuously rises, their development becomes more and more challenging. One technique to cope with this complexity is the employment of virtual prototypes. The virtual prototypes are intended to represent the embedded system’s properties on different levels of detail like register transfer level or transaction level. Virtual prototypes can be used for different tasks throughout the development process. They can act as executable specification, can be use...

  18. Effect of cutting parameters on sustainable machining performance of coated carbide tool in dry turning process of stainless steel 316

    Science.gov (United States)

    Bagaber, Salem A.; Yusoff, Ahmed Razlan

    2017-04-01

    The manufacturing industry aims to produce many products of high quality with relatively less cost and time. Different cutting parameters affect the machining performance of surface roughness, cutting force, and material removal rate. Nevertheless, few studies have reported on the effects of sustainable factors such as power consumed, cycle time during machining, and tool life in the dry turning of AISI 316. The present study aims to evaluate the machining performance of a coated carbide tool in the machining of hard steel AISI 316 under the dry turning process. The influence of the cutting parameters (cutting speed, feed rate, and depth of cut), each at five levels, is established by a central composite design. Highly significant parameters were determined by analysis of variance (ANOVA), and the main effects on power consumed, machining time, surface roughness, and tool wear were observed. Results showed that cutting speed was proportional to power consumption and tool wear but insignificant to surface roughness; feed rate most significantly affected surface roughness and power consumption, followed by depth of cut.

  19. AULA virtual reality test as an attention measure: convergent validity with Conners' Continuous Performance Test.

    Science.gov (United States)

    Díaz-Orueta, Unai; Garcia-López, Cristina; Crespo-Eguílaz, Nerea; Sánchez-Carpintero, Rocío; Climent, Gema; Narbona, Juan

    2014-01-01

    The majority of neuropsychological tests used to evaluate attention processes in children lack ecological validity. The AULA Nesplora (AULA) is a continuous performance test, developed in a virtual setting, very similar to a school classroom. The aim of the present study is to analyze the convergent validity between the AULA and the Continuous Performance Test (CPT) of Conners. The AULA and CPT were administered correlatively to 57 children, aged 6-16 years (26.3% female) with average cognitive ability (IQ mean = 100.56, SD = 10.38) who had a diagnosis of attention deficit/hyperactivity disorder (ADHD) according to DSM-IV-TR criteria. Spearman correlations analyses were conducted among the different variables. Significant correlations were observed between both tests in all the analyzed variables (omissions, commissions, reaction time, and variability of reaction time), including for those measures of the AULA based on different sensorial modalities, presentation of distractors, and task paradigms. Hence, convergent validity between both tests was confirmed. Moreover, the AULA showed differences by gender and correlation to Perceptual Reasoning and Working Memory indexes of the WISC-IV, supporting the relevance of IQ measures in the understanding of cognitive performance in ADHD. In addition, the AULA (but not Conners' CPT) was able to differentiate between ADHD children with and without pharmacological treatment for a wide range of measures related to inattention, impulsivity, processing speed, motor activity, and quality of attention focus. Additional measures and advantages of the AULA versus Conners' CPT are discussed.
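
    The convergent-validity computation reduces to Spearman rank correlations between paired measures from the two tests. The sketch below uses scipy.stats.spearmanr on synthetic stand-ins for the AULA and CPT omission scores; only the sample size (57) is taken from the abstract.

```python
# Spearman correlation between paired test scores (synthetic data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_children = 57                            # sample size reported above
aula_omissions = rng.gamma(2.0, 3.0, n_children)
# CPT omissions: monotonically related to the AULA scores plus noise
cpt_omissions = aula_omissions + rng.normal(0.0, 1.0, n_children)

rho, p_value = spearmanr(aula_omissions, cpt_omissions)
```

    Because Spearman ranks the scores first, it captures any monotone association without assuming normality, a common choice for skewed error counts like omissions and commissions.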

  20. Contributions of sex, testosterone, and androgen receptor CAG repeat number to virtual Morris water maze performance.

    Science.gov (United States)

    Nowak, Nicole T; Diamond, Michael P; Land, Susan J; Moffat, Scott D

    2014-03-01

    The possibility that androgens contribute to the male advantage typically found on measures of spatial cognition has been investigated using a variety of approaches. To date, evidence to support the notion that androgens affect spatial cognition in healthy young adults is somewhat equivocal. The present study sought to clarify the association between testosterone (T) and spatial performance by extending measurements of androgenicity to include both measures of circulating T as well as an androgen receptor-specific genetic marker. The aims of this study were to assess the contributions of sex, T, and androgen receptor CAG repeat number (CAGr) on virtual Morris water task (vMWT) performance in a group of healthy young men and women. The hypothesis that men would outperform women on vMWT outcomes was supported. Results indicate that CAGr may interact with T to impact navigation performance and suggest that consideration of androgen receptor sensitivity is an important consideration in evaluating hormone-behavior relationships. Copyright © 2013 Elsevier Ltd. All rights reserved.